Monocular Advantage for Face Perception Implicates Subcortical Mechanisms in Adult Humans
Gabay, Shai; Nestor, Adrian; Dundas, Eva; Behrmann, Marlene
2014-01-01
The ability to recognize faces accurately and rapidly is an evolutionarily adaptive process. Most studies examining the neural correlates of face perception in adult humans have focused on a distributed cortical network of face-selective regions. There is, however, robust evidence from phylogenetic and ontogenetic studies that implicates subcortical structures, and recently, some investigations in adult humans indicate subcortical correlates of face perception as well. The questions addressed here are whether low-level subcortical mechanisms for face perception (in the absence of changes in expression) are conserved in human adults, and if so, what is the nature of these subcortical representations. In a series of four experiments, we presented pairs of images to the same or different eyes. Participants’ performance demonstrated that subcortical mechanisms, indexed by monocular portions of the visual system, play a functional role in face perception. These mechanisms are sensitive to face-like configurations and afford a coarse representation of a face, comprised of primarily low spatial frequency information, which suffices for matching faces but not for more complex aspects of face perception such as sex differentiation. Importantly, these subcortical mechanisms are not implicated in the perception of other visual stimuli, such as cars or letter strings. These findings suggest a conservation of phylogenetically and ontogenetically lower-order systems in adult human face perception. The involvement of subcortical structures in face recognition provokes a reconsideration of current theories of face perception, which are reliant on cortical level processing, inasmuch as it bolsters the cross-species continuity of the biological system for face recognition. PMID:24236767
Visual adaptation of the perception of "life": animacy is a basic perceptual dimension of faces.
Koldewyn, Kami; Hanus, Patricia; Balas, Benjamin
2014-08-01
One critical component of understanding another's mind is the perception of "life" in a face. However, little is known about the cognitive and neural mechanisms underlying this perception of animacy. Here, using a visual adaptation paradigm, we ask whether face animacy is (1) a basic dimension of face perception and (2) supported by a common neural mechanism across distinct face categories defined by age and species. Observers rated the perceived animacy of adult human faces before and after adaptation to (1) adult faces, (2) child faces, and (3) dog faces. When testing the perception of animacy in human faces, we found significant adaptation to both adult and child faces, but not dog faces. We did, however, find significant adaptation when morphed dog images and dog adaptors were used. Thus, animacy perception in faces appears to be a basic dimension of face perception that is species specific but not constrained by age categories.
The many faces of research on face perception.
Little, Anthony C; Jones, Benedict C; DeBruine, Lisa M
2011-06-12
Face perception is fundamental to human social interaction. Many different types of important information are visible in faces and the processes and mechanisms involved in extracting this information are complex and can be highly specialized. The importance of faces has long been recognized by a wide range of scientists. Importantly, the range of perspectives and techniques that this breadth has brought to face perception research has, in recent years, led to many important advances in our understanding of face processing. The articles in this issue on face perception each review a particular arena of interest in face perception, variously focusing on (i) the social aspects of face perception (attraction, recognition and emotion), (ii) the neural mechanisms underlying face perception (using brain scanning, patient data, direct stimulation of the brain, visual adaptation and single-cell recording), and (iii) comparative aspects of face perception (comparing adult human abilities with those of chimpanzees and children). Here, we introduce the central themes of the issue and present an overview of the articles.
Is the Face-Perception System Human-Specific at Birth?
ERIC Educational Resources Information Center
Di Giorgio, Elisa; Leo, Irene; Pascalis, Olivier; Simion, Francesca
2012-01-01
The present study investigates the human-specificity of the orienting system that allows neonates to look preferentially at faces. Three experiments were carried out to determine whether the face-perception system that is present at birth is broad enough to include both human and nonhuman primate faces. The results demonstrate that the newborns…
Understanding face perception by means of human electrophysiology.
Rossion, Bruno
2014-06-01
Electrophysiological recordings on the human scalp provide a wealth of information about the temporal dynamics and nature of face perception at a global level of brain organization. The time window between 100 and 200 ms witnesses the transition between low-level and high-level vision, with an N170 component correlating with the conscious interpretation of a visual stimulus as a face. This face representation is rapidly refined as information accumulates during this time window, allowing the individualization of faces. To improve the sensitivity and objectivity of face perception measures, it is increasingly important to go beyond transient visual stimulation by recording electrophysiological responses at periodic frequency rates. This approach has recently provided face perception thresholds and the first objective signature of integration of facial parts in the human brain. Copyright © 2014 Elsevier Ltd. All rights reserved.
A Comparative View of Face Perception
Leopold, David A.; Rhodes, Gillian
2010-01-01
Face perception serves as the basis for much of human social exchange. Diverse information can be extracted about an individual from a single glance at their face, including their identity, emotional state, and direction of attention. Neuropsychological and fMRI experiments reveal a complex network of specialized areas in the human brain supporting these face-reading skills. Here we consider the evolutionary roots of human face perception by exploring the manner in which different animal species view and respond to faces. We focus on behavioral experiments collected from both primates and non-primates, assessing the types of information that animals are able to extract from the faces of their conspecifics, human experimenters, and natural predators. These experiments reveal that faces are an important category of visual stimuli for animals in all major vertebrate taxa, possibly reflecting the early emergence of neural specialization for faces in vertebrate evolution. At the same time, some aspects of facial perception are only evident in primates and a few other social mammals, and may therefore have evolved to suit the needs of complex social communication. Since the human brain likely utilizes both primitive and recently evolved neural specializations for the processing of faces, comparative studies may hold the key to understanding how these parallel circuits emerged during human evolution. PMID:20695655
Prakash, Akanksha; Rogers, Wendy A
2015-04-01
Ample research in social psychology has highlighted the importance of the human face in human-human interactions. However, there is a less clear understanding of how a humanoid robot's face is perceived by humans. One of the primary goals of this study was to investigate how initial perceptions of robots are influenced by the extent of human-likeness of the robot's face, particularly when the robot is intended to provide assistance with tasks in the home that are traditionally carried out by humans. Moreover, although robots have the potential to help both younger and older adults, there is limited knowledge of whether the two age groups' perceptions differ. In this study, younger ( N = 32) and older adults ( N = 32) imagined interacting with a robot in four different task contexts and rated robot faces of varying levels of human-likeness. Participants were also interviewed to assess their reasons for particular preferences. This multi-method approach identified patterns of perceptions across different appearances as well as reasons that influence the formation of such perceptions. Overall, the results indicated that people's perceptions of robot faces vary as a function of robot human-likeness. People tended to over-generalize their understanding of humans to build expectations about a human-looking robot's behavior and capabilities. Additionally, preferences for humanoid robots depended on the task although younger and older adults differed in their preferences for certain humanoid appearances. The results of this study have implications both for advancing theoretical understanding of robot perceptions and for creating and applying guidelines for the design of robots.
Differences between perception of human faces and body shapes: evidence from the composite illusion.
Soria Bauser, Denise A; Suchan, Boris; Daum, Irene
2011-01-01
The present study aimed to investigate whether human body forms--like human faces--undergo holistic processing. Evidence for holistic face processing comes from the face composite effect: two identical top halves of a face are perceived as being different if they are presented with different bottom parts. This effect disappears if both bottom halves are shifted laterally (misaligned) or if the stimulus is rotated by 180°. We investigated whether comparable composite effects are observed for human faces and human body forms. Matching of upright faces was more accurate and faster for misaligned compared to aligned presentations. By contrast, there were no processing differences between aligned and misaligned bodies. An inversion effect emerged, with better recognition performance for upright compared to inverted bodies but not faces. The present findings provide evidence for the assumption that holistic processing--investigated with the composite illusion--is not involved in the perception of human body forms. Copyright © 2010 Elsevier Ltd. All rights reserved.
Extracted facial feature of racial closely related faces
NASA Astrophysics Data System (ADS)
Liewchavalit, Chalothorn; Akiba, Masakazu; Kanno, Tsuneo; Nagao, Tomoharu
2010-02-01
Human faces contain a great deal of demographic information, such as identity, gender, age, race and emotion. Human beings can perceive these pieces of information and use them as important cues in social interaction with other people. Race perception is considered one of the most delicate and sensitive aspects of face perception. There is much research concerning image-based race recognition, but most of it focuses on major race groups such as Caucasoid, Negroid and Mongoloid. This paper focuses on how people classify the race of racially closely related groups. As a sample of a racially closely related group, we chose Japanese and Thai faces to represent the difference between Northern and Southern Mongoloid. Three psychological experiments were performed to study the strategies of face perception in race classification. The results of the psychological experiments suggest that race perception is an ability that can be learned. Eyes and eyebrows are the main points of attention, and the eyes are a significant factor in race perception. Principal Component Analysis (PCA) was performed to extract facial features of the sample race groups. Extracted race features of texture and shape were used to synthesize faces. The results suggest that racial features rely on detailed texture rather than shape. This research is an indispensable fundamental study of race perception, which is essential for establishing a human-like race recognition system.
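The abstract above describes extracting PCA features from face images and resynthesizing faces from them. The following is a minimal eigenface-style sketch of that general idea, not the authors' implementation; the image shape, component count, and file name are illustrative assumptions.

```python
# Minimal eigenface-style sketch: PCA features from aligned grayscale face images,
# plus back-projection to synthesize a face from chosen coefficients.
import numpy as np
from sklearn.decomposition import PCA

def fit_face_pca(images, n_components=20):
    """images: array of shape (n_faces, height, width) of roughly aligned faces."""
    n, h, w = images.shape
    X = images.reshape(n, h * w).astype(float)
    pca = PCA(n_components=n_components)
    coeffs = pca.fit_transform(X)            # per-face feature coefficients
    return pca, coeffs, (h, w)

def synthesize_face(pca, coeffs, shape):
    """Back-project PCA coefficients to pixel space to reconstruct/synthesize a face."""
    face = pca.inverse_transform(np.atleast_2d(coeffs))[0]
    return face.reshape(shape)

# Hypothetical usage: synthesize a group "prototype" from the mean coefficients.
# images_group = np.load("group_faces.npy")   # placeholder file name
# pca, coeffs, shape = fit_face_pca(images_group)
# prototype = synthesize_face(pca, coeffs.mean(axis=0), shape)
```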
Koda, Hiroki; Sato, Anna; Kato, Akemi
2013-09-01
Humans innately perceive infantile features as cute. The ethologist Konrad Lorenz proposed that the infantile features of mammals and birds, known as the baby schema (kindchenschema), motivate caretaking behaviour. As biologically relevant stimuli, newborns are likely to be processed specially in terms of visual attention, perception, and cognition. Recent demonstrations with human participants have shown visual attentional prioritisation of newborn faces (i.e., newborn faces capture visual attention). Although characteristics equivalent to those found in the faces of human infants are found in nonhuman primates, attentional capture by newborn faces has not been tested in nonhuman primates. We examined whether conspecific newborn faces captured the visual attention of two Japanese monkeys using a target-detection task based on dot-probe tasks commonly used in human visual attention studies. Although visual cues enhanced target detection in the subject monkeys, our results, unlike those for humans, showed no evidence of attentional prioritisation of newborn faces by monkeys. Our demonstrations showed the validity of the dot-probe task for visual attention studies in monkeys and proposed a novel approach to bridge the gap between human and nonhuman primate social cognition research. This suggests that attentional capture by newborn faces is not common to macaques, but it is unclear whether nursing experience influences their perception and recognition of infantile appraisal stimuli. Additional comparative studies are needed to reveal the evolutionary origins of baby-schema perception and recognition. Copyright © 2013 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
LaMontagne, Ramona Marie
2012-01-01
This qualitative study examined the perceptions of human resource managers who had faced ethical dilemmas in the workplace, to gain an understanding of how they felt their life experiences shaped their values in making ethical decisions. The experiences of ten human resource managers who believed they chose a right course of action when faced with…
Frässle, Stefan; Paulus, Frieder Michel; Krach, Sören; Schweinberger, Stefan Robert; Stephan, Klaas Enno; Jansen, Andreas
2016-01-01
Perceiving human faces constitutes a fundamental ability of the human mind, integrating a wealth of information essential for social interactions in everyday life. Neuroimaging studies have unveiled a distributed neural network consisting of multiple brain regions in both hemispheres. Whereas the individual regions in the face perception network and the right-hemispheric dominance for face processing have been subject to intensive research, the functional integration among these regions and hemispheres has received considerably less attention. Using dynamic causal modeling (DCM) for fMRI, we analyzed the effective connectivity between the core regions in the face perception network of healthy humans to unveil the mechanisms underlying both intra- and interhemispheric integration. Our results suggest that the right-hemispheric lateralization of the network is due to an asymmetric face-specific interhemispheric recruitment at an early processing stage - that is, at the level of the occipital face area (OFA) but not the fusiform face area (FFA). As a structural correlate, we found that OFA gray matter volume was correlated with this asymmetric interhemispheric recruitment. Furthermore, exploratory analyses revealed that interhemispheric connection asymmetries were correlated with the strength of pupil constriction in response to faces, a measure with potential sensitivity to holistic (as opposed to feature-based) processing of faces. Overall, our findings thus provide a mechanistic description for lateralized processes in the core face perception network, point to a decisive role of interhemispheric integration at an early stage of face processing among bilateral OFA, and tentatively indicate a relation to individual variability in processing strategies for faces. These findings provide a promising avenue for systematic investigations of the potential role of interhemispheric integration in future studies. Copyright © 2015 Elsevier Inc. All rights reserved.
The non-linear development of the right hemispheric specialization for human face perception.
Lochy, Aliette; de Heering, Adélaïde; Rossion, Bruno
2017-06-24
The developmental origins of human adults' right hemispheric specialization for face perception remain unclear. On the one hand, infant studies have shown a right hemispheric advantage for face perception. On the other hand, it has been proposed that the adult right hemispheric lateralization for face perception slowly emerges during childhood due to reading acquisition, which increases left lateralized posterior responses to competing written material (e.g., visual letters and words). Since the methodological approaches used with infants and children typically differ when their face perception capabilities are explored, resolving this issue has been difficult. Here we tested 5-year-old preschoolers varying in their level of visual letter knowledge with the same fast periodic visual stimulation (FPVS) paradigm that leads to strongly right lateralized electrophysiological occipito-temporal face-selective responses in 4- to 6-month-old infants (de Heering and Rossion, 2015). Children's face-selective response was quantitatively larger and differed in scalp topography from infants', but did not differ across hemispheres. There was a small positive correlation between preschoolers' letter knowledge and a non-normalized index of right hemispheric specialization for faces. These observations show that previous discrepant results in the literature reflect a genuine nonlinear development of the neural processes underlying face perception and are not merely due to methodological differences across age groups. We discuss several factors that could contribute to the adult right hemispheric lateralization for faces, such as myelination of the corpus callosum and reading acquisition. Our findings point to the value of FPVS coupled with electroencephalography for assessing specialized face perception processes throughout development with the same methodology. Copyright © 2017 Elsevier Ltd. All rights reserved.
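In FPVS paradigms of this kind, face-selective responses are typically quantified in the EEG frequency spectrum at the face presentation rate. As a rough illustration (the sampling rate, the 1.2 Hz target frequency, and the baseline-correction window below are assumptions, not the study's exact parameters), the tagged response can be read out as the FFT amplitude at the target frequency minus the mean amplitude of neighboring bins:

```python
import numpy as np

def tagged_response(eeg, sfreq, target_hz, n_neighbors=10):
    """eeg: 1-D channel time series; returns the amplitude at target_hz
    minus the mean amplitude of surrounding frequency bins."""
    n = len(eeg)
    amplitude = np.abs(np.fft.rfft(eeg)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / sfreq)
    idx = int(np.argmin(np.abs(freqs - target_hz)))
    # neighboring bins, skipping the two bins immediately adjacent to the target
    neighbors = np.r_[idx - n_neighbors:idx - 1, idx + 2:idx + n_neighbors + 1]
    return amplitude[idx] - amplitude[neighbors].mean()

# Hypothetical usage for a face-selective response tagged at 1.2 Hz:
# resp = tagged_response(channel_data, sfreq=512, target_hz=1.2)
```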
Toward a Social Psychophysics of Face Communication.
Jack, Rachael E; Schyns, Philippe G
2017-01-03
As a highly social species, humans are equipped with a powerful tool for social communication-the face. Although seemingly simple, the human face can elicit multiple social perceptions due to the rich variations of its movements, morphology, and complexion. Consequently, identifying precisely what face information elicits different social perceptions is a complex empirical challenge that has largely remained beyond the reach of traditional methods. In the past decade, the emerging field of social psychophysics has developed new methods to address this challenge, with the potential to transfer psychophysical laws of social perception to the digital economy via avatars and social robots. At this exciting juncture, it is timely to review these new methodological developments. In this article, we introduce and review the foundational methodological developments of social psychophysics, present work done in the past decade that has advanced understanding of the face as a tool for social communication, and discuss the major challenges that lie ahead.
Kreifelts, Benjamin; Ethofer, Thomas; Huberle, Elisabeth; Grodd, Wolfgang; Wildgruber, Dirk
2010-07-01
Multimodal integration of nonverbal social signals is essential for successful social interaction. Previous studies have implicated the posterior superior temporal sulcus (pSTS) in the perception of social signals such as nonverbal emotional signals as well as in social cognitive functions like mentalizing/theory of mind. In the present study, we evaluated the relationships between trait emotional intelligence (EI) and fMRI activation patterns in individual subjects during the multimodal perception of nonverbal emotional signals from voice and face. Trait EI was linked to hemodynamic responses in the right pSTS, an area which also exhibits a distinct sensitivity to human voices and faces. Within all other regions known to subserve the perceptual audiovisual integration of human social signals (i.e., amygdala, fusiform gyrus, thalamus), no such linked responses were observed. This functional difference in the network for the audiovisual perception of human social signals indicates a specific contribution of the pSTS as a possible interface between the perception of social information and social cognition. (c) 2009 Wiley-Liss, Inc.
Face Pareidolia in the Rhesus Monkey.
Taubert, Jessica; Wardle, Susan G; Flessert, Molly; Leopold, David A; Ungerleider, Leslie G
2017-08-21
Face perception in humans and nonhuman primates is rapid and accurate [1-4]. In the human brain, a network of visual-processing regions is specialized for faces [5-7]. Although face processing is a priority of the primate visual system, face detection is not infallible. Face pareidolia is the compelling illusion of perceiving facial features on inanimate objects, such as the illusory face on the surface of the moon. Although face pareidolia is commonly experienced by humans, its presence in other species is unknown. Here we provide evidence for face pareidolia in a species known to possess a complex face-processing system [8-10]: the rhesus monkey (Macaca mulatta). In a visual preference task [11, 12], monkeys looked longer at photographs of objects that elicited face pareidolia in human observers than at photographs of similar objects that did not elicit illusory faces. Examination of eye movements revealed that monkeys fixated the illusory internal facial features in a pattern consistent with how they view photographs of faces [13]. Although the specialized response to faces observed in humans [1, 3, 5-7, 14] is often argued to be continuous across primates [4, 15], it was previously unclear whether face pareidolia arose from a uniquely human capacity. For example, pareidolia could be a product of the human aptitude for perceptual abstraction or result from frequent exposure to cartoons and illustrations that anthropomorphize inanimate objects. Instead, our results indicate that the perception of illusory facial features on inanimate objects is driven by a broadly tuned face-detection mechanism that we share with other species. Published by Elsevier Ltd.
Van Belle, Goedele; Busigny, Thomas; Lefèvre, Philippe; Joubert, Sven; Felician, Olivier; Gentile, Francesco; Rossion, Bruno
2011-09-01
Gaze-contingency is a method traditionally used to investigate the perceptual span in reading by selectively revealing/masking a portion of the visual field in real time. Introducing this approach in face perception research showed that the performance pattern of a brain-damaged patient with acquired prosopagnosia (PS) in a face matching task was reversed, as compared to normal observers: the patient showed almost no further decrease of performance when only one facial part (eye, mouth, nose, etc.) was available at a time (foveal window condition, forcing part-based analysis), but a very large impairment when the fixated part was selectively masked (mask condition, promoting holistic perception) (Van Belle, De Graef, Verfaillie, Busigny, & Rossion, 2010a; Van Belle, De Graef, Verfaillie, Rossion, & Lefèvre, 2010b). Here we tested the same manipulation in a recently reported case of pure prosopagnosia (GG) with unilateral right hemisphere damage (Busigny, Joubert, Felician, Ceccaldi, & Rossion, 2010). Contrary to normal observers, GG was also significantly more impaired with a mask than with a window, demonstrating impairment with holistic face perception. Together with our previous study, these observations support a generalized account of acquired prosopagnosia as a critical impairment of holistic (individual) face perception, implying that this function is a key element of normal human face recognition. Furthermore, the similar behavioral pattern of the two patients despite different lesion localizations supports a distributed network view of the neural face processing structures, suggesting that the key function of human face processing, namely holistic perception of individual faces, requires the activity of several brain areas of the right hemisphere and their mutual connectivity. Copyright © 2011 Elsevier Ltd. All rights reserved.
Crossing the “Uncanny Valley”: adaptation to cartoon faces can influence perception of human faces
Chen, Haiwen; Russell, Richard; Nakayama, Ken; Livingstone, Margaret
2013-01-01
Adaptation can shift what individuals identify to be a prototypical or attractive face. Past work suggests that low-level shape adaptation can affect high-level face processing but is position dependent. Adaptation to distorted images of faces can also affect face processing but only within sub-categories of faces, such as gender, age, and race/ethnicity. This study assesses whether there is a representation that is specific to faces (as opposed to all shapes) but general to all kinds of faces (as opposed to subcategories) by testing whether adaptation to one type of face can affect perception of another. Participants were shown cartoon videos containing faces with abnormally large eyes. Using animated videos allowed us to simulate naturalistic exposure and avoid positional shape adaptation. Results suggest that adaptation to cartoon faces with large eyes shifts preferences for human faces toward larger eyes, supporting the existence of general face representations. PMID:20465173
The Functional Neuroanatomy of Human Face Perception.
Grill-Spector, Kalanit; Weiner, Kevin S; Kay, Kendrick; Gomez, Jesse
2017-09-15
Face perception is critical for normal social functioning and is mediated by a network of regions in the ventral visual stream. In this review, we describe recent neuroimaging findings regarding the macro- and microscopic anatomical features of the ventral face network, the characteristics of white matter connections, and basic computations performed by population receptive fields within face-selective regions composing this network. We emphasize the importance of the neural tissue properties and white matter connections of each region, as these anatomical properties may be tightly linked to the functional characteristics of the ventral face network. We end by considering how empirical investigations of the neural architecture of the face network may inform the development of computational models and shed light on how computations in the face network enable efficient face perception.
Rutishauser, Ueli; Mamelak, Adam N.; Adolphs, Ralph
2015-01-01
The amygdala’s role in emotion and social perception has been intensively investigated primarily through studies using fMRI. Recently, this topic has been examined using single-unit recordings in both humans and monkeys, with a focus on face processing. The findings provide novel insights, including several surprises: amygdala neurons have very long response latencies, show highly nonlinear responses to whole faces, and can be exquisitely selective for very specific parts of faces such as the eyes. In humans, the responses of amygdala neurons correlate with internal states evoked by faces, rather than with their objective features. Current and future studies extend the investigations to psychiatric illnesses such as autism, in which atypical face processing is a hallmark of social dysfunction. PMID:25847686
Holistic processing of human body postures: evidence from the composite effect.
Willems, Sam; Vrancken, Leia; Germeys, Filip; Verfaillie, Karl
2014-01-01
The perception of socially relevant stimuli (e.g., faces and bodies) has received considerable attention in the vision science community. It is now widely accepted that human faces are processed holistically and not only analytically. One observation that has been taken as evidence for holistic face processing is the face composite effect: two identical top halves of a face tend to be perceived as being different when combined with different bottom halves. This supports the hypothesis that face processing proceeds holistically. Indeed, the interference effect disappears when the two face parts are misaligned (blocking holistic perception). In the present study, we investigated whether there is also a composite effect for the perception of body postures: are two identical body halves perceived as being in different poses when the irrelevant body halves differ from each other? Both a horizontal (i.e., top-bottom body halves; Experiment 1) and a vertical composite effect (i.e., left-right body halves; Experiment 2) were examined by means of a delayed matching-to-sample task. Results of both experiments indicate the existence of a body posture composite effect. This provides evidence for the hypothesis that body postures, as faces, are processed holistically.
Human gamma band activity and perception of a gestalt.
Keil, A; Müller, M M; Ray, W J; Gruber, T; Elbert, T
1999-08-15
Neuronal oscillations in the gamma band (above 30 Hz) have been proposed to be a possible mechanism for the visual representation of objects. The present study examined the topography of gamma band spectral power and event-related potentials in human EEG associated with perceptual switching effected by rotating ambiguous (bistable) figures. Eleven healthy human subjects were presented two rotating bistable figures: first, a face figure that allowed perception of a sad or happy face depending on orientation and therefore caused a perceptual switch at defined points in time when rotated, and, second, a modified version of the Rubin vase, allowing perception as a vase or two faces whereby the switch was orientation-independent. Nonrotating figures served as further control stimuli. EEG was recorded using a high-density array with 128 electrodes. We found a negative event-related potential associated with the switching of the sad-happy figure, which was most pronounced at central prefrontal sites. Gamma band activity (GBA) was enhanced at occipital electrode sites in the rotating bistable figures compared with the standing stimuli, being maximal at vertical stimulus orientations that allowed an easy recognition of the sad and happy face or the vase-faces, respectively. At anterior electrodes, GBA showed a complementary pattern, being maximal when stimuli were oriented horizontally. The findings support the notion that formation of a visual percept may involve oscillations in a distributed neuronal assembly.
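Gamma band activity of the kind reported above is usually quantified as spectral power in a band above roughly 30 Hz. The sketch below shows one common way to do this, assuming Welch's method and an illustrative 30-80 Hz band; the study's exact band, electrodes, and windowing are not specified here.

```python
import numpy as np
from scipy.signal import welch

def gamma_power(signal, sfreq, band=(30.0, 80.0)):
    """Integrated spectral power of one channel within the given frequency band."""
    freqs, psd = welch(signal, fs=sfreq, nperseg=int(sfreq))  # ~1 s windows
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])

# Hypothetical comparison between conditions:
# gbp_rotating = gamma_power(epoch_rotating, sfreq=500)
# gbp_static   = gamma_power(epoch_static, sfreq=500)
```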
ERIC Educational Resources Information Center
Rossion, Bruno; Hanseeuw, Bernard; Dricot, Laurence
2012-01-01
A number of human brain areas showing a larger response to faces than to objects from different categories, or to scrambled faces, have been identified in neuroimaging studies. Depending on the statistical criteria used, the set of areas can be overextended or minimized, both at the local (size of areas) and global (number of areas) levels. Here…
Rangarajan, Vinitha; Hermes, Dora; Foster, Brett L.; Weiner, Kevin S.; Jacques, Corentin; Grill-Spector, Kalanit
2014-01-01
Neuroimaging and electrophysiological studies across species have confirmed bilateral face-selective responses in the ventral temporal cortex (VTC) and prosopagnosia is reported in patients with lesions in the VTC including the fusiform gyrus (FG). As imaging and electrophysiological studies provide correlative evidence, and brain lesions often comprise both white and gray matter structures beyond the FG, we designed the current study to explore the link between face-related electrophysiological responses in the FG and the causal effects of electrical stimulation of the left or right FG in face perception. We used a combination of electrocorticography (ECoG) and electrical brain stimulation (EBS) in 10 human subjects implanted with intracranial electrodes in either the left (5 participants, 30 FG sites) or right (5 participants, 26 FG sites) hemispheres. We identified FG sites with face-selective ECoG responses, and recorded perceptual reports during EBS of these sites. In line with existing literature, face-selective ECoG responses were present in both left and right FG sites. However, when the same sites were stimulated, we observed a striking difference between hemispheres. Only EBS of the right FG caused changes in the conscious perception of faces, whereas EBS of strongly face-selective regions in the left FG produced non-face-related visual changes, such as phosphenes. This study examines the relationship between correlative versus causal nature of ECoG and EBS, respectively, and provides important insight into the differential roles of the right versus left FG in conscious face perception. PMID:25232118
Ranking fluctuating asymmetry in a dot figure and the significant impact of imagining a face.
Neby, Magne; Folstad, Ivar
2013-01-01
Fluctuating asymmetry and averageness are correlated with our perception of beauty in human faces. Yet whether deviations from centrality in the positioning of the eyes, the nose, and the mouth have different effects on our perception of asymmetry in a holistic human face is still uncertain. In this study we aimed to test the relative effect of decentralising the horizontal position of three sets of paired dots representing eyes, nostrils, or mouth from the vertical midline of ambiguous dot figures vaguely resembling a face. The figures were ranked according to perceived asymmetry by human observers. When associating the figures with non-facial objects (eg a butterfly), none of the figures' rank distributions differed from each other. However, when observers imagined the figures to represent a human face, the figure with the decentralised pair of dots representing the nostrils was ranked as significantly more asymmetric than the other figures. This result indicates that how the brain handles information about facial asymmetry and averageness may depend heavily on the centrality of the nasal region.
Cross-Category Adaptation: Objects Produce Gender Adaptation in the Perception of Faces
Javadi, Amir Homayoun; Wee, Natalie
2012-01-01
Adaptation aftereffects have been found for low-level visual features such as colour, motion and shape perception, as well as higher-level features such as gender, race and identity in domains such as faces and biological motion. It is not yet clear if adaptation effects in humans extend beyond this set of higher order features. The aim of this study was to investigate whether objects highly associated with one gender, e.g. high heels for females or electric shavers for males can modulate gender perception of a face. In two separate experiments, we adapted subjects to a series of objects highly associated with one gender and subsequently asked participants to judge the gender of an ambiguous face. Results showed that participants are more likely to perceive an ambiguous face as male after being exposed to objects highly associated to females and vice versa. A gender adaptation aftereffect was obtained despite the adaptor and test stimuli being from different global categories (objects and faces respectively). These findings show that our perception of gender from faces is highly affected by our environment and recent experience. This suggests two possible mechanisms: (a) that perception of the gender associated with an object shares at least some brain areas with those responsible for gender perception of faces and (b) adaptation to gender, which is a high-level concept, can modulate brain areas that are involved in facial gender perception through top-down processes. PMID:23049942
Kujala, Miiamaaria V; Somppi, Sanni; Jokela, Markus; Vainio, Outi; Parkkonen, Lauri
2017-01-01
Facial expressions are important for humans in communicating emotions to conspecifics and enhancing interpersonal understanding. Many muscles producing facial expressions in humans are also found in domestic dogs, but little is known about how humans perceive dog facial expressions, and which psychological factors influence people's perceptions. Here, we asked 34 observers to rate the valence, arousal, and the six basic emotions (happiness, sadness, surprise, disgust, fear, and anger/aggressiveness) from images of human and dog faces with Pleasant, Neutral and Threatening expressions. We investigated how the subjects' personality (the Big Five Inventory), empathy (Interpersonal Reactivity Index) and experience of dog behavior affect the ratings of dog and human faces. Ratings of both species followed similar general patterns: human subjects classified dog facial expressions from pleasant to threatening very similarly to human facial expressions. Subjects with higher emotional empathy evaluated Threatening faces of both species as more negative in valence and higher in anger/aggressiveness. More empathetic subjects also rated the happiness of Pleasant humans, but not dogs, higher, and they were quicker in their valence judgments of Pleasant human, Threatening human and Threatening dog faces. Experience with dogs correlated positively with ratings of Pleasant and Neutral dog faces. Personality also had a minor effect on the ratings of Pleasant and Neutral faces in both species. The results imply that humans perceive human and dog facial expressions in a similar manner, and that the perception of both species is influenced by psychological factors of the evaluators. Empathy, in particular, affects both the speed and intensity of rating dogs' emotional facial expressions.
ERIC Educational Resources Information Center
Gross, Thomas F.
2004-01-01
Children who experienced autism, mental retardation, and language disorders; and, children in a clinical control group were shown photographs of human female, orangutan, and canine (boxer) faces expressing happiness, sadness, anger, surprise and a neutral expression. For each species of faces, children were asked to identify the happy, sad, angry,…
Ramírez, Fernando M
2018-05-01
Viewpoint-invariant face recognition is thought to be subserved by a distributed network of occipitotemporal face-selective areas that, except for the human anterior temporal lobe, have been shown to also contain face-orientation information. This review begins by highlighting the importance of bilateral symmetry for viewpoint-invariant recognition and face-orientation perception. Then, monkey electrophysiological evidence is surveyed describing key tuning properties of face-selective neurons-including neurons bimodally tuned to mirror-symmetric face-views-followed by studies combining functional magnetic resonance imaging (fMRI) and multivariate pattern analyses to probe the representation of face-orientation and identity information in humans. Altogether, neuroimaging studies suggest that face-identity is gradually disentangled from face-orientation information along the ventral visual processing stream. The evidence seems to diverge, however, regarding the prevalent form of tuning of neural populations in human face-selective areas. In this context, caveats possibly leading to erroneous inferences regarding mirror-symmetric coding are exposed, including the need to distinguish angular from Euclidean distances when interpreting multivariate pattern analyses. On this basis, this review argues that evidence from the fusiform face area is best explained by a view-sensitive code reflecting head angular disparity, consistent with a role of this area in face-orientation perception. Finally, the importance is stressed of explicit models relating neural properties to large-scale signals.
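The caveat above about angular versus Euclidean distances is easy to demonstrate numerically. In the toy example below (arbitrary vectors, not data from the review), two multivoxel patterns that differ only by a multiplicative gain have zero angular (cosine) distance yet a large Euclidean distance, so the two metrics can support different inferences about pattern similarity:

```python
import numpy as np

def angular_distance(a, b):
    """Angle (radians) between two pattern vectors; insensitive to overall gain."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def euclidean_distance(a, b):
    return np.linalg.norm(a - b)

pattern_1 = np.array([1.0, 2.0, 0.5])   # arbitrary example "voxel pattern"
pattern_2 = 3.0 * pattern_1             # same pattern at higher response gain

print(angular_distance(pattern_1, pattern_2))    # 0.0   -> identical by angle
print(euclidean_distance(pattern_1, pattern_2))  # ~4.58 -> far apart by Euclidean metric
```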
Quest Hierarchy for Hyperspectral Face Recognition
2011-03-01
Among the numerous face recognition algorithms available, several very good literature surveys exist, including Abate [29], Samal [110], Kong [18], and Zou […] Perception, Japan (January 1994). [110] Samal, Ashok, and P. Iyengar, Automatic Recognition and Analysis of Human Faces and Facial Expressions: A Survey.
Minami, T; Goto, K; Kitazaki, M; Nakauchi, S
2011-03-10
In humans, face configuration, contour and color may affect face perception, which is important for social interactions. This study aimed to determine the effect of color information on face perception by measuring event-related potentials (ERPs) during the presentation of natural- and bluish-colored faces. Our results demonstrated that the amplitude of the N170 event-related potential, which correlates strongly with face processing, was higher in response to a bluish-colored face than to a natural-colored face. However, gamma-band activity was insensitive to the deviation from a natural face color. These results indicated that color information affects the N170 associated with a face detection mechanism, which suggests that face color is important for face detection. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.
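As a sketch of how an N170 effect like the one above is commonly quantified (the 130-200 ms window, the single occipito-temporal channel, and the data layout are assumptions rather than this study's parameters), the component amplitude can be taken as the most negative value of the condition-average ERP within a post-stimulus window:

```python
import numpy as np

def n170_amplitude(epochs, times, window=(0.13, 0.20)):
    """epochs: (n_trials, n_samples) for one occipito-temporal channel;
    times: sample times in seconds with 0 at stimulus onset."""
    erp = epochs.mean(axis=0)                        # condition-average ERP
    mask = (times >= window[0]) & (times <= window[1])
    return erp[mask].min()                           # most negative deflection = N170

# Hypothetical usage:
# n170_natural = n170_amplitude(natural_face_epochs, times)
# n170_bluish  = n170_amplitude(bluish_face_epochs, times)
```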
The Effects of Prediction on the Perception for Own-Race and Other-Race Faces
Ran, Guangming; Zhang, Qi; Chen, Xu; Pan, Yangu
2014-01-01
Human beings do not passively perceive important social features about others such as race and age in social interactions. Instead, it is proposed that humans might continuously generate predictions about these social features based on prior similar experiences. Pre-awareness of racial information conveyed by others' faces enables individuals to act in “culturally appropriate” ways, which is useful for interpersonal relations in different ethnicity groups. However, little is known about the effects of prediction on the perception of own-race and other-race faces. Here, we addressed this issue using high temporal resolution event-related potential techniques. In total, data from 24 participants (13 women and 11 men) were analyzed. It was found that the N170 amplitudes elicited by other-race faces, but not own-race faces, were significantly smaller in the predictable condition compared to the unpredictable condition, reflecting a switch to holistic processing of other-race faces when those faces were predictable. In this respect, top-down prediction about face race might contribute to the elimination of the other-race effect (a face recognition impairment). Furthermore, smaller P300 amplitudes were observed for the predictable than for unpredictable conditions, which suggested that the prediction of race reduced neural responses in the human brain. PMID:25422892
Vrancken, Leia; Germeys, Filip; Verfaillie, Karl
2017-01-01
A considerable amount of research on identity recognition and emotion identification with the composite design points to the holistic processing of these aspects in faces and bodies. In this paradigm, the interference from a nonattended face half on the perception of the attended half is taken as evidence for holistic processing (i.e., a composite effect). Far less research, however, has been dedicated to the concept of gaze. Nonetheless, gaze perception is a substantial component of face and body perception, and holds critical information for everyday communicative interactions. Furthermore, the ability of human observers to detect direct versus averted eye gaze is effortless, perhaps similar to identity perception and emotion recognition. However, the hypothesis of holistic perception of eye gaze has never been tested directly. Research on gaze perception with the composite design could facilitate further systematic comparison with other aspects of face and body perception that have been investigated using the composite design (i.e., identity and emotion). In the present research, a composite design was administered to assess holistic processing of gaze cues in faces (Experiment 1) and bodies (Experiment 2). Results confirmed that eye and head orientation (Experiment 1A) and head and body orientation (Experiment 2A) are integrated in a holistic manner. However, the composite effect was not completely disrupted by inversion (Experiments 1B and 2B), a finding that will be discussed together with implications for future research.
ERIC Educational Resources Information Center
Kaupins, Gundars Egons; Wanek, James Edward; Coco, Malcolm Paulin
2014-01-01
Based on a survey of 264 human resources professionals from 10 Society for Human Resource Management chapters in Texas, the authors investigated how human resources professionals accept online degrees compared to degrees based on face-to-face coursework for hiring and promotion purposes. If respondents were satisfied with their own online course…
Robust Selectivity for Faces in the Human Amygdala in the Absence of Expressions
Mende-Siedlecki, Peter; Verosky, Sara C.; Turk-Browne, Nicholas B.; Todorov, Alexander
2014-01-01
There is a well-established posterior network of cortical regions that plays a central role in face processing and that has been investigated extensively. In contrast, although responsive to faces, the amygdala is not considered a core face-selective region, and its face selectivity has never been a topic of systematic research in human neuroimaging studies. Here, we conducted a large-scale group analysis of fMRI data from 215 participants. We replicated the posterior network observed in prior studies but found equally robust and reliable responses to faces in the amygdala. These responses were detectable in most individual participants, but they were also highly sensitive to the initial statistical threshold and habituated more rapidly than the responses in posterior face-selective regions. A multivariate analysis showed that the pattern of responses to faces across voxels in the amygdala had high reliability over time. Finally, functional connectivity analyses showed stronger coupling between the amygdala and posterior face-selective regions during the perception of faces than during the perception of control visual categories. These findings suggest that the amygdala should be considered a core face-selective region. PMID:23984945
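The functional connectivity result above is, at its simplest, a comparison of condition-specific coupling between two regions' time series. The sketch below is only a schematic illustration of that idea; the ROI names, block masks, and plain correlation measure are assumptions, not the study's pipeline.

```python
import numpy as np

def roi_coupling(ts_a, ts_b, block_mask):
    """Pearson correlation between two ROI time series within one condition's volumes."""
    return np.corrcoef(ts_a[block_mask], ts_b[block_mask])[0, 1]

# Hypothetical usage (amygdala_ts, fusiform_ts: 1-D arrays; *_blocks: boolean masks):
# r_faces   = roi_coupling(amygdala_ts, fusiform_ts, face_blocks)
# r_control = roi_coupling(amygdala_ts, fusiform_ts, control_blocks)
# Stronger coupling during face perception would appear as r_faces > r_control.
```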
Infants Experience Perceptual Narrowing for Nonprimate Faces
ERIC Educational Resources Information Center
Simpson, Elizabeth A.; Varga, Krisztina; Frick, Janet E.; Fragaszy, Dorothy
2011-01-01
Perceptual narrowing--a phenomenon in which perception is broad from birth, but narrows as a function of experience--has previously been tested with primate faces. In the first 6 months of life, infants can discriminate among individual human and monkey faces. Though the ability to discriminate monkey faces is lost after about 9 months, infants…
Visual adaptation and face perception
Webster, Michael A.; MacLeod, Donald I. A.
2011-01-01
The appearance of faces can be strongly affected by the characteristics of faces viewed previously. These perceptual after-effects reflect processes of sensory adaptation that are found throughout the visual system, but which have been considered only relatively recently in the context of higher level perceptual judgements. In this review, we explore the consequences of adaptation for human face perception, and the implications of adaptation for understanding the neural-coding schemes underlying the visual representation of faces. The properties of face after-effects suggest that they, in part, reflect response changes at high and possibly face-specific levels of visual processing. Yet, the form of the after-effects and the norm-based codes that they point to show many parallels with the adaptations and functional organization that are thought to underlie the encoding of perceptual attributes like colour. The nature and basis for human colour vision have been studied extensively, and we draw on ideas and principles that have been developed to account for norms and normalization in colour vision to consider potential similarities and differences in the representation and adaptation of faces. PMID:21536555
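The norm-based codes discussed above are often summarized with a simple opponent model: a face is encoded by its deviation from a norm (prototype), and adaptation shifts the norm toward the adaptor, so a physically average face subsequently appears biased away from the adaptor. The toy model below is a conceptual sketch of that account under assumed parameters, not a model taken from the review:

```python
import numpy as np

norm    = np.zeros(2)             # prototype (average) face in a toy 2-D face space
adaptor = np.array([1.0, 0.0])    # e.g., a face with an exaggerated attribute
test    = np.zeros(2)             # physically average test face

def percept(face, norm):
    """Perceived attribute = deviation of the face from the current norm."""
    return face - norm

k = 0.3                           # assumed adaptation strength (0 = no adaptation)
adapted_norm = norm + k * (adaptor - norm)

print(percept(test, norm))          # [0. 0.]   -> looks average before adaptation
print(percept(test, adapted_norm))  # [-0.3 0.] -> now appears biased away from the adaptor
```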
Face recognition increases during saccade preparation.
Lin, Hai; Rizak, Joshua D; Ma, Yuan-ye; Yang, Shang-chuan; Chen, Lin; Hu, Xin-tian
2014-01-01
Face perception is integral to the human perceptual system, as it underlies social interactions. Saccadic eye movements are frequently made to bring interesting visual information, such as faces, onto the fovea for detailed processing. Just before eye movement onset, the processing of some basic features, such as the orientation, of an object improves at the saccade landing point. Interestingly, there is also evidence indicating that faces are processed in early visual processing stages, similar to basic features. However, it is not known whether this early enhancement of processing includes face recognition. In this study, three experiments were performed that mapped the timing of face presentation to the beginning of the eye movement in order to evaluate pre-saccadic face recognition. Faces were found to be processed similarly to simple objects immediately prior to saccadic movements. Starting ∼120 ms before a saccade to a target face, independent of whether or not the face was surrounded by other faces, face recognition gradually improved and the critical spacing of the crowding decreased as saccade onset approached. These results suggest that an upcoming saccade prepares the visual system for new information about faces at the saccade landing site and may reduce the background in a crowd to target the intended face. This indicates an important role of pre-saccadic eye movement signals in human face recognition.
Fox, Christopher J; Barton, Jason J S
2007-01-05
The neural representation of facial expression within the human visual system is not well defined. Using an adaptation paradigm, we examined aftereffects on expression perception produced by various stimuli. Adapting to a face, which was used to create morphs between two expressions, substantially biased expression perception within the morphed faces away from the adapting expression. This adaptation was not based on low-level image properties, as a different image of the same person displaying that expression produced equally robust aftereffects. Smaller but significant aftereffects were generated by images of different individuals, irrespective of gender. Non-face visual, auditory, or verbal representations of emotion did not generate significant aftereffects. These results suggest that adaptation affects at least two neural representations of expression: one specific to the individual (not the image), and one that represents expression across different facial identities. The identity-independent aftereffect suggests the existence of a 'visual semantic' for facial expression in the human visual system.
Impressions of dominance are made relative to others in the visual environment.
Re, Daniel E; Lefevre, Carmen E; DeBruine, Lisa M; Jones, Benedict C; Perrett, David I
2014-03-27
Face judgments of dominance play an important role in human social interaction. Perceived facial dominance is thought to indicate physical formidability, as well as resource acquisition and holding potential. Dominance cues in the face affect perceptions of attractiveness, emotional state, and physical strength. Most experimental paradigms test perceptions of facial dominance in individual faces, or they use manipulated versions of the same face in a forced-choice task but in the absence of other faces. Here, we extend this work by assessing whether dominance ratings are absolute or are judged relative to other faces. We presented participants with faces to be rated for dominance (target faces), while also presenting a second face (non-target faces) that was not to be rated. We found that both the masculinity and sex of the non-target face affected dominance ratings of the target face. Masculinized non-target faces decreased the perceived dominance of a target face relative to a feminized non-target face, and displaying a male non-target face decreased perceived dominance of a target face more so than a female non-target face. Perceived dominance of male target faces was affected more by masculinization of male non-target faces than female non-target faces. These results indicate that dominance perceptions can be altered by surrounding faces, demonstrating that facial dominance is judged at least partly relative to other faces.
The Superior Temporal Sulcus Is Causally Connected to the Amygdala: A Combined TBS-fMRI Study.
Pitcher, David; Japee, Shruti; Rauth, Lionel; Ungerleider, Leslie G
2017-02-01
Nonhuman primate neuroanatomical studies have identified a cortical pathway from the superior temporal sulcus (STS) projecting into dorsal subregions of the amygdala, but whether this same pathway exists in humans is unknown. Here, we addressed this question by combining theta burst transcranial magnetic stimulation (TBS) with fMRI to test the prediction that the STS and amygdala are functionally connected during face perception. Human participants (N = 17) were scanned, over two sessions, while viewing 3 s video clips of moving faces, bodies, and objects. During these sessions, TBS was delivered over the face-selective right posterior STS (rpSTS) or over the vertex control site. A region-of-interest analysis revealed results consistent with our hypothesis. Namely, TBS delivered over the rpSTS reduced the neural response to faces (but not to bodies or objects) in the rpSTS, right anterior STS (raSTS), and right amygdala, compared with TBS delivered over the vertex. By contrast, TBS delivered over the rpSTS did not significantly reduce the neural response to faces in the right fusiform face area or right occipital face area. This pattern of results is consistent with the existence of a cortico-amygdala pathway in humans for processing face information projecting from the rpSTS, via the raSTS, into the amygdala. This conclusion is consistent with nonhuman primate neuroanatomy and with existing face perception models. Neuroimaging studies have identified multiple face-selective regions in the brain, but the functional connections between these regions are unknown. In the present study, participants were scanned with fMRI while viewing movie clips of faces, bodies, and objects before and after transient disruption of the face-selective right posterior superior temporal sulcus (rpSTS). Results showed that TBS disruption reduced the neural response to faces, but not to bodies or objects, in the rpSTS, right anterior STS (raSTS), and right amygdala. These results are consistent with the existence of a cortico-amygdala pathway in humans for processing face information projecting from the rpSTS, via the raSTS, into the amygdala. This conclusion is consistent with nonhuman primate neuroanatomy and with existing face perception models. Copyright © 2017 the authors 0270-6474/17/371156-06$15.00/0.
Perception and Processing of Faces in the Human Brain Is Tuned to Typical Feature Locations
Schwarzkopf, D. Samuel; Alvarez, Ivan; Lawson, Rebecca P.; Henriksson, Linda; Kriegeskorte, Nikolaus; Rees, Geraint
2016-01-01
Faces are salient social stimuli whose features attract a stereotypical pattern of fixations. The implications of this gaze behavior for perception and brain activity are largely unknown. Here, we characterize and quantify a retinotopic bias implied by typical gaze behavior toward faces, which leads to eyes and mouth appearing most often in the upper and lower visual field, respectively. We found that the adult human visual system is tuned to these contingencies. In two recognition experiments, recognition performance for isolated face parts was better when they were presented at typical, rather than reversed, visual field locations. The recognition cost of reversed locations was equal to ∼60% of that for whole face inversion in the same sample. Similarly, an fMRI experiment showed that patterns of activity evoked by eye and mouth stimuli in the right inferior occipital gyrus could be separated with significantly higher accuracy when these features were presented at typical, rather than reversed, visual field locations. Our findings demonstrate that human face perception is determined not only by the local position of features within a face context, but by whether features appear at the typical retinotopic location given normal gaze behavior. Such location sensitivity may reflect fine-tuning of category-specific visual processing to retinal input statistics. Our findings further suggest that retinotopic heterogeneity might play a role for face inversion effects and for the understanding of conditions affecting gaze behavior toward faces, such as autism spectrum disorders and congenital prosopagnosia. SIGNIFICANCE STATEMENT Faces attract our attention and trigger stereotypical patterns of visual fixations, concentrating on inner features, like eyes and mouth. Here we show that the visual system represents face features better when they are shown at retinal positions where they typically fall during natural vision. When facial features were shown at typical (rather than reversed) visual field locations, they were discriminated better by humans and could be decoded with higher accuracy from brain activity patterns in the right occipital face area. This suggests that brain representations of face features do not cover the visual field uniformly. It may help us understand the well-known face-inversion effect and conditions affecting gaze behavior toward faces, such as prosopagnosia and autism spectrum disorders. PMID:27605606
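The decoding result in this record rests on multivoxel pattern classification (eye versus mouth stimuli separated from activity patterns in the right inferior occipital gyrus). The snippet below is a minimal, generic sketch of that kind of analysis, assuming synthetic trial-by-voxel data and a linear support vector classifier; it is not the authors' pipeline, and all names and numbers are illustrative.

# Minimal sketch of pattern decoding (eye vs. mouth) with a linear classifier.
# Synthetic data stand in for single-trial voxel patterns; not the authors' pipeline.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials_per_class, n_voxels = 40, 100

# Simulated trial-wise activity patterns for two stimulus classes,
# with a small mean difference so decoding is above chance.
eye_patterns = rng.normal(0.0, 1.0, (n_trials_per_class, n_voxels)) + 0.3
mouth_patterns = rng.normal(0.0, 1.0, (n_trials_per_class, n_voxels))

X = np.vstack([eye_patterns, mouth_patterns])
y = np.array([1] * n_trials_per_class + [0] * n_trials_per_class)

# Cross-validated decoding accuracy; chance is 0.5 for two classes.
accuracy = cross_val_score(LinearSVC(dual=False), X, y, cv=5).mean()
print(f"cross-validated decoding accuracy: {accuracy:.2f}")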
Jessen, Sarah; Altvater-Mackensen, Nicole; Grossmann, Tobias
2016-05-01
Sensitive responding to others' emotions is essential during social interactions among humans. There is evidence for the existence of subcortically mediated emotion discrimination processes that occur independent of conscious perception in adults. However, only recently work has begun to examine the development of automatic emotion processing systems during infancy. In particular, it is unclear whether emotional expressions impact infants' autonomic nervous system regardless of conscious perception. We examined this question by measuring pupillary responses while subliminally and supraliminally presenting 7-month-old infants with happy and fearful faces. Our results show greater pupil dilation, indexing enhanced autonomic arousal, in response to happy compared to fearful faces regardless of conscious perception. Our findings suggest that, early in ontogeny, emotion discrimination occurs independent of conscious perception and is associated with differential autonomic responses. This provides evidence for the view that automatic emotion processing systems are an early-developing building block of human social functioning. Copyright © 2016 Elsevier B.V. All rights reserved.
The fusiform face area: a cortical region specialized for the perception of faces
Kanwisher, Nancy; Yovel, Galit
2006-01-01
Faces are among the most important visual stimuli we perceive, informing us not only about a person's identity, but also about their mood, sex, age and direction of gaze. The ability to extract this information within a fraction of a second of viewing a face is important for normal social interactions and has probably played a critical role in the survival of our primate ancestors. Considerable evidence from behavioural, neuropsychological and neurophysiological investigations supports the hypothesis that humans have specialized cognitive and neural mechanisms dedicated to the perception of faces (the face-specificity hypothesis). Here, we review the literature on a region of the human brain that appears to play a key role in face perception, known as the fusiform face area (FFA). Section 1 outlines the theoretical background for much of this work. The face-specificity hypothesis falls squarely on one side of a longstanding debate in the fields of cognitive science and cognitive neuroscience concerning the extent to which the mind/brain is composed of: (i) special-purpose (‘domain-specific’) mechanisms, each dedicated to processing a specific kind of information (e.g. faces, according to the face-specificity hypothesis), versus (ii) general-purpose (‘domain-general’) mechanisms, each capable of operating on any kind of information. Face perception has long served both as one of the prime candidates of a domain-specific process and as a key target for attack by proponents of domain-general theories of brain and mind. Section 2 briefly reviews the prior literature on face perception from behaviour and neurophysiology. This work supports the face-specificity hypothesis and argues against its domain-general alternatives (the individuation hypothesis, the expertise hypothesis and others). Section 3 outlines the more recent evidence on this debate from brain imaging, focusing particularly on the FFA. We review the evidence that the FFA is selectively engaged in face perception, by addressing (and rebutting) five of the most widely discussed alternatives to this hypothesis. In §4, we consider recent findings that are beginning to provide clues into the computations conducted in the FFA and the nature of the representations the FFA extracts from faces. We argue that the FFA is engaged both in detecting faces and in extracting the necessary perceptual information to recognize them, and that the properties of the FFA mirror previously identified behavioural signatures of face-specific processing (e.g. the face-inversion effect). Section 5 asks how the computations and representations in the FFA differ from those occurring in other nearby regions of cortex that respond strongly to faces and objects. The evidence indicates clear functional dissociations between these regions, demonstrating that the FFA shows not only functional specificity but also area specificity. We end by speculating in §6 on some of the broader questions raised by current research on the FFA, including the developmental origins of this region and the question of whether faces are unique versus whether similarly specialized mechanisms also exist for other domains of high-level perception and cognition. PMID:17118927
Perceived aggressiveness predicts fighting performance in mixed-martial-arts fighters.
Trebicky, Vít; Havlícek, Jan; Roberts, S Craig; Little, Anthony C; Kleisner, Karel
2013-09-01
Accurate assessment of competitive ability is a critical component of contest behavior in animals, and it could be just as important in human competition, particularly in human ancestral populations. Here, we tested the role that facial perception plays in this assessment by investigating the association between both perceived aggressiveness and perceived fighting ability in fighters' faces and their actual fighting success. Perceived aggressiveness was positively associated with the proportion of fights won, after we controlled for the effect of weight, which also independently predicted perceived aggression. In contrast, perception of fighting ability was confounded by weight, and an association between perceived fighting ability and actual fighting success was restricted to heavyweight fighters. Shape regressions revealed that aggressive-looking faces are generally wider and have a broader chin, more prominent eyebrows, and a larger nose than less aggressive-looking faces. Our results indicate that perception of aggressiveness and fighting ability might cue different aspects of success in male-male physical confrontation.
The nature of face representations in subcortical regions.
Gabay, Shai; Burlingham, Charles; Behrmann, Marlene
2014-07-01
Studies examining the neural correlates of face perception in humans have focused almost exclusively on the distributed cortical network of face-selective regions. Recently, however, investigations have also identified subcortical correlates of face perception, and the question addressed here concerns the nature of these subcortical face representations. To explore this issue, we presented pairs of images to participants sequentially, to the same or to different eyes. Superior performance in the former condition over the latter implicates monocular, prestriate portions of the visual system. Over a series of five experiments, we manipulated both lower-level (size, location) and higher-level (identity) similarity across the pair of faces. A monocular advantage was observed even when the faces in a pair differed in location and in size, implicating some subcortical invariance across lower-level image properties. A monocular advantage was also observed when the faces in a pair were two different images of the same individual, indicating the engagement of subcortical representations in more abstract, higher-level aspects of face processing. We conclude that subcortical structures of the visual system are involved, perhaps interactively, in multiple aspects of face perception, and not simply in deriving initial coarse representations. Copyright © 2014 Elsevier Ltd. All rights reserved.
Foley, Elaine; Rippon, Gina; Thai, Ngoc Jade; Longe, Olivia; Senior, Carl
2012-02-01
Very little is known about the neural structures involved in the perception of realistic dynamic facial expressions. In the present study, a unique set of naturalistic dynamic facial emotional expressions was created. Through fMRI and connectivity analysis, a dynamic face perception network was identified, which is demonstrated to extend Haxby et al.'s [Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. The distributed human neural system for face perception. Trends in Cognitive Science, 4, 223-233, 2000] distributed neural system for face perception. This network includes early visual regions, such as the inferior occipital gyrus, which is identified as insensitive to motion or affect but sensitive to the visual stimulus, the STS, identified as specifically sensitive to motion, and the amygdala, recruited to process affect. Measures of effective connectivity between these regions revealed that dynamic facial stimuli were associated with specific increases in connectivity between early visual regions, such as the inferior occipital gyrus and the STS, along with coupling between the STS and the amygdala, as well as the inferior frontal gyrus. These findings support the presence of a distributed network of cortical regions that mediate the perception of different dynamic facial expressions.
Greater sensitivity of the cortical face processing system to perceptually-equated face detection
Maher, S.; Ekstrom, T.; Tong, Y.; Nickerson, L.D.; Frederick, B.; Chen, Y.
2015-01-01
Face detection, the perceptual capacity to identify a visual stimulus as a face before probing deeper into specific attributes (such as its identity or emotion), is essential for social functioning. Despite the importance of this functional capacity, face detection and its underlying brain mechanisms are not well understood. This study evaluated the role that the cortical face processing system, which has been identified largely through studying other aspects of face perception, plays in face detection. Specifically, we used functional magnetic resonance imaging (fMRI) to examine the activations of the fusiform face area (FFA), occipital face area (OFA) and superior temporal sulcus (STS) when face detection was isolated from other aspects of face perception and when face detection was perceptually-equated across individual human participants (n=20). During face detection, FFA and OFA were significantly activated, even for stimuli presented at perceptual-threshold levels, whereas STS was not. During tree detection, however, FFA and OFA were responsive only for highly salient (i.e., high contrast) stimuli. Moreover, activation of FFA during face detection predicted a significant portion of the perceptual performance levels that were determined psychophysically for each participant. This pattern of results indicates that FFA and OFA have a greater sensitivity to face detection signals and selectively support the initial process of face vs. non-face object perception. PMID:26592952
The Face Perception System becomes Species-Specific at 3 Months: An Eye-Tracking Study
ERIC Educational Resources Information Center
Di Giorgio, Elisa; Meary, David; Pascalis, Olivier; Simion, Francesca
2013-01-01
The current study aimed at investigating own- vs. other-species preferences in 3-month-old infants. The infants' eye movements were recorded during a visual preference paradigm to assess whether they show a preference for own-species faces when contrasted with other-species faces. Human and monkey faces, equated for all low-level perceptual…
Looser, Christine E; Guntupalli, Jyothi S; Wheatley, Thalia
2013-10-01
More than a decade of research has demonstrated that faces evoke prioritized processing in a 'core face network' of three brain regions. However, whether these regions prioritize the detection of global facial form (shared by humans and mannequins) or the detection of life in a face has remained unclear. Here, we dissociate form-based and animacy-based encoding of faces by using animate and inanimate faces with human form (humans, mannequins) and dog form (real dogs, toy dogs). We used multivariate pattern analysis of BOLD responses to uncover the representational similarity space for each area in the core face network. Here, we show that only responses in the inferior occipital gyrus are organized by global facial form alone (human vs dog) while animacy becomes an additional organizational priority in later face-processing regions: the lateral fusiform gyri (latFG) and right superior temporal sulcus. Additionally, patterns evoked by human faces were maximally distinct from all other face categories in the latFG and parts of the extended face perception system. These results suggest that once a face configuration is perceived, faces are further scrutinized for whether the face is alive and worthy of social cognitive resources.
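The analysis described in this record relies on multivariate pattern analysis and representational similarity. As a generic illustration (not the study's code), the sketch below builds a representational dissimilarity matrix over condition-wise response patterns using correlation distance; the condition labels and data are placeholders.

# Sketch of a representational dissimilarity matrix (RDM) from condition patterns.
# Rows are conditions (e.g., human, mannequin, dog, toy dog); values are synthetic.
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(1)
conditions = ["human", "mannequin", "dog", "toy_dog"]   # illustrative labels
patterns = rng.normal(size=(len(conditions), 200))      # condition x voxel responses

# Correlation distance (1 - Pearson r) between every pair of condition patterns.
rdm = squareform(pdist(patterns, metric="correlation"))

for label, row in zip(conditions, rdm):
    print(label, np.round(row, 2))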
Perry, Anat; Aviezer, Hillel; Goldstein, Pavel; Palgi, Sharon; Klein, Ehud; Shamay-Tsoory, Simone G
2013-11-01
The neuropeptide oxytocin (OT) has been repeatedly reported to play an essential role in the regulation of social cognition in humans in general, and specifically in enhancing the recognition of emotions from facial expressions. The later was assessed in different paradigms that rely primarily on isolated and decontextualized emotional faces. However, recent evidence has indicated that the perception of basic facial expressions is not context invariant and can be categorically altered by context, especially body context, at early perceptual levels. Body context has a strong effect on our perception of emotional expressions, especially when the actual target face and the contextually expected face are perceptually similar. To examine whether and how OT affects emotion recognition, we investigated the role of OT in categorizing facial expressions in incongruent body contexts. Our results show that in the combined process of deciphering emotions from facial expressions and from context, OT gives an advantage to the face. This advantage is most evident when the target face and the contextually expected face are perceptually similar. Copyright © 2013 Elsevier Ltd. All rights reserved.
Waller, Bridget M; Bard, Kim A; Vick, Sarah-Jane; Smith Pasqualini, Marcia C
2007-11-01
Human face perception is a finely tuned, specialized process. When comparing faces between species, therefore, it is essential to consider how people make these observational judgments. Comparing facial expressions may be particularly problematic, given that people tend to consider them categorically as emotional signals, which may affect how accurately specific details are processed. The bared-teeth display (BT), observed in most primates, has been proposed as a homologue of the human smile (J. A. R. A. M. van Hooff, 1972). In this study, judgments of similarity between BT displays of chimpanzees (Pan troglodytes) and human smiles varied in relation to perceived emotional valence. When a chimpanzee BT was interpreted as fearful, observers tended to underestimate the magnitude of the relationship between certain features (the extent of lip corner raise) and human smiles. These judgments may reflect the combined effects of categorical emotional perception, configural face processing, and perceptual organization in mental imagery and may demonstrate the advantages of using standardized observational methods in comparative facial expression research. Copyright 2007 APA.
Configural information in gender categorisation.
Baudouin, Jean-Yves; Humphreys, Glyn W
2006-01-01
The role of configural information in gender categorisation was studied by aligning the top half of one face with the bottom half of another. The two faces had the same or different genders. Experiment 1 shows that participants were slower and made more errors in categorising the gender in either half of these composite faces when the two faces had a different gender, relative to control conditions where the two faces were nonaligned or had the same gender. This result parallels the composite effect for face recognition (Young et al, 1987 Perception 16 747-759) and facial-expression recognition (Calder et al, 2000 Journal of Experimental Psychology: Human Perception and Performance 26 527-551). Similarly to responses to face identity and expression, the composite effect on gender discrimination was disrupted by inverting the faces (experiment 2). Both experiments also show that the composite paradigm is sensitive to general contextual interference in gender categorisation.
Speech perception: Some new directions in research and theory
Pisoni, David B.
2012-01-01
The perception of speech is one of the most fascinating attributes of human behavior; both the auditory periphery and higher centers help define the parameters of sound perception. In this paper some of the fundamental perceptual problems facing speech sciences are described. The paper focuses on several of the new directions speech perception research is taking to solve these problems. Recent developments suggest that major breakthroughs in research and theory will soon be possible. The current study of segmentation, invariance, and normalization are described. The paper summarizes some of the new techniques used to understand auditory perception of speech signals and their linguistic significance to the human listener. PMID:4031245
The Human Face: A View From the Infant's Eye.
ERIC Educational Resources Information Center
Souther, Arthur F.; Banks, Martin S.
This study explores the reason why very young infants are unable to respond differentially to faces and the cause for developmental changes in infant face perception by age 3 months. Linear systems analysis (LSA) and the contrast sensitivity function (CSF) were used to estimate the facial pattern information available to 1- and 3-month-old…
Intersensory Perception at Birth: Newborns Match Nonhuman Primate Faces and Voices
ERIC Educational Resources Information Center
Lewkowicz, David J.; Leo, Irene; Simion, Francesca
2010-01-01
Previous studies have shown that infants, including newborns, can match previously unseen and unheard human faces and vocalizations. More recently, it has been reported that infants as young as 4 months of age also can match the faces and vocalizations of other species raising the possibility that such broad multisensory perceptual tuning is…
Effects of configural processing on the perceptual spatial resolution for face features.
Namdar, Gal; Avidan, Galia; Ganel, Tzvi
2015-11-01
Configural processing governs human perception across various domains, including face perception. An established marker of configural face perception is the face inversion effect, in which performance is typically better for upright compared to inverted faces. In two experiments, we tested whether configural processing could influence basic visual abilities such as perceptual spatial resolution (i.e., the ability to detect spatial visual changes). Face-related perceptual spatial resolution was assessed by measuring the just noticeable difference (JND) to subtle positional changes between specific features in upright and inverted faces. The results revealed robust inversion effect for spatial sensitivity to configural-based changes, such as the distance between the mouth and the nose, or the distance between the eyes and the nose. Critically, spatial resolution for face features within the region of the eyes (e.g., the interocular distance between the eyes) was not affected by inversion, suggesting that the eye region operates as a separate 'gestalt' unit which is relatively immune to manipulations that would normally hamper configural processing. Together these findings suggest that face orientation modulates fundamental psychophysical abilities including spatial resolution. Furthermore, they indicate that classic psychophysical methods can be used as a valid measure of configural face processing. Copyright © 2015 Elsevier Ltd. All rights reserved.
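The JND measure used in this kind of psychophysics is typically read off a fitted psychometric function. Below is a hedged sketch, assuming a cumulative-Gaussian fit to the proportion of trials on which a positional change was detected, with the JND taken here as the fitted spread parameter; the data points and fitting choices are illustrative, not the study's.

# Sketch: estimate a just noticeable difference (JND) from a psychometric function.
# The proportion of "change detected" responses is fit with a cumulative Gaussian;
# the JND is taken here as the fitted sigma. Synthetic data, arbitrary units.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, mu, sigma):
    return norm.cdf(x, loc=mu, scale=sigma)

displacement = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])   # feature shift magnitude
p_detected   = np.array([0.08, 0.20, 0.45, 0.70, 0.90, 0.97])

(mu, sigma), _ = curve_fit(psychometric, displacement, p_detected, p0=[1.5, 0.5])
print(f"threshold (mu) = {mu:.2f}, JND (sigma) = {sigma:.2f}")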
A model for production, perception, and acquisition of actions in face-to-face communication.
Kröger, Bernd J; Kopp, Stefan; Lowit, Anja
2010-08-01
The concept of action as basic motor control unit for goal-directed movement behavior has been used primarily for private or non-communicative actions like walking, reaching, or grasping. In this paper, literature is reviewed indicating that this concept can also be used in all domains of face-to-face communication like speech, co-verbal facial expression, and co-verbal gesturing. Three domain-specific types of actions, i.e. speech actions, facial actions, and hand-arm actions, are defined in this paper and a model is proposed that elucidates the underlying biological mechanisms of action production, action perception, and action acquisition in all domains of face-to-face communication. This model can be used as theoretical framework for empirical analysis or simulation with embodied conversational agents, and thus for advanced human-computer interaction technologies.
Young children perceive less humanness in outgroup faces.
McLoughlin, Niamh; Tipper, Steven P; Over, Harriet
2018-03-01
We investigated when young children first dehumanize outgroups. Across two studies, 5- and 6-year-olds were asked to rate how human they thought a set of ambiguous doll-human face morphs were. We manipulated whether these faces belonged to their gender in- or gender outgroup (Study 1) and to a geographically based in- or outgroup (Study 2). In both studies, the tendency to perceive outgroup faces as less human relative to ingroup faces increased with age. Explicit ingroup preference, in contrast, was present even in the youngest children and remained stable across age. These results demonstrate that children dehumanize outgroup members from relatively early in development and suggest that the tendency to do so may be partially distinguishable from intergroup preference. This research has important implications for our understanding of children's perception of humanness and the origins of intergroup bias. © 2017 John Wiley & Sons Ltd.
Development of Neural Sensitivity to Face Identity Correlates with Perceptual Discriminability
Barnett, Michael A.; Hartley, Jake; Gomez, Jesse; Stigliani, Anthony; Grill-Spector, Kalanit
2016-01-01
Face perception is subserved by a series of face-selective regions in the human ventral stream, which undergo prolonged development from childhood to adulthood. However, it is unknown how neural development of these regions relates to the development of face-perception abilities. Here, we used functional magnetic resonance imaging (fMRI) to measure brain responses of ventral occipitotemporal regions in children (ages, 5–12 years) and adults (ages, 19–34 years) when they viewed faces that parametrically varied in dissimilarity. Since similar faces generate lower responses than dissimilar faces due to fMRI adaptation, this design objectively evaluates neural sensitivity to face identity across development. Additionally, a subset of subjects participated in a behavioral experiment to assess perceptual discriminability of face identity. Our data reveal three main findings: (1) neural sensitivity to face identity increases with age in face-selective but not object-selective regions; (2) the amplitude of responses to faces increases with age in both face-selective and object-selective regions; and (3) perceptual discriminability of face identity is correlated with the neural sensitivity to face identity of face-selective regions. In contrast, perceptual discriminability is not correlated with the amplitude of response in face-selective regions or of responses of object-selective regions. These data suggest that developmental increases in neural sensitivity to face identity in face-selective regions improve perceptual discriminability of faces. Our findings significantly advance the understanding of the neural mechanisms of development of face perception and open new avenues for using fMRI adaptation to study the neural development of high-level visual and cognitive functions more broadly. SIGNIFICANCE STATEMENT Face perception, which is critical for daily social interactions, develops from childhood to adulthood. However, it is unknown what developmental changes in the brain lead to improved performance. Using fMRI in children and adults, we find that from childhood to adulthood, neural sensitivity to changes in face identity increases in face-selective regions. Critically, subjects' perceptual discriminability among faces is linked to neural sensitivity: participants with higher neural sensitivity in face-selective regions demonstrate higher perceptual discriminability. Thus, our results suggest that developmental increases in face-selective regions' sensitivity to face identity improve perceptual discrimination of faces. These findings significantly advance understanding of the neural mechanisms underlying the development of face perception and have important implications for assessing both typical and atypical development. PMID:27798143
Efficient search for a face by chimpanzees (Pan troglodytes).
Tomonaga, Masaki; Imura, Tomoko
2015-07-16
The face is quite an important stimulus category for human and nonhuman primates in their social lives. Recent advances in comparative-cognitive research clearly indicate that chimpanzees and humans process faces in a special manner; that is, using holistic or configural processing. Both species exhibit the face-inversion effect in which the inverted presentation of a face deteriorates their perception and recognition. Furthermore, recent studies have shown that humans detect human faces among non-facial objects rapidly. We report that chimpanzees detected chimpanzee faces among non-facial objects quite efficiently. This efficient search was not limited to own-species faces. They also found human adult and baby faces--but not monkey faces--efficiently. Additional testing showed that a front-view face was more readily detected than a profile, suggesting the important role of eye-to-eye contact. Chimpanzees also detected a photograph of a banana as efficiently as a face, but a further examination clearly indicated that the banana was detected mainly due to a low-level feature (i.e., color). Efficient face detection was hampered by an inverted presentation, suggesting that configural processing of faces is a critical element of efficient face detection in both species. This conclusion was supported by a simple simulation experiment using the saliency model. PMID:26180944
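In visual search paradigms of this kind, "efficient" detection is conventionally quantified as the slope of response time over set size, with a shallow slope indicating pop-out-like search. A minimal sketch of that computation follows, using made-up response times rather than the chimpanzee data.

# Sketch: quantify search efficiency as the slope of response time vs. set size.
# A shallow slope (ms per added item) indicates efficient, pop-out-like detection.
import numpy as np

set_sizes = np.array([2, 4, 8, 12])                  # number of items on screen
rt_face   = np.array([520.0, 528.0, 541.0, 555.0])   # illustrative mean RTs (ms)
rt_object = np.array([540.0, 610.0, 760.0, 905.0])

slope_face, _ = np.polyfit(set_sizes, rt_face, 1)
slope_object, _ = np.polyfit(set_sizes, rt_object, 1)
print(f"face target: {slope_face:.1f} ms/item; non-face target: {slope_object:.1f} ms/item")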
Different Cortical Dynamics in Face and Body Perception: An MEG study
Meeren, Hanneke K. M.; de Gelder, Beatrice; Ahlfors, Seppo P.; Hämäläinen, Matti S.; Hadjikhani, Nouchine
2013-01-01
Evidence from functional neuroimaging indicates that visual perception of human faces and bodies is carried out by distributed networks of face and body-sensitive areas in the occipito-temporal cortex. However, the dynamics of activity in these areas, needed to understand their respective functional roles, are still largely unknown. We monitored brain activity with millisecond time resolution by recording magnetoencephalographic (MEG) responses while participants viewed photographs of faces, bodies, and control stimuli. The cortical activity underlying the evoked responses was estimated with anatomically-constrained noise-normalised minimum-norm estimate and statistically analysed with spatiotemporal cluster analysis. Our findings point to distinct spatiotemporal organization of the neural systems for face and body perception. Face-selective cortical currents were found at early latencies (120–200 ms) in a widespread occipito-temporal network including the ventral temporal cortex (VTC). In contrast, early body-related responses were confined to the lateral occipito-temporal cortex (LOTC). These were followed by strong sustained body-selective responses in the orbitofrontal cortex from 200–700 ms, and in the lateral temporal cortex and VTC after 500 ms latency. Our data suggest that the VTC region has a key role in the early processing of faces, but not of bodies. Instead, the LOTC, which includes the extra-striate body area (EBA), appears the dominant area for early body perception, whereas the VTC contributes to late and post-perceptual processing. PMID:24039712
Experience-dependent changes in the development of face preferences in infant rhesus monkeys.
Parr, Lisa A; Murphy, Lauren; Feczko, Eric; Brooks, Jenna; Collantes, Marie; Heitz, Thomas R
2016-12-01
It is well known that early experience shapes the development of visual perception for faces in humans. However, the effect of experience on the development of social attention in non-human primates is unknown. In two studies, we examined the effect of cumulative social experience on developmental changes in attention to the faces of unfamiliar conspecifics or heterospecifics, and mom versus an unfamiliar female. From birth, infant rhesus monkeys preferred to look at conspecific compared to heterospecific faces, but this pattern reversed over time. In contrast, no consistent differences were found for attention to mom's face compared to an unfamiliar female. These results suggest differential roles of social experience in shaping the development of face preferences in infant monkeys. Results have important implications for establishing normative trajectories for the development of face preferences in an animal model of human social behavior. © 2016 Wiley Periodicals, Inc.
Perceptions of emergency nurses during the human swine influenza outbreak: a qualitative study.
Lam, Kam Ki; Hung, Shuk Yu Maria
2013-10-01
The primary aim of this study was to explore the perceptions of Hong Kong emergency nurses regarding their work during the human swine influenza pandemic outbreak. In this exploratory, qualitative study, 10 emergency nurses from a regional hospital in Hong Kong were recruited using purposive sampling. Semi-structured, face-to-face individual interviews were conducted. Qualitative content analysis was utilized to analyze the transcripts. The following three categories emerged from the interview data: concerns about health, comments on the administration, and attitudes of professionalism. Nurses viewed the human swine influenza as a threat to their own and their families' health. However, nurses perceived that the severity of the disease was exaggerated by the public. Needed improvements in planning the circulation of information, allocating manpower, and utilizing personal protective equipment were indicated. The emergency nurses demonstrated a sense of commitment and professional morale in promoting a high quality of nursing care. Various factors affecting the perceptions of emergency nurses toward their professional duties during the influenza pandemic were identified. By understanding these perceptions, appropriate planning, policies, and guidelines can be formulated to meet the healthcare needs of patients during future pandemic outbreaks. Copyright © 2012 Elsevier Ltd. All rights reserved.
Rossion, Bruno; Dricot, Laurence; Goebel, Rainer; Busigny, Thomas
2011-01-01
How a visual stimulus is initially categorized as a face in a network of human brain areas remains largely unclear. Hierarchical neuro-computational models of face perception assume that the visual stimulus is first decomposed into local parts in lower order visual areas. These parts would then be combined into a global representation in higher order face-sensitive areas of the occipito-temporal cortex. Here we tested this view in fMRI with visual stimuli that are categorized as faces based on their global configuration rather than their local parts (two-tone Mooney figures and Arcimboldo's facelike paintings). Compared to the same inverted visual stimuli that are not categorized as faces, these stimuli activated the right middle fusiform gyrus (“Fusiform face area”) and superior temporal sulcus (pSTS), with no significant activation in the posteriorly located inferior occipital gyrus (i.e., no “occipital face area”). This observation is strengthened by behavioral and neural evidence for normal face categorization of these stimuli in a brain-damaged prosopagnosic patient whose intact right middle fusiform gyrus and superior temporal sulcus are devoid of any potential face-sensitive inputs from the lesioned right inferior occipital cortex. Together, these observations indicate that face-preferential activation may emerge in higher order visual areas of the right hemisphere without any face-preferential inputs from lower order visual areas, supporting a non-hierarchical view of face perception in the visual cortex. PMID:21267432
Neuronal integration in visual cortex elevates face category tuning to conscious face perception
Fahrenfort, Johannes J.; Snijders, Tineke M.; Heinen, Klaartje; van Gaal, Simon; Scholte, H. Steven; Lamme, Victor A. F.
2012-01-01
The human brain has the extraordinary capability to transform cluttered sensory input into distinct object representations. For example, it is able to rapidly and seemingly without effort detect object categories in complex natural scenes. Surprisingly, category tuning is not sufficient to achieve conscious recognition of objects. What neural process beyond category extraction might elevate neural representations to the level where objects are consciously perceived? Here we show that visible and invisible faces produce similar category-selective responses in the ventral visual cortex. The pattern of neural activity evoked by visible faces could be used to decode the presence of invisible faces and vice versa. However, only visible faces caused extensive response enhancements and changes in neural oscillatory synchronization, as well as increased functional connectivity between higher and lower visual areas. We conclude that conscious face perception is more tightly linked to neural processes of sustained information integration and binding than to processes accommodating face category tuning. PMID:23236162
The role of the fusiform face area in social cognition: implications for the pathobiology of autism.
Schultz, Robert T; Grelotti, David J; Klin, Ami; Kleinman, Jamie; Van der Gaag, Christiaan; Marois, René; Skudlarski, Pawel
2003-01-01
A region in the lateral aspect of the fusiform gyrus (FG) is more engaged by human faces than any other category of image. It has come to be known as the 'fusiform face area' (FFA). The origin and extent of this specialization is currently a topic of great interest and debate. This is of special relevance to autism, because recent studies have shown that the FFA is hypoactive to faces in this disorder. In two linked functional magnetic resonance imaging (fMRI) studies of healthy young adults, we show here that the FFA is engaged by a social attribution task (SAT) involving perception of human-like interactions among three simple geometric shapes. The amygdala, temporal pole, medial prefrontal cortex, inferolateral frontal cortex and superior temporal sulci were also significantly engaged. Activation of the FFA to a task without faces challenges the received view that the FFA is restricted in its activities to the perception of faces. We speculate that abstract semantic information associated with faces is encoded in the FG region and retrieved for social computations. From this perspective, the literature on hypoactivation of the FFA in autism may be interpreted as a reflection of a core social cognitive mechanism underlying the disorder. PMID:12639338
Individual differences in perceiving and recognizing faces-One element of social cognition.
Wilhelm, Oliver; Herzmann, Grit; Kunina, Olga; Danthiir, Vanessa; Schacht, Annekathrin; Sommer, Werner
2010-09-01
Recognizing faces swiftly and accurately is of paramount importance to humans as a social species. Individual differences in the ability to perform these tasks may therefore reflect important aspects of social or emotional intelligence. Although functional models of face cognition based on group and single case studies postulate multiple component processes, little is known about the ability structure underlying individual differences in face cognition. In 2 large individual differences experiments (N = 151 and N = 209), a broad variety of face-cognition tasks were tested and the component abilities of face cognition (face perception, face memory, and the speed of face cognition) were identified and then replicated. Experiment 2 also showed that the 3 face-cognition abilities are clearly distinct from immediate and delayed memory, mental speed, general cognitive ability, and object cognition. These results converge with functional and neuroanatomical models of face cognition by demonstrating the difference between face perception and face memory. The results also underline the importance of distinguishing between speed and accuracy of face cognition. Together our results provide a first step toward establishing face-processing abilities as an independent ability reflecting elements of social intelligence. (PsycINFO Database Record (c) 2010 APA, all rights reserved).
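Identifying component abilities from a task battery is a latent-variable problem; as a stand-in for the structural models typically used in this literature, the sketch below runs a generic exploratory factor analysis on synthetic task scores (the three-factor structure, the task battery, and all numbers are assumptions for illustration only).

# Stand-in sketch: exploratory factor analysis over a battery of task scores to
# recover a small number of ability factors. Synthetic scores; three factors assumed.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(5)
n_people, n_tasks = 200, 12
latent = rng.normal(size=(n_people, 3))                    # e.g., perception, memory, speed
loadings = rng.uniform(0.4, 0.9, size=(3, n_tasks)) * (rng.random((3, n_tasks)) < 0.4)
scores = latent @ loadings + rng.normal(0, 0.5, size=(n_people, n_tasks))

fa = FactorAnalysis(n_components=3, random_state=0).fit(scores)
print(np.round(fa.components_, 2))                         # factor x task loading matrix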
Development of Neural Sensitivity to Face Identity Correlates with Perceptual Discriminability.
Natu, Vaidehi S; Barnett, Michael A; Hartley, Jake; Gomez, Jesse; Stigliani, Anthony; Grill-Spector, Kalanit
2016-10-19
Face perception is subserved by a series of face-selective regions in the human ventral stream, which undergo prolonged development from childhood to adulthood. However, it is unknown how neural development of these regions relates to the development of face-perception abilities. Here, we used functional magnetic resonance imaging (fMRI) to measure brain responses of ventral occipitotemporal regions in children (ages, 5-12 years) and adults (ages, 19-34 years) when they viewed faces that parametrically varied in dissimilarity. Since similar faces generate lower responses than dissimilar faces due to fMRI adaptation, this design objectively evaluates neural sensitivity to face identity across development. Additionally, a subset of subjects participated in a behavioral experiment to assess perceptual discriminability of face identity. Our data reveal three main findings: (1) neural sensitivity to face identity increases with age in face-selective but not object-selective regions; (2) the amplitude of responses to faces increases with age in both face-selective and object-selective regions; and (3) perceptual discriminability of face identity is correlated with the neural sensitivity to face identity of face-selective regions. In contrast, perceptual discriminability is not correlated with the amplitude of response in face-selective regions or of responses of object-selective regions. These data suggest that developmental increases in neural sensitivity to face identity in face-selective regions improve perceptual discriminability of faces. Our findings significantly advance the understanding of the neural mechanisms of development of face perception and open new avenues for using fMRI adaptation to study the neural development of high-level visual and cognitive functions more broadly. Face perception, which is critical for daily social interactions, develops from childhood to adulthood. However, it is unknown what developmental changes in the brain lead to improved performance. Using fMRI in children and adults, we find that from childhood to adulthood, neural sensitivity to changes in face identity increases in face-selective regions. Critically, subjects' perceptual discriminability among faces is linked to neural sensitivity: participants with higher neural sensitivity in face-selective regions demonstrate higher perceptual discriminability. Thus, our results suggest that developmental increases in face-selective regions' sensitivity to face identity improve perceptual discrimination of faces. These findings significantly advance understanding of the neural mechanisms underlying the development of face perception and have important implications for assessing both typical and atypical development. Copyright © 2016 the authors 0270-6474/16/3610893-15$15.00/0.
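In an fMRI-adaptation design like the one described, neural sensitivity to identity can be summarized as the slope of response amplitude over face dissimilarity and then related to behavior across participants. The sketch below illustrates that logic with synthetic numbers; it does not reproduce the study's data or exact analysis.

# Sketch: neural sensitivity as the slope of fMRI response vs. face dissimilarity,
# correlated with behavioral discriminability across participants. Synthetic data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
dissimilarity = np.array([0.0, 0.25, 0.5, 0.75, 1.0])   # morph-level dissimilarity

n_subjects = 20
neural_sensitivity = np.empty(n_subjects)
for s in range(n_subjects):
    # Simulated mean response per dissimilarity level (release from adaptation).
    response = 0.5 + rng.uniform(0.2, 1.0) * dissimilarity + rng.normal(0, 0.05, 5)
    neural_sensitivity[s], _ = np.polyfit(dissimilarity, response, 1)

# Simulated behavioral discriminability that partly tracks neural sensitivity.
behavior = 0.6 * neural_sensitivity + rng.normal(0, 0.15, n_subjects)

r, p = pearsonr(neural_sensitivity, behavior)
print(f"brain-behavior correlation: r = {r:.2f}, p = {p:.3f}")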
Little, Anthony C; DeBruine, Lisa M; Jones, Benedict C
2011-01-01
A face appears normal when it approximates the average of a population. Consequently, exposure to faces biases perceptions of subsequently viewed faces such that faces similar to those recently seen are perceived as more normal. Simultaneously inducing such aftereffects in opposite directions for two groups of faces indicates somewhat discrete representations for those groups. Here we examine how labelling influences the perception of category in faces differing in colour. We show category-contingent aftereffects following exposure to faces differing in eye spacing (wide versus narrow) for blue versus red faces when such groups are consistently labelled with socially meaningful labels (Extravert versus Introvert; Soldier versus Builder). Category-contingent aftereffects were not seen using identical methodology when labels were not meaningful or were absent. These data suggest that human representations of faces can be rapidly tuned to code for meaningful social categories and that such tuning requires both a label and an associated visual difference. Results highlight the flexibility of the cognitive visual system to discriminate categories even in adulthood. Copyright © 2010 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Quayle, Michael; Essack, Zaynab
2007-01-01
Universities in South Africa face the challenge of redressing past (and continuing) inequalities in higher education by increasing accessibility to previously (and currently) disadvantaged students. One means of doing so is through 'access' or 'bridging' programmes. This article explores successful students' perceptions of one such programme at…
Anterior temporal face patches: a meta-analysis and empirical study
Von Der Heide, Rebecca J.; Skipper, Laura M.; Olson, Ingrid R.
2013-01-01
Evidence suggests the anterior temporal lobe (ATL) plays an important role in person identification and memory. In humans, neuroimaging studies of person memory report consistent activations in the ATL to famous and personally familiar faces, and studies of patients report that resection or damage of the ATL causes an associative prosopagnosia in which face perception is intact but face memory is compromised. In addition, high-resolution fMRI studies of non-human primates and electrophysiological studies of humans also suggest regions of the ventral ATL are sensitive to novel faces. The current study extends previous findings by investigating whether similar subregions in the dorsal, ventral, lateral, or polar aspects of the ATL are sensitive to personally familiar, famous, and novel faces. We present the results of two studies of person memory: a meta-analysis of existing fMRI studies and an empirical fMRI study using optimized imaging parameters. Both studies showed left-lateralized ATL activations to familiar individuals while novel faces activated the right ATL. Activations to famous faces were quite ventral, similar to what has been reported in previous high-resolution fMRI studies of non-human primates. These findings suggest that face memory-sensitive patches in the human ATL are in the ventral/polar ATL. PMID:23378834
ERIC Educational Resources Information Center
Wollner, Clemens; Deconinck, Frederik J. A.; Parkinson, Jim; Hove, Michael J.; Keller, Peter E.
2012-01-01
Aesthetic theories have long suggested perceptual advantages for prototypical exemplars of a given class of objects or events. Empirical evidence confirmed that morphed (quantitatively averaged) human faces, musical interpretations, and human voices are preferred over most individual ones. In this study, biological human motion was morphed and…
Neural Correlate of the Thatcher Face Illusion in a Monkey Face-Selective Patch.
Taubert, Jessica; Van Belle, Goedele; Vanduffel, Wim; Rossion, Bruno; Vogels, Rufin
2015-07-08
Compelling evidence that our sensitivity to facial structure is conserved across the primate order comes from studies of the "Thatcher face illusion": humans and monkeys notice changes in the orientation of facial features (e.g., the eyes) only when faces are upright, not when faces are upside down. Although it is presumed that face perception in primates depends on face-selective neurons in the inferior temporal (IT) cortex, it is not known whether these neurons respond differentially to upright faces with inverted features. Using microelectrodes guided by functional MRI mapping, we recorded cell responses in three regions of monkey IT cortex. We report an interaction in the middle lateral face patch (ML) between the global orientation of a face and the local orientation of its eyes, a response profile consistent with the perception of the Thatcher illusion. This increased sensitivity to eye orientation in upright faces resisted changes in screen location and was not found among face-selective neurons in other areas of IT cortex, including neurons in another face-selective region, the anterior lateral face patch. We conclude that the Thatcher face illusion is correlated with a pattern of activity in the ML that encodes faces according to a flexible holistic template. Copyright © 2015 the authors 0270-6474/15/359872-07$15.00/0.
The face of female dominance: Women with dominant faces have lower cortisol.
Gonzalez-Santoyo, Isaac; Wheatley, John R; Welling, Lisa L M; Cárdenas, Rodrigo A; Jimenez-Trejo, Francisco; Dawood, Khytam; Puts, David A
2015-05-01
The human face displays a wealth of information, including information about dominance and fecundity. Dominance and fecundity are also associated with lower concentrations of the stress hormone cortisol, suggesting that cortisol may negatively predict facial dominance and attractiveness. We digitally photographed 61 women's faces, had these images rated by men and women for dominance, attractiveness, and femininity, and explored relationships between these perceptions and women's salivary cortisol concentrations. In a first study, we found that women with more dominant-appearing, but not more attractive, faces had lower cortisol levels. These associations were not due to age, ethnicity, time since waking, testosterone, or its interaction with cortisol. In a second study, composite images of women with low cortisol were perceived as more dominant than those of women with high cortisol significantly more often than chance by two samples of viewers, with a similar but non-significant trend in a third sample. However, data on perceptions of attractiveness were mixed; low-cortisol images were viewed as more attractive by two samples of US viewers and as less attractive by a sample of Mexican viewers. Our results suggest that having a more dominant-appearing face may be associated with lower stress and hence lower cortisol in women, and provide further evidence regarding the information content of the human face. Copyright © 2015 Elsevier Inc. All rights reserved.
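The reported association is of the form "predicts X after controlling for covariates", which is a regression problem. A minimal sketch follows, with hypothetical variable names and synthetic values standing in for the study's cortisol, dominance, and covariate measures; it is not the authors' model.

# Sketch: does cortisol predict perceived facial dominance after controlling for
# a covariate (here, age)? Ordinary least squares on synthetic data; variable names
# are hypothetical and do not reproduce the study's measures.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 61
age = rng.uniform(18, 35, n)
cortisol = rng.normal(0.0, 1.0, n)                       # standardized concentration
dominance = -0.4 * cortisol + 0.05 * (age - 25) + rng.normal(0, 1, n)

X = sm.add_constant(np.column_stack([cortisol, age]))
model = sm.OLS(dominance, X).fit()
print(model.params)      # [intercept, cortisol coefficient, age coefficient]
print(model.pvalues)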
ERIC Educational Resources Information Center
Al Musawi, Ali; Amer, Talal
2017-01-01
This study attempts to investigate the stakeholders' perceptions of quality and prospective improvements in the learning resources centres (LRC) at Omani basic education schools. It focuses on different aspects of the LRCs: organisation, human resources, technological, and educational aspects along with the difficulties faced by these LRCs and…
Somppi, Sanni; Törnqvist, Heini; Topál, József; Koskela, Aija; Hänninen, Laura; Krause, Christina M.; Vainio, Outi
2017-01-01
The neuropeptide oxytocin plays a critical role in social behavior and emotion regulation in mammals. The aim of this study was to explore how nasal oxytocin administration affects gazing behavior during emotional perception in domestic dogs. Looking patterns of dogs, as a measure of voluntary attention, were recorded during the viewing of human facial expression photographs. The pupil diameters of dogs were also measured as a physiological index of emotional arousal. In a placebo-controlled within-subjects experimental design, 43 dogs, after having received either oxytocin or placebo (saline) nasal spray treatment, were presented with pictures of unfamiliar male human faces displaying either a happy or an angry expression. We found that, depending on the facial expression, the dogs’ gaze patterns were affected selectively by oxytocin treatment. After receiving oxytocin, dogs fixated less often on the eye regions of angry faces and revisited (glanced back at) more often the eye regions of smiling (happy) faces than after the placebo treatment. Furthermore, following the oxytocin treatment dogs fixated and revisited the eyes of happy faces significantly more often than the eyes of angry faces. The analysis of dogs’ pupil diameters during viewing of human facial expressions indicated that oxytocin may also have a modulatory effect on dogs’ emotional arousal. While subjects’ pupil sizes were significantly larger when viewing angry faces than happy faces in the control (placebo treatment) condition, oxytocin treatment not only eliminated this effect but caused an opposite pupil response. Overall, these findings suggest that nasal oxytocin administration selectively changes the allocation of attention and emotional arousal in domestic dogs. Oxytocin has the potential to decrease vigilance toward threatening social stimuli and increase the salience of positive social stimuli thus making eye gaze of friendly human faces more salient for dogs. Our study provides further support for the role of the oxytocinergic system in the social perception abilities of domestic dogs. We propose that oxytocin modulates fundamental emotional processing in dogs through a mechanism that may facilitate communication between humans and dogs. PMID:29089919
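The pupil-diameter comparison in a within-subjects design like this reduces, at its simplest, to a paired test of condition means per animal. The sketch below shows that reduced form on synthetic per-dog means; the study's actual analyses were more elaborate, and all values here are illustrative.

# Sketch: paired comparison of mean pupil diameter for angry vs. happy faces
# within one treatment condition. Synthetic per-dog means; not the study's analysis.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(4)
n_dogs = 43
pupil_happy = rng.normal(5.0, 0.4, n_dogs)               # mm, illustrative
pupil_angry = pupil_happy + rng.normal(0.15, 0.2, n_dogs)

t, p = ttest_rel(pupil_angry, pupil_happy)
print(f"paired t = {t:.2f}, p = {p:.3f}")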
The effect of face patch microstimulation on perception of faces and objects.
Moeller, Sebastian; Crapse, Trinity; Chang, Le; Tsao, Doris Y
2017-05-01
What is the range of stimuli encoded by face-selective regions of the brain? We asked how electrical microstimulation of face patches in macaque inferotemporal cortex affects perception of faces and objects. We found that microstimulation strongly distorted face percepts and that this effect depended on precise targeting to the center of face patches. While microstimulation had no effect on the percept of many non-face objects, it did affect the percept of some, including non-face objects whose shape is consistent with a face (for example, apples) as well as somewhat facelike abstract images (for example, cartoon houses). Microstimulation even perturbed the percept of certain objects that did not activate the stimulated face patch at all. Overall, these results indicate that representation of facial identity is localized to face patches, but activity in these patches can also affect perception of face-compatible non-face objects, including objects normally represented in other parts of inferotemporal cortex.
Baby schema in human and animal faces induces cuteness perception and gaze allocation in children.
Borgi, Marta; Cogliati-Dezza, Irene; Brelsford, Victoria; Meints, Kerstin; Cirulli, Francesca
2014-01-01
The baby schema concept was originally proposed as a set of infantile traits with high appeal for humans, subsequently shown to elicit caretaking behavior and to affect cuteness perception and attentional processes. However, it is unclear whether the response to the baby schema may be extended to the human-animal bond context. Moreover, questions remain as to whether the cute response is constant and persistent or whether it changes with development. In the present study we parametrically manipulated the baby schema in images of humans, dogs, and cats. We analyzed responses of 3-6 year-old children, using both explicit (i.e., cuteness ratings) and implicit (i.e., eye gaze patterns) measures. By means of eye-tracking, we assessed children's preferential attention to images varying only for the degree of baby schema and explored participants' fixation patterns during a cuteness task. For comparative purposes, cuteness ratings were also obtained in a sample of adults. Overall our results show that the response to an infantile facial configuration emerges early during development. In children, the baby schema affects both cuteness perception and gaze allocation to infantile stimuli and to specific facial features, an effect not simply limited to human faces. In line with previous research, results confirm human positive appraisal toward animals and inform both educational and therapeutic interventions involving pets, helping to minimize risk factors (e.g., dog bites).
Some Supporting Evidence for Accurate Multivariate Perceptions with Chernoff Faces, Project 547.
ERIC Educational Resources Information Center
Wainer, Howard
A scheme that uses features of a cartoon-like human face to represent variables is tested for its ability to depict multivariate data graphically. A factor analysis of Harman's "24 Psychological Tests" was performed and yielded four orthogonal factors. Nose width represented the loading on Factor 1; eye size on Factor 2; curve of mouth…
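The mapping described in the record above (factor loadings driving nose width, eye size, and mouth curvature) is the core of the Chernoff-face technique. The sketch below is a minimal, hypothetical illustration of that idea, not Wainer's actual scheme: the feature assignments, value ranges, and drawing parameters are assumptions chosen for clarity.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Circle, Ellipse

def chernoff_face(ax, values):
    """Draw a minimal Chernoff-style face from four values scaled to [0, 1]."""
    nose_w, eye_s, mouth_c, face_w = values                                   # hypothetical feature assignments
    ax.add_patch(Ellipse((0, 0), 1.2 + face_w, 2.0, fill=False))              # head outline: width from face_w
    for x in (-0.35, 0.35):                                                   # eyes: size from eye_s
        ax.add_patch(Circle((x, 0.35), 0.05 + 0.15 * eye_s, fill=False))
    ax.add_patch(Ellipse((0, 0.0), 0.08 + 0.25 * nose_w, 0.35, fill=False))   # nose: width from nose_w
    x = np.linspace(-0.4, 0.4, 50)                                            # mouth curvature: 0 = frown, 1 = smile
    ax.plot(x, -0.55 + (mouth_c - 0.5) * 4 * (x ** 2 - 0.16), color="k")
    ax.set_xlim(-1.3, 1.3); ax.set_ylim(-1.4, 1.4)
    ax.set_aspect("equal"); ax.axis("off")

# Two hypothetical observations, e.g. factor loadings rescaled to [0, 1].
fig, axes = plt.subplots(1, 2, figsize=(6, 3))
for ax, obs in zip(axes, [(0.2, 0.9, 0.9, 0.3), (0.9, 0.2, 0.1, 0.8)]):
    chernoff_face(ax, obs)
plt.show()
```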
ERIC Educational Resources Information Center
Hills, Peter J.; Holland, Andrew M.; Lewis, Michael B.
2010-01-01
Adults can be adapted to a particular facial distortion in which both eyes are shifted symmetrically (Robbins, R., McKone, E., & Edwards, M. (2007). "Aftereffects for face attributes with different natural variability: Adapter position effects and neural models." "Journal of Experimental Psychology: Human Perception and Performance, 33," 570-592),…
Lazar, Steven M; Evans, David W; Myers, Scott M; Moreno-De Luca, Andres; Moore, Gregory J
2014-04-15
Social cognition is an important aspect of social behavior in humans. Social cognitive deficits are associated with neurodevelopmental and neuropsychiatric disorders. In this study we examine the neural substrates of social cognition and face processing in a group of healthy young adults. Fifty-seven undergraduates completed a battery of social cognition tasks and were assessed with electroencephalography (EEG) during a face-perception task. A subset (N=22) was administered a face-perception task during functional magnetic resonance imaging. Variance in the N170 EEG component was predicted by social attribution performance and by a quantitative measure of empathy. Neurally, face processing was more bilateral in females than in males. Variance in fMRI voxel count in the face-sensitive fusiform gyrus was predicted by quantitative measures of social behavior, including the Social Responsiveness Scale (SRS) and the Empathizing Quotient. When measured as a quantitative trait, social behaviors in typical and pathological populations share common neural pathways. The results highlight the importance of viewing neurodevelopmental and neuropsychiatric disorders as spectrum phenomena that may be informed by studies of the normal distribution of relevant traits in the general population. Copyright © 2014 Elsevier B.V. All rights reserved.
Poirier, Frédéric J A M; Faubert, Jocelyn
2012-06-22
Facial expressions are important for human communication. Face perception studies often measure the impact of major degradation (e.g., noise, inversion, short presentations, masking, alterations) on natural expression recognition performance. Here, we introduce a novel face perception technique using rich and undegraded stimuli. Participants modified faces to create optimal representations of given expressions. Using sliders, participants adjusted 53 face components (37 of them dynamic), including head, eye, eyebrow, mouth, and nose shape and position. Data were collected from six participants across 10 conditions (six emotions + pain + gender + neutral). Some expressions had unique features (e.g., frown for anger, upward-curved mouth for happiness), whereas others had shared features (e.g., open eyes and mouth for surprise and fear). Happiness was different from other emotions. Surprise was different from other emotions except fear. Weighted-sum morphing provided acceptable gender-neutral and dynamic stimuli. Many features were correlated, including (1) head size with internal feature sizes as related to gender, (2) internal feature scaling, and (3) eyebrow height and eye openness as related to surprise and fear. These findings demonstrate the method's validity for measuring optimal facial expressions, which we argue provides a more direct measure of their internal representations.
Putting the face in context: Body expressions impact facial emotion processing in human infants.
Rajhans, Purva; Jessen, Sarah; Missana, Manuela; Grossmann, Tobias
2016-06-01
Body expressions exert strong contextual effects on facial emotion perception in adults. Specifically, conflicting body cues hamper the recognition of emotion from faces, as evident at both the behavioral and the neural level. We examined the developmental origins of the neural processes involved in emotion perception across body and face in 8-month-old infants by measuring event-related brain potentials (ERPs). We primed infants with body postures (fearful, happy) that were followed by either congruent or incongruent facial expressions. Our results revealed that body expressions impact facial emotion processing and that incongruent body cues impair the neural discrimination of emotional facial expressions. Priming effects were associated with attentional and recognition memory processes, as reflected in a modulation of the Nc and Pc evoked at anterior electrodes. These findings demonstrate that 8-month-old infants possess neural mechanisms that allow for the integration of emotion across body and face, providing evidence for the early developmental emergence of context-sensitive facial emotion perception. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Emotional modulation of visual remapping of touch.
Cardini, Flavia; Bertini, Caterina; Serino, Andrea; Ladavas, Elisabetta
2012-10-01
The perception of tactile stimuli on the face is modulated if subjects concurrently observe a face being touched; this effect is termed "visual remapping of touch" or the VRT effect. Given the high social value of this mechanism, we investigated whether it might be modulated by specific key information processed in face-to-face interactions: facial emotional expression. In two separate experiments, participants received tactile stimuli, near the perceptual threshold, either on their right, left, or both cheeks. Concurrently, they watched several blocks of movies depicting a face with a neutral, happy, or fearful expression that was touched or just approached by human fingers (Experiment 1). Participants were asked to distinguish between unilateral and bilateral felt tactile stimulation. Tactile perception was enhanced when viewing touch toward a fearful face compared with viewing touch toward the other two expressions. In order to test whether this result can be generalized to other negative emotions or whether it is a fear-specific effect, we ran a second experiment, where participants watched movies of faces (touched or approached by fingers) with either a fearful or an angry expression (Experiment 2). In line with the first experiment, tactile perception was enhanced when subjects viewed touch toward a fearful face and not toward an angry face. Results of the present experiments are interpreted in light of the different mechanisms underlying the recognition of different emotions, with a specific involvement of the somatosensory system when viewing a fearful expression and a resulting fear-specific modulation of the VRT effect.
Understanding the symptoms of schizophrenia using visual scan paths.
Phillips, M L; David, A S
1994-11-01
This paper highlights the role of the visual scan path as a physiological marker of information processing, while investigating positive symptomatology in schizophrenia. The current literature is reviewed using computer search facilities (Medline). Schizophrenics either scan or stare extensively, the latter related to negative symptoms. Schizophrenics particularly scan when viewing human faces. Scan paths in schizophrenics are important when viewing meaningful stimuli such as human faces, because of the relationship between abnormal perception of stimuli and symptomatology in these subjects.
From Caregivers to Peers: Puberty Shapes Human Face Perception.
Picci, Giorgia; Scherf, K Suzanne
2016-11-01
Puberty prepares mammals to sexually reproduce during adolescence. It is also hypothesized to invoke a social metamorphosis that prepares adolescents to take on adult social roles. We provide the first evidence to support this hypothesis in humans and show that pubertal development retunes the face-processing system from a caregiver bias to a peer bias. Prior to puberty, children exhibit enhanced recognition for adult female faces. With puberty, superior recognition emerges for peer faces that match one's pubertal status. As puberty progresses, so does the peer recognition bias. Adolescents become better at recognizing faces with a pubertal status similar to their own. These findings reconceptualize the adolescent "dip" in face recognition by showing that it is a recalibration of the face-processing system away from caregivers toward peers. Thus, in addition to preparing the physical body for sexual reproduction, puberty shapes the perceptual system for processing the social world in new ways. © The Author(s) 2016.
Framing faces: Frame alignment impacts holistic face perception.
Curby, Kim M; Entenman, Robert
2016-11-01
Traditional accounts of face perception emphasise the importance of the prototypical configuration of features within faces. However, here we probe influences of more general perceptual grouping mechanisms on holistic face perception. Participants made part-matching judgments about composite faces presented in intact external oval frames or frames made from misaligned oval parts. This manipulation served to disrupt basic perceptual grouping cues that facilitate the grouping of the two face halves together. This manipulation also produced an external face contour like that in the standard misaligned condition used within the classic composite face task. Notably, by introducing a discontinuity in the external contour, grouping of the face halves into a cohesive unit was discouraged, but face configuration was preserved. Conditions where both the face parts and the frames were misaligned together, as in the typical composite task paradigm, or where just the internal face parts were misaligned, were also included. Disrupting only the face frame disrupted holistic face perception as much as disrupting both the frame and the face configuration. However, misaligned face parts presented in aligned frames also incurred a cost to holistic perception. These findings provide support for the contribution of general-purpose perceptual grouping mechanisms to holistic face perception and are presented and discussed in the context of an enhanced object-based selection account of holistic perception.
Face and emotion expression processing and the serotonin transporter polymorphism 5-HTTLPR/rs25531.
Hildebrandt, A; Kiy, A; Reuter, M; Sommer, W; Wilhelm, O
2016-06-01
Face cognition, including face identity and facial expression processing, is a crucial component of socio-emotional abilities, characterizing humans as highly developed social beings. However, for these trait domains molecular genetic studies investigating gene-behavior associations based on well-founded phenotype definitions are still rare. We examined the relationship between the 5-HTTLPR/rs25531 polymorphisms (related to serotonin reuptake) and the ability to perceive and recognize identity and emotional expressions in human faces. To this end, we conducted structural equation modeling on data from 230 young adults, obtained by using a comprehensive, multivariate task battery with maximal effort tasks. By additionally modeling fluid intelligence and immediate and delayed memory factors, we aimed to address the discriminant relationships of the 5-HTTLPR/rs25531 polymorphisms with socio-emotional abilities. We found a robust association between the 5-HTTLPR/rs25531 polymorphism and facial emotion perception. Carriers of two long (L) alleles outperformed carriers of one or two S alleles. Weaker associations were present for face identity perception and memory for emotional facial expressions. There was no association between the 5-HTTLPR/rs25531 polymorphism and non-social abilities, demonstrating discriminant validity of the relationships. We discuss the implications and possible neural mechanisms underlying these novel findings. © 2016 John Wiley & Sons Ltd and International Behavioural and Neural Genetics Society.
Human milk sharing practices in the U.S.
Palmquist, Aunchalee E L; Doehler, Kirsten
2016-04-01
The primary objective of this study is to describe human milk sharing practices in the U.S. Specifically, we examine milk sharing social networks, donor compensation, the prevalence of anonymous milk sharing interactions, recipients' concerns about specific milk sharing risks, and lay screening behaviors. Data on human milk sharing practices were collected via an online survey between September 2013 and March 2014. Chi-square analyses were used to test the association between risk perception and screening practices. A total of 867 respondents (661 donors, 206 recipients) were included in the analyses. Most (96.1%) reported sharing milk face-to-face. Only 10% of respondents reported giving or receiving milk through a non-profit human milk bank. There were no reports of anonymous purchases of human milk. A small proportion of recipients (4.0%) reported that their infant had a serious medical condition. Screening of prospective donors was common (90.7%) but varied with social relationship and familiarity. Likewise, concern about specific milk sharing risks was varied, and risk perception was significantly associated (P-values = 0.01 or less) with donor screening for all risk variables except diet. Understanding lay perceptions of milk sharing risk and the risk reduction strategies that parents are using is an essential first step in developing public health interventions and clinical practices that promote infant safety. © 2015 The Authors. Maternal & Child Nutrition published by John Wiley & Sons Ltd.
Rhodes, Gillian; Jeffery, Linda; Boeing, Alexandra; Calder, Andrew J
2013-04-01
Despite the discovery of body-selective neural areas in occipitotemporal cortex, little is known about how bodies are visually coded. We used perceptual adaptation to determine how body identity is coded. Brief exposure to a body (e.g., anti-Rose) biased perception toward an identity with opposite properties (Rose). Moreover, the size of this aftereffect increased with adaptor extremity, as predicted by norm-based, opponent coding of body identity. A size change between adapt and test bodies minimized the effects of low-level, retinotopic adaptation. These results demonstrate that body identity, like face identity, is opponent coded in higher-level vision. More generally, they show that a norm-based multidimensional framework, which is well established for face perception, may provide a powerful framework for understanding body perception.
The “Visual Shock” of Francis Bacon: an essay in neuroesthetics
Zeki, Semir; Ishizu, Tomohiro
2013-01-01
In this paper we discuss the work of Francis Bacon in the context of his declared aim of giving a “visual shock.” We explore what this means in terms of brain activity and what insights into the brain's visual perceptive system his work gives. We do so especially with reference to the representation of faces and bodies in the human visual brain. We discuss the evidence that shows that both these categories of stimuli have a very privileged status in visual perception, compared to the perception of other stimuli, including man-made artifacts such as houses, chairs, and cars. We show that viewing stimuli that depart significantly from a normal representation of faces and bodies entails a significant difference in the pattern of brain activation. We argue that Bacon succeeded in delivering his “visual shock” because he subverted the normal neural representation of faces and bodies, without at the same time subverting the representation of man-made artifacts. PMID:24339812
The "Visual Shock" of Francis Bacon: an essay in neuroesthetics.
Zeki, Semir; Ishizu, Tomohiro
2013-01-01
In this paper we discuss the work of Francis Bacon in the context of his declared aim of giving a "visual shock."We explore what this means in terms of brain activity and what insights into the brain's visual perceptive system his work gives. We do so especially with reference to the representation of faces and bodies in the human visual brain. We discuss the evidence that shows that both these categories of stimuli have a very privileged status in visual perception, compared to the perception of other stimuli, including man-made artifacts such as houses, chairs, and cars. We show that viewing stimuli that depart significantly from a normal representation of faces and bodies entails a significant difference in the pattern of brain activation. We argue that Bacon succeeded in delivering his "visual shock" because he subverted the normal neural representation of faces and bodies, without at the same time subverting the representation of man-made artifacts.
Holistic face perception is modulated by experience-dependent perceptual grouping.
Curby, Kim M; Entenman, Robert J; Fleming, Justin T
2016-07-01
What role do general-purpose, experience-sensitive perceptual mechanisms play in producing characteristic features of face perception? We previously demonstrated that different-colored, misaligned framing backgrounds, designed to disrupt perceptual grouping of face parts appearing upon them, disrupt holistic face perception. In the current experiments, a similar part-judgment task with composite faces was performed: face parts appeared in either misaligned, different-colored rectangles or aligned, same-colored rectangles. To investigate whether experience can shape impacts of perceptual grouping on holistic face perception, a pre-task fostered the perception of either (a) the misaligned, differently colored rectangle frames as parts of a single, multicolored polygon or (b) the aligned, same-colored rectangle frames as a single square shape. Faces appearing in the misaligned, differently colored rectangles were processed more holistically by those in the polygon-, compared with the square-, pre-task group. Holistic effects for faces appearing in aligned, same-colored rectangles showed the opposite pattern. Experiment 2, which included a pre-task condition fostering the perception of the aligned, same-colored frames as pairs of independent rectangles, provided converging evidence that experience can modulate impacts of perceptual grouping on holistic face perception. These results are surprising given the proposed impenetrability of holistic face perception and provide insights into the elusive mechanisms underlying holistic perception.
Cheetham, Marcus; Suter, Pascal; Jäncke, Lutz
2011-01-01
The uncanny valley hypothesis (Mori, 1970) predicts differential experience of negative and positive affect as a function of human likeness. Affective experience of humanlike robots and computer-generated characters (avatars) dominates “uncanny” research, but findings are inconsistent. Importantly, it is unknown how objects are actually perceived along the hypothesis’ dimension of human likeness (DOH), defined in terms of human physical similarity. To examine whether the DOH can also be defined in terms of effects of categorical perception (CP), stimuli from morph continua with controlled differences in physical human likeness between avatar and human faces as endpoints were presented. Two behavioral studies found a sharp category boundary along the DOH and enhanced visual discrimination (i.e., CP) of fine-grained differences between pairs of faces at the category boundary. Discrimination was better for face pairs presenting category change in the human-to-avatar than avatar-to-human direction along the DOH. To investigate brain representation of physical change and category change along the DOH, an event-related functional magnetic resonance imaging study used the same stimuli in a pair-repetition priming paradigm. Bilateral mid-fusiform areas and a different right mid-fusiform area were sensitive to physical change within the human and avatar categories, respectively, whereas entirely different regions were sensitive to the human-to-avatar (caudate head, putamen, thalamus, red nucleus) and avatar-to-human (hippocampus, amygdala, mid-insula) direction of category change. These findings show that Mori’s DOH definition does not reflect subjective perception of human likeness and suggest that future “uncanny” studies consider CP and the DOH’s category structure in guiding experience of non-human objects. PMID:22131970
Global shape information increases but color information decreases the composite face effect.
Retter, Talia L; Rossion, Bruno
2015-01-01
The separation of visual shape and surface information may be useful for understanding holistic face perception--that is, the perception of a face as a single unit (Jiang, Blanz, & Rossion, 2011, Visual Cognition, 19, 1003-1034). A widely used measure of holistic face perception is the composite face effect (CFE), in which identical top face halves appear different when aligned with bottom face halves from different identities. In the present study the influences of global face shape (i.e., the contour of the face) and color information on the CFE are investigated, with the hypothesis that global face shape supports but color impairs holistic face perception as measured in this paradigm. In experiment 1 the CFE is significantly larger when face stimuli possess natural global shape information than when they are cropped to a generic (i.e., oval) global shape; this effect is not found when the stimuli are presented inverted. In experiment 2 the CFE is significantly smaller when face stimuli are presented with color information than when presented in grayscale. These findings indicate that grayscale stimuli maintaining natural global face shape information provide the most suitable measure of holistic face perception in the behavioral composite face paradigm. More generally, they show that reducing different types of information diagnostic for individual face perception can have opposite effects on the CFE, illustrating the functional dissociation between shape and surface information in face perception.
Facial aesthetics: babies prefer attractiveness to symmetry.
Samuels, Curtis A; Butterworth, George; Roberts, Tony; Graupner, Lida; Hole, Graham
2013-01-01
The visual preferences of human infants for faces that varied in their attractiveness and in their symmetry about the midline were explored. The aim was to establish whether infants' visual preference for attractive faces may be mediated by the vertical symmetry of the face. Chimeric faces, made from photographs of attractive and unattractive female faces, were produced by computer graphics. Babies looked longer at normal and at chimeric attractive faces than at normal and at chimeric unattractive faces. There were no developmental differences between the younger and older infants: all preferred to look at the attractive faces. Infants as young as 4 months showed similarity with adults in the 'aesthetic perception' of attractiveness and this preference was not based on the vertical symmetry of the face.
Chen, Yue; Ekstrom, Tor
2016-05-01
Face perception impairment in schizophrenia has been demonstrated, mostly through experimental studies. How this laboratory-defined behavioral impairment is associated with patients' perceptual experience of various faces in everyday life is however unclear. This question is important because a first-person account of face perception has direct consequences on social functioning of patients. In this study, we adapted and administered a self-reported questionnaire on narrative perceptual experience of faces along with psychophysical assessments of face perception in schizophrenia. The self-reported questionnaire includes six rating items of face-related functioning in everyday life, providing a subjective measure of face perception. The psychophysical assessment determines perceptual threshold for discriminating different facial identities, providing an objective measure of face perception. Compared to controls (n = 25), patients (n = 35) showed significantly lower scores (worse performance) in the subjective assessment and significantly higher thresholds (worse performance) in the objective assessment. The subjective and objective face perception assessments were moderately correlated in controls but not in patients. The subjective face perception assessments were significantly correlated with measurements of a social cognitive ability (Theory of Mind), again in controls but not in patients. These results suggest that in schizophrenia the quality of face-related functioning in everyday life is degraded and the role that basic face discrimination capacity plays in face-related everyday functioning is disrupted. Copyright © 2016 Elsevier Ltd. All rights reserved.
Seeing Jesus in toast: Neural and behavioral correlates of face pareidolia
Liu, Jiangang; Li, Jun; Feng, Lu; Li, Ling; Tian, Jie; Lee, Kang
2014-01-01
Face pareidolia is the illusory perception of non-existent faces. The present study, for the first time, contrasted behavioral and neural responses of face pareidolia with those of letter pareidolia to explore face-specific behavioral and neural responses during illusory face processing. Participants were shown pure-noise images but were led to believe that 50% of them contained either faces or letters; they reported seeing faces or letters illusorily 34% and 38% of the time, respectively. The right fusiform face area (rFFA) showed a specific response when participants “saw” faces as opposed to letters in the pure-noise images. Behavioral responses during face pareidolia produced a classification image that resembled a face, whereas those during letter pareidolia produced a classification image that was letter-like. Further, the extent to which such behavioral classification images resembled faces was directly related to the level of face-specific activations in the right FFA. This finding suggests that the right FFA plays a specific role not only in processing of real faces but also in illusory face perception, perhaps serving to facilitate the interaction between bottom-up information from the primary visual cortex and top-down signals from the prefrontal cortex (PFC). Whole brain analyses revealed a network specialized in face pareidolia, including both the frontal and occipito-temporal regions. Our findings suggest that human face processing has a strong top-down component whereby sensory input with even the slightest suggestion of a face can result in the interpretation of a face. PMID:24583223
ERIC Educational Resources Information Center
Gross, Thomas F.
2005-01-01
Global information processing and perception of facial age and emotional expression was studied in children with autism, language disorders, mental retardation, and a clinical control group. Children were given a global-local task and asked to recognize age and emotion in human and canine faces. Children with autism made fewer global responses and…
Adaptation effects to attractiveness of face photographs and art portraits are domain-specific
Hayn-Leichsenring, Gregor U.; Kloth, Nadine; Schweinberger, Stefan R.; Redies, Christoph
2013-01-01
We studied the neural coding of facial attractiveness by investigating effects of adaptation to attractive and unattractive human faces on the perceived attractiveness of veridical human face pictures (Experiment 1) and art portraits (Experiment 2). Experiment 1 revealed a clear pattern of contrastive aftereffects. Relative to a pre-adaptation baseline, the perceived attractiveness of faces was increased after adaptation to unattractive faces, and was decreased after adaptation to attractive faces. Experiment 2 revealed similar aftereffects when art portraits rather than face photographs were used as adaptors and test stimuli, suggesting that effects of adaptation to attractiveness are not restricted to facial photographs. Additionally, we found similar aftereffects in art portraits for beauty, another aesthetic feature that, unlike attractiveness, relates to the properties of the image (rather than to the face displayed). Importantly, Experiment 3 showed that aftereffects were abolished when adaptors were art portraits and face photographs were test stimuli. These results suggest that adaptation to facial attractiveness elicits aftereffects in the perception of subsequently presented faces, for both face photographs and art portraits, and that these effects do not cross image domains. PMID:24349690
Near-optimal integration of facial form and motion.
Dobs, Katharina; Ma, Wei Ji; Reddy, Leila
2017-09-08
Human perception consists of the continuous integration of sensory cues pertaining to the same object. While it has been fairly well established that humans integrate low-level cues optimally, weighting them in proportion to their relative reliability, the integration processes underlying high-level perception are much less understood. Here we investigate cue integration in a complex high-level perceptual system, the human face processing system. We tested cue integration of facial form and motion in an identity categorization task and found that an optimal model could successfully predict subjects' identity choices. Our results suggest that optimal cue integration may be implemented across different levels of the visual processing hierarchy.
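For reference, the "optimal" strategy referred to above is usually formalized as reliability-weighted averaging of independent Gaussian cue estimates. The sketch below shows that standard rule only; it is not the authors' fitted categorization model, and the form/motion means and standard deviations are hypothetical numbers chosen for illustration.

```python
import numpy as np

def integrate_cues(mu_form, sigma_form, mu_motion, sigma_motion):
    """Reliability-weighted (maximum-likelihood) combination of two Gaussian cue estimates.
    Reliability is inverse variance; the optimal estimate is the reliability-weighted mean."""
    r_form, r_motion = 1.0 / sigma_form ** 2, 1.0 / sigma_motion ** 2
    w_form = r_form / (r_form + r_motion)
    mu_combined = w_form * mu_form + (1.0 - w_form) * mu_motion
    sigma_combined = np.sqrt(1.0 / (r_form + r_motion))   # never less reliable than the better single cue
    return mu_combined, sigma_combined

# Hypothetical single-cue estimates on an identity morph axis (0 = identity A, 1 = identity B).
print(integrate_cues(mu_form=0.62, sigma_form=0.10, mu_motion=0.45, sigma_motion=0.20))
# The combined estimate lies closer to the more reliable form cue, with reduced uncertainty.
```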
Subtle perceptions of male sexual orientation influence occupational opportunities.
Rule, Nicholas O; Bjornsdottir, R Thora; Tskhay, Konstantin O; Ambady, Nalini
2016-12-01
Theories linking the literatures on stereotyping and human resource management have proposed that individuals may enjoy greater success obtaining jobs congruent with stereotypes about their social categories or traits. Here, we explored such effects for a detectable, but not obvious, social group distinction: male sexual orientation. Bridging previous work on prejudice and occupational success with that on social perception, we found that perceivers rated gay and straight men as more suited to professions consistent with stereotypes about their groups (nurses, pediatricians, and English teachers vs. engineers, managers, surgeons, and math teachers) from mere photos of their faces. Notably, distinct evaluations of the gay and straight men emerged based on perceptions of their faces with no explicit indication of sexual orientation. Neither perceivers' expertise with hiring decisions nor diagnostic information about the targets eliminated these biases, but encouraging fair decisions did contribute to partly ameliorating the differences. Mediation analysis further showed that perceptions of the targets' sexual orientations and facial affect accounted for these effects. Individuals may therefore infer characteristics about individuals' group memberships from their faces and use this information in a way that meaningfully influences evaluations of their suitability for particular jobs. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Jamal, Wasifa; Das, Saptarshi; Maharatna, Koushik; Pan, Indranil; Kuyucu, Doga
2015-09-01
Degree of phase synchronization between different Electroencephalogram (EEG) channels is known to be the manifestation of the underlying mechanism of information coupling between different brain regions. In this paper, we apply a continuous wavelet transform (CWT) based analysis technique on EEG data, captured during face perception tasks, to explore the temporal evolution of phase synchronization, from the onset of a stimulus. Our explorations show that there exists a small set (typically 3-5) of unique synchronized patterns or synchrostates, each of which is stable on the order of milliseconds. Particularly, in the beta (β) band, which has been reported to be associated with visual processing tasks, the number of such stable states has consistently been found to be three. During processing of the stimulus, the switching between these states occurs abruptly, but the switching characteristic follows a well-behaved and repeatable sequence. This is observed in a single-subject analysis as well as in a multiple-subject group analysis in adults during face perception. We also show that although these patterns remain topographically similar for the general category of face perception tasks, the sequence of their occurrence and their temporal stability vary markedly between different face perception scenarios (stimuli), pointing toward different, stimulus-specific dynamical characteristics of information processing. Subsequently, we translated these stable states into complex brain networks and derived informative network measures for characterizing the degree of segregated processing and information integration in those synchrostates, leading to a new methodology for characterizing information processing in the human brain. The proposed methodology of modeling functional brain connectivity through synchrostates may be viewed as a new way of quantitatively characterizing the cognitive ability of the subject, the stimuli, and the capacity for information integration/segregation.
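As a rough illustration of the phase-synchronization measure underlying this kind of analysis, the sketch below computes a phase-locking value between two channels at a single beta-band frequency using a complex Morlet wavelet, one common way to realize a wavelet-based phase estimate. It is not the authors' pipeline (which further clusters topographic phase-difference patterns into synchrostates), and the sampling rate, center frequency, wavelet width, and toy signals are all assumptions.

```python
import numpy as np

def morlet_phase(x, fs, f0, n_cycles=6):
    """Instantaneous phase of x at frequency f0 via convolution with a complex Morlet wavelet."""
    sigma_t = n_cycles / (2 * np.pi * f0)                      # temporal width of the wavelet
    t = np.arange(-3 * sigma_t, 3 * sigma_t, 1.0 / fs)
    wavelet = np.exp(2j * np.pi * f0 * t) * np.exp(-t ** 2 / (2 * sigma_t ** 2))
    return np.angle(np.convolve(x, wavelet, mode="same"))

def phase_locking_value(x, y, fs, f0):
    """PLV between two channels at f0: 1 = perfectly phase-locked, 0 = no consistent phase relation."""
    dphi = morlet_phase(x, fs, f0) - morlet_phase(y, fs, f0)
    return np.abs(np.mean(np.exp(1j * dphi)))

# Toy example: two noisy 20 Hz (beta-band) signals with a fixed phase lag.
rng = np.random.default_rng(1)
fs, f0 = 250.0, 20.0
t = np.arange(0, 2, 1.0 / fs)
ch1 = np.sin(2 * np.pi * f0 * t) + 0.5 * rng.standard_normal(t.size)
ch2 = np.sin(2 * np.pi * f0 * t + 0.8) + 0.5 * rng.standard_normal(t.size)
print(phase_locking_value(ch1, ch2, fs, f0))                   # high (near 1) for phase-locked signals
```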
Second to fourth digit ratio and face shape
Fink, Bernhard; Grammer, Karl; Mitteroecker, Philipp; Gunz, Philipp; Schaefer, Katrin; Bookstein, Fred L; Manning, John T
2005-01-01
The average human male face differs from the average female face in size and shape of the jaws, cheek-bones, lips, eyes and nose. It is possible that this dimorphism is determined by sex steroids such as testosterone (T) and oestrogen (E), and several studies on the perception of such characteristics have been based on this assumption, but those studies focussed mainly on the relationship of male faces with circulating hormone levels; the corresponding biology of the female face remains mainly speculative. This paper is concerned with the relative importance of prenatal T and E levels (assessed via the 2D : 4D finger length ratio, a proxy for the ratio of T/E) and sex in the determination of facial form as characterized by 64 landmark points on facial photographs of 106 Austrians of college age. We found that (i) prenatal sex steroid ratios (in terms of 2D : 4D) and actual chromosomal sex dimorphism operate differently on faces, (ii) 2D : 4D affects male and female face shape by similar patterns, but (iii) is three times more intense in men than in women. There was no evidence that these effects were confounded by allometry or facial asymmetry. Our results suggest that studies on the perception of facial characteristics need to consider differential effects of prenatal hormone exposure and actual chromosomal gender in order to understand how characteristics have come to be rated ‘masculine’ or ‘feminine’ and the consequences of these perceptions in terms of mate preferences. PMID:16191608
Lewis, Amelia K; Porter, Melanie A; Williams, Tracey A; Bzishvili, Samantha; North, Kathryn N; Payne, Jonathan M
2017-05-01
This study aimed to investigate face scan paths and face perception abilities in children with Neurofibromatosis Type 1 (NF1) and how these might relate to emotion recognition abilities in this population. The authors investigated facial emotion recognition, face scan paths, and face perception in 29 children with NF1 compared to 29 chronological age-matched typically developing controls. Correlations between facial emotion recognition, face scan paths, and face perception in children with NF1 were examined. Children with NF1 displayed significantly poorer recognition of fearful expressions compared to controls, as well as a nonsignificant trend toward poorer recognition of anger. Although there was no significant difference between groups in time spent viewing individual core facial features (eyes, nose, mouth, and nonfeature regions), children with NF1 spent significantly less time than controls viewing the face as a whole. Children with NF1 also displayed significantly poorer face perception abilities than typically developing controls. Facial emotion recognition deficits were not significantly associated with aberrant face scan paths or face perception abilities in the NF1 group. These results suggest that impairments in the perception, identification, and interpretation of information from faces are important aspects of the social-cognitive phenotype of NF1. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Matsumiya, Kazumichi
2013-10-01
Current views on face perception assume that the visual system receives only visual facial signals. However, I show that the visual perception of faces is systematically biased by adaptation to a haptically explored face. Recently, face aftereffects (FAEs; the altered perception of faces after adaptation to a face) have been demonstrated not only in visual perception but also in haptic perception; therefore, I combined the two FAEs to examine whether the visual system receives face-related signals from the haptic modality. I found that adaptation to a haptically explored facial expression on a face mask produced a visual FAE for facial expression. This cross-modal FAE was not due to explicitly imaging a face, response bias, or adaptation to local features. Furthermore, FAEs transferred from vision to haptics. These results indicate that visual face processing depends on substrates adapted by haptic faces, which suggests that face processing relies on shared representation underlying cross-modal interactions.
Leube, Dirk T; Yoon, Hyo Woon; Rapp, Alexander; Erb, Michael; Grodd, Wolfgang; Bartels, Mathias; Kircher, Tilo T J
2003-05-22
Perception of upright faces relies on configural processing. Therefore, recognition of inverted faces is impaired compared to upright faces. In a functional magnetic resonance imaging experiment we investigated the neural correlate of a face inversion task. Thirteen healthy subjects were presented with an equal number of upright and inverted faces, alternating with a low-level baseline consisting of an upright and an inverted picture of an abstract symbol. Brain activation was calculated for upright minus inverted faces. For this differential contrast, we found a signal change in the right superior temporal sulcus and right insula. Configural properties are processed in a network comprising right superior temporal and insular cortex.
Face perception in women with Turner syndrome and its underlying factors.
Anaki, David; Zadikov Mor, Tal; Gepstein, Vardit; Hochberg, Ze'ev
2016-09-01
Turner syndrome (TS) is a chromosomal condition that affects development in females. It is characterized by short stature, ovarian failure and other congenital malformations, due to a partial or complete absence of one of the two X chromosomes. Women with TS frequently suffer from various physical and hormonal dysfunctions, along with impairments in visual-spatial processing and difficulties in social cognition. Previous research has also shown difficulties in face and emotion perception. In the current study we examined two questions: first, whether women with TS, who are impaired in face perception, also suffer from deficits in face-specific processes; second, whether these face impairments in TS are related to the visual-spatial perceptual dysfunctions exhibited by TS individuals, or to impaired social cognition skills. Twenty-six women with TS and 26 control participants were tested on various cognitive and psychological tests to assess visual-spatial perception, face and facial expression perception, and social cognition skills. Results show that women with TS were less accurate in face perception and facial expression processing, yet they exhibited normal face-specific processes (configural and holistic processing). They also showed difficulties in spatial perception and social cognition capacities. Additional analyses revealed that their face perception impairments were related to their deficits in visual-spatial processing. Thus, our results do not support the claim that the impairments in face processing observed in TS are related to difficulties in social cognition. Rather, our data point to the possibility that face perception difficulties in TS stem from visual-spatial impairments and may not be specific to faces. Copyright © 2016 Elsevier Ltd. All rights reserved.
Monkeys and Humans Share a Common Computation for Face/Voice Integration
Chandrasekaran, Chandramouli; Lemus, Luis; Trubanova, Andrea; Gondan, Matthias; Ghazanfar, Asif A.
2011-01-01
Speech production involves the movement of the mouth and other regions of the face resulting in visual motion cues. These visual cues enhance intelligibility and detection of auditory speech. As such, face-to-face speech is fundamentally a multisensory phenomenon. If speech is fundamentally multisensory, it should be reflected in the evolution of vocal communication: similar behavioral effects should be observed in other primates. Old World monkeys share with humans vocal production biomechanics and communicate face-to-face with vocalizations. It is unknown, however, if they, too, combine faces and voices to enhance their perception of vocalizations. We show that they do: monkeys combine faces and voices in noisy environments to enhance their detection of vocalizations. Their behavior parallels that of humans performing an identical task. We explored what common computational mechanism(s) could explain the pattern of results we observed across species. Standard explanations or models such as the principle of inverse effectiveness and a “race” model failed to account for their behavior patterns. Conversely, a “superposition model”, positing the linear summation of activity patterns in response to visual and auditory components of vocalizations, served as a straightforward but powerful explanatory mechanism for the observed behaviors in both species. As such, it represents a putative homologous mechanism for integrating faces and voices across primates. PMID:21998576
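To make the contrast between the two candidate mechanisms concrete, the sketch below simulates simple noisy evidence accumulators for the unisensory channels and compares a race rule (whichever channel finishes first determines the detection time) with a superposition rule (unisensory drifts sum linearly before reaching threshold). This is a generic illustration under assumed drift rates, noise level, and threshold, not the authors' fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

def detection_time(drift, n_trials=5000, noise=1.0, threshold=30.0, max_t=600):
    """First-passage time (in time steps) of a noisy accumulator with constant drift per step."""
    steps = drift + noise * rng.standard_normal((n_trials, max_t))
    crossed = np.cumsum(steps, axis=1) >= threshold
    rt = np.argmax(crossed, axis=1).astype(float)      # index of first threshold crossing
    rt[~crossed.any(axis=1)] = np.nan                  # trials that never reach threshold
    return rt

drift_face, drift_voice = 0.15, 0.20                   # hypothetical unisensory drift rates
rt_face = detection_time(drift_face)
rt_voice = detection_time(drift_voice)

rt_race = np.fmin(rt_face, rt_voice)                   # race model: faster channel wins
rt_super = detection_time(drift_face + drift_voice)    # superposition: unisensory drifts sum linearly

print("face only    :", np.nanmean(rt_face))
print("voice only   :", np.nanmean(rt_voice))
print("race         :", np.nanmean(rt_race))
print("superposition:", np.nanmean(rt_super))          # predicts the largest multisensory speed-up
```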
Heenan, Adam; Troje, Nikolaus F
2014-01-01
Biological motion stimuli, such as orthographically projected stick figure walkers, are ambiguous about their orientation in depth. The projection of a stick figure walker oriented towards the viewer, therefore, is the same as its projection when oriented away. Even though such figures are depth-ambiguous, however, observers tend to interpret them as facing towards them more often than facing away. Some have speculated that this facing-the-viewer bias may exist for sociobiological reasons: Mistaking another human as retreating when they are actually approaching could have more severe consequences than the opposite error. Implied in this hypothesis is that the facing-towards percept of biological motion stimuli is potentially more threatening. Measures of anxiety and the facing-the-viewer bias should therefore be related, as researchers have consistently found that anxious individuals display an attentional bias towards more threatening stimuli. The goal of this study was to assess whether physical exercise (Experiment 1) or an anxiety induction/reduction task (Experiment 2) would significantly affect facing-the-viewer biases. We hypothesized that both physical exercise and progressive muscle relaxation would decrease facing-the-viewer biases for full stick figure walkers, but not for bottom- or top-half-only human stimuli, as these carry less sociobiological relevance. On the other hand, we expected that the anxiety induction task (Experiment 2) would increase facing-the-viewer biases for full stick figure walkers only. In both experiments, participants completed anxiety questionnaires, exercised on a treadmill (Experiment 1) or performed an anxiety induction/reduction task (Experiment 2), and then immediately completed a perceptual task that allowed us to assess their facing-the-viewer bias. As hypothesized, we found that physical exercise and progressive muscle relaxation reduced facing-the-viewer biases for full stick figure walkers only. Our results provide further support that the facing-the-viewer bias for biological motion stimuli is related to the sociobiological relevance of such stimuli.
The development of emotion perception in face and voice during infancy.
Grossmann, Tobias
2010-01-01
Interacting with others by reading their emotional expressions is an essential social skill in humans. How this ability develops during infancy and what brain processes underpin infants' perception of emotion in different modalities are the questions dealt with in this paper. Literature review. The first part provides a systematic review of behavioral findings on infants' developing emotion-reading abilities. The second part presents a set of new electrophysiological studies that provide insights into the brain processes underlying infants' developing abilities. Throughout, evidence from unimodal (face or voice) and multimodal (face and voice) processing of emotion is considered. The implications of the reviewed findings for our understanding of developmental models of emotion processing are discussed. The reviewed infant data suggest that (a) early in development, emotion enhances the sensory processing of faces and voices, (b) infants' ability to allocate increased attentional resources to negative emotional information develops earlier in the vocal domain than in the facial domain, and (c) at least by the age of 7 months, infants reliably match and recognize emotional information across face and voice.
Herzmann, Grit; Bird, Christopher W; Freeman, Megan; Curran, Tim
2013-10-01
Oxytocin has been shown to affect human social information processing including recognition memory for faces. Here we investigated the neural processes underlying the effect of oxytocin on memorizing own-race and other-race faces in men and women. In a placebo-controlled, double-blind, between-subject study, participants received either oxytocin or placebo before studying own-race and other-race faces. We recorded event-related potentials (ERPs) during both the study and recognition phase to investigate neural correlates of oxytocin's effect on memory encoding, memory retrieval, and perception. Oxytocin increased the accuracy of familiarity judgments in the recognition test. Neural correlates for this effect were found in ERPs related to memory encoding and retrieval but not perception. In contrast to its facilitating effects on familiarity, oxytocin impaired recollection judgments, but in men only. Oxytocin did not differentially affect own-race and other-race faces. This study shows that oxytocin influences memory, but not perceptual processes, in a face recognition task and is the first to reveal sex differences in the effect of oxytocin on face memory. Contrary to recent findings in oxytocin and moral decision making, oxytocin did not preferentially improve memory for own-race faces. Copyright © 2013 Elsevier Ltd. All rights reserved.
Plastic reorganization of neural systems for perception of others in the congenitally blind.
Fairhall, S L; Porter, K B; Bellucci, C; Mazzetti, M; Cipolli, C; Gobbini, M I
2017-09-01
Recent evidence suggests that the function of the core system for face perception might extend beyond visual face perception to a broader role in person perception. To critically test this broader role of the core face system in person perception, we examined the role of the core system during the perception of others in 7 congenitally blind individuals and 15 sighted subjects by measuring their neural responses using fMRI while they listened to voices and performed identity and emotion recognition tasks. We hypothesised that in people who have had no visual experience of faces, core face-system areas may assume a role in the perception of others via voices. Results showed that emotions conveyed by voices can be decoded in homologues of the core face system only in the blind. Moreover, there was a specific enhancement of response to verbal as compared to non-verbal stimuli in bilateral fusiform face areas and the right posterior superior temporal sulcus, showing that the core system also assumes some language-related functions in the blind. These results indicate that, in individuals with no history of visual experience, areas of the core system for face perception may assume a role in aspects of voice perception that are relevant to social cognition and perception of others' emotions. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.
Ma, Yuanxiao; Ran, Guangming; Chen, Xu; Ma, Haijing; Hu, Na
2017-01-01
Adult attachment style is key to understanding emotion regulation and feelings of security in human interactions as well as the construction of the caregiving system. The caregiving system is a group of representations about affiliative behaviors, which is guided by the caregiver's sensitivity and empathy, and is mature in young adulthood. Appropriate perception and interpretation of infant emotions is a crucial component of the formation of a secure attachment relationship between infant and caregiver. As attachment styles influence the ways in which people perceive emotional information, we examined how different attachment styles are associated with brain responses to the perception of infant facial expressions in nulliparous females with secure, anxious, and avoidant attachment styles. The event-related potentials of 65 nulliparous females were assessed during a facial recognition task with joy, neutral, and crying infant faces. The results showed that anxiously attached females exhibited larger N170 amplitudes than those with avoidant attachment in response to all infant faces. Regarding the P300 component, securely attached females showed larger amplitudes to all infant faces in comparison with avoidantly attached females. Moreover, anxiously attached females exhibited greater amplitudes than avoidantly attached females to only crying infant faces. In conclusion, the current results provide evidence that attachment style differences are associated with brain responses to the perception of infant faces. Furthermore, these findings help dissociate the psychological mechanisms underlying the caregiving behavior of anxiously and avoidantly attached individuals from those of securely attached individuals.
Differential hemispheric and visual stream contributions to ensemble coding of crowd emotion
Im, Hee Yeon; Albohn, Daniel N.; Steiner, Troy G.; Cushing, Cody A.; Adams, Reginald B.; Kveraga, Kestutis
2017-01-01
In crowds, where scrutinizing individual facial expressions is inefficient, humans can make snap judgments about the prevailing mood by reading “crowd emotion”. We investigated how the brain accomplishes this feat in a set of behavioral and fMRI studies. Participants were asked to either avoid or approach one of two crowds of faces presented in the left and right visual hemifields. Perception of crowd emotion was improved when crowd stimuli contained goal-congruent cues and was highly lateralized to the right hemisphere. The dorsal visual stream was preferentially activated in crowd emotion processing, with activity in the intraparietal sulcus and superior frontal gyrus predicting perceptual accuracy for crowd emotion perception, whereas activity in the fusiform cortex in the ventral stream predicted better perception of individual facial expressions. Our findings thus reveal significant behavioral differences and differential involvement of the hemispheres and the major visual streams in reading crowd versus individual face expressions. PMID:29226255
It is all in the face: carotenoid skin coloration loses attractiveness outside the face.
Lefevre, C E; Ewbank, M P; Calder, A J; von dem Hagen, E; Perrett, D I
2013-01-01
Recently, the importance of skin colour for facial attractiveness has been recognized. In particular, dietary carotenoid-induced skin colour has been proposed as a signal of health and therefore attractiveness. While perceptual results are highly consistent, it is currently not clear whether carotenoid skin colour is preferred because it poses a cue to current health condition in humans or whether it is simply seen as a more aesthetically pleasing colour, independently of skin-specific signalling properties. Here, we tested this question by comparing attractiveness ratings of faces to corresponding ratings of meaningless scrambled face images matching the colours and contrasts found in the face. We produced sets of face and non-face stimuli with either healthy (high-carotenoid coloration) or unhealthy (low-carotenoid coloration) colour and asked participants for attractiveness ratings. Results showed that, while for faces increased carotenoid coloration significantly improved attractiveness, there was no equivalent effect on perception of scrambled images. These findings are consistent with a specific signalling system of current condition through skin coloration in humans and indicate that preferences are not caused by sensory biases in observers.
A computer-generated animated face stimulus set for psychophysiological research
Naples, Adam; Nguyen-Phuc, Alyssa; Coffman, Marika; Kresse, Anna; Faja, Susan; Bernier, Raphael; McPartland, James
2014-01-01
Human faces are fundamentally dynamic, but experimental investigations of face perception traditionally rely on static images of faces. While naturalistic videos of actors have been used with success in some contexts, much research in neuroscience and psychophysics demands carefully controlled stimuli. In this paper, we describe a novel set of computer-generated, dynamic face stimuli. These grayscale faces are tightly controlled for low- and high-level visual properties. All faces are standardized in terms of size, luminance, and location and size of facial features. Each face begins with a neutral pose and transitions to an expression over the course of 30 frames. Altogether there are 222 stimuli spanning 3 different categories of movement: (1) an affective movement (fearful face); (2) a neutral movement (close-lipped, puffed cheeks with open eyes); and (3) a biologically impossible movement (upward dislocation of eyes and mouth). To determine whether early brain responses sensitive to low-level visual features differed between expressions, we measured the occipital P100 event-related potential (ERP), which is known to reflect differences in early stages of visual processing, and the N170, which reflects the structural encoding of faces. We found no differences between faces at the P100, indicating that different face categories were well matched on low-level image properties. This database provides researchers with a well-controlled set of dynamic face stimuli, matched on low-level image characteristics, that is applicable to a range of research questions in social perception. PMID:25028164
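As a brief illustration of the kind of low-level matching implied by the P100 result above, the sketch below computes mean luminance and RMS contrast per stimulus category. The images, category names, and array sizes are synthetic placeholders, not the published stimulus set.

```python
# Hypothetical check that stimulus categories are matched on low-level image
# statistics (mean luminance and RMS contrast). Synthetic grayscale frames
# stand in for the actual dynamic face stimuli.
import numpy as np

rng = np.random.default_rng(0)

def low_level_stats(images):
    """images: array of shape (n_images, height, width), grayscale in [0, 1]."""
    flat = images.reshape(len(images), -1)
    return flat.mean(axis=1).mean(), flat.std(axis=1).mean()  # luminance, RMS contrast

# Three hypothetical movement categories, 74 clips each (222 stimuli total),
# represented here by their first frames only.
categories = {
    "affective": rng.uniform(0.4, 0.6, size=(74, 128, 128)),
    "neutral": rng.uniform(0.4, 0.6, size=(74, 128, 128)),
    "impossible": rng.uniform(0.4, 0.6, size=(74, 128, 128)),
}

for name, imgs in categories.items():
    lum, con = low_level_stats(imgs)
    print(f"{name:>10s}: mean luminance = {lum:.3f}, RMS contrast = {con:.3f}")
```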
Emotional tears facilitate the recognition of sadness and the perceived need for social support.
Balsters, Martijn J H; Krahmer, Emiel J; Swerts, Marc G J; Vingerhoets, Ad J J M
2013-02-12
The tearing effect refers to the relevance of tears as an important visual cue adding meaning to human facial expression. However, little is known about how people process these visual cues and their mediating role in terms of emotion perception and person judgment. We therefore conducted two experiments in which we measured the influence of tears on the identification of sadness and the perceived need for social support at an early perceptual level. In two experiments (1 and 2), participants were exposed to sad and neutral faces. In both experiments, the face stimuli were presented for 50 milliseconds. In experiment 1, tears were digitally added to sad faces in one condition. Participants demonstrated significantly faster recognition of sad faces with tears compared to those without tears. In experiment 2, tears were added to neutral faces as well. Participants had to indicate to what extent the displayed individuals were in need of social support. Study participants reported a greater perceived need for social support in response to both sad and neutral faces with tears than to those without tears. This study thus demonstrated that emotional tears serve as important visual cues at an early (pre-attentive) level.
Broadbent, Elizabeth; Kumar, Vinayak; Li, Xingyan; Sollers, John; Stafford, Rebecca Q.; MacDonald, Bruce A.; Wegner, Daniel M.
2013-01-01
It is important for robot designers to know how to make robots that interact effectively with humans. One key dimension is robot appearance and in particular how humanlike the robot should be. Uncanny Valley theory suggests that robots look uncanny when their appearance approaches, but is not absolutely, human. An underlying mechanism may be that appearance affects users’ perceptions of the robot’s personality and mind. This study aimed to investigate how robot facial appearance affected perceptions of the robot’s mind, personality and eeriness. A repeated measures experiment was conducted. 30 participants (14 females and 16 males, mean age 22.5 years) interacted with a Peoplebot healthcare robot under three conditions in a randomized order: the robot had either a humanlike face, silver face, or no-face on its display screen. Each time, the robot assisted the participant to take his/her blood pressure. Participants rated the robot’s mind, personality, and eeriness in each condition. The robot with the humanlike face display was most preferred, rated as having most mind, being most humanlike, alive, sociable and amiable. The robot with the silver face display was least preferred, rated most eerie, moderate in mind, humanlikeness and amiability. The robot with the no-face display was rated least sociable and amiable. There was no difference in blood pressure readings between the robots with different face displays. Higher ratings of eeriness were related to impressions of the robot with the humanlike face display being less amiable, less sociable and less trustworthy. These results suggest that the more humanlike a healthcare robot’s face display is, the more people attribute mind and positive personality characteristics to it. Eeriness was related to negative impressions of the robot’s personality. Designers should be aware that the face on a robot’s display screen can affect both the perceived mind and personality of the robot. PMID:24015263
Broadbent, Elizabeth; Kumar, Vinayak; Li, Xingyan; Sollers, John; Stafford, Rebecca Q; MacDonald, Bruce A; Wegner, Daniel M
2013-01-01
It is important for robot designers to know how to make robots that interact effectively with humans. One key dimension is robot appearance and in particular how humanlike the robot should be. Uncanny Valley theory suggests that robots look uncanny when their appearance approaches, but is not absolutely, human. An underlying mechanism may be that appearance affects users' perceptions of the robot's personality and mind. This study aimed to investigate how robot facial appearance affected perceptions of the robot's mind, personality and eeriness. A repeated measures experiment was conducted. 30 participants (14 females and 16 males, mean age 22.5 years) interacted with a Peoplebot healthcare robot under three conditions in a randomized order: the robot had either a humanlike face, silver face, or no-face on its display screen. Each time, the robot assisted the participant to take his/her blood pressure. Participants rated the robot's mind, personality, and eeriness in each condition. The robot with the humanlike face display was most preferred, rated as having most mind, being most humanlike, alive, sociable and amiable. The robot with the silver face display was least preferred, rated most eerie, moderate in mind, humanlikeness and amiability. The robot with the no-face display was rated least sociable and amiable. There was no difference in blood pressure readings between the robots with different face displays. Higher ratings of eeriness were related to impressions of the robot with the humanlike face display being less amiable, less sociable and less trustworthy. These results suggest that the more humanlike a healthcare robot's face display is, the more people attribute mind and positive personality characteristics to it. Eeriness was related to negative impressions of the robot's personality. Designers should be aware that the face on a robot's display screen can affect both the perceived mind and personality of the robot.
Reconstructing Perceived and Retrieved Faces from Activity Patterns in Lateral Parietal Cortex.
Lee, Hongmi; Kuhl, Brice A
2016-06-01
Recent findings suggest that the contents of memory encoding and retrieval can be decoded from the angular gyrus (ANG), a subregion of posterior lateral parietal cortex. However, typical decoding approaches provide little insight into the nature of ANG content representations. Here, we tested whether complex, multidimensional stimuli (faces) could be reconstructed from ANG by predicting underlying face components from fMRI activity patterns in humans. Using an approach inspired by computer vision methods for face recognition, we applied principal component analysis to a large set of face images to generate eigenfaces. We then modeled relationships between eigenface values and patterns of fMRI activity. Activity patterns evoked by individual faces were then used to generate predicted eigenface values, which could be transformed into reconstructions of individual faces. We show that visually perceived faces were reliably reconstructed from activity patterns in occipitotemporal cortex and several lateral parietal subregions, including ANG. Subjective assessment of reconstructed faces revealed specific sources of information (e.g., affect and skin color) that were successfully reconstructed in ANG. Strikingly, we also found that a model trained on ANG activity patterns during face perception was able to successfully reconstruct an independent set of face images that were held in memory. Together, these findings provide compelling evidence that ANG forms complex, stimulus-specific representations that are reflected in activity patterns evoked during perception and remembering. Neuroimaging studies have consistently implicated lateral parietal cortex in episodic remembering, but the functional contributions of lateral parietal cortex to memory remain a topic of debate. Here, we used an innovative form of fMRI pattern analysis to test whether lateral parietal cortex actively represents the contents of memory. Using a large set of human face images, we first extracted latent face components (eigenfaces). We then used machine learning algorithms to predict face components from fMRI activity patterns and, ultimately, to reconstruct images of individual faces. We show that activity patterns in a subregion of lateral parietal cortex, the angular gyrus, supported successful reconstruction of perceived and remembered faces, confirming a role for this region in actively representing remembered content. Copyright © 2016 the authors.
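The reconstruction pipeline described above (eigenfaces via PCA, then a mapping from fMRI activity patterns to eigenface scores) can be sketched in a few lines. This is a minimal illustration of the general approach using synthetic data and ordinary ridge regression; it is not the authors' analysis code, and all dimensions and parameters are assumptions.

```python
# Minimal eigenface-style reconstruction sketch: PCA over face images yields
# eigenfaces, a regression maps fMRI activity patterns to eigenface scores,
# and predicted scores are inverted back into pixel space. All data below are
# synthetic stand-ins, not the study's data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_train, n_test, n_pixels, n_voxels, n_components = 300, 10, 64 * 64, 500, 50

faces = rng.normal(size=(n_train + n_test, n_pixels))       # flattened face images
mixing = rng.normal(size=(n_pixels, n_voxels)) * 0.01       # assumed linear link to voxels
activity = faces @ mixing + rng.normal(scale=0.5, size=(n_train + n_test, n_voxels))

# 1. Eigenfaces: principal components of the training face images.
pca = PCA(n_components=n_components)
train_scores = pca.fit_transform(faces[:n_train])

# 2. Map fMRI activity patterns to eigenface scores (multi-output ridge regression).
decoder = Ridge(alpha=10.0)
decoder.fit(activity[:n_train], train_scores)

# 3. Reconstruct held-out faces from their activity patterns alone.
predicted_scores = decoder.predict(activity[n_train:])
reconstructions = pca.inverse_transform(predicted_scores)   # back to pixel space

# Evaluate with a simple pixelwise correlation per test face.
for true, recon in zip(faces[n_train:], reconstructions):
    print(f"reconstruction r = {np.corrcoef(true, recon)[0, 1]:.2f}")
```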
ERIC Educational Resources Information Center
Bahrick, Lorraine E.; Lickliter, Robert; Castellanos, Irina
2013-01-01
Although research has demonstrated impressive face perception skills of young infants, little attention has focused on conditions that enhance versus impair infant face perception. The present studies tested the prediction, generated from the intersensory redundancy hypothesis (IRH), that face discrimination, which relies on detection of visual…
Face-selective neurons maintain consistent visual responses across months
McMahon, David B. T.; Jones, Adam P.; Bondar, Igor V.; Leopold, David A.
2014-01-01
Face perception in both humans and monkeys is thought to depend on neurons clustered in discrete, specialized brain regions. Because primates are frequently called upon to recognize and remember new individuals, the neuronal representation of faces in the brain might be expected to change over time. The functional properties of neurons in behaving animals are typically assessed over time periods ranging from minutes to hours, which amounts to a snapshot compared to a lifespan of a neuron. It therefore remains unclear how neuronal properties observed on a given day predict that same neuron's activity months or years later. Here we show that the macaque inferotemporal cortex contains face-selective cells that show virtually no change in their patterns of visual responses over time periods as long as one year. Using chronically implanted microwire electrodes guided by functional MRI targeting, we obtained distinct profiles of selectivity for face and nonface stimuli that served as fingerprints for individual neurons in the anterior fundus (AF) face patch within the superior temporal sulcus. Longitudinal tracking over a series of daily recording sessions revealed that face-selective neurons maintain consistent visual response profiles across months-long time spans despite the influence of ongoing daily experience. We propose that neurons in the AF face patch are specialized for aspects of face perception that demand stability as opposed to plasticity. PMID:24799679
Face-selective neurons maintain consistent visual responses across months.
McMahon, David B T; Jones, Adam P; Bondar, Igor V; Leopold, David A
2014-06-03
Face perception in both humans and monkeys is thought to depend on neurons clustered in discrete, specialized brain regions. Because primates are frequently called upon to recognize and remember new individuals, the neuronal representation of faces in the brain might be expected to change over time. The functional properties of neurons in behaving animals are typically assessed over time periods ranging from minutes to hours, which amounts to a snapshot compared to a lifespan of a neuron. It therefore remains unclear how neuronal properties observed on a given day predict that same neuron's activity months or years later. Here we show that the macaque inferotemporal cortex contains face-selective cells that show virtually no change in their patterns of visual responses over time periods as long as one year. Using chronically implanted microwire electrodes guided by functional MRI targeting, we obtained distinct profiles of selectivity for face and nonface stimuli that served as fingerprints for individual neurons in the anterior fundus (AF) face patch within the superior temporal sulcus. Longitudinal tracking over a series of daily recording sessions revealed that face-selective neurons maintain consistent visual response profiles across months-long time spans despite the influence of ongoing daily experience. We propose that neurons in the AF face patch are specialized for aspects of face perception that demand stability as opposed to plasticity.
Baby Schema in Infant Faces Induces Cuteness Perception and Motivation for Caretaking in Adults.
Glocker, Melanie L; Langleben, Daniel D; Ruparel, Kosha; Loughead, James W; Gur, Ruben C; Sachser, Norbert
2009-03-01
Ethologist Konrad Lorenz proposed that baby schema ('Kindchenschema') is a set of infantile physical features such as the large head, round face and big eyes that is perceived as cute and motivates caretaking behavior in other individuals, with the evolutionary function of enhancing offspring survival. Previous work on this fundamental concept was restricted to schematic baby representations or correlative approaches. Here, we experimentally tested the effects of baby schema on the perception of cuteness and the motivation for caretaking using photographs of infant faces. Employing quantitative techniques, we parametrically manipulated the baby schema content to produce infant faces with high (e.g. round face and high forehead) and low (e.g. narrow face and low forehead) baby schema features that retained all the characteristics of a photographic portrait. Undergraduate students (n = 122) rated these infants' cuteness and their motivation to take care of them. The high baby schema infants were rated as more cute and elicited stronger motivation for caretaking than the unmanipulated and the low baby schema infants. This is the first experimental proof of the baby schema effects in actual infant faces. Our findings indicate that the baby schema response is a critical function of human social cognition that may be the basis of caregiving and have implications for infant-caretaker interactions.
The other-race and other-species effects in face perception – a subordinate-level analysis
Dahl, Christoph D.; Rasch, Malte J.; Chen, Chien-Chung
2014-01-01
The ability to discriminate faces is modulated by the frequency of exposure to a category of faces; in other words, discrimination performance is lower for infrequently encountered faces than for frequently encountered ones. Two such phenomena have been described in the literature: the own-race advantage, a benefit in processing own-race as opposed to other-race faces, and the own-species advantage, a benefit in processing conspecific as opposed to heterospecific faces. So far, the exact parameters that drive either of these two effects are not fully understood. Here we present a full assessment of discrimination performance in human participants across faces of two races (Asian and Caucasian) as well as a range of non-human primate faces (chimpanzee, Rhesus macaque and marmoset). We measured reaction times of Asian participants performing a delayed matching-to-sample task, and correlated the results with similarity estimates of facial configuration and face parts. We found faster discrimination of own-race than other-race/species faces. Further, we found a strong reliance on configural information in upright own-species/own-race faces and on individual face parts in all inverted face classes, supporting the assumption of specialized processing for the face class of most frequent exposure. PMID:25285092
Lahnakoski, Juha M; Glerean, Enrico; Salmi, Juha; Jääskeläinen, Iiro P; Sams, Mikko; Hari, Riitta; Nummenmaa, Lauri
2012-01-01
Despite the abundant data on brain networks processing static social signals, such as pictures of faces, the neural systems supporting social perception in naturalistic conditions are still poorly understood. Here we delineated brain networks subserving social perception under naturalistic conditions in 19 healthy humans who watched, during 3-T functional magnetic resonance imaging (fMRI), a set of 137 short (approximately 16 s each, total 27 min) audiovisual movie clips depicting pre-selected social signals. Two independent raters estimated how well each clip represented eight social features (faces, human bodies, biological motion, goal-oriented actions, emotion, social interaction, pain, and speech) and six filler features (places, objects, rigid motion, people not in social interaction, non-goal-oriented action, and non-human sounds) lacking social content. These ratings were used as predictors in the fMRI analysis. The posterior superior temporal sulcus (STS) responded to all social features but not to any non-social features, and the anterior STS responded to all social features except bodies and biological motion. We also found four partially segregated, extended networks for processing of specific social signals: (1) a fronto-temporal network responding to multiple social categories, (2) a fronto-parietal network preferentially activated to bodies, motion, and pain, (3) a temporo-amygdalar network responding to faces, social interaction, and speech, and (4) a fronto-insular network responding to pain, emotions, social interactions, and speech. Our results highlight the role of the pSTS in processing multiple aspects of social information, as well as the feasibility and efficiency of fMRI mapping under conditions that resemble the complexity of real life.
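The analysis strategy described above, in which continuous ratings of social features serve as predictors of voxelwise BOLD activity, can be illustrated schematically. The sketch below uses synthetic ratings and BOLD data, and a plain least-squares fit stands in for a full GLM (HRF convolution and nuisance regressors omitted); the feature names and dimensions are assumptions for illustration only.

```python
# Hedged sketch of using stimulus feature ratings as voxelwise regressors.
# A real analysis would convolve regressors with an HRF, add motion and drift
# nuisance terms, and use a dedicated GLM package.
import numpy as np

rng = np.random.default_rng(2)
n_timepoints, n_voxels = 810, 2000          # ~27 min at TR = 2 s, arbitrary voxel count
feature_names = ["faces", "bodies", "biological_motion", "speech", "pain"]

ratings = rng.uniform(0, 1, size=(n_timepoints, len(feature_names)))  # per-TR feature ratings
bold = rng.normal(size=(n_timepoints, n_voxels))                      # BOLD time series

design = np.column_stack([ratings, np.ones(n_timepoints)])            # add an intercept
betas, *_ = np.linalg.lstsq(design, bold, rcond=None)                 # voxelwise regression

# betas[i] is the map of regression weights for feature i across all voxels.
for i, name in enumerate(feature_names):
    print(f"{name:>18s}: peak |beta| = {np.abs(betas[i]).max():.2f}")
```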
Face processing pattern under top-down perception: a functional MRI study
NASA Astrophysics Data System (ADS)
Li, Jun; Liang, Jimin; Tian, Jie; Liu, Jiangang; Zhao, Jizheng; Zhang, Hui; Shi, Guangming
2009-02-01
Although top-down perceptual processes play an important role in face processing, their neural substrate remains puzzling because the top-down stream is difficult to isolate from activation patterns contaminated by bottom-up face perception input. In the present study, a novel paradigm in which participants are instructed to detect faces in pure noise images is employed, which efficiently eliminates the interference of bottom-up face perception in top-down face processing. By analyzing the map of functional connectivity with the right fusiform face area (FFA), computed with conventional Pearson correlation, a possible face-processing pattern induced by top-down perception can be obtained. Apart from the bilateral fusiform gyrus (FG), left inferior occipital gyrus (IOG) and left superior temporal sulcus (STS), which are consistent with a core system in the distributed cortical network for face perception, activation induced by top-down face processing is also found in regions that include the anterior cingulate cortex (ACC), right orbitofrontal cortex (OFC), left precuneus, right parahippocampal cortex, left dorsolateral prefrontal cortex (DLPFC), right frontal pole, bilateral premotor cortex, left inferior parietal cortex and bilateral thalamus. The results indicate that decision-making, attention, episodic memory retrieval and contextual associative processing networks cooperate with general face-processing regions to process face information under top-down perception.
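For readers unfamiliar with the connectivity measure mentioned above, here is a minimal sketch of seed-based functional connectivity computed with Pearson's correlation. The "right FFA" seed indices and the BOLD data are synthetic stand-ins, not the study's data.

```python
# Seed-based connectivity sketch: correlate the mean time course of an assumed
# right-FFA seed with every other voxel using Pearson's correlation.
import numpy as np

rng = np.random.default_rng(3)
n_timepoints, n_voxels = 240, 5000
bold = rng.normal(size=(n_timepoints, n_voxels))     # synthetic BOLD time series
seed_voxels = np.arange(50)                          # indices of the assumed right-FFA seed

seed_ts = bold[:, seed_voxels].mean(axis=1)          # seed time course

# Standardize, then compute one Pearson r per voxel.
bold_z = (bold - bold.mean(axis=0)) / bold.std(axis=0)
seed_z = (seed_ts - seed_ts.mean()) / seed_ts.std()
connectivity = bold_z.T @ seed_z / n_timepoints      # correlation map, shape (n_voxels,)

print("strongest positive connectivity:", np.sort(connectivity)[-5:])
```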
Hearing faces: how the infant brain matches the face it sees with the speech it hears.
Bristow, Davina; Dehaene-Lambertz, Ghislaine; Mattout, Jeremie; Soares, Catherine; Gliga, Teodora; Baillet, Sylvain; Mangin, Jean-François
2009-05-01
Speech is not a purely auditory signal. From around 2 months of age, infants are able to correctly match the vowel they hear with the appropriate articulating face. However, there is no behavioral evidence of integrated audiovisual perception until 4 months of age, at the earliest, when an illusory percept can be created by the fusion of the auditory stimulus and of the facial cues (McGurk effect). To understand how infants initially match the articulatory movements they see with the sounds they hear, we recorded high-density ERPs in response to auditory vowels that followed a congruent or incongruent silently articulating face in 10-week-old infants. In a first experiment, we determined that auditory-visual integration occurs during the early stages of perception as in adults. The mismatch response was similar in timing and in topography whether the preceding vowels were presented visually or aurally. In the second experiment, we studied audiovisual integration in the linguistic (vowel perception) and nonlinguistic (gender perception) domain. We observed a mismatch response for both types of change at similar latencies. Their topographies were significantly different, demonstrating that cross-modal integration of these features is computed in parallel by two different networks. Indeed, brain source modeling revealed that phoneme and gender computations were lateralized toward the left and toward the right hemisphere, respectively, suggesting that each hemisphere possesses an early processing bias. We also observed repetition suppression in temporal regions and repetition enhancement in frontal regions. These results underscore the complexity and structure of the human cortical organization that sustains communication from the first weeks of life.
Face Context Influences Local Part Processing: An ERP Study.
Zhang, Hong; Sun, Yaoru; Zhao, Lun
2017-09-01
Perception of face parts on the basis of features is thought to be different from perception of whole faces, which is more based on configural information. Face context is also suggested to play an important role in face processing. To investigate how face context influences the early-stage perception of facial local parts, we used an oddball paradigm that tested perceptual stages of face processing rather than recognition. We recorded the event-related potentials (ERPs) elicited by whole faces and face parts presented in four conditions (upright-normal, upright-thatcherised, inverted-normal and inverted-thatcherised), as well as the ERPs elicited by non-face objects (whole houses and house parts) with corresponding conditions. The results showed that face context significantly affected the N170 with increased amplitudes and earlier peak latency for upright normal faces. Removing face context delayed the P1 latency but did not affect the P1 amplitude prominently for both upright and inverted normal faces. Across all conditions, neither the N170 nor the P1 was modulated by house context. The significant changes on the N170 and P1 components revealed that face context influences local part processing at the early stage of face processing and this context effect might be specific for face perception. We further suggested that perceptions of whole faces and face parts are functionally distinguished.
Tsao, Doris Y.
2009-01-01
Faces are among the most informative stimuli we ever perceive: Even a split-second glimpse of a person's face tells us their identity, sex, mood, age, race, and direction of attention. The specialness of face processing is acknowledged in the artificial vision community, where contests for face recognition algorithms abound. Neurological evidence strongly implicates a dedicated machinery for face processing in the human brain, to explain the double dissociability of face and object recognition deficits. Furthermore, it has recently become clear that macaques too have specialized neural machinery for processing faces. Here we propose a unifying hypothesis, deduced from computational, neurological, fMRI, and single-unit experiments: that what makes face processing special is that it is gated by an obligatory detection process. We will clarify this idea in concrete algorithmic terms, and show how it can explain a variety of phenomena associated with face processing. PMID:18558862
Automatic prediction of facial trait judgments: appearance vs. structural models.
Rojas, Mario; Masip, David; Todorov, Alexander; Vitria, Jordi
2011-01-01
Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations and it is the focus of attention for research in diverse fields such as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system. Multiple studies suggest that observers use face information to infer personality characteristics. Interactive computer systems are trying to take advantage of these findings and apply them to increase the natural aspect of interaction and to improve the performance of interactive computer systems. Here, we experimentally test whether the automatic prediction of facial trait judgments (e.g. dominance) can be made by using the full appearance information of the face and whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information and a structural model constructed from the relations among facial salient points. State of the art machine learning methods are applied to a) derive a facial trait judgment model from training data and b) predict a facial trait value for any face. Furthermore, we address the issue of whether there are specific structural relations among facial points that predict perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that a) prediction of perception of facial traits is learnable by both holistic and structural approaches; b) the most reliable prediction of facial trait judgments is obtained by certain type of holistic descriptions of the face appearance; and c) for some traits such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions.
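A schematic of the comparison described above, a holistic appearance model versus a structural model built from relations among facial salient points, is sketched below. The pixel features, landmark coordinates, trait labels, and the choice of a linear SVM are placeholder assumptions, not the authors' pipeline.

```python
# Contrast a holistic model (raw appearance / pixels) with a structural model
# (pairwise distances between facial landmarks) for predicting a trait label.
# All data are synthetic, so accuracies will hover near chance.
import numpy as np
from itertools import combinations
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n_faces, n_pixels, n_landmarks = 200, 32 * 32, 20

pixels = rng.normal(size=(n_faces, n_pixels))                 # holistic appearance features
landmarks = rng.normal(size=(n_faces, n_landmarks, 2))        # (x, y) salient points
trait = rng.integers(0, 2, size=n_faces)                      # e.g. judged dominant vs not

# Structural features: pairwise distances between landmark points.
pairs = list(combinations(range(n_landmarks), 2))
structural = np.array([
    [np.linalg.norm(face[i] - face[j]) for i, j in pairs] for face in landmarks
])

for name, features in [("holistic", pixels), ("structural", structural)]:
    acc = cross_val_score(SVC(kernel="linear"), features, trait, cv=5).mean()
    print(f"{name:>10s} model: cross-validated accuracy = {acc:.2f}")
```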
Intranasal oxytocin selectively attenuates rhesus monkeys' attention to negative facial expressions.
Parr, Lisa A; Modi, Meera; Siebert, Erin; Young, Larry J
2013-09-01
Intranasal oxytocin (IN-OT) modulates social perception and cognition in humans and could be an effective pharmacotherapy for treating social impairments associated with neuropsychiatric disorders, like autism. However, it is unknown how IN-OT modulates social cognition, its effect after repeated use, or its impact on the developing brain. Animal models are urgently needed. This study examined the effect of IN-OT on social perception in monkeys using tasks that reveal some of the social impairments seen in autism. Six rhesus macaques (Macaca mulatta, 4 males) received a 48 IU dose of OT or saline placebo using a pediatric nebulizer. An hour later, they performed a computerized task (the dot-probe task) to measure their attentional bias to social, emotional, and nonsocial images. Results showed that IN-OT significantly reduced monkeys' attention to negative facial expressions, but not neutral faces or clip art images and, additionally, showed a trend to enhance monkeys' attention to direct vs. averted gaze faces. This study is the first to demonstrate an effect of IN-OT on social perception in monkeys: IN-OT selectively reduced monkeys' attention to negative facial expressions, but not to neutral social or nonsocial images. These findings complement several reports in humans showing that IN-OT reduces the aversive quality of social images, suggesting that, like humans, monkey social perception is mediated by the oxytocinergic system. Importantly, these results in monkeys suggest that IN-OT does not dampen the emotional salience of social stimuli, but rather acts to affect the evaluation of emotional images during the early stages of information processing. Copyright © 2013 Elsevier Ltd. All rights reserved.
Boy with cortical visual impairment and unilateral hemiparesis in Jeff Huntington's "Slip" (2011).
Bianucci, R; Perciaccante, A; Appenzeller, O
2016-11-15
Face recognition and face perception are an important part of identifying the health qualities of a person and an integral part of so-called spot diagnosis in clinical neurology. Neurology depends in part on the observation, description and interpretation of visual information, and similar skills are required in visual art. Here we report a case of cortical visual impairment (CVI) and unilateral facial weakness in a boy depicted by the painter Jeff Huntington (2011). The corollary is that art can serve as a clinical medical exercise: art interpretation helps neurology students apply the same skills they will use in clinical practice and develop their observational and interpretive skills in non-clinical settings. Furthermore, an increased awareness of emotional and character expression in the human face may facilitate successful doctor-patient relationships. Copyright © 2016 Elsevier B.V. All rights reserved.
Perception of the average size of multiple objects in chimpanzees (Pan troglodytes).
Imura, Tomoko; Kawakami, Fumito; Shirai, Nobu; Tomonaga, Masaki
2017-08-30
Humans can extract statistical information, such as the average size of a group of objects or the general emotion of faces in a crowd without paying attention to any individual object or face. To determine whether summary perception is unique to humans, we investigated the evolutional origins of this ability by assessing whether chimpanzees, which are closely related to humans, can also determine the average size of multiple visual objects. Five chimpanzees and 18 humans were able to choose the array in which the average size was larger, when presented with a pair of arrays, each containing 12 circles of different or the same sizes. Furthermore, both species were more accurate in judging the average size of arrays consisting of 12 circles of different or the same sizes than they were in judging the average size of arrays consisting of a single circle. Our findings could not be explained by the use of a strategy in which the chimpanzee detected the largest or smallest circle among those in the array. Our study provides the first evidence that chimpanzees can perceive the average size of multiple visual objects. This indicates that the ability to compute the statistical properties of a complex visual scene is not unique to humans, but is shared between both species. © 2017 The Authors.
Perception of the average size of multiple objects in chimpanzees (Pan troglodytes)
Imura, Tomoko; Kawakami, Fumito; Shirai, Nobu; Tomonaga, Masaki
2017-01-01
Humans can extract statistical information, such as the average size of a group of objects or the general emotion of faces in a crowd without paying attention to any individual object or face. To determine whether summary perception is unique to humans, we investigated the evolutional origins of this ability by assessing whether chimpanzees, which are closely related to humans, can also determine the average size of multiple visual objects. Five chimpanzees and 18 humans were able to choose the array in which the average size was larger, when presented with a pair of arrays, each containing 12 circles of different or the same sizes. Furthermore, both species were more accurate in judging the average size of arrays consisting of 12 circles of different or the same sizes than they were in judging the average size of arrays consisting of a single circle. Our findings could not be explained by the use of a strategy in which the chimpanzee detected the largest or smallest circle among those in the array. Our study provides the first evidence that chimpanzees can perceive the average size of multiple visual objects. This indicates that the ability to compute the statistical properties of a complex visual scene is not unique to humans, but is shared between both species. PMID:28835550
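The control mentioned above, ruling out a largest- or smallest-circle strategy, can be made concrete with a small simulation: when circle sizes are drawn at random, the array containing the single largest circle is often not the array with the larger mean, so trials can be selected that dissociate the two strategies. The sizes and trial counts below are arbitrary illustrative values.

```python
# Simulate how often a "pick the array with the largest single circle" heuristic
# agrees with a true average-size judgment, using random circle diameters.
import numpy as np

rng = np.random.default_rng(5)
n_trials, n_circles = 10000, 12

a = rng.uniform(10, 60, size=(n_trials, n_circles))    # circle diameters, array A
b = rng.uniform(10, 60, size=(n_trials, n_circles))    # circle diameters, array B

mean_choice = a.mean(axis=1) > b.mean(axis=1)          # correct average-size choice
max_choice = a.max(axis=1) > b.max(axis=1)             # "largest single circle" heuristic

agreement = (mean_choice == max_choice).mean()
print(f"heuristic agrees with the average-size rule on {agreement:.0%} of trials")
print(f"trials that dissociate the two strategies: {1 - agreement:.0%}")
```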
Awasthi, Bhuvanesh
2017-01-01
In the context of objectification and violence, little attention has been paid to the perception neuroscience of how the human brain perceives bodies and objectifies them. Various studies point to how external cues such as appearance and attire could play a key role in encouraging objectification, dehumanization and the denial of agency. Reviewing new experimental findings across several areas of research, it seems that common threads run through issues of clothing, sexual objectification, body perception, dehumanization, and assault. Collating findings from several different lines of research, this article reviews additional evidence from cognitive and neural dynamics of person perception (body and face perception processes) that predict downstream social behavior. Specifically, new findings demonstrate cognitive processing of sexualized female bodies as object-like, a crucial aspect of dehumanized percept devoid of agency and personhood. Sexual violence is a consequence of a dehumanized perception of female bodies that aggressors acquire through their exposure and interpretation of objectified body images. Integrating these findings and identifying triggers for sexual violence may help develop remedial measures and inform law enforcement processes and policy makers alike.
Awasthi, Bhuvanesh
2017-01-01
In the context of objectification and violence, little attention has been paid to the perception neuroscience of how the human brain perceives bodies and objectifies them. Various studies point to how external cues such as appearance and attire could play a key role in encouraging objectification, dehumanization and the denial of agency. Reviewing new experimental findings across several areas of research, it seems that common threads run through issues of clothing, sexual objectification, body perception, dehumanization, and assault. Collating findings from several different lines of research, this article reviews additional evidence from cognitive and neural dynamics of person perception (body and face perception processes) that predict downstream social behavior. Specifically, new findings demonstrate cognitive processing of sexualized female bodies as object-like, a crucial aspect of dehumanized percept devoid of agency and personhood. Sexual violence is a consequence of a dehumanized perception of female bodies that aggressors acquire through their exposure and interpretation of objectified body images. Integrating these findings and identifying triggers for sexual violence may help develop remedial measures and inform law enforcement processes and policy makers alike. PMID:28344565
Developmental Social Cognitive Neuroscience: Insights from Deafness
ERIC Educational Resources Information Center
Corina, David; Singleton, Jenny
2009-01-01
The condition of deafness presents a developmental context that provides insight into the biological, cultural, and linguistic factors underlying the development of neural systems that impact social cognition. Studies of visual attention, behavioral regulation, language development, and face and human action perception are discussed. Visually…
Rangarajan, Vinitha; Parvizi, Josef
2016-03-01
The ventral temporal cortex (VTC) contains several areas with selective responses to words, numbers, faces, and objects as demonstrated by numerous human and primate imaging and electrophysiological studies. Our recent work using electrocorticography (ECoG) confirmed the presence of face-selective neuronal populations in the human fusiform gyrus (FG) in patients implanted with intracranial electrodes in either the left or right hemisphere. Electrical brain stimulation (EBS) disrupted the conscious perception of faces only when it was delivered in the right, but not left, FG. In contrast to our previous findings, here we report both negative and positive EBS effects in right and left FG, respectively. The presence of right hemisphere language dominance in the first, and strong left-handedness and poor language processing performance in the second case, provide indirect clues about the functional architecture of the human VTC in relation to hemispheric asymmetries in language processing and handedness. Copyright © 2015 Elsevier Ltd. All rights reserved.
Ma, Yuanxiao; Ran, Guangming; Chen, Xu; Ma, Haijing; Hu, Na
2017-01-01
Adult attachment style is key to understanding emotion regulation and feelings of security in human interactions as well as the construction of the caregiving system. The caregiving system is a group of representations about affiliative behaviors, which is guided by the caregiver’s sensitivity and empathy, and is mature in young adulthood. Appropriate perception and interpretation of infant emotions is a crucial component of the formation of a secure attachment relationship between infant and caregiver. As attachment styles influence the ways in which people perceive emotional information, we examined how different attachment styles are associated with brain responses to the perception of infant facial expressions in nulliparous females with secure, anxious, and avoidant attachment styles. The event-related potentials of 65 nulliparous females were assessed during a facial recognition task with joy, neutral, and crying infant faces. The results showed that anxiously attached females exhibited larger N170 amplitudes than those with avoidant attachment in response to all infant faces. Regarding the P300 component, securely attached females showed larger amplitudes to all infant faces in comparison with avoidantly attached females. Moreover, anxiously attached females exhibited greater amplitudes than avoidantly attached females to only crying infant faces. In conclusion, the current results provide evidence that attachment style differences are associated with brain responses to the perception of infant faces. Furthermore, these findings help dissociate the psychological mechanisms underlying the caregiving behavior of anxiously and avoidantly attached individuals from those of securely attached individuals. PMID:28484415
Mutic, Smiljana; Moellers, Eileen M; Wiesmann, Martin; Freiherr, Jessica
2015-01-01
Human body odor is a source of important social information. In this study, we explore whether the sex of an individual can be established based on smelling axillary odor and whether exposure to male and female odors biases chemosensory and social perception. In a double-blind, pseudo-randomized application, 31 healthy normosmic heterosexual male and female raters were exposed to male and female chemosignals (odor samples of 27 heterosexual donors collected during a cardio workout) and a no-odor sample. Recipients rated chemosensory samples on a masculinity-femininity scale and provided intensity, familiarity and pleasantness ratings. Additionally, the modulation of social perception (gender-neutral faces and personality attributes) and affective introspection (mood) by male and female chemosignals was assessed. Male and female axillary odors were rated as rather masculine, regardless of the sex of the donor. In contrast to the masculinity bias in odor perception, a femininity bias emerged in social perception: detection of femininity in gender-neutral faces and personality attributes was facilitated by exposure to both male and female chemosignals. No chemosensory effect on raters' mood was observed. The results are discussed with regard to the use of male and female chemosignals in affective and social communication.
Mutic, Smiljana; Moellers, Eileen M.; Wiesmann, Martin; Freiherr, Jessica
2016-01-01
Human body odor is a source of important social information. In this study, we explore whether the sex of an individual can be established based on smelling axillary odor and whether exposure to male and female odors biases chemosensory and social perception. In a double-blind, pseudo-randomized application, 31 healthy normosmic heterosexual male and female raters were exposed to male and female chemosignals (odor samples of 27 heterosexual donors collected during a cardio workout) and a no-odor sample. Recipients rated chemosensory samples on a masculinity-femininity scale and provided intensity, familiarity and pleasantness ratings. Additionally, the modulation of social perception (gender-neutral faces and personality attributes) and affective introspection (mood) by male and female chemosignals was assessed. Male and female axillary odors were rated as rather masculine, regardless of the sex of the donor. In contrast to the masculinity bias in odor perception, a femininity bias emerged in social perception: detection of femininity in gender-neutral faces and personality attributes was facilitated by exposure to both male and female chemosignals. No chemosensory effect on raters' mood was observed. The results are discussed with regard to the use of male and female chemosignals in affective and social communication. PMID:26834656
The processing of social stimuli in early infancy: from faces to biological motion perception.
Simion, Francesca; Di Giorgio, Elisa; Leo, Irene; Bardi, Lara
2011-01-01
There are several lines of evidence suggesting that, from birth, the human system detects social agents on the basis of at least two properties: the presence of a face and the way they move. This chapter reviews the infant research on the origin of brain specialization for social stimuli and on the role of innate mechanisms and perceptual experience in shaping the development of the social brain. Two lines of convergent evidence on face detection and biological motion detection will be presented to demonstrate the innate predispositions of the human system to detect social stimuli at birth. As for face detection, experiments will be presented to demonstrate that, by virtue of nonspecific attentional biases, a very coarse template of faces becomes active at birth. As for biological motion detection, studies will be presented to demonstrate that, from birth, the human system is able to detect social stimuli on the basis of properties such as the presence of semi-rigid motion, termed biological motion. Overall, the empirical evidence converges in supporting the notion that the human system begins life broadly tuned to detect social stimuli and that progressive specialization narrows the system's tuning for social stimuli as a function of experience. Copyright © 2011 Elsevier B.V. All rights reserved.
Face and body perception in schizophrenia: a configural processing deficit?
Soria Bauser, Denise; Thoma, Patrizia; Aizenberg, Victoria; Brüne, Martin; Juckel, Georg; Daum, Irene
2012-01-30
Face and body perception rely on common processing mechanisms and activate similar but not identical brain networks. Patients with schizophrenia show impaired face perception, and the present study addressed for the first time body perception in this group. Seventeen patients diagnosed with schizophrenia or schizoaffective disorder were compared to 17 healthy controls on standardized tests assessing basic face perception skills (identity discrimination, memory for faces, recognition of facial affect). A matching-to-sample task including emotional and neutral faces, bodies and cars either in an upright or in an inverted position was administered to assess potential category-specific performance deficits and impairments of configural processing. Relative to healthy controls, schizophrenia patients showed poorer performance on the tasks assessing face perception skills. In the matching-to-sample task, they also responded more slowly and less accurately than controls, regardless of the stimulus category. Accuracy analysis showed significant inversion effects for faces and bodies across groups, reflecting configural processing mechanisms; however reaction time analysis indicated evidence of reduced inversion effects regardless of category in schizophrenia patients. The magnitude of the inversion effects was not related to clinical symptoms. Overall, the data point towards reduced configural processing, not only for faces but also for bodies and cars in individuals with schizophrenia. © 2011 Elsevier Ltd. All rights reserved.
Adaptation aftereffects in the perception of gender from biological motion.
Troje, Nikolaus F; Sadr, Javid; Geyer, Henning; Nakayama, Ken
2006-07-28
Human visual perception is highly adaptive. While this has been known and studied for a long time in domains such as color vision, motion perception, or the processing of spatial frequency, a number of more recent studies have shown that adaptation and adaptation aftereffects also occur in high-level visual domains like shape perception and face recognition. Here, we present data that demonstrate a pronounced aftereffect in response to adaptation to the perceived gender of biological motion point-light walkers. A walker that is perceived to be ambiguous in gender under neutral adaptation appears to be male after adaptation with an exaggerated female walker and female after adaptation with an exaggerated male walker. We discuss this adaptation aftereffect as a tool to characterize and probe the mechanisms underlying biological motion perception.
Zachariou, Valentinos; Nikas, Christine V; Safiullah, Zaid N; Gotts, Stephen J; Ungerleider, Leslie G
2017-08-01
Human face recognition is often attributed to configural processing; namely, processing the spatial relationships among the features of a face. If configural processing depends on fine-grained spatial information, do visuospatial mechanisms within the dorsal visual pathway contribute to this process? We explored this question in human adults using functional magnetic resonance imaging and transcranial magnetic stimulation (TMS) in a same-different face detection task. Within localized, spatial-processing regions of the posterior parietal cortex, configural face differences led to significantly stronger activation compared to featural face differences, and the magnitude of this activation correlated with behavioral performance. In addition, detection of configural relative to featural face differences led to significantly stronger functional connectivity between the right FFA and the spatial processing regions of the dorsal stream, whereas detection of featural relative to configural face differences led to stronger functional connectivity between the right FFA and left FFA. Critically, TMS centered on these parietal regions impaired performance on configural but not featural face difference detections. We conclude that spatial mechanisms within the dorsal visual pathway contribute to the configural processing of facial features and, more broadly, that the dorsal stream may contribute to the veridical perception of faces. Published by Oxford University Press 2016.
Neurons in the human amygdala selective for perceived emotion
Wang, Shuo; Tudusciuc, Oana; Mamelak, Adam N.; Ross, Ian B.; Adolphs, Ralph; Rutishauser, Ueli
2014-01-01
The human amygdala plays a key role in recognizing facial emotions and neurons in the monkey and human amygdala respond to the emotional expression of faces. However, it remains unknown whether these responses are driven primarily by properties of the stimulus or by the perceptual judgments of the perceiver. We investigated these questions by recording from over 200 single neurons in the amygdalae of 7 neurosurgical patients with implanted depth electrodes. We presented degraded fear and happy faces and asked subjects to discriminate their emotion by button press. During trials where subjects responded correctly, we found neurons that distinguished fear vs. happy emotions as expressed by the displayed faces. During incorrect trials, these neurons indicated the patients’ subjective judgment. Additional analysis revealed that, on average, all neuronal responses were modulated most by increases or decreases in response to happy faces, and driven predominantly by judgments about the eye region of the face stimuli. Following the same analyses, we showed that hippocampal neurons, unlike amygdala neurons, only encoded emotions but not subjective judgment. Our results suggest that the amygdala specifically encodes the subjective judgment of emotional faces, but that it plays less of a role in simply encoding aspects of the image array. The conscious percept of the emotion shown in a face may thus arise from interactions between the amygdala and its connections within a distributed cortical network, a scheme also consistent with the long response latencies observed in human amygdala recordings. PMID:24982200
Yang, Ping; Wang, Min; Jin, Zhenlan; Li, Ling
2015-01-01
The ability to focus on task-relevant information, while suppressing distraction, is critical for human cognition and behavior. Using a delayed-match-to-sample (DMS) task, we investigated the effects of emotional face distractors (positive, negative, and neutral faces) on early and late phases of visual short-term memory (VSTM) maintenance intervals, using low and high VSTM loads. Behavioral results showed decreased accuracy and delayed reaction times (RTs) for high vs. low VSTM load. Event-related potentials (ERPs) showed enhanced frontal N1 and occipital P1 amplitudes for negative faces vs. neutral or positive faces, implying rapid attentional alerting effects and early perceptual processing of negative distractors. However, high VSTM load appeared to inhibit face processing in general, showing decreased N1 amplitudes and delayed P1 latencies. An inverse correlation between the N1 activation difference (high-load minus low-load) and RT costs (high-load minus low-load) was found at left frontal areas when viewing negative distractors, suggesting that the greater the inhibition the lower the RT cost for negative faces. Emotional interference effect was not found in the late VSTM-related parietal P300, frontal positive slow wave (PSW) and occipital negative slow wave (NSW) components. In general, our findings suggest that the VSTM load modulates the early attention and perception of emotional distractors. PMID:26388763
Neural signatures of conscious and unconscious emotional face processing in human infants.
Jessen, Sarah; Grossmann, Tobias
2015-03-01
Human adults can process emotional information both with and without conscious awareness, and it has been suggested that the two processes rely on partly distinct brain mechanisms. However, the developmental origins of these brain processes are unknown. In the present event-related brain potential (ERP) study, we examined the brain responses of 7-month-old infants in response to subliminally (50 and 100 msec) and supraliminally (500 msec) presented happy and fearful facial expressions. Our results revealed that infants' brain responses (Pb and Nc) over central electrodes distinguished between emotions irrespective of stimulus duration, whereas the discrimination between emotions at occipital electrodes (N290 and P400) only occurred when faces were presented supraliminally (above threshold). This suggests that early in development the human brain not only discriminates between happy and fearful facial expressions irrespective of conscious perception, but also that, similar to adults, supraliminal and subliminal emotion processing relies on distinct neural processes. Our data further suggest that the processing of emotional facial expressions differs across infants depending on their behaviorally shown perceptual sensitivity. The current ERP findings suggest that distinct brain processes underpinning conscious and unconscious emotion perception emerge early in ontogeny and can therefore be seen as a key feature of human social functioning. Copyright © 2014 Elsevier Ltd. All rights reserved.
Zhu, Lin L; Beauchamp, Michael S
2017-03-08
Cortex in and around the human posterior superior temporal sulcus (pSTS) is known to be critical for speech perception. The pSTS responds to both the visual modality (especially biological motion) and the auditory modality (especially human voices). Using fMRI in single subjects with no spatial smoothing, we show that visual and auditory selectivity are linked. Regions of the pSTS were identified that preferred visually presented moving mouths (presented in isolation or as part of a whole face) or moving eyes. Mouth-preferring regions responded strongly to voices and showed a significant preference for vocal compared with nonvocal sounds. In contrast, eye-preferring regions did not respond to either vocal or nonvocal sounds. The converse was also true: regions of the pSTS that showed a significant response to speech or preferred vocal to nonvocal sounds responded more strongly to visually presented mouths than eyes. These findings can be explained by environmental statistics. In natural environments, humans see visual mouth movements at the same time as they hear voices, while there is no auditory accompaniment to visual eye movements. The strength of a voxel's preference for visual mouth movements was strongly correlated with the magnitude of its auditory speech response and its preference for vocal sounds, suggesting that visual and auditory speech features are coded together in small populations of neurons within the pSTS. SIGNIFICANCE STATEMENT Humans interacting face to face make use of auditory cues from the talker's voice and visual cues from the talker's mouth to understand speech. The human posterior superior temporal sulcus (pSTS), a brain region known to be important for speech perception, is complex, with some regions responding to specific visual stimuli and others to specific auditory stimuli. Using BOLD fMRI, we show that the natural statistics of human speech, in which voices co-occur with mouth movements, are reflected in the neural architecture of the pSTS. Different pSTS regions prefer visually presented faces containing either a moving mouth or moving eyes, but only mouth-preferring regions respond strongly to voices. Copyright © 2017 the authors 0270-6474/17/372697-12$15.00/0.
Being BOLD: The neural dynamics of face perception.
Gentile, Francesco; Ales, Justin; Rossion, Bruno
2017-01-01
According to a non-hierarchical view of human cortical face processing, selective responses to faces may emerge in a higher-order area of the hierarchy, the lateral part of the middle fusiform gyrus (fusiform face area [FFA]), independently from face-selective responses in the lateral inferior occipital gyrus (occipital face area [OFA]), a lower-order area. Here we provide a stringent test of this hypothesis by gradually revealing segmented face stimuli through strict linear descrambling of phase information [Ales et al., 2012]. Using a short sampling rate (500 ms) of fMRI acquisition and single-subject statistical analysis, we show face-selective responses emerging earlier, that is, at a lower level of structural (i.e., phase) information, in the FFA compared with the OFA. In both regions, a face detection response emerged at a lower level of structural information for upright than for inverted faces, in line with behavioral responses and with previous findings of delayed responses to inverted faces in direct recordings of neural activity. Overall, these results support the non-hierarchical view of human cortical face processing and open new perspectives for time-resolved analysis, at the single-subject level, of fMRI data obtained during continuously evolving visual stimulation. Hum Brain Mapp 38:120-139, 2017. © 2016 Wiley Periodicals, Inc.
Soria Bauser, Denise A; Schriewer, Elisabeth; Suchan, Boris
2015-08-01
Several studies have reported similarities between perceptual processes underlying face and body perception, particularly emphasizing the importance of configural processes. Differences between the perception of faces and the perception of bodies were observed by means of a manipulation targeting a specific subtype of configural processing: the composite illusion. The composite face illusion describes the fact that two identical top halves of a face are perceived as being different if they are presented with different bottom parts. This effect disappears if both halves are laterally shifted. Crucially, the effect of misalignment is not observed for bodies. This study aimed to further explore differences in the time course of face and body perception by using the composite effect. The present results replicated behavioural effects illustrating that misalignment affects the perception of faces but not bodies. Thus, face but not body perception relies on holistic processing. However, differences in the time course of the processing of both stimulus categories emerged at the N170 and P200. The pattern of the behavioural data seemed to be related to the P200. Thus, the present data indicate that holistic processes associated with the effect of misalignment might occur 200 ms after stimulus onset. © 2014 The British Psychological Society.
Dog owners show experience-based viewing behaviour in judging dog face approachability.
Gavin, Carla Jade; Houghton, Sarah; Guo, Kun
2017-01-01
Our prior visual experience plays a critical role in face perception. We show superior perceptual performance for differentiating conspecific (vs non-conspecific), own-race (vs other-race) and familiar (vs unfamiliar) faces. However, it remains unclear whether our experience with faces of other species would influence our gaze allocation for extracting salient facial information. In this eye-tracking study, we asked both dog owners and non-owners to judge the approachability of human, monkey and dog faces, and systematically compared their behavioural performance and gaze pattern associated with the task. Compared to non-owners, dog owners assessed dog faces in less time and with fewer fixations, but gave higher approachability ratings. The gaze allocation within local facial features was also modulated by ownership. The average proportions of fixations and viewing time directed at the dog mouth region were significantly lower for dog owners, and more experienced dog owners tended to look more at the dog eyes, suggesting the adoption of a prior-experience-based viewing behaviour for assessing dog approachability. No differences in behavioural performance and gaze pattern were observed between dog owners and non-owners when judging human and monkey faces, implying that the dog owners' experience-based gaze strategy for viewing dog faces was not transferable to faces of other species.
Rivera-Gutierrez, Diego; Ferdig, Rick; Li, Jian; Lok, Benjamin
2014-04-01
We have created You, M.D., an interactive museum exhibit in which users learn about topics in public health literacy while interacting with virtual humans. You, M.D. is equipped with a weight sensor, a height sensor and a Microsoft Kinect that gather basic user information. Conceptually, You, M.D. could use this information to dynamically select the appearance of the virtual humans in the interaction, attempting to improve learning outcomes and user perception for each particular user. For this concept to be possible, a better understanding of how different elements of the visual appearance of a virtual human affect user perceptions is required. In this paper, we present the results of an initial user study with a large sample size (n = 333) run using You, M.D. The study measured users' reactions based on the user's gender and body-mass index (BMI) when facing virtual humans with a BMI either concordant or discordant with the user's BMI. The results of the study indicate that concordance between the user's BMI and the virtual human's BMI affects male and female users differently. The results also show that female users rate virtual humans as more knowledgeable than male users rate the same virtual humans.
Network Configurations in the Human Brain Reflect Choice Bias during Rapid Face Processing.
Tu, Tao; Schneck, Noam; Muraskin, Jordan; Sajda, Paul
2017-12-13
Network interactions are likely to be instrumental in processes underlying rapid perception and cognition. Specifically, high-level and perceptual regions must interact to balance pre-existing models of the environment with new incoming stimuli. Simultaneous electroencephalography (EEG) and fMRI (EEG/fMRI) enables temporal characterization of brain-network interactions combined with improved anatomical localization of regional activity. In this paper, we use simultaneous EEG/fMRI and multivariate dynamical systems (MDS) analysis to characterize network relationships between constituent brain areas that reflect a subject's choice for a face versus nonface categorization task. Our simultaneous EEG and fMRI analysis on 21 human subjects (12 males, 9 females) identifies early perceptual and late frontal subsystems that are selective to the categorical choice of faces versus nonfaces. We analyze the interactions between these subsystems using an MDS in the space of the BOLD signal. Our main findings show that differences between face-choice and house-choice networks are seen in the network interactions between the early and late subsystems, and that the magnitude of the difference in network interaction positively correlates with the behavioral false-positive rate of face choices. We interpret this to reflect the role of saliency and expectations, likely encoded in frontal "late" regions, on perceptual processes occurring in "early" perceptual regions. SIGNIFICANCE STATEMENT Our choices are affected by our biases. In visual perception and cognition such biases can be commonplace and quite curious: e.g., we see a human face when staring up at a cloud formation or down at a piece of toast at the breakfast table. Here we use multimodal neuroimaging and dynamical systems analysis to measure whole-brain spatiotemporal dynamics while subjects make decisions regarding the type of object they see in rapidly flashed images. We find that the degree of interaction in these networks accounts for a substantial fraction of our bias to see faces. In general, our findings illustrate how the properties of spatiotemporal networks yield insight into the mechanisms of how we form decisions. Copyright © 2017 the authors 0270-6474/17/3712226-12$15.00/0.
Bahrick, Lorraine E.; Lickliter, Robert; Castellanos, Irina
2014-01-01
Although research has demonstrated impressive face perception skills of young infants, little attention has focused on conditions that enhance versus impair infant face perception. The present studies tested the prediction, generated from the Intersensory Redundancy Hypothesis (IRH), that face discrimination, which relies on detection of visual featural information, would be impaired in the context of intersensory redundancy provided by audiovisual speech, and enhanced in the absence of intersensory redundancy (unimodal visual and asynchronous audiovisual speech) in early development. Later in development, following improvements in attention, faces should be discriminated in both redundant audiovisual and nonredundant stimulation. Results supported these predictions. Two-month-old infants discriminated a novel face in unimodal visual and asynchronous audiovisual speech but not in synchronous audiovisual speech. By 3 months, face discrimination was evident even during synchronous audiovisual speech. These findings indicate that infant face perception is enhanced and emerges developmentally earlier following unimodal visual than synchronous audiovisual exposure and that intersensory redundancy generated by naturalistic audiovisual speech can interfere with face processing. PMID:23244407
Parkinson, Jim; Garfinkel, Sarah; Critchley, Hugo; Dienes, Zoltan; Seth, Anil K
2017-04-01
Volitional action and self-control, the feelings of acting according to one's own intentions and of being in control of one's own actions, are fundamental aspects of human conscious experience. However, it is unknown whether high-level cognitive control mechanisms are affected by socially salient but nonconscious emotional cues. In this study, we manipulated free choice decisions to act or withhold an action by subliminally presenting emotional faces: In a novel version of the Go/NoGo paradigm, participants made speeded button-press responses to Go targets, withheld responses to NoGo targets, and made spontaneous, free choices to execute or withhold the response for Choice targets. Before each target, we presented emotional faces, backwards masked to render them nonconscious. In Intentional trials, subliminal angry faces made participants more likely to voluntarily withhold the action, whereas fearful and happy faces had no effects. In a second experiment, the faces were made supraliminal, which eliminated the effects of angry faces on volitional choices. A third experiment measured neural correlates of the effects of subliminal angry faces on intentional choice using EEG. After replicating the behavioural results found in Experiment 1, we identified a frontal-midline theta component, associated with cognitive control processes, which is present for volitional decisions and is modulated by subliminal angry faces. This suggests a mechanism whereby subliminally presented "threat" stimuli affect conscious control processes. In summary, nonconscious perception of angry faces increases choices to inhibit, and subliminal influences on volitional action are deep-seated and ecologically embedded.
Lee, Juhun; Fingeret, Michelle C; Bovik, Alan C; Reece, Gregory P; Skoracki, Roman J; Hanasono, Matthew M; Markey, Mia K
2015-03-27
Patients with facial cancers can experience disfigurement as they may undergo considerable appearance changes from their illness and its treatment. Individuals with difficulties adjusting to facial cancer are concerned about how others perceive and evaluate their appearance. Therefore, it is important to understand how humans perceive disfigured faces. We describe a new strategy that allows simulation of surgically plausible facial disfigurement on a novel face for elucidating human perception of facial disfigurement. Longitudinal 3D facial images of patients (N = 17) with facial disfigurement due to cancer treatment were replicated using a facial mannequin model, by applying Thin-Plate Spline (TPS) warping and linear interpolation on the facial mannequin model in polar coordinates. Principal Component Analysis (PCA) was used to capture longitudinal structural and textural variations found within each patient with facial disfigurement arising from the treatment. We treated such variations as disfigurement. Each disfigurement was smoothly stitched onto a healthy face by seeking a Poisson solution to guided interpolation using the gradient of the learned disfigurement as the guidance field vector. The modeling technique was quantitatively evaluated. In addition, panel ratings of experienced medical professionals on the plausibility of simulation were used to evaluate the proposed disfigurement model. The algorithm reproduced the given face effectively using a facial mannequin model with less than 4.4 mm maximum error for the validation fiducial points that were not used for the processing. Panel ratings of experienced medical professionals on the plausibility of simulation showed that the disfigurement model (especially for peripheral disfigurement) yielded predictions comparable to the real disfigurements. The modeling technique of this study is able to capture facial disfigurements, and its simulation represents plausible outcomes of reconstructive surgery for facial cancers. Thus, our technique can be used to study human perception of facial disfigurement.
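To make the two operations named above concrete, here is a minimal Python sketch of a landmark-driven Thin-Plate Spline warp followed by Poisson (gradient-domain) stitching with OpenCV. This is not the authors' implementation; the file names, landmark arrays, and patch placement are hypothetical, and opencv-python plus NumPy are assumed.

```python
import cv2
import numpy as np

# Hypothetical inputs: images and (N, 2) landmark arrays; names are illustrative only.
src_img = cv2.imread("patient_face.png")
src_pts = np.load("patient_landmarks.npy").astype(np.float32)    # landmarks on the patient face
dst_pts = np.load("mannequin_landmarks.npy").astype(np.float32)  # corresponding mannequin landmarks

# Step 1: Thin-Plate Spline warp driven by landmark correspondences
# (OpenCV shape module; point ordering / warp direction conventions should be verified).
tps = cv2.createThinPlateSplineShapeTransformer()
matches = [cv2.DMatch(i, i, 0) for i in range(len(src_pts))]
tps.estimateTransformation(dst_pts.reshape(1, -1, 2), src_pts.reshape(1, -1, 2), matches)
warped = tps.warpImage(src_img)

# Step 2: stitch a learned disfigurement patch onto a healthy face by solving the
# Poisson guided-interpolation problem, here via OpenCV's seamlessClone.
healthy = cv2.imread("healthy_face.png")
patch = cv2.imread("disfigurement_patch.png")            # assumed smaller than the healthy face
mask = 255 * np.ones(patch.shape[:2], dtype=np.uint8)    # blend the whole patch
center = (healthy.shape[1] // 2, healthy.shape[0] // 2)  # hypothetical placement
blended = cv2.seamlessClone(patch, healthy, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("simulated_disfigurement.png", blended)
```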
Cortical visual prostheses: from microstimulation to functional percept
NASA Astrophysics Data System (ADS)
Najarpour Foroushani, Armin; Pack, Christopher C.; Sawan, Mohamad
2018-04-01
Cortical visual prostheses are intended to restore vision by targeted electrical stimulation of the visual cortex. The perception of spots of light, called phosphenes, resulting from microstimulation of the visual pathway, suggests the possibility of creating meaningful percepts made of phosphenes. However, to date, electrical stimulation of V1 has still not resulted in the perception of phosphenated images that go beyond punctate spots of light. In this review, we summarize the clinical and experimental progress that has been made in generating phosphenes and modulating their associated perceptual characteristics in human and macaque primary visual cortex (V1). We focus specifically on the effects of different microstimulation parameters on perception, and we analyse key challenges facing the generation of meaningful artificial percepts. Finally, we propose solutions to these challenges based on the application of supervised learning of population codes for spatial stimulation of visual cortex.
Peelen, Marius V; Wiggett, Alison J; Downing, Paul E
2006-03-16
Accurate perception of the actions and intentions of other people is essential for successful interactions in a social environment. Several cortical areas that support this process respond selectively in fMRI to static and dynamic displays of human bodies and faces. Here we apply pattern-analysis techniques to arrive at a new understanding of the neural response to biological motion. Functionally defined body-, face-, and motion-selective visual areas all responded significantly to "point-light" human motion. Strikingly, however, only body selectivity was correlated, on a voxel-by-voxel basis, with biological motion selectivity. We conclude that (1) biological motion, through the process of structure-from-motion, engages areas involved in the analysis of the static human form; (2) body-selective regions in posterior fusiform gyrus and posterior inferior temporal sulcus overlap with, but are distinct from, face- and motion-selective regions; (3) the interpretation of region-of-interest findings may be substantially altered when multiple patterns of selectivity are considered.
Atypical Face Perception in Autism: A Point of View?
Morin, Karine; Guy, Jacalyn; Habak, Claudine; Wilson, Hugh R; Pagani, Linda; Mottron, Laurent; Bertone, Armando
2015-10-01
Face perception is the most commonly used visual metric of social perception in autism. However, when found to be atypical, the origin of face perception differences in autism is contentious. One hypothesis proposes that a locally oriented visual analysis, characteristic of individuals with autism, ultimately affects performance on face tasks where a global analysis is optimal. The objective of this study was to evaluate this hypothesis by assessing face identity discrimination with synthetic faces presented with and without changes in viewpoint, with the former condition minimizing access to local face attributes used for identity discrimination. Twenty-eight individuals with autism and 30 neurotypical participants performed a face identity discrimination task. Stimuli were synthetic faces extracted from traditional face photographs in both front and 20° side viewpoints, digitized from 37 points to provide a continuous measure of facial geometry. Face identity discrimination thresholds were obtained using a two-alternative, temporal forced choice match-to-sample paradigm. Analyses revealed an interaction between group and condition, with group differences found only for the viewpoint change condition, where performance in the autism group was decreased compared to that of neurotypical participants. The selective decrease in performance for the viewpoint change condition suggests that face identity discrimination in autism is more difficult when access to local cues is minimized, and/or when dependence on integrative analysis is increased. These results lend support to a perceptual contribution of atypical face perception in autism. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.
Eye contact perception in the West and East: a cross-cultural study.
Uono, Shota; Hietanen, Jari K
2015-01-01
This study investigated whether eye contact perception differs in people with different cultural backgrounds. Finnish (European) and Japanese (East Asian) participants were asked to determine whether Finnish and Japanese neutral faces with various gaze directions were looking at them. Further, participants rated the face stimuli for emotion and other affect-related dimensions. The results indicated that Finnish viewers had a smaller bias toward judging slightly averted gazes as directed at them when judging Finnish rather than Japanese faces, while the bias of Japanese viewers did not differ between faces from their own and other cultural backgrounds. This may be explained by Westerners experiencing more eye contact in their daily life leading to larger visual experience of gaze perception generally, and to more accurate perception of eye contact with people from their own cultural background particularly. The results also revealed cultural differences in the perception of emotion from neutral faces that could also contribute to the bias in eye contact perception.
Kitada, Ryo; Johnsrude, Ingrid S; Kochiyama, Takanori; Lederman, Susan J
2009-10-01
Humans can recognize common objects by touch extremely well whenever vision is unavailable. Despite its importance to a thorough understanding of human object recognition, the neuroscientific study of this topic has been relatively neglected. To date, the few published studies have addressed the haptic recognition of nonbiological objects. We now focus on haptic recognition of the human body, a particularly salient object category for touch. Neuroimaging studies demonstrate that regions of the occipito-temporal cortex are specialized for visual perception of faces (fusiform face area, FFA) and other body parts (extrastriate body area, EBA). Are the same category-sensitive regions activated when these components of the body are recognized haptically? Here, we use fMRI to compare brain organization for haptic and visual recognition of human body parts. Sixteen subjects identified exemplars of faces, hands, feet, and nonbiological control objects using vision and haptics separately. We identified two discrete regions within the fusiform gyrus (FFA and the haptic face region) that were each sensitive to both haptically and visually presented faces; however, these two regions differed significantly in their response patterns. Similarly, two regions within the lateral occipito-temporal area (EBA and the haptic body region) were each sensitive to body parts in both modalities, although the response patterns differed. Thus, although the fusiform gyrus and the lateral occipito-temporal cortex appear to exhibit modality-independent, category-sensitive activity, our results also indicate a degree of functional specialization related to sensory modality within these structures.
Singh, Leher; Loh, Darrell; Xiao, Naiqi G.
2017-01-01
Perceptual narrowing is a highly significant development associated with the first year of life. It conventionally refers to an orientation toward nativeness whereby infants' perceptual sensitivities begin to align with the phonetic properties of their native environment. Nativeness effects, such as perceptual narrowing, have been observed in several domains, most notably in face discrimination within other-race faces and speech discrimination of non-native phonemes. Thus far, nativeness effects in face and speech perception have been theoretically linked, but have mostly been investigated independently. An important caveat to nativeness effects is that diversifying experiences, such as bilingualism or multiracial exposure, can lead to a reduction or postponement in attunement to the native environment. The present study was designed to investigate whether bilingualism influences nativeness effects in phonetic and face perception. Eleven-month-old monolingual and bilingual infants were tested on their abilities to discriminate native and non-native speech contrasts as well as own-race and other-race face contrasts. While monolingual infants demonstrated nativeness effects in face and speech perception, bilingual infants demonstrated nativeness effects in face perception but demonstrated flexibility in speech perception. Results support domain-specific effects of bilingual experience on nativeness effects. PMID:28955278
Putting a Human Face on Chemistry: A Project for Liberal Arts Chemistry.
ERIC Educational Resources Information Center
Kriz, George; Popejoy, Kate
A collaborative project in liberal arts chemistry, involving faculty in chemistry and science education, is described. The project includes various components: an introductory test (DAST) to examine students' perceptions of scientists, a group library research exercise, oral and written presentation of the results of the library research, a…
Matsuda, Yoshi-Taka; Okamoto, Yoko; Ida, Misako; Okanoya, Kazuo; Myowa-Yamakoshi, Masako
2012-10-23
The 'uncanny valley' response is a phenomenon involving the elicitation of a negative feeling and subsequent avoidant behaviour in human adults and infants as a result of viewing very realistic human-like robots or computer avatars. It is hypothesized that this uncanny feeling occurs because the realistic synthetic characters elicit the concept of 'human' but fail to satisfy it. Such violations of our normal expectations regarding social signals generate a feeling of unease. This conflict-induced uncanny valley between mutually exclusive categories (human and synthetic agent) raises a new question: could an uncanny feeling be elicited by other mutually exclusive categories, such as familiarity and novelty? Given that infants prefer both familiarity and novelty in social objects, we address this question as well as the associated developmental profile. Using the morphing technique and a preferential-looking paradigm, we demonstrated uncanny valley responses of infants to faces of mothers (i.e. familiarity) and strangers (i.e. novelty). Furthermore, this effect strengthened with the infant's age. We excluded the possibility that infants detect and avoid traces of morphing. This conclusion follows from our finding that the infants equally preferred strangers' faces and the morphed faces of two strangers. These results indicate that an uncanny valley between familiarity and novelty may accentuate the categorical perception of familiar and novel objects.
Miki, Kensaku; Takeshima, Yasuyuki; Watanabe, Shoko; Honda, Yukiko; Kakigi, Ryusuke
2011-04-06
We investigated the effects of inverting facial contour (hair and chin) and features (eyes, nose and mouth) on processing for static and dynamic face perception using magnetoencephalography (MEG). We used apparent motion, in which the first stimulus (S1) was replaced by a second stimulus (S2) with no interstimulus interval and subjects perceived visual motion, and presented three conditions as follows: (1) U&U: Upright contour and Upright features, (2) U&I: Upright contour and Inverted features, and (3) I&I: Inverted contour and Inverted features. In static face perception (S1 onset), the peak latency of the fusiform area's activity, which was related to static face perception, was significantly longer for U&I and I&I than for U&U in the right hemisphere and for U&I than for U&U and I&I in the left. In dynamic face perception (S2 onset), the strength (moment) of the occipitotemporal area's activity, which was related to dynamic face perception, was significantly larger for I&I than for U&U and U&I in the right hemisphere, but not the left. These results can be summarized as follows: (1) in static face perception, the activity of the right fusiform area was more affected by the inversion of features while that of the left fusiform area was more affected by the disruption of the spatial relation between the contour and features, and (2) in dynamic face perception, the activity of the right occipitotemporal area was affected by the inversion of the facial contour. Copyright © 2011 Elsevier B.V. All rights reserved.
The Occipital Face Area Is Causally Involved in Facial Viewpoint Perception.
Kietzmann, Tim C; Poltoratski, Sonia; König, Peter; Blake, Randolph; Tong, Frank; Ling, Sam
2015-12-16
Humans reliably recognize faces across a range of viewpoints, but the neural substrates supporting this ability remain unclear. Recent work suggests that neural selectivity to mirror-symmetric viewpoints of faces, found across a large network of visual areas, may constitute a key computational step in achieving full viewpoint invariance. In this study, we used repetitive transcranial magnetic stimulation (rTMS) to test the hypothesis that the occipital face area (OFA), putatively a key node in the face network, plays a causal role in face viewpoint symmetry perception. Each participant underwent both offline rTMS to the right OFA and sham stimulation, preceding blocks of behavioral trials. After each stimulation period, the participant performed one of two behavioral tasks involving presentation of faces in the peripheral visual field: (1) judging the viewpoint symmetry; or (2) judging the angular rotation. rTMS applied to the right OFA significantly impaired performance in both tasks when stimuli were presented in the contralateral, left visual field. Interestingly, however, rTMS had a differential effect on the two tasks performed ipsilaterally. Although viewpoint symmetry judgments were significantly disrupted, we observed no effect on the angle judgment task. This interaction, caused by ipsilateral rTMS, provides support for models emphasizing the role of interhemispheric crosstalk in the formation of viewpoint-invariant face perception. Faces are among the most salient objects we encounter during our everyday activities. Moreover, we are remarkably adept at identifying people at a glance, despite the diversity of viewpoints during our social encounters. Here, we investigate the cortical mechanisms underlying this ability by focusing on effects of viewpoint symmetry, i.e., the invariance of neural responses to mirror-symmetric facial viewpoints. We did this by temporarily disrupting neural processing in the occipital face area (OFA) using transcranial magnetic stimulation. Our results demonstrate that the OFA causally contributes to judgments of facial viewpoints and suggest that effects of viewpoint symmetry, previously observed using fMRI, arise from an interhemispheric integration of visual information even when only one hemisphere receives direct visual stimulation. Copyright © 2015 the authors 0270-6474/15/3516398-06$15.00/0.
Yang, Tao; Penton, Tegan; Köybaşı, Şerife Leman; Banissy, Michael J
2017-09-01
Previous findings suggest that older adults show impairments in the social perception of faces, including the perception of emotion and facial identity. The majority of this work has tended to examine performance on tasks involving young adult faces and prototypical emotions. While useful, this can influence performance differences between groups due to perceptual biases and limitations on task performance. Here we sought to examine how typical aging is associated with the perception of subtle changes in facial happiness and facial identity in older adult faces. We developed novel tasks that permitted assessment of facial happiness, facial identity, and non-social perception (object perception) across similar task parameters. We observe that aging is linked with declines in the ability to make fine-grained judgements in the perception of facial happiness and facial identity (from older adult faces), but not for non-social (object) perception. This pattern of results is discussed in relation to mechanisms that may contribute to declines in facial perceptual processing in older adulthood. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
[Mothers of children with autistic disorder: perceptions and trajectories].
Ebert, Michele; Lorenzini, Elisiane; da Silva, Eveline Franco
2015-03-01
Childhood autism is characterized by severe and global impairment in several areas of human development and demands extensive care and dependence on the parents. The objective of this study was to understand the perceptions of mothers of children with autism regarding changes suffered by the child and their trajectories in search of an autism diagnosis. This is an exploratory descriptive study with a qualitative approach conducted with ten participant mothers. Data were collected in 2013 by means of semi-structured interviews. Thematic content analysis produced the following categories: perceptions of mothers as to changes in behaviour and/or development of their children; and trajectories of mothers in search of a diagnosis for their children. After the perception of changes in behaviour/development, mothers face an arduous trajectory of healthcare service utilization.
NASA Astrophysics Data System (ADS)
Kalsow, Susan Christensen
1999-11-01
The problem. The dual purposes of this research were to determine if there is a difference in student performance in three Human Development classes when the modes of delivery are different and to analyze student perceptions of using Web-based learning as all or part of their course experience. Procedures. Data for this study were collected from three Human Development courses taught at Drake University. Grades from five essays, projects, and overall course grades from the three classes were analyzed using a single-factor analysis of variance to determine if there was a significant difference. Content analysis was used on the evaluation comments of the participants in the online and combined classes to determine their perceptions of Web-based learning. Findings. The single-factor analysis of variance measuring student performance showed no significant difference among the online, face-to-face, and combined scores at the .05 level of significance; however, the difference was significant at the .06 level. The content analysis of the online and combined courses showed the three major strengths of learning totally or partly online to be increased comfort in using the computer, the quality of the overall experience, and convenience in terms of increased access to educational opportunities. The barriers included lack of human interaction and access to the professor. Conclusions. The study indicates that Web-based learning is a viable option for postsecondary educational delivery in terms of student performance and learning. On average, performance is at least as good as performance in traditional face-to-face classrooms. Improved performance, however, is contingent on adequate access to equipment, faculty skill in teaching using a new mode of delivery, and the personality of the student. The convenient access to educational opportunities and becoming more comfortable with technology are benefits that were important to these two groups. Web-based learning is not for everyone, but Web-assisted learning may be. It has the potential to reach a population of students who otherwise would not have access to postsecondary education. Recommendations. Technology in the twenty-first century will continue to advance rapidly and affect our lives. Universities and colleges have the potential to reach a more diverse population, but face-to-face learning will always have value. Consideration must be given to how technology and the use of Web-based learning can be used in varying degrees to meet the needs of students. Classes in the future should have some expected component of navigation and productive use of online learning. Web classes vary from totally online to mostly face-to-face, but all students in the twenty-first century should be expected to know and use this powerful educational resource.
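The central comparison here is a single-factor (one-way) analysis of variance across the three delivery modes. A minimal Python sketch of that test, using hypothetical grade vectors rather than the study's data:

```python
from scipy import stats

# Hypothetical overall-grade vectors for the three delivery modes (illustrative values only)
online       = [88, 92, 79, 85, 90, 84]
face_to_face = [84, 91, 87, 80, 86, 89]
combined     = [90, 83, 88, 92, 81, 87]

f_stat, p_value = stats.f_oneway(online, face_to_face, combined)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # compare p against the .05 criterion used in the study
```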
Cloutier, Jasmin; Li, Tianyi; Mišic, Bratislav; Correll, Joshua; Berman, Marc G
2017-09-01
An extended distributed network of brain regions supports face perception. Face familiarity influences activity in brain regions involved in this network, but the impact of perceptual familiarity on this network has never been directly assessed with the use of partial least squares analysis. In the present work, we use this multivariate statistical analysis to examine how face-processing systems are differentially recruited by characteristics of the targets (i.e. perceptual familiarity and race) and of the perceivers (i.e. childhood interracial contact). Novel faces were found to preferentially recruit a large distributed face-processing network compared with perceptually familiar faces. Additionally, increased interracial contact during childhood led to decreased recruitment of distributed brain networks previously implicated in face perception, salience detection, and social cognition. Current results provide a novel perspective on the impact of cross-race exposure, suggesting that interracial contact early in life may dramatically shape the neural substrates of face perception generally. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
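Partial least squares analyses of imaging data are often run as a "PLS correlation": a singular value decomposition of the cross-covariance between brain measures and target or perceiver variables. The sketch below illustrates only that general idea; it is not the pipeline used in the study, and the array names and shapes are assumptions.

```python
import numpy as np

def pls_correlation(brain, design):
    """SVD of the brain-by-design cross-covariance (minimal PLS-correlation sketch).

    brain:  (n_observations, n_voxels) array of brain measures
    design: (n_observations, n_conditions) array coding target/perceiver variables
    Returns design saliences U, singular values s, and brain saliences V.
    """
    Xc = brain - brain.mean(axis=0)
    Yc = design - design.mean(axis=0)
    R = Yc.T @ Xc                                   # cross-covariance matrix
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    return U, s, Vt.T

# Hypothetical usage with random data standing in for real measurements
rng = np.random.default_rng(0)
U, s, V = pls_correlation(rng.standard_normal((40, 1000)), rng.standard_normal((40, 3)))
print(s[:3])  # strength of the leading latent variables
```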
The three modern faces of mercury.
Clarkson, Thomas W
2002-01-01
The three modern "faces" of mercury are our perceptions of risk from the exposure of billions of people to methyl mercury in fish, mercury vapor from amalgam tooth fillings, and ethyl mercury in the form of thimerosal added as an antiseptic to widely used vaccines. In this article I review human exposure to and the toxicology of each of these three species of mercury. Mechanisms of action are discussed where possible. Key gaps in our current knowledge are identified from the points of view both of risk assessment and of mechanisms of action. PMID:11834460
Young, Steven G; Hugenberg, Kurt; Bernstein, Michael J; Sacco, Donald F
2012-05-01
Although humans possess well-developed face processing expertise, face processing is nevertheless subject to a variety of biases. Perhaps the best known of these biases is the Cross-Race Effect--the tendency to have more accurate recognition for same-race than cross-race faces. The current work reviews the evidence for and provides a critical review of theories of the Cross-Race Effect, including perceptual expertise and social cognitive accounts of the bias. The authors conclude that recent hybrid models of the Cross-Race Effect, which combine elements of both perceptual expertise and social cognitive frameworks, provide an opportunity for theoretical synthesis and advancement not afforded by independent expertise or social cognitive models. Finally, the authors suggest future research directions intended to further develop a comprehensive and integrative understanding of biases in face recognition.
Duan, Junya; Wang, Yafei; Fan, Chen; Xia, Beicheng; de Groot, Rudolf
2018-05-28
Cities face many challenging environmental problems that affect human well-being. Environmental risks can be reduced by Urban Green Infrastructures (UGIs). The effects of UGIs on the urban environment have been widely studied, but less attention has been given to the public perception of these effects. This paper presents the results of a study in Guangzhou, China, on UGI users' perceptions of these effects and their relationship with sociodemographic variables. A questionnaire survey was conducted in four public green spaces. Descriptive statistics, a binary logistic regression model and cross-tabulation analysis were applied to the data from 396 valid questionnaires. The results show that UGI users were more concerned about poor air quality and high temperature than about flooding events. Their awareness of environmental risks was partly in accordance with official records. Regarding the perception of the impacts of environmental risks on human well-being, elderly and female respondents with higher education levels were the most sensitive to these impacts. The respondents' perceptions of these impacts differed among the different green spaces. The effects of UGIs were well perceived and directly observed by the UGI users, but were not significantly influenced by most sociodemographic variables. Moreover, tourists had a lower perception of the impacts of environmental risks and the effects of UGI than residents did. This study provides strong support for UGIs as an effective tool to mitigate environmental risks. Local governments should consider the role of UGIs in environmental risk mitigation and human well-being with regard to urban planning and policy making.
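A binary logistic regression of the kind reported models the probability of a yes/no perception response from sociodemographic predictors. A minimal sketch with hypothetical predictors and outcomes (the survey data themselves are not reproduced):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical rows: [age, gender (1 = female), education level, weekly UGI visits]
X = np.array([[65, 1, 4, 3],
              [23, 0, 2, 1],
              [41, 1, 3, 4],
              [58, 0, 1, 2],
              [30, 1, 4, 5],
              [72, 0, 2, 2]], dtype=float)
# Hypothetical outcome: 1 = perceives environmental-risk impacts on well-being, 0 = does not
y = np.array([1, 0, 1, 0, 1, 1])

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)
print(model.named_steps["logisticregression"].coef_)  # coefficient signs indicate direction of association
```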
Face-to-face: Perceived personal relevance amplifies face processing
Pittig, Andre; Schupp, Harald T.; Alpers, Georg W.
2017-01-01
The human face conveys emotional and social information, but it is not well understood how these two aspects influence face perception. In order to model a group situation, two faces displaying happy, neutral or angry expressions were presented. Importantly, faces were either facing the observer, or they were presented in profile view directed towards or looking away from each other. In Experiment 1 (n = 64), face pairs were rated regarding perceived relevance, wish-to-interact, and displayed interactivity, as well as valence and arousal. All variables revealed main effects of facial expression (emotional > neutral) and face orientation (facing observer > towards > away), and interactions showed that the evaluation of emotional faces strongly varies with their orientation. Experiment 2 (n = 33) examined the temporal dynamics of perceptual-attentional processing of these face constellations with event-related potentials. Processing of emotional and neutral faces differed significantly in N170 amplitudes, early posterior negativity (EPN), and sustained positive potentials. Importantly, selective emotional face processing varied as a function of face orientation, indicating early emotion-specific (N170, EPN) and late threat-specific effects (LPP, sustained positivity). Taken together, perceived personal relevance to the observer, conveyed by facial expression and face direction, amplifies emotional face processing within triadic group situations. PMID:28158672
Interaction between Social Categories in the Composite Face Paradigm
ERIC Educational Resources Information Center
Chen, Wenfeng; Ren, Naixin; Young, Andrew W.; Liu, Chang Hong
2018-01-01
The composite face paradigm (Young, Hellawell, & Hay, 1987) is widely used to demonstrate holistic perception of faces (Rossion, 2013). In the paradigm, parts from different faces (usually the top and bottom halves) are recombined. The principal criterion for holistic perception is that responses involving the component parts of composites in…
Human sex differences in emotional processing of own-race and other-race faces.
Ran, Guangming; Chen, Xu; Pan, Yangu
2014-06-18
There is evidence that women and men show differences in the perception of affective facial expressions. However, none of the previous studies directly investigated sex differences in emotional processing of own-race and other-race faces. The current study addressed this issue using high-temporal-resolution event-related potential techniques. In total, data from 25 participants (13 women and 12 men) were analyzed. It was found that women showed increased N170 amplitudes to negative White faces compared with negative Chinese faces over the right hemisphere electrodes. This result suggests that women show enhanced sensitivity to other-race faces showing negative emotions (fear or disgust), which may reflect an evolutionarily adaptive sensitivity. However, the current data showed that men had increased N170 amplitudes to happy Chinese versus happy White faces over the left hemisphere electrodes, indicating that men show enhanced sensitivity to own-race faces showing positive emotions (happiness). In this respect, men might use past pleasant emotional experiences to boost recognition of own-race faces.
Typical and Atypical Development of Functional Connectivity in the Face Network.
Song, Yiying; Zhu, Qi; Li, Jingguang; Wang, Xu; Liu, Jia
2015-10-28
Extensive studies have demonstrated that face recognition performance does not reach adult levels until adolescence. However, there is no consensus on whether such prolonged improvement stems from development of general cognitive factors or face-specific mechanisms. Here, we used behavioral experiments and functional magnetic resonance imaging (fMRI) to evaluate these two hypotheses. With a large cohort of children (n = 379), we found that the ability of face-specific recognition in humans increased with age throughout childhood and into late adolescence in both face memory and face perception. Neurally, to circumvent the potential problem of age differences in task performance, attention, or cognitive strategies in task-state fMRI studies, we measured the resting-state functional connectivity (RSFC) between the occipital face area (OFA) and fusiform face area (FFA) in human brain and found that the OFA-FFA RSFC increased until 11-13 years of age. Moreover, the OFA-FFA RSFC was selectively impaired in adults with developmental prosopagnosia (DP). In contrast, no age-related changes or differences between DP and normal adults were observed for RSFCs in the object system. Finally, the OFA-FFA RSFC matured earlier than face selectivity in either the OFA or FFA. These results suggest the critical role of the OFA-FFA RSFC in the development of face recognition. Together, our findings support the hypothesis that prolonged development of face recognition is face specific, not domain general. Copyright © 2015 the authors 0270-6474/15/3514624-12$15.00/0.
Njomo, Doris W; Karimurio, Jefitha; Odhiambo, Gladys O; Mukuria, Mukiri; Wanyama, Ernest B; Rono, Hillary K; Gichangi, Micheal
2016-01-01
Trachoma is the leading infectious cause of blindness in the world. It is commonly found in cultural groups with poor hygiene. Trachoma control includes Surgery, Antibiotics, Facial cleanliness and Environmental improvement (SAFE). Potentially blinding and active trachoma are monitored using trachomatous trichiasis (TT) in adults and trachoma inflammation-follicular (TF) in children aged 1-9 years, respectively. A cross-sectional study to assess the knowledge, practices and perceptions of trachoma and its control was conducted in the endemic communities in Narok County. Qualitative methods were used for data collection. Using purposive sampling, 12 focus group discussions (FGDs) with single-sex groups of adult and young men and women with homogeneous characteristics, 12 key informant interviews with opinion leaders, and 5 in-depth interviews (IDIs) with trichiasis patients and 6 with persons who have undergone trichiasis surgery were conducted. Data were audio-recorded, transcribed, coded and analyzed manually by study themes: knowledge, practices and perceptions of trachoma transmission, infection signs, prevention and control. The majority of the community members had knowledge of trachoma and its transmission. The practices that contributed to transmission of infection included: failure to wash faces and bathe regularly, sharing of water basins and towels for face washing, traditional methods of trachoma treatment and a dirty household environment. Due to socio-cultural perceptions, toilets were unacceptable and use of bushes for human waste disposal was common. Poor perceptions of disease susceptibility, flies on children's faces, latrine ownership and usage, and separation of human and animal dwellings also played a role in the transmission of trachoma. Fear of loss of sight during surgery was a deterrent to its uptake, and a desire to be able to see and take care of domestic animals promoted surgery uptake. The majority of the community members were appreciative of Mass Drug Administration (MDA), though side effects such as vomiting and diarrhoea were reported. Poor practices and related socio-cultural perceptions are important risk factors in sustaining trachoma infection and transmission. Community members require health education for behavior change and awareness creation about surgery, MDA and its potential side effects for the elimination of trachoma in Narok County, Kenya. KEMRI SSC 2785. Registered 2 September 2014.
Face perception is tuned to horizontal orientation in the N170 time window.
Jacques, Corentin; Schiltz, Christine; Goffaux, Valerie
2014-02-07
The specificity of face perception is thought to reside both in its dramatic vulnerability to picture-plane inversion and its strong reliance on horizontally oriented image content. Here we asked when in the visual processing stream face-specific perception is tuned to horizontal information. We measured the behavioral performance and scalp event-related potentials (ERP) when participants viewed upright and inverted images of faces and cars (and natural scenes) that were phase-randomized in a narrow orientation band centered either on vertical or horizontal orientation. For faces, the magnitude of the inversion effect (IE) on behavioral discrimination performance was significantly reduced for horizontally randomized compared to vertically or nonrandomized images, confirming the importance of horizontal information for the recruitment of face-specific processing. Inversion affected the processing of nonrandomized and vertically randomized faces early, in the N170 time window. In contrast, the magnitude of the N170 IE was much smaller for horizontally randomized faces. The present research indicates that the early face-specific neural representations are preferentially tuned to horizontal information and offers new perspectives for a description of the visual information feeding face-specific perception.
Face Recognition Deficits in Autism Spectrum Disorders Are Both Domain Specific and Process Specific
Weigelt, Sarah; Koldewyn, Kami; Kanwisher, Nancy
2013-01-01
Although many studies have reported face identity recognition deficits in autism spectrum disorders (ASD), two fundamental questions remain: 1) Is this deficit “process specific” for face memory in particular, or does it extend to perceptual discrimination of faces as well? And 2) Is the deficit “domain specific” for faces, or is it found more generally for other social or even nonsocial stimuli? The answers to these questions are important both for understanding the nature of autism and its developmental etiology, and for understanding the functional architecture of face processing in the typical brain. Here we show that children with ASD are impaired (compared to age- and IQ-matched typical children) in face memory, but not face perception, demonstrating process specificity. Further, we find no deficit for either memory or perception of places or cars, indicating domain specificity. Importantly, we further show deficits in both the perception and memory of bodies, suggesting that the relevant domain of deficit may be social rather than specifically facial. These results provide a more precise characterization of the cognitive phenotype of autism and further indicate a functional dissociation between face memory and face perception. PMID:24040276
Wang, Yamin; Zhou, Lu
2016-10-01
Most young Chinese people now learn about Caucasian individuals via media, especially American and European movies and television series (AEMT). The current study aimed to explore whether long-term exposure to AEMT facilitates Caucasian face perception in young Chinese watchers. Before the experiment, we created Chinese, Caucasian, and generic average faces (the generic average face was created from both Chinese and Caucasian faces) and tested participants' ability to identify them. In the experiment, we asked AEMT watchers and Chinese movie and television series (CMT) watchers to complete a facial norm detection task, a recently developed task for detecting the norms used in facial perception. The results indicated that AEMT watchers coded Caucasian faces relative to a Caucasian face norm better than relative to a generic face norm, whereas no such difference was found among CMT watchers. All watchers coded Chinese faces relative to a Chinese norm better than relative to a generic norm. The results suggested that long-term exposure to AEMT has the same effect as daily other-race face contact in shaping facial perception. © The Author(s) 2016.
Face averages enhance user recognition for smartphone security.
Robertson, David J; Kramer, Robin S S; Burton, A Mike
2015-01-01
Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average', a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when it stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as the development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings.
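The 'face-average' idea rests on a simple operation: averaging many photographs of the same person so that idiosyncratic lighting and pose variation cancels out. A minimal pixel-wise sketch follows, assuming the enrolment images are already aligned and share the same dimensions; the file names are hypothetical, and published averaging pipelines additionally warp images to a common face shape before averaging.

    import numpy as np
    from PIL import Image

    paths = ["enrol_01.jpg", "enrol_02.jpg", "enrol_03.jpg"]   # hypothetical files
    stack = np.stack([
        np.asarray(Image.open(p).convert("RGB"), dtype=np.float64)
        for p in paths
    ])                                   # shape: (n_images, height, width, 3)

    average = stack.mean(axis=0)         # per-pixel mean across enrolment images
    Image.fromarray(average.astype(np.uint8)).save("face_average.png")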
Lip colour affects perceived sex typicality and attractiveness of human faces.
Stephen, Ian D; McKeegan, Angela M
2010-01-01
The luminance contrast between facial features and facial skin is greater in women than in men, and women's use of make-up enhances this contrast. In black-and-white photographs, increased luminance contrast enhances femininity and attractiveness in women's faces, but reduces masculinity and attractiveness in men's faces. In Caucasians, much of the contrast between the lips and facial skin is in redness. Red lips have been considered attractive in women in geographically and temporally diverse cultures, possibly because they mimic vasodilation associated with sexual arousal. Here, we investigate the effects of lip luminance and colour contrast on the attractiveness and sex typicality (masculinity/femininity) of human faces. In a Caucasian sample, we allowed participants to manipulate the colour of the lips in colour-calibrated face photographs along CIELab L* (light-dark), a* (red-green), and b* (yellow-blue) axes to enhance apparent attractiveness and sex typicality. Participants increased redness contrast to enhance femininity and attractiveness of female faces, but reduced redness contrast to enhance masculinity of men's faces. Lip blueness was reduced more in female than male faces. Increased lightness contrast enhanced the attractiveness of both sexes, and had little effect on perceptions of sex typicality. The association between lip colour contrast and attractiveness in women's faces may be attributable to its association with oxygenated blood perfusion indicating oestrogen levels, sexual arousal, and cardiac and respiratory health.
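The colour manipulation described here operates on the CIELab a* (red-green) channel of lip pixels relative to the surrounding skin. A rough sketch of that kind of edit is given below; the file name, the rectangular lip mask, and the size of the shift are illustrative assumptions rather than the study's stimulus pipeline.

    import numpy as np
    from skimage import color, io

    img = io.imread("face.png")[..., :3] / 255.0       # hypothetical RGB image in [0, 1]
    lab = color.rgb2lab(img)                           # L*, a*, b* channels

    lip_mask = np.zeros(img.shape[:2], dtype=bool)     # placeholder lip-region mask
    lip_mask[120:140, 80:120] = True                   # e.g. a hand-drawn rectangle

    lab[..., 1][lip_mask] += 10.0                      # raise a* (redness) of lip pixels only
    out = np.clip(color.lab2rgb(lab), 0.0, 1.0)        # back to RGB
    io.imsave("face_redder_lips.png", (out * 255).astype(np.uint8))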
Is fear perception special? Evidence at the level of decision-making and subjective confidence.
Koizumi, Ai; Mobbs, Dean; Lau, Hakwan
2016-11-01
Fearful faces are believed to be prioritized in visual perception. However, it is unclear whether the processing of low-level facial features alone can facilitate such prioritization or whether higher-level mechanisms also contribute. We examined potential biases for fearful face perception at the levels of perceptual decision-making and perceptual confidence. We controlled for lower-level visual processing capacity by titrating luminance contrasts of backward masks, and the emotional intensity of fearful, angry and happy faces. Under these conditions, participants showed liberal biases in perceiving a fearful face, in both detection and discrimination tasks. This effect was stronger among individuals with reduced density in dorsolateral prefrontal cortex, a region linked to perceptual decision-making. Moreover, participants reported higher confidence when they accurately perceived a fearful face, suggesting that fearful faces may have privileged access to consciousness. Together, the results suggest that mechanisms in the prefrontal cortex contribute to making fearful face perception special. © The Author (2016). Published by Oxford University Press.
Master's Thesis Projects: Student Perceptions of Supervisor Feedback
ERIC Educational Resources Information Center
de Kleijn, Renske A. M.; Mainhard, M. Tim; Meijer, Paulien C.; Brekelmans, Mieke; Pilot, Albert
2013-01-01
A growing body of research has investigated student perceptions of written feedback in higher education coursework, but few studies have considered feedback perceptions in one-on-one and face-to-face contexts such as master's thesis projects. In this article, student perceptions of feedback are explored in the context of the supervision of…
Huang, Yujing; Pan, Xuwei; Mo, Yan; Ma, Qingguo
2016-03-23
Perceptions of facial attractiveness are sensitive to the emotional expression of the perceived face. However, little is known about whether the emotional expression on the face of another observer of the perceived face may affect perceptions of facial attractiveness. The present study used the event-related potential technique to examine this social influence. The experiment consisted of two phases. In the first phase, a neutral target face was paired with two images of individuals gazing at the target face with smiling, fearful or neutral expressions. In the second phase, participants were asked to judge the attractiveness of the target face. We found that a target face was judged as more attractive when the other observers had gazed at it with positive expressions than when the other observers' expressions were negative. Additionally, the brain potential results showed that the visual positive component P3, with peak latency from 270 to 330 ms, was larger after participants observed the target face paired with smiling individuals than after they observed the target face paired with neutral individuals. These findings suggest that the facial attractiveness of an individual may be influenced by the emotional expression on the face of another observer of the perceived face. Copyright © 2016. Published by Elsevier Ireland Ltd.
NASA Astrophysics Data System (ADS)
Sandini, Giulio; Morasso, Pietro
2018-03-01
In engineering cybernetics, observability is a measure of how well internal states of a system can be inferred from knowledge of its external outputs. Moreover, observability and controllability of a system are mathematically inter-related properties in the sense that it does not matter to have access to hidden states if this knowledge is not exploited for achieving a goal. While such issues can be well posed in the engineering field, in cognitive neuroscience it is quite difficult to restrict the analysis in such a way to isolate direct perception from other cognitive processes, named as "inferences" by the authors [1], without losing a great part of the action (unless one trivializes the meaning of "direct" by stating that "all perception is direct": Gallagher and Zahavi [6]). In other words, in spite of the elegance and scientific rigor of the proposed experimental strategy, in our opinion it misses the fact that in real human-human interactions "direct perception" and "inference" are two faces of the same coin and mental states in a social context are, in a general sense, accessible on the basis of directly perceived sensory signals (here and now) tuned by expectations. In the following, we elaborate this opinion with reference to a competitive interaction paradigm, namely the attempt of a goalkeeper to save a soccer penalty kick by "reading the mind" of his opponent.
Skuse, David H.; Lori, Adriana; Cubells, Joseph F.; Lee, Irene; Conneely, Karen N.; Puura, Kaija; Lehtimäki, Terho; Binder, Elisabeth B.; Young, Larry J.
2014-01-01
The neuropeptides oxytocin and vasopressin are evolutionarily conserved regulators of social perception and behavior. Evidence is building that they are critically involved in the development of social recognition skills within rodent species, primates, and humans. We investigated whether common polymorphisms in the genes encoding the oxytocin and vasopressin 1a receptors influence social memory for faces. Our sample comprised 198 families, from the United Kingdom and Finland, in whom a single child had been diagnosed with high-functioning autism. Previous research has shown that impaired social perception, characteristic of autism, extends to the first-degree relatives of autistic individuals, implying heritable risk. Assessments of face recognition memory, discrimination of facial emotions, and direction of gaze detection were standardized for age (7–60 y) and sex. A common SNP in the oxytocin receptor (rs237887) was strongly associated with recognition memory in combined probands, parents, and siblings after correction for multiple comparisons. Homozygotes for the ancestral A allele had impairments in the range −0.6 to −1.15 SD scores, irrespective of their diagnostic status. Our findings imply that a critical role for the oxytocin system in social recognition has been conserved across perceptual boundaries through evolution, from olfaction in rodents to visual memory in humans. PMID:24367110
Bortolon, Catherine; Capdevielle, Delphine; Altman, Rosalie; Macgregor, Alexandra; Attal, Jérôme; Raffard, Stéphane
2017-07-01
Self-face recognition is crucial for the sense of identity and for maintaining a coherent sense of self. Most of our daily-life experiences with the image of our own face happen when we look at ourselves in the mirror. However, to date, mirror self-perception in schizophrenia has received little attention despite evidence that face recognition deficits and self abnormalities have been described in schizophrenia. Thus, this study aimed to investigate mirror self-face perception in schizophrenia patients and its correlation with clinical symptoms. Twenty-four schizophrenia patients and twenty-five healthy controls were explicitly requested to describe their image in detail for 2 min whilst looking at themselves in a mirror. Then, they were asked to report whether they experienced any self-face recognition difficulties. Results showed that schizophrenia patients reported more feelings of strangeness towards their face compared to healthy controls (U=209.5, p=0.048, r=0.28), but no statistically significant differences were found regarding misidentification (p=0.111) or failures in recognition (p=0.081). Symptoms such as hallucinations, somatic concerns and depression were also associated with self-face perception abnormalities (all p-values<0.05). Feelings of strangeness toward one's own face in schizophrenia might be part of a familiar face perception deficit or of a more global self-disturbance, which is characterized by a loss of self-other boundaries and has been associated with abnormal body experiences and first-rank symptoms. Regarding this last hypothesis, multisensory integration might have an impact on the way patients perceive themselves, since it has an important role in mirror self-perception. Copyright © 2017. Published by Elsevier B.V.
Job Loss at Mid-Life: Managers and Executives Face the "New Risk Economy"
ERIC Educational Resources Information Center
Mendenhall, Ruby; Kalil, Ariel; Spindel, Laurel J.; Hart, Cassandra M. D.
2008-01-01
We use a life course framework to examine how the "new risk economy" has left middle-age professionals, managers and executives more vulnerable to job loss and unemployment despite high levels of human capital. Using in-depth qualitative data from 77 recently-unemployed white-collar workers, we examine perceptions of macro-economic…
Adaptation to faces and voices: unimodal, cross-modal, and sex-specific effects.
Little, Anthony C; Feinberg, David R; Debruine, Lisa M; Jones, Benedict C
2013-11-01
Exposure, or adaptation, to faces or voices biases perceptions of subsequent stimuli, for example, causing faces to appear more normal than they would be otherwise if they are similar to the previously presented stimuli. Studies also suggest that there may be cross-modal adaptation between sound and vision, although the evidence is inconsistent. We examined adaptation effects within and across voices and faces and also tested whether adaptation crosses between male and female stimuli. We exposed participants to sex-typical or sex-atypical stimuli and measured the perceived normality of subsequent stimuli. Exposure to female faces or voices altered perceptions of subsequent female stimuli, and these adaptation effects crossed modality; exposure to voices influenced judgments of faces, and vice versa. We also found that exposure to female stimuli did not influence perception of subsequent male stimuli. Our data demonstrate that recent experience of faces and voices changes subsequent perception and that mental representations of faces and voices may not be modality dependent. Both unimodal and cross-modal adaptation effects appear to be relatively sex-specific.
Neural architecture underlying classification of face perception paradigms.
Laird, Angela R; Riedel, Michael C; Sutherland, Matthew T; Eickhoff, Simon B; Ray, Kimberly L; Uecker, Angela M; Fox, P Mickle; Turner, Jessica A; Fox, Peter T
2015-10-01
We present a novel strategy for deriving a classification system of functional neuroimaging paradigms that relies on hierarchical clustering of experiments archived in the BrainMap database. The goal of our proof-of-concept application was to examine the underlying neural architecture of the face perception literature from a meta-analytic perspective, as these studies include a wide range of tasks. Task-based results exhibiting similar activation patterns were grouped as similar, while tasks activating different brain networks were classified as functionally distinct. We identified four sub-classes of face tasks: (1) Visuospatial Attention and Visuomotor Coordination to Faces, (2) Perception and Recognition of Faces, (3) Social Processing and Episodic Recall of Faces, and (4) Face Naming and Lexical Retrieval. Interpretation of these sub-classes supports an extension of a well-known model of face perception to include a core system for visual analysis and extended systems for personal information, emotion, and salience processing. Overall, these results demonstrate that a large-scale data mining approach can inform the evolution of theoretical cognitive models by probing the range of behavioral manipulations across experimental tasks. Copyright © 2015 Elsevier Inc. All rights reserved.
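The classification strategy described here amounts to agglomerative clustering of experiments by the similarity of their activation patterns. A schematic sketch with SciPy is shown below, using random placeholder data in place of BrainMap activation maps; the matrix shape and the number of sub-classes are illustrative.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(1)
    activation = rng.random((40, 100))   # 40 experiments x 100 brain-region features

    Z = linkage(activation, method="ward")           # hierarchical (Ward) clustering
    labels = fcluster(Z, t=4, criterion="maxclust")  # cut the dendrogram into 4 sub-classes
    print(labels)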
Sharpening vision by adapting to flicker.
Arnold, Derek H; Williams, Jeremy D; Phipps, Natasha E; Goodale, Melvyn A
2016-11-01
Human vision is surprisingly malleable. A static stimulus can seem to move after prolonged exposure to movement (the motion aftereffect), and exposure to tilted lines can make vertical lines seem oppositely tilted (the tilt aftereffect). The paradigm used to induce such distortions (adaptation) can provide powerful insights into the computations underlying human visual experience. Previously, spatial form and stimulus dynamics were thought to be encoded independently, but here we show that adaptation to stimulus dynamics can sharpen form perception. We find that fast flicker adaptation (FFAd) shifts the tuning of face perception to higher spatial frequencies, enhances the acuity of spatial vision, allowing people to localize inputs with greater precision and to read finer-scaled text, and selectively reduces sensitivity to coarse-scale form signals. These findings are consistent with two interrelated influences: FFAd reduces the responsiveness of magnocellular neurons (which are important for encoding dynamics, but can have poor spatial resolution), and magnocellular responses contribute coarse spatial scale information when the visual system synthesizes form signals. Consequently, when magnocellular responses are mitigated via FFAd, human form perception is transiently sharpened because "blur" signals are mitigated.
Dogs recognize dog and human emotions.
Albuquerque, Natalia; Guo, Kun; Wilkinson, Anna; Savalli, Carine; Otta, Emma; Mills, Daniel
2016-01-01
The perception of emotional expressions allows animals to evaluate the social intentions and motivations of each other. This usually takes place within species; however, in the case of domestic dogs, it might be advantageous to recognize the emotions of humans as well as other dogs. In this sense, the combination of visual and auditory cues to categorize others' emotions facilitates the information processing and indicates high-level cognitive representations. Using a cross-modal preferential looking paradigm, we presented dogs with either human or dog faces with different emotional valences (happy/playful versus angry/aggressive) paired with a single vocalization from the same individual with either a positive or negative valence or Brownian noise. Dogs looked significantly longer at the face whose expression was congruent to the valence of vocalization, for both conspecifics and heterospecifics, an ability previously known only in humans. These results demonstrate that dogs can extract and integrate bimodal sensory emotional information, and discriminate between positive and negative emotions from both humans and dogs. © 2016 The Author(s).
Handedness is related to neural mechanisms underlying hemispheric lateralization of face processing
Frässle, Stefan; Krach, Sören; Paulus, Frieder Michel; Jansen, Andreas
2016-01-01
While the right-hemispheric lateralization of the face perception network is well established, recent evidence suggests that handedness affects the cerebral lateralization of face processing at the hierarchical level of the fusiform face area (FFA). However, the neural mechanisms underlying differential hemispheric lateralization of face perception in right- and left-handers are largely unknown. Using dynamic causal modeling (DCM) for fMRI, we aimed to unravel the putative processes that mediate handedness-related differences by investigating the effective connectivity in the bilateral core face perception network. Our results reveal an enhanced recruitment of the left FFA in left-handers compared to right-handers, as evidenced by more pronounced face-specific modulatory influences on both intra- and interhemispheric connections. As structural and physiological correlates of handedness-related differences in face processing, right- and left-handers varied with regard to their gray matter volume in the left fusiform gyrus and their pupil responses to face stimuli. Overall, these results describe how handedness is related to the lateralization of the core face perception network, and point to different neural mechanisms underlying face processing in right- and left-handers. In a wider context, this demonstrates the entanglement of structurally and functionally remote brain networks, suggesting a broader underlying process regulating brain lateralization. PMID:27250879
Perceptions of a hospital-based animal assisted intervention program: An exploratory study.
Abrahamson, Kathleen; Cai, Yun; Richards, Elizabeth; Cline, Krista; O'Haire, Marguerite E
2016-11-01
Research has shown that there are multiple benefits of animal assisted interventions for patients. However, the impact of interaction with these animals on staff is understudied, particularly in the acute care setting, so examining it offers a novel contribution to the literature on human-animal interaction. The purpose of this qualitative pilot study was to contribute to the body of knowledge surrounding the experiences and perceptions of hospital staff who have participated in a hospital-based animal assisted intervention program. Nine face-to-face semi-structured interviews were conducted (4 staff nurses, 3 support staff members, and 2 hospital volunteers). Five themes emerged from the respondent interviews: (1) descriptions of the therapy dogs; (2) contacts with the dogs at work; (3) connection with the dogs outside of work; (4) benefits; (5) drawbacks. Our findings reflect abundantly positive hospital staff experiences. Copyright © 2016 Elsevier Ltd. All rights reserved.
Halliday, Drew W R; MacDonald, Stuart W S; Scherf, K Suzanne; Tanaka, James W
2014-01-01
Although not a core symptom of the disorder, individuals with autism often exhibit selective impairments in their face processing abilities. Importantly, the reciprocal connection between autistic traits and face perception has rarely been examined within the typically developing population. In this study, university participants from the social sciences, physical sciences, and humanities completed a battery of measures that assessed face, object and emotion recognition abilities, general perceptual-cognitive style, and sub-clinical autistic traits (the Autism Quotient (AQ)). We employed separate hierarchical multiple regression analyses to evaluate which factors could predict face recognition scores and AQ scores. Gender, object recognition performance, and AQ scores predicted face recognition behaviour. Specifically, males, individuals with more autistic traits, and those with lower object recognition scores performed more poorly on the face recognition test. Conversely, university major, gender and face recognition performance reliably predicted AQ scores. Science majors, males, and individuals with poor face recognition skills showed more autistic-like traits. These results suggest that the broader autism phenotype is associated with lower face recognition abilities, even among typically developing individuals.
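The hierarchical multiple regressions reported here enter predictor blocks in steps and examine the change in explained variance. A minimal sketch with simulated data is given below; the column names, sample size, and effect sizes are placeholders, not the study's values.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n = 120
    df = pd.DataFrame({
        "gender": rng.integers(0, 2, n),          # 0 = female, 1 = male (placeholder coding)
        "object_score": rng.normal(50, 10, n),    # object recognition score
        "aq": rng.normal(20, 6, n),               # Autism Quotient
    })
    df["face_score"] = (60 + 0.4 * df.object_score - 0.5 * df.aq
                        - 3 * df.gender + rng.normal(0, 5, n))

    step1 = smf.ols("face_score ~ gender + object_score", data=df).fit()
    step2 = smf.ols("face_score ~ gender + object_score + aq", data=df).fit()
    print(f"R2 step 1 = {step1.rsquared:.3f}, R2 step 2 = {step2.rsquared:.3f}, "
          f"delta R2 = {step2.rsquared - step1.rsquared:.3f}")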
Cortical Hierarchies Perform Bayesian Causal Inference in Multisensory Perception
Rohe, Tim; Noppeney, Uta
2015-01-01
To form a veridical percept of the environment, the brain needs to integrate sensory signals from a common source but segregate those from independent sources. Thus, perception inherently relies on solving the “causal inference problem.” Behaviorally, humans solve this problem optimally as predicted by Bayesian Causal Inference; yet, the underlying neural mechanisms are unexplored. Combining psychophysics, Bayesian modeling, functional magnetic resonance imaging (fMRI), and multivariate decoding in an audiovisual spatial localization task, we demonstrate that Bayesian Causal Inference is performed by a hierarchy of multisensory processes in the human brain. At the bottom of the hierarchy, in auditory and visual areas, location is represented on the basis that the two signals are generated by independent sources (= segregation). At the next stage, in posterior intraparietal sulcus, location is estimated under the assumption that the two signals are from a common source (= forced fusion). Only at the top of the hierarchy, in anterior intraparietal sulcus, the uncertainty about the causal structure of the world is taken into account and sensory signals are combined as predicted by Bayesian Causal Inference. Characterizing the computational operations of signal interactions reveals the hierarchical nature of multisensory perception in human neocortex. It unravels how the brain accomplishes Bayesian Causal Inference, a statistical computation fundamental for perception and cognition. Our results demonstrate how the brain combines information in the face of uncertainty about the underlying causal structure of the world. PMID:25710328
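The Bayesian Causal Inference model referred to here has a closed form for Gaussian likelihoods and priors: the observer weighs a "common cause" (fusion) estimate and an "independent causes" (segregation) estimate by the posterior probability of a common cause. The sketch below works through the standard model-averaging computation for the auditory location estimate; all parameter values are illustrative, not those fitted in the study.

    import numpy as np

    def bci_auditory_estimate(x_a, x_v, sigma_a=2.0, sigma_v=1.0,
                              sigma_p=10.0, p_common=0.5):
        """Model-averaged auditory location estimate (spatial prior centred on 0)."""
        # Likelihood of the two noisy measurements under a common cause (C = 1)
        denom = (sigma_a**2 * sigma_v**2 + sigma_a**2 * sigma_p**2
                 + sigma_v**2 * sigma_p**2)
        like_c1 = np.exp(-0.5 * ((x_a - x_v)**2 * sigma_p**2
                                 + x_a**2 * sigma_v**2
                                 + x_v**2 * sigma_a**2) / denom) / (2 * np.pi * np.sqrt(denom))
        # Likelihood under independent causes (C = 2)
        like_c2 = (np.exp(-0.5 * x_a**2 / (sigma_a**2 + sigma_p**2))
                   / np.sqrt(2 * np.pi * (sigma_a**2 + sigma_p**2))
                   * np.exp(-0.5 * x_v**2 / (sigma_v**2 + sigma_p**2))
                   / np.sqrt(2 * np.pi * (sigma_v**2 + sigma_p**2)))
        # Posterior probability of a common cause
        post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
        # Forced-fusion estimate: reliability-weighted average of both cues and the prior
        w_a, w_v, w_p = 1 / sigma_a**2, 1 / sigma_v**2, 1 / sigma_p**2
        s_fused = (w_a * x_a + w_v * x_v) / (w_a + w_v + w_p)
        # Segregated auditory estimate: auditory cue combined with the prior only
        s_seg = (w_a * x_a) / (w_a + w_p)
        # Model averaging: weight the two estimates by the causal posterior
        return post_c1 * s_fused + (1 - post_c1) * s_seg

    print(bci_auditory_estimate(x_a=5.0, x_v=3.0))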
Holistic processing of static and moving faces.
Zhao, Mintao; Bülthoff, Isabelle
2017-07-01
Humans' face ability develops and matures with extensive experience in perceiving, recognizing, and interacting with faces that move most of the time. However, how facial movements affect one core aspect of face ability, holistic face processing, remains unclear. Here we investigated the influence of rigid facial motion on holistic and part-based face processing by manipulating the presence of facial motion during study and at test in a composite face task. The results showed that rigidly moving faces were processed as holistically as static faces (Experiment 1). Holistic processing of moving faces persisted whether facial motion was presented during study, at test, or both (Experiment 2). Moreover, when faces were inverted to eliminate the contributions of both an upright face template and observers' expertise with upright faces, rigid facial motion facilitated holistic face processing (Experiment 3). Thus, holistic processing represents a general principle of face perception that applies to both static and dynamic faces, rather than being limited to static faces. These results support an emerging view that both perceiver-based and face-based factors contribute to holistic face processing, and they offer new insights on what underlies holistic face processing, how the types of information supporting holistic face processing interact with each other, and why facial motion may affect face recognition and holistic face processing differently. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Berezowska, Aleksandra; Fischer, Arnout R H; Ronteltap, Amber; Kuznesof, Sharron; Macready, Anna; Fallaize, Rosalind; van Trijp, Hans C M
2014-01-01
Personalised nutrition (PN) may provide major health benefits to consumers. A potential barrier to the uptake of PN is consumers' reluctance to disclose sensitive information upon which PN is based. This study adopts the privacy calculus to explore how PN service attributes contribute to consumers' privacy risk and personalisation benefit perceptions. Sixteen focus groups (n = 124) were held in 8 EU countries and discussed 9 PN services that differed in terms of personal information, communication channel, service provider, advice justification, scope, frequency, and customer lock-in. Transcripts were content analysed. The personal information that underpinned PN contributed to both privacy risk perception and personalisation benefit perception. Disclosing information face-to-face mitigated the perception of privacy risk and amplified the perception of personalisation benefit. PN provided by a qualified expert and justified by scientific evidence increased participants' value perception. Enhancing convenience, offering regular face-to-face support, and employing customer lock-in strategies were perceived as beneficial. This study suggests that to encourage consumer adoption, PN has to account for face-to-face communication, expert advice providers, support, a lifestyle-change focus, and customised offers. The results provide an initial insight into service attributes that influence consumer adoption of PN. © 2014 S. Karger AG, Basel.
Piepers, Daniel W.; Robbins, Rachel A.
2012-01-01
It is widely agreed that the human face is processed differently from other objects. However there is a lack of consensus on what is meant by a wide array of terms used to describe this “special” face processing (e.g., holistic and configural) and the perceptually relevant information within a face (e.g., relational properties and configuration). This paper will review existing models of holistic/configural processing, discuss how they differ from one another conceptually, and review the wide variety of measures used to tap into these concepts. In general we favor a model where holistic processing of a face includes some or all of the interrelations between features and has separate coding for features. However, some aspects of the model remain unclear. We propose the use of moving faces as a way of clarifying what types of information are included in the holistic representation of a face. PMID:23413184
The face-selective N170 component is modulated by facial color.
Nakajima, Kae; Minami, Tetsuto; Nakauchi, Shigeki
2012-08-01
Faces play an important role in social interaction by conveying information and emotion. Of the various components of the face, color particularly provides important clues for the perception of age, sex, health status, and attractiveness. In event-related potential (ERP) studies, the N170 component has been identified as face-selective. To determine the effect of color on face processing, we investigated the modulation of the N170 by facial color. We recorded ERPs while subjects viewed facial color stimuli at 8 hue angles, generated for each human face by rotating the original facial color distribution around the white point in 45° steps. Responses to facial color were localized to the left, but not to the right, hemisphere. N170 amplitudes gradually increased in proportion to the increase in hue angle from the natural-colored face. This suggests that N170 amplitude in the left hemisphere reflects the processing of facial color information. Copyright © 2012 Elsevier Ltd. All rights reserved.
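The stimulus manipulation described here corresponds to rotating each pixel's chromatic (a*, b*) coordinates around the white point in CIELab while leaving lightness untouched. A rough sketch for a single hue angle follows; the file name and angle are illustrative, not the study's calibrated stimuli.

    import numpy as np
    from skimage import color, io

    img = io.imread("face.png")[..., :3] / 255.0   # hypothetical RGB image in [0, 1]
    lab = color.rgb2lab(img)

    theta = np.deg2rad(45)                         # one of the eight hue angles
    a = lab[..., 1].copy()                         # copy so the rotation uses the
    b = lab[..., 2].copy()                         # original chromatic values
    lab[..., 1] = a * np.cos(theta) - b * np.sin(theta)
    lab[..., 2] = a * np.sin(theta) + b * np.cos(theta)

    out = np.clip(color.lab2rgb(lab), 0.0, 1.0)
    io.imsave("face_hue_45.png", (out * 255).astype(np.uint8))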
The Influence of Flankers on Race Categorization of Faces
Sun, Hsin-Mei; Balas, Benjamin
2012-01-01
Context affects multiple cognitive and perceptual processes. In the present study, we asked how the context of a set of faces affected the perception of a target face's race in two distinct tasks. In Experiments 1 and 2, participants categorized target faces according to perceived racial category (Black or White). In Experiment 1, the target face was presented alone, or with Black or White flanker faces. The orientation of flanker faces was also manipulated to investigate how the face inversion effect interacts with the influences of flanker faces on the target face. The results showed that participants were more likely to categorize the target face as White when it was surrounded by inverted White faces (an assimilation effect). Experiment 2 further examined how different aspects of the visual context affect the perception of the target face by manipulating flanker faces' shape and pigmentation as well as their orientation. The results showed that flanker faces' shape and pigmentation affected the perception of the target face differently. While shape elicited a contrast effect, pigmentation appeared to be assimilative. These novel findings suggest that the perceived race of a face is modulated by the appearance of other faces and their distinct shape and pigmentation properties. However, the contrast and assimilation effects elicited by flanker faces' shape and pigmentation may be specific to race categorization, since the same stimuli used in a delayed matching task (Experiment 3) revealed that flanker pigmentation induced a contrast effect on the perception of target pigmentation. PMID:22825930
Bayesian Face Recognition and Perceptual Narrowing in Face-Space
Balas, Benjamin
2012-01-01
During the first year of life, infants’ face recognition abilities are subject to “perceptual narrowing,” the end result of which is that observers lose the ability to distinguish previously discriminable faces (e.g. other-race faces) from one another. Perceptual narrowing has been reported for faces of different species and different races, in developing humans and primates. Though the phenomenon is highly robust and replicable, there have been few efforts to model the emergence of perceptual narrowing as a function of the accumulation of experience with faces during infancy. The goal of the current study is to examine how perceptual narrowing might manifest as statistical estimation in “face space,” a geometric framework for describing face recognition that has been successfully applied to adult face perception. Here, I use a computer vision algorithm for Bayesian face recognition to study how the acquisition of experience in face space and the presence of race categories affect performance for own and other-race faces. Perceptual narrowing follows from the establishment of distinct race categories, suggesting that the acquisition of category boundaries for race is a key computational mechanism in developing face expertise. PMID:22709406
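One way to read "statistical estimation in face space" is as density estimation: experience with a race yields a sample of points in a low-dimensional face space, a Gaussian category is fit to that sample, and recognition judgments depend on the resulting densities. The sketch below only illustrates that idea and is not the specific Bayesian face recognition algorithm used in the study; the dimensionality, sample sizes, and category locations are arbitrary.

    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(2)
    dim = 5
    own_race = rng.normal(0.0, 1.0, size=(500, dim))    # rich experience with own-race faces
    other_race = rng.normal(2.0, 1.0, size=(20, dim))   # sparse other-race experience

    def fit_category(samples):
        """Estimate a Gaussian category density from experienced exemplars."""
        mu = samples.mean(axis=0)
        cov = np.cov(samples, rowvar=False) + 1e-3 * np.eye(samples.shape[1])
        return multivariate_normal(mean=mu, cov=cov)

    own_cat = fit_category(own_race)
    other_cat = fit_category(other_race)

    probe = rng.normal(2.0, 1.0, size=dim)               # a novel other-race face
    log_odds = own_cat.logpdf(probe) - other_cat.logpdf(probe)
    print(f"log odds (own vs other race): {log_odds:.2f}")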
ERIC Educational Resources Information Center
Woods, Robert; Baker, Jason D.; Hopper, Dave
2004-01-01
The researchers examined responses from 862 faculty members at 38 institutions nationwide using the Blackboard Learning Management System (LMS) to supplement their face-to-face instruction. The four research questions addressed the primary uses that faculty make of Blackboard, perceptions that faculty have of how certain Blackboard features…
Who Expressed What Emotion? Men Grab Anger, Women Grab Happiness
Neel, Rebecca; Becker, D. Vaughn; Neuberg, Steven L.; Kenrick, Douglas T.
2011-01-01
When anger or happiness flashes on a face in the crowd, do we misperceive that emotion as belonging to someone else? Two studies found that misperception of apparent emotional expressions – “illusory conjunctions” – depended on the gender of the target: male faces tended to “grab” anger from neighboring faces, and female faces tended to grab happiness. Importantly, the evidence did not suggest that this effect was due to the general tendency to misperceive male or female faces as angry or happy, but instead indicated a more subtle interaction of expectations and early visual processes. This suggests a novel aspect of affordance-management in human perception, whereby cues to threat, when they appear, are attributed to those with the greatest capability of doing harm, whereas cues to friendship are attributed to those with the greatest likelihood of providing affiliation opportunities. PMID:22368303
Komes, Jessica; Schweinberger, Stefan R.; Wiese, Holger
2015-01-01
Previous event-related potential (ERP) research revealed that older relative to younger adults show reduced inversion effects in the N170 (with more negative amplitudes for inverted than upright faces), suggestive of impairments in face perception. However, as these studies used young to middle-aged faces only, this finding may reflect preferential processing of own- relative to other-age faces rather than age-related decline. We conducted an ERP study in which young and older participants categorized young and old upright or inverted faces by age. Stimuli were presented either unfiltered or low-pass filtered at 30, 20, or 10 cycles per image (CPI). Response times revealed larger inversion effects, with slower responses for inverted faces, for young faces in young participants. Older participants did not show a corresponding effect. ERPs yielded a trend toward reduced N170 inversion effects in older relative to younger adults independent of face age. Moreover, larger inversion effects for young relative to old faces were detected, and filtering resulted in smaller N170 amplitudes. The reduced N170 inversion effect in older adults may reflect age-related changes in neural correlates of face perception. A smaller N170 inversion effect for old faces may indicate that facial changes with age hamper early face perception stages. PMID:26441790
Fukushima, Hirokata; Hirata, Satoshi; Ueno, Ari; Matsuda, Goh; Fuwa, Kohki; Sugama, Keiko; Kusunoki, Kiyo; Hirai, Masahiro; Hiraki, Kazuo; Tomonaga, Masaki; Hasegawa, Toshikazu
2010-01-01
Background: The neural system of our closest living relative, the chimpanzee, is a topic of increasing research interest. However, electrophysiological examinations of neural activity during visual processing in awake chimpanzees are currently lacking. Methodology/Principal Findings: In the present report, skin-surface event-related brain potentials (ERPs) were measured while a fully awake chimpanzee observed photographs of faces and objects in two experiments. In Experiment 1, human faces and stimuli composed of scrambled face images were displayed. In Experiment 2, three types of pictures (faces, flowers, and cars) were presented. The waveforms evoked by face stimuli were distinguished from other stimulus types, as reflected by an enhanced early positivity appearing before 200 ms post stimulus, and an enhanced late negativity after 200 ms, around posterior and occipito-temporal sites. Face-sensitive activity was clearly observed in both experiments. However, in contrast to the robustly observed face-evoked N170 component in humans, we found that faces did not elicit a peak in the latency range of 150–200 ms in either experiment. Conclusions/Significance: Although this pilot study examined a single subject and requires further examination, the observed scalp voltage patterns suggest that selective processing of faces in the chimpanzee brain can be detected by recording surface ERPs. In addition, this non-invasive method for examining an awake chimpanzee can be used to extend our knowledge of the characteristics of visual cognition in other primate species. PMID:20967284
The neural representation of social status in the extended face-processing network.
Koski, Jessica E; Collins, Jessica A; Olson, Ingrid R
2017-12-01
Social status is a salient cue that shapes our perceptions of other people and ultimately guides our social interactions. Despite the pervasive influence of status on social behavior, how information about the status of others is represented in the brain remains unclear. Here, we tested the hypothesis that social status information is embedded in our neural representations of other individuals. Participants learned to associate faces with names, job titles that varied in associated status, and explicit markers of reputational status (star ratings). Trained stimuli were presented in a functional magnetic resonance imaging experiment where participants performed a target detection task orthogonal to the variable of interest. A network of face-selective brain regions extending from the occipital lobe to the orbitofrontal cortex was localized and served as regions of interest. Using multivoxel pattern analysis, we found that face-selective voxels in the lateral orbitofrontal cortex, a region involved in social and nonsocial valuation, could decode faces based on their status. Similar effects were observed with two different status manipulations: one based on stored semantic knowledge (e.g., different careers) and one based on learned reputation (e.g., star ranking). These data suggest that a face-selective region of the lateral orbitofrontal cortex may contribute to the perception of social status, potentially underlying the preferential attention and favorable biases humans display toward high-status individuals. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
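Multivoxel pattern analysis of the kind described here typically trains a linear classifier on trial-wise voxel patterns from a region of interest and evaluates decoding accuracy with cross-validation. A schematic sketch on simulated patterns follows; the matrix sizes, labels, and injected signal are placeholders, not the study's data or exact pipeline.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)
    n_trials, n_voxels = 80, 200
    labels = np.repeat([0, 1], n_trials // 2)             # 0 = low status, 1 = high status
    patterns = rng.standard_normal((n_trials, n_voxels))
    patterns[labels == 1, :10] += 0.5                     # weak status signal in a few voxels

    clf = SVC(kernel="linear")                            # linear classifier on voxel patterns
    scores = cross_val_score(clf, patterns, labels, cv=5) # 5-fold cross-validation
    print(f"decoding accuracy: {scores.mean():.2f}")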
Schneider, Till R; Hipp, Joerg F; Domnick, Claudia; Carl, Christine; Büchel, Christian; Engel, Andreas K
2018-05-26
Human faces are among the most salient visual stimuli and act as both socially and emotionally relevant signals. Faces, and especially faces with emotional expressions, receive prioritized processing in the human brain and activate a distributed network of brain areas, reflected, e.g., in enhanced oscillatory neuronal activity. However, an inconsistent picture has emerged so far regarding neuronal oscillatory activity across different frequency bands modulated by emotionally and socially relevant stimuli. The individual level of anxiety among healthy populations might be one explanation for these inconsistent findings. We therefore tested the hypothesis that oscillatory neuronal activity during the perception of faces with neutral and fearful facial expressions is associated with individual anxiety levels. We recorded neuronal activity using magnetoencephalography (MEG) in 27 healthy participants and determined their individual state anxiety levels. Images of human faces with neutral and fearful expressions, and physically matched visual control stimuli, were presented while participants performed a simple color detection task. Spectral analyses revealed that face processing, and in particular processing of fearful faces, was characterized by enhanced neuronal activity in the theta- and gamma-bands and decreased activity in the beta-band in early visual cortex and the fusiform gyrus (FFG). Moreover, individuals' state anxiety levels correlated positively with the gamma-band response and negatively with the beta-band response in the FFG and the amygdala. Our results suggest that oscillatory neuronal activity plays an important role in affective face processing and depends on the individual level of state anxiety. Our work provides new insights into the role of oscillatory neuronal activity underlying the processing of faces. Copyright © 2018. Published by Elsevier Inc.
Hilliar, Kirin F; Kemp, Richard I
2008-01-01
Does semantic information in the form of stereotypical names influence participants' perceptions of the appearance of multiracial faces? Asian-Australian and European-Australian participants were asked to rate the appearance of Asian-Australian faces given typically Asian names, European-Australian faces given typically European names, multiracial faces given Asian names, and multiracial faces given European names. Participants rated the multiracial faces given European names as looking significantly 'more European' than the same multiracial faces given Asian names. This study demonstrates how socially derived expectations and stereotypes can influence face perception.
Emotion Words Shape Emotion Percepts
Gendron, Maria; Lindquist, Kristen A.; Barsalou, Lawrence; Barrett, Lisa Feldman
2015-01-01
People believe they see emotion written on the faces of other people. In an instant, simple facial actions are transformed into information about another's emotional state. The present research examined whether a perceiver unknowingly contributes to emotion perception with emotion word knowledge. We present 2 studies that together support a role for emotion concepts in the formation of visual percepts of emotion. As predicted, we found that perceptual priming of emotional faces (e.g., a scowling face) was disrupted when the accessibility of a relevant emotion word (e.g., anger) was temporarily reduced, demonstrating that the exact same face was encoded differently when a word was accessible versus when it was not. The implications of these findings for a linguistically relative view of emotion perception are discussed. PMID:22309717
Network Interactions Explain Sensitivity to Dynamic Faces in the Superior Temporal Sulcus.
Furl, Nicholas; Henson, Richard N; Friston, Karl J; Calder, Andrew J
2015-09-01
The superior temporal sulcus (STS) in the human and monkey is sensitive to the motion of complex forms such as facial and bodily actions. We used functional magnetic resonance imaging (fMRI) to explore network-level explanations for how the form and motion information in dynamic facial expressions might be combined in the human STS. Ventral occipitotemporal areas selective for facial form were localized in occipital and fusiform face areas (OFA and FFA), and motion sensitivity was localized in the more dorsal temporal area V5. We then tested various connectivity models that modeled communication between the ventral form and dorsal motion pathways. We show that facial form information modulated transmission of motion information from V5 to the STS, and that this face-selective modulation likely originated in OFA. This finding shows that form-selective motion sensitivity in the STS can be explained in terms of modulation of gain control on information flow in the motion pathway, and provides a substantial constraint for theories of the perception of faces and biological motion. © The Author 2014. Published by Oxford University Press.
Effects of induced sad mood on facial emotion perception in young and older adults.
Lawrie, Louisa; Jackson, Margaret C; Phillips, Louise H
2018-02-15
Older adults perceive less intense negative emotion in facial expressions compared to younger counterparts. Prior research has also demonstrated that mood alters facial emotion perception. Nevertheless, there is little evidence evaluating the interactive effects of age and mood on emotion perception. This study investigated the effects of sad mood on younger and older adults' perception of emotional and neutral faces. Participants rated the intensity of stimuli while listening to sad music and in silence. Measures of mood were administered. Both younger and older participants rated sad faces as displaying stronger sadness when they experienced sad mood. While younger participants showed no influence of sad mood on happiness ratings of happy faces, older adults rated happy faces as conveying less happiness when they experienced sad mood. This study demonstrates how emotion perception can change when a controlled mood induction procedure is applied to alter mood in young and older participants.
Invariant recognition drives neural representations of action sequences
Poggio, Tomaso
2017-01-01
Recognizing the actions of others from visual stimuli is a crucial aspect of human perception that allows individuals to respond to social cues. Humans are able to discriminate between similar actions despite transformations, like changes in viewpoint or actor, that substantially alter the visual appearance of a scene. This ability to generalize across complex transformations is a hallmark of human visual intelligence. Advances in understanding action recognition at the neural level have not always translated into precise accounts of the computational principles underlying the representations of action sequences constructed by human visual cortex. Here we test the hypothesis that invariant action discrimination might fill this gap. Recently, the study of artificial systems for static object perception has produced models, Convolutional Neural Networks (CNNs), that achieve human-level performance in complex discriminative tasks. Within this class, architectures that better support invariant object recognition also produce image representations that better match those implied by human and primate neural data. However, whether these models produce representations of action sequences that support recognition across complex transformations and closely follow neural representations of actions remains unknown. Here we show that spatiotemporal CNNs accurately categorize video stimuli into action classes, and that deliberate model modifications that improve performance on an invariant action recognition task lead to data representations that better match human neural recordings. Our results support our hypothesis that performance on invariant discrimination dictates the neural representations of actions computed in the brain. These results broaden the scope of the invariant recognition framework for understanding visual intelligence from the perception of inanimate objects and faces in static images to the study of human perception of action sequences. PMID:29253864
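A spatiotemporal CNN of the kind evaluated here convolves jointly over frames and pixels before classifying a clip into an action class. The toy architecture below is only a schematic PyTorch illustration; the layer sizes, depth, and class count are arbitrary assumptions, not the models compared in the study.

    import torch
    import torch.nn as nn

    class TinyActionNet(nn.Module):
        """Toy spatiotemporal (3-D) convolutional classifier for short clips."""
        def __init__(self, n_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(3, 16, kernel_size=3, padding=1),   # convolve over time and space
                nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),                      # global spatiotemporal pooling
            )
            self.classifier = nn.Linear(32, n_classes)

        def forward(self, clip):                              # clip: (batch, 3, frames, H, W)
            return self.classifier(self.features(clip).flatten(1))

    clip = torch.randn(2, 3, 16, 64, 64)                      # 2 clips, 16 frames of 64x64 pixels
    print(TinyActionNet()(clip).shape)                        # torch.Size([2, 10])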
Face-to-face: Perceived personal relevance amplifies face processing.
Bublatzky, Florian; Pittig, Andre; Schupp, Harald T; Alpers, Georg W
2017-05-01
The human face conveys emotional and social information, but it is not well understood how these two aspects influence face perception. In order to model a group situation, two faces displaying happy, neutral or angry expressions were presented. Importantly, faces were either facing the observer, or they were presented in profile view directed towards, or looking away from, each other. In Experiment 1 (n = 64), face pairs were rated regarding perceived relevance, wish-to-interact, and displayed interactivity, as well as valence and arousal. All variables revealed main effects of facial expression (emotional > neutral) and face orientation (facing observer > towards > away), and interactions showed that the evaluation of emotional faces varies strongly with their orientation. Experiment 2 (n = 33) examined the temporal dynamics of perceptual-attentional processing of these face constellations with event-related potentials. Processing of emotional and neutral faces differed significantly in N170 amplitudes, early posterior negativity (EPN), and sustained positive potentials. Importantly, selective emotional face processing varied as a function of face orientation, indicating early emotion-specific (N170, EPN) and late threat-specific effects (LPP, sustained positivity). Taken together, perceived personal relevance to the observer, conveyed by facial expression and face direction, amplifies emotional face processing within triadic group situations. © The Author (2017). Published by Oxford University Press.
ERIC Educational Resources Information Center
Horspool, Agi; Lange, Carsten
2012-01-01
This study compares student perceptions, learning behaviours and success in online and face-to-face versions of a Principles of Microeconomics course. It follows a Scholarship of Teaching and Learning (SoTL) approach by using a cycle of empirical analysis, reflection and action to improve the learning experience for students. The online course…
Teaching Time Investment: Does Online Really Take More Time than Face-to-Face?
ERIC Educational Resources Information Center
Van de Vord, Rebecca; Pogue, Korolyn
2012-01-01
Enrollments in online programs are growing, increasing demand for online courses. The perception that teaching online takes more time than teaching face-to-face creates concerns related to faculty workload. To date, the research on teaching time does not provide a clear answer as to the accuracy of this perception. This study was designed to…
ERIC Educational Resources Information Center
Winslow, Cessna Catherine Smith
2014-01-01
This study explored perceptions of Public Relations (PR) among graduate higher education publics regarding distance learning as contrasted with face-to-face learning contexts. The research questions assessed student, faculty and administrator perceptions of characteristics of PR: trust, communication, quality, respect and rigor. Participants…
Otten, Marte; Banaji, Mahzarin R.
2012-01-01
A number of recent behavioral studies have shown that emotional expressions are perceived differently depending on the race of a face, and that perception of race cues is influenced by emotional expressions. However, neural processes related to the perception of invariant cues that indicate the identity of a face (such as race) are often described as proceeding independently of processes related to the perception of cues that can vary over time (such as emotion). Using a visual face adaptation paradigm, we tested whether these behavioral interactions between emotion and race also reflect interdependent neural representation of emotion and race. We compared visual emotion aftereffects when the adapting face and ambiguous test face differed in race or not. Emotion aftereffects were much smaller in different race (DR) trials than same race (SR) trials, indicating that the neural representation of a facial expression is significantly different depending on whether the emotional face is black or white. It thus seems that invariable cues such as race interact with variable face cues such as emotion not just at a response level, but also at the level of perception and neural representation. PMID:22403531
Subliminally perceived odours modulate female intrasexual competition: an eye movement study.
Parma, Valentina; Tirindelli, Roberto; Bisazza, Angelo; Massaccesi, Stefano; Castiello, Umberto
2012-01-01
Evidence suggests that subliminal odorants influence human perception and behavior. It has been hypothesized that the human sex-steroid derived compound 4,16-androstadien-3-one (androstadienone) functions as a human chemosignal. Androstadienone, the most intensively studied steroid compound, is known to be biologically relevant since it seems to convey information about male mate quality to women. It is unclear whether the effects of androstadienone depend on the menstrual cycle. In the first experiment, heterosexual women were exposed to androstadienone or a control compound and asked to view stimuli such as female faces, male faces and familiar objects while their eye movements were recorded. In the second experiment the same women were asked to rate the attractiveness of the stimuli following exposure to the study or control compound. The results indicated that women at high conception risk spent more time viewing the female than the male faces regardless of the compound administered. Women at a low conception risk exhibited a preference for female faces only following exposure to androstadienone. We contend that a woman's level of fertility influences her evaluation of potential competitors (e.g., faces of other women) during times critical for reproduction. Subliminally perceived odorants, such as androstadienone, might similarly enhance intrasexual competition strategies in women during fertility phases not critical for conception. These findings offer a substantial contribution to the current debate about the effects that subliminally perceived body odors might have on behavior.
Face inversion and acquired prosopagnosia reduce the size of the perceptual field of view.
Van Belle, Goedele; Lefèvre, Philippe; Rossion, Bruno
2015-03-01
Using a gaze-contingent morphing approach, we asked human observers to choose one of two faces that best matched the identity of a target face: one face corresponded to the reference face's fixated part only (e.g., one eye), the other corresponded to the unfixated area of the reference face. The face corresponding to the fixated part was selected significantly more frequently in the inverted than in the upright orientation. This observation provides evidence that face inversion reduces an observer's perceptual field of view, even when both upright and inverted faces are displayed at full view and there is no performance difference between these conditions. It rules out an account of the drop in performance for inverted faces (one of the most robust effects in experimental psychology) in terms of a mere difference in local processing efficiency. A brain-damaged patient with pure prosopagnosia, viewing only upright faces, systematically selected the face corresponding to the fixated part, as if her perceptual field was reduced relative to normal observers. Altogether, these observations indicate that the absence of visual knowledge reduces the perceptual field of view, supporting an indirect view of visual perception. Copyright © 2014 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Eidels, Ami; Townsend, James T.; Pomerantz, James R.
2008-01-01
People are especially efficient in processing certain visual stimuli such as human faces or good configurations. It has been suggested that topology and geometry play important roles in configural perception. Visual search is one area in which configurality seems to matter. When either of 2 target features leads to a correct response and the…
On facial asymmetry and self-perception.
Lu, Stephen M; Bartlett, Scott P
2014-06-01
Self-perception has been an enduring human concern since ancient times and remains a significant component of the preoperative and postoperative consultation. Despite modern technological attempts to reproduce the first-hand experience, there is no perfect substitute for human, stereoscopic, three-dimensional vision in evaluating appearance. Nowadays, however, the primary tools available to a patient for examining his or her own appearance, particularly the face, are photographs and mirrors. Patients are often unaware of how cameras and photographs can distort and degrade image quality, leading to an inaccurate representation of true appearance. Everyone knows that mirrors reverse an image, left and right, and most people recognize their own natural facial asymmetry at some level. However, few realize that emotions are not only expressed unequally by the left and right sides of the face but also perceived unequally by others. The impact and effect of this "facedness" is completely reversed by mirrors, potentially creating a significant discrepancy between what a patient perceives of himself or herself and what the surgeon or other third party sees. This article ties together the diverse threads leading to this problem and suggests several ways of mitigating the issue through technology and patient counseling.
Parallel Processing in Face Perception
ERIC Educational Resources Information Center
Martens, Ulla; Leuthold, Hartmut; Schweinberger, Stefan R.
2010-01-01
The authors examined face perception models with regard to the functional and temporal organization of facial identity and expression analysis. Participants performed a manual 2-choice go/no-go task to classify faces, where response hand depended on facial familiarity (famous vs. unfamiliar) and response execution depended on facial expression…
Sliwa, Julia; Planté, Aurélie; Duhamel, Jean-René; Wirth, Sylvia
2016-03-01
Social interactions make up, to a large extent, the prime material of episodic memories. We therefore asked how social signals are coded by neurons in the hippocampus. The human hippocampus is home to neurons representing familiar individuals in an abstract and invariant manner (Quian Quiroga et al. 2009). In contradistinction, activity of rat hippocampal cells is only weakly altered by the presence of other rats (von Heimendahl et al. 2012; Zynyuk et al. 2012). We probed the activity of monkey hippocampal neurons to faces and voices of familiar and unfamiliar individuals (monkeys and humans). Thirty-one percent of neurons recorded without prescreening responded to faces or to voices. Yet responses to faces were more informative about individuals than responses to voices, and neuronal responses to facial and vocal identities were not correlated, indicating that in our sample identity information was not conveyed in an invariant manner as in human neurons. Overall, responses displayed by monkey hippocampal neurons were similar to those of neurons recorded simultaneously in inferotemporal cortex, whose role in face perception is established. These results demonstrate that the monkey hippocampus, unlike the rat hippocampus, participates in the read-out of social information, but possibly lacks the explicit conceptual coding found in humans. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Automatic recognition of emotions from facial expressions
NASA Astrophysics Data System (ADS)
Xue, Henry; Gertner, Izidor
2014-06-01
In the human-computer interaction (HCI) process it is desirable to have an artificial intelligence (AI) system that can identify and categorize human emotions from facial expressions. Such systems can be used in security, in the entertainment industry, and also to study visual perception, social interactions and disorders (e.g. schizophrenia and autism). In this work we survey and compare the performance of different feature extraction algorithms and classification schemes. We introduce a faster feature extraction method that resizes and applies a set of filters to the data images without sacrificing accuracy. In addition, we enhanced SVM to handle multiple dimensions while retaining its high accuracy rate. The algorithms were tested using the Japanese Female Facial Expression (JAFFE) Database and the Database of Faces (AT&T Faces).
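The general shape of such a pipeline (resize and filter the images, flatten, train a support-vector classifier) can be sketched as below. Synthetic arrays stand in for the JAFFE images, the filter is a plain Gaussian smoother, and nothing here reproduces the authors' enhanced multi-dimensional SVM; it is a generic baseline for orientation only.

```python
# Generic sketch: filtered-and-resized images -> SVM emotion classifier.
# Synthetic arrays stand in for the JAFFE face images used in the study.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_images, size, small = 120, 64, 16
images = rng.normal(size=(n_images, size, size))       # placeholder "faces"
labels = np.repeat(np.arange(6), n_images // 6)        # six emotion categories

def extract_features(img: np.ndarray) -> np.ndarray:
    """Smooth, downsample, and flatten one image."""
    smoothed = gaussian_filter(img, sigma=1.0)
    resized = zoom(smoothed, small / img.shape[0])
    return resized.ravel()

X = np.stack([extract_features(img) for img in images])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, labels, cv=5)
print("cross-validated accuracy:", scores.mean())
```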
Corrow, Sherryse L; Mathison, Jordan; Granrud, Carl E; Yonas, Albert
2014-01-01
Corrow, Granrud, Mathison, and Yonas (2011, Perception, 40, 1376-1383) found evidence that 6-month-old infants perceive the hollow face illusion. In the present study we asked whether 6-month-old infants perceive illusory depth reversal for a nonface object and whether infants' perception of the hollow face illusion is affected by mask orientation inversion. In experiment 1 infants viewed a concave bowl, and their reaches were recorded under monocular and binocular viewing conditions. Infants reached to the bowl as if it were convex significantly more often in the monocular than in the binocular viewing condition. These results suggest that infants perceive illusory depth reversal with a nonface stimulus and that the infant visual system has a bias to perceive objects as convex. Infants in experiment 2 viewed a concave face-like mask in upright and inverted orientations. Infants reached to the display as if it were convex more in the monocular than in the binocular condition; however, mask orientation had no effect on reaching. Previous findings that adults' perception of the hollow face illusion is affected by mask orientation inversion have been interpreted as evidence of stored-knowledge influences on perception. However, we found no evidence of such influences in infants, suggesting that their perception of this illusion may not be affected by stored knowledge, and that perceived depth reversal is not face-specific in infants.
Evidence from Meta-Analyses of the Facial Width-to-Height Ratio as an Evolved Cue of Threat.
Geniole, Shawn N; Denson, Thomas F; Dixson, Barnaby J; Carré, Justin M; McCormick, Cheryl M
2015-01-01
The facial width-to-height ratio (FWHR) is the width of the face divided by the height of the upper face. There is mixed evidence for the hypothesis that the FWHR is a cue of threat and dominance in the human face. We conducted a systematic review and meta-analyses of all peer-reviewed studies (and 2 unpublished studies) to estimate the magnitude of the sex difference in the FWHR, and the magnitude of the relationship between the FWHR and threatening and dominant behaviours and perceptions. Studies were eligible for inclusion if the authors reported an analysis involving the FWHR. Our analyses revealed that the FWHR was larger in men than in women (d = .11, n = 10,853), cued judgements of masculinity in men (r = .35, n of faces = 487; n of observers = 339), and was related to body mass index (r = .31, n = 2,506). Further, the FWHR predicted both threat behaviour in men (r = .16, n = 4,603) and dominance behaviour in both sexes (r = .12, n = 948) across a variety of indices. Individuals with larger FWHRs were judged by observers as more threatening (r = .46, n of faces = 1,691; n of observers = 2,076) and more dominant (r = .20, n of faces = 603; n of observers = 236) than those with smaller FWHRs. Individuals with larger FWHRs were also judged as less attractive (r = -.26, n of faces = 721; n of observers = 335), especially when women made the judgements. These findings provide some support for the hypothesis that the FWHR is part of an evolved cueing system of intra-sexual threat and dominance in men. A limitation of the meta-analyses on perceptions of threat and dominance was the low number of stimuli involving female and older adult faces.
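The ratio itself is straightforward to compute from facial landmarks. The sketch below assumes hypothetical landmark coordinates, with bizygomatic width as face width and the brow-to-upper-lip distance as upper-face height (a common operationalization, not necessarily the one used in every study included in the meta-analyses).

```python
# Illustrative FWHR computation from hypothetical facial landmarks (x, y in pixels).
# Width: distance between the left and right zygion (widest points of the cheekbones).
# Upper-face height: vertical distance from the brow to the upper lip.
def fwhr(left_zygion, right_zygion, brow_midpoint, upper_lip):
    width = abs(right_zygion[0] - left_zygion[0])
    upper_face_height = abs(upper_lip[1] - brow_midpoint[1])
    return width / upper_face_height

# Example with made-up coordinates:
print(fwhr(left_zygion=(40, 120), right_zygion=(180, 122),
           brow_midpoint=(110, 95), upper_lip=(110, 170)))   # ~1.87
```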
Face value: amygdala response reflects the validity of first impressions.
Rule, Nicholas O; Moran, Joseph M; Freeman, Jonathan B; Whitfield-Gabrieli, Susan; Gabrieli, John D E; Ambady, Nalini
2011-01-01
The human amygdala responds to first impressions of people as judged from their faces, such as normative judgments about the trustworthiness of strangers. It is unknown, however, whether amygdala responses to first impressions can be validated by objective criteria. Here, we examined amygdala responses to faces of Chief Executive Officers (CEOs) where real-world outcomes could be measured objectively by the amounts of profits made by each CEO's company. During fMRI scanning, participants made incidental judgments about the symmetry of each CEO's face. After scanning, participants rated each CEO's face on leadership ability. Parametric analyses showed that greater left amygdala response to the CEOs' faces was associated with higher post-scan ratings of the CEOs' leadership ability. In addition, greater left amygdala response was also associated with greater profits made by the CEOs' companies and this relationship was statistically mediated by external raters' perceptions of arousal. Thus, amygdala response reflected both subjective judgments and objective measures of leadership ability based on first impressions. Copyright © 2010 Elsevier Inc. All rights reserved.
Aesthetic Response and Cosmic Aesthetic Distance
NASA Astrophysics Data System (ADS)
Madacsi, D.
2013-04-01
For Homo sapiens, the experience of a primal aesthetic response to nature was perhaps a necessary precursor to the arousal of an artistic impulse. Among the likely visual candidates for primal initiators of aesthetic response, arguments can be made in favor of the flower, the human face and form, and the sky and light itself as primordial aesthetic stimulants. Although visual perception of the sensory world of flowers and human faces and forms is mediated by light, it was most certainly in the sky that humans first could respond to the beauty of light per se. It is clear that as a species we do not yet identify and comprehend as nature, or part of nature, the entire universe beyond our terrestrial environs, the universe from which we remain inexorably separated by space and time. However, we now enjoy a technologically-enabled opportunity to probe the ultimate limits of visual aesthetic distance and the origins of human aesthetic response as we remotely explore deep space via the Hubble Space Telescope and its successors.
Environmental Inversion Effects in Face Perception
ERIC Educational Resources Information Center
Davidenko, Nicolas; Flusberg, Stephen J.
2012-01-01
Visual processing is highly sensitive to stimulus orientation; for example, face perception is drastically worse when faces are oriented inverted vs. upright. However, stimulus orientation must be established in relation to a particular reference frame, and in most studies, several reference frames are conflated. Which reference frame(s) matter in…
Sandberg, Kristian; Bahrami, Bahador; Kanai, Ryota; Barnes, Gareth Robert; Overgaard, Morten; Rees, Geraint
2014-01-01
Previous studies indicate that conscious face perception may be related to neural activity in a large time window around 170-800ms after stimulus presentation, yet in the majority of these studies changes in conscious experience are confounded with changes in physical stimulation. Using multivariate classification on MEG data recorded when participants reported changes in conscious perception evoked by binocular rivalry between a face and a grating, we showed that only MEG signals in the 120-320ms time range, peaking at the M170 around 180ms and the P2m at around 260ms, reliably predicted conscious experience. Conscious perception could not only be decoded significantly better than chance from the sensors that showed the largest average difference, as previous studies suggest, but also from patterns of activity across groups of occipital sensors that individually were unable to predict perception better than chance. Additionally, source space analyses showed that sources in the early and late visual system predicted conscious perception more accurately than frontal and parietal sites, although conscious perception could also be decoded there. Finally, the patterns of neural activity associated with conscious face perception generalized from one participant to another around the times of maximum prediction accuracy. Our work thus demonstrates that the neural correlates of particular conscious contents (here, faces) are highly consistent in time and space within individuals and that these correlates are shared to some extent between individuals. PMID:23281780
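A minimal sketch of time-resolved multivariate decoding of this kind is given below: a separate classifier is cross-validated at each time point on the pattern across sensors, yielding a decoding-accuracy time course. The data are synthetic, and the classifier and cross-validation scheme are generic choices rather than those used in the study.

```python
# Time-resolved decoding sketch: one cross-validated classifier per time point.
# Synthetic data stand in for MEG sensor recordings (trials x sensors x time).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_sensors, n_times = 100, 30, 50
X = rng.normal(size=(n_trials, n_sensors, n_times))
y = rng.integers(0, 2, size=n_trials)            # reported percept: face vs grating

# Inject a weak "face" signal in a mid-latency window so decoding peaks there.
X[y == 1, :5, 20:30] += 0.5

accuracy = np.array([
    cross_val_score(LogisticRegression(max_iter=1000), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
print("peak decoding accuracy at time index:", accuracy.argmax())
```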
Face Averages Enhance User Recognition for Smartphone Security
Robertson, David J.; Kramer, Robin S. S.; Burton, A. Mike
2015-01-01
Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual’s ‘face-average’ – a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user’s face than when they stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings. PMID:25807251
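The 'face-average' idea is simple to illustrate: align several images of the same person, take the pixelwise mean, and match new images against that average rather than against a single exemplar. The sketch below does this with synthetic grayscale arrays and a correlation-based match score; it is not the verification algorithm running on the phone.

```python
# Sketch of a pixelwise face average and a simple correlation-based match score.
# Synthetic grayscale arrays stand in for aligned face photographs.
import numpy as np

rng = np.random.default_rng(2)
identity = rng.normal(size=(64, 64))                      # stable "identity" pattern
enrolment = [identity + rng.normal(scale=0.8, size=(64, 64)) for _ in range(10)]

face_average = np.mean(enrolment, axis=0)                 # the stored representation
single_image = enrolment[0]                               # a single-exemplar baseline
probe = identity + rng.normal(scale=0.8, size=(64, 64))   # a new image of the user

def match(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between two flattened images."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

print("probe vs average :", match(probe, face_average))
print("probe vs exemplar:", match(probe, single_image))   # typically lower
```

Averaging suppresses image-specific noise while preserving the stable identity signal, which is why the averaged representation usually matches a new image better than any single exemplar does.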
A note on age differences in mood-congruent vs. mood-incongruent emotion processing in faces
Voelkle, Manuel C.; Ebner, Natalie C.; Lindenberger, Ulman; Riediger, Michaela
2014-01-01
This article addresses four interrelated research questions: (1) Does experienced mood affect emotion perception in faces and is this perception mood-congruent or mood-incongruent? (2) Are there age-group differences in the interplay between experienced mood and emotion perception? (3) Does emotion perception in faces change as a function of the temporal sequence of study sessions and stimuli presentation, and (4) does emotion perception in faces serve a mood-regulatory function? One hundred fifty-four adults of three different age groups (younger: 20–31 years; middle-aged: 44–55 years; older adults: 70–81 years) were asked to provide multidimensional emotion ratings of a total of 1026 face pictures of younger, middle-aged, and older men and women, each displaying six different prototypical (primary) emotional expressions. By analyzing the likelihood of ascribing an additional emotional expression to a face whose primary emotion had been correctly recognized, the multidimensional rating approach permits the study of emotion perception while controlling for emotion recognition. Following up on previous research on mood responses to recurring unpleasant situations using the same dataset (Voelkle et al., 2013), crossed random effects analyses supported a mood-congruent relationship between experienced mood and perceived emotions in faces. In particular older adults were more likely to perceive happiness in faces when being in a positive mood and less likely to do so when being in a negative mood. This did not apply to younger adults. Temporal sequence of study sessions and stimuli presentation had a strong effect on the likelihood of ascribing an additional emotional expression. In contrast to previous findings, however, there was neither evidence for a change from mood-congruent to mood-incongruent responses over time nor evidence for a mood-regulatory effect. PMID:25018740
Puglia, Meghan H.; Lillard, Travis S.; Morris, James P.; Connelly, Jessica J.
2015-01-01
In humans, the neuropeptide oxytocin plays a critical role in social and emotional behavior. The actions of this molecule are dependent on a protein that acts as its receptor, which is encoded by the oxytocin receptor gene (OXTR). DNA methylation of OXTR, an epigenetic modification, directly influences gene transcription and is variable in humans. However, the impact of this variability on specific social behaviors is unknown. We hypothesized that variability in OXTR methylation impacts social perceptual processes often linked with oxytocin, such as perception of facial emotions. Using an imaging epigenetic approach, we established a relationship between OXTR methylation and neural activity in response to emotional face processing. Specifically, high levels of OXTR methylation were associated with greater amounts of activity in regions associated with face and emotion processing including amygdala, fusiform, and insula. Importantly, we found that these higher levels of OXTR methylation were also associated with decreased functional coupling of amygdala with regions involved in affect appraisal and emotion regulation. These data indicate that the human endogenous oxytocin system is involved in attenuation of the fear response, corroborating research implicating intranasal oxytocin in the same processes. Our findings highlight the importance of including epigenetic mechanisms in the description of the endogenous oxytocin system and further support a central role for oxytocin in social cognition. This approach linking epigenetic variability with neural endophenotypes may broadly explain individual differences in phenotype including susceptibility or resilience to disease. PMID:25675509
ERIC Educational Resources Information Center
Carver, Diane L.; Kosloski, Michael F., Jr.
2015-01-01
This study analyzed student perceptions of the psychosocial learning environment in online and face-to-face career and technical education courses, and used survey data from a school district in Washington state. A Mann-Whitney "U" test was used to measure variability and compare the mean scores for a series of psychosocial learning…
ERIC Educational Resources Information Center
Terras, Katherine; Chiasson, Kari; Sansale, Adam
2012-01-01
According to Ayala (2009), blended learning is "the purposeful integration of traditional (i.e., face-to-face) and online learning in order to provide educational opportunities that maximize the benefits of each platform and thus more effectively facilitate student learning." The purpose of this study was to explore students' perceptions of…
Human preferences for sexually dimorphic faces may be evolutionarily novel
Scott, Isabel M.; Clark, Andrew P.; Josephson, Steven C.; Boyette, Adam H.; Cuthill, Innes C.; Fried, Ruby L.; Gibson, Mhairi A.; Hewlett, Barry S.; Jamieson, Mark; Jankowiak, William; Honey, P. Lynne; Huang, Zejun; Liebert, Melissa A.; Purzycki, Benjamin G.; Shaver, John H.; Snodgrass, J. Josh; Sosis, Richard; Sugiyama, Lawrence S.; Swami, Viren; Yu, Douglas W.; Zhao, Yangke; Penton-Voak, Ian S.
2014-01-01
A large literature proposes that preferences for exaggerated sex typicality in human faces (masculinity/femininity) reflect a long evolutionary history of sexual and social selection. This proposal implies that dimorphism was important to judgments of attractiveness and personality in ancestral environments. It is difficult to evaluate, however, because most available data come from large-scale, industrialized, urban populations. Here, we report the results for 12 populations with very diverse levels of economic development. Surprisingly, preferences for exaggerated sex-specific traits are only found in the novel, highly developed environments. Similarly, perceptions that masculine males look aggressive increase strongly with development and, specifically, urbanization. These data challenge the hypothesis that facial dimorphism was an important ancestral signal of heritable mate value. One possibility is that highly developed environments provide novel opportunities to discern relationships between facial traits and behavior by exposing individuals to large numbers of unfamiliar faces, revealing patterns too subtle to detect with smaller samples. PMID:25246593
Auditory to Visual Cross-Modal Adaptation for Emotion: Psychophysical and Neural Correlates.
Wang, Xiaodong; Guo, Xiaotao; Chen, Lin; Liu, Yijun; Goldberg, Michael E; Xu, Hong
2017-02-01
Adaptation is fundamental in sensory processing and has been studied extensively within the same sensory modality. However, little is known about adaptation across sensory modalities, especially in the context of high-level processing, such as the perception of emotion. Previous studies have shown that prolonged exposure to a face exhibiting one emotion, such as happiness, leads to contrastive biases in the perception of subsequently presented faces toward the opposite emotion, such as sadness. Such work has shown the importance of adaptation in calibrating face perception based on prior visual exposure. In the present study, we showed for the first time that emotion-laden sounds, like laughter, adapt the visual perception of emotional faces, that is, subjects more frequently perceived faces as sad after listening to a happy sound. Furthermore, via electroencephalography recordings and event-related potential analysis, we showed that there was a neural correlate underlying the perceptual bias: There was an attenuated response occurring at ∼ 400 ms to happy test faces and a quickened response to sad test faces, after exposure to a happy sound. Our results provide the first direct evidence for a behavioral cross-modal adaptation effect on the perception of facial emotion, and its neural correlate. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Integrating body movement into attractiveness research.
Fink, Bernhard; Weege, Bettina; Neave, Nick; Pham, Michael N; Shackelford, Todd K
2015-01-01
People judge attractiveness and make trait inferences from the physical appearance of others, and research reveals high agreement among observers making such judgments. Evolutionary psychologists have argued that interest in physical appearance and beauty reflects adaptations that motivate the search for desirable qualities in a potential partner. Although men more than women value the physical appearance of a partner, appearance universally affects social perception in both sexes. Most studies of attractiveness perceptions have focused on third party assessments of static representations of the face and body. Corroborating evidence suggests that body movement, such as dance, also conveys information about mate quality. Here we review evidence that dynamic cues (e.g., gait, dance) also influence perceptions of mate quality, including personality traits, strength, and overall attractiveness. We recommend that attractiveness research considers the informational value of body movement in addition to static cues, to present an integrated perspective on human social perception.
Dey, Jacob K; Ishii, Masaru; Boahene, Kofi D O; Byrne, Patrick J; Ishii, Lisa E
2014-01-01
This study determined the effect of facial reanimation surgery on observer-graded attractiveness and negative facial perception of patients with facial paralysis, using a randomized controlled experiment. Ninety observers viewed images of paralyzed faces, smiling and in repose, before and after reanimation surgery, as well as normal comparison faces. Observers rated the attractiveness of each face and characterized the paralyzed faces by rating severity, disfigured/bothersome, and importance to repair. Iterated factor analysis indicated these highly correlated variables measure a common domain, so they were combined to create the disfigured, important to repair, bothersome, severity (DIBS) factor score. Mixed effects linear regression determined the effect of facial reanimation surgery on attractiveness and DIBS score. Facial paralysis induces an attractiveness penalty of 2.51 on a 10-point scale for faces in repose and 3.38 for smiling faces. Mixed effects linear regression showed that reanimation surgery improved attractiveness for faces both in repose and smiling by 0.84 (95% confidence interval [CI]: 0.67, 1.01) and 1.24 (95% CI: 1.07, 1.42), respectively. Planned hypothesis tests confirmed statistically significant differences in attractiveness ratings between postoperative and normal faces, indicating attractiveness was not completely normalized. Regression analysis also showed that reanimation surgery decreased DIBS by 0.807 (95% CI: 0.704, 0.911) for faces in repose and 0.989 (95% CI: 0.886, 1.093), an entire standard deviation, for smiling faces. Facial reanimation surgery increases attractiveness and decreases negative facial perception of patients with facial paralysis. These data emphasize the need to optimize reanimation surgery to restore not only function, but also symmetry and cosmesis, to improve facial perception and patient quality of life. © 2013 The American Laryngological, Rhinological and Otological Society, Inc.
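For orientation, the sketch below shows the general form of a mixed-effects linear regression with a random intercept per observer, fitted with statsmodels on a small synthetic dataset. The variable names and effect sizes are invented for illustration and do not reproduce the study's model.

```python
# Sketch of a mixed-effects linear regression: attractiveness ratings by condition,
# with a random intercept for each observer. Data are synthetic and illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_obs, n_faces = 30, 20
observers = np.repeat(np.arange(n_obs), n_faces)            # 30 observers x 20 faces
condition = np.tile([0, 1], n_obs * n_faces // 2)           # 0 = preop, 1 = postop
observer_intercepts = np.repeat(rng.normal(scale=0.7, size=n_obs), n_faces)
rating = (5.0 + 0.9 * condition                             # fixed effect of surgery
          + observer_intercepts                             # observer random intercepts
          + rng.normal(scale=1.0, size=n_obs * n_faces))    # residual noise

df = pd.DataFrame({"rating": rating, "condition": condition, "observer": observers})
model = smf.mixedlm("rating ~ condition", df, groups=df["observer"]).fit()
print(model.params)   # the 'condition' coefficient recovers the simulated effect
```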
ERIC Educational Resources Information Center
Walsh, Jennifer A.; Creighton, Sarah E.; Rutherford, M. D.
2016-01-01
Some, but not all, relevant studies have revealed face processing deficits among those with autism spectrum disorder (ASD). In particular, deficits are revealed in face processing tasks that involve emotion perception. The current study examined whether either deficits in processing emotional expression or deficits in processing social cognitive…
Developmental Change in Infant Categorization: The Perception of Correlations among Facial Features.
ERIC Educational Resources Information Center
Younger, Barbara
1992-01-01
Tested 7 and 10 month olds for perception of correlations among facial features. After habituation to faces displaying a pattern of correlation, 10 month olds generalized to a novel face that preserved the pattern of correlation but showed increased attention to a novel face that violated the pattern. (BC)
Neurons in the Fusiform Gyrus are Fewer and Smaller in Autism
ERIC Educational Resources Information Center
van Kooten, Imke A. J.; Palmen, Saskia J. M. C.; von Cappeln, Patricia; Steinbusch, Harry W. M.; Korr, Hubert; Heinsen, Helmut; Hof, Patrick R.; van Engeland, Herman; Schmitz, Christoph
2008-01-01
Abnormalities in face perception are a core feature of social disabilities in autism. Recent functional magnetic resonance imaging studies showed that patients with autism could perform face perception tasks. However, the fusiform gyrus (FG) and other cortical regions supporting face processing in controls are hypoactive in patients with autism.…
Trustworthy-Looking Face Meets Brown Eyes
Kleisner, Karel; Priplatova, Lenka; Frost, Peter; Flegr, Jaroslav
2013-01-01
We tested whether eye color influences perception of trustworthiness. Facial photographs of 40 female and 40 male students were rated for perceived trustworthiness. Eye color had a significant effect, the brown-eyed faces being perceived as more trustworthy than the blue-eyed ones. Geometric morphometrics, however, revealed significant correlations between eye color and face shape. Thus, face shape likewise had a significant effect on perceived trustworthiness but only for male faces, the effect for female faces not being significant. To determine whether perception of trustworthiness was being influenced primarily by eye color or by face shape, we recolored the eyes on the same male facial photos and repeated the test procedure. Eye color now had no effect on perceived trustworthiness. We concluded that although the brown-eyed faces were perceived as more trustworthy than the blue-eyed ones, it was not brown eye color per se that caused the stronger perception of trustworthiness but rather the facial features associated with brown eyes. PMID:23326406
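Geometric-morphometric analyses of face shape, like the one used here to relate eye colour to face shape, typically begin with Procrustes superimposition of landmark configurations. The sketch below aligns two hypothetical 2D landmark sets with SciPy; it is a generic first step, not the authors' full analysis.

```python
# Minimal Procrustes alignment of two hypothetical 2D landmark configurations,
# the usual first step in geometric-morphometric analyses of face shape.
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(4)
face_a = rng.normal(size=(10, 2))                # 10 landmarks (x, y) on face A
# Face B: a rotated, scaled, translated, slightly perturbed copy of face A.
theta = 0.3
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
face_b = 1.5 * face_a @ rotation.T + np.array([2.0, -1.0]) \
         + rng.normal(scale=0.05, size=(10, 2))

aligned_a, aligned_b, disparity = procrustes(face_a, face_b)
print("shape disparity after alignment:", disparity)   # small -> similar shapes
```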
Brain systems for assessing the affective value of faces
Said, Christopher P.; Haxby, James V.; Todorov, Alexander
2011-01-01
Cognitive neuroscience research on facial expression recognition and face evaluation has proliferated over the past 15 years. Nevertheless, large questions remain unanswered. In this overview, we discuss the current understanding in the field, and describe what is known and what remains unknown. In §2, we describe three types of behavioural evidence that the perception of traits in neutral faces is related to the perception of facial expressions, and may rely on the same mechanisms. In §3, we discuss cortical systems for the perception of facial expressions, and argue for a partial segregation of function in the superior temporal sulcus and the fusiform gyrus. In §4, we describe the current understanding of how the brain responds to emotionally neutral faces. To resolve some of the inconsistencies in the literature, we perform a large group analysis across three different studies, and argue that one parsimonious explanation of prior findings is that faces are coded in terms of their typicality. In §5, we discuss how these two lines of research—perception of emotional expressions and face evaluation—could be integrated into a common, cognitive neuroscience framework. PMID:21536552
Face Coding Is Bilateral in the Female Brain
Proverbio, Alice Mado; Riva, Federica; Martin, Eleonora; Zani, Alberto
2010-01-01
Background: It is currently believed that face processing predominantly activates the right hemisphere in humans, but available literature is very inconsistent. Methodology/Principal Findings: In this study, ERPs were recorded in 50 right-handed women and men in response to 390 faces (of different age and sex), and 130 technological objects. Results showed no sex difference in the amplitude of N170 to objects; a much larger face-specific response over the right hemisphere in men, and a bilateral response in women; a lack of face-age coding effect over the left hemisphere in men, with no differences in N170 to faces as a function of age; a significant bilateral face-age coding effect in women. Conclusions/Significance: LORETA reconstruction showed a significant left and right asymmetry in the activation of the fusiform gyrus (BA19), in women and men, respectively. The present data reveal a lesser degree of lateralization of brain functions related to face coding in women than men. In this light, they may provide an explanation of the inconsistencies in the available literature concerning the asymmetric activity of left and right occipito-temporal cortices devoted to face perception during processing of face identity, structure, familiarity or affective content. PMID:20574528
Fink, B; Matts, P J; Brauckmann, C; Gundlach, S
2018-04-01
Previous studies investigating the effects of skin surface topography and colouration cues on the perception of female faces reported a differential weighting for the perception of skin topography and colour evenness, where topography was a stronger visual cue for the perception of age, whereas skin colour evenness was a stronger visual cue for the perception of health. We extend these findings in a study of the effect of skin surface topography and colour evenness cues on the perceptions of facial age, health and attractiveness in males. Facial images of six men (aged 40 to 70 years), selected for co-expression of lines/wrinkles and discolouration, were manipulated digitally to create eight stimuli, namely, separate removal of these two features (a) on the forehead, (b) in the periorbital area, (c) on the cheeks and (d) across the entire face. Omnibus (within-face) pairwise combinations, including the original (unmodified) face, were presented to a total of 240 male and female judges, who selected the face they considered younger, healthier and more attractive. Significant effects were detected for facial image choice, in response to skin feature manipulation. The combined removal of skin surface topography resulted in younger age perception compared with that seen with the removal of skin colouration cues, whereas the opposite pattern was found for health preference. No difference was detected for the perception of attractiveness. These perceptual effects were seen particularly on the forehead and cheeks. Removing skin topography cues (but not discolouration) in the periorbital area resulted in higher preferences for all three attributes. Skin surface topography and colouration cues affect the perception of age, health and attractiveness in men's faces. The combined removal of these features on the forehead, cheeks and in the periorbital area results in the most positive assessments. © 2018 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
ERIC Educational Resources Information Center
Vásquez Martínez, Claudio Rafael; Girón, Graciela; Bañuelos, Antonio Ayón
2012-01-01
This paper is based on a study of the perceptions of the distance education mode compared with face-to-face teaching on the part of students on the university distance education programme at the University of Antioch over the period from 2001 to 2007. It is not possible to ignore the close links between educational processes and social, economic,…
Ales, Justin M.; Farzin, Faraz; Rossion, Bruno; Norcia, Anthony M.
2012-01-01
We introduce a sensitive method for measuring face detection thresholds rapidly, objectively, and independently of low-level visual cues. The method is based on the swept parameter steady-state visual evoked potential (ssVEP), in which a stimulus is presented at a specific temporal frequency while parametrically varying (“sweeping”) the detectability of the stimulus. Here, the visibility of a face image was increased by progressive derandomization of the phase spectra of the image in a series of equally spaced steps. Alternations between face and fully randomized images at a constant rate (3/s) elicit a robust first harmonic response at 3 Hz specific to the structure of the face. High-density EEG was recorded from 10 human adult participants, who were asked to respond with a button-press as soon as they detected a face. The majority of participants produced an evoked response at the first harmonic (3 Hz) that emerged abruptly between 30% and 35% phase-coherence of the face, which was most prominent on right occipito-temporal sites. Thresholds for face detection were estimated reliably in single participants from 15 trials, or on each of the 15 individual face trials. The ssVEP-derived thresholds correlated with the concurrently measured perceptual face detection thresholds. This first application of the sweep VEP approach to high-level vision provides a sensitive and objective method that could be used to measure and compare visual perception thresholds for various object shapes and levels of categorization in different human populations, including infants and individuals with developmental delay. PMID:23024355
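Phase derandomization of this kind can be implemented with a Fourier transform: keep the amplitude spectrum of the face image and interpolate its phase spectrum between random values (0% coherence) and the original values (100% coherence). The sketch below is a generic NumPy implementation of that idea; the exact interpolation scheme and step spacing used in the study may differ.

```python
# Phase-coherence manipulation of an image: keep the amplitude spectrum and mix
# the original phases with random phases by a coherence parameter in [0, 1].
import numpy as np

def phase_scramble(image: np.ndarray, coherence: float, rng) -> np.ndarray:
    spectrum = np.fft.fft2(image)
    amplitude = np.abs(spectrum)
    phase = np.angle(spectrum)
    random_phase = rng.uniform(-np.pi, np.pi, size=phase.shape)
    mixed_phase = coherence * phase + (1.0 - coherence) * random_phase
    scrambled = np.fft.ifft2(amplitude * np.exp(1j * mixed_phase))
    # The random phases are not conjugate-symmetric, so keep the real part.
    return np.real(scrambled)

rng = np.random.default_rng(5)
face = rng.normal(size=(128, 128))        # placeholder for a face image
sweep = [phase_scramble(face, c, rng) for c in np.linspace(0.0, 1.0, 15)]
print(len(sweep), sweep[0].shape)         # 15 steps from fully random to intact
```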
African perceptions of female attractiveness.
Coetzee, Vinet; Faerber, Stella J; Greeff, Jaco M; Lefevre, Carmen E; Re, Daniel E; Perrett, David I
2012-01-01
Little is known about mate choice preferences outside Western, educated, industrialised, rich and democratic societies, even though these Western populations may be particularly unrepresentative of human populations. To our knowledge, this is the first study to test which facial cues contribute to African perceptions of African female attractiveness and also the first study to test the combined role of facial adiposity, skin colour (lightness, yellowness and redness), skin homogeneity and youthfulness in the facial attractiveness preferences of any population. Results show that youthfulness, skin colour, skin homogeneity and facial adiposity significantly and independently predict attractiveness in female African faces. Younger, thinner women with a lighter, yellower skin colour and a more homogenous skin tone are considered more attractive. These findings provide a more global perspective on human mate choice and point to a universal role for these four facial cues in female facial attractiveness.
Gilaie-Dotan, Sharon; Doron, Ravid
2017-06-01
Visual categories are associated with eccentricity biases in high-order visual cortex: faces and reading with foveally-biased regions, while common objects and space with mid- and peripherally-biased regions. As face perception and reading are among the most challenging human visual skills, and are often regarded as the peak achievements of a distributed neural network supporting common object perception, it is unclear why objects, which also rely on foveal vision to be processed, are associated with a mid-peripheral rather than with a foveal bias. Here, we studied BN, a 9 y.o. boy who has normal basic-level vision and abnormal (limited) oculomotor pursuit and saccades, and who shows developmental object and contour integration deficits but no indication of prosopagnosia. Although we cannot infer causation from the data presented here, we suggest that normal pursuit and saccades could be critical for the development of contour integration and object perception. While faces and perhaps reading, when fixated upon, take up a small portion of the central visual field and require only small eye movements to be properly processed, common objects typically prevail in the mid-peripheral visual field and rely on longer-distance voluntary eye movements, such as saccades, to be brought to fixation. While retinal information feeds into early visual cortex in an orderly, eccentricity-based manner, we hypothesize that propagation of non-foveal information to mid- and high-order visual cortex critically relies on circuitry involving eye movements. Limited or atypical eye movements, as in the case of BN, may hinder normal information flow to mid-eccentricity biased high-order visual cortex, adversely affecting its development and consequently inducing visual perceptual deficits predominantly for categories associated with these regions. Copyright © 2017 Elsevier Ltd. All rights reserved.
Tanaka, James W; Kaiser, Martha D; Hagen, Simen; Pierce, Lara J
2014-05-01
Given that all faces share the same set of features (two eyes, a nose, and a mouth) arranged in a similar configuration, recognition of a specific face must depend on our ability to discern subtle differences in its featural and configural properties. An enduring question in the face-processing literature is whether featural or configural information plays a larger role in the recognition process. To address this question, the face dimensions task was designed, in which the featural and configural properties in the upper (eye) and lower (mouth) regions of a face were parametrically and independently manipulated. In a same-different task, two faces were sequentially presented and tested in their upright or in their inverted orientation. Inversion disrupted the perception of featural size (Exp. 1), featural shape (Exp. 2), and configural changes in the mouth region, but it had relatively little effect on the discrimination of featural size and shape and configural differences in the eye region. Inversion had little effect on the perception of information in the top and bottom halves of houses (Exp. 3), suggesting that the lower-half impairment was specific to faces. Spatial cueing to the mouth region eliminated the inversion effect (Exp. 4), suggesting that participants have a bias to attend to the eye region of an inverted face. The collective findings from these experiments suggest that inversion does not differentially impair featural or configural face perception, but rather impairs the perception of information in the mouth region of the face.
Facial movements strategically camouflage involuntary social signals of face morphology.
Gill, Daniel; Garrod, Oliver G B; Jack, Rachael E; Schyns, Philippe G
2014-05-01
Animals use social camouflage as a tool of deceit to increase the likelihood of survival and reproduction. We tested whether humans can also strategically deploy transient facial movements to camouflage the default social traits conveyed by the phenotypic morphology of their faces. We used the responses of 12 observers to create models of the dynamic facial signals of dominance, trustworthiness, and attractiveness. We applied these dynamic models to facial morphologies differing on perceived dominance, trustworthiness, and attractiveness to create a set of dynamic faces; new observers rated each dynamic face according to the three social traits. We found that specific facial movements camouflage the social appearance of a face by modulating the features of phenotypic morphology. A comparison of these facial expressions with those similarly derived for facial emotions showed that social-trait expressions, rather than being simple one-to-one overgeneralizations of emotional expressions, are a distinct set of signals composed of movements from different emotions. Our generative face models represent novel psychophysical laws for social sciences; these laws predict the perception of social traits on the basis of dynamic face identities.
Visual attractiveness is leaky: the asymmetrical relationship between face and hair
Saegusa, Chihiro; Intoy, Janis; Shimojo, Shinsuke
2015-01-01
Predicting personality is crucial when communicating with people. It has been revealed that the perceived attractiveness or beauty of the face is a cue. As shown in the well-known “what is beautiful is good” stereotype, perceived attractiveness is often associated with desirable personality. Although such research on attractiveness used mainly the face isolated from other body parts, the face is not always seen in isolation in the real world. Rather, it is surrounded by one’s hairstyle, and is perceived as a part of total presence. In human vision, perceptual organization/integration occurs mostly in a bottom up, task-irrelevant fashion. This raises an intriguing possibility that task-irrelevant stimulus that is perceptually integrated with a target may influence our affective evaluation. In such a case, there should be a mutual influence between attractiveness perception of the face and surrounding hair, since they are assumed to share strong and unique perceptual organization. In the current study, we examined the influence of a task-irrelevant stimulus on our attractiveness evaluation, using face and hair as stimuli. The results revealed asymmetrical influences in the evaluation of one while ignoring the other. When hair was task-irrelevant, it still affected attractiveness of the face, but only if the hair itself had never been evaluated by the same evaluator. On the other hand, the face affected the hair regardless of whether the face itself was evaluated before. This has intriguing implications on the asymmetry between face and hair, and perceptual integration between them in general. Together with data from a post hoc questionnaire, it is suggested that both implicit non-selective and explicit selective processes contribute to attractiveness evaluation. The findings provide an understanding of attractiveness perception in real-life situations, as well as a new paradigm to reveal unknown implicit aspects of information integration for emotional judgment. PMID:25914656
The Caledonian face test: A new test of face discrimination.
Logan, Andrew J; Wilkinson, Frances; Wilson, Hugh R; Gordon, Gael E; Loffler, Gunter
2016-02-01
This study aimed to develop a clinical test of face perception which is applicable to a wide range of patients and can capture normal variability. The Caledonian face test utilises synthetic faces which combine simplicity with sufficient realism to permit individual identification. Face discrimination thresholds (i.e., the minimum difference between faces required for accurate discrimination) were determined in an "odd-one-out" task. The difference between faces was controlled by an adaptive QUEST procedure. A broad range of face discrimination sensitivity was determined from a group (N=52) of young adults (mean 5.75%; SD 1.18; range 3.33-8.84%). The test is fast (3-4 min), repeatable (test-retest r² = 0.795) and demonstrates a significant inversion effect. The potential to identify impairments of face discrimination was evaluated by testing LM, who reported a lifelong difficulty with face perception. While LM's impairment on two established face tests was close to the criterion for significance (Z-scores of -2.20 and -2.27), her Z-score on the Caledonian face test was -7.26, implying a more than threefold higher sensitivity. The new face test provides a quantifiable and repeatable assessment of face discrimination ability. The enhanced sensitivity suggests that the Caledonian face test may be capable of detecting more subtle impairments of face perception than available tests. Copyright © 2015 Elsevier Ltd. All rights reserved.
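The Z-scores reported above can be read directly against the normative statistics of the test (group mean threshold 5.75%, SD 1.18). Below is a minimal Python sketch of that arithmetic, assuming the sign convention that poorer discrimination (a higher threshold) is expressed as a more negative Z-score; the patient threshold in the example is hypothetical.

```python
# Minimal sketch: expressing a face-discrimination threshold as a Z-score
# against the normative data reported for the Caledonian face test
# (group mean 5.75%, SD 1.18). The sign convention (impairment shown as a
# negative Z) and the example patient threshold are illustrative assumptions.

NORM_MEAN = 5.75   # mean discrimination threshold (%) in the normative group
NORM_SD = 1.18     # standard deviation of thresholds (%) in the normative group

def face_test_z(threshold_pct: float) -> float:
    """Return a Z-score in which more negative means poorer discrimination."""
    # Higher thresholds mean poorer discrimination, so flip the sign.
    return -(threshold_pct - NORM_MEAN) / NORM_SD

if __name__ == "__main__":
    # normative mean, upper end of the normative range, hypothetical patient
    for t in (5.75, 8.84, 14.3):
        print(f"threshold {t:5.2f}%  ->  Z = {face_test_z(t):+.2f}")
```

With these normative values, a hypothetical threshold of about 14.3% corresponds to a Z-score near -7.2, in the range reported for LM.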
Rhodes, Gillian; Pond, Stephen; Burton, Nichola; Kloth, Nadine; Jeffery, Linda; Bell, Jason; Ewing, Louise; Calder, Andrew J; Palermo, Romina
2015-09-01
Traditional models of face perception emphasize distinct routes for processing face identity and expression. These models have been highly influential in guiding neural and behavioural research on the mechanisms of face perception. However, it is becoming clear that specialised brain areas for coding identity and expression may respond to both attributes and that identity and expression perception can interact. Here we use perceptual aftereffects to demonstrate the existence of dimensions in perceptual face space that code both identity and expression, further challenging the traditional view. Specifically, we find a significant positive association between face identity aftereffects and expression aftereffects, which dissociates from other face (gaze) and non-face (tilt) aftereffects. Importantly, individual variation in the adaptive calibration of these common dimensions significantly predicts ability to recognize both identity and expression. These results highlight the role of common dimensions in our ability to recognize identity and expression, and show why the high-level visual processing of these attributes is not entirely distinct. Copyright © 2015 Elsevier B.V. All rights reserved.
The effect of face eccentricity on the perception of gaze direction.
Todorović, Dejan
2009-01-01
The perception of a looker's gaze direction depends not only on iris eccentricity (the position of the looker's irises within the sclera) but also on the orientation of the looker's head. One among several potential cues of head orientation is face eccentricity, the position of the inner features of the face (eyes, nose, mouth) within the head contour, as viewed by the observer. For natural faces this cue is confounded with many other head-orientation cues, but in schematic faces it can be studied in isolation. Salient novel illustrations of the effectiveness of face eccentricity are 'Necker faces', which involve equal iris eccentricities but multiple perceived gaze directions. In four experiments, iris and face eccentricity in schematic faces were manipulated, revealing strong and consistent effects of face eccentricity on perceived gaze direction across different types of tasks. An additional experiment confirmed the 'Mona Lisa' effect with this type of stimulus. Face eccentricity most likely acted as a simple but robust cue of head turn. A simple computational account of the combined effects of cues of eye and head turn on perceived gaze direction is presented, including a formal condition for the perception of direct gaze. An account of the 'Mona Lisa' effect is presented.
Early stages of figure-ground segregation during perception of the face-vase.
Pitts, Michael A; Martínez, Antígona; Brewer, James B; Hillyard, Steven A
2011-04-01
The temporal sequence of neural processes supporting figure-ground perception was investigated by recording ERPs associated with subjects' perceptions of the face-vase figure. In Experiment 1, subjects continuously reported whether they perceived the face or the vase as the foreground figure by pressing one of two buttons. Each button press triggered a probe flash to the face region, the vase region, or the borders between the two. The N170/vertex positive potential (VPP) component of the ERP elicited by probes to the face region was larger when subjects perceived the faces as figure. Preceding the N170/VPP, two additional components were identified. First, when the borders were probed, ERPs differed in amplitude as early as 110 msec after probe onset depending on subjects' figure-ground perceptions. Second, when the face or vase regions were probed, ERPs were more positive (at ∼150-200 msec) when that region was perceived as figure versus background. These components likely reflect an early "border ownership" stage and a subsequent "figure-ground segregation" stage of processing. To explore the influence of attention on these stages of processing, two additional experiments were conducted. In Experiment 2, subjects selectively attended to the face or vase region, and the same early ERP components were again produced. In Experiment 3, subjects performed an identical selective attention task, but on a display lacking distinctive figure-ground borders, and neither of the early components was produced. Results from these experiments suggest sequential stages of processing underlying figure-ground perception, each of which is subject to modification by selective attention.
Children's Perceptions of and Beliefs about Facial Maturity
ERIC Educational Resources Information Center
Gross, Thomas F.
2004-01-01
The author studied children's and young adult's perceptions of facial age and beliefs about the sociability, cognitive ability, and physical fitness of adult faces. From pairs of photographs of adult faces, participants (4-6 years old, 8-10 years old, 13-16 years old, and 19-23 years old) selected the one face that appeared younger, older, better…
Asymmetric Cultural Effects on Perceptual Expertise Underlie an Own-Race Bias for Voices
ERIC Educational Resources Information Center
Perrachione, Tyler K.; Chiao, Joan Y.; Wong, Patrick C. M.
2010-01-01
The own-race bias in memory for faces has been a rich source of empirical work on the mechanisms of person perception. This effect is thought to arise because the face-perception system differentially encodes the relevant structural dimensions of features and their configuration based on experiences with different groups of faces. However, the…
Perception of Multisensory Gender Coherence in 6- and 9-month-old Infants
de Boisferon, Anne Hillairet; Dupierrix, Eve; Quinn, Paul C.; Lœvenbruck, Hélène; Lewkowicz, David J.; Lee, Kang; Pascalis, Olivier
2015-01-01
One of the most salient social categories conveyed by human faces and voices is gender. We investigated the developmental emergence of the ability to perceive the coherence of auditory and visual attributes of gender in 6- and 9-month-old infants. Infants viewed two side-by-side video clips of a man and a woman singing a nursery rhyme and heard a synchronous male or female soundtrack. Results showed that 6-month-old infants did not match the audible and visible attributes of gender, and 9-month-old infants matched only female faces and voices. These findings indicate that the ability to perceive the multisensory coherence of gender emerges relatively late in infancy and that it reflects the greater experience that most infants have with female faces and voices. PMID:26561475
Bate, Sarah; Bennetts, Rachel; Mole, Joseph A; Ainge, James A; Gregory, Nicola J; Bobak, Anna K; Bussunt, Amanda
2015-01-01
In this paper we describe the case of EM, a female adolescent who acquired prosopagnosia following encephalitis at the age of eight. Initial neuropsychological and eye-movement investigations indicated that EM had profound difficulties in face perception as well as face recognition. EM underwent 14 weeks of perceptual training in an online programme that attempted to improve her ability to make fine-grained discriminations between faces. Following training, EM's face perception skills had improved, and the effect generalised to untrained faces. Eye-movement analyses also indicated that EM spent more time viewing the inner facial features post-training. Examination of EM's face recognition skills revealed an improvement in her recognition of personally-known faces when presented in a laboratory-based test, although the same gains were not noted in her everyday experiences with these faces. In addition, EM did not improve on a test assessing the recognition of newly encoded faces. One month after training, EM had maintained the improvement on the eye-tracking test, and to a lesser extent, her performance on the familiar faces test. This pattern of findings is interpreted as promising evidence that the programme can improve face perception skills, and with some adjustments, may at least partially improve face recognition skills.
Subliminally Perceived Odours Modulate Female Intrasexual Competition: An Eye Movement Study
Parma, Valentina; Tirindelli, Roberto; Bisazza, Angelo; Massaccesi, Stefano; Castiello, Umberto
2012-01-01
Background: Evidence suggests that subliminal odorants influence human perception and behavior. It has been hypothesized that the human sex-steroid derived compound 4,16-androstadien-3-one (androstadienone) functions as a human chemosignal. The most intensively studied steroid compound, androstadienone is known to be biologically relevant since it seems to convey information about male mate quality to women. It is unclear if the effects of androstadienone are menstrual cycle related. Methodology/Principal Findings: In the first experiment, heterosexual women were exposed to androstadienone or a control compound and asked to view stimuli such as female faces, male faces and familiar objects while their eye movements were recorded. In the second experiment the same women were asked to rate the level of stimuli attractiveness following exposure to the study or control compound. The results indicated that women at high conception risk spent more time viewing the female than the male faces regardless of the compound administered. Women at a low conception risk exhibited a preference for female faces only following exposure to androstadienone. Conclusions/Significance: We contend that a woman's level of fertility influences her evaluation of potential competitors (e.g., faces of other women) during times critical for reproduction. Subliminally perceived odorants, such as androstadienone, might similarly enhance intrasexual competition strategies in women during fertility phases not critical for conception. These findings offer a substantial contribution to the current debate about the effects that subliminally perceived body odors might have on behavior. PMID:22383968
The Face-Processing Network Is Resilient to Focal Resection of Human Visual Cortex
Jonas, Jacques; Gomez, Jesse; Maillard, Louis; Brissart, Hélène; Hossu, Gabriela; Jacques, Corentin; Loftus, David; Colnat-Coulbois, Sophie; Stigliani, Anthony; Barnett, Michael A.; Grill-Spector, Kalanit; Rossion, Bruno
2016-01-01
Human face perception requires a network of brain regions distributed throughout the occipital and temporal lobes with a right hemisphere advantage. Present theories consider this network as either a processing hierarchy beginning with the inferior occipital gyrus (occipital face area; IOG-faces/OFA) or a multiple-route network with nonhierarchical components. The former predicts that removing IOG-faces/OFA will detrimentally affect downstream stages, whereas the latter does not. We tested this prediction in a human patient (Patient S.P.) requiring removal of the right inferior occipital cortex, including IOG-faces/OFA. We acquired multiple fMRI measurements in Patient S.P. before and after a preplanned surgery and multiple measurements in typical controls, enabling both within-subject/across-session comparisons (Patient S.P. before resection vs Patient S.P. after resection) and between-subject/across-session comparisons (Patient S.P. vs controls). We found that the spatial topology and selectivity of downstream ipsilateral face-selective regions were stable 1 and 8 month(s) after surgery. Additionally, the reliability of distributed patterns of face selectivity in Patient S.P. before versus after resection was not different from across-session reliability in controls. Nevertheless, postoperatively, representations of visual space were typical in dorsal face-selective regions but atypical in ventral face-selective regions and V1 of the resected hemisphere. Diffusion weighted imaging in Patient S.P. and controls identifies white matter tracts connecting retinotopic areas to downstream face-selective regions, which may contribute to the stable and plastic features of the face network in Patient S.P. after surgery. Together, our results support a multiple-route network of face processing with nonhierarchical components and shed light on stable and plastic features of high-level visual cortex following focal brain damage. SIGNIFICANCE STATEMENT Brain networks consist of interconnected functional regions commonly organized in processing hierarchies. Prevailing theories predict that damage to the input of the hierarchy will detrimentally affect later stages. We tested this prediction with multiple brain measurements in a rare human patient requiring surgical removal of the putative input to a network processing faces. Surprisingly, the spatial topology and selectivity of downstream face-selective regions are stable after surgery. Nevertheless, representations of visual space were typical in dorsal face-selective regions but atypical in ventral face-selective regions and V1. White matter connections from outside the face network may support these stable and plastic features. As processing hierarchies are ubiquitous in biological and nonbiological systems, our results have pervasive implications for understanding the construction of resilient networks. PMID:27511014
Early (M170) activation of face-specific cortex by face-like objects.
Hadjikhani, Nouchine; Kveraga, Kestutis; Naik, Paulami; Ahlfors, Seppo P
2009-03-04
The tendency to perceive faces in random patterns exhibiting configural properties of faces is an example of pareidolia. Perception of 'real' faces has been associated with a cortical response signal arising at approximately 170 ms after stimulus onset, but what happens when nonface objects are perceived as faces? Using magnetoencephalography, we found that objects incidentally perceived as faces evoked an early (165 ms) activation in the ventral fusiform cortex, at a time and location similar to that evoked by faces, whereas common objects did not evoke such activation. An earlier peak at 130 ms was also seen for images of real faces only. Our findings suggest that face perception evoked by face-like objects is a relatively early process, and not a late reinterpretation cognitive phenomenon.
Effects of spatial frequency and location of fearful faces on human amygdala activity.
Morawetz, Carmen; Baudewig, Juergen; Treue, Stefan; Dechent, Peter
2011-01-31
Facial emotion perception plays a fundamental role in interpersonal social interactions. Images of faces contain visual information at various spatial frequencies. The amygdala has previously been reported to be preferentially responsive to low-spatial frequency (LSF) rather than to high-spatial frequency (HSF) filtered images of faces presented at the center of the visual field. Furthermore, it has been proposed that the amygdala might be especially sensitive to affective stimuli in the periphery. In the present study we investigated the impact of spatial frequency and stimulus eccentricity on face processing in the human amygdala and fusiform gyrus using functional magnetic resonance imaging (fMRI). The spatial frequencies of pictures of fearful faces were filtered to produce images that retained only LSF or HSF information. Facial images were presented either in the left or right visual field at two different eccentricities. In contrast to previous findings, we found that the amygdala responds to LSF and HSF stimuli in a similar manner regardless of the location of the affective stimuli in the visual field. Furthermore, the fusiform gyrus did not show differential responses to spatial frequency filtered images of faces. Our findings argue against the view that LSF information plays a crucial role in the processing of facial expressions in the amygdala and of a higher sensitivity to affective stimuli in the periphery. Copyright © 2010 Elsevier B.V. All rights reserved.
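The filtering step described above, retaining only low- or high-spatial-frequency content of a face image, can be sketched with a Gaussian low-pass filter and its complement. This is a minimal sketch under assumed parameters: a synthetic stand-in image and an arbitrary blur width, not the cutoff frequencies actually used in the study.

```python
# Minimal sketch of producing low-spatial-frequency (LSF) and
# high-spatial-frequency (HSF) versions of an image. The blur width (sigma)
# and the synthetic test image are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
image = rng.random((128, 128))          # stand-in for a grayscale face photograph

sigma = 4.0                             # assumed Gaussian blur width in pixels
lsf = gaussian_filter(image, sigma)     # low-pass: keeps only coarse structure
hsf = image - lsf                       # high-pass: keeps only fine detail

print("LSF range:", lsf.min(), lsf.max())
print("HSF range:", hsf.min(), hsf.max())
```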
The Bangor Voice Matching Test: A standardized test for the assessment of voice perception ability.
Mühl, Constanze; Sheil, Orla; Jarutytė, Lina; Bestelmeyer, Patricia E G
2017-11-09
Recognising the identity of conspecifics is an important yet highly variable skill. Approximately 2 % of the population suffers from a socially debilitating deficit in face recognition. More recently the existence of a similar deficit in voice perception has emerged (phonagnosia). Face perception tests have been readily available for years, advancing our understanding of underlying mechanisms in face perception. In contrast, voice perception has received less attention, and the construction of standardized voice perception tests has been neglected. Here we report the construction of the first standardized test for voice perception ability. Participants make a same/different identity decision after hearing two voice samples. Item Response Theory guided item selection to ensure the test discriminates between a range of abilities. The test provides a starting point for the systematic exploration of the cognitive and neural mechanisms underlying voice perception. With a high test-retest reliability (r=.86) and short assessment duration (~10 min) this test examines individual abilities reliably and quickly and therefore also has potential for use in developmental and neuropsychological populations.
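The test-retest reliability quoted above (r = .86) is an ordinary correlation between scores from two administrations of the same test. The sketch below simulates this with an assumed sample size and noise level, chosen so that the expected correlation lands near the reported value.

```python
# Minimal sketch of a test-retest reliability estimate: correlate scores from
# two administrations of the same test. The simulated sample size, latent
# ability distribution and measurement noise are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n = 60
ability = rng.normal(0.0, 1.0, size=n)               # latent voice-matching ability
session1 = ability + rng.normal(0.0, 0.4, size=n)    # first administration
session2 = ability + rng.normal(0.0, 0.4, size=n)    # second administration

r = np.corrcoef(session1, session2)[0, 1]
print(f"test-retest reliability r = {r:.2f}")
```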
Repetition suppression of faces is modulated by emotion
NASA Astrophysics Data System (ADS)
Ishai, Alumit; Pessoa, Luiz; Bikle, Philip C.; Ungerleider, Leslie G.
2004-06-01
Single-unit recordings and functional brain imaging studies have shown reduced neural responses to repeated stimuli in the visual cortex. By using event-related functional MRI, we compared the activation evoked by repetitions of neutral and fearful faces, which were either task relevant (targets) or irrelevant (distracters). We found that within the inferior occipital gyri, lateral fusiform gyri, superior temporal sulci, amygdala, and the inferior frontal gyri/insula, targets evoked stronger responses than distracters and their repetition was associated with significantly reduced responses. Repetition suppression, as manifested by the difference in response amplitude between the first and third repetitions of a target, was stronger for fearful than neutral faces. Distracter faces, regardless of their repetition or valence, evoked negligible activation, indicating top-down attenuation of behaviorally irrelevant stimuli. Our findings demonstrate a three-way interaction between emotional valence, repetition, and task relevance and suggest that repetition suppression is influenced by high-level cognitive processes in the human brain.
Tanzer, Michal; Shahar, Golan; Avidan, Galia
2014-01-01
The aim of the proposed theoretical model is to illuminate personal and interpersonal resilience by drawing from the field of emotional face perception. We suggest that perception/recognition of emotional facial expressions serves as a central link between subjective, self-related processes and the social context. Emotional face perception constitutes a salient social cue underlying interpersonal communication and behavior. Because problems in communication and interpersonal behavior underlie most, if not all, forms of psychopathology, it follows that perception/recognition of emotional facial expressions impacts psychopathology. The ability to accurately interpret one’s facial expression is crucial in subsequently deciding on an appropriate course of action. However, perception in general, and of emotional facial expressions in particular, is highly influenced by individuals’ personality and the self-concept. Herein we briefly outline well-established theories of personal and interpersonal resilience and link them to the neuro-cognitive basis of face perception. We then describe the findings of our ongoing program of research linking two well-established resilience factors, general self-efficacy (GSE) and perceived social support (PSS), with face perception. We conclude by pointing out avenues for future research focusing on possible genetic markers and patterns of brain connectivity associated with the proposed model. Implications of our integrative model to psychotherapy are discussed. PMID:25165439
Etcoff, Nancy L.; Stock, Shannon; Haley, Lauren E.; Vickery, Sarah A.; House, David M.
2011-01-01
Research on the perception of faces has focused on the size, shape, and configuration of inherited features or the biological phenotype, and largely ignored the effects of adornment, or the extended phenotype. Research on the evolution of signaling has shown that animals frequently alter visual features, including color cues, to attract, intimidate or protect themselves from conspecifics. Humans engage in conscious manipulation of visual signals using cultural tools in real time rather than genetic changes over evolutionary time. Here, we investigate one tool, the use of color cosmetics. In two studies, we asked viewers to rate the same female faces with or without color cosmetics, and we varied the style of makeup from minimal (natural), to moderate (professional), to dramatic (glamorous). Each look provided increasing luminance contrast between the facial features and surrounding skin. Faces were shown for 250 ms or for unlimited inspection time, and subjects rated them for attractiveness, competence, likeability and trustworthiness. At 250 ms, cosmetics had significant positive effects on all outcomes. Length of inspection time did not change the effect for competence or attractiveness. However, with longer inspection time, the effect of cosmetics on likability and trust varied by specific makeup looks, indicating that cosmetics could impact automatic and deliberative judgments differently. The results suggest that cosmetics can create supernormal facial stimuli, and that one way they may do so is by exaggerating cues to sexual dimorphism. Our results provide evidence that judgments of facial trustworthiness and attractiveness are at least partially separable, that beauty has a significant positive effect on judgment of competence, a universal dimension of social cognition, but has a more nuanced effect on the other universal dimension of social warmth, and that the extended phenotype significantly influences perception of biologically important signals at first glance and at longer inspection. PMID:21991328
Gender differences in BOLD activation to face photographs and video vignettes.
Fine, Jodene Goldenring; Semrud-Clikeman, Margaret; Zhu, David C
2009-07-19
Few neuroimaging studies have reported gender differences in response to human emotions, and those that have examined such differences have utilized face photographs. This study presented not only human face photographs of positive and negative emotions, but also video vignettes of positive and negative social human interactions, in an attempt to provide a more ecologically appropriate stimulus paradigm. Ten male and ten female healthy right-handed young adults were shown positive and negative affective social human faces and video vignettes to elicit gender differences in social/emotional perception. Conservative ROI (region of interest) analysis indicated greater male than female activation to positive affective photos in the anterior cingulate, medial frontal gyrus, superior frontal gyrus and superior temporal gyrus, all in the right hemisphere. No significant ROI gender differences were observed to negative affective photos. Male greater than female activation was seen in ROIs of the left posterior cingulate and the right inferior temporal gyrus to positive social videos. Male greater than female activation occurred in only the left middle temporal ROI for negative social videos. Consistent with previous findings, males were more lateralized than females. Although more activation was observed overall to video compared to photo conditions, males and females appear to process social video stimuli more similarly to one another than they do photos. This study is a step forward in understanding the social brain with more ecologically valid stimuli that more closely approximate the demands of real-time social and affective processing.
What’s in a Face? How Face Gender and Current Affect Influence Perceived Emotion
Harris, Daniel A.; Hayes-Skelton, Sarah A.; Ciaramitaro, Vivian M.
2016-01-01
Faces drive our social interactions. A vast literature suggests an interaction between gender and emotional face perception, with studies using different methodologies demonstrating that the gender of a face can affect how emotions are processed. However, how different is our perception of affective male and female faces? Furthermore, how does our current affective state when viewing faces influence our perceptual biases? We presented participants with a series of faces morphed along an emotional continuum from happy to angry. Participants judged each face morph as either happy or angry. We determined each participant’s unique emotional ‘neutral’ point, defined as the face morph judged to be perceived equally happy and angry, separately for male and female faces. We also assessed how current state affect influenced these perceptual neutral points. Our results indicate that, for both male and female participants, the emotional neutral point for male faces is perceptually biased to be happier than for female faces. This bias suggests that more happiness is required to perceive a male face as emotionally neutral, i.e., we are biased to perceive a male face as more negative. Interestingly, we also find that perceptual biases in perceiving female faces are correlated with current mood, such that positive state affect correlates with perceiving female faces as happier, while we find no significant correlation between negative state affect and the perception of facial emotion. Furthermore, we find reaction time biases, with slower responses for angry male faces compared to angry female faces. PMID:27733839
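The emotional 'neutral' point described above is a point of subjective equality (PSE) on the happy-angry morph continuum. One common way to estimate it, sketched below with assumed, simulated response proportions rather than the study's data, is to fit a logistic psychometric function to the probability of a 'happy' judgment and take its 50% point.

```python
# Minimal sketch: estimate an observer's emotional 'neutral' point by fitting
# a logistic psychometric function to happy/angry judgments along a morph
# continuum and taking the 50% point (PSE). The data below are simulated.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    return 1.0 / (1.0 + np.exp(-(x - pse) / slope))

morph_levels = np.linspace(0, 100, 9)   # 0 = fully angry, 100 = fully happy
p_happy = np.array([0.02, 0.05, 0.10, 0.30, 0.55, 0.80, 0.92, 0.97, 0.99])

(pse, slope), _ = curve_fit(logistic, morph_levels, p_happy, p0=[50.0, 10.0])
print(f"neutral point (PSE): {pse:.1f}% happy, slope: {slope:.1f}")
```

A shift of the fitted PSE toward the happy end for male faces would correspond to the bias reported above: more happiness is needed before a male face is judged emotionally neutral.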
Lopez-Rosenfeld, Matías; Calero, Cecilia I.; Fernandez Slezak, Diego; Garbulsky, Gerry; Bergman, Mariano; Trevisan, Marcos; Sigman, Mariano
2015-01-01
There is a prevailing belief that interruptions using cellular phones during face-to-face interactions may severely affect how people relate to and perceive each other. We set out to determine this cost quantitatively through an experiment performed in dyads, in a large audience at a TEDx event. One of the two participants (the speaker) narrates a story vividly. The listener is asked to deliberately ignore the speaker during part of the story (for instance, by attending to their cell phone). The speaker is not aware of this treatment. We show that the total amount of attention is the major factor driving subjective beliefs about the story and the conversational partner. The effects are mostly independent of how attention is distributed in time. All social parameters of human communication are affected by attention time with a sole exception: the perceived emotion of the story. Interruptions during day-to-day communication between peers are extremely frequent. Our data should provide a note of caution, by indicating that such interruptions have a major effect on the perception people have about what they say (whether it is interesting or not...) and about the virtues of the people around them. PMID:26039326
Technology-Induced Risks in History
NASA Astrophysics Data System (ADS)
Rabkin, Ya.
Our perception of risk contains three main aspects: (1) probability of the risk occurring; (2) the extent of possible damage; (3) the degree of voluntary or involuntary exposure to risk. History of risk assessment has been traced in several areas, such as transportation, and has largely focused on insurance. Construction projects constitute one of the oldest areas of technology where accidents continue to occur, while health has always been a fragile commodity. Urbanization has multiplied the risks of illness and death, while natural catastrophes, though still frightening, have ceded their central place to technology-based disasters in Western perceptions of risk. The human has become the main source of danger to the very survival of the planet. The Enlightenment utopia of scientific progress resulting in social and moral progress of humanity has collided with the awareness of new technology-induced risks. Life on Earth began without humans, and it may end without them. Our civilization is the first that faces an end to be brought about by our own technological ingenuity.
Whole-Motion Model of Perception during Forward- and Backward-Facing Centrifuge Runs
Holly, Jan E.; Vrublevskis, Arturs; Carlson, Lindsay E.
2009-01-01
Illusory perceptions of motion and orientation arise during human centrifuge runs without vision. Asymmetries have been found between acceleration and deceleration, and between forward-facing and backward-facing runs. Perceived roll tilt has been studied extensively during upright fixed-carriage centrifuge runs, and other components have been studied to a lesser extent. Certain, but not all, perceptual asymmetries in acceleration-vs-deceleration and forward-vs-backward motion can be explained by existing analyses. The immediate acceleration-deceleration roll-tilt asymmetry can be explained by the three-dimensional physics of the external stimulus; in addition, longer-term data has been modeled in a standard way using physiological time constants. However, the standard modeling approach is shown in the present research to predict forward-vs-backward-facing symmetry in perceived roll tilt, contradicting experimental data, and to predict perceived sideways motion, rather than forward or backward motion, around a curve. The present work develops a different whole-motion-based model taking into account the three-dimensional form of perceived motion and orientation. This model predicts perceived forward or backward motion around a curve, and predicts additional asymmetries such as the forward-backward difference in roll tilt. This model is based upon many of the same principles as the standard model, but includes an additional concept of familiarity of motions as a whole. PMID:19208962
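The 'three-dimensional physics of the external stimulus' invoked above can be illustrated by the tilt of the gravito-inertial acceleration (GIA) during constant-rate centrifugation, arctan(ω²r/g). The sketch below computes that tilt for an assumed radius and rotation rate; it illustrates the physical stimulus only, not the authors' whole-motion perceptual model.

```python
# Minimal sketch: roll tilt of the gravito-inertial acceleration (GIA) for a
# rider on a fixed-carriage centrifuge at constant rotation rate. The radius
# and rotation rate below are illustrative assumptions.
import math

def gia_roll_tilt_deg(radius_m: float, rpm: float, g: float = 9.81) -> float:
    omega = rpm * 2.0 * math.pi / 60.0        # angular velocity in rad/s
    centripetal = omega ** 2 * radius_m       # m/s^2, directed toward the axis
    return math.degrees(math.atan2(centripetal, g))

print(f"GIA roll tilt: {gia_roll_tilt_deg(radius_m=2.0, rpm=20.0):.1f} deg")
```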
Aging effects on selective attention-related electroencephalographic patterns during face encoding.
Deiber, M-P; Rodriguez, C; Jaques, D; Missonnier, P; Emch, J; Millet, P; Gold, G; Giannakopoulos, P; Ibañez, V
2010-11-24
Previous electrophysiological studies revealed that human faces elicit an early visual event-related potential (ERP) within the occipito-temporal cortex, the N170 component. Although face perception has been proposed to rely on automatic processing, the impact of selective attention on N170 remains controversial both in young and elderly individuals. Using early visual ERP and alpha power analysis, we assessed the influence of aging on selective attention to faces during delayed-recognition tasks for face and letter stimuli, examining 36 elderly and 20 young adults with preserved cognition. Face recognition performance worsened with age. Aging induced a latency delay of the N1 component for faces and letters, as well as of the face N170 component. Contrasting with letters, ignored faces elicited larger N1 and N170 components than attended faces in both age groups. This counterintuitive attention effect on face processing persisted when scenes replaced letters. In contrast with young, elderly subjects failed to suppress irrelevant letters when attending faces. Whereas attended stimuli induced a parietal alpha band desynchronization within 300-1000 ms post-stimulus with bilateral-to-right distribution for faces and left lateralization for letters, ignored and passively viewed stimuli elicited a central alpha synchronization larger on the right hemisphere. Aging delayed the latency of this alpha synchronization for both face and letter stimuli, and reduced its amplitude for ignored letters. These results suggest that due to their social relevance, human faces may cause paradoxical attention effects on early visual ERP components, but they still undergo classical top-down control as a function of endogenous selective attention. Aging does not affect the face bottom-up alerting mechanism but reduces the top-down suppression of distracting letters, possibly impinging upon face recognition, and more generally delays the top-down suppression of task-irrelevant information. Copyright © 2010 IBRO. Published by Elsevier Ltd. All rights reserved.
Feczko, Eric; Shulman, Gordon L.; Petersen, Steven E.; Pruett, John R.
2014-01-01
Findings from diverse subfields of vision research suggest a potential link between high-level aspects of face perception and concentric form-from-structure perception. To explore this relationship, typical adults performed two adaptation experiments and two masking experiments to test whether concentric, but not nonconcentric, Glass patterns (a type of form-from-structure stimulus) utilize a processing mechanism shared by face perception. For the adaptation experiments, subjects were presented with an adaptor for 5 or 20 s, prior to discriminating a target. In the masking experiments, subjects saw a mask, then a target, and then a second mask. Measures of discriminability and bias were derived and repeated measures analysis of variance tested for pattern-specific masking and adaptation effects. Results from Experiment 1 show no Glass pattern-specific effect of adaptation to faces; results from Experiment 2 show concentric Glass pattern masking, but not adaptation, may impair upright/inverted face discrimination; results from Experiment 3 show concentric and radial Glass pattern masking impaired subsequent upright/inverted face discrimination more than translational Glass pattern masking; and results from Experiment 4 show concentric and radial Glass pattern masking impaired subsequent face gender discrimination more than translational Glass pattern masking. Taken together, these findings demonstrate interactions between concentric form-from-structure and face processing, suggesting a possible common processing pathway. PMID:24563526
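Glass patterns of the kind used in these experiments are built from paired dots whose pairwise offsets follow a global rule. Below is a minimal sketch of generating concentric versus translational variants; the dot count, rotation angle, and shift vector are illustrative assumptions, not the study's stimulus parameters.

```python
# Minimal sketch: generate dot positions for a concentric or translational
# Glass pattern. Each seed dot gets a partner produced either by a small
# rotation about the centre (concentric) or by a fixed shift (translational).
import numpy as np

def glass_pattern(n_pairs=500, kind="concentric",
                  rotation_deg=3.0, shift=(0.02, 0.0), seed=0):
    rng = np.random.default_rng(seed)
    dots = rng.uniform(-1.0, 1.0, size=(n_pairs, 2))   # seed dots in a unit square
    if kind == "concentric":
        a = np.deg2rad(rotation_deg)
        rot = np.array([[np.cos(a), -np.sin(a)],
                        [np.sin(a),  np.cos(a)]])
        partners = dots @ rot.T                         # rotate each dot about the centre
    elif kind == "translational":
        partners = dots + np.asarray(shift)             # shift every dot by the same vector
    else:
        raise ValueError("kind must be 'concentric' or 'translational'")
    return np.vstack([dots, partners])                  # (2*n_pairs, 2) dot positions

print(glass_pattern(kind="concentric").shape)
print(glass_pattern(kind="translational").shape)
```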
Developmental Changes in the Perception of Adult Facial Age
ERIC Educational Resources Information Center
Gross, Thomas F.
2007-01-01
The author studied children's (aged 5-16 years) and young adults' (aged 18-22 years) perception and use of facial features to discriminate the age of mature adult faces. In Experiment 1, participants rated the age of unaltered and transformed (eyes, nose, eyes and nose, and whole face blurred) adult faces (aged 20-80 years). In Experiment 2,…
Integration of internal and external facial features in 8- to 10-year-old children and adults.
Meinhardt-Injac, Bozana; Persike, Malte; Meinhardt, Günter
2014-06-01
Investigation of whole-part and composite effects in 4- to 6-year-old children gave rise to claims that face perception is fully mature within the first decade of life (Crookes & McKone, 2009). However, only internal features were tested, and the role of external features was not addressed, although external features are highly relevant for holistic face perception (Sinha & Poggio, 1996; Axelrod & Yovel, 2010, 2011). In this study, 8- to 10-year-old children and adults performed a same-different matching task with faces and watches. In this task participants attended to either internal or external features. Holistic face perception was tested using a congruency paradigm, in which face and non-face stimuli either agreed or disagreed in both features (congruent contexts) or just in the attended ones (incongruent contexts). In both age groups, pronounced context congruency and inversion effects were found for faces, but not for watches. These findings indicate holistic feature integration for faces. While inversion effects were highly similar in both age groups, context congruency effects were stronger for children. Moreover, children's face matching performance was generally better when attending to external compared to internal features. Adults tended to perform better when attending to internal features. Our results indicate that both adults and 8- to 10-year-old children integrate external and internal facial features into holistic face representations. However, in children's face representations external features are much more relevant. These findings suggest that face perception is holistic but still not adult-like at the end of the first decade of life. Copyright © 2014 Elsevier B.V. All rights reserved.
Visual perception during mirror gazing at one's own face in schizophrenia.
Caputo, Giovanni B; Ferrucci, Roberta; Bortolomasi, Marco; Giacopuzzi, Mario; Priori, Alberto; Zago, Stefano
2012-09-01
In normal observers, gazing at one's own face in the mirror for some minutes at a low illumination level triggers the perception of strange faces, a new perceptual illusion that has been named 'strange-face in the mirror'. Subjects see distortions of their own faces, but often they see monsters, archetypical faces, faces of dead relatives, and of animals. We designed this study primarily to compare strange-face apparitions in response to mirror gazing in patients with schizophrenia and healthy controls. The study included 16 patients with schizophrenia and 21 healthy controls. We administered a 7-minute mirror-gazing test (MGT). Before the mirror-gazing session, all subjects underwent assessment with the Cardiff Anomalous Perception Scale (CAPS). When the 7-minute MGT ended, the experimenter assessed patients and controls with a specifically designed questionnaire and interviewed them, asking them to describe strange-face perceptions. Apparitions of strange faces in the mirror were significantly more intense in schizophrenic patients than in controls. All the following variables were higher in patients than in healthy controls: frequency (p<.005) and cumulative duration of apparitions (p<.009), number and types of strange faces (p<.002), self-evaluation scores on Likert-type scales of apparition strength (p<.03) and of reality of apparitions (p<.001). In schizophrenic patients, these Likert-type scales showed correlations (p<.05) with CAPS total scores. These results suggest that the increase of strange-face apparitions in schizophrenia can be produced by ego dysfunction, by body dysmorphic disorder and by misattribution of self-agency. The MGT may help in completing the standard assessment of patients with schizophrenia, independently of hallucinatory psychopathology. Copyright © 2012 Elsevier B.V. All rights reserved.
Laughter exaggerates happy and sad faces depending on visual context
Sherman, Aleksandra; Sweeny, Timothy D.; Grabowecky, Marcia; Suzuki, Satoru
2012-01-01
Laughter is an auditory stimulus that powerfully conveys positive emotion. We investigated how laughter influenced visual perception of facial expressions. We simultaneously presented laughter with a happy, neutral, or sad schematic face. The emotional face was briefly presented either alone or among a crowd of neutral faces. We used a matching method to determine how laughter influenced the perceived intensity of happy, neutral, and sad expressions. For a single face, laughter increased the perceived intensity of a happy expression. Surprisingly, for a crowd of faces laughter produced an opposite effect, increasing the perceived intensity of a sad expression in a crowd. A follow-up experiment revealed that this contrast effect may have occurred because laughter made the neutral distracter faces appear slightly happy, thereby making the deviant sad expression stand out in contrast. A control experiment ruled out semantic mediation of the laughter effects. Our demonstration of the strong context dependence of laughter effects on facial expression perception encourages a re-examination of the previously demonstrated effects of prosody, speech content, and mood on face perception, as they may similarly be context dependent. PMID:22215467
Developmental changes in perceptions of attractiveness: a role of experience?
Cooper, Philip A; Geldart, Sybil S; Mondloch, Catherine J; Maurer, Daphne
2006-09-01
In three experiments, we traced the development of the adult pattern of judgments of attractiveness for faces that have been altered to have internal features in low, average, or high positions. Twelve-year-olds and adults demonstrated identical patterns of results: they rated faces with features in an average location as significantly more attractive than faces with either low or high features. Although both 4-year-olds and 9-year-olds rated faces with high features as least attractive, unlike adults and 12-year-olds, they rated faces with low and average features as equally attractive. Three-year-olds with high levels of peer interaction, but not those with low levels of peer interaction, chose faces with low features as significantly more attractive than those with high-placed features, possibly as a result of their increased experience with the proportions of the faces of peers. Overall, the pattern of results is consistent with the hypothesis that experience influences perceptions of attractiveness, with the proportions of the faces participants see in their everyday lives influencing their perceptions of attractiveness.
Russell, Richard; Chatterjee, Garga; Nakayama, Ken
2011-01-01
Face recognition by normal subjects depends in roughly equal proportions on shape and surface reflectance cues, while object recognition depends predominantly on shape cues. It is possible that developmental prosopagnosics are deficient not in their ability to recognize faces per se, but rather in their ability to use reflectance cues. Similarly, super-recognizers’ exceptional ability with face recognition may be a result of superior surface reflectance perception and memory. We tested this possibility by administering tests of face perception and face recognition in which only shape or reflectance cues are available to developmental prosopagnosics, super-recognizers, and control subjects. Face recognition ability and the relative use of shape and pigmentation were unrelated in all the tests. Subjects who were better at using shape or reflectance cues were also better at using the other type of cue. These results do not support the proposal that variation in surface reflectance perception ability is the underlying cause of variation in face recognition ability. Instead, these findings support the idea that face recognition ability is related to neural circuits using representations that integrate shape and pigmentation information. PMID:22192636
Auto white balance method using a pigmentation separation technique for human skin color
NASA Astrophysics Data System (ADS)
Tanaka, Satomi; Kakinuma, Akihiro; Kamijo, Naohiro; Takahashi, Hiroshi; Tsumura, Norimichi
2017-02-01
The human visual system maintains the perception of an object's colors across various light sources. Similarly, current digital cameras feature an auto white balance function, which estimates the illuminant color and corrects the colors of a photograph as if it had been taken under a certain light source. The main subject in a photograph is often a person's face, which could be used to estimate the illuminant color. However, such estimation is adversely affected by differences in facial color among individuals. The present paper proposes an auto white balance algorithm based on a pigmentation separation method that separates the human skin color image into the components of melanin, hemoglobin and shading. Pigment densities, which can be calculated from the melanin and hemoglobin components of the face, have a uniform property within the same race. We thus propose a method that uses the subject's facial color in an image and is unaffected by individual differences in facial color among Japanese people.
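The general idea sketched in the abstract can be illustrated, with heavy caveats, as a log-space decomposition of skin pixels along assumed melanin and hemoglobin colour directions, with the residual treated as shading plus illuminant from which white-balance gains are derived. The pigment vectors, simulated skin pixels, and gain rule below are illustrative assumptions and do not reproduce the paper's actual algorithm.

```python
# Heavily hedged sketch: estimate white-balance gains from skin pixels by
# removing components along assumed melanin/hemoglobin directions in log-RGB
# space and treating the residual as shading plus illuminant.
import numpy as np

# Assumed (unit-normalised) pigment directions in log-RGB space -- placeholders.
MELANIN = np.array([0.74, 0.55, 0.39]); MELANIN /= np.linalg.norm(MELANIN)
HEMOGLOBIN = np.array([0.43, 0.74, 0.52]); HEMOGLOBIN /= np.linalg.norm(HEMOGLOBIN)

def estimate_wb_gains(skin_rgb):
    """skin_rgb: (N, 3) array of linear RGB skin-pixel values in (0, 1]."""
    log_rgb = np.log(np.clip(skin_rgb, 1e-6, None))
    basis = np.stack([MELANIN, HEMOGLOBIN], axis=1)           # 3 x 2 pigment basis
    coeffs, *_ = np.linalg.lstsq(basis, log_rgb.T, rcond=None)
    residual = log_rgb.T - basis @ coeffs                     # shading + illuminant (log space)
    illum = np.exp(residual.mean(axis=1))                     # rough illuminant colour estimate
    return illum.mean() / illum                               # per-channel gains toward grey

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    skin = np.clip(rng.normal([0.7, 0.5, 0.4], 0.03, size=(1000, 3)), 0.05, 1.0)
    print("estimated white-balance gains (R, G, B):", estimate_wb_gains(skin))
```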
Resources and well-being among Arab-American elders.
Ajrouch, Kristine J
2007-06-01
This study addresses diversity of aging experiences by examining the associations among immigrant status, religious affiliation, and resources in the form of both human and social capital with the well-being of Arab-American elders. Data were drawn from a face-to-face survey of 101 Arab-American men and women aged 56 and over living in the metropolitan Detroit area. Correlations demonstrate that religious affiliation is not associated with well-being. Multiple regression analyses reveal that U.S.-born Arab Americans reported less frequent feelings of depression and greater life satisfaction than did immigrants, but this variation appears to be accounted for by human capital indicators, including education level and language. Social capital, including perceptions of the ability to confide in a child and of relationship quality with a spouse, is significantly associated with well-being, yet does not constitute a pathway to well-being for Arab-American elders. Human and social capital represent valuable resources, and their distribution within this immigrant/ethnic group is associated with noteworthy variations in well-being.
Serial dependence in the perception of attractiveness.
Xia, Ye; Leib, Allison Yamanashi; Whitney, David
2016-12-01
The perception of attractiveness is essential for choices of food, object, and mate preference. Like perception of other visual features, perception of attractiveness is stable despite constant changes of image properties due to factors like occlusion, visual noise, and eye movements. Recent results demonstrate that perception of low-level stimulus features and even more complex attributes like human identity are biased towards recent percepts. This effect is often called serial dependence. Some recent studies have suggested that serial dependence also exists for perceived facial attractiveness, though there is also concern that the reported effects are due to response bias. Here we used an attractiveness-rating task to test the existence of serial dependence in perceived facial attractiveness. Our results demonstrate that perceived face attractiveness was pulled by the attractiveness level of facial images encountered up to 6 s prior. This effect was not due to response bias and did not rely on the previous motor response. This perceptual pull increased as the difference in attractiveness between previous and current stimuli increased. Our results reconcile previously conflicting findings and extend previous work, demonstrating that sequential dependence in perception operates across different levels of visual analysis, even at the highest levels of perceptual interpretation.
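One common way to quantify the perceptual pull described above is to regress each trial's rating error on the difference between the previous and current stimulus values; a positive slope indicates attraction toward the preceding stimulus. The sketch below uses a simulated observer with an assumed pull strength and noise level, not the study's data.

```python
# Minimal sketch: estimate serial dependence in an attractiveness-rating task
# by regressing rating error on the previous-minus-current stimulus difference.
# The simulated observer (pull strength, noise) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(2)
true_attract = rng.uniform(1, 7, size=500)                # nominal attractiveness of each face
pull = 0.15                                               # assumed pull toward the previous stimulus
ratings = true_attract.copy()
ratings[1:] += pull * (true_attract[:-1] - true_attract[1:])
ratings += rng.normal(0, 0.3, size=ratings.shape)         # response noise

error = ratings[1:] - true_attract[1:]
prev_diff = true_attract[:-1] - true_attract[1:]
slope = np.polyfit(prev_diff, error, 1)[0]                # serial-dependence estimate
print(f"estimated pull toward previous stimulus: {slope:.2f}")
```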
Functional selectivity for face processing in the temporal voice area of early deaf individuals
van Ackeren, Markus J.; Rabini, Giuseppe; Zonca, Joshua; Foa, Valentina; Baruffaldi, Francesca; Rezk, Mohamed; Pavani, Francesco; Rossion, Bruno; Collignon, Olivier
2017-01-01
Brain systems supporting face and voice processing both contribute to the extraction of important information for social interaction (e.g., person identity). How does the brain reorganize when one of these channels is absent? Here, we explore this question by combining behavioral and multimodal neuroimaging measures (magneto-encephalography and functional imaging) in a group of early deaf humans. We show enhanced selective neural response for faces and for individual face coding in a specific region of the auditory cortex that is typically specialized for voice perception in hearing individuals. In this region, selectivity to face signals emerges early in the visual processing hierarchy, shortly after typical face-selective responses in the ventral visual pathway. Functional and effective connectivity analyses suggest reorganization in long-range connections from early visual areas to the face-selective temporal area in individuals with early and profound deafness. Altogether, these observations demonstrate that regions that typically specialize for voice processing in the hearing brain preferentially reorganize for face processing in born-deaf people. Our results support the idea that cross-modal plasticity in the case of early sensory deprivation relates to the original functional specialization of the reorganized brain regions. PMID:28652333
Misinterpretation of Facial Expressions of Emotion in Verbal Adults with Autism Spectrum Disorder
Eack, Shaun M.; MAZEFSKY, CARLA A.; Minshew, Nancy J.
2014-01-01
Facial emotion perception is significantly affected in autism spectrum disorder (ASD), yet little is known about how individuals with ASD misinterpret facial expressions that result in their difficulty in accurately recognizing emotion in faces. This study examined facial emotion perception in 45 verbal adults with ASD and 30 age- and gender-matched volunteers without ASD to identify patterns of emotion misinterpretation during face processing that contribute to emotion recognition impairments in autism. Results revealed that difficulty distinguishing emotional from neutral facial expressions characterized much of the emotion perception impairments exhibited by participants with ASD. In particular, adults with ASD uniquely misinterpreted happy faces as neutral, and were significantly more likely than typical volunteers to attribute negative valence to non-emotional faces. The over-attribution of emotions to neutral faces was significantly related to greater communication and emotional intelligence impairments in individuals with ASD. These findings suggest a potential negative bias toward the interpretation of facial expressions and may have implications for interventions designed to remediate emotion perception in ASD. PMID:24535689
The Neuroscience of Face Processing and Identification in Eyewitnesses and Offenders
Werner, Nicole-Simone; Kühnel, Sina; Markowitsch, Hans J.
2013-01-01
Humans are experts in face perception. We are better at distinguishing differences between faces and their components than between any other kind of object. Several studies investigating the underlying neural networks have provided evidence of atypical face processing in criminal individuals, although results are often confounded by accompanying mental or addiction disorders. On the other hand, face processing in non-criminal healthy persons can be of high juridical interest in cases of witnessing a felony and afterward identifying a culprit. Memory, and therefore recognition of a person, can be affected by many parameters and thus become distorted. Face processing itself is also modulated by factors such as facial characteristics, degree of familiarity, and emotional relation. These factors make the comparison of different cases, as well as the transfer of laboratory results to real-life settings, very challenging. Several neuroimaging studies have been published in recent years, and some progress has been made in connecting certain brain activation patterns with the correct recognition of an individual. However, there is still a long way to go before brain imaging can make a reliable contribution to court procedures. PMID:24367306
The economics of motion perception and invariants of visual sensitivity.
Gepshtein, Sergei; Tyukin, Ivan; Kubovy, Michael
2007-06-21
Neural systems face the challenge of optimizing their performance with limited resources, just as economic systems do. Here, we use tools of neoclassical economic theory to explore how a frugal visual system should use a limited number of neurons to optimize perception of motion. The theory prescribes that vision should allocate its resources to different conditions of stimulation according to the degree of balance between measurement uncertainties and stimulus uncertainties. We find that human vision approximately follows the optimal prescription. The equilibrium theory explains why human visual sensitivity is distributed the way it is and why qualitatively different regimes of apparent motion are observed at different speeds. The theory offers a new normative framework for understanding the mechanisms of visual sensitivity at the threshold of visibility and above the threshold and predicts large-scale changes in visual sensitivity in response to changes in the statistics of stimulation and system goals.
Asymmetries of the human social brain in the visual, auditory and chemical modalities.
Brancucci, Alfredo; Lucci, Giuliana; Mazzatenta, Andrea; Tommasi, Luca
2009-04-12
Structural and functional asymmetries are present in many regions of the human brain responsible for motor control, sensory and cognitive functions and communication. Here, we focus on hemispheric asymmetries underlying the domain of social perception, broadly conceived as the analysis of information about other individuals based on acoustic, visual and chemical signals. By means of these cues the brain establishes the border between 'self' and 'other', and interprets the surrounding social world in terms of the physical and behavioural characteristics of conspecifics essential for impression formation and for creating bonds and relationships. We show that, considered from the standpoint of single- and multi-modal sensory analysis, the neural substrates of the perception of voices, faces, gestures, smells and pheromones, as evidenced by modern neuroimaging techniques, are characterized by a general pattern of right-hemispheric functional asymmetry that might benefit from other aspects of hemispheric lateralization rather than constituting a true specialization for social information.
Mutual information, perceptual independence, and holistic face perception.
Fitousi, Daniel
2013-07-01
The concept of perceptual independence is ubiquitous in psychology. It addresses the question of whether two (or more) dimensions are perceived independently. Several authors have proposed perceptual independence (or its lack thereof) as a viable measure of holistic face perception (Loftus, Oberg, & Dillon, Psychological Review 111:835-863, 2004; Wenger & Ingvalson, Learning, Memory, and Cognition 28:872-892, 2002). According to this notion, the processing of facial features occurs in an interactive manner. Here, I examine this idea from the perspective of two theories of perceptual independence: the multivariate uncertainty analysis (MUA; Garner & Morton, Definitions, models, and experimental paradigms. Psychological Bulletin 72:233-259, 1969), and the general recognition theory (GRT; Ashby & Townsend, Psychological Review 93:154-179, 1986). The goals of the study were to (1) introduce the MUA, (2) examine various possible relations between MUA and GRT using numerical simulations, and (3) apply the MUA to two consensual markers of holistic face perception: recognition of facial features (Farah, Wilson, Drain, & Tanaka, Psychological Review 105:482-498, 1998) and the composite face effect (Young, Hellawell, & Hay, Perception 16:747-759, 1987). The results suggest that facial holism is generated by violations of several types of perceptual independence. They highlight the important theoretical role played by converging operations in the study of holistic face perception.
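MUA quantifies departures from perceptual independence with information-theoretic measures such as the mutual information of a joint identification table. The sketch below is a generic illustration with made-up counts, not the paper's simulations: if two facial dimensions were perceived independently, I(X;Y) would be close to zero.

    # Mutual information between two response dimensions from a joint count table;
    # the counts are invented for illustration.
    import numpy as np

    counts = np.array([[40, 10],     # rows: levels of dimension X (e.g., eyes)
                       [12, 38]])    # cols: levels of dimension Y (e.g., mouth)
    p_xy = counts / counts.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)

    independent = p_x @ p_y                       # expected joint table under independence
    mask = p_xy > 0
    mi = np.sum(p_xy[mask] * np.log2(p_xy[mask] / independent[mask]))
    print(f"I(X;Y) = {mi:.3f} bits")              # values above zero indicate dependence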
Perceptions of Rule-Breaking Related to Marine Ecosystem Health
Slater, Matthew J.; Mgaya, Yunus D.; Stead, Selina M.
2014-01-01
Finding effective solutions to manage marine resources is high on political and conservation agendas worldwide. This is made more urgent by the rate of increase in the human population and concomitant resource pressures in coastal areas. This paper links empirical socio-economic data about perceptions of marine resource health to the breaking of marine management rules, using fisheries as a case study. The relationship between perceived rule-breaking (non-compliance with regulations controlling fishing) and perceived health of inshore marine environments was investigated through face-to-face interviews with 299 heads of households in three Tanzanian coastal communities in November and December 2011. Awareness of rules controlling fishing activity was high among all respondents. Fishers were able to describe more specific rules controlling fishing practices than non-fishers (t = 3.5, df = 297, p<0.01). Perceived breaking of fishing regulations was reported by nearly half of all respondents, saying “some” (32% of responses) or “most” (15% of responses) people break fishing rules. Ordinal regression modelling revealed a significant linkage (z = −3.44, p<0.001) in the relationship between respondents' perceptions of deteriorating marine health and their perception of increased rule-breaking. In this paper, inferences from an empirical study are used to identify and argue the potential for using perceptions of ecosystem health and level of rule-breaking as a means to guide management measures. When considering different management options (e.g. Marine Protected Areas), policy makers are advised to take account of and utilise likely egoistic or altruistic decision-making factors used by fishers to determine their marine activities. PMID:24586558
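Ordinal regression of the kind reported above, an ordered outcome ("none", "some", or "most" people break rules) predicted by perceived ecosystem health, can be sketched as a proportional-odds model. The snippet below is an illustration on simulated data with assumed variable names; it is not the authors' model or dataset, and it assumes statsmodels >= 0.12 for OrderedModel.

    # Illustrative proportional-odds (ordinal logit) regression on simulated data.
    import numpy as np
    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    rng = np.random.default_rng(0)
    n = 299
    health = rng.integers(1, 6, size=n)                # hypothetical 1-5 health rating
    latent = -0.6 * health + rng.logistic(size=n)      # worse health -> more rule-breaking
    rule_breaking = pd.cut(latent, bins=[-np.inf, -2.5, -1.0, np.inf],
                           labels=[0, 1, 2]).astype(int)   # 0 = none, 1 = some, 2 = most

    model = OrderedModel(rule_breaking, pd.DataFrame({"health": health}), distr="logit")
    result = model.fit(method="bfgs", disp=False)
    print(result.summary())   # the z statistic on "health" is the analogue of z = -3.44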
Iaria, Giuseppe; Fox, Christopher J; Scheel, Michael; Stowe, Robert M; Barton, Jason J S
2010-04-01
In this study, we report the case of a patient experiencing hallucinations of faces that could be reliably precipitated by looking at trees. Using functional Magnetic Resonance Imaging (fMRI), we found that face hallucinations were associated with increased and decreased neural activity in a number of cortical regions. Within the same fusiform face area, however, we found significantly decreased and increased neural activity according to whether the patient was experiencing hallucinations or veridical perception of faces, respectively. These findings may indicate key differences in how hallucinatory and veridical perceptions lead to the same phenomenological experience of seeing faces.
The Naked Truth: The Face and Body Sensitive N170 Response Is Enhanced for Nude Bodies
Hietanen, Jari K.; Nummenmaa, Lauri
2011-01-01
Recent event-related potential studies have shown that the occipitotemporal N170 component - best known for its sensitivity to faces - is also sensitive to perception of human bodies. Considering that in the timescale of evolution clothing is a relatively new invention that hides the bodily features relevant for sexual selection and arousal, we investigated whether the early N170 brain response would be enhanced to nude over clothed bodies. In two experiments, we measured N170 responses to nude bodies, bodies wearing swimsuits, clothed bodies, faces, and control stimuli (cars). We found that the N170 amplitude was larger to opposite and same-sex nude vs. clothed bodies. Moreover, the N170 amplitude increased linearly as the amount of clothing decreased from full clothing via swimsuits to nude bodies. Strikingly, the N170 response to nude bodies was even greater than that to faces, and the N170 amplitude to bodies was independent of whether the face of the bodies was visible or not. All human stimuli evoked greater N170 responses than did the control stimulus. Autonomic measurements and self-evaluations showed that nude bodies were affectively more arousing compared to the other stimulus categories. We conclude that the early visual processing of human bodies is sensitive to the visibility of the sex-related features of human bodies and that the visual processing of other people's nude bodies is enhanced in the brain. This enhancement is likely to reflect affective arousal elicited by nude bodies. Such facilitated visual processing of other people's nude bodies is possibly beneficial in identifying potential mating partners and competitors, and for triggering sexual behavior. PMID:22110574
How does cognitive load influence speech perception? An encoding hypothesis.
Mitterer, Holger; Mattys, Sven L
2017-01-01
Two experiments investigated the conditions under which cognitive load exerts an effect on the acuity of speech perception. These experiments extend earlier research by using a different speech perception task (four-interval oddity task) and by implementing cognitive load through a task often thought to be modular, namely, face processing. In the cognitive-load conditions, participants were required to remember two faces presented before the speech stimuli. In Experiment 1, performance in the speech-perception task under cognitive load was not impaired in comparison to a no-load baseline condition. In Experiment 2, we modified the load condition minimally such that it required encoding of the two faces simultaneously with the speech stimuli. As a reference condition, we also used a visual search task that in earlier experiments had led to poorer speech perception. Both concurrent tasks led to decrements in the speech task. The results suggest that speech perception is affected even by loads thought to be processed modularly, and that, critically, encoding in working memory might be the locus of interference.
Prieto, Esther Alonso; Caharel, Stéphanie; Henson, Richard; Rossion, Bruno
2011-01-01
Compared to objects, pictures of faces elicit a larger early electromagnetic response at occipito-temporal sites on the human scalp, with an onset of 130 ms and a peak at about 170 ms. This N170 face effect is larger in the right than the left hemisphere and has been associated with the early categorization of the stimulus as a face. Here we tested whether this effect can be observed in the absence of some of the visual areas showing a preferential response to faces as typically identified in neuroimaging. Event-related potentials were recorded in response to faces, cars, and their phase-scrambled versions in a well-known brain-damaged case of prosopagnosia (PS). Despite the patient’s right inferior occipital gyrus lesion encompassing the most posterior cortical area showing preferential response to faces (“occipital face area”), we identified an early face-sensitive component over the right occipito-temporal hemisphere of the patient that was identified as the N170. A second experiment supported this conclusion, showing the typical N170 increase of latency and amplitude in response to inverted faces. In contrast, there was no N170 in the left hemisphere, where PS has a lesion to the middle fusiform gyrus and shows no evidence of face-preferential response in neuroimaging (no left “fusiform face area”). These results were replicated by a magnetoencephalographic investigation of the patient, disclosing a M170 component only in the right hemisphere. These observations indicate that face-preferential activation in the inferior occipital cortex is not necessary to elicit early visual responses associated with face perception (N170/M170) on the human scalp. These results further suggest that when the right inferior occipital cortex is damaged, the integrity of the middle fusiform gyrus and/or the superior temporal sulcus – two areas showing face-preferential responses in the patient’s right hemisphere – might be necessary to generate the N170 effect. PMID:22275889
Human and animal sounds influence recognition of body language.
Van den Stock, Jan; Grèzes, Julie; de Gelder, Beatrice
2008-11-25
In naturalistic settings, emotional events have multiple correlates and are simultaneously perceived by several sensory systems. Recent studies have shown that recognition of facial expressions is biased towards the emotion expressed by a simultaneously presented emotional expression in the voice even if attention is directed to the face only. So far, no study has examined whether this phenomenon also applies to whole body expressions, although there is no obvious reason why this crossmodal influence would be specific for faces. Here we investigated whether perception of emotions expressed in whole body movements is influenced by affective information provided by human and by animal vocalizations. Participants were instructed to attend to the action displayed by the body and to categorize the expressed emotion. The results indicate that recognition of body language is biased towards the emotion expressed by the simultaneously presented auditory information, whether it consists of human or of animal sounds. Our results show that a crossmodal influence from auditory to visual emotional information obtains for whole body video images with the facial expression blanked and includes human as well as animal sounds.
Conscious awareness is required for holistic face processing
Axelrod, Vadim; Rees, Geraint
2014-01-01
Investigating the limits of unconscious processing is essential to understand the function of consciousness. Here, we explored whether holistic face processing, a mechanism believed to be important for face processing in general, can be accomplished unconsciously. Using a novel “eyes-face” stimulus we tested whether discrimination of pairs of eyes was influenced by the surrounding face context. While the eyes were fully visible, the faces that provided context could be rendered invisible through continuous flash suppression. Two experiments with three different sets of face stimuli and a subliminal learning procedure converged to show that invisible faces did not influence perception of visible eyes. In contrast, surrounding faces, when they were clearly visible, strongly influenced perception of the eyes. Thus, we conclude that conscious awareness might be a prerequisite for holistic face processing. PMID:24950500
Hearing Faces: How the Infant Brain Matches the Face It Sees with the Speech It Hears
ERIC Educational Resources Information Center
Bristow, Davina; Dehaene-Lambertz, Ghislaine; Mattout, Jeremie; Soares, Catherine; Gliga, Teodora; Baillet, Sylvain; Mangin, Jean-Francois
2009-01-01
Speech is not a purely auditory signal. From around 2 months of age, infants are able to correctly match the vowel they hear with the appropriate articulating face. However, there is no behavioral evidence of integrated audiovisual perception until 4 months of age, at the earliest, when an illusory percept can be created by the fusion of the…
Jung, Wookyoung; Kang, Joong-Gu; Jeon, Hyeonjin; Shim, Miseon; Sun Kim, Ji; Leem, Hyun-Sung; Lee, Seung-Hwan
2017-08-01
Faces are processed best when they are presented in the left visual field (LVF), a phenomenon known as LVF superiority. Although one eye contributes more when perceiving faces, it is unclear how the dominant eye (DE), the eye we unconsciously use when performing a monocular task, affects face processing. Here, we examined the influence of the DE on the LVF superiority for faces using event-related potentials. Twenty left-eye-dominant (LDE group) and 23 right-eye-dominant (RDE group) participants performed the experiments. Face stimuli were randomly presented in the LVF or right visual field (RVF). The RDE group exhibited significantly larger N170 amplitudes compared with the LDE group. Faces presented in the LVF elicited N170 amplitudes that were significantly more negative in the RDE group than they were in the LDE group, whereas the amplitudes elicited by stimuli presented in the RVF were equivalent between the groups. The LVF superiority was maintained in the RDE group but not in the LDE group. Our results provide the first neural evidence of the DE's effects on the LVF superiority for faces. We propose that the RDE may be more biologically specialized for face processing. © The Author (2017). Published by Oxford University Press.
Kaiser, Daniel; Strnad, Lukas; Seidl, Katharina N.; Kastner, Sabine
2013-01-01
Visual cues from the face and the body provide information about another's identity, emotional state, and intentions. Previous neuroimaging studies that investigated neural responses to (bodiless) faces and (headless) bodies have reported overlapping face- and body-selective brain regions in right fusiform gyrus (FG). In daily life, however, faces and bodies are typically perceived together and are effortlessly integrated into the percept of a whole person, raising the possibility that neural responses to whole persons are qualitatively different than responses to isolated faces and bodies. The present study used fMRI to examine how FG activity in response to a whole person relates to activity in response to the same face and body but presented in isolation. Using multivoxel pattern analysis, we modeled person-evoked response patterns in right FG through a linear combination of face- and body-evoked response patterns. We found that these synthetic patterns were able to accurately approximate the response patterns to whole persons, with face and body patterns each adding unique information to the response patterns evoked by whole person stimuli. These results suggest that whole person responses in FG primarily arise from the coactivation of independent face- and body-selective neural populations. PMID:24108794
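The pattern-modelling step described above, in which the whole-person response pattern is approximated by a weighted sum of face- and body-evoked patterns, can be sketched as an ordinary least-squares fit over voxels. The following snippet uses random stand-in arrays and is illustrative only, not the study's data or code:

    # Approximate a whole-person voxel pattern as a weighted sum of face and body patterns.
    import numpy as np

    rng = np.random.default_rng(1)
    n_voxels = 200
    face_pattern = rng.normal(size=n_voxels)
    body_pattern = rng.normal(size=n_voxels)
    # Pretend the person pattern mixes both sources plus noise.
    person_pattern = 0.6 * face_pattern + 0.4 * body_pattern + 0.2 * rng.normal(size=n_voxels)

    X = np.column_stack([face_pattern, body_pattern])
    weights, *_ = np.linalg.lstsq(X, person_pattern, rcond=None)
    synthetic = X @ weights                      # the "synthetic" person pattern

    fit = np.corrcoef(synthetic, person_pattern)[0, 1]
    print(f"face weight = {weights[0]:.2f}, body weight = {weights[1]:.2f}, r = {fit:.2f}")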
Translation and articulation in biological motion perception.
Masselink, Jana; Lappe, Markus
2015-08-01
Recent models of biological motion processing focus on the articulational aspect of human walking investigated by point-light figures walking in place. However, in real human walking, the change in the position of the limbs relative to each other (referred to as articulation) results in a change of body location in space over time (referred to as translation). In order to examine the role of this translational component in the perception of biological motion, we designed three psychophysical experiments of facing (leftward/rightward) and articulation discrimination (forward/backward and leftward/rightward) of a point-light walker viewed from the side, varying translation direction (relative to articulation direction), the amount of local image motion, and trial duration. In a further set of forward/backward and leftward/rightward articulation tasks, we additionally tested the influence of translational speed, including catch trials without articulation. We found a perceptual bias in translation direction in all three discrimination tasks. In the case of facing discrimination, the bias was limited to short stimulus presentation. Our results suggest an interaction of articulation analysis with the processing of translational motion, leading to best articulation discrimination when translational direction and speed match articulation. Moreover, we conclude that the global motion of the center-of-mass of the dot pattern is more relevant to processing of translation than the local motion of the dots. Our findings highlight that translation is a relevant cue that should be integrated in models of human motion detection.
The influence of attention toward facial expressions on size perception.
Choi, Jeong-Won; Kim, Kiho; Lee, Jang-Han
2016-01-01
According to the New Look theory, size perception is affected by emotional factors. Although previous studies have attempted to explain the effects of both emotion and motivation on size perception, they have failed to identify the underlying mechanisms. This study aimed to investigate the underlying mechanisms of size perception by applying attention toward facial expressions using the Ebbinghaus illusion as a measurement tool. The participants, female university students, were asked to judge the size of a target stimulus relative to the size of facial expressions (i.e., happy, angry, and neutral) surrounding the target. The results revealed that the participants perceived angry and neutral faces to be larger than happy faces. This finding indicates that individuals pay closer attention to neutral and angry faces than happy ones. These results suggest that the mechanisms underlying size perception involve cognitive processes that focus attention toward relevant stimuli and block out irrelevant stimuli.
Abbas, Ozan Luay; Kurkcuoglu, Ayla; Aytop, Cigdem Derya; Uysal, Cengiz; Pelin, Can
2017-10-01
Visual perception of symmetry is a major determinant of satisfaction after aesthetic rhinoplasty. In this study, we sought to investigate the existence of any relationship between anthropometric characteristics of the face and visual perceptions of asymmetry among rhinoplasty patients and to evaluate tools that can shed light on patients who appear at high risk for exaggerating potential asymmetries. In the first part, 168 rhinoplasty patients were asked to fill out the demographic questionnaire, nasal shape evaluation scale, and the somatosensory amplification scale. In the second part, we examined the relationship between anthropometric characteristics of the face and visual perceptions of asymmetry using standardized photographs of 100 medical students. In the third part, patients answered the rhinoplasty outcome evaluation questionnaire 6 months after the surgery. Objectively, no symmetrical face was observed in the anthropometric evaluation. Subjectively, only 73% and 54% of the faces were considered asymmetrical by the rhinoplasty and the control groups, respectively. The rate of asymmetry perception was significantly greater in revision patients when compared with primary rhinoplasty patients. The relationship between the rate of subjective perception of asymmetry and the somatosensory amplification scale scores was statistically significant. We found a significant inverse relationship between the rate of asymmetry perception and the rhinoplasty outcome evaluation scores. Plastic surgeons should be aware of this high selectivity in asymmetry perception, which is associated with poor postoperative satisfaction. The somatosensory amplification scale may help identify rhinoplasty patients at a high risk for exaggerating potential asymmetries. Level of evidence: III.
Cook, Stephanie; Kokmotou, Katerina; Soto, Vicente; Fallon, Nicholas; Tyson-Carr, John; Thomas, Anna; Giesbrecht, Timo; Field, Matt; Stancak, Andrej
2017-08-30
Odours alter evaluations of concurrent visual stimuli. However, neural mechanisms underlying the effects of congruent and incongruent odours on facial expression perception are not clear. Moreover, the influence of emotional faces on odour perception is not established. We investigated the effects of one pleasant and one unpleasant odour paired with happy and disgusted faces, on subjective ratings and ERP responses to faces. Participants rated the pleasantness of happy and disgusted faces that appeared during 3s pleasant or unpleasant odour pulses, or without odour. Odour pleasantness and intensity ratings were recorded in each trial. EEG was recorded continuously using a 128-channel system. Happy and disgusted faces paired with pleasant and unpleasant odour were rated as more or less pleasant, respectively, compared to the same faces presented in the other odour conditions. Odours were rated as more pleasant when paired with happy faces, and unpleasant odour was rated more intense when paired with disgusted faces. Unpleasant odour paired with disgusted faces also decreased inspiration. Odour-face interactions were evident in the N200 and N400 components. Our results reveal bi-directional effects of odours and faces, and suggest that odour-face interactions may be represented in ERP components. Pairings of unpleasant odour and disgusted faces resulted in stronger hedonic ratings, ERP changes, increased odour intensity ratings and respiratory adjustment. This finding likely represents heightened adaptive responses to multimodal unpleasant stimuli, prompting appropriate behaviour in the presence of danger. Copyright © 2017. Published by Elsevier B.V.
The facing bias in biological motion perception: structure, kinematics, and body parts.
Schouten, Ben; Troje, Nikolaus F; Verfaillie, Karl
2011-01-01
Depth-ambiguous point-light walkers (PLWs) elicit a facing bias: Observers perceive a PLW as facing toward them more often than as facing away (Vanrie, Dekeyser, & Verfaillie, Perception, 33, 547-560, 2004). While the facing bias correlates with the PLW's perceived gender (Brooks et al., Current Biology, 18, R728-R729, 2008; Schouten, Troje, Brooks, van der Zwan, & Verfaillie, Attention, Perception, & Psychophysics, 72, 1256-1260, 2010), it remains unclear whether the change in perceived in-depth orientation is caused by a change in perceived gender. In Experiment 1, we show that structural and kinematic stimulus properties that lead to the same changes in perceived gender elicit opposite changes in perceived in-depth orientation, indicating that the relation between perceived gender and in-depth orientation is not causal. The results of Experiments 2 and 3 further suggest that the perceived in-depth orientation of PLWs is strongly affected by locally acting stimulus properties. The facing bias seems to be induced by stimulus properties in the lower part of the PLW.
Facial biases on vocal perception and memory.
Boltz, Marilyn G
2017-06-01
Does a speaker's face influence the way their voice is heard and later remembered? This question was addressed through two experiments in which participants listened to middle-aged voices accompanied by faces that were either age-appropriate, younger, or older than the voice or, as a control, by no face at all. In Experiment 1, participants evaluated each voice on various acoustical dimensions and speaker characteristics. The results showed that facial displays influenced perception such that the same voice was heard differently depending on the age of the accompanying face. Experiment 2 further revealed that facial displays led to memory distortions that were age-congruent in nature. These findings illustrate that faces can activate certain social categories and preconceived stereotypes that then influence vocal and person perception in a corresponding fashion. Processes of face/voice integration are very similar to those of music/film, indicating that the two areas can mutually inform one another and perhaps, more generally, reflect a centralized mechanism of cross-sensory integration. Copyright © 2017 Elsevier B.V. All rights reserved.
Differential risk perception of rural and urban Burrowing Owls exposed to humans and dogs.
Cavalli, Matilde; Baladrón, Alejandro V; Isacch, Juan Pablo; Biondi, Laura M; Bó, María Susana
2016-03-01
Urban areas expose wildlife to an array of novel predators, amongst which, humans and dogs are highly frequent. Thus, wild animals living in urban areas are forced to invest more time and energy in defence behaviours, which depend on how the risk is perceived and assessed. We experimentally tested whether Burrowing owls coming from rural and urban habitats showed differences in behavioural responses when facing humans and domestic dogs. We measured flight initiation distances (FIDs), nest returning, and aggressiveness level when owls faced a human and a human with a dog walking towards them. Our results showed that urban owls recognise a human with a dog as a greater threat than a human alone, thus indicating that fear of domestic animals should be considered as affecting owls' settlement in cities and towns. On the other hand, rural owls perceived human and dogs as similar threats, but showed higher FIDs, less aggressiveness, and lower tendency to return to the nest than urban owls in both treatments. These findings emphasize the importance of modified habitats in modelling the response of urban and rural owls to predators and represent another step in the explanation of how wild animals assess and respond to threats associated with living in urbanized environments. Copyright © 2015 Elsevier B.V. All rights reserved.
Mesial temporal lobe epilepsy diminishes functional connectivity during emotion perception.
Steiger, Bettina K; Muller, Angela M; Spirig, Esther; Toller, Gianina; Jokeit, Hennric
2017-08-01
Unilateral mesial temporal lobe epilepsy (MTLE) has been associated with impaired recognition of emotional facial expressions. Correspondingly, imaging studies showed decreased activity of the amygdala and cortical face processing regions in response to emotional faces. However, functional connectivity among regions involved in emotion perception has not been studied so far. To address this, we examined intrinsic functional connectivity (FC) modulated by the perception of dynamic fearful faces among the amygdala and limbic, frontal, temporal and brainstem regions. Regions of interest were identified in an activation analysis by presenting a block-design with dynamic fearful faces and dynamic landscapes to 15 healthy individuals. This led to 10 predominately right-hemispheric regions. Functional connectivity between these regions during the perception of fearful faces was examined in drug-refractory patients with left- (n=16) or right-sided (n=17) MTLE, epilepsy patients with extratemporal seizure onset (n=15) and a second group of 15 healthy controls. Healthy controls showed a widespread functional network modulated by the perception of fearful faces that encompassed bilateral amygdalae, limbic, cortical, subcortical and brainstem regions. In patients with left MTLE, a downsized network of frontal and temporal regions centered on the right amygdala was present. Patients with right MTLE showed almost no significant functional connectivity. A maintained network in the epilepsy control group indicates that findings in mesial temporal lobe epilepsy could not be explained by clinical factors such as seizures and antiepileptic medication. Functional networks underlying facial emotion perception are considerably changed in left and right MTLE. Alterations are present for both hemispheres in either MTLE group, but are more pronounced in right MTLE. Disruption of the functional network architecture possibly contributes to deficits in facial emotion recognition frequently reported in MTLE. Copyright © 2017 Elsevier B.V. All rights reserved.
Selective attention modulates early human evoked potentials during emotional face-voice processing.
Ho, Hao Tam; Schröger, Erich; Kotz, Sonja A
2015-04-01
Recent findings on multisensory integration suggest that selective attention influences cross-sensory interactions from an early processing stage. Yet, in the field of emotional face-voice integration, the hypothesis prevails that facial and vocal emotional information interacts preattentively. Using ERPs, we investigated the influence of selective attention on the perception of congruent versus incongruent combinations of neutral and angry facial and vocal expressions. Attention was manipulated via four tasks that directed participants to (i) the facial expression, (ii) the vocal expression, (iii) the emotional congruence between the face and the voice, and (iv) the synchrony between lip movement and speech onset. Our results revealed early interactions between facial and vocal emotional expressions, manifested as modulations of the auditory N1 and P2 amplitude by incongruent emotional face-voice combinations. Although audiovisual emotional interactions within the N1 time window were affected by the attentional manipulations, interactions within the P2 modulation showed no such attentional influence. Thus, we propose that the N1 and P2 are functionally dissociated in terms of emotional face-voice processing and discuss evidence in support of the notion that the N1 is associated with cross-sensory prediction, whereas the P2 relates to the derivation of an emotional percept. Essentially, our findings put the integration of facial and vocal emotional expressions into a new perspective-one that regards the integration process as a composite of multiple, possibly independent subprocesses, some of which are susceptible to attentional modulation, whereas others may be influenced by additional factors.
Behavioural evidence for distinct mechanisms related to global and biological motion perception.
Miller, Louisa; Agnew, Hannah C; Pilz, Karin S
2018-01-01
The perception of human motion is a vital ability in our daily lives. Human movement recognition is often studied using point-light stimuli in which dots represent the joints of a moving person. Depending on task and stimulus, the local motion of the single dots and the global form of the stimulus can be used to discriminate point-light stimuli. Previous studies often measured motion coherence for global motion perception and contrasted it with performance in biological motion perception to assess whether difficulties in biological motion processing are related to more general difficulties with motion processing. However, it is so far unknown how performance in global motion tasks relates to the ability to use local motion or global form to discriminate point-light stimuli. Here, we investigated this relationship in more detail. In Experiment 1, we measured participants' ability to discriminate the facing direction of point-light stimuli that contained primarily local motion, global form, or both. In Experiment 2, we embedded point-light stimuli in noise to assess whether previously found relationships in task performance are related to the ability to detect signal in noise. In both experiments, we also assessed motion coherence thresholds from random-dot kinematograms. We found relationships between performances for the different biological motion stimuli, but performance for global and biological motion perception was unrelated. These results are in accordance with previous neuroimaging studies that highlighted distinct areas for global and biological motion perception in the dorsal pathway, and indicate that results regarding the relationship between global motion perception and biological motion perception need to be interpreted with caution. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
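Motion coherence thresholds from random-dot kinematograms, as used above, are commonly estimated by fitting a psychometric function to proportion-correct data. A minimal sketch, assuming made-up two-alternative forced-choice data and a cumulative-Gaussian model without a lapse term (not the authors' fitting procedure):

    # Fit a cumulative-Gaussian psychometric function to invented 2AFC data.
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    coherence = np.array([0.05, 0.10, 0.20, 0.40, 0.80])   # proportion of coherent dots
    p_correct = np.array([0.52, 0.61, 0.78, 0.93, 0.99])   # observed accuracy per level

    def psychometric(c, mu, sigma):
        # 0.5 guess rate for two alternatives; no lapse parameter.
        return 0.5 + 0.5 * norm.cdf(c, loc=mu, scale=sigma)

    (mu, sigma), _ = curve_fit(psychometric, coherence, p_correct, p0=[0.2, 0.1])
    print(f"75%-correct coherence threshold ~ {mu:.2f}")    # at c = mu the model gives 0.75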
Evidence from Meta-Analyses of the Facial Width-to-Height Ratio as an Evolved Cue of Threat
Geniole, Shawn N.; Denson, Thomas F.; Dixson, Barnaby J.; Carré, Justin M.; McCormick, Cheryl M.
2015-01-01
The facial width-to-height ratio (FWHR) is the width of the face divided by the height of the upper face. There is mixed evidence for the hypothesis that the FWHR is a cue of threat and dominance in the human face. We conducted a systematic review and meta-analyses of all peer-reviewed studies (and 2 unpublished studies) to estimate the magnitude of the sex difference in the FWHR, and the magnitude of the relationship between the FWHR and threatening and dominant behaviours and perceptions. Studies were eligible for inclusion if the authors reported an analysis involving the FWHR. Our analyses revealed that the FWHR was larger in men than in women (d¯ = .11, n = 10,853), cued judgements of masculinity in men (r¯ = .35, n of faces = 487; n of observers = 339), and was related to body mass index (r¯ = .31, n = 2,506). Further, the FWHR predicted both threat behaviour in men (r¯ = .16, n = 4,603) and dominance behaviour in both sexes (r¯ = .12, n = 948) across a variety of indices. Individuals with larger FWHRs were judged by observers as more threatening (r¯ = .46, n of faces = 1,691; n of observers = 2,076) and more dominant (r¯ = .20, n of faces = 603; n of observers = 236) than those with smaller FWHRs. Individuals with larger FWHRs were also judged as less attractive (r¯ = -.26, n of faces = 721; n of observers = 335), especially when women made the judgements. These findings provide some support for the hypothesis that the FWHR is part of an evolved cueing system of intra-sexual threat and dominance in men. A limitation of the meta-analyses on perceptions of threat and dominance were the low number of stimuli involving female and older adult faces. PMID:26181579
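The pooled correlations (r¯) reported above are the kind of quantity produced by combining per-study effect sizes. A minimal sketch of inverse-variance pooling of correlations via Fisher's z transform, using invented study values rather than those entered into the meta-analyses:

    # Fixed-effect pooling of correlations via Fisher's z; study values are invented.
    import numpy as np

    r = np.array([0.10, 0.22, 0.15, 0.05])       # per-study correlations
    n = np.array([120, 80, 300, 150])            # per-study sample sizes

    z = np.arctanh(r)                            # Fisher z-transform
    w = n - 3                                    # inverse of var(z) = 1 / (n - 3)
    z_bar = np.sum(w * z) / np.sum(w)
    se = np.sqrt(1 / np.sum(w))

    r_bar = np.tanh(z_bar)                       # back-transform to the r metric
    ci = np.tanh([z_bar - 1.96 * se, z_bar + 1.96 * se])
    print(f"pooled r = {r_bar:.2f}, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}]")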
Effects of androstadienone on dominance perception in males with low and high social anxiety.
Banner, Amir; Shamay-Tsoory, Simone
2018-05-25
Increasing evidence suggests that humans can communicate both trait-dominance and state-dominance via body odor. Androstadienone (androsta-4,16-dien-3-one), a chemosignal found in human sweat, seems to be a likely candidate for signaling dominance in humans. The aim of the current study was to investigate the effects of androstadienone on the perception of social dominance. Moreover, we examined whether high levels of social anxiety, a psychopathology involving concerns that specifically pertain to social dominance, are associated with increased sensitivity to androstadienone as a chemical cue of dominance. In a double-blind, placebo-controlled, within-subject design, 64 heterosexual male participants (32 with high social anxiety and 32 with low social anxiety) viewed facial images of males depicting dominant, neutral and submissive postures, and were asked to recognize and rate the dominance expressed in those images. Participants completed the task twice, once under exposure to androstadienone and once under exposure to a control solution. The results indicate that androstadienone increased the perceived dominance of men's faces, specifically among participants with high social anxiety. These findings suggest a direct influence of androstadienone on dominance perception and further highlight the preferential processing of dominance and social threat signals evident in social anxiety. Copyright © 2018 Elsevier Ltd. All rights reserved.
Association between autistic traits and emotion adaptation to partially occluded faces.
Luo, Chengwen; Burns, Edwin; Xu, Hong
2017-04-01
Prolonged exposure to a happy face makes subsequently presented faces appear sadder: the facial emotion aftereffect (FEA). People with autism spectrum disorders and their relatives have diminished holistic perception of faces. Levels of autism can be measured continuously in the general population by autistic traits using the autism-quotient (AQ). Prior work has not found any association between AQ and FEA in adults, possibly due to non-holistic processing strategies employed by those at the higher end of the spectrum. In the present study, we tested whether AQ was associated with FEA to partially occluded faces. We hypothesized that inferring emotion from such faces would require participants to process their viewable parts as a gestalt percept, thus we anticipated this ability would diminish as autistic traits increased. In Experiment 1, we partially occluded the adapting faces with aligned or misaligned opaque bars. Both conditions produced significant FEAs, with aftereffects and AQ negatively correlated. In Experiment 2, we adapted participants to obscured faces flickering in luminance, and manipulated the facilitation of holistic perception by varying the synchronization of this flickering. We found significant FEAs in all conditions, but abolished its association with AQ. In Experiment 3, we showed that the association between AQ and FEA in the occluded conditions in Experiment 1 was not due to the recognizability or perceived emotional intensity of our adaptors; although the overall FEAs were linked to emotional intensity. We propose that increasing autistic traits are associated with diminishing abilities in perceiving emotional faces as a gestalt percept. Copyright © 2017 Elsevier Ltd. All rights reserved.
McElfish, Pearl A; Chughtai, Almas; Low, Lisa K; Garner, Robert; Purvis, Rachel S
2018-05-04
Marshallese migrating to the United States encounter challenges in accessing health care. Previous literature has investigated Marshallese participants' perceptions of the barriers they face in accessing health care. For this study, health care providers managing the care of Marshallese patients were interviewed to understand the providers' perception of barriers that their Marshallese patients encounter. A qualitative research design was utilized to explore health care providers' perceptions of and experiences with the barriers faced by their Marshallese patients when accessing the US health care system. The primary barriers identified were: (1) economic barriers; (2) communication challenges; (3) difficulty understanding and navigating the western health care system; and (4) structural and system barriers. This study provides insight on the barriers Marshallese patients face in accessing health care as well as the barriers providers face in delivering care to Marshallese patients. A better understanding of these barriers can help health care providers and educators to begin initiating improvements in the delivery of care to Marshallese patients.
Facial cues to perceived height influence leadership choices in simulated war and peace contexts.
Re, Daniel E; DeBruine, Lisa M; Jones, Benedict C; Perrett, David I
2013-01-31
Body size and other signs of physical prowess are associated with leadership hierarchies in many social species. Here we (1) assess whether facial cues associated with perceived height and masculinity have different effects on leadership judgments in simulated wartime and peacetime contexts and (2) test how facial cues associated with perceived height and masculinity influence dominance perceptions. Results indicate that cues associated with perceived height and masculinity in potential leaders' faces are valued more in a wartime (vs. peacetime) context. Furthermore, increasing cues of apparent height and masculinity in faces increased perceived dominance. Together, these findings suggest that facial cues of physical stature contribute to establishing leadership hierarchies in humans.
Dissociation between the neural correlates of conscious face perception and visual attention.
Navajas, Joaquin; Nitka, Aleksander W; Quian Quiroga, Rodrigo
2017-08-01
Given the higher chance to recognize attended compared to unattended stimuli, the specific neural correlates of these two processes, attention and awareness, tend to be intermingled in experimental designs. In this study, we dissociated the neural correlates of conscious face perception from the effects of visual attention. To do this, we presented faces at the threshold of awareness and manipulated attention through the use of exogenous prestimulus cues. We show that the N170 component, a scalp EEG marker of face perception, was modulated independently by attention and by awareness. An earlier P1 component was not modulated by either of the two effects and a later P3 component was indicative of awareness but not of attention. These claims are supported by converging evidence from (a) modulations observed in the average evoked potentials, (b) correlations between neural and behavioral data at the single-subject level, and (c) single-trial analyses. Overall, our results show a clear dissociation between the neural substrates of attention and awareness. Based on these results, we argue that conscious face perception is triggered by a boost in face-selective cortical ensembles that can be modulated by, but are still independent from, visual attention. © 2017 Society for Psychophysiological Research.
The neural organization of perception in chess experts.
Krawczyk, Daniel C; Boggan, Amy L; McClelland, M Michelle; Bartlett, James C
2011-07-20
The human visual system responds to expertise, and it has been suggested that regions that process faces also process other objects of expertise, including chess boards in experts. We tested whether chess and face processing overlap in brain activity using fMRI. Chess experts and novices exhibited face-selective areas, but these regions showed no selectivity to chess configurations relative to other stimuli. We next compared neural responses to chess and to scrambled chess displays to isolate areas relevant to expertise. Areas within the posterior cingulate, orbitofrontal cortex, and right temporal cortex were active in this comparison in experts over novices. We also compared chess and face responses within the posterior cingulate and found this area responsive to chess only in experts. These findings indicate that chess configurations are not strongly processed by regions that are selective for faces, even in individuals who have expertise in both domains. Further, the area most consistently involved in chess did not show overlap with faces. Overall, these results suggest that expert visual processing may be similar at the level of recognition, but need not show the same neural correlates. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Many faces of expertise: fusiform face area in chess experts and novices.
Bilalić, Merim; Langner, Robert; Ulrich, Rolf; Grodd, Wolfgang
2011-07-13
The fusiform face area (FFA) is involved in face perception to such an extent that some claim it is a brain module for faces exclusively. The other possibility is that FFA is modulated by experience in individuation in any visual domain, not only faces. Here we test this latter FFA expertise hypothesis using the game of chess as a domain of investigation. We exploited the characteristic of chess, which features multiple objects forming meaningful spatial relations. In three experiments, we show that FFA activity is related to stimulus properties and not to chess skill directly. In all chess and non-chess tasks, experts' FFA was more activated than that of novices' only when they dealt with naturalistic full-board chess positions. When common spatial relationships formed by chess objects in chess positions were randomly disturbed, FFA was again differentially active only in experts, regardless of the actual task. Our experiments show that FFA contributes to the holistic processing of domain-specific multipart stimuli in chess experts. This suggests that FFA may not only mediate human expertise in face recognition but, supporting the expertise hypothesis, may mediate the automatic holistic processing of any highly familiar multipart visual input.
Hu, Yuanyan; Abbasi, Najam ul Hasan; Zhang, Yang; Chen, Hong
2018-01-01
Facial sexual dimorphism has been widely demonstrated to influence facial attractiveness and social interactions. However, earlier studies show inconsistent results on the effect of sexual dimorphism on facial attractiveness judgments. Previous studies suggest that the level of attractiveness might work as a moderating variable in the relationship between sexual dimorphism and facial preference, and have often focused on the effect of sexual dimorphism on general attractiveness ratings rather than on trustworthiness perception. Male and female participants viewed target male and female faces that varied on attractiveness (more attractive or less attractive) and sexual dimorphism (masculine or feminine). Participants rated the attractiveness of the faces and reported how much money they would give to the target person as a measure of trust. For the facial attractiveness ratings, (a) both male and female participants preferred masculine male faces to feminine male ones under the more attractive condition, whereas they preferred feminine male faces to masculine male ones under the less attractive condition; (b) all participants preferred feminine female faces to masculine female ones under the less attractive condition, while there were no differences between feminine and masculine female faces under the more attractive condition. For trustworthiness perception, (a) participants showed no preference between masculine and feminine male faces under either the more attractive or the less attractive condition; (b) however, all participants preferred masculine female faces over feminine female faces under the more attractive condition, while exhibiting no preference between feminine and masculine female faces under the less attractive condition. These findings suggest that the attractiveness of the facial stimuli may help explain the inconsistent results of previous studies on the effect of facial sexual dimorphism on facial attractiveness. Furthermore, implications of the effect of target facial sexual dimorphism on participants’ trustworthiness perception are discussed.
Laughter exaggerates happy and sad faces depending on visual context.
Sherman, Aleksandra; Sweeny, Timothy D; Grabowecky, Marcia; Suzuki, Satoru
2012-04-01
Laughter is an auditory stimulus that powerfully conveys positive emotion. We investigated how laughter influenced the visual perception of facial expressions. We presented a sound clip of laughter simultaneously with a happy, a neutral, or a sad schematic face. The emotional face was briefly presented either alone or among a crowd of neutral faces. We used a matching method to determine how laughter influenced the perceived intensity of the happy, neutral, and sad expressions. For a single face, laughter increased the perceived intensity of a happy expression. Surprisingly, for a crowd of faces, laughter produced an opposite effect, increasing the perceived intensity of a sad expression in a crowd. A follow-up experiment revealed that this contrast effect may have occurred because laughter made the neutral distractor faces appear slightly happy, thereby making the deviant sad expression stand out in contrast. A control experiment ruled out semantic mediation of the laughter effects. Our demonstration of the strong context dependence of laughter effects on facial expression perception encourages a reexamination of the previously demonstrated effects of prosody, speech content, and mood on face perception, as they may be similarly context dependent.
Russell, Richard; Chatterjee, Garga; Nakayama, Ken
2012-01-01
Face recognition by normal subjects depends in roughly equal proportions on shape and surface reflectance cues, while object recognition depends predominantly on shape cues. It is possible that developmental prosopagnosics are deficient not in their ability to recognize faces per se, but rather in their ability to use reflectance cues. Similarly, super-recognizers' exceptional ability with face recognition may be a result of superior surface reflectance perception and memory. We tested this possibility by administering tests of face perception and face recognition in which only shape or reflectance cues are available to developmental prosopagnosics, super-recognizers, and control subjects. Face recognition ability and the relative use of shape and pigmentation were unrelated in all the tests. Subjects who were better at using shape or reflectance cues were also better at using the other type of cue. These results do not support the proposal that variation in surface reflectance perception ability is the underlying cause of variation in face recognition ability. Instead, these findings support the idea that face recognition ability is related to neural circuits using representations that integrate shape and pigmentation information. Copyright © 2011 Elsevier Ltd. All rights reserved.
Vingilis-Jaremko, Larissa; Maurer, Daphne; Rhodes, Gillian; Jeffery, Linda
2016-08-03
Adults who missed early visual input because of congenital cataracts later have deficits in many aspects of face processing. Here we investigated whether they make normal judgments of facial attractiveness. In particular, we studied whether their perceptions are affected normally by a face's proximity to the population mean, as is true of typically developing adults, who find average faces to be more attractive than most other faces. We compared the judgments of facial attractiveness of 12 cataract-reversal patients to norms established from 36 adults with normal vision. Participants viewed pairs of adult male and adult female faces that had been transformed 50% toward and 50% away from their respective group averages, and selected which face was more attractive. Averageness influenced patients' judgments of attractiveness, but to a lesser extent than controls. The results suggest that cataract-reversal patients are able to develop a system for representing faces with a privileged position for an average face, consistent with evidence from identity aftereffects. However, early visual experience is necessary to set up the neural architecture necessary for averageness to influence perceptions of attractiveness with its normal potency. © The Author(s) 2016.
Vaina, Lucia Maria; Rana, Kunjan D; Cotos, Ionela; Li-Yang, Chen; Huang, Melissa A; Podea, Delia
2014-12-24
Deficits in face emotion perception are among the most pervasive aspects of schizophrenia impairments which strongly affects interpersonal communication and social skills. Schizophrenic patients (PSZ) and healthy control subjects (HCS) performed 2 psychophysical tasks. One, the SAFFIMAP test, was designed to determine the impact of subliminally presented affective or neutral images on the accuracy of face-expression (angry or neutral) perception. In the second test, FEP, subjects saw pictures of face-expression and were asked to rate them as angry, happy, or neutral. The following clinical scales were used to determine the acute symptoms in PSZ: Positive and Negative Syndrome (PANSS), Young Mania Rating (YMRS), Hamilton Depression (HAM-D), and Hamilton Anxiety (HAM-A). On the SAFFIMAP test, different from the HCS group, the PSZ group tended to categorize the neutral expression of test faces as angry and their response to the test-face expression was not influenced by the affective content of the primes. In PSZ, the PANSS-positive score was significantly correlated with correct perception of angry faces for aggressive or pleasant primes. YMRS scores were strongly correlated with PSZ's tendency to recognize angry face expressions when the prime was a pleasant or a neutral image. The HAM-D score was positively correlated with categorizing the test-faces as neutral, regardless of the affective content of the prime or of the test-face expression (angry or neutral). Despite its exploratory nature, this study provides the first evidence that conscious perception and categorization of facial emotions (neutral or angry) in PSZ is directly affected by their positive or negative symptoms of the disease as defined by their individual scores on the clinical diagnostic scales.
Bublatzky, Florian; Gerdes, Antje B. M.; White, Andrew J.; Riemer, Martin; Alpers, Georg W.
2014-01-01
Human face perception is modulated by both emotional valence and social relevance, but their interaction has rarely been examined. Event-related brain potentials (ERP) to happy, neutral, and angry facial expressions with different degrees of social relevance were recorded. To implement a social anticipation task, relevance was manipulated by presenting faces of two specific actors as future interaction partners (socially relevant), whereas two other face actors remained non-relevant. In a further control task all stimuli were presented without specific relevance instructions (passive viewing). Face stimuli of four actors (2 women, from the KDEF) were randomly presented for 1s to 26 participants (16 female). Results showed an augmented N170, early posterior negativity (EPN), and late positive potential (LPP) for emotional in contrast to neutral facial expressions. Of particular interest, face processing varied as a function of experimental tasks. Whereas task effects were observed for P1 and EPN regardless of instructed relevance, LPP amplitudes were modulated by emotional facial expression and relevance manipulation. The LPP was specifically enhanced for happy facial expressions of the anticipated future interaction partners. This underscores that social relevance can impact face processing already at an early stage of visual processing. These findings are discussed within the framework of motivated attention and face processing theories. PMID:25076881
Face processing in different brain areas, and critical band masking.
Rolls, Edmund T
2008-09-01
Neurophysiological evidence is described showing that some neurons in the macaque inferior temporal visual cortex have responses that are invariant with respect to the position, size, view, and spatial frequency of faces and objects, and that these neurons show rapid processing and rapid learning. Critical band spatial frequency masking is shown to be a property of these face-selective neurons and of the human visual perception of faces. Which face or object is present is encoded using a distributed representation in which each neuron conveys independent information in its firing rate, with little information evident in the relative time of firing of different neurons. This ensemble encoding has the advantages of maximizing the information in the representation useful for discrimination between stimuli using a simple weighted sum of the neuronal firing by the receiving neurons, generalization, and graceful degradation. These invariant representations are ideally suited to provide the inputs to brain regions such as the orbitofrontal cortex and amygdala that learn the reinforcement associations of an individual's face, for then the learning, and the appropriate social and emotional responses generalize to other views of the same face. A theory is described of how such invariant representations may be produced by self-organizing learning in a hierarchically organized set of visual cortical areas with convergent connectivity. The theory utilizes either temporal or spatial continuity with an associative synaptic modification rule. Another population of neurons in the cortex in the superior temporal sulcus encodes other aspects of faces such as face expression, eye-gaze, face view, and whether the head is moving. These neurons thus provide important additional inputs to parts of the brain such as the orbitofrontal cortex and amygdala that are involved in social communication and emotional behaviour. Outputs of these systems reach the amygdala, in which face-selective neurons are found, and also the orbitofrontal cortex, in which some neurons are tuned to face identity and others to face expression. In humans, activation of the orbitofrontal cortex is found when a change of face expression acts as a social signal that behaviour should change; and damage to the human orbitofrontal and pregenual cingulate cortex can impair face and voice expression identification, and also the reversal of emotional behaviour that normally occurs when reinforcers are reversed.
The Face-Processing Network Is Resilient to Focal Resection of Human Visual Cortex.
Weiner, Kevin S; Jonas, Jacques; Gomez, Jesse; Maillard, Louis; Brissart, Hélène; Hossu, Gabriela; Jacques, Corentin; Loftus, David; Colnat-Coulbois, Sophie; Stigliani, Anthony; Barnett, Michael A; Grill-Spector, Kalanit; Rossion, Bruno
2016-08-10
Human face perception requires a network of brain regions distributed throughout the occipital and temporal lobes with a right hemisphere advantage. Present theories consider this network as either a processing hierarchy beginning with the inferior occipital gyrus (occipital face area; IOG-faces/OFA) or a multiple-route network with nonhierarchical components. The former predicts that removing IOG-faces/OFA will detrimentally affect downstream stages, whereas the latter does not. We tested this prediction in a human patient (Patient S.P.) requiring removal of the right inferior occipital cortex, including IOG-faces/OFA. We acquired multiple fMRI measurements in Patient S.P. before and after a preplanned surgery and multiple measurements in typical controls, enabling both within-subject/across-session comparisons (Patient S.P. before resection vs Patient S.P. after resection) and between-subject/across-session comparisons (Patient S.P. vs controls). We found that the spatial topology and selectivity of downstream ipsilateral face-selective regions were stable 1 and 8 month(s) after surgery. Additionally, the reliability of distributed patterns of face selectivity in Patient S.P. before versus after resection was not different from across-session reliability in controls. Nevertheless, postoperatively, representations of visual space were typical in dorsal face-selective regions but atypical in ventral face-selective regions and V1 of the resected hemisphere. Diffusion weighted imaging in Patient S.P. and controls identifies white matter tracts connecting retinotopic areas to downstream face-selective regions, which may contribute to the stable and plastic features of the face network in Patient S.P. after surgery. Together, our results support a multiple-route network of face processing with nonhierarchical components and shed light on stable and plastic features of high-level visual cortex following focal brain damage. Brain networks consist of interconnected functional regions commonly organized in processing hierarchies. Prevailing theories predict that damage to the input of the hierarchy will detrimentally affect later stages. We tested this prediction with multiple brain measurements in a rare human patient requiring surgical removal of the putative input to a network processing faces. Surprisingly, the spatial topology and selectivity of downstream face-selective regions are stable after surgery. Nevertheless, representations of visual space were typical in dorsal face-selective regions but atypical in ventral face-selective regions and V1. White matter connections from outside the face network may support these stable and plastic features. As processing hierarchies are ubiquitous in biological and nonbiological systems, our results have pervasive implications for understanding the construction of resilient networks. Copyright © 2016 the authors 0270-6474/16/368426-16$15.00/0.
Faces in Context: A Review and Systematization of Contextual Influences on Affective Face Processing
Wieser, Matthias J.; Brosch, Tobias
2012-01-01
Facial expressions are of eminent importance for social interaction as they convey information about other individuals’ emotions and social intentions. According to the predominant “basic emotion” approach, the perception of emotion in faces is based on the rapid, automatic categorization of prototypical, universal expressions. Consequently, the perception of facial expressions has typically been investigated using isolated, de-contextualized, static pictures of facial expressions that maximize the distinction between categories. However, in everyday life, an individual’s face is not perceived in isolation, but almost always appears within a situational context, which may arise from other people, the physical environment surrounding the face, as well as multichannel information from the sender. Furthermore, situational context may be provided by the perceiver, including already present social information gained from affective learning and implicit processing biases such as race bias. Thus, the perception of facial expressions is presumably always influenced by contextual variables. In this comprehensive review, we aim at (1) systematizing the contextual variables that may influence the perception of facial expressions and (2) summarizing experimental paradigms and findings that have been used to investigate these influences. The studies reviewed here demonstrate that perception and neural processing of facial expressions are substantially modified by contextual information, including verbal, visual, and auditory information presented together with the face as well as knowledge or processing biases already present in the observer. These findings further challenge the assumption of automatic, hardwired categorical emotion extraction mechanisms predicted by basic emotion theories. Taking into account a recent model on face processing, we discuss where and when these different contextual influences may take place, thus outlining potential avenues in future research. PMID:23130011
Super-recognizers: people with extraordinary face recognition ability.
Russell, Richard; Duchaine, Brad; Nakayama, Ken
2009-04-01
We tested 4 people who claimed to have significantly better than ordinary face recognition ability. Exceptional ability was confirmed in each case. On two very different tests of face recognition, all 4 experimental subjects performed beyond the range of control subject performance. They also scored significantly better than average on a perceptual discrimination test with faces. This effect was larger with upright than with inverted faces, and the 4 subjects showed a larger "inversion effect" than did control subjects, who in turn showed a larger inversion effect than did developmental prosopagnosics. This result indicates an association between face recognition ability and the magnitude of the inversion effect. Overall, these "super-recognizers" are about as good at face recognition and perception as developmental prosopagnosics are bad. Our findings demonstrate the existence of people with exceptionally good face recognition ability and show that the range of face recognition and face perception ability is wider than has been previously acknowledged.
Conscious awareness is required for holistic face processing.
Axelrod, Vadim; Rees, Geraint
2014-07-01
Investigating the limits of unconscious processing is essential to understand the function of consciousness. Here, we explored whether holistic face processing, a mechanism believed to be important for face processing in general, can be accomplished unconsciously. Using a novel "eyes-face" stimulus we tested whether discrimination of pairs of eyes was influenced by the surrounding face context. While the eyes were fully visible, the faces that provided context could be rendered invisible through continuous flash suppression. Two experiments with three different sets of face stimuli and a subliminal learning procedure converged to show that invisible faces did not influence perception of visible eyes. In contrast, surrounding faces, when they were clearly visible, strongly influenced perception of the eyes. Thus, we conclude that conscious awareness might be a prerequisite for holistic face processing. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Motion facilitates face perception across changes in viewpoint and expression in older adults.
Maguinness, Corrina; Newell, Fiona N
2014-12-01
Faces are inherently dynamic stimuli. However, face perception in younger adults appears to be mediated by the ability to extract structural cues from static images and a benefit of motion is inconsistent. In contrast, static face processing is poorer and more image-dependent in older adults. We therefore compared the role of facial motion in younger and older adults to assess whether motion can enhance perception when static cues are insufficient. In our studies, older and younger adults learned faces presented in motion or in a sequence of static images, containing rigid (viewpoint) or nonrigid (expression) changes. Immediately following learning, participants matched a static test image to the learned face which varied by viewpoint (Experiment 1) or expression (Experiment 2) and was either learned or novel. First, we found an age effect with better face matching performance in younger than in older adults. However, we observed face matching performance improved in the older adult group, across changes in viewpoint and expression, when faces were learned in motion relative to static presentation. There was no benefit for facial (nonrigid) motion when the task involved matching inverted faces (Experiment 3), suggesting that the ability to use dynamic face information for the purpose of recognition reflects motion encoding which is specific to upright faces. Our results suggest that ageing may offer a unique insight into how dynamic cues support face processing, which may not be readily observed in younger adults' performance. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
Attention to emotion modulates fMRI activity in human right superior temporal sulcus.
Narumoto, J; Okada, T; Sadato, N; Fukui, K; Yonekura, Y
2001-10-01
A parallel neural network has been proposed for processing various types of information conveyed by faces including emotion. Using functional magnetic resonance imaging (fMRI), we tested the effect of the explicit attention to the emotional expression of the faces on the neuronal activity of the face-responsive regions. Delayed match to sample procedure was adopted. Subjects were required to match the visually presented pictures with regard to the contour of the face pictures, facial identity, and emotional expressions by valence (happy and fearful expressions) and arousal (fearful and sad expressions). Contour matching of the non-face scrambled pictures was used as a control condition. The face-responsive regions that responded more to faces than to non-face stimuli were the bilateral lateral fusiform gyrus (LFG), the right superior temporal sulcus (STS), and the bilateral intraparietal sulcus (IPS). In these regions, general attention to the face enhanced the activities of the bilateral LFG, the right STS, and the left IPS compared with attention to the contour of the facial image. Selective attention to facial emotion specifically enhanced the activity of the right STS compared with attention to the face per se. The results suggest that the right STS region plays a special role in facial emotion recognition within distributed face-processing systems. This finding may support the notion that the STS is involved in social perception.
Relating brain signal variability to knowledge representation.
Heisz, Jennifer J; Shedden, Judith M; McIntosh, Anthony R
2012-11-15
We assessed the hypothesis that brain signal variability is a reflection of functional network reconfiguration during memory processing. In the present experiments, we use multiscale entropy to capture the variability of human electroencephalogram (EEG) while manipulating the knowledge representation associated with faces stored in memory. Across two experiments, we observed increased variability as a function of greater knowledge representation. In Experiment 1, individuals with greater familiarity for a group of famous faces displayed more brain signal variability. In Experiment 2, brain signal variability increased with learning after multiple experimental exposures to previously unfamiliar faces. The results demonstrate that variability increases with face familiarity; cognitive processes during the perception of familiar stimuli may engage a broader network of regions, which manifests as higher complexity/variability in spatial and temporal domains. In addition, effects of repetition suppression on brain signal variability were observed, and the pattern of results is consistent with a selectivity model of neural adaptation. Crown Copyright © 2012. Published by Elsevier Inc. All rights reserved.
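Multiscale entropy is not spelled out in the abstract itself; the sketch below is a minimal, simplified version of the standard coarse-graining plus sample-entropy procedure, with function names, parameter defaults, and the per-scale tolerance chosen for illustration rather than taken from the paper.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.15):
    """Sample entropy of a 1-D signal; tolerance r is a fraction of its SD."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    n = len(x)

    def match_pairs(dim):
        # Overlapping templates of length `dim`; count pairs whose Chebyshev
        # distance is within tolerance, excluding self-matches (O(n^2) memory).
        templates = np.array([x[i:i + dim] for i in range(n - dim)])
        dists = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return (np.sum(dists <= tol) - len(templates)) / 2.0

    b, a = match_pairs(m), match_pairs(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.nan

def multiscale_entropy(signal, scales=range(1, 6), m=2, r=0.15):
    """Coarse-grain the signal at each time scale, then take its sample entropy."""
    signal = np.asarray(signal, dtype=float)
    values = []
    for s in scales:
        k = len(signal) // s
        coarse = signal[:k * s].reshape(k, s).mean(axis=1)
        values.append(sample_entropy(coarse, m, r))
    return np.array(values)
```

Applied to a single EEG channel, this returns one entropy value per time scale; higher values across scales correspond to the greater brain signal variability the study reports for familiar faces.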
Face processing in autism spectrum disorders: from brain regions to brain networks
Nomi, Jason S.; Uddin, Lucina Q.
2015-01-01
Autism spectrum disorder (ASD) is characterized by reduced attention to social stimuli including the human face. This hypo-responsiveness to stimuli that are engaging to typically developing individuals may result from dysfunctioning motivation, reward, and attention systems in the brain. Here we review an emerging neuroimaging literature that emphasizes a shift from focusing on hypo-activation of isolated brain regions such as the fusiform gyrus, amygdala, and superior temporal sulcus in ASD to a more holistic approach to understanding face perception as a process supported by distributed cortical and subcortical brain networks. We summarize evidence for atypical activation patterns within brain networks that may contribute to social deficits characteristic of the disorder. We conclude by pointing to gaps in the literature and future directions that will continue to shed light on aspects of face processing in autism that are still under-examined. In particular, we highlight the need for more developmental studies and studies examining ecologically valid and naturalistic social stimuli. PMID:25829246
Beat Gestures Modulate Auditory Integration in Speech Perception
ERIC Educational Resources Information Center
Biau, Emmanuel; Soto-Faraco, Salvador
2013-01-01
Spontaneous beat gestures are an integral part of the paralinguistic context during face-to-face conversations. Here we investigated the time course of beat-speech integration in speech perception by measuring ERPs evoked by words pronounced with or without an accompanying beat gesture, while participants watched a spoken discourse. Words…
Differences in Business Undergraduate Perceptions by Preferred Classroom Learning Environment
ERIC Educational Resources Information Center
Blau, Gary; Mittal, Neha; Schirmer, Michael; Ozkan, Bora
2017-01-01
Online education continues to grow at business schools. The authors compared undergraduate business student perceptions across three different classroom learning delivery environments: online, hybrid, and face to face. Based on the survey responses using two independent samples, the authors' analyses found that students who preferred online…
Influence of skin ageing features on Chinese women's perception of facial age and attractiveness.
Porcheron, A; Latreille, J; Jdid, R; Tschachler, E; Morizot, F
2014-08-01
Ageing leads to characteristic changes in the appearance of facial skin. Among these changes, we can distinguish the skin topographic cues (skin sagging and wrinkles), the dark spots and the dark circles around the eyes. Although skin changes are similar in Caucasian and Chinese faces, the age of occurrence and the severity of age-related features differ between the two populations. Little is known about how the ageing of skin influences the perception of female faces in Chinese women. The aim of this study is to evaluate the contribution of the different age-related skin features to the perception of age and attractiveness in Chinese women. Facial images of Caucasian women and Chinese women in their 60s were manipulated separately to reduce the following skin features: (i) skin sagging and wrinkles, (ii) dark spots and (iii) dark circles. Finally, all signs were reduced simultaneously (iv). Female Chinese participants were asked to estimate the age difference between the modified and original images and evaluate the attractiveness of modified and original faces. Chinese women perceived the Chinese faces as younger after the manipulation of dark spots than after the reduction in wrinkles/sagging, whereas they perceived the Caucasian faces as the youngest after the manipulation of wrinkles/sagging. Interestingly, Chinese women evaluated faces with reduced dark spots as being the most attractive whatever the origin of the face. The manipulation of dark circles contributed to making Caucasian and Chinese faces being perceived younger and more attractive than the original faces, although the effect was less pronounced than for the two other types of manipulation. This is the first study to have examined the influence of various age-related skin features on the facial age and attractiveness perception of Chinese women. The results highlight different contributions of dark spots, sagging/wrinkles and dark circles to their perception of Chinese and Caucasian faces. © 2014 The Authors. International Journal of Cosmetic Science published by John Wiley & Sons Ltd on behalf of Society of Cosmetic Scientists and Societe Francaise de Cosmetologie.
Early Stages of Figure–Ground Segregation during Perception of the Face–Vase
Pitts, Michael A.; Martínez, Antígona; Brewer, James B.; Hillyard, Steven A.
2011-01-01
The temporal sequence of neural processes supporting figure–ground perception was investigated by recording ERPs associated with subjects’ perceptions of the face–vase figure. In Experiment 1, subjects continuously reported whether they perceived the face or the vase as the foreground figure by pressing one of two buttons. Each button press triggered a probe flash to the face region, the vase region, or the borders between the two. The N170/vertex positive potential (VPP) component of the ERP elicited by probes to the face region was larger when subjects perceived the faces as figure. Preceding the N170/VPP, two additional components were identified. First, when the borders were probed, ERPs differed in amplitude as early as 110 msec after probe onset depending on subjects’ figure–ground perceptions. Second, when the face or vase regions were probed, ERPs were more positive (at ~150–200 msec) when that region was perceived as figure versus background. These components likely reflect an early “border ownership” stage, and a subsequent “figure–ground segregation” stage of processing. To explore the influence of attention on these stages of processing, two additional experiments were conducted. In Experiment 2, subjects selectively attended to the face or vase region, and the same early ERP components were again produced. In Experiment 3, subjects performed an identical selective attention task, but on a display lacking distinctive figure–ground borders, and neither of the early components was produced. Results from these experiments suggest sequential stages of processing underlying figure–ground perception, each of which is subject to modification by selective attention. PMID:20146604
The biasing of figure-ground assignment by shading cues for objects and faces in prosopagnosia.
Hefter, Rebecca; Jerskey, Beth A; Barton, Jason J S
2008-01-01
Prosopagnosia is defined by impaired recognition of the identity of specific faces. Whether the perception of faces at the categorical level (recognizing that a face is a face) is also impaired to a lesser degree is unclear. We examined whether prosopagnosia is associated with impaired detection of facial contours in a bistable display, by testing a series of five prosopagnosic patients on a variation of Rubin's vase illusion, in which shading was introduced to bias perception towards either the face or the vase. We also included a control bistable display in which a disc or an aperture were the two possible percepts. With the control disc/aperture test, prosopagnosic patients did not generate a normal sigmoid function, but a U-shaped function, indicating that they perceived the shading but had difficulty in using the shading to make the appropriate figure-ground assignment. While controls still generated a sigmoid function for the vase/face test, prosopagnosic patients showed a severe impairment in using shading to make consistent perceptual assignments. We conclude that prosopagnosic patients have difficulty in using shading to segment figures from background correctly, particularly with complex stimuli like faces. This suggests that a subtler defect in face categorization accompanies their severe defect in face identification, consistent with predictions of computational models and recent data from functional imaging.
Menzel, Claudia; Hayn-Leichsenring, Gregor U; Langner, Oliver; Wiese, Holger; Redies, Christoph
2015-01-01
We investigated whether low-level processed image properties that are shared by natural scenes and artworks - but not veridical face photographs - affect the perception of facial attractiveness and age. Specifically, we considered the slope of the radially averaged Fourier power spectrum in a log-log plot. This slope is a measure of the distribution of spatial frequency power in an image. Images of natural scenes and artworks possess - compared to face images - a relatively shallow slope (i.e., increased high spatial frequency power). Since aesthetic perception might be based on the efficient processing of images with natural scene statistics, we assumed that the perception of facial attractiveness might also be affected by these properties. We calculated Fourier slope and other beauty-associated measurements in face images and correlated them with ratings of attractiveness and age of the depicted persons (Study 1). We found that Fourier slope - in contrast to the other tested image properties - did not predict attractiveness ratings when we controlled for age. In Study 2A, we overlaid face images with random-phase patterns with different statistics. Patterns with a slope similar to those in natural scenes and artworks resulted in lower attractiveness and higher age ratings. In Studies 2B and 2C, we directly manipulated the Fourier slope of face images and found that images with shallower slopes were rated as more attractive. Additionally, attractiveness of unaltered faces was affected by the Fourier slope of a random-phase background (Study 3). Faces in front of backgrounds with statistics similar to natural scenes and faces were rated as more attractive. We conclude that facial attractiveness ratings are affected by specific image properties. An explanation might be the efficient coding hypothesis.
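A minimal sketch of the image measure described above (the slope of the radially averaged Fourier power spectrum in log-log coordinates), assuming a 2-D greyscale array; the binning and fitting choices are illustrative and not necessarily the authors' exact procedure.

```python
import numpy as np

def fourier_slope(image):
    """Slope of the radially averaged power spectrum in log-log coordinates.

    A steeper (more negative) slope means relatively less high-spatial-frequency
    power; natural scenes typically sit near -2 on this measure.
    """
    img = np.asarray(image, dtype=float)
    img = img - img.mean()                      # remove DC before transforming
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2

    # Radial distance of every frequency bin from the centre of the spectrum.
    h, w = power.shape
    y, x = np.indices((h, w))
    radius = np.hypot(y - h / 2, x - w / 2).astype(int)

    # Average power within integer-radius rings (skip the DC bin at radius 0).
    max_r = min(h, w) // 2
    freqs = np.arange(1, max_r)
    radial_mean = np.array([power[radius == r].mean() for r in freqs])

    slope, _ = np.polyfit(np.log(freqs), np.log(radial_mean), 1)
    return slope
```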
Busigny, Thomas; Van Belle, Goedele; Jemel, Boutheina; Hosein, Anthony; Joubert, Sven; Rossion, Bruno
2014-04-01
Recent studies have provided solid evidence for pure cases of prosopagnosia following brain damage. The patients reported so far have posterior lesions encompassing either or both the right inferior occipital cortex and fusiform gyrus, and exhibit a critical impairment in generating a sufficiently detailed holistic percept to individualize faces. Here, we extended these observations to include the prosopagnosic patient LR (Bukach, Bub, Gauthier, & Tarr, 2006), whose damage is restricted to the anterior region of the right temporal lobe. First, we report that LR is able to discriminate parametrically defined individual exemplars of nonface object categories as accurately and quickly as typical observers, which suggests that the visual similarity account of prosopagnosia does not explain his impairments. Then, we show that LR does not present with the typical face inversion effect, whole-part advantage, or composite face effect and, therefore, has impaired holistic perception of individual faces. Moreover, the patient is more impaired at matching faces when the facial part he fixates is masked than when it is selectively revealed by means of gaze contingency. Altogether these observations support the view that the nature of the critical face impairment does not differ qualitatively across patients with acquired prosopagnosia, regardless of the localization of brain damage: all these patients appear to be impaired to some extent at what constitutes the heart of our visual expertise with faces, namely holistic perception at a sufficiently fine-grained level of resolution to discriminate exemplars of the face class efficiently. This conclusion raises issues regarding the existing criteria for diagnosis/classification of patients as cases of apperceptive or associative prosopagnosia. Copyright © 2014 Elsevier Ltd. All rights reserved.
Emotion perception accuracy and bias in face-to-face versus cyberbullying.
Ciucci, Enrica; Baroncelli, Andrea; Nowicki, Stephen
2014-01-01
The authors investigated the association of traditional and cyber forms of bullying and victimization with emotion perception accuracy and emotion perception bias. Four basic emotions were considered (i.e., happiness, sadness, anger, and fear); 526 middle school students (280 females; M age = 12.58 years, SD = 1.16 years) were recruited, and emotionality was controlled. Results indicated no significant findings for girls. Boys with higher levels of traditional bullying did not show any deficit in perception accuracy of emotions, but they were prone to identify happiness and fear in faces when a different emotion was expressed; in addition, male cyberbullying was related to greater accuracy in recognizing fear. In terms of the victims, cyber victims had a global problem in recognizing emotions and a specific problem in processing anger and fear. It was concluded that emotion perception accuracy and bias were associated with bullying and victimization for boys not only in traditional settings but also in the electronic ones. Implications of these findings for possible intervention are discussed.
MEG Evidence for Dynamic Amygdala Modulations by Gaze and Facial Emotions
Dumas, Thibaud; Dubal, Stéphanie; Attal, Yohan; Chupin, Marie; Jouvent, Roland; Morel, Shasha; George, Nathalie
2013-01-01
Background: The amygdala is a key brain region for face perception. While the role of the amygdala in the perception of facial emotion and gaze has been extensively highlighted with fMRI, the unfolding in time of amygdala responses to emotional versus neutral faces with different gaze directions is scarcely known. Methodology/Principal findings: Here we addressed this question in healthy subjects using MEG combined with an original source imaging method based on individual amygdala volume segmentation and the localization of sources in the amygdala volume. We found an early peak of amygdala activity that was enhanced for fearful relative to neutral faces between 130 and 170 ms. The effect of emotion was again significant in a later time range (310–350 ms). Moreover, the amygdala response was greater for direct relative to averted gaze between 190 and 350 ms, and this effect was selective for fearful faces in the right amygdala. Conclusion: Altogether, our results show that the amygdala is involved in the processing and integration of emotion and gaze cues from faces in different time ranges, thus underlining its role in multiple stages of face perception. PMID:24040190
Learning representative features for facial images based on a modified principal component analysis
NASA Astrophysics Data System (ADS)
Averkin, Anton; Potapov, Alexey
2013-05-01
The paper is devoted to facial image analysis and particularly to the problem of automatically evaluating the attractiveness of human faces. We propose a new approach for the automatic construction of a feature space based on a modified principal component analysis. The input to the algorithm is a learning set of facial images rated by a single person. The proposed approach allows one to extract features underlying that individual's subjective perception of facial beauty and to predict attractiveness values for new facial images that were not included in the learning set. The Pearson correlation coefficient between the values predicted by our method for new facial images and the personal attractiveness ratings is 0.89. This indicates that the proposed approach is promising and can be used for predicting subjective face attractiveness values in practical facial image analysis systems.
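A minimal sketch of this kind of single-rater attractiveness pipeline; it substitutes ordinary PCA plus a ridge regressor for the authors' modified PCA, and the data shapes and names are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

def fit_attractiveness_model(train_images, train_ratings, n_components=30):
    """Learn a PCA face space plus a regressor onto one rater's scores.

    `train_images` is an (n_faces, n_pixels) array of aligned, flattened
    greyscale face images (n_faces must exceed n_components); `train_ratings`
    holds that single rater's attractiveness scores.
    """
    pca = PCA(n_components=n_components).fit(train_images)
    model = Ridge(alpha=1.0).fit(pca.transform(train_images), train_ratings)
    return pca, model

def predict_attractiveness(pca, model, images):
    """Project new faces into the learned space and predict their ratings."""
    return model.predict(pca.transform(images))

def evaluate(pca, model, test_images, test_ratings):
    """Pearson correlation between predictions and held-out ratings,
    the benchmark the paper reports (0.89 for its modified PCA)."""
    return pearsonr(predict_attractiveness(pca, model, test_images), test_ratings)[0]
```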
Egan, Vincent; Cordan, Giray
2009-05-01
One 'reasonable ground' for unlawful sex with a minor is mistaken age. Alcohol consumption and make-up are often deemed further influences on impaired perception. Two hundred and forty persons in bars and cafes rated the attractiveness of composite faces of immature and mature females with and without additional makeup, alcohol users having their concurrent blood-alcohol level measured using a breathalyser. A non-sex-specific preference for immature faces over sexually mature faces was found. Alcohol and make-up did not inflate attractiveness ratings in immature faces. While alcohol consumption significantly inflated attractiveness ratings for participants viewing made-up sexually mature faces, greater alcohol consumption itself did not lead to overestimation of age. Although alcohol limited the processing of maturity cues in female observers, it had no effect on the age perceptions of males viewing female faces, suggesting male mate preferences are not easily disrupted. Participants consistently overestimated the age of sexually immature and sexually mature faces by an average of 3.5 years. Our study suggests that even heavy alcohol consumption does not interfere with age-perception tasks in men, so is not of itself an excuse for apparent mistaken age in cases of unlawful sex with a minor.
An Adult Developmental Approach to Perceived Facial Attractiveness and Distinctiveness
Ebner, Natalie C.; Luedicke, Joerg; Voelkle, Manuel C.; Riediger, Michaela; Lin, Tian; Lindenberger, Ulman
2018-01-01
Attractiveness and distinctiveness constitute facial features with high biological and social relevance. Bringing a developmental perspective to research on social-cognitive face perception, we used a large set of faces taken from the FACES Lifespan Database to examine effects of face and perceiver characteristics on subjective evaluations of attractiveness and distinctiveness in young (20–31 years), middle-aged (44–55 years), and older (70–81 years) men and women. We report novel findings supporting variations by face and perceiver age, in interaction with gender and emotion: although older and middle-aged compared to young perceivers generally rated faces of all ages as more attractive, young perceivers gave relatively higher attractiveness ratings to young compared to middle-aged and older faces. Controlling for variations in attractiveness, older compared to young faces were viewed as more distinctive by young and middle-aged perceivers. Age affected attractiveness more negatively for female than male faces. Furthermore, happy faces were rated as most attractive, while disgusted faces were rated as least attractive, particularly so by middle-aged and older perceivers and for young and female faces. Perceivers largely agreed on distinctiveness ratings for neutral and happy emotions, but older and middle-aged compared to young perceivers rated faces displaying negative emotions as more distinctive. These findings underscore the importance of a lifespan perspective on perception of facial characteristics and suggest possible effects of age on goal-directed perception, social motivation, and in-group bias. This publication makes available picture-specific normative data for experimental stimulus selection. PMID:29867620
Bidirectional Gender Face Aftereffects: Evidence Against Normative Facial Coding.
Cronin, Sophie L; Spence, Morgan L; Miller, Paul A; Arnold, Derek H
2017-02-01
Facial appearance can be altered, not just by restyling but also by sensory processes. Exposure to a female face can, for instance, make subsequent faces look more masculine than they would otherwise. Two explanations exist. According to one, exposure to a female face renormalizes face perception, making that female and all other faces look more masculine as a consequence, a unidirectional effect. According to that explanation, exposure to a male face would have the opposite unidirectional effect. Another suggestion is that face gender is subject to contrastive aftereffects. These should make some faces look more masculine than the adaptor and other faces more feminine, a bidirectional effect. Here, we show that face gender aftereffects are bidirectional, as predicted by the latter hypothesis. Images of real faces rated as more and less masculine than adaptors at baseline tended to look even more and less masculine than adaptors post-adaptation. This suggests that, rather than mental representations of all faces being recalibrated to better reflect the prevailing statistics of the environment, mental operations exaggerate differences between successive faces, and this can impact facial gender perception.
Audio-Visual Speech Perception Is Special
ERIC Educational Resources Information Center
Tuomainen, J.; Andersen, T.S.; Tiippana, K.; Sams, M.
2005-01-01
In face-to-face conversation speech is perceived by ear and eye. We studied the prerequisites of audio-visual speech perception by using perceptually ambiguous sine wave replicas of natural speech as auditory stimuli. When the subjects were not aware that the auditory stimuli were speech, they showed only negligible integration of auditory and…
Challenges for Novice School Leaders: Facing Today's Issues in School Administration
ERIC Educational Resources Information Center
Beam, Andrea P.; Claxton, Russell L.; Smith, Samuel J.
2016-01-01
Challenges for novice school leaders evolve as information is managed differently and as societal and regulatory expectations change. This study addresses unique challenges faced by practicing school administrators (n = 159) during their first three years in a school leadership position. It focuses on their perceptions, how perceptions of present…
What Effect Does Flipping the Classroom Have on Undergraduate Student Perceptions and Grades?
ERIC Educational Resources Information Center
Molnar, Kathleen K.
2017-01-01
There is a lack of consensus of the effects on student perceptions and performance in flipping the classroom and its possible value over the traditional face-to-face (FTF) classroom approach. This research examines the expectation that flipping an undergraduate, introductory level, information concepts and skills class would benefit student…
Student Perceptions of Online Learning and Persistence for Course Completion
ERIC Educational Resources Information Center
Snyder, Jill
2014-01-01
This qualitative case study was designed to explore students' perceptions of online learning at a small rural community college to understand what factors impacted their persistence in coursework. The research problem dealt with retention rates in online courses, which were lower than in face-to-face courses. Despite extensive quantitative…
Extension Agents' Perceptions of a Blended Approach to Onboarding
ERIC Educational Resources Information Center
Harder, Amy; Zelaya, Priscilla; Roberts, T. Grady
2016-01-01
Extension organizations are challenged to provide onboarding to new employees that is comprehensive and high quality, yet cost-effective. The purpose of this study was to explore Extension agents' perceptions of participating in an onboarding program that used a blended approach involving face-to-face and online learning components. The objectives…
Nurmoja, Merle; Eamets, Triin; Härma, Hanne-Loore; Bachmann, Talis
2012-10-01
While the dependence of face identification on the level of pixelation of face images has been well studied, similar research on face-based trait perception is underdeveloped. Because depiction formats used for hiding individual identity in visual media and evidential material recorded by surveillance cameras often consist of pixelized images, knowing the effects of pixelation on person perception has practical relevance. Here, the results of two experiments are presented showing the effect of facial image pixelation on the perception of criminality, trustworthiness, and suggestibility. It appears that individuals (N = 46, M age = 21.5 yr., SD = 3.1 for criminality ratings; N = 94, M age = 27.4 yr., SD = 10.1 for other ratings) are able to discriminate facial cues indicative of these perceived traits even at a coarse level of image pixelation (10-12 pixels per face horizontally), and that discriminability increases as the pixelation becomes less coarse. Perceived criminality and trustworthiness appear to be better carried by the pixelized images than perceived suggestibility.
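A minimal sketch of the kind of block-averaging pixelation transform such stimuli rely on, assuming a 2-D greyscale face image; the function name and default block count are illustrative (10-12 blocks across the face matches the coarsest level described).

```python
import numpy as np

def pixelate(image, blocks_across=12):
    """Coarsely pixelate a greyscale face image by block-averaging.

    Each square block of pixels is replaced by its mean luminance, then blown
    back up to the original resolution, mimicking mosaic anonymization.
    """
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    block = max(1, w // blocks_across)
    # Crop so both dimensions divide evenly, then average within each block.
    hc, wc = (h // block) * block, (w // block) * block
    small = img[:hc, :wc].reshape(hc // block, block, wc // block, block).mean(axis=(1, 3))
    # Repeat each block value to restore the cropped image's original scale.
    return np.kron(small, np.ones((block, block)))
```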
ERIC Educational Resources Information Center
Burke, Jacqueline A.
2001-01-01
Accounting students (n=128) used either face-to-face or distant Group support systems to complete collaborative tasks. Participation and social presence perceptions were significantly higher face to face. Task difficulty did not affect participation in either environment. (Contains 54 references.) (JOW)
Differential effects of object-based attention on evoked potentials to fearful and disgusted faces.
Santos, Isabel M; Iglesias, Jaime; Olivares, Ela I; Young, Andrew W
2008-04-01
Event-related potentials (ERPs) were used to investigate the role of attention on the processing of facial expressions of fear and disgust. Stimuli consisted of overlapping pictures of a face and a house. Participants had to monitor repetitions of faces or houses, in separate blocks of trials, so that object-based attention was manipulated while spatial attention was kept constant. Faces varied in expression and could be either fearful or neutral (in the fear condition) or disgusted or neutral (in the disgust condition). When attending to faces, participants were required to signal repetitions of the same person, with the facial expressions being completely irrelevant to the task. Different effects of selective attention and different patterns of brain activity were observed for faces with fear and disgust expressions. Results indicated that the perception of fear from faces is gated by selective attention at early latencies, whereas a sustained positivity for fearful faces compared to neutral faces emerged around 160ms at central-parietal sites, independent of selective attention. In the case of disgust, ERP differences began only around 160ms after stimulus onset, and only after 480ms was the perception of disgust modulated by attention allocation. Results are interpreted in terms of different neural mechanisms for the perception of fear and disgust and related to the functional significance of these two emotions for the survival of the organism.
Perceiving groups: The people perception of diversity and hierarchy.
Phillips, L Taylor; Slepian, Michael L; Hughes, Brent L
2018-05-01
The visual perception of individuals has received considerable attention (visual person perception), but little social psychological work has examined the processes underlying the visual perception of groups of people (visual people perception). Ensemble-coding is a visual mechanism that automatically extracts summary statistics (e.g., average size) of lower-level sets of stimuli (e.g., geometric figures), and also extends to the visual perception of groups of faces. Here, we consider whether ensemble-coding supports people perception, allowing individuals to form rapid, accurate impressions about groups of people. Across nine studies, we demonstrate that people visually extract high-level properties (e.g., diversity, hierarchy) that are unique to social groups, as opposed to individual persons. Observers rapidly and accurately perceived group diversity and hierarchy, or variance across race, gender, and dominance (Studies 1-3). Further, results persist when observers are given very short display times, backward pattern masks, color- and contrast-controlled stimuli, and absolute versus relative response options (Studies 4a-7b), suggesting robust effects supported specifically by ensemble-coding mechanisms. Together, we show that humans can rapidly and accurately perceive not only individual persons, but also emergent social information unique to groups of people. These people perception findings demonstrate the importance of visual processes for enabling people to perceive social groups and behave effectively in group-based social interactions. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
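A minimal sketch of the summary statistics an ensemble-coding account assumes observers extract from a crowd; the feature coding and names are illustrative and not the authors' stimuli or model.

```python
import numpy as np

def ensemble_summary(group_features):
    """Summary statistics over a displayed group of faces.

    `group_features` is an (n_faces, n_dims) array, e.g. one row per face with
    columns such as rated dominance or a numeric race/gender code.
    """
    group_features = np.asarray(group_features, dtype=float)
    return {
        "mean": group_features.mean(axis=0),      # average impression of the group
        "variance": group_features.var(axis=0),   # diversity along each dimension
        "range": np.ptp(group_features, axis=0),  # spread, one proxy for hierarchy
    }
```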
Impact of facial defect reconstruction on attractiveness and negative facial perception.
Dey, Jacob K; Ishii, Masaru; Boahene, Kofi D O; Byrne, Patrick; Ishii, Lisa E
2015-06-01
This prospective, randomized, controlled experiment measured the impact of facial defect reconstruction on observer-graded attractiveness and negative facial perception. One hundred twenty casual observers viewed images of faces with defects of varying sizes and locations before and after reconstruction, as well as normal comparison faces. Observers rated attractiveness, defect severity, and how disfiguring, bothersome, and important to repair they considered each face. Facial defects decreased attractiveness (-2.26; 95% confidence interval [CI]: -2.45, -2.08) on a 10-point scale. Mixed effects linear regression showed this attractiveness penalty varied with defect size and location, with large and central defects generating the greatest penalty. Reconstructive surgery increased attractiveness (1.33; 95% CI: 1.18, 1.47), an improvement dependent upon size and location, restoring some defect categories to near-normal ranges of attractiveness. Iterated principal factor analysis indicated that the disfiguring, important to repair, bothersome, and severity variables were highly correlated and measured a common domain; thus, they were combined into the disfigured, important to repair, bothersome, severity (DIBS) factor score, representing negative facial perception. The DIBS regression showed defect faces carried a 1.5 standard deviation increase in negative perception (DIBS: 1.69, 95% CI: 1.61, 1.77) compared to normal faces, which decreased by a similar magnitude after surgery (DIBS: -1.44, 95% CI: -1.49, -1.38). These findings varied with defect size and location. Surgical reconstruction of facial defects increased attractiveness and decreased negative social facial perception, an impact that varied with defect size and location. These new social perception data add to the evidence base demonstrating the value of high-quality reconstructive surgery. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.
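A minimal sketch of collapsing the four correlated ratings into a single factor score; it substitutes scikit-learn's maximum-likelihood FactorAnalysis for the iterated principal factor analysis used in the study, and the input layout is an assumption.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

def dibs_score(ratings):
    """Single common-factor score from four correlated disfigurement items.

    `ratings` is an (n_faces, 4) array with columns for the disfiguring,
    important-to-repair, bothersome, and severity items; the result is a
    standardized score per face, analogous to the DIBS factor.
    """
    ratings = np.asarray(ratings, dtype=float)
    z = (ratings - ratings.mean(axis=0)) / ratings.std(axis=0)  # standardize items
    return FactorAnalysis(n_components=1).fit_transform(z).ravel()
```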
Aspects of Facial Contrast Decrease with Age and Are Cues for Age Perception
Porcheron, Aurélie; Mauger, Emmanuelle; Russell, Richard
2013-01-01
Age is a primary social dimension. We behave differently toward people as a function of how old we perceive them to be. Age perception relies on cues that are correlated with age, such as wrinkles. Here we report that aspects of facial contrast (the contrast between facial features and the surrounding skin) decreased with age in a large sample of adult Caucasian females. These same aspects of facial contrast were also significantly correlated with the perceived age of the faces. Individual faces were perceived as younger when these aspects of facial contrast were artificially increased, but older when these aspects of facial contrast were artificially decreased. These findings show that facial contrast plays a role in age perception, and that faces with greater facial contrast look younger. Because facial contrast is increased by typical cosmetics use, we infer that cosmetics function in part by making the face appear younger. PMID:23483959
Face imagery is based on featural representations.
Lobmaier, Janek S; Mast, Fred W
2008-01-01
The effect of imagery on featural and configural face processing was investigated using blurred and scrambled faces. By means of blurring, featural information is reduced; by scrambling a face into its constituent parts, configural information is lost. Twenty-four participants learned ten faces together with the sound of a name. In the subsequent matching-to-sample tasks, participants had to decide whether an auditorily presented name belonged to a visually presented scrambled or blurred face in two experimental conditions. In the imagery condition, the name was presented prior to the visual stimulus and participants were required to imagine the corresponding face as clearly and vividly as possible. In the perception condition, the name and test face were presented simultaneously, so no facilitation via mental imagery was possible. Analyses of the hit values showed that in the imagery condition scrambled faces were recognized significantly better than blurred faces, whereas there was no such effect for the perception condition. The results suggest that mental imagery activates featural representations more than configural representations.
Freeman, Jonathan B.; Ambady, Nalini; Midgley, Katherine J.; Holcomb, Phillip J.
2010-01-01
Using event-related potentials, we investigated how the brain extracts information from another’s face and translates it into relevant action in real-time. In Study 1, participants made between-hand sex categorizations of sex-typical and sex-atypical faces. Sex-atypical faces evoked negativity between 250-550 ms (N300/N400 effects), reflecting the integration of accumulating sex-category knowledge into a coherent sex-category interpretation. Additionally, the lateralized readiness potential (LRP) revealed that the motor cortex began preparing for a correct hand response while social category knowledge was still gradually evolving in parallel. In Study 2, participants made between-hand eye-color categorizations as part of go/no-go trials that were contingent on a target’s sex. On no-go trials, although the hand did not actually move, information about eye color partially prepared the motor cortex to move the hand before perception of sex had finalized. Together, these findings demonstrate the dynamic continuity between person perception and action, such that ongoing results from face processing are immediately and continuously cascaded into the motor system over time. The preparation of action begins based on tentative perceptions of another’s face before perceivers have finished interpreting what they just saw. PMID:20602284
Perception of men's beauty and attractiveness by women with low sexual desire.
Ferdenzi, Camille; Delplanque, Sylvain; Vorontsova-Wenger, Olga; Pool, Eva; Bianchi-Demicheli, Francesco; Sander, David
2015-04-01
Despite the high prevalence of hypoactive sexual desire disorder (HSDD), especially among women, this sexual disorder remains poorly understood. Among the multiple factors possibly involved in HSDD, particularities in the cognitive evaluations of social stimuli need to be better characterized. In particular, beauty and attractiveness judgments, two dimensions of interpersonal perception that are related but differ in their underlying motivational aspects, may vary according to the level of sexual desire. The main goal of this study was to investigate whether women with and without HSDD differ in their evaluations of beauty and attractiveness of men's faces and voices. Young women from the general population (controls, n = 16) and with HSDD (patients, n = 16) took part in the study. They were presented with a series of neutral/nonerotic voices and faces of young men from the GEneva Faces And Voices database. Ratings of beauty (i.e., assessments of aesthetic pleasure) and of attractiveness (i.e., assessments of the personal propensity to feel attracted to someone) and the frequency with which the participants pressed a key to see or listen to each stimulus again were the main outcome measures. Ratings of attractiveness were lower than ratings of beauty in both groups of women. The dissociation between beauty and attractiveness was larger in women with HSDD than in control participants. Patients gave lower attractiveness ratings than the controls and replayed the stimuli significantly less often. These results suggest that women with HSDD are characterized by specific alterations of the motivational component of men's perception, very early in the process of interpersonal relationships. Our findings have significant implications, both for better understanding the specific cognitive processes underlying hypoactive sexual desire and, more broadly, for the evaluative processes involved in human mate choice. © 2014 International Society for Sexual Medicine.
Robinson, Amanda K; Plaut, David C; Behrmann, Marlene
2017-07-01
Words and faces have vastly different visual properties, but increasing evidence suggests that word and face processing engage overlapping distributed networks. For instance, fMRI studies have shown overlapping activity for face and word processing in the fusiform gyrus despite well-characterized lateralization of these objects to the left and right hemispheres, respectively. To investigate whether face and word perception influences perception of the other stimulus class and elucidate the mechanisms underlying such interactions, we presented images using rapid serial visual presentations. Across 3 experiments, participants discriminated 2 face, word, and glasses targets (T1 and T2) embedded in a stream of images. As expected, T2 discrimination was impaired when it followed T1 by 200 to 300 ms relative to longer intertarget lags, the so-called attentional blink. Interestingly, T2 discrimination accuracy was significantly reduced at short intertarget lags when a face was followed by a word (face-word) compared with glasses-word and word-word combinations, indicating that face processing interfered with word perception. The reverse effect was not observed; that is, word-face performance was no different than the other object combinations. EEG results indicated the left N170 to T1 was correlated with the word decrement for face-word trials, but not for other object combinations. Taken together, the results suggest face processing interferes with word processing, providing evidence for overlapping neural mechanisms of these 2 object types. Furthermore, asymmetrical face-word interference points to greater overlap of face and word representations in the left than the right hemisphere. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Groen, Iris I A; Silson, Edward H; Baker, Chris I
2017-02-19
Visual scene analysis in humans has been characterized by the presence of regions in extrastriate cortex that are selectively responsive to scenes compared with objects or faces. While these regions have often been interpreted as representing high-level properties of scenes (e.g. category), they also exhibit substantial sensitivity to low-level (e.g. spatial frequency) and mid-level (e.g. spatial layout) properties, and it is unclear how these disparate findings can be united in a single framework. In this opinion piece, we suggest that this problem can be resolved by questioning the utility of the classical low- to high-level framework of visual perception for scene processing, and discuss why low- and mid-level properties may be particularly diagnostic for the behavioural goals specific to scene perception as compared to object recognition. In particular, we highlight the contributions of low-level vision to scene representation by reviewing (i) retinotopic biases and receptive field properties of scene-selective regions and (ii) the temporal dynamics of scene perception that demonstrate overlap of low- and mid-level feature representations with those of scene category. We discuss the relevance of these findings for scene perception and suggest a more expansive framework for visual scene analysis.This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Author(s).
Hirata, Satoshi; Fuwa, Koki; Sugama, Keiko; Kusunoki, Kiyo; Fujita, Shin
2010-09-01
This paper reports on the use of an eye-tracking technique to examine how chimpanzees look at facial photographs of conspecifics. Six chimpanzees viewed a sequence of pictures presented on a monitor while their eye movements were measured by an eye tracker. The pictures presented conspecific faces with open or closed eyes in an upright or inverted orientation in a frame. The results demonstrated that chimpanzees looked at the eyes, nose, and mouth more frequently than would be expected on the basis of random scanning of faces. More specifically, they looked at the eyes longer than they looked at the nose and mouth when photographs of upright faces with open eyes were presented, suggesting that particular attention to the eyes represents a spontaneous face-scanning strategy shared among monkeys, apes, and humans. In contrast to the results obtained for upright faces with open eyes, the viewing times for the eyes, nose, and mouth of inverted faces with open eyes did not differ from one another. The viewing times for the eyes, nose, and mouth of faces with closed eyes did not differ when faces with closed eyes were presented in either an upright or inverted orientation. These results suggest the possibility that open eyes play an important role in the configural processing of faces and that chimpanzees perceive and process open and closed eyes differently.
Asymmetries of the human social brain in the visual, auditory and chemical modalities
Brancucci, Alfredo; Lucci, Giuliana; Mazzatenta, Andrea; Tommasi, Luca
2008-01-01
Structural and functional asymmetries are present in many regions of the human brain responsible for motor control, sensory and cognitive functions and communication. Here, we focus on hemispheric asymmetries underlying the domain of social perception, broadly conceived as the analysis of information about other individuals based on acoustic, visual and chemical signals. By means of these cues the brain establishes the border between ‘self’ and ‘other’, and interprets the surrounding social world in terms of the physical and behavioural characteristics of conspecifics essential for impression formation and for creating bonds and relationships. We show that, considered from the standpoint of single- and multi-modal sensory analysis, the neural substrates of the perception of voices, faces, gestures, smells and pheromones, as evidenced by modern neuroimaging techniques, are characterized by a general pattern of right-hemispheric functional asymmetry that might benefit from other aspects of hemispheric lateralization rather than constituting a true specialization for social information. PMID:19064350
Abrams, Daniel A.; Chen, Tianwen; Odriozola, Paola; Cheng, Katherine M.; Baker, Amanda E.; Padmanabhan, Aarthi; Ryali, Srikanth; Kochalka, John; Feinstein, Carl; Menon, Vinod
2016-01-01
The human voice is a critical social cue, and listeners are extremely sensitive to the voices in their environment. One of the most salient voices in a child’s life is mother's voice: Infants discriminate their mother’s voice from the first days of life, and this stimulus is associated with guiding emotional and social function during development. Little is known regarding the functional circuits that are selectively engaged in children by biologically salient voices such as mother’s voice or whether this brain activity is related to children’s social communication abilities. We used functional MRI to measure brain activity in 24 healthy children (mean age, 10.2 y) while they attended to brief (<1 s) nonsense words produced by their biological mother and two female control voices and explored relationships between speech-evoked neural activity and social function. Compared to female control voices, mother’s voice elicited greater activity in primary auditory regions in the midbrain and cortex; voice-selective superior temporal sulcus (STS); the amygdala, which is crucial for processing of affect; nucleus accumbens and orbitofrontal cortex of the reward circuit; anterior insula and cingulate of the salience network; and a subregion of fusiform gyrus associated with face perception. The strength of brain connectivity between voice-selective STS and reward, affective, salience, memory, and face-processing regions during mother’s voice perception predicted social communication skills. Our findings provide a novel neurobiological template for investigation of typical social development as well as clinical disorders, such as autism, in which perception of biologically and socially salient voices may be impaired. PMID:27185915
Faciotopy—A face-feature map with face-like topology in the human occipital face area
Henriksson, Linda; Mur, Marieke; Kriegeskorte, Nikolaus
2015-01-01
The occipital face area (OFA) and fusiform face area (FFA) are brain regions thought to be specialized for face perception. However, their intrinsic functional organization and status as cortical areas with well-defined boundaries remains unclear. Here we test these regions for “faciotopy”, a particular hypothesis about their intrinsic functional organisation. A faciotopic area would contain a face-feature map on the cortical surface, where cortical patches represent face features and neighbouring patches represent features that are physically neighbouring in a face. The faciotopy hypothesis is motivated by the idea that face regions might develop from a retinotopic protomap and acquire their selectivity for face features through natural visual experience. Faces have a prototypical configuration of features, are usually perceived in a canonical upright orientation, and are frequently fixated in particular locations. To test the faciotopy hypothesis, we presented images of isolated face features at fixation to subjects during functional magnetic resonance imaging. The responses in V1 were best explained by low-level image properties of the stimuli. OFA, and to a lesser degree FFA, showed evidence for faciotopic organization. When a single patch of cortex was estimated for each face feature, the cortical distances between the feature patches reflected the physical distance between the features in a face. Faciotopy would be the first example, to our knowledge, of a cortical map reflecting the topology, not of a part of the organism itself (its retina in retinotopy, its body in somatotopy), but of an external object of particular perceptual significance. PMID:26235800
Maekawa, Toshihiko; Miyanaga, Yuka; Takahashi, Kenji; Takamiya, Naomi; Ogata, Katsuya; Tobimatsu, Shozo
2017-01-01
Individuals with autism spectrum disorder (ASD) show superior performance in processing fine detail, but often exhibit impaired gestalt face perception. The ventral visual stream from the primary visual cortex (V1) to the fusiform gyrus (V4) plays an important role in form (including faces) and color perception. The aim of this study was to investigate how the ventral stream is functionally altered in ASD. Visual evoked potentials were recorded in high-functioning ASD adults (n = 14) and typically developing (TD) adults (n = 14). We used three types of visual stimuli as follows: isoluminant chromatic (red/green, RG) gratings, high-contrast achromatic (black/white, BW) gratings with high spatial frequency (HSF, 5.3 cycles/degree), and face (neutral, happy, and angry faces) stimuli. Compared with TD controls, ASD adults exhibited longer N1 latency for RG, shorter N1 latency for BW, and shorter P1 latency, but prolonged N170 latency, for face stimuli. Moreover, a greater difference in latency between P1 and N170, or between N1 for BW and N170 (i.e., the prolongation of cortico-cortical conduction time between V1 and V4) was observed in ASD adults. These findings indicate that ASD adults have enhanced fine-form (local HSF) processing, but impaired color processing at V1. In addition, they exhibit impaired gestalt face processing due to deficits in integration of multiple local HSF facial information at V4. Thus, altered ventral stream function may contribute to abnormal social processing in ASD. PMID:28146575
Seeing Objects as Faces Enhances Object Detection.
Takahashi, Kohske; Watanabe, Katsumi
2015-10-01
The face is a special visual stimulus. Both bottom-up processes for low-level facial features and top-down modulation by face expectations contribute to the advantages of face perception. However, it is hard to dissociate the top-down factors from the bottom-up processes, since facial stimuli mandatorily lead to face awareness. In the present study, using the face pareidolia phenomenon, we demonstrated that face awareness, namely seeing an object as a face, enhances object detection performance. In face pareidolia, some people see a visual stimulus, for example, three dots arranged in a V shape, as a face, while others do not. This phenomenon allows us to investigate the effect of face awareness while leaving the stimulus itself unchanged. Participants were asked to detect a face target or a triangle target. While the target itself was identical between the two tasks, the detection sensitivity was higher when the participants recognized the target as a face. This was the case irrespective of the stimulus eccentricity or the vertical orientation of the stimulus. These results demonstrate that seeing an object as a face facilitates object detection via top-down modulation. The advantages of face perception are, therefore, at least partly, due to face awareness.
ERIC Educational Resources Information Center
Abraham, Sneha Elizabeth
2014-01-01
The purpose of this study was to investigate faculty and administrator perceptions of online learning compared to traditional face-to-face instruction by exploring the factors that impact online instruction. Strategies that can lead to effective online learning environments were explored. Faculty and administrators working with online education at…
ERIC Educational Resources Information Center
Barry, Amy A.; Smith, JuliAnna Z.; Deutsch, Francine M.; Perry-Jenkins, Maureen
2011-01-01
This study explored first-time fathers' perceived child care skill over the transition to parenthood, based on face-to-face interviews of 152 working-class, dual-earner couples. Analyses examined the associations among fathers' perceived skill and prenatal perception of skill, child care involvement, mothers' breastfeeding, maternal gatekeeping,…
ERIC Educational Resources Information Center
Reid, Eric Justin
2015-01-01
This qualitative study explored the perceptions and experiences of IT Managers in publicly traded companies within the San Antonio, Texas area about outsourced data centers. Narrative data was collected using open-ended questions and face-to-face interviews within semi-structured environments. The research questions guided the study: (1)…
The Impact of Text versus Video Communication on Instructor Feedback in Blended Courses
ERIC Educational Resources Information Center
Borup, Jered; West, Richard E.; Thomas, Rebecca
2015-01-01
In this study we examined student and instructor perceptions of text and video feedback in technology integration courses that combined face-to-face with online instruction for teacher candidates. Items from the Feedback Environment Scale (Steelman et al. 2004) were used to measure student perceptions of feedback quality and delivery. Independent…
ERIC Educational Resources Information Center
Kuo, Yu-Chun; Belland, Brian R.; Schroder, Kerstin E. E.; Walker, Andrew E.
2014-01-01
Blended learning is an effective approach to instruction that combines features of face-to-face learning and computer-mediated learning. This study investigated the relationship between student perceptions of three types of interaction and blended learning course satisfaction. The participants included K-12 teachers enrolled in a graduate-level…
ERIC Educational Resources Information Center
Li, Mingsheng; Campbell, Jacqui
2008-01-01
This study, conducted in 2005 in a New Zealand tertiary institution, examines Asian students' perceptions of the much-promulgated cooperative learning concepts in the form of group work and group assignments. Twenty-two Asian students participated in one-hour individual face-to-face semi-structured interviews. The study found that Asian students…
ERIC Educational Resources Information Center
Goette, William F.; Delello, Julie A.; Schmitt, Andrew L.; Sullivan, Jeremy R.; Rangel, Angelica
2017-01-01
This study compares the academic performance and perceptions of 114 undergraduate students enrolled in an abnormal psychology course. Specifically, this study focuses on whether face-to-face (F2F) or blended modalities are associated with student learning outcomes. In this study, data analysis was based upon the examination of end-of-course…
On-the-Job E-Learning: Workers' Attitudes and Perceptions
ERIC Educational Resources Information Center
Batalla-Busquets, Josep-Maria; Pacheco-Bernal, Carmen
2013-01-01
The use of e-learning for on-the-job training has grown exponentially in the last decade due to it being accepted by people in charge of businesses. Few papers have explored virtual training from the workers' standpoint, that is, the perception they have about the different training methodologies (face-to-face vs. virtual) and the attitudes they…
ERIC Educational Resources Information Center
Sumule, Leonard
2016-01-01
This grounded theory study explores the perceptions of 33 alumni of 10 Indonesian evangelical theological schools regarding the impact of informal mentoring, which they experienced during their time as students. Data from face-to-face interviews revealed that the informal mentoring relationships (a) helped them to handle their social and emotional…
The Job Shadow Assignment: Career Perceptions in Hospitality, Recreation and Tourism
ERIC Educational Resources Information Center
Padron, Thomas C.; Fortune, Mary F.; Spielman, Melany; Tjoei, Sylvie
2017-01-01
The job shadow study measured student career perceptions related to hospitality, recreation and tourism (HRT) and instructional mode (face-to-face (F2F), hybrid, and online). College students self-selecting into three different course modalities taught by the same instructor job shadowed HRT professionals by using Internet and F2F interviews. The…
Single Black Working Mothers' Perceptions: The Journey to Achieve Leadership Positions
ERIC Educational Resources Information Center
Raglin, Sherrell
2017-01-01
Single Black working mothers faced significant challenges in achieving high-level or senior-level leadership positions. The purpose of this qualitative narrative study was to collect, analyze and code the stories told by 10 participants to understand the perceptions and insights of the challenges and barriers single Black working mothers faced in…
Ferdenzi, Camille; Delplanque, Sylvain; Atanassova, Reni; Sander, David
2016-04-01
The androgen steroid androstadienone, an odorous compound emitted from the human axillary region, has recurrently been considered as a candidate compound involved in human chemical communication and mate choice. Although perception of androstadienone has been shown to influence several affective (mood), attentional, physiological and neural parameters, studies investigating its impact on human attractiveness remain unpersuasive because of incomplete designs (e.g., only female participants) and contradictory results. The aim of this study was to investigate how androstadienone may influence others' attractiveness. Specifically, we used a complete design (male and female raters, male and female faces and voices) to determine whether androstadienone influences the perception of social stimuli in a sex-specific manner, which would favor pheromonal-like properties of the compound, or in a more general manner, which would suggest that the compound has broader influences on human psychological responses. After comparing the ratings of men and women who were exposed to androstadienone masked in clove oil with those of men and women who were exposed to clove oil alone, we found that androstadienone enhanced the perceived attractiveness of emotionally relevant stimuli (opposite-sex stimuli in men and in fertile women). Response times for categorizing the stimuli as attractive or not were also affected by androstadienone, with longer response times in men and in fertile women and shorter response times in non-fertile women, irrespective of the stimulus sex. The results favor the hypothesis of general effects over sex-specific effects of androstadienone, thus questioning the relevance of focusing on that particular compound in the study of human attractiveness through body odor and encouraging the search for other semiochemicals that might be significant for human mate choice. Copyright © 2016 Elsevier Ltd. All rights reserved.
The social-sensory interface: category interactions in person perception
Freeman, Jonathan B.; Johnson, Kerri L.; Adams, Reginald B.; Ambady, Nalini
2012-01-01
Research is increasingly challenging the claim that distinct sources of social information—such as sex, race, and emotion—are processed in discrete fashion. Instead, there appear to be functionally relevant interactions that occur. In the present article, we describe research examining how cues conveyed by the human face, voice, and body interact to form the unified representations that guide our perceptions of and responses to other people. We explain how these information sources are often thrown into interaction through bottom-up forces (e.g., phenotypic cues) as well as top-down forces (e.g., stereotypes and prior knowledge). Such interactions point to a person perception process that is driven by an intimate interface between bottom-up perceptual and top-down social processes. Incorporating data from neuroimaging, event-related potentials (ERP), computational modeling, computer mouse-tracking, and other behavioral measures, we discuss the structure of this interface, and we consider its implications and adaptive purposes. We argue that an increased understanding of person perception will likely require a synthesis of insights and techniques, from social psychology to the cognitive, neural, and vision sciences. PMID:23087622
Individual Aesthetic Preferences for Faces Are Shaped Mostly by Environments, Not Genes.
Germine, Laura; Russell, Richard; Bronstad, P Matthew; Blokland, Gabriëlla A M; Smoller, Jordan W; Kwok, Holum; Anthony, Samuel E; Nakayama, Ken; Rhodes, Gillian; Wilmer, Jeremy B
2015-10-19
Although certain characteristics of human faces are broadly considered more attractive (e.g., symmetry, averageness), people also routinely disagree with each other on the relative attractiveness of faces. That is, to some significant degree, beauty is in the "eye of the beholder." Here, we investigate the origins of these individual differences in face preferences using a twin design, allowing us to estimate the relative contributions of genetic and environmental variation to individual face attractiveness judgments or face preferences. We first show that individual face preferences (IP) can be reliably measured and are readily dissociable from other types of attractiveness judgments (e.g., judgments of scenes, objects). Next, we show that individual face preferences result primarily from environments that are unique to each individual. This is in striking contrast to individual differences in face identity recognition, which result primarily from variations in genes [1]. We thus complete an etiological double dissociation between two core domains of social perception (judgments of identity versus attractiveness) within the same visual stimulus (the face). At the same time, we provide an example, rare in behavioral genetics, of a reliably and objectively measured behavioral characteristic where variations are shaped mostly by the environment. The large impact of experience on individual face preferences provides a novel window into the evolution and architecture of the social brain, while lending new empirical support to the long-standing claim that environments shape individual notions of what is attractive. Copyright © 2015 Elsevier Ltd. All rights reserved.
George, Nathalie; Jemel, Boutheina; Fiori, Nicole; Chaby, Laurence; Renault, Bernard
2005-08-01
We investigated the ERP correlates of the subjective perception of upright and upside-down ambiguous pictures as faces using two-tone Mooney stimuli in an explicit facial decision task (deciding whether a face is perceived or not in the display). The difficulty in perceiving upside-down Mooneys as faces was reflected by both lower rates of "Face" responses and delayed "Face" reaction times for upside-down relative to upright stimuli. The N170 was larger for the stimuli reported as "faces". It was also larger for the upright than the upside-down stimuli only when they were reported as faces. Furthermore, facial decision as well as stimulus orientation effects spread from 140-190 ms to 390-440 ms. The behavioural delay in 'Face' responses to upside-down stimuli was reflected in ERPs by later effect of facial decision for upside-down relative to upright Mooneys over occipito-temporal electrodes. Moreover, an orientation effect was observed only for the stimuli reported as faces; it yielded a marked hemispheric asymmetry, lasting from 140-190 ms to 390-440 ms post-stimulus onset in the left hemisphere and from 340-390 to 390-440 ms only in the right hemisphere. Taken together, the results supported a preferential involvement of the right hemisphere in the detection of faces, whatever their orientation. By contrast, the early orientation effect in the left hemisphere suggested that upside-down Mooney stimuli were processed as non face objects until facial decision was reached in this hemisphere. The present data show that face perception involves not only spatially but also temporally distributed activities in occipito-temporal regions.
Olderbak, Sally; Hildebrandt, Andrea; Wilhelm, Oliver
2015-01-01
The shared decline in cognitive abilities, sensory functions (e.g., vision and hearing), and physical health with increasing age is well documented with some research attributing this shared age-related decline to a single common cause (e.g., aging brain). We evaluate the extent to which the common cause hypothesis predicts associations between vision and physical health with social cognition abilities specifically face perception and face memory. Based on a sample of 443 adults (17–88 years old), we test a series of structural equation models, including Multiple Indicator Multiple Cause (MIMIC) models, and estimate the extent to which vision and self-reported physical health are related to face perception and face memory through a common factor, before and after controlling for their fluid cognitive component and the linear effects of age. Results suggest significant shared variance amongst these constructs, with a common factor explaining some, but not all, of the shared age-related variance. Also, we found that the relations of face perception, but not face memory, with vision and physical health could be completely explained by fluid cognition. Overall, results suggest that a single common cause explains most, but not all age-related shared variance with domain specific aging mechanisms evident. PMID:26321998
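As a rough stand-in for the full structural equation / MIMIC models (and under invented variable names and simulated data), the idea that an association is "completely explained by fluid cognition" can be illustrated by residualizing both measures on fluid cognition and age and checking whether their correlation survives.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 443
age = rng.uniform(17, 88, n)
fluid = -0.03 * age + rng.normal(0, 1, n)            # fluid cognition declines with age
df = pd.DataFrame({
    "age": age,
    "fluid_cognition": fluid,
    "vision": 0.5 * fluid - 0.02 * age + rng.normal(0, 1, n),
    "face_perception": 0.6 * fluid + rng.normal(0, 1, n),
})

def residualize(col):
    """Part of `col` not explained by fluid cognition and linear age."""
    return smf.ols(f"{col} ~ fluid_cognition + age", data=df).fit().resid

# If the vision-face perception correlation vanishes after residualization, the
# shared variance is attributable to the fluid cognitive component, as reported
# for face perception (but not face memory).
r_raw, _ = pearsonr(df["vision"], df["face_perception"])
r_ctrl, _ = pearsonr(residualize("vision"), residualize("face_perception"))
print(f"raw r = {r_raw:.2f}; r after controlling for fluid cognition and age = {r_ctrl:.2f}")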
Neural correlates of own- and other-race face perception: spatial and temporal response differences.
Natu, Vaidehi; Raboy, David; O'Toole, Alice J
2011-02-01
Humans show an "other-race effect" for face recognition, with more accurate recognition of own- versus other-race faces. We compared the neural representations of own- and other-race faces using functional magnetic resonance imaging (fMRI) data in combination with a multi-voxel pattern classifier. Neural activity was recorded while Asians and Caucasians viewed Asian and Caucasian faces. A pattern classifier, applied to voxels across a broad range of ventral temporal areas, discriminated the brain activity maps elicited in response to Asian versus Caucasian faces in the brains of both Asians and Caucasians. Classification was most accurate in the first few time points of the block and required the use of own-race faces in the localizer scan to select voxels for classifier input. Next, we examined differences in the time-course of neural responses to own- and other-race faces and found evidence for a temporal "other-race effect." Own-race faces elicited a larger neural response initially that attenuated rapidly. The response to other-race faces was weaker at first, but increased over time, ultimately surpassing the magnitude of the own-race response in the fusiform "face" area (FFA). A similar temporal response pattern held across a broad range of ventral temporal areas. The pattern-classification results indicate the early availability of categorical information about own- versus other-race face status in the spatial pattern of neural activity. The slower, more sustained, brain response to other-race faces may indicate the need to recruit additional neural resources to process other-race faces for identification. Copyright © 2010 Elsevier Inc. All rights reserved.
Association of Face-lift Surgery With Social Perception, Age, Attractiveness, Health, and Success.
Nellis, Jason C; Ishii, Masaru; Papel, Ira D; Kontis, Theda C; Byrne, Patrick J; Boahene, Kofi D O; Bater, Kristin L; Ishii, Lisa E
2017-07-01
Evidence quantifying the influence of face-lift surgery on societal perceptions is lacking. To measure the association of face-lift surgery with observer-graded perceived age, attractiveness, success, and overall health. In a web-based survey, 526 casual observers naive to the purpose of the study viewed independent images of 13 unique female patient faces before or after face-lift surgery from January 1, 2016, through June 30, 2016. The Delphi method was used to select standardized patient images confirming appropriate patient candidacy and overall surgical effect. Observers estimated age and rated the attractiveness, perceived success, and perceived overall health for each patient image. Facial perception questions were answered on a visual analog scale from 0 to 100, with higher scores corresponding to more positive responses. To evaluate the accuracy of observer age estimation, the patients' preoperative estimated mean age was compared with the patients' actual mean age. A multivariate mixed-effects regression model was used to determine the effect of face-lift surgery. To further characterize the effect of face-lift surgery, estimated ordinal-rank change was calculated for each domain. The main outcome measures were blinded casual observer ratings of patients' estimated age, attractiveness, perceived success, and perceived overall health. A total of 483 observers (mean [SD] age, 29 [8.6] years; 382 women [79.4%]) successfully completed the survey. Comparing patients' preoperative estimated mean (SD) age (59.6 [9.0] years) and patients' actual mean (SD) age (58.4 [6.9] years) revealed no significant difference (t(2662) = -0.47; 95% CI, -6.07 to 3.72; P = .64). On multivariate regression, patients after face-lift surgery were rated as significantly younger (coefficient, -3.69; 95% CI, -4.15 to -3.23; P < .001), more attractive (coefficient, 8.21; 95% CI, 7.41-9.02; P < .001), more successful (coefficient, 5.82; 95% CI, 5.05 to 6.59; P < .001), and overall healthier (coefficient, 8.72; 95% CI, 7.88-9.56; P < .001). The ordinal rank changes for an average individual were -21 for perceived age, 21 for attractiveness, 16 for success, and 21 for overall health. In this study, observer perceptions of face-lift surgery were associated with views that patients appeared younger, more attractive, healthier, and more successful. These findings highlight observer perceptions of face-lift surgery that could positively influence social interactions.
Afterimage induced neural activity during emotional face perception.
Cheal, Jenna L; Heisz, Jennifer J; Walsh, Jennifer A; Shedden, Judith M; Rutherford, M D
2014-02-26
The N170 response differs when positive versus negative facial expressions are viewed. This neural response could be associated with the perception of the emotion itself or with some feature of the stimulus. We used an aftereffect paradigm to distinguish these possibilities. Consistent with previous reports of emotional aftereffects, a neutral face was more likely to be described as happy following adaptation to a sad face, and more likely to be described as sad following adaptation to a happy face. In addition, similar to previous observations with actual emotional faces, we found differences in the latency of the N170 elicited by the neutral face following sad versus happy face adaptation, demonstrating that the emotion-specific effect on the N170 emerges even when the perceived expressions differ but the stimuli are physically identical. The re-entry of emotional information from other brain regions may be driving the emotional aftereffects and the N170 latency differences. Copyright © 2014 Elsevier B.V. All rights reserved.
Kokinous, Jenny; Tavano, Alessandro; Kotz, Sonja A; Schröger, Erich
2017-02-01
The role of spatial frequencies (SF) is highly debated in emotion perception, but previous work suggests the importance of low SFs for detecting emotion in faces. Furthermore, emotion perception essentially relies on the rapid integration of multimodal information from faces and voices. We used electroencephalography (EEG) to test the functional relevance of SFs in the integration of emotional and non-emotional audiovisual stimuli. While viewing dynamic face-voice pairs, participants were asked to identify auditory interjections while the EEG was recorded. Audiovisual integration was measured as auditory facilitation, indexed by the extent of the auditory N1 amplitude suppression in the audiovisual compared to the auditory-only condition. We found an interaction of SF filtering and emotion in the auditory response suppression. For neutral faces, larger N1 suppression ensued in the unfiltered and high SF conditions as compared to the low SF condition. Angry face perception led to a larger N1 suppression in the low SF condition. While the results for the neutral faces indicate that perceptual quality in terms of SF content plays a major role in audiovisual integration, the results for angry faces suggest that early multisensory integration of emotional information favors low SF neural processing pathways, overruling the predictive value of the visual signal per se. Copyright © 2016 Elsevier B.V. All rights reserved.
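The integration index used here, auditory N1 suppression, boils down to comparing N1 amplitude across conditions. Below is a minimal numpy sketch, assuming already-averaged waveforms, a 1000 Hz sampling rate, and a conventional 80-120 ms N1 window; all of these are assumptions for illustration, not parameters reported in the paper.

import numpy as np

def n1_amplitude(erp, sfreq=1000.0, tmin=0.080, tmax=0.120):
    """Most negative value (microvolts) of an averaged ERP in an assumed N1 window."""
    start, stop = int(tmin * sfreq), int(tmax * sfreq)
    return float(erp[start:stop].min())

t = np.arange(0, 0.600, 0.001)                               # 600 ms epoch, sound onset at t = 0
n1_shape = -np.exp(-((t - 0.100) ** 2) / (2 * 0.015 ** 2))   # Gaussian-shaped N1 peaking near 100 ms

erp_auditory_only = 5.0 * n1_shape                           # hypothetical -5 microvolt N1
erp_audiovisual = 3.5 * n1_shape                             # attenuated N1 when the face is present

# Suppression index: how much less negative the N1 is in the audiovisual condition;
# larger positive values indicate stronger audiovisual facilitation.
suppression = n1_amplitude(erp_audiovisual) - n1_amplitude(erp_auditory_only)
print(f"N1 suppression: {suppression:.2f} microvolts")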
Humle, Tatyana
2016-07-01
The Japanese approach to science has permitted theoretical leaps in our understanding of culture in non-human animals and challenged human uniqueness, as it is not embedded in the Western traditional dualisms of human/animal and nature/culture. This paper highlights the value of an interdisciplinary approach and combining methodological approaches in exploring putative cultural variation among chimpanzees. I focus particularly on driver ants (Dorylus sp.) and oil palm (Elaeis guineensis) consumption among the Bossou and Nimba chimpanzees, in south-eastern Guinea at the border with Côte d'Ivoire and Liberia, and hand use across different tool use tasks commonly witnessed at Bossou, i.e. ant-dipping, nut-cracking, pestle-pounding, and algae-scooping. Observed variation in resource use was addressed across differing scales exploring both within- and between-community differences. Our findings have highlighted a tight interplay between ecology, social dynamics and culture, and between social and individual learning and maternal contribution to tool-use acquisition. Exploration of hand use by chimpanzees revealed no evidence for individual-level hand or community-level task specialisation. However, more complex types of tool use such as nut-cracking showed distinct lateralization, while the equivalent of a haptic manual action revealed a strong right hand bias. The data also suggest an overall population tendency for a right hand preference. As well as describing these sites' key contributions to our understanding of chimpanzees and to challenging our perceptions of human uniqueness, this paper also highlights the critical condition and high levels of threats facing this emblematic chimpanzee population, and several questions that remain to be addressed. In the spirit of the Japanese approach to science, I recommend that an interdisciplinary and collaborative research approach can best help us to challenge perceptions of human uniqueness and to further our understanding of chimpanzee behavioural and social flexibility in the face of local social, ecological and anthropogenic changes and threats to their survival.
Developmental origins of the face inversion effect.
Cashon, Cara H; Holt, Nicholas A
2015-01-01
A hallmark of adults' expertise for faces is that they are better at recognizing, discriminating, and processing upright faces compared to inverted faces. We investigate the developmental origins of "the face inversion effect" by reviewing research on infants' perception of upright and inverted faces during the first year of life. We review the effects of inversion on infants' face preference, recognition, processing (holistic and second-order configural), and scanning as well as face-related neural responses. Particular attention is paid to the developmental patterns that emerge within and across these areas of face perception. We conclude that the developmental origins of the inversion effect begin in the first few months of life and grow stronger over the first year, culminating in effects that are commonly thought to indicate adult-like expertise. We posit that by the end of the first year, infants' face-processing system has become specialized to upright faces and a foundation for adults' upright-face expertise has been established. Developmental mechanisms that may facilitate the emergence of this upright-face specialization are discussed, including the roles that physical and social development may play in upright faces' becoming more meaningful to infants during the first year. © 2015 Elsevier Inc. All rights reserved.
The complex duration perception of emotional faces: effects of face direction.
Kliegl, Katrin M; Limbrecht-Ecklundt, Kerstin; Dürr, Lea; Traue, Harald C; Huckauf, Anke
2015-01-01
The perceived duration of emotional face stimuli strongly depends on the expressed emotion. However, emotional faces also differ regarding a number of other features like gaze, face direction, or sex. Usually, these features have been controlled by only using pictures of female models with straight gaze and face direction. Doi and Shinohara (2009) reported that an overestimation of angry faces could only be found when the model's gaze was oriented toward the observer. We aimed at replicating this effect for face direction. Moreover, we explored the effect of face direction on the duration perception of sad faces. Controlling for the sex of the face model and the participant, female and male participants rated the duration of neutral, angry, and sad face stimuli of both sexes photographed from different perspectives in a bisection task. In line with current findings, we report a significant overestimation of angry compared to neutral face stimuli that was modulated by face direction. Moreover, the perceived duration of sad face stimuli did not differ from that of neutral faces and was not influenced by face direction. Furthermore, we found that faces of the opposite sex appeared to last longer than those of the same sex. This outcome is discussed with regard to stimulus parameters like the induced arousal, social relevance, and an evolutionary context.
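In a temporal bisection task, perceived duration is typically summarized by the bisection point of a psychometric function fitted to the proportion of "long" responses. The sketch below uses invented probe durations and response proportions to show the fitting step; a stimulus class that is overestimated would show a lower bisection point, i.e., it is judged "long" at shorter physical durations.

import numpy as np
from scipy.optimize import curve_fit

def logistic(duration_ms, bisection_point, slope):
    """Probability of responding 'long' as a function of probe duration."""
    return 1.0 / (1.0 + np.exp(-(duration_ms - bisection_point) / slope))

# Hypothetical probe durations (ms) and proportions of 'long' responses for one condition.
durations = np.array([400, 600, 800, 1000, 1200, 1400, 1600])
p_long = np.array([0.02, 0.10, 0.35, 0.55, 0.80, 0.93, 0.98])

# Fit the psychometric function; p0 provides rough starting values for the optimizer.
(bp, slope), _ = curve_fit(logistic, durations, p_long, p0=[1000.0, 150.0])
print(f"bisection point: {bp:.0f} ms (lower = stimulus perceived as lasting longer)")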
ERIC Educational Resources Information Center
Evans, Nicole Stayton
2013-01-01
The measurement of student perceptions of learning effectiveness is often used as a tool at universities to enhance the quality of course offerings. The recent growth in online course offerings creates new challenges in evaluating learning effectiveness. This study used three principles of adult learning theory, foundation, self-concept, and…
Teachers' Perceptions of Differentiated Learning for At-Risk Second-Grade Students in Reading
ERIC Educational Resources Information Center
Sabb-Cordes, Morelisa L.
2016-01-01
Students were performing below grade level in reading, fluency, and comprehension in a suburban school in South Carolina. The purpose of this study was to explore the perceptions of teachers about their preferred differentiated instruction approach (face-to-face vs. computer-based) to meet the needs of at-risk students in 2nd grade. The underlying…
ERIC Educational Resources Information Center
Bradley, Austrai
2017-01-01
This qualitative study explored parent perceptions of parental involvement in mathematics learning. The study was conducted in a rural area of South Carolina. Face-to-face interviews, online interviews, and review of documents were the sources of data for this inquiry. Findings revealed that parents admitted to a lack of involvement as long as…
Yan, Xiaoqian; Andrews, Timothy J; Young, Andrew W
2016-03-01
The ability to recognize facial expressions of basic emotions is often considered a universal human ability. However, recent studies have suggested that this commonality has been overestimated and that people from different cultures use different facial signals to represent expressions (Jack, Blais, Scheepers, Schyns, & Caldara, 2009; Jack, Caldara, & Schyns, 2012). We investigated this possibility by examining similarities and differences in the perception and categorization of facial expressions between Chinese and white British participants using whole-face and partial-face images. Our results showed no cultural difference in the patterns of perceptual similarity of expressions from whole-face images. When categorizing the same expressions, however, both British and Chinese participants were slightly more accurate with whole-face images of their own ethnic group. To further investigate potential strategy differences, we repeated the perceptual similarity and categorization tasks with presentation of only the upper or lower half of each face. Again, the perceptual similarity of facial expressions was similar between Chinese and British participants for both the upper and lower face regions. However, participants were slightly better at categorizing facial expressions of their own ethnic group for the lower face regions, indicating that the way in which culture shapes the categorization of facial expressions is largely driven by differences in information decoding from this part of the face. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Emotional memory and perception in temporal lobectomy patients with amygdala damage.
Brierley, B; Medford, N; Shaw, P; David, A S
2004-04-01
The human amygdala is implicated in the formation of emotional memories and the perception of emotional stimuli, particularly fear, across various modalities. To discern the extent to which these functions are related, 28 patients who had undergone anterior temporal lobectomy (13 left, 15 right) for intractable epilepsy were recruited. Structural magnetic resonance imaging showed that three of them had atrophy of their remaining amygdala. All participants were given tests of affect perception from facial and vocal expressions and of emotional memory, using a standard narrative test and a novel test of word recognition. The results were standardised against matched healthy controls. Performance on all emotion tasks in patients with unilateral lobectomy ranged from unimpaired to moderately impaired. Perception of emotions in faces and voices was (with exceptions) significantly positively correlated, indicating multimodal emotional processing. However, there was no correlation between the subjects' performance on tests of emotional memory and perception. Several subjects showed strong emotional memory enhancement but poor fear perception. Patients with bilateral amygdala damage had greater impairment, particularly on the narrative test of emotional memory, one showing superior fear recognition but absent memory enhancement. Bilateral amygdala damage is particularly disruptive of emotional memory processes in comparison with unilateral temporal lobectomy. On a cognitive level, the pattern of results implies that perception of emotional expressions and emotional memory are supported by separate processing systems or streams.
Ross, Elliott D; Gupta, Smita S; Adnan, Asif M; Holden, Thomas L; Havlicek, Joseph; Radhakrishnan, Sridhar
2016-03-01
Facial expressions are described traditionally as monolithic entities. However, humans have the capacity to produce facial blends, in which the upper and lower face simultaneously display different emotional expressions. This, in turn, has led to the Component Theory of facial expressions. Recent neuroanatomical studies in monkeys have demonstrated that there are separate cortical motor areas for controlling the upper and lower face that, presumably, also occur in humans. The lower face is represented on the posterior ventrolateral surface of the frontal lobes in the primary motor and premotor cortices and the upper face is represented on the medial surface of the posterior frontal lobes in the supplementary motor and anterior cingulate cortices. Our laboratory has been engaged in a series of studies exploring the perception and production of facial blends. Using high-speed videography, we began measuring the temporal aspects of facial expressions to develop a more complete understanding of the neurophysiology underlying facial expressions and facial blends. The goal of the research presented here was to determine if spontaneous facial expressions in adults are predominantly monolithic or exhibit independent motor control of the upper and lower face. We found that spontaneous facial expressions are very complex and that the motor control of the upper and lower face is overwhelmingly independent, thus robustly supporting the Component Theory of facial expressions. Seemingly monolithic expressions, be they full facial or facial blends, are most likely the result of a timing coincidence rather than a synchronous coordination between the ventrolateral and medial cortical motor areas responsible for controlling the lower and upper face, respectively. In addition, we found evidence that the right and left face may also exhibit independent motor control, thus supporting the concept that spontaneous facial expressions are organized predominantly across the horizontal facial axis and secondarily across the vertical axis. Published by Elsevier Ltd.
A face to remember: emotional expression modulates prefrontal activity during memory formation.
Sergerie, Karine; Lepage, Martin; Armony, Jorge L
2005-01-15
Emotion can exert a modulatory role on episodic memory. Several studies have shown that negative stimuli (e.g., words, pictures) are better remembered than neutral ones. Although facial expressions are powerful emotional stimuli and have been shown to influence perception and attention processes, little is known about their effect on memory. We used functional magnetic resonance imaging (fMRI) in humans to investigate the effects of expression (happy, neutral, and fearful) on prefrontal cortex (PFC) activity during the encoding of faces, using a subsequent memory effect paradigm. Our results show that activity in right PFC predicted memory for faces, regardless of expression, while a homotopic region in the left hemisphere was associated with successful encoding only for faces with an emotional expression. These findings are consistent with the proposed role of right dorsolateral PFC in successful encoding of nonverbal material, but also suggest that left DLPFC may be a site where integration of memory and emotional processes occurs. This study sheds new light on the current controversy regarding the hemispheric lateralization of PFC in memory encoding.
Visual attention to variation in female facial skin color distribution.
Fink, Bernhard; Matts, Paul J; Klingenberg, Heiner; Kuntze, Sebastian; Weege, Bettina; Grammer, Karl
2008-06-01
Visible skin condition of women is argued to influence human physical attraction. Recent research has shown that people are sensitive to variation in skin color distribution, and such variation affects visual perception of female facial attractiveness, healthiness, and age. The eye gaze of 39 males and females, aged 13 to 45 years, was tracked while they viewed images of shape- and topography-standardized stimulus faces that varied only in terms of skin color distribution. The number of fixations and dwell time were significantly higher when viewing stimulus faces with the homogeneous skin color distribution of young people, compared with those of more elderly people. In accordance with recent research, facial stimuli with even skin tones were also judged to be younger and received higher attractiveness ratings. Finally, visual attention measures were negatively correlated with perceived age, but positively associated with attractiveness judgments. Variation in visible skin color distribution (independent of facial form and skin surface topography) is able to selectively attract people's attention toward female faces, and this higher attention results in more positive statements about a woman's face.
Emotion perception, but not affect perception, is impaired with semantic memory loss.
Lindquist, Kristen A; Gendron, Maria; Barrett, Lisa Feldman; Dickerson, Bradford C
2014-04-01
For decades, psychologists and neuroscientists have hypothesized that the ability to perceive emotions on others' faces is inborn, prelinguistic, and universal. Concept knowledge about emotion has been assumed to be epiphenomenal to emotion perception. In this article, we report findings from 3 patients with semantic dementia that cannot be explained by this "basic emotion" view. These patients, who have substantial deficits in semantic processing abilities, spontaneously perceived pleasant and unpleasant expressions on faces, but not discrete emotions such as anger, disgust, fear, or sadness, even in a task that did not require the use of emotion words. Our findings support the hypothesis that discrete emotion concept knowledge helps transform perceptions of affect (positively or negatively valenced facial expressions) into perceptions of discrete emotions such as anger, disgust, fear, and sadness. These findings have important consequences for understanding the processes supporting emotion perception.
Oliver, Lindsay D; Mao, Alexander; Mitchell, Derek G V
2015-01-01
Though emotional faces preferentially reach awareness, the present study utilised both objective and subjective indices of awareness to determine whether they enhance subjective awareness and "blindsight". Under continuous flash suppression, participants localised a disgusted, fearful or neutral face (objective index), and rated their confidence (subjective index). Psychopathic traits were also measured to investigate their influence on emotion perception. As predicted, fear increased localisation accuracy, subjective awareness and "blindsight" of upright faces. Coldhearted traits were inversely related to subjective awareness, but not "blindsight", of upright fearful faces. In a follow-up experiment using inverted faces, increased localisation accuracy and awareness, but not "blindsight", were observed for fear. Surprisingly, awareness of inverted fearful faces was positively correlated with coldheartedness. These results suggest that emotion enhances both pre-conscious processing and the qualitative experience of awareness, but that pre-conscious and conscious processing of emotional faces rely on at least partially dissociable cognitive mechanisms.
Ofan, Renana H; Rubin, Nava; Amodio, David M
2011-10-01
We examined the relation between neural activity reflecting early face perception processes and automatic and controlled responses to race. Participants completed a sequential evaluative priming task, in which two-tone images of Black faces, White faces, and cars appeared as primes, followed by target words categorized as pleasant or unpleasant, while electroencephalography was recorded. Half of these participants were alerted that the task assessed racial prejudice and could reveal their personal bias ("alerted" condition). To assess face perception processes, the N170 component of the ERP was examined. For all participants, stronger automatic pro-White bias was associated with larger N170 amplitudes to Black than White faces. For participants in the alerted condition only, larger N170 amplitudes to Black versus White faces were also associated with less controlled processing on the word categorization task. These findings suggest that preexisting racial attitudes affect early face processing and that situational factors moderate the link between early face processing and behavior.
Sensitivity to spatial frequency content is not specific to face perception
Williams, N. Rankin; Willenbockel, Verena; Gauthier, Isabel
2010-01-01
Prior work using a matching task between images that were complementary in spatial frequency and orientation information suggested that the representation of faces, but not objects, retains low-level spatial frequency (SF) information (Biederman & Kalocsai, 1997). In two experiments, we reexamine the claim that faces are uniquely sensitive to changes in SF. In contrast to prior work, we used a design allowing the computation of sensitivity and response criterion for each category, and in one experiment, equalized low-level image properties across object categories. In both experiments, we find that observers are sensitive to SF changes for upright and inverted faces and nonface objects. Differential response biases across categories contributed to a larger sensitivity for faces, but even sensitivity showed a larger effect for faces, especially when faces were upright and in a front-facing view. However, when objects were inverted, or upright but shown in a three-quarter view, the matching of objects and faces was equally sensitive to SF changes. Accordingly, face perception does not appear to be uniquely affected by changes in SF content. PMID:19576237
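For readers unfamiliar with the signal-detection terms in the abstract above, "sensitivity" and "response criterion" are the standard d' and c statistics computed from hit and false-alarm rates in a same/different matching task. The sketch below is only a generic illustration of that computation; the function name, the log-linear correction, and the example trial counts are assumptions for illustration, not taken from the study.

```python
from scipy.stats import norm

def dprime_criterion(hits, misses, false_alarms, correct_rejections):
    """Compute signal-detection sensitivity (d') and response criterion (c).

    A log-linear correction is applied so that hit or false-alarm rates of
    0 or 1 do not produce infinite z-scores.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa               # sensitivity to the SF change
    criterion = -0.5 * (z_hit + z_fa)    # response bias (positive = conservative)
    return d_prime, criterion

# Hypothetical trial counts for one observer and one category (e.g., upright faces)
print(dprime_criterion(hits=42, misses=8, false_alarms=12, correct_rejections=38))
```

Computing d' and c separately for each category is what lets response bias be distinguished from genuine sensitivity differences between faces and objects.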
Asymmetric cultural effects on perceptual expertise underlie an own-race bias for voices
Perrachione, Tyler K.; Chiao, Joan Y.; Wong, Patrick C.M.
2009-01-01
The own-race bias in memory for faces has been a rich source of empirical work on the mechanisms of person perception. This effect is thought to arise because the face-perception system differentially encodes the relevant structural dimensions of features and their configuration based on experiences with different groups of faces. However, the effects of sociocultural experiences on person perception abilities in other identity-conveying modalities like audition have not been explored. Investigating an own-race bias in the auditory domain provides a unique opportunity for studying whether person identification is a modality-independent construct and how it is sensitive to asymmetric cultural experiences. Here we show that an own-race bias in talker identification arises from asymmetric experience with different spoken dialects. When listeners categorized voices by race (White or Black), a subset of the Black voices were categorized as sounding White, while the opposite case was unattested. Acoustic analyses indicated listeners' perceptions about race were consistent with differences in specific phonetic and phonological features. In a subsequent person-identification experiment, the Black voices initially categorized as sounding White elicited an own-race bias from White listeners, but not from Black listeners. These effects are inconsistent with person-perception models that strictly analogize faces and voices based on recognition from only structural features. Our results demonstrate that asymmetric exposure to spoken dialect, independent from talkers' physical characteristics, affects auditory perceptual expertise for talker identification. Person perception thus additionally relies on socioculturally-acquired dynamic information, which may be represented by different mechanisms in different sensory modalities. PMID:19782970
Looking Like a Leader–Facial Shape Predicts Perceived Height and Leadership Ability
Re, Daniel E.; Hunter, David W.; Coetzee, Vinet; Tiddeman, Bernard P.; Xiao, Dengke; DeBruine, Lisa M.; Jones, Benedict C.; Perrett, David I.
2013-01-01
Judgments of leadership ability from face images predict the outcomes of actual political elections and are correlated with leadership success in the corporate world. The specific facial cues that people use to judge leadership remain unclear, however. Physical height is also associated with political and organizational success, raising the possibility that facial cues of height contribute to leadership perceptions. Consequently, we assessed whether cues to height exist in the face and, if so, whether they are associated with perception of leadership ability. We found that facial cues to perceived height had a strong relationship with perceived leadership ability. Furthermore, when allowed to manually manipulate faces, participants increased facial cues associated with perceived height in order to maximize leadership perception. A morphometric analysis of face shape revealed that structural facial masculinity was not responsible for the relationship between perceived height and perceived leadership ability. Given the prominence of facial appearance in making social judgments, facial cues to perceived height may have a significant influence on leadership selection. PMID:24324651
Garman, Heather D; Spaulding, Christine J; Webb, Sara Jane; Mikami, Amori Yee; Morris, James P; Lerner, Matthew D
2016-12-01
This study examined social motivation and early-stage face perception as frameworks for understanding impairments in facial emotion recognition (FER) in a well-characterized sample of youth with autism spectrum disorders (ASD). Early-stage face perception (N170 event-related potential latency) was recorded while participants completed a standardized FER task, while social motivation was obtained via parent report. Participants with greater social motivation exhibited poorer FER, while those with shorter N170 latencies exhibited better FER for child angry faces stimuli. Social motivation partially mediated the relationship between a faster N170 and better FER. These effects were all robust to variations in IQ, age, and ASD severity. These findings augur against theories implicating social motivation as uniformly valuable for individuals with ASD, and augment models suggesting a close link between early-stage face perception, social motivation, and FER in this population. Broader implications for models and development of FER in ASD are discussed.
Aging disrupts the neural transformations that link facial identity across views.
Habak, Claudine; Wilkinson, Frances; Wilson, Hugh R
2008-01-01
Healthy human aging can have adverse effects on cortical function and on the brain's ability to integrate visual information to form complex representations. Facial identification is crucial to successful social discourse, and yet, it remains unclear whether the neuronal mechanisms underlying face perception per se, and the speed with which they process information, change with age. We present face images whose discrimination relies strictly on the shape and geometry of a face at various stimulus durations. Interestingly, we demonstrate that facial identity matching is maintained with age when faces are shown in the same view (e.g., front-front or side-side), regardless of exposure duration, but degrades when faces are shown in different views (e.g., front and turned 20 degrees to the side) and does not improve at longer durations. Our results indicate that perceptual processing speed for complex representations and the mechanisms underlying same-view facial identity discrimination are maintained with age. In contrast, information is degraded in the neural transformations that represent facial identity across views. We suggest that the accumulation of useful information over time to refine a representation within a population of neurons saturates earlier in the aging visual system than it does in the younger system and contributes to the age-related deterioration of face discrimination across views.
ERIC Educational Resources Information Center
Little, Anthony C.; DeBruine, Lisa M.; Jones, Benedict C.
2011-01-01
A face appears normal when it approximates the average of a population. Consequently, exposure to faces biases perceptions of subsequently viewed faces such that faces similar to those recently seen are perceived as more normal. Simultaneously inducing such aftereffects in opposite directions for two groups of faces indicates somewhat discrete…
Destephe, Matthieu; Brandao, Martim; Kishi, Tatsuhiro; Zecca, Massimiliano; Hashimoto, Kenji; Takanishi, Atsuo
2015-01-01
The Uncanny valley hypothesis, which tells us that almost-human characteristics in a robot or a device could cause uneasiness in human observers, is an important research theme in the Human Robot Interaction (HRI) field. Yet, that phenomenon is still not well-understood. Many have investigated the external design of humanoid robot faces and bodies but only a few studies have focused on the influence of robot movements on our perception and feelings of the Uncanny valley. Moreover, no research has investigated the possible relation between our uneasiness feeling and whether or not we would accept robots having a job in an office, a hospital or elsewhere. To better understand the Uncanny valley, we explore several factors which might have an influence on our perception of robots, be it related to the subjects, such as culture or attitude toward robots, or related to the robot such as emotions and emotional intensity displayed in its motion. We asked 69 subjects (N = 69) to rate the motions of a humanoid robot (Perceived Humanity, Eeriness, and Attractiveness) and state where they would rather see the robot performing a task. Our results suggest that, among the factors we chose to test, the attitude toward robots is the main influence on the perception of the robot related to the Uncanny valley. Robot occupation acceptability was affected only by Attractiveness, mitigating any Uncanny valley effect. We discuss the implications of these findings for the Uncanny valley and the acceptability of a robotic worker in our society. PMID:25762967
Detection of Nonverbal Synchronization through Phase Difference in Human Communication
Kwon, Jinhwan; Ogawa, Ken-ichiro; Ono, Eisuke; Miyake, Yoshihiro
2015-01-01
Nonverbal communication is an important factor in human communication, and body movement synchronization in particular is an important part of nonverbal communication. Some researchers have analyzed body movement synchronization by focusing on changes in the amplitude of body movements. However, the definition of “body movement synchronization” is still unclear. From a theoretical viewpoint, phase difference is the most important factor in synchronization analysis. Therefore, there is a need to measure the synchronization of body movements using phase difference. The purpose of this study was to provide a quantitative definition of the phase difference distribution for detecting body movement synchronization in human communication. The phase difference distribution was characterized using four statistical measurements: density, mean phase difference, standard deviation (SD) and kurtosis. To confirm the effectiveness of our definition, we applied it to human communication in which the roles of speaker and listener were defined. Specifically, we examined the difference in the phase difference distribution between two different communication situations: face-to-face communication with visual interaction and remote communication with unidirectional visual perception. Participant pairs performed a lecture task in the face-to-face communication condition and in the remote communication condition via television. Throughout the lecture task, we extracted a set of phase differences from the time-series data of the acceleration norm of head nodding motions between two participants. Statistical analyses of the phase difference distribution revealed the characteristics of head nodding synchronization. Although the mean phase differences in synchronized head nods did not differ significantly between the conditions, there were significant differences in the densities, the SDs and the kurtoses of the phase difference distributions of synchronized head nods. These results show the difference in nonverbal synchronization between different communication types. Our study indicates that the phase difference distribution is useful in detecting nonverbal synchronization in various human communication situations. PMID:26208100
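The abstract above defines synchronization in terms of the distribution of phase differences between two movement signals but does not spell out the computation. A minimal sketch of one way such a summary could be obtained is shown below; the Hilbert-transform phase extraction, the pi/4 "density" window, and the simulated head-nod signals are illustrative assumptions rather than the authors' exact method.

```python
import numpy as np
from scipy.signal import hilbert
from scipy.stats import kurtosis

def phase_difference_stats(accel_a, accel_b, sync_band=np.pi / 4):
    """Summarize the phase-difference distribution between two movement signals.

    accel_a, accel_b: 1-D arrays, e.g. acceleration norms of two people's head motion.
    Instantaneous phase is taken from the analytic (Hilbert) signal; the 'density'
    here is the fraction of samples within +/- sync_band of zero lag, which is an
    assumption and not necessarily the original paper's definition.
    """
    phase_a = np.angle(hilbert(accel_a - accel_a.mean()))
    phase_b = np.angle(hilbert(accel_b - accel_b.mean()))
    dphi = np.angle(np.exp(1j * (phase_a - phase_b)))   # wrapped to (-pi, pi]

    density = np.mean(np.abs(dphi) < sync_band)          # concentration near zero phase lag
    resultant = np.abs(np.mean(np.exp(1j * dphi)))
    mean_dphi = np.angle(np.mean(np.exp(1j * dphi)))     # circular mean phase difference
    sd_dphi = np.sqrt(-2 * np.log(resultant))            # circular standard deviation
    kurt = kurtosis(dphi)                                 # peakedness of the distribution
    return density, mean_dphi, sd_dphi, kurt

# Example with two noisy, slightly lagged oscillations standing in for head-nod data
t = np.linspace(0, 60, 6000)
a = np.sin(2 * np.pi * 0.5 * t) + 0.3 * np.random.randn(t.size)
b = np.sin(2 * np.pi * 0.5 * t - 0.4) + 0.3 * np.random.randn(t.size)
print(phase_difference_stats(a, b))
```

Comparing these four summary statistics between conditions mirrors the logic of the study: the mean lag can be similar while the spread and shape of the distribution differ.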
Tso, Ivy F; Calwas, Anita M; Chun, Jinsoo; Mueller, Savanna A; Taylor, Stephan F; Deldin, Patricia J
2015-08-01
Using gaze information to orient attention and guide behavior is critical to social adaptation. Previous studies have suggested that abnormal gaze perception in schizophrenia (SCZ) may originate in abnormal early attentional and perceptual processes and may be related to paranoid symptoms. Using event-related brain potentials (ERPs), this study investigated altered early attentional and perceptual processes during gaze perception and their relationship to paranoid delusions in SCZ. Twenty-eight individuals with SCZ or schizoaffective disorder and 32 demographically matched healthy controls (HCs) completed a gaze-discrimination task with face stimuli varying in gaze direction (direct, averted), head orientation (forward, deviated), and emotion (neutral, fearful). ERPs were recorded during the task. Participants rated experienced threat from each face after the task. Participants with SCZ were as accurate as, though slower than, HCs on the task. Participants with SCZ displayed enlarged N170 responses over the left hemisphere to averted gaze presented in fearful relative to neutral faces, indicating a heightened encoding sensitivity to faces signaling external threat. This abnormality was correlated with increased perceived threat and paranoid delusions. Participants with SCZ also showed a reduction of N170 modulation by head orientation (normally increased amplitude to deviated faces relative to forward faces), suggesting less integration of contextual cues of head orientation in gaze perception. The psychophysiological deviations observed during gaze discrimination in SCZ underscore the role of early attentional and perceptual abnormalities in social information processing and paranoid symptoms of SCZ.
Ingadottir, Brynja; Blondal, Katrin; Jaarsma, Tiny; Thylen, Ingela
2016-11-01
The aim of this study was to explore the perceptions of surgical patients about traditional and novel methods to learn about postoperative pain management. Patient education is an important part of postoperative care. Contemporary technology offers new ways for patients to learn about self-care, although face-to-face discussions and brochures are the most common methods of delivering education in nursing practice. A qualitative design with a vignette and semi-structured interviews was used for data collection. A purposeful sample of 13 postsurgical patients, who had been discharged from hospital, was recruited during 2013-2014. The patients were given a vignette about anticipated hospital discharge after surgery with four different options for communication (face-to-face, brochure, website, serious game) to learn about postoperative pain management. They were asked to rank their preferred method of learning and thereafter to reflect on their choices. Data were analysed using an inductive content analysis approach. Patients preferred face-to-face education with a nurse, followed by brochures and websites, while games were least preferred. Two categories, each with two sub-categories, emerged from the data. These conceptualized the factors affecting patients' perceptions: (1) 'Trusting the source', sub-categorized into 'Being familiar with the method' and 'Having own prejudgments'; and (2) 'Being motivated to learn', sub-categorized into 'Managing an impaired cognition' and 'Aspiring for increased knowledge'. To successfully implement novel educational methods in postoperative care, healthcare professionals need to be aware of the factors influencing patients' perceptions about how to learn, such as trust and motivation. © 2016 John Wiley & Sons Ltd.
Face processing in autism: Reduced integration of cross-feature dynamics.
Shah, Punit; Bird, Geoffrey; Cook, Richard
2016-02-01
Characteristic problems with social interaction have prompted considerable interest in the face processing of individuals with Autism Spectrum Disorder (ASD). Studies suggest that reduced integration of information from disparate facial regions likely contributes to difficulties recognizing static faces in this population. Recent work also indicates that observers with ASD have problems using patterns of facial motion to judge identity and gender, and may be less able to derive global motion percepts. These findings raise the possibility that feature integration deficits also impact the perception of moving faces. To test this hypothesis, we examined whether observers with ASD exhibit susceptibility to a new dynamic face illusion, thought to index integration of moving facial features. When typical observers view eye-opening and -closing in the presence of asynchronous mouth-opening and -closing, the concurrent mouth movements induce a strong illusory slowing of the eye transitions. However, we find that observers with ASD are not susceptible to this illusion, suggestive of weaker integration of cross-feature dynamics. Nevertheless, observers with ASD and typical controls were equally able to detect the physical differences between comparison eye transitions. Importantly, this confirms that observers with ASD were able to fixate the eye-region, indicating that the striking group difference has a perceptual, not attentional origin. The clarity of the present results contrasts starkly with the modest effect sizes and equivocal findings seen throughout the literature on static face perception in ASD. We speculate that differences in the perception of facial motion may be a more reliable feature of this condition. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Lu, Ming-Tsan Pierre; Cavazos Vela, Javier
2015-01-01
In this article, the authors first reviewed related literature on possible factors that influence learning between an online learning (OL) course format and a face-to-face (F2F) course format. The authors investigated OL and F2F learning perceptions and effectiveness of a graduate-level research methods course at a Hispanic-serving institution…
ERIC Educational Resources Information Center
Loth, Eva; Gomez, Juan Carlos; Happe, Francesca
2010-01-01
Behavioural, neuroimaging and neurophysiological approaches emphasise the active and constructive nature of visual perception, determined not solely by the environmental input, but modulated top-down by prior knowledge. For example, degraded images, which at first appear as meaningless "blobs", can easily be recognized as, say, a face, after…
Right wing authoritarianism is associated with race bias in face detection
Bret, Amélie; Beffara, Brice; McFadyen, Jessica; Mermillod, Martial
2017-01-01
Racial discrimination can be observed in a wide range of psychological processes, including even the earliest phases of face detection. It remains unclear, however, whether racially-biased low-level face processing is influenced by ideologies, such as right wing authoritarianism or social dominance orientation. In the current study, we hypothesized that socio-political ideologies such as these can substantially predict perceptual racial bias during early perception. To test this hypothesis, 67 participants detected faces within arrays of neutral objects. The faces were either Caucasian (in-group) or North African (out-group) and either had a neutral or angry expression. Results showed that participants with higher self-reported right-wing authoritarianism were more likely to show slower response times for detecting out-group vs. in-group faces. We interpreted our results according to the Dual Process Motivational Model and suggest that socio-political ideologies may foster early racial bias via attentional disengagement. PMID:28692705
Recalibration of vocal affect by a dynamic face.
Baart, Martijn; Vroomen, Jean
2018-04-25
Perception of vocal affect is influenced by the concurrent sight of an emotional face. We demonstrate that the sight of an emotional face also can induce recalibration of vocal affect. Participants were exposed to videos of a 'happy' or 'fearful' face in combination with a slightly incongruous sentence with ambiguous prosody. After this exposure, ambiguous test sentences were rated as more 'happy' when the exposure phase contained 'happy' instead of 'fearful' faces. This auditory shift likely reflects recalibration that is induced by error minimization of the inter-sensory discrepancy. In line with this view, when the prosody of the exposure sentence was non-ambiguous and congruent with the face (without audiovisual discrepancy), aftereffects went in the opposite direction, likely reflecting adaptation. Our results demonstrate, for the first time, that perception of vocal affect is flexible and can be recalibrated by slightly discrepant visual information.
Attention Alters Perceived Attractiveness.
Störmer, Viola S; Alvarez, George A
2016-04-01
Can attention alter the impression of a face? Previous studies showed that attention modulates the appearance of lower-level visual features. For instance, attention can make a simple stimulus appear to have higher contrast than it actually does. We tested whether attention can also alter the perception of a higher-order property, namely facial attractiveness. We asked participants to judge the relative attractiveness of two faces after summoning their attention to one of the faces using a briefly presented visual cue. Across trials, participants judged the attended face to be more attractive than the same face when it was unattended. This effect was not due to decision or response biases, but rather was due to changes in perceptual processing of the faces. These results show that attention alters perceived facial attractiveness, and broadly demonstrate that attention can influence higher-level perception and may affect people's initial impressions of one another. © The Author(s) 2016.
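At its simplest, the comparison in such a cueing design reduces to asking whether the attended face is chosen as the more attractive one above chance. The snippet below is only a schematic of that logic; the trial counts are hypothetical, and the published analysis additionally controlled for decision and response biases, so this is not the authors' actual analysis.

```python
from scipy.stats import binomtest

# Hypothetical counts for one participant: on trials where the two faces were
# physically matched, how often was the briefly cued face judged more attractive?
cued_chosen, n_trials = 68, 120

result = binomtest(cued_chosen, n_trials, p=0.5, alternative="greater")
print(f"P(choose cued face) = {cued_chosen / n_trials:.2f}, p = {result.pvalue:.3f}")
```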
Attractiveness as a Function of Skin Tone and Facial Features: Evidence from Categorization Studies.
Stepanova, Elena V; Strube, Michael J
2018-01-01
Participants rated the attractiveness and racial typicality of male faces varying in their facial features from Afrocentric to Eurocentric and in skin tone from dark to light in two experiments. Experiment 1 provided evidence that facial features and skin tone have an interactive effect on perceptions of attractiveness and mixed-race faces are perceived as more attractive than single-race faces. Experiment 2 further confirmed that faces with medium levels of skin tone and facial features are perceived as more attractive than faces with extreme levels of these factors. Black phenotypes (combinations of dark skin tone and Afrocentric facial features) were rated as more attractive than White phenotypes (combinations of light skin tone and Eurocentric facial features); ambiguous faces (combinations of Afrocentric and Eurocentric physiognomy) with medium levels of skin tone were rated as the most attractive in Experiment 2. Perceptions of attractiveness were relatively independent of racial categorization in both experiments.
Gilad-Gutnick, Sharon; Harmatz, Elia Samuel; Tsourides, Kleovoulos; Yovel, Galit; Sinha, Pawan
2018-07-01
We report here an unexpectedly robust ability of healthy human individuals (n = 40) to recognize extremely distorted needle-like facial images, challenging the well-entrenched notion that veridical spatial configuration is necessary for extracting facial identity. In face identification tasks of parametrically compressed internal and external features, we found that the sum of performances on each cue falls significantly short of performance on full faces, despite the equal visual information available from both measures (with full faces essentially being a superposition of internal and external features). We hypothesize that this large deficit stems from the use of positional information about how the internal features are positioned relative to the external features. To test this, we systematically changed the relations between internal and external features and found preferential encoding of vertical but not horizontal spatial relationships in facial representations (n = 20). Finally, we employ magnetoencephalography imaging (n = 20) to demonstrate a close mapping between the behavioral psychometric curve and the amplitude of the M250 face familiarity, but not M170 face-sensitive evoked response field component, providing evidence that the M250 can be modulated by faces that are perceptually identifiable, irrespective of extreme distortions to the face's veridical configuration. We theorize that the tolerance to compressive distortions has evolved from the need to recognize faces across varying viewpoints. Our findings help clarify the important, but poorly defined, concept of facial configuration and also enable an association between behavioral performance and previously reported neural correlates of face perception.
Visual imagery of famous faces: effects of memory and attention revealed by fMRI.
Ishai, Alumit; Haxby, James V; Ungerleider, Leslie G
2002-12-01
Complex pictorial information can be represented and retrieved from memory as mental visual images. Functional brain imaging studies have shown that visual perception and visual imagery share common neural substrates. The type of memory (short- or long-term) that mediates the generation of mental images, however, has not been addressed previously. The purpose of this study was to investigate the neural correlates underlying imagery generated from short- and long-term memory (STM and LTM). We used famous faces to localize the visual response during perception and to compare the responses during visual imagery generated from STM (subjects memorized specific pictures of celebrities before the imagery task) and imagery from LTM (subjects imagined famous faces without seeing specific pictures during the experimental session). We found that visual perception of famous faces activated the inferior occipital gyri, lateral fusiform gyri, the superior temporal sulcus, and the amygdala. Small subsets of these face-selective regions were activated during imagery. Additionally, visual imagery of famous faces activated a network of regions composed of bilateral calcarine, hippocampus, precuneus, intraparietal sulcus (IPS), and the inferior frontal gyrus (IFG). In all these regions, imagery generated from STM evoked more activation than imagery from LTM. Regardless of memory type, focusing attention on features of the imagined faces (e.g., eyes, lips, or nose) resulted in increased activation in the right IPS and right IFG. Our results suggest differential effects of memory and attention during the generation and maintenance of mental images of faces.
The automaticity of face perception is influenced by familiarity.
Yan, Xiaoqian; Young, Andrew W; Andrews, Timothy J
2017-10-01
In this study, we explore the automaticity of encoding for different facial characteristics and ask whether it is influenced by face familiarity. We used a matching task in which participants had to report whether the gender, identity, race, or expression of two briefly presented faces was the same or different. The task was made challenging by allowing nonrelevant dimensions to vary across trials. To test for automaticity, we compared performance on trials in which the task instruction was given at the beginning of the trial, with trials in which the task instruction was given at the end of the trial. As a strong criterion for automatic processing, we reasoned that if perception of a given characteristic (gender, race, identity, or emotion) is fully automatic, the timing of the instruction should not influence performance. We compared automaticity for the perception of familiar and unfamiliar faces. Performance with unfamiliar faces was higher for all tasks when the instruction was given at the beginning of the trial. However, we found a significant interaction between instruction and task with familiar faces. Accuracy of gender and identity judgments to familiar faces was the same regardless of whether the instruction was given before or after the trial, suggesting automatic processing of these properties. In contrast, there was an effect of instruction for judgments of expression and race to familiar faces. These results show that familiarity enhances the automatic processing of some types of facial information more than others.
Buchan, Julie N; Paré, Martin; Munhall, Kevin G
2008-11-25
During face-to-face conversation the face provides auditory and visual linguistic information, and also conveys information about the identity of the speaker. This study investigated behavioral strategies involved in gathering visual information while watching talking faces. The effects of varying talker identity and varying the intelligibility of speech (by adding acoustic noise) on gaze behavior were measured with an eyetracker. Varying the intelligibility of the speech by adding noise had a noticeable effect on the location and duration of fixations. When noise was present subjects adopted a vantage point that was more centralized on the face by reducing the frequency of the fixations on the eyes and mouth and lengthening the duration of their gaze fixations on the nose and mouth. Varying talker identity resulted in a more modest change in gaze behavior that was modulated by the intelligibility of the speech. Although subjects generally used similar strategies to extract visual information in both talker variability conditions, when noise was absent there were more fixations on the mouth when viewing a different talker every trial as opposed to the same talker every trial. These findings provide a useful baseline for studies examining gaze behavior during audiovisual speech perception and perception of dynamic faces.
On the Perception of Religious Group Membership from Faces
Rule, Nicholas O.; Garrett, James V.; Ambady, Nalini
2010-01-01
Background: The study of social categorization has largely been confined to examining groups distinguished by perceptually obvious cues. Yet many ecologically important group distinctions are less clear, permitting insights into the general processes involved in person perception. Although religious group membership is thought to be perceptually ambiguous, folk beliefs suggest that Mormons and non-Mormons can be categorized from their appearance. We tested whether Mormons could be distinguished from non-Mormons and investigated the basis for this effect to gain insight into how subtle perceptual cues can support complex social categorizations. Methodology/Principal Findings: Participants categorized Mormons' and non-Mormons' faces or facial features according to their group membership. Individuals could distinguish between the two groups significantly better than chance guessing from their full faces and faces without hair, with eyes and mouth covered, without outer face shape, and inverted 180°; but not from isolated features (i.e., eyes, nose, or mouth). Perceivers' estimations of their accuracy did not match their actual accuracy. Exploration of the remaining features showed that Mormons and non-Mormons significantly differed in perceived health and that these perceptions were related to perceptions of skin quality, as demonstrated in a structural equation model representing the contributions of skin color and skin texture. Other judgments related to health (facial attractiveness, facial symmetry, and structural aspects related to body weight) did not differ between the two groups. Perceptions of health were also responsible for differences in perceived spirituality, explaining folk hypotheses that Mormons are distinct because they appear more spiritual than non-Mormons. Conclusions/Significance: Subtle markers of group membership can influence how others are perceived and categorized. Perceptions of health from non-obvious and minimal cues distinguished individuals according to their religious group membership. These data illustrate how the non-conscious detection of very subtle differences in others' appearances supports cognitively complex judgments such as social categorization. PMID:21151864
Oshagh, Morteza; Moghadam, Tahere Baheri; Dashlibrun, Yunes Nazari
2013-01-01
To our knowledge, there is no study regarding the effects of facial type (short face or long face) on the esthetic perception of smiles as related to the amount of tooth and gingival display. Four photographs of two long-faced and two short-faced females with posed smiles were prepared and, by altering the amount of tooth display, five photos of each were produced. These photos were given to 62 dentists and 69 laypersons, who rated the images. There were significant differences between short- and long-face patterns at low and high smile lines; there were also significant differences between dentists and laypersons for some images. Smile lines consistent with the gingival margin were rated best for both short- and long-face patterns; 1.5 mm of incisor coverage in short-face patterns and 1.5 mm of gingival display in long-face patterns also received high scores. In short-face patterns lower smile lines, and in long-face patterns higher smile lines, are more acceptable to both dentists and laypersons, which can help in setting orthodontic treatment goals.
Recio, Guillermo; Wilhelm, Oliver; Sommer, Werner; Hildebrandt, Andrea
2017-04-01
Despite a wealth of knowledge about the neural mechanisms behind emotional facial expression processing, little is known about how they relate to individual differences in social cognition abilities. We studied individual differences in the event-related potentials (ERPs) elicited by dynamic facial expressions. First, we assessed the latent structure of the ERPs, reflecting structural face processing in the N170, and the allocation of processing resources and reflexive attention to emotionally salient stimuli, in the early posterior negativity (EPN) and the late positive complex (LPC). Then we estimated brain-behavior relationships between the ERP factors and behavioral indicators of facial identity and emotion-processing abilities. Structural models revealed that the participants who formed faster structural representations of neutral faces (i.e., shorter N170 latencies) performed better at face perception (r = -.51) and memory (r = -.42). The N170 amplitude was not related to individual differences in face cognition or emotion processing. The latent EPN factor correlated with emotion perception (r = .47) and memory (r = .32), and also with face perception abilities (r = .41). Interestingly, the latent factor representing the difference in EPN amplitudes between the two neutral control conditions (chewing and blinking movements) also correlated with emotion perception (r = .51), highlighting the importance of tracking facial changes in the perception of emotional facial expressions. The LPC factor for negative expressions correlated with the memory for emotional facial expressions. The links revealed between the latency and strength of activations of brain systems and individual differences in processing socio-emotional information provide new insights into the brain mechanisms involved in social communication.
Students' Perceptions of Study Modes
ERIC Educational Resources Information Center
Hagel, Pauline; Shaw, Robin N.
2006-01-01
This paper reports on a survey of how Australian undergraduate students perceive the benefits of broad study modes: face-to-face classes, web-based study, and print-based study. Two benefit types were identified through factor analysis: engagement and functionality. Respondents rated face-to-face classes highest on engagement and print-based study…
Enhanced Visual Short-Term Memory for Angry Faces
ERIC Educational Resources Information Center
Jackson, Margaret C.; Wu, Chia-Yun; Linden, David E. J.; Raymond, Jane E.
2009-01-01
Although some views of face perception posit independent processing of face identity and expression, recent studies suggest interactive processing of these 2 domains. The authors examined expression-identity interactions in visual short-term memory (VSTM) by assessing recognition performance in a VSTM task in which face identity was relevant and…
ERIC Educational Resources Information Center
Bahrick, Lorraine E.; Krogh-Jespersen, Sheila; Argumosa, Melissa A.; Lopez, Hassel
2014-01-01
Although infants and children show impressive face-processing skills, little research has focused on the conditions that facilitate versus impair face perception. According to the intersensory redundancy hypothesis (IRH), face discrimination, which relies on detection of visual featural information, should be impaired in the context of…
Kok, Rebecca; Van der Burg, Erik; Rhodes, Gillian; Alais, David
2017-01-01
Studies suggest that familiar faces are processed in a manner distinct from unfamiliar faces and that familiarity with a face confers an advantage in identity recognition. Our visual system seems to capitalize on experience to build stable face representations that are impervious to variation in retinal input that may occur due to changes in lighting, viewpoint, viewing distance, eye movements, etc. Emerging evidence also suggests that our visual system maintains a continuous perception of a face's identity from one moment to the next despite the retinal input variations through serial dependence. This study investigates whether interactions occur between face familiarity and serial dependence. In two experiments, participants used a continuous scale to rate attractiveness of unfamiliar and familiar faces (either experimentally learned or famous) presented in rapid sequences. Both experiments revealed robust inter-trial effects in which attractiveness ratings for a given face depended on the preceding face's attractiveness. This inter-trial attractiveness effect was most pronounced for unfamiliar faces. Indeed, when participants were familiar with a given face, attractiveness ratings showed significantly less serial dependence. These results represent the first evidence that familiar faces can resist the temporal integration seen in sequential dependencies and highlight the importance of familiarity to visual cognition. PMID:28405355
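The inter-trial ("serial dependence") effect described in the abstract above amounts to regressing the rating given on the current trial onto the attractiveness of the immediately preceding face, and asking whether that dependence is weaker for familiar faces. The sketch below illustrates the analysis on simulated ratings; the slopes, rating scale, and data are hypothetical and not drawn from the study, whose exact model may differ.

```python
import numpy as np

def serial_dependence_slope(ratings, prev_attractiveness):
    """Slope of current-trial ratings regressed on the previous face's attractiveness.

    A positive slope indicates assimilative serial dependence: the current face is
    rated higher after a more attractive preceding face. This is a generic
    illustration of an inter-trial analysis, not the study's exact model.
    """
    x = np.asarray(prev_attractiveness, dtype=float)
    y = np.asarray(ratings, dtype=float)
    slope, _intercept = np.polyfit(x - x.mean(), y, 1)
    return slope

# Hypothetical sequences: each current rating is paired with the preceding face's rating
rng = np.random.default_rng(1)
prev_unfamiliar = rng.uniform(1, 7, 200)
curr_unfamiliar = 4 + 0.25 * (prev_unfamiliar - 4) + rng.normal(0, 1, 200)
prev_familiar = rng.uniform(1, 7, 200)
curr_familiar = 4 + 0.05 * (prev_familiar - 4) + rng.normal(0, 1, 200)

print("unfamiliar slope:", round(serial_dependence_slope(curr_unfamiliar, prev_unfamiliar), 2))
print("familiar slope:  ", round(serial_dependence_slope(curr_familiar, prev_familiar), 2))
```

A smaller slope for familiar faces, as in the simulated example, would correspond to the reported resistance of familiar faces to sequential dependence.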
Dalrymple, Kirsten A; Elison, Jed T; Duchaine, Brad
2017-02-01
Evidence suggests that face and object recognition depend on distinct neural circuitry within the visual system. Work with adults with developmental prosopagnosia (DP) demonstrates that some individuals have preserved object recognition despite severe face recognition deficits. This face selectivity in adults with DP indicates that face- and object-processing systems can develop independently, but it is unclear at what point in development these mechanisms are separable. Determining when individuals with DP first show dissociations between faces and objects is one means to address this question. In the current study, we investigated face and object processing in six children with DP (5-12 years old). Each child was assessed with one face perception test, two different face memory tests, and two object memory tests that were matched to the face memory tests in format and difficulty. Scores from the DP children on the matched face and object tasks were compared to within-subject data from age-matched controls. Four of the six DP children, including the 5-year-old, showed evidence of face-specific deficits, while one child appeared to have more general visual-processing deficits. The remaining child had inconsistent results. The presence of face-specific deficits in children with DP suggests that face and object perception depend on dissociable processes in childhood.
Use of context in emotion perception: The role of top-down control, cue type, and perceiver's age.
Ngo, Nhi; Isaacowitz, Derek M
2015-06-01
Although context is crucial to emotion perception, there are various factors that can modulate contextual influence. The current research investigated how cue type, top-down control, and the perceiver's age influence attention to context in facial emotion perception. In 2 experiments, younger and older adults identified facial expressions contextualized by other faces, isolated objects, and scenes. In the first experiment, participants were instructed to ignore face, object, and scene contexts. Face context was found to influence perception the least, whereas scene context produced the most contextual effect. Older adults were more influenced by context than younger adults, but both age groups were similarly influenced by different types of contextual cues, even when they were instructed to ignore the context. In the second experiment, when explicitly instructed that the context had no meaningful relationship to the target, younger and older adults both were less influenced by context than when they were instructed that the context was relevant to the target. Results from both studies indicate that contextual influence on emotion perception is not constant, but can vary based on the type of contextual cue, cue relevance, and the perceiver's age.
Neural correlates of the perception of dynamic versus static facial expressions of emotion.
Kessler, Henrik; Doyen-Waldecker, Cornelia; Hofer, Christian; Hoffmann, Holger; Traue, Harald C; Abler, Birgit
2011-04-20
This study investigated brain areas involved in the perception of dynamic facial expressions of emotion. A group of 30 healthy subjects was measured with fMRI when passively viewing prototypical facial expressions of fear, disgust, sadness and happiness. Using morphing techniques, all faces were displayed as still images and also dynamically as a film clip with the expressions evolving from neutral to emotional. Irrespective of a specific emotion, dynamic stimuli selectively activated bilateral superior temporal sulcus, visual area V5, fusiform gyrus, thalamus and other frontal and parietal areas. Interaction effects of emotion and mode of presentation (static/dynamic) were only found for the expression of happiness, where static faces evoked greater activity in the medial prefrontal cortex. Our results confirm previous findings on neural correlates of the perception of dynamic facial expressions and are in line with studies showing the importance of the superior temporal sulcus and V5 in the perception of biological motion. Differential activation in the fusiform gyrus for dynamic stimuli stands in contrast to classical models of face perception but is coherent with new findings arguing for a more general role of the fusiform gyrus in the processing of socially relevant stimuli.
Neural and behavioral responses to attractiveness in adult and infant faces.
Hahn, Amanda C; Perrett, David I
2014-10-01
Facial attractiveness provides a very powerful motivation for sexual and parental behavior. We therefore review the importance of faces to the study of neurobiological control of human reproductive motivations. For heterosexual individuals there is a common brain circuit involving the nucleus accumbens, the medial prefrontal, dorsal anterior cingulate and the orbitofrontal cortices that is activated more by attractive than unattractive faces, particularly for faces of the opposite sex. Behavioral studies indicate parallel effects of attractiveness on incentive salience or willingness to work to see faces. There is some evidence that the reward value of opposite sex attractiveness is more pronounced in men than women, perhaps reflecting the greater importance assigned to physical attractiveness by men when evaluating a potential mate. Sex differences and similarities in response to facial attractiveness are reviewed. Studies comparing heterosexual and homosexual observers indicate the orbitofrontal cortex and mediodorsal thalamus are more activated by faces of the desired sex than faces of the less-preferred sex, independent of observer gender or sexual orientation. Infant faces activate brain regions that partially overlap with those responsive to adult faces. Infant faces provide a powerful stimulus, which also elicits sex differences in behavior and brain responses that appear dependent on sex hormones. There are many facial dimensions affecting perceptions of attractiveness that remain unexplored in neuroimaging, and we conclude by suggesting that future studies combining parametric manipulation of face images, brain imaging, hormone assays and genetic polymorphisms in receptor sensitivity are needed to understand the neural and hormonal mechanisms underlying reproductive drives. Copyright © 2014 Elsevier Ltd. All rights reserved.
Are you approaching me? Motor execution influences perceived action orientation.
Manera, Valeria; Cavallo, Andrea; Chiavarino, Claudia; Schouten, Ben; Verfaillie, Karl; Becchio, Cristina
2012-01-01
Human observers are especially sensitive to the actions of conspecifics that match their own actions. This has been proposed to be critical for social interaction, providing the basis for empathy and joint action. However, the precise relation between observed and executed actions is still poorly understood. Do ongoing actions change the way observers perceive others' actions? To pursue this question, we exploited the bistability of depth-ambiguous point-light walkers, which can be perceived as facing towards the viewer or as facing away from the viewer. We demonstrate that point-light walkers are perceived more often as facing the viewer when the observer is walking on a treadmill compared to when the observer is performing an action that does not match the observed behavior (e.g., cycling). These findings suggest that motor processes influence the perceived orientation of observed actions: Acting observers tend to perceive similar actions by conspecifics as oriented towards themselves. We discuss these results in light of the possible mechanisms subtending action-induced modulation of perception.
Medical Students' Perceptions of Child Psychiatry: Pre- and Post-Psychiatry Clerkship
ERIC Educational Resources Information Center
Martin, Vicki L.; Bennett, David S.; Pitale, Maria
2005-01-01
Objective: The U.S. is facing a severe shortage of child and adolescent psychiatrists (CAPs). While medical students have been relatively uninterested in psychiatry, little research has examined their perceptions of CAP. The present study examined student perceptions of CAP and general psychiatry, and whether these perceptions changed during the…
Thalamocortical interactions underlying visual fear conditioning in humans.
Lithari, Chrysa; Moratti, Stephan; Weisz, Nathan
2015-11-01
Despite a strong focus on the role of the amygdala in fear conditioning, recent work points to a more distributed network supporting fear conditioning. We aimed to elucidate interactions between subcortical and cortical regions in fear conditioning in humans. To do this, we used two fearful faces as conditioned stimuli (CS) and an electrical stimulation at the left hand, paired with one of the CS, as unconditioned stimulus (US). The luminance of the CS was rhythmically modulated, leading to "entrainment" of brain oscillations at a predefined modulation frequency. Steady-state responses (SSR) were recorded by MEG. In addition to occipital regions, spectral analysis of SSR revealed increased power during fear conditioning, particularly for thalamus and cerebellum contralateral to the upcoming US. Using thalamus and amygdala as seed regions, directed functional connectivity was calculated to capture the modulation of interactions that underlie fear conditioning. Importantly, this analysis showed that the thalamus drives the fusiform area during fear conditioning, while the amygdala captures the more general effect of fearful face perception. This study confirms ideas from the animal literature, and demonstrates for the first time the central role of the thalamus in fear conditioning in humans. © 2015 Wiley Periodicals, Inc.
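The frequency-tagging logic summarized in this abstract (a stimulus modulated at a known rate elicits a spectral peak at that rate, whose power can then be compared across conditions) can be illustrated with a minimal sketch. The code below is not the authors' MEG pipeline; the sampling rate, the 15 Hz tagging frequency, and the single simulated channel are assumptions chosen only to show how SSR power at the tagging frequency would be read out from a spectrum.

# Minimal frequency-tagging sketch (not the authors' MEG analysis).
# Assumptions: one simulated noisy channel, fs = 600 Hz, tagging frequency = 15 Hz.
import numpy as np

fs = 600.0                                # sampling rate in Hz (assumed)
tag_freq = 15.0                           # luminance-modulation frequency (assumed)
t = np.arange(0, 4.0, 1.0 / fs)           # one 4-second epoch

rng = np.random.default_rng(0)
# Steady-state response at the tagging frequency embedded in Gaussian noise
signal = 0.5 * np.sin(2 * np.pi * tag_freq * t) + rng.normal(0, 1, t.size)

# Power spectrum via FFT; SSR power is read out at the tagging frequency
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2 / t.size
ssr_power = power[np.argmin(np.abs(freqs - tag_freq))]
print(f"Power at {tag_freq:.1f} Hz: {ssr_power:.1f}")

In the study itself, this per-condition power readout (and seed-based directed connectivity) was computed on MEG source data rather than a simulated trace.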
Age and executive ability impact the neural correlates of race perception
Lee, Eunice J.; Krendl, Anne C.
2016-01-01
Decreased executive ability elicits racial bias. We clarified the neural correlates of how executive ability contributes to race perception by comparing young adults (YA) to a population with highly variable executive ability: older adults (OA). After replicating work showing higher race bias in OA vs YA and a negative association between bias and executive ability, a subsample of White YA and OA perceived Black and White faces and cars during functional magnetic resonance imaging. YA had higher executive ability than OA, and OA had higher variability in executive ability. When perceiving Black vs White faces, YA exhibited more dorsolateral prefrontal cortex recruitment—a region previously implicated in regulating prejudiced responses—than OA. Moreover, OA with relatively impaired executive ability had more amygdala activity toward Black faces vs OA with relatively intact executive ability, whereas responses to White faces did not differ. Both YA and OA with relatively intact executive ability had stronger amygdala-ventrolateral prefrontal cortex connectivity when perceiving Black vs White faces. These findings are the first to disentangle age from executive ability differences in neural recruitment when perceiving race, potentially informing past behavioral work on aging and race perception. PMID:27330185
ERIC Educational Resources Information Center
Mondloch, Catherine J.
2012-01-01
The current research investigated the influence of body posture on adults' and children's perception of facial displays of emotion. In each of two experiments, participants categorized facial expressions that were presented on a body posture that was congruent (e.g., a sad face on a body posing sadness) or incongruent (e.g., a sad face on a body…
Rachel E. Schattman; V. Ernesto Méndez; Scott C. Merrill; Asim Zia
2018-01-01
The relationship among farmers' belief in climate change, perceptions of climate-related risk, and use of climate adaptation practices is a growing topic of interest in U.S. scholarship. The northeast region is not well represented in the literature, although it is highly agricultural and will likely face climate-related risks that differ from those faced in other...
Learners' Perceptions of Blended Learning and the Roles and Interaction of f2f and Online Learning
ERIC Educational Resources Information Center
Huang, Qiang
2016-01-01
The present study aims to probe into learners' perceptions of blended learning in relation to the respective roles of face-to-face learning (f2f learning) and online learning as well as their interaction in the blended EFL contexts. Questionnaires were used in the study to examine the attitudes of 296 university students towards a blended English…
Intact perception but abnormal orientation towards face-like objects in young children with ASD
Guillon, Quentin; Rogé, Bernadette; Afzali, Mohammad H.; Baduel, Sophie; Kruck, Jeanne; Hadjikhani, Nouchine
2016-01-01
There is ample behavioral evidence of diminished orientation towards faces as well as the presence of face perception impairments in autism spectrum disorder (ASD), but the underlying mechanisms of these deficits are still unclear. We used face-like object stimuli that have been shown to evoke pareidolia in typically developing (TD) individuals to test the effect of a global face-like configuration on orientation and perceptual processes in young children with ASD and age-matched TD controls. We show that TD children were more likely to look first towards upright face-like objects than children with ASD, showing that a global face-like configuration elicits a stronger orientation bias in TD children as compared to children with ASD. However, once they were looking at the stimuli, both groups spent more time exploring the upright face-like object, suggesting that they both perceived it as a face. Our results are in agreement with abnormal social orienting in ASD, possibly due to an abnormal tuning of the subcortical pathway, leading to poor orienting and attention towards faces. Our results also indicate that young children with ASD can holistically perceive a generic face, such as a face-like object, further demonstrating holistic processing of faces in ASD. PMID:26912096
Matsuda, Yoshi-Taka; Fujimura, Tomomi; Katahira, Kentaro; Okada, Masato; Ueno, Kenichi; Cheng, Kang; Okanoya, Kazuo
2013-01-01
Our understanding of facial emotion perception has been dominated by two seemingly opposing theories: the categorical and dimensional theories. However, we have recently demonstrated that hybrid processing involving both categorical and dimensional perception can be induced in an implicit manner (Fujimura et al., 2012). The underlying neural mechanisms of this hybrid processing remain unknown. In this study, we tested the hypothesis that separate neural loci might intrinsically encode categorical and dimensional processing functions that serve as a basis for hybrid processing. We used functional magnetic resonance imaging to measure neural correlates while subjects passively viewed emotional faces and performed tasks that were unrelated to facial emotion processing. Activity in the right fusiform face area (FFA) increased in response to psychologically obvious emotions and decreased in response to ambiguous expressions, demonstrating the role of the FFA in categorical processing. The amygdala, insula and medial prefrontal cortex exhibited evidence of dimensional (linear) processing that correlated with physical changes in the emotional face stimuli. The occipital face area and superior temporal sulcus did not respond to these changes in the presented stimuli. Our results indicated that distinct neural loci process the physical and psychological aspects of facial emotion perception in a region-specific and implicit manner. PMID:24133426
Furl, N; van Rijsbergen, N J; Treves, A; Dolan, R J
2007-08-01
Previous studies have shown reductions of the functional magnetic resonance imaging (fMRI) signal in response to repetition of specific visual stimuli. We examined how adaptation affects the neural responses associated with categorization behavior, using face adaptation aftereffects. Adaptation to a given facial category biases categorization towards non-adapted facial categories in response to presentation of ambiguous morphs. We explored a hypothesis, posed by recent psychophysical studies, that these adaptation-induced categorizations are mediated by activity in relatively advanced stages within the occipitotemporal visual processing stream. Replicating these studies, we find that adaptation to a facial expression heightens perception of non-adapted expressions. Using comparable behavioral methods, we also show that adaptation to a specific identity heightens perception of a second identity in morph faces. We show both expression and identity effects to be associated with heightened anterior medial temporal lobe activity, specifically when perceiving the non-adapted category. These regions, incorporating the bilateral anterior ventral rhinal cortices, perirhinal cortex and left anterior hippocampus, have previously been implicated in high-level visual perception. These categorization effects were not evident in fusiform or occipital gyri, although activity in these regions was reduced in response to repeated faces. The findings suggest that adaptation-induced perception is mediated by activity in regions downstream to those showing reductions due to stimulus repetition.
Lepage, M; Sergerie, K; Benoit, A; Czechowska, Y; Dickie, E; Armony, J L
2011-09-01
There is a general consensus in the literature that schizophrenia causes difficulties with facial emotion perception and discrimination. Functional brain imaging studies have observed reduced limbic activity during facial emotion perception but few studies have examined the relation to flat affect severity. A total of 26 people with schizophrenia and 26 healthy controls took part in this event-related functional magnetic resonance imaging study. Sad, happy and neutral faces were presented in a pseudo-random order and participants indicated the gender of the face presented. Manual segmentation of the amygdala was performed on a structural T1 image. Both the schizophrenia group and the healthy control group rated the emotional valence of facial expressions similarly. Both groups exhibited increased brain activity during the perception of emotional faces relative to neutral ones in multiple brain regions, including multiple prefrontal regions bilaterally, the right amygdala, right cingulate cortex and cuneus. Group comparisons, however, revealed increased activity in the healthy group in the anterior cingulate, right parahippocampal gyrus and multiple visual areas. In schizophrenia, the severity of flat affect correlated significantly with neural activity in several brain areas including the amygdala and parahippocampal region bilaterally. These results suggest that many of the brain regions involved in emotional face perception, including the amygdala, are equally recruited in both schizophrenia and controls, but flat affect can also moderate activity in some other brain regions, notably in the left amygdala and parahippocampal gyrus bilaterally. There were no significant group differences in the volume of the amygdala.
Psychocentricity and participant profiles: implications for lexical processing among multilinguals
Libben, Gary; Curtiss, Kaitlin; Weber, Silke
2014-01-01
Lexical processing among bilinguals is often affected by complex patterns of individual experience. In this paper we discuss the psychocentric perspective on language representation and processing, which highlights the centrality of individual experience in psycholinguistic experimentation. We discuss applications to the investigation of lexical processing among multilinguals and explore the advantages of using high-density experiments with multilinguals. High-density experiments are designed to co-index measures of lexical perception and production, as well as participant profiles. We discuss the challenges associated with the characterization of participant profiles and present a new data visualization technique that we term Facial Profiles. This technique is based on Chernoff faces, developed over 40 years ago. The Facial Profile technique seeks to overcome some of the challenges associated with the use of Chernoff faces, while maintaining the core insight that recoding multivariate data as facial features can engage the human face recognition system and thus enhance our ability to detect and interpret patterns within multivariate datasets. We demonstrate that Facial Profiles can code participant characteristics in lexical processing studies by recoding variables such as reading ability, speaking ability, and listening ability into iconically related relative sizes of eye, mouth, and ear, respectively. The balance of ability in bilinguals can be captured by creating composite facial profiles or Janus Facial Profiles. We demonstrate the use of Facial Profiles and Janus Facial Profiles in the characterization of participant effects in the study of lexical perception and production. PMID:25071614
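The recoding idea described in this abstract (ability scores mapped to the relative sizes of eye, mouth, and ear) can be sketched in a few lines of plotting code. The example below is not the authors' implementation: the function name, the scaling constants, and the assumption that scores are pre-normalized to the range [0, 1] are illustrative choices; it only shows the general Chernoff-style mapping from variables to facial features.

# Minimal Chernoff-style "Facial Profile" sketch (illustrative, not the authors' tool).
# Assumption: reading, speaking, and listening scores are already scaled to [0, 1].
import matplotlib.pyplot as plt
from matplotlib.patches import Circle, Ellipse

def facial_profile(ax, reading, speaking, listening, label=""):
    """Draw one face: eye size <- reading, mouth size <- speaking, ear size <- listening."""
    ax.add_patch(Ellipse((0, 0), 2.0, 2.6, fill=False, lw=2))            # head outline
    for x in (-0.45, 0.45):                                              # eyes scale with reading
        ax.add_patch(Circle((x, 0.45), 0.10 + 0.25 * reading, fill=False, lw=2))
    ax.add_patch(Ellipse((0, -0.6), 0.3 + 0.9 * speaking,                # mouth scales with speaking
                         0.15 + 0.3 * speaking, fill=False, lw=2))
    for x in (-1.05, 1.05):                                              # ears scale with listening
        ax.add_patch(Ellipse((x, 0.1), 0.15 + 0.3 * listening,
                             0.3 + 0.5 * listening, fill=False, lw=2))
    ax.set(xlim=(-2, 2), ylim=(-2, 2), aspect="equal", title=label)
    ax.axis("off")

# Two hypothetical participant profiles
fig, axes = plt.subplots(1, 2, figsize=(7, 4))
facial_profile(axes[0], reading=0.9, speaking=0.4, listening=0.7, label="Participant A")
facial_profile(axes[1], reading=0.3, speaking=0.8, listening=0.5, label="Participant B")
plt.show()

A composite ("Janus") profile, as described above, would combine two such mappings, one per language, into a single two-faced figure.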
Editorial: Challenges for the usability of AR and VR for clinical neurosurgical procedures.
de Ribaupierre, Sandrine; Eagleson, Roy
2017-10-01
There are a number of challenges that must be faced when trying to develop AR- and VR-based neurosurgical simulators, surgical navigation platforms, and "Smart OR" systems. Simulating an operating room environment and surgical tasks in augmented and virtual reality is a challenge many are attempting to solve in order to train surgeons or help them operate. What are some of the needs of the surgeon, and what are the challenges encountered (human-computer interfaces, perception, workflow, etc.)? We discuss these tradeoffs and conclude with critical remarks.
The perception of positive and negative facial expressions by unilateral stroke patients.
Abbott, Jacenta D; Wijeratne, Tissa; Hughes, Andrew; Perre, Diana; Lindell, Annukka K
2014-04-01
There remains conflict in the literature about the lateralisation of affective face perception. Some studies have reported a right hemisphere advantage irrespective of valence, whereas others have found a left hemisphere advantage for positive, and a right hemisphere advantage for negative, emotion. Differences in injury aetiology and chronicity, proportion of male participants, participant age, and the number of emotions used within a perception task may contribute to these contradictory findings. The present study therefore controlled and/or directly examined the influence of these possible moderators. Right brain-damaged (RBD; n=17), left brain-damaged (LBD; n=17), and healthy control (HC; n=34) participants completed two face perception tasks (identification and discrimination). No group differences in facial expression perception according to valence were found. Across emotions, the RBD group was less accurate than the HC group; however, RBD and LBD group performance did not differ. The lack of difference between the RBD and LBD groups indicates that both hemispheres are involved in positive and negative expression perception. The inclusion of older adults and the well-defined chronicity range of the brain-damaged participants may have moderated these findings. Participant sex and general face perception ability did not influence performance. Furthermore, while the RBD group was less accurate than the LBD group when the identification task tested two emotions, performance of the two groups was indistinguishable when the number of emotions increased (four or six). This suggests that task demand moderates a study's ability to find hemispheric differences in the perception of facial emotion. Copyright © 2014 Elsevier Inc. All rights reserved.
Not in My Backyard: CCS Sites and Public Perception of CCS.
Braun, Carola
2017-12-01
Carbon capture and storage (CCS) is a technology that counteracts climate change by capturing atmospheric emissions of CO2 from human activities, storing them in geological formations underground. However, CCS also involves major risks and side effects, and faces strong public opposition. The whereabouts of 408 potential CCS sites in Germany were released in 2011. Using detailed survey data on the public perception of CCS, this study quantifies how living close to a potential storage site affects the acceptance of CCS. It also analyzes the influence of other regional characteristics on the acceptance of CCS. The study finds that respondents who live close to a potential CCS site have significantly lower acceptance rates than those who do not. Living in a coal-mining region also markedly decreases acceptance. © 2017 Society for Risk Analysis.
Developmental Changes in Face Recognition during Childhood: Evidence from Upright and Inverted Faces
ERIC Educational Resources Information Center
de Heering, Adelaide; Rossion, Bruno; Maurer, Daphne
2012-01-01
Adults are experts at recognizing faces but there is controversy about how this ability develops with age. We assessed 6- to 12-year-olds and adults using a digitized version of the Benton Face Recognition Test, a sensitive tool for assessing face perception abilities. Children's response times for correct responses did not decrease between ages 6…
Strange-face Illusions During Interpersonal-Gazing and Personality Differences of Spirituality.
Caputo, Giovanni B
Strange-face illusions are produced when two individuals gaze into each other's eyes in low illumination for more than a few minutes. Usually, the members of the dyad perceive numinous apparitions, such as deformations of the other's face and the perception of a stranger or a monster in place of the other, and feel a short-lasting dissociation. In the present experiment, the influence of the spirituality personality trait on the strength and number of strange-face illusions was investigated. Thirty participants were preliminarily tested for superstition (Paranormal Belief Scale, PBS) and spirituality (Spiritual Transcendence Scale, STS); then, they were randomly assigned to 15 dyads. Dyads performed the intersubjective gazing task for 10 minutes and, finally, strange-face illusions (measured through the Strange-Face Questionnaire, SFQ) were evaluated. The first finding was that SFQ was independent of PBS; hence, strange-face illusions during intersubjective gazing are authentically perceptual, hallucination-like phenomena, and not due to superstition. The second finding was that SFQ depended on the spiritual-universality scale of STS (a belief in the unitive nature of life; e.g., "there is a higher plane of consciousness or spirituality that binds all people"), and the two variables were negatively correlated. Thus, strange-face illusions, in particular monstrous apparitions, could potentially disrupt binding among human beings. Strange-face illusions can be considered as 'projections' of the subject's unconscious into the other's face. In conclusion, intersubjective gazing at low illumination can be a tool for conscious integration of unconscious 'shadows of the Self' in order to reach completeness of the Self. Copyright © 2017 Elsevier Inc. All rights reserved.