Kandel, Sonia; Burfin, Sabine; Méary, David; Ruiz-Tada, Elisa; Costa, Albert; Pascalis, Olivier
Early linguistic experience has an impact on the way we decode audiovisual speech in face-to-face communication. The present study examined whether differences in visual speech decoding could be linked to a broader difference in face processing. To identify a phoneme, we have to analyze the speaker's face and focus on the cues relevant for speech decoding (e.g., locating the mouth with respect to the eyes). Face recognition processes were investigated through two classic effects in face recognition studies: the Other-Race Effect (ORE) and the Inversion Effect. Bilingual and monolingual participants performed a face recognition task with Caucasian faces (own race), Chinese faces (other race), and cars, presented in an upright or inverted position. The results revealed that monolinguals exhibited the classic ORE, whereas bilinguals did not. Overall, bilinguals were slower than monolinguals. These results suggest that bilinguals' face processing abilities differ from monolinguals'. Early exposure to more than one language may lead to a perceptual organization that goes beyond language processing and extends to face analysis. We hypothesize that these differences arise because bilinguals focus on different parts of the face than monolinguals do, making them more efficient in other-race face processing but slower overall. However, more studies using eye-tracking techniques are necessary to confirm this explanation.
Taylor, M J; Batty, M; Itier, R J
Our understanding of adults' proficiency in recognizing and extracting information from faces remains limited despite the number of studies over the last decade. Our knowledge of the development of these capacities is even more restricted, as only a handful of such studies exist. Here we present a combined reanalysis of four ERP studies that investigated face processing in implicit and explicit tasks in children from 4 to 15 years of age and adults (n = 424 across the studies). We restricted these analyses to what was common across studies: early ERP components and upright face processing in all four studies, and the inversion effect, investigated in three of them. These data demonstrated that processing faces implicates very rapid neural activity, even in young children, at the P1 component, with protracted age-related change in both P1 and N170 that was sensitive to the different task demands. Inversion produced latency and amplitude effects on the P1 from the youngest group onward, but on the N170 only starting in mid-childhood. These developmental data suggest that there are functionally different sources for the P1 and N170, related to the processing of different aspects of faces.
Vlamings, Petra Hendrika Johanna Maria; Jonkman, Lisa Marthe; van Daalen, Emma; van der Gaag, Rutger Jan; Kemner, Chantal
A detailed visual processing style has been noted in autism spectrum disorder (ASD); this contributes to problems in face processing and has been directly related to abnormal processing of spatial frequencies (SFs). Little is known about the early development of face processing in ASD and the relation with abnormal SF processing. We investigated whether young ASD children show abnormalities in low spatial frequency (LSF, global) and high spatial frequency (HSF, detailed) processing and explored whether these are crucially involved in the early development of face processing. Three- to 4-year-old children with ASD (n = 22) were compared with developmentally delayed children without ASD (n = 17). Spatial frequency processing was studied by recording visual evoked potentials from visual brain areas while children passively viewed gratings (HSF/LSF). In addition, children watched face stimuli with different expressions, filtered to include only HSF or LSF. Enhanced activity in visual brain areas was found in response to HSF versus LSF information in children with ASD, in contrast to control subjects. Furthermore, facial-expression processing was also primarily driven by detail in ASD. Enhanced visual processing of detailed (HSF) information is present early in ASD and occurs for neutral (gratings), as well as for socially relevant stimuli (facial expressions). These data indicate that there is a general abnormality in visual SF processing in early ASD and are in agreement with suggestions that a fast LSF subcortical face processing route might be affected in ASD. This could suggest that abnormal visual processing is causative in the development of social problems in ASD.
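The abstract above describes face stimuli filtered to retain only high (HSF) or low (LSF) spatial frequencies. As a rough illustration of how such band-filtered stimuli are typically produced, here is a generic FFT-based filter sketch; the cutoff value and the synthetic image are arbitrary assumptions for the demo, not parameters taken from the study.

```python
import numpy as np

def sf_filter(image, cutoff, mode="low"):
    """Keep only low (mode='low') or high (mode='high') spatial
    frequencies, split at `cutoff` (in cycles per image)."""
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    y, x = np.ogrid[:h, :w]
    # Radial frequency of each FFT bin, measured from the centred DC bin
    r = np.sqrt((y - h / 2) ** 2 + (x - w / 2) ** 2)
    mask = r <= cutoff if mode == "low" else r > cutoff
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

# Demo on a synthetic 64x64 "image": the LSF and HSF bands partition the
# spectrum, so they sum back to the original image.
rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
lsf = sf_filter(img, cutoff=8, mode="low")
hsf = sf_filter(img, cutoff=8, mode="high")
print(np.allclose(lsf + hsf, img))
```

In practice a soft (e.g. Gaussian) transition band is often preferred over the hard mask shown here, to avoid ringing artifacts in the filtered faces.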
van Ackeren, Markus J.; Rabini, Giuseppe; Zonca, Joshua; Foa, Valentina; Baruffaldi, Francesca; Rezk, Mohamed; Pavani, Francesco; Rossion, Bruno; Collignon, Olivier
Brain systems supporting face and voice processing both contribute to the extraction of important information for social interaction (e.g., person identity). How does the brain reorganize when one of these channels is absent? Here, we explore this question by combining behavioral and multimodal neuroimaging measures (magneto-encephalography and functional imaging) in a group of early deaf humans. We show enhanced selective neural response for faces and for individual face coding in a specific region of the auditory cortex that is typically specialized for voice perception in hearing individuals. In this region, selectivity to face signals emerges early in the visual processing hierarchy, shortly after typical face-selective responses in the ventral visual pathway. Functional and effective connectivity analyses suggest reorganization in long-range connections from early visual areas to the face-selective temporal area in individuals with early and profound deafness. Altogether, these observations demonstrate that regions that typically specialize for voice processing in the hearing brain preferentially reorganize for face processing in born-deaf people. Our results support the idea that cross-modal plasticity in the case of early sensory deprivation relates to the original functional specialization of the reorganized brain regions. PMID:28652333
Li, Su; Lee, Kang; Zhao, Jing; Yang, Zhi; He, Sheng; Weng, Xuchu
Little is known about the impact of learning to read on early neural development for word processing and its collateral effects on neural development in non-word domains. Here, we examined the effect of early exposure to reading on neural responses to both word and face processing in preschool children with the use of the Event Related Potential (ERP) methodology. We specifically linked children's reading experience (indexed by their sight vocabulary) to two major neural markers: the amplitude differences between the left and right N170 on the bilateral posterior scalp sites and the hemispheric spectrum power differences in the γ band on the same scalp sites. The results showed that the left-lateralization of both the word N170 and the spectrum power in the γ band were significantly positively related to vocabulary. In contrast, vocabulary and the word left-lateralization both had a strong negative direct effect on the face right-lateralization. Also, vocabulary negatively correlated with the right-lateralized face spectrum power in the γ band even after the effects of age and the word spectrum power were partialled out. The present study provides direct evidence regarding the role of reading experience in the neural specialization of word and face processing above and beyond the effect of maturation. The present findings taken together suggest that the neural development of visual word processing competes with that of face processing before the process of neural specialization has been consolidated.
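The lateralization measures above compare left- and right-hemisphere N170 amplitudes. Purely as an illustration of one common way such hemispheric asymmetries are quantified (not the specific computation used in this study), a normalized lateralization index might look like the sketch below; the amplitude values are made up.

```python
# Hedged sketch: a normalized hemispheric lateralization index for an ERP
# component. N170 amplitudes are negative-going, so magnitudes are compared.

def lateralization_index(left_amp, right_amp):
    """Return an index in [-1, 1]; positive means left-lateralized."""
    l, r = abs(left_amp), abs(right_amp)
    return (l - r) / (l + r)

# Hypothetical mean N170 amplitudes (microvolts) at left/right posterior sites
word_li = lateralization_index(left_amp=-6.0, right_amp=-4.0)  # word N170
face_li = lateralization_index(left_amp=-3.0, right_amp=-7.0)  # face N170
print(round(word_li, 2), round(face_li, 2))  # 0.2 -0.4
```

Under this convention a left-lateralized word response and a right-lateralized face response have indices of opposite sign, which is the kind of trade-off the abstract describes.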
Lee, Kang; Quinn, Paul C; Pascalis, Olivier
Infants have asymmetrical exposure to different types of faces (e.g., more human than other-species, more female than male, and more own-race than other-race). What are the developmental consequences of such experiential asymmetry? Here we review recent advances in research on the development of cross-race face processing. The evidence suggests that greater exposure to own- than other-race faces in infancy leads to developmentally early perceptual differences in visual preference, recognition, category formation, and scanning of own- and other-race faces. Further, such perceptual differences in infancy may be associated with the emergence of implicit racial bias, consistent with a Perceptual-Social Linkage Hypothesis. Current and future work derived from this hypothesis may lay an important empirical foundation for the development of intervention programs to combat the early occurrence of implicit racial bias.
Nemrodov, Dan; Anderson, Thomas; Preston, Frank F.; Itier, Roxane J.
Eyes are central to face processing; however, their role in early face encoding, as reflected by the N170 ERP component, is unclear. Using eye tracking to enforce fixation on specific facial features, we found that the N170 was larger for fixation on the eyes than for fixation on the forehead, nasion, nose or mouth, which all yielded similar amplitudes. This eye sensitivity was seen in both upright and inverted faces and was lost in eyeless faces, demonstrating that it was due to the presence of eyes at the fovea. Upright eyeless faces elicited the largest N170 at nose fixation. Importantly, the N170 face inversion effect (FIE) was strongly attenuated in eyeless faces when fixation was on the eyes, less attenuated for nose fixation, and normal when fixation was on the mouth. These results suggest that the impact of eye removal on the N170 FIE is a function of the angular distance between the fixated feature and the eye location. We propose the Lateral Inhibition, Face Template and Eye Detector based (LIFTED) model, which accounts for all the present N170 results, including the FIE and its interaction with eye removal. Although eyes elicit the largest N170 response, reflecting the activity of an eye detector, the processing of upright faces is holistic and entails an inhibitory mechanism from neurons coding parafoveal information onto neurons coding foveal information. The LIFTED model provides a neuronal account of the holistic and featural processing involved in upright and inverted faces and offers precise predictions for further testing. PMID:24768932
Crookes, Kate; McKone, Elinor
Historically, it was believed that the perceptual mechanisms involved in individuating faces developed only very slowly over the course of childhood, and that adult levels of expertise were not reached until well into adolescence. Over the last 10 years, there has been some erosion of this view by demonstrations that all adult-like behavioural…
Liu, Tai-Ying; Chen, Yong-Sheng; Hsieh, Jen-Chuen; Chen, Li-Fen
The amygdala has been regarded as a key substrate for emotion processing. However, the engagement of the left and right amygdala during the early perceptual processing of different emotional faces remains unclear. We investigated the temporal profiles of oscillatory gamma activity in the amygdala and effective connectivity of the amygdala with the thalamus and cortical areas during implicit emotion-perceptual tasks using event-related magnetoencephalography (MEG). We found that within 100 ms after stimulus onset the right amygdala habituated to emotional faces rapidly (with duration around 20–30 ms), whereas activity in the left amygdala (with duration around 50–60 ms) sustained longer than that in the right. Our data suggest that the right amygdala could be linked to autonomic arousal generated by facial emotions and the left amygdala might be involved in decoding or evaluating expressive faces in the early perceptual emotion processing. The results of effective connectivity provide evidence that only negative emotional processing engages both cortical and subcortical pathways connected to the right amygdala, representing its evolutional significance (survival). These findings demonstrate the asymmetric engagement of bilateral amygdala in emotional face processing as well as the capability of MEG for assessing thalamo-cortico-limbic circuitry. PMID:25629899
Hilimire, Matthew R; Mienaltowski, Andrew; Blanchard-Fields, Fredda; Corballis, Paul M
With advancing age, processing resources are shifted away from negative emotional stimuli and toward positive ones. Here, we explored this 'positivity effect' using event-related potentials (ERPs). Participants identified the presence or absence of a visual probe that appeared over photographs of emotional faces. The ERPs elicited by the onsets of angry, sad, happy and neutral faces were recorded. We examined the frontocentral emotional positivity (FcEP), which is defined as a positive deflection in the waveforms elicited by emotional expressions relative to neutral faces early on in the time course of the ERP. The FcEP is thought to reflect enhanced early processing of emotional expressions. The results show that within the first 130 ms young adults show an FcEP to negative emotional expressions, whereas older adults show an FcEP to positive emotional expressions. These findings provide additional evidence that the age-related positivity effect in emotion processing can be traced to automatic processes that are evident very early in the processing of emotional facial expressions.
Sun, Delin; Chan, Chetwyn C. H.; Fan, Jintu; Wu, Yi; Lee, Tatia M. C.
Facial attractiveness is closely related to romantic love. To understand if the neural underpinnings of perceived facial attractiveness and facial expression are similar constructs, we recorded neural signals using an event-related potential (ERP) methodology for 20 participants who were viewing faces with varied attractiveness and expressions. We found that attractiveness and expression were reflected by two early components, P2-lateral (P2l) and P2-medial (P2m), respectively; their interaction effect was reflected by LPP, a late component. The findings suggested that facial attractiveness and expression are first processed in parallel for discrimination between stimuli. After the initial processing, more attentional resources are allocated to the faces with the most positive or most negative valence in both the attractiveness and expression dimensions. The findings contribute to the theoretical model of face perception. PMID:26648885
Schacht, Annekathrin; Sommer, Werner
Recent research suggests that emotion effects in word processing resemble those in other stimulus domains such as pictures or faces. The present study aims to provide more direct evidence for this notion by comparing emotion effects in word and face processing in a within-subject design. Event-related brain potentials (ERPs) were recorded as participants made decisions on the lexicality of emotionally positive, negative, and neutral German verbs or pseudowords, and on the integrity of intact happy, angry, and neutral faces or slightly distorted faces. Relative to neutral and negative stimuli both positive verbs and happy faces elicited posterior ERP negativities that were indistinguishable in scalp distribution and resembled the early posterior negativities reported by others. Importantly, these ERP modulations appeared at very different latencies. Therefore, it appears that similar brain systems reflect the decoding of both biological and symbolic emotional signals of positive valence, differing mainly in the speed of meaning access, which is more direct and faster for facial expressions than for words.
Paul, Brianna; Appelbaum, Mark; Carapetian, Stephanie; Hesselink, John; Nass, Ruth; Trauner, Doris; Stiles, Joan
Human visuospatial functions are commonly divided into those dependent on the ventral visual stream (ventral occipitotemporal regions), which allows for processing the 'what' of an object, and the dorsal visual stream (dorsal occipitoparietal regions), which allows for processing 'where' an object is in space. Information about the development of each of the two streams has been accumulating, but very little is known about the effects of injury, particularly very early injury, on this developmental process. Using a set of computerized dorsal and ventral stream tasks matched for stimuli, required response, and difficulty (for typically developing individuals), we sought to compare the differential effects of injury to the two systems by examining performance in individuals with perinatal brain injury (PBI), who present with selective deficits in visuospatial processing from a young age. Thirty participants (mean age = 15.1 years) with early unilateral brain injury (15 right-hemisphere PBI [RPBI], 15 left-hemisphere PBI) and 16 matched controls participated. On our tasks, children with PBI performed more poorly than controls (lower accuracy and longer response times), and this was particularly prominent for the ventral stream task. Lateralization of PBI was also a factor: the dorsal stream task did not seem to be associated with lateralized deficits, with both PBI groups showing only subtle decrements in performance, while the ventral stream task elicited deficits in RPBI children that do not appear to improve with age. Our findings suggest that early injury results in lesion-specific visuospatial deficits that persist into adolescence. Further, as the stimuli used in our ventral stream task were faces, our findings are consistent with what is known about the neural systems for face processing, namely, that they are established relatively early, follow a comparatively rapid developmental trajectory (conferring a vulnerability to early insult), and are biased toward the right hemisphere.
Ho, Hao Tam; Schröger, Erich; Kotz, Sonja A
Recent findings on multisensory integration suggest that selective attention influences cross-sensory interactions from an early processing stage. Yet, in the field of emotional face-voice integration, the hypothesis prevails that facial and vocal emotional information interacts preattentively. Using ERPs, we investigated the influence of selective attention on the perception of congruent versus incongruent combinations of neutral and angry facial and vocal expressions. Attention was manipulated via four tasks that directed participants to (i) the facial expression, (ii) the vocal expression, (iii) the emotional congruence between the face and the voice, and (iv) the synchrony between lip movement and speech onset. Our results revealed early interactions between facial and vocal emotional expressions, manifested as modulations of the auditory N1 and P2 amplitude by incongruent emotional face-voice combinations. Although audiovisual emotional interactions within the N1 time window were affected by the attentional manipulations, interactions within the P2 modulation showed no such attentional influence. Thus, we propose that the N1 and P2 are functionally dissociated in terms of emotional face-voice processing and discuss evidence in support of the notion that the N1 is associated with cross-sensory prediction, whereas the P2 relates to the derivation of an emotional percept. Essentially, our findings put the integration of facial and vocal emotional expressions into a new perspective-one that regards the integration process as a composite of multiple, possibly independent subprocesses, some of which are susceptible to attentional modulation, whereas others may be influenced by additional factors.
Bublatzky, Florian; Pittig, Andre; Schupp, Harald T.; Alpers, Georg W.
The human face conveys emotional and social information, but it is not well understood how these two aspects influence face perception. In order to model a group situation, two faces displaying happy, neutral or angry expressions were presented. Importantly, faces were either facing the observer, or they were presented in profile view directed towards, or looking away from, each other. In Experiment 1 (n = 64), face pairs were rated regarding perceived relevance, wish-to-interact, and displayed interactivity, as well as valence and arousal. All variables revealed main effects of facial expression (emotional > neutral) and face orientation (facing observer > towards > away), and interactions showed that evaluation of emotional faces strongly varies with their orientation. Experiment 2 (n = 33) examined the temporal dynamics of perceptual-attentional processing of these face constellations with event-related potentials. Processing of emotional and neutral faces differed significantly in N170 amplitudes, early posterior negativity (EPN), and sustained positive potentials. Importantly, selective emotional face processing varied as a function of face orientation, indicating early emotion-specific (N170, EPN) and late threat-specific effects (LPP, sustained positivity). Taken together, perceived personal relevance to the observer, conveyed by facial expression and face direction, amplifies emotional face processing within triadic group situations. PMID:28158672
Cassia, Viola Macchi; Proietti, Valentina; Pisacane, Antonella
Available evidence indicates that experience with one face from a specific age group improves face-processing abilities if acquired within the first 3 years of life but not in adulthood. In the current study, we tested whether the effects of early experience endure at age 6 and whether the first 3 years of life are a sensitive period for the…
Young, Audrey; Luyster, Rhiannon J; Fox, Nathan A; Zeanah, Charles H; Nelson, Charles A
Early psychosocial deprivation has profound adverse effects on children's brain and behavioural development, including abnormalities in physical growth, intellectual function, social cognition, and emotional development. Nevertheless, the domain of emotional face processing has appeared in previous research to be relatively spared; here, we test for possible sleeper effects emerging in early adolescence. This study employed event-related potentials (ERPs) to examine the neural correlates of facial emotion processing in 12-year-old children who took part in a randomized controlled trial of foster care as an intervention for early institutionalization. Results revealed no significant group differences in two face and emotion-sensitive ERP components (P1 and N170), nor any association with age at placement or per cent of lifetime spent in an institution. These results converged with previous evidence from this population supporting relative sparing of facial emotion processing. We hypothesize that this sparing is due to an experience-dependent mechanism in which the amount of exposure to faces and facial expressions of emotion children received was sufficient to meet the low threshold required for cortical specialization of structures critical to emotion processing. Statement of contribution What is already known on this subject? Early psychosocial deprivation leads to profoundly detrimental effects on children's brain and behavioural development. With respect to children's emotional face processing abilities, few adverse effects of institutionalized rearing have previously been reported. Recent studies suggest that 'sleeper effects' may emerge many years later, especially in the domain of face processing. What does this study add? Examining a cumulative 12 years of data, we found only minimal group differences and no evidence of a sleeper effect in this particular domain. These findings identify emotional face processing as a unique ability in which relative sparing
Simion, Francesca; Di Giorgio, Elisa; Leo, Irene; Bardi, Lara
Several lines of evidence suggest that, from birth, the human system detects social agents on the basis of at least two properties: the presence of a face and the way they move. This chapter reviews the infant research on the origin of brain specialization for social stimuli and on the role of innate mechanisms and perceptual experience in shaping the development of the social brain. Two lines of convergent evidence, on face detection and biological motion detection, will be presented to demonstrate the innate predispositions of the human system to detect social stimuli at birth. As for face detection, experiments will be presented to demonstrate that, by virtue of nonspecific attentional biases, a very coarse face template is active at birth. As for biological motion detection, studies will be presented to demonstrate that, from birth, the human system is able to detect social stimuli on the basis of properties such as the presence of a semi-rigid motion named biological motion. Overall, the empirical evidence converges in supporting the notion that the human system begins life broadly tuned to detect social stimuli and that progressive specialization narrows the system for social stimuli as a function of experience.
Caharel, Stéphanie; Leleu, Arnaud; Bernard, Christian; Viggiano, Maria-Pia; Lalonde, Robert; Rebaï, Mohamed
The properties of the face-sensitive N170 component of the event-related brain potential (ERP) were explored through an orientation discrimination task using natural faces, objects, and Arcimboldo paintings presented upright or inverted. Because Arcimboldo paintings are composed of non-face objects but have a global face configuration, they provide great control to disentangle high-level face-like or object-like visual processes at the level of the N170, and may help to examine the implication of each hemisphere in the global/holistic processing of face formats. For upright position, N170 amplitudes in the right occipito-temporal region did not differ between natural faces and Arcimboldo paintings but were larger for both of these categories than for objects, supporting the view that as early as the N170 time-window, the right hemisphere is involved in holistic perceptual processing of face-like configurations irrespective of their features. Conversely, in the left hemisphere, N170 amplitudes differed between Arcimboldo portraits and natural faces, suggesting that this hemisphere processes local facial features. For upside-down orientation in both hemispheres, N170 amplitudes did not differ between Arcimboldo paintings and objects, but were reduced for both categories compared to natural faces, indicating that the disruption of holistic processing with inversion leads to an object-like processing of Arcimboldo paintings due to the lack of local facial features. Overall, these results provide evidence that global/holistic perceptual processing of faces and face-like formats involves the right hemisphere as early as the N170 time-window, and that the local processing of face features is rather implemented in the left hemisphere.
Turk, Matthew A.; Pentland, Alexander P.
The human ability to process faces is remarkable. We can identify perhaps thousands of faces learned throughout our lifetime and read facial expression to understand such subtle qualities as emotion. These skills are quite robust, despite sometimes large changes in the visual stimulus due to expression, aging, and distractions such as glasses or changes in hairstyle or facial hair. Computers which model and recognize faces will be useful in a variety of applications, including criminal identification, human-computer interface, and animation. We discuss models for representing faces and their applicability to the task of recognition, and present techniques for identifying faces and detecting eye blinks.
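The models Turk and Pentland discuss are the classic "eigenfaces" approach: faces are represented by their projections onto the principal components of a training set, and a probe is identified by nearest-neighbour matching in that subspace. The following is a minimal, hedged sketch of that idea; synthetic random vectors stand in for real face images, and the image size, component count, and noise level are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)
n_train, h, w = 12, 16, 16
train = rng.standard_normal((n_train, h * w))  # flattened "face" images

mean_face = train.mean(axis=0)
centered = train - mean_face
# SVD yields the eigenfaces (rows of vt) without forming the full covariance
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:8]                            # keep the top 8 components
scores = centered @ eigenfaces.T               # training faces in face space

def project(face):
    """Project a flattened face image into the eigenface subspace."""
    return eigenfaces @ (face - mean_face)

def identify(probe):
    """Index of the training face nearest to `probe` in face space."""
    dists = np.linalg.norm(project(probe) - scores, axis=1)
    return int(np.argmin(dists))

# A noisy copy of training face 5 should be matched back to index 5
probe = train[5] + 0.1 * rng.standard_normal(h * w)
print(identify(probe))  # 5
```

The same projection machinery supports the face-detection side of the method: a window whose reconstruction error from the subspace ("distance from face space") is small is likely to contain a face.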
Hadjikhani, Nouchine; Kveraga, Kestutis; Naik, Paulami; Ahlfors, Seppo P.
The tendency to perceive faces in random patterns exhibiting configural properties of faces is an example of pareidolia. Perception of 'real' faces has been associated with a cortical response signal arising at about 170 ms after stimulus onset, but what happens when non-face objects are perceived as faces? Using magnetoencephalography (MEG), we found that objects incidentally perceived as faces evoked an early (165 ms) activation in the ventral fusiform cortex, at a time and location similar to that evoked by faces, whereas common objects did not evoke such activation. An earlier peak at 130 ms was also seen for images of real faces only. Our findings suggest that face perception evoked by face-like objects is a relatively early process, and not a late re-interpretation cognitive phenomenon. PMID:19218867
Kiss, Monika; Eimer, Martin
To investigate whether facial expression is processed in the absence of conscious awareness, ERPs were recorded in a task in which participants had to identify the expression of masked fearful and neutral target faces. On supraliminal trials (200 ms target duration), in which identification performance was high, a sustained positivity to fearful versus neutral target faces started 140 ms after target face onset. On subliminal trials (8 ms target duration), identification performance was at chance level, but ERPs still showed systematic fear-specific effects. An early positivity to fearful target faces was present but smaller than on supraliminal trials. A subsequent enhanced N2 to fearful faces was only present for subliminal trials. In contrast, a P3 enhancement to fearful faces was observed on supraliminal but not subliminal trials. Results demonstrate rapid emotional expression processing in the absence of awareness.
Kiss, Monika; Eimer, Martin
To investigate whether facial expression is processed in the absence of conscious awareness, ERPs were recorded in a task where participants had to identify the expression of masked fearful and neutral target faces. On supraliminal trials (200 ms target duration), where identification performance was high, a sustained positivity to fearful versus neutral target faces started 140 ms after target face onset. On subliminal trials (8 ms target duration), identification performance was at chance level, but ERPs still showed systematic fear-specific effects. An early positivity to fearful target faces was present but smaller than on supraliminal trials. A subsequent enhanced N2 to fearful faces was only present for subliminal trials. In contrast, a P3 enhancement to fearful faces was observed on supraliminal but not subliminal trials. Results demonstrate rapid emotional expression processing in the absence of awareness. PMID:17995905
Zhao, Mintao; Bülthoff, Isabelle
Humans' face ability develops and matures with extensive experience in perceiving, recognizing, and interacting with faces that move most of the time. However, how facial movements affect one core aspect of face ability, holistic face processing, remains unclear. Here we investigated the influence of rigid facial motion on holistic and part-based face processing by manipulating the presence of facial motion during study and at test in a composite face task. The results showed that rigidly moving faces were processed as holistically as static faces (Experiment 1). Holistic processing of moving faces persisted whether facial motion was presented during study, at test, or both (Experiment 2). Moreover, when faces were inverted to eliminate the contributions of both an upright face template and observers' expertise with upright faces, rigid facial motion facilitated holistic face processing (Experiment 3). Thus, holistic processing represents a general principle of face perception that applies to both static and dynamic faces, rather than being limited to static faces. These results support an emerging view that both perceiver-based and face-based factors contribute to holistic face processing, and they offer new insights on what underlies holistic face processing, how the types of information supporting it interact, and why facial motion may affect face recognition and holistic face processing differently.
Cassia, Viola Macchi; Pisacane, Antonella; Gava, Lucia
This study aimed to investigate the presence of an own-age bias in young children who accumulated different amounts of early experience with child faces. Discrimination abilities for upright and inverted adult and child faces were tested using a delayed two-alternative, forced-choice matching-to-sample task in two groups of 3-year-old children,…
Zhang, Hong; Sun, Yaoru; Zhao, Lun
Perception of face parts on the basis of features is thought to be different from perception of whole faces, which is more based on configural information. Face context is also suggested to play an important role in face processing. To investigate how face context influences the early-stage perception of facial local parts, we used an oddball paradigm that tested perceptual stages of face processing rather than recognition. We recorded the event-related potentials (ERPs) elicited by whole faces and face parts presented in four conditions (upright-normal, upright-thatcherised, inverted-normal and inverted-thatcherised), as well as the ERPs elicited by non-face objects (whole houses and house parts) with corresponding conditions. The results showed that face context significantly affected the N170 with increased amplitudes and earlier peak latency for upright normal faces. Removing face context delayed the P1 latency but did not affect the P1 amplitude prominently for both upright and inverted normal faces. Across all conditions, neither the N170 nor the P1 was modulated by house context. The significant changes on the N170 and P1 components revealed that face context influences local part processing at the early stage of face processing and this context effect might be specific for face perception. We further suggested that perceptions of whole faces and face parts are functionally distinguished.
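Several of the abstracts above quantify components such as the N170 by their peak amplitude and latency in a fixed post-stimulus window. A minimal sketch of that measurement step follows; the synthetic waveform, sampling rate, and window bounds are our assumptions for illustration, not any study's actual pipeline.

```python
# Minimal sketch (hypothetical data) of quantifying an N170-style component:
# find the most negative deflection in a 130-200 ms post-stimulus window of
# an averaged ERP waveform and report its latency and amplitude.
import numpy as np

def n170_peak(erp, times, window=(0.13, 0.20)):
    """erp: averaged waveform (microvolts); times: sample times (s).
    Returns (peak latency in ms, peak amplitude) within the window."""
    idx = np.flatnonzero((times >= window[0]) & (times <= window[1]))
    peak = idx[np.argmin(erp[idx])]   # index of the most negative sample
    return times[peak] * 1000.0, erp[peak]

if __name__ == "__main__":
    fs = 500.0                                  # assumed 500 Hz sampling
    times = np.arange(-0.1, 0.5, 1 / fs)        # -100 to 500 ms epoch
    # Synthetic ERP: a negative Gaussian deflection centred at 170 ms.
    erp = -4.0 * np.exp(-((times - 0.170) ** 2) / (2 * 0.012 ** 2))
    lat_ms, amp = n170_peak(erp, times)
    print(round(lat_ms), round(amp, 1))         # → 170 -4.0
```

Latency and amplitude extracted this way, per condition and per group, are the dependent measures behind statements like "increased amplitudes and earlier peak latency for upright normal faces."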
Damaskinou, Nikoleta; Watling, Dawn
This study was designed to investigate the patterns of electrophysiological responses of early emotional processing at frontocentral sites in adults and to explore whether adults' activation patterns show hemispheric lateralization for facial emotion processing. Thirty-five adults viewed full face and chimeric face stimuli. After viewing two faces, sequentially, participants were asked to decide which of the two faces was more emotive. The findings from the standard faces and the chimeric faces suggest that emotion processing is present during the early phases of face processing in the frontocentral sites. In particular, sad emotional faces are processed differently than neutral and happy (including happy chimeras) faces in these early phases of processing. Further, there were differences in the electrode amplitudes over the left and right hemisphere, particularly in the early temporal window. This research provides supporting evidence that the chimeric face test is a test of emotion processing that elicits right hemispheric processing.
Visconti di Oleggio Castello, Matteo; Wheeler, Kelsey G; Cipolli, Carlo; Gobbini, M Ida
Recognition of personally familiar faces is remarkably efficient, effortless and robust. We asked if feature-based face processing facilitates detection of familiar faces by testing the effect of face inversion on a visual search task for familiar and unfamiliar faces. Because face inversion disrupts configural and holistic face processing, we hypothesized that inversion would diminish the familiarity advantage to the extent that it is mediated by such processing. Subjects detected personally familiar and stranger target faces in arrays of two, four, or six face images. Subjects showed significant facilitation of personally familiar face detection for both upright and inverted faces. The effect of familiarity on target absent trials, which involved only rejection of unfamiliar face distractors, suggests that familiarity facilitates rejection of unfamiliar distractors as well as detection of familiar targets. The preserved familiarity effect for inverted faces suggests that facilitation of face detection afforded by familiarity reflects mostly feature-based processes.
In this article, the author presents the challenges faced by early childhood education in 29 countries, according to the World Forum National Representatives and Global Leaders for Young Children. The countries represented in these responses include: Azerbaijan, Bangladesh, Bolivia, Brazil, Canada, Denmark, Egypt, Fiji, India, Iran, Iraq, Japan,…
Richler, Jennifer J.; Floyd, R. Jackie; Gauthier, Isabel
Previous work found a small but significant relationship between holistic processing measured with the composite task and face recognition ability measured by the Cambridge Face Memory Test (CFMT; Duchaine & Nakayama, 2006). Surprisingly, recent work using a different measure of holistic processing (Vanderbilt Holistic Face Processing Test [VHPT-F]; Richler, Floyd, & Gauthier, 2014) and a larger sample found no evidence for such a relationship. In Experiment 1 we replicate this unexpected result, finding no relationship between holistic processing (VHPT-F) and face recognition ability (CFMT). A key difference between the VHPT-F and other holistic processing measures is that unique face parts are used on each trial in the VHPT-F, unlike in other tasks where a small set of face parts repeat across the experiment. In Experiment 2, we test the hypothesis that correlations between the CFMT and holistic processing tasks are driven by stimulus repetition that allows for learning during the composite task. Consistent with our predictions, CFMT performance was correlated with holistic processing in the composite task when a small set of face parts repeated over trials, but not when face parts did not repeat. A meta-analysis confirms that relationships between the CFMT and holistic processing depend on stimulus repetition. These results raise important questions about what is being measured by the CFMT, and challenge current assumptions about why faces are processed holistically. PMID:26223027
Tong, Carl W; Ahmad, Tariq; Brittain, Evan L; Bunch, T Jared; Damp, Julie B; Dardas, Todd; Hijar, Amalea; Hill, Joseph A; Hilliard, Anthony A; Houser, Steven R; Jahangir, Eiman; Kates, Andrew M; Kim, Darlene; Lindman, Brian R; Ryan, John J; Rzeszut, Anne K; Sivaram, Chittur A; Valente, Anne Marie; Freeman, Andrew M
Early career academic cardiologists currently face unprecedented challenges that threaten a highly valued career path. A team consisting of early career professionals and senior leadership members of the American College of Cardiology completed this white paper to inform the cardiovascular medicine profession regarding the plight of early career cardiologists and to suggest possible solutions. This paper includes: 1) definition of categories of early career academic cardiologists; 2) general challenges to all categories and specific challenges to each category; 3) obstacles as identified by a survey of current early career members of the American College of Cardiology; 4) major reasons for the failure of physician-scientists to receive funding from National Institutes of Health/National Heart, Lung, and Blood Institute career development grants; 5) potential solutions; and 6) a call to action with specific recommendations.
Heisz, Jennifer J.; Shedden, Judith M.
Face processing changes when a face is learned with personally relevant information. In a five-day learning paradigm, faces were presented with rich semantic stories that conveyed personal information about the faces. Event-related potentials were recorded before and after learning during a passive viewing task. When faces were novel, we observed…
Walther, Sebastian; Federspiel, Andrea; Horn, Helge; Bianchi, Piero; Wiest, Roland; Wirth, Miranka; Strik, Werner; Müller, Thomas Jörg
Face processing is crucial to social interaction, but is impaired in schizophrenia patients, who experience delays in face recognition, difficulties identifying others, and misperceptions of affective content. The right fusiform face area plays an important role in the early stages of human face processing and thus may be affected in schizophrenia. The aim of the study was therefore to investigate whether face processing deficits are related to dysfunctions of the right fusiform face area in schizophrenia patients compared with controls. In a rapid, event-related functional magnetic resonance imaging (fMRI) design, we investigated the encoding of new faces, as well as the recognition of newly learned, famous, and unfamiliar faces, in 13 schizophrenia patients and 21 healthy controls. We applied region of interest analysis to each individual's right fusiform face area and tested for group differences. Controls displayed higher blood oxygenation level dependent (BOLD) activation during the memorization of faces that were later successfully recognized. In schizophrenia patients, this effect was not observed. During the recognition task, schizophrenia patients exhibited lower BOLD responses, less accuracy, and longer reaction times to famous and unfamiliar faces. Our results support the hypothesis that impaired face processing in schizophrenia is related to early-stage deficits during the encoding and recognition of faces.
Cohan, Sarah; Nakayama, Ken
Prosopagnosia has largely been regarded as an untreatable disorder. However, recent case studies using cognitive training have shown that it is possible to enhance face recognition abilities in individuals with developmental prosopagnosia. Our goal was to determine if this approach could be effective in a larger population of developmental prosopagnosics. We trained 24 developmental prosopagnosics using a 3-week online face-training program targeting holistic face processing. Twelve subjects with developmental prosopagnosia were assessed before and after training; the other 12 were assessed before and after a waiting period, then performed the training, and were assessed again. The assessments included measures of front-view face discrimination, face discrimination with view-point changes, measures of holistic face processing, and a 5-day diary to quantify potential real-world improvements. Compared with the waiting period, developmental prosopagnosics showed moderate but significant overall training-related improvements on measures of front-view face discrimination. Those who reached the more difficult levels of training (‘better’ trainees) showed the strongest improvements in front-view face discrimination and showed significantly increased holistic face processing to the point of being similar to that of unimpaired control subjects. Despite challenges in characterizing developmental prosopagnosics’ everyday face recognition and potential biases in self-report, results also showed modest but consistent self-reported diary improvements. In summary, we demonstrate that by using cognitive training that targets holistic processing, it is possible to enhance face perception across a group of developmental prosopagnosics and further suggest that those who improved the most on the training task received the greatest benefits. PMID:24691394
Martens, Ulla; Leuthold, Hartmut; Schweinberger, Stefan R.
The authors examined face perception models with regard to the functional and temporal organization of facial identity and expression analysis. Participants performed a manual 2-choice go/no-go task to classify faces, where response hand depended on facial familiarity (famous vs. unfamiliar) and response execution depended on facial expression…
Curby, Kim M; Johnson, Kareem J; Tyson, Alyssa
Negative emotions are linked with a local, rather than global, visual processing style, which may preferentially facilitate feature-based, relative to holistic, processing mechanisms. Because faces are typically processed holistically, and because social contexts are prime elicitors of emotions, we examined whether negative emotions decrease holistic processing of faces. We induced positive, negative, or neutral emotions via film clips and measured holistic processing before and after the induction: participants made judgements about cued parts of chimeric faces, and holistic processing was indexed by the interference caused by task-irrelevant face parts. Emotional state significantly modulated face-processing style, with the negative emotion induction leading to decreased holistic processing. Furthermore, self-reported change in emotional state correlated with changes in holistic processing. These results contrast with general assumptions that holistic processing of faces is automatic and immune to outside influences, and they illustrate emotion's power to modulate socially relevant aspects of visual perception.
Liu, Taosheng; Pinheiro, Ana P; Zhao, Zhongxin; Nestor, Paul G; McCarley, Robert W; Niznikiewicz, Margaret
While several studies have consistently demonstrated abnormalities in the unisensory processing of face and voice in schizophrenia (SZ), the extent of abnormalities in the simultaneous processing of both types of information remains unclear. To address this issue, we used event-related potentials (ERP) methodology to probe the multisensory integration of face and non-semantic sounds in schizophrenia. EEG was recorded from 18 schizophrenia patients and 19 healthy control (HC) subjects in three conditions: neutral faces (visual condition-VIS); neutral non-semantic sounds (auditory condition-AUD); neutral faces presented simultaneously with neutral non-semantic sounds (audiovisual condition-AUDVIS). When compared with HC, the schizophrenia group showed less negative N170 to both face and face-voice stimuli; later P270 peak latency in the multimodal condition of face-voice relative to unimodal condition of face (the reverse was true in HC); reduced P400 amplitude and earlier P400 peak latency in the face but not in the voice-face condition. Thus, the analysis of ERP components suggests that deficits in the encoding of facial information extend to multimodal face-voice stimuli and that delays exist in feature extraction from multimodal face-voice stimuli in schizophrenia. In contrast, categorization processes seem to benefit from the presentation of simultaneous face-voice information. Timepoint by timepoint tests of multimodal integration did not suggest impairment in the initial stages of processing in schizophrenia.
Ibáñez, Agustín; Riveros, Rodrigo; Hurtado, Esteban; Gleichgerrcht, Ezequiel; Urquina, Hugo; Herrera, Eduar; Amoruso, Lucía; Reyes, Migdyrai Martin; Manes, Facundo
Previous studies have reported facial emotion recognition impairments in schizophrenic patients, as well as abnormalities in the N170 component of the event-related potential. Current research on schizophrenia highlights the importance of complexly-inherited brain-based deficits. In order to examine the N170 markers of face structural and emotional processing, DSM-IV diagnosed schizophrenia probands (n=13), unaffected first-degree relatives from multiplex families (n=13), and control subjects (n=13) matched by age, gender and educational level, performed a categorization task which involved words and faces with positive and negative valence. The N170 component, while present in relatives and control subjects, was reduced in patients, not only for faces, but also for face-word differences, suggesting a deficit in structural processing of stimuli. Control subjects showed N170 modulation according to the valence of facial stimuli. However, this discrimination effect was found to be reduced both in patients and relatives. This is the first report showing N170 valence deficits in relatives. Our results suggest a generalized deficit affecting the structural encoding of faces in patients, as well as the emotion discrimination both in patients and relatives. Finally, these findings lend support to the notion that cortical markers of facial discrimination can be validly considered as vulnerability markers.
Le Grand, Richard; Cooper, Philip A.; Mondloch, Catherine J.; Lewis, Terri L.; Sagiv, Noam; de Gelder, Beatrice; Maurer, Daphne
Developmental prosopagnosia (DP) is a severe impairment in identifying faces that is present from early in life and that occurs despite no apparent brain damage and intact visual and intellectual function. Here, we investigated what aspects of face processing are impaired/spared in developmental prosopagnosia by examining a relatively large group…
Pegna, Alan J.; Darque, Alexandra; Berrut, Claire; Khateb, Asaid
A number of investigations have reported that emotional faces can be processed subliminally, and that they give rise to specific patterns of brain activation in the absence of awareness. Recent event-related potential (ERP) studies have suggested that electrophysiological differences occur early in time (<200 ms) in response to backward-masked emotional faces. These findings have been taken as evidence of a rapid non-conscious pathway, which would allow threatening stimuli to be processed rapidly and subsequently allow appropriate avoidance action to be taken. However, for this to be the case, subliminal processing should arise even if the threatening stimulus is not attended. This point has in fact not yet been clearly established. In this ERP study, we investigated whether subliminal processing of fearful faces occurs outside the focus of attention. Fourteen healthy participants performed a line judgment task while fearful and non-fearful (happy or neutral) faces were presented both subliminally and supraliminally. ERPs were compared across the four experimental conditions (i.e., subliminal and supraliminal; fearful and non-fearful). The earliest differences between fearful and non-fearful faces appeared as an enhanced posterior negativity for the former at 170 ms (the N170 component) over right temporo-occipital electrodes. This difference was observed for both subliminal (p < 0.05) and supraliminal presentations (p < 0.01). Our results confirm that subliminal processing of fearful faces occurs early in the course of visual processing, and more importantly, that this arises even when the subject's attention is engaged in an incidental task. PMID:21687457
The author conducted 7 experiments to examine possible interactions between orienting to eye gaze and specific forms of face processing. Participants classified a letter following either an upright or inverted face with averted, uninformative eye gaze. Eye gaze orienting effects were recorded for upright and inverted faces, irrespective of whether…
Mondloch, Catherine J.; Geldart, Sybil; Maurer, Daphne; Le Grand, Richard
Two experiments examined the impact of slow development of processing differences among faces in the spacing among facial features (second-order relations). Computerized tasks involving various face-processing skills were used. Results of experiment with 6-, 8-, and 10-year-olds and with adults indicated that slow development of sensitivity to…
Hayward, William G; Crookes, Kate; Chu, Ming Hon; Favelle, Simone K; Rhodes, Gillian
Although many researchers agree that faces are processed holistically, we know relatively little about what information holistic processing captures from a face. Most studies that assess the nature of holistic processing do so with changes to the face affecting many different aspects of face information (e.g., different identities). Does holistic processing affect every aspect of a face? We used the composite task, a common means of examining the strength of holistic processing, with participants making same-different judgments about configuration changes or component changes to 1 portion of a face. Configuration changes involved changes in spatial position of the eyes, whereas component changes involved lightening or darkening the eyebrows. Composites were either aligned or misaligned, and were presented either upright or inverted. Both configuration judgments and component judgments showed evidence of holistic processing, and in both cases it was strongest for upright face composites. These results suggest that holistic processing captures a broad range of information about the face, including both configuration-based and component-based information.
Cecchi, Ariel S
Cognitive and affective penetration of perception refers to the influence that higher mental states such as beliefs and emotions have on perceptual systems. Psychological and neuroscientific studies appear to show that these states modulate the visual system at the visuomotor, attentional, and late levels of processing. However, empirical evidence showing that similar consequences occur in early stages of visual processing seems to be scarce. In this paper, I argue that psychological evidence does not seem to be either sufficient or necessary to argue in favour of or against the cognitive penetration of perception in either late or early vision. In order to do that we need to have recourse to brain imaging techniques. Thus, I introduce a neuroscientific study and argue that it seems to provide well-grounded evidence for the cognitive penetration of early vision in face perception. I also examine and reject alternative explanations to my conclusion.
Garrido, Lucia; Driver, Jon; Dolan, Raymond J.; Duchaine, Bradley C.; Furl, Nicholas
Face processing is mediated by interactions between functional areas in the occipital and temporal lobe, and the fusiform face area (FFA) and anterior temporal lobe play key roles in the recognition of facial identity. Individuals with developmental prosopagnosia (DP), a lifelong face recognition impairment, have been shown to have structural and functional neuronal alterations in these areas. The present study investigated how face selectivity is generated in participants with normal face processing, and how functional abnormalities associated with DP, arise as a function of network connectivity. Using functional magnetic resonance imaging and dynamic causal modeling, we examined effective connectivity in normal participants by assessing network models that include early visual cortex (EVC) and face-selective areas and then investigated the integrity of this connectivity in participants with DP. Results showed that a feedforward architecture from EVC to the occipital face area, EVC to FFA, and EVC to posterior superior temporal sulcus (pSTS) best explained how face selectivity arises in both controls and participants with DP. In this architecture, the DP group showed reduced connection strengths on feedforward connections carrying face information from EVC to FFA and EVC to pSTS. These altered network dynamics in DP contribute to the diminished face selectivity in the posterior occipitotemporal areas affected in DP. These findings suggest a novel view on the relevance of feedforward projection from EVC to posterior occipitotemporal face areas in generating cortical face selectivity and differences in face recognition ability. SIGNIFICANCE STATEMENT Areas of the human brain showing enhanced activation to faces compared to other objects or places have been extensively studied. However, the factors leading to this face selectively have remained mostly unknown. We show that effective connectivity from early visual cortex to posterior occipitotemporal face areas gives
Ruz, María; Aranda, Clara; Sarmiento, Beatriz R; Sanabria, Daniel
The ability of attention to apply in a flexible manner to several types of information at various stages of processing has been studied extensively. However, the susceptibility of these effects to the nature of the idiosyncratic items being attended is less understood. In the current study, we used symbolic cues to orient the attention of participants to the subsequent appearance of the face of a famous person (the former king of Spain) or an unfamiliar face. These were matched in perceptual characteristics. Behavioral effects showed that face-specific attention optimized response speed in an orthogonal task when the target matched the cue (valid trials) compared to when it did not (invalid trials). According to topographical analyses of the electrophysiological data, the famous and unfamiliar faces engaged dissociable brain circuits in two different temporal windows, from 144 to 300 ms after target processing, and at a later 456-492 ms epoch. In addition, orienting attention to specific faces modulated the perceptual stages reflected in the P1 and N170 potentials but with a different laterality pattern that depended on the familiarity of the faces. Whereas only attention to the famous face enhanced the P1 potential at left posterior electrodes, with no corresponding effect for the unfamiliar face at this stage, the N170 was modulated at left posterior sites for the famous item and at right homologous electrodes for the unfamiliar face. Intermediate processing stages, previously linked to facial identity processing indexed by the P2 and N2 potentials, reflected item familiarity but were not affected by the cueing manipulation. At the P3 level, attention influenced again item processing but did so in an equivalent manner for the famous and unfamiliar face. Our results, showing that identity-specific attention modulates perceptual stages of facial processing at different locations depending on idiosyncratic stimulus familiarity, may inform comparison of studies
Besson, G; Barragan-Jason, G; Thorpe, S J; Fabre-Thorpe, M; Puma, S; Ceccaldi, M; Barbeau, E J
Verifying that a face is from a target person (e.g. finding someone in the crowd) is a critical ability of the human face processing system. Yet how fast this can be performed is unknown. The 'entry-level shift due to expertise' hypothesis suggests that - since humans are face experts - processing faces should be as fast - or even faster - at the individual than at superordinate levels. In contrast, the 'superordinate advantage' hypothesis suggests that faces are processed from coarse to fine, so that the opposite pattern should be observed. To clarify this debate, three different face processing levels were compared: (1) a superordinate face categorization level (i.e. detecting human faces among animal faces), (2) a face familiarity level (i.e. recognizing famous faces among unfamiliar ones) and (3) verifying that a face is from a target person, our condition of interest. The minimal speed at which faces can be categorized (∼260ms) or recognized as familiar (∼360ms) has largely been documented in previous studies, and thus provides boundaries to compare our condition of interest to. Twenty-seven participants were included. The recent Speed and Accuracy Boosting procedure paradigm (SAB) was used since it constrains participants to use their fastest strategy. Stimuli were presented either upright or inverted. Results revealed that verifying that a face is from a target person (minimal RT at ∼260ms) was remarkably fast but longer than the face categorization level (∼240ms) and was more sensitive to face inversion. In contrast, it was much faster than recognizing a face as familiar (∼380ms), a level severely affected by face inversion. Face recognition corresponding to finding a specific person in a crowd thus appears achievable in only a quarter of a second. In favor of the 'superordinate advantage' hypothesis or coarse-to-fine account of the face visual hierarchy, these results suggest a graded engagement of the face processing system across processing
Karl, Christian; Hewig, Johannes; Osinsky, Roman
There is broad evidence that contextual factors influence the processing of emotional facial expressions. Yet temporal-dynamic aspects, inter alia how face processing is influenced by the specific order of neutral and emotional facial expressions, have been largely neglected. To shed light on this topic, we recorded electroencephalogram from 168 healthy participants while they performed a gender-discrimination task with angry and neutral faces. Our event-related potential (ERP) analyses revealed a strong emotional modulation of the N170 component, indicating that the basic visual encoding and emotional analysis of a facial stimulus happen, at least partially, in parallel. While the N170 and the late positive potential (LPP; 400-600 ms) were only modestly affected by the sequence of preceding faces, we observed a strong influence of face sequences on the early posterior negativity (EPN; 200-300 ms). Finally, the differing response patterns of the EPN and LPP indicate that these two ERPs represent distinct processes during face analysis: while the former seems to represent the integration of contextual information in the perception of a current face, the latter appears to represent the net emotional interpretation of a current face.
Polewan, Robert J; Vigorito, Christopher M; Nason, Christopher D; Block, Richard A; Moore, John W
Commands to blink were embedded within pictures of faces and simple geometric shapes or forms. The faces and shapes were conditioned stimuli (CSs), and the required responses were conditioned responses, or more properly, Cartesian reflexes (CRs). As in classical conditioning protocols, response times (RTs) were measured from CS onset. RTs provided a measure of the processing cost (PC) of attending to a CS. A PC is the extra time required to respond relative to RTs to unconditioned stimulus (US) commands presented alone. PCs reflect the interplay between attentional processing of the informational content of a CS and its signaling function with respect to the US command; this interplay resulted in longer RTs to embedded commands. Differences between the PCs of faces and geometric shapes represent a starting place for a new mental chronometry based on the traditional idea that differences in RT reflect differences in information processing.
Cheung, Olivia S.; Gauthier, Isabel
Faces and objects of expertise compete for early perceptual processes and holistic processing resources (Gauthier, Curran, Curby, & Collins, 2003). Here, we examined the nature of interference on holistic face processing in working memory by comparing how various types of loads affect selective attention to parts of face composites. In dual…
Burke, Darren; Sulikowski, Danielle
In this paper we examine the holistic processing of faces from an evolutionary perspective, clarifying what such an approach entails, and evaluating the extent to which the evidence currently available permits any strong conclusions. While it seems clear that the holistic processing of faces depends on mechanisms evolved to perform that task, our review of the comparative literature reveals that there is currently insufficient evidence (or sometimes insufficiently compelling evidence) to decide when in our evolutionary past such processing may have arisen. It is also difficult to assess what kinds of selection pressures may have led to evolution of such a mechanism, or even what kinds of information holistic processing may have originally evolved to extract, given that many sources of socially relevant face-based information other than identity depend on integrating information across different regions of the face – judgments of expression, behavioral intent, attractiveness, sex, age, etc. We suggest some directions for future research that would help to answer these important questions. PMID:23382721
Sasson, Noah J.
Both behavioral and neuroimaging evidence indicate that individuals with autism demonstrate marked abnormalities in the processing of faces. These abnormalities are often explained as either the result of an innate impairment to specialized neural systems or as a secondary consequence of reduced levels of social interest. A review of the…
Riggs, Lily; Fujioka, Takako; Chan, Jessica; McQuiggan, Douglas A.; Anderson, Adam K.; Ryan, Jennifer D.
The processing of emotional as compared to neutral information is associated with different patterns in eye movement and neural activity. However, the ‘emotionality’ of a stimulus can be conveyed not only by its physical properties, but also by the information that is presented with it. There is very limited work examining how emotional information may influence the immediate perceptual processing of otherwise neutral information. We examined how presenting an emotion label for a neutral face may influence subsequent processing by using eye movement monitoring (EMM) and magnetoencephalography (MEG) simultaneously. Participants viewed a series of faces with neutral expressions. Each face was followed by a unique negative or neutral sentence to describe that person, and then the same face was presented in isolation again. Viewing of faces paired with a negative sentence was associated with increased early viewing of the eye region and increased neural activity between 600 and 1200 ms in emotion processing regions such as the cingulate, medial prefrontal cortex, and amygdala, as well as posterior regions such as the precuneus and occipital cortex. Viewing of faces paired with a neutral sentence was associated with increased activity in the parahippocampal gyrus during the same time window. By monitoring behavior and neural activity within the same paradigm, these findings demonstrate that emotional information alters subsequent visual scanning and the neural systems that are presumably invoked to maintain a representation of the neutral information along with its emotional details. PMID:25566024
Jemel, Boutheina; Mottron, Laurent; Dawson, Michelle
Within the last 10 years, there has been an upsurge of interest in face processing abilities in autism which has generated a proliferation of new empirical demonstrations employing a variety of measuring techniques. Observably atypical social behaviors early in the development of children with autism have led to the contention that autism is a condition where the processing of social information, particularly faces, is impaired. While several empirical sources of evidence lend support to this hypothesis, others suggest that there are conditions under which autistic individuals do not differ from typically developing persons. The present paper reviews this bulk of empirical evidence, and concludes that the versatility and abilities of face processing in persons with autism have been underestimated.
Lambert, Beverley, Ed.
This collection of 14 essays addresses the changes and challenges that the early childhood education profession in Australia has faced in recent years, and covers a wide range of important issues of particular relevance to the preparation of early childhood professionals. The essays are: (1) "The Changing Ecology of Australian Childhood"…
Parr, Lisa A.
The ability to recognize faces is an important socio-cognitive skill that is associated with a number of cognitive specializations in humans. While numerous studies have examined the presence of these specializations in non-human primates, species where face recognition would confer distinct advantages in social situations, results have been mixed. The majority of studies in chimpanzees support homologous face-processing mechanisms with humans, but results from monkey studies appear largely dependent on the type of testing methods used. Studies that employ passive viewing paradigms, like the visual paired comparison task, report evidence of similarities between monkeys and humans, but tasks that use more stringent, operant response tasks, like the matching-to-sample task, often report species differences. Moreover, the data suggest that monkeys may be less sensitive than chimpanzees and humans to the precise spacing of facial features, in addition to the surface-based cues reflected in those features, information that is critical for the representation of individual identity. The aim of this paper is to provide a comprehensive review of the available data from face-processing tasks in non-human primates with the goal of understanding the evolution of this complex cognitive skill. PMID:21536559
Jemel, B; George, N; Olivares, E; Fiori, N; Renault, B
Thirty scalp sites were used to investigate the specific topography of the event-related potentials (ERPs) related to face associative priming when masked eyes of familiar faces were completed with either the proper features or incongruent ones. The enhanced negativities of the N210 and N350, due to structural incongruity of faces, have a "category specific" inferotemporal localization on the scalp. Additional analyses support the existence of multiple ERP features within the temporal interval typically associated with the N400 (N350 and N380), involving occipitotemporal and centroparietal areas. Seven reliable dipole locations were evidenced using the brain electrical source analysis algorithm. Some of these localizations (fusiform, parahippocampal) are already known to be involved in face recognition, the others being related to general cognitive processes tied to the task's demands. Because of their specific topography, the observed effects suggest that the face structural congruency process might involve early specialized neocortical areas in parallel with cortical memory circuits in the integration of perceptual and cognitive face processing.
Sylvester, C M; Petersen, S E; Luby, J L; Barch, D M
Individuals with anxiety disorders exhibit a 'vigilance-avoidance' pattern of attention to threatening stimuli when threatening and neutral stimuli are presented simultaneously, a phenomenon referred to as 'threat bias'. Modifying threat bias through cognitive retraining during adolescence reduces symptoms of anxiety, and so elucidating neural mechanisms of threat bias during adolescence is of high importance. We explored neural mechanisms by testing whether threat bias in adolescents is associated with generalized or threat-specific differences in the neural processing of faces. Subjects were categorized into those with (n = 25) and without (n = 27) threat avoidance based on a dot-probe task at average age 12.9 years. Threat avoidance in this cohort has previously been shown to index threat bias. Brain response to individually presented angry and neutral faces was assessed in a separate session using functional magnetic resonance imaging. Adolescents with threat avoidance exhibited lower activity for both angry and neutral faces relative to controls in several regions in the occipital, parietal, and temporal lobes involved in early visual and facial processing. Results generalized to happy, sad, and fearful faces. Adolescents with a prior history of depression and/or an anxiety disorder had lower activity for all faces in these same regions. A subset of results replicated in an independent dataset. Threat bias is associated with generalized, rather than threat-specific, differences in the neural processing of faces in adolescents. Findings may aid in the development of novel treatments for anxiety disorders that use attention training to modify threat bias.
Chakraborty, Anya; Chakrabarti, Bhismadev
We live in an age of ‘selfies.’ Yet, how we look at our own faces has seldom been systematically investigated. In this study we test if the visual processing of the highly familiar self-face is different from other faces, using psychophysics and eye-tracking. This paradigm also enabled us to test the association between the psychophysical properties of self-face representation and visual processing strategies involved in self-face recognition. Thirty-three adults performed a self-face recognition task from a series of self-other face morphs with simultaneous eye-tracking. Participants were found to look longer at the lower part of the face for self-face compared to other-face. Participants with a more distinct self-face representation, as indexed by a steeper slope of the psychometric response curve for self-face recognition, were found to look longer at upper part of the faces identified as ‘self’ vs. those identified as ‘other’. This result indicates that self-face representation can influence where we look when we process our own vs. others’ faces. We also investigated the association of autism-related traits with self-face processing metrics since autism has previously been associated with atypical self-processing. The study did not find any self-face specific association with autistic traits, suggesting that autism-related features may be related to self-processing in a domain specific manner. PMID:29487554
Duval, Elizabeth R; Lovelace, Christopher T; Aarant, Justin; Filion, Diane L
The purpose of this study was to investigate the effects of both facial expression and face gender on startle eyeblink response patterns at varying lead intervals (300, 800, and 3500ms) indicative of attentional and emotional processes. We aimed to determine whether responses to affective faces map onto the Defense Cascade Model (Lang et al., 1997) to better understand the stages of processing during affective face viewing. At 300ms, there was an interaction between face expression and face gender with female happy and neutral faces and male angry faces producing inhibited startle. At 3500ms, there was a trend for facilitated startle during angry compared to neutral faces. These findings suggest that affective expressions are perceived differently in male and female faces, especially at short lead intervals. Future studies investigating face processing should take both face gender and expression into account. © 2013.
Sun, Tianyi; Li, Lin; Xu, Yuanli; Zheng, Li; Zhang, Weidong; Zhou, Fanzhi Anita; Guo, Xiuyan
Previous research has reported women's superiority on face recognition tasks, taking sex differences in accuracy rates as the major evidence. By appropriately modifying the experimental task and examining reaction time as a behavioral measure, it was possible to explore which stage of face processing contributes to women's superiority. We used a modified delayed matching-to-sample task to investigate the time course characteristics of face recognition with ERPs, for both men and women. In each trial, participants matched successively presented faces to samples (target faces) by key pressing. Women were more accurate and faster than men on the task. ERP results showed that, compared to men, women had shorter peak latencies of the early components P100 and N170, as well as a larger mean amplitude of the late positive component P300. Correlations between P300 mean amplitudes and RTs were found for both sexes. In addition, reaction times of women but not men were positively correlated with N170 latencies. Overall, we provide further evidence for women's superiority in face recognition in both behavioral and neural measures. Copyright © 2016 Elsevier Ireland Ltd and Japan Neuroscience Society. All rights reserved.
Yovel, Galit; Wilmer, Jeremy B.; Duchaine, Brad
Faces are probably the most widely studied visual stimulus. Most research on face processing has used a group-mean approach that averages behavioral or neural responses to faces across individuals and treats variance between individuals as noise. However, individual differences in face processing can provide valuable information that complements and extends findings from group-mean studies. Here we demonstrate that studies employing an individual differences approach—examining associations and dissociations across individuals—can answer fundamental questions about the way face processing operates. In particular these studies allow us to associate and dissociate the mechanisms involved in face processing, tie behavioral face processing mechanisms to neural mechanisms, link face processing to broader capacities and quantify developmental influences on face processing. The individual differences approach we illustrate here is a powerful method that should be further explored within the domain of face processing as well as fruitfully applied across the cognitive sciences. PMID:25191241
Geng, Haiyan; Zhang, Shen; Li, Qi; Tao, Ran; Xu, Shan
Self-related information has been found to be processed more quickly and accurately in studies with supraliminal self-stimuli and traditional paradigms such as masked priming. We conducted two experiments to investigate whether subliminal self-face processing enjoys this advantage and the neural correlates of processing self-faces at both subliminal and supraliminal levels. We found that self-faces were quicker than famous-other faces to gain dominance against dynamic noise patterns during prolonged interocular suppression to enter awareness (Experiment 1). Meanwhile, subliminal contrast of self- and famous-other face processing was reflected in a reduced early vertex positive potential (VPP) component, whereas supraliminal self-other face differentiation was reflected in an enhanced N170, as well as a more positive late component (300-580ms, Experiment 2) to the self-face. The clear dissociations of self- and other-face processing found across our two experiments validate the self advantage. Our findings also contribute to understandings of the mechanisms underlying self-face processing at different levels of awareness. Copyright © 2012 Elsevier Ltd. All rights reserved.
Richler, Jennifer J.; Gauthier, Isabel; Wenger, Michael J.; Palmeri, Thomas J.
Researchers have used several composite face paradigms to assess holistic processing of faces. In the selective attention paradigm, participants decide whether one face part (e.g., top) is the same as a previously seen face part. Their judgment is affected by whether the irrelevant part of the test face is the same as or different than the…
Tottenham, N.; Hare, T. A.; Millner, A.; Gilhooly, T.; Zevin, J. D.; Casey, B. J.
A functional neuroimaging study examined the long-term neural correlates of early adverse rearing conditions in humans as they relate to socio-emotional development. Previously institutionalized (PI) children and a same-aged comparison group were scanned using functional magnetic resonance imaging (fMRI) while performing an Emotional Face Go/Nogo…
Richler, Jennifer J.; Floyd, R. Jackie; Gauthier, Isabel
Efforts to understand individual differences in high-level vision necessitate the development of measures that have sufficient reliability, which is generally not a concern in group studies. Holistic processing is central to research on face recognition and, more recently, to the study of individual differences in this area. However, recent work has shown that the most popular measure of holistic processing, the composite task, has low reliability. This is particularly problematic for the recent surge in interest in studying individual differences in face recognition. Here, we developed and validated a new measure of holistic face processing specifically for use in individual-differences studies. It avoids some of the pitfalls of the standard composite design and capitalizes on the idea that trial variability allows for better traction on reliability. Across four experiments, we refine this test and demonstrate its reliability. PMID:25228629
Silvert, Laetitia; Hewstone, Miles; Nobre, Anna C.
The present study investigated the influence of social factors upon the neural processing of faces of other races using event-related potentials. A multi-tiered approach was used to identify face-specific stages of processing, to test for effects of race-of-face upon processing at these stages, and to evaluate the impact of social contact and individuating experience upon these effects. The results showed that race-of-face has significant effects upon face processing, starting from early perceptual stages of structural encoding, and that social factors may play an important role in mediating these effects. PMID:19015091
Zhao, Mintao; Bülthoff, Heinrich H; Bülthoff, Isabelle
Faces are processed holistically, so selective attention to 1 face part without any influence of the others often fails. In this study, 3 experiments investigated what type of facial information (shape or surface) underlies holistic face processing and whether generalization of holistic processing to nonexperienced faces requires extensive discrimination experience. Results show that facial shape information alone is sufficient to elicit the composite face effect (CFE), 1 of the most convincing demonstrations of holistic processing, whereas facial surface information is unnecessary (Experiment 1). The CFE is eliminated when faces differ only in surface but not shape information, suggesting that variation of facial shape information is necessary to observe holistic face processing (Experiment 2). Removing 3-dimensional (3D) facial shape information also eliminates the CFE, indicating the necessity of 3D shape information for holistic face processing (Experiment 3). Moreover, participants show similar holistic processing for faces with and without extensive discrimination experience (i.e., own- and other-race faces), suggesting that generalization of holistic processing to nonexperienced faces requires facial shape information, but does not necessarily require further individuation experience. These results provide compelling evidence that facial shape information underlies holistic face processing. This shape-based account not only offers a consistent explanation for previous studies of holistic face processing, but also suggests a new ground, in addition to expertise, for the generalization of holistic processing to different types of faces and to nonface objects. (c) 2016 APA, all rights reserved.
Bublatzky, Florian; Gerdes, Antje B. M.; White, Andrew J.; Riemer, Martin; Alpers, Georg W.
Human face perception is modulated by both emotional valence and social relevance, but their interaction has rarely been examined. Event-related brain potentials (ERP) to happy, neutral, and angry facial expressions with different degrees of social relevance were recorded. To implement a social anticipation task, relevance was manipulated by presenting faces of two specific actors as future interaction partners (socially relevant), whereas two other face actors remained non-relevant. In a further control task all stimuli were presented without specific relevance instructions (passive viewing). Face stimuli of four actors (2 women, from the KDEF) were randomly presented for 1s to 26 participants (16 female). Results showed an augmented N170, early posterior negativity (EPN), and late positive potential (LPP) for emotional in contrast to neutral facial expressions. Of particular interest, face processing varied as a function of experimental tasks. Whereas task effects were observed for P1 and EPN regardless of instructed relevance, LPP amplitudes were modulated by emotional facial expression and relevance manipulation. The LPP was specifically enhanced for happy facial expressions of the anticipated future interaction partners. This underscores that social relevance can impact face processing already at an early stage of visual processing. These findings are discussed within the framework of motivated attention and face processing theories. PMID:25076881
Maher, S.; Ekstrom, T.; Tong, Y.; Nickerson, L.D.; Frederick, B.; Chen, Y.
Face detection, the perceptual capacity to identify a visual stimulus as a face before probing deeper into specific attributes (such as its identity or emotion), is essential for social functioning. Despite the importance of this functional capacity, face detection and its underlying brain mechanisms are not well understood. This study evaluated the roles that the cortical face processing system, which is identified largely through studying other aspects of face perception, plays in face detection. Specifically, we used functional magnetic resonance imaging (fMRI) to examine the activations of the fusiform face area (FFA), occipital face area (OFA) and superior temporal sulcus (STS) when face detection was isolated from other aspects of face perception and when face detection was perceptually-equated across individual human participants (n=20). During face detection, FFA and OFA were significantly activated, even for stimuli presented at perceptual-threshold levels, whereas STS was not. During tree detection, however, FFA and OFA were responsive only for highly salient (i.e., high contrast) stimuli. Moreover, activation of FFA during face detection predicted a significant portion of the perceptual performance levels that were determined psychophysically for each participant. This pattern of results indicates that FFA and OFA have a greater sensitivity to face detection signals and selectively support the initial process of face vs. non-face object perception. PMID:26592952
de Heering, Adelaide; Houthuys, Sarah; Rossion, Bruno
Although it is acknowledged that adults integrate features into a representation of the whole face, there is still some disagreement about the onset and developmental course of holistic face processing. We tested adults and children from 4 to 6 years of age with the same paradigm measuring holistic face processing through an adaptation of the…
Lavelli, Manuela; Fogel, Alan
A microgenetic research design with a multiple case study method and a combination of quantitative and qualitative analyses was used to investigate interdyad differences in real-time dynamics and developmental change processes in mother-infant face-to-face communication over the first 3 months of life. Weekly observations of 24 mother-infant dyads with analyses performed dyad by dyad showed that most dyads go through 2 qualitatively different developmental phases of early face-to-face communication: After a phase of mutual attentiveness, mutual engagement begins in Weeks 7-8, with infant smiling and cooing bidirectionally linked with maternal mirroring. This gives rise to sequences of positive feedback that, by the 3rd month, dynamically stabilizes into innovative play routines. However, when there is a lack of bidirectional positive feedback between infant and maternal behaviors, and a lack of permeability of the early communicative patterns to incorporate innovations, the development of the mutual engagement phase is compromised. The findings contribute both to theories of relationship change processes and to clinical work with at-risk mother-infant interactions. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Beckes, Lane; Coan, James A; Morris, James P
Not much is known about the neural and psychological processes that promote the initial conditions necessary for positive social bonding. This study explores one method of conditioned bonding utilizing dynamics related to the social regulation of emotion and attachment theory. This form of conditioning involves repeated presentations of negative stimuli followed by images of warm, smiling faces. L. Beckes, J. Simpson, and A. Erickson (2010) found that this conditioning procedure results in positive associations with the faces measured via a lexical decision task, suggesting they are perceived as comforting. This study found that the P1 ERP was similarly modified by this conditioning procedure and the P1 amplitude predicted lexical decision times to insecure words primed by the faces. The findings have implications for understanding how the brain detects supportive people, the flexibility and modifiability of early ERP components, and social bonding more broadly. Copyright © 2013 Society for Psychophysiological Research.
Liu, Taosheng; Pinheiro, Ana; Zhao, Zhongxin; Nestor, Paul G.; McCarley, Robert W.; Niznikiewicz, Margaret A.
Both facial expression and tone of voice represent key signals of emotional communication but their brain processing correlates remain unclear. Accordingly, we constructed a novel implicit emotion recognition task consisting of simultaneously presented human faces and voices with neutral, happy, and angry valence, within the context of a task requiring recognition of monkey faces and voices. To investigate the temporal unfolding of the processing of affective information from human face-voice pairings, we recorded event-related potentials (ERPs) to these audiovisual test stimuli in 18 normal healthy subjects; N100, P200, N250, P300 components were observed at electrodes in the frontal-central region, while P100, N170, P270 were observed at electrodes in the parietal-occipital region. Results indicated a significant audiovisual stimulus effect on the amplitudes and latencies of components in the frontal-central region (P200, P300, and N250) but not the parietal-occipital region (P100, N170 and P270). Specifically, P200 and P300 amplitudes were more positive for emotional relative to neutral audiovisual stimuli, irrespective of valence, whereas N250 amplitude was more negative for neutral relative to emotional stimuli. No differentiation was observed between angry and happy conditions. The results suggest that the general effect of emotion on audiovisual processing can emerge as early as 200 msec (P200 peak latency) post stimulus onset, in spite of implicit affective processing task demands, and that such an effect is mainly distributed in the frontal-central region. PMID:22383987
Axelrod, Vadim; Rees, Geraint
Investigating the limits of unconscious processing is essential to understand the function of consciousness. Here, we explored whether holistic face processing, a mechanism believed to be important for face processing in general, can be accomplished unconsciously. Using a novel "eyes-face" stimulus we tested whether discrimination of pairs of eyes was influenced by the surrounding face context. While the eyes were fully visible, the faces that provided context could be rendered invisible through continuous flash suppression. Two experiments with three different sets of face stimuli and a subliminal learning procedure converged to show that invisible faces did not influence perception of visible eyes. In contrast, surrounding faces, when they were clearly visible, strongly influenced perception of the eyes. Thus, we conclude that conscious awareness might be a prerequisite for holistic face processing. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Gao, Zaifeng; Flevaris, Anastasia V; Robertson, Lynn C; Bentin, Shlomo
We used the composite-face illusion and Navon stimuli to determine the consequences of priming local or global processing on subsequent face recognition. The composite-face illusion reflects the difficulty of ignoring the task-irrelevant half-face while attending the task-relevant half if the half-faces in the composite are aligned. On each trial, participants first matched two Navon stimuli, attending to either the global or the local level, and then matched the upper halves of two composite faces presented sequentially. Global processing of Navon stimuli increased the sensitivity to incongruence between the upper and the lower halves of the composite face, relative to a baseline in which the composite faces were not primed. Local processing of Navon stimuli did not influence the sensitivity to incongruence. Although incongruence induced a bias toward different responses, this bias was not modulated by priming. We conclude that global processing of Navon stimuli augments holistic processing of the face.
Kaufman, Jessica; Ryan, Rebecca; Walsh, Louisa; Horey, Dell; Leask, Julie; Robinson, Priscilla; Hill, Sophie
Early childhood vaccination is an essential global public health practice that saves two to three million lives each year, but many children do not receive all the recommended vaccines. To achieve and maintain appropriate coverage rates, vaccination programmes rely on people having sufficient awareness and acceptance of vaccines. Face-to-face information or educational interventions are widely used to help parents understand why vaccines are important; explain where, how and when to access services; and address hesitancy and concerns about vaccine safety or efficacy. Such interventions are interactive, and can be adapted to target particular populations or identified barriers. This is an update of a review originally published in 2013. To assess the effects of face-to-face interventions for informing or educating parents about early childhood vaccination on vaccination status and parental knowledge, attitudes and intention to vaccinate. We searched CENTRAL, MEDLINE, Embase, five other databases, and two trial registries (July and August 2017). We screened reference lists of relevant articles, and contacted authors of included studies and experts in the field. We had no language or date restrictions. We included randomised controlled trials (RCTs) and cluster-RCTs evaluating the effects of face-to-face interventions delivered to parents or expectant parents to inform or educate them about early childhood vaccination, compared with control or with another face-to-face intervention. The World Health Organization recommends that children receive all early childhood vaccines, with the exception of human papillomavirus vaccine (HPV), which is delivered to adolescents. We used standard methodological procedures expected by Cochrane. Two authors independently reviewed all search results, extracted data and assessed the risk of bias of included studies. In this update, we found four new studies, for a total of ten studies. We included seven RCTs and three cluster-RCTs.
Li, Yongna; Tse, Chi-Shing
People can process multiple dimensions of facial properties simultaneously. Facial processing models are based on the processing of facial properties. The current study examined the processing of facial emotion, face race, and face gender using categorization tasks. The same set of Chinese, White and Black faces, each posing a neutral, happy or angry expression, was used in three experiments. Facial emotion interacted with face race in all the tasks. The interaction of face race and face gender was found in the race and gender categorization tasks, whereas the interaction of facial emotion and face gender was significant in the emotion and gender categorization tasks. These results provided evidence for a symmetric interaction between variant facial properties (emotion) and invariant facial properties (race and gender). PMID:27840621
Khalid, Shah; Ansorge, Ulrich
Prior research has provided evidence for (1) subcortical processing of subliminal facial expressions of emotion and (2) for the emotion-specificity of these processes. Here, we investigated if this is also true for the processing of the subliminal facial display of disgust. In Experiment 1, we used differently filtered masked prime faces portraying emotionally neutral or disgusted expressions presented prior to clearly visible target faces to test if the masked primes exerted an influence on target processing nonetheless. Whereas we found evidence for subliminal face congruence or priming effects, in particular, reverse priming by low spatial frequencies disgusted face primes, we did not find any support for a subcortical origin of the effect. In Experiment 2, we compared the influence of subliminal disgusted faces with that of subliminal fearful faces and demonstrated a behavioral performance difference between the two, pointing to an emotion-specific processing of the disgusted facial expressions. In both experiments, we also tested for the dependence of the subliminal emotional face processing on spatial attention – with mixed results, suggesting an attention-independence in Experiment 1 but not in Experiment 2 –, and we found perfect masking of the face primes – that is, proof of the subliminality of the prime faces. Based on our findings, we speculate that subliminal facial expressions of disgust could afford easy avoidance of these faces. This could be a unique effect of disgusted faces as compared to other emotional facial displays, at least under the conditions studied here. PMID:28680413
Devue, Christel; Barsics, Catherine
Most humans seem to demonstrate astonishingly high levels of skill in face processing if one considers the sophisticated level of fine-tuned discrimination that face recognition requires. However, numerous studies now indicate that the ability to process faces is not as fundamental as once thought and that performance can range from despairingly poor to extraordinarily high across people. Here we studied people who are super specialists of faces, namely portrait artists, to examine how their specific visual experience with faces relates to a range of face processing skills (perceptual discrimination, short- and longer term recognition). Artists show better perceptual discrimination and, to some extent, recognition of newly learned faces than controls. They are also more accurate on other perceptual tasks (i.e., involving non-face stimuli or mental rotation). By contrast, artists do not display an advantage compared to controls on longer term face recognition (i.e., famous faces) nor on person recognition from other sensorial modalities (i.e., voices). Finally, the face inversion effect exists in artists and controls and is not modulated by artistic practice. Advantages in face processing for artists thus seem to closely mirror perceptual and visual short term memory skills involved in portraiture. Copyright © 2016 Elsevier Ltd. All rights reserved.
Senese, Vincenzo Paolo; Shinohara, Kazuyuki; Esposito, Gianluca; Doi, Hirokazu; Venuti, Paola; Bornstein, Marc H.
Genetics, early experience, and culture shape caregiving, but it is still not clear how genetics, early experiences, and cultural factors might interact to influence specific caregiving propensities, such as adult responsiveness to infant cues. To address this gap, 80 Italian adults (50% M; 18-25 years) were (1) genotyped for two oxytocin receptor gene polymorphisms (rs53576 and rs2254298) and the serotonin transporter gene polymorphism (5-HTTLPR), which are implicated in parenting behaviour, (2) completed the Adult Parental Acceptance/Rejection Questionnaire to evaluate their recollections of parental behaviours toward them in childhood, and (3) were administered a Single Category Implicit Association Test to evaluate their implicit responses to faces of Italian infants, Japanese infants, and Italian adults. Analysis of implicit associations revealed that Italian infant faces were evaluated as most positive; participants in the rs53576 GG group had the most positive implicit associations to Italian infant faces; the serotonin polymorphism moderated the effect of early care experiences on adults’ implicit association to both Italian infant and adult female faces. Finally, 5-HTTLPR S carriers showed less positive implicit responses to Japanese infant faces. We conclude that adult in-group preference extends to in-group infant faces and that implicit responses to social cues are influenced by interactions of genetics, early care experiences, and cultural factors. These findings have implications for understanding processes that regulate adult caregiving. PMID:27650102
Robbins, Rachel; McKone, Elinor
The origin of "special" processing for upright faces has been a matter of ongoing debate. If it is due to generic expertise, as opposed to having some innate component, holistic processing should be learnable for stimuli other than upright faces. Here we assess inverted faces. We trained subjects to discriminate identical twins using up to 1100…
Lahaie, A; Mottron, L; Arguin, M; Berthiaume, C; Jemel, B; Saumier, D
Configural processing in autism was studied in Experiment 1 by using the face inversion effect. A normal inversion effect was observed in the participants with autism, suggesting intact configural face processing. A priming paradigm using partial or complete faces served in Experiment 2 to assess both local and configural face processing. Overall, normal priming effects were found in participants with autism, irrespective of whether the partial face primes were intuitive face parts (i.e., eyes, nose, etc.) or arbitrary segments. An exception, however, was that participants with autism showed magnified priming with single face parts relative to typically developing control participants. The present findings argue for intact configural processing in autism along with an enhanced processing for individual face parts. The face-processing peculiarities known to characterize autism are discussed on the basis of these results and past congruent results with nonsocial stimuli.
Pitts, Michael A; Martínez, Antígona; Brewer, James B; Hillyard, Steven A
The temporal sequence of neural processes supporting figure-ground perception was investigated by recording ERPs associated with subjects' perceptions of the face-vase figure. In Experiment 1, subjects continuously reported whether they perceived the face or the vase as the foreground figure by pressing one of two buttons. Each button press triggered a probe flash to the face region, the vase region, or the borders between the two. The N170/vertex positive potential (VPP) component of the ERP elicited by probes to the face region was larger when subjects perceived the faces as figure. Preceding the N170/VPP, two additional components were identified. First, when the borders were probed, ERPs differed in amplitude as early as 110 msec after probe onset depending on subjects' figure-ground perceptions. Second, when the face or vase regions were probed, ERPs were more positive (at ∼ 150-200 msec) when that region was perceived as figure versus background. These components likely reflect an early "border ownership" stage, and a subsequent "figure-ground segregation" stage of processing. To explore the influence of attention on these stages of processing, two additional experiments were conducted. In Experiment 2, subjects selectively attended to the face or vase region, and the same early ERP components were again produced. In Experiment 3, subjects performed an identical selective attention task, but on a display lacking distinctive figure-ground borders, and neither of the early components was produced. Results from these experiments suggest sequential stages of processing underlying figure-ground perception, each of which is subject to modification by selective attention.
Watson, Tamara L.
People with autism and schizophrenia have been shown to have a local bias in sensory processing and face recognition difficulties. A global or holistic processing strategy is known to be important when recognizing faces. Studies investigating face recognition in these populations are reviewed and show that holistic processing is employed despite lower overall performance in the tasks used. This implies that holistic processing is necessary but not sufficient for optimal face recognition and new avenues for research into face recognition based on network models of autism and schizophrenia are proposed. PMID:23847581
Skelly, Laurie R.; Decety, Jean
Emotionally expressive faces are processed by a distributed network of interacting sub-cortical and cortical brain regions. The components of this network have been identified and described in large part by the stimulus properties to which they are sensitive, but as face processing research matures interest has broadened to also probe dynamic interactions between these regions and top-down influences such as task demand and context. While some research has tested the robustness of affective face processing by restricting available attentional resources, it is not known whether face network processing can be augmented by increased motivation to attend to affective face stimuli. Short videos of people expressing emotions were presented to healthy participants during functional magnetic resonance imaging. Motivation to attend to the videos was manipulated by providing an incentive for improved recall performance. During the motivated condition, there was greater coherence among nodes of the face processing network, more widespread correlation between signal intensity and performance, and selective signal increases in a task-relevant subset of face processing regions, including the posterior superior temporal sulcus and right amygdala. In addition, an unexpected task-related laterality effect was seen in the amygdala. These findings provide strong evidence that motivation augments co-activity among nodes of the face processing network and the impact of neural activity on performance. These within-subject effects highlight the necessity to consider motivation when interpreting neural function in special populations, and to further explore the effect of task demands on face processing in healthy brains. PMID:22768287
Walla, Peter; Mayer, Dagmar; Deecke, Lüder; Lang, Wilfried
Magnetic field changes related to face encoding were recorded in 20 healthy young participants. Faces had to be deeply encoded under four kinds of simultaneous nasal chemical stimulation. Neutral room air, phenyl ethyl alcohol (PEA, rose flavor), carbon dioxide (CO2, pain), and hydrogen sulfide (H2S, rotten eggs flavor) were used as chemical stimuli. PEA and H2S represented odor stimuli, whereas CO2 was used for trigeminal stimulation (pain sensation). After the encoding of faces, the respective recognition performances were tested focusing on recognition effects related to specific chemical stimulation during encoding. The number of correctly recognized faces (hits) varied between chemical conditions. PEA stimulation during face encoding significantly increased the number of hits compared to the control condition. H2S also led to an increased mean number of hits, whereas simultaneous CO2 administration during face encoding resulted in a reduction. Analysis of the physiological data revealed two latency regions of interest. Compared to the control condition, both olfactory stimulus conditions resulted in reduced activity components peaking at about 260 ms after stimulus onset, whereas CO2 produced a strongly pronounced enhanced activity component peaking at about 700 ms after stimulus onset. Both olfactory conditions elicited only weak enhanced activities at about 700 ms, and CO2 did not show any difference activity at 260 ms after stimulus onset compared to the control condition. It is concluded that the early activity differences represent subconscious olfactory information processing leading to enhanced memory performances irrespective of the hedonic value, at least if they are only subconsciously processed. The later activity is suggested to reflect conscious CO2 perception negatively affecting face encoding and therefore leading to reduced subsequent face recognition. We interpret that conscious processing of nasal chemical stimulation competes with deep face encoding.
Xiao, Naiqi; Quinn, Paul C.; Ge, Liezhong; Lee, Kang
We report three experiments in which we investigated the effect of rigid facial motion on face processing. Specifically, we used the face composite effect to examine whether rigid facial motion influences primarily featural or holistic processing of faces. In Experiments 1, 2, and 3, participants were first familiarized with dynamic displays in which a target face turned from one side to another; then at test, participants judged whether the top half of a composite face (the top half of the target face aligned or misaligned with the bottom half of a foil face) belonged to the target face. We compared performance in the dynamic condition to various static control conditions in Experiments 1, 2, and 3, which differed from each other in terms of the display order of the multiple static images or the interstimulus interval (ISI) between the images. We found that the size of the face composite effect in the dynamic condition was significantly smaller than that in the static conditions. In other words, the dynamic face display influenced participants to process the target faces in a part-based manner and consequently their recognition of the upper portion of the composite face at test became less interfered with by the aligned lower part of the foil face. The findings from the present experiments provide the strongest evidence to date to suggest that rigid facial motion mainly influences facial featural, but not holistic, processing. PMID:22342561
Cheung, Olivia S.; Richler, Jennifer J.; Phillips, W. Stewart; Gauthier, Isabel
We examined whether temporal integration of face parts reflects holistic processing or response interference. Participants learned to name two faces “Fred” and two “Bob”. At test, top and bottom halves of different faces formed composites and were presented briefly separated in time. Replicating prior findings (Singer & Sheinberg, 2006), naming of the target halves for aligned composites was slowed when the irrelevant halves were from faces with a different name compared to that from the original face. However, no interference was observed when the irrelevant halves had identical names as the target halves but came from different learned faces, arguing against a true holistic effect. Instead, response interference was obtained when the target halves briefly preceded the irrelevant halves. Experiment 2 confirmed a double-dissociation between holistic processing vs. response interference for intact faces vs. temporally separated face halves, suggesting that simultaneous presentation of facial information is critical for holistic processing. PMID:21327378
Proverbio, Alice Mado; Riva, Federica; Martin, Eleonora; Zani, Alberto
Some behavioral and neuroimaging studies suggest that adults prefer to view attractive faces of the opposite sex more than attractive faces of the same sex. However, unlike the other-race face effect (Caldara et al., 2004), little is known regarding the existence of an opposite-/same-sex bias in face processing. In this study, the faces of 130 attractive male and female adults were foveally presented to 40 heterosexual university students (20 men and 20 women) who were engaged in a secondary perceptual task (landscape detection). The automatic processing of face gender was investigated by recording ERPs from 128 scalp sites. Neural markers of opposite- vs. same-sex bias in face processing included larger and earlier centro-parietal N400s in response to faces of the opposite sex and a larger late positivity (LP) to same-sex faces. Analysis of intra-cortical neural generators (swLORETA) showed that facial processing-related (FG, BA37, BA20/21) and emotion-related brain areas (the right parahippocampal gyrus, BA35; uncus, BA36/38; and the cingulate gyrus, BA24) had higher activations in response to opposite- than same-sex faces. The results of this analysis, along with data obtained from ERP recordings, support the hypothesis that both genders process opposite-sex faces differently than same-sex faces. The data also suggest a hemispheric asymmetry in the processing of opposite-/same-sex faces, with the right hemisphere involved in processing same-sex faces and the left hemisphere involved in processing faces of the opposite sex. The data support previous literature suggesting a right lateralization for the representation of self-image and body awareness.
Matheson, H E; Bilsbury, T G; McMullen, P A
A large body of research suggests that faces are processed by a specialized mechanism within the human visual system. This specialized mechanism is made up of subprocesses (Maurer, LeGrand, & Mondloch, 2002). One subprocess, called second-order relational processing, analyzes the metric distances between face parts. Importantly, it is well established that other-race faces and contrast-reversed faces are associated with impaired performance on numerous face processing tasks. Here, we investigated the specificity of second-order relational processing by testing how this process is applied to faces of different race and photographic contrast. Participants completed a feature displacement discrimination task, directly measuring the sensitivity to second-order relations between face parts. Across three experiments we show that, despite absolute differences in sensitivity in some conditions, inversion impaired performance in all conditions. The presence of robust inversion effects for all faces suggests that second-order relational processing can be applied to faces of different race and photographic contrast.
Ashwin, Chris; Wheelwright, Sally; Baron-Cohen, Simon
People show a left visual field (LVF) bias for faces, i.e., involving the right hemisphere of the brain. Lesion and neuroimaging studies confirm the importance of the right-hemisphere and suggest separable neural pathways for processing facial identity vs. emotions. We investigated the hemispheric processing of faces in adults with and without…
Compton, Rebecca J
Many recent studies have revealed that interaction between the left and right cerebral hemispheres can aid in task performance, but these studies have tended to examine perception of simple stimuli such as letters, digits or simple shapes, which may have limited naturalistic validity. The present study extends these prior findings to a more naturalistic face perception task. Matching tasks required subjects to indicate when a target face matched one of two probe faces. Matches could be either across-field, requiring inter-hemispheric interaction, or within-field, not requiring inter-hemispheric interaction. Subjects indicated when faces matched in emotional expression (Experiment 1; n=32) or in character identity (Experiment 2; n=32). In both experiments, across-field performance was significantly better than within-field performance, supporting the primary hypothesis. Further, this advantage was greater for the more difficult character identity task. Results offer qualified support for the hypothesis that inter-hemispheric interaction is especially advantageous as task demands increase.
Boggan, Amy L; Bartlett, James C; Krawczyk, Daniel C
Face processing has several distinctive hallmarks that researchers have attributed either to face-specific mechanisms or to extensive experience distinguishing faces. Here, we examined the face-processing hallmark of selective attention failure--as indexed by the congruency effect in the composite paradigm--in a domain of extreme expertise: chess. Among 27 experts, we found that the congruency effect was equally strong with chessboards and faces. Further, comparing these experts with recreational players and novices, we observed a trade-off: Chess expertise was positively related to the congruency effect with chess yet negatively related to the congruency effect with faces. These and other findings reveal a case of expertise-dependent, facelike processing of objects of expertise and suggest that face and expert-chess recognition share common processes.
Colasante, Tyler; Mossad, Sarah I; Dudek, Joanna; Haley, David W
Understanding the relative and joint prioritization of age- and valence-related face characteristics in adults' cortical face processing remains elusive because these two characteristics have not been manipulated in a single study of neural face processing. We used electroencephalography to investigate adults' P1, N170, P2 and LPP responses to infant and adult faces with happy and sad facial expressions. Viewing infant vs adult faces was associated with significantly larger P1, N170, P2 and LPP responses, with hemisphere and/or participant gender moderating this effect in select cases. Sad faces were associated with significantly larger N170 responses than happy faces. Sad infant faces were associated with significantly larger N170 responses in the right hemisphere than all other combinations of face age and face valence characteristics. We discuss the relative and joint neural prioritization of infant face characteristics and negative facial affect, and their biological value as distinct caregiving and social cues. © The Author (2016). Published by Oxford University Press.
Ramon, Meike; Rossion, Bruno
In two behavioral experiments involving lateralized stimulus presentation, we tested whether one of the most commonly used measures of holistic face processing--the composite face effect--would be more pronounced for stimuli presented to the right as compared to the left hemisphere. In experiment 1, we investigated the composite face effect in a…
Peters, Judith C; Vlamings, Petra; Kemner, Chantal
Face perception in adults depends on skilled processing of interattribute distances ('configural' processing), which is disrupted for faces presented in inverted orientation (face inversion effect or FIE). Children are not proficient in configural processing, and this might relate to an underlying immaturity to use facial information in low spatial frequency (SF) ranges, which capture the coarse information needed for configural processing. We hypothesized that during adolescence a shift from use of high to low SF information takes place. Therefore, we studied the influence of SF content on neural face processing in groups of children (9-10 years), adolescents (14-15 years) and young adults (21-29 years) by measuring event-related potentials (ERPs) to upright and inverted faces which varied in SF content. Results revealed that children show a neural FIE in early processing stages (i.e. P1; generated in early visual areas), suggesting a superficial, global facial analysis. In contrast, ERPs of adults revealed an FIE at later processing stages (i.e. N170; generated in face-selective, higher visual areas). Interestingly, adolescents showed FIEs in both processing stages, suggesting a hybrid developmental stage. Furthermore, adolescents and adults showed FIEs for stimuli containing low SF information, whereas such effects were driven by both low and high SF information in children. These results indicate that face processing has a protracted maturational course into adolescence, and is dependent on changes in SF processing. During adolescence, sensitivity to configural cues is developed, which aids the fast and holistic processing that is so special for faces. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Meaux, Emilie; Hernandez, Nadia; Carteau-Martin, Isabelle; Martineau, Joëlle; Barthélémy, Catherine; Bonnet-Brilhault, Frédérique; Batty, Magali
Although the wide neural network and specific processes related to faces have been revealed, the process by which face-processing ability develops remains unclear. An interest in faces appears early in infancy, and developmental findings to date have suggested a long maturation process of the mechanisms involved in face processing. These developmental changes may be supported by the acquisition of more efficient strategies to process faces (theory of expertise) and by the maturation of the face neural network identified in adults. This study aimed to clarify the link between event-related potential (ERP) development in response to faces and the behavioral changes in the way faces are scanned throughout childhood. Twenty-six young children (4-10 years of age) were included in two experimental paradigms, the first exploring ERPs during face processing, the second investigating the visual exploration of faces using an eye-tracking system. The results confirmed significant age-related changes in visual ERPs (P1, N170 and P2). Moreover, an increased interest in the eye region and an attentional shift from the mouth to the eyes were also revealed. The proportion of early fixations on the eye region was correlated with N170 and P2 characteristics, highlighting a link between the development of ERPs and gaze behavior. We suggest that these overall developmental dynamics may be sustained by a gradual, experience-dependent specialization in face processing (i.e. acquisition of face expertise), which produces a more automatic and efficient network associated with effortless identification of faces, and allows the emergence of human-specific social and communication skills. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
D'Souza, Dean; Cole, Victoria; Farran, Emily K; Brown, Janice H; Humphreys, Kate; Howard, John; Rodic, Maja; Dekker, Tessa M; D'Souza, Hana; Karmiloff-Smith, Annette
Face processing is a crucial socio-cognitive ability. Is it acquired progressively or does it constitute an innately-specified, face-processing module? The latter would be supported if some individuals with seriously impaired intelligence nonetheless showed intact face-processing abilities. Some theorists claim that Williams syndrome (WS) provides such evidence since, despite IQs in the 50s, adolescents/adults with WS score in the normal range on standardized face-processing tests. Others argue that atypical neural and cognitive processes underlie WS face-processing proficiencies. But what about infants with WS? Do they start with typical face-processing abilities, with atypicality developing later, or are atypicalities already evident in infancy? We used an infant familiarization/novelty design and compared infants with WS to typically developing controls as well as to a group of infants with Down syndrome matched on both mental and chronological age. Participants were familiarized with a schematic face, after which they saw a novel face in which either the features (eye shape) were changed or just the configuration of the original features. Configural changes were processed successfully by controls, but not by infants with WS who were only sensitive to featural changes and who showed syndrome-specific profiles different from infants with the other neurodevelopmental disorder. Our findings indicate that theorists can no longer use the case of WS to support claims that evolution has endowed the human brain with an independent face-processing module.
Zhan, Youlong; Chen, Jie; Xiao, Xiao; Li, Jin; Yang, Zilu; Fan, Wei; Zhong, Yiping
The present study adopted a reward-priming paradigm to investigate whether and how monetary reward cues affected self-face processing. Event-related potentials were recorded during judgments of head orientation of target faces (self, friend, and stranger), with performance associated with a monetary reward. The results showed self-faces elicited larger N2 mean amplitudes than other-faces, and mean N2 amplitudes increased after monetary reward as compared with no reward cue. Moreover, an interaction effect between cue type and face type was observed for the P3 component, suggesting that both self-faces and friend-faces elicited larger P3 mean amplitudes than stranger-faces after no reward cue, with no significant difference between self-faces and friend-faces under this condition. However, self-faces elicited larger P3 mean amplitudes than friend-faces when monetary reward cues were provided. Interestingly, the enhancement of reward on friend-faces processing was observed at late positive potentials (LPP; 450–600 ms), suggesting that the LPP difference between friend-faces and stranger-faces was enhanced with monetary reward cues. Thus, we found that the enhancement effect of reward on self-relevant processing occurred at the later stages, but not at the early stage. These findings suggest that the activation of the reward expectations can enhance self-face processing, yielding a robust and sustained modulation over their overlapped brain areas where reward and self-relevant processing mechanisms may operate together. PMID:27242637
Vuilleumier, Patrik; Armony, Jorge L; Driver, Jon; Dolan, Raymond J
High and low spatial frequency information in visual images is processed by distinct neural channels. Using event-related functional magnetic resonance imaging (fMRI) in humans, we show dissociable roles of such visual channels for processing faces and emotional fearful expressions. Neural responses in fusiform cortex, and effects of repeating the same face identity upon fusiform activity, were greater with intact or high-spatial-frequency face stimuli than with low-frequency faces, regardless of emotional expression. In contrast, amygdala responses to fearful expressions were greater for intact or low-frequency faces than for high-frequency faces. An activation of pulvinar and superior colliculus by fearful expressions occurred specifically with low-frequency faces, suggesting that these subcortical pathways may provide coarse fear-related inputs to the amygdala.
Zhu, Xiang-ru; Zhang, Hui-jun; Wu, Ting-ting; Luo, Wen-bo; Luo, Yue-jia
The perceptual processing of emotional conflict was studied using electrophysiological techniques to measure event-related potentials (ERPs). The emotional face-word Stroop task, in which emotion words are written in prominent red color across a face, was used to study emotional conflict. In each trial, the emotion word and facial expression were either congruent or incongruent (in conflict). When subjects were asked to identify the expression of the face during a trial, the incongruent condition evoked a more negative N170 ERP component in posterior lateral sites than in the congruent condition. In contrast, when subjects were asked to identify the word during a trial, the incongruent condition evoked a less negative N170 component than the congruent condition. The present findings extend our understanding of the control processes involved in emotional conflict by demonstrating that differentiation of emotional congruency begins at an early perceptual processing stage. (c) 2010 Elsevier Ireland Ltd. All rights reserved.
Mendez, Mario F.; Ringman, John M.; Shapira, Jill S.
Background: Developmental prosopagnosia (DP) and semantic dementia (SD) may be the two most common neurologic disorders of face processing, but their main clinical and pathophysiologic differences have not been established. To identify those features, we compared patients with DP and SD. Methods: Five patients with DP, five with right temporal-predominant SD, and ten normal controls underwent cognitive, visual perceptual, and face-processing tasks. Results: Although the patients with SD were more cognitively impaired than those with DP, the two groups did not differ statistically on the visual perceptual tests. On the face-processing tasks, the DP group had difficulty with configural analysis and they reported relying on serial, feature-by-feature analysis or awareness of salient features to recognize faces. By contrast, the SD group had problems with person knowledge and made semantically related errors. The SD group had better face familiarity scores, suggesting a potentially useful clinical test for distinguishing SD from DP. Conclusions: These two disorders of face processing represent clinically distinguishable disturbances along a right hemisphere face-processing network: DP, characterized by early configural agnosia for faces, and SD, characterized primarily by a multimodal person knowledge disorder. We discuss these preliminary findings in the context of the current literature on the face-processing network; recent studies suggest an additional right anterior temporal, unimodal face familiarity-memory deficit consistent with an “associative prosopagnosia.” PMID:26705265
Joseph, Jane E.; DiBartolo, Michelle D.; Bhatt, Ramesh S.
Although infants demonstrate sensitivity to some kinds of perceptual information in faces, many face capacities continue to develop throughout childhood. One debate is the degree to which children perceive faces analytically versus holistically and how these processes undergo developmental change. In the present study, school-aged children and adults performed a perceptual matching task with upright and inverted face and house pairs that varied in similarity of featural or 2nd order configural information. Holistic processing was operationalized as the degree of serial processing when discriminating faces and houses [i.e., increased reaction time (RT), as more features or spacing relations were shared between stimuli]. Analytical processing was operationalized as the degree of parallel processing (or no change in RT as a function of greater similarity of features or spatial relations). Adults showed the most evidence for holistic processing (most strongly for 2nd order faces) and holistic processing was weaker for inverted faces and houses. Younger children (6–8 years), in contrast, showed analytical processing across all experimental manipulations. Older children (9–11 years) showed an intermediate pattern with a trend toward holistic processing of 2nd order faces like adults, but parallel processing in other experimental conditions like younger children. These findings indicate that holistic face representations emerge around 10 years of age. In adults both 2nd order and featural information are incorporated into holistic representations, whereas older children only incorporate 2nd order information. Holistic processing was not evident in younger children. Hence, the development of holistic face representations relies on 2nd order processing initially then incorporates featural information by adulthood. PMID:26300838
Campatelli, G.; Federico, R. R.; Apicella, F.; Sicca, F.; Muratori, F.
Face processing has been studied and discussed in depth during previous decades in several branches of science, and evidence from research supports the view that this process is a highly specialized brain function. Several authors argue that difficulties in the use and comprehension of the information conveyed by human faces could represent a core…
Cassidy, Brittany S.; Krendl, Anne C.; Stanko, Kathleen A.; Rydell, Robert J.; Young, Steven G.; Hugenberg, Kurt
The dehumanization of Black Americans is an ongoing societal problem. Reducing configural face processing, a well-studied aspect of typical face encoding, decreases the activation of human-related concepts to White faces, suggesting that the extent that faces are configurally processed contributes to dehumanization. Because Black individuals are more dehumanized relative to White individuals, the current work examined how configural processing might contribute to their greater dehumanization. Study 1 showed that inverting faces (which reduces configural processing) reduced the activation of human-related concepts toward Black more than White faces. Studies 2a and 2b showed that reducing configural processing affects dehumanization by decreasing trust and increasing homogeneity among Black versus White faces. Studies 3a–d showed that configural processing effects emerge in racial outgroups for whom untrustworthiness may be a more salient group stereotype (i.e., Black, but not Asian, faces). Study 4 provided evidence that these effects are specific to reduced configural processing versus more general perceptual disfluency. Reduced configural processing may thus contribute to the greater dehumanization of Black relative to White individuals. PMID:29910510
Ponseti, J.; Granert, O.; van Eimeren, T.; Jansen, O.; Wolff, S.; Beier, K.; Deuschl, G.; Bosinski, H.; Siebner, H.
Human faces can motivate nurturing behaviour or sexual behaviour when adults see a child or an adult face, respectively. This suggests that face processing is tuned to detecting age cues of sexual maturity to stimulate the appropriate reproductive behaviour: either caretaking or mating. In paedophilia, sexual attraction is directed to sexually immature children. Therefore, we hypothesized that brain networks that normally are tuned to mature faces of the preferred gender show an abnormal tuning to sexual immature faces in paedophilia. Here, we use functional magnetic resonance imaging (fMRI) to test directly for the existence of a network which is tuned to face cues of sexual maturity. During fMRI, participants sexually attracted to either adults or children were exposed to various face images. In individuals attracted to adults, adult faces activated several brain regions significantly more than child faces. These brain regions comprised areas known to be implicated in face processing, and sexual processing, including occipital areas, the ventrolateral prefrontal cortex and, subcortically, the putamen and nucleus caudatus. The same regions were activated in paedophiles, but with a reversed preferential response pattern. PMID:24850896
Kelly, David J.; Liu, Shaoying; Rodger, Helen; Miellet, Sebastien; Ge, Liezhong; Caldara, Roberto
Perception and eye movements are affected by culture. Adults from Eastern societies (e.g. China) display a disposition to process information "holistically," whereas individuals from Western societies (e.g. Britain) process information "analytically." Recently, this pattern of cultural differences has been extended to face…
Ross, David A.; Gauthier, Isabel
Holistic processing is a hallmark of face processing. There is evidence that holistic processing is strongest for faces at identification distance, 2–10 meters from the observer. However, this evidence is based on tasks that have been little used in the literature and that are indirect measures of holistic processing. We use the composite task, a well-validated and frequently used paradigm, to measure the effect of viewing distance on holistic processing. In line with previous work, we find a congruency × alignment effect that is stronger for faces that are close (2m equivalent distance) than for faces that are further away (24m equivalent distance). In contrast, the alignment effect for same trials, used by several authors to measure holistic processing, produced results that are difficult to interpret. We conclude that our results converge with previous findings, providing more direct evidence for an effect of size on holistic processing. PMID:26500423
Rousselet, Guillaume A; Ince, Robin A A; van Rijsbergen, Nicola J; Schyns, Philippe G
In humans, the N170 event-related potential (ERP) is an integrated measure of cortical activity that varies in amplitude and latency across trials. Researchers often conjecture that N170 variations reflect cortical mechanisms of stimulus coding for recognition. Here, to settle the conjecture and understand cortical information processing mechanisms, we unraveled the coding function of N170 latency and amplitude variations in possibly the simplest socially important natural visual task: face detection. On each experimental trial, 16 observers saw face and noise pictures sparsely sampled with small Gaussian apertures. Reverse-correlation methods coupled with information theory revealed that the presence of the eye specifically covaries with behavioral and neural measurements: the left eye strongly modulates reaction times and lateral electrodes represent mainly the presence of the contralateral eye during the rising part of the N170, with maximum sensitivity before the N170 peak. Furthermore, single-trial N170 latencies code more about the presence of the contralateral eye than N170 amplitudes and early latencies are associated with faster reaction times. The absence of these effects in control images that did not contain a face refutes alternative accounts based on retinal biases or allocation of attention to the eye location on the face. We conclude that the rising part of the N170, roughly 120-170 ms post-stimulus, is a critical time-window in human face processing mechanisms, reflecting predominantly, in a face detection task, the encoding of a single feature: the contralateral eye. © 2014 ARVO.
Proverbio, Alice Mado
Several studies have demonstrated that women show a greater interest for social information and empathic attitude than men. This article reviews studies on sex differences in the brain, with particular reference to how males and females process faces and facial expressions, social interactions, pain of others, infant faces, faces in things (pareidolia phenomenon), opposite-sex faces, humans vs. landscapes, incongruent behavior, motor actions, biological motion, erotic pictures, and emotional information. Sex differences in oxytocin-based attachment response and emotional memory are also mentioned. In addition, we investigated how 400 different human faces were evaluated for arousal and valence dimensions by a group of healthy male and female University students. Stimuli were carefully balanced for sensory and perceptual characteristics, age, facial expression, and sex. As a whole, women judged all human faces as more positive and more arousing than men. Furthermore, they showed a preference for the faces of children and the elderly in the arousal evaluation. Regardless of face aesthetics, age, or facial expression, women rated human faces higher than men. The preference for opposite- vs. same-sex faces strongly interacted with facial age. Overall, both women and men exhibited differences in facial processing that could be interpreted in the light of evolutionary psychobiology. © 2016 Wiley Periodicals, Inc.
Issa, Elias; DiCarlo, James
Functional magnetic resonance imaging (fMRI) has revealed multiple subregions in monkey inferior temporal cortex (IT) that are selective for images of faces over other objects. The earliest of these subregions, the posterior lateral face patch (PL), has not been studied previously at the neurophysiological level. Perhaps not surprisingly, we found that PL contains a high concentration of ‘face selective’ cells when tested with standard image sets comparable to those used previously to define the region at the level of fMRI. However, we here report that several different image sets and analytical approaches converge to show that nearly all face selective PL cells are driven by the presence of a single eye in the context of a face outline. Most strikingly, images containing only an eye, even when incorrectly positioned in an outline, drove neurons nearly as well as full face images, and face images lacking only this feature led to longer latency responses. Thus, bottom-up face processing is relatively local and linearly integrates features -- consistent with parts-based models -- grounding investigation of how the presence of a face is first inferred in the IT face processing hierarchy. PMID:23175821
Free-flying honeybees (Apis mellifera) were used as a model to understand how a non-mammalian brain learns to recognise human faces (J Exp Biol 2005 v208p4709). Individual bees received differential conditioning to achromatic target and distractor face images; acquisition reached >70% correct choices.
Olivares, Ela I.; Iglesias, Jaime; Saavedra, Cristina; Trujillo-Barreto, Nelson J.; Valdés-Sosa, Mitchell
We analyze the functional significance of different event-related potentials (ERPs) as electrophysiological indices of face perception and face recognition, according to cognitive and neurofunctional models of face processing. Initially, the processing of faces seems to be supported by early extrastriate occipital cortices and revealed by modulations of the occipital P1. This early response is thought to reflect the detection of certain primary structural aspects indicating the presence grosso modo of a face within the visual field. The posterior-temporal N170 is more sensitive to the detection of faces as complex-structured stimuli and, therefore, to the presence of its distinctive organizational characteristics prior to within-category identification. In turn, the relatively late and probably more rostrally generated N250r and N400-like responses might respectively indicate processes of access and retrieval of face-related information, which is stored in long-term memory (LTM). New methods of analysis of electrophysiological and neuroanatomical data, namely, dynamic causal modeling, single-trial and time-frequency analyses, are highly recommended to advance in the knowledge of those brain mechanisms concerning face processing. PMID:26160999
Carmel, David; Bentin, Shlomo
To explore face specificity in visual processing, we compared the role of task-associated strategies and expertise on the N170 event-related potential (ERP) component elicited by human faces with the ERPs elicited by cars, birds, items of furniture, and ape faces. In Experiment 1, participants performed a car monitoring task and an animacy decision task. In Experiment 2, participants monitored human faces while faces of apes were the distracters. Faces elicited an equally conspicuous N170, significantly larger than the ERPs elicited by non-face categories regardless of whether they were ignored or had an equal status with other categories (Experiment 1), or were the targets (in Experiment 2). In contrast, the negative component elicited by cars during the same time range was larger if they were targets than if they were not. Furthermore, unlike the posterior-temporal distribution of the N170, the negative component elicited by cars and its modulation by task were more conspicuous at occipital sites. Faces of apes elicited an N170 that was similar in amplitude to that elicited by the human face targets, albeit peaking 10 ms later. As our participants were not ape experts, this pattern indicates that the N170 is face-specific, but not specie-specific, i.e. it is elicited by particular face features regardless of expertise. Overall, these results demonstrate the domain specificity of the visual mechanism implicated in processing faces, a mechanism which is not influenced by either task or expertise. The processing of other objects is probably accomplished by a more general visual processor, which is sensitive to strategic manipulations and attention.
Eimer, Martin; Holmes, Amanda
Results from recent event-related brain potential (ERP) studies investigating brain processes involved in the detection and analysis of emotional facial expression are reviewed. In all experiments, emotional faces were found to trigger an increased ERP positivity relative to neutral faces. The onset of this emotional expression effect was…
Soria Bauser, Denise; Thoma, Patrizia; Aizenberg, Victoria; Brüne, Martin; Juckel, Georg; Daum, Irene
Face and body perception rely on common processing mechanisms and activate similar but not identical brain networks. Patients with schizophrenia show impaired face perception, and the present study addressed for the first time body perception in this group. Seventeen patients diagnosed with schizophrenia or schizoaffective disorder were compared to 17 healthy controls on standardized tests assessing basic face perception skills (identity discrimination, memory for faces, recognition of facial affect). A matching-to-sample task including emotional and neutral faces, bodies and cars either in an upright or in an inverted position was administered to assess potential category-specific performance deficits and impairments of configural processing. Relative to healthy controls, schizophrenia patients showed poorer performance on the tasks assessing face perception skills. In the matching-to-sample task, they also responded more slowly and less accurately than controls, regardless of the stimulus category. Accuracy analysis showed significant inversion effects for faces and bodies across groups, reflecting configural processing mechanisms; however reaction time analysis indicated evidence of reduced inversion effects regardless of category in schizophrenia patients. The magnitude of the inversion effects was not related to clinical symptoms. Overall, the data point towards reduced configural processing, not only for faces but also for bodies and cars in individuals with schizophrenia. © 2011 Elsevier Ltd. All rights reserved.
Schmid, Petra C; Amodio, David M
Power is thought to increase discrimination toward subordinate groups, yet its effect on different forms of implicit bias remains unclear. We tested whether power enhances implicit racial stereotyping, in addition to implicit prejudice (i.e., evaluative associations), and examined the effect of power on the automatic processing of faces during implicit tasks. Study 1 showed that manipulated high power increased both forms of implicit bias, relative to low power. Using a neural index of visual face processing (the N170 component of the ERP), Study 2 revealed that power affected the encoding of White ingroup vs. Black outgroup faces. Whereas high power increased the relative processing of outgroup faces during evaluative judgments in the prejudice task, it decreased the relative processing of outgroup faces during stereotype trait judgments. An indirect effect of power on implicit prejudice through enhanced processing of outgroup versus ingroup faces suggested a potential link between face processing and implicit bias. Together, these findings demonstrate that power can affect implicit prejudice and stereotyping as well as early processing of racial ingroup and outgroup faces.
Tanzer, Michal; Weinbach, Noam; Mardo, Elite; Henik, Avishai; Avidan, Galia
Congenital prosopagnosia (CP) is a severe face processing impairment that occurs in the absence of any obvious brain damage and has often been associated with a more general deficit in deriving holistic relations between facial features or even between non-face shape dimensions. Here we further characterized this deficit and examined a potential way to ameliorate it. To this end we manipulated phasic alertness using alerting cues previously shown to modulate attention and enhance global processing of visual stimuli in normal observers. Specifically, we first examined whether individuals with CP, similarly to controls, would show greater global processing when exposed to an alerting cue in the context of a non-facial task (Navon global/local task). We then explored the effect of an alerting cue on face processing (upright/inverted face discrimination). Confirming previous findings, in the absence of alerting cues, controls showed a typical global bias in the Navon task and an inversion effect indexing holistic processing in the upright/inverted task, while CP failed to show these effects. Critically, when alerting cues preceded the experimental trials, both groups showed enhanced global interference and a larger inversion effect. These results suggest that phasic alertness may modulate visual processing and consequently, affect global/holistic perception. Hence, these findings further reinforce the notion that global/holistic processing may serve as a possible mechanism underlying the face processing deficit in CP. Moreover, they imply a possible route for enhancing face processing in individuals with CP and thus shed new light on potential amelioration of this disorder. Copyright © 2016 Elsevier Ltd. All rights reserved.
Itier, Roxane J; Taylor, Margot J
Using ERPs in a face recognition task, we investigated whether inversion and contrast reversal, which seem to disrupt different aspects of face configuration, differentially affected encoding and memory for faces. Upright, inverted, and negative (contrast-reversed) unknown faces were either immediately repeated (0-lag) or repeated after 1 intervening face (1-lag). The encoding condition (new) consisted of the first presentation of items correctly recognized in the two repeated conditions. 0-lag faces were recognized better and faster than 1-lag faces. Inverted and negative pictures elicited longer reaction times, lower hit rates, and higher false alarm rates than upright faces. ERP analyses revealed that negative and inverted faces affected both early (encoding) and late (recognition) stages of face processing. Early components (N170, VPP) were delayed and enhanced by both inversion and contrast reversal, which also affected the P1 and P2 components. Amplitudes were higher for inverted faces at frontal and parietal sites from 350 to 600 ms. Priming effects were seen at encoding stages, revealed by shorter latencies and smaller amplitudes of the N170 for repeated stimuli, which did not differ depending on face type. Repeated faces yielded more positive amplitudes than new faces from 250 to 450 ms frontally and from 400 to 600 ms parietally. However, ERP differences revealed that the magnitude of this repetition effect was smaller for negative and inverted than upright faces in the 0-lag but not the 1-lag condition. Thus, face encoding and recognition processes were affected differently by inversion and contrast reversal.
Ramon, Meike; Busigny, Thomas; Rossion, Bruno
Prosopagnosia is an impairment in individualizing faces that classically follows brain damage. Several studies have reported observations supporting an impairment of holistic/configural face processing in acquired prosopagnosia. However, this issue may require more compelling evidence as the cases reported were generally patients suffering from integrative visual agnosia, and the sensitivity of the paradigms used to measure holistic/configural face processing in normal individuals remains unclear. Here we tested a well-characterized case of acquired prosopagnosia (PS) with no object recognition impairment, in five behavioral experiments (whole/part and composite face paradigms with unfamiliar faces). In all experiments, for normal observers we found that processing of a given facial feature was affected by the location and identity of the other features in a whole face configuration. In contrast, the patient's results across these experiments indicate that she encodes local facial information independently of the other features embedded in the whole facial context. These observations and a survey of the literature indicate that abnormal holistic processing of the individual face may be a characteristic hallmark of prosopagnosia following brain damage, perhaps with various degrees of severity. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
Schwiedrzik, Caspar M.; Zarco, Wilbert; Everling, Stefan; Freiwald, Winrich A.
Faces transmit a wealth of social information. How this information is exchanged between face-processing centers and brain areas supporting social cognition remains largely unclear. Here we identify these routes using resting state functional magnetic resonance imaging in macaque monkeys. We find that face areas functionally connect to specific regions within frontal, temporal, and parietal cortices, as well as subcortical structures supporting emotive, mnemonic, and cognitive functions. This establishes the existence of an extended face-recognition system in the macaque. Furthermore, the face patch resting state networks and the default mode network in monkeys show a pattern of overlap akin to that between the social brain and the default mode network in humans: this overlap specifically includes the posterior superior temporal sulcus, medial parietal, and dorsomedial prefrontal cortex, areas supporting high-level social cognition in humans. Together, these results reveal the embedding of face areas into larger brain networks and suggest that the resting state networks of the face patch system offer a new, easily accessible venue into the functional organization of the social brain and into the evolution of possibly uniquely human social skills. PMID:26348613
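At its simplest, the resting-state functional connectivity used in studies like the one above is the correlation between the BOLD time courses of two regions. The following is a minimal illustrative sketch on simulated signals, not the authors' macaque analysis pipeline; the ROI names, signal model, and noise levels are assumptions for demonstration only:

```python
import numpy as np

rng = np.random.default_rng(42)
n_vols = 300                          # number of fMRI volumes (time points)
shared = rng.standard_normal(n_vols)  # common slow fluctuation

# Simulated BOLD time courses: a "face patch" and a connected STS-like
# region share covariation; a third control region is independent.
face_patch = shared + 0.5 * rng.standard_normal(n_vols)
sts_region = shared + 0.5 * rng.standard_normal(n_vols)
control_roi = rng.standard_normal(n_vols)

def functional_connectivity(a, b):
    """Pearson correlation between two ROI time courses."""
    return np.corrcoef(a, b)[0, 1]

fc_connected = functional_connectivity(face_patch, sts_region)
fc_control = functional_connectivity(face_patch, control_roi)
```

Thresholding such pairwise correlations across all region pairs is one common way a resting-state network, like the face patch network described above, can be defined.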
Sandford, Adam; Burton, A Mike
Face recognition is widely held to rely on 'configural processing', an analysis of spatial relations between facial features. We present three experiments in which viewers were shown distorted faces, and asked to resize these to their correct shape. Based on configural theories appealing to metric distances between features, we reason that this should be an easier task for familiar than unfamiliar faces (whose subtle arrangements of features are unknown). In fact, participants were inaccurate at this task, making between 8% and 13% errors across experiments. Importantly, we observed no advantage for familiar faces: in one experiment participants were more accurate with unfamiliars, and in two experiments there was no difference. These findings were not due to general task difficulty - participants were able to resize blocks of colour to target shapes (squares) more accurately. We also found an advantage of familiarity for resizing other stimuli (brand logos). If configural processing does underlie face recognition, these results place constraints on the definition of 'configural'. Alternatively, familiar face recognition might rely on more complex criteria - based on tolerance to within-person variation rather than highly specific measurement. Copyright © 2014 Elsevier B.V. All rights reserved.
Sharon, Haggai; Pasternak, Yotam; Ben Simon, Eti; Gruberger, Michal; Giladi, Nir; Krimchanski, Ben Zion; Hassin, David; Hendler, Talma
Background: The Vegetative State (VS) is a severe disorder of consciousness in which patients are awake but display no signs of awareness. Yet, recent functional magnetic resonance imaging (fMRI) studies have demonstrated evidence for covert awareness in VS patients by recording specific brain activations during a cognitive task. However, the possible existence of incommunicable subjective emotional experiences in VS patients remains largely unexplored. This study aimed to probe the question of whether VS patients retain a brain ability to selectively process external stimuli according to their emotional value and look for evidence of covert emotional awareness in patients. Methods and Findings: In order to explore these questions we employed the emotive impact of observing personally familiar faces, known to provoke specific perceptual as well as emotional brain activations. Four VS patients and thirteen healthy controls first underwent an fMRI scan while viewing pictures of non-familiar faces, personally familiar faces and pictures of themselves. In a subsequent imagery task participants were asked to actively imagine one of their parent's faces. Analyses focused on face and familiarity selective regional brain activations and inter-regional functional connectivity. Similar to controls, all patients displayed face selective brain responses with further limbic and cortical activations elicited by familiar faces. In patients as well as controls, connectivity was observed between emotional, visual and face specific areas, suggesting aware emotional perception. This connectivity was strongest in the two patients who later recovered. Notably, these two patients also displayed selective amygdala activation during familiar face imagery, with one further exhibiting face selective activations, indistinguishable from healthy controls. Conclusions: Taken together, these results show that selective emotional processing can be elicited in VS patients both by external emotionally
Zhou, Guifei; Liu, Jiangang; Ding, Xiao Pan; Fu, Genyue; Lee, Kang
Numerous developmental studies have suggested that the other-race effect (ORE) in face recognition emerges as early as in infancy and develops steadily throughout childhood. However, there is very limited research on the neural mechanisms underlying this developmental ORE. The present study used Granger causality analysis (GCA) to examine the development of children's cortical networks in processing own- and other-race faces. Children were between 3 and 13 years of age. An old-new paradigm was used to assess their own- and other-race face recognition while an ETG-4000 system (Hitachi Medical Co., Japan) acquired functional near infrared spectroscopy (fNIRS) data. After preprocessing, for each participant and under each face condition, we obtained the causal map by calculating the weights of causal relations between the time courses of [oxy-Hb] of each pair of channels using GCA. To investigate further the differential causal connectivity for own-race faces and other-race faces at the group level, a repeated measures analysis of variance (ANOVA) was performed on the GCA weights for each pair of channels with the face race task (own-race face vs. other-race face) as the within-subject variable and age as a between-subject factor (continuous variable). We found an age-related increase in functional connectivity, paralleling a similar age-related improvement in behavioral face processing ability. More importantly, we found that the significant differences in neural functional connectivity between the recognition of own-race faces and that of other-race faces were modulated by age. Thus, like the behavioral ORE, the neural ORE emerges early and undergoes a protracted developmental course. PMID:27713696
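The pairwise Granger causality weights described in the abstract above can be sketched in a few lines: regress one channel's signal on its own past (restricted model) and on its own past plus the other channel's past (full model), and take the log ratio of residual variances. This is a minimal illustration on simulated channels, not the authors' fNIRS pipeline; the lag order and the toy [oxy-Hb] signals are assumptions:

```python
import numpy as np

def granger_weight(x, y, lag=2):
    """Granger-causal weight from channel x to channel y.

    Compares an autoregressive model of y on its own past (restricted)
    with one that also includes the past of x (full). The weight
    ln(var_restricted / var_full) grows when x's past improves the
    prediction of y beyond y's own history.
    """
    n = len(y)
    target = y[lag:]
    past_y = np.column_stack([y[lag - k:n - k] for k in range(1, lag + 1)])
    past_x = np.column_stack([x[lag - k:n - k] for k in range(1, lag + 1)])

    def resid_var(design):
        X = np.column_stack([np.ones(len(target)), design])
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        return np.var(target - X @ beta)

    return np.log(resid_var(past_y) /
                  resid_var(np.column_stack([past_y, past_x])))

# Toy time courses: channel x drives channel y at lag 1
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.zeros(500)
y[1:] = 0.8 * x[:-1] + 0.1 * rng.standard_normal(499)

gc_xy = granger_weight(x, y)   # large: x's past predicts y
gc_yx = granger_weight(y, x)   # near zero: y's past does not predict x
```

Computing this weight for every ordered pair of channels yields the kind of causal map the study submits to its group-level ANOVA.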
Müsch, Kathrin; Hamamé, Carlos M; Perrone-Bertolotti, Marcela; Minotti, Lorella; Kahane, Philippe; Engel, Andreas K; Lachaux, Jean-Philippe; Schneider, Till R
Face processing depends on the orchestrated activity of a large-scale neuronal network. Its activity can be modulated by attention as a function of task demands. However, it remains largely unknown whether voluntary, endogenous attention and reflexive, exogenous attention to facial expressions equally affect all regions of the face-processing network, and whether such effects primarily modify the strength of the neuronal response, the latency, the duration, or the spectral characteristics. We exploited the good temporal and spatial resolution of intracranial electroencephalography (iEEG) and recorded from depth electrodes to uncover the fast dynamics of emotional face processing. We investigated frequency-specific responses and event-related potentials (ERP) in the ventral occipito-temporal cortex (VOTC), ventral temporal cortex (VTC), anterior insula, orbitofrontal cortex (OFC), and amygdala when facial expressions were task-relevant or task-irrelevant. All investigated regions of interest (ROI) were clearly modulated by task demands and exhibited stronger changes in stimulus-induced gamma band activity (50-150 Hz) when facial expressions were task-relevant. Observed latencies demonstrate that the activation is temporally coordinated across the network, rather than serially proceeding along a processing hierarchy. Early and sustained responses to task-relevant faces in VOTC and VTC corroborate their role for the core system of face processing, but they also occurred in the anterior insula. Strong attentional modulation in the OFC and amygdala (300 msec) suggests that the extended system of the face-processing network is only recruited if the task demands active face processing. Contrary to our expectation, we rarely observed differences between fearful and neutral faces. Our results demonstrate that activity in the face-processing network is susceptible to the deployment of selective attention. Moreover, we show that endogenous attention operates along the whole
Homola, György A.; Jbabdi, Saad; Beckmann, Christian F.; Bartsch, Andreas J.
Age is one of the most salient aspects in faces and of fundamental cognitive and social relevance. Although face processing has been studied extensively, brain regions responsive to age have yet to be localized. Using evocative face morphs and fMRI, we segregate two areas extending beyond the previously established face-sensitive core network, centered on the inferior temporal sulci and angular gyri bilaterally, both of which process changes of facial age. By means of probabilistic tractography, we compare their patterns of functional activation and structural connectivity. The ventral portion of Wernicke's understudied perpendicular association fasciculus is shown to interconnect the two areas, and activation within these clusters is related to the probability of fiber connectivity between them. In addition, post-hoc age-rating competence is found to be associated with high response magnitudes in the left angular gyrus. Our results provide the first evidence that facial age has a distinct representation pattern in the posterior human brain. We propose that particular face-sensitive nodes interact with additional object-unselective quantification modules to obtain individual estimates of facial age. This brain network processing the age of faces differs from the cortical areas that have previously been linked to less developmental but instantly changeable face aspects. Our probabilistic method of associating activations with connectivity patterns reveals an exemplary link that can be used to further study, assess and quantify structure-function relationships. PMID:23185334
Richler, Jennifer J; Tanaka, James W; Brown, Danielle D; Gauthier, Isabel
One hallmark of holistic face processing is an inability to selectively attend to 1 face part while ignoring information in another part. In 3 sequential matching experiments, the authors tested perceptual and decisional accounts of holistic processing by measuring congruency effects between cued and uncued composite face halves shown in spatially aligned or disjointed configurations. The authors found congruency effects when the top and bottom halves of the study face were spatially aligned, misaligned (Experiment 1), or adjacent to one another (Experiment 2). However, at test, congruency effects were reduced by misalignment and abolished for adjacent configurations. This suggests that manipulations at test are more influential than manipulations at study, consistent with a decisional account of holistic processing. When encoding demands for study and test faces were equated (Experiment 3), the authors observed effects of study configuration suggesting that, consistent with a perceptual explanation, encoding does influence the magnitude of holistic processing. Together, these results cannot be accounted for by current perceptual or decisional accounts of holistic processing and suggest the existence of an attention-dependent mechanism that can integrate spatially separated face parts.
Brown, Christopher P.; Feger, Beth Smith
Federal, state, and local policy makers' high-stakes standards-based accountability reforms are transforming the early childhood teacher education process. These reforms affect how early education teacher candidates figure their role as teachers. By employing Holland, Lachicotte, Skinner, and Cain's conception of figured worlds to analyze the…
Frühholz, Sascha; Jellinghaus, Anne; Herrmann, Manfred
Facial expressions are important emotional stimuli during social interactions. Symbolic emotional cues, such as affective words, also convey information regarding emotions that is relevant for social communication. Various studies have demonstrated fast decoding of emotions from words, as was shown for faces, whereas others report a rather delayed decoding of information about emotions from words. Here, we introduced an implicit (color naming) and explicit task (emotion judgment) with facial expressions and words, both containing information about emotions, to directly compare the time course of emotion processing using event-related potentials (ERP). The data show that only negative faces affected task performance, resulting in increased error rates compared to neutral faces. Presentation of emotional faces resulted in a modulation of the N170, the EPN and the LPP components and these modulations were found during both the explicit and implicit tasks. Emotional words only affected the EPN during the explicit task, but a task-independent effect on the LPP was revealed. Finally, emotional faces modulated source activity in the extrastriate cortex underlying the generation of the N170, EPN and LPP components. Emotional words led to a modulation of source activity corresponding to the EPN and LPP, but they also affected the N170 source on the right hemisphere. These data show that facial expressions affect earlier stages of emotion processing compared to emotional words, but the emotional value of words may have been detected at early stages of emotional processing in the visual cortex, as was indicated by the extrastriate source activity. Copyright © 2011 Elsevier B.V. All rights reserved.
Samson, Hélène; Fiori-Duharcourt, Nicole; Doré-Mazars, Karine; Lemoine, Christelle; Vergilino-Perez, Dorine
Previous studies have demonstrated a left perceptual bias while looking at faces, due to the fact that observers mainly use information from the left side of a face (from the observer's point of view) to perform a judgment task. Such a bias is consistent with the right hemisphere dominance for face processing and has sometimes been linked to a left gaze bias, i.e. more and/or longer fixations on the left side of the face. Here, we recorded eye movements in two different experiments during a gender judgment task, using normal and chimeric faces which were presented above, below, right or left of the central fixation point or on it (central position). Participants performed the judgment task by remaining fixated on the fixation point or after executing several saccades (up to three). A left perceptual bias was not systematically found as it depended on the number of allowed saccades and face position. Moreover, the gaze bias clearly depended on the face position as the initial fixation was guided by face position and landed on the closest half-face, toward the center of gravity of the face. The analysis of the subsequent fixations revealed that observers move their eyes from one side to the other. More importantly, no apparent link between gaze and perceptual biases was found here. This implies that we do not necessarily look toward the side of the face that we use to perform the gender judgment task. Despite the fact that these results may be limited by the absence of perceptual and gaze biases in some conditions, we emphasized the inter-individual differences observed in terms of perceptual bias, hinting at the importance of performing individual analyses and drawing attention to the influence of the method used to study this bias. PMID:24454927
Feusner, Jamie D; Townsend, Jennifer; Bystritsky, Alexander; Bookheimer, Susan
Body dysmorphic disorder (BDD) is a severe psychiatric condition in which individuals are preoccupied with perceived appearance defects. Clinical observation suggests that patients with BDD focus on details of their appearance at the expense of configural elements. This study examines abnormalities in visual information processing in BDD that may underlie clinical symptoms. Objective: To determine whether patients with BDD have abnormal patterns of brain activation when visually processing others' faces with high, low, or normal spatial frequency information. Design: Case-control study. Setting: University hospital. Participants: Twelve right-handed, medication-free subjects with BDD and 13 control subjects matched by age, sex, and educational achievement. Intervention: Functional magnetic resonance imaging while performing matching tasks of face stimuli. Stimuli were neutral-expression photographs of others' faces that were unaltered, altered to include only high spatial frequency visual information, or altered to include only low spatial frequency visual information. Main Outcome Measures: Blood oxygen level-dependent functional magnetic resonance imaging signal changes in the BDD and control groups during tasks with each stimulus type. Results: Subjects with BDD showed greater left hemisphere activity relative to controls, particularly in lateral prefrontal cortex and lateral temporal lobe regions for all face tasks (and dorsal anterior cingulate activity for the low spatial frequency task). Controls recruited left-sided prefrontal and dorsal anterior cingulate activity only for the high spatial frequency task. Conclusions: Subjects with BDD demonstrate fundamental differences from controls in visually processing others' faces. The predominance of left-sided activity for low spatial frequency and normal faces suggests detail encoding and analysis rather than holistic processing, a pattern evident in controls only for high spatial frequency faces. These abnormalities may be associated with apparent perceptual distortions in patients with BDD. The
Bultitude, Janet H.; Downing, Paul E.; Rafal, Robert D.
Patients with hemispatial neglect (‘neglect’) following a brain lesion show difficulty responding or orienting to objects and events on the left side of space. Substantial evidence supports the use of a sensorimotor training technique called prism adaptation as a treatment for neglect. Reaching for visual targets viewed through prismatic lenses that induce a rightward shift in the visual image results in a leftward recalibration of reaching movements that is accompanied by a reduction of symptoms in patients with neglect. The understanding of prism adaptation has also been advanced through studies of healthy participants, in whom adaptation to leftward prismatic shifts results in temporary neglect-like performance. Interestingly, prism adaptation can also alter aspects of non-lateralised spatial attention. We previously demonstrated that prism adaptation alters the extent to which neglect patients and healthy participants process local features versus global configurations of visual stimuli. Since deficits in non-lateralised spatial attention are thought to contribute to the severity of neglect symptoms, it is possible that the effect of prism adaptation on these deficits contributes to its efficacy. This study examines the pervasiveness of the effects of prism adaptation on perception by examining the effect of prism adaptation on configural face processing using a composite face task. The composite face task is a persuasive demonstration of the automatic global-level processing of faces: the top and bottom halves of two familiar faces form a seemingly new, unknown face when viewed together. Participants identified the top or bottom halves of composite faces before and after prism adaptation. Sensorimotor adaptation was confirmed by a significant pointing aftereffect; however, there was no significant change in the extent to which the irrelevant face half interfered with processing. The results support the proposal that the therapeutic effects of prism adaptation
McIntyre, Alex H; Hancock, Peter J B; Frowd, Charlie D; Langton, Stephen R H
Facial composite systems help eyewitnesses to show the appearance of criminals. However, likenesses created by unfamiliar witnesses will not be completely accurate, and people familiar with the target can find them difficult to identify. Faces are processed holistically; we explore whether this impairs identification of inaccurate composite images and whether recognition can be improved. In Experiment 1 (n = 64) an imaging technique was used to make composites of celebrity faces more accurate and identification was contrasted with the original composite images. Corrected composites were better recognized, confirming that errors in production of the likenesses impair identification. The influence of holistic face processing was explored by misaligning the top and bottom parts of the composites (cf. Young, Hellawell, & Hay, 1987). Misalignment impaired recognition of corrected composites but identification of the original, inaccurate composites significantly improved. This effect was replicated with facial composites of noncelebrities in Experiment 2 (n = 57). We conclude that, like real faces, facial composites are processed holistically: recognition is impaired because unlike real faces, composites contain inaccuracies and holistic face processing makes it difficult to perceive identifiable features. This effect was consistent across composites of celebrities and composites of people who are personally familiar. Our findings suggest that identification of forensic facial composites can be enhanced by presenting composites in a misaligned format. (c) 2016 APA, all rights reserved.
Zhang, Yu; Zhao, Qibin; Jin, Jing; Wang, Xingyu; Cichocki, Andrzej
This study introduces a novel brain-computer interface (BCI) based on an oddball paradigm using stimuli of facial images with loss of configural face information (e.g., inversion of the face). To the best of our knowledge, the configural processing of human faces has not previously been applied to BCI, although it has been widely studied in cognitive neuroscience research. Our experiments confirm that the face-sensitive event-related potential (ERP) components N170 and vertex positive potential (VPP) reflect early structural encoding of faces and can be modulated by the configural processing of faces. With the proposed novel paradigm, we investigate the effects of the ERP components N170, VPP and P300 on target detection for BCI. An eight-class BCI platform is developed to analyze ERPs and evaluate the target detection performance using linear discriminant analysis without complicated feature extraction processing. The online classification accuracy of 88.7% and information transfer rate of 38.7 bits min(-1) using stimuli of inverted faces with only a single trial suggest that the proposed paradigm based on the configural processing of faces is very promising for visual stimuli-driven BCI applications.
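The linear discriminant analysis used for target detection in BCIs of this kind reduces to a Fisher discriminant applied to single-trial ERP feature vectors. The sketch below is a hedged numpy illustration on simulated amplitudes, not the authors' eight-class platform; the feature layout, effect size, and class balance are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_feats = 200, 16   # trials per class, ERP amplitude features

# Simulated single-trial features: target trials carry an added
# ERP-like pattern (a stand-in for N170/VPP/P300 amplitude differences).
pattern = rng.standard_normal(n_feats)
nontargets = rng.standard_normal((n_trials, n_feats))
targets = rng.standard_normal((n_trials, n_feats)) + 0.8 * pattern

def fit_lda(a, b):
    """Fisher LDA: weight vector and midpoint threshold for classes a, b."""
    mu_a, mu_b = a.mean(axis=0), b.mean(axis=0)
    pooled = (np.cov(a, rowvar=False) + np.cov(b, rowvar=False)) / 2
    w = np.linalg.solve(pooled, mu_b - mu_a)   # whitened class-difference axis
    threshold = w @ (mu_a + mu_b) / 2
    return w, threshold

# Train on the first half of each class, test on the held-out second half
w, thr = fit_lda(nontargets[:100], targets[:100])
test_x = np.vstack([nontargets[100:], targets[100:]])
test_y = np.array([0] * 100 + [1] * 100)
pred = (test_x @ w > thr).astype(int)
accuracy = (pred == test_y).mean()
```

The appeal of LDA in this setting, as the abstract notes, is that it needs no complicated feature extraction: raw or lightly preprocessed amplitudes can be fed to the discriminant directly.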
Monk, Christopher S; Weng, Shih-Jen; Wiggins, Jillian Lee; Kurapati, Nikhil; Louro, Hugo M C; Carrasco, Melisa; Maslowsky, Julie; Risi, Susan; Lord, Catherine
Autism spectrum disorders (ASD) are associated with severe impairments in social functioning. Because faces provide nonverbal cues that support social interactions, many studies of ASD have examined neural structures that process faces, including the amygdala, ventromedial prefrontal cortex and superior and middle temporal gyri. However, increases or decreases in activation are often contingent on the cognitive task. Specifically, the cognitive domain of attention influences group differences in brain activation. We investigated brain function abnormalities in participants with ASD using a task that monitored attention bias to emotional faces. Twenty-four participants (12 with ASD, 12 controls) completed a functional magnetic resonance imaging study while performing an attention cuing task with emotional (happy, sad, angry) and neutral faces. In response to emotional faces, those in the ASD group showed greater right amygdala activation than those in the control group. A preliminary psychophysiological connectivity analysis showed that ASD participants had stronger positive right amygdala and ventromedial prefrontal cortex coupling and weaker positive right amygdala and temporal lobe coupling than controls. There were no group differences in the behavioural measure of attention bias to the emotional faces. The small sample size may have affected our ability to detect additional group differences. When attention bias to emotional faces was equivalent between ASD and control groups, ASD was associated with greater amygdala activation. Preliminary analyses showed that ASD participants had stronger connectivity between the amygdala and ventromedial prefrontal cortex (a network implicated in emotional modulation) and weaker connectivity between the amygdala and temporal lobe (a pathway involved in the identification of facial expressions, although areas of group differences were generally in a more anterior region of the temporal lobe than what is typically reported for
Tate, Andrew J; Fischer, Hanno; Leigh, Andrea E; Kendrick, Keith M
Visual cues from faces provide important social information relating to individual identity, sexual attraction and emotional state. Behavioural and neurophysiological studies on both monkeys and sheep have shown that specialized skills and neural systems for processing these complex cues to guide behaviour have evolved in a number of mammals and are not present exclusively in humans. Indeed, there are remarkable similarities in the ways that faces are processed by the brain in humans and other mammalian species. While human studies with brain imaging and gross neurophysiological recording approaches have revealed global aspects of the face-processing network, they cannot investigate how information is encoded by specific neural networks. Single neuron electrophysiological recording approaches in both monkeys and sheep have, however, provided some insights into the neural encoding principles involved and, particularly, the presence of a remarkable degree of high-level encoding even at the level of a specific face. Recent developments that allow simultaneous recordings to be made from many hundreds of individual neurons are also beginning to reveal evidence for global aspects of a population-based code. This review will summarize what we have learned so far from these animal-based studies about the way the mammalian brain processes the faces and the emotions they can communicate, as well as associated capacities such as how identity and emotion cues are dissociated and how face imagery might be generated. It will also try to highlight what questions and advances in knowledge still challenge us in order to provide a complete understanding of just how brain networks perform this complex and important social recognition task. PMID:17118930
Ibáñez, Agustín; Hurtado, Esteban; Lobos, Alejandro; Escobar, Josefina; Trujillo, Natalia; Baez, Sandra; Huepe, David; Manes, Facundo; Decety, Jean
Current research on empathy for pain emphasizes the overlap in the neural response between the first-hand experience of pain and its perception in others. However, recent studies suggest that the perception of the pain of others may reflect the processing of a threat or negative arousal rather than an automatic pro-social response. It can thus be suggested that pain processing of other-related, but not self-related, information could imply danger rather than empathy, due to the possible threat represented in the expressions of others (especially if associated with pain stimuli). To test this hypothesis, two experiments considering subliminal stimuli were designed. In Experiment 1, neutral and semantic pain expressions previously primed with own or other faces were presented to participants. When other-face priming was used, only the detection of semantic pain expressions was facilitated. In Experiment 2, pictures with pain and neutral scenarios previously used in ERP and fMRI research were used in a categorization task. Those pictures were primed with own or other faces following the same procedure as in Experiment 1 while ERPs were recorded. Early (N1) and late (P3) cortical responses between pain and no-pain were modulated only in the other-face priming condition. These results support the threat value of pain hypothesis and suggest the necessity for the inclusion of own- versus other-related information in future empathy for pain research. Copyright © 2011 Elsevier B.V. All rights reserved.
Joseph, Jane E; Swearingen, Joshua E; Clark, Jonathan D; Benca, Chelsie E; Collins, Heather R; Corbly, Christine R; Gathers, Ann D; Bhatt, Ramesh S
Greater expertise for faces in adults than in children may be achieved by a dynamic interplay of functional segregation and integration of brain regions throughout development. The present study examined developmental changes in face network functional connectivity in children (5-12 years) and adults (18-43 years) during face-viewing using a graph-theory approach. A face-specific developmental change involved connectivity of the right occipital face area. During childhood, this node increased in strength and within-module clustering based on positive connectivity. These changes reflect an important role of the ROFA in segregation of function during childhood. In addition, strength and diversity of connections within a module that included primary visual areas (left and right calcarine) and limbic regions (left hippocampus and right inferior orbitofrontal cortex) increased from childhood to adulthood, reflecting increased visuo-limbic integration. This integration was pronounced for faces but also emerged for natural objects. Taken together, the primary face-specific developmental changes involved segregation of a posterior visual module during childhood, possibly implicated in early stage perceptual face processing, and greater integration of visuo-limbic connections from childhood to adulthood, which may reflect processing related to development of perceptual expertise for individuation of faces and other visually homogenous categories. Copyright © 2012 Elsevier Inc. All rights reserved.
Doty, Tracy J.; Japee, Shruti; Ingvar, Martin; Ungerleider, Leslie G.
Stimuli that signal threat show considerable variability in the extent to which they enhance behavior, even among healthy individuals. However, the neural underpinning of this behavioral variability is not well understood. By manipulating expectation of threat in an fMRI study of fearful vs. neutral face categorization, we uncovered a network of areas underlying variability in threat processing in healthy adults. We explicitly altered expectation by presenting face images at three different expectation levels: 80%, 50%, and 20%. Subjects were instructed to report as fast and as accurately as possible whether the face was fearful (signaled threat) or not. An uninformative cue preceded each face by 4 seconds (s). By taking the difference between response times (RT) to fearful compared to neutral faces, we quantified an overall fear RT bias (i.e. faster to fearful than neutral faces) for each subject. This bias correlated positively with late trial fMRI activation (8 s after the face) during unexpected fearful face trials in bilateral ventromedial prefrontal cortex, the left subgenual cingulate cortex, and the right caudate nucleus and correlated negatively with early trial fMRI activation (4 s after the cue) during expected neutral face trials in bilateral dorsal striatum and the right ventral striatum. These results demonstrate that the variability in threat processing among healthy adults is reflected not only in behavior but also in the magnitude of activation in medial prefrontal and striatal regions that appear to encode affective value. PMID:24841078
DeGutis, Joseph; Wilmer, Jeremy; Mercado, Rogelio J.; Cohan, Sarah
Although holistic processing is thought to underlie normal face recognition ability, widely discrepant reports have recently emerged about this link in an individual differences context. Progress in this domain may have been impeded by the widespread use of subtraction scores, which lack validity due to their contamination with control condition…
Lavelli, Manuela; Fogel, Alan
A microgenetic research design with a multiple case study method and a combination of quantitative and qualitative analyses was used to investigate interdyad differences in real-time dynamics and developmental change processes in mother-infant face-to-face communication over the first 3 months of life. Weekly observations of 24 mother-infant dyads…
Rolls, Edmund T
Neurophysiological evidence is described showing that some neurons in the macaque inferior temporal visual cortex have responses that are invariant with respect to the position, size, view, and spatial frequency of faces and objects, and that these neurons show rapid processing and rapid learning. Critical band spatial frequency masking is shown to be a property of these face-selective neurons and of the human visual perception of faces. Which face or object is present is encoded using a distributed representation in which each neuron conveys independent information in its firing rate, with little information evident in the relative time of firing of different neurons. This ensemble encoding has the advantages of maximizing the information in the representation useful for discrimination between stimuli using a simple weighted sum of the neuronal firing by the receiving neurons, generalization, and graceful degradation. These invariant representations are ideally suited to provide the inputs to brain regions such as the orbitofrontal cortex and amygdala that learn the reinforcement associations of an individual's face, for then the learning, and the appropriate social and emotional responses generalize to other views of the same face. A theory is described of how such invariant representations may be produced by self-organizing learning in a hierarchically organized set of visual cortical areas with convergent connectivity. The theory utilizes either temporal or spatial continuity with an associative synaptic modification rule. Another population of neurons in the cortex in the superior temporal sulcus encodes other aspects of faces such as face expression, eye-gaze, face view, and whether the head is moving. These neurons thus provide important additional inputs to parts of the brain such as the orbitofrontal cortex and amygdala that are involved in social communication and emotional behaviour. Outputs of these systems reach the amygdala, in which face
Mondloch, Catherine J.; Segalowitz, Sidney J.; Lewis, Terri L.; Dywan, Jane; Le Grand, Richard; Maurer, Daphne
The expertise of adults in face perception is facilitated by their ability to rapidly detect that a stimulus is a face. In two experiments, we examined the role of early visual input in the development of face detection by testing patients who had been treated as infants for bilateral congenital cataract. Experiment 1 indicated that, at age 9 to…
Neuhaus, Emily; Jones, Emily J. H.; Barnes, Karen; Sterling, Lindsey; Estes, Annette; Munson, Jeff; Dawson, Geraldine; Webb, Sara J.
Both autism spectrum (ASD) and anxiety disorders are associated with atypical neural and attentional responses to emotional faces, differing in affective face processing from typically developing peers. Within a longitudinal study of children with ASD (23 male, 3 female), we hypothesized that early ERPs to emotional faces would predict concurrent…
Wang, Chao-Chih; Ross, David A.; Gauthier, Isabel; Richler, Jennifer J.
The Vanderbilt Holistic Face Processing Test (VHPT-F) is a new measure of holistic face processing with better psychometric properties relative to prior measures developed for group studies (Richler et al., 2014). In fields where psychologists study individual differences, validation studies are commonplace and the concurrent validity of a new measure is established by comparing it to an older measure with established validity. We follow this approach and test whether the VHPT-F measures the same construct as the composite task, which is a group-based measure at the center of the large literature on holistic face processing. In Experiment 1, we found a significant correlation between holistic processing measured in the VHPT-F and the composite task. Although this correlation was small, it was comparable to the correlation between holistic processing measured in the composite task with the same faces, but different target parts (top or bottom), which represents a reasonable upper limit for correlations between the composite task and another measure of holistic processing. These results confirm the validity of the VHPT-F by demonstrating shared variance with another measure of holistic processing based on the same operational definition. These results were replicated in Experiment 2, but only when the demographic profile of our sample matched that of Experiment 1. PMID:27933014
Wieser, Matthias J; Brosch, Tobias
Facial expressions are of eminent importance for social interaction as they convey information about other individuals' emotions and social intentions. According to the predominant "basic emotion" approach, the perception of emotion in faces is based on the rapid, automatic categorization of prototypical, universal expressions. Consequently, the perception of facial expressions has typically been investigated using isolated, de-contextualized, static pictures of facial expressions that maximize the distinction between categories. However, in everyday life, an individual's face is not perceived in isolation, but almost always appears within a situational context, which may arise from other people, the physical environment surrounding the face, as well as multichannel information from the sender. Furthermore, situational context may be provided by the perceiver, including already present social information gained from affective learning and implicit processing biases such as race bias. Thus, the perception of facial expressions is presumably always influenced by contextual variables. In this comprehensive review, we aim at (1) systematizing the contextual variables that may influence the perception of facial expressions and (2) summarizing experimental paradigms and findings that have been used to investigate these influences. The studies reviewed here demonstrate that perception and neural processing of facial expressions are substantially modified by contextual information, including verbal, visual, and auditory information presented together with the face as well as knowledge or processing biases already present in the observer. These findings further challenge the assumption of automatic, hardwired categorical emotion extraction mechanisms predicted by basic emotion theories. Taking into account a recent model on face processing, we discuss where and when these different contextual influences may take place, thus outlining potential avenues in future research.
Mo, Ce; He, Dongjun; Fang, Fang
Attention priority maps are topographic representations that are used for attention selection and guidance of task-related behavior during visual processing. Previous studies have identified attention priority maps of simple artificial stimuli in multiple cortical and subcortical areas, but investigating neural correlates of priority maps of natural stimuli is complicated by the complexity of their spatial structure and the difficulty of behaviorally characterizing their priority map. To overcome these challenges, we reconstructed the topographic representations of upright/inverted face images from fMRI BOLD signals in human early visual areas, namely primary visual cortex (V1) and extrastriate cortex (V2 and V3), based on a voxelwise population receptive field model. We characterized the priority map behaviorally as the first saccadic eye movement pattern when subjects performed a face-matching task relative to the condition in which subjects performed a phase-scrambled face-matching task. We found that the differential first saccadic eye movement pattern between upright/inverted and scrambled faces could be predicted from the reconstructed topographic representations in V1-V3 in humans of either sex. The coupling between the reconstructed representation and the eye movement pattern increased from V1 to V2/3 for the upright faces, whereas no such effect was found for the inverted faces. Moreover, face inversion modulated the coupling in V2/3, but not in V1. Our findings provide new evidence for priority maps of natural stimuli in early visual areas and extend traditional attention priority map theories by revealing another critical factor that affects priority maps in extrastriate cortex in addition to physical salience and task goal relevance: image configuration. SIGNIFICANCE STATEMENT Prominent theories of attention posit that attention sampling of visual information is mediated by a series of interacting topographic representations of visual space known as
Michelon, Pascale; Biederman, Irving
There have been a number of reports of preserved face imagery in prosopagnosia. We put this issue to experimental test by comparing the performance of MJH, a 34-year-old prosopagnosic since the age of 5, to controls on tasks where the participants had to judge faces of current celebrities, either in terms of overall similarity (Of Bette Midler, Hillary Clinton, and Diane Sawyer, whose face looks least like the other two?) or on individual features (Is Ronald Reagan's nose pointy?). For each task, a performance measure reflecting the degree of agreement of each participant with the average of the others (not including MJH) was calculated. On the imagery versions of these tasks, MJH was within the lower range of the controls for the agreement measure (though significantly below the mean of the controls). When the same tasks were performed from pictures, agreement among the controls markedly increased whereas MJH's performance was virtually unaffected, placing him well below the range of the controls. This pattern was also apparent with a test of facial features of emotion (Are the eyes wrinkled when someone is surprised?). On three non-face imagery tasks assessing color (What color is a football?), relative lengths of animal's tails (Is a bear's tail long in proportion to its body?), and mental size comparisons (What is bigger, a camel or a zebra?), MJH was within or close to the lower end of the normal range. As most of the celebrities became famous after the onset of MJH's prosopagnosia, our confirmation of the reports of less impaired face imagery in some prosopagnosics cannot be attributed to pre-lesion storage. We speculate that face recognition, in contrast to object recognition, relies more heavily on a representation that describes the initial spatial filter values so the metrics of the facial surface can be specified. If prosopagnosia is regarded as a form of simultanagnosia in which some of these filter values cannot be registered on any one encounter with
Guellai, Bahia; Mersad, Karima; Streri, Arlette
From birth, newborns show a preference for faces talking a native language compared to silent faces. The present study addresses two questions that remained unanswered by previous research: (a) Does the familiarity with the language play a role in this process and (b) Are all the linguistic and paralinguistic cues necessary in this case? Experiment 1 extended newborns' preference for native speakers to non-native ones. Given that fetuses and newborns are sensitive to the prosodic characteristics of speech, Experiments 2 and 3 presented faces talking native and nonnative languages with the speech stream being low-pass filtered. Results showed that newborns preferred looking at a person who talked to them even when only the prosodic cues were provided for both languages. Nonetheless, a familiarity preference for the previously talking face is observed in the "normal speech" condition (i.e., Experiment 1) and a novelty preference in the "filtered speech" condition (Experiments 2 and 3). This asymmetry reveals that newborns process these two types of stimuli differently and that they may already be sensitive to a mismatch between the articulatory movements of the face and the corresponding speech sounds. Copyright © 2014 Elsevier Inc. All rights reserved.
Burton, A Mike; Schweinberger, Stefan R; Jenkins, Rob; Kaufmann, Jürgen M
Face recognition is a remarkable human ability, which underlies a great deal of people's social behavior. Individuals can recognize family members, friends, and acquaintances over a very large range of conditions, and yet the processes by which they do this remain poorly understood, despite decades of research. Although a detailed understanding remains elusive, face recognition is widely thought to rely on configural processing, specifically an analysis of spatial relations between facial features (so-called second-order configurations). In this article, we challenge this traditional view, raising four problems: (1) configural theories are underspecified; (2) large configural changes leave recognition unharmed; (3) recognition is harmed by nonconfigural changes; and (4) in separate analyses of face shape and face texture, identification tends to be dominated by texture. We review evidence from a variety of sources and suggest that failure to acknowledge the impact of familiarity on facial representations may have led to an overgeneralization of the configural account. We argue instead that second-order configural information is remarkably unimportant for familiar face recognition. © The Author(s) 2015.
Batty, Magali; Taylor, Margot J.
Our facial expressions give others the opportunity to access our feelings, and constitute an important nonverbal tool for communication. Many recent studies have investigated emotional perception in adults, and our knowledge of neural processes involved in emotions is increasingly precise. Young children also use faces to express their internal…
Stephan, Blossom Christa Maree; Caine, Diana
Visual scanpath recording was used to investigate the information processing strategies used by a prosopagnosic patient, SC, when viewing faces. Compared to controls, SC showed an aberrant pattern of scanning, directing attention away from the internal configuration of facial features (eyes, nose) towards peripheral regions (hair, forehead) of the…
Reardon, Robert C.; Lenz, Janet G.; Sampson, James P., Jr.; Peterson, Gary W.
This article draws upon the authors' experience in developing cognitive information processing theory in order to examine three important questions facing vocational psychology and assessment: (a) Where should new knowledge for vocational psychology come from? (b) How do career theories and research find their way into practice? and (c) What is…
Hoehl, Stefanie; Striano, Tricia
Recent research has demonstrated that infants' attention towards novel objects is affected by an adult's emotional expression and eye gaze toward the object. The current event-related potential (ERP) study investigated how infants at 3, 6, and 9 months of age process fearful compared to neutral faces looking toward objects or averting gaze away…
Crookes, Kate; Favelle, Simone; Hayward, William G.
Recent evidence suggests stronger holistic processing for own-race faces may underlie the own-race advantage in face memory. In previous studies Caucasian participants have demonstrated larger holistic processing effects for Caucasian over Asian faces. However, Asian participants have consistently shown similar sized effects for both Asian and Caucasian faces. We investigated two proposed explanations for the holistic processing of other-race faces by Asian participants: (1) greater other-race exposure, (2) a general global processing bias. Holistic processing was tested using the part-whole task. Participants were living in predominantly own-race environments and other-race contact was evaluated. Despite reporting significantly greater contact with own-race than other-race people, Chinese participants displayed strong holistic processing for both Asian and Caucasian upright faces. In addition, Chinese participants showed no evidence of holistic processing for inverted faces arguing against a general global processing bias explanation. Caucasian participants, in line with previous studies, displayed stronger holistic processing for Caucasian than Asian upright faces. For inverted faces there were no race-of-face differences. These results are used to suggest that Asians may make more general use of face-specific mechanisms than Caucasians. PMID:23386840
Yang, Ping; Wang, Min; Jin, Zhenlan; Li, Ling
The ability to focus on task-relevant information, while suppressing distraction, is critical for human cognition and behavior. Using a delayed-match-to-sample (DMS) task, we investigated the effects of emotional face distractors (positive, negative, and neutral faces) on early and late phases of visual short-term memory (VSTM) maintenance intervals, using low and high VSTM loads. Behavioral results showed decreased accuracy and delayed reaction times (RTs) for high vs. low VSTM load. Event-related potentials (ERPs) showed enhanced frontal N1 and occipital P1 amplitudes for negative faces vs. neutral or positive faces, implying rapid attentional alerting effects and early perceptual processing of negative distractors. However, high VSTM load appeared to inhibit face processing in general, showing decreased N1 amplitudes and delayed P1 latencies. An inverse correlation between the N1 activation difference (high-load minus low-load) and RT costs (high-load minus low-load) was found at left frontal areas when viewing negative distractors, suggesting that the greater the inhibition the lower the RT cost for negative faces. Emotional interference effect was not found in the late VSTM-related parietal P300, frontal positive slow wave (PSW) and occipital negative slow wave (NSW) components. In general, our findings suggest that the VSTM load modulates the early attention and perception of emotional distractors. PMID:26388763
Yovel, Galit; Belin, Pascal
Both faces and voices are rich in socially-relevant information, which humans are remarkably adept at extracting, including a person's identity, age, gender, affective state, personality, etc. Here, we review accumulating evidence from behavioral, neuropsychological, electrophysiological, and neuroimaging studies which suggest that the cognitive and neural processing mechanisms engaged by perceiving faces or voices are highly similar, despite the very different nature of their sensory input. The similarity between the two mechanisms likely facilitates the multi-modal integration of facial and vocal information during everyday social interactions. These findings emphasize a parsimonious principle of cerebral organization, where similar computational problems in different modalities are solved using similar solutions. PMID:23664703
Farzmahdi, Amirhossein; Rajaei, Karim; Ghodrati, Masoud; Ebrahimpour, Reza; Khaligh-Razavi, Seyed-Mahdi
Converging reports indicate that face images are processed through specialized neural networks in the brain, i.e., face patches in monkeys and the fusiform face area (FFA) in humans. These studies were designed to find out how faces are processed in the visual system compared to other objects. Yet, the underlying mechanism of face processing is not completely understood. Here, we show that a hierarchical computational model, inspired by electrophysiological evidence on face processing in primates, is able to generate representational properties similar to those observed in monkey face patches (posterior, middle, and anterior patches). Since the most important goal of sensory neuroscience is linking neural responses with behavioral outputs, we test whether the proposed model, which is designed to account for neural responses in monkey face patches, is also able to predict well-documented behavioral face phenomena observed in humans. We show that the proposed model reproduces several cognitive face effects, such as the composite face effect and canonical face views. Our model provides insights about the underlying computations that transfer visual information from posterior to anterior face patches. PMID:27113635
There are many kinds of so-called irregular expressions in natural dialogues. Even if the words of a conversation are the same, different meanings can be conveyed by the speaker's feelings or facial expression. To understand dialogues well, a flexible dialogue processing system must properly infer the speaker's view. However, it is difficult to obtain the meaning of the speaker's sentences in various scenes using traditional methods. In this paper, a new approach to dialogue processing that incorporates information from the speaker's face is presented. We first divide conversation statements into several simple tasks. Second, we process each simple task using an independent processor. Third, we use information from the speaker's face to estimate the speaker's view and resolve ambiguities in the dialogue. The approach presented in this paper works efficiently because the independent processors run in parallel, writing partial results to a shared memory, incorporating partial results at appropriate points, and complementing each other. A parallel algorithm and a method for employing face information in dialogue machine translation are discussed, and some results are included in this paper.
Wieser, Matthias J; Pauli, Paul; Reicherts, Philipp; Mühlberger, Andreas
Anxiety is supposed to enhance the processing of threatening information. Here, we investigated the cortical processing of angry faces during anticipated public speaking. To elicit anxiety, a group of participants was told that they would have to perform a public speech. As a control condition, another group was told that they would have to write a short essay. During anticipation of these tasks, participants saw facial expressions (angry, happy, and neutral) while electroencephalogram was recorded. Event-related potential analysis revealed larger N170 amplitudes for angry compared to happy and neutral faces in the anxiety group. The early posterior negativity as an index of motivated attention was also enhanced for angry compared to happy and neutral faces in participants anticipating public speaking. These results indicate that fear of public speaking influences early perceptual processing of faces such that especially the processing of angry faces is facilitated.
Tanaka, James W.; Sung, Andrew
Although a growing body of research indicates that children with autism spectrum disorder (ASD) exhibit selective deficits in their ability to recognize facial identities and expressions, the source of their face impairment is, as yet, undetermined. In this paper, we consider three possible accounts of the autism face deficit: 1) the holistic hypothesis, 2) the local perceptual bias hypothesis and 3) the eye avoidance hypothesis. A review of the literature indicates that contrary to the holistic hypothesis, there is little evidence to suggest that individuals with autism do not perceive faces holistically. The local perceptual bias account also fails to explain the selective advantage that ASD individuals demonstrate for objects and their selective disadvantage for faces. The eye avoidance hypothesis provides a plausible explanation of face recognition deficits where individuals with ASD avoid the eye region because it is perceived as socially threatening. Direct eye contact elicits a heightened physiological response, as indicated by heightened skin conductance and increased amygdala activity. For individuals with autism, avoiding the eyes is an adaptive strategy; however, this approach interferes with the ability to decode facial information about identity, expression, and intentions, exacerbating the social challenges for persons with ASD. PMID:24150885
Tanaka, James W; Sung, Andrew
Although a growing body of research indicates that children with autism spectrum disorder (ASD) exhibit selective deficits in their ability to recognize facial identities and expressions, the source of their face impairment is, as yet, undetermined. In this paper, we consider three possible accounts of the autism face deficit: (1) the holistic hypothesis, (2) the local perceptual bias hypothesis and (3) the eye avoidance hypothesis. A review of the literature indicates that contrary to the holistic hypothesis, there is little evidence to suggest that individuals with autism do not perceive faces holistically. The local perceptual bias account also fails to explain the selective advantage that ASD individuals demonstrate for objects and their selective disadvantage for faces. The eye avoidance hypothesis provides a plausible explanation of face recognition deficits where individuals with ASD avoid the eye region because it is perceived as socially threatening. Direct eye contact elicits an increased physiological response, as indicated by heightened skin conductance and amygdala activity. For individuals with autism, avoiding the eyes is an adaptive strategy; however, this approach interferes with the ability to process facial cues of identity, expressions and intentions, exacerbating the social challenges for persons with ASD.
Li, Shuaixia; Li, Ping; Wang, Wei; Zhu, Xiangru; Luo, Wenbo
In this study, we presented pictorial representations of happy, neutral, and fearful expressions projected in the eye regions to determine whether the eye region alone is sufficient to produce a context effect. Participants were asked to judge the valence of surprised faces that had been preceded by a picture of an eye region. Behavioral results showed that affective ratings of surprised faces were context dependent. Prime-related ERPs with presentation of happy eyes elicited a larger P1 than those for neutral and fearful eyes, likely due to the recognition advantage provided by a happy expression. Target-related ERPs showed that surprised faces in the context of fearful and happy eyes elicited a dramatically larger C1 than those in the neutral context, which reflected the modulation by predictions during the earliest stages of face processing. N170 amplitudes were larger with neutral and fearful eye contexts than with the happy context, suggesting faces were being integrated with contextual threat information. The P3 component exhibited enhanced brain activity in response to faces preceded by happy and fearful eyes compared with neutral eyes, indicating motivated attention processing may be involved at this stage. Altogether, these results indicate for the first time that the influence of isolated eye regions on the perception of surprised faces involves preferential processing at the early stages and elaborate processing at the late stages. Moreover, higher cognitive processes such as predictions and attention can modulate face processing from the earliest stages in a top-down manner. © 2017 Society for Psychophysiological Research.
Luo, Qiu L.; Wang, Han L.; Dzhelyova, Milena; Huang, Ping; Mo, Lei
This study used event-related potentials (ERPs) to explore the neural correlates of the influence of affective personality information on face processing. In the learning phase, participants viewed a target individual's face (expression neutral or faint smile) paired with either negative, neutral or positive sentences describing previous typical behavior of the target. In the following EEG testing phase, participants completed gender judgments of the learned faces. Statistical analyses were conducted on measures of neural activity during the gender judgment task. Repeated measures ANOVA of ERP data showed that faces described as having a negative personality elicited a larger N170 than did those with a neutral or positive description. The early posterior negativity (EPN) showed the same result pattern, with larger amplitudes for faces paired with negative personality than for others. The late positive potential was larger for faces paired with positive personality than for those with neutral and negative personality. The current study indicates that affective personality information is associated with an automatic, top-down modulation of face processing. PMID:27303359
Graham, Reiko; LaBar, Kevin S.
The face conveys a rich source of non-verbal information used during social communication. While research has revealed how specific facial channels such as emotional expression are processed, little is known about the prioritization and integration of multiple cues in the face during dyadic exchanges. Classic models of face perception have emphasized the segregation of dynamic versus static facial features along independent information processing pathways. Here we review recent behavioral and neuroscientific evidence suggesting that within the dynamic stream, concurrent changes in eye gaze and emotional expression can yield early independent effects on face judgments and covert shifts of visuospatial attention. These effects are partially segregated within initial visual afferent processing volleys, but are subsequently integrated in limbic regions such as the amygdala or via reentrant visual processing volleys. This spatiotemporal pattern may help to resolve otherwise perplexing discrepancies across behavioral studies of emotional influences on gaze-directed attentional cueing. Theoretical explanations of gaze-expression interactions are discussed, with special consideration of speed-of-processing (discriminability) and contextual (ambiguity) accounts. Future research in this area promises to reveal the mental chronometry of face processing and interpersonal attention, with implications for understanding how social referencing develops in infancy and is impaired in autism and other disorders of social cognition. PMID:22285906
Zhao, Mintao; Cheung, Sing-Hang; Wong, Alan C-N; Rhodes, Gillian; Chan, Erich K S; Chan, Winnie W L; Hayward, William G
We investigated how face-selective cortical areas process configural and componential face information and how the race of faces may influence these processes. Participants saw blurred (preserving configural information), scrambled (preserving componential information), and whole faces during fMRI scanning, and performed a post-scan face recognition task using blurred or scrambled faces. The fusiform face area (FFA) showed stronger activation to blurred than to scrambled faces, and equivalent responses to blurred and whole faces. The occipital face area (OFA) showed stronger activation to whole faces than to blurred faces, which elicited responses similar to scrambled faces. Therefore, the FFA may be more tuned to process configural than componential information, whereas the OFA participates similarly in the perception of both. Differences in recognizing own- and other-race blurred faces were correlated with differences in FFA activation to those faces, suggesting that configural processing within the FFA may underlie the other-race effect in face recognition.
Zhang, Jiedong; Liu, Jia
Most human daily social interactions rely on the ability to successfully recognize faces. Yet ∼2% of the human population suffers from face blindness without any acquired brain damage, a condition also known as developmental prosopagnosia (DP) or congenital prosopagnosia. Despite the presence of severe behavioral face recognition deficits, surprisingly, a majority of DP individuals exhibit normal face selectivity in the right fusiform face area (FFA), a key brain region involved in face configural processing. This finding, together with evidence showing impairments downstream from the right FFA in DP individuals, has led some to argue that perhaps the right FFA is largely intact in DP individuals. Using fMRI multivoxel pattern analysis, here we report the discovery of a neural impairment in the right FFA of DP individuals that may play a critical role in mediating their face-processing deficits. In seven individuals with DP, we discovered that, despite the right FFA's preference for faces and its ability to decode the different face parts, it exhibited impaired face configural decoding and did not contain distinct neural response patterns for the intact and the scrambled face configurations. This abnormality was not present throughout the ventral visual cortex, as normal neural decoding was found in an adjacent object-processing region. To our knowledge, this is the first direct neural evidence showing impaired face configural processing in the right FFA in individuals with DP. The discovery of this neural impairment provides a new clue to our understanding of the neural basis of DP. PMID:25632131
Gaspari, Romolo J
A case is reported of a 38-year-old man presenting with early Ludwig's angina, an uncommon infection of the deep tissues of the face and neck that usually evolves from more superficial infections such as dental abscesses. It is difficult to differentiate superficial from deep infections of the face and neck by physical examination alone. The diagnosis of this condition with bedside soft tissue ultrasound of the face is described.
Niina, Megumi; Okamura, Jun-ya; Wang, Gang
Scalp event-related potential (ERP) studies have demonstrated larger N170 amplitudes when subjects view faces compared to items from object categories. Extensive attempts have been made to clarify face selectivity and hemispheric dominance for face processing. The purpose of this study was to investigate hemispheric differences in N170s activated by human faces and non-face objects, as well as the extent of overlap of their sources. ERP was recorded from 20 subjects while they viewed human face and non-face images. N170s obtained during the presentation of human faces appeared earlier and with larger amplitude than for other category images. Further source analysis with a two-dipole model revealed that the locations of face and object processing largely overlapped in the left hemisphere. Conversely, the source for face processing in the right hemisphere was located more anteriorly than the source for object processing. The results suggest that the neuronal circuits for face and object processing are largely shared in the left hemisphere, with more distinct circuits in the right hemisphere. Copyright © 2015 Elsevier B.V. All rights reserved.
Boutet, Isabelle; Meinhardt-Injac, Bozana
We simultaneously investigated three hypotheses regarding age-related differences in face processing: perceptual degradation, impaired holistic processing, and an interaction between the two. Young adults (YA) aged 20-33 years, middle-aged adults (MA) aged 50-64 years, and older adults (OA) aged 65-82 years were tested on the context congruency paradigm, which allows measurement of face-specific holistic processing across the life span (Meinhardt-Injac, Persike & Meinhardt, 2014. Acta Psychologica, 151, 155-163). Perceptual degradation was examined by measuring performance with faces that were not filtered (FSF), with faces filtered to preserve low spatial frequencies (LSF), and with faces filtered to preserve high spatial frequencies (HSF). We found that reducing perceptual signal strength had a greater impact on MA and OA for HSF faces, but not LSF faces. Context congruency effects were significant and of comparable magnitude across ages for FSF, LSF, and HSF faces. By using watches as control objects, we show that these holistic effects reflect face-specific mechanisms in all age groups. Our results support the perceptual degradation hypothesis for faces containing only HSF and suggest that holistic processing is preserved in aging even under conditions of reduced signal strength. © The Author(s) 2018. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved.
The 2013 Schroth Faces of the Future symposium was created to recognize early career professionals (those within 10 years of graduation) who represent the future in their field via innovative research. For this year, future faces in mycology research were recognized. Drs. Jason Slot, Erica Goss, Jam...
Holler, Judith; Kendrick, Kobin H; Levinson, Stephen C
The home of human language use is face-to-face interaction, a context in which communicative exchanges are characterised not only by bodily signals accompanying what is being said but also by a pattern of alternating turns at talk. This transition between turns is astonishingly fast: typically a mere 200 ms elapses between a current and a next speaker's contribution, meaning that comprehending, producing, and coordinating conversational contributions in time is a significant challenge. This raises the question of whether the additional information carried by bodily signals facilitates or hinders language processing in this time-pressured environment. We present analyses of multimodal conversations revealing that bodily signals appear to profoundly influence language processing in interaction: questions accompanied by gestures lead to shorter turn transition times (that is, faster responses) than questions without gestures, and responses come earlier when gestures end before, rather than after, the question turn has ended. These findings hold even after taking into account prosodic patterns and other visual signals, such as gaze. The empirical findings presented here provide a first glimpse of the role of the body in the psycholinguistic processes underpinning human communication.
Breton, Audrey; Jerbi, Karim; Henaff, Marie-Anne; Cheylus, Anne; Baudouin, Jean-Yves; Schmitz, Christina; Krolak-Salmon, Pierre; Van der Henst, Jean-Baptiste
Recognition of social hierarchy is a key feature that helps us navigate through our complex social environment. Neuroimaging studies have identified brain structures involved in the processing of hierarchical stimuli but the precise temporal dynamics of brain activity associated with such processing remains largely unknown. Here, we used electroencephalography to examine the effect of social hierarchy on neural responses elicited by faces. In contrast to previous studies, the key manipulation was that a hierarchical context was constructed, not by varying facial expressions, but by presenting neutral-expression faces in a game setting. Once the performance-based hierarchy was established, participants were presented with high-rank, middle-rank and low-rank player faces and had to evaluate the rank of each face with respect to their own position. Both event-related potentials and task-related oscillatory activity were investigated. Three main findings emerge from the study. First, the experimental manipulation had no effect on the early N170 component, which may suggest that hierarchy did not modulate the structural encoding of neutral-expression faces. Second, hierarchy significantly modulated the amplitude of the late positive potential (LPP) within a 400-700 ms time-window, with a more prominent LPP occurring when the participants processed the face of the highest-rank player. Third, high-rank faces were associated with the greatest reduction of alpha power. Taken together, these findings provide novel electrophysiological evidence for enhanced allocation of attentional resources in the presence of high-rank faces. At a broader level, this study brings new insights into the neural processing underlying social categorization.
Schulte Holthausen, Barbara; Habel, Ute; Kellermann, Thilo; Schelenz, Patrick D; Schneider, Frank; Christopher Edgar, J; Turetsky, Bruce I; Regenbogen, Christina
Facial threat is associated with changes in limbic activity as well as modifications in the cortical face-related N170. It remains unclear whether task-irrelevant threat modulates the response to a subsequent facial stimulus, and whether the amygdala's role in early threat perception is independent and direct, or modulatory. In 19 participants, crowds of emotional faces were followed by target faces and a rating task while simultaneous EEG-fMRI data were recorded. In addition to conventional analyses, fMRI-informed EEG analyses and fMRI dynamic causal modeling (DCM) were performed. Fearful crowds reduced EEG N170 target face amplitudes and increased responses in an fMRI network comprising insula, amygdala and inferior frontal cortex. Multimodal analyses showed that the amygdala response was present ∼60 ms before the right fusiform gyrus-derived N170. DCM indicated inhibitory connections from amygdala to fusiform gyrus, strengthened when fearful crowds preceded a target face. Results demonstrated the suppressing influence of task-irrelevant fearful crowds on subsequent face processing. The amygdala may be sensitive to task-irrelevant fearful crowds and subsequently strengthen its inhibitory influence on face-responsive fusiform N170 generators. This provides spatiotemporal evidence for a feedback mechanism of the amygdala by narrowing attention in order to focus on potential threats. © The Author (2016). Published by Oxford University Press.
Richler, Jennifer J.; Gauthier, Isabel
The concept of holistic processing is a cornerstone of face recognition research, yet central questions related to holistic processing remain unanswered, and debates have thus far failed to reach a resolution despite accumulating empirical evidence. We argue that a considerable source of confusion in this literature stems from a methodological problem. Specifically, two different measures of holistic processing based on the composite paradigm (complete design and partial design) are used in the literature, but they often lead to qualitatively different results. First, we present a comprehensive review of the work that directly compares the two designs, and which clearly favors the complete design over the partial design. Second, we report a meta-analysis of holistic face processing according to both designs, and use this as further evidence for one design over the other. The meta-analysis effect size of holistic processing in the complete design is nearly three times that of the partial design. Effect sizes were not correlated between measures, consistent with the suggestion that they do not measure the same thing. Our meta-analysis also examines the correlation between conditions in the complete design of the composite task, and suggests that in an individual differences context, little is gained by including a misaligned baseline. Finally, we offer a comprehensive review of the state of knowledge about holistic processing based on evidence gathered from the measure we favor based on the first sections of our review—the complete design—and outline outstanding research questions in that new context. PMID:24956123
Valdés-Conroy, Berenice; Aguado, Luis; Fernández-Cahill, María; Romero-Ferreiro, Verónica; Diéguez-Risco, Teresa
The effects of task demands and the interaction between gender and expression in face perception were studied using event-related potentials (ERPs). Participants performed three different tasks with male and female faces that were emotionally inexpressive or that showed happy or angry expressions. In two of the tasks (gender and expression categorization) facial properties were task-relevant, while in a third task (symbol discrimination) facial information was irrelevant. Effects of expression were observed on the visual P100 component under all task conditions, suggesting the operation of an automatic process that is not influenced by task demands. The earliest interaction between expression and gender was observed later, in the face-sensitive N170 component. This component showed differential modulations by specific combinations of gender and expression (e.g., angry male vs. angry female faces). Main effects of expression and task were observed in a later occipito-temporal component peaking around 230 ms post-stimulus onset (EPN, or early posterior negativity), with less positive amplitudes in the presence of angry faces and during performance of the gender and expression tasks. Finally, task demands also modulated a positive component peaking around 400 ms (LPC, or late positive complex) that showed enhanced amplitude for the gender task. The pattern of results obtained here adds new evidence about the sequence of operations involved in face processing and the interaction of facial properties (gender and expression) in response to different task demands. Copyright © 2014 Elsevier B.V. All rights reserved.
Garrido-Vásquez, Patricia; Pell, Marc D; Paulmann, Silke; Sehm, Bernhard; Kotz, Sonja A
Parkinson's disease (PD) affects patients beyond the motor domain. According to previous evidence, one mechanism that may be impaired in the disease is face processing. However, few studies have investigated this process at the neural level in PD. Moreover, research using dynamic facial displays rather than static pictures is scarce, but highly warranted due to the higher ecological validity of dynamic stimuli. In the present study we aimed to investigate how PD patients process emotional and non-emotional dynamic face stimuli at the neural level using event-related potentials. Since the literature has revealed a predominantly right-lateralized network for dynamic face processing, we divided the group into patients with left (LPD) and right (RPD) motor symptom onset (right versus left cerebral hemisphere predominantly affected, respectively). Participants watched short video clips of happy, angry, and neutral expressions and engaged in a shallow gender decision task in order to avoid confounds of task difficulty in the data. In line with our expectations, the LPD group showed significant face processing deficits compared to controls. While there were no group differences in early, sensory-driven processing (fronto-central N1 and posterior P1), the vertex positive potential, which is considered the fronto-central counterpart of the face-specific posterior N170 component, had a reduced amplitude and delayed latency in the LPD group. This may indicate disturbances of structural face processing in LPD. Furthermore, the effect was independent of the emotional content of the videos. In contrast, static facial identity recognition performance in LPD was not significantly different from controls, and comprehensive testing of cognitive functions did not reveal any deficits in this group. We therefore conclude that PD, and more specifically the predominantly right-hemispheric involvement in left-onset PD, is associated with impaired processing of dynamic facial expressions.
Contreras-Rodríguez, Oren; Pujol, Jesus; Batalla, Iolanda; Harrison, Ben J; Bosque, Javier; Ibern-Regàs, Immaculada; Hernández-Ribas, Rosa; Soriano-Mas, Carles; Deus, Joan; López-Solà, Marina; Pifarré, Josep; Menchón, José M; Cardoner, Narcís
Psychopaths show a reduced ability to recognize emotional facial expressions, which may disturb the development of interpersonal relationships and successful social adaptation. Behavioral hypotheses point toward an association between emotion recognition deficits in psychopathy and amygdala dysfunction. Our prediction was that amygdala dysfunction would combine deficient activation with disturbances in functional connectivity with cortical regions of the face-processing network. Twenty-two psychopaths and 22 control subjects were assessed, and functional magnetic resonance maps were generated to identify both brain activation and task-induced functional connectivity using psychophysiological interaction analysis during an emotional face-matching task. Results showed significant amygdala activation in control subjects only, but differences between study groups did not reach statistical significance. In contrast, psychopaths showed significantly increased activation in visual and prefrontal areas, with this latter activation being associated with psychopaths' affective-interpersonal disturbances. Psychophysiological interaction analyses revealed a reciprocal reduction in functional connectivity between the left amygdala and visual and prefrontal cortices. Our results suggest that emotional stimulation may evoke a relevant cortical response in psychopaths, but a disruption in the processing of emotional faces exists involving the reciprocal functional interaction between the amygdala and neocortex, consistent with the notion of a failure to integrate emotion into cognition in psychopathic individuals.
Suess, Franziska; Rabovsky, Milena; Abdel Rahman, Rasha
According to a widely held view, basic emotions such as happiness or anger are reflected in facial expressions that are invariant and uniquely defined by specific facial muscle movements. Accordingly, expression perception should not be vulnerable to influences outside the face. Here, we test this assumption by manipulating the emotional valence of biographical knowledge associated with individual persons. Faces of well-known and initially unfamiliar persons displaying neutral expressions were associated with socially relevant negative, positive or comparatively neutral biographical information. The expressions of faces associated with negative information were classified as more negative than faces associated with neutral information. Event-related brain potential modulations in the early posterior negativity, a component taken to reflect early sensory processing of affective stimuli such as emotional facial expressions, suggest that negative affective knowledge can bias the perception of faces with neutral expressions toward subjectively displaying negative emotions.
Cashon, Cara H.; Cohen, Leslie B.
The development of the "inversion" effect in face processing was examined in infants 3 to 6 months of age by testing their integration of the internal and external features of upright and inverted faces using a variation of the "switch" visual habituation paradigm. When combined with previous findings showing that 7-month-olds use integrative…
Richler, Jennifer J; Gauthier, Isabel
The concept of holistic processing is a cornerstone of face recognition research, yet central questions related to holistic processing remain unanswered, and debates have thus far failed to reach a resolution despite accumulating empirical evidence. We argue that a considerable source of confusion in this literature stems from a methodological problem. Specifically, 2 measures of holistic processing based on the composite paradigm (complete design and partial design) are used in the literature, but they often lead to qualitatively different results. First, we present a comprehensive review of the work that directly compares the 2 designs, which clearly favors the complete design over the partial design. Second, we report a meta-analysis of holistic face processing according to both designs and use this as further evidence for one design over the other. The meta-analysis effect size of holistic processing in the complete design is nearly 3 times that of the partial design. Effect sizes were not correlated between measures, consistent with the suggestion that they do not measure the same thing. Our meta-analysis also examines the correlation between conditions in the complete design of the composite task, and suggests that in an individual differences context, little is gained by including a misaligned baseline. Finally, we offer a comprehensive review of the state of knowledge about holistic processing based on evidence gathered from the measure we favor on the strength of the first sections of our review, the complete design, and outline outstanding research questions in that new context.
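As a rough illustration of the complete-design measure discussed in this abstract: holistic processing is typically indexed as the congruency effect (congruent minus incongruent performance on the cued face half), optionally baselined against misaligned composites. The sketch below is our simplified illustration using toy accuracy data, not the authors' analysis; trial records and names are hypothetical, and accuracy stands in for the d′ measures more commonly reported.

```python
# Hypothetical trial records: (alignment, congruency, correct?)
# In the complete design, the irrelevant face half can be congruent or
# incongruent with the correct "same/different" response for the cued half.
trials = [
    ("aligned", "congruent", 1), ("aligned", "congruent", 1),
    ("aligned", "incongruent", 0), ("aligned", "incongruent", 1),
    ("misaligned", "congruent", 1), ("misaligned", "congruent", 0),
    ("misaligned", "incongruent", 1), ("misaligned", "incongruent", 0),
]

def accuracy(records, alignment, congruency):
    """Proportion correct for one cell of the design."""
    hits = [c for a, g, c in records if a == alignment and g == congruency]
    return sum(hits) / len(hits)

# Congruency effect for aligned and misaligned composites.
effect_aligned = (accuracy(trials, "aligned", "congruent")
                  - accuracy(trials, "aligned", "incongruent"))
effect_misaligned = (accuracy(trials, "misaligned", "congruent")
                     - accuracy(trials, "misaligned", "incongruent"))

# Holistic processing as the congruency x alignment interaction.
holistic_index = effect_aligned - effect_misaligned
```

Note that the abstract's meta-analytic finding — that little is gained from a misaligned baseline in an individual-differences context — amounts to saying `effect_aligned` alone often carries the useful variance.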
De Winter, François-Laurent; Timmers, Dorien; de Gelder, Beatrice; Van Orshoven, Marc; Vieren, Marleen; Bouckaert, Miriam; Cypers, Gert; Caekebeke, Jo; Van de Vliet, Laura; Goffin, Karolien; Van Laere, Koen; Sunaert, Stefan; Vandenberghe, Rik; Vandenbulcke, Mathieu; Van den Stock, Jan
Deficits in face processing have been described in the behavioral variant of fronto-temporal dementia (bvFTD), primarily regarding the recognition of facial expressions. Less is known about face shape and face identity processing. Here we used a hierarchical strategy targeting face shape and face identity recognition in bvFTD and matched healthy controls. Participants performed 3 psychophysical experiments targeting face shape detection (Experiment 1), unfamiliar face identity matching (Experiment 2), and familiarity categorization and famous face-name matching (Experiment 3). The results revealed group differences only in Experiment 3, with a deficit in the bvFTD group for both familiarity categorization and famous face-name matching. Voxel-based morphometry regression analyses in the bvFTD group revealed an association between grey matter volume of the left ventral anterior temporal lobe and familiarity recognition, while face-name matching correlated with grey matter volume of the bilateral ventral anterior temporal lobes. Subsequently, we quantified familiarity-specific and name-specific recognition deficits as the number of celebrities for which only the name or only the familiarity, respectively, was accurately recognized. Both indices were associated with grey matter volume of the bilateral anterior temporal cortices. These findings extend previous results by documenting the involvement of the left anterior temporal lobe (ATL) in familiarity detection and the right ATL in name recognition deficits in fronto-temporal lobar degeneration.
Stockdale, Laura A; Morrison, Robert G; Kmiecik, Matthew J; Garbarino, James; Silton, Rebecca L
Media violence exposure causes increased aggression and decreased prosocial behavior, suggesting that media violence desensitizes people to the emotional experience of others. Alterations in emotional face processing following exposure to media violence may result in desensitization to others' emotional states. This study used scalp electroencephalography methods to examine the link between exposure to violence and neural changes associated with emotional face processing. Twenty-five participants were shown a violent or nonviolent film clip and then completed a gender discrimination stop-signal task using emotional faces. Media violence did not affect the early visual P100 component; however, decreased amplitude was observed in the N170 and P200 event-related potentials following the violent film, indicating that exposure to film violence leads to suppression of holistic face processing and implicit emotional processing. Participants who had just seen a violent film showed increased frontal N200/P300 amplitude. These results suggest that media violence exposure may desensitize people to emotional stimuli and thereby require fewer cognitive resources to inhibit behavior.
Wang, Yuping; Chen, Nian-Shing; Levy, Mike
This article discusses the learning process undertaken by language teachers in a cyber face-to-face teacher training program. Eight tertiary Chinese language teachers attended a 12-week training program conducted in an online synchronous learning environment characterized by multimedia-based, oral and visual interaction. The term "cyber…
Aveta, Achille; Casati, Paolo
Facial injuries are often accompanied by soft tissue injuries. The complexity of these injuries lies in the potential loss of the relationships between the functional and aesthetic subunits of the head. Most reviews of craniofacial trauma have concentrated on fractures. With this article, we want to emphasize the importance of early aesthetic reconstruction of the face in polytrauma patients. We present 13 patients with soft tissue injuries of the face, treated in our emergency department with "day one surgery" and without "second look" procedures. The final result always restored a sense of normalcy to the face. The face is the most visible part of the human anatomy, so in the emergency setting surgeons must also pay special attention to the reconstruction of the face in polytrauma patients.
Greenberg, Seth N.; Goshen-Gottstein, Yonatan
The present work considers the mental imaging of faces, with a focus on own-face imaging. Experiments 1 and 3 demonstrated an own-face disadvantage, with slower generation of mental images of one's own face than of other familiar faces. In contrast, Experiment 2 demonstrated that mental images of facial parts are generated more quickly for one's…
Lv, Jing; Yan, Tianyi; Tao, Luyang; Zhao, Lun
The current study investigated the time course of the other-race classification advantage (ORCA) in the subordinate classification of normally configured and configurally distorted faces by race. Slightly distorting the face configuration delayed the categorization of own-race faces but had no conspicuous effect on other-race faces. The N170 was sensitive neither to configural distortions nor to face race. The P3 was enhanced for other-race relative to own-race faces and was reduced by the configural manipulation only for own-race faces. We suggest that the source of the ORCA is the configural analysis applied by default while processing own-race faces. PMID:26733850
Bas-Hoogendam, Janna Marie; Andela, Cornelie D; van der Werff, Steven J A; Pannekoek, J Nienke; van Steenbergen, Henk; Meijer, Onno C; van Buchem, Mark A; Rombouts, Serge A R B; van der Mast, Roos C; Biermasz, Nienke R; van der Wee, Nic J A; Pereira, Alberto M
Patients with long-term remission of Cushing's disease (CD) demonstrate residual psychological complaints. At present, it is not known how previous exposure to hypercortisolism affects psychological functioning in the long-term. Earlier magnetic resonance imaging (MRI) studies demonstrated abnormalities of brain structure and resting-state connectivity in patients with long-term remission of CD, but no data are available on functional alterations in the brain during the performance of emotional or cognitive tasks in these patients. We performed a cross-sectional functional MRI study, investigating brain activation during emotion processing in patients with long-term remission of CD. Processing of emotional faces versus a non-emotional control condition was examined in 21 patients and 21 matched healthy controls. Analyses focused on activation and connectivity of two a priori determined regions of interest: the amygdala and the medial prefrontal-orbitofrontal cortex (mPFC-OFC). We also assessed psychological functioning, cognitive failure, and clinical disease severity. Patients showed less mPFC activation during processing of emotional faces compared to controls, whereas no differences were found in amygdala activation. An exploratory psychophysiological interaction analysis demonstrated decreased functional coupling between the ventromedial PFC and posterior cingulate cortex (a region structurally connected to the PFC) in CD-patients. The present study is the first to show alterations in brain function and task-related functional coupling in patients with long-term remission of CD relative to matched healthy controls. These alterations may, together with abnormalities in brain structure, be related to the persisting psychological morbidity in patients with CD after long-term remission.
Grasso, Damion J.; Moser, Jason S.; Dozier, Mary; Simons, Robert
This study employed visually evoked event-related potential (ERP) methodology to examine temporal patterns of structural and higher-level face processing in birth and foster/adoptive mothers viewing pictures of their children. Fourteen birth mothers and 14 foster/adoptive mothers engaged in a computerized task in which they viewed facial pictures of their own children, and of familiar and unfamiliar children and adults. All mothers, regardless of type, showed ERP patterns suggestive of increased attention allocation to their own children’s faces compared to other child and adult faces beginning as early as 100–150 ms after stimulus onset and lasting for several hundred milliseconds. These data are in line with a parallel processing model that posits the involvement of several brain regions in simultaneously encoding the structural features of faces as well as their emotional and personal significance. Additionally, late positive ERP patterns associated with greater allocation of attention predicted mothers’ perceptions of the parent–child relationship as positive and influential to their children’s psychological development. These findings suggest the potential utility of using ERP components to index maternal processes. PMID:19428973
Gil, Sandrine; Le Bigot, Ludovic
In recent years, researchers have become interested in the way that the affective quality of contextual information transfers to a perceived target. We therefore examined the effect of a red (vs. green, mixed red/green, and achromatic) background, known to be valenced, on the processing of stimuli that play a key role in human interactions, namely facial expressions. We also examined whether the valenced-color effect can be modulated by gender, which is also known to be valenced. Female and male adult participants performed a categorization task of facial expressions of emotion in which the faces of female and male posers expressing two ambiguous emotions (i.e., neutral and surprise) were presented against the four different colored backgrounds. Additionally, this task was completed by collecting subjective ratings for each colored background in the form of five semantic differential scales corresponding to both discrete and dimensional perspectives of emotion. We found that the red background resulted in more negative face perception than the green background, whether the poser was female or male. However, whereas this valenced-color effect was the only effect for female posers, for male posers the effect was modulated by both the nature of the ambiguous emotion and the decoder's gender. Overall, our findings offer evidence that color and gender have a common valence-based dimension.
Pfabigan, Daniela M; Alexopoulos, Johanna; Sailer, Uta
Antisocial individuals are characterized by self-determined and inconsiderate behavior during social interaction. Furthermore, recognition deficits regarding fearful facial expressions have been observed in antisocial populations. These observations raise the question of whether antisocial behavioral tendencies are associated with deficits in the basic processing of social cues. The present study investigated early visual processing of social stimuli in a group of healthy female individuals with antisocial behavioral tendencies compared to individuals without these tendencies, measuring event-related potentials (P1, N170). To this end, happy and angry faces served as feedback stimuli embedded in a gambling task. Results showed processing differences as early as 88-120 ms after feedback onset. Participants low on antisocial traits displayed larger P1 amplitudes than participants high on antisocial traits. No group differences emerged for N170 amplitudes. Attention allocation processes, individual arousal levels, as well as face processing are discussed as possible causes of the observed group differences in P1 amplitudes. In summary, the current data suggest that sensory processing of facial stimuli is functionally intact but less ready to respond in healthy individuals with antisocial tendencies.
Michel, Caroline; Rossion, Bruno; Han, Jaehyun; Chung, Chan-Sup; Caldara, Roberto
Recognizing individual faces outside one's race poses difficulty, a phenomenon known as the other-race effect. Most researchers agree that this effect results from differential experience with same-race (SR) and other-race (OR) faces. However, the specific processes that develop with visual experience and underlie the other-race effect remain to be clarified. We tested whether the integration of facial features into a whole representation-holistic processing-was larger for SR than OR faces in Caucasians and Asians without life experience with OR faces. For both classes of participants, recognition of the upper half of a composite-face stimulus was more disrupted by the bottom half (the composite-face effect) for SR than OR faces, demonstrating that SR faces are processed more holistically than OR faces. This differential holistic processing for faces of different races, probably a by-product of visual experience, may be a critical factor in the other-race effect.
Luo, Lizhu; Becker, Benjamin; Geng, Yayuan; Zhao, Zhiying; Gao, Shan; Zhao, Weihua; Yao, Shuxia; Zheng, Xiaoxiao; Ma, Xiaole; Gao, Zhao; Hu, Jiehui; Kendrick, Keith M
In line with animal models indicating sexually dimorphic effects of oxytocin (OXT) on social-emotional processing, a growing number of OXT-administration studies in humans have also reported sex-dependent effects during social information processing. To explore whether sex-dependent effects already occur during early, subliminal processing stages, the present pharmacological fMRI study combined the intranasal application of either OXT or placebo (n = 86; 43 males) with a backward-masking emotional face paradigm. Results showed that while OXT suppressed inferior frontal gyrus, dorsal anterior cingulate and anterior insula responses to threatening face stimuli in men, it increased them in women. In women, increased anterior cingulate reactivity during subliminal threat processing was also positively associated with trait anxiety. At the network level, sex-dependent effects were observed on amygdala, anterior cingulate and inferior frontal gyrus functional connectivity, mainly driven by reduced coupling in women following OXT. Our findings demonstrate that OXT produces sex-dependent effects even at the early stages of social-emotional processing, and suggest that while it attenuates neural responses to threatening social stimuli in men it increases them in women. Thus, in a therapeutic context, OXT may produce different effects on anxiety disorders in men and women.
MacNamara, Annmarie; Vergés, Alvaro; Kujawa, Autumn; Fitzgerald, Kate D.; Monk, Christopher S.; Phan, K. Luan
Socio-emotional processing is an essential part of development, and age-related changes in its neural correlates can be observed. The late positive potential (LPP) is a measure of motivated attention that can be used to assess emotional processing; however, changes in the LPP elicited by emotional faces have not been assessed across a wide age range in childhood and young adulthood. We used an emotional face matching task to examine behavior and event-related potentials (ERPs) in 33 youth aged 7 to 19 years old. Younger children were slower when performing the matching task. The LPP elicited by emotional faces but not control stimuli (geometric shapes) decreased with age; by contrast, an earlier ERP (the P1) decreased with age for both faces and shapes, suggesting increased efficiency of early visual processing. Results indicate age-related attenuation in emotional processing that may stem from increased efficiency and regulatory control when performing a socio-emotional task. PMID:26220144
Wangila, Violet Muyoka
This paper scrutinises the challenges facing the implementation of Early Childhood Development and Education (ECDE) policy in Bungoma County, Kenya. The study used a mixed research design; the study population comprised the QASOs, head teachers, ECDE teachers and the non-teaching staff in the respective ECDCs. The sample size of the study comprised…
Caplova, Zuzana; Gibelli, Daniele Maria; Poppa, Pasquale; Cummaudo, Marco; Obertova, Zuzana; Sforza, Chiarella; Cattaneo, Cristina
Decomposition of the human body and human face is influenced, among other things, by environmental conditions. The early decomposition changes that modify the appearance of the face may hamper the recognition and identification of the deceased. Quantitative assessment of those changes may provide important information for forensic identification. This report presents a pilot 3D quantitative approach for tracking the early decomposition changes of a single cadaver in controlled environmental conditions, summarizing the changes with weekly morphological descriptions. The root mean square (RMS) value was used to evaluate the changes of the face after death. The results showed a high correlation (r = 0.863) between the measured RMS and the time since death. RMS values for each scan are presented, as well as the average weekly RMS values. Quantification of decomposition changes could improve the accuracy of antemortem facial approximation and could potentially allow direct comparisons of antemortem and postmortem 3D scans.
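The RMS comparison described in this abstract can be sketched in a few lines. This is a minimal illustration, not the authors' pipeline: it assumes the 3D scans are already registered and in point-to-point correspondence, and the toy "sinking surface" data and all variable names are ours.

```python
import numpy as np

def rms_change(baseline_scan, followup_scan):
    """Root mean square of per-vertex distances between two registered
    3D face scans (N x 3 arrays in point-to-point correspondence)."""
    dists = np.linalg.norm(followup_scan - baseline_scan, axis=1)
    return float(np.sqrt(np.mean(dists ** 2)))

# Toy example: a cloud of surface points uniformly "sinking" 0.05 units
# per week along z, a crude stand-in for progressive tissue change.
rng = np.random.default_rng(0)
base = rng.random((500, 3))
weeks = np.arange(1, 7)
rms_values = [rms_change(base, base + [0.0, 0.0, -0.05 * w]) for w in weeks]

# Correlation between RMS and elapsed time; the paper reports r = 0.863
# for real postmortem data, while this toy series is linear by construction.
r = np.corrcoef(weeks, rms_values)[0, 1]
```

Since every vertex shifts by the same amount each week, `rms_values` grows linearly here; with real scans, nonrigid registration of the antemortem and postmortem surfaces would be the hard part, and the RMS step itself remains this simple.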
Forged in the early 1960s, the paradigm for pharmaceutical innovation has remained virtually unchanged for nearly 50 years. During a period when most other research-based industries have made frequent and often sweeping modifications to their R&D processes, the pharmaceutical sector continues to utilize a drug development process that is slow, inefficient, risky, and expensive. Few who work in or follow the activities of the pharmaceutical industry question whether change is coming. They know that the pharmaceutical sector, as currently structured, is unable to deliver enough new products to market to generate revenues sufficient to sustain its own growth. Nearly all major drug developers are critically examining current R&D practices and, in some cases, considering a radical overhaul of their R&D models. But key questions remain. What will the landscape for pharmaceutical innovation look like in the future? And, who will develop tomorrow’s medicines? PMID:20130565
Tabak, Joshua A.; Zayas, Vivian
Research has shown that people are able to judge sexual orientation from faces with above-chance accuracy, but little is known about how these judgments are formed. Here, we investigated the importance of well-established face processing mechanisms in such judgments: featural processing (e.g., an eye) and configural processing (e.g., spatial distance between eyes). Participants judged sexual orientation from faces presented for 50 milliseconds either upright, which recruits both configural and featural processing, or upside-down, when configural processing is strongly impaired and featural processing remains relatively intact. Although participants judged women’s and men’s sexual orientation with above-chance accuracy for upright faces and for upside-down faces, accuracy for upside-down faces was significantly reduced. The reduced judgment accuracy for upside-down faces indicates that configural face processing significantly contributes to accurate snap judgments of sexual orientation. PMID:22629321
Santos, Andreia; Rosset, Delphine; Deruelle, Christine
Increased motivation towards social stimuli in Williams syndrome (WS) led us to hypothesize that a face's human status would have greater impact than face's orientation on WS' face processing abilities. Twenty-nine individuals with WS were asked to categorize facial emotion expressions in real, human cartoon and non-human cartoon faces presented…
Ewing, Louise; Pellicano, Elizabeth; Rhodes, Gillian
There are few direct examinations of whether face-processing difficulties in autism are disproportionate to difficulties with other complex non-face stimuli. Here we examined discrimination ability and memory for faces, cars, and inverted faces in children and adolescents with and without autism. Results showed that, relative to typical children,…
Leppanen, Jukka M.; Moulson, Margaret C.; Vogel-Farley, Vanessa K.; Nelson, Charles A.
To examine the ontogeny of emotional face processing, event-related potentials (ERPs) were recorded from adults and 7-month-old infants while viewing pictures of fearful, happy, and neutral faces. Face-sensitive ERPs at occipital-temporal scalp regions differentiated between fearful and neutral/happy faces in both adults (N170 was larger for fear)…
Hirot, France; Lesage, Marine; Pedron, Lya; Meyer, Isabelle; Thomas, Pierre; Cottencin, Olivier; Guardia, Dewi
Body image disturbances and massive weight loss are major clinical symptoms of anorexia nervosa (AN). The aim of the present study was to examine the influence of body changes and eating attitudes on self-face recognition ability in AN. Twenty-seven subjects suffering from AN and 27 control participants performed a self-face recognition task (SFRT). During the task, digital morphs between their own face and a gender-matched unfamiliar face were presented in a random sequence. Participants' self-face recognition failures, cognitive flexibility, body concern and eating habits were assessed with the Self-Face Recognition Questionnaire (SFRQ), Trail Making Test (TMT), Body Shape Questionnaire (BSQ) and Eating Disorder Inventory-2 (EDI-2), respectively. Subjects suffering from AN exhibited significantly greater difficulties than control participants in identifying their own face (p = 0.028). No significant difference was observed between the two groups for TMT (all p > 0.1, non-significant). Regarding predictors of self-face recognition skills, there was a negative correlation between SFRT and body mass index (p = 0.01) and a positive correlation between SFRQ and EDI-2 (p < 0.001) or BSQ (p < 0.001). Among factors involved, nutritional status and intensity of eating disorders could play a part in impaired self-face recognition.
Tu, Tao; Schneck, Noam; Muraskin, Jordan; Sajda, Paul
Network interactions are likely to be instrumental in processes underlying rapid perception and cognition. Specifically, high-level and perceptual regions must interact to balance pre-existing models of the environment with new incoming stimuli. Simultaneous electroencephalography (EEG) and fMRI (EEG/fMRI) enables temporal characterization of brain-network interactions combined with improved anatomical localization of regional activity. In this paper, we use simultaneous EEG/fMRI and multivariate dynamical systems (MDS) analysis to characterize network relationships between constituent brain areas that reflect a subject's choice in a face versus nonface categorization task. Our simultaneous EEG and fMRI analysis of 21 human subjects (12 males, 9 females) identifies early perceptual and late frontal subsystems that are selective to the categorical choice of faces versus nonfaces. We analyze the interactions between these subsystems using an MDS in the space of the BOLD signal. Our main findings show that differences between face-choice and house-choice networks are seen in the network interactions between the early and late subsystems, and that the magnitude of the difference in network interaction positively correlates with the behavioral false-positive rate of face choices. We interpret this to reflect the role of saliency and expectations, likely encoded in frontal "late" regions, on perceptual processes occurring in "early" perceptual regions. SIGNIFICANCE STATEMENT Our choices are affected by our biases. In visual perception and cognition such biases can be commonplace and quite curious: we see a human face when staring up at a cloud formation or down at a piece of toast at the breakfast table. Here we use multimodal neuroimaging and dynamical systems analysis to measure whole-brain spatiotemporal dynamics while subjects make decisions regarding the type of object they see in rapidly flashed images. We find that the degree of interaction in these networks
Maher, S; Mashhoon, Y; Ekstrom, T; Lukas, S; Chen, Y
Face detection, an ability to identify a visual stimulus as a face, is impaired in patients with schizophrenia. It is unclear whether impaired face processing in this psychiatric disorder results from face-specific domains or stems from more basic visual domains. In this study, we examined cortical face-sensitive N170 response in schizophrenia, taking into account deficient basic visual contrast processing. We equalized visual contrast signals among patients (n=20) and controls (n=20) and between face and tree images, based on their individual perceptual capacities (determined using psychophysical methods). We measured N170, a putative temporal marker of face processing, during face detection and tree detection. In controls, N170 amplitudes were significantly greater for faces than trees across all three visual contrast levels tested (perceptual threshold, two times perceptual threshold and 100%). In patients, however, N170 amplitudes did not differ between faces and trees, indicating diminished face selectivity (indexed by the differential responses to face vs. tree). These results indicate a lack of face-selectivity in temporal responses of brain machinery putatively responsible for face processing in schizophrenia. This neuroimaging finding suggests that face-specific processing is compromised in this psychiatric disorder.
Wolf, Lois C.
This paper examines ways in which early childhood education can provide the vital foundation for lifelong attitudes and values toward the democratic process. One goal of early childhood education is to empower the social bonding which brings collaborative relationships to their full potential and gives the child a sense of connectedness to others.…
Tanaka, James W.; Sung, Andrew
Although a growing body of research indicates that children with autism spectrum disorder (ASD) exhibit selective deficits in their ability to recognize facial identities and expressions, the source of their face impairment is, as yet, undetermined. In this paper, we consider three possible accounts of the autism face deficit: (1) the holistic…
Schwarzer, Gudrun; Massaro, Dominic W.
Two experiments studied whether and how 5-year-olds integrate single facial features to identify faces. Results indicated that children could evaluate and integrate information from eye and mouth features to identify a face when salience of features was varied. A weighted Fuzzy Logical Model of Perception fit better than a Single Channel Model,…
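The two models compared in this abstract have standard closed forms: Massaro's Fuzzy Logical Model of Perception (FLMP) integrates independent feature supports multiplicatively (the relative goodness rule), whereas a single-channel model assumes only one feature drives the response on any trial, yielding a probability mixture. A minimal sketch, with hypothetical support values standing in for the study's eye and mouth manipulations:

```python
def flmp_response_probability(eye_support, mouth_support):
    """FLMP: multiplicative integration of independent feature supports
    for one response alternative, normalized by the total support for
    both alternatives (relative goodness rule)."""
    match = eye_support * mouth_support
    mismatch = (1 - eye_support) * (1 - mouth_support)
    return match / (match + mismatch)

def single_channel_probability(eye_support, mouth_support, p_eye=0.5):
    """Single Channel Model: on each trial only one feature is used;
    the prediction is a weighted mixture, not an integration.
    p_eye (probability of attending the eyes) is a free parameter."""
    return p_eye * eye_support + (1 - p_eye) * mouth_support

# An ambiguous eye region (support 0.5) leaves the decision entirely
# to the mouth under FLMP, but dilutes it under the single-channel model:
print(round(flmp_response_probability(0.5, 0.9), 3))   # 0.9
print(round(single_channel_probability(0.5, 0.9), 3))  # 0.7
```

The divergence between the two predictions at ambiguous feature values is what lets model fitting discriminate them, as in the experiments summarized above.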
Rajhans, Purva; Jessen, Sarah; Missana, Manuela; Grossmann, Tobias
Body expressions exert strong contextual effects on facial emotion perception in adults. Specifically, conflicting body cues hamper the recognition of emotion from faces, as evident on both the behavioral and neural level. We examined the developmental origins of the neural processes involved in emotion perception across body and face in 8-month-old infants by measuring event-related brain potentials (ERPs). We primed infants with body postures (fearful, happy) that were followed by either congruent or incongruent facial expressions. Our results revealed that body expressions impact facial emotion processing and that incongruent body cues impair the neural discrimination of emotional facial expressions. Priming effects were associated with attentional and recognition memory processes, as reflected in a modulation of the Nc and Pc evoked at anterior electrodes. These findings demonstrate that 8-month-old infants possess neural mechanisms that allow for the integration of emotion across body and face, providing evidence for the early developmental emergence of context-sensitive facial emotion perception. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Keyes, Helen; Zalicks, Catherine
Using a priming paradigm, we investigate whether socially important faces are processed preferentially compared to other familiar and unfamiliar faces, and whether any such effects are affected by changes in viewpoint. Participants were primed with frontal images of personally familiar, famous or unfamiliar faces, and responded to target images of congruent or incongruent identity, presented in frontal, three quarter or profile views. We report that participants responded significantly faster to socially important faces (a friend’s face) compared to other highly familiar (famous) faces or unfamiliar faces. Crucially, responses to famous and unfamiliar faces did not differ. This suggests that, when presented in the context of a socially important stimulus, socially unimportant familiar faces (famous faces) are treated in a similar manner to unfamiliar faces. This effect was not tied to viewpoint, and priming did not affect socially important face processing differently to other faces. PMID:27219101
Liu, Wenbo; Li, Ming; Yi, Li
Atypical face scanning patterns in individuals with Autism Spectrum Disorder (ASD) have been repeatedly reported in previous research. The present study examined whether these face scanning patterns could be useful for identifying children with ASD by adopting a machine learning algorithm for classification. Specifically, we applied the machine learning method to an eye movement dataset from a face recognition task [Yi et al., 2016] to classify children with and without ASD. We evaluated the performance of our model in terms of its accuracy, sensitivity, and specificity in classifying ASD. Results indicated promising evidence for applying the machine learning algorithm based on face scanning patterns to identify children with ASD, with a maximum classification accuracy of 88.51%. Nevertheless, our study is still preliminary, with some constraints that may apply in clinical practice. Future research should shed light on further validation of our method and contribute to the development of a multitask and multimodel approach to aid the process of early detection and diagnosis of ASD. Autism Res 2016, 9: 888-898. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.
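The evaluation terms named in this abstract (accuracy, sensitivity, specificity) have standard confusion-matrix definitions. A minimal sketch; the labels below are hypothetical and not data from the study:

```python
def classification_metrics(y_true, y_pred):
    """Binary classification metrics, with 1 = ASD and 0 = typically
    developing: sensitivity is the recall of the ASD class, specificity
    the recall of the non-ASD class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),       # correct / all
        "sensitivity": tp / (tp + fn),             # ASD correctly flagged
        "specificity": tn / (tn + fp),             # non-ASD correctly cleared
    }

# Hypothetical example: 6 children, 2 errors (one miss, one false alarm).
metrics = classification_metrics([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
```

Reporting all three measures matters for a screening application like the one described: accuracy alone can look high on an imbalanced sample even when sensitivity is poor.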
Beharelle, Anjali Raja; Dick, Anthony Steven; Josse, Goulven; Solodkin, Ana; Huttenlocher, Peter R.; Levine, Susan C.; Small, Steven L.
A predominant theory regarding early stroke and its effect on language development is that early left hemisphere lesions trigger compensatory processes that allow the right hemisphere to assume dominant language functions, and this is thought to underlie the near-normal language development observed after early stroke. To test this theory, we…
Minami, T; Goto, K; Kitazaki, M; Nakauchi, S
In humans, face configuration, contour and color may affect face perception, which is important for social interactions. This study aimed to determine the effect of color information on face perception by measuring event-related potentials (ERPs) during the presentation of natural- and bluish-colored faces. Our results demonstrated that the amplitude of the N170 event-related potential, which correlates strongly with face processing, was higher in response to a bluish-colored face than to a natural-colored face. However, gamma-band activity was insensitive to the deviation from a natural face color. These results indicated that color information affects the N170 associated with a face detection mechanism, which suggests that face color is important for face detection. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.
Wang, Zhe; Quinn, Paul C; Jin, Haiyang; Sun, Yu-Hao P; Tanaka, James W; Pascalis, Olivier; Lee, Kang
Using a composite-face paradigm, we examined the holistic processing induced by Asian faces, Caucasian faces, and monkey faces with human Asian participants in two experiments. In Experiment 1, participants were asked to judge whether the upper halves of two faces successively presented were the same or different. A composite-face effect was found for Asian faces and Caucasian faces, but not for monkey faces. In Experiment 2, participants were asked to judge whether the lower halves of the two faces successively presented were the same or different. A composite-face effect was found for monkey faces as well as for Asian faces and Caucasian faces. Collectively, these results reveal that own-species (i.e., own-race and other-race) faces engage holistic processing in both upper and lower halves of the face, but other-species (i.e., monkey) faces engage holistic processing only when participants are asked to match the lower halves of the face. The findings are discussed in the context of a region-based holistic processing account for the species-specific effect in face recognition. Copyright © 2018 Elsevier Ltd. All rights reserved.
Cleary, Laura; Brady, Nuala; Fitzgerald, Michael; Gallagher, Louise
Impaired face perception in autism spectrum disorders is thought to reflect a perceptual style characterized by componential rather than configural processing of faces. This study investigated face processing in adolescents with autism spectrum disorders using the Thatcher illusion, a perceptual phenomenon exhibiting "inversion effects"…
Kemner, C.; Schuller, A-M.; Van Engeland, H.
Background: Children with pervasive developmental disorder (PDD) show behavioral abnormalities in gaze and face processing, but recent studies have indicated that normal activation of face-specific brain areas in response to faces is possible in this group. It is not clear whether the brain activity related to gaze processing is also normal in…
Jessen, Sarah; Grossmann, Tobias
Human adults can process emotional information both with and without conscious awareness, and it has been suggested that the two processes rely on partly distinct brain mechanisms. However, the developmental origins of these brain processes are unknown. In the present event-related brain potential (ERP) study, we examined the brain responses of 7-month-old infants in response to subliminally (50 and 100 msec) and supraliminally (500 msec) presented happy and fearful facial expressions. Our results revealed that infants' brain responses (Pb and Nc) over central electrodes distinguished between emotions irrespective of stimulus duration, whereas the discrimination between emotions at occipital electrodes (N290 and P400) only occurred when faces were presented supraliminally (above threshold). This suggests that early in development the human brain not only discriminates between happy and fearful facial expressions irrespective of conscious perception, but also that, similar to adults, supraliminal and subliminal emotion processing relies on distinct neural processes. Our data further suggest that the processing of emotional facial expressions differs across infants depending on their behaviorally shown perceptual sensitivity. The current ERP findings suggest that distinct brain processes underpinning conscious and unconscious emotion perception emerge early in ontogeny and can therefore be seen as a key feature of human social functioning. Copyright © 2014 Elsevier Ltd. All rights reserved.
Network interactions are likely to be instrumental in processes underlying rapid perception and cognition. Specifically, high-level and perceptual regions must interact to balance pre-existing models of the environment with new incoming stimuli. Simultaneous electroencephalography (EEG) and fMRI (EEG/fMRI) enables temporal characterization of brain–network interactions combined with improved anatomical localization of regional activity. In this paper, we use simultaneous EEG/fMRI and multivariate dynamical systems (MDS) analysis to characterize network relationships between constituent brain areas that reflect a subject's choice for a face versus nonface categorization task. Our simultaneous EEG and fMRI analysis on 21 human subjects (12 males, 9 females) identifies early perceptual and late frontal subsystems that are selective to the categorical choice of faces versus nonfaces. We analyze the interactions between these subsystems using an MDS in the space of the BOLD signal. Our main findings show that differences between face-choice and house-choice networks are seen in the network interactions between the early and late subsystems, and that the magnitude of the difference in network interaction positively correlates with the behavioral false-positive rate of face choices. We interpret this to reflect the role of saliency and expectations likely encoded in frontal “late” regions on perceptual processes occurring in “early” perceptual regions. SIGNIFICANCE STATEMENT Our choices are affected by our biases. In visual perception and cognition such biases can be commonplace and quite curious—e.g., we see a human face when staring up at a cloud formation or down at a piece of toast at the breakfast table. Here we use multimodal neuroimaging and dynamical systems analysis to measure whole-brain spatiotemporal dynamics while subjects make decisions regarding the type of object they see in rapidly flashed images. We find that the degree of interaction in these
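The state equation behind a multivariate dynamical systems (MDS) analysis of this kind can be sketched in a deliberately simplified form. This noise-free, input-free linear system is an illustrative assumption, not the model actually fit in the study (which estimates interaction weights from the BOLD signal and includes stimulus inputs and noise terms):

```python
def simulate_linear_dynamics(A, x0, steps):
    """Minimal linear-dynamics sketch: each region's next state is a
    weighted sum of all regions' current states, so the off-diagonal
    entries of the interaction matrix A play the role of the network
    interactions between subsystems described in the abstract."""
    x = list(x0)
    trajectory = [x]
    for _ in range(steps):
        # x_{t+1} = A @ x_t, written out without external dependencies
        x = [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]
        trajectory.append(x)
    return trajectory

# Hypothetical 2-region example: region 0 (a "late" frontal node)
# feeds half of its activity into region 1 (an "early" perceptual node).
A = [[1.0, 0.0],
     [0.5, 0.5]]
traj = simulate_linear_dynamics(A, [1.0, 0.0], steps=3)
```

Comparing the estimated A matrices between conditions (here, face-choice versus nonface-choice trials) is one way differences in network interaction can be quantified.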
Zhao, Mintao; Hayward, William G; Bülthoff, Isabelle
Face recognition, holistic processing, and processing of configural and featural facial information are known to be influenced by face race, with better performance for own- than other-race faces. However, whether these various other-race effects (OREs) arise from the same underlying mechanisms or from different processes remains unclear. The present study addressed this question by measuring the OREs in a set of face recognition tasks, and testing whether these OREs are correlated with each other. Participants performed different tasks probing (1) face recognition, (2) holistic processing, (3) processing of configural information, and (4) processing of featural information for both own- and other-race faces. Their contact with other-race people was also assessed with a questionnaire. The results show significant OREs in tasks testing face memory and processing of configural information, but not in tasks testing either holistic processing or processing of featural information. Importantly, there was no cross-task correlation between any of the measured OREs. Moreover, the level of other-race contact predicted only the OREs obtained in tasks testing face memory and processing of configural information. These results indicate that these various cross-race differences originate from different aspects of face processing, contrary to the view that the ORE in face recognition is due to cross-race differences in holistic processing. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
Mărcăuteanu, Corina; Negrutiu, Meda; Sinescu, Cosmin; Demjan, Eniko; Hughes, Mike; Bradu, Adrian; Dobre, George; Podoleanu, Adrian G.
Excessive dental wear (pathological attrition and/or abfractions) is a frequent complication in bruxing patients. The parafunction causes heavy occlusal loads. The aim of this study was the early detection and monitoring of occlusal overload in bruxing patients. En-face optical coherence tomography was used to investigate and image several extracted teeth with normal morphology, derived from patients with active bruxism and from subjects without the parafunction. We found a characteristic pattern of enamel cracks in patients with first-degree bruxism and normal tooth morphology. We conclude that en-face optical coherence tomography is a promising non-invasive alternative technique for the early detection of occlusal overload, before it becomes clinically evident as tooth wear.
Hall, Leah M J; Klimes-Dougan, Bonnie; Hunt, Ruskin H; Thomas, Kathleen M; Houri, Alaa; Noack, Emily; Mueller, Bryon A; Lim, Kelvin O; Cullen, Kathryn R
Major depressive disorder (MDD) often begins during adolescence when the brain is still maturing. To better understand the neurobiological underpinnings of MDD early in development, this study examined brain function in response to emotional faces in adolescents with MDD and healthy (HC) adolescents using functional magnetic resonance imaging (fMRI). Thirty-two unmedicated adolescents with MDD and 23 healthy age- and gender-matched controls completed an fMRI task viewing happy and fearful faces. Fronto-limbic regions of interest (ROI; bilateral amygdala, insula, subgenual and rostral anterior cingulate cortices) and whole-brain analyses were conducted to examine between-group differences in brain function. ROI analyses revealed that patients had greater bilateral amygdala activity than HC in response to viewing fearful versus happy faces, which remained significant when controlling for comorbid anxiety. Whole-brain analyses revealed that adolescents with MDD had lower activation compared to HC in a right hemisphere cluster comprised of the insula, superior/middle temporal gyrus, and Heschl's gyrus when viewing fearful faces. Brain activity in the subgenual anterior cingulate cortex was inversely correlated with depression severity. Limitations include a cross-sectional design with a modest sample size and use of a limited range of emotional stimuli. Results replicate previous studies that suggest emotion processing in adolescent MDD is associated with abnormalities within fronto-limbic brain regions. Findings implicate elevated amygdalar arousal to negative stimuli in adolescents with depression and provide new evidence for a deficit in functioning of the saliency network, which may be a future target for early intervention and MDD treatment. Copyright © 2014 Elsevier B.V. All rights reserved.
Chua, Kao-Wei; Richler, Jennifer J; Gauthier, Isabel
Faces are processed holistically, but the locus of holistic processing remains unclear. We created two novel races of faces (Lunaris and Taiyos) to study how experience with face parts influences holistic processing. In Experiment 1, subjects individuated Lunaris wherein the top, bottom, or both face halves contained diagnostic information. Subjects who learned to attend to face parts exhibited no holistic processing. This suggests that individuation only leads to holistic processing when the whole face is attended. In Experiment 2, subjects individuated both Lunaris and Taiyos, with diagnostic information in complementary face halves of the two races. Holistic processing was measured with composites made of either diagnostic or nondiagnostic face parts. Holistic processing was only observed for composites made from diagnostic face parts, demonstrating that holistic processing can occur for diagnostic face parts that were never seen together. These results suggest that holistic processing is an expression of learned attention to diagnostic face parts. PsycINFO Database Record (c) 2014 APA, all rights reserved.
PROCESS WATER BUILDING, TRA-605. CONTEXTUAL VIEW, CAMERA FACING SOUTHEAST. PROCESS WATER BUILDING AND ETR STACK ARE IN LEFT HALF OF VIEW. TRA-666 IS NEAR CENTER, ABUTTED BY SECURITY BUILDING; TRA-626, AT RIGHT EDGE OF VIEW BEHIND BUS. INL NEGATIVE NO. HD46-34-1. Mike Crane, Photographer, 4/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
Brennen, T; Bruce, V
In this paper we report five experiments that investigate the influence of prime faces upon the speed with which familiar faces are recognized and named. Previously, priming had been reported when the prime and target faces were closely associated, e.g., Prince Charles and Princess Diana (Bruce & Valentine, 1986). In Experiment 1 we show that there is a reliable effect of relatedness on a double-familiarity decision, even when the faces are only categorically related, e.g., Kirk Douglas and Clint Eastwood. Then it was shown that such an effect emerges only on a double decision task (Experiments 2 and 3). Experiment 4 showed that on a primed naming task, faces preceded by a categorically related prime were responded to more quickly than those preceded by an unrelated prime, and the effect was due to inhibition. Experiment 5 replicated this effect and also showed that when associatively related primes were used, a facilitatory, and not an inhibitory, effect was found. It is argued that the facilitation of associative priming arises at an earlier locus than the inhibition of categorical priming.
Dalrymple, Kirsten A; Elison, Jed T; Duchaine, Brad
Evidence suggests that face and object recognition depend on distinct neural circuitry within the visual system. Work with adults with developmental prosopagnosia (DP) demonstrates that some individuals have preserved object recognition despite severe face recognition deficits. This face selectivity in adults with DP indicates that face- and object-processing systems can develop independently, but it is unclear at what point in development these mechanisms are separable. Determining when individuals with DP first show dissociations between faces and objects is one means to address this question. In the current study, we investigated face and object processing in six children with DP (5-12-years-old). Each child was assessed with one face perception test, two different face memory tests, and two object memory tests that were matched to the face memory tests in format and difficulty. Scores from the DP children on the matched face and object tasks were compared to within-subject data from age-matched controls. Four of the six DP children, including the 5-year-old, showed evidence of face-specific deficits, while one child appeared to have more general visual-processing deficits. The remaining child had inconsistent results. The presence of face-specific deficits in children with DP suggests that face and object perception depend on dissociable processes in childhood.
Cheng, Xue Jun; McCarthy, Callum J.; Wang, Tony S. L.; Palmeri, Thomas J.; Little, Daniel R.
Upright faces are thought to be processed more holistically than inverted faces. In the widely used composite face paradigm, holistic processing is inferred from interference in recognition performance from a to-be-ignored face half for upright and aligned faces compared with inverted or misaligned faces. We sought to characterize the nature of…
Moradi, Afsane; Mehrinejad, Seyed Abolghasem; Ghadiri, Mohammad; Rezaei, Farzin
Introduction: Emotional stimuli are processed automatically in a bottom-up way or can be processed voluntarily in a top-down way. Imaging studies have indicated that bottom-up and top-down processing are mediated by different neural systems. However, the temporal differentiation of top-down versus bottom-up processing of facial emotional expressions remains to be clarified. The present study aimed to explore the time course of these processes as indexed by the emotion-specific P100 and late positive potential (LPP) event-related potential (ERP) components in a group of healthy women. Methods: Fourteen female students of Alzahra University (Tehran, Iran), aged 18–30 years, voluntarily participated in the study. The subjects completed two overt and covert emotional tasks during ERP acquisition. Results: The results indicated that fearful expressions produced significantly greater P100 amplitude compared to other expressions. Moreover, the P100 findings showed an interaction between emotion and processing condition. Further analysis indicated that within the overt condition, fearful expressions elicited greater P100 amplitude than other emotional expressions. Also, overt conditions elicited significantly longer LPP latencies and larger amplitudes compared to covert conditions. Conclusion: Based on the results, early perceptual processing of fearful facial expressions is enhanced in a top-down way compared to a bottom-up way. This also suggests that the P100 may reflect an attentional bias toward fearful emotions. However, no such differentiation was observed within later processing stages of facial expressions, as indexed by the ERP LPP component, in a top-down versus bottom-up way. Overall, this study provides a basis for further exploration of the bottom-up and top-down processes underlying emotion and may be particularly helpful for investigating the temporal characteristics associated with impaired emotional processing in psychiatric disorders. PMID:28446947
Neufeld, Janina; Johnstone, Tom; Chakrabarti, Bhismadev
Deficits in facial mimicry have been widely reported in autism. Some studies have suggested that these deficits are restricted to spontaneous mimicry and do not extend to volitional mimicry. We bridge these apparently inconsistent observations by testing the impact of reward value on neural indices of mimicry and how autistic traits modulate this impact. Neutral faces were conditioned with high and low reward. Subsequently, functional connectivity between the ventral striatum (VS) and inferior frontal gyrus (IFG) was measured while neurotypical adults (n = 30) watched happy expressions made by these conditioned faces. We found greater VS–IFG connectivity in response to high reward vs low reward happy faces. This difference was negatively proportional to autistic traits, suggesting that reduced spontaneous mimicry of social stimuli seen in autism, may be related to a failure in the modulation of the mirror system by the reward system rather than a circumscribed deficit in the mirror system. PMID:24493838
Hinojosa, José A; Martín-Loeches, Manuel; Muñoz, Francisco; Casado, Pilar; Pozo, Miguel A
This study investigates the automatic-controlled nature of early semantic processing by means of the Recognition Potential (RP), an event-related potential response that reflects lexical selection processes. For this purpose tasks differing in their processing requirements were used. Half of the participants performed a physical task involving a lower-upper case discrimination judgement (shallow processing requirements), whereas the other half carried out a semantic task, consisting in detecting animal names (deep processing requirements). Stimuli were identical in the two tasks. Reaction time measures revealed that the physical task was easier to perform than the semantic task. However, RP effects elicited by the physical and semantic tasks did not differ in either latency, amplitude, or topographic distribution. Thus, the results from the present study suggest that early semantic processing is automatically triggered whenever a linguistic stimulus enters the language processor.
Walsh, Jennifer A.; Creighton, Sarah E.; Rutherford, M. D.
Some, but not all, relevant studies have revealed face processing deficits among those with autism spectrum disorder (ASD). In particular, deficits are revealed in face processing tasks that involve emotion perception. The current study examined whether either deficits in processing emotional expression or deficits in processing social cognitive…
Bernasconi, Fosco; Kometer, Michael; Pokorny, Thomas; Seifritz, Erich; Vollenweider, Franz X
Emotional face processing is critically modulated by the serotonergic system, and serotonin (5-HT) receptor agonists impair emotional face processing. However, the specific contribution of the 5-HT1A receptor remains poorly understood. Here we investigated the spatiotemporal brain mechanisms underpinning the modulation of emotional face processing induced by buspirone, a partial 5-HT1A receptor agonist. In a psychophysical emotional face discrimination task, we observed that discrimination of fearful versus neutral faces was reduced, but not of happy versus neutral faces. Electrical neuroimaging analyses were applied to visual evoked potentials elicited by emotional face images after placebo and buspirone administration. Buspirone modulated response strength (i.e., global field power) in the interval 230–248 ms after stimulus onset. Distributed source estimation over this time interval revealed that buspirone decreased the neural activity in the right dorsolateral prefrontal cortex that was evoked by fearful faces. These results indicate temporal and valence-specific effects of buspirone on the neuronal correlates of emotional face processing. Furthermore, the reduced neural activity in the dorsolateral prefrontal cortex in response to fearful faces suggests reduced attention to fearful faces. Collectively, these findings provide new insights into the role of 5-HT1A receptors in emotional face processing and have implications for affective disorders that are characterized by an increased attention to negative stimuli. Copyright © 2015 Elsevier B.V. and ECNP. All rights reserved.
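Global field power (GFP), the response-strength measure referenced in this abstract, has a standard definition: the standard deviation of the voltage across all electrodes at a given time point. A minimal sketch over a single time sample:

```python
import math

def global_field_power(voltages):
    """Global field power at one time point: the root mean square of
    each electrode's deviation from the mean voltage across the montage
    (equivalently, the population standard deviation across channels)."""
    mean_v = sum(voltages) / len(voltages)
    return math.sqrt(sum((v - mean_v) ** 2 for v in voltages) / len(voltages))

# Hypothetical 4-electrode sample (microvolts); a flat map gives GFP = 0.
print(global_field_power([2.0, -2.0, 2.0, -2.0]))  # 2.0
print(global_field_power([1.0, 1.0, 1.0, 1.0]))    # 0.0
```

Because GFP is reference-independent and collapses the montage to one number per time point, it is a common first-pass statistic for detecting when (here, 230–248 ms) two conditions diverge in overall response strength.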
Smith-Spark, James H.; Moore, Viv
Two under-explored areas of developmental dyslexia research, face naming and age of acquisition (AoA), were investigated. Eighteen dyslexic and 18 non-dyslexic university students named the faces of 50 well-known celebrities, matched for facial distinctiveness and familiarity. Twenty-five of the famous people were learned early in life, while the…
Kleinhans, Natalia M.; Richards, Todd; Sterling, Lindsey; Stegbauer, Keith C.; Mahurin, Roderick; Johnson, L. Clark; Greenson, Jessica; Dawson, Geraldine; Aylward, Elizabeth
Abnormalities in the interactions between functionally linked brain regions have been suggested to be associated with the clinical impairments observed in autism spectrum disorders (ASD). We investigated functional connectivity within the limbic system during face identification; a primary component of social cognition, in 19 high-functioning…
Kirchner, Jennifer C.; Hatri, Alexander; Heekeren, Hauke R.; Dziobek, Isabel
Deviant gaze behavior is a defining characteristic of autism. Its relevance as a pathophysiological mechanism, however, remains unknown. In the present study, we compared eye fixations of 20 adults with autism and 21 controls while they were engaged in taking the Multifaceted Empathy Test (MET). Additional measures of face emotion and identity…
Heisz, Jennifer J.; Ryan, Jennifer D.
Older adults differ from their younger counterparts in the way they view faces. We assessed whether older adults can use past experience to mitigate these typical face-processing differences; that is, we examined whether there are age-related differences in the use of memory to support current processing. Eye movements of older and younger adults were monitored as they viewed faces that varied in the type/amount of prior exposure. Prior exposure was manipulated by including famous and novel faces, and by presenting faces up to five times. We expected that older adults may have difficulty quickly establishing new representations to aid in the processing of recently presented faces, but would be able to invoke face representations that have been stored in memory long ago to aid in the processing of famous faces. Indeed, younger adults displayed effects of recent exposure with a decrease in the total fixations to the faces and a gradual increase in the proportion of fixations to the eyes. These effects of recent exposure were largely absent in older adults. In contrast, the effect of fame, revealed by a subtle increase in fixations to the inner features of famous compared to non-famous faces, was similar for younger and older adults. Our results suggest that older adults’ current processing can benefit from lifetime experience, however the full benefit of recent experience on face processing is not realized in older adults. PMID:22007169
Shim, Miseon; Kim, Do-Won; Yoon, Sunkyung; Park, Gewnhi; Im, Chang-Hwan; Lee, Seung-Hwan
Deficits in facial emotion processing are a major characteristic of patients with panic disorder. It is known that visual stimuli at different spatial frequencies take distinct neural pathways. This study investigated facial emotion processing involving stimuli presented at broad, high, and low spatial frequencies in patients with panic disorder. Eighteen patients with panic disorder and 19 healthy controls were recruited. Seven event-related potential (ERP) components (P100, N170, early posterior negativity (EPN), vertex positive potential (VPP), N250, P300, and late positive potential (LPP)) were evaluated while the participants looked at fearful and neutral facial stimuli presented at three spatial frequencies. When a fearful face was presented, panic disorder patients showed a significantly increased P100 amplitude in response to low spatial frequency compared to high spatial frequency, whereas healthy controls demonstrated significant broad-spatial-frequency-dependent processing in P100 amplitude. Vertex positive potential amplitude was significantly increased for high and broad spatial frequencies, compared to low spatial frequency, in panic disorder. Early posterior negativity amplitude was significantly different between high and broad spatial frequency processing, and between low and broad spatial frequency processing, in both groups, regardless of facial expression. The possibly confounding effects of medication could not be controlled. During early visual processing, patients with panic disorder prefer global to detailed information. However, in later processing, panic disorder patients overuse detailed information for the perception of facial expressions. These findings suggest that unique spatial frequency-dependent facial processing could shed light on the neural pathology associated with panic disorder. Copyright © 2016 Elsevier B.V. All rights reserved.
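Spatial-frequency-filtered face stimuli of the kind described here are typically produced by low-pass filtering an image (low spatial frequencies, LSF) and taking the residual as the high spatial frequencies (HSF), with the unfiltered image as the broad-band (BSF) stimulus. The sketch below illustrates the decomposition on a 1-D luminance profile with a box filter; an actual study would use a 2-D Gaussian filter with cutoffs specified in cycles per face, so the filter choice and radius here are illustrative assumptions:

```python
def low_pass_row(row, radius=2):
    """Box-filter low-pass of a 1-D luminance profile: each sample is
    replaced by the mean of its neighborhood (edges use a truncated
    window). Stands in for the 2-D Gaussian low-pass used on images."""
    n = len(row)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

def split_spatial_frequencies(row, radius=2):
    """BSF = original signal, LSF = low-pass version, HSF = residual,
    so LSF + HSF reconstructs BSF exactly."""
    lsf = low_pass_row(row, radius)
    hsf = [b - l for b, l in zip(row, lsf)]
    return {"BSF": list(row), "LSF": lsf, "HSF": hsf}
```

The additive decomposition (LSF + HSF = BSF) is what makes the three conditions comparable: each band carries a complementary portion of the same face information.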
Li, Jun; Liang, Jimin; Tian, Jie; Liu, Jiangang; Zhao, Jizheng; Zhang, Hui; Shi, Guangming
Although top-down perceptual processes play an important role in face processing, their neural substrate remains puzzling because the top-down stream is difficult to extract from activation patterns contaminated by bottom-up face perception input. In the present study, a novel paradigm in which participants were instructed to detect faces in pure noise images was employed, which efficiently eliminates the interference of bottom-up face perception in top-down face processing. By analyzing the map of functional connectivity with the right FFA, computed using conventional Pearson's correlation, a possible face processing pattern induced by top-down perception can be obtained. Apart from the bilateral fusiform gyrus (FG), left inferior occipital gyrus (IOG), and left superior temporal sulcus (STS), which are consistent with the core system in the distributed cortical network for face perception, activation induced by top-down face processing was also found in regions including the anterior cingulate cortex (ACC), right orbitofrontal cortex (OFC), left precuneus, right parahippocampal cortex, left dorsolateral prefrontal cortex (DLPFC), right frontal pole, bilateral premotor cortex, left inferior parietal cortex, and bilateral thalamus. The results indicate that decision-making, attention, episodic memory retrieval, and contextual associative processing networks cooperate with general face processing regions to process face information under top-down perception.
Kamio, Yoko; Wolf, Julie; Fein, Deborah
This study examined automatic processing of emotional faces in individuals with high-functioning Pervasive Developmental Disorders (HFPDD) using an affective priming paradigm. Sixteen participants (HFPDD and matched controls) were presented with happy faces, fearful faces or objects in both subliminal and supraliminal exposure conditions, followed…
Cashon, Cara H.; Ha, Oh-Ryeong; DeNicola, Christopher A.; Mervis, Carolyn B.
Holistic processing of upright, but not inverted, faces is a marker of perceptual expertise for faces. This pattern is shown by typically developing individuals beginning at age 7 months. Williams syndrome (WS) is a rare neurogenetic developmental disorder characterized by extreme interest in faces from a very young age. Research on the effects of…
Vervloed, Mathijs P. J.; Hendriks, Angelique W.; van den Eijnde, Esther
Face processing development is negatively affected when infants have not been exposed to faces for some time because of congenital cataract blocking all vision (Le Grand, Mondloch, Maurer, & Brent, 2001). It is not clear, however, whether more subtle differences in face exposure may also have an influence. The present study looked at the effect of…
Karmiloff-Smith, Annette; Thomas, Michael; Annaz, Dagmara; Humphreys, Kate; Ewing, Sandra; Brace, Nicola; Duuren, Mike; Pike, Graham; Grice, Sarah; Campbell, Ruth
Face processing in Williams syndrome (WS) has been a topic of heated debate over the past decade. Initial claims about a normally developing ('intact') face-processing module were challenged by data suggesting that individuals with WS used a different balance of cognitive processes from controls, even when their behavioural scores fell within the normal range. Measurements of evoked brain potentials also point to atypical processes. However, two recent studies have claimed that people with WS process faces exactly like normal controls. In this paper, we examine the details of this continuing debate on the basis of three new face-processing experiments. In particular, for two of our experiments we built task-specific full developmental trajectories from childhood to adolescence/adulthood and plotted the WS data on these trajectories. The first experiment used photos of real faces. While it revealed broadly equivalent accuracy across groups, the WS participants were worse at configural processing when faces were upright and less sensitive than controls to face inversion. In Experiment 2, measuring face processing in a storybook context, the face inversion effect emerged clearly in controls but only weakly in the WS developmental trajectory. Unlike in the controls, Benton Face Recognition Test and Pattern Construction scores were not correlated in WS, highlighting the different developmental patterns in the two groups. Again in contrast to the controls, Experiment 3 with schematic faces and non-face stimuli revealed a configural-processing deficit in WS both with respect to chronological age (CA) and with respect to level of performance on the Benton. These findings point to both delay and deviance in WS face processing and illustrate how vital it is to build developmental trajectories for each specific task.
Peykarjou, Stefanie; Westerlund, Alissa; Cassia, Viola Macchi; Kuefner, Dana; Nelson, Charles A.
The current study examines the processing of upright and inverted faces in 3-year-old children (n = 35). Event-related potentials (ERPs) were recorded during a passive looking paradigm including adult and newborn face stimuli. We observed three face-sensitive components, the P1, the N170 and the P400. Inverted faces elicited shorter P1 latency and…
Luzzi, Simona; Baldinelli, Sara; Ranaldi, Valentina; Fabi, Katia; Cafazzo, Viviana; Fringuelli, Fabio; Silvestrini, Mauro; Provinciali, Leandro; Reverberi, Carlo; Gainotti, Guido
Famous face and voice recognition is reported to be impaired both in semantic dementia (SD) and in Alzheimer's Disease (AD), although more severely in the former. In AD, a coexistence of perceptual impairment in face and voice processing has also been reported, and this could contribute to the altered performance in complex semantic tasks. On the other hand, in SD both face and voice recognition disorders could be related to the prevalence of atrophy in the right temporal lobe (RTL). The aim of the present study was twofold: (1) to investigate famous face and voice recognition in SD and AD to verify whether the two diseases show a differential pattern of impairment, resulting from disruption of different cognitive mechanisms; (2) to check whether face and voice recognition disorders prevail in patients with atrophy mainly affecting the RTL. To avoid the potential influence of primary perceptual problems in face and voice recognition, a pool of patients suffering from early SD and AD were administered a detailed set of tests exploring face and voice perception. Thirteen SD patients (8 with prevalence of right and 5 with prevalence of left temporal atrophy) and 25 AD patients, who did not show visual or auditory perceptual impairment, were finally selected and were administered an experimental battery exploring famous face and voice recognition and naming. Twelve SD patients underwent cerebral PET imaging and were classified as right or left SD according to the onset modality and to the prevalent decrease in FDG uptake in the right or left temporal lobe, respectively. Correlation of PET imaging with famous face and voice recognition was performed. Results showed a differential performance profile in the two diseases: AD patients were significantly impaired in the naming tests but showed preserved recognition, whereas SD patients were profoundly impaired both in naming and in recognition of famous faces and voices. Furthermore, face and voice recognition disorders prevailed in SD.
Sandberg, Kristian; Bahrami, Bahador; Kanai, Ryota; Barnes, Gareth Robert; Overgaard, Morten; Rees, Geraint
Previous studies indicate that conscious face perception may be related to neural activity in a large time window around 170-800ms after stimulus presentation, yet in the majority of these studies changes in conscious experience are confounded with changes in physical stimulation. Using multivariate classification on MEG data recorded when participants reported changes in conscious perception evoked by binocular rivalry between a face and a grating, we showed that only MEG signals in the 120-320ms time range, peaking at the M170 around 180ms and the P2m at around 260ms, reliably predicted conscious experience. Conscious perception could not only be decoded significantly better than chance from the sensors that showed the largest average difference, as previous studies suggest, but also from patterns of activity across groups of occipital sensors that individually were unable to predict perception better than chance. Additionally, source space analyses showed that sources in the early and late visual system predicted conscious perception more accurately than frontal and parietal sites, although conscious perception could also be decoded there. Finally, the patterns of neural activity associated with conscious face perception generalized from one participant to another around the times of maximum prediction accuracy. Our work thus demonstrates that the neural correlates of particular conscious contents (here, faces) are highly consistent in time and space within individuals and that these correlates are shared to some extent between individuals. PMID:23281780
Schwarzer, Gudrun; Zauner, Nicola; Jovanovic, Bianca
Two experiments examined whether 4-, 6-, and 10-month-old infants process natural looking faces by feature, i.e. processing internal facial features independently of the facial context or holistically by processing the features in conjunction with the facial context. Infants were habituated to two faces and looking time was measured. After…
Meinhardt-Injac, Bozana; Persike, Malte; Meinhardt, Günter
There is increasing evidence that face recognition might be impaired in older adults, but it is unclear whether the impairment is truly perceptual and face-specific. In order to address this question, we compared performance in same/different matching tasks with face and non-face objects (watches) among young (mean age 23.7) and older adults (mean age 70.4) using a context congruency paradigm (Meinhardt-Injac, Persike, & Meinhardt, 2010, 2011a). Older adults were less accurate than young adults with both object classes, with face matching notably more impaired. Effects of context congruency and inversion, measured as the hallmarks of holistic processing, were equally strong in both age groups and were found only for faces, not for watches. The face-specific decline in older adults revealed deficits in handling internal facial features, while young adults matched external and internal features equally well. Comparison with non-face stimuli showed that this decline was face-specific and did not concern processing of object features in general. Taken together, the results indicate no age-related decline in the capability to process faces holistically. Rather, strong holistic effects, combined with a loss of precision in handling internal features, indicate that older adults rely on global viewing strategies for faces. At the same time, access to the exact properties of inner face details becomes restricted.
Collins, Jessica A.; Olson, Ingrid R.
Extensive research has supported the existence of a specialized face-processing network that is distinct from the visual processing areas used for general object recognition. The majority of this work has been aimed at characterizing the response properties of the fusiform face area (FFA) and the occipital face area (OFA), which together are thought to constitute the core network of brain areas responsible for facial identification. Although accruing evidence has shown that face-selective patches in the ventral anterior temporal lobes (vATLs) are interconnected with the FFA and OFA, and that they play a role in facial identification, the relative contribution of these brain areas to the core face-processing network has remained unarticulated. Here we review recent research critically implicating the vATLs in face perception and memory. We propose that current models of face processing should be revised such that the ventral anterior temporal lobes serve a centralized role in the visual face-processing network. We speculate that a hierarchically organized system of face processing areas extends bilaterally from the inferior occipital gyri to the vATLs, with facial representations becoming increasingly complex and abstracted from low-level perceptual features as they move forward along this network. The anterior temporal face areas may serve as the apex of this hierarchy, instantiating the final stages of face recognition. We further argue that the anterior temporal face areas are ideally suited to serve as an interface between face perception and face memory, linking perceptual representations of individual identity with person-specific semantic knowledge. PMID:24937188
Ho, Michael R; Pezdek, Kathy
The cross-race effect (CRE) describes the finding that same-race faces are recognized more accurately than cross-race faces. According to social-cognitive theories of the CRE, processes of categorization and individuation at encoding account for differential recognition of same- and cross-race faces. Recent face memory research has suggested that similar but distinct categorization and individuation processes also occur postencoding, at recognition. Using a divided-attention paradigm, in Experiments 1A and 1B we tested and confirmed the hypothesis that distinct postencoding categorization and individuation processes occur during the recognition of same- and cross-race faces. Specifically, postencoding configural divided-attention tasks impaired recognition accuracy more for same-race than for cross-race faces; on the other hand, for White (but not Black) participants, postencoding featural divided-attention tasks impaired recognition accuracy more for cross-race than for same-race faces. A social categorization paradigm used in Experiments 2A and 2B tested the hypothesis that the postencoding in-group or out-group social orientation to faces affects categorization and individuation processes during the recognition of same-race and cross-race faces. Postencoding out-group orientation to faces resulted in categorization for White but not for Black participants. This was evidenced by White participants' impaired recognition accuracy for same-race but not for cross-race out-group faces. Postencoding in-group orientation to faces had no effect on recognition accuracy for either same-race or cross-race faces. The results of Experiments 2A and 2B suggest that this social orientation facilitates White but not Black participants' individuation and categorization processes at recognition. Models of recognition memory for same-race and cross-race faces need to account for processing differences that occur at both encoding and recognition.
Shyi, Gary C-W; Wang, Chao-Chih
The composite face task is one of the most popular research paradigms for measuring holistic processing of upright faces. The exact mechanism underlying holistic processing remains elusive and controversial, and some studies have suggested that holistic processing may not be evenly distributed, in that the top-half of a face might induce stronger holistic processing than its bottom-half counterpart. In two experiments, we further examined the possibility of asymmetric holistic processing. Prior to Experiment 1, we confirmed that perceptual discriminability was equated between top and bottom face halves; we found no differences in performance between top and bottom face halves when they were presented individually. Then, in Experiment 1, using the composite face task with the complete design to reduce response bias, we failed to obtain evidence that would support the notion of asymmetric holistic processing between top and bottom face halves. To further reduce performance variability and to remove lingering holistic effects observed in the misaligned condition in Experiment 1, we doubled the number of trials and increased misalignment between top and bottom face halves to make misalignment more salient in Experiment 2. Even with these additional manipulations, we were unable to find evidence indicative of asymmetric holistic processing. Taken together, these findings suggest that holistic processing is distributed homogenously within an upright face.
Collins, Heather R; Zhu, Xun; Bhatt, Ramesh S; Clark, Jonathan D; Joseph, Jane E
The degree to which face-specific brain regions are specialized for different kinds of perceptual processing is debated. This study parametrically varied demands on featural, first-order configural, or second-order configural processing of faces and houses in a perceptual matching task to determine the extent to which the process of perceptual differentiation was selective for faces regardless of processing type (domain-specific account), specialized for specific types of perceptual processing regardless of category (process-specific account), engaged in category-optimized processing (i.e., configural face processing or featural house processing), or reflected generalized perceptual differentiation (i.e., differentiation that crosses category and processing type boundaries). ROIs were identified in a separate localizer run or with a similarity regressor in the face-matching runs. The predominant principle accounting for fMRI signal modulation in most regions was generalized perceptual differentiation. Nearly all regions showed perceptual differentiation for both faces and houses for more than one processing type, even if the region was identified as face-preferential in the localizer run. Consistent with process specificity, some regions showed perceptual differentiation for first-order processing of faces and houses (right fusiform face area and occipito-temporal cortex and right lateral occipital complex), but not for featural or second-order processing. Somewhat consistent with domain specificity, the right inferior frontal gyrus showed perceptual differentiation only for faces in the featural matching task. The present findings demonstrate that the majority of regions involved in perceptual differentiation of faces are also involved in differentiation of other visually homogenous categories.
Giel, Katrin Elisabeth; Paganini, Sarah; Schank, Irena; Enck, Paul; Zipfel, Stephan; Junne, Florian
Problems in emotion processing potentially contribute to the development and maintenance of chronic pain. Theories focusing on attentional processing have suggested that dysfunctional attention deployment toward emotional information, i.e., attentional biases for negative emotions, might be one developmental and/or maintenance factor of chronic pain. We assessed self-reported alexithymia and attentional orienting to and maintenance on emotional stimuli using eye tracking in 17 patients with chronic pain disorder (CP) and two age- and sex-matched control groups: 17 healthy individuals (HC) and 17 individuals matched to CP according to depressive symptoms (DC). In a choice viewing paradigm, a dot indicated the position of the emotional picture in the next trial to allow for strategic attention deployment. Picture pairs consisted of a happy or sad facial expression and a neutral facial expression of the same individual. Participants were asked to explore picture pairs freely. The CP and DC groups reported higher alexithymia than the HC group. HC participants showed a previously reported emotionality bias by preferentially orienting to the emotional face and preferentially maintaining on the happy face. CP and DC participants showed no facilitated early attention to sad facial expressions, and DC participants showed no facilitated early attention to happy facial expressions, while CP and HC participants did. We found no group differences in attentional maintenance. Our findings are in line with the large clinical overlap between pain and depression. The blunted initial reaction to sadness could be interpreted as a failure of the attentional system to attend to evolutionarily salient emotional stimuli or as an attempt to suppress negative emotions. These difficulties in emotion processing might contribute to the etiology or maintenance of chronic pain and depression.
Balas, Benjamin J.; Nelson, Charles A.; Westerlund, Alissa; Vogel-Farley, Vanessa; Riggins, Tracy; Kuefner, Dana
Infant face processing becomes more selective during the first year of life as a function of varying experience with distinct face categories defined by species, race, and age. Given that any individual face belongs to many such categories (e.g., a young Caucasian man's face), we asked how the neural selectivity for one aspect of facial appearance was affected by category membership along another dimension of variability. Six-month-old infants were shown upright and inverted pictures of either their own mother or a stranger while event-related potentials (ERPs) were recorded. We found that the amplitude of the P400 (a face-sensitive ERP component) was only sensitive to the orientation of the mother's face, suggesting that "tuning" of the neural response to faces is realized jointly across multiple dimensions of face appearance. PMID:20204154
Harris, Daniel A.; Ciaramitaro, Vivian M.
While some models of how various attributes of a face are processed have posited that face features, invariant physical cues such as gender or ethnicity as well as variant social cues such as emotion, may be processed independently (e.g., Bruce and Young, 1986), other models suggest a more distributed representation and interdependent processing (e.g., Haxby et al., 2000). Here, we use a contingent adaptation paradigm to investigate if mechanisms for processing the gender and emotion of a face are interdependent and symmetric across the happy–angry emotional continuum and regardless of the gender of the face. We simultaneously adapted participants to angry female faces and happy male faces (Experiment 1) or to happy female faces and angry male faces (Experiment 2). In Experiment 1, we found evidence for contingent adaptation, with simultaneous aftereffects in opposite directions: male faces were biased toward angry while female faces were biased toward happy. Interestingly, in the complementary Experiment 2, we did not find evidence for contingent adaptation, with both male and female faces biased toward angry. Our results highlight that evidence for contingent adaptation and the underlying interdependent face processing mechanisms that would allow for contingent adaptation may only be evident for certain combinations of face features. Such limits may be especially important in the case of social cues given how maladaptive it may be to stop responding to threatening information, with male angry faces considered to be the most threatening. The underlying neuronal mechanisms that could account for such asymmetric effects in contingent adaptation remain to be elucidated. PMID:27471482
Frässle, Stefan; Krach, Sören; Paulus, Frieder Michel; Jansen, Andreas
While the right-hemispheric lateralization of the face perception network is well established, recent evidence suggests that handedness affects the cerebral lateralization of face processing at the hierarchical level of the fusiform face area (FFA). However, the neural mechanisms underlying differential hemispheric lateralization of face perception in right- and left-handers are largely unknown. Using dynamic causal modeling (DCM) for fMRI, we aimed to unravel the putative processes that mediate handedness-related differences by investigating the effective connectivity in the bilateral core face perception network. Our results reveal an enhanced recruitment of the left FFA in left-handers compared to right-handers, as evidenced by more pronounced face-specific modulatory influences on both intra- and interhemispheric connections. As structural and physiological correlates of handedness-related differences in face processing, right- and left-handers varied with regard to their gray matter volume in the left fusiform gyrus and their pupil responses to face stimuli. Overall, these results describe how handedness is related to the lateralization of the core face perception network, and point to different neural mechanisms underlying face processing in right- and left-handers. In a wider context, this demonstrates the entanglement of structurally and functionally remote brain networks, suggesting a broader underlying process regulating brain lateralization. PMID:27250879
Wu, Minjie; Kujawa, Autumn; Lu, Lisa H; Fitzgerald, Daniel A; Klumpp, Heide; Fitzgerald, Kate D; Monk, Christopher S; Phan, K Luan
The ability to process and respond to emotional facial expressions is a critical skill for healthy social and emotional development. There has been growing interest in understanding the neural circuitry underlying development of emotional processing, with previous research implicating functional connectivity between amygdala and frontal regions. However, existing work has focused on threatening emotional faces, raising questions regarding the extent to which these developmental patterns are specific to threat or to emotional face processing more broadly. In the current study, we examined age-related changes in brain activity and amygdala functional connectivity during an fMRI emotional face matching task (including angry, fearful, and happy faces) in 61 healthy subjects aged 7-25 years. We found age-related decreases in ventral medial prefrontal cortex activity in response to happy faces but not to angry or fearful faces, and an age-related change (shifting from positive to negative correlation) in amygdala-anterior cingulate cortex/medial prefrontal cortex (ACC/mPFC) functional connectivity to all emotional faces. Specifically, positive correlations between amygdala and ACC/mPFC in children changed to negative correlations in adults, which may suggest early emergence of bottom-up amygdala excitatory signaling to ACC/mPFC in children and later development of top-down inhibitory control of ACC/mPFC over amygdala in adults. Age-related changes in amygdala-ACC/mPFC connectivity did not vary for processing of different facial emotions, suggesting changes in amygdala-ACC/mPFC connectivity may underlie development of broad emotional processing, rather than threat-specific processing. Hum Brain Mapp 37:1684-1695, 2016.
Zachariou, Valentinos; Nikas, Christine V; Safiullah, Zaid N; Gotts, Stephen J; Ungerleider, Leslie G
Human face recognition is often attributed to configural processing; namely, processing the spatial relationships among the features of a face. If configural processing depends on fine-grained spatial information, do visuospatial mechanisms within the dorsal visual pathway contribute to this process? We explored this question in human adults using functional magnetic resonance imaging and transcranial magnetic stimulation (TMS) in a same-different face detection task. Within localized, spatial-processing regions of the posterior parietal cortex, configural face differences led to significantly stronger activation compared to featural face differences, and the magnitude of this activation correlated with behavioral performance. In addition, detection of configural relative to featural face differences led to significantly stronger functional connectivity between the right FFA and the spatial processing regions of the dorsal stream, whereas detection of featural relative to configural face differences led to stronger functional connectivity between the right FFA and left FFA. Critically, TMS centered on these parietal regions impaired performance on configural but not featural face difference detections. We conclude that spatial mechanisms within the dorsal visual pathway contribute to the configural processing of facial features and, more broadly, that the dorsal stream may contribute to the veridical perception of faces.
Liu, Jiangang; Wang, Meiyun; Shi, Xiaohong; Feng, Lu; Li, Ling; Thacker, Justine Marie; Tian, Jie; Shi, Dapeng; Lee, Kang
The brain can perceive or recognize a face even when we are subjectively unaware of that face's existence. However, the exact neural correlates of such covert face processing remain unknown. Here, we compared the fMRI activities between a prosopagnosic patient and normal controls when they saw famous and unfamiliar faces. When compared with objects, the patient showed greater activation to famous faces in the fusiform face area (FFA) though he could not overtly recognize those faces. In contrast, the controls showed greater activation to both famous and unfamiliar faces in the FFA. Compared with unfamiliar faces, famous faces activated the controls', but not the patient's lateral prefrontal cortex (LPFC) known to be involved in familiar face recognition. In contrast, the patient showed greater activation in the bilateral medial frontal gyrus (MeFG). Functional connectivity analyses revealed that the patient's right middle fusiform gyrus (FG) showed enhanced connectivity to the MeFG, whereas the controls' middle FG showed enhanced connectivity to the LPFC. These findings suggest that the FFA may be involved in both covert and overt face recognition. The patient's impairment in overt face recognition may be due to the absence of the coupling between the right FG and the LPFC. PMID:23448870
Keyes, Helen; Brady, Nuala; Reilly, Richard B.; Foxe, John J.
The neural basis of self-recognition is mainly studied using brain-imaging techniques which reveal much about the localization of self-processing in the brain. There are comparatively few studies using EEG which allow us to study the time course of self-recognition. In this study, participants monitored a sequence of images, including 20 distinct…
Li, Jun; Liu, Jiangang; Liang, Jimin; Zhang, Hongchuan; Zhao, Jizheng; Rieth, Cory A.; Huber, David E.; Li, Wu; Shi, Guangming; Ai, Lin; Tian, Jie; Lee, Kang
To study top-down face processing, the present study used an experimental paradigm in which participants detected non-existent faces in pure noise images. Conventional BOLD signal analysis identified three regions involved in this illusory face detection. These regions included the left orbitofrontal cortex (OFC) in addition to the right fusiform face area (FFA) and right occipital face area (OFA), both of which were previously known to be involved in both top-down and bottom-up processing of faces. We used Dynamic Causal Modeling (DCM) and Bayesian model selection to further analyze the data, revealing both intrinsic and modulatory effective connectivities among these three cortical regions. Specifically, our results support the claim that the orbitofrontal cortex plays a crucial role in the top-down processing of faces by regulating the activities of the occipital face area, and the occipital face area in turn detects the illusory face features in the visual stimuli and then provides this information to the fusiform face area for further analysis. PMID:20423709
Li, Tianbi; Wang, Xueqin; Pan, Junhao; Feng, Shuyuan; Gong, Mengyuan; Wu, Yaxue; Li, Guoxiang; Li, Sheng; Yi, Li
The processing of social stimuli, such as human faces, is impaired in individuals with autism spectrum disorder (ASD), which could be accounted for by their lack of social motivation. The current study examined how the attentional processing of faces in children with ASD could be modulated by the learning of face-reward associations. Sixteen high-functioning children with ASD and 20 age- and ability-matched typically developing peers participated in the experiments. All children started with a reward learning task, in which the children were presented with three female faces that were attributed with positive, negative, and neutral values, and were required to remember the faces and their associated values. After this, they were tested on the recognition of the learned faces and a visual search task in which the learned faces served as the distractor. We found a modulatory effect of the face-reward associations on the visual search but not the recognition performance in both groups, despite the lower efficacy among children with ASD in learning the face-reward associations. Specifically, both groups responded faster when one of the distractor faces was associated with positive or negative values than when the distractor face was neutral, suggesting an efficient attentional processing of these reward-associated faces. Our findings provide direct evidence for the perceptual-level modulatory effect of reward learning on the attentional processing of faces in individuals with ASD. Autism Res 2017, 10: 1797-1807. © 2017 International Society for Autism Research, Wiley Periodicals, Inc. In our study, we tested whether the face processing of individuals with ASD could be changed when the faces were associated with different social meanings. We found no effect of social meanings on face recognition, but both groups responded faster in the visual search task when one of the distractor faces was associated with positive or negative values than when it was neutral.
Srinivasan, Narayanan; Gupta, Rashmi
Recent studies have shown links between happy faces and global, distributed attention, and between sad faces and local, focused attention. Emotions have been shown to affect global-local processing. Given that studies on emotion-cognition interactions have not explored the effect of perceptual processing at different spatial scales on processing stimuli with emotional content, the present study investigated the link between perceptual focus and emotional processing. Specifically, it examined the effects of global-local processing on the recognition of distractor faces with emotional expressions. Participants performed a digit discrimination task with digits at either the global level or the local level presented against a distractor face (happy or sad) as background. The results showed that global processing associated with broad scope of attention facilitates recognition of happy faces, and local processing associated with narrow scope of attention facilitates recognition of sad faces. The novel results of the study provide conclusive evidence for emotion-cognition interactions by demonstrating the effect of perceptual processing on emotional faces. The results along with earlier complementary results on the effect of emotion on global-local processing support a reciprocal relationship between emotional processing and global-local processing. Distractor processing with emotional information also has implications for theories of selective attention.
Trauer, Sophie M; Kotz, Sonja A; Müller, Matthias M
Emotional scenes and faces have been shown to capture and bind visual resources at early sensory processing stages, i.e. in early visual cortex. However, studies of emotional words have yielded mixed results. In the current study, ERPs were assessed simultaneously with steady-state visual evoked potentials (SSVEPs) to measure attention effects on early visual activity in emotional word processing. Neutral and negative words were flickered at 12.14 Hz whilst participants performed a Lexical Decision Task. Emotional word content did not modulate the 12.14 Hz SSVEP amplitude, neither did word lexicality. However, emotional words affected the ERP. Negative compared to neutral words, as well as words compared to pseudowords, led to enhanced deflections in the P2 time range, indicative of lexico-semantic access. The N400 was reduced for negative compared to neutral words and enhanced for pseudowords compared to words, indicating facilitated semantic processing of emotional words. LPC amplitudes reflected word lexicality and thus the task-relevant response. In line with previous ERP and imaging evidence, the present results indicate that written emotional words are facilitated in processing only subsequent to visual analysis.
Jiang, Yi; Shannon, Robert W; Vizueta, Nathalie; Bernat, Edward M; Patrick, Christopher J; He, Sheng
The fusiform face area (FFA) and the superior temporal sulcus (STS) are suggested to process facial identity and facial expression information respectively. We recently demonstrated a functional dissociation between the FFA and the STS as well as correlated sensitivity of the STS and the amygdala to facial expressions using an interocular suppression paradigm [Jiang, Y., He, S., 2006. Cortical responses to invisible faces: dissociating subsystems for facial-information processing. Curr. Biol. 16, 2023-2029.]. In the current event-related brain potential (ERP) study, we investigated the temporal dynamics of facial information processing. Observers viewed neutral, fearful, and scrambled face stimuli, either visibly or rendered invisible through interocular suppression. Relative to scrambled face stimuli, intact visible faces elicited larger positive P1 (110-130 ms) and larger negative N1 or N170 (160-180 ms) potentials at posterior occipital and bilateral occipito-temporal regions respectively, with the N170 amplitude significantly greater for fearful than neutral faces. Invisible intact faces generated a stronger signal than scrambled faces at 140-200 ms over posterior occipital areas whereas invisible fearful faces (compared to neutral and scrambled faces) elicited a significantly larger negative deflection starting at 220 ms along the STS. These results provide further evidence for cortical processing of facial information without awareness and elucidate the temporal sequence of automatic facial expression information extraction.
Neuhaus, Emily; Kresse, Anna; Faja, Susan; Bernier, Raphael A; Webb, Sara Jane
Autism spectrum disorder (ASD) has a strong heritable basis, as evidenced by twin concordance rates. Within ASD, symptom domains may arise via independent genetic contributions, with varying heritabilities and genetic mechanisms. In this article, we explore social functioning in the form of (i) electrophysiological and behavioral measures of face processing (P1 and N170) and (ii) social behavior among child and adolescent twins with (N = 52) and without ASD (N = 66). Twins without ASD had better holistic face processing and face memory, faster P1 responses and greater sensitivity to the effects of facial inversion on P1. In contrast, N170 responses to faces were similar across diagnosis, with more negative amplitudes for faces vs non-face images. Across the sample, stronger social skills and fewer social difficulties were associated with faster P1 and N170 responses to upright faces, and better face memory. Twins were highly correlated within pairs across most measures, but correlations were significantly stronger for monozygotic vs dizygotic pairs on N170 latency and social problems. We suggest common developmental influences across twins for face processing and social behavior, but highlight (i) neural speed of face processing and (ii) social difficulties as important avenues in the search for genetic underpinnings in ASD. © The Author (2015). Published by Oxford University Press.
Robinson, Amanda K; Plaut, David C; Behrmann, Marlene
Words and faces have vastly different visual properties, but increasing evidence suggests that word and face processing engage overlapping distributed networks. For instance, fMRI studies have shown overlapping activity for face and word processing in the fusiform gyrus despite well-characterized lateralization of these objects to the left and right hemispheres, respectively. To investigate whether face and word perception influences perception of the other stimulus class and elucidate the mechanisms underlying such interactions, we presented images using rapid serial visual presentations. Across 3 experiments, participants discriminated 2 face, word, and glasses targets (T1 and T2) embedded in a stream of images. As expected, T2 discrimination was impaired when it followed T1 by 200 to 300 ms relative to longer intertarget lags, the so-called attentional blink. Interestingly, T2 discrimination accuracy was significantly reduced at short intertarget lags when a face was followed by a word (face-word) compared with glasses-word and word-word combinations, indicating that face processing interfered with word perception. The reverse effect was not observed; that is, word-face performance was no different than the other object combinations. EEG results indicated the left N170 to T1 was correlated with the word decrement for face-word trials, but not for other object combinations. Taken together, the results suggest face processing interferes with word processing, providing evidence for overlapping neural mechanisms of these 2 object types. Furthermore, asymmetrical face-word interference points to greater overlap of face and word representations in the left than the right hemisphere.
Montalan, Benoit; Veujoz, Mathieu; Boitout, Alexis; Leleu, Arnaud; Camus, Odile; Lalonde, Robert; Rebai, Mohamed
Recent ERP research has indicated that the processing of faces of other races (OR) and same race (SR) as the perceiver differs at the perceptual level, more precisely for the N170 component. The purpose of the present study was to continue the investigation of the race-of-face processing across multiple orientations. Event-related brain potentials…
Brewster, Paul W H; Mullin, Caitlin R; Dobrin, Roxana A; Steeves, Jennifer K E
Previous research has demonstrated sex differences in face processing at both neural and behavioural levels. The present study examined the role of handedness and sexual orientation as mediators of this effect. We compared the performance of LH (left-handed) and RH (right-handed) heterosexual and homosexual male and female participants on a face recognition memory task. Our main findings were that homosexual males have better face recognition memory than both heterosexual males and homosexual women. We also demonstrate better face processing in women than in men. Finally, LH heterosexual participants had better face recognition than LH homosexual participants and also tended to be better than RH heterosexual participants. These findings are consistent with differences in the organisation and laterality of face-processing mechanisms as a function of sex, handedness, and sexual orientation.
Yankouskaya, Alla; Booth, David A; Humphreys, Glyn
Interactions between the processing of emotion expression and form-based information from faces (facial identity) were investigated using the redundant-target paradigm, in which we specifically tested whether identity and emotional expression are integrated in a superadditive manner (Miller, Cognitive Psychology 14:247-279, 1982). In Experiments 1 and 2, participants performed emotion and face identity judgments on faces with sad or angry emotional expressions. Responses to redundant targets were faster than responses to either single target when a universal emotion was conveyed, and performance violated the predictions from a model assuming independent processing of emotion and face identity. Experiment 4 showed that these effects were not modulated by varying interstimulus and nontarget contingencies, and Experiment 5 demonstrated that the redundancy gains were eliminated when faces were inverted. Taken together, these results suggest that the identification of emotion and facial identity interact in face processing.
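The superadditivity criterion cited in this abstract is Miller's (1982) race-model inequality, which bounds the redundant-target RT distribution by the sum of the single-target distributions: P(RT_redundant ≤ t) ≤ P(RT_1 ≤ t) + P(RT_2 ≤ t). A minimal sketch of that test, using synthetic RTs rather than the study's data (all distribution parameters are illustrative assumptions):

```python
import numpy as np

# Hedged illustration of Miller's (1982) race-model inequality test.
# Synthetic reaction times (ms); means and SDs are assumed, not from the study.
rng = np.random.default_rng(1)
rt_single_a = rng.normal(560, 60, 1000)   # e.g., emotion-only target
rt_single_b = rng.normal(570, 60, 1000)   # e.g., identity-only target
# A coactive (superadditive) system pools evidence, so redundant targets
# are faster than either single-target condition predicts.
rt_redundant = rng.normal(480, 55, 1000)

def cdf(samples, t):
    """Empirical cumulative probability P(RT <= t)."""
    return np.mean(samples <= t)

# Probe the inequality over the fast tail of the RT distribution, where
# race-model violations are diagnostic of coactivation.
violated = any(
    cdf(rt_redundant, t) > cdf(rt_single_a, t) + cdf(rt_single_b, t)
    for t in np.arange(350, 500, 10)
)
print("race model violated:", violated)
```

In practice the test is run on each participant's trial-level RTs at fixed quantiles; a violation at any fast quantile rules out an independent parallel race and supports integrated processing of the two dimensions.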
Core Outcome Domains for early phase clinical trials of sound-, psychology-, and pharmacology-based interventions to manage chronic subjective tinnitus in adults: the COMIT'ID study protocol for using a Delphi process and face-to-face meetings to establish consensus.
Fackrell, Kathryn; Smith, Harriet; Colley, Veronica; Thacker, Brian; Horobin, Adele; Haider, Haúla F; Londero, Alain; Mazurek, Birgit; Hall, Deborah A
The reporting of outcomes in clinical trials of subjective tinnitus indicates that many different tinnitus-related complaints are of interest to investigators, from perceptual attributes of the sound (e.g. loudness) to psychosocial impacts (e.g. quality of life). Even when considering one type of intervention strategy for subjective tinnitus, there is no agreement about what is critically important for deciding whether a treatment is effective. The main purpose of this observational study is, therefore, to develop Core Outcome Domain Sets for the three different intervention strategies (sound, psychological, and pharmacological) for adults with chronic subjective tinnitus that should be measured and reported in every clinical trial of these interventions. Secondary objectives are to identify the strengths and limitations of our study design for recruiting and reducing attrition of participants, and to explore uptake of the core outcomes. The 'Core Outcome Measures in Tinnitus: International Delphi' (COMIT'ID) study will use a mixed-methods approach that incorporates input from health care users at the pre-Delphi stage, a modified three-round Delphi survey and final consensus meetings (one for each intervention). The meetings will generate recommendations by stakeholder representatives on agreed Core Outcome Domain Sets specific to each intervention. A subsequent step will establish a common cross-cutting Core Outcome Domain Set by identifying the common outcome domains included in all three intervention-specific Core Outcome Domain Sets. To address the secondary objectives, we will gather feedback from participants about their experience of taking part in the Delphi process. We aspire to conduct an observational cohort study to evaluate uptake of the core outcomes in published studies at 7 years following Core Outcome Set publication. The COMIT'ID study aims to develop a Core Outcome Domain Set that is agreed as critically important for deciding whether a treatment is effective.
Li, Jin; Huang, Lijie; Song, Yiying; Liu, Jia
It has been long proposed that our extraordinary face recognition ability stems from holistic face processing. Two widely-used behavioral hallmarks of holistic face processing are the whole-part effect (WPE) and composite-face effect (CFE). However, it remains unknown whether these two effects reflect similar or different aspects of holistic face processing. Here we investigated this question by examining whether the WPE and CFE involved shared or distinct neural substrates in a large sample of participants (N=200). We found that the WPE and CFE showed hemispheric dissociation in the fusiform face area (FFA), that is, the WPE was correlated with face selectivity in the left FFA, while the CFE was correlated with face selectivity in the right FFA. Further, the correlation between the WPE and face selectivity was largely driven by the FFA response to faces, whereas the association between the CFE and face selectivity resulted from suppressed response to objects in the right FFA. Finally, we also observed dissociated correlation patterns of the WPE and CFE in other face-selective regions and across the whole brain. These results suggest that the WPE and CFE may reflect different aspects of holistic face processing, which shed new light on the behavioral dissociations of these two effects demonstrated in literature. Copyright © 2017 Elsevier Ltd. All rights reserved.
Sato, Wataru; Kochiyama, Takanori; Uono, Shota; Matsuda, Kazumi; Usui, Keiko; Usui, Naotaka; Inoue, Yushi; Toichi, Motomi
Faces contain multifaceted information that is important for human communication. Neuroimaging studies have revealed face-specific activation in multiple brain regions, including the inferior occipital gyrus (IOG) and amygdala; it is often assumed that these regions constitute the neural network responsible for the processing of faces. However, it remains unknown whether and how these brain regions transmit information during face processing. This study investigated these questions by applying dynamic causal modeling of induced responses to human intracranial electroencephalography data recorded from the IOG and amygdala during the observation of faces, mosaics, and houses in upright and inverted orientations. Model comparisons assessing the experimental effects of upright faces versus upright houses and upright faces versus upright mosaics consistently indicated that the model having face-specific bidirectional modulatory effects between the IOG and amygdala was the most probable. The experimental effect between upright versus inverted faces also favored the model with bidirectional modulatory effects between the IOG and amygdala. The spectral profiles of modulatory effects revealed both same-frequency (e.g., gamma-gamma) and cross-frequency (e.g., theta-gamma) couplings. These results suggest that the IOG and amygdala communicate rapidly with each other using various types of oscillations for the efficient processing of faces. Hum Brain Mapp 38:4511-4524, 2017. © 2017 Wiley Periodicals, Inc.
Ibáñez, Agustin; Petroni, Agustin; Urquina, Hugo; Torrente, Fernando; Torralva, Teresa; Hurtado, Esteban; Guex, Raphael; Blenkmann, Alejandro; Beltrachini, Leandro; Muravchik, Carlos; Baez, Sandra; Cetkovich, Marcelo; Sigman, Mariano; Lischinsky, Alicia; Manes, Facundo
Although it has been shown that adults with attention-deficit hyperactivity disorder (ADHD) have impaired social cognition, no previous study has reported the brain correlates of face valence processing. This study looked for behavioral, neuropsychological, and electrophysiological markers of emotion processing for faces (N170) in adult ADHD compared to controls matched by age, gender, educational level, and handedness. We designed an event-related potential (ERP) study based on a dual valence task (DVT), in which faces and words were presented to test the effects of stimulus type (faces, words, or face-word stimuli) and valence (positive versus negative). Individual signatures of cognitive functioning in participants with ADHD and controls were assessed with a comprehensive neuropsychological evaluation, including executive functioning (EF) and theory of mind (ToM). Compared to controls, the adult ADHD group showed deficits in N170 emotion modulation for facial stimuli. These N170 impairments were observed in the absence of any deficit in facial structural processing, suggesting a specific ADHD impairment in early facial emotion modulation. The cortical current density mapping of N170 yielded a main neural source of N170 at posterior section of fusiform gyrus (maximum at left hemisphere for words and right hemisphere for faces and simultaneous stimuli). Neural generators of N170 (fusiform gyrus) were reduced in ADHD. In those patients, N170 emotion processing was associated with performance on an emotional inference ToM task, and N170 from simultaneous stimuli was associated with EF, especially working memory. This is the first report to reveal an adult ADHD-specific impairment in the cortical modulation of emotion for faces and an association between N170 cortical measures and ToM and EF.
Namdar, Gal; Avidan, Galia; Ganel, Tzvi
Configural processing governs human perception across various domains, including face perception. An established marker of configural face perception is the face inversion effect, in which performance is typically better for upright compared to inverted faces. In two experiments, we tested whether configural processing could influence basic visual abilities such as perceptual spatial resolution (i.e., the ability to detect spatial visual changes). Face-related perceptual spatial resolution was assessed by measuring the just noticeable difference (JND) to subtle positional changes between specific features in upright and inverted faces. The results revealed robust inversion effect for spatial sensitivity to configural-based changes, such as the distance between the mouth and the nose, or the distance between the eyes and the nose. Critically, spatial resolution for face features within the region of the eyes (e.g., the interocular distance between the eyes) was not affected by inversion, suggesting that the eye region operates as a separate 'gestalt' unit which is relatively immune to manipulations that would normally hamper configural processing. Together these findings suggest that face orientation modulates fundamental psychophysical abilities including spatial resolution. Furthermore, they indicate that classic psychophysical methods can be used as a valid measure of configural face processing. Copyright © 2015 Elsevier Ltd. All rights reserved.
Herrmann, C S; Bosch, V
We examined whether early visual processing reflects perceptual properties of a stimulus in addition to physical features. We recorded event-related potentials (ERPs) of 13 subjects in a visual classification task. We used four different stimuli which were all composed of four identical elements. One of the stimuli constituted an illusory Kanizsa square, another was composed of the same number of collinear line segments but the elements did not form a Gestalt. In addition, a target and a control stimulus were used which were arranged differently. These stimuli allow us to differentiate the processing of collinear line elements (stimulus features) and illusory figures (perceptual properties). The visual N170 in response to the illusory figure was significantly larger than that to the other collinear stimulus. This is taken to indicate that the visual N170 reflects cognitive processes of Gestalt perception in addition to attentional processes and physical stimulus properties.
Fornaciai, Michele; Brannon, Elizabeth M; Woldorff, Marty G; Park, Joonkoo
While parietal cortex is thought to be critical for representing numerical magnitudes, we recently reported an event-related potential (ERP) study demonstrating selective neural sensitivity to numerosity over midline occipital sites very early in the time course, suggesting the involvement of early visual cortex in numerosity processing. However, which specific brain area underlies such early activation is not known. Here, we tested whether numerosity-sensitive neural signatures arise specifically from the initial stages of visual cortex, aiming to localize the generator of these signals by taking advantage of the distinctive folding pattern of early occipital cortices around the calcarine sulcus, which predicts an inversion of polarity of ERPs arising from these areas when stimuli are presented in the upper versus lower visual field. Dot arrays, including 8-32 dots constructed systematically across various numerical and non-numerical visual attributes, were presented randomly in either the upper or lower visual hemifields. Our results show that neural responses at about 90 ms post-stimulus were robustly sensitive to numerosity. Moreover, the peculiar pattern of polarity inversion of numerosity-sensitive activity at this stage suggested its generation primarily in V2 and V3. In contrast, numerosity-sensitive ERP activity at occipito-parietal channels later in the time course (210-230 ms) did not show polarity inversion, indicating a subsequent processing stage in the dorsal stream. Overall, these results demonstrate that numerosity processing begins in one of the earliest stages of the cortical visual stream. Copyright © 2017 Elsevier Inc. All rights reserved.
Lafer-Sousa, Rosa; Conway, Bevil R.
Visual-object processing culminates in inferior temporal (IT) cortex. To assess the organization of IT, we measured fMRI responses in alert monkey to achromatic images (faces, fruit, bodies, places) and colored gratings. IT contained multiple color-biased regions, which were typically ventral to face patches and, remarkably, yoked to them, spaced regularly at four locations predicted by known anatomy. Color and face selectivity increased for more anterior regions, indicative of a broad hierarchical arrangement. Responses to non-face shapes were found across IT, but were stronger outside color-biased regions and face patches, consistent with multiple parallel streams. IT also contained multiple coarse eccentricity maps: face patches overlapped central representations; color-biased regions spanned mid-peripheral representations; and place-biased regions overlapped peripheral representations. These results suggest that IT comprises parallel, multi-stage processing networks subject to one organizing principle. PMID:24141314
Jonas, Jacques; Gomez, Jesse; Maillard, Louis; Brissart, Hélène; Hossu, Gabriela; Jacques, Corentin; Loftus, David; Colnat-Coulbois, Sophie; Stigliani, Anthony; Barnett, Michael A.; Grill-Spector, Kalanit; Rossion, Bruno
Human face perception requires a network of brain regions distributed throughout the occipital and temporal lobes with a right hemisphere advantage. Present theories consider this network as either a processing hierarchy beginning with the inferior occipital gyrus (occipital face area; IOG-faces/OFA) or a multiple-route network with nonhierarchical components. The former predicts that removing IOG-faces/OFA will detrimentally affect downstream stages, whereas the latter does not. We tested this prediction in a human patient (Patient S.P.) requiring removal of the right inferior occipital cortex, including IOG-faces/OFA. We acquired multiple fMRI measurements in Patient S.P. before and after a preplanned surgery and multiple measurements in typical controls, enabling both within-subject/across-session comparisons (Patient S.P. before resection vs Patient S.P. after resection) and between-subject/across-session comparisons (Patient S.P. vs controls). We found that the spatial topology and selectivity of downstream ipsilateral face-selective regions were stable 1 and 8 month(s) after surgery. Additionally, the reliability of distributed patterns of face selectivity in Patient S.P. before versus after resection was not different from across-session reliability in controls. Nevertheless, postoperatively, representations of visual space were typical in dorsal face-selective regions but atypical in ventral face-selective regions and V1 of the resected hemisphere. Diffusion weighted imaging in Patient S.P. and controls identifies white matter tracts connecting retinotopic areas to downstream face-selective regions, which may contribute to the stable and plastic features of the face network in Patient S.P. after surgery. Together, our results support a multiple-route network of face processing with nonhierarchical components and shed light on stable and plastic features of high-level visual cortex following focal brain damage. SIGNIFICANCE STATEMENT Brain networks consist
Zhao, Lun; Bentin, Shlomo
We explored perceptual factors that might account for the other-race classification advantage (ORCA) in classifying faces by race. Testing Chinese participants in China and Israeli participants in Israel we show that: (a) The distinction between Chinese and Israeli faces is highly accurate even on the basis of isolated eyes or faces with eyes concealed, but full faces are categorized faster. (b) The ORCA is similarly robust for full faces and for face parts. (c) The ORCA was larger when the configuration of the inner-face components was distorted, reflecting delayed categorization of own-race distorted faces relative to own-race normally configured faces but no conspicuous distortion effect on other-race faces. These data demonstrate that perceptual factors can account for the ORCA independently of social bias. We suggest that one source of the ORCA in race categorization is the configural analysis applied by default while processing own-race but not other-race faces. PMID:22008980
Fisher, Clark; Freiwald, Winrich A.
Facial motion transmits rich and ethologically vital information [1, 2], but how the brain interprets this complex signal is poorly understood. Facial form is analyzed by anatomically distinct face patches in the macaque brain [3, 4], and facial motion activates these patches and surrounding areas [5, 6]. Yet it is not known whether facial motion is processed by its own distinct and specialized neural machinery, and if so, what that machinery’s organization might be. To address these questions, we used functional magnetic resonance imaging (fMRI) to monitor the brain activity of macaque monkeys while they viewed low- and high-level motion and form stimuli. We found that, beyond classical motion areas and the known face patch system, moving faces recruited a heretofore-unrecognized face patch. Although all face patches displayed distinctive selectivity for face motion over object motion, only two face patches preferred naturally moving faces, while three others preferred randomized, rapidly varying sequences of facial form. This functional divide was anatomically specific, segregating dorsal from ventral face patches, thereby revealing a new organizational principle of the macaque face-processing system. PMID:25578903
Xu, Buyun; Liu-Shuang, Joan; Rossion, Bruno; Tanaka, James
A growing body of literature suggests that human individuals differ in their ability to process face identity. These findings mainly stem from explicit behavioral tasks, such as the Cambridge Face Memory Test (CFMT). However, it remains an open question whether such individual differences can be found in the absence of an explicit face identity task and when faces have to be individualized at a single glance. In the current study, we tested 49 participants with a recently developed fast periodic visual stimulation (FPVS) paradigm [Liu-Shuang, J., Norcia, A. M., & Rossion, B. An objective index of individual face discrimination in the right occipitotemporal cortex by means of fast periodic oddball stimulation. Neuropsychologia, 52, 57-72, 2014] in EEG to rapidly, objectively, and implicitly quantify face identity processing. In the FPVS paradigm, one face identity (A) was presented at the frequency of 6 Hz, allowing only one gaze fixation, with different face identities (B, C, D) presented every fifth face (1.2 Hz; i.e., AAAABAAAACAAAAD…). Results showed a face individuation response at 1.2 Hz and its harmonics, peaking over occipitotemporal locations. The magnitude of this response showed high reliability across different recording sequences and was significant in all but two participants, with the magnitude and lateralization differing widely across participants. There was a modest but significant correlation between the individuation response amplitude and the performance of the behavioral CFMT task, despite the fact that CFMT and FPVS measured different aspects of face identity processing. Taken together, the current study highlights the FPVS approach as a promising means for studying individual differences in face identity processing.
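The FPVS oddball design described above (base identity at 6 Hz, a different identity at every fifth position, so oddballs appear at 6/5 = 1.2 Hz) can be made concrete with a short sketch. This is an illustrative reconstruction, not the authors' stimulation code; the function and variable names are ours.

```python
# Minimal sketch (not the authors' code) of the FPVS oddball sequence:
# base identity "A" appears at 6 stimuli/s, and a different identity
# (B, C, D, ...) is substituted at every 5th position, which yields an
# oddball (face-individuation) frequency of 6 / 5 = 1.2 Hz.
import itertools

BASE_RATE_HZ = 6.0   # one face presented every 1/6 s
ODDBALL_EVERY = 5    # every 5th face is a new identity

def fpvs_sequence(n_stimuli, oddballs=("B", "C", "D")):
    """Return a presentation list like A A A A B A A A A C A A A A D ..."""
    cycle = itertools.cycle(oddballs)
    seq = []
    for i in range(1, n_stimuli + 1):
        seq.append(next(cycle) if i % ODDBALL_EVERY == 0 else "A")
    return seq

seq = fpvs_sequence(15)                        # A A A A B A A A A C A A A A D
oddball_rate = BASE_RATE_HZ / ODDBALL_EVERY    # 1.2 Hz
```

In the EEG analysis, the individuation response is then read out at this 1.2 Hz oddball frequency and its harmonics, separately from the 6 Hz general visual response.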
Jemel, B; George, N; Chaby, L; Fiori, N; Renault, B
We provide electrophysiological evidence supporting the hypothesis that part and whole face processing involve distinct functional mechanisms. We used a congruency judgment task and studied part-to-whole and part-to-part priming effects. Neither part-to-whole nor part-to-part conditions elicited early congruency effects on face-specific ERP components, suggesting that activation of the internal representations should occur later on. However, these components showed differential responsiveness to whole faces and isolated eyes. In addition, although late ERP components were affected when the eye targets were not associated with the prime in both conditions, their temporal and topographical features depended on the latter. These differential effects suggest the existence of distributed neural networks in the inferior temporal cortex where part and whole facial representations may be stored.
Parry, Ingrid; Sen, Soman; Palmieri, Tina; Greenhalgh, David
Special emphasis is placed on the clinical management of facial scarring because of the profound physical and psychological impact of facial burns. Noninvasive methods of facial scar management include pressure therapy, silicone, massage, and facial exercises. Early implementation of these scar management techniques after a burn injury is typically accepted as standard burn rehabilitation practice; however, little data exist to support this practice. This study evaluated the timing of common noninvasive scar management interventions after facial skin grafting in children and the impact on outcome, as measured by scar assessment and need for facial reconstructive surgery. A retrospective review of 138 patients who underwent excision and grafting of the face and subsequent noninvasive scar management during a 10-year time frame was conducted. Regression analyses showed that earlier application of silicone was significantly related to lower Modified Vancouver Scar Scale scores, specifically in the subscales of vascularity and pigmentation. Early use of pressure therapy and implementation of facial exercises were also related to lower Modified Vancouver Scar Scale vascularity scores. No relationship was found between timing of the interventions and facial reconstructive outcome. Early use of silicone, pressure therapy, and exercise may improve scar outcome and accelerate time to scar maturity.
Raja Beharelle, Anjali; Dick, Anthony Steven; Josse, Goulven; Solodkin, Ana; Huttenlocher, Peter R; Levine, Susan C; Small, Steven L
A predominant theory regarding early stroke and its effect on language development is that early left hemisphere lesions trigger compensatory processes that allow the right hemisphere to assume dominant language functions, and this is thought to underlie the near normal language development observed after early stroke. To test this theory, we used functional magnetic resonance imaging to examine brain activity during category fluency in participants who had sustained pre- or perinatal left hemisphere stroke (n = 25) and in neurologically normal siblings (n = 27). In typically developing children, performance of a category fluency task elicits strong involvement of left frontal and lateral temporal regions and a lesser involvement of right hemisphere structures. In our cohort of atypically developing participants with early stroke, expressive and receptive language skills correlated with activity in the same left inferior frontal regions that support language processing in neurologically normal children. This was true independent of either the amount of brain injury or the extent that the injury was located in classical cortical language processing areas. Participants with bilateral activation in left and right superior temporal-inferior parietal regions had better language function than those with either predominantly left- or right-sided unilateral activation. The advantage conferred by left inferior frontal and bilateral temporal involvement demonstrated in our study supports a strong predisposition for typical neural language organization, despite an intervening injury, and argues against models suggesting that the right hemisphere fully accommodates language function following early injury.
Weigelt, Sarah; Koldewyn, Kami; Kanwisher, Nancy
Although many studies have reported face identity recognition deficits in autism spectrum disorders (ASD), two fundamental questions remain: 1) Is this deficit “process specific” for face memory in particular, or does it extend to perceptual discrimination of faces as well? And 2) Is the deficit “domain specific” for faces, or is it found more generally for other social or even nonsocial stimuli? The answers to these questions are important both for understanding the nature of autism and its developmental etiology, and for understanding the functional architecture of face processing in the typical brain. Here we show that children with ASD are impaired (compared to age and IQ-matched typical children) in face memory, but not face perception, demonstrating process specificity. Further, we find no deficit for either memory or perception of places or cars, indicating domain specificity. Importantly, we further show deficits in both the perception and memory of bodies, suggesting that the relevant domain of deficit may be social rather than specifically facial. These results provide a more precise characterization of the cognitive phenotype of autism and further indicate a functional dissociation between face memory and face perception. PMID:24040276
Parr, Lisa A.; Heintz, Matthew; Akamagwuna, Unoma
Previous studies have demonstrated the sensitivity of chimpanzees to facial configurations. Three studies further these findings by showing this sensitivity to be specific to second-order relational properties. In humans, this type of configural processing requires prolonged experience and enables subordinate-level discriminations of many…
Balconi, Michela; Lucchiari, Claudio
It remains an open question whether it is possible to assign a single brain operation or psychological function for facial emotion decoding to a certain type of oscillatory activity. Gamma band activity (GBA) offers an adequate tool for studying cortical activation patterns during emotional face information processing. In the present study brain oscillations were analyzed in response to facial expression of emotions. Specifically, GBA modulation was measured when twenty subjects looked at emotional (angry, fearful, happy, and sad faces) or neutral faces in two different conditions: subliminal (10 ms) vs supraliminal (150 ms) stimulation (100 target-mask pairs for each condition). The results showed that both consciousness and significance of the stimulus in terms of arousal can modulate the power synchronization (ERD decrease) during the 150-350 ms time range: an early oscillatory event showed its peak at about 200 ms post-stimulus. GBA was enhanced by supraliminal more than subliminal elaboration, as well as more by high arousal (anger and fear) than low arousal (happiness and sadness) emotions. Finally, a left-posterior dominance for conscious elaboration was found, whereas the right hemisphere discriminated emotional from neutral faces.
Schwab, Daniela; Schienle, Anne
The present event-related potential (ERP) study investigated for the first time whether children with early-onset social anxiety disorder (SAD) process affective facial expressions of varying intensities differently than non-anxious controls. Participants were 15 SAD patients and 15 non-anxious controls (mean age of 9 years). They were presented with schematic faces displaying anger and happiness at four intensity levels (25%, 50%, 75%, and 100%), as well as with neutral faces. ERPs in early and later time windows (P100, N170, late positivity [LP]), as well as affective ratings (valence and arousal) for the faces, were recorded. SAD patients rated the faces as generally more arousing, regardless of the type of emotion and intensity. Moreover, they displayed enhanced right-parietal LP (350-650 ms). Both arousal ratings and LP reflect stimulus intensity. Therefore, this study provides the first evidence of an intensity amplification bias in pediatric SAD during facial affect processing.
Sfärlea, Anca; Greimel, Ellen; Platt, Belinda; Bartling, Jürgen; Schulte-Körne, Gerd; Dieler, Alica C
The present study explored the neurophysiological correlates of perception and recognition of emotional facial expressions in adolescent anorexia nervosa (AN) patients using event-related potentials (ERPs). We included 20 adolescent girls with AN and 24 healthy girls and recorded ERPs during a passive viewing task and three active tasks requiring processing of emotional faces in varying processing depths; one of the tasks also assessed emotion recognition abilities behaviourally. Despite the absence of behavioural differences, we found that across all tasks AN patients exhibited a less pronounced early posterior negativity (EPN) in response to all facial expressions compared to controls. The EPN is an ERP component reflecting an automatic, perceptual processing stage which is modulated by the intrinsic salience of a stimulus. Hence, the less pronounced EPN in anorexic girls suggests that they might perceive other people's faces as less intrinsically relevant, i.e. as less "important" than do healthy girls. Copyright © 2016 Elsevier B.V. All rights reserved.
Barisnikov, Koviljka; Hippolyte, Loyse; Van der Linden, Martial
Face processing and facial expression recognition was investigated in 17 adults with Down syndrome, and results were compared with those of a child control group matched for receptive vocabulary. On the tasks involving faces without emotional content, the adults with Down syndrome performed significantly worse than did the controls. However, their…
Yin, Lijun; Fan, Mingxia; Lin, Lijia; Sun, Delin; Wang, Zhaoxin
Consistent attention and proper processing of infant faces by adults are essential for infant survival. Previous behavioral studies showed gender differences in processing infant cues (e.g., crying, laughing or facial attractiveness) and more importantly, the efforts invested in nurturing offspring. The underlying neural mechanisms of processing unknown infant faces provide hints for understanding behavioral differences. This functional magnetic resonance imaging (fMRI) study recruited 32 unmarried adult (16 females and 16 males) participants to view unfamiliar infant faces and rate the attractiveness. Adult faces were also included. Behaviorally, although females and males showed no differences in attractiveness ratings of infant faces, a positive correlation was found between female's (but not male's) subjective liking for infants and attractiveness ratings of the infant faces. Functionally, brain activations to infant faces were modulated by attractiveness differently in males and females. Specifically, in female participants, activities in the ventromedial prefrontal cortex (vmPFC) and striatum/Nucleus Accumbens (NAcc) were positively modulated by infant facial attractiveness, and the modulation coefficients of these two regions were positively correlated. In male participants, infant facial attractiveness negatively modulated the activity in the dorsomedial prefrontal cortex (dmPFC). Our findings reveal that different neural mechanisms are involved in the processing of infant faces, which might lead to observed behavioral differences between males and females towards the baby. PMID:29184490
Faja, Susan; Webb, Sara Jane; Merkle, Kristen; Aylward, Elizabeth; Dawson, Geraldine
The present study investigates the accuracy and speed of face processing employed by high-functioning adults with autism spectrum disorders (ASDs). Two behavioral experiments measured sensitivity to distances between features and face recognition when performance depended on holistic versus featural information. Results suggest adults with ASD…
Menzel, Claudia; Hayn-Leichsenring, Gregor U; Redies, Christoph; Németh, Kornél; Kovács, Gyula
The presence of noise usually impairs the processing of a stimulus. Here, we studied the effects of noise on face processing and show, for the first time, that adaptation to noise patterns has beneficial effects on face perception. We used noiseless faces that were either surrounded by random noise or presented on a uniform background as stimuli. In addition, the faces were either preceded by noise adaptors or not. Moreover, we varied the statistics of the noise so that its spectral slope either matched that of the faces or it was steeper or shallower. Results of parallel ERP recordings showed that the background noise reduces the amplitude of the face-evoked N170, indicating less intensive face processing. Adaptation to a noise pattern, however, led to reduced P1 and enhanced N170 amplitudes as well as to a better behavioral performance in two of the three noise conditions. This effect was also augmented by the presence of background noise around the target stimuli. Additionally, the spectral slope of the noise pattern affected the size of the P1, N170 and P2 amplitudes. We reason that the observed effects are due to the selective adaptation of noise-sensitive neurons present in the face-processing cortical areas, which may enhance the signal-to-noise ratio. Copyright © 2017 Elsevier Inc. All rights reserved.
Deruelle, Christine; Rondan, Cecilie; Gepner, Bruno; Tardif, Carole
Two experiments were designed to investigate possible abnormal face processing strategies in children with autistic spectrum disorders. A group of 11 children with autism was compared to two groups of normally developing children matched on verbal mental age and on chronological age. In the first experiment, participants had to recognize faces on…
Cheng, Xue Jun; McCarthy, Callum J; Wang, Tony S L; Palmeri, Thomas J; Little, Daniel R
Upright faces are thought to be processed more holistically than inverted faces. In the widely used composite face paradigm, holistic processing is inferred from interference in recognition performance from a to-be-ignored face half for upright and aligned faces compared with inverted or misaligned faces. We sought to characterize the nature of holistic processing in composite faces in computational terms. We use logical-rule models (Fifić, Little, & Nosofsky, 2010) and Systems Factorial Technology (Townsend & Nozawa, 1995) to examine whether composite faces are processed through pooling top and bottom face halves into a single processing channel (coactive processing), which is one common mechanistic definition of holistic processing. By specifically operationalizing holistic processing as the pooling of features into a single decision process in our task, we are able to distinguish it from other processing models that may underlie composite face processing. For instance, a failure of selective attention might result even when top and bottom components of composite faces are processed in serial or in parallel without processing the entire face coactively. Our results show that performance is best explained by a mixture of serial and parallel processing architectures across all four upright and inverted, aligned and misaligned face conditions. The results indicate multichannel, featural processing of composite faces in a manner inconsistent with the notion of coactivity.
Shaw, Daniel Joel; Mareček, Radek; Grosbras, Marie-Helene; Leonard, Gabriel; Pike, G. Bruce; Paus, Tomáš
Our ability to process complex social cues presented by faces improves during adolescence. Using multivariate analyses of neuroimaging data collected longitudinally from a sample of 38 adolescents (17 males) when they were 10, 11.5, 13 and 15 years old, we tested the possibility that there exist parallel variations in the structural and functional development of neural systems supporting face processing. By combining measures of task-related functional connectivity and brain morphology, we reveal that both the structural covariance and functional connectivity among ‘distal’ nodes of the face-processing network engaged by ambiguous faces increase during this age range. Furthermore, we show that the trajectory of increasing functional connectivity between the distal nodes occurs in tandem with the development of their structural covariance. This demonstrates a tight coupling between functional and structural maturation within the face-processing network. Finally, we demonstrate that increased functional connectivity is associated with age-related improvements of face-processing performance, particularly in females. We suggest that our findings reflect greater integration among distal elements of the neural systems supporting the processing of facial expressions. This, in turn, might facilitate an enhanced extraction of social information from faces during a time when greater importance is placed on social interactions. PMID:26772669
Domes, Gregor; Heinrichs, Markus; Kumbier, Ekkehardt; Grossmann, Annette; Hauenstein, Karlheinz; Herpertz, Sabine C
Autism spectrum disorder (ASD) is associated with altered face processing and decreased activity in brain regions involved in face processing. The neuropeptide oxytocin has been shown to promote face processing and modulate brain activity in healthy adults. The present study examined the effects of oxytocin on the neural basis of face processing in adults with Asperger syndrome (AS). A group of 14 individuals with AS and a group of 14 neurotypical control participants performed a face-matching and a house-matching task during functional magnetic resonance imaging. The effects of a single dose of 24 IU intranasally administered oxytocin were tested in a randomized, placebo-controlled, within-subject, cross-over design. Under placebo, the AS group showed decreased activity in the right amygdala, fusiform gyrus, and inferior occipital gyrus compared with the control group during face processing. After oxytocin treatment, right amygdala activity to facial stimuli increased in the AS group. These findings indicate that oxytocin increases the saliency of social stimuli in ASD and suggest that oxytocin might promote face processing and eye contact in individuals with ASD as prerequisites for neurotypical social interaction. Copyright © 2013 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
Roberts, Daniel J; Lambon Ralph, Matthew A; Kim, Esther; Tainturier, Marie-Josephe; Beeson, Pelagie M; Rapcsak, Steven Z; Woollams, Anna M
Pure alexia (PA) arises from damage to the left posterior fusiform gyrus (pFG) and the striking reading disorder that defines this condition has meant that such patients are often cited as evidence for the specialisation of this region to processing of written words. There is, however, an alternative view that suggests this region is devoted to processing of high acuity foveal input, which is particularly salient for complex visual stimuli like letter strings. Previous reports have highlighted disrupted processing of non-linguistic visual stimuli after damage to the left pFG, both for familiar and unfamiliar objects and also for novel faces. This study explored the nature of face processing deficits in patients with left pFG damage. Identification of famous faces was found to be compromised in both expressive and receptive tasks. Discrimination of novel faces was also impaired, particularly for those that varied in terms of second-order spacing information, and this deficit was most apparent for the patients with the more severe reading deficits. Interestingly, discrimination of faces that varied in terms of feature identity was considerably better in these patients and it was performance in this condition that was related to the size of the length effects shown in reading. This finding complements functional imaging studies showing left pFG activation for faces varying only in spacing and frontal activation for faces varying only on features. These results suggest that the sequential part-based processing strategy that promotes the length effect in the reading of these patients also allows them to discriminate between faces on the basis of feature identity, but processing of second-order configural information is most compromised due to their left pFG lesion. This study supports a view in which the left pFG is specialised for processing of high acuity foveal visual information that supports processing of both words and faces. Copyright © 2015 The Authors. PMID:25837867
Fu, Genyue; Dong, Yan; Quinn, Paul C.; Xiao, Wen S.; Wang, Qiandong; Chen, Guowei; Pascalis, Olivier; Lee, Kang
The present study examined the effect of visual experience on the magnitude of a novel eye-size illusion: when the size of a face’s frame is increased or decreased but eye size is unchanged, observers judge the size of the eyes to be different from that in the original face frame. In the current study, we asked Chinese and Caucasian participants to judge eye size in different pairs of faces and measured the magnitude of the illusion when the faces were own- or other-age (adult vs. infant faces) and when the faces were own- or other-race (Chinese vs. Caucasian faces). We found an other-age effect and an other-race effect with the eye-size illusion: The illusion was more pronounced with own-race and own-age faces than with other-race and other-age faces. These findings taken together suggest that visual experience with faces influences the magnitude of this novel illusion. Extensive experience with certain face categories strengthens the illusion in the context of these categories, but lack of it reduces the magnitude of the illusion. Our results further imply that holistic processing may play an important role in engendering the eye-size illusion. PMID:26048685
DeGutis, Joseph; DeNicola, Cristopher; Zink, Tyler; McGlinchey, Regina; Milberg, William
Faces of one's own race are discriminated and recognized more accurately than faces of an other race (other-race effect--ORE). Studies have employed several methods to enhance individuation and recognition of other-race faces and reduce the ORE, including intensive perceptual training with other-race faces and explicitly instructing participants…
Prieto, Esther Alonso; Caharel, Stéphanie; Henson, Richard; Rossion, Bruno
Compared to objects, pictures of faces elicit a larger early electromagnetic response at occipito-temporal sites on the human scalp, with an onset of 130 ms and a peak at about 170 ms. This N170 face effect is larger in the right than the left hemisphere and has been associated with the early categorization of the stimulus as a face. Here we tested whether this effect can be observed in the absence of some of the visual areas showing a preferential response to faces as typically identified in neuroimaging. Event-related potentials were recorded in response to faces, cars, and their phase-scrambled versions in a well-known brain-damaged case of prosopagnosia (PS). Despite the patient’s right inferior occipital gyrus lesion encompassing the most posterior cortical area showing preferential response to faces (“occipital face area”), we identified an early face-sensitive component over the right occipito-temporal hemisphere of the patient that was identified as the N170. A second experiment supported this conclusion, showing the typical N170 increase of latency and amplitude in response to inverted faces. In contrast, there was no N170 in the left hemisphere, where PS has a lesion to the middle fusiform gyrus and shows no evidence of face-preferential response in neuroimaging (no left “fusiform face area”). These results were replicated by a magnetoencephalographic investigation of the patient, disclosing an M170 component only in the right hemisphere. These observations indicate that face-preferential activation in the inferior occipital cortex is not necessary to elicit early visual responses associated with face perception (N170/M170) on the human scalp. These results further suggest that when the right inferior occipital cortex is damaged, the integrity of the middle fusiform gyrus and/or the superior temporal sulcus – two areas showing face-preferential responses in the patient’s right hemisphere – might be necessary to generate the N170 effect.
Rhodes, Gillian; Nishimura, Mayu; de Heering, Adelaide; Jeffery, Linda; Maurer, Daphne
Faces are adaptively coded relative to visual norms that are updated by experience, and this adaptive coding is linked to face recognition ability. Here we investigated whether adaptive coding of faces is disrupted in individuals (adolescents and adults) who experience face recognition difficulties following visual deprivation from congenital cataracts in infancy. We measured adaptive coding using face identity aftereffects, where smaller aftereffects indicate less adaptive updating of face-coding mechanisms by experience. We also examined whether the aftereffects increase with adaptor identity strength, consistent with norm-based coding of identity, as in typical populations, or whether they show a different pattern indicating some more fundamental disruption of face-coding mechanisms. Cataract-reversal patients showed significantly smaller face identity aftereffects than did controls (Experiments 1 and 2). However, their aftereffects increased significantly with adaptor strength, consistent with norm-based coding (Experiment 2). Thus we found reduced adaptability but no fundamental disruption of norm-based face-coding mechanisms in cataract-reversal patients. Our results suggest that early visual experience is important for the normal development of adaptive face-coding mechanisms. © 2016 John Wiley & Sons Ltd.
Greimel, Ellen; Schulte-Rüther, Martin; Kamp-Becker, Inge; Remschmidt, Helmut; Herpertz-Dahlmann, Beate; Konrad, Kerstin
Findings on face identity and facial emotion recognition in autism spectrum disorder (ASD) are inconclusive. Moreover, little is known about the developmental trajectory of face processing skills in ASD. Taking a developmental perspective, the aim of this study was to extend previous findings on face processing skills in a sample of adolescents and adults with ASD. N = 38 adolescents and adults (13-49 years) with high-functioning ASD and n = 37 typically developing (TD) control subjects matched for age and IQ participated in the study. Moreover, n = 18 TD children between the ages of 8 and 12 were included to address the question whether face processing skills in ASD follow a delayed developmental pattern. Face processing skills were assessed using computerized tasks of face identity recognition (FR) and identification of facial emotions (IFE). ASD subjects showed impaired performance on several parameters of the FR and IFE task compared to TD control adolescents and adults. Whereas TD adolescents and adults outperformed TD children in both tasks, performance in ASD adolescents and adults was similar to the group of TD children. Within the groups of ASD and control adolescents and adults, no age-related changes in performance were found. Our findings corroborate and extend previous studies showing that ASD is characterised by broad impairments in the ability to process faces. These impairments seem to reflect a developmentally delayed pattern that remains stable throughout adolescence and adulthood.
Vervloed, Mathijs P J; Hendriks, Angélique W; van den Eijnde, Esther
Face processing development is negatively affected when infants have not been exposed to faces for some time because of congenital cataract blocking all vision (Le Grand, Mondloch, Maurer, & Brent, 2001). It is not clear, however, whether more subtle differences in face exposure may also have an influence. The present study looked at the effect of the mother's preferred side of holding an infant on her adult child's face processing lateralisation. Adults with a mother who had a left-arm preference for holding infants were compared with adults with a mother who had a right-arm holding preference. All participants were right-handed and had been exclusively bottle-fed during infancy. The participants were presented with two chimeric faces tests, one involving emotion and the other one gender. The left-arm held individuals showed a normal left-bias on the chimeric face tests, whereas the right-arm held individuals showed a significantly decreased left-bias. The results might suggest that reduced exposure to high quality emotional information on faces in infancy results in diminished right-hemisphere lateralisation for face processing. Copyright © 2011 Elsevier Inc. All rights reserved.
Liu, Miaomiao; Pei, Guangying; Peng, Yinuo; Wang, Changming; Yan, Tianyi; Wu, Jinglong
Schizophrenia is a complex disorder characterized by marked social dysfunctions, but the neural mechanism underlying this deficit is unknown. To investigate whether face-specific perceptual processes are influenced in schizophrenia patients, both face detection and configural analysis were assessed in normal individuals and schizophrenia patients by recording electroencephalogram (EEG) data. Here, a face processing model was built based on the frequency oscillations, and the evoked power (theta, alpha, and beta bands) and the induced power (gamma bands) were recorded while the subjects passively viewed face and nonface images presented in upright and inverted orientations. The healthy adults showed a significant face-specific effect in the alpha, beta, and gamma bands, and an inversion effect was observed in the gamma band in the occipital lobe and right temporal lobe. Importantly, the schizophrenia patients showed face-specific deficits in the low-frequency beta and gamma bands, and the face inversion effect in the gamma band was absent from the occipital lobe. These results revealed face-specific processing deficits in patients linked to disordered high-frequency EEG activity, providing additional evidence for future studies of the underlying neural mechanisms and a potential diagnostic marker. PMID:29419668
Larra, Mauro F.; Merz, Martina U.; Schächinger, Hartmut
Facial self-resemblance has been associated with positive emotional evaluations, but this effect may be biased by self-face familiarity. Here we report two experiments utilizing startle modulation to investigate how the processing of facial expressions of emotion is affected by subtle resemblance to the self as well as to familiar faces. Participants in the first experiment (I; N = 39) were presented with morphed faces showing happy, neutral, and fearful expressions which were manipulated to resemble either their own or unknown faces. At SOAs of either 300 ms or 3500–4500 ms after picture onset, startle responses were elicited by binaural bursts of white noise (50 ms, 105 dB), and recorded at the orbicularis oculi via EMG. Manual reaction time was measured in a simple emotion discrimination paradigm. Pictures preceding noise bursts by short SOA inhibited startle (prepulse inhibition, PPI). Both affective modulation and PPI of startle in response to emotional faces were altered by physical similarity to the self. As indexed both by relative facilitation of startle and faster manual responses, self-resemblance apparently induced deeper processing of facial affect, particularly in happy faces. Experiment II (N = 54) produced similar findings using morphs of famous faces, yet showed no impact of mere familiarity on PPI effects or on response time. The results are discussed with respect to differential (presumably pre-attentive) effects of self-specific vs. familiar information in face processing. PMID:29216226
Huhn, Stefan; Peeling, Derek; Burkart, Maximilian
With the availability of die face design tools and incremental solver technologies to provide detailed forming feasibility results in a timely fashion, the use of inverse solver technologies and the resulting process improvements during the product development process of stamped parts is often underestimated. This paper presents some applications of inverse technologies that are currently used in the automotive industry to streamline the product development process and greatly increase the quality of a developed process and the resulting product. The first focus is on the so-called target strain technology. Application examples will show how inverse forming analysis can be applied to support the process engineer during the development of a die face geometry for Class 'A' panels. The drawing process is greatly affected by the die face design, and the process designer has to ensure that the resulting drawn panel will meet specific requirements regarding surface quality and a minimum strain distribution to ensure dent resistance. The target strain technology provides almost immediate feedback to the process engineer during the die face design process on whether a specific change of the die face design will help to achieve these specific requirements or will be counterproductive. The paper will further show how an optimization of the material flow can be achieved through the use of a newly developed technology called Sculptured Die Face (SDF). The die face generation in SDF is more suited to be used in optimization loops than any other conventional die face design technology based on cross section design. A second focus in this paper is on the use of inverse solver technologies for secondary forming operations. The paper will show how the application of inverse technology can be used to accurately and quickly develop trim lines on simple as well as on complex support geometries.
Davis, Megan M; Hudson, Sean M; Ma, Debbie S; Correll, Joshua
Participants typically process same-race faces more quickly and more accurately than cross-race faces. This deficit is amplified in the right hemisphere of the brain, presumably due to its involvement in configural processing. The present research tested the idea that cross-race contact tunes cognitive and perceptual systems, influencing this asymmetric race-based deficit in face processing. Participants with high and low levels of contact performed a lateralized recognition task with same- and cross-race faces. Replicating prior work, participants with minimal contact showed cross-race deficits in processing that were larger in the right hemisphere. For participants with more contact, this lateralized deficit disappeared. This effect of contact seems to be independent of race-based attitudes (e.g., prejudice).
Nickl-Jockschat, Thomas; Rottschy, Claudia; Thommes, Johanna; Schneider, Frank; Laird, Angela R.; Fox, Peter T.; Eickhoff, Simon B.
One of the most consistent neuropsychological findings in autism spectrum disorders (ASD) is a reduced interest in and impaired processing of human faces. We conducted an activation likelihood estimation meta-analysis on 14 functional imaging studies on neural correlates of face processing enrolling a total of 164 ASD patients. Subsequently, normative whole-brain functional connectivity maps for the identified regions of significant convergence were computed for the task-independent (resting-state) and task-dependent (co-activations) state in healthy subjects. Quantitative functional decoding was performed by reference to the BrainMap database. Finally, we examined the overlap of the delineated network with the results of a previous meta-analysis on structural abnormalities in ASD as well as with brain regions involved in human action observation/imitation. We found a single cluster in the left fusiform gyrus showing significantly reduced activation during face processing in ASD across all studies. Both task-dependent and task-independent analyses indicated significant functional connectivity of this region with the temporo-occipital and lateral occipital cortex, the inferior frontal and parietal cortices, the thalamus and the amygdala. Quantitative reverse inference then indicated an association of these regions mainly with face processing, affective processing, and language-related tasks. Moreover, we found that the cortex in the region of right area V5 displaying structural changes in ASD patients showed consistent connectivity with the region showing aberrant responses in the context of face processing. Finally, this network was also implicated in the human action observation/imitation network. In summary, our findings thus suggest a functionally and structurally disturbed network of occipital regions related primarily to face (but potentially also language) processing, which interact with inferior frontal as well as limbic regions and may be at the core of face processing deficits in ASD.
Nakajima, K; Minami, T; Nakauchi, S
Recent studies have suggested that both configural information, such as face shape, and surface information are important for face perception. In particular, facial color is sufficiently suggestive of emotional states, as in the phrases: "flushed with anger" and "pale with fear." However, few studies have examined the relationship between facial color and emotional expression. On the other hand, event-related potential (ERP) studies have shown that emotional expressions, such as fear, are processed unconsciously. In this study, we examined how facial color modulated the supraliminal and subliminal processing of fearful faces. We recorded electroencephalograms while participants performed a facial emotion identification task involving masked target faces exhibiting facial expressions (fearful or neutral) and colors (natural or bluish). The results indicated that there was a significant interaction between facial expression and color for the latency of the N170 component. Subsequent analyses revealed that the bluish-colored faces increased the latency effect of facial expressions compared to the natural-colored faces, indicating that the bluish color modulated the processing of fearful expressions. We conclude that the unconscious processing of fearful faces is affected by facial color. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
Hung, Shao-Min; Nieh, Chih-Hsuan; Hsieh, Po-Jang
Past research has proven human’s extraordinary ability to extract information from a face in the blink of an eye, including its emotion, gaze direction, and attractiveness. However, it remains elusive whether facial attractiveness can be processed and influences our behaviors in the complete absence of conscious awareness. Here we demonstrate unconscious processing of facial attractiveness with three distinct approaches. In Experiment 1, the time taken for faces to break interocular suppression was measured. The results showed that attractive faces enjoyed the privilege of breaking suppression and reaching consciousness earlier. In Experiment 2, we further showed that attractive faces had lower visibility thresholds, again suggesting that facial attractiveness could be processed more easily to reach consciousness. Crucially, in Experiment 3, a significant decrease of accuracy on an orientation discrimination task subsequent to an invisible attractive face showed that attractive faces, albeit suppressed and invisible, still exerted an effect by orienting attention. Taken together, for the first time, we show that facial attractiveness can be processed in the complete absence of consciousness, and an unconscious attractive face is still capable of directing our attention. PMID:27848992
Kongthong, Nutchakan; Minami, Tetsuto; Nakauchi, Shigeki
Whether visual subliminal processing involves semantic processing is still being debated. To examine this, we combined a passive electroencephalogram (EEG) study with an application of transcranial direct current stimulation (tDCS). In the masked-face priming paradigm, we presented a subliminal prime preceding the target stimulus. Participants were asked to determine whether the target face was a famous face, indicated by a button press. The prime and target pair were either the same person's face (congruent) or different persons' faces (incongruent), and were always both famous or both non-famous faces. Experiments were performed over 2 days: 1 day for a real tDCS session and another for a sham session as a control condition. In the sham session, a priming effect, reflected in the difference in amplitude of the late positive component (250-500 ms after target onset), was observed only in the famous prime condition. According to a previous study, this effect might indicate a subliminal semantic process. However, the priming effect for famous primes disappeared after tDCS stimulation. Our results suggested that a subliminal process might not be limited to processes in the occipital and temporal areas, but may proceed to the semantic level processed in prefrontal cortex. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Connors, Michael H.; Barnier, Amanda J.; Coltheart, Max; Langdon, Robyn; Cox, Rochelle E.; Rivolta, Davide; Halligan, Peter W.
Mirrored-self misidentification delusion is the belief that one’s reflection in the mirror is not oneself. This experiment used hypnotic suggestion to impair normal face processing in healthy participants and recreate key aspects of the delusion in the laboratory. From a pool of 439 participants, 22 high hypnotisable participants (“highs”) and 20 low hypnotisable participants were selected on the basis of their extreme scores on two separately administered measures of hypnotisability. These participants received a hypnotic induction and a suggestion for either impaired (i) self-face recognition or (ii) impaired recognition of all faces. Participants were tested on their ability to recognize themselves in a mirror and other visual media – including a photograph, live video, and handheld mirror – and their ability to recognize other people, including the experimenter and famous faces. Both suggestions produced impaired self-face recognition and recreated key aspects of the delusion in highs. However, only the suggestion for impaired other-face recognition disrupted recognition of other faces, albeit in a minority of highs. The findings confirm that hypnotic suggestion can disrupt face processing and recreate features of mirrored-self misidentification. The variability seen in participants’ responses also corresponds to the heterogeneity seen in clinical patients. An important direction for future research will be to examine sources of this variability within both clinical patients and the hypnotic model. PMID:24994973
McKendrick, Mel; Butler, Stephen H.; Grealy, Madeleine A.
The role of self-relevance has been somewhat neglected in static face processing paradigms but may be important in understanding how emotional faces impact on attention, cognition and affect. The aim of the current study was to investigate the effect of self-relevant primes on processing emotional composite faces. Sentence primes created an expectation of the emotion of the face before sad, happy, neutral or composite face photos were viewed. Eye movements were recorded and subsequent responses measured the cognitive and affective impact of the emotion expressed. Results indicated that primes did not guide attention, but impacted on judgments of valence intensity and self-esteem ratings. Negative self-relevant primes led to the most negative self-esteem ratings, although the effect of the prime was qualified by salient facial features. Self-relevant expectations about the emotion of a face and subsequent attention to a face that is congruent with these expectations strengthened the affective impact of viewing the face. PMID:27175487
Prete, Giulia; Laeng, Bruno; Tommasi, Luca
It is well known that hemispheric asymmetries exist for both the analyses of low-level visual information (such as spatial frequency) and high-level visual information (such as emotional expressions). In this study, we assessed which of the above factors underlies perceptual laterality effects with "hybrid faces": a type of stimulus that allows testing for unaware processing of emotional expressions, when the emotion is displayed in the low-frequency information while an image of the same face with a neutral expression is superimposed to it. Despite hybrid faces being perceived as neutral, the emotional information modulates observers' social judgements. In the present study, participants were asked to assess friendliness of hybrid faces displayed tachistoscopically, either centrally or laterally to fixation. We found a clear influence of the hidden emotions also with lateral presentations. Happy faces were rated as more friendly and angry faces as less friendly with respect to neutral faces. In general, hybrid faces were evaluated as less friendly when they were presented in the left visual field/right hemisphere than in the right visual field/left hemisphere. The results extend the validity of the valence hypothesis in the specific domain of unaware (subcortical) emotion processing.
van Peer, Jacobien M; Enter, Dorien; van Steenbergen, Henk; Spinhoven, Philip; Roelofs, Karin
Testosterone plays an important role in social threat processing. Recent evidence suggests that testosterone administration has socially anxiolytic effects, but it remains unknown whether this involves early vigilance or later, more sustained, processing-stages. We investigated the acute effects of testosterone administration on social threat processing in 19 female patients with Social Anxiety Disorder (SAD) and 19 healthy controls. Event-related potentials (ERPs) were recorded during an emotional Stroop task with subliminally presented faces. Testosterone induced qualitative changes in early ERPs (<200ms after stimulus onset) in both groups. An initial testosterone-induced spatial shift reflected a change in the basic processing (N170/VPP) of neutral faces, which was followed by a shift for angry faces suggesting a decrease in early threat bias. These findings suggest that testosterone specifically affects early automatic social information processing. The decreased attentional bias for angry faces explains how testosterone can decrease threat avoidance, which is particularly relevant for SAD. Copyright © 2017 Elsevier B.V. All rights reserved.
Van Rheenen, Tamsyn E; Joshua, Nicole; Castle, David J; Rossell, Susan L
Emotion recognition impairments have been demonstrated in schizophrenia (Sz), but are less consistent and lesser in magnitude in bipolar disorder (BD). This may be related to the extent to which different face processing strategies are engaged during emotion recognition in each of these disorders. We recently showed that Sz patients had impairments in the use of both featural and configural face processing strategies, whereas BD patients were impaired only in the use of the latter. Here we examine the influence that these impairments have on facial emotion recognition in these cohorts. Twenty-eight individuals with Sz, 28 individuals with BD, and 28 healthy controls completed a facial emotion labeling task with two conditions designed to separate the use of featural and configural face processing strategies; part-based and whole-face emotion recognition. Sz patients performed worse than controls on both conditions, and worse than BD patients on the whole-face condition. BD patients performed worse than controls on the whole-face condition only. Configural processing deficits appear to influence the recognition of facial emotions in BD, whereas both configural and featural processing abnormalities impair emotion recognition in Sz. This may explain discrepancies in the profiles of emotion recognition between the disorders. (JINS, 2017, 23, 287-291).
Bate, Sarah; Haslam, Catherine; Hodgson, Timothy L; Jansari, Ashok; Gregory, Nicola; Kay, Janice
Previous work has consistently reported a facilitatory influence of positive emotion in face recognition (e.g., D'Argembeau, Van der Linden, Comblain, & Etienne, 2003). However, these reports asked participants to make recognition judgments in response to faces, and it is unknown whether emotional valence may influence other stages of processing, such as at the level of semantics. Furthermore, other evidence suggests that negative rather than positive emotion facilitates higher level judgments when processing nonfacial stimuli (e.g., Mickley & Kensinger, 2008), and it is possible that negative emotion also influences latter stages of face processing. The present study addressed this issue, examining the influence of emotional valence while participants made semantic judgments in response to a set of famous faces. Eye movements were monitored while participants performed this task, and analyses revealed a reduction in information extraction for the faces of liked and disliked celebrities compared with those of emotionally neutral celebrities. Thus, in contrast to work using familiarity judgments, both positive and negative emotion facilitated processing in this semantic-based task. This pattern of findings is discussed in relation to current models of face processing. Copyright 2009 APA, all rights reserved.
Carbajal-Valenzuela, Cintli Carolina; Santiago-Rodríguez, Efraín; Quirarte, Gina L; Harmony, Thalía
The rate of premature births has increased in the past 2 decades. Ten percent of premature birth survivors develop motor impairment, but almost half exhibit later sensorial, cognitive, and emotional disabilities attributed to white matter injury and decreased volume of neuronal structures. The aim of this study was to test the hypothesis that premature and full-term infants differ in their development of emotional face processing. A comparative longitudinal study was conducted in premature and full-term infants at 4 and 8 months of age. The absolute power of the electroencephalogram was analyzed in both groups during 5 conditions of an emotional face processing task: positive, negative, neutral faces, non-face, and rest. Differences between the conditions of the task at 4 months were limited to rest versus non-rest comparisons in both groups. Eight-month-old term infants had increases (P ≤ .05) in absolute power in the left occipital region at the frequency of 10.1 Hz and in the right occipital region at 3.5, 12.8, and 16.0 Hz when shown a positive face in comparison with a neutral face. They also showed increases in absolute power in the left occipital region at 1.9 Hz and in the right occipital region at 2.3 and 3.5 Hz with positive compared to non-face stimuli. In contrast, positive, negative, and neutral faces elicited the same responses in premature infants. In conclusion, our study provides electrophysiological evidence that emotional face processing develops differently in premature than in full-term infants, suggesting that premature birth alters mechanisms of brain development, such as the myelination process, and consequently affects complex cognitive functions.
Paulus, Martin P.; Simmons, Alan N.; Fitzpatrick, Summer N.; Potterat, Eric G.; Van Orden, Karl F.; Bauman, James; Swain, Judith L.
Background: Little is known about the neural basis of elite performers and their optimal performance in extreme environments. The purpose of this study was to examine brain processing differences between elite warfighters and comparison subjects in brain structures that are important for emotion processing and interoception. Methodology/Principal Findings: Navy Sea, Air, and Land Forces (SEALs) while off duty (n = 11) were compared with n = 23 healthy male volunteers while performing a simple emotion face-processing task during functional magnetic resonance imaging. Irrespective of the target emotion, elite warfighters relative to comparison subjects showed relatively greater right-sided insula, but attenuated left-sided insula, activation. Navy SEALs showed selectively greater activation to angry target faces relative to fearful or happy target faces bilaterally in the insula. This was not accounted for by contrasting positive versus negative emotions. Finally, these individuals also showed slower response latencies to fearful and happy target faces than did comparison subjects. Conclusions/Significance: These findings support the hypothesis that elite warfighters deploy greater processing resources toward potential threat-related facial expressions and reduced processing resources to non-threat-related facial expressions. Moreover, rather than expending more effort in general, elite warfighters show more focused neural and performance tuning. In other words, greater neural processing resources are directed toward threat stimuli and processing resources are conserved when facing a nonthreat stimulus situation. PMID:20418943
Kret, Mariska E; Tomonaga, Masaki
For social species such as primates, the recognition of conspecifics is crucial for their survival. As demonstrated by the 'face inversion effect', humans are experts in recognizing faces and unlike objects, recognize their identity by processing it configurally. The human face, with its distinct features such as eye-whites, eyebrows, red lips and cheeks signals emotions, intentions, health and sexual attraction and, as we will show here, shares important features with the primate behind. Chimpanzee females show a swelling and reddening of the anogenital region around the time of ovulation. This provides an important socio-sexual signal for group members, who can identify individuals by their behinds. We hypothesized that chimpanzees process behinds configurally in a way humans process faces. In four different delayed matching-to-sample tasks with upright and inverted body parts, we show that humans demonstrate a face, but not a behind inversion effect and that chimpanzees show a behind, but no clear face inversion effect. The findings suggest an evolutionary shift in socio-sexual signalling function from behinds to faces, two hairless, symmetrical and attractive body parts, which might have attuned the human brain to process faces, and the human face to become more behind-like.
Moulson, Margaret C; Fox, Nathan A; Zeanah, Charles H; Nelson, Charles A
To examine the neurobiological consequences of early institutionalization, the authors recorded event-related potentials (ERPs) from 3 groups of Romanian children--currently institutionalized, previously institutionalized but randomly assigned to foster care, and family-reared children--in response to pictures of happy, angry, fearful, and sad facial expressions of emotion. At 3 assessments (baseline, 30 months, and 42 months), institutionalized children showed markedly smaller amplitudes and longer latencies for the occipital components P1, N170, and P400 compared to family-reared children. By 42 months, ERP amplitudes and latencies of children placed in foster care were intermediate between the institutionalized and family-reared children, suggesting that foster care may be partially effective in ameliorating adverse neural changes caused by institutionalization. The age at which children were placed into foster care was unrelated to their ERP outcomes at 42 months. Facial emotion processing was similar in all 3 groups of children; specifically, fearful faces elicited larger amplitude and longer latency responses than happy faces for the frontocentral components P250 and Nc. These results have important implications for understanding of the role that experience plays in shaping the developing brain.
Ofan, Renana H.; Rubin, Nava
Social anxiety is the intense fear of negative evaluation by others, and it emerges uniquely from a social situation. Given its social origin, we asked whether an anxiety-inducing social situation could enhance the processing of faces linked to the situational threat. While past research has focused on how individual differences in social anxiety relate to face processing, we tested the effect of manipulated social anxiety in the context of anxiety about appearing racially prejudiced in front of a peer. Visual processing of faces was indexed by the N170 component of the event-related potential. Participants viewed faces of Black and White males, along with nonfaces, either in private or while being monitored by the experimenter for signs of prejudice in a ‘public’ condition. Results revealed a difference in the N170 response to Black and White faces that emerged only in the public condition and only among participants high in dispositional social anxiety. These results provide new evidence that anxiety arising from the social situation modulates the earliest stages of face processing in a way that is specific to a social threat, and they shed new light on how anxiety effects on perception may contribute to the regulation of intergroup responses. PMID:23709354
Guan, Lili; Zhao, Yufang; Wang, Yige; Chen, Yujie; Yang, Juan
The self-face processing advantage (SPA) refers to the research finding that individuals generally recognize their own face faster than another’s face; self-face also elicits an enhanced P3 amplitude compared to another’s face. It has been suggested that social evaluation threats could weaken the SPA and that self-esteem could be regarded as a threat buffer. However, little research has directly investigated the neural evidence of how self-esteem modulates the social evaluation threat to the SPA. In the current event-related potential study, 27 healthy Chinese undergraduate students were primed with emotional faces (angry, happy, or neutral) and were asked to judge whether the target face (self, friend, and stranger) was familiar or unfamiliar. Electrophysiological results showed that after priming with emotional faces (angry and happy), self-face elicited similar P3 amplitudes to friend-face in individuals with low self-esteem, but not in individuals with high self-esteem. The results suggest that as low self-esteem raises fears of social rejection and exclusion, priming with emotional faces (angry and happy) can weaken the SPA in low self-esteem individuals but not in high self-esteem individuals. PMID:28868041
Luyster, Rhiannon J; Bick, Johanna; Westerlund, Alissa; Nelson, Charles A
Individuals with autism spectrum disorder (ASD) commonly show global deficits in the processing of facial emotion, including impairments in emotion recognition and slowed processing of emotional faces. Growing evidence has suggested that these challenges may increase with age, perhaps due to minimal improvement with age in individuals with ASD. In the present study, we explored the role of age, emotion type and emotion intensity in face processing for individuals with and without ASD. Twelve-year-old and 18-22-year-old individuals with and without ASD participated. No significant diagnostic group differences were observed on behavioral measures of emotion processing for younger versus older individuals with and without ASD. However, there were significant group differences in neural responses to emotional faces. Relative to TD, at 12 years of age and during adulthood, individuals with ASD showed a slower N170 to emotional faces. While the TD group's P1 latency was significantly shorter in adults than in 12-year-olds, there was no significant age-related difference in P1 latency among individuals with ASD. Findings point to potential differences in the maturation of cortical networks that support visual processing (whether of faces or of stimuli more broadly) among individuals with and without ASD between late childhood and adulthood. Finally, associations between ERP amplitudes and behavioral responses on emotion processing tasks suggest possible neural markers for emotional and behavioral deficits among individuals with ASD.
Pezzulo, G; Iodice, P; Barca, L; Chausse, P; Monceau, S; Mermillod, M
Embodied theories of emotion assume that emotional processing is grounded in bodily and affective processes. Accordingly, the perception of an emotion re-enacts congruent sensory and affective states; and conversely, bodily states congruent with a specific emotion facilitate emotional processing. This study tests whether the ability to process facial expressions (faces having a neutral expression, expressing fear, or disgust) can be influenced by making the participants' body state congruent with the expressed emotion (e.g., high heart rate in the case of faces expressing fear). We designed a task requiring participants to categorize pictures of male and female faces that either had a neutral expression (neutral), or expressed emotions whose linkage with high heart rate is strong (fear) or significantly weaker or absent (disgust). Critically, participants were tested in two conditions: with experimentally induced high heart rate (Exercise) and with normal heart rate (Normal). Participants processed fearful faces (but not disgusted or neutral faces) faster when they were in the Exercise condition than in the Normal condition. These results support the idea that an emotionally congruent body state facilitates the automatic processing of emotionally-charged stimuli and this effect is emotion-specific rather than due to generic factors such as arousal.
Germine, Laura T; Garrido, Lucia; Bruce, Lori; Hooker, Christine
Human beings are social organisms with an intrinsic desire to seek and participate in social interactions. Social anhedonia is a personality trait characterized by a reduced desire for social affiliation and reduced pleasure derived from interpersonal interactions. Abnormally high levels of social anhedonia prospectively predict the development of schizophrenia and contribute to poorer outcomes for schizophrenia patients. Despite the strong association between social anhedonia and schizophrenia, the neural mechanisms that underlie individual differences in social anhedonia have not been studied and are thus poorly understood. Deficits in face emotion recognition are related to poorer social outcomes in schizophrenia, and it has been suggested that face emotion recognition deficits may be a behavioral marker for schizophrenia liability. In the current study, we used functional magnetic resonance imaging (fMRI) to see whether there are differences in the brain networks underlying basic face emotion processing in a community sample of individuals low vs. high in social anhedonia. We isolated the neural mechanisms related to face emotion processing by comparing face emotion discrimination with four other baseline conditions (identity discrimination of emotional faces, identity discrimination of neutral faces, object discrimination, and pattern discrimination). Results showed a group (high/low social anhedonia) × condition (emotion discrimination/control condition) interaction in the anterior portion of the rostral medial prefrontal cortex, right superior temporal gyrus, and left somatosensory cortex. As predicted, high (relative to low) social anhedonia participants showed less neural activity in face emotion processing regions during emotion discrimination as compared to each control condition. The findings suggest that social anhedonia is associated with abnormalities in networks responsible for basic processes associated with social cognition, and provide a…
Hulse, Nathan C.; Ranade-Kharkar, Pallavi; Post, Herman; Wood, Grant M.; Williams, Marc S.; Haug, Peter J.
Personalized medicine will require detailed clinical patient profiles, and a particular focus on capturing data that is useful in forecasting risk. A detailed family health history is considered a critical component of these profiles, so much so that it has been called ‘the best genetic test available’. Despite this, tools aimed at capturing this information for use in electronic health records have been characterized as inadequate. In this manuscript we detail the creation of a patient-facing family health history tool known as OurFamilyHealth, whose long-term emphasis is to facilitate risk assessment and clinical decision support. We present the rationale for such a tool, describe its development and release as a component of Intermountain Healthcare’s patient portal, and detail early usage statistics surrounding the application. Data derived from the tool since its release are also compared against family history charting patterns in Intermountain’s electronic health records, revealing differences in data availability. PMID:22195113
Weiner, Kevin S; Jonas, Jacques; Gomez, Jesse; Maillard, Louis; Brissart, Hélène; Hossu, Gabriela; Jacques, Corentin; Loftus, David; Colnat-Coulbois, Sophie; Stigliani, Anthony; Barnett, Michael A; Grill-Spector, Kalanit; Rossion, Bruno
Human face perception requires a network of brain regions distributed throughout the occipital and temporal lobes with a right hemisphere advantage. Present theories consider this network as either a processing hierarchy beginning with the inferior occipital gyrus (occipital face area; IOG-faces/OFA) or a multiple-route network with nonhierarchical components. The former predicts that removing IOG-faces/OFA will detrimentally affect downstream stages, whereas the latter does not. We tested this prediction in a human patient (Patient S.P.) requiring removal of the right inferior occipital cortex, including IOG-faces/OFA. We acquired multiple fMRI measurements in Patient S.P. before and after a preplanned surgery and multiple measurements in typical controls, enabling both within-subject/across-session comparisons (Patient S.P. before resection vs Patient S.P. after resection) and between-subject/across-session comparisons (Patient S.P. vs controls). We found that the spatial topology and selectivity of downstream ipsilateral face-selective regions were stable 1 and 8 month(s) after surgery. Additionally, the reliability of distributed patterns of face selectivity in Patient S.P. before versus after resection was not different from across-session reliability in controls. Nevertheless, postoperatively, representations of visual space were typical in dorsal face-selective regions but atypical in ventral face-selective regions and V1 of the resected hemisphere. Diffusion weighted imaging in Patient S.P. and controls identifies white matter tracts connecting retinotopic areas to downstream face-selective regions, which may contribute to the stable and plastic features of the face network in Patient S.P. after surgery. Together, our results support a multiple-route network of face processing with nonhierarchical components and shed light on stable and plastic features of high-level visual cortex following focal brain damage.
Stoll, Chloé; Palluel-Germain, Richard; Caldara, Roberto; Lao, Junpeng; Dye, Matthew W. G.; Aptel, Florent; Pascalis, Olivier
Previous research has suggested that early deaf signers differ in face processing. Which aspects of face processing are changed and the role that sign language may have played in that change are however unclear. Here, we compared face categorization (human/non-human) and human face recognition performance in early profoundly deaf signers, hearing…
Schwarzkopf, D. Samuel; Alvarez, Ivan; Lawson, Rebecca P.; Henriksson, Linda; Kriegeskorte, Nikolaus; Rees, Geraint
Faces are salient social stimuli whose features attract a stereotypical pattern of fixations. The implications of this gaze behavior for perception and brain activity are largely unknown. Here, we characterize and quantify a retinotopic bias implied by typical gaze behavior toward faces, which leads to eyes and mouth appearing most often in the upper and lower visual field, respectively. We found that the adult human visual system is tuned to these contingencies. In two recognition experiments, recognition performance for isolated face parts was better when they were presented at typical, rather than reversed, visual field locations. The recognition cost of reversed locations was equal to ∼60% of that for whole face inversion in the same sample. Similarly, an fMRI experiment showed that patterns of activity evoked by eye and mouth stimuli in the right inferior occipital gyrus could be separated with significantly higher accuracy when these features were presented at typical, rather than reversed, visual field locations. Our findings demonstrate that human face perception is determined not only by the local position of features within a face context, but by whether features appear at the typical retinotopic location given normal gaze behavior. Such location sensitivity may reflect fine-tuning of category-specific visual processing to retinal input statistics. Our findings further suggest that retinotopic heterogeneity might play a role for face inversion effects and for the understanding of conditions affecting gaze behavior toward faces, such as autism spectrum disorders and congenital prosopagnosia. SIGNIFICANCE STATEMENT Faces attract our attention and trigger stereotypical patterns of visual fixations, concentrating on inner features, like eyes and mouth. Here we show that the visual system represents face features better when they are shown at retinal positions where they typically fall during natural vision.
Fisher, Katie; Towler, John; Eimer, Martin
It is frequently assumed that facial identity and facial expression are analysed in functionally and anatomically distinct streams within the core visual face processing system. To investigate whether expression and identity interact during the visual processing of faces, we employed a sequential matching procedure where participants compared either the identity or the expression of two successively presented faces, and ignored the other irrelevant dimension. Repetitions versus changes of facial identity and expression were varied independently across trials, and event-related potentials (ERPs) were recorded during task performance. Irrelevant facial identity and irrelevant expression both interfered with performance in the expression and identity matching tasks. These symmetrical interference effects show that neither identity nor expression can be selectively ignored during face matching, and suggest that they are not processed independently. N250r components to identity repetitions that reflect identity matching mechanisms in face-selective visual cortex were delayed and attenuated when there was an expression change, demonstrating that facial expression interferes with visual identity matching. These findings provide new evidence for interactions between facial identity and expression within the core visual processing system, and question the hypothesis that these two attributes are processed independently.
Mueller, Christina J; Kuchinke, Lars
Affective flanker tasks often present affective facial expressions as stimuli. However, it is not clear whether the identity of the person in the target picture needs to be the same for the flanker stimuli or whether it is better to use pictures of different persons as flankers. While Grose-Fifer, Rodrigues, Hoover & Zottoli (Advances in Cognitive Psychology 9(2):81-91, 2013) state that attentional focus might be captured by processing the differences between faces, i.e. the identity, and therefore use pictures of the same individual as target and flanker stimuli, Munro, Dywan, Harris, McKee, Unsal & Segalowitz (Biological Psychology, 76:31-42, 2007) propose an advantage in presenting pictures of a different individual as flankers. They state that participants might focus only on small visual changes when targets and flankers are from the same individual instead of processing the affective content of the stimuli. The present study manipulated face identity in a between-subject design. Through investigation of behavioral measures as well as diffusion model parameters, we conclude that both types of flankers work equally efficiently. This result seems best supported by recent accounts that propose an advantage of emotional processing over identity processing in face recognition. In the present study, there is no evidence that the processing of the face identity attracts sufficient attention to interfere with the affective evaluation of the target and flanker faces.
Schepman, Karen; Taylor, Eric; Collishaw, Stephan; Fombonne, Eric
Studies of adults with depression point to characteristic neurocognitive deficits, including differences in processing facial expressions. Few studies have examined face processing in juvenile depression, or taken account of other comorbid disorders. Three groups were compared: depressed children and adolescents with conduct disorder (n = 23),…
This research was carried out to determine the technology barriers faced by instructors within the framework of the quality processes conducted at the University of Sakarya. Accordingly, the technology barriers encountered while teaching with web sites developed to manage quality operations from a single center were examined…
Röder, Brigitte; Ley, Pia; Shenoy, Bhamy H; Kekunnaya, Ramesh; Bottari, Davide
The aim of the study was to identify possible sensitive phases in the development of the processing system for human faces. We tested the neural processing of faces in 11 humans who had been blind from birth and had undergone cataract surgery between 2 mo and 14 y of age. Pictures of faces and houses, scrambled versions of these pictures, and pictures of butterflies were presented while event-related potentials were recorded. Participants had to respond to the pictures of butterflies (targets) only. All participants, even those who had been blind from birth for several years, were able to categorize the pictures and to detect the targets. In healthy controls and in a group of visually impaired individuals with a history of developmental or incomplete congenital cataracts, the well-known enhancement of the N170 (negative peak around 170 ms) event-related potential to faces emerged, but a face-sensitive response was not observed in humans with a history of congenital dense cataracts. By contrast, this group showed a similar N170 response to all visual stimuli, which was indistinguishable from the N170 response to faces in the controls. The face-sensitive N170 response has been associated with the structural encoding of faces. Therefore, these data provide evidence for the hypothesis that the functional differentiation of category-specific neural representations in humans, presumably involving the elaboration of inhibitory circuits, is dependent on experience and linked to a sensitive period. Such functional specialization of neural systems seems necessary to achieve high processing proficiency.
Ran, Guangming; Chen, Xu; Pan, Yangu
There is evidence that women and men show differences in the perception of affective facial expressions. However, none of the previous studies directly investigated sex differences in emotional processing of own-race and other-race faces. The current study addressed this issue using high time resolution event-related potential techniques. In total, data from 25 participants (13 women and 12 men) were analyzed. It was found that women showed increased N170 amplitudes to negative White faces compared with negative Chinese faces over the right hemisphere electrodes. This result suggests that women show enhanced sensitivity to other-race faces showing negative emotions (fear or disgust), which may confer an evolutionary advantage. However, the current data showed that men had increased N170 amplitudes to happy Chinese versus happy White faces over the left hemisphere electrodes, indicating that men show enhanced sensitivity to own-race faces showing positive emotions (happiness). In this respect, men might use past pleasant emotional experiences to boost recognition of own-race faces.
Golarai, Golijeh; Liberman, Alina; Grill-Spector, Kalanit
In adult humans, the ventral temporal cortex (VTC) represents faces in a reproducible topology. However, it is unknown what role visual experience plays in the development of this topology. Using functional magnetic resonance imaging in children and adults, we found a sequential development, in which the topology of face-selective activations across the VTC was matured by age 7, but the spatial extent and degree of face selectivity continued to develop past age 7 into adulthood. Importantly, own- and other-age faces were differentially represented, both in the distributed multivoxel patterns across the VTC, and also in the magnitude of responses of face-selective regions. These results provide strong evidence that experience shapes cortical representations of faces during development from childhood to adulthood. Our findings have important implications for the role of experience and age in shaping the neural substrates of face processing in the human VTC.
Johnson, Mark H; Senju, Atsushi; Tomalski, Przemyslaw
Johnson and Morton (1991. Biology and Cognitive Development: The Case of Face Recognition. Blackwell, Oxford) used Gabriel Horn's work on the filial imprinting model to inspire a two-process theory of the development of face processing in humans. In this paper we review evidence accrued over the past two decades from infants and adults, and from other primates, that informs this two-process model. While work with newborns and infants has been broadly consistent with predictions from the model, further refinements and questions have been raised. With regard to adults, we discuss more recent evidence on the extension of the model to eye contact detection, and to subcortical face processing, reviewing functional imaging and patient studies. We conclude with discussion of outstanding caveats and future directions of research in this field.
Suzuki, Hideo; Luby, Joan L; Botteron, Kelly N; Dietrich, Rachel; McAvoy, Mark P; Barch, Deanna M
Previous studies have examined the relationships between structural brain characteristics and early life stress in adults. However, there is limited evidence for functional brain variation associated with early life stress in children. We hypothesized that early life stress and trauma would be associated with increased functional brain activation response to negative emotional faces in children with and without a history of depression. Psychiatric diagnosis and life events in children (starting at age 3-5 years) were assessed in a longitudinal study. A follow-up magnetic resonance imaging (MRI) study acquired data (N = 115 at ages 7-12, 51% girls) on functional brain response to fearful, sad, and happy faces relative to neutral faces. We used a region-of-interest mask within cortico-limbic areas and conducted regression analyses and repeated-measures analysis of covariance. Greater activation responses to fearful, sad, and happy faces in the amygdala and its neighboring regions were found in children with greater life stress. Moreover, an association between life stress and left hippocampal and globus pallidus activity depended on children's diagnostic status. Finally, all children with greater life trauma showed greater bilateral amygdala and cingulate activity specific to sad faces but not the other emotional faces, although right amygdala activity was moderated by psychiatric status. These findings suggest that limbic hyperactivity may be a biomarker of early life stress and trauma in children and may have implications in the risk trajectory for depression and other stress-related disorders. However, this pattern varied based on emotion type and history of psychopathology.
Bobes, Maria A.; Góngora, Daylin; Valdes, Annette; Santos, Yusniel; Acosta, Yanely; Fernandez Garcia, Yuriem; Lage, Agustin; Valdés-Sosa, Mitchell
Although Capgras delusion (CD) patients are capable of recognizing familiar faces, they present a delusional belief that some relatives have been replaced by impostors. CD has been explained as a selective disruption of a pathway processing affective values of familiar faces. To test the integrity of connections within the face processing circuitry, diffusion tensor imaging was performed in a CD patient and 10 age-matched controls. Voxel-based morphometry indicated gray matter damage in right frontal areas. Tractography was used to examine two important tracts of the face processing circuitry: the inferior fronto-occipital fasciculus (IFOF) and the inferior longitudinal fasciculus (ILF). The superior longitudinal fasciculus (SLF) and commissural tracts were also assessed. The CD patient did not differ from controls in the commissural fibers or the SLF. The right and left ILF, and the right IFOF, were also equivalent to those of controls. However, the left IFOF was significantly reduced with respect to controls, also showing a significant dissociation with the ILF, which represents a selective impairment in the fiber tract connecting occipital and frontal areas. This suggests a possible involvement of the IFOF in affective processing of faces in typical observers and in covert recognition in some cases with prosopagnosia. PMID:26909325
Riesenhuber, Maximilian; Wolff, Brian S.
A recent article in Acta Psychologica (“Picture-plane inversion leads to qualitative changes of face perception” by B. Rossion, 2008) criticized several aspects of an earlier paper of ours (Riesenhuber et al., “Face processing in humans is compatible with a simple shape-based model of vision”, Proc Biol Sci, 2004). We here address Rossion’s criticisms and correct some misunderstandings. To frame the discussion, we first review our previously presented computational model of face recognition in cortex (Jiang et al., “Evaluation of a shape-based model of human face discrimination using fMRI and behavioral techniques”, Neuron, 2006) that provides a concrete biologically plausible computational substrate for holistic coding, namely a neural representation learned for upright faces, in the spirit of the original simple-to-complex hierarchical model of vision by Hubel and Wiesel. We show that Rossion’s and others’ data support the model, and that there is actually a convergence of views on the mechanisms underlying face recognition, in particular regarding holistic processing. PMID:19665104
Pilz, Karin S; Vuong, Quoc C; Bülthoff, Heinrich H; Thornton, Ian M
A highly familiar type of movement occurs whenever a person walks towards you. In the present study, we investigated whether this type of motion has an effect on face processing. We took a range of different 3D head models and placed them on a single, identical 3D body model. The resulting figures were animated to approach the observer. In a first series of experiments, we used a sequential matching task to investigate how the motion of an approaching person affects immediate responses to faces. We compared observers' responses following approach sequences to their performance with figures walking backwards (receding motion) or remaining still. Observers were significantly faster in responding to a target face that followed an approach sequence, compared to both receding and static primes. In a second series of experiments, we investigated long-term effects of motion using a delayed visual search paradigm. After studying moving or static avatars, observers searched for target faces in static arrays of varying set sizes. Again, observers were faster at responding to faces that had been learned in the context of an approach sequence. Together these results suggest that the context of a moving body influences face processing, and support the hypothesis that our visual system has mechanisms that aid the encoding of behaviourally-relevant and familiar dynamic events.
Krebs, Julia F.; Biswas, Ajanta; Pascalis, Olivier; Kamp-Becker, Inge; Remschmidt, Helmuth; Schwarzer, Gudrun
The current study investigated if deficits in processing emotional expression affect facial identity processing and vice versa in children with autism spectrum disorder. Children with autism and IQ and age matched typically developing children classified faces either by emotional expression, thereby ignoring facial identity or by facial identity…
Face milling is a well-known machining process that uses rotary cutters to remove material from a metal workpiece, and flat metal surfaces can be produced by it. In practice, however, visible trace marks following the motion of points on the cutter's face are usually apparent. In this study, it was shown that these milled patterns can be removed by means of a 20 W fiber laser on an aluminum surface (AA7075). Experimental results also showed that a roughened and hydrophobic surface can be produced with optimized laser parameters. This is a new approach to removing patterns from a metal surface and can be explained by roughening through re-melting instead of ablation. The new method is a strong candidate to replace sandblasting of metal surfaces; it is also cheap and environmentally friendly.
Mothersill, Omar; Morris, Derek W; Kelly, Sinead; Rose, Emma Jane; Bokde, Arun; Reilly, Richard; Gill, Michael; Corvin, Aiden P; Donohoe, Gary
Processing the emotional content of faces is recognised as a key deficit of schizophrenia, associated with poorer functional outcomes and possibly contributing to the severity of clinical symptoms such as paranoia. At the neural level, fMRI studies have reported altered limbic activity in response to facial stimuli. However, previous studies may be limited by the use of cognitively demanding tasks and static facial stimuli. To address these issues, the current study used a face processing task involving both passive face viewing and dynamic social stimuli. Such a task may (1) lack the potentially confounding effects of high cognitive demands and (2) show higher ecological validity. Functional MRI was used to examine neural activity in 25 patients with a DSM-IV diagnosis of schizophrenia/schizoaffective disorder and 21 age- and gender-matched healthy controls while they participated in a face processing task, which involved viewing videos of angry and neutral facial expressions, and a non-biological baseline condition. While viewing faces, patients showed significantly weaker deactivation of the medial prefrontal cortex, including the anterior cingulate, and decreased activation in the left cerebellum, compared to controls. Patients also showed weaker medial prefrontal deactivation while viewing the angry faces relative to baseline. Given that the anterior cingulate plays a role in processing negative emotion, weaker deactivation of this region in patients while viewing faces may contribute to an increased perception of social threat. Future studies examining the neurobiology of social cognition in schizophrenia using fMRI may help establish targets for treatment interventions.
Haist, Frank; Adamo, Maha; Han, Jarnet; Lee, Kang; Stiles, Joan
Expertise in processing faces is a cornerstone of human social interaction. However, the developmental course of many key brain regions supporting face preferential processing in the human brain remains undefined. Here, we present findings from an fMRI study using a simple viewing paradigm of faces and objects in a continuous age sample covering the age range from 6 years through adulthood. These findings are the first to use such a sample paired with whole-brain fMRI analyses to investigate development within the core and extended face networks across the developmental spectrum from middle childhood to adulthood. We found evidence, albeit modest, for a developmental trend in the volume of the right fusiform face area (rFFA) but no developmental change in the intensity of activation. From a spatial perspective, the middle portion of the right fusiform gyrus most commonly found in adult studies of face processing was increasingly likely to be included in the FFA as age increased to adulthood. Outside of the FFA, the most striking finding was that children hyperactivated nearly every aspect of the extended face system relative to adults, including the amygdala, anterior temporal pole, insula, inferior frontal gyrus, anterior cingulate gyrus, and parietal cortex. Overall, the findings suggest that development is best characterized by increasing modulation of face-sensitive regions throughout the brain to engage only those systems necessary for task requirements. PMID:23948645
Perlman, Susan B; Fournier, Jay C; Bebko, Genna; Bertocci, Michele A; Hinze, Amanda K; Bonar, Lisa; Almeida, Jorge R C; Versace, Amelia; Schirda, Claudiu; Travis, Michael; Gill, Mary Kay; Demeter, Christine; Diwadkar, Vaibhav A; Sunshine, Jeffrey L; Holland, Scott K; Kowatch, Robert A; Birmaher, Boris; Axelson, David; Horwitz, Sarah M; Arnold, L Eugene; Fristad, Mary A; Youngstrom, Eric A; Findling, Robert L; Phillips, Mary L
Pediatric bipolar disorder involves poor social functioning, but the neural mechanisms underlying these deficits are not well understood. Previous neuroimaging studies have found deficits in emotional face processing localized to emotional brain regions. However, few studies have examined dysfunction in other regions of the face processing circuit. This study assessed hypoactivation in key face processing regions of the brain in pediatric bipolar disorder. Youth with a bipolar spectrum diagnosis (n = 20) were matched to a nonbipolar clinical group (n = 20), with similar demographics and comorbid diagnoses, and a healthy control group (n = 20). Youth participated in a functional magnetic resonance imaging (fMRI) scan that employed a task-irrelevant emotion processing design in which processing of facial emotions was not germane to task performance. Hypoactivation, isolated to the fusiform gyrus, was found when viewing animated, emerging facial expressions of happiness, sadness, fearfulness, and especially anger in pediatric bipolar participants relative to matched clinical and healthy control groups. The results of the study imply that differences exist in visual regions of the brain's face processing system and are not solely isolated to emotional brain regions such as the amygdala. Findings are discussed in relation to facial emotion recognition and fusiform gyrus deficits previously reported in the autism literature. Behavioral interventions targeting attention to facial stimuli might be explored as possible treatments for bipolar disorder in youth.
Legrand, Lore B.; Del Zotto, Marzia; Tyrand, Rémi; Pegna, Alan J.
This study investigates the spatiotemporal dynamics associated with conscious and non-conscious processing of naked and dressed human bodies. To this effect, stimuli of naked men and women with visible primary sexual characteristics, as well as dressed bodies, were presented to 20 heterosexual male and female participants while acquiring high resolution EEG data. The stimuli were either consciously detectable (supraliminal presentations) or were rendered non-conscious through backward masking (subliminal presentations). The N1 event-related potential component was significantly enhanced in participants when they viewed naked compared to dressed bodies under supraliminal viewing conditions. More importantly, naked bodies of the opposite sex produced a significantly greater N1 component compared to dressed bodies during subliminal presentations, when participants were not aware of the stimulus presented. A source localization algorithm computed on the N1 showed that the response for naked bodies in the supraliminal viewing condition was stronger in body processing areas, primary visual areas and additional structures related to emotion processing. By contrast, in the subliminal viewing condition, only visual and body processing areas were found to be activated. These results suggest that naked bodies and primary sexual characteristics are processed early in time (i.e., <200 ms) and activate key brain structures even when they are not consciously detected. It appears that, similarly to what has been reported for emotional faces, sexual features benefit from automatic and rapid processing, most likely due to their high relevance for the individual and their importance for the species in terms of reproductive success. PMID:23894532
Werner, Nicole-Simone; Kühnel, Sina; Markowitsch, Hans J.
Humans are experts in face perception. We are better able to distinguish between the differences of faces and their components than between any other kind of objects. Several studies investigating the underlying neural networks provided evidence for deviated face processing in criminal individuals, although results are often confounded by accompanying mental or addiction disorders. On the other hand, face processing in non-criminal healthy persons can be of high juridical interest in cases of witnessing a felony and afterward identifying a culprit. Memory and therefore recognition of a person can be affected by many parameters and thus become distorted. But also face processing itself is modulated by different factors like facial characteristics, degree of familiarity, and emotional relation. These factors make the comparison of different cases, as well as the transfer of laboratory results to real live settings very challenging. Several neuroimaging studies have been published in recent years and some progress was made connecting certain brain activation patterns with the correct recognition of an individual. However, there is still a long way to go before brain imaging can make a reliable contribution to court procedures. PMID:24367306
Rigby, Sarah N; Stoesz, Brenda M; Jakobson, Lorna S
Many factors contribute to social difficulties in individuals with autism spectrum disorder (ASD). The goal of the present work was to determine whether atypicalities in how individuals with ASD process static, socially engaging faces persist when nonrigid facial motion cues are present. We also sought to explore the relationships between various face processing abilities and individual differences in autism symptom severity and traits such as empathy. Participants included 16 adults with ASD without intellectual impairment and 16 sex- and age-matched controls. Mean Verbal IQ was comparable across groups [t(30) = 0.70, P = 0.49]. The two groups responded similarly to many of the experimental manipulations; however, relative to controls, participants with ASD responded more slowly to dynamic expressive faces, even when no judgment was required; were less accurate at identity matching with static and dynamic faces; and needed more time to make identity and expression judgments [F(1, 30) ≥ 6.37, P ≤ 0.017, ηp² ≥ 0.175 in all cases], particularly when the faces were moving [F(1, 30) = 3.40, P = 0.072, ηp² = 0.104]. In the full sample, as social autistic traits increased and empathic skills declined, participants needed more time to judge static identity, and static or dynamic expressions [0.43 < |rs| < 0.56]. The results suggest that adults with ASD show general impairments in face and motion processing and support the view that an examination of individual variation in particular personality traits and abilities is important for advancing our understanding of face perception. Autism Res 2018, 11: 942-955. Our findings suggest that people with ASD have problems processing expressive faces, especially when seen in motion. It is important to learn who is most at risk for face processing problems, given that in the general population such
Righi, Giulia; Westerlund, Alissa; Congdon, Eliza L.; Troller-Renfree, Sonya; Nelson, Charles A.
The goal of the present study was to investigate infants’ processing of female and male faces. We used an event-related potential (ERP) priming task, as well as a visual paired-comparison (VPC) eye-tracking task to explore how 7-month-old “female expert” infants differed in their responses to faces of different genders. Female faces elicited larger N290 amplitudes than male faces. Furthermore, infants showed a priming effect for female faces only, whereby the N290 was significantly more negative for novel females compared to primed female faces. The VPC experiment was designed to test whether infants could reliably discriminate between two female and two male faces. Analyses showed that infants were able to differentiate faces of both genders. The results of the present study suggest that 7-month-olds with a large amount of female face experience show a processing advantage for forming a neural representation of female faces, compared to male faces. However, the enhanced neural sensitivity to the repetition of female faces is not due to the infants' inability to discriminate male faces. Instead, the combination of results from the two tasks suggests that the differential processing for female faces may be a signature of expert-level processing. PMID:24200421
Bobst, Cora; Sauter, Sabine; Foppa, Andrina; Lobmaier, Janek S
It has been shown that women's preference for masculinity in male faces changes across the menstrual cycle. Preference for masculinity is stronger when conception probability is high than when it is low. These findings have been linked to cyclic fluctuations of hormone levels. The purpose of the present study is to further investigate the link between gonadal steroids (i.e. testosterone, estradiol, and progesterone) and masculinity preference in women, while holding the cycle phase constant. Sixty-two female participants were tested in their early follicular cycle phase, when conception probability is low. Participants were shown face pairs and were asked to choose the more attractive face. Face pairs consisted of a masculinized and feminized version of the same face. For naturally cycling women we found a positive relationship between saliva testosterone levels and masculinity preference, but there was no link between any hormones and masculinity preference for women taking hormonal contraception. We conclude that in naturally cycling women early follicular testosterone levels are associated with masculinity preference. However, these hormonal links were not found for women with artificially modified hormonal levels, that is, for women taking hormonal contraception.
Svendsen, Marie Louise; Ehlers, Lars H; Hundborg, Heidi H; Ingeman, Annette; Johnsen, Søren P
The relationship between processes of early stroke care and hospital costs remains unclear. We therefore examined the association in a population based cohort study. We identified 5909 stroke patients who were admitted to stroke units in a Danish county between 2005 and 2010. The examined recommended processes of care included early admission to a stroke unit, early initiation of antiplatelet or anticoagulant therapy, early computed tomography/magnetic resonance imaging (CT/MRI) scan, early physiotherapy and occupational therapy, early assessment of nutritional risk, constipation risk, and swallowing function, early mobilization, early catheterization, and early thromboembolism prophylaxis. Hospital costs were assessed for each patient based on the number of days spent in different in-hospital facilities using local hospital charges. The mean costs of hospitalization were $23 352 (standard deviation 27 827). The relationship between receiving more relevant processes of early stroke care and lower hospital costs followed a dose–response relationship. The adjusted costs were $24 566 (95% confidence interval 19 364–29 769) lower for patients who received 75–100% of the relevant processes of care compared with patients receiving 0–24%. All processes of care were associated with potential cost savings, except for early catheterization and early thromboembolism prophylaxis. Early care in agreement with key guidelines recommendations for the management of patients with stroke may be associated with hospital savings.
Zhao, Mintao; Bülthoff, Heinrich H; Bülthoff, Isabelle
Holistic processing, the tendency to perceive objects as indecomposable wholes, has long been viewed as a process specific to faces or objects of expertise. Although current theories differ in what causes holistic processing, they share a fundamental constraint for its generalization: Nonface objects cannot elicit facelike holistic processing in the absence of expertise. Contrary to this prevailing view, here we show that line patterns with salient Gestalt information (i.e., connectedness, closure, and continuity between parts) can be processed as holistically as faces without any training. Moreover, weakening the saliency of Gestalt information in these patterns reduced holistic processing of them, which indicates that Gestalt information plays a crucial role in holistic processing. Therefore, holistic processing can be achieved not only via a top-down route based on expertise, but also via a bottom-up route relying merely on object-based information. The finding that facelike holistic processing can extend beyond the domains of faces and objects of expertise poses a challenge to current dominant theories.
Olivares, Ela I.; Lage-Castellanos, Agustín; Bobes, María A.; Iglesias, Jaime
We investigated the neural correlates of the access to and retrieval of face structure information in contrast to those concerning the access to and retrieval of person-related verbal information, triggered by faces. We experimentally induced stimulus familiarity via a systematic learning procedure including faces with and without associated verbal information. Then, we recorded event-related potentials (ERPs) in both intra-domain (face-feature) and cross-domain (face-occupation) matching tasks while N400-like responses were elicited by incorrect eyes-eyebrows completions and occupations, respectively. A novel Bayesian source reconstruction approach plus conjunction analysis of group effects revealed that in both cases the generated N170s were of similar amplitude but had different neural origin. Thus, whereas the N170 of faces was associated predominantly to right fusiform and occipital regions (the so-called “Fusiform Face Area”, “FFA” and “Occipital Face Area”, “OFA”, respectively), the N170 of occupations was associated to a bilateral very posterior activity, suggestive of basic perceptual processes. Importantly, the right-sided perceptual P200 and the face-related N250 were evoked exclusively in the intra-domain task, with sources in OFA and extensively in the fusiform region, respectively. Regarding later latencies, the intra-domain N400 seemed to be generated in right posterior brain regions encompassing mainly OFA and, to some extent, the FFA, likely reflecting neural operations triggered by structural incongruities. In turn, the cross-domain N400 was related to more anterior left-sided fusiform and temporal inferior sources, paralleling those described previously for the classic verbal N400. These results support the existence of differentiated neural streams for face structure and person-related verbal processing triggered by faces, which can be activated differentially according to specific task demands. PMID:29628877
Morand, Stéphanie M; Grosbras, Marie-Hélène; Caldara, Roberto; Harvey, Monika
Human faces capture attention more than other visual stimuli. Here we investigated whether such face-specific biases rely on automatic (involuntary) or voluntary orienting responses. To this end, we used an anti-saccade paradigm, which requires the ability to inhibit a reflexive automatic response and to generate a voluntary saccade in the opposite direction of the stimulus. To control for potential low-level confounds in the eye-movement data, we manipulated the high-level visual properties of the stimuli while normalizing their global low-level visual properties. Eye movements were recorded in 21 participants who performed either pro- or anti-saccades to a face, car, or noise pattern, randomly presented to the left or right of a fixation point. For each trial, a symbolic cue instructed the observer to generate either a pro-saccade or an anti-saccade. We report a significant increase in anti-saccade error rates for faces compared to cars and noise patterns, as well as faster pro-saccades to faces and cars in comparison to noise patterns. These results indicate that human faces induce stronger involuntary orienting responses than other visual objects, i.e., responses that are beyond the control of the observer. Importantly, this involuntary processing cannot be accounted for by global low-level visual factors.
Liu, Tina T; Behrmann, Marlene
Congenital prosopagnosia (CP) refers to a lifelong impairment in face processing despite normal visual and intellectual skills. Many studies have suggested that the key underlying deficit in CP is one of a failure to engage holistic processing. Moreover, there has been some suggestion that, in normal observers, there may be greater involvement of the right than left hemisphere in holistic processing. To examine the proposed deficit in holistic processing and its potential hemispheric atypicality in CP, we compared the performance of 8 CP individuals with both matched controls and a large group of non-matched controls on a novel, vertical composite task. In this task, participants judged whether a cued half of a face (either left or right half) was the same or different at study and test, and the two face halves could be either aligned or misaligned. The standard index of holistic processing is one in which the unattended face half influences performance on the cued half and this influence is greater in the aligned than in the misaligned condition. Relative to controls, the CP participants, both at a group and at an individual level, did not show holistic processing in the vertical composite task. There was also no difference in performance as a function of hemifield of the cued face half in the CP individuals, and this was true in the control participants, as well. The findings clearly confirm the deficit in holistic processing in CP and reveal the useful application of this novel experimental paradigm to this population and potentially to others as well.
Xiao, Ruiqi; Li, Xianchun; Li, Lin; Wang, Yanmei
Most previous studies on facial expression recognition have focused on the moderate emotions; to date, few studies have been conducted to investigate the explicit and implicit processes of peak emotions. In the current study, we used images of athletes' transient, peak-intensity expressions at the winning or losing point in competition as materials, and investigated the diagnosability of peak facial expressions at both implicit and explicit levels. In Experiment 1, participants were instructed to evaluate isolated faces, isolated bodies, and the face-body compounds, and eye movements were recorded. The results revealed that the isolated body and face-body congruent images were better recognized than isolated face and face-body incongruent images, indicating that the emotional information conveyed by facial cues was ambiguous, and the body cues influenced facial emotion recognition. Furthermore, eye movement records showed that the participants displayed distinct gaze patterns for the congruent and incongruent compounds. In Experiment 2A, the subliminal affective priming task was used, with faces as primes and bodies as targets, to investigate the unconscious emotion perception of peak facial expressions. The results showed that winning face prime facilitated reaction to winning body target, whereas losing face prime inhibited reaction to winning body target, suggesting that peak facial expressions could be perceived at the implicit level. In general, the results indicate that peak facial expressions cannot be consciously recognized but can be perceived at the unconscious level. In Experiment 2B, a revised subliminal affective priming task and a strict awareness test were used to examine the validity of unconscious perception of peak facial expressions found in Experiment 2A. Results of Experiment 2B showed that reaction time to both winning body targets and losing body targets was influenced by the invisibly peak facial expression primes, which indicated the
Spapé, M M; Harjunen, Ville; Ravaja, N
Being touched is known to affect emotion, and even a casual touch can elicit positive feelings and affinity. Psychophysiological studies have recently shown that tactile primes affect visual evoked potentials to emotional stimuli, suggesting altered affective stimulus processing. As, however, these studies approached emotion from a purely unidimensional perspective, it remains unclear whether touch biases emotional evaluation or a more general feature such as salience. Here, we investigated how simple tactile primes modulate event-related potentials (ERPs), facial EMG and cardiac response to pictures of facial expressions of emotion. All measures replicated known effects of emotional face processing: Disgust and fear modulated early ERPs, anger increased the cardiac orienting response, and expressions elicited emotion-congruent facial EMG activity. Tactile primes also affected these measures, but priming never interacted with the type of emotional expression. Thus, touch may additively affect general stimulus processing, but it does not bias or modulate immediate affective evaluation.
Hernández-Gutiérrez, David; Abdel Rahman, Rasha; Martín-Loeches, Manuel; Muñoz, Francisco; Schacht, Annekathrin; Sommer, Werner
Face-to-face interactions characterize communication in social contexts. These situations are typically multimodal, requiring the integration of linguistic auditory input with facial information from the speaker. In particular, eye gaze and visual speech provide the listener with social and linguistic information, respectively. Despite the importance of this context for an ecological study of language, research on audiovisual integration has mainly focused on the phonological level, leaving aside effects on semantic comprehension. Here we used event-related potentials (ERPs) to investigate the influence of facial dynamic information on semantic processing of connected speech. Participants were presented with either a video or a still picture of the speaker, concomitant to auditory sentences. Across three experiments, we manipulated the presence or absence of the speaker's dynamic facial features (mouth and eyes) and compared the amplitudes of the semantic N400 elicited by unexpected words. Contrary to our predictions, the N400 was not modulated by dynamic facial information; therefore, semantic processing seems to be unaffected by the speaker's gaze and visual speech. Nevertheless, during the processing of expected words, dynamic faces elicited a long-lasting late posterior positivity compared to the static condition. This effect was significantly reduced when the mouth of the speaker was covered. Our findings may indicate an increase of attentional processing to richer communicative contexts. The present findings also demonstrate that in natural communicative face-to-face encounters, perceiving the face of a speaker in motion provides supplementary information that is taken into account by the listener, especially when auditory comprehension is non-demanding.
Centanni, Tracy M; Norton, Elizabeth S; Park, Anne; Beach, Sara D; Halverson, Kelly; Ozernov-Palchik, Ola; Gaab, Nadine; Gabrieli, John D E
A functional region of left fusiform gyrus termed "the visual word form area" (VWFA) develops during reading acquisition to respond more strongly to printed words than to other visual stimuli. Here, we examined responses to letters among 5- and 6-year-old early kindergarten children (N = 48) with little or no school-based reading instruction who varied in their reading ability. We used functional magnetic resonance imaging (fMRI) to measure responses to individual letters, false fonts, and faces in the left and right fusiform gyri. We then evaluated whether the signal change and size (spatial extent) of letter-sensitive cortex (greater activation for letters versus faces) and letter-specific cortex (greater activation for letters versus false fonts) in these regions related to (a) standardized measures of word-reading ability and (b) the signal change and size of face-sensitive cortex (fusiform face area, or FFA; greater activation for faces versus letters). Greater letter specificity, but not letter sensitivity, in left fusiform gyrus correlated positively with word-reading scores. Across children, in the left fusiform gyrus, greater size of letter-sensitive cortex correlated with smaller size of the FFA. These findings are the first to suggest that in beginning readers, the development of letter responsivity in left fusiform cortex is associated both with better reading ability and with a reduction in the size of the left FFA that may result in right-hemisphere dominance for face perception. © 2018 John Wiley & Sons Ltd.
Brislin, Sarah J; Yancey, James R; Perkins, Emily R; Palumbo, Isabella M; Drislane, Laura E; Salekin, Randall T; Fanti, Kostas A; Kimonis, Eva R; Frick, Paul J; Blair, R James R; Patrick, Christopher J
The investigation of callous-unemotional (CU) traits has been central to contemporary research on child behavior problems, and served as the impetus for inclusion of a specifier for conduct disorder in the latest edition of the official psychiatric diagnostic system. Here, we report results from 2 studies that evaluated the construct validity of callousness as assessed in adults, by testing for associated deficits in behavioral and neural processing of fearful faces, as have been shown in youthful samples. We hypothesized that scores on an established measure of callousness would predict reduced recognition accuracy and diminished electrocortical reactivity for fearful faces in adult participants. In Study 1, 66 undergraduate participants performed an emotion recognition task in which they viewed affective faces of different types and indicated the emotion expressed by each. In Study 2, electrocortical data were collected from 254 adult twins during viewing of fearful and neutral face stimuli, and scored for event-related response components. Analyses of Study 1 data revealed that higher callousness was associated with decreased recognition accuracy for fearful faces specifically. In Study 2, callousness was associated with reduced amplitude of both N170 and P200 responses to fearful faces. The current findings demonstrate for the first time that callousness in adults is associated with both behavioral and physiological deficits in the processing of fearful faces. These findings support the validity of the CU construct in adults and highlight the possibility of a multidomain measurement framework for continued study of this important clinical construct. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Victor, Teresa A; Furey, Maura L; Fromm, Stephen J; Bellgowan, Patrick S F; Öhman, Arne; Drevets, Wayne C
Major depressive disorder (MDD) is associated with a mood-congruent processing bias in the amygdala toward face stimuli portraying sad expressions that is evident even when such stimuli are presented below the level of conscious awareness. The extended functional anatomical network that maintains this response bias has not been established, however. This study aimed to identify neural network differences in the hemodynamic response to implicitly presented facial expressions between depressed and healthy control participants. Unmedicated depressed participants with MDD (n=22) and healthy controls (HC; n=25) underwent functional MRI as they viewed face stimuli showing sad, happy or neutral expressions, presented using a backward masking design. The blood-oxygen-level-dependent (BOLD) signal was measured to identify regions where the hemodynamic response to the emotionally valenced stimuli differed between groups. The MDD subjects showed greater BOLD responses than the controls to masked-sad versus masked-happy faces in the hippocampus, amygdala and anterior inferotemporal cortex. While viewing both masked-sad and masked-happy faces relative to masked-neutral faces, the depressed subjects showed greater hemodynamic responses than the controls in a network that included the medial and orbital prefrontal cortices and anterior temporal cortex. Depressed and healthy participants thus showed distinct hemodynamic responses to masked-sad and masked-happy faces in neural circuits known to support the processing of emotionally valenced stimuli and to integrate the sensory and visceromotor aspects of emotional behavior. Altered function within these networks in MDD may establish and maintain illness-associated differences in the salience of sensory/social stimuli, such that attention is biased toward negative and away from positive stimuli.
Liu, Ping; Forte, Jason; Sewell, David; Carter, Olivia
Contrast-based early visual processing has largely been considered to involve autonomous processes that do not need the support of cognitive resources. However, as spatial attention is known to modulate early visual perceptual processing, we explored whether cognitive load could similarly impact contrast-based perception. We used a dual-task paradigm to assess the impact of a concurrent working memory task on the performance of three different early visual tasks. The results from Experiment 1 suggest that cognitive load can modulate early visual processing. No effects of cognitive load were seen in Experiments 2 or 3. Together, the findings provide evidence that under some circumstances cognitive load effects can penetrate the early stages of visual processing and that higher cognitive function and early perceptual processing may not be as independent as was once thought.
Hubble, Kelly; Daughters, Katie; Manstead, Antony S R; Rees, Aled; Thapar, Anita; van Goozen, Stephanie H M
Previous studies have found that oxytocin (OXT) can improve the recognition of emotional facial expressions; it has been proposed that this effect is mediated by an increase in attention to the eye region of faces. Nevertheless, evidence in support of this claim is inconsistent, and few studies have directly tested the effect of oxytocin on emotion recognition via altered eye gaze. In a double-blind, within-subjects, randomized control experiment, 40 healthy male participants received 24 IU intranasal OXT and placebo in two identical experimental sessions separated by a 2-week interval. Visual attention to the eye region was assessed on both occasions while participants completed a static facial emotion recognition task using medium-intensity facial expressions. Although OXT had no effect on emotion recognition accuracy, recognition performance was improved because face processing was faster across emotions under the influence of OXT; this effect was marginally significant (p<.06). Consistent with a previous study using dynamic stimuli, OXT had no effect on eye-gaze patterns when viewing static emotional faces, and this was not related to recognition accuracy or face processing time. These findings suggest that OXT-induced enhancement of facial emotion recognition is not necessarily mediated by an increase in attention to the eye region of faces, as previously assumed. We discuss several methodological issues which may explain discrepant findings and suggest that the effect of OXT on visual attention may differ depending on task requirements. (JINS, 2017, 23, 23-33).
Reinl, Maren; Bartels, Andreas
Facial movement conveys important information for social interactions, yet its neural processing is poorly understood. Computational models propose that shape-sensitive and temporal-sequence-sensitive mechanisms interact in the processing of dynamic faces. While face-processing regions are known to respond to facial movement, their sensitivity to particular temporal sequences has barely been studied. Here we used fMRI to examine the sensitivity of human face-processing regions to two aspects of directionality in facial movement trajectories. We presented genuine movie recordings of increasing and decreasing fear expressions, each of which was played in natural or reversed frame order. This two-by-two factorial design, with the factors emotion-direction (increasing or decreasing emotion) and timeline (natural versus artificial), matched low-level visual properties, static content, and motion energy within each factor. The results showed sensitivity to emotion-direction in the FFA, which was timeline-dependent as it only occurred within the natural frame order, and sensitivity to timeline in the STS, which was emotion-direction-dependent as it only occurred for decreasing fear. The occipital face area (OFA) was sensitive to the factor timeline. These findings reveal interacting temporal-sequence-sensitive mechanisms that are responsive both to ecological meaning and to the prototypical unfolding of facial dynamics. These mechanisms are temporally directional, provide socially relevant information regarding emotional state or naturalness of behavior, and agree with predictions from modeling and predictive coding theory. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Scott, Lisa S.; Shannon, Robert W.; Nelson, Charles A.
Behavioral and electrophysiological evidence suggests a gradual, experience-dependent specialization of cortical face processing systems that takes place largely in the 1st year of life. To further investigate these findings, event-related potentials (ERPs) were collected from typically developing 9-month-old infants presented with pictures of…
Robbins, Rachel; McKone, Elinor
In the debate between expertise and domain-specific explanations of "special" processing for faces, a common belief is that behavioural studies support the expertise hypothesis. The present article refutes this view, via a combination of new data and review. We tested dog experts with confirmed good individuation of exemplars of their…
Olsavsky, Aviva K.; Brotman, Melissa A.; Rutenberg, Julia G.; Muhrer, Eli J.; Deveney, Christen M.; Fromm, Stephen J.; Towbin, Kenneth; Pine, Daniel S.; Leibenluft, Ellen
Objective: Youth at familial risk for bipolar disorder (BD) show deficits in face emotion processing, but the neural correlates of these deficits have not been examined. This preliminary study tests the hypothesis that, relative to healthy comparison (HC) subjects, both BD subjects and youth at risk for BD (i.e., those with a first-degree BD…
Pilz, Karin S.; Vuong, Quoc C.; Bulthoff, Heinrich H.; Thornton, Ian M.
A highly familiar type of movement occurs whenever a person walks towards you. In the present study, we investigated whether this type of motion has an effect on face processing. We took a range of different 3D head models and placed them on a single, identical 3D body model. The resulting figures were animated to approach the observer. In a first…
Otte, R A; Donkers, F C L; Braeken, M A K A; Van den Bergh, B R H
Making sense of the emotions manifested in the human voice is an important social skill, one that is influenced by emotions in other modalities, such as those conveyed by the corresponding face. Although the simultaneous processing of emotional information from voices and faces has been studied in adults, little is known about the neural mechanisms underlying the development of this ability in infancy. Here we investigated multimodal processing of fearful and happy face/voice pairs using event-related potential (ERP) measures in a group of 84 9-month-olds. Infants were presented with emotional vocalisations (fearful/happy) preceded by the same or a different facial expression (fearful/happy). The ERP data revealed that the processing of emotional information appearing in the human voice was modulated by the emotional expression appearing on the corresponding face: infants responded with larger auditory ERPs after fearful compared to happy facial primes. This finding suggests that infants dedicate more processing capacity to potentially threatening than to non-threatening stimuli. Copyright © 2014 Elsevier Inc. All rights reserved.
Farzin, Faraz; Rivera, Susan M.; Hessl, David
Gaze avoidance is a hallmark behavioral feature of fragile X syndrome (FXS), but little is known about whether abnormalities in the visual processing of faces, including disrupted autonomic reactivity, may underlie this behavior. Eye tracking was used to record fixations and pupil diameter while adolescents and young adults with FXS and sex- and…
Wyer, Natalie A.; Martin, Douglas; Pickup, Tracey; Macrae, C. Neil
Recent research suggests that individuals with relatively weak global precedence (i.e., a smaller propensity to view visual stimuli in a configural manner) show a reduced face inversion effect (FIE). Coupled with such findings, a number of recent studies have demonstrated links between an advantage for feature-based processing and the presentation…
Guillem, F.; Mograss, M.
This study investigated gender differences on memory processing using event-related potentials (ERPs). Behavioral data and ERPs were recorded in 16 males and 10 females during a recognition memory task for faces. The behavioral data results showed that females performed better than males. Gender differences on ERPs were evidenced over anterior…
Peng, Ming; Cai, Mengfei; Zhou, Renlai
Attentional load may be increased by task-relevant attention, such as task difficulty, or by task-irrelevant attention, such as an unexpected light spot on the screen. Several studies have focused on the influence of task-relevant attentional load on task-irrelevant emotion processing. In this study, we used event-related potentials to examine the impact of task-irrelevant attentional load on task-irrelevant expression processing. Eighteen participants identified the color of a word (i.e. the color Stroop task) while a picture of a fearful or a neutral face was shown in the background. Task-irrelevant attentional load was increased by regularly presented congruence trials (congruence between the color and the meaning of the word) in the regular condition, because implicit sequence learning was induced. We compared task-irrelevant expression processing between the regular condition and the random condition (in which congruence and incongruence trials were presented randomly). Behaviorally, reaction times were faster for fearful than for neutral faces in the random condition, whereas no significant difference was found in the regular condition. The event-related potential results indicated enhanced positive amplitudes for fearful relative to neutral faces in the P2, N2, and P3 components in the random condition. In comparison, only the P2 differed significantly between the two types of expressions in the regular condition. The study showed that attentional load increased by implicit sequence learning influenced the late processing of task-irrelevant expressions.
Nakajima, Kae; Minami, Tetsuto; Tanabe, Hiroki C; Sadato, Norihiro; Nakauchi, Shigeki
Facial color is important information for social communication, as it provides important clues for recognizing a person's emotion and health condition. Our previous EEG study suggested that the N170 at the left occipito-temporal site is related to facial color processing (Nakajima et al., Neuropsychologia 50:2499-2505). However, because of the low spatial resolution of EEG, the brain region involved in facial color processing remains controversial. In the present study, we examined the neural substrates of facial color processing using functional magnetic resonance imaging (fMRI). We measured brain activity from 25 subjects during the presentation of natural- and bluish-colored faces and their scrambled images. The bilateral fusiform face area (FFA) and occipital face area (OFA) were localized by the contrast of natural-colored faces versus natural-colored scrambled images. Moreover, region-of-interest (ROI) analysis showed that the left FFA was sensitive to facial color, whereas the right FFA and the right and left OFA were insensitive to facial color. In combination with our previous EEG results, these data suggest that the left FFA may play an important role in facial color processing. Copyright © 2014 Wiley Periodicals, Inc.
Leung, Rachel C; Ye, Annette X; Wong, Simeon M; Taylor, Margot J; Doesburg, Sam M
Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by impairments in social cognition. The biological basis of deficits in social cognition in ASD, and their difficulty in processing emotional face information in particular, remains unclear. Atypical communication within and between brain regions has been reported in ASD. Interregional phase-locking is a neurophysiological mechanism mediating communication among brain areas and is understood to support cognitive functions. In the present study we investigated interregional magnetoencephalographic phase synchronization during the perception of emotional faces in adolescents with ASD. A total of 22 adolescents with ASD (18 males, mean age =14.2 ± 1.15 years, 22 right-handed) with mild to no cognitive delay and 17 healthy controls (14 males, mean age =14.4 ± 0.33 years, 16 right-handed) performed an implicit emotional processing task requiring perception of happy, angry and neutral faces while we recorded neuromagnetic signals. The faces were presented rapidly (80 ms duration) to the left or right of a central fixation cross and participants responded to a scrambled pattern that was presented concurrently on the opposite side of the fixation point. Task-dependent interregional phase-locking was calculated among source-resolved brain regions. Task-dependent increases in interregional beta synchronization were observed. Beta-band interregional phase-locking in adolescents with ASD was reduced, relative to controls, during the perception of angry faces in a distributed network involving the right fusiform gyrus and insula. No significant group differences were found for happy or neutral faces, or other analyzed frequency ranges. Significant reductions in task-dependent beta connectivity strength, clustering and eigenvector centrality (all P <0.001) in the right insula were found in adolescents with ASD, relative to controls. Reduced beta synchronization may reflect inadequate
Naumann, R. J.; Herring, H. W.
The characteristics of the space environment are reviewed. Potential applications of space processing are discussed, including metallurgical processing and the processing of semiconductor materials. The behavior of fluids in low gravity is described. The evolution of apparatus for materials processing in space is reviewed.
Cohen Kadosh, Kathrin; Johnson, Mark H; Dick, Frederic; Cohen Kadosh, Roi; Blakemore, Sarah-Jayne
In this combined structural and functional MRI developmental study, we tested 48 participants aged 7–37 years on 3 simple face-processing tasks (identity, expression, and gaze task), which were designed to yield very similar performance levels across the entire age range. The same participants then carried out 3 more difficult out-of-scanner tasks, which provided in-depth measures of changes in performance. For our analysis we adopted a novel, systematic approach that allowed us to differentiate age- from performance-related changes in the BOLD response in the 3 tasks, and compared these effects to concomitant changes in brain structure. The processing of all face aspects activated the core face-network across the age range, as well as additional and partially separable regions. Small task-specific activations in posterior regions were found to increase with age and were distinct from more widespread activations that varied as a function of individual task performance (but not of age). Our results demonstrate that activity during face-processing changes with age, and these effects are still observed when controlling for changes associated with differences in task performance. Moreover, we found that changes in white and gray matter volume were associated with changes in activation with age and performance in the out-of-scanner tasks. PMID:22661406
Horovitz, Silvina G; Rossion, Bruno; Skudlarski, Pawel; Gore, John C
Face perception is typically associated with activation in the inferior occipital, superior temporal (STG), and fusiform gyri (FG) and with an occipitotemporal electrophysiological component peaking around 170 ms on the scalp, the N170. However, the relationship between the N170 and the multiple face-sensitive activations observed in neuroimaging is unclear. It has recently been shown that the amplitude of the N170 component monotonically decreases as gaussian noise is added to a picture of a face [Jemel et al., 2003]. To help clarify the sources of the N170 without a priori assumptions regarding their number and locations, ERPs and fMRI were recorded in five subjects in the same experiment, in separate sessions. We used a parametric paradigm in which the amplitude of the N170 was modulated by varying the level of noise in a picture, and identified regions where the percent signal change in fMRI correlated with the ERP data. N170 signals were observed for pictures of both cars and faces but were stronger for faces. A monotonic decrease with added noise was observed for the N170 at right-hemisphere sites but was less clear at left and occipital central sites. Correlations between fMRI signal and N170 amplitudes for faces were highly significant (P < 0.001) in the bilateral fusiform gyrus and superior temporal gyrus. For cars, the strongest correlations were observed in the parahippocampal region and in the STG (P < 0.005). Besides helping to clarify the spatiotemporal course of face processing, this study illustrates how ERP information may be used synergistically in fMRI analyses. Parametric designs may be developed further to provide some timing information on fMRI activity and help identify the generators of ERP signals.
Jiang, Fang; Stecker, G. Christopher; Fine, Ione
Studies showing that occipital cortex responds to auditory and tactile stimuli after early blindness are often interpreted as demonstrating that early blind subjects “see” auditory and tactile stimuli. However, it is not clear whether these occipital responses directly mediate the perception of auditory/tactile stimuli, or simply modulate or augment responses within other sensory areas. We used fMRI pattern classification to categorize the perceived direction of motion for both coherent and ambiguous auditory motion stimuli. In sighted individuals, perceived motion direction was accurately categorized based on neural responses within the planum temporale (PT) and right lateral occipital cortex (LOC). Within early blind individuals, auditory motion decisions for both stimuli were successfully categorized from responses within the human middle temporal complex (hMT+), but not the PT or right LOC. These findings suggest that early blind responses within hMT+ are associated with the perception of auditory motion, and that these responses in hMT+ may usurp some of the functions of nondeprived PT. Thus, our results provide further evidence that blind individuals do indeed “see” auditory motion. PMID:25378368
Arcaro, Michael J.; Schade, Peter F.; Vincent, Justin L.; Ponce, Carlos R.; Livingstone, Margaret S.
Here we report that monkeys raised without exposure to faces did not develop face patches, but did develop domains for other categories, and did show normal retinotopic organization, indicating that early face deprivation leads to a highly selective cortical processing deficit. Therefore experience must be necessary for the formation, or maintenance, of face domains. Gaze tracking revealed that control monkeys looked preferentially at faces, even at ages prior to the emergence of face patches, but face-deprived monkeys did not, indicating that face looking is not innate. A retinotopic organization is present throughout the visual system at birth, so selective early viewing behavior could bias category-specific visual responses towards particular retinotopic representations, thereby leading to domain formation in stereotyped locations in IT, without requiring category-specific templates or biases. Thus we propose that environmental importance influences viewing behavior, viewing behavior drives neuronal activity, and neuronal activity sculpts domain formation. PMID:28869581
Hildebrandt, A; Kiy, A; Reuter, M; Sommer, W; Wilhelm, O
Face cognition, including face identity and facial expression processing, is a crucial component of socio-emotional abilities that characterizes humans as highly developed social beings. However, molecular genetic studies investigating gene-behavior associations in these trait domains on the basis of well-founded phenotype definitions are still rare. We examined the relationship between the 5-HTTLPR/rs25531 polymorphisms (related to serotonin reuptake) and the ability to perceive and recognize faces and emotional expressions in human faces. To this end, we conducted structural equation modeling on data from 230 young adults, obtained using a comprehensive, multivariate task battery with maximal-effort tasks. By additionally modeling fluid intelligence and immediate and delayed memory factors, we aimed to address the discriminant relationships of the 5-HTTLPR/rs25531 polymorphisms with socio-emotional abilities. We found a robust association between the 5-HTTLPR/rs25531 polymorphism and facial emotion perception. Carriers of two long (L) alleles outperformed carriers of one or two short (S) alleles. Weaker associations were present for face identity perception and memory for emotional facial expressions. There was no association between the 5-HTTLPR/rs25531 polymorphism and non-social abilities, demonstrating the discriminant validity of the relationships. We discuss the implications and possible neural mechanisms underlying these novel findings. © 2016 John Wiley & Sons Ltd and International Behavioural and Neural Genetics Society.
Stropahl, Maren; Plotz, Karsten; Schönfeld, Rüdiger; Lenarz, Thomas; Sandmann, Pascale; Yovel, Galit; De Vos, Maarten; Debener, Stefan
There is converging evidence that the auditory cortex takes over visual functions during a period of auditory deprivation. A residual pattern of cross-modal take-over may prevent the auditory cortex from adapting to restored sensory input as delivered by a cochlear implant (CI) and limit speech intelligibility with a CI. The aim of the present study was to investigate whether visual face processing in CI users activates auditory cortex and whether this has adaptive or maladaptive consequences. High-density electroencephalogram data were recorded from CI users (n=21) and age-matched normal-hearing (NH) controls (n=21) performing a face versus house discrimination task. Lip-reading and face recognition abilities were measured, as well as speech intelligibility. Evaluation of event-related potential (ERP) topographies revealed significant group differences over occipito-temporal scalp regions. Distributed source analysis identified significantly higher activation in the right auditory cortex for CI users compared to NH controls, confirming visual take-over. Lip-reading skills were significantly enhanced in the CI group, particularly after a longer duration of deafness, while face recognition did not differ significantly between groups. However, auditory cortex activation in CI users was positively related to face recognition abilities. Our results confirm a cross-modal reorganization for ecologically valid visual stimuli in CI users. Furthermore, they suggest that residual take-over, which can persist even after adaptation to a CI, is not necessarily maladaptive. Copyright © 2015 Elsevier Inc. All rights reserved.
Arteche, Adriane; Joormann, Jutta; Harvey, Allison; Craske, Michelle; Gotlib, Ian H.; Lehtonen, Annukka; Counsell, Nicholas; Stein, Alan
Background: Postnatally depressed mothers have difficulties responding appropriately to their infants. The quality of the mother–child relationship depends on a mother's ability to respond to her infant's cues, which are largely non-verbal. Therefore, it is likely that difficulties in a mother's appraisal of her infant's facial expressions will affect the quality of mother–infant interaction. This study aimed to investigate the effects of postnatal depression and anxiety on the processing of infants' facial expressions. Method: A total of 89 mothers, 34 with Generalised Anxiety Disorder (GAD), 21 with Major Depressive Disorder, and 34 controls, completed a 'morphed infant faces' task when their children were between 10 and 18 months old. Results: Overall, mothers were more likely to identify happy faces accurately, and at lower intensity, than sad faces. Depressed participants, however, were less likely than controls to accurately identify happy infant faces. Interestingly, mothers with GAD tended to identify happy faces at a lower intensity than controls. There were no differences between the groups in relation to sad faces. Limitations: Our sample was relatively small, and further research is needed to investigate the links between mothers' perceptions of infant expressions and both maternal responsiveness and later measures of child development. Conclusion: Our findings have potential clinical implications, as difficulties in the processing of positive facial expressions in depression may lead to less maternal responsiveness to positive affect in the offspring and may diminish the quality of mother–child interactions. Results for participants with GAD are consistent with the literature demonstrating that persons with GAD are intolerant of uncertainty and seek reassurance because of their worries. PMID:21641652
Roberson, Debi; Kikutani, Mariko; Doge, Paula; Whitaker, Lydia; Majid, Asifa
Three studies investigated developmental changes in facial expression processing, between 3 years-of-age and adulthood. For adults and older children, the addition of sunglasses to upright faces caused an equivalent decrement in performance to face inversion. However, younger children showed "better" classification of expressions of faces wearing…
Gauthier, Isabel; Chua, Kao-Wei; Richler, Jennifer J
The Vanderbilt Holistic Processing Test for faces (VHPT-F) is the first standard test designed to measure individual differences in holistic processing. The test measures failures of selective attention to face parts through congruency effects, an operational definition of holistic processing. However, this conception of holistic processing has been challenged by the suggestion that it may tap into the same selective attention or cognitive control mechanisms that yield congruency effects in Stroop and Flanker paradigms. Here, we report data from 130 subjects on the VHPT-F, several versions of Stroop and Flanker tasks, as well as fluid IQ. Results suggested a small degree of shared variance in Stroop and Flanker congruency effects, which did not relate to congruency effects on the VHPT-F. Variability on the VHPT-F was also not correlated with Fluid IQ. In sum, we find no evidence that holistic face processing as measured by congruency in the VHPT-F is accounted for by domain-general control mechanisms.
Degabriele, Racheal; Lagopoulos, Jim; Malhi, Gin
Behavioural and imaging studies report that individuals with bipolar disorder (BD) exhibit impairments in emotional face processing. However, few studies have examined the temporal characteristics of these impairments, and event-related potential (ERP) studies that investigate emotion perception in BD are rare. The aim of our study was to explore these processes as indexed by the face-specific P100 and N170 ERP components in a BD cohort. Eighteen subjects diagnosed with BD and 18 age- and sex-matched healthy volunteers completed an emotional go/no-go inhibition task during electroencephalogram (EEG) and ERP acquisition. Patients demonstrated faster responses to happy compared to sad faces, whereas control data revealed no emotional discrimination. Errors of omission were more frequent in the BD group in both emotion conditions, but there were no between-group differences in commission errors. Significant differences were found between groups in P100 amplitude variation across levels of affect, with the BD group exhibiting greater responses to happy compared to sad faces. Conversely, the control cohort failed to demonstrate a differentiation between emotions. A statistically significant between-group effect was also found for N170 amplitudes, indicating reduced responses in the BD group. Future studies should ideally recruit BD patients across all three mood states (manic, depressive, and euthymic) with greater scrutiny of the effects of psychotropic medication. These ERP results primarily suggest an emotion-sensitive face processing impairment in BD whereby patients are initially more attuned to positive emotions as indicated by the P100 ERP component, and this may contribute to the emergence of bipolar-like symptoms. Copyright © 2011 Elsevier B.V. All rights reserved.
Troyer, Angela K; Häfliger, Andrea; Cadieux, Mélanie J; Craik, Fergus I M
Many older adults are interested in strategies to help them learn new names. We examined the learning conditions that provide maximal benefit to name and face learning. In Experiment 1, consistent with levels-of-processing theory, name recall and recognition by 20 younger and 20 older adults were poorest with physical processing, intermediate with phonemic processing, and best with semantic processing. In Experiment 2, name and face learning in 20 younger and 20 older adults was maximized with semantic processing of names and physical processing of faces. Experiment 3 showed a benefit of self-generation and of intentional learning of name-face pairs in 24 older adults. Findings suggest that memory interventions should emphasize processing names semantically, processing faces physically, self-generating this information, and keeping in mind that memory for the names will be needed in the future.
Wang, Lili; Feng, Chunliang; Mai, Xiaoqin; Jia, Lina; Zhu, Xiangru; Luo, Wenbo; Luo, Yue-jia
Emotional stimuli can be processed without consciousness. In the current study, we used event-related potentials (ERPs) to assess whether perceptual load influences non-conscious processing of fearful facial expressions. Perceptual load was manipulated using a letter search task with the target letter presented at the fixation point, while facial expressions were presented peripherally and masked to prevent conscious awareness. The letter string comprised six letters (X or N) that were identical (low load) or different (high load). Participants were instructed to discriminate the letters at fixation or the facial expression (fearful or neutral) in the periphery. Participants were faster and more accurate at detecting letters in the low load condition than in the high load condition. Fearful faces elicited a sustained positivity from 250 ms to 700 ms post-stimulus over fronto-central areas during the face discrimination and low-load letter discrimination conditions, but this effect was completely eliminated during high-load letter discrimination. Our findings imply that non-conscious processing of fearful faces depends on perceptual load, and attentional resources are necessary for non-conscious processing. PMID:27149273
Patrick, Regan E; Rastogi, Anuj; Christensen, Bruce K
Adaptive emotional responding relies on dual automatic and effortful processing streams. Dual-stream models of schizophrenia (SCZ) posit a selective deficit in neural circuits that govern goal-directed, effortful processes versus reactive, automatic processes. This imbalance suggests that when patients are confronted with competing automatic and effortful emotional response cues, they will exhibit diminished effortful responding and intact, possibly elevated, automatic responding compared to controls. This prediction was evaluated using a modified version of the face-vignette task (FVT). Participants viewed emotional faces (automatic response cue) paired with vignettes (effortful response cue) that signalled a different emotion category and were instructed to discriminate the manifest emotion. Patients made fewer vignette responses and more face responses than controls. However, the relationship between group and FVT responding was moderated by IQ and reading comprehension ability. These results replicate and extend previous research and provide tentative support for abnormal conflict resolution between automatic and effortful emotional processing predicted by dual-stream models of SCZ.
Price, Daniel; Burris, Debra; Cloutier, Anna; Thompson, Carol B.; Rilling, James K.; Thompson, Richmond R.
Arginine vasopressin (AVP) and related peptides have diverse effects on social behaviors in vertebrates, sometimes promoting affiliative interactions and sometimes aggressive or antisocial responses. The type of influence, in at least some species, depends on social contexts, including the sex of the individuals in the interaction and/or on the levels of peptide within brain circuits that control the behaviors. To determine if AVP promotes different responses to same- and other-sex faces in men, and if those effects are dose dependent, we measured the effects of two doses of AVP on subjective ratings of male and female faces. We also tested if any influences persist beyond the time of drug delivery. When AVP was administered intranasally on an initial test day, 20 IU was associated with decreased social assessments relative to placebo and 40 IU, and some of the effects persisted beyond the initial drug delivery and appeared to generalize to novel faces on subsequent test days. In single men, those influences were most pronounced, but not exclusive, for male faces, whereas in coupled men they were primarily associated with responses to female faces. Similar influences were not observed if AVP was delivered after placebo on a second test day. In a preliminary analysis, the differences in social assessments observed between men who received 20 and 40 IU, which we suggest primarily reflect lowered social assessments induced by the lower dose, appeared most pronounced in subjects who carry what has been identified as a risk allele for the V1a receptor gene. Together, these results suggest that AVP’s effects on face processing, and possibly other social responses, differ according to dose, depend on relationship status, and may be more prolonged than previously recognized. PMID:29018407
Striemer, Christopher L; Whitwell, Robert L; Goodale, Melvyn A
Previous research suggests that the implicit recognition of emotional expressions may be carried out by pathways that bypass primary visual cortex (V1) and project to the amygdala. Some of the strongest evidence supporting this claim comes from case studies of "affective blindsight" in which patients with V1 damage can correctly guess whether an unseen face was depicting a fearful or happy expression. In the current study, we report a new case of affective blindsight in patient MC who is cortically blind following extensive bilateral lesions to V1, as well as face and object processing regions in her ventral visual stream. Despite her large lesions, MC has preserved motion perception which is related to sparing of the motion sensitive region MT+ in both hemispheres. To examine affective blindsight in MC we asked her to perform gender and emotion discrimination tasks in which she had to guess, using a two-alternative forced-choice procedure, whether the face presented was male or female, happy or fearful, or happy or angry. In addition, we also tested MC in a four-alternative forced-choice target localization task. Results indicated that MC was not able to determine the gender of the faces (53% accuracy), or localize targets in a forced-choice task. However, she was able to determine, at above chance levels, whether the face presented was depicting a happy or fearful (67%, p = .006), or a happy or angry (64%, p = .025) expression. Interestingly, although MC was better than chance at discriminating between emotions in faces when asked to make rapid judgments, her performance fell to chance when she was asked to provide subjective confidence ratings about her performance. These data lend further support to the idea that there is a non-conscious visual pathway that bypasses V1 which is capable of processing affective signals from facial expressions without input from higher-order face and object processing regions in the ventral visual stream. Copyright © 2017
Ho, Tiffany C; Zhang, Shunan; Sacchet, Matthew D; Weng, Helen; Connolly, Colm G; Henje Blom, Eva; Han, Laura K M; Mobayed, Nisreen O; Yang, Tony T
While the extant literature has focused on major depressive disorder (MDD) as being characterized by abnormalities in processing affective stimuli (e.g., facial expressions), little is known regarding which specific aspects of cognition influence the evaluation of affective stimuli, and what are the underlying neural correlates. To investigate these issues, we assessed 26 adolescents diagnosed with MDD and 37 well-matched healthy controls (HCL) who completed an emotion identification task of dynamically morphing faces during functional magnetic resonance imaging (fMRI). We analyzed the behavioral data using a sequential sampling model of response time (RT) commonly used to elucidate aspects of cognition in binary perceptual decision making tasks: the Linear Ballistic Accumulator (LBA) model. Using a hierarchical Bayesian estimation method, we obtained group-level and individual-level estimates of LBA parameters on the facial emotion identification task. While the MDD and HCL groups did not differ in mean RT, accuracy, or group-level estimates of perceptual processing efficiency (i.e., drift rate parameter of the LBA), the MDD group showed significantly reduced responses in left fusiform gyrus compared to the HCL group during the facial emotion identification task. Furthermore, within the MDD group, fMRI signal in the left fusiform gyrus during affective face processing was significantly associated with greater individual-level estimates of perceptual processing efficiency. Our results therefore suggest that affective processing biases in adolescents with MDD are characterized by greater perceptual processing efficiency of affective visual information in sensory brain regions responsible for the early processing of visual information. The theoretical, methodological, and clinical implications of our results are discussed.
Proietti, Valentina; Pisacane, Antonella; Macchi Cassia, Viola
Just like other face dimensions, age influences the way faces are processed by adults as well as by children. However, it remains unclear under what conditions exactly such influence occurs at both ages, in that there is some mixed evidence concerning the presence of a systematic processing advantage for peer faces (own-age bias) across the lifespan. Inconsistency in the results may stem from the fact that the individual’s face representation adapts to represent the most predominant age traits of the faces present in the environment, which is reflective of the individual’s specific living conditions and social experience. In the current study we investigated the processing of younger and older adult faces in two groups of adults (Experiment 1) and two groups of 3-year-old children (Experiment 2) who accumulated different amounts of experience with elderly people. Contact with elderly adults influenced the extent to which both adult and child participants showed greater discrimination abilities and stronger sensitivity to configural/featural cues in younger versus older adult faces, as measured by the size of the inversion effect. In children, the size of the inversion effect for older adult faces was also significantly correlated with the amount of contact with elderly people. These results show that, in both adults and children, visual experience with older adult faces can tune perceptual processing strategies to the point of abolishing the discrimination disadvantage that participants typically manifest for those faces in comparison to younger adult faces. PMID:23460867
Tso, Ivy F; Fang, Yu; Phan, K Luan; Welsh, Robert C; Taylor, Stephan F
The involvement of the gamma-aminobutyric acid (GABA) system in schizophrenia is suggested by postmortem studies and the common use of GABA receptor-potentiating agents in treatment. In a recent study, we used a benzodiazepine challenge to demonstrate abnormal GABAergic function during processing of negative visual stimuli in schizophrenia. This study extended this investigation by mapping GABAergic mechanisms associated with face processing and social appraisal in schizophrenia using a benzodiazepine challenge. Fourteen stable, medicated schizophrenia/schizoaffective patients (SZ) and 13 healthy controls (HC) underwent functional MRI using the blood oxygenation level-dependent (BOLD) technique while they performed the Socio-emotional Preference Task (SePT) on emotional face stimuli ("Do you like this face?"). Participants received single-blinded intravenous saline and lorazepam (LRZ) in two separate sessions separated by 1-3 weeks. Both SZ and HC recruited medial prefrontal cortex/anterior cingulate during the SePT, relative to gender identification. A significant drug by group interaction was observed in the medial occipital cortex, such that SZ showed increased BOLD signal to LRZ challenge, while HC showed an expected decrease of signal; the interaction did not vary by task. The altered BOLD response to LRZ challenge in SZ was significantly correlated with increased negative affect across multiple measures. The altered response to LRZ challenge suggests that abnormal face processing and negative affect in SZ are associated with altered GABAergic function in the visual cortex, underscoring the role of impaired visual processing in socio-emotional deficits in schizophrenia. Copyright © 2015 Elsevier B.V. All rights reserved.
Klein, Fabian; Iffland, Benjamin; Schindler, Sebastian; Wabnitz, Pascal; Neuner, Frank
Recent studies have shown that the perceptual processing of human faces is affected by context information, such as previous experiences and information about the person represented by the face. The present study investigated the impact of verbally presented information about the person that varied with respect to affect (neutral, physically threatening, socially threatening) and reference (self-referred, other-referred) on the processing of faces with an inherently neutral expression. Stimuli were presented in a randomized presentation paradigm. Event-related potential (ERP) analysis demonstrated a modulation of the evoked potentials by reference at the EPN (early posterior negativity) and LPP (late positive potential) stage and an enhancing effect of affective valence on the LPP (700-1000 ms), with socially threatening context information leading to the most pronounced LPP amplitudes. We also found an interaction between reference and valence, with self-related neutral context information leading to a more pronounced LPP than other-related neutral context information. Our results indicate an impact of self-reference on early, presumably automatic processing stages and also a strong impact of valence on later stages. Using a randomized presentation paradigm, this study confirms that context information affects the visual processing of faces, ruling out possible confounding factors such as facial configuration or conditional learning effects.
Noiret, Nicolas; Carvalho, Nicolas; Laurent, Éric; Vulliez, Lauriane; Bennabi, Djamila; Chopard, Gilles; Haffen, Emmanuel; Nicolier, Magali; Monnin, Julie; Vandel, Pierre
Although several reported studies have suggested that younger adults with depression display depression-related biases during the processing of emotional faces, there remains a lack of data concerning these biases in older adults. The aim of our study was to assess scanning behavior during the processing of emotional faces in depressed older adults. Older adults with and without depression viewed happy, neutral or sad portraits while their eye movements were recorded. Depressed older adults spent less time with fewer fixations on emotional features than healthy older adults, but only for sad and neutral portraits, with no significant difference for happy portraits. These results suggest disengagement from sad and neutral faces in depressed older adults, which is not consistent with standard theoretical proposals on congruence biases in depression. Moreover, age-related changes in emotional regulation may explain how depression-related biases are expressed. Our preliminary results suggest that information processing in depression consists of a more complex phenomenon than merely a general searching for mood-congruent stimuli or general disengagement from all kinds of stimuli. These findings underline that care must be taken when evaluating potential variables, such as aging, which interact with depression and selectively influence the choice of relevant stimulus dimensions.
Guo, Kun; Soornack, Yoshi; Settle, Rebecca
Our capability of recognizing facial expressions of emotion under different viewing conditions implies the existence of an invariant expression representation. As natural visual signals are often distorted and our perceptual strategy changes with external noise level, it is essential to understand how expression perception is susceptible to face distortion and whether the same facial cues are used to process high- and low-quality face images. We systematically manipulated face image resolution (experiment 1) and blur (experiment 2), and measured participants' expression categorization accuracy, perceived expression intensity and associated gaze patterns. Our analysis revealed a reasonable tolerance to face distortion in expression perception. Reducing image resolution up to 48 × 64 pixels or increasing image blur up to 15 cycles/image had little impact on expression assessment and associated gaze behaviour. Further distortion led to decreased expression categorization accuracy and intensity rating, increased reaction time and fixation duration, and stronger central fixation bias which was not driven by distortion-induced changes in local image saliency. Interestingly, the observed distortion effects were expression-dependent with less deterioration impact on happy and surprise expressions, suggesting this distortion-invariant facial expression perception might be achieved through the categorical model involving a non-linear configural combination of local facial features. Copyright © 2018 Elsevier Ltd. All rights reserved.
Jhang, Yuna; Franklin, Beau; Ramsdell-Hudock, Heather L.; Oller, D. Kimbrough
Seeking roots of language, we probed infant facial expressions and vocalizations. Both have roles in language, but the voice plays an especially flexible role, expressing a variety of functions and affect conditions with the same vocal categories—a word can be produced with many different affective flavors. This requirement of language is seen in very early infant vocalizations. We examined the extent to which affect is transmitted by early vocal categories termed “protophones” (squeals, vowel-like sounds, and growls) and by their co-occurring facial expressions, and similarly the extent to which vocal type is transmitted by the voice and co-occurring facial expressions. Our coder agreement data suggest infant affect during protophones was most reliably transmitted by the face (judged in video-only), while vocal type was transmitted most reliably by the voice (judged in audio-only). Voice alone transmitted negative affect more reliably than neutral or positive affect, suggesting infant protophones may be used especially to call for attention when the infant is in distress. By contrast, the face alone provided no significant information about protophone categories. Indeed coders in VID could scarcely recognize the difference between silence and voice when coding protophones in VID. The results suggest that partial decoupling of communicative roles for face and voice occurs even in the first months of life. Affect in infancy appears to be transmitted in a way that audio and video aspects are flexibly interwoven, as in mature language. PMID:29423398
Shah, Punit; Bird, Geoffrey; Cook, Richard
Characteristic problems with social interaction have prompted considerable interest in the face processing of individuals with Autism Spectrum Disorder (ASD). Studies suggest that reduced integration of information from disparate facial regions likely contributes to difficulties recognizing static faces in this population. Recent work also indicates that observers with ASD have problems using patterns of facial motion to judge identity and gender, and may be less able to derive global motion percepts. These findings raise the possibility that feature integration deficits also impact the perception of moving faces. To test this hypothesis, we examined whether observers with ASD exhibit susceptibility to a new dynamic face illusion, thought to index integration of moving facial features. When typical observers view eye-opening and -closing in the presence of asynchronous mouth-opening and -closing, the concurrent mouth movements induce a strong illusory slowing of the eye transitions. However, we find that observers with ASD are not susceptible to this illusion, suggestive of weaker integration of cross-feature dynamics. Nevertheless, observers with ASD and typical controls were equally able to detect the physical differences between comparison eye transitions. Importantly, this confirms that observers with ASD were able to fixate the eye-region, indicating that the striking group difference has a perceptual, not attentional origin. The clarity of the present results contrasts starkly with the modest effect sizes and equivocal findings seen throughout the literature on static face perception in ASD. We speculate that differences in the perception of facial motion may be a more reliable feature of this condition. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.
Sullivan, Regina M.; Wilson, Donald A.; Leon, Michael
Acquisition of behavioral conditioned responding and learned odor preferences during olfactory classical conditioning in rat pups requires forward or simultaneous pairings of the conditioned stimulus (CS) and the unconditioned stimulus (US). Other temporal relationships between the CS and US do not usually result in learning. The present study examined the influence of this CS-US relationship upon the neural olfactory bulb modifications that are acquired during early classical conditioning. Wistar rat pups were trained from Postnatal Days (PN) 1-18 with either forward (odor overlapping temporally with reinforcing stroking) or backward (stroking followed by odor) CS-US pairings. On PN 19, pups received either a behavioral odor preference test to the odor CS or an injection of [14C] 2-deoxyglucose (2-DG) and exposure to the odor CS, or olfactory bulb single unit responses were recorded in response to exposure to the odor CS. Only pups that received forward presentations of the CS and US exhibited both a preference for the CS and modified olfactory bulb neural responses to the CS. These results, then, suggest that the modified olfactory bulb neural responses acquired during classical conditioning are guided by the same temporal constraints as those which govern the acquisition of behavioral conditioned responses. PMID:17572798
Nomi, Jason S.; Uddin, Lucina Q.
Autism spectrum disorder (ASD) is characterized by reduced attention to social stimuli including the human face. This hypo-responsiveness to stimuli that are engaging to typically developing individuals may result from dysfunctioning motivation, reward, and attention systems in the brain. Here we review an emerging neuroimaging literature that emphasizes a shift from focusing on hypo-activation of isolated brain regions such as the fusiform gyrus, amygdala, and superior temporal sulcus in ASD to a more holistic approach to understanding face perception as a process supported by distributed cortical and subcortical brain networks. We summarize evidence for atypical activation patterns within brain networks that may contribute to social deficits characteristic of the disorder. We conclude by pointing to gaps in the literature and future directions that will continue to shed light on aspects of face processing in autism that are still under-examined. In particular, we highlight the need for more developmental studies and studies examining ecologically valid and naturalistic social stimuli. PMID:25829246
Wagner, Jennifer B.; Hirsch, Suzanna B.; Vogel-Farley, Vanessa K.; Redcay, Elizabeth; Nelson, Charles A.
Individuals with autism spectrum disorder (ASD) often have difficulty with social-emotional cues. This study examined the neural, behavioral, and autonomic correlates of emotional face processing in adolescents with ASD and typical development (TD) using eye-tracking and event-related potentials (ERPs) across two different paradigms. Scanning of faces was similar across groups in the first task, but the second task found that face-sensitive ERPs varied with emotional expressions only in TD. Further, ASD showed enhanced neural responding to non-social stimuli. In TD only, attention to eyes during eye-tracking related to faster face-sensitive ERPs in a separate task; in ASD, a significant positive association was found between autonomic activity and attention to mouths. Overall, ASD showed an atypical pattern of emotional face processing, with reduced neural differentiation between emotions and a reduced relationship between gaze behavior and neural processing of faces. PMID:22684525
Morales, Guadalupe E.; Lopez, Ernesto O.
Participants with Down syndrome (DS) were required to participate in a face recognition experiment to recognize familiar (DS faces) and unfamiliar emotional faces (non DS faces), by using an affective priming paradigm. Pairs of emotional facial stimuli were presented (one face after another) with a short Stimulus Onset Asynchrony of 300…
Richler, Jennifer J.; Bukach, Cindy M.; Gauthier, Isabel
We explore whether holistic-like effects can be observed for non-face objects in novices as a result of the task context. We measure contextually-induced congruency effects for novel objects (Greebles) in a sequential matching selective attention task (composite task). When format at study was blocked, congruency effects were observed for study-misaligned, but not study-aligned, conditions (Experiment 1). However, congruency effects were observed in all conditions when study formats were randomized (Experiment 2), revealing that the presence of certain trial types (study-misaligned) in an experiment can induce congruency effects. In a dual task, a congruency effect for Greebles was induced in trials where a face was first encoded, only if it was aligned (Experiment 3). Thus, congruency effects can be induced by context that operates at the scale of the entire experiment or within a single trial. Implications for using the composite task to measure holistic processing are discussed. PMID:19304644
Davies-Thompson, Jodie; Johnston, Samantha; Tashakkor, Yashar; Pancaroglu, Raika; Barton, Jason J S
Visual words and faces activate similar networks but with complementary hemispheric asymmetries, faces being lateralized to the right and words to the left. A recent theory proposes that this reflects developmental competition between visual word and face processing. We investigated whether this results in an inverse correlation between the degree of lateralization of visual word and face activation in the fusiform gyri. Twenty-six literate, right-handed healthy adults underwent functional MRI with face and word localizers. We derived lateralization indices for cluster size and peak responses for word and face activity in the left and right fusiform gyri, and correlated these across subjects. A secondary analysis examined all face- and word-selective voxels in the inferior occipitotemporal cortex. No negative correlations were found. There were positive correlations for the peak MR response between word and face activity within the left hemisphere, and between word activity in the left visual word form area and face activity in the right fusiform face area. The face lateralization index was positively rather than negatively correlated with the word index. In summary, we did not find a complementary relationship between visual word and face lateralization across subjects. The significance of the positive correlations is unclear: some may reflect the influence of general factors such as attention, but others may point to other factors that influence lateralization of function. Copyright © 2016 Elsevier B.V. All rights reserved.
Bier, Nathalie; Van Der Linden, Martial; Gagnon, Lise; Desrosiers, Johanne; Adam, Stephane; Louveaux, Stephanie; Saint-Mleux, Julie
This study compared the efficacy of five learning methods in the acquisition of face-name associations in early dementia of the Alzheimer type (AD). The contribution of error production and implicit memory to the efficacy of each method was also examined. Fifteen participants with early AD and 15 matched controls were exposed to five learning methods: spaced retrieval, vanishing cues, errorless learning, and two trial-and-error methods, one with explicit and one with implicit memory task instructions. Under each method, participants had to learn a list of five face-name associations, followed by free recall, cued recall, and recognition. Delayed recall was also assessed. For the AD group, results showed that all methods were effective but there were no significant differences between them. The number of errors produced during the learning phases varied between the five methods but did not influence learning. There were no significant differences between implicit and explicit memory task instructions on test performances. For the control group, there were no differences between the five methods. Finally, no significant correlations were found between the performance of the AD participants in free recall and their cognitive profile, but in general, the best performers had better remaining episodic memory. Case study analyses also showed that spaced retrieval was the method for which the greatest number of participants (four) obtained results as good as the controls. This study suggests that all five methods are effective for new learning of face-name associations in AD. It appears that early AD patients can learn, even in the context of error production and explicit memory conditions.
Cao, Jianqin; Liu, Quanying; Li, Yang; Yang, Jun; Gu, Ruolei; Liang, Jin; Qi, Yanyan; Wu, Haiyan; Liu, Xun
Previous studies of patients with social anxiety have demonstrated abnormal early processing of facial stimuli in social contexts. In other words…