Sample records for face perception task

  1. Holistic face perception is modulated by experience-dependent perceptual grouping.

    PubMed

    Curby, Kim M; Entenman, Robert J; Fleming, Justin T

    2016-07-01

    What role do general-purpose, experience-sensitive perceptual mechanisms play in producing characteristic features of face perception? We previously demonstrated that different-colored, misaligned framing backgrounds, designed to disrupt perceptual grouping of face parts appearing upon them, disrupt holistic face perception. In the current experiments, a similar part-judgment task with composite faces was performed: face parts appeared in either misaligned, different-colored rectangles or aligned, same-colored rectangles. To investigate whether experience can shape impacts of perceptual grouping on holistic face perception, a pre-task fostered the perception of either (a) the misaligned, differently colored rectangle frames as parts of a single, multicolored polygon or (b) the aligned, same-colored rectangle frames as a single square shape. Faces appearing in the misaligned, differently colored rectangles were processed more holistically by those in the polygon-, compared with the square-, pre-task group. Holistic effects for faces appearing in aligned, same-colored rectangles showed the opposite pattern. Experiment 2, which included a pre-task condition fostering the perception of the aligned, same-colored frames as pairs of independent rectangles, provided converging evidence that experience can modulate impacts of perceptual grouping on holistic face perception. These results are surprising given the proposed impenetrability of holistic face perception and provide insights into the elusive mechanisms underlying holistic perception.

  2. Neural architecture underlying classification of face perception paradigms.

    PubMed

    Laird, Angela R; Riedel, Michael C; Sutherland, Matthew T; Eickhoff, Simon B; Ray, Kimberly L; Uecker, Angela M; Fox, P Mickle; Turner, Jessica A; Fox, Peter T

    2015-10-01

    We present a novel strategy for deriving a classification system of functional neuroimaging paradigms that relies on hierarchical clustering of experiments archived in the BrainMap database. The goal of our proof-of-concept application was to examine the underlying neural architecture of the face perception literature from a meta-analytic perspective, as these studies include a wide range of tasks. Task-based results exhibiting similar activation patterns were grouped as similar, while tasks activating different brain networks were classified as functionally distinct. We identified four sub-classes of face tasks: (1) Visuospatial Attention and Visuomotor Coordination to Faces, (2) Perception and Recognition of Faces, (3) Social Processing and Episodic Recall of Faces, and (4) Face Naming and Lexical Retrieval. Interpretation of these sub-classes supports an extension of a well-known model of face perception to include a core system for visual analysis and extended systems for personal information, emotion, and salience processing. Overall, these results demonstrate that a large-scale data mining approach can inform the evolution of theoretical cognitive models by probing the range of behavioral manipulations across experimental tasks. Copyright © 2015 Elsevier Inc. All rights reserved.
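
    As a rough sketch of the clustering strategy described in this record, the snippet below groups experiments by the similarity of their activation patterns and cuts the tree into four sub-classes; the array shape, the correlation distance, and the average linkage are illustrative assumptions on placeholder data, not the authors' BrainMap pipeline.

    ```python
    # Minimal sketch: group neuroimaging experiments by the similarity of their
    # activation patterns, then cut the tree into putative task sub-classes.
    # The data, distance metric, and linkage method are illustrative only.
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(0)
    activation = rng.random((40, 5000))        # placeholder: experiments x voxels

    # Correlation distance: experiments with similar activation topographies end
    # up close together regardless of overall activation strength.
    dist = pdist(activation, metric="correlation")
    tree = linkage(dist, method="average")

    # Cut the dendrogram into four sub-classes, by analogy with the four
    # face-task sub-classes reported in the record above.
    labels = fcluster(tree, t=4, criterion="maxclust")
    print(labels)
    ```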

  3. How does cognitive load influence speech perception? An encoding hypothesis.

    PubMed

    Mitterer, Holger; Mattys, Sven L

    2017-01-01

    Two experiments investigated the conditions under which cognitive load exerts an effect on the acuity of speech perception. These experiments extend earlier research by using a different speech perception task (four-interval oddity task) and by implementing cognitive load through a task often thought to be modular, namely, face processing. In the cognitive-load conditions, participants were required to remember two faces presented before the speech stimuli. In Experiment 1, performance in the speech-perception task under cognitive load was not impaired in comparison to a no-load baseline condition. In Experiment 2, we modified the load condition minimally such that it required encoding of the two faces simultaneously with the speech stimuli. As a reference condition, we also used a visual search task that in earlier experiments had led to poorer speech perception. Both concurrent tasks led to decrements in the speech task. The results suggest that speech perception is affected even by loads thought to be processed modularly, and that, critically, encoding in working memory might be the locus of interference.

  4. Social perception and aging: The relationship between aging and the perception of subtle changes in facial happiness and identity.

    PubMed

    Yang, Tao; Penton, Tegan; Köybaşı, Şerife Leman; Banissy, Michael J

    2017-09-01

    Previous findings suggest that older adults show impairments in the social perception of faces, including the perception of emotion and facial identity. The majority of this work has tended to examine performance on tasks involving young adult faces and prototypical emotions. While useful, this approach can influence performance differences between groups due to perceptual biases and limitations on task performance. Here we sought to examine how typical aging is associated with the perception of subtle changes in facial happiness and facial identity in older adult faces. We developed novel tasks that permitted assessment of facial happiness, facial identity, and non-social perception (object perception) across similar task parameters. We observe that aging is linked with declines in the ability to make fine-grained judgements in the perception of facial happiness and facial identity (from older adult faces), but not for non-social (object) perception. This pattern of results is discussed in relation to mechanisms that may contribute to declines in facial perceptual processing in older adulthood. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  5. Face and body perception in schizophrenia: a configural processing deficit?

    PubMed

    Soria Bauser, Denise; Thoma, Patrizia; Aizenberg, Victoria; Brüne, Martin; Juckel, Georg; Daum, Irene

    2012-01-30

    Face and body perception rely on common processing mechanisms and activate similar but not identical brain networks. Patients with schizophrenia show impaired face perception, and the present study addressed for the first time body perception in this group. Seventeen patients diagnosed with schizophrenia or schizoaffective disorder were compared to 17 healthy controls on standardized tests assessing basic face perception skills (identity discrimination, memory for faces, recognition of facial affect). A matching-to-sample task including emotional and neutral faces, bodies and cars either in an upright or in an inverted position was administered to assess potential category-specific performance deficits and impairments of configural processing. Relative to healthy controls, schizophrenia patients showed poorer performance on the tasks assessing face perception skills. In the matching-to-sample task, they also responded more slowly and less accurately than controls, regardless of the stimulus category. Accuracy analysis showed significant inversion effects for faces and bodies across groups, reflecting configural processing mechanisms; however reaction time analysis indicated evidence of reduced inversion effects regardless of category in schizophrenia patients. The magnitude of the inversion effects was not related to clinical symptoms. Overall, the data point towards reduced configural processing, not only for faces but also for bodies and cars in individuals with schizophrenia. © 2011 Elsevier Ltd. All rights reserved.

  6. Framing faces: Frame alignment impacts holistic face perception.

    PubMed

    Curby, Kim M; Entenman, Robert

    2016-11-01

    Traditional accounts of face perception emphasise the importance of the prototypical configuration of features within faces. However, here we probe influences of more general perceptual grouping mechanisms on holistic face perception. Participants made part-matching judgments about composite faces presented in intact external oval frames or frames made from misaligned oval parts. This manipulation served to disrupt basic perceptual grouping cues that facilitate the grouping of the two face halves together. This manipulation also produced an external face contour like that in the standard misaligned condition used within the classic composite face task. Notably, by introducing a discontinuity in the external contour, grouping of the face halves into a cohesive unit was discouraged, but face configuration was preserved. Conditions where both the face parts and the frames were misaligned together, as in the typical composite task paradigm, or where just the internal face parts were misaligned, were also included. Disrupting only the face frame disrupted holistic face perception as much as disrupting both the frame and the face configuration. However, misaligned face parts presented in aligned frames also incurred a cost to holistic perception. These findings provide support for the contribution of general-purpose perceptual grouping mechanisms to holistic face perception and are presented and discussed in the context of an enhanced object-based selection account of holistic perception.

  7. Collaborative Accounting Problem Solving via Group Support Systems in a Face-to-Face versus Distant Learning Environment.

    ERIC Educational Resources Information Center

    Burke, Jacqueline A.

    2001-01-01

    Accounting students (n=128) used either face-to-face or distant group support systems to complete collaborative tasks. Participation and social presence perceptions were significantly higher face to face. Task difficulty did not affect participation in either environment. (Contains 54 references.)

  8. Inter-hemispheric interaction facilitates face processing.

    PubMed

    Compton, Rebecca J

    2002-01-01

    Many recent studies have revealed that interaction between the left and right cerebral hemispheres can aid in task performance, but these studies have tended to examine perception of simple stimuli such as letters, digits or simple shapes, which may have limited naturalistic validity. The present study extends these prior findings to a more naturalistic face perception task. Matching tasks required subjects to indicate when a target face matched one of two probe faces. Matches could be either across-field, requiring inter-hemispheric interaction, or within-field, not requiring inter-hemispheric interaction. Subjects indicated when faces matched in emotional expression (Experiment 1; n=32) or in character identity (Experiment 2; n=32). In both experiments, across-field performance was significantly better than within-field performance, supporting the primary hypothesis. Further, this advantage was greater for the more difficult character identity task. Results offer qualified support for the hypothesis that inter-hemispheric interaction is especially advantageous as task demands increase.

  9. Why Some Humanoid Faces Are Perceived More Positively Than Others: Effects of Human-Likeness and Task

    PubMed Central

    Rogers, Wendy A.

    2015-01-01

    Ample research in social psychology has highlighted the importance of the human face in human–human interactions. However, there is a less clear understanding of how a humanoid robot's face is perceived by humans. One of the primary goals of this study was to investigate how initial perceptions of robots are influenced by the extent of human-likeness of the robot's face, particularly when the robot is intended to provide assistance with tasks in the home that are traditionally carried out by humans. Moreover, although robots have the potential to help both younger and older adults, there is limited knowledge of whether the two age groups' perceptions differ. In this study, younger (N = 32) and older adults (N = 32) imagined interacting with a robot in four different task contexts and rated robot faces of varying levels of human-likeness. Participants were also interviewed to assess their reasons for particular preferences. This multi-method approach identified patterns of perceptions across different appearances as well as reasons that influence the formation of such perceptions. Overall, the results indicated that people's perceptions of robot faces vary as a function of robot human-likeness. People tended to over-generalize their understanding of humans to build expectations about a human-looking robot's behavior and capabilities. Additionally, preferences for humanoid robots depended on the task although younger and older adults differed in their preferences for certain humanoid appearances. The results of this study have implications both for advancing theoretical understanding of robot perceptions and for creating and applying guidelines for the design of robots. PMID:26294936

  10. Why Some Humanoid Faces Are Perceived More Positively Than Others: Effects of Human-Likeness and Task.

    PubMed

    Prakash, Akanksha; Rogers, Wendy A

    2015-04-01

    Ample research in social psychology has highlighted the importance of the human face in human-human interactions. However, there is a less clear understanding of how a humanoid robot's face is perceived by humans. One of the primary goals of this study was to investigate how initial perceptions of robots are influenced by the extent of human-likeness of the robot's face, particularly when the robot is intended to provide assistance with tasks in the home that are traditionally carried out by humans. Moreover, although robots have the potential to help both younger and older adults, there is limited knowledge of whether the two age groups' perceptions differ. In this study, younger ( N = 32) and older adults ( N = 32) imagined interacting with a robot in four different task contexts and rated robot faces of varying levels of human-likeness. Participants were also interviewed to assess their reasons for particular preferences. This multi-method approach identified patterns of perceptions across different appearances as well as reasons that influence the formation of such perceptions. Overall, the results indicated that people's perceptions of robot faces vary as a function of robot human-likeness. People tended to over-generalize their understanding of humans to build expectations about a human-looking robot's behavior and capabilities. Additionally, preferences for humanoid robots depended on the task although younger and older adults differed in their preferences for certain humanoid appearances. The results of this study have implications both for advancing theoretical understanding of robot perceptions and for creating and applying guidelines for the design of robots.

  11. The automaticity of face perception is influenced by familiarity.

    PubMed

    Yan, Xiaoqian; Young, Andrew W; Andrews, Timothy J

    2017-10-01

    In this study, we explore the automaticity of encoding for different facial characteristics and ask whether it is influenced by face familiarity. We used a matching task in which participants had to report whether the gender, identity, race, or expression of two briefly presented faces was the same or different. The task was made challenging by allowing nonrelevant dimensions to vary across trials. To test for automaticity, we compared performance on trials in which the task instruction was given at the beginning of the trial, with trials in which the task instruction was given at the end of the trial. As a strong criterion for automatic processing, we reasoned that if perception of a given characteristic (gender, race, identity, or emotion) is fully automatic, the timing of the instruction should not influence performance. We compared automaticity for the perception of familiar and unfamiliar faces. Performance with unfamiliar faces was higher for all tasks when the instruction was given at the beginning of the trial. However, we found a significant interaction between instruction and task with familiar faces. Accuracy of gender and identity judgments to familiar faces was the same regardless of whether the instruction was given before or after the trial, suggesting automatic processing of these properties. In contrast, there was an effect of instruction for judgments of expression and race to familiar faces. These results show that familiarity enhances the automatic processing of some types of facial information more than others.

  12. Long-Term Exposure to American and European Movies and Television Series Facilitates Caucasian Face Perception in Young Chinese Watchers.

    PubMed

    Wang, Yamin; Zhou, Lu

    2016-10-01

    Most young Chinese people now learn about Caucasian individuals via media, especially American and European movies and television series (AEMT). The current study aimed to explore whether long-term exposure to AEMT facilitates Caucasian face perception in young Chinese watchers. Before the experiment, we created Chinese, Caucasian, and generic average faces (generic average face was created from both Chinese and Caucasian faces) and tested participants' ability to identify them. In the experiment, we asked AEMT watchers and Chinese movie and television series (CMT) watchers to complete a facial norm detection task. This task was developed recently to detect norms used in facial perception. The results indicated that AEMT watchers coded Caucasian faces relative to a Caucasian face norm better than they did to a generic face norm, whereas no such difference was found among CMT watchers. All watchers coded Chinese faces by referencing a Chinese norm better than they did relative to a generic norm. The results suggested that long-term exposure to AEMT has the same effect as daily other-race face contact in shaping facial perception. © The Author(s) 2016.

  13. Parallel Processing in Face Perception

    ERIC Educational Resources Information Center

    Martens, Ulla; Leuthold, Hartmut; Schweinberger, Stefan R.

    2010-01-01

    The authors examined face perception models with regard to the functional and temporal organization of facial identity and expression analysis. Participants performed a manual 2-choice go/no-go task to classify faces, where response hand depended on facial familiarity (famous vs. unfamiliar) and response execution depended on facial expression…

  14. Atypical Face Perception in Autism: A Point of View?

    PubMed

    Morin, Karine; Guy, Jacalyn; Habak, Claudine; Wilson, Hugh R; Pagani, Linda; Mottron, Laurent; Bertone, Armando

    2015-10-01

    Face perception is the most commonly used visual metric of social perception in autism. However, when found to be atypical, the origin of face perception differences in autism is contentious. One hypothesis proposes that a locally oriented visual analysis, characteristic of individuals with autism, ultimately affects performance on face tasks where a global analysis is optimal. The objective of this study was to evaluate this hypothesis by assessing face identity discrimination with synthetic faces presented with and without changes in viewpoint, with the former condition minimizing access to local face attributes used for identity discrimination. Twenty-eight individuals with autism and 30 neurotypical participants performed a face identity discrimination task. Stimuli were synthetic faces extracted from traditional face photographs in both front and 20° side viewpoints, digitized from 37 points to provide a continuous measure of facial geometry. Face identity discrimination thresholds were obtained using a two-alternative, temporal forced choice match-to-sample paradigm. Analyses revealed an interaction between group and condition, with group differences found only for the viewpoint change condition, where performance in the autism group was decreased compared to that of neurotypical participants. The selective decrease in performance for the viewpoint change condition suggests that face identity discrimination in autism is more difficult when access to local cues is minimized, and/or when dependence on integrative analysis is increased. These results lend support to a perceptual contribution of atypical face perception in autism. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.

  15. Seeing race: N170 responses to race and their relation to automatic racial attitudes and controlled processing.

    PubMed

    Ofan, Renana H; Rubin, Nava; Amodio, David M

    2011-10-01

    We examined the relation between neural activity reflecting early face perception processes and automatic and controlled responses to race. Participants completed a sequential evaluative priming task, in which two-tone images of Black faces, White faces, and cars appeared as primes, followed by target words categorized as pleasant or unpleasant, while electroencephalography was recorded. Half of these participants were alerted that the task assessed racial prejudice and could reveal their personal bias ("alerted" condition). To assess face perception processes, the N170 component of the ERP was examined. For all participants, stronger automatic pro-White bias was associated with larger N170 amplitudes to Black than White faces. For participants in the alerted condition only, larger N170 amplitudes to Black versus White faces were also associated with less controlled processing on the word categorization task. These findings suggest that preexisting racial attitudes affect early face processing and that situational factors moderate the link between early face processing and behavior.

  16. Emotion Perception or Social Cognitive Complexity: What Drives Face Processing Deficits in Autism Spectrum Disorder?

    ERIC Educational Resources Information Center

    Walsh, Jennifer A.; Creighton, Sarah E.; Rutherford, M. D.

    2016-01-01

    Some, but not all, relevant studies have revealed face processing deficits among those with autism spectrum disorder (ASD). In particular, deficits are revealed in face processing tasks that involve emotion perception. The current study examined whether either deficits in processing emotional expression or deficits in processing social cognitive…

  17. Neurons in the Fusiform Gyrus are Fewer and Smaller in Autism

    ERIC Educational Resources Information Center

    van Kooten, Imke A. J.; Palmen, Saskia J. M. C.; von Cappeln, Patricia; Steinbusch, Harry W. M.; Korr, Hubert; Heinsen, Helmut; Hof, Patrick R.; van Engeland, Herman; Schmitz, Christoph

    2008-01-01

    Abnormalities in face perception are a core feature of social disabilities in autism. Recent functional magnetic resonance imaging studies showed that patients with autism could perform face perception tasks. However, the fusiform gyrus (FG) and other cortical regions supporting face processing in controls are hypoactive in patients with autism.…

  18. Is attentional prioritisation of infant faces unique in humans?: Comparative demonstrations by modified dot-probe task in monkeys.

    PubMed

    Koda, Hiroki; Sato, Anna; Kato, Akemi

    2013-09-01

    Humans innately perceive infantile features as cute. The ethologist Konrad Lorenz proposed that the infantile features of mammals and birds, known as the baby schema (kindchenschema), motivate caretaking behaviour. As biologically relevant stimuli, newborns are likely to be processed specially in terms of visual attention, perception, and cognition. Recent demonstrations on human participants have shown visual attentional prioritisation to newborn faces (i.e., newborn faces capture visual attention). Although characteristics equivalent to those found in the faces of human infants are found in nonhuman primates, attentional capture by newborn faces has not been tested in nonhuman primates. We examined whether conspecific newborn faces captured the visual attention of two Japanese monkeys using a target-detection task based on dot-probe tasks commonly used in human visual attention studies. Although visual cues enhanced target detection in subject monkeys, our results, unlike those for humans, showed no evidence of an attentional prioritisation for newborn faces by monkeys. Our demonstrations showed the validity of the dot-probe task for visual attention studies in monkeys and propose a novel approach to bridge the gap between human and nonhuman primate social cognition research. This suggests that attentional capture by newborn faces is not common to macaques, but it is unclear if nursing experiences influence their perception and recognition of infantile appraisal stimuli. We need additional comparative studies to reveal the evolutionary origins of baby-schema perception and recognition. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Altered attentional and perceptual processes as indexed by N170 during gaze perception in schizophrenia: Relationship with perceived threat and paranoid delusions.

    PubMed

    Tso, Ivy F; Calwas, Anita M; Chun, Jinsoo; Mueller, Savanna A; Taylor, Stephan F; Deldin, Patricia J

    2015-08-01

    Using gaze information to orient attention and guide behavior is critical to social adaptation. Previous studies have suggested that abnormal gaze perception in schizophrenia (SCZ) may originate in abnormal early attentional and perceptual processes and may be related to paranoid symptoms. Using event-related brain potentials (ERPs), this study investigated altered early attentional and perceptual processes during gaze perception and their relationship to paranoid delusions in SCZ. Twenty-eight individuals with SCZ or schizoaffective disorder and 32 demographically matched healthy controls (HCs) completed a gaze-discrimination task with face stimuli varying in gaze direction (direct, averted), head orientation (forward, deviated), and emotion (neutral, fearful). ERPs were recorded during the task. Participants rated experienced threat from each face after the task. Participants with SCZ were as accurate as, though slower than, HCs on the task. Participants with SCZ displayed enlarged N170 responses over the left hemisphere to averted gaze presented in fearful relative to neutral faces, indicating a heightened encoding sensitivity to faces signaling external threat. This abnormality was correlated with increased perceived threat and paranoid delusions. Participants with SCZ also showed a reduction of N170 modulation by head orientation (normally increased amplitude to deviated faces relative to forward faces), suggesting less integration of contextual cues of head orientation in gaze perception. The psychophysiological deviations observed during gaze discrimination in SCZ underscore the role of early attentional and perceptual abnormalities in social information processing and paranoid symptoms of SCZ. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  20. The Influence of Flankers on Race Categorization of Faces

    PubMed Central

    Sun, Hsin-Mei; Balas, Benjamin

    2012-01-01

    Context affects multiple cognitive and perceptual processes. In the present study, we asked how the context of a set of faces affected the perception of a target face’s race in two distinct tasks. In Experiments 1 and 2, participants categorized target faces according to perceived racial category (Black or White). In Experiment 1, the target face was presented alone, or with Black or White flanker faces. The orientation of flanker faces was also manipulated to investigate how face inversion effect interacts with the influences of flanker faces on the target face. The results showed that participants were more likely to categorize the target face as White when it was surrounded by inverted White faces (an assimilation effect). Experiment 2 further examined how different aspects of the visual context affect the perception of the target face by manipulating flanker faces’ shape and pigmentation as well as their orientation. The results showed that flanker faces’ shape and pigmentation affected the perception of the target face differently. While shape elicited a contrast effect, pigmentation appeared to be assimilative. These novel findings suggest that the perceived race of a face is modulated by the appearance of other faces and their distinct shape and pigmentation properties. However, the contrast and assimilation effects elicited by flanker faces’ shape and pigmentation may be specific to race categorization, since the same stimuli used in a delayed matching task (Experiment 3) revealed that flanker pigmentation induced a contrast effect on the perception of target pigmentation. PMID:22825930

  21. The perception of positive and negative facial expressions by unilateral stroke patients.

    PubMed

    Abbott, Jacenta D; Wijeratne, Tissa; Hughes, Andrew; Perre, Diana; Lindell, Annukka K

    2014-04-01

    There remains conflict in the literature about the lateralisation of affective face perception. Some studies have reported a right hemisphere advantage irrespective of valence, whereas others have found a left hemisphere advantage for positive, and a right hemisphere advantage for negative, emotion. Differences in injury aetiology and chronicity, proportion of male participants, participant age, and the number of emotions used within a perception task may contribute to these contradictory findings. The present study therefore controlled and/or directly examined the influence of these possible moderators. Right brain-damaged (RBD; n=17), left brain-damaged (LBD; n=17), and healthy control (HC; n=34) participants completed two face perception tasks (identification and discrimination). No group differences in facial expression perception according to valence were found. Across emotions, the RBD group was less accurate than the HC group; however, RBD and LBD group performance did not differ. The lack of difference between RBD and LBD groups indicates that both hemispheres are involved in positive and negative expression perception. The inclusion of older adults and the well-defined chronicity range of the brain-damaged participants may have moderated these findings. Participant sex and general face perception ability did not influence performance. Furthermore, while the RBD group was less accurate than the LBD group when the identification task tested two emotions, performance of the two groups was indistinguishable when the number of emotions increased (four or six). This suggests that task demand moderates a study's ability to find hemispheric differences in the perception of facial emotion. Copyright © 2014 Elsevier Inc. All rights reserved.

  22. Brain connectivity analysis from EEG signals using stable phase-synchronized states during face perception tasks

    NASA Astrophysics Data System (ADS)

    Jamal, Wasifa; Das, Saptarshi; Maharatna, Koushik; Pan, Indranil; Kuyucu, Doga

    2015-09-01

    Degree of phase synchronization between different Electroencephalogram (EEG) channels is known to be the manifestation of the underlying mechanism of information coupling between different brain regions. In this paper, we apply a continuous wavelet transform (CWT) based analysis technique on EEG data, captured during face perception tasks, to explore the temporal evolution of phase synchronization from the onset of a stimulus. Our explorations show that there exists a small set (typically 3-5) of unique synchronized patterns or synchrostates, each of which is stable on the order of milliseconds. Particularly, in the beta (β) band, which has been reported to be associated with visual processing tasks, the number of such stable states has been found to be three consistently. During processing of the stimulus, the switching between these states occurs abruptly but the switching characteristic follows a well-behaved and repeatable sequence. This is observed in a single subject analysis as well as a multiple-subject group-analysis in adults during face perception. We also show that although these patterns remain topographically similar for the general category of face perception task, the sequence of their occurrence and their temporal stability varies markedly between different face perception scenarios (stimuli), indicating different, stimulus-specific dynamical characteristics of information processing. Subsequently, we translated these stable states into brain complex networks and derived informative network measures for characterizing the degree of segregated processing and information integration in those synchrostates, leading to a new methodology for characterizing information processing in the human brain. The proposed methodology of modeling functional brain connectivity through synchrostates may be viewed as a new way of quantitatively characterizing the subject's cognitive ability, the stimuli, and the brain's information integration/segregation capability.
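
    The pipeline sketched in this record (instantaneous phase per channel, time-resolved synchronization matrices, clustering into a few recurring synchrostates) can be roughly illustrated as below. For simplicity the phase is obtained from a band-pass filter plus Hilbert transform rather than the continuous wavelet transform used in the study, and the band edges, channel count, and the choice of three states are assumptions applied to placeholder data.

    ```python
    # Compact sketch of the "synchrostate" idea: instantaneous phases per
    # channel in a band of interest, a pairwise phase-synchronization matrix
    # at every time point, and clustering of those matrices into a small
    # number of recurring states. All parameters are illustrative.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(2)
    sfreq, n_ch, n_samp = 250.0, 16, 1000
    eeg = rng.normal(size=(n_ch, n_samp))          # placeholder single-epoch EEG

    # Beta-band (13-30 Hz) instantaneous phase per channel.
    b, a = butter(4, [13.0, 30.0], btype="bandpass", fs=sfreq)
    phase = np.angle(hilbert(filtfilt(b, a, eeg, axis=-1), axis=-1))

    # Instantaneous synchronization matrix at each sample:
    # cosine of the pairwise phase difference (1 = perfectly in phase).
    sync = np.cos(phase[:, None, :] - phase[None, :, :])   # (n_ch, n_ch, n_samp)

    # Cluster the time-resolved matrices into k recurring synchrostates.
    X = sync.reshape(n_ch * n_ch, n_samp).T                 # samples x features
    states = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    print(np.bincount(states))                              # samples per state
    ```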

  23. Social cognition and neural substrates of face perception: implications for neurodevelopmental and neuropsychiatric disorders.

    PubMed

    Lazar, Steven M; Evans, David W; Myers, Scott M; Moreno-De Luca, Andres; Moore, Gregory J

    2014-04-15

    Social cognition is an important aspect of social behavior in humans. Social cognitive deficits are associated with neurodevelopmental and neuropsychiatric disorders. In this study, we examined the neural substrates of social cognition and face processing in a group of healthy young adults. Fifty-seven undergraduates completed a battery of social cognition tasks and were assessed with electroencephalography (EEG) during a face-perception task. A subset (N=22) were administered a face-perception task during functional magnetic resonance imaging. Variance in the N170 EEG component was predicted by social attribution performance and by a quantitative measure of empathy. Neurally, face processing was more bilateral in females than in males. Variance in fMRI voxel count in the face-sensitive fusiform gyrus was predicted by quantitative measures of social behavior, including the Social Responsiveness Scale (SRS) and the Empathizing Quotient. When measured as a quantitative trait, social behaviors in typical and pathological populations share common neural pathways. The results highlight the importance of viewing neurodevelopmental and neuropsychiatric disorders as spectrum phenomena that may be informed by studies of the normal distribution of relevant traits in the general population. Copyright © 2014 Elsevier B.V. All rights reserved.

  24. Losing face: impaired discrimination of featural and configural information in the mouth region of an inverted face.

    PubMed

    Tanaka, James W; Kaiser, Martha D; Hagen, Simen; Pierce, Lara J

    2014-05-01

    Given that all faces share the same set of features-two eyes, a nose, and a mouth-that are arranged in similar configuration, recognition of a specific face must depend on our ability to discern subtle differences in its featural and configural properties. An enduring question in the face-processing literature is whether featural or configural information plays a larger role in the recognition process. To address this question, the face dimensions task was designed, in which the featural and configural properties in the upper (eye) and lower (mouth) regions of a face were parametrically and independently manipulated. In a same-different task, two faces were sequentially presented and tested in their upright or in their inverted orientation. Inversion disrupted the perception of featural size (Exp. 1), featural shape (Exp. 2), and configural changes in the mouth region, but it had relatively little effect on the discrimination of featural size and shape and configural differences in the eye region. Inversion had little effect on the perception of information in the top and bottom halves of houses (Exp. 3), suggesting that the lower-half impairment was specific to faces. Spatial cueing to the mouth region eliminated the inversion effect (Exp. 4), suggesting that participants have a bias to attend to the eye region of an inverted face. The collective findings from these experiments suggest that inversion does not differentially impair featural or configural face perceptions, but rather impairs the perception of information in the mouth region of the face.

  25. Plastic reorganization of neural systems for perception of others in the congenitally blind.

    PubMed

    Fairhall, S L; Porter, K B; Bellucci, C; Mazzetti, M; Cipolli, C; Gobbini, M I

    2017-09-01

    Recent evidence suggests that the function of the core system for face perception might extend beyond visual face perception to a broader role in person perception. To critically test this broader role of the core face system in person perception, we examined the role of the core system during the perception of others in 7 congenitally blind individuals and 15 sighted subjects by measuring their neural responses using fMRI while they listened to voices and performed identity and emotion recognition tasks. We hypothesised that in people who have had no visual experience of faces, core face-system areas may assume a role in the perception of others via voices. Results showed that emotions conveyed by voices can be decoded in homologues of the core face system only in the blind. Moreover, there was a specific enhancement of response to verbal as compared to non-verbal stimuli in bilateral fusiform face areas and the right posterior superior temporal sulcus, showing that the core system also assumes some language-related functions in the blind. These results indicate that, in individuals with no history of visual experience, areas of the core system for face perception may assume a role in aspects of voice perception that are relevant to social cognition and perception of others' emotions. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.

  26. Individual differences in perceiving and recognizing faces: one element of social cognition.

    PubMed

    Wilhelm, Oliver; Herzmann, Grit; Kunina, Olga; Danthiir, Vanessa; Schacht, Annekathrin; Sommer, Werner

    2010-09-01

    Recognizing faces swiftly and accurately is of paramount importance to humans as a social species. Individual differences in the ability to perform these tasks may therefore reflect important aspects of social or emotional intelligence. Although functional models of face cognition based on group and single case studies postulate multiple component processes, little is known about the ability structure underlying individual differences in face cognition. In 2 large individual differences experiments (N = 151 and N = 209), a broad variety of face-cognition tasks were tested and the component abilities of face cognition (face perception, face memory, and the speed of face cognition) were identified and then replicated. Experiment 2 also showed that the 3 face-cognition abilities are clearly distinct from immediate and delayed memory, mental speed, general cognitive ability, and object cognition. These results converge with functional and neuroanatomical models of face cognition by demonstrating the difference between face perception and face memory. The results also underline the importance of distinguishing between speed and accuracy of face cognition. Together our results provide a first step toward establishing face-processing abilities as an independent ability reflecting elements of social intelligence. (PsycINFO Database Record (c) 2010 APA, all rights reserved).

  27. Global processing in amblyopia: a review

    PubMed Central

    Hamm, Lisa M.; Black, Joanna; Dai, Shuan; Thompson, Benjamin

    2014-01-01

    Amblyopia is a neurodevelopmental disorder of the visual system that is associated with disrupted binocular vision during early childhood. There is evidence that the effects of amblyopia extend beyond the primary visual cortex to regions of the dorsal and ventral extra-striate visual cortex involved in visual integration. Here, we review the current literature on global processing deficits in observers with either strabismic, anisometropic, or deprivation amblyopia. A range of global processing tasks have been used to investigate the extent of the cortical deficit in amblyopia including: global motion perception, global form perception, face perception, and biological motion. These tasks appear to be differentially affected by amblyopia. In general, observers with unilateral amblyopia appear to show deficits for local spatial processing and global tasks that require the segregation of signal from noise. In bilateral cases, the global processing deficits are exaggerated, and appear to extend to specialized perceptual systems such as those involved in face processing. PMID:24987383

  28. Enhanced Visual Short-Term Memory for Angry Faces

    ERIC Educational Resources Information Center

    Jackson, Margaret C.; Wu, Chia-Yun; Linden, David E. J.; Raymond, Jane E.

    2009-01-01

    Although some views of face perception posit independent processing of face identity and expression, recent studies suggest interactive processing of these 2 domains. The authors examined expression-identity interactions in visual short-term memory (VSTM) by assessing recognition performance in a VSTM task in which face identity was relevant and…

  29. Priming Facial Gender and Emotional Valence: The Influence of Spatial Frequency on Face Perception in ASD

    ERIC Educational Resources Information Center

    Vanmarcke, Steven; Wagemans, Johan

    2017-01-01

    Adolescents with and without autism spectrum disorder (ASD) performed two priming experiments in which they implicitly processed a prime stimulus, containing high and/or low spatial frequency information, and then explicitly categorized a target face either as male/female (gender task) or as positive/negative (Valence task). Adolescents with ASD…

  30. Integration of internal and external facial features in 8- to 10-year-old children and adults.

    PubMed

    Meinhardt-Injac, Bozana; Persike, Malte; Meinhardt, Günter

    2014-06-01

    Investigation of whole-part and composite effects in 4- to 6-year-old children gave rise to claims that face perception is fully mature within the first decade of life (Crookes & McKone, 2009). However, only internal features were tested, and the role of external features was not addressed, although external features are highly relevant for holistic face perception (Sinha & Poggio, 1996; Axelrod & Yovel, 2010, 2011). In this study, 8- to 10-year-old children and adults performed a same-different matching task with faces and watches. In this task participants attended to either internal or external features. Holistic face perception was tested using a congruency paradigm, in which face and non-face stimuli either agreed or disagreed in both features (congruent contexts) or just in the attended ones (incongruent contexts). In both age groups, pronounced context congruency and inversion effects were found for faces, but not for watches. These findings indicate holistic feature integration for faces. While inversion effects were highly similar in both age groups, context congruency effects were stronger for children. Moreover, children's face matching performance was generally better when attending to external compared to internal features. Adults tended to perform better when attending to internal features. Our results indicate that both adults and 8- to 10-year-old children integrate external and internal facial features into holistic face representations. However, in children's face representations external features are much more relevant. These findings suggest that face perception is holistic but still not adult-like at the end of the first decade of life. Copyright © 2014 Elsevier B.V. All rights reserved.

  31. Reduced beta connectivity during emotional face processing in adolescents with autism.

    PubMed

    Leung, Rachel C; Ye, Annette X; Wong, Simeon M; Taylor, Margot J; Doesburg, Sam M

    2014-01-01

    Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by impairments in social cognition. The biological basis of deficits in social cognition in ASD, and their difficulty in processing emotional face information in particular, remains unclear. Atypical communication within and between brain regions has been reported in ASD. Interregional phase-locking is a neurophysiological mechanism mediating communication among brain areas and is understood to support cognitive functions. In the present study we investigated interregional magnetoencephalographic phase synchronization during the perception of emotional faces in adolescents with ASD. A total of 22 adolescents with ASD (18 males, mean age =14.2 ± 1.15 years, 22 right-handed) with mild to no cognitive delay and 17 healthy controls (14 males, mean age =14.4 ± 0.33 years, 16 right-handed) performed an implicit emotional processing task requiring perception of happy, angry and neutral faces while we recorded neuromagnetic signals. The faces were presented rapidly (80 ms duration) to the left or right of a central fixation cross and participants responded to a scrambled pattern that was presented concurrently on the opposite side of the fixation point. Task-dependent interregional phase-locking was calculated among source-resolved brain regions. Task-dependent increases in interregional beta synchronization were observed. Beta-band interregional phase-locking in adolescents with ASD was reduced, relative to controls, during the perception of angry faces in a distributed network involving the right fusiform gyrus and insula. No significant group differences were found for happy or neutral faces, or other analyzed frequency ranges. Significant reductions in task-dependent beta connectivity strength, clustering and eigenvector centrality (all P <0.001) in the right insula were found in adolescents with ASD, relative to controls. Reduced beta synchronization may reflect inadequate recruitment of task-relevant networks during emotional face processing in ASD. The right insula, specifically, was a hub of reduced functional connectivity and may play a prominent role in the inability to effectively extract emotional information from faces. These findings suggest that functional disconnection in brain networks mediating emotional processes may contribute to deficits in social cognition in this population.
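
    A rough sketch of the kind of interregional phase-locking and graph analysis mentioned in this record follows. It is a generic illustration on synthetic epochs, not the authors' MEG pipeline; the beta-band edges, the edge threshold, and the node standing in for a region of interest such as the right insula are assumptions.

    ```python
    # Rough sketch: trial-wise phase-locking values (PLV) between channel pairs,
    # then simple graph measures on the resulting connectivity matrix.
    # All data and parameters are illustrative placeholders.
    import numpy as np
    import networkx as nx
    from scipy.signal import butter, filtfilt, hilbert

    def plv_matrix(trials, fs, band=(13.0, 30.0)):
        """PLV between all channel pairs; trials is (n_trials, n_channels, n_samples)."""
        b, a = butter(4, band, btype="bandpass", fs=fs)
        phase = np.angle(hilbert(filtfilt(b, a, trials, axis=-1), axis=-1))
        # Consistency of the phase difference across trials, then mean over time.
        diff = phase[:, :, None, :] - phase[:, None, :, :]
        return np.abs(np.exp(1j * diff).mean(axis=0)).mean(axis=-1)

    rng = np.random.default_rng(3)
    trials = rng.normal(size=(40, 20, 400))    # fake epochs: trials x channels x samples
    plv = plv_matrix(trials, fs=250.0)

    # Threshold into a weighted graph and compute hub-style measures; node 0
    # stands in for a region of interest such as the right insula.
    adj = plv * (plv > 0.1)
    np.fill_diagonal(adj, 0.0)                 # drop self-connections
    G = nx.from_numpy_array(adj)
    print(nx.average_clustering(G, weight="weight"))
    print(nx.eigenvector_centrality_numpy(G, weight="weight")[0])
    ```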

  32. Is fear perception special? Evidence at the level of decision-making and subjective confidence.

    PubMed

    Koizumi, Ai; Mobbs, Dean; Lau, Hakwan

    2016-11-01

    Fearful faces are believed to be prioritized in visual perception. However, it is unclear whether the processing of low-level facial features alone can facilitate such prioritization or whether higher-level mechanisms also contribute. We examined potential biases for fearful face perception at the levels of perceptual decision-making and perceptual confidence. We controlled for lower-level visual processing capacity by titrating luminance contrasts of backward masks, and the emotional intensity of fearful, angry and happy faces. Under these conditions, participants showed liberal biases in perceiving a fearful face, in both detection and discrimination tasks. This effect was stronger among individuals with reduced density in dorsolateral prefrontal cortex, a region linked to perceptual decision-making. Moreover, participants reported higher confidence when they accurately perceived a fearful face, suggesting that fearful faces may have privileged access to consciousness. Together, the results suggest that mechanisms in the prefrontal cortex contribute to making fearful face perception special. © The Author (2016). Published by Oxford University Press.

  33. Beyond attentional bias: a perceptual bias in a dot-probe task.

    PubMed

    Bocanegra, Bruno R; Huijding, Jorg; Zeelenberg, René

    2012-12-01

    Previous dot-probe studies indicate that threat-related face cues induce a bias in spatial attention. Independently of spatial attention, a recent psychophysical study suggests that a bilateral fearful face cue improves low spatial-frequency perception (LSF) and impairs high spatial-frequency perception (HSF). Here, we combine these separate lines of research within a single dot-probe paradigm. We found that a bilateral fearful face cue, compared with a bilateral neutral face cue, speeded up responses to LSF targets and slowed down responses to HSF targets. This finding is important, as it shows that emotional cues in dot-probe tasks not only bias where information is preferentially processed (i.e., an attentional bias in spatial location), but also bias what type of information is preferentially processed (i.e., a perceptual bias in spatial frequency). PsycINFO Database Record (c) 2012 APA, all rights reserved.

  34. Holistic processing of words modulated by reading experience.

    PubMed

    Wong, Alan C-N; Bukach, Cindy M; Yuen, Crystal; Yang, Lizhuang; Leung, Shirley; Greenspon, Emma

    2011-01-01

    Perceptual expertise has been studied intensively with faces and object categories involving detailed individuation. A common finding is that experience in fulfilling the task demand of fine, subordinate-level discrimination between highly similar instances is associated with the development of holistic processing. This study examines whether holistic processing is also engaged by expert word recognition, which is thought to involve coarser, basic-level processing that is more part-based. We adopted a paradigm widely used for faces--the composite task, and found clear evidence of holistic processing for English words. A second experiment further showed that holistic processing for words was sensitive to the amount of experience with the language concerned (native vs. second-language readers) and with the specific stimuli (words vs. pseudowords). The adoption of a paradigm from the face perception literature to the study of expert word perception is important for further comparison between perceptual expertise with words and face-like expertise.

  35. The Occipital Face Area Is Causally Involved in Facial Viewpoint Perception

    PubMed Central

    Poltoratski, Sonia; König, Peter; Blake, Randolph; Tong, Frank; Ling, Sam

    2015-01-01

    Humans reliably recognize faces across a range of viewpoints, but the neural substrates supporting this ability remain unclear. Recent work suggests that neural selectivity to mirror-symmetric viewpoints of faces, found across a large network of visual areas, may constitute a key computational step in achieving full viewpoint invariance. In this study, we used repetitive transcranial magnetic stimulation (rTMS) to test the hypothesis that the occipital face area (OFA), putatively a key node in the face network, plays a causal role in face viewpoint symmetry perception. Each participant underwent both offline rTMS to the right OFA and sham stimulation, preceding blocks of behavioral trials. After each stimulation period, the participant performed one of two behavioral tasks involving presentation of faces in the peripheral visual field: (1) judging the viewpoint symmetry; or (2) judging the angular rotation. rTMS applied to the right OFA significantly impaired performance in both tasks when stimuli were presented in the contralateral, left visual field. Interestingly, however, rTMS had a differential effect on the two tasks performed ipsilaterally. Although viewpoint symmetry judgments were significantly disrupted, we observed no effect on the angle judgment task. This interaction, caused by ipsilateral rTMS, provides support for models emphasizing the role of interhemispheric crosstalk in the formation of viewpoint-invariant face perception. SIGNIFICANCE STATEMENT: Faces are among the most salient objects we encounter during our everyday activities. Moreover, we are remarkably adept at identifying people at a glance, despite the diversity of viewpoints during our social encounters. Here, we investigate the cortical mechanisms underlying this ability by focusing on effects of viewpoint symmetry, i.e., the invariance of neural responses to mirror-symmetric facial viewpoints. We did this by temporarily disrupting neural processing in the occipital face area (OFA) using transcranial magnetic stimulation. Our results demonstrate that the OFA causally contributes to judgments of facial viewpoints and suggest that effects of viewpoint symmetry, previously observed using fMRI, arise from an interhemispheric integration of visual information even when only one hemisphere receives direct visual stimulation. PMID:26674865

  36. The Occipital Face Area Is Causally Involved in Facial Viewpoint Perception.

    PubMed

    Kietzmann, Tim C; Poltoratski, Sonia; König, Peter; Blake, Randolph; Tong, Frank; Ling, Sam

    2015-12-16

    Humans reliably recognize faces across a range of viewpoints, but the neural substrates supporting this ability remain unclear. Recent work suggests that neural selectivity to mirror-symmetric viewpoints of faces, found across a large network of visual areas, may constitute a key computational step in achieving full viewpoint invariance. In this study, we used repetitive transcranial magnetic stimulation (rTMS) to test the hypothesis that the occipital face area (OFA), putatively a key node in the face network, plays a causal role in face viewpoint symmetry perception. Each participant underwent both offline rTMS to the right OFA and sham stimulation, preceding blocks of behavioral trials. After each stimulation period, the participant performed one of two behavioral tasks involving presentation of faces in the peripheral visual field: (1) judging the viewpoint symmetry; or (2) judging the angular rotation. rTMS applied to the right OFA significantly impaired performance in both tasks when stimuli were presented in the contralateral, left visual field. Interestingly, however, rTMS had a differential effect on the two tasks performed ipsilaterally. Although viewpoint symmetry judgments were significantly disrupted, we observed no effect on the angle judgment task. This interaction, caused by ipsilateral rTMS, provides support for models emphasizing the role of interhemispheric crosstalk in the formation of viewpoint-invariant face perception. Faces are among the most salient objects we encounter during our everyday activities. Moreover, we are remarkably adept at identifying people at a glance, despite the diversity of viewpoints during our social encounters. Here, we investigate the cortical mechanisms underlying this ability by focusing on effects of viewpoint symmetry, i.e., the invariance of neural responses to mirror-symmetric facial viewpoints. We did this by temporarily disrupting neural processing in the occipital face area (OFA) using transcranial magnetic stimulation. Our results demonstrate that the OFA causally contributes to judgments of facial viewpoints and suggest that effects of viewpoint symmetry, previously observed using fMRI, arise from an interhemispheric integration of visual information even when only one hemisphere receives direct visual stimulation. Copyright © 2015 the authors.

  37. Seeing is not stereotyping: the functional independence of categorization and stereotype activation

    PubMed Central

    Tomelleri, Silvia

    2017-01-01

    Social categorization has been viewed as necessarily resulting in stereotyping, yet extant research suggests the two processes are differentially sensitive to task manipulations. Here, we simultaneously test the degree to which race perception and stereotyping are conditionally automatic. Participants performed a sequential priming task while either explicitly attending to the race of face primes or directing attention away from their semantic nature. We find a dissociation between the perceptual encoding of race and subsequent activation of associated stereotypes, with race perception occurring in both task conditions, but implicit stereotyping occurring only when attention is directed to the race of the face primes. These results support a clear conceptual distinction between categorization and stereotyping and show that the encoding of racial category need not result in stereotype activation. PMID:28338829

  38. Acute tryptophan depletion attenuates conscious appraisal of social emotional signals in healthy female volunteers.

    PubMed

    Beacher, Felix D C C; Gray, Marcus A; Minati, Ludovico; Whale, Richard; Harrison, Neil A; Critchley, Hugo D

    2011-02-01

    Acute tryptophan depletion (ATD) decreases levels of central serotonin. ATD thus enables the cognitive effects of serotonin to be studied, with implications for the understanding of psychiatric conditions, including depression. Our objective was to determine the role of serotonin in conscious (explicit) and unconscious/incidental processing of emotional information. A randomized, double-blind, cross-over design was used with 15 healthy female participants. Subjective mood was recorded at baseline and after 4 h, when participants performed an explicit emotional face processing task, and a task eliciting unconscious processing of emotionally aversive and neutral images presented subliminally using backward masking. ATD was associated with a robust reduction in plasma tryptophan at 4 h but had no effect on mood or autonomic physiology. ATD was associated with significantly lower attractiveness ratings for happy faces and attenuation of intensity/arousal ratings of angry faces. ATD also reduced overall reaction times on the unconscious perception task, but there was no interaction with emotional content of masked stimuli. ATD did not affect breakthrough perception (accuracy in identification) of masked images. ATD attenuates the attractiveness of positive faces and the negative intensity of threatening faces, suggesting that serotonin contributes specifically to the appraisal of the social salience of both positive and negative salient social emotional cues. We found no evidence that serotonin affects unconscious processing of negative emotional stimuli. These novel findings implicate serotonin in conscious aspects of active social and behavioural engagement and extend knowledge regarding the effects of ATD on emotional perception.

  19. Content-specific activational effects of estrogen on working memory performance.

    PubMed

    Vranić, Andrea; Hromatko, Ivana

    2008-07-01

    The authors explored the influence of task content and the menstrual cycle phase on working memory (WM) performance. They addressed the content specificity of WM in the framework of evolutionary psychology, proposing a hormone-mediated adaptive design governing face perception. The authors tested 2 groups of healthy young women (n = 66 women with regular menstrual cycle, n = 27 oral contraceptive users) on a WM task with adult male or infant face photographs. Analyses of variance showed significant interaction between task content and estrogen level. Women were more efficient in solving the male faces task during high-estrogen phase of the cycle than during low-estrogen phase. No differences were found in the efficacy of solving the infant faces task between different phases of the cycle. Results suggest content-specific activational effects of estrogen on the WM performance and are consistent with the notion of a hormonal mechanism underlying adaptive shifts in cognition related to mating motivation.

  20. Following the time course of face gender and expression processing: a task-dependent ERP study.

    PubMed

    Valdés-Conroy, Berenice; Aguado, Luis; Fernández-Cahill, María; Romero-Ferreiro, Verónica; Diéguez-Risco, Teresa

    2014-05-01

    The effects of task demands and the interaction between gender and expression in face perception were studied using event-related potentials (ERPs). Participants performed three different tasks with male and female faces that were emotionally inexpressive or that showed happy or angry expressions. In two of the tasks (gender and expression categorization) facial properties were task-relevant while in a third task (symbol discrimination) facial information was irrelevant. Effects of expression were observed on the visual P100 component under all task conditions, suggesting the operation of an automatic process that is not influenced by task demands. The earliest interaction between expression and gender was observed later in the face-sensitive N170 component. This component showed differential modulations by specific combinations of gender and expression (e.g., angry male vs. angry female faces). Main effects of expression and task were observed in a later occipito-temporal component peaking around 230 ms post-stimulus onset (EPN or early posterior negativity). Less positive amplitudes in the presence of angry faces and during performance of the gender and expression tasks were observed. Finally, task demands also modulated a positive component peaking around 400 ms (LPC, or late positive complex) that showed enhanced amplitude for the gender task. The pattern of results obtained here adds new evidence about the sequence of operations involved in face processing and the interaction of facial properties (gender and expression) in response to different task demands. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Visual short-term memory load modulates the early attention and perception of task-irrelevant emotional faces

    PubMed Central

    Yang, Ping; Wang, Min; Jin, Zhenlan; Li, Ling

    2015-01-01

    The ability to focus on task-relevant information, while suppressing distraction, is critical for human cognition and behavior. Using a delayed-match-to-sample (DMS) task, we investigated the effects of emotional face distractors (positive, negative, and neutral faces) on early and late phases of visual short-term memory (VSTM) maintenance intervals, using low and high VSTM loads. Behavioral results showed decreased accuracy and delayed reaction times (RTs) for high vs. low VSTM load. Event-related potentials (ERPs) showed enhanced frontal N1 and occipital P1 amplitudes for negative faces vs. neutral or positive faces, implying rapid attentional alerting effects and early perceptual processing of negative distractors. However, high VSTM load appeared to inhibit face processing in general, showing decreased N1 amplitudes and delayed P1 latencies. An inverse correlation between the N1 activation difference (high-load minus low-load) and RT costs (high-load minus low-load) was found at left frontal areas when viewing negative distractors, suggesting that the greater the inhibition, the lower the RT cost for negative faces. No emotional interference effect was found in the late VSTM-related parietal P300, frontal positive slow wave (PSW) and occipital negative slow wave (NSW) components. In general, our findings suggest that the VSTM load modulates the early attention and perception of emotional distractors. PMID:26388763
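
    The brain-behavior relationship reported above (the inverse correlation between the load-related N1 difference and the RT cost) amounts to correlating two difference scores across participants. The sketch below illustrates that computation in Python with entirely hypothetical values; the variable names and numbers are illustrative and are not taken from the study.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-participant data, one entry per participant: frontal N1
# amplitude (microvolts) and reaction time (ms) for negative-face distractors
# under low and high visual short-term memory (VSTM) load.
n1_low  = np.array([-3.1, -2.8, -4.0, -3.5, -2.2, -3.9, -3.0, -2.6])
n1_high = np.array([-2.0, -2.1, -2.9, -2.4, -1.8, -2.7, -2.3, -1.9])
rt_low  = np.array([612., 598., 640., 625., 580., 633., 610., 595.])
rt_high = np.array([690., 652., 700., 688., 642., 705., 671., 655.])

# Difference scores, high-load minus low-load, as described in the abstract.
n1_diff = n1_high - n1_low   # N1 activation difference
rt_cost = rt_high - rt_low   # RT cost of the higher load

# Pearson correlation between the N1 difference and the RT cost.
r, p = pearsonr(n1_diff, rt_cost)
print(f"r = {r:.2f}, p = {p:.3f}")
```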

  2. Infant Face Preferences after Binocular Visual Deprivation

    ERIC Educational Resources Information Center

    Mondloch, Catherine J.; Lewis, Terri L.; Levin, Alex V.; Maurer, Daphne

    2013-01-01

    Early visual deprivation impairs some, but not all, aspects of face perception. We investigated the possible developmental roots of later abnormalities by using a face detection task to test infants treated for bilateral congenital cataract within 1 hour of their first focused visual input. The seven patients were between 5 and 12 weeks old…

  3. Seeing is not stereotyping: the functional independence of categorization and stereotype activation.

    PubMed

    Ito, Tiffany A; Tomelleri, Silvia

    2017-05-01

    Social categorization has been viewed as necessarily resulting in stereotyping, yet extant research suggests the two processes are differentially sensitive to task manipulations. Here, we simultaneously test the degree to which race perception and stereotyping are conditionally automatic. Participants performed a sequential priming task while either explicitly attending to the race of face primes or directing attention away from their semantic nature. We find a dissociation between the perceptual encoding of race and subsequent activation of associated stereotypes, with race perception occurring in both task conditions, but implicit stereotyping occurring only when attention is directed to the race of the face primes. These results support a clear conceptual distinction between categorization and stereotyping and show that the encoding of racial category need not result in stereotype activation. © The Author (2017). Published by Oxford University Press.
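
    The dissociation reported here rests on a standard sequential-priming measure: stereotype activation is inferred from faster responses to stereotype-congruent than incongruent prime-target pairs, computed separately for each attention condition. The snippet below is a generic illustration of that difference score with made-up numbers and column names; it is not the authors' analysis code.

```python
import pandas as pd

# Hypothetical trial-level data: task condition (attend to race vs. attend
# elsewhere), whether the target was stereotype-congruent with the face prime,
# and reaction time in ms.
trials = pd.DataFrame({
    "condition":  ["race", "race", "race", "race", "other", "other", "other", "other"],
    "congruence": ["congruent", "incongruent"] * 4,
    "rt":         [540, 575, 532, 569, 551, 553, 548, 552],
})

# Mean RT per condition x congruence cell.
cell_means = trials.groupby(["condition", "congruence"])["rt"].mean().unstack()

# Priming (stereotype activation) effect: incongruent minus congruent RT.
priming = cell_means["incongruent"] - cell_means["congruent"]
print(priming)  # a positive value indicates stereotype activation in that task
```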

  4. Social categories shape the neural representation of emotion: evidence from a visual face adaptation task

    PubMed Central

    Otten, Marte; Banaji, Mahzarin R.

    2012-01-01

    A number of recent behavioral studies have shown that emotional expressions are differently perceived depending on the race of a face, and that perception of race cues is influenced by emotional expressions. However, neural processes related to the perception of invariant cues that indicate the identity of a face (such as race) are often described to proceed independently of processes related to the perception of cues that can vary over time (such as emotion). Using a visual face adaptation paradigm, we tested whether these behavioral interactions between emotion and race also reflect interdependent neural representation of emotion and race. We compared visual emotion aftereffects when the adapting face and ambiguous test face differed in race or not. Emotion aftereffects were much smaller in different race (DR) trials than same race (SR) trials, indicating that the neural representation of a facial expression is significantly different depending on whether the emotional face is black or white. It thus seems that invariable cues such as race interact with variable face cues such as emotion not just at a response level, but also at the level of perception and neural representation. PMID:22403531

  5. Both physical exercise and progressive muscle relaxation reduce the facing-the-viewer bias in biological motion perception.

    PubMed

    Heenan, Adam; Troje, Nikolaus F

    2014-01-01

    Biological motion stimuli, such as orthographically projected stick figure walkers, are ambiguous about their orientation in depth. The projection of a stick figure walker oriented towards the viewer, therefore, is the same as its projection when oriented away. Even though such figures are depth-ambiguous, however, observers tend to interpret them as facing towards them more often than facing away. Some have speculated that this facing-the-viewer bias may exist for sociobiological reasons: Mistaking another human as retreating when they are actually approaching could have more severe consequences than the opposite error. Implied in this hypothesis is that the facing-towards percept of biological motion stimuli is potentially more threatening. Measures of anxiety and the facing-the-viewer bias should therefore be related, as researchers have consistently found that anxious individuals display an attentional bias towards more threatening stimuli. The goal of this study was to assess whether physical exercise (Experiment 1) or an anxiety induction/reduction task (Experiment 2) would significantly affect facing-the-viewer biases. We hypothesized that both physical exercise and progressive muscle relaxation would decrease facing-the-viewer biases for full stick figure walkers, but not for bottom- or top-half-only human stimuli, as these carry less sociobiological relevance. On the other hand, we expected that the anxiety induction task (Experiment 2) would increase facing-the-viewer biases for full stick figure walkers only. In both experiments, participants completed anxiety questionnaires, exercised on a treadmill (Experiment 1) or performed an anxiety induction/reduction task (Experiment 2), and then immediately completed a perceptual task that allowed us to assess their facing-the-viewer bias. As hypothesized, we found that physical exercise and progressive muscle relaxation reduced facing-the-viewer biases for full stick figure walkers only. Our results provide further support that the facing-the-viewer bias for biological motion stimuli is related to the sociobiological relevance of such stimuli.

  6. Both Physical Exercise and Progressive Muscle Relaxation Reduce the Facing-the-Viewer Bias in Biological Motion Perception

    PubMed Central

    Heenan, Adam; Troje, Nikolaus F.

    2014-01-01

    Biological motion stimuli, such as orthographically projected stick figure walkers, are ambiguous about their orientation in depth. The projection of a stick figure walker oriented towards the viewer, therefore, is the same as its projection when oriented away. Even though such figures are depth-ambiguous, however, observers tend to interpret them as facing towards them more often than facing away. Some have speculated that this facing-the-viewer bias may exist for sociobiological reasons: Mistaking another human as retreating when they are actually approaching could have more severe consequences than the opposite error. Implied in this hypothesis is that the facing-towards percept of biological motion stimuli is potentially more threatening. Measures of anxiety and the facing-the-viewer bias should therefore be related, as researchers have consistently found that anxious individuals display an attentional bias towards more threatening stimuli. The goal of this study was to assess whether physical exercise (Experiment 1) or an anxiety induction/reduction task (Experiment 2) would significantly affect facing-the-viewer biases. We hypothesized that both physical exercise and progressive muscle relaxation would decrease facing-the-viewer biases for full stick figure walkers, but not for bottom- or top-half-only human stimuli, as these carry less sociobiological relevance. On the other hand, we expected that the anxiety induction task (Experiment 2) would increase facing-the-viewer biases for full stick figure walkers only. In both experiments, participants completed anxiety questionnaires, exercised on a treadmill (Experiment 1) or performed an anxiety induction/reduction task (Experiment 2), and then immediately completed a perceptual task that allowed us to assess their facing-the-viewer bias. As hypothesized, we found that physical exercise and progressive muscle relaxation reduced facing-the-viewer biases for full stick figure walkers only. Our results provide further support that the facing-the-viewer bias for biological motion stimuli is related to the sociobiological relevance of such stimuli. PMID:24987956

  7. Acute tryptophan depletion attenuates conscious appraisal of social emotional signals in healthy female volunteers

    PubMed Central

    Gray, Marcus A.; Minati, Ludovico; Whale, Richard; Harrison, Neil A.; Critchley, Hugo D.

    2010-01-01

    Rationale Acute tryptophan depletion (ATD) decreases levels of central serotonin. ATD thus enables the cognitive effects of serotonin to be studied, with implications for the understanding of psychiatric conditions, including depression. Objective To determine the role of serotonin in conscious (explicit) and unconscious/incidental processing of emotional information. Materials and methods A randomized, double-blind, cross-over design was used with 15 healthy female participants. Subjective mood was recorded at baseline and after 4 h, when participants performed an explicit emotional face processing task, and a task eliciting unconscious processing of emotionally aversive and neutral images presented subliminally using backward masking. Results ATD was associated with a robust reduction in plasma tryptophan at 4 h but had no effect on mood or autonomic physiology. ATD was associated with significantly lower attractiveness ratings for happy faces and attenuation of intensity/arousal ratings of angry faces. ATD also reduced overall reaction times on the unconscious perception task, but there was no interaction with emotional content of masked stimuli. ATD did not affect breakthrough perception (accuracy in identification) of masked images. Conclusions ATD attenuates the attractiveness of positive faces and the negative intensity of threatening faces, suggesting that serotonin contributes specifically to the appraisal of the social salience of both positive and negative salient social emotional cues. We found no evidence that serotonin affects unconscious processing of negative emotional stimuli. These novel findings implicate serotonin in conscious aspects of active social and behavioural engagement and extend knowledge regarding the effects of ATD on emotional perception. PMID:20596858

  8. Distinct facial processing in schizophrenia and schizoaffective disorders

    PubMed Central

    Chen, Yue; Cataldo, Andrea; Norton, Daniel J; Ongur, Dost

    2011-01-01

    Although schizophrenia and schizoaffective disorders have both similar and differing clinical features, it is not well understood whether similar or differing pathophysiological processes mediate patients’ cognitive functions. Using psychophysical methods, this study compared the performances of schizophrenia (SZ) patients, patients with schizoaffective disorder (SA), and a healthy control group in two face-related cognitive tasks: emotion discrimination, which tested perception of facial affect, and identity discrimination, which tested perception of non-affective facial features. Compared to healthy controls, SZ patients, but not SA patients, exhibited deficient performance in both fear and happiness discrimination, as well as identity discrimination. SZ patients, but not SA patients, also showed impaired performance in a theory-of-mind task for which emotional expressions are identified based upon the eye regions of face images. This pattern of results suggests distinct processing of face information in schizophrenia and schizoaffective disorders. PMID:21868199

  9. Facial EMG responses to emotional expressions are related to emotion perception ability.

    PubMed

    Künecke, Janina; Hildebrandt, Andrea; Recio, Guillermo; Sommer, Werner; Wilhelm, Oliver

    2014-01-01

    Although most people can identify facial expressions of emotions well, they still differ in this ability. According to embodied simulation theories, understanding emotions of others is fostered by involuntarily mimicking the perceived expressions, causing a "reactivation" of the corresponding mental state. Some studies suggest automatic facial mimicry during expression viewing; however, findings on the relationship between mimicry and emotion perception abilities are equivocal. The present study investigated individual differences in emotion perception and its relationship to facial muscle responses - recorded with electromyogram (EMG) - in response to emotional facial expressions. N = 269 participants completed multiple tasks measuring face and emotion perception. EMG recordings were taken from a subsample (N = 110) in an independent emotion classification task of short videos displaying six emotions. Confirmatory factor analyses of the m. corrugator supercilii in response to angry, happy, sad, and neutral expressions showed that individual differences in corrugator activity can be separated into a general response to all faces and an emotion-related response. Structural equation modeling revealed a substantial relationship between the emotion-related response and emotion perception ability, providing evidence for the role of facial muscle activation in emotion perception from an individual differences perspective.

  10. Facial EMG Responses to Emotional Expressions Are Related to Emotion Perception Ability

    PubMed Central

    Künecke, Janina; Hildebrandt, Andrea; Recio, Guillermo; Sommer, Werner; Wilhelm, Oliver

    2014-01-01

    Although most people can identify facial expressions of emotions well, they still differ in this ability. According to embodied simulation theories, understanding emotions of others is fostered by involuntarily mimicking the perceived expressions, causing a “reactivation” of the corresponding mental state. Some studies suggest automatic facial mimicry during expression viewing; however, findings on the relationship between mimicry and emotion perception abilities are equivocal. The present study investigated individual differences in emotion perception and its relationship to facial muscle responses - recorded with electromyogram (EMG) - in response to emotional facial expressions. N = 269 participants completed multiple tasks measuring face and emotion perception. EMG recordings were taken from a subsample (N = 110) in an independent emotion classification task of short videos displaying six emotions. Confirmatory factor analyses of the m. corrugator supercilii in response to angry, happy, sad, and neutral expressions showed that individual differences in corrugator activity can be separated into a general response to all faces and an emotion-related response. Structural equation modeling revealed a substantial relationship between the emotion-related response and emotion perception ability, providing evidence for the role of facial muscle activation in emotion perception from an individual differences perspective. PMID:24489647

  11. A Window of Opportunity for Cognitive Training in Adolescence

    PubMed Central

    Knoll, Lisa J.; Fuhrmann, Delia; Sakhardande, Ashok L.; Stamp, Fabian; Speekenbrink, Maarten; Blakemore, Sarah-Jayne

    2016-01-01

    In the current study, we investigated windows for enhanced learning of cognitive skills during adolescence. Six hundred thirty-three participants (11–33 years old) were divided into four age groups, and each participant was randomly allocated to one of three training groups. Each training group completed up to 20 days of online training in numerosity discrimination (i.e., discriminating small from large numbers of objects), relational reasoning (i.e., detecting abstract relationships between groups of items), or face perception (i.e., identifying differences in faces). Training yielded some improvement in performance on the numerosity-discrimination task, but only in older adolescents or adults. In contrast, training in relational reasoning improved performance on that task in all age groups, but training benefits were greater for people in late adolescence and adulthood than for people earlier in adolescence. Training did not increase performance on the face-perception task for any age group. Our findings suggest that for certain cognitive skills, training during late adolescence and adulthood yields greater improvement than training earlier in adolescence, which highlights the relevance of this late developmental stage for education. PMID:27815519

  12. Visual attractiveness is leaky: the asymmetrical relationship between face and hair.

    PubMed

    Saegusa, Chihiro; Intoy, Janis; Shimojo, Shinsuke

    2015-01-01

    Predicting personality is crucial when communicating with people. It has been revealed that the perceived attractiveness or beauty of the face is a cue. As shown in the well-known "what is beautiful is good" stereotype, perceived attractiveness is often associated with desirable personality. Although such research on attractiveness used mainly the face isolated from other body parts, the face is not always seen in isolation in the real world. Rather, it is surrounded by one's hairstyle, and is perceived as a part of total presence. In human vision, perceptual organization/integration occurs mostly in a bottom up, task-irrelevant fashion. This raises an intriguing possibility that task-irrelevant stimulus that is perceptually integrated with a target may influence our affective evaluation. In such a case, there should be a mutual influence between attractiveness perception of the face and surrounding hair, since they are assumed to share strong and unique perceptual organization. In the current study, we examined the influence of a task-irrelevant stimulus on our attractiveness evaluation, using face and hair as stimuli. The results revealed asymmetrical influences in the evaluation of one while ignoring the other. When hair was task-irrelevant, it still affected attractiveness of the face, but only if the hair itself had never been evaluated by the same evaluator. On the other hand, the face affected the hair regardless of whether the face itself was evaluated before. This has intriguing implications on the asymmetry between face and hair, and perceptual integration between them in general. Together with data from a post hoc questionnaire, it is suggested that both implicit non-selective and explicit selective processes contribute to attractiveness evaluation. The findings provide an understanding of attractiveness perception in real-life situations, as well as a new paradigm to reveal unknown implicit aspects of information integration for emotional judgment.

  13. Visual attractiveness is leaky: the asymmetrical relationship between face and hair

    PubMed Central

    Saegusa, Chihiro; Intoy, Janis; Shimojo, Shinsuke

    2015-01-01

    Predicting personality is crucial when communicating with people. It has been revealed that the perceived attractiveness or beauty of the face is a cue. As shown in the well-known “what is beautiful is good” stereotype, perceived attractiveness is often associated with desirable personality. Although such research on attractiveness used mainly the face isolated from other body parts, the face is not always seen in isolation in the real world. Rather, it is surrounded by one’s hairstyle, and is perceived as a part of total presence. In human vision, perceptual organization/integration occurs mostly in a bottom up, task-irrelevant fashion. This raises an intriguing possibility that task-irrelevant stimulus that is perceptually integrated with a target may influence our affective evaluation. In such a case, there should be a mutual influence between attractiveness perception of the face and surrounding hair, since they are assumed to share strong and unique perceptual organization. In the current study, we examined the influence of a task-irrelevant stimulus on our attractiveness evaluation, using face and hair as stimuli. The results revealed asymmetrical influences in the evaluation of one while ignoring the other. When hair was task-irrelevant, it still affected attractiveness of the face, but only if the hair itself had never been evaluated by the same evaluator. On the other hand, the face affected the hair regardless of whether the face itself was evaluated before. This has intriguing implications on the asymmetry between face and hair, and perceptual integration between them in general. Together with data from a post hoc questionnaire, it is suggested that both implicit non-selective and explicit selective processes contribute to attractiveness evaluation. The findings provide an understanding of attractiveness perception in real-life situations, as well as a new paradigm to reveal unknown implicit aspects of information integration for emotional judgment. PMID:25914656

  14. The effect of face eccentricity on the perception of gaze direction.

    PubMed

    Todorović, Dejan

    2009-01-01

    The perception of a looker's gaze direction depends not only on iris eccentricity (the position of the looker's irises within the sclera) but also on the orientation of the lookers' head. One among several potential cues of head orientation is face eccentricity, the position of the inner features of the face (eyes, nose, mouth) within the head contour, as viewed by the observer. For natural faces this cue is confounded with many other head-orientation cues, but in schematic faces it can be studied in isolation. Salient novel illustrations of the effectiveness of face eccentricity are 'Necker faces', which involve equal iris eccentricities but multiple perceived gaze directions. In four experiments, iris and face eccentricity in schematic faces were manipulated, revealing strong and consistent effects of face eccentricity on perceived gaze direction, with different types of tasks. An additional experiment confirmed the 'Mona Lisa' effect with this type of stimuli. Face eccentricity most likely acted as a simple but robust cue of head turn. A simple computational account of combined effects of cues of eye and head turn on perceived gaze direction is presented, including a formal condition for the perception of direct gaze. An account of the 'Mona Lisa' effect is presented.
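
    The abstract refers to a simple computational account of how eye-turn and head-turn cues combine to determine perceived gaze direction, including a formal condition for perceived direct gaze, but does not spell the model out. The sketch below is one generic additive formulation, with illustrative weights and a hypothetical tolerance parameter; it is an assumption-laden stand-in, not the paper's actual model.

```python
def perceived_gaze_direction(head_turn_deg, eye_in_head_deg, w_head=1.0, w_eye=1.0):
    """Generic additive cue-combination sketch (illustrative weights, not the
    paper's fitted model): perceived gaze is the weighted sum of the head-turn
    cue (e.g., signalled by face eccentricity) and the eye-in-head cue
    (signalled by iris eccentricity). Angles in degrees; 0 = towards the viewer."""
    return w_head * head_turn_deg + w_eye * eye_in_head_deg

def is_direct_gaze(head_turn_deg, eye_in_head_deg, tolerance_deg=2.0, **weights):
    """Direct gaze is perceived when the combined estimate falls near zero,
    i.e., when the eye turn roughly cancels the head turn."""
    return abs(perceived_gaze_direction(head_turn_deg, eye_in_head_deg, **weights)) <= tolerance_deg

# Example: head turned 15 deg to the right, eyes rotated 15 deg back to the left.
print(perceived_gaze_direction(15.0, -15.0))  # 0.0 -> gaze read as direct
print(is_direct_gaze(15.0, -15.0))            # True
```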

  15. A leftward bias however you look at it: Revisiting the emotional chimeric face task as a tool for measuring emotion lateralization.

    PubMed

    R Innes, Bobby; Burt, D Michael; Birch, Yan K; Hausmann, Markus

    2015-12-28

    Left hemiface biases observed within the Emotional Chimeric Face Task (ECFT) support emotional face perception models whereby all expressions are preferentially processed by the right hemisphere. However, previous research using this task has not considered that the visible midline between hemifaces might engage atypical facial emotion processing strategies in upright or inverted conditions, nor controlled for left visual field (thus right hemispheric) visuospatial attention biases. This study used novel emotional chimeric faces (blended at the midline) to examine laterality biases for all basic emotions. Left hemiface biases were demonstrated across all emotional expressions and were reduced, but not reversed, for inverted faces. The ECFT bias in upright faces was significantly increased in participants with a large attention bias. These results support the theory that left hemiface biases reflect a genuine bias in emotional face processing, and this bias can interact with attention processes similarly localized in the right hemisphere.

  16. Categorical Perception of Emotional Facial Expressions in Preschoolers

    ERIC Educational Resources Information Center

    Cheal, Jenna L.; Rutherford, M. D.

    2011-01-01

    Adults perceive emotional facial expressions categorically. In this study, we explored categorical perception in 3.5-year-olds by creating a morphed continuum of emotional faces and tested preschoolers' discrimination and identification of them. In the discrimination task, participants indicated whether two examples from the continuum "felt the…

  17. Impressions of dominance are made relative to others in the visual environment.

    PubMed

    Re, Daniel E; Lefevre, Carmen E; DeBruine, Lisa M; Jones, Benedict C; Perrett, David I

    2014-03-27

    Face judgments of dominance play an important role in human social interaction. Perceived facial dominance is thought to indicate physical formidability, as well as resource acquisition and holding potential. Dominance cues in the face affect perceptions of attractiveness, emotional state, and physical strength. Most experimental paradigms test perceptions of facial dominance in individual faces, or they use manipulated versions of the same face in a forced-choice task but in the absence of other faces. Here, we extend this work by assessing whether dominance ratings are absolute or are judged relative to other faces. We presented participants with faces to be rated for dominance (target faces), while also presenting a second face (non-target faces) that was not to be rated. We found that both the masculinity and sex of the non-target face affected dominance ratings of the target face. Masculinized non-target faces decreased the perceived dominance of a target face relative to a feminized non-target face, and displaying a male non-target face decreased perceived dominance of a target face more so than a female non-target face. Perceived dominance of male target faces was affected more by masculinization of male non-target faces than female non-target faces. These results indicate that dominance perceptions can be altered by surrounding faces, demonstrating that facial dominance is judged at least partly relative to other faces.

  18. Decreased activation along the dorsal visual pathway after a 3-month treatment with galantamine in mild Alzheimer disease: a functional magnetic resonance imaging study.

    PubMed

    Bokde, Arun L W; Karmann, Michaela; Teipel, Stefan J; Born, Christine; Lieb, Martin; Reiser, Maximilian F; Möller, Hans-Jürgen; Hampel, Harald

    2009-04-01

    Visual perception has been shown to be altered in Alzheimer disease (AD) patients, and it is associated with decreased cognitive function. Galantamine is an active cholinergic agent, which has been shown to lead to improved cognition in mild to moderate AD patients. This study examined brain activation in a group of mild AD patients after a 3-month open-label treatment with galantamine. The objective was to examine the changes in brain activation due to treatment. There were two visual perception tasks. The first task was a face-matching task to test the activation along the ventral visual pathway, and the second task was a location-matching task to test neuronal function along the dorsal pathway. Brain activation was measured using functional magnetic resonance imaging. Five mild AD patients took part in the study. There were no differences in task performance or in the cognitive scores of the Consortium to Establish a Registry for Alzheimer's Disease battery before and after treatment. In the location-matching task, we found a statistically significant decrease in activation along the dorsal visual pathway after galantamine treatment. A previous study found that AD patients had higher activation in the location-matching task compared with healthy controls. There were no differences in activation for the face-matching task after treatment. Our data indicate that treatment with galantamine leads to more efficient visual processing of stimuli or changes the compensatory mechanism in the AD patients. A visual perception task recruiting the dorsal visual system may be useful as a biomarker of treatment effects.

  19. The role of the fusiform face area in social cognition: implications for the pathobiology of autism.

    PubMed Central

    Schultz, Robert T; Grelotti, David J; Klin, Ami; Kleinman, Jamie; Van der Gaag, Christiaan; Marois, René; Skudlarski, Pawel

    2003-01-01

    A region in the lateral aspect of the fusiform gyrus (FG) is more engaged by human faces than any other category of image. It has come to be known as the 'fusiform face area' (FFA). The origin and extent of this specialization is currently a topic of great interest and debate. This is of special relevance to autism, because recent studies have shown that the FFA is hypoactive to faces in this disorder. In two linked functional magnetic resonance imaging (fMRI) studies of healthy young adults, we show here that the FFA is engaged by a social attribution task (SAT) involving perception of human-like interactions among three simple geometric shapes. The amygdala, temporal pole, medial prefrontal cortex, inferolateral frontal cortex and superior temporal sulci were also significantly engaged. Activation of the FFA to a task without faces challenges the received view that the FFA is restricted in its activities to the perception of faces. We speculate that abstract semantic information associated with faces is encoded in the FG region and retrieved for social computations. From this perspective, the literature on hypoactivation of the FFA in autism may be interpreted as a reflection of a core social cognitive mechanism underlying the disorder. PMID:12639338

  20. Visual Influences on Speech Perception in Children with Autism

    ERIC Educational Resources Information Center

    Iarocci, Grace; Rombough, Adrienne; Yager, Jodi; Weeks, Daniel J.; Chua, Romeo

    2010-01-01

    The bimodal perception of speech sounds was examined in children with autism as compared to mental age--matched typically developing (TD) children. A computer task was employed wherein only the mouth region of the face was displayed and children reported what they heard or saw when presented with consonant-vowel sounds in unimodal auditory…

  1. Social and emotional relevance in face processing: happy faces of future interaction partners enhance the late positive potential

    PubMed Central

    Bublatzky, Florian; Gerdes, Antje B. M.; White, Andrew J.; Riemer, Martin; Alpers, Georg W.

    2014-01-01

    Human face perception is modulated by both emotional valence and social relevance, but their interaction has rarely been examined. Event-related brain potentials (ERP) to happy, neutral, and angry facial expressions with different degrees of social relevance were recorded. To implement a social anticipation task, relevance was manipulated by presenting faces of two specific actors as future interaction partners (socially relevant), whereas two other face actors remained non-relevant. In a further control task all stimuli were presented without specific relevance instructions (passive viewing). Face stimuli of four actors (2 women, from the KDEF) were randomly presented for 1s to 26 participants (16 female). Results showed an augmented N170, early posterior negativity (EPN), and late positive potential (LPP) for emotional in contrast to neutral facial expressions. Of particular interest, face processing varied as a function of experimental tasks. Whereas task effects were observed for P1 and EPN regardless of instructed relevance, LPP amplitudes were modulated by emotional facial expression and relevance manipulation. The LPP was specifically enhanced for happy facial expressions of the anticipated future interaction partners. This underscores that social relevance can impact face processing already at an early stage of visual processing. These findings are discussed within the framework of motivated attention and face processing theories. PMID:25076881

  2. Do neural correlates of face expertise vary with task demands? Event-related potential correlates of own- and other-race face inversion

    PubMed Central

    Wiese, Holger

    2013-01-01

    We are typically more accurate at remembering own- than other-race faces. This “own-race bias” has been suggested to result from enhanced expertise with and more efficient perceptual processing of own-race than other-race faces. In line with this idea, the N170, an event-related potential correlate of face perception, has been repeatedly found to be larger for other-race faces. Other studies, however, found no difference in N170 amplitude for faces from diverse ethnic groups. The present study tested whether these seemingly incongruent findings can be explained by varying task demands. European participants were presented with upright and inverted European and Asian faces (as well as European and Asian houses), and asked to either indicate the ethnicity or the orientation of the stimuli. Larger N170s for other-race faces were observed in the ethnicity but not in the orientation task, suggesting that the necessity to process facial category information is a minimum prerequisite for the occurrence of the effect. In addition, N170 inversion effects, with larger amplitudes for inverted relative to upright stimuli, were more pronounced for own- relative to other-race faces in both tasks. Overall, the present findings suggest that the occurrence of ethnicity effects in N170 for upright faces depends on the amount of facial detail required for the task at hand. At the same time, the larger inversion effects for own- than other-race faces occur independent of task and may reflect the fine-tuning of perceptual processing to faces of maximum expertise. PMID:24399955

  3. The Caledonian face test: A new test of face discrimination.

    PubMed

    Logan, Andrew J; Wilkinson, Frances; Wilson, Hugh R; Gordon, Gael E; Loffler, Gunter

    2016-02-01

    This study aimed to develop a clinical test of face perception which is applicable to a wide range of patients and can capture normal variability. The Caledonian face test utilises synthetic faces which combine simplicity with sufficient realism to permit individual identification. Face discrimination thresholds (i.e. minimum difference between faces required for accurate discrimination) were determined in an "odd-one-out" task. The difference between faces was controlled by an adaptive QUEST procedure. A broad range of face discrimination sensitivity was determined from a group (N=52) of young adults (mean 5.75%; SD 1.18; range 3.33-8.84%). The test is fast (3-4 min), repeatable (test-re-test r(2)=0.795) and demonstrates a significant inversion effect. The potential to identify impairments of face discrimination was evaluated by testing LM who reported a lifelong difficulty with face perception. While LM's impairment for two established face tests was close to the criterion for significance (Z-scores of -2.20 and -2.27) for the Caledonian face test, her Z-score was -7.26, implying a more than threefold higher sensitivity. The new face test provides a quantifiable and repeatable assessment of face discrimination ability. The enhanced sensitivity suggests that the Caledonian face test may be capable of detecting more subtle impairments of face perception than available tests. Copyright © 2015 Elsevier Ltd. All rights reserved.
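
    The face difference presented on each trial of this test was controlled by an adaptive QUEST procedure. As a rough illustration of how such Bayesian adaptive threshold estimation works, the self-contained sketch below maintains a posterior over the discrimination threshold, tests at the posterior mean, and updates after each simulated response. The Weibull parameters and the assumption of a three-alternative odd-one-out design are illustrative only and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Candidate thresholds (face difference, in % as in the abstract).
grid = np.linspace(0.5, 15.0, 200)
log_posterior = np.zeros_like(grid)          # flat prior (in log space)

def p_correct(stim, threshold, guess=1/3., lapse=0.02, slope=3.5):
    """Weibull psychometric function, assuming a 3-alternative odd-one-out task."""
    return guess + (1 - guess - lapse) * (1 - np.exp(-(stim / threshold) ** slope))

true_threshold = 5.75   # simulated observer set to the group mean reported above

for _ in range(40):                          # ~40 trials per threshold estimate
    posterior = np.exp(log_posterior - log_posterior.max())
    posterior /= posterior.sum()
    stim = float(np.sum(grid * posterior))   # test at the current posterior mean
    correct = rng.random() < p_correct(stim, true_threshold)
    # Bayesian update of the posterior over threshold given this trial's outcome.
    likelihood = p_correct(stim, grid) if correct else 1 - p_correct(stim, grid)
    log_posterior += np.log(likelihood)

posterior = np.exp(log_posterior - log_posterior.max())
posterior /= posterior.sum()
print(f"estimated threshold: {np.sum(grid * posterior):.2f}%")
```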

  4. Reprint of "Investigating ensemble perception of emotions in autistic and typical children and adolescents".

    PubMed

    Karaminis, Themelis; Neil, Louise; Manning, Catherine; Turi, Marco; Fiorentini, Chiara; Burr, David; Pellicano, Elizabeth

    2018-01-01

    Ensemble perception, the ability to assess automatically the summary of large amounts of information presented in visual scenes, is available early in typical development. This ability might be compromised in autistic children, who are thought to present limitations in maintaining summary statistics representations for the recent history of sensory input. Here we examined ensemble perception of facial emotional expressions in 35 autistic children, 30 age- and ability-matched typical children and 25 typical adults. Participants received three tasks: a) an 'ensemble' emotion discrimination task; b) a baseline (single-face) emotion discrimination task; and c) a facial expression identification task. Children performed worse than adults on all three tasks. Unexpectedly, autistic and typical children were, on average, indistinguishable in their precision and accuracy on all three tasks. Computational modelling suggested that, on average, autistic and typical children used ensemble-encoding strategies to a similar extent; but ensemble perception was related to non-verbal reasoning abilities in autistic but not in typical children. Eye-movement data also showed no group differences in the way children attended to the stimuli. Our combined findings suggest that the abilities of autistic and typical children for ensemble perception of emotions are comparable on average. Copyright © 2017 The Author(s). Published by Elsevier Ltd.. All rights reserved.

  5. Ensemble perception of emotions in autistic and typical children and adolescents.

    PubMed

    Karaminis, Themelis; Neil, Louise; Manning, Catherine; Turi, Marco; Fiorentini, Chiara; Burr, David; Pellicano, Elizabeth

    2017-04-01

    Ensemble perception, the ability to assess automatically the summary of large amounts of information presented in visual scenes, is available early in typical development. This ability might be compromised in autistic children, who are thought to present limitations in maintaining summary statistics representations for the recent history of sensory input. Here we examined ensemble perception of facial emotional expressions in 35 autistic children, 30 age- and ability-matched typical children and 25 typical adults. Participants received three tasks: a) an 'ensemble' emotion discrimination task; b) a baseline (single-face) emotion discrimination task; and c) a facial expression identification task. Children performed worse than adults on all three tasks. Unexpectedly, autistic and typical children were, on average, indistinguishable in their precision and accuracy on all three tasks. Computational modelling suggested that, on average, autistic and typical children used ensemble-encoding strategies to a similar extent; but ensemble perception was related to non-verbal reasoning abilities in autistic but not in typical children. Eye-movement data also showed no group differences in the way children attended to the stimuli. Our combined findings suggest that the abilities of autistic and typical children for ensemble perception of emotions are comparable on average. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.
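
    The computational modelling mentioned in both versions of this abstract compared ensemble-encoding with more limited sampling strategies. The toy simulation below contrasts an observer who averages noisy readings of every face in a display with one who subsamples only two faces, for a mean-emotion discrimination; the display size and noise levels are assumptions for illustration only, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_accuracy(n_faces=8, n_sampled=8, offset=4.0, noise_sd=6.0, n_trials=5000):
    """Proportion correct for an observer judging whether the *mean* expression
    of an ensemble is shifted towards happy or sad. Each attended face is read
    with Gaussian noise, and only `n_sampled` faces are averaged."""
    correct = 0
    for _ in range(n_trials):
        sign = rng.choice([-1, 1])                       # ensemble mean shifted up or down
        faces = sign * offset + rng.normal(0, noise_sd, n_faces)
        sampled = rng.choice(faces, size=n_sampled, replace=False)
        correct += (np.mean(sampled) > 0) == (sign > 0)
    return correct / n_trials

# Full ensemble averaging vs. subsampling only two faces.
print(simulate_accuracy(n_sampled=8))   # ensemble-encoding strategy
print(simulate_accuracy(n_sampled=2))   # subsampling strategy (lower accuracy)
```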

  6. Impairment of holistic face perception following right occipito-temporal damage in prosopagnosia: converging evidence from gaze-contingency.

    PubMed

    Van Belle, Goedele; Busigny, Thomas; Lefèvre, Philippe; Joubert, Sven; Felician, Olivier; Gentile, Francesco; Rossion, Bruno

    2011-09-01

    Gaze-contingency is a method traditionally used to investigate the perceptual span in reading by selectively revealing/masking a portion of the visual field in real time. Introducing this approach in face perception research showed that the performance pattern of a brain-damaged patient with acquired prosopagnosia (PS) in a face matching task was reversed, as compared to normal observers: the patient showed almost no further decrease of performance when only one facial part (eye, mouth, nose, etc.) was available at a time (foveal window condition, forcing part-based analysis), but a very large impairment when the fixated part was selectively masked (mask condition, promoting holistic perception) (Van Belle, De Graef, Verfaillie, Busigny, & Rossion, 2010a; Van Belle, De Graef, Verfaillie, Rossion, & Lefèvre, 2010b). Here we tested the same manipulation in a recently reported case of pure prosopagnosia (GG) with unilateral right hemisphere damage (Busigny, Joubert, Felician, Ceccaldi, & Rossion, 2010). Contrary to normal observers, GG was also significantly more impaired with a mask than with a window, demonstrating impairment with holistic face perception. Together with our previous study, these observations support a generalized account of acquired prosopagnosia as a critical impairment of holistic (individual) face perception, implying that this function is a key element of normal human face recognition. Furthermore, the similar behavioral pattern of the two patients despite different lesion localizations supports a distributed network view of the neural face processing structures, suggesting that the key function of human face processing, namely holistic perception of individual faces, requires the activity of several brain areas of the right hemisphere and their mutual connectivity. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. Face imagery is based on featural representations.

    PubMed

    Lobmaier, Janek S; Mast, Fred W

    2008-01-01

    The effect of imagery on featural and configural face processing was investigated using blurred and scrambled faces. By means of blurring, featural information is reduced; by scrambling a face into its constituent parts, configural information is lost. Twenty-four participants learned ten faces together with the sound of a name. In subsequent matching-to-sample tasks, participants had to decide whether an auditorily presented name belonged to a visually presented scrambled or blurred face in two experimental conditions. In the imagery condition, the name was presented prior to the visual stimulus and participants were required to imagine the corresponding face as clearly and vividly as possible. In the perception condition, the name and test face were presented simultaneously, thus no facilitation via mental imagery was possible. Analyses of the hit values showed that in the imagery condition scrambled faces were recognized significantly better than blurred faces, whereas there was no such effect in the perception condition. The results suggest that mental imagery activates featural representations more than configural representations.

  8. Task-irrelevant fear enhances amygdala-FFG inhibition and decreases subsequent face processing.

    PubMed

    Schulte Holthausen, Barbara; Habel, Ute; Kellermann, Thilo; Schelenz, Patrick D; Schneider, Frank; Christopher Edgar, J; Turetsky, Bruce I; Regenbogen, Christina

    2016-09-01

    Facial threat is associated with changes in limbic activity as well as modifications in the cortical face-related N170. It remains unclear if task-irrelevant threat modulates the response to a subsequent facial stimulus, and whether the amygdala's role in early threat perception is independent and direct, or modulatory. In 19 participants, crowds of emotional faces were followed by target faces and a rating task while simultaneous EEG-fMRI were recorded. In addition to conventional analyses, fMRI-informed EEG analyses and fMRI dynamic causal modeling (DCM) were performed. Fearful crowds reduced EEG N170 target face amplitudes and increased responses in a fMRI network comprising insula, amygdala and inferior frontal cortex. Multimodal analyses showed that amygdala response was present ∼60 ms before the right fusiform gyrus-derived N170. DCM indicated inhibitory connections from amygdala to fusiform gyrus, strengthened when fearful crowds preceded a target face. Results demonstrated the suppressing influence of task-irrelevant fearful crowds on subsequent face processing. The amygdala may be sensitive to task-irrelevant fearful crowds and subsequently strengthen its inhibitory influence on face-responsive fusiform N170 generators. This provides spatiotemporal evidence for a feedback mechanism of the amygdala by narrowing attention in order to focus on potential threats. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  9. Valence Specific Laterality Effects in Free Viewing Conditions: The Role of Expectancy and Gender of Image

    ERIC Educational Resources Information Center

    Stafford, Lorenzo D.; Brandaro, Nicola

    2010-01-01

    Recent research has looked at whether the expectancy of an emotion can account for subsequent valence specific laterality effects of prosodic emotion, though no research has examined this effect for facial emotion. In the study here (n = 58), we investigated this issue using two tasks; an emotional face perception task and a novel word task that…

  10. Holistic processing of human body postures: evidence from the composite effect.

    PubMed

    Willems, Sam; Vrancken, Leia; Germeys, Filip; Verfaillie, Karl

    2014-01-01

    The perception of socially relevant stimuli (e.g., faces and bodies) has received considerable attention in the vision science community. It is now widely accepted that human faces are processed holistically and not only analytically. One observation that has been taken as evidence for holistic face processing is the face composite effect: two identical top halves of a face tend to be perceived as being different when combined with different bottom halves. This supports the hypothesis that face processing proceeds holistically. Indeed, the interference effect disappears when the two face parts are misaligned (blocking holistic perception). In the present study, we investigated whether there is also a composite effect for the perception of body postures: are two identical body halves perceived as being in different poses when the irrelevant body halves differ from each other? Both a horizontal (i.e., top-bottom body halves; Experiment 1) and a vertical composite effect (i.e., left-right body halves; Experiment 2) were examined by means of a delayed matching-to-sample task. Results of both experiments indicate the existence of a body posture composite effect. This provides evidence for the hypothesis that body postures, as faces, are processed holistically.

  11. Holistic processing of human body postures: evidence from the composite effect

    PubMed Central

    Willems, Sam; Vrancken, Leia; Germeys, Filip; Verfaillie, Karl

    2014-01-01

    The perception of socially relevant stimuli (e.g., faces and bodies) has received considerable attention in the vision science community. It is now widely accepted that human faces are processed holistically and not only analytically. One observation that has been taken as evidence for holistic face processing is the face composite effect: two identical top halves of a face tend to be perceived as being different when combined with different bottom halves. This supports the hypothesis that face processing proceeds holistically. Indeed, the interference effect disappears when the two face parts are misaligned (blocking holistic perception). In the present study, we investigated whether there is also a composite effect for the perception of body postures: are two identical body halves perceived as being in different poses when the irrelevant body halves differ from each other? Both a horizontal (i.e., top-bottom body halves; Experiment 1) and a vertical composite effect (i.e., left-right body halves; Experiment 2) were examined by means of a delayed matching-to-sample task. Results of both experiments indicate the existence of a body posture composite effect. This provides evidence for the hypothesis that body postures, as faces, are processed holistically. PMID:24999337

  12. Decoding Task and Stimulus Representations in Face-responsive Cortex

    PubMed Central

    Kliemann, Dorit; Jacoby, Nir; Anzellotti, Stefano; Saxe, Rebecca R.

    2017-01-01

    Faces provide rich social information about others’ stable traits (e.g., age) and fleeting states of mind (e.g., emotional expression). While some of these facial aspects may be processed automatically, observers can also deliberately attend to some features while ignoring others. It remains unclear how internal goals (e.g., task context) influence the representational geometry of variable and stable facial aspects in face-responsive cortex. We investigated neural response patterns related to decoding i) the intention to attend to a facial aspect before its perception, ii) the attended aspect of a face and iii) stimulus properties. We measured neural responses while subjects watched videos of dynamic positive and negative expressions, and judged the age or the expression’s valence. Split-half multivoxel pattern analyses (MVPA) showed that (i) the intention to attend to a specific aspect of a face can be decoded from left fronto-lateral, but not face-responsive regions; (ii) during face perception, the attended aspect (age vs. emotion) could be robustly decoded from almost all face-responsive regions; and (iii) a stimulus property (valence) was represented in right posterior superior temporal sulcus and medial prefrontal cortices. The effect of deliberately shifting the focus of attention on representations suggests a powerful influence of top-down signals on cortical representation of social information, varying across cortical regions, likely reflecting neural flexibility to optimally integrate internal goals and dynamic perceptual input. PMID:27978778
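
    Split-half MVPA of the kind described here typically asks whether response patterns for the same condition correlate more strongly across independent halves of the data than patterns for different conditions. The sketch below implements that correlation-based decoding index on simulated voxel patterns; the region, voxel count, and noise levels are placeholders rather than details of the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data: average voxel pattern in one face-responsive ROI for each
# task condition (attend-age vs. attend-emotion), computed separately for the
# two halves of the runs (split-half). Shape: (condition, half, voxel).
n_voxels = 100
signal = rng.normal(0, 1, (2, n_voxels))                               # condition-specific signal
patterns = signal[:, None, :] + rng.normal(0, 1.0, (2, 2, n_voxels))   # plus run-specific noise

def split_half_decoding(patterns):
    """Correlation-based split-half decoding: the attended aspect counts as
    decodable when same-condition correlations across halves exceed
    different-condition correlations."""
    same = np.mean([np.corrcoef(patterns[c, 0], patterns[c, 1])[0, 1] for c in range(2)])
    diff = np.mean([np.corrcoef(patterns[0, h], patterns[1, 1 - h])[0, 1] for h in range(2)])
    return same - diff   # > 0 indicates above-chance decoding

print(f"split-half decoding index: {split_half_decoding(patterns):.3f}")
```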

  13. Emotion perception, but not affect perception, is impaired with semantic memory loss.

    PubMed

    Lindquist, Kristen A; Gendron, Maria; Barrett, Lisa Feldman; Dickerson, Bradford C

    2014-04-01

    For decades, psychologists and neuroscientists have hypothesized that the ability to perceive emotions on others' faces is inborn, prelinguistic, and universal. Concept knowledge about emotion has been assumed to be epiphenomenal to emotion perception. In this article, we report findings from 3 patients with semantic dementia that cannot be explained by this "basic emotion" view. These patients, who have substantial deficits in semantic processing abilities, spontaneously perceived pleasant and unpleasant expressions on faces, but not discrete emotions such as anger, disgust, fear, or sadness, even in a task that did not require the use of emotion words. Our findings support the hypothesis that discrete emotion concept knowledge helps transform perceptions of affect (positively or negatively valenced facial expressions) into perceptions of discrete emotions such as anger, disgust, fear, and sadness. These findings have important consequences for understanding the processes supporting emotion perception.

  14. Emotion perception, but not affect perception, is impaired with semantic memory loss

    PubMed Central

    Lindquist, Kristen A.; Gendron, Maria; Feldman Barrett, Lisa; Dickerson, Bradford C.

    2014-01-01

    For decades, psychologists and neuroscientists have hypothesized that the ability to perceive emotions on others’ faces is inborn, pre-linguistic, and universal. Concept knowledge about emotion has been assumed to be epiphenomenal to emotion perception. In this paper, we report findings from three patients with semantic dementia that cannot be explained by this “basic emotion” view. These patients, who have substantial deficits in semantic processing abilities, spontaneously perceived pleasant and unpleasant expressions on faces, but not discrete emotions such as anger, disgust, fear, or sadness, even in a task that did not require the use of emotion words. Our findings support the hypothesis that discrete emotion concept knowledge helps transform perceptions of affect (positively or negatively valenced facial expressions) into perceptions of discrete emotions such as anger, disgust, fear and sadness. These findings have important consequences for understanding the processes supporting emotion perception. PMID:24512242

  15. Reward Activates Stimulus-Specific and Task-Dependent Representations in Visual Association Cortices

    PubMed Central

    Muller, Timothy; Yeung, Nick; Waszak, Florian

    2014-01-01

    Humans reliably learn which actions lead to rewards. One prominent question is how credit is assigned to environmental stimuli that are acted upon. Recent functional magnetic resonance imaging (fMRI) studies have provided evidence that representations of rewarded stimuli are activated upon reward delivery, providing possible eligibility traces for credit assignment. Our study sought evidence of postreward activation in sensory cortices satisfying two conditions of instrumental learning: postreward activity should reflect the stimulus category that preceded reward (stimulus specificity), and should occur only if the stimulus was acted on to obtain reward (task dependency). Our experiment implemented two tasks in the fMRI scanner. The first was a perceptual decision-making task on degraded face and house stimuli. Stimulus specificity was evident as rewards activated the sensory cortices associated with face versus house perception more strongly after face versus house decisions, respectively, particularly in the fusiform face area. Stimulus specificity was further evident in a psychophysiological interaction analysis wherein face-sensitive areas correlated with nucleus accumbens activity after face-decision rewards, whereas house-sensitive areas correlated with nucleus accumbens activity after house-decision rewards. The second task required participants to make an instructed response. The criterion of task dependency was fulfilled as rewards after face versus house responses activated the respective association cortices to a larger degree when faces and houses were relevant to the performed task. Our study is the first to show that postreward sensory cortex activity meets these two key criteria of credit assignment, and does so independently from bottom-up perceptual processing. PMID:25411489
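
    The psychophysiological interaction (PPI) analysis described above tests whether functional coupling between a seed region (here, nucleus accumbens) and a target region changes with the psychological context (face- vs house-decision rewards). The sketch below shows only the core regressor construction for a single target timecourse; a full fMRI PPI additionally involves hemodynamic deconvolution and nuisance regressors, and all variable names here are illustrative assumptions rather than the study's pipeline.

```python
# Simplified sketch of a psychophysiological interaction (PPI) regression.
# Omits hemodynamic deconvolution and nuisance modelling; names are illustrative.
import numpy as np

def ppi_betas(seed_ts, condition, target_ts):
    """Fit a GLM with seed timecourse, psychological variable, and their product.

    seed_ts:   (n_timepoints,) seed-region BOLD timecourse (e.g., accumbens).
    condition: (n_timepoints,) psychological regressor, e.g., +1 after face
               decisions, -1 after house decisions, 0 elsewhere.
    target_ts: (n_timepoints,) timecourse of a target region or voxel.
    Returns betas for [intercept, seed, condition, interaction]; the last
    entry is the PPI effect.
    """
    seed = seed_ts - seed_ts.mean()
    design = np.column_stack([np.ones_like(seed), seed, condition, seed * condition])
    betas, *_ = np.linalg.lstsq(design, target_ts, rcond=None)
    return betas

# Toy usage: a target that couples with the seed only in the +1 condition
rng = np.random.default_rng(1)
seed = rng.normal(size=200)
cond = np.sign(rng.normal(size=200))
target = 0.5 * seed * cond + rng.normal(size=200)
print(ppi_betas(seed, cond, target)[-1])  # should be close to 0.5
```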

  16. Priming global and local processing of composite faces: revisiting the processing-bias effect on face perception.

    PubMed

    Gao, Zaifeng; Flevaris, Anastasia V; Robertson, Lynn C; Bentin, Shlomo

    2011-07-01

    We used the composite-face illusion and Navon stimuli to determine the consequences of priming local or global processing on subsequent face recognition. The composite-face illusion reflects the difficulty of ignoring the task-irrelevant half-face while attending the task-relevant half if the half-faces in the composite are aligned. On each trial, participants first matched two Navon stimuli, attending to either the global or the local level, and then matched the upper halves of two composite faces presented sequentially. Global processing of Navon stimuli increased the sensitivity to incongruence between the upper and the lower halves of the composite face, relative to a baseline in which the composite faces were not primed. Local processing of Navon stimuli did not influence the sensitivity to incongruence. Although incongruence induced a bias toward different responses, this bias was not modulated by priming. We conclude that global processing of Navon stimuli augments holistic processing of the face.

  17. Face-n-Food: Gender Differences in Tuning to Faces.

    PubMed

    Pavlova, Marina A; Scheffler, Klaus; Sokolov, Alexander N

    2015-01-01

    Faces represent valuable signals for social cognition and non-verbal communication. A wealth of research indicates that women tend to excel in recognition of facial expressions. However, it remains unclear whether females are better tuned to faces. We presented healthy adult females and males with a set of newly created food-plate images resembling faces (slightly bordering on the Giuseppe Arcimboldo style). In a spontaneous recognition task, participants were shown a set of images in a predetermined order from the least to most resembling a face. Females not only recognized the images as a face more readily (they reported a face in images for which males still did not), but also gave more face responses overall. The findings are discussed in the light of gender differences in deficient face perception. As most neuropsychiatric, neurodevelopmental and psychosomatic disorders characterized by social brain abnormalities are sex specific, the task may serve as a valuable tool for uncovering impairments in visual face processing.

  18. Face-n-Food: Gender Differences in Tuning to Faces

    PubMed Central

    Pavlova, Marina A.; Scheffler, Klaus; Sokolov, Alexander N.

    2015-01-01

    Faces represent valuable signals for social cognition and non-verbal communication. A wealth of research indicates that women tend to excel in recognition of facial expressions. However, it remains unclear whether females are better tuned to faces. We presented healthy adult females and males with a set of newly created food-plate images resembling faces (slightly bordering on the Giuseppe Arcimboldo style). In a spontaneous recognition task, participants were shown a set of images in a predetermined order from the least to most resembling a face. Females not only recognized the images as a face more readily (they reported a face in images for which males still did not), but also gave more face responses overall. The findings are discussed in the light of gender differences in deficient face perception. As most neuropsychiatric, neurodevelopmental and psychosomatic disorders characterized by social brain abnormalities are sex specific, the task may serve as a valuable tool for uncovering impairments in visual face processing. PMID:26154177

  19. Finding a face in the crowd: testing the anger superiority effect in Asperger Syndrome.

    PubMed

    Ashwin, Chris; Wheelwright, Sally; Baron-Cohen, Simon

    2006-06-01

    Social threat captures attention and is processed rapidly and efficiently, with many lines of research showing involvement of the amygdala. Visual search paradigms looking at social threat have shown that angry faces 'pop out' in a crowd, compared to happy faces. Autism and Asperger Syndrome (AS) are neurodevelopmental conditions characterised by social deficits, abnormal face processing, and amygdala dysfunction. We tested adults with high-functioning autism (HFA) and AS using a facial visual search paradigm with schematic neutral and emotional faces. We found, contrary to predictions, that people with HFA/AS performed similarly to controls in many conditions. However, the anger superiority effect was reduced in the HFA/AS group when crowd sizes varied widely and when faces were inverted, suggesting that a difference in face-processing style may be evident even with simple schematic faces. We conclude that there are intact threat detection mechanisms in AS under simple and predictable conditions, but that, like other face-perception tasks, the threat-face visual search task reveals atypical face processing in HFA/AS.

  20. Coarse-to-Fine Encoding of Spatial Frequency Information into Visual Short-Term Memory for Faces but Impartial Decay

    ERIC Educational Resources Information Center

    Gao, Zaifeng; Bentin, Shlomo

    2011-01-01

    Face perception studies investigated how spatial frequencies (SF) are extracted from retinal display while forming a perceptual representation, or their selective use during task-imposed categorization. Here we focused on the order of encoding low-spatial frequencies (LSF) and high-spatial frequencies (HSF) from perceptual representations into…

  1. Enhancing Learning with the Social Media: Student Teachers' Perceptions on Twitter in a Debate Activity

    ERIC Educational Resources Information Center

    Tur, Gemma; Marín, Victoria I.

    2015-01-01

    This paper presents research focused on the educational experience of students using the microblogging platform Twitter for debate activities in three groups in different teacher education programmes at the University of the Balearic Islands, Spain. The implementation of this technology-based task in a face-to-face class was introduced as an…

  2. Learners' Perceptions of Online Elements in a Beginners' Language Blended Course--Implications for CALL Design

    ERIC Educational Resources Information Center

    Pulker, Hélène; Vialleton, Elodie

    2015-01-01

    Much research has been done on blended learning and the design of tasks most appropriate for online environments and computer-mediated communication. Increasingly, language teachers and Second Language Acquisition (SLA) practitioners recognise the different nature of communications in online settings and in face-to-face settings; teachers do not…

  3. Early stages of figure-ground segregation during perception of the face-vase.

    PubMed

    Pitts, Michael A; Martínez, Antígona; Brewer, James B; Hillyard, Steven A

    2011-04-01

    The temporal sequence of neural processes supporting figure-ground perception was investigated by recording ERPs associated with subjects' perceptions of the face-vase figure. In Experiment 1, subjects continuously reported whether they perceived the face or the vase as the foreground figure by pressing one of two buttons. Each button press triggered a probe flash to the face region, the vase region, or the borders between the two. The N170/vertex positive potential (VPP) component of the ERP elicited by probes to the face region was larger when subjects perceived the faces as figure. Preceding the N170/VPP, two additional components were identified. First, when the borders were probed, ERPs differed in amplitude as early as 110 msec after probe onset depending on subjects' figure-ground perceptions. Second, when the face or vase regions were probed, ERPs were more positive (at ∼ 150-200 msec) when that region was perceived as figure versus background. These components likely reflect an early "border ownership" stage and a subsequent "figure-ground segregation" stage of processing. To explore the influence of attention on these stages of processing, two additional experiments were conducted. In Experiment 2, subjects selectively attended to the face or vase region, and the same early ERP components were again produced. In Experiment 3, subjects performed an identical selective attention task, but on a display lacking distinctive figure-ground borders, and neither of the early components was produced. Results from these experiments suggest sequential stages of processing underlying figure-ground perception, each of which is subject to modulation by selective attention.

  4. Wanting it Too Much: An Inverse Relation Between Social Motivation and Facial Emotion Recognition in Autism Spectrum Disorder

    PubMed Central

    Garman, Heather D.; Spaulding, Christine J.; Webb, Sara Jane; Mikami, Amori Yee; Morris, James P.

    2016-01-01

    This study examined social motivation and early-stage face perception as frameworks for understanding impairments in facial emotion recognition (FER) in a well-characterized sample of youth with autism spectrum disorders (ASD). Early-stage face perception (N170 event-related potential latency) was recorded while participants completed a standardized FER task, while social motivation was obtained via parent report. Participants with greater social motivation exhibited poorer FER, while those with shorter N170 latencies exhibited better FER for child angry faces stimuli. Social motivation partially mediated the relationship between a faster N170 and better FER. These effects were all robust to variations in IQ, age, and ASD severity. These findings augur against theories implicating social motivation as uniformly valuable for individuals with ASD, and augment models suggesting a close link between early-stage face perception, social motivation, and FER in this population. Broader implications for models and development of FER in ASD are discussed. PMID:26743637

  5. Wanting it Too Much: An Inverse Relation Between Social Motivation and Facial Emotion Recognition in Autism Spectrum Disorder.

    PubMed

    Garman, Heather D; Spaulding, Christine J; Webb, Sara Jane; Mikami, Amori Yee; Morris, James P; Lerner, Matthew D

    2016-12-01

    This study examined social motivation and early-stage face perception as frameworks for understanding impairments in facial emotion recognition (FER) in a well-characterized sample of youth with autism spectrum disorders (ASD). Early-stage face perception (N170 event-related potential latency) was recorded while participants completed a standardized FER task, while social motivation was obtained via parent report. Participants with greater social motivation exhibited poorer FER, while those with shorter N170 latencies exhibited better FER for child angry faces stimuli. Social motivation partially mediated the relationship between a faster N170 and better FER. These effects were all robust to variations in IQ, age, and ASD severity. These findings augur against theories implicating social motivation as uniformly valuable for individuals with ASD, and augment models suggesting a close link between early-stage face perception, social motivation, and FER in this population. Broader implications for models and development of FER in ASD are discussed.

  6. Discrimination and categorization of emotional facial expressions and faces in Parkinson's disease.

    PubMed

    Alonso-Recio, Laura; Martín, Pilar; Rubio, Sandra; Serrano, Juan M

    2014-09-01

    Our objective was to compare the ability to discriminate and categorize emotional facial expressions (EFEs) and facial identity characteristics (age and/or gender) in a group of 53 individuals with Parkinson's disease (PD) and another group of 53 healthy subjects. On the one hand, by means of discrimination and identification tasks, we compared two stages in the visual recognition process that could be selectively affected in individuals with PD. On the other hand, facial expression versus gender and age comparison permits us to contrast whether the emotional or non-emotional content influences the configural perception of faces. In Experiment I, we did not find differences between groups, either with facial expression or age, in discrimination tasks. Conversely, in Experiment II, we found differences between the groups, but only in the EFE identification task. Taken together, our results indicate that configural perception of faces does not seem to be globally impaired in PD. However, this ability is selectively altered when the categorization of emotional faces is required. A deeper assessment of the PD group indicated that decline in facial expression categorization is more evident in a subgroup of patients with higher global impairment (motor and cognitive). Taken together, these results suggest that the problems found in facial expression recognition may be associated with the progressive neuronal loss in frontostriatal and mesolimbic circuits, which characterizes PD. © 2013 The British Psychological Society.

  7. Face identity recognition in autism spectrum disorders: a review of behavioral studies.

    PubMed

    Weigelt, Sarah; Koldewyn, Kami; Kanwisher, Nancy

    2012-03-01

    Face recognition--the ability to recognize a person from their facial appearance--is essential for normal social interaction. Face recognition deficits have been implicated in the most common disorder of social interaction: autism. Here we ask: is face identity recognition in fact impaired in people with autism? Reviewing behavioral studies we find no strong evidence for a qualitative difference in how facial identity is processed between those with and without autism: markers of typical face identity recognition, such as the face inversion effect, seem to be present in people with autism. However, quantitatively--i.e., how well facial identity is remembered or discriminated--people with autism perform worse than typical individuals. This impairment is particularly clear in face memory and in face perception tasks in which a delay intervenes between sample and test, and less so in tasks with no memory demand. Although some evidence suggests that this deficit may be specific to faces, further evidence on this question is necessary. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. Assessing the Cognitive Functioning of Students with Intellectual Disabilities: Practices and Perceptions of School Psychologists

    ERIC Educational Resources Information Center

    Costner, Ashley Nicole

    2016-01-01

    School psychologists are faced with the task of conducting evaluations of students in order to determine special education eligibility. This often equates to administering a cognitive assessment measure to obtain information about skills or abilities. Although this may be a straightforward task when working with children of average or higher…

  9. Oxytocin does not make a face appear more trustworthy but improves the accuracy of trustworthiness judgments.

    PubMed

    Lambert, Bruno; Declerck, Carolyn H; Boone, Christophe

    2014-02-01

    Previous research on the relation between oxytocin and trustworthiness evaluations has yielded inconsistent results. The current study reports an experiment using artificial faces which allows manipulating the dimension of trustworthiness without changing factors like emotions or face symmetry. We investigate whether (1) oxytocin increases the average trustworthiness evaluation of faces (level effect), and/or whether (2) oxytocin improves the discriminatory ability of trustworthiness perception so that people become more accurate in distinguishing faces that vary along a gradient of trustworthiness. In a double blind oxytocin/placebo experiment (N=106) participants conducted two judgement tasks. First they evaluated the trustworthiness of a series of pictures of artificially generated faces, neutral in the trustworthiness dimension. Next they compared neutral faces with artificially generated faces that were manipulated to vary in trustworthiness. The results indicate that oxytocin (relative to a placebo) does not affect the evaluation of trustworthiness in the first task. However, in the second task, misclassification of untrustworthy faces as trustworthy occurred significantly less in the oxytocin group. Furthermore, oxytocin improved the discriminatory ability of untrustworthy, but not trustworthy faces. We conclude that oxytocin does not increase trustworthiness judgments on average, but that it helps people to more accurately recognize an untrustworthy face. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Social presence and the composite face effect.

    PubMed

    Garcia-Marques, Teresa; Fernandes, Alexandre; Fonseca, Ricardo; Prada, Marilia

    2015-06-01

    A robust finding in social psychology research is that performance is modulated by the social nature of a given context, promoting social inhibition or facilitation effects. In the present experiment, we examined if and how social presence impacts holistic face perception processes by asking participants, in the presence of others and alone, to perform the composite face task. Results suggest that completing the task in the presence of others (i.e., mere co-action) is associated with better performance in face recognition (less bias and higher discrimination between presented and non-presented targets) and with a reduction in the composite face effect. These results make clear that the impact of social presence on the composite face effect does not occur because presence increases reliance on holistic processing as a "dominant," well-learned response, but rather because it increases monitoring of the interference produced by the automatic response. Copyright © 2015. Published by Elsevier B.V.

  11. Can We Distinguish Emotions from Faces? Investigation of Implicit and Explicit Processes of Peak Facial Expressions.

    PubMed

    Xiao, Ruiqi; Li, Xianchun; Li, Lin; Wang, Yanmei

    2016-01-01

    Most previous studies on facial expression recognition have focused on moderate emotions; to date, few studies have been conducted to investigate the explicit and implicit processes of peak emotions. In the current study, we used images of transient, peak-intensity expressions of athletes at the winning or losing point of a competition as materials, and investigated the diagnosability of peak facial expressions at both implicit and explicit levels. In Experiment 1, participants were instructed to evaluate isolated faces, isolated bodies, and face-body compounds, and eye movements were recorded. The results revealed that the isolated body and face-body congruent images were better recognized than isolated face and face-body incongruent images, indicating that the emotional information conveyed by facial cues was ambiguous and that the body cues influenced facial emotion recognition. Furthermore, eye movement records showed that the participants displayed distinct gaze patterns for the congruent and incongruent compounds. In Experiment 2A, the subliminal affective priming task was used, with faces as primes and bodies as targets, to investigate the unconscious emotion perception of peak facial expressions. The results showed that a winning face prime facilitated reactions to a winning body target, whereas a losing face prime inhibited reactions to a winning body target, suggesting that peak facial expressions could be perceived at the implicit level. In general, the results indicate that peak facial expressions cannot be consciously recognized but can be perceived at the unconscious level. In Experiment 2B, a revised subliminal affective priming task and a strict awareness test were used to examine the validity of the unconscious perception of peak facial expressions found in Experiment 2A. The results of Experiment 2B showed that reaction time to both winning body targets and losing body targets was influenced by the invisible peak facial expression primes, which indicated unconscious perception of peak facial expressions.

  12. Can We Distinguish Emotions from Faces? Investigation of Implicit and Explicit Processes of Peak Facial Expressions

    PubMed Central

    Xiao, Ruiqi; Li, Xianchun; Li, Lin; Wang, Yanmei

    2016-01-01

    Most previous studies on facial expression recognition have focused on moderate emotions; to date, few studies have been conducted to investigate the explicit and implicit processes of peak emotions. In the current study, we used images of transient, peak-intensity expressions of athletes at the winning or losing point of a competition as materials, and investigated the diagnosability of peak facial expressions at both implicit and explicit levels. In Experiment 1, participants were instructed to evaluate isolated faces, isolated bodies, and face-body compounds, and eye movements were recorded. The results revealed that the isolated body and face-body congruent images were better recognized than isolated face and face-body incongruent images, indicating that the emotional information conveyed by facial cues was ambiguous and that the body cues influenced facial emotion recognition. Furthermore, eye movement records showed that the participants displayed distinct gaze patterns for the congruent and incongruent compounds. In Experiment 2A, the subliminal affective priming task was used, with faces as primes and bodies as targets, to investigate the unconscious emotion perception of peak facial expressions. The results showed that a winning face prime facilitated reactions to a winning body target, whereas a losing face prime inhibited reactions to a winning body target, suggesting that peak facial expressions could be perceived at the implicit level. In general, the results indicate that peak facial expressions cannot be consciously recognized but can be perceived at the unconscious level. In Experiment 2B, a revised subliminal affective priming task and a strict awareness test were used to examine the validity of the unconscious perception of peak facial expressions found in Experiment 2A. The results of Experiment 2B showed that reaction time to both winning body targets and losing body targets was influenced by the invisible peak facial expression primes, which indicated unconscious perception of peak facial expressions. PMID:27630604

  13. Seeing Objects as Faces Enhances Object Detection.

    PubMed

    Takahashi, Kohske; Watanabe, Katsumi

    2015-10-01

    The face is a special visual stimulus. Both bottom-up processes for low-level facial features and top-down modulation by face expectations contribute to the advantages of face perception. However, it is hard to dissociate the top-down factors from the bottom-up processes, since facial stimuli mandatorily lead to face awareness. In the present study, using the face pareidolia phenomenon, we demonstrated that face awareness, namely seeing an object as a face, enhances object detection performance. In face pareidolia, some people see a visual stimulus, for example, three dots arranged in V shape, as a face, while others do not. This phenomenon allows us to investigate the effect of face awareness leaving the stimulus per se unchanged. Participants were asked to detect a face target or a triangle target. While target per se was identical between the two tasks, the detection sensitivity was higher when the participants recognized the target as a face. This was the case irrespective of the stimulus eccentricity or the vertical orientation of the stimulus. These results demonstrate that seeing an object as a face facilitates object detection via top-down modulation. The advantages of face perception are, therefore, at least partly, due to face awareness.

  14. Seeing Objects as Faces Enhances Object Detection

    PubMed Central

    Takahashi, Kohske; Watanabe, Katsumi

    2015-01-01

    The face is a special visual stimulus. Both bottom-up processes for low-level facial features and top-down modulation by face expectations contribute to the advantages of face perception. However, it is hard to dissociate the top-down factors from the bottom-up processes, since facial stimuli mandatorily lead to face awareness. In the present study, using the face pareidolia phenomenon, we demonstrated that face awareness, namely seeing an object as a face, enhances object detection performance. In face pareidolia, some people see a visual stimulus, for example, three dots arranged in V shape, as a face, while others do not. This phenomenon allows us to investigate the effect of face awareness leaving the stimulus per se unchanged. Participants were asked to detect a face target or a triangle target. While target per se was identical between the two tasks, the detection sensitivity was higher when the participants recognized the target as a face. This was the case irrespective of the stimulus eccentricity or the vertical orientation of the stimulus. These results demonstrate that seeing an object as a face facilitates object detection via top-down modulation. The advantages of face perception are, therefore, at least partly, due to face awareness. PMID:27648219

  15. Barely legal: is attraction and estimated age of young female faces disrupted by alcohol use, make up, and the sex of the observer?

    PubMed

    Egan, Vincent; Cordan, Giray

    2009-05-01

    One 'reasonable ground' for unlawful sex with a minor is mistaken age. Alcohol consumption and make-up are often deemed further influences on impaired perception. Two hundred and forty persons in bars and cafes rated the attractiveness of composite faces of immature and mature females with and without additional makeup, alcohol users having their concurrent blood-alcohol level measured using a breathalyser. A non-sex-specific preference for immature faces over sexually mature faces was found. Alcohol and make-up did not inflate attractiveness ratings in immature faces. While alcohol consumption significantly inflated attractiveness ratings for participants viewing made-up sexually mature faces, greater alcohol consumption itself did not lead to overestimation of age. Although alcohol limited the processing of maturity cues in female observers, it had no effect on the age perceptions of males viewing female faces, suggesting male mate preferences are not easily disrupted. Participants consistently overestimated the age of sexually immature- and sexually mature-faces by an average of 3.5 years. Our study suggests that even heavy alcohol consumption does not interfere with age-perception tasks in men, so is not of itself an excuse for apparent mistaken age in cases of unlawful sex with a minor.

  16. Sensitivity to spatial frequency content is not specific to face perception

    PubMed Central

    Williams, N. Rankin; Willenbockel, Verena; Gauthier, Isabel

    2010-01-01

    Prior work using a matching task between images that were complementary in spatial frequency and orientation information suggested that the representation of faces, but not objects, retains low-level spatial frequency (SF) information (Biederman & Kalocsai, 1997). In two experiments, we reexamine the claim that faces are uniquely sensitive to changes in SF. In contrast to prior work, we used a design allowing the computation of sensitivity and response criterion for each category, and in one experiment, equalized low-level image properties across object categories. In both experiments, we find that observers are sensitive to SF changes for upright and inverted faces and nonface objects. Differential response biases across categories contributed to a larger sensitivity for faces, but even sensitivity showed a larger effect for faces, especially when faces were upright and in a front-facing view. However, when objects were inverted, or upright but shown in a three-quarter view, the matching of objects and faces was equally sensitive to SF changes. Accordingly, face perception does not appear to be uniquely affected by changes in SF content. PMID:19576237
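
    The "sensitivity and response criterion" computed in this design are the standard signal detection measures: d' indexes how well changed and unchanged stimuli are discriminated independently of bias, and the criterion c indexes the bias itself. Below is a generic sketch of that computation from trial counts; the log-linear correction and the example numbers are conventions chosen for illustration, not necessarily those used in the study.

```python
# Generic signal detection computation of sensitivity (d') and criterion (c).
# The log-linear correction and example counts are illustrative choices.
from scipy.stats import norm

def dprime_criterion(hits, misses, false_alarms, correct_rejections):
    # Nudge rates away from 0 and 1 so the z-transform stays finite.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
    return d_prime, criterion

# Example: 40 "same" and 40 "different" trials for one stimulus category
print(dprime_criterion(hits=32, misses=8, false_alarms=10, correct_rejections=30))
```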

  17. Ensemble coding of face identity is present but weaker in congenital prosopagnosia.

    PubMed

    Robson, Matthew K; Palermo, Romina; Jeffery, Linda; Neumann, Markus F

    2018-03-01

    Individuals with congenital prosopagnosia (CP) are impaired at identifying individual faces but do not appear to show impairments in extracting the average identity from a group of faces (known as ensemble coding). However, possible deficits in ensemble coding in a previous study (CPs n = 4) may have been masked because CPs relied on pictorial (image) cues rather than identity cues. Here we asked whether a larger sample of CPs (n = 11) would show intact ensemble coding of identity when availability of image cues was minimised. Participants viewed a "set" of four faces and then judged whether a subsequent individual test face, either an exemplar or a "set average", was in the preceding set. Ensemble coding occurred when matching (vs. mismatching) averages were mistakenly endorsed as set members. We assessed both image- and identity-based ensemble coding, by varying whether test faces were either the same or different images of the identities in the set. CPs showed significant ensemble coding in both tasks, indicating that their performance was independent of image cues. As a group, CPs' ensemble coding was weaker than controls in both tasks, consistent with evidence that perceptual processing of face identity is disrupted in CP. This effect was driven by CPs (n= 3) who, in addition to having impaired face memory, also performed particularly poorly on a measure of face perception (CFPT). Future research, using larger samples, should examine whether deficits in ensemble coding may be restricted to CPs who also have substantial face perception deficits. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. Evidence for perceptual deficits in associative visual (prosop)agnosia: a single-case study.

    PubMed

    Delvenne, Jean François; Seron, Xavier; Coyette, Françoise; Rossion, Bruno

    2004-01-01

    Associative visual agnosia is classically defined as normal visual perception stripped of its meaning [Archiv für Psychiatrie und Nervenkrankheiten 21 (1890) 22/English translation: Cognitive Neuropsychol. 5 (1988) 155]: these patients cannot access their stored visual memories to categorize objects that are nonetheless perceived correctly. However, according to an influential theory of visual agnosia [Farah, Visual Agnosia: Disorders of Object Recognition and What They Tell Us about Normal Vision, MIT Press, Cambridge, MA, 1990], visual associative agnosics necessarily present perceptual deficits that are the cause of their impairment at object recognition. Here we report a detailed investigation of a patient with bilateral occipito-temporal lesions strongly impaired at object and face recognition. NS presents normal copying of drawings and normal performance on object and face matching tasks as used in classical neuropsychological tests. However, when tested with several computer tasks using carefully controlled visual stimuli and taking both his accuracy rate and response times into account, NS was found to have abnormal performance in high-level visual processing of objects and faces. Although NS presents a different pattern of deficits than previously described integrative agnosic patients such as HJA and LH, his deficits were characterized by an inability to integrate individual parts into a whole percept, as suggested by his failure at processing structurally impossible three-dimensional (3D) objects, an absence of face inversion effects, and an advantage at detecting and matching single parts. Taken together, these observations question the idea of separate visual representations for object/face perception and object/face knowledge derived from investigations of visual associative (prosop)agnosia, and they raise some methodological issues in the analysis of single-case studies of (prosop)agnosic patients.

  19. Gaze control during face exploration in schizophrenia.

    PubMed

    Delerue, Céline; Laprévote, Vincent; Verfaillie, Karl; Boucart, Muriel

    2010-10-04

    Patients with schizophrenia perform worse than controls on various face perception tasks. Studies monitoring eye movements have shown reduced scan paths and a lower number of fixations to relevant facial features (eyes, nose, mouth) than to other parts. We examine whether attentional control, through instructions, modulates visual scanning in schizophrenia. Visual scan paths were monitored in 20 patients with schizophrenia and 20 controls. Participants started with a "free viewing" task followed by tasks in which they were asked to determine the gender, identify the facial expression, estimate the age, or decide whether the face was known or unknown. Temporal and spatial characteristics of scan paths were compared for each group and task. Consistent with the literature, patients with schizophrenia showed reduced attention to salient facial features in the passive viewing task. However, their scan paths did not differ from those of controls when asked to determine the facial expression, the gender, the age, or the familiarity of the face. The results are interpreted in terms of attentional control and cognitive flexibility. (c) 2010 Elsevier Ireland Ltd. All rights reserved.

  20. Coordinated roles of motivation and perception in the regulation of intergroup responses: frontal cortical asymmetry effects on the P2 event-related potential and behavior.

    PubMed

    Amodio, David M

    2010-11-01

    Self-regulation is believed to involve changes in motivation and perception that function to promote goal-driven behavior. However, little is known about the way these processes interact during the on-line engagement of self-regulation. The present study examined the coordination of motivation, perception, and action control in White American participants as they regulated responses on a racial stereotyping task. Electroencephalographic indices of approach motivation (left frontal cortical asymmetry) and perceptual attention to Black versus White faces (the P2 event-related potential) were assessed during task performance. Action control was modeled from task behavior using the process-dissociation procedure. A pattern of moderated mediation emerged, such that stronger left frontal activity predicted larger P2 responses to race, which in turn predicted better action control, especially for participants holding positive racial attitudes. Results supported the hypothesis that motivation tunes perception to facilitate goal-directed action. Implications for theoretical models of intergroup response regulation, the P2 component, and the relation between motivation and perception are discussed.
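
    The process-dissociation procedure named here decomposes task behavior into a controlled estimate (C) and an automatic estimate (A) by comparing trials where control and the automatic bias converge (congruent) with trials where they conflict (incongruent). A minimal sketch of the standard estimation equations follows; the example proportions are invented for illustration.

```python
# Standard process-dissociation estimates of controlled (C) and automatic (A)
# processing (cf. Jacoby, 1991; Payne, 2001). Example proportions are invented.
def process_dissociation(p_correct_congruent, p_error_incongruent):
    # On congruent trials, a correct response follows from control or, when
    # control fails, from the automatic bias: P(correct|con) = C + A(1 - C).
    # On incongruent trials, the bias produces errors only when control
    # fails: P(error|incon) = A(1 - C).
    control = p_correct_congruent - p_error_incongruent
    automatic = p_error_incongruent / (1.0 - control) if control < 1.0 else float("nan")
    return control, automatic

# Example: 90% correct on congruent trials, 30% stereotype-consistent errors
# on incongruent trials -> C = 0.60, A = 0.75
print(process_dissociation(0.90, 0.30))
```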

  1. Auditory Emotion Word Primes Influence Emotional Face Categorization in Children and Adults, but Not Vice Versa.

    PubMed

    Vesker, Michael; Bahn, Daniela; Kauschke, Christina; Tschense, Monika; Degé, Franziska; Schwarzer, Gudrun

    2018-01-01

    In order to assess how the perception of audible speech and facial expressions influence one another for the perception of emotions, and how this influence might change over the course of development, we conducted two cross-modal priming experiments with three age groups of children (6-, 9-, and 12-years old), as well as college-aged adults. In Experiment 1, 74 children and 24 adult participants were tasked with categorizing photographs of emotional faces as positive or negative as quickly as possible after being primed with emotion words presented via audio in valence-congruent and valence-incongruent trials. In Experiment 2, 67 children and 24 adult participants carried out a similar categorization task, but with faces acting as visual primes, and emotion words acting as auditory targets. The results of Experiment 1 showed that participants made more errors when categorizing positive faces primed by negative words versus positive words, and that 6-year-old children are particularly sensitive to positive word primes, giving faster correct responses regardless of target valence. Meanwhile, the results of Experiment 2 did not show any congruency effects for priming by facial expressions. Thus, audible emotion words seem to exert an influence on the emotional categorization of faces, while faces do not seem to influence the categorization of emotion words in a significant way.

  2. Face and emotion expression processing and the serotonin transporter polymorphism 5-HTTLPR/rs25531.

    PubMed

    Hildebrandt, A; Kiy, A; Reuter, M; Sommer, W; Wilhelm, O

    2016-06-01

    Face cognition, including face identity and facial expression processing, is a crucial component of socio-emotional abilities, characterizing humans as highly developed social beings. However, for these trait domains, molecular genetic studies investigating gene-behavior associations based on well-founded phenotype definitions are still rare. We examined the relationship between 5-HTTLPR/rs25531 polymorphisms - related to serotonin reuptake - and the ability to perceive and recognize faces and emotional expressions in human faces. To this end, we conducted structural equation modeling on data from 230 young adults, obtained by using a comprehensive, multivariate task battery with maximal effort tasks. By additionally modeling fluid intelligence and immediate and delayed memory factors, we aimed to address the discriminant relationships of the 5-HTTLPR/rs25531 polymorphisms with socio-emotional abilities. We found a robust association between the 5-HTTLPR/rs25531 polymorphism and facial emotion perception. Carriers of two long (L) alleles outperformed carriers of one or two S alleles. Weaker associations were present for face identity perception and memory for emotional facial expressions. There was no association between the 5-HTTLPR/rs25531 polymorphism and non-social abilities, demonstrating discriminant validity of the relationships. We discuss the implications and possible neural mechanisms underlying these novel findings. © 2016 John Wiley & Sons Ltd and International Behavioural and Neural Genetics Society.

  3. Visual and auditory socio-cognitive perception in unilateral temporal lobe epilepsy in children and adolescents: a prospective controlled study.

    PubMed

    Laurent, Agathe; Arzimanoglou, Alexis; Panagiotakaki, Eleni; Sfaello, Ignacio; Kahane, Philippe; Ryvlin, Philippe; Hirsch, Edouard; de Schonen, Scania

    2014-12-01

    A high rate of abnormal social behavioural traits or perceptual deficits is observed in children with unilateral temporal lobe epilepsy. In the present study, perception of auditory and visual social signals, carried by faces and voices, was evaluated in children or adolescents with temporal lobe epilepsy. We prospectively investigated a sample of 62 children with focal non-idiopathic epilepsy early in the course of the disorder. The present analysis included 39 children with a confirmed diagnosis of temporal lobe epilepsy. Seventy-two control participants, distributed across 10 age groups, served as the comparison group. Our socio-perceptual evaluation protocol comprised three socio-visual tasks (face identity, facial emotion and gaze direction recognition), two socio-auditory tasks (voice identity and emotional prosody recognition), and three control tasks (lip reading, geometrical pattern and linguistic intonation recognition). All 39 patients also underwent a neuropsychological examination. As a group, children with temporal lobe epilepsy performed at a significantly lower level compared to the control group with regard to recognition of facial identity, direction of eye gaze, and emotional facial expressions. We found no relationship between the type of visual deficit and age at first seizure, duration of epilepsy, or the epilepsy-affected cerebral hemisphere. Deficits in socio-perceptual tasks could be found independently of the presence of deficits in visual or auditory episodic memory, visual non-facial pattern processing (control tasks), or speech perception. A normal FSIQ did not exempt some of the patients from an underlying deficit in some of the socio-perceptual tasks. Temporal lobe epilepsy not only impairs development of emotion recognition, but can also impair development of perception of other socio-perceptual signals in children with or without intellectual deficiency. Prospective studies need to be designed to evaluate the results of appropriate re-education programs in children presenting with deficits in social cue processing.

  4. Motion facilitates face perception across changes in viewpoint and expression in older adults.

    PubMed

    Maguinness, Corrina; Newell, Fiona N

    2014-12-01

    Faces are inherently dynamic stimuli. However, face perception in younger adults appears to be mediated by the ability to extract structural cues from static images and a benefit of motion is inconsistent. In contrast, static face processing is poorer and more image-dependent in older adults. We therefore compared the role of facial motion in younger and older adults to assess whether motion can enhance perception when static cues are insufficient. In our studies, older and younger adults learned faces presented in motion or in a sequence of static images, containing rigid (viewpoint) or nonrigid (expression) changes. Immediately following learning, participants matched a static test image to the learned face which varied by viewpoint (Experiment 1) or expression (Experiment 2) and was either learned or novel. First, we found an age effect with better face matching performance in younger than in older adults. However, we observed face matching performance improved in the older adult group, across changes in viewpoint and expression, when faces were learned in motion relative to static presentation. There was no benefit for facial (nonrigid) motion when the task involved matching inverted faces (Experiment 3), suggesting that the ability to use dynamic face information for the purpose of recognition reflects motion encoding which is specific to upright faces. Our results suggest that ageing may offer a unique insight into how dynamic cues support face processing, which may not be readily observed in younger adults' performance. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  5. Which technology to investigate visual perception in sport: video vs. virtual reality.

    PubMed

    Vignais, Nicolas; Kulpa, Richard; Brault, Sébastien; Presse, Damien; Bideau, Benoit

    2015-02-01

    Visual information uptake is a fundamental element of sports involving interceptive tasks. Several methodologies, like video and methods based on virtual environments, are currently employed to analyze visual perception during sport situations. Both techniques have advantages and drawbacks. The goal of this study is to determine which of these technologies may be preferentially used to analyze visual information uptake during a sport situation. To this aim, we compared a handball goalkeeper's performance using two standardized methodologies: video clip and virtual environment. We examined this performance for two response tasks: an uncoupled task (goalkeepers show where the ball ends) and a coupled task (goalkeepers try to intercept the virtual ball). Variables investigated in this study were percentage of correct zones, percentage of correct responses, radial error and response time. The results showed that handball goalkeepers were more effective, more accurate and started to intercept earlier when facing a virtual handball thrower than when facing the video clip. These findings suggested that the analysis of visual information uptake for handball goalkeepers was better performed by using a 'virtual reality'-based methodology. Technical and methodological aspects of these findings are discussed further. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Structural encoding processes contribute to individual differences in face and object cognition: Inferences from psychometric test performance and event-related brain potentials.

    PubMed

    Nowparast Rostami, Hadiseh; Sommer, Werner; Zhou, Changsong; Wilhelm, Oliver; Hildebrandt, Andrea

    2017-10-01

    The enhanced N1 component in event-related potentials (ERP) to face stimuli, termed N170, is considered to indicate the structural encoding of faces. Previously, individual differences in the latency of the N170 have been related to face and object cognition abilities. By orthogonally manipulating content domain (faces vs objects) and task demands (easy/speed vs difficult/accuracy) in both psychometric and EEG tasks, we investigated the uniqueness of the processes underlying face cognition as compared with object cognition and the extent to which the N1/N170 component can explain individual differences in face and object cognition abilities. Data were recorded from N = 198 healthy young adults. Structural equation modeling (SEM) confirmed that the accuracies of face perception (FP) and memory are specific abilities above general object cognition; in contrast, the speed of face processing was not differentiable from the speed of object cognition. Although there was considerable domain-general variance in the N170 shared with the N1, there was significant face-specific variance in the N170. The brain-behavior relationship showed that faster face-specific processes for structural encoding of faces are associated with higher accuracy in both perceiving and memorizing faces. Moreover, in difficult task conditions, qualitatively different processes are additionally needed for recognizing face and object stimuli as compared with easy tasks. The difficulty-dependent variance components in the N170 amplitude were related with both face and object memory (OM) performance. We discuss implications for understanding individual differences in face cognition. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Cultural differences in on-line sensitivity to emotional voices: comparing East and West

    PubMed Central

    Liu, Pan; Rigoulot, Simon; Pell, Marc D.

    2015-01-01

    Evidence that culture modulates on-line neural responses to the emotional meanings encoded by vocal and facial expressions was demonstrated recently in a study comparing English North Americans and Chinese (Liu et al., 2015). Here, we compared how individuals from these two cultures passively respond to emotional cues from faces and voices using an Oddball task. Participants viewed in-group emotional faces, with or without simultaneous vocal expressions, while performing a face-irrelevant visual task as the EEG was recorded. A significantly larger visual Mismatch Negativity (vMMN) was observed for Chinese vs. English participants when faces were accompanied by voices, suggesting that Chinese were influenced to a larger extent by task-irrelevant vocal cues. These data highlight further differences in how adults from East Asian vs. Western cultures process socio-emotional cues, arguing that distinct cultural practices in communication (e.g., display rules) shape neurocognitive activity associated with the early perception and integration of multi-sensory emotional cues. PMID:26074808
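
    Computationally, the visual mismatch negativity (vMMN) compared across groups here is a difference wave: the averaged ERP to deviant stimuli minus the averaged ERP to standard stimuli, quantified over a latency window. A generic numpy sketch under those assumptions is given below; the epoch shapes, latency window, and sampling rate are placeholders rather than the study's parameters.

```python
# Generic sketch of a visual mismatch negativity (vMMN) difference wave.
# Epoch arrays, latency window, and sampling rate are illustrative placeholders.
import numpy as np

def vmmn_difference(deviant_epochs, standard_epochs, times, window=(0.15, 0.35)):
    """Return the deviant-minus-standard difference wave for one electrode and
    its mean amplitude within the given latency window (in seconds)."""
    diff_wave = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
    in_window = (times >= window[0]) & (times <= window[1])
    return diff_wave, diff_wave[in_window].mean()

# Toy usage: 100 trials per condition, 1-s epochs sampled at 250 Hz
rng = np.random.default_rng(2)
times = np.linspace(0.0, 1.0, 250)
deviants = rng.normal(size=(100, 250))
standards = rng.normal(size=(100, 250))
print(vmmn_difference(deviants, standards, times)[1])
```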

  8. Global-Local Precedence in the Perception of Facial Age and Emotional Expression by Children with Autism and Other Developmental Disabilities

    ERIC Educational Resources Information Center

    Gross, Thomas F.

    2005-01-01

    Global information processing and perception of facial age and emotional expression was studied in children with autism, language disorders, mental retardation, and a clinical control group. Children were given a global-local task and asked to recognize age and emotion in human and canine faces. Children with autism made fewer global responses and…

  9. Alterations in neural processing of emotional faces in adolescent anorexia nervosa patients - an event-related potential study.

    PubMed

    Sfärlea, Anca; Greimel, Ellen; Platt, Belinda; Bartling, Jürgen; Schulte-Körne, Gerd; Dieler, Alica C

    2016-09-01

    The present study explored the neurophysiological correlates of perception and recognition of emotional facial expressions in adolescent anorexia nervosa (AN) patients using event-related potentials (ERPs). We included 20 adolescent girls with AN and 24 healthy girls and recorded ERPs during a passive viewing task and three active tasks requiring processing of emotional faces in varying processing depths; one of the tasks also assessed emotion recognition abilities behaviourally. Despite the absence of behavioural differences, we found that across all tasks AN patients exhibited a less pronounced early posterior negativity (EPN) in response to all facial expressions compared to controls. The EPN is an ERP component reflecting an automatic, perceptual processing stage which is modulated by the intrinsic salience of a stimulus. Hence, the less pronounced EPN in anorexic girls suggests that they might perceive other people's faces as less intrinsically relevant, i.e. as less "important" than do healthy girls. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Differential effects of object-based attention on evoked potentials to fearful and disgusted faces.

    PubMed

    Santos, Isabel M; Iglesias, Jaime; Olivares, Ela I; Young, Andrew W

    2008-04-01

    Event-related potentials (ERPs) were used to investigate the role of attention on the processing of facial expressions of fear and disgust. Stimuli consisted of overlapping pictures of a face and a house. Participants had to monitor repetitions of faces or houses, in separate blocks of trials, so that object-based attention was manipulated while spatial attention was kept constant. Faces varied in expression and could be either fearful or neutral (in the fear condition) or disgusted or neutral (in the disgust condition). When attending to faces, participants were required to signal repetitions of the same person, with the facial expressions being completely irrelevant to the task. Different effects of selective attention and different patterns of brain activity were observed for faces with fear and disgust expressions. Results indicated that the perception of fear from faces is gated by selective attention at early latencies, whereas a sustained positivity for fearful faces compared to neutral faces emerged around 160ms at central-parietal sites, independent of selective attention. In the case of disgust, ERP differences began only around 160ms after stimulus onset, and only after 480ms was the perception of disgust modulated by attention allocation. Results are interpreted in terms of different neural mechanisms for the perception of fear and disgust and related to the functional significance of these two emotions for the survival of the organism.

  11. Behavioral dissociation between emotional and non-emotional facial expressions in congenital prosopagnosia

    PubMed Central

    Daini, Roberta; Comparetti, Chiara M.; Ricciardelli, Paola

    2014-01-01

    Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown if a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand if individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as an already seen one, and thus process this facial dimension independently from features (which are impaired in CP) and basic emotional expressions. To this end, we carried out a behavioral study in which we compared the performance of 6 CP individuals to that of typically developing individuals, using upright and inverted faces. Four avatar faces with a neutral expression were presented in the initial phase. The target faces presented in the recognition phase, in which a recognition task was requested (2AFC paradigm), could be identical (neutral) to those of the initial phase or present biologically plausible changes to features, non-emotional expressions, or emotional expressions. After this task, a second task was performed, in which the participants had to detect whether or not the recognized face exactly matched the study face or showed any difference. The results confirmed the CPs' impairment in the configural processing of the invariant aspects of the face, but also showed a spared configural processing of non-emotional facial expression (task 1). Interestingly and unlike the non-emotional expressions, the configural processing of emotional expressions was compromised in CPs and did not improve their change detection ability (task 2). These new results have theoretical implications for face perception models since they suggest that, at least in CPs, non-emotional expressions are processed configurally, can be dissociated from other facial dimensions, and may serve as a compensatory strategy to achieve face recognition. PMID:25520643

  12. Behavioral dissociation between emotional and non-emotional facial expressions in congenital prosopagnosia.

    PubMed

    Daini, Roberta; Comparetti, Chiara M; Ricciardelli, Paola

    2014-01-01

    Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown if a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand if individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as an already seen one, and thus process this facial dimension independently from features (which are impaired in CP) and basic emotional expressions. To this end, we carried out a behavioral study in which we compared the performance of 6 CP individuals to that of typically developing individuals, using upright and inverted faces. Four avatar faces with a neutral expression were presented in the initial phase. The target faces presented in the recognition phase, in which a recognition task was requested (2AFC paradigm), could be identical (neutral) to those of the initial phase or present biologically plausible changes to features, non-emotional expressions, or emotional expressions. After this task, a second task was performed, in which the participants had to detect whether or not the recognized face exactly matched the study face or showed any difference. The results confirmed the CPs' impairment in the configural processing of the invariant aspects of the face, but also showed a spared configural processing of non-emotional facial expression (task 1). Interestingly and unlike the non-emotional expressions, the configural processing of emotional expressions was compromised in CPs and did not improve their change detection ability (task 2). These new results have theoretical implications for face perception models since they suggest that, at least in CPs, non-emotional expressions are processed configurally, can be dissociated from other facial dimensions, and may serve as a compensatory strategy to achieve face recognition.

  13. Bidirectional communication between amygdala and fusiform gyrus during facial recognition.

    PubMed

    Herrington, John D; Taylor, James M; Grupe, Daniel W; Curby, Kim M; Schultz, Robert T

    2011-06-15

    Decades of research have documented the specialization of the fusiform gyrus (FG) for facial information processing. Recent theories indicate that FG activity is shaped by input from the amygdala, but effective connectivity from amygdala to FG remains undocumented. In this fMRI study, 39 participants completed a face recognition task. Eleven participants underwent the same experiment approximately four months later. Robust face-selective activation of FG, amygdala, and lateral occipital cortex was observed. Dynamic causal modeling and Bayesian Model Selection (BMS) were used to test the intrinsic connections between these structures and their modulation by face perception. BMS results strongly favored a dynamic causal model with bidirectional, face-modulated amygdala-FG connections. However, the right hemisphere connections diminished at time 2, with the face modulation parameter no longer surviving Bonferroni correction. These findings suggest that the amygdala strongly influences FG function during face perception, and that this influence is shaped by experience and stimulus salience. Copyright © 2011 Elsevier Inc. All rights reserved.
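
    For readers unfamiliar with Bayesian Model Selection, the sketch below illustrates the core comparison step in a deliberately simplified, hypothetical form: candidate connectivity models (for example, unidirectional versus bidirectional amygdala-FG coupling) are ranked by summing their log-evidences over subjects and converting those sums to posterior model probabilities under a flat prior. This is not the authors' pipeline, and all log-evidence values below are invented for illustration.

      # Minimal, hypothetical sketch of fixed-effects Bayesian Model Selection (BMS).
      import numpy as np

      # log-evidence per subject (rows) for three hypothetical connectivity models (columns)
      log_evidence = np.array([
          [-1210.3, -1195.7, -1189.2],
          [-1302.1, -1290.4, -1284.9],
          [-1150.8, -1142.2, -1137.5],
      ])

      group_log_evidence = log_evidence.sum(axis=0)        # fixed effects: sum over subjects
      rel = group_log_evidence - group_log_evidence.max()  # subtract max for numerical stability
      posterior = np.exp(rel) / np.exp(rel).sum()          # posterior model probability (flat prior)

      for name, p in zip(["amygdala->FG", "FG->amygdala", "bidirectional"], posterior):
          print(f"{name}: posterior probability = {p:.3f}")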

  14. Passive and Motivated Perception of Emotional Faces: Qualitative and Quantitative Changes in the Face Processing Network

    PubMed Central

    Skelly, Laurie R.; Decety, Jean

    2012-01-01

    Emotionally expressive faces are processed by a distributed network of interacting sub-cortical and cortical brain regions. The components of this network have been identified and described in large part by the stimulus properties to which they are sensitive, but as face processing research matures, interest has broadened to also probe dynamic interactions between these regions and top-down influences such as task demand and context. While some research has tested the robustness of affective face processing by restricting available attentional resources, it is not known whether face network processing can be augmented by increased motivation to attend to affective face stimuli. Short videos of people expressing emotions were presented to healthy participants during functional magnetic resonance imaging. Motivation to attend to the videos was manipulated by providing an incentive for improved recall performance. During the motivated condition, there was greater coherence among nodes of the face processing network, more widespread correlation between signal intensity and performance, and selective signal increases in a task-relevant subset of face processing regions, including the posterior superior temporal sulcus and right amygdala. In addition, an unexpected task-related laterality effect was seen in the amygdala. These findings provide strong evidence that motivation augments co-activity among nodes of the face processing network and the impact of neural activity on performance. These within-subject effects highlight the need to consider motivation when interpreting neural function in special populations, and to further explore the effect of task demands on face processing in healthy brains. PMID:22768287

  15. The complex duration perception of emotional faces: effects of face direction.

    PubMed

    Kliegl, Katrin M; Limbrecht-Ecklundt, Kerstin; Dürr, Lea; Traue, Harald C; Huckauf, Anke

    2015-01-01

    The perceived duration of emotional face stimuli strongly depends on the expressed emotion. However, emotional faces also differ regarding a number of other features like gaze, face direction, or sex. Usually, these features have been controlled by only using pictures of female models with straight gaze and face direction. Doi and Shinohara (2009) reported that an overestimation of angry faces could only be found when the model's gaze was oriented toward the observer. We aimed to replicate this effect for face direction. Moreover, we explored the effect of face direction on the duration perception of sad faces. Controlling for the sex of the face model and the participant, female and male participants rated the duration of neutral, angry, and sad face stimuli of both sexes photographed from different perspectives in a bisection task. In line with current findings, we report a significant overestimation of angry compared to neutral face stimuli that was modulated by face direction. Moreover, the perceived duration of sad face stimuli did not differ from that of neutral faces and was not influenced by face direction. Furthermore, we found that faces of the opposite sex appeared to last longer than those of the same sex. This outcome is discussed with regard to stimulus parameters like the induced arousal, social relevance, and an evolutionary context.
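
    As an illustration of how a temporal bisection task is commonly scored, the hypothetical sketch below fits a logistic psychometric function to the proportion of "long" responses and reads off the point of subjective equality (PSE); a lower PSE for angry than for neutral faces would correspond to the reported overestimation. The data and function choice are illustrative assumptions, not the authors' analysis.

      # Hypothetical temporal bisection analysis: estimate the PSE per condition.
      import numpy as np
      from scipy.optimize import curve_fit

      durations = np.array([400, 600, 800, 1000, 1200, 1400, 1600])  # stimulus durations in ms
      p_long_neutral = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.97])  # invented data
      p_long_angry   = np.array([0.08, 0.22, 0.45, 0.68, 0.85, 0.95, 0.99])  # invented data

      def logistic(x, pse, slope):
          # probability of responding "long" as a function of duration
          return 1.0 / (1.0 + np.exp(-(x - pse) / slope))

      for label, data in [("neutral", p_long_neutral), ("angry", p_long_angry)]:
          (pse, slope), _ = curve_fit(logistic, durations, data, p0=[1000.0, 100.0])
          print(f"{label}: PSE = {pse:.0f} ms")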

  16. Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception.

    PubMed

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers' voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker's face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.

  17. When does Subliminal Affective Image Priming Influence the Ability of Schizophrenic Patients to Perceive Face Emotions?

    PubMed Central

    Vaina, Lucia M.; Rana, Kunjan D.; Cotos, Ionela; Li-Yang, Chen; Huang, Melissa A.; Podea, Delia

    2014-01-01

    Background: Deficits in face emotion perception are among the most pervasive impairments in schizophrenia and strongly affect interpersonal communication and social skills. Material/Methods: Schizophrenic patients (PSZ) and healthy control subjects (HCS) performed 2 psychophysical tasks. One, the SAFFIMAP test, was designed to determine the impact of subliminally presented affective or neutral images on the accuracy of face-expression (angry or neutral) perception. In the second test, FEP, subjects saw pictures of facial expressions and were asked to rate them as angry, happy, or neutral. The following clinical scales were used to determine the acute symptoms in PSZ: Positive and Negative Syndrome (PANSS), Young Mania Rating (YMRS), Hamilton Depression (HAM-D), and Hamilton Anxiety (HAM-A). Results: On the SAFFIMAP test, unlike the HCS group, the PSZ group tended to categorize the neutral expression of test faces as angry, and their response to the test-face expression was not influenced by the affective content of the primes. In PSZ, the PANSS-positive score was significantly correlated with correct perception of angry faces for aggressive or pleasant primes. YMRS scores were strongly correlated with PSZ’s tendency to recognize angry face expressions when the prime was a pleasant or a neutral image. The HAM-D score was positively correlated with categorizing the test faces as neutral, regardless of the affective content of the prime or of the test-face expression (angry or neutral). Conclusions: Despite its exploratory nature, this study provides the first evidence that conscious perception and categorization of facial emotions (neutral or angry) in PSZ is directly affected by the positive or negative symptoms of the disease, as defined by individual scores on the clinical diagnostic scales. PMID:25537115

  18. When does subliminal affective image priming influence the ability of schizophrenic patients to perceive face emotions?

    PubMed

    Vaina, Lucia Maria; Rana, Kunjan D; Cotos, Ionela; Li-Yang, Chen; Huang, Melissa A; Podea, Delia

    2014-12-24

    Deficits in face emotion perception are among the most pervasive impairments in schizophrenia and strongly affect interpersonal communication and social skills. Schizophrenic patients (PSZ) and healthy control subjects (HCS) performed 2 psychophysical tasks. One, the SAFFIMAP test, was designed to determine the impact of subliminally presented affective or neutral images on the accuracy of face-expression (angry or neutral) perception. In the second test, FEP, subjects saw pictures of facial expressions and were asked to rate them as angry, happy, or neutral. The following clinical scales were used to determine the acute symptoms in PSZ: Positive and Negative Syndrome (PANSS), Young Mania Rating (YMRS), Hamilton Depression (HAM-D), and Hamilton Anxiety (HAM-A). On the SAFFIMAP test, unlike the HCS group, the PSZ group tended to categorize the neutral expression of test faces as angry, and their response to the test-face expression was not influenced by the affective content of the primes. In PSZ, the PANSS-positive score was significantly correlated with correct perception of angry faces for aggressive or pleasant primes. YMRS scores were strongly correlated with PSZ's tendency to recognize angry face expressions when the prime was a pleasant or a neutral image. The HAM-D score was positively correlated with categorizing the test faces as neutral, regardless of the affective content of the prime or of the test-face expression (angry or neutral). Despite its exploratory nature, this study provides the first evidence that conscious perception and categorization of facial emotions (neutral or angry) in PSZ is directly affected by the positive or negative symptoms of the disease, as defined by individual scores on the clinical diagnostic scales.

  19. Early Stages of Figure–Ground Segregation during Perception of the Face–Vase

    PubMed Central

    Pitts, Michael A.; Martínez, Antígona; Brewer, James B.; Hillyard, Steven A.

    2011-01-01

    The temporal sequence of neural processes supporting figure–ground perception was investigated by recording ERPs associated with subjects’ perceptions of the face–vase figure. In Experiment 1, subjects continuously reported whether they perceived the face or the vase as the foreground figure by pressing one of two buttons. Each button press triggered a probe flash to the face region, the vase region, or the borders between the two. The N170/vertex positive potential (VPP) component of the ERP elicited by probes to the face region was larger when subjects perceived the faces as figure. Preceding the N170/VPP, two additional components were identified. First, when the borders were probed, ERPs differed in amplitude as early as 110 msec after probe onset depending on subjects’ figure–ground perceptions. Second, when the face or vase regions were probed, ERPs were more positive (at ~150–200 msec) when that region was perceived as figure versus background. These components likely reflect an early “border ownership” stage, and a subsequent “figure–ground segregation” stage of processing. To explore the influence of attention on these stages of processing, two additional experiments were conducted. In Experiment 2, subjects selectively attended to the face or vase region, and the same early ERP components were again produced. In Experiment 3, subjects performed an identical selective attention task, but on a display lacking distinctive figure–ground borders, and neither of the early components were produced. Results from these experiments suggest sequential stages of processing underlying figure–ground perception, each of which is subject to modification by selective attention. PMID:20146604
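
    A common way to quantify component effects such as the ~150-200 msec figure-ground modulation described above is to average each subject's ERP within the time window of interest and compare conditions with a paired test. The sketch below shows this on simulated waveforms; it is an assumption-laden illustration, not the authors' analysis.

      # Hypothetical sketch: mean ERP amplitude in a time window, compared across conditions.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      times = np.arange(-100, 500, 2)                  # ms, 500 Hz sampling
      n_subjects = 16
      window = (times >= 150) & (times <= 200)

      # simulated ERPs (subjects x time); the "figure" condition carries a small positivity
      erp_figure = rng.normal(0, 1, (n_subjects, times.size)) + 0.8 * window
      erp_ground = rng.normal(0, 1, (n_subjects, times.size))

      amp_figure = erp_figure[:, window].mean(axis=1)  # mean amplitude per subject
      amp_ground = erp_ground[:, window].mean(axis=1)

      t, p = stats.ttest_rel(amp_figure, amp_ground)
      print(f"figure vs. ground, 150-200 ms: t({n_subjects - 1}) = {t:.2f}, p = {p:.3f}")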

  20. Functional MRI study of diencephalic amnesia in Wernicke-Korsakoff syndrome.

    PubMed

    Caulo, M; Van Hecke, J; Toma, L; Ferretti, A; Tartaro, A; Colosimo, C; Romani, G L; Uncini, A

    2005-07-01

    Anterograde amnesia in Wernicke-Korsakoff syndrome is associated with diencephalic lesions, mainly in the anterior thalamic nuclei. Whether diencephalic and temporal lobe amnesias are distinct entities is still not clear. We investigated episodic memory for faces using functional MRI (fMRI) in eight controls and in a 34-year-old man with Wernicke-Korsakoff syndrome and diencephalic lesions but without medial temporal lobe (MTL) involvement at MRI. fMRI was performed with a 1.5 tesla unit. Three dual-choice tasks were employed: (i) face encoding (18 faces were randomly presented three times and subjects were asked to memorize the faces); (ii) face perception (subjects indicated which of two faces matched a third face); and (iii) face recognition (subjects indicated which of two faces belonged to the group they had been asked to memorize during encoding). All activation was greater in the right hemisphere. In controls, both the encoding and recognition tasks activated two hippocampal regions (anterior and posterior). The anterior hippocampal region was more activated during recognition. Activation in the prefrontal cortex was greater during recognition. In the subject with Wernicke-Korsakoff syndrome, fMRI did not show hippocampal activation during either encoding or recognition. During recognition, although behavioural data showed defective retrieval, the prefrontal regions were activated as in controls, except for the ventrolateral prefrontal cortex. fMRI activation of the visual cortices and the behavioural score on the perception task indicated that the subject with Wernicke-Korsakoff syndrome perceived the faces, paid attention to the task and demonstrated accurate judgement. In the subject with Wernicke-Korsakoff syndrome, although the anatomical damage does not involve the MTL, hippocampal memory encoding has been lost, possibly as a consequence of the hippocampal-anterior thalamic axis involvement. Anterograde amnesia could therefore be the expression of damage to an extended hippocampal system, and the distinction between temporal lobe and diencephalic amnesia has limited value. In the subject with Wernicke-Korsakoff syndrome, the preserved dorsolateral prefrontal cortex activation during incorrect recognition suggests that this region is more involved in orientation or attention at retrieval than in retrieval itself. The lack of activation of the prefrontal ventrolateral cortex confirms the role of this area in episodic memory formation.

  1. Perceptual advantage for category-relevant perceptual dimensions: the case of shape and motion.

    PubMed

    Folstein, Jonathan R; Palmeri, Thomas J; Gauthier, Isabel

    2014-01-01

    Category learning facilitates perception along relevant stimulus dimensions, even when tested in a discrimination task that does not require categorization. While this general phenomenon has been demonstrated previously, perceptual facilitation along dimensions has been documented by measuring different specific phenomena in different studies using different kinds of objects. Across several object domains, there is support for acquired distinctiveness, the stretching of a perceptual dimension relevant to learned categories. Studies using faces and studies using simple separable visual dimensions have also found evidence of acquired equivalence, the shrinking of a perceptual dimension irrelevant to learned categories, and categorical perception, the local stretching across the category boundary. These latter two effects are rarely observed with complex non-face objects. Failures to find these effects with complex non-face objects may have been because the dimensions tested previously were perceptually integrated. Here we tested effects of category learning with non-face objects categorized along dimensions that have been found to be processed by different areas of the brain, shape and motion. While we replicated acquired distinctiveness, we found no evidence for acquired equivalence or categorical perception.

  2. Enhanced ERPs to visual stimuli in unaffected male siblings of ASD children.

    PubMed

    Anzures, Gizelle; Goyet, Louise; Ganea, Natasa; Johnson, Mark H

    2016-01-01

    Autism spectrum disorders are characterized by deficits in social and communication abilities. While unaffected relatives lack severe deficits, milder impairments have been reported in some first-degree relatives. The present study sought to verify whether mild deficits in face perception are evident among the unaffected younger siblings of children with ASD. Children between 6 and 9 years of age completed a face-recognition task and a passive viewing ERP task with face and house stimuli. Sixteen children were typically developing with no family history of ASD, and 17 were unaffected children with an older sibling with ASD. Findings indicate that, while unaffected siblings are comparable to controls in their face-recognition abilities, unaffected male siblings in particular show relatively enhanced P100 and P100-N170 peak-to-peak amplitude responses to faces and houses. The enhanced ERPs among unaffected male siblings are discussed in relation to potential differences in neural network recruitment during visual and face processing.

  3. Effects of ecstasy on cooperative behaviour and perception of trustworthiness: a naturalistic study.

    PubMed

    Stewart, L H; Ferguson, B; Morgan, C J A; Swaboda, N; Jones, L; Fenton, R; Wall, M B; Curran, H V

    2014-11-01

    Acute recreational use of 3,4-methylenedioxymethamphetamine (MDMA; 'ecstasy') can promote pro-social effects which may alter interpersonal perceptions. To explore such effects, this study investigated whether acute recreational use of ecstasy was associated with changes in individual perception of trustworthiness of people's faces and co-operative behaviours. An independent-group, repeated-measures design was used in which 17 ecstasy users were tested on the night of drug use (day 0) and again three days later (day 3); 22 controls were tested on parallel days. On each day, participants rated the trustworthiness of 66 faces, carried out three co-operative behaviour tasks (public good; dictator; ultimatum game) and completed mood self-ratings. Acute ecstasy use was associated with increased face trustworthiness ratings and increased cooperative behaviour on the dictator and ultimatum games; on day 3 there were no group differences on any task. Self-ratings showed the standard acute ecstasy effects (euphoria, energy, jaw clenching) with negative effects (less empathy, compassion, more distrust, hostility) emerging on day 3. Our findings of increased perceived trustworthiness and co-operative behaviours following use of ecstasy suggest that a single dose of the drug enhances aspects of empathy. This may in turn contribute to its popularity as a recreational drug and potentially to its enhancement of the therapeutic alliance in psychotherapy. © The Author(s) 2014.

  4. Face-specific and domain-general visual processing deficits in children with developmental prosopagnosia.

    PubMed

    Dalrymple, Kirsten A; Elison, Jed T; Duchaine, Brad

    2017-02-01

    Evidence suggests that face and object recognition depend on distinct neural circuitry within the visual system. Work with adults with developmental prosopagnosia (DP) demonstrates that some individuals have preserved object recognition despite severe face recognition deficits. This face selectivity in adults with DP indicates that face- and object-processing systems can develop independently, but it is unclear at what point in development these mechanisms are separable. Determining when individuals with DP first show dissociations between faces and objects is one means to address this question. In the current study, we investigated face and object processing in six children with DP (5-12-years-old). Each child was assessed with one face perception test, two different face memory tests, and two object memory tests that were matched to the face memory tests in format and difficulty. Scores from the DP children on the matched face and object tasks were compared to within-subject data from age-matched controls. Four of the six DP children, including the 5-year-old, showed evidence of face-specific deficits, while one child appeared to have more general visual-processing deficits. The remaining child had inconsistent results. The presence of face-specific deficits in children with DP suggests that face and object perception depend on dissociable processes in childhood.

  5. The effect of intranasal oxytocin on perceiving and understanding emotion on the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT).

    PubMed

    Cardoso, Christopher; Ellenbogen, Mark A; Linnen, Anne-Marie

    2014-02-01

    Evidence suggests that intranasal oxytocin enhances the perception of emotion in facial expressions during standard emotion identification tasks. However, it is not clear whether this effect is desirable in people who do not show deficits in emotion perception. That is, a heightened perception of emotion in faces could lead to "oversensitivity" to the emotions of others in nonclinical participants. The goal of this study was to assess the effects of intranasal oxytocin on emotion perception using ecologically valid social and nonsocial visual tasks. Eighty-two participants (42 women) self-administered a 24 IU dose of intranasal oxytocin or a placebo in a double-blind, randomized experiment and then completed the perceiving and understanding emotion components of the Mayer-Salovey-Caruso Emotional Intelligence Test. In this test, emotion identification accuracy is based on agreement with a normative sample. As expected, participants administered intranasal oxytocin rated emotion in facial stimuli as expressing greater emotional intensity than those given a placebo. Consequently, accurate identification of emotion in faces, based on agreement with a normative sample, was impaired in the oxytocin group relative to placebo. No such effect was observed for tests using nonsocial stimuli. The results are consistent with the hypothesis that intranasal oxytocin enhances the salience of social stimuli in the environment, but not nonsocial stimuli. The present findings support a growing literature showing that the effects of intranasal oxytocin on social cognition can be negative under certain circumstances, in this case promoting "oversensitivity" to emotion in faces in healthy people. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  6. The implicit processing of categorical and dimensional strategies: an fMRI study of facial emotion perception

    PubMed Central

    Matsuda, Yoshi-Taka; Fujimura, Tomomi; Katahira, Kentaro; Okada, Masato; Ueno, Kenichi; Cheng, Kang; Okanoya, Kazuo

    2013-01-01

    Our understanding of facial emotion perception has been dominated by two seemingly opposing theories: the categorical and dimensional theories. However, we have recently demonstrated that hybrid processing involving both categorical and dimensional perception can be induced in an implicit manner (Fujimura et al., 2012). The underlying neural mechanisms of this hybrid processing remain unknown. In this study, we tested the hypothesis that separate neural loci might intrinsically encode categorical and dimensional processing functions that serve as a basis for hybrid processing. We used functional magnetic resonance imaging to measure neural correlates while subjects passively viewed emotional faces and performed tasks that were unrelated to facial emotion processing. Activity in the right fusiform face area (FFA) increased in response to psychologically obvious emotions and decreased in response to ambiguous expressions, demonstrating the role of the FFA in categorical processing. The amygdala, insula and medial prefrontal cortex exhibited evidence of dimensional (linear) processing that correlated with physical changes in the emotional face stimuli. The occipital face area and superior temporal sulcus did not respond to these changes in the presented stimuli. Our results indicated that distinct neural loci process the physical and psychological aspects of facial emotion perception in a region-specific and implicit manner. PMID:24133426

  7. Facial emotion perception in Chinese patients with schizophrenia and non-psychotic first-degree relatives.

    PubMed

    Li, Huijie; Chan, Raymond C K; Zhao, Qing; Hong, Xiaohong; Gong, Qi-Yong

    2010-03-17

    Although there is a consensus that patients with schizophrenia have certain deficits in perceiving and expressing facial emotions, previous studies of facial emotion perception in schizophrenia do not present consistent results. The objective of this study was to explore facial emotion perception deficits in Chinese patients with schizophrenia and their non-psychotic first-degree relatives. Sixty-nine patients with schizophrenia, 56 of their first-degree relatives (33 parents and 23 siblings), and 92 healthy controls (67 younger healthy controls matched to the patients and siblings, and 25 older healthy controls matched to the parents) completed a set of facial emotion perception tasks, including facial emotion discrimination, identification, intensity, valence, and corresponding face identification tasks. The results demonstrated that patients with schizophrenia performed significantly worse than their siblings and younger healthy controls in accuracy in a variety of facial emotion perception tasks, whereas the siblings of the patients performed as well as the corresponding younger healthy controls in all of the facial emotion perception tasks. Patients with schizophrenia were also significantly slower than younger healthy controls, while siblings of patients did not differ significantly in speed from either patients or younger healthy controls. Meanwhile, we also found that parents of the schizophrenia patients performed significantly worse than the corresponding older healthy controls in accuracy in terms of facial emotion identification, valence, and the composite index of the facial discrimination, identification, intensity and valence tasks. Moreover, no significant differences were found between the parents of patients and older healthy controls in speed after controlling for years of education and IQ. Taken together, the results suggest that facial emotion perception deficits may serve as potential endophenotypes for schizophrenia. Copyright 2010 Elsevier Inc. All rights reserved.

  8. Intranasal oxytocin selectively attenuates rhesus monkeys' attention to negative facial expressions.

    PubMed

    Parr, Lisa A; Modi, Meera; Siebert, Erin; Young, Larry J

    2013-09-01

    Intranasal oxytocin (IN-OT) modulates social perception and cognition in humans and could be an effective pharmacotherapy for treating social impairments associated with neuropsychiatric disorders, like autism. However, it is unknown how IN-OT modulates social cognition, its effect after repeated use, or its impact on the developing brain. Animal models are urgently needed. This study examined the effect of IN-OT on social perception in monkeys using tasks that reveal some of the social impairments seen in autism. Six rhesus macaques (Macaca mulatta, 4 males) received a 48 IU dose of OT or saline placebo using a pediatric nebulizer. An hour later, they performed a computerized task (the dot-probe task) to measure their attentional bias to social, emotional, and nonsocial images. Results showed that IN-OT significantly reduced monkeys' attention to negative facial expressions, but not neutral faces or clip art images and, additionally, showed a trend to enhance monkeys' attention to direct vs. averted gaze faces. This study is the first to demonstrate an effect of IN-OT on social perception in monkeys: IN-OT selectively reduced monkeys' attention to negative facial expressions, but not to neutral social or nonsocial images. These findings complement several reports in humans showing that IN-OT reduces the aversive quality of social images suggesting that, like humans, monkey social perception is mediated by the oxytocinergic system. Importantly, these results in monkeys suggest that IN-OT does not dampen the emotional salience of social stimuli, but rather acts to affect the evaluation of emotional images during the early stages of information processing. Copyright © 2013 Elsevier Ltd. All rights reserved.
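
    The dot-probe attentional bias mentioned above is conventionally scored as the difference between reaction times on incongruent trials (probe replaces the neutral image) and congruent trials (probe replaces the emotional face); positive scores indicate attention toward the emotional stimulus, and IN-OT would be expected to reduce the score for negative expressions. A minimal sketch with invented reaction times:

      # Hypothetical dot-probe bias score; all reaction times are invented for illustration.
      import numpy as np

      rt_incongruent = np.array([412, 398, 405, 430, 420, 415])  # ms, probe replaces neutral image
      rt_congruent   = np.array([395, 390, 388, 410, 402, 399])  # ms, probe replaces emotional face

      bias = rt_incongruent.mean() - rt_congruent.mean()
      print(f"attention bias toward emotional faces: {bias:.1f} ms")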

  9. Multisensory emotion perception in congenitally, early, and late deaf CI users

    PubMed Central

    Nava, Elena; Villwock, Agnes K.; Büchner, Andreas; Lenarz, Thomas; Röder, Brigitte

    2017-01-01

    Emotions are commonly recognized by combining auditory and visual signals (i.e., vocal and facial expressions). Yet it is unknown whether the ability to link emotional signals across modalities depends on early experience with audio-visual stimuli. In the present study, we investigated the role of auditory experience at different stages of development for auditory, visual, and multisensory emotion recognition abilities in three groups of adolescent and adult cochlear implant (CI) users. CI users had a different deafness onset and were compared to three groups of age- and gender-matched hearing control participants. We hypothesized that congenitally deaf (CD) but not early deaf (ED) and late deaf (LD) CI users would show reduced multisensory interactions and a higher visual dominance in emotion perception than their hearing controls. The CD (n = 7), ED (deafness onset: <3 years of age; n = 7), and LD (deafness onset: >3 years; n = 13) CI users and the control participants performed an emotion recognition task with auditory, visual, and audio-visual emotionally congruent and incongruent nonsense speech stimuli. In different blocks, participants judged either the vocal (Voice task) or the facial expressions (Face task). In the Voice task, all three CI groups performed overall less efficiently than their respective controls and experienced higher interference from incongruent facial information. Furthermore, the ED CI users benefitted more than their controls from congruent faces and the CD CI users showed an analogous trend. In the Face task, recognition efficiency of the CI users and controls did not differ. Our results suggest that CI users acquire multisensory interactions to some degree, even after congenital deafness. When judging affective prosody they appear impaired and more strongly biased by concurrent facial information than typically hearing individuals. We speculate that limitations inherent to the CI contribute to these group differences. PMID:29023525

  10. Multisensory emotion perception in congenitally, early, and late deaf CI users.

    PubMed

    Fengler, Ineke; Nava, Elena; Villwock, Agnes K; Büchner, Andreas; Lenarz, Thomas; Röder, Brigitte

    2017-01-01

    Emotions are commonly recognized by combining auditory and visual signals (i.e., vocal and facial expressions). Yet it is unknown whether the ability to link emotional signals across modalities depends on early experience with audio-visual stimuli. In the present study, we investigated the role of auditory experience at different stages of development for auditory, visual, and multisensory emotion recognition abilities in three groups of adolescent and adult cochlear implant (CI) users. CI users had a different deafness onset and were compared to three groups of age- and gender-matched hearing control participants. We hypothesized that congenitally deaf (CD) but not early deaf (ED) and late deaf (LD) CI users would show reduced multisensory interactions and a higher visual dominance in emotion perception than their hearing controls. The CD (n = 7), ED (deafness onset: <3 years of age; n = 7), and LD (deafness onset: >3 years; n = 13) CI users and the control participants performed an emotion recognition task with auditory, visual, and audio-visual emotionally congruent and incongruent nonsense speech stimuli. In different blocks, participants judged either the vocal (Voice task) or the facial expressions (Face task). In the Voice task, all three CI groups performed overall less efficiently than their respective controls and experienced higher interference from incongruent facial information. Furthermore, the ED CI users benefitted more than their controls from congruent faces and the CD CI users showed an analogous trend. In the Face task, recognition efficiency of the CI users and controls did not differ. Our results suggest that CI users acquire multisensory interactions to some degree, even after congenital deafness. When judging affective prosody they appear impaired and more strongly biased by concurrent facial information than typically hearing individuals. We speculate that limitations inherent to the CI contribute to these group differences.

  11. Brain regions sensitive to the face inversion effect: a functional magnetic resonance imaging study in humans.

    PubMed

    Leube, Dirk T; Yoon, Hyo Woon; Rapp, Alexander; Erb, Michael; Grodd, Wolfgang; Bartels, Mathias; Kircher, Tilo T J

    2003-05-22

    Perception of upright faces relies on configural processing. Therefore, recognition of inverted faces is impaired compared to upright faces. In a functional magnetic resonance imaging experiment we investigated the neural correlate of a face inversion task. Thirteen healthy subjects were presented with an equal number of upright and inverted faces, alternating with a low-level baseline consisting of an upright and an inverted picture of an abstract symbol. Brain activation was calculated for upright minus inverted faces. For this differential contrast, we found a signal change in the right superior temporal sulcus and right insula. Configural properties are processed in a network comprising right superior temporal and insular cortex.

  12. Priming Facial Gender and Emotional Valence: The Influence of Spatial Frequency on Face Perception in ASD.

    PubMed

    Vanmarcke, Steven; Wagemans, Johan

    2017-04-01

    Adolescents with and without autism spectrum disorder (ASD) performed two priming experiments in which they implicitly processed a prime stimulus, containing high and/or low spatial frequency information, and then explicitly categorized a target face either as male/female (gender task) or as positive/negative (valence task). Adolescents with ASD made more categorization errors than typically developing adolescents. They also showed an age-dependent improvement in categorization speed and had more difficulties with categorizing facial expressions than gender. However, in neither categorization task did we find group differences in the processing of coarse versus fine prime information. This contradicted our expectations and indicated that the perceptual differences between adolescents with and without ASD critically depended on the processing time available for the primes.

  13. Functional Connectivity between Face-Movement and Speech-Intelligibility Areas during Auditory-Only Speech Perception

    PubMed Central

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers’ voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker’s face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas. PMID:24466026

  14. Holistic processing of musical notation: Dissociating failures of selective attention in experts and novices.

    PubMed

    Wong, Yetta Kwailing; Gauthier, Isabel

    2010-12-01

    Holistic processing (i.e., the tendency to process objects as wholes) is associated with face perception and also with expertise in individuating novel objects. Surprisingly, recent work also reveals holistic effects in novice observers. It is unclear whether the same mechanisms support holistic effects in experts and in novices. In the present study, we measured holistic processing of music sequences using a selective attention task in participants who vary in music-reading expertise. We found that holistic effects were strategic in novices but were relatively automatic in experts. Correlational analyses revealed that individual holistic effects were predicted by both individual music-reading ability and neural responses for musical notation in the right fusiform face area (rFFA), but in opposite directions for experts and novices, suggesting that holistic effects in the two groups may be of different natures. To characterize expert perception, it is important not only to measure the tendency to process objects as wholes, but also to test whether this effect is dependent on task constraints.

  15. Odor Valence Linearly Modulates Attractiveness, but Not Age Assessment, of Invariant Facial Features in a Memory-Based Rating Task

    PubMed Central

    Seubert, Janina; Gregory, Kristen M.; Chamberland, Jessica; Dessirier, Jean-Marc; Lundström, Johan N.

    2014-01-01

    Scented cosmetic products are used across cultures as a way to favorably influence one's appearance. While crossmodal effects of odor valence on perceived attractiveness of facial features have been demonstrated experimentally, it is unknown whether they represent a phenomenon specific to affective processing. In this experiment, we presented odors in the context of a face battery with systematic feature manipulations during a speeded response task. Modulatory effects of linear increases of odor valence were investigated by juxtaposing subsequent memory-based rating tasks: one predominantly affective (attractiveness) and a second, cognitive one (age). The linear modulation pattern observed for attractiveness was consistent with additive effects of face and odor appraisal. Effects of odor valence on age perception were not linearly modulated and may be the result of cognitive interference. Affective and cognitive processing of faces thus appear to differ in their susceptibility to modulation by odors, likely as a result of privileged access of olfactory stimuli to affective brain networks. These results are critically discussed with respect to potential biases introduced by the preceding speeded response task. PMID:24874703

  16. Face recognition in age related macular degeneration: perceived disability, measured disability, and performance with a bioptic device.

    PubMed

    Tejeria, L; Harper, R A; Artes, P H; Dickinson, C M

    2002-09-01

    (1) To explore the relation between performance on tasks of familiar face recognition (FFR) and face expression difference discrimination (FED) with both perceived disability in face recognition and clinical measures of visual function in subjects with age related macular degeneration (AMD). (2) To quantify the gain in performance for face recognition tasks when subjects use a bioptic telescopic low vision device. 30 subjects with AMD (age range 66-90 years; visual acuity 0.4-1.4 logMAR) were recruited for the study. Perceived (self rated) disability in face recognition was assessed by an eight item questionnaire covering a range of issues relating to face recognition. Visual functions measured were distance visual acuity (ETDRS logMAR charts), continuous text reading acuity (MNRead charts), contrast sensitivity (Pelli-Robson chart), and colour vision (large panel D-15). In the FFR task, images of famous people had to be identified. FED was assessed by a forced choice test where subjects had to decide which one of four images showed a different facial expression. These tasks were repeated with subjects using a bioptic device. Overall perceived disability in face recognition did not correlate with performance on either task, although a specific item on difficulty recognising familiar faces did correlate with FFR (r = 0.49, p<0.05). FFR performance was most closely related to distance acuity (r = -0.69, p<0.001), while FED performance was most closely related to continuous text reading acuity (r = -0.79, p<0.001). In multiple regression, neither contrast sensitivity nor colour vision significantly increased the explained variance. When using a bioptic telescope, FFR performance improved in 86% of subjects (median gain = 49%; p<0.001), while FED performance increased in 79% of subjects (median gain = 50%; p<0.01). Distance and reading visual acuity are closely associated with measured task performance in FFR and FED. A bioptic low vision device can offer a significant improvement in performance for face recognition tasks, and may be useful in reducing the handicap associated with this disability. There is, however, little evidence for a correlation between self rated difficulty in face recognition and measured performance for either task. Further work is needed to explore the complex relation between the perception of disability and measured performance.
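
    For orientation, the two headline analyses in this record (the acuity-performance correlations and the median percentage gain with the bioptic device) reduce to straightforward computations. The sketch below illustrates them on invented values; it is not the study's data or code.

      # Hypothetical sketch: Pearson correlation and median percentage gain; all values invented.
      import numpy as np
      from scipy import stats

      logmar_acuity = np.array([0.4, 0.6, 0.7, 0.9, 1.0, 1.2, 1.4])  # distance acuity (logMAR)
      ffr_score     = np.array([18, 16, 14, 11, 10, 7, 5])           # famous faces correctly identified

      r, p = stats.pearsonr(logmar_acuity, ffr_score)
      print(f"FFR vs. distance acuity: r = {r:.2f}, p = {p:.3f}")

      ffr_unaided = np.array([10, 8, 12, 6, 9, 11, 7])
      ffr_bioptic = np.array([15, 12, 17, 9, 14, 16, 10])
      gain = 100 * (ffr_bioptic - ffr_unaided) / ffr_unaided         # per-subject % improvement
      print(f"median gain with bioptic device: {np.median(gain):.0f}%")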

  17. Face recognition in age related macular degeneration: perceived disability, measured disability, and performance with a bioptic device

    PubMed Central

    Tejeria, L; Harper, R A; Artes, P H; Dickinson, C M

    2002-01-01

    Aims: (1) To explore the relation between performance on tasks of familiar face recognition (FFR) and face expression difference discrimination (FED) with both perceived disability in face recognition and clinical measures of visual function in subjects with age related macular degeneration (AMD). (2) To quantify the gain in performance for face recognition tasks when subjects use a bioptic telescopic low vision device. Methods: 30 subjects with AMD (age range 66–90 years; visual acuity 0.4–1.4 logMAR) were recruited for the study. Perceived (self rated) disability in face recognition was assessed by an eight item questionnaire covering a range of issues relating to face recognition. Visual functions measured were distance visual acuity (ETDRS logMAR charts), continuous text reading acuity (MNRead charts), contrast sensitivity (Pelli-Robson chart), and colour vision (large panel D-15). In the FFR task, images of famous people had to be identified. FED was assessed by a forced choice test where subjects had to decide which one of four images showed a different facial expression. These tasks were repeated with subjects using a bioptic device. Results: Overall perceived disability in face recognition did not correlate with performance on either task, although a specific item on difficulty recognising familiar faces did correlate with FFR (r = 0.49, p<0.05). FFR performance was most closely related to distance acuity (r = −0.69, p<0.001), while FED performance was most closely related to continuous text reading acuity (r = −0.79, p<0.001). In multiple regression, neither contrast sensitivity nor colour vision significantly increased the explained variance. When using a bioptic telescope, FFR performance improved in 86% of subjects (median gain = 49%; p<0.001), while FED performance increased in 79% of subjects (median gain = 50%; p<0.01). Conclusion: Distance and reading visual acuity are closely associated with measured task performance in FFR and FED. A bioptic low vision device can offer a significant improvement in performance for face recognition tasks, and may be useful in reducing the handicap associated with this disability. There is, however, little evidence for a correlation between self rated difficulty in face recognition and measured performance for either task. Further work is needed to explore the complex relation between the perception of disability and measured performance. PMID:12185131

  18. Repetition suppression of faces is modulated by emotion

    NASA Astrophysics Data System (ADS)

    Ishai, Alumit; Pessoa, Luiz; Bikle, Philip C.; Ungerleider, Leslie G.

    2004-06-01

    Single-unit recordings and functional brain imaging studies have shown reduced neural responses to repeated stimuli in the visual cortex. By using event-related functional MRI, we compared the activation evoked by repetitions of neutral and fearful faces, which were either task relevant (targets) or irrelevant (distracters). We found that within the inferior occipital gyri, lateral fusiform gyri, superior temporal sulci, amygdala, and the inferior frontal gyri/insula, targets evoked stronger responses than distracters and their repetition was associated with significantly reduced responses. Repetition suppression, as manifested by the difference in response amplitude between the first and third repetitions of a target, was stronger for fearful than neutral faces. Distracter faces, regardless of their repetition or valence, evoked negligible activation, indicating top-down attenuation of behaviorally irrelevant stimuli. Our findings demonstrate a three-way interaction between emotional valence, repetition, and task relevance and suggest that repetition suppression is influenced by high-level cognitive processes in the human brain.
    Keywords: face perception, functional MRI

  19. Phasic alertness enhances processing of face and non-face stimuli in congenital prosopagnosia.

    PubMed

    Tanzer, Michal; Weinbach, Noam; Mardo, Elite; Henik, Avishai; Avidan, Galia

    2016-08-01

    Congenital prosopagnosia (CP) is a severe face processing impairment that occurs in the absence of any obvious brain damage and has often been associated with a more general deficit in deriving holistic relations between facial features or even between non-face shape dimensions. Here we further characterized this deficit and examined a potential way to ameliorate it. To this end we manipulated phasic alertness using alerting cues previously shown to modulate attention and enhance global processing of visual stimuli in normal observers. Specifically, we first examined whether individuals with CP, similarly to controls, would show greater global processing when exposed to an alerting cue in the context of a non-facial task (Navon global/local task). We then explored the effect of an alerting cue on face processing (upright/inverted face discrimination). Confirming previous findings, in the absence of alerting cues, controls showed a typical global bias in the Navon task and an inversion effect indexing holistic processing in the upright/inverted task, while individuals with CP failed to show these effects. Critically, when alerting cues preceded the experimental trials, both groups showed enhanced global interference and a larger inversion effect. These results suggest that phasic alertness may modulate visual processing and consequently, affect global/holistic perception. Hence, these findings further reinforce the notion that global/holistic processing may serve as a possible mechanism underlying the face processing deficit in CP. Moreover, they imply a possible route for enhancing face processing in individuals with CP and thus shed new light on potential amelioration of this disorder. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Altered medial prefrontal activity during dynamic face processing in schizophrenia spectrum patients.

    PubMed

    Mothersill, Omar; Morris, Derek W; Kelly, Sinead; Rose, Emma Jane; Bokde, Arun; Reilly, Richard; Gill, Michael; Corvin, Aiden P; Donohoe, Gary

    2014-08-01

    Processing the emotional content of faces is recognised as a key deficit of schizophrenia, associated with poorer functional outcomes and possibly contributing to the severity of clinical symptoms such as paranoia. At the neural level, fMRI studies have reported altered limbic activity in response to facial stimuli. However, previous studies may be limited by the use of cognitively demanding tasks and static facial stimuli. To address these issues, the current study used a face processing task involving both passive face viewing and dynamic social stimuli. Such a task may (1) lack the potentially confounding effects of high cognitive demands and (2) show higher ecological validity. Functional MRI was used to examine neural activity in 25 patients with a DSM-IV diagnosis of schizophrenia/schizoaffective disorder and 21 age- and gender-matched healthy controls while they participated in a face processing task, which involved viewing videos of angry and neutral facial expressions, and a non-biological baseline condition. While viewing faces, patients showed significantly weaker deactivation of the medial prefrontal cortex, including the anterior cingulate, and decreased activation in the left cerebellum, compared to controls. Patients also showed weaker medial prefrontal deactivation while viewing the angry faces relative to baseline. Given that the anterior cingulate plays a role in processing negative emotion, weaker deactivation of this region in patients while viewing faces may contribute to an increased perception of social threat. Future studies examining the neurobiology of social cognition in schizophrenia using fMRI may help establish targets for treatment interventions. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Visual imagery of famous faces: effects of memory and attention revealed by fMRI.

    PubMed

    Ishai, Alumit; Haxby, James V; Ungerleider, Leslie G

    2002-12-01

    Complex pictorial information can be represented and retrieved from memory as mental visual images. Functional brain imaging studies have shown that visual perception and visual imagery share common neural substrates. The type of memory (short- or long-term) that mediates the generation of mental images, however, has not been addressed previously. The purpose of this study was to investigate the neural correlates underlying imagery generated from short- and long-term memory (STM and LTM). We used famous faces to localize the visual response during perception and to compare the responses during visual imagery generated from STM (subjects memorized specific pictures of celebrities before the imagery task) and imagery from LTM (subjects imagined famous faces without seeing specific pictures during the experimental session). We found that visual perception of famous faces activated the inferior occipital gyri, lateral fusiform gyri, the superior temporal sulcus, and the amygdala. Small subsets of these face-selective regions were activated during imagery. Additionally, visual imagery of famous faces activated a network of regions composed of bilateral calcarine, hippocampus, precuneus, intraparietal sulcus (IPS), and the inferior frontal gyrus (IFG). In all these regions, imagery generated from STM evoked more activation than imagery from LTM. Regardless of memory type, focusing attention on features of the imagined faces (e.g., eyes, lips, or nose) resulted in increased activation in the right IPS and right IFG. Our results suggest differential effects of memory and attention during the generation and maintenance of mental images of faces.

  2. Electrophysiological correlates of facial decision: insights from upright and upside-down Mooney-face perception.

    PubMed

    George, Nathalie; Jemel, Boutheina; Fiori, Nicole; Chaby, Laurence; Renault, Bernard

    2005-08-01

    We investigated the ERP correlates of the subjective perception of upright and upside-down ambiguous pictures as faces using two-tone Mooney stimuli in an explicit facial decision task (deciding whether a face is perceived or not in the display). The difficulty in perceiving upside-down Mooneys as faces was reflected by both lower rates of "Face" responses and delayed "Face" reaction times for upside-down relative to upright stimuli. The N170 was larger for the stimuli reported as "faces". It was also larger for the upright than the upside-down stimuli only when they were reported as faces. Furthermore, facial decision as well as stimulus orientation effects spread from 140-190 ms to 390-440 ms. The behavioural delay in 'Face' responses to upside-down stimuli was reflected in ERPs by a later effect of facial decision for upside-down relative to upright Mooneys over occipito-temporal electrodes. Moreover, an orientation effect was observed only for the stimuli reported as faces; it yielded a marked hemispheric asymmetry, lasting from 140-190 ms to 390-440 ms post-stimulus onset in the left hemisphere and from 340-390 to 390-440 ms only in the right hemisphere. Taken together, the results supported a preferential involvement of the right hemisphere in the detection of faces, whatever their orientation. By contrast, the early orientation effect in the left hemisphere suggested that upside-down Mooney stimuli were processed as non-face objects until facial decision was reached in this hemisphere. The present data show that face perception involves not only spatially but also temporally distributed activities in occipito-temporal regions.

  3. Processing of configural and componential information in face-selective cortical areas.

    PubMed

    Zhao, Mintao; Cheung, Sing-Hang; Wong, Alan C-N; Rhodes, Gillian; Chan, Erich K S; Chan, Winnie W L; Hayward, William G

    2014-01-01

    We investigated how face-selective cortical areas process configural and componential face information and how race of faces may influence these processes. Participants saw blurred (preserving configural information), scrambled (preserving componential information), and whole faces during an fMRI scan, and performed a post-scan face recognition task using blurred or scrambled faces. The fusiform face area (FFA) showed stronger activation to blurred than to scrambled faces, and equivalent responses to blurred and whole faces. The occipital face area (OFA) showed stronger activation to whole than to blurred faces, which elicited similar responses to scrambled faces. Therefore, the FFA may be more tuned to process configural than componential information, whereas the OFA similarly participates in perception of both. Differences in recognizing own- and other-race blurred faces were correlated with differences in FFA activation to those faces, suggesting that configural processing within the FFA may underlie the other-race effect in face recognition.

  4. Differentiating between self and others: an ALE meta-analysis of fMRI studies of self-recognition and theory of mind.

    PubMed

    van Veluw, Susanne J; Chance, Steven A

    2014-03-01

    The perception of self and others is a key aspect of social cognition. In order to investigate the neurobiological basis of this distinction we reviewed two classes of task that study self-awareness and awareness of others (theory of mind, ToM). A reliable task to measure self-awareness is the recognition of one's own face in contrast to the recognition of others' faces. False-belief tasks are widely used to identify neural correlates of ToM as a measure of awareness of others. We performed an activation likelihood estimation meta-analysis, using the fMRI literature on self-face recognition and false-belief tasks. The brain areas involved in performing false-belief tasks were the medial prefrontal cortex (MPFC), bilateral temporo-parietal junction, precuneus, and the bilateral middle temporal gyrus. Distinct self-face recognition regions were the right superior temporal gyrus, the right parahippocampal gyrus, the right inferior frontal gyrus/anterior cingulate cortex, and the left inferior parietal lobe. Overlapping brain areas were the superior temporal gyrus, and the more ventral parts of the MPFC. We confirmed that self-recognition (in contrast to recognition of others' faces) and awareness of others involve networks consisting of separate, distinct neural pathways, but also include overlapping regions of higher-order prefrontal cortex where these processes may be combined. Insights derived from the neurobiology of disorders such as autism and schizophrenia are consistent with this notion.
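
    Activation likelihood estimation (ALE), in outline, blurs each reported activation focus with a 3D Gaussian, forms a modeled-activation map per experiment, and combines experiments voxelwise as a union of probabilities. The toy sketch below captures only that core step; real ALE uses sample-size-dependent kernels, anatomical masks, and permutation-based thresholding, and all coordinates here are invented.

      # Simplified, hypothetical sketch of the core ALE computation on a toy voxel grid.
      import numpy as np

      shape = (20, 20, 20)                       # toy voxel grid
      grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"), axis=-1)

      def ma_map(foci, sigma=2.0):
          """Modeled activation: voxelwise max over Gaussian-blurred foci of one experiment."""
          maps = []
          for focus in foci:
              d2 = ((grid - np.asarray(focus)) ** 2).sum(axis=-1)
              maps.append(np.exp(-d2 / (2 * sigma ** 2)))
          return np.max(maps, axis=0)

      experiments = [
          [(5, 6, 7), (12, 10, 9)],              # foci reported by experiment 1 (invented)
          [(6, 6, 8)],                           # experiment 2
          [(13, 11, 9), (6, 7, 7)],              # experiment 3
      ]

      ma = np.stack([ma_map(f) for f in experiments])
      ale = 1.0 - np.prod(1.0 - ma, axis=0)      # voxelwise union across experiments
      print("peak ALE value:", ale.max().round(3), "at voxel", np.unravel_index(ale.argmax(), shape))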

  5. Normal voice processing after posterior superior temporal sulcus lesion.

    PubMed

    Jiahui, Guo; Garrido, Lúcia; Liu, Ran R; Susilo, Tirta; Barton, Jason J S; Duchaine, Bradley

    2017-10-01

    The right posterior superior temporal sulcus (pSTS) shows a strong response to voices, but the cognitive processes generating this response are unclear. One possibility is that this activity reflects basic voice processing. However, several fMRI and magnetoencephalography findings suggest instead that pSTS serves as an integrative hub that combines voice and face information. Here we investigate whether right pSTS contributes to basic voice processing by testing Faith, a patient whose right pSTS was resected, with eight behavioral tasks assessing voice identity perception and recognition, voice sex perception, and voice expression perception. Faith performed normally on all the tasks. Her normal performance indicates right pSTS is not necessary for intact voice recognition and suggests that pSTS activations to voices reflect higher-level processes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Intuitive Face Judgments Rely on Holistic Eye Movement Pattern

    PubMed Central

    Mega, Laura F.; Volz, Kirsten G.

    2017-01-01

    Non-verbal signals such as facial expressions are of paramount importance for social encounters. Their perception predominantly occurs without conscious awareness and is effortlessly integrated into social interactions. In other words, face perception is intuitive. In contrast to classical intuition tasks, this work investigates intuitive processes in the realm of everyday social judgments. Two differently instructed groups of participants judged the authenticity of emotional facial expressions, while their eye movements were recorded: an ‘intuitive group,’ instructed to rely on their “gut feeling” for the authenticity judgments, and a ‘deliberative group,’ instructed to make their judgments after careful analysis of the face. Pixel-wise statistical maps of the resulting eye movements revealed a differential viewing pattern, wherein the intuitive judgments relied on fewer, longer and more centrally located fixations. These markers have been associated with a global/holistic viewing strategy. The holistic pattern of intuitive face judgments is in line with evidence showing that intuition is related to processing the “gestalt” of an object, rather than focusing on details. Our work thereby provides further evidence that intuitive processes are characterized by holistic perception, in an understudied, real-world domain of intuition research. PMID:28676773

  7. Intuitive Face Judgments Rely on Holistic Eye Movement Pattern.

    PubMed

    Mega, Laura F; Volz, Kirsten G

    2017-01-01

    Non-verbal signals such as facial expressions are of paramount importance for social encounters. Their perception predominantly occurs without conscious awareness and is effortlessly integrated into social interactions. In other words, face perception is intuitive. In contrast to classical intuition tasks, this work investigates intuitive processes in the realm of everyday social judgments. Two differently instructed groups of participants judged the authenticity of emotional facial expressions, while their eye movements were recorded: an 'intuitive group,' instructed to rely on their "gut feeling" for the authenticity judgments, and a 'deliberative group,' instructed to make their judgments after careful analysis of the face. Pixel-wise statistical maps of the resulting eye movements revealed a differential viewing pattern, wherein the intuitive judgments relied on fewer, longer and more centrally located fixations. These markers have been associated with a global/holistic viewing strategy. The holistic pattern of intuitive face judgments is in line with evidence showing that intuition is related to processing the "gestalt" of an object, rather than focusing on details. Our work thereby provides further evidence that intuitive processes are characterized by holistic perception, in an understudied, real-world domain of intuition research.

  8. Mapping face encoding using functional MRI in multiple sclerosis across disease phenotypes.

    PubMed

    Rocca, Maria A; Vacchi, Laura; Rodegher, Mariaemma; Meani, Alessandro; Martinelli, Vittorio; Possa, Francesca; Comi, Giancarlo; Falini, Andrea; Filippi, Massimo

    2017-10-01

    Using fMRI during a face encoding (FE) task, we investigated the behavioral and fMRI correlates of FE in patients with relapse-onset multiple sclerosis (MS) at different stages of the disease, and their relation with attentive-executive performance and structural MRI measures of disease-related damage. An fMRI FE task was administered to 75 MS patients (11 with clinically isolated syndrome - CIS, 40 with relapsing-remitting MS - RRMS, and 24 with secondary progressive MS - SPMS) and 22 healthy controls (HC). fMRI activity during the face encoding condition was correlated with behavioral, clinical, neuropsychological and structural MRI variables. All study subjects activated brain regions belonging to the face perception and encoding network, and deactivated areas of the default-mode network. Compared to HC, MS patients showed concomitant areas of increased and decreased activation, as well as increased and decreased deactivation. Compared to HC or RRMS patients, CIS patients showed increased recruitment of posterior visual areas. The thalami, parahippocampal gyri and right anterior cingulum were more activated in RRMS vs CIS or SPMS patients, while increased recruitment of frontal areas was observed in SPMS vs RRMS patients. Areas of abnormal activation were significantly correlated with clinical, cognitive-behavioral and structural MRI measures. Abnormalities of the FE network occur in MS and vary across disease clinical phenotypes. Early in the disease, there is increased recruitment of areas typically devoted to face perception and encoding. In SPMS patients, abnormal functional recruitment of frontal lobe areas might contribute to the severity of clinical manifestations.

  9. Non-conscious visual cues related to affect and action alter perception of effort and endurance performance

    PubMed Central

    Blanchfield, Anthony; Hardy, James; Marcora, Samuele

    2014-01-01

    The psychobiological model of endurance performance proposes that endurance performance is determined by a decision-making process based on perception of effort and potential motivation. Recent research has reported that effort-based decision-making during cognitive tasks can be altered by non-conscious visual cues relating to affect and action. The effects of these non-conscious visual cues on effort and performance during physical tasks are, however, unknown. We report two experiments investigating the effects of subliminal priming with visual cues related to affect and action on perception of effort and endurance performance. In Experiment 1, thirteen individuals were subliminally primed with happy or sad faces as they cycled to exhaustion in a counterbalanced and randomized crossover design. A paired t-test (happy vs. sad faces) revealed that individuals cycled significantly longer (178 s, p = 0.04) when subliminally primed with happy faces. A 2 × 5 (condition × iso-time) ANOVA also revealed a significant main effect of condition on rating of perceived exertion (RPE) during the time to exhaustion (TTE) test, with lower RPE when subjects were subliminally primed with happy faces (p = 0.04). In Experiment 2, a single-subject randomization-tests design showed that subliminal priming with action words facilitated a significantly longer TTE (399 s, p = 0.04) in comparison to inaction words. As in Experiment 1, this greater TTE was accompanied by a significantly lower RPE (p = 0.03). These experiments are the first to show that subliminal visual cues relating to affect and action can alter perception of effort and endurance performance. Non-conscious visual cues may therefore influence the effort-based decision-making process that is proposed to determine endurance performance. Accordingly, the findings raise notable implications for individuals who may encounter such visual cues during endurance competitions, training, or health-related exercise. PMID:25566014
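
    The core analysis of Experiment 1 is a within-subject comparison, which a paired t-test captures. The sketch below illustrates that logic with made-up time-to-exhaustion values; only the form of the test, not the numbers, reflects the abstract.

    ```python
    # Minimal sketch of the paired comparison reported above (happy vs. sad priming),
    # using illustrative fabricated data; only the analysis logic reflects the abstract.
    import numpy as np
    from scipy import stats

    tte_happy = np.array([1510, 1320, 1695, 1420, 1780, 1250, 1600,
                          1480, 1390, 1555, 1700, 1345, 1620], dtype=float)  # seconds
    tte_sad = tte_happy - np.random.default_rng(1).normal(178, 120, size=13)

    t_stat, p_value = stats.ttest_rel(tte_happy, tte_sad)   # within-subject (paired) test
    mean_diff = (tte_happy - tte_sad).mean()
    print(f"mean difference = {mean_diff:.0f} s, t(12) = {t_stat:.2f}, p = {p_value:.3f}")
    ```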

  10. Abnormal brain activation during threatening face processing in schizophrenia: A meta-analysis of functional neuroimaging studies.

    PubMed

    Dong, Debo; Wang, Yulin; Jia, Xiaoyan; Li, Yingjia; Chang, Xuebin; Vandekerckhove, Marie; Luo, Cheng; Yao, Dezhong

    2017-11-15

    Impairment of face perception in schizophrenia is a core aspect of social cognitive dysfunction. This impairment is particularly marked in threatening face processing. Identifying reliable neural correlates of the impairment of threatening face processing is crucial for targeting more effective treatments. However, neuroimaging studies have not yet obtained robust conclusions. Through a comprehensive literature search, twenty-one whole-brain datasets were included in this meta-analysis. Using seed-based d-Mapping, in this voxel-based meta-analysis, we aimed to: 1) establish the most consistent brain dysfunctions related to threatening face processing in schizophrenia; 2) address task-type heterogeneity in this impairment; 3) explore the effect of potential demographic or clinical moderator variables on this impairment. The main meta-analysis indicated that patients with chronic schizophrenia showed attenuated activations in the limbic emotional system, along with compensatory over-activation in medial prefrontal cortex (MPFC), during threatening face processing. Sub-task analyses revealed under-activations in right amygdala and left fusiform gyrus in both implicit and explicit tasks. The remaining clusters were found to be differently involved in different types of tasks. Moreover, meta-regression analyses showed that brain abnormalities in schizophrenia were partly modulated by age, gender, medication and severity of symptoms. Our results highlighted breakdowns in the limbic-MPFC circuit in schizophrenia, suggesting a general inability to coordinate and contextualize salient threat stimuli. These findings provide potential targets for neurotherapeutic and pharmacological interventions for schizophrenia. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. New Tests to Measure Individual Differences in Matching and Labelling Facial Expressions of Emotion, and Their Association with Ability to Recognise Vocal Emotions and Facial Identity

    PubMed Central

    Palermo, Romina; O’Connor, Kirsty B.; Davis, Joshua M.; Irons, Jessica; McKone, Elinor

    2013-01-01

    Although good tests are available for diagnosing clinical impairments in face expression processing, there is a lack of strong tests for assessing “individual differences” – that is, differences in ability between individuals within the typical, nonclinical, range. Here, we develop two new tests, one for expression perception (an odd-man-out matching task in which participants select which one of three faces displays a different expression) and one additionally requiring explicit identification of the emotion (a labelling task in which participants select one of six verbal labels). We demonstrate validity (careful check of individual items, large inversion effects, independence from nonverbal IQ, convergent validity with a previous labelling task), reliability (Cronbach’s alphas of .77 and .76 respectively), and wide individual differences across the typical population. We then demonstrate the usefulness of the tests by addressing theoretical questions regarding the structure of face processing, specifically the extent to which the following processes are common or distinct: (a) perceptual matching and explicit labelling of expression (modest correlation between matching and labelling supported partial independence); (b) judgement of expressions from faces and voices (results argued labelling tasks tap into a multi-modal system, while matching tasks tap distinct perceptual processes); and (c) expression and identity processing (results argued for a common first step of perceptual processing for expression and identity). PMID:23840821

  12. New tests to measure individual differences in matching and labelling facial expressions of emotion, and their association with ability to recognise vocal emotions and facial identity.

    PubMed

    Palermo, Romina; O'Connor, Kirsty B; Davis, Joshua M; Irons, Jessica; McKone, Elinor

    2013-01-01

    Although good tests are available for diagnosing clinical impairments in face expression processing, there is a lack of strong tests for assessing "individual differences"--that is, differences in ability between individuals within the typical, nonclinical, range. Here, we develop two new tests, one for expression perception (an odd-man-out matching task in which participants select which one of three faces displays a different expression) and one additionally requiring explicit identification of the emotion (a labelling task in which participants select one of six verbal labels). We demonstrate validity (careful check of individual items, large inversion effects, independence from nonverbal IQ, convergent validity with a previous labelling task), reliability (Cronbach's alphas of .77 and .76 respectively), and wide individual differences across the typical population. We then demonstrate the usefulness of the tests by addressing theoretical questions regarding the structure of face processing, specifically the extent to which the following processes are common or distinct: (a) perceptual matching and explicit labelling of expression (modest correlation between matching and labelling supported partial independence); (b) judgement of expressions from faces and voices (results argued labelling tasks tap into a multi-modal system, while matching tasks tap distinct perceptual processes); and (c) expression and identity processing (results argued for a common first step of perceptual processing for expression and identity).
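
    The reliability figures above are Cronbach's alphas. A minimal sketch of how such an alpha can be computed from a participants-by-items score matrix follows; the data are random placeholders, not scores from these tests.

    ```python
    # Minimal sketch: Cronbach's alpha for a participants x items score matrix
    # (participants in rows, test items in columns); data are random placeholders.
    import numpy as np

    def cronbach_alpha(scores):
        scores = np.asarray(scores, dtype=float)
        n_items = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1).sum()   # sum of per-item variances
        total_var = scores.sum(axis=1).var(ddof=1)     # variance of participants' total scores
        return (n_items / (n_items - 1)) * (1 - item_vars / total_var)

    rng = np.random.default_rng(2)
    ability = rng.normal(size=(80, 1))                       # latent ability, 80 participants
    items = ability + rng.normal(scale=1.0, size=(80, 40))   # 40 correlated test items
    print(f"alpha = {cronbach_alpha(items):.2f}")
    ```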

  13. Importance of the Inverted Control in Measuring Holistic Face Processing with the Composite Effect and Part-Whole Effect

    PubMed Central

    McKone, Elinor; Davies, Anne Aimola; Darke, Hayley; Crookes, Kate; Wickramariyaratne, Tushara; Zappia, Stephanie; Fiorentini, Chiara; Favelle, Simone; Broughton, Mary; Fernando, Dinusha

    2013-01-01

    Holistic coding for faces is shown in several illusions that demonstrate integration of the percept across the entire face. The illusions occur upright but, crucially, not inverted. Converting the illusions into experimental tasks that measure their strength – and thus index degree of holistic coding – is often considered straightforward yet in fact relies on a hidden assumption, namely that there is no contribution to the experimental measure from secondary cognitive factors. For the composite effect, a relevant secondary factor is size of the “spotlight” of visuospatial attention. The composite task assumes this spotlight can be easily restricted to the target half (e.g., top-half) of the compound face stimulus. Yet, if this assumption were not true then a large spotlight, in the absence of holistic perception, could produce a false composite effect, present even for inverted faces and contributing partially to the score for upright faces. We review evidence that various factors can influence spotlight size: race/culture (Asians often prefer a more global distribution of attention than Caucasians); sex (females can be more global); appearance of the join or gap between face halves; and location of the eyes, which typically attract attention. Results from five experiments then show inverted faces can sometimes produce large false composite effects, and imply that whether this happens or not depends on complex interactions between causal factors. We also report, for both identity and expression, that only top-half face targets (containing eyes) produce valid composite measures. A sixth experiment demonstrates an example of a false inverted part-whole effect, where encoding-specificity is the secondary cognitive factor. We conclude the inverted face control should be tested in all composite and part-whole studies, and an effect for upright faces should be interpreted as a pure measure of holistic processing only when the experimental design produces no effect inverted. PMID:23382725

  14. Identifiable Images of Bystanders Extracted from Corneal Reflections

    PubMed Central

    Jenkins, Rob; Kerr, Christie

    2013-01-01

    Criminal investigations often use photographic evidence to identify suspects. Here we combined robust face perception and high-resolution photography to mine face photographs for hidden information. By zooming in on high-resolution face photographs, we were able to recover images of unseen bystanders from reflections in the subjects' eyes. To establish whether these bystanders could be identified from the reflection images, we presented them as stimuli in a face matching task (Experiment 1). Accuracy in the face matching task was well above chance (50%), despite the unpromising source of the stimuli. Participants who were unfamiliar with the bystanders' faces (n = 16) performed at 71% accuracy [t(15) = 7.64, p<.0001, d = 1.91], and participants who were familiar with the faces (n = 16) performed at 84% accuracy [t(15) = 11.15, p<.0001, d = 2.79]. In a test of spontaneous recognition (Experiment 2), observers could reliably name a familiar face from an eye reflection image. For crimes in which the victims are photographed (e.g., hostage taking, child sex abuse), reflections in the eyes of the photographic subject could help to identify perpetrators. PMID:24386177
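
    The statistics above are one-sample t-tests of matching accuracy against chance (50%), with effect sizes consistent with d = t / sqrt(n). A minimal sketch with illustrative accuracy scores (not the study's data):

    ```python
    # Minimal sketch: one-sample t-test of matching accuracy against chance (50%)
    # and Cohen's d computed as t / sqrt(n), which reproduces the reported d values
    # from the reported t and n; the accuracy scores below are illustrative.
    import numpy as np
    from scipy import stats

    acc_unfamiliar = np.array([0.65, 0.78, 0.70, 0.62, 0.75, 0.68, 0.80, 0.66,
                               0.73, 0.69, 0.77, 0.64, 0.71, 0.76, 0.67, 0.74])
    t_stat, p_value = stats.ttest_1samp(acc_unfamiliar, popmean=0.50)
    cohens_d = t_stat / np.sqrt(len(acc_unfamiliar))
    print(f"t({len(acc_unfamiliar) - 1}) = {t_stat:.2f}, p = {p_value:.2g}, d = {cohens_d:.2f}")

    # Check against the reported statistics: t(15) = 7.64 and 11.15 with n = 16
    print(7.64 / np.sqrt(16), 11.15 / np.sqrt(16))   # 1.91 and 2.79
    ```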

  15. Psychopathic traits affect the visual exploration of facial expressions.

    PubMed

    Boll, Sabrina; Gamer, Matthias

    2016-05-01

    Deficits in emotional reactivity and recognition have been reported in psychopathy. Impaired attention to the eyes along with amygdala malfunctions may underlie these problems. Here, we investigated how different facets of psychopathy modulate the visual exploration of facial expressions by assessing personality traits in a sample of healthy young adults using an eye-tracking based face perception task. Fearless Dominance (the interpersonal-emotional facet of psychopathy) and Coldheartedness scores predicted reduced face exploration consistent with findings on lowered emotional reactivity in psychopathy. Moreover, participants high on the social deviance facet of psychopathy ('Self-Centered Impulsivity') showed a reduced bias to shift attention towards the eyes. Our data suggest that facets of psychopathy modulate face processing in healthy individuals and reveal possible attentional mechanisms which might be responsible for the severe impairments of social perception and behavior observed in psychopathy. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Face Perception and Learning in Autism Spectrum Disorders

    PubMed Central

    Webb, Sara Jane; Neuhaus, Emily; Faja, Susan

    2016-01-01

    Autism Spectrum Disorder (ASD) is characterized by impairment in social communication and restricted and repetitive interests (American Psychiatric Association, 2013). While not included in the diagnostic characterization, aspects of face processing and learning have shown disruptions at all stages of development in ASD, although the exact nature and extent of the impairment vary by age and level of functioning of the ASD sample as well as by task demands. In this review, we examine the nature of face attention, perception, and learning in individuals with ASD, focusing on three broad age ranges (early development, middle childhood, and adolescence/adulthood). We propose that early delays in basic face processing contribute to the atypical trajectory of social communicative skills in individuals with ASD and contribute to poor social learning throughout development. Face learning is a life-long necessity, as the social world of the individual only broadens with age, and thus addressing both the source of the impairment in ASD as well as the trajectory of ability throughout the lifespan, through targeted treatments, may serve to positively impact the lives of individuals who struggle with social understanding and information. PMID:26886246

  17. The perception of (naked only) bodies and faceless heads relies on holistic processing: Evidence from the inversion effect.

    PubMed

    Bonemei, Rob; Costantino, Andrea I; Battistel, Ilenia; Rivolta, Davide

    2018-05-01

    Faces and bodies are more difficult to perceive when presented inverted than when presented upright (i.e., the stimulus inversion effect), an effect that has been attributed to the disruption of holistic processing. The features that can trigger holistic processing in faces and bodies, however, remain elusive. In this study, using a sequential matching task, we tested whether stimulus inversion affects various categories of visual stimuli: faces, faceless heads, faceless heads in body context, headless bodies naked, whole bodies naked, headless bodies clothed, and whole bodies clothed. Both accuracy and inversion efficiency score results show inversion effects for all categories except clothed bodies (with and without heads). In addition, the magnitude of the inversion effect for faces, naked bodies, and faceless heads was similar. Our findings demonstrate that the perception of faces, faceless heads, and naked bodies relies on holistic processing. Clothed bodies (with and without heads), on the other hand, may trigger clothes-sensitive rather than body-sensitive perceptual mechanisms. © 2017 The British Psychological Society.
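
    A minimal sketch of an inversion-effect computation for one stimulus category. The abstract does not define its inversion efficiency score, so the version below (accuracy divided by mean correct response time) is an assumption made only for illustration, and the per-participant data are fabricated.

    ```python
    # Minimal sketch of an inversion-effect analysis for one stimulus category.
    # "Efficiency" is assumed here, for illustration only, to be accuracy divided
    # by mean correct RT; data are fabricated per-participant values.
    import numpy as np

    def efficiency(accuracy, mean_correct_rt_s):
        return accuracy / mean_correct_rt_s   # higher = faster and more accurate

    acc_upright = np.array([0.92, 0.88, 0.95, 0.90, 0.86])
    rt_upright = np.array([0.84, 0.91, 0.78, 0.88, 0.95])      # seconds
    acc_inverted = np.array([0.78, 0.75, 0.83, 0.77, 0.72])
    rt_inverted = np.array([1.02, 1.10, 0.95, 1.05, 1.15])

    inversion_acc = acc_upright - acc_inverted                  # accuracy-based inversion effect
    inversion_eff = efficiency(acc_upright, rt_upright) - efficiency(acc_inverted, rt_inverted)
    print(inversion_acc.mean(), inversion_eff.mean())
    ```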

  18. Face perception and learning in autism spectrum disorders.

    PubMed

    Webb, Sara Jane; Neuhaus, Emily; Faja, Susan

    2017-05-01

    Autism Spectrum Disorder (ASD) is characterized by impairment in social communication and restricted and repetitive interests. While not included in the diagnostic characterization, aspects of face processing and learning have shown disruptions at all stages of development in ASD, although the exact nature and extent of the impairment vary by age and level of functioning of the ASD sample as well as by task demands. In this review, we examine the nature of face attention, perception, and learning in individuals with ASD, focusing on three broad age ranges (early development, middle childhood, and adolescence/adulthood). We propose that early delays in basic face processing contribute to the atypical trajectory of social communicative skills in individuals with ASD and contribute to poor social learning throughout development. Face learning is a life-long necessity, as the social world of the individual only broadens with age, and thus addressing both the source of the impairment in ASD as well as the trajectory of ability throughout the lifespan, through targeted treatments, may serve to positively impact the lives of individuals who struggle with social information and understanding.

  19. Distortion in time perception as a result of concern about appearing biased

    PubMed Central

    Moskowitz, Gordon B.; Olcaysoy Okten, Irmak; Gooch, Cynthia M.

    2017-01-01

    Two experiments illustrate that the perception of a given time duration slows when white participants observe faces of black men, but only if participants are concerned with appearing biased. In Experiment 1 the concern with the appearance of bias is measured as a chronic state using the external motivation to respond without prejudice scale (Plant & Devine, 1998). In Experiment 2 it is manipulated by varying the race of the experimenter (black versus white). Time perception is assessed via a temporal discrimination task commonly used in the literature. Models of time perception identify arousal as a factor that causes perceived time to slow, and we speculate that arousal arising in intergroup interactions can alter time perception. PMID:28792515

  20. Individual differences in false memory from misinformation: cognitive factors.

    PubMed

    Zhu, Bi; Chen, Chuansheng; Loftus, Elizabeth F; Lin, Chongde; He, Qinghua; Chen, Chunhui; Li, He; Xue, Gui; Lu, Zhonglin; Dong, Qi

    2010-07-01

    This research investigated the cognitive correlates of false memories that are induced by the misinformation paradigm. A large sample of Chinese college students (N=436) participated in a misinformation procedure and also took a battery of cognitive tests. Results revealed sizable and systematic individual differences in false memory arising from exposure to misinformation. False memories were significantly and negatively correlated with measures of intelligence (measured with Raven's Advanced Progressive Matrices and Wechsler Adult Intelligence Scale), perception (Motor-Free Visual Perception Test, Change Blindness, and Tone Discrimination), memory (Wechsler Memory Scales and 2-back Working Memory tasks), and face judgement (Face Recognition and Facial Expression Recognition). These findings suggest that people with relatively low intelligence and poor perceptual abilities might be more susceptible to the misinformation effect.

  1. Facial Emotion Recognition Performance Differentiates Between Behavioral Variant Frontotemporal Dementia and Major Depressive Disorder.

    PubMed

    Chiu, Isabelle; Piguet, Olivier; Diehl-Schmid, Janine; Riedl, Lina; Beck, Johannes; Leyhe, Thomas; Holsboer-Trachsler, Edith; Kressig, Reto W; Berres, Manfred; Monsch, Andreas U; Sollberger, Marc

    Misdiagnosis of early behavioral variant frontotemporal dementia (bvFTD) as major depressive disorder (MDD) is not uncommon due to overlapping symptoms. The aim of this study was to improve the discrimination between these disorders using a novel facial emotion perception task. In this prospective cohort study (July 2013-March 2016), we compared 25 patients meeting Rascovsky diagnostic criteria for bvFTD, 20 patients meeting DSM-IV criteria for MDD, 21 patients meeting McKhann diagnostic criteria for Alzheimer's disease dementia, and 31 healthy participants on a novel emotion intensity rating task comprising morphed low-intensity facial stimuli. Participants were asked to rate the intensity of morphed faces on the congruent basic emotion (e.g., rating sadness when a sad face is shown) and on the 5 incongruent basic emotions (e.g., rating each of the other basic emotions when a sad face is shown). bvFTD patients both underrated congruent emotions (P < .01) and overrated incongruent emotions (P < .001), resulting in confusion of facial emotions. In contrast, MDD patients overrated congruent negative facial emotions (P < .001), but not incongruent facial emotions. Accordingly, ratings of congruent and incongruent emotions highly discriminated between bvFTD and MDD patients, ranging from area under the curve (AUC) = 93% to AUC = 98%. Further, an almost complete discrimination (AUC = 99%) was achieved by contrasting the 2 rating types. In contrast, Alzheimer's disease dementia patients perceived emotions similarly to healthy participants, indicating no impact of cognitive impairment on rating scores. Our congruent and incongruent facial emotion intensity rating task allows a detailed assessment of facial emotion perception in patient populations. By using this simple task, we achieved an almost complete discrimination between bvFTD and MDD, potentially helping to improve diagnostic certainty in early bvFTD. © Copyright 2018 Physicians Postgraduate Press, Inc.
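
    The discrimination values above are areas under the ROC curve (AUC). A minimal sketch of computing an AUC from a single rating score for two groups follows; the scores are fabricated and only the group sizes echo the abstract.

    ```python
    # Minimal sketch: how well a single rating score separates two groups (AUC),
    # as in the bvFTD vs. MDD discrimination above; scores here are fabricated
    # placeholders and only the group sizes are taken from the abstract.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(3)
    scores_bvftd = rng.normal(loc=2.0, scale=0.6, size=25)   # e.g., incongruent-emotion ratings
    scores_mdd = rng.normal(loc=0.8, scale=0.6, size=20)

    labels = np.concatenate([np.ones(25), np.zeros(20)])      # 1 = bvFTD, 0 = MDD
    scores = np.concatenate([scores_bvftd, scores_mdd])
    print(f"AUC = {roc_auc_score(labels, scores):.2f}")        # 0.5 = chance, 1.0 = perfect
    ```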

  2. Many faces of expertise: fusiform face area in chess experts and novices.

    PubMed

    Bilalić, Merim; Langner, Robert; Ulrich, Rolf; Grodd, Wolfgang

    2011-07-13

    The fusiform face area (FFA) is involved in face perception to such an extent that some claim it is a brain module for faces exclusively. The other possibility is that FFA is modulated by experience in individuation in any visual domain, not only faces. Here we test this latter FFA expertise hypothesis using the game of chess as a domain of investigation. We exploited a characteristic of chess: it features multiple objects forming meaningful spatial relations. In three experiments, we show that FFA activity is related to stimulus properties and not to chess skill directly. In all chess and non-chess tasks, experts' FFA was more activated than that of novices only when they dealt with naturalistic full-board chess positions. When common spatial relationships formed by chess objects in chess positions were randomly disturbed, FFA was again differentially active only in experts, regardless of the actual task. Our experiments show that FFA contributes to the holistic processing of domain-specific multipart stimuli in chess experts. This suggests that FFA may not only mediate human expertise in face recognition but, supporting the expertise hypothesis, may mediate the automatic holistic processing of any highly familiar multipart visual input.

  3. Functional asymmetry and interhemispheric cooperation in the perception of emotions from facial expressions.

    PubMed

    Tamietto, Marco; Latini Corazzini, Luca; de Gelder, Beatrice; Geminiani, Giuliano

    2006-05-01

    The present study used the redundant target paradigm on healthy subjects to investigate functional hemispheric asymmetries and interhemispheric cooperation in the perception of emotions from faces. In Experiment 1, participants responded to checkerboards presented either unilaterally to the left (LVF) or right visual half field (RVF), or simultaneously to both hemifields (BVF), while performing a pointing task for the control of eye movements. As previously reported (Miniussi et al. in J Cogn Neurosci 10:216-230, 1998), redundant stimulation led to shorter latencies for stimulus detection (bilateral gain or redundant target effect, RTE) that exceeded the limit for a probabilistic interpretation, thereby validating the pointing procedure and supporting interhemispheric cooperation. In Experiment 2, the same pointing procedure was used in a go/no-go task requiring subjects to respond when seeing a target emotional expression (happy or fearful, counterbalanced between blocks). Faster reaction times to unilateral LVF than RVF emotions, regardless of valence, indicate that the perception of positive and negative emotional faces is lateralized toward the right hemisphere. Simultaneous presentation of two congruent emotional faces, either happy or fearful, produced an RTE that cannot be explained by probability summation and suggests interhemispheric cooperation and neural summation. No such effect was present with BVF incongruent facial expressions. In Experiment 3, we studied whether the RTE for emotional faces depends on the physical identity between BVF stimuli, and we added a second BVF congruent condition in which there was only emotional but not physical or gender identity between stimuli (i.e., two different faces expressing the same emotion). The RTE and interhemispheric cooperation were also present in this second BVF congruent condition. This shows that emotional congruency is a sufficient condition for the RTE to take place in the intact brain and that the cerebral hemispheres can interact in spite of physical differences between stimuli.
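
    Testing whether a redundant target effect exceeds what probability summation allows is commonly done by comparing the redundant-target RT distribution with a race-model (Miller-type) bound built from the two single-target distributions. The sketch below illustrates that check with random placeholder RTs; it is not the study's analysis code.

    ```python
    # Minimal sketch of a race-model (probability-summation) check: the redundant-target
    # CDF is compared against the sum of the two single-target CDFs (Miller's inequality).
    # RT data are random placeholders; violations of the bound suggest neural coactivation.
    import numpy as np

    def ecdf(rts, t_grid):
        rts = np.sort(rts)
        return np.searchsorted(rts, t_grid, side="right") / rts.size

    rng = np.random.default_rng(4)
    rt_lvf = rng.normal(460, 50, 200)        # unilateral left-visual-field targets (ms)
    rt_rvf = rng.normal(475, 50, 200)        # unilateral right-visual-field targets
    rt_bvf = rng.normal(430, 45, 200)        # redundant bilateral targets

    t_grid = np.linspace(300, 600, 61)
    bound = np.minimum(ecdf(rt_lvf, t_grid) + ecdf(rt_rvf, t_grid), 1.0)  # race-model limit
    violation = ecdf(rt_bvf, t_grid) - bound
    print("max violation of the race-model bound:", violation.max())      # > 0 suggests coactivation
    ```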

  4. Reduced anterior temporal and hippocampal functional connectivity during face processing discriminates individuals with social anxiety disorder from healthy controls and panic disorder, and increases following treatment.

    PubMed

    Pantazatos, Spiro P; Talati, Ardesheer; Schneier, Franklin R; Hirsch, Joy

    2014-01-01

    Group functional magnetic resonance imaging (fMRI) studies suggest that anxiety disorders are associated with anomalous brain activation and functional connectivity (FC). However, brain-based features sensitive enough to discriminate individual subjects with a specific anxiety disorder and that track symptom severity longitudinally, desirable qualities for putative disorder-specific biomarkers, remain to be identified. Blood oxygen level-dependent (BOLD) fMRI during emotional face perceptual tasks and a new, large-scale and condition-dependent FC and machine learning approach were used to identify features (pair-wise correlations) that discriminated patients with social anxiety disorder (SAD, N=16) from controls (N=19). We assessed whether these features discriminated SAD from panic disorder (PD, N=16), and SAD from controls in an independent replication sample that performed a similar task at baseline (N: SAD=15, controls=17) and following 8-weeks paroxetine treatment (N: SAD=12, untreated controls=7). High SAD vs HCs discrimination (area under the ROC curve, AUC, arithmetic mean of sensitivity and specificity) was achieved with two FC features during unattended neutral face perception (AUC=0.88, P<0.05 corrected). These features also discriminated SAD vs PD (AUC=0.82, P=0.0001) and SAD vs HCs in the independent replication sample (FC during unattended angry face perception, AUC=0.71, P=0.01). The most informative FC was left hippocampus-left temporal pole, which was reduced in both SAD samples (replication sample P=0.027), and this FC increased following the treatment (post>pre, t(11)=2.9, P=0.007). In conclusion, SAD is associated with reduced FC between left temporal pole and left hippocampus during face perception, and results suggest promise for emerging FC-based biomarkers for SAD diagnosis and treatment effects.
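
    A minimal sketch of the general idea behind FC-based classification: per-subject pairwise ROI correlations serve as features for a leave-one-out classifier scored with AUC. The ROI count, classifier, and synthetic data are illustrative assumptions and do not reproduce the paper's large-scale, condition-dependent pipeline.

    ```python
    # Minimal sketch: pairwise ROI correlations as functional-connectivity features,
    # classified with leave-one-out logistic regression and scored with AUC.
    # All data are synthetic; this is not the study's actual pipeline.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import LeaveOneOut
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(5)
    n_sad, n_hc, n_rois, n_tr = 16, 19, 10, 150

    def fc_features(ts):
        """Upper-triangle pairwise correlations of an ROI x time matrix."""
        r = np.corrcoef(ts)
        return r[np.triu_indices_from(r, k=1)]

    subjects = []
    for grp_shift in [0.3] * n_sad + [0.0] * n_hc:      # small group difference in coupling
        shared = rng.normal(size=n_tr)
        ts = rng.normal(size=(n_rois, n_tr)) + grp_shift * shared
        subjects.append(fc_features(ts))
    X, y = np.array(subjects), np.array([1] * n_sad + [0] * n_hc)

    scores = np.zeros(len(y))
    for train, test in LeaveOneOut().split(X):
        clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
        scores[test] = clf.predict_proba(X[test])[:, 1]
    print(f"leave-one-out AUC = {roc_auc_score(y, scores):.2f}")
    ```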

  5. Effects of oxytocin on behavioral and ERP measures of recognition memory for own-race and other-race faces in women and men

    PubMed Central

    Herzmann, Grit; Bird, Christopher W.; Freeman, Megan; Curran, Tim

    2013-01-01

    Oxytocin has been shown to affect human social information processing including recognition memory for faces. Here we investigated the neural processes underlying the effect of oxytocin on memorizing own-race and other-race faces in men and women. In a placebo-controlled, double-blind, between-subject study, participants received either oxytocin or placebo before studying own-race and other-race faces. We recorded event-related potentials (ERPs) during both the study and recognition phase to investigate neural correlates of oxytocin’s effect on memory encoding, memory retrieval, and perception. Oxytocin increased the accuracy of familiarity judgments in the recognition test. Neural correlates for this effect were found in ERPs related to memory encoding and retrieval but not perception. In contrast to its facilitating effects on familiarity, oxytocin impaired recollection judgments, but in men only. Oxytocin did not differentially affect own-race and other-race faces. This study shows that oxytocin influences memory, but not perceptual processes, in a face recognition task and is the first to reveal sex differences in the effect of oxytocin on face memory. Contrary to recent findings in oxytocin and moral decision making, oxytocin did not preferentially improve memory for own-race faces. PMID:23648370

  6. Effects of oxytocin on behavioral and ERP measures of recognition memory for own-race and other-race faces in women and men.

    PubMed

    Herzmann, Grit; Bird, Christopher W; Freeman, Megan; Curran, Tim

    2013-10-01

    Oxytocin has been shown to affect human social information processing including recognition memory for faces. Here we investigated the neural processes underlying the effect of oxytocin on memorizing own-race and other-race faces in men and women. In a placebo-controlled, double-blind, between-subject study, participants received either oxytocin or placebo before studying own-race and other-race faces. We recorded event-related potentials (ERPs) during both the study and recognition phase to investigate neural correlates of oxytocin's effect on memory encoding, memory retrieval, and perception. Oxytocin increased the accuracy of familiarity judgments in the recognition test. Neural correlates for this effect were found in ERPs related to memory encoding and retrieval but not perception. In contrast to its facilitating effects on familiarity, oxytocin impaired recollection judgments, but in men only. Oxytocin did not differentially affect own-race and other-race faces. This study shows that oxytocin influences memory, but not perceptual processes, in a face recognition task and is the first to reveal sex differences in the effect of oxytocin on face memory. Contrary to recent findings in oxytocin and moral decision making, oxytocin did not preferentially improve memory for own-race faces. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Effect of Visual Cues on the Resolution of Perceptual Ambiguity in Parkinson’s Disease and Normal Aging

    PubMed Central

    Díaz-Santos, Mirella; Cao, Bo; Mauro, Samantha A.; Yazdanbakhsh, Arash; Neargarder, Sandy; Cronin-Golomb, Alice

    2017-01-01

    Parkinson’s disease (PD) and normal aging have been associated with changes in visual perception, including reliance on external cues to guide behavior. This raises the question of the extent to which these groups use visual cues when disambiguating information. Twenty-seven individuals with PD, 23 normal control adults (NC), and 20 younger adults (YA) were presented a Necker cube in which one face was highlighted by thickening the lines defining the face. The hypothesis was that the visual cues would help PD and NC to exert better control over bistable perception. There were three conditions, including passive viewing and two volitional-control conditions (hold one percept in front; and switch: speed up the alternation between the two). In the Hold condition, the cue was either consistent or inconsistent with task instructions. Mean dominance durations (time spent on each percept) under passive viewing were comparable in PD and NC, and shorter in YA. PD and YA increased dominance durations in the Hold cue-consistent condition relative to NC, meaning that appropriate cues helped PD but not NC hold one perceptual interpretation. By contrast, in the Switch condition, NC and YA decreased dominance durations relative to PD, meaning that the use of cues helped NC but not PD in expediting the switch between percepts. Provision of low-level cues has effects on volitional control in PD that are different from in normal aging, and only under task-specific conditions does the use of such cues facilitate the resolution of perceptual ambiguity. PMID:25765890

  8. Using Time Perception to Explore Implicit Sensitivity to Emotional Stimuli in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Jones, Catherine R. G.; Lambrechts, Anna; Gaigg, Sebastian B.

    2017-01-01

    Establishing whether implicit responses to emotional cues are intact in autism spectrum disorder (ASD) is fundamental to ascertaining why their emotional understanding is compromised. We used a temporal bisection task to assess for responsiveness to face and wildlife images that varied in emotional salience. There were no significant differences…

  9. Acute stress influences the discrimination of complex scenes and complex faces in young healthy men.

    PubMed

    Paul, M; Lech, R K; Scheil, J; Dierolf, A M; Suchan, B; Wolf, O T

    2016-04-01

    The stress-induced release of glucocorticoids has been demonstrated to influence hippocampal functions via the modulation of specific receptors. At the behavioral level stress is known to influence hippocampus dependent long-term memory. In recent years, studies have consistently associated the hippocampus with the non-mnemonic perception of scenes, while adjacent regions in the medial temporal lobe were associated with the perception of objects, and faces. So far it is not known whether and how stress influences non-mnemonic perceptual processes. In a behavioral study, fifty male participants were subjected either to the stressful socially evaluated cold-pressor test or to a non-stressful control procedure, before they completed a visual discrimination task, comprising scenes and faces. The complexity of the face and scene stimuli was manipulated in easy and difficult conditions. A significant three way interaction between stress, stimulus type and complexity was found. Stressed participants tended to commit more errors in the complex scenes condition. For complex faces a descriptive tendency in the opposite direction (fewer errors under stress) was observed. As a result the difference between the number of errors for scenes and errors for faces was significantly larger in the stress group. These results indicate that, beyond the effects of stress on long-term memory, stress influences the discrimination of spatial information, especially when the perception is characterized by a high complexity. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Adult Attachment Styles Associated with Brain Activity in Response to Infant Faces in Nulliparous Women: An Event-Related Potentials Study.

    PubMed

    Ma, Yuanxiao; Ran, Guangming; Chen, Xu; Ma, Haijing; Hu, Na

    2017-01-01

    Adult attachment style is a key for understanding emotion regulation and feelings of security in human interactions as well as for the construction of the caregiving system. The caregiving system is a group of representations about affiliative behaviors, which is guided by the caregiver's sensitivity and empathy, and is mature in young adulthood. Appropriate perception and interpretation of infant emotions is a crucial component of the formation of a secure attachment relationship between infant and caregiver. As attachment styles influence the ways in which people perceive emotional information, we examined how different attachment styles associated with brain response to the perception of infant facial expressions in nulliparous females with secure, anxious, and avoidant attachment styles. The event-related potentials of 65 nulliparous females were assessed during a facial recognition task with joy, neutral, and crying infant faces. The results showed that anxiously attached females exhibited larger N170 amplitudes than those with avoidant attachment in response to all infant faces. Regarding the P300 component, securely attached females showed larger amplitudes to all infant faces in comparison with avoidantly attached females. Moreover, anxiously attached females exhibited greater amplitudes than avoidantly attached females to only crying infant faces. In conclusion, the current results provide evidence that attachment style differences are associated with brain responses to the perception of infant faces. Furthermore, these findings further separate the psychological mechanisms underlying the caregiving behavior of those with anxious and avoidant attachment from secure attachment.

  11. Cross-modal cueing effects of visuospatial attention on conscious somatosensory perception.

    PubMed

    Doruk, Deniz; Chanes, Lorena; Malavera, Alejandra; Merabet, Lotfi B; Valero-Cabré, Antoni; Fregni, Felipe

    2018-04-01

    The impact of visuospatial attention on perception with supraliminal stimuli and stimuli at the threshold of conscious perception has been previously investigated. In this study, we assess the cross-modal effects of visuospatial attention on conscious perception for near-threshold somatosensory stimuli applied to the face. Fifteen healthy participants completed two sessions of a near-threshold cross-modality cue-target discrimination/conscious detection paradigm. Each trial began with an endogenous visuospatial cue that predicted the location of a weak near-threshold electrical pulse delivered to the right or left cheek with high probability (∼75%). Participants then completed two tasks: first, a forced-choice somatosensory discrimination task (felt once or twice?) and then, a somatosensory conscious detection task (did you feel the stimulus and, if yes, where (left/right)?). Somatosensory discrimination was evaluated with the response reaction times of correctly detected targets, whereas the somatosensory conscious detection was quantified using perceptual sensitivity (d') and response bias (beta). A 2 × 2 repeated measures ANOVA was used for statistical analysis. In the somatosensory discrimination task (1st task), participants were significantly faster in responding to correctly detected targets (p < 0.001). In the somatosensory conscious detection task (2nd task), a significant effect of visuospatial attention on response bias (p = 0.008) was observed, suggesting that participants had a less strict criterion for stimuli preceded by spatially valid than invalid visuospatial cues. We showed that spatial attention has the potential to modulate the discrimination and the conscious detection of near-threshold somatosensory stimuli as measured, respectively, by a reduction of reaction times and a shift in response bias toward less conservative responses when the cue predicted stimulus location. A shift in response bias indicates possible effects of spatial attention on internal decision processes. The lack of significant results in perceptual sensitivity (d') could be due to weaker effects of endogenous attention on perception.
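
    Perceptual sensitivity (d') and response bias are standard signal-detection measures derived from hit and false-alarm rates. A minimal sketch with hypothetical trial counts (not the study's data) is shown below.

    ```python
    # Minimal sketch: signal-detection measures (d-prime and response bias) from
    # hit and false-alarm counts, with a standard correction for rates of 0 or 1.
    # The trial counts are hypothetical, not taken from the study.
    import numpy as np
    from scipy.stats import norm

    def sdt_measures(hits, misses, false_alarms, correct_rejections):
        # log-linear correction keeps hit/false-alarm rates away from 0 and 1
        hr = (hits + 0.5) / (hits + misses + 1.0)
        far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        z_hr, z_far = norm.ppf(hr), norm.ppf(far)
        d_prime = z_hr - z_far
        criterion = -0.5 * (z_hr + z_far)               # c: positive = conservative
        beta = np.exp(d_prime * criterion)              # likelihood-ratio bias
        return d_prime, criterion, beta

    # valid-cue vs. invalid-cue trials (hypothetical counts)
    print(sdt_measures(hits=42, misses=18, false_alarms=6, correct_rejections=54))
    print(sdt_measures(hits=35, misses=25, false_alarms=5, correct_rejections=55))
    ```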

  12. Face to face with emotion: holistic face processing is modulated by emotional state.

    PubMed

    Curby, Kim M; Johnson, Kareem J; Tyson, Alyssa

    2012-01-01

    Negative emotions are linked with a local, rather than global, visual processing style, which may preferentially facilitate feature-based, relative to holistic, processing mechanisms. Because faces are typically processed holistically, and because social contexts are prime elicitors of emotions, we examined whether negative emotions decrease holistic processing of faces. We induced positive, negative, or neutral emotions via film clips and measured holistic processing before and after the induction: participants made judgements about cued parts of chimeric faces, and holistic processing was indexed by the interference caused by task-irrelevant face parts. Emotional state significantly modulated face-processing style, with the negative emotion induction leading to decreased holistic processing. Furthermore, self-reported change in emotional state correlated with changes in holistic processing. These results contrast with general assumptions that holistic processing of faces is automatic and immune to outside influences, and they illustrate emotion's power to modulate socially relevant aspects of visual perception.

  13. Visual attention mechanisms in happiness versus trustworthiness processing of facial expressions.

    PubMed

    Calvo, Manuel G; Krumhuber, Eva G; Fernández-Martín, Andrés

    2018-03-01

    A happy facial expression makes a person look (more) trustworthy. Do perceptions of happiness and trustworthiness rely on the same face regions and visual attention processes? In an eye-tracking study, eye movements and fixations were recorded while participants judged the un/happiness or the un/trustworthiness of dynamic facial expressions in which the eyes and/or the mouth unfolded from neutral to happy or vice versa. A smiling mouth and happy eyes enhanced perceived happiness and trustworthiness similarly, with a greater contribution of the smile relative to the eyes. This comparable judgement output for happiness and trustworthiness was reached through shared as well as distinct attentional mechanisms: (a) entry times and (b) initial fixation thresholds for each face region were equivalent for both judgements, thereby revealing the same attentional orienting in happiness and trustworthiness processing. However, (c) greater and (d) longer fixation density for the mouth region in the happiness task, and for the eye region in the trustworthiness task, demonstrated different selective attentional engagement. Relatedly, (e) mean fixation duration across face regions was longer in the trustworthiness task, thus showing increased attentional intensity or processing effort.

  14. Time perception: the bad news and the good

    PubMed Central

    Matthews, William J; Meck, Warren H

    2014-01-01

    Time perception is fundamental and heavily researched, but the field faces a number of obstacles to theoretical progress. In this advanced review, we focus on three pieces of ‘bad news’ for time perception research: temporal perception is highly labile across changes in experimental context and task; there are pronounced individual differences not just in overall performance but in the use of different timing strategies and the effect of key variables; and laboratory studies typically bear little relation to timing in the ‘real world’. We describe recent examples of these issues and in each case offer some ‘good news’ by showing how new research is addressing these challenges to provide rich insights into the neural and information-processing bases of timing and time perception. PMID:25210578

  15. Neural differences in self-perception during illness and after weight-recovery in anorexia nervosa.

    PubMed

    McAdams, Carrie J; Jeon-Slaughter, Haekyung; Evans, Siobahn; Lohrenz, Terry; Montague, P Read; Krawczyk, Daniel C

    2016-11-01

    Anorexia nervosa (AN) is a severe mental illness characterized by problems with self-perception. Whole-brain neural activations in healthy women, women with AN and women in long-term weight recovery following AN were compared using two functional magnetic resonance imaging tasks probing different aspects of self-perception. The Social Identity-V2 task involved consideration about oneself and others using socially descriptive adjectives. Both the ill and weight-recovered women with AN engaged medial prefrontal cortex less than healthy women for self-relevant cognitions, a potential biological trait difference. Weight-recovered women also activated the inferior frontal gyri and dorsal anterior cingulate more for direct self-evaluations than for reflected self-evaluations, unlike both other groups, suggesting that recovery may include compensatory neural changes related to social perspectives. The Faces task compared viewing oneself to a stranger. Participants with AN showed elevated activity in the bilateral fusiform gyri for self-images, unlike the weight-recovered and healthy women, suggesting cognitive distortions about physical appearance are a state rather than trait problem in this disease. Because both ill and recovered women showed neural differences related to social self-perception, but only recovered women differed when considering social perspectives, these neurocognitive targets may be particularly important for treatment. © The Author (2016). Published by Oxford University Press.

  16. Face Pareidolia in the Rhesus Monkey.

    PubMed

    Taubert, Jessica; Wardle, Susan G; Flessert, Molly; Leopold, David A; Ungerleider, Leslie G

    2017-08-21

    Face perception in humans and nonhuman primates is rapid and accurate [1-4]. In the human brain, a network of visual-processing regions is specialized for faces [5-7]. Although face processing is a priority of the primate visual system, face detection is not infallible. Face pareidolia is the compelling illusion of perceiving facial features on inanimate objects, such as the illusory face on the surface of the moon. Although face pareidolia is commonly experienced by humans, its presence in other species is unknown. Here we provide evidence for face pareidolia in a species known to possess a complex face-processing system [8-10]: the rhesus monkey (Macaca mulatta). In a visual preference task [11, 12], monkeys looked longer at photographs of objects that elicited face pareidolia in human observers than at photographs of similar objects that did not elicit illusory faces. Examination of eye movements revealed that monkeys fixated the illusory internal facial features in a pattern consistent with how they view photographs of faces [13]. Although the specialized response to faces observed in humans [1, 3, 5-7, 14] is often argued to be continuous across primates [4, 15], it was previously unclear whether face pareidolia arose from a uniquely human capacity. For example, pareidolia could be a product of the human aptitude for perceptual abstraction or result from frequent exposure to cartoons and illustrations that anthropomorphize inanimate objects. Instead, our results indicate that the perception of illusory facial features on inanimate objects is driven by a broadly tuned face-detection mechanism that we share with other species. Published by Elsevier Ltd.

  17. The relation between race-related implicit associations and scalp-recorded neural activity evoked by faces from different races.

    PubMed

    He, Yi; Johnson, Marcia K; Dovidio, John F; McCarthy, Gregory

    2009-01-01

    The neural correlates of the perception of faces from different races were investigated. White participants performed a gender identification task in which Asian, Black, and White faces were presented while event-related potentials (ERPs) were recorded. Participants also completed an implicit association task for Black (IAT-Black) and Asian (IAT-Asian) faces. ERPs evoked by Black and White faces differed, with Black faces evoking a larger positive ERP that peaked at 168 ms over the frontal scalp, and White faces evoking a larger negative ERP that peaked at 244 ms. These Black/White ERP differences significantly correlated with participants' scores on the IAT-Black. ERPs also differentiated White from Asian faces and a significant correlation was obtained between the White-Asian ERP difference waves at approximately 500 ms and the IAT-Asian. A positive ERP at 116 ms over occipital scalp differentiated all three races, but was not correlated with either IAT. In addition, a late positive component (around 592 ms) was greater for the same race compared to either other race faces, suggesting potentially more extended or deeper processing of the same race faces. Taken together, the ERP/IAT correlations observed for both other races indicate the influence of a race-sensitive evaluative process that may include early more automatic and/or implicit processes and relatively later more controlled processes.

  18. The Relation between Race-related Implicit Associations and Scalp-recorded Neural Activity Evoked by Faces from Different Races

    PubMed Central

    He, Yi; Johnson, Marcia K.; Dovidio, John F.; McCarthy, Gregory

    2009-01-01

    The neural correlates of the perception of faces from different races were investigated. White participants performed a gender identification task in which Asian, Black, and White faces were presented while event-related potentials (ERPs) were recorded. Participants also completed an implicit association task for Black (IAT-Black) and Asian (IAT-Asian) faces. ERPs evoked by Black and White faces differed, with Black faces evoking a larger positive ERP that peaked at 168 ms over the frontal scalp, and White faces evoking a larger negative ERP that peaked at 244 ms. These Black/White ERP differences significantly correlated with participants’ scores on the IAT-Black. ERPs also differentiated White from Asian faces and a significant correlation was obtained between the White-Asian ERP difference waves at ~500 ms and the IAT-Asian. A positive ERP at 116 ms over occipital scalp differentiated all three races, but was not correlated with either IAT. In addition, a late positive component (around 592 ms) was greater for the same race compared to either other race faces, suggesting potentially more extended or deeper processing of the same race faces. Taken together, the ERP/IAT correlations observed for both other races indicate the influence of a race-sensitive evaluative process that may include early more automatic and/or implicit processes and relatively later more controlled processes. PMID:19562628
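
    IAT scores are typically expressed as a D-like measure: the latency difference between incompatible and compatible blocks scaled by a pooled standard deviation. The sketch below shows a simplified version of that scoring with random placeholder RTs; it is not the exact algorithm used in the study.

    ```python
    # Minimal sketch of a simplified IAT D-score: the difference between mean
    # response latencies in the incompatible and compatible blocks, divided by the
    # pooled standard deviation of all latencies. RTs below are random placeholders.
    import numpy as np

    def iat_d_score(rt_compatible_ms, rt_incompatible_ms):
        all_rts = np.concatenate([rt_compatible_ms, rt_incompatible_ms])
        pooled_sd = all_rts.std(ddof=1)
        return (rt_incompatible_ms.mean() - rt_compatible_ms.mean()) / pooled_sd

    rng = np.random.default_rng(6)
    rt_compatible = rng.normal(720, 120, size=40)      # e.g., congruent pairings block
    rt_incompatible = rng.normal(810, 130, size=40)    # e.g., incongruent pairings block
    print(f"IAT D = {iat_d_score(rt_compatible, rt_incompatible):.2f}")  # positive = implicit bias
    ```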

  19. You may look unhappy unless you smile: the distinctiveness of a smiling face against faces without an explicit smile.

    PubMed

    Park, Hyung-Bum; Han, Ji-Eun; Hyun, Joo-Seok

    2015-05-01

    An expressionless face is often perceived as rude, whereas a smiling face is considered hospitable. Repeated exposure to such perceptions may have developed a stereotype whereby an expressionless face is categorized as expressing negative emotion. To test this idea, we displayed a search array in which the target was an expressionless face and the distractors were either smiling or frowning faces, and we manipulated set size. Search reaction times were delayed with frowning distractors, and the delays became more evident as set size increased. We also devised a short-term comparison task in which participants compared two sequential sets of expressionless, smiling, and frowning faces. Detection of an expression change across the sets was highly inaccurate when the change was made between a frowning and an expressionless face. These results indicate that subjects confused the emotions expressed by frowning and expressionless faces, suggesting that it is difficult to distinguish expressionless from frowning faces. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Social identity modifies face perception: an ERP study of social categorization

    PubMed Central

    Derks, Belle; Stedehouder, Jeffrey; Ito, Tiffany A.

    2015-01-01

    Two studies examined whether social identity processes, i.e. group identification and social identity threat, amplify the degree to which people attend to social category information in early perception [assessed with event-related brain potentials (ERPs)]. Participants were presented with faces of Muslims and non-Muslims in an evaluative priming task while ERPs were measured and implicit evaluative bias was assessed. Study 1 revealed that non-Muslims showed stronger differentiation between ingroup and outgroup faces in both early (N200) and later processing stages (implicit evaluations) when they identified more strongly with their ethnic group. Moreover, identification effects on implicit bias were mediated by intergroup differentiation in the N200. In Study 2, social identity threat (vs control) was manipulated among Muslims. Results revealed that high social identity threat resulted in stronger differentiation of Muslims from non-Muslims in early (N200) and late (implicit evaluations) processing stages, with N200 effects again predicting implicit bias. Combined, these studies reveal how seemingly bottom-up early social categorization processes are affected by individual and contextual variables that affect the meaning of social identity. Implications of these results for the social identity perspective as well as social cognitive theories of person perception are discussed. PMID:25140049

  1. Effectiveness of Podcasts as Laboratory Instructional Support: Learner Perceptions of Machine Shop and Welding Students

    ERIC Educational Resources Information Center

    Lauritzen, Louis Dee

    2014-01-01

    Machine shop students face the daunting task of learning the operation of complex three-dimensional machine tools, and welding students must develop specific motor skills in addition to understanding the complexity of material types and characteristics. The use of consumer technology by the Millennial generation of vocational students, the…

  2. Integrative complexity of wildfire management: development of a scale

    Treesearch

    Joshua Carroll; Alan Bright

    2007-01-01

    Wildfire in the West has become a controversial natural resource issue that has divided the public's perceptions regarding its management, and forest managers are now faced with the difficult task of making sound decisions while balancing these varying concerns. Two widely used wildfire management practices are prescribed fire and mechanical thinning. In order to...

  3. Less impairment in face imagery than face perception in early prosopagnosia.

    PubMed

    Michelon, Pascale; Biederman, Irving

    2003-01-01

    There have been a number of reports of preserved face imagery in prosopagnosia. We put this issue to experimental test by comparing the performance of MJH, a 34-year-old prosopagnosic since the age of 5, to controls on tasks where the participants had to judge faces of current celebrities, either in terms of overall similarity (Of Bette Midler, Hillary Clinton, and Diane Sawyer, whose face looks least like the other two?) or on individual features (Is Ronald Reagan's nose pointy?). For each task, a performance measure reflecting the degree of agreement of each participant with the average of the others (not including MJH) was calculated. On the imagery versions of these tasks, MJH was within the lower range of the controls for the agreement measure (though significantly below the mean of the controls). When the same tasks were performed from pictures, agreement among the controls markedly increased whereas MJH's performance was virtually unaffected, placing him well below the range of the controls. This pattern was also apparent with a test of facial features of emotion (Are the eyes wrinkled when someone is surprised?). On three non-face imagery tasks assessing color (What color is a football?), relative lengths of animals' tails (Is a bear's tail long in proportion to its body?), and mental size comparisons (What is bigger, a camel or a zebra?), MJH was within or close to the lower end of the normal range. As most of the celebrities became famous after the onset of MJH's prosopagnosia, our confirmation of the reports of less impaired face imagery in some prosopagnosics cannot be attributed to pre-lesion storage. We speculate that face recognition, in contrast to object recognition, relies more heavily on a representation that describes the initial spatial filter values so the metrics of the facial surface can be specified. If prosopagnosia is regarded as a form of simultanagnosia in which some of these filter values cannot be registered on any one encounter with a face, then multiple opportunities for repeated storage may partially compensate for the degraded representation on that single encounter. Imagery may allow access to this more complete representation.

  4. Behavioural evidence for distinct mechanisms related to global and biological motion perception.

    PubMed

    Miller, Louisa; Agnew, Hannah C; Pilz, Karin S

    2018-01-01

    The perception of human motion is a vital ability in our daily lives. Human movement recognition is often studied using point-light stimuli in which dots represent the joints of a moving person. Depending on task and stimulus, the local motion of the single dots and the global form of the stimulus can be used to discriminate point-light stimuli. Previous studies often measured motion coherence for global motion perception and contrasted it with performance in biological motion perception to assess whether difficulties in biological motion processing are related to more general difficulties with motion processing. However, it is so far unknown how performance in global motion tasks relates to the ability to use local motion or global form to discriminate point-light stimuli. Here, we investigated this relationship in more detail. In Experiment 1, we measured participants' ability to discriminate the facing direction of point-light stimuli that contained primarily local motion, global form, or both. In Experiment 2, we embedded point-light stimuli in noise to assess whether previously found relationships in task performance are related to the ability to detect signal in noise. In both experiments, we also assessed motion coherence thresholds from random-dot kinematograms. We found relationships between performances for the different biological motion stimuli, but performance for global and biological motion perception was unrelated. These results are in accordance with previous neuroimaging studies that highlighted distinct areas for global and biological motion perception in the dorsal pathway, and indicate that results regarding the relationship between global motion perception and biological motion perception need to be interpreted with caution. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  5. Efficacy of identifying neural components in the face and emotion processing system in schizophrenia using a dynamic functional localizer.

    PubMed

    Arnold, Aiden E G F; Iaria, Giuseppe; Goghari, Vina M

    2016-02-28

    Schizophrenia is associated with deficits in face perception and emotion recognition. Despite consistent behavioural results, the neural mechanisms underlying these cognitive abilities have been difficult to isolate, in part due to differences in neuroimaging methods used between studies for identifying regions in the face processing system. Given this problem, we aimed to validate a recently developed fMRI-based dynamic functional localizer task for use in studies of psychiatric populations and specifically schizophrenia. Previously, this functional localizer successfully identified each of the core face processing regions (i.e. fusiform face area, occipital face area, superior temporal sulcus), and regions within an extended system (e.g. amygdala) in healthy individuals. In this study, we tested the functional localizer success rate in 27 schizophrenia patients and in 24 community controls. Overall, the core face processing regions were localized equally between both the schizophrenia and control group. Additionally, the amygdala, a candidate brain region from the extended system, was identified in nearly half the participants from both groups. These results indicate the effectiveness of a dynamic functional localizer at identifying regions of interest associated with face perception and emotion recognition in schizophrenia. The use of dynamic functional localizers may help standardize the investigation of the facial and emotion processing system in this and other clinical populations. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  6. Developmental Commonalities between Object and Face Recognition in Adolescence

    PubMed Central

    Jüttner, Martin; Wakui, Elley; Petters, Dean; Davidoff, Jules

    2016-01-01

    In the visual perception literature, the recognition of faces has often been contrasted with that of non-face objects, in terms of differences with regard to the role of parts, part relations and holistic processing. However, recent evidence from developmental studies has begun to blur this sharp distinction. We review evidence for a protracted development of object recognition that is reminiscent of the well-documented slow maturation observed for faces. The prolonged development manifests itself in a retarded processing of metric part relations as opposed to that of individual parts and offers surprising parallels to developmental accounts of face recognition, even though the interpretation of the data is less clear with regard to holistic processing. We conclude that such results might indicate functional commonalities between the mechanisms underlying the recognition of faces and non-face objects, which are modulated by different task requirements in the two stimulus domains. PMID:27014176

  7. Typical and Atypical Development of Functional Connectivity in the Face Network.

    PubMed

    Song, Yiying; Zhu, Qi; Li, Jingguang; Wang, Xu; Liu, Jia

    2015-10-28

    Extensive studies have demonstrated that face recognition performance does not reach adult levels until adolescence. However, there is no consensus on whether such prolonged improvement stems from development of general cognitive factors or face-specific mechanisms. Here, we used behavioral experiments and functional magnetic resonance imaging (fMRI) to evaluate these two hypotheses. With a large cohort of children (n = 379), we found that the ability of face-specific recognition in humans increased with age throughout childhood and into late adolescence in both face memory and face perception. Neurally, to circumvent the potential problem of age differences in task performance, attention, or cognitive strategies in task-state fMRI studies, we measured the resting-state functional connectivity (RSFC) between the occipital face area (OFA) and fusiform face area (FFA) in human brain and found that the OFA-FFA RSFC increased until 11-13 years of age. Moreover, the OFA-FFA RSFC was selectively impaired in adults with developmental prosopagnosia (DP). In contrast, no age-related changes or differences between DP and normal adults were observed for RSFCs in the object system. Finally, the OFA-FFA RSFC matured earlier than face selectivity in either the OFA or FFA. These results suggest the critical role of the OFA-FFA RSFC in the development of face recognition. Together, our findings support the hypothesis that prolonged development of face recognition is face specific, not domain general. Copyright © 2015 the authors 0270-6474/15/3514624-12$15.00/0.

  8. The correlates of subjective perception of identity and expression in the face network: an fMRI adaptation study

    PubMed Central

    Fox, Christopher J.; Moon, So Young; Iaria, Giuseppe; Barton, Jason J.S.

    2009-01-01

    The recognition of facial identity and expression are distinct tasks, with current models hypothesizing anatomic segregation of processing within a face-processing network. Using fMRI adaptation and a region-of-interest approach, we assessed how the perception of identity and expression changes in morphed stimuli affected the signal within this network, by contrasting (a) changes that crossed categorical boundaries of identity or expression with those that did not, and (b) changes that subjects perceived as causing identity or expression to change, versus changes that they perceived as not affecting the category of identity or expression. The occipital face area (OFA) was sensitive to any structural change in a face, whether it was identity or expression, but its signal did not correlate with whether subjects perceived a change or not. Both the fusiform face area (FFA) and the posterior superior temporal sulcus (pSTS) showed release from adaptation when subjects perceived a change in either identity or expression, although in the pSTS this effect only occurred when subjects were explicitly attending to expression. The middle superior temporal sulcus (mSTS) showed release from adaptation for expression only, and the precuneus for identity only. The data support models where the OFA is involved in the early perception of facial structure. However, evidence for a functional overlap in the FFA and pSTS, with both identity and expression signals in both areas, argues against a complete independence of identity and expression processing in these regions of the core face-processing network. PMID:18852053

  9. The correlates of subjective perception of identity and expression in the face network: an fMRI adaptation study.

    PubMed

    Fox, Christopher J; Moon, So Young; Iaria, Giuseppe; Barton, Jason J S

    2009-01-15

    The recognition of facial identity and expression are distinct tasks, with current models hypothesizing anatomic segregation of processing within a face-processing network. Using fMRI adaptation and a region-of-interest approach, we assessed how the perception of identity and expression changes in morphed stimuli affected the signal within this network, by contrasting (a) changes that crossed categorical boundaries of identity or expression with those that did not, and (b) changes that subjects perceived as causing identity or expression to change, versus changes that they perceived as not affecting the category of identity or expression. The occipital face area (OFA) was sensitive to any structural change in a face, whether it was identity or expression, but its signal did not correlate with whether subjects perceived a change or not. Both the fusiform face area (FFA) and the posterior superior temporal sulcus (pSTS) showed release from adaptation when subjects perceived a change in either identity or expression, although in the pSTS this effect only occurred when subjects were explicitly attending to expression. The middle superior temporal sulcus (mSTS) showed release from adaptation for expression only, and the precuneus for identity only. The data support models where the OFA is involved in the early perception of facial structure. However, evidence for a functional overlap in the FFA and pSTS, with both identity and expression signals in both areas, argues against a complete independence of identity and expression processing in these regions of the core face-processing network.

  10. Adult Attachment Styles Associated with Brain Activity in Response to Infant Faces in Nulliparous Women: An Event-Related Potentials Study

    PubMed Central

    Ma, Yuanxiao; Ran, Guangming; Chen, Xu; Ma, Haijing; Hu, Na

    2017-01-01

    Adult attachment style is key to understanding emotion regulation and feelings of security in human interactions, as well as the construction of the caregiving system. The caregiving system is a set of representations about affiliative behaviors, is guided by the caregiver's sensitivity and empathy, and matures in young adulthood. Appropriate perception and interpretation of infant emotions is a crucial component of the formation of a secure attachment relationship between infant and caregiver. As attachment styles influence the ways in which people perceive emotional information, we examined how different attachment styles are associated with brain responses to infant facial expressions in nulliparous females with secure, anxious, and avoidant attachment styles. The event-related potentials of 65 nulliparous females were assessed during a facial recognition task with joyful, neutral, and crying infant faces. The results showed that anxiously attached females exhibited larger N170 amplitudes than those with avoidant attachment in response to all infant faces. Regarding the P300 component, securely attached females showed larger amplitudes to all infant faces than avoidantly attached females, and anxiously attached females exhibited greater amplitudes than avoidantly attached females to crying infant faces only. In conclusion, the current results provide evidence that attachment style differences are associated with brain responses to the perception of infant faces, and they further distinguish the psychological mechanisms underlying caregiving behavior in anxious and avoidant attachment from those in secure attachment. PMID:28484415

  11. Biased recognition of facial affect in patients with major depressive disorder reflects clinical state.

    PubMed

    Münkler, Paula; Rothkirch, Marcus; Dalati, Yasmin; Schmack, Katharina; Sterzer, Philipp

    2015-01-01

    Cognitive theories of depression posit that perception is negatively biased in depressive disorder. Previous studies have provided empirical evidence for this notion, but left open the question whether the negative perceptual bias reflects a stable trait or the current depressive state. Here we investigated the stability of negatively biased perception over time. Emotion perception was examined in patients with major depressive disorder (MDD) and healthy control participants in two experiments. In the first experiment subjective biases in the recognition of facial emotional expressions were assessed. Participants were presented with faces that were morphed between sad and neutral and happy expressions and had to decide whether the face was sad or happy. The second experiment assessed automatic emotion processing by measuring the potency of emotional faces to gain access to awareness using interocular suppression. A follow-up investigation using the same tests was performed three months later. In the emotion recognition task, patients with major depression showed a shift in the criterion for the differentiation between sad and happy faces: In comparison to healthy controls, patients with MDD required a greater intensity of the happy expression to recognize a face as happy. After three months, this negative perceptual bias was reduced in comparison to the control group. The reduction in negative perceptual bias correlated with the reduction of depressive symptoms. In contrast to previous work, we found no evidence for preferential access to awareness of sad vs. happy faces. Taken together, our results indicate that MDD-related perceptual biases in emotion recognition reflect the current clinical state rather than a stable depressive trait.
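
    The criterion shift reported above can be visualised by fitting a psychometric function to the proportion of "happy" responses along the sad-to-happy morph continuum and reading off the 50% point; a boundary displaced toward the happy end means more happiness is required before a face is called happy. The sketch below is illustrative only; the logistic form, starting values, and toy data are assumptions, not the authors' analysis.

        # Illustrative only: fit a logistic to choice proportions along a morph continuum
        # (0 = fully sad, 1 = fully happy) and return the 50% category boundary.
        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(x, boundary, slope):
            return 1.0 / (1.0 + np.exp(-(x - boundary) / slope))

        def category_boundary(morph_levels, prop_happy):
            (boundary, slope), _ = curve_fit(logistic, morph_levels, prop_happy, p0=[0.5, 0.1])
            return boundary  # larger value = more "happy" intensity needed to report "happy"

        # Toy example: a participant whose boundary is shifted toward the happy end.
        levels = np.linspace(0, 1, 7)
        prop = np.array([0.0, 0.05, 0.10, 0.30, 0.60, 0.90, 1.00])
        print(category_boundary(levels, prop))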

  12. A collaborative working environment for small group meetings in Second Life.

    PubMed

    da Silva, Cintia Rc; Garcia, Ana Cristina B

    2013-01-01

    This paper presents SLMeetingRoom, an online virtual reality environment that supports group meetings of geographically dispersed participants. A prototype was developed on the Second Life platform to demonstrate the feasibility of the approach. Ten components had to be added to the Second Life environment to support essential group-work activities such as communication among participants, coordination of tasks and participants, collaboration, and awareness of work progress. Empirical studies, a pilot and an experiment, compared four meeting settings: face-to-face, videoconference, standard Second Life, and SLMeetingRoom. The study involved graduate students enrolled in the Interface and Multimedia course at the Fluminense Federal University (UFF) in Brazil. Results indicated that groups working in the SLMeetingRoom environment reported a sense of presence similar to face-to-face meetings, with low cognitive effort. Task completion and degree of participation were not affected by the meeting setup. It was concluded that Second Life, in conjunction with the SLMeetingRoom components, is a good tool for holding synchronous remote meetings and can coexist with other electronic meeting technologies.

  13. A Robust Method of Measuring Other-Race and Other-Ethnicity Effects: The Cambridge Face Memory Test Format

    PubMed Central

    McKone, Elinor; Stokes, Sacha; Liu, Jia; Cohan, Sarah; Fiorentini, Chiara; Pidcock, Madeleine; Yovel, Galit; Broughton, Mary; Pelleg, Michel

    2012-01-01

    Other-race and other-ethnicity effects on face memory have remained a topic of consistent research interest over several decades, across fields including face perception, social psychology, and forensic psychology (eyewitness testimony). Here we demonstrate that the Cambridge Face Memory Test format provides a robust method for measuring these effects. Testing the Cambridge Face Memory Test original version (CFMT-original; European-ancestry faces from Boston USA) and a new Cambridge Face Memory Test Chinese (CFMT-Chinese), with European and Asian observers, we report a race-of-face by race-of-observer interaction that was highly significant despite modest sample size and despite observers who had quite high exposure to the other race. We attribute this to high statistical power arising from the very high internal reliability of the tasks. This power also allows us to demonstrate a much smaller within-race other ethnicity effect, based on differences in European physiognomy between Boston faces/observers and Australian faces/observers (using the CFMT-Australian). PMID:23118912
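
    The statistical power credited above to the tasks' very high internal reliability can be estimated in several ways; one common approach, sketched below under the assumption of per-item accuracy scores, is a split-half correlation with the Spearman-Brown correction. The original study may have used a different estimator; this is illustrative only.

        # Hedged sketch: split-half reliability of a memory test from 0/1 item accuracies.
        import numpy as np

        def split_half_reliability(scores):
            """scores: (n_participants, n_items) array of 0/1 accuracies."""
            odd = scores[:, 0::2].sum(axis=1)   # total correct on odd-numbered items
            even = scores[:, 1::2].sum(axis=1)  # total correct on even-numbered items
            r = np.corrcoef(odd, even)[0, 1]    # correlation between the two half-tests
            return 2 * r / (1 + r)              # Spearman-Brown corrected full-test estimate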

  14. Faces in Context: Does Face Perception Depend on the Orientation of the Visual Scene?

    PubMed

    Taubert, Jessica; van Golde, Celine; Verstraten, Frans A J

    2016-10-01

    The mechanisms held responsible for familiar face recognition are thought to be orientation dependent; inverted faces are more difficult to recognize than their upright counterparts. Although this effect of inversion has been investigated extensively, researchers have typically sliced faces from photographs and presented them in isolation. As such, it is not known whether the perceived orientation of a face is inherited from the visual scene in which it appears. Here, we address this question by measuring performance in a simultaneous same-different task while manipulating both the orientation of the faces and the scene. We found that the face inversion effect survived scene inversion. Nonetheless, an improvement in performance when the scene was upside down suggests that sensitivity to identity increased when the faces were more easily segmented from the scene. Thus, while these data identify congruency with the visual environment as a contributing factor in recognition performance, they imply different mechanisms operate on upright and inverted faces. © The Author(s) 2016.

  15. Audiovisual Temporal Perception in Aging: The Role of Multisensory Integration and Age-Related Sensory Loss

    PubMed Central

    Brooks, Cassandra J.; Chan, Yu Man; Anderson, Andrew J.; McKendrick, Allison M.

    2018-01-01

    Within each sensory modality, age-related deficits in temporal perception contribute to the difficulties older adults experience when performing everyday tasks. Since perceptual experience is inherently multisensory, older adults also face the added challenge of appropriately integrating or segregating the auditory and visual cues present in our dynamic environment into coherent representations of distinct objects. As such, many studies have investigated how older adults perform when integrating temporal information across audition and vision. This review covers both direct judgments about temporal information (the sound-induced flash illusion, temporal order, perceived synchrony, and temporal rate discrimination) and judgments regarding stimuli containing temporal information (the audiovisual bounce effect and speech perception). Although an age-related increase in integration has been demonstrated on a variety of tasks, research specifically investigating the ability of older adults to integrate temporal auditory and visual cues has produced disparate results. In this short review, we explore what factors could underlie these divergent findings. We conclude that both task-specific differences and age-related sensory loss play a role in the reported disparity in age-related effects on the integration of auditory and visual temporal information. PMID:29867415

  16. Audiovisual Temporal Perception in Aging: The Role of Multisensory Integration and Age-Related Sensory Loss.

    PubMed

    Brooks, Cassandra J; Chan, Yu Man; Anderson, Andrew J; McKendrick, Allison M

    2018-01-01

    Within each sensory modality, age-related deficits in temporal perception contribute to the difficulties older adults experience when performing everyday tasks. Since perceptual experience is inherently multisensory, older adults also face the added challenge of appropriately integrating or segregating the auditory and visual cues present in our dynamic environment into coherent representations of distinct objects. As such, many studies have investigated how older adults perform when integrating temporal information across audition and vision. This review covers both direct judgments about temporal information (the sound-induced flash illusion, temporal order, perceived synchrony, and temporal rate discrimination) and judgments regarding stimuli containing temporal information (the audiovisual bounce effect and speech perception). Although an age-related increase in integration has been demonstrated on a variety of tasks, research specifically investigating the ability of older adults to integrate temporal auditory and visual cues has produced disparate results. In this short review, we explore what factors could underlie these divergent findings. We conclude that both task-specific differences and age-related sensory loss play a role in the reported disparity in age-related effects on the integration of auditory and visual temporal information.

  17. The effect of a concurrent working memory task and temporal offsets on the integration of auditory and visual speech information.

    PubMed

    Buchan, Julie N; Munhall, Kevin G

    2012-01-01

    Audiovisual speech perception is an everyday occurrence of multisensory integration. Conflicting visual speech information can influence the perception of acoustic speech (namely the McGurk effect), and auditory and visual speech are integrated over a rather wide range of temporal offsets. This research examined whether the addition of a concurrent cognitive load task would affect audiovisual integration in a McGurk speech task and whether the cognitive load task would cause more interference at increasing offsets. The amount of integration was measured by the proportion of responses in incongruent trials that did not correspond to the audio (McGurk responses). An eye-tracker was also used to examine whether the amount of temporal offset and the presence of a concurrent cognitive load task would influence gaze behavior. The results show a very modest but statistically significant decrease in the number of McGurk responses when subjects also performed the cognitive load task, and this effect was relatively constant across the various temporal offsets. Participants' gaze behavior was also influenced by the addition of the cognitive load task: gaze was less centralized on the face, less time was spent looking at the mouth, and more time was spent looking at the eyes when a concurrent cognitive load task was added to the speech task.

  18. Reading in developmental prosopagnosia: Evidence for a dissociation between word and face recognition.

    PubMed

    Starrfelt, Randi; Klargaard, Solja K; Petersen, Anders; Gerlach, Christian

    2018-02-01

    Recent models suggest that face and word recognition may rely on overlapping cognitive processes and neural regions. In support of this notion, face recognition deficits have been demonstrated in developmental dyslexia. Here we test whether the opposite association can also be found, that is, impaired reading in developmental prosopagnosia. We tested 10 adults with developmental prosopagnosia and 20 matched controls. All participants completed the Cambridge Face Memory Test, the Cambridge Face Perception Test, and a face recognition questionnaire used to quantify everyday face recognition experience. Reading was measured in four experimental tasks testing different levels of letter, word, and text reading: (a) single word reading with words of varying length, (b) vocal response times in single letter and short word naming, (c) recognition of single letters and short words at brief exposure durations (targeting the word superiority effect), and (d) text reading. Participants with developmental prosopagnosia performed strikingly similarly to controls across the four reading tasks. Formal analysis revealed a significant dissociation between word and face recognition, as the difference in performance with faces and words was significantly greater for participants with developmental prosopagnosia than for controls. Adult developmental prosopagnosics read as quickly and fluently as controls, while they are seemingly unable to learn efficient strategies for recognizing faces. We suggest that this is due to the differing demands that face and word recognition put on the perceptual system. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  19. Do you like Arcimboldo's? Esthetic appreciation modulates brain activity in solving perceptual ambiguity.

    PubMed

    Boccia, M; Nemmi, F; Tizzani, E; Guariglia, C; Ferlazzo, F; Galati, G; Giannini, A M

    2015-02-01

    Esthetic experience is a unique, affectively colored, self-transcending subject-object relationship in which cognitive processing is felt to flow differently than during everyday experiences. Notwithstanding previous multidisciplinary investigations, how esthetic experience modulates perception is still obscure. We used Arcimboldo's ambiguous portraits to assess how the esthetic context organizes ambiguous percepts. The study was carried out using functional magnetic resonance imaging (fMRI) in healthy young volunteers (mean age 25.45; S.D. 4.51; 9 females), during both an explicit esthetic judgment task and an artwork/non-artwork classification task. We show that a distinct neural mechanism in the fusiform gyrus contributes to the esthetic experience of ambiguous portraits, according to the valence of the esthetic experience. Ambiguous artworks eliciting a negative esthetic experience lead to more pronounced activation of the fusiform face areas than ambiguous artworks eliciting a positive esthetic experience. We also found an interaction between task and ambiguity in the right superior parietal lobule. Taken together, our results demonstrate that a neural mechanism in the content-dependent brain regions of face processing underlies the esthetic experience of ambiguous portraits. Furthermore, they suggest that esthetic experience interacts with perceptual qualities of stimuli in the right superior parietal lobe, supporting the idea that esthetic experience arises from the interaction between top-down orienting of attention and bottom-up perceptual facilitation. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Feeling Bad and Looking Worse: Negative Affect Is Associated with Reduced Perceptions of Face-Healthiness

    PubMed Central

    Mirams, Laura; Poliakoff, Ellen; Zandstra, Elizabeth H.; Hoeksma, Marco; Thomas, Anna; El-Deredy, Wael

    2014-01-01

    Some people perceive themselves to look more, or less, attractive than they are in reality. We investigated the role of emotions in enhancement and derogation effects; specifically, whether the propensity to experience positive and negative emotions affects how healthy we perceive our own face to look and how we judge ourselves against others. A psychophysical method was used to measure healthiness of self-image and social comparisons of healthiness. Participants who self-reported high positive (N = 20) or negative affectivity (N = 20) judged themselves against healthy (red-tinged) and unhealthy-looking (green-tinged) versions of their own and a stranger's face. An adaptive staircase procedure was used to measure perceptual thresholds. Participants high in positive affectivity were unbiased in their face health judgement. Participants high in negative affectivity, on the other hand, judged themselves as equivalent to less healthy looking versions of their own face and a stranger's face. Affective traits modulated self-image and social comparisons of healthiness. Face health judgement was also related to physical symptom perception and self-esteem; high physical symptom reports were associated with a less healthy self-image, and high self-reported (but not implicit) self-esteem was associated with more favourable social comparisons of healthiness. Subject to further validation, our novel face health judgement task could have utility as a perceptual measure of well-being. We are currently investigating whether face health judgement is sensitive to laboratory manipulations of mood. PMID:25259802
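
    The adaptive staircase mentioned above converges on a perceptual threshold by stepping the stimulus level up or down according to the observer's responses. The minimal one-up/one-down sketch below is generic; the step size, stopping rule, and simulated observer are assumptions, not the published procedure.

        # Generic 1-up/1-down staircase sketch; threshold = mean of the last reversals.
        import random

        def run_staircase(get_response, start=0.5, step=0.05, n_reversals=8, lo=0.0, hi=1.0):
            level, last, reversals = start, None, []
            while len(reversals) < n_reversals:
                resp = get_response(level)            # True/False judgement at this level
                if last is not None and resp != last:
                    reversals.append(level)           # response flipped: record a reversal
                level += -step if resp else step      # go down after one response, up after the other
                level = min(max(level, lo), hi)
                last = resp
            return sum(reversals[-6:]) / len(reversals[-6:])

        # Simulated observer with a true threshold of 0.3 and a small lapse rate.
        print(run_staircase(lambda x: x > 0.3 or random.random() < 0.05))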

  1. Whole-School Management Issues Concerning the PE Department: "A Natural Division of Labour?"

    ERIC Educational Resources Information Center

    Williams, Gareth Mark; Williams, Dean

    2013-01-01

    Utilising the labour ideas of Adam Smith and Emile Durkheim as a theoretical basis, the main objective of this study was to investigate the perception that Heads of Physical Education (HoPE) face unique management and leadership challenges. Results showed that HoPE believe that they are overburdened with tasks primarily involving the delegation of…

  2. Visual adaptation of the perception of "life": animacy is a basic perceptual dimension of faces.

    PubMed

    Koldewyn, Kami; Hanus, Patricia; Balas, Benjamin

    2014-08-01

    One critical component of understanding another's mind is the perception of "life" in a face. However, little is known about the cognitive and neural mechanisms underlying this perception of animacy. Here, using a visual adaptation paradigm, we ask whether face animacy is (1) a basic dimension of face perception and (2) supported by a common neural mechanism across distinct face categories defined by age and species. Observers rated the perceived animacy of adult human faces before and after adaptation to (1) adult faces, (2) child faces, and (3) dog faces. When testing the perception of animacy in human faces, we found significant adaptation to both adult and child faces, but not dog faces. We did, however, find significant adaptation when morphed dog images and dog adaptors were used. Thus, animacy perception in faces appears to be a basic dimension of face perception that is species specific but not constrained by age categories.

  3. The role of working memory in decoding emotions.

    PubMed

    Phillips, Louise H; Channon, Shelley; Tunstall, Mary; Hedenstrom, Anna; Lyons, Kathryn

    2008-04-01

    Decoding facial expressions of emotion is an important aspect of social communication that is often impaired following psychiatric or neurological illness. However, little is known of the cognitive components involved in perceiving emotional expressions. Three dual-task studies explored the role of verbal working memory in decoding emotions. Concurrent working memory load substantially interfered with choosing which emotional label described a facial expression (Experiment 1). A key factor in the magnitude of interference was the number of emotion labels from which to choose (Experiment 2). In contrast, the ability to decide that two faces represented the same emotion in a discrimination task was relatively unaffected by concurrent working memory load (Experiment 3). Different methods of assessing emotion perception make substantially different demands on working memory. Implications for clinical disorders which affect both working memory and emotion perception are considered. Copyright 2008 APA.

  4. Eye movements during information processing tasks: individual differences and cultural effects.

    PubMed

    Rayner, Keith; Li, Xingshan; Williams, Carrick C; Cave, Kyle R; Well, Arnold D

    2007-09-01

    The eye movements of native English speakers, native Chinese speakers, and bilingual Chinese/English speakers who were either born in China (and moved to the US at an early age) or in the US were recorded during six tasks: (1) reading, (2) face processing, (3) scene perception, (4) visual search, (5) counting Chinese characters in a passage of text, and (6) visual search for Chinese characters. Across the different groups, there was a strong tendency for consistency in eye movement behavior; if fixation durations of a given viewer were long on one task, they tended to be long on other tasks (and the same tended to be true for saccade size). Some tasks, notably reading, did not conform to this pattern. Furthermore, experience with a given writing system had a large impact on fixation durations and saccade lengths. With respect to cultural differences, there was little evidence that Chinese participants spent more time looking at the background information (and, conversely less time looking at the foreground information) than the American participants. Also, Chinese participants' fixations were more numerous and of shorter duration than those of their American counterparts while viewing faces and scenes, and counting Chinese characters in text.

  5. Patients with Parkinson's disease display a dopamine therapy related negative bias and an enlarged range in emotional responses to facial emotional stimuli.

    PubMed

    Lundqvist, Daniel; Svärd, Joakim; Michelgård Palmquist, Åsa; Fischer, Håkan; Svenningsson, Per

    2017-09-01

    The literature on emotional processing in Parkinson's disease (PD) patients shows mixed results. This may be because of various methodological and/or patient-related differences, such as failing to adjust for cognitive functioning, depression, and/or mood. In the current study, we tested PD patients and healthy controls (HCs) using emotional stimuli across a variety of tasks, including visual search, short-term memory (STM), categorical perception, and emotional stimulus rating. The PD and HC groups were matched on cognitive ability, depression, and mood. We also explored possible relationships between task results and antiparkinsonian treatment effects, as measured by levodopa equivalent dosages (LED), in the PD group. The results show that PD patients use a larger emotional range than HCs when rating emotional faces on valence, arousal, and potency. The results also show that dopaminergic therapy was correlated with the stimulus ratings, such that PD patients with higher LED scores rated negative faces as less arousing, less negative, and less powerful. Finally, PD patients displayed a general slowing effect in the visual search tasks compared with HCs, indicating overall slowed responses. There were no group differences observed in the STM or categorical perception tasks. Our results indicate a relationship between emotional responses, PD, and dopaminergic therapy, in which PD per se is associated with stronger emotional responses, whereas LED levels are negatively correlated with the strength of emotional responses. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  6. The many faces of research on face perception.

    PubMed

    Little, Anthony C; Jones, Benedict C; DeBruine, Lisa M

    2011-06-12

    Face perception is fundamental to human social interaction. Many different types of important information are visible in faces and the processes and mechanisms involved in extracting this information are complex and can be highly specialized. The importance of faces has long been recognized by a wide range of scientists. Importantly, the range of perspectives and techniques that this breadth has brought to face perception research has, in recent years, led to many important advances in our understanding of face processing. The articles in this issue on face perception each review a particular arena of interest in face perception, variously focusing on (i) the social aspects of face perception (attraction, recognition and emotion), (ii) the neural mechanisms underlying face perception (using brain scanning, patient data, direct stimulation of the brain, visual adaptation and single-cell recording), and (iii) comparative aspects of face perception (comparing adult human abilities with those of chimpanzees and children). Here, we introduce the central themes of the issue and present an overview of the articles.

  7. Working Memory Capacity is Associated with Optimal Adaptation of Response Bias to Perceptual Sensitivity in Emotion Perception

    PubMed Central

    Lynn, Spencer K.; Ibagon, Camila; Bui, Eric; Palitz, Sophie A.; Simon, Naomi M.; Barrett, Lisa Feldman

    2017-01-01

    Emotion perception, inferring the emotional state of another person, is a frequent judgment made under perceptual uncertainty (e.g., a scowling facial expression can indicate anger or concentration) and behavioral risk (e.g., incorrect judgment can be costly to the perceiver). Working memory capacity (WMC), the ability to maintain controlled processing in the face of competing demands, is an important component of many decisions. We investigated the association of WMC and anger perception in a task in which “angry” and “not angry” categories comprised overlapping ranges of scowl intensity, and correct and incorrect responses earned and lost points, respectively. Participants attempted to earn as many points as they could; adopting an optimal response bias would maximize decision utility. Participants with higher WMC more optimally tuned their anger perception response bias to accommodate their perceptual sensitivity (their ability to discriminate the categories) than did participants with lower WMC. Other factors that influence response bias (i.e., the relative base rate of angry vs. not angry faces and the decision costs & benefits) were ruled out as contributors to the WMC-bias relationship. Our results suggest that WMC optimizes emotion perception by contributing to perceivers’ ability to adjust their response bias to account for their level of perceptual sensitivity, likely an important component of adapting emotion perception to dynamic social interactions and changing circumstances. PMID:26461251
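
    The quantities at issue here come from signal detection theory: perceptual sensitivity (d'), response bias (criterion c), and the bias that maximizes expected points given the base rate and payoffs. The sketch below is a textbook-style illustration, not the authors' code; the payoff parameterisation and example numbers are assumptions.

        # Textbook-style SDT sketch: sensitivity, bias, and the utility-maximizing criterion.
        import numpy as np
        from scipy.stats import norm

        def dprime_and_criterion(hit_rate, fa_rate):
            z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
            return z_h - z_f, -0.5 * (z_h + z_f)      # d' and criterion c

        def optimal_criterion(d_prime, p_signal, payoff_ratio=1.0):
            """Criterion c that maximizes expected points.
            payoff_ratio = (value of correct rejection + cost of false alarm) /
                           (value of hit + cost of miss); 1.0 for symmetric payoffs."""
            beta_opt = payoff_ratio * (1 - p_signal) / p_signal
            return np.log(beta_opt) / d_prime

        # A perceiver with lower sensitivity needs a larger criterion shift (in c units)
        # when "angry" faces are rare; with a 50% base rate the optimal c is zero.
        d, c = dprime_and_criterion(hit_rate=0.80, fa_rate=0.35)
        print(round(d, 2), round(c, 2), round(optimal_criterion(d, p_signal=0.25), 2))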

  8. The effect of face patch microstimulation on perception of faces and objects.

    PubMed

    Moeller, Sebastian; Crapse, Trinity; Chang, Le; Tsao, Doris Y

    2017-05-01

    What is the range of stimuli encoded by face-selective regions of the brain? We asked how electrical microstimulation of face patches in macaque inferotemporal cortex affects perception of faces and objects. We found that microstimulation strongly distorted face percepts and that this effect depended on precise targeting to the center of face patches. While microstimulation had no effect on the percept of many non-face objects, it did affect the percept of some, including non-face objects whose shape is consistent with a face (for example, apples) as well as somewhat facelike abstract images (for example, cartoon houses). Microstimulation even perturbed the percept of certain objects that did not activate the stimulated face patch at all. Overall, these results indicate that representation of facial identity is localized to face patches, but activity in these patches can also affect perception of face-compatible non-face objects, including objects normally represented in other parts of inferotemporal cortex.

  9. Face emotion recognition is related to individual differences in psychosis-proneness.

    PubMed

    Germine, L T; Hooker, C I

    2011-05-01

    Deficits in face emotion recognition (FER) in schizophrenia are well documented, and have been proposed as a potential intermediate phenotype for schizophrenia liability. However, research on the relationship between psychosis vulnerability and FER has mixed findings and methodological limitations. Moreover, no study has yet characterized the relationship between FER ability and level of psychosis-proneness. If FER ability varies continuously with psychosis-proneness, this suggests a relationship between FER and polygenic risk factors. We tested two large internet samples to see whether psychometric psychosis-proneness, as measured by the Schizotypal Personality Questionnaire-Brief (SPQ-B), is related to differences in face emotion identification and discrimination or other face processing abilities. Experiment 1 (n=2332) showed that psychosis-proneness predicts face emotion identification ability but not face gender identification ability. Experiment 2 (n=1514) demonstrated that psychosis-proneness also predicts performance on face emotion but not face identity discrimination. The tasks in Experiment 2 used identical stimuli and task parameters, differing only in emotion/identity judgment. Notably, the relationships demonstrated in Experiments 1 and 2 persisted even when individuals with the highest psychosis-proneness levels (the putative high-risk group) were excluded from analysis. Our data suggest that FER ability is related to individual differences in psychosis-like characteristics in the normal population, and that these differences cannot be accounted for by differences in face processing and/or visual perception. Our results suggest that FER may provide a useful candidate intermediate phenotype.

  10. Social identity modifies face perception: an ERP study of social categorization.

    PubMed

    Derks, Belle; Stedehouder, Jeffrey; Ito, Tiffany A

    2015-05-01

    Two studies examined whether social identity processes, i.e. group identification and social identity threat, amplify the degree to which people attend to social category information in early perception [assessed with event-related brain potentials (ERPs)]. Participants were presented with faces of Muslims and non-Muslims in an evaluative priming task while ERPs were measured and implicit evaluative bias was assessed. Study 1 revealed that non-Muslims showed stronger differentiation between ingroup and outgroup faces in both early (N200) and later processing stages (implicit evaluations) when they identified more strongly with their ethnic group. Moreover, identification effects on implicit bias were mediated by intergroup differentiation in the N200. In Study 2, social identity threat (vs control) was manipulated among Muslims. Results revealed that high social identity threat resulted in stronger differentiation of Muslims from non-Muslims in early (N200) and late (implicit evaluations) processing stages, with N200 effects again predicting implicit bias. Combined, these studies reveal how seemingly bottom-up early social categorization processes are affected by individual and contextual variables that affect the meaning of social identity. Implications of these results for the social identity perspective as well as social cognitive theories of person perception are discussed. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  11. Facial emotion perception by intensity in children and adolescents with 22q11.2 deletion syndrome.

    PubMed

    Leleu, Arnaud; Saucourt, Guillaume; Rigard, Caroline; Chesnoy, Gabrielle; Baudouin, Jean-Yves; Rossi, Massimiliano; Edery, Patrick; Franck, Nicolas; Demily, Caroline

    2016-03-01

    Difficulties in the recognition of emotions in expressive faces have been reported in people with 22q11.2 deletion syndrome (22q11.2DS). However, although low-intensity expressive faces are frequent in everyday life, nothing is known about how the intensity of an expression affects these individuals' ability to perceive facial emotions. In a visual matching task, children and adolescents with 22q11.2DS and gender- and age-matched healthy participants were asked to categorise the emotion of a target face among six possible expressions. Static pictures of morphs between neutrality and each expression were used to parametrically manipulate the intensity of the target face. Compared with healthy controls, the 22q11.2DS group showed higher perception thresholds (i.e. a more intense expression was needed to perceive the emotion) and lower accuracy for the most expressive faces, indicating reduced categorisation abilities. The number of intrusions (i.e. instances in which one emotion was perceived as another) and more gradual perception performance indicated blurred boundaries between emotional categories. Correlational analyses with neuropsychological and clinical measures suggested that reduced visual skills may be associated with impaired categorisation of facial emotions. Overall, the present study indicates that children and adolescents with 22q11.2DS have greater difficulty perceiving an emotion in low-intensity expressive faces, and that this difficulty is underpinned by emotional categories that are not sharply organised. It also suggests that these difficulties may be associated with impaired visual cognition, a hallmark of the cognitive deficits observed in the syndrome. These data yield promising avenues for future experimental and clinical investigations.

  12. Being watched: the effect of social self-focus on interoceptive and exteroceptive somatosensory perception.

    PubMed

    Durlik, Caroline; Cardini, Flavia; Tsakiris, Manos

    2014-04-01

    We become aware of our bodies interoceptively, by processing signals arising from within the body, and exteroceptively, by processing signals arising on or outside the body. Recent research highlights the importance of the interaction of exteroceptive and interoceptive signals in modulating bodily self-consciousness. The current study investigated the effect of social self-focus, manipulated via a video camera that was facing the participants and that was either switched on or off, on interoceptive sensitivity (using a heartbeat perception task) and on tactile perception (using the Somatic Signal Detection Task (SSDT)). The results indicated a significant effect of self-focus on SSDT performance, but not on interoception. SSDT performance was not moderated by interoceptive sensitivity, although interoceptive sensitivity scores were positively correlated with false alarms, independently of self-focus. Together with previous research, our results suggest that self-focus may exert different effects on body perception depending on its mode (private versus social). While interoception has been previously shown to be enhanced by private self-focus, the current study failed to find an effect of social self-focus on interoceptive sensitivity, instead demonstrating that social self-focus improves exteroceptive somatosensory processing. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Curvilinear relationship between phonological working memory load and social-emotional modulation

    PubMed Central

    Mano, Quintino R.; Brown, Gregory G.; Bolden, Khalima; Aupperle, Robin; Sullivan, Sarah; Paulus, Martin P.; Stein, Murray B.

    2015-01-01

    Accumulating evidence suggests that working memory load is an important factor for the interplay between cognitive and facial-affective processing. However, it is unclear how distraction caused by the perception of faces interacts with load-related performance. We developed a modified version of the delayed match-to-sample task wherein task-irrelevant facial distracters were presented early in the rehearsal of pseudoword memoranda that varied incrementally in load size (1 syllable, 2 syllables, or 3 syllables). Facial distracters displayed happy, sad, or neutral expressions in Experiment 1 (N=60) and happy, fearful, or neutral expressions in Experiment 2 (N=29). Facial distracters significantly disrupted task performance in the intermediate load condition (2 syllables) but not in the low or high load conditions (1 and 3 syllables, respectively), an interaction replicated and generalised in Experiment 2. All facial distracters disrupted working memory in the intermediate load condition irrespective of valence, suggesting a primary and general effect of distraction caused by faces. However, sad and fearful faces tended to be less disruptive than happy faces, suggesting a secondary and specific valence effect. Working memory appears to be most vulnerable to social-emotional information at intermediate loads. At low loads, spare capacity can accommodate the combinatorial load (1 syllable plus a facial distracter), whereas at high loads capacity is maximised, preventing facial stimuli from occupying working memory slots and causing disruption. PMID:22928750

  14. Seeing the Forest "and" the Trees: Default Local Processing in Individuals with High Autistic Traits Does Not Come at the Expense of Global Attention

    ERIC Educational Resources Information Center

    Stevenson, Ryan A.; Sun, Sol Z.; Hazlett, Naomi; Cant, Jonathan S.; Barense, Morgan D.; Ferber, Susanne

    2018-01-01

    Atypical sensory perception is one of the most ubiquitous symptoms of autism, including a tendency towards a local-processing bias. We investigated whether local-processing biases were associated with global-processing impairments on a global/local attentional-scope paradigm in conjunction with a composite-face task. Behavioural results were…

  15. Rhythm Production at School Entry as a Predictor of Poor Reading and Spelling at the End of First Grade

    ERIC Educational Resources Information Center

    Lundetrae, Kjersti; Thomson, Jenny M.

    2018-01-01

    Rhythm plays an organisational role in the prosody and phonology of language, and children with literacy difficulties have been found to demonstrate poor rhythmic perception. This study explored whether students' performance on a simple rhythm task at school entry could serve as a predictor of whether they would face difficulties in word reading…

  16. Cognitive Sciences

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Session MP4 includes short reports on: (1) Face Recognition in Microgravity: Is Gravity Direction Involved in the Inversion Effect?; (2) Motor Timing under Microgravity; (3) Perceived Self-Motion Assessed by Computer-Generated Animations: Complexity and Reliability; (4) Prolonged Weightlessness Reference Frames and Visual Symmetry Detection; (5) Mental Representation of Gravity During a Locomotor Task; and (6) Haptic Perception in Weightlessness: A Sense of Force or a Sense of Effort?

  17. Does my face FIT?: a face image task reveals structure and distortions of facial feature representation.

    PubMed

    Fuentes, Christina T; Runa, Catarina; Blanco, Xenxo Alvarez; Orvalho, Verónica; Haggard, Patrick

    2013-01-01

    Despite extensive research on face perception, few studies have investigated individuals' knowledge about the physical features of their own face. In this study, 50 participants indicated the location of key features of their own face, relative to an anchor point corresponding to the tip of the nose, and the results were compared to the true location of the same individual's features from a standardised photograph. Horizontal and vertical errors were analysed separately. An overall bias to underestimate vertical distances revealed a distorted face representation, with reduced face height. Factor analyses were used to identify separable subconfigurations of facial features with correlated localisation errors. Independent representations of upper and lower facial features emerged from the data pattern. The major source of variation across individuals was in representation of face shape, with a spectrum from tall/thin to short/wide representation. Visual identification of one's own face is excellent, and facial features are routinely used for establishing personal identity. However, our results show that spatial knowledge of one's own face is remarkably poor, suggesting that face representation may not contribute strongly to self-awareness.

  18. Threat perception in mild cognitive impairment and early dementia.

    PubMed

    Henry, Julie D; Thompson, Claire; Ruffman, Ted; Leslie, Felicity; Withall, Adrienne; Sachdev, Perminder; Brodaty, Henry

    2009-09-01

    Mild cognitive impairment (MCI) and dementia affect many aspects of emotion processing. Even though the ability to detect threat is a particularly important aspect of emotion processing, no study to date has assessed threat perception in either of these groups. The purpose of the present study was to test whether individuals with MCI (n = 38) and mild dementia (n = 34) have difficulty differentiating between faces and situations normatively judged to be either high or low in threat relative to age-matched controls (n = 34). To achieve this aim, all participants completed 2 danger rating tasks that involved viewing and rating high- and low-danger images. We also assessed whether threat perception was related to cognitive functioning and emotion recognition. The results indicated that all 3 groups were able to differentiate high- from low-danger faces accurately and comparably. However, the dementia group had difficulty differentiating high- from low-danger situations, reflecting a bias to overattribute threat to situations normatively judged to be nonthreatening. This difficulty was related to more general cognitive decline.

  19. The prevalence of visual hallucinations in non-affective psychosis, and the role of perception and attention.

    PubMed

    van Ommen, M M; van Beilen, M; Cornelissen, F W; Smid, H G O M; Knegtering, H; Aleman, A; van Laar, T

    2016-06-01

    Little is known about visual hallucinations (VH) in psychosis. We investigated the prevalence and the role of bottom-up and top-down processing in VH. The prevailing view is that VH are probably related to altered top-down processing, rather than to distorted bottom-up processing. Conversely, VH in Parkinson's disease are associated with impaired visual perception and attention, as proposed by the Perception and Attention Deficit (PAD) model. Auditory hallucinations (AH) in psychosis, however, are thought to be related to increased attention. Our retrospective database study included 1119 patients with non-affective psychosis and 586 controls. The Community Assessment of Psychic Experiences established the VH rate. Scores on visual perception tests [Degraded Facial Affect Recognition (DFAR), Benton Facial Recognition Task] and attention tests [Response Set-shifting Task, Continuous Performance Test-HQ (CPT-HQ)] were compared between 75 VH patients, 706 non-VH patients and 485 non-VH controls. The lifetime VH rate was 37%. The patient groups performed similarly on cognitive tasks; both groups showed worse perception (DFAR) than controls. Non-VH patients showed worse attention (CPT-HQ) than controls, whereas VH patients did not perform differently. We did not find significant VH-related impairments in bottom-up processing or direct top-down alterations. However, the results suggest a relatively spared attentional performance in VH patients, whereas face perception and processing speed were equally impaired in both patient groups relative to controls. This would match better with the increased attention hypothesis than with the PAD model. Our finding that VH frequently co-occur with AH may support an increased attention-induced 'hallucination proneness'.

  20. Electrocortical processing of social signals of threat in combat-related post-traumatic stress disorder.

    PubMed

    MacNamara, Annmarie; Post, David; Kennedy, Amy E; Rabinak, Christine A; Phan, K Luan

    2013-10-01

    Post-traumatic stress disorder (PTSD) is characterized by avoidance, emotional numbing, increased arousal and hypervigilance for threat following a trauma. Thirty-three veterans (19 with PTSD, 14 without PTSD) who had experienced combat trauma while on deployment in Iraq and/or Afghanistan completed an emotional faces matching task while electroencephalography was recorded. Vertex positive potentials (VPPs) elicited by happy, angry and fearful faces were smaller in veterans with versus without PTSD. In addition, veterans with PTSD exhibited smaller late positive potentials (LPPs) to angry faces and greater intrusive symptoms predicted smaller LPPs to fearful faces in the PTSD group. Veterans with PTSD were also less accurate at identifying angry faces, and accuracy decreased in the PTSD group as hyperarousal symptoms increased. These findings show reduced early processing of emotional faces, irrespective of valence, and blunted prolonged processing of social signals of threat in conjunction with impaired perception for angry faces in PTSD. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. The Face-Race Lightness Illusion Is Not Driven by Low-level Stimulus Properties: An Empirical Reply to Firestone and Scholl (2014).

    PubMed

    Baker, Lewis J; Levin, Daniel T

    2016-12-01

    Levin and Banaji (Journal of Experimental Psychology: General, 135, 501-512, 2006) reported a lightness illusion in which participants appeared to perceive Black faces to be darker than White faces, even though the faces were matched for overall brightness and contrast. Recently, this finding was challenged by Firestone and Scholl (Psychonomic Bulletin and Review, 2014), who argued that the nominal illusion remained even when the faces were blurred so as to make their race undetectable, and concluded that uncontrolled perceptual differences between the stimulus faces drove at least some observations of the original distortion effect. In this paper we report that measures of race perception used by Firestone and Scholl were insufficiently sensitive. We demonstrate that a forced choice race-identification task not only reveals that participants could detect the race of the blurred faces but also that participants' lightness judgments often aligned with their assignment of race.

  2. The Two-Systems Account of Theory of Mind: Testing the Links to Social-Perceptual and Cognitive Abilities

    PubMed Central

    Meinhardt-Injac, Bozana; Daum, Moritz M.; Meinhardt, Günter; Persike, Malte

    2018-01-01

    According to the two-systems account of theory of mind (ToM), understanding the mental states of others involves both fast social-perceptual processes and slower, reflexive cognitive operations (Frith and Frith, 2008; Apperly and Butterfill, 2009). To test the respective roles of specific abilities in either of these processes, we administered 15 experimental procedures to a large sample of 343 participants, testing ability in face recognition and holistic perception, language, and reasoning. ToM was measured by a set of tasks requiring the ability to track and to infer complex emotional and mental states of others from faces, eyes, spoken language, and prosody. We used structural equation modeling to test the relative strengths of a social-perceptual (face processing related) and reflexive-cognitive (language and reasoning related) path in predicting ToM ability. The two paths accounted for 58% of ToM variance, thus validating a general two-systems framework. Testing specific predictor paths revealed language and face recognition as strong and significant predictors of ToM. For reasoning, there were neither direct nor mediated effects, although reasoning was strongly associated with language. Holistic face perception also failed to show a direct link with ToM ability, while there was a mediated effect via face recognition. These results highlight the respective roles of face recognition and language for the social brain, and contribute a closer empirical specification of the general two-systems account. PMID:29445336

  3. Face Recognition, Musical Appraisal, and Emotional Crossmodal Bias.

    PubMed

    Invitto, Sara; Calcagnì, Antonio; Mignozzi, Arianna; Scardino, Rosanna; Piraino, Giulia; Turchi, Daniele; De Feudis, Irio; Brunetti, Antonio; Bevilacqua, Vitoantonio; de Tommaso, Marina

    2017-01-01

    Recent research on the crossmodal integration of visual and auditory perception suggests that evaluations of emotional information in one sensory modality may tend toward the emotional value generated in another sensory modality. This implies that the emotions elicited by musical stimuli can influence the perception of emotional stimuli presented in other sensory modalities, through a top-down process. The aim of this work was to investigate how crossmodal perceptual processing influences emotional face recognition and how potential modulation of this processing induced by music could be influenced by the subject's musical competence. We investigated how emotional face recognition processing could be modulated by listening to music and how this modulation varies according to the subjective emotional salience of the music and the listener's musical competence. The sample consisted of 24 participants: 12 professional musicians and 12 university students (non-musicians). Participants performed an emotional go/no-go task whilst listening to music by Albeniz, Chopin, or Mozart. The target stimuli were emotionally neutral facial expressions. We examined the N170 Event-Related Potential (ERP) and behavioral responses (i.e., motor reaction time to target recognition and musical emotional judgment). A linear mixed-effects model and a decision-tree learning technique were applied to N170 amplitudes and latencies. The main findings were that musicians' behavioral responses and N170 ERPs were more strongly affected by the emotional value of the music administered during the emotional go/no-go task, and that this bias was also apparent in responses to the non-target emotional faces. This suggests that emotional information, coming from multiple sensory channels, activates a crossmodal integration process that depends upon the stimuli's emotional salience and the listener's appraisal.
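
    The linear mixed-effects analysis mentioned above has a standard general form: fixed effects for the experimental factors (music emotion, participant group, and their interaction) plus a per-participant random intercept. The following sketch illustrates only that model specification in Python with statsmodels; the data frame, column names, and simulated values are assumptions made for the example, not the study's data or exact model.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)

        # Simulated long-format data: one N170 amplitude per participant and
        # music-emotion condition, for musicians and non-musicians.
        rows = []
        for i in range(24):
            subject = f"s{i:02d}"
            group = "musician" if i < 12 else "nonmusician"
            subject_offset = rng.normal(0, 0.5)  # per-participant random intercept
            for emotion in ("positive", "negative"):
                amplitude = (-4.5 + subject_offset
                             + (0.8 if emotion == "negative" else 0.0)
                             + (0.5 if group == "musician" and emotion == "negative" else 0.0)
                             + rng.normal(0, 0.3))  # residual noise
                rows.append({"subject": subject, "group": group,
                             "music_emotion": emotion, "n170_amplitude": amplitude})
        data = pd.DataFrame(rows)

        # Fixed effects of music emotion, group, and their interaction; participant-level
        # variability is absorbed by the grouping factor (random intercept per subject).
        model = smf.mixedlm("n170_amplitude ~ music_emotion * group",
                            data, groups=data["subject"])
        print(model.fit().summary())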

  4. Recognition of face identity and emotion in expressive specific language impairment.

    PubMed

    Merkenschlager, A; Amorosa, H; Kiefl, H; Martinius, J

    2012-01-01

    The aim was to study face and emotion recognition in children with mostly expressive specific language impairment (SLI-E). A test movie assessing the perception and recognition of faces and facial-gestural expression was administered to 24 children diagnosed with SLI-E and to an age-matched control group of typically developing children. Compared to the control group, the SLI-E children scored significantly worse in both the face and expression recognition tasks, with a preponderant effect on emotion recognition. The performance of the SLI-E group could not be explained by reduced attention during the test session. We conclude that SLI-E is associated with a deficiency in decoding non-verbal emotional facial and gestural information, which might lead to profound and persistent problems in social interaction and development. Copyright © 2012 S. Karger AG, Basel.

  5. Holistic processing of static and moving faces.

    PubMed

    Zhao, Mintao; Bülthoff, Isabelle

    2017-07-01

    Humans' face processing ability develops and matures with extensive experience in perceiving, recognizing, and interacting with faces that move most of the time. However, how facial movements affect one core aspect of this ability, holistic face processing, remains unclear. Here we investigated the influence of rigid facial motion on holistic and part-based face processing by manipulating the presence of facial motion during study and at test in a composite face task. The results showed that rigidly moving faces were processed as holistically as static faces (Experiment 1). Holistic processing of moving faces persisted whether facial motion was presented during study, at test, or both (Experiment 2). Moreover, when faces were inverted to eliminate the contributions of both an upright face template and observers' expertise with upright faces, rigid facial motion facilitated holistic face processing (Experiment 3). Thus, holistic processing represents a general principle of face perception that applies to both static and dynamic faces, rather than being limited to static faces. These results support an emerging view that both perceiver-based and face-based factors contribute to holistic face processing, and they offer new insights into what underlies holistic face processing, how the different sources of information supporting it interact, and why facial motion may affect face recognition and holistic face processing differently. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  6. Optogenetic and pharmacological suppression of spatial clusters of face neurons reveal their causal role in face gender discrimination.

    PubMed

    Afraz, Arash; Boyden, Edward S; DiCarlo, James J

    2015-05-26

    Neurons that respond more to images of faces over nonface objects were identified in the inferior temporal (IT) cortex of primates three decades ago. Although it is hypothesized that perceptual discrimination between faces depends on the neural activity of IT subregions enriched with "face neurons," such a causal link has not been directly established. Here, using optogenetic and pharmacological methods, we reversibly suppressed the neural activity in small subregions of IT cortex of macaque monkeys performing a facial gender-discrimination task. Each type of intervention independently demonstrated that suppression of IT subregions enriched in face neurons induced a contralateral deficit in face gender-discrimination behavior. The same neural suppression of other IT subregions produced no detectable change in behavior. These results establish a causal link between the neural activity in IT face neuron subregions and face gender-discrimination behavior. Also, the demonstration that brief neural suppression of specific spatial subregions of IT induces behavioral effects opens the door for applying the technical advantages of optogenetics to a systematic attack on the causal relationship between IT cortex and high-level visual perception.

  7. Near-optimal integration of facial form and motion.

    PubMed

    Dobs, Katharina; Ma, Wei Ji; Reddy, Leila

    2017-09-08

    Human perception consists of the continuous integration of sensory cues pertaining to the same object. While it has been well established that humans integrate low-level cues optimally, weighting each cue in proportion to its relative reliability, the integration processes underlying high-level perception are much less well understood. Here we investigate cue integration in a complex high-level perceptual system, the human face processing system. We tested cue integration of facial form and motion in an identity categorization task and found that an optimal model could successfully predict subjects' identity choices. Our results suggest that optimal cue integration may be implemented across different levels of the visual processing hierarchy.
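
    The "optimal strategy" referred to above has a standard formalisation: each cue is weighted in proportion to its reliability (the inverse of its variance), and the combined estimate is more reliable than either cue alone. The sketch below is a minimal illustration of that computation; the variable names and numeric values are invented for the example and are not taken from the study.

        import numpy as np

        def integrate_cues(estimates, variances):
            """Reliability-weighted (maximum-likelihood) cue integration."""
            estimates = np.asarray(estimates, dtype=float)
            reliabilities = 1.0 / np.asarray(variances, dtype=float)
            weights = reliabilities / reliabilities.sum()  # weights sum to 1
            combined = weights @ estimates                  # weighted average of the cues
            combined_variance = 1.0 / reliabilities.sum()   # lower than either cue's variance
            return combined, weights, combined_variance

        # Illustrative values only: identity evidence (arbitrary units) from facial
        # form and facial motion, with motion treated as the noisier cue.
        estimate, weights, variance = integrate_cues(estimates=[0.8, 0.5],
                                                     variances=[0.04, 0.16])
        print(estimate, weights, variance)  # form receives ~0.8 of the weight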

  8. Increased fusiform area activation in schizophrenia during processing of spatial frequency-degraded faces, as revealed by fMRI.

    PubMed

    Silverstein, S M; All, S D; Kasi, R; Berten, S; Essex, B; Lathrop, K L; Little, D M

    2010-07-01

    People with schizophrenia demonstrate perceptual organization impairments, and these are thought to contribute to their face processing difficulties. We examined the neural substrates of emotionally neutral face processing in schizophrenia by investigating neural activity under three stimulus conditions: faces characterized by the full spectrum of spatial frequencies, faces with low spatial frequency information removed [high spatial frequency (HSF) condition], and faces with high spatial frequency information removed [low spatial frequency (LSF) condition]. Face perception in the HSF condition is more reliant on local feature processing whereas perception in the LSF condition requires greater reliance on global form processing. Past studies of perceptual organization in schizophrenia indicate that patients perform relatively more poorly with degraded stimuli but also that, when global information is absent, patients may perform better than controls because of their relatively increased ability to initially process individual features. Therefore, we hypothesized that people with schizophrenia (n=14) would demonstrate greater face processing difficulties than controls (n=13) in the LSF condition, whereas they would demonstrate a smaller difference or superior performance in the HSF condition. In a gender-discrimination task, behavioral data indicated high levels of accuracy for both groups, with a trend toward an interaction involving higher patient performance in the HSF condition and poorer patient performance in the LSF condition. Patients demonstrated greater activity in the fusiform gyrus compared to controls in both degraded conditions. These data suggest that impairments in basic integration abilities may be compensated for by relatively increased activity in this region.

  9. Global shape information increases but color information decreases the composite face effect.

    PubMed

    Retter, Talia L; Rossion, Bruno

    2015-01-01

    The separation of visual shape and surface information may be useful for understanding holistic face perception, that is, the perception of a face as a single unit (Jiang, Blanz, & Rossion, 2011, Visual Cognition, 19, 1003-1034). A widely used measure of holistic face perception is the composite face effect (CFE), in which identical top face halves appear different when aligned with bottom face halves from different identities. In the present study the influences of global face shape (i.e., the contour of the face) and color information on the CFE are investigated, with the hypothesis that global face shape supports but color impairs holistic face perception as measured in this paradigm. In Experiment 1 the CFE is significantly larger when face stimuli retain natural global shape information than when they are cropped to a generic (i.e., oval) global shape; this effect is not found when the stimuli are presented inverted. In Experiment 2 the CFE is significantly smaller when face stimuli are presented with color information than when presented in grayscale. These findings indicate that grayscale stimuli maintaining natural global face shape information provide the most suitable measure of holistic face perception in the behavioral composite face paradigm. More generally, they show that reducing different types of information diagnostic for individual face perception can have opposite effects on the CFE, illustrating the functional dissociation between shape and surface information in face perception.

  10. Perception of faces in schizophrenia: Subjective (self-report) vs. objective (psychophysics) assessments.

    PubMed

    Chen, Yue; Ekstrom, Tor

    2016-05-01

    Face perception impairment in schizophrenia has been demonstrated, mostly through experimental studies. How this laboratory-defined behavioral impairment is associated with patients' perceptual experience of various faces in everyday life is however unclear. This question is important because a first-person account of face perception has direct consequences on social functioning of patients. In this study, we adapted and administered a self-reported questionnaire on narrative perceptual experience of faces along with psychophysical assessments of face perception in schizophrenia. The self-reported questionnaire includes six rating items of face-related functioning in everyday life, providing a subjective measure of face perception. The psychophysical assessment determines perceptual threshold for discriminating different facial identities, providing an objective measure of face perception. Compared to controls (n = 25), patients (n = 35) showed significantly lower scores (worse performance) in the subjective assessment and significantly higher thresholds (worse performance) in the objective assessment. The subjective and objective face perception assessments were moderately correlated in controls but not in patients. The subjective face perception assessments were significantly correlated with measurements of a social cognitive ability (Theory of Mind), again in controls but not in patients. These results suggest that in schizophrenia the quality of face-related functioning in everyday life is degraded and the role that basic face discrimination capacity plays in face-related everyday functioning is disrupted. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Perception of faces in schizophrenia: Subjective (self-report) vs. objective (psychophysics) assessments

    PubMed Central

    Chen, Yue; Ekstrom, Tor

    2016-01-01

    Objectives: Face perception impairment in schizophrenia has been demonstrated, mostly through experimental studies. How this laboratory-defined behavioral impairment is associated with patients’ perceptual experience of various faces in everyday life is however unclear. This question is important because a first-person account of face perception has direct consequences on social functioning of patients. In this study, we adapted and administered a self-reported questionnaire on narrative perceptual experience of faces along with psychophysical assessments of face perception in schizophrenia. Methods: The self-reported questionnaire includes six rating items of face-related functioning in everyday life, providing a subjective measure of face perception. The psychophysical assessment determines perceptual threshold for discriminating different facial identities, providing an objective measure of face perception. Results: Compared to controls (n=25), patients (n=35) showed significantly lower scores (worse performance) in the subjective assessment and significantly higher thresholds (worse performance) in the objective assessment. The subjective and objective face perception assessments were moderately correlated in controls but not in patients. The subjective face perception assessments were significantly correlated with measurements of a social cognitive ability (Theory of Mind), again in controls but not in patients. Conclusion: These results suggest that in schizophrenia the quality of face-related functioning in everyday life is degraded and the role that basic face discrimination capacity plays in face-related everyday functioning is disrupted. PMID:26938027

  12. Spatial But Not Oculomotor Information Biases Perceptual Memory: Evidence From Face Perception and Cognitive Modeling.

    PubMed

    Wantz, Andrea L; Lobmaier, Janek S; Mast, Fred W; Senn, Walter

    2017-08-01

    Recent research has put forward the hypothesis that eye movements are integrated into memory representations and are reactivated when those memories are later recalled. However, "looking back to nothing" during recall might be a consequence of spatial memory retrieval. Here, we aimed to distinguish between the effects of spatial and oculomotor information on perceptual memory. Participants' task was to judge whether a morph looked more like the first or the second of two previously presented faces. Crucially, faces and morphs were presented in such a way that the morph reactivated oculomotor and/or spatial information associated with one of the previously encoded faces. Perceptual face memory was strongly influenced by these manipulations. A simple computational model that expresses these biases as a linear combination of recency, saccade, and location provided an excellent fit (4.3% error). Surprisingly, saccades did not play a role. The results suggest that spatial and temporal rather than oculomotor information biases perceptual face memory. Copyright © 2016 Cognitive Science Society, Inc.
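
    The "linear combination of recency, saccade, and location" described above can be read as an ordinary least-squares model of the response bias. The sketch below illustrates that reading with simulated trial-level predictors; the predictor coding, weights, and noise level are assumptions made for the example, not the authors' dataset or fitting procedure.

        import numpy as np

        rng = np.random.default_rng(1)
        n_trials = 200

        # Hypothetical 0/1 predictors per trial: whether a given face was the more
        # recently encoded one, and whether the morph reactivated a matching saccade
        # or a matching spatial location for that face.
        recency = rng.integers(0, 2, n_trials)
        saccade = rng.integers(0, 2, n_trials)
        location = rng.integers(0, 2, n_trials)

        # Simulated bias towards choosing that face, built from known weights
        # (the saccade weight is deliberately zero, mirroring the reported result).
        design = np.column_stack([np.ones(n_trials), recency, saccade, location])
        true_weights = np.array([0.10, 0.25, 0.00, 0.30])
        bias = design @ true_weights + rng.normal(0, 0.05, n_trials)

        # Ordinary least squares recovers the contribution of each factor; a near-zero
        # saccade coefficient indicates that oculomotor information adds nothing
        # beyond recency and location.
        estimated, *_ = np.linalg.lstsq(design, bias, rcond=None)
        print(dict(zip(["intercept", "recency", "saccade", "location"],
                       np.round(estimated, 3))))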

  13. Monocular Advantage for Face Perception Implicates Subcortical Mechanisms in Adult Humans

    PubMed Central

    Gabay, Shai; Nestor, Adrian; Dundas, Eva; Behrmann, Marlene

    2014-01-01

    The ability to recognize faces accurately and rapidly is an evolutionarily adaptive process. Most studies examining the neural correlates of face perception in adult humans have focused on a distributed cortical network of face-selective regions. There is, however, robust evidence from phylogenetic and ontogenetic studies that implicates subcortical structures, and recently, some investigations in adult humans indicate subcortical correlates of face perception as well. The questions addressed here are whether low-level subcortical mechanisms for face perception (in the absence of changes in expression) are conserved in human adults, and if so, what is the nature of these subcortical representations. In a series of four experiments, we presented pairs of images to the same or different eyes. Participants’ performance demonstrated that subcortical mechanisms, indexed by monocular portions of the visual system, play a functional role in face perception. These mechanisms are sensitive to face-like configurations and afford a coarse representation of a face, comprised of primarily low spatial frequency information, which suffices for matching faces but not for more complex aspects of face perception such as sex differentiation. Importantly, these subcortical mechanisms are not implicated in the perception of other visual stimuli, such as cars or letter strings. These findings suggest a conservation of phylogenetically and ontogenetically lower-order systems in adult human face perception. The involvement of subcortical structures in face recognition provokes a reconsideration of current theories of face perception, which are reliant on cortical level processing, inasmuch as it bolsters the cross-species continuity of the biological system for face recognition. PMID:24236767

  14. Subliminal presentation of other faces (but not own face) primes behavioral and evoked cortical processing of empathy for pain.

    PubMed

    Ibáñez, Agustín; Hurtado, Esteban; Lobos, Alejandro; Escobar, Josefina; Trujillo, Natalia; Baez, Sandra; Huepe, David; Manes, Facundo; Decety, Jean

    2011-06-29

    Current research on empathy for pain emphasizes the overlap in the neural response between the first-hand experience of pain and its perception in others. However, recent studies suggest that the perception of the pain of others may reflect the processing of a threat or negative arousal rather than an automatic pro-social response. It can thus be suggested that pain processing of other-related, but not self-related, information could imply danger rather than empathy, due to the possible threat represented in the expressions of others (especially if associated with pain stimuli). To test this hypothesis, two experiments using subliminal stimuli were designed. In Experiment 1, neutral and semantic pain expressions previously primed with own or other faces were presented to participants. When other-face priming was used, only the detection of semantic pain expressions was facilitated. In Experiment 2, pictures with pain and neutral scenarios previously used in ERP and fMRI research were used in a categorization task. Those pictures were primed with own or other faces following the same procedure as in Experiment 1 while ERPs were recorded. Differences between pain and no-pain stimuli in early (N1) and late (P3) cortical responses were modulated only in the other-face priming condition. These results support the threat value of pain hypothesis and suggest the necessity for the inclusion of own- versus other-related information in future empathy for pain research. Copyright © 2011 Elsevier B.V. All rights reserved.

  15. The neural representation of social status in the extended face-processing network.

    PubMed

    Koski, Jessica E; Collins, Jessica A; Olson, Ingrid R

    2017-12-01

    Social status is a salient cue that shapes our perceptions of other people and ultimately guides our social interactions. Despite the pervasive influence of status on social behavior, how information about the status of others is represented in the brain remains unclear. Here, we tested the hypothesis that social status information is embedded in our neural representations of other individuals. Participants learned to associate faces with names, job titles that varied in associated status, and explicit markers of reputational status (star ratings). Trained stimuli were presented in a functional magnetic resonance imaging experiment in which participants performed a target detection task orthogonal to the variable of interest. A network of face-selective brain regions extending from the occipital lobe to the orbitofrontal cortex was localized and served as regions of interest. Using multivoxel pattern analysis, we found that face-selective voxels in the lateral orbitofrontal cortex, a region involved in social and nonsocial valuation, could decode faces based on their status. Similar effects were observed with two different status manipulations - one based on stored semantic knowledge (e.g., different careers) and one based on learned reputation (e.g., star ranking). These data suggest that a face-selective region of the lateral orbitofrontal cortex may contribute to the perception of social status, potentially underlying the preferential attention and favorable biases humans display toward high-status individuals. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
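
    Multivoxel pattern analysis of the kind described above typically asks whether a linear classifier can predict the condition label (here, high versus low status) from the pattern of responses across voxels, with accuracy assessed by cross-validation. The sketch below shows that generic recipe on simulated data; the array shapes, labels, and classifier choice are assumptions for illustration and do not reproduce the study's pipeline.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(2)

        # Simulated data: 80 face presentations x 150 voxels, half labelled
        # high-status and half low-status, with a weak signal added so the two
        # classes are partly separable.
        n_trials, n_voxels = 80, 150
        labels = np.repeat([0, 1], n_trials // 2)  # 0 = low status, 1 = high status
        patterns = rng.normal(size=(n_trials, n_voxels))
        patterns[labels == 1, :10] += 0.5          # status-related signal in 10 voxels

        # Linear classifier with 5-fold cross-validation; mean accuracy reliably
        # above chance (0.5) is taken as evidence that status can be decoded
        # from the voxel patterns.
        accuracy = cross_val_score(LinearSVC(), patterns, labels, cv=5)
        print(f"mean decoding accuracy: {accuracy.mean():.2f}")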

  16. Idiosyncratic Patterns of Representational Similarity in Prefrontal Cortex Predict Attentional Performance.

    PubMed

    Lee, Jeongmi; Geng, Joy J

    2017-02-01

    The efficiency of finding an object in a crowded environment depends largely on the similarity of nontargets to the search target. Models of attention theorize that the similarity is determined by representations stored within an "attentional template" held in working memory. However, the degree to which the contents of the attentional template are individually unique, and where those idiosyncratic representations are encoded in the brain, remain unknown. We investigated this problem using representational similarity analysis of human fMRI data to measure the common and idiosyncratic representations of famous face morphs during an identity categorization task; data from the categorization task were then used to predict performance on a separate identity search task. We hypothesized that the idiosyncratic categorical representations of the continuous face morphs would predict their distractibility when searching for each target identity. The results showed that patterns of activation in the lateral prefrontal cortex (LPFC) as well as in face-selective areas in the ventral temporal cortex were highly correlated with the patterns of behavioral categorization of face morphs and search performance that were common across subjects. However, the individually unique components of the categorization behavior were reliably decoded only in the right LPFC. Moreover, the neural pattern in the right LPFC successfully predicted idiosyncratic variability in search performance, such that reaction times were longer when distractors had a higher probability of being categorized as the target identity. These results suggest that the prefrontal cortex encodes individually unique components of categorical representations that are also present in attentional templates for target search. Everyone's perception of the world is uniquely shaped by personal experiences and preferences. Using functional MRI, we show that individual differences in the categorization of face morphs between two identities could be decoded from the prefrontal cortex and the ventral temporal cortex. Moreover, the individually unique representations in prefrontal cortex predicted idiosyncratic variability in attentional performance when looking for each identity in the "crowd" of another morphed face in a separate search task. Our results reveal that the representation of task-related information in prefrontal cortex is individually unique and preserved across categorization and search performance. This demonstrates the possibility of predicting individual behaviors across tasks with patterns of brain activity. Copyright © 2017 the authors 0270-6474/17/371257-12$15.00/0.
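
    Representational similarity analysis, as used above, compares the pairwise (dis)similarity structure of one measurement space (voxel activation patterns) with that of another (here, behavioural categorization profiles). The sketch below shows the core computation on simulated arrays; the shapes, distance metrics, and data are illustrative assumptions rather than the study's actual analysis.

        import numpy as np
        from scipy.spatial.distance import pdist
        from scipy.stats import spearmanr

        rng = np.random.default_rng(3)

        # Simulated data: 20 face morphs, a 100-voxel activation pattern per morph,
        # and a one-dimensional behavioural profile per morph (e.g. the proportion
        # of trials on which it was categorized as a given target identity).
        n_stimuli, n_voxels = 20, 100
        neural_patterns = rng.normal(size=(n_stimuli, n_voxels))
        behaviour = rng.random((n_stimuli, 1))

        # Representational dissimilarity matrices (condensed form): one pairwise
        # distance per stimulus pair in each space.
        neural_rdm = pdist(neural_patterns, metric="correlation")
        behavioural_rdm = pdist(behaviour, metric="euclidean")

        # The RSA statistic is the rank correlation between the two RDMs: do stimuli
        # that evoke similar neural patterns also get categorized similarly?
        rho, p = spearmanr(neural_rdm, behavioural_rdm)
        print(f"RSA correlation: rho = {rho:.3f}, p = {p:.3f}")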

  17. An investigation of a novel transdiagnostic model of delusions in a group with positive schizotypal symptoms.

    PubMed

    Cameron, Clare; Kaplan, Ryan A; Rossell, Susan L

    2014-01-01

    Although several theories of delusions have been put forward, most do not offer a comprehensive, diagnosis-independent explanation of delusion aetiology. This study used a non-clinical sample to provide empirical support for a novel transdiagnostic model of delusions that implicates aberrant semantic memory and emotion perception processes as key factors in delusion formation and maintenance. It was hypothesised that, within a non-clinical sample, people high in schizotypy would demonstrate differences in semantic memory and emotion perception relative to people low in schizotypy. Using the Cognitive Disorganisation subscale of the Oxford-Liverpool Inventory of Feelings and Experiences, 41 healthy participants were separated into high and low schizotypy groups and completed facial emotion perception and semantic priming tasks. As expected, participants in the high schizotypy group showed altered performance on the semantic priming task, reduced accuracy in recognising angry facial expressions, and reaction time differences to fearful faces. These findings suggest that such processes may be involved in the development of the sorts of unusual beliefs that underlie delusions. Investigation of how emotion perception and semantic memory may interrelate in the aetiology of delusions would be of value in furthering our understanding of their role in delusion formation.

  18. Facial emotion recognition, face scan paths, and face perception in children with neurofibromatosis type 1.

    PubMed

    Lewis, Amelia K; Porter, Melanie A; Williams, Tracey A; Bzishvili, Samantha; North, Kathryn N; Payne, Jonathan M

    2017-05-01

    This study aimed to investigate face scan paths and face perception abilities in children with Neurofibromatosis Type 1 (NF1) and how these might relate to emotion recognition abilities in this population. The authors investigated facial emotion recognition, face scan paths, and face perception in 29 children with NF1 compared to 29 chronological age-matched typically developing controls. Correlations between facial emotion recognition, face scan paths, and face perception in children with NF1 were examined. Children with NF1 displayed significantly poorer recognition of fearful expressions compared to controls, as well as a nonsignificant trend toward poorer recognition of anger. Although there was no significant difference between groups in time spent viewing individual core facial features (eyes, nose, mouth, and nonfeature regions), children with NF1 spent significantly less time than controls viewing the face as a whole. Children with NF1 also displayed significantly poorer face perception abilities than typically developing controls. Facial emotion recognition deficits were not significantly associated with aberrant face scan paths or face perception abilities in the NF1 group. These results suggest that impairments in the perception, identification, and interpretation of information from faces are important aspects of the social-cognitive phenotype of NF1. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  19. Aging effects on selective attention-related electroencephalographic patterns during face encoding.

    PubMed

    Deiber, M-P; Rodriguez, C; Jaques, D; Missonnier, P; Emch, J; Millet, P; Gold, G; Giannakopoulos, P; Ibañez, V

    2010-11-24

    Previous electrophysiological studies revealed that human faces elicit an early visual event-related potential (ERP) within the occipito-temporal cortex, the N170 component. Although face perception has been proposed to rely on automatic processing, the impact of selective attention on the N170 remains controversial in both young and elderly individuals. Using early visual ERP and alpha power analysis, we assessed the influence of aging on selective attention to faces during delayed-recognition tasks for face and letter stimuli, examining 36 elderly and 20 young adults with preserved cognition. Face recognition performance worsened with age. Aging induced a latency delay of the N1 component for faces and letters, as well as of the face N170 component. Contrasting with letters, ignored faces elicited larger N1 and N170 components than attended faces in both age groups. This counterintuitive attention effect on face processing persisted when scenes replaced letters. In contrast with young adults, elderly subjects failed to suppress irrelevant letters when attending to faces. Whereas attended stimuli induced a parietal alpha band desynchronization within 300-1000 ms post-stimulus with a bilateral-to-right distribution for faces and left lateralization for letters, ignored and passively viewed stimuli elicited a central alpha synchronization that was larger over the right hemisphere. Aging delayed the latency of this alpha synchronization for both face and letter stimuli, and reduced its amplitude for ignored letters. These results suggest that due to their social relevance, human faces may cause paradoxical attention effects on early visual ERP components, but they still undergo classical top-down control as a function of endogenous selective attention. Aging does not affect the face bottom-up alerting mechanism but reduces the top-down suppression of distracting letters, possibly impinging upon face recognition, and more generally delays the top-down suppression of task-irrelevant information. Copyright © 2010 IBRO. Published by Elsevier Ltd. All rights reserved.

  20. Neural Processing of Facial Identity and Emotion in Infants at High-Risk for Autism Spectrum Disorders

    PubMed Central

    Fox, Sharon E.; Wagner, Jennifer B.; Shrock, Christine L.; Tager-Flusberg, Helen; Nelson, Charles A.

    2013-01-01

    Deficits in face processing and social impairment are core characteristics of autism spectrum disorder. The present work examined 7-month-old infants at high-risk for developing autism and typically developing controls at low-risk, using a face perception task designed to differentiate between the effects of face identity and facial emotions on neural response using functional Near-Infrared Spectroscopy. In addition, we employed independent component analysis, as well as a novel method of condition-related component selection and classification to identify group differences in hemodynamic waveforms and response distributions associated with face and emotion processing. The results indicate similarities of waveforms, but differences in the magnitude, spatial distribution, and timing of responses between groups. These early differences in local cortical regions and the hemodynamic response may, in turn, contribute to differences in patterns of functional connectivity. PMID:23576966

  1. Seeing a haptically explored face: visual facial-expression aftereffect from haptic adaptation to a face.

    PubMed

    Matsumiya, Kazumichi

    2013-10-01

    Current views on face perception assume that the visual system receives only visual facial signals. However, I show that the visual perception of faces is systematically biased by adaptation to a haptically explored face. Recently, face aftereffects (FAEs; the altered perception of faces after adaptation to a face) have been demonstrated not only in visual perception but also in haptic perception; therefore, I combined the two FAEs to examine whether the visual system receives face-related signals from the haptic modality. I found that adaptation to a haptically explored facial expression on a face mask produced a visual FAE for facial expression. This cross-modal FAE was not due to explicitly imaging a face, response bias, or adaptation to local features. Furthermore, FAEs transferred from vision to haptics. These results indicate that visual face processing depends on substrates adapted by haptic faces, which suggests that face processing relies on shared representation underlying cross-modal interactions.

  2. Memory for friends or foes: the social context of past encounters with faces modulates their subsequent neural traces in the brain.

    PubMed

    Vrticka, Pascal; Andersson, Frédéric; Sander, David; Vuilleumier, Patrik

    2009-01-01

    Every day we encounter new people, interact with them, and form person impressions based on quick and automatic inferences from minimal contextual information. Previous studies have identified an extensive network of brain areas involved in familiar face recognition, but there is little evidence to date concerning the neural bases of negative vs. positive person impressions. In the present study, participants were repeatedly exposed to 16 unfamiliar face identities within a pseudo-interactive game context to generate a perception of either "friends" or "foes". Functional magnetic resonance imaging (fMRI) was then performed during an old/new memory task to assess any difference in brain responses to these now familiar face identities, relative to unfamiliar faces. Importantly, whereas facial expressions were always emotional (either smiling or angry) during the encoding phase, they were always neutral during the memory task. Our results reveal that several brain regions involved in familiar face recognition, including fusiform cortex, posterior cingulate gyrus, and amygdala, plus additional areas involved in motivational control such as caudate and anterior cingulate cortex, were differentially modulated as a function of a previous encounter, and generally more activated when faces were perceived as "foes" rather than "friends". These findings underscore that a key dimension of social judgments, based on past impressions of who may be supportive or hostile, may lead to long-lasting effects on memory for faces and thus influence affective reactions to people during a subsequent encounter even in a different (neutral) context.

  3. Social Cognition in Williams Syndrome: Face Tuning

    PubMed Central

    Pavlova, Marina A.; Heiz, Julie; Sokolov, Alexander N.; Barisnikov, Koviljka

    2016-01-01

    Many neurological, neurodevelopmental, neuropsychiatric, and psychosomatic disorders are characterized by impairments in visual social cognition, body language reading, and facial assessment of a social counterpart. Yet a wealth of research indicates that individuals with Williams syndrome exhibit remarkable concern for social stimuli and face fascination. Here, individuals with Williams syndrome were presented with a set of Face-n-Food images composed of food ingredients and resembling a face to varying degrees (slightly bordering on the Giuseppe Arcimboldo style). The primary advantage of these images is that their single components do not explicitly trigger face-specific processing, whereas in face images commonly used for investigating face perception (such as photographs or depictions), the mere occurrence of typical cues already signals face presence. In a spontaneous recognition task, participants were shown a set of images in a predetermined order, from the image least resembling a face to the one most resembling a face. Strikingly, individuals with Williams syndrome exhibited profound deficits in recognizing the Face-n-Food images as a face: they did not report seeing a face in images that typically developing controls effortlessly recognized as a face, and they gave fewer face responses overall. This suggests atypical face tuning in Williams syndrome. The outcome is discussed in the light of the general pattern of social cognition in Williams syndrome and the brain mechanisms underpinning face processing. PMID:27531986

  4. Functional dissociation of the left and right fusiform gyrus in self-face recognition.

    PubMed

    Ma, Yina; Han, Shihui

    2012-10-01

    It is well known that the fusiform gyrus is engaged in face perception, such as the processes of face familiarity and identity. However, the functional role of the fusiform gyrus in face processing related to high-level social cognition remains unclear. The current study assessed the functional role of individually defined fusiform face area (FFA) in the processing of self-face physical properties and self-face identity. We used functional magnetic resonance imaging to monitor neural responses to rapidly presented face stimuli drawn from morph continua between self-face (Morph 100%) and a gender-matched friend's face (Morph 0%) in a face recognition task. Contrasting Morph 100% versus Morph 60% that differed in self-face physical properties but were both recognized as the self uncovered neural activity sensitive to self-face physical properties in the left FFA. Contrasting Morphs 50% that were recognized as the self versus a friend on different trials revealed neural modulations associated with self-face identity in the right FFA. Moreover, the right FFA activity correlated with the frequency of recognizing Morphs 50% as the self. Our results provide evidence for functional dissociations of the left and right FFAs in the representations of self-face physical properties and self-face identity. Copyright © 2011 Wiley Periodicals, Inc.

  5. Social Cognition in Williams Syndrome: Face Tuning.

    PubMed

    Pavlova, Marina A; Heiz, Julie; Sokolov, Alexander N; Barisnikov, Koviljka

    2016-01-01

    Many neurological, neurodevelopmental, neuropsychiatric, and psychosomatic disorders are characterized by impairments in visual social cognition, body language reading, and facial assessment of a social counterpart. Yet a wealth of research indicates that individuals with Williams syndrome exhibit remarkable concern for social stimuli and face fascination. Here, individuals with Williams syndrome were presented with a set of Face-n-Food images composed of food ingredients and resembling a face to varying degrees (slightly bordering on the Giuseppe Arcimboldo style). The primary advantage of these images is that their single components do not explicitly trigger face-specific processing, whereas in face images commonly used for investigating face perception (such as photographs or depictions), the mere occurrence of typical cues already signals face presence. In a spontaneous recognition task, participants were shown a set of images in a predetermined order, from the image least resembling a face to the one most resembling a face. Strikingly, individuals with Williams syndrome exhibited profound deficits in recognizing the Face-n-Food images as a face: they did not report seeing a face in images that typically developing controls effortlessly recognized as a face, and they gave fewer face responses overall. This suggests atypical face tuning in Williams syndrome. The outcome is discussed in the light of the general pattern of social cognition in Williams syndrome and the brain mechanisms underpinning face processing.

  6. Face perception in women with Turner syndrome and its underlying factors.

    PubMed

    Anaki, David; Zadikov Mor, Tal; Gepstein, Vardit; Hochberg, Ze'ev

    2016-09-01

    Turner syndrome (TS) is a chromosomal condition that affects development in females. It is characterized by short stature, ovarian failure and other congenital malformations, due to the partial or complete absence of one of the sex chromosomes. Women with TS frequently suffer from various physical and hormonal dysfunctions, along with impairments in visual-spatial processing and difficulties in social cognition. Previous research has also shown difficulties in face and emotion perception. In the current study we examined two questions: first, whether women with TS, who are impaired in face perception, also exhibit deficits in face-specific processes; and second, whether these face impairments in TS are related to the visual-spatial perceptual dysfunctions exhibited by TS individuals or to impaired social cognition skills. Twenty-six women with TS and 26 control participants completed various cognitive and psychological tests assessing visual-spatial perception, face and facial expression perception, and social cognition skills. Results showed that women with TS were less accurate in face perception and facial expression processing, yet they exhibited normal face-specific processes (configural and holistic processing). They also showed difficulties in spatial perception and social cognition capacities. Additional analyses revealed that their face perception impairments were related to their deficits in visual-spatial processing. Thus, our results do not support the claim that the impairments in face processing observed in TS are related to difficulties in social cognition. Rather, our data point to the possibility that face perception difficulties in TS stem from visual-spatial impairments and may not be specific to faces. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Medium Moderates the Message. How Users Adjust Their Communication Trajectories to Different Media in Collaborative Task Solving.

    PubMed

    Lisiecka, Karolina; Rychwalska, Agnieszka; Samson, Katarzyna; Łucznik, Klara; Ziembowicz, Michał; Szóstek, Agnieszka; Nowak, Andrzej

    2016-01-01

    The rapid development of information and communications technologies (ICT) has triggered profound changes in how people manage their social contacts in both informal and professional contexts. ICT-mediated communication may seem limited in possibilities compared to face-to-face encounters, but research shows that, puzzlingly often, it can be just as effective and satisfactory. We posit that ICT users employ specific communication strategies adapted to particular communication channels, which results in a comparable effectiveness of communication. In order to maintain a satisfactory level of conversational intelligibility, they calibrate the content of their messages to a given medium's richness and adjust the whole conversation trajectory so that every stage of the communication process runs fluently. In the current study, we compared complex task-solving trajectories in chat, mobile phone and face-to-face dyadic conversations. Media conditions did not influence the quality of decision outcomes or users' perceptions of the interaction, but they had an impact on the amount of time devoted to each of the identified phases of decision development. In face-to-face contacts the evaluation stage of the discussion dominated the conversation; in the texting condition the orientation-evaluation-control phases were evenly distributed; and the phone condition provided a midpoint between these two extremes. The results show that contemporary ICT users adjust their communication behavior to the limitations and opportunities of various media through the regulation of attention directed to each stage of the discussion, so that as a whole the communication process remains effective.

  8. Body Weight Can Change How Your Emotions Are Perceived

    PubMed Central

    2016-01-01

    Accurately interpreting others’ emotions through facial expressions has important adaptive value for social interactions. However, due to the stereotypical social perception of overweight individuals as carefree, humorous, and light-hearted, the body weight of those with whom we interact may have a systematic influence on our emotion judgment even though it has no relevance to the expressed emotion itself. In this experimental study, we examined the role of body weight in faces on the affective perception of facial expressions. We hypothesized that the weight perceived in a face would bias the assessment of an emotional expression, with overweight faces generally more likely to be perceived as having more positive and less negative expressions than healthy weight faces. Using two-alternative forced-choice perceptual decision tasks, participants were asked to sort the emotional expressions of overweight and healthy weight facial stimuli that had been gradually morphed across six emotional intensity levels into one of two categories—"neutral vs. happy" (Experiment 1) and "neutral vs. sad" (Experiment 2). As predicted, our results demonstrated that overweight faces were more likely to be categorized as happy (i.e., lower happy decision threshold) and less likely to be categorized as sad (i.e., higher sad decision threshold) compared to healthy weight faces that had the same levels of emotional intensity. The neutral-sad decision threshold shift was negatively correlated with participants’ own fear of becoming fat; that is, those without a fear of becoming fat more strongly perceived overweight faces as sad relative to those with a higher fear. These findings demonstrate that the weight of the face systematically influences how its emotional expression is interpreted, suggesting that being overweight may make emotional expressions appear more happy and less sad than they really are. PMID:27870892

  9. Body Weight Can Change How Your Emotions Are Perceived.

    PubMed

    Oh, Yujung; Hass, Norah C; Lim, Seung-Lark

    2016-01-01

    Accurately interpreting others' emotions through facial expressions has important adaptive value for social interactions. However, due to the stereotypical social perception of overweight individuals as carefree, humorous, and light-hearted, the body weight of those with whom we interact may have a systematic influence on our emotion judgment even though it has no relevance to the expressed emotion itself. In this experimental study, we examined the role of body weight in faces on the affective perception of facial expressions. We hypothesized that the weight perceived in a face would bias the assessment of an emotional expression, with overweight faces generally more likely to be perceived as having more positive and less negative expressions than healthy weight faces. Using two-alternative forced-choice perceptual decision tasks, participants were asked to sort the emotional expressions of overweight and healthy weight facial stimuli that had been gradually morphed across six emotional intensity levels into one of two categories-"neutral vs. happy" (Experiment 1) and "neutral vs. sad" (Experiment 2). As predicted, our results demonstrated that overweight faces were more likely to be categorized as happy (i.e., lower happy decision threshold) and less likely to be categorized as sad (i.e., higher sad decision threshold) compared to healthy weight faces that had the same levels of emotional intensity. The neutral-sad decision threshold shift was negatively correlated with participants' own fear of becoming fat; that is, those without a fear of becoming fat more strongly perceived overweight faces as sad relative to those with a higher fear. These findings demonstrate that the weight of the face systematically influences how its emotional expression is interpreted, suggesting that being overweight may make emotional expressions appear more happy and less sad than they really are.
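
    The "decision thresholds" reported above correspond to the morph intensity at which the two response categories are chosen equally often, usually obtained by fitting a psychometric function to the choice proportions. The sketch below fits a logistic function to invented choice data to recover such a threshold; the intensity levels, proportions, and the size of the shift are illustrative assumptions, not the study's results.

        import numpy as np
        from scipy.optimize import curve_fit

        def psychometric(intensity, threshold, slope):
            """Logistic psychometric function: P('happy') as a function of morph intensity."""
            return 1.0 / (1.0 + np.exp(-slope * (intensity - threshold)))

        # Illustrative choice proportions at six morph levels (0 = neutral, 1 = fully happy).
        levels = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
        p_happy = {
            "overweight faces": np.array([0.08, 0.22, 0.55, 0.80, 0.93, 0.98]),
            "healthy weight faces": np.array([0.03, 0.10, 0.35, 0.65, 0.88, 0.97]),
        }

        # The fitted threshold is the intensity at which "happy" is chosen 50% of the
        # time; a lower threshold for overweight faces corresponds to their being
        # categorized as happy at lower emotional intensities.
        for label, proportions in p_happy.items():
            (threshold, slope), _ = curve_fit(psychometric, levels, proportions, p0=[0.5, 5.0])
            print(f"{label}: happy decision threshold ~ {threshold:.2f}")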

  10. Prism adaptation does not alter configural processing of faces

    PubMed Central

    Bultitude, Janet H.; Downing, Paul E.; Rafal, Robert D.

    2013-01-01

    Patients with hemispatial neglect (‘neglect’) following a brain lesion show difficulty responding or orienting to objects and events on the left side of space. Substantial evidence supports the use of a sensorimotor training technique called prism adaptation as a treatment for neglect. Reaching for visual targets viewed through prismatic lenses that induce a rightward shift in the visual image results in a leftward recalibration of reaching movements that is accompanied by a reduction of symptoms in patients with neglect. The understanding of prism adaptation has also been advanced through studies of healthy participants, in whom adaptation to leftward prismatic shifts results in temporary neglect-like performance. Interestingly, prism adaptation can also alter aspects of non-lateralised spatial attention. We previously demonstrated that prism adaptation alters the extent to which neglect patients and healthy participants process local features versus global configurations of visual stimuli. Since deficits in non-lateralised spatial attention are thought to contribute to the severity of neglect symptoms, it is possible that the effect of prism adaptation on these deficits contributes to its efficacy. This study tested the pervasiveness of the effects of prism adaptation on perception by examining its effect on configural face processing using a composite face task. The composite face task is a persuasive demonstration of the automatic global-level processing of faces: the top and bottom halves of two familiar faces form a seemingly new, unknown face when viewed together. Participants identified the top or bottom halves of composite faces before and after prism adaptation. Sensorimotor adaptation was confirmed by a significant pointing aftereffect; however, there was no significant change in the extent to which the irrelevant face half interfered with processing. The results support the proposal that the therapeutic effects of prism adaptation are limited to dorsal stream processing. PMID:25110574

  11. Prism adaptation does not alter configural processing of faces.

    PubMed

    Bultitude, Janet H; Downing, Paul E; Rafal, Robert D

    2013-01-01

    Patients with hemispatial neglect ('neglect') following a brain lesion show difficulty responding or orienting to objects and events on the left side of space. Substantial evidence supports the use of a sensorimotor training technique called prism adaptation as a treatment for neglect. Reaching for visual targets viewed through prismatic lenses that induce a rightward shift in the visual image results in a leftward recalibration of reaching movements that is accompanied by a reduction of symptoms in patients with neglect. The understanding of prism adaptation has also been advanced through studies of healthy participants, in whom adaptation to leftward prismatic shifts results in temporary neglect-like performance. Interestingly, prism adaptation can also alter aspects of non-lateralised spatial attention. We previously demonstrated that prism adaptation alters the extent to which neglect patients and healthy participants process local features versus global configurations of visual stimuli. Since deficits in non-lateralised spatial attention are thought to contribute to the severity of neglect symptoms, it is possible that the effect of prism adaptation on these deficits contributes to its efficacy. This study examines the pervasiveness of the effects of prism adaptation on perception by testing its effect on configural face processing using a composite face task. The composite face task is a persuasive demonstration of the automatic global-level processing of faces: the top and bottom halves of two familiar faces form a seemingly new, unknown face when viewed together. Participants identified the top or bottom halves of composite faces before and after prism adaptation. Sensorimotor adaptation was confirmed by a significant pointing aftereffect; however, there was no significant change in the extent to which the irrelevant face half interfered with processing. The results support the proposal that the therapeutic effects of prism adaptation are limited to dorsal stream processing.

  12. A family at risk: congenital prosopagnosia, poor face recognition and visuoperceptual deficits within one family.

    PubMed

    Johnen, Andreas; Schmukle, Stefan C; Hüttenbrink, Judith; Kischka, Claudia; Kennerknecht, Ingo; Dobel, Christian

    2014-05-01

    Congenital prosopagnosia (CP) describes a severe face processing impairment despite intact early vision and in the absence of overt brain damage. CP is assumed to be present from birth and often transmitted within families. Previous studies reported conflicting findings regarding associated deficits in nonface visuoperceptual tasks. However, diagnostic criteria for CP significantly differed between studies, impeding conclusions on the heterogeneity of the impairment. Following current suggestions for clinical diagnoses of CP, we administered standardized tests for face processing, a self-report questionnaire and general visual processing tests to an extended family (N=28), in which many members reported difficulties with face recognition. This allowed us to assess the degree of heterogeneity of the deficit within a large sample of suspected CPs of similar genetic and environmental background. (a) We found evidence for a severe face processing deficit but intact nonface visuoperceptual skills in three family members - a father and his two sons - who fulfilled conservative criteria for a CP diagnosis on standardized tests and a self-report questionnaire, thus corroborating findings of familial transmissions of CP. (b) Face processing performance of the remaining family members was also significantly below the mean of the general population, suggesting that face processing impairments are transmitted as a continuous trait rather than in a dichotomous all-or-nothing fashion. (c) Self-rating scores of face recognition showed acceptable correlations with standardized tests, suggesting this method as a viable screening procedure for CP diagnoses. (d) Finally, some family members revealed severe impairments in general visual processing and nonface visual memory tasks either in conjunction with face perception deficits or as an isolated impairment. This finding may indicate an elevated risk for more general visuoperceptual deficits in families with prosopagnosic members. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. The other-race and other-species effects in face perception – a subordinate-level analysis

    PubMed Central

    Dahl, Christoph D.; Rasch, Malte J.; Chen, Chien-Chung

    2014-01-01

    The ability to discriminate faces is modulated by the frequency of exposure to a category of faces: discrimination performance is lower for infrequently encountered faces than for frequently encountered ones. Two such phenomena have been described in the literature: the own-race advantage, a benefit in processing own-race as opposed to other-race faces, and the own-species advantage, a benefit in processing conspecific as opposed to heterospecific faces. So far, the exact parameters that drive either of these two effects are not fully understood. In the following we present a full assessment of data in human participants describing discrimination performance across two races (Asian and Caucasian) as well as a range of non-human primate faces (chimpanzee, Rhesus macaque and marmoset). We measured reaction times of Asian participants performing a delayed matching-to-sample task, and correlated the results with similarity estimates of facial configuration and face parts. We found faster discrimination of own-race than of other-race/species faces. Further, we found a strong reliance on configural information in upright own-species/-race faces and on individual face parts in all inverted face classes, supporting the assumption of specialized processing for the face class of most frequent exposure. PMID:25285092

  14. Optogenetic and pharmacological suppression of spatial clusters of face neurons reveal their causal role in face gender discrimination

    PubMed Central

    Afraz, Arash; Boyden, Edward S.; DiCarlo, James J.

    2015-01-01

    Neurons that respond more to images of faces over nonface objects were identified in the inferior temporal (IT) cortex of primates three decades ago. Although it is hypothesized that perceptual discrimination between faces depends on the neural activity of IT subregions enriched with “face neurons,” such a causal link has not been directly established. Here, using optogenetic and pharmacological methods, we reversibly suppressed the neural activity in small subregions of IT cortex of macaque monkeys performing a facial gender-discrimination task. Each type of intervention independently demonstrated that suppression of IT subregions enriched in face neurons induced a contralateral deficit in face gender-discrimination behavior. The same neural suppression of other IT subregions produced no detectable change in behavior. These results establish a causal link between the neural activity in IT face neuron subregions and face gender-discrimination behavior. Also, the demonstration that brief neural suppression of specific spatial subregions of IT induces behavioral effects opens the door for applying the technical advantages of optogenetics to a systematic attack on the causal relationship between IT cortex and high-level visual perception. PMID:25953336

  15. Passing faces: sequence-dependent variations in the perceptual processing of emotional faces.

    PubMed

    Karl, Christian; Hewig, Johannes; Osinsky, Roman

    2016-10-01

    There is broad evidence that contextual factors influence the processing of emotional facial expressions. Yet temporal-dynamic aspects, inter alia how face processing is influenced by the specific order of neutral and emotional facial expressions, have been largely neglected. To shed light on this topic, we recorded the electroencephalogram from 168 healthy participants while they performed a gender-discrimination task with angry and neutral faces. Our event-related potential (ERP) analyses revealed a strong emotional modulation of the N170 component, indicating that the basic visual encoding and emotional analysis of a facial stimulus happen, at least partially, in parallel. While the N170 and the late positive potential (LPP; 400-600 ms) were only modestly affected by the sequence of preceding faces, we observed a strong influence of face sequences on the early posterior negativity (EPN; 200-300 ms). Finally, the differing response patterns of the EPN and LPP indicate that these two ERPs represent distinct processes during face analysis: while the former seems to represent the integration of contextual information in the perception of a current face, the latter appears to represent the net emotional interpretation of a current face.

  16. Posterior cortical atrophy: an investigation of scan paths generated during face matching tasks

    PubMed Central

    Meek, Benjamin P.; Locheed, Keri; Lawrence-Dewar, Jane M.; Shelton, Paul; Marotta, Jonathan J.

    2013-01-01

    When viewing a face, healthy individuals focus more on the area containing the eyes and upper nose in order to retrieve important featural and configural information. In contrast, individuals with face blindness (prosopagnosia) tend to direct fixations toward individual facial features—particularly the mouth. Presented here is an examination of face perception deficits in individuals with Posterior Cortical Atrophy (PCA). PCA is a rare progressive neurodegenerative disorder that is characterized by atrophy in occipito-parietal and occipito-temporal cortices. PCA primarily affects higher visual processing, while memory, reasoning, and insight remain relatively intact. A common symptom of PCA is a decreased effective field of vision caused by the inability to “see the whole picture.” Individuals with PCA and healthy control participants completed a same/different discrimination task in which images of faces were presented as cue-target pairs. Eye-tracking equipment and a novel computer-based perceptual task—the Viewing Window paradigm—were used to investigate scan patterns when faces were presented in open view or through a restricted-view, respectively. In contrast to previous prosopagnosia research, individuals with PCA each produced unique scan paths that focused on non-diagnostically useful locations. This focus on non-diagnostically useful locations was also present when using a restricted viewing aperture, suggesting that individuals with PCA have difficulty processing the face at either the featural or configural level. In fact, it appears that the decreased effective field of view in PCA patients is so severe that it results in an extreme dependence on local processing, such that a feature-based approach is not even possible. PMID:23825453

  17. The "EyeCane", a new electronic travel aid for the blind: Technology, behavior & swift learning.

    PubMed

    Maidenbaum, Shachar; Hanassy, Shlomi; Abboud, Sami; Buchs, Galit; Chebat, Daniel-Robert; Levy-Tzedek, Shelly; Amedi, Amir

    2014-01-01

    Independent mobility is one of the most pressing problems facing people who are blind. We present the EyeCane, a new mobility aid aimed at increasing perception of the environment beyond what is provided by the traditional White Cane for tasks such as distance estimation, navigation and obstacle detection. The "EyeCane" enhances the traditional White Cane by using tactile and auditory output to increase detectable distance and angles. It circumvents the technical pitfalls of other devices, such as weight, short battery life, complex interface schemes, and a slow learning curve. It implements multiple beams to enable detection of obstacles at different heights, and narrow beams to provide active sensing that can potentially increase the user's spatial perception of the environment. Participants were tasked with using the EyeCane for several basic tasks with minimal training. Blind and blindfolded-sighted participants were able to use the EyeCane successfully for distance estimation, simple navigation and simple obstacle detection after only several minutes of training. These results demonstrate the EyeCane's potential for mobility rehabilitation. The short training time is especially important since available mobility training resources are limited, not always available, and can be quite expensive and/or entail long waiting periods.
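    The abstract does not specify the EyeCane's actual encoding scheme, but the general principle of such electronic travel aids, converting a measured distance into a tactile or auditory pulse rate so that nearer obstacles produce faster feedback, can be sketched as follows. The range limits, pulse rates, and function name are assumptions for illustration, not the device's specification.

        # Illustrative sketch (not the EyeCane's actual firmware): map a distance
        # reading from a narrow-beam range sensor to a vibration/beep pulse rate,
        # so that nearer obstacles produce faster feedback.
        MAX_RANGE_M = 5.0      # assumed maximum detectable distance
        MIN_RATE_HZ = 1.0      # slow pulsing for far obstacles
        MAX_RATE_HZ = 20.0     # rapid pulsing for very close obstacles

        def distance_to_pulse_rate(distance_m: float) -> float:
            """Return a feedback pulse rate (Hz) for a measured distance (m)."""
            if distance_m >= MAX_RANGE_M:
                return 0.0                      # nothing detected: stay silent
            d = max(distance_m, 0.0)
            # Linear interpolation: closer obstacle -> higher pulse rate.
            return MIN_RATE_HZ + (MAX_RATE_HZ - MIN_RATE_HZ) * (1.0 - d / MAX_RANGE_M)

        if __name__ == "__main__":
            for d in (0.2, 1.0, 2.5, 6.0):
                print(f"{d:4.1f} m -> {distance_to_pulse_rate(d):5.1f} Hz")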

  18. Association between amygdala response to emotional faces and social anxiety in autism spectrum disorders.

    PubMed

    Kleinhans, Natalia M; Richards, Todd; Weaver, Kurt; Johnson, L Clark; Greenson, Jessica; Dawson, Geraldine; Aylward, Elizabeth

    2010-10-01

    Difficulty interpreting facial expressions has been reported in autism spectrum disorders (ASD) and is thought to be associated with amygdala abnormalities. To further explore the neural basis of abnormal emotional face processing in ASD, we conducted an fMRI study of emotional face matching in high-functioning adults with ASD and age, IQ, and gender matched controls. In addition, we investigated whether there was a relationship between self-reported social anxiety and fMRI activation. During fMRI scanning, study participants were instructed to match facial expressions depicting fear or anger. The control condition was a comparable shape-matching task. The control group evidenced significantly increased left prefrontal activation and decreased activation in the occipital lobes compared to the ASD group during emotional face matching. Further, within the ASD group, greater social anxiety was associated with increased activation in right amygdala and left middle temporal gyrus, and decreased activation in the fusiform face area. These results indicate that level of social anxiety mediates the neural response to emotional face perception in ASD. Copyright © 2010 Elsevier Ltd. All rights reserved.

  19. The Development of Face Perception in Infancy: Intersensory Interference and Unimodal Visual Facilitation

    ERIC Educational Resources Information Center

    Bahrick, Lorraine E.; Lickliter, Robert; Castellanos, Irina

    2013-01-01

    Although research has demonstrated impressive face perception skills of young infants, little attention has focused on conditions that enhance versus impair infant face perception. The present studies tested the prediction, generated from the intersensory redundancy hypothesis (IRH), that face discrimination, which relies on detection of visual…

  20. Serial dependence in the perception of attractiveness.

    PubMed

    Xia, Ye; Leib, Allison Yamanashi; Whitney, David

    2016-12-01

    The perception of attractiveness is essential for choices of food, object, and mate preference. Like perception of other visual features, perception of attractiveness is stable despite constant changes of image properties due to factors like occlusion, visual noise, and eye movements. Recent results demonstrate that perception of low-level stimulus features and even more complex attributes like human identity are biased towards recent percepts. This effect is often called serial dependence. Some recent studies have suggested that serial dependence also exists for perceived facial attractiveness, though there is also concern that the reported effects are due to response bias. Here we used an attractiveness-rating task to test the existence of serial dependence in perceived facial attractiveness. Our results demonstrate that perceived face attractiveness was pulled by the attractiveness level of facial images encountered up to 6 s prior. This effect was not due to response bias and did not rely on the previous motor response. This perceptual pull increased as the difference in attractiveness between previous and current stimuli increased. Our results reconcile previously conflicting findings and extend previous work, demonstrating that sequential dependence in perception operates across different levels of visual analysis, even at the highest levels of perceptual interpretation.
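    Serial dependence of this kind is commonly quantified by relating the error on the current trial to the difference between the previous and the current stimulus value: a positive relationship means the current percept is pulled toward the preceding stimulus. The sketch below runs that analysis on simulated ratings; the variable names and the size of the simulated pull are illustrative assumptions, not the study's data.

        # Minimal sketch of a serial-dependence analysis on simulated attractiveness
        # ratings (all data and the size of the simulated pull are hypothetical).
        import numpy as np

        rng = np.random.default_rng(0)
        n_trials = 500
        stimulus = rng.uniform(1, 7, n_trials)            # 'true' attractiveness levels

        # Simulate responses pulled slightly toward the previous stimulus.
        pull = 0.15
        response = stimulus + rng.normal(0, 0.5, n_trials)
        response[1:] += pull * (stimulus[:-1] - stimulus[1:])

        error = response[1:] - stimulus[1:]               # current-trial response error
        delta = stimulus[:-1] - stimulus[1:]              # previous minus current stimulus

        # Slope of error vs. delta: a positive slope indicates attraction toward
        # the previously seen face.
        slope = np.polyfit(delta, error, 1)[0]
        print(f"serial-dependence slope ~ {slope:.2f} (positive = perceptual pull)")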

  1. Modulation of neural circuits underlying temporal production by facial expressions of pain.

    PubMed

    Ballotta, Daniela; Lui, Fausta; Porro, Carlo Adolfo; Nichelli, Paolo Frigio; Benuzzi, Francesca

    2018-01-01

    According to the Scalar Expectancy Theory, humans are equipped with a biological internal clock, possibly modulated by attention and arousal. Both emotions and pain are arousing and can absorb attentional resources, thus causing distortions of temporal perception. The aims of the present single-event fMRI study were to investigate: a) whether observation of facial expressions of pain interferes with time production; and b) the neural network subserving this kind of temporal distortions. Thirty healthy volunteers took part in the study. Subjects were asked to perform a temporal production task and a concurrent gender discrimination task, while viewing faces of unknown people with either pain-related or neutral expressions. Behavioural data showed temporal underestimation (i.e., longer produced intervals) during implicit pain expression processing; this was accompanied by increased activity of right middle temporal gyrus, a region known to be active during the perception of emotional and painful faces. Psycho-Physiological Interaction analyses showed that: 1) the activity of middle temporal gyrus was positively related to that of areas previously reported to play a role in timing: left primary motor cortex, middle cingulate cortex, supplementary motor area, right anterior insula, inferior frontal gyrus, bilateral cerebellum and basal ganglia; 2) the functional connectivity of supplementary motor area with several frontal regions, anterior cingulate cortex and right angular gyrus was correlated to the produced interval during painful expression processing. Our data support the hypothesis that observing emotional expressions distorts subjective time perception through the interaction of the neural network subserving processing of facial expressions with the brain network involved in timing. Within this frame, middle temporal gyrus appears to be the key region of the interplay between the two neural systems.

  2. Effects of facial color on the subliminal processing of fearful faces.

    PubMed

    Nakajima, K; Minami, T; Nakauchi, S

    2015-12-03

    Recent studies have suggested that both configural information, such as face shape, and surface information are important for face perception. In particular, facial color is sufficiently suggestive of emotional states, as in the phrases: "flushed with anger" and "pale with fear." However, few studies have examined the relationship between facial color and emotional expression. On the other hand, event-related potential (ERP) studies have shown that emotional expressions, such as fear, are processed unconsciously. In this study, we examined how facial color modulated the supraliminal and subliminal processing of fearful faces. We recorded electroencephalograms while participants performed a facial emotion identification task involving masked target faces exhibiting facial expressions (fearful or neutral) and colors (natural or bluish). The results indicated that there was a significant interaction between facial expression and color for the latency of the N170 component. Subsequent analyses revealed that the bluish-colored faces increased the latency effect of facial expressions compared to the natural-colored faces, indicating that the bluish color modulated the processing of fearful expressions. We conclude that the unconscious processing of fearful faces is affected by facial color. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  3. Holistic face representation is highly orientation-specific.

    PubMed

    Rosenthal, Gideon; Levakov, Gidon; Avidan, Galia

    2017-09-29

    It has long been argued that face processing requires disproportionate reliance on holistic processing (HP), relative to that required for nonface object recognition. Nevertheless, whether the holistic nature of face perception is achieved via a unique internal representation or by the employment of an automated attention mechanism is still debated. Previous studies had used the face inversion effect (FIE), a unique face-processing marker, or the face composite task, a gold standard paradigm measuring holistic processing, to examine the validity of these two different hypotheses, with some studies combining the two paradigms. However, the results of such studies remain inconclusive, particularly pertaining to the issue of the two proposed HP mechanisms-an internal representation as opposed to an automated attention mechanism. Here, using the complete composite paradigm design, we aimed to examine whether face rotation yields a nonlinear or a linear drop in HP, thus supporting an account that face processing is based either on an orientation-dependent internal representation or on automated attention. Our results reveal that even a relatively small perturbation in face orientation (30 deg away from upright) already causes a sharp decline in HP. These findings support the face internal representation hypothesis and the notion that the holistic processing of faces is highly orientation-specific.

  4. "I was really sceptical...But it worked really well": a qualitative study of patient perceptions of telephone-delivered exercise therapy by physiotherapists for people with knee osteoarthritis.

    PubMed

    Lawford, B J; Delany, C; Bennell, K L; Hinman, R S

    2018-06-01

    Physiotherapists typically prescribe exercise therapy for people with osteoarthritis (OA) via face-to-face consultations. This study aimed to explore people's perceptions of exercise therapy delivered by physiotherapists via telephone for their knee OA. The design was a qualitative study (based on interpretivist methodology) embedded within a randomised controlled trial. Semi-structured individual interviews were conducted with 20 people with knee OA who had received exercise advice and support from one of eight physiotherapists via telephone over 6 months. Interviews were audio recorded, transcribed verbatim and thematically analysed. Although people with OA were initially sceptical about receiving exercise therapy via telephone, they described mostly positive experiences, valuing the convenience and accessibility. However, some desired visual contact with the physiotherapist and suggested including video-conferencing calls or an initial in-person clinic visit. Participants valued the sense of undivided focus and attention they received from the physiotherapist and believed that they were able to communicate effectively via telephone. Participants felt confident performing their exercise program without supervision and described benefits including increased muscular strength, improved pain, and the ability to perform tasks that they had previously been unable to. People with knee OA held mostly positive perceptions about receiving exercise therapy from a physiotherapist via telephone, suggesting that such a service is broadly acceptable to consumers. Such services were generally not viewed as a substitute for face-to-face physiotherapy care, but rather as a new option that could increase accessibility of physiotherapy services, particularly for follow-up consultations. Copyright © 2018 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.

  5. Segregation of face sensitive areas within the fusiform gyrus using global signal regression? A study on amygdala resting-state functional connectivity.

    PubMed

    Kruschwitz, Johann D; Meyer-Lindenberg, Andreas; Veer, Ilya M; Wackerhagen, Carolin; Erk, Susanne; Mohnke, Sebastian; Pöhland, Lydia; Haddad, Leila; Grimm, Oliver; Tost, Heike; Romanczuk-Seiferth, Nina; Heinz, Andreas; Walter, Martin; Walter, Henrik

    2015-10-01

    The application of global signal regression (GSR) to resting-state functional magnetic resonance imaging data and its usefulness is a widely discussed topic. In this article, we report an observation of segregated distribution of amygdala resting-state functional connectivity (rs-FC) within the fusiform gyrus (FFG) as an effect of GSR in a multi-center-sample of 276 healthy subjects. Specifically, we observed that amygdala rs-FC was distributed within the FFG as distinct anterior versus posterior clusters delineated by positive versus negative rs-FC polarity when GSR was performed. To characterize this effect in more detail, post hoc analyses revealed the following: first, direct overlays of task-functional magnetic resonance imaging derived face sensitive areas and clusters of positive versus negative amygdala rs-FC showed that the positive amygdala rs-FC cluster corresponded best with the fusiform face area, whereas the occipital face area corresponded to the negative amygdala rs-FC cluster. Second, as expected from a hierarchical face perception model, these amygdala rs-FC defined clusters showed differential rs-FC with other regions of the visual stream. Third, dynamic connectivity analyses revealed that these amygdala rs-FC defined clusters also differed in their rs-FC variance across time to the amygdala. Furthermore, subsample analyses of three independent research sites confirmed reliability of the effect of GSR, as revealed by similar patterns of distinct amygdala rs-FC polarity within the FFG. In this article, we discuss the potential of GSR to segregate face sensitive areas within the FFG and furthermore discuss how our results may relate to the functional organization of the face-perception circuit. © 2015 Wiley Periodicals, Inc.
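    Global signal regression itself is a simple preprocessing step: the mean time series over all voxels is regressed out of every voxel's time series before connectivity is computed, which is what can push some seed correlations negative and yield the positive/negative polarity described above. The sketch below illustrates that step on simulated data; the array shapes and seed definition are illustrative assumptions, not the study's pipeline.

        # Minimal sketch of global signal regression (GSR) followed by seed-based
        # resting-state functional connectivity. Data shapes are hypothetical.
        import numpy as np

        rng = np.random.default_rng(1)
        n_timepoints, n_voxels = 200, 1000
        data = rng.normal(size=(n_timepoints, n_voxels))      # time x voxels
        data += rng.normal(size=(n_timepoints, 1))            # add a shared 'global' signal

        def regress_out_global_signal(ts):
            """Remove the global mean time series from every voxel via least squares."""
            g = ts.mean(axis=1, keepdims=True)                 # global signal (time x 1)
            design = np.hstack([g, np.ones_like(g)])           # global signal + intercept
            beta, *_ = np.linalg.lstsq(design, ts, rcond=None)
            return ts - design @ beta                          # residuals = GSR-cleaned data

        clean = regress_out_global_signal(data)

        def seed_map(seed, ts):
            """Pearson correlation of a seed time course with every voxel."""
            return np.corrcoef(np.column_stack([seed, ts]).T)[0, 1:]

        # Treat voxel 0 as the 'seed' (e.g., an amygdala voxel) and compare maps.
        print(f"mean connectivity before GSR: {seed_map(data[:, 0], data).mean():.3f}")
        print(f"mean connectivity after  GSR: {seed_map(clean[:, 0], clean).mean():.3f}")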

  6. Face processing pattern under top-down perception: a functional MRI study

    NASA Astrophysics Data System (ADS)

    Li, Jun; Liang, Jimin; Tian, Jie; Liu, Jiangang; Zhao, Jizheng; Zhang, Hui; Shi, Guangming

    2009-02-01

    Although top-down perceptual processes play an important role in face processing, their neural substrate remains unclear because the top-down stream is difficult to isolate from activation patterns contaminated by bottom-up face perception input. In the present study, a novel paradigm in which participants were instructed to detect faces in pure noise images was employed, which efficiently eliminates the interference of bottom-up face perception in top-down face processing. By analyzing the map of functional connectivity with the right FFA, computed with conventional Pearson correlation, a candidate face processing pattern induced by top-down perception can be obtained. Apart from the bilateral fusiform gyrus (FG), left inferior occipital gyrus (IOG) and left superior temporal sulcus (STS), which are consistent with the core system in the distributed cortical network for face perception, activation induced by top-down face processing was also found in regions including the anterior cingulate cortex (ACC), right orbitofrontal cortex (OFC), left precuneus, right parahippocampal cortex, left dorsolateral prefrontal cortex (DLPFC), right frontal pole, bilateral premotor cortex, left inferior parietal cortex and bilateral thalamus. The results indicate that decision-making, attention, episodic memory retrieval and contextual associative processing networks cooperate with general face processing regions to process face information under top-down perception.

  7. Face Context Influences Local Part Processing: An ERP Study.

    PubMed

    Zhang, Hong; Sun, Yaoru; Zhao, Lun

    2017-09-01

    Perception of face parts on the basis of features is thought to be different from perception of whole faces, which is more based on configural information. Face context is also suggested to play an important role in face processing. To investigate how face context influences the early-stage perception of facial local parts, we used an oddball paradigm that tested perceptual stages of face processing rather than recognition. We recorded the event-related potentials (ERPs) elicited by whole faces and face parts presented in four conditions (upright-normal, upright-thatcherised, inverted-normal and inverted-thatcherised), as well as the ERPs elicited by non-face objects (whole houses and house parts) with corresponding conditions. The results showed that face context significantly affected the N170 with increased amplitudes and earlier peak latency for upright normal faces. Removing face context delayed the P1 latency but did not affect the P1 amplitude prominently for both upright and inverted normal faces. Across all conditions, neither the N170 nor the P1 was modulated by house context. The significant changes on the N170 and P1 components revealed that face context influences local part processing at the early stage of face processing and this context effect might be specific for face perception. We further suggested that perceptions of whole faces and face parts are functionally distinguished.

  8. The perception of visual emotion: comparing different measures of awareness.

    PubMed

    Szczepanowski, Remigiusz; Traczyk, Jakub; Wierzchoń, Michał; Cleeremans, Axel

    2013-03-01

    Here, we explore the sensitivity of different awareness scales in revealing conscious reports on visual emotion perception. Participants were exposed to a backward masking task involving fearful faces and asked to rate their conscious awareness in perceiving emotion in facial expression using three different subjective measures: confidence ratings (CRs), with the conventional taxonomy of certainty, the perceptual awareness scale (PAS), through which participants categorize "raw" visual experience, and post-decision wagering (PDW), which involves economic categorization. Our results show that the CR measure was the most exhaustive and the most graded. In contrast, the PAS and PDW measures suggested instead that consciousness of emotional stimuli is dichotomous. Possible explanations of the inconsistency were discussed. Finally, our results also indicate that PDW biases awareness ratings by enhancing first-order accuracy of emotion perception. This effect was possibly a result of higher motivation induced by monetary incentives. Copyright © 2012 Elsevier Inc. All rights reserved.

  9. Neural correlates of the eye dominance effect in human face perception: the left-visual-field superiority for faces revisited.

    PubMed

    Jung, Wookyoung; Kang, Joong-Gu; Jeon, Hyeonjin; Shim, Miseon; Sun Kim, Ji; Leem, Hyun-Sung; Lee, Seung-Hwan

    2017-08-01

    Faces are processed best when they are presented in the left visual field (LVF), a phenomenon known as LVF superiority. Although one eye contributes more when perceiving faces, it is unclear how the dominant eye (DE), the eye we unconsciously use when performing a monocular task, affects face processing. Here, we examined the influence of the DE on the LVF superiority for faces using event-related potentials. Twenty left-eye-dominant (LDE group) and 23 right-eye-dominant (RDE group) participants performed the experiments. Face stimuli were randomly presented in the LVF or right visual field (RVF). The RDE group exhibited significantly larger N170 amplitudes compared with the LDE group. Faces presented in the LVF elicited N170 amplitudes that were significantly more negative in the RDE group than they were in the LDE group, whereas the amplitudes elicited by stimuli presented in the RVF were equivalent between the groups. The LVF superiority was maintained in the RDE group but not in the LDE group. Our results provide the first neural evidence of the DE's effects on the LVF superiority for faces. We propose that the RDE may be more biologically specialized for face processing. © The Author (2017). Published by Oxford University Press.

  10. Neural correlates of the eye dominance effect in human face perception: the left-visual-field superiority for faces revisited

    PubMed Central

    Jung, Wookyoung; Kang, Joong-Gu; Jeon, Hyeonjin; Shim, Miseon; Sun Kim, Ji; Leem, Hyun-Sung

    2017-01-01

    Abstract Faces are processed best when they are presented in the left visual field (LVF), a phenomenon known as LVF superiority. Although one eye contributes more when perceiving faces, it is unclear how the dominant eye (DE), the eye we unconsciously use when performing a monocular task, affects face processing. Here, we examined the influence of the DE on the LVF superiority for faces using event-related potentials. Twenty left-eye-dominant (LDE group) and 23 right-eye-dominant (RDE group) participants performed the experiments. Face stimuli were randomly presented in the LVF or right visual field (RVF). The RDE group exhibited significantly larger N170 amplitudes compared with the LDE group. Faces presented in the LVF elicited N170 amplitudes that were significantly more negative in the RDE group than they were in the LDE group, whereas the amplitudes elicited by stimuli presented in the RVF were equivalent between the groups. The LVF superiority was maintained in the RDE group but not in the LDE group. Our results provide the first neural evidence of the DE’s effects on the LVF superiority for faces. We propose that the RDE may be more biologically specialized for face processing. PMID:28379584

  11. Spatial Mechanisms within the Dorsal Visual Pathway Contribute to the Configural Processing of Faces.

    PubMed

    Zachariou, Valentinos; Nikas, Christine V; Safiullah, Zaid N; Gotts, Stephen J; Ungerleider, Leslie G

    2017-08-01

    Human face recognition is often attributed to configural processing; namely, processing the spatial relationships among the features of a face. If configural processing depends on fine-grained spatial information, do visuospatial mechanisms within the dorsal visual pathway contribute to this process? We explored this question in human adults using functional magnetic resonance imaging and transcranial magnetic stimulation (TMS) in a same-different face detection task. Within localized, spatial-processing regions of the posterior parietal cortex, configural face differences led to significantly stronger activation compared to featural face differences, and the magnitude of this activation correlated with behavioral performance. In addition, detection of configural relative to featural face differences led to significantly stronger functional connectivity between the right FFA and the spatial processing regions of the dorsal stream, whereas detection of featural relative to configural face differences led to stronger functional connectivity between the right FFA and left FFA. Critically, TMS centered on these parietal regions impaired performance on configural but not featural face difference detections. We conclude that spatial mechanisms within the dorsal visual pathway contribute to the configural processing of facial features and, more broadly, that the dorsal stream may contribute to the veridical perception of faces. Published by Oxford University Press 2016.

  12. Affective blindsight in the absence of input from face processing regions in occipital-temporal cortex.

    PubMed

    Striemer, Christopher L; Whitwell, Robert L; Goodale, Melvyn A

    2017-11-12

    Previous research suggests that the implicit recognition of emotional expressions may be carried out by pathways that bypass primary visual cortex (V1) and project to the amygdala. Some of the strongest evidence supporting this claim comes from case studies of "affective blindsight" in which patients with V1 damage can correctly guess whether an unseen face was depicting a fearful or happy expression. In the current study, we report a new case of affective blindsight in patient MC, who is cortically blind following extensive bilateral lesions to V1, as well as to face and object processing regions in her ventral visual stream. Despite her large lesions, MC has preserved motion perception, which is related to sparing of the motion-sensitive region MT+ in both hemispheres. To examine affective blindsight in MC, we asked her to perform gender and emotion discrimination tasks in which she had to guess, using a two-alternative forced-choice procedure, whether the face presented was male or female, happy or fearful, or happy or angry. In addition, we also tested MC in a four-alternative forced-choice target localization task. Results indicated that MC was not able to determine the gender of the faces (53% accuracy) or localize targets in a forced-choice task. However, she was able to determine, at above chance levels, whether the face presented was depicting a happy or fearful (67%, p = .006), or a happy or angry (64%, p = .025) expression. Interestingly, although MC was better than chance at discriminating between emotions in faces when asked to make rapid judgments, her performance fell to chance when she was asked to provide subjective confidence ratings about her performance. These data lend further support to the idea that there is a non-conscious visual pathway that bypasses V1 and is capable of processing affective signals from facial expressions without input from higher-order face and object processing regions in the ventral visual stream. Copyright © 2017 Elsevier Ltd. All rights reserved.
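    Whether a two-alternative forced-choice guessing rate such as 67% is reliably above the 50% chance level is typically assessed with a binomial test, which is presumably the kind of test behind the p values quoted above. The sketch below shows that calculation; the trial count is a hypothetical placeholder, since the abstract does not report it.

        # Minimal sketch: test whether 2AFC guessing accuracy exceeds the 50% chance level.
        # The number of trials below is a hypothetical placeholder, not the study's value.
        from scipy.stats import binomtest

        n_trials = 100                       # assumed, for illustration only
        n_correct = round(0.67 * n_trials)   # e.g., 67% correct happy-vs-fearful guesses

        result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
        print(f"{n_correct}/{n_trials} correct, one-sided binomial p = {result.pvalue:.4f}")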

  13. Emotional Processing of Personally Familiar Faces in the Vegetative State

    PubMed Central

    Sharon, Haggai; Pasternak, Yotam; Ben Simon, Eti; Gruberger, Michal; Giladi, Nir; Krimchanski, Ben Zion; Hassin, David; Hendler, Talma

    2013-01-01

    Background The Vegetative State (VS) is a severe disorder of consciousness in which patients are awake but display no signs of awareness. Yet, recent functional magnetic resonance imaging (fMRI) studies have demonstrated evidence for covert awareness in VS patients by recording specific brain activations during a cognitive task. However, the possible existence of incommunicable subjective emotional experiences in VS patients remains largely unexplored. This study aimed to probe whether VS patients retain the ability to selectively process external stimuli according to their emotional value, and to look for evidence of covert emotional awareness in patients. Methods and Findings In order to explore these questions, we employed the emotive impact of observing personally familiar faces, known to provoke specific perceptual as well as emotional brain activations. Four VS patients and thirteen healthy controls first underwent an fMRI scan while viewing pictures of non-familiar faces, personally familiar faces and pictures of themselves. In a subsequent imagery task participants were asked to actively imagine one of their parent's faces. Analyses focused on face and familiarity selective regional brain activations and inter-regional functional connectivity. Similar to controls, all patients displayed face selective brain responses with further limbic and cortical activations elicited by familiar faces. In patients as well as controls, connectivity was observed between emotional, visual and face specific areas, suggesting aware emotional perception. This connectivity was strongest in the two patients who later recovered. Notably, these two patients also displayed selective amygdala activation during familiar face imagery, with one further exhibiting face selective activations, indistinguishable from healthy controls. Conclusions Taken together, these results show that selective emotional processing can be elicited in VS patients both by external emotionally salient stimuli and by internal cognitive processes, suggesting the ability for covert emotional awareness of self and the environment in VS patients. PMID:24086365

  14. Medium Moderates the Message. How Users Adjust Their Communication Trajectories to Different Media in Collaborative Task Solving

    PubMed Central

    Rychwalska, Agnieszka; Samson, Katarzyna; Łucznik, Klara; Ziembowicz, Michał; Szóstek, Agnieszka; Nowak, Andrzej

    2016-01-01

    Rapid development of information and communications technologies (ICT) has triggered profound changes in how people manage their social contacts in both informal and professional contexts. ICT-mediated communication may seem limited in possibilities compared to face-to-face encounters, but research shows that puzzlingly often it can be just as effective and satisfactory. We posit that ICT users employ specific communication strategies adapted to particular communication channels, which results in a comparable effectiveness of communication. In order to maintain a satisfactory level of conversational intelligibility, they calibrate the content of their messages to a given medium’s richness and adjust the whole conversation trajectory so that every stage of the communication process runs fluently. In the current study, we compared complex task solving trajectories in chat, mobile phone and face-to-face dyadic conversations. Media conditions did not influence the quality of decision outcomes or users’ perceptions of the interaction, but they had an impact on the amount of time devoted to each of the identified phases of decision development. In face-to-face contacts the evaluation stage of the discussion dominated the conversation; in the texting condition the orientation-evaluation-control phases were evenly distributed; and the phone condition provided a midpoint between these two extremes. The results show that contemporary ICT users adjust their communication behavior to the limitations and opportunities of various media through the regulation of attention directed to each stage of the discussion so that as a whole the communication process remains effective. PMID:27337037

  15. The neural correlates of visual self-recognition.

    PubMed

    Devue, Christel; Brédart, Serge

    2011-03-01

    This paper presents a review of studies that were aimed at determining which brain regions are recruited during visual self-recognition, with a particular focus on self-face recognition. A complex bilateral network, involving frontal, parietal and occipital areas, appears to be associated with self-face recognition, with particularly strong involvement of the right hemisphere. Results indicate that it remains difficult to determine which specific cognitive operation is reflected by each recruited brain area, in part due to the variability of the control stimuli and experimental tasks used. A synthesis of the interpretations provided by previous studies is presented. The relevance of using self-recognition as an indicator of self-awareness is discussed. We argue that a major aim of future research in the field should be to identify more clearly the cognitive operations induced by the perception of the self-face, and search for dissociations between neural correlates and cognitive components. Copyright © 2010 Elsevier Inc. All rights reserved.

  16. Rapid processing of emotional expressions without conscious awareness.

    PubMed

    Smith, Marie L

    2012-08-01

    Rapid, accurate categorization of the emotional state of our peers is of critical importance, and as such many have proposed that facial expressions of emotion can be processed without conscious awareness. Typically, studies focus selectively on fearful expressions due to their evolutionary significance, leaving the subliminal processing of other facial expressions largely unexplored. Here, I investigated the time course of processing of 3 facial expressions (fearful, disgusted, and happy) plus an emotionally neutral face, during objectively unaware and aware perception. Participants completed the challenging "which expression?" task in response to briefly presented backward-masked expressive faces. Although participants' behavioral responses did not differentiate between the emotional content of the stimuli in the unaware condition, activity over frontal and occipitotemporal (OT) brain regions indicated an emotional modulation of the neuronal response. Over frontal regions this was driven by negative facial expressions and was present on all emotional trials independent of later categorization, whereas the N170 component, recorded on lateral OT electrodes, was enhanced for all facial expressions but only on trials that would later be categorized as emotional. The results indicate that emotional faces, not only fearful ones, are processed without conscious awareness at an early stage and highlight the critical importance of considering categorization response when studying subliminal perception.

  17. Biological motion perception links diverse facets of theory of mind during middle childhood.

    PubMed

    Rice, Katherine; Anderson, Laura C; Velnoskey, Kayla; Thompson, James C; Redcay, Elizabeth

    2016-06-01

    Two cornerstones of social development--social perception and theory of mind--undergo brain and behavioral changes during middle childhood, but the link between these developing domains is unclear. One theoretical perspective argues that these skills represent domain-specific areas of social development, whereas other perspectives suggest that both skills may reflect a more integrated social system. Given recent evidence from adults that these superficially different domains may be related, the current study examined the developmental relation between these social processes in 52 children aged 7 to 12 years. Controlling for age and IQ, social perception (perception of biological motion in noise) was significantly correlated with two measures of theory of mind: one in which children made mental state inferences based on photographs of the eye region of the face and another in which children made mental state inferences based on stories. Social perception, however, was not correlated with children's ability to make physical inferences from stories about people. Furthermore, the mental state inference tasks were not correlated with each other, suggesting a role for social perception in linking various facets of theory of mind. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Reprint of "Biological motion perception links diverse facets of theory of mind during middle childhood".

    PubMed

    Rice, Katherine; Anderson, Laura C; Velnoskey, Kayla; Thompson, James C; Redcay, Elizabeth

    2016-09-01

    Two cornerstones of social development--social perception and theory of mind--undergo brain and behavioral changes during middle childhood, but the link between these developing domains is unclear. One theoretical perspective argues that these skills represent domain-specific areas of social development, whereas other perspectives suggest that both skills may reflect a more integrated social system. Given recent evidence from adults that these superficially different domains may be related, the current study examined the developmental relation between these social processes in 52 children aged 7 to 12 years. Controlling for age and IQ, social perception (perception of biological motion in noise) was significantly correlated with two measures of theory of mind: one in which children made mental state inferences based on photographs of the eye region of the face and another in which children made mental state inferences based on stories. Social perception, however, was not correlated with children's ability to make physical inferences from stories about people. Furthermore, the mental state inference tasks were not correlated with each other, suggesting a role for social perception in linking various facets of theory of mind. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Dog owners show experience-based viewing behaviour in judging dog face approachability.

    PubMed

    Gavin, Carla Jade; Houghton, Sarah; Guo, Kun

    2017-01-01

    Our prior visual experience plays a critical role in face perception. We show superior perceptual performance for differentiating conspecific (vs non-conspecific), own-race (vs other-race) and familiar (vs unfamiliar) faces. However, it remains unclear whether our experience with faces of other species would influence our gaze allocation for extracting salient facial information. In this eye-tracking study, we asked both dog owners and non-owners to judge the approachability of human, monkey and dog faces, and systematically compared their behavioural performance and gaze pattern associated with the task. Compared to non-owners, dog owners assessed dog faces with shorter time and fewer fixations, but gave higher approachability ratings. The gaze allocation within local facial features was also modulated by the ownership. The averaged proportion of the fixations and viewing time directed at the dog mouth region were significantly less for the dog owners, and more experienced dog owners tended to look more at the dog eyes, suggesting the adoption of a prior experience-based viewing behaviour for assessing dog approachability. No differences in behavioural performance and gaze pattern were observed between dog owners and non-owners when judging human and monkey faces, implying that the dog owner's experience-based gaze strategy for viewing dog faces was not transferable across faces of other species.

  20. Monkeys and Humans Share a Common Computation for Face/Voice Integration

    PubMed Central

    Chandrasekaran, Chandramouli; Lemus, Luis; Trubanova, Andrea; Gondan, Matthias; Ghazanfar, Asif A.

    2011-01-01

    Speech production involves the movement of the mouth and other regions of the face resulting in visual motion cues. These visual cues enhance intelligibility and detection of auditory speech. As such, face-to-face speech is fundamentally a multisensory phenomenon. If speech is fundamentally multisensory, it should be reflected in the evolution of vocal communication: similar behavioral effects should be observed in other primates. Old World monkeys share with humans vocal production biomechanics and communicate face-to-face with vocalizations. It is unknown, however, if they, too, combine faces and voices to enhance their perception of vocalizations. We show that they do: monkeys combine faces and voices in noisy environments to enhance their detection of vocalizations. Their behavior parallels that of humans performing an identical task. We explored what common computational mechanism(s) could explain the pattern of results we observed across species. Standard explanations or models such as the principle of inverse effectiveness and a “race” model failed to account for their behavior patterns. Conversely, a “superposition model”, positing the linear summation of activity patterns in response to visual and auditory components of vocalizations, served as a straightforward but powerful explanatory mechanism for the observed behaviors in both species. As such, it represents a putative homologous mechanism for integrating faces and voices across primates. PMID:21998576
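    The two explanatory mechanisms contrasted in this abstract make different quantitative predictions about audiovisual detection times: a race model is decided by whichever unisensory channel finishes first, whereas a superposition model sums the activity of the two channels toward a single threshold. The simulation below is only an illustrative contrast of those predictions under assumed accumulator parameters, not the authors' actual models or fits.

        # Illustrative simulation (assumed parameters, not the paper's fits):
        # reaction-time predictions of a race model vs. a superposition model
        # for audiovisual (face + voice) detection.
        import numpy as np

        rng = np.random.default_rng(2)
        threshold, dt = 100.0, 1.0

        def accumulation_time(rate, noise_sd=1.0):
            """Time for a noisy linear accumulator to reach threshold (one trial)."""
            evidence, t = 0.0, 0.0
            while evidence < threshold:
                evidence += rate * dt + rng.normal(0, noise_sd)
                t += dt
            return t

        def simulate(rate_visual=1.0, rate_auditory=1.2, trials=200):
            rt_race, rt_super = [], []
            for _ in range(trials):
                tv = accumulation_time(rate_visual)
                ta = accumulation_time(rate_auditory)
                rt_race.append(min(tv, ta))                        # first channel to finish wins
                rt_super.append(accumulation_time(rate_visual + rate_auditory))  # channels sum
            return np.mean(rt_race), np.mean(rt_super)

        race_mean, super_mean = simulate()
        print(f"race model mean RT:          {race_mean:.1f}")
        print(f"superposition model mean RT: {super_mean:.1f}  (faster: activity sums linearly)")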

  1. Emotional faces influence evaluation of natural and transformed food.

    PubMed

    Manippa, Valerio; Padulo, Caterina; Brancucci, Alfredo

    2018-07-01

    Previous evidence has shown a direct relationship between feeding behavior and emotions. Despite that, no studies have focused on the influence of emotional faces on food processing. In our study, participants were presented with 72 pairs of visual stimuli composed of a neutral, happy, or disgusted face (5000 ms duration in Experiment 1, adaptation; 150 ms in Experiment 2, priming) followed by a food stimulus (1500 ms). Food stimuli were grouped into pleasant foods, further divided into natural and transformed, and unpleasant rotten foods. The task consisted of judging the food valence (as 'pleasant' or 'unpleasant') by keypress. Results showed a different pattern of response depending on the transformation level of the food. In general, the evaluation of natural foods was more rapid than that of transformed foods, possibly because of their simplicity and their perception as healthier. In addition, transformed foods yielded responses incongruent with the preceding emotional face, whereas natural foods yielded congruent responses. These effects were independent of the duration of the emotional face (i.e., adaptation or priming paradigm) and may depend on the salience of the pleasant food stimuli.

  2. ERP correlates of attention allocation in mothers processing faces of their children

    PubMed Central

    Grasso, Damion J.; Moser, Jason S.; Dozier, Mary; Simons, Robert

    2012-01-01

    This study employed visually evoked event-related potential (ERP) methodology to examine temporal patterns of structural and higher-level face processing in birth and foster/adoptive mothers viewing pictures of their children. Fourteen birth mothers and 14 foster/adoptive mothers engaged in a computerized task in which they viewed facial pictures of their own children, and of familiar and unfamiliar children and adults. All mothers, regardless of type, showed ERP patterns suggestive of increased attention allocation to their own children’s faces compared to other child and adult faces beginning as early as 100–150 ms after stimulus onset and lasting for several hundred milliseconds. These data are in line with a parallel processing model that posits the involvement of several brain regions in simultaneously encoding the structural features of faces as well as their emotional and personal significance. Additionally, late positive ERP patterns associated with greater allocation of attention predicted mothers’ perceptions of the parent–child relationship as positive and influential to their children’s psychological development. These findings suggest the potential utility of using ERP components to index maternal processes. PMID:19428973

  3. Selective attention modulates early human evoked potentials during emotional face-voice processing.

    PubMed

    Ho, Hao Tam; Schröger, Erich; Kotz, Sonja A

    2015-04-01

    Recent findings on multisensory integration suggest that selective attention influences cross-sensory interactions from an early processing stage. Yet, in the field of emotional face-voice integration, the hypothesis prevails that facial and vocal emotional information interacts preattentively. Using ERPs, we investigated the influence of selective attention on the perception of congruent versus incongruent combinations of neutral and angry facial and vocal expressions. Attention was manipulated via four tasks that directed participants to (i) the facial expression, (ii) the vocal expression, (iii) the emotional congruence between the face and the voice, and (iv) the synchrony between lip movement and speech onset. Our results revealed early interactions between facial and vocal emotional expressions, manifested as modulations of the auditory N1 and P2 amplitude by incongruent emotional face-voice combinations. Although audiovisual emotional interactions within the N1 time window were affected by the attentional manipulations, interactions within the P2 modulation showed no such attentional influence. Thus, we propose that the N1 and P2 are functionally dissociated in terms of emotional face-voice processing and discuss evidence in support of the notion that the N1 is associated with cross-sensory prediction, whereas the P2 relates to the derivation of an emotional percept. Essentially, our findings put the integration of facial and vocal emotional expressions into a new perspective-one that regards the integration process as a composite of multiple, possibly independent subprocesses, some of which are susceptible to attentional modulation, whereas others may be influenced by additional factors.

  4. An Intact Social Cognitive Process in Schizophrenia: Situational Context Effects on Perception of Facial Affect

    PubMed Central

    Lee, Junghee; Kern, Robert S.; Harvey, Philippe-Olivier; Horan, William P.; Kee, Kimmy S.; Ochsner, Kevin; Penn, David L.; Green, Michael F.

    2013-01-01

    Background Impaired facial affect recognition is the most consistent social cognitive finding in schizophrenia. Although social situations provide powerful constraints on our perception, little is known about how situational context modulates facial affect recognition in schizophrenia. Methods Study 1 was a single-site study with 34 schizophrenia patients and 22 healthy controls. Study 2 was a 2-site study with 68 schizophrenia patients and 28 controls. Both studies administered a Situational Context Facial Affect Recognition Task with 2 conditions: a situational context condition and a no-context condition. For the situational context condition, a briefly shown face was preceded by a sentence describing either a fear- or surprise-inducing event. In the no-context condition, a face was presented without a sentence. For both conditions, subjects rated how fearful or surprised the face appeared on a 9-point Likert scale. Results For the situational context condition of study 1, both patients and controls rated faces as more afraid when they were paired with fear-inducing sentences and as more surprised when they were paired with surprise-inducing sentences. The degree of modulation was comparable across groups. For the no-context condition, patients rated faces comparably to controls. The findings of study 2 replicated those from study 1. Conclusions Despite previous abnormalities in other types of context paradigms, this study found intact situational context processing in schizophrenia, suggesting that patients benefit from situational context when interpreting ambiguous facial expression. This area of relative social cognitive strength in schizophrenia has implications for social cognitive training programs. PMID:22532704

  5. Early and late temporo-spatial effects of contextual interference during perception of facial affect.

    PubMed

    Frühholz, Sascha; Fehr, Thorsten; Herrmann, Manfred

    2009-10-01

    Contextual features during recognition of facial affect are assumed to modulate the temporal course of emotional face processing. Here, we simultaneously presented colored backgrounds during valence categorizations of facial expressions. Subjects incidentally learned to perceive negative, neutral and positive expressions within a specific colored context. Subsequently, subjects made fast valence judgments while presented with the same face-color-combinations as in the first run (congruent trials) or with different face-color-combinations (incongruent trials). Incongruent trials induced significantly increased response latencies and significantly decreased performance accuracy. Contextual incongruent information during processing of neutral expressions modulated the P1 and the early posterior negativity (EPN) both localized in occipito-temporal areas. Contextual congruent information during emotional face perception revealed an emotion-related modulation of the P1 for positive expressions and of the N170 and the EPN for negative expressions. Highest amplitude of the N170 was found for negative expressions in a negatively associated context and the N170 amplitude varied with the amount of overall negative information. Incongruent trials with negative expressions elicited a parietal negativity which was localized to superior parietal cortex and which most likely represents a posterior manifestation of the N450 as an indicator of conflict processing. A sustained activation of the late LPP over parietal cortex for all incongruent trials might reflect enhanced engagement with facial expression during task conditions of contextual interference. In conclusion, whereas early components seem to be sensitive to the emotional valence of facial expression in specific contexts, late components seem to subserve interference resolution during emotional face processing.

  6. Task relevance of emotional information affects anxiety-linked attention bias in visual search.

    PubMed

    Dodd, Helen F; Vogt, Julia; Turkileri, Nilgun; Notebaert, Lies

    2017-01-01

    Task relevance affects emotional attention in healthy individuals. Here, we investigate whether the association between anxiety and attention bias is affected by the task relevance of emotion during an attention task. Participants completed two visual search tasks. In the emotion-irrelevant task, participants were asked to indicate whether a discrepant face in a crowd of neutral, middle-aged faces was old or young. Irrelevant to the task, target faces displayed angry, happy, or neutral expressions. In the emotion-relevant task, participants were asked to indicate whether a discrepant face in a crowd of middle-aged neutral faces was happy or angry (target faces also varied in age). Trait anxiety was not associated with attention in the emotion-relevant task. However, in the emotion-irrelevant task, trait anxiety was associated with a bias for angry over happy faces. These findings demonstrate that the task relevance of emotional information affects conclusions about the presence of an anxiety-linked attention bias. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Translation and articulation in biological motion perception.

    PubMed

    Masselink, Jana; Lappe, Markus

    2015-08-01

    Recent models of biological motion processing focus on the articulational aspect of human walking investigated by point-light figures walking in place. However, in real human walking, the change in the position of the limbs relative to each other (referred to as articulation) results in a change of body location in space over time (referred to as translation). In order to examine the role of this translational component in the perception of biological motion, we designed three psychophysical experiments of facing (leftward/rightward) and articulation discrimination (forward/backward and leftward/rightward) of a point-light walker viewed from the side, varying translation direction (relative to articulation direction), the amount of local image motion, and trial duration. In a further set of forward/backward and leftward/rightward articulation tasks, we additionally tested the influence of translational speed, including catch trials without articulation. We found a perceptual bias in translation direction in all three discrimination tasks. In the case of facing discrimination the bias was limited to short stimulus presentation. Our results suggest an interaction of articulation analysis with the processing of translational motion, leading to best articulation discrimination when translational direction and speed match articulation. Moreover, we conclude that the global motion of the center-of-mass of the dot pattern is more relevant to processing of translation than the local motion of the dots. Our findings highlight that translation is a relevant cue that should be integrated in models of human motion detection.

  8. Covert face recognition in congenital prosopagnosia: a group study.

    PubMed

    Rivolta, Davide; Palermo, Romina; Schmalzl, Laura; Coltheart, Max

    2012-03-01

    Even though people with congenital prosopagnosia (CP) never develop a normal ability to "overtly" recognize faces, some individuals show indices of "covert" (or implicit) face recognition. The aim of this study was to demonstrate covert face recognition in CP when participants could not overtly recognize the faces. Eleven people with CP completed three tasks assessing their overt face recognition ability, and three tasks assessing their "covert" face recognition: a Forced choice familiarity task, a Forced choice cued task, and a Priming task. Evidence of covert recognition was observed with the Forced choice familiarity task, but not the Priming task. In addition, we propose that the Forced choice cued task does not measure covert processing as such, but instead "provoked-overt" recognition. Our study clearly shows that people with CP demonstrate covert recognition for faces that they cannot overtly recognize, and that behavioural tasks vary in their sensitivity to detect covert recognition in CP. Copyright © 2011 Elsevier Srl. All rights reserved.

  9. Prior probability and feature predictability interactively bias perceptual decisions

    PubMed Central

    Dunovan, Kyle E.; Tremel, Joshua J.; Wheeler, Mark E.

    2014-01-01

    Anticipating a forthcoming sensory experience facilitates perception for expected stimuli but also hinders perception for less likely alternatives. Recent neuroimaging studies suggest that expectation biases arise from feature-level predictions that enhance early sensory representations and facilitate evidence accumulation for contextually probable stimuli while suppressing alternatives. Reasonably then, the extent to which prior knowledge biases subsequent sensory processing should depend on the precision of expectations at the feature level as well as the degree to which expected features match those of an observed stimulus. In the present study we investigated how these two sources of uncertainty modulated pre- and post-stimulus bias mechanisms in the drift-diffusion model during a probabilistic face/house discrimination task. We tested several plausible models of choice bias, concluding that predictive cues led to a bias in both the starting-point and rate of evidence accumulation favoring the more probable stimulus category. We further tested the hypotheses that prior bias in the starting-point was conditional on the feature-level uncertainty of category expectations and that dynamic bias in the drift-rate was modulated by the match between expected and observed stimulus features. Starting-point estimates suggested that subjects formed a constant prior bias in favor of the face category (which exhibits less feature-level variability) that was strengthened or weakened by trial-wise predictive cues. Furthermore, we found that the gain on face/house evidence was increased for stimuli with less ambiguous features and that this relationship was enhanced by valid category expectations. These findings offer new evidence that bridges psychological models of decision-making with recent predictive coding theories of perception. PMID:24978303
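    The two bias mechanisms contrasted in this abstract (a shift in the starting point versus a change in the rate of evidence accumulation) can be illustrated with a simple forward simulation of the drift-diffusion process. The sketch below is a minimal illustration, not the authors' hierarchical fitting procedure; the parameter values, the simulate_ddm_trial helper, and the mapping of the upper boundary to the "face" response are hypothetical choices for demonstration only.

      import numpy as np

      def simulate_ddm_trial(drift, start_bias=0.0, threshold=1.0, noise=1.0,
                             dt=0.001, max_t=3.0, rng=None):
          # Evidence starts at `start_bias` (0 = unbiased; positive values lean toward
          # the upper, here 'face', boundary) and accumulates with rate `drift` plus
          # Gaussian noise until a boundary is reached or time runs out.
          rng = rng or np.random.default_rng()
          x, t = start_bias, 0.0
          while abs(x) < threshold and t < max_t:
              x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
              t += dt
          return ("face" if x > 0 else "house"), t

      rng = np.random.default_rng(0)
      # A predictive 'face' cue is modeled here as both a starting-point shift and a
      # steeper drift toward the face boundary (hypothetical parameter values).
      cued = [simulate_ddm_trial(drift=1.5, start_bias=0.2, rng=rng) for _ in range(500)]
      neutral = [simulate_ddm_trial(drift=1.0, start_bias=0.0, rng=rng) for _ in range(500)]
      for label, trials in (("face-cued", cued), ("neutral", neutral)):
          p_face = np.mean([choice == "face" for choice, _ in trials])
          mean_rt = np.mean([rt for _, rt in trials])
          print(f"{label}: P(face) = {p_face:.2f}, mean RT = {mean_rt:.2f} s")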

  10. Perceptual Decision-Making as Probabilistic Inference by Neural Sampling.

    PubMed

    Haefner, Ralf M; Berkes, Pietro; Fiser, József

    2016-05-04

    We address two main challenges facing systems neuroscience today: understanding the nature and function of cortical feedback between sensory areas and of correlated variability. Starting from the old idea of perception as probabilistic inference, we show how to use knowledge of the psychophysical task to make testable predictions for the influence of feedback signals on early sensory representations. Applying our framework to a two-alternative forced choice task paradigm, we can explain multiple empirical findings that have been hard to account for by the traditional feedforward model of sensory processing, including the task dependence of neural response correlations and the diverging time courses of choice probabilities and psychophysical kernels. Our model makes new predictions and characterizes a component of correlated variability that represents task-related information rather than performance-degrading noise. It demonstrates a normative way to integrate sensory and cognitive components into physiologically testable models of perceptual decision-making. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. EEG based topography analysis in string recognition task

    NASA Astrophysics Data System (ADS)

    Ma, Xiaofei; Huang, Xiaolin; Shen, Yuxiaotong; Qin, Zike; Ge, Yun; Chen, Ying; Ning, Xinbao

    2017-03-01

    Visual perception and recognition is a complex process, during which different parts of the brain are involved depending on the specific modality of the vision target, e.g. face, character, or word. In this study, brain activities in a string recognition task compared with an idle control state are analyzed through topographies based on multiple measurements, i.e. sample entropy, symbolic sample entropy and normalized rhythm power, extracted from simultaneously collected scalp EEG. Our analyses show that, for most subjects, both symbolic sample entropy and normalized gamma power in the string recognition task are significantly higher than those in the idle state, especially at locations P4, O2, T6 and C4. This implies that these regions are highly involved in the string recognition task. Since symbolic sample entropy measures complexity from the perspective of new information generation, and normalized rhythm power reveals the power distribution in the frequency domain, complementary information about the underlying dynamics can be provided through the two types of indices.
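    The two index families named above have standard definitions. The sketch below shows, under assumptions, how plain sample entropy and a normalized band power could be computed for a single EEG channel with NumPy/SciPy; it is an illustrative implementation with a placeholder signal, an assumed sampling rate, and an assumed gamma band, not the authors' analysis code, and it omits the symbolic variant.

      import numpy as np
      from scipy.signal import welch

      def sample_entropy(x, m=2, r=0.2):
          # Standard sample entropy, -ln(A/B): B counts pairs of length-m templates
          # within tolerance r * std(x) (Chebyshev distance); A does the same for
          # length m + 1. Simplified O(n^2) implementation.
          x = np.asarray(x, dtype=float)
          tol = r * x.std()
          n = len(x)

          def count_matches(length):
              templates = np.array([x[i:i + length] for i in range(n - length)])
              count = 0
              for i in range(len(templates) - 1):
                  dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                  count += np.sum(dist <= tol)
              return count

          B, A = count_matches(m), count_matches(m + 1)
          return -np.log(A / B) if A > 0 and B > 0 else np.inf

      def normalized_band_power(x, fs, band=(30.0, 45.0)):
          # Power in `band` divided by total power, from a Welch PSD estimate
          # (a stand-in for a normalized gamma rhythm power index).
          freqs, psd = welch(x, fs=fs, nperseg=min(len(x), 2 * int(fs)))
          in_band = (freqs >= band[0]) & (freqs <= band[1])
          return psd[in_band].sum() / psd.sum()

      fs = 250.0
      eeg = np.random.default_rng(1).standard_normal(int(4 * fs))  # placeholder channel
      print(sample_entropy(eeg), normalized_band_power(eeg, fs))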

  12. Dissociating 'what' and 'how' in visual form agnosia: a computational investigation.

    PubMed

    Vecera, S P

    2002-01-01

    Patients with visual form agnosia exhibit a profound impairment in shape perception (what an object is) coupled with intact visuomotor functions (how to act on an object), demonstrating a dissociation between visual perception and action. How can these patients act on objects that they cannot perceive? Although two explanations of this 'what-how' dissociation have been offered, each explanation has shortcomings. A 'pathway information' account of the 'what-how' dissociation is presented in this paper. This account hypothesizes that 'where' and 'how' tasks require less information than 'what' tasks, thereby allowing 'where/how' to remain relatively spared in the face of neurological damage. Simulations with a neural network model test the predictions of the pathway information account. Following damage to an input layer common to the 'what' and 'where/how' pathways, the model performs object identification more poorly than spatial localization. Thus, the model offers a parsimonious explanation of differential 'what-how' performance in visual form agnosia. The simulation results are discussed in terms of their implications for visual form agnosia and other neuropsychological syndromes.

  13. Increased heart rate after exercise facilitates the processing of fearful but not disgusted faces.

    PubMed

    Pezzulo, G; Iodice, P; Barca, L; Chausse, P; Monceau, S; Mermillod, M

    2018-01-10

    Embodied theories of emotion assume that emotional processing is grounded in bodily and affective processes. Accordingly, the perception of an emotion re-enacts congruent sensory and affective states; and conversely, bodily states congruent with a specific emotion facilitate emotional processing. This study tests whether the ability to process facial expressions (faces having a neutral expression, expressing fear, or disgust) can be influenced by making the participants' body state congruent with the expressed emotion (e.g., high heart rate in the case of faces expressing fear). We designed a task requiring participants to categorize pictures of male and female faces that either had a neutral expression (neutral), or expressed emotions whose linkage with high heart rate is strong (fear) or significantly weaker or absent (disgust). Critically, participants were tested in two conditions: with experimentally induced high heart rate (Exercise) and with normal heart rate (Normal). Participants processed fearful faces (but not disgusted or neutral faces) faster when they were in the Exercise condition than in the Normal condition. These results support the idea that an emotionally congruent body state facilitates the automatic processing of emotionally-charged stimuli and this effect is emotion-specific rather than due to generic factors such as arousal.

  14. Preliminary Evidence of "Other-Race Effect"-Like Behavior Induced by Cathodal-tDCS over the Right Occipital Cortex, in the Absence of Overall Effects on Face/Object Processing.

    PubMed

    Costantino, Andrea I; Titoni, Matilde; Bossi, Francesco; Premoli, Isabella; Nitsche, Michael A; Rivolta, Davide

    2017-01-01

    Neuromodulation techniques such as tDCS have provided important insight into the neurophysiological mechanisms that mediate cognition. Although anodal tDCS (a-tDCS) often enhances cognitive skills, the role of cathodal tDCS (c-tDCS) in visual cognition is largely unexplored and inconclusive. Here, in a single-blind, sham-controlled study, we investigated the offline effects of 1.5 mA c-tDCS over the right occipital cortex of 86 participants on four tasks assessing perception and memory of both faces and objects. Results demonstrated that c-tDCS did not affect overall performance on the four tasks. However, a post-hoc exploratory analysis of participants' race (Caucasian vs. non-Caucasian) showed a "face-specific" performance decrease (≈10%) in non-Caucasian participants only. This preliminary evidence suggests that c-tDCS can induce "other-race effect (ORE)-like" behavior in non-Caucasian participants who did not show any ORE before stimulation (or after sham stimulation). Our results add relevant information about the breadth of cognitive processes and visual stimuli that can be modulated by c-tDCS and about the design of effective neuromodulation protocols, and they have important implications for the potential neurophysiological bases of the ORE.

  15. Cultural immersion alters emotion perception: Neurophysiological evidence from Chinese immigrants to Canada.

    PubMed

    Liu, Pan; Rigoulot, Simon; Pell, Marc D

    2017-12-01

    To explore how cultural immersion modulates emotion processing, this study examined how Chinese immigrants to Canada process multisensory emotional expressions, which were compared to existing data from two groups, Chinese and North Americans. Stroop and Oddball paradigms were employed to examine different stages of emotion processing. The Stroop task presented face-voice pairs expressing congruent/incongruent emotions and participants actively judged the emotion of one modality while ignoring the other. A significant effect of cultural immersion was observed in the immigrants' behavioral performance, which showed greater interference from to-be-ignored faces, comparable with what was observed in North Americans. However, this effect was absent in their N400 data, which retained the same pattern as the Chinese. In the Oddball task, where immigrants passively viewed facial expressions with/without simultaneous vocal emotions, they exhibited a larger visual MMN for faces accompanied by voices, again mirroring patterns observed in Chinese. Correlation analyses indicated that the immigrants' living duration in Canada was associated with neural patterns (N400 and visual mismatch negativity) more closely resembling North Americans. Our data suggest that in multisensory emotion processing, adapting to a new culture first leads to behavioral accommodation followed by alterations in brain activities, providing new evidence of humans' neurocognitive plasticity in communication.

  16. A single administration of cortisol acutely reduces preconscious attention for fear in anxious young men.

    PubMed

    Putman, Peter; Hermans, Erno J; Koppeschaar, Hans; van Schijndel, Alexandra; van Honk, Jack

    2007-08-01

    Chronically elevated HPA activity has often been associated with fear and anxiety, but there is evidence that single administrations of glucocorticoids may acutely reduce fear. Moreover, peri-traumatic cortisol elevation may protect against development of post-traumatic stress disorder. Hypervigilant processing of threat information plays a role in anxiety disorders and although relations with HPA functioning have been established, causality of these relations remains unclear. In the present study, self-reported anxiety and response time patterns on a masked emotional Stroop task with fearful faces were measured in 20 healthy young men after double-blind, placebo-controlled oral administration of 40 mg cortisol. The masked fearful Stroop task measures vocal color-naming response latencies for pictures of neutral and fearful faces presented below the threshold for conscious perception. Results showed increased response times on trials for fearful compared to neutral faces after placebo, but this emotional Stroop effect was acutely abolished by cortisol administration. This effect was most pronounced in subjects with heightened anxiety levels. This is the first evidence showing that exogenous cortisol acutely reduces anxiety-driven selective attention to threat. These results extend earlier findings of acute fear reduction after glucocorticoid administration. This suggests interactions of HPA functioning and vigilant attention in the pathogenesis of anxiety disorders. Possible neuroendocrine mechanisms of action are discussed.

  17. Grounding context in face processing: color, emotion, and gender.

    PubMed

    Gil, Sandrine; Le Bigot, Ludovic

    2015-01-01

    In recent years, researchers have become interested in the way that the affective quality of contextual information transfers to a perceived target. We therefore examined the effect of a red (vs. green, mixed red/green, and achromatic) background - known to be valenced - on the processing of stimuli that play a key role in human interactions, namely facial expressions. We also examined whether the valenced-color effect can be modulated by gender, which is also known to be valenced. Female and male adult participants performed a categorization task of facial expressions of emotion in which the faces of female and male posers expressing two ambiguous emotions (i.e., neutral and surprise) were presented against the four different colored backgrounds. Additionally, this task was completed by collecting subjective ratings for each colored background in the form of five semantic differential scales corresponding to both discrete and dimensional perspectives of emotion. We found that the red background resulted in more negative face perception than the green background, whether the poser was female or male. However, whereas this valenced-color effect was the only effect for female posers, for male posers, the effect was modulated by both the nature of the ambiguous emotion and the decoder's gender. Overall, our findings offer evidence that color and gender have a common valence-based dimension.

  18. Understanding face perception by means of human electrophysiology.

    PubMed

    Rossion, Bruno

    2014-06-01

    Electrophysiological recordings on the human scalp provide a wealth of information about the temporal dynamics and nature of face perception at a global level of brain organization. The time window between 100 and 200 ms witnesses the transition between low-level and high-level vision, an N170 component correlating with conscious interpretation of a visual stimulus as a face. This face representation is rapidly refined as information accumulates during this time window, allowing the individualization of faces. To improve the sensitivity and objectivity of face perception measures, it is increasingly important to go beyond transient visual stimulation by recording electrophysiological responses at periodic frequency rates. This approach has recently provided face perception thresholds and the first objective signature of integration of facial parts in the human brain. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Dissociation between the behavioural and electrophysiological effects of the face and body composite illusions.

    PubMed

    Soria Bauser, Denise A; Schriewer, Elisabeth; Suchan, Boris

    2015-08-01

    Several studies have reported similarities between perceptual processes underlying face and body perception, particularly emphasizing the importance of configural processes. Differences between the perception of faces and the perception of bodies were observed by means of a manipulation targeting a specific subtype of configural processing: the composite illusion. The composite face illusion describes the fact that two identical top halves of a face are perceived as being different if they are presented with different bottom parts. This effect disappears if both halves are laterally shifted. Crucially, the effect of misalignment is not observed for bodies. This study aimed to further explore differences in the time course of face and body perception by using the composite effect. The present results replicated behavioural effects illustrating that misalignment affects the perception of faces but not bodies. Thus, face but not body perception relies on holistic processing. However, differences in the time course of the processing of both stimulus categories emerged at the N170 and P200. The pattern of the behavioural data seemed to be related to the P200. Thus, the present data indicate that holistic processes associated with the effect of misalignment might occur 200 ms after stimulus onset. © 2014 The British Psychological Society.

  20. The Development of Face Perception in Infancy: Intersensory Interference and Unimodal Visual Facilitation

    PubMed Central

    Bahrick, Lorraine E.; Lickliter, Robert; Castellanos, Irina

    2014-01-01

    Although research has demonstrated impressive face perception skills of young infants, little attention has focused on conditions that enhance versus impair infant face perception. The present studies tested the prediction, generated from the Intersensory Redundancy Hypothesis (IRH), that face discrimination, which relies on detection of visual featural information, would be impaired in the context of intersensory redundancy provided by audiovisual speech, and enhanced in the absence of intersensory redundancy (unimodal visual and asynchronous audiovisual speech) in early development. Later in development, following improvements in attention, faces should be discriminated in both redundant audiovisual and nonredundant stimulation. Results supported these predictions. Two-month-old infants discriminated a novel face in unimodal visual and asynchronous audiovisual speech but not in synchronous audiovisual speech. By 3 months, face discrimination was evident even during synchronous audiovisual speech. These findings indicate that infant face perception is enhanced and emerges developmentally earlier following unimodal visual than synchronous audiovisual exposure and that intersensory redundancy generated by naturalistic audiovisual speech can interfere with face processing. PMID:23244407

  1. Task-related functional connectivity in autism spectrum conditions: an EEG study using wavelet transform coherence

    PubMed Central

    2013-01-01

    Background: Autism Spectrum Conditions (ASC) are a set of pervasive neurodevelopmental conditions characterized by a wide range of lifelong signs and symptoms. Recent explanatory models of autism propose abnormal neural connectivity and are supported by studies showing decreased interhemispheric coherence in individuals with ASC. The first aim of this study was to test the hypothesis of reduced interhemispheric coherence in ASC, and secondly to investigate specific effects of task performance on interhemispheric coherence in ASC. Methods: We analyzed electroencephalography (EEG) data from 15 participants with ASC and 15 typical controls, using Wavelet Transform Coherence (WTC) to calculate interhemispheric coherence during face and chair matching tasks, for EEG frequencies from 5 to 40 Hz and during the first 400 ms post-stimulus onset. Results: Results demonstrate a reduction of interhemispheric coherence in the ASC group, relative to the control group, in both tasks and for all electrode pairs studied. For both tasks, group differences were generally observed after around 150 ms and at frequencies lower than 13 Hz. Regarding within-group task comparisons, while the control group presented differences in interhemispheric coherence between faces and chairs tasks at various electrode pairs (FT7-FT8, TP7-TP8, P7-P8), such differences were only seen for one electrode pair in the ASC group (T7-T8). No significant differences in EEG power spectra were observed between groups. Conclusions: Interhemispheric coherence is reduced in people with ASC, in a time and frequency specific manner, during visual perception and categorization of both social and inanimate stimuli and this reduction in coherence is widely dispersed across the brain. Results of within-group task comparisons may reflect an impairment in task differentiation in people with ASC relative to typically developing individuals. Overall, the results of this research support the value of WTC in examining the time-frequency microstructure of task-related interhemispheric EEG coherence in people with ASC. PMID:23311570
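    Wavelet transform coherence of the kind used in this study quantifies, per frequency and time point, how consistently the amplitude and phase of two channels covary. The sketch below is a heavily simplified Morlet-based version that smooths only over time (published WTC implementations also smooth across scale and assess significance against surrogate data); the channel labels, sampling rate, and simulated signals are placeholder assumptions, not the study's pipeline.

      import numpy as np

      def morlet(scale, fs, w0=6.0):
          # Complex Morlet wavelet sampled at the EEG sampling rate.
          t = np.arange(-4 * scale, 4 * scale, 1.0 / fs)
          return np.exp(1j * w0 * t / scale - 0.5 * (t / scale) ** 2)

      def cwt(x, scales, fs):
          return np.array([np.convolve(x, morlet(s, fs), mode="same") for s in scales])

      def smooth(z, width):
          # Moving-average smoothing along time, applied to real and imaginary parts.
          kernel = np.ones(width) / width
          run = lambda rows: np.apply_along_axis(
              lambda row: np.convolve(row, kernel, mode="same"), 1, rows)
          return run(z.real) + 1j * run(z.imag)

      def wavelet_coherence(x, y, freqs, fs, smooth_win=0.25, w0=6.0):
          # Magnitude-squared wavelet coherence between two channels; smoothing of the
          # (cross-)spectra is what keeps the ratio below 1.
          scales = w0 / (2 * np.pi * np.asarray(freqs, dtype=float))
          Wx, Wy = cwt(x, scales, fs), cwt(y, scales, fs)
          width = max(1, int(smooth_win * fs))
          cross = smooth(Wx * np.conj(Wy), width)
          return np.abs(cross) ** 2 / (smooth(np.abs(Wx) ** 2, width).real *
                                       smooth(np.abs(Wy) ** 2, width).real)

      fs = 256.0
      t = np.arange(0.0, 2.0, 1.0 / fs)
      rng = np.random.default_rng(2)
      left = np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)   # e.g. T7
      right = np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)  # e.g. T8
      freqs = np.arange(5, 41)  # 5-40 Hz, the range analyzed in the study
      wtc = wavelet_coherence(left, right, freqs, fs)
      print(wtc.shape)  # (n_frequencies, n_samples)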

  2. Classification of autism spectrum disorder using supervised learning of brain connectivity measures extracted from synchrostates

    NASA Astrophysics Data System (ADS)

    Jamal, Wasifa; Das, Saptarshi; Oprescu, Ioana-Anastasia; Maharatna, Koushik; Apicella, Fabio; Sicca, Federico

    2014-08-01

    Objective. The paper investigates the presence of autism using functional brain connectivity measures derived from the electroencephalogram (EEG) of children during face perception tasks. Approach. Phase synchronized patterns from 128-channel EEG signals are obtained for typical children and children with autism spectrum disorder (ASD). The phase synchronized states or synchrostates temporally switch amongst themselves as an underlying process for the completion of a particular cognitive task. We used 12 subjects in each group (ASD and typical) for analyzing their EEG while processing fearful, happy and neutral faces. The minimally and maximally occurring synchrostates for each subject are chosen for extraction of brain connectivity features, which are used for classification between these two groups of subjects. Among different supervised learning techniques, we explored discriminant analysis and support vector machines, both with polynomial kernels, for the classification task. Main results. Leave-one-out cross-validation of the classification algorithm gives 94.7% accuracy as the best performance, with corresponding sensitivity and specificity values of 85.7% and 100%, respectively. Significance. The proposed method gives high classification accuracies and outperforms other contemporary research results. The effectiveness of the proposed method for classification of autistic and typical children suggests the possibility of using it on a larger population to validate it for clinical practice.
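    The evaluation scheme described above (a polynomial-kernel support vector machine scored with leave-one-out cross-validation, reported as accuracy, sensitivity, and specificity) can be reproduced in outline with scikit-learn. The sketch below substitutes random placeholder feature vectors for the synchrostate connectivity measures; the group sizes, kernel degree, and regularization constant are assumptions, not the authors' settings.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import LeaveOneOut, cross_val_predict
      from sklearn.metrics import confusion_matrix

      rng = np.random.default_rng(3)
      # Placeholder feature matrix: 24 subjects (12 ASD, 12 typical) with 10
      # connectivity features each, standing in for synchrostate-derived measures.
      X = rng.standard_normal((24, 10))
      y = np.array([1] * 12 + [0] * 12)  # 1 = ASD, 0 = typical

      clf = SVC(kernel="poly", degree=2, C=1.0)
      pred = cross_val_predict(clf, X, y, cv=LeaveOneOut())

      tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
      accuracy = (tp + tn) / len(y)
      sensitivity = tp / (tp + fn)   # proportion of ASD subjects correctly detected
      specificity = tn / (tn + fp)   # proportion of typical subjects correctly detected
      print(f"accuracy={accuracy:.2f}, sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")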

  3. Intact Rapid Facial Mimicry as well as Generally Reduced Mimic Responses in Stable Schizophrenia Patients

    PubMed Central

    Chechko, Natalya; Pagel, Alena; Otte, Ellen; Koch, Iring; Habel, Ute

    2016-01-01

    Spontaneous emotional expressions (rapid facial mimicry) perform both emotional and social functions. In the current study, we sought to test whether there were deficits in automatic mimic responses to emotional facial expressions in 15 patients with stable schizophrenia compared to 15 controls. In a perception-action interference paradigm (the Simon task; first experiment), and in the context of a dual-task paradigm (second experiment), the task-relevant stimulus feature was the gender of a face, which, however, displayed a smiling or frowning expression (task-irrelevant stimulus feature). We measured electromyographical activity in the corrugator supercilii and zygomaticus major muscle regions in response to either compatible or incompatible stimuli (i.e., when the required response did or did not correspond to the depicted facial expression). The compatibility effect, based on interactions between the implicit processing of a task-irrelevant emotional facial expression and the conscious production of an emotional facial expression, did not differ between the groups. In stable patients, in spite of a generally reduced mimic reaction, we observed an intact capacity to respond spontaneously to facial emotional stimuli. PMID:27303335

  4. Developmental plateau in visual object processing from adolescence to adulthood in autism

    PubMed Central

    O'Hearn, Kirsten; Tanaka, James; Lynn, Andrew; Fedor, Jennifer; Minshew, Nancy; Luna, Beatriz

    2016-01-01

    A lack of typical age-related improvement from adolescence to adulthood contributes to face recognition deficits in adults with autism on the Cambridge Face Memory Test (CFMT). The current studies examine if this atypical developmental trajectory generalizes to other tasks and objects, including parts of the face. The CFMT tests recognition of whole faces, often with a substantial delay. The current studies used the immediate memory (IM) task and the parts-whole face task from the Let's Face It! battery, which examines whole faces, face parts, and cars, without a delay between memorization and test trials. In the IM task, participants memorize a face or car. Immediately after the target disappears, participants identify the target from two similar distractors. In the part-whole task, participants memorize a whole face. Immediately after the face disappears, participants identify the target from a distractor with different eyes or mouth, either as a face part or a whole face. Results indicate that recognition deficits in autism become more robust by adulthood, consistent with previous work, and also become more general, including cars. In the IM task, deficits in autism were specific to faces in childhood, but included cars by adulthood. In the part-whole task, deficits in autism became more robust by adulthood, including both eyes and mouths as parts and in whole faces. Across tasks, the deficit in autism increased between adolescence and adulthood, reflecting a lack of typical improvement, leading to deficits with non-face stimuli and on a task without a memory delay. These results suggest that brain maturation continues to be affected into adulthood in autism, and that the transition from adolescence to adulthood is a vulnerable stage for those with autism. PMID:25019999

  5. Self-face Captures, Holds, and Biases Attention.

    PubMed

    Wójcik, Michał J; Nowicka, Maria M; Kotlewska, Ilona; Nowicka, Anna

    2017-01-01

    The implicit self-recognition process may already take place in the pre-attentive stages of perception. After a salient stimulus has captured attention, it is passed on to the attentive stage where it can affect decision making and responding. Numerous studies show that the presence of self-referential information affects almost every cognitive level. These effects may share a common and fundamental basis in an attentional mechanism, conceptualized as attentional bias: the exaggerated deployment of attentional resources to a salient stimulus. A gold standard in attentional bias research is the dot-probe paradigm. In this task, a prominent stimulus (cue) and a neutral stimulus are presented in different spatial locations, followed by the presentation of a target. In the current study we aimed at investigating whether the self-face captures, holds and biases attention when presented as a task-irrelevant stimulus. In two dot-probe experiments coupled with the event-related potential (ERP) technique we analyzed the following relevant ERP components: the N2pc and the SPCN, which reflect attentional shifts and the maintenance of attention, respectively. An inter-stimulus interval separating face-cues and probes (800 ms) was introduced only in the first experiment. In line with our predictions, in Experiment 1 the self-face elicited the N2pc and the SPCN component. In Experiment 2, in addition to the N2pc, an attentional bias was observed. Our results indicate that unintentional self-face processing disables the top-down control setting to filter out distractors, thus leading to the engagement of attentional resources and visual short-term memory.
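    For reference, the N2pc component named above is conventionally quantified as the contralateral-minus-ipsilateral mean amplitude over posterior electrodes (often PO7/PO8) in roughly the 200-300 ms window after cue onset. The sketch below shows one way that computation could look on already-epoched data; the data layout, electrode pair, time window, and the n2pc helper are illustrative assumptions rather than the authors' pipeline.

      import numpy as np

      def n2pc(epochs, cue_side, times, window=(0.20, 0.30)):
          # Contralateral-minus-ipsilateral mean amplitude over PO7/PO8 within the
          # given post-cue window. `epochs` maps electrode name -> (n_trials, n_times)
          # arrays in microvolts; `cue_side` holds 'left'/'right' per trial.
          mask = (times >= window[0]) & (times <= window[1])
          left_cue = cue_side == "left"
          contra = np.concatenate([epochs["PO8"][left_cue], epochs["PO7"][~left_cue]])
          ipsi = np.concatenate([epochs["PO7"][left_cue], epochs["PO8"][~left_cue]])
          return contra[:, mask].mean() - ipsi[:, mask].mean()

      rng = np.random.default_rng(6)
      times = np.linspace(-0.1, 0.6, 180)
      epochs = {ch: rng.standard_normal((120, times.size)) for ch in ("PO7", "PO8")}
      cue_side = rng.choice(["left", "right"], size=120)
      print(f"N2pc amplitude: {n2pc(epochs, cue_side, times):.2f} uV")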

  6. Cross-Category Adaptation: Objects Produce Gender Adaptation in the Perception of Faces

    PubMed Central

    Javadi, Amir Homayoun; Wee, Natalie

    2012-01-01

    Adaptation aftereffects have been found for low-level visual features such as colour, motion and shape perception, as well as higher-level features such as gender, race and identity in domains such as faces and biological motion. It is not yet clear if adaptation effects in humans extend beyond this set of higher order features. The aim of this study was to investigate whether objects highly associated with one gender, e.g., high heels for females or electric shavers for males, can modulate gender perception of a face. In two separate experiments, we adapted subjects to a series of objects highly associated with one gender and subsequently asked participants to judge the gender of an ambiguous face. Results showed that participants are more likely to perceive an ambiguous face as male after being exposed to objects highly associated with females, and vice versa. A gender adaptation aftereffect was obtained despite the adaptor and test stimuli being from different global categories (objects and faces, respectively). These findings show that our perception of gender from faces is highly affected by our environment and recent experience. This suggests two possible mechanisms: (a) that perception of the gender associated with an object shares at least some brain areas with those responsible for gender perception of faces and (b) adaptation to gender, which is a high-level concept, can modulate brain areas that are involved in facial gender perception through top-down processes. PMID:23049942

  7. Editorial: Challenges for the usability of AR and VR for clinical neurosurgical procedures.

    PubMed

    de Ribaupierre, Sandrine; Eagleson, Roy

    2017-10-01

    There are a number of challenges that must be faced when trying to develop AR- and VR-based neurosurgical simulators, surgical navigation platforms, and "Smart OR" systems. Trying to simulate an operating room environment and surgical tasks in Augmented and Virtual Reality is a challenge many are attempting to solve in order to train surgeons or help them operate. What are some of the needs of the surgeon, and what are the challenges encountered (human-computer interface, perception, workflow, etc.)? We discuss these tradeoffs and conclude with critical remarks.

  8. Holistic processing and reliance on global viewing strategies in older adults' face perception.

    PubMed

    Meinhardt-Injac, Bozana; Persike, Malte; Meinhardt, Günter

    2014-09-01

    There is increasing evidence that face recognition might be impaired in older adults, but it is unclear whether the impairment is truly perceptual and face-specific. In order to address this question, we compared performance in same/different matching tasks with face and non-face objects (watches) among young (mean age 23.7) and older adults (mean age 70.4) using a context congruency paradigm (Meinhardt-Injac, Persike, & Meinhardt, 2010, 2011a). Older adults were less accurate than young adults with both object classes, while face matching was notably impaired. Effects of context congruency and inversion, measured as the hallmarks of holistic processing, were equally strong in both age groups, and were found only for faces, but not for watches. The face-specific decline in older adults revealed deficits in handling internal facial features, while young adults matched external and internal features equally well. Comparison with non-face stimuli showed that this decline was face-specific and did not concern processing of object features in general. Taken together, the results indicate no age-related decline in the capabilities to process faces holistically. Rather, strong holistic effects, combined with a loss of precision in handling internal features, indicate that older adults rely on global viewing strategies for faces. At the same time, access to the exact properties of inner face details becomes restricted. Copyright © 2014. Published by Elsevier B.V.

  9. Associative (prosop)agnosia without (apparent) perceptual deficits: a case-study.

    PubMed

    Anaki, David; Kaufman, Yakir; Freedman, Morris; Moscovitch, Morris

    2007-04-09

    In associative agnosia, early perceptual processing of faces or objects is considered to be intact, while the ability to access stored semantic information about the individual face or object is impaired. Recent claims, however, have asserted that associative agnosia is also characterized by deficits at the perceptual level, which are too subtle to be detected by current neuropsychological tests. Thus, the impaired identification of famous faces or common objects in associative agnosia stems from difficulties in extracting the minute perceptual details required to identify a face or an object. In the present study, we report the case of patient DBO, who has a left occipital infarct and shows impaired object and famous face recognition. Despite his disability, he exhibits a face inversion effect, and is able to select a famous face from among non-famous distractors. In addition, his performance is normal in immediate and delayed recognition memory for faces whose external features were deleted. His deficits in face recognition are apparent only when he is required to name a famous face, or select two faces from among a triad of famous figures based on their semantic relationships (a task which does not require access to names). The nature of his deficits in object perception and recognition is similar to his impairments in the face domain. This pattern of behavior supports the notion that apperceptive and associative agnosia reflect distinct and dissociated deficits, which result from damage to different stages of the face and object recognition process.

  10. Perception of emotion in facial stimuli: The interaction of ADRA2A and COMT genotypes, and sex.

    PubMed

    Tamm, Gerly; Kreegipuu, Kairi; Harro, Jaanus

    2016-01-04

    Emotional facial stimuli are important social signals that must be perceived and recognized in order to make appropriate decisions and responses in everyday communication. The ability to voluntarily guide attention to perceive and recognize emotions, and to react to them, varies widely across individuals and has a strong genetic component (Friedman et al., 2008). Two key genetic variants of the catecholamine system that have been related to emotion perception and attention are the catechol-O-methyl transferase genetic variant (COMT Val158Met) and the α2A-receptor gene promoter polymorphism (ADRA2A C-1291G), respectively. So far, the interaction of the two with sex in emotion perception has not been studied. A multilevel modeling approach was applied to study how COMT Val158Met, ADRA2A C-1291G and sex are associated with measures of emotion perception in a large sample of young adults. Participants (n=506) completed emotion recognition and behavioral emotion detection tasks. It was found that COMT Val158Met genotype in combination with ADRA2A C-1291G and sex predicts emotion detection, and perception of valence and arousal. In simple visual detection, the ADRA2A C-1291G G-allele leads to slower detection of a highly arousing face (scheming), which is modulated by each additional COMT Val158Met Met-allele and male sex predicting faster responses. The combination of G-allele, Met-allele and male sex also predicts higher perceived negativity in sad faces. No effects of C-1291G, Val158Met, and sex were found on verbal emotion recognition. Applying the findings to study the interplay between catechol-O-methyl transferase activity and α2A-receptors in emotion perception disorders (such as ADHD, autism and schizophrenia) in men and women would be the next step towards understanding individual differences in emotion perception. Copyright © 2015 Elsevier Inc. All rights reserved.
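    The multilevel (mixed-effects) approach referred to above, with repeated trials nested within participants and genotype and sex as between-subject predictors, can be sketched with statsmodels. The example below uses simulated placeholder data; the variable names, coding of the genotypes, and the random-intercept-only structure are assumptions for illustration, not the authors' model specification.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(4)
      n_subjects, n_trials = 60, 20
      df = pd.DataFrame({
          "subject": np.repeat(np.arange(n_subjects), n_trials),
          "rt": rng.normal(600, 80, n_subjects * n_trials),                 # detection RT (ms)
          "comt_met": np.repeat(rng.integers(0, 3, n_subjects), n_trials),  # Met-allele count
          "adra2a_g": np.repeat(rng.integers(0, 2, n_subjects), n_trials),  # G-allele carrier
          "male": np.repeat(rng.integers(0, 2, n_subjects), n_trials),      # 1 = male
      })

      # Random intercept per subject; fixed effects for genotype, sex, and their interactions.
      model = smf.mixedlm("rt ~ comt_met * adra2a_g * male", data=df, groups=df["subject"])
      print(model.fit().summary())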

  11. The Effects of Self-Esteem and Task Perception on Goal Setting, Efficacy, and Task Performance.

    ERIC Educational Resources Information Center

    Tang, Thomas Li-Ping; Reynolds, David Bryan

    This study examined the effects of self-esteem and task perception on goal setting, efficacy, and task performance in 52 recreational dart throwers who were members of two dart organizations. Task perception was manipulated by asking each dart thrower to compete against self, a difficult competitor, and an easy competitor on the same dart game.…

  12. Eye contact perception in the West and East: a cross-cultural study.

    PubMed

    Uono, Shota; Hietanen, Jari K

    2015-01-01

    This study investigated whether eye contact perception differs in people with different cultural backgrounds. Finnish (European) and Japanese (East Asian) participants were asked to determine whether Finnish and Japanese neutral faces with various gaze directions were looking at them. Further, participants rated the face stimuli for emotion and other affect-related dimensions. The results indicated that Finnish viewers had a smaller bias toward judging slightly averted gazes as directed at them when judging Finnish rather than Japanese faces, while the bias of Japanese viewers did not differ between faces from their own and other cultural backgrounds. This may be explained by Westerners experiencing more eye contact in their daily life leading to larger visual experience of gaze perception generally, and to more accurate perception of eye contact with people from their own cultural background particularly. The results also revealed cultural differences in the perception of emotion from neutral faces that could also contribute to the bias in eye contact perception.

  13. Short-term visual deprivation reduces interference effects of task-irrelevant facial expressions on affective prosody judgments

    PubMed Central

    Fengler, Ineke; Nava, Elena; Röder, Brigitte

    2015-01-01

    Several studies have suggested that neuroplasticity can be triggered by short-term visual deprivation in healthy adults. Specifically, these studies have provided evidence that visual deprivation reversibly affects basic perceptual abilities. The present study investigated the long-lasting effects of short-term visual deprivation on emotion perception. To this aim, we visually deprived a group of young healthy adults, age-matched with a group of non-deprived controls, for 3 h and tested them before and after visual deprivation (i.e., after 8 h on average and at 4-week follow-up) on an audio–visual (i.e., faces and voices) emotion discrimination task. To observe changes at the level of basic perceptual skills, we additionally employed a simple audio–visual (i.e., tone bursts and light flashes) discrimination task and two unimodal (one auditory and one visual) perceptual threshold measures. During the 3 h period, both groups performed a series of auditory tasks. To exclude the possibility that changes in emotion discrimination may emerge as a consequence of the exposure to auditory stimulation during the 3 h stay in the dark, we visually deprived an additional group of age-matched participants who concurrently performed tasks (i.e., tactile tasks) unrelated to the later-tested abilities. The two visually deprived groups showed enhanced affective prosodic discrimination abilities in the context of incongruent facial expressions following the period of visual deprivation; this effect was partially maintained until follow-up. By contrast, no changes were observed in affective facial expression discrimination and in the basic perception tasks in any group. These findings suggest that short-term visual deprivation per se triggers a reweighting of visual and auditory emotional cues, which appears to persist over longer durations. PMID:25954166

  14. Bilingual Infants Demonstrate Perceptual Flexibility in Phoneme Discrimination but Perceptual Constraint in Face Discrimination

    PubMed Central

    Singh, Leher; Loh, Darrell; Xiao, Naiqi G.

    2017-01-01

    Perceptual narrowing is a highly significant development associated with the first year of life. It conventionally refers to an orientation toward nativeness whereby infants' perceptual sensitivities begin to align with the phonetic properties of their native environment. Nativeness effects, such as perceptual narrowing, have been observed in several domains, most notably in the discrimination of other-race faces and the discrimination of non-native phonemes. Thus far, nativeness effects in face and speech perception have been theoretically linked, but have mostly been investigated independently. An important caveat to nativeness effects is that diversifying experiences, such as bilingualism or multiracial exposure, can lead to a reduction or postponement in attunement to the native environment. The present study was designed to investigate whether bilingualism influences nativeness effects in phonetic and face perception. Eleven-month-old monolingual and bilingual infants were tested on their abilities to discriminate native and non-native speech contrasts as well as own-race and other-race face contrasts. While monolingual infants demonstrated nativeness effects in face and speech perception, bilingual infants demonstrated nativeness effects in face perception but flexibility in speech perception. Results support domain-specific effects of bilingual experience on nativeness effects. PMID:28955278

  15. Effects of inverting contour and features on processing for static and dynamic face perception: an MEG study.

    PubMed

    Miki, Kensaku; Takeshima, Yasuyuki; Watanabe, Shoko; Honda, Yukiko; Kakigi, Ryusuke

    2011-04-06

    We investigated the effects of inverting facial contour (hair and chin) and features (eyes, nose and mouth) on processing for static and dynamic face perception using magnetoencephalography (MEG). We used apparent motion, in which the first stimulus (S1) was replaced by a second stimulus (S2) with no interstimulus interval and subjects perceived visual motion, and presented three conditions as follows: (1) U&U: Upright contour and Upright features, (2) U&I: Upright contour and Inverted features, and (3) I&I: Inverted contour and Inverted features. In static face perception (S1 onset), the peak latency of the fusiform area's activity, which was related to static face perception, was significantly longer for U&I and I&I than for U&U in the right hemisphere and for U&I than for U&U and I&I in the left. In dynamic face perception (S2 onset), the strength (moment) of the occipitotemporal area's activity, which was related to dynamic face perception, was significantly larger for I&I than for U&U and U&I in the right hemisphere, but not the left. These results can be summarized as follows: (1) in static face perception, the activity of the right fusiform area was more affected by the inversion of features while that of the left fusiform area was more affected by the disruption of the spatial relation between the contour and features, and (2) in dynamic face perception, the activity of the right occipitotemporal area was affected by the inversion of the facial contour. Copyright © 2011 Elsevier B.V. All rights reserved.

  16. Ward rounds, participants, roles and perceptions: literature review.

    PubMed

    Walton, Victoria; Hogden, Anne; Johnson, Julie; Greenfield, David

    2016-05-09

    Purpose - The purpose of this paper is to classify and describe the purpose of ward rounds, who attends each round and their role, and participants' perception of each other's role during the respective ward rounds. Design/methodology/approach - A literature review of face-to-face ward rounds in medical wards was conducted. Peer reviewed journals and government publications published between 2000 and 2014 were searched. Articles were classified according to the type of round described in the study. Purposes were identified using keywords in the description of why the round was carried out. Descriptions of tasks and interactions with team members defined participant roles. Findings - Eight round classifications were identified. The most common were the generalised ward; multidisciplinary; and consultant rounds. Multidisciplinary rounds were the most collaborative round. Medical officers were the most likely discipline to attend any round. There was limited reference to allied health clinicians and patient involvement on rounds. Perceptions attendees held of each other reiterated the need to continue to investigate teamwork. Practical implications - A collaborative approach to care planning can occur by ensuring clinicians and patients are aware of different ward round processes and their role in them. Originality/value - Analysis fulfils a gap in the literature by identifying and analysing the different ward rounds being undertaken in acute medical wards. It identifies the complexities in the long established routine hospital processes of the ward round.

  17. Baby schema in human and animal faces induces cuteness perception and gaze allocation in children.

    PubMed

    Borgi, Marta; Cogliati-Dezza, Irene; Brelsford, Victoria; Meints, Kerstin; Cirulli, Francesca

    2014-01-01

    The baby schema concept was originally proposed as a set of infantile traits with high appeal for humans, subsequently shown to elicit caretaking behavior and to affect cuteness perception and attentional processes. However, it is unclear whether the response to the baby schema may be extended to the human-animal bond context. Moreover, questions remain as to whether the cute response is constant and persistent or whether it changes with development. In the present study we parametrically manipulated the baby schema in images of humans, dogs, and cats. We analyzed responses of 3-6 year-old children, using both explicit (i.e., cuteness ratings) and implicit (i.e., eye gaze patterns) measures. By means of eye-tracking, we assessed children's preferential attention to images varying only for the degree of baby schema and explored participants' fixation patterns during a cuteness task. For comparative purposes, cuteness ratings were also obtained in a sample of adults. Overall our results show that the response to an infantile facial configuration emerges early during development. In children, the baby schema affects both cuteness perception and gaze allocation to infantile stimuli and to specific facial features, an effect not simply limited to human faces. In line with previous research, results confirm human positive appraisal toward animals and inform both educational and therapeutic interventions involving pets, helping to minimize risk factors (e.g., dog bites).

  18. Perceived Conventionality in Co-speech Gestures Involves the Fronto-Temporal Language Network.

    PubMed

    Wolf, Dhana; Rekittke, Linn-Marlen; Mittelberg, Irene; Klasen, Martin; Mathiak, Klaus

    2017-01-01

    Face-to-face communication is multimodal; it encompasses spoken words, facial expressions, gaze, and co-speech gestures. In contrast to linguistic symbols (e.g., spoken words or signs in sign language) relying on mostly explicit conventions, gestures vary in their degree of conventionality. Bodily signs may have a generally accepted or conventionalized meaning (e.g., a head shake) or less so (e.g., self-grooming). We hypothesized that subjective perception of conventionality in co-speech gestures relies on the classical language network, i.e., the left hemispheric inferior frontal gyrus (IFG, Broca's area) and the posterior superior temporal gyrus (pSTG, Wernicke's area), and studied 36 subjects watching video-recorded story retellings during a behavioral and a functional magnetic resonance imaging (fMRI) experiment. It is well documented that neural correlates of such naturalistic videos emerge as intersubject covariance (ISC) in fMRI even without an explicit model of the stimulus (model-free analysis). The subjects attended either to perceived conventionality or to a control condition (any hand movements or gesture-speech relations). Such tasks modulate ISC in contributing neural structures, and thus we studied ISC changes in response to task demands in language networks. Indeed, the conventionality task significantly increased covariance of the button press time series and neuronal synchronization in the left IFG relative to the other tasks. In the left IFG, synchronous activity was observed during the conventionality task only. In contrast, the left pSTG exhibited correlated activation patterns during all conditions with an increase in the conventionality task at the trend level only. Conceivably, the left IFG can be considered a core region for the processing of perceived conventionality in co-speech gestures, similar to spoken language. In general, the interpretation of conventionalized signs may rely on neural mechanisms that engage during language comprehension.
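    Intersubject covariance/correlation as invoked above is typically computed by correlating each subject's regional time series with the average time series of the remaining subjects while all of them watch the same naturalistic stimulus. The sketch below is a minimal leave-one-out version on simulated data; the array shapes, the shared-signal construction, and the intersubject_correlation helper are illustrative assumptions, not the study's analysis.

      import numpy as np

      def intersubject_correlation(data):
          # Leave-one-out ISC: correlate each subject's regional time series with the
          # mean time series of all remaining subjects. `data` is (n_subjects, n_timepoints).
          n = data.shape[0]
          isc = np.empty(n)
          for i in range(n):
              others = np.delete(data, i, axis=0).mean(axis=0)
              isc[i] = np.corrcoef(data[i], others)[0, 1]
          return isc

      rng = np.random.default_rng(5)
      shared = rng.standard_normal(300)                     # stimulus-driven component
      data = 0.6 * shared + rng.standard_normal((36, 300))  # 36 subjects, one region (e.g. left IFG)
      print(round(intersubject_correlation(data).mean(), 2))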

  19. Perceived Conventionality in Co-speech Gestures Involves the Fronto-Temporal Language Network

    PubMed Central

    Wolf, Dhana; Rekittke, Linn-Marlen; Mittelberg, Irene; Klasen, Martin; Mathiak, Klaus

    2017-01-01

    Face-to-face communication is multimodal; it encompasses spoken words, facial expressions, gaze, and co-speech gestures. In contrast to linguistic symbols (e.g., spoken words or signs in sign language) relying on mostly explicit conventions, gestures vary in their degree of conventionality. Bodily signs may have a generally accepted or conventionalized meaning (e.g., a head shake) or less so (e.g., self-grooming). We hypothesized that subjective perception of conventionality in co-speech gestures relies on the classical language network, i.e., the left hemispheric inferior frontal gyrus (IFG, Broca's area) and the posterior superior temporal gyrus (pSTG, Wernicke's area), and studied 36 subjects watching video-recorded story retellings during a behavioral and a functional magnetic resonance imaging (fMRI) experiment. It is well documented that neural correlates of such naturalistic videos emerge as intersubject covariance (ISC) in fMRI even without an explicit model of the stimulus (model-free analysis). The subjects attended either to perceived conventionality or to a control condition (any hand movements or gesture-speech relations). Such tasks modulate ISC in contributing neural structures, and thus we studied ISC changes in response to task demands in language networks. Indeed, the conventionality task significantly increased covariance of the button press time series and neuronal synchronization in the left IFG relative to the other tasks. In the left IFG, synchronous activity was observed during the conventionality task only. In contrast, the left pSTG exhibited correlated activation patterns during all conditions with an increase in the conventionality task at the trend level only. Conceivably, the left IFG can be considered a core region for the processing of perceived conventionality in co-speech gestures, similar to spoken language. In general, the interpretation of conventionalized signs may rely on neural mechanisms that engage during language comprehension. PMID:29249945

  20. Decomposing fear perception: A combination of psychophysics and neurometric modeling of fear perception

    PubMed Central

    Forscher, Emily C.; Zheng, Yan; Ke, Zijun; Folstein, Jonathan; Li, Wen

    2016-01-01

    Emotion perception is known to involve multiple operations and waves of analysis, but the specific nature of these processes remains poorly understood. Combining psychophysical testing and neurometric analysis of event-related potentials (ERPs) in a fear detection task with parametrically varied fear intensities (N=45), we sought to elucidate key processes in fear perception. Building on psychophysically defined fear perception thresholds, our neurometric model fitting identified several putative operations and stages. Four key processes arose in sequence following face presentation: fear-neutral categorization (P1 at 100 ms), fear detection (P300 at 320 ms), valuation (early subcomponent of the late positive potential/LPP at 400–500 ms), and conscious awareness (late subcomponent of the LPP at 500–600 ms). Furthermore, within-subject brain-behavior association suggests that initial emotion categorization was mandatory and detached from behavior, whereas valuation and conscious awareness directly impacted behavioral outcome (explaining 17% and 31% of the total variance, respectively). The current study thus reveals the chronometry of fear perception, ascribing psychological meaning to distinct underlying processes. The combination of early categorization and late valuation of fear reconciles conflicting (categorical versus dimensional) emotion accounts, lending support to a hybrid model. Importantly, future research could specifically interrogate these psychological processes in various behaviors and psychopathologies (e.g., anxiety and depression). PMID:27546075
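
    As a rough sketch of the psychophysical side of such a design (not the authors' actual model), a fear-detection threshold can be estimated by fitting a cumulative-Gaussian psychometric function to the proportion of "fear" responses across parametrically varied intensities. The intensity levels and response rates below are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(intensity, mu, sigma, lapse=0.02):
    """Cumulative-Gaussian psychometric function with a small fixed lapse rate."""
    return lapse + (1 - 2 * lapse) * norm.cdf(intensity, loc=mu, scale=sigma)

# Hypothetical data: proportion of "fear" responses at each morph intensity (0-100%).
intensity = np.array([0, 14, 29, 43, 57, 71, 86, 100], dtype=float)
p_fear = np.array([0.03, 0.05, 0.12, 0.35, 0.68, 0.88, 0.95, 0.98])

# Fit mu (threshold) and sigma (slope); the lapse rate stays at its default value.
(mu, sigma), _ = curve_fit(psychometric, intensity, p_fear, p0=[50.0, 10.0])
print(f"estimated detection threshold (50% point): {mu:.1f}% fear intensity")
```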

  1. How Negative Social Bias Affects Memory for Faces: An Electrical Neuroimaging Study

    PubMed Central

    Proverbio, Alice Mado; La Mastra, Francesca; Zani, Alberto

    2016-01-01

    During social interactions, we make inferences about people’s personal characteristics based on their appearance. These inferences form a potential prejudice that can positively or negatively bias our interaction with them. Not much is known about the effects of negative bias on face perception and the ability to recognize people’s faces. This ability was investigated by recording event-related potentials (ERPs) from 128 sites in 16 volunteers. In the first session (encoding), they viewed 200 faces associated with a short fictional story that described anecdotal positive or negative characteristics about each person. In the second session (recognition), they underwent an old/new memory test, in which they had to distinguish 100 new faces from the previously shown faces. ERP data from the encoding phase showed a larger anterior negativity in response to negatively (vs. positively) biased faces, indicating additional processing of faces with unpleasant social traits. In the recognition task, new faces elicited a larger FN400 than old faces, and positive faces elicited a larger FN400 than negative faces. Additionally, old faces elicited a larger Old-New parietal response than new faces, in the form of an enlarged late positive component (LPC). An inverse solution (SwLORETA, 450–550 ms) indicated that remembering old faces was associated with the activation of the right superior frontal gyrus (SFG), left medial temporal gyrus, and right fusiform gyrus. Only negatively connoted faces strongly activated the limbic and parahippocampal areas and the left SFG. A dissociation was found between familiarity (modulated by negative bias) and recollection (distinguishing old from new faces). PMID:27655327

  2. Does the medium matter? The interaction of task type and technology on group performance and member reactions.

    PubMed

    Straus, S G; McGrath, J E

    1994-02-01

    The authors investigated the hypothesis that as group tasks pose greater requirements for member interdependence, communication media that transmit more social context cues will foster group performance and satisfaction. Seventy-two 3-person groups of undergraduate students worked in either computer-mediated or face-to-face meetings on 3 tasks with increasing levels of interdependence: an idea-generation task, an intellective task, and a judgment task. Results showed few differences between computer-mediated and face-to-face groups in the quality of the work completed but large differences in productivity favoring face-to-face groups. Analysis of productivity and of members' reactions supported the predicted interaction of tasks and media, with greater discrepancies between media conditions for tasks requiring higher levels of coordination. Results are discussed in terms of the implications of using computer-mediated communications systems for group work.

  3. Effects on automatic attention due to exposure to pictures of emotional faces while performing Chinese word judgment tasks.

    PubMed

    Junhong, Huang; Renlai, Zhou; Senqi, Hu

    2013-01-01

    Two experiments were conducted to investigate the automatic processing of emotional facial expressions while performing low- or high-demand cognitive tasks under unattended conditions. In Experiment 1, 35 subjects performed low (judging the structure of Chinese words) and high (judging the tone of Chinese words) cognitive load tasks while exposed to unattended pictures of fearful, neutral, or happy faces. The results revealed that the reaction time was slower and the performance accuracy was higher while performing the low cognitive load task than while performing the high cognitive load task. Exposure to fearful faces resulted in significantly longer reaction times and lower accuracy than exposure to neutral faces on the low cognitive load task. In Experiment 2, 26 subjects performed the same word judgment tasks and their brain event-related potentials (ERPs) were measured for a period of 800 ms after the onset of the task stimulus. The amplitude of the early ERP component around 176 ms (P2) elicited by unattended fearful faces over frontal-central-parietal recording sites was significantly larger than that elicited by unattended neutral faces while performing the word structure judgment task. Together, the findings of the two experiments indicated that unattended fearful faces captured significantly more attention resources than unattended neutral faces on a low cognitive load task, but not on a high cognitive load task. It was concluded that fearful faces could automatically capture attention if residual attention resources were available under the unattended condition.

  4. Effects on Automatic Attention Due to Exposure to Pictures of Emotional Faces while Performing Chinese Word Judgment Tasks

    PubMed Central

    Junhong, Huang; Renlai, Zhou; Senqi, Hu

    2013-01-01

    Two experiments were conducted to investigate the automatic processing of emotional facial expressions while performing low- or high-demand cognitive tasks under unattended conditions. In Experiment 1, 35 subjects performed low (judging the structure of Chinese words) and high (judging the tone of Chinese words) cognitive load tasks while exposed to unattended pictures of fearful, neutral, or happy faces. The results revealed that the reaction time was slower and the performance accuracy was higher while performing the low cognitive load task than while performing the high cognitive load task. Exposure to fearful faces resulted in significantly longer reaction times and lower accuracy than exposure to neutral faces on the low cognitive load task. In Experiment 2, 26 subjects performed the same word judgment tasks and their brain event-related potentials (ERPs) were measured for a period of 800 ms after the onset of the task stimulus. The amplitude of the early ERP component around 176 ms (P2) elicited by unattended fearful faces over frontal-central-parietal recording sites was significantly larger than that elicited by unattended neutral faces while performing the word structure judgment task. Together, the findings of the two experiments indicated that unattended fearful faces captured significantly more attention resources than unattended neutral faces on a low cognitive load task, but not on a high cognitive load task. It was concluded that fearful faces could automatically capture attention if residual attention resources were available under the unattended condition. PMID:24124486
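
    The P2 effect in Experiment 2 above is a difference in mean component amplitude between conditions. The generic bookkeeping for extracting such a mean amplitude from epoched EEG is sketched below with NumPy; the sampling rate, latency window, and simulated data are illustrative assumptions, not the authors' recording parameters.

```python
import numpy as np

def mean_amplitude(epochs, times, window=(0.150, 0.200)):
    """Mean ERP amplitude in a latency window.

    epochs: array (n_trials, n_timepoints) of baseline-corrected EEG (microvolts)
            for one condition at one electrode or an averaged electrode cluster.
    times:  array (n_timepoints,) of sample times in seconds from stimulus onset.
    Returns the mean amplitude of the trial-averaged waveform inside the window.
    """
    erp = np.asarray(epochs, dtype=float).mean(axis=0)   # average across trials
    mask = (times >= window[0]) & (times <= window[1])   # select the latency window
    return erp[mask].mean()

# Toy comparison of two conditions: simulated 0.5 s epochs sampled at 500 Hz,
# each with a Gaussian "P2"-like bump near 176 ms of different amplitude.
rng = np.random.default_rng(2)
times = np.arange(0, 0.5, 0.002)
fearful = rng.normal(0, 1, (60, times.size)) + 3.0 * np.exp(-((times - 0.176) / 0.02) ** 2)
neutral = rng.normal(0, 1, (60, times.size)) + 2.0 * np.exp(-((times - 0.176) / 0.02) ** 2)
print(mean_amplitude(fearful, times), mean_amplitude(neutral, times))
```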

  5. Talker variability in audio-visual speech perception

    PubMed Central

    Heald, Shannon L. M.; Nusbaum, Howard C.

    2014-01-01

    A change in talker is a change in the context for the phonetic interpretation of acoustic patterns of speech. Different talkers have different mappings between acoustic patterns and phonetic categories and listeners need to adapt to these differences. Despite this complexity, listeners are adept at comprehending speech in multiple-talker contexts, albeit at a slight but measurable performance cost (e.g., slower recognition). So far, this talker variability cost has been demonstrated only in audio-only speech. Other research in single-talker contexts has shown, however, that when listeners are able to see a talker’s face, speech recognition is improved under adverse listening (e.g., noise or distortion) conditions that can increase uncertainty in the mapping between acoustic patterns and phonetic categories. Does seeing a talker’s face reduce the cost of word recognition in multiple-talker contexts? We used a speeded word-monitoring task in which listeners make quick judgments about target word recognition in single- and multiple-talker contexts. Results show faster recognition performance in single-talker conditions compared to multiple-talker conditions for both audio-only and audio-visual speech. However, recognition time in a multiple-talker context was slower in the audio-visual condition compared to the audio-only condition. These results suggest that seeing a talker’s face during speech perception may slow recognition by increasing the importance of talker identification, signaling to the listener that a change in talker has occurred. PMID:25076919

  6. Talker variability in audio-visual speech perception.

    PubMed

    Heald, Shannon L M; Nusbaum, Howard C

    2014-01-01

    A change in talker is a change in the context for the phonetic interpretation of acoustic patterns of speech. Different talkers have different mappings between acoustic patterns and phonetic categories and listeners need to adapt to these differences. Despite this complexity, listeners are adept at comprehending speech in multiple-talker contexts, albeit at a slight but measurable performance cost (e.g., slower recognition). So far, this talker variability cost has been demonstrated only in audio-only speech. Other research in single-talker contexts has shown, however, that when listeners are able to see a talker's face, speech recognition is improved under adverse listening (e.g., noise or distortion) conditions that can increase uncertainty in the mapping between acoustic patterns and phonetic categories. Does seeing a talker's face reduce the cost of word recognition in multiple-talker contexts? We used a speeded word-monitoring task in which listeners make quick judgments about target word recognition in single- and multiple-talker contexts. Results show faster recognition performance in single-talker conditions compared to multiple-talker conditions for both audio-only and audio-visual speech. However, recognition time in a multiple-talker context was slower in the audio-visual condition compared to the audio-only condition. These results suggest that seeing a talker's face during speech perception may slow recognition by increasing the importance of talker identification, signaling to the listener that a change in talker has occurred.

  7. The non-linear development of the right hemispheric specialization for human face perception.

    PubMed

    Lochy, Aliette; de Heering, Adélaïde; Rossion, Bruno

    2017-06-24

    The developmental origins of human adults' right hemispheric specialization for face perception remain unclear. On the one hand, infant studies have shown a right hemispheric advantage for face perception. On the other hand, it has been proposed that the adult right hemispheric lateralization for face perception slowly emerges during childhood due to reading acquisition, which increases left lateralized posterior responses to competing written material (e.g., visual letters and words). Since the methodological approaches used with infants and children typically differ when their face perception capabilities are explored, resolving this issue has been difficult. Here we tested 5-year-old preschoolers varying in their level of visual letter knowledge with the same fast periodic visual stimulation (FPVS) paradigm leading to strongly right lateralized electrophysiological occipito-temporal face-selective responses in 4- to 6-month-old infants (de Heering and Rossion, 2015). Children's face-selective response was quantitatively larger and differed in scalp topography from infants', but did not differ across hemispheres. There was a small positive correlation between preschoolers' letter knowledge and a non-normalized index of right hemispheric specialization for faces. These observations show that previous discrepant results in the literature reflect a genuine nonlinear development of the neural processes underlying face perception and are not merely due to methodological differences across age groups. We discuss several factors that could contribute to the adult right hemispheric lateralization for faces, such as myelination of the corpus callosum and reading acquisition. Our findings point to the value of FPVS coupled with electroencephalography to assess specialized face perception processes throughout development with the same methodology. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Perceptions of Classroom Assessment Tasks: An Interplay of Gender, Subject Area, and Grade Level

    ERIC Educational Resources Information Center

    Alkharusi, Hussain Ali; Al-Hosni, Salim

    2015-01-01

    This study investigates students' perceptions of classroom assessment tasks as a function of gender, subject area, and grade level. Data from 2753 students on Dorman and Knightley's (2006) Perceptions of Assessment Tasks Inventory (PATI) were analyzed in a MANOVA design. Results showed that students tended to hold positive perceptions of their…

  9. Brain Network Activity During Face Perception: The Impact of Perceptual Familiarity and Individual Differences in Childhood Experience.

    PubMed

    Cloutier, Jasmin; Li, Tianyi; Mišic, Bratislav; Correll, Joshua; Berman, Marc G

    2017-09-01

    An extended distributed network of brain regions supports face perception. Face familiarity influences activity in brain regions involved in this network, but the impact of perceptual familiarity on this network has never been directly assessed with the use of partial least squares analysis. In the present work, we use this multivariate statistical analysis to examine how face-processing systems are differentially recruited by characteristics of the targets (i.e. perceptual familiarity and race) and of the perceivers (i.e. childhood interracial contact). Novel faces were found to preferentially recruit a large distributed face-processing network compared with perceptually familiar faces. Additionally, increased interracial contact during childhood led to decreased recruitment of distributed brain networks previously implicated in face perception, salience detection, and social cognition. Current results provide a novel perspective on the impact of cross-race exposure, suggesting that interracial contact early in life may dramatically shape the neural substrates of face perception generally. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  10. Interaction between Social Categories in the Composite Face Paradigm

    ERIC Educational Resources Information Center

    Chen, Wenfeng; Ren, Naixin; Young, Andrew W.; Liu, Chang Hong

    2018-01-01

    The composite face paradigm (Young, Hellawell, & Hay, 1987) is widely used to demonstrate holistic perception of faces (Rossion, 2013). In the paradigm, parts from different faces (usually the top and bottom halves) are recombined. The principal criterion for holistic perception is that responses involving the component parts of composites in…

  11. Predicting Students' Academic Achievement: Contributions of Perceptions of Classroom Assessment Tasks and Motivated Learning Strategies

    ERIC Educational Resources Information Center

    Alkharusi, Hussain

    2016-01-01

    Introduction: Students are daily exposed to a variety of assessment tasks in the classroom. It has long been recognized that students' perceptions of the assessment tasks may influence student academic achievement. The present study aimed at predicting academic achievement in mathematics from perceptions of the assessment tasks after controlling…

  12. The Functional Neuroanatomy of Human Face Perception.

    PubMed

    Grill-Spector, Kalanit; Weiner, Kevin S; Kay, Kendrick; Gomez, Jesse

    2017-09-15

    Face perception is critical for normal social functioning and is mediated by a network of regions in the ventral visual stream. In this review, we describe recent neuroimaging findings regarding the macro- and microscopic anatomical features of the ventral face network, the characteristics of white matter connections, and basic computations performed by population receptive fields within face-selective regions composing this network. We emphasize the importance of the neural tissue properties and white matter connections of each region, as these anatomical properties may be tightly linked to the functional characteristics of the ventral face network. We end by considering how empirical investigations of the neural architecture of the face network may inform the development of computational models and shed light on how computations in the face network enable efficient face perception.

  13. Face Perception and Test Reliabilities in Congenital Prosopagnosia in Seven Tests

    PubMed Central

    Esins, Janina; Schultz, Johannes; Stemper, Claudia; Kennerknecht, Ingo

    2016-01-01

    Congenital prosopagnosia, the innate impairment in recognizing faces, is a very heterogeneous disorder with different phenotypical manifestations. To investigate the nature of prosopagnosia in more detail, we tested 16 prosopagnosics and 21 controls with an extended test battery addressing various aspects of face recognition. Our results show that prosopagnosics exhibited significant impairments in several face recognition tasks: impaired holistic processing (assessed with, among other tests, the Cambridge Face Memory Test (CFMT)) as well as reduced processing of configural information of faces. This test battery also revealed some new findings. While controls recognized moving faces better than static faces, prosopagnosics did not exhibit this effect. Furthermore, prosopagnosics had significantly impaired gender recognition, which is shown at the group level for the first time in our study. There was no difference between groups in the automatic extraction of face identity information or in object recognition as tested with the Cambridge Car Memory Test. In addition, a methodological analysis of the tests revealed reduced reliability for holistic face processing tests in prosopagnosics. To our knowledge, this is the first study to show that prosopagnosics have a significantly reduced reliability coefficient (Cronbach’s alpha) in the CFMT compared to the controls. We suggest that compensatory strategies employed by the prosopagnosics might be the cause for the vast variety of response patterns revealed by the reduced test reliability. This finding raises the question whether classical face tests measure the same perceptual processes in controls and prosopagnosics. PMID:27482369
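
    The reliability comparison reported above rests on Cronbach's alpha, which can be computed directly from a participants-by-items score matrix. A minimal sketch follows (Python/NumPy); the simulated scores only stand in for trial or subtest scores of a face test such as the CFMT and are not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_participants, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Simulated example: 21 participants, 18 items driven by a common ability factor.
rng = np.random.default_rng(1)
ability = rng.normal(0, 1, size=(21, 1))
scores = ability + rng.normal(0, 0.7, size=(21, 18))
print(round(cronbach_alpha(scores), 2))   # alpha near 1 indicates consistent responding
```

    A group whose members solve the same items with different compensatory strategies would tend to produce a lower alpha than a group responding on the basis of a single shared process, which is the interpretation the authors offer.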

  14. Face perception is tuned to horizontal orientation in the N170 time window.

    PubMed

    Jacques, Corentin; Schiltz, Christine; Goffaux, Valerie

    2014-02-07

    The specificity of face perception is thought to reside both in its dramatic vulnerability to picture-plane inversion and its strong reliance on horizontally oriented image content. Here we asked when in the visual processing stream face-specific perception is tuned to horizontal information. We measured the behavioral performance and scalp event-related potentials (ERP) when participants viewed upright and inverted images of faces and cars (and natural scenes) that were phase-randomized in a narrow orientation band centered either on vertical or horizontal orientation. For faces, the magnitude of the inversion effect (IE) on behavioral discrimination performance was significantly reduced for horizontally randomized compared to vertically or nonrandomized images, confirming the importance of horizontal information for the recruitment of face-specific processing. Inversion affected the processing of nonrandomized and vertically randomized faces early, in the N170 time window. In contrast, the magnitude of the N170 IE was much smaller for horizontally randomized faces. The present research indicates that the early face-specific neural representations are preferentially tuned to horizontal information and offers new perspectives for a description of the visual information feeding face-specific perception.

  15. Face Recognition Deficits in Autism Spectrum Disorders Are Both Domain Specific and Process Specific

    PubMed Central

    Weigelt, Sarah; Koldewyn, Kami; Kanwisher, Nancy

    2013-01-01

    Although many studies have reported face identity recognition deficits in autism spectrum disorders (ASD), two fundamental questions remain: 1) Is this deficit “process specific” for face memory in particular, or does it extend to perceptual discrimination of faces as well? And 2) Is the deficit “domain specific” for faces, or is it found more generally for other social or even nonsocial stimuli? The answers to these questions are important both for understanding the nature of autism and its developmental etiology, and for understanding the functional architecture of face processing in the typical brain. Here we show that children with ASD are impaired (compared to age- and IQ-matched typical children) in face memory, but not face perception, demonstrating process specificity. Further, we find no deficit for either memory or perception of places or cars, indicating domain specificity. Importantly, we further showed deficits in both the perception and memory of bodies, suggesting that the relevant domain of deficit may be social rather than specifically facial. These results provide a more precise characterization of the cognitive phenotype of autism and further indicate a functional dissociation between face memory and face perception. PMID:24040276

  16. The Correlation between Gifted Students' Cost and Task Value Perceptions towards Mathematics: The Mediating Role of Expectancy Belief

    ERIC Educational Resources Information Center

    Kurnaz, Ahmet

    2018-01-01

    This study examined whether expectancy belief plays a mediating role in the correlation between gifted students' cost value and task value perceptions towards mathematics. It is predicted that the correlation between cost value and task value perceptions of gifted students towards mathematics can change according to their…

  17. Holistic integration of gaze cues in visual face and body perception: Evidence from the composite design.

    PubMed

    Vrancken, Leia; Germeys, Filip; Verfaillie, Karl

    2017-01-01

    A considerable amount of research on identity recognition and emotion identification with the composite design points to the holistic processing of these aspects in faces and bodies. In this paradigm, the interference from a nonattended face half on the perception of the attended half is taken as evidence for holistic processing (i.e., a composite effect). Far less research, however, has been dedicated to the concept of gaze. Nonetheless, gaze perception is a substantial component of face and body perception, and holds critical information for everyday communicative interactions. Furthermore, the ability of human observers to detect direct versus averted eye gaze is effortless, perhaps similar to identity perception and emotion recognition. However, the hypothesis of holistic perception of eye gaze has never been tested directly. Research on gaze perception with the composite design could facilitate further systematic comparison with other aspects of face and body perception that have been investigated using the composite design (i.e., identity and emotion). In the present research, a composite design was administered to assess holistic processing of gaze cues in faces (Experiment 1) and bodies (Experiment 2). Results confirmed that eye and head orientation (Experiment 1A) and head and body orientation (Experiment 2A) are integrated in a holistic manner. However, the composite effect was not completely disrupted by inversion (Experiments 1B and 2B), a finding that will be discussed together with implications for future research.

  18. Master's Thesis Projects: Student Perceptions of Supervisor Feedback

    ERIC Educational Resources Information Center

    de Kleijn, Renske A. M.; Mainhard, M. Tim; Meijer, Paulien C.; Brekelmans, Mieke; Pilot, Albert

    2013-01-01

    A growing body of research has investigated student perceptions of written feedback in higher education coursework, but few studies have considered feedback perceptions in one-on-one and face-to-face contexts such as master's thesis projects. In this article, student perceptions of feedback are explored in the context of the supervision of…

  19. Cultural similarities and differences in perceiving and recognizing facial expressions of basic emotions.

    PubMed

    Yan, Xiaoqian; Andrews, Timothy J; Young, Andrew W

    2016-03-01

    The ability to recognize facial expressions of basic emotions is often considered a universal human ability. However, recent studies have suggested that this commonality has been overestimated and that people from different cultures use different facial signals to represent expressions (Jack, Blais, Scheepers, Schyns, & Caldara, 2009; Jack, Caldara, & Schyns, 2012). We investigated this possibility by examining similarities and differences in the perception and categorization of facial expressions between Chinese and white British participants using whole-face and partial-face images. Our results showed no cultural difference in the patterns of perceptual similarity of expressions from whole-face images. When categorizing the same expressions, however, both British and Chinese participants were slightly more accurate with whole-face images of their own ethnic group. To further investigate potential strategy differences, we repeated the perceptual similarity and categorization tasks with presentation of only the upper or lower half of each face. Again, the perceptual similarity of facial expressions was similar between Chinese and British participants for both the upper and lower face regions. However, participants were slightly better at categorizing facial expressions of their own ethnic group for the lower face regions, indicating that the way in which culture shapes the categorization of facial expressions is largely driven by differences in information decoding from this part of the face. (c) 2016 APA, all rights reserved.

  20. The Muslim Headscarf and Face Perception: “They All Look the Same, Don't They?”

    PubMed Central

    Toseeb, Umar; Bryant, Eleanor J.; Keeble, David R. T.

    2014-01-01

    The headscarf conceals hair and other external features of a head (such as the ears). It therefore may have implications for the way in which such faces are perceived. Images of faces with hair (H) or alternatively, covered by a headscarf (HS) were used in three experiments. In Experiment 1 participants saw both H and HS faces in a yes/no recognition task in which the external features either remained the same between learning and test (Same) or switched (Switch). Performance was similar for H and HS faces in both the Same and Switch conditions, but in the Switch condition it dropped substantially compared to the Same condition. This implies that the mere presence of the headscarf does not reduce performance, rather, the change between the type of external feature (hair or headscarf) causes the drop in performance. In Experiment 2, which used eye-tracking methodology, it was found that almost all fixations were to internal regions, and that there was no difference in the proportion of fixations to external features between the Same and Switch conditions, implying that the headscarf influenced processing by virtue of extrafoveal viewing. In Experiment 3, similarity ratings of the internal features of pairs of HS faces were higher than pairs of H faces, confirming that the internal and external features of a face are perceived as a whole rather than as separate components. PMID:24520313
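
    A yes/no recognition task such as the one described above is commonly scored with signal-detection sensitivity (d'), computed from hit and false-alarm rates; the abstract does not state which metric was used, so the sketch below is only an assumption about how such data could be scored, with invented counts.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity for a yes/no recognition task.

    A log-linear correction (add 0.5 to each count, 1 to each total) guards
    against hit or false-alarm rates of exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Invented counts, e.g. headscarf (HS) faces in a Same vs. a Switch condition.
print(round(d_prime(hits=42, misses=8, false_alarms=10, correct_rejections=40), 2))
print(round(d_prime(hits=30, misses=20, false_alarms=18, correct_rejections=32), 2))
```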

  1. Tolerance to spatial-relational transformations in unfamiliar faces: A further challenge to a configural processing account of identity recognition.

    PubMed

    Lorenzino, Martina; Caminati, Martina; Caudek, Corrado

    2018-05-25

    One of the most important questions in face perception research is to understand what information is extracted from a face in order to recognize its identity. Recognition of facial identity has been attributed to a special sensitivity to "configural" information. However, recent studies have challenged the configural account by showing that participants are poor at discriminating variations of metric distances among facial features, especially for familiar as opposed to unfamiliar faces, whereas a configural account predicts the opposite. We aimed to extend these previous results by examining classes of unfamiliar faces with which we have different levels of expertise. We hypothesized an inverse relation between sensitivity to configural information and expertise with a given class of faces, but only for neutral expressions. By first matching perceptual discriminability, we measured tolerance to subtle configural transformations with same-race (SR) versus other-race (OR) faces, and with upright versus upside-down faces. Consistent with our predictions, we found a lower sensitivity to at-threshold configural changes for SR compared to OR faces. We also found that, for our stimuli, the face inversion effect disappeared for neutral but not for emotional faces - a result that can also be attributed to a lower sensitivity to configural transformations for faces presented in a more familiar orientation. The present findings question a purely configural account of face processing and suggest that the role of spatial-relational information in face processing varies according to the functional demands of the task and to the characteristics of the stimuli. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Brain potentials indicate the effect of other observers' emotions on perceptions of facial attractiveness.

    PubMed

    Huang, Yujing; Pan, Xuwei; Mo, Yan; Ma, Qingguo

    2016-03-23

    Perceptions of facial attractiveness are sensitive to the emotional expression of the perceived face. However, little is known about whether the emotional expression on the face of another observer of the perceived face may have an effect on perceptions of facial attractiveness. The present study used the event-related potential technique to examine the social influence of the emotional expression on the face of another observer of the perceived face on perceptions of facial attractiveness. The experiment consisted of two phases. In the first phase, a neutral target face was paired with two images of individuals gazing at the target face with smiling, fearful or neutral expressions. In the second phase, participants were asked to judge the attractiveness of the target face. We found that a target face was judged more attractive when the other observers gazed at it with positive expressions than when they gazed at it with negative expressions. Additionally, the results of brain potentials showed that the visual positive component P3, with a peak latency from 270 to 330 ms, was larger after participants observed the target face paired with smiling individuals than after they observed the target face paired with neutral individuals. These findings suggested that the facial attractiveness of an individual may be influenced by the emotional expression on the face of another observer of the perceived face. Copyright © 2016. Published by Elsevier Ireland Ltd.

  3. Emotional memory and perception in temporal lobectomy patients with amygdala damage.

    PubMed

    Brierley, B; Medford, N; Shaw, P; David, A S

    2004-04-01

    The human amygdala is implicated in the formation of emotional memories and the perception of emotional stimuli, particularly fear, across various modalities. To discern the extent to which these functions are related, 28 patients who had undergone anterior temporal lobectomy (13 left and 15 right) for intractable epilepsy were recruited. Structural magnetic resonance imaging showed that three of them had atrophy of their remaining amygdala. All participants were given tests of affect perception from facial and vocal expressions and of emotional memory, using a standard narrative test and a novel test of word recognition. The results were standardised against matched healthy controls. Performance on all emotion tasks in patients with unilateral lobectomy ranged from unimpaired to moderately impaired. Perception of emotions in faces and voices was (with exceptions) significantly positively correlated, indicating multimodal emotional processing. However, there was no correlation between the subjects' performance on tests of emotional memory and perception. Several subjects showed strong emotional memory enhancement but poor fear perception. Patients with bilateral amygdala damage had greater impairment, particularly on the narrative test of emotional memory, one showing superior fear recognition but absent memory enhancement. Bilateral amygdala damage is particularly disruptive of emotional memory processes in comparison with unilateral temporal lobectomy. On a cognitive level, the pattern of results implies that perception of emotional expressions and emotional memory are supported by separate processing systems or streams.

  4. Facial expressions as a model to test the role of the sensorimotor system in the visual perception of the actions.

    PubMed

    Mele, Sonia; Ghirardi, Valentina; Craighero, Laila

    2017-12-01

    A long-standing debate concerns whether the sensorimotor coding carried out during the observation of transitive actions reflects the low-level movement implementation details or the movement goals. In contrast, phonemes and emotional facial expressions are intransitive actions that do not fall into this debate. The investigation of phoneme discrimination has proven to be a good model to demonstrate that the sensorimotor system plays a role in understanding acoustically presented actions. In the present study, we adapted the experimental paradigms already used for phoneme discrimination during face posture manipulation to the discrimination of emotional facial expressions. We submitted participants to a lower- or an upper-face posture manipulation during the execution of a four-alternative labelling task of pictures randomly taken from four morphed continua between two emotional facial expressions. The results showed that the implementation of low-level movement details influences the discrimination of ambiguous facial expressions that differ in the specific involvement of those movement details. These findings indicate that facial expression discrimination is a good model to test the role of the sensorimotor system in the perception of visually presented actions.

  5. Increased N250 amplitudes for other-race faces reflect more effortful processing at the individual level.

    PubMed

    Herzmann, Grit

    2016-07-01

    The N250 and N250r (r for repetition, signaling a difference measure of priming) have been proposed to reflect the activation of perceptual memory representations for individual faces. Increased N250r and N250 amplitudes have been associated with higher levels of familiarity and expertise, respectively. In contrast to these observations, the N250 amplitude has been found to be larger for other-race than own-race faces in recognition memory tasks. This study investigated if these findings were due to increased identity-specific processing demands for other-race relative to own-race faces and whether or not similar results would be obtained for the N250 in a repetition priming paradigm. Only Caucasian participants were available for testing and completed two tasks with Caucasian, African-American, and Chinese faces. In a repetition priming task, participants decided whether or not sequentially presented faces were of the same identity (individuation task) or same race (categorization task). Increased N250 amplitudes were found for African-American and Chinese faces relative to Caucasian faces, replicating previous results in recognition memory tasks. Contrary to the expectation that increased N250 amplitudes for other-race faces would be confined to the individuation task, both tasks showed similar results. This could be due to the fact that face identity information needed to be maintained across the sequential presentation of prime and target in both tasks. Increased N250 amplitudes for other-race faces are taken to represent increased neural demands on the identity-specific processing of other-race faces, which are typically processed less holistically and less on the level of the individual. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Mirror self-face perception in individuals with schizophrenia: Feelings of strangeness associated with one's own image.

    PubMed

    Bortolon, Catherine; Capdevielle, Delphine; Altman, Rosalie; Macgregor, Alexandra; Attal, Jérôme; Raffard, Stéphane

    2017-07-01

    Self-face recognition is crucial for sense of identity and for maintaining a coherent sense of self. Most of our daily life experiences with the image of our own face happen when we look at ourselves in the mirror. However, to date, mirror self-perception in schizophrenia has received little attention despite evidence that face recognition deficits and self abnormalities have been described in schizophrenia. Thus, this study aims to investigate mirror self-face perception in schizophrenia patients and its correlation with clinical symptoms. Twenty-four schizophrenia patients and twenty-five healthy controls were explicitly requested to describe their image in detail for 2 min while looking at themselves in a mirror. Then, they were asked to report whether they experienced any self-face recognition difficulties. Results showed that schizophrenia patients reported more feelings of strangeness towards their face compared to healthy controls (U=209.5, p=0.048, r=0.28), but no statistically significant differences were found regarding misidentification (p=0.111) and failures in recognition (p=0.081). Symptoms such as hallucinations, somatic concerns and depression were also associated with self-face perception abnormalities (all p-values>0.05). Feelings of strangeness toward one's own face in schizophrenia might be part of a familiar face perception deficit or a more global self-disturbance, which is characterized by a loss of self-other boundaries and has been associated with abnormal body experiences and first rank symptoms. Regarding this last hypothesis, multisensorial integration might have an impact on the way patients perceive themselves since it has an important role in mirror self-perception. Copyright © 2017. Published by Elsevier B.V.

  7. Is the Face-Perception System Human-Specific at Birth?

    ERIC Educational Resources Information Center

    Di Giorgio, Elisa; Leo, Irene; Pascalis, Olivier; Simion, Francesca

    2012-01-01

    The present study investigates the human-specificity of the orienting system that allows neonates to look preferentially at faces. Three experiments were carried out to determine whether the face-perception system that is present at birth is broad enough to include both human and nonhuman primate faces. The results demonstrate that the newborns…

  8. Adaptation to faces and voices: unimodal, cross-modal, and sex-specific effects.

    PubMed

    Little, Anthony C; Feinberg, David R; Debruine, Lisa M; Jones, Benedict C

    2013-11-01

    Exposure, or adaptation, to faces or voices biases perceptions of subsequent stimuli, for example, causing faces to appear more normal than they would be otherwise if they are similar to the previously presented stimuli. Studies also suggest that there may be cross-modal adaptation between sound and vision, although the evidence is inconsistent. We examined adaptation effects within and across voices and faces and also tested whether adaptation crosses between male and female stimuli. We exposed participants to sex-typical or sex-atypical stimuli and measured the perceived normality of subsequent stimuli. Exposure to female faces or voices altered perceptions of subsequent female stimuli, and these adaptation effects crossed modality; exposure to voices influenced judgments of faces, and vice versa. We also found that exposure to female stimuli did not influence perception of subsequent male stimuli. Our data demonstrate that recent experience of faces and voices changes subsequent perception and that mental representations of faces and voices may not be modality dependent. Both unimodal and cross-modal adaptation effects appear to be relatively sex-specific.

  9. Handedness is related to neural mechanisms underlying hemispheric lateralization of face processing

    PubMed Central

    Frässle, Stefan; Krach, Sören; Paulus, Frieder Michel; Jansen, Andreas

    2016-01-01

    While the right-hemispheric lateralization of the face perception network is well established, recent evidence suggests that handedness affects the cerebral lateralization of face processing at the hierarchical level of the fusiform face area (FFA). However, the neural mechanisms underlying differential hemispheric lateralization of face perception in right- and left-handers are largely unknown. Using dynamic causal modeling (DCM) for fMRI, we aimed to unravel the putative processes that mediate handedness-related differences by investigating the effective connectivity in the bilateral core face perception network. Our results reveal an enhanced recruitment of the left FFA in left-handers compared to right-handers, as evidenced by more pronounced face-specific modulatory influences on both intra- and interhemispheric connections. As structural and physiological correlates of handedness-related differences in face processing, right- and left-handers varied with regard to their gray matter volume in the left fusiform gyrus and their pupil responses to face stimuli. Overall, these results describe how handedness is related to the lateralization of the core face perception network, and point to different neural mechanisms underlying face processing in right- and left-handers. In a wider context, this demonstrates the entanglement of structurally and functionally remote brain networks, suggesting a broader underlying process regulating brain lateralization. PMID:27250879

  10. Handedness is related to neural mechanisms underlying hemispheric lateralization of face processing

    NASA Astrophysics Data System (ADS)

    Frässle, Stefan; Krach, Sören; Paulus, Frieder Michel; Jansen, Andreas

    2016-06-01

    While the right-hemispheric lateralization of the face perception network is well established, recent evidence suggests that handedness affects the cerebral lateralization of face processing at the hierarchical level of the fusiform face area (FFA). However, the neural mechanisms underlying differential hemispheric lateralization of face perception in right- and left-handers are largely unknown. Using dynamic causal modeling (DCM) for fMRI, we aimed to unravel the putative processes that mediate handedness-related differences by investigating the effective connectivity in the bilateral core face perception network. Our results reveal an enhanced recruitment of the left FFA in left-handers compared to right-handers, as evidenced by more pronounced face-specific modulatory influences on both intra- and interhemispheric connections. As structural and physiological correlates of handedness-related differences in face processing, right- and left-handers varied with regard to their gray matter volume in the left fusiform gyrus and their pupil responses to face stimuli. Overall, these results describe how handedness is related to the lateralization of the core face perception network, and point to different neural mechanisms underlying face processing in right- and left-handers. In a wider context, this demonstrates the entanglement of structurally and functionally remote brain networks, suggesting a broader underlying process regulating brain lateralization.

  11. Caring more and knowing more reduces age-related differences in emotion perception.

    PubMed

    Stanley, Jennifer Tehan; Isaacowitz, Derek M

    2015-06-01

    Traditional emotion perception tasks show that older adults are less accurate than are young adults at recognizing facial expressions of emotion. Recently, we proposed that socioemotional factors might explain why older adults seem impaired in lab tasks but less so in everyday life (Isaacowitz & Stanley, 2011). Thus, in the present research we empirically tested whether socioemotional factors such as motivation and familiarity can alter this pattern of age effects. In 1 task, accountability instructions eliminated age differences in the traditional emotion perception task. Using a novel emotion perception paradigm featuring spontaneous dynamic facial expressions of a familiar romantic partner versus a same-age stranger, we found that age differences in emotion perception accuracy were attenuated in the familiar partner condition, relative to the stranger condition. Taken together, the results suggest that both overall accuracy as well as specific patterns of age effects differ appreciably between traditional emotion perception tasks and emotion perception within a socioemotional context. (c) 2015 APA, all rights reserved.

  12. Connecting Athletes’ Self-Perceptions and Metaperceptions of Competence: a Structural Equation Modeling Approach

    PubMed Central

    Cecchini, Jose A.; Fernández-Rio, Javier; Méndez-Giménez, Antonio

    2015-01-01

    This study explored the relationships between athletes’ competence self-perceptions and metaperceptions. Two hundred and fifty-one student-athletes (14.26 ± 1.89 years), members of twenty different teams (basketball, soccer), completed a questionnaire which included the Perception of Success Questionnaire, the Competence subscale of the Intrinsic Motivation Inventory, and modified versions of both questionnaires to assess athletes’ metaperceptions. Structural equation modelling analysis revealed that athletes’ task and ego metaperceptions positively predicted task and ego self-perceptions, respectively. Competence metaperceptions were strong predictors of competence self-perceptions, confirming the atypical metaperception formation in outcome-dependent contexts such as sport. Task and ego metaperceptions positively predicted athletes’ competence metaperceptions. How coaches value their athletes’ competence has a stronger influence on what the athletes think of themselves than their own self-perceptions do. Athletes’ ego and task metaperceptions influenced their competence metaperceptions (how they believe their coaches rate their competence). Therefore, athletes build their competence metaperceptions using all information available from their coaches. Finally, only task self-perceptions positively predicted athletes’ competence self-perceptions. PMID:26240662

  13. Caring More and Knowing More Reduces Age-Related Differences in Emotion Perception

    PubMed Central

    Stanley, Jennifer Tehan; Isaacowitz, Derek M.

    2015-01-01

    Traditional emotion perception tasks show that older adults are less accurate than young adults at recognizing facial expressions of emotion. Recently, we proposed that socioemotional factors might explain why older adults seem impaired in lab tasks but less so in everyday life (Isaacowitz & Stanley, 2011). Thus, in the present research we empirically tested whether socioemotional factors such as motivation and familiarity can alter this pattern of age effects. In one task, accountability instructions eliminated age differences in the traditional emotion perception task. Using a novel emotion perception paradigm featuring spontaneous dynamic facial expressions of a familiar romantic partner versus a same-age stranger, we found that age differences in emotion perception accuracy were attenuated in the familiar partner condition, relative to the stranger condition. Taken together, the results suggest that both overall accuracy as well as specific patterns of age effects differ appreciably between traditional emotion perception tasks and emotion perception within a socioemotional context. PMID:26030775

  14. Grounding context in face processing: color, emotion, and gender

    PubMed Central

    Gil, Sandrine; Le Bigot, Ludovic

    2015-01-01

    In recent years, researchers have become interested in the way that the affective quality of contextual information transfers to a perceived target. We therefore examined the effect of a red (vs. green, mixed red/green, and achromatic) background – known to be valenced – on the processing of stimuli that play a key role in human interactions, namely facial expressions. We also examined whether the valenced-color effect can be modulated by gender, which is also known to be valenced. Female and male adult participants performed a categorization task of facial expressions of emotion in which the faces of female and male posers expressing two ambiguous emotions (i.e., neutral and surprise) were presented against the four different colored backgrounds. Additionally, this task was completed by collecting subjective ratings for each colored background in the form of five semantic differential scales corresponding to both discrete and dimensional perspectives of emotion. We found that the red background resulted in more negative face perception than the green background, whether the poser was female or male. However, whereas this valenced-color effect was the only effect for female posers, for male posers, the effect was modulated by both the nature of the ambiguous emotion and the decoder’s gender. Overall, our findings offer evidence that color and gender have a common valence-based dimension. PMID:25852625

  15. How Fast is Famous Face Recognition?

    PubMed Central

    Barragan-Jason, Gladys; Lachat, Fanny; Barbeau, Emmanuel J.

    2012-01-01

    The rapid recognition of familiar faces is crucial for social interactions. However the actual speed with which recognition can be achieved remains largely unknown as most studies have been carried out without any speed constraints. Different paradigms have been used, leading to conflicting results, and although many authors suggest that face recognition is fast, the speed of face recognition has not been directly compared to “fast” visual tasks. In this study, we sought to overcome these limitations. Subjects performed three tasks, a familiarity categorization task (famous faces among unknown faces), a superordinate categorization task (human faces among animal ones), and a gender categorization task. All tasks were performed under speed constraints. The results show that, despite the use of speed constraints, subjects were slow when they had to categorize famous faces: minimum reaction time was 467 ms, which is 180 ms more than during superordinate categorization and 160 ms more than in the gender condition. Our results are compatible with a hierarchy of face processing from the superordinate level to the familiarity level. The processes taking place between detection and recognition need to be investigated in detail. PMID:23162503

  16. Holistic face training enhances face processing in developmental prosopagnosia

    PubMed Central

    Cohan, Sarah; Nakayama, Ken

    2014-01-01

    Prosopagnosia has largely been regarded as an untreatable disorder. However, recent case studies using cognitive training have shown that it is possible to enhance face recognition abilities in individuals with developmental prosopagnosia. Our goal was to determine if this approach could be effective in a larger population of developmental prosopagnosics. We trained 24 developmental prosopagnosics using a 3-week online face-training program targeting holistic face processing. Twelve subjects with developmental prosopagnosia were assessed before and after training; the other 12 were assessed before and after a waiting period, then performed the training, and were assessed again. The assessments included measures of front-view face discrimination, face discrimination with view-point changes, measures of holistic face processing, and a 5-day diary to quantify potential real-world improvements. Compared with the waiting period, developmental prosopagnosics showed moderate but significant overall training-related improvements on measures of front-view face discrimination. Those who reached the more difficult levels of training (‘better’ trainees) showed the strongest improvements in front-view face discrimination and showed significantly increased holistic face processing to the point of being similar to that of unimpaired control subjects. Despite challenges in characterizing developmental prosopagnosics’ everyday face recognition and potential biases in self-report, results also showed modest but consistent self-reported diary improvements. In summary, we demonstrate that by using cognitive training that targets holistic processing, it is possible to enhance face perception across a group of developmental prosopagnosics and further suggest that those who improved the most on the training task received the greatest benefits. PMID:24691394

  17. Understanding consumer evaluations of personalised nutrition services in terms of the privacy calculus: a qualitative study.

    PubMed

    Berezowska, Aleksandra; Fischer, Arnout R H; Ronteltap, Amber; Kuznesof, Sharron; Macready, Anna; Fallaize, Rosalind; van Trijp, Hans C M

    2014-01-01

    Personalised nutrition (PN) may provide major health benefits to consumers. A potential barrier to the uptake of PN is consumers' reluctance to disclose sensitive information upon which PN is based. This study adopts the privacy calculus to explore how PN service attributes contribute to consumers' privacy risk and personalisation benefit perceptions. Sixteen focus groups (n = 124) were held in 8 EU countries and discussed 9 PN services that differed in terms of personal information, communication channel, service provider, advice justification, scope, frequency, and customer lock-in. Transcripts were content analysed. The personal information that underpinned PN contributed to both privacy risk perception and personalisation benefit perception. Disclosing information face-to-face mitigated the perception of privacy risk and amplified the perception of personalisation benefit. PN provided by a qualified expert and justified by scientific evidence increased participants' value perception. Enhancing convenience, offering regular face-to-face support, and employing customer lock-in strategies were perceived as beneficial. This study suggests that to encourage consumer adoption, PN has to account for face-to-face communication, expert advice providers, support, a lifestyle-change focus, and customised offers. The results provide an initial insight into service attributes that influence consumer adoption of PN. © 2014 S. Karger AG, Basel.

  18. Prism adaptation does not change the rightward spatial preference bias found with ambiguous stimuli in unilateral neglect

    PubMed Central

    Sarri, Margarita; Greenwood, Richard; Kalra, Lalit; Driver, Jon

    2011-01-01

    Previous research has shown that prism adaptation can ameliorate several symptoms of spatial neglect after right-hemisphere damage. But the mechanisms behind this remain unclear. Recently we reported that prisms may increase leftward awareness for neglect in a task using chimeric visual objects, despite apparently not affecting awareness in a task using chimeric emotional faces (Sarri et al., 2006). Here we explored potential reasons for this apparent discrepancy in outcome, by testing further whether the lack of a prism effect on the chimeric face task could be explained by: i) the specific category of stimuli used (faces as opposed to objects); ii) the affective nature of the stimuli; and/or iii) the particular task implemented, with the chimeric face task requiring forced-choice judgements of lateral ‘preference’ between pairs of identical, but left/right mirror-reversed chimeric faces (as opposed to identification for the chimeric object task). We replicated our previous pattern of no impact of prisms on the emotional chimeric face task here in a new series of patients, while also similarly finding no beneficial impact on another lateral ‘preference’ measure that used non-face non-emotional stimuli, namely greyscale gradients. By contrast, we found the usual beneficial impact of prism adaptation on some conventional measures of neglect, and improvements for at least some patients in a different face task, requiring explicit discrimination of the chimeric or non-chimeric nature of face stimuli. The new findings indicate that prism therapy does not alter spatial biases in neglect as revealed by ‘lateral preference tasks’ that have no right or wrong answer (requiring forced-choice judgements on left/right mirror-reversed stimuli), regardless of whether these employ face or non-face stimuli. But our data also show that prism therapy can beneficially modulate some aspects of visual awareness in spatial neglect not only for objects, but also for face stimuli, in some cases. PMID:20171612

  19. Face-gender discrimination is possible in the near-absence of attention.

    PubMed

    Reddy, Leila; Wilken, Patrick; Koch, Christof

    2004-03-02

    The attentional cost associated with the visual discrimination of the gender of a face was investigated. Participants performed a face-gender discrimination task either alone (single-task) or concurrently (dual-task) with a known attentionally demanding task (5-letter T/L discrimination). Overall performance on face-gender discrimination suffered remarkably little under the dual-task condition compared to the single-task condition. Similar results were obtained in experiments that controlled for potential training effects or the use of low-level cues in this discrimination task. Our results provide further evidence against the notion that only low-level representations can be accessed outside the focus of attention.

  20. Taking the reins: the effects of new leader status and leadership style on team performance.

    PubMed

    Sauer, Stephen J

    2011-05-01

    New leaders face a challenging task when they take charge of their teams. They have to determine how best to guide the work process, and they must understand how their behaviors will affect the members of their team. This research examines how a newly assigned team leader's status moderates subordinates' reactions to different leadership styles to affect assessments of the leader's self-confidence and effectiveness, and how this impacts team performance. Across 2 experimental studies, results demonstrate that low-status leaders are rated as more effective when they use a directive style, whereas high-status leaders are viewed as more effective when they use a participative style, and this relationship is mediated by perceptions of self-confidence. In addition, teams whose leaders are viewed more favorably perform better on a complex group task. These findings imply that low-status individuals are able to enhance their level of personal power by drawing on whatever positional power they hold, whereas high-status individuals are better off relying solely on their personal power to influence others. This research also provides a clear demonstration that assessments of new leaders' behaviors are subject to an appraisal that is clouded by observers' status perceptions and attributions.

  1. Development of Neural Sensitivity to Face Identity Correlates with Perceptual Discriminability

    PubMed Central

    Barnett, Michael A.; Hartley, Jake; Gomez, Jesse; Stigliani, Anthony; Grill-Spector, Kalanit

    2016-01-01

    Face perception is subserved by a series of face-selective regions in the human ventral stream, which undergo prolonged development from childhood to adulthood. However, it is unknown how neural development of these regions relates to the development of face-perception abilities. Here, we used functional magnetic resonance imaging (fMRI) to measure brain responses of ventral occipitotemporal regions in children (ages, 5–12 years) and adults (ages, 19–34 years) when they viewed faces that parametrically varied in dissimilarity. Since similar faces generate lower responses than dissimilar faces due to fMRI adaptation, this design objectively evaluates neural sensitivity to face identity across development. Additionally, a subset of subjects participated in a behavioral experiment to assess perceptual discriminability of face identity. Our data reveal three main findings: (1) neural sensitivity to face identity increases with age in face-selective but not object-selective regions; (2) the amplitude of responses to faces increases with age in both face-selective and object-selective regions; and (3) perceptual discriminability of face identity is correlated with the neural sensitivity to face identity of face-selective regions. In contrast, perceptual discriminability is not correlated with the amplitude of response in face-selective regions or of responses of object-selective regions. These data suggest that developmental increases in neural sensitivity to face identity in face-selective regions improve perceptual discriminability of faces. Our findings significantly advance the understanding of the neural mechanisms of development of face perception and open new avenues for using fMRI adaptation to study the neural development of high-level visual and cognitive functions more broadly. SIGNIFICANCE STATEMENT Face perception, which is critical for daily social interactions, develops from childhood to adulthood. However, it is unknown what developmental changes in the brain lead to improved performance. Using fMRI in children and adults, we find that from childhood to adulthood, neural sensitivity to changes in face identity increases in face-selective regions. Critically, subjects' perceptual discriminability among faces is linked to neural sensitivity: participants with higher neural sensitivity in face-selective regions demonstrate higher perceptual discriminability. Thus, our results suggest that developmental increases in face-selective regions' sensitivity to face identity improve perceptual discrimination of faces. These findings significantly advance understanding of the neural mechanisms underlying the development of face perception and have important implications for assessing both typical and atypical development. PMID:27798143
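
    To make the adaptation logic concrete, the sketch below (illustrative only, not the authors' analysis pipeline; all data and variable names are hypothetical) estimates a region's neural sensitivity as the slope of its fMRI response across parametric face-dissimilarity levels and correlates that slope with behavioral discriminability across subjects.

```python
# Illustrative sketch only (not the authors' pipeline); data are simulated.
import numpy as np
from scipy import stats

dissimilarity = np.array([0.0, 0.25, 0.5, 0.75, 1.0])  # parametric face-dissimilarity levels

def neural_sensitivity(bold_by_level):
    """Slope of the BOLD response over dissimilarity (release from fMRI adaptation)."""
    return stats.linregress(dissimilarity, bold_by_level).slope

rng = np.random.default_rng(0)
n_subjects = 20
# Simulated per-subject responses of a face-selective region at each dissimilarity level.
bold = 0.4 + 1.0 * dissimilarity + rng.normal(0, 0.15, size=(n_subjects, dissimilarity.size))
sensitivity = np.array([neural_sensitivity(row) for row in bold])
# Simulated behavioral discriminability (e.g., d') loosely tied to neural sensitivity.
discriminability = 1.5 * sensitivity + rng.normal(0, 0.2, n_subjects)

r, p = stats.pearsonr(sensitivity, discriminability)
print(f"brain-behavior correlation: r = {r:.2f}, p = {p:.3f}")
```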

  2. Aberrant activity and connectivity of the posterior superior temporal sulcus during social cognition in schizophrenia.

    PubMed

    Mier, Daniela; Eisenacher, Sarah; Rausch, Franziska; Englisch, Susanne; Gerchen, Martin Fungisai; Zamoscik, Vera; Meyer-Lindenberg, Andreas; Zink, Mathias; Kirsch, Peter

    2017-10-01

    Schizophrenia is associated with significant impairments in social cognition. These impairments have been shown to go along with altered activation of the posterior superior temporal sulcus (pSTS). However, studies that investigate connectivity of pSTS during social cognition in schizophrenia are sparse. Twenty-two patients with schizophrenia and 22 matched healthy controls completed a social-cognitive task for functional magnetic resonance imaging that allows the investigation of affective Theory of Mind (ToM), emotion recognition and the processing of neutral facial expressions. Moreover, a resting-state measurement was taken. Patients with schizophrenia performed worse in the social-cognitive task (main effect of group). In addition, a group by social-cognitive processing interaction was revealed for activity, as well as for connectivity during the social-cognitive task, i.e., patients with schizophrenia showed hyperactivity of right pSTS during neutral face processing, but hypoactivity during emotion recognition and affective ToM. In addition, hypoconnectivity between right and left pSTS was revealed for affective ToM, but not for neutral face processing or emotion recognition. No group differences in connectivity from right to left pSTS occurred during resting state. This pattern of aberrant activity and connectivity of the right pSTS during social cognition might form the basis of false-positive perceptions of emotions and intentions and could contribute to the emergence and sustainment of delusions.

  3. Hybrid Structures: Faculty Use and Perception of Web-Based Courseware as a Supplement to Face-to-Face Instruction

    ERIC Educational Resources Information Center

    Woods, Robert; Baker, Jason D.; Hopper, Dave

    2004-01-01

    The researchers examined responses from 862 faculty members at 38 institutions nationwide using the Blackboard Learning Management System (LMS) to supplement their face-to-face instruction. The four research questions addressed the primary uses that faculty make of Blackboard, perceptions that faculty have of how certain Blackboard features…

  4. Interference among the Processing of Facial Emotion, Face Race, and Face Gender.

    PubMed

    Li, Yongna; Tse, Chi-Shing

    2016-01-01

    People can process multiple dimensions of facial properties simultaneously. Facial processing models are based on the processing of facial properties. The current study examined the processing of facial emotion, face race, and face gender using categorization tasks. The same set of Chinese, White and Black faces, each posing a neutral, happy or angry expression, was used in three experiments. Facial emotion interacted with face race in all the tasks. The interaction of face race and face gender was found in the race and gender categorization tasks, whereas the interaction of facial emotion and face gender was significant in the emotion and gender categorization tasks. These results provided evidence for a symmetric interaction between variant facial properties (emotion) and invariant facial properties (race and gender).

  5. Interference among the Processing of Facial Emotion, Face Race, and Face Gender

    PubMed Central

    Li, Yongna; Tse, Chi-Shing

    2016-01-01

    People can process multiple dimensions of facial properties simultaneously. Facial processing models are based on the processing of facial properties. The current study examined the processing of facial emotion, face race, and face gender using categorization tasks. The same set of Chinese, White and Black faces, each posing a neutral, happy or angry expression, was used in three experiments. Facial emotion interacted with face race in all the tasks. The interaction of face race and face gender was found in the race and gender categorization tasks, whereas the interaction of facial emotion and face gender was significant in the emotion and gender categorization tasks. These results provided evidence for a symmetric interaction between variant facial properties (emotion) and invariant facial properties (race and gender). PMID:27840621

  6. Decoding task-based attentional modulation during face categorization.

    PubMed

    Chiu, Yu-Chin; Esterman, Michael; Han, Yuefeng; Rosen, Heather; Yantis, Steven

    2011-05-01

    Attention is a neurocognitive mechanism that selects task-relevant sensory or mnemonic information to achieve current behavioral goals. Attentional modulation of cortical activity has been observed when attention is directed to specific locations, features, or objects. However, little is known about how high-level categorization task set modulates perceptual representations. In the current study, observers categorized faces by gender (male vs. female) or race (Asian vs. White). Each face was perceptually ambiguous in both dimensions, such that categorization of one dimension demanded selective attention to task-relevant information within the face. We used multivoxel pattern classification to show that task-specific modulations evoke reliably distinct spatial patterns of activity within three face-selective cortical regions (right fusiform face area and bilateral occipital face areas). This result suggests that patterns of activity in these regions reflect not only stimulus-specific (i.e., faces vs. houses) responses but also task-specific (i.e., race vs. gender) attentional modulation. Furthermore, exploratory whole-brain multivoxel pattern classification (using a searchlight procedure) revealed a network of dorsal fronto-parietal regions (left middle frontal gyrus and left inferior and superior parietal lobule) that also exhibit distinct patterns for the two task sets, suggesting that these regions may represent abstract goals during high-level categorization tasks.
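
    As a rough illustration of the decoding approach described above (a generic sketch with simulated data, not the study's code or actual voxel patterns), a cross-validated linear classifier can be trained to distinguish the two task sets from trial-wise ROI activity patterns:

```python
# Illustrative MVPA sketch: decode task set (gender vs. race categorization)
# from simulated voxel patterns in a face-selective ROI.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(1)
n_trials, n_voxels = 80, 200                      # hypothetical ROI data
X = rng.normal(size=(n_trials, n_voxels))         # trial-by-voxel activity patterns
y = np.repeat([0, 1], n_trials // 2)              # 0 = gender task, 1 = race task
X[y == 1, :20] += 0.5                             # inject a weak task-specific signal

clf = LinearSVC(C=1.0, max_iter=10000)
scores = cross_val_score(clf, X, y, cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```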

  7. A visual processing advantage for young-adolescent deaf observers: Evidence from face and object matching tasks

    PubMed Central

    Megreya, Ahmed M.; Bindemann, Markus

    2017-01-01

    It is unresolved whether the permanent auditory deprivation that deaf people experience leads to the enhanced visual processing of faces. The current study explored this question with a matching task in which observers searched for a target face among a concurrent lineup of ten faces. This was compared with a control task in which the same stimuli were presented upside down, to disrupt typical face processing, and an object matching task. A sample of young-adolescent deaf observers performed with higher accuracy than hearing controls across all of these tasks. These results clarify previous findings and provide evidence for a general visual processing advantage in deaf observers rather than a face-specific effect. PMID:28117407

  8. Neural correlates of cognitive aging during the perception of facial age: the role of relatively distant and local texture information

    PubMed Central

    Komes, Jessica; Schweinberger, Stefan R.; Wiese, Holger

    2015-01-01

    Previous event-related potential (ERP) research revealed that older relative to younger adults show reduced inversion effects in the N170 (with more negative amplitudes for inverted than upright faces), suggestive of impairments in face perception. However, as these studies used young to middle-aged faces only, this finding may reflect preferential processing of own- relative to other-age faces rather than age-related decline. We conducted an ERP study in which young and older participants categorized young and old upright or inverted faces by age. Stimuli were presented either unfiltered or low-pass filtered at 30, 20, or 10 cycles per image (CPI). Response times revealed larger inversion effects, with slower responses for inverted faces, for young faces in young participants. Older participants did not show a corresponding effect. ERPs yielded a trend toward reduced N170 inversion effects in older relative to younger adults independent of face age. Moreover, larger inversion effects for young relative to old faces were detected, and filtering resulted in smaller N170 amplitudes. The reduced N170 inversion effect in older adults may reflect age-related changes in neural correlates of face perception. A smaller N170 inversion effect for old faces may indicate that facial changes with age hamper early face perception stages. PMID:26441790
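
    For readers unfamiliar with the filtering manipulation, low-pass filtering at a given number of cycles per image (CPI) can be sketched as an FFT-domain mask; this is a minimal illustration under simple assumptions (square grayscale image, random array as a stand-in), not the stimulus-generation code used in the study.

```python
# Minimal sketch: low-pass filter an image at a cutoff expressed in cycles per image.
import numpy as np

def lowpass_cpi(image, cutoff_cpi):
    fy = np.fft.fftfreq(image.shape[0]) * image.shape[0]   # frequencies in cycles per image
    fx = np.fft.fftfreq(image.shape[1]) * image.shape[1]
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)  # radial spatial frequency
    spectrum = np.fft.fft2(image) * (radius <= cutoff_cpi) # zero out high frequencies
    return np.real(np.fft.ifft2(spectrum))

face = np.random.default_rng(4).random((256, 256))          # stand-in for a face image
for cpi in (30, 20, 10):
    print(cpi, lowpass_cpi(face, cpi).shape)
```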

  9. Developmental study of visual perception of handwriting movement: influence of motor competencies?

    PubMed

    Bidet-Ildei, Christel; Orliaguet, Jean-Pierre

    2008-07-25

    This paper investigates the influence of motor competencies on the visual perception of human movements in 6- to 10-year-old children. To this end, we compared the kinematics of actually performed and perceptually preferred handwriting movements. The two children's tasks were (1) to write the letter e on a digitizer (handwriting task) and (2) to adjust the velocity of an e displayed on a screen so that it corresponded to "their preferred velocity" (perceptive task). In both tasks, the size of the letter (from 3.4 to 54.02 cm) was different on each trial. Results showed that, irrespective of age and task, total movement time conformed to the isochrony principle, i.e., the tendency to keep movement duration constant across changes in amplitude. However, concerning movement speed, there was no developmental correspondence between the results obtained in the motor and perceptive tasks. In the handwriting task, movement time decreased with age, but no effect of age was observed in the perceptive task. Therefore, perceptual preference for handwriting movement in children cannot be strictly interpreted in terms of motor-perceptual coupling.

  10. Bodily action penetrates affective perception

    PubMed Central

    Rigutti, Sara; Gerbino, Walter

    2016-01-01

    Fantoni & Gerbino (2014) showed that subtle postural shifts associated with reaching can have a strong hedonic impact and affect how actors experience facial expressions of emotion. Using a novel Motor Action Mood Induction Procedure (MAMIP), they found consistent congruency effects in participants who performed a facial emotion identification task after a sequence of visually-guided reaches: a face perceived as neutral in a baseline condition appeared slightly happy after comfortable actions and slightly angry after uncomfortable actions. However, skeptics about the penetrability of perception (Zeimbekis & Raftopoulos, 2015) would consider such evidence insufficient to demonstrate that observer’s internal states induced by action comfort/discomfort affect perception in a top-down fashion. The action-modulated mood might have produced a back-end memory effect capable of affecting post-perceptual and decision processing, but not front-end perception. Here, we present evidence that performing a facial emotion detection (not identification) task after MAMIP exhibits systematic mood-congruent sensitivity changes, rather than response bias changes attributable to cognitive set shifts; i.e., we show that observer’s internal states induced by bodily action can modulate affective perception. The detection threshold for happiness was lower after fifty comfortable than uncomfortable reaches; while the detection threshold for anger was lower after fifty uncomfortable than comfortable reaches. Action valence induced an overall sensitivity improvement in detecting subtle variations of congruent facial expressions (happiness after positive comfortable actions, anger after negative uncomfortable actions), in the absence of significant response bias shifts. Notably, both comfortable and uncomfortable reaches impact sensitivity in an approximately symmetric way relative to a baseline inaction condition. All of these constitute compelling evidence of a genuine top-down effect on perception: specifically, facial expressions of emotion are penetrable by action-induced mood. Affective priming by action valence is a candidate mechanism for the influence of observer’s internal states on properties experienced as phenomenally objective and yet loaded with meaning. PMID:26893964
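
    The distinction the authors draw between sensitivity changes and response-bias shifts maps onto standard signal detection theory; below is a minimal sketch assuming the equal-variance Gaussian model, with hypothetical trial counts rather than the study's data.

```python
# Minimal signal-detection sketch: separate sensitivity (d') from response bias (criterion c).
from statistics import NormalDist

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """Return (d_prime, criterion) from trial counts, with a log-linear correction."""
    hr = (hits + 0.5) / (hits + misses + 1)                        # corrected hit rate
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    d_prime = z(hr) - z(far)
    criterion = -0.5 * (z(hr) + z(far))
    return d_prime, criterion

# Hypothetical counts after comfortable vs. uncomfortable reaching blocks.
print(sdt_indices(hits=42, misses=8, false_alarms=12, correct_rejections=38))
print(sdt_indices(hits=35, misses=15, false_alarms=10, correct_rejections=40))
```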

  11. A Comparative View of Face Perception

    PubMed Central

    Leopold, David A.; Rhodes, Gillian

    2010-01-01

    Face perception serves as the basis for much of human social exchange. Diverse information can be extracted about an individual from a single glance at their face, including their identity, emotional state, and direction of attention. Neuropsychological and fMRI experiments reveal a complex network of specialized areas in the human brain supporting these face-reading skills. Here we consider the evolutionary roots of human face perception by exploring the manner in which different animal species view and respond to faces. We focus on behavioral experiments collected from both primates and non-primates, assessing the types of information that animals are able to extract from the faces of their conspecifics, human experimenters, and natural predators. These experiments reveal that faces are an important category of visual stimuli for animals in all major vertebrate taxa, possibly reflecting the early emergence of neural specialization for faces in vertebrate evolution. At the same time, some aspects of facial perception are only evident in primates and a few other social mammals, and may therefore have evolved to suit the needs of complex social communication. Since the human brain likely utilizes both primitive and recently evolved neural specializations for the processing of faces, comparative studies may hold the key to understanding how these parallel circuits emerged during human evolution. PMID:20695655

  12. A comparative view of face perception.

    PubMed

    Leopold, David A; Rhodes, Gillian

    2010-08-01

    Face perception serves as the basis for much of human social exchange. Diverse information can be extracted about an individual from a single glance at their face, including their identity, emotional state, and direction of attention. Neuropsychological and functional magnetic resonance imaging (fMRI) experiments reveal a complex network of specialized areas in the human brain supporting these face-reading skills. Here we consider the evolutionary roots of human face perception by exploring the manner in which different animal species view and respond to faces. We focus on behavioral experiments collected from both primates and nonprimates, assessing the types of information that animals are able to extract from the faces of their conspecifics, human experimenters, and natural predators. These experiments reveal that faces are an important category of visual stimuli for animals in all major vertebrate taxa, possibly reflecting the early emergence of neural specialization for faces in vertebrate evolution. At the same time, some aspects of facial perception are only evident in primates and a few other social mammals, and may therefore have evolved to suit the needs of complex social communication. Because the human brain likely utilizes both primitive and recently evolved neural specializations for the processing of faces, comparative studies may hold the key to understanding how these parallel circuits emerged during human evolution. 2010 APA, all rights reserved

  13. Auditory Processing in Specific Language Impairment (SLI): Relations With the Perception of Lexical and Phrasal Stress.

    PubMed

    Richards, Susan; Goswami, Usha

    2015-08-01

    We investigated whether impaired acoustic processing is a factor in developmental language disorders. The amplitude envelope of the speech signal is known to be important in language processing. We examined whether impaired perception of amplitude envelope rise time is related to impaired perception of lexical and phrasal stress in children with specific language impairment (SLI). Twenty-two children aged between 8 and 12 years participated in this study. Twelve had SLI; 10 were typically developing controls. All children completed psychoacoustic tasks measuring rise time, intensity, frequency, and duration discrimination. They also completed 2 linguistic stress tasks measuring lexical and phrasal stress perception. The SLI group scored significantly below the typically developing controls on both stress perception tasks. Performance on stress tasks correlated with individual differences in auditory sensitivity. Rise time and frequency thresholds accounted for the most unique variance. Digit Span also contributed to task success for the SLI group. The SLI group had difficulties with both acoustic and stress perception tasks. Our data suggest that poor sensitivity to amplitude rise time and sound frequency significantly contributes to the stress perception skills of children with SLI. Other cognitive factors such as phonological memory are also implicated.

  14. Judging Normality and Attractiveness in Faces: Direct Evidence of a More Refined Representation for Own-Race, Young Adult Faces.

    PubMed

    Zhou, Xiaomei; Short, Lindsey A; Chan, Harmonie S J; Mondloch, Catherine J

    2016-09-01

    Young and older adults are more sensitive to deviations from normality in young than older adult faces, suggesting that the dimensions of face space are optimized for young adult faces. Here, we extend these findings to own-race faces and provide converging evidence using an attractiveness rating task. In Experiment 1, Caucasian and Chinese adults were shown own- and other-race face pairs; one member was undistorted and the other had compressed or expanded features. Participants indicated which member of each pair was more normal (a task that requires referencing a norm) and which was more expanded (a task that simply requires discrimination). Participants showed an own-race advantage in the normality task but not the discrimination task. In Experiment 2, participants rated the facial attractiveness of own- and other-race faces (Experiment 2a) or young and older adult faces (Experiment 2b). Between-rater variability in ratings of individual faces was higher for other-race and older adult faces; reduced consensus in attractiveness judgments reflects a less refined face space. Collectively, these results provide direct evidence that the dimensions of face space are optimized for own-race and young adult faces, which may underlie face race- and age-based deficits in recognition. © The Author(s) 2016.

  15. Barack Obama or Barry Dunham? The appearance of multiracial faces is affected by the names assigned to them.

    PubMed

    Hilliar, Kirin F; Kemp, Richard I

    2008-01-01

    Does semantic information in the form of stereotypical names influence participants' perceptions of the appearance of multiracial faces? Asian-Australian and European-Australian participants were asked to rate the appearance of Asian-Australian faces given typically Asian names, European-Australian faces given typically European names, multiracial faces given Asian names, and multiracial faces given European names. Participants rated the multiracial faces given European names as looking significantly 'more European' than the same multiracial faces given Asian names. This study demonstrates how socially derived expectations and stereotypes can influence face perception.

  16. Emotion Words Shape Emotion Percepts

    PubMed Central

    Gendron, Maria; Lindquist, Kristen A.; Barsalou, Lawrence; Barrett, Lisa Feldman

    2015-01-01

    People believe they see emotion written on the faces of other people. In an instant, simple facial actions are transformed into information about another's emotional state. The present research examined whether a perceiver unknowingly contributes to emotion perception with emotion word knowledge. We present 2 studies that together support a role for emotion concepts in the formation of visual percepts of emotion. As predicted, we found that perceptual priming of emotional faces (e.g., a scowling face) was disrupted when the accessibility of a relevant emotion word (e.g., anger) was temporarily reduced, demonstrating that the exact same face was encoded differently when a word was accessible versus when it was not. The implications of these findings for a linguistically relative view of emotion perception are discussed. PMID:22309717

  17. Recognizing Facial Slivers.

    PubMed

    Gilad-Gutnick, Sharon; Harmatz, Elia Samuel; Tsourides, Kleovoulos; Yovel, Galit; Sinha, Pawan

    2018-07-01

    We report here an unexpectedly robust ability of healthy human individuals (n = 40) to recognize extremely distorted needle-like facial images, challenging the well-entrenched notion that veridical spatial configuration is necessary for extracting facial identity. In face identification tasks of parametrically compressed internal and external features, we found that the sum of performances on each cue falls significantly short of performance on full faces, despite the equal visual information available from both measures (with full faces essentially being a superposition of internal and external features). We hypothesize that this large deficit stems from the use of positional information about how the internal features are positioned relative to the external features. To test this, we systematically changed the relations between internal and external features and found preferential encoding of vertical but not horizontal spatial relationships in facial representations (n = 20). Finally, we employ magnetoencephalography imaging (n = 20) to demonstrate a close mapping between the behavioral psychometric curve and the amplitude of the M250 face-familiarity, but not the M170 face-sensitive, evoked response field component, providing evidence that the M250 can be modulated by faces that are perceptually identifiable, irrespective of extreme distortions to the face's veridical configuration. We theorize that the tolerance to compressive distortions has evolved from the need to recognize faces across varying viewpoints. Our findings help clarify the important, but poorly defined, concept of facial configuration and also enable an association between behavioral performance and previously reported neural correlates of face perception.

  18. Reading the mind in the infant eyes: paradoxical effects of oxytocin on neural activity and emotion recognition in watching pictures of infant faces.

    PubMed

    Voorthuis, Alexandra; Riem, Madelon M E; Van IJzendoorn, Marinus H; Bakermans-Kranenburg, Marian J

    2014-09-11

    The neuropeptide oxytocin facilitates parental caregiving and is involved in the processing of infant vocal cues. In this randomized-controlled trial with functional magnetic resonance imaging we examined the influence of intranasally administered oxytocin on neural activity during emotion recognition in infant faces. Blood oxygenation level dependent (BOLD) responses during emotion recognition were measured in 50 women who were administered 16 IU of oxytocin or a placebo. Participants performed an adapted version of the Infant Facial Expressions of Emotions from Looking at Pictures (IFEEL pictures), a task that has been developed to assess the perception and interpretation of infants' facial expressions. Experimentally induced oxytocin levels increased activation in the inferior frontal gyrus (IFG), the middle temporal gyrus (MTG) and the superior temporal gyrus (STG). However, oxytocin decreased performance on the IFEEL picture task. Our findings suggest that oxytocin enhances processing of facial cues of the emotional state of infants on a neural level, but at the same time it may decrease the correct interpretation of infants' facial expressions on a behavior level. This article is part of a Special Issue entitled Oxytocin and Social Behav. © 2013 Published by Elsevier B.V.

  19. Effects of induced sad mood on facial emotion perception in young and older adults.

    PubMed

    Lawrie, Louisa; Jackson, Margaret C; Phillips, Louise H

    2018-02-15

    Older adults perceive less intense negative emotion in facial expressions compared to younger counterparts. Prior research has also demonstrated that mood alters facial emotion perception. Nevertheless, there is little evidence evaluating the interactive effects of age and mood on emotion perception. This study investigated the effects of sad mood on younger and older adults' perception of emotional and neutral faces. Participants rated the intensity of stimuli while listening to sad music and in silence. Measures of mood were administered. Younger and older participants rated sad faces as displaying stronger sadness when they experienced sad mood. While younger participants showed no influence of sad mood on happiness ratings of happy faces, older adults rated happy faces as conveying less happiness when they experienced sad mood. This study demonstrates how emotion perception can change when a controlled mood induction procedure is applied to alter mood in young and older participants.

  20. The Effect of Perceived Injustice on Appraisals of Physical Activity: An Examination of the Mediating Role of Attention Bias to Pain in a Chronic Low Back Pain Sample.

    PubMed

    Trost, Zina; Van Ryckeghem, Dimitri; Scott, Whitney; Guck, Adam; Vervoort, Tine

    2016-11-01

    The current study examined the relationship between perceived injustice and attentional bias (AB) toward pain among individuals with chronic low back pain asked to perform and appraise the pain and difficulty of a standardized set of common physical activities. A pictorial dot-probe task assessed AB toward pain stimuli (i.e., pain faces cueing pain), after which participants performed the physical tasks. Participants also rated face stimuli in terms of pain, sadness, and anger expression. As hypothesized, perceived injustice was positively associated with AB toward pain stimuli; additionally, perceived injustice and AB were positively associated with appraisals of pain and difficulty. Counter to expectations, AB did not mediate the relationship between perceived injustice and task appraisals, suggesting that AB is insufficient to explain this relationship. Exploratory analyses indicated that participants with higher levels of perceived injustice rated stimulus faces as sadder and angrier; no such differences emerged for pain ratings. To our knowledge, this is the first study to examine the association between perceived injustice and AB toward pain, as well as perceived injustice and in vivo appraisals of common physical activity. Results extend existing literature and suggest that attentional and potential interpretive bias should be considered in future research. This article identifies significant associations between perceived injustice, biased attention to pain, and appraisals of common physical activities among individuals with chronic low back pain. These findings suggest targets for intervention as well as directions for future research regarding individuals with high perceptions of injustice related to pain. Copyright © 2016 American Pain Society. Published by Elsevier Inc. All rights reserved.
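
    For context, a pictorial dot-probe attentional-bias index is conventionally scored as the mean response time on incongruent trials (probe replacing the neutral face) minus that on congruent trials (probe replacing the pain face), with positive values indicating bias toward pain cues. The sketch below is a generic illustration with made-up values, not the study's scoring script.

```python
# Illustrative dot-probe attentional-bias index from per-trial reaction times (ms).
import numpy as np

def attentional_bias_index(rt_incongruent_ms, rt_congruent_ms):
    """Positive values indicate attention drawn toward the pain cue."""
    return np.mean(rt_incongruent_ms) - np.mean(rt_congruent_ms)

# Hypothetical reaction times for one participant.
incongruent = [512, 498, 530, 505, 521]
congruent = [488, 475, 502, 480, 495]
print(f"AB index: {attentional_bias_index(incongruent, congruent):.1f} ms")
```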

  1. Applying the Scholarship of Teaching and Learning: Student Perceptions, Behaviours and Success Online and Face-to-Face

    ERIC Educational Resources Information Center

    Horspool, Agi; Lange, Carsten

    2012-01-01

    This study compares student perceptions, learning behaviours and success in online and face-to-face versions of a Principles of Microeconomics course. It follows a Scholarship of Teaching and Learning (SoTL) approach by using a cycle of empirical analysis, reflection and action to improve the learning experience for students. The online course…

  2. Teaching Time Investment: Does Online Really Take More Time than Face-to-Face?

    ERIC Educational Resources Information Center

    Van de Vord, Rebecca; Pogue, Korolyn

    2012-01-01

    Enrollments in online programs are growing, increasing demand for online courses. The perception that teaching online takes more time than teaching face-to-face creates concerns related to faculty workload. To date, the research on teaching time does not provide a clear answer as to the accuracy of this perception. This study was designed to…

  3. An Exploration of the Characteristics of Public Relations in Regards to Face-to-Face versus Distance Learning in Two Private Liberal Arts Higher Education Settings

    ERIC Educational Resources Information Center

    Winslow, Cessna Catherine Smith

    2014-01-01

    This study explored perceptions of Public Relations (PR) among graduate higher education publics regarding distance learning as contrasted with face-to-face learning contexts. The research questions assessed student, faculty and administrator perceptions of characteristics of PR: trust, communication, quality, respect and rigor. Participants…

  4. Patient Perceptions of Treatment Delivery Platforms for Cognitive Behavioral Therapy for Insomnia.

    PubMed

    Cheung, Janet M Y; Bartlett, Delwyn J; Armour, Carol L; Laba, Tracey-Lea; Saini, Bandana

    2017-03-21

    Stepped care has given rise to the proliferation of abbreviated CBT-I programs and delivery formats. This includes interventions delivered by allied health professionals and those delivered electronically through the Internet. This article aims to explore patient perceptions of electronic versus face-to-face (FTF) delivery platforms for (abbreviated) CBT-I. Participants were patients with insomnia from specialist sleep or psychology clinics and from the general community in Sydney, Australia. Semistructured interviews were conducted with patients with insomnia, guided by a schedule of questions and a choice task to explore patient perceptions of the different CBT-I treatment delivery platforms (e.g., perceived advantages and disadvantages or willingness to engage with either platform). Interviews were transcribed verbatim and analyzed using Framework Analysis. Participants also completed a battery of clinical mood and insomnia measures. Fifty-one interviews were conducted with patients with insomnia from specialist sleep or psychology clinics (n = 22) and the general community (n = 29). Synthesis of the qualitative data set revealed three themes pertinent to the patients' perspective toward electronic and FTF CBT-I delivery: Concepts of Efficacy, Concerns About Treatment, and Treatment on My Terms. Participants' choice to engage with either platform was also informed by diverse factors including perceived efficacy of treatment, personal commitments, lifestyle, and beliefs about sleep and insomnia. Clarifying patient treatment priorities and allaying potential concerns about engaging with an electronic treatment platform represent important steps for disseminating eCBT-I into mainstream practice.

  5. Decomposing fear perception: A combination of psychophysics and neurometric modeling of fear perception.

    PubMed

    Forscher, Emily C; Zheng, Yan; Ke, Zijun; Folstein, Jonathan; Li, Wen

    2016-10-01

    Emotion perception is known to involve multiple operations and waves of analysis, but specific nature of these processes remains poorly understood. Combining psychophysical testing and neurometric analysis of event-related potentials (ERPs) in a fear detection task with parametrically varied fear intensities (N=45), we sought to elucidate key processes in fear perception. Building on psychophysics marking fear perception thresholds, our neurometric model fitting identified several putative operations and stages: four key processes arose in sequence following face presentation - fear-neutral categorization (P1 at 100ms), fear detection (P300 at 320ms), valuation (early subcomponent of the late positive potential/LPP at 400-500ms) and conscious awareness (late subcomponent LPP at 500-600ms). Furthermore, within-subject brain-behavior association suggests that initial emotion categorization was mandatory and detached from behavior whereas valuation and conscious awareness directly impacted behavioral outcome (explaining 17% and 31% of the total variance, respectively). The current study thus reveals the chronometry of fear perception, ascribing psychological meaning to distinct underlying processes. The combination of early categorization and late valuation of fear reconciles conflicting (categorical versus dimensional) emotion accounts, lending support to a hybrid model. Importantly, future research could specifically interrogate these psychological processes in various behaviors and psychopathologies (e.g., anxiety and depression). Copyright © 2016 Elsevier Ltd. All rights reserved.
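
    The psychophysical side of such an analysis can be illustrated by fitting a logistic psychometric function to detection rates across parametrically varied fear intensities and reading off the detection threshold. This is a minimal sketch with invented values; the authors' neurometric model fitting of ERP components is more involved.

```python
# Minimal psychometric-function sketch: logistic fit of detection rate vs. fear intensity.
import numpy as np
from scipy.optimize import curve_fit

def logistic(intensity, threshold, slope):
    return 1.0 / (1.0 + np.exp(-slope * (intensity - threshold)))

intensity = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6])     # hypothetical morph levels
p_detect = np.array([0.05, 0.10, 0.30, 0.55, 0.80, 0.92, 0.97])

(threshold, slope), _cov = curve_fit(logistic, intensity, p_detect, p0=[0.3, 10.0])
print(f"estimated detection threshold: {threshold:.2f}, slope: {slope:.1f}")
```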

  6. Anterior medial prefrontal cortex exhibits activation during task preparation but deactivation during task execution.

    PubMed

    Koshino, Hideya; Minamoto, Takehiro; Ikeda, Takashi; Osaka, Mariko; Otsuka, Yuki; Osaka, Naoyuki

    2011-01-01

    The anterior prefrontal cortex (PFC) exhibits activation during some cognitive tasks, including episodic memory, reasoning, attention, multitasking, task sets, decision making, mentalizing, and processing of self-referenced information. However, the medial part of anterior PFC is part of the default mode network (DMN), which shows deactivation during various goal-directed cognitive tasks compared to a resting baseline. One possible factor for this pattern is that activity in the anterior medial PFC (MPFC) is affected by dynamic allocation of attentional resources depending on task demands. We investigated this possibility using an event related fMRI with a face working memory task. Sixteen students participated in a single fMRI session. They were asked to form a task set to remember the faces (Face memory condition) or to ignore them (No face memory condition), then they were given 6 seconds of preparation period before the onset of the face stimuli. During this 6-second period, four single digits were presented one at a time at the center of the display, and participants were asked to add them and to remember the final answer. When participants formed a task set to remember faces, the anterior MPFC exhibited activation during a task preparation period but deactivation during a task execution period within a single trial. The results suggest that the anterior MPFC plays a role in task set formation but is not involved in execution of the face working memory task. Therefore, when attentional resources are allocated to other brain regions during task execution, the anterior MPFC shows deactivation. The results suggest that activation and deactivation in the anterior MPFC are affected by dynamic allocation of processing resources across different phases of processing.

  7. Anterior Medial Prefrontal Cortex Exhibits Activation during Task Preparation but Deactivation during Task Execution

    PubMed Central

    Koshino, Hideya; Minamoto, Takehiro; Ikeda, Takashi; Osaka, Mariko; Otsuka, Yuki; Osaka, Naoyuki

    2011-01-01

    Background The anterior prefrontal cortex (PFC) exhibits activation during some cognitive tasks, including episodic memory, reasoning, attention, multitasking, task sets, decision making, mentalizing, and processing of self-referenced information. However, the medial part of anterior PFC is part of the default mode network (DMN), which shows deactivation during various goal-directed cognitive tasks compared to a resting baseline. One possible factor for this pattern is that activity in the anterior medial PFC (MPFC) is affected by dynamic allocation of attentional resources depending on task demands. We investigated this possibility using an event related fMRI with a face working memory task. Methodology/Principal Findings Sixteen students participated in a single fMRI session. They were asked to form a task set to remember the faces (Face memory condition) or to ignore them (No face memory condition), then they were given 6 seconds of preparation period before the onset of the face stimuli. During this 6-second period, four single digits were presented one at a time at the center of the display, and participants were asked to add them and to remember the final answer. When participants formed a task set to remember faces, the anterior MPFC exhibited activation during a task preparation period but deactivation during a task execution period within a single trial. Conclusions/Significance The results suggest that the anterior MPFC plays a role in task set formation but is not involved in execution of the face working memory task. Therefore, when attentional resources are allocated to other brain regions during task execution, the anterior MPFC shows deactivation. The results suggest that activation and deactivation in the anterior MPFC are affected by dynamic allocation of processing resources across different phases of processing. PMID:21829668

  8. Developmental changes in analytic and holistic processes in face perception.

    PubMed

    Joseph, Jane E; DiBartolo, Michelle D; Bhatt, Ramesh S

    2015-01-01

    Although infants demonstrate sensitivity to some kinds of perceptual information in faces, many face capacities continue to develop throughout childhood. One debate is the degree to which children perceive faces analytically versus holistically and how these processes undergo developmental change. In the present study, school-aged children and adults performed a perceptual matching task with upright and inverted face and house pairs that varied in similarity of featural or second-order configural information. Holistic processing was operationalized as the degree of serial processing when discriminating faces and houses [i.e., increased reaction time (RT), as more features or spacing relations were shared between stimuli]. Analytical processing was operationalized as the degree of parallel processing (or no change in RT as a function of greater similarity of features or spatial relations). Adults showed the most evidence for holistic processing (most strongly for second-order faces) and holistic processing was weaker for inverted faces and houses. Younger children (6-8 years), in contrast, showed analytical processing across all experimental manipulations. Older children (9-11 years) showed an intermediate pattern with a trend toward holistic processing of second-order faces like adults, but parallel processing in other experimental conditions like younger children. These findings indicate that holistic face representations emerge around 10 years of age. In adults both second-order and featural information are incorporated into holistic representations, whereas older children only incorporate second-order information. Holistic processing was not evident in younger children. Hence, the development of holistic face representations relies on second-order processing initially then incorporates featural information by adulthood.
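
    The operationalization described above can be summarized numerically: the slope of response time over the number of shared features or spacing relations indexes serial (holistic) processing, whereas a near-flat slope indexes parallel (analytic) processing. The sketch below is only an illustration of that index with hypothetical RTs, not the authors' analysis.

```python
# Illustrative RT-slope index: larger slope = more serial (holistic) processing.
import numpy as np
from scipy import stats

shared = np.array([0, 1, 2, 3])                 # number of shared features/relations

def processing_slope(rt_ms):
    """Slope of mean RT (ms) over shared-information level."""
    return stats.linregress(shared, rt_ms).slope

print(processing_slope([650, 700, 760, 820]))   # adult-like, holistic pattern
print(processing_slope([640, 645, 650, 642]))   # child-like, analytic pattern
```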

  9. The own-age face recognition bias is task dependent.

    PubMed

    Proietti, Valentina; Macchi Cassia, Viola; Mondloch, Catherine J

    2015-08-01

    The own-age bias (OAB) in face recognition (more accurate recognition of own-age than other-age faces) is robust among young adults but not older adults. We investigated the OAB under two different task conditions. In Experiment 1 young and older adults (who reported more recent experience with own than other-age faces) completed a match-to-sample task with young and older adult faces; only young adults showed an OAB. In Experiment 2 young and older adults completed an identity detection task in which we manipulated the identity strength of target and distracter identities by morphing each face with an average face in 20% steps. Accuracy increased with identity strength and facial age influenced older adults' (but not younger adults') strategy, but there was no evidence of an OAB. Collectively, these results suggest that the OAB depends on task demands and may be absent when searching for one identity. © 2014 The British Psychological Society.
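
    The identity-strength manipulation can be thought of as a simple linear blend between an individual face and an average face; here is a minimal sketch with a random array standing in for real, aligned face images (not the study's stimulus code).

```python
# Illustrative identity-strength morph: linear blend of a face with an average face.
import numpy as np

def morph(face, average_face, identity_strength):
    """identity_strength in [0, 1]; 0.2 steps give 20%, 40%, ... identity."""
    return identity_strength * face + (1.0 - identity_strength) * average_face

face = np.random.default_rng(3).random((64, 64))   # stand-in for an aligned face image
average = np.full((64, 64), 0.5)                   # stand-in for the average face
levels = [morph(face, average, w) for w in (0.2, 0.4, 0.6, 0.8, 1.0)]
print(len(levels), levels[0].shape)
```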

  10. Six-month-old infants' perception of the hollow face illusion: evidence for a general convexity bias.

    PubMed

    Corrow, Sherryse L; Mathison, Jordan; Granrud, Carl E; Yonas, Albert

    2014-01-01

    Corrow, Granrud, Mathison, and Yonas (2011, Perception, 40, 1376-1383) found evidence that 6-month-old infants perceive the hollow face illusion. In the present study we asked whether 6-month-old infants perceive illusory depth reversal for a nonface object and whether infants' perception of the hollow face illusion is affected by mask orientation inversion. In experiment 1 infants viewed a concave bowl, and their reaches were recorded under monocular and binocular viewing conditions. Infants reached to the bowl as if it were convex significantly more often in the monocular than in the binocular viewing condition. These results suggest that infants perceive illusory depth reversal with a nonface stimulus and that the infant visual system has a bias to perceive objects as convex. Infants in experiment 2 viewed a concave face-like mask in upright and inverted orientations. Infants reached to the display as if it were convex more in the monocular than in the binocular condition; however, mask orientation had no effect on reaching. Previous findings that adults' perception of the hollow face illusion is affected by mask orientation inversion have been interpreted as evidence of stored-knowledge influences on perception. However, we found no evidence of such influences in infants, suggesting that their perception of this illusion may not be affected by stored knowledge, and that perceived depth reversal is not face-specific in infants.

  11. Extracted facial feature of racial closely related faces

    NASA Astrophysics Data System (ADS)

    Liewchavalit, Chalothorn; Akiba, Masakazu; Kanno, Tsuneo; Nagao, Tomoharu

    2010-02-01

    Human faces contain a lot of demographic information such as identity, gender, age, race and emotion. Human beings can perceive these pieces of information and use them as important clues in social interaction with other people. Race perception is considered one of the most delicate and sensitive parts of face perception. There is much research concerning image-based race recognition, but most of it focuses on major race groups such as Caucasoid, Negroid and Mongoloid. This paper focuses on how people classify the race of racially closely related groups. As a sample of a racially closely related group, we chose Japanese and Thai faces to represent the difference between Northern and Southern Mongoloid. Three psychological experiments were performed to study the strategies of face perception in race classification. The results of the psychological experiments suggest that race perception is an ability that can be learned. Eyes and eyebrows attract the most attention, and the eyes are a significant factor in race perception. Principal Component Analysis (PCA) was performed to extract facial features of the sample race groups. Extracted race features of texture and shape were used to synthesize faces. The results suggest that racial features rely on detailed texture rather than shape. This research is fundamental research on race perception, which is essential for the establishment of human-like race recognition systems.
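
    A bare-bones version of the PCA step (a generic eigenface-style sketch on random arrays, not the authors' implementation or data, which also modeled shape) might look like the following.

```python
# Illustrative PCA sketch: extract texture components from flattened face images
# and recombine them to synthesize a face.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
faces = rng.random((60, 64 * 64))           # 60 hypothetical 64x64 grayscale face images

pca = PCA(n_components=10)
weights = pca.fit_transform(faces)          # per-face coordinates in "face space"
eigenfaces = pca.components_                # principal components (texture features)

# Synthesize a face by recombining the mean face with weighted components.
synthetic = pca.mean_ + weights[0] @ eigenfaces
print(eigenfaces.shape, synthetic.shape)    # (10, 4096) (4096,)
```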

  12. Temporal lobe structures and facial emotion recognition in schizophrenia patients and nonpsychotic relatives.

    PubMed

    Goghari, Vina M; Macdonald, Angus W; Sponheim, Scott R

    2011-11-01

    Temporal lobe abnormalities and emotion recognition deficits are prominent features of schizophrenia and appear related to the diathesis of the disorder. This study investigated whether temporal lobe structural abnormalities were associated with facial emotion recognition deficits in schizophrenia and related to genetic liability for the disorder. Twenty-seven schizophrenia patients, 23 biological family members, and 36 controls participated. Several temporal lobe regions (fusiform, superior temporal, middle temporal, amygdala, and hippocampus) previously associated with face recognition in normative samples and found to be abnormal in schizophrenia were evaluated using volumetric analyses. Participants completed a facial emotion recognition task and an age recognition control task under time-limited and self-paced conditions. Temporal lobe volumes were tested for associations with task performance. Group status explained 23% of the variance in temporal lobe volume. Left fusiform gray matter volume was decreased by 11% in patients and 7% in relatives compared with controls. Schizophrenia patients additionally exhibited smaller hippocampal and middle temporal volumes. Patients were unable to improve facial emotion recognition performance with unlimited time to make a judgment but were able to improve age recognition performance. Patients additionally showed a relationship between reduced temporal lobe gray matter and poor facial emotion recognition. For the middle temporal lobe region, the relationship between greater volume and better task performance was specific to facial emotion recognition and not age recognition. Because schizophrenia patients exhibited a specific deficit in emotion recognition not attributable to a generalized impairment in face perception, impaired emotion recognition may serve as a target for interventions.

  13. How Context Influences Our Perception of Emotional Faces: A Behavioral Study on the Kuleshov Effect.

    PubMed

    Calbi, Marta; Heimann, Katrin; Barratt, Daniel; Siri, Francesca; Umiltà, Maria A; Gallese, Vittorio

    2017-01-01

    Facial expressions are of major importance in understanding the mental and emotional states of others. So far, most studies on the perception and comprehension of emotions have used isolated facial expressions as stimuli; for example, photographs of actors displaying facial expressions corresponding to one of the so called 'basic emotions.' However, our real experience during social interactions is different: facial expressions of emotion are mostly perceived in a wider context, constituted by body language, the surrounding environment, and our beliefs and expectations. Already in the early twentieth century, the Russian filmmaker Lev Kuleshov argued that such context, established by intermediate shots of strong emotional content, could significantly change our interpretation of facial expressions in film. Prior experiments have shown behavioral effects pointing in this direction, but have only used static images as stimuli. Our study used a more ecological design with participants watching film sequences of neutral faces, crosscut with scenes of strong emotional content (evoking happiness or fear, plus neutral stimuli as a baseline condition). The task was to rate the emotion displayed by a target person's face in terms of valence, arousal, and category. Results clearly demonstrated the presence of a significant effect in terms of both valence and arousal in the fear condition only. Moreover, participants tended to categorize the target person's neutral facial expression choosing the emotion category congruent with the preceding context. Our results highlight the context-sensitivity of emotions and the importance of studying them under ecologically valid conditions.

  14. Early visual experience and the recognition of basic facial expressions: involvement of the middle temporal and inferior frontal gyri during haptic identification by the early blind

    PubMed Central

    Kitada, Ryo; Okamoto, Yuko; Sasaki, Akihiro T.; Kochiyama, Takanori; Miyahara, Motohide; Lederman, Susan J.; Sadato, Norihiro

    2012-01-01

    Face perception is critical for social communication. Given its fundamental importance in the course of evolution, the innate neural mechanisms can anticipate the computations necessary for representing faces. However, the effect of visual deprivation on the formation of neural mechanisms that underlie face perception is largely unknown. We previously showed that sighted individuals can recognize basic facial expressions by haptics surprisingly well. Moreover, the inferior frontal gyrus (IFG) and posterior superior temporal sulcus (pSTS) in the sighted subjects are involved in haptic and visual recognition of facial expressions. Here, we conducted both psychophysical and functional magnetic-resonance imaging (fMRI) experiments to determine the nature of the neural representation that subserves the recognition of basic facial expressions in early blind individuals. In a psychophysical experiment, both early blind and sighted subjects haptically identified basic facial expressions at levels well above chance. In the subsequent fMRI experiment, both groups haptically identified facial expressions and shoe types (control). The sighted subjects then completed the same task visually. Within brain regions activated by the visual and haptic identification of facial expressions (relative to that of shoes) in the sighted group, corresponding haptic identification in the early blind activated regions in the inferior frontal and middle temporal gyri. These results suggest that the neural system that underlies the recognition of basic facial expressions develops supramodally even in the absence of early visual experience. PMID:23372547

  15. Early visual experience and the recognition of basic facial expressions: involvement of the middle temporal and inferior frontal gyri during haptic identification by the early blind.

    PubMed

    Kitada, Ryo; Okamoto, Yuko; Sasaki, Akihiro T; Kochiyama, Takanori; Miyahara, Motohide; Lederman, Susan J; Sadato, Norihiro

    2013-01-01

    Face perception is critical for social communication. Given its fundamental importance over the course of evolution, innate neural mechanisms may anticipate the computations necessary for representing faces. However, the effect of visual deprivation on the formation of neural mechanisms that underlie face perception is largely unknown. We previously showed that sighted individuals can recognize basic facial expressions surprisingly well by haptics. Moreover, the inferior frontal gyrus (IFG) and posterior superior temporal sulcus (pSTS) in the sighted subjects are involved in haptic and visual recognition of facial expressions. Here, we conducted both psychophysical and functional magnetic-resonance imaging (fMRI) experiments to determine the nature of the neural representation that subserves the recognition of basic facial expressions in early blind individuals. In a psychophysical experiment, both early blind and sighted subjects haptically identified basic facial expressions at levels well above chance. In the subsequent fMRI experiment, both groups haptically identified facial expressions and shoe types (control). The sighted subjects then completed the same task visually. Within brain regions activated by the visual and haptic identification of facial expressions (relative to that of shoes) in the sighted group, corresponding haptic identification in the early blind activated regions in the inferior frontal and middle temporal gyri. These results suggest that the neural system that underlies the recognition of basic facial expressions develops supramodally even in the absence of early visual experience.

  16. Sex differences in face gender recognition: an event-related potential study.

    PubMed

    Sun, Yueting; Gao, Xiaochao; Han, Shihui

    2010-04-23

    Multiple levels of neurocognitive processing are involved in face processing in humans. The present study examined whether early face processing, such as structural encoding, is modulated by task demands that manipulate attention to perceptual or social features of faces, and whether any such effect differs between men and women. Event-related brain potentials were recorded from male and female adults while they identified a low-level perceptual feature of faces (i.e., face orientation) and a high-level social feature of faces (i.e., gender). We found that task demands requiring the processing of face orientation or face gender modulated both the early occipital/temporal negativity (N170) and the late central/parietal positivity (P3). The N170 amplitude was smaller in the gender identification task than in the orientation identification task, whereas the P3 amplitude was larger in the gender identification task than in the orientation identification task. In addition, these effects were much stronger in women than in men. Our findings suggest that attention to social information in faces, such as gender, modulates both the early encoding of facial structure and the late evaluative processing of faces to a greater degree in women than in men.
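
    To make the ERP measures above concrete, the following minimal Python sketch shows how a component's mean amplitude (e.g., the N170 or P3) is commonly quantified from epoched EEG data. All arrays, channel indices, and time windows are invented for illustration and are not taken from the study.

      import numpy as np

      # Hypothetical epoched EEG: (n_trials, n_channels, n_samples), in microvolts,
      # sampled at 500 Hz with the epoch starting 200 ms before stimulus onset.
      rng = np.random.default_rng(0)
      epochs = rng.normal(0.0, 5.0, size=(120, 64, 500))
      sfreq, t_start = 500.0, -0.2
      times = t_start + np.arange(epochs.shape[2]) / sfreq

      def mean_amplitude(epochs, times, tmin, tmax, channels):
          """Average voltage over a time window and a set of channels, per trial."""
          win = (times >= tmin) & (times <= tmax)
          return epochs[:, channels, :][:, :, win].mean(axis=(1, 2))

      # Illustrative windows and sites: N170 at occipito-temporal channels, P3 at centro-parietal ones.
      n170 = mean_amplitude(epochs, times, 0.15, 0.20, channels=[50, 51, 58, 59])
      p3 = mean_amplitude(epochs, times, 0.30, 0.60, channels=[30, 31, 47])
      print(n170.mean(), p3.mean())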

  17. The role of the amygdala and the basal ganglia in visual processing of central vs. peripheral emotional content.

    PubMed

    Almeida, Inês; van Asselen, Marieke; Castelo-Branco, Miguel

    2013-09-01

    In human cognition, most relevant stimuli, such as faces, are processed in central vision. However, it is widely believed that recognition of relevant stimuli (e.g. threatening animal faces) at peripheral locations is also important due to their survival value. Moreover, task instructions have been shown to modulate brain regions involved in threat recognition (e.g. the amygdala). In this respect it is also controversial whether tasks requiring explicit focus on stimulus threat content vs. implicit processing differently engage primitive subcortical structures involved in emotional appraisal. Here we addressed the role of central vs. peripheral processing in the human amygdala using threatening vs. non-threatening animal face stimuli. First, a simple animal face recognition task with threatening and non-threatening animal faces, as well as non-face control stimuli, was employed in naïve subjects (implicit task). A subsequent task was then performed with the same stimulus categories (but different stimuli) in which subjects were told to explicitly detect threat signals. We found lateralized amygdala responses both to the spatial location of stimuli and to the threatening content of faces depending on the task performed: the right amygdala showed increased responses to central compared with left-presented stimuli specifically during the threat detection task, while the left amygdala was better able to discriminate threatening faces from non-facial displays during the animal face recognition task. Additionally, the right amygdala responded to faces during the threat detection task, but only when they were centrally presented. Moreover, we found no evidence for superior responses of the amygdala to peripheral stimuli. Importantly, we found that striatal regions activate differentially depending on peripheral vs. central processing of threatening faces: peripheral processing of these stimuli more strongly activated the putamen, while central processing engaged mainly the caudate nucleus. We conclude that the human amygdala has a central bias for face stimuli, and that visual processing recruits different striatal regions, putamen- or caudate-based, depending on the task and on whether peripheral or central visual processing is involved. © 2013 Elsevier Ltd. All rights reserved.

  18. Flexible and inflexible task sets: asymmetric interference when switching between emotional expression, sex, and age classification of perceived faces.

    PubMed

    Schuch, Stefanie; Werheid, Katja; Koch, Iring

    2012-01-01

    The present study investigated whether the processing characteristics of categorizing emotional facial expressions are different from those of categorizing facial age and sex information. Given that emotions change rapidly, it was hypothesized that processing facial expressions involves a more flexible task set that causes less between-task interference than the task sets involved in processing age or sex of a face. Participants switched between three tasks: categorizing a face as looking happy or angry (emotion task), young or old (age task), and male or female (sex task). Interference between tasks was measured by global interference and response interference. Both measures revealed patterns of asymmetric interference. Global between-task interference was reduced when a task was mixed with the emotion task. Response interference, as measured by congruency effects, was larger for the emotion task than for the nonemotional tasks. The results support the idea that processing emotional facial expression constitutes a more flexible task set that causes less interference (i.e., task-set "inertia") than processing the age or sex of a face.

  19. Environmental Inversion Effects in Face Perception

    ERIC Educational Resources Information Center

    Davidenko, Nicolas; Flusberg, Stephen J.

    2012-01-01

    Visual processing is highly sensitive to stimulus orientation; for example, face perception is drastically worse when faces are oriented inverted vs. upright. However, stimulus orientation must be established in relation to a particular reference frame, and in most studies, several reference frames are conflated. Which reference frame(s) matter in…

  20. Relationship between Speech Production and Perception in People Who Stutter.

    PubMed

    Lu, Chunming; Long, Yuhang; Zheng, Lifen; Shi, Guang; Liu, Li; Ding, Guosheng; Howell, Peter

    2016-01-01

    Speech production difficulties are apparent in people who stutter (PWS). PWS also have difficulties in speech perception compared to controls. It is unclear whether the speech perception difficulties in PWS are independent of, or related to, their speech production difficulties. To investigate this issue, functional MRI data were collected on 13 PWS and 13 controls whilst the participants performed a speech production task and a speech perception task. PWS performed more poorly than controls in the perception task, and this poorer performance was associated with a functional activity difference in the left anterior insula (part of the speech motor area) compared to controls. PWS also showed a functional activity difference in this and the surrounding area [left inferior frontal cortex (IFC)/anterior insula] in the production task compared to controls. Conjunction analysis showed that the functional activity differences between PWS and controls in the left IFC/anterior insula coincided across the perception and production tasks. Furthermore, Granger Causality Analysis on the resting-state fMRI data of the participants showed that the causal connection from the left IFC/anterior insula to an area in the left primary auditory cortex (Heschl's gyrus) differed significantly between PWS and controls. The strength of this connection correlated significantly with performance in the perception task. These results suggest that speech perception difficulties in PWS are associated with anomalous functional activity in the speech motor area, and that the altered functional connectivity from this area to the auditory area plays a role in the speech perception difficulties of PWS.
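
    For readers unfamiliar with the Granger causality analysis mentioned above, the sketch below illustrates the general idea using the statsmodels implementation. The two time series, their lag, and the region labels are invented for illustration; this does not reproduce the study's analysis.

      import numpy as np
      from statsmodels.tsa.stattools import grangercausalitytests

      # Hypothetical resting-state time courses (arbitrary units) from two regions of
      # interest, e.g. left IFC/anterior insula and left Heschl's gyrus; 200 time points.
      rng = np.random.default_rng(1)
      ifc = rng.normal(size=200)
      heschl = 0.5 * np.roll(ifc, 2) + rng.normal(scale=0.8, size=200)  # lagged influence

      # Test whether the second column (ifc) Granger-causes the first column (heschl).
      data = np.column_stack([heschl, ifc])
      results = grangercausalitytests(data, maxlag=3)
      for lag, res in results.items():
          f_stat, p_value, _, _ = res[0]['ssr_ftest']
          print(f"lag {lag}: F={f_stat:.2f}, p={p_value:.3f}")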

  1. Holistic processing, contact, and the other-race effect in face recognition.

    PubMed

    Zhao, Mintao; Hayward, William G; Bülthoff, Isabelle

    2014-12-01

    Face recognition, holistic processing, and processing of configural and featural facial information are known to be influenced by face race, with better performance for own- than other-race faces. However, whether these various other-race effects (OREs) arise from the same underlying mechanisms or from different processes remains unclear. The present study addressed this question by measuring the OREs in a set of face recognition tasks, and testing whether these OREs are correlated with each other. Participants performed different tasks probing (1) face recognition, (2) holistic processing, (3) processing of configural information, and (4) processing of featural information for both own- and other-race faces. Their contact with other-race people was also assessed with a questionnaire. The results show significant OREs in tasks testing face memory and processing of configural information, but not in tasks testing either holistic processing or processing of featural information. Importantly, there was no cross-task correlation between any of the measured OREs. Moreover, the level of other-race contact predicted only the OREs obtained in tasks testing face memory and processing of configural information. These results indicate that these various cross-race differences originate from different aspects of face processing, contrary to the view that the ORE in face recognition is due to cross-race differences in terms of holistic processing. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  2. Infants' Response to Maternal Mirroring in the Still Face and Replay Tasks

    ERIC Educational Resources Information Center

    Bigelow, Ann E.; Walden, Laura M.

    2009-01-01

    Infants' response to maternal mirroring was investigated in 4-month-old infants. Mother-infant dyads participated in the still face and replay tasks. Infants were grouped by those whose mothers did and did not mirror their behavior in the interactive phases of the tasks. In the still face task, infants with maternal mirroring showed more…

  3. Early visual responses predict conscious face perception within and between subjects during binocular rivalry

    PubMed Central

    Sandberg, Kristian; Bahrami, Bahador; Kanai, Ryota; Barnes, Gareth Robert; Overgaard, Morten; Rees, Geraint

    2014-01-01

    Previous studies indicate that conscious face perception may be related to neural activity in a large time window around 170-800ms after stimulus presentation, yet in the majority of these studies changes in conscious experience are confounded with changes in physical stimulation. Using multivariate classification on MEG data recorded when participants reported changes in conscious perception evoked by binocular rivalry between a face and a grating, we showed that only MEG signals in the 120-320ms time range, peaking at the M170 around 180ms and the P2m at around 260ms, reliably predicted conscious experience. Conscious perception could not only be decoded significantly better than chance from the sensors that showed the largest average difference, as previous studies suggest, but also from patterns of activity across groups of occipital sensors that individually were unable to predict perception better than chance. Additionally, source space analyses showed that sources in the early and late visual system predicted conscious perception more accurately than frontal and parietal sites, although conscious perception could also be decoded there. Finally, the patterns of neural activity associated with conscious face perception generalized from one participant to another around the times of maximum prediction accuracy. Our work thus demonstrates that the neural correlates of particular conscious contents (here, faces) are highly consistent in time and space within individuals and that these correlates are shared to some extent between individuals. PMID:23281780
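
    A minimal sketch of time-resolved multivariate ("decoding") analysis, in the spirit of the MEG classification described above. Every array, dimension, and label here is hypothetical; the intent is only to show how a classifier can be cross-validated on the sensor pattern at each time point.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      # Hypothetical MEG epochs: (n_trials, n_sensors, n_times), with a label per trial
      # indicating the reported percept (0 = grating, 1 = face). All values are simulated.
      rng = np.random.default_rng(2)
      X = rng.normal(size=(200, 102, 120))
      y = rng.integers(0, 2, size=200)

      clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

      # Time-resolved decoding: fit and cross-validate a classifier at each time sample,
      # using the pattern across sensors as features.
      accuracy = np.array([
          cross_val_score(clf, X[:, :, t], y, cv=5).mean()
          for t in range(X.shape[2])
      ])
      print("peak decoding accuracy:", accuracy.max())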

  4. Task relevance regulates the interaction between reward expectation and emotion.

    PubMed

    Wei, Ping; Kang, Guanlan

    2014-06-01

    In the present study, we investigated the impact of reward expectation on the processing of emotional facial expression using a cue-target paradigm. A cue indicating the reward condition of each trial (incentive vs. non-incentive) was followed by the presentation of a picture of an emotional face, the target. Participants were asked to discriminate the emotional expression of the target face in Experiment 1, to discriminate the gender of the target face in Experiment 2, and to judge a number superimposed on the center of the target face as even or odd in Experiment 3, rendering the emotional expression of the target face as task relevant in Experiment 1 but task irrelevant in Experiments 2 and 3. Faster reaction times (RTs) were observed in the monetary incentive condition than in the non-incentive condition, demonstrating the effect of reward on facilitating task concentration. Moreover, the reward effect (i.e., RTs in non-incentive conditions versus incentive conditions) was larger for emotional faces than for neutral faces when emotional expression was task relevant but not when it was task irrelevant. The findings suggest that top-down incentive motivation biased attentional processing toward task-relevant stimuli, and that task relevance played an important role in regulating the influence of reward expectation on the processing of emotional stimuli.

  5. Development of Neural Sensitivity to Face Identity Correlates with Perceptual Discriminability.

    PubMed

    Natu, Vaidehi S; Barnett, Michael A; Hartley, Jake; Gomez, Jesse; Stigliani, Anthony; Grill-Spector, Kalanit

    2016-10-19

    Face perception is subserved by a series of face-selective regions in the human ventral stream, which undergo prolonged development from childhood to adulthood. However, it is unknown how neural development of these regions relates to the development of face-perception abilities. Here, we used functional magnetic resonance imaging (fMRI) to measure brain responses of ventral occipitotemporal regions in children (ages, 5-12 years) and adults (ages, 19-34 years) when they viewed faces that parametrically varied in dissimilarity. Since similar faces generate lower responses than dissimilar faces due to fMRI adaptation, this design objectively evaluates neural sensitivity to face identity across development. Additionally, a subset of subjects participated in a behavioral experiment to assess perceptual discriminability of face identity. Our data reveal three main findings: (1) neural sensitivity to face identity increases with age in face-selective but not object-selective regions; (2) the amplitude of responses to faces increases with age in both face-selective and object-selective regions; and (3) perceptual discriminability of face identity is correlated with the neural sensitivity to face identity of face-selective regions. In contrast, perceptual discriminability is not correlated with the amplitude of response in face-selective regions or of responses of object-selective regions. These data suggest that developmental increases in neural sensitivity to face identity in face-selective regions improve perceptual discriminability of faces. Our findings significantly advance the understanding of the neural mechanisms of development of face perception and open new avenues for using fMRI adaptation to study the neural development of high-level visual and cognitive functions more broadly. Face perception, which is critical for daily social interactions, develops from childhood to adulthood. However, it is unknown what developmental changes in the brain lead to improved performance. Using fMRI in children and adults, we find that from childhood to adulthood, neural sensitivity to changes in face identity increases in face-selective regions. Critically, subjects' perceptual discriminability among faces is linked to neural sensitivity: participants with higher neural sensitivity in face-selective regions demonstrate higher perceptual discriminability. Thus, our results suggest that developmental increases in face-selective regions' sensitivity to face identity improve perceptual discrimination of faces. These findings significantly advance understanding of the neural mechanisms underlying the development of face perception and have important implications for assessing both typical and atypical development. Copyright © 2016 the authors 0270-6474/16/3610893-15$15.00/0.
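
    The logic of the fMRI-adaptation design above can be summarized in a few lines: neural sensitivity is the slope of ROI response against face dissimilarity (release from adaptation), which can then be correlated with behavioral discriminability across subjects. The values below are invented solely to illustrate that logic.

      import numpy as np
      from scipy import stats

      # Hypothetical per-subject data: mean fMRI response of a face-selective ROI at four
      # levels of face dissimilarity, plus a behavioral discriminability score (e.g. d').
      dissimilarity = np.array([0.0, 0.33, 0.66, 1.0])
      roi_response = np.array([            # one row per subject (illustrative values)
          [0.20, 0.35, 0.55, 0.70],
          [0.25, 0.30, 0.40, 0.45],
          [0.15, 0.40, 0.60, 0.80],
          [0.30, 0.42, 0.50, 0.62],
          [0.18, 0.25, 0.33, 0.36],
      ])
      dprime = np.array([1.8, 1.1, 2.3, 1.6, 0.9])

      # Neural sensitivity to identity = slope of response vs. dissimilarity
      # (i.e., the amount of release from adaptation).
      slopes = np.array([stats.linregress(dissimilarity, r).slope for r in roi_response])

      # Does neural sensitivity track perceptual discriminability across subjects?
      r, p = stats.pearsonr(slopes, dprime)
      print(f"r={r:.2f}, p={p:.3f}")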

  6. A note on age differences in mood-congruent vs. mood-incongruent emotion processing in faces

    PubMed Central

    Voelkle, Manuel C.; Ebner, Natalie C.; Lindenberger, Ulman; Riediger, Michaela

    2014-01-01

    This article addresses four interrelated research questions: (1) Does experienced mood affect emotion perception in faces and is this perception mood-congruent or mood-incongruent? (2) Are there age-group differences in the interplay between experienced mood and emotion perception? (3) Does emotion perception in faces change as a function of the temporal sequence of study sessions and stimuli presentation, and (4) does emotion perception in faces serve a mood-regulatory function? One hundred fifty-four adults of three different age groups (younger: 20–31 years; middle-aged: 44–55 years; older adults: 70–81 years) were asked to provide multidimensional emotion ratings of a total of 1026 face pictures of younger, middle-aged, and older men and women, each displaying six different prototypical (primary) emotional expressions. By analyzing the likelihood of ascribing an additional emotional expression to a face whose primary emotion had been correctly recognized, the multidimensional rating approach permits the study of emotion perception while controlling for emotion recognition. Following up on previous research on mood responses to recurring unpleasant situations using the same dataset (Voelkle et al., 2013), crossed random effects analyses supported a mood-congruent relationship between experienced mood and perceived emotions in faces. In particular, older adults were more likely to perceive happiness in faces when being in a positive mood and less likely to do so when being in a negative mood. This did not apply to younger adults. Temporal sequence of study sessions and stimuli presentation had a strong effect on the likelihood of ascribing an additional emotional expression. In contrast to previous findings, however, there was neither evidence for a change from mood-congruent to mood-incongruent responses over time nor evidence for a mood-regulatory effect. PMID:25018740

  7. A note on age differences in mood-congruent vs. mood-incongruent emotion processing in faces.

    PubMed

    Voelkle, Manuel C; Ebner, Natalie C; Lindenberger, Ulman; Riediger, Michaela

    2014-01-01

    This article addresses four interrelated research questions: (1) Does experienced mood affect emotion perception in faces and is this perception mood-congruent or mood-incongruent? (2) Are there age-group differences in the interplay between experienced mood and emotion perception? (3) Does emotion perception in faces change as a function of the temporal sequence of study sessions and stimuli presentation, and (4) does emotion perception in faces serve a mood-regulatory function? One hundred fifty-four adults of three different age groups (younger: 20-31 years; middle-aged: 44-55 years; older adults: 70-81 years) were asked to provide multidimensional emotion ratings of a total of 1026 face pictures of younger, middle-aged, and older men and women, each displaying six different prototypical (primary) emotional expressions. By analyzing the likelihood of ascribing an additional emotional expression to a face whose primary emotion had been correctly recognized, the multidimensional rating approach permits the study of emotion perception while controlling for emotion recognition. Following up on previous research on mood responses to recurring unpleasant situations using the same dataset (Voelkle et al., 2013), crossed random effects analyses supported a mood-congruent relationship between experienced mood and perceived emotions in faces. In particular, older adults were more likely to perceive happiness in faces when being in a positive mood and less likely to do so when being in a negative mood. This did not apply to younger adults. Temporal sequence of study sessions and stimuli presentation had a strong effect on the likelihood of ascribing an additional emotional expression. In contrast to previous findings, however, there was neither evidence for a change from mood-congruent to mood-incongruent responses over time nor evidence for a mood-regulatory effect.

  8. Analysis of Student Perceptions of the Psychosocial Learning Environment in Online and Face-to-Face Career and Technical Education Courses

    ERIC Educational Resources Information Center

    Carver, Diane L.; Kosloski, Michael F., Jr.

    2015-01-01

    This study analyzed student perceptions of the psychosocial learning environment in online and face-to-face career and technical education courses, and used survey data from a school district in Washington state. A Mann-Whitney "U" test was used to measure variability and compare the mean scores for a series of psychosocial learning…
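
    As a minimal illustration of the Mann-Whitney U comparison mentioned above, the sketch below compares two hypothetical groups of course ratings with SciPy; the numbers are invented and do not come from the study.

      import numpy as np
      from scipy.stats import mannwhitneyu

      # Hypothetical psychosocial-environment ratings (mean Likert scores) from students
      # in online vs. face-to-face CTE courses; values are made up for illustration only.
      online = np.array([3.8, 4.1, 3.5, 4.0, 3.2, 3.9, 4.3, 3.6])
      face_to_face = np.array([4.2, 4.5, 3.9, 4.4, 4.1, 4.6, 4.0, 4.3])

      # Non-parametric comparison of the two independent groups.
      u_stat, p_value = mannwhitneyu(online, face_to_face, alternative="two-sided")
      print(f"U={u_stat:.1f}, p={p_value:.3f}")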

  9. Mirror Mirror on the Wall, Is Blended Instruction the Best of All? Students' Perceptions of Blending Face-to-Face and Online Instruction

    ERIC Educational Resources Information Center

    Terras, Katherine; Chiasson, Kari; Sansale, Adam

    2012-01-01

    According to Ayala (2009), blended learning is "the purposeful integration of traditional (i.e., face-to-face) and online learning in order to provide educational opportunities that maximize the benefits of each platform and thus more effectively facilitate student learning." The purpose of this study was to explore students' perceptions of…

  10. MDMA DECREASES THE EFFECTS OF SIMULATED SOCIAL REJECTION

    PubMed Central

    Frye, Charles G.; Wardle, Margaret C.; Norman, Greg J.; de Wit, Harriet

    2014-01-01

    3,4-methylenedioxymethamphetamine (MDMA) increases self-reported positive social feelings and decreases the ability to detect social threat in faces, but its effects on experiences of social acceptance and rejection have not been determined. We examined how an acute dose of MDMA affects subjective and autonomic responses to simulated social acceptance and rejection. We predicted that MDMA would decrease subjective responses to rejection. On an exploratory basis, we also examined the effect of MDMA on respiratory sinus arrhythmia (RSA), a measure of parasympathetic cardiac control often thought to index social engagement and emotional regulation. Over three sessions, healthy adult volunteers with previous MDMA experience (N = 36) received capsules containing placebo, 0.75 or 1.5 mg/kg of MDMA under counter-balanced double-blind conditions. During expected peak drug effect, participants played two rounds of a virtual social simulation task called “Cyberball” during which they experienced acceptance in one round and rejection in the other. During the task we also obtained electrocardiograms (ECGs), from which we calculated RSA. After each round, participants answered questionnaires about their mood and self-esteem. As predicted, MDMA decreased the effect of simulated social rejection on self-reported mood and self-esteem and decreased perceived intensity of rejection, measured as the percent of ball tosses participants reported receiving. Consistent with its sympathomimetic properties, MDMA decreased RSA as compared to placebo. Our finding that MDMA decreases perceptions of rejection in simulated social situations extends previous results indicating that MDMA reduces perception of social threat in faces. Together these findings suggest a cognitive mechanism by which MDMA might produce pro-social behavior and feelings and how the drug might function as an adjunct to psychotherapy. These phenomena merit further study in non-simulated social environments. PMID:24316346
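
    One common way to quantify RSA from an ECG, broadly consistent with the high-frequency heart-rate-variability approach, is sketched below. The R-peak times are simulated, and the 0.15-0.40 Hz band, the resampling rate, and the log transform are conventional assumptions rather than the study's exact pipeline.

      import numpy as np
      from scipy.signal import welch

      # Hypothetical R-peak times (seconds) detected from an ECG recording
      # (~70 bpm with beat-to-beat variability, simulated for illustration).
      rng = np.random.default_rng(3)
      rpeaks = np.cumsum(0.85 + 0.05 * rng.standard_normal(300))

      ibi_ms = np.diff(rpeaks) * 1000.0    # inter-beat intervals in milliseconds
      ibi_times = rpeaks[1:]               # time stamp of each interval

      # Resample the irregular IBI series to an even 4 Hz grid before spectral analysis.
      fs = 4.0
      grid = np.arange(ibi_times[0], ibi_times[-1], 1.0 / fs)
      ibi_even = np.interp(grid, ibi_times, ibi_ms)

      # Integrate spectral power in the high-frequency (0.15-0.40 Hz) band and take its log.
      freqs, psd = welch(ibi_even - ibi_even.mean(), fs=fs, nperseg=256)
      hf_band = (freqs >= 0.15) & (freqs <= 0.40)
      hf_power = psd[hf_band].sum() * (freqs[1] - freqs[0])   # ms^2
      rsa = np.log(hf_power)
      print(f"RSA (ln HF power): {rsa:.2f}")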

  11. Radiation risk perception and public information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boggs-Mayes, C.J.

    1988-01-01

    We as Health Physicists face what, at many times, appears to be a hopeless task. The task, simply stated, is informing the public about the risks (or lack thereof) of radiation. Unfortunately, the public has perceived radiation risks to be much greater than they actually are. An example of this problem is shown in a paper by Arthur C. Upton. Three groups of people -- the League of Women Voters, students, and Business and Professional Club members -- were asked to rank 30 sources of risk according to their contribution to the number of deaths in the United States. Not surprisingly, they ranked nuclear power much higher and medical x-rays much lower than the actual values. In addition to the perception problem, we are faced with another hurdle: health physicists as communicators. Members of the Health Physics Society (HPS) found that the communication styles of most health physicists appear to be dissimilar to those of the general public. These authors administered the Myers-Briggs Type Indicator to the HPS Baltimore-Washington Chapter. This test, a standardized test for psychological type developed by Isabel Myers, asks questions that provide a quantitative measure of our natural preferences in four areas. Assume that you as a health physicist have the necessary skills to communicate information about radiation to the public. Even so, health physicists do nothing with these tools: most people involved in radiation protection do not get involved with public information activities. What I will attempt to do is heighten your interest in such activities. I will share information about public information activities in which I have been involved and give you suggestions for sources of information and materials. 2 refs., 1 tab.

  12. Sensitivity to Spatiotemporal Percepts Predicts the Perception of Emotion

    PubMed Central

    Castro, Vanessa L.; Boone, R. Thomas

    2015-01-01

    The present studies examined how sensitivity to spatiotemporal percepts such as rhythm, angularity, configuration, and force predicts accuracy in perceiving emotion. In Study 1, participants (N = 99) completed a nonverbal test battery consisting of three nonverbal emotion perception tests and two perceptual sensitivity tasks assessing rhythm sensitivity and angularity sensitivity. Study 2 (N = 101) extended the findings of Study 1 with the addition of a fourth nonverbal test, a third configural sensitivity task, and a fourth force sensitivity task. Regression analyses across both studies revealed partial support for the association between perceptual sensitivity to spatiotemporal percepts and greater emotion perception accuracy. Results indicate that accuracy in perceiving emotions may be predicted by sensitivity to specific percepts embedded within channel- and emotion-specific displays. The significance of such research lies in the understanding of how individuals acquire emotion perception skill and the processes by which distinct features of percepts are related to the perception of emotion. PMID:26339111
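
    A minimal sketch of the kind of multiple regression described above, predicting emotion perception accuracy from sensitivity to several spatiotemporal percepts. All variables and values are simulated for illustration and are not the studies' data.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical per-participant scores: sensitivity to each percept plus overall
      # emotion perception accuracy (values invented only to make the model runnable).
      rng = np.random.default_rng(4)
      n = 100
      df = pd.DataFrame({
          "rhythm": rng.normal(size=n),
          "angularity": rng.normal(size=n),
          "configuration": rng.normal(size=n),
          "force": rng.normal(size=n),
      })
      df["accuracy"] = 0.3 * df["rhythm"] + 0.2 * df["force"] + rng.normal(scale=1.0, size=n)

      # Multiple regression: do the perceptual sensitivities jointly predict accuracy?
      model = smf.ols("accuracy ~ rhythm + angularity + configuration + force", data=df).fit()
      print(model.summary())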

  13. Impaired holistic processing in congenital prosopagnosia

    PubMed Central

    Avidan, Galia; Tanzer, Michal; Behrmann, Marlene

    2011-01-01

    It has long been argued that face processing requires disproportionate reliance on holistic or configural processing, relative to that required for non-face object recognition, and that a disruption of such holistic processing may be causally implicated in prosopagnosia. Previously, we demonstrated that individuals with congenital prosopagnosia (CP) did not show the normal face inversion effect (better performance for upright compared to inverted faces) and evinced a local (rather than the normal global) bias in a compound letter global/local (GL) task, supporting the claim of disrupted holistic processing in prosopagnosia. Here, we investigate further the nature of holistic processing impairments in CP, first by confirming, in a large sample of CP individuals, the absence of the normal face inversion effect and the presence of the local bias on the GL task, and, second, by employing the composite face paradigm, often regarded as the gold standard for measuring holistic face processing. In this last task, we show that, in contrast with normal individuals, the CP group performed equivalently with aligned and misaligned faces and was impervious to (the normal) interference from the task-irrelevant bottom part of faces. Interestingly, the extent of the local bias evident in the composite task is correlated with the abnormality of performance on diagnostic face processing tasks. Furthermore, there is a significant correlation between the magnitude of the local bias in the GL task and performance on the composite task. These results provide further evidence for impaired holistic processing in CP and, moreover, corroborate the critical role of this type of processing for intact face recognition. PMID:21601583

  14. Neural processing of race by individuals with Williams syndrome: Do they show the other-race effect? (And why it matters)

    PubMed Central

    Fishman, Inna; Ng, Rowena; Bellugi, Ursula

    2012-01-01

    Williams syndrome (WS) is a genetic condition with a distinctive social phenotype characterized by excessive sociability, accompanied by a relative proficiency in face recognition, despite severe deficits in visuospatial domain of cognition. This consistent phenotypic characteristic and the relative homogeneity of the WS genotype make WS a compelling human model for examining the genotype-phenotype relations, especially with respect to social behavior. Following up on a recent report suggesting that individuals with WS do not show race bias and racial stereotyping, this study was designed to investigate the neural correlates of the perception of faces from different races, in individuals with WS as compared to typically developing (TD) controls. Caucasian WS and TD participants performed a gender identification task with own-race (White) and other-race (Black) faces while event-related potentials (ERPs) were recorded. In line with previous studies with TD participants, other-race faces elicited larger amplitudes ERPs within the first 200 ms following the face onset, in WS and TD participants alike. These results suggest that, just like their TD counterparts, individuals with WS differentially processed faces of own- vs. other-race, at relatively early stages of processing, starting as early as 115 ms after the face onset. Overall, these results indicate that neural processing of faces in individuals with WS is moderated by race at early perceptual stages, calling for a reconsideration of the previous claim that they are uniquely insensitive to race. PMID:22022973

  15. Image processing strategies based on saliency segmentation for object recognition under simulated prosthetic vision.

    PubMed

    Li, Heng; Su, Xiaofan; Wang, Jing; Kan, Han; Han, Tingting; Zeng, Yajie; Chai, Xinyu

    2018-01-01

    Current retinal prostheses generate only low-resolution visual percepts composed of a limited number of phosphenes, elicited by an electrode array, with uncontrollable color and restricted grayscale. With such percepts, prosthesis recipients can complete only simple visual tasks; more complex tasks such as face identification or object recognition remain extremely difficult. It is therefore necessary to investigate and apply image processing strategies that optimize the recipients' visual perception. This study focuses on recognition of the object of interest under simulated prosthetic vision. We used a saliency segmentation method, based on a biologically plausible graph-based visual saliency model and a grabCut-based self-adaptive iterative optimization framework, to automatically extract foreground objects. Based on this, two image processing strategies, Addition of Separate Pixelization and Background Pixel Shrink, were further used to enhance the extracted foreground objects. (i) Psychophysical experiments verified that, under simulated prosthetic vision, both strategies had marked advantages over Direct Pixelization in terms of recognition accuracy and efficiency. (ii) Recognition performance under the two strategies was tied to the segmentation results and benefited from paired, interrelated objects in the scene. The saliency segmentation method and image processing strategies can automatically extract and enhance foreground objects, and significantly improve object recognition performance for recipients implanted with a high-density array. Copyright © 2017 Elsevier B.V. All rights reserved.
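
    A rough sketch of the general pipeline described above: extract a foreground object with grabCut and render it at phosphene-like resolution. This is not the authors' implementation; the input filename, the bounding rectangle (which a saliency model could supply automatically), and the 32x32 grid are assumptions made only for illustration.

      import cv2
      import numpy as np

      # Placeholder input image; replace with an actual file path.
      img = cv2.imread("scene.jpg")
      mask = np.zeros(img.shape[:2], np.uint8)
      bgd_model = np.zeros((1, 65), np.float64)
      fgd_model = np.zeros((1, 65), np.float64)
      rect = (50, 50, img.shape[1] - 100, img.shape[0] - 100)   # rough object bounding box

      # Segment a foreground object; in the paper a saliency model supplies the region,
      # here a fixed rectangle stands in for it.
      cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)
      fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
      foreground = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) * fg

      # Pixelize to a coarse grid to mimic a low-resolution phosphene display.
      low = cv2.resize(foreground, (32, 32), interpolation=cv2.INTER_AREA)
      phosphenes = cv2.resize(low, (foreground.shape[1], foreground.shape[0]),
                              interpolation=cv2.INTER_NEAREST)
      cv2.imwrite("phosphenes.png", phosphenes)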

  16. Attractiveness judgments and discrimination of mommies and grandmas: perceptual tuning for young adult faces.

    PubMed

    Short, Lindsey A; Mondloch, Catherine J; Hackland, Anne T

    2015-01-01

    Adults are more accurate in detecting deviations from normality in young adult faces than in older adult faces despite exhibiting comparable accuracy in discriminating both face ages. This deficit in judging the normality of older faces may be due to reliance on a face space optimized for the dimensions of young adult faces, perhaps because of early and continuous experience with young adult faces. Here we examined the emergence of this young adult face bias by testing 3- and 7-year-old children on a child-friendly version of the task used to test adults. In an attractiveness judgment task, children viewed young and older adult face pairs; each pair consisted of an unaltered face and a distorted face of the same identity. Children pointed to the prettiest face, which served as a measure of their sensitivity to the dimensions on which faces vary relative to a norm. To examine whether biases in the attractiveness task were specific to deficits in referencing a norm or extended to impaired discrimination, we tested children on a simultaneous match-to-sample task with the same stimuli. Both age groups were more accurate in judging the attractiveness of young faces relative to older faces; however, unlike adults, the young adult face bias extended to the match-to-sample task. These results suggest that by 3 years of age, children's perceptual system is more finely tuned for young adult faces than for older adult faces, which may support past findings of superior recognition for young adult faces. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. The place of information and communication technology-mediated consultations in primary care: GPs' perspectives.

    PubMed

    Hanna, Lisa; May, Carl; Fairhurst, Karen

    2012-06-01

    New information and communication technologies such as email and text messaging have been shown to be useful in some aspects of primary care service delivery. Little is known about Scottish GPs' attitudes towards the adoption of these technologies as routine consultation tools. To explore GPs' perceptions of the potential place of new non-face-to-face consultation technologies in the routine delivery of primary care; to explore GPs' perceived barriers to the introduction of these technologies and to identify the processes by which GPs feel that new consultation technologies could be incorporated into routine primary care. Qualitative interview study: 20 in-depth semi-structured interviews carried out with a maximum variation sample of GPs across Scotland. Whilst the face-to-face consultation was seen as central to much of the clinical and diagnostic work of primary care, many GPs were conditionally willing to consider using new technologies in the future, particularly to carry out administrative or less complex tasks and therefore maximize practice efficiency and patient convenience. Key considerations were access to appropriate training, IT support and medico-legal guidance. GPs are conditionally willing to use new consultation media if clinically appropriate and if medico-legal and technical support is available.

  18. Is moral beauty different from facial beauty? Evidence from an fMRI study

    PubMed Central

    Wang, Tingting; Mo, Ce; Tan, Li Hai; Cant, Jonathan S.; Zhong, Luojin; Cupchik, Gerald

    2015-01-01

    Is moral beauty different from facial beauty? Two functional magnetic resonance imaging experiments were performed to answer this question. Experiment 1 investigated the network of moral aesthetic judgments and facial aesthetic judgments. Participants performed aesthetic judgments and gender judgments on both faces and scenes containing moral acts. The conjunction analysis of the contrasts ‘facial aesthetic judgment > facial gender judgment’ and ‘scene moral aesthetic judgment > scene gender judgment’ identified the common involvement of the orbitofrontal cortex (OFC), inferior temporal gyrus and medial superior frontal gyrus, suggesting that both types of aesthetic judgments are based on the orchestration of perceptual, emotional and cognitive components. Experiment 2 examined the network of facial beauty and moral beauty during implicit perception. Participants performed a non-aesthetic judgment task on both faces (beautiful vs common) and scenes (containing morally beautiful vs neutral information). We observed that facial beauty (beautiful faces > common faces) involved both the cortical reward region OFC and the subcortical reward region putamen, whereas moral beauty (moral beauty scenes > moral neutral scenes) only involved the OFC. Moreover, compared with facial beauty, moral beauty spanned a larger-scale cortical network, indicating more advanced and complex cerebral representations characterizing moral beauty. PMID:25298010

  19. Perceptual and memorial contributions to developmental prosopagnosia.

    PubMed

    Ulrich, Philip I N; Wilkinson, David T; Ferguson, Heather J; Smith, Laura J; Bindemann, Markus; Johnston, Robert A; Schmalzl, Laura

    2017-02-01

    Developmental prosopagnosia (DP) is commonly associated with the failure to properly perceive individuating facial properties, notably those conveying configural or holistic content. While this may indicate that the primary impairment is perceptual, it is conceivable that some cases of DP are instead caused by a memory impairment, with any perceptual complaint merely allied rather than causal. To investigate this possibility, we administered a battery of face perception tasks to 11 individuals who reported that their face recognition difficulties disrupt daily activity and who also performed poorly on two formal tests of face recognition. Group statistics identified, relative to age- and gender-matched controls, difficulties in apprehending global-local relations and the holistic properties of faces, and in matching across viewpoints, but these were mild in nature and were not consistently evident at the level of individual participants. Six of the 11 individuals failed to show any evidence of perceptual impairment. In the remaining five individuals, no single perceptual deficit, or combination of deficits, was necessary or sufficient for poor recognition performance. These data suggest that some cases of DP are better explained by a memorial rather than perceptual deficit, and highlight the relevance of the apperceptive/associative distinction more commonly applied to the allied syndrome of acquired prosopagnosia.

  20. The effect of background music on episodic memory and autonomic responses: listening to emotionally touching music enhances facial memory capacity.

    PubMed

    Proverbio, Alice Mado; Lozano Nasi, Valentina; Alessandra Arcari, Laura; De Benedetto, Francesco; Guardamagna, Matteo; Gazzola, Martina; Zani, Alberto

    2015-10-15

    The aim of this study was to investigate how background auditory processing can affect other perceptual and cognitive processes as a function of stimulus content, style and emotional nature. Previous studies have offered contrasting evidence, and it has been recently shown that listening to music negatively affected concurrent mental processing in the elderly but not in young adults. To further investigate this matter, the effect of listening to music vs. listening to the sound of rain or silence was examined by administering an old/new face memory task (involving 448 unknown faces) to a group of 54 non-musician university students. Heart rate and diastolic and systolic blood pressure were measured during an explicit face study session that was followed by a memory test. The results indicated that more efficient and faster recall of faces occurred under conditions of silence or when participants were listening to emotionally touching music. Whereas auditory background (e.g., rain or joyful music) interfered with memory encoding, listening to emotionally touching music improved memory and significantly increased heart rate. It is hypothesized that touching music is able to modify the visual perception of faces by binding facial properties with auditory and emotionally charged information (music), which may therefore result in deeper memory encoding.

  1. The effect of background music on episodic memory and autonomic responses: listening to emotionally touching music enhances facial memory capacity

    PubMed Central

    Mado Proverbio, C.A. Alice; Lozano Nasi, Valentina; Alessandra Arcari, Laura; De Benedetto, Francesco; Guardamagna, Matteo; Gazzola, Martina; Zani, Alberto

    2015-01-01

    The aim of this study was to investigate how background auditory processing can affect other perceptual and cognitive processes as a function of stimulus content, style and emotional nature. Previous studies have offered contrasting evidence, and it has been recently shown that listening to music negatively affected concurrent mental processing in the elderly but not in young adults. To further investigate this matter, the effect of listening to music vs. listening to the sound of rain or silence was examined by administering an old/new face memory task (involving 448 unknown faces) to a group of 54 non-musician university students. Heart rate and diastolic and systolic blood pressure were measured during an explicit face study session that was followed by a memory test. The results indicated that more efficient and faster recall of faces occurred under conditions of silence or when participants were listening to emotionally touching music. Whereas auditory background (e.g., rain or joyful music) interfered with memory encoding, listening to emotionally touching music improved memory and significantly increased heart rate. It is hypothesized that touching music is able to modify the visual perception of faces by binding facial properties with auditory and emotionally charged information (music), which may therefore result in deeper memory encoding. PMID:26469712

  2. Residual fMRI sensitivity for identity changes in acquired prosopagnosia.

    PubMed

    Fox, Christopher J; Iaria, Giuseppe; Duchaine, Bradley C; Barton, Jason J S

    2013-01-01

    While a network of cortical regions contribute to face processing, the lesions in acquired prosopagnosia are highly variable, and likely result in different combinations of spared and affected regions of this network. To assess the residual functional sensitivities of spared regions in prosopagnosia, we designed a rapid event-related functional magnetic resonance imaging (fMRI) experiment that included pairs of faces with same or different identities and same or different expressions. By measuring the release from adaptation to these facial changes we determined the residual sensitivity of face-selective regions-of-interest. We tested three patients with acquired prosopagnosia, and all three of these patients demonstrated residual sensitivity for facial identity changes in surviving fusiform and occipital face areas of either the right or left hemisphere, but not in the right posterior superior temporal sulcus. The patients also showed some residual capabilities for facial discrimination with normal performance on the Benton Facial Recognition Test, but impaired performance on more complex tasks of facial discrimination. We conclude that fMRI can demonstrate residual processing of facial identity in acquired prosopagnosia, that this adaptation can occur in the same structures that show similar processing in healthy subjects, and further, that this adaptation may be related to behavioral indices of face perception.

  3. Residual fMRI sensitivity for identity changes in acquired prosopagnosia

    PubMed Central

    Fox, Christopher J.; Iaria, Giuseppe; Duchaine, Bradley C.; Barton, Jason J. S.

    2013-01-01

    While a network of cortical regions contribute to face processing, the lesions in acquired prosopagnosia are highly variable, and likely result in different combinations of spared and affected regions of this network. To assess the residual functional sensitivities of spared regions in prosopagnosia, we designed a rapid event-related functional magnetic resonance imaging (fMRI) experiment that included pairs of faces with same or different identities and same or different expressions. By measuring the release from adaptation to these facial changes we determined the residual sensitivity of face-selective regions-of-interest. We tested three patients with acquired prosopagnosia, and all three of these patients demonstrated residual sensitivity for facial identity changes in surviving fusiform and occipital face areas of either the right or left hemisphere, but not in the right posterior superior temporal sulcus. The patients also showed some residual capabilities for facial discrimination with normal performance on the Benton Facial Recognition Test, but impaired performance on more complex tasks of facial discrimination. We conclude that fMRI can demonstrate residual processing of facial identity in acquired prosopagnosia, that this adaptation can occur in the same structures that show similar processing in healthy subjects, and further, that this adaptation may be related to behavioral indices of face perception. PMID:24151479

  4. Facial Affect Recognition in Violent and Nonviolent Antisocial Behavior Subtypes.

    PubMed

    Schönenberg, Michael; Mayer, Sarah Verena; Christian, Sandra; Louis, Katharina; Jusyte, Aiste

    2016-10-01

    Prior studies provide evidence for impaired recognition of distress cues in individuals exhibiting antisocial behavior. However, it remains unclear whether this deficit is generally associated with antisociality or may be specific to violent behavior only. To examine whether there are meaningful differences between the two behavioral dimensions rule-breaking and aggression, violent and nonviolent incarcerated offenders as well as control participants were presented with an animated face recognition task in which a video sequence of a neutral face changed into an expression of one of the six basic emotions. The participants were instructed to press a button as soon as they were able to identify the emotional expression, allowing for an assessment of the perceived emotion onset. Both aggressive and nonaggressive offenders demonstrated a delayed perception of primarily fearful facial cues as compared to controls. These results suggest the importance of targeting impaired emotional processing in both types of antisocial behavior.

  5. Caring for the elderly: changing perceptions and attitudes.

    PubMed

    Lovell, Marge

    2006-03-01

    The aging population is currently one of the main issues facing international health care systems. It is a recognized fact that with advancing age, the likelihood of developing health problems and chronic disease will increase and the demand for health care resources will escalate. This will impact hospitals and long-term care facilities. Our young nurses of the future will be faced with the challenging task of caring for this elderly population. A review of the literature revealed that nursing students have a negative attitude toward the elderly. This may be affected by personal beliefs, values, culture, experience, or observations. Their perceived attitudes toward the gerontology field will make it difficult to recruit the nurses required in this area. This article will explore these issues and examine the role of all health care professionals to help change their attitudes and develop a more positive relationship to meet the needs of this unique population.

  6. Auditory to Visual Cross-Modal Adaptation for Emotion: Psychophysical and Neural Correlates.

    PubMed

    Wang, Xiaodong; Guo, Xiaotao; Chen, Lin; Liu, Yijun; Goldberg, Michael E; Xu, Hong

    2017-02-01

    Adaptation is fundamental in sensory processing and has been studied extensively within the same sensory modality. However, little is known about adaptation across sensory modalities, especially in the context of high-level processing, such as the perception of emotion. Previous studies have shown that prolonged exposure to a face exhibiting one emotion, such as happiness, leads to contrastive biases in the perception of subsequently presented faces toward the opposite emotion, such as sadness. Such work has shown the importance of adaptation in calibrating face perception based on prior visual exposure. In the present study, we showed for the first time that emotion-laden sounds, like laughter, adapt the visual perception of emotional faces, that is, subjects more frequently perceived faces as sad after listening to a happy sound. Furthermore, via electroencephalography recordings and event-related potential analysis, we showed that there was a neural correlate underlying the perceptual bias: There was an attenuated response occurring at ∼ 400 ms to happy test faces and a quickened response to sad test faces, after exposure to a happy sound. Our results provide the first direct evidence for a behavioral cross-modal adaptation effect on the perception of facial emotion, and its neural correlate. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  7. Sex differences in facial emotion perception ability across the lifespan.

    PubMed

    Olderbak, Sally; Wilhelm, Oliver; Hildebrandt, Andrea; Quoidbach, Jordi

    2018-03-22

    Perception of emotion in the face is a key component of human social cognition and is considered vital for many domains of life; however, little is known about how this ability differs across the lifespan for men and women. We addressed this question with a large community sample (N = 100,257) of persons ranging from younger than 15 to older than 60 years of age. Participants were viewers of the television show "Tout le Monde Joue", and the task was presented on television, with participants responding via their mobile devices. Applying latent variable modeling, and establishing measurement invariance between males and females and across age, we found that, for both males and females, emotion perception abilities peak between the ages of 15 and 30, with poorer performance by younger adults and declining performance after the age of 30. In addition, we show a consistent advantage by females across the lifespan, which decreases in magnitude with increasing age. This large scale study with a wide range of people and testing environments suggests these effects are largely robust. Implications are discussed.

  8. Strange-face Illusions During Interpersonal-Gazing and Personality Differences of Spirituality.

    PubMed

    Caputo, Giovanni B

    Strange-face illusions are produced when two individuals gaze into each other's eyes under low illumination for more than a few minutes. Usually, the members of the dyad perceive numinous apparitions, such as deformations of the other's face or a stranger or monster appearing in place of the other, and feel a short-lasting dissociation. In the present experiment, the influence of the spirituality personality trait on the strength and number of strange-face illusions was investigated. Thirty participants were preliminarily tested for superstition (Paranormal Belief Scale, PBS) and spirituality (Spiritual Transcendence Scale, STS); then, they were randomly assigned to 15 dyads. Dyads performed the intersubjective gazing task for 10 minutes and, finally, strange-face illusions (measured through the Strange-Face Questionnaire, SFQ) were evaluated. The first finding was that SFQ was independent of PBS; hence, strange-face illusions during intersubjective gazing are authentically perceptual, hallucination-like phenomena, and not due to superstition. The second finding was that SFQ depended on the spiritual-universality scale of STS (a belief in the unitive nature of life; e.g., "there is a higher plane of consciousness or spirituality that binds all people") and the two variables were negatively correlated. Thus, strange-face illusions, in particular monstrous apparitions, could potentially disrupt binding among human beings. Strange-face illusions can be considered 'projections' of the subject's unconscious into the other's face. In conclusion, intersubjective gazing under low illumination can be a tool for conscious integration of unconscious 'shadows of the Self' in order to reach completeness of the Self. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Visual body recognition in a prosopagnosic patient.

    PubMed

    Moro, V; Pernigo, S; Avesani, R; Bulgarelli, C; Urgesi, C; Candidi, M; Aglioti, S M

    2012-01-01

    Conspicuous deficits in face recognition characterize prosopagnosia. Information on whether agnosic deficits may extend to non-facial body parts is lacking. Here we report the neuropsychological description of FM, a patient affected by a complete deficit in face recognition in the presence of mild clinical signs of visual object agnosia. His deficit involves both overt and covert recognition of faces (i.e. recognition of familiar faces, but also categorization of faces for gender or age) as well as the visual mental imagery of faces. By means of a series of matching-to-sample tasks we investigated: (i) a possible association between prosopagnosia and disorders in visual body perception; (ii) the effect of the emotional content of stimuli on the visual discrimination of faces, bodies and objects; (iii) the existence of a dissociation between identity recognition and the emotional discrimination of faces and bodies. Our results document, for the first time, the co-occurrence of body agnosia, i.e. the visual inability to discriminate body forms and body actions, and prosopagnosia. Moreover, the results show better performance in the discrimination of emotional face and body expressions with respect to body identity and neutral actions. Since FM's lesions involve bilateral fusiform areas, it is unlikely that the amygdala-temporal projections explain the relative sparing of emotion discrimination performance. Indeed, the emotional content of the stimuli did not improve the discrimination of their identity. The results hint at the existence of two segregated brain networks involved in identity and emotional discrimination that are at least partially shared by face and body processing. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Changing perception: facial reanimation surgery improves attractiveness and decreases negative facial perception.

    PubMed

    Dey, Jacob K; Ishii, Masaru; Boahene, Kofi D O; Byrne, Patrick J; Ishii, Lisa E

    2014-01-01

    Determine the effect of facial reanimation surgery on observer-graded attractiveness and negative facial perception of patients with facial paralysis. Randomized controlled experiment. Ninety observers viewed images of paralyzed faces, smiling and in repose, before and after reanimation surgery, as well as normal comparison faces. Observers rated the attractiveness of each face and characterized the paralyzed faces by rating their severity, how disfiguring/bothersome they were, and the importance of repair. Iterated factor analysis indicated these highly correlated variables measure a common domain, so they were combined to create the disfigured, important to repair, bothersome, severity (DIBS) factor score. Mixed effects linear regression determined the effect of facial reanimation surgery on attractiveness and DIBS score. Facial paralysis induces an attractiveness penalty of 2.51 on a 10-point scale for faces in repose and 3.38 for smiling faces. Mixed effects linear regression showed that reanimation surgery improved attractiveness for faces both in repose and smiling, by 0.84 (95% confidence interval [CI]: 0.67, 1.01) and 1.24 (95% CI: 1.07, 1.42), respectively. Planned hypothesis tests confirmed statistically significant differences in attractiveness ratings between postoperative and normal faces, indicating attractiveness was not completely normalized. Regression analysis also showed that reanimation surgery decreased DIBS by 0.807 (95% CI: 0.704, 0.911) for faces in repose and 0.989 (95% CI: 0.886, 1.093), an entire standard deviation, for smiling faces. Facial reanimation surgery increases attractiveness and decreases negative facial perception of patients with facial paralysis. These data emphasize the need to optimize reanimation surgery to restore not only function, but also symmetry and cosmesis to improve facial perception and patient quality of life. © 2013 The American Laryngological, Rhinological and Otological Society, Inc.
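
    As an illustration of the analysis named in the record above, the following is a minimal sketch of a mixed-effects linear regression with a random intercept per observer, assuming Python with NumPy, pandas and statsmodels; the column names, the simulated ratings and the assumed 1-point improvement are hypothetical and are not values from the study.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)

        # Hypothetical long-format ratings: each observer rates faces before ("pre") and
        # after ("post") surgery on a 10-point attractiveness scale.
        observer = np.repeat(np.arange(20), 8)                  # 20 observers, 8 ratings each
        condition = np.tile(["pre", "post"], 80)
        true_gain = np.where(condition == "post", 1.0, 0.0)     # assumed 1-point improvement
        observer_shift = rng.normal(0, 0.3, size=20)[observer]  # idiosyncratic rating tendencies
        attractiveness = 4.0 + true_gain + observer_shift + rng.normal(0, 0.5, size=160)

        df = pd.DataFrame({"attractiveness": attractiveness,
                           "condition": condition,
                           "observer": observer})

        # Mixed-effects linear regression: fixed effect of condition,
        # random intercept for each observer (ratings are nested within observers).
        result = smf.mixedlm("attractiveness ~ condition",
                             data=df, groups=df["observer"]).fit()
        print(result.summary())   # the condition coefficient estimates the pre/post change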

  11. Invariant recognition drives neural representations of action sequences

    PubMed Central

    Poggio, Tomaso

    2017-01-01

    Recognizing the actions of others from visual stimuli is a crucial aspect of human perception that allows individuals to respond to social cues. Humans are able to discriminate between similar actions despite transformations, like changes in viewpoint or actor, that substantially alter the visual appearance of a scene. This ability to generalize across complex transformations is a hallmark of human visual intelligence. Advances in understanding action recognition at the neural level have not always translated into precise accounts of the computational principles that determine which representations of action sequences are constructed by the human visual cortex. Here we test the hypothesis that invariant action discrimination might fill this gap. Recently, the study of artificial systems for static object perception has produced models, Convolutional Neural Networks (CNNs), that achieve human-level performance in complex discriminative tasks. Within this class, architectures that better support invariant object recognition also produce image representations that better match those implied by human and primate neural data. However, whether these models produce representations of action sequences that support recognition across complex transformations and closely follow neural representations of actions remains unknown. Here we show that spatiotemporal CNNs accurately categorize video stimuli into action classes, and that deliberate model modifications that improve performance on an invariant action recognition task lead to data representations that better match human neural recordings. Our results support our hypothesis that performance on invariant discrimination dictates the neural representations of actions computed in the brain. These results broaden the scope of the invariant recognition framework for understanding visual intelligence from perception of inanimate objects and faces in static images to the study of human perception of action sequences. PMID:29253864
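
    A minimal sketch of the class of model referred to above, a spatiotemporal (3D-convolutional) network that convolves jointly over the frames of a clip, assuming Python with PyTorch; the layer sizes, clip dimensions and number of action classes are illustrative and are not taken from the study.

        import torch
        import torch.nn as nn

        class TinySpatiotemporalCNN(nn.Module):
            """Toy 3D CNN: convolves jointly over time, height and width of a video clip."""
            def __init__(self, num_classes=10):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv3d(3, 16, kernel_size=3, padding=1),   # input: (batch, 3, T, H, W)
                    nn.ReLU(),
                    nn.MaxPool3d(kernel_size=2),                  # halve time and space
                    nn.Conv3d(16, 32, kernel_size=3, padding=1),
                    nn.ReLU(),
                    nn.AdaptiveAvgPool3d(1),                      # global spatiotemporal pooling
                )
                self.classifier = nn.Linear(32, num_classes)

            def forward(self, clips):
                # Returns one logit per action class for each clip in the batch.
                return self.classifier(self.features(clips).flatten(1))

        # Example: a batch of 2 clips, each 16 frames of 64x64 RGB.
        model = TinySpatiotemporalCNN(num_classes=10)
        logits = model(torch.randn(2, 3, 16, 64, 64))
        print(logits.shape)   # torch.Size([2, 10])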

  12. Visual face-movement sensitive cortex is relevant for auditory-only speech recognition.

    PubMed

    Riedel, Philipp; Ragert, Patrick; Schelinski, Stefanie; Kiebel, Stefan J; von Kriegstein, Katharina

    2015-07-01

    It is commonly assumed that the recruitment of visual areas during audition is not relevant for performing auditory tasks ('auditory-only view'). According to an alternative view, however, the recruitment of visual cortices is thought to optimize auditory-only task performance ('auditory-visual view'). This alternative view is based on functional magnetic resonance imaging (fMRI) studies. These studies have shown, for example, that even if there is only auditory input available, face-movement sensitive areas within the posterior superior temporal sulcus (pSTS) are involved in understanding what is said (auditory-only speech recognition). This is particularly the case when speakers are known audio-visually, that is, after brief voice-face learning. Here we tested whether the left pSTS involvement is causally related to performance in auditory-only speech recognition when speakers are known by face. To test this hypothesis, we applied cathodal transcranial direct current stimulation (tDCS) to the pSTS during (i) visual-only speech recognition of a speaker known only visually to participants and (ii) auditory-only speech recognition of speakers they learned by voice and face. We defined the cathode as active electrode to down-regulate cortical excitability by hyperpolarization of neurons. tDCS to the pSTS interfered with visual-only speech recognition performance compared to a control group without pSTS stimulation (tDCS to BA6/44 or sham). Critically, compared to controls, pSTS stimulation additionally decreased auditory-only speech recognition performance selectively for voice-face learned speakers. These results are important in two ways. First, they provide direct evidence that the pSTS is causally involved in visual-only speech recognition; this confirms a long-standing prediction of current face-processing models. Secondly, they show that visual face-sensitive pSTS is causally involved in optimizing auditory-only speech recognition. These results are in line with the 'auditory-visual view' of auditory speech perception, which assumes that auditory speech recognition is optimized by using predictions from previously encoded speaker-specific audio-visual internal models. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Division of Labor in Two-Earner Homes: Task Accomplishment versus Household Management as Critical Variables in Perceptions about Family Work.

    ERIC Educational Resources Information Center

    Mederer, Helen J.

    1993-01-01

    Data from 359 married, full-time employed women tested the extent to which allocation of tasks and allocation of household management predict perceptions of fairness and conflict. Task and management allocation contributed independently and differently to perceptions of fairness and conflict about housework allocation. Unfairness was predicted by both…

  14. Learners' Perceptions of the Benefits of Voice Tool-Based Tasks on Their Spoken Performance

    ERIC Educational Resources Information Center

    Wilches, Astrid

    2014-01-01

    The purpose of this study is to investigate learners' perceptions of the benefits of tasks using voice tools to reinforce their oral skills. Additionally, this study seeks to determine what aspects of task design affected the students' perceptions. Beginner learners aged 18 to 36 with little or no experience in the use of technological tools for…

  15. Nonlinear analysis of saccade speed fluctuations during combined action and perception tasks

    PubMed Central

    Stan, C.; Astefanoaei, C.; Pretegiani, E.; Optican, L.; Creanga, D.; Rufa, A.; Cristescu, C.P.

    2014-01-01

    Background: Saccades are rapid eye movements used to gather information about a scene which requires both action and perception. These are usually studied separately, so that how perception influences action is not well understood. In a dual task, where the subject looks at a target and reports a decision, subtle changes in the saccades might be caused by action-perception interactions. Studying saccades might provide insight into how brain pathways for action and for perception interact. New method: We applied two complementary methods, multifractal detrended fluctuation analysis and Lempel-Ziv complexity index to eye peak speed recorded in two experiments, a pure action task and a combined action-perception task. Results: Multifractality strength is significantly different in the two experiments, showing smaller values for dual decision task saccades compared to simple-task saccades. The normalized Lempel-Ziv complexity index behaves similarly i.e. is significantly smaller in the decision saccade task than in the simple task. Comparison with existing methods: Compared to the usual statistical and linear approaches, these analyses emphasize the character of the dynamics involved in the fluctuations and offer a sensitive tool for quantitative evaluation of the multifractal features and of the complexity measure in the saccades peak speeds when different brain circuits are involved. Conclusion: Our results prove that the peak speed fluctuations have multifractal characteristics with lower magnitude for the multifractality strength and for the complexity index when two neural pathways are simultaneously activated, demonstrating the nonlinear interaction in the brain pathways for action and perception. PMID:24854830
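
    One of the two measures named above, the normalized Lempel-Ziv complexity, can be sketched as follows, assuming Python with NumPy; this is a common simplified parsing, not necessarily the exact variant used in the study, and the 'peak speed' series is random illustrative data rather than recorded saccades.

        import numpy as np

        def lempel_ziv_complexity(sequence):
            """Number of distinct phrases in a simplified LZ76 parsing of a 0/1 string."""
            i, phrases = 0, 0
            n = len(sequence)
            while i < n:
                length = 1
                # Extend the current phrase while it can still be found earlier in the sequence.
                while i + length <= n and sequence[i:i + length] in sequence[:i + length - 1]:
                    length += 1
                phrases += 1
                i += length
            return phrases

        # Illustrative "peak speed" series (random numbers standing in for saccadic peak speeds).
        rng = np.random.default_rng(1)
        speeds = rng.normal(500.0, 50.0, size=1024)

        # Binarize around the median, then normalize: c(n) * log2(n) / n is close to 1
        # for a fully random binary sequence and smaller for more regular ones.
        median = np.median(speeds)
        symbols = "".join("1" if s > median else "0" for s in speeds)
        c = lempel_ziv_complexity(symbols)
        print(c, c * np.log2(len(symbols)) / len(symbols))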

  16. Developmental Change in Infant Categorization: The Perception of Correlations among Facial Features.

    ERIC Educational Resources Information Center

    Younger, Barbara

    1992-01-01

    Tested 7- and 10-month-olds for perception of correlations among facial features. After habituation to faces displaying a pattern of correlation, 10-month-olds generalized to a novel face that preserved the pattern of correlation but showed increased attention to a novel face that violated the pattern. (BC)

  17. Trustworthy-Looking Face Meets Brown Eyes

    PubMed Central

    Kleisner, Karel; Priplatova, Lenka; Frost, Peter; Flegr, Jaroslav

    2013-01-01

    We tested whether eye color influences perception of trustworthiness. Facial photographs of 40 female and 40 male students were rated for perceived trustworthiness. Eye color had a significant effect, the brown-eyed faces being perceived as more trustworthy than the blue-eyed ones. Geometric morphometrics, however, revealed significant correlations between eye color and face shape. Thus, face shape likewise had a significant effect on perceived trustworthiness but only for male faces, the effect for female faces not being significant. To determine whether perception of trustworthiness was being influenced primarily by eye color or by face shape, we recolored the eyes on the same male facial photos and repeated the test procedure. Eye color now had no effect on perceived trustworthiness. We concluded that although the brown-eyed faces were perceived as more trustworthy than the blue-eyed ones, it was not brown eye color per se that caused the stronger perception of trustworthiness but rather the facial features associated with brown eyes. PMID:23326406

  18. Brain systems for assessing the affective value of faces

    PubMed Central

    Said, Christopher P.; Haxby, James V.; Todorov, Alexander

    2011-01-01

    Cognitive neuroscience research on facial expression recognition and face evaluation has proliferated over the past 15 years. Nevertheless, large questions remain unanswered. In this overview, we discuss the current understanding in the field, and describe what is known and what remains unknown. In §2, we describe three types of behavioural evidence that the perception of traits in neutral faces is related to the perception of facial expressions, and may rely on the same mechanisms. In §3, we discuss cortical systems for the perception of facial expressions, and argue for a partial segregation of function in the superior temporal sulcus and the fusiform gyrus. In §4, we describe the current understanding of how the brain responds to emotionally neutral faces. To resolve some of the inconsistencies in the literature, we perform a large group analysis across three different studies, and argue that one parsimonious explanation of prior findings is that faces are coded in terms of their typicality. In §5, we discuss how these two lines of research—perception of emotional expressions and face evaluation—could be integrated into a common, cognitive neuroscience framework. PMID:21536552

  19. The effect of skin surface topography and skin colouration cues on perception of male facial age, health and attractiveness.

    PubMed

    Fink, B; Matts, P J; Brauckmann, C; Gundlach, S

    2018-04-01

    Previous studies investigating the effects of skin surface topography and colouration cues on the perception of female faces reported a differential weighting for the perception of skin topography and colour evenness, where topography was a stronger visual cue for the perception of age, whereas skin colour evenness was a stronger visual cue for the perception of health. We extend these findings in a study of the effect of skin surface topography and colour evenness cues on the perceptions of facial age, health and attractiveness in males. Facial images of six men (aged 40 to 70 years), selected for co-expression of lines/wrinkles and discolouration, were manipulated digitally to create eight stimuli, namely, separate removal of these two features (a) on the forehead, (b) in the periorbital area, (c) on the cheeks and (d) across the entire face. Omnibus (within-face) pairwise combinations, including the original (unmodified) face, were presented to a total of 240 male and female judges, who selected the face they considered younger, healthier and more attractive. Significant effects were detected for facial image choice, in response to skin feature manipulation. The combined removal of skin surface topography resulted in younger age perception compared with that seen with the removal of skin colouration cues, whereas the opposite pattern was found for health preference. No difference was detected for the perception of attractiveness. These perceptual effects were seen particularly on the forehead and cheeks. Removing skin topography cues (but not discolouration) in the periorbital area resulted in higher preferences for all three attributes. Skin surface topography and colouration cues affect the perception of age, health and attractiveness in men's faces. The combined removal of these features on the forehead, cheeks and in the periorbital area results in the most positive assessments. © 2018 Society of Cosmetic Scientists and the Société Française de Cosmétologie.

  20. Student Perceptions of the Distance Education Mode Compared with Face-to-Face Teaching in the University Distance Education Programme

    ERIC Educational Resources Information Center

    Vásquez Martínez, Claudio Rafael; Girón, Graciela; Bañuelos, Antonio Ayón

    2012-01-01

    This paper is based on a study of the perceptions of the distance education mode compared with face-to-face teaching on the part of students on the university distance education programme at the University of Antioch over the period from 2001 to 2007. It is not possible to ignore the close links between educational processes and social, economic,…

  1. Gender differences in creative thinking: behavioral and fMRI findings.

    PubMed

    Abraham, Anna; Thybusch, Kristin; Pieritz, Karoline; Hermann, Christiane

    2014-03-01

    Gender differences in creativity have been widely studied in behavioral investigations, but this topic has rarely been the focus of neuroscientific research. The current paper presents follow-up analyses of a previous fMRI study (Abraham et al., Neuropsychologia 50(8):1906-1917, 2012b), in which behavioral and brain function during creative conceptual expansion as well as general divergent thinking were explored. Here, we focus on gender differences within the same sample. Conceptual expansion was assessed with the alternate uses task relative to the object location task, whereas divergent thinking was assessed in terms of responses across both the alternate uses and object location tasks relative to n-back working memory tasks. While men and women were indistinguishable in terms of behavioral performance across all tasks, the pattern of brain activity while engaged in the tasks in question was indicative of strategy differences between the genders. Brain areas related to semantic cognition, rule learning and decision making were preferentially engaged in men during conceptual expansion, whereas women displayed higher activity in regions related to speech processing and social perception. During divergent thinking, declarative memory related regions were strongly activated in men, while regions involved in theory of mind and self-referential processing were more engaged in women. The implications of gender differences in adopted strategies or cognitive style when faced with generative tasks are discussed.

  2. Investigating speech perception in children with dyslexia: is there evidence of a consistent deficit in individuals?

    PubMed Central

    Messaoud-Galusi, Souhila; Hazan, Valerie; Rosen, Stuart

    2012-01-01

    Purpose The claim that speech perception abilities are impaired in dyslexia was investigated in a group of 62 dyslexic children and 51 average readers matched in age. Method To test whether there was robust evidence of speech perception deficits in children with dyslexia, speech perception in noise and quiet was measured using eight different tasks involving the identification and discrimination of a complex and highly natural synthetic ‘pea’-‘bee’ contrast (copy synthesised from natural models) and the perception of naturally-produced words. Results Children with dyslexia, on average, performed more poorly than average readers in the synthetic syllables identification task in quiet and in across-category discrimination (but not when tested using an adaptive procedure). They did not differ from average readers on two tasks of word recognition in noise or identification of synthetic syllables in noise. For all tasks, a majority of individual children with dyslexia performed within norms. Finally, speech perception generally did not correlate with pseudo-word reading or phonological processing, the core skills related to dyslexia. Conclusions On the tasks and speech stimuli we used, most children with dyslexia do not appear to show a consistent deficit in speech perception. PMID:21930615

  3. The fusiform face area: a cortical region specialized for the perception of faces

    PubMed Central

    Kanwisher, Nancy; Yovel, Galit

    2006-01-01

    Faces are among the most important visual stimuli we perceive, informing us not only about a person's identity, but also about their mood, sex, age and direction of gaze. The ability to extract this information within a fraction of a second of viewing a face is important for normal social interactions and has probably played a critical role in the survival of our primate ancestors. Considerable evidence from behavioural, neuropsychological and neurophysiological investigations supports the hypothesis that humans have specialized cognitive and neural mechanisms dedicated to the perception of faces (the face-specificity hypothesis). Here, we review the literature on a region of the human brain that appears to play a key role in face perception, known as the fusiform face area (FFA). Section 1 outlines the theoretical background for much of this work. The face-specificity hypothesis falls squarely on one side of a longstanding debate in the fields of cognitive science and cognitive neuroscience concerning the extent to which the mind/brain is composed of: (i) special-purpose (‘domain-specific’) mechanisms, each dedicated to processing a specific kind of information (e.g. faces, according to the face-specificity hypothesis), versus (ii) general-purpose (‘domain-general’) mechanisms, each capable of operating on any kind of information. Face perception has long served both as one of the prime candidates of a domain-specific process and as a key target for attack by proponents of domain-general theories of brain and mind. Section 2 briefly reviews the prior literature on face perception from behaviour and neurophysiology. This work supports the face-specificity hypothesis and argues against its domain-general alternatives (the individuation hypothesis, the expertise hypothesis and others). Section 3 outlines the more recent evidence on this debate from brain imaging, focusing particularly on the FFA. We review the evidence that the FFA is selectively engaged in face perception, by addressing (and rebutting) five of the most widely discussed alternatives to this hypothesis. In §4, we consider recent findings that are beginning to provide clues into the computations conducted in the FFA and the nature of the representations the FFA extracts from faces. We argue that the FFA is engaged both in detecting faces and in extracting the necessary perceptual information to recognize them, and that the properties of the FFA mirror previously identified behavioural signatures of face-specific processing (e.g. the face-inversion effect). Section 5 asks how the computations and representations in the FFA differ from those occurring in other nearby regions of cortex that respond strongly to faces and objects. The evidence indicates clear functional dissociations between these regions, demonstrating that the FFA shows not only functional specificity but also area specificity. We end by speculating in §6 on some of the broader questions raised by current research on the FFA, including the developmental origins of this region and the question of whether faces are unique versus whether similarly specialized mechanisms also exist for other domains of high-level perception and cognition. PMID:17118927

  4. Network Configurations in the Human Brain Reflect Choice Bias during Rapid Face Processing.

    PubMed

    Tu, Tao; Schneck, Noam; Muraskin, Jordan; Sajda, Paul

    2017-12-13

    Network interactions are likely to be instrumental in processes underlying rapid perception and cognition. Specifically, high-level and perceptual regions must interact to balance pre-existing models of the environment with new incoming stimuli. Simultaneous electroencephalography (EEG) and fMRI (EEG/fMRI) enables temporal characterization of brain-network interactions combined with improved anatomical localization of regional activity. In this paper, we use simultaneous EEG/fMRI and multivariate dynamical systems (MDS) analysis to characterize network relationships between constituent brain areas that reflect a subject's choice in a face versus nonface categorization task. Our simultaneous EEG and fMRI analysis on 21 human subjects (12 males, 9 females) identifies early perceptual and late frontal subsystems that are selective to the categorical choice of faces versus nonfaces. We analyze the interactions between these subsystems using an MDS in the space of the BOLD signal. Our main findings show that differences between face-choice and house-choice networks are seen in the network interactions between the early and late subsystems, and that the magnitude of the difference in network interaction positively correlates with the behavioral false-positive rate of face choices. We interpret this to reflect the role of saliency and expectations likely encoded in frontal "late" regions on perceptual processes occurring in "early" perceptual regions. SIGNIFICANCE STATEMENT Our choices are affected by our biases. In visual perception and cognition such biases can be commonplace and quite curious-e.g., we see a human face when staring up at a cloud formation or down at a piece of toast at the breakfast table. Here we use multimodal neuroimaging and dynamical systems analysis to measure whole-brain spatiotemporal dynamics while subjects make decisions regarding the type of object they see in rapidly flashed images. We find that the degree of interaction in these networks accounts for a substantial fraction of our bias to see faces. In general, our findings illustrate how the properties of spatiotemporal networks yield insight into the mechanisms of how we form decisions. Copyright © 2017 the authors.

  5. Network Configurations in the Human Brain Reflect Choice Bias during Rapid Face Processing

    PubMed Central

    Schneck, Noam

    2017-01-01

    Network interactions are likely to be instrumental in processes underlying rapid perception and cognition. Specifically, high-level and perceptual regions must interact to balance pre-existing models of the environment with new incoming stimuli. Simultaneous electroencephalography (EEG) and fMRI (EEG/fMRI) enables temporal characterization of brain–network interactions combined with improved anatomical localization of regional activity. In this paper, we use simultaneous EEG/fMRI and multivariate dynamical systems (MDS) analysis to characterize network relationships between constituent brain areas that reflect a subject's choice in a face versus nonface categorization task. Our simultaneous EEG and fMRI analysis on 21 human subjects (12 males, 9 females) identifies early perceptual and late frontal subsystems that are selective to the categorical choice of faces versus nonfaces. We analyze the interactions between these subsystems using an MDS in the space of the BOLD signal. Our main findings show that differences between face-choice and house-choice networks are seen in the network interactions between the early and late subsystems, and that the magnitude of the difference in network interaction positively correlates with the behavioral false-positive rate of face choices. We interpret this to reflect the role of saliency and expectations likely encoded in frontal “late” regions on perceptual processes occurring in “early” perceptual regions. SIGNIFICANCE STATEMENT Our choices are affected by our biases. In visual perception and cognition such biases can be commonplace and quite curious—e.g., we see a human face when staring up at a cloud formation or down at a piece of toast at the breakfast table. Here we use multimodal neuroimaging and dynamical systems analysis to measure whole-brain spatiotemporal dynamics while subjects make decisions regarding the type of object they see in rapidly flashed images. We find that the degree of interaction in these networks accounts for a substantial fraction of our bias to see faces. In general, our findings illustrate how the properties of spatiotemporal networks yield insight into the mechanisms of how we form decisions. PMID:29118108

  6. Task-irrelevant emotion facilitates face discrimination learning.

    PubMed

    Lorenzino, Martina; Caudek, Corrado

    2015-03-01

    We understand poorly how the ability to discriminate faces from one another is shaped by visual experience. The purpose of the present study is to determine whether face discrimination learning can be facilitated by facial emotions. To answer this question, we used a task-irrelevant perceptual learning paradigm because it closely mimics the learning processes that, in daily life, occur without a conscious intention to learn and without an attentional focus on specific facial features. We measured face discrimination thresholds before and after training. During the training phase (4 days), participants performed a contrast discrimination task on face images. They were not informed that we introduced (task-irrelevant) subtle variations in the face images from trial to trial. For the Identity group, the task-irrelevant features were variations along a morphing continuum of facial identity. For the Emotion group, the task-irrelevant features were variations along an emotional expression morphing continuum. The Control group did not undergo contrast discrimination learning and only performed the pre-training and post-training tests, with the same temporal gap between them as the other two groups. Results indicate that face discrimination improved, but only for the Emotion group. Participants in the Emotion group, moreover, showed face discrimination improvements also for stimulus variations along the facial identity dimension, even if these (task-irrelevant) stimulus features had not been presented during training. The present results highlight the importance of emotions for face discrimination learning. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Relationship between Speech Production and Perception in People Who Stutter

    PubMed Central

    Lu, Chunming; Long, Yuhang; Zheng, Lifen; Shi, Guang; Liu, Li; Ding, Guosheng; Howell, Peter

    2016-01-01

    Speech production difficulties are apparent in people who stutter (PWS). PWS also have difficulties in speech perception compared to controls. It is unclear whether the speech perception difficulties in PWS are independent of, or related to, their speech production difficulties. To investigate this issue, functional MRI data were collected on 13 PWS and 13 controls whilst the participants performed a speech production task and a speech perception task. PWS performed more poorly than controls in the perception task, and the poorer performance was associated with a functional activity difference in the left anterior insula (part of the speech motor area) compared to controls. PWS also showed a functional activity difference in this and the surrounding area [left inferior frontal cortex (IFC)/anterior insula] in the production task compared to controls. Conjunction analysis showed that the functional activity differences between PWS and controls in the left IFC/anterior insula coincided across the perception and production tasks. Furthermore, Granger Causality Analysis on the resting-state fMRI data of the participants showed that the causal connection from the left IFC/anterior insula to an area in the left primary auditory cortex (Heschl’s gyrus) differed significantly between PWS and controls. The strength of this connection correlated significantly with performance in the perception task. These results suggest that speech perception difficulties in PWS are associated with anomalous functional activity in the speech motor area, and the altered functional connectivity from this area to the auditory area plays a role in the speech perception difficulties of PWS. PMID:27242487

  8. On the Relationship between Memory and Perception: Sequential Dependencies in Recognition Memory Testing

    ERIC Educational Resources Information Center

    Malmberg, Kenneth J.; Annis, Jeffrey

    2012-01-01

    Many models of recognition are derived from models originally applied to perception tasks, which assume that decisions from trial to trial are independent. While the independence assumption is violated for many perception tasks, we present the results of several experiments intended to relate memory and perception by exploring sequential…

  9. Greater sensitivity of the cortical face processing system to perceptually-equated face detection

    PubMed Central

    Maher, S.; Ekstrom, T.; Tong, Y.; Nickerson, L.D.; Frederick, B.; Chen, Y.

    2015-01-01

    Face detection, the perceptual capacity to identify a visual stimulus as a face before probing deeper into specific attributes (such as its identity or emotion), is essential for social functioning. Despite the importance of this functional capacity, face detection and its underlying brain mechanisms are not well understood. This study evaluated the role that the cortical face processing system, which is identified largely through studying other aspects of face perception, plays in face detection. Specifically, we used functional magnetic resonance imaging (fMRI) to examine the activations of the fusiform face area (FFA), occipital face area (OFA) and superior temporal sulcus (STS) when face detection was isolated from other aspects of face perception and when face detection was perceptually-equated across individual human participants (n=20). During face detection, FFA and OFA were significantly activated, even for stimuli presented at perceptual-threshold levels, whereas STS was not. During tree detection, however, FFA and OFA were responsive only for highly salient (i.e., high contrast) stimuli. Moreover, activation of FFA during face detection predicted a significant portion of the perceptual performance levels that were determined psychophysically for each participant. This pattern of results indicates that FFA and OFA have a greater sensitivity to face detection signals and selectively support the initial process of face vs. non-face object perception. PMID:26592952

  10. Perceptions of Receiving Bad News about Cancer among Bone Cancer Patients in Sarawak General Hospital - A Descriptive Study.

    PubMed

    Cheah, Whye Lian; Dollah, Nurul Bahariah; Chang, Ching Thon

    2012-07-01

    This study aimed to determine the perceptions and expectations of bone cancer patients with respect to their doctors and the breaking of bad news, as well as the environment in which the news was delivered. A cross-sectional study using a pretested 41-item questionnaire was conducted using convenience sampling among bone cancer patients in Sarawak General Hospital. Face-to-face interviews were conducted after consent was obtained. Data were analysed using SPSS version 16 (SPSS Inc., IL, US). A total of 30 patients were interviewed. The majority of the respondents were younger than 40 years old, Malay, and female. All of the respondents perceived that they received the news in a comfortable place, agreed that the doctor used simple language and appropriate words during the interaction, and believed that the way the doctor delivered the news might influence their life. The majority of the respondents reported that the news was delivered without interruption, that the doctor sat close by without making physical contact, and that time was given for the patient to ask questions and they were informed accordingly. Delivering bad news regarding cancer is an important communication skill and a complex task that can be learned and acquired. Specially tailored training is proposed to improve medical practice in this area.

  11. Influence of warmth and competence on the promotion of safe in-group selection: Stereotype content model and social categorization of faces.

    PubMed

    Ponsi, G; Panasiti, M S; Scandola, M; Aglioti, S M

    2016-01-01

    Categorizing an individual as a friend or foe plays a pivotal role in navigating the social world. According to the stereotype content model (SCM), social perception relies on two fundamental dimensions, warmth and competence, which allow us to process the intentions of others and their ability to enact those intentions, respectively. Social cognition research indicates that, in categorization tasks, people tend to classify other individuals as more likely to belong to the out-group than the in-group (in-group overexclusion effect, IOE) when lacking diagnostic information, probably with the aim of protecting in-group integrity. Here, we explored the role of warmth and competence in group-membership decisions by testing 62 participants in a social-categorization task consisting of 150 neutral faces. We assessed whether (a) warmth and competence ratings could predict the in-group/out-group categorization, and (b) the reliance on these two dimensions differed in low-IOE versus high-IOE participants. Data showed that high ratings of warmth and competence were necessary to categorize a face as in-group. Moreover, while low-IOE participants relied on warmth, high-IOE participants relied on competence. This finding suggests that the proneness to include/exclude unknown identities in/from one's own in-group is related to individual differences in the reliance on SCM social dimensions. Furthermore, the primacy of the warmth effect seems not to represent a universal phenomenon adopted in the context of social evaluation.

  12. Peek-a-What? Infants' Response to the Still-Face Task after Normal and Interrupted Peek-a-Boo

    ERIC Educational Resources Information Center

    Bigelow, Ann E.; Best, Caitlin

    2013-01-01

    Infants' sensitivity to the vitality or tension envelope within dyadic social exchanges was investigated by examining their responses following normal and interrupted games of peek-a-boo embedded in a Still-Face Task. Infants 5-6 months old engaged in two modified Still-Face Tasks with their mothers. In one task, the initial interaction ended with…

  13. Causal attribution and psychobiological response to competition in young men.

    PubMed

    Salvador, Alicia; Costa, Raquel; Hidalgo, Vanesa; González-Bono, Esperanza

    2017-06-01

    A contribution to a special issue on Hormones and Human Competition. Psychoneuroendocrine effects of competition have been widely accepted as a clear example of the relationship between androgens and aggressive/dominant behavior in humans. However, results about the effects of competitive outcomes are quite heterogeneous, suggesting that personal and contextual factors play a moderating role in this relationship. To further explore these dimensions, we aimed to examine (i) the effect of competition and its outcome on the psychobiological response to a laboratory competition in young men, and (ii) the moderating role of some cognitive dimensions such as causal attributions. To do so, we compared the responses of 56 healthy young men faced with two competitive tasks with different instructions. Twenty-eight men carried out a task whose instructions led subjects to think the outcome was due to their personal performance ("merit" task), whereas 28 other men faced a task whose outcome was attributable to luck ("chance" task). In both cases, outcome was manipulated by the experimenter. Salivary steroid hormones (testosterone and cortisol), cardiovascular variables (heart rate and blood pressure), and emotional state (mood and anxiety) were measured at different moments before, during and after both tasks. Our results did not support the "winner-loser effect" because no significant differences were found in the responses of winners and losers. However, significantly higher values on the testosterone and cardiovascular variables, along with slight decreases in positive mood, were associated with the merit-based competition, but not the chance-based condition. In addition, an exploratory factorial analysis grouped the response components into two patterns traditionally related to more active or more passive behaviors. Thus, our results suggest that the perception of contributing to the outcome is relevant in the psychobiological response to competition in men. Overall, our results reveal the importance of the appraisal of control and causal attribution in understanding human competitive interactions. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. How distinct is the coding of face identity and expression? Evidence for some common dimensions in face space.

    PubMed

    Rhodes, Gillian; Pond, Stephen; Burton, Nichola; Kloth, Nadine; Jeffery, Linda; Bell, Jason; Ewing, Louise; Calder, Andrew J; Palermo, Romina

    2015-09-01

    Traditional models of face perception emphasize distinct routes for processing face identity and expression. These models have been highly influential in guiding neural and behavioural research on the mechanisms of face perception. However, it is becoming clear that specialised brain areas for coding identity and expression may respond to both attributes and that identity and expression perception can interact. Here we use perceptual aftereffects to demonstrate the existence of dimensions in perceptual face space that code both identity and expression, further challenging the traditional view. Specifically, we find a significant positive association between face identity aftereffects and expression aftereffects, which dissociates from other face (gaze) and non-face (tilt) aftereffects. Importantly, individual variation in the adaptive calibration of these common dimensions significantly predicts ability to recognize both identity and expression. These results highlight the role of common dimensions in our ability to recognize identity and expression, and show why the high-level visual processing of these attributes is not entirely distinct. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Familiarity and face emotion recognition in patients with schizophrenia.

    PubMed

    Lahera, Guillermo; Herrera, Sara; Fernández, Cristina; Bardón, Marta; de los Ángeles, Victoria; Fernández-Liria, Alberto

    2014-01-01

    To assess emotion recognition of familiar and unknown faces in a sample of schizophrenic patients and healthy controls. Face emotion recognition of 18 outpatients diagnosed with schizophrenia (DSM-IV-TR) and 18 healthy volunteers was assessed with two Emotion Recognition Tasks using familiar faces and unknown faces. Each subject was accompanied by 4 familiar people (parents, siblings or friends), who were photographed expressing the 6 Ekman basic emotions. Face emotion recognition in familiar faces was assessed with this ad hoc instrument. In each case, the patient scored (from 1 to 10) the subjective familiarity and affective valence corresponding to each person. Patients with schizophrenia not only showed a deficit in the recognition of emotions on unknown faces (p=.01), but they also showed an even more pronounced deficit on familiar faces (p=.001). Controls had a similar success rate in the unknown faces task (mean: 18 +/- 2.2) and the familiar faces task (mean: 17.4 +/- 3). However, patients had a significantly lower score in the familiar faces task (mean: 13.2 +/- 3.8) than in the unknown faces task (mean: 16 +/- 2.4; p<.05). In both tests, the highest number of errors was with emotions of anger and fear. Subjectively, the patient group showed a lower level of familiarity and emotional valence to their respective relatives (p<.01). The sense of familiarity may be a factor involved in face emotion recognition, and it may be disturbed in schizophrenia. © 2013.

  16. Cross-Cultural Study of Special Education Teachers' Perception of Iconicity of Graphic Symbols for Emotions

    ERIC Educational Resources Information Center

    Chae, Soo Jung

    2011-01-01

    This study investigated whether Korean and American teachers differ in their perception of the symbols representing six emotions. For an accurate comparison, two transparency tasks (Task 1-1 and Task 2) and one translucency task (Task 3) were used to investigate differences between Korean and American special…

  17. Social cognition and interaction training (SCIT) for outpatients with bipolar disorder.

    PubMed

    Lahera, G; Benito, A; Montes, J M; Fernández-Liria, A; Olbert, C M; Penn, D L

    2013-03-20

    Patients with bipolar disorder show social cognition deficits during both symptomatic and euthymic phases of the illness, partially independent of other cognitive dysfunctions and current mood. Previous studies in schizophrenia have revealed that social cognition is a modifiable domain. Social cognition and interaction training (SCIT) is an 18-week, manual-based, group treatment designed to improve social functioning by way of social cognition. 37 outpatients with DSM-IV-TR bipolar and schizoaffective disorders were randomly assigned to treatment as usual (TAU)+SCIT (n=21) or TAU (n=16). Independent, blind evaluators assessed subjects before and after the intervention on Face Emotion Identification Task (FEIT), Face Emotion Discrimination (FEDT), Emotion Recognition (ER40), Theory of Mind (Hinting Task) and Hostility Bias (AIHQ). Analysis of covariance revealed significant group effects for emotion perception, theory of mind, and depressive symptoms. The SCIT group showed a small within-group decrease on the AIHQ Blame subscale, a moderate decrease in AIHQ Hostility Bias, a small increase in scores on the Hinting Task, a moderate increase on the ER40, and large increases on the FEDT and FEIT. There was no evidence of effects on aggressive attributional biases or on global functioning. No follow up assessment was conducted, so it is unknown whether the effects of SCIT persist over time. This trial provides preliminary evidence that SCIT is feasible and may improve social cognition for bipolar and schizoaffective outpatients. Copyright © 2012 Elsevier B.V. All rights reserved.

  18. Children's Perceptions of and Beliefs about Facial Maturity

    ERIC Educational Resources Information Center

    Thomas, Gross F.

    2004-01-01

    The author studied children's and young adults' perceptions of facial age and beliefs about the sociability, cognitive ability, and physical fitness of adult faces. From pairs of photographs of adult faces, participants (4-6 years old, 8-10 years old, 13-16 years old, and 19-23 years old) selected the one face that appeared younger, older, better…

  19. Asymmetric Cultural Effects on Perceptual Expertise Underlie an Own-Race Bias for Voices

    ERIC Educational Resources Information Center

    Perrachione, Tyler K.; Chiao, Joan Y.; Wong, Patrick C. M.

    2010-01-01

    The own-race bias in memory for faces has been a rich source of empirical work on the mechanisms of person perception. This effect is thought to arise because the face-perception system differentially encodes the relevant structural dimensions of features and their configuration based on experiences with different groups of faces. However, the…

  20. Perception of Sexual Orientation from Facial Structure: A Study with Artificial Face Models.

    PubMed

    González-Álvarez, Julio

    2017-07-01

    Research has shown that lay people can perceive sexual orientation better than chance from face stimuli. However, the relation between facial structure and sexual orientation has been scarcely examined. Recently, an extensive morphometric study on a large sample of Canadian people (Skorska, Geniole, Vrysen, McCormick, & Bogaert, 2015) identified three (in men) and four (in women) facial features as unique multivariate predictors of sexual orientation in each sex group. The present study tested the perceptual validity of these facial traits with two experiments based on realistic artificial 3D face models created by manipulating the key parameters and presented to Spanish participants. Experiment 1 included 200 White and Black face models of both sexes. The results showed an overall accuracy (0.74) clearly above chance in a binary hetero/homosexual judgment task and significant differences depending on the race and sex of the face models. Experiment 2 produced five versions of 24 artificial faces of both sexes varying the key parameters in equal steps, and participants had to rate on a 1-7 scale how likely they thought that the depicted person had a homosexual sexual orientation. Rating scores displayed an almost perfect linear regression as a function of the parameter steps. In summary, both experiments demonstrated the perceptual validity of the seven multivariate predictors identified by Skorska et al. and open up new avenues for further research on this issue with artificial face models.
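
    A minimal sketch of the linear trend reported above, mean rating as a function of the morphing step, assuming Python with NumPy; the step values and mean ratings are invented for illustration.

        import numpy as np

        # Five parameter steps and hypothetical mean ratings on the 1-7 scale.
        steps = np.array([1, 2, 3, 4, 5])
        mean_rating = np.array([2.1, 3.0, 3.9, 5.2, 6.0])

        slope, intercept = np.polyfit(steps, mean_rating, deg=1)   # least-squares line
        r = np.corrcoef(steps, mean_rating)[0, 1]                  # Pearson correlation

        print(f"rating = {slope:.2f} * step + {intercept:.2f}  (r = {r:.3f})")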

  1. Rehabilitation of face-processing skills in an adolescent with prosopagnosia: Evaluation of an online perceptual training programme.

    PubMed

    Bate, Sarah; Bennetts, Rachel; Mole, Joseph A; Ainge, James A; Gregory, Nicola J; Bobak, Anna K; Bussunt, Amanda

    2015-01-01

    In this paper we describe the case of EM, a female adolescent who acquired prosopagnosia following encephalitis at the age of eight. Initial neuropsychological and eye-movement investigations indicated that EM had profound difficulties in face perception as well as face recognition. EM underwent 14 weeks of perceptual training in an online programme that attempted to improve her ability to make fine-grained discriminations between faces. Following training, EM's face perception skills had improved, and the effect generalised to untrained faces. Eye-movement analyses also indicated that EM spent more time viewing the inner facial features post-training. Examination of EM's face recognition skills revealed an improvement in her recognition of personally-known faces when presented in a laboratory-based test, although the same gains were not noted in her everyday experiences with these faces. In addition, EM did not improve on a test assessing the recognition of newly encoded faces. One month after training, EM had maintained the improvement on the eye-tracking test, and to a lesser extent, her performance on the familiar faces test. This pattern of findings is interpreted as promising evidence that the programme can improve face perception skills, and with some adjustments, may at least partially improve face recognition skills.

  2. Mechanisms of hemispheric lateralization: Asymmetric interhemispheric recruitment in the face perception network.

    PubMed

    Frässle, Stefan; Paulus, Frieder Michel; Krach, Sören; Schweinberger, Stefan Robert; Stephan, Klaas Enno; Jansen, Andreas

    2016-01-01

    Perceiving human faces constitutes a fundamental ability of the human mind, integrating a wealth of information essential for social interactions in everyday life. Neuroimaging studies have unveiled a distributed neural network consisting of multiple brain regions in both hemispheres. Whereas the individual regions in the face perception network and the right-hemispheric dominance for face processing have been subject to intensive research, the functional integration among these regions and hemispheres has received considerably less attention. Using dynamic causal modeling (DCM) for fMRI, we analyzed the effective connectivity between the core regions in the face perception network of healthy humans to unveil the mechanisms underlying both intra- and interhemispheric integration. Our results suggest that the right-hemispheric lateralization of the network is due to an asymmetric face-specific interhemispheric recruitment at an early processing stage - that is, at the level of the occipital face area (OFA) but not the fusiform face area (FFA). As a structural correlate, we found that OFA gray matter volume was correlated with this asymmetric interhemispheric recruitment. Furthermore, exploratory analyses revealed that interhemispheric connection asymmetries were correlated with the strength of pupil constriction in response to faces, a measure with potential sensitivity to holistic (as opposed to feature-based) processing of faces. Overall, our findings thus provide a mechanistic description for lateralized processes in the core face perception network, point to a decisive role of interhemispheric integration at an early stage of face processing among bilateral OFA, and tentatively indicate a relation to individual variability in processing strategies for faces. These findings provide a promising avenue for systematic investigations of the potential role of interhemispheric integration in future studies. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Developmental changes in analytic and holistic processes in face perception

    PubMed Central

    Joseph, Jane E.; DiBartolo, Michelle D.; Bhatt, Ramesh S.

    2015-01-01

    Although infants demonstrate sensitivity to some kinds of perceptual information in faces, many face capacities continue to develop throughout childhood. One debate is the degree to which children perceive faces analytically versus holistically and how these processes undergo developmental change. In the present study, school-aged children and adults performed a perceptual matching task with upright and inverted face and house pairs that varied in similarity of featural or 2nd order configural information. Holistic processing was operationalized as the degree of serial processing when discriminating faces and houses [i.e., increased reaction time (RT), as more features or spacing relations were shared between stimuli]. Analytical processing was operationalized as the degree of parallel processing (or no change in RT as a function of greater similarity of features or spatial relations). Adults showed the most evidence for holistic processing (most strongly for 2nd order faces) and holistic processing was weaker for inverted faces and houses. Younger children (6–8 years), in contrast, showed analytical processing across all experimental manipulations. Older children (9–11 years) showed an intermediate pattern with a trend toward holistic processing of 2nd order faces like adults, but parallel processing in other experimental conditions like younger children. These findings indicate that holistic face representations emerge around 10 years of age. In adults both 2nd order and featural information are incorporated into holistic representations, whereas older children only incorporate 2nd order information. Holistic processing was not evident in younger children. Hence, the development of holistic face representations relies on 2nd order processing initially then incorporates featural information by adulthood. PMID:26300838

  4. Multi-Task Convolutional Neural Network for Pose-Invariant Face Recognition

    NASA Astrophysics Data System (ADS)

    Yin, Xi; Liu, Xiaoming

    2018-02-01

    This paper explores multi-task learning (MTL) for face recognition. We answer the questions of how and why MTL can improve the face recognition performance. First, we propose a multi-task Convolutional Neural Network (CNN) for face recognition where identity classification is the main task and pose, illumination, and expression estimations are the side tasks. Second, we develop a dynamic-weighting scheme to automatically assign the loss weight to each side task, which is a crucial problem in MTL. Third, we propose a pose-directed multi-task CNN by grouping different poses to learn pose-specific identity features, simultaneously across all poses. Last but not least, we propose an energy-based weight analysis method to explore how CNN-based MTL works. We observe that the side tasks serve as regularizations to disentangle the variations from the learnt identity features. Extensive experiments on the entire Multi-PIE dataset demonstrate the effectiveness of the proposed approach. To the best of our knowledge, this is the first work using all data in Multi-PIE for face recognition. Our approach is also applicable to in-the-wild datasets for pose-invariant face recognition and achieves comparable or better performance than state of the art on LFW, CFP, and IJB-A datasets.
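
    The multi-task setup described above can be sketched as a shared backbone with one head per task and a weighted sum of losses, assuming Python with PyTorch; the tiny backbone, the fixed side-task weight of 0.1 and the label counts are illustrative simplifications, and the paper's dynamic-weighting and pose-directed schemes are not reproduced here.

        import torch
        import torch.nn as nn

        class MultiTaskFaceNet(nn.Module):
            """Shared backbone with an identity head (main task) and a pose head (side task)."""
            def __init__(self, num_identities=100, num_poses=9):
                super().__init__()
                self.backbone = nn.Sequential(
                    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                )
                self.identity_head = nn.Linear(32, num_identities)
                self.pose_head = nn.Linear(32, num_poses)

            def forward(self, images):
                shared = self.backbone(images)
                return self.identity_head(shared), self.pose_head(shared)

        model = MultiTaskFaceNet()
        criterion = nn.CrossEntropyLoss()

        images = torch.randn(8, 3, 64, 64)            # a batch of face crops
        id_labels = torch.randint(0, 100, (8,))
        pose_labels = torch.randint(0, 9, (8,))

        id_logits, pose_logits = model(images)
        # Weighted sum of main-task and side-task losses; the side task acts as a
        # regularizer on the shared features, one interpretation offered in the record above.
        loss = criterion(id_logits, id_labels) + 0.1 * criterion(pose_logits, pose_labels)
        loss.backward()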

  5. Early (M170) activation of face-specific cortex by face-like objects.

    PubMed

    Hadjikhani, Nouchine; Kveraga, Kestutis; Naik, Paulami; Ahlfors, Seppo P

    2009-03-04

    The tendency to perceive faces in random patterns exhibiting configural properties of faces is an example of pareidolia. Perception of 'real' faces has been associated with a cortical response signal arising at approximately 170 ms after stimulus onset, but what happens when nonface objects are perceived as faces? Using magnetoencephalography, we found that objects incidentally perceived as faces evoked an early (165 ms) activation in the ventral fusiform cortex, at a time and location similar to that evoked by faces, whereas common objects did not evoke such activation. An earlier peak at 130 ms was also seen for images of real faces only. Our findings suggest that face perception evoked by face-like objects is a relatively early process, and not a late cognitive reinterpretation phenomenon.

  6. The Bangor Voice Matching Test: A standardized test for the assessment of voice perception ability.

    PubMed

    Mühl, Constanze; Sheil, Orla; Jarutytė, Lina; Bestelmeyer, Patricia E G

    2017-11-09

    Recognising the identity of conspecifics is an important yet highly variable skill. Approximately 2% of the population suffers from a socially debilitating deficit in face recognition. More recently, evidence has emerged of a similar deficit in voice perception (phonagnosia). Face perception tests have been readily available for years, advancing our understanding of underlying mechanisms in face perception. In contrast, voice perception has received less attention, and the construction of standardized voice perception tests has been neglected. Here we report the construction of the first standardized test for voice perception ability. Participants make a same/different identity decision after hearing two voice samples. Item Response Theory guided item selection to ensure the test discriminates between a range of abilities. The test provides a starting point for the systematic exploration of the cognitive and neural mechanisms underlying voice perception. With a high test-retest reliability (r=.86) and short assessment duration (~10 min), this test examines individual abilities reliably and quickly and therefore also has potential for use in developmental and neuropsychological populations.
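
    A minimal sketch of the test-retest reliability statistic cited above, the correlation between two sessions of the same test, assuming Python with NumPy; the per-participant scores are invented for illustration.

        import numpy as np

        # Hypothetical proportion-correct scores for the same eight participants at two sessions.
        session_1 = np.array([0.62, 0.71, 0.80, 0.55, 0.90, 0.67, 0.74, 0.83])
        session_2 = np.array([0.65, 0.69, 0.84, 0.58, 0.88, 0.70, 0.71, 0.85])

        # Test-retest reliability as the Pearson correlation between the two sessions.
        r = np.corrcoef(session_1, session_2)[0, 1]
        print(round(r, 2))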

  7. Project PAVE (Personality And Vision Experimentation): role of personal and interpersonal resilience in the perception of emotional facial expression

    PubMed Central

    Tanzer, Michal; Shahar, Golan; Avidan, Galia

    2014-01-01

    The aim of the proposed theoretical model is to illuminate personal and interpersonal resilience by drawing from the field of emotional face perception. We suggest that perception/recognition of emotional facial expressions serves as a central link between subjective, self-related processes and the social context. Emotional face perception constitutes a salient social cue underlying interpersonal communication and behavior. Because problems in communication and interpersonal behavior underlie most, if not all, forms of psychopathology, it follows that perception/recognition of emotional facial expressions impacts psychopathology. The ability to accurately interpret another person’s facial expression is crucial in subsequently deciding on an appropriate course of action. However, perception in general, and of emotional facial expressions in particular, is highly influenced by individuals’ personality and the self-concept. Herein we briefly outline well-established theories of personal and interpersonal resilience and link them to the neuro-cognitive basis of face perception. We then describe the findings of our ongoing program of research linking two well-established resilience factors, general self-efficacy (GSE) and perceived social support (PSS), with face perception. We conclude by pointing out avenues for future research focusing on possible genetic markers and patterns of brain connectivity associated with the proposed model. Implications of our integrative model for psychotherapy are discussed. PMID:25165439

  8. Famous face recognition, face matching, and extraversion.

    PubMed

    Lander, Karen; Poyarekar, Siddhi

    2015-01-01

    It has been previously established that extraverts who are skilled at interpersonal interaction perform significantly better than introverts on a face-specific recognition memory task. In our experiment we further investigate the relationship between extraversion and face recognition, focusing on famous face recognition and face matching. Results indicate that more extraverted individuals perform significantly better on an upright famous face recognition task and show significantly larger face inversion effects. However, our results did not find an effect of extraversion on face matching or inverted famous face recognition.

  9. Cortical Hierarchies Perform Bayesian Causal Inference in Multisensory Perception

    PubMed Central

    Rohe, Tim; Noppeney, Uta

    2015-01-01

    To form a veridical percept of the environment, the brain needs to integrate sensory signals from a common source but segregate those from independent sources. Thus, perception inherently relies on solving the “causal inference problem.” Behaviorally, humans solve this problem optimally as predicted by Bayesian Causal Inference; yet, the underlying neural mechanisms are unexplored. Combining psychophysics, Bayesian modeling, functional magnetic resonance imaging (fMRI), and multivariate decoding in an audiovisual spatial localization task, we demonstrate that Bayesian Causal Inference is performed by a hierarchy of multisensory processes in the human brain. At the bottom of the hierarchy, in auditory and visual areas, location is represented on the basis that the two signals are generated by independent sources (= segregation). At the next stage, in posterior intraparietal sulcus, location is estimated under the assumption that the two signals are from a common source (= forced fusion). Only at the top of the hierarchy, in anterior intraparietal sulcus, the uncertainty about the causal structure of the world is taken into account and sensory signals are combined as predicted by Bayesian Causal Inference. Characterizing the computational operations of signal interactions reveals the hierarchical nature of multisensory perception in human neocortex. It unravels how the brain accomplishes Bayesian Causal Inference, a statistical computation fundamental for perception and cognition. Our results demonstrate how the brain combines information in the face of uncertainty about the underlying causal structure of the world. PMID:25710328
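
    The record describes the computation verbally; for orientation, a standard formulation of Bayesian Causal Inference for spatial localization (Körding et al., 2007) is sketched below. The symbols (x_A, x_V for the auditory and visual measurements, sigma_A, sigma_V for their noise, mu_P, sigma_P for the spatial prior, p_common for the prior probability of a common cause) are the conventional ones from that literature, not values taken from this abstract.

    ```latex
    % Posterior probability that the auditory and visual signals share a common cause (C = 1)
    P(C{=}1 \mid x_A, x_V) =
      \frac{P(x_A, x_V \mid C{=}1)\, p_{\mathrm{common}}}
           {P(x_A, x_V \mid C{=}1)\, p_{\mathrm{common}} + P(x_A, x_V \mid C{=}2)\,(1 - p_{\mathrm{common}})}

    % Forced fusion (C = 1): reliability-weighted average of both cues and the spatial prior
    \hat{S}_{C=1} =
      \frac{x_A/\sigma_A^2 + x_V/\sigma_V^2 + \mu_P/\sigma_P^2}
           {1/\sigma_A^2 + 1/\sigma_V^2 + 1/\sigma_P^2}

    % Segregation (C = 2): each cue is combined with the prior alone, e.g. for audition
    \hat{S}_{A,\,C=2} =
      \frac{x_A/\sigma_A^2 + \mu_P/\sigma_P^2}{1/\sigma_A^2 + 1/\sigma_P^2}

    % Model averaging: the final auditory estimate mixes fusion and segregation
    % according to the posterior probability of a common cause
    \hat{S}_A = P(C{=}1 \mid x_A, x_V)\, \hat{S}_{C=1}
              + \bigl(1 - P(C{=}1 \mid x_A, x_V)\bigr)\, \hat{S}_{A,\,C=2}
    ```

    The hierarchy reported in the abstract maps onto these quantities: the segregation estimates in auditory and visual areas, the forced-fusion estimate in posterior intraparietal sulcus, and the model-averaged estimate in anterior intraparietal sulcus.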

  10. What’s in a Face? How Face Gender and Current Affect Influence Perceived Emotion

    PubMed Central

    Harris, Daniel A.; Hayes-Skelton, Sarah A.; Ciaramitaro, Vivian M.

    2016-01-01

    Faces drive our social interactions. A vast literature suggests an interaction between gender and emotional face perception, with studies using different methodologies demonstrating that the gender of a face can affect how emotions are processed. However, how different is our perception of affective male and female faces? Furthermore, how does our current affective state when viewing faces influence our perceptual biases? We presented participants with a series of faces morphed along an emotional continuum from happy to angry. Participants judged each face morph as either happy or angry. We determined each participant’s unique emotional ‘neutral’ point, defined as the face morph judged to be perceived equally happy and angry, separately for male and female faces. We also assessed how current state affect influenced these perceptual neutral points. Our results indicate that, for both male and female participants, the emotional neutral point for male faces is perceptually biased to be happier than for female faces. This bias suggests that more happiness is required to perceive a male face as emotionally neutral, i.e., we are biased to perceive a male face as more negative. Interestingly, we also find that perceptual biases in perceiving female faces are correlated with current mood, such that positive state affect correlates with perceiving female faces as happier, while we find no significant correlation between negative state affect and the perception of facial emotion. Furthermore, we find reaction time biases, with slower responses for angry male faces compared to angry female faces. PMID:27733839

  11. Perception of difficulty and glucose control: Effects on academic performance in youth with type I diabetes.

    PubMed

    Potts, Tiffany M; Nguyen, Jacqueline L; Ghai, Kanika; Li, Kathy; Perlmuter, Lawrence

    2015-04-15

    To investigate whether perceptions of task difficulty on neuropsychological tests predicted academic achievement after controlling for glucose levels and depression. Participants were type 1 diabetic adolescents, with a mean age = 12.5 years (23 females and 16 males), seen at a northwest suburban Chicago hospital. The sample population was free of co-morbid clinical health conditions. Subjects completed a three-part neuropsychological battery including the Digit Symbol Task, Trail Making Test, and Controlled Oral Word Association test. Following each task, individuals rated task difficulty and then completed a depression inventory. Performance on these three tests is reflective of neuropsychological status in relation to glucose control. Blood glucose levels were measured immediately prior to and after completing the neuropsychological battery using a glucose meter. HbA1c levels were obtained from medical records. Academic performance was based on self-reported grades in Math, Science, and English. Data were analyzed using multiple regression models to evaluate the associations between academic performance, perception of task difficulty, and glucose control. Perceptions of difficulty on a neuropsychological battery significantly predicted academic performance after accounting for glucose control and depression. Perceptions of difficulty on the neuropsychological tests were inversely correlated with academic performance (r = -0.48), while acute (blood glucose) and long-term glucose levels increased along with perceptions of task difficulty (r = 0.47). Additionally, higher depression scores were associated with poorer academic performance (r = -0.43). In the first regression analysis, perception of difficulty on the neuropsychological tasks contributed 8% of the variance in academic performance after controlling for peripheral blood glucose and depression. In the second regression analysis, perception of difficulty accounted for 11% of the variance after accounting for academic performance and depression. The final regression analysis indicated that perception of difficulty increased with peripheral blood glucose, contributing 22% of the variance. Most importantly, after controlling for perceptions of task difficulty, academic performance no longer predicted glucose levels. Finally, subjects who found the cognitive battery difficult were likely to have poor academic grades. Perceptions of difficulty on neuropsychological tests exhibited a significant association with academic achievement, indicating that deficits in this skill may lead to academic disadvantage in diabetic patients.

  12. Investigating the Interactions among Genre, Task Complexity, and Proficiency in L2 Writing: A Comprehensive Text Analysis and Study of Learner Perceptions

    ERIC Educational Resources Information Center

    Yoon, Hyung-Jo

    2017-01-01

    In this study, I explored the interactions among genre, task complexity, and L2 proficiency in learners' writing task performance. Specifically, after identifying the lack of valid operationalizations of genre and task dimensions in L2 writing research, I examined how genre functions as a task complexity variable, and how learners' perceptions and…

  13. About-face on face recognition ability and holistic processing

    PubMed Central

    Richler, Jennifer J.; Floyd, R. Jackie; Gauthier, Isabel

    2015-01-01

    Previous work found a small but significant relationship between holistic processing measured with the composite task and face recognition ability measured by the Cambridge Face Memory Test (CFMT; Duchaine & Nakayama, 2006). Surprisingly, recent work using a different measure of holistic processing (Vanderbilt Holistic Face Processing Test [VHPT-F]; Richler, Floyd, & Gauthier, 2014) and a larger sample found no evidence for such a relationship. In Experiment 1 we replicate this unexpected result, finding no relationship between holistic processing (VHPT-F) and face recognition ability (CFMT). A key difference between the VHPT-F and other holistic processing measures is that unique face parts are used on each trial in the VHPT-F, unlike in other tasks where a small set of face parts repeat across the experiment. In Experiment 2, we test the hypothesis that correlations between the CFMT and holistic processing tasks are driven by stimulus repetition that allows for learning during the composite task. Consistent with our predictions, CFMT performance was correlated with holistic processing in the composite task when a small set of face parts repeated over trials, but not when face parts did not repeat. A meta-analysis confirms that relationships between the CFMT and holistic processing depend on stimulus repetition. These results raise important questions about what is being measured by the CFMT, and challenge current assumptions about why faces are processed holistically. PMID:26223027

  14. About-face on face recognition ability and holistic processing.

    PubMed

    Richler, Jennifer J; Floyd, R Jackie; Gauthier, Isabel

    2015-01-01

    Previous work found a small but significant relationship between holistic processing measured with the composite task and face recognition ability measured by the Cambridge Face Memory Test (CFMT; Duchaine & Nakayama, 2006). Surprisingly, recent work using a different measure of holistic processing (Vanderbilt Holistic Face Processing Test [VHPT-F]; Richler, Floyd, & Gauthier, 2014) and a larger sample found no evidence for such a relationship. In Experiment 1 we replicate this unexpected result, finding no relationship between holistic processing (VHPT-F) and face recognition ability (CFMT). A key difference between the VHPT-F and other holistic processing measures is that unique face parts are used on each trial in the VHPT-F, unlike in other tasks where a small set of face parts repeat across the experiment. In Experiment 2, we test the hypothesis that correlations between the CFMT and holistic processing tasks are driven by stimulus repetition that allows for learning during the composite task. Consistent with our predictions, CFMT performance was correlated with holistic processing in the composite task when a small set of face parts repeated over trials, but not when face parts did not repeat. A meta-analysis confirms that relationships between the CFMT and holistic processing depend on stimulus repetition. These results raise important questions about what is being measured by the CFMT, and challenge current assumptions about why faces are processed holistically.

  15. The nature of face representations in subcortical regions.

    PubMed

    Gabay, Shai; Burlingham, Charles; Behrmann, Marlene

    2014-07-01

    Studies examining the neural correlates of face perception in humans have focused almost exclusively on the distributed cortical network of face-selective regions. Recently, however, investigations have also identified subcortical correlates of face perception, and the question addressed here concerns the nature of these subcortical face representations. To explore this issue, we presented participants with pairs of images sequentially, either to the same eye or to different eyes. Superior performance in the former condition over the latter implicates monocular, prestriate portions of the visual system. Over a series of five experiments, we manipulated both lower-level (size, location) as well as higher-level (identity) similarity across the pair of faces. A monocular advantage was observed even when the faces in a pair differed in location and in size, implicating some subcortical invariance across lower-level image properties. A monocular advantage was also observed when the faces in a pair were two different images of the same individual, indicating the engagement of subcortical representations in more abstract, higher-level aspects of face processing. We conclude that subcortical structures of the visual system are involved, perhaps interactively, in multiple aspects of face perception, and not simply in deriving initial coarse representations. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Dynamic facial expressions evoke distinct activation in the face perception network: a connectivity analysis study.

    PubMed

    Foley, Elaine; Rippon, Gina; Thai, Ngoc Jade; Longe, Olivia; Senior, Carl

    2012-02-01

    Very little is known about the neural structures involved in the perception of realistic dynamic facial expressions. In the present study, a unique set of naturalistic dynamic facial emotional expressions was created. Through fMRI and connectivity analysis, a dynamic face perception network was identified, which is demonstrated to extend Haxby et al.'s [Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. The distributed human neural system for face perception. Trends in Cognitive Science, 4, 223-233, 2000] distributed neural system for face perception. This network includes early visual regions, such as the inferior occipital gyrus, which is identified as insensitive to motion or affect but sensitive to the visual stimulus, the STS, identified as specifically sensitive to motion, and the amygdala, recruited to process affect. Measures of effective connectivity between these regions revealed that dynamic facial stimuli were associated with specific increases in connectivity between early visual regions, such as the inferior occipital gyrus and the STS, along with coupling between the STS and the amygdala, as well as the inferior frontal gyrus. These findings support the presence of a distributed network of cortical regions that mediate the perception of different dynamic facial expressions.

  17. Concurrent development of facial identity and expression discrimination.

    PubMed

    Dalrymple, Kirsten A; Visconti di Oleggio Castello, Matteo; Elison, Jed T; Gobbini, M Ida

    2017-01-01

    Facial identity and facial expression processing both appear to follow a protracted developmental trajectory, yet these trajectories have been studied independently and have not been directly compared. Here we investigated whether these processes develop at the same or different rates using matched identity and expression discrimination tasks. The Identity task begins with a target face that is a morph between two identities (Identity A/Identity B). After a brief delay, the target face is replaced by two choice faces: 100% Identity A and 100% Identity B. Children aged 5-12 years were asked to pick the choice face that is most similar to the target identity. The Expression task is matched in format and difficulty to the Identity task, except the targets are morphs between two expressions (Angry/Happy, or Disgust/Surprise). The same children were asked to pick the choice face with the expression that is most similar to the target expression. There were significant effects of age, with performance improving (becoming more accurate and faster) on both tasks with increasing age. Accuracy and reaction times were not significantly different across tasks and there was no significant Age x Task interaction. Thus, facial identity and facial expression discrimination appear to develop at a similar rate, with comparable improvement on both tasks from age five to twelve. Because our tasks are so closely matched in format and difficulty, they may prove useful for testing face identity and face expression processing in special populations, such as autism or prosopagnosia, where one of these abilities might be impaired.

  18. Interactions between concentric form-from-structure and face perception revealed by visual masking but not adaptation

    PubMed Central

    Feczko, Eric; Shulman, Gordon L.; Petersen, Steven E.; Pruett, John R.

    2014-01-01

    Findings from diverse subfields of vision research suggest a potential link between high-level aspects of face perception and concentric form-from-structure perception. To explore this relationship, typical adults performed two adaptation experiments and two masking experiments to test whether concentric, but not nonconcentric, Glass patterns (a type of form-from-structure stimulus) utilize a processing mechanism shared by face perception. For the adaptation experiments, subjects were presented with an adaptor for 5 or 20 s, prior to discriminating a target. In the masking experiments, subjects saw a mask, then a target, and then a second mask. Measures of discriminability and bias were derived and repeated measures analysis of variance tested for pattern-specific masking and adaptation effects. Results from Experiment 1 show no Glass pattern-specific effect of adaptation to faces; results from Experiment 2 show concentric Glass pattern masking, but not adaptation, may impair upright/inverted face discrimination; results from Experiment 3 show concentric and radial Glass pattern masking impaired subsequent upright/inverted face discrimination more than translational Glass pattern masking; and results from Experiment 4 show concentric and radial Glass pattern masking impaired subsequent face gender discrimination more than translational Glass pattern masking. Taken together, these findings demonstrate interactions between concentric form-from-structure and face processing, suggesting a possible common processing pathway. PMID:24563526

  19. Early (N170) activation of face-specific cortex by face-like objects

    PubMed Central

    Hadjikhani, Nouchine; Kveraga, Kestutis; Naik, Paulami; Ahlfors, Seppo P.

    2009-01-01

    The tendency to perceive faces in random patterns exhibiting configural properties of faces is an example of pareidolia. Perception of ‘real’ faces has been associated with a cortical response signal arising at about 170 ms after stimulus onset; but what happens when non-face objects are perceived as faces? Using magnetoencephalography (MEG), we found that objects incidentally perceived as faces evoked an early (165 ms) activation in the ventral fusiform cortex, at a time and location similar to that evoked by faces, whereas common objects did not evoke such activation. An earlier peak at 130 ms was also seen for images of real faces only. Our findings suggest that face perception evoked by face-like objects is a relatively early process, and not a late re-interpretation cognitive phenomenon. PMID:19218867

  20. Effects of color information on face processing using event-related potentials and gamma oscillations.

    PubMed

    Minami, T; Goto, K; Kitazaki, M; Nakauchi, S

    2011-03-10

    In humans, face configuration, contour and color may affect face perception, which is important for social interactions. This study aimed to determine the effect of color information on face perception by measuring event-related potentials (ERPs) during the presentation of natural- and bluish-colored faces. Our results demonstrated that the amplitude of the N170 event-related potential, which correlates strongly with face processing, was higher in response to a bluish-colored face than to a natural-colored face. However, gamma-band activity was insensitive to the deviation from a natural face color. These results indicated that color information affects the N170 associated with a face detection mechanism, which suggests that face color is important for face detection. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.

  1. The Relationship Between Speech Production and Speech Perception Deficits in Parkinson's Disease.

    PubMed

    De Keyser, Kim; Santens, Patrick; Bockstael, Annelies; Botteldooren, Dick; Talsma, Durk; De Vos, Stefanie; Van Cauwenberghe, Mieke; Verheugen, Femke; Corthals, Paul; De Letter, Miet

    2016-10-01

    This study investigated the possible relationship between hypokinetic speech production and speech intensity perception in patients with Parkinson's disease (PD). Participants included 14 patients with idiopathic PD and 14 matched healthy controls (HCs) with normal hearing and cognition. First, speech production was assessed objectively through a standardized speech intelligibility assessment, acoustic analysis, and speech intensity measurements. Second, an overall estimation task and an intensity estimation task were administered to evaluate overall speech perception and speech intensity perception, respectively. Finally, correlation analysis was performed between the speech characteristics of the overall estimation task and the corresponding acoustic analysis. The interaction between speech production and speech intensity perception was investigated by an intensity imitation task. Acoustic analysis and speech intensity measurements demonstrated significant differences in speech production between patients with PD and the HCs. A different pattern in the auditory perception of speech and speech intensity was found in the PD group. Auditory perceptual deficits may influence speech production in patients with PD. The present results suggest a disturbed auditory perception related to an automatic monitoring deficit in PD.

  2. A novel augmented reality simulator for skills assessment in minimal invasive surgery.

    PubMed

    Lahanas, Vasileios; Loukas, Constantinos; Smailis, Nikolaos; Georgiou, Evangelos

    2015-08-01

    Over the past decade, simulation-based training has come to the foreground as an efficient method for training and assessment of surgical skills in minimal invasive surgery. Box-trainers and virtual reality (VR) simulators have been introduced in the teaching curricula and have substituted to some extent the traditional model of training based on animals or cadavers. Augmented reality (AR) is a new technology that allows blending of VR elements and real objects within a real-world scene. In this paper, we present a novel AR simulator for assessment of basic laparoscopic skills. The components of the proposed system include: a box-trainer, a camera and a set of laparoscopic tools equipped with custom-made sensors that allow interaction with VR training elements. Three AR tasks were developed, focusing on basic skills such as perception of depth of field, hand-eye coordination and bimanual operation. The construct validity of the system was evaluated via a comparison between two experience groups: novices with no experience in laparoscopic surgery and experienced surgeons. The observed metrics included task execution time, tool pathlength and two task-specific errors. The study also included a feedback questionnaire requiring participants to evaluate the face validity of the system. Between-group comparison demonstrated highly significant differences (p < 0.01) in all performance metrics and tasks, supporting the simulator's construct validity. Qualitative analysis on the instruments' trajectories highlighted differences between novices and experts regarding smoothness and economy of motion. Subjects' ratings on the feedback questionnaire highlighted the face validity of the training system. The results highlight the potential of the proposed simulator to discriminate groups with different expertise providing a proof of concept for the potential use of AR as a core technology for laparoscopic simulation training.

  3. Developmental Changes in the Perception of Adult Facial Age

    ERIC Educational Resources Information Center

    Gross, Thomas F.

    2007-01-01

    The author studied children's (aged 5-16 years) and young adults' (aged 18-22 years) perception and use of facial features to discriminate the age of mature adult faces. In Experiment 1, participants rated the age of unaltered and transformed (eyes, nose, eyes and nose, and whole face blurred) adult faces (aged 20-80 years). In Experiment 2,…

  4. A between-subjects test of the lower-identification/higher-priming paradox.

    PubMed

    Rubino, I Alex; Rociola, Giuseppe; Di Lorenzo, Giorgio; Magni, Valentina; Ribolsi, Michele; Mancini, Valentina; Saya, Anna; Pezzarossa, Bianca; Siracusano, Alberto; Suslow, Thomas

    2013-01-01

    An under-recognised U-shaped model states that unconscious and conscious perceptual effects are functionally exclusive and that unconscious perceptual effects manifest themselves only at the objective detection threshold, when conscious perception is completely absent. We tested the U-shaped line model with a between-subjects paradigm. Angry, happy, or neutral faces, or blank slides, were flashed for 5.5 ms and 19.5 ms before Chinese ideographs in a darkened room. A group of volunteers (n = 84) were asked to rate how much they liked each ideograph and performed an identification task. Based on the median identification score, two subgroups were formed: one with identification scores at or below 50% (n = 31) and one with scores above 50% (n = 53). The hypothesised U-shaped line was confirmed by the findings. Affective priming was found only at the two extreme points: the 5.5 ms condition of the low-identification group (subliminal perception) and the 19.5 ms condition of the high-identification (> 50%) group (supraliminal perception). The two intermediate points (19.5 ms of the low-identification group and 5.5 ms of the high-identification group) did not correspond to significant priming effects. These results confirm that a complete absence of conscious perception is the condition for the deployment of unconscious perceptual effects.

  5. Auditory-visual speech integration by prelinguistic infants: perception of an emergent consonant in the McGurk effect.

    PubMed

    Burnham, Denis; Dodd, Barbara

    2004-12-01

    The McGurk effect, in which auditory [ba] dubbed onto [ga] lip movements is perceived as "da" or "tha," was employed in a real-time task to investigate auditory-visual speech perception in prelingual infants. Experiments 1A and 1B established the validity of real-time dubbing for producing the effect. In Experiment 2, 4 1/2-month-olds were tested in a habituation-test paradigm, in which an auditory-visual stimulus was presented contingent upon visual fixation of a live face. The experimental group was habituated to a McGurk stimulus (auditory [ba], visual [ga]), and the control group to matching auditory-visual [ba]. Each group was then presented with three auditory-only test trials, [ba], [da], and [ða] (as in then). Visual-fixation durations in test trials showed that the experimental group treated the emergent percept in the McGurk effect, [da] or [ða], as familiar (even though they had not heard these sounds previously) and [ba] as novel. For control group infants, [da] and [ða] were no more familiar than [ba]. These results are consistent with infants' perception of the McGurk effect, and support the conclusion that prelinguistic infants integrate auditory and visual speech information. Copyright 2004 Wiley Periodicals, Inc.

  6. Variations in the perceptions of peer and coach motivational climate.

    PubMed

    Vazou, Spiridoula

    2010-06-01

    This study examined (a) variations in the perceptions of peer- and coach-generated motivational climate within and between teams and (b) individual- and group-level factors that can account for these variations. Participants were 483 athletes between 12 and 16 years old. The results showed that perceptions of both peer- and coach-generated climate varied as a function of group-level variables, namely team success, coach's gender (except for peer ego-involving climate), and team type (only for coach ego-involving climate). Perceptions of peer- and coach-generated climate also varied as a function of individual-level variables, namely athletes' task and ego orientations, gender, and age (only for coach task-involving and peer ego-involving climate). Moreover, within-team variations in perceptions of peer- and coach-generated climate as a function of task and ego orientation levels were identified. Identifying and controlling the factors that influence perceptions of peer- and coach-generated climate may be important in strengthening task-involving motivational cues.

  7. Visual perception during mirror gazing at one's own face in schizophrenia.

    PubMed

    Caputo, Giovanni B; Ferrucci, Roberta; Bortolomasi, Marco; Giacopuzzi, Mario; Priori, Alberto; Zago, Stefano

    2012-09-01

    In normal observers, gazing at one's own face in the mirror for some minutes at a low illumination level triggers the perception of strange faces, a new perceptual illusion that has been named 'strange-face in the mirror'. Subjects see distortions of their own faces, but often they see monsters, archetypical faces, faces of dead relatives, and of animals. We designed this study primarily to compare strange-face apparitions in response to mirror gazing in patients with schizophrenia and healthy controls. The study included 16 patients with schizophrenia and 21 healthy controls. We administered a 7-minute mirror gazing test (MGT). Before the mirror gazing session, all subjects underwent assessment with the Cardiff Anomalous Perception Scale (CAPS). When the 7-minute MGT ended, the experimenter assessed patients and controls with a specifically designed questionnaire and interviewed them, asking them to describe strange-face perceptions. Apparitions of strange-faces in the mirror were significantly more intense in schizophrenic patients than in controls. All the following variables were higher in patients than in healthy controls: frequency (p<.005) and cumulative duration of apparitions (p<.009), number and types of strange-faces (p<.002), self-evaluation scores on Likert-type scales of apparition strength (p<.03) and of reality of apparitions (p<.001). In schizophrenic patients, these Likert-type scales showed correlations (p<.05) with CAPS total scores. These results suggest that the increase of strange-face apparitions in schizophrenia can be produced by ego dysfunction, by body dysmorphic disorder and by misattribution of self-agency. MGT may help in completing the standard assessment of patients with schizophrenia, independently of hallucinatory psychopathology. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. Laughter exaggerates happy and sad faces depending on visual context

    PubMed Central

    Sherman, Aleksandra; Sweeny, Timothy D.; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    Laughter is an auditory stimulus that powerfully conveys positive emotion. We investigated how laughter influenced visual perception of facial expressions. We simultaneously presented laughter with a happy, neutral, or sad schematic face. The emotional face was briefly presented either alone or among a crowd of neutral faces. We used a matching method to determine how laughter influenced the perceived intensity of happy, neutral, and sad expressions. For a single face, laughter increased the perceived intensity of a happy expression. Surprisingly, for a crowd of faces laughter produced an opposite effect, increasing the perceived intensity of a sad expression in a crowd. A follow-up experiment revealed that this contrast effect may have occurred because laughter made the neutral distracter faces appear slightly happy, thereby making the deviant sad expression stand out in contrast. A control experiment ruled out semantic mediation of the laughter effects. Our demonstration of the strong context dependence of laughter effects on facial expression perception encourages a re-examination of the previously demonstrated effects of prosody, speech content, and mood on face perception, as they may similarly be context dependent. PMID:22215467

  9. Developmental changes in perceptions of attractiveness: a role of experience?

    PubMed

    Cooper, Philip A; Geldart, Sybil S; Mondloch, Catherine J; Maurer, Daphne

    2006-09-01

    In three experiments, we traced the development of the adult pattern of judgments of attractiveness for faces that have been altered to have internal features in low, average, or high positions. Twelve-year-olds and adults demonstrated identical patterns of results: they rated faces with features in an average location as significantly more attractive than faces with either low or high features. Although both 4-year-olds and 9-year-olds rated faces with high features as least attractive, unlike adults and 12-year-olds, they rated faces with low and average features as equally attractive. Three-year-olds with high levels of peer interaction, but not those with low levels of peer interaction, chose faces with low features as significantly more attractive than those with high-placed features, possibly as a result of their increased experience with the proportions of the faces of peers. Overall, the pattern of results is consistent with the hypothesis that experience influences perceptions of attractiveness, with the proportions of the faces participants see in their everyday lives influencing their perceptions of attractiveness.

  10. Developmental prosopagnosia and super-recognition: no special role for surface reflectance processing

    PubMed Central

    Russell, Richard; Chatterjee, Garga; Nakayama, Ken

    2011-01-01

    Face recognition by normal subjects depends in roughly equal proportions on shape and surface reflectance cues, while object recognition depends predominantly on shape cues. It is possible that developmental prosopagnosics are deficient not in their ability to recognize faces per se, but rather in their ability to use reflectance cues. Similarly, super-recognizers’ exceptional ability with face recognition may be a result of superior surface reflectance perception and memory. We tested this possibility by administering tests of face perception and face recognition in which only shape or reflectance cues are available to developmental prosopagnosics, super-recognizers, and control subjects. Face recognition ability and the relative use of shape and pigmentation were unrelated in all the tests. Subjects who were better at using shape or reflectance cues were also better at using the other type of cue. These results do not support the proposal that variation in surface reflectance perception ability is the underlying cause of variation in face recognition ability. Instead, these findings support the idea that face recognition ability is related to neural circuits using representations that integrate shape and pigmentation information. PMID:22192636

  11. What Influences Principals' Perceptions of Academic Climate? A Nationally Representative Study of the Direct Effects of Perception on Climate

    ERIC Educational Resources Information Center

    Urick, Angela; Bowers, Alex J.

    2011-01-01

    Using a nationally representative sample of public high schools (N = 439), we examined the extent to which the principal's perception of their influence over instruction, the evaluation of nonacademic related tasks as well as academic related tasks, and their relationship with the school district relates to their perception of academic climate…

  12. Modeling the Relationship between Perceptions of Assessment Tasks and Classroom Assessment Environment as a Function of Gender

    ERIC Educational Resources Information Center

    Alkharusi, Hussain; Aldhafri, Said; Alnabhani, Hilal; Alkalbani, Muna

    2014-01-01

    A substantial proportion of the classroom time involves exposing students to a variety of assessment tasks. As students process these tasks, they develop beliefs about the importance, utility, value, and difficulty of the tasks. This study aimed at deriving a model describing the multivariate relationship between students' perceptions of the…

  13. Age-Group Differences in Interference from Young and Older Emotional Faces.

    PubMed

    Ebner, Natalie C; Johnson, Marcia K

    2010-11-01

    Human attention is selective, focusing on some aspects of events at the expense of others. In particular, angry faces engage attention. Most studies have used pictures of young faces, even when comparing young and older age groups. Two experiments asked (1) whether task-irrelevant faces of young and older individuals with happy, angry, and neutral expressions disrupt performance on a face-unrelated task, (2) whether interference varies for faces of different ages and different facial expressions, and (3) whether young and older adults differ in this regard. Participants gave speeded responses on a number task while irrelevant faces appeared in the background. Both age groups were more distracted by own than other-age faces. In addition, young participants' responses were slower for angry than happy faces, whereas older participants' responses were slower for happy than angry faces. Factors underlying age-group differences in interference from emotional faces of different ages are discussed.

  14. How Context Influences Our Perception of Emotional Faces: A Behavioral Study on the Kuleshov Effect

    PubMed Central

    Calbi, Marta; Heimann, Katrin; Barratt, Daniel; Siri, Francesca; Umiltà, Maria A.; Gallese, Vittorio

    2017-01-01

    Facial expressions are of major importance in understanding the mental and emotional states of others. So far, most studies on the perception and comprehension of emotions have used isolated facial expressions as stimuli; for example, photographs of actors displaying facial expressions corresponding to one of the so called ‘basic emotions.’ However, our real experience during social interactions is different: facial expressions of emotion are mostly perceived in a wider context, constituted by body language, the surrounding environment, and our beliefs and expectations. Already in the early twentieth century, the Russian filmmaker Lev Kuleshov argued that such context, established by intermediate shots of strong emotional content, could significantly change our interpretation of facial expressions in film. Prior experiments have shown behavioral effects pointing in this direction, but have only used static images as stimuli. Our study used a more ecological design with participants watching film sequences of neutral faces, crosscut with scenes of strong emotional content (evoking happiness or fear, plus neutral stimuli as a baseline condition). The task was to rate the emotion displayed by a target person’s face in terms of valence, arousal, and category. Results clearly demonstrated the presence of a significant effect in terms of both valence and arousal in the fear condition only. Moreover, participants tended to categorize the target person’s neutral facial expression choosing the emotion category congruent with the preceding context. Our results highlight the context-sensitivity of emotions and the importance of studying them under ecologically valid conditions. PMID:29046652

  15. Stereoscopic distance perception

    NASA Technical Reports Server (NTRS)

    Foley, John M.

    1989-01-01

    Limited cue, open-loop tasks in which a human observer indicates distances or relations among distances are discussed. Open-loop tasks are those in which the observer receives no feedback about the accuracy of the responses. What happens when cues are added and when the loop is closed is also considered. The implications of this research for the effectiveness of visual displays are discussed. Errors in visual distance tasks do not necessarily mean that the percept is in error. The error could arise in transformations that intervene between the percept and the response. It is argued that the percept is in error. It is also argued that there exist post-perceptual transformations that may contribute to the error or be modified by feedback to correct for the error.

  16. Decoding Face Information in Time, Frequency and Space from Direct Intracranial Recordings of the Human Brain

    PubMed Central

    Oya, Hiroyuki; Howard, Matthew A.; Adolphs, Ralph

    2008-01-01

    Faces are processed by a neural system with distributed anatomical components, but the roles of these components remain unclear. A dominant theory of face perception postulates independent representations of invariant aspects of faces (e.g., identity) in ventral temporal cortex including the fusiform gyrus, and changeable aspects of faces (e.g., emotion) in lateral temporal cortex including the superior temporal sulcus. Here we recorded neuronal activity directly from the cortical surface in 9 neurosurgical subjects undergoing epilepsy monitoring while they viewed static and dynamic facial expressions. Applying novel decoding analyses to the power spectrogram of electrocorticograms (ECoG) from over 100 contacts in ventral and lateral temporal cortex, we found better representation of both invariant and changeable aspects of faces in ventral than lateral temporal cortex. Critical information for discriminating faces from geometric patterns was carried by power modulations between 50 and 150 Hz. For both static and dynamic face stimuli, we obtained a higher decoding performance in ventral than lateral temporal cortex. For discriminating fearful from happy expressions, critical information was carried by power modulations between 60 and 150 Hz and below 30 Hz, and was again better decoded in ventral than lateral temporal cortex. Task-relevant attention improved decoding accuracy by more than 10% across a wide frequency range in ventral, but not at all in lateral, temporal cortex. Spatial searchlight decoding showed that decoding performance was highest around the middle fusiform gyrus. Finally, we found that the right hemisphere, in general, showed superior decoding to the left hemisphere. Taken together, our results challenge the dominant model of independent face representation of invariant and changeable aspects: information about both face attributes was better decoded from a single region in the middle fusiform gyrus. PMID:19065268
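
    As an illustration only (this is not the authors' analysis pipeline; array shapes, the frequency band, and the classifier are placeholder choices), decoding from band-limited power modulations can be sketched as follows: estimate per-trial, per-channel power in a band from a spectrogram, then cross-validate a linear classifier on those features.

    ```python
    # Illustrative sketch of band-power decoding from multichannel recordings.
    # Assumptions (not from the record): a trials array of shape
    # (n_trials, n_channels, n_samples), sampling rate fs, and integer labels.
    import numpy as np
    from scipy.signal import spectrogram
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def band_power_features(trials, fs, band=(50.0, 150.0)):
        """Mean power in `band` (Hz) per channel, for each trial."""
        feats = []
        for trial in trials:                      # trial: (n_channels, n_samples)
            chan_power = []
            for chan in trial:
                f, t, Sxx = spectrogram(chan, fs=fs, nperseg=int(fs * 0.2))
                sel = (f >= band[0]) & (f <= band[1])
                chan_power.append(Sxx[sel].mean())  # average over band and time
            feats.append(chan_power)
        return np.log(np.asarray(feats))          # log power is closer to Gaussian

    def decode(trials, labels, fs):
        """5-fold cross-validated decoding accuracy from band-power features."""
        X = band_power_features(trials, fs)
        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        return cross_val_score(clf, X, labels, cv=5).mean()
    ```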

  17. Correlated individual differences suggest a common mechanism underlying metacognition in visual perception and visual short-term memory.

    PubMed

    Samaha, Jason; Postle, Bradley R

    2017-11-29

    Adaptive behaviour depends on the ability to introspect accurately about one's own performance. Whether this metacognitive ability is supported by the same mechanisms across different tasks is unclear. We investigated the relationship between metacognition of visual perception and metacognition of visual short-term memory (VSTM). Experiments 1 and 2 required subjects to estimate the perceived or remembered orientation of a grating stimulus and rate their confidence. We observed strong positive correlations between individual differences in metacognitive accuracy between the two tasks. This relationship was not accounted for by individual differences in task performance or average confidence, and was present across two different metrics of metacognition and in both experiments. A model-based analysis of data from a third experiment showed that a cross-domain correlation only emerged when both tasks shared the same task-relevant stimulus feature. That is, metacognition for perception and VSTM were correlated when both tasks required orientation judgements, but not when the perceptual task was switched to require contrast judgements. In contrast with previous results comparing perception and long-term memory, which have largely provided evidence for domain-specific metacognitive processes, the current findings suggest that metacognition of visual perception and VSTM is supported by a domain-general metacognitive architecture, but only when both domains share the same task-relevant stimulus feature. © 2017 The Author(s).

  18. Electrical Stimulation of the Left and Right Human Fusiform Gyrus Causes Different Effects in Conscious Face Perception

    PubMed Central

    Rangarajan, Vinitha; Hermes, Dora; Foster, Brett L.; Weiner, Kevin S.; Jacques, Corentin; Grill-Spector, Kalanit

    2014-01-01

    Neuroimaging and electrophysiological studies across species have confirmed bilateral face-selective responses in the ventral temporal cortex (VTC) and prosopagnosia is reported in patients with lesions in the VTC including the fusiform gyrus (FG). As imaging and electrophysiological studies provide correlative evidence, and brain lesions often comprise both white and gray matter structures beyond the FG, we designed the current study to explore the link between face-related electrophysiological responses in the FG and the causal effects of electrical stimulation of the left or right FG in face perception. We used a combination of electrocorticography (ECoG) and electrical brain stimulation (EBS) in 10 human subjects implanted with intracranial electrodes in either the left (5 participants, 30 FG sites) or right (5 participants, 26 FG sites) hemispheres. We identified FG sites with face-selective ECoG responses, and recorded perceptual reports during EBS of these sites. In line with existing literature, face-selective ECoG responses were present in both left and right FG sites. However, when the same sites were stimulated, we observed a striking difference between hemispheres. Only EBS of the right FG caused changes in the conscious perception of faces, whereas EBS of strongly face-selective regions in the left FG produced non-face-related visual changes, such as phosphenes. This study examines the relationship between correlative versus causal nature of ECoG and EBS, respectively, and provides important insight into the differential roles of the right versus left FG in conscious face perception. PMID:25232118

  19. Learning task affects ERP-correlates of the own-race bias, but not recognition memory performance.

    PubMed

    Stahl, Johanna; Wiese, Holger; Schweinberger, Stefan R

    2010-06-01

    People are generally better in recognizing faces from their own ethnic group as opposed to faces from another ethnic group, a finding which has been interpreted in the context of two opposing theories. Whereas perceptual expertise theories stress the role of long-term experience with one's own ethnic group, race feature theories assume that the processing of an other-race-defining feature triggers inferior coding and recognition of faces. The present study tested these hypotheses by manipulating the learning task in a recognition memory test. At learning, one group of participants categorized faces according to ethnicity, whereas another group rated facial attractiveness. Subsequent recognition tests indicated clear and similar own-race biases for both groups. However, ERPs from learning and test phases demonstrated an influence of learning task on neurophysiological processing of own- and other-race faces. While both groups exhibited larger N170 responses to Asian as compared to Caucasian faces, task-dependent differences were seen in a subsequent P2 ERP component. Whereas the P2 was more pronounced for Caucasian faces in the categorization group, this difference was absent in the attractiveness rating group. The learning task thus influences early face encoding. Moreover, comparison with recent research suggests that this attractiveness rating task influences the processes reflected in the P2 in a similar manner as perceptual expertise for other-race faces does. By contrast, the behavioural own-race bias suggests that long-term expertise is required to increase other-race face recognition and hence attenuate the own-race bias. Copyright 2010 Elsevier Ltd. All rights reserved.

  20. Misinterpretation of Facial Expressions of Emotion in Verbal Adults with Autism Spectrum Disorder

    PubMed Central

    Eack, Shaun M.; Mazefsky, Carla A.; Minshew, Nancy J.

    2014-01-01

    Facial emotion perception is significantly affected in autism spectrum disorder (ASD), yet little is known about how individuals with ASD misinterpret facial expressions that result in their difficulty in accurately recognizing emotion in faces. This study examined facial emotion perception in 45 verbal adults with ASD and 30 age- and gender-matched volunteers without ASD to identify patterns of emotion misinterpretation during face processing that contribute to emotion recognition impairments in autism. Results revealed that difficulty distinguishing emotional from neutral facial expressions characterized much of the emotion perception impairments exhibited by participants with ASD. In particular, adults with ASD uniquely misinterpreted happy faces as neutral, and were significantly more likely than typical volunteers to attribute negative valence to non-emotional faces. The over-attribution of emotions to neutral faces was significantly related to greater communication and emotional intelligence impairments in individuals with ASD. These findings suggest a potential negative bias toward the interpretation of facial expressions and may have implications for interventions designed to remediate emotion perception in ASD. PMID:24535689
