Is the processing of affective prosody influenced by spatial attention? An ERP study
2013-01-01
Background The present study asked whether the processing of affective prosody is modulated by spatial attention. Pseudo-words with a neutral, happy, threatening, and fearful prosody were presented at two spatial positions. Participants attended to one position in order to detect infrequent targets. Emotional prosody was task irrelevant. The electroencephalogram (EEG) was recorded to assess processing differences as a function of spatial attention and emotional valence. Results Event-related potentials (ERPs) differed as a function of emotional prosody both when attended and when unattended. While emotional prosody effects interacted with effects of spatial attention at early processing levels (< 200 ms), these effects were additive at later processing stages (> 200 ms). Conclusions Emotional prosody, therefore, seems to be partially processed outside the focus of spatial attention. Whereas at early sensory processing stages spatial attention modulates the degree of emotional voice processing as a function of emotional valence, emotional prosody is processed outside of the focus of spatial attention at later processing stages. PMID:23360491
Emotional prosody processing in autism spectrum disorder
Kliemann, Dorit; Dziobek, Isabel; Heekeren, Hauke R.
2017-01-01
Abstract Individuals with Autism Spectrum Disorder (ASD) are characterized by severe deficits in social communication, yet the nature of their impairments in emotional prosody processing has yet to be specified. Here, we investigated emotional prosody processing in individuals with ASD and controls with novel, lifelike behavioral and neuroimaging paradigms. Compared to controls, individuals with ASD showed reduced emotional prosody recognition accuracy on a behavioral task. On the neural level, individuals with ASD displayed reduced activity of the superior temporal sulcus (STS), insula, and amygdala for complex vs basic emotions compared to controls. Moreover, the coupling between the STS and amygdala for complex vs basic emotions was reduced in the ASD group. Finally, the groups differed with respect to the relationship between brain activity and behavioral performance. Brain activity during emotional prosody processing was more strongly related to prosody recognition accuracy in ASD participants. In contrast, the coupling between STS and anterior cingulate cortex (ACC) activity predicted behavioral task performance more strongly in the control group. These results provide evidence for aberrant emotional prosody processing in individuals with ASD. They suggest that differences in the relationship between the neural and behavioral levels in individuals with ASD may account for their observed deficits in social communication. PMID:27531389
Early and late brain signatures of emotional prosody among individuals with high versus low power.
Paulmann, Silke; Uskul, Ayse K
2017-04-01
Using ERPs, we explored the relationship between social power and emotional prosody processing. In particular, we investigated differences at early and late processing stages between individuals primed with high or low power. Comparable to previously published findings from nonprimed participants, individuals primed with low power displayed differentially modulated P2 amplitudes in response to different emotional prosodies, whereas participants primed with high power failed to do so. Similarly, participants primed with low power showed differentially modulated amplitudes in response to different emotional prosodies at a later processing stage (late ERP component), whereas participants primed with high power did not. These ERP results suggest that high versus low power leads to emotional prosody processing differences at the early stage associated with emotional salience detection and at a later stage associated with more in-depth processing of emotional stimuli. © 2016 Society for Psychophysiological Research.
Advanced Parkinson disease patients have impairment in prosody processing.
Albuquerque, Luisa; Martins, Maurício; Coelho, Miguel; Guedes, Leonor; Ferreira, Joaquim J; Rosa, Mário; Martins, Isabel Pavão
2016-01-01
The ability to recognize and interpret emotions in others is a crucial prerequisite of adequate social behavior. Impairments in emotion processing have been reported from the early stages of Parkinson's disease (PD). This study aims to characterize emotion recognition in advanced Parkinson's disease (APD) candidates for deep-brain stimulation and to compare emotion recognition abilities in visual and auditory domains. APD patients, defined as those with levodopa-induced motor complications (N = 42), and healthy controls (N = 43) matched by gender, age, and educational level, undertook the Comprehensive Affect Testing System (CATS), a battery that evaluates recognition of seven basic emotions (happiness, sadness, anger, fear, surprise, disgust, and neutral) on facial expressions and four emotions on prosody (happiness, sadness, anger, and fear). APD patients were assessed during the "ON" state. Group performance was compared with independent-samples t tests. Compared to controls, APD had significantly lower scores on the discrimination and naming of emotions in prosody, and visual discrimination of neutral faces, but no significant differences in visual emotional tasks. The contrasting performance in emotional processing between visual and auditory stimuli suggests that APD candidates for surgery have either a selective difficulty in recognizing emotions in prosody or a general defect in prosody processing. Studies investigating early-stage PD, and the effect of subcortical lesions in prosody processing, favor the latter interpretation. Further research is needed to understand these deficits in emotional prosody recognition and their possible contribution to later behavioral or neuropsychiatric manifestations of PD.
The Sound of Feelings: Electrophysiological Responses to Emotional Speech in Alexithymia
Goerlich, Katharina Sophia; Aleman, André; Martens, Sander
2012-01-01
Background Alexithymia is a personality trait characterized by difficulties in the cognitive processing of emotions (cognitive dimension) and in the experience of emotions (affective dimension). Previous research focused mainly on visual emotional processing in the cognitive alexithymia dimension. We investigated the impact of both alexithymia dimensions on electrophysiological responses to emotional speech in 60 female subjects. Methodology During unattended processing, subjects watched a movie while an emotional prosody oddball paradigm was presented in the background. During attended processing, subjects detected deviants in emotional prosody. The cognitive alexithymia dimension was associated with a left-hemisphere bias during early stages of unattended emotional speech processing, and with generally reduced amplitudes of the late P3 component during attended processing. In contrast, the affective dimension did not modulate unattended emotional prosody perception, but was associated with reduced P3 amplitudes during attended processing particularly to emotional prosody spoken in high intensity. Conclusions Our results provide evidence for a dissociable impact of the two alexithymia dimensions on electrophysiological responses during the attended and unattended processing of emotional prosody. The observed electrophysiological modulations are indicative of a reduced sensitivity to the emotional qualities of speech, which may be a contributing factor to problems in interpersonal communication associated with alexithymia. PMID:22615853
Kreitewolf, Jens; Friederici, Angela D; von Kriegstein, Katharina
2014-11-15
Hemispheric specialization for linguistic prosody is a controversial issue. While it is commonly assumed that linguistic prosody and emotional prosody are preferentially processed in the right hemisphere, neuropsychological work directly comparing processes of linguistic prosody and emotional prosody suggests a predominant role of the left hemisphere for linguistic prosody processing. Here, we used two functional magnetic resonance imaging (fMRI) experiments to clarify the role of left and right hemispheres in the neural processing of linguistic prosody. In the first experiment, we sought to confirm previous findings showing that linguistic prosody processing compared to other speech-related processes predominantly involves the right hemisphere. Unlike previous studies, we controlled for stimulus influences by employing a prosody and speech task using the same speech material. The second experiment was designed to investigate whether a left-hemispheric involvement in linguistic prosody processing is specific to contrasts between linguistic prosody and emotional prosody or whether it also occurs when linguistic prosody is contrasted against other non-linguistic processes (i.e., speaker recognition). Prosody and speaker tasks were performed on the same stimulus material. In both experiments, linguistic prosody processing was associated with activity in temporal, frontal, parietal and cerebellar regions. Activation in temporo-frontal regions showed differential lateralization depending on whether the control task required recognition of speech or speaker: recognition of linguistic prosody predominantly involved right temporo-frontal areas when it was contrasted against speech recognition; when contrasted against speaker recognition, recognition of linguistic prosody predominantly involved left temporo-frontal areas. The results show that linguistic prosody processing involves functions of both hemispheres and suggest that recognition of linguistic prosody is based on an inter-hemispheric mechanism which exploits both a right-hemispheric sensitivity to pitch information and a left-hemispheric dominance in speech processing. Copyright © 2014 Elsevier Inc. All rights reserved.
When emotional prosody and semantics dance cheek to cheek: ERP evidence.
Kotz, Sonja A; Paulmann, Silke
2007-06-02
Communicating emotionally entails that a listener understands not only a verbal message but also the emotional prosody that goes along with it. So far, the time course and interaction of these emotional 'channels' are still poorly understood. The current set of event-related brain potential (ERP) experiments investigated both the interactive time course of emotional prosody with semantics and of emotional prosody independent of emotional semantics using a cross-splicing method. In a probe verification task (Experiment 1), prosodic expectancy violations elicited a positivity, while a combined prosodic-semantic expectancy violation elicited a negativity. Comparable ERP results were obtained in an emotional prosodic categorization task (Experiment 2). The present data support different ERP responses with distinct time courses and topographies elicited as a function of prosodic expectancy and combined prosodic-semantic expectancy during emotional prosodic processing and combined emotional prosody/emotional semantic processing. These differences suggest that the interaction of more than one emotional channel facilitates subtle transitions in an emotional sentence context.
Emotional speech comprehension in children and adolescents with autism spectrum disorders.
Le Sourn-Bissaoui, Sandrine; Aguert, Marc; Girard, Pauline; Chevreuil, Claire; Laval, Virginie
2013-01-01
We examined the understanding of emotional speech by children and adolescents with autism spectrum disorders (ASD). We predicted that they would have difficulty understanding emotional speech, not because of an emotional prosody processing impairment but because of problems drawing appropriate inferences, especially in multiple-cue environments. Twenty-six children and adolescents with ASD and 26 typically developing (TD) controls performed a computerized task featuring emotional prosody, either embedded in a discrepant context or without any context at all. They had to identify the speaker's feeling. When the prosody was the sole cue, participants with ASD performed just as well as controls, relying on this cue to infer the speaker's intention. When the prosody was embedded in a discrepant context, both ASD and TD participants exhibited a contextual bias and a negativity bias. However, ASD participants relied less on the emotional prosody than the controls when it was positive. We discuss these findings with respect to executive function and intermodal processing. After reading this article, the reader should be able to (1) describe the ASD participants' pragmatic impairments, (2) explain why ASD participants did not have an emotional prosody processing impairment, and (3) explain why ASD participants had difficulty inferring the speaker's intention from emotional prosody in a discrepant situation. Copyright © 2013 Elsevier Inc. All rights reserved.
Neural Substrates of Processing Anger in Language: Contributions of Prosody and Semantics.
Castelluccio, Brian C; Myers, Emily B; Schuh, Jillian M; Eigsti, Inge-Marie
2016-12-01
Emotions are conveyed primarily through two channels in language: semantics and prosody. While many studies confirm the role of a left hemisphere network in processing semantic emotion, there has been debate over the role of the right hemisphere in processing prosodic emotion. Some evidence suggests a preferential role for the right hemisphere, and other evidence supports a bilateral model. The relative contributions of semantics and prosody to the overall processing of affect in language are largely unexplored. The present work used functional magnetic resonance imaging to elucidate the neural bases of processing anger conveyed by prosody or semantic content. Results showed a robust, distributed, bilateral network for processing angry prosody and a more modest left hemisphere network for processing angry semantics when compared to emotionally neutral stimuli. Findings suggest the nervous system may be more responsive to prosodic cues in speech than to the semantic content of speech.
More than words (and faces): evidence for a Stroop effect of prosody in emotion word processing.
Filippi, Piera; Ocklenburg, Sebastian; Bowling, Daniel L; Heege, Larissa; Güntürkün, Onur; Newen, Albert; de Boer, Bart
2017-08-01
Humans typically combine linguistic and nonlinguistic information to comprehend emotions. We adopted an emotion identification Stroop task to investigate how different channels interact in emotion communication. In experiment 1, synonyms of "happy" and "sad" were spoken with happy and sad prosody. Participants had more difficulty ignoring prosody than ignoring verbal content. In experiment 2, synonyms of "happy" and "sad" were spoken with happy and sad prosody, while happy or sad faces were displayed. Accuracy was lower when two channels expressed an emotion that was incongruent with the channel participants had to focus on, compared with the cross-channel congruence condition. When participants were required to focus on verbal content, accuracy was significantly lower also when prosody was incongruent with verbal content and face. This suggests that prosody biases emotional verbal content processing, even when conflicting with verbal content and face simultaneously. Implications for multimodal communication and language evolution studies are discussed.
Gender differences in the activation of inferior frontal cortex during emotional speech perception.
Schirmer, Annett; Zysset, Stefan; Kotz, Sonja A; Yves von Cramon, D
2004-03-01
We investigated the brain regions that mediate the processing of emotional speech in men and women by presenting positive and negative words that were spoken with happy or angry prosody. Hence, emotional prosody and word valence were either congruous or incongruous. We assumed that an fMRI contrast between congruous and incongruous presentations would reveal the structures that mediate the interaction of emotional prosody and word valence. The left inferior frontal gyrus (IFG) was more strongly activated in incongruous as compared to congruous trials. This difference in IFG activity was significantly larger in women than in men. Moreover, the congruence effect was significant in women whereas it only appeared as a tendency in men. As the left IFG has been repeatedly implicated in semantic processing, these findings are taken as evidence that semantic processing in women is more susceptible to influences from emotional prosody than is semantic processing in men. Moreover, the present data suggest that the left IFG mediates increased semantic processing demands imposed by an incongruence between emotional prosody and word valence.
Perception of emotional prosody: moving toward a model that incorporates sex-related differences.
Everhart, D Erik; Demaree, Heath A; Shipley, Amy J
2006-06-01
The overall purpose of this article is to review the literature that addresses the theoretical models, neuroanatomical mechanisms, and sex-related differences in the perception of emotional prosody. Specifically, the article focuses on the right-hemisphere model of emotion processing as it pertains to the perception of emotional prosody. This article also reviews more recent research that implicates a role for the left hemisphere and subcortical structures in the perception of emotional prosody. The last major section of this article addresses sex-related differences and the potential influence of hormones on the perception of emotional prosody. The article concludes with a section that offers directions for future research.
Sadness is unique: neural processing of emotions in speech prosody in musicians and non-musicians.
Park, Mona; Gutyrchik, Evgeny; Welker, Lorenz; Carl, Petra; Pöppel, Ernst; Zaytseva, Yuliya; Meindl, Thomas; Blautzik, Janusch; Reiser, Maximilian; Bao, Yan
2014-01-01
Musical training has been shown to have positive effects on several aspects of speech processing, however, the effects of musical training on the neural processing of speech prosody conveying distinct emotions are yet to be better understood. We used functional magnetic resonance imaging (fMRI) to investigate whether the neural responses to speech prosody conveying happiness, sadness, and fear differ between musicians and non-musicians. Differences in processing of emotional speech prosody between the two groups were only observed when sadness was expressed. Musicians showed increased activation in the middle frontal gyrus, the anterior medial prefrontal cortex, the posterior cingulate cortex and the retrosplenial cortex. Our results suggest an increased sensitivity of emotional processing in musicians with respect to sadness expressed in speech, possibly reflecting empathic processes. PMID:25688196
Seeing Emotion with Your Ears: Emotional Prosody Implicitly Guides Visual Attention to Faces
Rigoulot, Simon; Pell, Marc D.
2012-01-01
Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality) which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior to facial expressions of emotion. We analyzed the eye movements of 31 participants as they scanned a visual array of four emotional faces portraying fear, anger, happiness, and neutrality, while listening to an emotionally-inflected pseudo-utterance (Someone migged the pazing) uttered in a congruent or incongruent tone. Participants heard the emotional utterance during the first 1250 milliseconds of a five-second visual array and then performed an immediate recall decision about the face they had just seen. The frequency and duration of first saccades and of total looks in three temporal windows ([0–1250 ms], [1250–2500 ms], [2500–5000 ms]) were analyzed according to the emotional content of faces and voices. Results showed that participants looked longer and more frequently at faces that matched the prosody in all three time windows (emotion congruency effect), although this effect was often emotion-specific (with greatest effects for fear). Effects of prosody on visual attention to faces persisted over time and could be detected long after the auditory information was no longer present. These data imply that emotional prosody is processed automatically during communication and that these cues play a critical role in how humans respond to related visual cues in the environment, such as facial expressions. PMID:22303454
ERP evidence for the recognition of emotional prosody through simulated cochlear implant strategies.
Agrawal, Deepashri; Timm, Lydia; Viola, Filipa Campos; Debener, Stefan; Büchner, Andreas; Dengler, Reinhard; Wittfoth, Matthias
2012-09-20
Emotionally salient information in spoken language can be provided by variations in speech melody (prosody) or by emotional semantics. Emotional prosody is essential to convey feelings through speech. In sensorineural hearing loss, impaired speech perception can be improved by cochlear implants (CIs). The aim of this study was to investigate the performance of normal-hearing (NH) participants on the perception of emotional prosody with vocoded stimuli. Semantically neutral sentences with emotional (happy, angry and neutral) prosody were used. Sentences were manipulated to simulate two CI speech-coding strategies: the Advanced Combination Encoder (ACE) and the newly developed Psychoacoustic Advanced Combination Encoder (PACE). Twenty NH adults were asked to recognize emotional prosody from ACE and PACE simulations. Performance was assessed using behavioral tests and event-related potentials (ERPs). Behavioral data revealed superior performance with original stimuli compared to the simulations. For simulations, better recognition was observed for happy and angry prosody than for neutral prosody. Irrespective of simulated or unsimulated stimulus type, a significantly larger P200 event-related potential was observed after sentence onset for happy prosody than for the other two emotions. Further, the P200 amplitude was significantly more positive for the PACE strategy than for the ACE strategy. Results suggested the P200 peak as an indicator of active differentiation and recognition of emotional prosody. The larger P200 peak amplitude for happy prosody indicated the importance of fundamental frequency (F0) cues in prosody processing. The advantage of PACE over ACE highlighted a privileged role of the psychoacoustic masking model in improving prosody perception. Taken together, the study emphasizes the importance of vocoded simulation to better understand the prosodic cues which CI users may be utilizing.
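The ACE and PACE strategies themselves are device-specific signal-processing chains, but the general idea behind such vocoded stimuli can be sketched with a generic noise-excited channel vocoder: split the signal into frequency bands, extract each band's temporal envelope, and use it to modulate band-limited noise. The function name, channel count, and band edges below are illustrative assumptions, not the parameters used in the study:

```python
import numpy as np

def noise_vocode(signal, fs, n_channels=8, fmin=100.0, fmax=7000.0):
    """Generic noise-excited channel vocoder (illustrative; not ACE or PACE).

    Splits the signal into log-spaced bands, extracts each band's envelope,
    and uses the envelope to modulate band-limited noise.
    """
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    edges = np.geomspace(fmin, fmax, n_channels + 1)  # log-spaced band edges
    spec = np.fft.rfft(signal)
    noise_spec = np.fft.rfft(np.random.default_rng(0).standard_normal(n))
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(spec * mask, n)           # band-limited signal
        env = np.abs(band)                            # crude envelope (rectify)
        k = max(1, int(0.01 * fs))                    # ~10 ms moving average
        env = np.convolve(env, np.ones(k) / k, mode="same")
        carrier = np.fft.irfft(noise_spec * mask, n)  # band-limited noise
        out += env * carrier
    # normalize the output to the RMS level of the input
    rms_in = np.sqrt(np.mean(signal ** 2))
    rms_out = np.sqrt(np.mean(out ** 2))
    return out * (rms_in / rms_out) if rms_out > 0 else out

fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)   # stand-in for a recorded sentence
vocoded = noise_vocode(tone, fs)
```

A vocoder like this preserves the slow amplitude envelope per channel while discarding fine spectral detail, which is why F0-based prosodic cues degrade in CI simulations.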
An ERP study of vocal emotion processing in asymmetric Parkinson’s disease
Garrido-Vásquez, Patricia; Pell, Marc D.; Paulmann, Silke; Strecker, Karl; Schwarz, Johannes; Kotz, Sonja A.
2013-01-01
Parkinson’s disease (PD) has been related to impaired processing of emotional speech intonation (emotional prosody). One distinctive feature of idiopathic PD is motor symptom asymmetry, with striatal dysfunction being strongest in the hemisphere contralateral to the most affected body side. It is still unclear whether this asymmetry may affect vocal emotion perception. Here, we tested 22 PD patients (10 with predominantly left-sided [LPD] and 12 with predominantly right-sided motor symptoms) and 22 healthy controls in an event-related potential study. Sentences conveying different emotional intonations were presented in lexical and pseudo-speech versions. Task varied between an explicit and an implicit instruction. Of specific interest was emotional salience detection from prosody, reflected in the P200 component. We predicted that patients with predominantly right-striatal dysfunction (LPD) would exhibit P200 alterations. Our results support this assumption. LPD patients showed enhanced P200 amplitudes, and specific deficits were observed for disgust prosody, explicit anger processing and implicit processing of happy prosody. Lexical speech was predominantly affected while the processing of pseudo-speech was largely intact. P200 amplitude in patients correlated significantly with left motor scores and asymmetry indices. The data suggest that emotional salience detection from prosody is affected by asymmetric neuronal degeneration in PD. PMID:22956665
On the role of attention for the processing of emotions in speech: sex differences revisited.
Schirmer, Annett; Kotz, Sonja A; Friederici, Angela D
2005-08-01
In a previous cross-modal priming study [A. Schirmer, S.A. Kotz, A.D. Friederici, Sex differentiates the role of emotional prosody during word processing, Cogn. Brain Res. 14 (2002) 228-233], we found that women integrated emotional prosody and word valence earlier than men. Both sexes showed a smaller N400 in the event-related potential to emotional words when these words were preceded by a sentence with congruous compared to incongruous emotional prosody. However, women showed this effect with a 200-ms interval between prime sentence and target word whereas men showed the effect with a 750-ms interval. The present study was designed to determine whether these sex differences prevail when attention is directed towards the emotional content of prosody and word meaning. To this end, we presented the same prime sentences and target words as in our previous study. Sentences were spoken with happy or sad prosody and followed by a congruous or incongruous emotional word or pseudoword. The interval between sentence offset and target onset was 200 ms. In addition to performing a lexical decision, participants were asked to decide whether or not a word matched the emotional prosody of the preceding sentence. The combined lexical and congruence judgment failed to reveal differences in emotional-prosodic priming between men and women. Both sexes showed smaller N400 amplitudes to emotionally congruent compared to incongruent words. This suggests that the presence of sex differences in emotional-prosodic priming depends on whether or not participants are instructed to take emotional prosody into account.
Mitchell, Rachel L. C.; Jazdzyk, Agnieszka; Stets, Manuela; Kotz, Sonja A.
2016-01-01
We aimed to progress understanding of prosodic emotion expression by establishing brain regions active when expressing specific emotions, those activated irrespective of the target emotion, and those whose activation intensity varied depending on individual performance. BOLD contrast data were acquired whilst participants spoke nonsense words in happy, angry or neutral tones, or performed jaw movements. Emotion-specific analyses demonstrated that when expressing angry prosody, activated brain regions included the inferior frontal and superior temporal gyri, the insula, and the basal ganglia. When expressing happy prosody, the activated brain regions also included the superior temporal gyrus, insula, and basal ganglia, with additional activation in the anterior cingulate. Conjunction analysis confirmed that the superior temporal gyrus and basal ganglia were activated regardless of the specific emotion concerned. Nevertheless, disjunctive comparisons between the expression of angry and happy prosody established that anterior cingulate activity was significantly higher for angry prosody than for happy prosody production. Degree of inferior frontal gyrus activity correlated with the ability to express the target emotion through prosody. We conclude that expressing prosodic emotions (vs. neutral intonation) requires generic brain regions involved in comprehending numerous aspects of language, emotion-related processes such as experiencing emotions, and in the time-critical integration of speech information. PMID:27803656
Templier, Lorraine; Chetouani, Mohamed; Plaza, Monique; Belot, Zoé; Bocquet, Patrick; Chaby, Laurence
2015-03-01
Patients with Alzheimer's disease (AD) show cognitive and behavioral disorders which they and their caregivers have difficulty coping with in daily life. Psychological symptoms seem to be increased by impaired emotion processing in patients, an ability that is linked to social cognition and thus essential to maintaining good interpersonal relationships. Non-verbal emotion processing is a genuine way to communicate, especially for patients whose language may be rapidly impaired. Many studies focus on emotion identification in AD patients, mostly by means of facial expressions rather than emotional prosody; even fewer consider emotional prosody production, despite its playing a key role in interpersonal exchanges. The literature on this subject is scarce, with contradictory results. The present study compares the performances of 14 AD patients (88.4±4.9 yrs; MMSE: 19.9±2.7) to those of 14 control subjects (87.5±5.1 yrs; MMSE: 28.1±1.4) in tasks of emotion identification through faces and voices (non-linguistic vocal emotion or emotional prosody) and in a task of emotional prosody production (12 sentences were to be pronounced in a neutral, positive, or negative tone after a context was read). The AD patients showed weaker performances than control subjects in all emotional recognition tasks, particularly when identifying emotional prosody. A negative relation was found between the identification scores and the NPI (professional caregivers) scores, which underlines their link to psychological and behavioral disorders. The production of emotional prosody seems relatively preserved in the mild to moderate stage of the disease: we found subtle differences regarding acoustic parameters, but in a qualitative assessment judges rated the patients' productions as good as those of control subjects. These results suggest interesting new directions for improving patients' care.
Ben-David, Boaz M; Multani, Namita; Shakuf, Vered; Rudzicz, Frank; van Lieshout, Pascal H H M
2016-02-01
Our aim is to explore the complex interplay of prosody (tone of speech) and semantics (verbal content) in the perception of discrete emotions in speech. We implement a novel tool, the Test for Rating of Emotions in Speech. Eighty native English speakers were presented with spoken sentences made of different combinations of 5 discrete emotions (anger, fear, happiness, sadness, and neutral) presented in prosody and semantics. Listeners were asked to rate the sentence as a whole, integrating both speech channels, or to focus on one channel only (prosody or semantics). We observed supremacy of congruency, failure of selective attention, and prosodic dominance. Supremacy of congruency means that a sentence that presents the same emotion in both speech channels was rated highest; failure of selective attention means that listeners were unable to selectively attend to one channel when instructed; and prosodic dominance means that prosodic information plays a larger role than semantics in processing emotional speech. Emotional prosody and semantics are separate but not separable channels, and it is difficult to perceive one without the influence of the other. Our findings indicate that the Test for Rating of Emotions in Speech can reveal specific aspects in the processing of emotional speech and may in the future prove useful for understanding emotion-processing deficits in individuals with pathologies.
Pinheiro, Ana P; Vasconcelos, Margarida; Dias, Marcelo; Arrais, Nuno; Gonçalves, Óscar F
2015-01-01
Recent studies have demonstrated the positive effects of musical training on the perception of vocally expressed emotion. This study investigated the effects of musical training on event-related potential (ERP) correlates of emotional prosody processing. Fourteen musicians and fourteen control subjects listened to 228 sentences with neutral semantic content, differing in prosody (one third with neutral, one third with happy and one third with angry intonation), with intelligible semantic content (semantic content condition--SCC) and unintelligible semantic content (pure prosody condition--PPC). Reduced P50 amplitude was found in musicians. A difference between SCC and PPC conditions was found in P50 and N100 amplitude in non-musicians only, and in P200 amplitude in musicians only. Furthermore, musicians were more accurate in recognizing angry prosody in PPC sentences. These findings suggest that auditory expertise characterizing extensive musical training may impact different stages of vocal emotional processing. Copyright © 2014 Elsevier Inc. All rights reserved.
Emotional Prosody Processing in Epilepsy: Some Insights on Brain Reorganization.
Alba-Ferrara, Lucy; Kochen, Silvia; Hausmann, Markus
2018-01-01
Drug-resistant epilepsy is one of the most complex, multifactorial and polygenic neurological syndromes. Despite its dynamic and variable nature, it provides a model for studying brain-behavior relationships, offering cues about the anatomical and functional representation of brain function. Given that the onset zone of focal epileptic seizures often affects different anatomical areas, cortical but limited to one hemisphere, this condition also lets us study the functional differences between the left and right cerebral hemispheres. One lateralized function in the human brain is emotional prosody, and it can be a useful ictal sign offering hints on the location of the epileptogenic zone. Despite its importance for effective communication, prosody is not considered an eloquent domain, making resective surgery on its neural correlates feasible. We searched electronic databases (Medline and PsychINFO) from inception to July 2017 for studies on prosody in epilepsy. The search terms included "epilepsy," "seizure," "emotional prosody," and "vocal affect." This review focuses on emotional prosody processing in epilepsy, as it can give hints regarding plastic functional changes following seizures (preoperatively) and resection (postoperatively), and can also serve as an ictal sign enabling the assessment of dynamic brain networks. Moreover, it is argued that such reorganization can help preserve the expression and reception of emotional prosody as a central skill for developing appropriate social interactions.
Mismatch negativity of sad syllables is absent in patients with major depressive disorder.
Pang, Xiaomei; Xu, Jing; Chang, Yi; Tang, Di; Zheng, Ya; Liu, Yanhua; Sun, Yiming
2014-01-01
Major depressive disorder (MDD) is an important and highly prevalent mental disorder characterized by anhedonia and a lack of interest in everyday activities. Additionally, patients with MDD appear to have deficits in various cognitive abilities. Although a number of studies investigating the central auditory processing of low-level sound features in patients with MDD have demonstrated that this population exhibits impairments in automatic processing, the automatic processing of emotional voices has yet to be addressed. To explore the automatic processing of emotional prosodies in patients with MDD, we analyzed automatic change detection using event-related potentials (ERPs). This study included 18 patients with MDD and 22 age- and sex-matched healthy controls. Subjects were instructed to watch a silent movie and to ignore the acoustic emotional prosodies presented to both ears while continuous electroencephalographic activity was synchronously recorded. Prosodies consisted of meaningless syllables, such as "dada," spoken with happy, angry, sad, or neutral tones. The mean amplitudes of the ERPs elicited by emotional stimuli and the peak latencies of the emotional differential waveforms were analyzed. The sad mismatch negativity (MMN) was absent in patients with MDD, whereas the happy and angry MMN components were similar across groups. The abnormal sad emotional MMN component was not significantly correlated with the HRSD-17 or HAMA scores. The data indicate that patients with MDD are impaired in their ability to automatically process sad prosody, whereas their ability to process happy and angry prosodies remains normal. The dysfunctional sad emotion-related MMN in patients with MDD was not correlated with depression symptoms. The blunted MMN to sad prosodies could be considered a trait marker of MDD.
ERIC Educational Resources Information Center
Kuchinke, Lars; Schneider, Dana; Kotz, Sonja A.; Jacobs, Arthur M.
2011-01-01
Emotional prosody provides important cues for understanding the emotions of others in every day communication. Asperger's syndrome (AS) is a developmental disorder characterised by pronounced deficits in socio-emotional communication, including difficulties in the domain of prosody processing. We measured pupillary responses as an index of…
Gordon, Karen A.; Papsin, Blake C.; Nespoli, Gabe; Hopyan, Talar; Peretz, Isabelle; Russo, Frank A.
2017-01-01
Objectives: Children who use cochlear implants (CIs) have characteristic pitch processing deficits leading to impairments in music perception and in understanding emotional intention in spoken language. Music training for normal-hearing children has previously been shown to benefit perception of emotional prosody. The purpose of the present study was to assess whether deaf children who use CIs obtain similar benefits from music training. We hypothesized that music training would lead to gains in auditory processing and that these gains would transfer to emotional speech prosody perception. Design: Study participants were 18 child CI users (ages 6 to 15). Participants received either 6 months of music training (i.e., individualized piano lessons) or 6 months of visual art training (i.e., individualized painting lessons). Measures of music perception and emotional speech prosody perception were obtained pre-, mid-, and post-training. The Montreal Battery for Evaluation of Musical Abilities was used to measure five different aspects of music perception (scale, contour, interval, rhythm, and incidental memory). The emotional speech prosody task required participants to identify the emotional intention of a semantically neutral sentence under audio-only and audiovisual conditions. Results: Music training led to improved performance on tasks requiring the discrimination of melodic contour and rhythm, as well as incidental memory for melodies. These improvements were predominantly found from mid- to post-training. Critically, music training also improved emotional speech prosody perception. Music training was most advantageous in audio-only conditions. Art training did not lead to the same improvements. Conclusions: Music training can lead to improvements in perception of music and emotional speech prosody, and thus may be an effective supplementary technique for supporting auditory rehabilitation following cochlear implantation. PMID:28085739
Selective attention to emotional prosody in social anxiety: a dichotic listening study.
Peschard, Virginie; Gilboa-Schechtman, Eva; Philippot, Pierre
2017-12-01
The majority of evidence on social anxiety (SA)-linked attentional biases to threat comes from research using facial expressions. Emotions are, however, communicated through other channels, such as voice. Despite its importance in the interpretation of social cues, emotional prosody processing in SA has been barely explored. This study investigated whether SA is associated with enhanced processing of task-irrelevant angry prosody. Fifty-three participants with high and low SA performed a dichotic listening task in which pairs of male/female voices were presented, one to each ear, with either the same or different prosody (neutral or angry). Participants were instructed to focus on either the left or right ear and to identify the speaker's gender in the attended side. Our main results show that, once attended, task-irrelevant angry prosody elicits greater interference than does neutral prosody. Surprisingly, high socially anxious participants were less prone to distraction from attended-angry (compared to attended-neutral) prosody than were low socially anxious individuals. These findings emphasise the importance of examining SA-related biases across modalities.
Behold the voice of wrath: cross-modal modulation of visual attention by anger prosody.
Brosch, Tobias; Grandjean, Didier; Sander, David; Scherer, Klaus R
2008-03-01
Emotionally relevant stimuli are prioritized in human information processing. It has repeatedly been shown that selective spatial attention is modulated by the emotional content of a stimulus. Until now, studies investigating this phenomenon have only examined within-modality effects, most frequently using pictures of emotional stimuli to modulate visual attention. In this study, we used simultaneously presented utterances with emotional and neutral prosody as cues for a visually presented target in a cross-modal dot probe task. Response times towards targets were faster when they appeared at the location of the source of the emotional prosody. Our results show for the first time a cross-modal attentional modulation of visual attention by auditory affective prosody.
Uskul, Ayse K; Paulmann, Silke; Weick, Mario
2016-02-01
Listeners have to pay close attention to a speaker's tone of voice (prosody) during daily conversations. This is particularly important when trying to infer the emotional state of the speaker. Although a growing body of research has explored how emotions are processed from speech in general, little is known about how psychosocial factors such as social power can shape the perception of vocal emotional attributes. Thus, the present studies explored how social power affects emotional prosody recognition. In a correlational study (Study 1) and an experimental study (Study 2), we show that high power is associated with lower accuracy in emotional prosody recognition than low power. These results, for the first time, suggest that individuals experiencing high or low power perceive emotional tone of voice differently.
Sander, David; Grandjean, Didier; Pourtois, Gilles; Schwartz, Sophie; Seghier, Mohamed L; Scherer, Klaus R; Vuilleumier, Patrik
2005-12-01
Multiple levels of processing are thought to be involved in the appraisal of emotionally relevant events, with some processes being engaged relatively independently of attention, whereas other processes may depend on attention and current task goals or context. We conducted an event-related fMRI experiment to examine how processing angry voice prosody, an affectively and socially salient signal, is modulated by voluntary attention. To manipulate attention orthogonally to emotional prosody, we used a dichotic listening paradigm in which meaningless utterances, pronounced with either angry or neutral prosody, were presented simultaneously to both ears on each trial. In two successive blocks, participants selectively attended to either the left or right ear and performed a gender-decision on the voice heard on the target side. Our results revealed a functional dissociation between different brain areas. Whereas the right amygdala and bilateral superior temporal sulcus responded to anger prosody irrespective of whether it was heard from a to-be-attended or to-be-ignored voice, the orbitofrontal cortex and the cuneus in medial occipital cortex showed greater activation to the same emotional stimuli when the angry voice was to-be-attended rather than to-be-ignored. Furthermore, regression analyses revealed a strong correlation between orbitofrontal regions and sensitivity on a behavioral inhibition scale measuring proneness to anxiety reactions. Our results underscore the importance of emotion and attention interactions in social cognition by demonstrating that multiple levels of processing are involved in the appraisal of emotionally relevant cues in voices, and by showing a modulation of some emotional responses by both the current task-demands and individual differences.
Lateralization of Visuospatial Attention across Face Regions Varies with Emotional Prosody
ERIC Educational Resources Information Center
Thompson, Laura A.; Malloy, Daniel M.; LeBlanc, Katya L.
2009-01-01
It is well-established that linguistic processing is primarily a left-hemisphere activity, while emotional prosody processing is lateralized to the right hemisphere. Does attention, directed at different regions of the talker's face, reflect this pattern of lateralization? We investigated visuospatial attention across a talker's face with a…
Affective Prosody Labeling in Youths with Bipolar Disorder or Severe Mood Dysregulation
ERIC Educational Resources Information Center
Deveney, Christen M.; Brotman, Melissa A.; Decker, Ann Marie; Pine, Daniel S.; Leibenluft, Ellen
2012-01-01
Background: Accurate identification of nonverbal emotional cues is essential to successful social interactions, yet most research is limited to emotional face expression labeling. Little research focuses on the processing of emotional prosody, or tone of verbal speech, in clinical populations. Methods: Using the Diagnostic Analysis of Nonverbal…
Family environment influences emotion recognition following paediatric traumatic brain injury.
Schmidt, Adam T; Orsten, Kimberley D; Hanten, Gerri R; Li, Xiaoqi; Levin, Harvey S
2010-01-01
This study investigated the relationship between family functioning and performance on two tasks of emotion recognition (emotional prosody and face emotion recognition) and a cognitive control procedure (the Flanker task) following paediatric traumatic brain injury (TBI) or orthopaedic injury (OI). A total of 142 children (75 TBI, 67 OI) were assessed on three occasions: baseline, 3 months and 1 year post-injury on the two emotion recognition tasks and the Flanker task. Caregivers also completed the Life Stressors and Resources Scale (LISRES) on each occasion. Growth curve analysis was used to analyse the data. Results indicated that family functioning influenced performance on the emotional prosody and Flanker tasks but not on the face emotion recognition task. Findings on both the emotional prosody and Flanker tasks were generally similar across groups. However, financial resources emerged as significantly related to emotional prosody performance in the TBI group only (p = 0.0123). Findings suggest family functioning variables--especially financial resources--can influence performance on an emotional processing task following TBI in children.
Auditory-prosodic processing in bipolar disorder; from sensory perception to emotion.
Van Rheenen, Tamsyn E; Rossell, Susan L
2013-12-01
Accurate emotion processing is critical to understanding the social world. Despite growing evidence of facial emotion processing impairments in patients with bipolar disorder (BD), comprehensive investigations of emotional prosodic processing are limited. The existing (albeit sparse) literature is inconsistent at best, and confounded by failures to control for the effects of gender or low-level sensory-perceptual impairments. The present study sought to address this paucity of research by utilizing a novel behavioural battery to comprehensively investigate the auditory-prosodic profile of BD. Fifty BD patients and 52 healthy controls completed tasks assessing emotional and linguistic prosody, and sensitivity for discriminating tones that deviate in amplitude, duration and pitch. BD patients were less sensitive than their control counterparts in discriminating amplitude and durational cues, but not pitch cues or linguistic prosody. They also demonstrated an impaired ability to recognize happy intonations, although this was specific to males with the disorder. Recognition of happy prosody in the patient group was correlated with pitch and amplitude sensitivity in female patients only. The small sample size of patients after stratification by current mood state prevented us from conducting subgroup comparisons between symptomatic, euthymic and control participants to explicitly examine the effects of mood. Our findings indicate a female advantage for the processing of emotional prosody in BD, specifically for happy prosody. Although male BD patients were impaired in their ability to recognize happy prosody, this was unrelated to reduced tone discrimination sensitivity. This study indicates the importance of examining both gender and low-order sensory-perceptual capacity when examining emotional prosody. © 2013 Elsevier B.V. All rights reserved.
Mitchell, Rachel L. C.; Xu, Yi
2015-01-01
In computerized technology, artificial speech is becoming increasingly important, and is already used in ATMs, online gaming and healthcare contexts. However, today’s artificial speech typically sounds monotonous, a main reason for this being the lack of meaningful prosody. One particularly important function of prosody is to convey different emotions. This is because successful encoding and decoding of emotions is vital for effective social cognition, which is increasingly recognized in human–computer interaction contexts. Current attempts to artificially synthesize emotional prosody are much improved relative to early attempts, but there remains much work to be done due to methodological problems, lack of agreed acoustic correlates, and lack of theoretical grounding. If the addition of synthetic emotional prosody is not of sufficient quality, it may risk alienating users instead of enhancing their experience. So the value of embedding emotion cues in artificial speech may ultimately depend on the quality of the synthetic emotional prosody. However, early evidence on reactions to synthesized non-verbal cues in the facial modality bodes well. Attempts to implement the recognition of emotional prosody into artificial applications and interfaces have perhaps been met with greater success, but the ultimate test of synthetic emotional prosody will be to critically compare how people react to synthetic emotional prosody vs. natural emotional prosody, at the behavioral, socio-cognitive and neural levels. PMID:26617563
How children use emotional prosody: Crossmodal emotional integration?
Gil, Sandrine; Hattouti, Jamila; Laval, Virginie
2016-07-01
A crossmodal effect has been observed in the processing of facial and vocal emotion in adults and infants. For the first time, we assessed whether this effect is present in childhood by administering a crossmodal task similar to those used in seminal studies featuring emotional faces (i.e., a continuum of emotional expressions running from happiness to sadness: 90% happy, 60% happy, 30% happy, neutral, 30% sad, 60% sad, 90% sad) and emotional prosody (i.e., sad vs. happy). Participants were 5-, 7-, and 9-year-old children and a control group of adult students. The children had a different pattern of results from the adults, with only the 9-year-olds exhibiting the crossmodal effect whatever the emotional condition. These results advance our understanding of emotional prosody processing and the efficiency of crossmodal integration in children and are discussed in terms of a developmental trajectory and factors that may modulate the efficiency of this effect in children. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Neural basis of processing threatening voices in a crowded auditory world
Mothes-Lasch, Martin; Becker, Michael P. I.; Miltner, Wolfgang H. R.
2016-01-01
In real world situations, we typically listen to voice prosody against a background crowded with auditory stimuli. Voices and background can both contain behaviorally relevant features and both can be selectively in the focus of attention. Adequate responses to threat-related voices under such conditions require that the brain unmixes reciprocally masked features depending on variable cognitive resources. It is unknown which brain systems instantiate the extraction of behaviorally relevant prosodic features under varying combinations of prosody valence, auditory background complexity and attentional focus. Here, we used event-related functional magnetic resonance imaging to investigate the effects of high background sound complexity and attentional focus on brain activation to angry and neutral prosody in humans. Results show that prosody effects in mid superior temporal cortex were gated by background complexity but not attention, while prosody effects in the amygdala and anterior superior temporal cortex were gated by attention but not background complexity, suggesting distinct emotional prosody processing limitations in different regions. Crucially, if attention was focused on the highly complex background, the differential processing of emotional prosody was prevented in all brain regions, suggesting that in a distracting, complex auditory world even threatening voices may go unnoticed. PMID:26884543
Mark My Words: Tone of Voice Changes Affective Word Representations in Memory
Schirmer, Annett
2010-01-01
The present study explored the effect of speaker prosody on the representation of words in memory. To this end, participants were presented with a series of words and asked to remember the words for a subsequent recognition test. During study, words were presented auditorily with an emotional or neutral prosody, whereas during test, words were presented visually. Recognition performance was comparable for words studied with emotional and neutral prosody. However, subsequent valence ratings indicated that study prosody changed the affective representation of words in memory. Compared to words with neutral prosody, words with sad prosody were later rated as more negative and words with happy prosody were later rated as more positive. Interestingly, the participants' ability to remember study prosody failed to predict this effect, suggesting that changes in word valence were implicit and associated with initial word processing rather than word retrieval. Taken together these results identify a mechanism by which speakers can have sustained effects on listener attitudes towards word referents. PMID:20169154
Influences of Semantic and Prosodic Cues on Word Repetition and Categorization in Autism
ERIC Educational Resources Information Center
Singh, Leher; Harrow, MariLouise S.
2014-01-01
Purpose: To investigate sensitivity to prosodic and semantic cues to emotion in individuals with high-functioning autism (HFA). Method: Emotional prosody and semantics were independently manipulated to assess the relative influence of prosody versus semantics on speech processing. A sample of 10-year-old typically developing children (n = 10) and…
Chen, Xuhai; Yang, Jianfeng; Gan, Shuzhen; Yang, Yufang
2012-01-01
Although its role in the acoustic profile of vocal emotion is frequently stressed, sound intensity is often treated as a control parameter in neurocognitive studies of vocal emotion, leaving its role and neural underpinnings unclear. To investigate these issues, we asked participants to rate the anger level of neutral and angry prosodies before and after sound intensity modification in Experiment 1, and recorded the electroencephalogram (EEG) for mismatching emotional prosodies with and without sound intensity modification, and for matching emotional prosodies, while participants performed emotional feature or sound intensity congruity judgments in Experiment 2. Sound intensity modification had a significant effect on the rated anger level of angry prosodies, but not of neutral ones. Moreover, mismatching emotional prosodies, relative to matching ones, induced an enhanced N2/P3 complex and theta band synchronization irrespective of sound intensity modification and task demands. However, mismatching emotional prosodies with reduced sound intensity showed prolonged peak latency and decreased amplitude in the N2/P3 complex and weaker theta band synchronization. These findings suggest that although it cannot categorically alter the emotionality conveyed by emotional prosodies, sound intensity contributes to emotional significance quantitatively, implying that sound intensity should not simply be taken as a control parameter and that its unique role needs to be specified in vocal emotion studies.
Ventura, Joseph; Wood, Rachel C.; Jimenez, Amy M.; Hellemann, Gerhard S.
2014-01-01
Background In schizophrenia patients, one of the most commonly studied deficits of social cognition is emotion processing (EP), which has documented links to facial recognition (FR). But, how are deficits in facial recognition linked to emotion processing deficits? Can neurocognitive and symptom correlates of FR and EP help differentiate the unique contribution of FR to the domain of social cognition? Methods A meta-analysis of 102 studies (combined n = 4826) in schizophrenia patients was conducted to determine the magnitude and pattern of relationships between facial recognition, emotion processing, neurocognition, and type of symptom. Results Meta-analytic results indicated that facial recognition and emotion processing are strongly interrelated (r = .51). In addition, the relationship between FR and EP through voice prosody (r = .58) is as strong as the relationship between FR and EP based on facial stimuli (r = .53). Further, the relationship between emotion recognition, neurocognition, and symptoms is independent of the emotion processing modality – facial stimuli and voice prosody. Discussion The association between FR and EP that occurs through voice prosody suggests that FR is a fundamental cognitive process. The observed links between FR and EP might be due to bottom-up associations between neurocognition and EP, and not simply because most emotion recognition tasks use visual facial stimuli. In addition, links with symptoms, especially negative symptoms and disorganization, suggest possible symptom mechanisms that contribute to FR and EP deficits. PMID:24268469
Doi, Hirokazu; Shinohara, Kazuyuki
2015-03-01
Cross-modal integration of visual and auditory emotional cues is thought to aid the accurate recognition of emotional signals. However, the neural locus of cross-modal integration between affective prosody and unconsciously presented facial expressions in the neurologically intact population remains elusive. The present study examined the influences of unconsciously presented facial expressions on the event-related potentials (ERPs) in emotional prosody recognition. In the experiment, fearful, happy, and neutral faces were presented without awareness by continuous flash suppression simultaneously with voices containing laughter and a fearful shout. The conventional peak analysis revealed that the ERPs were modulated interactively by emotional prosody and facial expression at multiple latency ranges, indicating that audio-visual integration of emotional signals takes place automatically without conscious awareness. In addition, the global field power during the late-latency range was larger for the shout than for the laughter only when a fearful face was presented unconsciously. The neural locus of this effect was localized to the left posterior fusiform gyrus, supporting the view that this cortical region, traditionally considered a unisensory region for visual processing, functions as a locus of audiovisual integration of emotional signals. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Prosody impairment and associated affective and behavioral disturbances in Alzheimer's disease.
Roberts, V J; Ingram, S M; Lamar, M; Green, R C
1996-12-01
We examined the ability to produce, repeat, and comprehend emotional prosody in 20 patients with Alzheimer's disease (AD) and in 11 elderly normal control subjects. In addition, caregivers of AD patients completed affective and behavioral measures with reference to the patient. Relative to control subjects, comprehension of emotional prosody was marginally impaired in mildly demented AD patients, whereas production, comprehension, and repetition of emotional prosody were significantly impaired in moderately demented AD patients. The moderately demented patients performed significantly worse than the mildly demented patients on the production and repetition tasks. In contrast, there was no significant difference between the two groups on the prosody comprehension task. Additional analyses revealed an inverse relationship between the ability to correctly produce and repeat emotional prosody and the frequency of agitated behaviors and depressive symptomatology in moderately demented patients. This latter finding suggests that the inability to communicate emotional messages is associated with disturbances in mood and behavior in AD patients. Implications for the management of disruptive behavior in agitated and aprosodic AD patients include the development of caregiver sensitivity to unexpressed emotion and caregiver assistance with emotional expression.
Early sensory encoding of affective prosody: neuromagnetic tomography of emotional category changes.
Thönnessen, Heike; Boers, Frank; Dammers, Jürgen; Chen, Yu-Han; Norra, Christine; Mathiak, Klaus
2010-03-01
In verbal communication, prosodic codes may be phylogenetically older than lexical ones. Little is known, however, about early, automatic encoding of emotional prosody. This study investigated the neuromagnetic analogue of mismatch negativity (MMN) as an index of early stimulus processing of emotional prosody using whole-head magnetoencephalography (MEG). We applied two different paradigms to study MMN; in addition to the traditional oddball paradigm, the so-called optimum design was adapted to emotion detection. In a sequence of randomly changing disyllabic pseudo-words produced by one male speaker in neutral intonation, a traditional oddball design with emotional deviants (10% happy and angry each) and an optimum design with emotional (17% happy and sad each) and nonemotional gender deviants (17% female) elicited the mismatch responses. The emotional category changes demonstrated early responses (<200 ms) at both auditory cortices with larger amplitudes at the right hemisphere. Responses to the nonemotional change from male to female voices emerged later (approximately 300 ms). Source analysis pointed at bilateral auditory cortex sources without robust contributions from other sources, such as frontal areas. Conceivably, both auditory cortices encode categorical representations of emotional prosody. Processing of cognitive feature extraction and automatic emotion appraisal may overlap at this level, enabling rapid attentional shifts to important social cues. Copyright (c) 2009 Elsevier Inc. All rights reserved.
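The stimulus streams described in this record (frequent neutral standards with rare emotional deviants) reduce to drawing trial labels with fixed probabilities. A minimal sketch of the traditional oddball condition, using the 10% deviant rates from the abstract and a hypothetical trial count; a real experiment would add constraints such as a minimum number of standards between deviants:

```python
import random

def oddball_sequence(n_trials, probs, seed=0):
    # Draw one condition label per trial according to fixed probabilities.
    # probs maps condition label -> probability (values should sum to 1).
    rng = random.Random(seed)
    labels = list(probs)
    weights = [probs[k] for k in labels]
    return rng.choices(labels, weights=weights, k=n_trials)

# 80% neutral standards, 10% happy and 10% angry deviants.
seq = oddball_sequence(500, {"neutral": 0.8, "happy": 0.1, "angry": 0.1})
```

The optimum design from the abstract would simply use a different probability map (17% per deviant type).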
Lima, César F; Garrett, Carolina; Castro, São Luís
2013-01-01
Does emotion processing in music and speech prosody recruit common neurocognitive mechanisms? To examine this question, we implemented a cross-domain comparative design in Parkinson's disease (PD). Twenty-four patients and 25 controls performed emotion recognition tasks for music and spoken sentences. In music, patients had impaired recognition of happiness and peacefulness, and intact recognition of sadness and fear; this pattern was independent of general cognitive and perceptual abilities. In speech, patients had a small global impairment, which was significantly mediated by executive dysfunction. Hence, PD affected musical and prosodic emotions differently. This dissociation indicates that the mechanisms underlying the two domains are partly independent.
Dynamic Facial Expressions Prime the Processing of Emotional Prosody.
Garrido-Vásquez, Patricia; Pell, Marc D; Paulmann, Silke; Kotz, Sonja A
2018-01-01
Evidence suggests that emotion is represented supramodally in the human brain. Emotional facial expressions, which often precede vocally expressed emotion in real life, can modulate event-related potentials (N100 and P200) during emotional prosody processing. To investigate these cross-modal emotional interactions, two lines of research have been put forward: cross-modal integration and cross-modal priming. In cross-modal integration studies, visual and auditory channels are temporally aligned, while in priming studies they are presented consecutively. Here we used cross-modal emotional priming to study the interaction of dynamic visual and auditory emotional information. Specifically, we presented dynamic facial expressions (angry, happy, neutral) as primes and emotionally-intoned pseudo-speech sentences (angry, happy) as targets. We were interested in how prime-target congruency would affect early auditory event-related potentials, i.e., N100 and P200, in order to shed more light on how dynamic facial information is used in cross-modal emotional prediction. Results showed enhanced N100 amplitudes for incongruently primed compared to congruently and neutrally primed emotional prosody, while the latter two conditions did not significantly differ. However, N100 peak latency was significantly delayed in the neutral condition compared to the other two conditions. Source reconstruction revealed that the right parahippocampal gyrus was activated in incongruent compared to congruent trials in the N100 time window. No significant ERP effects were observed in the P200 range. Our results indicate that dynamic facial expressions influence vocal emotion processing at an early point in time, and that an emotional mismatch between a facial expression and its ensuing vocal emotional signal induces additional processing costs in the brain, potentially because the cross-modal emotional prediction mechanism is violated in case of emotional prime-target incongruency.
The Neural Correlates of Emotional Prosody Comprehension: Disentangling Simple from Complex Emotion
Alba-Ferrara, Lucy; Hausmann, Markus; Mitchell, Rachel L.; Weis, Susanne
2011-01-01
Background Emotional prosody comprehension (EPC), the ability to interpret another person's feelings by listening to their tone of voice, is crucial for effective social communication. Previous studies assessing the neural correlates of EPC have found inconsistent results, particularly regarding the involvement of the medial prefrontal cortex (mPFC). It remained unclear whether the involvement of the mPFC is linked to an increased demand in socio-cognitive components of EPC such as mental state attribution, and whether basic perceptual processing of EPC can be performed without the contribution of this region. Methods fMRI was used to delineate neural activity during the perception of prosodic stimuli conveying simple and complex emotion. Emotional trials in general, as compared to neutral ones, activated a network comprising temporal and lateral frontal brain regions, while complex emotion trials specifically showed an additional involvement of the mPFC, premotor cortex, frontal operculum and left insula. Conclusion These results indicate that the mPFC and premotor areas may be associated with, but are not crucial to, EPC. However, the mPFC supports socio-cognitive skills necessary to interpret complex emotion such as inferring mental states. Additionally, the premotor cortex involvement may reflect the participation of the mirror neuron system for prosody processing, particularly of complex emotion. PMID:22174872
Alba-Ferrara, Lucy; Fernyhough, Charles; Weis, Susanne; Mitchell, Rachel L C; Hausmann, Markus
2012-06-01
Deficits in emotional processing have been widely described in schizophrenia. Associations of positive symptoms with poor emotional prosody comprehension (EPC) have been reported at the phenomenological, behavioral, and neural levels. This review focuses on the relation between emotional processing deficits and auditory verbal hallucinations (AVH). We explore the possibility that the relation between AVH and EPC in schizophrenia might be mediated by the disruption of a common mechanism intrinsic to auditory processing, and that, moreover, prosodic feature processing deficits play a pivotal role in the formation of AVH. The review concludes with proposing a mechanism by which AVH are constituted and showing how different aspects of our neuropsychological model can explain the constellation of subjective experiences which occur in relation to AVH. Copyright © 2012 Elsevier Ltd. All rights reserved.
Abnormal Processing of Emotional Prosody in Williams Syndrome: An Event-Related Potentials Study
ERIC Educational Resources Information Center
Pinheiro, Ana P.; Galdo-Alvarez, Santiago; Rauber, Andreia; Sampaio, Adriana; Niznikiewicz, Margaret; Goncalves, Oscar F.
2011-01-01
Williams syndrome (WS), a neurodevelopmental genetic disorder due to a microdeletion in chromosome 7, is described as displaying an intriguing socio-cognitive phenotype. Deficits in prosody production and comprehension have been consistently reported in behavioral studies. It remains, however, to be clarified the neurobiological processes…
Second Language Ability and Emotional Prosody Perception
Bhatara, Anjali; Laukka, Petri; Boll-Avetisyan, Natalie; Granjon, Lionel; Anger Elfenbein, Hillary; Bänziger, Tanja
2016-01-01
The present study examines the effect of language experience on vocal emotion perception in a second language. Native speakers of French with varying levels of self-reported English ability were asked to identify emotions from vocal expressions produced by American actors in a forced-choice task, and to rate their pleasantness, power, alertness and intensity on continuous scales. Stimuli included emotionally expressive English speech (emotional prosody) and non-linguistic vocalizations (affect bursts), and a baseline condition with Swiss-French pseudo-speech. Results revealed effects of English ability on the recognition of emotions in English speech but not in non-linguistic vocalizations. Specifically, higher English ability was associated with less accurate identification of positive emotions, but not with the interpretation of negative emotions. Moreover, higher English ability was associated with lower ratings of pleasantness and power, again only for emotional prosody. This suggests that second language skills may sometimes interfere with emotion recognition from speech prosody, particularly for positive emotions. PMID:27253326
Perception of affective and linguistic prosody: an ALE meta-analysis of neuroimaging studies.
Belyk, Michel; Brown, Steven
2014-09-01
Prosody refers to the melodic and rhythmic aspects of speech. Two forms of prosody are typically distinguished: 'affective prosody' refers to the expression of emotion in speech, whereas 'linguistic prosody' relates to the intonation of sentences, including the specification of focus within sentences and stress within polysyllabic words. While these two processes are united by their use of vocal pitch modulation, they are functionally distinct. In order to examine the localization and lateralization of speech prosody in the brain, we performed two voxel-based meta-analyses of neuroimaging studies of the perception of affective and linguistic prosody. There was substantial sharing of brain activations between analyses, particularly in right-hemisphere auditory areas. However, a major point of divergence was observed in the inferior frontal gyrus: affective prosody was more likely to activate Brodmann area 47, while linguistic prosody was more likely to activate the ventral part of area 44. © The Author (2013). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
van Rijn, Sophie; Aleman, André; van Diessen, Eric; Berckmoes, Celine; Vingerhoets, Guy; Kahn, René S
2005-06-01
Emotional signals in spoken language can be conveyed by semantic as well as prosodic cues. We investigated the role of the right-hemisphere fronto-parietal operculum, a somatosensory area where the lips, tongue and jaw are represented, in the detection of emotion in prosody vs. semantics. A total of 14 healthy volunteers participated in the present experiment, which involved transcranial magnetic stimulation (TMS) in combination with frameless stereotaxy. As predicted, compared with sham stimulation, TMS over the right fronto-parietal operculum differentially affected the reaction times for detection of emotional prosody vs. emotional semantics, showing that there is a dissociation at a neuroanatomical level. Detection of withdrawal emotions (fear and sadness) in prosody was delayed significantly by TMS. No effects of TMS were observed for approach emotions (happiness and anger). We propose that the right fronto-parietal operculum is not globally involved in emotion evaluation, but sensitive to specific forms of emotional discrimination and emotion types.
Sex Differences in Brain Control of Prosody
ERIC Educational Resources Information Center
Rymarczyk, Krystyna; Grabowska, Anna
2007-01-01
Affective (emotional) prosody is a neuropsychological function that encompasses non-verbal aspects of language that are necessary for recognizing and conveying emotions in communication, whereas non-affective (linguistic) prosody indicates whether the sentence is a question, an order or a statement. Considerable evidence points to a dominant role…
Processing of affective speech prosody is impaired in Asperger syndrome.
Korpilahti, Pirjo; Jansson-Verkasalo, Eira; Mattila, Marja-Leena; Kuusikko, Sanna; Suominen, Kalervo; Rytky, Seppo; Pauls, David L; Moilanen, Irma
2007-09-01
Many people with the diagnosis of Asperger syndrome (AS) show poorly developed skills in understanding emotional messages. The present study addressed discrimination of speech prosody in children with AS at the neurophysiological level. Detection of affective prosody was investigated in one-word utterances as indexed by the N1 and the mismatch negativity (MMN) of auditory event-related potentials (ERPs). Data from fourteen boys with AS were compared with those from thirteen typically developing boys. These results suggest atypical neural responses to affective prosody in children with AS and their fathers, especially over the right hemisphere (RH), and that this impairment can already be seen in low-level information processing. Our results provide evidence for familial patterns of abnormal auditory brain reactions to prosodic features of speech.
Hearing Feelings: Affective Categorization of Music and Speech in Alexithymia, an ERP Study
Goerlich, Katharina Sophia; Witteman, Jurriaan; Aleman, André; Martens, Sander
2011-01-01
Background Alexithymia, a condition characterized by deficits in interpreting and regulating feelings, is a risk factor for a variety of psychiatric conditions. Little is known about how alexithymia influences the processing of emotions in music and speech. Appreciation of such emotional qualities in auditory material is fundamental to human experience and has profound consequences for functioning in daily life. We investigated the neural signature of such emotional processing in alexithymia by means of event-related potentials. Methodology Affective music and speech prosody were presented as targets following affectively congruent or incongruent visual word primes in two conditions. In two further conditions, affective music and speech prosody served as primes and visually presented words with affective connotations were presented as targets. Thirty-two participants (16 male) judged the affective valence of the targets. We tested the influence of alexithymia on cross-modal affective priming and on N400 amplitudes, indicative of individual sensitivity to an affective mismatch between words, prosody, and music. Our results indicate that the affective priming effect for prosody targets tended to be reduced with increasing scores on alexithymia, while no behavioral differences were observed for music and word targets. At the electrophysiological level, alexithymia was associated with significantly smaller N400 amplitudes in response to affectively incongruent music and speech targets, but not to incongruent word targets. Conclusions Our results suggest a reduced sensitivity for the emotional qualities of speech and music in alexithymia during affective categorization. This deficit becomes evident primarily in situations in which a verbalization of emotional information is required. PMID:21573026
Identification of emotional intonation evaluated by fMRI.
Wildgruber, D; Riecker, A; Hertrich, I; Erb, M; Grodd, W; Ethofer, T; Ackermann, H
2005-02-15
During acoustic communication among human beings, emotional information can be expressed both by the propositional content of verbal utterances and by the modulation of speech melody (affective prosody). It is well established that linguistic processing is bound predominantly to the left hemisphere of the brain. By contrast, the encoding of emotional intonation has been assumed to depend specifically upon right-sided cerebral structures. However, prior clinical and functional imaging studies yielded discrepant data with respect to interhemispheric lateralization and intrahemispheric localization of brain regions contributing to processing of affective prosody. In order to delineate the cerebral network engaged in the perception of emotional tone, functional magnetic resonance imaging (fMRI) was performed during recognition of prosodic expressions of five different basic emotions (happy, sad, angry, fearful, and disgusted) and during phonetic monitoring of the same stimuli. As compared to baseline at rest, both tasks yielded widespread bilateral hemodynamic responses within frontal, temporal, and parietal areas, the thalamus, and the cerebellum. A comparison of the respective activation maps, however, revealed comprehension of affective prosody to be bound to a distinct right-hemisphere pattern of activation, encompassing posterior superior temporal sulcus (Brodmann Area [BA] 22), dorsolateral (BA 44/45), and orbitobasal (BA 47) frontal areas. Activation within left-sided speech areas, in contrast, was observed during the phonetic task. These findings indicate that partially distinct cerebral networks subserve processing of phonetic and intonational information during speech perception.
How Children Use Emotional Prosody: Crossmodal Emotional Integration?
ERIC Educational Resources Information Center
Gil, Sandrine; Hattouti, Jamila; Laval, Virginie
2016-01-01
A crossmodal effect has been observed in the processing of facial and vocal emotion in adults and infants. For the first time, we assessed whether this effect is present in childhood by administering a crossmodal task similar to those used in seminal studies featuring emotional faces (i.e., a continuum of emotional expressions running from…
Emotional Prosody Perception and Production in Dementia of the Alzheimer's Type
ERIC Educational Resources Information Center
Horley, Kaye; Reid, Amanda; Burnham, Denis
2010-01-01
Purpose: In this study, the authors investigated emotional prosody in patients with moderate Dementia of the Alzheimer's type (DAT) With Late Onset. It was expected that both expression and reception of prosody would be impaired relative to age-matched controls. Method: Twenty DAT and 20 control participants engaged in 2 expressive and 2 receptive…
Understanding speaker attitudes from prosody by adults with Parkinson's disease.
Monetta, Laura; Cheang, Henry S; Pell, Marc D
2008-09-01
The ability to interpret vocal (prosodic) cues during social interactions can be disrupted by Parkinson's disease, with notable effects on how emotions are understood from speech. This study investigated whether PD patients who have emotional prosody deficits exhibit further difficulties decoding the attitude of a speaker from prosody. Vocally inflected but semantically nonsensical 'pseudo-utterances' were presented to listener groups with and without PD in two separate rating tasks. Task 1 required participants to rate how confident a speaker sounded from their voice and Task 2 required listeners to rate how polite the speaker sounded for a comparable set of pseudo-utterances. The results showed that PD patients were significantly less able than healthy control (HC) participants to use prosodic cues to differentiate intended levels of speaker confidence in speech, although the patients could accurately detect the polite/impolite attitude of the speaker from prosody in most cases. Our data suggest that many PD patients fail to use vocal cues to effectively infer a speaker's emotions as well as certain attitudes in speech such as confidence, consistent with the idea that the basal ganglia play a role in the meaningful processing of prosodic sequences in spoken language (Pell & Leonard, 2003).
Emotion to emotion speech conversion in phoneme level
NASA Astrophysics Data System (ADS)
Bulut, Murtaza; Yildirim, Serdar; Busso, Carlos; Lee, Chul Min; Kazemzadeh, Ebrahim; Lee, Sungbok; Narayanan, Shrikanth
2004-10-01
Having an ability to synthesize emotional speech can make human-machine interaction more natural in spoken dialogue management. This study investigates the effectiveness of prosodic and spectral modification at the phoneme level on emotion-to-emotion speech conversion. The prosody modification is performed with the TD-PSOLA algorithm (Moulines and Charpentier, 1990). We also transform the spectral envelopes of source phonemes to match those of target phonemes using an LPC-based spectral transformation approach (Kain, 2001). Prosodic speech parameters (F0, duration, and energy) for target phonemes are estimated from the statistics obtained from the analysis of an emotional speech database of happy, angry, sad, and neutral utterances collected from actors. Listening experiments conducted with native American English speakers indicate that modification of prosody only or spectrum only is not sufficient to elicit the targeted emotions. Simultaneous modification of both prosody and spectrum results in higher acceptance rates of target emotions, suggesting that modeling spectral patterns that reflect the underlying speech articulation is as important as modeling speech prosody for synthesizing emotional speech of good quality. We are investigating suprasegmental-level modifications for further improvement in speech quality and expressiveness.
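The prosodic parameters this record manipulates (duration and energy) can be illustrated with a toy example. The sketch below changes duration by naive linear resampling (which, unlike TD-PSOLA, also shifts pitch when the result is played at the original rate) and energy by amplitude scaling; the signal, sample rate, and scale factors are all made up for illustration:

```python
import numpy as np

def modify_prosody(signal, duration_scale=1.0, energy_scale=1.0):
    # Naive duration change via linear resampling. Unlike TD-PSOLA,
    # this also alters pitch when the result is played at the original
    # sample rate; it only illustrates the parameters being manipulated.
    n_out = int(round(len(signal) * duration_scale))
    t_in = np.linspace(0.0, 1.0, len(signal))
    t_out = np.linspace(0.0, 1.0, n_out)
    stretched = np.interp(t_out, t_in, signal)
    return stretched * energy_scale

sr = 16000
t = np.arange(sr) / sr                      # 1 s of samples
phoneme = np.sin(2 * np.pi * 220 * t)       # toy 220 Hz "phoneme"
louder_longer = modify_prosody(phoneme, duration_scale=1.5, energy_scale=2.0)
```

Pitch-preserving duration modification, as in the study itself, instead repositions individual pitch periods (TD-PSOLA) rather than resampling the waveform.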
Doi, Hirokazu; Fujisawa, Takashi X; Kanai, Chieko; Ohta, Haruhisa; Yokoi, Hideki; Iwanami, Akira; Kato, Nobumasa; Shinohara, Kazuyuki
2013-09-01
This study investigated the ability of adults with Asperger syndrome to recognize emotional categories of facial expressions and emotional prosodies with graded emotional intensities. The individuals with Asperger syndrome showed poorer recognition performance for angry and sad expressions from both facial and vocal information. The group difference in facial expression recognition was prominent for stimuli with low or intermediate emotional intensities. In contrast to this, the individuals with Asperger syndrome exhibited lower recognition accuracy than typically-developed controls mainly for emotional prosody with high emotional intensity. In facial expression recognition, Asperger and control groups showed an inversion effect for all categories. The magnitude of this effect was less in the Asperger group for angry and sad expressions, presumably attributable to reduced recruitment of the configural mode of face processing. The individuals with Asperger syndrome outperformed the control participants in recognizing inverted sad expressions, indicating enhanced processing of local facial information representing sad emotion. These results suggest that the adults with Asperger syndrome rely on modality-specific strategies in emotion recognition from facial expression and prosodic information.
Grossman, Ruth B; Tager-Flusberg, Helen
2012-01-01
Data on emotion processing by individuals with ASD suggest both intact abilities and significant deficits. Signal intensity may be a contributing factor to this discrepancy. We presented low- and high-intensity emotional stimuli in a face-voice matching task to 22 adolescents with ASD and 22 typically developing (TD) peers. Participants heard semantically neutral sentences with happy, surprised, angry, and sad prosody presented at two intensity levels (low, high) and matched them to emotional faces. The facial expression choice was either across- or within-valence. Both groups were less accurate for low-intensity emotions, but the ASD participants' accuracy levels dropped off more sharply. ASD participants were significantly less accurate than their TD peers for trials involving low-intensity emotions and within-valence face contrasts. PMID:22450703
ERIC Educational Resources Information Center
Lindner, Jennifer L.; Rosen, Lee A.
2006-01-01
This study examined differences in the ability to decode emotion through facial expression, prosody, and verbal content between 14 children with Asperger's Syndrome (AS) and 16 typically developing peers. The ability to decode emotion was measured by the Perception of Emotion Test (POET), which portrayed the emotions of happy, angry, sad, and…
What Do You Mean by That?! An Electrophysiological Study of Emotional and Attitudinal Prosody.
Wickens, Steven; Perry, Conrad
2015-01-01
The use of prosody during verbal communication is pervasive in everyday language and whilst there is a wealth of research examining the prosodic processing of emotional information, much less is known about the prosodic processing of attitudinal information. The current study investigated the online neural processes underlying the prosodic processing of non-verbal emotional and attitudinal components of speech via the analysis of event-related brain potentials related to the processing of anger and sarcasm. To examine these, sentences with prosodic expectancy violations created by cross-splicing a prosodically neutral head ('he has') and a prosodically neutral, angry, or sarcastic ending (e.g., 'a serious face') were used. Task demands were also manipulated, with participants in one experiment performing prosodic classification and participants in another performing probe-verification. Overall, whilst minor differences were found across the tasks, the results suggest that angry and sarcastic prosodic expectancy violations follow a similar processing time-course underpinned by similar neural resources.
Behold the Voice of Wrath: Cross-Modal Modulation of Visual Attention by Anger Prosody
ERIC Educational Resources Information Center
Brosch, Tobias; Grandjean, Didier; Sander, David; Scherer, Klaus R.
2008-01-01
Emotionally relevant stimuli are prioritized in human information processing. It has repeatedly been shown that selective spatial attention is modulated by the emotional content of a stimulus. Until now, studies investigating this phenomenon have only examined "within-modality" effects, most frequently using pictures of emotional stimuli to…
ERIC Educational Resources Information Center
Ben-David, Boaz M.; Multani, Namita; Shakuf, Vered; Rudzicz, Frank; van Lieshout, Pascal H. H. M.
2016-01-01
Purpose: Our aim is to explore the complex interplay of prosody (tone of speech) and semantics (verbal content) in the perception of discrete emotions in speech. Method: We implement a novel tool, the Test for Rating of Emotions in Speech. Eighty native English speakers were presented with spoken sentences made of different combinations of 5…
The processing of emotional prosody and semantics in schizophrenia: relationship to gender and IQ.
Scholten, M R M; Aleman, A; Kahn, R S
2008-06-01
Female patients with schizophrenia are less impaired in social life than male patients. Because social impairment in schizophrenia has been found to be associated with deficits in emotion recognition, we examined whether the female advantage in processing emotional prosody and semantics is preserved in schizophrenia. Forty-eight patients (25 males, 23 females) and 46 controls (23 males, 23 females) were assessed using an emotional language task (in which healthy women generally outperform healthy men), consisting of 96 sentences in four conditions: (1) neutral-content/emotional-tone (happy, sad, angry or anxious); (2) neutral-tone/emotional-content; (3) emotional-tone/incongruous emotional-content; and (4) emotional-content/incongruous emotional-tone. Participants had to ignore the emotional-content in the third condition and the emotional-tone in the fourth condition. In addition, participants were assessed with a visuospatial task (in which healthy men typically excel). Correlation coefficients were computed for associations between emotional language data, visuospatial data, IQ measures and patient variables. Overall, on the emotional language task, patients made more errors than control subjects, and women outperformed men across diagnostic groups. Controlling for IQ revealed a significant effect on task performance in all groups, especially in the incongruent tasks. On the rotation task, healthy men outperformed healthy women, but male patients, female patients and female controls obtained similar scores. The advantage in emotional prosodic and semantic processing in healthy women is preserved in schizophrenia, whereas the male advantage in visuospatial processing is lost. These findings may explain, in part, why social functioning is less compromised in women with schizophrenia than in men.
ERIC Educational Resources Information Center
Rodway, Paul; Schepman, Astrid
2007-01-01
The majority of studies have demonstrated a right hemisphere (RH) advantage for the perception of emotions. Other studies have found that the involvement of each hemisphere is valence specific, with the RH better at perceiving negative emotions and the LH better at perceiving positive emotions [Reuter-Lorenz, P., & Davidson, R.J. (1981)…
Impact of personality on the cerebral processing of emotional prosody.
Brück, Carolin; Kreifelts, Benjamin; Kaza, Evangelia; Lotze, Martin; Wildgruber, Dirk
2011-09-01
While several studies have focused on identifying common brain mechanisms governing the decoding of emotional speech melody, interindividual variations in the cerebral processing of prosodic information have, in comparison, received little attention to date: although differences in personality among individuals have been shown to modulate emotional brain responses, for instance, personality influences on the neural basis of prosody decoding have not yet been investigated systematically. Thus, the present study aimed at delineating relationships between interindividual differences in personality and hemodynamic responses evoked by emotional speech melody. To determine personality-dependent modulations of brain reactivity, fMRI activation patterns during the processing of emotional speech cues were acquired from 24 healthy volunteers and subsequently correlated with individual trait measures of extraversion and neuroticism obtained for each participant. Whereas correlation analysis did not indicate any link between brain activation and extraversion, strong positive correlations between measures of neuroticism and hemodynamic responses of the right amygdala, the left postcentral gyrus as well as medial frontal structures including the right anterior cingulate cortex emerged, suggesting that brain mechanisms mediating the decoding of emotional speech melody may vary depending on differences in neuroticism among individuals. Observed trait-specific modulations are discussed in the light of processing biases as well as differences in emotion control or task strategies which may be associated with the personality trait of neuroticism. Copyright © 2011 Elsevier Inc. All rights reserved.
Filippi, Piera
2016-01-01
Across a wide range of animal taxa, prosodic modulation of the voice can express emotional information and is used to coordinate vocal interactions between multiple individuals. Within a comparative approach to animal communication systems, I hypothesize that the ability for emotional and interactional prosody (EIP) paved the way for the evolution of linguistic prosody – and perhaps also of music, continuing to play a vital role in the acquisition of language. In support of this hypothesis, I review three research fields: (i) empirical studies on the adaptive value of EIP in non-human primates, mammals, songbirds, anurans, and insects; (ii) the beneficial effects of EIP in scaffolding language learning and social development in human infants; (iii) the cognitive relationship between linguistic prosody and the ability for music, which has often been identified as the evolutionary precursor of language. PMID:27733835
ERIC Educational Resources Information Center
Grossman, Ruth B.; Tager-Flusberg, Helen
2012-01-01
Data on emotion processing by individuals with ASD suggest both intact abilities and significant deficits. Signal intensity may be a contributing factor to this discrepancy. We presented low- and high-intensity emotional stimuli in a face-voice matching task to 22 adolescents with ASD and 22 typically developing (TD) peers. Participants heard…
Multimodal human communication--targeting facial expressions, speech content and prosody.
Regenbogen, Christina; Schneider, Daniel A; Gur, Raquel E; Schneider, Frank; Habel, Ute; Kellermann, Thilo
2012-05-01
Human communication is based on a dynamic information exchange across the communication channels of facial expressions, prosody, and speech content. This fMRI study elucidated the impact of multimodal emotion processing and the specific contribution of each channel on behavioral empathy and its prerequisites. Ninety-six video clips displaying actors who told self-related stories were presented to 27 healthy participants. In two conditions, all channels uniformly transported only emotional or neutral information. Three conditions selectively presented two emotional channels and one neutral channel. Subjects indicated the actors' emotional valence and their own while fMRI was recorded. Activation patterns of tri-channel emotional communication reflected multimodal processing and facilitative effects for empathy. Accordingly, subjects' behavioral empathy rates significantly deteriorated once one source was neutral. However, emotionality expressed via two of three channels yielded activation in a network associated with theory-of-mind-processes. This suggested participants' effort to infer mental states of their counterparts and was accompanied by a decline of behavioral empathy, driven by the participants' emotional responses. Channel-specific emotional contributions were present in modality-specific areas. The identification of different network-nodes associated with human interactions constitutes a prerequisite for understanding dynamics that underlie multimodal integration and explain the observed decline in empathy rates. This task might also shed light on behavioral deficits and neural changes that accompany psychiatric diseases. Copyright © 2012 Elsevier Inc. All rights reserved.
Psychoacoustic cues to emotion in speech prosody and music.
Coutinho, Eduardo; Dibben, Nicola
2013-01-01
There is strong evidence of shared acoustic profiles common to the expression of emotions in music and speech, yet relatively limited understanding of the specific psychoacoustic features involved. This study combined a controlled experiment and computational modelling to investigate the perceptual codes associated with the expression of emotion in the acoustic domain. The empirical stage of the study provided continuous human ratings of emotions perceived in excerpts of film music and natural speech samples. The computational stage created a computer model that retrieves the relevant information from the acoustic stimuli and makes predictions about the emotional expressiveness of speech and music that closely match the responses of human subjects. We show that a significant part of the listeners' second-by-second reported emotions to music and speech prosody can be predicted from a set of seven psychoacoustic features: loudness, tempo/speech rate, melody/prosody contour, spectral centroid, spectral flux, sharpness, and roughness. The implications of these results are discussed in the context of cross-modal similarities in the communication of emotion in the acoustic domain.
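Two of the seven psychoacoustic features named in this abstract are straightforward to compute. The numpy sketch below illustrates loudness (approximated here as RMS energy) and spectral centroid on a synthetic tone; it is only an illustration of the feature definitions, not a reproduction of the study's extraction pipeline or its model mapping features to continuous emotion ratings.

```python
import numpy as np

def rms_loudness(signal):
    """Root-mean-square energy, a rough proxy for loudness."""
    return np.sqrt(np.mean(signal ** 2))

def spectral_centroid(signal, sr):
    """Magnitude-weighted mean frequency of the spectrum (Hz)."""
    mag = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return np.sum(freqs * mag) / np.sum(mag)

# A 1-second 440 Hz pure tone: its spectral centroid sits at the tone frequency
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
```

For a pure tone the centroid coincides with the tone's frequency; for speech or music it tracks perceived brightness, one of the cues such models relate to emotional expressiveness.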
Major depressive disorder skews the recognition of emotional prosody.
Péron, Julie; El Tamer, Sarah; Grandjean, Didier; Leray, Emmanuelle; Travers, David; Drapier, Dominique; Vérin, Marc; Millet, Bruno
2011-06-01
Major depressive disorder (MDD) is associated with abnormalities in the recognition of emotional stimuli. MDD patients ascribe more negative emotion but also less positive emotion to facial expressions, suggesting blunted responsiveness to positive emotional stimuli. To ascertain whether these emotional biases are modality-specific, we examined the effects of MDD on the recognition of emotions from voices using a paradigm designed to capture subtle effects of biases. Twenty-one MDD patients and 21 healthy controls (HC) underwent clinical and neuropsychological assessments, followed by a paradigm featuring pseudowords spoken by actors in five types of emotional prosody, rated on continuous scales. Overall, MDD patients performed more poorly than HC, displaying significantly impaired recognition of fear, happiness and sadness. Compared with HC, they rated fear significantly more highly when listening to anger stimuli. They also displayed a bias toward surprise, rating it far higher when they heard sad or fearful utterances. Furthermore, for happiness stimuli, MDD patients gave higher ratings for negative emotions (fear and sadness). A multiple regression model on recognition of emotional prosody in MDD patients showed that the best fit was achieved using the executive functioning (categorical fluency, number of errors in the MCST, and TMT B-A) and the total score of the Montgomery-Asberg Depression Rating Scale. Impaired recognition of emotions would appear not to be specific to the visual modality but to be present also when emotions are expressed vocally, this impairment being related to depression severity and dysexecutive syndrome. MDD seems to skew the recognition of emotional prosody toward negative emotional stimuli and the blunting of positive emotion appears not to be restricted to the visual modality. Copyright © 2011 Elsevier Inc. All rights reserved.
Prosody production networks are modulated by sensory cues and social context.
Klasen, Martin; von Marschall, Clara; Isman, Güldehen; Zvyagintsev, Mikhail; Gur, Ruben C; Mathiak, Klaus
2018-03-05
The neurobiology of emotional prosody production is not well investigated. In particular, the effects of cues and social context are not known. The present study sought to differentiate cued from free emotion generation and the effect of social feedback from a human listener. Online speech filtering enabled fMRI during prosodic communication in 30 participants. Emotional vocalizations were a) free, b) auditorily cued, c) visually cued, or d) with interactive feedback. In addition to distributed language networks, cued emotions increased activity in auditory and - in the case of visual stimuli - visual cortex. Responses were larger in the right pSTG and the ventral striatum when participants were listened to and received feedback from the experimenter. Sensory, language, and reward networks contributed to prosody production and were modulated by cues and social context. The right pSTG is a central hub for communication in social interactions - in particular for interpersonal evaluation of vocal emotions.
Niedtfeld, Inga; Defiebre, Nadine; Regenbogen, Christina; Mier, Daniela; Fenske, Sabrina; Kirsch, Peter; Lis, Stefanie; Schmahl, Christian
2017-04-01
Previous research has revealed alterations and deficits in facial emotion recognition in patients with borderline personality disorder (BPD). During interpersonal communication in daily life, social signals such as speech content, variation in prosody, and facial expression need to be considered simultaneously. We hypothesized that deficits in higher level integration of social stimuli contribute to difficulties in emotion recognition in BPD, and heightened arousal might explain this effect. Thirty-one patients with BPD and thirty-one healthy controls were asked to identify emotions in short video clips, which were designed to represent different combinations of the three communication channels: facial expression, speech content, and prosody. Skin conductance was recorded as a measure of sympathetic arousal, while controlling for state dissociation. Patients with BPD showed lower mean accuracy scores than healthy control subjects in all conditions comprising emotional facial expressions. This was true for the condition with facial expression only, and for the combination of all three communication channels. Electrodermal responses were enhanced in BPD only in response to auditory stimuli. In line with the major body of facial emotion recognition studies, we conclude that deficits in the interpretation of facial expressions lead to the difficulties observed in multimodal emotion processing in BPD.
Castagna, Filomena; Montemagni, Cristiana; Maria Milani, Anna; Rocca, Giuseppe; Rocca, Paola; Casacchia, Massimo; Bogetto, Filippo
2013-02-28
This study aimed to evaluate the ability to decode emotion in the auditory and audiovisual modality in a group of patients with schizophrenia, and to explore the role of cognition and psychopathology in affecting these emotion recognition abilities. Ninety-four outpatients in a stable phase and 51 healthy subjects were recruited. Patients were assessed through a psychiatric evaluation and a wide neuropsychological battery. All subjects completed the Comprehensive Affect Testing System (CATS), a group of computerized tests designed to evaluate emotion perception abilities. With respect to the controls, patients were not impaired in the CATS tasks involving discrimination of nonemotional prosody, naming of emotional stimuli expressed by voice and judging the emotional content of a sentence, whereas they showed a specific impairment in decoding emotion in a conflicting auditory condition and in the multichannel modality. Prosody impairment was affected by executive functions, attention and negative symptoms, while deficit in multisensory emotion recognition was affected by executive functions and negative symptoms. These emotion recognition deficits, rather than being associated purely with emotion perception disturbances in schizophrenia, are affected by core symptoms of the illness. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
A cross-linguistic fMRI study of perception of intonation and emotion in Chinese.
Gandour, Jack; Wong, Donald; Dzemidzic, Mario; Lowe, Mark; Tong, Yunxia; Li, Xiaojian
2003-03-01
Conflicting data from neurobehavioral studies of the perception of intonation (linguistic) and emotion (affective) in spoken language highlight the need to further examine how functional attributes of prosodic stimuli are related to hemispheric differences in processing capacity. Because of similarities in their acoustic profiles, intonation and emotion permit us to assess to what extent hemispheric lateralization of speech prosody depends on functional instead of acoustical properties. To examine how the brain processes linguistic and affective prosody, an fMRI study was conducted using Chinese, a tone language in which both intonation and emotion may be signaled prosodically, in addition to lexical tones. Ten Chinese and 10 English subjects were asked to perform discrimination judgments of intonation (I: statement, question) and emotion (E: happy, angry, sad) presented in semantically neutral Chinese sentences. A baseline task required passive listening to the same speech stimuli (S). In direct between-group comparisons, the Chinese group showed left-sided frontoparietal activation for both intonation (I vs. S) and emotion (E vs. S) relative to baseline. When comparing intonation relative to emotion (I vs. E), the Chinese group demonstrated prefrontal activation bilaterally; parietal activation in the left hemisphere only. The reverse comparison (E vs. I), on the other hand, revealed that activation occurred in anterior and posterior prefrontal regions of the right hemisphere only. These findings show that some aspects of perceptual processing of emotion are dissociable from intonation, and, moreover, that they are mediated by the right hemisphere. Copyright 2003 Wiley-Liss, Inc.
McDonald, S; Togher, L; Tate, R; Randall, R; English, T; Gowland, A
2013-01-01
Many adults with acquired brain injuries, including traumatic brain injuries (TBI), have impaired emotion perception. Impaired perception of emotion in voice can occur independently of facial expression and represents a specific target for remediation. No research to date has addressed this. The current study used a randomised controlled trial to examine the efficacy of a short treatment (three two-hour sessions) for improving the ability to recognise emotional prosody for people with acquired brain injury, mostly TBI. Ten participants were allocated to treatment and 10 to waitlist. All participants remained involved for the duration of the study in the groups to which they were allocated. There were no significant treatment effects for group, but analyses of individual performances indicated that six of the treated participants made demonstrable improvements on objective measures of prosody recognition. The reasons why some participants showed improvements while others did not were not obvious. Improvements on objective lab-based measures did not generalise to relatives' reports of improvements in everyday communicative ability. Nor was there clear evidence of long-term effects. In conclusion, treatment of emotional prosody was effective in the short-term for half of the participants. Further research is required to determine what conditions are required to optimise generalisability and longer-term gains.
Liebenthal, Einat; Silbersweig, David A.; Stern, Emily
2016-01-01
Rapid assessment of emotions is important for detecting and prioritizing salient input. Emotions are conveyed in spoken words via verbal and non-verbal channels that are mutually informative and unveil in parallel over time, but the neural dynamics and interactions of these processes are not well understood. In this paper, we review the literature on emotion perception in faces, written words, and voices, as a basis for understanding the functional organization of emotion perception in spoken words. The characteristics of visual and auditory routes to the amygdala—a subcortical center for emotion perception—are compared across these stimulus classes in terms of neural dynamics, hemispheric lateralization, and functionality. Converging results from neuroimaging, electrophysiological, and lesion studies suggest the existence of an afferent route to the amygdala and primary visual cortex for fast and subliminal processing of coarse emotional face cues. We suggest that a fast route to the amygdala may also function for brief non-verbal vocalizations (e.g., laugh, cry), in which emotional category is conveyed effectively by voice tone and intensity. However, emotional prosody which evolves on longer time scales and is conveyed by fine-grained spectral cues appears to be processed via a slower, indirect cortical route. For verbal emotional content, the bulk of current evidence, indicating predominant left lateralization of the amygdala response and timing of emotional effects attributable to speeded lexical access, is more consistent with an indirect cortical route to the amygdala. Top-down linguistic modulation may play an important role for prioritized perception of emotions in words. Understanding the neural dynamics and interactions of emotion and language perception is important for selecting potent stimuli and devising effective training and/or treatment approaches for the alleviation of emotional dysfunction across a range of neuropsychiatric states. PMID:27877106
Lu, Xuejing; Ho, Hao Tam; Liu, Fang; Wu, Daxing; Thompson, William F.
2015-01-01
Background: Congenital amusia is a disorder that is known to affect the processing of musical pitch. Although individuals with amusia rarely show language deficits in daily life, a number of findings point to possible impairments in speech prosody that amusic individuals may compensate for by drawing on linguistic information. Using EEG, we investigated (1) whether the processing of speech prosody is impaired in amusia and (2) whether emotional linguistic information can compensate for this impairment. Method: Twenty Chinese amusics and 22 matched controls were presented pairs of emotional words spoken with either statement or question intonation while their EEG was recorded. Their task was to judge whether the intonations were the same. Results: Amusics exhibited impaired performance on the intonation-matching task for emotional linguistic information, as their performance was significantly worse than that of controls. EEG results showed a reduced N2 response to incongruent intonation pairs in amusics compared with controls, which likely reflects impaired conflict processing in amusia. However, our EEG results also indicated that amusics were intact in early sensory auditory processing, as revealed by a comparable N1 modulation in both groups. Conclusion: We propose that the impairment in discriminating speech intonation observed among amusic individuals may arise from an inability to access information extracted at early processing stages. This, in turn, could reflect a disconnection between low-level and high-level processing. PMID:25914659
Neural measures of the role of affective prosody in empathy for pain.
Meconi, Federica; Doro, Mattia; Lomoriello, Arianna Schiano; Mastrella, Giulia; Sessa, Paola
2018-01-10
Emotional communication often requires the integration of affective prosodic and semantic components from speech and the speaker's facial expression. Affective prosody may have a special role by virtue of its dual-nature; pre-verbal on one side and accompanying semantic content on the other. This consideration led us to hypothesize that it could act transversely, encompassing a wide temporal window involving the processing of facial expressions and semantic content expressed by the speaker. This would allow powerful communication in contexts of potential urgency such as witnessing the speaker's physical pain. Seventeen participants were shown faces preceded by verbal reports of pain. Facial expressions, intelligibility of the semantic content of the report (i.e., participants' mother tongue vs. fictional language) and the affective prosody of the report (neutral vs. painful) were manipulated. We monitored event-related potentials (ERPs) time-locked to the onset of the faces as a function of semantic content intelligibility and affective prosody of the verbal reports. We found that affective prosody may interact with facial expressions and semantic content in two successive temporal windows, supporting its role as a transverse communication cue.
ERIC Educational Resources Information Center
Martinez-Castilla, Pastora; Peppe, Susan
2008-01-01
This study aimed to find out what intonation features reliably represent the emotions of "liking" as opposed to "disliking" in the Spanish language, with a view to designing a prosody assessment procedure for use with children with speech and language disorders. 18 intonationally different prosodic realisations (tokens) of one word (limón) were…
Herniman, Sarah E; Allott, Kelly A; Killackey, Eóin; Hester, Robert; Cotton, Sue M
2017-01-15
Comorbid depression is common in first-episode schizophrenia spectrum (FES) disorders. Both depression and FES are associated with significant deficits in facial and prosody emotion recognition performance. However, it remains unclear whether people with FES and comorbid depression, compared to those without comorbid depression, have overall poorer emotion recognition, or instead, a different pattern of emotion recognition deficits. The aim of this study was to compare facial and prosody emotion recognition performance between those with and without comorbid depression in FES. This study involved secondary analysis of baseline data from a randomized controlled trial of vocational intervention for young people with first-episode psychosis (N=82; age range: 15-25 years). Those with comorbid depression (n=24) had more accurate recognition of sadness in faces compared to those without comorbid depression. Severity of depressive symptoms was also associated with more accurate recognition of sadness in faces. Such results did not recur for prosody emotion recognition. In addition to the cross-sectional design, limitations of this study include the absence of facial and prosodic recognition of neutral emotions. Findings indicate a mood congruent negative bias in facial emotion recognition in those with comorbid depression and FES, and provide support for cognitive theories of depression that emphasise the role of such biases in the development and maintenance of depression. Longitudinal research is needed to determine whether mood-congruent negative biases are implicated in the development and maintenance of depression in FES, or whether such biases are simply markers of depressed state. Copyright © 2016 Elsevier B.V. All rights reserved.
Reappraising the voices of wrath
Frühholz, Sascha; Grandjean, Didier
2015-01-01
Cognitive reappraisal recruits prefrontal and parietal cortical areas. Because of the near exclusive usage in past research of visual stimuli to elicit emotions, it is unknown whether the same neural substrates underlie the reappraisal of emotions induced through other sensory modalities. Here, participants reappraised their emotions in order to increase or decrease their emotional response to angry prosody, or maintained their attention to it in a control condition. Neural activity was monitored with fMRI, and connectivity was investigated by using psychophysiological interaction analyses. A right-sided network encompassing the superior temporal gyrus, the superior temporal sulcus and the inferior frontal gyrus was found to underlie the processing of angry prosody. During reappraisal to increase emotional response, the left superior frontal gyrus showed increased activity and became functionally coupled to right auditory cortices. During reappraisal to decrease emotional response, a network that included the medial frontal gyrus and posterior parietal areas showed increased activation and greater functional connectivity with bilateral auditory regions. Activations pertaining to this network were more extended on the right side of the brain. Although directionality cannot be inferred from PPI analyses, the findings suggest a similar frontoparietal network for the reappraisal of visually and auditorily induced negative emotions. PMID:25964502
Aziz-Zadeh, Lisa; Sheng, Tong; Gheytanchi, Anahita
2010-01-01
Background Prosody, the melody and intonation of speech, involves the rhythm, rate, pitch and voice quality to relay linguistic and emotional information from one individual to another. A significant component of human social communication depends upon interpreting and responding to another person's prosodic tone as well as one's own ability to produce prosodic speech. However there has been little work on whether the perception and production of prosody share common neural processes, and if so, how these might correlate with individual differences in social ability. Methods The aim of the present study was to determine the degree to which perception and production of prosody rely on shared neural systems. Using fMRI, neural activity during perception and production of a meaningless phrase in different prosodic intonations was measured. Regions of overlap for production and perception of prosody were found in premotor regions, in particular the left inferior frontal gyrus (IFG). Activity in these regions was further found to correlate with how high an individual scored on two different measures of affective empathy as well as a measure on prosodic production ability. Conclusions These data indicate, for the first time, that areas that are important for prosody production may also be utilized for prosody perception, as well as other aspects of social communication and social understanding, such as aspects of empathy and prosodic ability. PMID:20098696
Impaired affective prosody decoding in severe alcohol use disorder and Korsakoff syndrome.
Brion, Mélanie; de Timary, Philippe; Mertens de Wilmars, Serge; Maurage, Pierre
2018-06-01
Recognizing others' emotions is a fundamental social skill, widely impaired in psychiatric populations. These emotional dysfunctions are involved in the development and maintenance of alcohol-related disorders, but their differential intensity across emotions and their modifications during disease evolution remain underexplored. Affective prosody decoding was assessed through a vocalization task using six emotions, among 17 patients with severe alcohol use disorder, 16 Korsakoff syndrome patients (diagnosed following DSM-V criteria) and 19 controls. Significant disturbances in emotional decoding, particularly for negative emotions, were found in alcohol-related disorders. These impairments, identical for both experimental groups, constitute a core deficit in excessive alcohol use. Copyright © 2018 Elsevier B.V. All rights reserved.
Bloch, Yuval; Aviram, Shai; Neeman, Ronnie; Braw, Yoram; Nitzan, Uriel; Maoz, Hagai; Mimouni-Bloch, Aviva
2015-01-01
Prosody production is highly personalized, related to both the emotional and cognitive state of the speaker and to the task being performed. Fundamental frequency (F0) is a central measurable feature of prosody, associated with attention deficit hyperactivity disorder (ADHD). Since methylphenidate is an effective therapy for ADHD, we hypothesized that it would affect the fundamental frequency of ADHD patients. The answers of 32 adult ADHD patients were recorded while performing two computerized tasks (cognitive and emotional). Evaluations were performed at baseline and an hour after patients received methylphenidate. A significant effect of methylphenidate was observed on the fundamental frequency, as opposed to other parameters of prosody. This change was evident while patients performed a cognitive, as opposed to an emotional, task. This change was seen in the 14 female ADHD patients but not in the 18 male ADHD patients. The fundamental frequency while performing a cognitive task without methylphenidate did not differ between the female ADHD group and 22 female controls. This pilot study supports prosodic changes as a possible objective and accessible dynamic biological marker of treatment response, specifically in female ADHD.
NASA Astrophysics Data System (ADS)
Anagnostopoulos, Christos Nikolaos; Vovoli, Eftichia
An emotion recognition framework based on sound processing could improve services in human-computer interaction. Various quantitative speech features obtained from sound processing of acted speech were tested as to whether they are sufficient to discriminate between seven emotions. Multilayered perceptrons were trained to classify gender and emotions on the basis of a 24-input vector, which provides information about the prosody of the speaker over the entire sentence using statistics of sound features. Several experiments were performed and the results are presented analytically. Emotion recognition was successful when speakers and utterances were “known” to the classifier. However, severe misclassifications occurred in the utterance-independent framework. Nevertheless, the proposed feature vector achieved promising results for utterance-independent recognition of high- and low-arousal emotions.
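To make the feature design described above concrete, a sentence-level statistics vector of this kind could be assembled as follows. This is a sketch only: the abstract does not specify the 24 features, so the layout below (two prosodic tracks times two series times six statistics) and the `prosodic_feature_vector` helper are assumptions, not the authors' implementation.

```python
import numpy as np

def prosodic_feature_vector(f0_track, energy_track):
    """Hypothetical 24-dim prosodic statistics vector.

    For each of two frame-wise tracks (F0 and energy) and the first
    difference of each, take six summary statistics:
    2 tracks x 2 series x 6 stats = 24 features.
    """
    feats = []
    for track in (np.asarray(f0_track, float), np.asarray(energy_track, float)):
        for series in (track, np.diff(track)):
            feats += [series.mean(), series.std(), series.min(),
                      series.max(), float(np.median(series)),
                      series.max() - series.min()]
    return np.array(feats)
```

A vector like this could then feed a multilayer perceptron with 24 inputs; the point is simply that whole-utterance statistics, rather than frame-level features, form the classifier input.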
van de Velde, Daan J.; Schiller, Niels O.; van Heuven, Vincent J.; Levelt, Claartje C.; van Ginkel, Joost; Beers, Mieke; Briaire, Jeroen J.; Frijns, Johan H. M.
2017-01-01
This study aimed to find the optimal filter slope for cochlear implant simulations (vocoding) by testing the effect of a wide range of slopes on the discrimination of emotional and linguistic (focus) prosody, with varying availability of F0 and duration cues. Forty normally hearing participants judged if (non-)vocoded sentences were pronounced with happy or sad emotion, or with adjectival or nominal focus. Sentences were recorded as natural stimuli and manipulated to contain only emotion- or focus-relevant segmental duration or F0 information or both, and then noise-vocoded with 5, 20, 80, 120, and 160 dB/octave filter slopes. Performance increased with steeper slopes, but only up to 120 dB/octave, with bigger effects for emotion than for focus perception. For emotion, results with both cues most closely resembled results with F0, while for focus results with both cues most closely resembled those with duration, showing emotion perception relies primarily on F0, and focus perception on duration. This suggests that filter slopes affect focus perception less than emotion perception because for emotion, F0 is both more informative and more affected. The performance increase until extreme filter slope values suggests that much performance improvement in prosody perception is still to be gained for CI users. PMID:28599540
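The vocoding manipulation above can be sketched as a conventional noise vocoder in which the filter order sets the spectral slope (roughly 6 dB/octave per Butterworth order). Everything here, the `noise_vocode` name, the band layout, and the parameter defaults, is illustrative rather than the study's actual processing chain.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(x, fs, n_bands=8, slope_db_oct=24, f_lo=100.0, f_hi=3800.0):
    """Noise-vocode signal x with n_bands analysis/synthesis bands.

    A Butterworth band-pass falls off at roughly 6 dB/octave per
    order, so the requested slope fixes the filter order.
    """
    order = max(1, int(slope_db_oct // 6))
    edges = np.geomspace(f_lo, min(f_hi, 0.45 * fs), n_bands + 1)
    noise = np.random.default_rng(0).standard_normal(len(x))
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, x)                # analysis band
        env = np.abs(hilbert(band))           # temporal envelope
        out += sosfilt(sos, env * noise)      # envelope-modulated noise, re-filtered
    return out
```

Shallower slopes let envelope energy leak between bands, which is one plausible reason the abstract reports performance rising with slope steepness up to a ceiling.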
Sensory contribution to vocal emotion deficit in Parkinson's disease after subthalamic stimulation.
Péron, Julie; Cekic, Sezen; Haegelen, Claire; Sauleau, Paul; Patel, Sona; Drapier, Dominique; Vérin, Marc; Grandjean, Didier
2015-02-01
Subthalamic nucleus (STN) deep brain stimulation in Parkinson's disease induces modifications in the recognition of emotion from voices (or emotional prosody). Nevertheless, the underlying mechanisms are still only poorly understood, and the role of acoustic features in these deficits has yet to be elucidated. Our aim was to identify the influence of acoustic features on changes in emotional prosody recognition following STN stimulation in Parkinson's disease. To this end, we analysed the performances of patients on vocal emotion recognition in pre-versus post-operative groups, as well as of matched controls, entering the acoustic features of the stimuli into our statistical models. Analyses revealed that the post-operative biased ratings on the Fear scale when patients listened to happy stimuli were correlated with loudness, while the biased ratings on the Sadness scale when they listened to happiness were correlated with fundamental frequency (F0). Furthermore, disturbed ratings on the Happiness scale when the post-operative patients listened to sadness were found to be correlated with F0. These results suggest that inadequate use of acoustic features following subthalamic stimulation has a significant impact on emotional prosody recognition in patients with Parkinson's disease, affecting the extraction and integration of acoustic cues during emotion perception.
Zhang, Heming; Chen, Xuhai; Chen, Shengdong; Li, Yansong; Chen, Changming; Long, Quanshan; Yuan, Jiajin
2018-05-09
Facial and vocal expressions are essential modalities mediating the perception of emotion and social communication. Nonetheless, currently little is known about how emotion perception and its neural substrates differ across facial expression and vocal prosody. To clarify this issue, functional MRI scans were acquired in Study 1, in which participants were asked to discriminate the valence of emotional expression (angry, happy or neutral) from facial, vocal, or bimodal stimuli. In Study 2, we used an affective priming task (unimodal materials as primers and bimodal materials as target) and participants were asked to rate the intensity, valence, and arousal of the targets. Study 1 showed higher accuracy and shorter response latencies in the facial than in the vocal modality for a happy expression. Whole-brain analysis showed enhanced activation during facial compared to vocal emotions in the inferior temporal-occipital regions. Region of interest analysis showed a higher percentage signal change for facial than for vocal anger in the superior temporal sulcus. Study 2 showed that facial relative to vocal priming of anger had a greater influence on perceived emotion for bimodal targets, irrespective of the target valence. These findings suggest that facial expression is associated with enhanced emotion perception compared to equivalent vocal prosodies.
van den Broek, Egon L
2004-01-01
The voice embodies three sources of information: speech, the identity, and the emotional state of the speaker (i.e., emotional prosody). The latter feature is reflected in the variability of the fundamental frequency, or pitch (SD F0). To extract this feature, Emotional Prosody Measurement (EPM) was developed, which consists of 1) speech recording, 2) removal of speckle noise, 3) a Fourier Transform to extract the F0 signal, and 4) the determination of SD F0. After a pilot study in which six participants mimicked emotions with their voice, the core experiment was conducted to see whether EPM is successful. Twenty-five patients suffering from a panic disorder with agoraphobia participated. Two methods (story-telling and reliving) were used to trigger anxiety and were compared with comparable but more relaxed conditions. This resulted in a unique database of speech samples that was used to compare the EPM with the Subjective Unit of Distress to validate it as a measure of anxiety/stress. The experimental manipulation of anxiety proved to be successful and EPM proved to be a successful method for evaluating psychological therapy effectiveness.
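Steps 3 and 4 of the EPM pipeline (Fourier-based F0 extraction, then SD F0) can be sketched in a few lines. This is a minimal illustration under assumed frame sizes and pitch range, omitting the recording and speckle-noise-removal stages; the `sd_f0` helper is not the authors' implementation.

```python
import numpy as np

def sd_f0(x, fs, frame_len=1024, hop=512, fmin=75.0, fmax=400.0):
    """Estimate frame-wise F0 as the FFT peak within a plausible voice
    range, then return the standard deviation of that contour (SD F0)."""
    window = np.hanning(frame_len)
    freqs = np.fft.rfftfreq(frame_len, 1.0 / fs)
    band = (freqs >= fmin) & (freqs <= fmax)
    f0s = []
    for start in range(0, len(x) - frame_len + 1, hop):
        spectrum = np.abs(np.fft.rfft(x[start:start + frame_len] * window))
        # Skip near-silent frames whose in-band energy is negligible.
        if spectrum[band].max() > 1e-6 * (spectrum.max() + 1e-12):
            f0s.append(freqs[band][np.argmax(spectrum[band])])
    return float(np.std(f0s)) if f0s else 0.0
```

For a steady synthetic tone the contour is flat and SD F0 is near zero; anxious speech, by the abstract's logic, would show a larger spread.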
Simon, Doerte; Becker, Michael; Mothes-Lasch, Martin; Miltner, Wolfgang H.R.; Straube, Thomas
2017-01-01
Angry expressions of both voices and faces represent disorder-relevant stimuli in social anxiety disorder (SAD). Although individuals with SAD show greater amygdala activation to angry faces, previous work has failed to find comparable effects for angry voices. Here, we investigated whether voice sound-intensity, a modulator of a voice’s threat-relevance, affects brain responses to angry prosody in SAD. We used event-related functional magnetic resonance imaging to explore brain responses to voices varying in sound intensity and emotional prosody in SAD patients and healthy controls (HCs). Angry and neutral voices were presented either with normal or high sound amplitude, while participants had to decide upon the speaker’s gender. Loud vs normal voices induced greater insula activation, and angry vs neutral prosody greater orbitofrontal cortex activation in SAD as compared with HC subjects. Importantly, an interaction of sound intensity, prosody and group was found in the insula and the amygdala. In particular, the amygdala showed greater activation to loud angry voices in SAD as compared with HC subjects. This finding demonstrates a modulating role of voice sound-intensity on amygdalar hyperresponsivity to angry prosody in SAD and suggests that abnormal processing of interpersonal threat signals in amygdala extends beyond facial expressions in SAD. PMID:27651541
Expression of emotions and physiological changes during teaching
NASA Astrophysics Data System (ADS)
Tobin, Kenneth; King, Donna; Henderson, Senka; Bellocchi, Alberto; Ritchie, Stephen M.
2016-09-01
We investigated the expression of emotions while teaching in relation to a teacher's physiological changes. We used polyvagal theory (PVT) to frame the study of teaching in a teacher education program. Donna, a teacher-researcher, experienced high levels of stress and anxiety prior to beginning to teach and throughout the lesson; we used her expressed emotions as a focus for this research. We adopted event-oriented inquiry in a study of heart rate, oxygenation of the blood, and expressed emotions. Five events were identified for multilevel analysis in which we used narrative, prosodic analysis, and hermeneutic-phenomenological methods to learn more about the expression of emotions when Donna had: high heart rate (before and while teaching); low blood oxygenation (before and while teaching); and high blood oxygenation (while teaching). What we learned was consistent with the body's monitoring system recognizing social harm and switching to the control of the unmyelinated vagus nerve, thereby shutting down organs and muscles associated with social communication, leading to irregularities in prosody and expression of emotion. In events involving high heart rate and low blood oxygenation the physiological environment was associated with less effective and sometimes confusing patterns in prosody, including intonation, pace of speaking, and pausing. In a low blood oxygenation environment there was evidence of rapid speech and shallow, irregular breathing. In contrast, during an event in which 100% blood oxygenation occurred, prosody was perceived to be conducive to engagement and the teacher expressed positive emotions, such as satisfaction, while teaching. Becoming aware of the purposes of the research and the results we obtained provided the teacher with tools to enact changes to her teaching practice, especially prosody of the voice.
We regard it as a high priority to create tools to allow teachers and students, if and as necessary, to ameliorate excess emotions, and change heart rate, oxygenation levels, and breathing patterns.
Prosody and Formulaic Language in Treatment-Resistant Depression: Effects of Deep Brain Stimulation
ERIC Educational Resources Information Center
Bridges, Kelly A.
2014-01-01
Communication, specifically the elements crucial for normal social interaction, can be significantly affected in psychiatric illness, especially depression. Of specific importance are prosody (an aspect of speech that carries emotional valence) and formulaic language (non-novel linguistic segments that are prevalent in naturalistic conversation).…
Does incongruence of lexicosemantic and prosodic information cause discernible cognitive conflict?
Mitchell, Rachel L C
2006-12-01
We are often required to interpret discordant emotional signals. Whereas equivalent cognitive paradigms cause noticeable conflict via their behavioral and psychophysiological effects, the same may not necessarily be true for discordant emotions. Skin conductance responses (SCRs) and heart rates (HRs) were measured during a classic Stroop task and one in which the emotions conveyed by lexicosemantic content and prosody were congruent or incongruent. The participants' task was to identify the emotion conveyed by lexicosemantic content or prosody. No relationship was observed between HR and congruence. SCR was higher during incongruent than during congruent conditions of the experimental task (as well as in the classic Stroop task), but no difference in SCR was observed in a comparison between congruence effects during lexicosemantic emotion identification and those during prosodic emotion identification. It is concluded that incongruence between lexicosemantic and prosodic emotion does cause notable cognitive conflict. Functional neuroanatomic implications are discussed.
ERIC Educational Resources Information Center
Grossman, Ruth B.; Edelson, Lisa R.; Tager-Flusberg, Helen
2013-01-01
Purpose: People with high-functioning autism (HFA) have qualitative differences in facial expression and prosody production, which are rarely systematically quantified. The authors' goals were to qualitatively and quantitatively analyze prosody and facial expression productions in children and adolescents with HFA. Method: Participants were 22…
Fengler, Ineke; Delfau, Pia-Céline; Röder, Brigitte
2018-04-01
It is yet unclear whether congenitally deaf cochlear implant (CD CI) users' visual and multisensory emotion perception is influenced by their history of sign language acquisition. We hypothesized that early-signing CD CI users, relative to late-signing CD CI users and hearing, non-signing controls, show better facial expression recognition and rely more on the facial cues of audio-visual emotional stimuli. Two groups of young adult CD CI users, early signers (ES CI users; n = 11) and late signers (LS CI users; n = 10), and a group of hearing, non-signing, age-matched controls (n = 12) performed an emotion recognition task with auditory, visual, and cross-modal emotionally congruent and incongruent speech stimuli. On different trials, participants categorized either the facial or the vocal expressions. The ES CI users more accurately recognized affective prosody than the LS CI users in the presence of congruent facial information. Furthermore, the ES CI users, but not the LS CI users, gained more than the controls from congruent visual stimuli when recognizing affective prosody. Both CI groups performed overall worse than the controls in recognizing affective prosody. These results suggest that early sign language experience affects multisensory emotion perception in CD CI users.
Oerlemans, Anoek M; van der Meer, Jolanda M J; van Steijn, Daphne J; de Ruiter, Saskia W; de Bruijn, Yvette G E; de Sonneville, Leo M J; Buitelaar, Jan K; Rommelse, Nanda N J
2014-05-01
Autism is a highly heritable and clinically heterogeneous neuropsychiatric disorder that frequently co-occurs with other psychopathologies, such as attention-deficit/hyperactivity disorder (ADHD). An approach to parse heterogeneity is by forming more homogeneous subgroups of autism spectrum disorder (ASD) patients based on their underlying, heritable cognitive vulnerabilities (endophenotypes). Emotion recognition is a likely endophenotypic candidate for ASD and possibly for ADHD. Therefore, this study aimed to examine whether emotion recognition is a viable endophenotypic candidate for ASD and to assess the impact of comorbid ADHD in this context. A total of 90 children with ASD (43 with and 47 without ADHD), 79 ASD unaffected siblings, and 139 controls aged 6-13 years, were included to test recognition of facial emotion and affective prosody. Our results revealed that the recognition of both facial emotion and affective prosody was impaired in children with ASD and aggravated by the presence of ADHD. The latter could only be partly explained by typical ADHD cognitive deficits, such as inhibitory and attentional problems. The performance of unaffected siblings could overall be considered at an intermediate level, performing somewhat worse than the controls and better than the ASD probands. Our findings suggest that emotion recognition might be a viable endophenotype in ASD and a fruitful target in future family studies of the genetic contribution to ASD and comorbid ADHD. Furthermore, our results suggest that children with comorbid ASD and ADHD are at highest risk for emotion recognition problems.
How Psychological Stress Affects Emotional Prosody.
Paulmann, Silke; Furnes, Desire; Bøkenes, Anne Ming; Cozzolino, Philip J
2016-01-01
We explored how experimentally induced psychological stress affects the production and recognition of vocal emotions. In Study 1a, we demonstrate that sentences spoken by stressed speakers are judged by naïve listeners as sounding more stressed than sentences uttered by non-stressed speakers. In Study 1b, negative emotions produced by stressed speakers are generally less well recognized than the same emotions produced by non-stressed speakers. Multiple mediation analyses suggest this poorer recognition of negative stimuli was due to a mismatch between the variation of volume voiced by speakers and the range of volume expected by listeners. Together, this suggests that the stress level of the speaker affects judgments made by the receiver. In Study 2, we demonstrate that participants who were induced with a feeling of stress before carrying out an emotional prosody recognition task performed worse than non-stressed participants. Overall, findings suggest detrimental effects of induced stress on interpersonal sensitivity.
ERIC Educational Resources Information Center
Gadea, Marien; Espert, Raul; Salvador, Alicia; Marti-Bonmati, Luis
2011-01-01
Dichotic Listening (DL) is a valuable tool to study emotional brain lateralization. Regarding the perception of sadness and anger through affective prosody, the main finding has been a left ear advantage (LEA) for the sad but contradictory data for the anger prosody. Regarding an induced mood in the laboratory, its consequences upon DL were a…
Emotion speeds up conflict resolution: a new role for the ventral anterior cingulate cortex?
Kanske, Philipp; Kotz, Sonja A
2011-04-01
It has been hypothesized that processing of conflict is facilitated by emotion. Emotional stimuli signal significance in a situation. Thus, when an emotional stimulus is task relevant, more resources may be devoted to conflict processing to reduce the time that an organism is unable to act. In the present electroencephalography and functional magnetic resonance imaging (fMRI) studies, we employed a conflict task and manipulated the emotional content and prosody of auditory target stimuli. In line with our hypothesis, reaction times revealed faster conflict resolution for emotional stimuli. Early stages of event-related potential conflict processing were modulated by emotion, as indexed by an enhanced frontocentral negativity at 420 ms. fMRI yielded conflict activation in the dorsal anterior cingulate cortex (dACC), a crucial part of the executive control network. The right ventral ACC (vACC) was additionally activated for conflict processing in emotional stimuli. The amygdala was also activated by emotion. Furthermore, emotion increased functional connectivity between the vACC and both the amygdala and the dACC. The results support the hypothesis that emotion speeds up conflict processing and suggest a new role for the vACC in processing conflict in particularly significant situations signaled by emotion.
Loutrari, Ariadne; Lorch, Marjorie Perlman
2017-07-01
We present a follow-up study on the case of a Greek amusic adult, B.Z., whose impaired performance on scale, contour, interval, and meter was reported by Paraskevopoulos, Tsapkini, and Peretz in 2010, employing a culturally-tailored version of the Montreal Battery of Evaluation of Amusia. In the present study, we administered a novel set of perceptual judgement tasks designed to investigate the ability to appreciate holistic prosodic aspects of 'expressiveness' and emotion in phrase length music and speech stimuli. Our results show that, although diagnosed as a congenital amusic, B.Z. scored as well as healthy controls (N=24) on judging 'expressiveness' and emotional prosody in both speech and music stimuli. These findings suggest that the ability to make perceptual judgements about such prosodic qualities may be preserved in individuals who demonstrate difficulties perceiving basic musical features such as melody or rhythm. B.Z.'s case yields new insights into amusia and the processing of speech and music prosody through a holistic approach. The employment of novel stimuli with relatively fewer non-naturalistic manipulations, as developed for this study, may be a useful tool for revealing unexplored aspects of music and speech cognition and offer the possibility to further the investigation of the perception of acoustic streams in more authentic auditory conditions.
Fonseca, Rochele Paz; Fachel, Jandyra Maria Guimarães; Chaves, Márcia Lorena Fagundes; Liedtke, Francéia Veiga; Parente, Maria Alice de Mattos Pimenta
2007-01-01
Right-brain-damaged individuals may present discursive, pragmatic, lexical-semantic and/or prosodic disorders. To verify the effect of right hemisphere damage on communication processing evaluated by the Brazilian version of the Protocole Montréal d'Évaluation de la Communication (Montreal Communication Evaluation Battery) - Bateria Montreal de Avaliação da Comunicação, Bateria MAC, in Portuguese. A clinical group of 29 right-brain-damaged participants and a control group of 58 non-brain-damaged adults formed the sample. A questionnaire on sociocultural and health aspects was administered together with the Brazilian MAC Battery. Significant differences between the clinical and control groups were observed in the following MAC Battery tasks: conversational discourse; unconstrained, semantic and orthographic verbal fluency; linguistic prosody repetition; and emotional prosody comprehension, repetition and production. Moreover, the clinical group was less homogeneous than the control group. A right-hemisphere-damage effect was identified directly on three communication processes (discursive, lexical-semantic and prosodic) and indirectly on pragmatic processing.
ERIC Educational Resources Information Center
Doi, Hirokazu; Fujisawa, Takashi X.; Kanai, Chieko; Ohta, Haruhisa; Yokoi, Hideki; Iwanami, Akira; Kato, Nobumasa; Shinohara, Kazuyuki
2013-01-01
This study investigated the ability of adults with Asperger syndrome to recognize emotional categories of facial expressions and emotional prosodies with graded emotional intensities. The individuals with Asperger syndrome showed poorer recognition performance for angry and sad expressions from both facial and vocal information. The group…
Talk this way: the effect of prosodically conveyed semantic information on memory for novel words.
Shintel, Hadas; Anderson, Nathan L; Fenn, Kimberly M
2014-08-01
Speakers modulate their prosody to express not only emotional information but also semantic information (e.g., raising pitch for upward motion). Moreover, this information can help listeners infer meaning. Work investigating the communicative role of prosodically conveyed meaning has focused on reference resolution, and potential mnemonic benefits remain unexplored. We investigated the effect of prosody on memory for the meaning of novel words, even when it conveys superfluous information. Participants heard novel words, produced with congruent or incongruent prosody, and viewed image pairs representing the intended meaning and its antonym (e.g., a small and a large dog). Importantly, an arrow indicated the image representing the intended meaning, resolving the ambiguity. Participants then completed 2 memory tests, either immediately after learning or after a 24-hr delay, on which they chose an image (out of a new image pair) and a definition that best represented the word. On the image test, memory was similar on the immediate test, but incongruent prosody led to greater loss over time. On the definition test, memory was better for congruent prosody at both times. Results suggest that listeners extract semantic information from prosody even when it is redundant and that prosody can enhance memory, beyond its role in comprehension.
Age-related differences in recall for words using semantics and prosody.
Sober, Jonathan D; VanWormer, Lisa A; Arruda, James E
2016-01-01
The positivity effect is a developmental shift seen in older adults to be increasingly influenced by positive information in areas such as memory, attention, and decision-making. This study is the first to examine the age-related differences of the positivity effect for emotional prosody. Participants heard a factorial combination of words that were semantically positive or negative said with either positive or negative intonation. Results showed a semantic positivity effect for older adults, and a prosody positivity effect for younger adults. Additionally, older adults showed a significant decrease in recall for semantically negative words said in an incongruent prosodically positive tone.
ERIC Educational Resources Information Center
Schmidt, Adam T.; Hanten, Gerri R.; Li, Xiaoqi; Orsten, Kimberley D.; Levin, Harvey S.
2010-01-01
Children with closed head injuries often experience significant and persistent disruptions in their social and behavioral functioning. Studies with adults sustaining a traumatic brain injury (TBI) indicate deficits in emotion recognition and suggest that these difficulties may underlie some of the social deficits. The goal of the current study was…
Emotional voice and emotional body postures influence each other independently of visual awareness.
Stienen, Bernard M C; Tanaka, Akihiro; de Gelder, Beatrice
2011-01-01
Multisensory integration may occur independently of visual attention as previously shown with compound face-voice stimuli. We investigated in two experiments whether the perception of whole body expressions and the perception of voices influence each other when observers are not aware of seeing the bodily expression. In the first experiment participants categorized masked happy and angry bodily expressions while ignoring congruent or incongruent emotional voices. The onset between target and mask varied from -50 to +133 ms. Results show that the congruency between the emotion in the voice and the bodily expressions influences audiovisual perception independently of the visibility of the stimuli. In the second experiment participants categorized the emotional voices combined with masked bodily expressions as fearful or happy. This experiment showed that bodily expressions presented outside visual awareness still influence prosody perception. Our experiments show that audiovisual integration between bodily expressions and affective prosody can take place outside and independent of visual awareness.
The effects of sad prosody on hemispheric specialization for words processing.
Leshem, Rotem; Arzouan, Yossi; Armony-Sivan, Rinat
2015-06-01
This study examined the effect of sad prosody on hemispheric specialization for word processing using behavioral and electrophysiological measures. A dichotic listening task combining focused attention and signal-detection methods was conducted to evaluate the detection of a word spoken in neutral or sad prosody. An overall right ear advantage together with leftward lateralization in early (150-170 ms) and late (240-260 ms) processing stages was found for word detection, regardless of prosody. Furthermore, the early stage was most pronounced for words spoken in neutral prosody, showing greater negative activation over the left than the right hemisphere. In contrast, the later stage was most pronounced for words spoken with sad prosody, showing greater positive activation over the left than the right hemisphere. The findings suggest that sad prosody alone was not sufficient to modulate hemispheric asymmetry in word-level processing. We posit that lateralized effects of sad prosody on word processing are largely dependent on the psychoacoustic features of the stimuli as well as on task demands.
Effect of Dopamine Therapy on Nonverbal Affect Burst Recognition in Parkinson's Disease
Péron, Julie; Grandjean, Didier; Drapier, Sophie; Vérin, Marc
2014-01-01
Background Parkinson's disease (PD) provides a model for investigating the involvement of the basal ganglia and mesolimbic dopaminergic system in the recognition of emotions from voices (i.e., emotional prosody). Although previous studies of emotional prosody recognition in PD have reported evidence of impairment, none of them compared PD patients at different stages of the disease, or ON and OFF dopamine replacement therapy, making it difficult to determine whether their impairment was due to general cognitive deterioration or to a more specific dopaminergic deficit. Methods We explored the involvement of the dopaminergic pathways in the recognition of nonverbal affect bursts (onomatopoeias) in 15 newly diagnosed PD patients in the early stages of the disease, 15 PD patients in the advanced stages of the disease and 15 healthy controls. The early PD group was studied in two conditions: ON and OFF dopaminergic therapy. Results Results showed that the early PD patients performed more poorly in the ON condition than in the OFF one, for overall emotion recognition, as well as for the recognition of anger, disgust and fear. Additionally, for anger, the early PD ON patients performed more poorly than controls. For overall emotion recognition, both advanced PD patients and early PD ON patients performed more poorly than controls. Analysis of continuous ratings on target and nontarget visual analog scales confirmed these patterns of results, showing a systematic emotional bias in both the advanced PD and early PD ON (but not OFF) patients compared with controls. Conclusions These results i) confirm the involvement of the dopaminergic pathways and basal ganglia in emotional prosody recognition, and ii) suggest a possibly deleterious effect of dopatherapy on affective abilities in the early stages of PD. PMID:24651759
Lexical and Affective Prosody in Children with High-Functioning Autism
ERIC Educational Resources Information Center
Grossman, Ruth B.; Bemis, Rhyannon H.; Skwerer, Daniela Plesa; Tager-Flusberg, Helen
2010-01-01
Purpose: To investigate the perception and production of lexical stress and processing of affective prosody in adolescents with high-functioning autism (HFA). We hypothesized preserved processing of lexical and affective prosody but atypical lexical prosody production. Method: Sixteen children with HFA and 15 typically developing (TD) peers…
ERIC Educational Resources Information Center
Gil, Sandrine; Aguert, Marc; Bigot, Ludovic Le; Lacroix, Agnès; Laval, Virginie
2014-01-01
The ability to infer the emotional states of others is central to our everyday interactions. These inferences can be drawn from several different sources of information occurring simultaneously in the communication situation. Based on previous studies revealing that children pay more heed to situational context than to emotional prosody when…
Right-ear precedence and vocal emotion contagion: The role of the left hemisphere.
Schepman, Astrid; Rodway, Paul; Cornmell, Louise; Smith, Bethany; de Sa, Sabrina Lauren; Borwick, Ciara; Belfon-Thompson, Elisha
2018-05-01
Much evidence suggests that the processing of emotions is lateralized to the right hemisphere of the brain. However, under some circumstances the left hemisphere might play a role, particularly for positive emotions and emotional experiences. We explored whether emotion contagion was right-lateralized, lateralized in a valence-specific way, or potentially left-lateralized. In two experiments, right-handed female listeners rated the extent to which emotionally intoned pseudo-sentences evoked target emotions in them. These sound stimuli had a 7 ms ear lead in the left or right channel, leading to stronger stimulation of the contralateral hemisphere. In both experiments, the results revealed that right-ear-lead stimuli received subtly but significantly higher evocation scores, suggesting a left-hemisphere dominance for emotion contagion. A control experiment using an emotion identification task showed no effect of ear lead. The findings are discussed in relation to prior findings that have linked the processing of emotional prosody to left-hemisphere brain regions that regulate emotions, control orofacial musculature, are involved in affective empathy processing, or have an affinity for processing emotions socially. Future work is needed to eliminate alternative interpretations and understand the mechanisms involved. Our novel binaural asynchrony method may be useful in future work on auditory laterality.
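The binaural asynchrony manipulation described above (a 7 ms lead in one stereo channel) can be sketched in a few lines. This is a hypothetical reconstruction, not the authors' stimulus code; the function name, sampling rate, and the example tone are illustrative assumptions.

```python
import numpy as np

def apply_ear_lead(mono, sr, lead_ms=7.0, lead_ear="right"):
    """Build a stereo pair in which one channel leads the other.

    The non-leading channel is delayed by `lead_ms` milliseconds, so the
    leading ear's signal arrives first (stimulating the contralateral
    hemisphere more strongly). Returns an (n_samples, 2) array
    with columns (left, right).
    """
    delay = int(round(sr * lead_ms / 1000.0))        # delay in samples
    delayed = np.concatenate([np.zeros(delay), mono])  # lags by `delay`
    padded = np.concatenate([mono, np.zeros(delay)])   # leads, same length
    if lead_ear == "right":
        left, right = delayed, padded   # right channel leads
    else:
        left, right = padded, delayed   # left channel leads
    return np.stack([left, right], axis=1)

# Example: a 500 ms, 220 Hz tone with a 7 ms right-ear lead at 44.1 kHz
sr = 44100
t = np.arange(int(0.5 * sr)) / sr
tone = 0.5 * np.sin(2 * np.pi * 220 * t)
stereo = apply_ear_lead(tone, sr, lead_ms=7.0, lead_ear="right")
```

The resulting array could be written to a stereo WAV file (e.g. with `scipy.io.wavfile.write`) for playback over headphones.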
Analysis of prosody in finger braille using electromyography.
Miyagi, Manabi; Nishida, Masafumi; Horiuchi, Yasuo; Ichikawa, Akira
2006-01-01
Finger braille is a communication method for deaf-blind people: an interpreter types braille codes on the fingers of the deaf-blind person. Its speed and accuracy in transmitting characters make finger braille well suited to real-time communication. We hypothesize that prosodic information exists in the time structure and strength of finger braille typing. Prosody is paralinguistic information that conveys sentence structure, prominence, emotions, and other information in real-time communication. In this study, we measured the surface electromyography (sEMG) of finger movement to analyze the typing strength of finger braille. We found that typing strength increases at the beginning of a phrase and on prominent phrases. This result suggests that the prosody carried in the typing strength of finger braille could be applied to create an interpreter system for the deaf-blind.
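A typing-strength measure like the one above can be approximated with a standard sEMG amplitude envelope (full-wave rectification plus a moving average). This is a generic sketch of one common analysis, not the authors' actual pipeline; the window length and the synthetic signal are assumptions for illustration.

```python
import numpy as np

def semg_envelope(semg, sr, win_ms=100.0):
    """Estimate an amplitude envelope from a raw sEMG trace.

    Full-wave rectification followed by a moving average is one common
    way to obtain an envelope; its peaks can then serve as a
    per-keystroke typing-strength measure.
    """
    rectified = np.abs(semg - np.mean(semg))       # remove DC offset, rectify
    win = max(1, int(sr * win_ms / 1000.0))        # window length in samples
    kernel = np.ones(win) / win
    return np.convolve(rectified, kernel, mode="same")

# Synthetic illustration: 2 s of baseline noise at 1 kHz with one
# stronger burst standing in for a hard key press
sr = 1000
rng = np.random.default_rng(0)
trace = rng.normal(0, 0.05, 2000)
trace[800:1000] += rng.normal(0, 1.0, 200)  # burst = stronger muscle activity
env = semg_envelope(trace, sr)
```

The envelope `env` is high inside the burst and near zero elsewhere, so thresholding or peak-picking on it would localize and grade individual presses.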
Fonseca, Rochele Paz; Fachel, Jandyra Maria Guimarães; Chaves, Márcia Lorena Fagundes; Liedtke, Francéia Veiga; Parente, Maria Alice de Mattos Pimenta
2007-01-01
Right-brain-damaged individuals may present discursive, pragmatic, lexical-semantic and/or prosodic disorders. Objective To verify the effect of right hemisphere damage on communication processing evaluated by the Brazilian version of the Protocole Montréal d’Évaluation de la Communication (Montreal Communication Evaluation Battery) – Bateria Montreal de Avaliação da Comunicação, Bateria MAC, in Portuguese. Methods A clinical group of 29 right-brain-damaged participants and a control group of 58 non-brain-damaged adults formed the sample. A questionnaire on sociocultural and health aspects, together with the Brazilian MAC Battery was administered. Results Significant differences between the clinical and control groups were observed in the following MAC Battery tasks: conversational discourse, unconstrained, semantic and orthographic verbal fluency, linguistic prosody repetition, emotional prosody comprehension, repetition and production. Moreover, the clinical group was less homogeneous than the control group. Conclusions A right-brain-damage effect was identified directly, on three communication processes: discursive, lexical-semantic and prosodic processes, and indirectly, on pragmatic process. PMID:29213400
Toward Emotionally Accessible Massive Open Online Courses (MOOCs).
Hillaire, Garron; Iniesto, Francisco; Rienties, Bart
2017-01-01
This paper outlines an approach to evaluating the emotional content of three Massive Open Online Courses (MOOCs) using the affective computing approach of prosody detection on two different text-to-speech voices, in conjunction with human raters judging the emotional content of course text. The intent of this work is to establish the potential variation in the emotional delivery of MOOC material through synthetic voice.
Martínez-Castilla, Pastora; Peppé, Susan
2008-01-01
This study aimed to find out what intonation features reliably represent the emotions of "liking" as opposed to "disliking" in the Spanish language, with a view to designing a prosody assessment procedure for use with children with speech and language disorders. 18 intonationally different prosodic realisations (tokens) of one word (limón) were recorded by one native Spanish speaker. The tokens were deemed representative of two categories of emotion: liking or disliking of the taste "lemon". 30 native Spanish speakers assigned them to the two categories and rated their expressiveness on a six-point scale. For all tokens except two, agreement between judges as to category was highly significant, some tokens attracting 100% agreement. The intonation contours most related to expressiveness levels were: for "liking", an inverted U form contour with exaggerated pitch peak within the tonic syllable; and for "disliking", a flat melodic contour with a slight fall.
Prosody and alignment: a sequential perspective
NASA Astrophysics Data System (ADS)
Szczepek Reed, Beatrice
2010-12-01
In their analysis of a corpus of classroom interactions in an inner-city high school, Roth and Tobin describe how teachers and students accomplish interactional alignment by prosodically matching each other's turns. Prosodic matching and specific prosodic patterns are interpreted as signs of, and contributions to, successful interactional outcomes and positive emotions. A lack of prosodic matching and other specific prosodic patterns are interpreted as features of unsuccessful interactions and negative emotions. This forum focuses on the article's analysis of the relation between interpersonal alignment, emotion, and prosody. It argues that prosodic matching, and other prosodic linking practices, play a primarily sequential role, i.e. one that displays the way in which participants place and design their turns in relation to other participants' turns. Prosodic matching, rather than being a conversational action in itself, is argued to be an interactional practice (Schegloff 1997), which is not always employed for the accomplishment of 'positive', or aligning, actions.
Jürgens, Rebecca; Fischer, Julia; Schacht, Annekathrin
2018-01-01
Emotional expressions provide strong signals in social interactions and can function as emotion inducers in a perceiver. Although speech provides one of the most important channels for human communication, its physiological correlates, such as activations of the autonomic nervous system (ANS) while listening to spoken utterances, have received far less attention than in other domains of emotion processing. Our study aimed at filling this gap by investigating autonomic activation in response to spoken utterances that were embedded into larger semantic contexts. Emotional salience was manipulated by providing information on alleged speaker similarity. We compared these autonomic responses to activations triggered by affective sounds, such as exploding bombs and applause. These sounds had been rated and validated as being either positive, negative, or neutral. As physiological markers of ANS activity, we recorded skin conductance responses (SCRs) and changes of pupil size while participants classified both prosodic and sound stimuli according to their hedonic valence. As expected, affective sounds elicited increased arousal in the receiver, as reflected in increased SCR and pupil size. In contrast, SCRs to angry and joyful prosodic expressions did not differ from responses to neutral ones. Pupil size, however, was modulated by affective prosodic utterances, with increased dilations for angry and joyful compared to neutral prosody, although the similarity manipulation had no effect. These results indicate that cues provided by emotional prosody in spoken semantically neutral utterances might be too subtle to trigger SCRs, although variation in pupil size indicated the salience of stimulus variation. Our findings further demonstrate a functional dissociation between pupil dilation and skin conductance that presumably originates from their differential innervation. PMID:29541045
The perception of emotion in body expressions.
de Gelder, B; de Borst, A W; Watson, R
2015-01-01
During communication, we perceive and express emotional information through many different channels, including facial expressions, prosody, body motion, and posture. Although historically the human body has been perceived primarily as a tool for actions, there is now increased understanding that the body is also an important medium for emotional expression. Indeed, research on emotional body language is rapidly emerging as a new field in cognitive and affective neuroscience. This article reviews how whole-body signals are processed and understood, at the behavioral and neural levels, with specific reference to their role in emotional communication. The first part of this review outlines brain regions and spectrotemporal dynamics underlying the perception of isolated neutral and affective bodies, the second part details contextual effects on body emotion recognition, and the final part discusses body processing at a subconscious level. More specifically, research has shown that body expressions, as compared with neutral bodies, draw upon a larger network of regions responsible for action observation and preparation, emotion processing, body processing, and integrative processes. Results from neurotypical populations and masking paradigms suggest that subconscious processing of affective bodies relies on a specific subset of these regions. Moreover, recent evidence has shown that emotional information from the face, voice, and body all interact, with body motion and posture often highlighting and intensifying the emotion expressed in the face and voice.
The voices of seduction: cross-gender effects in processing of erotic prosody
Ethofer, Thomas; Wiethoff, Sarah; Anders, Silke; Kreifelts, Benjamin; Grodd, Wolfgang
2007-01-01
Gender-specific differences in cognitive functions have been widely discussed. In social cognition, such as the perception of emotion conveyed by non-verbal cues, a female advantage is generally assumed. In the present study, however, we revealed a cross-gender interaction, with increased responses to the voice of the opposite sex in both male and female subjects. This effect was confined to an erotic tone of speech in behavioural data and haemodynamic responses within voice-sensitive brain areas (right middle superior temporal gyrus). The observed response pattern thus indicates a particular sensitivity to emotional voices that have high behavioural relevance for the listener. PMID:18985138
Music and speech prosody: a common rhythm.
Hausen, Maija; Torppa, Ritva; Salmela, Viljami R; Vainio, Martti; Särkämö, Teppo
2013-01-01
Disorders of music and speech perception, known as amusia and aphasia, have traditionally been regarded as dissociated deficits based on studies of brain damaged patients. This has been taken as evidence that music and speech are perceived by largely separate and independent networks in the brain. However, recent studies of congenital amusia have broadened this view by showing that the deficit is associated with problems in perceiving speech prosody, especially intonation and emotional prosody. In the present study the association between the perception of music and speech prosody was investigated with healthy Finnish adults (n = 61) using an on-line music perception test including the Scale subtest of the Montreal Battery of Evaluation of Amusia (MBEA) and Off-Beat and Out-of-key tasks as well as a prosodic verbal task that measures the perception of word stress. Regression analyses showed that there was a clear association between prosody perception and music perception, especially in the domain of rhythm perception. This association was evident after controlling for music education, age, pitch perception, visuospatial perception, and working memory. Pitch perception was significantly associated with music perception but not with prosody perception. The association between music perception and visuospatial perception (measured using analogous tasks) was less clear. Overall, the pattern of results indicates that there is a robust link between music and speech perception and that this link can be mediated by rhythmic cues (time and stress). PMID:24032022
Recognition of schematic facial displays of emotion in parents of children with autism.
Palermo, Mark T; Pasqualetti, Patrizio; Barbati, Giulia; Intelligente, Fabio; Rossini, Paolo Maria
2006-07-01
Performance on an emotional labeling task in response to schematic facial patterns representing five basic emotions without the concurrent presentation of a verbal category was investigated in 40 parents of children with autism and 40 matched controls. 'Autism fathers' performed worse than 'autism mothers', who performed worse than controls in decoding displays representing sadness or disgust. This indicates the need to include facial expression decoding tasks in genetic research of autism. In addition, emotional expression interactions between parents and their children with autism, particularly through play, where affect and prosody are 'physiologically' exaggerated, may stimulate development of social competence. Future studies could benefit from a combination of stimuli including photographs and schematic drawings, with and without associated verbal categories. This may allow the subdivision of patients and relatives on the basis of the amount of information needed to understand and process social-emotionally relevant information.
On whether mirror neurons play a significant role in processing affective prosody.
Ramachandra, Vijayachandra
2009-02-01
Several behavioral and neuroimaging studies have indicated that both right and left cortical structures, and a few subcortical ones, are involved in processing affective prosody. Recent investigations have shown that the mirror neuron system plays a crucial role in several higher-level functions such as empathy, theory of mind, and language, but no studies so far link the mirror neuron system with affective prosody. This paper speculates that the mirror neuron system, which serves as a common neural substrate for different higher-level functions, may play a significant role in processing affective prosody via its connections with the limbic lobe. Future research should apply electrophysiological and neuroimaging techniques to assess whether the mirror neuron system underlies affective prosody in humans.
A Joint Prosodic Origin of Language and Music
Brown, Steven
2017-01-01
Vocal theories of the origin of language rarely make a case for the precursor functions that underlay the evolution of speech. The vocal expression of emotion is unquestionably the best candidate for such a precursor, although most evolutionary models of both language and speech ignore emotion and prosody altogether. I present here a model for a joint prosodic precursor of language and music in which ritualized group-level vocalizations served as the ancestral state. This precursor combined not only affective and intonational aspects of prosody, but also holistic and combinatorial mechanisms of phrase generation. From this common stage, there was a bifurcation to form language and music as separate, though homologous, specializations. This separation of language and music was accompanied by their (re)unification in songs with words. PMID:29163276
ERIC Educational Resources Information Center
Rozga, Agata; King, Tricia Z.; Vuduc, Richard W.; Robins, Diana L.
2013-01-01
We examined facial electromyography (fEMG) activity to dynamic, audio-visual emotional displays in individuals with autism spectrum disorders (ASD) and typically developing (TD) individuals. Participants viewed clips of happy, angry, and fearful displays that contained both facial expression and affective prosody while surface electrodes measured…
Razafimandimby, Annick; Hervé, Pierre-Yves; Marzloff, Vincent; Brazo, Perrine; Tzourio-Mazoyer, Nathalie; Dollfus, Sonia
2016-12-01
Functional brain imaging research has already demonstrated that patients with schizophrenia have difficulties with emotion processing, namely in facial emotion perception and emotional prosody. However, the moderating effect of social context and the boundary of perceptual categories of emotion attribution remain unclear. This study investigated the neural bases of emotional sentence attribution in schizophrenia. Twenty-one schizophrenia patients and 25 healthy subjects underwent an event-related functional magnetic resonance imaging paradigm including two tasks: one to classify sentences according to their emotional content, and the other to classify neutral sentences according to their grammatical person. First, patients showed longer response times as compared to controls only during the emotion attribution task. Second, patients with schizophrenia showed reduced activation in bilateral auditory areas irrespective of the presence of emotions. Lastly, during emotional sentence attribution, patients displayed less activation than controls in the medial prefrontal cortex (mPFC). We suggest that the functional abnormality observed in the mPFC during the emotion attribution task could provide a biological basis for social cognition deficits in patients with schizophrenia.
Involvement of Right STS in Audio-Visual Integration for Affective Speech Demonstrated Using MEG
Hagan, Cindy C.; Woods, Will; Johnson, Sam; Green, Gary G. R.; Young, Andrew W.
2013-01-01
Speech and emotion perception are dynamic processes in which it may be optimal to integrate synchronous signals emitted from different sources. Studies of audio-visual (AV) perception of neutrally expressed speech demonstrate supra-additive (i.e., where AV>[unimodal auditory+unimodal visual]) responses in left STS to crossmodal speech stimuli. However, emotions are often conveyed simultaneously with speech; through the voice in the form of speech prosody and through the face in the form of facial expression. Previous studies of AV nonverbal emotion integration showed a role for right (rather than left) STS. The current study therefore examined whether the integration of facial and prosodic signals of emotional speech is associated with supra-additive responses in left (cf. results for speech integration) or right (due to emotional content) STS. As emotional displays are sometimes difficult to interpret, we also examined whether supra-additive responses were affected by emotional incongruence (i.e., ambiguity). Using magnetoencephalography, we continuously recorded eighteen participants as they viewed and heard AV congruent emotional and AV incongruent emotional speech stimuli. Significant supra-additive responses were observed in right STS within the first 250 ms for emotionally incongruent and emotionally congruent AV speech stimuli, which further underscores the role of right STS in processing crossmodal emotive signals. PMID:23950977
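The supra-additivity criterion quoted in the abstract (AV > [unimodal auditory + unimodal visual]) can be written as a one-line check. The response values below are hypothetical, in arbitrary units, purely to illustrate the comparison.

```python
def is_supra_additive(av_response, a_response, v_response):
    """Supra-additivity criterion from multisensory integration studies:
    the audio-visual (AV) response exceeds the sum of the unimodal
    auditory (A) and visual (V) responses, i.e. AV > A + V."""
    return av_response > (a_response + v_response)

# Illustrative (hypothetical) response amplitudes in arbitrary units
print(is_supra_additive(2.5, 1.0, 1.0))  # AV exceeds A + V
print(is_supra_additive(1.8, 1.0, 1.0))  # merely additive or sub-additive
```

In practice this comparison would be run per sensor and time window on measured amplitudes, with appropriate statistical testing rather than a raw inequality.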
Isolating N400 as neural marker of vocal anger processing in 6-11-year old children.
Chronaki, Georgia; Broyd, Samantha; Garner, Matthew; Hadwin, Julie A; Thompson, Margaret J J; Sonuga-Barke, Edmund J S
2012-04-01
Vocal anger is a salient social signal serving adaptive functions in typical child development. Despite recent advances in the developmental neuroscience of emotion processing with regard to visual stimuli, little remains known about the neural correlates of vocal anger processing in childhood. This study represents the first attempt to isolate a neural marker of vocal anger processing in children using electrophysiological methods. We compared ERP waveforms during the processing of non-word emotional vocal stimuli in a population sample of 55 typically developing 6-11-year-old children. Children listened to three types of stimuli expressing angry, happy, and neutral prosody and completed an emotion identification task with three response options (angry, happy, and neutral/'ok'). A distinctive N400 component, modulated by the emotional content of the vocal stimulus, was observed in children over parietal and occipital scalp regions: amplitudes were significantly attenuated to angry compared to happy and neutral voices. Findings of the present study regarding the N400 are compatible with adult studies showing reduced N400 amplitudes to negative compared to neutral emotional stimuli. Implications for studies of the neural basis of vocal anger processing in children are discussed.
Communicating Emotion: Linking Affective Prosody and Word Meaning
ERIC Educational Resources Information Center
Nygaard, Lynne C.; Queen, Jennifer S.
2008-01-01
The present study investigated the role of emotional tone of voice in the perception of spoken words. Listeners were presented with words that had either a happy, sad, or neutral meaning. Each word was spoken in a tone of voice (happy, sad, or neutral) that was congruent, incongruent, or neutral with respect to affective meaning, and naming…
ERIC Educational Resources Information Center
Techentin, Cheryl; Voyer, Daniel; Klein, Raymond M.
2009-01-01
The present study investigated the influence of within- and between-ear congruency on interference and laterality effects in an auditory semantic/prosodic conflict task. Participants were presented dichotically with words (e.g., mad, sad, glad) pronounced in either congruent or incongruent emotional tones (e.g., angry, happy, or sad) and…
Connecting multimodality in human communication
Regenbogen, Christina; Habel, Ute; Kellermann, Thilo
2013-01-01
A successful reciprocal evaluation of social signals is a prerequisite for social coherence and empathy. In a previous fMRI study, we examined naturalistic communication situations by presenting video clips to our participants and recording their behavioral responses regarding empathy and its components. In two conditions, all three channels conveyed congruent emotional or neutral information, respectively. Three further conditions each presented two emotional channels and one neutral channel and were thus bimodally emotional. We reported channel-specific emotional contributions in modality-related areas, elicited by dynamic video clips with varying combinations of emotionality in facial expressions, prosody, and speech content. However, to better understand the mechanisms underlying a naturalistically displayed human social interaction in key regions that presumably serve as specific processing hubs for facial expressions, prosody, and speech content, we reanalyzed the data. Here, we focused on two different descriptions of temporal characteristics within three modality-related regions [right fusiform gyrus (FFG), left auditory cortex (AC), and left angular gyrus (AG)] and the left dorsomedial prefrontal cortex (dmPFC). First, by means of a finite impulse response (FIR) analysis within each of the three regions, we examined the post-stimulus time-courses as a description of the temporal characteristics of the BOLD response during the video clips. Second, effective connectivity between these areas and the left dmPFC was analyzed using dynamic causal modeling (DCM) to describe condition-related modulatory influences on the coupling between these regions. The FIR analysis showed initially diminished activation in bimodally emotional conditions but stronger activation than in neutral videos toward the end of the stimuli, possibly reflecting bottom-up processes that compensate for a lack of emotional information. The DCM analysis, by contrast, showed pronounced top-down control. Remarkably, all connections from the dmPFC to the three other regions were modulated by the experimental conditions, in line with the presumed role of the dmPFC in the allocation of attention. Conversely, all incoming connections to the AG were modulated, indicating its key role in integrating multimodal information and supporting comprehension. Notably, the input from the FFG to the AG was enhanced when facial expressions conveyed emotional information. These findings serve as preliminary results toward understanding network dynamics in human emotional communication and empathy. PMID:24265613
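The finite impulse response (FIR) analysis mentioned above estimates the post-stimulus BOLD time-course without assuming a canonical response shape: the design matrix contains one indicator regressor per post-stimulus time bin. A minimal sketch of the idea, with invented scan counts, event onsets, and response shape (not the study's data):

```python
import numpy as np

def fir_design_matrix(onsets, n_scans, n_bins):
    """One column per post-stimulus time bin: column k is 1 at scan onset+k."""
    X = np.zeros((n_scans, n_bins))
    for t in onsets:
        for k in range(n_bins):
            if t + k < n_scans:
                X[t + k, k] = 1.0
    return X

# Hypothetical run: 100 scans, events at scans 10, 40, 70, 8 post-stimulus bins
onsets = [10, 40, 70]
X = fir_design_matrix(onsets, n_scans=100, n_bins=8)

# Simulate a BOLD signal as a fixed response shape plus a little noise
true_response = np.array([0.0, 0.5, 1.0, 0.8, 0.4, 0.1, 0.0, 0.0])
y = X @ true_response + 0.01 * np.random.default_rng(0).standard_normal(100)

# Ordinary least squares recovers the post-stimulus time-course bin by bin
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Because each event contributes one indicator per bin, the fitted betas trace out the average response over the post-stimulus window, which is what allows the time-courses of bimodally emotional and neutral conditions to be compared directly.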
Audiovisual emotional processing and neurocognitive functioning in patients with depression
Doose-Grünefeld, Sophie; Eickhoff, Simon B.; Müller, Veronika I.
2015-01-01
Alterations in the processing of emotional stimuli (e.g., facial expressions, prosody, music) have repeatedly been reported in patients with major depression. Such impairments may result from the likewise prevalent executive deficits in these patients. However, studies investigating this relationship are rare. Moreover, most studies to date have assessed impairments only in unimodal emotional processing, whereas in real life, emotions are primarily conveyed through more than one sensory channel. The current study therefore aimed to investigate multimodal emotional processing in patients with depression and to assess the relationship between emotional and neurocognitive impairments. Forty-one patients suffering from major depression and 41 never-depressed healthy controls participated in an audiovisual (faces-sounds) emotional integration paradigm as well as a neurocognitive test battery. Our results showed that depressed patients were specifically impaired in the processing of positive auditory stimuli: they rated faces as significantly more fearful when presented with happy than with neutral sounds, an effect absent in controls. Patients' emotional processing results did not correlate with Beck Depression Inventory scores. Furthermore, neurocognitive findings revealed significant group differences for two of the tests. The effects found in audiovisual emotional processing, however, did not correlate with performance in the neurocognitive tests. In summary, our results underline the diversity of impairments that accompany depression and indicate that deficits found for unimodal emotional processing cannot trivially be generalized to deficits in a multimodal setting. The mechanisms of impairment might therefore be far more complex than previously thought. Our findings also contradict the assumption that emotional processing deficits in major depression are associated with impaired attention or inhibitory functioning. PMID:25688188
Etchepare, Aurore; Prouteau, Antoinette
2018-04-01
Social cognition has received growing interest in many conditions in recent years. However, this construct still suffers from a considerable lack of consensus, especially regarding the dimensions to be studied and the resulting methodology of clinical assessment. Our review aims to clarify the distinctiveness of the dimensions of social cognition. Based on Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statements, a systematic review was conducted to explore the factor structure of social cognition in the adult general and clinical populations. The initial search provided 441 articles published between January 1982 and March 2017. Eleven studies were included, all conducted in psychiatric populations and/or healthy participants. Most studies were in favor of a two-factor solution. Four studies drew a distinction between low-level (e.g., facial emotion/prosody recognition) and high-level (e.g., theory of mind) information processing. Four others reported a distinction between affective (e.g., facial emotion/prosody recognition) and cognitive (e.g., false beliefs) information processing. Interestingly, attributional style was frequently reported as an additional separate factor of social cognition. Results of factor analyses add further support for the relevance of models differentiating level of information processing (low- vs. high-level) from nature of processed information (affective vs. cognitive). These results add to a significant body of empirical evidence from developmental, clinical research and neuroimaging studies. We argue the relevance of integrating low- versus high-level processing with affective and cognitive processing in a two-dimensional model of social cognition that would be useful for future research and clinical practice. (JINS, 2018, 24, 391-404).
ERIC Educational Resources Information Center
Naito-Billen, Yuka
2012-01-01
Recently, the significant role that pronunciation and prosody play in processing spoken language has been widely recognized, and a variety of methodologies for teaching pronunciation/prosody have been implemented in foreign language instruction. Thus, an analysis of how similarly or differently native and L2 learners of a language use…
Feeling backwards? How temporal order in speech affects the time course of vocal emotion recognition
Rigoulot, Simon; Wassiliwizky, Eugen; Pell, Marc D.
2013-01-01
Recent studies suggest that the time course for recognizing vocal expressions of basic emotion in speech varies significantly by emotion type, implying that listeners uncover acoustic evidence about emotions at different rates in speech (e.g., fear is recognized most quickly whereas happiness and disgust are recognized relatively slowly; Pell and Kotz, 2011). To investigate whether vocal emotion recognition is largely dictated by the amount of time listeners are exposed to speech or the position of critical emotional cues in the utterance, 40 English participants judged the meaning of emotionally-inflected pseudo-utterances presented in a gating paradigm, where utterances were gated as a function of their syllable structure in segments of increasing duration from the end of the utterance (i.e., gated syllable-by-syllable from the offset rather than the onset of the stimulus). Accuracy for detecting six target emotions in each gate condition and the mean identification point for each emotion in milliseconds were analyzed and compared to results from Pell and Kotz (2011). We again found significant emotion-specific differences in the time needed to accurately recognize emotions from speech prosody, and new evidence that utterance-final syllables tended to facilitate listeners' accuracy in many conditions when compared to utterance-initial syllables. The time needed to recognize fear, anger, sadness, and neutral from speech cues was not influenced by how utterances were gated, although happiness and disgust were recognized significantly faster when listeners heard the end of utterances first. Our data provide new clues about the relative time course for recognizing vocally-expressed emotions within the 400–1200 ms time window, while highlighting that emotion recognition from prosody can be shaped by the temporal properties of speech. PMID:23805115
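The backward gating manipulation described above can be made concrete with a small slicing sketch: gates grow syllable-by-syllable from the offset of the utterance, so the first gate is the final syllable alone and the last gate is the whole utterance. Sample counts and syllable boundaries below are invented for illustration:

```python
import numpy as np

def offset_gates(signal, syllable_onsets):
    """Return gates of increasing duration, each ending at the utterance offset.

    syllable_onsets: sample indices where each syllable begins; gate g
    contains the final g syllables of the signal.
    """
    gates = []
    for start in reversed(syllable_onsets):
        gates.append(signal[start:])
    return gates

# Hypothetical 3-syllable pseudo-utterance of 300 samples
signal = np.arange(300)
onsets = [0, 100, 200]              # syllable start indices
gates = offset_gates(signal, onsets)

# Gate 1 = last syllable, gate 2 = last two syllables, gate 3 = full utterance
lengths = [len(g) for g in gates]
```

Gating from the offset rather than the onset is what lets the study separate sheer exposure duration from the position of emotion-critical acoustic cues in the utterance.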
Lexical and affective prosody in children with high-functioning autism.
Grossman, Ruth B; Bemis, Rhyannon H; Plesa Skwerer, Daniela; Tager-Flusberg, Helen
2010-06-01
To investigate the perception and production of lexical stress and processing of affective prosody in adolescents with high-functioning autism (HFA). We hypothesized preserved processing of lexical and affective prosody but atypical lexical prosody production. Sixteen children with HFA and 15 typically developing (TD) peers participated in 3 experiments that examined the following: (a) perception of affective prosody (Experiment 1), (b) lexical stress perception (Experiment 2), and (c) lexical stress production (Experiment 3). In Experiment 1, participants labeled sad, happy, and neutral spoken sentences that were low-pass filtered, to eliminate verbal content. In Experiment 2, participants disambiguated word meanings based on lexical stress (HOTdog vs. hot DOG). In Experiment 3, participants produced these words in a sentence completion task. Productions were analyzed with acoustic measures. Accuracy levels showed no group differences. Participants with HFA could determine affect from filtered sentences and disambiguate words on the basis of lexical stress. They produced appropriately differentiated lexical stress patterns but demonstrated atypically long productions, indicating reduced ability in natural prosody production. Children with HFA were as capable as their TD peers in receptive tasks of lexical stress and affective prosody. Prosody productions were atypically long, despite accurate differentiation of lexical stress patterns. Future research should use larger samples and spontaneous versus elicited productions.
Direct speech quotations promote low relative-clause attachment in silent reading of English.
Yao, Bo; Scheepers, Christoph
2018-07-01
The implicit prosody hypothesis (Fodor, 1998, 2002) proposes that silent reading coincides with a default, implicit form of prosody to facilitate sentence processing. Recent research demonstrated that a more vivid form of implicit prosody is mentally simulated during silent reading of direct speech quotations (e.g., Mary said, "This dress is beautiful"), with neural and behavioural consequences (e.g., Yao, Belin, & Scheepers, 2011; Yao & Scheepers, 2011). Here, we explored the relation between 'default' and 'simulated' implicit prosody in the context of relative-clause (RC) attachment in English. Apart from confirming a general low RC-attachment preference in both production (Experiment 1) and comprehension (Experiments 2 and 3), we found that during written sentence completion (Experiment 1) or when reading silently (Experiment 2), the low RC-attachment preference was reliably enhanced when the critical sentences were embedded in direct speech quotations as compared to indirect speech or narrative sentences. However, when reading aloud (Experiment 3), direct speech did not enhance the general low RC-attachment preference. The results from Experiments 1 and 2 suggest a quantitative boost to implicit prosody (via auditory perceptual simulation) during silent production/comprehension of direct speech. By contrast, when reading aloud (Experiment 3), prosody becomes equally salient across conditions due to its explicit nature; indirect speech and narrative sentences thus become as susceptible to prosody-induced syntactic biases as direct speech. The present findings suggest a shared cognitive basis between default implicit prosody and simulated implicit prosody, providing a new platform for studying the effects of implicit prosody on sentence processing. Copyright © 2018 Elsevier B.V. All rights reserved.
Recalibration of vocal affect by a dynamic face.
Baart, Martijn; Vroomen, Jean
2018-04-25
Perception of vocal affect is influenced by the concurrent sight of an emotional face. We demonstrate that the sight of an emotional face can also induce recalibration of vocal affect. Participants were exposed to videos of a 'happy' or 'fearful' face in combination with a slightly incongruous sentence with ambiguous prosody. After this exposure, ambiguous test sentences were rated as more 'happy' when the exposure phase contained 'happy' instead of 'fearful' faces. This auditory shift likely reflects recalibration that is induced by error minimization of the inter-sensory discrepancy. In line with this view, when the prosody of the exposure sentence was non-ambiguous and congruent with the face (without audiovisual discrepancy), aftereffects went in the opposite direction, likely reflecting adaptation. Our results demonstrate, for the first time, that perception of vocal affect is flexible and can be recalibrated by slightly discrepant visual information.
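The aftereffect logic above reduces to the sign of a rating shift: recalibration predicts that ambiguous prosody is rated more 'happy' after happy-face exposure, whereas adaptation predicts a shift away from the exposed emotion. A sketch with invented rating data (not the study's):

```python
import numpy as np

# Hypothetical mean 'happiness' ratings (0-7 scale) of ambiguous test
# sentences, one value per participant, after each exposure condition
after_happy_exposure = np.array([4.8, 5.1, 4.6, 5.0])
after_fearful_exposure = np.array([3.9, 4.2, 3.7, 4.0])

# Recalibration aftereffect: happy-face exposure shifts subsequent ratings
# of ambiguous prosody toward 'happy', so this difference is positive
aftereffect = after_happy_exposure.mean() - after_fearful_exposure.mean()
# Under pure adaptation (non-ambiguous, congruent exposure), the predicted
# difference would carry the opposite sign
```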
Graham, Susan A; San Juan, Valerie; Khu, Melanie
2017-05-01
When linguistic information alone does not clarify a speaker's intended meaning, skilled communicators can draw on a variety of cues to infer communicative intent. In this paper, we review research examining the developmental emergence of preschoolers' sensitivity to a communicative partner's perspective. We focus particularly on preschoolers' tendency to use cues both within the communicative context (i.e. a speaker's visual access to information) and within the speech signal itself (i.e. emotional prosody) to make on-line inferences about communicative intent. Our review demonstrates that preschoolers' ability to use visual and emotional cues of perspective to guide language interpretation is not uniform across tasks, is sometimes related to theory of mind and executive function skills, and, at certain points of development, is only revealed by implicit measures of language processing.
Ross, Elliott D; Monnot, Marilee
2011-04-01
The Aprosodia Battery was developed to distinguish different patterns of affective-prosodic deficits in patients with left versus right brain damage by using affective utterances with incrementally reduced verbal-articulatory demands. It has also been used to assess affective-prosodic performance in various clinical groups, including patients with schizophrenia, PTSD, multiple sclerosis, alcohol abuse and Alzheimer disease, and in healthy adults, as a means to explore maturational-aging effects. To date, all studies using the Aprosodia Battery have yielded statistically robust results. This paper describes an extensive, quantitative error analysis using previous results from the Aprosodia Battery in patients with left and right brain damage, age-equivalent controls (old adults), and a group of young adults. This inductive analysis was performed to address three major issues in the literature: (1) sex and (2) maturational-aging effects in comprehending affective prosody and (3) differential hemispheric lateralization of emotions. We found no overall sex effects for comprehension of affective prosody. There were, however, scattered sex effects related to particular affects, suggesting that these differences were related to cognitive appraisal rather than primary perception. Results in the brain-damaged groups did not support the Valence Hypothesis of emotional lateralization but did support the Right Hemisphere Hypothesis of emotional lateralization. When comparing young versus old adults, a robust maturational-aging effect was observed in overall error rates and in the distribution of errors across affects. This effect appears to be mediated, in part, by cognitive appraisal, causing an alteration in the salience of different affective-prosodic stimuli with increasing age.
In addition, the maturational-aging effects lend support for the Emotion-Type hypothesis of emotional lateralization and the "classic aging effect" that is due primarily to decline of right hemisphere cognitive functions in senescence. The results of our inductive analysis may help direct future deductive research efforts, exploring the neuropsychology of emotional communication, by taking into account the potentially confounding influence of (1) methodological differences involving construction of test stimuli and assessment procedures, (2) developmental, maturational and aging effects related to cognitive appraisal and (3) whether a stimulus has a primary or social-emotional bias. Published by Elsevier Ltd.
Kalathottukaren, Rose Thomas; Purdy, Suzanne C; Ballard, Elaine
2017-04-01
Auditory development in children with hearing loss, including the perception of prosody, depends on having adequate input from cochlear implants and/or hearing aids. Lack of adequate auditory stimulation can lead to delayed speech and language development. Nevertheless, prosody perception and production in people with hearing loss have received less attention than other aspects of language. The perception of auditory information conveyed through prosody using variations in the pitch, amplitude, and duration of speech is not usually evaluated clinically. This study (1) compared prosody perception and production abilities in children with hearing loss and children with normal hearing; and (2) investigated the effect of age, hearing level, and musicality on prosody perception. Participants were 16 children with hearing loss and 16 typically developing controls matched for age and gender. Fifteen of the children with hearing loss were tested while using amplification (n = 9 hearing aids, n = 6 cochlear implants). Six receptive subtests of the Profiling Elements of Prosody in Speech-Communication (PEPS-C), the Child Paralanguage subtest of the Diagnostic Analysis of Nonverbal Accuracy 2 (DANVA 2), and the Contour and Interval subtests of the Montreal Battery of Evaluation of Amusia (MBEA) were used. Audio recordings of the children's reading samples were rated using a perceptual prosody rating scale by nine experienced listeners who were blinded to the children's hearing status. Thirty-two children participated: 16 with hearing loss (mean age = 8.71 yr) and 16 age- and gender-matched typically developing children with normal hearing (mean age = 8.87 yr). Assessments were completed in one session lasting 1-2 hours in a quiet room. Test items were presented on a laptop computer through a loudspeaker at a comfortable listening level. For children with hearing loss using hearing instruments, all tests were completed with hearing devices set to their everyday listening settings.
All PEPS-C subtests and total scores were significantly lower for children with hearing loss compared to controls (p < 0.05). The hearing loss group performed more poorly than the control group in recognizing happy, sad, and fearful emotions in the DANVA 2 subtest. Musicality (composite MBEA scores and musical experience) was significantly correlated with prosody perception scores, but this link was not evident in the regression analyses. Regression modeling showed that age and hearing level (better ear pure-tone average) accounted for 55.4% and 56.7% of the variance in PEPS-C and DANVA 2 total scores, respectively. There was greater variability in the ratings of pitch, pitch variation, and overall impression of prosody in the hearing loss group compared to the control group. Prosody perception (PEPS-C and DANVA 2 total scores) and ratings of prosody production were not correlated. Children with hearing loss aged 7-12 yr had significant difficulties in understanding different aspects of prosody and were rated as having more atypical prosody overall than controls. These findings suggest that clinical assessment and speech-language therapy services for children with hearing loss should be expanded to target prosodic difficulties. Future studies should investigate whether musical training is beneficial for improving receptive prosody skills. American Academy of Audiology
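The abstract above reports that age and better-ear pure-tone average together accounted for roughly 55% of the variance in prosody-perception scores. A minimal sketch of how such a variance-explained figure is computed from a two-predictor linear regression; all data below are simulated for illustration, not the study's:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32

# Simulated predictors: age in years and better-ear pure-tone average (dB HL)
age = rng.uniform(7, 12, n)
pta = rng.uniform(0, 90, n)

# Simulated prosody score: improves with age, degrades with hearing loss
score = 80 + 2.0 * age - 0.4 * pta + rng.normal(0, 5, n)

# Ordinary least squares fit of score ~ intercept + age + pta
X = np.column_stack([np.ones(n), age, pta])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
pred = X @ beta

# R^2: proportion of score variance explained by the two predictors
ss_res = np.sum((score - pred) ** 2)
ss_tot = np.sum((score - score.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
```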
Effects of musical expertise on oscillatory brain activity in response to emotional sounds.
Nolden, Sophie; Rigoulot, Simon; Jolicoeur, Pierre; Armony, Jorge L
2017-08-01
Emotions can be conveyed through a variety of channels in the auditory domain, be it via music, non-linguistic vocalizations, or speech prosody. Moreover, recent studies suggest that expertise in one sound category can impact the processing of emotional sounds in other categories: musicians process emotional musical and vocal sounds more efficiently than non-musicians. However, the neural correlates of these modulations, especially their time course, are not well understood. Consequently, we focused here on how the neural processing of emotional information varies as a function of sound category and participants' expertise. The electroencephalogram (EEG) of 20 non-musicians and 17 musicians was recorded while they listened to vocal (speech and vocalizations) and musical sounds. The amplitude of EEG oscillatory activity in the theta, alpha, beta, and gamma bands was quantified, and Independent Component Analysis (ICA) was used to identify underlying components of brain activity in each band. Category differences were found in the theta and alpha bands, due to larger responses to music and speech than to vocalizations, and in posterior beta, mainly due to differential processing of speech. In addition, we observed greater activation in frontal theta and alpha for musicians than for non-musicians, as well as an interaction between expertise and the emotional content of sounds in frontal alpha. The results reflect musicians' expertise in recognizing emotion-conveying music, which seems to generalize to emotional expressions conveyed by the human voice, in line with previous accounts of expertise effects on the processing of musical and vocal sounds. Copyright © 2017 Elsevier Ltd. All rights reserved.
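Quantifying oscillatory amplitude in canonical EEG bands, as in the study above, amounts to estimating spectral power within fixed frequency ranges. A minimal periodogram sketch; the band edges follow common conventions, and the signal is synthetic rather than real EEG:

```python
import numpy as np

def band_power(signal, fs, band):
    """Mean periodogram power within [band[0], band[1]) Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

fs = 250                                # sampling rate in Hz
t = np.arange(0, 2, 1 / fs)             # 2 s of data
# Synthetic signal: strong 10 Hz (alpha) plus weak 40 Hz (gamma) component
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}
powers = {name: band_power(eeg, fs, b) for name, b in bands.items()}
# Alpha power dominates, as constructed
```

In practice, EEG pipelines add windowing, epoching, and artifact rejection before this step, but the band-masked power estimate is the core quantity behind the theta/alpha/beta/gamma comparisons reported above.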
Emotional voices in context: A neurobiological model of multimodal affective information processing
NASA Astrophysics Data System (ADS)
Brück, Carolin; Kreifelts, Benjamin; Wildgruber, Dirk
2011-12-01
Just as eyes are often considered a gateway to the soul, the human voice offers a window through which we gain access to our fellow human beings' minds - their attitudes, intentions and feelings. Whether in talking or singing, crying or laughing, sighing or screaming, the sheer sound of a voice communicates a wealth of information that, in turn, may serve the observant listener as a valuable guidepost in social interaction. But how do human beings extract information from the tone of a voice? In an attempt to answer this question, the present article reviews empirical evidence detailing the cerebral processes that underlie our ability to decode emotional information from vocal signals. The review will focus primarily on two prominent classes of vocal emotion cues: laughter and speech prosody (i.e. the tone of voice while speaking). Following a brief introduction, behavioral as well as neuroimaging data will be summarized that allow us to outline the cerebral mechanisms associated with the decoding of emotional voice cues, as well as the influence of various context variables (e.g. co-occurring facial and verbal emotional signals, attention focus, person-specific parameters such as gender and personality) on the respective processes. Building on the presented evidence, a cerebral network model will be introduced that proposes a differential contribution of various cortical and subcortical brain structures to the processing of emotional voice signals both in isolation and in the context of accompanying (facial and verbal) emotional cues.
Speech Intelligibility and Prosody Production in Children with Cochlear Implants
Chin, Steven B.; Bergeson, Tonya R.; Phan, Jennifer
2012-01-01
Objectives The purpose of the current study was to examine the relation between speech intelligibility and prosody production in children who use cochlear implants. Methods The Beginner's Intelligibility Test (BIT) and Prosodic Utterance Production (PUP) task were administered to 15 children who use cochlear implants and 10 children with normal hearing. Adult listeners with normal hearing judged the intelligibility of the words in the BIT sentences, identified the PUP sentences as one of four grammatical or emotional moods (i.e., declarative, interrogative, happy, or sad), and rated the PUP sentences according to how well they thought the child conveyed the designated mood. Results Percent correct scores were higher for intelligibility than for prosody and higher for children with normal hearing than for children with cochlear implants. Declarative sentences were most readily identified and received the highest ratings by adult listeners; interrogative sentences were least readily identified and received the lowest ratings. Correlations between intelligibility and all mood identification and rating scores except declarative were not significant. Discussion The findings suggest that the development of speech intelligibility progresses ahead of prosody in both children with cochlear implants and children with normal hearing; however, children with normal hearing still perform better than children with cochlear implants on measures of intelligibility and prosody even after accounting for hearing age. Problems with interrogative intonation may be related to more general restrictions on rising intonation, and the correlation results indicate that intelligibility and sentence intonation may be relatively dissociated at these ages. PMID:22717120
Affective Aprosodia from a Medial Frontal Stroke
ERIC Educational Resources Information Center
Heilman, Kenneth M.; Leon, Susan A.; Rosenbek, John C.
2004-01-01
Background and objectives: Whereas injury to the left hemisphere induces aphasia, injury to the right hemisphere's perisylvian region induces an impairment of emotional speech prosody (affective aprosodia). Left-sided medial frontal lesions are associated with reduced verbal fluency with relatively intact comprehension and repetition…
Post-stroke acquired amusia: A comparison between right- and left-brain hemispheric damages.
Jafari, Zahra; Esmaili, Mahdiye; Delbari, Ahmad; Mehrpour, Masoud; Mohajerani, Majid H
2017-01-01
Although extensive research has been published about the emotional consequences of stroke, most studies have focused on emotional words, speech prosody, voices, or facial expressions. The emotional processing of musical excerpts following stroke has been relatively unexplored. The present study was conducted to investigate the effects of chronic stroke on the recognition of basic emotions in music. Seventy persons aged 31-71 years were studied, including 25 normal controls (NC), 25 persons with right brain damage (RBD) from stroke, and 20 persons with left brain damage (LBD) from stroke. The Musical Emotional Bursts (MEB) test, which consists of a set of short musical pieces expressing basic emotional states (happiness, sadness, and fear) and neutrality, was used to test musical emotional perception. Both stroke groups were significantly poorer than normal controls for the MEB total score and its subtests (p < 0.001). The RBD group was significantly less able than the LBD group to recognize sadness (p = 0.047) and neutrality (p = 0.015). Negative correlations were found between age and MEB scores for all groups, particularly the NC and RBD groups. Our findings indicated that stroke affecting the auditory cerebrum can cause acquired amusia, with greater severity in RBD than LBD. These results supported the "valence hypothesis" of right hemisphere dominance in processing negative emotions.
Niedtfeld, Inga
2017-07-01
Borderline personality disorder (BPD) is characterized by affective instability and interpersonal problems. In the context of social interaction, impairments in empathy are proposed to result in inadequate social behavior. In contrast to findings of reduced cognitive empathy, some authors have suggested enhanced emotional empathy in BPD. We investigated whether ambiguity leads to decreased cognitive or emotional empathy in BPD. Thirty-four patients with BPD and thirty-two healthy controls were presented with video clips in which emotions were conveyed through prosody, facial expression, and speech content. Experimental conditions were designed to induce ambiguity by presenting neutral valence in one of these communication channels. Subjects were asked to indicate the actors' emotional valence, their decision confidence, and their own emotional state. BPD patients showed increased emotional empathy when speech content was neutral but emotions were expressed nonverbally. In contrast, when all channels were emotional, patients showed lower emotional empathy than healthy controls. Regarding cognitive empathy, there were no significant differences between BPD patients and healthy control subjects in recognition accuracy, but decision confidence was reduced in BPD. These results suggest that patients with BPD show altered emotional empathy, experiencing higher rates of emotional contagion when emotions are expressed nonverbally. The latter may contribute to misunderstandings and inadequate social behavior. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
Adaptation to Vocal Expressions Reveals Multistep Perception of Auditory Emotion
Bestelmeyer, Patricia E. G.; Maurage, Pierre; Rouger, Julien; Latinus, Marianne; Belin, Pascal
2014-01-01
The human voice carries speech as well as important nonlinguistic signals that influence our social interactions. Among these cues that impact our behavior and communication with other people is the perceived emotional state of the speaker. A theoretical framework for the neural processing stages of emotional prosody has suggested that auditory emotion is perceived in multiple steps (Schirmer and Kotz, 2006) involving low-level auditory analysis and integration of the acoustic information followed by higher-level cognition. Empirical evidence for this multistep processing chain, however, is still sparse. We examined this question using functional magnetic resonance imaging and a continuous carry-over design (Aguirre, 2007) to measure brain activity while volunteers listened to non-speech-affective vocalizations morphed on a continuum between anger and fear. Analyses dissociated neuronal adaptation effects induced by similarity in perceived emotional content between consecutive stimuli from those induced by their acoustic similarity. We found that bilateral voice-sensitive auditory regions as well as right amygdala coded the physical difference between consecutive stimuli. In contrast, activity in bilateral anterior insulae, medial superior frontal cortex, precuneus, and subcortical regions such as bilateral hippocampi depended predominantly on the perceptual difference between morphs. Our results suggest that the processing of vocal affect recognition is a multistep process involving largely distinct neural networks. Amygdala and auditory areas predominantly code emotion-related acoustic information while more anterior insular and prefrontal regions respond to the abstract, cognitive representation of vocal affect. PMID:24920615
Effect of Parkinson Disease on Emotion Perception Using the Persian Affective Voices Test.
Saffarian, Arezoo; Shavaki, Yunes Amiri; Shahidi, Gholam Ali; Jafari, Zahra
2018-05-04
Emotion perception plays a major role in proper communication with people in different social interactions. Nonverbal affect bursts can be used to evaluate vocal emotion perception. The present study was a preliminary step toward establishing the psychometric properties of the Persian version of the Montreal Affective Voices (MAV) test, as well as investigating the effect of Parkinson disease (PD) on vocal emotion perception. The short, emotional sound made by pronouncing the vowel "a" in Persian was recorded by 22 actors and actresses to develop the Persian version of the MAV, the Persian Affective Voices (PAV), for emotions of happiness, sadness, pleasure, pain, anger, disgust, fear, surprise, and neutrality. The recordings of the five actresses and five actors who obtained the highest scores were used to generate the test. For convergent validity assessment, the correlation between the PAV and a speech prosody comprehension test was examined using a gender- and age-matched control group. To investigate the effect of PD on emotion perception, the PAV test was performed on 28 patients with mild PD between ages 50 and 70 years. The PAV showed a high internal consistency (Cronbach's α = 0.80). A significant positive correlation was observed between the PAV and the speech prosody comprehension test. The test-retest reliability also showed the high repeatability of the PAV (intraclass correlation coefficient = 0.815, P ≤ 0.001). A significant difference was observed between the patients with PD and the controls in all subtests. The PAV test is a useful psychometric tool for examining vocal emotion perception that can be used in both behavioral and neuroimaging studies. Copyright © 2018 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
Implicit Prosody and Cue-based Retrieval: L1 and L2 Agreement and Comprehension during Reading.
Pratt, Elizabeth; Fernández, Eva M
2016-01-01
This project focuses on structural and prosodic effects during reading, examining their influence on agreement processing and comprehension in native English (L1) and Spanish-English bilingual (L2) speakers. We consolidate research from several distinct areas of inquiry-cognitive processing, reading fluency, and L1/L2 processing-in order to support the integration of prosody with a cue-based retrieval mechanism for subject-verb agreement. To explore this proposal, the experimental design manipulated text presentation to influence implicit prosody, using sentences designed to induce subject-verb agreement attraction errors. Materials included simple and complex relative clauses with head nouns and verbs that were either matched or mismatched for number. Participants read items in one of three presentation formats (whole sentence, word-by-word, or phrase-by-phrase), rated each item for grammaticality, and responded to a comprehension probe. Results indicated that while overall, message comprehension was prioritized over subject-verb agreement computation, presentation format differentially affected both measures in the L1 and L2 groups. For the L1 participants, facilitating the projection of phrasal prosody onto text (phrase-by-phrase presentation) enhanced performance in agreement processing, while disrupting prosodic projection via word-by-word presentation decreased comprehension accuracy. For the L2 participants, however, phrase-by-phrase presentation was not significantly beneficial for agreement processing, and additionally resulted in lower comprehension accuracy. These differences point to a significant role of prosodic phrasing during agreement processing in both L1 and L2 speakers, additionally suggesting that it may contribute to a cue-based retrieval agreement model, either acting as a cue directly, or otherwise scaffolding the retrieval process. 
The discussion and results presented provide support both for a cue-based retrieval mechanism in agreement and for the function of prosody within such a mechanism, adding further insight into the interaction of retrieval processes, cognitive task load, and the role of implicit prosody. PMID:28018264
Systematic review of the neural basis of social cognition in patients with mood disorders.
Cusi, Andrée M; Nazarov, Anthony; Holshausen, Katherine; Macqueen, Glenda M; McKinnon, Margaret C
2012-05-01
This review integrates neuroimaging studies of 2 domains of social cognition--emotion comprehension and theory of mind (ToM)--in patients with major depressive disorder and bipolar disorder. The influence of key clinical and methodological variables on patterns of neural activation during social cognitive processing is also examined. Studies were identified using PsycINFO and PubMed (January 1967 to May 2011). The search terms were "fMRI," "emotion comprehension," "emotion perception," "affect comprehension," "affect perception," "facial expression," "prosody," "theory of mind," "mentalizing" and "empathy" in combination with "major depressive disorder," "bipolar disorder," "major depression," "unipolar depression," "clinical depression" and "mania." Taken together, neuroimaging studies of social cognition in patients with mood disorders reveal enhanced activation in limbic and emotion-related structures and attenuated activity within frontal regions associated with emotion regulation and higher cognitive functions. These results reveal an overall lack of inhibition by higher-order cognitive structures on limbic and emotion-related structures during social cognitive processing in patients with mood disorders. Critically, key variables, including illness burden, symptom severity, comorbidity, medication status and cognitive load may moderate this pattern of neural activation. Studies that did not include control tasks or a comparator group were included in this review. Further work is needed to examine the contribution of key moderator variables and to further elucidate the neural networks underlying altered social cognition in patients with mood disorders. The neural networks underlying higher-order social cognitive processes, including empathy, remain unexplored in patients with mood disorders.
Impaired perception of harmonic complexity in congenital amusia: a case study.
Reed, Catherine L; Cahn, Steven J; Cory, Christopher; Szaflarski, Jerzy P
2011-07-01
This study investigates whether congenital amusia (an inability to perceive music from birth) also impairs the perception of musical qualities that do not rely on fine-grained pitch discrimination. We established that G.G. (a 64-year-old male with age-typical hearing) met the criteria for congenital amusia and demonstrated music-specific deficits on a battery of tests (e.g., language processing, intonation, prosody, fine-grained pitch processing, pitch discrimination, identification of discrepant tones and of pitch direction for tones in a series, pitch discrimination within scale segments, predictability of tone sequences, recognition versus knowing memory for melodies, and short-term memory for melodies). Next, we conducted tests of tonal fusion, harmonic complexity, and affect perception: recognizing timbre, assessing consonance and dissonance, and recognizing musical affect from harmony. G.G. displayed relatively unimpaired perception and production of environmental sounds, prosody, and emotion conveyed by speech, compared with impaired fine-grained pitch perception, tonal sequence discrimination, and melody recognition. Importantly, G.G. could not perform tests of tonal fusion that do not rely on pitch discrimination: he could not distinguish concurrent notes, timbre, consonance/dissonance, or musical affect. The results indicate at least three distinct problems, one with pitch discrimination, one with harmonic simultaneity, and one with musical affect, each with distinct consequences for music perception.
Oerlemans, Anoek M; Droste, Katharina; van Steijn, Daphne J; de Sonneville, Leo M J; Buitelaar, Jan K; Rommelse, Nanda N J
2013-12-01
Cognitive research proposes that social cognition (SC), executive functions (EF) and local processing style (weak CC) may be fruitful areas for research into the familial-genetic underpinnings of Autism Spectrum Disorders (ASD). The performance of 140 children with ASD, 172 siblings and 127 controls on tasks measuring SC (face recognition, affective prosody, and facial emotion recognition), EF (inhibition, cognitive flexibility, and verbal working memory) and local processing style was assessed. Compelling evidence was found for the interrelatedness of SC and EF, but not local processing style, within individuals and within families, suggesting that these domains tend to co-segregate in ASD. Using the underlying shared variance of these constructs in genetic research may increase the power for detecting susceptibility genes for ASD.
Constituent Length Affects Prosody and Processing for a Dative NP Ambiguity in Korean
ERIC Educational Resources Information Center
Hwang, Hyekyung; Schafer, Amy J.
2009-01-01
Two sentence processing experiments on a dative NP ambiguity in Korean demonstrate effects of phrase length on overt and implicit prosody. Both experiments controlled non-prosodic length factors by using long versus short proper names that occurred before the syntactically critical material. Experiment 1 found that long phrases induce different…
Recognition of emotion with temporal lobe epilepsy and asymmetrical amygdala damage.
Fowler, Helen L; Baker, Gus A; Tipples, Jason; Hare, Dougal J; Keller, Simon; Chadwick, David W; Young, Andrew W
2006-08-01
Impairments in emotion recognition occur when there is bilateral damage to the amygdala. In this study, ability to recognize auditory and visual expressions of emotion was investigated in people with asymmetrical amygdala damage (AAD) and temporal lobe epilepsy (TLE). Recognition of five emotions was tested across three participant groups: those with right AAD and TLE, those with left AAD and TLE, and a comparison group. Four tasks were administered: recognition of emotion from facial expressions, sentences describing emotion-laden situations, nonverbal sounds, and prosody. Accuracy scores for each task and emotion were analysed, and no consistent overall effect of AAD on emotion recognition was found. However, some individual participants with AAD were significantly impaired at recognizing emotions, in both auditory and visual domains. The findings indicate that a minority of individuals with AAD have impairments in emotion recognition, but no evidence of specific impairments (e.g., visual or auditory) was found.
Development in Children's Interpretation of Pitch Cues to Emotions
ERIC Educational Resources Information Center
Quam, Carolyn; Swingley, Daniel
2012-01-01
Young infants respond to positive and negative speech prosody (A. Fernald, 1993), yet 4-year-olds rely on lexical information when it conflicts with paralinguistic cues to approval or disapproval (M. Friend, 2003). This article explores this surprising phenomenon, testing one hundred eighteen 2- to 5-year-olds' use of isolated pitch cues to…
Dmitrieva, E S; Gel'man, V Ia
2011-01-01
Listener-specific features of the recognition of different emotional intonations (positive, negative, and neutral) produced by male and female speakers, in the presence or absence of background noise, were studied in 49 adults aged 20-79 years. For all listeners, noise produced the most pronounced decrease in recognition accuracy for the positive emotional intonation ("joy") compared with the other intonations, whereas it did not influence the recognition accuracy of "anger" in 65-79-year-old listeners. Higher recognition rates for noisy signals were observed for emotional intonations expressed by female speakers. Acoustic characteristics of noisy and clear speech signals underlying the perception of speech emotional prosody were identified for adult listeners of different ages and genders.
The Role of Prosody and Explicit Instruction in Processing Instruction
ERIC Educational Resources Information Center
Henry, Nick; Jackson, Carrie N.; Dimidio, Jack
2017-01-01
This study investigates the role of prosodic cues and explicit information (EI) in the acquisition of German accusative case markers. We compared 4 groups of 3rd-semester learners (low intermediate level) who completed 1 of 4 Processing Instruction (PI) treatments that manipulated the presence or absence of EI and focused prosody. The results…
ERIC Educational Resources Information Center
Chung, Wei-Lun; Jarmulowicz, Linda; Bidelman, Gavin M.
2017-01-01
This study examined language-specific links among auditory processing, linguistic prosody awareness, and Mandarin (L1) and English (L2) word reading in 61 Mandarin-speaking, English-learning children. Three auditory discrimination abilities were measured: pitch contour, pitch interval, and rise time (rate of intensity change at tone onset).…
Multimodal emotion perception after anterior temporal lobectomy (ATL)
Milesi, Valérie; Cekic, Sezen; Péron, Julie; Frühholz, Sascha; Cristinzio, Chiara; Seeck, Margitta; Grandjean, Didier
2014-01-01
In the context of emotion information processing, several studies have demonstrated the involvement of the amygdala in emotion perception, for unimodal and multimodal stimuli. However, it seems that not only the amygdala, but several regions around it, may also play a major role in multimodal emotional integration. In order to investigate the contribution of these regions to multimodal emotion perception, five patients who had undergone unilateral anterior temporal lobe resection were exposed to both unimodal (vocal or visual) and audiovisual emotional and neutral stimuli. In a classic paradigm, participants were asked to rate the emotional intensity of angry, fearful, joyful, and neutral stimuli on visual analog scales. Compared with matched controls, patients exhibited impaired categorization of joyful expressions, whether the stimuli were auditory, visual, or audiovisual. Patients confused joyful faces with neutral faces, and joyful prosody with surprise. In the case of fear, unlike matched controls, patients provided lower intensity ratings for visual stimuli than for vocal and audiovisual ones. Fearful faces were frequently confused with surprised ones. When we controlled for lesion size, we no longer observed any overall difference between patients and controls in their ratings of emotional intensity on the target scales. Lesion size had the greatest effect on intensity perceptions and accuracy in the visual modality, irrespective of the type of emotion. These new findings suggest that a damaged amygdala, or a disrupted bundle between the amygdala and the ventral part of the occipital lobe, has a greater impact on emotion perception in the visual modality than it does in either the vocal or audiovisual one. We can surmise that patients are able to use the auditory information contained in multimodal stimuli to compensate for difficulty processing visually conveyed emotion. PMID:24839437
Processing of prosodic changes in natural speech stimuli in school-age children.
Lindström, R; Lepistö, T; Makkonen, T; Kujala, T
2012-12-01
Speech prosody conveys information about important aspects of communication: the meaning of the sentence and the emotional state or intention of the speaker. The present study addressed the processing of emotional prosodic changes in natural speech stimuli in school-age children (mean age 10 years) by recording the electroencephalogram, facial electromyography, and behavioral responses. The stimulus was a semantically neutral Finnish word uttered with four different emotional connotations: neutral, commanding, sad, and scornful. In the behavioral sound-discrimination task the reaction times were fastest for the commanding stimulus and longest for the scornful stimulus, and faster for the neutral than for the sad stimulus. EEG and EMG responses were measured during a non-attentive oddball paradigm. Prosodic changes elicited a negative-going, fronto-centrally distributed neural response peaking at about 500 ms from the onset of the stimulus, followed by a fronto-central positive deflection peaking at about 740 ms. For the commanding stimulus, a rapid negative deflection peaking at about 290 ms from stimulus onset was also elicited. No reliable stimulus-type-specific rapid facial reactions were found. The results show that prosodic changes in natural speech stimuli activate pre-attentive neural change-detection mechanisms in school-age children. However, the results do not support the suggestion of automaticity of emotion-specific facial muscle responses to non-attended emotional speech stimuli in children. Copyright © 2012 Elsevier B.V. All rights reserved.
Automatic Neural Processing of Disorder-Related Stimuli in Social Anxiety Disorder: Faces and More
Schulz, Claudia; Mothes-Lasch, Martin; Straube, Thomas
2013-01-01
It has been proposed that social anxiety disorder (SAD) is associated with automatic information processing biases resulting in hypersensitivity to signals of social threat such as negative facial expressions. However, the nature and extent of automatic processes in SAD on the behavioral and neural level is not entirely clear yet. The present review summarizes neuroscientific findings on automatic processing of facial threat but also other disorder-related stimuli such as emotional prosody or negative words in SAD. We review initial evidence for automatic activation of the amygdala, insula, and sensory cortices as well as for automatic early electrophysiological components. However, findings vary depending on tasks, stimuli, and neuroscientific methods. Only few studies set out to examine automatic neural processes directly and systematic attempts are as yet lacking. We suggest that future studies should: (1) use different stimulus modalities, (2) examine different emotional expressions, (3) compare findings in SAD with other anxiety disorders, (4) use more sophisticated experimental designs to investigate features of automaticity systematically, and (5) combine different neuroscientific methods (such as functional neuroimaging and electrophysiology). Finally, the understanding of neural automatic processes could also provide hints for therapeutic approaches. PMID:23745116
NASA Astrophysics Data System (ADS)
Imai, Emiko; Katagiri, Yoshitada; Seki, Keiko; Kawamata, Toshio
2011-06-01
We present a neural model of the production of modulated speech streams in the brain, referred to as prosody, which identifies the limbic structures essential for producing prosody both linguistically and emotionally. This model suggests that activating the fundamental brain, including the monoamine neurons of the basal ganglia, could help patients whose prosodic disorders arise from functional defects of the fundamental brain to overcome their speech problems. To establish effective clinical treatment for such prosodic disorders, we examined how sounds affect this fundamental activity using electroencephalographic measurements. Through examinations with various melodious sounds, we found that some melodies with lilting rhythms successfully give rise to fast alpha rhythms in the electroencephalogram, which reflect fundamental brain activity, without eliciting any negative feelings.
Expression of Emotion in Eastern and Western Music Mirrors Vocalization
Bowling, Daniel Liu; Sundararajan, Janani; Han, Shui'er; Purves, Dale
2012-01-01
In Western music, the major mode is typically used to convey excited, happy, bright or martial emotions, whereas the minor mode typically conveys subdued, sad or dark emotions. Recent studies indicate that the differences between these modes parallel differences between the prosodic and spectral characteristics of voiced speech sounds uttered in corresponding emotional states. Here we ask whether tonality and emotion are similarly linked in an Eastern musical tradition. The results show that the tonal relationships used to express positive/excited and negative/subdued emotions in classical South Indian music are much the same as those used in Western music. Moreover, tonal variations in the prosody of English and Tamil speech uttered in different emotional states are parallel to the tonal trends in music. These results are consistent with the hypothesis that the association between musical tonality and emotion is based on universal vocal characteristics of different affective states. PMID:22431970
Nonverbal Vocal Communication of Emotions in Interviews with Child and Adolescent Psychoanalysts
ERIC Educational Resources Information Center
Tokgoz, Tuba
2014-01-01
This exploratory study attempted to examine both the words and the prosody/melody of the language within the framework of Bucci's Multiple Code Theory. The sample consisted of twelve audio-recorded, semi-structured interviews of child and adolescent psychoanalysts who were asked to describe their work with patients. It is observed that emotionally…
Characteristics of Auditory Agnosia in a Child with Severe Traumatic Brain Injury: A Case Report
ERIC Educational Resources Information Center
Hattiangadi, Nina; Pillion, Joseph P.; Slomine, Beth; Christensen, James; Trovato, Melissa K.; Speedie, Lynn J.
2005-01-01
We present a case that is unusual in many respects from other documented incidences of auditory agnosia, including the mechanism of injury, age of the individual, and location of neurological insult. The clinical presentation is one of disturbance in the perception of spoken language, music, pitch, emotional prosody, and temporal auditory…
Cross-cultural emotional prosody recognition: evidence from Chinese and British listeners.
Paulmann, Silke; Uskul, Ayse K
2014-01-01
This cross-cultural study of emotional tone of voice recognition tests the in-group advantage hypothesis (Elfenbein & Ambady, 2002) employing a quasi-balanced design. Individuals of Chinese and British background were asked to recognise pseudosentences produced by Chinese and British native speakers, displaying one of seven emotions (anger, disgust, fear, happiness, neutral tone of voice, sadness, and surprise). Findings reveal that emotional displays were recognised at rates higher than predicted by chance; however, members of each cultural group were more accurate in recognising the displays communicated by a member of their own cultural group than a member of the other cultural group. Moreover, the evaluation of error matrices indicates that both culture groups relied on similar mechanisms when recognising emotional displays from the voice. Overall, the study reveals evidence for both universal and culture-specific principles in vocal emotion recognition.
Keshtiari, Niloofar; Kuhlmann, Michael; Eslami, Moharram; Klann-Delius, Gisela
2015-03-01
Research on emotional speech often requires valid stimuli for assessing perceived emotion through prosody and lexical content. To date, no comprehensive emotional speech database for Persian is officially available. The present article reports the process of designing, compiling, and evaluating a comprehensive emotional speech database for colloquial Persian. The database contains a set of 90 validated novel Persian sentences classified in five basic emotional categories (anger, disgust, fear, happiness, and sadness), as well as a neutral category. These sentences were validated in two experiments by a group of 1,126 native Persian speakers. The sentences were articulated by two native Persian speakers (one male, one female) in three conditions: (1) congruent (emotional lexical content articulated in a congruent emotional voice), (2) incongruent (neutral sentences articulated in an emotional voice), and (3) baseline (all emotional and neutral sentences articulated in neutral voice). The speech materials comprise about 470 sentences. The validity of the database was evaluated by a group of 34 native speakers in a perception test. Utterances recognized better than five times chance performance (71.4 %) were regarded as valid portrayals of the target emotions. Acoustic analysis of the valid emotional utterances revealed differences in pitch, intensity, and duration, attributes that may help listeners to correctly classify the intended emotion. The database is designed to be used as a reliable material source (for both text and speech) in future cross-cultural or cross-linguistic studies of emotional speech, and it is available for academic research purposes free of charge. To access the database, please contact the first author.
Visual attention modulates brain activation to angry voices.
Mothes-Lasch, Martin; Mentzel, Hans-Joachim; Miltner, Wolfgang H R; Straube, Thomas
2011-06-29
In accordance with influential models proposing prioritized processing of threat, previous studies have shown automatic brain responses to angry prosody in the amygdala and the auditory cortex under auditory distraction conditions. However, it is unknown whether the automatic processing of angry prosody is also observed during cross-modal distraction. The current fMRI study investigated brain responses to angry versus neutral prosodic stimuli during visual distraction. During scanning, participants were exposed to angry or neutral prosodic stimuli while visual symbols were displayed simultaneously. By means of task requirements, participants either attended to the voices or to the visual stimuli. While the auditory task revealed pronounced activation in the auditory cortex and amygdala to angry versus neutral prosody, this effect was absent during the visual task. Thus, our results show a limitation of the automaticity of the activation of the amygdala and auditory cortex to angry prosody. The activation of these areas to threat-related voices depends on modality-specific attention.
[Perception features of emotional intonation of short pseudowords].
Dmitrieva, E S; Gel'man, V Ia; Zaĭtseva, K A; Orlov, A M
2012-01-01
Reaction time and recognition accuracy for speech emotional intonations in short meaningless words that differed in only one phoneme, with and without background noise, were studied in 49 adults aged 20-79 years. The results were compared with the same parameters for emotional intonations in meaningful speech utterances under similar conditions. Perception of emotional intonations at different linguistic levels (phonological and lexico-semantic) was found to have both common features and certain peculiarities. Recognition characteristics of emotional intonations depending on the gender and age of listeners appeared to be invariant with regard to the linguistic level of the speech stimuli. The phonemic composition of the pseudowords was found to influence emotional perception, especially against background noise. The most significant acoustic characteristic of the stimuli responsible for the perception of speech emotional prosody in short meaningless words under the two experimental conditions, i.e. with and without background noise, was the variation in fundamental frequency.
Grossman, Ruth B; Edelson, Lisa R; Tager-Flusberg, Helen
2013-06-01
People with high-functioning autism (HFA) have qualitative differences in facial expression and prosody production, which are rarely systematically quantified. The authors' goals were to qualitatively and quantitatively analyze prosody and facial expression productions in children and adolescents with HFA. Participants were 22 male children and adolescents with HFA and 18 typically developing (TD) controls (17 males, 1 female). The authors used a story retelling task to elicit emotionally laden narratives, which were analyzed through the use of acoustic measures and perceptual codes. Naïve listeners coded all productions for emotion type, degree of expressiveness, and awkwardness. The group with HFA was not significantly different in accuracy or expressiveness of facial productions, but was significantly more awkward than the TD group. Participants with HFA were significantly more expressive in their vocal productions, with a trend for greater awkwardness. Severity of social communication impairment, as captured by the Autism Diagnostic Observation Schedule (ADOS; Lord, Rutter, DiLavore, & Risi, 1999), was correlated with greater vocal and facial awkwardness. Facial and vocal expressions of participants with HFA were as recognizable as those of their TD peers but were qualitatively different, particularly when listeners coded samples with intact dynamic properties. These preliminary data show qualitative differences in nonverbal communication that may have significant negative impact on the social communication success of children and adolescents with HFA.
The assessment and treatment of prosodic disorders and neurological theories of prosody.
Diehl, Joshua J; Paul, Rhea
2009-08-01
In this article, we comment on specific aspects of Peppé (2009). In particular, we address the assessment and treatment of prosody in clinical settings and discuss current theory on neurological models of prosody. We argue that in order for prosodic assessment instruments and treatment programs to be clinically effective, we need assessment instruments that: (1) have a representative normative comparison sample and strong psychometric properties; (2) are based on empirical information regarding the typical sequence of prosodic acquisition and are sensitive to developmental change; (3) meaningfully subcategorize various aspects of prosody; (4) use tasks that have ecological validity; and (5) have clinical properties, such as length and ease of administration, that allow them to become part of standard language assessment batteries. In addition, we argue that current theories of prosody processing in the brain are moving toward network models that involve multiple brain areas and are crucially dependent on cortical communication. The implications of these observations for future research and clinical practice are outlined.
Affective priming effects of musical sounds on the processing of word meaning.
Steinbeis, Nikolaus; Koelsch, Stefan
2011-03-01
Recent studies have shown that music is capable of conveying semantically meaningful concepts. Several questions have subsequently arisen, particularly with regard to the precise mechanisms underlying the communication of musical meaning as well as the role of specific musical features. The present article reports three studies investigating the role of affect expressed by various musical features in priming subsequent word processing at the semantic level. By means of an affective priming paradigm, it was shown that both musically trained and untrained participants evaluated emotional words congruous to the affect expressed by a preceding chord faster than words incongruous to the preceding chord. This behavioral effect was accompanied by an N400, an ERP component typically linked to semantic processing, which was specifically modulated by the (mis)match between the prime and the target. This finding was shown for the musical parameter of consonance/dissonance (Experiment 1) and then extended to mode (major/minor) (Experiment 2) and timbre (Experiment 3). Given that the N400 is taken to reflect the processing of meaning, the present findings suggest that the emotional expression of single musical features is understood by listeners as such and is probably processed on a level akin to other affective communications (i.e., prosody or vocalizations) because it interferes with subsequent semantic processing. There were no group differences, suggesting that musical expertise does not have an influence on the processing of emotional expression in music and its semantic connotations.
Baez, Sandra; Marengo, Juan; Perez, Ana; Huepe, David; Font, Fernanda Giralt; Rial, Veronica; Gonzalez-Gadea, María Luz; Manes, Facundo; Ibanez, Agustin
2015-09-01
Impaired social cognition has been claimed to be a mechanism underlying the development and maintenance of borderline personality disorder (BPD). One important aspect of social cognition is the theory of mind (ToM), a complex skill that seems to be influenced by more basic processes, such as executive functions (EF) and emotion recognition. Previous ToM studies in BPD have yielded inconsistent results. This study assessed the performance of BPD adults on ToM, emotion recognition, and EF tasks. We also examined whether EF and emotion recognition could predict the performance on ToM tasks. We evaluated 15 adults with BPD and 15 matched healthy controls using different tasks of EF, emotion recognition, and ToM. The results showed that BPD adults exhibited deficits in the three domains, which seem to be task-dependent. Furthermore, we found that EF and emotion recognition predicted the performance on ToM. Our results suggest that tasks that involve real-life social scenarios and contextual cues are more sensitive to detect ToM and emotion recognition deficits in BPD individuals. Our findings also indicate that (a) ToM variability in BPD is partially explained by individual differences on EF and emotion recognition; and (b) ToM deficits of BPD patients are partially explained by the capacity to integrate cues from face, prosody, gesture, and social context to identify the emotions and others' beliefs. © 2014 The British Psychological Society.
Systematic review of the neural basis of social cognition in patients with mood disorders
Cusi, Andrée M.; Nazarov, Anthony; Holshausen, Katherine; MacQueen, Glenda M.; McKinnon, Margaret C.
2012-01-01
Background This review integrates neuroimaging studies of 2 domains of social cognition — emotion comprehension and theory of mind (ToM) — in patients with major depressive disorder and bipolar disorder. The influence of key clinical and method variables on patterns of neural activation during social cognitive processing is also examined. Methods Studies were identified using PsycINFO and PubMed (January 1967 to May 2011). The search terms were “fMRI,” “emotion comprehension,” “emotion perception,” “affect comprehension,” “affect perception,” “facial expression,” “prosody,” “theory of mind,” “mentalizing” and “empathy” in combination with “major depressive disorder,” “bipolar disorder,” “major depression,” “unipolar depression,” “clinical depression” and “mania.” Results Taken together, neuroimaging studies of social cognition in patients with mood disorders reveal enhanced activation in limbic and emotion-related structures and attenuated activity within frontal regions associated with emotion regulation and higher cognitive functions. These results reveal an overall lack of inhibition by higher-order cognitive structures on limbic and emotion-related structures during social cognitive processing in patients with mood disorders. Critically, key variables, including illness burden, symptom severity, comorbidity, medication status and cognitive load may moderate this pattern of neural activation. Limitations Studies that did not include control tasks or a comparator group were included in this review. Conclusion Further work is needed to examine the contribution of key moderator variables and to further elucidate the neural networks underlying altered social cognition in patients with mood disorders. The neural networks underlying higher-order social cognitive processes, including empathy, remain unexplored in patients with mood disorders. PMID:22297065
Gender differences in emotion recognition: Impact of sensory modality and emotional category.
Lambrecht, Lena; Kreifelts, Benjamin; Wildgruber, Dirk
2014-04-01
Results from studies on gender differences in emotion recognition vary, depending on the types of emotion and the sensory modalities used for stimulus presentation. This makes comparability between different studies problematic. This study investigated emotion recognition of healthy participants (N = 84; 40 males; ages 20 to 70 years), using dynamic stimuli, displayed by two genders in three different sensory modalities (auditory, visual, audio-visual) and five emotional categories. The participants were asked to categorise the stimuli on the basis of their nonverbal emotional content (happy, alluring, neutral, angry, and disgusted). Hit rates and category selection biases were analysed. Women were found to be more accurate in recognition of emotional prosody. This effect was partially mediated by hearing loss for the frequency of 8,000 Hz. Moreover, there was a gender-specific selection bias for alluring stimuli: Men, as compared to women, chose "alluring" more often when a stimulus was presented by a woman as compared to a man.
The Role of Speech Prosody and Text Reading Prosody in Children's Reading Comprehension
ERIC Educational Resources Information Center
Veenendaal, Nathalie J.; Groen, Margriet A.; Verhoeven, Ludo
2014-01-01
Background: Text reading prosody has been associated with reading comprehension. However, text reading prosody is a reading-dependent measure that relies heavily on decoding skills. Investigation of the contribution of speech prosody--which is independent from reading skills--in addition to text reading prosody, to reading comprehension could…
Social cognition in a case of amnesia with neurodevelopmental mechanisms.
Staniloiu, Angelica; Borsutzky, Sabine; Woermann, Friedrich G; Markowitsch, Hans J
2013-01-01
Episodic-autobiographical memory (EAM) is considered to emerge gradually in concert with the development of other cognitive abilities (such as executive functions, personal semantic knowledge, emotional knowledge, theory of mind (ToM) functions, language, and working memory). On the brain level its emergence is accompanied by structural and functional reorganization of different components of the so-called EAM network. This network includes the hippocampal formation, which is viewed as being vital for the acquisition of memories of personal events for long-term storage. Developmental studies have emphasized socio-cultural-linguistic mechanisms that may be unique to the development of EAM. Furthermore, it has been hypothesized that one of the main functions of EAM is a social one. In the research field, however, the link between EAM and social cognition remains debated. Herein we aim to bring new insights into the relation between EAM and social information processing (including social cognition) by describing a young adult patient with amnesia with neurodevelopmental mechanisms due to perinatal complications accompanied by hypoxia. The patient was investigated medically, psychiatrically, and with neuropsychological and neuroimaging methods. Structural high-resolution magnetic resonance imaging revealed significant bilateral hippocampal atrophy as well as indices of degeneration in the amygdalae, basal ganglia, and thalamus when a less conservative threshold was applied. In addition to extensive memory investigations and testing of other (non-social) cognitive functions, we employed a broad range of tests that assessed social information processing (social perception, social cognition, social regulation). Our results point to both preserved (empathy, core ToM functions, visual affect selection and discrimination, affective prosody discrimination) and impaired domains of social information processing (incongruent affective prosody processing, complex social judgments).
They support proposals for a role of the hippocampal formation in processing more complex social information that likely requires multimodal relational handling. PMID:23805111
ERIC Educational Resources Information Center
Hwang, Hyekyung; Steinhauer, Karsten
2011-01-01
In spoken language comprehension, syntactic parsing decisions interact with prosodic phrasing, which is directly affected by phrase length. Here we used ERPs to examine whether a similar effect holds for the on-line processing of written sentences during silent reading, as suggested by theories of "implicit prosody." Ambiguous Korean sentence…
ERIC Educational Resources Information Center
Martzoukou, Maria; Papadopoulou, Despina; Kosmidis, Mary-Helen
2017-01-01
The present study investigates the comprehension of syntactic and affective prosody in adults with autism spectrum disorder without accompanying cognitive deficits (ASD w/o cognitive deficits) as well as age-, education- and gender-matched unimpaired adults, while processing orally presented sentences. Two experiments were conducted: (a) an…
A Shared Neural Substrate for Mentalizing and the Affective Component of Sentence Comprehension
Hervé, Pierre-Yves; Razafimandimby, Annick; Jobard, Gaël; Tzourio-Mazoyer, Nathalie
2013-01-01
Using event-related fMRI in a sample of 42 healthy participants, we compared the cerebral activity maps obtained when classifying spoken sentences based on the mental content of the main character (belief, deception or empathy) or on the emotional tonality of the sentence (happiness, anger or sadness). To control for the effects of different syntactic constructions (such as embedded clauses in belief sentences), we subtracted from each map the BOLD activations obtained during plausibility judgments on structurally matching sentences, devoid of emotions or ToM. The obtained theory of mind (ToM) and emotional speech comprehension networks overlapped in the bilateral temporo-parietal junction, posterior cingulate cortex, right anterior temporal lobe, dorsomedial prefrontal cortex and in the left inferior frontal sulcus. These regions form a ToM network, which contributes to the emotional component of spoken sentence comprehension. Compared with the ToM task, in which the sentences were spoken in a neutral tone, the emotional sentence classification task, in which the sentences were play-acted, was associated with greater activity in the bilateral superior temporal sulcus, in line with the presence of emotional prosody. In addition, the ventromedial prefrontal cortex was more active during emotional than ToM sentence processing. This region may link mental state representations with verbal and prosodic emotional cues. Compared with emotional sentence classification, ToM was associated with greater activity in the caudate nucleus, paracingulate cortex, and superior frontal and parietal regions, in line with behavioral data showing that ToM sentence comprehension was a more demanding task. PMID:23342148
Vocal learning, prosody, and basal ganglia: don't underestimate their complexity.
Ravignani, Andrea; Martins, Mauricio; Fitch, W Tecumseh
2014-12-01
Ackermann et al.'s arguments in the target article need sharpening and rethinking at both mechanistic and evolutionary levels. First, the authors' evolutionary arguments are inconsistent with recent evidence concerning nonhuman animal rhythmic abilities. Second, prosodic intonation conveys much more complex linguistic information than mere emotional expression. Finally, human adults' basal ganglia have a considerably wider role in speech modulation than Ackermann et al. surmise.
Hemispheric Asymmetry for Linguistic Prosody: A Study of Stress Perception in Croatian
ERIC Educational Resources Information Center
Mildner, Vesna
2004-01-01
The aim of the study was to test for possible functional cerebral asymmetry in processing one segment of linguistic prosody, namely word stress, in Croatian. The test material consisted of eight tokens of the word "pas" under a falling accent, varying only in vowel duration between 119 and 185ms, attached to the end of a frame sentence. The…
Yang, Y J Daniel; Allen, Tandra; Abdullahi, Sebiha M; Pelphrey, Kevin A; Volkmar, Fred R; Chapman, Sandra B
2017-06-01
Autism Spectrum Disorder (ASD) is characterized by remarkable heterogeneity in social, communication, and behavioral deficits, creating a major barrier to identifying effective treatments for a given individual with ASD. To facilitate precision medicine in ASD, we utilized a well-validated biological motion neuroimaging task to identify pretreatment biomarkers that can accurately forecast the response to an evidence-based behavioral treatment, Virtual Reality-Social Cognition Training (VR-SCT). In a preliminary sample of 17 young adults with high-functioning ASD, we identified neural predictors of change in emotion recognition after VR-SCT. The predictors were characterized by the pretreatment brain activations to biological vs. scrambled motion in the neural circuits that support (a) language comprehension and interpretation of incongruent auditory emotions and prosody, and (b) processing socio-emotional experience and interpersonal affective information, as well as emotional regulation. The predictive value of the findings for individual adults with ASD was supported by regression-based multivariate pattern analyses with cross validation. To our knowledge, this is the first pilot study to show neuroimaging-based predictive biomarkers for treatment effectiveness in adults with ASD. The findings have potentially far-reaching implications for developing more precise and effective treatments for ASD. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Atypical neural responses to vocal anger in attention-deficit/hyperactivity disorder.
Chronaki, Georgia; Benikos, Nicholas; Fairchild, Graeme; Sonuga-Barke, Edmund J S
2015-04-01
Deficits in facial emotion processing, reported in attention-deficit/hyperactivity disorder (ADHD), have been linked to both early perceptual and later attentional components of event-related potentials (ERPs). However, the neural underpinnings of vocal emotion processing deficits in ADHD have yet to be characterised. Here, we report the first ERP study of vocal affective prosody processing in ADHD. Event-related potentials of 6-11-year-old children with ADHD (n = 25) and typically developing controls (n = 25) were recorded as they completed a task measuring recognition of vocal prosodic stimuli (angry, happy and neutral). Audiometric assessments were conducted to screen for hearing impairments. Children with ADHD were less accurate than controls at recognising vocal anger. Relative to controls, they displayed enhanced N100 and attenuated P300 components to vocal anger. The P300 effect was reduced, but remained significant, after controlling for N100 effects by rebaselining. Only the N100 effect was significant when children with ADHD and comorbid conduct disorder (n = 10) were excluded. This study provides the first evidence linking ADHD to atypical neural activity during the early perceptual stages of vocal anger processing. These effects may reflect preattentive hyper-vigilance to vocal anger in ADHD. © 2014 Association for Child and Adolescent Mental Health.
Cespedes-Guevara, Julian; Eerola, Tuomas
2018-01-01
Basic Emotion theory has had a tremendous influence on the affective sciences, including music psychology, where most researchers have assumed that music expressivity is constrained to a limited set of basic emotions. Several scholars have suggested that these constraints on musical expressivity are explained by the existence of a shared acoustic code for the expression of emotions in music and speech prosody. In this article we advocate for a shift from this focus on basic emotions to a constructionist account. This approach proposes that the phenomenon of perception of emotions in music arises from the interaction of music's ability to express core affects and the influence of top-down and contextual information in the listener's mind. We start by reviewing the problems with the concept of Basic Emotions, and the inconsistent evidence that supports it. We also demonstrate how decades of developmental and cross-cultural research on music and emotional speech have failed to produce convincing findings to conclude that music expressivity is built upon a set of biologically pre-determined basic emotions. We then examine the cue-emotion consistencies between music and speech, and show how they support a parsimonious explanation, where musical expressivity is grounded in two dimensions of core affect (arousal and valence). Next, we explain how the fact that listeners reliably identify basic emotions in music does not arise from the existence of categorical boundaries in the stimuli, but from processes that facilitate categorical perception, such as the use of stereotyped stimuli and close-ended response formats, psychological processes of construction of mental prototypes, and contextual information. Finally, we outline our proposal of a constructionist account of perception of emotions in music, and spell out the ways in which this approach is able to resolve past conflicting findings.
We conclude by providing explicit pointers about the methodological choices that will be vital to move beyond the popular Basic Emotion paradigm and start untangling the emergence of emotional experiences with music in the actual contexts in which they occur. PMID:29541041
Processing of English Focal Stress by L1-English and L1-Mandarin/L2-English Speakers
ERIC Educational Resources Information Center
Guigelaar, Ellen R.
2017-01-01
Late second language (L2) learners often struggle with L2 prosody, both in perception and production. This may result from first language (L1) interference or some property of how a second language functions in a late learner independent of what their L1 might be. Here we investigate prosody's role in determining information structure through…
ERP correlates of motivating voices: quality of motivation and time-course matters
Zougkou, Konstantina; Weinstein, Netta; Paulmann, Silke
2017-01-01
Abstract Here, we conducted the first study to explore how motivations expressed through speech are processed in real-time. Participants listened to sentences spoken in two types of well-studied motivational tones (autonomy-supportive and controlling), or a neutral tone of voice. To examine this, listeners were presented with sentences that either signaled motivations through prosody (tone of voice) and words simultaneously (e.g. ‘You absolutely have to do it my way’ spoken in a controlling tone of voice), or lacked motivationally biasing words (e.g. ‘Why don’t we meet again tomorrow’ spoken in a motivational tone of voice). Event-related brain potentials (ERPs) in response to motivations conveyed through words and prosody showed that listeners rapidly distinguished between motivations and neutral forms of communication as shown in enhanced P2 amplitudes in response to motivational when compared with neutral speech. This early detection mechanism is argued to help determine the importance of incoming information. Once assessed, motivational language is continuously monitored and thoroughly evaluated. When compared with neutral speech, listening to controlling (but not autonomy-supportive) speech led to enhanced late potential ERP mean amplitudes, suggesting that listeners are particularly attuned to controlling messages. The importance of controlling motivation for listeners is mirrored in effects observed for motivations expressed through prosody only. Here, an early rapid appraisal, as reflected in enhanced P2 amplitudes, is only found for sentences spoken in controlling (but not autonomy-supportive) prosody. Once identified as sounding pressuring, the message seems to be preferentially processed, as shown by enhanced late potential amplitudes in response to controlling prosody. 
Taken together, results suggest that motivational and neutral language are differentially processed; further, the data suggest that listening to cues signaling pressure and control cannot be ignored and lead to preferential, and more in-depth processing mechanisms. PMID:28525641
Development in Children’s Interpretation of Pitch Cues to Emotions
Quam, Carolyn; Swingley, Daniel
2012-01-01
Young infants respond to positive and negative speech prosody (Fernald, 1993), yet 4-year-olds rely on lexical information when it conflicts with paralinguistic cues to approval or disapproval (Friend, 2003). This article explores this surprising phenomenon, testing 118 2- to 5-year-olds’ use of isolated pitch cues to emotions in interactive tasks. Only 4- to 5-year-olds consistently interpreted exaggerated, stereotypically happy or sad pitch contours as evidence that a puppet had succeeded or failed to find his toy (Experiment 1) or was happy or sad (Experiments 2, 3). Two- and three-year-olds exploited facial and body-language cues in the same task. The authors discuss the implications of this late-developing use of pitch cues to emotions, relating them to other functions of pitch. PMID:22181680
Jiang, Cunmei; Hamm, Jeff P; Lim, Vanessa K; Kirk, Ian J; Chen, Xuhai; Yang, Yufang
2012-01-01
Pitch processing is a critical ability on which humans' tonal musical experience depends, and which is also of paramount importance for decoding prosody in speech. Congenital amusia refers to deficits in the ability to properly process musical pitch, and recent evidence has suggested that this musical pitch disorder may impact upon the processing of speech sounds. Here we present the first electrophysiological evidence demonstrating that individuals with amusia who speak Mandarin Chinese are impaired in classifying prosody as appropriate or inappropriate during a speech comprehension task. When presented with inappropriate prosody stimuli, control participants elicited a larger P600 and smaller N100 relative to the appropriate condition. In contrast, amusics did not show significant differences between the appropriate and inappropriate conditions in either the N100 or the P600 component. This provides further evidence that the pitch perception deficits associated with amusia may also affect intonation processing during speech comprehension in those who speak a tonal language such as Mandarin, and suggests music and language share some cognitive and neural resources. PMID:22859982
Age-related changes to the production of linguistic prosody
NASA Astrophysics Data System (ADS)
Barnes, Daniel R.
The production of speech prosody (the rhythm, pausing, and intonation associated with natural speech) is critical to effective communication. The current study investigated the impact of age-related changes to physiology and cognition in relation to the production of two types of linguistic prosody: lexical stress and the disambiguation of syntactically ambiguous utterances. Analyses of the acoustic correlates of stress (speech intensity, or sound-pressure level, SPL; fundamental frequency, F0; key word/phrase duration; and pause duration) revealed that both young and older adults effectively use these acoustic features to signal linguistic prosody, although the relative weighting of cues differed by group. Differences in F0 were attributed to age-related physiological changes in the laryngeal subsystem, while group differences in duration measures were attributed to relative task complexity and the cognitive-linguistic load of these respective tasks. The current study provides normative acoustic data for older adults that inform the interpretation of clinical findings as well as research pertaining to dysprosody resulting from disease processes.
Sheppard, Shannon M; Love, Tracy; Midgley, Katherine J; Holcomb, Phillip J; Shapiro, Lewis P
2017-12-01
Event-related potentials (ERPs) were used to examine how individuals with aphasia and a group of age-matched controls use prosody and thematic fit information in sentences containing temporary syntactic ambiguities. Two groups of individuals with aphasia were investigated: those demonstrating relatively good sentence comprehension whose primary language difficulty is anomia (Individuals with Anomic Aphasia (IWAA)), and those who demonstrate impaired sentence comprehension whose primary diagnosis is Broca's aphasia (Individuals with Broca's Aphasia (IWBA)). The stimuli had early closure syntactic structure and contained a temporary early closure (correct)/late closure (incorrect) syntactic ambiguity. The prosody was manipulated to be either congruent or incongruent, and the temporarily ambiguous NP was also manipulated to be either a plausible or an implausible continuation of the subordinate verb (e.g., "While the band played the song/the beer pleased all the customers."). It was hypothesized that an implausible NP in sentences with incongruent prosody may provide the parser with a plausibility cue that could be used to predict syntactic structure. The results revealed that incongruent prosody paired with a plausibility cue resulted in an N400-P600 complex at the implausible NP (the beer) in both the controls and the IWAAs, yet incongruent prosody without a plausibility cue resulted in an N400-P600 at the critical verb (pleased) only in healthy controls. IWBAs did not show evidence of N400 or P600 effects at the ambiguous NP or critical verb, although they did show evidence of a delayed N400 effect at the sentence-final word in sentences with incongruent prosody. These results suggest that IWAAs have difficulty integrating prosodic cues with underlying syntactic structure when lexical-semantic information is not available to aid their parse.
IWBAs have difficulty integrating both prosodic and lexical-semantic cues with syntactic structure, likely due to a processing delay. Copyright © 2017 Elsevier Ltd. All rights reserved.
Implicit prosody mining based on the human eye image capture technology
NASA Astrophysics Data System (ADS)
Gao, Pei-pei; Liu, Feng
2013-08-01
Eye tracking has become one of the main methods for analyzing recognition issues in human-computer interaction, and human eye image capture is the key problem in eye tracking. Building on this research, a new human-computer interaction method is introduced to enrich speech synthesis. We propose a method of Implicit Prosody mining based on human eye image capture technology: parameters are extracted from images of the eyes during reading to control and drive prosody generation in speech synthesis, and a prosodic model with high simulation accuracy is established. The duration model is a key issue for prosody generation. For the duration model, this paper puts forward a new idea for obtaining the gaze duration of the eyes during reading based on eye image capture technology, and for synchronously controlling this duration and the pronunciation duration in speech synthesis. Eye movement during reading is a comprehensive, multi-factor interactive process involving fixations, saccades, and regressions. Therefore, how to extract the appropriate information from images of the eyes needs to be considered, and the gaze regularity of the eyes needs to be obtained as a reference for modeling. Based on an analysis of three current eye movement control models and the characteristics of Implicit Prosody reading, the relative independence between the text speech processing system and the eye movement control system is discussed. It is shown that, under the same text familiarity condition, the gaze duration of the eyes during reading and the internal voice pronunciation duration are synchronous. An eye gaze duration model based on Chinese prosodic structure is presented, replacing previous methods of machine learning and probability forecasting, to capture readers' real internal reading rhythm and to synthesize speech with personalized rhythm.
This research will enrich human-computer interaction and has practical significance and application prospects for assistive speech interaction for people with disabilities. Experiments show that Implicit Prosody mining based on human eye image capture technology gives the synthesized speech more flexible expression.
Gender Differences in the Recognition of Vocal Emotions
Lausen, Adi; Schacht, Annekathrin
2018-01-01
The conflicting findings from the few studies conducted on gender differences in the recognition of vocal expressions of emotion have left the exact nature of these differences unclear. Several investigators have argued that a comprehensive understanding of gender differences in vocal emotion recognition can only be achieved by replicating these studies while accounting for influential factors such as stimulus type, gender-balanced samples, and the number of encoders, decoders, and emotional categories. This study aimed to account for these factors by investigating whether emotion recognition from vocal expressions differs as a function of both listeners' and speakers' gender. A total of N = 290 participants were randomly and equally allocated to two groups. One group listened to words and pseudo-words, while the other group listened to sentences and affect bursts. Participants were asked to categorize the stimuli with respect to the expressed emotions in a fixed-choice response format. Overall, females were more accurate than males when decoding vocal emotions; however, when testing for specific emotions, these differences were small in magnitude. Speakers' gender had a significant impact on how listeners judged emotions from the voice. The group listening to words and pseudo-words had higher identification rates for emotions spoken by male than by female actors, whereas in the group listening to sentences and affect bursts the identification rates were higher when emotions were uttered by female than by male actors. The mixed pattern for emotion-specific effects, however, indicates that, in the vocal channel, the reliability of emotion judgments is not systematically influenced by speakers' gender and the related stereotypes of emotional expressivity. Together, these results extend previous findings by showing effects of listeners' and speakers' gender on the recognition of vocal emotions.
They stress the importance of distinguishing these factors to explain recognition ability in the processing of emotional prosody. PMID:29922202
Boucher, Victor J
2006-01-01
Language learning requires a capacity to recall novel series of speech sounds. Research shows that prosodic marks create grouping effects enhancing serial recall. However, any restriction on memory affecting the reproduction of prosody would limit the set of patterns that could be learned and subsequently used in speech. By implication, grouping effects of prosody would also be limited to reproducible patterns. This view of the role of prosody and the contribution of memory processes in the organization of prosodic patterns is examined by evaluating the correspondence between a reported tendency to restrict stress intervals in speech and size limits on stress-grouping effects. French speech is used where stress defines the endpoints of groups. In Experiment 1, 40 speakers recalled novel series of syllables containing stress-groups of varying size. Recall was not enhanced by groupings exceeding four syllables, which corresponded to a restriction on the reproducibility of stress-groups. In Experiment 2, the subjects produced given sentences containing phrases of differing length. The results show a strong tendency to insert stress within phrases that exceed four syllables. Since prosody can arise in the recall of syntactically unstructured lists, the results offer initial support for viewing memory processes as a factor of stress-rhythm organization.
Age- and gender-related variations of emotion recognition in pseudowords and faces.
Demenescu, Liliana R; Mathiak, Krystyna A; Mathiak, Klaus
2014-01-01
BACKGROUND/STUDY CONTEXT: The ability to interpret emotionally salient stimuli is an important skill for successful social functioning at any age. The objective of the present study was to disentangle age and gender effects on emotion recognition ability in voices and faces. Three age groups of participants (young, 18-35 years; middle-aged, 36-55 years; and older, 56-75 years) identified basic emotions presented in voices and faces in a forced-choice paradigm. Five emotions (angry, fearful, sad, disgusted, and happy) and a nonemotional category (neutral) were presented as color photographs of facial expressions and as pseudowords spoken with affective prosody. Overall, older participants had a lower accuracy rate in categorizing emotions than young and middle-aged participants. Females performed better than males in recognizing emotions from voices, and this gender difference emerged in middle-aged and older participants. Performance on emotion recognition in faces was significantly correlated with performance in voices. The current study provides further evidence for general age and gender effects on emotion recognition; the advantage of females seems to be age- and stimulus modality-dependent.
Veenendaal, Nathalie J.; Groen, Margriet A.; Verhoeven, Ludo
2016-01-01
The purpose of this study was to examine the directionality of the relationship between text reading prosody and reading comprehension in the upper grades of primary school. We compared three theoretical possibilities: Two unidirectional relations from text reading prosody to reading comprehension and from reading comprehension to text reading prosody and a bidirectional relation between text reading prosody and reading comprehension. Further, we controlled for autoregressive effects and included decoding efficiency as a measure of general reading skill. Participants were 99 Dutch children, followed longitudinally, from fourth- to sixth-grade. Structural equation modeling showed that the bidirectional relation provided the best fitting model. In fifth-grade, text reading prosody was related to prior decoding and reading comprehension, whereas in sixth-grade, reading comprehension was related to prior text reading prosody. As such, the results suggest that the relation between text reading prosody and reading comprehension is reciprocal, but dependent on grade level. PMID:27667916
ERIC Educational Resources Information Center
Nickels, Stefanie; Steinhauer, Karsten
2018-01-01
The role of prosodic information in sentence processing is not usually addressed in second language (L2) instruction, and neurocognitive studies on prosody-syntax interactions are rare. Here we compare event-related potentials (ERP) of Chinese and German learners of English L2 to those of native English speakers and show how first language (L1)…
Processing Load Imposed by Line Breaks in English Temporal Wh-Questions
Hirotani, Masako; Terry, J. Michael; Sadato, Norihiro
2016-01-01
Prosody plays an important role in online sentence processing both explicitly and implicitly. It has been shown that prosodically packaging together parts of a sentence that are interpreted together facilitates processing of the sentence. This applies not only to explicit prosody but also implicit prosody. The present work hypothesizes that a line break in a written text induces an implicit prosodic break, which, in turn, should result in a processing bias for interpreting English wh-questions. Two experiments—one self-paced reading study and one questionnaire study—are reported. Both supported the “line break” hypothesis mentioned above. The results of the self-paced reading experiment showed that unambiguous wh-questions were read faster when the location of line breaks (or frame breaks) matched the scope of a wh-phrase (main or embedded clause) than when they did not. The questionnaire tested sentences with an ambiguous wh-phrase, one that could attach either to the main or the embedded clause. These sentences were interpreted as attaching to the main clause more often than to the embedded clause when a line break appeared after the main verb, but not when it appeared after the embedded verb. PMID:27774072
ERIC Educational Resources Information Center
Holliman, A. J.; Williams, G. J.; Mundy, I. R.; Wood, C.; Hart, L.; Waldron, S.
2014-01-01
A growing number of studies now suggest that sensitivity to the rhythmic patterning of speech (prosody) is implicated in successful reading acquisition. However, recent evidence suggests that prosody is not a unitary construct and that the different components of prosody (stress, intonation, and timing) operating at different linguistic levels…
Gilboa-Schechtman, Eva; Shachar-Lavie, Iris
2013-01-01
Processing of nonverbal social cues (NVSCs) is essential to interpersonal functioning and is particularly relevant to models of social anxiety. This article provides a review of the literature on NVSC processing from the perspective of social rank and affiliation biobehavioral systems (ABSs), based on functional analysis of human sociality. We examine the potential of this framework for integrating cognitive, interpersonal, and evolutionary accounts of social anxiety. We argue that NVSCs are uniquely suited to rapid and effective conveyance of emotional, motivational, and trait information and that various channels are differentially effective in transmitting such information. First, we review studies on perception of NVSCs through face, voice, and body. We begin with studies that utilized information processing or imaging paradigms to assess NVSC perception. This research demonstrated that social anxiety is associated with biased attention to, and interpretation of, emotional facial expressions (EFEs) and emotional prosody. Findings regarding body and posture remain scarce. Next, we review studies on NVSC expression, which pinpointed links between social anxiety and disturbances in eye gaze, facial expressivity, and vocal properties of spontaneous and planned speech. Again, links between social anxiety and posture were understudied. Although cognitive, interpersonal, and evolutionary theories have described different pathways to social anxiety, all three models focus on interrelations among cognition, subjective experience, and social behavior. NVSC processing and production comprise the juncture where these theories intersect. In light of the conceptualizations emerging from the review, we highlight several directions for future research including focus on NVSCs as indexing reactions to changes in belongingness and social rank, the moderating role of gender, and the therapeutic opportunities offered by embodied cognition to treat social anxiety. PMID:24427129
ERIC Educational Resources Information Center
Krauss, Michael, Ed.
Nine papers on Yupik Eskimo prosody systems are presented. An introductory section gives background information on the Yupik language and dialects, defines prosody, and provides notes on orthography. The papers include: "A History of the Study of Yupik Prosody" (Michael Krauss); "Siberian Yupik and Central Yupik Prosody"…
Assessing the Relationship between Prosody and Reading Outcomes in Children Using the PEPS-C
ERIC Educational Resources Information Center
Lochrin, Margaret; Arciuli, Joanne; Sharma, Mridula
2015-01-01
This study investigated the relationship between both receptive and expressive prosody and each of three reading outcomes: accuracy of reading aloud words, accuracy of reading aloud nonwords, and comprehension. Participants were 63 children aged 7 to 12 years. To assess prosody, we used the Profiling Elements of Prosody in Speech Communication…
Emotional recognition in depressed epilepsy patients.
Brand, Jesse G; Burton, Leslie A; Schaffer, Sarah G; Alper, Kenneth R; Devinsky, Orrin; Barr, William B
2009-07-01
The current study examined the relationship between emotional recognition and depression using the Minnesota Multiphasic Personality Inventory, Second Edition (MMPI-2), in a population with epilepsy. Participants were a mixture of surgical candidates in addition to those receiving neuropsychological testing as part of a comprehensive evaluation. Results suggested that patients with epilepsy reporting increased levels of depression (Scale D) performed better than those patients reporting low levels of depression on an index of simple facial recognition, and depression was associated with poor prosody discrimination. Further, it is notable that more than half of the present sample had significantly elevated Scale D scores. The potential effects of a mood-congruent bias and implications for social functioning in depressed patients with epilepsy are discussed.
ERIC Educational Resources Information Center
Tian, Shuang; Murao, Remi
2016-01-01
The present study examined the use of prosody in semantic and syntactic disambiguation by means of comparison between Japanese and Chinese speakers' production of English sentences. In Chinese and Japanese, lexical prosody is more prominent than sentence prosody, and the sentential meaning contrast is usually realized through particles or a change…
Maurage, Pierre; Campanella, Salvatore; Philippot, Pierre; Charest, Ian; Martin, Sophie; de Timary, Philippe
2009-01-01
Emotional facial expression (EFE) decoding impairment has been repeatedly reported in alcoholism (e.g. Philippot et al., 1999). Nevertheless, several questions are still under debate concerning this alteration, notably its generalization to other emotional stimuli and its variation according to the emotional valence of stimuli. Eighteen recently detoxified alcoholic subjects and 18 matched controls performed a decoding test consisting of emotional intensity ratings of various stimuli (faces, voices, body postures and written scenarios) depicting different emotions (anger, fear, happiness, neutral, sadness). Perceived threat and difficulty were also assessed for each stimulus. Alcoholic individuals had a preserved decoding performance for happiness stimuli, but alcoholism was associated with an underestimation of sadness and fear, and with a general overestimation of anger. More importantly, these decoding impairments were observed for faces, voices and postures but not for written scenarios. We observed for the first time a generalized emotional decoding impairment in alcoholism, as this impairment is present not only for faces but also for other visual (i.e. body postures) and auditory stimuli. Moreover, we report that this alteration (1) is mainly indexed by an overestimation of anger and (2) cannot be explained by an 'affect labelling' impairment, as the semantic comprehension of written emotional scenarios is preserved. Fundamental and clinical implications are discussed.
Talking about Emotion: Prosody and Skin Conductance Indicate Emotion Regulation.
Matejka, Moritz; Kazzer, Philipp; Seehausen, Maria; Bajbouj, Malek; Klann-Delius, Gisela; Menninghaus, Winfried; Jacobs, Arthur M; Heekeren, Hauke R; Prehn, Kristin
2013-01-01
Talking about emotion and putting feelings into words has been hypothesized to regulate emotion in psychotherapy as well as in everyday conversation. However, the exact dynamics of how different strategies of verbalization regulate emotion, and how these strategies are reflected in characteristics of the voice, have received little scientific attention. In the present study, we showed emotional pictures to 30 participants and asked them to verbally admit or deny an emotional experience or a neutral fact concerning the picture in a simulated conversation. We used a 2 × 2 factorial design manipulating the focus (on emotion or facts) as well as the congruency (admitting or denying) of the verbal expression. Analyses of skin conductance response (SCR) and voice during the verbalization conditions revealed a main effect of the factor focus. SCR and pitch of the voice were lower during emotion compared to fact verbalization, indicating lower autonomic arousal. In contrast to these physiological parameters, participants reported that fact verbalization was more effective in down-regulating their emotion than emotion verbalization. These subjective ratings, however, were in line with voice parameters associated with emotional valence. That is, voice intensity showed that fact verbalization reduced negative valence more than emotion verbalization. In sum, the results of our study provide evidence that emotion verbalization as compared to fact verbalization is an effective emotion regulation strategy. Moreover, based on the results of our study we propose that different verbalization strategies influence valence and arousal aspects of emotion selectively.
Leitman, David I; Ziwich, Rachel; Pasternak, Roey; Javitt, Daniel C
2006-08-01
Theory of Mind (ToM) refers to the ability to infer another person's mental state based upon interactional information. ToM deficits have been suggested to underlie crucial aspects of social interaction failure in disorders such as autism and schizophrenia, although the development of paradigms for demonstrating such deficits remains an ongoing area of research. Recent studies have explored the use of sarcasm perception, in which subjects must infer an individual's sincerity or lack thereof, as a 'real-life' index of ToM ability, and as an index of functioning of specific right hemispheric structures. Sarcasm detection ability has not previously been studied in schizophrenia, although patients have been shown to have deficits in the ability to decode emotional information from speech ('affective prosody'). Twenty-two schizophrenia patients and 17 control subjects were tested on their ability to detect sarcasm from spoken speech as well as measures of affective prosody and basic pitch perception. Despite normal overall intelligence, patients performed substantially worse than controls in ability to detect sarcasm (d=2.2), showing both decreased sensitivity (A') in detection of sincerity versus sarcasm and an increased bias (B'') toward sincerity. Correlations across groups revealed significant relationships between impairments in sarcasm recognition, affective prosody and basic pitch perception. These findings demonstrate substantial deficits in ability to infer an internal subjective state based upon vocal modulation among subjects with schizophrenia. Deficits were related to, but were significantly more severe than, more general forms of prosodic and sensorial misperception, and are consistent with both right hemispheric and 'bottom-up' theories of the disorder.
Dream actors in the theatre of memory: their role in the psychoanalytic process.
Mancia, Mauro
2003-08-01
The author notes that neuropsychological research has discovered the existence of two long-term memory systems, namely declarative or explicit memory, which is conscious and autobiographical, and non-declarative or implicit memory, which is neither conscious nor verbalisable. It is suggested that pre-verbal and pre-symbolic experience in the child's primary relations is stored in implicit memory, where it constitutes an unconscious nucleus of the self which is not repressed and which influences the person's affective, emotional, cognitive and sexual life even as an adult. In the analytic relationship this unconscious part can emerge essentially through certain modes of communication (tone of voice, rhythm and prosody of the voice, and structure and tempo of speech), which could be called the 'musical dimension' of the transference, and through dream representations. Besides work on the transference, the critical component of the therapeutic action of psychoanalysis is stated to consist in work on dreams as pictographic and symbolic representations of implicit pre-symbolic and pre-verbal experiences. A case history is presented in which dream interpretation allowed some of a patient's early unconscious, non-repressed experiences to be emotionally reconstructed and made thinkable even though they were not actually remembered.
Pursuing prosody interventions.
Hargrove, Patricia M
2013-08-01
This paper provides an overview of evidence-based prosodic intervention strategies to facilitate clinicians' inclusion of prosody in their therapeutic planning and to encourage researchers' interest in prosody as an area of specialization. Four current evidence-based prosodic interventions are reviewed and answers to some important clinical questions are proposed. Additionally, the future direction of prosodic intervention research is discussed in recommendations about issues that are of concern to clinicians. The paper ends with a call for participation in an online collaboration at the Clinical Prosody blog at clinicalprosody.wordpress.com.
Nordström, Henrik; Laukka, Petri; Thingujam, Nutankumar S; Schubert, Emery; Elfenbein, Hillary Anger
2017-11-01
This study explored the perception of emotion appraisal dimensions on the basis of speech prosody in a cross-cultural setting. Professional actors from Australia and India vocally portrayed different emotions (anger, fear, happiness, pride, relief, sadness, serenity and shame) by enacting emotion-eliciting situations. In a balanced design, participants from Australia and India then inferred aspects of the emotion-eliciting situation from the vocal expressions, described in terms of appraisal dimensions (novelty, intrinsic pleasantness, goal conduciveness, urgency, power and norm compatibility). Bayesian analyses showed that the perceived appraisal profiles for the vocally expressed emotions were generally consistent with predictions based on appraisal theories. Few group differences emerged, which suggests that the perceived appraisal profiles are largely universal. However, some differences between Australian and Indian participants were also evident, mainly for ratings of norm compatibility. The appraisal ratings were further correlated with a variety of acoustic measures in exploratory analyses, and inspection of the acoustic profiles suggested similarity across groups. In summary, results showed that listeners may infer several aspects of emotion-eliciting situations from the non-verbal aspects of a speaker's voice. These appraisal inferences also seem to be relatively independent of the cultural background of the listener and the speaker.
PMID:29291085
Iconic Prosody in Story Reading
ERIC Educational Resources Information Center
Perlman, Marcus; Clark, Nathaniel; Falck, Marlene Johansson
2015-01-01
Recent experiments have shown that people iconically modulate their prosody corresponding with the meaning of their utterance (e.g., Shintel et al., 2006). This article reports findings from a story reading task that expands the investigation of iconic prosody to abstract meanings in addition to concrete ones. Participants read stories that…
NASA Astrophysics Data System (ADS)
Sheikhan, Mansour; Abbasnezhad Arabi, Mahdi; Gharavian, Davood
2015-10-01
Artificial neural networks are efficient models in pattern recognition applications, but their performance depends on employing a suitable structure and connection weights. This study used a hybrid method for obtaining the optimal weight set and architecture of a recurrent neural emotion classifier, based on the gravitational search algorithm (GSA) and its binary version (BGSA), respectively. By considering features of the speech signal related to prosody, voice quality, and spectrum, a rich feature set was constructed. To select more efficient features, a fast feature selection method was employed. The performance of the proposed hybrid GSA-BGSA method was compared with that of similar hybrid optimisation methods based on the particle swarm optimisation (PSO) algorithm and its binary version, PSO and the discrete firefly algorithm, and a hybrid of error back-propagation and a genetic algorithm. Experimental tests on the Berlin emotional speech database demonstrated the superior performance of the proposed method using a lighter network structure.
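The gravitational search step that this abstract builds on can be sketched in a few lines. The following is a generic, illustrative GSA minimiser following the standard formulation (Rashedi et al.), not the authors' implementation; the population size, iteration count, and G-decay parameters are all assumed values.

```python
import math
import random

def gsa_minimize(fitness, dim, bounds, n_agents=15, n_iter=80,
                 g0=100.0, alpha=20.0):
    """Minimal gravitational search algorithm (GSA) sketch.

    Agents are masses in the search space: better (lower) fitness means a
    heavier mass and a stronger pull on the other agents. The gravitational
    constant G decays over time, shifting from exploration to exploitation.
    """
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_agents)]
    vel = [[0.0] * dim for _ in range(n_agents)]
    best_pos, best_fit = None, float("inf")
    for t in range(n_iter):
        fits = [fitness(p) for p in pos]
        if min(fits) < best_fit:
            best_fit = min(fits)
            best_pos = pos[fits.index(best_fit)][:]
        worst, best = max(fits), min(fits)
        # Normalised masses: the best agent gets mass ~1, the worst ~0.
        raw = [(worst - f) / (worst - best + 1e-12) for f in fits]
        total = sum(raw) + 1e-12
        mass = [m / total for m in raw]
        g = g0 * math.exp(-alpha * t / n_iter)  # decaying gravitational constant
        for i in range(n_agents):
            acc = [0.0] * dim
            for j in range(n_agents):
                if i == j:
                    continue
                dist = math.dist(pos[i], pos[j]) + 1e-12
                for d in range(dim):
                    acc[d] += random.random() * g * mass[j] * (pos[j][d] - pos[i][d]) / dist
            for d in range(dim):
                vel[i][d] = random.random() * vel[i][d] + acc[d]
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
    return best_pos, best_fit
```

The binary variant (BGSA) used for architecture selection works analogously, with position updates mapped to bit-flip probabilities through a transfer function.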
[Self-consciousness, consciousness of the other and dementias].
Gil, Roger
2007-06-01
Studies of self-consciousness in dementia essentially concern anosognosia, or the loss of insight. However, self-consciousness is multifaceted: it includes awareness of the body, perceptions, one's own history, identity, and one's own projects. Self-consciousness is linked to consciousness of others, i.e. to social cognition, which is supported by identification of others, but also by comprehension of facial expressions of emotion, comprehension and expression of emotional prosody, pragmatic abilities, the ability to infer other people's mental states, thoughts, and feelings (theory of mind and empathy), knowledge of social norms and rules, and social reasoning. The subtypes of dementia (namely Alzheimer's disease and frontotemporal dementia) heterogeneously affect the different aspects of self- and other-consciousness. Further studies are needed for a better understanding of the complex relationships between self-consciousness, social cognition, decision making, and the neuropsychiatric symptoms and behavioural disturbances occurring in patients with dementia.
Structural and functional connectivity of the subthalamic nucleus during vocal emotion decoding
Frühholz, Sascha; Ceravolo, Leonardo; Grandjean, Didier
2016-01-01
Our understanding of the role played by the subthalamic nucleus (STN) in human emotion has recently advanced with STN deep brain stimulation, a neurosurgical treatment for Parkinson’s disease and obsessive-compulsive disorder. However, the potential presence of several confounds related to pathological models raises the question of how much they affect the relevance of observations regarding the physiological function of the STN itself. This underscores the crucial importance of obtaining evidence from healthy participants. In this study, we tested the structural and functional connectivity between the STN and other brain regions related to vocal emotion in a healthy population by combining diffusion tensor imaging and psychophysiological interaction analysis from a high-resolution functional magnetic resonance imaging study. As expected, we showed that the STN is functionally connected to the structures involved in emotional prosody decoding, notably the orbitofrontal cortex, inferior frontal gyrus, auditory cortex, pallidum and amygdala. These functional results were corroborated by probabilistic fiber tracking, which revealed that the left STN is structurally connected to the amygdala and the orbitofrontal cortex. These results confirm, in healthy participants, the role played by the STN in human emotion and its structural and functional connectivity with the brain network involved in vocal emotions. PMID:26400857
The Usage of CAUSE in Three Branches of Science
ERIC Educational Resources Information Center
Yang, Bei; Chen, Bin
2016-01-01
Semantic prosody is a concept that has been subject to considerable criticism and debate. One big concern is to what extent semantic prosody is domain- or register-related. Previous studies agree that CAUSE has an overwhelmingly negative meaning in general English. Its semantic prosody remains controversial in academic writing,…
Neurology of Affective Prosody and Its Functional-Anatomic Organization in Right Hemisphere
ERIC Educational Resources Information Center
Ross, Elliott D.; Monnot, Marilee
2008-01-01
Unlike the aphasic syndromes, the organization of affective prosody in brain has remained controversial because affective-prosodic deficits may occur after left or right brain damage. However, different patterns of deficits are observed following left and right brain damage that suggest affective prosody is a dominant and lateralized function of…
Perception and Production of Prosody by Speakers with Autism Spectrum Disorders
ERIC Educational Resources Information Center
Paul, Rhea; Augustyn, Amy; Klin, Ami; Volkmar, Fred R.
2005-01-01
Speakers with autism spectrum disorders (ASD) show difficulties in suprasegmental aspects of speech production, or "prosody," those aspects of speech that accompany words and sentences and create what is commonly called "tone of voice." However, little is known about the perception of prosody, or about the specific aspects of…
A Semantic Prosody Analysis of Three Adjective Synonymous Pairs in COCA
ERIC Educational Resources Information Center
Hu, H. C. Marcella
2015-01-01
Over the past two decades the concept of semantic prosody has attracted considerable research interest since Sinclair (1991) observed that "many uses of words and phrases show a tendency to occur in a certain semantic environment" (p. 112). Sinclair (2003) also noted that semantic prosody conveys its pragmatic meaning and attitudinal…
Ixpantepec Nieves Mixtec Word Prosody
NASA Astrophysics Data System (ADS)
Carroll, Lucien Serapio
This dissertation presents a phonological description and acoustic analysis of the word prosody of Ixpantepec Nieves Mixtec, which involves both a complex tone system and a default stress system. The analysis of Nieves Mixtec word prosody is complicated by a close association between morphological structure and prosodic structure, and by the interactions between word prosody and phonation type, which has both contrastive and non-contrastive roles in the phonology. I contextualize these systems within the phonology of Nieves Mixtec as a whole, within the literature on other Mixtec varieties, and within the literature on cross-linguistic prosodic typology. The literature on prosodic typology indicates that stress is necessarily defined abstractly, as structured prominence realized differently in each language. Descriptions of stress in other Mixtec varieties widely report default stress on the initial syllable of the canonical bimoraic root, though some descriptions suggest final stress or mobile stress. I first present phonological evidence---from distributional restrictions, phonological processes, and loanword adaptation---that Nieves Mixtec word prosody does involve a stress system, based on trochaic feet aligned to the root. I then present an acoustic study comparing stressed syllables to unstressed syllables, for ten potential acoustic correlates of stress. The results indicate that the acoustic correlates of stress in Nieves Mixtec include segmental duration, intensity and periodicity. Building on analyses of other Mixtec tone systems, I show that the distribution of tone and the tone processes in Nieves Mixtec support an analysis in which morae may bear H, M or L tone, where M tone is underlyingly unspecified, and each morpheme may sponsor a final +H or +L floating tone. Bimoraic roots thus host up to two linked tones and one floating tone, while monomoraic clitics host just one linked tone and one floating tone, and tonal morphemes are limited to a single floating tone. I then present three studies describing the acoustic realization of tone and comparing the realization of tone in different prosodic types. The findings of these studies include a strong directional asymmetry in tonal coarticulation, increased duration at the word or phrase boundary, phonation differences among the tone categories, and F0 differences between the glottalization categories.
ERIC Educational Resources Information Center
Chevallier, Coralie; Noveck, Ira; Happe, Francesca; Wilson, Deirdre
2009-01-01
We report findings concerning the understanding of prosody in Asperger Syndrome (AS), a topic which has attracted little attention and led to contradictory results. Ability to understand grammatical prosody was tested in three novel experiments. Experiment 1 assessed the interpretation of word stress, Experiment 2 focused on grammatical pauses,…
ERIC Educational Resources Information Center
Veenendaal, Nathalie J.; Groen, Margriet A.; Verhoeven, Ludo
2016-01-01
The purpose of this study was to examine the directionality of the relationship between text reading prosody and reading comprehension in the upper grades of primary school. We compared 3 theoretical possibilities: Two unidirectional relations from text reading prosody to reading comprehension and from reading comprehension to text reading prosody…
ERIC Educational Resources Information Center
Nadig, Aparna; Shaw, Holly
2012-01-01
Are there consistent markers of atypical prosody in speakers with high functioning autism (HFA) compared to typically-developing speakers? We examined: (1) acoustic measurements of pitch range, mean pitch and speech rate in conversation, (2) perceptual ratings of conversation for these features and overall prosody, and (3) acoustic measurements of…
Investigating Holistic Measures of Speech Prosody
ERIC Educational Resources Information Center
Cunningham, Dana Aliel
2012-01-01
Speech prosody is a multi-faceted dimension of speech which can be measured and analyzed in a variety of ways. In this study, the speech prosody of Mandarin L1 speakers, English L2 speakers, and English L1 speakers was assessed by trained raters who listened to sound clips of the speakers responding to a graph prompt and reading a short passage.…
[Life paths and motifs. Meeting points of hypnotherapy and music therapy].
Vas, P József
2013-01-01
The author proposes that the effects of both hypnotherapy and music therapy originate in attunement: one can be tuned in either to a hypnotherapist's suggestions or to a piece of music. On the one hand, the hypnotherapist's prosody, which can be described as melodic declamation, is a musical phenomenon that transmits emotions; on the other hand, music has emotional and visceral impacts. The author identifies four meeting points between the two methods: 1. musical analogies of vitality affects; 2. paternal and maternal archetypes in music; 3. analogies of coping in music; 4. correction of psychological deficits through hypno- and music therapy used in parallel with an energy healing method. Finally, the author suggests that hypnosis can be regarded as an inductive method, exerting its effect from outside to inside, whereas music is likely to be employed as a deductive therapeutic tool, acting from inside to outside.
ERIC Educational Resources Information Center
Diehl, Joshua John; Paul, Rhea
2013-01-01
Prosody production atypicalities are a feature of autism spectrum disorders (ASDs), but behavioral measures of performance have failed to provide detail on the properties of these deficits. We used acoustic measures of prosody to compare children with ASDs to age-matched groups with learning disabilities and typically developing peers. Overall,…
ERIC Educational Resources Information Center
Aguert, Marc; Laval, Virginie; Le Bigot, Ludovic; Bernicot, Josie
2010-01-01
Purpose: This study was aimed at determining the role of prosody and situational context in children's understanding of expressive utterances. Which one of these 2 cues will help children grasp the speaker's intention? Do children exhibit a "contextual bias" whereby they ignore prosody, such as the "lexical bias" found in other studies (M. Friend…
Immediate use of prosody and context in predicting a syntactic structure.
Nakamura, Chie; Arai, Manabu; Mazuka, Reiko
2012-11-01
Numerous studies have reported an effect of prosodic information on parsing but whether prosody can impact even the initial parsing decision is still not evident. In a visual world eye-tracking experiment, we investigated the influence of contrastive intonation and visual context on processing temporarily ambiguous relative clause sentences in Japanese. Our results showed that listeners used the prosodic cue to make a structural prediction before hearing disambiguating information. Importantly, the effect was limited to cases where the visual scene provided an appropriate context for the prosodic cue, thus eliminating the explanation that listeners have simply associated marked prosodic information with a less frequent structure. Furthermore, the influence of the prosodic information was also evident following disambiguating information, in a way that reflected the initial analysis. The current study demonstrates that prosody, when provided with an appropriate context, influences the initial syntactic analysis and also the subsequent cost at disambiguating information. The results also provide first evidence for pre-head structural prediction driven by prosodic and contextual information with a head-final construction. Copyright © 2012 Elsevier B.V. All rights reserved.
Prosody Predicts Contest Outcome in Non-Verbal Dialogs
Dreiss, Amélie N.; Chatelain, Philippe G.; Roulin, Alexandre; Richner, Heinz
2016-01-01
Non-verbal communication has important implications for inter-individual relationships and negotiation success. However, to what extent humans can spontaneously use rhythm and prosody as a sole communication tool is largely unknown. We analysed human ability to resolve a conflict without verbal dialogs, independently of semantics. We invited pairs of subjects to communicate non-verbally using whistle sounds. Along with the production of more whistles, participants unwittingly used a subtle prosodic feature to compete over a resource (ice-cream scoops). Winners can be identified by their propensity to accentuate the first whistles blown when replying to their partner, compared to the following whistles. Naive listeners correctly identified this prosodic feature as a key determinant of which whistler won the interaction. These results suggest that in the absence of other communication channels, individuals spontaneously use a subtle variation of sound accentuation (prosody), instead of merely producing exuberant sounds, to impose themselves in a conflict of interest. We discuss the biological and cultural bases of this ability and their link with verbal communication. Our results highlight the human ability to use non-verbal communication in a negotiation process. PMID:27907039
Prosody's Contribution to Fluency: An Examination of the Theory of Automatic Information Processing
ERIC Educational Resources Information Center
Schrauben, Julie E.
2010-01-01
LaBerge and Samuels' (1974) theory of automatic information processing in reading offers a model that explains how and where the processing of information occurs and the degree to which processing of information occurs. These processes are dependent upon two criteria: accurate word decoding and automatic word recognition. However, LaBerge and…
Laurent, Agathe; Arzimanoglou, Alexis; Panagiotakaki, Eleni; Sfaello, Ignacio; Kahane, Philippe; Ryvlin, Philippe; Hirsch, Edouard; de Schonen, Scania
2014-12-01
A high rate of abnormal social behavioural traits or perceptual deficits is observed in children with unilateral temporal lobe epilepsy. In the present study, perception of auditory and visual social signals, carried by faces and voices, was evaluated in children and adolescents with temporal lobe epilepsy. We prospectively investigated a sample of 62 children with focal non-idiopathic epilepsy early in the course of the disorder. The present analysis included 39 children with a confirmed diagnosis of temporal lobe epilepsy. Seventy-two control participants, distributed across 10 age groups, served as a comparison group. Our socio-perceptual evaluation protocol comprised three socio-visual tasks (face identity, facial emotion and gaze direction recognition), two socio-auditory tasks (voice identity and emotional prosody recognition), and three control tasks (lip reading, geometrical pattern and linguistic intonation recognition). All 39 patients also underwent a neuropsychological examination. As a group, children with temporal lobe epilepsy performed at a significantly lower level than the control group with regard to recognition of facial identity, direction of eye gaze, and emotional facial expressions. We found no relationship between the type of visual deficit and age at first seizure, duration of epilepsy, or the epilepsy-affected cerebral hemisphere. Deficits in socio-perceptual tasks could be found independently of the presence of deficits in visual or auditory episodic memory, visual non-facial pattern processing (control tasks), or speech perception. A normal FSIQ did not exempt some of the patients from an underlying deficit in some of the socio-perceptual tasks. Temporal lobe epilepsy not only impairs the development of emotion recognition but can also impair the development of perception of other socio-perceptual signals in children with or without intellectual deficiency. Prospective studies need to be designed to evaluate the results of appropriate re-education programs in children presenting with deficits in social cue processing.
Focus prosody of telephone numbers in Tokyo Japanese.
Lee, Yong-Cheol; Nambu, Satoshi; Cho, Sunghye
2018-05-01
Using production and perception experiments, this study examined whether the prosodic structure inherent to telephone numbers in Tokyo Japanese affects the realization of focus prosody as well as its perception. It was hypothesized that prosodic marking of focus differs by position within the digit groups of phone number strings. Overall, focus prosody of telephone numbers was not clearly marked, resulting in poor identification in perception. However, a difference between positions within digit groups was identified, reflecting a prosodic structure where one position is assigned an accentual peak instead of the other. The findings suggest that, conforming to a language-specific prosodic structure, focus prosody within a language can vary under the influence of a particular linguistic environment.
You 'have' to hear this: Using tone of voice to motivate others.
Weinstein, Netta; Zougkou, Konstantina; Paulmann, Silke
2018-06-01
The present studies explored the role of prosody in motivating others, and applied self-determination theory (Ryan & Deci, 2000) to do so. Initial studies describe patterns of prosody that discriminate motivational speech. Autonomy support was expressed with lower intensity, slower speech rate and less voice energy in both motivationally laden and neutral (but motivationally primed) sentences. In a follow-up study, participants were able to recognize motivational prosody in semantically neutral sentences, suggesting prosody alone may carry motivational content. Findings from subsequent studies also showed that an autonomy-supportive as compared with a controlling tone facilitated positive personal (perceived choice and lower perceived pressure, well-being) and interpersonal (closeness to others and prosocial behaviors) outcomes commonly linked to this type of motivation. Results inform both the social psychology (in particular motivation) and psycho-linguistic (in particular prosody) literatures and offer a first description of how motivational tone alone can shape listeners' experiences. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Elfmarková, Nela; Gajdoš, Martin; Mračková, Martina; Mekyska, Jiří; Mikl, Michal; Rektorová, Irena
2016-01-01
Impaired speech prosody is common in Parkinson's disease (PD). We assessed the impact of PD and levodopa on MRI resting-state functional connectivity (rs-FC) underlying speech prosody control. We studied 19 PD patients in the OFF and ON dopaminergic conditions and 15 age-matched healthy controls using functional MRI and seed partial least squares correlation (PLSC) analysis. In the PD group, we also correlated levodopa-induced rs-FC changes with the results of acoustic analysis. The PLSC analysis revealed a significant impact of PD but not of medication on the rs-FC strength of spatial correlation maps seeded by the anterior cingulate (p = 0.006), the right orofacial primary sensorimotor cortex (OF_SM1; p = 0.025) and the right caudate head (CN; p = 0.047). In the PD group, levodopa-induced changes in the CN and OF_SM1 connectivity strengths were related to changes in speech prosody. We demonstrated an impact of PD but not of levodopa on rs-FC within the brain networks related to speech prosody control. When only the PD patients were taken into account, an association between treatment-induced changes in speech prosody and changes in rs-FC within the associative striato-prefrontal and motor speech networks was found. Copyright © 2015 Elsevier Ltd. All rights reserved.
Hemispheric specialization in spontaneous gesticulation in a patient with callosal disconnection.
Lausberg, H; Davis, M; Rothenhäusler, A
2000-01-01
This is an investigation of spontaneous gesticulation in a left-handed patient with a callosal disconnection syndrome due to infarction of the total length of the corpus callosum. After callosal infarction, the patient gesticulated predominantly unilaterally with the left hand despite left apraxia. Bilateral gesticulation occurred later on and was presumably achieved by an increase in ipsilateral proximal control. Movement analysis further indicated that the two hemispheres are specialized for certain gesture types. Gestures with emotional connotation and batons (emphasizing prosody) were generated predominantly in the right hemisphere whereas physiographics which picture the linguistic content concretely and deictics (pointing) were of left-hemispheric origin.
NASA Astrophysics Data System (ADS)
Pitts, Wesley
2011-03-01
The major focus of this ethnographic study is devoted to exploring the confluence of global and local referents of science education in the context of an urban chemistry laboratory classroom taught by a first-generation Filipino-American male teacher. This study investigates encounters between the teacher and four second-generation immigrant female students of color, as well as encounters among the four students. The pervasive spread of the neoliberal ideology of accountability and sanctions, both globally and locally, particularly in public high schools in the Bronx, New York City, fuels situations for teaching and learning science that are encoded with the referents of top-down control. In the face of these challenges, classroom participants must become aware of productive ways to build solidarity and interstitial culture across salient social boundaries, such as age, gender, ethnicity and role, to create and sustain successful teaching and learning of chemistry. Empirical evidence for solidarity was guided by physical and verbal displays of synchrony, mutual focus, entrainment, emotional energy, body gestures, and prosody markers. This study shows that classroom participants used a combination of prosody markers to appropriate resources to decrease breaches in face-to-face encounters and, at the same time, create and sustain participation and solidarity to successfully complete an acid-base experiment.
Sound frequency affects speech emotion perception: results from congenital amusia
Lolli, Sydney L.; Lewenstein, Ari D.; Basurto, Julian; Winnik, Sean; Loui, Psyche
2015-01-01
Congenital amusics, or “tone-deaf” individuals, show difficulty in perceiving and producing small pitch differences. While amusia has marked effects on music perception, its impact on speech perception is less clear. Here we test the hypothesis that individual differences in pitch perception affect judgment of emotion in speech, by applying low-pass filters to spoken statements of emotional speech. A norming study was first conducted on Mechanical Turk to ensure that the intended emotions from the Macquarie Battery for Evaluation of Prosody were reliably identifiable by US English speakers. The most reliably identified emotional speech samples were used in Experiment 1, in which subjects performed a psychophysical pitch discrimination task, and an emotion identification task under low-pass and unfiltered speech conditions. Results showed a significant correlation between pitch-discrimination threshold and emotion identification accuracy for low-pass filtered speech, with amusics (defined here as those with a pitch discrimination threshold >16 Hz) performing worse than controls. This relationship with pitch discrimination was not seen in unfiltered speech conditions. Given the dissociation between low-pass filtered and unfiltered speech conditions, we inferred that amusics may be compensating for poorer pitch perception by using speech cues that are filtered out in this manipulation. To assess this potential compensation, Experiment 2 was conducted using high-pass filtered speech samples intended to isolate non-pitch cues. No significant correlation was found between pitch discrimination and emotion identification accuracy for high-pass filtered speech. Results from these experiments suggest an influence of low frequency information in identifying emotional content of speech. PMID:26441718
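The low-pass manipulation described above can be illustrated with a minimal first-order filter. This is only a sketch: speech research typically uses steeper filters (e.g., higher-order Butterworth designs), and the cutoff frequency below is an assumed value, not the one used in the study.

```python
import math

def low_pass(signal, cutoff_hz, sample_rate):
    """First-order (single-pole) low-pass filter.

    Attenuates components above cutoff_hz, roughly mimicking the removal
    of high-frequency, non-pitch cues from a speech signal.
    """
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = dt / (rc + dt)  # smoothing coefficient in (0, 1)
    out, prev = [], 0.0
    for x in signal:
        prev += alpha * (x - prev)  # exponential smoothing step
        out.append(prev)
    return out
```

Applied to speech, such filtering preserves the fundamental-frequency (pitch) region while stripping most higher spectral detail, which is why emotion judgments on low-pass filtered speech lean heavily on pitch perception.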
Liu, Pan; Pell, Marc D
2012-12-01
To establish a valid database of vocal emotional stimuli in Mandarin Chinese, a set of Chinese pseudosentences (i.e., semantically meaningless sentences that resembled real Chinese) were produced by four native Mandarin speakers to express seven emotional meanings: anger, disgust, fear, sadness, happiness, pleasant surprise, and neutrality. These expressions were identified by a group of native Mandarin listeners in a seven-alternative forced choice task, and items reaching a recognition rate of at least three times chance performance in the seven-choice task were selected as a valid database and then subjected to acoustic analysis. The results demonstrated expected variations in both perceptual and acoustic patterns of the seven vocal emotions in Mandarin. For instance, fear, anger, sadness, and neutrality were associated with relatively high recognition, whereas happiness, disgust, and pleasant surprise were recognized less accurately. Acoustically, anger and pleasant surprise exhibited relatively high mean f0 values and large variation in f0 and amplitude; in contrast, sadness, disgust, fear, and neutrality exhibited relatively low mean f0 values and small amplitude variations, and happiness exhibited a moderate mean f0 value and f0 variation. Emotional expressions varied systematically in speech rate and harmonics-to-noise ratio values as well. This validated database is available to the research community and will contribute to future studies of emotional prosody for a number of purposes. To access the database, please contact pan.liu@mail.mcgill.ca.
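Mean f0, one of the acoustic measures reported above, can be approximated with a toy autocorrelation pitch tracker. Validated databases like this one are normally analysed with dedicated tools such as Praat, so the function below is purely illustrative, and its default search range is an assumption.

```python
import math

def estimate_f0(frame, sample_rate, fmin=75.0, fmax=500.0):
    """Toy autocorrelation pitch estimator for a single voiced frame.

    Finds the lag (within the period range implied by fmin/fmax) at which
    the frame best correlates with a shifted copy of itself, and converts
    that lag to a frequency in Hz. Returns 0.0 if no positive peak is found.
    """
    min_lag = int(sample_rate / fmax)
    max_lag = min(int(sample_rate / fmin), len(frame) - 1)
    best_lag, best_corr = 0, 0.0
    for lag in range(min_lag, max_lag):
        corr = sum(frame[i] * frame[i + lag] for i in range(len(frame) - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag if best_lag else 0.0
```

A per-utterance mean f0 would then be the average of per-frame estimates over voiced frames; measures such as f0 variation, amplitude variation, and speech rate are derived similarly from the f0 track and segment timing.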
Prosody and informativity: A cross-linguistic investigation
NASA Astrophysics Data System (ADS)
Ouyang, Iris Chuoying
This dissertation aims to extend our knowledge of prosody -- in particular, what kinds of information may be conveyed through prosody, which prosodic dimensions may be used to convey them, and how individual speakers differ from one another in how they use prosody. Four production studies were conducted to examine how various factors interact with one another in shaping the prosody of an utterance and how prosody fulfills its multi-functional role. Experiment 1 explores the interaction between two types of informativity, namely information structure and information-theoretic properties. The results show that the prosodic consequences of new-information focus are modulated by the focused word's frequency, whereas the prosodic consequences of corrective focus are modulated by the focused word's probability in the context. Furthermore, f0 ranges appear to be more informative than f0 shapes in reflecting informativity across speakers. Specifically, speakers seem to have individual 'preferences' regarding f0 shapes, the f0 ranges they use for an utterance, and the magnitude of differences in f0 ranges by which they mark information-structural distinctions. In contrast, there is more cross-speaker consistency in the actual directions of differences in f0 ranges between information-structural types. Experiments 2 and 3 further show that the interaction found between corrective focus and contextual probability depends on the interlocutor's knowledge state. When the interlocutor has no access to the crucial information concerning utterances' contextual probability, speakers prosodically emphasize contextually improbable corrections, but not contextually probable corrections. Furthermore, speakers prosodically emphasize the corrections in response to contextually probable misstatements, but not the corrections in response to contextually improbable misstatements. 
In contrast, completely opposite patterns are found when words' contextual probability is shared knowledge between the speaker and the interlocutor: speakers prosodically emphasize contextually probable corrections and the corrections in response to contextually improbable misstatements. Experiment 4 demonstrates the multi-functionality of prosody by investigating its discourse-level functions in Mandarin Chinese, a tone language where a word's prosodic pattern is crucial to its meaning. The results show that, although prosody serves fundamental, lexical-level functions in Mandarin Chinese, it nevertheless provides cues to information structure as well. Similar to what has been found with English, corrective information is prosodically more prominent than non-corrective information, and new information is prosodically more prominent than given information. Taken together, these experiments demonstrate the complex relationship between prosody and the different types of information it encodes in a given language. To better understand prosody, it is important to integrate insights from different traditions of research and to investigate across languages. In addition, the findings of this research suggest that speakers' assumptions about what their interlocutors know -- as well as speakers' ability to update these expectations -- play a key role in shaping the prosody of utterances. I hypothesize that prosodic prominence may reflect the gap between what speakers had expected their interlocutors to say and what their interlocutors have actually said.
Learning word order at birth: A NIRS study.
Benavides-Varela, Silvia; Gervain, Judit
2017-06-01
In language, the relative order of words in sentences carries important grammatical functions. However, the developmental origins and the neural correlates of the ability to track word order are to date poorly understood. The current study therefore investigates the origins of infants' ability to learn about the sequential order of words, using near-infrared spectroscopy (NIRS) with newborn infants. We have conducted two experiments: one in which a word order change was implemented in 4-word sequences recorded with a list intonation (as if each word was a separate item in a list; list prosody condition, Experiment 1) and one in which the same 4-word sequences were recorded with a well-formed utterance-level prosodic contour (utterance prosody condition, Experiment 2). We found that newborns could detect the violation of the word order in the list prosody condition, but not in the utterance prosody condition. These results suggest that while newborns are already sensitive to word order in linguistic sequences, prosody appears to be a stronger cue than word order for the identification of linguistic units at birth. Copyright © 2017. Published by Elsevier Ltd.
Underconnectivity between voice-selective cortex and reward circuitry in children with autism.
Abrams, Daniel A; Lynch, Charles J; Cheng, Katherine M; Phillips, Jennifer; Supekar, Kaustubh; Ryali, Srikanth; Uddin, Lucina Q; Menon, Vinod
2013-07-16
Individuals with autism spectrum disorders (ASDs) often show insensitivity to the human voice, a deficit that is thought to play a key role in communication deficits in this population. The social motivation theory of ASD predicts that impaired function of reward and emotional systems impedes children with ASD from actively engaging with speech. Here we explore this theory by investigating distributed brain systems underlying human voice perception in children with ASD. Using resting-state functional MRI data acquired from 20 children with ASD and 19 age- and intelligence quotient-matched typically developing children, we examined intrinsic functional connectivity of voice-selective bilateral posterior superior temporal sulcus (pSTS). Children with ASD showed a striking pattern of underconnectivity between left-hemisphere pSTS and distributed nodes of the dopaminergic reward pathway, including bilateral ventral tegmental areas and nucleus accumbens, left-hemisphere insula, orbitofrontal cortex, and ventromedial prefrontal cortex. Children with ASD also showed underconnectivity between right-hemisphere pSTS, a region known for processing speech prosody, and the orbitofrontal cortex and amygdala, brain regions critical for emotion-related associative learning. The degree of underconnectivity between voice-selective cortex and reward pathways predicted symptom severity for communication deficits in children with ASD. Our results suggest that weak connectivity of voice-selective cortex and brain structures involved in reward and emotion may impair the ability of children with ASD to experience speech as a pleasurable stimulus, thereby impacting language and social skill development in this population. Our study provides support for the social motivation theory of ASD.
Paralinguistic Processing in Children with Callosal Agenesis: Emergence of Neurolinguistic Deficits
ERIC Educational Resources Information Center
Brown, W.S.; Symingtion, M.; VanLancker-Sidtis, D.; Dietrich, R.; Paul, L.K.
2005-01-01
Recent research revealed impaired processing of both nonliteral meaning and affective prosody in adults with agenesis of the corpus callosum (ACC) and normal intelligence. Since normal children have incomplete myelination of the corpus callosum, it was hypothesized that paralanguage deficits in children with ACC would be less apparent relative to…
Prosody in the hands of the speaker
Guellaï, Bahia; Langus, Alan; Nespor, Marina
2014-01-01
In everyday life, speech is accompanied by gestures. In the present study, two experiments tested the possibility that spontaneous gestures accompanying speech carry prosodic information. Experiment 1 showed that gestures provide prosodic information, as adults are able to perceive the congruency between low-pass filtered—thus unintelligible—speech and the gestures of the speaker. Experiment 2 shows that in the case of ambiguous sentences (i.e., sentences with two alternative meanings depending on their prosody) mismatched prosody and gestures lead participants to choose more often the meaning signaled by gestures. Our results demonstrate that the prosody that characterizes speech is not a modality specific phenomenon: it is also perceived in the spontaneous gestures that accompany speech. We draw the conclusion that spontaneous gestures and speech form a single communication system where the suprasegmental aspects of spoken language are mapped to the motor-programs responsible for the production of both speech sounds and hand gestures. PMID:25071666
ERIC Educational Resources Information Center
Rota, Giuseppina; Handjaras, Giacomo; Sitaram, Ranganatha; Birbaumer, Niels; Dogil, Grzegorz
2011-01-01
Mechanisms of cortical reorganization underlying the enhancement of speech processing have been poorly investigated. In the present study, we addressed changes in functional and effective connectivity induced in subjects who learned to deliberately increase activation in the right inferior frontal gyrus (rIFG), and improved their ability to…
Bögels, Sara; Schriefers, Herbert; Vonk, Wietske; Chwilla, Dorothee J; Kerkhofs, Roel
2013-11-01
This ERP study investigates whether a superfluous prosodic break (i.e., a prosodic break that does not coincide with a syntactic break) has more severe processing consequences during auditory sentence comprehension than a missing prosodic break (i.e., the absence of a prosodic break at the position of a syntactic break). Participants listened to temporarily ambiguous sentences involving a prosody-syntax match or mismatch. The disambiguation of these sentences was always lexical in nature in the present experiment. This contrasts with a related study by Pauker, Itzhak, Baum, and Steinhauer (2011), where the disambiguation was of a lexical type for missing PBs and of a prosodic type for superfluous PBs. Our results converge with those of Pauker et al. (2011): superfluous prosodic breaks lead to more severe processing problems than missing prosodic breaks. Importantly, the present results extend those of Pauker et al. (2011) showing that this holds when the disambiguation is always lexical in nature. Furthermore, our results show that the way listeners use prosody can change over the course of the experiment which bears consequences for future studies. © 2013 Elsevier Ltd. All rights reserved.
Do Persian Native Speakers Prosodically Mark Wh-in-situ Questions?
Shiamizadeh, Zohreh; Caspers, Johanneke; Schiller, Niels O
2018-02-01
It has been shown that prosody contributes to the contrast between declarativity and interrogativity, notably in interrogative utterances lacking lexico-syntactic features of interrogativity. Accordingly, it may be proposed that prosody plays a role in marking wh-in-situ questions in which the interrogativity feature (the wh-phrase) does not move to sentence-initial position, as, for example, in Persian. This paper examines whether prosody distinguishes Persian wh-in-situ questions from declaratives in the absence of the interrogativity feature in the sentence-initial position. To answer this question, a production experiment was designed in which wh-questions and declaratives were elicited from Persian native speakers. On the basis of the results of previous studies, we hypothesize that prosodic features mark wh-in-situ questions as opposed to declaratives at both the local (pre- and post-wh part) and global level (complete sentence). The results of the current study confirm our hypothesis that prosodic correlates mark the pre-wh part as well as the complete sentence in wh-in-situ questions. The results support theoretical concepts such as the frequency code, the universal dichotomous association between relaxation and declarativity on the one hand and tension and interrogativity on the other, the relation between prosody and pragmatics, and the relation between prosody and encoding and decoding of sentence type.
ERIC Educational Resources Information Center
Srinivasan, Ravindra J.; Massaro, Dominic W.
2003-01-01
Examined the processing of potential auditory and visual cues that differentiate statements from echoic questions. Found that both auditory and visual cues reliably conveyed statement and question intonation, were successfully synthesized, and generalized to other utterances. (Author/VWL)
Processing Implied Meaning through Contrastive Prosody
ERIC Educational Resources Information Center
Dennison, Heeyeon Yoon
2010-01-01
Understanding implicature--something meant, implied, or suggested distinct from what is said--is paramount for successful human communication. Yet, it is unclear how our cognitive abilities fill in gaps of unspecified information. This study presents three distinct sets of experiments investigating how people understand implied contrasts conveyed…
Her Voice Lingers on and Her Memory Is Strategic: Effects of Gender on Directed Forgetting
Yang, Hwajin; Yang, Sujin; Park, Giho
2013-01-01
The literature on directed forgetting has employed exclusively visual words. Thus, the potentially interesting aspects of a spoken utterance, which include not only vocal cues (e.g., prosody) but also the speaker and the listener, have been neglected. This study demonstrates that prosody alone does not influence directed-forgetting effects, while the sex of the speaker and the listener significantly modulate directed-forgetting effects for spoken utterances. Specifically, forgetting costs were attenuated for female-spoken items compared to male-spoken items, and forgetting benefits were eliminated among female listeners but not among male listeners. These results suggest that information conveyed in a female voice draws attention to its distinct perceptual attributes, thus interfering with retention of the semantic meaning, while female listeners' superior capacity for processing the surface features of spoken utterances may predispose them to spontaneously employ adaptive strategies to retain content information despite distraction by perceptual features. Our findings underscore the importance of sex differences when processing spoken messages in directed forgetting. PMID:23691141
Meier, Sandra L; Charleston, Alison J; Tippett, Lynette J
2010-11-01
Amyotrophic lateral sclerosis, a progressive disease affecting motor neurons, may variably affect cognition and behaviour. We tested the hypothesis that functions associated with orbitomedial prefrontal cortex are affected by evaluating the behavioural and cognitive performance of 18 participants with amyotrophic lateral sclerosis without dementia and 18 healthy, matched controls. We measured Theory of Mind (Faux Pas Task), emotional prosody recognition (Aprosodia Battery), reversal of behaviour in response to changes in reward (Probabilistic Reversal Learning Task), decision making without risk (Holiday Apartment Task) and aberrant behaviour (Neuropsychiatric Inventory). We also assessed dorsolateral prefrontal function, using verbal and written fluency and planning (One-touch Stockings of Cambridge), to determine whether impairments in tasks sensitive to these two prefrontal regions co-occur. The patient group was significantly impaired at identifying social faux pas, recognizing emotions and decision-making, indicating mild, but consistent impairment on most measures sensitive to orbitomedial prefrontal cortex. Significant levels of aberrant behaviour were present in 50% of patients. Patients were also impaired on verbal fluency and planning. Individual subject analyses involved computing classical dissociations between tasks sensitive to different prefrontal regions. These revealed heterogeneous patterns of impaired and spared cognitive abilities: 33% of participants had classical dissociations involving orbitomedial prefrontal tasks, 17% had classical dissociations involving dorsolateral prefrontal tasks, 22% had classical dissociations between tasks of both regions, and 28% had no classical dissociations. 
These data indicate subtle changes in behaviour, emotional processing, decision-making and altered social awareness, associated with orbitomedial prefrontal cortex, may be present in a significant proportion of individuals with amyotrophic lateral sclerosis without dementia, some with no signs of dysfunction in tasks sensitive to other regions of prefrontal cortex. This demonstration of variability in cognitive integrity supports previous research indicating amyotrophic lateral sclerosis is a heterogeneous disease.
Language dominance shapes non-linguistic rhythmic grouping in bilinguals.
Molnar, Monika; Carreiras, Manuel; Gervain, Judit
2016-07-01
To what degree non-linguistic auditory rhythm perception is governed by universal biases (e.g., Iambic-Trochaic Law; Hayes, 1995) or shaped by native language experience is debated. It has been proposed that rhythmic regularities in spoken language, such as phrasal prosody affect the grouping abilities of monolinguals (e.g., Iversen, Patel, & Ohgushi, 2008). Here, we assessed the non-linguistic tone grouping biases of Spanish monolinguals, and three groups of Basque-Spanish bilinguals with different levels of Basque experience. It is usually assumed in the literature that Basque and Spanish have different phrasal prosodies and even linguistic rhythms. To confirm this, first, we quantified Basque and Spanish phrasal prosody (Experiment 1a) and duration patterns used in the classification of languages into rhythm classes (Experiment 1b). The acoustic measurements revealed that regularities in phrasal prosody systematically differ across Basque and Spanish; by contrast, the rhythms of the two languages are only minimally dissimilar. In Experiment 2, participants' non-linguistic rhythm preferences were assessed in response to non-linguistic tones alternating in either intensity (Intensity condition) or in duration (Duration condition). In the Intensity condition, all groups showed a trochaic grouping bias, as predicted by the Iambic-Trochaic Law. In the Duration Condition the Spanish monolingual and the most Basque-dominant bilingual group exhibited opposite grouping preferences in line with the phrasal prosodies of their native/dominant languages, trochaic in Basque, iambic in Spanish. The two other bilingual groups showed no significant biases, however. Overall, results indicate that duration-based grouping mechanisms are biased toward the phrasal prosody of the native and dominant language; also, the presence of an L2 in the environment interacts with the auditory biases. Copyright © 2016 Elsevier B.V. All rights reserved.
Lessons Learned in Part-of-Speech Tagging of Conversational Speech
2010-10-01
ERIC Educational Resources Information Center
Mirzayan, Armik
2010-01-01
This thesis provides a comprehensive account of the intonational phonology of Lakota, an indigenous North American language of the Siouan family. Lakota is predominantly a verb final language, characterized by complex verbal morphology. The phonological description of Lakota intonation and prosody presented here is based on acoustic analysis of…
Eye Movements, Prosody, and Word Frequency among Average- and High-Skilled Second-Grade Readers
ERIC Educational Resources Information Center
Valle, Araceli; Binder, Katherine S.; Walsh, Caitlin B.; Nemier, Carolyn; Bangs, Katheryn E.
2013-01-01
This study examined whether average- and high-skilled second-grade readers (as identified by their Woodcock-Johnson III Test of Academic Achievement Broad Reading scores) differed on behavioral measures of reading related to comprehension: eye movements during silent reading and prosody during oral reading. Results from silent reading implicate…
Multisensory emotion perception in congenitally, early, and late deaf CI users
Fengler, Ineke; Nava, Elena; Villwock, Agnes K.; Büchner, Andreas; Lenarz, Thomas; Röder, Brigitte
2017-01-01
Emotions are commonly recognized by combining auditory and visual signals (i.e., vocal and facial expressions). Yet it is unknown whether the ability to link emotional signals across modalities depends on early experience with audio-visual stimuli. In the present study, we investigated the role of auditory experience at different stages of development for auditory, visual, and multisensory emotion recognition abilities in three groups of adolescent and adult cochlear implant (CI) users. CI users had a different deafness onset and were compared to three groups of age- and gender-matched hearing control participants. We hypothesized that congenitally deaf (CD) but not early deaf (ED) and late deaf (LD) CI users would show reduced multisensory interactions and a higher visual dominance in emotion perception than their hearing controls. The CD (n = 7), ED (deafness onset: <3 years of age; n = 7), and LD (deafness onset: >3 years; n = 13) CI users and the control participants performed an emotion recognition task with auditory, visual, and audio-visual emotionally congruent and incongruent nonsense speech stimuli. In different blocks, participants judged either the vocal (Voice task) or the facial expressions (Face task). In the Voice task, all three CI groups performed overall less efficiently than their respective controls and experienced higher interference from incongruent facial information. Furthermore, the ED CI users benefitted more than their controls from congruent faces and the CD CI users showed an analogous trend. In the Face task, recognition efficiency of the CI users and controls did not differ. Our results suggest that CI users acquire multisensory interactions to some degree, even after congenital deafness. When judging affective prosody they appear impaired and more strongly biased by concurrent facial information than typically hearing individuals. We speculate that limitations inherent to the CI contribute to these group differences. PMID:29023525
Drury, John E; Baum, Shari R; Valeriote, Hope; Steinhauer, Karsten
2016-01-01
This study presents the first two ERP reading studies of comma-induced effects of covert (implicit) prosody on syntactic parsing decisions in English. The first experiment used a balanced 2 × 2 design in which the presence/absence of commas determined plausibility (e.g., "John, said Mary, was the nicest boy at the party" vs. "John said Mary was the nicest boy at the party"). The second reading experiment replicated a previous auditory study investigating the role of overt prosodic boundaries in closure ambiguities (Pauker et al., 2011). In both experiments, commas reliably elicited CPS components and generally played a dominant role in determining parsing decisions in the face of input ambiguity. The combined set of findings provides further evidence supporting the claim that mechanisms subserving speech processing play an active role during silent reading.
Neural and Behavioral Correlates of Song Prosody
ERIC Educational Resources Information Center
Gordon, Reyna Leigh
2010-01-01
This dissertation studies the neural basis of song, a universal human behavior. The relationship of words and melodies in the perception of song at phonological, semantic, melodic, and rhythmic levels of processing was investigated using the fine temporal resolution of Electroencephalography (EEG). The observations reported here may shed light on…
ERIC Educational Resources Information Center
Darley, Frederic L.
This text gives the student an outline of the basic principles of scientific methodology which underlie evaluative work in speech disorders. Rationale and assessment techniques are given for examination of the basic communication processes of symbolization, respiration, phonation, articulation-resonance, prosody, associated sensory and perceptual…
Prosody and the Development of Comprehension.
ERIC Educational Resources Information Center
Cutler, Anne; Swinney, David A.
1987-01-01
Studies analyzing children's response time to detect word targets revealed that six-year-olds and younger children generally did not show the response time advantage for accented target words which adult listeners show, providing support for the argument that the processing advantage for accented words reflects the semantic role of accent as an…
Self-organizing map classifier for stressed speech recognition
NASA Astrophysics Data System (ADS)
Partila, Pavol; Tovarek, Jaromir; Voznak, Miroslav
2016-05-01
This paper presents a method for detecting speech under stress using self-organizing maps. Most people exposed to stressful situations cannot respond adequately to stimuli, and the army, police, and fire services account for the largest share of occupations with an increased number of stressful situations. Personnel in action are directed by a control center, and control commands should be adapted to the psychological state of the person in the field. It is known that psychological changes in the human body are also reflected physiologically, which in turn affects speech. A system for recognizing stress in speech is therefore needed by the security forces. One possible classifier, popular for its flexibility, is the self-organizing map (SOM), a type of artificial neural network. Flexibility here means that the classifier is independent of the character of the input data, a feature well suited to speech processing. Human stress can be seen as a kind of emotional state. Mel-frequency cepstral coefficients, LPC coefficients, and prosody features were selected as input data because of their sensitivity to emotional changes. The parameters were computed on speech recordings belonging to two classes, namely stressed-state recordings and normal-state recordings. The contribution of this work is a method using a SOM classifier for stressed-speech detection. Results showed the advantage of this method, namely its flexibility with respect to the input data.
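The abstract names the ingredients (MFCC/LPC/prosody feature vectors, a self-organizing map) but not the configuration. A minimal from-scratch SOM with majority-vote node labeling, trained on made-up two-class feature vectors standing in for "stressed" vs. "normal" recordings, could be sketched as follows; the grid size, learning schedule, and 3-dimensional toy features are all assumptions, not the paper's setup:

```python
import numpy as np

class SOM:
    """Minimal rectangular self-organizing map (illustrative only)."""

    def __init__(self, rows, cols, dim, lr=0.5, sigma=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(rows, cols, dim))     # node weight vectors
        self.coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                           indexing="ij"), axis=-1)
        self.lr, self.sigma = lr, sigma

    def bmu(self, x):
        # best-matching unit: node whose weights are closest to x
        d = np.linalg.norm(self.w - x, axis=-1)
        return np.unravel_index(np.argmin(d), d.shape)

    def train(self, X, epochs=20):
        for t in range(epochs):
            decay = 1.0 - t / epochs                    # shrink lr and neighborhood
            for x in X:
                dist2 = ((self.coords - np.array(self.bmu(x))) ** 2).sum(-1)
                h = np.exp(-dist2 / (2 * (self.sigma * decay) ** 2))[..., None]
                self.w += self.lr * decay * h * (x - self.w)

def fit_labels(som, X, y):
    # label each used node by majority vote of the training samples it wins
    votes = {}
    for x, lab in zip(X, y):
        votes.setdefault(som.bmu(x), []).append(lab)
    return {n: max(set(v), key=v.count) for n, v in votes.items()}

def predict(som, labels, x):
    nodes = list(labels)
    d = [np.linalg.norm(som.w[n] - x) for n in nodes]
    return labels[nodes[int(np.argmin(d))]]

# Toy stand-ins for per-recording feature vectors (hypothetical, 3-dim):
# class 1 = "stressed", class 0 = "normal".
rng = np.random.default_rng(1)
stressed = rng.normal(2.0, 0.3, size=(40, 3))
normal = rng.normal(-2.0, 0.3, size=(40, 3))
X, y = np.vstack([stressed, normal]), np.array([1] * 40 + [0] * 40)
som = SOM(4, 4, 3)
som.train(X)
labels = fit_labels(som, X, y)
acc = np.mean([predict(som, labels, x) == lab for x, lab in zip(X, y)])
```

In practice the input vectors would be MFCC, LPC, and prosody features extracted from the recordings rather than Gaussian toy data.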
Phonotactic Acquisition in Healthy Preterm Infants
ERIC Educational Resources Information Center
Gonzalez-Gomez, Nayeli; Nazzi, Thierry
2012-01-01
Previous work has shown that preterm infants are at higher risk for cognitive/language delays than full-term infants. Recent studies, focusing on prosody (i.e. rhythm, intonation), have suggested that prosodic perception development in preterms is indexed by maturational rather than postnatal/listening age. However, because prosody is heard…
Prosody Production and Perception with Conversational Speech
ERIC Educational Resources Information Center
Mo, Yoonsook
2010-01-01
Speech utterances are more than the linear concatenation of individual phonemes or words. They are organized by prosodic structures comprising phonological units of different sizes (e.g., syllable, foot, word, and phrase) and the prominence relations among them. As the linguistic structure of spoken languages, prosody serves an important function…
Computational Prosodic Markers for Autism
ERIC Educational Resources Information Center
Van Santen, Jan P.H.; Prud'hommeaux, Emily T.; Black, Lois M.; Mitchell, Margaret
2010-01-01
We present results obtained with new instrumental methods for the acoustic analysis of prosody to evaluate prosody production by children with Autism Spectrum Disorder (ASD) and Typical Development (TD). Two tasks elicit focal stress--one in a vocal imitation paradigm, the other in a picture-description paradigm; a third task also uses a vocal…
Cross-Modal Facilitation in Speech Prosody
ERIC Educational Resources Information Center
Foxton, Jessica M.; Riviere, Louis-David; Barone, Pascal
2010-01-01
Speech prosody has traditionally been considered solely in terms of its auditory features, yet correlated visual features exist, such as head and eyebrow movements. This study investigated the extent to which visual prosodic features are able to affect the perception of the auditory features. Participants were presented with videos of a speaker…
[Detection and specific studies in procedural learning difficulties].
Magallón, S; Narbona, J
2009-02-27
The main disabilities in non-verbal learning disorder (NLD) concern the acquisition and automation of motor and cognitive processes, visual-spatial integration, motor coordination, executive functions, comprehension of context, and social skills. AIMS. To review the research to date on NLD, and to discuss whether the term 'procedural learning disorder' (PLD) would be more suitable than NLD. A considerable amount of research suggests a neurological correlate of PLD involving dysfunction of the 'posterior' attention system, the right hemisphere, or the cerebellum. Although the delimitation between NLD and other disorders or syndromes, such as Asperger syndrome, is said to be difficult, certain characteristics contribute to the differential diagnosis. Intervention strategies for PLD should aim at developing motor automatisms and problem-solving strategies, including social skills. The basic dysfunction in NLD affects the implicit learning of routines and the automation of motor skills and cognitive strategies that spare conscious resources in daily behaviour. These limitations are partly due to a dysfunction of non-declarative procedural memory. Various dimensions of language are also involved: comprehension of context, processing of the spatial and emotional indicators of verbal language, language inferences, prosody, organization of inner speech, use of language, and non-verbal communication. This is why the diagnostic label 'PLD' would be more appropriate, avoiding the euphemistic adjective 'non-verbal'.
Pawełczyk, Agnieszka; Łojek, Emila; Żurner, Natalia; Gawłowska-Sawosz, Marta; Pawełczyk, Tomasz
2018-05-31
The purpose of the study was to examine the presence of pragmatic dysfunctions in first-episode (FE) subjects and their healthy first-degree relatives as a potential endophenotype for schizophrenia. Thirty-four FE patients, 34 parents of the patients (REL) and 32 healthy controls (HC) took part in the study. Pragmatic language functions were evaluated with the Right Hemisphere Language Battery; attention and executive functions were controlled for, as were age and education level. The parents differed from HC, but not from their FE offspring, with regard to the overall level of language and communication and the general-knowledge component of language processing. The FE participants differed from HC in comprehension of inferred meaning, emotional prosody, discourse dimensions, overall level of language and communication, language processing with regard to general knowledge, and communication competences. The FE participants differed from REL regarding discourse dimensions. Our findings suggest that pragmatic dysfunctions may act as vulnerability markers of schizophrenia; their assessment may help in the diagnosis of early stages of the illness and in understanding its pathophysiology. In future research, the adoptive and biological parents of schizophrenia patients should be compared to elucidate which language failures reflect genetic vulnerability and which reflect environmental factors. Copyright © 2018. Published by Elsevier B.V.
Sokka, Laura; Huotilainen, Minna; Leinikka, Marianne; Korpela, Jussi; Henelius, Andreas; Alain, Claude; Müller, Kiti; Pakarinen, Satu
2014-12-01
Job burnout is a significant cause of work absenteeism. Evidence from behavioral studies and patient reports suggests that job burnout is associated with impaired attention and decreased working capacity, and that it has overlapping elements with depression, anxiety and sleep disturbances. Here, we examined the electrophysiological correlates of automatic sound-change detection and involuntary attention allocation in job burnout using scalp recordings of event-related potentials (ERPs). Volunteers with job burnout symptoms but without severe depression or anxiety disorders, and their non-burnout controls, were presented with natural speech sound stimuli (a standard and nine deviants), as well as three rarely occurring speech sounds with strong emotional prosody. All stimuli elicited mismatch negativity (MMN) responses that were comparable in both groups. The groups differed with respect to the P3a, an ERP component reflecting involuntary shifts of attention: the job burnout group showed a shorter P3a latency in response to the emotionally negative stimulus and a longer latency in response to the positive stimulus. The results indicate that in job burnout, automatic speech sound discrimination is intact, but attention capture is faster for negative and slower for positive information than in controls. Copyright © 2014 Elsevier B.V. All rights reserved.
Laughter exaggerates happy and sad faces depending on visual context
Sherman, Aleksandra; Sweeny, Timothy D.; Grabowecky, Marcia; Suzuki, Satoru
2012-01-01
Laughter is an auditory stimulus that powerfully conveys positive emotion. We investigated how laughter influenced visual perception of facial expressions. We simultaneously presented laughter with a happy, neutral, or sad schematic face. The emotional face was briefly presented either alone or among a crowd of neutral faces. We used a matching method to determine how laughter influenced the perceived intensity of happy, neutral, and sad expressions. For a single face, laughter increased the perceived intensity of a happy expression. Surprisingly, for a crowd of faces laughter produced an opposite effect, increasing the perceived intensity of a sad expression in a crowd. A follow-up experiment revealed that this contrast effect may have occurred because laughter made the neutral distracter faces appear slightly happy, thereby making the deviant sad expression stand out in contrast. A control experiment ruled out semantic mediation of the laughter effects. Our demonstration of the strong context dependence of laughter effects on facial expression perception encourages a re-examination of the previously demonstrated effects of prosody, speech content, and mood on face perception, as they may similarly be context dependent. PMID:22215467
Preschoolers Use Phrasal Prosody Online to Constrain Syntactic Analysis
ERIC Educational Resources Information Center
de Carvalho, Alex; Dautriche, Isabelle; Christophe, Anne
2016-01-01
Two experiments were conducted to investigate whether young children are able to take into account phrasal prosody when computing the syntactic structure of a sentence. Pairs of French noun/verb homophones were selected to create locally ambiguous sentences (["la petite 'ferme'"] ["est très jolie"] "the small farm is very…
Entrainment of Prosody in the Interaction of Mothers with Their Young Children
ERIC Educational Resources Information Center
Ko, Eon-Suk; Seidl, Amanda; Cristia, Alejandrina; Reimchen, Melissa; Soderstrom, Melanie
2016-01-01
Caregiver speech is not a static collection of utterances, but occurs in "conversational exchanges," in which caregiver and child dynamically influence each other's speech. We investigate (a) whether children and caregivers modulate the prosody of their speech as a function of their interlocutor's speech, and (b) the influence of the…
Beyond the Particular: Prosody and the Coordination of Actions
ERIC Educational Resources Information Center
Szczepek Reed, Beatrice
2012-01-01
The majority of research on prosody in conversation to date has focused on exploring the role of individual prosodic features, such as certain types of pitch accent, pitch register or voice quality, for the accomplishment of specified social actions. From this research the picture emerges that when it comes to the implementation of specific…
Oral Reading Fluency and Prosody: A Preliminary Analysis of the Greek Language
ERIC Educational Resources Information Center
Sarris, Menelaos; Dimakos, Ioannis C.
2015-01-01
This article presents results from an initial investigation of Greek oral reading fluency and prosody. Although currently held perspectives consider reading the product of reading decoding and reading comprehension, there is enough evidence (both Greek and foreign) to suggest that other variables may affect reading, as well. Such variables include…
ERIC Educational Resources Information Center
Esteve-Gibert, Nuria; Prieto, Pilar
2013-01-01
There is considerable debate about whether early vocalizations mimic the target language and whether prosody signals emergent intentional communication. A longitudinal corpus of four Catalan-babbling infants was analyzed to investigate whether children use different prosodic patterns to distinguish communicative from investigative vocalizations…
Children's Use of Phonological Information in Ambiguity Resolution: A View from Mandarin Chinese
ERIC Educational Resources Information Center
Zhou, Peng; Su, Yi; Crain, Stephen; Gao, Liqun; Zhan, Likan
2012-01-01
How do children develop the mapping between prosody and other levels of linguistic knowledge? This question has received considerable attention in child language research. In the present study two experiments were conducted to investigate four- to five-year-old Mandarin-speaking children's sensitivity to prosody in ambiguity resolution. Experiment…
Prosody and Spoken Word Recognition in Early and Late Spanish-English Bilingual Individuals
ERIC Educational Resources Information Center
Boutsen, Frank R.; Dvorak, Justin D.; Deweber, Derick D.
2017-01-01
Purpose: This study was conducted to compare the influence of word properties on gated single-word recognition in monolingual and bilingual individuals under conditions of native and nonnative accent and to determine whether word-form prosody facilitates recognition in bilingual individuals. Method: Word recognition was assessed in monolingual and…
Say It like You Mean It: Mothers' Use of Prosody to Convey Word Meaning
ERIC Educational Resources Information Center
Herold, Debora S.; Nygaard, Lynne C.; Namy, Laura L.
2012-01-01
Prosody plays a variety of roles in infants' communicative development, aiding in attention modulation, speech segmentation, and syntax acquisition. This study investigates the extent to which parents also spontaneously modulate prosodic aspects of infant directed speech in ways that distinguish semantic aspects of language. Fourteen mothers of…
The semantics of prosody: acoustic and perceptual evidence of prosodic correlates to word meaning.
Nygaard, Lynne C; Herold, Debora S; Namy, Laura L
2009-01-01
This investigation examined whether speakers produce reliable prosodic correlates to meaning across semantic domains and whether listeners use these cues to derive word meaning from novel words. Speakers were asked to produce phrases in infant-directed speech in which novel words were used to convey one of two meanings from a set of antonym pairs (e.g., big/small). Acoustic analyses revealed that some acoustic features were correlated with overall valence of the meaning. However, each word meaning also displayed a unique acoustic signature, and semantically related meanings elicited similar acoustic profiles. In two perceptual tests, listeners either attempted to identify the novel words with a matching meaning dimension (picture pair) or with mismatched meaning dimensions. Listeners inferred the meaning of the novel words significantly more often when prosody matched the word meaning choices than when prosody mismatched. These findings suggest that speech contains reliable prosodic markers to word meaning and that listeners use these prosodic cues to differentiate meanings. That prosody is semantic suggests a reconceptualization of traditional distinctions between linguistic and nonlinguistic properties of spoken language. Copyright © 2009 Cognitive Science Society, Inc.
Sridhar, Vivek Kumar Rangarajan; Bangalore, Srinivas; Narayanan, Shrikanth S.
2009-01-01
In this paper, we describe a maximum entropy-based automatic prosody labeling framework that exploits both language and speech information. We apply the proposed framework to both prominence and phrase structure detection within the Tones and Break Indices (ToBI) annotation scheme. Our framework utilizes novel syntactic features in the form of supertags and a quantized acoustic–prosodic feature representation that is similar to linear parameterizations of the prosodic contour. The proposed model is trained discriminatively and is robust in the selection of appropriate features for the task of prosody detection. The proposed maximum entropy acoustic–syntactic model achieves pitch accent and boundary tone detection accuracies of 86.0% and 93.1% on the Boston University Radio News corpus, and 79.8% and 90.3% on the Boston Directions corpus. Phrase structure detection through prosodic break index labeling yields accuracies of 84% and 87% on the two corpora, respectively. The reported results are significantly better than previously reported results and demonstrate the strength of the maximum entropy model in jointly modeling simple lexical, syntactic, and acoustic features for automatic prosody labeling. PMID:19603083
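The core of a maximum-entropy labeler of the kind described above is a discriminatively trained log-linear classifier over lexical, syntactic, and acoustic features. The following is a minimal illustrative sketch only, not the authors' model: it trains a binary logistic (maximum-entropy) classifier for pitch-accent detection on invented toy syllable features; the feature names and data are assumptions for illustration.

```python
import numpy as np

# Toy maximum-entropy (logistic) pitch-accent detector, trained by gradient
# descent on log-loss. Features and labels below are invented for illustration.

def train_maxent(X, y, lr=0.5, steps=2000):
    """Fit w, b so that p(accent | x) = sigmoid(w.x + b)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted accent probability
        grad_w = X.T @ (p - y) / len(y)          # gradient of mean log-loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Hypothetical per-syllable features: [normalized F0 peak, duration (s), content word?]
X = np.array([[0.9, 0.22, 1.0],   # prominent content-word syllable
              [0.8, 0.25, 1.0],
              [0.2, 0.08, 0.0],   # reduced function-word syllable
              [0.1, 0.10, 0.0]])
y = np.array([1, 1, 0, 0])        # 1 = pitch accent present

w, b = train_maxent(X, y)
probs = 1.0 / (1.0 + np.exp(-(X @ w + b)))
preds = (probs > 0.5).astype(int)
print(preds.tolist())  # separable toy data, so training recovers the labels
```

A real system would replace these toy features with supertag-based syntactic features and quantized prosodic-contour features, and extend the classifier to the multi-class ToBI label inventory.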
Prosody perception and musical pitch discrimination in adults using cochlear implants.
Kalathottukaren, Rose Thomas; Purdy, Suzanne C; Ballard, Elaine
2015-07-01
This study investigated prosodic perception and musical pitch discrimination in adults using cochlear implants (CIs), and examined the relationship between prosody perception scores and non-linguistic auditory measures, demographic variables, and speech recognition scores. Participants were given four subtests of the PEPS-C (profiling elements of prosody in speech-communication), the adult paralanguage subtest of the DANVA 2 (diagnostic analysis of non verbal accuracy 2), and the contour and interval subtests of the MBEA (Montreal battery of evaluation of amusia). Twelve CI users aged 25;5 to 78;0 years participated. CI participants performed significantly more poorly than normative values for New Zealand adults on the PEPS-C turn-end, affect, and contrastive stress reception subtests, but did not differ from the norm on the chunking reception subtest. Performance on the DANVA 2 adult paralanguage subtest was lower than the normative mean reported by Saindon (2010). Most of the CI participants performed at chance level on both MBEA subtests. CI users have difficulty perceiving prosodic information accurately. Difficulty in understanding different aspects of prosody and music may be associated with reduced pitch perception ability.
Drury, John E.; Baum, Shari R.; Valeriote, Hope; Steinhauer, Karsten
2016-01-01
This study presents the first two ERP reading studies of comma-induced effects of covert (implicit) prosody on syntactic parsing decisions in English. The first experiment used a balanced 2 × 2 design in which the presence/absence of commas determined plausibility (e.g., John, said Mary, was the nicest boy at the party vs. John said Mary was the nicest boy at the party). The second reading experiment replicated a previous auditory study investigating the role of overt prosodic boundaries in closure ambiguities (Pauker et al., 2011). In both experiments, commas reliably elicited CPS components and generally played a dominant role in determining parsing decisions in the face of input ambiguity. The combined set of findings provides further evidence supporting the claim that mechanisms subserving speech processing play an active role during silent reading. PMID:27695428
Speech Prosody Across Stimulus Types for Individuals with Parkinson's Disease.
K-Y Ma, Joan; Schneider, Christine B; Hoffmann, Rüdiger; Storch, Alexander
2015-01-01
Up to 89% of individuals with Parkinson's disease (PD) experience speech problems over the course of the disease. Speech prosody and intelligibility are two of the most affected areas in hypokinetic dysarthria. However, assessment of these areas can be problematic, as speech prosody and intelligibility may be affected by the type of speech materials employed. To comparatively explore the effects of different types of speech stimulus on speech prosody and intelligibility in PD speakers, the speech prosody and intelligibility of two groups of individuals with varying degrees of dysarthria resulting from PD were compared to those of a group of control speakers using sentence reading, passage reading and monologue. Acoustic analysis, including measures of fundamental frequency (F0), intensity and speech rate, was used to form a prosodic profile for each individual. Speech intelligibility was measured for the speakers with dysarthria using direct magnitude estimation. A difference in F0 variability between the speakers with dysarthria and the control speakers was observed only in the sentence reading task. A difference in average intensity level was observed between the speakers with mild dysarthria and the control speakers. Additionally, there were stimulus effects on both intelligibility and the prosodic profile. The prosodic profile of PD speakers differed from that of the control speakers in the more structured tasks, and lower intelligibility was found in the less structured task. This highlights the value of both structured and natural stimuli for evaluating speech production in PD speakers.
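A prosodic profile of the kind used above combines per-speaker summary statistics of F0, intensity, and speech rate. The sketch below is a generic illustration, not the study's actual analysis pipeline: it computes such a profile from a synthetic F0 contour, with F0 variability expressed in semitones relative to the speaker's median (a common speaker-independent measure).

```python
import numpy as np

# Illustrative prosodic profile from per-frame F0 (Hz) and intensity (dB),
# plus a syllable count for speech rate. The input contour is synthetic.

def prosodic_profile(f0_hz, intensity_db, n_syllables, duration_s):
    f0 = f0_hz[f0_hz > 0]                          # keep voiced frames only
    semitones = 12 * np.log2(f0 / np.median(f0))   # F0 re: speaker median
    return {
        "f0_median_hz": float(np.median(f0)),
        "f0_var_semitones": float(np.std(semitones)),   # F0 variability
        "intensity_mean_db": float(np.mean(intensity_db)),
        "speech_rate_syll_per_s": n_syllables / duration_s,
    }

rng = np.random.default_rng(0)
# Synthetic contour: ~120 Hz with slow modulation, noise, and unvoiced gaps
f0 = 120 + 15 * np.sin(np.linspace(0, 6, 200)) + rng.normal(0, 2, 200)
f0[::20] = 0.0                                     # unvoiced frames
profile = prosodic_profile(f0, rng.normal(65, 3, 200), 22, 8.0)
print(round(profile["speech_rate_syll_per_s"], 2))  # → 2.75
```

In practice the F0 and intensity tracks would come from an acoustic analysis tool rather than being synthesized, and a reduced `f0_var_semitones` in sentence reading would be the kind of group difference the study reports.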
Tone Discrimination as a Window into Acoustic Perceptual Deficits in Parkinson's Disease
ERIC Educational Resources Information Center
Troche, Joshua; Troche, Michelle S.; Berkowitz, Rebecca; Grossman, Murray; Reilly, Jamie
2012-01-01
Purpose: Deficits in auditory perception compromise a range of linguistic processes in persons with Parkinson's disease (PD), including speech perception and sensitivity to affective and linguistic prosody. An unanswered question is whether this deficit exists not only at the level of speech perception, but also at a more pervasive level of…
Neural correlates of early-closure garden-path processing: Effects of prosody and plausibility.
den Ouden, Dirk-Bart; Dickey, Michael Walsh; Anderson, Catherine; Christianson, Kiel
2016-01-01
Functional magnetic resonance imaging (fMRI) was used to investigate neural correlates of early-closure garden-path sentence processing and use of extrasyntactic information to resolve temporary syntactic ambiguities. Sixteen participants performed an auditory picture verification task on sentences presented with natural versus flat intonation. Stimuli included sentences in which the garden-path interpretation was plausible, implausible because of a late pragmatic cue, or implausible because of a semantic mismatch between an optionally transitive verb and the following noun. Natural sentence intonation was correlated with left-hemisphere temporal activation, but also with activation that suggests the allocation of more resources to interpretation when natural prosody is provided. Garden-path processing was associated with upregulation in bilateral inferior parietal and right-hemisphere dorsolateral prefrontal and inferior frontal cortex, while differences between the strength and type of plausibility cues were also reflected in activation patterns. Region of interest (ROI) analyses in regions associated with complex syntactic processing are consistent with a role for posterior temporal cortex supporting access to verb argument structure. Furthermore, ROI analyses within left-hemisphere inferior frontal gyrus suggest a division of labour, with the anterior-ventral part primarily involved in syntactic-semantic mismatch detection, the central part supporting structural reanalysis, and the posterior-dorsal part showing a general structural complexity effect.
A Cross-Sectional Study of Fluency and Reading Comprehension in Spanish Primary School Children
ERIC Educational Resources Information Center
Calet, Nuria; Gutiérrez-Palma, Nicolás; Defior, Sylvia
2015-01-01
The importance of prosodic elements is recognised in most definitions of fluency. Although speed and accuracy have been typically considered the constituents of reading fluency, prosody is emerging as an additional component. The relevance of prosody in comprehension is increasingly recognised in the latest studies. The purpose of this research is…
The Effect of Non-Sentential Context Prosody on Homographs' Lexical Activation in Persian
ERIC Educational Resources Information Center
Feizabadi, Parvin Sadat; Bijankhan, Mahmood
2015-01-01
This study examines the effect of non-sentential context prosody pattern on lexical activation in Persian. For this purpose a questionnaire including target and non-target words is used. The target words are homographs with two possible stress patterns belonging to different syntactic categories. Participants are asked to read out the words aloud…
ERIC Educational Resources Information Center
Shriberg, Lawrence D.; Paul, Rhea; McSweeny, Jane L.; Klin, Ami; Cohen, Donald J.; Volkmar, Fred R.
2001-01-01
This study compared the speech and prosody-voice profiles for 30 male speakers with either high-functioning autism (HFA) or Asperger syndrome (AS), and 53 typically developing male speakers. Both HFA and AS groups had more residual articulation distortion errors and utterances coded as inappropriate for phrasing, stress, and resonance. AS speakers…
The Effects of Morpheme and Prosody Instruction on Middle School Spelling
ERIC Educational Resources Information Center
Dornay, Margaret A.
2017-01-01
A single case design was used to investigate the impact of two types of instruction on middle school students' spelling. Phase 1 emphasized morphology awareness instruction (MAI) and phase 2 employed the addition of prosody awareness instruction (PAI). In order to compare the effects of MAI and PAI, spelling scores were gathered from eight…
ERIC Educational Resources Information Center
Kargas, Niko; López, Beatriz; Morris, Paul; Reddy, Vasudevi
2016-01-01
Purpose: To date, the literature on perception of affective, pragmatic, and grammatical prosody abilities in autism spectrum disorders (ASD) has been sparse and contradictory. It is interesting to note that the primary perception of syllable stress within the word structure, which is crucial for all prosody functions, remains relatively unexplored…
Prosody as a Tool for Assessing Reading Fluency of Adult ESL Students
ERIC Educational Resources Information Center
Sinambela, Seftirina Evina
2017-01-01
Prosodic features in read-aloud assignments have been associated with students' decoding skill. The goal of the present study is to determine the reliability of prosody for assessing the reading fluency of adult ESL students in the Indonesian context. The participants were all Indonesian natives, undergraduate students, adult females and males who…
Second Language Prosody and Oral Reading Comprehension in Learners of Brazilian Portuguese
ERIC Educational Resources Information Center
McCune, W. M. Duce, II
2011-01-01
Learning to read can pose a major challenge to students, and much of this challenge is due to the fact that written language is necessarily impoverished when compared to the rich, continuous speech signal. Prosodic elements of language are scarcely represented in written text, and while oral reading prosody has been addressed in the literature…
Prosodic Abilities of Spanish-Speaking Adolescents and Adults with Williams Syndrome
ERIC Educational Resources Information Center
Martinez-Castilla, Pastora; Sotillo, Maria; Campos, Ruth
2011-01-01
In spite of the relevant role of prosody in communication, and in contrast with other linguistic components, there is paucity of research in this field for Williams syndrome (WS). Therefore, this study performed a systematic assessment of prosodic abilities in WS. The Spanish version of the Profiling Elements of Prosody in Speech-Communication…
Preschool Children's Performance on Profiling Elements of Prosody in Speech-Communication (PEPS-C)
ERIC Educational Resources Information Center
Gibbon, Fiona E.; Smyth, Heather
2013-01-01
Profiling Elements of Prosody in Speech-Communication (PEPS-C) has not been used widely to assess prosodic abilities of preschool children. This study was therefore aimed at investigating typically developing 4-year-olds' performance on PEPS-C. PEPS-C was presented to 30 typically developing 4-year-olds recruited in southern Ireland. Children were…
ERIC Educational Resources Information Center
Coates, Robert Alexander Graham; Gorham, Judith; Nicholas, Richard
2017-01-01
Recent neurological breakthroughs in our understanding of the Critical Period Hypothesis and prosody may suggest strategies on how phonics instruction could improve L2 language learning and in particular phoneme/grapheme decoding. We therefore conducted a randomised controlled-trial on the application of prosody and phonics techniques, to improve…
Slavic Prosody: Language Change and Phonological Theory. Cambridge Studies in Linguistics 86.
ERIC Educational Resources Information Center
Bethin, Christina Yurkiw
This history of Slavic prosody gives an account of the Slavic languages at the time of their differentiation and relates these developments to issues in phonological theory. It first argues that the syllable structure of Slavic changed before the fall of the jers, and suggests that intra- and intersyllabic reorganization in Late Common Slavic was far…
ERIC Educational Resources Information Center
Mietz, Anja; Toepel, Ulrike; Ischebeck, Anja; Alter, Kai
2008-01-01
The current study on German investigates Event-Related brain Potentials (ERPs) for the perception of sentences with intonations which are infrequent (i.e. vocatives) or inadequate in daily conversation. These ERPs are compared to the processing correlates for sentences in which the syntax-to-prosody relations are congruent and used frequently…
ERIC Educational Resources Information Center
Delgado Algarra, Emilio José
2016-01-01
Most studies focusing on the teaching of foreign languages indicate that little attention is paid to prosodic features in both didactic materials and teaching-learning processes (Martinsen, Avord and Tanner, 2014). In this context, and throughout this article, an analysis of the didactical and technical dimensions of OJAD (Japanese Accent…
ERIC Educational Resources Information Center
Ito, Kiwako; Bibyk, Sarah A.; Wagner, Laura; Speer, Shari R.
2014-01-01
Both off-line and on-line comprehension studies suggest not only toddlers and preschoolers, but also older school-age children have trouble interpreting contrast-marking pitch prominence. To test whether children achieve adult-like proficiency in processing contrast-marking prosody during school years, an eye-tracking experiment examined the…
Prosodic Awareness Is Related to Reading Ability in Children with Autism Spectrum Disorders
ERIC Educational Resources Information Center
Nash, Renae; Arciuli, Joanne
2016-01-01
Prosodic awareness has been linked with reading accuracy in typically developing children. Although children with autism spectrum disorders (ASD) often have difficulty processing prosody and often have trouble learning to read, no previous study has looked at the link between explicit prosodic awareness and reading in ASD. In the current study, 29…
Perception of Lexical Stress by Brain-Damaged Individuals: Effects on Lexical-Semantic Activation
ERIC Educational Resources Information Center
Shah, Amee P.; Baum, Shari R.
2006-01-01
A semantic priming, lexical-decision study was conducted to examine the ability of left- and right-brain damaged individuals to perceive lexical-stress cues and map them onto lexical-semantic representations. Correctly and incorrectly stressed primes were paired with related and unrelated target words to tap implicit processing of lexical prosody.…
Phrase Length and Prosody in On-Line Ambiguity Resolution
ERIC Educational Resources Information Center
Webman-Shafran, Ronit; Fodor, Janet Dean
2016-01-01
We investigated the processing of ambiguous double-PP constructions in Hebrew. Selection restrictions forced the first prepositional phrase (PP1) to attach low, but PP2 could attach maximally high to VP or maximally low to the NP inside PP1. A length contrast in PP2 was also examined. This construction affords more potential locations for prosodic…
Infant-Directed Visual Prosody: Mothers’ Head Movements and Speech Acoustics
Smith, Nicholas A.; Strader, Heather L.
2014-01-01
Acoustical changes in the prosody of mothers' speech to infants are distinct and nearly universal. However, less is known about the visible properties of mothers' infant-directed (ID) speech and their relation to speech acoustics. Mothers' head movements were tracked as they interacted with their infants using ID speech, and compared to the movements accompanying their adult-directed (AD) speech. Movement measures along three dimensions of head translation and three axes of head rotation were calculated. Overall, more head movement was found for ID than for AD speech, suggesting that mothers exaggerate their visual prosody in a manner analogous to the acoustical exaggerations in their speech. Regression analyses examined the relation between changing head position and changing acoustical pitch (F0) over time. Head movements and voice pitch were more strongly related in ID speech than in AD speech. When these relations were examined across time windows of different durations, stronger relations were observed for shorter time windows (< 5 s). However, the particular form of these more local relations did not extend or generalize to longer time windows. This suggests that the multimodal correspondences in speech prosody are variable in form and occur within limited time spans. PMID:25242907
Audio-vocal system regulation in children with autism spectrum disorders.
Russo, Nicole; Larson, Charles; Kraus, Nina
2008-06-01
Do children with autism spectrum disorders (ASD) respond similarly to perturbations in auditory feedback as typically developing (TD) children? Presentation of pitch-shifted voice auditory feedback to vocalizing participants reveals a close coupling between the processing of auditory feedback and vocal motor control. This paradigm was used to test the hypothesis that abnormalities in the audio-vocal system would negatively impact ASD compensatory responses to perturbed auditory feedback. Voice fundamental frequency (F(0)) was measured while children produced an /a/ sound into a microphone. The voice signal was fed back to the subjects in real time through headphones. During production, the feedback was pitch shifted (-100 cents, 200 ms) at random intervals for 80 trials. Averaged voice F(0) responses to pitch-shifted stimuli were calculated and correlated with both mental and language abilities as tested via standardized tests. A subset of children with ASD produced larger responses to perturbed auditory feedback than TD children, while the other children with ASD produced significantly lower response magnitudes. Furthermore, robust relationships between language ability, response magnitude and time of peak magnitude were identified. Because auditory feedback helps to stabilize voice F(0) (a major acoustic cue of prosody) and individuals with ASD have problems with prosody, this study identified potential mechanisms of dysfunction in the audio-vocal system for voice pitch regulation in some children with ASD. Objectively quantifying this deficit may inform both the assessment of a subgroup of ASD children with prosody deficits, as well as remediation strategies that incorporate pitch training.
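The -100 cent shift used in the feedback paradigm above corresponds to lowering the heard pitch by one equal-tempered semitone (100 cents). A small worked illustration of the cents-to-frequency conversion, with an assumed example F0 of 220 Hz (the stimulus value is ours, not from the study):

```python
# Converting a pitch shift in cents to a frequency ratio:
# 1200 cents = 1 octave (a factor of 2), so ratio = 2 ** (cents / 1200).

def shift_by_cents(f0_hz, cents):
    """Return F0 after a pitch shift expressed in cents."""
    return f0_hz * 2 ** (cents / 1200)

# A hypothetical child's /a/ at 220 Hz, heard through the -100 cent shift:
print(round(shift_by_cents(220.0, -100), 2))  # → 207.65
```

The compensatory response the study measures is the speaker's F0 adjustment in the opposite direction to this perceived shift.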
ERIC Educational Resources Information Center
Shriberg, Lawrence D.; Ballard, Kirrie J.; Tomblin, J. Bruce; Duffy, Joseph R.; Odell, Katharine H.; Williams, Charles A.
2006-01-01
Purpose: The primary goal of this case study was to describe the speech, prosody, and voice characteristics of a mother and daughter with a breakpoint in a balanced 7;13 chromosomal translocation that disrupted the transcription gene, "FOXP2" (cf. J. B. Tomblin et al., 2005). As with affected members of the widely cited KE family, whose…
Loss of regional accent after damage to the speech production network.
Loss of regional accent after damage to the speech production network
Berthier, Marcelo L.; Dávila, Guadalupe; Moreno-Torres, Ignacio; Beltrán-Corbellini, Álvaro; Santana-Moreno, Daniel; Roé-Vellvé, Núria; Thurnhofer-Hemsi, Karl; Torres-Prioris, María José; Massone, María Ignacia; Ruiz-Cruces, Rafael
2015-01-01
Lesion-symptom mapping studies reveal that selective damage to one or more components of the speech production network can be associated with foreign accent syndrome, changes in regional accent (e.g., from Parisian accent to Alsatian accent), stronger regional accent, or re-emergence of a previously learned and dormant regional accent. Here, we report loss of regional accent after rapidly regressive Broca’s aphasia in three Argentinean patients who had suffered unilateral or bilateral focal lesions in components of the speech production network. All patients were monolingual speakers with three different native Spanish accents (Cordobés or central, Guaranítico or northeast, and Bonaerense). Samples of speech production from the patient with native Córdoba accent were compared with previous recordings of his voice, whereas data from the patient with native Guaranítico accent were compared with speech samples from one healthy control matched for age, gender, and native accent. Speech samples from the patient with native Buenos Aires accent were compared with data obtained from four healthy control subjects with the same accent. Analysis of speech production revealed discrete slowing in speech rate, inappropriately long pauses, and monotonous intonation. Phonemic production remained similar to that of healthy Spanish speakers, but phonetic variants peculiar to each accent (e.g., intervocalic aspiration of /s/ in Córdoba accent) were absent. While basic prosodic features of Spanish prosody were preserved, features intrinsic to the melody of certain geographical areas (e.g., rising end F0 excursion in declarative sentences intoned with Córdoba accent) were absent. All patients were also unable to produce sentences with different emotional prosody.
Brain imaging disclosed focal left hemisphere lesions involving the middle part of the motor cortex, the post-central cortex, the posterior inferior and/or middle frontal cortices, insula, anterior putamen and supplementary motor area. Our findings suggest that lesions affecting the middle part of the left motor cortex and other components of the speech production network disrupt neural processes involved in the production of regional accent features. PMID:26594161
Meinhardt-Injac, Bozana; Daum, Moritz M.; Meinhardt, Günter; Persike, Malte
2018-01-01
According to the two-systems account of theory of mind (ToM), understanding mental states of others involves both fast social-perceptual processes, as well as slower, reflexive cognitive operations (Frith and Frith, 2008; Apperly and Butterfill, 2009). To test the respective roles of specific abilities in either of these processes we administered 15 experimental procedures to a large sample of 343 participants, testing ability in face recognition and holistic perception, language, and reasoning. ToM was measured by a set of tasks requiring ability to track and to infer complex emotional and mental states of others from faces, eyes, spoken language, and prosody. We used structural equation modeling to test the relative strengths of a social-perceptual (face processing related) and reflexive-cognitive (language and reasoning related) path in predicting ToM ability. The two paths accounted for 58% of ToM variance, thus validating a general two-systems framework. Testing specific predictor paths revealed language and face recognition as strong and significant predictors of ToM. For reasoning, there were neither direct nor mediated effects, albeit reasoning was strongly associated with language. Holistic face perception also failed to show a direct link with ToM ability, while there was a mediated effect via face recognition. These results highlight the respective roles of face recognition and language for the social brain, and contribute closer empirical specification of the general two-systems account. PMID:29445336
Punctuation, Prosody, and Discourse: Afterthought Vs. Right Dislocation
Kalbertodt, Janina; Primus, Beatrice; Schumacher, Petra B.
2015-01-01
In a reading production experiment we investigate the impact of punctuation and discourse structure on the prosodic differentiation of right dislocation (RD) and afterthought (AT). Both discourse structure and punctuation are likely to affect the prosodic marking of these right-peripheral constructions, as certain prosodic markings are appropriate only in certain discourse structures, and punctuation is said to correlate with prosodic phrasing. With RD and AT clearly differing in discourse function (comment-topic structuring vs. disambiguation) and punctuation (comma vs. full stop), critical items in this study were manipulated with regard to the (mis-)match of these parameters. Since RD and AT are said to prosodically differ in pitch range, phrasing, and accentuation patterns, we measured the reduction of pitch range, boundary strength and prominence level. Results show an effect of both punctuation and discourse context (mediated by syntax) on phrasing and accentuation. Interestingly, for pitch range reduction no difference between RDs and ATs could be observed. Our results corroborate a language architecture model in which punctuation, prosody, syntax, and discourse-semantics are independent but interacting domains with correspondence constraints between them. Our findings suggest there are tight correspondence constraints between (i) punctuation (full stop and comma in particular) and syntax, (ii) prosody and syntax as well as (iii) prosody and discourse-semantics. PMID:26648883
NASA Astrophysics Data System (ADS)
Roth, Wolff-Michael; Tobin, Kenneth
2010-12-01
This ethnographic study of teaching and learning in urban high school science classes investigates the ways in which teachers and students talk, gesture, and use space and time in interaction rituals. In situations where teachers coteach as a means of learning to teach in inner-city schools, successful teacher-teacher collaborations are characterized by prosodic expressions that converge over time and adapt to match the prosodic parameters of students' talk. In these situations our ethnographic data provide evidence of solidarity and positive emotions among the teachers and also between students and teachers. Unsuccessful collaborations are associated with considerable differences in pitch between consecutive speakers participating in turns-at-talk, these being related to the production of negative emotions and conflicts at longer time scales. Situational conflicts are co-expressed by increases in pitch levels, speech intensities, and speech rates; and conflict resolution is accelerated by the coordination of pitch levels. Our study therefore suggests that prosodic alignment and misalignment are resources that are pragmatically deployed to manage face-to-face interactions that have solidarity and conflict as their longer-term outcomes.
Laughter exaggerates happy and sad faces depending on visual context.
Sherman, Aleksandra; Sweeny, Timothy D; Grabowecky, Marcia; Suzuki, Satoru
2012-04-01
Laughter is an auditory stimulus that powerfully conveys positive emotion. We investigated how laughter influenced the visual perception of facial expressions. We presented a sound clip of laughter simultaneously with a happy, a neutral, or a sad schematic face. The emotional face was briefly presented either alone or among a crowd of neutral faces. We used a matching method to determine how laughter influenced the perceived intensity of the happy, neutral, and sad expressions. For a single face, laughter increased the perceived intensity of a happy expression. Surprisingly, for a crowd of faces, laughter produced an opposite effect, increasing the perceived intensity of a sad expression in a crowd. A follow-up experiment revealed that this contrast effect may have occurred because laughter made the neutral distractor faces appear slightly happy, thereby making the deviant sad expression stand out in contrast. A control experiment ruled out semantic mediation of the laughter effects. Our demonstration of the strong context dependence of laughter effects on facial expression perception encourages a reexamination of the previously demonstrated effects of prosody, speech content, and mood on face perception, as they may be similarly context dependent.
Motherese in Interaction: At the Cross-Road of Emotion and Cognition? (A Systematic Review)
Saint-Georges, Catherine; Chetouani, Mohamed; Cassel, Raquel; Apicella, Fabio; Mahdhaoui, Ammar; Muratori, Filippo; Laznik, Marie-Christine; Cohen, David
2013-01-01
Various aspects of motherese, also known as infant-directed speech (IDS), have been studied for many years. As it is a widespread phenomenon, it is suspected to play some important roles in infant development. Therefore, our purpose was to provide an update of the evidence accumulated by reviewing all of the empirical or experimental studies that have been published since 1966 on IDS driving factors and impacts. Two databases were screened and 144 relevant studies were retained. General linguistic and prosodic characteristics of IDS were found in a variety of languages, and IDS was not restricted to mothers. IDS varied with factors associated with the caregiver (e.g., cultural, psychological and physiological) and the infant (e.g., reactivity and interactive feedback). IDS promoted infants’ affect, attention and language learning. Cognitive aspects of IDS have been widely studied, whereas affective aspects remain understudied. However, during interactions, the following two observations were notable: (1) IDS prosody reflects emotional charges and meets infants’ preferences, and (2) mother-infant contingency and synchrony are crucial for IDS production and prolongation. Thus, IDS is part of an interactive loop that may play an important role in infants’ cognitive and social development. PMID:24205112
The Prosody of Topic Transition in Interaction: Pitch Register Variations.
Riou, Marine
2017-12-01
In conversation, speakers can mobilize a variety of prosodic cues to signal a switch in topics. This paper uses a mixed-methods approach combining Conversation Analysis and Instrumental Prosody to investigate the prosody of topic transition in American English, and analyzes the ways in which speakers can play on register level and on register span. A cluster of three prosodic parameters was found to be predictive of transitions: a higher maximum fundamental frequency (F0), a higher median F0 (key), and an expanded register span. Relative to speakers' habitual profiles, the mobilization of such prosodic cues corresponds to a marked upgraded prosodic design. This finding is consistent with the general assumption that continuation constitutes the norm in conversation, and that departing from it, as in the case of a topic transition, requires a marked action and marked linguistic design. The disjunctive action of opening a new topic corresponds to the use of a marked prosodic cue.
Cross-linguistic differences in prosodic cues to syntactic disambiguation in German and English
O’Brien, Mary Grantham; Jackson, Carrie N.; Gardner, Christine E.
2012-01-01
This study examined whether late-learning English-German L2 learners and late-learning German-English L2 learners use prosodic cues to disambiguate temporarily ambiguous L1 and L2 sentences during speech production. Experiments 1a and 1b showed that English-German L2 learners and German-English L2 learners used a pitch rise and pitch accent to disambiguate prepositional phrase-attachment sentences in German. However, the same participants, as well as monolingual English speakers, only used pitch accent to disambiguate similar English sentences. Taken together, these results indicate the L2 learners used prosody to disambiguate sentences in both of their languages and did not fully transfer cues to disambiguation from their L1 to their L2. The results have implications for the acquisition of L2 prosody and the interaction between prosody and meaning in L2 production. PMID:24453383
ERIC Educational Resources Information Center
Dekydtspotter, Laurent; Donaldson, Bryan; Edmonds, Amanda C.; Fultz, Audrey Liljestrand; Petrush, Rebecca A.
2008-01-01
This study investigates the manner in which syntax, prosody, and context interact when second- and fourth-semester college-level English-French learners process relative clause (RC) attachment to either the first noun phrase (NP1) or the second noun phrase (NP2) in complex nominal expressions such as "le secretaire du psychologue qui se promene"…
Gupta, Rahul; Audhkhasi, Kartik; Lee, Sungbok; Narayanan, Shrikanth
2017-01-01
Non-verbal communication involves the encoding, transmission and decoding of non-lexical cues and is realized using vocal (e.g. prosody) or visual (e.g. gaze, body language) channels during conversation. These cues perform the function of maintaining conversational flow, expressing emotions, and marking personality and interpersonal attitude. In particular, non-verbal cues in speech such as paralanguage and non-verbal vocal events (e.g. laughter, sighs, cries) are used to nuance meaning and convey emotions, mood and attitude. For instance, laughter is associated with affective expressions while fillers (e.g. um, ah) are used to hold the floor during a conversation. In this paper we present an automatic non-verbal vocal event detection system focusing on the detection of laughter and fillers. We extend our system presented during the Interspeech 2013 Social Signals Sub-challenge (the winning entry in the challenge) for frame-wise event detection and test several schemes for incorporating local context during detection. Specifically, we incorporate context at two separate levels in our system: (i) the raw frame-wise features and (ii) the output decisions. Furthermore, our system processes the output probabilities based on a few heuristic rules in order to reduce erroneous frame-based predictions. Our overall system achieves an Area Under the Receiver Operating Characteristic curve of 95.3% for detecting laughter and 90.4% for fillers on the test set drawn from the data specifications of the Interspeech 2013 Social Signals Sub-challenge. We perform further analysis to understand the interrelation between the features and the obtained results. Specifically, we conduct a feature sensitivity analysis and correlate it with each feature's stand-alone performance. The observations suggest that the trained system is more sensitive to features carrying higher discriminability, with implications for better system design. PMID:28713197
Basirat, Anahita
2017-01-01
Cochlear implant (CI) users frequently achieve good speech understanding based on phoneme and word recognition. However, there is a significant variability between CI users in processing prosody. The aim of this study was to examine the abilities of an excellent CI user to segment continuous speech using intonational cues. A post-lingually deafened adult CI user and 22 normal hearing (NH) subjects segmented phonemically identical and prosodically different sequences in French such as 'l'affiche' (the poster) versus 'la fiche' (the sheet), both [lafiʃ]. All participants also completed a minimal pair discrimination task. Stimuli were presented in auditory-only and audiovisual presentation modalities. The performance of the CI user in the minimal pair discrimination task was 97% in the auditory-only and 100% in the audiovisual condition. In the segmentation task, contrary to the NH participants, the performance of the CI user did not differ from the chance level. Visual speech did not improve word segmentation. This result suggests that word segmentation based on intonational cues is challenging when using CIs even when phoneme/word recognition is very well rehabilitated. This finding points to the importance of the assessment of CI users' skills in prosody processing and the need for specific interventions focusing on this aspect of speech communication.
Fengler, Ineke; Nava, Elena; Röder, Brigitte
2015-01-01
Several studies have suggested that neuroplasticity can be triggered by short-term visual deprivation in healthy adults. Specifically, these studies have provided evidence that visual deprivation reversibly affects basic perceptual abilities. The present study investigated the long-lasting effects of short-term visual deprivation on emotion perception. To this aim, we visually deprived a group of young healthy adults, age-matched with a group of non-deprived controls, for 3 h and tested them before and after visual deprivation (i.e., after 8 h on average and at 4 week follow-up) on an audio–visual (i.e., faces and voices) emotion discrimination task. To observe changes at the level of basic perceptual skills, we additionally employed a simple audio–visual (i.e., tone bursts and light flashes) discrimination task and two unimodal (one auditory and one visual) perceptual threshold measures. During the 3 h period, both groups performed a series of auditory tasks. To exclude the possibility that changes in emotion discrimination may emerge as a consequence of the exposure to auditory stimulation during the 3 h stay in the dark, we visually deprived an additional group of age-matched participants who concurrently performed tasks unrelated (i.e., tactile) to the later tested abilities. The two visually deprived groups showed enhanced affective prosodic discrimination abilities in the context of incongruent facial expressions following the period of visual deprivation; this effect was partially maintained until follow-up. By contrast, no changes were observed in affective facial expression discrimination or in the basic perception tasks in any group. These findings suggest that short-term visual deprivation per se triggers a reweighting of visual and auditory emotional cues, an effect that may persist over longer durations. PMID:25954166
Word Prosody and Intonation of Sgaw Karen
NASA Astrophysics Data System (ADS)
West, Luke Alexander
The prosodic systems, and specifically the intonation systems, of Tibeto-Burman languages have received less attention in research than those of other language families. This study investigates the word prosody and intonation of Sgaw Karen, a tonal Tibeto-Burman language of eastern Burma, and finds similarities to both closely related Tibeto-Burman languages and the more distant Sinitic languages such as Mandarin. Sentences of varying lengths with controlled tonal environments were elicited from a total of 12 participants (5 male). In terms of word prosody, Sgaw Karen does not exhibit word stress cues, but does maintain a prosodic distinction between the more prominent major syllable and the phonologically reduced minor syllable. In terms of intonation, Sgaw Karen patterns like related Pwo Karen in its limited use of post-lexical tone, which is only present at Intonation Phrase (IP) boundaries. Unlike the intonation systems of Pwo Karen and Mandarin, however, Sgaw Karen exhibits downstep across its Accentual Phrases (AP), similar to phenomena identified in Tibetan and Burmese.
[Prosody, speech input and language acquisition].
Jungheim, M; Miller, S; Kühn, D; Ptok, M
2014-04-01
In order to acquire language, children require speech input. The prosody of the speech input plays an important role. In most cultures adults modify their code when communicating with children. Compared to normal speech, this code differs especially with regard to prosody. For this review a selective literature search in PubMed and Scopus was performed. Prosodic characteristics are a key feature of spoken language. By analysing prosodic features, children gain knowledge about underlying grammatical structures. Child-directed speech (CDS) is modified in a way that meaningful sequences are highlighted acoustically so that important information can be extracted from the continuous speech flow more easily. CDS is said to enhance the representation of linguistic signs. Taking into consideration what has previously been described in the literature regarding the perception of suprasegmentals, CDS seems to be able to support language acquisition due to the correspondence of prosodic and syntactic units. However, no findings have been reported indicating that the linguistically reduced CDS hinders first language acquisition.
Preschool children's performance on Profiling Elements of Prosody in Speech-Communication (PEPS-C).
Gibbon, Fiona E; Smyth, Heather
2013-07-01
Profiling Elements of Prosody in Speech-Communication (PEPS-C) has not been used widely to assess prosodic abilities of preschool children. This study was therefore aimed at investigating typically developing 4-year-olds' performance on PEPS-C. PEPS-C was presented to 30 typically developing 4-year-olds recruited in southern Ireland. Children were judged to have completed the test if they produced analysable responses to >95% of the items. The children's scores were compared with data from typically developing 5-6-year-olds. The majority (83%) of 4-year-olds were able to complete the test. The children scored at chance or weak ability levels on all subtests. The 4-year-olds had lower scores than 5-6-year-olds in all subtests, apart from one, with the difference reaching statistical significance in 8 out of 12 subtests. The results indicate that PEPS-C could be a valuable tool for assessing prosody in young children with typical development and some groups of young children with communication disorders.
'Who's a good boy?!' Dogs prefer naturalistic dog-directed speech.
Benjamin, Alex; Slocombe, Katie
2018-05-01
Infant-directed speech (IDS) is a special speech register thought to aid language acquisition and improve affiliation in human infants. Although IDS shares some of its properties with dog-directed speech (DDS), it is unclear whether the production of DDS is functional or simply an overgeneralisation of IDS within Western cultures. One recent study found that, while puppies attended more to a script read with DDS compared with adult-directed speech (ADS), adult dogs displayed no preference. In contrast, using naturalistic speech and a more ecologically valid set-up, we found that adult dogs attended to and showed more affiliative behaviour towards a speaker of DDS than of ADS. To explore whether this preference for DDS was modulated by the dog-specific words typically used in DDS, the acoustic features (prosody) of DDS, or a combination of the two, we conducted a second experiment. Here the stimuli from experiment 1 were produced with reversed prosody, meaning the prosody and content of ADS and DDS were mismatched. The results revealed no significant effect of speech type or content, suggesting that it may be the combination of the acoustic properties and the dog-related content of DDS that modulates the preference shown for naturalistic DDS. Overall, the results of this study suggest that naturalistic DDS, comprising both dog-directed prosody and dog-relevant content words, improves dogs' attention and may strengthen the affiliative bond between humans and their pets.
Wible, Cynthia G.
2012-01-01
A framework is described for understanding the schizophrenic syndrome at the brain systems level. It is hypothesized that over-activation of dynamic gesture and social perceptual processes in the temporal-parietal occipital junction (TPJ), posterior superior temporal sulcus (PSTS) and surrounding regions produces the syndrome (including positive and negative symptoms, their prevalence, prodromal signs, and cognitive deficits). Hippocampal system hyper-activity and atrophy have been consistently found in schizophrenia. Hippocampal activity is highly correlated with activity in the TPJ and may be a source of over-excitation of the TPJ and surrounding regions. Strong evidence for this comes from in-vivo recordings in humans during psychotic episodes. Many positive symptoms of schizophrenia can be reframed as the erroneous sense of a presence or other who is observing, acting, speaking, or controlling; these qualia are similar to those evoked during abnormal activation of the TPJ. The TPJ and PSTS play a key role in the perception (and production) of dynamic social, emotional, and attentional gestures for the self and others (e.g., body/face/eye gestures, audiovisual speech and prosody, and social attentional gestures such as eye gaze). The single cell representation of dynamic gestures is multimodal (auditory, visual, tactile), matching the predominant hallucinatory categories in schizophrenia. Inherent in the single cell perceptual signal of dynamic gesture representations is a computation of intention, agency, and anticipation or expectancy (for the self and others). Stimulation of the TPJ resulting in activation of the self representation has been shown to result in a feeling of a presence or multiple presences (due to heautoscopy) and also bizarre tactile experiences. Neurons in the TPJ are also tuned, or biased, to detect threat-related emotions.
Abnormal over-activation in this system could produce the conscious hallucination of a voice (audiovisual speech), a person or a touch. Over-activation could interfere with attentional/emotional gesture perception and production (negative symptoms). It could produce the unconscious feeling of being watched, followed, or of a social situation unfolding along with accompanying abnormal perception of intent and agency (delusions). Abnormal activity in the TPJ would also be predicted to create several cognitive disturbances that are characteristic of schizophrenia, including abnormalities in attention, predictive social processing, working memory, and a bias to erroneously perceive threat. PMID:22737114
Acoustic constituents of prosodic typology
NASA Astrophysics Data System (ADS)
Komatsu, Masahiko
Different languages sound different, and a considerable part of that difference derives from the typological difference of prosody. Although such differences are often described in terms of lexical accent types (stress accent, pitch accent, and tone; e.g. English, Japanese, and Chinese respectively) and rhythm types (stress-, syllable-, and mora-timed rhythms; e.g. English, Spanish, and Japanese respectively), it is unclear whether these types are determined in terms of acoustic properties. The thesis intends to provide a potential basis for the description of prosody in terms of acoustics. It argues for the hypothesis that the source component of the source-filter model (acoustic features) approximately corresponds to prosody (linguistic features) through several experimental-phonetic studies. The study consists of four parts. (1) Preliminary experiment: Perceptual language identification tests were performed using English and Japanese speech samples whose frequency spectral information (i.e. non-source component) is heavily reduced. The results indicated that humans can discriminate languages with such signals. (2) Discussion on the linguistic information that the source component contains: This part constitutes the foundation of the argument of the thesis. Perception tests of consonants with the source signal indicated that the source component carries information on broad categories of phonemes that contributes to the creation of rhythm. (3) Acoustic analysis: The speech samples of Chinese, English, Japanese, and Spanish, differing in prosodic types, were analyzed. These languages showed differences in acoustic characteristics of the source component. (4) Perceptual experiment: A language identification test for the above four languages was performed using the source signal with its acoustic features parameterized. It revealed that humans can discriminate prosodic types solely with the source features and that the discrimination is easier as acoustic information increases.
The series of studies showed the correspondence of the source component to prosodic features. In linguistics, prosodic types have not been discussed purely in terms of acoustics; they are usually related to the function of prosody or phonological units such as phonemes. The present thesis focuses on acoustics and makes a contribution to establishing the crosslinguistic description system of prosody.
Speech impairment in Down syndrome: a review.
Kent, Ray D; Vorperian, Houri K
2013-02-01
This review summarizes research on disorders of speech production in Down syndrome (DS) for the purposes of informing clinical services and guiding future research. Review of the literature was based on searches using MEDLINE, Google Scholar, PsycINFO, and HighWire Press, as well as consideration of reference lists in retrieved documents (including online sources). Search terms emphasized functions related to voice, articulation, phonology, prosody, fluency, and intelligibility. The following conclusions pertain to four major areas of review: voice, speech sounds, fluency and prosody, and intelligibility. The first major area is voice. Although a number of studies have reported on vocal abnormalities in DS, major questions remain about the nature and frequency of the phonatory disorder. Results of perceptual and acoustic studies have been mixed, making it difficult to draw firm conclusions or even to identify sensitive measures for future study. The second major area is speech sounds. Articulatory and phonological studies show that speech patterns in DS are a combination of delayed development and errors not seen in typical development. Delayed (i.e., developmental) and disordered (i.e., nondevelopmental) patterns are evident by the age of about 3 years, although DS-related abnormalities possibly appear earlier, even in infant babbling. The third major area is fluency and prosody. Stuttering and/or cluttering occur in DS at rates of 10%-45%, compared with about 1% in the general population. Research also points to significant disturbances in prosody. The fourth major area is intelligibility. Studies consistently show marked limitations in this area, but only recently has the research gone beyond simple rating scales.
Speech and gait in Parkinson's disease: When rhythm matters.
Ricciardi, Lucia; Ebreo, Michela; Graziosi, Adriana; Barbuto, Marianna; Sorbera, Chiara; Morgante, Letterio; Morgante, Francesca
2016-11-01
Speech disturbances in Parkinson's disease (PD) are heterogeneous, ranging from hypokinetic to hyperkinetic types. Repetitive speech disorder has been demonstrated in more advanced disease stages and has been considered the speech equivalent of freezing of gait (FOG). We aimed to verify a possible relationship between speech and FOG in patients with PD. Forty-three consecutive PD patients and 20 healthy control subjects underwent standardized speech evaluation using the Italian version of the Dysarthria Profile (DP), for its motor component, and subsets of the Battery for the Analysis of the Aphasic Deficit (BADA), for its procedural component. DP is a scale composed of 7 sub-sections assessing different features of speech; the rate/prosody section of DP includes items investigating the presence of repetitive speech disorder. Severity of FOG was evaluated with the new freezing of gait questionnaire (NFGQ). PD patients performed worse on the DP and BADA compared to healthy controls; patients with FOG or with Hoehn-Yahr >2 reported lower scores in the articulation, intelligibility, and rate/prosody sections of DP and in the semantic verbal fluency test. Logistic regression analysis showed that only age and rate/prosody scores were significantly associated with FOG in PD. Multiple regression analysis showed that only the severity of FOG was associated with the rate/prosody score. Our data demonstrate that repetitive speech disorder is related to FOG, is associated with advanced disease stages, and is independent of disease duration. Speech dysfluency represents a disorder of motor speech control, possibly sharing pathophysiological mechanisms with FOG.
Deficits in Social Cognition: An Unveiled Signature of Multiple Sclerosis.
Chalah, Moussa A; Ayache, Samar S
2017-03-01
Multiple sclerosis (MS) is a chronic progressive inflammatory disease of the central nervous system, representing the primary cause of non-traumatic disability in young adults. Cognitive dysfunction can affect patients at any time during the disease process and might alter the six core functional domains. Social cognition is a multi-component construct that includes the theory of mind, empathy and social perception of emotions from facial, bodily and vocal cues. Deficits in this cognitive faculty might have a drastic impact on interpersonal relationships and quality of life (QoL). Although exhaustive data exist for non-social cognitive functions in MS, comparatively little attention has been paid to social cognition. The objectives of the present work are to reappraise the definition and anatomy of social cognition and evaluate the integrity of this domain across MS studies. We will put special emphasis on neuropsychological and neuroimaging studies concerning social cognitive performance in MS. Studies were selected in conformity with PRISMA guidelines. We searched computerized databases (PubMed, MEDLINE, and Scopus) that index peer-reviewed journals to identify published reports in English and French that mention social cognition and multiple sclerosis, regardless of publication year. We combined keywords as follows: (facial emotion or facial expression or emotional facial expressions or theory of mind or social cognition or empathy or affective prosody) AND multiple sclerosis AND (MRI or functional MRI or positron emission tomography or functional imaging or structural imaging). We also scanned the reference lists of retrieved articles to identify additional relevant studies. In total, 26 studies matched the abovementioned criteria (26 neuropsychological studies, including five neuroimaging studies). Available data support the presence of social cognitive deficits even at early stages of MS.
The increase in disease burden, along with the "multiple disconnection syndrome" resulting from gray and white matter pathology, might exceed the "threshold for cerebral tolerance" and can manifest as deficits in social cognition. Given the impact of the latter on patients' social functioning, thorough screening for such deficits is crucial to improving patients' QoL. (JINS, 2017, 23, 266-286).
Oxytocin improves behavioural and neural deficits in inferring others' social emotions in autism.
Aoki, Yuta; Yahata, Noriaki; Watanabe, Takamitsu; Takano, Yosuke; Kawakubo, Yuki; Kuwabara, Hitoshi; Iwashiro, Norichika; Natsubori, Tatsunobu; Inoue, Hideyuki; Suga, Motomu; Takao, Hidemasa; Sasaki, Hiroki; Gonoi, Wataru; Kunimatsu, Akira; Kasai, Kiyoto; Yamasue, Hidenori
2014-11-01
Recent studies have suggested oxytocin's therapeutic effects on deficits in social communication and interaction in autism spectrum disorder through improvement of emotion recognition from direct emotional cues, such as facial expression and voice prosody. Although difficulty in understanding others' social emotions and beliefs under conditions without direct emotional cues also plays an important role in autism spectrum disorder, no study has examined the potential effect of oxytocin on this difficulty. Here, we sequentially conducted both a case-control study and a clinical trial to investigate the potential effects of oxytocin on this difficulty at behavioural and neural levels, measured using functional magnetic resonance imaging during a psychological task. This task was modified from the Sally-Anne Task, a well-known first-order false belief task. The task was optimized to probe the abilities to infer another person's social emotions and beliefs separately, so as to test the hypothesis that oxytocin improves the deficit in inferring others' social emotions rather than beliefs, under conditions without direct emotional cues. In the case-control study, 17 males with autism spectrum disorder showed significant behavioural deficits in inferring others' social emotions (P = 0.018) but not in inferring others' beliefs (P = 0.064) compared with 17 typically developing demographically-matched male participants. They also showed significantly less activity in the right anterior insula and posterior superior temporal sulcus during inferring others' social emotions, and in the dorsomedial prefrontal cortex during inferring others' beliefs, compared with the typically developing participants (P < 0.001 and cluster size > 10 voxels).
Then, to investigate potential effects of oxytocin on these behavioural and neural deficits, we conducted a double-blind, placebo-controlled, crossover, within-subject trial of single-dose intranasal administration of 24 IU oxytocin in an independent group of 20 males with autism spectrum disorder. Behaviourally, oxytocin significantly increased the correct rate in inferring others' social emotions (P = 0.043, one-tailed). At the neural level, the peptide significantly enhanced the originally diminished brain activity in the right anterior insula during inferring others' social emotions (P = 0.004), but not in the dorsomedial prefrontal cortex during inferring others' beliefs (P = 0.858). The present findings suggest that, in autism spectrum disorder, oxytocin enhances the ability to understand others' social emotions, an ability that requires second-order rather than first-order false-belief reasoning, under conditions without direct emotional cues, at both the behavioural and neural levels. © The Author (2014). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Asking or Telling--Real-time Processing of Prosodically Distinguished Questions and Statements.
Heeren, Willemijn F L; Bibyk, Sarah A; Gunlogson, Christine; Tanenhaus, Michael K
2015-12-01
We introduce a targeted language game approach using the visual world, eye-movement paradigm to assess when and how certain intonational contours affect the interpretation of utterances. We created a computer-based card game in which elliptical utterances such as "Got a candy" occurred with a nuclear contour most consistent with a yes-no question (H* H-H%) or a statement (L* L-L%). In Experiment 1 we explored how such contours are integrated online. In Experiment 2 we studied the expectations listeners have for how intonational contours signal intentions: do these reflect linguistic categories or rapid adaptation to the paradigm? Prosody had an immediate effect on interpretation, as indexed by the pattern and timing of fixations. Moreover, the association between different contours and intentions was quite robust in the absence of clear syntactic cues to sentence type, and was not due to rapid adaptation. Prosody had immediate effects on interpretation even though there was a construction-based bias to interpret "got a" as a question. Taken together, we believe this paradigm will provide further insights into how intonational contours and their phonetic realization interact with other cues to sentence type in online comprehension.
The Atlanta Motor Speech Disorders Corpus: Motivation, Development, and Utility.
Laures-Gore, Jacqueline; Russell, Scott; Patel, Rupal; Frankel, Michael
2016-01-01
This paper describes the design and collection of a comprehensive spoken language dataset from speakers with motor speech disorders in Atlanta, Ga., USA. This collaborative project aimed to gather a database of spoken language from nonmainstream American English speakers residing in the Southeastern US, in order to provide a more diverse perspective on motor speech disorders. Ninety-nine adults with an acquired neurogenic disorder resulting in a motor speech disorder were recruited. Stimuli include isolated vowels, single words, sentences with contrastive focus, sentences with emotional content and prosody, sentences with acoustic and perceptual sensitivity to motor speech disorders, as well as 'The Caterpillar' and 'The Grandfather' passages. The utility of these data in understanding the potential interplay of dialect and dysarthria was demonstrated with a subset of the speech samples in the database. The Atlanta Motor Speech Disorders Corpus will enrich our understanding of motor speech disorders through the examination of speech from a diverse group of speakers. © 2016 S. Karger AG, Basel.
What's in a voice? Prosody as a test case for the Theory of Mind account of autism.
Chevallier, Coralie; Noveck, Ira; Happé, Francesca; Wilson, Deirdre
2011-02-01
The human voice conveys a variety of information about people's feelings, emotions and mental states. Some of these cues rely on sophisticated Theory of Mind (ToM) skills, whilst others are simpler and do not require ToM. This variety provides an interesting test case for the ToM account of autism, which would predict greater impairment as ToM requirements increase. In this paper, we draw on psychological and pragmatic theories to classify vocal cues according to the amount of mindreading required to identify them. Children with a high-functioning Autism Spectrum Disorder and matched controls were tested in three experiments in which the speaker's state had to be extracted from vocalizations. Although our results confirm that people with autism have subtle difficulties dealing with vocal cues, they show a pattern of performance that is inconsistent with the view that atypical recognition of vocal cues is caused by impaired ToM. Copyright © 2010 Elsevier Ltd. All rights reserved.
Evaluating Interpreter's Skill by Measurement of Prosody Recognition
NASA Astrophysics Data System (ADS)
Tanaka, Saori; Nakazono, Kaoru; Nishida, Masafumi; Horiuchi, Yasuo; Ichikawa, Akira
Sign language is a visual language in which the main articulators are the hands, torso, head, and face. For simultaneous interpreters of Japanese Sign Language (JSL) and spoken Japanese, it is very important to recognize not only hand movements but also prosody, such as head movement, eye gaze, posture, and facial expression. This is because prosody carries grammatical rules for representing case and modification relations in JSL. The goal of this study is to introduce an examination called MPR (Measurement of Prosody Recognition) and to demonstrate that it can serve as an indicator of interpreters' other general skills. For this purpose, we conducted two experiments: the first examined the relationship between interpreters' experience and their MPR scores (Experiment 1), and the second investigated the specific skills that can be estimated by MPR (Experiment 2). The data in Experiment 1 came from four interpreters with more than one year of experience and four interpreters with less than one year of experience. The mean MPR accuracy of the more experienced group was higher than that of the less experienced group. The data in Experiment 2 came from three interpreters with high MPR scores and three with low MPR scores. Two hearing subjects and three deaf subjects evaluated their skill in terms of speech or sign interpretation skill, reliability of interpretation, expeditiousness, and subjective sense of accomplishment in a pizza-ordering task. The two experiments indicated that MPR could be useful for estimating whether an interpreter is sufficiently experienced to interpret from sign language to spoken Japanese, and whether they can interpret expeditiously without making deaf or hearing clients anxious. We conclude with suggestions for future work.
Metrical expectations from preceding prosody influence perception of lexical stress
Brown, Meredith; Salverda, Anne Pier; Dilley, Laura C.; Tanenhaus, Michael K.
2015-01-01
Two visual-world experiments tested the hypothesis that expectations based on preceding prosody influence the perception of suprasegmental cues to lexical stress. The results demonstrate that listeners’ consideration of competing alternatives with different stress patterns (e.g., ‘jury/gi’raffe) can be influenced by the fundamental frequency and syllable timing patterns across material preceding a target word. When preceding stressed syllables distal to the target word shared pitch and timing characteristics with the first syllable of the target word, pictures of alternatives with primary lexical stress on the first syllable (e.g., jury) initially attracted more looks than alternatives with unstressed initial syllables (e.g., giraffe). This effect was modulated when preceding unstressed syllables had pitch and timing characteristics similar to the initial syllable of the target word, with more looks to alternatives with unstressed initial syllables (e.g., giraffe) than to those with stressed initial syllables (e.g., jury). These findings suggest that expectations about the acoustic realization of upcoming speech include information about metrical organization and lexical stress, and that these expectations constrain the initial interpretation of suprasegmental stress cues. These distal prosody effects implicate on-line probabilistic inferences about the sources of acoustic-phonetic variation during spoken-word recognition. PMID:25621583
The emergence of complexity in prosody and syntax
Meir, Irit; Dachkovsky, Svetlana; Padden, Carol; Aronoff, Mark
2011-01-01
The relation between prosody and syntax is investigated here by tracing the emergence of each in a new language, Al-Sayyid Bedouin Sign Language. We analyze the structure of narratives of four signers of this language: two older second generation signers, and two about 15 years younger. We find that younger signers produce prosodic cues to dependency between semantically related constituents, e.g., the two clauses of conditionals, revealing a type and degree of complexity in their language that is not frequent in that of the older pair. In these younger signers, several rhythmic and (facial) intonational cues are aligned at constituent boundaries, indicating the emergence of a grammatical system. There are no overt syntactic markers (such as complementizers) to relate clauses; prosody is the only clue. But this prosodic complexity is matched by syntactic complexity inside propositions in the younger signers, who are more likely to use pronouns as abstract grammatical markers of arguments, and to combine predicates with their arguments within a constituent. As the prosodic means emerge for identifying constituent types and signaling dependency relations between them, the constituents themselves become increasingly complex. Finally, our study shows that the emergence of grammatical complexity is gradual. PMID:23087486
Signature of prosody in tonal realization: Evidence from Standard Chinese
NASA Astrophysics Data System (ADS)
Chen, Yiya
2004-05-01
It is by now widely accepted that the articulation of speech is influenced by the prosodic structure into which the utterance is organized. Furthermore, the effect of prosody on F0 realization has been shown to be mainly phonological [Beckman and Pierrehumbert (1986); Selkirk and Shen (1990)]. This paper presents data from the F0 realizations of lexical tones in Standard Chinese and shows that prosodic factors may influence the articulation of a lexical tone and induce phonetic variations in its surface F0 contours, similar to the phonetic effect of prosody on segment articulation [de Jong (1995); Keating and Fougeron (1997)]. Data were elicited from four native speakers of Standard Chinese producing all four lexical tones in different tonal contexts and under various focus conditions (i.e., under focus, no focus, and post focus), with three renditions for each condition. The observed F0 variations are argued to be best analyzed as resulting from prosodically driven differences in the phonetic implementation of the lexical tonal targets, which in turn are induced by pragmatically driven differences in how distinctively an underlying tonal target should be realized. Implications of this study for the phonetic implementation of phonological tonal targets will also be discussed.
NASA Astrophysics Data System (ADS)
Gao, Pei-pei; Liu, Feng
2016-10-01
With the development of information technology and artificial intelligence, speech synthesis plays a significant role in the field of human-computer interaction. However, the main problem of current speech synthesis techniques is a lack of naturalness and expressiveness, so synthesized speech is not yet close to the standard of natural language. Another problem is that human-computer interaction based on speech synthesis is too monotonous to realize a mechanism of subjective user drive. This paper introduces the historical development of speech synthesis and summarizes the general process of the technique, pointing out that the prosody generation module is an important part of speech synthesis. Building on further research, we introduce the use of eye-activity patterns during reading to control and drive prosody generation, a new human-computer interaction method that enriches the form of synthesis. The present state of speech synthesis technology is reviewed in detail. On the premise that eye-gaze data can be extracted, we propose a speech synthesis method, driven in real time by eye-movement signals, that can express the speaker's real speech rhythm. That is, while the reader silently reads the corpus, the system captures reading information such as gaze duration per prosodic unit, and establishes a hierarchical prosodic duration model to determine the duration parameters of the synthesized speech. Finally, analysis verifies the feasibility of the proposed method.
The ``listener'' in the modeling of speech prosody
NASA Astrophysics Data System (ADS)
Kohler, Klaus J.
2004-05-01
Autosegmental-metrical modeling of speech prosody is principally speaker-oriented. The production of pitch patterns, in systematic lab speech experiments as well as in spontaneous speech corpora, is analyzed in f0 tracings, from which sequences of H(igh) and L(ow) are abstracted. The perceptual relevance of these pitch categories in the transmission from speakers to listeners has largely not been conceptualized; thus their modeling in speech communication lacks an essential component. In the metalinguistic task of labeling speech data with the annotation system ToBI, the ``listener'' plays a subordinate role as well: H and L, being suggestive of signal values, are allocated with reference to f0 curves and little or no concern for perceptual classification by the trained labeler. The seriousness of this theoretical gap in the modeling of speech prosody is demonstrated by experimental data concerning f0-peak alignment. A number of papers in JASA have dealt with this topic from the perspective of synchronizing f0 with the vocal-tract time course in acoustic output. However, perceptual experiments within the Kiel intonation model show that ``early,'' ``medial'' and ``late'' peak alignments need to be defined perceptually and that in doing so microprosodic variation has to be filtered out from the surface signal.
Metrical expectations from preceding prosody influence perception of lexical stress.
Brown, Meredith; Salverda, Anne Pier; Dilley, Laura C; Tanenhaus, Michael K
2015-04-01
Two visual-world experiments tested the hypothesis that expectations based on preceding prosody influence the perception of suprasegmental cues to lexical stress. The results demonstrate that listeners' consideration of competing alternatives with different stress patterns (e.g., 'jury/gi'raffe) can be influenced by the fundamental frequency and syllable timing patterns across material preceding a target word. When preceding stressed syllables distal to the target word shared pitch and timing characteristics with the first syllable of the target word, pictures of alternatives with primary lexical stress on the first syllable (e.g., jury) initially attracted more looks than alternatives with unstressed initial syllables (e.g., giraffe). This effect was modulated when preceding unstressed syllables had pitch and timing characteristics similar to the initial syllable of the target word, with more looks to alternatives with unstressed initial syllables (e.g., giraffe) than to those with stressed initial syllables (e.g., jury). These findings suggest that expectations about the acoustic realization of upcoming speech include information about metrical organization and lexical stress and that these expectations constrain the initial interpretation of suprasegmental stress cues. These distal prosody effects implicate online probabilistic inferences about the sources of acoustic-phonetic variation during spoken-word recognition. (c) 2015 APA, all rights reserved.
Nakai, Yasushi; Takiguchi, Tetsuya; Matsui, Gakuyo; Yamaoka, Noriko; Takada, Satoshi
2017-10-01
Abnormal prosody is often evident in the voice intonations of individuals with autism spectrum disorders. We compared a machine-learning-based voice analysis with human hearing judgments made by 10 speech therapists for classifying children with autism spectrum disorders (n = 30) and typical development (n = 51). Using stimuli limited to single-word utterances, machine-learning-based voice analysis was superior to speech therapist judgments. There was a significantly higher true-positive than false-negative rate for machine-learning-based voice analysis but not for speech therapists. Results are discussed in terms of the artificiality of clinician judgments based on single-word utterances, and the objectivity that machine-learning-based voice analysis adds to judging abnormal prosody.
Automatic measurement of prosody in behavioral variant FTD.
Nevler, Naomi; Ash, Sharon; Jester, Charles; Irwin, David J; Liberman, Mark; Grossman, Murray
2017-08-15
To help understand speech changes in behavioral variant frontotemporal dementia (bvFTD), we developed and implemented automatic methods of speech analysis for quantification of prosody, and evaluated clinical and anatomical correlations. We analyzed semi-structured, digitized speech samples from 32 patients with bvFTD (21 male, mean age 63 ± 8.5, mean disease duration 4 ± 3.1 years) and 17 matched healthy controls (HC). We automatically extracted fundamental frequency (f0, the physical property of sound most closely correlating with perceived pitch) and computed pitch range on a logarithmic scale (semitone) that controls for individual and sex differences. We correlated f0 range with neuropsychiatric tests, and related f0 range to gray matter (GM) atrophy using 3T T1 MRI. We found significantly reduced f0 range in patients with bvFTD (mean 4.3 ± 1.8 ST) compared to HC (5.8 ± 2.1 ST; p = 0.03). Regression related reduced f0 range in bvFTD to GM atrophy in bilateral inferior and dorsomedial frontal as well as left anterior cingulate and anterior insular regions. Reduced f0 range reflects impaired prosody in bvFTD. This is associated with neuroanatomic networks implicated in language production and social disorders centered in the frontal lobe. These findings support the feasibility of automated speech analysis in frontotemporal dementia and other disorders. © 2017 American Academy of Neurology.
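The semitone measure described in this abstract can be sketched as a short computation: convert the ratio between the high and low ends of an f0 contour to a logarithmic (semitone) scale, so the range is relative to the speaker's own pitch. The sketch below is illustrative only; the function name and the optional percentile trimming are assumptions, not the authors' implementation.

```python
import math

def f0_range_semitones(f0_hz, lo_pct=0, hi_pct=100):
    """Pitch range of an f0 contour on a semitone (log) scale.

    The semitone scale controls for individual and sex differences
    because it is relative: doubling f0 is always 12 ST, whether the
    voice is low or high. Percentile trimming (an assumption here,
    not from the paper) can reduce the influence of f0-tracker errors.
    """
    voiced = sorted(f for f in f0_hz if f > 0)  # drop unvoiced frames (f0 = 0)
    if len(voiced) < 2:
        return 0.0
    lo = voiced[int(len(voiced) * lo_pct / 100)]
    hi = voiced[min(int(len(voiced) * hi_pct / 100), len(voiced) - 1)]
    return 12.0 * math.log2(hi / lo)  # 12 semitones per octave

# A contour spanning one octave (100-200 Hz) has a 12 ST range;
# a 200-400 Hz contour would score the same, by design.
contour = [100, 120, 150, 180, 200, 0, 160, 140]
print(round(f0_range_semitones(contour), 1))  # prints 12.0
```

On this scale, the reduced range reported for bvFTD (4.3 ST vs 5.8 ST in controls) indicates flattened prosody independent of each speaker's baseline pitch.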
The Interface of Syntax with Pragmatics and Prosody in Children with Autism Spectrum Disorders.
Terzi, Arhonto; Marinis, Theodoros; Francis, Kostantinos
2016-08-01
In order to study problems of individuals with Autism Spectrum Disorders (ASD) with morphosyntax, we investigated twenty high-functioning Greek-speaking children (mean age: 6;11) and twenty age- and language-matched typically developing children in environments that allow or forbid object clitics or their corresponding noun phrase. Children with ASD fell behind typically developing children in comprehending and producing simple clitics and producing noun phrases in focus structures. The two groups performed similarly in comprehending and producing clitics in clitic left dislocation and in producing noun phrases in non-focus structures. We argue that children with ASD have difficulties at the interface of (morpho)syntax with pragmatics and prosody, namely, distinguishing a discourse-prominent element and considering intonation relevant for a particular interpretation that excludes clitics.
EEG Correlates of Song Prosody: A New Look at the Relationship between Linguistic and Musical Rhythm
Gordon, Reyna L.; Magne, Cyrille L.; Large, Edward W.
2011-01-01
Song composers incorporate linguistic prosody into their music when setting words to melody, a process called “textsetting.” Composers tend to align the expected stress of the lyrics with strong metrical positions in the music. The present study was designed to explore the idea that temporal alignment helps listeners to better understand song lyrics by directing listeners’ attention to instances where strong syllables occur on strong beats. Three types of textsettings were created by aligning metronome clicks with all, some or none of the strong syllables in sung sentences. Electroencephalographic recordings were taken while participants listened to the sung sentences (primes) and performed a lexical decision task on subsequent words and pseudowords (targets, presented visually). Comparison of misaligned and well-aligned sentences showed that temporal alignment between strong/weak syllables and strong/weak musical beats was associated with modulations of induced beta and evoked gamma power, which have been shown to fluctuate with rhythmic expectancies. Furthermore, targets that followed well-aligned primes elicited greater induced alpha and beta activity, and better lexical decision task performance, compared with targets that followed misaligned and varied sentences. Overall, these findings suggest that alignment of linguistic stress and musical meter in song enhances musical beat tracking and comprehension of lyrics by synchronizing neural activity with strong syllables. This approach may begin to explain the mechanisms underlying the relationship between linguistic and musical rhythm in songs, and how rhythmic attending facilitates learning and recall of song lyrics. Moreover, the observations reported here coincide with a growing number of studies reporting interactions between the linguistic and musical dimensions of song, which likely stem from shared neural resources for processing music and speech. PMID:22144972
Su, Qiaotong; Galvin, John J.; Zhang, Guoping; Li, Yongxin
2016-01-01
Cochlear implant (CI) speech performance is typically evaluated using well-enunciated speech produced at a normal rate by a single talker. CI users often have greater difficulty with variations in speech production encountered in everyday listening. Within a single talker, speaking rate, amplitude, duration, and voice pitch information may be quite variable, depending on the production context. The coarse spectral resolution afforded by the CI limits perception of voice pitch, which is an important cue for speech prosody and for tonal languages such as Mandarin Chinese. In this study, sentence recognition from the Mandarin speech perception database was measured in adult and pediatric Mandarin-speaking CI listeners for a variety of speaking styles: voiced speech produced at slow, normal, and fast speaking rates; whispered speech; voiced emotional speech; and voiced shouted speech. Recognition of Mandarin Hearing in Noise Test sentences was also measured. Results showed that performance was significantly poorer with whispered speech relative to the other speaking styles and that performance was significantly better with slow speech than with fast or emotional speech. Results also showed that adult and pediatric performance was significantly poorer with Mandarin Hearing in Noise Test than with Mandarin speech perception sentences at the normal rate. The results suggest that adult and pediatric Mandarin-speaking CI patients are highly susceptible to whispered speech, due to the lack of lexically important voice pitch cues and perhaps other qualities associated with whispered speech. The results also suggest that test materials may contribute to differences in performance observed between adult and pediatric CI users. PMID:27363714
The Hypothesis of Apraxia of Speech in Children with Autism Spectrum Disorder
Shriberg, Lawrence D.; Paul, Rhea; Black, Lois M.; van Santen, Jan P.
2010-01-01
In a sample of 46 children aged 4 to 7 years with Autism Spectrum Disorder (ASD) and intelligible speech, there was no statistical support for the hypothesis of concomitant Childhood Apraxia of Speech (CAS). Perceptual and acoustic measures of participants’ speech, prosody, and voice were compared with data from 40 typically-developing children, 13 preschool children with Speech Delay, and 15 participants aged 5 to 49 years with CAS in neurogenetic disorders. Speech Delay and Speech Errors, respectively, were modestly and substantially more prevalent in participants with ASD than reported population estimates. Double dissociations in speech, prosody, and voice impairments in ASD were interpreted as consistent with a speech attunement framework, rather than with the motor speech impairments that define CAS. Key Words: apraxia, dyspraxia, motor speech disorder, speech sound disorder PMID:20972615
Simmons, Elizabeth Schoen; Paul, Rhea; Shic, Frederick
2016-01-01
This study examined the acceptability of a mobile application, SpeechPrompts, designed to treat prosodic disorders in children with ASD and other communication impairments. Ten speech-language pathologists (SLPs) in public schools and 40 of their students, aged 5-19 years, with prosody deficits participated. Students received treatment with the software over eight weeks. Pre- and post-treatment speech samples and student engagement data were collected. Feedback on the utility of the software was also obtained. SLPs implemented the software with their students in an authentic education setting. Student engagement ratings indicated that students' attention to the software was maintained during treatment. Although more testing is warranted, post-treatment prosody ratings suggest that SpeechPrompts has potential to be a useful tool in the treatment of prosodic disorders.
Speech Impairment in Down Syndrome: A Review
Kent, Ray D.; Vorperian, Houri K.
2012-01-01
Purpose This review summarizes research on disorders of speech production in Down Syndrome (DS) for the purposes of informing clinical services and guiding future research. Method Review of the literature was based on searches using MEDLINE, Google Scholar, PsycINFO, and HighWire Press, as well as consideration of reference lists in retrieved documents (including online sources). Search terms emphasized functions related to voice, articulation, phonology, prosody, fluency and intelligibility. Conclusions The following conclusions pertain to four major areas of review: (a) Voice. Although a number of studies have reported on vocal abnormalities in DS, major questions remain about the nature and frequency of the phonatory disorder. Results of perceptual and acoustic studies have been mixed, making it difficult to draw firm conclusions or even to identify sensitive measures for future study. (b) Speech sounds. Articulatory and phonological studies show that speech patterns in DS are a combination of delayed development and errors not seen in typical development. Delayed (i.e., developmental) and disordered (i.e., nondevelopmental) patterns are evident by the age of about 3 years, although DS-related abnormalities possibly appear earlier, even in infant babbling. (c) Fluency and prosody. Stuttering and/or cluttering occur in DS at rates of 10 to 45%, compared to about 1% in the general population. Research also points to significant disturbances in prosody. (d) Intelligibility. Studies consistently show marked limitations in this area, but only recently has research gone beyond simple rating scales. PMID:23275397
Acquiring Complex Focus-Marking: Finnish 4- to 5-Year-Olds Use Prosody and Word Order in Interaction
Arnhold, Anja; Chen, Aoju; Järvikivi, Juhani
2016-01-01
Using a language game to elicit short sentences in various information structural conditions, we found that Finnish 4- to 5-year-olds already exhibit a characteristic interaction between prosody and word order in marking information structure. Providing insights into the acquisition of this complex system of interactions, the production data showed interesting parallels to adult speakers of Finnish on the one hand and to children acquiring other languages on the other hand. Analyzing a total of 571 sentences produced by 16 children, we found that children rarely adjusted input word order, but did systematically avoid marked OVS order in the contrastive object focus condition. Focus condition also significantly affected four prosodic parameters: f0, duration, pauses, and voice quality. Differing slightly from the effects displayed in adult Finnish speech, the children produced larger f0 ranges for words in contrastive focus and smaller ones for unfocused words, varied only the duration of object constituents, which was longer in the focused and shorter in the unfocused condition, inserted more pauses before and after focused constituents, and systematically modified their use of non-modal voice quality only in utterances with narrow focus. Crucially, these effects were modulated by word order. In contrast to comparable data from children acquiring Germanic languages, the present findings reflect the more central role of word order and of interactions between word order and prosody in marking information structure in Finnish. Thus, the study highlights the role of the target language in determining linguistic development. PMID:27990130
The Psychologist as an Interlocutor in Autism Spectrum Disorder Assessment: Insights From a Study of Spontaneous Prosody
Bone, Daniel; Lee, Chi-Chun; Black, Matthew P.; Williams, Marian E.; Lee, Sungbok; Levitt, Pat; Narayanan, Shrikanth
2015-01-01
Purpose The purpose of this study was to examine relationships between prosodic speech cues and autism spectrum disorder (ASD) severity, hypothesizing a mutually interactive relationship between the speech characteristics of the psychologist and the child. The authors objectively quantified acoustic-prosodic cues of the psychologist and of the child with ASD during spontaneous interaction, establishing a methodology for future large-sample analysis. Method Speech acoustic-prosodic features were semiautomatically derived from segments of semistructured interviews (Autism Diagnostic Observation Schedule, ADOS; Lord, Rutter, DiLavore, & Risi, 1999; Lord et al., 2012) with 28 children who had previously been diagnosed with ASD. Prosody was quantified in terms of intonation, volume, rate, and voice quality. Research hypotheses were tested via correlation as well as hierarchical and predictive regression between ADOS severity and prosodic cues. Results Automatically extracted speech features demonstrated prosodic characteristics of dyadic interactions. As rated ASD severity increased, both the psychologist and the child demonstrated effects for turn-end pitch slope, and both spoke with atypical voice quality. The psychologist’s acoustic cues predicted the child’s symptom severity better than did the child’s acoustic cues. Conclusion The psychologist, acting as evaluator and interlocutor, was shown to adjust his or her behavior in predictable ways based on the child’s social-communicative impairments. The results support future study of speech prosody of both interaction partners during spontaneous conversation, while using automatic computational methods that allow for scalable analysis on much larger corpora. PMID:24686340
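As an illustrative aside, the correlation analyses this abstract describes can be sketched in a few lines. The data below are entirely hypothetical (the study's actual features and scores are not reproduced here); the sketch only shows how a Pearson correlation between an acoustic-prosodic cue and a severity score would be computed.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D sequences."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym)))

# Hypothetical data: ADOS severity scores and a turn-end pitch-slope
# feature for a small sample (values are illustrative, not from the study).
severity = [4, 5, 6, 6, 7, 8, 9, 10]
pitch_slope = [-0.2, -0.1, -0.3, -0.5, -0.6, -0.5, -0.8, -0.9]

r = pearson_r(severity, pitch_slope)
print(round(r, 3))
```

In the actual study, such coefficients would be computed per feature and tested for significance; the sketch only shows the basic quantity involved.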
The influence of social cognition on ego disturbances in patients with schizophrenia.
Schimansky, Jenny; Rössler, Wulf; Haker, Helene
2012-01-01
Subjects experiencing ego disturbances can be classified as a distinct subgroup of schizophrenia patients. These symptoms imply a disturbance in the ego-world boundary, which in turn implies aberrations in the perception, processing and understanding of social information. This paper provides a comparison of a group of schizophrenia patients and a group of healthy controls on a range of social-cognitive tasks. Furthermore, it analyzes the relationship between ego disturbances and social-cognitive as well as clinical variables in the schizophrenia subsample. Two groups - 40 schizophrenia patients and 39 healthy subjects - were compared. In the source monitoring task, subjects performed simple computer mouse movements and evaluated the partially manipulated visual feedback as either self- or other-generated. In a second step, participants indicated the confidence of their decision on a 4-point rating scale. In an emotion-recognition task, subjects had to identify 6 basic emotions in the prosody of spoken sentences. In the 'reading-the-mind-in-the-eyes' test, subjects had to infer mental states from pictures that depicted others' eyes. In an attribution task, subjects were presented with descriptions of social events and asked to attribute the cause of the event either to a person, an object or a situation. Additionally, all subjects were tested for cognitive functioning levels. The schizophrenia patient group performed significantly worse on all social-cognitive tasks than the healthy control group. Correlation analysis showed that ego disturbances were related to deficits in person attribution and lower levels of confidence in the source monitoring task. Also, ego disturbances were related to higher PANSS positive scores and a higher number of hospitalizations. Stepwise regression analysis revealed that social-cognitive variables explained 48.0% of the variance in the ego-disturbance score and represented the best predictors for ego disturbances. 
One particular clinical variable, namely the number of hospitalizations, additionally explained 13.8% of the variance. Our findings suggest that ego disturbances are related to deficits in the social-cognitive domain, and, to a lesser extent, to clinical variables such as the number of hospitalizations. Copyright © 2012 S. Karger AG, Basel.
Mind the gap: Neural coding of species identity in birdsong prosody.
Araki, Makoto; Bandi, M M; Yazaki-Sugiyama, Yoko
2016-12-09
Juvenile songbirds learn vocal communication from adult tutors of the same species but not from adults of other species. How species-specific learning emerges from the basic features of song prosody remains unknown. In the zebra finch auditory cortex, we discovered a class of neurons that register the silent temporal gaps between song syllables and are distinct from neurons encoding syllable morphology. Behavioral learning and neuronal coding of temporal gap structure resisted song tutoring from other species: Zebra finches fostered by Bengalese finch parents learned Bengalese finch song morphology transposed onto zebra finch temporal gaps. During the vocal learning period, temporal gap neurons fired selectively to zebra finch song. The innate temporal coding of intersyllable silent gaps suggests a neuronal barcode for conspecific vocal learning and social communication in acoustically diverse environments. Copyright © 2016, American Association for the Advancement of Science.
Prosody and parsing in coordination structures.
Schepman, A; Rodway, P
2000-05-01
The effect of prosodic boundary cues on the off-line disambiguation and on-line parsing of coordination structures was examined. It was found that relative clauses were attached to coordinated object noun phrases in preference to second conjuncts in sentences like: The lawyer greeted the powerful barrister and the wise judge who was/were walking to the courtroom. Naive speakers signalled the syntactic contrast between the two structures by a prosodic break between the conjuncts when the relative clause was attached to the second conjunct. Listeners were able to use this prosodic information in both off-line syntactic disambiguation and on-line syntactic parsing. The findings are compatible with a model in which prosody has a strong immediate effect on parsing. It is argued that the current experimental design has avoided confounds present in earlier studies on the on-line integration of prosodic and syntactic information.
Rossi, N F; Giacheti, C M
2017-07-01
The Williams syndrome (WS) phenotype is described as unique and intriguing. The aim of this study was to investigate the associations between speech-language abilities, general cognitive functioning and behavioural problems in individuals with WS, considering age effects and speech-language characteristics of WS sub-groups. The study's participants were 26 individuals with WS and their parents. General cognitive functioning was assessed with the Wechsler Intelligence Scale. The Peabody Picture Vocabulary Test, Token Test and the Cookie Theft Picture test were used as speech-language measures. Five speech-language characteristics were evaluated from a 30-min conversation (clichés, echolalia, perseverative speech, exaggerated prosody and monotone intonation). The Child Behaviour Checklist (CBCL 6-18) was used to assess behavioural problems. Higher single-word receptive vocabulary and narrative vocabulary were negatively associated with CBCL T-scores for Social Problems, Aggressive Behaviour and Total Problems. Speech rate was negatively associated with the CBCL Withdrawn/Depressed T-score. Monotone intonation was associated with shy behaviour, as exaggerated prosody was with talkative behaviour. Individuals with WS who showed perseverative speech and exaggerated prosody presented higher scores on Thought Problems. Echolalia was significantly associated with lower Verbal IQ. No significant association was found between IQ and behaviour problems. Age-associated effects were observed only for the Aggressive Behaviour scale. Associations reported in the present study may represent an insightful background for future predictive studies of speech-language, cognition and behaviour problems in WS. © 2017 MENCAP and International Association of the Scientific Study of Intellectual and Developmental Disabilities and John Wiley & Sons Ltd.
Do patients with schizophrenia use prosody to encode contrastive discourse status?
Michelas, Amandine; Faget, Catherine; Portes, Cristel; Lienhart, Anne-Sophie; Boyer, Laurent; Lançon, Christophe; Champagne-Lavau, Maud
2014-01-01
Patients with schizophrenia (SZ) often display social cognition disorders, including Theory of Mind (ToM) impairments and communication disruptions. Though language disorders appear to be primarily a disruption of pragmatics, SZ individuals can also experience difficulties at other linguistic levels, including the prosodic one. Here, using an interactive paradigm, we showed that SZ individuals did not use prosodic phrasing to encode the contrastive status of discourse referents in French. We used a semi-spontaneous task to elicit noun-adjective pairs in which the noun in the second noun-adjective fragment was identical to the noun in the first fragment (e.g., BONBONS marron “brown candies” vs. BONBONS violets “purple candies”) or could contrast with it (e.g., BOUGIES violettes “purple candles” vs. BONBONS violets “purple candies”). We found that healthy controls parsed the target noun in the second noun-adjective fragment separately from the color adjective, to warn their interlocutor that this noun constituted a contrastive entity (e.g., BOUGIES violettes followed by [BONBONS] [violets]) compared to when it referred to the same object as in the first fragment (e.g., BONBONS marron followed by [BONBONS violets]). On the contrary, SZ individuals did not use prosodic phrasing to encode the contrastive status of target nouns. In addition, SZ individuals' difficulty using contrastive prosody was correlated with their scores in a classical ToM task (i.e., the hinting task). Taken together, our data provide evidence that SZ patients exhibit difficulty prosodically encoding discourse status and sketch a potential relationship between ToM and the use of linguistic prosody. PMID:25101025
Positive emotion impedes emotional but not cognitive conflict processing.
Zinchenko, Artyom; Obermeier, Christian; Kanske, Philipp; Schröger, Erich; Kotz, Sonja A
2017-06-01
Cognitive control enables successful goal-directed behavior by resolving a conflict between opposing action tendencies, while emotional control arises as a consequence of emotional conflict processing such as in irony. While negative emotion facilitates both cognitive and emotional conflict processing, it is unclear how emotional conflict processing is affected by positive emotion (e.g., humor). In 2 EEG experiments, we investigated the role of positive audiovisual target stimuli in cognitive and emotional conflict processing. Participants categorized either spoken vowels (cognitive task) or their emotional valence (emotional task) and ignored the visual stimulus dimension. Behaviorally, a positive target showed no influence on cognitive conflict processing, but impeded emotional conflict processing. In the emotional task, response time conflict costs were higher for positive than for neutral targets. In the EEG, we observed an interaction of emotion by congruence in the P200 and N200 ERP components in emotional but not in cognitive conflict processing. In the emotional conflict task, the P200 and N200 conflict effect was larger for emotional than neutral targets. Thus, our results show that emotion affects conflict processing differently as a function of conflict type and emotional valence. This suggests that there are conflict- and valence-specific mechanisms modulating executive control.
The sounds of sarcasm in English and Cantonese: A cross-linguistic production and perception study
NASA Astrophysics Data System (ADS)
Cheang, Henry S.
Three studies were conducted to examine the acoustic markers of sarcasm in English and in Cantonese, and the manner in which such markers are perceived across these languages. The first study consisted of acoustic analyses of sarcastic utterances spoken in English to verify whether particular prosodic cues correspond to English sarcastic speech. Native English speakers produced utterances expressing sarcasm, sincerity, humour, or neutrality. Measures taken from each utterance included fundamental frequency (F0), amplitude, speech rate, harmonics-to-noise ratio (HNR, to probe voice quality), and one-third octave spectral values (to probe resonance). The second study was conducted to ascertain whether specific acoustic features marked sarcasm in Cantonese and how such features compare with English sarcastic prosody. The elicitation and acoustic analysis methods from the first study were applied to similarly-constructed Cantonese utterances spoken by native Cantonese speakers. Direct acoustic comparisons between Cantonese and English sarcasm exemplars were also made. To further test for cross-linguistic prosodic cues of sarcasm and to assess whether sarcasm could be conveyed across languages, a cross-linguistic perceptual study was then performed. A subset of utterances from the first two studies was presented to naive listeners fluent in either Cantonese or English. Listeners had to identify the attitude in each utterance regardless of language of presentation. Sarcastic utterances in English (regardless of text) were marked by lower mean F0 and reductions in HNR and F0 standard deviation (relative to comparison attitudes). Resonance changes, reductions in both speech rate and F0 range signalled sarcasm in conjunction with some vocabulary terms. By contrast, higher mean F0, amplitude range reductions, and F0 range restrictions corresponded with sarcastic utterances spoken in Cantonese regardless of text. 
For Cantonese, reduced speech rate and higher HNR interacted with certain vocabulary to mark sarcasm. Sarcastic prosody was most distinguished from acoustic features corresponding to sincere utterances in both languages. Direct English-Cantonese comparisons between sarcasm tokens confirmed cross-linguistic differences in sarcastic prosody. Finally, Cantonese and English listeners could identify sarcasm in their native languages but identified sarcastic utterances spoken in the unfamiliar language at chance levels. It was concluded that particular acoustic cues marked sarcastic speech in Cantonese and English, and these patterns of sarcastic prosody were specific to each language.
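The acoustic measures recurring in this and the surrounding abstracts (mean F0, F0 range) can be approximated with standard signal processing. The sketch below estimates F0 from a synthetic 220 Hz tone with a simple autocorrelation method; it is a minimal illustration on artificial data, not the analysis pipeline used in any of these studies.

```python
import numpy as np

def estimate_f0(signal, sr, fmin=75.0, fmax=500.0):
    """Crude F0 estimate (Hz): pick the autocorrelation peak whose lag
    falls inside the plausible voice range [fmin, fmax]."""
    sig = np.asarray(signal, dtype=float)
    sig = sig - sig.mean()
    # One-sided autocorrelation: lags 0 .. len(sig)-1
    ac = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(ac[lo:hi + 1]))
    return sr / lag

sr = 16000
t = np.arange(int(0.1 * sr)) / sr
tone = np.sin(2 * np.pi * 220.0 * t)   # synthetic 220 Hz "voice"
f0 = estimate_f0(tone, sr)
print(round(f0, 1))
```

Real prosody research would use robust pitch trackers and frame-wise analysis to obtain F0 contours, ranges, and standard deviations; this sketch only shows the underlying periodicity idea.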
Segmentation and selection of appropriate Chinese characters in writing place names in Japanese.
Tokimoto, S; Flores d'Arcais, G B
2001-03-01
This paper explores the relation between an unknown place name written in hiragana (a Japanese syllabary) and its corresponding written representation in kanji (Chinese characters). We propose three principles as those operating in the selection of the appropriate Chinese characters in writing unknown place names. The three principles are concerned with the combination of on and kun readings (zyuubako-yomi), the number of segmentations, and the bimoraicity characteristics of kanji chosen. We performed two experiments to test the principles; the results supported our hypotheses. These results have some implications for the structure of the Japanese mental lexicon, for the processing load in the use of Chinese characters, and for Japanese prosody and morphology.
The role of the supplementary motor area for speech and language processing.
Hertrich, Ingo; Dietrich, Susanne; Ackermann, Hermann
2016-09-01
Apart from its function in speech motor control, the supplementary motor area (SMA) has largely been neglected in models of speech and language processing in the brain. The aim of this review paper is to summarize more recent work, suggesting that the SMA has various superordinate control functions during speech communication and language reception, which is particularly relevant in case of increased task demands. The SMA is subdivided into a posterior region serving predominantly motor-related functions (SMA proper) whereas the anterior part (pre-SMA) is involved in higher-order cognitive control mechanisms. In analogy to motor triggering functions of the SMA proper, the pre-SMA seems to manage procedural aspects of cognitive processing. These latter functions, among others, comprise attentional switching, ambiguity resolution, context integration, and coordination between procedural and declarative memory structures. Regarding language processing, this refers, for example, to the use of inner speech mechanisms during language encoding, but also to lexical disambiguation, syntax and prosody integration, and context-tracking. Copyright © 2016 Elsevier Ltd. All rights reserved.
Protocol evaluation for effective music therapy for persons with nonfluent aphasia.
Kim, Mijin; Tomaino, Concetta M
2008-01-01
Although the notion of the language specificity of neural correlates has been widely accepted in the past (e.g., left-hemispheric dominance including Broca's and Wernicke's areas, the N400 ERP component of semantic processing, and the P600 ERP component of syntactic processing), recent studies have shown that music and language share some important neurological aspects in their processing, both involving bilateral hemispheric activities. In line with this are the frequent behavioral clinical observations that persons with aphasia show improved articulation and prosody of speech in musically assisted phrases. Connecting recent neurological findings with clinical observations would not only inform clinical practice but would enhance understanding of the neurological mechanisms involved in the processing of speech/language and music. This article presents a music therapy treatment protocol study of 7 nonfluent patients with aphasia. The data and findings are discussed with regard to some of the recent focuses and issues addressed in the experimental studies using cognitive-behavioral, electrophysiological, and brain-imaging techniques.
Face to face with emotion: holistic face processing is modulated by emotional state.
Curby, Kim M; Johnson, Kareem J; Tyson, Alyssa
2012-01-01
Negative emotions are linked with a local, rather than global, visual processing style, which may preferentially facilitate feature-based, relative to holistic, processing mechanisms. Because faces are typically processed holistically, and because social contexts are prime elicitors of emotions, we examined whether negative emotions decrease holistic processing of faces. We induced positive, negative, or neutral emotions via film clips and measured holistic processing before and after the induction: participants made judgements about cued parts of chimeric faces, and holistic processing was indexed by the interference caused by task-irrelevant face parts. Emotional state significantly modulated face-processing style, with the negative emotion induction leading to decreased holistic processing. Furthermore, self-reported change in emotional state correlated with changes in holistic processing. These results contrast with general assumptions that holistic processing of faces is automatic and immune to outside influences, and they illustrate emotion's power to modulate socially relevant aspects of visual perception.
Overaccommodation in a Singapore Eldercare Facility
ERIC Educational Resources Information Center
Cavallaro, Francesco; Seilhamer, Mark Fifer; Chee, Yi Tian Felicia; Ng, Bee Chin
2016-01-01
Numerous studies have shown that some speech accommodation in interactions with the elderly can aid communication. "Over"accommodaters, however, employing features such as high pitch, exaggerated prosody, and child-like forms of address, often demean, infantilise, and patronise elderly interlocutors rather than facilitate comprehension.…
Speech prosody impairment predicts cognitive decline in Parkinson's disease.
Rektorova, Irena; Mekyska, Jiri; Janousova, Eva; Kostalova, Milena; Eliasova, Ilona; Mrackova, Martina; Berankova, Dagmar; Necasova, Tereza; Smekal, Zdenek; Marecek, Radek
2016-08-01
Impairment of speech prosody is characteristic for Parkinson's disease (PD) and does not respond well to dopaminergic treatment. We assessed whether baseline acoustic parameters, alone or in combination with other predominantly non-dopaminergic symptoms may predict global cognitive decline as measured by the Addenbrooke's cognitive examination (ACE-R) and/or worsening of cognitive status as assessed by a detailed neuropsychological examination. Forty-four consecutive non-depressed PD patients underwent clinical and cognitive testing, and acoustic voice analysis at baseline and at the two-year follow-up. Influence of speech and other clinical parameters on worsening of the ACE-R and of the cognitive status was analyzed using linear and logistic regression. The cognitive status (classified as normal cognition, mild cognitive impairment and dementia) deteriorated in 25% of patients during the follow-up. The multivariate linear regression model consisted of the variation in range of the fundamental voice frequency (F0VR) and the REM Sleep Behavioral Disorder Screening Questionnaire (RBDSQ). These parameters explained 37.2% of the variability of the change in ACE-R. The most significant predictors in the univariate logistic regression were the speech index of rhythmicity (SPIR; p = 0.012), disease duration (p = 0.019), and the RBDSQ (p = 0.032). The multivariate regression analysis revealed that SPIR alone led to 73.2% accuracy in predicting a change in cognitive status. Combining SPIR with RBDSQ improved the prediction accuracy of SPIR alone by 7.3%. Impairment of speech prosody together with symptoms of RBD predicted rapid cognitive decline and worsening of PD cognitive status during a two-year period. Copyright © 2016 Elsevier Ltd. All rights reserved.
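The multivariate prediction this abstract describes can be sketched with ordinary least squares. The data below are simulated (the variable names F0VR and RBDSQ follow the abstract, but the values are synthetic), so the sketch illustrates only the form of such an analysis, not the study's results.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 44  # same sample size as the study, but the data here are simulated

# Hypothetical predictors: F0 variation range (F0VR) and RBDSQ score.
f0vr = rng.normal(25.0, 5.0, n)
rbdsq = rng.integers(0, 13, n).astype(float)

# Simulated outcome: change in ACE-R, loosely driven by both predictors.
delta_acer = -0.3 * f0vr - 0.5 * rbdsq + rng.normal(0.0, 2.0, n)

# Fit the two-predictor linear model (intercept + F0VR + RBDSQ).
X = np.column_stack([np.ones(n), f0vr, rbdsq])
beta, _, _, _ = np.linalg.lstsq(X, delta_acer, rcond=None)

# Proportion of outcome variance explained (R squared).
pred = X @ beta
ss_res = np.sum((delta_acer - pred) ** 2)
ss_tot = np.sum((delta_acer - delta_acer.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(round(float(r2), 3))
```

In the study itself, the analogous quantity (variance in ACE-R change explained by F0VR and RBDSQ) was 37.2%; here it depends entirely on the simulated coefficients and noise.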
Attentional Modulation of Emotional Conflict Processing with Flanker Tasks
Zhou, Pingyan; Liu, Xun
2013-01-01
Emotion processing has been shown to acquire priority by biasing allocation of attentional resources. Aversive images or fearful expressions are processed quickly and automatically. Many existing findings suggested that processing of emotional information was pre-attentive, largely immune from attentional control. Other studies argued that attention gated the processing of emotion. To tackle this controversy, the current study examined whether and to what degrees attention modulated processing of emotion using a stimulus-response-compatibility (SRC) paradigm. We conducted two flanker experiments using color scale faces in neutral expressions or gray scale faces in emotional expressions. We found SRC effects for all three dimensions (color, gender, and emotion) and SRC effects were larger when the conflicts were task relevant than when they were task irrelevant, suggesting that conflict processing of emotion was modulated by attention, similar to those of color and face identity (gender). However, task modulation on color SRC effect was significantly greater than that on gender or emotion SRC effect, indicating that processing of salient information was modulated by attention to a lesser degree than processing of non-emotional stimuli. We proposed that emotion processing can be influenced by attentional control, but at the same time salience of emotional information may bias toward bottom-up processing, rendering less top-down modulation than that on non-emotional stimuli. PMID:23544155
The Role of Prosodic Sensitivity in Children's Reading Development
ERIC Educational Resources Information Center
Whalley, Karen; Hansen, Julie
2006-01-01
While the critical importance of phonological awareness (segmental phonology) to reading ability is well established, the potential role of prosody (suprasegmental phonology) in reading development has only recently been explored. This study examined the relationship between children's prosodic skills and reading ability. Hierarchical multiple…
Robert Seymour Bridges OM: Poet, physician and philosopher
James, Theodore
1994-01-01
There has not been an English poet more interested in prosody nor physician more taken to medicine for its human contact, nor philosopher who lived closer to the tenets of his belief, than Robert Bridges (1844–1930). PMID:8207726
Prosody and Intonation of Western Cham
ERIC Educational Resources Information Center
Ueki, Kaori
2011-01-01
This dissertation investigates the prosodic and intonational characteristics of Western Cham (three letter code for International Organization for Standardization's ISO 639-3 code: [iso=cja]), an Austronesian language in the Chamic sub-group. I examine acoustic variables of prominence at word and postlexical levels: syllable duration, pitch…
Transitioning from analog to digital audio recording in childhood speech sound disorders.
Shriberg, Lawrence D; McSweeny, Jane L; Anderson, Bruce E; Campbell, Thomas F; Chial, Michael R; Green, Jordan R; Hauner, Katherina K; Moore, Christopher A; Rusiewicz, Heather L; Wilson, David L
2005-06-01
Few empirical findings or technical guidelines are available on the current transition from analog to digital audio recording in childhood speech sound disorders. Of particular concern in the present context was whether a transition from analog- to digital-based transcription and coding of prosody and voice features might require re-standardizing a reference database for research in childhood speech sound disorders. Two research transcribers with different levels of experience glossed, transcribed, and prosody-voice coded conversational speech samples from eight children with mild to severe speech disorders of unknown origin. The samples were recorded, stored, and played back using representative analog and digital audio systems. Effect sizes calculated for an array of analog versus digital comparisons ranged from negligible to medium, with a trend for participants' speech competency scores to be slightly lower for samples obtained and transcribed using the digital system. We discuss the implications of these and other findings for research and clinical practice. PMID:16019779
Poetic rhyme reflects cross-linguistic differences in information structure.
Wagner, Michael; McCurdy, Katherine
2010-11-01
Identical rhymes (right/write, attire/retire) are considered satisfactory and even artistic in French poetry but are considered unsatisfactory in English. This has been a consistent generalization over the course of centuries, a surprising fact given that other aspects of poetic form in French were happily applied in English. This paper puts forward the hypothesis that this difference is not merely one of poetic tradition, but is grounded in the distinct ways in which information-structure affects prosody in the two languages. A study of rhyme usage in poetry and a perception experiment confirm that native speakers' intuitions about rhyming in the two languages indeed differ, and a further perception experiment supports the hypothesis that this fact is due to a constraint on prosody that is active in English but not in French. The findings suggest that certain forms of artistic expression in poetry are influenced, and even constrained, by more general properties of a language. Copyright © 2010 Elsevier B.V. All rights reserved.
White matter pathways for prosodic structure building: A case study.
Sammler, Daniela; Cunitz, Katrin; Gierhan, Sarah M E; Anwander, Alfred; Adermann, Jens; Meixensberger, Jürgen; Friederici, Angela D
2018-05-11
The relevance of left dorsal and ventral fiber pathways for syntactic and semantic comprehension is well established, while pathways for prosody are little explored. The present study examined linguistic prosodic structure building in a patient whose right arcuate/superior longitudinal fascicles and posterior corpus callosum were transiently compromised by a vasogenic peritumoral edema. Compared to ten matched healthy controls, the patient's ability to detect irregular prosodic structure significantly improved between pre- and post-surgical assessment. This recovery was accompanied by an increase in average fractional anisotropy (FA) in right dorsal and posterior transcallosal fiber tracts. Neither general cognitive abilities nor (non-prosodic) syntactic comprehension nor FA in right ventral and left dorsal fiber tracts showed a similar pre-post increase. Together, these findings suggest a contribution of right dorsal and inter-hemispheric pathways to prosody perception, including the right-dorsal tracking and structuring of prosodic pitch contours that is transcallosally informed by concurrent syntactic information.
Differentiating Emotional Processing and Attention in Psychopathy with Functional Neuroimaging
Anderson, Nathaniel E.; Steele, Vaughn R.; Maurer, J. Michael; Rao, Vikram; Koenigs, Michael R.; Decety, Jean; Kosson, David; Calhoun, Vince; Kiehl, Kent A.
2017-01-01
Psychopathic individuals are often characterized by emotional processing deficits, and recent research has examined the specific contexts and cognitive mechanisms that underlie these abnormalities. Some evidence suggests that abnormal features of attention are fundamental to psychopaths’ emotional deficits, but few studies have demonstrated the neural underpinnings responsible for such effects. Here, we use functional neuroimaging to examine attention-emotion interactions among incarcerated individuals (n=120) evaluated for psychopathic traits using the Hare Psychopathy Checklist – Revised (PCL-R). Using a task designed to manipulate attention to emotional features of visual stimuli, we demonstrate effects representing implicit emotional processing, explicit emotional processing, attention-facilitated emotional processing, and vigilance for emotional content. Results confirm the importance of considering mechanisms of attention when evaluating emotional processing differences related to psychopathic traits. The affective-interpersonal features of psychopathy (PCL-R Factor 1) were associated with relatively lower emotion-dependent augmentation of activity in visual processing areas during implicit emotional processing while antisocial-lifestyle features (PCL-R Factor 2) were associated with elevated activity in the amygdala and related salience-network regions. During explicit emotional processing psychopathic traits were associated with upregulation in the medial prefrontal cortex, insula, and superior frontal regions. Isolating the impact of explicit attention to emotional content, only Factor 1 was related to upregulation of activity in the visual processing stream, which was accompanied by increased activity in the angular gyrus. These effects highlight some important mechanisms underlying abnormal features of attention and emotional processing that accompany psychopathic traits. PMID:28092055
The time course of attentional modulation on emotional conflict processing.
Zhou, Pingyan; Yang, Guochun; Nan, Weizhi; Liu, Xun
2016-01-01
Cognitive conflict resolution is critical to human survival in a rapidly changing environment. However, emotional conflict processing seems to be particularly important for human interactions. This study examined whether the time course of attentional modulation on emotional conflict processing differed from that on cognitive conflict processing during a flanker task. Results showed that emotional N200 and P300 effects, similar to colour conflict processing, appeared only during the relevant task. However, the emotional N200 effect preceded the colour N200 effect, indicating that emotional conflict can be identified earlier than cognitive conflict. Additionally, a significant emotional N100 effect revealed that emotional valence differences could be perceived during early processing based on rough aspects of input. The present data suggest that emotional conflict processing is modulated by top-down attention, similar to cognitive conflict processing (reflected by N200 and P300 effects). However, emotional conflict processing appears to hold a temporal advantage at two different processing stages.
Implicit and explicit processing of emotional facial expressions in Parkinson's disease.
Wagenbreth, Caroline; Wattenberg, Lena; Heinze, Hans-Jochen; Zaehle, Tino
2016-04-15
Besides motor problems, Parkinson's disease (PD) is associated with detrimental emotional and cognitive functioning. Deficient explicit emotional processing has been observed, whilst patients also show impaired Theory of Mind (ToM) abilities. However, it is unclear whether this ToM deficit in PD patients is based on an inability to infer others' emotional states or whether it is due to explicit emotional processing deficits. We investigated implicit and explicit emotional processing in PD with an affective priming paradigm in which we used pictures of human eyes as emotional primes and a lexical decision task (LDT) with emotionally connoted words as target stimuli. Sixteen PD patients and sixteen matched healthy controls performed an LDT combined with an emotional priming paradigm providing emotional information through the facial eye region, to assess implicit emotional processing. Second, participants explicitly evaluated the emotional status of the eyes and words used in the implicit task. Compared to controls, implicit emotional processing abilities were generally preserved in PD, with, however, considerable alterations in happiness and disgust processing. Furthermore, we observed a general impairment in patients' explicit evaluation of emotional stimuli, which was augmented for the rating of facial expressions. This is the first study reporting results for affective priming with facial eye expressions in PD patients. Our findings indicate largely preserved implicit emotional processing, with specifically altered processing of disgust and happiness. Explicit emotional processing was considerably impaired for semantic and especially for facial stimulus material. Poor ToM abilities in PD patients might be based on deficient explicit emotional processing, with a preserved ability to implicitly infer other people's feelings.
Nuske, Heather J; Vivanti, Giacomo; Dissanayake, Cheryl
2013-01-01
There is widespread belief that individuals with autism spectrum disorders (ASDs) are "emotionally detached" from others. This comprehensive review examines the empirical evidence for this assumption, addressing three critical questions: (1) Are emotion-processing impairments universal in ASD? (2) Are they specific, or can they be explained by deficits in other domains? (3) Is the emotion processing profile seen in ASD unique to these conditions? Upon review of the literature (over 200 studies), we conclude that: (1) emotion-processing impairments might not be universal in ASD, as suggested by variability across participants and across emotion-processing tasks; (2) emotion-processing impairments might not be specific to ASD, as domain-general processes appear to account for some of these impairments; and (3) the specific pattern of emotion-processing strengths and weaknesses observed in ASD, involving difficulties with processing social versus non-social, and complex versus simple emotional information (with impairments more consistently reported on implicit than explicit emotion-processing tasks), appears to be unique to ASD. The emotion-processing profile observed in ASD might be best understood as resulting from heterogeneous vulnerabilities in different components of an "emotional communication system" that, in typical development, emerges from the interplay between domain-general cognitive, social and affective processes.
Emotional processing during experiential treatment of depression.
Pos, Alberta E; Greenberg, Leslie S; Goldman, Rhonda N; Korman, Lorne M
2003-12-01
This study explored the importance of early and late emotional processing to change in depressive and general symptomology, self-esteem, and interpersonal problems for 34 clients who received 16-20 sessions of experiential treatment for depression. The independent contribution to outcome of the early working alliance was also explored. Early and late emotional processing predicted reductions in reported symptoms and gains in self-esteem. More important, emotional-processing skill significantly improved during treatment. Hierarchical regression models demonstrated that late emotional processing both mediated the relationship between clients' early emotional processing capacity and outcome and was the sole emotional-processing variable that independently predicted improvement. After controlling for emotional processing, the working alliance added an independent contribution to explaining improvement in reported symptomology only.
ERIC Educational Resources Information Center
Krach, S. Kathleen; McCreery, Michael P.; Loe, Scott A.; Jones, W. Paul
2016-01-01
Previous research demonstrates specific relationships between personality traits and general academic performance. In addition, research studies have demonstrated relationships among personality and variables related to reading fluency (i.e. speed, accuracy, automaticity, and prosody). However, little investigation has examined specific links…
Prosody and Alignment: A Sequential Perspective
ERIC Educational Resources Information Center
Reed, Beatrice Szczepek
2010-01-01
In their analysis of a corpus of classroom interactions in an inner city high school, Roth and Tobin describe how teachers and students accomplish interactional alignment by prosodically matching each other's turns. Prosodic matching, and specific prosodic patterns are interpreted as signs of, and contributions to successful interactional outcomes…
Rapid communication: Global-local processing affects recognition of distractor emotional faces.
Srinivasan, Narayanan; Gupta, Rashmi
2011-03-01
Recent studies have shown links between happy faces and global, distributed attention, and between sad faces and local, focused attention. Emotions have been shown to affect global-local processing. Given that studies on emotion-cognition interactions have not explored the effect of perceptual processing at different spatial scales on processing stimuli with emotional content, the present study investigated the link between perceptual focus and emotional processing. The study investigated the effects of global-local processing on the recognition of distractor faces with emotional expressions. Participants performed a digit discrimination task with digits at either the global level or the local level presented against a distractor face (happy or sad) as background. The results showed that global processing associated with a broad scope of attention facilitates recognition of happy faces, and local processing associated with a narrow scope of attention facilitates recognition of sad faces. The novel results of the study provide conclusive evidence for emotion-cognition interactions by demonstrating the effect of perceptual processing on emotional faces. The results, along with earlier complementary results on the effect of emotion on global-local processing, support a reciprocal relationship between emotional processing and global-local processing. Distractor processing with emotional information also has implications for theories of selective attention.
Emotion and goal-directed behavior: ERP evidence on cognitive and emotional conflict
Kanske, Philipp; Obermeier, Christian; Schröger, Erich; Kotz, Sonja A.
2015-01-01
Cognitive control supports goal-directed behavior by resolving conflict among opposing action tendencies. Emotion can trigger cognitive control processes, thus speeding up conflict processing when the target dimension of stimuli is emotional. However, it is unclear what role emotionality of the target dimension plays in the processing of emotional conflict (e.g. in irony). In two EEG experiments, we compared the influence of emotional valence of the target (emotional, neutral) in cognitive and emotional conflict processing. To maximally approximate real-life communication, we used audiovisual stimuli. Participants either categorized spoken vowels (cognitive conflict) or their emotional valence (emotional conflict), while visual information was congruent or incongruent. Emotional target dimension facilitated both cognitive and emotional conflict processing, as shown in a reduced reaction time conflict effect. In contrast, the N100 in the event-related potentials showed a conflict-specific reversal: the conflict effect was larger for emotional compared with neutral trials in cognitive conflict and smaller in emotional conflict. Additionally, domain-general conflict effects were observed in the P200 and N200 responses. The current findings confirm that emotions have a strong influence on cognitive and emotional conflict processing. They also highlight the complexity and heterogeneity of the interaction of emotion with different types of conflict. PMID:25925271
Processing and memory for emotional and neutral material in amyotrophic lateral sclerosis
Cuddy, Marion; Papps, Benjamin J.; Thambisetty, Madhav; Leigh, P. Nigel; Goldstein, Laura H.
2018-01-01
Several studies have reported changes in emotional memory and processing in people with ALS (pwALS). In this study, we sought to analyse differences in emotional processing and memory between pwALS and healthy controls and to investigate the relationship between emotional memory and self-reported depression. Nineteen pwALS and 19 healthy controls were assessed on measures of emotional processing, emotional memory, verbal memory and depression. Although pwALS and controls did not differ significantly on measures of emotional memory, a subgroup of patients performed poorly on an emotional recognition task. With regard to emotional processing, pwALS gave significantly stronger ratings of emotional valence to positive words than to negative words. Higher ratings of emotional words were associated with better recall in controls but not pwALS. Self-reported depression and emotional processing or memory variables were not associated in either group. In conclusion, the results from this small study suggest that a subgroup of pwALS may show weakened ‘emotional enhancement’, although in the current sample this may reflect general memory impairment rather than specific changes in emotional memory. Nonetheless, different patterns of processing of emotionally-salient material by pwALS may have care and management-related implications. PMID:22873560
The Influence of Negative Emotion on Cognitive and Emotional Control Remains Intact in Aging
Zinchenko, Artyom; Obermeier, Christian; Kanske, Philipp; Schröger, Erich; Villringer, Arno; Kotz, Sonja A.
2017-01-01
Healthy aging is characterized by a gradual decline in cognitive control and inhibition of interferences, while emotional control is either preserved or facilitated. Emotional control regulates the processing of emotional conflicts such as in irony in speech, and cognitive control resolves conflict between non-affective tendencies. While negative emotion can trigger control processes and speed up resolution of both cognitive and emotional conflicts, we know little about how aging affects the interaction of emotion and control. In two EEG experiments, we compared the influence of negative emotion on cognitive and emotional conflict processing in groups of younger adults (mean age = 25.2 years) and older adults (69.4 years). Participants viewed short video clips and either categorized spoken vowels (cognitive conflict) or their emotional valence (emotional conflict), while the visual facial information was congruent or incongruent. Results show that negative emotion modulates both cognitive and emotional conflict processing in younger and older adults as indicated in reduced response times and/or enhanced event-related potentials (ERPs). In emotional conflict processing, we observed a valence-specific N100 ERP component in both age groups. In cognitive conflict processing, we observed an interaction of emotion by congruence in the N100 responses in both age groups, and a main effect of congruence in the P200 and N200. Thus, the influence of emotion on conflict processing remains intact in aging, despite a marked decline in cognitive control. Older adults may prioritize emotional wellbeing and preserve the role of emotion in cognitive and emotional control. PMID:29163132
Vanderhasselt, Marie-Anne; Remue, Jonathan; Ng, Kwun Kei; De Raedt, Rudi
2014-01-01
Emotions can occur during an emotion-eliciting event, but they can also arise when anticipating the event. We used pupillary responses, as a measure of effortful cognitive processing, to test whether the anticipation of an emotional stimulus (positive and negative) influences the subsequent online processing of that emotional stimulus. Moreover, we tested whether individual differences in the habitual use of emotion regulation strategies are associated with pupillary responses during the anticipation and/or online processing of this emotional stimulus. Our results show that, both for positive and negative stimuli, pupillary diameter during the anticipation of emotion-eliciting events is strongly and inversely correlated with pupillary responses during the emotional image presentation. The variance in this temporal interplay between anticipation and online processing was related to individual differences in emotion regulation. Specifically, the results show that high reappraisal scores are related to larger pupil diameter during the anticipation, which in turn is related to smaller pupillary responses during the online processing of emotion-eliciting events. The habitual use of expressive suppression was not associated with pupillary responses in the anticipation and subsequent online processing of emotional stimuli. Taken together, the current data suggest (most strongly for individuals scoring high on the habitual use of reappraisal) that larger pupillary responses during the anticipation of an emotional stimulus are indicative of sustained attentional set activation in preparation for an upcoming emotional stimulus, which subsequently reduces the need to cognitively process that emotional event. Hence, because the habitual use of reappraisal is known to have a positive influence on emotional well-being, the interplay between anticipation and online processing of emotional stimuli might be a significant marker of this well-being.
Tiedens, L Z; Linton, S
2001-12-01
The authors argued that emotions characterized by certainty appraisals promote heuristic processing, whereas emotions characterized by uncertainty appraisals result in systematic processing. The 1st experiment demonstrated that the certainty associated with an emotion affects the certainty experienced in subsequent situations. The next 3 experiments investigated effects on processing of emotions associated with certainty and uncertainty. Compared with emotions associated with uncertainty, emotions associated with certainty resulted in greater reliance on the expertise of a source of a persuasive message in Experiment 2, more stereotyping in Experiment 3, and less attention to argument quality in Experiment 4. In contrast to previous theories linking valence and processing, these findings suggest that the certainty appraisal content of emotions is also important in determining whether people engage in systematic or heuristic processing.
Prosodic Awareness and Punctuation Ability in Adult Readers
ERIC Educational Resources Information Center
Heggie, Lindsay; Wade-Woolley, Lesly
2018-01-01
We examined the relationship between two metalinguistic tasks: prosodic awareness and punctuation ability. Specifically, we investigated whether adults' ability to punctuate was related to the degree to which they are aware of and able to manipulate prosody in spoken language. English-speaking adult readers (n = 115) were administered a receptive…
Dysprosody and Stimulus Effects in Cantonese Speakers with Parkinson's Disease
ERIC Educational Resources Information Center
Ma, Joan K.-Y.; Whitehill, Tara; Cheung, Katherine S.-K.
2010-01-01
Background: Dysprosody is a common feature in speakers with hypokinetic dysarthria. However, speech prosody varies across different types of speech materials. This raises the question of what is the most appropriate speech material for the evaluation of dysprosody. Aims: To characterize the prosodic impairment in Cantonese speakers with…
Melody as Prosody: Toward a Usage-Based Theory of Music
ERIC Educational Resources Information Center
Pooley, Thomas Mathew
2014-01-01
Rationalist modes of inquiry have dominated the cognitive science of music over the past several decades. This dissertation contests many rationalist assumptions, including its core tenets of nativism, modularity, and computationism, by drawing on a wide range of evidence from psychology, neuroscience, linguistics, and cognitive music theory, as…
Computational Support for Early Elicitation and Classification of Tone
ERIC Educational Resources Information Center
Bird, Steven; Lee, Haejoong
2014-01-01
Investigating a tone language involves careful transcription of tone on words and phrases. This is challenging when the phonological categories--the tones or melodies--have not been identified. Effects such as coarticulation, sandhi, and phrase-level prosody appear as obstacles to early elicitation and classification of tone. This article presents…
Prenuclear Accentuation in English: Phonetics, Phonology, Information Structure
ERIC Educational Resources Information Center
Bishop, Jason Brandon
2013-01-01
A primary function of prosody in many languages is to convey information structure--the "packaging" of a sentence's content into categories such as "focus", "given" and "topic". In English and other West Germanic languages it is widely assumed that focus is signaled prosodically by the location of a…
The Prosodic Evolution of West Slavic in the Context of the Neo-Acute Stress
ERIC Educational Resources Information Center
Feldstein, Ronald F.
1975-01-01
Because of neo-acute stress--or transferred acute stress--long vowel prosody in West Slavic had a special evolution. Two kinds of long vowel evolution are examined. The nature of transitionality across Slavic territory from tonal opposition to distinctive stress placement is pointed out. (SC)
Detecting Stress Patterns Is Related to Children's Performance on Reading Tasks
ERIC Educational Resources Information Center
Gutierrez-Palma, Nicolas; Raya-Garcia, Manuel; Palma-Reyes, Alfonso
2009-01-01
This paper investigates the relationship between the ability to detect changes in prosody and reading performance in Spanish. Participants were children aged 6-8 years who completed tasks involving reading words, reading pseudowords, stressing pseudowords, and reproducing pseudoword stress patterns. Results showed that the capacity to reproduce…
Prosodic Perception Problems in Spanish Dyslexia
ERIC Educational Resources Information Center
Cuetos, Fernando; Martínez-García, Cristina; Suárez-Coalla, Paz
2018-01-01
The aim of this study was to investigate the prosody abilities on top of phonological and visual abilities in children with dyslexia in Spanish that can be considered a syllable-timed language. The performances on prosodic tasks (prosodic perception, rise-time perception), phonological tasks (phonological awareness, rapid naming, verbal working…
Considering the Context and Texts for Fluency: Performance, Readers Theater, and Poetry
ERIC Educational Resources Information Center
Young, Chase; Nageldinger, James
2014-01-01
This article describes the importance of teaching reading fluency and all of its components, including automaticity and prosody. The authors explain how teachers can create a context for reading fluency instruction by engaging students in reading performance activities. To support the instructional contexts, the authors suggest particular…
A Closer Look at Formulaic Language: Prosodic Characteristics of Swedish Proverbs
ERIC Educational Resources Information Center
Hallin, Anna Eva; Van Lancker Sidtis, Diana
2017-01-01
Formulaic expressions (such as idioms, proverbs, and conversational speech formulas) are currently a topic of interest. Examination of prosody in formulaic utterances, a less explored property of formulaic expressions, has yielded controversial views. The present study investigates prosodic characteristics of proverbs, as one type of formulaic…
Evidence for Prosody in Silent Reading
ERIC Educational Resources Information Center
Gross, Jennifer; Millett, Amanda L.; Bartek, Brian; Bredell, Kyle Hampton; Winegard, Bo
2014-01-01
English speakers and expressive readers emphasize new content in an ongoing discourse. Do silent readers emphasize new content in their inner voice? Because the inner voice cannot be directly observed, we borrowed the cap-emphasis technique (e.g., "toMAYto") from the pronunciation guides of dictionaries to elicit prosodic emphasis.…
Phonetic Diversity, Statistical Learning, and Acquisition of Phonology
ERIC Educational Resources Information Center
Pierrehumbert, Janet B.
2003-01-01
In learning to perceive and produce speech, children master complex language-specific patterns. Daunting language-specific variation is found both in the segmental domain and in the domain of prosody and intonation. This article reviews the challenges posed by results in phonetic typology and sociolinguistics for the theory of language…
Emotion and goal-directed behavior: ERP evidence on cognitive and emotional conflict.
Zinchenko, Artyom; Kanske, Philipp; Obermeier, Christian; Schröger, Erich; Kotz, Sonja A
2015-11-01
Cognitive control supports goal-directed behavior by resolving conflict among opposing action tendencies. Emotion can trigger cognitive control processes, thus speeding up conflict processing when the target dimension of stimuli is emotional. However, it is unclear what role emotionality of the target dimension plays in the processing of emotional conflict (e.g. in irony). In two EEG experiments, we compared the influence of emotional valence of the target (emotional, neutral) in cognitive and emotional conflict processing. To maximally approximate real-life communication, we used audiovisual stimuli. Participants either categorized spoken vowels (cognitive conflict) or their emotional valence (emotional conflict), while visual information was congruent or incongruent. Emotional target dimension facilitated both cognitive and emotional conflict processing, as shown in a reduced reaction time conflict effect. In contrast, the N100 in the event-related potentials showed a conflict-specific reversal: the conflict effect was larger for emotional compared with neutral trials in cognitive conflict and smaller in emotional conflict. Additionally, domain-general conflict effects were observed in the P200 and N200 responses. The current findings confirm that emotions have a strong influence on cognitive and emotional conflict processing. They also highlight the complexity and heterogeneity of the interaction of emotion with different types of conflict.
Emotional Speech Perception Unfolding in Time: The Role of the Basal Ganglia
Paulmann, Silke; Ott, Derek V. M.; Kotz, Sonja A.
2011-01-01
The basal ganglia (BG) have repeatedly been linked to emotional speech processing in studies involving patients with neurodegenerative and structural changes of the BG. However, the majority of previous studies did not consider (i) that emotional speech processing entails multiple processing steps, and (ii) that the BG may engage in some of these steps rather than others. In the present study we investigated three different stages of emotional speech processing (emotional salience detection, meaning-related processing, and identification) in the same patient group to verify whether lesions to the BG affect these stages in qualitatively different ways. Specifically, we explored early implicit emotional speech processing (probe verification) in an ERP experiment followed by an explicit behavioral emotional recognition task. In both experiments, participants listened to emotional sentences expressing one of four emotions (anger, fear, disgust, happiness) or neutral sentences. In line with previous evidence, patients and healthy controls showed differentiation of emotional and neutral sentences in the P200 component (emotional salience detection) and in a following negative-going brain wave (meaning-related processing). However, the behavioral recognition (identification stage) of emotional sentences was impaired in BG patients, but not in healthy controls. The current data provide further support that the BG are involved in late, explicit rather than early emotional speech processing stages. PMID:21437277
Independent and Collaborative Contributions of the Cerebral Hemispheres to Emotional Processing
Shobe, Elizabeth R.
2014-01-01
Presented is a model suggesting that the right hemisphere (RH) directly mediates the identification and comprehension of positive and negative emotional stimuli, whereas the left hemisphere (LH) contributes to higher level processing of emotional information that has been shared via the corpus callosum. RH subcortical connections provide initial processing of emotional stimuli, and their innervation to cortical structures provides a secondary pathway by which the hemispheres process emotional information more fully. It is suggested that the LH contribution to emotion processing is in emotional regulation, social well-being, and adaptation, and transforming the RH emotional experience into propositional and verbal codes. Lastly, it is proposed that the LH has little ability at the level of emotion identification, having a default positive bias and no ability to identify a stimulus as negative. Instead, the LH must rely on the transfer of emotional information from the RH to engage higher-order emotional processing. As such, either hemisphere can identify positive emotions, but they must collaborate for complete processing of negative emotions. Evidence presented draws from behavioral, neurological, and clinical research, including discussions of subcortical and cortical pathways, callosal agenesis, commissurotomy, emotion regulation, mood disorders, interpersonal interaction, language, and handedness. Directions for future research are offered. PMID:24795597
The four key characteristics of interpersonal emotion regulation.
Niven, Karen
2017-10-01
Emotion researchers are increasingly interested in processes by which people influence others' feelings. Although one such process, interpersonal emotion regulation, has received particular attention in recent years, there remains confusion about exactly how to define this process. The present article aims to distinguish interpersonal emotion regulation from other, related processes by outlining its four key characteristics. Specifically, interpersonal emotion regulation is presented as a process of (i) regulation, that (ii) has an affective target, (iii) is deliberate, and (iv) has a social target. Considering these characteristics raises questions for future research concerning factors that may influence the process of interpersonal emotion regulation, why interpersonal emotion regulation sometimes fails, and whether interventions can improve people's use of interpersonal emotion regulation.
van Leeuwen, Ninke; Bossema, Ercolie R; van Middendorp, Henriët; Kruize, Aike A; Bootsma, Hendrika; Bijlsma, Johannes W J; Geenen, Rinie
2012-01-01
The hampered ability to cry in patients with Sjögren's syndrome may affect their ways of dealing with emotions. The aim of this study was to examine differences in emotion processing and regulation between people with and without Sjögren's syndrome and correlations of emotion processing and regulation with mental well-being. In 300 patients with primary Sjögren's syndrome and 100 demographically matched control participants (mean age 56.8 years, 93% female), emotion processing (affect intensity and alexithymia, i.e. difficulty identifying and describing feelings), emotion regulation (cognitive reappraisal, suppression and expression of emotions), and mental well-being were assessed. Criteria for clinical alexithymia applied to 22% of the patients and 12% of the control participants; patients had significantly more difficulty identifying feelings than control participants. No other significant differences in emotion processing and emotion regulation were found. In patients, the emotion processing styles affect intensity and alexithymia (0.32
An Integrated Model of Emotion Processes and Cognition in Social Information Processing.
ERIC Educational Resources Information Center
Lemerise, Elizabeth A.; Arsenio, William F.
2000-01-01
Interprets literature on contributions of social cognitive and emotion processes to children's social competence in the context of an integrated model of emotion processes and cognition in social information processing. Provides neurophysiological and functional evidence for the centrality of emotion processes in personal-social decision making.…
Park, Kiho; Choi, Kee-Hong
2018-04-26
This study examined whether better emotional context processing is associated with better community functioning among persons with schizophrenia, and whether the relationship between the two variables is moderated by level of paranoid symptoms. The Brief Psychiatric Rating Scale-Expanded Version, Emotional Context Processing Scale, and Multnomah Community Ability Scale were administered to 39 community-dwelling participants with schizophrenia or schizoaffective disorder. Emotional context processing had a small-to-moderate association with community functioning. However, the association between emotional context processing and community functioning was moderated by level of paranoid symptoms. Emotional context processing in participants with mild paranoid symptoms was strongly associated with better community functioning, whereas emotional context processing in those with severe paranoid symptoms was not. Emotional context processing and the degree of paranoia should be considered in treatment plans designed to enhance the community functioning of individuals with schizophrenia to help them improve their understanding of social situations.
The role of emotion and emotion regulation in social anxiety disorder.
Jazaieri, Hooria; Morrison, Amanda S; Goldin, Philippe R; Gross, James J
2015-01-01
Many psychiatric disorders involve problematic patterns of emotional reactivity and regulation. In this review, we consider recent findings regarding emotion and emotion regulation in the context of social anxiety disorder (SAD). We first describe key features of SAD which suggest altered emotional and self-related processing difficulties. Next, we lay the conceptual foundation for a discussion of emotion and emotion regulation and present a common framework for understanding emotion regulation, the process model of emotion regulation. Using the process model, we evaluate the recent empirical literature spanning self-report, observational, behavioral, and physiological methods across five specific families of emotion regulation processes-situation selection, situation modification, attentional deployment, cognitive change, and response modulation. Next, we examine the empirical evidence behind two psychosocial interventions for SAD: cognitive behavioral therapy (CBT) and mindfulness-based stress reduction (MBSR). Throughout, we present suggestions for future directions in the continued examination of emotion and emotion regulation in SAD.
Gui, Dan-Yang; Gan, Tian; Liu, Chao
2016-01-01
Behavioral and neurological studies have revealed that emotions influence moral cognition. Although moral stimuli are emotionally charged, the time course of interactions between emotions and moral judgments remains unknown. In the present study, we investigated the temporal dynamics of the interaction between emotional processes and moral cognition. The results revealed that when making moral judgments, the time course of the event-related potential (ERP) waveform was significantly different between high emotional arousal and low emotional arousal contexts. Different stages of processing were distinguished, showing distinctive interactions between emotional processes and moral reasoning. The precise time course of moral intuition and moral reasoning sheds new light on theoretical models of moral psychology. Specifically, the N1 component (interpreted as representing moral intuition) did not appear to be influenced by emotional arousal. However, the N2 component and late positive potential were strongly affected by emotional arousal; the slow wave was influenced by both emotional arousal and morality, suggesting distinct moral processing at different emotional arousal levels.
The effects of serotonin manipulations on emotional information processing and mood.
Merens, Wendelien; Willem Van der Does, A J; Spinhoven, Philip
2007-11-01
Serotonin is implicated in both mood and cognition. It has recently been shown that antidepressant treatment has immediate effects on emotional information processing, much faster than any clinically significant effects. This review aims to investigate whether the effects on emotional information processing are reliable, and whether these effects are related to eventual clinical outcome. Treatment efficiency may be greatly improved if early changes in emotional information processing are found to predict clinical outcome following antidepressant treatment. We reviewed studies investigating the short-term effects of serotonin manipulations (including medication) on the processing of emotional information, using the PubMed and PsycInfo databases. Twenty-five studies were identified. Serotonin manipulations were found to affect attentional bias, facial emotion recognition, emotional memory, dysfunctional attitudes and decision making. The sequential link between changes in emotional processing and mood remains to be further investigated. The number of studies on serotonin manipulations and emotional information processing in currently depressed subjects is small. No studies yet have directly tested the link between emotional information processing and clinical outcome during the course of antidepressant treatment. Serotonin function is related to several aspects of emotional information processing, but it is unknown whether these changes predict or have any relationship with clinical outcome. Suggestions for future research are provided.
Interpersonal reactivity and the attribution of emotional reactions.
Haas, Brian W; Anderson, Ian W; Filkowski, Megan M
2015-06-01
The ability to identify the cause of another person's emotional reaction is an important component associated with improved success of social relationships and survival. Although many studies have investigated the mechanisms involved in emotion recognition, very little is currently known regarding the processes involved during emotion attribution decisions. Research on complementary "emotion understanding" mechanisms, including empathy and theory of mind, has demonstrated that emotion understanding decisions are often made through relatively emotion- or cognitive-based processing streams. The current study was designed to investigate the behavioral and brain mechanisms involved in emotion attribution decisions. We predicted that dual processes, emotional and cognitive, are engaged during emotion attribution decisions. Sixteen healthy adults completed the Interpersonal Reactivity Index to characterize individual differences in tendency to make emotion- versus cognitive-based interpersonal decisions. Participants then underwent functional MRI while making emotion attribution decisions. We found neuroimaging evidence that emotion attribution decisions engage a similar brain network as other forms of emotion understanding. Further, we found evidence in support of a dual processes model involved during emotion attribution decisions. Higher scores of personal distress were associated with quicker emotion attribution decisions and increased anterior insula activity. Conversely, higher scores in perspective taking were associated with delayed emotion attribution decisions and increased prefrontal cortex and premotor activity. These findings indicate that the making of emotion attribution decisions relies on dissociable emotional and cognitive processing streams within the brain.
Bourne, Victoria J; Watling, Dawn
2015-01-01
Previous research examining the possible association between emotion lateralisation and social anxiety has found conflicting results. In this paper two studies are presented to assess two aspects related to different features of social anxiety: fear of negative evaluation (FNE) and emotion regulation. Lateralisation for the processing of facial emotion was measured using the chimeric faces test. Individuals with greater FNE were more strongly lateralised to the right hemisphere for the processing of anger, happiness and sadness; and, for the processing of fearful faces the relationship was found for females only. Emotion regulation strategies were reduced to two factors: positive strategies and negative strategies. For males, but not females, greater reported use of negative emotion strategies is associated with stronger right hemisphere lateralisation for processing negative emotions. The implications for further understanding the neuropsychological processing of emotion in individuals with social anxiety are discussed.
Goldenberg, Amit; Saguy, Tamar; Halperin, Eran
2014-10-01
Extensive research has established the pivotal role that group-based emotions play in shaping intergroup processes. The underlying implicit assumption in previous work has been that these emotions reflect what the rest of the group feels (i.e., collective emotions). However, one can experience an emotion in the name of her or his group, which is inconsistent with what the collective feels. The current research investigated this phenomenon of emotional nonconformity. Particularly, we proposed that when a certain emotional reaction is perceived as appropriate, but the collective is perceived as not experiencing this emotion, people would experience stronger levels of group-based emotion, placing their emotional experience farther away from that of the collective. We provided evidence for this process across 2 different emotions: group-based guilt and group-based anger (Studies 1 and 2) and across different intergroup contexts (Israeli-Palestinian relations in Israel, and Black-White relations in the United States). In Studies 3 and 4, we demonstrate that this process is moderated by the perceived appropriateness of the collective emotional response. Studies 4 and 5 further provided evidence for the mechanisms underlying this effect, pointing to a process of emotional burden (i.e., feeling responsible for carrying the emotion in the name of the group) and of emotional transfer (i.e., transferring negative feelings one has toward the ingroup, toward the event itself). This work brings to light processes that were yet to be studied regarding the relationship between group members, their perception of their group, and the emotional processes that connect them.
Lee, Seung A; Kim, Chai-Youn; Lee, Seung-Hwan
2016-03-01
Psychophysiological and functional neuroimaging studies have frequently and consistently shown that emotional information can be processed outside of the conscious awareness. Non-conscious processing comprises automatic, uncontrolled, and fast processing that occurs without subjective awareness. However, how such non-conscious emotional processing occurs in patients with various psychiatric disorders requires further examination. In this article, we reviewed and discussed previous studies on the non-conscious emotional processing in patients diagnosed with anxiety disorder, schizophrenia, bipolar disorder, and depression, to further understand how non-conscious emotional processing varies across these psychiatric disorders. Although the symptom profile of each disorder does not often overlap with one another, these patients commonly show abnormal emotional processing based on the pathology of their mood and cognitive function. This indicates that the observed abnormalities of emotional processing in certain social interactions may derive from a biased mood or cognition process that precedes consciously controlled and voluntary processes. Since preconscious forms of emotional processing appear to have a major effect on behaviour and cognition in patients with these disorders, further investigation is required to understand these processes and their impact on patient pathology.
[Significance of emotion-focused concepts to cognitive-behavioral therapy].
Lammers, C-H
2006-09-01
Emotions are the central process of motivation and play a key role in adaptive behavior in humans. Although cognitive-behavioral therapy stresses the importance of changing both cognition and behavior, there is growing emphasis on direct therapeutic work on emotions and emotional processing, as problematic emotional processes are at the core of nearly all mental disorders. This type of work is the goal of emotion-focused psychotherapy, which centers on directly changing problematic emotions, especially those which are usually suppressed or overregulated by the patient. This paper examines the basic phobic/emotional conflict, the problematic emotional processes arising from this conflict, and the potentially integrative role of emotion-focused work in cognitive-behavioral therapy.
Anticipatory Emotions in Decision Tasks: Covert Markers of Value or Attentional Processes?
Davis, Tyler; Love, Bradley C.; Maddox, Todd
2009-01-01
Anticipatory emotions precede behavioral outcomes and provide a means to infer interactions between emotional and cognitive processes. A number of theories hold that anticipatory emotions serve as inputs to the decision process and code the value or risk associated with a stimulus. We argue that current data do not unequivocally support this theory. We present an alternative theory whereby anticipatory emotions reflect the outcome of a decision process and serve to ready the subject for new information when making an uncertain response. We test these two accounts, which we refer to as emotions-as-input and emotions-as-outcome, in a task that allows risky stimuli to be dissociated from uncertain responses. We find that emotions are associated with responses as opposed to stimuli. This finding is contrary to the emotions-as-input perspective as it shows that emotions arise from decision processes. PMID:19428002
Emotional processing and self-control in adolescents with type 1 diabetes.
Hughes, Amy E; Berg, Cynthia A; Wiebe, Deborah J
2012-09-01
This study examined whether emotional processing (understanding emotions), self-control (regulation of thoughts, emotions, and behavior), and their interaction predicted HbA1c for adolescents with type 1 diabetes over and above diabetes-specific constructs. Self-report measures of self-control, emotional processing, self-efficacy for diabetes management, diabetes-specific negative affect, and adherence, and HbA1c from medical records were obtained from 137 adolescents with type 1 diabetes (M age = 13.48 years). Emotional processing interacted with self-control to predict HbA1c, such that when adolescents had both low emotional processing and low self-control, HbA1c was poorest. Also, both high emotional processing and self-control buffered negative effects of low capacity in the other in relation to HbA1c. The interaction of emotional processing × self-control predicted HbA1c over diabetes-specific self-efficacy, negative affect, and adherence. These findings suggest the importance of emotional processing and self-control for health outcomes in adolescents with diabetes.
Emotional words can be embodied or disembodied: the role of superficial vs. deep types of processing
Abbassi, Ensie; Blanchette, Isabelle; Ansaldo, Ana I.; Ghassemzadeh, Habib; Joanette, Yves
2015-01-01
Emotional words are processed rapidly and automatically in the left hemisphere (LH) and slowly, with the involvement of attention, in the right hemisphere (RH). This review aims to find the reason for this difference and suggests that emotional words can be processed superficially or deeply due to the involvement of the linguistic and imagery systems, respectively. During superficial processing, emotional words likely make connections only with semantically associated words in the LH. This part of the process is automatic and may be sufficient for the purpose of language processing. Deep processing, in contrast, seems to involve conceptual information and imagery of a word’s perceptual and emotional properties using autobiographical memory contents. Imagery and the involvement of autobiographical memory likely differentiate between emotional and neutral word processing and explain the salient role of the RH in emotional word processing. It is concluded that the level of emotional word processing in the RH should be deeper than in the LH and, thus, it is conceivable that the slow mode of processing adds certain qualities to the output. PMID:26217288
Watling, Dawn; Bourne, Victoria J
2007-09-01
Understanding of emotions has been shown to develop between the ages of 4 and 10 years; however, individual differences exist in this development. While previous research has typically examined these differences in terms of developmental and/or social factors, little research has considered the possible impact of neuropsychological development on the behavioural understanding of emotions. Emotion processing tends to be lateralised to the right hemisphere of the brain in adults, yet this pattern is not as evident in children until around the age of 10 years. In this study 136 children between 5 and 10 years were given both behavioural and neuropsychological tests of emotion processing. The behavioural task examined expression regulation knowledge (ERK) for prosocial and self-presentational hypothetical interactions. The chimeric faces test was given as a measure of lateralisation for processing positive facial emotion. An interaction between age and lateralisation for emotion processing was predictive of children's ERK for only the self-presentational interactions. The relationship between children's ERK and lateralisation for emotion processing changed across the three age groups, emerging as a positive relationship in the 10-year-olds. The 10-year-olds who were more lateralised to the right hemisphere for emotion processing tended to show greater understanding of the need for regulating negative emotions during interactions that would have a self-presentational motivation. This finding suggests an association between the behavioural and neuropsychological development of emotion processing.
Improving the Use of Suprasegmentals with Severely Handicapped Children through Music and Movement.
ERIC Educational Resources Information Center
Leung, Katherine
1985-01-01
The paper reviews techniques suggested in the literature for the improvement of suprasegmentals (prosody) and the role of music in speech remediation with communicatively impaired children. Specific strategies, including the Z. Kodaly method of teaching singing and the use of a quartz metronome, are recommended. (Author/CL)
Criteria for Labelling Prosodic Aspects of English Speech.
ERIC Educational Resources Information Center
Bagshaw, Paul C.; Williams, Briony J.
A study reports a set of labelling criteria developed to label prosodic events in clear, continuous speech, and proposes a scheme whereby this information can be transcribed in a machine-readable format. Prosody was annotated in a syllabic domain synchronized with a phonemic segmentation. A procedural definition of…
ERIC Educational Resources Information Center
Ma, Joan K.-Y.; Whitehill, Tara L.; So, Susanne Y.-S.
2010-01-01
Purpose: Speech produced by individuals with hypokinetic dysarthria associated with Parkinson's disease (PD) is characterized by a number of features including impaired speech prosody. The purpose of this study was to investigate intonation contrasts produced by this group of speakers. Method: Speech materials with a question-statement contrast…
Motivating Adolescent Readers: A Middle School Reading Fluency and Prosody Intervention
ERIC Educational Resources Information Center
Whittington, Marta
2012-01-01
Adolescent learners face a complexity of reading content they have never before encountered as they enter middle school and become independent in structuring their own academic frameworks. Some students become disconnected and unmotivated readers as school competes with their multiple reading lives. This study examined the use of choice along with…
Effects of Prosody and Position on the Timing of Deictic Gestures
ERIC Educational Resources Information Center
Rusiewicz, Heather Leavy; Shaiman, Susan; Iverson, Jana M.; Szuminsky, Neil
2013-01-01
Purpose: In this study, the authors investigated the hypothesis that the perceived tight temporal synchrony of speech and gesture is evidence of an integrated spoken language and manual gesture communication system. It was hypothesized that experimental manipulations of the spoken response would affect the timing of deictic gestures. Method: The…
The Relationship between Form and Function Level Receptive Prosodic Abilities in Autism
ERIC Educational Resources Information Center
Jarvinen-Pasley, Anna; Peppe, Susan; King-Smith, Gavin; Heaton, Pamela
2008-01-01
Prosody can be conceived as having form (auditory-perceptual characteristics) and function (pragmatic/linguistic meaning). No known studies have examined the relationship between form- and function-level prosodic skills in relation to the effects of stimulus length and/or complexity upon such abilities in autism. Research in this area is both…
Prosody and Animacy in the Development of Noun Determiner Use: A Cross-Linguistic Approach
ERIC Educational Resources Information Center
Bassano, Dominique; Korecky-Kröll, Katharina; Maillochon, Isabelle; van Dijk, Marijn; Laaha, Sabine; van Geert, Paul; Dressler, Wolfgang U.
2013-01-01
This study investigates prosodic (noun length) and lexical-semantic (animacy) influences on determiner use in the spontaneous speech of three children acquiring French, Austrian German and Dutch. In support of typological and language-specific hypotheses from the Germanic-Romance contrast, an advantage of monosyllabic nouns and of inanimate nouns…
The Role of Nonspeech Rhythm in Spanish Word Reading
ERIC Educational Resources Information Center
González-Trujillo, M. Carmen; Defior, Sylvia; Gutiérrez-Palma, Nicolás
2014-01-01
Recent literacy research shows an increasing interest in the influence of prosody on literacy acquisition. The current study examines the relationship of nonspeech rhythmic skills to children's reading acquisition, and their possible relation to stress assignment in Spanish, a syllable-timed language. Sixty-six third graders with no reading…
The Role of Intonation in Language Discrimination by Infants and Adults
ERIC Educational Resources Information Center
Vicenik, Chad Joseph
2011-01-01
It has been widely shown that infants and adults are capable of using only prosodic information to discriminate between languages. However, it remains unclear which aspects of prosody, either rhythm or intonation, listeners attend to for language discrimination. Previous researchers have suggested that rhythm, the duration and timing of speech…
Central Timing Deficits in Subtypes of Primary Speech Disorders
ERIC Educational Resources Information Center
Peter, Beate; Stoel-Gammon, Carol
2008-01-01
Childhood apraxia of speech (CAS) is a proposed speech disorder subtype that interferes with motor planning and/or programming, affecting prosody in many cases. Pilot data (Peter & Stoel-Gammon, 2005) were consistent with the notion that deficits in timing accuracy in speech and music-related tasks may be associated with CAS. This study…
ERIC Educational Resources Information Center
Wang, Wei
2017-01-01
This study investigates Mandarin discourse markers from both functional and prosodic perspectives. Discourse markers are defined as sequentially dependent elements which bracket units of talk (Schiffrin 1987). In this study, I focus on three discourse markers, "ranhou" "then", "wo juede" "I think/feel", and…
ERIC Educational Resources Information Center
Hill, K. Dara
2017-01-01
The current climate of reading instruction calls for fluency strategies that stress automaticity, accuracy, and prosody, within the scope of prescribed reading programs that compromise teacher autonomy, with texts that are often irrelevant to the students' experiences. Consequently, accuracy and speed are developed, but deep comprehension is…
Breathing-Impaired Speech after Brain Haemorrhage: A Case Study
ERIC Educational Resources Information Center
Heselwood, Barry
2007-01-01
Results are presented from an auditory and acoustic analysis of the speech of an adult male with impaired prosody and articulation due to brain haemorrhage. They show marked effects on phonation, speech rate and articulator velocity, and a speech rhythm disrupted by "intrusive" stresses. These effects are discussed in relation to the speaker's…
Perceptual and Acoustic Reliability Estimates for the Speech Disorders Classification System (SDCS)
ERIC Educational Resources Information Center
Shriberg, Lawrence D.; Fourakis, Marios; Hall, Sheryl D.; Karlsson, Heather B.; Lohmeier, Heather L.; McSweeny, Jane L.; Potter, Nancy L.; Scheer-Cohen, Alison R.; Strand, Edythe A.; Tilkens, Christie M.; Wilson, David L.
2010-01-01
A companion paper describes three extensions to a classification system for paediatric speech sound disorders termed the Speech Disorders Classification System (SDCS). The SDCS uses perceptual and acoustic data reduction methods to obtain information on a speaker's speech, prosody, and voice. The present paper provides reliability estimates for…
The Relationship between Reading Fluency and Reading Comprehension in Fifth-Grade Turkish Students
ERIC Educational Resources Information Center
Yildiz, Mustafa; Yildirim, Kasim; Ates, Seyit; Rasinski, Timothy; Fitzgerald, Shawn; Zimmerman, Belinda
2014-01-01
This research study focused on the relationships among the various components of reading fluency (word recognition accuracy, automaticity, and prosody), as well as their relationships with reading comprehension among fifth-grade students in Turkey. A total of 119 fifth-grade elementary school students participated in the study. The…
Transitioning from Analog to Digital Audio Recording in Childhood Speech Sound Disorders
ERIC Educational Resources Information Center
Shriberg, Lawrence D.; Mcsweeny, Jane L.; Anderson, Bruce E.; Campbell, Thomas F.; Chial, Michael R.; Green, Jordan R.; Hauner, Katherina K.; Moore, Christopher A.; Rusiewicz, Heather L.; Wilson, David L.
2005-01-01
Few empirical findings or technical guidelines are available on the current transition from analog to digital audio recording in childhood speech sound disorders. Of particular concern in the present context was whether a transition from analog- to digital-based transcription and coding of prosody and voice features might require re-standardizing…
Fluency Idol: Using Pop Culture to Engage Students and Boost Fluency Skills
ERIC Educational Resources Information Center
Calo, Kristine M.; Woolard-Ferguson, Taylor; Koitz, Ellen
2013-01-01
This article shares an oral reading practice that develops children's fluency skills, with a particular emphasis on performance reading and prosody. The authors share their experiences with Fluency Idol! as a way to engage young children by tapping into pop culture. The practice emphasizes repeated readings, feedback, practice, and…
Perceptual Grouping Affects Pitch Judgments across Time and Frequency
ERIC Educational Resources Information Center
Borchert, Elizabeth M. O.; Micheyl, Christophe; Oxenham, Andrew J.
2011-01-01
Pitch, the perceptual correlate of fundamental frequency (F0), plays an important role in speech, music, and animal vocalizations. Changes in F0 over time help define musical melodies and speech prosody, while comparisons of simultaneous F0 are important for musical harmony, and for segregating competing sound sources. This study compared…
Contrast-Marking Prosodic Emphasis in Williams Syndrome: Results of Detailed Phonetic Analysis
ERIC Educational Resources Information Center
Ito, Kiwako; Martens, Marilee A.
2017-01-01
Background: Past reports on the speech production of individuals with Williams syndrome (WS) suggest that their prosody is anomalous and may lead to challenges in spoken communication. While existing prosodic assessments confirm that individuals with WS fail to use prosodic emphasis to express contrast, those reports typically lack detailed…
Oral Reading Fluency Assessment: Issues of Construct, Criterion, and Consequential Validity
ERIC Educational Resources Information Center
Valencia, Sheila W.; Smith, Antony T.; Reece, Anne M.; Li, Min; Wixson, Karen K.; Newman, Heather
2010-01-01
This study investigated multiple models for assessing oral reading fluency, including 1-minute oral reading measures that produce scores reported as words correct per minute (wcpm). We compared a measure of wcpm with measures of the individual and combined indicators of oral reading fluency (rate, accuracy, prosody, and comprehension) to examine…
Research and Clinical Center for Child Development Annual Report, 1999-2000, No. 23.
ERIC Educational Resources Information Center
Chen, Shing-Jen, Ed.; Fujino, Yuki, Ed.
This annual report presents several articles related to the work of the Clinical Center for Child Development at Hokkaido University in Sapporo, Japan. The articles are: (1) "Intrinsic Musicality: Rhythm and Prosody in Infant-Directed Voices" (Niki Powers); (2) "Movable Cognitive Studies with a Portable, Telemetric Near-Infrared…
So Long, Robot Reader! A Superhero Intervention Plan for Improving Fluency
ERIC Educational Resources Information Center
Marcell, Barclay; Ferraro, Christine
2013-01-01
This article presents an engaging means for turning disfluent readers into prosody superstars. Each week students align with Poetry Power Man and his superhero friends to battle the evil Robot Reader and his sidekicks. The Fluency Foursome helps students adhere to the multidimensional aspects of fluency where expression and comprehension are…
Presentation Trainer: What Experts and Computers Can Tell about Your Nonverbal Communication
ERIC Educational Resources Information Center
Schneider, J.; Börner, D.; van Rosmalen, P.; Specht, M.
2017-01-01
The ability to present effectively is essential for professionals; therefore, oral communication courses have become part of the curricula for higher education studies. However, speaking in public is still a challenge for many graduates. To tackle this problem, driven by the recent advances in computer vision techniques and prosody analysis,…
Hand Gesture and Mathematics Learning: Lessons from an Avatar
ERIC Educational Resources Information Center
Cook, Susan Wagner; Friedman, Howard S.; Duggan, Katherine A.; Cui, Jian; Popescu, Voicu
2017-01-01
A beneficial effect of gesture on learning has been demonstrated in multiple domains, including mathematics, science, and foreign language vocabulary. However, because gesture is known to co-vary with other non-verbal behaviors, including eye gaze and prosody along with face, lip, and body movements, it is possible the beneficial effect of gesture…
Pragmatic Inferences in Context: Learning to Interpret Contrastive Prosody
ERIC Educational Resources Information Center
Kurumada, Chigusa; Clark, Eve V.
2017-01-01
Can preschoolers make pragmatic inferences based on the intonation of an utterance? Previous work has found that young children appear to ignore intonational meanings and come to understand contrastive intonation contours only after age six. We show that four-year-olds succeed in interpreting an English utterance, such as "It LOOKS like a…
From Sound to Syntax: The Prosodic Bootstrapping of Clauses
ERIC Educational Resources Information Center
Hawthorne, Kara
2013-01-01
It has long been argued that prosodic cues may facilitate syntax acquisition (e.g., Morgan, 1986). Previous studies have shown that infants are sensitive to violations of typical correlations between clause-final prosodic cues (Hirsh-Pasek et al., 1987) and that prosody facilitates memory for strings of words (Soderstrom et al., 2005). This…
Espousing melodic intonation therapy in aphasia rehabilitation: a case study.
Goldfarb, R; Bader, E
1979-01-01
A program of Melodic Intonation Therapy (MIT) was adapted as a home training procedure to enable a severely affected aphasic adult to respond to 52 simple questions bearing relevance to his daily life. MIT involves embedding short phrases or sentences in a simple, non-distinct melody pattern. As the patient progresses through the program, the melodic aspect is faded and the program eventually leads to production of the target phrase or sentence in normal speech prosody. The present procedure consisted of three levels of training designed to advance the subject from an initial level of intoning responses in a simple melody to producing the responses in normal speech prosody. The subject's wife was trained to administer MIT in both the clinical and home settings. Considerable improvement was obtained in imitation and in context-related responses to questions. These findings lend support to the proposal that music dominance in the right hemisphere assists, and perhaps diminishes the language dominance of, the damaged left hemisphere. The limitations of Melodic Intonation Therapy were discussed.
Verona, Edelyn; Sprague, Jenessa; Sadeh, Naomi
2012-05-01
The field of personality disorders has had a long-standing interest in understanding interactions between emotion and inhibitory control, as well as neurophysiological indices of these processes. More work in particular is needed to clarify differential deficits in offenders with antisocial personality disorder (APD) who differ on psychopathic traits, as APD and psychopathy are considered separate, albeit related, syndromes. Evidence of distinct neurobiological processing in these disorders would have implications for etiology-based personality disorder taxonomies in future psychiatric classification systems. To inform this area of research, we recorded event-related brain potentials during an emotional-linguistic Go/No-Go task to examine modulation of negative emotional processing by inhibitory control in three groups: psychopathy (n = 14), APD (n = 16), and control (n = 15). In control offenders, inhibitory control demands (No-Go vs. Go) modulated frontal-P3 amplitude to negative emotional words, indicating appropriate prioritization of inhibition over emotional processing. In contrast, the psychopathic group showed blunted processing of negative emotional words regardless of inhibitory control demands, consistent with research on emotional deficits in psychopathy. Finally, the APD group demonstrated enhanced processing of negative emotion words in both Go and No-Go trials, suggesting a failure to modulate negative emotional processing when inhibitory control is required. Implications for emotion-cognition interactions and putative etiological processes in these personality disorders are discussed.
Loutrari, Ariadne; Tselekidou, Freideriki; Proios, Hariklia
2018-02-27
Prosodic patterns of speech appear to make a critical contribution to memory-related processing. We considered the case of a previously unexplored prosodic feature of Greek storytelling and its effect on free recall in thirty typically developing children between the ages of 10 and 12 years, using short, ecologically valid auditory stimuli. The combination of a falling pitch contour and, more notably, extensive final-syllable vowel lengthening, which gives rise to the prosodic feature in question, led to significantly higher performance in comparison with neutral phrase-final prosody. The number of syllables in target words did not reveal a substantial difference in performance. The current study presents a previously undocumented, culturally specific prosodic pattern and its effect on short-term memory.
Emotional Picture and Word Processing: An fMRI Study on Effects of Stimulus Complexity
Schlochtermeier, Lorna H.; Kuchinke, Lars; Pehrs, Corinna; Urton, Karolina; Kappelhoff, Hermann; Jacobs, Arthur M.
2013-01-01
Neuroscientific investigations regarding aspects of emotional experiences usually focus on one stimulus modality (e.g., pictorial or verbal). Similarities and differences in the processing between the different modalities have rarely been studied directly. The comparison of verbal and pictorial emotional stimuli often reveals a processing advantage of emotional pictures in terms of larger or more pronounced emotion effects evoked by pictorial stimuli. In this study, we examined whether this picture advantage refers to general processing differences or whether it might partly be attributed to differences in visual complexity between pictures and words. We first developed a new stimulus database comprising valence and arousal ratings for more than 200 concrete objects representable in different modalities including different levels of complexity: words, phrases, pictograms, and photographs. Using fMRI we then studied the neural correlates of the processing of these emotional stimuli in a valence judgment task, in which the stimulus material was controlled for differences in emotional arousal. No superiority for the pictorial stimuli was found in terms of emotional information processing with differences between modalities being revealed mainly in perceptual processing regions. While visual complexity might partly account for previously found differences in emotional stimulus processing, the main existing processing differences are probably due to enhanced processing in modality specific perceptual regions. We would suggest that both pictures and words elicit emotional responses with no general superiority for either stimulus modality, while emotional responses to pictures are modulated by perceptual stimulus features, such as picture complexity. PMID:23409009
Deeper than skin deep - The effect of botulinum toxin-A on emotion processing.
Baumeister, J-C; Papa, G; Foroni, F
2016-08-01
The effect of facial botulinum toxin-A (BTX) injections on the processing of emotional stimuli was investigated. The hypothesis that BTX would interfere with the processing of slightly emotional stimuli, and less so with very emotional or neutral stimuli, was largely confirmed. BTX users rated slightly emotional sentences and facial expressions, but not very emotional or neutral ones, as less emotional after the treatment. Furthermore, they became slower at categorizing slightly emotional facial expressions under time pressure. Copyright © 2016 Elsevier Ltd. All rights reserved.
The case for positive emotions in the stress process.
Folkman, Susan
2008-01-01
For many decades, the stress process was described primarily in terms of negative emotions. However, robust evidence that positive emotions co-occurred with negative emotions during intensely stressful situations suggested the need to consider the possible roles of positive emotions in the stress process. About 10 years ago, these possibilities were incorporated into a revision of stress and coping theory (Folkman, 1997). This article summarizes the research reported during the intervening 10 years that pertains to the revised model. Evidence has accumulated regarding the co-occurrence of positive and negative emotions during stressful periods; the restorative function of positive emotions with respect to physiological, psychological, and social coping resources; and the kinds of coping processes that generate positive emotions including benefit finding and reminding, adaptive goal processes, reordering priorities, and infusing ordinary events with positive meaning. Overall, the evidence supports the propositions set forth in the revised model. Contrary to earlier tendencies to dismiss positive emotions, the evidence indicates they have important functions in the stress process and are related to coping processes that are distinct from those that regulate distress. Including positive emotions in future studies will help address an imbalance between research and clinical practice due to decades of nearly exclusive concern with the negative emotions.
Increased heart rate after exercise facilitates the processing of fearful but not disgusted faces.
Pezzulo, G; Iodice, P; Barca, L; Chausse, P; Monceau, S; Mermillod, M
2018-01-10
Embodied theories of emotion assume that emotional processing is grounded in bodily and affective processes. Accordingly, the perception of an emotion re-enacts congruent sensory and affective states; and conversely, bodily states congruent with a specific emotion facilitate emotional processing. This study tests whether the ability to process facial expressions (faces having a neutral expression, expressing fear, or disgust) can be influenced by making the participants' body state congruent with the expressed emotion (e.g., high heart rate in the case of faces expressing fear). We designed a task requiring participants to categorize pictures of male and female faces that either had a neutral expression (neutral), or expressed emotions whose linkage with high heart rate is strong (fear) or significantly weaker or absent (disgust). Critically, participants were tested in two conditions: with experimentally induced high heart rate (Exercise) and with normal heart rate (Normal). Participants processed fearful faces (but not disgusted or neutral faces) faster when they were in the Exercise condition than in the Normal condition. These results support the idea that an emotionally congruent body state facilitates the automatic processing of emotionally-charged stimuli and this effect is emotion-specific rather than due to generic factors such as arousal.
Relating empathy and emotion regulation: do deficits in empathy trigger emotion dysregulation?
Schipper, Marc; Petermann, Franz
2013-01-01
Emotion regulation is a crucial skill in adulthood; its acquisition represents one of the key developmental tasks in early childhood. Difficulties with adaptive emotion regulation increase the risk of psychopathology in childhood and adulthood. This is shown, for instance, by a relation between emotion regulation and aggressive behavior in childhood, indicating emotion dysregulation as an important risk factor for aggressive behavior and a potential precursor of psychopathology. Based on (1) interrelations between emotion processes and social information processing (maladaptive emotion regulation and social information processing are associated with higher levels of aggression) and (2) recent neuroscientific findings showing that empathy deficits might result in difficulties labeling not only others' emotions but one's own emotions too, we suggest that empathy deficits might serve as a potential trigger of emotion dysregulation. Different studies investigating the relation between empathy and emotion regulation are presented and discussed, based on the assumed potential of empathy deficits to trigger emotion dysregulation. Furthermore, developmental neuroscientific findings on empathy and emotion regulation are highlighted which provide further insight into how these processes might relate. Finally, possible directions for future research are presented.
Sleep and emotion regulation: An organizing, integrative review.
Palmer, Cara A; Alfano, Candice A
2017-02-01
A growing body of research suggests that disrupted sleep is a robust risk and maintenance factor for a range of psychiatric conditions. One explanatory mechanism linking sleep and psychological health is emotion regulation. However, numerous components embedded within this construct create both conceptual and empirical challenges to the study of emotion regulation. These challenges are reflected in most sleep-emotion research by way of poor delineation of constructs and insufficient distinction among emotional processes. Most notably, a majority of research has focused on emotions generated as a consequence of inadequate sleep rather than underlying regulatory processes that may alter these experiences. The current review utilizes the process model of emotion regulation as an organizing framework for examining the impact of sleep upon various aspects of emotional experiences. Evidence is provided for maladaptive changes in emotion at multiple stages of the emotion generation and regulation process. We conclude with a call for experimental research designed to clearly explicate which points in the emotion regulation process appear most vulnerable to sleep loss as well as longitudinal studies to follow these processes in relation to the development of psychopathological conditions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Shpak, Talma; Most, Tova; Luntz, Michal
2014-01-01
The aim of this study was to examine the role of fundamental frequency (F0) information in improving speech perception of individuals with a cochlear implant (CI) who use a contralateral hearing aid (HA). The authors hypothesized that in bilateral-bimodal (CI/HA) users the perception of natural prosody speech would be superior to the perception of speech with monotonic flattened F0 contour, whereas in unilateral CI users the perception of both speech signals would be similar. They also hypothesized that in the CI/HA listening condition the speech perception scores would improve as a function of the magnitude of the difference between the F0 characteristics of the target speech signal and the F0 characteristics of the competitors, whereas in the CI-alone condition such a pattern would not be recognized, or at least not as clearly. Two tests were administered to 29 experienced CI/HA adult users who, regardless of their residual hearing or speech perception abilities, had chosen to continue using an HA in the nonimplanted ear for at least 75% of their waking hours. In the first test, the difference between the perception of speech characterized by natural prosody and speech characterized by monotonic flattened F0 contour was assessed in the presence of babble noise produced by three competing male talkers. In the second test the perception of semantically unpredictable sentences was evaluated in the presence of a competing reversed speech sentence spoken by different single talkers with different F0 characteristics. Each test was carried out under two listening conditions: CI alone and CI/HA. Under both listening conditions, the perception of speech characterized by natural prosody was significantly better than the perception of speech in which monotonic F0 contour was flattened. Differences between the scores for natural prosody and for monotonic flattened F0 speech contour were significantly greater, however, in the CI/HA condition than with CI alone. 
In the second test, the overall scores for perception of semantically unpredictable sentences in the presence of all competitors were higher in the CI/HA condition. In both listening conditions, scores increased significantly with increasing difference between the F0 characteristics of the target speech signal and those of the competitor. The higher scores obtained in the CI/HA condition than with CI alone in both of the task-specific tests suggested that the use of a contralateral HA provides improved low-frequency information, resulting in better performance by CI/HA users.
Novakova, Barbora; Howlett, Stephanie; Baker, Roger; Reuber, Markus
2015-07-01
This exploratory study aimed to examine emotion-processing styles in patients with psychogenic non-epileptic seizures (PNES), compared to healthy individuals, and to explore associations of emotion processing with other psychological measures and seizure frequency, using the new Emotional Processing Scale (EPS-25), which had not previously been used in this patient group. Fifty consecutive patients with PNES referred for psychotherapy completed a set of self-report questionnaires, including the Emotional Processing Scale (EPS-25), Clinical Outcome in Routine Evaluation (CORE-10), Short Form-36 (SF-36), Patient Health Questionnaire (PHQ-15), and Brief Illness Perception Questionnaire (BIPQ). Responses on the EPS-25 were compared to data from 224 healthy controls. Patients with PNES had greater emotion processing deficits than healthy individuals across all dimensions of the EPS-25 (suppression, unprocessed emotion, unregulated emotion, avoidance, and impoverished emotional experience). Impaired emotion processing was highly correlated with psychological distress, more frequent and severe somatic symptoms, and a more threatening understanding of the symptoms. Emotion processing problems were also associated with reduced health-related quality of life on the mental health (but not the physical health) component of the SF-36. The unregulated emotions sub-scale of the EPS was associated with lower seizure frequency. The results showed clear impairments of emotion processing in patients with PNES compared to healthy individuals, which were associated with greater psychological distress and reduced mental health functioning. These findings seem to support the face validity of the EPS-25 as a measure for PNES patients and its potential as a tool to assess the effectiveness of psychological interventions. Copyright © 2015 British Epilepsy Association. Published by Elsevier Ltd. All rights reserved.
Shafer, A.T.; Matveychuk, D.; Penney, T.; O’Hare, A.J.; Stokes, J.; Dolcos, F.
2015-01-01
Traditionally, emotional stimuli have been thought to be automatically processed via a bottom-up automatic “capture of attention” mechanism. Recently, this view has been challenged by evidence that emotion processing depends on the availability of attentional resources. Although these two views are not mutually exclusive, direct evidence reconciling them is lacking. One limitation of previous investigations supporting the traditional or competing views is that they have not systematically investigated the impact of the emotional charge of task-irrelevant distraction in conjunction with manipulations of attentional demands. Using event-related fMRI, we investigated the nature of emotion-cognition interactions in a perceptual discrimination task with emotional distraction, by manipulating both the emotional charge of the distracting information and the demands of the main task. Findings suggest that emotion processing is both automatic and modulated by attention, but emotion and attention were only found to interact when finer assessments of emotional charge (comparison of most vs. least emotional conditions) were considered along with an effective manipulation of processing load (high vs. low). The study also identified brain regions reflecting the detrimental impact of emotional distraction on performance as well as regions involved in helping with such distraction. Activity in the dorsomedial prefrontal cortex (PFC) and ventrolateral PFC was linked to a detrimental impact of emotional distraction, whereas the dorsal anterior cingulate cortex and lateral occipital cortex were involved in helping with emotional distraction. These findings demonstrate that task-irrelevant emotion processing is subject to both the emotional content of distraction and the level of attentional demand. PMID:22332805
Emotion Processing by ERP Combined with Development and Plasticity.
Ding, Rui; Li, Ping; Wang, Wei; Luo, Wenbo
2017-01-01
Emotions, which are important for survival and social interaction, have been investigated widely and in depth. The application of the fMRI technique to emotion processing has yielded substantial achievements with respect to the localization of emotion processes. The ERP method, which has higher temporal resolution than fMRI, can be employed to investigate the time course of emotion processing. Emotional modulation of ERP components has been verified across numerous studies. Emotions, which develop dynamically with age, may be enhanced through learning (or training) or impaired by disturbances during development, processes underlain by the neural plasticity of emotion-relevant nervous systems. Mood disorders, with their typical symptoms of emotional discordance, may in turn be caused by dysfunctional neural plasticity.
PMID:28831313
Xu, Min; Xu, Guiping; Yang, Yang
2016-01-01
Understanding how the nature of interference might influence the recruitment of neural systems is considered key to understanding cognitive control. Although interference processing in the emotional domain has recently attracted great interest, the question of whether there are separable neural patterns for emotional and non-emotional interference processing remains open. Here, we performed an activation likelihood estimation meta-analysis of 78 neuroimaging experiments, and examined common and distinct neural systems for emotional and non-emotional interference processing. We examined brain activation in three domains of interference processing: emotional verbal interference in the face-word conflict task, non-emotional verbal interference in the color-word Stroop task, and non-emotional spatial interference in the Simon, SRC and Flanker tasks. Our results show that the dorsal anterior cingulate cortex (ACC) was recruited for both emotional and non-emotional interference. In addition, the right anterior insula, presupplementary motor area (pre-SMA), and right inferior frontal gyrus (IFG) were activated by interference processing across both emotional and non-emotional domains. In light of these results, we propose that the anterior insular cortex may serve to integrate information from different dimensions and work together with the dorsal ACC to detect and monitor conflicts, whereas the pre-SMA and right IFG may be recruited to inhibit inappropriate responses. In contrast, the dorsolateral prefrontal cortex (DLPFC) and posterior parietal cortex (PPC) showed different degrees of activation and distinct lateralization patterns for different processing domains, which suggests that these regions may implement cognitive control based on specific task requirements. PMID:27895564
NASA Astrophysics Data System (ADS)
Trost, Wiebke; Frühholz, Sascha
2015-06-01
The quartet theory of human emotions proposed by Koelsch and colleagues [1] identifies four different affect systems involved in the processing of particular types of emotions. Moreover, the theory integrates both basic emotions and more complex emotion concepts, including aesthetic emotions such as musical emotions. The authors identify a particular brain system for each type of emotion, in part by contrasting them with brain structures that are generally involved in emotion processing irrespective of the type of emotion. A brain system that has received less attention in emotion theories, but which represents the one of the quartet's four systems that induces attachment-related emotions, is the hippocampus.
Anxiety, emotional processing and depression in people with multiple sclerosis.
Gay, Marie-Claire; Bungener, Catherine; Thomas, Sarah; Vrignaud, Pierre; Thomas, Peter W; Baker, Roger; Montel, Sébastien; Heinzlef, Olivier; Papeix, Caroline; Assouad, Rana; Montreuil, Michèle
2017-02-23
Despite the high comorbidity of anxiety and depression in people with multiple sclerosis (MS), little is known about their inter-relationships. Both involve emotional perturbations, and the way in which emotions are processed is likely central to both. The aim of the current study was to explore relationships between the domains of mood, emotional processing and coping, and to analyse how anxiety affects coping, emotional processing, emotional balance and depression in people with MS. A cross-sectional questionnaire study involving 189 people with a confirmed diagnosis of MS recruited from three French hospitals. Study participants completed a battery of questionnaires encompassing the following domains: i. anxiety and depression (Hospital Anxiety and Depression Scale (HADS)); ii. emotional processing (Emotional Processing Scale (EPS-25)); iii. positive and negative emotions (Positive and Negative Emotionality Scale (EPN-31)); iv. alexithymia (Bermond-Vorst Alexithymia Questionnaire); and v. coping (Coping with Health Injuries and Problems-Neuro (CHIP-Neuro) questionnaire). Relationships between these domains were explored using path analysis. Anxiety was a strong predictor of depression, both directly and indirectly, and our model explained 48% of the variance of depression. Gender and functional status (measured by the Expanded Disability Status Scale) played a modest role. Non-depressed people with MS reported high levels of negative emotions and low levels of positive emotions. Anxiety also had an indirect impact on depression via one of the subscales of the Emotional Processing Scale ("Unregulated Emotion") and via negative emotions (EPN-31). This research confirms that anxiety is a vulnerability factor for depression via both direct and indirect pathways. Anxiety symptoms should therefore be assessed systematically and treated in order to lessen the likelihood of depression symptoms.
Explicit attention interferes with selective emotion processing in human extrastriate cortex
Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O
2007-01-01
Background Brain imaging and event-related potential studies provide strong evidence that emotional stimuli guide selective attention in visual processing. A reflection of the emotional attention capture is the increased Early Posterior Negativity (EPN) for pleasant and unpleasant compared to neutral images (~150–300 ms poststimulus). The present study explored whether this early emotion discrimination reflects an automatic phenomenon or is subject to interference by competing processing demands. Thus, emotional processing was assessed while participants performed a concurrent feature-based attention task varying in processing demands. Results Participants successfully performed the primary visual attention task as revealed by behavioral performance and selected event-related potential components (Selection Negativity and P3b). Replicating previous results, emotional modulation of the EPN was observed in a task condition with low processing demands. In contrast, pleasant and unpleasant pictures failed to elicit increased EPN amplitudes compared to neutral images in more difficult explicit attention task conditions. Further analyses determined that even the processing of pleasant and unpleasant pictures high in emotional arousal is subject to interference in experimental conditions with high task demand. Taken together, performing demanding feature-based counting tasks interfered with differential emotion processing indexed by the EPN. Conclusion The present findings demonstrate that taxing processing resources by a competing primary visual attention task markedly attenuated the early discrimination of emotional from neutral picture contents. Thus, these results provide further empirical support for an interference account of the emotion-attention interaction under conditions of competition. Previous studies revealed the interference of selective emotion processing when attentional resources were directed to locations of explicitly task-relevant stimuli. 
The present data suggest that interference of emotion processing by competing task demands is a more general phenomenon, extending to the domain of feature-based attention. Furthermore, the results are inconsistent with the notion of effortlessness, i.e., early emotion discrimination despite concurrent task demands. These findings suggest that the presumed automatic nature of emotion processing should be assessed at the level of specific aspects, rather than treating automaticity as an all-or-none phenomenon. PMID:17316444
ERIC Educational Resources Information Center
Mazer, Joseph P.; McKenna-Buchanan, Timothy P.; Quinlan, Margaret M.; Titsworth, Scott
2014-01-01
Based on emotional response theory (ERT), recent researchers have observed connections between teachers' communication behaviors and students' emotional reactions. In the present study, we further elaborated ERT by exploring the effects of teacher communication behaviors and emotional processes on discrete negative emotions, including anger,…
Dissociative tendencies and facilitated emotional processing.
Oathes, Desmond J; Ray, William J
2008-10-01
Dissociation is a process linked to lapses of attention, history of abuse or trauma, compromised emotional memory, and a disintegrated sense of self. It is theorized that dissociation stems from avoiding emotional information, especially negative emotion, to protect a fragile psyche. The present study tested whether dissociators actually avoid processing emotion by asking groups scoring high or low on the Dissociative Experiences Scale to judge the affective valence of several types of emotional stimuli. Manipulations of valence, modality (pictures or words), task complexity, and personal relevance led to results suggesting that dissociation is linked to facilitated rather than deficient emotional processing. Our results are consistent with a theory that sensitivity to emotional material may be a contributing factor in subsequent dissociation to avoid further elaboration of upsetting emotion in these individuals. The findings for dissociation further exemplify the influence of individual differences in the link between cognition and emotion.
Pickett, Scott M; Kurby, Christopher A
2010-12-01
Experiential avoidance is a functional class of maladaptive strategies that contribute to the development and maintenance of psychopathology. Although previous research has demonstrated group differences in the interpretation of aversive stimuli, there is limited work on the influence of experiential avoidance during the online processing of emotion. An experimental design investigated the influence of self-reported experiential avoidance during emotion processing by assessing emotion inferences during the comprehension of narratives that imply different emotions. Results suggest that experiential avoidance is partially characterized by an emotional information processing bias. Specifically, individuals reporting higher experiential avoidance scores exhibited a bias towards activating negative emotion inferences, whereas individuals reporting lower experiential avoidance scores exhibited a bias towards activating positive emotion inferences. Minimal emotional inference was observed for the non-biased affective valence. Findings are discussed in terms of the implications of experiential avoidance as a cognitive vulnerability for psychopathology.
Damaskinou, Nikoleta; Watling, Dawn
2018-05-01
This study was designed to investigate the patterns of electrophysiological responses of early emotional processing at frontocentral sites in adults and to explore whether adults' activation patterns show hemispheric lateralization for facial emotion processing. Thirty-five adults viewed full face and chimeric face stimuli. After viewing two faces sequentially, participants were asked to decide which of the two faces was more emotive. The findings from the standard faces and the chimeric faces suggest that emotion processing is present during the early phases of face processing at frontocentral sites. In particular, sad emotional faces are processed differently from neutral and happy faces (including happy chimeras) in these early phases of processing. Further, there were differences in the electrode amplitudes over the left and right hemispheres, particularly in the early temporal window. This research provides supporting evidence that the chimeric face test is a test of emotion processing that elicits right hemispheric processing.
Electrodermal Reactivity to Emotion Processing in Adults with Autistic Spectrum Disorders
ERIC Educational Resources Information Center
Hubert, B. E.; Wicker, B.; Monfardini, E.; Deruelle, C.
2009-01-01
Although alterations of emotion processing are recognized as a core component of autism, the level at which alterations occur is still debated. Discrepant results suggest that overt assessment of emotion processing is not appropriate. In this study, skin conductance response (SCR) was used to examine covert emotional processes. Both behavioural…
Time course of implicit processing and explicit processing of emotional faces and emotional words.
Frühholz, Sascha; Jellinghaus, Anne; Herrmann, Manfred
2011-05-01
Facial expressions are important emotional stimuli during social interactions. Symbolic emotional cues, such as affective words, also convey information regarding emotions that is relevant for social communication. Various studies have demonstrated fast decoding of emotions from words, as has been shown for faces, whereas others report a rather delayed decoding of emotional information from words. Here, we introduced an implicit task (color naming) and an explicit task (emotion judgment) with facial expressions and words, both containing information about emotions, to directly compare the time course of emotion processing using event-related potentials (ERP). The data show that only negative faces affected task performance, resulting in increased error rates compared to neutral faces. Presentation of emotional faces resulted in a modulation of the N170, the EPN and the LPP components, and these modulations were found during both the explicit and implicit tasks. Emotional words only affected the EPN during the explicit task, but a task-independent effect on the LPP was revealed. Finally, emotional faces modulated source activity in the extrastriate cortex underlying the generation of the N170, EPN and LPP components. Emotional words led to a modulation of source activity corresponding to the EPN and LPP, but they also affected the N170 source in the right hemisphere. These data show that facial expressions affect earlier stages of emotion processing compared to emotional words, but the emotional value of words may have been detected at early stages of emotional processing in the visual cortex, as indicated by the extrastriate source activity.
Poor sleep quality predicts deficient emotion information processing over time in early adolescence.
Soffer-Dudek, Nirit; Sadeh, Avi; Dahl, Ronald E; Rosenblat-Stein, Shiran
2011-11-01
There is deepening understanding of the effects of sleep on emotional information processing. Emotion information processing is a key aspect of social competence, which undergoes important maturational and developmental changes in adolescence; however, most research in this area has focused on adults. Our aim was to test the links between sleep and emotion information processing during early adolescence. Sleep and facial information processing were assessed objectively during 3 assessment waves, separated by 1-year lags. Data were obtained in natural environments: sleep was assessed in home settings, and facial information processing was assessed at school. Participants were 94 healthy children (53 girls, 41 boys), aged 10 years at Time 1. Facial information processing was tested under neutral (gender identification) and emotional (emotional expression identification) conditions. Sleep was assessed in home settings using actigraphy for 7 nights at each assessment wave. Waking > 5 min was considered a night awakening. Using multilevel modeling, elevated night awakenings and decreased sleep efficiency significantly predicted poor performance only in the emotional information processing condition (e.g., b = -1.79, SD = 0.52, confidence interval: lower boundary = -2.82, upper boundary = -0.076, t(416.94) = -3.42, P = 0.001). Poor sleep quality is associated with compromised emotional information processing during early adolescence, a sensitive period in socio-emotional development.
Native Language Influence in the Segmentation of a Novel Language
ERIC Educational Resources Information Center
Ordin, Mikhail; Nespor, Marina
2016-01-01
A major problem in second language acquisition (SLA) is the segmentation of fluent speech in the target language, i.e., detecting the boundaries of phonological constituents like words and phrases in the speech stream. To this end, among a variety of cues, people extensively use prosody and statistical regularities. We examined the role of pitch,…
What Oral Text Reading Fluency Can Reveal about Reading Comprehension
ERIC Educational Resources Information Center
Veenendaal, Nathalie J.; Groen, Margriet A.; Verhoeven, Ludo
2015-01-01
Text reading fluency--the ability to read quickly, accurately and with a natural intonation--has been proposed as a predictor of reading comprehension. In the current study, we examined the role of oral text reading fluency, defined as text reading rate and text reading prosody, as a contributor to reading comprehension outcomes in addition to…
Rhythm in Ethiopian English: Implications for the Teaching of English Prosody
ERIC Educational Resources Information Center
Gashaw, Anegagregn
2017-01-01
To determine whether English speech produced by Ethiopian speakers falls under syllable-timed or stress-timed rhythm, the study examined the nature of stress and rhythm in the pronunciation of Ethiopian speakers of English, focusing on one language group speaking Amharic as a native language. Using acoustic analysis of the speeches…