Teki, Sundeep; Barnes, Gareth R; Penny, William D; Iverson, Paul; Woodhead, Zoe V J; Griffiths, Timothy D; Leff, Alexander P
2013-06-01
In this study, we used magnetoencephalography and a mismatch paradigm to investigate speech processing in stroke patients with auditory comprehension deficits and age-matched control subjects. We probed connectivity within and between the two temporal lobes in response to phonemic (different word) and acoustic (same word) oddballs using dynamic causal modelling. We found stronger modulation of self-connections as a function of phonemic differences for control subjects versus aphasics in left primary auditory cortex and bilateral superior temporal gyrus. The patients showed stronger modulation of connections from right primary auditory cortex to right superior temporal gyrus (feed-forward) and from left primary auditory cortex to right primary auditory cortex (interhemispheric). This differential connectivity can be explained on the basis of a predictive coding theory, which suggests increased prediction error and decreased sensitivity to phonemic boundaries in the aphasics' speech network in both hemispheres. Within the aphasics, we also found behavioural correlates of connection strengths: a negative correlation between phonemic perception and an inter-hemispheric connection (left superior temporal gyrus to right superior temporal gyrus), and a positive correlation between semantic performance and a feedback connection (right superior temporal gyrus to right primary auditory cortex). Our results suggest that aphasics with impaired speech comprehension have less veridical speech representations in both temporal lobes, and rely more on right hemisphere auditory regions, particularly right superior temporal gyrus, for processing speech. Despite this presumed compensatory shift in network connectivity, the patients remain significantly impaired.
Sato, M; Yasui, N; Isobe, I; Kobayashi, T
1982-10-01
A 49-year-old right-handed female was reported. She showed pure word deafness and auditory agnosia due to bilateral temporo-parietal lesions. The left lesion resulted from vasospasm of the left anterior and middle cerebral arteries after subarachnoid hemorrhage due to a ruptured aneurysm of the left carotid artery, and the right lesion resulted from a subcortical hematoma after a ventriculo-peritoneal shunt operation. CT scan revealed abnormal low-density areas in the bilateral temporo-parietal regions seven months after onset. Neuropsychological findings were as follows: there were no aphasic symptoms such as paraphasia, word-finding difficulties, or disturbances of spontaneous writing, reading, or calculation. However, her auditory comprehension was severely disturbed, and she could neither repeat words after the tester nor write from dictation. She also could not recognize meaningful sounds or music, despite normal hearing sensitivity for pure tones, BSR and AER. We discuss the neuropsychological mechanisms of auditory recognition and propose that each hemisphere may process both verbal and non-verbal auditory stimuli in the secondary auditory area. The auditory input may be recognized at the left association area, the final level of this mechanism. The pure word deafness and auditory agnosia in this case might be caused by disruption of the right secondary auditory area, of the pathway between the left primary auditory area and the left secondary auditory area, and of the pathway between the left and right secondary auditory areas.
Zhang, G-Y; Yang, M; Liu, B; Huang, Z-C; Li, J; Chen, J-Y; Chen, H; Zhang, P-P; Liu, L-J; Wang, J; Teng, G-J
2016-01-28
Previous studies often report that early auditory deprivation or congenital deafness contributes to cross-modal reorganization in the auditory-deprived cortex, and this cross-modal reorganization limits clinical benefit from cochlear prosthetics. However, there are inconsistencies among study results on cortical reorganization in subjects with long-term unilateral sensorineural hearing loss (USNHL). It is also unclear whether a similar cross-modal plasticity of the auditory cortex exists for acquired monaural deafness as for early or congenital deafness. To address this issue, we constructed directional brain functional networks based on entropy connectivity of resting-state functional MRI and examined changes in these networks. Thirty-four long-term USNHL individuals and seventeen normally hearing individuals participated in the study, and all USNHL patients had acquired deafness. We found that certain brain regions of the sensorimotor and visual networks presented enhanced synchronous output entropy connectivity with the left primary auditory cortex in the left long-term USNHL individuals as compared with normally hearing individuals. In particular, the left USNHL group showed more pronounced changes in entropy connectivity than the right USNHL group. No significant plastic changes were observed in the right USNHL. Our results indicate that the left primary auditory cortex (non-auditory-deprived cortex) in patients with left USNHL has been reorganized by visual and sensorimotor modalities through cross-modal plasticity. Furthermore, this cross-modal reorganization also alters the directional brain functional networks. Auditory deprivation from the left or right side generates different influences on the human brain. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
Auditory cortical volumes and musical ability in Williams syndrome.
Martens, Marilee A; Reutens, David C; Wilson, Sarah J
2010-07-01
Individuals with Williams syndrome (WS) have been shown to have atypical morphology in the auditory cortex, an area associated with aspects of musicality. Some individuals with WS have demonstrated specific musical abilities, despite intellectual delays. Primary auditory cortex and planum temporale volumes were manually segmented in 25 individuals with WS and 25 control participants, and the participants also underwent testing of musical abilities. Left and right planum temporale volumes were significantly larger in the participants with WS than in controls, with no significant difference noted between groups in planum temporale asymmetry or primary auditory cortical volumes. Left planum temporale volume was significantly increased in a subgroup of the participants with WS who demonstrated specific musical strengths, as compared to the remaining WS participants, and was highly correlated with scores on a musical task. These findings suggest that differences in musical ability within WS may be in part associated with variability in the left auditory cortical region, providing further evidence of cognitive and neuroanatomical heterogeneity within this syndrome. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
Geissler, Diana B; Ehret, Günter
2004-02-01
Details of brain areas for acoustical Gestalt perception and the recognition of species-specific vocalizations are not known. Here we show how spectral properties and the recognition of the acoustical Gestalt of wriggling calls of mouse pups based on a temporal property are represented in auditory cortical fields and an association area (dorsal field) of the pups' mothers. We stimulated either with a call model releasing maternal behaviour at a high rate (call recognition) or with two models of low behavioural significance (perception without recognition). Brain activation was quantified using c-Fos immunocytochemistry, counting Fos-positive cells in electrophysiologically mapped auditory cortical fields and the dorsal field. A frequency-specific labelling in two primary auditory fields is related to call perception but not to the discrimination of the biological significance of the call models used. Labelling related to call recognition is present in the second auditory field (AII). A left hemisphere advantage of labelling in the dorsoposterior field seems to reflect an integration of call recognition with maternal responsiveness. The dorsal field is activated only in the left hemisphere. The spatial extent of Fos-positive cells within the auditory cortex and its fields is larger in the left than in the right hemisphere. Our data show that a left hemisphere advantage in processing of a species-specific vocalization up to recognition is present in mice. The differential representation of vocalizations of high vs. low biological significance, as seen only in higher-order and not in primary fields of the auditory cortex, is discussed in the context of perceptual strategies.
Hale, Matthew D; Zaman, Arshad; Morrall, Matthew C H J; Chumas, Paul; Maguire, Melissa J
2018-03-01
Presurgical evaluation for temporal lobe epilepsy routinely assesses speech and memory lateralization and anatomic localization of the motor and visual areas but not baseline musical processing. This is paramount in a musician. Although validated tools exist to assess musical ability, there are no reported functional magnetic resonance imaging (fMRI) paradigms to assess musical processing. We examined the utility of a novel fMRI paradigm in an 18-year-old left-handed pianist who underwent surgery for a left temporal low-grade ganglioglioma. Preoperative evaluation consisted of neuropsychological evaluation, T1-weighted and T2-weighted magnetic resonance imaging, and fMRI. Auditory blood oxygen level-dependent fMRI was performed using a dedicated auditory scanning sequence. Three separate auditory investigations were conducted: listening to, humming, and thinking about a musical piece. All auditory fMRI paradigms activated the primary auditory cortex with varying degrees of auditory lateralization. Thinking about the piece additionally activated the primary visual cortices (bilaterally) and right dorsolateral prefrontal cortex. Humming demonstrated left-sided predominance of auditory cortex activation with activity observed in close proximity to the tumor. This study demonstrated an fMRI paradigm for evaluating musical processing that could form part of preoperative assessment for patients undergoing temporal lobe surgery for epilepsy. Copyright © 2017 Elsevier Inc. All rights reserved.
Crinion, Jenny; Price, Cathy J
2005-12-01
Previous studies have suggested that recovery of speech comprehension after left hemisphere infarction may depend on a mechanism in the right hemisphere. However, the role that distinct right hemisphere regions play in speech comprehension following left hemisphere stroke has not been established. Here, we used functional magnetic resonance imaging (fMRI) to investigate narrative speech activation in 18 neurologically normal subjects and 17 patients with left hemisphere stroke and a history of aphasia. Activation for listening to meaningful stories relative to meaningless reversed speech was identified in the normal subjects and in each patient. Second-level analyses were then used to investigate how story activation varied with the patients' performance on auditory sentence comprehension tests and on surprise story recognition memory tests administered after scanning. Irrespective of lesion site, performance on tests of auditory sentence comprehension was positively correlated with activation in the right lateral superior temporal region, anterior to primary auditory cortex. In addition, when the stroke spared the left temporal cortex, good performance on tests of auditory sentence comprehension was also correlated with activation in the left posterior superior temporal cortex (Wernicke's area). In distinct contrast to this, good story recognition memory predicted left inferior frontal and right cerebellar activation. The implication of this double dissociation in the effects of auditory sentence comprehension and story recognition memory is that left frontal and left temporal activations are dissociable. Our findings strongly support the role of the right temporal lobe in processing narrative speech and, in particular, auditory sentence comprehension following left hemisphere aphasic stroke. In addition, they highlight the importance of the right anterior superior temporal cortex, where the response was dissociated from that in the left posterior temporal lobe.
Hoefer, M; Tyll, S; Kanowski, M; Brosch, M; Schoenfeld, M A; Heinze, H-J; Noesselt, T
2013-10-01
Although multisensory integration has been an important area of recent research, most studies have focused on audiovisual integration. Importantly, however, the combination of audition and touch can guide our behavior just as effectively; we studied this combination here using psychophysics and functional magnetic resonance imaging (fMRI). We tested whether task-irrelevant tactile stimuli would enhance auditory detection, and whether hemispheric asymmetries would modulate these audiotactile benefits, using lateralized sounds. Spatially aligned task-irrelevant tactile stimuli could occur either synchronously or asynchronously with the sounds. Auditory detection was enhanced by non-informative synchronous and asynchronous tactile stimuli, if presented on the left side. Elevated fMRI signals to left-sided synchronous bimodal stimulation were found in primary auditory cortex (A1). Adjacent regions (planum temporale, PT) expressed enhanced BOLD responses for synchronous and asynchronous left-sided bimodal conditions. Additional connectivity analyses seeded in right-hemispheric A1 and PT for both bimodal conditions showed enhanced connectivity with right-hemispheric thalamic, somatosensory and multisensory areas that scaled with subjects' performance. Our results indicate that functional asymmetries interact with audiotactile interplay, which can be observed for left-lateralized stimulation in the right hemisphere. There, audiotactile interplay recruits a functional network of unisensory cortices, and the strength of these functional network connections is directly related to subjects' perceptual sensitivity. Copyright © 2013 Elsevier Inc. All rights reserved.
Maffei, Chiara; Capasso, Rita; Cazzolli, Giulia; Colosimo, Cesare; Dell'Acqua, Flavio; Piludu, Francesca; Catani, Marco; Miceli, Gabriele
2017-12-01
Pure Word Deafness (PWD) is a rare disorder, characterized by selective loss of speech input processing. Its most common cause is temporal damage to the primary auditory cortex of both hemispheres, but it has also been reported following unilateral lesions. In unilateral cases, PWD has been attributed to the disconnection of Wernicke's area from both right and left primary auditory cortex. Here we report behavioral and neuroimaging evidence from a new case of left unilateral PWD with both cortical and white matter damage due to a relatively small stroke lesion in the left temporal gyrus. Selective impairment in auditory language processing was accompanied by intact processing of nonspeech sounds and normal speech, reading and writing. Performance on dichotic listening was characterized by a reversal of the right-ear advantage typically observed in healthy subjects. Cortical thickness and gyral volume were severely reduced in the left superior temporal gyrus (STG), although abnormalities were not uniformly distributed and residual intact cortical areas were detected, for example in the medial portion of Heschl's gyrus. Diffusion tractography documented partial damage to the acoustic radiations (AR), callosal temporal connections and intralobar tracts dedicated to single-word comprehension. Behavioral and neuroimaging results in this case are difficult to integrate in a purely cortical or disconnection framework, as damage to primary auditory cortex in the left STG was only partial and Wernicke's area was not completely isolated from left- or right-hemisphere input. On the basis of our findings we suggest that in this case of PWD, concurrent partial topological (cortical) and disconnection mechanisms contributed to a selective impairment of speech sounds. The discrepancy between speech and non-speech sounds suggests selective damage to a language-specific, left-lateralized network involved in phoneme processing. Copyright © 2017 Elsevier Ltd. All rights reserved.
A bilateral cortical network responds to pitch perturbations in speech feedback
Kort, Naomi S.; Nagarajan, Srikantan S.; Houde, John F.
2014-01-01
Auditory feedback is used to monitor and correct for errors in speech production, and one of the clearest demonstrations of this is the pitch perturbation reflex. During ongoing phonation, speakers respond rapidly to shifts of the pitch of their auditory feedback, altering their pitch production to oppose the direction of the applied pitch shift. In this study, we examine the timing of activity within a network of brain regions thought to be involved in mediating this behavior. To isolate auditory feedback processing relevant for motor control of speech, we used magnetoencephalography (MEG) to compare neural responses to speech onset and to transient (400 ms) pitch feedback perturbations during speaking with responses to identical acoustic stimuli during passive listening. We found overlapping, but distinct bilateral cortical networks involved in monitoring speech onset and feedback alterations in ongoing speech. Responses to speech onset during speaking were suppressed in bilateral auditory and left ventral supramarginal gyrus/posterior superior temporal sulcus (vSMG/pSTS). In contrast, during pitch perturbations, activity was enhanced in bilateral vSMG/pSTS, bilateral premotor cortex, right primary auditory cortex, and left higher order auditory cortex. We also found speaking-induced delays in responses to both unaltered and altered speech in bilateral primary and secondary auditory regions, the left vSMG/pSTS and right premotor cortex. The network dynamics reveal the cortical processing involved in both detecting the speech error and updating the motor plan to create the new pitch output. These results implicate vSMG/pSTS as critical in both monitoring auditory feedback and initiating rapid compensation to feedback errors. PMID:24076223
Nilakantan, Aneesha S; Voss, Joel L; Weintraub, Sandra; Mesulam, M-Marsel; Rogalski, Emily J
2017-06-01
Primary progressive aphasia (PPA) is clinically defined by an initial loss of language function and preservation of other cognitive abilities, including episodic memory. While PPA primarily affects the left-lateralized perisylvian language network, some clinical neuropsychological tests suggest concurrent initial memory loss. The goal of this study was to test recognition memory of objects and words in the visual and auditory modality to separate language-processing impairments from retentive memory in PPA. Individuals with non-semantic PPA had longer reaction times and higher false alarms for auditory word stimuli compared to visual object stimuli. Moreover, false alarms for auditory word recognition memory were related to cortical thickness within the left inferior frontal gyrus and left temporal pole, while false alarms for visual object recognition memory were related to cortical thickness within the right temporal pole. This pattern of results suggests that specific vulnerability in processing verbal stimuli can hinder episodic memory in PPA, and provides evidence for differential contributions of the left and right temporal poles in word and object recognition memory. Copyright © 2017 Elsevier Ltd. All rights reserved.
Identification of a pathway for intelligible speech in the left temporal lobe
Scott, Sophie K.; Blank, C. Catrin; Rosen, Stuart; Wise, Richard J. S.
2017-01-01
It has been proposed that the identification of sounds, including species-specific vocalizations, by primates depends on anterior projections from the primary auditory cortex, an auditory pathway analogous to the ventral route proposed for the visual identification of objects. We have identified a similar route in the human for understanding intelligible speech. To identify separable neural subsystems within the human auditory cortex, we used PET imaging with a variety of speech and speech-like stimuli of equivalent acoustic complexity but varying intelligibility. We demonstrated that the left superior temporal sulcus responds to the presence of phonetic information, but its anterior part responds only if the stimulus is also intelligible. This novel observation demonstrates a left anterior temporal pathway for speech comprehension. PMID:11099443
Saint-Amour, Dave; De Sanctis, Pierfilippo; Molholm, Sophie; Ritter, Walter; Foxe, John J
2007-02-01
Seeing a speaker's facial articulatory gestures powerfully affects speech perception, helping us overcome noisy acoustical environments. One particularly dramatic illustration of visual influences on speech perception is the "McGurk illusion", where dubbing an auditory phoneme onto video of an incongruent articulatory movement can often lead to illusory auditory percepts. This illusion is so strong that even in the absence of any real change in auditory stimulation, it activates the automatic auditory change-detection system, as indexed by the mismatch negativity (MMN) component of the auditory event-related potential (ERP). We investigated the putative left hemispheric dominance of McGurk-MMN using high-density ERPs in an oddball paradigm. Topographic mapping of the initial McGurk-MMN response showed a highly lateralized left hemisphere distribution, beginning at 175 ms. Subsequently, scalp activity was also observed over bilateral fronto-central scalp with a maximal amplitude at approximately 290 ms, suggesting later recruitment of right temporal cortices. Strong left hemisphere dominance was again observed during the last phase of the McGurk-MMN waveform (350-400 ms). Source analysis indicated bilateral sources in the temporal lobe just posterior to primary auditory cortex. While a single source in the right superior temporal gyrus (STG) accounted for the right hemisphere activity, two separate sources were required, one in the left transverse gyrus and the other in STG, to account for left hemisphere activity. These findings support the notion that visually driven multisensory illusory phonetic percepts produce an auditory-MMN cortical response and that left hemisphere temporal cortex plays a crucial role in this process.
Okuda, Yuji; Shikata, Hiroshi; Song, Wen-Jie
2011-09-01
As a step toward developing an auditory prosthesis based on cortical stimulation, we tested whether a single train of pulses applied to the primary auditory cortex (AI) could elicit classically conditioned behavior in guinea pigs. Animals were trained using a tone as the conditioned stimulus and an electrical shock to the right eyelid as the unconditioned stimulus. After conditioning, a train of 11 pulses applied to the left AI induced the conditioned eye-blink response. Cortical stimulation induced no response after extinction. Our results support the feasibility of an auditory prosthesis based on electrical stimulation of the cortex. Copyright © 2011 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
Dichotic listening in patients with splenial and nonsplenial callosal lesions.
Pollmann, Stefan; Maertens, Marianne; von Cramon, D Yves; Lepsien, Joeran; Hugdahl, Kenneth
2002-01-01
The authors found splenial lesions to be associated with left ear suppression in dichotic listening of consonant-vowel syllables. This was found in both a rapid presentation dichotic monitoring task and a standard dichotic listening task, ruling out attentional limitations in the processing of high stimulus loads as a confounding factor. Moreover, directed attention to the left ear did not improve left ear target detection in the patients, independent of callosal lesion location. The authors' data may indicate that auditory callosal fibers pass through the splenium more posteriorly than previously thought. However, further studies should investigate whether callosal fibers between primary and secondary auditory cortices, or between higher level multimodal cortices, are vital for the detection of left ear targets in dichotic listening.
Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D
2015-09-01
We used functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context and visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than in place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We examined cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.
Heine, Lizette; Castro, Maïté; Martial, Charlotte; Tillmann, Barbara; Laureys, Steven; Perrin, Fabien
2015-01-01
Preferred music is a highly emotional and salient stimulus, which has previously been shown to increase the probability of auditory cognitive event-related responses in patients with disorders of consciousness (DOC). To further investigate whether and how music modifies the functional connectivity of the brain in DOC, five patients were assessed with both a classical functional connectivity scan (control condition) and a scan while they were exposed to their preferred music (music condition). Seed-based functional connectivity (left or right primary auditory cortex) and mean network connectivity of three networks linked to conscious sound perception were assessed. The auditory network showed stronger functional connectivity with the left precentral gyrus and the left dorsolateral prefrontal cortex during music as compared to the control condition. Furthermore, functional connectivity of the external network was enhanced during the music condition in the temporo-parietal junction. Although caution should be taken due to the small sample size, these results suggest that preferred music exposure might have effects on patients' auditory network (implicated in rhythm and music perception) and on cerebral regions linked to autobiographical memory. PMID:26617542
The role of the primary auditory cortex in the neural mechanism of auditory verbal hallucinations
Kompus, Kristiina; Falkenberg, Liv E.; Bless, Josef J.; Johnsen, Erik; Kroken, Rune A.; Kråkvik, Bodil; Larøi, Frank; Løberg, Else-Marie; Vedul-Kjelsås, Einar; Westerhausen, René; Hugdahl, Kenneth
2013-01-01
Auditory verbal hallucinations (AVHs) are a subjective experience of "hearing voices" in the absence of corresponding physical stimulation in the environment. The most remarkable feature of AVHs is their perceptual quality, that is, the experience is subjectively often as vivid as hearing an actual voice, as opposed to mental imagery or auditory memories. This has led to propositions that dysregulation of the primary auditory cortex (PAC) is a crucial component of the neural mechanism of AVHs. One possible mechanism by which the PAC could give rise to the experience of hallucinations is aberrant patterns of neuronal activity whereby the PAC is overly sensitive to activation arising from internal processing, while being less responsive to external stimulation. In this paper, we review recent research relevant to the role of the PAC in the generation of AVHs. We present new data from a functional magnetic resonance imaging (fMRI) study, examining the responsivity of the left and right PAC to parametric modulation of the intensity of auditory verbal stimulation, and corresponding attentional top-down control, in non-clinical participants with AVHs and non-clinical participants with no AVHs. Non-clinical hallucinators showed reduced activation to speech sounds but intact attentional modulation in the right PAC. Additionally, we present data from a group of schizophrenia patients with AVHs, who do not show attentional modulation of left or right PAC. Context-appropriate modulation of the PAC may be a protective factor in non-clinical hallucinations. PMID:23630479
Wegrzyn, Martin; Herbert, Cornelia; Ethofer, Thomas; Flaisch, Tobias; Kissler, Johanna
2017-11-01
Visually presented emotional words are processed preferentially and effects of emotional content are similar to those of explicit attention deployment in that both amplify visual processing. However, auditory processing of emotional words is less well characterized and interactions between emotional content and task-induced attention have not been fully understood. Here, we investigate auditory processing of emotional words, focussing on how auditory attention to positive and negative words impacts their cerebral processing. A functional magnetic resonance imaging (fMRI) study manipulating word valence and attention allocation was performed. Participants heard negative, positive and neutral words to which they either listened passively or attended by counting negative or positive words, respectively. Regardless of valence, active processing compared to passive listening increased activity in primary auditory cortex, left intraparietal sulcus, and right superior frontal gyrus (SFG). The attended valence elicited stronger activity in left inferior frontal gyrus (IFG) and left SFG, in line with these regions' role in semantic retrieval and evaluative processing. No evidence for valence-specific attentional modulation in auditory regions or distinct valence-specific regional activations (i.e., negative > positive or positive > negative) was obtained. Thus, allocation of auditory attention to positive and negative words can substantially increase their processing in higher-order language and evaluative brain areas without modulating early stages of auditory processing. Inferior and superior frontal brain structures mediate interactions between emotional content, attention, and working memory when prosodically neutral speech is processed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Out-of-synchrony speech entrainment in developmental dyslexia.
Molinaro, Nicola; Lizarazu, Mikel; Lallier, Marie; Bourguignon, Mathieu; Carreiras, Manuel
2016-08-01
Developmental dyslexia is a reading disorder often characterized by reduced awareness of speech units. Whether the neural source of this phonological disorder in dyslexic readers results from the malfunctioning of the primary auditory system or damaged feedback communication between higher-order phonological regions (i.e., left inferior frontal regions) and the auditory cortex is still under dispute. Here we recorded magnetoencephalographic (MEG) signals from 20 dyslexic readers and 20 age-matched controls while they were listening to ∼10-s-long spoken sentences. Compared to controls, dyslexic readers had (1) an impaired neural entrainment to speech in the delta band (0.5-1 Hz); (2) a reduced delta synchronization in both the right auditory cortex and the left inferior frontal gyrus; and (3) an impaired feedforward functional coupling between neural oscillations in the right auditory cortex and the left inferior frontal regions. This shows that during speech listening, individuals with developmental dyslexia present reduced neural synchrony to low-frequency speech oscillations in primary auditory regions that hinders higher-order speech processing steps. The present findings, thus, strengthen proposals assuming that improper low-frequency acoustic entrainment affects speech sampling. This low speech-brain synchronization has the strong potential to cause severe consequences for both phonological and reading skills. Interestingly, the reduced speech-brain synchronization in dyslexic readers compared to normal readers (and its higher-order consequences across the speech processing network) appears preserved through the development from childhood to adulthood. Thus, the evaluation of speech-brain synchronization could possibly serve as a diagnostic tool for early detection of children at risk of dyslexia. Hum Brain Mapp 37:2767-2783, 2016. © 2016 Wiley Periodicals, Inc.
Tuning in to the Voices: A Multisite fMRI Study of Auditory Hallucinations
Ford, Judith M.; Roach, Brian J.; Jorgensen, Kasper W.; Turner, Jessica A.; Brown, Gregory G.; Notestine, Randy; Bischoff-Grethe, Amanda; Greve, Douglas; Wible, Cynthia; Lauriello, John; Belger, Aysenil; Mueller, Bryon A.; Calhoun, Vincent; Preda, Adrian; Keator, David; O'Leary, Daniel S.; Lim, Kelvin O.; Glover, Gary; Potkin, Steven G.; Mathalon, Daniel H.
2009-01-01
Introduction: Auditory hallucinations or voices are experienced by 75% of people diagnosed with schizophrenia. We presumed that auditory cortex of schizophrenia patients who experience hallucinations is tonically “tuned” to internal auditory channels, at the cost of processing external sounds, both speech and nonspeech. Accordingly, we predicted that patients who hallucinate would show less auditory cortical activation to external acoustic stimuli than patients who did not. Methods: At 9 Functional Imaging Biomedical Informatics Research Network (FBIRN) sites, whole-brain images from 106 patients and 111 healthy comparison subjects were collected while subjects performed an auditory target detection task. Data were processed with the FBIRN processing stream. A region of interest analysis extracted activation values from primary (BA41) and secondary auditory cortex (BA42), auditory association cortex (BA22), and middle temporal gyrus (BA21). Patients were sorted into hallucinators (n = 66) and nonhallucinators (n = 40) based on symptom ratings done during the previous week. Results: Hallucinators had less activation to probe tones in left primary auditory cortex (BA41) than nonhallucinators. This effect was not seen on the right. Discussion: Although “voices” are the anticipated sensory experience, it appears that even primary auditory cortex is “turned on” and “tuned in” to process internal acoustic information at the cost of processing external sounds. Although this study was not designed to probe cortical competition for auditory resources, we were able to take advantage of the data and find significant effects, perhaps because of the power afforded by such a large sample. PMID:18987102
Long-range synchrony of gamma oscillations and auditory hallucination symptoms in schizophrenia
Mulert, C.; Kirsch; Pascual-Marqui, Roberto; McCarley, Robert W.; Spencer, Kevin M.
2010-01-01
Phase locking in the gamma-band range has been shown to be diminished in patients with schizophrenia. Moreover, there have been reports of positive correlations between phase locking in the gamma-band range and positive symptoms, especially hallucinations. The aim of the present study was to use a new methodological approach in order to investigate gamma-band phase synchronization between the left and right auditory cortex in patients with schizophrenia and its relationship to auditory hallucinations. Subjects were 18 patients with chronic schizophrenia (SZ) and 16 healthy control (HC) subjects. Auditory hallucination symptom scores were obtained using the Scale for the Assessment of Positive Symptoms. Stimuli were 40-Hz binaural click trains. The generators of the 40-Hz ASSR were localized using eLORETA and, based on the computed intracranial signals, lagged interhemispheric phase locking between primary and secondary auditory cortices was analyzed. Current source density of the 40-Hz ASSR response was significantly diminished in SZ in comparison to HC in the right superior and middle temporal gyrus (p<0.05). Interhemispheric phase locking was reduced in SZ in comparison to HC for the primary auditory cortices (p<0.05) but not in the secondary auditory cortices. A significant positive correlation was found between auditory hallucination symptom scores and phase synchronization between the primary auditory cortices (p<0.05, corrected for multiple testing) but not for the secondary auditory cortices. These results suggest that long-range synchrony of gamma oscillations is disturbed in schizophrenia and that this deficit is related to clinical symptoms such as auditory hallucinations. PMID:20713096
Laterality of basic auditory perception.
Sininger, Yvonne S; Bhatara, Anjali
2012-01-01
Laterality (left-right ear differences) of auditory processing was assessed using basic auditory skills: (1) gap detection, (2) frequency discrimination, and (3) intensity discrimination. Stimuli included tones (500, 1000, and 4000 Hz) and wide-band noise presented monaurally to each ear of typical adult listeners. The hypothesis tested was that processing of tonal stimuli would be enhanced by left ear (LE) stimulation and noise by right ear (RE) presentations. To investigate the limits of laterality by (1) spectral width, a narrow-band noise (NBN) of 450-Hz bandwidth was evaluated using intensity discrimination, and (2) stimulus duration, 200, 500, and 1000 ms duration tones were evaluated using frequency discrimination. A left ear advantage (LEA) was demonstrated with tonal stimuli in all experiments, but an expected REA for noise stimuli was not found. The NBN stimulus demonstrated no LEA and was characterised as a noise. No change in laterality was found with changes in stimulus durations. The LEA for tonal stimuli is felt to be due to more direct connections between the left ear and the right auditory cortex, which has been shown to be primary for spectral analysis and tonal processing. The lack of a REA for noise stimuli is unexplained. Sex differences in laterality for noise stimuli were noted but were not statistically significant. This study did establish a subtle but clear pattern of LEA for processing of tonal stimuli.
Mapping a lateralization gradient within the ventral stream for auditory speech perception.
Specht, Karsten
2013-01-01
Recent models on speech perception propose a dual-stream processing network, with a dorsal stream, extending from the posterior temporal lobe of the left hemisphere through inferior parietal areas into the left inferior frontal gyrus, and a ventral stream that is assumed to originate in the primary auditory cortex in the upper posterior part of the temporal lobe and to extend toward the anterior part of the temporal lobe, where it may connect to the ventral part of the inferior frontal gyrus. This article describes and reviews the results from a series of complementary functional magnetic resonance imaging studies that aimed to trace the hierarchical processing network for speech comprehension within the left and right hemisphere with a particular focus on the temporal lobe and the ventral stream. As hypothesized, the results demonstrate a bilateral involvement of the temporal lobes in the processing of speech signals. However, an increasing leftward asymmetry was detected from auditory-phonetic to lexico-semantic processing and along the posterior-anterior axis, thus forming a "lateralization" gradient. This increasing leftward lateralization was particularly evident for the left superior temporal sulcus and more anterior parts of the temporal lobe.
Temporal lobe networks supporting the comprehension of spoken words.
Bonilha, Leonardo; Hillis, Argye E; Hickok, Gregory; den Ouden, Dirk B; Rorden, Chris; Fridriksson, Julius
2017-09-01
Auditory word comprehension is a cognitive process that involves the transformation of auditory signals into abstract concepts. Traditional lesion-based studies of stroke survivors with aphasia have suggested that neocortical regions adjacent to auditory cortex are primarily responsible for word comprehension. However, recent primary progressive aphasia and normal neurophysiological studies have challenged this concept, suggesting that the left temporal pole is crucial for word comprehension. Due to its vasculature, the temporal pole is not commonly completely lesioned in stroke survivors and this heterogeneity may have prevented its identification in lesion-based studies of auditory comprehension. We aimed to resolve this controversy using a combined voxel-based and structural-connectome lesion-symptom mapping approach, since cortical dysfunction after stroke can arise from cortical damage or from white matter disconnection. Magnetic resonance imaging (T1-weighted and diffusion tensor imaging-based structural connectome), auditory word comprehension and object recognition tests were obtained from 67 chronic left hemisphere stroke survivors. We observed that damage to the inferior temporal gyrus, to the fusiform gyrus and to a white matter network including the left posterior temporal region and its connections to the middle temporal gyrus, inferior temporal gyrus, and cingulate cortex, was associated with word comprehension difficulties after factoring out object recognition. These results suggest that the posterior lateral and inferior temporal regions are crucial for word comprehension, serving as a hub to integrate auditory and conceptual processing. Early processing linking auditory words to concepts is situated in posterior lateral temporal regions, whereas additional and deeper levels of semantic processing likely require more anterior temporal regions. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain.
Auditory Resting-State Network Connectivity in Tinnitus: A Functional MRI Study
Maudoux, Audrey; Lefebvre, Philippe; Cabay, Jean-Evrard; Demertzi, Athena; Vanhaudenhuyse, Audrey; Laureys, Steven; Soddu, Andrea
2012-01-01
The underlying functional neuroanatomy of tinnitus remains poorly understood. Few studies have focused on functional cerebral connectivity changes in tinnitus patients. The aim of this study was to test if functional MRI “resting-state” connectivity patterns in auditory network differ between tinnitus patients and normal controls. Thirteen chronic tinnitus subjects and fifteen age-matched healthy controls were studied on a 3 tesla MRI. Connectivity was investigated using independent component analysis and an automated component selection approach taking into account the spatial and temporal properties of each component. Connectivity in extra-auditory regions such as brainstem, basal ganglia/NAc, cerebellum, parahippocampal, right prefrontal, parietal, and sensorimotor areas was found to be increased in tinnitus subjects. The right primary auditory cortex, left prefrontal, left fusiform gyrus, and bilateral occipital regions showed a decreased connectivity in tinnitus. These results show that there is a modification of cortical and subcortical functional connectivity in tinnitus encompassing attentional, mnemonic, and emotional networks. Our data corroborate the hypothesized implication of non-auditory regions in tinnitus physiopathology and suggest that various regions of the brain seem involved in the persistent awareness of the phenomenon as well as in the development of the associated distress leading to disabling chronic tinnitus. PMID:22574141
Da Costa, Sandra; Bourquin, Nathalie M.-P.; Knebel, Jean-François; Saenz, Melissa; van der Zwaag, Wietske; Clarke, Stephanie
2015-01-01
Environmental sounds are highly complex stimuli whose recognition depends on the interaction of top-down and bottom-up processes in the brain. Their semantic representations were shown to yield repetition suppression effects, i.e., a decrease in activity during exposure to a sound that is perceived as belonging to the same source as a preceding sound. Making use of the high spatial resolution of 7T fMRI we have investigated the representations of sound objects within early-stage auditory areas on the supratemporal plane. The primary auditory cortex was identified by means of tonotopic mapping and the non-primary areas by comparison with previous histological studies. Repeated presentations of different exemplars of the same sound source, as compared to the presentation of different sound sources, yielded significant repetition suppression effects within a subset of early-stage areas. This effect was found within the right hemisphere in primary areas A1 and R as well as two non-primary areas on the antero-medial part of the planum temporale, and within the left hemisphere in A1 and a non-primary area on the medial part of Heschl’s gyrus. Thus, several, but not all early-stage auditory areas encode the meaning of environmental sounds. PMID:25938430
Penhune, V B; Zatorre, R J; Feindel, W H
1999-03-01
This experiment examined the participation of the auditory cortex of the temporal lobe in the perception and retention of rhythmic patterns. Four patient groups were tested on a paradigm contrasting reproduction of auditory and visual rhythms: those with right or left anterior temporal lobe removals which included Heschl's gyrus (HG), the region of primary auditory cortex (RT-A and LT-A); and patients with right or left anterior temporal lobe removals which did not include HG (RT-a and LT-a). Estimation of lesion extent in HG using an MRI-based probabilistic map indicated that, in the majority of subjects, the lesion was confined to the anterior secondary auditory cortex located on the anterior-lateral extent of HG. On the rhythm reproduction task, RT-A patients were impaired in retention of auditory but not visual rhythms, particularly when accurate reproduction of stimulus durations was required. In contrast, LT-A patients as well as both RT-a and LT-a patients were relatively unimpaired on this task. None of the patient groups was impaired in the ability to make an adequate motor response. Further, they were unimpaired when using a dichotomous response mode, indicating that they were able to adequately differentiate the stimulus durations and, when given an alternative method of encoding, to retain them. Taken together, these results point to a specific role for the right anterior secondary auditory cortex in the retention of a precise analogue representation of auditory tonal patterns.
Auditory training changes temporal lobe connectivity in 'Wernicke's aphasia': a randomised trial.
Woodhead, Zoe Vj; Crinion, Jennifer; Teki, Sundeep; Penny, Will; Price, Cathy J; Leff, Alexander P
2017-07-01
Aphasia is one of the most disabling sequelae after stroke, occurring in 25%-40% of stroke survivors. However, there remains a lack of good evidence for the efficacy or mechanisms of speech comprehension rehabilitation. This within-subjects trial tested two concurrent interventions in 20 patients with chronic aphasia with speech comprehension impairment following left hemisphere stroke: (1) phonological training using 'Earobics' software and (2) a pharmacological intervention using donepezil, an acetylcholinesterase inhibitor. Donepezil was tested in a double-blind, placebo-controlled, cross-over design using block randomisation with bias minimisation. The primary outcome measure was speech comprehension score on the comprehensive aphasia test. Magnetoencephalography (MEG) with an established index of auditory perception, the mismatch negativity response, tested whether the therapies altered effective connectivity at the lower (primary) or higher (secondary) level of the auditory network. Phonological training improved speech comprehension abilities and was particularly effective for patients with severe deficits. No major adverse effects of donepezil were observed, but it had an unpredicted negative effect on speech comprehension. The MEG analysis demonstrated that phonological training increased synaptic gain in the left superior temporal gyrus (STG). Patients with more severe speech comprehension impairments also showed strengthening of bidirectional connections between the left and right STG. Phonological training resulted in a small but significant improvement in speech comprehension, whereas donepezil had a negative effect. 
The connectivity results indicated that training reshaped higher order phonological representations in the left STG and (in more severe patients) induced stronger interhemispheric transfer of information between higher levels of auditory cortex. Clinical trial registration: This trial was registered with EudraCT (2005-004215-30, https://eudract.ema.europa.eu/) and ISRCTN (68939136, http://www.isrctn.com/). © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Spatial localization deficits and auditory cortical dysfunction in schizophrenia
Perrin, Megan A.; Butler, Pamela D.; DiCostanzo, Joanna; Forchelli, Gina; Silipo, Gail; Javitt, Daniel C.
2014-01-01
Background Schizophrenia is associated with deficits in the ability to discriminate auditory features such as pitch and duration that localize to primary cortical regions. Lesions of primary vs. secondary auditory cortex also produce differentiable effects on ability to localize and discriminate free-field sound, with primary cortical lesions affecting variability as well as accuracy of response. Variability of sound localization has not previously been studied in schizophrenia. Methods The study compared performance between patients with schizophrenia (n=21) and healthy controls (n=20) on sound localization and spatial discrimination tasks using low frequency tones generated from seven speakers concavely arranged with 30 degrees separation. Results For the sound localization task, patients showed reduced accuracy (p=0.004) and greater overall response variability (p=0.032), particularly in the right hemifield. Performance was also impaired on the spatial discrimination task (p=0.018). On both tasks, poorer accuracy in the right hemifield was associated with greater cognitive symptom severity. Better accuracy in the left hemifield was associated with greater hallucination severity on the sound localization task (p=0.026), but no significant association was found for the spatial discrimination task. Conclusion Patients show impairments in both sound localization and spatial discrimination of sounds presented free-field, with a pattern comparable to that of individuals with right superior temporal lobe lesions that include primary auditory cortex (Heschl’s gyrus). Right primary auditory cortex dysfunction may protect against hallucinations by influencing laterality of functioning. PMID:20619608
Psychophysical and Neural Correlates of Auditory Attraction and Aversion
NASA Astrophysics Data System (ADS)
Patten, Kristopher Jakob
This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based on auditory scene analysis, the process through which listeners parse auditory signals into individual acoustic sources. The first experiment tests and confirms that a self-rated pleasantness continuum reliably exists for 20 various stimuli (r = .48). In addition, the pleasantness continuum correlated with the physical acoustic characteristics of consonance/dissonance (r = .78), which can facilitate auditory parsing processes. The second experiment uses an fMRI block design to test blood oxygen level dependent (BOLD) changes elicited by a subset of 5 exemplar stimuli chosen from Experiment 1 that are evenly distributed over the pleasantness continuum. Specifically, it tests and confirms that the pleasantness continuum produces systematic changes in brain activity for unpleasant acoustic stimuli beyond what occurs with pleasant auditory stimuli. Results revealed that the combination of two positively and two negatively valenced experimental sounds compared to one neutral baseline control elicited BOLD increases in the primary auditory cortex, specifically the bilateral superior temporal gyrus, and left dorsomedial prefrontal cortex; the latter being consistent with a frontal decision-making process common in identification tasks. The negatively-valenced stimuli yielded additional BOLD increases in the left insula, which typically indicates processing of visceral emotions. The positively-valenced stimuli did not yield any significant BOLD activation, consistent with consonant, harmonic stimuli being the prototypical acoustic pattern of auditory objects that is optimal for auditory scene analysis. 
Both the psychophysical findings of Experiment 1 and the neural processing findings of Experiment 2 support that consonance is an important dimension of sound that is processed in a manner that aids auditory parsing and functional representation of acoustic objects and was found to be a principal feature of pleasing auditory stimuli.
Clinical significance and developmental changes of auditory-language-related gamma activity
Kojima, Katsuaki; Brown, Erik C.; Rothermel, Robert; Carlson, Alanna; Fuerst, Darren; Matsuzaki, Naoyuki; Shah, Aashit; Atkinson, Marie; Basha, Maysaa; Mittal, Sandeep; Sood, Sandeep; Asano, Eishi
2012-01-01
OBJECTIVE We determined the clinical impact and developmental changes of auditory-language-related augmentation of gamma activity at 50–120 Hz recorded on electrocorticography (ECoG). METHODS We analyzed data from 77 epileptic patients ranging from 4 to 56 years in age. We determined the effects of seizure-onset zone, electrode location, and patient-age upon gamma-augmentation elicited by an auditory-naming task. RESULTS Gamma-augmentation was less frequently elicited within seizure-onset sites compared to other sites. Regardless of age, gamma-augmentation most often involved the 80–100 Hz frequency band. Gamma-augmentation initially involved bilateral superior-temporal regions, followed by left-side dominant involvement in the middle-temporal, medial-temporal, inferior-frontal, dorsolateral-premotor, and medial-frontal regions and concluded with bilateral inferior-Rolandic involvement. Compared to younger patients, those older than 10 years had a larger proportion of left dorsolateral-premotor and right inferior-frontal sites showing gamma-augmentation. The incidence of a post-operative language deficit requiring speech therapy was predicted by the number of resected sites with gamma-augmentation in the superior-temporal, inferior-frontal, dorsolateral-premotor, and inferior-Rolandic regions of the left hemisphere assumed to contain essential language function (r2=0.59; p=0.001; odds ratio=6.04 [95% confidence-interval: 2.26 to 16.15]). CONCLUSIONS Auditory-language-related gamma-augmentation can provide additional information useful to localize the primary language areas. SIGNIFICANCE These results derived from a large sample of patients support the utility of auditory-language-related gamma-augmentation in presurgical evaluation. PMID:23141882
Gonzálvez, Gloria G; Trimmel, Karin; Haag, Anja; van Graan, Louis A; Koepp, Matthias J; Thompson, Pamela J; Duncan, John S
2016-12-01
Verbal fluency functional MRI (fMRI) is used for predicting language deficits after anterior temporal lobe resection (ATLR) for temporal lobe epilepsy (TLE), but primarily engages frontal lobe areas. In this observational study we investigated fMRI paradigms using visual and auditory stimuli, which predominately involve language areas resected during ATLR. Twenty-three controls and 33 patients (20 left (LTLE), 13 right (RTLE)) were assessed using three fMRI paradigms: verbal fluency, auditory naming with a contrast of auditory reversed speech; picture naming with a contrast of scrambled pictures and blurred faces. Group analysis showed bilateral temporal activations for auditory naming and picture naming. Correcting for auditory and visual input (by subtracting activations resulting from auditory reversed speech and blurred pictures/scrambled faces respectively) resulted in left-lateralised activations for patients and controls, which was more pronounced for LTLE compared to RTLE patients. Individual subject activations at a threshold of T>2.5, extent >10 voxels, showed that verbal fluency activated predominantly the left inferior frontal gyrus (IFG) in 90% of LTLE, 92% of RTLE, and 65% of controls, compared to right IFG activations in only 15% of LTLE and RTLE and 26% of controls. Middle temporal (MTG) or superior temporal gyrus (STG) activations were seen on the left in 30% of LTLE, 23% of RTLE, and 52% of controls, and on the right in 15% of LTLE, 15% of RTLE, and 35% of controls. Auditory naming activated temporal areas more frequently than did verbal fluency (LTLE: 93%/73%; RTLE: 92%/58%; controls: 82%/70% (left/right)). Controlling for auditory input resulted in predominantly left-sided temporal activations. Picture naming resulted in temporal lobe activations less frequently than did auditory naming (LTLE 65%/55%; RTLE 53%/46%; controls 52%/35% (left/right)). Controlling for visual input had left-lateralising effects. 
Auditory and picture naming activated temporal lobe structures, which are resected during ATLR, more frequently than did verbal fluency. Controlling for auditory and visual input resulted in more left-lateralised activations. We hypothesise that these paradigms may be more predictive of postoperative language decline than verbal fluency fMRI. Copyright © 2016 Elsevier B.V. All rights reserved.
Abnormal auditory synchronization in stuttering: A magnetoencephalographic study.
Kikuchi, Yoshikazu; Okamoto, Tsuyoshi; Ogata, Katsuya; Hagiwara, Koichi; Umezaki, Toshiro; Kenjo, Masamutsu; Nakagawa, Takashi; Tobimatsu, Shozo
2017-02-01
In a previous magnetoencephalographic study, we showed both functional and structural reorganization of the right auditory cortex and impaired left auditory cortex function in people who stutter (PWS). In the present work, we reevaluated the same dataset to further investigate how the right and left auditory cortices interact to compensate for stuttering. We evaluated bilateral N100m latencies as well as indices of local and inter-hemispheric phase synchronization of the auditory cortices. The left N100m latency was significantly prolonged relative to the right N100m latency in PWS, while healthy control participants did not show any inter-hemispheric differences in latency. A phase-locking factor (PLF) analysis, which indicates the degree of local phase synchronization, demonstrated enhanced alpha-band synchrony in the right auditory area of PWS. A phase-locking value (PLV) analysis of inter-hemispheric synchronization demonstrated significant elevations in the beta band between the right and left auditory cortices in PWS. In addition, right PLF and PLVs were positively correlated with stuttering frequency in PWS. Taken together, our data suggest that increased right hemispheric local phase synchronization and increased inter-hemispheric phase synchronization are electrophysiological correlates of a compensatory mechanism for impaired left auditory processing in PWS. Published by Elsevier B.V.
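The phase-locking value (PLV) used in this study quantifies how consistent the phase difference between two signals is across trials. A minimal NumPy sketch (the FFT-based analytic signal below follows the standard Hilbert construction; the trial counts, frequency, and lag are arbitrary illustrations, not the study's data):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (same construction as the standard Hilbert transform)."""
    n = x.shape[-1]
    spec = np.fft.fft(x, axis=-1)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h, axis=-1)

def phase_locking_value(x, y):
    """PLV across trials: |mean over trials of exp(i*(phi_x - phi_y))|, per sample.
    x, y: arrays of shape (trials, samples); 1 = perfect locking, ~0 = none."""
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return np.abs(np.exp(1j * dphi).mean(axis=0))

# Synthetic check: 30 trials of a 20 Hz oscillation with random absolute phase.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 512, endpoint=False)
phases = rng.uniform(0.0, 2.0 * np.pi, 30)
x = np.array([np.sin(2 * np.pi * 20 * t + p) for p in phases])
y_locked = np.array([np.sin(2 * np.pi * 20 * t + p + 0.7) for p in phases])  # fixed lag
y_random = np.array([np.sin(2 * np.pi * 20 * t + q)
                     for q in rng.uniform(0.0, 2.0 * np.pi, 30)])            # no locking

plv_locked = phase_locking_value(x, y_locked)[50:-50].mean()  # trim edge samples
plv_random = phase_locking_value(x, y_random)[50:-50].mean()
```

A consistent lag across trials yields a PLV near 1 even though the absolute phase varies trial to trial, which is exactly the property that distinguishes the PLV from simple amplitude correlation.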
Hardy, Chris J D; Agustus, Jennifer L; Marshall, Charles R; Clark, Camilla N; Russell, Lucy L; Bond, Rebecca L; Brotherhood, Emilie V; Thomas, David L; Crutch, Sebastian J; Rohrer, Jonathan D; Warren, Jason D
2017-07-27
Non-verbal auditory impairment is increasingly recognised in the primary progressive aphasias (PPAs) but its relationship to speech processing and brain substrates has not been defined. Here we addressed these issues in patients representing the non-fluent variant (nfvPPA) and semantic variant (svPPA) syndromes of PPA. We studied 19 patients with PPA in relation to 19 healthy older individuals. We manipulated three key auditory parameters in sequences of spoken syllables: temporal regularity, phonemic spectral structure and prosodic predictability (an index of fundamental information content, or entropy). The ability of participants to process these parameters was assessed using two-alternative, forced-choice tasks, and neuroanatomical associations of task performance were assessed using voxel-based morphometry of patients' brain magnetic resonance images. Relative to healthy controls, both the nfvPPA and svPPA groups had impaired processing of phonemic spectral structure and signal predictability, while the nfvPPA group additionally had impaired processing of temporal regularity in speech signals. Task performance correlated with standard disease severity and neurolinguistic measures. Across the patient cohort, performance on the temporal regularity task was associated with grey matter in the left supplementary motor area and right caudate, performance on the phoneme processing task was associated with grey matter in the left supramarginal gyrus, and performance on the prosodic predictability task was associated with grey matter in the right putamen. Our findings suggest that PPA syndromes may be underpinned by more generic deficits of auditory signal analysis, with a distributed cortico-subcortical neuroanatomical substrate extending beyond the canonical language network. This has implications for syndrome classification and biomarker development.
Auditory, Vestibular and Cognitive Effects due to Repeated Blast Exposure on the Warfighter
2012-10-01
Gaze Horizontal (Left and Right) Description: The primary purpose of the Gaze Horizontal subtest was to detect nystagmus when the head is fixed and the eyes are gazing off center from the primary (straight ahead) gaze position. Related subtests include Spontaneous Nystagmus and Smooth Harmonic Acceleration (.01, .08, .32, .64, 1.75…)
Holcomb, H H; Medoff, D R; Caudill, P J; Zhao, Z; Lahti, A C; Dannals, R F; Tamminga, C A
1998-09-01
Tone recognition is partially subserved by neural activity in the right frontal and primary auditory cortices. First we determined the brain areas associated with tone perception and recognition. This study then examined how regional cerebral blood flow (rCBF) in these and other brain regions correlates with the behavioral characteristics of a difficult tone recognition task. rCBF changes were assessed using H2(15)O positron emission tomography. Subtraction procedures were used to localize significant change regions, and correlational analyses were applied to determine how response times (RT) predicted rCBF patterns. Twelve trained normal volunteers were studied in three conditions: REST, sensory motor control (SMC) and decision (DEC). The SMC-REST contrast revealed bilateral activation of primary auditory cortices, cerebellum and bilateral inferior frontal gyri. DEC-SMC produced significant clusters in the right middle and inferior frontal gyri, insula and claustrum; the anterior cingulate gyrus and supplementary motor area; the left insula/claustrum; and the left cerebellum. Correlational analyses of RT versus rCBF from DEC scans showed a positive correlation in right inferior and middle frontal cortex; rCBF in bilateral auditory cortices and cerebellum exhibited significant negative correlations with RT. These changes suggest that neural activity in the right frontal, superior temporal and cerebellar regions shifts back and forth in magnitude depending on whether tone recognition RT is relatively fast or slow, during a difficult, accurate assessment.
Washington, Stuart D.
2012-01-01
Species-specific vocalizations of mammals, including humans, contain slow and fast frequency modulations (FMs) as well as tone and noise bursts. In this study, we established sex-specific hemispheric differences in the tonal and FM response characteristics of neurons in the Doppler-shifted constant-frequency processing area in the mustached bat's primary auditory cortex (A1). We recorded single-unit cortical activity from the right and left A1 in awake bats in response to the presentation of tone bursts and linear FM sweeps that are contained within their echolocation and/or communication sounds. Peak response latencies to neurons' preferred or best FMs were significantly longer on the right compared with the left in both sexes, and in males this right-left difference was also present for the most excitatory tone burst. Based on peak response magnitudes, right hemispheric A1 neurons in males preferred low-rate, narrowband FMs, whereas those on the left were less selective, responding to FMs with a variety of rates and bandwidths. The distributions of parameters for best FMs in females were similar on the two sides. Together, our data provide the first strong physiological support of a sex-specific, spectrotemporal hemispheric asymmetry for the representation of tones and FMs in a nonhuman mammal. Specifically, our results demonstrate a left hemispheric bias in males for the representation of a diverse array of FMs differing in rate and bandwidth. We propose that these asymmetries underlie lateralized processing of communication sounds and are common to species as divergent as bats and humans. PMID:22649207
Bidet-Caulet, Aurélie; Fischer, Catherine; Besle, Julien; Aguera, Pierre-Emmanuel; Giard, Marie-Helene; Bertrand, Olivier
2007-08-29
In noisy environments, we use auditory selective attention to actively ignore distracting sounds and select relevant information, as during a cocktail party to follow one particular conversation. The present electrophysiological study aims at deciphering the spatiotemporal organization of the effect of selective attention on the representation of concurrent sounds in the human auditory cortex. Sound onset asynchrony was manipulated to induce the segregation of two concurrent auditory streams. Each stream consisted of amplitude modulated tones at different carrier and modulation frequencies. Electrophysiological recordings were performed in epileptic patients with pharmacologically resistant partial epilepsy, implanted with depth electrodes in the temporal cortex. Patients were presented with the stimuli while they either performed an auditory distracting task or actively selected one of the two concurrent streams. Selective attention was found to affect steady-state responses in the primary auditory cortex, and transient and sustained evoked responses in secondary auditory areas. The results provide new insights on the neural mechanisms of auditory selective attention: stream selection during sound rivalry would be facilitated not only by enhancing the neural representation of relevant sounds, but also by reducing the representation of irrelevant information in the auditory cortex. Finally, they suggest a specialization of the left hemisphere in the attentional selection of fine-grained acoustic information.
Oxytocin Enables Maternal Behavior by Balancing Cortical Inhibition
Marlin, Bianca J.; Mitre, Mariela; D’amour, James A.; Chao, Moses V.; Froemke, Robert C.
2015-01-01
Oxytocin is important for social interactions and maternal behavior. However, little is known about when, where, and how oxytocin modulates neural circuits to improve social cognition. Here we show how oxytocin enables pup retrieval behavior in female mice by enhancing auditory cortical pup call responses. Retrieval behavior required left but not right auditory cortex, was accelerated by oxytocin in left auditory cortex, and oxytocin receptors were preferentially expressed in left auditory cortex. Neural responses to pup calls were lateralized, with co-tuned and temporally-precise excitatory and inhibitory responses in left cortex of maternal but not pup-naive adults. Finally, pairing calls with oxytocin enhanced responses by balancing the magnitude and timing of inhibition with excitation. Our results describe fundamental synaptic mechanisms by which oxytocin increases the salience of acoustic social stimuli. Furthermore, oxytocin-induced plasticity provides a biological basis for lateralization of auditory cortical processing. PMID:25874674
Seeing the Song: Left Auditory Structures May Track Auditory-Visual Dynamic Alignment
Mossbridge, Julia A.; Grabowecky, Marcia; Suzuki, Satoru
2013-01-01
Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment. PMID:24194873
Mapping a lateralization gradient within the ventral stream for auditory speech perception
Specht, Karsten
2013-01-01
Recent models on speech perception propose a dual-stream processing network, with a dorsal stream, extending from the posterior temporal lobe of the left hemisphere through inferior parietal areas into the left inferior frontal gyrus, and a ventral stream that is assumed to originate in the primary auditory cortex in the upper posterior part of the temporal lobe and to extend toward the anterior part of the temporal lobe, where it may connect to the ventral part of the inferior frontal gyrus. This article describes and reviews the results from a series of complementary functional magnetic resonance imaging studies that aimed to trace the hierarchical processing network for speech comprehension within the left and right hemisphere with a particular focus on the temporal lobe and the ventral stream. As hypothesized, the results demonstrate a bilateral involvement of the temporal lobes in the processing of speech signals. However, an increasing leftward asymmetry was detected from auditory–phonetic to lexico-semantic processing and along the posterior–anterior axis, thus forming a “lateralization” gradient. This increasing leftward lateralization was particularly evident for the left superior temporal sulcus and more anterior parts of the temporal lobe. PMID:24106470
Auditory Spatial Perception: Auditory Localization
2012-05-01
Figure 5. Auditory pathways in the central nervous system. LE – left ear, RE – right ear, AN – auditory nerve, CN – cochlear nucleus, TB – trapezoid body, SOC – superior olivary complex, LL – lateral lemniscus, IC – inferior colliculus. Adapted from Aharonson and… Fibers leaving the left and right inner ear connect directly to the synaptic inputs of the cochlear nucleus (CN) on the same (ipsilateral) side of…
Kornysheva, Katja; Schubotz, Ricarda I.
2011-01-01
Integrating auditory and motor information often requires precise timing, as in speech and music. In humans, the position of the ventral premotor cortex (PMv) in the dorsal auditory stream renders this area a node for auditory-motor integration. Yet, it remains unknown whether the PMv is critical for auditory-motor timing and which activity increases help to preserve task performance following its disruption. Sixteen healthy volunteers participated in two sessions with fMRI measured at baseline and following repetitive transcranial magnetic stimulation (rTMS) of either the left PMv or a control region. Subjects synchronized left or right finger tapping to sub-second beat rates of auditory rhythms in the experimental task, and produced self-paced tapping during spectrally matched auditory stimuli in the control task. Left PMv rTMS impaired auditory-motor synchronization accuracy in the first sub-block following stimulation (p<0.01, Bonferroni corrected), but spared motor timing and attention to task. Task-related activity increased in the homologue right PMv, but did not predict the behavioral effect of rTMS. In contrast, anterior midline cerebellum revealed the most pronounced activity increase in less impaired subjects. The present findings suggest a critical role of the left PMv in feed-forward computations enabling accurate auditory-motor timing, which can be compensated by activity modulations in the cerebellum, but not in the homologue region contralateral to stimulation. PMID:21738657
Ludersdorfer, Philipp; Wimmer, Heinz; Richlan, Fabio; Schurz, Matthias; Hutzler, Florian; Kronbichler, Martin
2016-01-01
The present fMRI study investigated the hypothesis that activation of the left ventral occipitotemporal cortex (vOT) in response to auditory words can be attributed to lexical orthographic rather than lexico-semantic processing. To this end, we presented auditory words in both an orthographic ("three or four letter word?") and a semantic ("living or nonliving?") task. In addition, an auditory control condition presented tones in a pitch evaluation task. The results showed that the left vOT exhibited higher activation for orthographic relative to semantic processing of auditory words, with a peak in the posterior part of vOT. Comparisons to the auditory control condition revealed that orthographic processing of auditory words elicited activation in a large vOT cluster. In contrast, activation for semantic processing was only weak and restricted to the middle part of vOT. We interpret our findings as speaking for orthographic processing in left vOT. In particular, we suggest that activation in left middle vOT can be attributed to accessing orthographic whole-word representations. While activation of such representations was experimentally ascertained in the orthographic task, it might have also occurred automatically in the semantic task. Activation in the more posterior vOT region, on the other hand, may reflect the generation of explicit images of word-specific letter sequences required by the orthographic but not the semantic task. In addition, based on cross-modal suppression, the finding of marked deactivations in response to the auditory tones is taken to reflect the visual nature of representations and processes in left vOT. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Ranaweera, Ruwan D; Kwon, Minseok; Hu, Shuowen; Tamer, Gregory G; Luh, Wen-Ming; Talavage, Thomas M
2016-01-01
This study investigated the hemisphere-specific effects of the temporal pattern of imaging related acoustic noise on auditory cortex activation. Hemodynamic responses (HDRs) to five temporal patterns of imaging noise corresponding to noise generated by unique combinations of imaging volume and effective repetition time (TR), were obtained using a stroboscopic event-related paradigm with extra-long (≥27.5 s) TR to minimize inter-acquisition effects. In addition to confirmation that fMRI responses in auditory cortex do not behave in a linear manner, temporal patterns of imaging noise were found to modulate both the shape and spatial extent of hemodynamic responses, with classically non-auditory areas exhibiting responses to longer duration noise conditions. Hemispheric analysis revealed the right primary auditory cortex to be more sensitive than the left to the presence of imaging related acoustic noise. Right primary auditory cortex responses were significantly larger during all the conditions. This asymmetry of response to imaging related acoustic noise could lead to different baseline activation levels during acquisition schemes using short TR, inducing an observed asymmetry in the responses to an intended acoustic stimulus through limitations of dynamic range, rather than due to differences in neuronal processing of the stimulus. These results emphasize the importance of accounting for the temporal pattern of the acoustic noise when comparing findings across different fMRI studies, especially those involving acoustic stimulation. Copyright © 2015 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Hickok, G.; Okada, K.; Barr, W.; Pa, J.; Rogalsky, C.; Donnelly, K.; Barde, L.; Grant, A.
2008-01-01
Data from lesion studies suggest that the ability to perceive speech sounds, as measured by auditory comprehension tasks, is supported by temporal lobe systems in both the left and right hemisphere. For example, patients with left temporal lobe damage and auditory comprehension deficits (i.e., Wernicke's aphasics), nonetheless comprehend isolated…
Jiang, Xiong; Chevillet, Mark A; Rauschecker, Josef P; Riesenhuber, Maximilian
2018-04-18
Grouping auditory stimuli into common categories is essential for a variety of auditory tasks, including speech recognition. We trained human participants to categorize auditory stimuli from a large novel set of morphed monkey vocalizations. Using fMRI-rapid adaptation (fMRI-RA) and multi-voxel pattern analysis (MVPA) techniques, we gained evidence that categorization training results in two distinct sets of changes: sharpened tuning to monkey call features (without explicit category representation) in left auditory cortex and category selectivity for different types of calls in lateral prefrontal cortex. In addition, the sharpness of neural selectivity in left auditory cortex, as estimated with both fMRI-RA and MVPA, predicted the steepness of the categorical boundary, whereas categorical judgment correlated with release from adaptation in the left inferior frontal gyrus. These results support the theory that auditory category learning follows a two-stage model analogous to the visual domain, suggesting general principles of perceptual category learning in the human brain. Copyright © 2018 Elsevier Inc. All rights reserved.
Changes in resting-state connectivity in musicians with embouchure dystonia.
Haslinger, Bernhard; Noé, Jonas; Altenmüller, Eckart; Riedl, Valentin; Zimmer, Claus; Mantel, Tobias; Dresel, Christian
2017-03-01
Embouchure dystonia is a highly disabling task-specific dystonia in professional brass musicians leading to spasms of perioral muscles while playing the instrument. As they are asymptomatic at rest, resting-state functional magnetic resonance imaging in these patients can reveal changes in functional connectivity within and between brain networks independent from dystonic symptoms. We therefore compared embouchure dystonia patients to healthy musicians with resting-state functional magnetic resonance imaging in combination with independent component analyses. Patients showed increased functional connectivity of the bilateral sensorimotor mouth area and right secondary somatosensory cortex, but reduced functional connectivity of the bilateral sensorimotor hand representation, left inferior parietal cortex, and mesial premotor cortex within the lateral motor function network. Within the auditory function network, the functional connectivity of bilateral secondary auditory cortices, right posterior parietal cortex and left sensorimotor hand area was increased; the functional connectivity of right primary auditory cortex, right secondary somatosensory cortex, right sensorimotor mouth representation, bilateral thalamus, and anterior cingulate cortex was reduced. Negative functional connectivity between the cerebellar and lateral motor function network and positive functional connectivity between the cerebellar and primary visual network were reduced. Abnormal resting-state functional connectivity of sensorimotor representations of affected and unaffected body parts suggests a pathophysiological predisposition for abnormal sensorimotor and audiomotor integration in embouchure dystonia. Altered connectivity to the cerebellar network highlights the important role of the cerebellum in this disease. © 2016 International Parkinson and Movement Disorder Society.
Edgar, J Christopher; Fisk, Charles L; Liu, Song; Pandey, Juhi; Herrington, John D; Schultz, Robert T; Roberts, Timothy P L
2016-01-01
Gamma (γ, ∼30-80 Hz) brain rhythms are thought to be abnormal in neurodevelopmental disorders such as schizophrenia and autism spectrum disorder (ASD). In adult populations, auditory 40-Hz click trains or 40-Hz amplitude-modulated tones are used to assess the integrity of superior temporal gyrus (STG) 40-Hz γ-band circuits. As STG 40-Hz auditory steady-state responses (ASSRs) are not fully developed in children, tasks using these stimuli may not be optimal in younger patient populations. The present study examined this issue in typically developing (TD) children as well as in children with ASD, using source localization to directly assess activity in the principal generators of the 40-Hz ASSR in the left and right primary/secondary auditory cortices. 40-Hz amplitude-modulated tones of 1 s duration were binaurally presented while magnetoencephalography data were obtained from 48 TD children (45 males; 7-14 years old) and 42 ASD children (38 males; 8-14 years old). T1-weighted structural MRI was obtained. Using single dipoles anatomically constrained to each participant's left and right Heschl's Gyrus, left and right 40-Hz ASSR total power (TP) and intertrial coherence (ITC) measures were obtained. Associations between 40-Hz ASSR TP, ITC and age as well as STG gray matter cortical thickness (CT) were assessed. Group STG function and structure differences were also examined. TD and ASD did not differ in 40-Hz ASSR TP or ITC. In TD and ASD, age was associated with left and right 40-Hz ASSR ITC (p < 0.01). The interaction term was not significant, indicating in both groups a ∼0.01/year increase in ITC. 40-Hz ASSR TP and ITC were greater in the right than left STG. Groups did not differ in STG CT, and no associations were observed between 40-Hz ASSR activity and STG CT. Finally, right STG transient γ (50-100 ms and 30-50 Hz) was greater in TD versus ASD (significant for TP, trend for ITC).
The 40-Hz ASSR develops, in part, via an age-related increase in neural synchrony. Greater right than left 40-Hz ASSRs (ITC and TP) suggested earlier maturation of right versus left STG neural network(s). Given a ∼0.01/year increase in ITC, 40-Hz ASSRs were weak or absent in many of the younger participants, suggesting that 40-Hz driving stimuli are not optimal for examining STG 40-Hz auditory neural circuits in younger populations. Given the caveat that 40-Hz auditory steady-state neural networks are poorly assessed in children, the present analyses did not point to atypical development of STG 40-Hz ASSRs in higher-functioning children with ASD. Although groups did not differ in 40-Hz auditory steady-state activity, replicating previous studies, there was evidence for greater right STG transient γ activity in TD versus ASD. © 2016 S. Karger AG, Basel.
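The intertrial coherence (ITC) measure central to the study above is typically computed as the magnitude of the mean unit phase vector across trials at the stimulation frequency. A minimal sketch follows, using a simple FFT-bin estimate; this is a generic illustration (the function name and approach are assumptions), not the authors' dipole-based source analysis.

```python
import numpy as np


def itc_at_frequency(trials, fs, freq):
    """Inter-trial coherence at one frequency, e.g. the 40 Hz ASSR bin.

    trials: array (n_trials, n_times) of single-trial responses.
    fs: sampling rate in Hz.  Returns a scalar in [0, 1]: 1 means the
    response phase at `freq` is identical on every trial.
    """
    spectrum = np.fft.rfft(trials, axis=1)
    freqs = np.fft.rfftfreq(trials.shape[1], d=1.0 / fs)
    bin_idx = np.argmin(np.abs(freqs - freq))      # nearest FFT bin
    phases = np.angle(spectrum[:, bin_idx])        # per-trial phase
    return np.abs(np.mean(np.exp(1j * phases)))    # mean unit vector
```

Because the per-trial spectra are reduced to unit vectors before averaging, ITC is insensitive to amplitude, separating it from the total power (TP) measure reported alongside it.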
Activity in the left auditory cortex is associated with individual impulsivity in time discounting.
Han, Ruokang; Takahashi, Taiki; Miyazaki, Akane; Kadoya, Tomoka; Kato, Shinya; Yokosawa, Koichi
2015-01-01
Impulsivity dictates individual decision-making behavior. Therefore, it can reflect consumption behavior and risk of addiction and thus underlies social activities as well. Neuroscience has been applied to explain social activities; however, the brain function controlling impulsivity has remained unclear. It is known that impulsivity is related to individual time perception, i.e., a person who perceives a certain physical time as being longer is impulsive. Here we show that activity of the left auditory cortex is related to individual impulsivity. Individual impulsivity was evaluated by a self-answered questionnaire in twelve healthy right-handed adults, and activities of the auditory cortices of bilateral hemispheres when listening to continuous tones were recorded by magnetoencephalography. Sustained activity of the left auditory cortex was significantly correlated to impulsivity, that is, larger sustained activity indicated stronger impulsivity. The results suggest that the left auditory cortex represents time perception, probably because the area is involved in speech perception, and that it represents impulsivity indirectly.
Visual and auditory accessory stimulus offset and the Simon effect.
Nishimura, Akio; Yokosawa, Kazuhiko
2010-10-01
We investigated the effect on the right and left responses of the disappearance of a task-irrelevant stimulus located on the right or left side. Participants pressed a right or left response key on the basis of the color of a centrally located visual target. Visual (Experiment 1) or auditory (Experiment 2) task-irrelevant accessory stimuli appeared or disappeared at locations to the right or left of the central target. In Experiment 1, responses were faster when onset or offset of the visual accessory stimulus was spatially congruent with the response. In Experiment 2, responses were again faster when onset of the auditory accessory stimulus and the response were on the same side. However, responses were slightly slower when offset of the auditory accessory stimulus and the response were on the same side than when they were on opposite sides. These findings indicate that transient change information is crucial for a visual Simon effect, whereas sustained stimulation from an ongoing stimulus also contributes to an auditory Simon effect.
Delays in auditory processing identified in preschool children with FASD
Stephen, Julia M.; Kodituwakku, Piyadasa W.; Kodituwakku, Elizabeth L.; Romero, Lucinda; Peters, Amanda M.; Sharadamma, Nirupama Muniswamy; Caprihan, Arvind; Coffman, Brian A.
2012-01-01
Background: Both sensory and cognitive deficits have been associated with prenatal exposure to alcohol; however, very few studies have focused on sensory deficits in preschool aged children. Since sensory skills develop early, characterization of sensory deficits using novel imaging methods may reveal important neural markers of prenatal alcohol exposure. Materials and Methods: Participants in this study were 10 children with a fetal alcohol spectrum disorder (FASD) and 15 healthy control (HC) children aged 3-6 years. All participants had normal hearing as determined by clinical screens. We measured their neurophysiological responses to auditory stimuli (1000 Hz, 72 dB tone) using magnetoencephalography (MEG). We used a multi-dipole spatio-temporal modeling technique (CSST – Ranken et al. 2002) to identify the location and timecourse of cortical activity in response to the auditory tones. The timing and amplitude of the left and right superior temporal gyrus sources associated with activation of left and right primary/secondary auditory cortices were compared across groups. Results: There was a significant delay in M100 and M200 latencies for the FASD children relative to the HC children (p = 0.01), when including age as a covariate. The within-subjects effect of hemisphere was not significant. A comparable delay in M100 and M200 latencies was observed in children across the FASD subtypes. Discussion: Auditory delay revealed by MEG in children with FASD may prove to be a useful neural marker of information processing difficulties in young children with prenatal alcohol exposure. The fact that delayed auditory responses were observed across the FASD spectrum suggests that it may be a sensitive measure of alcohol-induced brain damage. Therefore, this measure in conjunction with other clinical tools may prove useful for early identification of alcohol affected children, particularly those without dysmorphia. PMID:22458372
Yoshimura, Yuko; Kikuchi, Mitsuru; Hiraishi, Hirotoshi; Hasegawa, Chiaki; Takahashi, Tetsuya; Remijn, Gerard B; Oi, Manabu; Munesue, Toshio; Higashida, Haruhiro; Minabe, Yoshio
2016-01-01
The auditory-evoked P1m, recorded by magnetoencephalography, reflects a central auditory processing ability in human children. One recent study revealed that asynchrony of P1m between the right and left hemispheres reflected a central auditory processing disorder (i.e., attention deficit hyperactivity disorder, ADHD) in children. However, to date, the relationship between auditory P1m right-left hemispheric synchronization and the comorbidity of hyperactivity in children with autism spectrum disorder (ASD) is unknown. In this study, based on a previous report of an asynchrony of P1m in children with ADHD, to clarify whether P1m right-left hemispheric synchronization is related to the symptom of hyperactivity in children with ASD, we investigated the relationship between voice-evoked P1m right-left hemispheric synchronization and hyperactivity in children with ASD. In addition to synchronization, we investigated right-left hemispheric lateralization. Our findings failed to demonstrate significant differences in these values between ASD children with and without the symptom of hyperactivity, which was evaluated using the Autism Diagnostic Observation Schedule, Generic (ADOS-G) subscale. However, there was a significant correlation between the degree of hemispheric synchronization and the ability to keep still during 12-minute MEG recording periods. Our results also suggested that asynchrony in the bilateral brain auditory processing system is associated with ADHD-like symptoms in children with ASD.
Chen, Yu-Chen; Xia, Wenqing; Chen, Huiyou; Feng, Yuan; Xu, Jin-Jing; Gu, Jian-Ping; Salvi, Richard; Yin, Xindao
2017-05-01
The phantom sound of tinnitus is believed to be triggered by aberrant neural activity in the central auditory pathway, but since this debilitating condition is often associated with emotional distress and anxiety, these comorbidities likely arise from maladaptive functional connections to limbic structures such as the amygdala and hippocampus. To test this hypothesis, resting-state functional magnetic resonance imaging (fMRI) was used to identify aberrant effective connectivity of the amygdala and hippocampus in tinnitus patients and to determine the relationship with tinnitus characteristics. Chronic tinnitus patients (n = 26) and age-, sex-, and education-matched healthy controls (n = 23) were included. Both groups were comparable for hearing level. Granger causality analysis utilizing the amygdala and hippocampus as seed regions was used to investigate the directional connectivity and the relationship with tinnitus duration or distress. Relative to healthy controls, tinnitus patients demonstrated abnormal directional connectivity of the amygdala and hippocampus, including primary and association auditory cortex, and other non-auditory areas. Importantly, scores on the Tinnitus Handicap Questionnaires were positively correlated with increased connectivity from the left amygdala to left superior temporal gyrus (r = 0.570, P = 0.005), and from the right amygdala to right superior temporal gyrus (r = 0.487, P = 0.018). Moreover, enhanced effective connectivity from the right hippocampus to left transverse temporal gyrus was correlated with tinnitus duration (r = 0.452, P = 0.030). The results showed that tinnitus distress strongly correlates with enhanced effective connectivity that is directed from the amygdala to the auditory cortex. The longer the phantom sensation, the more likely acute tinnitus becomes permanently encoded by memory traces in the hippocampus. Hum Brain Mapp 38:2384-2397, 2017. © 2017 Wiley Periodicals, Inc.
Langguth, Berthold; Zowe, Marc; Landgrebe, Michael; Sand, Philipp; Kleinjung, Tobias; Binder, Harald; Hajak, Göran; Eichhammer, Peter
2006-01-01
Auditory phantom perceptions are associated with hyperactivity of the central auditory system. Neuronavigation-guided repetitive transcranial magnetic stimulation (rTMS) of the area of increased activity has been demonstrated to reduce tinnitus perception. This study aimed to develop an easily applicable standard procedure for transcranial magnetic stimulation of the primary auditory cortex and to investigate this coil positioning strategy for the treatment of chronic tinnitus in clinical practice. The left Heschl's gyrus was targeted in 25 healthy subjects using a frameless stereotactic system. Based on the individual scalp coordinates of the coil, a positioning strategy with reference to the 10-20 EEG system was developed. Using this coil positioning approach, we started an open treatment trial: 28 patients with chronic tinnitus received 10 sessions of rTMS (intensity 110% of motor threshold, 1 Hz, 2000 stimuli/day). The scalp coordinates for stimulating the primary auditory cortex fell within a range of about 20 mm in diameter, allowing us to define a standard procedure for coil positioning. Clinical validation of this coil positioning method resulted in a significant improvement of tinnitus complaints (p<0.001). The newly developed coil positioning strategy may offer an easier-to-use stimulation approach for treating chronic tinnitus compared with highly sophisticated, imaging-guided treatment methods.
Neural substrates related to auditory working memory comparisons in dyslexia: An fMRI study
Conway, Tim; Heilman, Kenneth M.; Gopinath, Kaundinya; Peck, Kyung; Bauer, Russell; Briggs, Richard W.; Torgesen, Joseph K.; Crosson, Bruce
2010-01-01
Adult readers with developmental phonological dyslexia exhibit significant difficulty comparing pseudowords and pure tones in auditory working memory (AWM). This suggests deficient AWM skills for adults diagnosed with dyslexia. Despite behavioral differences, it is unknown whether neural substrates of AWM differ between adults diagnosed with dyslexia and normal readers. Prior neuroimaging of adults diagnosed with dyslexia and normal readers, and post-mortem findings of neural structural anomalies in adults diagnosed with dyslexia, support the hypothesis of atypical neural activity in temporoparietal and inferior frontal regions during AWM tasks in adults diagnosed with dyslexia. We used fMRI during two binaural AWM tasks (pseudoword or pure-tone comparisons) in adults diagnosed with dyslexia (n = 11) and normal readers (n = 11). For both AWM tasks, adults diagnosed with dyslexia exhibited greater activity in left posterior superior temporal (BA 22) and inferior parietal regions (BA 40) than normal readers. Comparing neural activity between groups and between stimulus contrasts (pseudowords vs. tones), adults diagnosed with dyslexia showed greater primary auditory cortex activity (BA 42; tones > pseudowords) than normal readers. Thus, greater activity in primary auditory, posterior superior temporal, and inferior parietal cortices during linguistic and non-linguistic AWM tasks for adults diagnosed with dyslexia compared to normal readers indicates differences in the neural substrates of AWM comparison tasks. PMID:18577292
Influence of auditory and audiovisual stimuli on the right-left prevalence effect.
Vu, Kim-Phuong L; Minakata, Katsumi; Ngo, Mary Kim
2014-01-01
When auditory stimuli are used in two-dimensional spatial compatibility tasks, where the stimulus and response configurations vary along the horizontal and vertical dimensions simultaneously, a right-left prevalence effect occurs in which horizontal compatibility dominates over vertical compatibility. The right-left prevalence effects obtained with auditory stimuli are typically larger than those obtained with visual stimuli, even though less attention should be demanded by the horizontal dimension in auditory processing. In the present study, we examined whether auditory or visual dominance occurs when the two-dimensional stimuli are audiovisual, as well as whether there would be cross-modal facilitation of response selection for the horizontal and vertical dimensions. We also examined whether there is an additional benefit of adding a pitch dimension to the auditory stimulus to facilitate vertical coding through use of the spatial-musical association of response codes (SMARC) effect, where pitch is coded in terms of height in space. In Experiment 1, we found a larger right-left prevalence effect for unimodal auditory than visual stimuli. Neutral, non-pitch-coded audiovisual stimuli did not result in cross-modal facilitation, but did show evidence of visual dominance. The right-left prevalence effect was eliminated in the presence of SMARC audiovisual stimuli, but the effect influenced horizontal rather than vertical coding. Experiment 2 showed that the influence of the pitch dimension was not in terms of influencing response selection on a trial-to-trial basis, but in terms of altering the salience of the task environment. Taken together, these findings indicate that in the absence of salient vertical cues, auditory and audiovisual stimuli tend to be coded along the horizontal dimension, and vision tends to dominate audition in this two-dimensional spatial stimulus-response task.
Evaluation of auditory perception development in neonates by event-related potential technique.
Zhang, Qinfen; Li, Hongxin; Zheng, Aibin; Dong, Xuan; Tu, Wenjuan
2017-08-01
To investigate auditory perception development in neonates and correlate it with days after birth, left and right hemisphere development and sex, using the event-related potential (ERP) technique. Sixty full-term neonates, consisting of 32 males and 28 females, aged 2-28 days, were included in this study. An auditory oddball paradigm was used to elicit ERPs. N2 wave latencies and areas were recorded at different days after birth to study the relationship between auditory perception and age, and to compare the left and right hemispheres and males and females. Average waveforms of ERPs in neonates developed from relatively irregular flat-bottomed troughs to relatively regular steep-sided ripples. A good linear relationship between ERPs and days after birth in neonates was observed. As days after birth increased, N2 latencies gradually and significantly shortened, and N2 areas gradually and significantly increased (both P<0.01). N2 areas in the central part of the brain were significantly greater, and N2 latencies in the central part were significantly shorter, in the left hemisphere compared with the right, indicative of left hemisphere dominance (both P<0.05). N2 areas were greater and N2 latencies shorter in female neonates compared with males. The neonatal period is one of rapid auditory perception development. In the days following birth, the auditory perception ability of neonates gradually increases. This occurs predominantly in the left hemisphere, with auditory perception ability appearing to develop earlier in female neonates than in males. ERP can be used as an objective index to evaluate auditory perception development in neonates. Copyright © 2017 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.
Paladini, Rebecca E.; Diana, Lorenzo; Zito, Giuseppe A.; Nyffeler, Thomas; Wyss, Patric; Mosimann, Urs P.; Müri, René M.; Nef, Tobias
2018-01-01
Cross-modal spatial cueing can affect performance in a visual search task. For example, search performance improves if a visual target and an auditory cue originate from the same spatial location, and it deteriorates if they originate from different locations. Moreover, it has recently been postulated that multisensory settings, i.e., experimental settings in which critical stimuli are concurrently presented in different sensory modalities (e.g., visual and auditory), may trigger asymmetries in visuospatial attention, with a facilitation observed for visual stimuli presented in the right compared to the left visual space. However, it remains unclear whether auditory cueing of attention differentially affects search performance in the left and the right hemifields in audio-visual search tasks. The present study investigated whether spatial asymmetries would occur in a search task with cross-modal spatial cueing. Participants completed a visual search task that contained no auditory cues (i.e., unimodal visual condition), spatially congruent, spatially incongruent, and spatially non-informative auditory cues. To further assess participants' accuracy in localising the auditory cues, a unimodal auditory spatial localisation task was also administered. The results demonstrated no left/right asymmetries in the unimodal visual search condition. Both an additional incongruent and a spatially non-informative auditory cue resulted in lateral asymmetries, with increased search times for targets presented in the left compared to the right hemifield. No such spatial asymmetry was observed in the congruent condition. However, participants' performance in the congruent condition was modulated by their tone localisation accuracy.
The findings of the present study demonstrate that spatial asymmetries in multisensory processing depend on the validity of the cross-modal cues, and occur under specific attentional conditions, i.e., when visual attention has to be reoriented towards the left hemifield. PMID:29293637
Auditory Cortical Plasticity Drives Training-Induced Cognitive Changes in Schizophrenia
Dale, Corby L.; Brown, Ethan G.; Fisher, Melissa; Herman, Alexander B.; Dowling, Anne F.; Hinkley, Leighton B.; Subramaniam, Karuna; Nagarajan, Srikantan S.; Vinogradov, Sophia
2016-01-01
Schizophrenia is characterized by dysfunction in basic auditory processing, as well as higher-order operations of verbal learning and executive functions. We investigated whether targeted cognitive training of auditory processing improves neural responses to speech stimuli, and how these changes relate to higher-order cognitive functions. Patients with schizophrenia performed an auditory syllable identification task during magnetoencephalography before and after 50 hours of either targeted cognitive training or a computer-games control condition. Healthy comparison subjects were assessed at baseline and after a 10-week no-contact interval. Prior to training, patients (N = 34) showed reduced M100 response in primary auditory cortex relative to healthy participants (N = 13). At reassessment, only the targeted cognitive training patient group (N = 18) exhibited increased M100 responses. Additionally, this group showed increased induced high gamma band activity within left dorsolateral prefrontal cortex immediately after stimulus presentation, and later in bilateral temporal cortices. Training-related changes in neural activity correlated with changes in executive function scores but not verbal learning and memory. These data suggest that computerized cognitive training that targets auditory and verbal learning operations enhances both sensory responses in auditory cortex and engagement of prefrontal regions, as indexed during an auditory processing task with low demands on working memory. This neural circuit enhancement is in turn associated with better executive function but not verbal memory. PMID:26152668
Murakami, Takenobu; Restle, Julia; Ziemann, Ulf
2012-01-01
A left-hemispheric cortico-cortical network involving areas of the temporoparietal junction (Tpj) and the posterior inferior frontal gyrus (pIFG) is thought to support sensorimotor integration of speech perception into articulatory motor activation, but how this network links with the lip area of the primary motor cortex (M1) during speech…
Processing of spectral and amplitude envelope of animal vocalizations in the human auditory cortex.
Altmann, Christian F; Gomes de Oliveira Júnior, Cícero; Heinemann, Linda; Kaiser, Jochen
2010-08-01
In daily life, we usually identify sounds effortlessly and efficiently. Two properties are particularly salient and of importance for sound identification: the sound's overall spectral envelope and its temporal amplitude envelope. In this study, we aimed at investigating the representation of these two features in the human auditory cortex by using a functional magnetic resonance imaging adaptation paradigm. We presented pairs of sound stimuli derived from animal vocalizations that preserved the time-averaged frequency spectrum of the animal vocalizations and the amplitude envelope. We presented the pairs in four different conditions: (a) pairs with the same amplitude envelope and mean spectral envelope, (b) same amplitude envelope, but different mean spectral envelope, (c) different amplitude envelope, but same mean spectral envelope and (d) both different amplitude envelope and mean spectral envelope. We found fMRI adaptation effects for both the mean spectral envelope and the amplitude envelope of animal vocalizations in overlapping cortical areas in the bilateral superior temporal gyrus posterior to Heschl's gyrus. Areas sensitive to the amplitude envelope extended further anteriorly along the lateral superior temporal gyrus in the left hemisphere, while areas sensitive to the spectral envelope extended further anteriorly along the right lateral superior temporal gyrus. Posterior tonotopic areas within the left superior temporal lobe displayed sensitivity for the mean spectrum. Our findings suggest involvement of primary auditory areas in the representation of spectral cues and encoding of general spectro-temporal features of natural sounds in non-primary posterior and lateral superior temporal cortex. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
Kuriki, Shinya; Yokosawa, Koichi; Takahashi, Makoto
2013-01-01
The auditory illusory perception “scale illusion” occurs when a tone of ascending scale is presented in one ear, a tone of descending scale is presented simultaneously in the other ear, and vice versa. Most listeners hear illusory percepts of smooth pitch contours of the higher half of the scale in the right ear and the lower half in the left ear. Little is known about neural processes underlying the scale illusion. In this magnetoencephalographic study, we recorded steady-state responses to amplitude-modulated short tones having illusion-inducing pitch sequences, where the sound level of the modulated tones was manipulated to decrease monotonically with increase in pitch. The steady-state responses were decomposed into right- and left-sound components by means of separate modulation frequencies. It was found that the time course of the magnitude of response components of illusion-perceiving listeners was significantly correlated with smooth pitch contour of illusory percepts and that the time course of response components of stimulus-perceiving listeners was significantly correlated with discontinuous pitch contour of stimulus percepts in addition to the contour of illusory percepts. The results suggest that the percept of illusory pitch sequence was represented in the neural activity in or near the primary auditory cortex, i.e., the site of generation of auditory steady-state response, and that perception of scale illusion is maintained by automatic low-level processing. PMID:24086676
González-García, Nadia; González, Martha A; Rendón, Pablo L
2016-07-15
Relationships between musical pitches are described as either consonant, when associated with a pleasant and harmonious sensation, or dissonant, when associated with an inharmonious feeling. The accurate singing of musical intervals requires communication between auditory feedback processing and vocal motor control (i.e. audio-vocal integration) to ensure that each note is produced correctly. The objective of this study was to investigate the neural mechanisms through which trained musicians produce consonant and dissonant intervals. We utilized 4 musical intervals (specifically, an octave, a major seventh, a fifth, and a tritone) as the main stimuli for auditory discrimination testing, and we used the same interval tasks to assess vocal accuracy in a group of musicians (11 subjects, all female vocal students at conservatory level). The intervals were chosen so as to test for differences in recognition and production of consonant and dissonant intervals, as well as narrow and wide intervals. The subjects were studied using fMRI during performance of the interval tasks; the control condition consisted of passive listening. Singing dissonant intervals as opposed to singing consonant intervals led to an increase in activation in several regions, most notably the primary auditory cortex, the primary somatosensory cortex, the amygdala, the left putamen, and the right insula. Singing wide intervals as opposed to singing narrow intervals resulted in the activation of the right anterior insula. We also observed a correlation between singing in tune and brain activity in the premotor cortex, and a positive correlation between training and activation of primary somatosensory cortex, primary motor cortex, and premotor cortex during singing. When singing dissonant intervals, a higher degree of training correlated with activation of the right thalamus and the left putamen.
Our results indicate that singing dissonant intervals requires greater involvement of neural mechanisms associated with integrating external feedback from auditory and sensorimotor systems than singing consonant intervals, and it would then seem likely that dissonant intervals are intoned by adjusting the neural mechanisms used for the production of consonant intervals. Singing wide intervals requires a greater degree of control than singing narrow intervals, as it involves neural mechanisms which again involve the integration of internal and external feedback. Copyright © 2016 Elsevier B.V. All rights reserved.
Richardson, Fiona M; Ramsden, Sue; Ellis, Caroline; Burnett, Stephanie; Megnin, Odette; Catmur, Caroline; Schofield, Tom M; Leff, Alex P; Price, Cathy J
2011-12-01
A central feature of auditory STM is its item-limited processing capacity. We investigated whether auditory STM capacity correlated with regional gray and white matter in the structural MRI images from 74 healthy adults, 40 of whom had a prior diagnosis of developmental dyslexia whereas 34 had no history of any cognitive impairment. Using whole-brain statistics, we identified a region in the left posterior STS where gray matter density was positively correlated with forward digit span, backward digit span, and performance on a "spoonerisms" task that required both auditory STM and phoneme manipulation. Across tasks and participant groups, the correlation was highly significant even when variance related to reading and auditory nonword repetition was factored out. Although the dyslexics had poorer phonological skills, the effect of auditory STM capacity in the left STS was the same as in the cognitively normal group. We also illustrate that the anatomical location of this effect is in proximity to a lesion site recently associated with reduced auditory STM capacity in patients with stroke damage. This result, therefore, indicates that gray matter density in the posterior STS predicts auditory STM capacity in the healthy and damaged brain. In conclusion, we suggest that our present findings are consistent with the view that there is an overlap between the mechanisms that support language processing and auditory STM.
Davis, Chris; Kislyuk, Daniel; Kim, Jeesun; Sams, Mikko
2008-11-25
We used whole-head magnetoencephalography (MEG) to record changes in neuromagnetic N100m responses generated in the left and right auditory cortex as a function of the match between visual and auditory speech signals. Stimuli were auditory-only (AO) and auditory-visual (AV) presentations of /pi/, /ti/ and /vi/. Three types of intensity-matched auditory stimuli were used: intact speech (Normal), frequency band filtered speech (Band) and speech-shaped white noise (Noise). The behavioural task was to detect the /vi/ syllables, which comprised 12% of the stimuli. N100m responses were measured to averaged /pi/ and /ti/ stimuli. Behavioural data showed that identification of the stimuli was faster and more accurate for Normal than for Band stimuli, and for Band than for Noise stimuli. Reaction times were faster for AV than AO stimuli. MEG data showed that in the left hemisphere, N100m to both AO and AV stimuli was largest for the Normal, smaller for Band and smallest for Noise stimuli. In the right hemisphere, Normal and Band AO stimuli elicited N100m responses of quite similar amplitudes, but N100m amplitude to Noise was about half of that. There was a reduction in N100m for the AV compared to the AO conditions. The size of this reduction for each stimulus type was the same in the left hemisphere but graded in the right (being largest to the Normal, smaller to the Band and smallest to the Noise stimuli). The N100m decrease for the Normal stimuli was significantly larger in the right than in the left hemisphere. We suggest that the effect of processing visual speech seen in the right hemisphere likely reflects suppression of the auditory response based on AV cues for place of articulation.
Dislocation of the incus into the external auditory canal after mountain-biking accident.
Saito, T; Kono, Y; Fukuoka, Y; Yamamoto, H; Saito, H
2001-01-01
We report a rare case of incus dislocation to the external auditory canal after a mountain-biking accident. Otoscopy showed ossicular protrusion in the upper part of the left external auditory canal. CT indicated the disappearance of the incus, and an incus-like bone was found in the left external auditory canal. There was another bony and board-like structure in the attic. During the surgery, a square-shaped bony plate (1 x 1 cm) was found in the attic. It was determined that the bony plate had fallen from the tegmen of the attic. The fracture line in the posterosuperior auditory canal extending to the fossa incudis was identified. According to these findings, it was considered that the incus was pushed into the external auditory canal by the impact of skull injury through the fractured posterosuperior auditory canal, which opened widely enough for incus dislocation. Copyright 2001 S. Karger AG, Basel
Zhang, Qing; Kaga, Kimitaka; Hayashi, Akimasa
2011-07-01
A 27-year-old female showed auditory agnosia after long-term severe hydrocephalus due to congenital spina bifida. After years of hydrocephalus, she gradually suffered from hearing loss in her right ear at 19 years of age, followed by her left ear. During the time when she retained some ability to hear, she experienced severe difficulty in distinguishing verbal, environmental, and musical instrumental sounds. However, her auditory brainstem response and distortion product otoacoustic emissions were largely intact in the left ear. Her bilateral auditory cortices were preserved, as shown by neuroimaging, whereas her auditory radiations were severely damaged owing to progressive hydrocephalus. Although she had a complete bilateral hearing loss, she felt great pleasure when exposed to music. After years of self-training to read lips, she regained fluent ability to communicate. Clinical manifestations of this patient indicate that auditory agnosia can occur after long-term hydrocephalus due to spina bifida; the secondary auditory pathway may play a role in both auditory perception and hearing rehabilitation.
Kantrowitz, J T; Hoptman, M J; Leitman, D I; Silipo, G; Javitt, D C
2014-01-01
Intact sarcasm perception is a crucial component of social cognition and mentalizing (the ability to understand the mental state of oneself and others). In sarcasm, tone of voice is used to negate the literal meaning of an utterance. In particular, changes in pitch are used to distinguish between sincere and sarcastic utterances. Schizophrenia patients show well-replicated deficits in auditory function and functional connectivity (FC) within and between auditory cortical regions. In this study we investigated the contributions of auditory deficits to sarcasm perception in schizophrenia. Auditory measures including pitch processing, auditory emotion recognition (AER) and sarcasm detection were obtained from 76 patients with schizophrenia/schizo-affective disorder and 72 controls. Resting-state FC (rsFC) was obtained from a subsample and was analyzed using seeds placed in both auditory cortex and meta-analysis-defined core-mentalizing regions relative to auditory performance. Patients showed large effect-size deficits across auditory measures. Sarcasm deficits correlated significantly with general functioning and impaired pitch processing both across groups and within the patient group alone. Patients also showed reduced sensitivity to alterations in mean pitch and variability. For patients, sarcasm discrimination correlated exclusively with the level of rsFC within primary auditory regions whereas for controls, correlations were observed exclusively within core-mentalizing regions (the right posterior superior temporal gyrus, anterior superior temporal sulcus and insula, and left posterior medial temporal gyrus). These findings confirm the contribution of auditory deficits to theory of mind (ToM) impairments in schizophrenia, and demonstrate that FC within auditory, but not core-mentalizing, regions is rate limiting with respect to sarcasm detection in schizophrenia.
Bais, Leonie; Vercammen, Ans; Stewart, Roy; van Es, Frank; Visser, Bert; Aleman, André; Knegtering, Henderikus
2014-01-01
Background: Repetitive transcranial magnetic stimulation of the left temporo-parietal junction area has been studied as a treatment option for auditory verbal hallucinations. Although the right temporo-parietal junction area has also shown involvement in the genesis of auditory verbal hallucinations, no studies have used bilateral stimulation. Moreover, little is known about the durability of effects. We studied the short- and long-term effects of 1 Hz treatment of the left temporo-parietal junction area in schizophrenia patients with persistent auditory verbal hallucinations, compared to sham stimulation, and added an extra treatment arm of bilateral TPJ area stimulation. Methods: In this randomized controlled trial, 51 patients diagnosed with schizophrenia and persistent auditory verbal hallucinations were randomly allocated to treatment of the left or bilateral temporo-parietal junction area or sham treatment. Patients were treated for six days, twice daily for 20 minutes. Short-term efficacy was measured with the Positive and Negative Syndrome Scale (PANSS), the Auditory Hallucinations Rating Scale (AHRS), and the Positive and Negative Affect Scale (PANAS). We included follow-up measures with the AHRS and PANAS at four weeks and three months. Results: The interaction between time and treatment for Hallucination item P3 of the PANSS showed a trend towards significance, caused by a small reduction of scores in the left group. Although self-reported hallucination scores, as measured with the AHRS and PANAS, decreased significantly during the trial period, there were no differences between the three treatment groups. Conclusion: We did not find convincing evidence for the efficacy of left-sided rTMS compared to sham rTMS. Moreover, bilateral rTMS was not superior to left rTMS or sham in reducing AVH. Optimizing treatment parameters may result in stronger evidence for the efficacy of rTMS treatment of AVH. Moreover, future research should consider investigating factors predicting individual response. Trial Registration: Dutch Trial Register NTR1813. PMID:25329799
Relationship between Speech Production and Perception in People Who Stutter.
Lu, Chunming; Long, Yuhang; Zheng, Lifen; Shi, Guang; Liu, Li; Ding, Guosheng; Howell, Peter
2016-01-01
Speech production difficulties are apparent in people who stutter (PWS). PWS also have difficulties in speech perception compared to controls. It is unclear whether the speech perception difficulties in PWS are independent of, or related to, their speech production difficulties. To investigate this issue, functional MRI data were collected on 13 PWS and 13 controls whilst the participants performed a speech production task and a speech perception task. PWS performed more poorly than controls in the perception task, and this poorer performance was associated with a functional activity difference in the left anterior insula (part of the speech motor area) compared to controls. PWS also showed a functional activity difference in this and the surrounding area [left inferior frontal cortex (IFC)/anterior insula] in the production task compared to controls. Conjunction analysis showed that the functional activity differences between PWS and controls in the left IFC/anterior insula coincided across the perception and production tasks. Furthermore, Granger causality analysis on the resting-state fMRI data of the participants showed that the causal connection from the left IFC/anterior insula to an area in the left primary auditory cortex (Heschl's gyrus) differed significantly between PWS and controls. The strength of this connection correlated significantly with performance in the perception task. These results suggest that speech perception difficulties in PWS are associated with anomalous functional activity in the speech motor area, and that the altered functional connectivity from this area to the auditory area plays a role in the speech perception difficulties of PWS.
Cooperative dynamics in auditory brain response
Kwapień, J.; Drożdż, S.; Liu, L. C.; Ioannides, A. A.
1998-11-01
Simultaneous estimates of activity in the left and right auditory cortex of five normal human subjects were extracted from multichannel magnetoencephalography recordings. Left, right, and binaural stimulations were used, in separate runs, for each subject. The resulting time series of left and right auditory cortex activity were analyzed using the concept of mutual information. The analysis constitutes an objective method to address the nature of interhemispheric correlations in response to auditory stimulations. The results provide clear evidence of the occurrence of such correlations mediated by a direct information transport, with clear laterality effects: as a rule, the contralateral hemisphere leads by 10-20 ms, as can be seen in the average signal. The strength of the interhemispheric coupling, which cannot be extracted from the average data, is found to be highly variable from subject to subject, but remarkably stable for each subject.
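The lagged interhemispheric coupling described above can be sketched with a histogram-based, time-lagged mutual information estimate on synthetic data. This is an illustrative reconstruction only: the function names, bin count, binning estimator, and synthetic signals are our assumptions, not the authors' actual analysis pipeline.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of mutual information I(X;Y) in bits."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x, shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y, shape (1, bins)
    nz = pxy > 0                          # avoid log(0) on empty cells
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def lagged_mi(left, right, max_lag=30):
    """MI between two signals at each relative lag; a peak at a positive
    lag means `right` leads `left` by that many samples."""
    lags = list(range(-max_lag, max_lag + 1))
    mi = []
    for lag in lags:
        if lag >= 0:
            a, b = left[lag:], right[:len(right) - lag]
        else:
            a, b = left[:len(left) + lag], right[-lag:]
        mi.append(mutual_information(a, b))
    return np.array(lags), np.array(mi)

# Synthetic example: "left" is a noisy copy of "right", delayed by 15 samples
rng = np.random.default_rng(0)
right = rng.standard_normal(5000)
left = np.roll(right, 15) + 0.3 * rng.standard_normal(5000)

lags, mi = lagged_mi(left, right, max_lag=30)
print(lags[np.argmax(mi)])  # peak at +15: "right" leads "left"
```

Histogram estimators of this kind carry a positive bias that grows with bin count and shrinks with sample size, so for short MEG epochs a bias correction or surrogate-data baseline would normally be needed before interpreting the absolute MI values.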
Effective connectivity associated with auditory error detection in musicians with absolute pitch
Parkinson, Amy L.; Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Larson, Charles R.; Robin, Donald A.
2014-01-01
It is advantageous to study a wide range of vocal abilities in order to fully understand how vocal control measures vary across the full spectrum. Individuals with absolute pitch (AP) are able to assign a verbal label to musical notes and have enhanced abilities in pitch identification without reliance on an external referent. In this study we used dynamic causal modeling (DCM) to model effective connectivity of ERP responses to pitch perturbation in voice auditory feedback in musicians with relative pitch (RP), AP, and non-musician controls. We identified a network comprising left and right hemisphere superior temporal gyrus (STG), primary motor cortex (M1), and premotor cortex (PM). We specified nine models and compared two main factors examining various combinations of STG involvement in the feedback pitch error detection/correction process. Our results suggest that modulation of left to right STG connections is important in the identification of self-voice error and sensory-motor integration in AP musicians. We also identified reduced connectivity of left hemisphere PM to STG connections in AP and RP groups during the error detection and correction process relative to non-musicians. We suggest that this suppression may allow for enhanced connectivity relating to pitch identification in the right hemisphere in those with more precise pitch-matching abilities. Musicians with enhanced pitch identification abilities likely have an improved auditory error detection and correction system involving connectivity of STG regions. Our findings here also suggest that individuals with AP are more adept at using feedback related to pitch from the right hemisphere.
Hickok, G; Okada, K; Barr, W; Pa, J; Rogalsky, C; Donnelly, K; Barde, L; Grant, A
2008-12-01
Data from lesion studies suggest that the ability to perceive speech sounds, as measured by auditory comprehension tasks, is supported by temporal lobe systems in both the left and right hemisphere. For example, patients with left temporal lobe damage and auditory comprehension deficits (i.e., Wernicke's aphasics) nonetheless comprehend isolated words better than one would expect if their speech perception system had been largely destroyed (70-80% accuracy). Further, when comprehension fails in such patients, their errors are more often semantically based than phonemically based. The question addressed by the present study is whether this ability of the right hemisphere to process speech sounds is a result of plastic reorganization following chronic left hemisphere damage, or whether the ability exists in undamaged language systems. We sought to test these possibilities by studying auditory comprehension during acute left versus right hemisphere deactivation in Wada procedures. A series of 20 patients undergoing clinically indicated Wada procedures were asked to listen to an auditorily presented stimulus word and then point to its matching picture on a card that contained the target picture, a semantic foil, a phonemic foil, and an unrelated foil. This task was performed under three conditions: baseline, during left carotid injection of sodium amytal, and during right carotid injection of sodium amytal. Overall, left hemisphere injection led to a significantly higher error rate than right hemisphere injection. However, consistent with lesion work, the majority (75%) of these errors were semantic in nature. These findings suggest that auditory comprehension deficits are predominantly semantic in nature, even following acute left hemisphere disruption. This, in turn, supports the hypothesis that the right hemisphere is capable of speech sound processing in the intact brain.
Kanwal, Jagmeet S
2012-01-01
In the Doppler-shifted constant frequency processing area in the primary auditory cortex of mustached bats, Pteronotus parnellii, neurons respond both to social calls and to echolocation signals. This multifunctional nature of cortical neurons creates a paradox for simultaneous processing of two behaviorally distinct categories of sound. To test the possibility of a stimulus-specific hemispheric bias, single-unit responses were obtained to both types of sounds, calls and pulse-echo tone pairs, from the right and left auditory cortex. Neurons on the left exhibited only slightly higher peak response magnitudes for their respective best calls, but they showed a significantly higher sensitivity (lower response thresholds) to calls than neurons on the right. On average, call-to-tone response ratios were significantly higher for neurons on the left than for those on the right. Neurons on the right responded significantly more strongly to pulse-echo tone pairs than those on the left. Overall, neurons in males responded to pulse-echo tone pairs with a much higher spike count compared to females, but this difference was less pronounced for calls. Multidimensional scaling of call responses yielded a segregated representation of call types only on the left. These data establish, for the first time, a behaviorally directed right-left asymmetry at the level of single cortical neurons. It is proposed that a lateralized cortex emerges from multiparametric integration (e.g. combination-sensitivity) within a neuron and inhibitory interactions between neurons that come into play during the processing of complex sounds.
New HRCT-based measurement of the human outer ear canal as a basis for acoustical methods.
Grewe, Johanna; Thiele, Cornelia; Mojallal, Hamidreza; Raab, Peter; Sankowsky-Rothe, Tobias; Lenarz, Thomas; Blau, Matthias; Teschner, Magnus
2013-06-01
As the form and size of the external auditory canal determine its transmitting function and hence the sound pressure in front of the eardrum, it is important to understand its anatomy in order to develop, optimize, and compare acoustical methods. High-resolution computed tomography (HRCT) data were measured retrospectively for 100 patients who had received a cochlear implant. In order to visualize the anatomy of the auditory canal, its length, radius, and the angle at which it runs were determined for the patients’ right and left ears. The canal’s volume was calculated, and a radius function was created. The determined length of the auditory canal averaged 23.6 mm for the right ear and 23.5 mm for the left ear. The calculated auditory canal volume (Vtotal) was 0.7 ml for the right ear and 0.69 ml for the left ear. The auditory canal was found to be significantly longer in men than in women, and the volume greater. The values obtained can be employed to develop a method that represents the shape of the auditory canal as accurately as possible to allow the best possible outcomes for hearing aid fitting.
Bernasconi, Fosco; Grivel, Jeremy; Murray, Micah M; Spierer, Lucas
2010-07-01
Accurate perception of the temporal order of sensory events is a prerequisite in numerous functions ranging from language comprehension to motor coordination. We investigated the spatio-temporal brain dynamics of auditory temporal order judgment (aTOJ) using electrical neuroimaging analyses of auditory evoked potentials (AEPs) recorded while participants completed a near-threshold task requiring spatial discrimination of left-right and right-left sound sequences. AEPs to sound pairs modulated topographically as a function of aTOJ accuracy over the 39-77 ms post-stimulus period, indicating the engagement of distinct configurations of brain networks during early auditory processing stages. Source estimations revealed that accurate and inaccurate performance were linked to activity in bilateral posterior sylvian regions (PSR). However, activity within left, but not right, PSR predicted behavioral performance, suggesting that left PSR activity during early encoding phases of pairs of auditory spatial stimuli appears critical for the perception of their order of occurrence. Correlation analyses of source estimations further revealed that activity between left and right PSR was significantly correlated in the inaccurate but not the accurate condition, indicating that aTOJ accuracy depends on the functional decoupling between homotopic PSR areas. These results support a model of temporal order processing wherein behaviorally relevant temporal information--i.e., a temporal 'stamp'--is extracted within the early stages of cortical processing within left PSR but critically modulated by inputs from right PSR. We discuss our results with regard to current models of temporal order processing, namely gating and latency mechanisms.
Seither-Preisler, Annemarie; Parncutt, Richard; Schneider, Peter
2014-08-13
Playing a musical instrument is associated with numerous neural processes that continuously modify the human brain and may facilitate characteristic auditory skills. In a longitudinal study, we investigated the auditory and neural plasticity of musical learning in 111 young children (aged 7-9 y) as a function of the intensity of instrumental practice and musical aptitude. Because of the frequent co-occurrence of central auditory processing disorders and attentional deficits, we also tested 21 children with attention deficit (hyperactivity) disorder [AD(H)D]. Magnetic resonance imaging and magnetoencephalography revealed enlarged Heschl's gyri and enhanced right-left hemispheric synchronization of the primary evoked response (P1) to harmonic complex sounds in children who spent more time practicing a musical instrument. The anatomical characteristics were positively correlated with frequency discrimination, reading, and spelling skills. Conversely, AD(H)D children showed reduced volumes of Heschl's gyri and enhanced volumes of the plana temporalia that were associated with a distinct bilateral P1 asynchrony. This may indicate a risk for central auditory processing disorders that are often associated with attentional and literacy problems. The longitudinal comparisons revealed a very high stability of auditory cortex morphology and gray matter volumes, suggesting that the combined anatomical and functional parameters are neural markers of musicality and attention deficits. Educational and clinical implications are considered.
ERIC Educational Resources Information Center
Wood, Frank; And Others
1991-01-01
Investigates the proposed left hemisphere dysfunction in dyslexia by reviewing four studies using regional cerebral blood flow (RCBF) and combined auditory evoked responses with positron emission tomography. Emphasizes methodological issues. Finds that dyslexics showed a positive correlation between Heschl's gyrus activation and phonemic…
Auditory Space Perception in Left- and Right-Handers
ERIC Educational Resources Information Center
Ocklenburg, Sebastian; Hirnstein, Marco; Hausmann, Markus; Lewald, Jorg
2010-01-01
Several studies have shown that handedness has an impact on visual spatial abilities. Here we investigated the effect of laterality on auditory space perception. Participants (33 right-handers, 20 left-handers) completed two tasks of sound localization. In a dark, anechoic, and sound-proof room, sound stimuli (broadband noise) were presented via…
Neural Biomarkers for Dyslexia, ADHD, and ADD in the Auditory Cortex of Children.
Serrallach, Bettina; Groß, Christine; Bernhofs, Valdis; Engelmann, Dorte; Benner, Jan; Gündert, Nadine; Blatow, Maria; Wengenroth, Martina; Seitz, Angelika; Brunner, Monika; Seither, Stefan; Parncutt, Richard; Schneider, Peter; Seither-Preisler, Annemarie
2016-01-01
Dyslexia, attention deficit hyperactivity disorder (ADHD), and attention deficit disorder (ADD) show distinct clinical profiles that may include auditory and language-related impairments. Currently, an objective brain-based diagnosis of these developmental disorders is still unavailable. We investigated the neuro-auditory systems of dyslexic, ADHD, ADD, and age-matched control children (N = 147) using neuroimaging, magnetoencephalography, and psychoacoustics. All disorder subgroups exhibited an oversized left planum temporale and an abnormal interhemispheric asynchrony (10-40 ms) of the primary auditory evoked P1-response. Considering right auditory cortex morphology, bilateral P1 source waveform shapes, and auditory performance, the three disorder subgroups could be reliably differentiated with outstanding accuracies of 89-98%. We therefore for the first time provide differential biomarkers for a brain-based diagnosis of dyslexia, ADHD, and ADD. The method not only allowed for clear discrimination between two subtypes of attentional disorders (ADHD and ADD), a topic controversially discussed for decades in the scientific community, but also revealed the potential for objectively identifying comorbid cases. Notably, in children playing a musical instrument, the observed interhemispheric asynchronies were reduced by about two-thirds after three and a half years of training, suggesting a strong beneficial influence of musical experience on brain development. These findings might have far-reaching implications for both research and practice and enable a profound understanding of the brain-related etiology, diagnosis, and musically based therapy of common auditory-related developmental disorders and learning disabilities.
Ferri, Lorenzo; Bisulli, Francesca; Nobili, Lino; Tassi, Laura; Licchetta, Laura; Mostacci, Barbara; Stipa, Carlotta; Mainieri, Greta; Bernabè, Giorgia; Provini, Federica; Tinuper, Paolo
2014-11-01
To describe the anatomo-electro-clinical findings of patients with nocturnal hypermotor seizures (NHS) preceded by auditory symptoms, and to evaluate the localizing value of auditory aura. Our database of 165 patients with a nocturnal frontal lobe epilepsy (NFLE) diagnosis confirmed by videopolysomnography (VPSG) was reviewed, selecting those who reported an auditory aura as the initial ictal symptom in at least two NHS during their lifetime. Eleven patients were selected (seven males, four females). According to the anatomo-electro-clinical data, three groups were identified. Group 1 [defined epileptogenic zone (EZ)]: three subjects were studied with stereo-EEG. The EZ lay in the left superior temporal gyrus in two cases, whereas in the third case seizures arose from a dysplastic lesion located in the left temporal lobe. One of these three patients underwent left Heschl's gyrus resection, and is currently seizure-free. Group 2 (presumed EZ): three cases in which a presumed EZ was identified, in the left temporal lobe in two cases and in the left temporal lobe extending to the insula in one subject. Group 3 (uncertain EZ): five cases had discordant anatomo-electro-clinical correlations. This work suggests that auditory aura may be a helpful anamnestic feature indicating an extra-frontal seizure origin. This finding could guide secondary investigations to improve diagnostic definition and selection of candidates for surgical treatment.
Yamamoto, Katsura; Tabei, Kenichi; Katsuyama, Narumi; Taira, Masato; Kitamura, Ken
2017-01-01
Patients with unilateral sensorineural hearing loss (UHL) often complain of hearing difficulties in noisy environments. To clarify this, we compared brain activation in patients with UHL with that of healthy participants during speech perception in a noisy environment, using functional magnetic resonance imaging (fMRI). A pure tone of 1 kHz, or 14 monosyllabic speech sounds at 65‒70 dB accompanied by MRI scan noise at 75 dB, were presented to both ears for 1 second each and participants were instructed to press a button when they could hear the pure tone or speech sound. Based on the activation areas of healthy participants, the primary auditory cortex, the anterior auditory association areas, and the posterior auditory association areas were set as regions of interest (ROI). In each of these regions, we compared brain activity between healthy participants and patients with UHL. The results revealed that patients with right-side UHL showed different brain activity in the right posterior auditory area during perception of pure tones versus monosyllables. Clinically, left-side and right-side UHL are not presently differentiated and are similarly diagnosed and treated; however, the results of this study suggest that a laterality-specific treatment should be chosen.
Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong
2013-01-01
This article aims to investigate whether auditory stimuli in the horizontal plane, particularly originating from behind the participant, affect audiovisual integration by using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants, auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants, and audiovisual stimuli that include both visual stimuli and auditory stimuli originating from one of the four locations were simultaneously presented. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in front of or behind the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160-200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360-400 milliseconds. Our results confirmed that audiovisual integration was elicited even when auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented to the right or left, suggesting that the human brain might be more sensitive to information received from behind than from either side.
Fröhlich, F; Burrello, T N; Mellin, J M; Cordle, A L; Lustenberger, C M; Gilmore, J H; Jarskog, L F
2016-03-01
Auditory hallucinations are resistant to pharmacotherapy in about 25% of adults with schizophrenia. Treatment with noninvasive brain stimulation would provide a welcomed additional tool for the clinical management of auditory hallucinations. A recent study found a significant reduction in auditory hallucinations in people with schizophrenia after five days of twice-daily transcranial direct current stimulation (tDCS) that simultaneously targeted left dorsolateral prefrontal cortex and left temporo-parietal cortex. We hypothesized that once-daily tDCS with stimulation electrodes over left frontal and temporo-parietal areas reduces auditory hallucinations in patients with schizophrenia. We performed a randomized, double-blind, sham-controlled study that evaluated five days of daily tDCS of the same cortical targets in 26 outpatients with schizophrenia and schizoaffective disorder with auditory hallucinations. We found a significant reduction in auditory hallucinations measured by the Auditory Hallucination Rating Scale (F2,50=12.22, P<0.0001) that was not specific to the treatment group (F2,48=0.43, P=0.65). No significant change of overall schizophrenia symptom severity measured by the Positive and Negative Syndrome Scale was observed. The lack of efficacy of tDCS for treatment of auditory hallucinations and the pronounced response in the sham-treated group in this study contrasts with the previous finding and demonstrates the need for further optimization and evaluation of noninvasive brain stimulation strategies. In particular, higher cumulative doses and higher treatment frequencies of tDCS together with strategies to reduce placebo responses should be investigated. Additionally, consideration of more targeted stimulation to engage specific deficits in temporal organization of brain activity in patients with auditory hallucinations may be warranted.
Oscillatory support for rapid frequency change processing in infants.
Musacchia, Gabriella; Choudhury, Naseem A; Ortiz-Mantilla, Silvia; Realpe-Bonilla, Teresa; Roesler, Cynthia P; Benasich, April A
2013-11-01
Rapid auditory processing and auditory change detection abilities are crucial aspects of speech and language development, particularly in the first year of life. Animal models and adult studies suggest that oscillatory synchrony, and in particular low-frequency oscillations, play key roles in this process. We hypothesize that infant perception of rapid pitch and timing changes is mediated, at least in part, by oscillatory mechanisms. Using event-related potentials (ERPs), source localization and time-frequency analysis of event-related oscillations (EROs), we examined the neural substrates of rapid auditory processing in 4-month-olds. During a standard oddball paradigm, infants listened to tone pairs with invariant standard (STD, 800-800 Hz) and variant deviant (DEV, 800-1200 Hz) pitch. STD and DEV tone pairs were first presented in a block with a short inter-stimulus interval (ISI) (Rapid Rate: 70 ms ISI), followed by a block of stimuli with a longer ISI (Control Rate: 300 ms ISI). Results showed greater ERP peak amplitude in response to the DEV tone in both conditions and later and larger peaks during Rapid Rate presentation, compared to the Control condition. Sources of neural activity, localized to right and left auditory regions, showed larger and faster activation in the right hemisphere for both rate conditions. Time-frequency analysis of the source activity revealed clusters of theta band enhancement to the DEV tone in right auditory cortex for both conditions. Left auditory activity was enhanced only during Rapid Rate presentation. These data suggest that local low-frequency oscillatory synchrony underlies rapid processing and can robustly index auditory perception in young infants. Furthermore, left hemisphere recruitment during rapid frequency change discrimination suggests a difference in the spectral and temporal resolution of right and left hemispheres at a very young age.
Auditory/visual Duration Bisection in Patients with Left or Right Medial-Temporal Lobe Resection
ERIC Educational Resources Information Center
Melgire, Manuela; Ragot, Richard; Samson, Severine; Penney, Trevor B.; Meck, Warren H.; Pouthas, Viviane
2005-01-01
Patients with unilateral (left or right) medial temporal lobe lesions and normal control (NC) volunteers participated in two experiments, both using a duration bisection procedure. Experiment 1 assessed discrimination of auditory and visual signal durations ranging from 2 to 8 s, in the same test session. Patients and NC participants judged…
ERIC Educational Resources Information Center
Boets, Bart; Verhoeven, Judith; Wouters, Jan; Steyaert, Jean
2015-01-01
We investigated low-level auditory spectral and temporal processing in adolescents with autism spectrum disorder (ASD) and early language delay compared to matched typically developing controls. Auditory measures were designed to target right versus left auditory cortex processing (i.e. frequency discrimination and slow amplitude modulation (AM)…
Speech comprehension aided by multiple modalities: behavioural and neural interactions
McGettigan, Carolyn; Faulkner, Andrew; Altarelli, Irene; Obleser, Jonas; Baverstock, Harriet; Scott, Sophie K.
2014-01-01
Speech comprehension is a complex human skill, the performance of which requires the perceiver to combine information from several sources – e.g. voice, face, gesture, linguistic context – to achieve an intelligible and interpretable percept. We describe a functional imaging investigation of how auditory, visual and linguistic information interact to facilitate comprehension. Our specific aims were to investigate the neural responses to these different information sources, alone and in interaction, and further to use behavioural speech comprehension scores to address sites of intelligibility-related activation in multifactorial speech comprehension. In fMRI, participants passively watched videos of spoken sentences, in which we varied Auditory Clarity (with noise-vocoding), Visual Clarity (with Gaussian blurring) and Linguistic Predictability. Main effects of enhanced signal with increased auditory and visual clarity were observed in overlapping regions of posterior STS. Two-way interactions of the factors (auditory × visual, auditory × predictability) in the neural data were observed outside temporal cortex, where positive signal change in response to clearer facial information and greater semantic predictability was greatest at intermediate levels of auditory clarity. Overall changes in stimulus intelligibility by condition (as determined using an independent behavioural experiment) were reflected in the neural data by increased activation predominantly in bilateral dorsolateral temporal cortex, as well as inferior frontal cortex and left fusiform gyrus. Specific investigation of intelligibility changes at intermediate auditory clarity revealed a set of regions, including posterior STS and fusiform gyrus, showing enhanced responses to both visual and linguistic information. 
Finally, an individual differences analysis showed that greater comprehension performance in the scanning participants (measured in a post-scan behavioural test) was associated with increased activation in left inferior frontal gyrus and left posterior STS. The current multimodal speech comprehension paradigm demonstrates recruitment of a wide comprehension network in the brain, in which posterior STS and fusiform gyrus form sites for convergence of auditory, visual and linguistic information, while left-dominant sites in temporal and frontal cortex support successful comprehension. PMID:22266262
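The Auditory Clarity manipulation above relies on noise-vocoding, which preserves the slow amplitude envelope in each frequency band while replacing fine spectral detail with noise. A minimal sketch of such a channel vocoder is shown below; the band count, filter order, and band edges are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_bands=6, fmin=100.0, fmax=4000.0, seed=0):
    """Noise-vocode a signal: split it into log-spaced frequency bands,
    extract each band's amplitude envelope, reapply the envelopes to
    band-limited noise carriers, and sum the channels."""
    rng = np.random.default_rng(seed)
    edges = np.geomspace(fmin, fmax, n_bands + 1)   # log-spaced band edges
    noise = rng.standard_normal(len(signal))
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        env = np.abs(hilbert(band))        # slow amplitude envelope
        carrier = sosfiltfilt(sos, noise)  # band-limited noise carrier
        out += env * carrier
    return out
```

Fewer bands yield less intelligible speech, which is how graded Auditory Clarity conditions are typically constructed.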
Park, Hyojin; Ince, Robin A A; Schyns, Philippe G; Thut, Gregor; Gross, Joachim
2015-06-15
Humans show a remarkable ability to understand continuous speech even under adverse listening conditions. This ability critically relies on dynamically updated predictions of incoming sensory information, but exactly how top-down predictions improve speech processing is still unclear. Brain oscillations are a likely mechanism for these top-down predictions [1, 2]. Quasi-rhythmic components in speech are known to entrain low-frequency oscillations in auditory areas [3, 4], and this entrainment increases with intelligibility [5]. We hypothesize that top-down signals from frontal brain areas causally modulate the phase of brain oscillations in auditory cortex. We use magnetoencephalography (MEG) to monitor brain oscillations in 22 participants during continuous speech perception. We characterize prominent spectral components of speech-brain coupling in auditory cortex and use causal connectivity analysis (transfer entropy) to identify the top-down signals driving this coupling more strongly during intelligible speech than during unintelligible speech. We report three main findings. First, frontal and motor cortices significantly modulate the phase of speech-coupled low-frequency oscillations in auditory cortex, and this effect depends on intelligibility of speech. Second, top-down signals are significantly stronger for left auditory cortex than for right auditory cortex. Third, speech-auditory cortex coupling is enhanced as a function of stronger top-down signals. Together, our results suggest that low-frequency brain oscillations play a role in implementing predictive top-down control during continuous speech perception and that top-down control is largely directed at left auditory cortex. This suggests a close relationship between (left-lateralized) speech production areas and the implementation of top-down control in continuous speech perception. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.
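The causal connectivity analysis above uses transfer entropy, a directed, model-free measure of information flow: TE(X→Y) quantifies how much the past of X reduces uncertainty about the next sample of Y beyond what Y's own past already explains. A crude histogram-based estimator is sketched below; real MEG analyses use far more careful estimators and statistics, and the bin count and lag here are illustrative assumptions.

```python
import numpy as np

def transfer_entropy(x, y, bins=4, lag=1):
    """Estimate TE(x -> y) in bits via rank-based binning and histograms."""
    def disc(s):
        # Discretize into `bins` equiprobable levels using ranks
        ranks = np.argsort(np.argsort(s))
        return (ranks * bins // len(s)).astype(int)
    xd, yd = disc(x), disc(y)
    y_next, y_past, x_past = yd[lag:], yd[:-lag], xd[:-lag]
    # Joint histogram p(y_next, y_past, x_past)
    p_xyz = np.zeros((bins, bins, bins))
    for a, b, c in zip(y_next, y_past, x_past):
        p_xyz[a, b, c] += 1
    p_xyz /= len(y_next)
    p_yz = p_xyz.sum(axis=0)       # p(y_past, x_past)
    p_xy = p_xyz.sum(axis=2)       # p(y_next, y_past)
    p_y = p_xyz.sum(axis=(0, 2))   # p(y_past)
    te = 0.0
    for a in range(bins):
        for b in range(bins):
            for c in range(bins):
                p = p_xyz[a, b, c]
                if p > 0:
                    # p(y_next | y_past, x_past) / p(y_next | y_past)
                    te += p * np.log2(p * p_y[b] / (p_yz[b, c] * p_xy[a, b]))
    return te
```

A signal that is a lagged copy of another yields a large TE in the driving direction and near-zero TE in the reverse direction, which is the asymmetry the authors exploit to identify top-down signals.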
Electrostimulation mapping of comprehension of auditory and visual words.
Roux, Franck-Emmanuel; Miskin, Krasimir; Durand, Jean-Baptiste; Sacko, Oumar; Réhault, Emilie; Tanova, Rositsa; Démonet, Jean-François
2015-10-01
In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2754 cortical sites tested). Language functions were studied with a special focus on comprehension of auditory and visual words and the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of 4 different images controlled for familiarity), and also auditory object (sound recognition) and Token test tasks. Ninety-two auditory comprehension interference sites were observed. We found that the process of auditory comprehension involved a few, fine-grained, sub-centimetre cortical territories. Early stages of speech comprehension seem to relate to two posterior regions in the left superior temporal gyrus. Downstream lexical-semantic speech processing and sound analysis involved 2 pathways, along the anterior part of the left superior temporal gyrus, and posteriorly around the supramarginal and middle temporal gyri. Electrostimulation experimentally dissociated perceptual consciousness attached to speech comprehension. The initial word discrimination process can be considered an "automatic" stage, the attention feedback not being impaired by stimulation as would be the case at the lexical-semantic stage. Multimodal organization of the superior temporal gyrus was also detected, since some neurones could be involved in comprehension of visual material and naming. These findings demonstrate a fine-grained, sub-centimetre cortical representation of speech comprehension processing, mainly in the left superior temporal gyrus, and are in line with those described in dual stream models of language comprehension processing. Copyright © 2015 Elsevier Ltd. All rights reserved.
Take-over again: Investigating multimodal and directional TORs to get the driver back into the loop.
Petermeijer, Sebastiaan; Bazilinskyy, Pavlo; Bengler, Klaus; de Winter, Joost
2017-07-01
When a highly automated car reaches its operational limits, it needs to provide a take-over request (TOR) in order for the driver to resume control. The aim of this simulator-based study was to investigate the effects of TOR modality and left/right directionality on drivers' steering behaviour when facing a head-on collision without having received specific instructions regarding the directional nature of the TORs. Twenty-four participants drove three sessions in a highly automated car, each session with a different TOR modality (auditory, vibrotactile, and auditory-vibrotactile). Six TORs were provided per session, warning the participants about a stationary vehicle that had to be avoided by changing lane left or right. Two TORs were issued from the left, two from the right, and two from both the left and the right (i.e., nondirectional). The auditory stimuli were presented via speakers in the simulator (left, right, or both), and the vibrotactile stimuli via a tactile seat (with tactors activated at the left side, right side, or both). The results showed that the multimodal TORs yielded statistically significantly faster steer-touch times than the unimodal vibrotactile TOR, while no statistically significant differences were observed for brake times and lane change times. The unimodal auditory TOR yielded relatively low self-reported usefulness and satisfaction ratings. Almost all drivers overtook the stationary vehicle on the left regardless of the directionality of the TOR, and a post-experiment questionnaire revealed that most participants had not realized that some of the TORs were directional. We conclude that between the three TOR modalities tested, the multimodal approach is preferred. Moreover, our results show that directional auditory and vibrotactile stimuli do not evoke a directional response in uninstructed drivers. 
More salient and semantically congruent cues, as well as explicit instructions, may be needed to guide a driver in a specific direction during a take-over scenario. Copyright © 2017 Elsevier Ltd. All rights reserved.
Peñaloza López, Yolanda Rebeca; Orozco Peña, Xóchitl Daisy; Pérez Ruiz, Santiago Jesús
2018-04-03
The aim was to evaluate central auditory processing disorders (CAPD) in patients with multiple sclerosis, with an emphasis on auditory laterality as assessed by psychoacoustic tests, and to identify their relationship with functions of the Expanded Disability Status Scale (EDSS). The Hospital Anxiety and Depression Scale (HADS), the EDSS, and 9 psychoacoustic tests probing CAPD were administered to 26 individuals with multiple sclerosis and 26 controls, and correlations between the EDSS and the psychoacoustic tests were computed. Seven of the 9 psychoacoustic tests differed significantly from controls (P<.05), on the right or the left side (14/19 explorations). In the dichotic digits test there was a left-ear advantage, in contrast to the right-ear advantage usually observed. Five psychoacoustic tests correlated significantly with specific EDSS functions. The left-ear advantage, interpreted as an expression of deficient corpus callosum and attentional influences in multiple sclerosis, warrants further investigation. Copyright © 2018 Sociedad Española de Otorrinolaringología y Cirugía de Cabeza y Cuello. Publicado por Elsevier España, S.L.U. All rights reserved.
Thinking about touch facilitates tactile but not auditory processing.
Anema, Helen A; de Haan, Alyanne M; Gebuis, Titia; Dijkerman, H Chris
2012-05-01
Mental imagery is considered to be important for normal conscious experience. It is most frequently investigated in the visual, auditory and motor domain (imagination of movement), while the studies on tactile imagery (imagination of touch) are scarce. The current study investigated the effect of tactile and auditory imagery on the left/right discriminations of tactile and auditory stimuli. In line with our hypothesis, we observed that after tactile imagery, tactile stimuli were responded to faster as compared to auditory stimuli and vice versa. On average, tactile stimuli were responded to faster as compared to auditory stimuli, and stimuli in the imagery condition were on average responded to slower as compared to baseline performance (left/right discrimination without imagery assignment). The former is probably due to the spatial and somatotopic proximity of the fingers receiving the taps and the thumbs performing the response (button press), the latter to a dual task cost. Together, these results provide the first evidence of a behavioural effect of a tactile imagery assignment on the perception of real tactile stimuli.
Language networks in anophthalmia: maintained hierarchy of processing in 'visual' cortex.
Watkins, Kate E; Cowey, Alan; Alexander, Iona; Filippini, Nicola; Kennedy, James M; Smith, Stephen M; Ragge, Nicola; Bridge, Holly
2012-05-01
Imaging studies in blind subjects have consistently shown that sensory and cognitive tasks evoke activity in the occipital cortex, which is normally visual. The precise areas involved and degree of activation are dependent upon the cause and age of onset of blindness. Here, we investigated the cortical language network at rest and during an auditory covert naming task in five bilaterally anophthalmic subjects, who have never received visual input. When listening to auditory definitions and covertly retrieving words, these subjects activated lateral occipital cortex bilaterally in addition to the language areas activated in sighted controls. This activity was significantly greater than that present in a control condition of listening to reversed speech. The lateral occipital cortex was also recruited into a left-lateralized resting-state network that usually comprises anterior and posterior language areas. Levels of activation to the auditory naming and reversed speech conditions did not differ in the calcarine (striate) cortex. This primary 'visual' cortex was not recruited to the left-lateralized resting-state network and showed high interhemispheric correlation of activity at rest, as is typically seen in unimodal cortical areas. In contrast, the interhemispheric correlation of resting activity in extrastriate areas was reduced in anophthalmia to the level of cortical areas that are heteromodal, such as the inferior frontal gyrus. Previous imaging studies in the congenitally blind show that primary visual cortex is activated in higher-order tasks, such as language and memory to a greater extent than during more basic sensory processing, resulting in a reversal of the normal hierarchy of functional organization across 'visual' areas. Our data do not support such a pattern of organization in anophthalmia. 
Instead, the patterns of activity during the task and the functional connectivity at rest are consistent with the known hierarchy of processing in these areas normally seen for vision. The differences in cortical organization between bilateral anophthalmia and other forms of congenital blindness are considered to be due to the total absence of stimulation of 'visual' cortex by light or retinal activity in the former condition, and suggest the development of subcortical auditory input to the geniculo-striate pathway.
Specht, Karsten; Baumgartner, Florian; Stadler, Jörg; Hugdahl, Kenneth; Pollmann, Stefan
2014-01-01
To differentiate between stop-consonants, the auditory system has to detect subtle place of articulation (PoA) and voice-onset time (VOT) differences. How this differential processing is represented on the cortical level remains unclear. The present functional magnetic resonance imaging (fMRI) study takes advantage of the superior spatial resolution and high sensitivity of ultra-high-field 7 T MRI. Subjects attentively listened to consonant–vowel (CV) syllables with an alveolar or bilabial stop-consonant and either a short or long VOT. The results showed an overall bilateral activation pattern in the posterior temporal lobe during the processing of the CV syllables. This was, however, modulated most strongly by PoA, such that syllables with an alveolar stop-consonant showed stronger left-lateralized activation. In addition, analysis of the underlying functional and effective connectivity revealed an inhibitory effect of the left planum temporale (PT) on the right auditory cortex (AC) during the processing of alveolar CV syllables. Furthermore, the connectivity results also indicated a directed information flow from the right to the left AC, and further to the left PT, for all syllables. These results indicate that auditory speech perception relies on an interplay between the left and right ACs, with the left PT as modulator. Furthermore, the degree of functional asymmetry is determined by the acoustic properties of the CV syllables. PMID:24966841
Opposite brain laterality in analogous auditory and visual tests.
Oltedal, Leif; Hugdahl, Kenneth
2017-11-01
Laterality for language processing can be assessed by auditory and visual tasks. Typically, a right ear/right visual half-field (VHF) advantage is observed, reflecting left-hemispheric lateralization for language. Historically, auditory tasks have shown more consistent and reliable results when compared to VHF tasks. While few studies have compared analogous tasks applied to both sensory modalities for the same participants, one such study by Voyer and Boudreau [(2003). Cross-modal correlation of auditory and visual language laterality tasks: a serendipitous finding. Brain Cogn, 53(2), 393-397] found opposite laterality for visual and auditory language tasks. We adapted an experimental paradigm based on a dichotic listening and VHF approach, and applied the combined language paradigm in two separate experiments, including fMRI in the second experiment to measure brain activation in addition to behavioural data. The first experiment showed a right-ear advantage for the auditory task, but a left half-field advantage for the visual task. The second experiment confirmed these findings, with opposite laterality effects for the visual and auditory tasks. In conclusion, we replicate the finding by Voyer and Boudreau (2003) and support their interpretation that these visual and auditory language tasks measure different cognitive processes.
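Ear and half-field advantages like those reported here are commonly summarized with a laterality index computed from left- and right-side accuracies. A minimal version is sketched below; the abstract does not specify the exact formula these authors used, so this is the conventional normalized difference.

```python
def laterality_index(right_correct, left_correct):
    """Normalized laterality index: (R - L) / (R + L).
    Positive values indicate a right-ear / right-VHF advantage
    (left-hemisphere dominance); negative values a left-side advantage."""
    return (right_correct - left_correct) / (right_correct + left_correct)
```

For example, 30 correct right-ear reports against 20 left-ear reports gives an index of +0.2, a modest right-ear advantage.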
Characteristics of hearing and echolocation in under-studied odontocete species
NASA Astrophysics Data System (ADS)
Smith, Adam B.
All odontocetes (toothed whales and dolphins) studied to date have been shown to echolocate. They use sound as their primary means of foraging, navigation, and communication with conspecifics and are thus considered acoustic specialists. However, the vast majority of what is known about odontocete acoustic systems comes from only a handful of the 76 recognized extant species. The research presented in this dissertation investigated basic characteristics of odontocete hearing and echolocation, including auditory temporal resolution, auditory pathways, directional hearing, and transmission beam characteristics, in individuals of five understudied odontocete species. Modulation rate transfer functions were measured from formerly stranded individuals of four different species (Stenella longirostris, Feresa attenuata, Globicephala melas, Mesoplodon densirostris) using non-invasive auditory evoked potential methods. All individuals showed acute auditory temporal resolution comparable to that of other studied odontocete species. Using the same electrophysiological methods, auditory pathways and directional hearing were investigated in a Risso's dolphin (Grampus griseus) using both localized and far-field acoustic stimuli. The dolphin's hearing showed significant, frequency-dependent asymmetry to localized sound presented on the right and left sides of its head. The dolphin also showed acute, but mostly symmetrical, directional auditory sensitivity to sounds presented in the far field. Furthermore, characteristics of the echolocation transmission beam of this same individual Risso's dolphin were measured using a 16-element hydrophone array. The dolphin exhibited both single- and dual-lobed beam shapes that were more directional than similar measurements from a bottlenose dolphin, harbor porpoise, and false killer whale.
[Memory peculiarities in patients with schizophrenia and their first-degree relatives].
Savina, T D; Orlova, V A; Shcherbakova, N P; Korsakova, N K; Malova, Iu A; Efanova, N N; Ganisheva, T K; Nikolaev, R A
2008-01-01
Eighty-four families with schizophrenia were studied: 84 patients (probands) and 73 of their first-degree unaffected relatives, as well as 37 normal controls and their relatives, using pathopsychological (pictogram) and Luria's neuropsychological tests. The most prominent abnormalities in both patients and relatives concerned global characteristics of auditory-speech memory, predominantly related to left subcortical and left temporal regions. Abnormalities of immediate recall of a short logical story (SLS) were connected with dysfunction of the same brain regions. Less prominent delayed-recall abnormalities for the SLS were revealed only in patients and were connected with left subcortical, left subcortical-frontal and left subcortical-temporal zones. This abnormality was absent in relatives and age-matched controls. The span of mediated retention was decreased in patients and, to a lesser degree, in relatives. A quantitative psychological analysis demonstrated a disintegration ("schizys") between semantic conception and image memory structure in patients and, to a lesser degree, in relatives. The data obtained show primary memory abnormalities in families with schizophrenia related to impairment of the information-decoding process in subcortical structures, with left-sided dysfunction of brain structures being predominantly typical.
Leske, Sabine; Ruhnau, Philipp; Frey, Julia; Lithari, Chrysa; Müller, Nadia; Hartmann, Thomas; Weisz, Nathan
2015-01-01
An ever-increasing number of studies are pointing to the importance of network properties of the brain for understanding behavior such as conscious perception. However, with regards to the influence of prestimulus brain states on perception, this network perspective has rarely been taken. Our recent framework predicts that brain regions crucial for a conscious percept are coupled prior to stimulus arrival, forming pre-established pathways of information flow and influencing perceptual awareness. Using magnetoencephalography (MEG) and graph theoretical measures, we investigated auditory conscious perception in a near-threshold (NT) task and found strong support for this framework. Relevant auditory regions showed an increased prestimulus interhemispheric connectivity. The left auditory cortex was characterized by a hub-like behavior and an enhanced integration into the brain functional network prior to perceptual awareness. Right auditory regions were decoupled from non-auditory regions, presumably forming an integrated information processing unit with the left auditory cortex. In addition, we show for the first time for the auditory modality that local excitability, measured by decreased alpha power in the auditory cortex, increases prior to conscious percepts. Importantly, we were able to show that connectivity states seem to be largely independent from local excitability states in the context of a NT paradigm. PMID:26408799
Lewald, Jörg; Hanenberg, Christina; Getzmann, Stephan
2016-10-01
Successful speech perception in complex auditory scenes with multiple competing speakers requires spatial segregation of auditory streams into perceptually distinct and coherent auditory objects and focusing of attention toward the speaker of interest. Here, we focused on the neural basis of this remarkable capacity of the human auditory system and investigated the spatiotemporal sequence of neural activity within the cortical network engaged in solving the "cocktail-party" problem. Twenty-eight subjects localized a target word in the presence of three competing sound sources. The analysis of the ERPs revealed an anterior contralateral subcomponent of the N2 (N2ac), computed as the difference waveform for targets to the left minus targets to the right. The N2ac peaked at about 500 ms after stimulus onset, and its amplitude was correlated with better localization performance. Cortical source localization for the contrast of left versus right targets at the time of the N2ac revealed a maximum in the region around left superior frontal sulcus and frontal eye field, both of which are known to be involved in processing of auditory spatial information. In addition, a posterior-contralateral late positive subcomponent (LPCpc) occurred at a latency of about 700 ms. Both these subcomponents are potential correlates of allocation of spatial attention to the target under cocktail-party conditions. © 2016 Society for Psychophysiological Research.
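The N2ac computation described above, a left-minus-right difference waveform with a peak identified in a latency window, can be sketched as follows; the array shapes, electrode assumptions, and window bounds are illustrative, not the study's exact pipeline.

```python
import numpy as np

def difference_wave(erps_target_left, erps_target_right):
    """Left-minus-right difference waveform from single-trial ERPs.

    Each input has shape (n_trials, n_samples): trials with the target
    on the left and on the right, respectively, at an anterior electrode.
    """
    return erps_target_left.mean(axis=0) - erps_target_right.mean(axis=0)

def peak_in_window(wave, times, t_min, t_max):
    """Latency and amplitude of the largest absolute deflection in a window."""
    idx = np.flatnonzero((times >= t_min) & (times <= t_max))
    peak = idx[np.argmax(np.abs(wave[idx]))]
    return times[peak], wave[peak]
```

With a window of roughly 0.4-0.6 s, this procedure would pick out a component like the N2ac peaking near 500 ms after stimulus onset.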
Specialization along the left superior temporal sulcus for auditory categorization.
Liebenthal, Einat; Desai, Rutvik; Ellingson, Michael M; Ramachandran, Brinda; Desai, Anjali; Binder, Jeffrey R
2010-12-01
The affinity and temporal course of functional fields in middle and posterior superior temporal cortex for the categorization of complex sounds was examined using functional magnetic resonance imaging (fMRI) and event-related potentials (ERPs) recorded simultaneously. Data were compared before and after subjects were trained to categorize a continuum of unfamiliar nonphonemic auditory patterns with speech-like properties (NP) and a continuum of familiar phonemic patterns (P). fMRI activation for NP increased after training in left posterior superior temporal sulcus (pSTS). The ERP P2 response to NP also increased with training, and its scalp topography was consistent with left posterior superior temporal generators. In contrast, the left middle superior temporal sulcus (mSTS) showed fMRI activation only for P, and this response was not affected by training. The P2 response to P was also independent of training, and its estimated source was more anterior in left superior temporal cortex. Results are consistent with a role for left pSTS in short-term representation of relevant sound features that provide the basis for identifying newly acquired sound categories. Categorization of highly familiar phonemic patterns is mediated by long-term representations in left mSTS. Results provide new insight regarding the function of ventral and dorsal auditory streams.
Exploring auditory neglect: Anatomo-clinical correlations of auditory extinction.
Tissieres, Isabel; Crottaz-Herbette, Sonia; Clarke, Stephanie
2018-05-23
The key symptoms of auditory neglect include left extinction on tasks of dichotic and/or diotic listening and a rightward shift in locating sounds. The anatomical correlates of the latter are relatively well understood, but no systematic studies have examined auditory extinction. Here, we performed a systematic study of the anatomo-clinical correlates of extinction using dichotic and/or diotic listening tasks. In total, 20 patients with right hemispheric damage (RHD) and 19 with left hemispheric damage (LHD) performed dichotic and diotic listening tasks. Each task consists of the simultaneous presentation of word pairs; in the dichotic task, 1 word is presented to each ear, and in the diotic task, each word is lateralized by means of interaural time differences and presented to one side. RHD was associated with exclusively contralesional extinction in dichotic or diotic listening, whereas in selected cases, LHD led to contra- or ipsilesional extinction. Bilateral symmetrical extinction occurred in RHD or LHD, with dichotic or diotic listening. The anatomical correlates of these extinction profiles offer an insight into the organisation of the auditory and attentional systems. First, left extinction in dichotic versus diotic listening involves different parts of the right hemisphere, which explains the double dissociation between these 2 neglect symptoms. Second, contralesional extinction in the dichotic task relies on homologous regions in either hemisphere. Third, ipsilesional extinction in dichotic listening after LHD was associated with lesions of the intrahemispheric white matter, interrupting callosal fibres outside their midsagittal or periventricular trajectory. Fourth, bilateral symmetrical extinction was associated with large parieto-fronto-temporal LHD or smaller parieto-temporal RHD, which suggests that divided attention, supported by the right hemisphere, and auditory streaming, supported by the left, likely play a critical role. Copyright © 2018. Published by Elsevier Masson SAS.
Plasticity of white matter connectivity in phonetics experts.
Vandermosten, Maaike; Price, Cathy J; Golestani, Narly
2016-09-01
Phonetics experts are highly trained to analyze and transcribe speech, both with respect to faster changing, phonetic features, and to more slowly changing, prosodic features. Previously we reported that, compared to non-phoneticians, phoneticians had greater local brain volume in bilateral auditory cortices and the left pars opercularis of Broca's area, with training-related differences in the grey-matter volume of the left pars opercularis in the phoneticians group (Golestani et al. 2011). In the present study, we used diffusion MRI to examine white matter microstructure, indexed by fractional anisotropy, in (1) the long segment of arcuate fasciculus (AF_long), which is a well-known language tract that connects Broca's area, including left pars opercularis, to the temporal cortex, and in (2) the fibers arising from the auditory cortices. Most of these auditory fibers belong to three validated language tracts, namely to the AF_long, the posterior segment of the arcuate fasciculus and the middle longitudinal fasciculus. We found training-related differences in phoneticians in left AF_long, as well as group differences relative to non-experts in the auditory fibers (including the auditory fibers belonging to the left AF_long). Taken together, the results of both studies suggest that grey matter structural plasticity arising from phonetic transcription training in Broca's area is accompanied by changes to the white matter fibers connecting this very region to the temporal cortex. Our findings suggest expertise-related changes in white matter fibers connecting fronto-temporal functional hubs that are important for phonetic processing. Further studies can pursue this hypothesis by examining the dynamics of these expertise related grey and white matter changes as they arise during phonetic training.
2012-01-01
Background A flexed neck posture leads to non-specific activation of the brain. Sensory evoked cerebral potentials and focal brain blood flow have been used to evaluate the activation of the sensory cortex. We investigated the effects of a flexed neck posture on the cerebral potentials evoked by visual, auditory and somatosensory stimuli and on focal brain blood flow in the related sensory cortices. Methods Twelve healthy young adults received right visual hemi-field, binaural auditory and left median nerve stimuli while sitting with the neck in a resting and a flexed (20° flexion) position. Sensory evoked potentials were recorded during visual, auditory and somatosensory stimulation from the right occipital region, from Cz in accordance with the international 10–20 system, and from 2 cm posterior to C4, respectively. The oxidative-hemoglobin concentration was measured in the respective sensory cortex using near-infrared spectroscopy. Results Latencies of the late component of all sensory evoked potentials significantly shortened, and the amplitude of auditory evoked potentials increased, when the neck was in a flexed position. Oxidative-hemoglobin concentrations in the left and right visual cortices were higher during visual stimulation in the flexed neck position. The left visual cortex is responsible for receiving the visual information. In addition, oxidative-hemoglobin concentrations in the bilateral auditory cortex during auditory stimulation, and in the right somatosensory cortex during somatosensory stimulation, were higher in the flexed neck position. Conclusions Visual, auditory and somatosensory pathways were activated by neck flexion. The sensory cortices were selectively activated, reflecting the modalities in sensory projection to the cerebral cortex and inter-hemispheric connections. PMID:23199306
Matsuzaki, Junko; Kagitani-Shimono, Kuriko; Goto, Tetsu; Sanefuji, Wakako; Yamamoto, Tomoka; Sakai, Saeko; Uchida, Hiroyuki; Hirata, Masayuki; Mohri, Ikuko; Yorifuji, Shiro; Taniike, Masako
2012-01-25
The aim of this study was to investigate the differential responses of the primary auditory cortex to auditory stimuli in autistic spectrum disorder with or without auditory hypersensitivity. Auditory-evoked field values were obtained from 18 boys with autistic spectrum disorder (nine with and nine without auditory hypersensitivity) and 12 age-matched controls. The group with auditory hypersensitivity showed significantly more delayed M50/M100 peak latencies than the group without hypersensitivity or the controls. M50 dipole moments in the hypersensitivity group were larger than those in the other two groups [corrected]. M50/M100 peak latencies were correlated with the severity of auditory hypersensitivity; furthermore, severe hypersensitivity induced more behavioral problems. This study indicates that auditory hypersensitivity in autistic spectrum disorder is a characteristic response of the primary auditory cortex, possibly resulting from neurological immaturity or functional abnormalities in this region.
Gritsenko, Karina; Caldwell, William; Shaparin, Naum; Vydyanathan, Amaresh; Kosharskyy, Boleslav
2014-01-01
Tinnitus is described as an auditory phantom perception analogous to central neuropathic pain. Despite the high prevalence of this debilitating symptom, no intervention is recognized that reliably eliminates tinnitus symptoms; a cause has yet to be determined. A 65-year-old healthy man presented with a 3-year history of left-sided tinnitus. A full workup performed by the primary care physician, including blood tests for electrolyte imbalance, consultations by 2 independent otolaryngologists, and imaging, did not reveal abnormalities that could explain the etiology of the tinnitus. No other complaints were noted except for occasional minimal left-sided neck pain. Cervical spine x-ray showed degenerative changes with facet hypertrophy more pronounced on the left side. Subsequently, the patient underwent a diagnostic left-sided C2-C3 medial branch block, resulting in complete resolution of tinnitus for more than 6 hours. After successful radiofrequency ablation of the left C2-C3 medial branches, the patient became asymptomatic. At one-year follow-up, he continued to be symptom free. Sparse studies have shown interaction between the somatosensory and auditory systems at the dorsal cochlear nucleus (DCN), inferior colliculus, and parietal association areas. Upper cervical nerve (C2) electrical stimulation evokes potentials in the DCN, eliciting strong patterns of inhibition and weak excitation of the DCN principal cells. New evidence demonstrated successful transcutaneous electrical nerve stimulation (TENS) of the upper cervical nerve (C2) for treatment of somatic tinnitus in 240 patients. This case indicates that C2-C3 facet arthropathy may cause tinnitus, and radiofrequency ablation of the C2-C3 medial branches can provide an effective approach not previously considered.
Kell, Christian A; Neumann, Katrin; Behrens, Marion; von Gudenberg, Alexander W; Giraud, Anne-Lise
2018-03-01
We previously reported speaking-related activity changes associated with assisted recovery induced by a fluency shaping therapy program and with unassisted recovery from developmental stuttering (Kell et al., Brain 2009). While assisted recovery re-lateralized activity to the left hemisphere, unassisted recovery was specifically associated with activation of left BA 47/12 in the lateral orbitofrontal cortex. These findings suggested plastic changes in speaking-related functional connectivity between left hemispheric speech network nodes. We reanalyzed these data, involving 13 stuttering men before and after fluency shaping, 13 men who had recovered spontaneously from their stuttering, and 13 male control participants, and examined functional connectivity during overt vs. covert reading by means of psychophysiological interactions computed across left cortical regions involved in articulation control. Persistent stuttering was associated with reduced auditory-motor coupling and enhanced integration of somatosensory feedback between the supramarginal gyrus and the prefrontal cortex. Assisted recovery reduced this hyper-connectivity and increased functional connectivity between the articulatory motor cortex and the auditory feedback processing anterior superior temporal gyrus. In spontaneous recovery, both auditory-motor coupling and integration of somatosensory feedback were normalized. In addition, activity in the left orbitofrontal cortex and superior cerebellum appeared uncoupled from the rest of the speech production network. These data suggest that therapy and spontaneous recovery normalize left hemispheric speaking-related activity via an improvement of auditory-motor mapping. By contrast, long-lasting unassisted recovery from stuttering is additionally supported by a functional isolation of the superior cerebellum from the rest of the speech production network, through the pivotal left BA 47/12.
Łukaszewicz-Moszyńska, Zuzanna; Lachowska, Magdalena; Niemczyk, Kazimierz
2014-01-01
The purpose of this study was to evaluate possible relationships between duration of cochlear implant use and results of positron emission tomography (PET) measurements in the temporal lobes performed while subjects listened to speech stimuli. Other aspects investigated were whether implantation side impacts significantly on cortical representations of functions related to understanding speech (ipsi- or contralateral to the implanted side) and whether any correlation exists between cortical activation and speech therapy results. Objective cortical responses to acoustic stimulation were measured, using PET, in nine cochlear implant patients (age range: 15 to 50 years). All the patients suffered from bilateral deafness, were right-handed, and had no additional neurological deficits. They underwent PET imaging three times: immediately after the first fitting of the speech processor (activation of the cochlear implant), and one and two years later. A tendency towards increasing levels of activation in areas of the primary and secondary auditory cortex on the left side of the brain was observed. There was no clear effect of the side of implantation (left or right) on the degree of cortical activation in the temporal lobe. However, the PET results showed a correlation between degree of cortical activation and speech therapy results.
Cortical thickness as a contributor to abnormal oscillations in schizophrenia?
Edgar, J Christopher; Chen, Yu-Han; Lanza, Matthew; Howell, Breannan; Chow, Vivian Y; Heiken, Kory; Liu, Song; Wootton, Cassandra; Hunter, Michael A; Huang, Mingxiong; Miller, Gregory A; Cañive, José M
2014-01-01
Although brain rhythms depend on brain structure (e.g., gray and white matter), to our knowledge associations between brain oscillations and structure have not been investigated in healthy controls (HC) or in individuals with schizophrenia (SZ). Observing function-structure relationships, for example establishing an association between brain oscillations (defined in terms of amplitude or phase) and cortical gray matter, might inform models on the origins of psychosis. Given evidence of functional and structural abnormalities in primary/secondary auditory regions in SZ, the present study examined how superior temporal gyrus (STG) structure relates to auditory STG low-frequency and 40 Hz steady-state activity. Given changes in brain activity as a function of age, age-related associations in STG oscillatory activity were also examined. Thirty-nine individuals with SZ and 29 HC were recruited. 40 Hz amplitude-modulated tones of 1 s duration were presented. MEG and T1-weighted sMRI data were obtained. Using the sources localizing 40 Hz evoked steady-state activity (300 to 950 ms), left and right STG total power and inter-trial coherence were computed. Time-frequency group differences and associations with STG structure and age were also examined. Decreased total power and inter-trial coherence in SZ were observed in the left STG for initial post-stimulus low-frequency activity (~ 50 to 200 ms, ~ 4 to 16 Hz) as well as 40 Hz steady-state activity (~ 400 to 1000 ms). Left STG 40 Hz total power and inter-trial coherence were positively associated with left STG cortical thickness in HC, not in SZ. Left STG post-stimulus low-frequency and 40 Hz total power were positively associated with age, again only in controls. Left STG low-frequency and steady-state gamma abnormalities distinguish SZ and HC. Disease-associated damage to STG gray matter in schizophrenia may disrupt the age-related left STG gamma-band function-structure relationships observed in controls.
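The inter-trial coherence measure used in the study above has a standard definition: the magnitude of the trial-averaged unit phase vector at a given frequency, ranging from 0 (random phases) to 1 (perfect phase locking). The following is an illustrative NumPy sketch on simulated 40 Hz trials, not the authors' analysis code; the function name and the simulated `locked`/`jittered` data are our own.

```python
import numpy as np

def inter_trial_coherence(trials, sfreq, freq):
    """ITC at one frequency: |mean over trials of e^{i*phase_k}|.
    trials: (n_trials, n_samples) real-valued array."""
    n = trials.shape[1]
    t = np.arange(n) / sfreq
    basis = np.exp(-2j * np.pi * freq * t)
    coeffs = trials @ basis                # one complex Fourier coefficient per trial
    phases = coeffs / np.abs(coeffs)       # unit phase vectors
    return np.abs(phases.mean())           # 1 = perfect phase locking across trials

rng = np.random.default_rng(0)
sfreq, f = 1000.0, 40.0
t = np.arange(1000) / sfreq
# Phase-locked 40 Hz trials plus noise -> ITC close to 1
locked = np.sin(2 * np.pi * f * t) + 0.5 * rng.standard_normal((50, 1000))
# Trials with random phase offsets -> low ITC (on the order of 1/sqrt(n_trials))
jittered = np.stack([np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
                     for _ in range(50)]) + 0.5 * rng.standard_normal((50, 1000))
print(inter_trial_coherence(locked, sfreq, f))    # high
print(inter_trial_coherence(jittered, sfreq, f))  # low
```

The same quantity is what MEG toolboxes report as "phase-locking factor"; total power, by contrast, averages the squared coefficient magnitudes and so is insensitive to phase consistency.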
Naito, Y; Okazawa, H; Honjo, I; Hirano, S; Takahashi, H; Shiomi, Y; Hoji, W; Kawano, M; Ishizu, K; Yonekura, Y
1995-07-01
Six postlingually deaf patients using multi-channel cochlear implants were examined by positron emission tomography (PET) using 15O-labeled water. Changes in regional cerebral blood flow (rCBF) were measured during different sound stimuli. The stimulation paradigms employed consisted of two sets of three different conditions: (1) no sound stimulation with the speech processor of the cochlear implant system switched off, (2) hearing white noise, and (3) hearing sequential Japanese sentences. In the primary auditory area, the mean rCBF increase during noise stimulation was significantly greater on the side contralateral to the implant than on the ipsilateral side. Speech stimulation caused a significantly greater rCBF increase compared with noise stimulation in the left immediate auditory association area (P < 0.01), the bilateral auditory association areas (P < 0.01), and the posterior parts of the bilateral inferior frontal gyri: Broca's area (P < 0.01) and its right-hemisphere homologue (P < 0.05). Activation of cortices related to verbal and non-verbal sound recognition was clearly demonstrated in the current subjects, probably because complete silence was attained in the control condition.
The influence of gender on auditory and language cortical activation patterns: preliminary data.
Kocak, Mehmet; Ulmer, John L; Biswal, Bharat B; Aralasmak, Ayse; Daniels, David L; Mark, Leighton P
2005-10-01
Intersex cortical and functional asymmetry is an ongoing topic of investigation. In this pilot study, we sought to determine the influence of acoustic scanner noise and sex on auditory and language cortical activation patterns of the dominant hemisphere. Echoplanar functional MR imaging (fMRI; 1.5T) was performed on 12 healthy right-handed subjects (6 men and 6 women). Passive text listening tasks were employed in 2 different background acoustic scanner noise conditions (12 sections/2 seconds TR [6 Hz] and 4 sections/2 seconds TR [2 Hz]), with the first 4 sections in identical locations in the left hemisphere. Cross-correlation analysis was used to construct activation maps in subregions of auditory and language relevant cortex of the dominant (left) hemisphere, and activation areas were calculated by using coefficient thresholds of 0.5, 0.6, and 0.7. Text listening caused robust activation in anatomically defined auditory cortex, and weaker activation in language relevant cortex, in all 12 individuals. As a whole, there was no significant difference in regional cortical activation between the 2 background acoustic scanner noise conditions. When sex was considered, men showed a significantly (P < .01) greater change in left hemisphere activation during the high scanner noise rate condition than did women. This effect was significant (P < .05) in the left superior temporal gyrus, the posterior aspect of the left middle temporal gyrus and superior temporal sulcus, and the left inferior frontal gyrus. Increasing the rate of background acoustic scanner noise increased activation in auditory and language relevant cortex of the dominant hemisphere in men, whereas no such change in activation was observed in women. Our preliminary data suggest possible methodologic confounds of fMRI research and call for larger investigations to substantiate our findings and further characterize sex-based influences on hemispheric activation patterns.
Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Robin, Donald A.; Larson, Charles R.
2014-01-01
The ability to process auditory feedback for vocal pitch control is crucial during speaking and singing. Previous studies have suggested that musicians with absolute pitch (AP) develop specialized left-hemisphere mechanisms for pitch processing. The present study adopted an auditory feedback pitch perturbation paradigm combined with ERP recordings to test the hypothesis that left-hemisphere neural mechanisms enhance vocal pitch error detection and control in AP musicians compared with relative pitch (RP) musicians and non-musicians (NM). Results showed a stronger N1 response to pitch-shifted voice feedback in the right hemisphere for both AP and RP musicians compared with the NM group. However, the left-hemisphere P2 component activation was greater in AP and RP musicians compared with NMs, and also for the AP compared with RP musicians. The NM group was slower in generating compensatory vocal reactions to feedback pitch perturbation compared with musicians, and they failed to re-adjust their vocal pitch after the feedback perturbation was removed. These findings suggest that in the earlier stages of cortical neural processing, the right hemisphere is more active in musicians for detecting pitch changes in voice feedback. In the later stages, the left hemisphere is more active during the processing of auditory feedback for vocal motor control and seems to involve specialized mechanisms that facilitate pitch processing in AP compared with RP musicians. These findings indicate that the left-hemisphere mechanisms of AP ability are associated with improved auditory feedback pitch processing during vocal pitch control in tasks such as speaking or singing. PMID:24355545
Exploring the extent and function of higher-order auditory cortex in rhesus monkeys.
Poremba, Amy; Mishkin, Mortimer
2007-07-01
Just as cortical visual processing continues far beyond the boundaries of early visual areas, so too does cortical auditory processing continue far beyond the limits of early auditory areas. In passively listening rhesus monkeys examined with metabolic mapping techniques, cortical areas reactive to auditory stimulation were found to include the entire length of the superior temporal gyrus (STG) as well as several other regions within the temporal, parietal, and frontal lobes. Comparison of these widespread activations with those from an analogous study in vision supports the notion that audition, like vision, is served by several cortical processing streams, each specialized for analyzing a different aspect of sensory input, such as stimulus quality, location, or motion. Exploration with different classes of acoustic stimuli demonstrated that most portions of STG show greater activation on the right than on the left regardless of stimulus class. However, there is a striking shift to left-hemisphere "dominance" during passive listening to species-specific vocalizations, though this reverse asymmetry is observed only in the region of temporal pole. The mechanism for this left temporal pole "dominance" appears to be suppression of the right temporal pole by the left hemisphere, as demonstrated by a comparison of the results in normal monkeys with those in split-brain monkeys.
On pure word deafness, temporal processing, and the left hemisphere.
Stefanatos, Gerry A; Gershkoff, Arthur; Madigan, Sean
2005-07-01
Pure word deafness (PWD) is a rare neurological syndrome characterized by severe difficulties in understanding and reproducing spoken language, with sparing of written language comprehension and speech production. The pathognomonic disturbance of auditory comprehension appears to be associated with a breakdown in processes involved in mapping auditory input to lexical representations of words, but the functional locus of this disturbance and the localization of the responsible lesion have long been disputed. We report here on a woman with PWD resulting from a circumscribed unilateral infarct involving the left superior temporal lobe who demonstrated significant problems processing transitional spectrotemporal cues in both speech and nonspeech sounds. On speech discrimination tasks, she exhibited poor differentiation of stop consonant-vowel syllables distinguished by voicing onset and brief formant frequency transitions. Isolated formant transitions could be reliably discriminated only at very long durations (> 200 ms). By contrast, click fusion threshold, which depends on millisecond-level resolution of brief auditory events, was normal. These results suggest that the problems with speech analysis in this case were not secondary to general constraints on auditory temporal resolution. Rather, they point to a disturbance of left hemisphere auditory mechanisms that preferentially analyze rapid spectrotemporal variations in frequency. The findings have important implications for our conceptualization of PWD and its subtypes.
Wang, Jie; Wu, Dongyu; Chen, Yan; Yuan, Ying; Zhang, Meikui
2013-08-09
We investigated the effects of transcranial direct current stimulation (tDCS) on language improvement and cortical activation in nonfluent variant primary progressive aphasia (nfvPPA). A 67-year-old woman diagnosed with nfvPPA received sham tDCS for 5 days over the left posterior perisylvian region (PPR) in the morning and over left Broca's area in the afternoon in Phases A1 and A2, and tDCS for 5 days with an anodal electrode over the left PPR in the morning and over left Broca's area in the afternoon in Phases B1 and B2. The auditory word comprehension, picture naming, oral word reading and word repetition subtests of the Psycholinguistic Assessment in Chinese Aphasia (PACA) were administered before and after each phase. The EEG nonlinear index of approximate entropy (ApEn) was calculated before Phase A1, and after Phases B1 and B2. Our findings revealed that the patient improved greatly in the four subtests after anodal tDCS, and ApEn indices increased in both stimulated and non-stimulated areas. We demonstrated that anodal tDCS over the left PPR and Broca's area can improve the language performance of nfvPPA patients. tDCS may be used as an alternative therapeutic tool for PPA.
Werner, Sebastian; Noppeney, Uta
2010-02-17
Multisensory interactions have been demonstrated in a distributed neural system encompassing primary sensory and higher-order association areas. However, their distinct functional roles in multisensory integration remain unclear. This functional magnetic resonance imaging study dissociated the functional contributions of three cortical levels to multisensory integration in object categorization. Subjects actively categorized or passively perceived noisy auditory and visual signals emanating from everyday actions with objects. The experiment included two 2 x 2 factorial designs that manipulated either (1) the presence/absence or (2) the informativeness of the sensory inputs. These experimental manipulations revealed three patterns of audiovisual interactions. (1) In primary auditory cortices (PACs), a concurrent visual input increased the stimulus salience by amplifying the auditory response regardless of task-context. Effective connectivity analyses demonstrated that this automatic response amplification is mediated via both direct and indirect [via superior temporal sulcus (STS)] connectivity to visual cortices. (2) In STS and intraparietal sulcus (IPS), audiovisual interactions sustained the integration of higher-order object features and predicted subjects' audiovisual benefits in object categorization. (3) In the left ventrolateral prefrontal cortex (vlPFC), explicit semantic categorization resulted in suppressive audiovisual interactions as an index for multisensory facilitation of semantic retrieval and response selection. In conclusion, multisensory integration emerges at multiple processing stages within the cortical hierarchy. The distinct profiles of audiovisual interactions dissociate audiovisual salience effects in PACs, formation of object representations in STS/IPS and audiovisual facilitation of semantic categorization in vlPFC. 
Furthermore, in STS/IPS, the profiles of audiovisual interactions were behaviorally relevant and predicted subjects' multisensory benefits in performance accuracy.
Stoppelman, Nadav; Harpaz, Tamar; Ben-Shachar, Michal
2013-05-01
Speech processing engages multiple cortical regions in the temporal, parietal, and frontal lobes. Isolating speech-sensitive cortex in individual participants is of major clinical and scientific importance. This task is complicated by the fact that responses to sensory and linguistic aspects of speech are tightly packed within the posterior superior temporal cortex. In functional magnetic resonance imaging (fMRI), various baseline conditions are typically used in order to isolate speech-specific from basic auditory responses. Using a short, continuous sampling paradigm, we show that reversed ("backward") speech, a commonly used auditory baseline for speech processing, removes much of the speech responses in frontal and temporal language regions of adult individuals. On the other hand, signal correlated noise (SCN) serves as an effective baseline for removing primary auditory responses while maintaining strong signals in the same language regions. We show that the response to reversed speech in left inferior frontal gyrus decays significantly faster than the response to speech, thus suggesting that this response reflects bottom-up activation of speech analysis followed up by top-down attenuation once the signal is classified as nonspeech. The results overall favor SCN as an auditory baseline for speech processing.
Leftward lateralization of auditory cortex underlies holistic sound perception in Williams syndrome.
Wengenroth, Martina; Blatow, Maria; Bendszus, Martin; Schneider, Peter
2010-08-23
Individuals with the rare genetic disorder Williams-Beuren syndrome (WS) are known for their characteristic auditory phenotype including strong affinity to music and sounds. In this work we attempted to pinpoint a neural substrate for the characteristic musicality in WS individuals by studying the structure-function relationship of their auditory cortex. Since WS subjects had only minor musical training due to psychomotor constraints we hypothesized that any changes compared to the control group would reflect the contribution of genetic factors to auditory processing and musicality. Using psychoacoustics, magnetoencephalography and magnetic resonance imaging, we show that WS individuals exhibit extreme and almost exclusive holistic sound perception, which stands in marked contrast to the even distribution of this trait in the general population. Functionally, this was reflected by increased amplitudes of left auditory evoked fields. On the structural level, volume of the left auditory cortex was 2.2-fold increased in WS subjects as compared to control subjects. Equivalent volumes of the auditory cortex have been previously reported for professional musicians. There has been an ongoing debate in the neuroscience community as to whether increased gray matter of the auditory cortex in musicians is attributable to the amount of training or innate disposition. In this study musical education of WS subjects was negligible and control subjects were carefully matched for this parameter. Therefore our results not only unravel the neural substrate for this particular auditory phenotype, but in addition propose WS as a unique genetic model for training-independent auditory system properties.
2012-01-01
Background About 25% of schizophrenia patients with auditory hallucinations are refractory to pharmacotherapy and electroconvulsive therapy. We conducted a deep transcranial magnetic stimulation (TMS) pilot study in order to evaluate the potential clinical benefit of repeated left temporoparietal cortex stimulation in these patients. The results were encouraging, but a sham-controlled study was needed to rule out a placebo effect. Methods A total of 18 schizophrenic patients with refractory auditory hallucinations were recruited from the outpatient populations of Beer Yaakov MHC and other hospitals. Patients received 10 daily treatment sessions with low-frequency (1 Hz for 10 min) deep TMS applied over the left temporoparietal cortex, using the H1 coil at an intensity of 110% of the motor threshold. The procedure was either real or sham, according to patient randomization. Patients were evaluated via the Auditory Hallucinations Rating Scale, the Scale for the Assessment of Positive Symptoms-Negative Symptoms, Clinical Global Impressions, and the Quality of Life Questionnaire. Results In all, 10 patients completed the treatment (10 TMS sessions). Auditory hallucination scores of both groups improved; however, there was no statistical difference in any of the scales between the active and the sham treated groups. Conclusions Low-frequency deep TMS to the left temporoparietal cortex using the protocol mentioned above has no statistically significant effect on auditory hallucinations or the other clinical scales measured in schizophrenic patients. Trial Registration Clinicaltrials.gov identifier: NCT00564096. PMID:22559192
Analysis of MEG Auditory 40-Hz Response by Event-Related Coherence
Tanaka, Keita; Kawakatsu, Masaki; Yunokuchi, Kazutomo
We examined the event-related coherence of magnetoencephalography (auditory 40-Hz response) while the subjects were presented with click acoustic stimuli at a repetition rate of 40 Hz in the 'Attend' and 'Reading' conditions. MEG signals were recorded from 5 healthy males using the whole-head SQUID system. Event-related coherence was used to provide a measure of the short-lived synchronization that occurs in response to a stimulus. The results showed that the peak value of coherence in the auditory 40-Hz response between the right and left temporal regions was significantly larger when subjects paid attention to the stimuli ('Attend' condition) than when they ignored them ('Reading' condition). Moreover, the latency of coherence in the auditory 40-Hz response was significantly shorter when the subjects paid attention to the stimuli ('Attend' condition). These results suggest that phase synchronization between the right and left temporal regions in the auditory 40-Hz response correlates closely with selective attention.
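Event-related coherence between two channels, as referenced above, is commonly computed across trials at a fixed frequency: the squared magnitude of the summed cross-spectrum, normalized by the summed auto-spectra. The sketch below is an illustrative NumPy implementation on simulated two-channel trials under that standard definition, not the authors' MEG pipeline; the function name and the simulated `left`/`right` channels are our own.

```python
import numpy as np

def event_related_coherence(x, y, sfreq, freq):
    """Cross-trial coherence between two channels at one frequency:
    |sum_k X_k * conj(Y_k)|^2 / (sum_k |X_k|^2 * sum_k |Y_k|^2),
    where k indexes trials and X_k, Y_k are Fourier coefficients.
    x, y: (n_trials, n_samples) real-valued arrays."""
    n = x.shape[1]
    t = np.arange(n) / sfreq
    basis = np.exp(-2j * np.pi * freq * t)
    X, Y = x @ basis, y @ basis                   # one coefficient per trial
    num = np.abs(np.sum(X * np.conj(Y))) ** 2
    den = np.sum(np.abs(X) ** 2) * np.sum(np.abs(Y) ** 2)
    return num / den                              # 1 = fixed phase relation across trials

rng = np.random.default_rng(1)
sfreq, f, n = 1000.0, 40.0, 1000
t = np.arange(n) / sfreq
sig = np.sin(2 * np.pi * f * t)
noise = lambda: 0.5 * rng.standard_normal((40, n))
left, right = sig + noise(), sig + noise()        # shared 40 Hz drive -> high coherence
print(event_related_coherence(left, right, sfreq, f))    # high
print(event_related_coherence(noise(), noise(), sfreq, f))  # low (~1/n_trials)
```

A shared stimulus-locked 40 Hz component keeps the inter-channel phase difference constant across trials, driving the measure toward 1; independent noise channels yield values near the 1/n_trials chance floor.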
Khaleeli, Z; Cercignani, M; Audoin, B; Ciccarelli, O; Miller, D H; Thompson, A J
2007-08-01
Disability in primary progressive multiple sclerosis (PPMS) has been correlated with damage to the normal appearing brain tissues. Magnetization transfer ratio (MTR) and volume changes indicate that much of this damage occurs in the normal appearing grey matter, but the clinical significance of this remains uncertain. We aimed to localize these changes to distinct grey matter regions, and investigate the clinical impact of the MTR changes. 46 patients with early PPMS and 23 controls underwent MT and high-resolution T1-weighted imaging. Patients were scored on the Expanded Disability Status Scale (EDSS), Multiple Sclerosis Functional Composite and subtests (Nine-Hole Peg Test, Timed Walk Test, Paced Auditory Serial Addition Test [PASAT]). Grey matter volume and MTR were compared between patients and controls, adjusting for age. Mean MTR for significant regions within the motor network and in areas relevant to PASAT performance were correlated with appropriate clinical scores, adjusting for grey matter volume. Patients showed reduced MTR and atrophy in the right pre- and left post-central gyri, right middle frontal gyrus, left insula, and thalamus bilaterally. Reduced MTR without significant atrophy occurred in the left pre-central gyrus, left superior frontal gyri, bilateral superior temporal gyri, right insula and visual cortex. Higher EDSS correlated with lower MTR in the right primary motor cortex (BA 4). In conclusion, localized grey matter damage occurs in early PPMS, and MTR change is more widespread than atrophy. Damage demonstrated by reduced MTR is clinically eloquent.
Speech processing: from peripheral to hemispheric asymmetry of the auditory system.
Lazard, Diane S; Collette, Jean-Louis; Perrot, Xavier
2012-01-01
Language processing from the cochlea to auditory association cortices shows side-dependent specificities with an apparent left hemispheric dominance. The aim of this article was to propose to nonspeech specialists a didactic review of two complementary theories about hemispheric asymmetry in speech processing. Starting from anatomico-physiological and clinical observations of auditory asymmetry and interhemispheric connections, this review then exposes behavioral (dichotic listening paradigm) as well as functional (functional magnetic resonance imaging and positron emission tomography) experiments that assessed hemispheric specialization for speech processing. Even though speech at an early phonological level is regarded as being processed bilaterally, a left-hemispheric dominance exists for higher-level processing. This asymmetry may arise from a segregation of the speech signal, broken apart within nonprimary auditory areas in two distinct temporal integration windows--a fast one on the left and a slower one on the right--modeled through the asymmetric sampling in time theory or a spectro-temporal trade-off, with a higher temporal resolution in the left hemisphere and a higher spectral resolution in the right hemisphere, modeled through the spectral/temporal resolution trade-off theory. Both theories deal with the concept that lower-order tuning principles for acoustic signal might drive higher-order organization for speech processing. However, the precise nature, mechanisms, and origin of speech processing asymmetry are still being debated. Finally, an example of hemispheric asymmetry alteration, which has direct clinical implications, is given through the case of auditory aging that mixes peripheral disorder and modifications of central processing. Copyright © 2011 The American Laryngological, Rhinological, and Otological Society, Inc.
Engineer, C.T.; Centanni, T.M.; Im, K.W.; Borland, M.S.; Moreno, N.A.; Carraway, R.S.; Wilson, L.G.; Kilgard, M.P.
2014-01-01
Although individuals with autism are known to have significant communication problems, the cellular mechanisms responsible for impaired communication are poorly understood. Valproic acid (VPA) is an anticonvulsant that is a known risk factor for autism in prenatally exposed children. Prenatal VPA exposure in rats causes numerous neural and behavioral abnormalities that mimic autism. We predicted that VPA exposure may lead to auditory processing impairments which may contribute to the deficits in communication observed in individuals with autism. In this study, we document auditory cortex responses in rats prenatally exposed to VPA. We recorded local field potentials and multiunit responses to speech sounds in the primary auditory cortex, anterior auditory field, ventral auditory field, and posterior auditory field in VPA-exposed and control rats. Prenatal VPA exposure severely degrades the precise spatiotemporal patterns evoked by speech sounds in secondary, but not primary, auditory cortex. This result parallels findings in humans and suggests that secondary auditory fields may be more sensitive to environmental disturbances and may provide insight into possible mechanisms related to auditory deficits in individuals with autism. PMID:24639033
Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Robin, Donald A; Larson, Charles R
2014-02-01
The ability to process auditory feedback for vocal pitch control is crucial during speaking and singing. Previous studies have suggested that musicians with absolute pitch (AP) develop specialized left-hemisphere mechanisms for pitch processing. The present study adopted an auditory feedback pitch perturbation paradigm combined with ERP recordings to test the hypothesis whether the neural mechanisms of the left-hemisphere enhance vocal pitch error detection and control in AP musicians compared with relative pitch (RP) musicians and non-musicians (NM). Results showed a stronger N1 response to pitch-shifted voice feedback in the right-hemisphere for both AP and RP musicians compared with the NM group. However, the left-hemisphere P2 component activation was greater in AP and RP musicians compared with NMs and also for the AP compared with RP musicians. The NM group was slower in generating compensatory vocal reactions to feedback pitch perturbation compared with musicians, and they failed to re-adjust their vocal pitch after the feedback perturbation was removed. These findings suggest that in the earlier stages of cortical neural processing, the right hemisphere is more active in musicians for detecting pitch changes in voice feedback. In the later stages, the left-hemisphere is more active during the processing of auditory feedback for vocal motor control and seems to involve specialized mechanisms that facilitate pitch processing in the AP compared with RP musicians. These findings indicate that the left hemisphere mechanisms of AP ability are associated with improved auditory feedback pitch processing during vocal pitch control in tasks such as speaking or singing. Copyright © 2013 Elsevier Inc. All rights reserved.
Agnosia for accents in primary progressive aphasia☆
Fletcher, Phillip D.; Downey, Laura E.; Agustus, Jennifer L.; Hailstone, Julia C.; Tyndall, Marina H.; Cifelli, Alberto; Schott, Jonathan M.; Warrington, Elizabeth K.; Warren, Jason D.
2013-01-01
As an example of complex auditory signal processing, the analysis of accented speech is potentially vulnerable in the progressive aphasias. However, the brain basis of accent processing and the effects of neurodegenerative disease on this processing are not well understood. Here we undertook a detailed neuropsychological study of a patient, AA with progressive nonfluent aphasia, in whom agnosia for accents was a prominent clinical feature. We designed a battery to assess AA's ability to process accents in relation to other complex auditory signals. AA's performance was compared with a cohort of 12 healthy age and gender matched control participants and with a second patient, PA, who had semantic dementia with phonagnosia and prosopagnosia but no reported difficulties with accent processing. Relative to healthy controls, the patients showed distinct profiles of accent agnosia. AA showed markedly impaired ability to distinguish change in an individual's accent despite being able to discriminate phonemes and voices (apperceptive accent agnosia); and in addition, a severe deficit of accent identification. In contrast, PA was able to perceive changes in accents, phonemes and voices normally, but showed a relatively mild deficit of accent identification (associative accent agnosia). Both patients showed deficits of voice and environmental sound identification, however PA showed an additional deficit of face identification whereas AA was able to identify (though not name) faces normally. These profiles suggest that AA has conjoint (or interacting) deficits involving both apperceptive and semantic processing of accents, while PA has a primary semantic (associative) deficit affecting accents along with other kinds of auditory objects and extending beyond the auditory modality. Brain MRI revealed left peri-Sylvian atrophy in case AA and relatively focal asymmetric (predominantly right sided) temporal lobe atrophy in case PA. 
These cases provide further evidence for the fractionation of brain mechanisms for complex sound analysis, and for the stratification of progressive aphasia syndromes according to the signature of nonverbal auditory deficits they produce. PMID:23721780
2017-05-05
Self-regulation of the primary auditory cortex via directed attention mediated by real-time fMRI neurofeedback. Presented at the 2017 Radiological Society of North America Conference. Sherwood et al.: ... auditory cortex hyperactivity by self-regulation of the primary auditory cortex (A1) based on real-time functional magnetic resonance imaging neurofeedback.
Figueiredo, Carolina Calsolari; de Andrade, Adriana Neves; Marangoni-Castan, Andréa Tortosa; Gil, Daniela; Suriano, Italo Capraro
2015-01-01
ABSTRACT Objective To investigate the long-term efficacy of acoustically controlled auditory training in adults after traumatic brain injury. Methods A total of six audiologically normal individuals aged between 20 and 37 years were studied. They had suffered severe traumatic brain injury with diffuse axonal lesion and had undergone an acoustically controlled auditory training program approximately one year before. The results obtained in the behavioral and electrophysiological evaluation of auditory processing immediately after acoustically controlled auditory training were compared with reassessment findings one year later. Results Quantitative analysis of the auditory brainstem response showed increased absolute latency of all waves and interpeak intervals, bilaterally, when comparing both evaluations. All waves also increased in amplitude; the increase was statistically significant for wave V in the right ear and wave III in the left ear. As to P3, decreased latency and increased amplitude were found for both ears at reassessment. The previous and current behavioral assessments showed similar results, except for the staggered spondaic words test in the left ear and the number of errors on the dichotic consonant-vowel test. Conclusion The acoustically controlled auditory training was effective in the long run, since better latency and amplitude results were observed in the electrophysiological evaluation, in addition to stability of behavioral measures one year after training. PMID:26676270
Plastic brain mechanisms for attaining auditory temporal order judgment proficiency.
Bernasconi, Fosco; Grivel, Jeremy; Murray, Micah M; Spierer, Lucas
2010-04-15
Accurate perception of the order of occurrence of sensory information is critical for the building up of coherent representations of the external world from ongoing flows of sensory inputs. While some psychophysical evidence reports that performance on temporal perception can improve, the underlying neural mechanisms remain unresolved. Using electrical neuroimaging analyses of auditory evoked potentials (AEPs), we identified the brain dynamics and mechanism supporting improvements in auditory temporal order judgment (TOJ) during the course of the first vs. latter half of the experiment. Training-induced changes in brain activity were first evident 43-76 ms post stimulus onset and followed from topographic, rather than pure strength, AEP modulations. Improvements in auditory TOJ accuracy thus followed from changes in the configuration of the underlying brain networks during the initial stages of sensory processing. Source estimations revealed an increase in the lateralization of initially bilateral posterior sylvian region (PSR) responses at the beginning of the experiment to left-hemisphere dominance at its end. Further supporting the critical role of left and right PSR in auditory TOJ proficiency, as the experiment progressed, responses in the left and right PSR went from being correlated to un-correlated. These collective findings provide insights on the neurophysiologic mechanism and plasticity of temporal processing of sounds and are consistent with models based on spike timing dependent plasticity. Copyright 2010 Elsevier Inc. All rights reserved.
Lew, Henry L; Lee, Eun Ha; Miyoshi, Yasushi; Chang, Douglas G; Date, Elaine S; Jerger, James F
2004-03-01
Because of the violent nature of traumatic brain injury, traumatic brain injury patients are susceptible to various types of trauma involving the auditory system. We report a case of a 55-yr-old man who presented with communication problems after traumatic brain injury. Initial results from behavioral audiometry and Weber/Rinne tests were not reliable because of poor cooperation. He was transferred to our service for inpatient rehabilitation, where review of the initial head computed tomographic scan showed only left temporal bone fracture. Brainstem auditory-evoked potential was then performed to evaluate his hearing function. The results showed bilateral absence of auditory-evoked responses, which strongly suggested bilateral deafness. This finding led to a follow-up computed tomographic scan, with focus on bilateral temporal bones. A subtle transverse fracture of the right temporal bone was then detected, in addition to the left temporal bone fracture previously identified. Like children with hearing impairment, traumatic brain injury patients may not be able to verbalize their auditory deficits in a timely manner. If hearing loss is suspected in a patient who is unable to participate in traditional behavioral audiometric testing, brainstem auditory-evoked potential may be an option for evaluating hearing dysfunction.
Leftward Lateralization of Auditory Cortex Underlies Holistic Sound Perception in Williams Syndrome
Bendszus, Martin; Schneider, Peter
2010-01-01
Background Individuals with the rare genetic disorder Williams-Beuren syndrome (WS) are known for their characteristic auditory phenotype including strong affinity to music and sounds. In this work we attempted to pinpoint a neural substrate for the characteristic musicality in WS individuals by studying the structure-function relationship of their auditory cortex. Since WS subjects had only minor musical training due to psychomotor constraints we hypothesized that any changes compared to the control group would reflect the contribution of genetic factors to auditory processing and musicality. Methodology/Principal Findings Using psychoacoustics, magnetoencephalography and magnetic resonance imaging, we show that WS individuals exhibit extreme and almost exclusive holistic sound perception, which stands in marked contrast to the even distribution of this trait in the general population. Functionally, this was reflected by increased amplitudes of left auditory evoked fields. On the structural level, volume of the left auditory cortex was 2.2-fold increased in WS subjects as compared to control subjects. Equivalent volumes of the auditory cortex have been previously reported for professional musicians. Conclusions/Significance There has been an ongoing debate in the neuroscience community as to whether increased gray matter of the auditory cortex in musicians is attributable to the amount of training or innate disposition. In this study musical education of WS subjects was negligible and control subjects were carefully matched for this parameter. Therefore our results not only unravel the neural substrate for this particular auditory phenotype, but in addition propose WS as a unique genetic model for training-independent auditory system properties. PMID:20808792
Brechmann, André; Baumgart, Frank; Scheich, Henning
2002-01-01
Recognition of sound patterns must be largely independent of level and of masking or jamming background sounds. Auditory patterns of relevance in numerous environmental sounds, species-specific vocalizations and speech are frequency modulations (FM). Level-dependent activation of the human auditory cortex (AC) in response to a large set of upward and downward FM tones was studied with low-noise (48 dB) functional magnetic resonance imaging at 3 Tesla. Separate analysis in four territories of AC was performed in each individual brain using a combination of anatomical landmarks and spatial activation criteria for their distinction. Activation of territory T1b (including primary AC) showed the most robust level dependence over the large range of 48-102 dB in terms of activated volume and blood oxygen level dependent contrast (BOLD) signal intensity. The left nonprimary territory T2 also showed a good correlation of level with activated volume but, in contrast to T1b, not with BOLD signal intensity. These findings are compatible with level coding mechanisms observed in animal AC. A systematic increase of activation with level was not observed for T1a (anterior of Heschl's gyrus) and T3 (on the planum temporale). Thus these areas might not be specifically involved in processing of the overall intensity of FM. The rostral territory T1a of the left hemisphere exhibited highest activation when the FM sound level fell 12 dB below scanner noise. This supports the previously suggested special involvement of this territory in foreground-background decomposition tasks. Overall, AC of the left hemisphere showed a stronger level-dependence of signal intensity and activated volume than the right hemisphere. But any side differences of signal intensity at given levels were lateralized to right AC. This might point to an involvement of the right hemisphere in more specific aspects of FM processing than level coding.
Strategy in short-term memory for pictures in childhood: a near-infrared spectroscopy study.
Sanefuji, Masafumi; Takada, Yui; Kimura, Naoko; Torisu, Hiroyuki; Kira, Ryutaro; Ishizaki, Yoshito; Hara, Toshiro
2011-02-01
In Baddeley's working memory model, verbalizable visual material such as pictures are recoded into a phonological form and then rehearsed, while auditory material is rehearsed directly. The recoding and rehearsal processes are mediated by articulatory control process in the left ventrolateral prefrontal cortex (VLPFC). Developmentally, the phonological strategy for serially-presented visual material emerges around 7 years of age, while that for auditory material is consistently present by 4 years of age. However, the strategy change may actually be correlated with memory ability as this usually increases with age. To investigate the relationship between the strategy for pictures and memory ability, we monitored the left VLPFC activation in 5 to 11 year-old children during free recall of visually- or auditorily-presented familiar objects using event-related near-infrared spectroscopy. We hypothesized that the phonological strategy of rehearsal and recoding for visual material would provoke greater activation than only rehearsal for auditory material in the left VLPFC. Therefore, we presumed that the activation difference for visual material compared with auditory material in the left VLPFC may represent the tendency to use a phonological strategy. We found that the activation difference in the left VLPFC showed a significant positive correlation with memory ability but not with age, suggesting that children with high memory ability make more use of phonological strategy for pictures. The present study provides functional evidence that the strategy in short-term memory for pictures shifts gradually from non-phonological to phonological as memory ability increases in childhood. Copyright © 2010 Elsevier Inc. All rights reserved.
Role of the right inferior parietal cortex in auditory selective attention: An rTMS study.
Bareham, Corinne A; Georgieva, Stanimira D; Kamke, Marc R; Lloyd, David; Bekinschtein, Tristan A; Mattingley, Jason B
2018-02-01
Selective attention is the process of directing limited capacity resources to behaviourally relevant stimuli while ignoring competing stimuli that are currently irrelevant. Studies in healthy human participants and in individuals with focal brain lesions have suggested that the right parietal cortex is crucial for resolving competition for attention. Following right-hemisphere damage, for example, patients may have difficulty reporting a brief, left-sided stimulus if it occurs with a competitor on the right, even though the same left stimulus is reported normally when it occurs alone. Such "extinction" of contralesional stimuli has been documented for all the major sense modalities, but it remains unclear whether its occurrence reflects involvement of one or more specific subregions of the temporo-parietal cortex. Here we employed repetitive transcranial magnetic stimulation (rTMS) over the right hemisphere to examine the effect of disruption of two candidate regions - the supramarginal gyrus (SMG) and the superior temporal gyrus (STG) - on auditory selective attention. Eighteen neurologically normal, right-handed participants performed an auditory task, in which they had to detect target digits presented within simultaneous dichotic streams of spoken distractor letters in the left and right channels, both before and after 20 min of 1 Hz rTMS over the SMG, STG or a somatosensory control site (S1). Across blocks, participants were asked to report on auditory streams in the left, right, or both channels, which yielded focused and divided attention conditions. Performance was unchanged for the two focused attention conditions, regardless of stimulation site, but was selectively impaired for contralateral left-sided targets in the divided attention condition following stimulation of the right SMG, but not the STG or S1. Our findings suggest a causal role for the right inferior parietal cortex in auditory selective attention. Copyright © 2017 Elsevier Ltd. All rights reserved.
Left and right reaction time differences to the sound intensity in normal and AD/HD children.
Baghdadi, Golnaz; Towhidkhah, Farzad; Rostami, Reza
2017-06-01
The right hemisphere, which is attributed to sound intensity discrimination, is abnormal in people with attention deficit/hyperactivity disorder (AD/HD). However, it has not been studied whether this right-hemisphere deficit influences the intensity sensation of AD/HD subjects. In this study, the sensitivity of normal and AD/HD children to sound intensity was investigated. Nineteen normal and fourteen AD/HD children participated in the study and performed a simple auditory reaction time task. Using regression analysis, the sensitivity of the right and left ears to various sound intensity levels was examined. The statistical results showed that the sensitivity of AD/HD subjects to intensity was lower than that of the normal group (p < 0.0001). The left and right pathways of the auditory system had the same pattern of response in AD/HD subjects (p > 0.05). In the control group, however, the left pathway was more sensitive to the sound intensity level than the right one (p = 0.0156). It is probable that the deficit of the right hemisphere has influenced the auditory sensitivity of AD/HD children. Possible deficits of other auditory system components, such as the middle ear, inner ear, or the brainstem nuclei involved, may also contribute to the observed results. The development of new biomarkers based on the sensitivity of the brain hemispheres to sound intensity is suggested to estimate the risk of AD/HD. Designing new techniques to correct auditory feedback in behavioral treatment sessions is also proposed. Copyright © 2017. Published by Elsevier B.V.
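The per-ear regression analysis this record describes amounts to fitting a line to reaction time as a function of sound intensity and comparing slopes. Below is a minimal sketch with made-up values (not the study's data); a steeper negative slope is read as greater sensitivity of that pathway to intensity.

```python
# Illustrative sketch (hypothetical data): linear fit of reaction time
# vs. sound intensity for each ear, as in the regression analysis above.
import numpy as np

intensity = np.array([40.0, 50.0, 60.0, 70.0, 80.0])   # dB SPL, assumed
rt_left = np.array([420.0, 395.0, 372.0, 350.0, 330.0])  # ms, invented
rt_right = np.array([410.0, 400.0, 391.0, 383.0, 376.0])  # ms, invented

# Degree-1 polynomial fit returns (slope, intercept)
slope_l, intercept_l = np.polyfit(intensity, rt_left, 1)
slope_r, intercept_r = np.polyfit(intensity, rt_right, 1)

# More negative slope = reaction time falls faster as intensity rises,
# i.e. that pathway is more sensitive to intensity changes.
print(f"left-ear slope:  {slope_l:.2f} ms/dB")
print(f"right-ear slope: {slope_r:.2f} ms/dB")
```

With these invented values the left-ear slope (about -2.25 ms/dB) is steeper than the right-ear slope (about -0.85 ms/dB), the kind of asymmetry the study reports for its control group.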
Congenital deafness affects deep layers in primary and secondary auditory cortex
Berger, Christoph; Kühne, Daniela; Scheper, Verena
2017-01-01
Abstract Congenital deafness leads to functional deficits in the auditory cortex for which early cochlear implantation can effectively compensate. Most of these deficits have been demonstrated functionally. Furthermore, the majority of previous studies on deafness have involved the primary auditory cortex; knowledge of higher-order areas is limited to effects of cross-modal reorganization. In this study, we compared the cortical cytoarchitecture of four cortical areas in adult hearing and congenitally deaf cats (CDCs): the primary auditory field A1; two secondary auditory fields, namely the dorsal zone and the second auditory field (A2); and a reference visual association field (area 7), in the same sections stained using either Nissl or SMI-32 antibodies. The general cytoarchitectonic pattern and the area-specific characteristics in the auditory cortex remained unchanged in animals with congenital deafness. Whereas area 7 did not differ between the groups investigated, all auditory fields were slightly thinner in CDCs, this being caused by reduced thickness of layers IV-VI. The study documents that, while the cytoarchitectonic patterns are in general independent of sensory experience, reduced layer thickness is observed in both primary and higher-order auditory fields in layer IV and the infragranular layers. The study demonstrates differences in the effects of congenital deafness between supragranular and other cortical layers, but similar dystrophic effects in all investigated auditory fields. PMID:28643417
Nonverbal auditory agnosia with lesion to Wernicke's area.
Saygin, Ayse Pinar; Leech, Robert; Dick, Frederic
2010-01-01
We report the case of patient M, who suffered unilateral left posterior temporal and parietal damage, brain regions typically associated with language processing. Language function largely recovered since the infarct, with no measurable speech comprehension impairments. However, the patient exhibited a severe impairment in nonverbal auditory comprehension. We carried out extensive audiological and behavioral testing in order to characterize M's unusual neuropsychological profile. We also examined the patient's and controls' neural responses to verbal and nonverbal auditory stimuli using functional magnetic resonance imaging (fMRI). We verified that the patient exhibited persistent and severe auditory agnosia for nonverbal sounds in the absence of verbal comprehension deficits or peripheral hearing problems. Acoustical analyses suggested that his residual processing of a minority of environmental sounds might rely on his speech processing abilities. In the patient's brain, contralateral (right) temporal cortex as well as perilesional (left) anterior temporal cortex were strongly responsive to verbal, but not to nonverbal sounds, a pattern that stands in marked contrast to the controls' data. This substantial reorganization of auditory processing likely supported the recovery of M's speech processing.
Analysis of speech sounds is left-hemisphere predominant at 100-150ms after sound onset.
Rinne, T; Alho, K; Alku, P; Holi, M; Sinkkonen, J; Virtanen, J; Bertrand, O; Näätänen, R
1999-04-06
Hemispheric specialization of human speech processing has been found in brain imaging studies using fMRI and PET. Due to the restricted time resolution, these methods cannot, however, determine the stage of auditory processing at which this specialization first emerges. We used a dense electrode array covering the whole scalp to record the mismatch negativity (MMN), an event-related brain potential (ERP) automatically elicited by occasional changes in sounds, which ranged from non-phonetic (tones) to phonetic (vowels). MMN can be used to probe auditory central processing on a millisecond scale with no attention-dependent task requirements. Our results indicate that speech processing occurs predominantly in the left hemisphere at the early, pre-attentive level of auditory analysis.
Vannest, Jennifer J.; Karunanayaka, Prasanna R.; Altaye, Mekibib; Schmithorst, Vincent J.; Plante, Elena M.; Eaton, Kenneth J.; Rasmussen, Jerod M.; Holland, Scott K.
2009-01-01
Purpose To use functional MRI methods to visualize a network of auditory and language-processing brain regions associated with processing an aurally presented story. We compare a passive listening (PL) story paradigm to an active-response (AR) version including on-line performance monitoring and a sparse acquisition technique. Materials/Methods Twenty children (ages 11-13) completed PL and AR story processing tasks. The PL version presented alternating 30-second blocks of stories and tones; the AR version presented story segments, comprehension questions, and 5-second tone sequences, with fMRI acquisitions between stimuli. fMRI data were analyzed using a general linear model approach and paired t-tests identifying significant group activation. Results Both tasks showed activation in the primary auditory cortex, superior temporal gyrus bilaterally, and left inferior frontal gyrus. The AR task demonstrated more extensive activation, including dorsolateral prefrontal cortex and anterior/posterior cingulate cortex. Comparison of effect size in each paradigm showed a larger effect for the AR paradigm in a left inferior frontal ROI. Conclusion Activation patterns for story processing in children are similar in passive listening and active-response tasks. Increases in the extent and magnitude of activation in the AR task are likely associated with memory and attention resources engaged across acquisition intervals. PMID:19306445
Vannest, Jennifer J; Karunanayaka, Prasanna R; Altaye, Mekibib; Schmithorst, Vincent J; Plante, Elena M; Eaton, Kenneth J; Rasmussen, Jerod M; Holland, Scott K
2009-04-01
To use functional MRI (fMRI) methods to visualize a network of auditory and language-processing brain regions associated with processing an aurally-presented story. We compare a passive listening (PL) story paradigm to an active-response (AR) version including online performance monitoring and a sparse acquisition technique. Twenty children (ages 11-13 years) completed PL and AR story processing tasks. The PL version presented alternating 30-second blocks of stories and tones; the AR version presented story segments, comprehension questions, and 5-second tone sequences, with fMRI acquisitions between stimuli. fMRI data was analyzed using a general linear model approach and paired t-test identifying significant group activation. Both tasks showed activation in the primary auditory cortex, superior temporal gyrus bilaterally, and left inferior frontal gyrus (IFG). The AR task demonstrated more extensive activation, including the dorsolateral prefrontal cortex and anterior/posterior cingulate cortex. Comparison of effect size in each paradigm showed a larger effect for the AR paradigm in a left inferior frontal region-of-interest (ROI). Activation patterns for story processing in children are similar in PL and AR tasks. Increases in extent and magnitude of activation in the AR task are likely associated with memory and attention resources engaged across acquisition intervals.
ERIC Educational Resources Information Center
Richardson, Fiona M.; Ramsden, Sue; Ellis, Caroline; Burnett, Stephanie; Megnin, Odette; Catmur, Caroline; Schofield, Tom M.; Leff, Alex P.; Price, Cathy J.
2011-01-01
A central feature of auditory STM is its item-limited processing capacity. We investigated whether auditory STM capacity correlated with regional gray and white matter in the structural MRI images from 74 healthy adults, 40 of whom had a prior diagnosis of developmental dyslexia whereas 34 had no history of any cognitive impairment. Using…
Deike, Susann; Deliano, Matthias; Brechmann, André
2016-10-01
One hypothesis concerning the neural underpinnings of auditory streaming states that frequency tuning of tonotopically organized neurons in primary auditory fields, in combination with physiological forward suppression, is necessary for the separation of representations of high-frequency A and low-frequency B tones. The extent of spatial overlap between the tonotopic activations of A and B tones is thought to underlie the perceptual organization of streaming sequences into one coherent or two separate streams. The present study attempts to interfere with these mechanisms by transcranial direct current stimulation (tDCS) and to probe behavioral outcomes reflecting the perception of ABAB streaming sequences. We hypothesized that tDCS, by modulating cortical excitability, causes a change in the separateness of the representations of A and B tones, which leads to a change in the proportions of one-stream and two-stream percepts. To test this, 22 subjects were presented with ambiguous ABAB sequences of three different frequency separations (∆F) and had to decide on their current percept after receiving sham, anodal, or cathodal tDCS over the left auditory cortex. We could confirm our hypothesis at the most ambiguous ∆F condition of 6 semitones. For anodal compared with sham and cathodal stimulation, we found a significant decrease in the proportion of two-stream perception and an increase in the proportion of one-stream perception. The results demonstrate the feasibility of using tDCS to probe mechanisms underlying auditory streaming through the use of various behavioral measures. Moreover, this approach allows one to probe the functions of auditory regions and their interactions with other processing stages. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Guinchard, A-C; Ghazaleh, Naghmeh; Saenz, M; Fornari, E; Prior, J O; Maeder, P; Adib, S; Maire, R
2016-11-01
We studied possible brain changes with functional MRI (fMRI) and fluorodeoxyglucose positron emission tomography (FDG-PET) in a patient with a rare, high-intensity "objective tinnitus" (high-level SOAEs) of 10 years' duration in the left ear, with no associated hearing loss. This is the first case of objective cochlear tinnitus to be investigated with functional neuroimaging. The objective cochlear tinnitus was measured with spontaneous otoacoustic emission (SOAE) equipment (frequency 9689 Hz, intensity 57 dB SPL) and was clearly audible to anyone standing near the patient. Functional modifications in primary auditory areas and other brain regions were evaluated using 3T and 7T fMRI and FDG-PET. In the fMRI evaluations, a saturation of the auditory cortex at the tinnitus frequency was observed, but the global cortical tonotopic organization remained intact when compared to the results of fMRI of healthy subjects. The FDG-PET showed no evidence of an increase or decrease of activity in the auditory cortices or in the limbic system as compared to normal subjects. In this patient with high-intensity objective cochlear tinnitus, fMRI and FDG-PET showed no significant brain reorganization in auditory areas and/or in the limbic system, as reported in the literature in patients with chronic subjective tinnitus. Copyright © 2016 Elsevier B.V. All rights reserved.
Maturation of the auditory t-complex brain response across adolescence.
Mahajan, Yatin; McArthur, Genevieve
2013-02-01
Adolescence is a time of great change in the brain in terms of structure and function. It is possible to track the development of neural function across adolescence using auditory event-related potentials (ERPs). This study tested whether the brain's functional processing of sound changed across adolescence. We measured passive auditory t-complex peaks to pure tones and consonant-vowel (CV) syllables in 90 children and adolescents aged 10-18 years, as well as 10 adults. Across adolescence, Na amplitude increased to tones and speech at the right, but not left, temporal site. Ta amplitude decreased at the right temporal site for tones, and at both sites for speech. The Tb remained constant at both sites. The Na and Ta appeared to mature later in the right than in the left hemisphere. The t-complex peaks Na and Tb exhibited left lateralization and Ta showed right lateralization. Thus, the functional processing of sound continued to develop across adolescence and into adulthood. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.
Bais, Leonie; Liemburg, Edith; Vercammen, Ans; Bruggeman, Richard; Knegtering, Henderikus; Aleman, André
2017-08-01
Efficacy of repetitive transcranial magnetic stimulation (rTMS) targeting the temporo-parietal junction (TPJ) for the treatment of auditory verbal hallucinations (AVH) remains under debate. We assessed the influence of a 1 Hz rTMS treatment on neural networks involved in a cognitive mechanism proposed to subserve AVH. Patients with schizophrenia (N=24) experiencing medication-resistant AVH completed a 10-day 1 Hz rTMS treatment. Participants were randomized to active stimulation of the left or bilateral TPJ, or sham stimulation. The effects of rTMS on neural networks were investigated with an inner speech task during fMRI. Changes within and between neural networks were analyzed using Independent Component Analysis. rTMS of the left and bilateral TPJ areas resulted in a weaker network contribution of the left supramarginal gyrus to the bilateral fronto-temporal network. Left-sided rTMS resulted in stronger network contributions of the right superior temporal gyrus to the auditory-sensorimotor network, right inferior gyrus to the left fronto-parietal network, and left middle frontal gyrus to the default mode network. Bilateral rTMS was associated with a predominant inhibitory effect on network contribution. Sham stimulation showed different patterns of change compared to active rTMS. rTMS of the left temporo-parietal region decreased the contribution of the left supramarginal gyrus to the bilateral fronto-temporal network, which may reduce the likelihood of speech intrusions. On the other hand, left rTMS appeared to increase the contribution of functionally connected regions involved in perception, cognitive control and self-referential processing. These findings hint at potential neural mechanisms underlying rTMS for hallucinations but need corroboration in larger samples. Copyright © 2017 Elsevier Inc. All rights reserved.
Kraus, Thomas; Kiess, Olga; Hösl, Katharina; Terekhin, Pavel; Kornhuber, Johannes; Forster, Clemens
2013-09-01
It has recently been shown that electrical stimulation of sensory afferents within the outer auditory canal may facilitate a transcutaneous form of central nervous system stimulation. Functional magnetic resonance imaging (fMRI) blood oxygenation level dependent (BOLD) effects in limbic and temporal structures have been detected in two independent studies. In the present study, we investigated BOLD fMRI effects in response to transcutaneous electrical stimulation of two different zones in the left outer auditory canal. It is hypothesized that different central nervous system (CNS) activation patterns might help to localize and specifically stimulate auricular cutaneous vagal afferents. Sixteen healthy subjects aged between 20 and 37 years were divided into two groups. Eight subjects were stimulated in the anterior wall; the other eight received transcutaneous vagus nerve stimulation (tVNS) at the posterior side of their left outer auditory canal. For sham control, both groups were also stimulated in an alternating manner on their corresponding ear lobe, which is generally known to be free of cutaneous vagal innervation. Functional MR data from the cortex and brain stem level were collected and a group analysis was performed. In most cortical areas, BOLD changes were in the opposite direction when comparing anterior vs. posterior stimulation of the left auditory canal. The only exception was in the insular cortex, where both stimulation types evoked positive BOLD changes. Prominent decreases of the BOLD signals were detected in the parahippocampal gyrus, posterior cingulate cortex and right thalamus (pulvinar) following anterior stimulation. In subcortical areas at brain stem level, a stronger BOLD decrease as compared with sham stimulation was found in the locus coeruleus and the solitary tract only during stimulation of the anterior part of the auditory canal.
The results of the study are in line with previous fMRI studies showing robust BOLD signal decreases in limbic structures and the brain stem during electrical stimulation of the left anterior auditory canal. BOLD signal decreases in the area of the nuclei of the vagus nerve may indicate an effective stimulation of vagal afferents. In contrast, stimulation at the posterior wall seems to lead to unspecific changes of the BOLD signal within the solitary tract, which is a key relay station of vagal neurotransmission. The results of the study show promise for a specific novel method of cranial nerve stimulation and provide a basis for further developments and applications of non-invasive transcutaneous vagus stimulation in psychiatric patients. Copyright © 2013 Elsevier Inc. All rights reserved.
Lateralization of the human mirror neuron system.
Aziz-Zadeh, Lisa; Koski, Lisa; Zaidel, Eran; Mazziotta, John; Iacoboni, Marco
2006-03-15
A cortical network consisting of the inferior frontal, rostral inferior parietal, and posterior superior temporal cortices has been implicated in representing actions in the primate brain and is critical to imitation in humans. This neural circuitry may be an evolutionary precursor of neural systems associated with language. However, language is predominantly lateralized to the left hemisphere, whereas the degree of lateralization of the imitation circuitry in humans is unclear. We conducted a functional magnetic resonance imaging study of imitation of finger movements with lateralized stimuli and responses. During imitation, activity in the inferior frontal and rostral inferior parietal cortex, although fairly bilateral, was stronger in the hemisphere ipsilateral to the visual stimulus and response hand. This ipsilateral pattern is at variance with the typical contralateral activity of primary visual and motor areas. Reliably increased signal in the right superior temporal sulcus (STS) was observed for both left-sided and right-sided imitation tasks, although subthreshold activity was also observed in the left STS. Overall, the data indicate that visual and motor components of the human mirror system are not left-lateralized. The left hemisphere superiority for language, then, must have been favored by other types of language precursors, perhaps auditory or multimodal action representations.
Neural plasticity expressed in central auditory structures with and without tinnitus
Roberts, Larry E.; Bosnyak, Daniel J.; Thompson, David C.
2012-01-01
Sensory training therapies for tinnitus are based on the assumption that, notwithstanding neural changes related to tinnitus, auditory training can alter the response properties of neurons in auditory pathways. To assess this assumption, we investigated whether brain changes induced by sensory training in tinnitus sufferers and measured by electroencephalography (EEG) are similar to those induced in age- and hearing-loss-matched individuals without tinnitus trained on the same auditory task. Auditory training was given using a 5 kHz 40-Hz amplitude-modulated (AM) sound that was in the tinnitus frequency region of the tinnitus subjects and enabled extraction of the 40-Hz auditory steady-state response (ASSR) and P2 transient response known to localize to primary and non-primary auditory cortex, respectively. P2 amplitude increased over training sessions equally in participants with tinnitus and in control subjects, suggesting normal remodeling of non-primary auditory regions in tinnitus. However, training-induced changes in the ASSR differed between the tinnitus and control groups. In controls the phase delay between the 40-Hz response and stimulus waveforms reduced by about 10° over training, in agreement with previous results obtained in young normal hearing individuals. However, ASSR phase did not change significantly with training in the tinnitus group, although some participants showed phase shifts resembling controls. On the other hand, ASSR amplitude increased with training in the tinnitus group, whereas in controls this response (which is difficult to remodel in young normal hearing subjects) did not change with training. These results suggest that neural changes related to tinnitus altered how neural plasticity was expressed in the region of primary but not non-primary auditory cortex. Auditory training did not reduce tinnitus loudness although a small effect on the tinnitus spectrum was detected. PMID:22654738
Carod Artal, Francisco Javier; Vázquez Cabrera, Carolina; Horan, Thomas Anthony
2004-01-01
Transcranial Doppler ultrasonography (TCD) permits the assessment of cognitively induced cerebral blood flow velocity (BFV) changes. We sought to investigate the lateralization of BFV acceleration induced by auditory stimulation and speech in a normal population. TCD monitoring of BFV in the middle cerebral arteries (MCA) was performed in 30 normal right-handed volunteers (average age = 31.7 years). Noise stimulation, speech, and instrumental music were administered for 60 sec to both ears by means of earphones. Auditory stimulation induced a significant BFV increase in the ipsilateral MCA compared to BFV during the preceding rest periods. Left MCA BFV increased by an average of 7.1% (noise), 8.4% (language), and 5.2% (melody) over baseline values, and right MCA BFV increased 5.1%, 3.1%, and 4.2%, respectively. Speech stimulation produced a significant increase in BFV in the left hemisphere MCA (from 49.86 to 54.03 cm/sec; p < .0001). Left MCA BFV response to speech stimulation may reflect the dominance of the left hemisphere in language processing by right-handed individuals. Due to the high temporal resolution of TCD, we were able to show a habituation effect during the 60-sec stimulation period.
Cortical thickness as a contributor to abnormal oscillations in schizophrenia?
Edgar, J. Christopher; Chen, Yu-Han; Lanza, Matthew; Howell, Breannan; Chow, Vivian Y.; Heiken, Kory; Liu, Song; Wootton, Cassandra; Hunter, Michael A.; Huang, Mingxiong; Miller, Gregory A.; Cañive, José M.
2013-01-01
Introduction Although brain rhythms depend on brain structure (e.g., gray and white matter), to our knowledge associations between brain oscillations and structure have not been investigated in healthy controls (HC) or in individuals with schizophrenia (SZ). Observing function–structure relationships, for example establishing an association between brain oscillations (defined in terms of amplitude or phase) and cortical gray matter, might inform models on the origins of psychosis. Given evidence of functional and structural abnormalities in primary/secondary auditory regions in SZ, the present study examined how superior temporal gyrus (STG) structure relates to auditory STG low-frequency and 40 Hz steady-state activity. Given changes in brain activity as a function of age, age-related associations in STG oscillatory activity were also examined. Methods Thirty-nine individuals with SZ and 29 HC were recruited. 40 Hz amplitude-modulated tones of 1 s duration were presented. MEG and T1-weighted sMRI data were obtained. Using the sources localizing 40 Hz evoked steady-state activity (300 to 950 ms), left and right STG total power and inter-trial coherence were computed. Time–frequency group differences and associations with STG structure and age were also examined. Results Decreased total power and inter-trial coherence in SZ were observed in the left STG for initial post-stimulus low-frequency activity (~ 50 to 200 ms, ~ 4 to 16 Hz) as well as 40 Hz steady-state activity (~ 400 to 1000 ms). Left STG 40 Hz total power and inter-trial coherence were positively associated with left STG cortical thickness in HC, not in SZ. Left STG post-stimulus low-frequency and 40 Hz total power were positively associated with age, again only in controls. Discussion Left STG low-frequency and steady-state gamma abnormalities distinguish SZ and HC. 
Disease-associated damage to STG gray matter in schizophrenia may disrupt the age-related left STG gamma-band function–structure relationships observed in controls. PMID:24371794
Horacek, Jiri; Brunovsky, Martin; Novak, Tomas; Skrdlantova, Lucie; Klirova, Monika; Bubenikova-Valesova, Vera; Krajca, Vladimir; Tislerova, Barbora; Kopecek, Milan; Spaniel, Filip; Mohr, Pavel; Höschl, Cyril
2007-01-01
Auditory hallucinations are characteristic symptoms of schizophrenia with high clinical importance. It was repeatedly reported that low frequency (
Moreno-Aguirre, Alma Janeth; Santiago-Rodríguez, Efraín; Harmony, Thalía; Fernández-Bouzas, Antonio
2012-01-01
Approximately 2-4% of newborns with perinatal risk factors present with hearing loss. Our aim was to analyze the effect of hearing aid use on auditory function evaluated based on otoacoustic emissions (OAEs), auditory brain responses (ABRs) and auditory steady state responses (ASSRs) in infants with perinatal brain injury and profound hearing loss. We conducted a prospective, longitudinal study of auditory function in infants with profound hearing loss. Right side hearing before and after hearing aid use was compared with left side hearing (not stimulated and used as control). All infants were subjected to OAE, ABR and ASSR evaluations before and after hearing aid use. The average ABR threshold decreased from 90.0 to 80.0 dB (p = 0.003) after six months of hearing aid use. In the left ear, which was used as a control, the ABR threshold decreased from 94.6 to 87.6 dB, which was not significant (p>0.05). In addition, the ASSR threshold in the 4000-Hz frequency decreased from 89 dB to 72 dB (p = 0.013) after six months of right ear hearing aid use; the other frequencies in the right ear and all frequencies in the left ear did not show significant differences in any of the measured parameters (p>0.05). OAEs were absent in the baseline test and showed no changes after hearing aid use in the right ear (p>0.05). This study provides evidence that early hearing aid use decreases the hearing threshold in ABR and ASSR assessments with no functional modifications in the auditory receptor, as evaluated by OAEs.
Primary Auditory Cortex Regulates Threat Memory Specificity
ERIC Educational Resources Information Center
Wigestrand, Mattis B.; Schiff, Hillary C.; Fyhn, Marianne; LeDoux, Joseph E.; Sears, Robert M.
2017-01-01
Distinguishing threatening from nonthreatening stimuli is essential for survival and stimulus generalization is a hallmark of anxiety disorders. While auditory threat learning produces long-lasting plasticity in primary auditory cortex (Au1), it is not clear whether such Au1 plasticity regulates memory specificity or generalization. We used…
Neural Correlates of Sound Localization in Complex Acoustic Environments
Zündorf, Ida C.; Lewald, Jörg; Karnath, Hans-Otto
2013-01-01
Listening to and understanding people in a "cocktail-party situation" is a remarkable feature of the human auditory system. Here we investigated the neural correlates of the ability to localize a particular sound among others in an acoustically cluttered environment with healthy subjects. In a sound localization task, five different natural sounds were presented from five virtual spatial locations during functional magnetic resonance imaging (fMRI). Activity related to auditory stream segregation was revealed in posterior superior temporal gyrus bilaterally, anterior insula, supplementary motor area, and frontoparietal network. Moreover, the results indicated critical roles of the left planum temporale in extracting the sound of interest among acoustical distracters and of the precuneus in orienting spatial attention to the target sound. We hypothesized that the left-sided lateralization of the planum temporale activation is related to the higher specialization of the left hemisphere for analysis of spectrotemporal sound features. Furthermore, the precuneus, a brain area known to be involved in the computation of spatial coordinates across diverse frames of reference for reaching to objects, also seems to be a crucial area for accurately determining locations of auditory targets in an acoustically complex scene of multiple sound sources. The precuneus thus may not only be involved in visuo-motor processes, but may also subserve related functions in the auditory modality. PMID:23691185
Stress improves selective attention towards emotionally neutral left ear stimuli.
Hoskin, Robert; Hunter, M D; Woodruff, P W R
2014-09-01
Research concerning the impact of psychological stress on visual selective attention has produced mixed results. The current paper describes two experiments which utilise a novel auditory oddball paradigm to test the impact of psychological stress on auditory selective attention. Participants had to report the location of emotionally-neutral auditory stimuli, while ignoring task-irrelevant changes in their content. The results of the first experiment, in which speech stimuli were presented, suggested that stress improves the ability to selectively attend to left, but not right ear stimuli. When this experiment was repeated using tonal stimuli the same result was evident, but only for female participants. Females were also found to experience greater levels of distraction in general across the two experiments. These findings support the goal-shielding theory which suggests that stress improves selective attention by reducing the attentional resources available to process task-irrelevant information. The study also demonstrates, for the first time, that this goal-shielding effect extends to auditory perception. Copyright © 2014 Elsevier B.V. All rights reserved.
Bloemsaat, Gijs; Van Galen, Gerard P; Meulenbroek, Ruud G J
2003-05-01
This study investigated the combined effects of orthographical irregularity and auditory memory load on the kinematics of finger movements in a transcription-typewriting task. Eight right-handed touch-typists were asked to type 80 strings of ten seven-letter words. In half the trials an irregularly spelt target word elicited a specific key press sequence of either the left or right index finger. In the other trials regularly spelt target words elicited the same key press sequence. An auditory memory load was added in half the trials by asking participants to remember the pitch of a tone during task performance. Orthographical irregularity was expected to slow down performance. Auditory memory load, viewed as a low level stressor, was expected to affect performance only when orthographically irregular words needed to be typed. The hypotheses were confirmed. Additional analysis showed differential effects on the left and right hand, possibly related to verbal-manual interference and hand dominance. The results are discussed in relation to relevant findings of recent neuroimaging studies.
Auditory Attentional Control and Selection during Cocktail Party Listening
Hill, Kevin T.
2010-01-01
In realistic auditory environments, people rely on both attentional control and attentional selection to extract intelligible signals from a cluttered background. We used functional magnetic resonance imaging to examine auditory attention to natural speech under such high processing-load conditions. Participants attended to a single talker in a group of three, identified by the target talker's pitch or spatial location. A catch-trial design allowed us to distinguish activity due to top-down control of attention versus attentional selection of bottom-up information in both the spatial and spectral (pitch) feature domains. For attentional control, we found a left-dominant fronto-parietal network with a bias toward spatial processing in dorsal precentral sulcus and superior parietal lobule, and a bias toward pitch in inferior frontal gyrus. During selection of the talker, attention modulated activity in left intraparietal sulcus when using talker location and in bilateral but right-dominant superior temporal sulcus when using talker pitch. We argue that these networks represent the sources and targets of selective attention in rich auditory environments. PMID:19574393
The processing of auditory and visual recognition of self-stimuli.
Hughes, Susan M; Nicholson, Shevon E
2010-12-01
This study examined self-recognition processing in both the auditory and visual modalities by determining how comparable hearing a recording of one's own voice was to seeing a photograph of one's own face. We also investigated whether the simultaneous presentation of auditory and visual self-stimuli would either facilitate or inhibit self-identification. Ninety-one participants completed reaction-time tasks of self-recognition when presented with their own faces, own voices, and combinations of the two. Reaction time and errors made when responding with both the right and left hand were recorded to determine if there were lateralization effects on these tasks. Our findings showed that visual self-recognition for facial photographs appears to be superior to auditory self-recognition for voice recordings. Furthermore, a combined presentation of one's own face and voice appeared to inhibit rather than facilitate self-recognition, and there was a left-hand advantage for reaction time on the combined-presentation tasks. Copyright © 2010 Elsevier Inc. All rights reserved.
Bruneau, Nicole; Bidet-Caulet, Aurélie; Roux, Sylvie; Bonnet-Brilhault, Frédérique; Gomot, Marie
2015-02-01
To investigate brain asymmetry of the temporal auditory evoked potentials (T-complex) in response to monaural stimulation in children compared to adults. Ten children (7 to 9 years) and ten young adults participated in the study. All were right-handed. The auditory stimuli used were tones (1100 Hz, 70 dB SPL, 50 ms duration) delivered monaurally (right, left ear) at four different levels of stimulus onset asynchrony (700-1100-1500-3000 ms). Latency and amplitude of responses were measured at left and right temporal sites according to the ear stimulated. Peaks of the three successive deflections (Na-Ta-Tb) of the T-complex were greater in amplitude and better defined in children than in adults. Amplitude measurements in children indicated that Na culminates over the left hemisphere regardless of the ear stimulated, whereas Ta and Tb culminate over the right hemisphere, but only for left-ear stimuli. Peak latency displayed different patterns of asymmetry: Na and Ta displayed shorter latencies for contralateral stimulation. The original finding was that Tb peak latency was shortest at the left temporal site for right-ear stimulation in children. Amplitude increased and/or peak latency decreased with increasing SOA; however, no interaction effect was found with recording site or with ear stimulated. Our main original result indicates a right ear-left hemisphere timing advantage for the Tb peak in children. The Tb peak would therefore be a good candidate as an electrophysiological marker of ear advantage effects during dichotic stimulation and of functional inter-hemisphere interactions and connectivity in children. Copyright © 2014. Published by Elsevier B.V.
Homan, Philipp; Kindler, Jochen; Hauf, Martinus; Walther, Sebastian; Hubl, Daniela; Dierks, Thomas
2013-01-01
Background: The left superior temporal gyrus (STG) has been suggested to play a key role in auditory verbal hallucinations (AVH) in patients with schizophrenia. Methods: Eleven medicated subjects with schizophrenia and medication-resistant AVH and 19 healthy controls underwent perfusion magnetic resonance (MR) imaging with arterial spin labeling (ASL). Three additional repeated measurements were conducted in the patients. Patients underwent a treatment with transcranial magnetic stimulation (TMS) between the first 2 measurements. The main outcome measure was the pooled cerebral blood flow (CBF), which consisted of the regional CBF measurement in the left STG and the global CBF measurement in the whole brain. Results: Regional CBF in the left STG in patients was significantly higher compared to controls (p < 0.0001) and to the global CBF in patients (p < 0.004) at baseline. Regional CBF in the left STG remained significantly increased compared to the global CBF in patients across time (p < 0.0007), and it remained increased in patients after TMS compared to the baseline CBF in controls (p < 0.0001). After TMS, PANSS (p = 0.003) and PSYRATS (p = 0.01) scores decreased significantly in patients. Conclusions: This study demonstrated tonically increased regional CBF in the left STG in patients with schizophrenia and auditory hallucinations despite a decrease in symptoms after TMS. These findings were consistent with what has previously been termed a trait marker of AVH in schizophrenia. PMID:23805093
Electrophysiological correlates of cocktail-party listening.
Lewald, Jörg; Getzmann, Stephan
2015-10-01
Detecting, localizing, and selectively attending to a particular sound source of interest in complex auditory scenes composed of multiple competing sources is a remarkable capacity of the human auditory system. The neural basis of this so-called "cocktail-party effect" has remained largely unknown. Here, we studied the cortical network engaged in solving the "cocktail-party" problem, using event-related potentials (ERPs) in combination with two tasks demanding horizontal localization of a naturalistic target sound presented either in silence or in the presence of multiple competing sound sources. Presentation of multiple sound sources, as compared to single sources, induced an increased P1 amplitude, a reduction in N1, and a strong N2 component, resulting in a pronounced negativity in the ERP difference waveform (N2d) around 260 ms after stimulus onset. About 100 ms later, the anterior contralateral N2 subcomponent (N2ac) occurred in the multiple-sources condition, as computed from the amplitude difference for targets in the left minus right hemispaces. Cortical source analyses of the ERP modulation, resulting from the contrast of multiple vs. single sources, generally revealed an initial enhancement of electrical activity in right temporo-parietal areas, including auditory cortex, by multiple sources (at P1) that is followed by a reduction, with the primary sources shifting from right inferior parietal lobule (at N1) to left dorso-frontal cortex (at N2d). Thus, cocktail-party listening, as compared to single-source localization, appears to be based on a complex chronology of successive electrical activities within a specific cortical network involved in spatial hearing in complex situations. Copyright © 2015 Elsevier B.V. All rights reserved.
Left Superior Temporal Gyrus Is Coupled to Attended Speech in a Cocktail-Party Auditory Scene.
Vander Ghinst, Marc; Bourguignon, Mathieu; Op de Beeck, Marc; Wens, Vincent; Marty, Brice; Hassid, Sergio; Choufani, Georges; Jousmäki, Veikko; Hari, Riitta; Van Bogaert, Patrick; Goldman, Serge; De Tiège, Xavier
2016-02-03
Using a continuous listening task, we evaluated the coupling between the listener's cortical activity and the temporal envelopes of different sounds in a multitalker auditory scene using magnetoencephalography and corticovocal coherence analysis. Neuromagnetic signals were recorded from 20 right-handed healthy adult humans who listened to five different recorded stories (attended speech streams), one without any multitalker background (No noise) and four mixed with a "cocktail party" multitalker background noise at four signal-to-noise ratios (5, 0, -5, and -10 dB) to produce speech-in-noise mixtures, here referred to as Global scene. Coherence analysis revealed that the modulations of the attended speech stream, presented without multitalker background, were coupled at ∼0.5 Hz to the activity of both superior temporal gyri, whereas the modulations at 4-8 Hz were coupled to the activity of the right supratemporal auditory cortex. In cocktail party conditions, with the multitalker background noise, the coupling at both frequencies was stronger for the attended speech stream than for the unattended Multitalker background. The coupling strengths decreased as the Multitalker background increased. During the cocktail party conditions, the ∼0.5 Hz coupling became left-hemisphere dominant, compared with bilateral coupling without the multitalker background, whereas the 4-8 Hz coupling remained right-hemisphere lateralized in both conditions. The brain activity was not coupled to the multitalker background or to its individual talkers. The results highlight the key role of the listener's left superior temporal gyri in extracting the slow ∼0.5 Hz modulations, likely reflecting the attended speech stream within a multitalker auditory scene. When people listen to one person in a "cocktail party," their auditory cortex mainly follows the attended speech stream rather than the entire auditory scene.
However, how the brain extracts the attended speech stream from the whole auditory scene and how increasing background noise corrupts this process is still debated. In this magnetoencephalography study, subjects had to attend to a speech stream with or without multitalker background noise. Results argue for frequency-dependent cortical tracking mechanisms for the attended speech stream. The left superior temporal gyrus tracked the ∼0.5 Hz modulations of the attended speech stream only when the speech was embedded in multitalker background, whereas the right supratemporal auditory cortex tracked 4-8 Hz modulations during both noiseless and cocktail-party conditions. Copyright © 2016 the authors.
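The coherence measure behind results like these can be illustrated with a toy computation. The sketch below uses plain SciPy, not the authors' corticovocal coherence pipeline, and every signal parameter (sampling rate, modulation frequencies, noise level) is an assumption: a simulated speech envelope carries slow ~0.5 Hz and faster ~5 Hz modulations, a noisy "cortical" signal tracks it, and magnitude-squared coherence peaks at the tracked frequencies.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(42)
fs = 100.0                         # Hz, assumed envelope sampling rate
t = np.arange(0, 120, 1 / fs)      # two minutes of simulated data

# Simulated speech envelope: slow ~0.5 Hz "phrasal" modulation plus a
# faster ~5 Hz "syllabic" modulation (amplitudes are arbitrary choices).
envelope = (1.0 + 0.8 * np.sin(2 * np.pi * 0.5 * t)
                + 0.4 * np.sin(2 * np.pi * 5.0 * t))

# Simulated cortical signal: tracks the envelope, plus independent noise.
cortical = envelope + 2.0 * rng.standard_normal(t.size)

# Magnitude-squared coherence between the two signals.
f, Cxy = coherence(envelope, cortical, fs=fs, nperseg=1024)

i05 = np.argmin(np.abs(f - 0.5))    # bin nearest the 0.5 Hz modulation
i20 = np.argmin(np.abs(f - 20.0))   # bin with no shared signal content
print(Cxy[i05] > Cxy[i20])
```

Coherence is close to 1 at the modulation frequencies and falls to the small bias floor (roughly one over the number of averaged segments) where the two signals share no content, which is the contrast such studies exploit.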
Correlation Factors Describing Primary and Spatial Sensations of Sound Fields
NASA Astrophysics Data System (ADS)
ANDO, Y.
2002-11-01
The theory of subjective preference of the sound field in a concert hall is based on a model of the human auditory-brain system. The model consists of an autocorrelation function (ACF) mechanism and an interaural cross-correlation function (IACF) mechanism for the signals arriving at the two ear entrances, together with the specialization of the human cerebral hemispheres. The theory can be extended to describe primary sensations such as pitch (or missing fundamental), loudness, timbre and, introduced here as a fourth, duration. These four primary sensations may be formulated in terms of temporal factors extracted from the ACF, associated with the left hemisphere, while spatial sensations such as localization in the horizontal plane, apparent source width and subjective diffuseness are described by spatial factors extracted from the IACF, associated with the right hemisphere. Any important subjective response to a sound field may thus be described by a combination of temporal and spatial factors.
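One IACF-derived spatial factor mentioned above can be sketched concretely: the interaural cross-correlation coefficient (IACC) is conventionally the peak of the normalized cross-correlation of the two ear signals within ±1 ms of lag. The implementation and test signals below are illustrative assumptions, not Ando's measurement procedure.

```python
import numpy as np

def iacc(left, right, fs, max_lag_ms=1.0):
    """Peak of the normalized interaural cross-correlation within +/-1 ms,
    the lag range conventionally used for the IACC."""
    max_lag = int(fs * max_lag_ms / 1000.0)
    norm = np.sqrt(np.dot(left, left) * np.dot(right, right))
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            c = np.dot(left[lag:], right[:right.size - lag])
        else:
            c = np.dot(left[:left.size + lag], right[-lag:])
        best = max(best, abs(c) / norm)
    return best

fs = 44100
t = np.arange(0, 0.5, 1 / fs)
tone = np.sin(2 * np.pi * 500 * t)
rng = np.random.default_rng(0)

iacc_same = iacc(tone, tone, fs)                      # identical ear signals -> IACC of 1
iacc_indep = iacc(rng.standard_normal(t.size),        # independent noise at the two ears ->
                  rng.standard_normal(t.size), fs)    # IACC near 0 (high subjective diffuseness)
print(iacc_same, iacc_indep)
```

High IACC corresponds to a compact, well-localized image; low IACC to apparent source width and subjective diffuseness, the spatial sensations the abstract attributes to the IACF mechanism.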
Effects of visual working memory on brain information processing of irrelevant auditory stimuli.
Qu, Jiagui; Rizak, Joshua D; Zhao, Lun; Li, Minghong; Ma, Yuanye
2014-01-01
Selective attention has traditionally been viewed as a sensory processing modulator that promotes cognitive processing efficiency by favoring relevant stimuli while inhibiting irrelevant stimuli. However, the cross-modal processing of irrelevant information during working memory (WM) has been rarely investigated. In this study, the modulation of irrelevant auditory information by the brain during a visual WM task was investigated. The N100 auditory evoked potential (N100-AEP) following an auditory click was used to evaluate the selective attention to auditory stimulus during WM processing and at rest. N100-AEP amplitudes were found to be significantly affected in the left-prefrontal, mid-prefrontal, right-prefrontal, left-frontal, and mid-frontal regions while performing a high WM load task. In contrast, no significant differences were found between N100-AEP amplitudes in WM states and rest states under a low WM load task in all recorded brain regions. Furthermore, no differences were found between the time latencies of N100-AEP troughs in WM states and rest states while performing either the high or low WM load task. These findings suggested that the prefrontal cortex (PFC) may integrate information from different sensory channels to protect perceptual integrity during cognitive processing.
Demopoulos, Carly; Yu, Nina; Tripp, Jennifer; Mota, Nayara; Brandes-Aitken, Anne N.; Desai, Shivani S.; Hill, Susanna S.; Antovich, Ashley D.; Harris, Julia; Honma, Susanne; Mizuiri, Danielle; Nagarajan, Srikantan S.; Marco, Elysa J.
2017-01-01
This study compared magnetoencephalographic (MEG) imaging-derived indices of auditory and somatosensory cortical processing in children aged 8–12 years with autism spectrum disorder (ASD; N = 18), those with sensory processing dysfunction (SPD; N = 13) who do not meet ASD criteria, and typically developing control (TDC; N = 19) participants. The magnitude of responses to both auditory and tactile stimulation was comparable across all three groups; however, the M200 latency response from the left auditory cortex was significantly delayed in the ASD group relative to both the TDC and SPD groups, whereas the somatosensory response of the ASD group was only delayed relative to TDC participants. The SPD group did not significantly differ from either group in terms of somatosensory latency, suggesting that participants with SPD may have an intermediate phenotype between ASD and TDC with regard to somatosensory processing. For the ASD group, correlation analyses indicated that the left M200 latency delay was significantly associated with performance on the WISC-IV Verbal Comprehension Index as well as the DSTP Acoustic-Linguistic index. Further, these cortical auditory response delays were not associated with somatosensory cortical response delays or cognitive processing speed in the ASD group, suggesting that auditory delays in ASD are domain specific rather than associated with generalized processing delays. The specificity of these auditory delays to the ASD group, in addition to their correlation with verbal abilities, suggests that auditory sensory dysfunction may be implicated in communication symptoms in ASD, motivating further research aimed at understanding the impact of sensory dysfunction on the developing brain. PMID:28603492
A selective impairment of perception of sound motion direction in peripheral space: A case study.
Thaler, Lore; Paciocco, Joseph; Daley, Mark; Lesniak, Gabriella D; Purcell, David W; Fraser, J Alexander; Dutton, Gordon N; Rossit, Stephanie; Goodale, Melvyn A; Culham, Jody C
2016-01-08
It is still an open question whether the auditory system, like the visual system, processes auditory motion independently of other aspects of spatial hearing, such as static location. Here, we report psychophysical data from a patient (female, 42 and 44 years old at the time of two testing sessions), who suffered a bilateral occipital infarction over 12 years earlier and who has extensive damage in the occipital lobe bilaterally, extending into inferior posterior temporal cortex bilaterally and into right parietal cortex. We measured the patient's spatial hearing ability to discriminate static location, detect motion, and perceive motion direction in both central (straight ahead) and right and left peripheral auditory space (50° to the left and right of straight ahead). Compared to control subjects, the patient was impaired in her perception of the direction of auditory motion in peripheral auditory space, and the deficit was more pronounced on the right side. However, there was no impairment in her perception of the direction of auditory motion in central space. Furthermore, detection of motion and discrimination of static location were normal in both central and peripheral space. The patient also performed normally in a wide battery of non-spatial audiological tests. Our data are consistent with previous neuropsychological and neuroimaging results that link posterior temporal cortex and parietal cortex with the processing of auditory motion. Most importantly, however, our data break new ground by suggesting a division of auditory motion processing in terms of speed and direction and in terms of central and peripheral space. Copyright © 2015 Elsevier Ltd. All rights reserved.
Diagnosing Dyslexia: The Screening of Auditory Laterality.
ERIC Educational Resources Information Center
Johansen, Kjeld
A study investigated whether a correlation exists between the degree and nature of left-brain laterality and specific reading and spelling difficulties. Subjects, 50 normal readers and 50 reading disabled persons native to the island of Bornholm, had their auditory laterality screened using pure-tone audiometry and dichotic listening. Results…
Effect of Auditory Motion Velocity on Reaction Time and Cortical Processes
ERIC Educational Resources Information Center
Getzmann, Stephan
2009-01-01
The study investigated the processing of sound motion, employing a psychophysical motion discrimination task in combination with electroencephalography. Following stationary auditory stimulation from a central space position, the onset of left- and rightward motion elicited a specific cortical response that was lateralized to the hemisphere…
Van der Haegen, Lise; Acke, Frederic; Vingerhoets, Guy; Dhooge, Ingeborg; De Leenheer, Els; Cai, Qing; Brysbaert, Marc
2016-12-01
Auditory speech perception, speech production and reading lateralize to the left hemisphere in the majority of healthy right-handers. In this study, we investigated to what extent sensory input underlies the side of language dominance. We measured the lateralization of the three core subprocesses of language in patients who had profound hearing loss in the right ear from birth and in matched control subjects. They took part in a semantic decision listening task involving speech and sound stimuli (auditory perception), a word generation task (speech production) and a passive reading task (reading). The results show that a lack of sensory auditory input on the right side, which is strongly connected to the contralateral left hemisphere, does not lead to atypical lateralization of speech perception. Speech production and reading were also typically left-lateralized in all but one patient, contradicting previous small-scale studies. Other factors, such as genetic constraints, presumably overrule the role of sensory input in the development of (a)typical language lateralization. Copyright © 2015 Elsevier Ltd. All rights reserved.
Mohebbi, Mehrnaz; Mahmoudian, Saeid; Alborzi, Marzieh Sharifian; Najafi-Koopaie, Mojtaba; Farahani, Ehsan Darestani; Farhadi, Mohammad
2014-09-01
To investigate the association of handedness with auditory middle latency responses (AMLRs) using topographic brain mapping by comparing amplitudes and latencies in frontocentral and hemispheric regions of interest (ROIs). The study included 44 healthy subjects with normal hearing (22 left handed and 22 right handed). AMLRs were recorded from 29 scalp electrodes in response to binaural 4-kHz tone bursts. Frontocentral ROI comparisons revealed that Pa and Pb amplitudes were significantly larger in the left-handed than the right-handed group. Topographic brain maps showed different distributions in AMLR components between the two groups. In hemispheric comparisons, Pa amplitude differed significantly across groups. A left-hemisphere emphasis of Pa was found in the right-handed group but not in the left-handed group. This study provides evidence that handedness is associated with AMLR components in frontocentral and hemispheric ROI. Handedness should be considered an essential factor in the clinical or experimental use of AMLRs.
Karns, Christina M; Stevens, Courtney; Dow, Mark W; Schorr, Emily M; Neville, Helen J
2017-01-01
Considerable research documents the cross-modal reorganization of auditory cortices as a consequence of congenital deafness, with remapped functions that include visual and somatosensory processing of both linguistic and nonlinguistic information. Structural changes accompany this cross-modal neuroplasticity, but precisely which structural changes accompany congenital and early deafness, and whether there are group differences in hemispheric asymmetries, remain to be established. Here, we used diffusion tensor imaging (DTI) to examine microstructural white-matter changes accompanying cross-modal reorganization in 23 genetically, profoundly, and congenitally deaf adults who had learned sign language from infancy, and in 26 hearing controls who participated in our previous fMRI studies of cross-modal neuroplasticity. In contrast to prior literature using a whole-brain approach, we introduce a semiautomatic method for demarcating auditory regions in which regions of interest (ROIs) are defined on the normalized white-matter skeleton for all participants, projected into each participant's native space, and manually constrained to anatomical boundaries. White-matter ROIs were left and right Heschl's gyrus (HG), left and right anterior superior temporal gyrus (aSTG), left and right posterior superior temporal gyrus (pSTG), as well as one tractography-defined region in the splenium of the corpus callosum connecting homologous left and right superior temporal regions (pCC). Within these regions, we measured fractional anisotropy (FA), radial diffusivity (RD), axial diffusivity (AD), and white-matter volume. Congenitally deaf adults had reduced FA and volume in white-matter structures underlying bilateral HG, aSTG, and pSTG, and reduced FA in pCC. In HG and pCC, this reduction in FA corresponded with increased RD, but differences in aSTG and pSTG could not be localized to alterations in RD or AD.
Direct statistical tests of hemispheric asymmetries in these differences indicated the most prominent effects in pSTG, where the largest differences between groups occurred in the right hemisphere. Other regions did not show significant hemispheric asymmetries in group differences. Taken together, these results indicate that atypical white matter microstructure and reduced volume underlies regions of superior temporal primary and association auditory cortex and introduce a robust method for quantifying volumetric and white matter microstructural differences that can be applied to future studies of special populations. Published by Elsevier B.V.
Bilateral acquired external auditory canal stenosis with squamous papilloma: a case report.
Demirbaş, Duygu; Dağlı, Muharrem; Göçer, Celil
2011-01-01
Acquired external auditory canal (EAC) stenosis is described as resulting from a number of different causes such as infection, trauma, neoplasia, inflammation and radiotherapy. Human papilloma virus (HPV) type 6, a deoxyribonucleic acid (DNA) virus, is considered to cause squamous papilloma of the EAC. In this article, we report a case of a 56-year-old male with warty lesions in the left external ear and a totally stenotic right external ear which had similar lesions one year before the involvement of his left ear. On computed tomography of the temporal bone, there was soft tissue obstruction of the right EAC, and thickening in the skin of the left EAC. The middle ear structures were normal on both sides. Biopsy was performed from the lesion in the left ear, and revealed squamous papilloma. We presented this case because squamous papilloma related bilateral acquired EAC stenosis is a rare entity.
Chen, Cheng; Wang, Hui-Ling; Wu, Shi-Hao; Huang, Huan; Zou, Ji-Lin; Chen, Jun; Jiang, Tian-Zi; Zhou, Yuan; Wang, Gao-Hua
2015-01-01
Background: The dysconnectivity hypothesis of schizophrenia has received increasing emphasis. Recent research suggests that this dysconnectivity may be related to the occurrence of auditory hallucination (AH), but there is still no consistent conclusion. This study aimed to explore the intrinsic dysconnectivity pattern of whole-brain functional networks at the voxel level in schizophrenia patients with AH. Methods: A group of patients with auditory hallucinations (APG; n = 42), a group of patients without hallucinations (NPG; n = 42) and normal controls (NC; n = 84) were analyzed by resting-state functional magnetic resonance imaging. The functional connectivity metric degree centrality (DC) across the entire brain network was calculated and compared among the three groups. Results: DC decreased in the bilateral putamen and increased in the left superior frontal gyrus in all patients, and these changes in DC were more pronounced in the APG than in the NPG. Symptomatology scores were negatively correlated with the DC of the bilateral putamen in all patients. The AH score of the APG correlated positively with DC in the left superior frontal gyrus but negatively with DC in the bilateral putamen. Conclusion: Our findings corroborate that schizophrenia is characterized by functional dysconnectivity, and that abnormal DC in the bilateral putamen and left superior frontal gyrus may be crucial in the occurrence of AH. PMID:26612293
Descovich, K A; Reints Bok, T E; Lisle, A T; Phillips, C J C
2013-01-01
Behavioural lateralisation is evident across most animal taxa, although few marsupial and no fossorial species have been studied. Twelve southern hairy-nosed wombats (Lasiorhinus latifrons) were bilaterally presented with eight sounds from different contexts (threat, neutral, food) to test for auditory laterality. Head turns were recorded immediately before and after each sound presentation, and behaviour was recorded for 150 seconds after presentation. Although sound differentiation was evident in the amount of exploration, vigilance, and grooming performed after different sound types, this did not result in different patterns of head-turn direction. Similarly, left-right proportions of head turns, walking events, and food approaches in the post-sound period were comparable across sound types. A comparison of head turns performed before and after the sound showed a significant change in turn direction (χ²(1) = 10.65, p = .001), from a left preference in the pre-sound period (mean 58% left head turns, CI 49-66%) to a right preference post-sound (mean 43% left head turns, CI 40-45%). This provides evidence of a right auditory bias in response to sound presentation and demonstrates that laterality is evident in southern hairy-nosed wombats in response to a sound stimulus, although side biases were not altered by sounds of varying context.
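The head-turn analysis is a standard chi-square test on direction proportions. A hypothetical reconstruction in SciPy looks like this; the counts below are invented for illustration and only match the reported direction of the effect (~58% left turns pre-sound vs. ~43% left post-sound), not the study's raw data or its χ² value.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 counts of head-turn direction before vs. after sound
# (invented for illustration, not the wombat study's data).
#               left  right
pre_sound   =  [58,   42]
post_sound  =  [43,   57]

chi2, p, dof, expected = chi2_contingency([pre_sound, post_sound],
                                          correction=False)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")  # → chi2(1) = 4.50, p = 0.034
```

A significant result on such a table indicates that the left/right split of head turns changed between the pre-sound and post-sound periods, which is the form of evidence the abstract reports.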
Zhang, Linjun; Yue, Qiuhai; Zhang, Yang; Shu, Hua; Li, Ping
2015-01-01
Numerous studies have revealed the essential role of the left lateral temporal cortex in auditory sentence comprehension along with evidence of the functional specialization of the anterior and posterior temporal sub-areas. However, it is unclear whether task demands (e.g., active vs. passive listening) modulate the functional specificity of these sub-areas. In the present functional magnetic resonance imaging (fMRI) study, we addressed this issue by applying both independent component analysis (ICA) and general linear model (GLM) methods. Consistent with previous studies, intelligible sentences elicited greater activity in the left lateral temporal cortex relative to unintelligible sentences. Moreover, responses to intelligibility in the sub-regions were differentially modulated by task demands. While the overall activation patterns of the anterior and posterior superior temporal sulcus and middle temporal gyrus (STS/MTG) were equivalent during both passive and active tasks, a middle portion of the STS/MTG was found to be selectively activated only during the active task under a refined analysis of sub-regional contributions. Our results not only confirm the critical role of the left lateral temporal cortex in auditory sentence comprehension but further demonstrate that task demands modulate functional specialization of the anterior-middle-posterior temporal sub-areas. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Missing a trick: Auditory load modulates conscious awareness in audition.
Fairnie, Jake; Moore, Brian C J; Remington, Anna
2016-07-01
In the visual domain there is considerable evidence supporting the Load Theory of Attention and Cognitive Control, which holds that conscious perception of background stimuli depends on the level of perceptual load involved in a primary task. However, literature on the applicability of this theory to the auditory domain is limited and, in many cases, inconsistent. Here we present a novel "auditory search task" that allows systematic investigation of the impact of auditory load on auditory conscious perception. An array of simultaneous, spatially separated sounds was presented to participants. On half the trials, a critical stimulus was presented concurrently with the array. Participants were asked to detect which of 2 possible targets was present in the array (primary task), and whether the critical stimulus was present or absent (secondary task). Increasing the auditory load of the primary task (raising the number of sounds in the array) consistently reduced the ability to detect the critical stimulus. This indicates that, at least in certain situations, load theory applies in the auditory domain. The implications of this finding are discussed both with respect to our understanding of typical audition and for populations with altered auditory processing.
2013-01-01
Background Individuals suffering from vision loss of a peripheral origin may learn to understand spoken language at a rate of up to about 22 syllables (syl) per second - exceeding by far the maximum performance level of normal-sighted listeners (ca. 8 syl/s). To further elucidate the brain mechanisms underlying this extraordinary skill, functional magnetic resonance imaging (fMRI) was performed in blind subjects of varying ultra-fast speech comprehension capabilities and sighted individuals while listening to sentence utterances of a moderately fast (8 syl/s) or ultra-fast (16 syl/s) syllabic rate. Results Besides left inferior frontal gyrus (IFG), bilateral posterior superior temporal sulcus (pSTS) and left supplementary motor area (SMA), blind people highly proficient in ultra-fast speech perception showed significant hemodynamic activation of right-hemispheric primary visual cortex (V1), contralateral fusiform gyrus (FG), and bilateral pulvinar (Pv). Conclusions Presumably, FG supports the left-hemispheric perisylvian “language network”, i.e., IFG and superior temporal lobe, during the (segmental) sequencing of verbal utterances whereas the collaboration of bilateral pulvinar, right auditory cortex, and ipsilateral V1 implements a signal-driven timing mechanism related to syllabic (suprasegmental) modulation of the speech signal. These data structures, conveyed via left SMA to the perisylvian “language zones”, might facilitate – under time-critical conditions – the consolidation of linguistic information at the level of verbal working memory. PMID:23879896
High Frequency rTMS over the Left Parietal Lobule Increases Non-Word Reading Accuracy
ERIC Educational Resources Information Center
Costanzo, Floriana; Menghini, Deny; Caltagirone, Carlo; Oliveri, Massimiliano; Vicari, Stefano
2012-01-01
Increasing evidence in the literature supports the usefulness of Transcranial Magnetic Stimulation (TMS) in studying reading processes. Two brain regions are primarily involved in phonological decoding: the left superior temporal gyrus (STG), which is associated with the auditory representation of spoken words, and the left inferior parietal lobe…
Hearing with Two Ears: Evidence for Cortical Binaural Interaction during Auditory Processing.
Henkin, Yael; Yaar-Soffer, Yifat; Givon, Lihi; Hildesheimer, Minka
2015-04-01
Integration of information presented to the two ears has been shown to manifest in binaural interaction components (BICs) that occur along the ascending auditory pathways. In humans, BICs have been studied predominantly at the brainstem and thalamocortical levels; however, understanding of higher cortically driven mechanisms of binaural hearing is limited. To explore whether BICs are evident in auditory event-related potentials (AERPs) during the advanced perceptual and postperceptual stages of cortical processing. The AERPs N1, P3, and a late negative component (LNC) were recorded from multiple site electrodes while participants performed an oddball discrimination task that consisted of natural speech syllables (/ka/ vs. /ta/) that differed by place-of-articulation. Participants were instructed to respond to the target stimulus (/ta/) while performing the task in three listening conditions: monaural right, monaural left, and binaural. Fifteen (21-32 yr) young adults (6 females) with normal hearing sensitivity. By subtracting the response to target stimuli elicited in the binaural condition from the sum of responses elicited in the monaural right and left conditions, the BIC waveform was derived and the latencies and amplitudes of the components were measured. The maximal interaction was calculated by dividing BIC amplitude by the summed right and left response amplitudes. In addition, the latencies and amplitudes of the AERPs to target stimuli elicited in the monaural right, monaural left, and binaural listening conditions were measured and subjected to analysis of variance with repeated measures testing the effect of listening condition and laterality. Three consecutive BICs were identified at a mean latency of 129, 406, and 554 msec, and were labeled N1-BIC, P3-BIC, and LNC-BIC, respectively. 
Maximal interaction increased significantly with progression of auditory processing from perceptual to postperceptual stages and amounted to 51%, 55%, and 75% of the sum of monaural responses for N1-BIC, P3-BIC, and LNC-BIC, respectively. Binaural interaction manifested in a decrease of the binaural response compared to the sum of monaural responses. Furthermore, listening condition affected P3 latency only, whereas laterality effects manifested in enhanced N1 amplitudes at the left (T3) vs. right (T4) scalp electrode and in a greater left-right amplitude difference in the right compared to left listening condition. The current AERP data provides evidence for the occurrence of cortical BICs during perceptual and postperceptual stages, presumably reflecting ongoing integration of information presented to the two ears at the final stages of auditory processing. Increasing binaural interaction with the progression of the auditory processing sequence (N1 to LNC) may support the notion that cortical BICs reflect inherited interactions from preceding stages of upstream processing together with discrete cortical neural activity involved in binaural processing. Clinically, an objective measure of cortical binaural processing has the potential of becoming an appealing neural correlate of binaural behavioral performance. American Academy of Audiology.
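The BIC derivation described above (sum of the monaural responses minus the binaural response, with "maximal interaction" expressed as a fraction of the summed monaural amplitude) can be sketched with toy waveforms. Everything below is invented for illustration; only the arithmetic mirrors the paper's definition.

```python
import numpy as np

t = np.linspace(0.0, 0.6, 600)          # 0-600 ms post-stimulus (assumed)

def erp(amp):
    # Toy N1-like deflection peaking at ~129 ms (shape is an assumption).
    return -amp * np.exp(-((t - 0.129) / 0.03) ** 2)

right, left = erp(1.0), erp(1.1)        # hypothetical monaural responses
binaural = erp(1.4)                     # smaller than their sum -> interaction

# BIC = (monaural right + monaural left) - binaural
bic = (right + left) - binaural

# Maximal interaction: BIC amplitude as a fraction of the summed
# monaural response amplitude.
max_interaction = np.abs(bic).max() / np.abs(right + left).max()
print(round(max_interaction, 2))        # → 0.33
```

A nonzero BIC of this sign captures the reported pattern, namely that the binaural response is smaller than the sum of the two monaural responses.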
Guo, Zhiqiang; Wu, Xiuqin; Li, Weifeng; Jones, Jeffery A; Yan, Nan; Sheft, Stanley; Liu, Peng; Liu, Hanjun
2017-10-25
Although working memory (WM) is considered as an emergent property of the speech perception and production systems, the role of WM in sensorimotor integration during speech processing is largely unknown. We conducted two event-related potential experiments with female and male young adults to investigate the contribution of WM to the neurobehavioural processing of altered auditory feedback during vocal production. A delayed match-to-sample task that required participants to indicate whether the pitch feedback perturbations they heard during vocalizations in test and sample sequences matched, elicited significantly larger vocal compensations, larger N1 responses in the left middle and superior temporal gyrus, and smaller P2 responses in the left middle and superior temporal gyrus, inferior parietal lobule, somatosensory cortex, right inferior frontal gyrus, and insula compared with a control task that did not require memory retention of the sequence of pitch perturbations. On the other hand, participants who underwent extensive auditory WM training produced suppressed vocal compensations that were correlated with improved auditory WM capacity, and enhanced P2 responses in the left middle frontal gyrus, inferior parietal lobule, right inferior frontal gyrus, and insula that were predicted by pretraining auditory WM capacity. These findings indicate that WM can enhance the perception of voice auditory feedback errors while inhibiting compensatory vocal behavior to prevent voice control from being excessively influenced by auditory feedback. This study provides the first evidence that auditory-motor integration for voice control can be modulated by top-down influences arising from WM, rather than modulated exclusively by bottom-up and automatic processes. SIGNIFICANCE STATEMENT One outstanding question that remains unsolved in speech motor control is how the mismatch between predicted and actual voice auditory feedback is detected and corrected. 
The present study provides two lines of converging evidence, for the first time, that working memory can not only enhance the perception of vocal feedback errors but also exert inhibitory control over vocal motor behavior. These findings represent a major advance in our understanding of the top-down modulatory mechanisms that support the detection and correction of prediction-feedback mismatches during sensorimotor control of speech production driven by working memory. Rather than being an exclusively bottom-up and automatic process, auditory-motor integration for voice control can be modulated by top-down influences arising from working memory. Copyright © 2017 the authors.
Xu, Long-Chun; Zhang, Gang; Zou, Yue; Zhang, Min-Feng; Zhang, Dong-Sheng; Ma, Hua; Zhao, Wen-Bo; Zhang, Guang-Yu
2017-10-13
The objective of this study was to provide implications for the rehabilitation of hearing impairment by investigating changes in the neural activity of directional brain networks in patients with long-term bilateral hearing loss. First, we administered neuropsychological tests to 21 subjects (11 patients with long-term bilateral hearing loss and 10 subjects with normal hearing); these tests revealed significant differences between the deaf group and the controls. We then constructed an individual-specific virtual brain from each participant's functional magnetic resonance data using effective connectivity and multivariate regression methods. We applied a stimulating signal to the primary auditory cortices of the virtual brain and observed the resulting brain region activations. Patients with long-term bilateral hearing loss showed weaker activations in the auditory and language networks but enhanced neural activity in the default mode network compared with normally hearing subjects, and the right cerebral hemisphere showed more changes than the left. Additionally, weaker neural activity in the primary auditory cortices was strongly associated with poorer cognitive performance. Finally, causal analysis revealed several interactional circuits among the activated brain regions, and these interregional causal interactions implied that abnormal neural activity in the directional brain networks of the deaf patients impacted cognitive function.
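The "virtual brain" probe, stimulating the primary auditory nodes and reading out downstream activation through a directed connectivity matrix, can be caricatured with a linear steady-state network. The node set and every weight below are arbitrary assumptions for illustration, not the patients' estimated effective connectivity.

```python
import numpy as np

# Hypothetical five-node directed network; A[i, j] is the influence of
# node j on node i (all weights invented, spectral radius < 1 for stability).
labels = ["A1_L", "A1_R", "STG_L", "STG_R", "DMN"]
A = np.array([
    [0.0, 0.2, 0.0, 0.0, 0.0],   # A1_L  <- A1_R (interhemispheric)
    [0.2, 0.0, 0.0, 0.0, 0.0],   # A1_R  <- A1_L
    [0.5, 0.0, 0.0, 0.1, 0.0],   # STG_L <- A1_L, STG_R (feed-forward)
    [0.0, 0.5, 0.1, 0.0, 0.0],   # STG_R <- A1_R, STG_L
    [0.1, 0.1, 0.1, 0.1, 0.0],   # DMN   <- weak input from all others
])

u = np.array([1.0, 1.0, 0.0, 0.0, 0.0])   # stimulate both primary auditory nodes

# Steady state of the linear dynamics x = A x + u.
x = np.linalg.solve(np.eye(5) - A, u)
for name, val in zip(labels, np.round(x, 2)):
    print(name, val)
```

Weakening the rows that feed the STG nodes in such a model would reproduce, in caricature, the reduced auditory- and language-network activations the study reports for the deaf group.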
Auditory spatial processing in the human cortex.
Salminen, Nelli H; Tiitinen, Hannu; May, Patrick J C
2012-12-01
The auditory system codes spatial locations in a way that deviates from the spatial representations found in other modalities. This difference is especially striking in the cortex, where neurons form topographical maps of visual and tactile space but where auditory space is represented through a population rate code. In this hemifield code, sound source location is represented in the activity of two widely tuned opponent populations, one tuned to the right and the other to the left side of auditory space. Scientists are only beginning to uncover how this coding strategy adapts to various spatial processing demands. This review presents the current understanding of auditory spatial processing in the cortex. To this end, the authors consider how various implementations of the hemifield code may exist within the auditory cortex and how these may be modulated by the stimulation and task context. As a result, a coherent set of neural strategies for auditory spatial processing emerges.
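The opponent-channel hemifield code described in this review lends itself to a toy numerical illustration. The Python sketch below is our own illustrative assumption, not a model from the paper: the sigmoid tuning shape, the slope value, and the decoding rule are all invented for demonstration. It shows how two widely tuned populations, one preferring each side of space, can jointly encode azimuth, and how location can be read out from their rate difference.

```python
import math

def channel_response(azimuth_deg, preferred_side, slope=0.05):
    """Broadly tuned sigmoid response of one hemifield channel.

    preferred_side: +1 for the right-tuned population, -1 for the left-tuned one.
    """
    return 1.0 / (1.0 + math.exp(-slope * preferred_side * azimuth_deg))

def decode_azimuth(left_rate, right_rate, slope=0.05):
    """Recover azimuth from the difference between the two opponent rates.

    With symmetric sigmoids, left = 1 - right, so the difference
    d = right - left determines azimuth via the inverse sigmoid.
    """
    diff = right_rate - left_rate
    return math.log((1.0 + diff) / (1.0 - diff)) / slope

# A source 30 degrees to the right drives the right-tuned channel harder;
# the rate difference suffices to recover the location exactly.
az = 30.0
left = channel_response(az, -1)
right = channel_response(az, +1)
decoded = decode_azimuth(left, right)
```

The point of the sketch is that no topographic map is needed: a single pair of opponent rates carries the full spatial variable, which is the essence of the population rate code the review describes.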
Woodruff, P W; Wright, I C; Bullmore, E T; Brammer, M; Howard, R J; Williams, S C; Shapleske, J; Rossell, S; David, A S; McGuire, P K; Murray, R M
1997-12-01
The authors explored whether abnormal functional lateralization of temporal cortical language areas in schizophrenia was associated with a predisposition to auditory hallucinations and whether the auditory hallucinatory state would reduce the temporal cortical response to external speech. Functional magnetic resonance imaging was used to measure the blood-oxygenation-level-dependent signal induced by auditory perception of speech in three groups of male subjects: eight schizophrenic patients with a history of auditory hallucinations (trait-positive), none of whom was currently hallucinating; seven schizophrenic patients without such a history (trait-negative); and eight healthy volunteers. Seven schizophrenic patients were also examined while they were actually experiencing severe auditory verbal hallucinations and again after their hallucinations had diminished. Voxel-by-voxel comparison of the median power of subjects' responses to periodic external speech revealed that this measure was reduced in the left superior temporal gyrus but increased in the right middle temporal gyrus in the combined schizophrenic groups relative to the healthy comparison group. Comparison of the trait-positive and trait-negative patients revealed no clear difference in the power of temporal cortical activation. Comparison of patients when experiencing severe hallucinations and when hallucinations were mild revealed reduced responsivity of the temporal cortex, especially the right middle temporal gyrus, to external speech during the former state. These results suggest that schizophrenia is associated with a reduced left and increased right temporal cortical response to auditory perception of speech, with little distinction between patients who differ in their vulnerability to hallucinations. 
The auditory hallucinatory state is associated with reduced activity in temporal cortical regions that overlap with those that normally process external speech, possibly because of competition for common neurophysiological resources.
Pratt, Hillel; Bleich, Naomi; Mittelman, Nomi
2015-11-01
Spatio-temporal distributions of cortical activity to audio-visual presentations of meaningless vowel-consonant-vowels, and the effects of audio-visual congruence/incongruence, with emphasis on the McGurk effect, were studied. The McGurk effect occurs when a clearly audible syllable with one consonant is presented simultaneously with a visual presentation of a face articulating a syllable with a different consonant, and the resulting percept is a syllable with a consonant other than the auditorily presented one. Twenty subjects listened to pairs of audio-visually congruent or incongruent utterances and indicated whether pair members were the same or not. Source current densities of event-related potentials to the first utterance in the pair were estimated, and the effects of stimulus-response combinations, brain area, hemisphere, and clarity of visual articulation were assessed. Auditory cortex, superior parietal cortex, and middle temporal cortex were the most consistently involved areas across experimental conditions. Early (<200 msec) processing of the consonant was overall prominent in the left hemisphere, except for right hemisphere prominence in superior parietal cortex and secondary visual cortex. Clarity of visual articulation impacted activity in secondary visual cortex and Wernicke's area. McGurk perception was associated with decreased activity in primary and secondary auditory cortices and Wernicke's area before 100 msec, followed by increased activity around 100 msec that decreased again around 180 msec. Activity in Broca's area was unaffected by McGurk perception and increased only to congruent audio-visual stimuli 30-70 msec following consonant onset. The results suggest left hemisphere prominence in the effects of stimulus and response conditions on eight brain areas involved in dynamically distributed parallel processing of audio-visual integration.
Initially (30-70 msec) subcortical contributions to auditory cortex, superior parietal cortex, and middle temporal cortex occur. During 100-140 msec, peristriate visual influences and Wernicke's area join in the processing. Resolution of incongruent audio-visual inputs is then attempted, and if successful, McGurk perception occurs and cortical activity in left hemisphere further increases between 170 and 260 msec.
Musical training sharpens and bonds ears and tongue to hear speech better.
Du, Yi; Zatorre, Robert J
2017-12-19
The idea that musical training improves speech perception in challenging listening environments is appealing and of clinical importance, yet the mechanisms of any such musician advantage are not well specified. Here, using functional magnetic resonance imaging (fMRI), we found that musicians outperformed nonmusicians in identifying syllables at varying signal-to-noise ratios (SNRs), which was associated with stronger activation of the left inferior frontal and right auditory regions in musicians compared with nonmusicians. Moreover, musicians showed greater specificity of phoneme representations in bilateral auditory and speech motor regions (e.g., premotor cortex) at higher SNRs and in the left speech motor regions at lower SNRs, as determined by multivoxel pattern analysis. Musical training also enhanced the intrahemispheric and interhemispheric functional connectivity between auditory and speech motor regions. Our findings suggest that improved speech in noise perception in musicians relies on stronger recruitment of, finer phonological representations in, and stronger functional connectivity between auditory and frontal speech motor cortices in both hemispheres, regions involved in bottom-up spectrotemporal analyses and top-down articulatory prediction and sensorimotor integration, respectively.
Left cytoarchitectonic BA 44 processes syntactic gender violations in determiner phrases.
Heim, Stefan; van Ermingen, Muna; Huber, Walter; Amunts, Katrin
2010-10-01
Recent neuroimaging studies make contradictory predictions about the involvement of left Brodmann's area (BA) 44 in processing local syntactic violations in determiner phrases (DPs). Some studies suggest a role for BA 44 in detecting local syntactic violations, whereas others attribute this function to the left premotor cortex. Therefore, the present event-related functional magnetic resonance imaging (fMRI) study investigated whether left cytoarchitectonic BA 44 was activated when German DPs involving syntactic gender violations were compared with correct DPs (correct: 'der Baum', the[masculine] tree[masculine]; violated: 'das Baum', the[neuter] tree[masculine]). Grammaticality judgements were made for both visual and auditory DPs in order to generalize the results across modalities. Grammaticality judgements involved, among other regions, left BA 44 and left BA 6 in the premotor cortex for visual and auditory stimuli. Most importantly, activation in left BA 44 was consistently higher for violated than for correct DPs. This finding was behaviourally corroborated by longer reaction times for violated versus correct DPs. Additional brain regions showing the same effect included left premotor cortex, supplementary motor area, right middle and superior frontal cortex, and left cerebellum. Based on earlier findings from the literature, the results indicate the involvement of left BA 44 in processing local syntactic violations when these include morphological features, whereas left premotor cortex seems crucial for the detection of local word category violations.
Multisensory speech perception without the left superior temporal sulcus.
Baum, Sarah H; Martin, Randi C; Hamilton, A Cris; Beauchamp, Michael S
2012-09-01
Converging evidence suggests that the left superior temporal sulcus (STS) is a critical site for multisensory integration of auditory and visual information during speech perception. We report a patient, SJ, who suffered a stroke that damaged the left temporo-parietal area, resulting in mild anomic aphasia. Structural MRI showed complete destruction of the left middle and posterior STS, as well as damage to adjacent areas in the temporal and parietal lobes. Surprisingly, SJ demonstrated preserved multisensory integration measured with two independent tests. First, she perceived the McGurk effect, an illusion that requires integration of auditory and visual speech. Second, her perception of morphed audiovisual speech with ambiguous auditory or visual information was significantly influenced by the opposing modality. To understand the neural basis for this preserved multisensory integration, blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) was used to examine brain responses to audiovisual speech in SJ and 23 healthy age-matched controls. In controls, bilateral STS activity was observed. In SJ, no activity was observed in the damaged left STS, but in the right STS more cortex was active in SJ than in any of the normal controls. Further, the amplitude of the BOLD response in the right STS to McGurk stimuli was significantly greater in SJ than in controls. The simplest explanation of these results is a reorganization of SJ's cortical language networks such that the right STS now subserves multisensory integration of speech.
Moyer, Caitlin E.; Delevich, Kristen M.; Fish, Kenneth N.; Asafu-Adjei, Josephine K.; Sampson, Allan R.; Dorph-Petersen, Karl-Anton; Lewis, David A.; Sweet, Robert A.
2012-01-01
Background: Schizophrenia is associated with perceptual and physiological auditory processing impairments that may result from primary auditory cortex excitatory and inhibitory circuit pathology. High-frequency oscillations are important for auditory function and are often reported to be disrupted in schizophrenia. These oscillations may, in part, depend on upregulation of gamma-aminobutyric acid synthesis by glutamate decarboxylase 65 (GAD65) in response to high interneuron firing rates. It is not known whether levels of GAD65 protein or GAD65-expressing boutons are altered in schizophrenia. Methods: We studied two cohorts of subjects with schizophrenia and matched control subjects, comprising 27 pairs of subjects. Relative fluorescence intensity, density, volume, and number of GAD65-immunoreactive boutons in primary auditory cortex were measured using quantitative confocal microscopy and stereologic sampling methods. Bouton fluorescence intensities were used to compare the relative expression of GAD65 protein within boutons between diagnostic groups. Additionally, we assessed the correlation between previously measured dendritic spine densities and GAD65-immunoreactive bouton fluorescence intensities. Results: GAD65-immunoreactive bouton fluorescence intensity was reduced by 40% in subjects with schizophrenia and was correlated with previously measured reduced spine density. The reduction was greater in subjects who were not living independently at time of death. In contrast, GAD65-immunoreactive bouton density and number were not altered in deep layer 3 of primary auditory cortex of subjects with schizophrenia. Conclusions: Decreased expression of GAD65 protein within inhibitory boutons could contribute to auditory impairments in schizophrenia. The correlated reductions in dendritic spines and GAD65 protein suggest a relationship between inhibitory and excitatory synapse pathology in primary auditory cortex. PMID:22624794
Scott, Gregory D; Karns, Christina M; Dow, Mark W; Stevens, Courtney; Neville, Helen J
2014-01-01
Brain reorganization associated with altered sensory experience clarifies the critical role of neuroplasticity in development. An example is enhanced peripheral visual processing associated with congenital deafness, but the neural systems supporting this have not been fully characterized. A gap in our understanding of deafness-enhanced peripheral vision is the contribution of primary auditory cortex. Previous studies of auditory cortex that use anatomical normalization across participants were limited by inter-subject variability of Heschl's gyrus. In addition to reorganized auditory cortex (cross-modal plasticity), a second gap in our understanding is the contribution of altered modality-specific cortices (visual intramodal plasticity in this case), as well as supramodal and multisensory cortices, especially when target detection is required across contrasts. Here we address these gaps by comparing fMRI signal change for peripheral vs. perifoveal visual stimulation (11-15° vs. 2-7°) in congenitally deaf and hearing participants in a blocked experimental design with two analytical approaches: a Heschl's gyrus region of interest analysis and a whole brain analysis. Our results using individually-defined primary auditory cortex (Heschl's gyrus) indicate that fMRI signal change for more peripheral stimuli was greater than perifoveal in deaf but not in hearing participants. Whole-brain analyses revealed differences between deaf and hearing participants for peripheral vs. perifoveal visual processing in extrastriate visual cortex including primary auditory cortex, MT+/V5, superior-temporal auditory, and multisensory and/or supramodal regions, such as posterior parietal cortex (PPC), frontal eye fields, anterior cingulate, and supplementary eye fields. Overall, these data demonstrate the contribution of neuroplasticity in multiple systems including primary auditory cortex, supramodal, and multisensory regions, to altered visual processing in congenitally deaf adults.
Voxel-based morphometry of auditory and speech-related cortex in stutterers.
Beal, Deryk S; Gracco, Vincent L; Lafaille, Sophie J; De Nil, Luc F
2007-08-06
Stutterers demonstrate unique functional neural activation patterns during speech production, including reduced auditory activation, relative to nonstutterers. The extent to which these functional differences are accompanied by abnormal morphology of the brain in stutterers is unclear. This study examined the neuroanatomical differences in speech-related cortex between stutterers and nonstutterers using voxel-based morphometry. Results revealed significant differences in localized grey matter and white matter densities of left and right hemisphere regions involved in auditory processing and speech production.
Modegi, Toshio
We are developing audio watermarking techniques that enable embedded data to be extracted by cell phones. This requires embedding data in frequency ranges where auditory sensitivity is high, so embedding introduces considerable audible noise. We previously proposed exploiting two-channel stereo playback, in which the noise generated by a data-embedded left-channel signal is cancelled by the right-channel signal. However, that proposal has the practical drawback of restricting where the extracting terminal can be located. In this paper, we propose synthesizing the noise-reducing right-channel signal with the left-channel signal, cancelling the noise completely by inducing an auditory stream segregation phenomenon in listeners. This new proposal makes a separate noise-reducing right channel unnecessary and supports monaural playback. Moreover, we propose a wide-band embedding method that induces dual auditory stream segregation phenomena, enabling data embedding across the full public telephone frequency range and stable extraction with 3G mobile phones. With these proposals, extraction precision becomes higher than with the previously proposed method, while the quality degradation of embedded signals becomes smaller. We present an overview of the newly proposed method and experimental results compared with those of the previously proposed method.
Patterns of language and auditory dysfunction in 6-year-old children with epilepsy.
Selassie, Gunilla Rejnö-Habte; Olsson, Ingrid; Jennische, Margareta
2009-01-01
In a previous study we reported difficulty with expressive language and visuoperceptual ability in preschool children with epilepsy and otherwise normal development. The present study analysed speech and language dysfunction for each individual in relation to epilepsy variables, ear preference, and intelligence in these children and described their auditory function. Twenty 6-year-old children with epilepsy (14 females, 6 males; mean age 6:5 y, range 6 y-6 y 11 mo) and 30 reference children without epilepsy (18 females, 12 males; mean age 6:5 y, range 6 y-6 y 11 mo) were assessed for language and auditory ability. Low scores for the children with epilepsy were analysed with respect to speech-language domains, type of epilepsy, site of epileptiform activity, intelligence, and language laterality. Auditory attention, perception, discrimination, and ear preference were measured with a dichotic listening test, and group comparisons were performed. Children with left-sided partial epilepsy had extensive language dysfunction. Most children with partial epilepsy had phonological dysfunction. Language dysfunction was also found in children with generalized and unclassified epilepsies. The children with epilepsy performed significantly worse than the reference children in auditory attention, perception of vowels, and discrimination of consonants for the right ear, and showed a greater left-ear advantage for vowels, indicating undeveloped language laterality.
Todd, N P M; Paillard, A C; Kluk, K; Whittle, E; Colebatch, J G
2014-06-01
Todd et al. (2014) have recently demonstrated the presence of vestibular dependent changes both in the morphology and in the intensity dependence of auditory evoked potentials (AEPs) when passing through the vestibular threshold as determined by vestibular evoked myogenic potentials (VEMPs). In this paper we extend this work by comparing left vs. right ear stimulation and by conducting a source analysis of the resulting evoked potentials of short and long latency. Ten healthy, right-handed subjects were recruited and evoked potentials were recorded to both left- and right-ear sound stimulation, above and below vestibular threshold. Below VEMP threshold, typical AEPs were recorded, consisting of mid-latency (MLR) waves Na and Pa followed by long latency AEPs (LAEPs) N1 and P2. In the supra-threshold condition, the expected changes in morphology were observed, consisting of: (1) short-latency vestibular evoked potentials (VsEPs) which have no auditory correlate, i.e. the ocular VEMP (OVEMP) and inion response related potentials; (2) a later deflection, labelled N42/P52, followed by the LAEPs N1 and P2. Statistical analysis of the vestibular dependent responses indicated a contralateral effect for inion related short-latency responses and a left-ear/right-hemisphere advantage for the long-latency responses. Source analysis indicated that the short-latency effects may be mediated by a contralateral projection to left cerebellum, while the long-latency effects were mediated by a contralateral projection to right cingulate cortex. In addition we found evidence of a possible vestibular contribution to the auditory T-complex in radial temporal lobe sources. These last results raise the possibility that acoustic activation of the otolith organs could potentially contribute to auditory processing. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
'Sorry, I meant the patient's left side': impact of distraction on left-right discrimination.
McKinley, John; Dempster, Martin; Gormley, Gerard J
2015-04-01
Medical students can have difficulty in distinguishing left from right. Many infamous medical errors have occurred when a procedure has been performed on the wrong side, such as in the removal of the wrong kidney. Clinicians encounter many distractions during their work. There is limited information on how these affect performance. Using a neuropsychological paradigm, we aim to elucidate the impacts of different types of distraction on left-right (LR) discrimination ability. Medical students were recruited to a study with four arms: (i) control arm (no distraction); (ii) auditory distraction arm (continuous ambient ward noise); (iii) cognitive distraction arm (interruptions with clinical cognitive tasks), and (iv) auditory and cognitive distraction arm. Participants' LR discrimination ability was measured using the validated Bergen Left-Right Discrimination Test (BLRDT). Multivariate analysis of variance was used to analyse the impacts of the different forms of distraction on participants' performance on the BLRDT. Additional analyses looked at effects of demographics on performance and correlated participants' self-perceived LR discrimination ability and their actual performance. A total of 234 students were recruited. Cognitive distraction had a greater negative impact on BLRDT performance than auditory distraction. Combined auditory and cognitive distraction had a negative impact on performance, but only in the most difficult LR task was this negative impact found to be significantly greater than that of cognitive distraction alone. There was a significant medium-sized correlation between perceived LR discrimination ability and actual overall BLRDT performance. Distraction has a significant impact on performance and multifaceted approaches are required to reduce LR errors. Educationally, greater emphasis on the linking of theory and clinical application is required to support patient safety and human factor training in medical school curricula. 
Distraction has the potential to impair an individual's ability to make accurate LR decisions and students should be trained from undergraduate level to be mindful of this. © 2015 John Wiley & Sons Ltd.
Neural networks mediating sentence reading in the deaf
Hirshorn, Elizabeth A.; Dye, Matthew W. G.; Hauser, Peter C.; Supalla, Ted R.; Bavelier, Daphne
2014-01-01
The present work addresses the neural bases of sentence reading in deaf populations. To better understand the relative role of deafness and spoken language knowledge in shaping the neural networks that mediate sentence reading, three populations with different degrees of English knowledge and depth of hearing loss were included: deaf signers, oral deaf, and hearing individuals. The three groups were matched for reading comprehension and scanned while reading sentences. A similar neural network of left perisylvian areas was observed, supporting the view of a shared network of areas for reading despite differences in hearing and English knowledge. However, differences were observed, in particular in the auditory cortex, with deaf signers and oral deaf showing greatest bilateral superior temporal gyrus (STG) recruitment as compared to hearing individuals. Importantly, within deaf individuals, the same STG area in the left hemisphere showed greater recruitment as hearing loss increased. To further understand the functional role of such auditory cortex re-organization after deafness, connectivity analyses were performed from the STG regions identified above. Connectivity from the left STG toward areas typically associated with semantic processing (BA45 and thalami) was greater in deaf signers and in oral deaf as compared to hearing. In contrast, connectivity from left STG toward areas identified with speech-based processing was greater in hearing and in oral deaf as compared to deaf signers. These results support the growing literature indicating recruitment of auditory areas after congenital deafness for visually-mediated language functions, and establish that both auditory deprivation and language experience shape its functional reorganization. Implications for differential reliance on semantic vs. phonological pathways during reading in the three groups are discussed. PMID:24959127
Auditory changes in acromegaly.
Tabur, S; Korkmaz, H; Baysal, E; Hatipoglu, E; Aytac, I; Akarsu, E
2017-06-01
The aim of this study is to determine the changes involving the auditory system in cases with acromegaly. Otological examinations of 41 cases with acromegaly (uncontrolled n = 22, controlled n = 19) were compared with those of 24 age- and gender-matched healthy subjects. Whereas the cases with acromegaly underwent examination with pure tone audiometry (PTA), speech audiometry for speech discrimination (SD), tympanometry, stapedius reflex evaluation and otoacoustic emission tests, the control group underwent only otological examination and PTA. Additionally, previously performed paranasal sinus computed tomography scans of all cases with acromegaly and control subjects were obtained to measure the length of the internal acoustic canal (IAC). PTA values were higher (p < 0.001 for right ears and p = 0.001 for left ears), and SD scores were lower (p = 0.002 for right ears and p = 0.002 for left ears), in acromegalic patients. IAC width in the acromegaly group was narrower compared to that in the control group (p = 0.03 for right ears and p = 0.02 for left ears). When only cases with acromegaly were taken into consideration, PTA values in left ears had a positive correlation with growth hormone and insulin-like growth factor-1 levels (r = 0.4, p = 0.02 and r = 0.3, p = 0.03). Of all cases with acromegaly, 13 (32%) had hearing loss in at least one ear; 7 (54%) had sensorineural type and 6 (46%) had conductive type hearing loss. Acromegaly may cause certain changes in the auditory system, and these changes may be multifactorial, causing both conductive and sensorineural defects.
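The r values reported in the abstract above are Pearson correlation coefficients. As a reminder of what that statistic computes, here is a minimal Python sketch; the numbers below are invented for illustration and are not the study's measurements.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation: covariance normalized by the product of standard deviations."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)

# Invented example: left-ear hearing thresholds (dB) vs. IGF-1 levels.
# A positive r, as reported in the study, means higher hormone levels
# tend to accompany higher (worse) thresholds.
pta_left = [20.0, 25.0, 30.0, 35.0, 28.0, 40.0]
igf1 = [150.0, 180.0, 210.0, 260.0, 200.0, 300.0]
r = pearson_r(pta_left, igf1)
```

Note that r measures only linear association; the p values alongside it in the abstract come from a separate significance test against the null of zero correlation.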
External auditory exostoses and hearing loss in the Shanidar 1 Neandertal
2017-01-01
The Late Pleistocene Shanidar 1 older adult male Neandertal is known for the crushing fracture of his left orbit with a probable reduction in vision, the loss of his right forearm and hand, and evidence of an abnormal gait, as well as probable diffuse idiopathic skeletal hyperostosis. He also exhibits advanced external auditory exostoses in his left auditory meatus and larger ones with complete bridging across the porus in the right meatus (both Grade 3). These growths indicate at least unilateral conductive hearing loss (CHL), a serious sensory deprivation for a Pleistocene hunter-gatherer. This condition joins the meatal atresia of the Middle Pleistocene Atapuerca-SH Cr.4 in providing evidence of survival with conductive hearing loss (and hence serious sensory deprivation) among these Pleistocene humans. The presence of CHL in these fossils thereby reinforces the paleobiological and archeological evidence for supporting social matrices among these Pleistocene foraging peoples. PMID:29053746
Single-unit analysis of somatosensory processing in the core auditory cortex of hearing ferrets.
Meredith, M Alex; Allman, Brian L
2015-03-01
The recent findings in several species that the primary auditory cortex processes non-auditory information have largely overlooked the possibility of somatosensory effects. Therefore, the present investigation examined the core auditory cortices (anterior auditory field and primary auditory cortex) for tactile responsivity. Multiple single-unit recordings from anesthetised ferret cortex yielded histologically verified neurons (n = 311) tested with electronically controlled auditory, visual and tactile stimuli, and their combinations. Of the auditory neurons tested, a small proportion (17%) was influenced by visual cues, but a somewhat larger number (23%) was affected by tactile stimulation. Tactile effects rarely occurred alone and spiking responses were observed in bimodal auditory-tactile neurons. However, the broadest tactile effect that was observed, which occurred in all neuron types, was that of suppression of the response to a concurrent auditory cue. The presence of tactile effects in the core auditory cortices was supported by a substantial anatomical projection from the rostral suprasylvian sulcal somatosensory area. Collectively, these results demonstrate that crossmodal effects in the auditory cortex are not exclusively visual and that somatosensation plays a significant role in modulation of acoustic processing, and indicate that crossmodal plasticity following deafness may unmask these existing non-auditory functions. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Martí-Bonmatí, Luis; Lull, Juan José; García-Martí, Gracián; Aguilar, Eduardo J; Moratal-Pérez, David; Poyatos, Cecilio; Robles, Montserrat; Sanjuán, Julio
2007-08-01
To prospectively evaluate if functional magnetic resonance (MR) imaging abnormalities associated with auditory emotional stimuli coexist with focal brain reductions in schizophrenic patients with chronic auditory hallucinations. Institutional review board approval was obtained and all participants gave written informed consent. Twenty-one right-handed male patients with schizophrenia and persistent hallucinations (started to hear hallucinations at a mean age of 23 years +/- 10, with 15 years +/- 8 of mean illness duration) and 10 paired healthy participants (same ethnic group [white], age, and education level [secondary school]) were studied. Functional echo-planar T2*-weighted (after both emotional and neutral auditory stimulation) and morphometric three-dimensional gradient-recalled echo T1-weighted MR images were analyzed using Statistical Parametric Mapping (SPM2) software. Brain activation images were extracted by subtracting responses to nonemotional words from responses to emotional words. Anatomic differences were explored by optimized voxel-based morphometry. The functional and morphometric MR images were overlaid to depict voxels identified as statistically significant by both techniques. A coincidence map was generated by multiplying the emotional subtracted functional MR and volume decrement morphometric maps. Statistical analysis used the general linear model, Student t tests, random effects analyses, and analysis of covariance with a correction for multiple comparisons following the false discovery rate method. Large coinciding brain clusters (P < .005) were found in the left and right middle temporal and superior temporal gyri. Smaller coinciding clusters were found in the left posterior and right anterior cingular gyri, left inferior frontal gyrus, and middle occipital gyrus. The middle and superior temporal and the cingular gyri are closely related to the abnormal neural network involved in the auditory emotional dysfunction seen in schizophrenic patients.
Threshold changes of ABR results in toddlers and children.
Louza, Julia; Polterauer, Daniel; Wittlinger, Natalie; Muzaini, Hanan Al; Scheckinger, Siiri; Hempel, Martin; Schuster, Maria
2016-06-01
Auditory brainstem response (ABR) is a clinically established method to identify the hearing threshold in young children and is regularly performed after hearing screening has failed. Some studies have shown that, after the first diagnosis of hearing impairment in ABR, further development takes place in a spectrum between progression of hearing loss and, surprisingly, hearing improvement. The aim of this study is to evaluate changes over time of auditory thresholds measured by ABR among young children. For this retrospective study, 459 auditory brainstem measurements performed between 2010 and 2014 were analyzed. Hearing loss was detected and assessed according to national guidelines. 104 right ears and 101 left ears of 116 children aged between 0 and 3 years with multiple ABR measurements were included. The auditory threshold was identified using click and/or NB-chirp-stimuli in natural sleep or under general anesthesia. The frequency of differences of more than 10dB between the measurements was identified. In 37 (35%) measurements of right ears and 38 (38%) of left ears there was an improvement of the auditory threshold of more than 10dB; in 27 of those measurements more than 20dB improvement was found. Deterioration was seen in 12% of the right ears and 10% of the left ears. Only half of the children had stable hearing thresholds in repeated measurements. The time between the measurements was on average 5 months (0 to 31 months). Hearing threshold changes are often seen in repeated ABR measurements. Therefore, multiple measurements are necessary when ABR yields abnormal results. Hearing threshold changes should be taken into account for hearing aid provision.
Zhang, Yingli; Liang, Wei; Yang, Shichang; Dai, Ping; Shen, Lijuan; Wang, Changhong
2013-10-05
This study assessed the efficacy and tolerability of repetitive transcranial magnetic stimulation for treatment of auditory hallucination of patients with schizophrenia spectrum disorders. Online literature retrieval was conducted using PubMed, ISI Web of Science, EMBASE, Medline and Cochrane Central Register of Controlled Trials databases from January 1985 to May 2012. Key words were "transcranial magnetic stimulation", "TMS", "repetitive transcranial magnetic stimulation", and "hallucination". Selected studies were randomized controlled trials assessing therapeutic efficacy of repetitive transcranial magnetic stimulation for hallucination in patients with schizophrenia spectrum disorders. Experimental intervention was low-frequency repetitive transcranial magnetic stimulation in left temporoparietal cortex for treatment of auditory hallucination in schizophrenia spectrum disorders. Control groups received sham stimulation. The primary outcome was total scores of Auditory Hallucinations Rating Scale, Auditory Hallucination Subscale of Psychotic Symptom Rating Scale, Positive and Negative Symptom Scale-Auditory Hallucination item, and Hallucination Change Scale. Secondary outcomes included response rate, global mental state, adverse effects and cognitive function. Seventeen studies addressing repetitive transcranial magnetic stimulation for treatment of schizophrenia spectrum disorders were screened, with controls receiving sham stimulation. Complete data were available for all 398 included patients. The overall mean weighted effect size for repetitive transcranial magnetic stimulation versus sham stimulation was statistically significant (MD = -0.42, 95%CI: -0.64 to -0.20, P = 0.0002). Patients receiving repetitive transcranial magnetic stimulation responded more frequently than those receiving sham stimulation (OR = 2.94, 95%CI: 1.39 to 6.24, P = 0.005).
No significant differences were found between active repetitive transcranial magnetic stimulation and sham stimulation for positive or negative symptoms. Compared with sham stimulation, active repetitive transcranial magnetic stimulation had equivocal effects on cognitive function and commonly caused headache and facial muscle twitching. Repetitive transcranial magnetic stimulation is a safe and effective treatment for auditory hallucination in schizophrenia spectrum disorders.
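The pooled effect size reported above (an MD with a 95% CI) is the standard output of an inverse-variance meta-analysis. As an illustrative sketch only (the per-study values below are hypothetical, not taken from this review), a fixed-effect pooling can be computed as:

```python
import math

def pool_mean_differences(mds, ses):
    """Fixed-effect inverse-variance pooling of per-study mean
    differences (mds) given their standard errors (ses).
    Returns the pooled MD and its 95% confidence interval."""
    weights = [1.0 / se ** 2 for se in ses]          # inverse-variance weights
    pooled = sum(w * md for w, md in zip(weights, mds)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))        # SE of the pooled estimate
    half_width = 1.96 * se_pooled                    # normal-approximation 95% CI
    return pooled, (pooled - half_width, pooled + half_width)

# Hypothetical per-study mean differences and standard errors.
md, ci = pool_mean_differences([-0.5, -0.3, -0.45], [0.2, 0.15, 0.25])
```

A pooled CI lying entirely below zero, as in the review's MD = -0.42 (95%CI: -0.64 to -0.20), indicates a significant reduction in hallucination scores under active stimulation.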
Nieto-Diego, Javier; Malmierca, Manuel S.
2016-01-01
Stimulus-specific adaptation (SSA) in single neurons of the auditory cortex was suggested to be a potential neural correlate of the mismatch negativity (MMN), a widely studied component of the auditory event-related potentials (ERP) that is elicited by changes in the auditory environment. However, several aspects on this SSA/MMN relation remain unresolved. SSA occurs in the primary auditory cortex (A1), but detailed studies on SSA beyond A1 are lacking. To study the topographic organization of SSA, we mapped the whole rat auditory cortex with multiunit activity recordings, using an oddball paradigm. We demonstrate that SSA occurs outside A1 and differs between primary and nonprimary cortical fields. In particular, SSA is much stronger and develops faster in the nonprimary than in the primary fields, paralleling the organization of subcortical SSA. Importantly, strong SSA is present in the nonprimary auditory cortex within the latency range of the MMN in the rat and correlates with an MMN-like difference wave in the simultaneously recorded local field potentials (LFP). We present new and strong evidence linking SSA at the cellular level to the MMN, a central tool in cognitive and clinical neuroscience. PMID:26950883
Phillips, D P; Farmer, M E
1990-11-15
This paper explores the nature of the processing disorder which underlies the speech discrimination deficit in the syndrome of acquired word deafness following pathology of the primary auditory cortex. A critical examination of the evidence on this disorder revealed the following. First, the most profound forms of the condition are expressed not only in an isolation of the cerebral linguistic processor from auditory input, but in a failure of even the perceptual elaboration of the relevant sounds. Second, in agreement with earlier studies, we conclude that the perceptual dimension disturbed in word deafness is a temporal one. We argue, however, that it is not a generalized disorder of auditory temporal processing, but one which is largely restricted to the processing of sounds with temporal content in the milliseconds to tens-of-milliseconds time frame. The perceptual elaboration of sounds with temporal content outside that range, in either direction, may survive the disorder. Third, we present neurophysiological evidence that the primary auditory cortex has a special role in the representation of auditory events in that time frame, but not in the representation of auditory events with temporal grains outside that range.
Auditory cortex asymmetry, altered minicolumn spacing and absence of ageing effects in schizophrenia
Casanova, Manuel F.; Switala, Andy E.; Crow, Timothy J.
2008-01-01
The superior temporal gyrus, which contains the auditory cortex, including the planum temporale, is the most consistently altered neocortical structure in schizophrenia (Shenton ME, Dickey CC, Frumin M, McCarley RW. A review of MRI findings in schizophrenia. Schizophr Res 2001; 49: 1–52). Auditory hallucinations are associated with abnormalities in this region and activation in Heschl's gyrus. Our review of 34 MRI and 5 post-mortem studies of planum temporale reveals that half of those measuring region size reported a change in schizophrenia, usually consistent with a reduction in the left hemisphere and a relative increase in the right hemisphere. Furthermore, female subjects are under-represented in the literature and insight from sex differences may be lost. Here we present evidence from post-mortem brain (N = 21 patients, compared with 17 previously reported controls) that normal age-associated changes in planum temporale are not found in schizophrenia. These age-associated differences are reported in an adult population (age range 29–90 years) and were not found in the primary auditory cortex of Heschl's gyrus, indicating that they are selective to the more plastic regions of association cortex involved in cognition. Areas and volumes of Heschl's gyrus and planum temporale and the separation of the minicolumns that are held to be the structural units of the cerebral cortex were assessed in patients. Minicolumn distribution in planum temporale and Heschl's gyrus was assessed on Nissl-stained sections by semi-automated microscope image analysis. The cortical surface area of planum temporale in the left hemisphere (usually asymmetrically larger) was positively correlated with its constituent minicolumn spacing in patients and controls. Surface area asymmetry of planum temporale was reduced in patients with schizophrenia by a reduction in the left hemisphere (F = 7.7, df 1,32, P < 0.01). 
The relationship between cortical asymmetry and the connecting, interhemispheric callosal white matter was also investigated; minicolumn asymmetry of both Heschl's gyrus and planum temporale was correlated with axon number in the wrong subregions of the corpus callosum in patients. The spacing of minicolumns was altered in a sex-dependent manner due to the absence of age-related minicolumn thinning in schizophrenia. This is interpreted as a failure of adult neuroplasticity that maintains neuropil space. The arrested capacity to absorb anomalous events and cognitive demands may confer vulnerability to schizophrenic symptoms when adult neuroplastic demands are not met. PMID:18819990
Dysfunctional Noise Cancelling of the Rostral Anterior Cingulate Cortex in Tinnitus Patients
Song, Jae Jin; Vanneste, Sven; De Ridder, Dirk
2015-01-01
Background: Peripheral auditory deafferentation and central compensation have been regarded as the main culprits of tinnitus generation. However, patient-to-patient discrepancy in the range of the percentage of daytime in which tinnitus is perceived (tinnitus awareness percentage, 0 – 100%) is not fully explicable only by peripheral deafferentation, considering that the deafferentation is a stable persisting phenomenon but tinnitus is intermittently perceived in most patients. Consequently, the involvement of a dysfunctional noise cancellation mechanism has recently been suggested with regard to the individual differences in reported tinnitus awareness. By correlating the tinnitus awareness percentage with resting-state source-localized electroencephalography findings, we may be able to retrieve the cortical area that is negatively correlated with tinnitus awareness percentage, and then the area may be regarded as the core of the noise cancelling system that is defective in patients with tinnitus. Methods and Findings: Using resting-state cortical oscillation, we investigated 80 tinnitus patients by correlating the tinnitus awareness percentage with their source-localized cortical oscillatory activity and functional connectivity. The activity of bilateral rostral anterior cingulate cortices (ACCs), left dorsal- and pregenual ACCs for the delta band, bilateral rostral/pregenual/subgenual ACCs for the theta band, and left rostral/pregenual ACC for the beta 1 band displayed significantly negative correlations with tinnitus awareness percentage. Also, the connectivity between the left primary auditory cortex (A1) and the rostral ACC, as well as between the left A1 and the subgenual ACC for the beta 1 band, were negatively correlated with tinnitus awareness percentage. Conclusions: These results may designate the role of the rostral ACC as the core of the descending noise cancellation system, and thus dysfunction of the rostral ACC may result in perception of tinnitus. 
The present study also opens the possibility of tinnitus modulation by neuromodulatory approaches targeting the rostral ACC. PMID:25875099
Ylinen, Sari; Nora, Anni; Leminen, Alina; Hakala, Tero; Huotilainen, Minna; Shtyrov, Yury; Mäkelä, Jyrki P; Service, Elisabet
2015-06-01
Speech production, both overt and covert, down-regulates the activation of auditory cortex. This is thought to be due to forward prediction of the sensory consequences of speech, contributing to a feedback control mechanism for speech production. Critically, however, these regulatory effects should be specific to speech content to enable accurate speech monitoring. To determine the extent to which such forward prediction is content-specific, we recorded the brain's neuromagnetic responses to heard multisyllabic pseudowords during covert rehearsal in working memory, contrasted with a control task. The cortical auditory processing of target syllables was significantly suppressed during rehearsal compared with control, but only when they matched the rehearsed items. This critical specificity to speech content enables accurate speech monitoring by forward prediction, as proposed by current models of speech production. The one-to-one phonological motor-to-auditory mappings also appear to serve the maintenance of information in phonological working memory. Further findings of right-hemispheric suppression in the case of whole-item matches and left-hemispheric enhancement for last-syllable mismatches suggest that speech production is monitored by 2 auditory-motor circuits operating on different timescales: Finer grain in the left versus coarser grain in the right hemisphere. Taken together, our findings provide hemisphere-specific evidence of the interface between inner and heard speech.
Aberrant Lateralization of Brainstem Auditory Evoked Responses by Individuals with Down Syndrome.
ERIC Educational Resources Information Center
Miezejeski, Charles M.; And Others
1994-01-01
Brainstem auditory evoked response latencies were studied in 80 males (13 with Down's syndrome). Latencies for waves P3 and P5 were shorter for Down's syndrome subjects, who also showed a different pattern of left versus right ear responses. Results suggest decreased lateralization and receptive and expressive language ability among people with…
Jacquin-Courtois, S; Rode, G; Pavani, F; O'Shea, J; Giard, M H; Boisson, D; Rossetti, Y
2010-03-01
Unilateral neglect is a disabling syndrome frequently observed following right hemisphere brain damage. Symptoms range from visuo-motor impairments through to deficient visuo-spatial imagery, but impairment can also affect the auditory modality. A short period of adaptation to a rightward prismatic shift of the visual field is known to improve a wide range of hemispatial neglect symptoms, including visuo-manual tasks, mental imagery, postural imbalance, visuo-verbal measures and number bisection. The aim of the present study was to assess whether the beneficial effects of prism adaptation may generalize to auditory manifestations of neglect. Auditory extinction, whose clinical manifestations are independent of the sensory modalities engaged in visuo-manual adaptation, was examined in neglect patients before and after prism adaptation. Two separate groups of neglect patients (all of whom exhibited left auditory extinction) underwent prism adaptation: one group (n = 6) received a classical prism treatment ('Prism' group), the other group (n = 6) was submitted to the same procedure, but wore neutral glasses creating no optical shift (placebo 'Control' group). Auditory extinction was assessed by means of a dichotic listening task performed three times: prior to prism exposure (pre-test), upon prism removal (0 h post-test) and 2 h later (2 h post-test). The total number of correct responses, the lateralization index (detection asymmetry between the two ears) and the number of left-right fusion errors were analysed. Our results demonstrate that prism adaptation can improve left auditory extinction, thus revealing transfer of benefit to a sensory modality that is orthogonal to the visual, proprioceptive and motor modalities directly implicated in the visuo-motor adaptive process. The observed benefit was specific to the detection asymmetry between the two ears and did not affect the total number of responses. 
This indicates a specific effect of prism adaptation on lateralized processes rather than on general arousal. Our results suggest that the effects of prism adaptation can extend to unexposed sensory systems. The bottom-up approach of visuo-motor adaptation appears to interact with higher order brain functions related to multisensory integration and can have beneficial effects on sensory processing in different modalities. These findings should stimulate the development of therapeutic approaches aimed at bypassing the affected sensory processing modality by adapting other sensory modalities.
Tinnitus Intensity Dependent Gamma Oscillations of the Contralateral Auditory Cortex
van der Loo, Elsa; Gais, Steffen; Congedo, Marco; Vanneste, Sven; Plazier, Mark; Menovsky, Tomas; Van de Heyning, Paul; De Ridder, Dirk
2009-01-01
Background: Non-pulsatile tinnitus is considered a subjective auditory phantom phenomenon present in 10 to 15% of the population. Tinnitus as a phantom phenomenon is related to hyperactivity and reorganization of the auditory cortex. Magnetoencephalography studies demonstrate a correlation between gamma band activity in the contralateral auditory cortex and the presence of tinnitus. The present study aims to investigate the relation between objective gamma-band activity in the contralateral auditory cortex and subjective tinnitus loudness scores. Methods and Findings: In unilateral tinnitus patients (N = 15; 10 right, 5 left) source analysis of resting state electroencephalographic gamma band oscillations shows a strong positive correlation with Visual Analogue Scale loudness scores in the contralateral auditory cortex (max r = 0.73, p<0.05). Conclusion: Auditory phantom percepts thus show similar sound level dependent activation of the contralateral auditory cortex as observed in normal audition. In view of recent consciousness models and tinnitus network models these results suggest tinnitus loudness is coded by gamma band activity in the contralateral auditory cortex but might not, by itself, be responsible for tinnitus perception. PMID:19816597
Tuning In to Sound: Frequency-Selective Attentional Filter in Human Primary Auditory Cortex
Da Costa, Sandra; van der Zwaag, Wietske; Miller, Lee M.; Clarke, Stephanie
2013-01-01
Cocktail parties, busy streets, and other noisy environments pose a difficult challenge to the auditory system: how to focus attention on selected sounds while ignoring others? Neurons of primary auditory cortex, many of which are sharply tuned to sound frequency, could help solve this problem by filtering selected sound information based on frequency-content. To investigate whether this occurs, we used high-resolution fMRI at 7 tesla to map the fine-scale frequency-tuning (1.5 mm isotropic resolution) of primary auditory areas A1 and R in six human participants. Then, in a selective attention experiment, participants heard low (250 Hz)- and high (4000 Hz)-frequency streams of tones presented at the same time (dual-stream) and were instructed to focus attention onto one stream versus the other, switching back and forth every 30 s. Attention to low-frequency tones enhanced neural responses within low-frequency-tuned voxels relative to high, and when attention switched the pattern quickly reversed. Thus, like a radio, human primary auditory cortex is able to tune into attended frequency channels and can switch channels on demand. PMID:23365225
de Borst, Aline W; de Gelder, Beatrice
2017-08-01
Previous studies have shown that the early visual cortex contains content-specific representations of stimuli during visual imagery, and that these representational patterns of imagery content have a perceptual basis. To date, there is little evidence for the presence of a similar organization in the auditory and tactile domains. Using fMRI-based multivariate pattern analyses we showed that primary somatosensory, auditory, motor, and visual cortices are discriminative for imagery of touch versus sound. In the somatosensory, motor and visual cortices the imagery modality discriminative patterns were similar to perception modality discriminative patterns, suggesting that top-down modulations in these regions rely on similar neural representations as bottom-up perceptual processes. Moreover, we found evidence for content-specific representations of the stimuli during auditory imagery in the primary somatosensory and primary motor cortices. Both the imagined emotions and the imagined identities of the auditory stimuli could be successfully classified in these regions.
Auditory Evoked Potential Mismatch Negativity in Normal-Hearing Adults
Schwade, Laura Flach; Didoné, Dayane Domeneghini; Sleifer, Pricila
2017-01-01
Introduction: Mismatch Negativity (MMN) corresponds to a response of the central auditory nervous system. Objective: The objective of this study is to analyze MMN latencies and amplitudes in normal-hearing adults and compare the results between ears, gender and hand dominance. Methods: This is a cross-sectional study. Forty subjects participated, 20 women and 20 men, aged 18 to 29 years and having normal auditory thresholds. A frequency of 1000Hz (standard stimuli) and 2000Hz (deviant stimuli) was used to evoke the MMN. Results: Mean latencies in the right ear were 169.4ms and 175.3ms in the left ear, with mean amplitudes of 4.6µV in the right ear and 4.2µV in the left ear. There was no statistically significant difference between ears. The comparison of latencies between genders showed a statistically significant difference for the right ear, being higher in men than in women. There was no statistically significant difference between ears for either the right-handed or the left-handed group. However, the results indicated that the latency of the right ear was significantly higher for the left handers than the right handers. We also found a significant result for the latency of the left ear, which was higher for the right handers. Conclusion: It was possible to obtain reference values for the MMN. There are no differences in the MMN latencies and amplitudes between the ears. Regarding gender, the male group presented higher latencies in relation to the female group in the right ear. Some results indicate a statistically significant difference in the MMN between right- and left-handed individuals. PMID:28680490
Auditory processing deficits in individuals with primary open-angle glaucoma.
Rance, Gary; O'Hare, Fleur; O'Leary, Stephen; Starr, Arnold; Ly, Anna; Cheng, Belinda; Tomlin, Dani; Graydon, Kelley; Chisari, Donella; Trounce, Ian; Crowston, Jonathan
2012-01-01
The high energy demand of the auditory and visual pathways renders these sensory systems prone to diseases that impair mitochondrial function. Primary open-angle glaucoma, a neurodegenerative disease of the optic nerve, has recently been associated with a spectrum of mitochondrial abnormalities. This study sought to investigate auditory processing in individuals with open-angle glaucoma. Design/study sample: Twenty-seven subjects with open-angle glaucoma underwent electrophysiologic (auditory brainstem response), auditory temporal processing (amplitude modulation detection), and speech perception (monosyllabic words in quiet and background noise) assessment in each ear. A cohort of age, gender and hearing level matched control subjects was also tested. While the majority of glaucoma subjects in this study demonstrated normal auditory function, a significant number (6/27 subjects, 22%) showed abnormal auditory brainstem responses and impaired auditory perception in one or both ears. The finding that a significant proportion of subjects with open-angle glaucoma presented with auditory dysfunction provides evidence of systemic neuronal susceptibility. Affected individuals may suffer significant communication difficulties in everyday listening situations.
Analyzing pitch chroma and pitch height in the human brain.
Warren, Jason D; Uppenkamp, Stefan; Patterson, Roy D; Griffiths, Timothy D
2003-11-01
The perceptual pitch dimensions of chroma and height have distinct representations in the human brain: chroma is represented in cortical areas anterior to primary auditory cortex, whereas height is represented posterior to primary auditory cortex.
Lang, Alexandre; Vernet, Marine; Yang, Qing; Orssaud, Christophe; Londero, Alain; Kapoula, Zoï
2013-01-01
Subjective tinnitus (ST) is a frequent but poorly understood medical condition. Recent studies demonstrated abnormalities in several types of eye movements (smooth pursuit, optokinetic nystagmus, fixation, and vergence) in ST patients. The present study investigates horizontal and vertical saccades in patients with tinnitus lateralized predominantly to the left or to the right side. Compared to left sided ST, tinnitus perceived on the right side impaired almost all the parameters of saccades (latency, amplitude, velocity, etc.) and noticeably the upward saccades. Relative to controls, saccades from both groups were more dysmetric and were characterized by increased saccade disconjugacy (i.e., poor binocular coordination). Although the precise mechanisms linking ST and saccadic control remain unexplained, these data suggest that ST can lead to detrimental auditory, visuomotor, and perhaps vestibular interactions. PMID:23550269
Greening, Steven G.; Lee, Tae-Ho; Mather, Mara
2016-01-01
Anxiety is associated with an exaggerated expectancy of harm, including overestimation of how likely a conditioned stimulus (CS+) predicts a harmful unconditioned stimulus (US). In the current study we tested whether anxiety-associated expectancy of harm increases primary sensory cortex (S1) activity on non-reinforced (i.e., no shock) CS+ trials. Twenty healthy volunteers completed a differential-tone trace conditioning task while undergoing fMRI, with shock delivered to the left hand. We found a positive correlation between trait anxiety and activity in right, but not left, S1 during CS+ versus CS− conditions. Right S1 activity also correlated with individual differences in both primary auditory cortices (A1) and amygdala activity. Lastly, a seed-based functional connectivity analysis demonstrated that trial-wise S1 activity was positively correlated with regions of dorsolateral prefrontal cortex (dlPFC), suggesting that higher-order cognitive processes contribute to the anticipatory sensory reactivity. Our findings indicate that individual differences in trait anxiety relate to anticipatory reactivity for the US during associative learning. This anticipatory reactivity is also integrated along with emotion-related sensory signals into a brain network implicated in fear-conditioned responding. PMID:26751483
Rado-Triveño, Julia; Alen-Ayca, Jaime
2016-01-01
To determine the validity of otoacoustic emissions in comparison with brainstem auditory evoked potentials (PEATC), a study was carried out with 96 children between 0 and 4 years of age seen at the Instituto Nacional de Rehabilitación in Lima, Peru. The results show a cut-off point corresponding to an LR (+) of 17.67 in the right ear and 16.72 in the left ear, and an LR (-) of 0.25 in the right ear and 0.24 in the left ear; ROC curves with an area under the curve of 0.830 for the right ear (p<0.001) and 0.829 for the left ear (p<0.001) were obtained. According to the LR (+) results, the sensitivity is 76% in the right ear and 65% in the left ear, which coincides with the shape of the ROC curves. In conclusion, otoacoustic emissions would not represent a sufficiently discriminatory alternative as a screening test in this population.
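The likelihood ratios reported above follow directly from sensitivity and specificity. A minimal sketch, assuming a right-ear specificity of about 0.957 (back-calculated from the reported LR (+) of 17.67 and sensitivity of 76%; the specificity itself is not stated in the abstract):

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios for a diagnostic test."""
    lr_plus = sensitivity / (1 - specificity)   # LR(+) = sens / (1 - spec)
    lr_minus = (1 - sensitivity) / specificity  # LR(-) = (1 - sens) / spec
    return lr_plus, lr_minus

# Right-ear figures: reported sensitivity 76%; specificity ~0.957 is a
# back-calculated assumption that reproduces both reported ratios.
lr_plus, lr_minus = likelihood_ratios(0.76, 0.957)
print(round(lr_plus, 2), round(lr_minus, 2))  # 17.67 0.25
```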
Mendez, M F
2001-02-01
After a right temporoparietal stroke, a left-handed man lost the ability to understand speech and environmental sounds but developed greater appreciation for music. The patient had preserved reading and writing but poor verbal comprehension. Slower speech, single syllable words, and minimal written cues greatly facilitated his verbal comprehension. On identifying environmental sounds, he made predominant acoustic errors. Although he failed to name melodies, he could match, describe, and sing them. The patient had normal hearing except for presbyacusis, right-ear dominance for phonemes, and normal discrimination of basic psychoacoustic features and rhythm. Further testing disclosed difficulty distinguishing tone sequences and discriminating two clicks and short-versus-long tones, particularly in the left ear. Together, these findings suggest impairment in a direct route for temporal analysis and auditory word forms in his right hemisphere to Wernicke's area in his left hemisphere. The findings further suggest a separate and possibly rhythm-based mechanism for music recognition.
Webb, Alexandra R.; Heller, Howard T.; Benson, Carol B.; Lahav, Amir
2015-01-01
Brain development is largely shaped by early sensory experience. However, it is currently unknown whether, how early, and to what extent the newborn’s brain is shaped by exposure to maternal sounds when the brain is most sensitive to early life programming. The present study examined this question in 40 infants born extremely prematurely (between 25- and 32-wk gestation) in the first month of life. Newborns were randomized to receive auditory enrichment in the form of audio recordings of maternal sounds (including their mother’s voice and heartbeat) or routine exposure to hospital environmental noise. The groups were otherwise medically and demographically comparable. Cranial ultrasonography measurements were obtained at 30 ± 3 d of life. Results show that newborns exposed to maternal sounds had a significantly larger auditory cortex (AC) bilaterally compared with control newborns receiving standard care. The magnitude of the right and left AC thickness was significantly correlated with gestational age but not with the duration of sound exposure. Measurements of head circumference and the widths of the frontal horn (FH) and the corpus callosum (CC) were not significantly different between the two groups. This study provides evidence for experience-dependent plasticity in the primary AC before the brain has reached full-term maturation. Our results demonstrate that despite the immaturity of the auditory pathways, the AC is more adaptive to maternal sounds than environmental noise. Further studies are needed to better understand the neural processes underlying this early brain plasticity and its functional implications for future hearing and language development. PMID:25713382
Human-like brain hemispheric dominance in birdsong learning.
Moorman, Sanne; Gobes, Sharon M H; Kuijpers, Maaike; Kerkhofs, Amber; Zandbergen, Matthijs A; Bolhuis, Johan J
2012-07-31
Unlike nonhuman primates, songbirds learn to vocalize very much like human infants acquire spoken language. In humans, Broca's area in the frontal lobe and Wernicke's area in the temporal lobe are crucially involved in speech production and perception, respectively. Songbirds have analogous brain regions that show a similar neural dissociation between vocal production and auditory perception and memory. In both humans and songbirds, there is evidence for lateralization of neural responsiveness in these brain regions. Human infants already show left-sided dominance in their brain activation when exposed to speech. Moreover, a memory-specific left-sided dominance in Wernicke's area for speech perception has been demonstrated in 2.5-mo-old babies. It is possible that auditory-vocal learning is associated with hemispheric dominance and that this association arose in songbirds and humans through convergent evolution. Therefore, we investigated whether there is similar song memory-related lateralization in the songbird brain. We exposed male zebra finches to tutor or unfamiliar song. We found left-sided dominance of neuronal activation in a Broca-like brain region (HVC, a letter-based name) of juvenile and adult zebra finch males, independent of the song stimulus presented. In addition, juvenile males showed left-sided dominance for tutor song but not for unfamiliar song in a Wernicke-like brain region (the caudomedial nidopallium). Thus, left-sided dominance in the caudomedial nidopallium was specific for the song-learning phase and was memory-related. These findings demonstrate a remarkable neural parallel between birdsong and human spoken language, and they have important consequences for our understanding of the evolution of auditory-vocal learning and its neural mechanisms.
Gao, Fei; Wang, Guangbin; Ma, Wen; Ren, Fuxin; Li, Muwei; Dong, Yuling; Liu, Cheng; Liu, Bo; Bai, Xue; Zhao, Bin; Edden, Richard A.E.
2014-01-01
Gamma-aminobutyric acid (GABA) is the main inhibitory neurotransmitter in the central auditory system. Altered GABAergic neurotransmission has been found in both the inferior colliculus and the auditory cortex in animal models of presbycusis. Edited magnetic resonance spectroscopy (MRS), using the MEGA-PRESS sequence, is the most widely used technique for detecting GABA in the human brain. However, to date there has been a paucity of studies exploring changes to the GABA concentrations in the auditory region of patients with presbycusis. In this study, sixteen patients with presbycusis (5 males/11 females, mean age 63.1 ± 2.6 years) and twenty healthy controls (6 males/14 females, mean age 62.5 ± 2.3 years) underwent audiological and MRS examinations. Pure tone audiometry from 0.125 to 8 kHz and tympanometry were used to assess the hearing abilities of all subjects. The pure tone average (PTA; the average of hearing thresholds at 0.5, 1, 2, and 4 kHz) was calculated. The MEGA-PRESS sequence was used to measure GABA+ concentrations in 4 × 3 × 3 cm³ volumes centered on the left and right Heschl’s gyri. GABA+ concentrations were significantly lower in the presbycusis group compared to the control group (left auditory regions: p = 0.002, right auditory regions: p = 0.008). Significant negative correlations were observed between PTA and GABA+ concentrations in the presbycusis group (r = −0.57, p = 0.02), while a similar trend was found in the control group (r = −0.40, p = 0.08). These results are consistent with a hypothesis of dysfunctional GABAergic neurotransmission in the central auditory system in presbycusis, and suggest a potential treatment target for presbycusis. PMID:25463460
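The PTA used in the study above is a simple average over four audiometric frequencies. A short illustration (the audiogram values are hypothetical, for demonstration only):

```python
def pure_tone_average(audiogram_db):
    """PTA as defined in the study: mean hearing threshold (dB HL)
    at 0.5, 1, 2, and 4 kHz."""
    freqs = (500, 1000, 2000, 4000)
    return sum(audiogram_db[f] for f in freqs) / len(freqs)

# Hypothetical audiogram (frequency in Hz -> threshold in dB HL)
audiogram = {250: 15, 500: 20, 1000: 25, 2000: 35, 4000: 50, 8000: 60}
pta = pure_tone_average(audiogram)
print(pta)  # 32.5
```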
Auditory peripersonal space in humans.
Farnè, Alessandro; Làdavas, Elisabetta
2002-10-01
In the present study we report neuropsychological evidence of the existence of an auditory peripersonal space representation around the head in humans and its characteristics. In a group of right brain-damaged patients with tactile extinction, we found that a sound delivered near the ipsilesional side of the head (20 cm) strongly extinguished a tactile stimulus delivered to the contralesional side of the head (cross-modal auditory-tactile extinction). By contrast, when an auditory stimulus was presented far from the head (70 cm), cross-modal extinction was dramatically reduced. This spatially specific cross-modal extinction was most consistently found (i.e., both in the front and back spaces) when a complex sound was presented, like a white noise burst. Pure tones produced spatially specific cross-modal extinction when presented in the back space, but not in the front space. In addition, the most severe cross-modal extinction emerged when sounds came from behind the head, thus showing that the back space is more sensitive than the front space to the sensory interaction of auditory-tactile inputs. Finally, when cross-modal effects were investigated by reversing the spatial arrangement of cross-modal stimuli (i.e., touch on the right and sound on the left), we found that an ipsilesional tactile stimulus, although inducing a small amount of cross-modal tactile-auditory extinction, did not produce any spatial-specific effect. Therefore, the selective aspects of cross-modal interaction found near the head cannot be explained by a competition between a damaged left spatial representation and an intact right spatial representation. Thus, consistent with neurophysiological evidence from monkeys, our findings strongly support the existence, in humans, of an integrated cross-modal system coding auditory and tactile stimuli near the body, that is, in the peripersonal space.
Silverman, Carol A; Silman, Shlomo; Emmer, Michele B
2017-06-01
To enhance the understanding of tinnitus origin by disseminating two case studies of vestibular schwannoma (VS) involving behavioural auditory adaptation testing (AAT). Retrospective case study. Two adults who presented with unilateral, non-pulsatile subjective tinnitus and bilateral normal-hearing sensitivity. At the initial evaluation, the otolaryngologic and audiologic findings were unremarkable, bilaterally. Upon retest, years later, VS was identified. At retest, the tinnitus disappeared in one patient and was slightly attenuated in the other patient. In the former, the results of AAT were positive for left retrocochlear pathology; in the latter, the results were negative for the left ear although a moderate degree of auditory adaptation was present despite bilateral normal-hearing sensitivity. Imaging revealed a small VS in both patients, confirmed surgically. Behavioural AAT in patients with tinnitus furnishes a useful tool for exploring tinnitus origin. Decrease or disappearance of tinnitus in patients with auditory adaptation suggests that the tinnitus generator is the cochlea or the cochlear nerve adjacent to the cochlea. Patients with unilateral tinnitus and bilateral, symmetric, normal-hearing thresholds, absent other audiovestibular symptoms, should be routinely monitored through otolaryngologic and audiologic re-evaluations. Tinnitus decrease or disappearance may constitute a red flag for retrocochlear pathology.
Westerhausen, René; Grüner, Renate; Specht, Karsten; Hugdahl, Kenneth
2009-06-01
The midsagittal corpus callosum is topographically organized; that is, with regard to their cortical origin, several sub-tracts can be distinguished within the corpus callosum that belong to specific functional brain networks. Recent diffusion tensor tractography studies have also revealed remarkable interindividual differences in the size and exact localization of these tracts. To examine the functional relevance of interindividual variability in callosal tracts, 17 right-handed male participants underwent structural and diffusion tensor magnetic resonance imaging. Probabilistic tractography was carried out to identify the callosal subregion that interconnects left and right temporal lobe auditory processing areas, and the midsagittal size of this tract was taken as an indicator of the (anatomical) strength of this connection. Auditory information transfer was assessed using an auditory speech perception task with dichotic presentations of consonant-vowel syllables (e.g., /ba-ga/). The frequency of correct left-ear reports in this task served as a functional measure of interhemispheric transfer. Statistical analysis showed that a stronger anatomical connection between the superior temporal lobe areas supports better information transfer. This specific structure-function association in the auditory modality supports the general notion that interindividual differences in callosal topography possess functional relevance.
A Meta-Analytic Study of the Neural Systems for Auditory Processing of Lexical Tones.
Kwok, Veronica P Y; Dan, Guo; Yakpo, Kofi; Matthews, Stephen; Fox, Peter T; Li, Ping; Tan, Li-Hai
2017-01-01
The neural systems of lexical tone processing have been studied for many years. However, previous findings have been mixed with regard to the hemispheric specialization for the perception of linguistic pitch patterns in native speakers of tonal languages. In this study, we performed two activation likelihood estimation (ALE) meta-analyses, one on neuroimaging studies of auditory processing of lexical tones in tonal languages (17 studies), and the other on auditory processing of lexical information in non-tonal languages as a control analysis for comparison (15 studies). The lexical tone ALE analysis showed significant brain activations in bilateral inferior prefrontal regions, bilateral superior temporal regions and the right caudate, while the control ALE analysis showed significant cortical activity in the left inferior frontal gyrus and left temporo-parietal regions. However, we failed to obtain significant differences from the contrast analysis between the two auditory conditions, which might be caused by the limited number of studies available for comparison. Although the current study lacks evidence to argue for a lexical-tone-specific activation pattern, our results provide clues and directions for future investigations on this topic; more sophisticated methods are needed to explore this question in more depth. PMID:28798670
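The ALE statistic behind both meta-analyses models each study's reported foci as Gaussian probability blobs and combines studies as a probabilistic union. A toy one-voxel sketch (the coordinates, foci, and sigma value are invented for illustration; real ALE operates on whole-brain maps with empirically derived kernels):

```python
import math

def modeled_activation(voxel, focus, sigma=10.0):
    """Gaussian-modeled activation (MA) of one focus at one voxel (mm)."""
    d2 = sum((v - f) ** 2 for v, f in zip(voxel, focus))
    return math.exp(-d2 / (2 * sigma ** 2))

def ale_score(voxel, studies, sigma=10.0):
    """ALE at a voxel: union of per-study MA values, 1 - prod(1 - MA_i)."""
    prod = 1.0
    for foci in studies:
        ma = max(modeled_activation(voxel, f, sigma) for f in foci)
        prod *= 1.0 - ma
    return 1.0 - prod

# Invented foci: two studies report nearby left-STG coordinates,
# a third reports a distant frontal focus.
studies = [[(-56, -20, 6)], [(-54, -22, 8)], [(40, 30, -10)]]
near = ale_score((-55, -21, 7), studies)  # voxel amid the converging foci
far = ale_score((0, 60, 40), studies)     # voxel far from all foci
print(near > 0.9, far < 0.01)  # True True
```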
Luo, Hao; Ni, Jing-Tian; Li, Zhi-Hao; Li, Xiao-Ou; Zhang, Da-Ren; Zeng, Fan-Gang; Chen, Lin
2006-01-01
In tonal languages such as Mandarin Chinese, a lexical tone carries semantic information and is preferentially processed in the left brain hemisphere of native speakers as revealed by the functional MRI or positron emission tomography studies, which likely measure the temporally aggregated neural events including those at an attentive stage of auditory processing. Here, we demonstrate that early auditory processing of a lexical tone at a preattentive stage is actually lateralized to the right hemisphere. We frequently presented to native Mandarin Chinese speakers a meaningful auditory word with a consonant-vowel structure and infrequently varied either its lexical tone or initial consonant using an odd-ball paradigm to create a contrast resulting in a change in word meaning. The lexical tone contrast evoked a stronger preattentive response, as revealed by whole-head electric recordings of the mismatch negativity, in the right hemisphere than in the left hemisphere, whereas the consonant contrast produced an opposite pattern. Given the distinct acoustic features between a lexical tone and a consonant, this opposite lateralization pattern suggests the dependence of hemisphere dominance mainly on acoustic cues before speech input is mapped into a semantic representation in the processing stream. PMID:17159136
76 FR 61655 - Definition of Part 15 Auditory Assistance Device
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-05
... allocated on a primary basis for radio astronomy, and the 74.8-75.2 MHz band is allocated on a primary basis... radiodetermination, radio astronomy, and TV broadcast services are in bands adjacent to the part 15 auditory...
Thalamic and parietal brain morphology predicts auditory category learning.
Scharinger, Mathias; Henry, Molly J; Erb, Julia; Meyer, Lars; Obleser, Jonas
2014-01-01
Auditory categorization is a vital skill involving the attribution of meaning to acoustic events, engaging domain-specific (i.e., auditory) as well as domain-general (e.g., executive) brain networks. A listener's ability to categorize novel acoustic stimuli should therefore depend on both, with the domain-general network being particularly relevant for adaptively changing listening strategies and directing attention to relevant acoustic cues. Here we assessed adaptive listening behavior, using complex acoustic stimuli with an initially salient (but later degraded) spectral cue and a secondary, duration cue that remained nondegraded. We employed voxel-based morphometry (VBM) to identify cortical and subcortical brain structures whose individual neuroanatomy predicted task performance and the ability to optimally switch to making use of temporal cues after spectral degradation. Behavioral listening strategies were assessed by logistic regression and revealed mainly strategy switches in the expected direction, with considerable individual differences. Gray-matter probability in the left inferior parietal lobule (BA 40) and left precentral gyrus was predictive of "optimal" strategy switch, while gray-matter probability in thalamic areas, comprising the medial geniculate body, co-varied with overall performance. Taken together, our findings suggest that successful auditory categorization relies on domain-specific neural circuits in the ascending auditory pathway, while adaptive listening behavior depends more on brain structure in parietal cortex, enabling the (re)direction of attention to salient stimulus properties. © 2013 Published by Elsevier Ltd.
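The logistic-regression assessment of listening strategies in the study above amounts to fitting category responses against the two cue values and reading the strategy off the fitted weights. A minimal sketch on synthetic data (the listener model, cue ranges, and all numbers are assumptions, not the study's data):

```python
import math
import random

def fit_logistic(X, y, lr=0.1, epochs=300):
    """Plain stochastic-gradient logistic regression (no regularization)."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))
            b -= lr * (p - yi)
            w = [wj - lr * (p - yi) * xj for wj, xj in zip(w, xi)]
    return w, b

random.seed(1)
# Synthetic listener: responses follow the intact duration cue, while the
# degraded spectral cue is uninformative noise (both cues scaled to [0, 1]).
X = [(random.random(), random.random()) for _ in range(200)]  # (spectral, duration)
y = [1 if duration > 0.5 else 0 for _, duration in X]
w, b = fit_logistic(X, y)
print(w[1] > abs(w[0]))  # True: the duration weight dominates the fit,
                         # exposing a duration-based listening strategy
```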
Avey, Marc T; Phillmore, Leslie S; MacDougall-Shackleton, Scott A
2005-12-07
Sensory driven immediate early gene expression (IEG) has been a key tool to explore auditory perceptual areas in the avian brain. Most work on IEG expression in songbirds such as zebra finches has focused on playback of acoustic stimuli and its effect on auditory processing areas such as the caudal medial mesopallium (CMM) and caudal medial nidopallium (NCM). However, in a natural setting, the courtship displays of songbirds (including zebra finches) include visual as well as acoustic components. To determine whether the visual stimulus of a courting male modifies song-induced expression of the IEG ZENK in the auditory forebrain we exposed male and female zebra finches to acoustic (song) and visual (dancing) components of courtship. Birds were played digital movies with either combined audio and video, audio only, video only, or neither audio nor video (control). We found significantly increased levels of Zenk response in the auditory region CMM in the two treatment groups exposed to acoustic stimuli compared to the control group. The video only group had an intermediate response, suggesting a potential effect of visual input on activity in these auditory brain regions. Finally, we unexpectedly found a lateralization of Zenk response that was independent of sex, brain region, or treatment condition, such that Zenk immunoreactivity was consistently higher in the left hemisphere than in the right and the majority of individual birds were left-hemisphere dominant.
Central auditory processing and migraine: a controlled study.
Agessi, Larissa Mendonça; Villa, Thaís Rodrigues; Dias, Karin Ziliotto; Carvalho, Deusvenir de Souza; Pereira, Liliane Desgualdo
2014-11-08
This study aimed to verify and compare central auditory processing (CAP) performance in migraine patients with and without aura and in healthy controls. Forty-one volunteers of both genders, aged between 18 and 40 years, diagnosed with migraine with or without aura by the criteria of "The International Classification of Headache Disorders" (ICHD-3 beta), and a control group of the same age range with no headache history, were included. The Gaps-in-Noise (GIN), Duration Pattern test (DPT) and Dichotic Digits Test (DDT) were used to assess central auditory processing performance. The volunteers were divided into 3 groups: migraine with aura (11), migraine without aura (15), and control (15), matched by age and schooling. Subjects with and without aura performed significantly worse on the GIN test for the right ear (p = .006) and the left ear (p = .005), and on the DPT (p < .001), when compared with controls without headache; however, no significant differences were found on the DDT for the right ear (p = .362) or the left ear (p = .190). Subjects with migraine performed worse in auditory gap detection and in the discrimination of short versus long durations. They also presented impairment in the physiological mechanisms of temporal processing, especially temporal resolution and temporal ordering, when compared with controls. Migraine could be related to impaired central auditory processing. Research Ethics Committee (CEP 0480.10) - UNIFESP.
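The GIN outcome measures above rest on a per-gap scoring rule: the threshold is the shortest gap a listener reliably detects. A sketch of one plausible scoring rule (the 67% criterion and the response data are illustrative assumptions, not the study's exact procedure):

```python
def gin_threshold(trials, criterion=0.67):
    """Shortest gap (ms) detected on at least `criterion` of its presentations.

    `trials` maps gap duration (ms) -> list of per-presentation hits
    (True/False). Returns None if no gap reaches the criterion.
    """
    for gap in sorted(trials):
        hits = trials[gap]
        if sum(hits) / len(hits) >= criterion:
            return gap
    return None

# Hypothetical response pattern of a single listener
trials = {
    2: [False] * 6,                                   # 0/6 detected
    4: [True, False, False, True, False, False],      # 2/6
    6: [True, True, False, True, True, True],         # 5/6
    8: [True] * 6,                                    # 6/6
}
print(gin_threshold(trials))  # 6
```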
Stein, Aryeh D; Wang, Meng; Rivera, Juan A; Martorell, Reynaldo; Ramakrishnan, Usha
2012-08-01
The evidence relating prenatal supplementation with DHA to offspring neurological development is limited. We investigated the effect of prenatal DHA supplementation on infant brainstem auditory-evoked responses and visual-evoked potentials in a double-blind, randomized controlled trial in Cuernavaca, Mexico. Pregnant women were supplemented daily with 400 mg DHA or placebo from gestation wk 18-22 through delivery. DHA and placebo groups did not differ in maternal characteristics at randomization or infant characteristics at birth. Brainstem auditory-evoked responses were measured at 1 and 3 mo in 749 and 664 infants, respectively, and visual-evoked potentials were measured at 3 and 6 mo in 679 and 817 infants, respectively. Left-right brainstem auditory-evoked potentials were moderately correlated (range, 0.26-0.43; all P < 0.001) and left-right visual-evoked potentials were strongly correlated (range, 0.79-0.94; all P < 0.001) within any assessment. Correlations across visits were modest to moderate (range, 0.09-0.38; all P < 0.01). The offspring of DHA-supplemented women did not differ from those of control women with respect to any outcome measure (all comparisons P > 0.10). We conclude that DHA supplementation during pregnancy did not influence brainstem auditory-evoked responses at 1 and 3 mo or visual-evoked potentials at 3 and 6 mo.
Habituation deficit of auditory N100m in patients with fibromyalgia.
Choi, W; Lim, M; Kim, J S; Chung, C K
2016-11-01
Habituation refers to the brain's inhibitory mechanism against sensory overload and its brain correlate has been investigated in the form of a well-defined event-related potential, N100 (N1). Fibromyalgia is an extensively described chronic pain syndrome with concurrent manifestations of reduced tolerance and enhanced sensation of painful and non-painful stimulation, suggesting an association with central amplification of all sensory domains. Among diverse sensory modalities, we utilized repetitive auditory stimulation to explore the anomalous sensory information processing in fibromyalgia as evidenced by N1 habituation. Auditory N1 was assessed in 19 fibromyalgia patients and age-, education- and gender-matched 21 healthy control subjects under the duration-deviant passive oddball paradigm and magnetoencephalography recording. The brain signal of the first standard stimulus (following each deviant) and last standard stimulus (preceding each deviant) were analysed to identify N1 responses. N1 amplitude difference and adjusted amplitude ratio were computed as habituation indices. Fibromyalgia patients showed lower N1 amplitude difference (left hemisphere: p = 0.004; right hemisphere: p = 0.034) and adjusted N1 amplitude ratio (left hemisphere: p = 0.001; right hemisphere: p = 0.052) than healthy control subjects, indicating deficient auditory habituation. Further, augmented N1 amplitude pattern (p = 0.029) during the stimulus repetition was observed in fibromyalgia patients. Fibromyalgia patients failed to demonstrate auditory N1 habituation to repetitively presenting stimuli, which indicates their compromised early auditory information processing. Our findings provide neurophysiological evidence of inhibitory failure and cortical augmentation in fibromyalgia. WHAT'S ALREADY KNOWN ABOUT THIS TOPIC?: Fibromyalgia has been associated with altered filtering of irrelevant somatosensory input. 
However, whether this abnormality can extend to the auditory sensory system remains controversial. N100, an event-related potential, has been widely utilized to assess the brain's habituation capacity against sensory overload. WHAT DOES THIS STUDY ADD?: Fibromyalgia patients showed a deficit in N100 habituation to repetitive auditory stimuli, indicating compromised early auditory functioning. This study identified deficient inhibitory control over irrelevant auditory stimuli in fibromyalgia. © 2016 European Pain Federation - EFIC®.
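The two habituation indices used in the study above can be sketched from the N100 amplitudes to the first and last standard stimuli. The abstract does not define the "adjusted amplitude ratio", so the simple last/first ratio below is an illustrative assumption (values below 1 indicate habituation, i.e., response decrement):

```python
def habituation_indices(first_amp, last_amp):
    """Habituation indices from N100 amplitudes to the first and last
    standard stimuli in a train.

    The last/first ratio is an assumed form; the study's 'adjusted
    amplitude ratio' is not specified in the abstract.
    """
    amplitude_difference = first_amp - last_amp  # decrement across the train
    amplitude_ratio = last_amp / first_amp       # < 1 indicates habituation
    return amplitude_difference, amplitude_ratio

# Hypothetical N100m amplitudes (arbitrary units): a habituating control
# listener versus a non-habituating patient
print(habituation_indices(10.0, 6.0))   # (4.0, 0.6)
print(habituation_indices(10.0, 10.5))  # negative difference, ratio > 1
```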
Karak, Somdatta; Jacobs, Julie S; Kittelmann, Maike; Spalthoff, Christian; Katana, Radoslaw; Sivan-Loukianova, Elena; Schon, Michael A; Kernan, Maurice J; Eberl, Daniel F; Göpfert, Martin C
2015-11-26
Much like vertebrate hair cells, the chordotonal sensory neurons that mediate hearing in Drosophila are motile and amplify the mechanical input of the ear. Because the neurons bear mechanosensory primary cilia whose microtubule axonemes display dynein arms, we hypothesized that their motility is powered by dyneins. Here, we describe two axonemal dynein proteins that are required for Drosophila auditory neuron function, localize to their primary cilia, and differently contribute to mechanical amplification in hearing. Promoter fusions revealed that the two axonemal dynein genes Dmdnah3 (=CG17150) and Dmdnai2 (=CG6053) are expressed in chordotonal neurons, including the auditory ones in the fly's ear. Null alleles of both dyneins equally abolished electrical auditory neuron responses, yet whereas mutations in Dmdnah3 facilitated mechanical amplification, amplification was abolished by mutations in Dmdnai2. Epistasis analysis revealed that Dmdnah3 acts downstream of Nan-Iav channels in controlling the amplificatory gain. Dmdnai2, in addition to being required for amplification, was essential for outer dynein arms in auditory neuron cilia. This establishes diverse roles of axonemal dyneins in Drosophila auditory neuron function and links auditory neuron motility to primary cilia and axonemal dyneins. Mutant defects in sperm competition suggest that both dyneins also function in sperm motility.
Joachimsthaler, Bettina; Uhlmann, Michaela; Miller, Frank; Ehret, Günter; Kurt, Simone
2014-01-01
Because of its great genetic potential, the mouse (Mus musculus) has become a popular model species for studies on hearing and sound processing along the auditory pathways. Here, we present the first comparative study on the representation of neuronal response parameters to tones in primary and higher-order auditory cortical fields of awake mice. We quantified 12 neuronal properties of tone processing in order to estimate similarities and differences of function between the fields, and to discuss how far auditory cortex (AC) function in the mouse is comparable to that in awake monkeys and cats. Extracellular recordings were made from 1400 small clusters of neurons from cortical layers III/IV in the primary fields AI (primary auditory field) and AAF (anterior auditory field), and the higher-order fields AII (second auditory field) and DP (dorsoposterior field). Field specificity was shown with regard to spontaneous activity, correlation between spontaneous and evoked activity, tone response latency, sharpness of frequency tuning, temporal response patterns (occurrence of phasic responses, phasic-tonic responses, tonic responses, and off-responses), and degree of variation between the characteristic frequency (CF) and the best frequency (BF) (CF–BF relationship). Field similarities were noted as significant correlations between CFs and BFs, V-shaped frequency tuning curves, similar minimum response thresholds and non-monotonic rate-level functions in approximately two-thirds of the neurons. Comparative and quantitative analyses showed that the measured response characteristics were, to various degrees, susceptible to influences of anesthetics. Therefore, studies of neuronal responses in the awake AC are important in order to establish adequate relationships between neuronal data and auditory perception and acoustic response behavior. PMID:24506843
Learning-dependent plasticity in human auditory cortex during appetitive operant conditioning.
Puschmann, Sebastian; Brechmann, André; Thiel, Christiane M
2013-11-01
Animal experiments provide evidence that learning to associate an auditory stimulus with a reward causes representational changes in auditory cortex. However, most studies did not investigate the temporal formation of learning-dependent plasticity during the task but rather compared auditory cortex receptive fields before and after conditioning. We here present a functional magnetic resonance imaging study on learning-related plasticity in the human auditory cortex during operant appetitive conditioning. Participants had to learn to associate a specific category of frequency-modulated tones with a reward. Only participants who learned this association developed learning-dependent plasticity in left auditory cortex over the course of the experiment. No differential responses to reward predicting and nonreward predicting tones were found in auditory cortex in nonlearners. In addition, learners showed similar learning-induced differential responses to reward-predicting and nonreward-predicting tones in the ventral tegmental area and the nucleus accumbens, two core regions of the dopaminergic neurotransmitter system. This may indicate a dopaminergic influence on the formation of learning-dependent plasticity in auditory cortex, as it has been suggested by previous animal studies. Copyright © 2012 Wiley Periodicals, Inc.
Sex differences in functional activation patterns revealed by increased emotion processing demands.
Hall, Geoffrey B C; Witelson, Sandra F; Szechtman, Henry; Nahmias, Claude
2004-02-09
Two [15O] PET studies assessed sex differences in regional brain activation during the recognition of emotional stimuli. Study I revealed that the recognition of emotion in visual faces resulted in bilateral frontal activation in women, and unilateral right-sided activation in men. In Study II, the complexity of the emotional face task was increased through the addition of associated auditory emotional stimuli. Men again showed unilateral frontal activation, in this case to the left, whereas women did not show bilateral frontal activation but showed greater limbic activity. These results suggest that when processing broader cross-modal emotional stimuli, men engage more in associative cognitive strategies while women draw more on primary emotional references.
Jafari, Zahra; Esmaili, Mahdiye; Delbari, Ahmad; Mehrpour, Masoud; Mohajerani, Majid H
2016-06-01
There have been a few reports about the effects of chronic stroke on auditory temporal processing abilities and no reports regarding the effects of brain damage lateralization on these abilities. Our study was performed on 2 groups of chronic stroke patients to compare the effects of hemispheric lateralization of brain damage and of age on auditory temporal processing. Seventy persons with normal hearing, including 25 normal controls, 25 stroke patients with damage to the right brain, and 20 stroke patients with damage to the left brain, without aphasia and with an age range of 31-71 years were studied. A gap-in-noise (GIN) test and a duration pattern test (DPT) were conducted for each participant. Significant differences were found between the 3 groups for GIN threshold, overall GIN percent score, and DPT percent score in both ears (P ≤ .001). For all stroke patients, performance in both GIN and DPT was poorer in the ear contralateral to the damaged hemisphere, which was significant in DPT and in 2 measures of GIN (P ≤ .046). Advanced age had a negative relationship with temporal processing abilities for all 3 groups. In cases of confirmed left- or right-side stroke involving auditory cerebrum damage, poorer auditory temporal processing is associated with the ear contralateral to the damaged cerebral hemisphere. Replication of our results and the use of GIN and DPT tests for the early diagnosis of auditory processing deficits and for monitoring the effects of aural rehabilitation interventions are recommended. Copyright © 2016 National Stroke Association. Published by Elsevier Inc. All rights reserved.
Ting, S K S; Chan, Y M; Cheong, P W T; Wong, M; Fook-Chong, S; Lo, Y L
2011-09-01
Tinnitus is a subjective auditory perception of sounds or noise not triggered by external auditory stimuli. To date, treatment in severe cases is generally unsatisfactory. Characteristic functional brain imaging changes associated with tinnitus include hyperactivity encompassing both the primary auditory cortex (AC) and the secondary or associative cortex. Brief repetitive transcranial magnetic stimulation (rTMS) trains applied to the scalp overlying the hyperactive left AC are known to produce moderate tinnitus attenuation. Although Western studies have documented the value of rTMS in tinnitus treatment, we evaluate the efficacy of a short-duration rTMS protocol for the first time in an Asian setting. Consecutive patients were recruited at our tinnitus clinic. Detailed history, examination, audiogram and baseline tinnitus scales were recorded. rTMS consisted of 1000 pulses/day at 1 Hz and 110% of the motor threshold, for five consecutive days over the left temporoparietal cortex. Tinnitus ratings were determined weekly for 4 weeks after rTMS. Fifteen patients completed the trial; none experienced significant side effects. Repeated measures ANOVA showed a significant linear decrease in Tinnitus Handicap Inventory (THI) scores over the time period (F(1,14)=4.7, p=0.04). However, none of the other parameters (severity, annoyance, effect on lifestyle and overall impression: visual analogue scale) showed beneficial outcomes. Our findings point to a positive effect of short-duration rTMS in tinnitus treatment as measured by the THI. However, no significant benefits were demonstrated for other subjective patient ratings. Although well tolerated and convenient, short-duration rTMS may prove inadequate for modulating maladaptive plastic changes at the cortical level, and our results suggest the need for delivery of more stimuli. Future studies will utilize at least 2000 pulses/day, in line with previous experience in Western settings. Copyright © 2011 Elsevier B.V. All rights reserved.
White matter microstructural properties correlate with sensorimotor synchronization abilities.
Blecher, Tal; Tal, Idan; Ben-Shachar, Michal
2016-09-01
Sensorimotor synchronization (SMS) to an external auditory rhythm is a well-developed ability in humans, particularly evident in dancing and singing. This ability is typically measured in the lab via a simple task of finger tapping to an auditory beat. While simplistic, there is some evidence that poor performance on this task could be related to impaired phonological and reading abilities in children. Auditory-motor synchronization is hypothesized to rely on a tight coupling between auditory and motor neural systems, but the specific pathways that mediate this coupling have not been identified yet. In this study, we test this hypothesis and examine the contribution of fronto-temporal and callosal connections to specific measures of rhythmic synchronization. Twenty participants underwent SMS and diffusion magnetic resonance imaging (dMRI) measurements. We quantified the mean asynchrony between an auditory beat and participants' finger taps, as well as the time to resynchronize (TTR) with an altered meter, and examined the correlations between these behavioral measures and diffusivity in a small set of predefined pathways. We found significant correlations between asynchrony and fractional anisotropy (FA) in the left (but not right) arcuate fasciculus and in the temporal segment of the corpus callosum. On the other hand, TTR correlated with FA in the precentral segment of the callosum. To our knowledge, this is the first demonstration that relates these particular white matter tracts with performance on an auditory-motor rhythmic synchronization task. We propose that left fronto-temporal and temporal-callosal fibers are involved in prediction and constant comparison between auditory inputs and motor commands, while inter-hemispheric connections between the motor/premotor cortices contribute to successful resynchronization of motor responses with a new external rhythm, perhaps via inhibition of tapping to the previous rhythm. Our results indicate that auditory-motor synchronization skills are associated with anatomical pathways that have been previously related to phonological awareness, thus offering a possible anatomical basis for the behavioral covariance between these abilities. Copyright © 2016 Elsevier Inc. All rights reserved.
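The two behavioral measures used in this study, mean tap-beat asynchrony and time to resynchronize (TTR), can be sketched in a few lines. The function names, tolerance, and run-length criterion below are illustrative assumptions, not the authors' actual analysis parameters.

```python
import numpy as np

def mean_asynchrony(tap_times, beat_times):
    """Mean signed tap-beat asynchrony in seconds; negative values
    indicate taps that anticipate the beat, as is typical in humans."""
    return float(np.mean(np.asarray(tap_times) - np.asarray(beat_times)))

def time_to_resynchronize(asynchronies, tol=0.05, run=3):
    """Index of the first tap beginning `run` consecutive asynchronies
    within `tol` seconds after a meter change; None if never reached.
    (tol and run are hypothetical criteria for this sketch.)"""
    close = np.abs(np.asarray(asynchronies)) < tol
    for i in range(len(close) - run + 1):
        if close[i:i + run].all():
            return i
    return None

# Taps slightly ahead of beats at 1.0 s and 2.0 s:
print(round(mean_asynchrony([0.98, 1.97], [1.0, 2.0]), 3))  # → -0.025
```

In a real analysis, asynchronies after the meter switch would be recomputed against the new beat times before estimating TTR.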
Hernia of the tympanic membrane.
Ikeda, Ryoukichi; Miyazaki, Hiromitsu; Kawase, Tetsuaki; Katori, Yukio; Kobayashi, Toshimitsu
2017-02-01
Although tympanic bulging is commonly encountered, tympanic herniation occupying the external auditory canal is extremely rare. A 66-year-old man presented to our hospital with left aural fullness, bilateral hearing loss and otorrhea. Preoperative findings suggested a tympanic membrane (TM) hernia located in the left external auditory canal. We performed total resection of the soft mass by a transcanal approach using endoscopy. Ventilation tubes were inserted into both ears. Histopathological findings confirmed the diagnosis of TM hernia. The passive opening pressure of the patient's Eustachian tube was higher than normal, and active opening was not observed. Hernia of the TM most likely resulted from a long-term excessive Valsalva maneuver. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Cortical oscillations related to processing congruent and incongruent grapheme-phoneme pairs.
Herdman, Anthony T; Fujioka, Takako; Chau, Wilkin; Ross, Bernhard; Pantev, Christo; Picton, Terence W
2006-05-15
In this study, we investigated changes in cortical oscillations following congruent and incongruent grapheme-phoneme stimuli. Hiragana graphemes and phonemes were simultaneously presented as congruent or incongruent audiovisual stimuli to native Japanese-speaking participants. The discriminative reaction time was 57 ms shorter for congruent than incongruent stimuli. Analysis of MEG responses using synthetic aperture magnetometry (SAM) revealed that congruent stimuli evoked larger 2-10 Hz activity in the left auditory cortex within the first 250 ms after stimulus onset, and smaller 2-16 Hz activity in bilateral visual cortices between 250 and 500 ms. These results indicate that congruent visual input can modify cortical activity in the left auditory cortex.
Brain activity related to phonation in young patients with adductor spasmodic dysphonia.
Kiyuna, Asanori; Maeda, Hiroyuki; Higa, Asano; Shingaki, Kouta; Uehara, Takayuki; Suzuki, Mikio
2014-06-01
This study investigated the brain activities during phonation of young patients with adductor spasmodic dysphonia (ADSD) of relatively short disease duration (<10 years). Six subjects with ADSD of short duration (mean age: 24.3 years; mean disease duration: 41 months) and six healthy controls (mean age: 30.8 years) underwent functional magnetic resonance imaging (fMRI) using a sparse sampling method to identify brain activity during vowel phonation (/i:/). Intragroup and intergroup analyses were performed using statistical parametric mapping software. Areas of activation in the ADSD and control groups were similar to those reported previously for vowel phonation. All of the activated areas were observed bilaterally and symmetrically. Intergroup analysis revealed higher brain activities in the ADSD group in the auditory-related areas (Brodmann's areas [BA] 40, 41), motor speech areas (BA44, 45), bilateral insula (BA13), bilateral cerebellum, and middle frontal gyrus (BA46). Areas with lower activation were in the left primary sensory area (BA1-3) and bilateral subcortical nuclei (putamen and globus pallidus). The auditory cortical responses observed may reflect that young ADSD patients control their voice by use of the motor speech area, insula, inferior parietal cortex, and cerebellum. Neural activity in the primary sensory area and basal ganglia may affect the voice symptoms of young ADSD patients with short disease duration. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Temporal characteristics of audiovisual information processing.
Fuhrmann Alpert, Galit; Hein, Grit; Tsai, Nancy; Naumer, Marcus J; Knight, Robert T
2008-05-14
In complex natural environments, auditory and visual information often have to be processed simultaneously. Previous functional magnetic resonance imaging (fMRI) studies focused on the spatial localization of brain areas involved in audiovisual (AV) information processing, but the temporal characteristics of AV information flow in these regions remained unclear. In this study, we used fMRI and a novel information-theoretic approach to study the flow of AV sensory information. Subjects passively perceived sounds and images of objects presented either alone or simultaneously. Applying the measure of mutual information, we computed for each voxel the latency in which the blood oxygenation level-dependent signal had the highest information content about the preceding stimulus. The results indicate that, after AV stimulation, the earliest informative activity occurs in right Heschl's gyrus, left primary visual cortex, and the posterior portion of the superior temporal gyrus, which is known as a region involved in object-related AV integration. Informative activity in the anterior portion of superior temporal gyrus, middle temporal gyrus, right occipital cortex, and inferior frontal cortex was found at a later latency. Moreover, AV presentation resulted in shorter latencies in multiple cortical areas compared with isolated auditory or visual presentation. The results provide evidence for bottom-up processing from primary sensory areas into higher association areas during AV integration in humans and suggest that AV presentation shortens processing time in early sensory cortices.
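The information-theoretic latency analysis described in this abstract can be illustrated with a minimal sketch (not the authors' actual pipeline): discretize the two signals, estimate the mutual information between the stimulus sequence and the BOLD time series at each candidate lag, and keep the lag with the highest information content. The function names, bin count, and synthetic data are illustrative assumptions.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Discrete mutual information (in bits) between two 1-D signals,
    estimated from a joint histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def most_informative_lag(stimulus, bold, max_lag):
    """Lag (in samples) at which the BOLD signal carries the most
    information about the preceding stimulus -- the per-voxel latency
    measure sketched here."""
    mis = [mutual_information(stimulus[:len(stimulus) - lag], bold[lag:])
           for lag in range(max_lag + 1)]
    return int(np.argmax(mis))
```

With a synthetic BOLD signal built by delaying a random binary stimulus by three samples and adding noise, `most_informative_lag` recovers the delay.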
Bijsterbosch, Janine D; Lee, Kwang-Hyuk; Hunter, Michael D; Tsoi, Daniel T; Lankappa, Sudheer; Wilkinson, Iain D; Barker, Anthony T; Woodruff, Peter W R
2011-05-01
Our ability to interact physically with objects in the external world critically depends on temporal coupling between perception and movement (sensorimotor timing) and swift behavioral adjustment to changes in the environment (error correction). In this study, we investigated the neural correlates of the correction of subliminal and supraliminal phase shifts during a sensorimotor synchronization task. In particular, we focused on the role of the cerebellum because this structure has been shown to play a role in both motor timing and error correction. Experiment 1 used fMRI to show that the right cerebellar dentate nucleus and primary motor and sensory cortices were activated during regular timing and during the correction of subliminal errors. The correction of supraliminal phase shifts led to additional activations in the left cerebellum and right inferior parietal and frontal areas. Furthermore, a psychophysiological interaction analysis revealed that supraliminal error correction was associated with enhanced connectivity of the left cerebellum with frontal, auditory, and sensory cortices and with the right cerebellum. Experiment 2 showed that suppression of the left but not the right cerebellum with theta burst TMS significantly affected supraliminal error correction. These findings provide evidence that the left lateral cerebellum is essential for supraliminal error correction during sensorimotor synchronization.
Scanning silence: mental imagery of complex sounds.
Bunzeck, Nico; Wuestenberg, Torsten; Lutz, Kai; Heinze, Hans-Jochen; Jancke, Lutz
2005-07-15
In this functional magnetic resonance imaging (fMRI) study, we investigated the neural basis of mental auditory imagery of familiar complex sounds that did not contain language or music. In the first condition (perception), the subjects watched familiar scenes and listened to the corresponding sounds that were presented simultaneously. In the second condition (imagery), the same scenes were presented silently and the subjects had to mentally imagine the appropriate sounds. During the third condition (control), the participants watched a scrambled version of the scenes without sound. To overcome the disadvantages of stray acoustic scanner noise in auditory fMRI experiments, we applied a sparse temporal sampling technique with five functional clusters that were acquired at the end of each movie presentation. Compared to the control condition, we found bilateral activations in the primary and secondary auditory cortices (including Heschl's gyrus and planum temporale) during perception of complex sounds. In contrast, the imagery condition elicited bilateral hemodynamic responses only in the secondary auditory cortex (including the planum temporale). No significant activity was observed in the primary auditory cortex. The results show that imagery and perception of complex sounds that do not contain language or music rely on overlapping neural correlates of the secondary but not primary auditory cortex.
An eye movement analysis of the effect of interruption modality on primary task resumption.
Ratwani, Raj; Trafton, J Gregory
2010-06-01
We examined the effect of interruption modality (visual or auditory) on primary task (visual) resumption to determine which modality was the least disruptive. Theories examining interruption modality have focused on specific periods of the interruption timeline. Preemption theory has focused on the switch from the primary task to the interrupting task. Multiple resource theory has focused on interrupting tasks that are to be performed concurrently with the primary task. Our focus was on examining how interruption modality influences task resumption. We leveraged the memory-for-goals theory, which suggests that maintaining an associative link between environmental cues and the suspended primary task goal is important for resumption. Three interruption modality conditions were examined: auditory interruption with the primary task visible, auditory interruption with a blank screen occluding the primary task, and a visual interruption occluding the primary task. Reaction time and eye movement data were collected. The auditory condition with the primary task visible was the least disruptive. Eye movement data suggest that participants in this condition were actively maintaining an associative link between relevant environmental cues on the primary task interface and the suspended primary task goal during the interruption. These data suggest that maintaining cue association is the important factor for reducing the disruptiveness of interruptions, not interruption modality. Interruption-prone computing environments should be designed to allow the user access to relevant primary task cues during an interruption to minimize disruptiveness.
Behroozmand, Roozbeh; Korzyukov, Oleg; Larson, Charles R.
2012-01-01
Previous studies have shown that the pitch of a sound is perceived in the absence of its fundamental frequency (F0), suggesting that a distinct mechanism may resolve pitch based on a pattern that exists between harmonic frequencies. The present study investigated whether such a mechanism is active during voice pitch control. ERPs were recorded in response to +200 cents pitch shifts in the auditory feedback of self-vocalizations and complex tones with and without the F0. The absence of the fundamental induced no difference in ERP latencies. However, a right-hemisphere difference was found in the N1 amplitudes with larger responses to complex tones that included the fundamental compared to when it was missing. The P1 and N1 latencies were shorter in the left hemisphere, and the N1 and P2 amplitudes were larger bilaterally for pitch shifts in voice and complex tones compared with pure tones. These findings suggest hemispheric differences in neural encoding of pitch in sounds with missing fundamental. Data from the present study suggest that the right cortical auditory areas, thought to be specialized for spectral processing, may utilize different mechanisms to resolve pitch in sounds with missing fundamental. The left hemisphere seems to perform faster processing to resolve pitch based on the rate of temporal variations in complex sounds compared with pure tones. These effects indicate that the differential neural processing of pitch in the left and right hemispheres may enable the audio-vocal system to detect temporal and spectral variations in the auditory feedback for vocal pitch control. PMID:22386045
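The "missing fundamental" phenomenon underlying this study has a simple arithmetic core: for exactly harmonic components, the perceived pitch corresponds to the common spacing of the harmonics, i.e., their greatest common divisor. This toy illustration is not a model of the neural mechanism; the function name and the grid-resolution parameter are assumptions of the sketch.

```python
import math
from functools import reduce

def residue_pitch(harmonics_hz, resolution_hz=1.0):
    """Estimate the perceived (possibly missing) fundamental as the
    greatest common divisor of the harmonic frequencies, computed on
    an integer grid of width `resolution_hz`."""
    ints = [round(f / resolution_hz) for f in harmonics_hz]
    return reduce(math.gcd, ints) * resolution_hz

# A complex tone with components at 400, 600, and 800 Hz is heard with
# a pitch near 200 Hz even though no 200 Hz component is present.
print(residue_pitch([400.0, 600.0, 800.0]))  # → 200.0
```

Real pitch perception is more tolerant than an exact GCD (slightly inharmonic components still evoke a residue pitch), which is one reason the resolution grid is only a crude stand-in.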
Wei, Yan; Fu, Yong; Liu, Shaosheng; Xia, Guihua; Pan, Song
2013-01-01
The purposes of the current study were to assess the feasibility of post-auricular microinjection of lentiviruses carrying enhanced green fluorescent protein (EGFP) into the scala media through cochleostomies in rats, determine the expression of viral gene in the cochlea, and record the post-operative changes in the number and auditory function of cochlear hair cells (HCs). Healthy rats were randomly divided into two groups. The left ears of the animals in group I were injected with lentivirus carrying EGFP (n=10) via scala media lateral wall cochleostomies, and the left ears of the animals in group II were similarly injected with artificial endolymph (n=10). Prior to and 30 days post-injection, auditory function was assessed with click-auditory brainstem response (ABR) testing, EGFP expression was determined with cochlear frozen sections under fluorescence microscopy, and survival of HCs was estimated based on whole mount preparations. Thirty days after surgery, click-ABR testing revealed that there were significant differences in the auditory function, EGFP expression, and survival of HCs in the left ears before and after surgery in the same rats from each group. In group I, EGFP was noted in the strial marginal cells of the scala media, the organ of Corti, spiral nerves, and spiral ganglion cells. Lentiviruses were successfully introduced into the scala media through cochleostomies in rats, and the EGFP reporter gene was efficiently expressed in the organ of Corti, spiral nerves, and spiral ganglion cells. Copyright © 2013 Elsevier Inc. All rights reserved.
Three-dimensional entertainment as a novel cause of takotsubo cardiomyopathy.
Taylor, Montoya; Amin, Anish; Bush, Charles
2011-11-01
Takotsubo cardiomyopathy (TC) is an uncommon entity. It is known to occur in the setting of extreme catecholamine release and results in left ventricular dysfunction without evidence of angiographically definable coronary artery disease. There have been no published reports of TC occurring with visual stimuli, specifically 3-dimensional (3D) entertainment. We describe a 55-year-old woman who presented to her primary care physician's office with extreme palpitations, nausea, vomiting, and malaise <48 hours after watching a 3D action movie at her local theater. Her electrocardiogram demonstrated ST elevations in aVL and V1, a prolonged QTc interval, and T-wave inversions in leads I, II, aVL, and V2-V6. Coronary angiography revealed angiographically normal vessels, elevated left ventricular filling pressures, and decreased ejection fraction with a pattern of apical ballooning. The presumed final diagnosis was TC, likely due to visual-auditory-triggered catecholamine release causing impaired coronary microcirculation. © 2011 Wiley Periodicals, Inc.
[FMRI-study of speech perception impairment in post-stroke patients with sensory aphasia].
Maĭorova, L A; Martynova, O V; Fedina, O N; Petrushevskiĭ, A G
2013-01-01
The aim of this study was to identify neurophysiological correlates of impairment at the primary stage of speech perception, namely phonemic discrimination, in patients with sensory aphasia after acute ischemic stroke in the left hemisphere, using noninvasive fMRI. For this purpose, we recorded the fMRI equivalent of the mismatch negativity (MMN) in response to speech phonemes (the syllables "ba" and "pa") in an oddball paradigm in 20 healthy subjects and 23 patients with post-stroke sensory aphasia. In healthy subjects, brain areas active in the MMN contrast were observed in the superior temporal and inferior frontal gyri of both the right and left hemispheres. In the patient group, there was significant activation of the auditory cortex in the right hemisphere only; this activation was smaller in volume and intensity than in healthy subjects and correlated with the degree of speech preservation. Thus, recording the fMRI equivalent of the MMN is a sensitive method for studying impaired speech perception.
Yoncheva, Yuliya; Maurer, Urs; Zevin, Jason D; McCandliss, Bruce D
2014-08-15
Selective attention to phonology, i.e., the ability to attend to sub-syllabic units within spoken words, is a critical precursor to literacy acquisition. Recent functional magnetic resonance imaging evidence has demonstrated that a left-lateralized network of frontal, temporal, and posterior language regions, including the visual word form area, supports this skill. The current event-related potential (ERP) study investigated the temporal dynamics of selective attention to phonology during spoken word perception. We tested the hypothesis that selective attention to phonology dynamically modulates stimulus encoding by recruiting left-lateralized processes specifically while the information critical for performance is unfolding. Selective attention to phonology was captured by manipulating listening goals: skilled adult readers attended to either rhyme or melody within auditory stimulus pairs. Each pair superimposed rhyming and melodic information ensuring identical sensory stimulation. Selective attention to phonology produced distinct early and late topographic ERP effects during stimulus encoding. Data-driven source localization analyses revealed that selective attention to phonology led to significantly greater recruitment of left-lateralized posterior and extensive temporal regions, which was notably concurrent with the rhyme-relevant information within the word. Furthermore, selective attention effects were specific to auditory stimulus encoding and not observed in response to cues, arguing against the notion that they reflect sustained task setting. Collectively, these results demonstrate that selective attention to phonology dynamically engages a left-lateralized network during the critical time-period of perception for achieving phonological analysis goals. These findings suggest a key role for selective attention in on-line phonological computations. Furthermore, these findings motivate future research on the role that neural mechanisms of attention may play in phonological awareness impairments thought to underlie developmental reading disabilities. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Yoncheva; Maurer, Urs; Zevin, Jason; McCandliss, Bruce
2015-01-01
Selective attention to phonology, i.e., the ability to attend to sub-syllabic units within spoken words, is a critical precursor to literacy acquisition. Recent functional magnetic resonance imaging evidence has demonstrated that a left-lateralized network of frontal, temporal, and posterior language regions, including the visual word form area, supports this skill. The current event-related potential (ERP) study investigated the temporal dynamics of selective attention to phonology during spoken word perception. We tested the hypothesis that selective attention to phonology dynamically modulates stimulus encoding by recruiting left-lateralized processes specifically while the information critical for performance is unfolding. Selective attention to phonology was captured by manipulating listening goals: skilled adult readers attended to either rhyme or melody within auditory stimulus pairs. Each pair superimposed rhyming and melodic information ensuring identical sensory stimulation. Selective attention to phonology produced distinct early and late topographic ERP effects during stimulus encoding. Data-driven source localization analyses revealed that selective attention to phonology led to significantly greater recruitment of left-lateralized posterior and extensive temporal regions, which was notably concurrent with the rhyme-relevant information within the word. Furthermore, selective attention effects were specific to auditory stimulus encoding and not observed in response to cues, arguing against the notion that they reflect sustained task setting. Collectively, these results demonstrate that selective attention to phonology dynamically engages a left-lateralized network during the critical time-period of perception for achieving phonological analysis goals. These findings support the key role of selective attention to phonology in the development of literacy and motivate future research on the neural bases of the interaction between phonological awareness and literacy, deemed central to both typical and atypical reading development. PMID:24746955
Human-like brain hemispheric dominance in birdsong learning
Moorman, Sanne; Gobes, Sharon M. H.; Kuijpers, Maaike; Kerkhofs, Amber; Zandbergen, Matthijs A.; Bolhuis, Johan J.
2012-01-01
Unlike nonhuman primates, songbirds learn to vocalize very much like human infants acquire spoken language. In humans, Broca’s area in the frontal lobe and Wernicke’s area in the temporal lobe are crucially involved in speech production and perception, respectively. Songbirds have analogous brain regions that show a similar neural dissociation between vocal production and auditory perception and memory. In both humans and songbirds, there is evidence for lateralization of neural responsiveness in these brain regions. Human infants already show left-sided dominance in their brain activation when exposed to speech. Moreover, a memory-specific left-sided dominance in Wernicke’s area for speech perception has been demonstrated in 2.5-mo-old babies. It is possible that auditory-vocal learning is associated with hemispheric dominance and that this association arose in songbirds and humans through convergent evolution. Therefore, we investigated whether there is similar song memory-related lateralization in the songbird brain. We exposed male zebra finches to tutor or unfamiliar song. We found left-sided dominance of neuronal activation in a Broca-like brain region (HVC, a letter-based name) of juvenile and adult zebra finch males, independent of the song stimulus presented. In addition, juvenile males showed left-sided dominance for tutor song but not for unfamiliar song in a Wernicke-like brain region (the caudomedial nidopallium). Thus, left-sided dominance in the caudomedial nidopallium was specific for the song-learning phase and was memory-related. These findings demonstrate a remarkable neural parallel between birdsong and human spoken language, and they have important consequences for our understanding of the evolution of auditory-vocal learning and its neural mechanisms. PMID:22802637
Song, Jae-Jin; Vanneste, Sven; Lazard, Diane S; Van de Heyning, Paul; Park, Joo Hyun; Oh, Seung Ha; De Ridder, Dirk
2015-05-01
Previous positron emission tomography (PET) studies have shown that various cortical areas are activated to process speech signals in cochlear implant (CI) users. Nonetheless, differences in task dimensions among studies and low statistical power preclude a clear understanding of sound processing mechanisms in CI users. Hence, we performed an activation likelihood estimation meta-analysis of PET studies in CI users and normal-hearing (NH) controls to compare the two groups. Eight studies (58 CI subjects/92 peak coordinates; 45 NH subjects/40 peak coordinates) were included and analyzed, retrieving areas significantly activated by lexical and nonlexical stimuli. For lexical and nonlexical stimuli, both groups showed activations in components of the dual-stream model such as the bilateral superior temporal gyrus/sulcus, middle temporal gyrus, left posterior inferior frontal gyrus, and left insula. However, CI users displayed additional unique activation patterns for lexical and nonlexical stimuli. That is, for lexical stimuli, significant activations were observed in the CI user group in areas comprising the salience network (SN), also known as the intrinsic alertness network, such as the left dorsal anterior cingulate cortex (dACC), left insula, and right supplementary motor area. Also, for nonlexical stimuli, CI users activated areas comprising the SN such as the right insula and left dACC. Previous episodic observations on lexical stimulus processing via the dual auditory stream in CI users were reconfirmed in this study. However, this study also suggests that dual-stream auditory processing in CI users may need support from the SN. In other words, CI users need to pay extra attention to cope with the degraded auditory signal provided by the implant. © 2015 Wiley Periodicals, Inc.
Mondino, Marine; Jardri, Renaud; Suaud-Chagny, Marie-Françoise; Saoud, Mohamed; Poulet, Emmanuel; Brunelin, Jérôme
2016-01-01
Auditory verbal hallucinations (AVH) in patients with schizophrenia are associated with abnormal hyperactivity in the left temporo-parietal junction (TPJ) and abnormal connectivity between frontal and temporal areas. Recent findings suggest that fronto-temporal transcranial Direct Current stimulation (tDCS) with the cathode placed over the left TPJ and the anode over the left prefrontal cortex can alleviate treatment-resistant AVH in patients with schizophrenia. However, brain correlates of the AVH reduction are unclear. Here, we investigated the effect of tDCS on the resting-state functional connectivity (rs-FC) of the left TPJ. Twenty-three patients with schizophrenia and treatment-resistant AVH were randomly allocated to receive 10 sessions of active (2 mA, 20min) or sham tDCS (2 sessions/d for 5 d). We compared the rs-FC of the left TPJ between patients before and after they received active or sham tDCS. Relative to sham tDCS, active tDCS significantly reduced AVH as well as the negative symptoms. Active tDCS also reduced rs-FC of the left TPJ with the left anterior insula and the right inferior frontal gyrus and increased rs-FC of the left TPJ with the left angular gyrus, the left dorsolateral prefrontal cortex and the precuneus. The reduction of AVH severity was correlated with the reduction of the rs-FC between the left TPJ and the left anterior insula. These findings suggest that the reduction of AVH induced by tDCS is associated with a modulation of the rs-FC within an AVH-related brain network, including brain areas involved in inner speech production and monitoring. PMID:26303936
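Seed-based resting-state functional connectivity (rs-FC) of the kind reported here is, at its core, a correlation between a seed region's time series and each other region's time series. A minimal sketch follows; the preprocessing steps, ROI definitions, and the synthetic data are illustrative assumptions, not the study's actual analysis.

```python
import numpy as np

def seed_connectivity(seed_ts, roi_ts):
    """Pearson correlation between a seed time series (shape: T,) and
    each ROI time series (rows of roi_ts, shape: n_rois x T) -- a
    minimal seed-based rs-FC measure."""
    seed = (seed_ts - seed_ts.mean()) / seed_ts.std()
    rois = (roi_ts - roi_ts.mean(axis=1, keepdims=True)) \
           / roi_ts.std(axis=1, keepdims=True)
    return rois @ seed / len(seed)

# Synthetic example: one ROI coupled to the seed, one independent.
rng = np.random.default_rng(1)
seed = rng.standard_normal(200)
rois = np.vstack([seed + 0.5 * rng.standard_normal(200),  # coupled ROI
                  rng.standard_normal(200)])              # independent ROI
fc = seed_connectivity(seed, rois)
```

Here `fc[0]` is strongly positive and `fc[1]` is near zero; in the study, such correlations (after full fMRI preprocessing) were compared before and after tDCS.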
Click train encoding in primary and non-primary auditory cortex of anesthetized macaque monkeys.
Oshurkova, E; Scheich, H; Brosch, M
2008-06-02
We studied encoding of temporally modulated sounds in 28 multiunits in the primary auditory cortical field (AI) and in 35 multiunits in the secondary auditory cortical field (caudomedial auditory cortical field, CM) by presenting periodic click trains with click rates between 1 and 300 Hz lasting for 2-4 s. We found that all multiunits increased or decreased their firing rate during the steady state portion of the click train and that all except two multiunits synchronized their firing to individual clicks in the train. Rate increases and synchronized responses were most prevalent and strongest at low click rates, as expressed by best modulation frequency, limiting frequency, percentage of responsive multiunits, and average rate response and vector strength. Synchronized responses occurred up to 100 Hz; rate response occurred up to 300 Hz. Both auditory fields responded similarly to low click rates but differed at click rates above approximately 12 Hz at which more multiunits in AI than in CM exhibited synchronized responses and increased rate responses and more multiunits in CM exhibited decreased rate responses. These findings suggest that the auditory cortex of macaque monkeys encodes temporally modulated sounds similar to the auditory cortex of other mammals. Together with other observations presented in this and other reports, our findings also suggest that AI and CM have largely overlapping sensitivities for acoustic stimulus features but encode these features differently.
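The synchronization measure named in this abstract, vector strength, is a standard metric for phase locking (Goldberg & Brown, 1969) rather than anything specific to this study. A minimal sketch of the conventional formula, with illustrative spike times only (not the authors' data or analysis code):

```python
import numpy as np

def vector_strength(spike_times, period):
    # Map each spike time to a phase within the click period (0..2*pi),
    # then take the length of the mean resultant vector:
    # 1.0 = perfect phase locking to the clicks, 0.0 = no locking.
    phases = 2 * np.pi * (np.asarray(spike_times, dtype=float) % period) / period
    return np.hypot(np.cos(phases).mean(), np.sin(phases).mean())

# Spikes locked exactly to a hypothetical 100 Hz click train (period 0.01 s):
locked = [0.00, 0.01, 0.02, 0.03]
# Spikes spread uniformly across the click cycle:
spread = [0.0000, 0.0025, 0.0050, 0.0075]
print(vector_strength(locked, 0.01))  # ≈ 1.0 (phase-locked)
print(vector_strength(spread, 0.01))  # ≈ 0.0 (unlocked)
```

Reported vector strengths are typically tested for significance (e.g. with the Rayleigh statistic) before a unit is counted as synchronized, as in the multiunit counts above.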
Sazgar, Amir Arvin; Yazdani, Nasrin; Rezazadeh, Nima; Yazdi, Alireza Karimi
2010-10-01
Our results suggest that isolated auditory or vestibular involvement is unlikely and in fact audiovestibular neuropathy can better explain auditory neuropathy. The purpose of this study was to investigate saccule and related neural pathways in auditory neuropathy patients. Three males and five females diagnosed with auditory neuropathy were included in this prospective study. Patients' ages ranged from 21 to 45 years with a mean age of 28.6 ± 8.1 years and the history of disease was between 4 and 19 years. A group of 30 normal subjects served as the control group. The main outcome measures were the mean peak latency (in ms) of the two early waves (p13 and n23) of the vestibular evoked myogenic potential (VEMP) test in patients and controls. Of the 8 patients (16 ears), normal response was detected in 3 ears (1 in right and 2 in left ears). There were unrepeatable waves in four ears and absent VEMPs in nine ears.
The posterior parietal cortex (PPC) mediates anticipatory motor control.
Krause, Vanessa; Weber, Juliane; Pollok, Bettina
2014-01-01
Flexible and precisely timed motor control is based on functional interaction within a cortico-subcortical network. The left posterior parietal cortex (PPC) is thought to be crucial for anticipatory motor control via sensorimotor feedback matching. The intention of the present study was to disentangle the specific relevance of the left PPC for anticipatory motor control using transcranial direct current stimulation (tDCS), since a causal link remains to be established. Anodal vs. cathodal tDCS was applied for 10 min over the left PPC in 16 right-handed subjects in separate sessions. Left primary motor cortex (M1) tDCS served as a control condition and was applied in an additional 15 subjects. Prior to and immediately after tDCS, subjects performed three tasks demanding temporal motor precision with respect to an auditory stimulus: sensorimotor synchronization as a measure of anticipatory motor control, interval reproduction, and simple reaction. Left PPC tDCS affected right-hand synchronization but not simple reaction times. Motor anticipation deteriorated after anodal tDCS, while cathodal tDCS yielded the reverse effect. The variability of interval reproduction was increased by anodal left M1 tDCS, whereas it was reduced by cathodal tDCS. No significant effects on simple reaction times were found. The present data support the hypothesis that the left PPC is causally involved in right-hand anticipatory motor control, exceeding pure motor implementation as processed by M1 and possibly indicating subjective timing. Since M1 tDCS particularly affects motor implementation, the observed PPC effects are not likely to be explained by alterations of motor-cortical excitability. Copyright © 2014 Elsevier Inc. All rights reserved.
Civilisations of the Left Cerebral Hemisphere?
ERIC Educational Resources Information Center
Racle, Gabriel L.
Research conducted by Tadanobu Tsunoda on auditory and visual sensation, designed to test and understand the functions of the cerebral hemispheres, is discussed. Tsunoda discovered that the Japanese responses to sounds by the left and the right sides of the brain are very different from the responses obtained from people from other countries. His…
ERIC Educational Resources Information Center
Ikeda, Kohei; Higashi, Toshio; Sugawara, Kenichi; Tomori, Kounosuke; Kinoshita, Hiroshi; Kasai, Tatsuya
2012-01-01
The effect of visual and auditory enhancement of finger movement on corticospinal excitability during motor imagery (MI) was investigated using the transcranial magnetic stimulation technique. Motor-evoked potentials were elicited from the abductor digiti minimi muscle during MI with auditory, visual, and combined auditory and visual information, and no…
Petrini, Karin; Crabbe, Frances; Sheridan, Carol; Pollick, Frank E
2011-04-29
In humans, emotions from music serve important communicative roles. Despite a growing interest in the neural basis of music perception, action and emotion, the majority of previous studies in this area have focused on the auditory aspects of music performances. Here we investigate how the brain processes the emotions elicited by audiovisual music performances. We used event-related functional magnetic resonance imaging, and in Experiment 1 we defined the areas responding to audiovisual (musician's movements with music), visual (musician's movements only), and auditory emotional (music only) displays. Subsequently a region of interest analysis was performed to examine if any of the areas detected in Experiment 1 showed greater activation for emotionally mismatching performances (combining the musician's movements with mismatching emotional sound) than for emotionally matching music performances (combining the musician's movements with matching emotional sound) as presented in Experiment 2 to the same participants. The insula and the left thalamus were found to respond consistently to visual, auditory and audiovisual emotional information and to have increased activation for emotionally mismatching displays in comparison with emotionally matching displays. In contrast, the right thalamus was found to respond to audiovisual emotional displays and to have similar activation for emotionally matching and mismatching displays. These results suggest that the insula and left thalamus have an active role in detecting emotional correspondence between auditory and visual information during music performances, whereas the right thalamus has a different role.
Anderson, Afrouz A; Parsa, Kian; Geiger, Sydney; Zaragoza, Rachel; Kermanian, Riley; Miguel, Helga; Dashtestani, Hadis; Chowdhry, Fatima A; Smith, Elizabeth; Aram, Siamak; Gandjbakhche, Amir H
2018-01-01
Existing literature outlines the quality and location of activation in the prefrontal cortex (PFC) during working memory (WM) tasks. However, the effects of individual differences on the underlying neural processes of WM tasks are still unclear. In this functional near infrared spectroscopy study, we administered a visual and auditory n-back task to examine activation in the PFC while considering the influences of task performance and preferred learning strategy (VARK score). While controlling for age, results indicated that high performance (HP) subjects (accuracy > 90%) showed task-dependent lower activation compared to normal performance (NP) subjects in the PFC region. Specifically, HP groups showed lower activation in the left dorsolateral PFC (DLPFC) region during performance of the auditory task, whereas during the visual task they showed lower activation in the right DLPFC. After accounting for learning style, we found a correlation between visual and aural VARK scores and level of activation in the PFC. Subjects with higher visual VARK scores displayed lower activation during the auditory task in the left DLPFC, while those with higher visual scores exhibited higher activation during the visual task in bilateral DLPFC. During performance of the auditory task, HP subjects had higher visual VARK scores compared to NP subjects, indicating an effect of learning style on task performance and activation. The results of this study show that learning style and task performance can influence PFC activation, with applications toward neurological implications of learning style and populations with deficits in auditory or visual processing.
Stein, Aryeh D.; Wang, Meng; Rivera, Juan A.; Martorell, Reynaldo; Ramakrishnan, Usha
2012-01-01
The evidence relating prenatal supplementation with DHA to offspring neurological development is limited. We investigated the effect of prenatal DHA supplementation on infant brainstem auditory-evoked responses and visual-evoked potentials in a double-blind, randomized controlled trial in Cuernavaca, Mexico. Pregnant women were supplemented daily with 400 mg DHA or placebo from gestation wk 18–22 through delivery. DHA and placebo groups did not differ in maternal characteristics at randomization or infant characteristics at birth. Brainstem auditory-evoked responses were measured at 1 and 3 mo in 749 and 664 infants, respectively, and visual-evoked potentials were measured at 3 and 6 mo in 679 and 817 infants, respectively. Left-right brainstem auditory-evoked potentials were moderately correlated (range, 0.26–0.43; all P < 0.001) and left-right visual-evoked potentials were strongly correlated (range, 0.79–0.94; all P < 0.001) within any assessment. Correlations across visits were modest to moderate (range, 0.09–0.38; all P < 0.01). The offspring of DHA-supplemented women did not differ from those of control women with respect to any outcome measure (all comparisons P > 0.10). We conclude that DHA supplementation during pregnancy did not influence brainstem auditory-evoked responses at 1 and 3 mo or visual-evoked potentials at 3 and 6 mo. PMID:22739364
Central auditory processing and migraine: a controlled study
2014-01-01
Background This study aimed to verify and compare central auditory processing (CAP) performance in migraine patients with and without aura and in healthy controls. Methods Forty-one volunteers of both genders, aged between 18 and 40 years, diagnosed with migraine with or without aura by the criteria of "The International Classification of Headache Disorders" (ICHD-3 beta), and a control group of the same age range with no headache history, were included. The Gaps-in-noise (GIN), Duration Pattern test (DPT) and Dichotic Digits Test (DDT) were used to assess central auditory processing performance. Results The volunteers were divided into 3 groups: migraine with aura (11), migraine without aura (15), and control group (15), matched by age and schooling. Subjects with and without aura performed significantly worse on the GIN test for the right ear (p = .006) and the left ear (p = .005), and on the DPT (p < .001), when compared with controls without headache; however, no significant differences were found on the DDT for the right ear (p = .362) or the left ear (p = .190). Conclusions Subjects with migraine performed worse in auditory gap detection and in the discrimination of short and long durations. They also presented impairment in the physiological mechanisms of temporal processing, especially temporal resolution and temporal ordering, when compared with controls. Migraine could be related to impaired central auditory processing. Clinical trial registration Research Ethics Committee (CEP 0480.10) – UNIFESP PMID:25380661
Twomey, Tae; Waters, Dafydd; Price, Cathy J; Evans, Samuel; MacSweeney, Mairéad
2017-09-27
To investigate how hearing status, sign language experience, and task demands influence functional responses in the human superior temporal cortices (STC), we collected fMRI data from deaf and hearing participants (male and female) who acquired sign language either early or late in life. Our stimuli in all tasks were pictures of objects. We varied the linguistic and visuospatial processing demands in three different tasks that involved decisions about (1) the sublexical (phonological) structure of the British Sign Language (BSL) signs for the objects, (2) the semantic category of the objects, and (3) the physical features of the objects. Neuroimaging data revealed that in participants who were deaf from birth, STC showed increased activation during visual processing tasks. Importantly, this differed across hemispheres. Right STC was consistently activated regardless of the task, whereas left STC was sensitive to task demands. Significant activation was detected in the left STC only for the BSL phonological task. This task, we argue, placed greater demands on visuospatial processing than the other two tasks. In hearing signers, enhanced activation was absent in both left and right STC during all three tasks. Lateralization analyses demonstrated that the effect of deafness was more task-dependent in the left than the right STC, whereas it was more task-independent in the right than the left STC. These findings indicate how the absence of auditory input from birth leads to dissociable and altered functions of left and right STC in deaf participants. SIGNIFICANCE STATEMENT Those born deaf can offer unique insights into neuroplasticity, in particular in regions of superior temporal cortex (STC) that primarily respond to auditory input in hearing people. Here we demonstrate that in those deaf from birth the left and the right STC have altered and dissociable functions. The right STC was activated regardless of demands on visual processing.
In contrast, the left STC was sensitive to the demands of visuospatial processing. Furthermore, hearing signers, with the same sign language experience as the deaf participants, did not activate the STCs. Our data advance current understanding of neural plasticity by determining the differential effects that hearing status and task demands can have on left and right STC function. Copyright © 2017 Twomey et al.
Tokida, Haruki; Kanaya, Yuhei; Shimoe, Yutaka; Imagawa, Madoka; Fukunaga, Shinya; Kuriyama, Masaru
2017-08-31
A 45-year-old right-handed man with a past history (10 years) of putaminal hemorrhage presented with auditory agnosia associated with a left putaminal hemorrhage. The auditory agnosia was suspected to be due to bilateral damage to the acoustic radiations. Generalized auditory agnosia, verbal and non-verbal (music and environmental sounds), was diagnosed by neuropsychological examination. It improved 4 months after onset. However, clinical assessment of attention remained poor. Cognition for speech sounds improved slowly, but once it started to improve, progress was rapid. Subsequently, cognition for musical sounds also improved, while recovery of cognition for environmental sounds remained delayed; there was thus a dissociation in recovery between these domains. He was able to return to work a year after onset. We also review the literature on auditory agnosia and discuss the course of recovery in this report.
Visual Information Present in Infragranular Layers of Mouse Auditory Cortex.
Morrill, Ryan J; Hasenstaub, Andrea R
2018-03-14
The cerebral cortex is a major hub for the convergence and integration of signals from across the sensory modalities; sensory cortices, including primary regions, are no exception. Here we show that visual stimuli influence neural firing in the auditory cortex of awake male and female mice, using multisite probes to sample single units across multiple cortical layers. We demonstrate that visual stimuli influence firing in both primary and secondary auditory cortex. We then determine the laminar location of recording sites through electrode track tracing with fluorescent dye and optogenetic identification using layer-specific markers. Spiking responses to visual stimulation occur deep in auditory cortex and are particularly prominent in layer 6. Visual modulation of firing rate occurs more frequently at areas with secondary-like auditory responses than those with primary-like responses. Auditory cortical responses to drifting visual gratings are not orientation-tuned, unlike visual cortex responses. The deepest cortical layers thus appear to be an important locus for cross-modal integration in auditory cortex. SIGNIFICANCE STATEMENT The deepest layers of the auditory cortex are often considered its most enigmatic, possessing a wide range of cell morphologies and atypical sensory responses. Here we show that, in mouse auditory cortex, these layers represent a locus of cross-modal convergence, containing many units responsive to visual stimuli. Our results suggest that this visual signal conveys the presence and timing of a stimulus rather than specifics about that stimulus, such as its orientation. These results shed light on both how and what types of cross-modal information is integrated at the earliest stages of sensory cortical processing. Copyright © 2018 the authors 0270-6474/18/382854-09$15.00/0.
Geissler, Diana B.; Schmidt, H. Sabine; Ehret, Günter
2016-01-01
Activation of the auditory cortex (AC) by a given sound pattern is plastic, depending, in largely unknown ways, on the physiological state and behavioral context of the receiving animal and on the receiver's experience with the sounds. Such plasticity can be inferred when house mouse mothers respond maternally to pup ultrasounds right after parturition and naïve females have to learn to respond. Here we use c-FOS immunocytochemistry to quantify highly activated neurons in the AC fields and layers of seven groups of mothers and naïve females who differ in their knowledge of, and motivation to respond to, acoustic models of pup ultrasounds of different behavioral significance. Profiles of FOS-positive cells in the AC primary fields (AI, AAF), the ultrasonic field (UF), the secondary field (AII), and the dorsoposterior field (DP) suggest that activation reflects, in AI, AAF, and UF, the integration of sound properties with state-dependent factors in the animal; in the higher-order field AII, the news value of a given sound in the behavioral context; and in the higher-order field DP, the level of maternal motivation and, by a left-hemisphere activation advantage, the recognition of the meaning of sounds in the given context. Anesthesia reduced activation in all fields, especially in cortical layers 2/3. Thus, plasticity in the AC is field-specific, preparing different outputs of the AC fields in the process of perceiving, recognizing and responding to communication sounds. Further, the activation profiles of the auditory cortical fields suggest a differentiation between brains hormonally primed to know (mothers) and brains that acquired knowledge via implicit learning (naïve females). In this way, auditory cortical activation discriminates between instinctive (mothers) and learned (naïve females) cognition. PMID:27013959
Abdul Wahab, Noor Alaudin; Zakaria, Mohd Normani; Abdul Rahman, Abdul Hamid; Sidek, Dinsuhaimi; Wahab, Suzaily
2017-11-01
The present case-control study investigated binaural hearing performance in schizophrenia patients for sentences presented in quiet and in noise. Participants were twenty-one healthy controls and sixteen schizophrenia patients with normal peripheral auditory function. Binaural hearing was examined in four listening conditions using the Malay version of the hearing in noise test. Syntactically and semantically correct sentences were presented via headphones to the randomly selected subjects. In each condition, the adaptively obtained reception thresholds for speech (RTS) were used to determine the RTS noise composite and spatial release from masking. Schizophrenia patients demonstrated a significantly higher mean RTS value relative to healthy controls (p=0.018). The large effect sizes found in three listening conditions, i.e., in quiet (d=1.07), noise right (d=0.88) and noise composite (d=0.90), indicate a meaningful difference between the groups, whereas the noise front and noise left conditions showed medium (d=0.61) and small (d=0.50) effect sizes, respectively. No statistical difference between groups was noted with regard to spatial release from masking for the right (p=0.305) and left (p=0.970) ears. The present findings suggest abnormal unilateral auditory processing in the central auditory pathway in schizophrenia patients. Future studies exploring the role of binaural and spatial auditory processing are recommended.
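The group differences in this abstract are reported as Cohen's d. As a reference for reading those values, a minimal sketch of the conventional pooled-standard-deviation formula for two independent groups (illustrative numbers only, not the study's data):

```python
import math

def cohens_d(a, b):
    # Cohen's d for two independent groups:
    # difference of means divided by the pooled sample standard deviation.
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # unbiased variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# Hypothetical RTS-like values for a patient group vs. a control group:
print(cohens_d([2.0, 4.0, 6.0], [1.0, 3.0, 5.0]))  # 0.5
```

By the usual convention, d ≈ 0.2 is small, 0.5 medium, and 0.8 large, which matches the labels the abstract attaches to its reported values.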
Anatomical Substrates of Visual and Auditory Miniature Second-language Learning
Newman-Norlund, Roger D.; Frey, Scott H.; Petitto, Laura-Ann; Grafton, Scott T.
2007-01-01
Longitudinal changes in brain activity during second language (L2) acquisition of a miniature finite-state grammar, named Wernickese, were identified with functional magnetic resonance imaging (fMRI). Participants learned either a visual sign language form or an auditory-verbal form to equivalent proficiency levels. Brain activity during sentence comprehension while hearing/viewing stimuli was assessed at low, medium, and high levels of proficiency in three separate fMRI sessions. Activation in the left inferior frontal gyrus (Broca's area) correlated positively with improving L2 proficiency, whereas activity in the right-hemisphere (RH) homologue was negatively correlated for both auditory and visual forms of the language. Activity in sequence learning areas including the premotor cortex and putamen also correlated with L2 proficiency. Modality-specific differences in the blood oxygenation level-dependent signal accompanying L2 acquisition were localized to the planum temporale (PT). Participants learning the auditory form exhibited decreasing reliance on bilateral PT sites across sessions. In the visual form, bilateral PT sites increased in activity between Session 1 and Session 2, then left PT activity decreased from Session 2 to Session 3. Comparison of L2 laterality (relative to L1 laterality) in the auditory and visual groups failed to demonstrate greater RH lateralization for the visual versus the auditory L2. These data establish a common role for Broca's area in language acquisition irrespective of the perceptual form of the language and suggest that L2s are processed similarly to first languages even when learned after the "critical period." The right frontal cortex was not preferentially recruited by visual language after accounting for phonetic/structural complexity and performance. PMID:17129186
Auditory interfaces: The human perceiver
NASA Technical Reports Server (NTRS)
Colburn, H. Steven
1991-01-01
A brief introduction to the basic auditory abilities of the human perceiver with particular attention toward issues that may be important for the design of auditory interfaces is presented. The importance of appropriate auditory inputs to observers with normal hearing is probably related to the role of hearing as an omnidirectional, early warning system and to its role as the primary vehicle for communication of strong personal feelings.
Unusual Presentation of Chronic Idiopathic Thrombocytopenic Purpura
Madhusudhanan, M.; Yusuff, Ali M.
2008-01-01
A snakebite victim presented with a normal clotting profile and a low platelet count. A routine CBC in his past records (February 2004) showed a platelet count of 20,000/microlitre, but the patient was not symptomatic. We report a case of chronic idiopathic thrombocytopenic purpura (ITP), found incidentally in a patient presenting with snakebite. The patient also has acquired primary testicular failure. After the diagnosis the patient was on regular follow-up. He sustained trauma to the right external auditory canal and perforated his tympanic membrane; his left tympanic membrane was also scarred and retracted. Establishing a diagnosis of ITP early is important so that the patient can take precautions to avoid undue trauma and maintain proper follow-up. PMID:22567212
Singh, K
2015-01-01
The mobile phone (MP) is a commonly used communication tool, and the electromagnetic waves (EMWs) emitted from MPs may pose potential health hazards. We therefore studied the effect of EMWs emitted from the mobile phone on the brainstem auditory evoked potential (BAEP) in male subjects aged 20-40 years. BAEPs were recorded using the standard 10-20 system of electrode placement and sound click stimuli of specified intensity, duration and frequency. The right ear was exposed to EMWs emitted from an MP for about 10 min. On comparing recordings before and after exposure in the right ear (found to be the dominant ear), there was a significant increase in the latency of waves II and III (p < 0.05) and V (p < 0.001) and in the amplitude of the I-Ia wave (p < 0.05), and a decrease in the IPL of the III-V wave (p < 0.05) after exposure. No significant change was found in the BAEP waves of the left ear before vs. after exposure. On comparing the right ear (routinely exposed, being the dominant ear) and the left ear (not exposed) before exposure, the IPL of the III-V wave and the amplitude of V-Va were greater (p < 0.001) in the right ear, whereas the latencies of waves III and IV were greater (p < 0.001) in the left ear. After exposure, the amplitude of V-Va was greater (p < 0.05) in the right ear than in the left. In conclusion, EMWs emitted from MPs affect the auditory evoked potential.
Brain connectivity and psychiatric comorbidity in adolescents with Internet gaming disorder.
Han, Doug Hyun; Kim, Sun Mi; Bae, Sujin; Renshaw, Perry F; Anderson, Jeffrey S
2017-05-01
Prolonged Internet video game play may have multiple and complex effects on human cognition and brain development, in both negative and positive ways. There is currently no consensus on the principal effects of video game play on brain development, nor on its relationship to psychiatric comorbidity. In this study, 78 adolescents with Internet gaming disorder (IGD) and 73 comparison subjects without IGD, including subgroups with no other psychiatric comorbid disease, with major depressive disorder, and with attention deficit hyperactivity disorder (ADHD), were included in a 3 T resting-state functional magnetic resonance imaging analysis. The severity of Internet gaming disorder, depression, anxiety and ADHD symptoms was assessed with the Young Internet Addiction Scale, the Beck Depression Inventory, the Beck Anxiety Inventory and the Korean ADHD rating scales, respectively. Patients with IGD showed an increased functional correlation between seven pairs of regions, all satisfying q < 0.05 false discovery rate in light of multiple statistical tests: left frontal eye field to dorsal anterior cingulate, left frontal eye field to right anterior insula, left dorsolateral prefrontal cortex (DLPFC) to left temporoparietal junction (TPJ), right DLPFC to right TPJ, right auditory cortex to right motor cortex, right auditory cortex to supplementary motor area, and right auditory cortex to dorsal anterior cingulate. These findings may represent a training effect of extended game play and suggest a risk or predisposition in game players for over-connectivity of the default mode and executive control networks that may relate to psychiatric comorbidity. © 2015 Society for the Study of Addiction.
Hemispheric asymmetry of auditory steady-state responses to monaural and diotic stimulation.
Poelmans, Hanne; Luts, Heleen; Vandermosten, Maaike; Ghesquière, Pol; Wouters, Jan
2012-12-01
Amplitude modulations in the speech envelope are crucial elements for speech perception. These modulations span the rates at which syllabic (~3-7 Hz) and phonemic transitions occur in speech. Theories of speech perception hypothesize that each hemisphere of the auditory cortex is specialized in analyzing modulations at different timescales: phonemic-rate modulations of the speech envelope lateralize to the left hemisphere, whereas right lateralization occurs for slow, syllabic-rate modulations. In the present study, neural processing of phonemic- and syllabic-rate modulations was investigated with auditory steady-state responses (ASSRs). ASSRs to speech-weighted noise stimuli, amplitude modulated at 4, 20, and 80 Hz, were recorded in 30 normal-hearing adults. The 80 Hz ASSR is primarily generated by the brainstem, whereas the 20 and 4 Hz ASSRs are mainly cortically evoked and relate to speech perception. Stimuli were presented diotically (same signal to both ears) and monaurally (one signal to the left or right ear). For 80 Hz, diotic ASSRs were larger than monaural responses. This binaural advantage decreased with decreasing modulation frequency. For 20 Hz, diotic ASSRs were equal to monaural responses, while for 4 Hz, diotic responses were smaller than monaural responses. Comparison of left and right ear stimulation demonstrated that, with decreasing modulation rate, a gradual change from ipsilateral to right lateralization occurred. Together, these results (1) suggest that ASSR enhancement to binaural stimulation decreases in the ascending auditory system and (2) indicate that right lateralization is more prominent for low-frequency ASSRs. These findings may have important consequences for electrode placement in clinical settings, as well as for the understanding of low-frequency ASSR generation.
Referential Coding Contributes to the Horizontal SMARC Effect
ERIC Educational Resources Information Center
Cho, Yang Seok; Bae, Gi Yeul; Proctor, Robert W.
2012-01-01
The present study tested whether coding of tone pitch relative to a referent contributes to the correspondence effect between the pitch height of an auditory stimulus and the location of a lateralized response. When left-right responses are mapped to high or low pitch tones, performance is better with the high-right/low-left mapping than with the…
ERIC Educational Resources Information Center
Railo, H.; Tallus, J.; Hamalainen, H.
2011-01-01
Studies have suggested that supramodal attentional resources are biased rightward due to asymmetric spatial fields of the two hemispheres. This bias has been observed especially in right-handed subjects. We presented left and right-handed subjects with brief uniform grey visual stimuli in either the left or right visual hemifield. Consistent with…
ERIC Educational Resources Information Center
Hadlington, Lee J.; Bridges, Andrew M.; Beaman, C. Philip
2006-01-01
Three experiments attempted to clarify the effect of altering the spatial presentation of irrelevant auditory information. Previous research using serial recall tasks demonstrated a left-ear disadvantage for the presentation of irrelevant sounds (Hadlington, Bridges, & Darby, 2004). Experiments 1 and 2 examined the effects of manipulating the…
Brain correlates of stuttering and syllable production. A PET performance-correlation analysis.
Fox, P T; Ingham, R J; Ingham, J C; Zamarripa, F; Xiong, J H; Lancaster, J L
2000-10-01
To distinguish the neural systems of normal speech from those of stuttering, PET images of brain blood flow were probed (correlated voxel-wise) with per-trial speech-behaviour scores obtained during PET imaging. Two cohorts were studied: 10 right-handed men who stuttered and 10 right-handed, age- and sex-matched non-stuttering controls. Ninety PET blood flow images were obtained in each cohort (nine per subject as three trials of each of three conditions) from which r-value statistical parametric images (SPI{r}) were computed. Brain correlates of stutter rate and syllable rate showed striking differences in both laterality and sign (i.e. positive or negative correlations). Stutter-rate correlates, both positive and negative, were strongly lateralized to the right cerebral and left cerebellar hemispheres. Syllable correlates in both cohorts were bilateral, with a bias towards the left cerebral and right cerebellar hemispheres, in keeping with the left-cerebral dominance for language and motor skills typical of right-handed subjects. For both stutters and syllables, the brain regions that were correlated positively were those of speech production: the mouth representation in the primary motor cortex; the supplementary motor area; the inferior lateral premotor cortex (Broca's area); the anterior insula; and the cerebellum. The principal difference between syllable-rate and stutter-rate positive correlates was hemispheric laterality. A notable exception to this rule was that cerebellar positive correlates for syllable rate were far more extensive in the stuttering cohort than in the control cohort, which suggests a specific role for the cerebellum in enabling fluent utterances in persons who stutter. Stutters were negatively correlated with right-cerebral regions (superior and middle temporal gyrus) associated with auditory perception and processing, regions which were positively correlated with syllables in both the stuttering and control cohorts.
These findings support long-held theories that the brain correlates of stuttering are the speech-motor regions of the non-dominant (right) cerebral hemisphere, and extend this theory to include the non-dominant (left) cerebellar hemisphere. The present findings also indicate a specific role of the cerebellum in the fluent utterances of persons who stutter. Support is also offered for theories that implicate auditory processing problems in stuttering.
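The per-trial performance-correlation analysis described above reduces to a voxel-wise Pearson correlation between blood-flow images and behavioural scores. A minimal sketch with a hypothetical array layout, not the authors' SPI{r} pipeline:

```python
import numpy as np

def voxelwise_r(images, scores):
    """Pearson r between each voxel and per-trial behavioural scores.

    images: (n_images, n_voxels) blood-flow values, one row per PET image
    scores: (n_images,) per-trial scores (e.g. stutter rate or syllable rate)
    Returns one r value per voxel, i.e. a flattened SPI{r}-style map.
    """
    X = images - images.mean(axis=0)        # centre each voxel's time course
    y = scores - scores.mean()              # centre the behavioural scores
    num = X.T @ y                           # covariance numerator per voxel
    den = np.sqrt((X ** 2).sum(axis=0) * (y ** 2).sum())
    return num / den
```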
2016-12-01
[Fragmentary search-result snippet from a U.S. Navy technical report on Phase 3 auditory weighting functions. Recoverable content: the weighting functions adopted the "M-weighting" shapes at lower frequencies, where no TTS data existed at that time; the report illustrates how changing each designated parameter alters the shapes of the weighting functions and exposure functions; thresholds are given in dB re 1 μPa; a table lists species group designations for the Navy Phase 3 auditory weighting functions.]
Grandin, Cécile B.; Dricot, Laurence; Plaza, Paula; Lerens, Elodie; Rombaux, Philippe; De Volder, Anne G.
2013-01-01
Using functional magnetic resonance imaging (fMRI) in ten early blind humans, we found robust occipital activation during two odor-processing tasks (discrimination or categorization of fruit and flower odors), as well as during control auditory-verbal conditions (discrimination or categorization of fruit and flower names). We also found evidence for reorganization and specialization of the ventral part of the occipital cortex, with dissociation according to stimulus modality: the right fusiform gyrus was most activated during olfactory conditions while part of the left ventral lateral occipital complex showed a preference for auditory-verbal processing. Only little occipital activation was found in sighted subjects, but the same right-olfactory/left-auditory-verbal hemispheric lateralization was found overall in their brain. This difference between the groups was mirrored by superior performance of the blind in various odor-processing tasks. Moreover, the level of right fusiform gyrus activation during the olfactory conditions was highly correlated with individual scores in a variety of odor recognition tests, indicating that the additional occipital activation may play a functional role in odor processing. PMID:23967263
Ikeda, Yumiko; Yahata, Noriaki; Takahashi, Hidehiko; Koeda, Michihiko; Asai, Kunihiko; Okubo, Yoshiro; Suzuki, Hidenori
2010-05-01
Comprehending conversation in a crowd requires appropriate orienting and sustainment of auditory attention to, and discrimination of, the target speaker. While a multitude of cognitive functions such as voice perception and language processing work in concert to subserve this ability, it is still unclear which cognitive components critically determine successful discrimination of speech sounds under constantly changing auditory conditions. To investigate this, we present a functional magnetic resonance imaging (fMRI) study of changes in cerebral activities associated with varying challenge levels of speech discrimination. Subjects participated in a diotic listening paradigm that presented them with two news stories read simultaneously but independently by a target speaker and a distracting speaker of incongruent or congruent sex. We found that a distracting voice of congruent, rather than incongruent, sex made listening more challenging, resulting in enhanced activities mainly in the left temporal and frontal gyri. Further, the activities at the left inferior, left anterior superior and right superior loci in the temporal gyrus were shown to be significantly correlated with accuracy of the discrimination performance. The present results suggest that the subregions of bilateral temporal gyri play a key role in the successful discrimination of speech under constantly changing auditory conditions as encountered in daily life. Copyright © 2010 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
Spatial processing in the auditory cortex of the macaque monkey
NASA Astrophysics Data System (ADS)
Recanzone, Gregg H.
2000-10-01
The patterns of cortico-cortical and cortico-thalamic connections of auditory cortical areas in the rhesus monkey have led to the hypothesis that acoustic information is processed in series and in parallel in the primate auditory cortex. Recent physiological experiments in the behaving monkey indicate that the response properties of neurons in different cortical areas are both functionally distinct from each other, which is indicative of parallel processing, and functionally similar to each other, which is indicative of serial processing. Thus, auditory cortical processing may be similar to the serial and parallel "what" and "where" processing by the primate visual cortex. If "where" information is serially processed in the primate auditory cortex, neurons in cortical areas along this pathway should have progressively better spatial tuning properties. This prediction is supported by recent experiments that have shown that neurons in the caudomedial field have better spatial tuning properties than neurons in the primary auditory cortex. Neurons in the caudomedial field are also better than primary auditory cortex neurons at predicting the sound localization ability across different stimulus frequencies and bandwidths in both azimuth and elevation. These data support the hypothesis that the primate auditory cortex processes acoustic information in a serial and parallel manner and suggest that this may be a general cortical mechanism for sensory perception.
A Brain System for Auditory Working Memory.
Kumar, Sukhbinder; Joseph, Sabine; Gander, Phillip E; Barascud, Nicolas; Halpern, Andrea R; Griffiths, Timothy D
2016-04-20
The brain basis for auditory working memory, the process of actively maintaining sounds in memory over short periods of time, is controversial. Using functional magnetic resonance imaging in human participants, we demonstrate that the maintenance of single tones in memory is associated with activation in auditory cortex. In addition, sustained activation was observed in hippocampus and inferior frontal gyrus. Multivoxel pattern analysis showed that patterns of activity in auditory cortex and left inferior frontal gyrus distinguished the tone that was maintained in memory. Functional connectivity during maintenance was demonstrated between auditory cortex and both the hippocampus and inferior frontal cortex. The data support a system for auditory working memory based on the maintenance of sound-specific representations in auditory cortex by projections from higher-order areas, including the hippocampus and frontal cortex. In this work, we demonstrate a system for maintaining sound in working memory based on activity in auditory cortex, hippocampus, and frontal cortex, and functional connectivity among them. Specifically, our work makes three advances from the previous work. First, we robustly demonstrate hippocampal involvement in all phases of auditory working memory (encoding, maintenance, and retrieval): the role of hippocampus in working memory is controversial. Second, using a pattern classification technique, we show that activity in the auditory cortex and inferior frontal gyrus is specific to the maintained tones in working memory. Third, we show long-range connectivity of auditory cortex to hippocampus and frontal cortex, which may be responsible for keeping such representations active during working memory maintenance. Copyright © 2016 Kumar et al.
Dietrich, Susanne; Hertrich, Ingo; Ackermann, Hermann
2015-01-01
In many functional magnetic resonance imaging (fMRI) studies, blind humans were found to show cross-modal reorganization engaging the visual system in non-visual tasks. For example, blind people can manage to understand (synthetic) spoken language at very high speaking rates up to ca. 20 syllables/s (syl/s). FMRI data showed that hemodynamic activation within right-hemispheric primary visual cortex (V1), bilateral pulvinar (Pv), and left-hemispheric supplementary motor area (pre-SMA) covaried with their capability of ultra-fast speech (16 syllables/s) comprehension. It has been suggested that right V1 plays an important role with respect to the perception of ultra-fast speech features, particularly the detection of syllable onsets. Furthermore, left pre-SMA seems to be an interface between these syllabic representations and the frontal speech processing and working memory network. So far, little is known about the networks linking V1 to Pv, auditory cortex (A1), and (mesio-) frontal areas. Dynamic causal modeling (DCM) was applied to investigate (i) the input structure from A1 and Pv toward right V1 and (ii) output from right V1 and A1 to left pre-SMA. As concerns the input, Pv was significantly connected to V1, in addition to A1, in blind participants, but not in sighted controls. Regarding the output, V1 was significantly connected to pre-SMA in blind individuals, and the strength of V1-SMA connectivity correlated with the performance of ultra-fast speech comprehension. By contrast, in sighted controls, who did not understand ultra-fast speech, pre-SMA received input from neither A1 nor V1. Taken together, right V1 might facilitate the “parsing” of the ultra-fast speech stream in blind subjects by receiving subcortical auditory input via the Pv (= secondary visual pathway) and transmitting this information toward contralateral pre-SMA. PMID:26148062
Renier, Laurent A.; Anurova, Irina; De Volder, Anne G.; Carlson, Synnöve; VanMeter, John; Rauschecker, Josef P.
2012-01-01
The segregation between cortical pathways for the identification and localization of objects is thought of as a general organizational principle in the brain. Yet, little is known about the unimodal versus multimodal nature of these processing streams. The main purpose of the present study was to test whether the auditory and tactile dual pathways converged into specialized multisensory brain areas. We used functional magnetic resonance imaging (fMRI) to compare directly in the same subjects the brain activation related to localization and identification of comparable auditory and vibrotactile stimuli. Results indicate that the right inferior frontal gyrus (IFG) and both left and right insula were more activated during identification conditions than during localization in both touch and audition. The reverse dissociation was found for the left and right inferior parietal lobules (IPL), the left superior parietal lobule (SPL) and the right precuneus-SPL, which were all more activated during localization conditions in the two modalities. We propose that specialized areas in the right IFG and the left and right insula are multisensory operators for the processing of stimulus identity whereas parts of the left and right IPL and SPL are specialized for the processing of spatial attributes independently of sensory modality. PMID:19726653
Neural signatures of lexical tone reading.
Kwok, Veronica P Y; Wang, Tianfu; Chen, Siping; Yakpo, Kofi; Zhu, Linlin; Fox, Peter T; Tan, Li Hai
2015-01-01
Research on how lexical tone is neuroanatomically represented in the human brain is central to our understanding of cortical regions subserving language. Past studies have exclusively focused on tone perception of the spoken language, and little is known about lexical tone processing when reading visual words and its associated brain mechanisms. In this study, we performed two experiments to identify neural substrates in Chinese tone reading. First, we used a tone judgment paradigm to investigate tone processing of visually presented Chinese characters. We found that, relative to baseline, tone perception of printed Chinese characters was mediated by strong brain activation in bilateral frontal regions, left inferior parietal lobule, left posterior middle/medial temporal gyrus, left inferior temporal region, bilateral visual systems, and cerebellum. Surprisingly, no activation was found in superior temporal regions, brain sites well known for speech tone processing. In an activation likelihood estimation (ALE) meta-analysis combining results of relevant published studies, we attempted to elucidate whether the left temporal cortex activities identified in Experiment 1 are consistent with those found in previous studies of auditory lexical tone perception. ALE results showed that only the left superior temporal gyrus and putamen were critical in auditory lexical tone processing. These findings suggest that activation in the superior temporal cortex associated with lexical tone perception is modality-dependent. © 2014 Wiley Periodicals, Inc.
Rogalsky, Corianne; Love, Tracy; Driscoll, David; Anderson, Steven W.; Hickok, Gregory
2013-01-01
The discovery of mirror neurons in macaque has led to a resurrection of motor theories of speech perception. Although the majority of lesion and functional imaging studies have associated perception with the temporal lobes, it has also been proposed that the ‘human mirror system’, which prominently includes Broca’s area, is the neurophysiological substrate of speech perception. Although numerous studies have demonstrated a tight link between sensory and motor speech processes, few have directly assessed the critical prediction of mirror neuron theories of speech perception, namely that damage to the human mirror system should cause severe deficits in speech perception. The present study measured speech perception abilities of patients with lesions involving motor regions in the left posterior frontal lobe and/or inferior parietal lobule (i.e., the proposed human ‘mirror system’). Performance was at or near ceiling in patients with fronto-parietal lesions. It is only when the lesion encroaches on auditory regions in the temporal lobe that perceptual deficits are evident. This suggests that ‘mirror system’ damage does not disrupt speech perception, but rather that auditory systems are the primary substrate for speech perception. PMID:21207313
Ariai, M Shafie; Eggers, Scott D; Giannini, Caterina; Driscoll, Colin L W; Link, Michael J
2015-10-01
Distant metastasis of mucinous adenocarcinoma from the gastrointestinal tract, ovaries, pancreas, lungs, breast, or urogenital system is a well-described entity. Mucinous adenocarcinomas from different primary sites are histologically identical, with gland cells producing a copious amount of mucin. This report describes a very rare solitary metastasis of a mucinous adenocarcinoma of unknown origin to the facial/vestibulocochlear nerve complex in the cerebellopontine angle. A 71-year-old woman presented with a several-month history of progressive neurological decline and an extensive negative workup performed elsewhere. She presented to our institution with complete left facial weakness, left-sided deafness, gait unsteadiness, headache and anorexia. A repeat magnetic resonance imaging scan of the head revealed a cystic, enhancing abnormality involving the left cerebellopontine angle and internal auditory canal. A left retrosigmoid craniotomy was performed and the lesion was completely resected. The final pathology was a mucinous adenocarcinoma of indeterminate origin. Postoperatively, the patient continued with her preoperative deficits and subsequently died of her systemic disease 6 weeks after discharge. The facial/vestibulocochlear nerve complex is an unusual location for metastatic disease in the central nervous system. Clinicians should consider metastatic tumor as the possible etiology of an unusual appearing mass in this location causing profound neurological deficits. The prognosis after metastatic mucinous adenocarcinoma to the cranial nerves in the cerebellopontine angle may be poor. Copyright © 2015 Elsevier Inc. All rights reserved.
Korostenskaja, Milena; Harris, Elana; Giovanetti, Cathy; Horn, Paul; Wang, Yingying; Rose, Douglas; Fujiwara, Hisako; Xiang, Jing
2013-05-30
Patients with obsessive-compulsive disorder (OCD) often report sensory intolerances which may lead to significant functional impairment. This study used auditory evoked fields (AEFs) to address the question of whether neural correlates of sensory auditory information processing differ in youth with OCD compared with healthy comparison subjects (HCS). AEFs, recorded with a whole head 275-channel magnetoencephalography system, were elicited in response to binaural auditory stimuli from 10 pediatric subjects with OCD (ages 8-13, mean 11 years, 6 males) and 10 age- and gender-matched HCS. Three major neuromagnetic responses were studied: M70 (60-80 ms), M100 (90-120 ms), and M150 (130-190 ms). When compared with HCS, subjects with OCD demonstrated delayed latency of the M100 response. In subjects with OCD the amplitude of the M100 and M150 responses was significantly greater in the right hemisphere compared with the left hemisphere. Current results suggest that when compared with HCS, subjects with OCD have altered auditory information processing, evident from the delayed latency of the M100 response, which is thought to be associated with the encoding of physical stimulus characteristics. Interhemispheric asymmetry with increased M100 and M150 amplitudes over the right hemisphere compared with the left hemisphere was found in young OCD subjects. These results should be interpreted with caution due to the high variability rate of responses in both HCS and OCD subjects, as well as the possible effect of medication in OCD subjects. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Eye-movements intervening between two successive sounds disrupt comparisons of auditory location
Pavani, Francesco; Husain, Masud; Driver, Jon
2008-01-01
Many studies have investigated how saccades may affect the internal representation of visual locations across eye-movements. Here we studied instead whether eye-movements can affect auditory spatial cognition. In two experiments, participants judged the relative azimuth (same/different) of two successive sounds presented from a horizontal array of loudspeakers, separated by a 2.5-s delay. Eye-position was either held constant throughout the trial (being directed in a fixed manner to the far left or right of the loudspeaker array), or had to be shifted to the opposite side of the array during the retention delay between the two sounds, after the first sound but before the second. Loudspeakers were either visible (Experiment 1) or occluded from sight (Experiment 2). In both cases, shifting eye-position during the silent delay-period affected auditory performance in the successive auditory comparison task, even though the auditory inputs to be judged were equivalent. Sensitivity (d′) for the auditory discrimination was disrupted, specifically when the second sound shifted in the opposite direction to the intervening eye-movement with respect to the first sound. These results indicate that eye-movements affect the internal representation of auditory location. PMID:18566808
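The sensitivity measure d′ used above is computed from hit and false-alarm proportions via the inverse normal CDF. A small sketch; the log-linear correction for extreme proportions is an assumption added here (the paper does not specify how 0 or 1 rates were handled):

```python
from statistics import NormalDist

def dprime(hits, false_alarms, n_signal, n_noise):
    """Sensitivity d' = z(hit rate) - z(false-alarm rate).

    Applies a log-linear correction (an assumption, not from the paper)
    to keep rates strictly between 0 and 1.
    """
    z = NormalDist().inv_cdf
    h = (hits + 0.5) / (n_signal + 1)        # corrected hit rate
    f = (false_alarms + 0.5) / (n_noise + 1) # corrected false-alarm rate
    return z(h) - z(f)
```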
Auditory priming improves neural synchronization in auditory-motor entrainment.
Crasta, Jewel E; Thaut, Michael H; Anderson, Charles W; Davies, Patricia L; Gavin, William J
2018-05-22
Neurophysiological research has shown that auditory and motor systems interact during movement to rhythmic auditory stimuli through a process called entrainment. This study explores the neural oscillations underlying auditory-motor entrainment using electroencephalography. Forty young adults were randomly assigned to one of two control conditions, an auditory-only condition or a motor-only condition, prior to a rhythmic auditory-motor synchronization condition (referred to as the combined condition). Participants assigned to the auditory-only condition (auditory-first group) listened to 400 trials of auditory stimuli presented every 800 ms, while those in the motor-only condition (motor-first group) were asked to tap rhythmically every 800 ms without any external stimuli. Following their control condition, all participants completed an auditory-motor combined condition that required tapping along with auditory stimuli every 800 ms. As expected, the neural processes for the combined condition for each group were different compared to their respective control condition. Time-frequency analysis of total power at an electrode site on the left central scalp (C3) indicated that the neural oscillations elicited by auditory stimuli, especially in the beta and gamma range, drove the auditory-motor entrainment. For the combined condition, the auditory-first group had significantly lower evoked power for a region of interest representing sensorimotor processing (4-20 Hz) and less total power in a region associated with anticipation and predictive timing (13-16 Hz) than the motor-first group. Thus, the auditory-only condition served as a priming facilitator of the neural processes in the combined condition, more so than the motor-only condition. Results suggest that even brief periods of rhythmic training of the auditory system lead to neural efficiency, facilitating the motor system during the process of entrainment.
These findings have implications for interventions using rhythmic auditory stimulation. Copyright © 2018 Elsevier Ltd. All rights reserved.
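The distinction drawn above between evoked power (time-frequency transform of the trial average, phase-locked activity only) and total power (average of single-trial power, which also keeps non-phase-locked activity) can be illustrated with a simple Morlet-wavelet sketch. This is a generic reconstruction, not the study's analysis pipeline, and the function names are assumptions:

```python
import numpy as np

def morlet(freq, fs, n_cycles=7):
    """Complex Morlet wavelet centred on freq (Hz)."""
    sd = n_cycles / (2 * np.pi * freq)            # Gaussian SD in seconds
    t = np.arange(-3 * sd, 3 * sd, 1 / fs)
    return np.exp(2j * np.pi * freq * t) * np.exp(-t ** 2 / (2 * sd ** 2))

def evoked_and_total_power(trials, freq, fs):
    """trials: (n_trials, n_samples) single-electrode EEG epochs.

    Total power averages single-trial power (keeps non-phase-locked
    activity); evoked power transforms the trial average (phase-locked only).
    """
    w = morlet(freq, fs)
    power = lambda x: np.abs(np.convolve(x, w, mode="same")) ** 2
    total = np.mean([power(tr) for tr in trials], axis=0)
    evoked = power(trials.mean(axis=0))
    return evoked, total
```

For oscillations with random phase across trials, total power stays high while evoked power averages toward zero, which is why the two measures dissociate.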
Cortico-Cortical Connectivity Within Ferret Auditory Cortex.
Bizley, Jennifer K; Bajo, Victoria M; Nodal, Fernando R; King, Andrew J
2015-10-15
Despite numerous studies of auditory cortical processing in the ferret (Mustela putorius), very little is known about the connections between the different regions of the auditory cortex that have been characterized cytoarchitectonically and physiologically. We examined the distribution of retrograde and anterograde labeling after injecting tracers into one or more regions of ferret auditory cortex. Injections of different tracers at frequency-matched locations in the core areas, the primary auditory cortex (A1) and anterior auditory field (AAF), of the same animal revealed the presence of reciprocal connections with overlapping projections to and from discrete regions within the posterior pseudosylvian and suprasylvian fields (PPF and PSF), suggesting that these connections are frequency specific. In contrast, projections from the primary areas to the anterior dorsal field (ADF) on the anterior ectosylvian gyrus were scattered and non-overlapping, consistent with the non-tonotopic organization of this field. The relative strength of the projections originating in each of the primary fields differed, with A1 predominantly targeting the posterior bank fields PPF and PSF, which in turn project to the ventral posterior field, whereas AAF projects more heavily to the ADF, which then projects to the anteroventral field and the pseudosylvian sulcal cortex. These findings suggest that parallel anterior and posterior processing networks may exist, although the connections between different areas often overlap and interactions were present at all levels. © 2015 Wiley Periodicals, Inc.
Brain Metabolism during Hallucination-Like Auditory Stimulation in Schizophrenia
Horga, Guillermo; Fernández-Egea, Emilio; Mané, Anna; Font, Mireia; Schatz, Kelly C.; Falcon, Carles; Lomeña, Francisco; Bernardo, Miguel; Parellada, Eduard
2014-01-01
Auditory verbal hallucinations (AVH) in schizophrenia are typically characterized by rich emotional content. Despite the prominent role of emotion in regulating normal perception, the neural interface between emotion-processing regions such as the amygdala and auditory regions involved in perception remains relatively unexplored in AVH. Here, we studied brain metabolism using FDG-PET in 9 remitted patients with schizophrenia that previously reported severe AVH during an acute psychotic episode and 8 matched healthy controls. Participants were scanned twice: (1) at rest and (2) during the perception of aversive auditory stimuli mimicking the content of AVH. Compared to controls, remitted patients showed an exaggerated response to the AVH-like stimuli in limbic and paralimbic regions, including the left amygdala. Furthermore, patients displayed abnormally strong connections between the amygdala and auditory regions of the cortex and thalamus, along with abnormally weak connections between the amygdala and medial prefrontal cortex. These results suggest that abnormal modulation of the auditory cortex by limbic-thalamic structures might be involved in the pathophysiology of AVH and may potentially account for the emotional features that characterize hallucinatory percepts in schizophrenia. PMID:24416328
Integration of auditory and vibrotactile stimuli: Effects of frequency
Wilson, E. Courtenay; Reed, Charlotte M.; Braida, Louis D.
2010-01-01
Perceptual integration of vibrotactile and auditory sinusoidal tone pulses was studied in detection experiments as a function of stimulation frequency. Vibrotactile stimuli were delivered through a single channel vibrator to the left middle fingertip. Auditory stimuli were presented diotically through headphones in a background of 50 dB sound pressure level broadband noise. Detection performance for combined auditory-tactile presentations was measured using stimulus levels that yielded 63% to 77% correct unimodal performance. In Experiment 1, the vibrotactile stimulus was 250 Hz and the auditory stimulus varied between 125 and 2000 Hz. In Experiment 2, the auditory stimulus was 250 Hz and the tactile stimulus varied between 50 and 400 Hz. In Experiment 3, the auditory and tactile stimuli were always equal in frequency and ranged from 50 to 400 Hz. The highest rates of detection for the combined-modality stimulus were obtained when stimulating frequencies in the two modalities were equal or closely spaced (and within the Pacinian range). Combined-modality detection for closely spaced frequencies was generally consistent with an algebraic sum model of perceptual integration; wider-frequency spacings were generally better fit by a Pythagorean sum model. Thus, perceptual integration of auditory and tactile stimuli at near-threshold levels appears to depend both on absolute frequency and relative frequency of stimulation within each modality. PMID:21117754
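The two integration models compared above make simple quantitative predictions for combined-modality sensitivity: the algebraic-sum model adds the unimodal indices linearly, while the Pythagorean-sum model combines them as orthogonal components. A minimal sketch, with d′-style indices assumed for illustration:

```python
import math

def algebraic_sum(d_a, d_t):
    """Algebraic-sum prediction: unimodal sensitivities add linearly."""
    return d_a + d_t

def pythagorean_sum(d_a, d_t):
    """Pythagorean-sum prediction: sensitivities combine orthogonally."""
    return math.hypot(d_a, d_t)
```

For equal unimodal sensitivities the algebraic sum predicts a combined index √2 times the Pythagorean sum, which is what lets the two models be separated empirically.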
Visual and Auditory Input in Second-Language Speech Processing
ERIC Educational Resources Information Center
Hardison, Debra M.
2010-01-01
The majority of studies in second-language (L2) speech processing have involved unimodal (i.e., auditory) input; however, in many instances, speech communication involves both visual and auditory sources of information. Some researchers have argued that multimodal speech is the primary mode of speech perception (e.g., Rosenblum 2005). Research on…
Improving left spatial neglect through music scale playing.
Bernardi, Nicolò Francesco; Cioffi, Maria Cristina; Ronchi, Roberta; Maravita, Angelo; Bricolo, Emanuela; Zigiotto, Luca; Perucca, Laura; Vallar, Giuseppe
2017-03-01
The study assessed whether the auditory reference provided by a music scale could improve spatial exploration of a standard musical instrument keyboard in right-brain-damaged patients with left spatial neglect. As performing music scales involves the production of predictable successive pitches, the expectation of the subsequent note may facilitate patients to explore a larger extension of space on the left, affected side during the production of music scales from right to left. Eleven right-brain-damaged stroke patients with left spatial neglect, 12 patients without neglect, and 12 age-matched healthy participants played descending scales on a music keyboard. In a counterbalanced design, the participants' exploratory performance was assessed while producing scales in three feedback conditions: with congruent sound, no sound, or random sound feedback provided by the keyboard. The number of keys played and the timing of key presses were recorded. Spatial exploration by patients with left neglect was superior with congruent sound feedback, compared to both the silence and random sound conditions. Both the congruent and incongruent sound conditions were associated with a greater deceleration in all groups. The frame provided by the music scale improves exploration of the left side of space, contralateral to the damaged right hemisphere, in patients with left neglect. Performing a scale with congruent sounds may, to some extent, trigger preserved auditory and spatial multisensory representations of successive sounds, thus influencing the time course of space scanning and ultimately resulting in a more extensive spatial exploration. These findings also offer new perspectives for the rehabilitation of the disorder. © 2015 The British Psychological Society.
Tanaka, T; Kojima, S; Takeda, H; Ino, S; Ifukube, T
2001-12-15
The maintenance of postural balance depends on effective and efficient feedback from various sensory inputs. The importance of auditory inputs in this respect is not, as yet, fully understood. The purpose of this study was to analyse how moving auditory stimuli could affect standing balance in healthy adults of different ages. The participants were 12 healthy volunteers, divided into two age categories: a young group (mean = 21.9 years) and an elderly group (mean = 68.9 years). Standing balance was evaluated with a force plate measuring body sway parameters, and toe pressure was measured using the F-scan Tactile Sensor System. The moving auditory stimulus consisted of a white-noise sound with binaural cues generated by the Beachtron Affordable 3D Audio system, moving from right to left or vice versa at the height of the participant's ears. Participants were asked to stand on the force plate in the Romberg position for 20 s with either eyes open or eyes closed, to analyse the effect of visual input, and with or without the auditory stimulation delivered through headphones. In addition, body sway was measured while standing on a normal surface (NS) or a soft surface (SS), with and without auditory stimulation, to analyse the effect of decreased tactile sensation in the toes and soles of the feet. In total, participants stood under eight conditions. The results showed that the lateral body sway of the elderly group was more influenced than that of the young group by the laterally moving auditory stimulation. The analysis of toe pressure indicated that all participants used their left feet more than their right feet to maintain balance. Moreover, the elderly tended to be stabilized mainly by use of their heels, whereas the young group was stabilized mainly by the toes of their feet. The results suggest that the elderly may need more appropriate tactile and auditory feedback than the young for maintaining and controlling their standing posture.
Cortical Representations of Speech in a Multitalker Auditory Scene.
Puvvada, Krishna C; Simon, Jonathan Z
2017-09-20
The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. 
We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory scene, with both attended and unattended speech streams represented with almost equal fidelity. We also show that higher-order auditory cortical areas, by contrast, represent an attended speech stream separately from, and with significantly higher fidelity than, unattended speech streams. Furthermore, the unattended background streams are represented as a single undivided background object rather than as distinct background objects. Copyright © 2017 the authors 0270-6474/17/379189-08$15.00/0.
Auditory-Motor Processing of Speech Sounds
Möttönen, Riikka; Dutton, Rebekah; Watkins, Kate E.
2013-01-01
The motor regions that control movements of the articulators activate during listening to speech and contribute to performance in demanding speech recognition and discrimination tasks. Whether the articulatory motor cortex modulates auditory processing of speech sounds is unknown. Here, we aimed to determine whether the articulatory motor cortex affects the auditory mechanisms underlying discrimination of speech sounds in the absence of demanding speech tasks. Using electroencephalography, we recorded responses to changes in sound sequences, while participants watched a silent video. We also disrupted the lip or the hand representation in left motor cortex using transcranial magnetic stimulation. Disruption of the lip representation suppressed responses to changes in speech sounds, but not piano tones. In contrast, disruption of the hand representation had no effect on responses to changes in speech sounds. These findings show that disruptions within, but not outside, the articulatory motor cortex impair automatic auditory discrimination of speech sounds. The findings provide evidence for the importance of auditory-motor processes in efficient neural analysis of speech sounds. PMID:22581846
Revisiting Arieti's “Listening Attitude” and Hallucinated Voices
Hoffman, Ralph E.
2010-01-01
Silvano Arieti proposed that auditory/verbal hallucinations (AVHs) are triggered by momentary states of heightened auditory attention that he identified as a “listening attitude.” Studies and clinical observations by our group support this view. Patients enrolled in our repetitive transcranial magnetic stimulation trials, if experiencing a significant curtailment of these hallucinations, often report an episodic sense that their voices are still occurring even if they no longer can be heard, suggesting episodic states of heightened auditory expectancy. Moreover, a functional magnetic resonance study reported by our group detected activation in the left insula prior to hallucination events. This finding is suggestive of activation in the same region detected in healthy subjects during “auditory search” in response to ambiguous sounds when anticipating meaningful speech. AVHs often are experienced with a deep emotional salience and may occur in the context of dramatic social isolation that together could reinforce heightened auditory expectancy. These findings and clinical observations suggest that Arieti's original formulation deserves further study. PMID:20363873
Patel, Tirth R; Shahin, Antoine J; Bhat, Jyoti; Welling, D Bradley; Moberly, Aaron C
2016-10-01
We describe a novel use of cortical auditory evoked potentials in the preoperative workup to determine ear candidacy for cochlear implantation. A 71-year-old male was evaluated who had a long-deafened right ear, had never worn a hearing aid in that ear, and relied heavily on use of a left-sided hearing aid. Electroencephalographic testing was performed using free field auditory stimulation of each ear independently with pure tones at 1000 and 2000 Hz at approximately 10 dB above pure-tone thresholds for each frequency and for each ear. Mature cortical potentials were identified through auditory stimulation of the long-deafened ear. The patient underwent successful implantation of that ear. He experienced progressively improving aided pure-tone thresholds and binaural speech recognition benefit (AzBio score of 74%). Findings suggest that use of cortical auditory evoked potentials may serve a preoperative role in ear selection prior to cochlear implantation. © The Author(s) 2016.
Mondino, Marine; Jardri, Renaud; Suaud-Chagny, Marie-Françoise; Saoud, Mohamed; Poulet, Emmanuel; Brunelin, Jérôme
2016-03-01
Auditory verbal hallucinations (AVH) in patients with schizophrenia are associated with abnormal hyperactivity in the left temporo-parietal junction (TPJ) and abnormal connectivity between frontal and temporal areas. Recent findings suggest that fronto-temporal transcranial Direct Current stimulation (tDCS) with the cathode placed over the left TPJ and the anode over the left prefrontal cortex can alleviate treatment-resistant AVH in patients with schizophrenia. However, brain correlates of the AVH reduction are unclear. Here, we investigated the effect of tDCS on the resting-state functional connectivity (rs-FC) of the left TPJ. Twenty-three patients with schizophrenia and treatment-resistant AVH were randomly allocated to receive 10 sessions of active (2 mA, 20 min) or sham tDCS (2 sessions/d for 5 d). We compared the rs-FC of the left TPJ between patients before and after they received active or sham tDCS. Relative to sham tDCS, active tDCS significantly reduced AVH as well as the negative symptoms. Active tDCS also reduced rs-FC of the left TPJ with the left anterior insula and the right inferior frontal gyrus and increased rs-FC of the left TPJ with the left angular gyrus, the left dorsolateral prefrontal cortex and the precuneus. The reduction of AVH severity was correlated with the reduction of the rs-FC between the left TPJ and the left anterior insula. These findings suggest that the reduction of AVH induced by tDCS is associated with a modulation of the rs-FC within an AVH-related brain network, including brain areas involved in inner speech production and monitoring. © The Author 2015. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center.
Fundamental deficits of auditory perception in Wernicke's aphasia.
Robson, Holly; Grube, Manon; Lambon Ralph, Matthew A; Griffiths, Timothy D; Sage, Karen
2013-01-01
This work investigates the nature of the comprehension impairment in Wernicke's aphasia (WA), by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. WA, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated with auditory linguistic comprehension impairments, functional-imaging indicates that these areas may not be specific to speech processing but part of a network for generic auditory analysis. We examined analysis of basic acoustic stimuli in WA participants (n = 10) using auditory stimuli reflective of theories of cortical auditory processing and of speech cues. Auditory spectral, temporal and spectro-temporal analysis was assessed using pure-tone frequency discrimination, frequency modulation (FM) detection and the detection of dynamic modulation (DM) in "moving ripple" stimuli. All tasks used criterion-free, adaptive measures of threshold to ensure reliable results at the individual level. Participants with WA showed normal frequency discrimination but significant impairments in FM and DM detection, relative to age- and hearing-matched controls at the group level (n = 10). At the individual level, there was considerable variation in performance, and thresholds for both FM and DM detection correlated significantly with auditory comprehension abilities in the WA participants. These results demonstrate the co-occurrence of a deficit in fundamental auditory processing of temporal and spectro-temporal non-verbal stimuli in WA, which may have a causal contribution to the auditory language comprehension impairment. Results are discussed in the context of traditional neuropsychology and current models of cortical auditory processing. Copyright © 2012 Elsevier Ltd. All rights reserved.
Cerebral responses to local and global auditory novelty under general anesthesia
Uhrig, Lynn; Janssen, David; Dehaene, Stanislas; Jarraya, Béchir
2017-01-01
Primate brains can detect a variety of unexpected deviations in auditory sequences. The local-global paradigm dissociates two hierarchical levels of auditory predictive coding by examining the brain responses to first-order (local) and second-order (global) sequence violations. Using the macaque model, we previously demonstrated that, in the awake state, local violations cause focal auditory responses while global violations activate a brain circuit comprising prefrontal, parietal and cingulate cortices. Here we used the same local-global auditory paradigm to clarify the encoding of these hierarchical auditory regularities in anesthetized monkeys and compared their brain responses to those obtained in the awake state as measured with fMRI. Both propofol, a GABAA-agonist, and ketamine, an NMDA-antagonist, left intact or even enhanced the cortical response to auditory inputs. The local effect vanished during propofol anesthesia and shifted spatially during ketamine anesthesia compared with wakefulness. Under increasing levels of propofol, we observed a progressive disorganization of the global effect in prefrontal, parietal and cingulate cortices, and its complete suppression under ketamine anesthesia. Anesthesia also suppressed thalamic activations to the global effect. These results suggest that anesthesia preserves initial auditory processing but disturbs both short-term and long-term auditory predictive coding mechanisms. The disorganization of auditory novelty processing under anesthesia relates to a loss of thalamic responses to novelty and to a disruption of higher-order functional cortical networks in parietal, prefrontal and cingulate cortices. PMID:27502046
Combined mirror visual and auditory feedback therapy for upper limb phantom pain: a case report
2011-01-01
Introduction Phantom limb sensation and phantom limb pain is a very common issue after amputations. In recent years there has been accumulating data implicating 'mirror visual feedback' or 'mirror therapy' as helpful in the treatment of phantom limb sensation and phantom limb pain. Case presentation We present the case of a 24-year-old Caucasian man, a left upper limb amputee, treated with mirror visual feedback combined with auditory feedback with improved pain relief. Conclusion This case may suggest that auditory feedback might enhance the effectiveness of mirror visual feedback and serve as a valuable addition to the complex multi-sensory processing of body perception in patients who are amputees. PMID:21272334
Nishimura, Akio; Yokosawa, Kazuhiko
2009-08-01
In the present article, we investigated the effects of the pitch height and the presented ear (laterality) of an auditory stimulus, irrelevant to the ongoing visual task, on horizontal response selection. Performance was better when the response and the stimulated ear spatially corresponded (Simon effect), and when the spatial-musical association of response codes (SMARC) correspondence was maintained, that is, a right (left) response with a high-pitched (low-pitched) tone. These findings reveal an automatic activation of spatially and musically associated responses by task-irrelevant auditory accessory stimuli. Pitch height is salient enough to influence horizontal responses despite the modality difference from the task target.
What is extinguished in auditory extinction?
Deouell, L Y; Soroker, N
2000-09-11
Extinction is a frequent sequel of brain damage, whereupon patients disregard (extinguish) a contralesional stimulus, and report only the more ipsilesional stimulus, of a pair of stimuli presented simultaneously. We investigated the possibility of a dissociation between the detection and the identification of extinguished phonemes. Fourteen right hemisphere damaged patients with severe auditory extinction were examined using a paradigm that separated the localization of stimuli and the identification of their phonetic content. Patients reported the identity of left-sided phonemes, while extinguishing them at the same time, in the traditional sense of the term. This dissociation suggests that auditory extinction is more about acknowledging the existence of a stimulus in the contralesional hemispace than about the actual processing of the stimulus.
Thakar, A; Deepak, K K; Kumar, S Shyam
2008-10-01
To describe a previously unreported syndrome of recurrent syncopal attacks provoked by light stimulation of the external auditory canal. A 13-year-old girl had been receiving treatment for presumed absence seizures, with inadequate treatment response. Imaging was normal. Careful history taking indicated that the recurrent syncopal attacks were precipitated by external auditory canal stimulation. Targeted autonomic function tests confirmed a hyperactive vagal response, with documented significant bradycardia and lightheadedness, provoked by mild stimulation of the posterior wall of the left external auditory canal. Abstinence from ear scratching led to complete alleviation of symptoms without any pharmacological treatment. Reflex syncope consequent to stimulation of the auricular branch of the vagus nerve is proposed as the pathophysiological mechanism for this previously undocumented syndrome.
Mismatch Negativity in Recent-Onset and Chronic Schizophrenia: A Current Source Density Analysis
Fulham, W. Ross; Michie, Patricia T.; Ward, Philip B.; Rasser, Paul E.; Todd, Juanita; Johnston, Patrick J.; Thompson, Paul M.; Schall, Ulrich
2014-01-01
Mismatch negativity (MMN) is a component of the event-related potential elicited by deviant auditory stimuli. It is presumed to index pre-attentive monitoring of changes in the auditory environment. MMN amplitude is smaller in groups of individuals with schizophrenia compared to healthy controls. We compared duration-deviant MMN in 16 recent-onset and 19 chronic schizophrenia patients versus age- and sex-matched controls. Reduced frontal MMN was found in both patient groups, involved reduced hemispheric asymmetry, and was correlated with Global Assessment of Functioning (GAF) and negative symptom ratings. A cortically-constrained LORETA analysis, incorporating anatomical data from each individual's MRI, was performed to generate a current source density model of the MMN response over time. This model suggested MMN generation within a temporal, parietal and frontal network, which was right hemisphere dominant only in controls. An exploratory analysis revealed reduced CSD in patients in superior and middle temporal cortex, inferior and superior parietal cortex, precuneus, anterior cingulate, and superior and middle frontal cortex. A region of interest (ROI) analysis was performed. For the early phase of the MMN, patients had reduced bilateral temporal and parietal response and no lateralisation in frontal ROIs. For late MMN, patients had reduced bilateral parietal response and no lateralisation in temporal ROIs. In patients, correlations revealed a link between GAF and the MMN response in parietal cortex. In controls, the frontal response onset was 17 ms later than the temporal and parietal response. In patients, onset latency of the MMN response was delayed in secondary, but not primary, auditory cortex. However amplitude reductions were observed in both primary and secondary auditory cortex. 
These latency delays may indicate relatively intact information processing upstream of the primary auditory cortex, but impaired primary auditory cortex or cortico-cortical or thalamo-cortical communication with higher auditory cortices as a core deficit in schizophrenia. PMID:24949859
Beitel, Ralph E.; Schreiner, Christoph E.; Leake, Patricia A.
2016-01-01
In profoundly deaf cats, behavioral training with intracochlear electric stimulation (ICES) can improve temporal processing in the primary auditory cortex (AI). To investigate whether similar effects are manifest in the auditory midbrain, ICES was initiated in neonatally deafened cats either during development after short durations of deafness (8 wk of age) or in adulthood after long durations of deafness (≥3.5 yr). All of these animals received behaviorally meaningless, “passive” ICES. Some animals also received behavioral training with ICES. Two long-deaf cats received no ICES prior to acute electrophysiological recording. After several months of passive ICES and behavioral training, animals were anesthetized, and neuronal responses to pulse trains of increasing rates were recorded in the central (ICC) and external (ICX) nuclei of the inferior colliculus. Neuronal temporal response patterns (repetition rate coding, minimum latencies, response precision) were compared with results from recordings made in the AI of the same animals (Beitel RE, Vollmer M, Raggio MW, Schreiner CE. J Neurophysiol 106: 944–959, 2011; Vollmer M, Beitel RE. J Neurophysiol 106: 2423–2436, 2011). Passive ICES in long-deaf cats remediated severely degraded temporal processing in the ICC and had no effects in the ICX. In contrast to observations in the AI, behaviorally relevant ICES had no effects on temporal processing in the ICC or ICX, with the single exception of shorter latencies in the ICC in short-deaf cats. The results suggest that, independent of deafness duration, passive stimulation and behavioral training differentially transform temporal processing in the auditory midbrain and cortex, and that the primary auditory cortex emerges as a pivotal site for behaviorally driven neuronal temporal plasticity in the deaf cat. NEW & NOTEWORTHY Behaviorally relevant vs.
passive electric stimulation of the auditory nerve differentially affects neuronal temporal processing in the central nucleus of the inferior colliculus (ICC) and the primary auditory cortex (AI) in profoundly short-deaf and long-deaf cats. Temporal plasticity in the ICC depends on a critical amount of electric stimulation, independent of its behavioral relevance. In contrast, the AI emerges as a pivotal site for behaviorally driven neuronal temporal plasticity in the deaf auditory system. PMID:27733594
Demonstrating the Potential for Dynamic Auditory Stimulation to Contribute to Motion Sickness
Keshavarz, Behrang; Hettinger, Lawrence J.; Kennedy, Robert S.; Campos, Jennifer L.
2014-01-01
Auditory cues can create the illusion of self-motion (vection) in the absence of visual or physical stimulation. The present study aimed to determine whether auditory cues alone can also elicit motion sickness and how auditory cues contribute to motion sickness when added to visual motion stimuli. Twenty participants were seated in front of a curved projection display and were exposed to a virtual scene that constantly rotated around the participant's vertical axis. The virtual scene contained either visual-only, auditory-only, or a combination of corresponding visual and auditory cues. All participants performed all three conditions in a counterbalanced order. Participants tilted their heads alternately towards the right or left shoulder in all conditions during stimulus exposure in order to create pseudo-Coriolis effects and to maximize the likelihood of motion sickness. Measurements of motion sickness (onset, severity), vection (latency, strength, duration), and postural steadiness (center of pressure) were recorded. Results showed that adding auditory cues to the visual stimuli did not, on average, affect motion sickness and postural steadiness, but it did reduce vection onset times and increase vection strength compared to pure visual or pure auditory stimulation. Eighteen of the 20 participants reported at least slight motion sickness in the two conditions including visual stimuli. More interestingly, six participants also reported slight motion sickness during pure auditory stimulation, and two of the six stopped the pure auditory test session due to motion sickness. The present study is the first to demonstrate that motion sickness may be caused by pure auditory stimulation, which we refer to as “auditorily induced motion sickness”. PMID:24983752
Inguinal ovary as a rare diagnostic sign of Mayer-Rokitansky-Küster-Hauser syndrome.
Demirel, Fatma; Kara, Ozlem; Esen, Ihsan
2012-01-01
Mayer-Rokitansky-Küster-Hauser (MRKH) syndrome is a rare syndrome characterized by complete or partial agenesis of the uterus and vagina, due to a congenital defect of the Müllerian duct. Affected individuals have a 46,XX karyotype and a normal female phenotype. MRKH syndrome may be isolated (type I MRKH syndrome) or associated with renal, cardiac, and skeletal anomalies, short stature, and auditory defects. The latter is defined as type II MRKH syndrome or the Müllerian duct aplasia/hypoplasia, renal agenesis/ectopy, and cervicothoracic somite dysplasia (MURCS) association. The majority of patients with MRKH syndrome present with primary amenorrhea. We report a case of type II MRKH syndrome referred by a pediatric surgeon for assessment of gonadal function. During an inguinal hernia operation, the left ovary had been observed in the hernia sac. Clinical and radiological evaluation of the patient showed absence of the uterus and left kidney, and a cervical hemivertebra. Based on these findings, the patient was diagnosed as having type II MRKH syndrome.
AUDITORY ASSOCIATIVE MEMORY AND REPRESENTATIONAL PLASTICITY IN THE PRIMARY AUDITORY CORTEX
Weinberger, Norman M.
2009-01-01
Historically, the primary auditory cortex has been largely ignored as a substrate of auditory memory, perhaps because studies of associative learning could not reveal the plasticity of receptive fields (RFs). The use of a unified experimental design, in which RFs are obtained before and after standard training (e.g., classical and instrumental conditioning) revealed associative representational plasticity, characterized by facilitation of responses to tonal conditioned stimuli (CSs) at the expense of other frequencies, producing CS-specific tuning shifts. Associative representational plasticity (ARP) possesses the major attributes of associative memory: it is highly specific, discriminative, rapidly acquired, consolidates over hours and days and can be retained indefinitely. The nucleus basalis cholinergic system is sufficient both for the induction of ARP and for the induction of specific auditory memory, including control of the amount of remembered acoustic details. Extant controversies regarding the form, function and neural substrates of ARP appear largely to reflect different assumptions, which are explicitly discussed. The view that the forms of plasticity are task-dependent is supported by ongoing studies in which auditory learning involves CS-specific decreases in threshold or bandwidth without affecting frequency tuning. Future research needs to focus on the factors that determine ARP and their functions in hearing and in auditory memory. PMID:17344002
An anatomical and functional topography of human auditory cortical areas
Moerel, Michelle; De Martino, Federico; Formisano, Elia
2014-01-01
While advances in magnetic resonance imaging (MRI) throughout the last decades have enabled the detailed anatomical and functional inspection of the human brain non-invasively, to date there is no consensus regarding the precise subdivision and topography of the areas forming the human auditory cortex. Here, we propose a topography of the human auditory areas based on insights on the anatomical and functional properties of human auditory areas as revealed by studies of cyto- and myelo-architecture and fMRI investigations at ultra-high magnetic field (7 Tesla). Importantly, we illustrate that—whereas a group-based approach to analyze functional (tonotopic) maps is appropriate to highlight the main tonotopic axis—the examination of tonotopic maps at single subject level is required to detail the topography of primary and non-primary areas that may be more variable across subjects. Furthermore, we show that considering multiple maps indicative of anatomical (i.e., myelination) as well as of functional properties (e.g., broadness of frequency tuning) is helpful in identifying auditory cortical areas in individual human brains. We propose and discuss a topography of areas that is consistent with old and recent anatomical post-mortem characterizations of the human auditory cortex and that may serve as a working model for neuroscience studies of auditory functions. PMID:25120426
Nir, Yuval; Vyazovskiy, Vladyslav V.; Cirelli, Chiara; Banks, Matthew I.; Tononi, Giulio
2015-01-01
Sleep entails a disconnection from the external environment. By and large, sensory stimuli do not trigger behavioral responses and are not consciously perceived as they usually are in wakefulness. Traditionally, sleep disconnection was ascribed to a thalamic “gate,” which would prevent signal propagation along ascending sensory pathways to primary cortical areas. Here, we compared single-unit and LFP responses in core auditory cortex as freely moving rats spontaneously switched between wakefulness and sleep states. Despite robust differences in baseline neuronal activity, both the selectivity and the magnitude of auditory-evoked responses were comparable across wakefulness, Nonrapid eye movement (NREM) and rapid eye movement (REM) sleep (pairwise differences <8% between states). The processing of deviant tones was also compared in sleep and wakefulness using an oddball paradigm. Robust stimulus-specific adaptation (SSA) was observed following the onset of repetitive tones, and the strength of SSA effects (13–20%) was comparable across vigilance states. Thus, responses in core auditory cortex are preserved across sleep states, suggesting that evoked activity in primary sensory cortices is driven by external physical stimuli with little modulation by vigilance state. We suggest that sensory disconnection during sleep occurs at a stage later than primary sensory areas. PMID:24323498
Hearing loss in older adults affects neural systems supporting speech comprehension.
Peelle, Jonathan E; Troiani, Vanessa; Grossman, Murray; Wingfield, Arthur
2011-08-31
Hearing loss is one of the most common complaints in adults over the age of 60 and a major contributor to difficulties in speech comprehension. To examine the effects of hearing ability on the neural processes supporting spoken language processing in humans, we used functional magnetic resonance imaging to monitor brain activity while older adults with age-normal hearing listened to sentences that varied in their linguistic demands. Individual differences in hearing ability predicted the degree of language-driven neural recruitment during auditory sentence comprehension in bilateral superior temporal gyri (including primary auditory cortex), thalamus, and brainstem. In a second experiment, we examined the relationship of hearing ability to cortical structural integrity using voxel-based morphometry, demonstrating a significant linear relationship between hearing ability and gray matter volume in primary auditory cortex. Together, these results suggest that even moderate declines in peripheral auditory acuity lead to a systematic downregulation of neural activity during the processing of higher-level aspects of speech, and may also contribute to loss of gray matter volume in primary auditory cortex. More generally, these findings support a resource-allocation framework in which individual differences in sensory ability help define the degree to which brain regions are recruited in service of a particular task.
Cortico‐cortical connectivity within ferret auditory cortex
Bajo, Victoria M.; Nodal, Fernando R.; King, Andrew J.
2015-01-01
Despite numerous studies of auditory cortical processing in the ferret (Mustela putorius), very little is known about the connections between the different regions of the auditory cortex that have been characterized cytoarchitectonically and physiologically. We examined the distribution of retrograde and anterograde labeling after injecting tracers into one or more regions of ferret auditory cortex. Injections of different tracers at frequency‐matched locations in the core areas, the primary auditory cortex (A1) and anterior auditory field (AAF), of the same animal revealed the presence of reciprocal connections with overlapping projections to and from discrete regions within the posterior pseudosylvian and suprasylvian fields (PPF and PSF), suggesting that these connections are frequency specific. In contrast, projections from the primary areas to the anterior dorsal field (ADF) on the anterior ectosylvian gyrus were scattered and non‐overlapping, consistent with the non‐tonotopic organization of this field. The relative strength of the projections originating in each of the primary fields differed, with A1 predominantly targeting the posterior bank fields PPF and PSF, which in turn project to the ventral posterior field, whereas AAF projects more heavily to the ADF, which then projects to the anteroventral field and the pseudosylvian sulcal cortex. These findings suggest that parallel anterior and posterior processing networks may exist, although the connections between different areas often overlap and interactions were present at all levels. J. Comp. Neurol. 523:2187–2210, 2015. © 2015 Wiley Periodicals, Inc. PMID:25845831
Primary and multisensory cortical activity is correlated with audiovisual percepts.
Benoit, Margo McKenna; Raij, Tommi; Lin, Fa-Hsuan; Jääskeläinen, Iiro P; Stufflebeam, Steven
2010-04-01
Incongruent auditory and visual stimuli can elicit audiovisual illusions such as the McGurk effect where visual /ka/ and auditory /pa/ fuse into another percept such as/ta/. In the present study, human brain activity was measured with adaptation functional magnetic resonance imaging to investigate which brain areas support such audiovisual illusions. Subjects viewed trains of four movies beginning with three congruent /pa/ stimuli to induce adaptation. The fourth stimulus could be (i) another congruent /pa/, (ii) a congruent /ka/, (iii) an incongruent stimulus that evokes the McGurk effect in susceptible individuals (lips /ka/ voice /pa/), or (iv) the converse combination that does not cause the McGurk effect (lips /pa/ voice/ ka/). This paradigm was predicted to show increased release from adaptation (i.e. stronger brain activation) when the fourth movie and the related percept was increasingly different from the three previous movies. A stimulus change in either the auditory or the visual stimulus from /pa/ to /ka/ (iii, iv) produced within-modality and cross-modal responses in primary auditory and visual areas. A greater release from adaptation was observed for incongruent non-McGurk (iv) compared to incongruent McGurk (iii) trials. A network including the primary auditory and visual cortices, nonprimary auditory cortex, and several multisensory areas (superior temporal sulcus, intraparietal sulcus, insula, and pre-central cortex) showed a correlation between perceiving the McGurk effect and the fMRI signal, suggesting that these areas support the audiovisual illusion. Copyright 2009 Wiley-Liss, Inc.
ERIC Educational Resources Information Center
Hollander, Cara; de Andrade, Victor Manuel
2014-01-01
Schools located near to airports are exposed to high levels of noise which can cause cognitive, health, and hearing problems. Therefore, this study sought to explore whether this noise may cause auditory language processing (ALP) problems in primary school learners. Sixty-one children attending schools exposed to high levels of noise were matched…
DOE Office of Scientific and Technical Information (OSTI.GOV)
van Lieshout, P.; Renier, W.; Eling, P.
1990-02-01
This case study concerns an 18-year-old bilingual girl who suffered a radiation lesion in the left (dominant) thalamic and temporal region when she was 4 years old. Language and memory assessment revealed deficits in auditory short-term memory, auditory word comprehension, nonword repetition, syntactic processing, word fluency, and confrontation naming tasks. Both languages (English and Dutch) were found to be affected in a similar manner, despite the fact that one language (English) was acquired before and the other (Dutch) after the period of lesion onset. Most of the deficits appear to be related to verbal (short-term) memory dysfunction. Several hypotheses of subcortical involvement in memory processes are discussed with reference to existing theories in this area.
Identifying auditory attention with ear-EEG: cEEGrid versus high-density cap-EEG comparison
NASA Astrophysics Data System (ADS)
Bleichner, Martin G.; Mirkovic, Bojana; Debener, Stefan
2016-12-01
Objective. This study presents a direct comparison of a classical EEG cap setup with a new around-the-ear electrode array (cEEGrid) to gain a better understanding of the potential of ear-centered EEG. Approach. Concurrent EEG was recorded from a classical scalp EEG cap and two cEEGrids that were placed around the left and the right ear. Twenty participants performed a spatial auditory attention task in which three sound streams were presented simultaneously. The sound streams were three seconds long and differed in the direction of origin (front, left, right) and the number of beats (3, 4, 5 respectively), as well as the timbre and pitch. The participants had to attend to either the left or the right sound stream. Main results. We found clear attention-modulated ERP effects reflecting the attended sound stream for both electrode setups, which agreed in morphology and effect size. A single-trial template matching classification showed that the direction of attention could be decoded significantly above chance (50%) for at least 16 out of 20 participants for both systems. The comparably high classification results of the single-trial analysis underline the quality of the signal recorded with the cEEGrids. Significance. These findings are further evidence for the feasibility of around-the-ear EEG recordings and demonstrate that well-described ERPs can be measured. We conclude that concealed behind-the-ear EEG recordings can be an alternative to classical cap EEG acquisition for auditory attention monitoring.
Olichney, John M; Riggins, Brock R; Hillert, Dieter G; Nowacki, Ralph; Tecoma, Evelyn; Kutas, Marta; Iragui, Vicente J
2002-07-01
We studied 14 patients with well-characterized refractory temporal lobe epilepsy (TLE), 7 with right temporal lobe epilepsy (RTE) and 7 with left temporal lobe epilepsy (LTE), in a word repetition ERP experiment. Much prior literature supports the view that patients with left TLE are more likely to develop verbal memory deficits, often attributable to left hippocampal sclerosis. Our main objectives were to test whether abnormalities of the N400 or Late Positive Component (LPC, P600) were associated with a left temporal seizure focus, or left temporal lobe dysfunction. A minimum of 19 channels of EEG/EOG data were collected while subjects performed a semantic categorization task. Auditory category statements were followed by a visual target word; target words were 50% "congruous" (category exemplars) and 50% "incongruous" (non-category exemplars) with the preceding semantic context. These auditory-visual pairings were repeated pseudo-randomly at intervals ranging from approximately 10 to 140 seconds later. The ERP data were submitted to repeated-measures ANOVAs, which showed that the RTE group had generally normal effects of word repetition on the LPC and the N400. Also, the N400 component was larger to incongruous than congruous new words, as is normally the case. In contrast, the LTE group did not show statistically significant effects of either word repetition or congruity on their ERPs (N400 or LPC), suggesting that this ERP semantic categorization paradigm is sensitive to left temporal lobe dysfunction. Further studies are ongoing to determine whether these ERP abnormalities predict hippocampal sclerosis on histopathology, or outcome after anterior temporal lobectomy.
A Generative Model of Speech Production in Broca’s and Wernicke’s Areas
Price, Cathy J.; Crinion, Jenny T.; MacSweeney, Mairéad
2011-01-01
Speech production involves the generation of an auditory signal from the articulators and vocal tract. When the intended auditory signal does not match the produced sounds, subsequent articulatory commands can be adjusted to reduce the difference between the intended and produced sounds. This requires an internal model of the intended speech output that can be compared to the produced speech. The aim of this functional imaging study was to identify brain activation related to the internal model of speech production after activation related to vocalization, auditory feedback, and movement in the articulators had been controlled. There were four conditions: silent articulation of speech, non-speech mouth movements, finger tapping, and visual fixation. In the speech conditions, participants produced the mouth movements associated with the words “one” and “three.” We eliminated auditory feedback from the spoken output by instructing participants to articulate these words without producing any sound. The non-speech mouth movement conditions involved lip pursing and tongue protrusions to control for movement in the articulators. The main difference between our speech and non-speech mouth movement conditions is that prior experience producing speech sounds leads to the automatic and covert generation of auditory and phonological associations that may play a role in predicting auditory feedback. We found that, relative to non-speech mouth movements, silent speech activated Broca’s area in the left dorsal pars opercularis and Wernicke’s area in the left posterior superior temporal sulcus. We discuss these results in the context of a generative model of speech production and propose that Broca’s and Wernicke’s areas may be involved in predicting the speech output that follows articulation. These predictions could provide a mechanism by which rapid movement of the articulators is precisely matched to the intended speech outputs during future articulations. PMID:21954392
Melo, Ândrea de; Mezzomo, Carolina Lisbôa; Garcia, Michele Vargas; Biaggio, Eliara Pinto Vieira
2018-01-01
Introduction Computerized auditory training (CAT) has been building a good reputation in the stimulation of auditory abilities in cases of auditory processing disorder (APD). Objective To measure the effects of CAT in students with APD, with typical or atypical phonological acquisition, through electrophysiological and subjective measures, correlating them pre- and post-therapy. Methods The sample for this study includes 14 children with APD, subdivided into children with APD and typical phonological acquisition (G1), and children with APD and atypical phonological acquisition (G2). Phonological evaluation of children (PEC), long latency auditory evoked potential (LLAEP) and the scale of auditory behaviors (SAB) were conducted to help with the composition of the groups and with the therapeutic intervention. The therapeutic intervention was performed using the software Escuta Ativa (CTS Informática, Pato Branco, Brazil) in 12 sessions of 30 minutes, twice a week. For data analysis, the appropriate statistical tests were used. Results A decrease in the latency of the negative wave N2 and the positive wave P3 in the left ear in G1, and a decrease of P2 in the right ear in G2, were observed. In the analysis comparing the pre- and post-CAT groups, there was a significant difference in P1 latency in the left ear and P2 latency in the right ear, pre-intervention. Furthermore, eight children had an absence of the P3 wave pre-CAT, but after the intervention, all of them presented the P3 wave. There were changes in the SAB score pre- and post-CAT in both groups. A correlation between the scale and some LLAEP components was observed. Conclusion The CAT produced an electrophysiological modification, reflecting the effects of neural plasticity after CAT. The SAB proved to be useful in measuring the therapeutic effects of the intervention. Moreover, there were behavioral changes in the SAB (higher scores) and correlation with LLAEP.
Fitch, R. Holly; Alexander, Michelle L.; Threlkeld, Steven W.
2013-01-01
Most researchers in the field of neural plasticity are familiar with the “Kennard Principle,” which purports a positive relationship between age at brain injury and severity of subsequent deficits (plateauing in adulthood). As an example, a child with left hemispherectomy can recover seemingly normal language, while an adult with focal injury to sub-regions of left temporal and/or frontal cortex can suffer dramatic and permanent language loss. Here we present data regarding the impact of early brain injury in rat models as a function of type and timing, measuring long-term behavioral outcomes via auditory discrimination tasks varying in temporal demand. These tasks were created to model (in rodents) aspects of human sensory processing that may correlate—both developmentally and functionally—with typical and atypical language. We found that bilateral focal lesions to the cortical plate in rats during active neuronal migration led to worse auditory outcomes than comparable lesions induced after cortical migration was complete. Conversely, unilateral hypoxic-ischemic (HI) injuries (similar to those seen in premature infants and term infants with birth complications) led to permanent auditory processing deficits when induced at a neurodevelopmental point comparable to human “term,” but only transient deficits (undetectable in adulthood) when induced in a “preterm” window. Convergent evidence suggests that regardless of when or how disruption of early neural development occurs, the consequences may be particularly deleterious to rapid auditory processing (RAP) outcomes when they trigger developmental alterations that extend into subcortical structures (i.e., lower sensory processing stations). 
Collective findings hold implications for the study of behavioral outcomes following early brain injury as well as genetic/environmental disruption, and are relevant to our understanding of the neurologic risk factors underlying developmental language disability in human populations. PMID:24155699
D’Angiulli, Amedeo; Griffiths, Gordon; Marmolejo-Ramos, Fernando
2015-01-01
The neural correlates of visualization underlying word comprehension were examined in preschool children. On each trial, a concrete or abstract word was delivered binaurally (part 1: post-auditory visualization), followed by a four-picture array (a target plus three distractors; part 2: matching visualization). Children were to select the picture matching the word they heard in part 1. Event-related potentials (ERPs) locked to each stimulus presentation and task interval were averaged over sets of trials of increasing word abstractness. ERP time-course during both parts of the task showed that early activity (i.e., <300 ms) was predominant in response to concrete words, while activity in response to abstract words became evident only at intermediate (i.e., 300–699 ms) and late (i.e., 700–1000 ms) ERP intervals. Specifically, ERP topography showed that while early activity during post-auditory visualization was linked to left temporo-parietal areas for concrete words, early activity during matching visualization occurred mostly in occipito-parietal areas for concrete words, but more anteriorly in centro-parietal areas for abstract words. In intermediate ERPs, post-auditory visualization coincided with parieto-occipital and parieto-frontal activity in response to both concrete and abstract words, while in matching visualization a parieto-central activity was common to both types of words. In the late ERPs for both types of words, the post-auditory visualization involved right-hemispheric activity following a “post-anterior” pathway sequence: occipital, parietal, and temporal areas; conversely, matching visualization involved left-hemispheric activity following an “ant-posterior” pathway sequence: frontal, temporal, parietal, and occipital areas. 
These results suggest that, similarly, for concrete and abstract words, meaning in young children depends on variably complex visualization processes integrating visuo-auditory experiences and supramodal embodying representations. PMID:26175697
Abnormal brain function in neuromyelitis optica: A fMRI investigation of mPASAT.
Wang, Fei; Liu, Yaou; Li, Jianjun; Sondag, Matthew; Law, Meng; Zee, Chi-Shing; Dong, Huiqing; Li, Kuncheng
2017-10-01
Cognitive impairment in patients with neuromyelitis optica (NMO) is debated. The present study examined patterns of brain activation in NMO patients during a pair of task-related fMRI tasks. We studied 20 patients with NMO and 20 control subjects matched for age, gender, education and handedness. All patients with NMO met the 2006 Wingerchuk diagnostic criteria. The fMRI paradigm included an auditory attention monitoring task and a modified version of the Paced Auditory Serial Addition Task (mPASAT). Both tasks were temporally and spatially balanced, with the exception of task difficulty. In the mPASAT, activation regions in control subjects included bilateral superior temporal gyri (BA22), left inferior frontal gyrus (BA45), bilateral inferior parietal lobule (BA7), left cingulate gyrus (BA32), left insula (BA13), and cerebellum. Activation regions in NMO patients included bilateral superior temporal gyri (BA22), left inferior frontal gyrus (BA9), right cingulate gyrus (BA32), right inferior parietal gyrus (BA40), left insula (BA13) and cerebellum. Some dispersed cognition-related regions showed greater activation in the patients. The present study showed altered cerebral activation during the mPASAT in patients with NMO relative to healthy controls. These results may provide further evidence for brain plasticity in patients with NMO. Copyright © 2017 Elsevier B.V. All rights reserved.
Extinction of auditory stimuli in hemineglect: Space versus ear.
Spierer, Lucas; Meuli, Reto; Clarke, Stephanie
2007-02-01
Unilateral extinction of auditory stimuli, a key feature of the neglect syndrome, was investigated in 15 patients with right (11), left (3) or bilateral (1) hemispheric lesions using a verbal dichotic condition, in which each ear simultaneously received one word, and an interaural-time-difference (ITD) diotic condition, in which both ears received both words lateralised by means of ITD. Additional investigations included sound localisation, visuo-spatial attention and general cognitive status. Five patients presented a significant asymmetry in the ITD diotic test, due to a decrease of left hemispace reporting, but no asymmetry was found in dichotic listening. Six other patients presented a significant asymmetry in the dichotic test due to a significant decrease of left or right ear reporting, but no asymmetry in diotic listening. Ten of the above patients presented mild to severe deficits in sound localisation, and eight showed signs of visuo-spatial neglect (three with selective asymmetry in the diotic and five in the dichotic task). Four other patients presented a significant asymmetry in both the diotic and dichotic listening tasks. Three of them presented moderate deficits in localisation and all four moderate visuo-spatial neglect. Thus, extinction for left ear and left hemispace can doubly dissociate, suggesting distinct underlying neural processes. Furthermore, the co-occurrence with sound localisation disturbance and with visuo-spatial hemineglect speaks in favour of the involvement of multisensory attentional representations.
Cogné, Mélanie; Violleau, Marie-Hélène; Klinger, Evelyne; Joseph, Pierre-Alain
2018-01-31
Topographical disorientation is frequent among patients after a stroke and can be well explored with virtual environments (VEs). VEs also allow for the addition of stimuli. A previous study did not find any effect of non-contextual auditory stimuli on navigational performance in the virtual action planning-supermarket (VAP-S), which simulates a medium-sized 3D supermarket. However, the perceptual or cognitive load of the sounds used was not high. We investigated how non-contextual auditory stimuli with high load affect navigational performance in the VAP-S for patients who have had a stroke, and whether this performance correlates with dysexecutive disorders. Four kinds of stimuli were considered: sounds from living beings, sounds from supermarket objects, beeping sounds and names of other products that were not available in the VAP-S. The condition without auditory stimuli was the control. The Groupe de réflexion pour l'évaluation des fonctions exécutives (GREFEX) battery was used to evaluate executive functions of patients. The study included 40 patients who had had a stroke (n=22 right-hemisphere and n=18 left-hemisphere stroke). Patients' navigational performance was decreased under the 4 conditions with non-contextual auditory stimuli (P<0.05), especially for those with dysexecutive disorders. For the 5 conditions, the lower the performance, the more GREFEX tests were failed. Patients felt significantly disadvantaged by the non-contextual sounds (sounds from living beings, sounds from supermarket objects and names of other products) as compared with beeping sounds (P<0.01). Patients' verbal recall of the collected objects was significantly lower under the condition with names of other products (P<0.001). Left and right brain-damaged patients did not differ in navigational performance in the VAP-S under the 5 auditory conditions. 
These non-contextual auditory stimuli could be used in neurorehabilitation paradigms to train patients with dysexecutive disorders to inhibit disruptive stimuli. Copyright © 2018 Elsevier Masson SAS. All rights reserved.
Díez, Álvaro; Ranlund, Siri; Pinotsis, Dimitris; Calafato, Stella; Shaikh, Madiha; Hall, Mei-Hua; Walshe, Muriel; Nevado, Ángel; Friston, Karl J; Adams, Rick A; Bramon, Elvira
2017-06-01
The "dysconnection hypothesis" of psychosis suggests that a disruption of functional integration underlies cognitive deficits and clinical symptoms. Impairments in the P300 potential are well documented in psychosis. Intrinsic (self-)connectivity in a frontoparietal cortical hierarchy during a P300 experiment was investigated. Dynamic Causal Modeling was used to estimate how evoked activity results from the dynamics of coupled neural populations and how neural coupling changes with the experimental factors. Twenty-four patients with psychotic disorder, twenty-four unaffected relatives, and twenty-five controls underwent EEG recordings during an auditory oddball paradigm. Sixteen frontoparietal network models (including primary auditory, superior parietal, and superior frontal sources) were analyzed, and an optimal model of neural coupling, explaining diagnosis and genetic risk effects as well as their interactions with task condition, was identified. The winning model included changes in connectivity at all three hierarchical levels. Patients showed decreased self-inhibition (that is, increased cortical excitability) in left superior frontal gyrus across task conditions, compared with unaffected participants. Relatives had similar increases in excitability in left superior frontal and right superior parietal sources, and a reversal of the normal synaptic gain changes in response to targets relative to standard tones. It was confirmed that both subjects with psychotic disorder and their relatives show a context-independent loss of synaptic gain control at the highest hierarchy levels. The relatives also showed abnormal gain modulation responses to task-relevant stimuli. These may be caused by NMDA-receptor and/or GABAergic pathologies that change the excitability of superficial pyramidal cells and may be a potential biological marker for psychosis. Hum Brain Mapp 38:3262-3276, 2017. © 2017 Wiley Periodicals, Inc.
Lee, Seung-Hwan; Wynn, Jonathan K; Green, Michael F; Kim, Hyun; Lee, Kang-Joon; Nam, Min; Park, Joong-Kyu; Chung, Young-Cho
2006-04-01
Electrophysiological studies have demonstrated gamma and beta frequency oscillations in response to auditory stimuli. The purpose of this study was to test whether auditory hallucinations (AH) in schizophrenia patients reflect abnormalities in gamma and beta frequency oscillations and to investigate source generators of these abnormalities. This theory was tested using quantitative electroencephalography (qEEG) and low-resolution electromagnetic tomography (LORETA) source imaging. Twenty-five schizophrenia patients with treatment refractory AH, lasting for at least 2 years, and 23 schizophrenia patients with non-AH (N-AH) in the past 2 years were recruited for the study. Spectral analysis of the qEEG and source imaging of frequency bands of artifact-free 30 s epochs were examined during rest. AH patients showed significantly increased beta 1 and beta 2 frequency amplitude compared with N-AH patients. Gamma and beta (2 and 3) frequencies were significantly correlated in AH but not in N-AH patients. Source imaging revealed significantly increased beta (1 and 2) activity in the left inferior parietal lobule and the left medial frontal gyrus in AH versus N-AH patients. These results imply that AH is reflecting increased beta frequency oscillations with neural generators localized in speech-related areas.
Intelligibility of speech in a virtual 3-D environment.
MacDonald, Justin A; Balakrishnan, J D; Orosz, Michael D; Karplus, Walter J
2002-01-01
In a simulated air traffic control task, improvement in the detection of auditory warnings when using virtual 3-D audio depended on the spatial configuration of the sounds. Performance improved substantially when two of four sources were placed to the left and the remaining two were placed to the right of the participant. Surprisingly, little or no benefits were observed for configurations involving the elevation or transverse (front/back) dimensions of virtual space, suggesting that position on the interaural (left/right) axis is the crucial factor to consider in auditory display design. The relative importance of interaural spacing effects was corroborated in a second, free-field (real space) experiment. Two additional experiments showed that (a) positioning signals to the side of the listener is superior to placing them in front even when two sounds are presented in the same location, and (b) the optimal distance on the interaural axis varies with the amplitude of the sounds. These results are well predicted by the behavior of an ideal observer under the different display conditions. This suggests that guidelines for auditory display design that allow for effective perception of speech information can be developed from an analysis of the physical sound patterns.
Modality Specific Cerebro-Cerebellar Activations in Verbal Working Memory: An fMRI Study
Kirschen, Matthew P.; Chen, S. H. Annabel; Desmond, John E.
2010-01-01
Verbal working memory (VWM) engages frontal and temporal/parietal circuits subserving the phonological loop, as well as superior and inferior cerebellar regions which have projections from these neocortical areas. Different cerebro-cerebellar circuits may be engaged for integrating aurally- and visually-presented information for VWM. The present fMRI study investigated load (2, 4, or 6 letters) and modality (auditory and visual) dependent cerebro-cerebellar VWM activation using a Sternberg task. FMRI revealed modality-independent activations in left frontal (BA 6/9/44), insular, cingulate (BA 32), and bilateral inferior parietal/supramarginal (BA 40) regions, as well as in bilateral superior (HVI) and right inferior (HVIII) cerebellar regions. Visual presentation evoked prominent activations in right superior (HVI/CrusI) cerebellum, bilateral occipital (BA19) and left parietal (BA7/40) cortex while auditory presentation showed robust activations predominantly in bilateral temporal regions (BA21/22). In the cerebellum, we noted a visual to auditory emphasis of function progressing from superior to inferior and from lateral to medial regions. These results extend our previous findings of fMRI activation in cerebro-cerebellar networks during VWM, and demonstrate both modality dependent commonalities and differences in activations with increasing memory load. PMID:20714061
Structural changes of the corpus callosum in tinnitus
Diesch, Eugen; Schummer, Verena; Kramer, Martin; Rupp, Andre
2012-01-01
Objectives: In tinnitus, several brain regions seem to be structurally altered, including the medial partition of Heschl's gyrus (mHG), the site of the primary auditory cortex. The mHG is smaller in tinnitus patients than in healthy controls. The corpus callosum (CC) is the main interhemispheric commissure of the brain connecting the auditory areas of the left and the right hemisphere. Here, we investigate whether tinnitus status is associated with CC volume. Methods: The midsagittal cross-sectional area of the CC was examined in tinnitus patients and healthy controls in which an examination of the mHG had been carried out earlier. The CC was extracted and segmented into subregions which were defined according to the most common CC morphometry schemes introduced by Witelson (1989) and Hofer and Frahm (2006). Results: For both CC segmentation schemes, the CC posterior midbody was smaller in male patients than in male healthy controls, and the isthmus, the anterior midbody, and the genu were larger in female patients than in female controls. With CC size normalized relative to mHG volume, the normalized CC splenium was larger in male patients than male controls, and the normalized CC splenium, the isthmus and the genu were larger in female patients than female controls. Normalized CC segment size expresses callosal interconnectivity relative to auditory cortex volume. Conclusion: It may be argued that the predominant function of the CC is excitatory. The stronger callosal interconnectivity in tinnitus patients, compared to healthy controls, may facilitate the emergence and maintenance of a positive feedback loop between tinnitus generators located in the two hemispheres. PMID:22470322
Chirathivat, Napim; Raja, Sahitya C; Gobes, Sharon M H
2015-06-22
Many aspects of song learning in songbirds resemble characteristics of speech acquisition in humans. Genetic, anatomical and behavioural parallels have most recently been extended with demonstrated similarities in hemispheric dominance between humans and songbirds: the avian higher order auditory cortex is left-lateralized for processing song memories in juvenile zebra finches that already have formed a memory of their fathers' song, just like Wernicke's area in the left hemisphere of the human brain is dominant for speech perception. However, it is unclear if hemispheric specialization is due to pre-existing functional asymmetry or the result of learning itself. Here we show that in juvenile male and female zebra finches that had never heard an adult song before, neuronal activation after initial exposure to a conspecific song is bilateral. Thus, like in humans, hemispheric dominance develops with vocal proficiency. A left-lateralized functional system that develops through auditory-vocal learning may be an evolutionary adaptation that could increase the efficiency of transferring information within one hemisphere, benefiting the production and perception of learned communication signals.
Absence of auditory 'global interference' in autism.
Foxton, Jessica M; Stewart, Mary E; Barnard, Louise; Rodgers, Jacqui; Young, Allan H; O'Brien, Gregory; Griffiths, Timothy D
2003-12-01
There has been considerable recent interest in the cognitive style of individuals with Autism Spectrum Disorder (ASD). One theory, that of weak central coherence, concerns an inability to combine stimulus details into a coherent whole. Here we test this theory in the case of sound patterns, using a new definition of the details (local structure) and the coherent whole (global structure). Thirteen individuals with a diagnosis of autism or Asperger's syndrome and 15 control participants were administered auditory tests, where they were required to match local pitch direction changes between two auditory sequences. When the other local features of the sequence pairs were altered (the actual pitches and relative time points of pitch direction change), the control participants obtained lower scores compared with when these details were left unchanged. This can be attributed to interference from the global structure, defined as the combination of the local auditory details. In contrast, the participants with ASD did not obtain lower scores in the presence of such mismatches. This was attributed to the absence of interference from an auditory coherent whole. The results are consistent with the presence of abnormal interactions between local and global auditory perception in ASD.
Hao, Qiao; Ora, Hiroki; Ogawa, Ken-Ichiro; Ogata, Taiki; Miyake, Yoshihiro
2016-09-13
The simultaneous perception of multimodal sensory information has a crucial role for effective reactions to the external environment. Voluntary movements are known to occasionally affect simultaneous perception of auditory and tactile stimuli presented to the moving body part. However, little is known about spatial limits on the effect of voluntary movements on simultaneous perception, especially when tactile stimuli are presented to a non-moving body part. We examined the effect of voluntary movement on the simultaneous perception of auditory and tactile stimuli presented to the non-moving body part. We considered the possible mechanism using a temporal order judgement task under three experimental conditions: voluntary movement, where participants voluntarily moved their right index finger and judged the temporal order of auditory and tactile stimuli presented to their non-moving left index finger; passive movement; and no movement. During voluntary movement, the auditory stimulus needed to be presented before the tactile stimulus so that they were perceived as occurring simultaneously. This subjective simultaneity differed significantly from the passive movement and no movement conditions. This finding indicates that the effect of voluntary movement on simultaneous perception of auditory and tactile stimuli extends to the non-moving body part.
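The study's temporal order judgement analysis hinges on the point of subjective simultaneity (PSS): the stimulus onset asynchrony at which both orders are reported equally often. A minimal sketch with hypothetical response proportions (not the study's data) illustrates the estimate; a full analysis would fit a psychometric function such as a cumulative Gaussian rather than interpolate.

```python
import numpy as np

# Hypothetical temporal-order-judgement data: stimulus onset asynchrony
# (SOA, ms; negative = auditory leads) against the proportion of
# "tactile first" responses. Values are illustrative only.
soa_ms = np.array([-120.0, -80.0, -40.0, 0.0, 40.0, 80.0, 120.0])
p_tactile_first = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.97])

def point_of_subjective_simultaneity(soa, p):
    """SOA at which both orders are reported equally often (p = 0.5),
    found by linear interpolation between the bracketing points."""
    return float(np.interp(0.5, p, soa))

pss = point_of_subjective_simultaneity(soa_ms, p_tactile_first)
# A negative PSS means the auditory stimulus had to lead the tactile one
# to be perceived as simultaneous, the pattern the study reports during
# voluntary movement.
```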
Auditory Middle Latency Response and Phonological Awareness in Students with Learning Disabilities
Romero, Ana Carla Leite; Funayama, Carolina Araújo Rodrigues; Capellini, Simone Aparecida; Frizzo, Ana Claudia Figueiredo
2015-01-01
Introduction: Behavioral tests of auditory processing have been applied in schools and highlight the association between phonological awareness abilities and auditory processing, confirming that low performance on phonological awareness tests may be due to low performance on auditory processing tests. Objective: To characterize the auditory middle latency response and the phonological awareness tests and to investigate correlations between responses in a group of children with learning disorders. Methods: The study included 25 students with learning disabilities. Phonological awareness and the auditory middle latency response were tested with electrodes placed on the left and right hemispheres. The correlation between the measurements was assessed using the Spearman rank correlation coefficient. Results: There is some correlation between the tests, especially between the Pa component and syllabic awareness, where a moderate negative correlation is observed. Conclusion: In this study, when phonological awareness subtests were performed, specifically phonemic awareness, the students showed a low score for the age group, although on the objective examination, prolonged Pa latency in the contralateral pathway was observed. A weak to moderate negative correlation for Pa wave latency was observed, as was a weak positive correlation for Na-Pa amplitude. PMID:26491479
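The Spearman rank correlation the study reports is simply the Pearson correlation of the ranks. A self-contained sketch with hypothetical values (longer Pa latencies paired with lower syllabic-awareness scores, mirroring the reported negative correlation; the numbers are invented for illustration):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks
    (no tie correction; illustrative only)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return float(np.corrcoef(rx, ry)[0, 1])

# Hypothetical per-child measurements, not the study's data.
pa_latency_ms = np.array([28.0, 30.5, 33.0, 35.5, 38.0, 41.0])
syllabic_score = np.array([19.0, 17.0, 18.0, 14.0, 12.0, 10.0])

rho = spearman_rho(pa_latency_ms, syllabic_score)  # negative correlation
```

In practice `scipy.stats.spearmanr` would be used, since it handles ties and returns a p-value as well.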
Scott, Brian H; Saleem, Kadharbatcha S; Kikuchi, Yukiko; Fukushima, Makoto; Mishkin, Mortimer; Saunders, Richard C
2017-11-01
In the primate auditory cortex, information flows serially in the mediolateral dimension from core, to belt, to parabelt. In the caudorostral dimension, stepwise serial projections convey information through the primary, rostral, and rostrotemporal (AI, R, and RT) core areas on the supratemporal plane, continuing to the rostrotemporal polar area (RTp) and adjacent auditory-related areas of the rostral superior temporal gyrus (STGr) and temporal pole. In addition to this cascade of corticocortical connections, the auditory cortex receives parallel thalamocortical projections from the medial geniculate nucleus (MGN). Previous studies have examined the projections from MGN to auditory cortex, but most have focused on the caudal core areas AI and R. In this study, we investigated the full extent of connections between MGN and AI, R, RT, RTp, and STGr using retrograde and anterograde anatomical tracers. Both AI and R received nearly 90% of their thalamic inputs from the ventral subdivision of the MGN (MGv; the primary/lemniscal auditory pathway). By contrast, RT received only ∼45% from MGv, and an equal share from the dorsal subdivision (MGd). Area RTp received ∼25% of its inputs from MGv, but received additional inputs from multisensory areas outside the MGN (30% in RTp vs. 1-5% in core areas). The MGN input to RTp distinguished this rostral extension of auditory cortex from the adjacent auditory-related cortex of the STGr, which received 80% of its thalamic input from multisensory nuclei (primarily medial pulvinar). Anterograde tracers identified complementary descending connections by which highly processed auditory information may modulate thalamocortical inputs. © 2017 Wiley Periodicals, Inc.
Dick, Frederic K; Lehet, Matt I; Callaghan, Martina F; Keller, Tim A; Sereno, Martin I; Holt, Lori L
2017-12-13
Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation (acoustic frequency) might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe. 
Although diverse pathologies reduce quality of life by impacting such spectrally directed auditory attention, its neurobiological bases are unclear. We demonstrate that human primary and nonprimary auditory cortical activation is modulated by spectrally directed attention in a manner that recapitulates its tonotopic sensory organization. Further, the graded activation profiles evoked by single-frequency bands are correlated with attentionally driven activation when these bands are presented in complex soundscapes. Finally, we observe a strong concordance in the degree of cortical myelination and the strength of tonotopic activation across several auditory cortical regions. Copyright © 2017 Dick et al.
Top-down and bottom-up modulation of brain structures involved in auditory discrimination.
Diekhof, Esther K; Biedermann, Franziska; Ruebsamen, Rudolf; Gruber, Oliver
2009-11-10
Auditory deviancy detection comprises both automatic and voluntary processing. Here, we investigated the neural correlates of different components of the sensory discrimination process using functional magnetic resonance imaging. Subliminal auditory processing of deviant events that were not detected led to activation in left superior temporal gyrus. On the other hand, both correct detection of deviancy and false alarms activated a frontoparietal network of attentional processing and response selection, i.e. this network was activated regardless of the physical presence of deviant events. Finally, activation in the putamen, anterior cingulate and middle temporal cortex depended on factual stimulus representations and occurred only during correct deviancy detection. These results indicate that sensory discrimination may rely on dynamic bottom-up and top-down interactions.
Structural and functional correlates for language efficiency in auditory word processing.
Jung, JeYoung; Kim, Sunmi; Cho, Hyesuk; Nam, Kichun
2017-01-01
This study aims to provide a convergent understanding of the neural basis of auditory word processing efficiency using multimodal imaging. We investigated the structural and functional correlates of word processing efficiency in healthy individuals. We acquired two structural imaging modalities (T1-weighted imaging and diffusion tensor imaging) and functional magnetic resonance imaging (fMRI) during auditory word processing (phonological and semantic tasks). Our results showed that better phonological performance was predicted by greater thalamus activity. In contrast, better semantic performance was associated with less activation in the left posterior middle temporal gyrus (pMTG), supporting the neural efficiency hypothesis that better task performance requires less brain activation. Furthermore, our network analysis revealed that a semantic network including the left anterior temporal lobe (ATL), dorsolateral prefrontal cortex (DLPFC) and pMTG was correlated with semantic efficiency. In particular, this network operated in a neurally efficient manner during auditory word processing. Structurally, the DLPFC and cingulum contributed to word processing efficiency. The parietal cortex also showed a significant association with word processing efficiency. Our results demonstrated that the two features of word processing efficiency, phonology and semantics, are supported by different brain regions and, importantly, that the way each region serves efficiency differs according to the feature of word processing. Our findings suggest that word processing efficiency is achieved through the structural and functional collaboration of multiple brain regions involved in language and general cognitive function.
Romero, Ana Carla Leite; Alfaya, Lívia Marangoni; Gonçales, Alina Sanches; Frizzo, Ana Claudia Figueiredo; Isaac, Myriam de Lima
2016-01-01
Introduction: The auditory system of HIV-positive children may have deficits at various levels, such as the high incidence of middle ear problems that can cause hearing loss. Objective: The objective of this study is to characterize the performance of children infected by the Human Immunodeficiency Virus (HIV) on the Simplified Auditory Processing Test (SAPT) and the Staggered Spondaic Word Test. Methods: We performed behavioral tests composed of the Simplified Auditory Processing Test and the Portuguese version of the Staggered Spondaic Word Test (SSW). The participants were 15 children infected by HIV, all using antiretroviral medication. Results: The children had abnormal auditory processing, verified by the Simplified Auditory Processing Test and the Portuguese version of the SSW. In the Simplified Auditory Processing Test, 60% of the children presented hearing impairment. In the SAPT, the memory test for verbal sounds showed the most errors (53.33%), whereas in the SSW, 86.67% of the children showed deficiencies indicating deficits in figure-ground, attention, and auditory memory skills. Furthermore, there were more errors under background-noise conditions in both age groups, with most errors in the left ear in the group of 8-year-olds and similar results for the group aged 9 years. Conclusion: The high incidence of hearing loss in children with HIV and its comorbidity with several biological and environmental factors indicate the need for: 1) family and professional awareness of the impact of auditory alteration on the development and learning of children with HIV, and 2) access to educational plans and follow-up with multidisciplinary teams as early as possible to minimize the damage caused by auditory deficits. PMID:28050213
Visual activity predicts auditory recovery from deafness after adult cochlear implantation.
Strelnikov, Kuzma; Rouger, Julien; Demonet, Jean-François; Lagleyre, Sebastien; Fraysse, Bernard; Deguine, Olivier; Barone, Pascal
2013-12-01
Modern cochlear implantation technologies allow deaf patients to understand auditory speech; however, the implants deliver only a coarse auditory input and patients must use long-term adaptive processes to achieve coherent percepts. In adults with post-lingual deafness, most of the progress in speech recovery occurs during the first year after cochlear implantation, but there is a large range of variability in the level of cochlear implant outcomes and in the temporal evolution of recovery. It has been proposed that when profoundly deaf subjects receive a cochlear implant, the visual cross-modal reorganization of the brain is deleterious for auditory speech recovery. We tested this hypothesis in post-lingually deaf adults by analysing whether brain activity shortly after implantation correlated with the level of auditory recovery 6 months later. Based on brain activity induced by a speech-processing task, we found strong positive correlations in areas outside the auditory cortex. The highest positive correlations were found in the occipital cortex involved in visual processing, as well as in the posterior-temporal cortex known for audio-visual integration. The other area, which positively correlated with auditory speech recovery, was localized in the left inferior frontal area known for speech processing. Our results demonstrate that the visual modality's functional level is related to the proficiency level of auditory recovery. Based on the positive correlation of visual activity with auditory speech recovery, we suggest that visual modality may facilitate the perception of the word's auditory counterpart in communicative situations. The link demonstrated between visual activity and auditory speech perception indicates that visuoauditory synergy is crucial for cross-modal plasticity and fostering speech-comprehension recovery in adult cochlear-implanted deaf patients.
Cortical evoked potentials to an auditory illusion: binaural beats.
Pratt, Hillel; Starr, Arnold; Michalewski, Henry J; Dimitrijevic, Andrew; Bleich, Naomi; Mittelman, Nomi
2009-08-01
To define brain activity corresponding to an auditory illusion of 3 and 6 Hz binaural beats in 250 Hz or 1000 Hz base frequencies, and compare it to the sound onset response. Event-Related Potentials (ERPs) were recorded in response to unmodulated tones of 250 or 1000 Hz to one ear and 3 or 6 Hz higher to the other, creating an illusion of amplitude modulations (beats) of 3 Hz and 6 Hz, in base frequencies of 250 Hz and 1000 Hz. Tones were 2000 ms in duration and presented with approximately 1 s intervals. Latency, amplitude and source current density estimates of ERP components to tone onset and subsequent beats-evoked oscillations were determined and compared across beat frequencies with both base frequencies. All stimuli evoked tone-onset P50, N100 and P200 components followed by oscillations corresponding to the beat frequency, and a subsequent tone-offset complex. Beats-evoked oscillations were higher in amplitude with the low base frequency and to the low beat frequency. Sources of the beats-evoked oscillations located mostly to left lateral and inferior temporal lobe areas in all stimulus conditions. Onset-evoked components were not different across stimulus conditions; P50 had significantly different sources than the beats-evoked oscillations; and N100 and P200 sources located to the same temporal lobe regions as beats-evoked oscillations, but were bilateral and also included frontal and parietal contributions. Neural activity with slightly different volley frequencies from the left and right ear converges and interacts in the central auditory brainstem pathways to generate beats of neural activity that modulate activity in the left temporal lobe, giving rise to the illusion of binaural beats. Cortical potentials recorded to binaural beats are distinct from onset responses. Brain activity corresponding to an auditory illusion of low frequency beats can be recorded from the scalp.
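The stimulus construction described above, an unmodulated tone to each ear whose frequencies differ by the beat rate, is straightforward to sketch. The sampling rate and duration defaults below are assumptions; only the 250/1000 Hz bases and 3/6 Hz beat offsets come from the study.

```python
import numpy as np

def binaural_beat_pair(base_hz, beat_hz, dur_s=2.0, fs=44100):
    """Dichotic tone pair whose frequencies differ by `beat_hz`.
    Each channel is an unmodulated sinusoid, mirroring the study's
    stimuli (250/1000 Hz bases, 3/6 Hz beats, 2000 ms tones)."""
    t = np.arange(int(dur_s * fs)) / fs
    left = np.sin(2 * np.pi * base_hz * t)
    right = np.sin(2 * np.pi * (base_hz + beat_hz) * t)
    return left, right

left, right = binaural_beat_pair(250, 3)
# Summed acoustically the pair would beat at 3 Hz, but in the experiment
# each ear hears a steady tone; the 3 Hz modulation arises only neurally,
# in the binaural brainstem pathway.
```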
Cortical Evoked Potentials to an Auditory Illusion: Binaural Beats
Pratt, Hillel; Starr, Arnold; Michalewski, Henry J.; Dimitrijevic, Andrew; Bleich, Naomi; Mittelman, Nomi
2009-01-01
Objective: To define brain activity corresponding to an auditory illusion of 3 and 6 Hz binaural beats in 250 Hz or 1,000 Hz base frequencies, and compare it to the sound onset response. Methods: Event-Related Potentials (ERPs) were recorded in response to unmodulated tones of 250 or 1000 Hz to one ear and 3 or 6 Hz higher to the other, creating an illusion of amplitude modulations (beats) of 3 Hz and 6 Hz, in base frequencies of 250 Hz and 1000 Hz. Tones were 2,000 ms in duration and presented with approximately 1 s intervals. Latency, amplitude and source current density estimates of ERP components to tone onset and subsequent beats-evoked oscillations were determined and compared across beat frequencies with both base frequencies. Results: All stimuli evoked tone-onset P50, N100 and P200 components followed by oscillations corresponding to the beat frequency, and a subsequent tone-offset complex. Beats-evoked oscillations were higher in amplitude with the low base frequency and to the low beat frequency. Sources of the beats-evoked oscillations across all stimulus conditions located mostly to left lateral and inferior temporal lobe areas in all stimulus conditions. Onset-evoked components were not different across stimulus conditions; P50 had significantly different sources than the beats-evoked oscillations; and N100 and P200 sources located to the same temporal lobe regions as beats-evoked oscillations, but were bilateral and also included frontal and parietal contributions. Conclusions: Neural activity with slightly different volley frequencies from left and right ear converges and interacts in the central auditory brainstem pathways to generate beats of neural activity to modulate activities in the left temporal lobe, giving rise to the illusion of binaural beats. Cortical potentials recorded to binaural beats are distinct from onset responses. 
Significance: Brain activity corresponding to an auditory illusion of low frequency beats can be recorded from the scalp. PMID:19616993
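The dichotic stimulus construction described in this record — an unmodulated tone to each ear, offset by the beat frequency — can be sketched in a few lines of numpy. The function name, defaults, and sampling rate below are illustrative choices, not parameters taken from the study.

```python
import numpy as np

def binaural_beat_stimulus(base_hz, beat_hz, dur_s=2.0, fs=44100):
    """Dichotic tone pair: an unmodulated sine at base_hz to the left ear and
    at base_hz + beat_hz to the right ear. Neither channel is amplitude
    modulated; the beat percept arises only after the two monaural signals
    interact in the central auditory pathway.
    (Function name, defaults, and sampling rate are illustrative.)
    """
    t = np.arange(int(dur_s * fs)) / fs
    left = np.sin(2 * np.pi * base_hz * t)
    right = np.sin(2 * np.pi * (base_hz + beat_hz) * t)
    return left, right

# e.g. a 3 Hz beat on a 250 Hz base, as in the low-frequency condition:
left, right = binaural_beat_stimulus(250, 3)
```

Mixing the two channels acoustically (left + right) would instead yield a genuinely amplitude-modulated signal at the difference frequency, which is the standard control distinction between binaural and monaural beats.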
Adamchic, Ilya; Hauptmann, Christian; Tass, Peter A.
2012-01-01
Chronic subjective tinnitus is characterized by abnormal neuronal synchronization in the central auditory system. As shown in a controlled clinical trial, acoustic coordinated reset (CR) neuromodulation causes a significant relief of tinnitus symptoms along with a significant decrease of pathological oscillatory activity in a network comprising auditory and non-auditory brain areas, which is often accompanied with a significant tinnitus pitch change. Here we studied if the tinnitus pitch change correlates with a reduction of tinnitus loudness and/or annoyance as assessed by visual analog scale (VAS) scores. Furthermore, we studied if the changes of the pattern of brain synchrony in tinnitus patients induced by 12 weeks of CR therapy depend on whether or not the patients undergo a pronounced tinnitus pitch change. Therefore, we applied standardized low-resolution brain electromagnetic tomography (sLORETA) to EEG recordings from two groups of patients with a sustained CR-induced relief of tinnitus symptoms with and without tinnitus pitch change. We found that absolute changes of VAS loudness and VAS annoyance scores significantly correlate with the modulus, i.e., the absolute value, of the tinnitus pitch change. Moreover, as opposed to patients with small or no pitch change we found a significantly stronger decrease in gamma power in patients with pronounced tinnitus pitch change in right parietal cortex (Brodmann area, BA 40), right frontal cortex (BA 9, 46), left temporal cortex (BA 22, 42), and left frontal cortex (BA 4, 6), combined with a significantly stronger increase of alpha (10–12 Hz) activity in the right and left anterior cingulate cortex (ACC; BA 32, 24). In addition, we revealed a significantly lower functional connectivity in the gamma band between the right dorsolateral prefrontal cortex (BA 46) and the right ACC (BA 32) after 12 weeks of CR therapy in patients with pronounced pitch change. 
Our results indicate a substantial, CR-induced reduction of tinnitus-related auditory binding in a pitch processing network. PMID:22493570
Separating pitch chroma and pitch height in the human brain
Warren, J. D.; Uppenkamp, S.; Patterson, R. D.; Griffiths, T. D.
2003-01-01
Musicians recognize pitch as having two dimensions. On the keyboard, these are illustrated by the octave and the cycle of notes within the octave. In perception, these dimensions are referred to as pitch height and pitch chroma, respectively. Pitch chroma provides a basis for presenting acoustic patterns (melodies) that do not depend on the particular sound source. In contrast, pitch height provides a basis for segregation of notes into streams to separate sound sources. This paper reports a functional magnetic resonance experiment designed to search for distinct mappings of these two types of pitch change in the human brain. The results show that chroma change is specifically represented anterior to primary auditory cortex, whereas height change is specifically represented posterior to primary auditory cortex. We propose that tracking of acoustic information streams occurs in anterior auditory areas, whereas the segregation of sound objects (a crucial aspect of auditory scene analysis) depends on posterior areas. PMID:12909719
Transformation of temporal sequences in the zebra finch auditory system
Lim, Yoonseob; Lagoy, Ryan; Shinn-Cunningham, Barbara G; Gardner, Timothy J
2016-01-01
This study examines how temporally patterned stimuli are transformed as they propagate from primary to secondary zones in the thalamorecipient auditory pallium in zebra finches. Using a new class of synthetic click stimuli, we find a robust mapping from temporal sequences in the primary zone to distinct population vectors in secondary auditory areas. We tested whether songbirds could discriminate synthetic click sequences in an operant setup and found that a robust behavioral discrimination is present for click sequences composed of intervals ranging from 11 ms to 40 ms, but breaks down for stimuli composed of longer inter-click intervals. This work suggests that the analog of the songbird auditory cortex transforms temporal patterns to sequence-selective population responses or 'spatial codes', and that these distinct population responses contribute to behavioral discrimination of temporally complex sounds. DOI: http://dx.doi.org/10.7554/eLife.18205.001 PMID:27897971
Syllabic (~2-5 Hz) and fluctuation (~1-10 Hz) ranges in speech and auditory processing
Edwards, Erik; Chang, Edward F.
2013-01-01
Given recent interest in syllabic rates (~2-5 Hz) for speech processing, we review the perception of “fluctuation” range (~1-10 Hz) modulations during listening to speech and technical auditory stimuli (AM and FM tones and noises, and ripple sounds). We find evidence that the temporal modulation transfer function (TMTF) of human auditory perception is not simply low-pass in nature, but rather exhibits a peak in sensitivity in the syllabic range (~2-5 Hz). We also address human and animal neurophysiological evidence, and argue that this bandpass tuning arises at the thalamocortical level and is more associated with non-primary regions than primary regions of cortex. The bandpass rather than low-pass TMTF has implications for modeling auditory central physiology and speech processing: this implicates temporal contrast rather than simple temporal integration, with contrast enhancement for dynamic stimuli in the fluctuation range. PMID:24035819
Schmid, Gabriele; Thielmann, Anke; Ziegler, Wolfram
2009-01-01
Patients with lesions of the left hemisphere often suffer from oral-facial apraxia, apraxia of speech, and aphasia. In these patients, visual features often play a critical role in speech and language therapy, when pictured lip shapes or the therapist's visible mouth movements are used to facilitate speech production and articulation. This demands…
Recognition Memory for Braille or Spoken Words: An fMRI study in Early Blind
Burton, Harold; Sinclair, Robert J.; Agato, Alvin
2012-01-01
We examined cortical activity in early blind during word recognition memory. Nine participants were blind at birth and one by 1.5 years. In an event-related design, we studied blood oxygen level-dependent responses to studied (“old”) compared to novel (“new”) words. Presentation mode was Braille or spoken. Responses were larger for identified “new” words read with Braille in bilateral lower and higher tier visual areas and primary somatosensory cortex. Responses to spoken “new” words were larger in bilateral primary and accessory auditory cortex. Auditory cortex was unresponsive to Braille words and occipital cortex responded to spoken words but not differentially with “old”/“new” recognition. Left dorsolateral prefrontal cortex had larger responses to “old” words only with Braille. Larger occipital cortex responses to “new” Braille words suggested verbal memory based on the mechanism of recollection. A previous report in sighted noted larger responses for “new” words studied in association with pictures that created a distinctiveness heuristic source factor which enhanced recollection during remembering. Prior behavioral studies in early blind noted an exceptional ability to recall words. Utilization of this skill by participants in the current study possibly engendered recollection that augmented remembering “old” words. A larger response when identifying “new” words possibly resulted from exhaustively recollecting the sensory properties of “old” words in modality appropriate sensory cortices. The uniqueness of a memory role for occipital cortex is in its cross-modal responses to coding tactile properties of Braille. The latter possibly reflects a “sensory echo” that aids recollection. PMID:22251836
Complex auditory behaviour emerges from simple reactive steering
Hedwig, Berthold; Poulet, James F. A.
2004-08-01
The recognition and localization of sound signals is fundamental to acoustic communication. Complex neural mechanisms are thought to underlie the processing of species-specific sound patterns even in animals with simple auditory pathways. In female crickets, which orient towards the male's calling song, current models propose pattern recognition mechanisms based on the temporal structure of the song. Furthermore, it is thought that localization is achieved by comparing the output of the left and right recognition networks, which then directs the female to the pattern that most closely resembles the species-specific song. Here we show, using a highly sensitive method for measuring the movements of female crickets, that when walking and flying each sound pulse of the communication signal releases a rapid steering response. Thus auditory orientation emerges from reactive motor responses to individual sound pulses. Although the reactive motor responses are not based on the song structure, a pattern recognition process may modulate the gain of the responses on a longer timescale. These findings are relevant to concepts of insect auditory behaviour and to the development of biologically inspired robots performing cricket-like auditory orientation.
Bishop, Dorothy VM; Hardiman, Mervyn; Uwer, Ruth; von Suchodoletz, Waldemar
2007-01-01
It has been proposed that specific language impairment (SLI) is the consequence of low-level abnormalities in auditory perception. However, studies of long-latency auditory ERPs in children with SLI have generated inconsistent findings. A possible reason for this inconsistency is the heterogeneity of SLI. The intraclass correlation (ICC) has been proposed as a useful statistic for evaluating heterogeneity because it allows one to compare an individual's auditory ERP with the grand average waveform from a typically developing reference group. We used this method to reanalyse auditory ERPs from a sample previously described by Uwer, Albrecht and von Suchodoletz (2002). In a subset of children with receptive SLI, there was less correspondence (i.e. lower ICC) with the normative waveform (based on the control grand average) than for typically developing children. This poorer correspondence was seen in responses to both tone and speech stimuli for the period 100–228 ms post stimulus onset. The effect was lateralized and seen at right- but not left-sided electrodes. PMID:17683344
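The ICC method this record describes — scoring how well an individual's auditory ERP waveform corresponds to a reference group's grand average — can be sketched as a one-way random-effects ICC over time points, with the two waveforms treated as repeated measurements. This is a generic illustration of that idea; the exact ICC variant and analysis window used by the authors may differ.

```python
import numpy as np

def icc_waveform(individual, grand_average):
    """One-way random-effects ICC(1,1) between an individual ERP waveform and
    a reference grand-average waveform. Time points are the "targets" and the
    two waveforms are k=2 repeated measurements per target.
    (A simplified sketch, assuming equal-length, time-aligned waveforms.)
    """
    x = np.asarray(individual, dtype=float)
    y = np.asarray(grand_average, dtype=float)
    n = len(x)
    data = np.stack([x, y], axis=1)                # n targets x 2 measurements
    row_means = data.mean(axis=1)
    grand_mean = data.mean()
    # between-target and within-target mean squares (k = 2)
    msb = 2.0 * np.sum((row_means - grand_mean) ** 2) / (n - 1)
    msw = np.sum((data - row_means[:, None]) ** 2) / n
    return (msb - msw) / (msb + msw)
```

Identical waveforms give an ICC of 1; waveforms in anti-phase give a negative ICC, which is why the statistic can index poor correspondence with the normative waveform rather than mere amplitude differences.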
Neural stem/progenitor cell properties of glial cells in the adult mouse auditory nerve
Lang, Hainan; Xing, Yazhi; Brown, LaShardai N.; Samuvel, Devadoss J.; Panganiban, Clarisse H.; Havens, Luke T.; Balasubramanian, Sundaravadivel; Wegner, Michael; Krug, Edward L.; Barth, Jeremy L.
2015-01-01
The auditory nerve is the primary conveyor of hearing information from sensory hair cells to the brain. It has been believed that loss of the auditory nerve is irreversible in the adult mammalian ear, resulting in sensorineural hearing loss. We examined the regenerative potential of the auditory nerve in a mouse model of auditory neuropathy. Following neuronal degeneration, quiescent glial cells converted to an activated state showing a decrease in nuclear chromatin condensation, altered histone deacetylase expression and up-regulation of numerous genes associated with neurogenesis or development. Neurosphere formation assays showed that adult auditory nerves contain neural stem/progenitor cells (NSPs) that were within a Sox2-positive glial population. Production of neurospheres from auditory nerve cells was stimulated by acute neuronal injury and hypoxic conditioning. These results demonstrate that a subset of glial cells in the adult auditory nerve exhibit several characteristics of NSPs and are therefore potential targets for promoting auditory nerve regeneration. PMID:26307538
Primary auditory cortex regulates threat memory specificity.
Wigestrand, Mattis B; Schiff, Hillary C; Fyhn, Marianne; LeDoux, Joseph E; Sears, Robert M
2017-01-01
Distinguishing threatening from nonthreatening stimuli is essential for survival and stimulus generalization is a hallmark of anxiety disorders. While auditory threat learning produces long-lasting plasticity in primary auditory cortex (Au1), it is not clear whether such Au1 plasticity regulates memory specificity or generalization. We used muscimol infusions in rats to show that discriminatory threat learning requires Au1 activity specifically during memory acquisition and retrieval, but not during consolidation. Memory specificity was similarly disrupted by infusion of PKMζ inhibitor peptide (ZIP) during memory storage. Our findings show that Au1 is required at critical memory phases and suggest that Au1 plasticity enables stimulus discrimination. © 2016 Wigestrand et al.; Published by Cold Spring Harbor Laboratory Press.
Acute Inactivation of Primary Auditory Cortex Causes a Sound Localisation Deficit in Ferrets
Wood, Katherine C.; Town, Stephen M.; Atilgan, Huriye; Jones, Gareth P.
2017-01-01
The objective of this study was to demonstrate the efficacy of acute inactivation of brain areas by cooling in the behaving ferret and to demonstrate that cooling auditory cortex produced a localisation deficit that was specific to auditory stimuli. The effect of cooling on neural activity was measured in anesthetized ferret cortex. The behavioural effect of cooling was determined in a benchmark sound localisation task in which inactivation of primary auditory cortex (A1) is known to impair performance. Cooling strongly suppressed the spontaneous and stimulus-evoked firing rates of cortical neurons when the cooling loop was held at temperatures below 10°C, and this suppression was reversed when the cortical temperature recovered. Cooling of ferret auditory cortex during behavioural testing impaired sound localisation performance, with unilateral cooling producing selective deficits in the hemifield contralateral to cooling, and bilateral cooling producing deficits on both sides of space. The deficit in sound localisation induced by inactivation of A1 was not caused by motivational or locomotor changes since inactivation of A1 did not affect localisation of visual stimuli in the same context. PMID:28099489
Size and shape variations of the bony components of sperm whale cochleae.
Schnitzler, Joseph G; Frédérich, Bruno; Früchtnicht, Sven; Schaffeld, Tobias; Baltzer, Johannes; Ruser, Andreas; Siebert, Ursula
2017-04-25
Several mass strandings of sperm whales occurred in the North Sea during January and February 2016. Twelve animals were necropsied and sampled around 48 h after their discovery on the German coast of Schleswig-Holstein. The present study aims to explore the morphological variation of the primary sensory organ of sperm whales, the left and right auditory system, using high-resolution computerised tomography imaging. We performed a quantitative analysis of size and shape of cochleae using landmark-based geometric morphometrics to reveal inter-individual anatomical variations. A hierarchical cluster analysis based on thirty-one external morphometric characters classified these 12 individuals into two stranding clusters. Some of the shape variation could be attributed to geographical differences among stranding locations and clusters. Our geometric data allowed the discrimination of distinct bachelor schools among sperm whales that stranded on German coasts. We argue that the cochleae are individually shaped, varying greatly in dimensions, and that the intra-specific variation observed in the morphology of the cochleae may partially reflect their affiliation to their bachelor school. There are increasing concerns about the impact of noise on cetaceans, and describing the auditory periphery of odontocetes is a key conservation issue to further assess the effect of noise pollution.
Auditory Neuroscience: Temporal Anticipation Enhances Cortical Processing
Walker, Kerry M. M.; King, Andrew J.
2015-01-01
Summary: A recent study shows that expectation about the timing of behaviorally relevant sounds enhances the responses of neurons in the primary auditory cortex and improves the accuracy and speed with which animals respond to those sounds. PMID:21481759
Olshansky, Michael P; Bar, Rachel J; Fogarty, Mary; DeSouza, Joseph F X
2015-01-01
The current study used functional magnetic resonance imaging to examine the neural activity of an expert dancer with 35 years of break-dancing experience during the kinesthetic motor imagery (KMI) of dance accompanied by highly familiar and unfamiliar music. The goal of this study was to examine the effect of musical familiarity on neural activity underlying KMI within a highly experienced dancer. In order to investigate this in both primary sensory and motor planning cortical areas, we examined the effects of music familiarity on the primary auditory cortex [Heschl's gyrus (HG)] and the supplementary motor area (SMA). Our findings reveal reduced HG activity and greater SMA activity during imagined dance to familiar music compared to unfamiliar music. We propose that one's internal representations of dance moves are influenced by auditory stimuli and may be specific to a dance style and the music accompanying it.
Orlov, Natasza D; Giampietro, Vincent; O'Daly, Owen; Lam, Sheut-Ling; Barker, Gareth J; Rubia, Katya; McGuire, Philip; Shergill, Sukhwinder S; Allen, Paul
2018-02-12
Neurocognitive models and previous neuroimaging work posit that auditory verbal hallucinations (AVH) arise due to increased activity in speech-sensitive regions of the left posterior superior temporal gyrus (STG). Here, we examined if patients with schizophrenia (SCZ) and AVH could be trained to down-regulate STG activity using real-time functional magnetic resonance imaging neurofeedback (rtfMRI-NF). We also examined the effects of rtfMRI-NF training on functional connectivity between the STG and other speech and language regions. Twelve patients with SCZ and treatment-refractory AVH were recruited to participate in the study and were trained to down-regulate STG activity using rtfMRI-NF, over four MRI scanner visits during a 2-week training period. STG activity and functional connectivity were compared pre- and post-training. Patients successfully learnt to down-regulate activity in their left STG over the rtfMRI-NF training. Post-training, patients showed increased functional connectivity between the left STG, the left inferior prefrontal gyrus (IFG) and the inferior parietal gyrus. The post-training increase in functional connectivity between the left STG and IFG was associated with a reduction in AVH symptoms over the training period. The speech-sensitive region of the left STG is a suitable target region for rtfMRI-NF in patients with SCZ and treatment-refractory AVH. Successful down-regulation of left STG activity can increase functional connectivity between speech motor and perception regions. These findings suggest that patients with AVH have the ability to alter activity and connectivity in speech and language regions, and raise the possibility that rtfMRI-NF training could present a novel therapeutic intervention in SCZ.
Restle, Julia; Murakami, Takenobu; Ziemann, Ulf
2012-07-01
The posterior part of the inferior frontal gyrus (pIFG) in the left hemisphere is thought to form part of the putative human mirror neuron system and is assigned a key role in mapping sensory perception onto motor action. Accordingly, the pIFG is involved in motor imitation of the observed actions of others but it is not known to what extent speech repetition of auditory-presented sentences is also a function of the pIFG. Here we applied fMRI-guided facilitating intermittent theta burst transcranial magnetic stimulation (iTBS), or depressant continuous TBS (cTBS), or intermediate TBS (imTBS) over the left pIFG of healthy subjects and compared speech repetition accuracy of foreign Japanese sentences before and after TBS. We found that repetition accuracy improved after iTBS and, to a lesser extent, after imTBS, but remained unchanged after cTBS. In a control experiment, iTBS was applied over the left middle occipital gyrus (MOG), a region not involved in sensorimotor processing of auditory-presented speech. Repetition accuracy remained unchanged after iTBS of MOG. We argue that the stimulation-type- and stimulation-site-specific facilitating effect of iTBS over left pIFG on speech repetition accuracy indicates a causal role of the human left-hemispheric pIFG in the translation of phonological perception to motor articulatory output for repetition of speech. This effect may prove useful in rehabilitation strategies that combine repetitive speech training with iTBS of the left pIFG in speech disorders, such as aphasia after cerebral stroke. Copyright © 2012 Elsevier Ltd. All rights reserved.
Badcock, Nicholas A.; Nye, Abigail; Bishop, Dorothy V. M.
2011-01-01
Language is lateralised to the left hemisphere in most people, but it is unclear whether the same degree and direction of lateralisation is found for all verbal tasks and whether laterality is affected by task difficulty. We used functional transcranial Doppler ultrasonography (fTCD) to assess the lateralisation of language processing in 27 young adults using three tasks: word generation (WG), auditory naming (AN), and picture story (PS). WG and AN are active tasks requiring behavioural responses whereas PS is a passive task that involves listening to an auditory story accompanied by pictures. We also examined the effect of task difficulty by a post hoc behavioural categorisation of trials in the WG task and a word frequency manipulation in the AN task. fTCD was used to measure task-dependent blood flow velocity changes in the left and right middle cerebral arteries. All of these tasks were significantly left lateralised: WG, 77% of individuals left, 5% right; AN, 72% left, 4% right; PS, 56% left, 0% right. There were significant positive relationships between WG and AN (r = 0.56) as well as AN and PS (r = 0.76) but not WG and PS (r = −0.22). The task difficulty manipulation affected accuracy in both WG and AN tasks, as well as reaction time in the AN task, but did not significantly influence laterality indices in either task. It is concluded that verbal tasks are not interchangeable when assessing cerebral lateralisation, but that differences between tasks are not a consequence of task difficulty. PMID:23098198
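The laterality classifications reported in this record rest on a laterality index computed from the left-minus-right difference in task-related blood flow velocity. A minimal sketch of that idea follows; the ±1 threshold and the classification labels are illustrative choices, not the cutoffs used in the study.

```python
import numpy as np

def laterality_index(left_delta, right_delta):
    """Signed laterality index from task-related blood-flow-velocity changes
    in the left and right middle cerebral arteries (positive = left-lateralised).

    A common fTCD convention is the mean left-minus-right difference within a
    task epoch; the +/-1 cutoff below for labelling an individual as
    left / right / bilateral is purely illustrative.
    """
    li = float(np.mean(np.asarray(left_delta) - np.asarray(right_delta)))
    if li > 1.0:
        return li, "left"
    if li < -1.0:
        return li, "right"
    return li, "bilateral"
```

In practice the epoch-wise differences would come from heart-cycle-corrected velocity envelopes, and the index is usually averaged over many trials before an individual is classified.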
Pflug, Anja; Gompf, Florian; Kell, Christian Alexander
2017-08-01
In bimanual multifrequency tapping, right-handers commonly use the right hand to tap the relatively higher rate and the left hand to tap the relatively lower rate. This could be due to hemispheric specializations for the processing of relative frequencies. An extension of the double-filtering-by-frequency theory to motor control proposes a left hemispheric specialization for the control of relatively high and a right hemispheric specialization for the control of relatively low tapping rates. We investigated timing variability and rhythmic accentuation in right handers tapping mono- and multifrequent bimanual rhythms to test the predictions of the double-filtering-by-frequency theory. Yet, hemispheric specializations for the processing of relative tapping rates could be masked by a left hemispheric dominance for the control of known sequences. Tapping was thus either performed in an overlearned quadruple meter (tap of the slow rhythm on the first auditory beat) or in a syncopated quadruple meter (tap of the slow rhythm on the fourth auditory beat). Independent of syncopation, the right hand outperformed the left hand in timing accuracy for fast tapping. A left hand timing benefit for slow tapping rates as predicted by the double-filtering-by-frequency theory was only found in the syncopated tapping group. This suggests a right hemisphere preference for the control of slow tapping rates when rhythms are not overlearned. Error rates indicate that overlearned rhythms represent hierarchically structured meters that are controlled by a single timer that could potentially reside in the left hemisphere. Copyright © 2017 Elsevier B.V. All rights reserved.
Lv, Han; Zhao, Pengfei; Liu, Zhaohui; Li, Rui; Zhang, Ling; Wang, Peng; Yan, Fei; Liu, Liheng; Wang, Guopeng; Zeng, Rong; Li, Ting; Dong, Cheng; Gong, Shusheng; Wang, Zhenchang
2017-03-01
Abnormal neural activities can be revealed by resting-state functional magnetic resonance imaging (rs-fMRI) using analyses of the regional activity and functional connectivity (FC) of the networks in the brain. This study was designed to demonstrate the functional network alterations in the patients with pulsatile tinnitus (PT). In this study, we recruited 45 patients with unilateral PT in the early stage of disease (less than 48 months of disease duration) and 45 normal controls. We used regional homogeneity (ReHo) and seed-based FC computational methods to reveal resting-state brain activity features associated with pulsatile tinnitus. Compared with healthy controls, PT patients showed regional abnormalities mainly in the left middle occipital gyrus (MOG), posterior cingulate gyrus (PCC), precuneus and right anterior insula (AI). When these regions were defined as seeds, we demonstrated widespread modification of interactions between the auditory and non-auditory networks. The auditory network was positively connected with the cognitive control network (CCN), which may be associated with tinnitus-related distress. Both altered regional activity and changed FC were found in the visual network. Modifications of the interactions of higher-order networks were mainly found in the DMN, CCN and limbic networks. Functional connectivity between the left MOG and left parahippocampal gyrus could also be an index to reflect the disease duration. This study helped us gain a better understanding of the characteristics of neural network modifications in patients with pulsatile tinnitus. Copyright © 2017 Elsevier B.V. All rights reserved.
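The seed-based FC analysis this record relies on reduces, at its core, to correlating a seed region's time series with every voxel's time series. A minimal numpy sketch of that core step, omitting the preprocessing, nuisance regression, and group statistics a real rs-fMRI pipeline requires:

```python
import numpy as np

def seed_based_fc(seed_ts, voxel_ts):
    """Pearson correlation between one seed time series (length T) and each
    column of voxel_ts (T x V), returned as Fisher z values so maps can be
    averaged and compared across subjects.
    (A generic sketch, not the study's full pipeline.)
    """
    seed = (seed_ts - seed_ts.mean()) / seed_ts.std()
    vox = (voxel_ts - voxel_ts.mean(axis=0)) / voxel_ts.std(axis=0)
    r = (vox * seed[:, None]).mean(axis=0)               # Pearson r per voxel
    return np.arctanh(np.clip(r, -0.999999, 0.999999))   # Fisher z-transform
```

The clip guards the Fisher transform against |r| = 1 (e.g. a voxel inside the seed itself); ReHo, by contrast, is a purely local measure (Kendall's W over a voxel's neighbourhood) and is not shown here.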
Ross, Bernhard; Miyazaki, Takahiro; Thompson, Jessica; Jamali, Shahab; Fujioka, Takako
2014-10-15
When two tones with slightly different frequencies are presented to both ears, they interact in the central auditory system and induce the sensation of a beating sound. At low difference frequencies, we perceive a single sound, which is moving across the head between the left and right ears. The percept changes to loudness fluctuation, roughness, and pitch with increasing beat rate. To examine the neural representations underlying these different perceptions, we recorded neuromagnetic cortical responses while participants listened to binaural beats at a continuously varying rate between 3 Hz and 60 Hz. Binaural beat responses were analyzed as neuromagnetic oscillations following the trajectory of the stimulus rate. Responses were largest in the 40-Hz gamma range and at low frequencies. Binaural beat responses at 3 Hz showed opposite polarity in the left and right auditory cortices. We suggest that this difference in polarity reflects the opponent neural population code for representing sound location. Binaural beats at any rate induced gamma oscillations. However, the responses were largest at 40-Hz stimulation. We propose that the neuromagnetic gamma oscillations reflect postsynaptic modulation that allows for precise timing of cortical neural firing. Systematic phase differences between bilateral responses suggest that separate sound representations of a sound object exist in the left and right auditory cortices. We conclude that binaural processing at the cortical level occurs with the same temporal acuity as monaural processing whereas the identification of sound location requires further interpretation and is limited by the rate of object representations. Copyright © 2014 the American Physiological Society.
Binaural speech processing in individuals with auditory neuropathy.
Rance, G; Ryan, M M; Carew, P; Corben, L A; Yiu, E; Tan, J; Delatycki, M B
2012-12-13
Auditory neuropathy disrupts the neural representation of sound and may therefore impair processes contingent upon inter-aural integration. The aims of this study were to investigate binaural auditory processing in individuals with axonal (Friedreich ataxia) and demyelinating (Charcot-Marie-Tooth disease type 1A) auditory neuropathy and to evaluate the relationship between the degree of auditory deficit and overall clinical severity in patients with neuropathic disorders. Twenty-three subjects with genetically confirmed Friedreich ataxia and 12 subjects with Charcot-Marie-Tooth disease type 1A underwent psychophysical evaluation of basic auditory processing (intensity discrimination/temporal resolution) and binaural speech perception assessment using the Listening in Spatialized Noise test. Age-, gender- and hearing-level-matched controls were also tested. Speech perception in noise for individuals with auditory neuropathy was abnormal for each listening condition, but was particularly affected in circumstances where binaural processing might have improved perception through spatial segregation. Ability to use spatial cues was correlated with temporal resolution, suggesting that the binaural-processing deficit was the result of disordered representation of timing cues in the left and right auditory nerves. Spatial processing was also related to overall disease severity (as measured by the Friedreich Ataxia Rating Scale and Charcot-Marie-Tooth Neuropathy Score), suggesting that the degree of neural dysfunction in the auditory system accurately reflects generalized neuropathic changes. Measures of binaural speech processing show promise for application in the neurology clinic. In individuals with auditory neuropathy due to both axonal and demyelinating mechanisms, the assessment provides a measure of functional hearing ability, a biomarker capable of tracking the natural history of progressive disease and a potential means of evaluating the effectiveness of interventions.
Vahaba, Daniel M; Macedo-Lima, Matheus; Remage-Healey, Luke
2017-01-01
Vocal learning occurs during an experience-dependent, age-limited critical period early in development. In songbirds, vocal learning begins when presinging birds acquire an auditory memory of their tutor's song (sensory phase) followed by the onset of vocal production and refinement (sensorimotor phase). Hearing is necessary throughout the vocal learning critical period. One key brain area for songbird auditory processing is the caudomedial nidopallium (NCM), a telencephalic region analogous to mammalian auditory cortex. Despite NCM's established role in auditory processing, it is unclear how the response properties of NCM neurons may shift across development. Moreover, communication processing in NCM is rapidly enhanced by local 17β-estradiol (E2) administration in adult songbirds; however, the function of dynamically fluctuating E2 in NCM during development is unknown. We collected bilateral extracellular recordings in NCM coupled with reverse microdialysis delivery in juvenile male zebra finches (Taeniopygia guttata) across the vocal learning critical period. We found that auditory-evoked activity and coding accuracy were substantially higher in the NCM of sensory-aged animals compared to sensorimotor-aged animals. Further, we observed both age-dependent and lateralized effects of local E2 administration on sensory processing. In sensory-aged subjects, E2 decreased auditory responsiveness across both hemispheres; however, a similar trend was observed in age-matched control subjects. In sensorimotor-aged subjects, E2 dampened auditory responsiveness in left NCM but enhanced auditory responsiveness in right NCM. Our results reveal an age-dependent physiological shift in auditory processing and lateralized E2 sensitivity that each precisely track a key neural "switch point" from purely sensory (pre-singing) to sensorimotor (singing) in developing songbirds.
Activation of auditory cortex by anticipating and hearing emotional sounds: an MEG study.
Yokosawa, Koichi; Pamilo, Siina; Hirvenkari, Lotta; Hari, Riitta; Pihko, Elina
2013-01-01
To study how auditory cortical processing is affected by the anticipation and hearing of long emotional sounds, we recorded auditory evoked magnetic fields with a whole-scalp MEG device from 15 healthy adults who were listening to emotional or neutral sounds. Pleasant, unpleasant, or neutral sounds, each lasting for 6 s, were played in a random order, preceded by 100-ms cue tones (0.5, 1, or 2 kHz) 2 s before the onset of the sound. The cue tones, indicating the valence of the upcoming emotional sounds, evoked typical transient N100m responses in the auditory cortex. During the rest of the anticipation period (until the beginning of the emotional sound), auditory cortices of both hemispheres generated slow shifts of the same polarity as N100m. During anticipation, the relative strengths of the auditory-cortex signals depended on the upcoming sound: towards the end of the anticipation period the activity became stronger when the subject was anticipating emotional rather than neutral sounds. During the actual emotional and neutral sounds, sustained fields were predominant in the left hemisphere for all sounds. The measured DC MEG signals during both anticipation and hearing of emotional sounds implied that, following the cue that indicates the valence of the upcoming sound, the auditory-cortex activity is modulated by the upcoming sound category during the anticipation period.
Auditory and visual interhemispheric communication in musicians and non-musicians.
Woelfle, Rebecca; Grahn, Jessica A
2013-01-01
The corpus callosum (CC) is a brain structure composed of axon fibres linking the right and left hemispheres. Musical training is associated with larger midsagittal cross-sectional area of the CC, suggesting that interhemispheric communication may be faster in musicians. Here we compared interhemispheric transmission times (ITTs) for musicians and non-musicians. ITT was measured by comparing simple reaction times to stimuli presented to the same hemisphere that controlled a button-press response (uncrossed reaction time), or to the contralateral hemisphere (crossed reaction time). Both visual and auditory stimuli were tested. We predicted that the crossed-uncrossed difference (CUD) for musicians would be smaller than for non-musicians as a result of faster interhemispheric transfer times. We did not expect a difference in CUDs between the visual and auditory modalities for either musicians or non-musicians, as previous work indicates that interhemispheric transfer may happen through the genu of the CC, which contains motor fibres rather than sensory fibres. There were no significant differences in CUDs between musicians and non-musicians. However, auditory CUDs were significantly smaller than visual CUDs. Although this auditory-visual difference was larger in musicians than non-musicians, the interaction between modality and musical training was not significant. Therefore, although musical training does not significantly affect ITT, the crossing of auditory information between hemispheres appears to be faster than visual information, perhaps because subcortical pathways play a greater role for auditory interhemispheric transfer.
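The crossed-uncrossed difference described above is simple arithmetic over mean reaction times: crossed trials (stimulus presented to the hemisphere opposite the responding hand) minus uncrossed trials. A minimal sketch of that computation, using hypothetical reaction-time values rather than data from the study:

```python
def mean(xs):
    return sum(xs) / len(xs)

def cud(crossed_rts, uncrossed_rts):
    """Crossed-uncrossed difference (CUD), in the same units as the inputs.

    CUD = mean crossed RT - mean uncrossed RT. Crossed trials require the
    signal to traverse the corpus callosum before a response can be made,
    so the CUD is taken as an estimate of interhemispheric transmission time.
    """
    return mean(crossed_rts) - mean(uncrossed_rts)

# Hypothetical reaction times in milliseconds (illustrative only).
crossed = [312, 305, 298, 321]
uncrossed = [308, 301, 296, 315]
print(cud(crossed, uncrossed))
```

In practice the CUD would be computed per participant and averaged within groups before comparing musicians with non-musicians.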
Phonological Processing in Human Auditory Cortical Fields
Woods, David L.; Herron, Timothy J.; Cate, Anthony D.; Kang, Xiaojian; Yund, E. W.
2011-01-01
We used population-based cortical-surface analysis of functional magnetic resonance imaging data to characterize the processing of consonant–vowel–consonant syllables (CVCs) and spectrally matched amplitude-modulated noise bursts (AMNBs) in human auditory cortex as subjects attended to auditory or visual stimuli in an intermodal selective attention paradigm. Average auditory cortical field (ACF) locations were defined using tonotopic mapping in a previous study. Activations in auditory cortex were defined by two stimulus-preference gradients: (1) Medial belt ACFs preferred AMNBs and lateral belt and parabelt fields preferred CVCs. This preference extended into core ACFs, with medial regions of primary auditory cortex (A1) and the rostral field preferring AMNBs and lateral regions preferring CVCs. (2) Anterior ACFs showed smaller activations but more clearly defined stimulus preferences than did posterior ACFs. Stimulus preference gradients were unaffected by auditory attention, suggesting that ACF preferences reflect the automatic processing of different spectrotemporal sound features. PMID:21541252
Seibold, Julia C; Nolden, Sophie; Oberem, Josefa; Fels, Janina; Koch, Iring
2018-06-01
In an auditory attention-switching paradigm, participants heard two simultaneously spoken number-words, each presented to one ear, and decided whether the target number was smaller or larger than 5 by pressing a left or right key. An instructional cue in each trial indicated which feature had to be used to identify the target number (e.g., female voice). Auditory attention-switch costs were found when this feature changed compared to when it repeated in two consecutive trials. Earlier studies employing this paradigm showed mixed results when they examined whether such cued auditory attention-switches can be prepared actively during the cue-stimulus interval. This study systematically assessed which preconditions are necessary for the advance preparation of auditory attention-switches. Three experiments were conducted that controlled for cue-repetition benefits, modality switches between cue and stimuli, as well as for predictability of the switch-sequence. Only in the third experiment, in which predictability for an attention-switch was maximal due to a pre-instructed switch-sequence and predictable stimulus onsets, active switch-specific preparation was found. These results suggest that the cognitive system can prepare auditory attention-switches, and this preparation seems to be triggered primarily by the memorised switching-sequence and valid expectations about the time of target onset.
Beal, Deryk S; Cheyne, Douglas O; Gracco, Vincent L; Quraan, Maher A; Taylor, Margot J; De Nil, Luc F
2010-10-01
We used magnetoencephalography to investigate auditory evoked responses to speech vocalizations and non-speech tones in adults who do and do not stutter. Neuromagnetic field patterns were recorded as participants listened to a 1 kHz tone, playback of their own productions of the vowel /i/ and vowel-initial words, and actively generated the vowel /i/ and vowel-initial words. Activation of the auditory cortex at approximately 50 and 100 ms was observed during all tasks. A reduction in the peak amplitudes of the M50 and M100 components was observed during the active generation versus passive listening tasks dependent on the stimuli. Adults who stutter did not differ in the amount of speech-induced auditory suppression relative to fluent speakers. Adults who stutter had shorter M100 latencies for the actively generated speaking tasks in the right hemisphere relative to the left hemisphere but the fluent speakers showed similar latencies across hemispheres. During passive listening tasks, adults who stutter had longer M50 and M100 latencies than fluent speakers. The results suggest that there are timing, rather than amplitude, differences in auditory processing during speech in adults who stutter and are discussed in relation to hypotheses of auditory-motor integration breakdown in stuttering.
Elevated audiovisual temporal interaction in patients with migraine without aura
2014-01-01
Background: Photophobia and phonophobia are the most prominent symptoms in patients with migraine without aura. Hypersensitivity to visual stimuli can lead to greater hypersensitivity to auditory stimuli, which suggests that the interaction between visual and auditory stimuli may play an important role in the pathogenesis of migraine. However, audiovisual temporal interactions in migraine have not been well studied. Therefore, our aim was to examine auditory and visual interactions in migraine. Methods: In this study, visual, auditory, and audiovisual stimuli with different temporal intervals between the visual and auditory stimuli were randomly presented to the left or right hemispace. During this time, the participants were asked to respond promptly to target stimuli. We used cumulative distribution functions to analyze the response times as a measure of audiovisual integration. Results: Our results showed that audiovisual integration was significantly elevated in the migraineurs compared with the normal controls (p < 0.05); however, audiovisual suppression was weaker in the migraineurs compared with the normal controls (p < 0.05). Conclusions: Our findings further objectively support the notion that migraineurs without aura are hypersensitive to external visual and auditory stimuli. Our study offers a new quantitative and objective method to evaluate hypersensitivity to audio-visual stimuli in patients with migraine. PMID:24961903
Song and speech: brain regions involved with perception and covert production.
Callan, Daniel E; Tsytsarev, Vassiliy; Hanakawa, Takashi; Callan, Akiko M; Katsuhara, Maya; Fukuyama, Hidenao; Turner, Robert
2006-07-01
This 3-T fMRI study investigates brain regions similarly and differentially involved with listening and covert production of singing relative to speech. Given the greater use of auditory-motor self-monitoring and imagery with respect to consonance in singing, brain regions involved with these processes are predicted to be differentially active for singing more than for speech. The stimuli consisted of six Japanese songs. A block design was employed in which the tasks for the subject were to listen passively to singing of the song lyrics, passively listen to speaking of the song lyrics, covertly sing the song lyrics visually presented, covertly speak the song lyrics visually presented, and to rest. The conjunction of passive listening and covert production tasks used in this study allow for general neural processes underlying both perception and production to be discerned that are not exclusively a result of stimulus induced auditory processing nor to low level articulatory motor control. Brain regions involved with both perception and production for singing as well as speech were found to include the left planum temporale/superior temporal parietal region, as well as left and right premotor cortex, lateral aspect of the VI lobule of posterior cerebellum, anterior superior temporal gyrus, and planum polare. Greater activity for the singing over the speech condition for both the listening and covert production tasks was found in the right planum temporale. Greater activity in brain regions involved with consonance, orbitofrontal cortex (listening task), subcallosal cingulate (covert production task) were also present for singing over speech. The results are consistent with the PT mediating representational transformation across auditory and motor domains in response to consonance for singing over that of speech. 
Hemispheric laterality was assessed by paired t tests between active voxels in the contrast of interest relative to the left-right flipped contrast of interest calculated from images normalized to the left-right reflected template. Consistent with some hypotheses regarding hemispheric specialization, a pattern of differential laterality for speech over singing (both covert production and listening tasks) occurs in the left temporal lobe, whereas, singing over speech (listening task only) occurs in right temporal lobe.
Speaker-independent phoneme recognition with a binaural auditory image model
NASA Astrophysics Data System (ADS)
Francis, Keith Ivan
1997-09-01
This dissertation presents phoneme recognition techniques based on a binaural fusion of outputs of the auditory image model and subsequent azimuth-selective phoneme recognition in a noisy environment. Background information concerning speech variations, phoneme recognition, current binaural fusion techniques and auditory modeling issues is explained. The research is constrained to sources in the frontal azimuthal plane of a simulated listener. A new method based on coincidence detection of neural activity patterns from the auditory image model of Patterson is used for azimuth-selective phoneme recognition. The method is tested in various levels of noise and the results are reported in contrast to binaural fusion methods based on various forms of correlation to demonstrate the potential of coincidence-based binaural phoneme recognition. This method overcomes smearing of fine speech detail typical of correlation-based methods. Nevertheless, coincidence is able to measure similarity of left and right inputs and fuse them into useful feature vectors for phoneme recognition in noise.
Speech Rhythms and Multiplexed Oscillatory Sensory Coding in the Human Brain
Gross, Joachim; Hoogenboom, Nienke; Thut, Gregor; Schyns, Philippe; Panzeri, Stefano; Belin, Pascal; Garrod, Simon
2013-01-01
Cortical oscillations are likely candidates for segmentation and coding of continuous speech. Here, we monitored continuous speech processing with magnetoencephalography (MEG) to unravel the principles of speech segmentation and coding. We demonstrate that speech entrains the phase of low-frequency (delta, theta) and the amplitude of high-frequency (gamma) oscillations in the auditory cortex. Phase entrainment is stronger in the right and amplitude entrainment is stronger in the left auditory cortex. Furthermore, edges in the speech envelope phase reset auditory cortex oscillations thereby enhancing their entrainment to speech. This mechanism adapts to the changing physical features of the speech envelope and enables efficient, stimulus-specific speech sampling. Finally, we show that within the auditory cortex, coupling between delta, theta, and gamma oscillations increases following speech edges. Importantly, all couplings (i.e., brain-speech and also within the cortex) attenuate for backward-presented speech, suggesting top-down control. We conclude that segmentation and coding of speech relies on a nested hierarchy of entrained cortical oscillations. PMID:24391472
Characterizing Response to Elemental Unit of Acoustic Imaging Noise: An fMRI Study
Luh, Wen-Ming; Talavage, Thomas M.
2010-01-01
Acoustic imaging noise produced during functional magnetic resonance imaging (fMRI) studies can hinder auditory fMRI research analysis by altering the properties of the acquired time-series data. Acoustic imaging noise can be especially confounding when estimating the time course of the hemodynamic response (HDR) in auditory event-related fMRI experiments. This study is motivated by the desire to establish a baseline function that can serve not only as a comparison to other quantities of acoustic imaging noise for determining how detrimental one's experimental noise is, but also as a foundation for a model that compensates for the response to acoustic imaging noise. Therefore, the amplitude and spatial extent of the HDR to the elemental unit of acoustic imaging noise (i.e., a single ping) associated with echoplanar acquisition were characterized and modeled. Results from this fMRI study at 1.5 T indicate that the group-averaged HDR in left and right auditory cortex to acoustic imaging noise (duration of 46 ms) has an estimated peak magnitude of 0.29% (right) to 0.48% (left) signal change from baseline, peaks between 3 and 5 s after stimulus presentation, and returns to baseline and remains within the noise range approximately 8 s after stimulus presentation. PMID:19304477
Revisiting gender, race, and ear differences in peripheral auditory function
NASA Astrophysics Data System (ADS)
Boothalingam, Sriram; Klyn, Niall A. M.; Stiepan, Samantha M.; Wilson, Uzma S.; Lee, Jungwha; Siegel, Jonathan H.; Dhar, Sumitrajit
2018-05-01
Various measures of auditory function are reported to be superior in females as compared to males, in African American compared to Caucasian individuals, and in right compared to left ears. We re-examined the influence of these subject variables on hearing thresholds and otoacoustic emissions (OAEs) in a sample of 887 human participants between 10 and 68 years of age. Even though the variables of interest here have been examined before, previous attempts have largely been limited to frequencies up to 8 kHz. We used state-of-the-art signal delivery and recording techniques that compensated for individual differences in ear canal acoustics, allowing us to measure hearing thresholds and OAEs up to 20 kHz. The use of these modern calibration and recording techniques provided the motivation for re-examining these commonly studied variables. While controlling for age, noise exposure history, and general health history, we attempted to isolate the effects of gender, race, and ear (left versus right) on hearing thresholds and OAEs. Our results challenge the notion of a right-ear advantage and question the existence of significant gender and race differences in both hearing thresholds and OAE levels. These results suggest that ear canal anatomy and acoustics should be important considerations when evaluating the influence of gender, race, and ear on peripheral auditory function.
Different neural activities support auditory working memory in musicians and bilinguals.
Alain, Claude; Khatamian, Yasha; He, Yu; Lee, Yunjo; Moreno, Sylvain; Leung, Ada W S; Bialystok, Ellen
2018-05-17
Musical training and bilingualism benefit executive functioning and working memory (WM); however, the brain networks supporting this advantage are not well specified. Here, we used functional magnetic resonance imaging and the n-back task to assess WM for spatial (sound location) and nonspatial (sound category) auditory information in musician monolinguals (musicians), nonmusician bilinguals (bilinguals), and nonmusician monolinguals (controls). Musicians outperformed bilinguals and controls on the nonspatial WM task. Overall, spatial and nonspatial WM were associated with greater activity in dorsal and ventral brain regions, respectively. Increasing WM load yielded similar recruitment of the anterior-posterior attention network in all three groups. In both tasks and both levels of difficulty, musicians showed lower brain activity than controls in superior frontal gyrus and dorsolateral prefrontal cortex (DLPFC) bilaterally, a finding that may reflect improved and more efficient use of neural resources. Bilinguals showed enhanced activity in language-related areas (i.e., left DLPFC and left supramarginal gyrus) relative to musicians and controls, which could be associated with the need to suppress interference associated with competing semantic activations from multiple languages. These findings indicate that the auditory WM advantage in musicians and bilinguals is mediated by different neural networks specific to each life experience.
The phonological short-term store-rehearsal system: patterns of impairment and neural correlates.
Vallar, G; Di Betta, A M; Silveri, M C
1997-06-01
Two left brain-damaged patients (L.A. and T.O.) with a selective impairment of auditory-verbal span are reported. Patient L.A. was unable to hold auditory-verbal material in the phonological store component of short-term memory. His performance was however normal on tasks requiring phonological judgements, which specifically involve the phonological output buffer component of the rehearsal process. He also showed some evidence that rehearsal contributed to the immediate retention of auditory-verbal material. Patient T.O. never made use of the rehearsal process in tasks assessing both immediate retention and the ability to make phonological judgements, but the memory capacity of the phonological short-term store was comparatively preserved. These contrasting patterns of impairment suggest that the phonological store component of verbal short-term memory was severely impaired in patient L.A., and spared, at least in part, in patient T.O. The rehearsal process was preserved in L.A., and primarily defective in T.O. The localisation of the lesions in the left hemisphere (L.A.: inferior parietal lobule, superior and middle temporal gyri; T.O.: sub-cortical premotor and rolandic regions, anterior insula) suggests that these two sub-components of phonological short-term memory have discrete anatomical correlates.
Auditory temporal processing in patients with temporal lobe epilepsy.
Lavasani, Azam Navaei; Mohammadkhani, Ghassem; Motamedi, Mahmoud; Karimi, Leyla Jalilvand; Jalaei, Shohreh; Shojaei, Fereshteh Sadat; Danesh, Ali; Azimi, Hadi
2016-07-01
Auditory temporal processing is the main feature of speech processing ability. Patients with temporal lobe epilepsy (TLE), despite their normal hearing sensitivity, may present speech recognition disorders. The present study was carried out to evaluate auditory temporal processing in patients with unilateral TLE. Participants were 25 patients with epilepsy, 11 with right temporal lobe epilepsy (RTLE) and 14 with left temporal lobe epilepsy (LTLE), with a mean age of 31.1 years, and 18 control participants with a mean age of 29.4 years. The experimental and control groups were evaluated via gaps-in-noise (GIN) and duration pattern sequence (DPS) tests. One-way ANOVA was run to analyze the data. The mean GIN threshold in the control group was better than that in participants with LTLE and RTLE. Also, the percentage of correct responses on the DPS test in the control group and in participants with RTLE was better than that in participants with LTLE. Patients with TLE have difficulties in temporal processing. These difficulties are more significant in patients with LTLE, likely because the left temporal lobe is specialized for the processing of temporal information.
Age-Related Changes in Binaural Interaction at Brainstem Level.
Van Yper, Lindsey N; Vermeire, Katrien; De Vel, Eddy F J; Beynon, Andy J; Dhooge, Ingeborg J M
2016-01-01
Age-related hearing loss hampers the ability to understand speech in adverse listening conditions. This is attributed to a complex interaction of changes in the peripheral and central auditory system. One aspect that may deteriorate across the lifespan is binaural interaction. The present study investigates binaural interaction at the level of the auditory brainstem. It is hypothesized that brainstem binaural interaction deteriorates with advancing age. Forty-two subjects of various ages participated in the study. Auditory brainstem responses (ABRs) were recorded using clicks and 500 Hz tone-bursts. ABRs were elicited by monaural right, monaural left, and binaural stimulation. Binaural interaction was investigated in two ways. First, grand averages of the binaural interaction component were computed for each age group. Second, wave V characteristics of the binaural ABR were compared with those of the summed left and right ABRs. Binaural interaction in the click ABR was demonstrated by shorter latencies and smaller amplitudes in the binaural compared with the summed monaural responses. For the 500 Hz tone-burst ABR, no latency differences were found. However, amplitudes were significantly smaller in the binaural than the summed monaural condition. An age effect was found for the 500 Hz tone-burst, but not for the click ABR. Brainstem binaural interaction seems to decline with age. Interestingly, these changes seem to be stimulus-dependent.
Wolter, Sibylla; Dudschig, Carolin; Kaup, Barbara
2017-11-01
This study explored differences between pianists and non-musicians during reading of sentences describing high- or low-pitched auditory events. Based on the embodied model of language comprehension, it was hypothesized that the experience of playing the piano encourages a corresponding association between high-pitched sounds and the right, and low-pitched sounds and the left. This pitch-space association is assumed to become elicited during understanding of sentences describing either a high- or low-pitched auditory event. In this study, pianists and non-musicians were tested based on the hypothesis that only pianists show a compatibility effect between implied pitch height and horizontal space, because only pianists have the corresponding experience with the piano keyboard. Participants read pitch-related sentences (e.g., the bear growls deeply, the soprano singer sings an aria) and judged whether the sentence was sensible or not by pressing either a left or right response key. The results indicated that only the pianists showed the predicted compatibility effect between implied pitch height and response location. Based on the results, it can be inferred that the experience of playing the piano led to an association between horizontal space and pitch height in pianists, while no such spatial association was elicited in non-musicians.
Hitler: a neurohistorical formulation.
Martindale, C; Hasenfus, N; Hines, D
1976-01-01
It is hypothesized that Adolf Hitler suffered from a constitutional left-side weakness that allowed his right cerebral hemisphere to exert a strong influence on his thought and behavior. Physical characteristics such as trembling of the left extremities, lack of a left testicle, and tendency to exhibit leftward eye movements are interpreted as supportive of the hypothesis. Right hemisphere dominance is consistent with a number of Hitler's personal traits such as praise of the irrational, automatic speech, auditory hallucinations, hypochondriasis, uncontrolled rages, and spatial and musical interests. Right hemisphere dominated thought may also have formed a basis for his two basic political policies: Lebensraum and anti-Semitism.
Lawo, Vera; Fels, Janina; Oberem, Josefa; Koch, Iring
2014-10-01
Using an auditory variant of task switching, we examined the ability to intentionally switch attention in a dichotic-listening task. In our study, participants responded selectively to one of two simultaneously presented auditory number words (spoken by a female and a male, one for each ear) by categorizing its numerical magnitude. The mapping of gender (female vs. male) and ear (left vs. right) was unpredictable. The to-be-attended feature for gender or ear, respectively, was indicated by a visual selection cue prior to auditory stimulus onset. In Experiment 1, explicitly cued switches of the relevant feature dimension (e.g., from gender to ear) and switches of the relevant feature within a dimension (e.g., from male to female) occurred in an unpredictable manner. We found large performance costs when the relevant feature switched, but switches of the relevant feature dimension incurred only small additional costs. The feature-switch costs were larger in ear-relevant than in gender-relevant trials. In Experiment 2, we replicated these findings using a simplified design (i.e., only within-dimension switches with blocked dimensions). In Experiment 3, we examined preparation effects by manipulating the cueing interval and found a preparation benefit only when ear was cued. Together, our data suggest that the large part of attentional switch costs arises from reconfiguration at the level of relevant auditory features (e.g., left vs. right) rather than feature dimensions (ear vs. gender). Additionally, our findings suggest that ear-based target selection benefits more from preparation time (i.e., time to direct attention to one ear) than gender-based target selection.
Resting-state brain networks revealed by granger causal connectivity in frogs.
Xue, Fei; Fang, Guangzhan; Yue, Xizi; Zhao, Ermi; Brauth, Steven E; Tang, Yezhong
2016-10-15
Resting-state networks (RSNs) refer to the spontaneous brain activity generated under resting conditions, which maintain the dynamic connectivity of functional brain networks for automatic perception or higher order cognitive functions. Here, Granger causal connectivity analysis (GCCA) was used to explore brain RSNs in the music frog (Babina daunchina) during different behavioral activity phases. The results reveal that a causal network in the frog brain can be identified during the resting state which reflects both brain lateralization and sexual dimorphism. Specifically (1) ascending causal connections from the left mesencephalon to both sides of the telencephalon are significantly higher than those from the right mesencephalon, while the right telencephalon gives rise to the strongest efferent projections among all brain regions; (2) causal connections from the left mesencephalon in females are significantly higher than those in males and (3) these connections are similar during both the high and low behavioral activity phases in this species although almost all electroencephalograph (EEG) spectral bands showed higher power in the high activity phase for all nodes. The functional features of this network match important characteristics of auditory perception in this species. Thus we propose that this causal network maintains auditory perception during the resting state for unexpected auditory inputs as resting-state networks do in other species. These results are also consistent with the idea that females are more sensitive to auditory stimuli than males during the reproductive season. In addition, these results imply that even when not behaviorally active, the frogs remain vigilant for detecting external stimuli. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
Koyama, S; Gunji, A; Yabe, H; Oiwa, S; Akahane-Yamada, R; Kakigi, R; Näätänen, R
2000-09-01
Evoked magnetic responses to speech sounds [R. Näätänen, A. Lehtokoski, M. Lennes, M. Cheour, M. Huotilainen, A. Iivonen, M. Vainio, P. Alku, R.J. Ilmoniemi, A. Luuk, J. Allik, J. Sinkkonen and K. Alho, Language-specific phoneme representations revealed by electric and magnetic brain responses. Nature, 385 (1997) 432-434.] were recorded from 13 right-handed Japanese subjects. Infrequently presented vowels ([o]) among repetitive vowels ([e]) elicited the magnetic counterpart of mismatch negativity, MMNm (bilaterally in nine subjects; in the left hemisphere alone in three subjects; in the right hemisphere alone in one subject). The estimated source of the MMNm was stronger in the left than in the right auditory cortex. The sources were located more posteriorly in the left than in the right auditory cortex. These findings are consistent with the results obtained in Finnish [R. Näätänen, A. Lehtokoski, M. Lennes, M. Cheour, M. Huotilainen, A. Iivonen, M. Vainio, P. Alku, R.J. Ilmoniemi, A. Luuk, J. Allik, J. Sinkkonen and K. Alho, Language-specific phoneme representations revealed by electric and magnetic brain responses. Nature, 385 (1997) 432-434.] [T. Rinne, K. Alho, P. Alku, M. Holi, J. Sinkkonen, J. Virtanen, O. Bertrand and R. Näätänen, Analysis of speech sounds is left-hemisphere predominant at 100-150 ms after sound onset. Neuroreport, 10 (1999) 1113-1117.] and English [K. Alho, J.F. Connolly, M. Cheour, A. Lehtokoski, M. Huotilainen, J. Virtanen, R. Aulanko and R.J. Ilmoniemi, Hemispheric lateralization in preattentive processing of speech sounds. Neurosci. Lett., 258 (1998) 9-12.] subjects. Instead of the P1m observed in Finnish [M. Tervaniemi, A. Kujala, K. Alho, J. Virtanen, R.J. Ilmoniemi and R. Näätänen, Functional specialization of the human auditory cortex in processing phonetic and musical sounds: A magnetoencephalographic (MEG) study. Neuroimage, 9 (1999) 330-336.] and English [K. Alho, J.F. Connolly, M. Cheour, A. Lehtokoski, M. Huotilainen, J. Virtanen, R. Aulanko and R.J. Ilmoniemi, Hemispheric lateralization in preattentive processing of speech sounds. Neurosci. Lett., 258 (1998) 9-12.] subjects, an M60 was elicited by both rare and frequent sounds prior to the MMNm. Both MMNm and M60 sources were located more posteriorly in the left than in the right hemisphere.
Sitek, Kevin R; Cai, Shanqing; Beal, Deryk S; Perkell, Joseph S; Guenther, Frank H; Ghosh, Satrajit S
2016-01-01
Persistent developmental stuttering is characterized by speech production disfluency and affects 1% of adults. The degree of impairment varies widely across individuals and the neural mechanisms underlying the disorder and this variability remain poorly understood. Here we elucidate compensatory mechanisms related to this variability in impairment using whole-brain functional and white matter connectivity analyses in persistent developmental stuttering. We found that people who stutter had stronger functional connectivity between cerebellum and thalamus than people with fluent speech, while stutterers with the least severe symptoms had greater functional connectivity between left cerebellum and left orbitofrontal cortex (OFC). Additionally, people who stutter had decreased functional and white matter connectivity among the perisylvian auditory, motor, and speech planning regions compared to typical speakers, but greater functional connectivity between the right basal ganglia and bilateral temporal auditory regions. Structurally, disfluency ratings were negatively correlated with white matter connections to left perisylvian regions and to the brain stem. Overall, we found increased connectivity among subcortical and reward network structures in people who stutter compared to controls. These connections were negatively correlated with stuttering severity, suggesting the involvement of cerebellum and OFC may underlie successful compensatory mechanisms by more fluent stutterers.
A lateralized functional auditory network is involved in anuran sexual selection.
Xue, Fei; Fang, Guangzhan; Yue, Xizi; Zhao, Ermi; Brauth, Steven E; Tang, Yezhong
2016-12-01
Right ear advantage (REA) exists in many land vertebrates in which the right ear and left hemisphere preferentially process conspecific acoustic stimuli such as those related to sexual selection. Although ecological and neural mechanisms for sexual selection have been widely studied, the brain networks involved are still poorly understood. In this study we used multi-channel electroencephalographic data in combination with Granger causal connectivity analysis to demonstrate, for the first time, that the auditory neural network interconnecting the left and right midbrain and forebrain functions asymmetrically in the Emei music frog (Babina daunchina), an anuran species which exhibits REA. The results showed the network was lateralized. Ascending connections between the mesencephalon and telencephalon were stronger on the left side while descending ones were stronger on the right, which matches the REA in this species and implies that inhibition from the forebrain may partly induce the REA. Connections from the telencephalon to the ipsilateral mesencephalon in response to white noise were highest in the non-reproductive stage while those in response to advertisement calls were highest in the reproductive stage, implying that attentional resources and living strategy shift when the frogs enter the reproductive season. Finally, these connection changes were sexually dimorphic, revealing sex differences in reproductive roles.
Impact of Audio-Visual Asynchrony on Lip-Reading Effects: A Neuromagnetic and Psychophysical Study
Yahata, Izumi; Kanno, Akitake; Sakamoto, Shuichi; Takanashi, Yoshitaka; Takata, Shiho; Nakasato, Nobukazu; Kawashima, Ryuta; Katori, Yukio
2016-01-01
The effects of asynchrony between audio and visual (A/V) stimuli on the N100m responses of magnetoencephalography in the left hemisphere were compared with those on the psychophysical responses in 11 participants. The latency and amplitude of N100m were significantly shortened and reduced in the left hemisphere by the presentation of visual speech as long as the temporal asynchrony between A/V stimuli was within 100 ms, but were not significantly affected with audio lags of -500 and +500 ms. However, some small effects were still preserved on average with audio lags of 500 ms, suggesting similar asymmetry of the temporal window to that observed in psychophysical measurements, which tended to be more robust (wider) for audio lags; i.e., the pattern of visual-speech effects as a function of A/V lag observed in the N100m in the left hemisphere grossly resembled that in psychophysical measurements on average, although the individual responses were somewhat varied. The present results suggest that the basic configuration of the temporal window of visual effects on auditory-speech perception could be observed from the early auditory processing stage. PMID:28030631
Disentangling syntax and intelligibility in auditory language comprehension.
Friederici, Angela D; Kotz, Sonja A; Scott, Sophie K; Obleser, Jonas
2010-03-01
Studies of the neural basis of spoken language comprehension typically focus on aspects of auditory processing by varying signal intelligibility, or on higher-level aspects of language processing such as syntax. Most studies in either of these threads of language research report brain activation including peaks in the superior temporal gyrus (STG) and/or the superior temporal sulcus (STS), but it is not clear why these areas are recruited in functionally different studies. The current fMRI study aims to disentangle the functional neuroanatomy of intelligibility and syntax in an orthogonal design. The data substantiate functional dissociations between STS and STG in the left and right hemispheres: first, manipulations of speech intelligibility yield bilateral mid-anterior STS peak activation, whereas syntactic phrase structure violations elicit strongly left-lateralized mid STG and posterior STS activation. Second, ROI analyses indicate all interactions of speech intelligibility and syntactic correctness to be located in the left frontal and temporal cortex, while the observed right-hemispheric activations reflect less specific responses to intelligibility and syntax. Our data demonstrate that the mid-to-anterior STS activation is associated with increasing speech intelligibility, while the mid-to-posterior STG/STS is more sensitive to syntactic information within the speech. 2009 Wiley-Liss, Inc.
Visual face-movement sensitive cortex is relevant for auditory-only speech recognition.
Riedel, Philipp; Ragert, Patrick; Schelinski, Stefanie; Kiebel, Stefan J; von Kriegstein, Katharina
2015-07-01
It is commonly assumed that the recruitment of visual areas during audition is not relevant for performing auditory tasks ('auditory-only view'). According to an alternative view, however, the recruitment of visual cortices is thought to optimize auditory-only task performance ('auditory-visual view'). This alternative view is based on functional magnetic resonance imaging (fMRI) studies. These studies have shown, for example, that even if there is only auditory input available, face-movement sensitive areas within the posterior superior temporal sulcus (pSTS) are involved in understanding what is said (auditory-only speech recognition). This is particularly the case when speakers are known audio-visually, that is, after brief voice-face learning. Here we tested whether the left pSTS involvement is causally related to performance in auditory-only speech recognition when speakers are known by face. To test this hypothesis, we applied cathodal transcranial direct current stimulation (tDCS) to the pSTS during (i) visual-only speech recognition of a speaker known only visually to participants and (ii) auditory-only speech recognition of speakers they learned by voice and face. We defined the cathode as active electrode to down-regulate cortical excitability by hyperpolarization of neurons. tDCS to the pSTS interfered with visual-only speech recognition performance compared to a control group without pSTS stimulation (tDCS to BA6/44 or sham). Critically, compared to controls, pSTS stimulation additionally decreased auditory-only speech recognition performance selectively for voice-face learned speakers. These results are important in two ways. First, they provide direct evidence that the pSTS is causally involved in visual-only speech recognition; this confirms a long-standing prediction of current face-processing models. Secondly, they show that visual face-sensitive pSTS is causally involved in optimizing auditory-only speech recognition. 
These results are in line with the 'auditory-visual view' of auditory speech perception, which assumes that auditory speech recognition is optimized by using predictions from previously encoded speaker-specific audio-visual internal models. Copyright © 2015 Elsevier Ltd. All rights reserved.
Kell, Alexander J E; Yamins, Daniel L K; Shook, Erica N; Norman-Haignere, Sam V; McDermott, Josh H
2018-05-02
A core goal of auditory neuroscience is to build quantitative models that predict cortical responses to natural sounds. Reasoning that a complete model of auditory cortex must solve ecologically relevant tasks, we optimized hierarchical neural networks for speech and music recognition. The best-performing network contained separate music and speech pathways following early shared processing, potentially replicating human cortical organization. The network performed both tasks as well as humans and exhibited human-like errors despite not being optimized to do so, suggesting common constraints on network and human performance. The network predicted fMRI voxel responses substantially better than traditional spectrotemporal filter models throughout auditory cortex. It also provided a quantitative signature of cortical representational hierarchy-primary and non-primary responses were best predicted by intermediate and late network layers, respectively. The results suggest that task optimization provides a powerful set of tools for modeling sensory systems. Copyright © 2018 Elsevier Inc. All rights reserved.
2013-01-01
Background: There is an accumulating body of evidence indicating that neuronal functional specificity to basic sensory stimulation is mutable and subject to experience. Although fMRI experiments have investigated changes in brain activity after relative to before perceptual learning, brain activity during perceptual learning has not been explored. This work investigated brain activity related to auditory frequency discrimination learning using a variational Bayesian approach for source localization, during simultaneous EEG and fMRI recording. We investigated whether the practice effects are determined solely by activity in stimulus-driven mechanisms or whether high-level attentional mechanisms, which are linked to the perceptual task, control the learning process. Results: The results of fMRI analyses revealed significant attention- and learning-related activity in the left and right superior temporal gyrus (STG) as well as the left inferior frontal gyrus (IFG). Current source localization of simultaneously recorded EEG data was estimated using a variational Bayesian method. Analysis of current localized to the left inferior frontal gyrus and the right superior temporal gyrus revealed gamma band activity correlated with behavioral performance. Conclusions: Rapid improvement in task performance is accompanied by plastic changes in the sensory cortex as well as superior areas gated by selective attention. Together the fMRI and EEG results suggest that gamma band activity in the right STG and left IFG plays an important role during perceptual learning. PMID:23316957
Electrophysiological Evidence for the Sources of the Masking Level Difference.
Fowler, Cynthia G
2017-08-16
The purpose of this review article is to review evidence from auditory evoked potential studies to describe the contributions of the auditory brainstem and cortex to the generation of the masking level difference (MLD). A literature review was performed, focusing on the auditory brainstem, middle, and late latency responses used in protocols similar to those used to generate the behavioral MLD. Temporal coding of the signals necessary for generating the MLD occurs in the auditory periphery and brainstem. Brainstem disorders up to wave III of the auditory brainstem response (ABR) can disrupt the MLD. The full MLD requires input to the generators of the auditory late latency potentials to produce all characteristics of the MLD; these characteristics include threshold differences for various binaural signal and noise conditions. Studies using central auditory lesions are beginning to identify the cortical effects on the MLD. The MLD requires auditory processing from the periphery to cortical areas. A healthy auditory periphery and brainstem codes temporal synchrony, which is essential for the ABR. Threshold differences require engaging cortical function beyond the primary auditory cortex. More studies using cortical lesions and evoked potentials or imaging should clarify the specific cortical areas involved in the MLD.
Primary Synovial Sarcoma of External Auditory Canal: A Case Report.
Devi, Aarani; Jayakumar, Krishnannair L L
2017-07-20
Synovial sarcoma is a rare malignant tumor of mesenchymal origin. Primary synovial sarcoma of the ear is extremely rare and to date only two cases have been published in English medical literature. Though the tumor is reported to have an aggressive nature, early diagnosis and treatment may improve the outcome. Here, we report a rare case of synovial sarcoma of the external auditory canal in an 18-year-old male who was managed by chemotherapy and referred for palliation due to tumor progression.
Auditory agnosia as a clinical symptom of childhood adrenoleukodystrophy.
Furushima, Wakana; Kaga, Makiko; Nakamura, Masako; Gunji, Atsuko; Inagaki, Masumi
2015-08-01
To investigate detailed auditory features in patients with auditory impairment as the first clinical symptoms of childhood adrenoleukodystrophy (CSALD). Three patients who had hearing difficulty as the first clinical signs and/or symptoms of ALD. Precise examination of the clinical characteristics of hearing and auditory function was performed, including assessments of pure tone audiometry, verbal sound discrimination, otoacoustic emission (OAE), and auditory brainstem response (ABR), as well as an environmental sound discrimination test, a sound lateralization test, and a dichotic listening test (DLT). The auditory pathway was evaluated by MRI in each patient. Poor response to calling was detected in all patients. Two patients were not aware of their hearing difficulty, and had been diagnosed with normal hearing by otolaryngologists at first. Pure-tone audiometry disclosed normal hearing in all patients. All patients showed a normal wave V ABR threshold. Three patients showed obvious difficulty in discriminating verbal sounds, environmental sounds, and sound lateralization and strong left-ear suppression in a dichotic listening test. However, once they discriminated verbal sounds, they correctly understood the meaning. Two patients showed elongation of the I-V and III-V interwave intervals in ABR, but one showed no abnormality. MRIs of these three patients revealed signal changes in auditory radiation including in other subcortical areas. The hearing features of these subjects were diagnosed as auditory agnosia and not aphasia. It should be emphasized that when patients are suspected to have hearing impairment but have no abnormalities in pure tone audiometry and/or ABR, this should not be diagnosed immediately as psychogenic response or pathomimesis, but auditory agnosia must also be considered. Copyright © 2014 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.
Sensory-to-motor integration during auditory repetition: a combined fMRI and lesion study
Parker Jones, ‘Ōiwi; Prejawa, Susan; Hope, Thomas M. H.; Oberhuber, Marion; Seghier, Mohamed L.; Leff, Alex P.; Green, David W.; Price, Cathy J.
2014-01-01
The aim of this paper was to investigate the neurological underpinnings of auditory-to-motor translation during auditory repetition of unfamiliar pseudowords. We tested two different hypotheses. First we used functional magnetic resonance imaging in 25 healthy subjects to determine whether a functionally defined area in the left temporo-parietal junction (TPJ), referred to as Sylvian-parietal-temporal region (Spt), reflected the demands on auditory-to-motor integration during the repetition of pseudowords relative to a semantically mediated nonverbal sound-naming task. The experiment also allowed us to test alternative accounts of Spt function, namely that Spt is involved in subvocal articulation or auditory processing that can be driven either bottom-up or top-down. The results did not provide convincing evidence that activation increased in either Spt or any other cortical area when non-semantic auditory inputs were being translated into motor outputs. Instead, the results were most consistent with Spt responding to bottom up or top down auditory processing, independent of the demands on auditory-to-motor integration. Second, we investigated the lesion sites in eight patients who had selective difficulties repeating heard words but with preserved word comprehension, picture naming and verbal fluency (i.e., conduction aphasia). All eight patients had white-matter tract damage in the vicinity of the arcuate fasciculus and only one of the eight patients had additional damage to the Spt region, defined functionally in our fMRI data. Our results are therefore most consistent with the neurological tradition that emphasizes the importance of the arcuate fasciculus in the non-semantic integration of auditory and motor speech processing. PMID:24550807
Auditory risk estimates for youth target shooting
Meinke, Deanna K.; Murphy, William J.; Finan, Donald S.; Lankford, James E.; Flamme, Gregory A.; Stewart, Michael; Soendergaard, Jacob; Jerome, Trevor W.
2015-01-01
Objective: To characterize the impulse noise exposure and auditory risk for youth recreational firearm users engaged in outdoor target shooting events. The youth shooting positions are typically standing or sitting at a table, which places the firearm closer to the ground or reflective surface when compared to adult shooters. Design: Acoustic characteristics were examined and the auditory risk estimates were evaluated using contemporary damage-risk criteria for unprotected adult listeners and the 120-dB peak limit suggested by the World Health Organization (1999) for children. Study sample: Impulses were generated by 26 firearm/ammunition configurations representing rifles, shotguns, and pistols used by youth. Measurements were obtained relative to a youth shooter’s left ear. Results: All firearms generated peak levels that exceeded the 120 dB peak limit suggested by the WHO for children. In general, shooting from the seated position over a tabletop increases the peak levels and LAeq8, and reduces the unprotected maximum permissible exposures (MPEs) for both rifles and pistols. Pistols pose the greatest auditory risk when fired over a tabletop. Conclusion: Youth should utilize smaller caliber weapons, preferably from the standing position, and always wear hearing protection whenever engaging in shooting activities to reduce the risk for auditory damage. PMID:24564688
Brown, Trecia A; Joanisse, Marc F; Gati, Joseph S; Hughes, Sarah M; Nixon, Pam L; Menon, Ravi S; Lomber, Stephen G
2013-01-01
Much of what is known about the cortical organization for audition in humans draws from studies of auditory cortex in the cat. However, these data build largely on electrophysiological recordings that are both highly invasive and provide less evidence concerning macroscopic patterns of brain activation. Optical imaging, using intrinsic signals or dyes, allows visualization of surface-based activity but is also quite invasive. Functional magnetic resonance imaging (fMRI) overcomes these limitations by providing a large-scale perspective of distributed activity across the brain in a non-invasive manner. The present study used fMRI to characterize stimulus-evoked activity in auditory cortex of an anesthetized (ketamine/isoflurane) cat, focusing specifically on the blood-oxygen-level-dependent (BOLD) signal time course. Functional images were acquired for adult cats in a 7 T MRI scanner. To determine the BOLD signal time course, we presented 1s broadband noise bursts between widely spaced scan acquisitions at randomized delays (1-12 s in 1s increments) prior to each scan. Baseline trials in which no stimulus was presented were also acquired. Our results indicate that the BOLD response peaks at about 3.5s in primary auditory cortex (AI) and at about 4.5 s in non-primary areas (AII, PAF) of cat auditory cortex. The observed peak latency is within the range reported for humans and non-human primates (3-4 s). The time course of hemodynamic activity in cat auditory cortex also occurs on a comparatively shorter scale than in cat visual cortex. The results of this study will provide a foundation for future auditory fMRI studies in the cat to incorporate these hemodynamic response properties into appropriate analyses of cat auditory cortex. Copyright © 2012 Elsevier Inc. All rights reserved.
Serotonin 1A receptors, depression, and memory in temporal lobe epilepsy.
Theodore, William H; Wiggs, Edythe A; Martinez, Ashley R; Dustin, Irene H; Khan, Omar I; Appel, Shmuel; Reeves-Tyer, Pat; Sato, Susumu
2012-01-01
Memory deficits and depression are common in patients with temporal lobe epilepsy (TLE). Previous positron emission tomography (PET) studies have shown reduced mesial temporal 5HT1A-receptor binding in these patients. We examined the relationships among verbal memory performance, depression, and 5HT1A-receptor binding measured with 18F-trans-4-fluoro-N-2-[4-(2-methoxyphenyl)piperazin-1-yl]ethyl-N-(2-pyridyl) cyclohexane carboxamide (18FCWAY) PET in a cross-sectional study. We studied 40 patients (24 male; mean age 34.5 ± 10.7 years) with TLE. Seizure diagnosis and focus localization were based on ictal video-electroencephalography (EEG) recording. Patients had neuropsychological testing with Wechsler Adult Intelligence Score III (WAIS III) and Wechsler Memory Score III (WMS III) on stable antiepileptic drug (AED) regimens at least 24 h since the last seizure. Beck Depression Inventory (BDI) scores were obtained. We performed interictal PET with 18FCWAY, a fluorinated derivative of WAY 100635, a highly specific 5HT1A ligand, and structural magnetic resonance imaging (MRI) scans to estimate partial volume and plasma free fraction corrected 18FCWAY volume of distribution (V/f1). Hippocampal V/f1 was significantly lower in area ipsilateral than contralateral to the epileptic focus (73.7 ± 27.3 vs. 95.4 ± 28.4; p < 0.001). We found a significant relation between both left hippocampal 18FCWAY V/f1 (r = 0.41; p < 0.02) and left hippocampal volume (r = 0.36; p < 0.03) and delayed auditory memory score. On multiple regression, there was a significant effect of the interaction of left hippocampal 18FCWAY V/f1 and left hippocampal volume on delayed auditory memory, but not of either alone. High collinearity was present. In an analysis of variance including the side of the seizure focus, the effect of left hippocampal 18FCWAY V/f1 but not focus laterality retained significance. Mean BDI was 8.3 ± 7.0. 
There was a significant inverse relation between BDI and 18FCWAY V/f1 ipsilateral to the patient's epileptic focus (r = 0.38, p < 0.02). There was no difference between patients with a right or left temporal focus. There was no relation between BDI and immediate or delayed auditory memory. Our study suggests that reduced left hippocampal 5HT1A-receptor binding may play a role in memory impairment in patients with TLE. Wiley Periodicals, Inc. © 2011 International League Against Epilepsy.
Plonek, M; Nicpoń, J; Kubiak, K; Wrzosek, M
2017-03-01
Auditory plasticity in response to unilateral deafness has been reported in various animal species. Subcortical changes occurring in unilaterally deaf young dogs using the brainstem auditory evoked response have not been evaluated yet. The aim of this study was to assess the brainstem auditory evoked response findings in dogs with unilateral hearing loss, and compare them with recordings obtained from healthy dogs. Brainstem auditory evoked responses (amplitudes and latencies of waves I, II, III, V, the V/I wave amplitude ratio, wave I-V, I-III and III-V interpeak intervals) were studied retrospectively in forty-six privately owned dogs, which were either unilaterally deaf or had bilateral hearing. The data obtained from the hearing ears in unilaterally deaf dogs were compared to values obtained from their healthy littermates. Statistically significant differences in the amplitude of wave III and the V/I wave amplitude ratio at 75 dB nHL were found between the group of unilaterally deaf puppies and the control group. The recordings of dogs with single-sided deafness were compared, and the results showed no statistically significant differences in the latencies and amplitudes of the waves between left- (AL) and right-sided (AR) deafness. The recordings of the brainstem auditory evoked response in canines with unilateral inborn deafness in this study varied compared to recordings from healthy dogs. Future studies looking into electrophysiological assessment of hearing in conjunction with imaging modalities to determine subcortical auditory plasticity and auditory lateralization in unilaterally deaf dogs are warranted.
Effect of conductive hearing loss on central auditory function.
Bayat, Arash; Farhadi, Mohammad; Emamdjomeh, Hesam; Saki, Nader; Mirmomeni, Golshan; Rahim, Fakher
It has been demonstrated that long-term Conductive Hearing Loss (CHL) may influence the precise detection of the temporal features of acoustic signals, or Auditory Temporal Processing (ATP). It can be argued that ATP may be the underlying component of many central auditory processing capabilities such as speech comprehension or sound localization. Little is known about the consequences of CHL on temporal aspects of central auditory processing. This study was designed to assess auditory temporal processing ability in individuals with chronic CHL. During this analytical cross-sectional study, 52 patients with mild to moderate chronic CHL and 52 normal-hearing listeners (control), aged between 18 and 45 years, were recruited. In order to evaluate auditory temporal processing, the Gaps-in-Noise (GIN) test was used. The results obtained for each ear were analyzed based on the gap perception threshold and the percentage of correct responses. The average GIN threshold was significantly smaller for the control group than for the CHL group for both ears (right: p=0.004; left: p<0.001). Individuals with CHL had significantly fewer correct responses than individuals with normal hearing for both sides (p<0.001). No correlation was found between GIN performance and degree of hearing loss in either group (p>0.05). The results suggest reduced auditory temporal processing ability in adults with CHL compared to normal hearing subjects. Therefore, developing a clinical protocol to evaluate auditory temporal processing in this population is recommended. Copyright © 2017 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
A comparison of aphasic and non-brain-injured adults on a dichotic CV-syllable listening task.
Shanks, J; Ryan, W
1976-06-01
A dichotic CV-syllable listening task was administered to a group of eleven non-brain-injured adults and to a group of eleven adult aphasics. The results of this study may be summarized as follows: 1) The group of non-brain-injured adults showed a slight right ear advantage for dichotically presented CV-syllables. 2) In comparison with the control group, the aphasic group showed a bilateral deficit in response to the dichotic CV-syllables, superimposed on a non-significant right ear advantage. 3) The aphasic group demonstrated a great deal of intersubject variability on the dichotic task, with six aphasics showing a right ear preference for the stimuli. The non-brain-injured subjects performed more homogeneously on the task. 4) The two subgroups of aphasics, a right ear advantage group and a left ear advantage group, performed significantly differently on the dichotic listening task. 5) Single-correct data analysis proved valuable by deleting accuracy of report for an examination of trials in which there was true competition for the single left-hemispheric speech processor. These results were analyzed in terms of a functional model of auditory processing. In view of this model, the bilateral deficit in dichotic performance of the aphasic group was accounted for by the presence of a lesion within the dominant left hemisphere, where the speech signals from both ears converge for final processing. The right ear advantage shown by one aphasic subgroup was explained by a lesion interfering with the corpus callosal pathways from the left hemisphere; the left ear advantage observed within the other subgroup was explained by a lesion in the area of the auditory processor of the left hemisphere.
Auditory and Visual Interhemispheric Communication in Musicians and Non-Musicians
Woelfle, Rebecca; Grahn, Jessica A.
2013-01-01
The corpus callosum (CC) is a brain structure composed of axon fibres linking the right and left hemispheres. Musical training is associated with larger midsagittal cross-sectional area of the CC, suggesting that interhemispheric communication may be faster in musicians. Here we compared interhemispheric transmission times (ITTs) for musicians and non-musicians. ITT was measured by comparing simple reaction times to stimuli presented to the same hemisphere that controlled a button-press response (uncrossed reaction time), or to the contralateral hemisphere (crossed reaction time). Both visual and auditory stimuli were tested. We predicted that the crossed-uncrossed difference (CUD) for musicians would be smaller than for non-musicians as a result of faster interhemispheric transfer times. We did not expect a difference in CUDs between the visual and auditory modalities for either musicians or non-musicians, as previous work indicates that interhemispheric transfer may happen through the genu of the CC, which contains motor fibres rather than sensory fibres. There were no significant differences in CUDs between musicians and non-musicians. However, auditory CUDs were significantly smaller than visual CUDs. Although this auditory-visual difference was larger in musicians than non-musicians, the interaction between modality and musical training was not significant. Therefore, although musical training does not significantly affect ITT, the crossing of auditory information between hemispheres appears to be faster than visual information, perhaps because subcortical pathways play a greater role for auditory interhemispheric transfer. PMID:24386382
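The crossed-uncrossed difference described above is a simple subtraction of mean reaction times. A minimal sketch, using made-up reaction times purely for illustration (the function name and values are not from the study):

```python
# Sketch of the crossed-uncrossed difference (CUD), a common estimate of
# interhemispheric transfer time: CUD = mean crossed RT - mean uncrossed RT.

def mean(xs):
    return sum(xs) / len(xs)

def crossed_uncrossed_difference(crossed_rts_ms, uncrossed_rts_ms):
    """Return the CUD in milliseconds from two lists of reaction times."""
    return mean(crossed_rts_ms) - mean(uncrossed_rts_ms)

# Illustrative (made-up) reaction times in milliseconds:
crossed = [252.0, 248.0, 251.0, 249.0]    # stimulus contralateral to responding hemisphere
uncrossed = [249.0, 247.0, 248.0, 248.0]  # stimulus ipsilateral to responding hemisphere
print(crossed_uncrossed_difference(crossed, uncrossed))  # → 2.0
```

A smaller CUD would indicate faster interhemispheric transfer, which is the comparison the study makes between musicians and non-musicians and between modalities.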
Benzodiazepine temazepam suppresses the transient auditory 40-Hz response amplitude in humans.
Jääskeläinen, I P; Hirvonen, J; Saher, M; Pekkonen, E; Sillanaukee, P; Näätänen, R; Tiitinen, H
1999-06-18
To discern the role of the GABA(A) receptors in the generation and attentive modulation of the transient auditory 40-Hz response, the effects of the benzodiazepine temazepam (10 mg) were studied in 10 healthy social drinkers, using a double-blind placebo-controlled design. Standard tones of 300 Hz and rare deviant tones of 330 Hz were presented to the left ear, and 1000-Hz standards and 1100-Hz deviants to the right ear of the subjects. Subjects attended to a designated ear and were to detect deviants therein while ignoring tones in the other. Temazepam significantly suppressed the amplitude of the 40-Hz response, the effect being equal for attended and non-attended tone responses. This suggests involvement of GABA(A) receptors in the generation of the transient auditory 40-Hz response, but not in its attentive modulation.
Primary Auditory Cortex is Required for Anticipatory Motor Response.
Li, Jingcheng; Liao, Xiang; Zhang, Jianxiong; Wang, Meng; Yang, Nian; Zhang, Jun; Lv, Guanghui; Li, Haohong; Lu, Jian; Ding, Ran; Li, Xingyi; Guang, Yu; Yang, Zhiqi; Qin, Han; Jin, Wenjun; Zhang, Kuan; He, Chao; Jia, Hongbo; Zeng, Shaoqun; Hu, Zhian; Nelken, Israel; Chen, Xiaowei
2017-06-01
The ability of the brain to predict future events based on the pattern of recent sensory experience is critical for guiding an animal's behavior. Neocortical circuits for ongoing processing of sensory stimuli are extensively studied, but their contributions to the anticipation of upcoming sensory stimuli remain less understood. We therefore used in vivo cellular imaging and fiber photometry to record from mouse primary auditory cortex to elucidate its role in processing anticipated stimulation. We found neuronal ensembles in layers 2/3, 4, and 5 that were activated in relation to anticipated sound events following rhythmic stimulation. These neuronal activities correlated with the occurrence of anticipatory motor responses in an auditory learning task. Optogenetic manipulation experiments revealed an essential role of such neuronal activities in producing the anticipatory behavior. These results strongly suggest that the neural circuits of primary sensory cortex are critical for coding predictive information and transforming it into anticipatory motor behavior. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Cannabis Dampens the Effects of Music in Brain Regions Sensitive to Reward and Emotion
Pope, Rebecca A; Wall, Matthew B; Bisby, James A; Luijten, Maartje; Hindocha, Chandni; Mokrysz, Claire; Lawn, Will; Moss, Abigail; Bloomfield, Michael A P; Morgan, Celia J A; Nutt, David J; Curran, H Valerie
2018-01-01
Background Despite the current shift towards permissive cannabis policies, few studies have investigated the pleasurable effects users seek. Here, we investigate the effects of cannabis on listening to music, a rewarding activity that frequently occurs in the context of recreational cannabis use. We additionally tested how these effects are influenced by cannabidiol, which may offset cannabis-related harms. Methods Across 3 sessions, 16 cannabis users inhaled cannabis with cannabidiol, cannabis without cannabidiol, and placebo. We compared their response to music relative to control excerpts of scrambled sound during functional Magnetic Resonance Imaging within regions identified in a meta-analysis of music-evoked reward and emotion. All results were False Discovery Rate corrected (P<.05). Results Compared with placebo, cannabis without cannabidiol dampened response to music in bilateral auditory cortex (right: P=.005, left: P=.008), right hippocampus/parahippocampal gyrus (P=.025), right amygdala (P=.025), and right ventral striatum (P=.033). Across all sessions, the effects of music in this ventral striatal region correlated with pleasure ratings (P=.002) and increased functional connectivity with auditory cortex (right: P<.001, left: P<.001), supporting its involvement in music reward. Functional connectivity between right ventral striatum and auditory cortex was increased by cannabidiol (right: P=.003, left: P=.030), and cannabis with cannabidiol did not differ from placebo on any functional Magnetic Resonance Imaging measures. Both types of cannabis increased ratings of wanting to listen to music (P<.002) and enhanced sound perception (P<.001). Conclusions Cannabis dampens the effects of music in brain regions sensitive to reward and emotion. These effects were offset by a key cannabis constituent, cannabidiol. PMID:29025134
Functional connectivity studies of patients with auditory verbal hallucinations.
Hoffman, Ralph E; Hampson, Michelle
2011-12-02
Functional connectivity (FC) studies of brain mechanisms leading to auditory verbal hallucinations (AVHs) utilizing functional magnetic resonance imaging (fMRI) data are reviewed. Initial FC studies utilized fMRI data collected during performance of various tasks, which suggested frontotemporal disconnection and/or source-monitoring disturbances. Later FC studies have utilized resting (no-task) fMRI data. These studies have produced a mixed picture of disconnection and hyperconnectivity involving different pathways associated with AVHs. Results of our most recent FC study of AVHs are reviewed in detail. This study suggests that the core mechanism producing AVHs involves not a single pathway, but a more complex functional loop. Components of this loop include Wernicke's area and its right homologue, the left inferior frontal cortex, and the putamen. It is noteworthy that the putamen appears to play a critical role in the generation of spontaneous language, and in determining whether auditory stimuli are registered consciously as percepts. Excessive functional coordination linking this region with the Wernicke's seed region in patients with schizophrenia could, therefore, generate an overabundance of potentially conscious language representations. In our model, intact FC in the other two legs of the corticostriatal loop (Wernicke's with left IFG, and left IFG with putamen) appeared to allow hyperconnectivity linking the putamen and Wernicke's area (common to schizophrenia overall) to be expressed as conscious hallucinations of speech. Recommendations for future studies are discussed, including inclusion of multiple methodologies applied to the same subjects in order to compare and contrast different mechanistic hypotheses, utilizing EEG to better parse the time-course of neural synchronization leading to AVHs, and ascertaining experiential subtypes of AVHs that may reflect distinct mechanisms.
How do auditory cortex neurons represent communication sounds?
Gaucher, Quentin; Huetz, Chloé; Gourévitch, Boris; Laudanski, Jonathan; Occelli, Florian; Edeline, Jean-Marc
2013-11-01
A major goal in auditory neuroscience is to characterize how communication sounds are represented at the cortical level. The present review investigates the role of auditory cortex in the processing of speech, bird songs and other vocalizations, which are all spectrally and temporally highly structured sounds. Whereas earlier studies simply looked for neurons exhibiting higher firing rates to particular conspecific vocalizations over their modified, artificially synthesized versions, more recent studies have determined the coding capacity of temporal spike patterns, which are prominent in primary and non-primary areas (and also in non-auditory cortical areas). In several cases, this information seems to be correlated with the behavioral performance of human or animal subjects, suggesting that spike-timing-based coding strategies might set the foundations of our perceptual abilities. Also, it is now clear that the responses of auditory cortex neurons are highly nonlinear and that their responses to natural stimuli cannot be predicted from their responses to artificial stimuli such as moving ripples and broadband noises. Since auditory cortex neurons cannot follow rapid fluctuations of the envelope of vocalizations, they respond only at specific time points during communication sounds, which can serve as temporal markers for integrating the temporal and spectral processing taking place at subcortical relays. Thus, the temporally sparse code of auditory cortex neurons can be considered a first step in generating high-level representations of communication sounds independent of the acoustic characteristics of these sounds. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.
Cortical Interactions Underlying the Production of Speech Sounds
ERIC Educational Resources Information Center
Guenther, Frank H.
2006-01-01
Speech production involves the integration of auditory, somatosensory, and motor information in the brain. This article describes a model of speech motor control in which a feedforward control system, involving premotor and primary motor cortex and the cerebellum, works in concert with auditory and somatosensory feedback control systems that…
The Kindergarten Auditory Screening Test as a Predictor of Reading Disability
ERIC Educational Resources Information Center
Margolis, Howard
1976-01-01
Correlation coefficients were obtained between the Kindergarten Auditory Screening Test (KAST), the Metropolitan Readiness Test (MRT), and the Gates MacGinitie Reading Tests, Primary Form (GMRT). Neither the coefficients obtained nor an examination of extreme groups indicated that the KAST was an effective predictor of reading disability. (Author)
Auditory hallucinations: nomenclature and classification.
Blom, Jan Dirk; Sommer, Iris E C
2010-03-01
The literature on the possible neurobiologic correlates of auditory hallucinations is expanding rapidly. For an adequate understanding and linking of this emerging knowledge, a clear and uniform nomenclature is a prerequisite. The primary purpose of the present article is to provide an overview of the nomenclature and classification of auditory hallucinations. Relevant data were obtained from books, PubMed, Embase, and the Cochrane Library. The results are presented in the form of several classificatory arrangements of auditory hallucinations, governed by the principles of content, perceived source, perceived vivacity, relation to the sleep-wake cycle, and association with suspected neurobiologic correlates. This overview underscores the necessity to reappraise the concepts of auditory hallucinations developed during the era of classic psychiatry, to incorporate them into our current nomenclature and classification of auditory hallucinations, and to test them empirically with the aid of the structural and functional imaging techniques currently available.
Ebisumoto, Koji; Okami, Kenji; Hamada, Masashi; Maki, Daisuke; Sakai, Akihiro; Saito, Kosuke; Shimizu, Fukuko; Kaneda, Shoji; Iida, Masahiro
2018-06-01
The prognosis of advanced temporal bone cancer is poor, because complete surgical resection is difficult to achieve. Chemoradiotherapy is one of the available curative treatment options; however, its systemic effects on the patient restrict the use of this treatment. A 69-year-old female (who required peritoneal dialysis) presented at our clinic with T4 left external auditory canal cancer and was treated with cetuximab plus radiotherapy (RT). The primary lesion showed complete response. The patient is currently alive with no evidence of disease two years after completion of the treatment and does not show any late toxicity. This is the first report of a patient with advanced temporal bone cancer treated with RT plus cetuximab. Cetuximab plus RT might be a treatment alternative for patients with advanced temporal bone cancer. Copyright © 2017 Elsevier B.V. All rights reserved.
Alderson-Day, Ben; McCarthy-Jones, Simon; Fernyhough, Charles
2018-01-01
Resting state networks (RSNs) are thought to reflect the intrinsic functional connectivity of brain regions. Alterations to RSNs have been proposed to underpin various kinds of psychopathology, including the occurrence of auditory verbal hallucinations (AVH). This review outlines the main hypotheses linking AVH and the resting state, and assesses the evidence for alterations to intrinsic connectivity provided by studies of resting fMRI in AVH. The influence of hallucinations during data acquisition, medication confounds, and movement are also considered. Despite a large variety of analytic methods and designs being deployed, it is possible to conclude that resting connectivity in the left temporal lobe in general and left superior temporal gyrus in particular are disrupted in AVH. There is also preliminary evidence of atypical connectivity in the default mode network and its interaction with other RSNs. Recommendations for future research include the adoption of a common analysis protocol to allow for more overlapping datasets and replication of intrinsic functional connectivity alterations. PMID:25956256
Species-specific calls evoke asymmetric activity in the monkey's temporal poles.
Poremba, Amy; Malloy, Megan; Saunders, Richard C; Carson, Richard E; Herscovitch, Peter; Mishkin, Mortimer
2004-01-29
It has often been proposed that the vocal calls of monkeys are precursors of human speech, in part because they provide critical information to other members of the species who rely on them for survival and social interactions. Both behavioural and lesion studies suggest that monkeys, like humans, use the auditory system of the left hemisphere preferentially to process vocalizations. To investigate the pattern of neural activity that might underlie this particular form of functional asymmetry in monkeys, we measured local cerebral metabolic activity while the animals listened passively to species-specific calls compared with a variety of other classes of sound. Within the superior temporal gyrus, significantly greater metabolic activity occurred on the left side than on the right, only in the region of the temporal pole and only in response to monkey calls. This functional asymmetry was absent when these regions were separated by forebrain commissurotomy, suggesting that the perception of vocalizations elicits concurrent interhemispheric interactions that focus the auditory processing within a specialized area of one hemisphere.
Born with an ear for dialects? Structural plasticity in the expert phonetician brain.
Golestani, Narly; Price, Cathy J; Scott, Sophie K
2011-03-16
Are experts born with particular predispositions, or are they made through experience? We examined brain structure in expert phoneticians, individuals who are highly trained to analyze and transcribe speech. We found a positive correlation between the size of left pars opercularis and years of phonetic transcription training experience, illustrating how learning may affect brain structure. Phoneticians were also more likely to have multiple or split left transverse gyri in the auditory cortex than nonexpert controls, and the amount of phonetic transcription training did not predict auditory cortex morphology. The transverse gyri are thought to be established in utero; our results thus suggest that this gross morphological difference may have existed before the onset of phonetic training, and that its presence confers an advantage of sufficient magnitude to affect career choices. These results suggest complementary influences of domain-specific predispositions and experience-dependent brain malleability, influences that likely interact in determining not only how experience shapes the human brain but also why some individuals become engaged by certain fields of expertise.
Foundas, Anne L; Mock, Jeffrey R; Corey, David M; Golob, Edward J; Conture, Edward G
2013-08-01
The SpeechEasy is an electronic device designed to alleviate stuttering by manipulating auditory feedback via time delays and frequency shifts. Device settings (control, default, custom), ear-placement (left, right), speaking task, and cognitive variables were examined in people who stutter (PWS) (n=14) compared to controls (n=10). Among the PWS there was a significantly greater reduction in stuttering (compared to baseline) with custom device settings compared to the non-altered feedback (control) condition. Stuttering was reduced the most during reading, followed by narrative and conversation. For the conversation task, stuttering was reduced more when the device was worn in the left ear. Those individuals with a more severe stuttering rate at baseline had a greater benefit from the use of the device compared to individuals with less severe stuttering. Our results support the view that overt stuttering is associated with defective speech-language monitoring that can be influenced by manipulating auditory feedback. Copyright © 2013 Elsevier Inc. All rights reserved.
Temporal lobe stimulation reveals anatomic distinction between auditory naming processes.
Hamberger, M J; Seidel, W T; Goodman, R R; Perrine, K; McKhann, G M
2003-05-13
Language errors induced by cortical stimulation can provide insight into the function(s) supported by the area stimulated. The authors observed that some stimulation-induced errors during auditory description naming were characterized by tip-of-the-tongue responses or paraphasic errors, suggesting expressive difficulty, whereas others were qualitatively different, suggesting receptive difficulty. They hypothesized that these two response types reflected disruption at different stages of auditory verbal processing and that these "subprocesses" might be supported by anatomically distinct cortical areas. Their aim was to explore the topographic distribution of error types in auditory verbal processing. Twenty-one patients requiring left temporal lobe surgery underwent preresection language mapping using direct cortical stimulation. Auditory naming was tested at temporal sites extending from 1 cm from the anterior tip to the parietal operculum. Errors were dichotomized as either "expressive" or "receptive," and the topographic distribution of error types was explored. Sites associated with the two error types were topographically distinct from one another. Most receptive sites were located in the middle portion of the superior temporal gyrus (STG), whereas most expressive sites fell outside this region, scattered along lateral temporal and temporoparietal cortex. The results raise clinical questions regarding the inclusion of the STG in temporal lobe epilepsy surgery and suggest that more detailed cortical mapping might enable better prediction of postoperative language decline. From a theoretical perspective, the results carry implications for the understanding of structure-function relations underlying temporal lobe mediation of auditory language processing.
Xia, Shuang; Song, TianBin; Che, Jing; Li, Qiang; Chai, Chao; Zheng, Meizhu; Shen, Wen
2017-01-01
Early hearing deprivation can affect the development of auditory, language, and vision ability. Insufficient or no stimulation of the auditory cortex during the sensitive periods of plasticity can affect the development of hearing, language, and vision function. Twenty-three infants with congenital severe sensorineural hearing loss (CSSHL) and 17 age- and sex-matched normal-hearing subjects were recruited. The amplitude of low frequency fluctuations (ALFF) and regional homogeneity (ReHo) of the auditory, language, and vision related brain areas were compared between deaf infants and normal subjects. Compared with normal-hearing subjects, decreased ALFF and ReHo were observed in auditory and language-related cortex, while increased ALFF and ReHo were observed in vision-related cortex, suggesting that hearing and language function were impaired and vision function was enhanced due to the loss of hearing. ALFF of left Brodmann area 45 (BA45) was negatively correlated with deafness duration in infants with CSSHL, whereas ALFF of right BA39 was positively correlated with deafness duration. In conclusion, ALFF and ReHo can reflect abnormal brain function in language, auditory, and visual information processing in infants with CSSHL. This demonstrates that the development of auditory, language, and vision processing function is affected by congenital severe sensorineural hearing loss before 4 years of age.
Auditory perceptual simulation: Simulating speech rates or accents?
Zhou, Peiyun; Christianson, Kiel
2016-07-01
When readers engage in Auditory Perceptual Simulation (APS) during silent reading, they mentally simulate characteristics of voices attributed to a particular speaker or a character depicted in the text. Previous research found that auditory perceptual simulation of a faster native English speaker during silent reading led to shorter reading times than auditory perceptual simulation of a slower non-native English speaker. Yet, it was uncertain whether this difference was triggered by the different speech rates of the speakers, or by the difficulty of simulating an unfamiliar accent. The current study investigates this question by comparing faster Indian-English speech and slower American-English speech in the auditory perceptual simulation paradigm. Analyses of reading times of individual words and the full sentence reveal that the auditory perceptual simulation effect again modulated reading rate, and auditory perceptual simulation of the faster Indian-English speech led to faster reading rates compared to auditory perceptual simulation of the slower American-English speech. The comparison between this experiment and the data from Zhou and Christianson (2016) demonstrates further that the "speakers'" speech rates, rather than the difficulty of simulating a non-native accent, are the primary mechanism underlying auditory perceptual simulation effects. Copyright © 2016 Elsevier B.V. All rights reserved.
Auditory and visual spatial impression: Recent studies of three auditoria
NASA Astrophysics Data System (ADS)
Nguyen, Andy; Cabrera, Densil
2004-10-01
Auditory spatial impression is widely studied for its contribution to auditorium acoustical quality. By contrast, visual spatial impression in auditoria has received relatively little attention in formal studies. This paper reports results from a series of experiments investigating the auditory and visual spatial impression of concert auditoria. For auditory stimuli, a fragment of an anechoic recording of orchestral music was convolved with calibrated binaural impulse responses, which had been made with the dummy head microphone at a wide range of positions in three auditoria and the sound source on the stage. For visual stimuli, greyscale photographs were used, taken at the same positions in the three auditoria, with a visual target on the stage. Subjective experiments were conducted with auditory stimuli alone, visual stimuli alone, and visual and auditory stimuli combined. In these experiments, subjects rated apparent source width, listener envelopment, intimacy and source distance (auditory stimuli), and spaciousness, envelopment, stage dominance, intimacy and target distance (visual stimuli). Results show target distance to be of primary importance in auditory and visual spatial impression, thereby providing a basis for covariance between some attributes of auditory and visual spatial impression. Nevertheless, some attributes of spatial impression diverge between the senses.
Skouras, Stavros; Lohmann, Gabriele
2018-01-01
Sound is a potent elicitor of emotions. Auditory core, belt and parabelt regions have anatomical connections to a large array of limbic and paralimbic structures which are involved in the generation of affective activity. However, little is known about the functional role of auditory cortical regions in emotion processing. Using functional magnetic resonance imaging and music stimuli that evoke joy or fear, our study reveals that anterior and posterior regions of auditory association cortex have emotion-characteristic functional connectivity with limbic/paralimbic (insula, cingulate cortex, and striatum), somatosensory, visual, motor-related, and attentional structures. We found that these regions have remarkably high emotion-characteristic eigenvector centrality, revealing that they have influential positions within emotion-processing brain networks with “small-world” properties. By contrast, primary auditory fields showed surprisingly strong emotion-characteristic functional connectivity with intra-auditory regions. Our findings demonstrate that the auditory cortex hosts regions that are influential within networks underlying the affective processing of auditory information. We anticipate our results to incite research specifying the role of the auditory cortex—and sensory systems in general—in emotion processing, beyond the traditional view that sensory cortices have merely perceptual functions. PMID:29385142
Bendixen, Alexandra; Scharinger, Mathias; Strauß, Antje; Obleser, Jonas
2014-04-01
Speech signals are often compromised by disruptions originating from external (e.g., masking noise) or internal (e.g., inaccurate articulation) sources. Speech comprehension thus entails detecting and replacing missing information based on predictive and restorative neural mechanisms. The present study targets predictive mechanisms by investigating the influence of a speech segment's predictability on early, modality-specific electrophysiological responses to this segment's omission. Predictability was manipulated in simple physical terms in a single-word framework (Experiment 1) or in more complex semantic terms in a sentence framework (Experiment 2). In both experiments, final consonants of the German words Lachs ([laks], salmon) or Latz ([lats], bib) were occasionally omitted, resulting in the syllable La ([la], no semantic meaning), while brain responses were measured with multi-channel electroencephalography (EEG). In both experiments, the occasional presentation of the fragment La elicited a larger omission response when the final speech segment had been predictable. The omission response occurred ∼125-165 msec after the expected onset of the final segment and showed characteristics of the omission mismatch negativity (MMN), with generators in auditory cortical areas. Suggestive of a general auditory predictive mechanism at work, this main observation was robust to the varying sources of predictive information and attentional allocation that differed between the two experiments. Source localization further suggested that the enhancement of the omission response by predictability emerged from left superior temporal gyrus and left angular gyrus in both experiments, with additional experiment-specific contributions. These results are consistent with the existence of predictive coding mechanisms in the central auditory system, and suggest that the general predictive properties of the auditory system support spoken word recognition. Copyright © 2014 Elsevier Ltd. All rights reserved.
Attention effects on the processing of task-relevant and task-irrelevant speech sounds and letters
Mittag, Maria; Inauri, Karina; Huovilainen, Tatu; Leminen, Miika; Salo, Emma; Rinne, Teemu; Kujala, Teija; Alho, Kimmo
2013-01-01
We used event-related brain potentials (ERPs) to study effects of selective attention on the processing of attended and unattended spoken syllables and letters. Participants were presented with syllables randomly occurring in the left or right ear and spoken by different voices and with a concurrent foveal stream of consonant letters written in darker or lighter fonts. During auditory phonological (AP) and non-phonological tasks, they responded to syllables in a designated ear starting with a vowel and spoken by female voices, respectively. These syllables occurred infrequently among standard syllables starting with a consonant and spoken by male voices. During visual phonological and non-phonological tasks, they responded to consonant letters with names starting with a vowel and to letters written in dark fonts, respectively. These letters occurred infrequently among standard letters with names starting with a consonant and written in light fonts. To examine genuine effects of attention and task on ERPs not overlapped by ERPs associated with target processing or deviance detection, these effects were studied only in ERPs to auditory and visual standards. During selective listening to syllables in a designated ear, ERPs to the attended syllables were negatively displaced during both phonological and non-phonological auditory tasks. Selective attention to letters elicited an early negative displacement and a subsequent positive displacement (Pd) of ERPs to attended letters being larger during the visual phonological than non-phonological task suggesting a higher demand for attention during the visual phonological task. Active suppression of unattended speech during the AP and non-phonological tasks and during the visual phonological tasks was suggested by a rejection positivity (RP) to unattended syllables. We also found evidence for suppression of the processing of task-irrelevant visual stimuli in visual ERPs during auditory tasks involving left-ear syllables. 
PMID:24348324
The Effect of Lexical Content on Dichotic Speech Recognition in Older Adults.
Findlen, Ursula M; Roup, Christina M
2016-01-01
Age-related auditory processing deficits have been shown to negatively affect speech recognition for older adult listeners. In contrast, older adults gain benefit from their ability to make use of semantic and lexical content of the speech signal (i.e., top-down processing), particularly in complex listening situations. Assessment of auditory processing abilities among aging adults should take into consideration semantic and lexical content of the speech signal. The purpose of this study was to examine the effects of lexical and attentional factors on dichotic speech recognition performance characteristics for older adult listeners. A repeated measures design was used to examine differences in dichotic word recognition as a function of lexical and attentional factors. Thirty-five older adults (61-85 yr) with sensorineural hearing loss participated in this study. Dichotic speech recognition was evaluated using consonant-vowel-consonant (CVC) word and nonsense CVC syllable stimuli administered in the free recall, directed recall right, and directed recall left response conditions. Dichotic speech recognition performance for nonsense CVC syllables was significantly poorer than performance for CVC words. Dichotic recognition performance varied across response condition for both stimulus types, which is consistent with previous studies on dichotic speech recognition. Inspection of individual results revealed that five listeners demonstrated an auditory-based left ear deficit for one or both stimulus types. Lexical content of stimulus materials affects performance characteristics for dichotic speech recognition tasks in the older adult population. The use of nonsense CVC syllable material may provide a way to assess dichotic speech recognition performance while potentially lessening the effects of lexical content on performance (i.e., measuring bottom-up auditory function both with and without top-down processing). American Academy of Audiology.
Primary Synovial Sarcoma of External Auditory Canal: A Case Report
Jayakumar, Krishnannair L
2017-01-01
Synovial sarcoma is a rare malignant tumor of mesenchymal origin. Primary synovial sarcoma of the ear is extremely rare and to date only two cases have been published in English medical literature. Though the tumor is reported to have an aggressive nature, early diagnosis and treatment may improve the outcome. Here, we report a rare case of synovial sarcoma of the external auditory canal in an 18-year-old male who was managed by chemotherapy and referred for palliation due to tumor progression. PMID:28948118
Rieger, Kathryn; Rarra, Marie-Helene; Moor, Nicolas; Diaz Hernandez, Laura; Baenninger, Anja; Razavi, Nadja; Dierks, Thomas; Hubl, Daniela; Koenig, Thomas
2018-03-01
Previous studies showed a global reduction of the event-related potential component N100 in patients with schizophrenia, a phenomenon that is even more pronounced during auditory verbal hallucinations. This reduction presumably results from dysfunctional activation of the primary auditory cortex by inner speech, which reduces its responsiveness to external stimuli. With this study, we tested the feasibility of enhancing the responsiveness of the primary auditory cortex to external stimuli with an upregulation of the event-related potential component N100 in healthy control subjects. A total of 15 healthy subjects performed 8 double-sessions of EEG-neurofeedback training over 2 weeks. The linear mixed-effects model showed a significant active learning effect within sessions (t = 5.99, P < .001) against an unspecific habituation effect that lowered the N100 amplitude over time. Across sessions, a significant increase in the passive condition (t = 2.42, P = .03), termed the carry-over effect, was observed. Given that the carry-over effect is one of the ultimate aims of neurofeedback, it seems reasonable to apply this neurofeedback training protocol to influence the N100 amplitude in patients with schizophrenia. This intervention could provide an alternative treatment option for auditory verbal hallucinations in these patients.
Anderson, L A; Christianson, G B; Linden, J F
2009-02-03
Cytochrome oxidase (CYO) and acetylcholinesterase (AChE) staining density varies across the cortical layers in many sensory areas. The laminar variations likely reflect differences between the layers in levels of metabolic activity and cholinergic modulation. The question of whether these laminar variations differ between primary sensory cortices has never been systematically addressed in the same set of animals, since most studies of sensory cortex focus on a single sensory modality. Here, we compared the laminar distribution of CYO and AChE activity in the primary auditory, visual, and somatosensory cortices of the mouse, using Nissl-stained sections to define laminar boundaries. Interestingly, for both CYO and AChE, laminar patterns of enzyme activity were similar in the visual and somatosensory cortices, but differed in the auditory cortex. In the visual and somatosensory areas, staining densities for both enzymes were highest in layers III/IV or IV and in lower layer V. In the auditory cortex, CYO activity showed a reliable peak only at the layer III/IV border, while AChE distribution was relatively homogeneous across layers. These results suggest that laminar patterns of metabolic activity and cholinergic influence are similar in the mouse visual and somatosensory cortices, but differ in the auditory cortex.
A physiologically based model for temporal envelope encoding in human primary auditory cortex.
Dugué, Pierre; Le Bouquin-Jeannès, Régine; Edeline, Jean-Marc; Faucon, Gérard
2010-09-01
Communication sounds exhibit temporal envelope fluctuations in the low frequency range (<70 Hz) and human speech has prominent 2-16 Hz modulations with a maximum at 3-4 Hz. Here, we propose a new phenomenological model of the human auditory pathway (from cochlea to primary auditory cortex) to simulate responses to amplitude-modulated white noise. To validate the model, performance was estimated by quantifying temporal modulation transfer functions (TMTFs). Previous models considered either the lower stages of the auditory system (up to the inferior colliculus) or only the thalamocortical loop. The present model, divided into two stages, is based on anatomical and physiological findings and includes the entire auditory pathway. The first stage, from the outer ear to the colliculus, incorporates inhibitory interneurons in the cochlear nucleus to increase performance at high stimulus levels. The second stage takes into account the anatomical connections of the thalamocortical system and includes the fast and slow excitatory and inhibitory currents. After optimizing the parameters of the model to reproduce the diversity of TMTFs obtained from human subjects, a patient-specific model was derived and the parameters were optimized to effectively reproduce both spontaneous activity and the oscillatory part of the evoked response. Copyright (c) 2010 Elsevier B.V. All rights reserved.
Association between heart rhythm and cortical sound processing.
Marcomini, Renata S; Frizzo, Ana Claúdia F; de Góes, Viviane B; Regaçone, Simone F; Garner, David M; Raimundo, Rodrigo D; Oliveira, Fernando R; Valenti, Vitor E
2018-04-26
Sound signal processing is an important factor in human conscious communication and may be assessed through cortical auditory evoked potentials (CAEP). Heart rate variability (HRV) provides information about autonomic regulation of heart rate. We investigated the association between resting HRV and CAEP. We evaluated resting HRV in the time and frequency domains and the CAEP components. The subjects remained at rest for 10 minutes for HRV recording, then performed the CAEP examinations through frequency and duration protocols in both ears. Linear regression indicated that the amplitude of the N2 wave of the CAEP in the left ear (not the right ear) was significantly influenced by two time-domain HRV indices in the frequency protocol: the standard deviation of normal-to-normal RR intervals (17.7%) and the percentage of adjacent RR intervals differing by more than 50 milliseconds (25.3%). In the duration protocol and in the left ear, the latency of the P2 wave was significantly influenced by the low-frequency (LF) (20.8%) and high-frequency (HF) bands in normalized units (21%) and the LF/HF ratio (22.4%) from HRV spectral analysis. The latency of the N2 wave was significantly influenced by LF (25.8%), HF (25.9%), and LF/HF (28.8%). In conclusion, we propose that resting heart rhythm is associated with the thalamo-cortical, cortico-cortical, and auditory cortex pathways involved in auditory processing in the right hemisphere.
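The two time-domain HRV indices named in this abstract, the standard deviation of normal-to-normal intervals (SDNN) and the percentage of successive intervals differing by more than 50 ms (pNN50), follow directly from the RR-interval series. A minimal sketch of the standard definitions, using a hypothetical RR series (the study's data are not reproduced here):

```python
import numpy as np

def sdnn(rr_ms):
    """Standard deviation of normal-to-normal RR intervals (SDNN), in ms."""
    return float(np.std(rr_ms, ddof=1))

def pnn50(rr_ms):
    """Percentage of successive RR-interval differences exceeding 50 ms (pNN50)."""
    diffs = np.abs(np.diff(rr_ms))
    return float(100.0 * np.mean(diffs > 50))

rr = [812, 790, 845, 801, 870, 795, 860]  # hypothetical RR series, in ms
print(sdnn(rr), pnn50(rr))
```

SDNN is expressed in milliseconds and pNN50 as a percentage; both summarize beat-to-beat variability without any spectral decomposition, unlike the LF/HF measures also reported above.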
Auditory and motion metaphors have different scalp distributions: an ERP study
Schmidt-Snoek, Gwenda L.; Drew, Ashley R.; Barile, Elizabeth C.; Agauas, Stephen J.
2015-01-01
While many links have been established between sensory-motor words used literally (kick the ball) and sensory-motor regions of the brain, it is less clear whether metaphorically used words (kick the habit) also show such signs of “embodiment.” Additionally, not much is known about the timing or nature of the connection between language and sensory-motor neural processing. We used stimuli divided into three figurativeness conditions—literal, metaphor, and anomalous—and two modality conditions—auditory (Her limousine was a privileged snort) and motion (The editorial was a brass-knuckle punch). The conditions were matched on a large number of potentially confounding factors including cloze probability. The electroencephalographic response to the final word of each sentence was measured at 64 electrode sites on the scalp of 22 participants and event-related potentials (ERPs) calculated. Analysis revealed greater amplitudes for metaphorical than literal sentences in both 350–500 ms and 500–650 ms timeframes. Results supported the possibility of different neural substrates for motion and auditory sentences. Greater differences for motion sentences were seen in the left posterior and left central electrode sites than elsewhere on the scalp. These findings are consistent with a sensory-motor neural categorization of language and with the integration of modal and amodal information during the N400 and P600 timeframes. PMID:25821433
Zhang, Dan; Hong, Bo; Gao, Shangkai; Röder, Brigitte
2017-05-01
While the behavioral dynamics as well as the functional networks of sustained and transient attention have been studied extensively, their underlying neural mechanisms have most often been investigated in separate experiments. In the present study, participants were instructed to perform an audio-visual spatial attention task. They were asked to attend to either the left or the right hemifield and to respond to transient deviant auditory or visual stimuli. Steady-state visual evoked potentials (SSVEPs) elicited by two task-irrelevant pattern-reversing checkerboards flickering at 10 and 15 Hz in the left and the right hemifields, respectively, were used to continuously monitor the locus of spatial attention. The amplitude and phase of the SSVEPs were extracted for single trials and analyzed separately. Sustained attention to one hemifield (spatial attention) as well as to the auditory modality (intermodal attention) increased the inter-trial phase locking of the SSVEP responses, whereas briefly presented visual and auditory stimuli decreased the single-trial SSVEP amplitude between 200 and 500 ms post-stimulus. This transient change of the single-trial amplitude was restricted to the SSVEPs elicited by the reversing checkerboard in the spatially attended hemifield and thus might reflect a transient re-orienting of attention towards the brief stimuli. Thus, the present results demonstrate independent, but interacting, neural mechanisms of sustained and transient attentional orienting.
Effects of smoking marijuana on focal attention and brain blood flow.
O'Leary, Daniel S; Block, Robert I; Koeppel, Julie A; Schultz, Susan K; Magnotta, Vincent A; Ponto, Laura Boles; Watkins, G Leonard; Hichwa, Richard D
2007-04-01
Using an attention task to control cognitive state, we previously found that smoking marijuana changes regional cerebral blood flow (rCBF). The present study measured rCBF during tasks requiring attention to left and right ears in different conditions. Twelve occasional marijuana users (mean age 23.5 years) were imaged with PET using [15O]water after smoking marijuana or placebo cigarettes as they performed a reaction time (RT) baseline task, and a dichotic listening task with attend-right- and attend-left-ear instructions. Smoking marijuana, but not placebo, resulted in increased normalized rCBF in orbital frontal cortex, anterior cingulate, temporal pole, insula, and cerebellum. rCBF was reduced in visual and auditory cortices. These changes occurred in all three tasks and replicated our earlier studies. They appear to reflect the direct effects of marijuana on the brain. Smoking marijuana lowered rCBF in auditory cortices compared to placebo but did not alter the normal pattern of attention-related rCBF asymmetry (i.e., greater rCBF in the temporal lobe contralateral to the direction of attention) that was also observed after placebo. These data indicate that marijuana has dramatic direct effects on rCBF, but causes relatively little change in the normal pattern of task-related rCBF on this auditory focused attention task. Copyright 2007 John Wiley & Sons, Ltd.
Fiveash, Anna; Thompson, William Forde; Badcock, Nicholas A; McArthur, Genevieve
2018-07-01
Music and language both rely on the processing of spectral (pitch, timbre) and temporal (rhythm) information to create structure and meaning from incoming auditory streams. Behavioral results have shown that interrupting a melodic stream with unexpected changes in timbre leads to reduced syntactic processing. Such findings suggest that syntactic processing is conditional on successful streaming of incoming sequential information. The current study used event-related potentials (ERPs) to investigate whether (1) the effect of alternating timbres on syntactic processing is reflected in a reduced brain response to syntactic violations, and (2) the phenomenon is similar for music and language. Participants listened to melodies and sentences with either one timbre (piano or one voice) or three timbres (piano, guitar, and vibraphone, or three different voices). Half the stimuli contained syntactic violations: an out-of-key note in the melodies, and a phrase-structure violation in the sentences. We found smaller ERPs to syntactic violations in music in the three-timbre compared to the one-timbre condition, reflected in a reduced early right anterior negativity (ERAN). A similar but non-significant pattern was observed for language stimuli in both the early left anterior negativity (ELAN) and the left anterior negativity (LAN) ERPs. The results suggest that disruptions to auditory streaming may interfere with syntactic processing, especially for melodic sequences. Copyright © 2018 Elsevier B.V. All rights reserved.
Structural covariance in the hallucinating brain: a voxel-based morphometry study
Modinos, Gemma; Vercammen, Ans; Mechelli, Andrea; Knegtering, Henderikus; McGuire, Philip K.; Aleman, André
2009-01-01
Background Neuroimaging studies have indicated that a number of cortical regions express altered patterns of structural covariance in schizophrenia. The relation between these alterations and specific psychotic symptoms is yet to be investigated. We used voxel-based morphometry to examine regional grey matter volumes and structural covariance associated with severity of auditory verbal hallucinations. Methods We applied optimized voxel-based morphometry to volumetric magnetic resonance imaging data from 26 patients with medication-resistant auditory verbal hallucinations (AVHs); statistical inferences were made at p < 0.05 after correction for multiple comparisons. Results Grey matter volume in the left inferior frontal gyrus was positively correlated with severity of AVHs. Hallucination severity influenced the pattern of structural covariance between this region and the left superior/middle temporal gyri, the right inferior frontal gyrus and hippocampus, and the insula bilaterally. Limitations The results are based on self-reported severity of auditory hallucinations. Complementing with a clinician-based instrument could have made the findings more compelling. Future studies would benefit from including a measure to control for other symptoms that may covary with AVHs and for the effects of antipsychotic medication. Conclusion The results revealed that overall severity of AVHs modulated cortical intercorrelations between frontotemporal regions involved in language production and verbal monitoring, supporting the critical role of this network in the pathophysiology of hallucinations. PMID:19949723
2011-01-01
Background Schizophrenia is a chronic and disabling disease that presents with delusions and hallucinations. Auditory hallucinations are usually expressed as voices speaking to or about the patient. Previous studies have examined the effect of repetitive transcranial magnetic stimulation (TMS) over the temporoparietal cortex on auditory hallucinations in schizophrenic patients. Our aim was to explore the potential effect of deep TMS, using the H coil over the same brain region, on auditory hallucinations. Patients and methods Eight schizophrenic patients with refractory auditory hallucinations were recruited, mainly from Beer Ya'akov Mental Health Institution (Tel Aviv University, Israel) ambulatory clinics, as well as from other hospitals' outpatient populations. Low-frequency deep TMS was applied for 10 min (600 pulses per session) to the left temporoparietal cortex for either 10 or 20 sessions. Deep TMS was applied using Brainsway's H1 coil apparatus. Patients were evaluated using the Auditory Hallucinations Rating Scale (AHRS) as well as the Scale for the Assessment of Positive Symptoms (SAPS), the Clinical Global Impressions (CGI) scale, and the Scale for the Assessment of Negative Symptoms (SANS). Results This preliminary study demonstrated a significant improvement in AHRS score (an average reduction of 31.7% ± 32.2%) and, to a lesser extent, in SAPS scores (an average reduction of 16.5% ± 20.3%). Conclusions In this study, we have demonstrated the potential of deep TMS over the temporoparietal cortex as an add-on treatment for chronic auditory hallucinations in schizophrenic patients. Larger double-blind, sham-controlled studies are now being performed to evaluate the effectiveness of deep TMS treatment for auditory hallucinations. Trial registration This trial is registered with clinicaltrials.gov (identifier: NCT00564096). PMID:21303566
Sörös, Peter; Michael, Nikolaus; Tollkötter, Melanie; Pfleiderer, Bettina
2006-01-01
Background A combination of magnetoencephalography and proton magnetic resonance spectroscopy was used to correlate the electrophysiology of rapid auditory processing and the neurochemistry of the auditory cortex in 15 healthy adults. To assess rapid auditory processing in the left auditory cortex, the amplitude and decrement of the N1m peak, the major component of the late auditory evoked response, were measured during rapidly successive presentation of acoustic stimuli. We tested the hypothesis that: (i) the amplitude of the N1m response and (ii) its decrement during rapid stimulation are associated with the cortical neurochemistry as determined by proton magnetic resonance spectroscopy. Results Our results demonstrated a significant association between the concentrations of N-acetylaspartate, a marker of neuronal integrity, and the amplitudes of individual N1m responses. In addition, the concentrations of choline-containing compounds, representing the functional integrity of membranes, were significantly associated with N1m amplitudes. No significant association was found between the concentrations of the glutamate/glutamine pool and the amplitudes of the first N1m. No significant associations were seen between the decrement of the N1m (the relative amplitude of the second N1m peak) and the concentrations of N-acetylaspartate, choline-containing compounds, or the glutamate/glutamine pool. However, there was a trend for higher glutamate/glutamine concentrations in individuals with higher relative N1m amplitude. Conclusion These results suggest that neuronal and membrane functions are important for rapid auditory processing. This investigation provides a first link between the electrophysiology, as recorded by magnetoencephalography, and the neurochemistry, as assessed by proton magnetic resonance spectroscopy, of the auditory cortex. PMID:16884545
The role of primary auditory and visual cortices in temporal processing: A tDCS approach.
Mioni, G; Grondin, S; Forgione, M; Fracasso, V; Mapelli, D; Stablum, F
2016-10-15
Many studies have shown that visual stimuli are frequently experienced as shorter than equivalent auditory stimuli. These findings suggest that timing is distributed across many brain areas and that "different clocks" might be involved in temporal processing. The aim of this study was to investigate, through the application of tDCS over V1 and A1, the specific role of the primary sensory cortices (visual or auditory) in temporal processing. Forty-eight university students were included in the study. Twenty-four participants were stimulated over A1 and 24 participants were stimulated over V1. Participants performed time bisection tasks, in the visual and the auditory modalities, involving standard durations lasting 300 ms (short) and 900 ms (long). When tDCS was delivered over A1, no effect of stimulation was observed on perceived duration, but we observed higher temporal variability under anodal stimulation compared to sham and higher variability in the visual compared to the auditory modality. When tDCS was delivered over V1, an underestimation of perceived duration and higher variability were observed in the visual compared to the auditory modality. Our results showed more variability of visual temporal processing under tDCS stimulation. These results suggest a modality-independent role of A1 in temporal processing and a modality-specific role of V1 in the processing of temporal intervals in the visual modality. Copyright © 2016 Elsevier B.V. All rights reserved.
Kenet, T.; Froemke, R. C.; Schreiner, C. E.; Pessah, I. N.; Merzenich, M. M.
2007-01-01
Noncoplanar polychlorinated biphenyls (PCBs) are widely dispersed in human environment and tissues. Here, an exemplar noncoplanar PCB was fed to rat dams during gestation and throughout three subsequent nursing weeks. Although the hearing sensitivity and brainstem auditory responses of pups were normal, exposure resulted in the abnormal development of the primary auditory cortex (A1). A1 was irregularly shaped and marked by internal nonresponsive zones, its topographic organization was grossly abnormal or reversed in about half of the exposed pups, the balance of neuronal inhibition to excitation for A1 neurons was disturbed, and the critical period plasticity that underlies normal postnatal auditory system development was significantly altered. These findings demonstrate that developmental exposure to this class of environmental contaminant alters cortical development. It is proposed that exposure to noncoplanar PCBs may contribute to common developmental disorders, especially in populations with heritable imbalances in neurotransmitter systems that regulate the ratio of inhibition and excitation in the brain. We conclude that the health implications associated with exposure to noncoplanar PCBs in human populations merit a more careful examination. PMID:17460041
Sommer, Iris E; Selten, Jean-Paul; Diederen, Kelly M; Blom, Jan Dirk
2010-01-01
This study proposes a theoretical framework which dissects auditory verbal hallucinations (AVH) into 2 essential components: audibility and alienation. Audibility, the perceptual aspect of AVH, may result from a disinhibition of the auditory cortex in response to self-generated speech. In isolation, this aspect leads to audible thoughts: Gedankenlautwerden. The second component is alienation, which is the failure to recognize the content of AVH as self-generated. This failure may be related to the fact that cerebral activity associated with AVH is predominantly present in the speech production area of the right hemisphere. Since normal inner speech is derived from the left speech area, an aberrant source may lead to confusion about the origin of the language fragments. When alienation is not accompanied by audibility, it will result in the experience of thought insertion. The 2 hypothesized components are illustrated using case vignettes. Copyright 2010 S. Karger AG, Basel.
Atypical coordination of cortical oscillations in response to speech in autism
Jochaut, Delphine; Lehongre, Katia; Saitovitch, Ana; Devauchelle, Anne-Dominique; Olasagasti, Itsaso; Chabane, Nadia; Zilbovicius, Monica; Giraud, Anne-Lise
2015-01-01
Subjects with autism often show language difficulties, but it is unclear how they relate to neurophysiological anomalies of cortical speech processing. We used combined EEG and fMRI in 13 subjects with autism and 13 control participants and show that in autism, gamma and theta cortical activity do not engage synergistically in response to speech. Theta activity in left auditory cortex fails to track speech modulations, and to down-regulate gamma oscillations in the group with autism. This deficit predicts the severity of both verbal impairment and autism symptoms in the affected sample. Finally, we found that oscillation-based connectivity between auditory and other language cortices is altered in autism. These results suggest that the verbal disorder in autism could be associated with an altered balance of slow and fast auditory oscillations, and that this anomaly could compromise the mapping between sensory input and higher-level cognitive representations. PMID:25870556
[Agraphia and preservation of music writing in a bilingual piano teacher].
Assal, G; Buttet, J
1983-01-01
A bilingual virtuoso piano teacher developed aphasia and amusia, probably due to cerebral embolism. The single, perfectly demarcated lesion was located in the left posterior temporoparietal region. Language examinations in French and Italian demonstrated entirely comparable difficulties in both languages. The linguistic course was favorable after a period of auditory agnosia and global aphasia. Language became fluent again 3 months after onset, with a marked vocabulary loss and phonemic paraphasias with attempts at self-correction. Repetition was markedly impaired, with a deficit in auditory comprehension but no remaining elements of auditory agnosia. Reading was possible but difficult, and total agraphia and acalculia persisted. Musical ability was better preserved, particularly with respect to repetition and above all to writing, the sparing of the latter constituting a fairly uncommon dissociation in relation to agraphia. The findings are discussed in relation to data in the literature concerning hemispheric participation in various musical tasks.
Recognition of emotion with temporal lobe epilepsy and asymmetrical amygdala damage.
Fowler, Helen L; Baker, Gus A; Tipples, Jason; Hare, Dougal J; Keller, Simon; Chadwick, David W; Young, Andrew W
2006-08-01
Impairments in emotion recognition occur when there is bilateral damage to the amygdala. In this study, ability to recognize auditory and visual expressions of emotion was investigated in people with asymmetrical amygdala damage (AAD) and temporal lobe epilepsy (TLE). Recognition of five emotions was tested across three participant groups: those with right AAD and TLE, those with left AAD and TLE, and a comparison group. Four tasks were administered: recognition of emotion from facial expressions, sentences describing emotion-laden situations, nonverbal sounds, and prosody. Accuracy scores for each task and emotion were analysed, and no consistent overall effect of AAD on emotion recognition was found. However, some individual participants with AAD were significantly impaired at recognizing emotions, in both auditory and visual domains. The findings indicate that a minority of individuals with AAD have impairments in emotion recognition, but no evidence of specific impairments (e.g., visual or auditory) was found.
Regional homogeneity changes in prelingually deafened patients: a resting-state fMRI study
Li, Wenjing; He, Huiguang; Xian, Junfang; Lv, Bin; Li, Meng; Li, Yong; Liu, Zhaohui; Wang, Zhenchang
2010-03-01
Resting-state functional magnetic resonance imaging (fMRI) is a technique that measures the intrinsic function of the brain and has some advantages over task-induced fMRI. Regional homogeneity (ReHo) assesses the similarity of the time series of a given voxel with its nearest neighbors on a voxel-by-voxel basis, which reflects the temporal homogeneity of the regional BOLD signal. In the present study, we used resting-state fMRI data to investigate ReHo changes across the whole brain in prelingually deafened patients relative to normal controls. 18 deaf patients and 22 healthy subjects were scanned. Kendall's coefficient of concordance (KCC) was calculated to measure the degree of regional coherence of fMRI time courses. We found that regional coherence significantly decreased in the left frontal lobe, bilateral temporal lobes, and right thalamus, and increased in the postcentral gyrus, cingulate gyrus, left temporal lobe, left thalamus, and cerebellum in deaf patients compared with controls. These results show that prelingually deafened patients have a higher degree of regional coherence in the paleocortex and a lower degree in the neocortex. Since the neocortex plays an important role in the development of audition, this evidence may suggest that deaf persons reorganize the paleocortex to compensate for the loss of auditory input.
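The ReHo measure described here is built on Kendall's coefficient of concordance (KCC) over the time courses of a voxel and its nearest neighbors. A minimal sketch of the KCC computation (illustrative only; it assumes no tied values and omits the tie correction that full ReHo implementations include):

```python
import numpy as np

def kendall_w(ts):
    """Kendall's coefficient of concordance (W) across K time series.

    ts: array-like of shape (K, n) -- K voxel time series of n time points.
    Assumes no tied values within a series (no tie correction applied).
    Returns W in [0, 1]; 1 means all series rank the time points identically.
    """
    ts = np.asarray(ts, dtype=float)
    k, n = ts.shape
    # Rank each series over time (1..n); double argsort yields ranks when no ties.
    ranks = np.argsort(np.argsort(ts, axis=1), axis=1) + 1
    r_i = ranks.sum(axis=0)               # rank sums per time point
    s = np.sum((r_i - r_i.mean()) ** 2)   # sum of squared deviations of rank sums
    return float(12.0 * s / (k**2 * (n**3 - n)))
```

In a ReHo map, this W is computed for every voxel over that voxel plus its 6, 18, or 26 neighbors, so a high value marks a region whose BOLD fluctuations are locally coherent.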
Schmid, Gabriele; Thielmann, Anke; Ziegler, Wolfram
2009-03-01
Patients with lesions of the left hemisphere often suffer from oral-facial apraxia, apraxia of speech, and aphasia. In these patients, visual features often play a critical role in speech and language therapy, when pictured lip shapes or the therapist's visible mouth movements are used to facilitate speech production and articulation. This demands audiovisual processing both in speech and language treatment and in the diagnosis of oral-facial apraxia. The purpose of this study was to investigate differences in audiovisual perception of speech as compared to non-speech oral gestures. Bimodal and unimodal speech and non-speech items were used and additionally discordant stimuli constructed, which were presented for imitation. This study examined a group of healthy volunteers and a group of patients with lesions of the left hemisphere. Patients made substantially more errors than controls, but the factors influencing imitation accuracy were more or less the same in both groups. Error analyses in both groups suggested different types of representations for speech as compared to the non-speech domain, with speech having a stronger weight on the auditory modality and non-speech processing on the visual modality. Additionally, this study was able to show that the McGurk effect is not limited to speech.
Dykstra, Andrew R; Burchard, Daniel; Starzynski, Christian; Riedel, Helmut; Rupp, Andre; Gutschalk, Alexander
2016-08-01
We used magnetoencephalography to examine lateralization and binaural interaction of the middle-latency and late-brainstem components of the auditory evoked response (the MLR and SN10, respectively). Click stimuli were presented either monaurally, or binaurally with left- or right-leading interaural time differences (ITDs). While early MLR components, including the N19 and P30, were larger for monaural stimuli presented contralaterally (by approximately 30% and 36% in the left and right hemispheres, respectively), later components, including the N40 and P50, were larger ipsilaterally. In contrast, MLRs elicited by binaural clicks with left- or right-leading ITDs did not differ. Depending on filter settings, weak binaural interaction could be observed as early as the P13 but was clearly much larger for later components, beginning at the P30, indicating some degree of binaural linearity up to early stages of cortical processing. The SN10, an obscure late-brainstem component, was observed consistently in individuals and showed linear binaural additivity. The results indicate that while the MLR is lateralized in response to monaural stimuli (and not ITDs), this lateralization reverses from primarily contralateral to primarily ipsilateral as early as 40 ms post-stimulus and is never as large as that seen with fMRI.
Sitek, Kevin R.; Cai, Shanqing; Beal, Deryk S.; Perkell, Joseph S.; Guenther, Frank H.; Ghosh, Satrajit S.
2016-01-01
Persistent developmental stuttering is characterized by speech production disfluency and affects 1% of adults. The degree of impairment varies widely across individuals and the neural mechanisms underlying the disorder and this variability remain poorly understood. Here we elucidate compensatory mechanisms related to this variability in impairment using whole-brain functional and white matter connectivity analyses in persistent developmental stuttering. We found that people who stutter had stronger functional connectivity between cerebellum and thalamus than people with fluent speech, while stutterers with the least severe symptoms had greater functional connectivity between left cerebellum and left orbitofrontal cortex (OFC). Additionally, people who stutter had decreased functional and white matter connectivity among the perisylvian auditory, motor, and speech planning regions compared to typical speakers, but greater functional connectivity between the right basal ganglia and bilateral temporal auditory regions. Structurally, disfluency ratings were negatively correlated with white matter connections to left perisylvian regions and to the brain stem. Overall, we found increased connectivity among subcortical and reward network structures in people who stutter compared to controls. These connections were negatively correlated with stuttering severity, suggesting the involvement of cerebellum and OFC may underlie successful compensatory mechanisms by more fluent stutterers. PMID:27199712
Kolarik, Andrew J; Moore, Brian C J; Zahorik, Pavel; Cirstea, Silvia; Pardhan, Shahina
2016-02-01
Auditory distance perception plays a major role in spatial awareness, enabling listeners to locate objects and avoid obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The available auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, can also affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of sound level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.
Cortical thickness development of human primary visual cortex related to the age of blindness onset.
Li, Qiaojun; Song, Ming; Xu, Jiayuan; Qin, Wen; Yu, Chunshui; Jiang, Tianzi
2017-08-01
Blindness primarily induces structural alterations in the primary visual cortex (V1). Some studies have found that early blind subjects had a thicker V1 than sighted controls, whereas late blind subjects showed no significant differences in the V1. This implies that the age of blindness onset may exert significant effects on the development of cortical thickness in the V1. However, no previous research has used a trajectory of onset-age-related changes to investigate these effects. Here we explored this issue by mapping the cortical thickness trajectory of the V1 against the age of blindness onset using data from 99 blind individuals whose age of blindness onset ranged from birth to 34 years. We found that the cortical thickness of the V1 could be fitted well with a quadratic curve in both the left (F = 11.59, P = 3 × 10⁻⁵) and right hemispheres (F = 6.54, P = 2 × 10⁻³). Specifically, the cortical thickness of the V1 thinned rapidly during childhood and adolescence and did not change significantly thereafter. This trend was not observed in the primary auditory cortex (A1), primary motor cortex (M1), or primary somatosensory cortex (S1). These results provide evidence that an onset of blindness before adulthood significantly affects the cortical thickness of the V1 and suggest a critical period for cortical development of the human V1.
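The quadratic fit of V1 thickness against blindness-onset age reported above can be reproduced in miniature with ordinary polynomial regression; the data and generating coefficients below are synthetic stand-ins for illustration, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic onset ages (0-34 years) and V1 thickness (mm): rapid thinning
# early in life, then a plateau -- i.e., a quadratic with positive curvature.
onset_age = rng.uniform(0, 34, size=99)
thickness = (0.002 * onset_age**2 - 0.1 * onset_age + 2.6
             + rng.normal(0, 0.05, size=99))

# Fit thickness as a quadratic function of onset age (highest power first).
coeffs = np.polyfit(onset_age, thickness, deg=2)
print(np.round(coeffs, 3))  # close to the generating values [0.002, -0.1, 2.6]
```

With 99 points and modest noise, the fitted coefficients land close to the values used to generate the data, mirroring how a quadratic trajectory can be estimated from a cross-sectional sample.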
Foxe, David; Leyton, Cristian E; Hodges, John R; Burrell, James R; Irish, Muireann; Piguet, Olivier
2016-10-01
Logopenic progressive aphasia (lv-PPA) is a form of primary progressive aphasia and is predominantly associated with Alzheimer's disease (AD) pathology. The neuropsychological profiles of lv-PPA and typical clinical AD are, however, distinct. In particular, these two syndromes differ on attention span measures: auditory attention span is more impaired in lv-PPA than in AD, whereas visuospatial span appears more impaired in AD than in lv-PPA. The neural basis of these span profiles, however, remains unclear. Sixteen lv-PPA patients, 21 matched AD patients, and 15 education-matched healthy controls were recruited. All participants were assessed by a neurologist, completed a neuropsychological assessment that included the Wechsler Memory Scale-III Digit and Spatial Span tasks, and underwent high-resolution structural brain MRI for cortical thickness analyses. Both patient groups were impaired on all span tasks compared to controls. In addition, performance on Digit Span Forward (DSF) was significantly lower in the lv-PPA than the AD group, while Spatial Span Forward (SSF) was significantly lower in the AD than the lv-PPA group. No differences were found between patient groups on the Digit or Spatial Span Backward tasks. Neuroimaging analyses revealed that reduced DSF performance correlated with thinning of the left superior temporal gyrus in the lv-PPA group, whereas reduced SSF performance was related to bilateral precentral sulcus and parieto-occipital thinning in the AD group. Analyses of the backward span tasks revealed that reduced Spatial Span Backward (SSB) performance in the lv-PPA group was related to cortical thinning of the left superior parietal lobule. This study demonstrates that while lv-PPA and AD commonly share the same underlying neuropathology, their span profiles are distinct and are mediated by divergent patterns of cortical degeneration. Copyright © 2016 Elsevier Ltd. All rights reserved.
From Vivaldi to Beatles and back: predicting lateralized brain responses to music.
Alluri, Vinoo; Toiviainen, Petri; Lund, Torben E; Wallentin, Mikkel; Vuust, Peter; Nandi, Asoke K; Ristaniemi, Tapani; Brattico, Elvira
2013-12-01
We aimed to predict the temporal evolution of brain activity under naturalistic music listening conditions using a combination of neuroimaging and acoustic feature extraction. Participants were scanned using functional Magnetic Resonance Imaging (fMRI) while listening to two musical medleys, including pieces from various genres with and without lyrics. Regression models were built to predict voxel-wise brain activations and were then tested in a cross-validation setting to evaluate the robustness of the resulting models across stimuli. To further assess the generalizability of the models, we extended the cross-validation procedure by including another dataset, which comprised continuous fMRI responses of musically trained participants to an Argentinean tango. Individual models for the two musical medleys revealed that activations in several areas of the brain belonging to the auditory, limbic, and motor regions could be predicted. Notably, activations in the medial orbitofrontal region and the anterior cingulate cortex, relevant for self-referential appraisal and aesthetic judgments, could be predicted successfully. Cross-validation across musical stimuli and participant pools helped identify a region of the right superior temporal gyrus, encompassing the planum polare and Heschl's gyrus, as the core structure processing complex acoustic features of musical pieces from various genres, with or without lyrics. Models based on purely instrumental music were able to predict activation in the bilateral auditory cortices and in parietal, somatosensory, and left-hemispheric primary and supplementary motor areas. The presence of lyrics, on the other hand, weakened the prediction of activations in the left superior temporal gyrus. Our results suggest spontaneous emotion-related processing during naturalistic listening to music and provide supportive evidence for hemispheric specialization for categorical sounds with realistic stimuli. We herewith introduce a powerful means to predict brain responses to music, speech, or soundscapes across a large variety of contexts. © 2013.
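The encoding-model logic described above (fit a regression from acoustic features to voxel activity on one stimulus, then test its predictions on another) can be sketched as follows; the features, weights, and noise levels are synthetic and illustrative, not the study's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two "stimuli": time-by-feature matrices of acoustic descriptors, with a
# single voxel's response generated from the same underlying weights.
true_w = np.array([0.8, -0.5, 0.3])
X_train = rng.normal(size=(200, 3))               # features of medley 1
X_test = rng.normal(size=(200, 3))                # features of medley 2
y_train = X_train @ true_w + rng.normal(0, 0.5, 200)
y_test = X_test @ true_w + rng.normal(0, 0.5, 200)

# Fit the encoding model on one stimulus...
w_hat, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# ...and evaluate it on the held-out stimulus via the
# prediction-observation correlation.
pred = X_test @ w_hat
r = np.corrcoef(pred, y_test)[0, 1]
print(round(r, 2))  # well above zero: the model generalizes across stimuli
```

Cross-validating across stimuli (rather than within one stimulus) is what guards against the model merely memorizing stimulus-specific temporal structure.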
Higgins, Nathan C; McLaughlin, Susan A; Rinne, Teemu; Stecker, G Christopher
2017-09-05
Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues, particularly interaural time and level differences (ITD and ILD), that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks, where targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and, critically, for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues.
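The cross-cue multivoxel classification result above can be illustrated with a deliberately simple analysis: train a nearest-centroid classifier on patterns evoked by one cue (left- vs. right-leading ITD) and test it on patterns evoked by the other cue (ILD). The simulated patterns and classifier below are illustrative assumptions, not the study's method:

```python
import numpy as np

rng = np.random.default_rng(2)
n_vox, n_trials = 50, 40

# A shared spatial code for "left" vs. "right", expressed by both cues.
code = rng.normal(size=n_vox)

def patterns(side, noise=2.0):
    """Simulated multivoxel response patterns for one lateralized condition."""
    sign = 1.0 if side == "left" else -1.0
    return sign * code + rng.normal(0, noise, size=(n_trials, n_vox))

# Train nearest-centroid classifier on ITD-evoked patterns...
c_left = patterns("left").mean(axis=0)
c_right = patterns("right").mean(axis=0)

# ...and test it on ILD-evoked patterns (cross-cue classification).
ild = np.vstack([patterns("left"), patterns("right")])
labels = np.array([0] * n_trials + [1] * n_trials)  # 0 = left, 1 = right

d_left = np.linalg.norm(ild - c_left, axis=1)
d_right = np.linalg.norm(ild - c_right, axis=1)
accuracy = np.mean((d_left > d_right).astype(int) == labels)
print(accuracy)  # well above chance when the two cues share a spatial code
```

Above-chance cross-cue accuracy is only possible if the two cues drive a common spatial representation, which is the inference the abstract draws from its cross-cue classification result.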
Discrimination of timbre in early auditory responses of the human brain.
Seol, Jaeho; Oh, MiAe; Kim, June Sic; Jin, Seung-Hyun; Kim, Sun Il; Chung, Chun Kee
2011-01-01
How differences in timbre are represented in the neural response, particularly with regard to the relevant brain mechanisms, has not been well addressed. Here we employed phasing and clipping of tones to produce auditory stimuli differing in timbre, reflecting its multidimensional nature. We investigated the auditory response as well as sensory gating, using magnetoencephalography (MEG). Thirty-five healthy subjects without hearing deficits participated in the experiments. Pairs of tones, either identical or different in timbre, were presented in a conditioning (S1)-testing (S2) paradigm with an interval of 500 ms. The magnitudes of the auditory M50 and M100 responses differed with timbre in both hemispheres, supporting the idea that timbre, at least as manipulated by phasing and clipping, is discriminated during early auditory processing. The effect of S1 on the response to the second stimulus in a pair occurred in the M100 of the left hemisphere, whereas both M50 and M100 responses to S2 in the right hemisphere reflected whether the two stimuli in a pair were the same or not. Both M50 and M100 magnitudes differed with presentation order (S1 vs. S2) for both same and different conditions in both hemispheres. Our results demonstrate that the auditory response depends on timbre characteristics. Moreover, they reveal that auditory sensory gating is determined not by the stimulus that directly evokes the response, but rather by whether the two stimuli in a pair are identical in timbre.
Holmes, Nicholas P; Dakwar, Azar R
2015-12-01
Movements aimed towards objects occasionally have to be adjusted when the object moves. These online adjustments can be very rapid, occurring in as little as 100 ms. More is known about the latency and neural basis of online control of movements to visual than to auditory target objects. We examined the latency of online corrections in reaching-to-point movements to visual and auditory targets that could change side and/or modality at movement onset. Visual or auditory targets were presented on the left or right side, and participants were instructed to reach and point to them as quickly and as accurately as possible. On half of the trials, the targets changed side at movement onset, and participants had to correct their movements to point to the new target location as quickly as possible. Given the different published approaches to measuring the latency for initiating movement corrections, we examined several methods systematically. The methods we describe here as optimal involved fitting a straight-line model to the velocity of the correction movement rather than using a statistical criterion to determine correction onset. In the multimodal experiment, these model-fitting methods produced significantly lower latencies for correcting movements away from auditory targets than away from visual targets. Our results confirm that rapid online correction is possible for auditory targets, but further work is required to determine whether the underlying control system for reaching and pointing movements is the same for auditory and visual targets. Copyright © 2015 Elsevier Ltd. All rights reserved.
Auditory, visual, and bimodal data link displays and how they support pilot performance.
Steelman, Kelly S; Talleur, Donald; Carbonari, Ronald; Yamani, Yusuke; Nunes, Ashley; McCarley, Jason S
2013-06-01
The design of data link messaging systems to ensure optimal pilot performance requires empirical guidance. The current study examined the effects of display format (auditory, visual, or bimodal) and visual display position (adjacent to the instrument panel or mounted on the console) on pilot performance. Subjects performed five 20-min simulated single-pilot flights. During each flight, subjects received messages from a simulated air traffic controller. Messages were delivered visually, auditorily, or bimodally. Subjects were asked to read back each message aloud and then perform the instructed maneuver. Visual and bimodal displays engendered lower subjective workload and better altitude tracking than auditory displays. Readback times were shorter with the two unimodal visual formats than with any of the other three formats. Advantages for the unimodal visual format ranged in size from 2.8 s to 3.8 s relative to the bimodal upper-left and auditory formats, respectively. Auditory displays allowed slightly more head-up time (3 to 3.5 s per minute) than either visual or bimodal displays. Position of the visual display had only modest effects on any measure. Combined with the results from previous studies by Helleberg and Wickens and by Lancaster and Casali, the current data favor visual and bimodal displays over auditory displays; unimodal auditory displays were favored by only one measure, head-up time, and only very modestly. The data evinced no statistically significant effects of visual display position on performance, suggesting that, contrary to expectations, the placement of a visual data link display may be of relatively little consequence to performance.
Intracranial mapping of auditory perception: Event-related responses and electrocortical stimulation.
Sinai, A; Crone, N E; Wied, H M; Franaszczuk, P J; Miglioretti, D; Boatman-Reich, D
2009-01-01
We compared intracranial recordings of auditory event-related responses with electrocortical stimulation mapping (ESM) to determine their functional relationship. Intracranial recordings and ESM were performed, using speech and tones, in adult epilepsy patients with subdural electrodes implanted over lateral left cortex. Evoked N1 responses and induced spectral power changes were obtained by trial averaging and time-frequency analysis. ESM impaired perception and comprehension of speech, not tones, at electrode sites in the posterior temporal lobe. There was high spatial concordance between ESM sites critical for speech perception and the largest spectral power (100% concordance) and N1 (83%) responses to speech. N1 responses showed good sensitivity (0.75) and specificity (0.82), but poor positive predictive value (0.32). Conversely, increased high-frequency power (>60Hz) showed high specificity (0.98), but poorer sensitivity (0.67) and positive predictive value (0.67). Stimulus-related differences were observed in the spatial-temporal patterns of event-related responses. Intracranial auditory event-related responses to speech were associated with cortical sites critical for auditory perception and comprehension of speech. These results suggest that the distribution and magnitude of intracranial auditory event-related responses to speech reflect the functional significance of the underlying cortical regions and may be useful for pre-surgical functional mapping.
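The sensitivity, specificity, and positive-predictive-value figures reported above follow from standard confusion-matrix definitions; in the sketch below, the site counts are hypothetical, chosen only so the arithmetic reproduces the reported N1 values:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Confusion-matrix summary statistics for a diagnostic marker."""
    sensitivity = tp / (tp + fn)  # proportion of critical sites detected
    specificity = tn / (tn + fp)  # proportion of non-critical sites rejected
    ppv = tp / (tp + fp)          # proportion of positive markers that are correct
    return sensitivity, specificity, ppv

# Hypothetical counts for a marker that detects most critical sites but also
# fires at many non-critical ones: good sensitivity and specificity can
# coexist with poor PPV when critical sites are rare.
sens, spec, ppv = diagnostic_metrics(tp=6, fp=13, tn=60, fn=2)
print(round(sens, 2), round(spec, 2), round(ppv, 2))  # 0.75 0.82 0.32
```

This is why the abstract can describe the N1 as having good sensitivity and specificity yet poor positive predictive value: with few truly critical sites, even a modest false-positive count dominates the positives.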
Sensory-motor interactions for vocal pitch monitoring in non-primary human auditory cortex.
Greenlee, Jeremy D W; Behroozmand, Roozbeh; Larson, Charles R; Jackson, Adam W; Chen, Fangxiang; Hansen, Daniel R; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A
2013-01-01
The neural mechanisms underlying processing of auditory feedback during self-vocalization are poorly understood. One technique used to study the role of auditory feedback involves shifting the pitch of the feedback that a speaker receives, known as pitch-shifted feedback. We utilized a pitch shift self-vocalization and playback paradigm to investigate the underlying neural mechanisms of audio-vocal interaction. High-resolution electrocorticography (ECoG) signals were recorded directly from auditory cortex of 10 human subjects while they vocalized and received brief downward (-100 cents) pitch perturbations in their voice auditory feedback (speaking task). ECoG was also recorded when subjects passively listened to playback of their own pitch-shifted vocalizations. Feedback pitch perturbations elicited average evoked potential (AEP) and event-related band power (ERBP) responses, primarily in the high gamma (70-150 Hz) range, in focal areas of non-primary auditory cortex on superior temporal gyrus (STG). The AEPs and high gamma responses were both modulated by speaking compared with playback in a subset of STG contacts. From these contacts, a majority showed significant enhancement of high gamma power and AEP responses during speaking while the remaining contacts showed attenuated response amplitudes. The speaking-induced enhancement effect suggests that engaging the vocal motor system can modulate auditory cortical processing of self-produced sounds in such a way as to increase neural sensitivity for feedback pitch error detection. It is likely that mechanisms such as efference copies may be involved in this process, and modulation of AEP and high gamma responses imply that such modulatory effects may affect different cortical generators within distinctive functional networks that drive voice production and control.
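The -100 cent perturbation used above corresponds to a fixed frequency ratio, since cents are a logarithmic pitch unit with 1200 cents per octave (100 cents per equal-tempered semitone). A minimal conversion sketch, where the 200 Hz voice fundamental is an assumed example value, not one from the study:

```python
def cents_to_ratio(cents):
    """Convert a pitch shift in cents to a frequency ratio (1200 cents per octave)."""
    return 2.0 ** (cents / 1200.0)

# A -100 cent perturbation (one equal-tempered semitone down), applied to a
# hypothetical 200 Hz voice fundamental:
ratio = cents_to_ratio(-100)
print(round(ratio, 4))         # 0.9439
print(round(200 * ratio, 1))   # 188.8 (Hz)
```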
Brain state-dependent abnormal LFP activity in the auditory cortex of a schizophrenia mouse model
Nakao, Kazuhito; Nakazawa, Kazu
2014-01-01
In schizophrenia, evoked 40-Hz auditory steady-state responses (ASSRs) are impaired, reflecting the sensory deficits in this disorder, and baseline spontaneous oscillatory activity also appears to be abnormal. It has been debated whether the evoked ASSR impairments are due to a possible increase in baseline power. GABAergic interneuron-specific NMDA receptor (NMDAR) hypofunction mutant mice mimic some behavioral and pathophysiological aspects of schizophrenia. To determine the presence and extent of sensory deficits in these mutant mice, we recorded spontaneous local field potential (LFP) activity and click-train evoked ASSRs from the primary auditory cortex of awake, head-restrained mice. Baseline spontaneous LFP power in the pre-stimulus period before application of the first click trains was augmented across a wide range of frequencies. However, when repetitive ASSR stimuli were presented every 20 s, averaged spontaneous LFP power amplitudes during the inter-ASSR stimulus intervals in the mutant mice became indistinguishable from the levels of control mice. Nonetheless, the evoked 40-Hz ASSR power and its phase locking to click trains were robustly impaired in the mutants, although the evoked 20-Hz ASSRs were also somewhat diminished. These results suggest that NMDAR hypofunction in cortical GABAergic neurons confers two brain state-dependent LFP abnormalities on the auditory cortex: (1) a broadband increase in spontaneous LFP power in the absence of external inputs, and (2) a robust deficit in evoked ASSR power and phase locking despite normal baseline LFP power during the repetitive auditory stimuli. The “paradoxically” high spontaneous LFP activity of the primary auditory cortex in the absence of external stimuli may contribute to the emergence of schizophrenia-related aberrant auditory perception. PMID:25018691
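Phase locking of the ASSR to click trains, as analyzed above, is commonly quantified with an inter-trial phase-locking value (PLV): the magnitude of the mean unit phase vector at the stimulation frequency across trials. The sketch below is a generic illustration on synthetic trials, not the study's analysis code; all parameters are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
fs, f0, n_trials, n_samp = 1000, 40, 60, 1000   # 1-s trials, 40-Hz ASSR
t = np.arange(n_samp) / fs

def plv_at(trials, freq):
    """Inter-trial phase-locking value at one frequency:
    the magnitude of the mean unit phase vector across trials."""
    spectra = np.fft.rfft(trials, axis=1)
    k = int(round(freq * n_samp / fs))  # FFT bin of the target frequency
    unit = spectra[:, k] / np.abs(spectra[:, k])
    return np.abs(unit.mean())

# Phase-locked trials: the same 40-Hz phase on every trial, plus noise.
locked = np.sin(2 * np.pi * f0 * t) + rng.normal(0, 1.0, (n_trials, n_samp))

# Non-locked trials: a random 40-Hz phase on each trial.
jitter = rng.uniform(0, 2 * np.pi, (n_trials, 1))
jittered = (np.sin(2 * np.pi * f0 * t + jitter)
            + rng.normal(0, 1.0, (n_trials, n_samp)))

print(plv_at(locked, f0) > plv_at(jittered, f0))  # True
```

A PLV near 1 indicates a consistent stimulus-locked phase across trials; phase jitter drives it toward zero, which is the kind of deficit the mutants show despite normal baseline power.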
Michael, E B; Keller, T A; Carpenter, P A; Just, M A
2001-08-01
The neural substrate underlying reading vs. listening comprehension of sentences was compared using fMRI. One way this issue was addressed was by comparing patterns of activation, particularly in cortical association areas classically implicated in language processing. The precise locations of activation differed between the two modalities. In the left inferior frontal gyrus (Broca's area), the activation associated with listening was more anterior and inferior than that associated with reading, suggesting more semantic processing during listening comprehension. In the left posterior superior and middle temporal region (roughly, Wernicke's area), the activation for listening was closer to primary auditory cortex (more anterior and somewhat more lateral) than the activation for reading. In several regions, activation was much more left-lateralized for reading than for listening. In addition to differences in the location of activation, there were also differences in the total amount of activation between the two modalities in several regions. A second way the modality comparison was addressed was by examining how the neural systems responded to comprehension workload in the two modalities, systematically varying the structural complexity of the sentences to be processed. Here, the distribution of the workload increase associated with processing additional structural complexity was very similar across the two input modalities. The results suggest a number of subtle differences in the cognitive processing underlying listening vs. reading comprehension. Copyright 2001 Wiley-Liss, Inc.