Congenital deafness affects deep layers in primary and secondary auditory cortex
Berger, Christoph; Kühne, Daniela; Scheper, Verena
2017-01-01
Abstract Congenital deafness leads to functional deficits in the auditory cortex for which early cochlear implantation can effectively compensate. Most of these deficits have been demonstrated functionally. Furthermore, the majority of previous studies on deafness have involved the primary auditory cortex; knowledge of higher‐order areas is limited to effects of cross‐modal reorganization. In this study, we compared the cortical cytoarchitecture of four cortical areas in adult hearing and congenitally deaf cats (CDCs): the primary auditory field A1; two secondary auditory fields, namely the dorsal zone and the second auditory field (A2); and a reference visual association field (area 7), in the same sections stained with either Nissl or SMI‐32 antibodies. The general cytoarchitectonic pattern and the area‐specific characteristics in the auditory cortex remained unchanged in animals with congenital deafness. Whereas area 7 did not differ between the groups investigated, all auditory fields were slightly thinner in CDCs, owing to reduced thickness of layers IV–VI. The study documents that, while the cytoarchitectonic patterns are in general independent of sensory experience, reduced thickness is observed in layer IV and the infragranular layers of both primary and higher‐order auditory fields. The study demonstrates that congenital deafness affects supragranular and other cortical layers differently, but produces similar dystrophic effects in all investigated auditory fields. PMID:28643417
Da Costa, Sandra; Bourquin, Nathalie M.-P.; Knebel, Jean-François; Saenz, Melissa; van der Zwaag, Wietske; Clarke, Stephanie
2015-01-01
Environmental sounds are highly complex stimuli whose recognition depends on the interaction of top-down and bottom-up processes in the brain. Their semantic representations were shown to yield repetition suppression effects, i.e., a decrease in activity during exposure to a sound that is perceived as belonging to the same source as a preceding sound. Making use of the high spatial resolution of 7T fMRI, we investigated the representations of sound objects within early-stage auditory areas on the supratemporal plane. The primary auditory cortex was identified by means of tonotopic mapping and the non-primary areas by comparison with previous histological studies. Repeated presentations of different exemplars of the same sound source, as compared to the presentation of different sound sources, yielded significant repetition suppression effects within a subset of early-stage areas. This effect was found within the right hemisphere in primary areas A1 and R as well as two non-primary areas on the antero-medial part of the planum temporale, and within the left hemisphere in A1 and a non-primary area on the medial part of Heschl’s gyrus. Thus, several, but not all, early-stage auditory areas encode the meaning of environmental sounds. PMID:25938430
An anatomical and functional topography of human auditory cortical areas
Moerel, Michelle; De Martino, Federico; Formisano, Elia
2014-01-01
While advances in magnetic resonance imaging (MRI) throughout the last decades have enabled the detailed anatomical and functional inspection of the human brain non-invasively, to date there is no consensus regarding the precise subdivision and topography of the areas forming the human auditory cortex. Here, we propose a topography of the human auditory areas based on insights on the anatomical and functional properties of human auditory areas as revealed by studies of cyto- and myelo-architecture and fMRI investigations at ultra-high magnetic field (7 Tesla). Importantly, we illustrate that—whereas a group-based approach to analyze functional (tonotopic) maps is appropriate to highlight the main tonotopic axis—the examination of tonotopic maps at single subject level is required to detail the topography of primary and non-primary areas that may be more variable across subjects. Furthermore, we show that considering multiple maps indicative of anatomical (i.e., myelination) as well as of functional properties (e.g., broadness of frequency tuning) is helpful in identifying auditory cortical areas in individual human brains. We propose and discuss a topography of areas that is consistent with old and recent anatomical post-mortem characterizations of the human auditory cortex and that may serve as a working model for neuroscience studies of auditory functions. PMID:25120426
Analyzing pitch chroma and pitch height in the human brain.
Warren, Jason D; Uppenkamp, Stefan; Patterson, Roy D; Griffiths, Timothy D
2003-11-01
The perceptual pitch dimensions of chroma and height have distinct representations in the human brain: chroma is represented in cortical areas anterior to primary auditory cortex, whereas height is represented posterior to primary auditory cortex.
Scott, Brian H; Saleem, Kadharbatcha S; Kikuchi, Yukiko; Fukushima, Makoto; Mishkin, Mortimer; Saunders, Richard C
2017-11-01
In the primate auditory cortex, information flows serially in the mediolateral dimension from core, to belt, to parabelt. In the caudorostral dimension, stepwise serial projections convey information through the primary, rostral, and rostrotemporal (AI, R, and RT) core areas on the supratemporal plane, continuing to the rostrotemporal polar area (RTp) and adjacent auditory-related areas of the rostral superior temporal gyrus (STGr) and temporal pole. In addition to this cascade of corticocortical connections, the auditory cortex receives parallel thalamocortical projections from the medial geniculate nucleus (MGN). Previous studies have examined the projections from MGN to auditory cortex, but most have focused on the caudal core areas AI and R. In this study, we investigated the full extent of connections between MGN and AI, R, RT, RTp, and STGr using retrograde and anterograde anatomical tracers. Both AI and R received nearly 90% of their thalamic inputs from the ventral subdivision of the MGN (MGv; the primary/lemniscal auditory pathway). By contrast, RT received only ∼45% from MGv, and an equal share from the dorsal subdivision (MGd). Area RTp received ∼25% of its inputs from MGv, but received additional inputs from multisensory areas outside the MGN (30% in RTp vs. 1-5% in core areas). The MGN input to RTp distinguished this rostral extension of auditory cortex from the adjacent auditory-related cortex of the STGr, which received 80% of its thalamic input from multisensory nuclei (primarily medial pulvinar). Anterograde tracers identified complementary descending connections by which highly processed auditory information may modulate thalamocortical inputs. © 2017 Wiley Periodicals, Inc.
Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D
2015-09-01
To use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context and visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We used functional magnetic resonance imaging to examine cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.
Spatial processing in the auditory cortex of the macaque monkey
Recanzone, Gregg H.
2000-10-01
The patterns of cortico-cortical and cortico-thalamic connections of auditory cortical areas in the rhesus monkey have led to the hypothesis that acoustic information is processed in series and in parallel in the primate auditory cortex. Recent physiological experiments in the behaving monkey indicate that the response properties of neurons in different cortical areas are both functionally distinct from each other, which is indicative of parallel processing, and functionally similar to each other, which is indicative of serial processing. Thus, auditory cortical processing may be similar to the serial and parallel "what" and "where" processing by the primate visual cortex. If "where" information is serially processed in the primate auditory cortex, neurons in cortical areas along this pathway should have progressively better spatial tuning properties. This prediction is supported by recent experiments that have shown that neurons in the caudomedial field have better spatial tuning properties than neurons in the primary auditory cortex. Neurons in the caudomedial field are also better than primary auditory cortex neurons at predicting the sound localization ability across different stimulus frequencies and bandwidths in both azimuth and elevation. These data support the hypothesis that the primate auditory cortex processes acoustic information in a serial and parallel manner and suggest that this may be a general cortical mechanism for sensory perception.
Primary and multisensory cortical activity is correlated with audiovisual percepts.
Benoit, Margo McKenna; Raij, Tommi; Lin, Fa-Hsuan; Jääskeläinen, Iiro P; Stufflebeam, Steven
2010-04-01
Incongruent auditory and visual stimuli can elicit audiovisual illusions such as the McGurk effect, where visual /ka/ and auditory /pa/ fuse into another percept such as /ta/. In the present study, human brain activity was measured with adaptation functional magnetic resonance imaging to investigate which brain areas support such audiovisual illusions. Subjects viewed trains of four movies beginning with three congruent /pa/ stimuli to induce adaptation. The fourth stimulus could be (i) another congruent /pa/, (ii) a congruent /ka/, (iii) an incongruent stimulus that evokes the McGurk effect in susceptible individuals (lips /ka/, voice /pa/), or (iv) the converse combination that does not cause the McGurk effect (lips /pa/, voice /ka/). This paradigm was predicted to show increased release from adaptation (i.e., stronger brain activation) when the fourth movie and the resulting percept were increasingly different from the three previous movies. A stimulus change in either the auditory or the visual stimulus from /pa/ to /ka/ (iii, iv) produced within-modality and cross-modal responses in primary auditory and visual areas. A greater release from adaptation was observed for incongruent non-McGurk (iv) compared to incongruent McGurk (iii) trials. A network including the primary auditory and visual cortices, nonprimary auditory cortex, and several multisensory areas (superior temporal sulcus, intraparietal sulcus, insula, and pre-central cortex) showed a correlation between perceiving the McGurk effect and the fMRI signal, suggesting that these areas support the audiovisual illusion. Copyright 2009 Wiley-Liss, Inc.
Cortico-Cortical Connectivity Within Ferret Auditory Cortex.
Bizley, Jennifer K; Bajo, Victoria M; Nodal, Fernando R; King, Andrew J
2015-10-15
Despite numerous studies of auditory cortical processing in the ferret (Mustela putorius), very little is known about the connections between the different regions of the auditory cortex that have been characterized cytoarchitectonically and physiologically. We examined the distribution of retrograde and anterograde labeling after injecting tracers into one or more regions of ferret auditory cortex. Injections of different tracers at frequency-matched locations in the core areas, the primary auditory cortex (A1) and anterior auditory field (AAF), of the same animal revealed the presence of reciprocal connections with overlapping projections to and from discrete regions within the posterior pseudosylvian and suprasylvian fields (PPF and PSF), suggesting that these connections are frequency specific. In contrast, projections from the primary areas to the anterior dorsal field (ADF) on the anterior ectosylvian gyrus were scattered and non-overlapping, consistent with the non-tonotopic organization of this field. The relative strength of the projections originating in each of the primary fields differed, with A1 predominantly targeting the posterior bank fields PPF and PSF, which in turn project to the ventral posterior field, whereas AAF projects more heavily to the ADF, which then projects to the anteroventral field and the pseudosylvian sulcal cortex. These findings suggest that parallel anterior and posterior processing networks may exist, although the connections between different areas often overlap and interactions were present at all levels. © 2015 Wiley Periodicals, Inc.
Olshansky, Michael P; Bar, Rachel J; Fogarty, Mary; DeSouza, Joseph F X
2015-01-01
The current study used functional magnetic resonance imaging to examine the neural activity of an expert dancer with 35 years of break-dancing experience during the kinesthetic motor imagery (KMI) of dance accompanied by highly familiar and unfamiliar music. The goal of this study was to examine the effect of musical familiarity on neural activity underlying KMI within a highly experienced dancer. In order to investigate this in both primary sensory and motor planning cortical areas, we examined the effects of music familiarity on the primary auditory cortex [Heschl's gyrus (HG)] and the supplementary motor area (SMA). Our findings reveal reduced HG activity and greater SMA activity during imagined dance to familiar music compared to unfamiliar music. We propose that one's internal representations of dance moves are influenced by auditory stimuli and may be specific to a dance style and the music accompanying it.
Cortical Representations of Speech in a Multitalker Auditory Scene.
Puvvada, Krishna C; Simon, Jonathan Z
2017-09-20
The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. 
We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory scene, with both attended and unattended speech streams represented with almost equal fidelity. We also show that higher-order auditory cortical areas, by contrast, represent an attended speech stream separately from, and with significantly higher fidelity than, unattended speech streams. Furthermore, the unattended background streams are represented as a single undivided background object rather than as distinct background objects.
Vanneste, Sven; De Ridder, Dirk
2012-01-01
Tinnitus is the perception of a sound in the absence of an external sound source. It is characterized by sensory components such as the perceived loudness, the lateralization, the tinnitus type (pure tone, noise-like) and associated emotional components, such as distress and mood changes. Source localization of quantitative electroencephalography (qEEG) data demonstrate the involvement of auditory brain areas as well as several non-auditory brain areas such as the anterior cingulate cortex (dorsal and subgenual), auditory cortex (primary and secondary), dorsal lateral prefrontal cortex, insula, supplementary motor area, orbitofrontal cortex (including the inferior frontal gyrus), parahippocampus, posterior cingulate cortex and the precuneus, in different aspects of tinnitus. Explaining these non-auditory brain areas as constituents of separable subnetworks, each reflecting a specific aspect of the tinnitus percept increases the explanatory power of the non-auditory brain areas involvement in tinnitus. Thus, the unified percept of tinnitus can be considered an emergent property of multiple parallel dynamically changing and partially overlapping subnetworks, each with a specific spontaneous oscillatory pattern and functional connectivity signature. PMID:22586375
Sato, M; Yasui, N; Isobe, I; Kobayashi, T
1982-10-01
A 49-year-old right-handed female was reported. She showed pure word deafness and auditory agnosia because of bilateral temporo-parietal lesions. The left lesion resulted from angiospasm of the left anterior and middle cerebral arteries after subarachnoid hemorrhage due to a ruptured aneurysm of the left carotid artery, and the right lesion resulted from a subcortical hematoma after a V-P shunt operation. CT scan revealed abnormal low-density areas in the bilateral temporo-parietal regions seven months after onset. Neuropsychological findings were as follows: there were no aphasic symptoms such as paraphasia, word-finding difficulties, or disturbances of spontaneous writing, reading, and calculation. However, her auditory comprehension was severely disturbed, and she could neither repeat words after the tester nor write from dictation. She also could not recognize meaningful sounds or music, in spite of normal hearing sensitivity for pure tones, BSR, and AER. We discussed the neuropsychological mechanisms of auditory recognition, and assumed that each hemisphere might process both verbal and non-verbal auditory stimuli in the secondary auditory area. The auditory input may be recognized at the left association area, the final level of this mechanism. The pure word deafness and auditory agnosia in this case might be caused by disruption of the right secondary auditory area, of the pathway between the left primary and left secondary auditory areas, and of the pathway between the left and right secondary auditory areas.
Emergent selectivity for task-relevant stimuli in higher-order auditory cortex
Atiani, Serin; David, Stephen V.; Elgueda, Diego; Locastro, Michael; Radtke-Schuller, Susanne; Shamma, Shihab A.; Fritz, Jonathan B.
2014-01-01
A variety of attention-related effects have been demonstrated in primary auditory cortex (A1). However, an understanding of the functional role of higher auditory cortical areas in guiding attention to acoustic stimuli has been elusive. We recorded from neurons in two tonotopic cortical belt areas in the dorsal posterior ectosylvian gyrus (dPEG) of ferrets trained on a simple auditory discrimination task. Neurons in dPEG showed similar basic auditory tuning properties to A1, but during behavior we observed marked differences between these areas. In the belt areas, changes in neuronal firing rate and response dynamics greatly enhanced responses to target stimuli relative to distractors, allowing for greater attentional selection during active listening. Consistent with existing anatomical evidence, the pattern of sensory tuning and behavioral modulation in auditory belt cortex links the spectro-temporal representation of the whole acoustic scene in A1 to a more abstracted representation of task-relevant stimuli observed in frontal cortex. PMID:24742467
Separating pitch chroma and pitch height in the human brain
Warren, J. D.; Uppenkamp, S.; Patterson, R. D.; Griffiths, T. D.
2003-01-01
Musicians recognize pitch as having two dimensions. On the keyboard, these are illustrated by the octave and the cycle of notes within the octave. In perception, these dimensions are referred to as pitch height and pitch chroma, respectively. Pitch chroma provides a basis for presenting acoustic patterns (melodies) that do not depend on the particular sound source. In contrast, pitch height provides a basis for segregation of notes into streams to separate sound sources. This paper reports a functional magnetic resonance experiment designed to search for distinct mappings of these two types of pitch change in the human brain. The results show that chroma change is specifically represented anterior to primary auditory cortex, whereas height change is specifically represented posterior to primary auditory cortex. We propose that tracking of acoustic information streams occurs in anterior auditory areas, whereas the segregation of sound objects (a crucial aspect of auditory scene analysis) depends on posterior areas. PMID:12909719
Multisensory connections of monkey auditory cerebral cortex
Smiley, John F.; Falchier, Arnaud
2009-01-01
Functional studies have demonstrated multisensory responses in auditory cortex, even in the primary and early auditory association areas. The features of somatosensory and visual responses in auditory cortex suggest that they are involved in multiple processes including spatial, temporal and object-related perception. Tract tracing studies in monkeys have demonstrated several potential sources of somatosensory and visual inputs to auditory cortex. These include potential somatosensory inputs from the retroinsular (RI) and granular insula (Ig) cortical areas, and from the thalamic posterior (PO) nucleus. Potential sources of visual responses include peripheral field representations of areas V2 and prostriata, as well as the superior temporal polysensory area (STP) in the superior temporal sulcus, and the magnocellular medial geniculate thalamic nucleus (MGm). Besides these sources, there are several other thalamic, limbic and cortical association structures that have multisensory responses and may contribute cross-modal inputs to auditory cortex. These connections demonstrated by tract tracing provide a list of potential inputs, but in most cases their significance has not been confirmed by functional experiments. It is possible that the somatosensory and visual modulation of auditory cortex are each mediated by multiple extrinsic sources. PMID:19619628
Tuning In to Sound: Frequency-Selective Attentional Filter in Human Primary Auditory Cortex
Da Costa, Sandra; van der Zwaag, Wietske; Miller, Lee M.; Clarke, Stephanie
2013-01-01
Cocktail parties, busy streets, and other noisy environments pose a difficult challenge to the auditory system: how to focus attention on selected sounds while ignoring others? Neurons of primary auditory cortex, many of which are sharply tuned to sound frequency, could help solve this problem by filtering selected sound information based on frequency-content. To investigate whether this occurs, we used high-resolution fMRI at 7 tesla to map the fine-scale frequency-tuning (1.5 mm isotropic resolution) of primary auditory areas A1 and R in six human participants. Then, in a selective attention experiment, participants heard low (250 Hz)- and high (4000 Hz)-frequency streams of tones presented at the same time (dual-stream) and were instructed to focus attention onto one stream versus the other, switching back and forth every 30 s. Attention to low-frequency tones enhanced neural responses within low-frequency-tuned voxels relative to high, and when attention switched the pattern quickly reversed. Thus, like a radio, human primary auditory cortex is able to tune into attended frequency channels and can switch channels on demand. PMID:23365225
Nir, Yuval; Vyazovskiy, Vladyslav V.; Cirelli, Chiara; Banks, Matthew I.; Tononi, Giulio
2015-01-01
Sleep entails a disconnection from the external environment. By and large, sensory stimuli do not trigger behavioral responses and are not consciously perceived as they usually are in wakefulness. Traditionally, sleep disconnection was ascribed to a thalamic “gate,” which would prevent signal propagation along ascending sensory pathways to primary cortical areas. Here, we compared single-unit and LFP responses in core auditory cortex as freely moving rats spontaneously switched between wakefulness and sleep states. Despite robust differences in baseline neuronal activity, both the selectivity and the magnitude of auditory-evoked responses were comparable across wakefulness, non-rapid eye movement (NREM) sleep, and rapid eye movement (REM) sleep (pairwise differences <8% between states). The processing of deviant tones was also compared in sleep and wakefulness using an oddball paradigm. Robust stimulus-specific adaptation (SSA) was observed following the onset of repetitive tones, and the strength of SSA effects (13–20%) was comparable across vigilance states. Thus, responses in core auditory cortex are preserved across sleep states, suggesting that evoked activity in primary sensory cortices is driven by external physical stimuli with little modulation by vigilance state. We suggest that sensory disconnection during sleep occurs at a stage later than primary sensory areas. PMID:24323498
Single-unit analysis of somatosensory processing in the core auditory cortex of hearing ferrets.
Meredith, M Alex; Allman, Brian L
2015-03-01
The recent findings in several species that the primary auditory cortex processes non-auditory information have largely overlooked the possibility of somatosensory effects. Therefore, the present investigation examined the core auditory cortices (anterior auditory field and primary auditory cortex) for tactile responsivity. Multiple single-unit recordings from anesthetised ferret cortex yielded histologically verified neurons (n = 311) tested with electronically controlled auditory, visual and tactile stimuli, and their combinations. Of the auditory neurons tested, a small proportion (17%) was influenced by visual cues, but a somewhat larger number (23%) was affected by tactile stimulation. Tactile effects rarely occurred alone and spiking responses were observed in bimodal auditory-tactile neurons. However, the broadest tactile effect that was observed, which occurred in all neuron types, was that of suppression of the response to a concurrent auditory cue. The presence of tactile effects in the core auditory cortices was supported by a substantial anatomical projection from the rostral suprasylvian sulcal somatosensory area. Collectively, these results demonstrate that crossmodal effects in the auditory cortex are not exclusively visual and that somatosensation plays a significant role in modulation of acoustic processing, and indicate that crossmodal plasticity following deafness may unmask these existing non-auditory functions. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Hierarchical auditory processing directed rostrally along the monkey's supratemporal plane.
Kikuchi, Yukiko; Horwitz, Barry; Mishkin, Mortimer
2010-09-29
Connectional anatomical evidence suggests that the auditory core, containing the tonotopic areas A1, R, and RT, constitutes the first stage of auditory cortical processing, with feedforward projections from core outward, first to the surrounding auditory belt and then to the parabelt. Connectional evidence also raises the possibility that the core itself is serially organized, with feedforward projections from A1 to R and with additional projections, although of unknown feed direction, from R to RT. We hypothesized that area RT together with more rostral parts of the supratemporal plane (rSTP) form the anterior extension of a rostrally directed stimulus quality processing stream originating in the auditory core area A1. Here, we analyzed auditory responses of single neurons in three different sectors distributed caudorostrally along the supratemporal plane (STP): sector I, mainly area A1; sector II, mainly area RT; and sector III, principally RTp (the rostrotemporal polar area), including cortex located 3 mm from the temporal tip. Mean onset latency of excitation responses and stimulus selectivity to monkey calls and other sounds, both simple and complex, increased progressively from sector I to III. Also, whereas cells in sector I responded with significantly higher firing rates to the "other" sounds than to monkey calls, those in sectors II and III responded at the same rate to both stimulus types. The pattern of results supports the proposal that the STP contains a rostrally directed, hierarchically organized auditory processing stream, with gradually increasing stimulus selectivity, and that this stream extends from the primary auditory area to the temporal pole.
Electrophysiological Evidence for the Sources of the Masking Level Difference.
Fowler, Cynthia G
2017-08-16
The purpose of this review article is to summarize evidence from auditory evoked potential studies describing the contributions of the auditory brainstem and cortex to the generation of the masking level difference (MLD). A literature review was performed, focusing on the auditory brainstem, middle, and late latency responses used in protocols similar to those used to generate the behavioral MLD. Temporal coding of the signals necessary for generating the MLD occurs in the auditory periphery and brainstem. Brainstem disorders up to wave III of the auditory brainstem response (ABR) can disrupt the MLD. The full MLD requires input to the generators of the auditory late latency potentials to produce all characteristics of the MLD; these characteristics include threshold differences for various binaural signal and noise conditions. Studies using central auditory lesions are beginning to identify the cortical effects on the MLD. The MLD requires auditory processing from the periphery to cortical areas. A healthy auditory periphery and brainstem code temporal synchrony, which is essential for the ABR. Threshold differences require engaging cortical function beyond the primary auditory cortex. More studies using cortical lesions and evoked potentials or imaging should clarify the specific cortical areas involved in the MLD.
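The MLD itself is a simple threshold difference: the release from masking obtained when the interaural phase of the signal or the noise is inverted, relative to the diotic (S0N0) condition. A minimal sketch with illustrative threshold values (hypothetical numbers, not data from the studies reviewed):

```python
def mld_db(thresholds_db):
    """Masking level differences (dB) for binaural conditions relative to
    the diotic S0N0 baseline. `thresholds_db` maps a condition name
    (e.g. 'SpiN0' = antiphasic signal in diotic noise) to its masked
    detection threshold in dB. A positive MLD is a binaural release
    from masking."""
    baseline = thresholds_db["S0N0"]
    return {cond: baseline - t
            for cond, t in thresholds_db.items() if cond != "S0N0"}

# Illustrative thresholds: a ~15 dB release for SpiN0 is typical for
# low-frequency tones in broadband noise.
print(mld_db({"S0N0": 60.0, "SpiN0": 45.0, "S0Npi": 52.0}))
```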
Partially Overlapping Brain Networks for Singing and Cello Playing
Segado, Melanie; Hollinger, Avrum; Thibodeau, Joseph; Penhune, Virginia; Zatorre, Robert J.
2018-01-01
This research uses an MR-compatible cello to compare functional brain activation during singing and cello playing within the same individuals to determine the extent to which arbitrary auditory-motor associations, like those required to play the cello, co-opt functional brain networks that evolved for singing. Musical instrument playing and singing both require highly specific associations between sounds and movements. Because these are both used to produce musical sounds, it is often assumed in the literature that their neural underpinnings are highly similar. However, singing is an evolutionarily old human trait, and the auditory-motor associations used for singing are also used for speech and non-speech vocalizations. This sets it apart from the arbitrary auditory-motor associations required to play musical instruments. The pitch range of the cello is similar to that of the human voice, but cello playing is completely independent of the vocal apparatus, and can therefore be used to dissociate the auditory-vocal network from the auditory-motor network. While in the MR scanner, 11 expert cellists listened to and subsequently produced individual tones either by singing or cello playing. All participants were able to sing and play the target tones in tune (<50 cents deviation from target). We found that brain activity during cello playing directly overlaps with brain activity during singing in many areas within the auditory-vocal network. These include primary motor, dorsal premotor, and supplementary motor cortices (M1, dPMC, SMA), the primary and periprimary auditory cortices within the superior temporal gyrus (STG) including Heschl's gyrus, anterior insula (aINS), anterior cingulate cortex (ACC), intraparietal sulcus (IPS), and the cerebellum but, notably, exclude the periaqueductal gray (PAG) and basal ganglia (putamen).
Second, we found that activity within the overlapping areas is positively correlated with, and therefore likely contributing to, singing and playing in tune, as determined with performance measures. Third, we found that activity in auditory areas is functionally connected with activity in dorsal motor and premotor areas, and that the connectivity between them is positively correlated with good performance on this task. This functional connectivity suggests that these brain areas work together to support task performance rather than being merely coincidentally active. Last, our findings showed that cello playing may directly co-opt vocal areas (including the larynx area of motor cortex), especially if musical training begins before age 7. PMID:29892211
Geissler, Diana B; Ehret, Günter
2004-02-01
Details of brain areas for acoustical Gestalt perception and the recognition of species-specific vocalizations are not known. Here we show how spectral properties and the recognition of the acoustical Gestalt of wriggling calls of mouse pups based on a temporal property are represented in auditory cortical fields and an association area (dorsal field) of the pups' mothers. We stimulated either with a call model releasing maternal behaviour at a high rate (call recognition) or with two models of low behavioural significance (perception without recognition). Brain activation was quantified using c-Fos immunocytochemistry, counting Fos-positive cells in electrophysiologically mapped auditory cortical fields and the dorsal field. A frequency-specific labelling in two primary auditory fields is related to call perception but not to the discrimination of the biological significance of the call models used. Labelling related to call recognition is present in the second auditory field (AII). A left hemisphere advantage of labelling in the dorsoposterior field seems to reflect an integration of call recognition with maternal responsiveness. The dorsal field is activated only in the left hemisphere. The spatial extent of Fos-positive cells within the auditory cortex and its fields is larger in the left than in the right hemisphere. Our data show that a left hemisphere advantage in processing of a species-specific vocalization up to recognition is present in mice. The differential representation of vocalizations of high vs. low biological significance, as seen only in higher-order and not in primary fields of the auditory cortex, is discussed in the context of perceptual strategies.
Psychophysics and Neuronal Bases of Sound Localization in Humans
Ahveninen, Jyrki; Kopco, Norbert; Jääskeläinen, Iiro P.
2013-01-01
Localization of sound sources is a considerable computational challenge for the human brain. Whereas the visual system can process basic spatial information in parallel, the auditory system lacks a straightforward correspondence between external spatial locations and sensory receptive fields. Consequently, the question of how different acoustic features supporting spatial hearing are represented in the central nervous system is still open. Functional neuroimaging studies in humans have provided evidence for a posterior auditory “where” pathway that encompasses non-primary auditory cortex areas, including the planum temporale (PT) and posterior superior temporal gyrus (STG), which are strongly activated by horizontal sound direction changes, distance changes, and movement. However, these areas are also activated by a wide variety of other stimulus features, posing a challenge for the interpretation that the underlying areas are purely spatial. This review discusses behavioral and neuroimaging studies on sound localization, and some of the competing models of representation of auditory space in humans. PMID:23886698
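The horizontal-direction sensitivity discussed above ultimately rests on binaural cues, chiefly interaural time and level differences. As a hypothetical illustration of the magnitudes involved (not a model from this review), Woodworth's spherical-head approximation gives the interaural time difference (ITD) as a function of azimuth:

```python
import math

def woodworth_itd(azimuth_rad, head_radius_m=0.0875, c=343.0):
    """Interaural time difference (seconds) for a distant source at the
    given azimuth (0 = straight ahead, pi/2 = directly to one side),
    using Woodworth's spherical-head approximation:
    ITD = (a / c) * (theta + sin(theta)),
    with head radius a and speed of sound c."""
    return (head_radius_m / c) * (azimuth_rad + math.sin(azimuth_rad))

# A source directly to the side yields an ITD of roughly 650 microseconds,
# the upper end of the physiological range for a human head.
print(woodworth_itd(math.pi / 2))
```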
Anderson, L A; Christianson, G B; Linden, J F
2009-02-03
Cytochrome oxidase (CYO) and acetylcholinesterase (AChE) staining density varies across the cortical layers in many sensory areas. The laminar variations likely reflect differences between the layers in levels of metabolic activity and cholinergic modulation. The question of whether these laminar variations differ between primary sensory cortices has never been systematically addressed in the same set of animals, since most studies of sensory cortex focus on a single sensory modality. Here, we compared the laminar distribution of CYO and AChE activity in the primary auditory, visual, and somatosensory cortices of the mouse, using Nissl-stained sections to define laminar boundaries. Interestingly, for both CYO and AChE, laminar patterns of enzyme activity were similar in the visual and somatosensory cortices, but differed in the auditory cortex. In the visual and somatosensory areas, staining densities for both enzymes were highest in layers III/IV or IV and in lower layer V. In the auditory cortex, CYO activity showed a reliable peak only at the layer III/IV border, while AChE distribution was relatively homogeneous across layers. These results suggest that laminar patterns of metabolic activity and cholinergic influence are similar in the mouse visual and somatosensory cortices, but differ in the auditory cortex.
Scott, Brian H.; Leccese, Paul A.; Saleem, Kadharbatcha S.; Kikuchi, Yukiko; Mullarkey, Matthew P.; Fukushima, Makoto; Mishkin, Mortimer; Saunders, Richard C.
2017-01-01
In the ventral stream of the primate auditory cortex, cortico-cortical projections emanate from the primary auditory cortex (AI) along 2 principal axes: one mediolateral, the other caudorostral. Connections in the mediolateral direction from core, to belt, to parabelt, have been well described, but less is known about the flow of information along the supratemporal plane (STP) in the caudorostral dimension. Neuroanatomical tracers were injected throughout the caudorostral extent of the auditory core and rostral STP by direct visualization of the cortical surface. Auditory cortical areas were distinguished by SMI-32 immunostaining for neurofilament, in addition to established cytoarchitectonic criteria. The results describe a pathway comprising step-wise projections from AI through the rostral and rostrotemporal fields of the core (R and RT), continuing to the recently identified rostrotemporal polar field (RTp) and the dorsal temporal pole. Each area was strongly and reciprocally connected with the areas immediately caudal and rostral to it, though deviations from strictly serial connectivity were observed. In RTp, inputs converged from core, belt, parabelt, and the auditory thalamus, as well as higher order cortical regions. The results support a rostrally directed flow of auditory information with complex and recurrent connections, similar to the ventral stream of macaque visual cortex. PMID:26620266
Bioacoustic Signal Classification in Cat Auditory Cortex
1994-01-01
[Abstract not recoverable from the scanned source. Legible fragments discuss cortical responses to fast FM sweeps and the orientation of the mapped area (Fig. 8D, case 87-001), and cite: Brashear, H.R., and Heilman, K.M. Pure word deafness after bilateral primary auditory cortex infarcts. Neurology 34: 347–352, 1984; a further citation (Cranford, J.L., …) is truncated.]
Potes, Cristhian; Brunner, Peter; Gunduz, Aysegul; Knight, Robert T; Schalk, Gerwin
2014-08-15
Neuroimaging approaches have implicated multiple brain sites in musical perception, including the posterior part of the superior temporal gyrus and adjacent perisylvian areas. However, the detailed spatial and temporal relationship of neural signals that support auditory processing is largely unknown. In this study, we applied a novel inter-subject analysis approach to electrophysiological signals recorded from the surface of the brain (electrocorticography, ECoG) in ten human subjects. This approach allowed us to reliably identify those ECoG features that were related to the processing of a complex auditory stimulus (i.e., a continuous piece of music) and to investigate their spatial, temporal, and causal relationships. Our results identified stimulus-related modulations in the alpha (8-12 Hz) and high gamma (70-110 Hz) bands at neuroanatomical locations implicated in auditory processing. Specifically, we identified stimulus-related ECoG modulations in the alpha band in areas adjacent to primary auditory cortex, which are known to receive afferent auditory projections from the thalamus (80 of a total of 15,107 tested sites). In contrast, we identified stimulus-related ECoG modulations in the high gamma band not only in areas close to primary auditory cortex but also in other perisylvian areas known to be involved in higher-order auditory processing, and in superior premotor cortex (412/15,107 sites). Across all implicated areas, modulations in the high gamma band preceded those in the alpha band by 280 ms, and activity in the high gamma band causally predicted alpha activity, but not vice versa (Granger causality, p < 1e-8). Additionally, detailed analyses using Granger causality identified causal relationships of high gamma activity between distinct locations in early auditory pathways within superior temporal gyrus (STG) and posterior STG, between posterior STG and inferior frontal cortex, and between STG and premotor cortex.
Evidence suggests that these relationships reflect direct cortico-cortical connections rather than common driving input from subcortical structures such as the thalamus. In summary, our inter-subject analyses defined the spatial and temporal relationships between music-related brain activity in the alpha and high gamma bands. They provide experimental evidence supporting current theories about the putative mechanisms of alpha and gamma activity, i.e., reflections of thalamo-cortical interactions and local cortical neural activity, respectively, and the results are also in agreement with existing functional models of auditory processing. Copyright © 2014 Elsevier Inc. All rights reserved.
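The alpha (8-12 Hz) and high gamma (70-110 Hz) features above are band-limited power estimates of the ECoG signal. A minimal, dependency-free sketch of extracting power in a frequency band via a direct DFT follows; the authors' actual pipeline is not specified here and presumably used filtering of the continuous recordings:

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power of `signal` contributed by DFT bins whose frequencies fall
    in [f_lo, f_hi] Hz. Direct O(N*K) evaluation: fine for short windows
    and illustration, not for production-scale ECoG analysis."""
    n = len(signal)
    total = 0.0
    for k in range(n // 2 + 1):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            total += (re * re + im * im) / (n * n)
    return total

# A pure 10 Hz tone carries its power in the alpha band, not high gamma.
fs = 1000.0
sig = [math.sin(2 * math.pi * 10.0 * t / fs) for t in range(1000)]
print(band_power(sig, fs, 8, 12), band_power(sig, fs, 70, 110))
```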
Transformation of temporal sequences in the zebra finch auditory system
Lim, Yoonseob; Lagoy, Ryan; Shinn-Cunningham, Barbara G; Gardner, Timothy J
2016-01-01
This study examines how temporally patterned stimuli are transformed as they propagate from primary to secondary zones in the thalamorecipient auditory pallium in zebra finches. Using a new class of synthetic click stimuli, we find a robust mapping from temporal sequences in the primary zone to distinct population vectors in secondary auditory areas. We tested whether songbirds could discriminate synthetic click sequences in an operant setup and found that a robust behavioral discrimination is present for click sequences composed of intervals ranging from 11 ms to 40 ms, but breaks down for stimuli composed of longer inter-click intervals. This work suggests that the analog of the songbird auditory cortex transforms temporal patterns to sequence-selective population responses or ‘spatial codes’, and that these distinct population responses contribute to behavioral discrimination of temporally complex sounds. DOI: http://dx.doi.org/10.7554/eLife.18205.001 PMID:27897971
How do auditory cortex neurons represent communication sounds?
Gaucher, Quentin; Huetz, Chloé; Gourévitch, Boris; Laudanski, Jonathan; Occelli, Florian; Edeline, Jean-Marc
2013-11-01
A major goal in auditory neuroscience is to characterize how communication sounds are represented at the cortical level. The present review aims at investigating the role of auditory cortex in the processing of speech, bird songs and other vocalizations, which all are spectrally and temporally highly structured sounds. Whereas earlier studies have simply looked for neurons exhibiting higher firing rates to particular conspecific vocalizations over their modified, artificially synthesized versions, more recent studies determined the coding capacity of temporal spike patterns, which are prominent in primary and non-primary areas (and also in non-auditory cortical areas). In several cases, this information seems to be correlated with the behavioral performance of human or animal subjects, suggesting that spike-timing based coding strategies might set the foundations of our perceptive abilities. Also, it is now clear that the responses of auditory cortex neurons are highly nonlinear and that their responses to natural stimuli cannot be predicted from their responses to artificial stimuli such as moving ripples and broadband noises. Since auditory cortex neurons cannot follow rapid fluctuations of the vocalizations envelope, they only respond at specific time points during communication sounds, which can serve as temporal markers for integrating the temporal and spectral processing taking place at subcortical relays. Thus, the temporal sparse code of auditory cortex neurons can be considered as a first step for generating high level representations of communication sounds independent of the acoustic characteristic of these sounds. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.
Visual Information Present in Infragranular Layers of Mouse Auditory Cortex.
Morrill, Ryan J; Hasenstaub, Andrea R
2018-03-14
The cerebral cortex is a major hub for the convergence and integration of signals from across the sensory modalities; sensory cortices, including primary regions, are no exception. Here we show that visual stimuli influence neural firing in the auditory cortex of awake male and female mice, using multisite probes to sample single units across multiple cortical layers. We demonstrate that visual stimuli influence firing in both primary and secondary auditory cortex. We then determine the laminar location of recording sites through electrode track tracing with fluorescent dye and optogenetic identification using layer-specific markers. Spiking responses to visual stimulation occur deep in auditory cortex and are particularly prominent in layer 6. Visual modulation of firing rate occurs more frequently at areas with secondary-like auditory responses than those with primary-like responses. Auditory cortical responses to drifting visual gratings are not orientation-tuned, unlike visual cortex responses. The deepest cortical layers thus appear to be an important locus for cross-modal integration in auditory cortex. SIGNIFICANCE STATEMENT The deepest layers of the auditory cortex are often considered its most enigmatic, possessing a wide range of cell morphologies and atypical sensory responses. Here we show that, in mouse auditory cortex, these layers represent a locus of cross-modal convergence, containing many units responsive to visual stimuli. Our results suggest that this visual signal conveys the presence and timing of a stimulus rather than specifics about that stimulus, such as its orientation. These results shed light on both how, and what types of, cross-modal information are integrated at the earliest stages of sensory cortical processing. Copyright © 2018 the authors.
Spectral and Temporal Processing in Rat Posterior Auditory Cortex
Pandya, Pritesh K.; Rathbun, Daniel L.; Moucha, Raluca; Engineer, Navzer D.; Kilgard, Michael P.
2009-01-01
The rat auditory cortex is divided anatomically into several areas, but little is known about the functional differences in information processing between these areas. To determine the filter properties of rat posterior auditory field (PAF) neurons, we compared neurophysiological responses to simple tones, frequency modulated (FM) sweeps, and amplitude modulated noise and tones with responses of primary auditory cortex (A1) neurons. PAF neurons have excitatory receptive fields that are on average 65% broader than A1 neurons. The broader receptive fields of PAF neurons result in responses to narrow and broadband inputs that are stronger than A1. In contrast to A1, we found little evidence for an orderly topographic gradient in PAF based on frequency. These neurons exhibit latencies that are twice as long as A1. In response to modulated tones and noise, PAF neurons adapt to repeated stimuli at significantly slower rates. Unlike A1, neurons in PAF rarely exhibit facilitation to rapidly repeated sounds. Neurons in PAF do not exhibit strong selectivity for rate or direction of narrowband one octave FM sweeps. These results indicate that PAF, like nonprimary visual fields, processes sensory information on larger spectral and longer temporal scales than primary cortex. PMID:17615251
Hoefer, M; Tyll, S; Kanowski, M; Brosch, M; Schoenfeld, M A; Heinze, H-J; Noesselt, T
2013-10-01
Although multisensory integration has been an important area of recent research, most studies have focused on audiovisual integration. Importantly, however, the combination of audition and touch can guide our behavior just as effectively; we studied this combination here using psychophysics and functional magnetic resonance imaging (fMRI). We tested whether task-irrelevant tactile stimuli would enhance auditory detection, and whether hemispheric asymmetries would modulate these audiotactile benefits using lateralized sounds. Spatially aligned task-irrelevant tactile stimuli could occur either synchronously or asynchronously with the sounds. Auditory detection was enhanced by non-informative synchronous and asynchronous tactile stimuli, if presented on the left side. Elevated fMRI signals to left-sided synchronous bimodal stimulation were found in primary auditory cortex (A1). Adjacent regions (planum temporale, PT) expressed enhanced BOLD responses for synchronous and asynchronous left-sided bimodal conditions. Additional connectivity analyses seeded in right-hemispheric A1 and PT for both bimodal conditions showed enhanced connectivity with right-hemispheric thalamic, somatosensory and multisensory areas that scaled with subjects' performance. Our results indicate that functional asymmetries interact with audiotactile interplay, which can be observed for left-lateralized stimulation in the right hemisphere. There, audiotactile interplay recruits a functional network of unisensory cortices, and the strength of these functional network connections is directly related to subjects' perceptual sensitivity. Copyright © 2013 Elsevier Inc. All rights reserved.
Hale, Matthew D; Zaman, Arshad; Morrall, Matthew C H J; Chumas, Paul; Maguire, Melissa J
2018-03-01
Presurgical evaluation for temporal lobe epilepsy routinely assesses speech and memory lateralization and anatomic localization of the motor and visual areas but not baseline musical processing. This is paramount in a musician. Although validated tools exist to assess musical ability, there are no reported functional magnetic resonance imaging (fMRI) paradigms to assess musical processing. We examined the utility of a novel fMRI paradigm in an 18-year-old left-handed pianist who underwent surgery for a left temporal low-grade ganglioglioma. Preoperative evaluation consisted of neuropsychological evaluation, T1-weighted and T2-weighted magnetic resonance imaging, and fMRI. Auditory blood oxygen level-dependent fMRI was performed using a dedicated auditory scanning sequence. Three separate auditory investigations were conducted: listening to, humming, and thinking about a musical piece. All auditory fMRI paradigms activated the primary auditory cortex with varying degrees of auditory lateralization. Thinking about the piece additionally activated the primary visual cortices (bilaterally) and right dorsolateral prefrontal cortex. Humming demonstrated left-sided predominance of auditory cortex activation with activity observed in close proximity to the tumor. This study demonstrated an fMRI paradigm for evaluating musical processing that could form part of preoperative assessment for patients undergoing temporal lobe surgery for epilepsy. Copyright © 2017 Elsevier Inc. All rights reserved.
Acute Inactivation of Primary Auditory Cortex Causes a Sound Localisation Deficit in Ferrets
Wood, Katherine C.; Town, Stephen M.; Atilgan, Huriye; Jones, Gareth P.
2017-01-01
The objective of this study was to demonstrate the efficacy of acute inactivation of brain areas by cooling in the behaving ferret and to demonstrate that cooling auditory cortex produced a localisation deficit that was specific to auditory stimuli. The effect of cooling on neural activity was measured in anesthetized ferret cortex. The behavioural effect of cooling was determined in a benchmark sound localisation task in which inactivation of primary auditory cortex (A1) is known to impair performance. Cooling strongly suppressed the spontaneous and stimulus-evoked firing rates of cortical neurons when the cooling loop was held at temperatures below 10°C, and this suppression was reversed when the cortical temperature recovered. Cooling of ferret auditory cortex during behavioural testing impaired sound localisation performance, with unilateral cooling producing selective deficits in the hemifield contralateral to cooling, and bilateral cooling producing deficits on both sides of space. The deficit in sound localisation induced by inactivation of A1 was not caused by motivational or locomotor changes since inactivation of A1 did not affect localisation of visual stimuli in the same context. PMID:28099489
Auditory cortical volumes and musical ability in Williams syndrome.
Martens, Marilee A; Reutens, David C; Wilson, Sarah J
2010-07-01
Individuals with Williams syndrome (WS) have been shown to have atypical morphology in the auditory cortex, an area associated with aspects of musicality. Some individuals with WS have demonstrated specific musical abilities, despite intellectual delays. Primary auditory cortex and planum temporale volumes were manually segmented in 25 individuals with WS and 25 control participants, and the participants also underwent testing of musical abilities. Left and right planum temporale volumes were significantly larger in the participants with WS than in controls, with no significant difference noted between groups in planum temporale asymmetry or primary auditory cortical volumes. Left planum temporale volume was significantly increased in a subgroup of the participants with WS who demonstrated specific musical strengths, as compared to the remaining WS participants, and was highly correlated with scores on a musical task. These findings suggest that differences in musical ability within WS may be in part associated with variability in the left auditory cortical region, providing further evidence of cognitive and neuroanatomical heterogeneity within this syndrome. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
Evidence for pitch chroma mapping in human auditory cortex.
Briley, Paul M; Breakey, Charlotte; Krumbholz, Katrin
2013-11-01
Some areas in auditory cortex respond preferentially to sounds that elicit pitch, such as musical sounds or voiced speech. This study used human electroencephalography (EEG) with an adaptation paradigm to investigate how pitch is represented within these areas and, in particular, whether the representation reflects the physical or perceptual dimensions of pitch. Physically, pitch corresponds to a single monotonic dimension: the repetition rate of the stimulus waveform. Perceptually, however, pitch has to be described with 2 dimensions, a monotonic, "pitch height," and a cyclical, "pitch chroma," dimension, to account for the similarity of the cycle of notes (c, d, e, etc.) across different octaves. The EEG adaptation effect mirrored the cyclicality of the pitch chroma dimension, suggesting that auditory cortex contains a representation of pitch chroma. Source analysis indicated that the centroid of this pitch chroma representation lies somewhat anterior and lateral to primary auditory cortex. PMID:22918980
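The two-dimensional description of pitch used in the abstract above (a monotonic "height" plus a cyclical "chroma") can be sketched with the standard log-frequency formulation; the reference frequency, function name, and example notes below are illustrative assumptions, not details taken from the study.

```python
import math

def pitch_height_and_chroma(freq_hz, f_ref=261.63):
    """Map a repetition rate (Hz) to the two perceptual pitch dimensions.

    f_ref (middle C here) is an arbitrary reference; chroma is defined
    only up to the choice of reference.
    """
    octaves = math.log2(freq_hz / f_ref)  # monotonic "pitch height" in octaves
    chroma = octaves % 1.0                # cyclical "pitch chroma", wraps every octave
    return octaves, chroma

# Notes one octave apart share a chroma but differ in height by exactly 1 octave:
h_a4, c_a4 = pitch_height_and_chroma(440.0)  # A4
h_a5, c_a5 = pitch_height_and_chroma(880.0)  # A5
assert abs(c_a4 - c_a5) < 1e-9
assert abs((h_a5 - h_a4) - 1.0) < 1e-9
```

On this formulation, the octave equivalence that the EEG adaptation effect mirrored corresponds to stimuli with equal chroma but different height.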
Connections of cat auditory cortex: III. Corticocortical system.
Lee, Charles C; Winer, Jeffery A
2008-04-20
The mammalian auditory cortex (AC) is essential for computing the source and decoding the information contained in sound. Outside the primary auditory regions, knowledge of AC corticocortical connections is modest, and there is no anatomical framework in the cat for understanding the patterns of connections among the many auditory areas. To address this issue we investigated cat AC connectivity in 13 auditory regions. Retrograde tracers were injected in the same area or in different areas to reveal the areal and laminar sources of convergent input to each region. Architectonic borders were established in Nissl and SMI-32 immunostained material. We assessed the topography, convergence, and divergence of the labeling. Intrinsic input constituted >50% of the projection cells in each area, and extrinsic inputs were strongest from functionally related areas. Each area received significant convergent ipsilateral input from several fields (5 to 8; mean 6). These varied in their laminar origin and projection density. Major extrinsic projections were preferentially from areas of the same functional type (tonotopic to tonotopic, nontonotopic to nontonotopic, limbic-related to limbic-related, multisensory-to-multisensory), while smaller projections linked areas belonging to different groups. Branched projections between areas were <2% with deposits of two tracers in an area or in different areas. All extrinsic projections to each area were highly and equally topographic and clustered. Intrinsic input arose from all layers except layer I, and extrinsic input had unique, area-specific infragranular and supragranular origins. The many areal and laminar sources of input may contribute to the complexity of physiological responses in AC and suggest that many projections of modest size converge within each area rather than a simpler area-to-area serial or hierarchical pattern of corticocortical connectivity. (c) 2008 Wiley-Liss, Inc.
Yamamoto, Katsura; Tabei, Kenichi; Katsuyama, Narumi; Taira, Masato; Kitamura, Ken
2017-01-01
Patients with unilateral sensorineural hearing loss (UHL) often complain of hearing difficulties in noisy environments. To clarify this, we compared brain activation in patients with UHL with that of healthy participants during speech perception in a noisy environment, using functional magnetic resonance imaging (fMRI). A pure tone of 1 kHz, or 14 monosyllabic speech sounds at 65‒70 dB accompanied by MRI scan noise at 75 dB, were presented to both ears for 1 second each, and participants were instructed to press a button when they could hear the pure tone or speech sound. Based on the activation areas of healthy participants, the primary auditory cortex, the anterior auditory association areas, and the posterior auditory association areas were set as regions of interest (ROI). In each of these regions, we compared brain activity between healthy participants and patients with UHL. The results revealed that patients with right-side UHL showed different brain activity in the right posterior auditory area during perception of pure tones versus monosyllables. Clinically, left-side and right-side UHL are not presently differentiated and are similarly diagnosed and treated; however, the results of this study suggest that a laterality-specific treatment should be chosen.
Zhao, Zhenling; Liu, Yongchun; Ma, Lanlan; Sato, Yu; Qin, Ling
2015-01-01
Although neural responses to sound stimuli have been thoroughly investigated in various areas of the auditory cortex, the results of electrophysiological recordings cannot establish a causal link between neural activation and brain function. Electrical microstimulation, which can selectively perturb neural activity in specific parts of the nervous system, is an important tool for exploring the organization and function of brain circuitry. To date, the studies describing the behavioral effects of electrical stimulation have largely been conducted in the primary auditory cortex. In this study, to investigate the potential differences in the effects of electrical stimulation on different cortical areas, we measured the behavioral performance of cats in detecting intra-cortical microstimulation (ICMS) delivered in the primary and secondary auditory fields (A1 and A2, respectively). After being trained to perform a Go/No-Go task cued by sounds, cats could also learn to perform the task cued by ICMS; furthermore, the detection of the ICMS was similarly sensitive in A1 and A2. Presenting wideband noise together with ICMS substantially decreased the performance of cats in detecting ICMS in A1 and A2, consistent with a noise masking effect on the sensation elicited by the ICMS. In contrast, presenting ICMS with pure-tones in the spectral receptive field of the electrode-implanted cortical site reduced ICMS detection performance in A1 but not A2. Therefore, activation of A1 and A2 neurons may produce different qualities of sensation. Overall, our study revealed that ICMS-induced neural activity could be easily integrated into an animal's behavioral decision process and has implications for the development of cortical auditory prosthetics. PMID:25964744
Functional specialization of medial auditory belt cortex in the alert rhesus monkey.
Kusmierek, Pawel; Rauschecker, Josef P
2009-09-01
Responses of neural units in two areas of the medial auditory belt (middle medial area [MM] and rostral medial area [RM]) were tested with tones, noise bursts, monkey calls (MC), and environmental sounds (ES) in microelectrode recordings from two alert rhesus monkeys. For comparison, recordings were also performed from two core areas (primary auditory area [A1] and rostral area [R]) of the auditory cortex. All four fields showed cochleotopic organization, with best (center) frequency [BF(c)] gradients running in opposite directions in A1 and MM than in R and RM. The medial belt was characterized by a stronger preference for band-pass noise than for pure tones found medially to the core areas. Response latencies were shorter for the two more posterior (middle) areas MM and A1 than for the two rostral areas R and RM, reaching values as low as 6 ms for high BF(c) in MM and A1, and strongly depended on BF(c). The medial belt areas exhibited a higher selectivity to all stimuli, in particular to noise bursts, than the core areas. An increased selectivity to tones and noise bursts was also found in the anterior fields; the opposite was true for highly temporally modulated ES. Analysis of the structure of neural responses revealed that neurons were driven by low-level acoustic features in all fields. Thus medial belt areas RM and MM have to be considered early stages of auditory cortical processing. The anteroposterior difference in temporal processing indices suggests that R and RM may belong to a different hierarchical level or a different computational network than A1 and MM.
Bajo, Victoria M.; Nodal, Fernando R.; Bizley, Jennifer K.; King, Andrew J.
2010-01-01
Descending cortical inputs to the superior colliculus (SC) contribute to the unisensory response properties of the neurons found there and are critical for multisensory integration. However, little is known about the relative contribution of different auditory cortical areas to this projection or the distribution of their terminals in the SC. We characterized this projection in the ferret by injecting tracers in the SC and auditory cortex. Large pyramidal neurons were labeled in layer V of different parts of the ectosylvian gyrus after tracer injections in the SC. Those cells were most numerous in the anterior ectosylvian gyrus (AEG), and particularly in the anterior ventral field, which receives both auditory and visual inputs. Labeling was also found in the posterior ectosylvian gyrus (PEG), predominantly in the tonotopically organized posterior suprasylvian field. Profuse anterograde labeling was present in the SC following tracer injections at the site of acoustically responsive neurons in the AEG or PEG, with terminal fields being both more prominent and clustered for inputs originating from the AEG. Terminals from both cortical areas were located throughout the intermediate and deep layers, but were most concentrated in the posterior half of the SC, where peripheral stimulus locations are represented. No inputs were identified from primary auditory cortical areas, although some labeling was found in the surrounding sulci. Our findings suggest that higher level auditory cortical areas, including those involved in multisensory processing, may modulate SC function via their projections into its deeper layers. PMID:20640247
Klinke, R; Kral, A; Heid, S; Tillein, J; Hartmann, R
1999-09-10
In congenitally deaf cats, the central auditory system is deprived of acoustic input because of degeneration of the organ of Corti before the onset of hearing. Primary auditory afferents survive and can be stimulated electrically. By means of an intracochlear implant and an accompanying sound processor, congenitally deaf kittens were exposed to sounds and conditioned to respond to tones. After months of exposure to meaningful stimuli, the cortical activity in chronically implanted cats produced field potentials of higher amplitudes, expanded in area, developed long latency responses indicative of intracortical information processing, and showed more synaptic efficacy than in naïve, unstimulated deaf cats. The activity established by auditory experience resembles activity in hearing animals.
Knopf, Julian P; Hof, Patrick R; Oelschläger, Helmut H A
2016-01-01
We investigated the morphology of four primary neocortical projection areas (somatomotor, somatosensory, auditory, visual) qualitatively and quantitatively in the Indian river dolphins (Platanista gangetica gangetica, P. gangetica minor) with histological and stereological methods. For comparison, we included brains of other toothed whale species. Design-based stereology was applied to the primary neocortical areas (M1, S1, A1, V1) of the Indian river dolphins and compared to those of the bottlenose dolphin with respect to layers III and V. These neocortical fields were identified using existing electrophysiological and morphological data from marine dolphins as to their topography and histological structure, including the characteristics of the neuron populations concerned. In contrast to other toothed whales, the visual area (V1) of the 'blind' river dolphins seems to be rather small. M1 is displaced laterally and the auditory area (A1) is larger than in marine species with respect to total brain size. The layering is similar in the cortices of all the toothed whale brains investigated; a layer IV could not be identified. Cell density in layer III is always higher than in layer V. The maximal neuron density in P. gangetica gangetica is found in layer III of A1, followed by layers III in V1, S1, and M1. The cell density in layer V is at a similar level in all primary areas. There are, however, some differences in neuron density between the two subspecies of Indian river dolphins. Taken as a whole, it appears that the neocortex of platanistids exhibits a considerable expansion of the auditory field. Even more than other toothed whales, they seem to depend on their biosonar abilities for navigation, hunting, and communication in their riverine habitat. © 2016 S. Karger AG, Basel.
Matsuzaki, Junko; Kagitani-Shimono, Kuriko; Goto, Tetsu; Sanefuji, Wakako; Yamamoto, Tomoka; Sakai, Saeko; Uchida, Hiroyuki; Hirata, Masayuki; Mohri, Ikuko; Yorifuji, Shiro; Taniike, Masako
2012-01-25
The aim of this study was to investigate the differential responses of the primary auditory cortex to auditory stimuli in autistic spectrum disorder with or without auditory hypersensitivity. Auditory-evoked field values were obtained from 18 boys (nine with and nine without auditory hypersensitivity) with autistic spectrum disorder and 12 age-matched controls. The group with auditory hypersensitivity showed significantly longer M50/M100 peak latencies than the group without hypersensitivity or the controls. M50 dipole moments in the hypersensitivity group were larger than those in the other two groups [corrected]. M50/M100 peak latencies were correlated with the severity of auditory hypersensitivity; furthermore, severe hypersensitivity induced more behavioral problems. This study indicates that auditory hypersensitivity in autistic spectrum disorder is a characteristic response of the primary auditory cortex, possibly resulting from neurological immaturity or functional abnormalities in this region. © 2012 Wolters Kluwer Health | Lippincott Williams & Wilkins.
New perspectives on the auditory cortex: learning and memory.
Weinberger, Norman M
2015-01-01
Primary ("early") sensory cortices have been viewed as stimulus analyzers devoid of function in learning, memory, and cognition. However, studies combining sensory neurophysiology and learning protocols have revealed that associative learning systematically modifies the encoding of stimulus dimensions in the primary auditory cortex (A1) to accentuate behaviorally important sounds. This "representational plasticity" (RP) is manifest at different levels. The sensitivity and selectivity of signal tones increase near threshold, tuning above threshold shifts toward the frequency of acoustic signals, and their area of representation can increase within the tonotopic map of A1. The magnitude of area gain encodes the level of behavioral stimulus importance and serves as a substrate of memory strength. RP has the same characteristics as behavioral memory: it is associative, specific, develops rapidly, consolidates, and can last indefinitely. Pairing tone with stimulation of the cholinergic nucleus basalis induces RP and implants specific behavioral memory, while directly increasing the representational area of a tone in A1 produces matching behavioral memory. Thus, RP satisfies key criteria for serving as a substrate of auditory memory. The findings suggest a basis for posttraumatic stress disorder in abnormally augmented cortical representations and emphasize the need for a new model of the cerebral cortex. © 2015 Elsevier B.V. All rights reserved.
Boumans, Tiny; Gobes, Sharon M. H.; Poirier, Colline; Theunissen, Frederic E.; Vandersmissen, Liesbeth; Pintjens, Wouter; Verhoye, Marleen; Bolhuis, Johan J.; Van der Linden, Annemie
2008-01-01
Background Male songbirds learn their songs from an adult tutor when they are young. A network of brain nuclei known as the ‘song system’ is the likely neural substrate for sensorimotor learning and production of song, but the neural networks involved in processing the auditory feedback signals necessary for song learning and maintenance remain unknown. Determining which regions show preferential responsiveness to the bird's own song (BOS) is of great importance because neurons sensitive to self-generated vocalisations could mediate this auditory feedback process. Neurons in the song nuclei and in a secondary auditory area, the caudal medial mesopallium (CMM), show selective responses to the BOS. The aim of the present study is to investigate the emergence of BOS selectivity within the network of primary auditory sub-regions in the avian pallium. Methods and Findings Using blood oxygen level-dependent (BOLD) fMRI, we investigated neural responsiveness to natural and manipulated self-generated vocalisations and compared the selectivity for BOS and conspecific song in different sub-regions of the thalamo-recipient area Field L. Zebra finch males were exposed to conspecific song, BOS and to synthetic variations on BOS that differed in spectro-temporal and/or modulation phase structure. We found significant differences in the strength of BOLD responses between regions L2a, L2b and CMM, but no inter-stimuli differences within regions. In particular, we have shown that the overall signal strength to song and synthetic variations thereof was different within two sub-regions of Field L2: zone L2a was significantly more activated compared to the adjacent sub-region L2b. Conclusions Based on our results we suggest that unlike nuclei in the song system, sub-regions in the primary auditory pallium do not show selectivity for the BOS, but appear to show different levels of activity with exposure to any sound according to their place in the auditory processing stream. 
PMID:18781203
Extinction reveals that primary sensory cortex predicts reinforcement outcome
Bieszczad, Kasia M.; Weinberger, Norman M.
2011-01-01
Primary sensory cortices are traditionally regarded as stimulus analyzers. However, studies of associative learning-induced plasticity in the primary auditory cortex (A1) indicate involvement in learning, memory and other cognitive processes. For example, the area of representation of a tone becomes larger for stronger auditory memories and the magnitude of area gain is proportional to the degree that a tone becomes behaviorally important. Here, we used extinction to investigate whether “behavioral importance” specifically reflects a sound’s ability to predict reinforcement (reward or punishment) vs. to predict any significant change in the meaning of a sound. If the former, then extinction should reverse area gains as the signal no longer predicts reinforcement. Rats (n = 11) were trained to bar-press to a signal tone (5.0 kHz) for water-rewards, to induce signal-specific area gains in A1. After subsequent withdrawal of reward, A1 was mapped to determine representational areas. Signal-specific area gains — estimated from a previously established brain–behavior quantitative function — were reversed, supporting the “reinforcement prediction” hypothesis. Area loss was specific to the signal tone vs. test tones, further indicating that withdrawal of reinforcement, rather than unreinforced tone presentation per se, was responsible for area loss. Importantly, the amount of area loss was correlated with the amount of extinction (r = 0.82, p < 0.01). These findings show that primary sensory cortical representation can encode behavioral importance as a signal’s value to predict reinforcement, and that the number of cells tuned to a stimulus can dictate its ability to command behavior. PMID:22304434
Murakami, Takenobu; Restle, Julia; Ziemann, Ulf
2012-01-01
A left-hemispheric cortico-cortical network involving areas of the temporoparietal junction (Tpj) and the posterior inferior frontal gyrus (pIFG) is thought to support sensorimotor integration of speech perception into articulatory motor activation, but how this network links with the lip area of the primary motor cortex (M1) during speech…
Dick, Frederic K; Lehet, Matt I; Callaghan, Martina F; Keller, Tim A; Sereno, Martin I; Holt, Lori L
2017-12-13
Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation-acoustic frequency-might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe.
Although diverse pathologies reduce quality of life by impacting such spectrally directed auditory attention, its neurobiological bases are unclear. We demonstrate that human primary and nonprimary auditory cortical activation is modulated by spectrally directed attention in a manner that recapitulates its tonotopic sensory organization. Further, the graded activation profiles evoked by single-frequency bands are correlated with attentionally driven activation when these bands are presented in complex soundscapes. Finally, we observe a strong concordance in the degree of cortical myelination and the strength of tonotopic activation across several auditory cortical regions. Copyright © 2017 Dick et al. PMID:29109238
Maffei, Chiara; Capasso, Rita; Cazzolli, Giulia; Colosimo, Cesare; Dell'Acqua, Flavio; Piludu, Francesca; Catani, Marco; Miceli, Gabriele
2017-12-01
Pure Word Deafness (PWD) is a rare disorder, characterized by selective loss of speech input processing. Its most common cause is temporal damage to the primary auditory cortex of both hemispheres, but it has been reported also following unilateral lesions. In unilateral cases, PWD has been attributed to the disconnection of Wernicke's area from both right and left primary auditory cortex. Here we report behavioral and neuroimaging evidence from a new case of left unilateral PWD with both cortical and white matter damage due to a relatively small stroke lesion in the left temporal gyrus. Selective impairment in auditory language processing was accompanied by intact processing of nonspeech sounds and normal speech, reading and writing. Performance on dichotic listening was characterized by a reversal of the right-ear advantage typically observed in healthy subjects. Cortical thickness and gyral volume were severely reduced in the left superior temporal gyrus (STG), although abnormalities were not uniformly distributed and residual intact cortical areas were detected, for example in the medial portion of Heschl's gyrus. Diffusion tractography documented partial damage to the acoustic radiations (AR), callosal temporal connections and intralobar tracts dedicated to single-word comprehension. Behavioral and neuroimaging results in this case are difficult to integrate in a pure cortical or disconnection framework, as damage to primary auditory cortex in the left STG was only partial and Wernicke's area was not completely isolated from left or right-hemisphere input. On the basis of our findings we suggest that in this case of PWD, concurrent partial topological (cortical) and disconnection mechanisms have contributed to a selective impairment of speech sounds. The discrepancy between speech and non-speech sounds suggests selective damage to a language-specific left-lateralized network involved in phoneme processing. Copyright © 2017 Elsevier Ltd. All rights reserved.
Naito, Y; Okazawa, H; Honjo, I; Hirano, S; Takahashi, H; Shiomi, Y; Hoji, W; Kawano, M; Ishizu, K; Yonekura, Y
1995-07-01
Six postlingually deaf patients using multi-channel cochlear implants were examined by positron emission tomography (PET) using 15O-labeled water. Changes in regional cerebral blood flow (rCBF) were measured during different sound stimuli. The stimulation paradigms employed consisted of two sets of three different conditions: (1) no sound stimulation with the speech processor of the cochlear implant system switched off, (2) hearing white noise and (3) hearing sequential Japanese sentences. In the primary auditory area, the mean rCBF increase during noise stimulation was significantly greater on the side contralateral to the implant than on the ipsilateral side. Speech stimulation caused significantly greater rCBF increase compared with noise stimulation in the left immediate auditory association area (P < 0.01), the bilateral auditory association areas (P < 0.01), and the posterior part of the bilateral inferior frontal gyri, i.e., Broca's area (P < 0.01) and its right-hemisphere homologue (P < 0.05). Activation of cortices related to verbal and non-verbal sound recognition was clearly demonstrated in the current subjects, probably because complete silence was attained in the control condition.
Engineer, C.T.; Centanni, T.M.; Im, K.W.; Borland, M.S.; Moreno, N.A.; Carraway, R.S.; Wilson, L.G.; Kilgard, M.P.
2014-01-01
Although individuals with autism are known to have significant communication problems, the cellular mechanisms responsible for impaired communication are poorly understood. Valproic acid (VPA) is an anticonvulsant that is a known risk factor for autism in prenatally exposed children. Prenatal VPA exposure in rats causes numerous neural and behavioral abnormalities that mimic autism. We predicted that VPA exposure may lead to auditory processing impairments which may contribute to the deficits in communication observed in individuals with autism. In this study, we document auditory cortex responses in rats prenatally exposed to VPA. We recorded local field potentials and multiunit responses to speech sounds in primary auditory cortex, anterior auditory field, ventral auditory field, and posterior auditory field in VPA-exposed and control rats. Prenatal VPA exposure severely degrades the precise spatiotemporal patterns evoked by speech sounds in secondary, but not primary, auditory cortex. This result parallels findings in humans and suggests that secondary auditory fields may be more sensitive to environmental disturbances and may provide insight into possible mechanisms related to auditory deficits in individuals with autism. PMID:24639033
Architectonic subdivisions of neocortex in the tree shrew (Tupaia belangeri)
Wong, Peiyan; Kaas, Jon H.
2010-01-01
Tree shrews are small mammals that bear some resemblance to squirrels but are actually close relatives of primates. Thus, they have been extensively studied as a model for the early stages of primate evolution. In the present study, subdivisions of cortex were reconstructed from brain sections cut in the coronal, sagittal or horizontal planes and processed for parvalbumin (PV), SMI-32 immunopositive neurofilament protein epitopes, vesicle glutamate transporter 2 (VGluT2), free ionic zinc, myelin, cytochrome oxidase (CO) and Nissl substance. These different procedures revealed similar boundaries between areas, suggesting the detection of functionally relevant borders, and allowed a more precise demarcation of cortical areal boundaries. Primary cortical areas were most clearly revealed by the zinc stain, due to the poor staining of layer 4, as thalamocortical terminations lack free ionic zinc. Area 17 (V1) was especially prominent, as the broad layer 4 was nearly free of zinc stain. However, this feature was less pronounced in primary auditory and somatosensory cortex. In primary sensory areas, thalamocortical terminations in layer 4 densely express VGluT2. Auditory cortex consists of two architectonically distinct subdivisions: a primary core region (Ac) surrounded by a belt region (Ab) that had a slightly less developed koniocellular appearance. Primary motor cortex (M1) was identified by the absence of VGluT2 staining in the poorly developed granular layer 4 and the presence of SMI-32-labeled pyramidal cells in layers 3 and 5. The presence of well-differentiated cortical areas in tree shrews indicates their usefulness in studies of cortical organization and function. PMID:19462403
Brown, Trecia A; Joanisse, Marc F; Gati, Joseph S; Hughes, Sarah M; Nixon, Pam L; Menon, Ravi S; Lomber, Stephen G
2013-01-01
Much of what is known about the cortical organization for audition in humans draws from studies of auditory cortex in the cat. However, these data build largely on electrophysiological recordings that are both highly invasive and provide less evidence concerning macroscopic patterns of brain activation. Optical imaging, using intrinsic signals or dyes, allows visualization of surface-based activity but is also quite invasive. Functional magnetic resonance imaging (fMRI) overcomes these limitations by providing a large-scale perspective of distributed activity across the brain in a non-invasive manner. The present study used fMRI to characterize stimulus-evoked activity in auditory cortex of an anesthetized (ketamine/isoflurane) cat, focusing specifically on the blood-oxygen-level-dependent (BOLD) signal time course. Functional images were acquired for adult cats in a 7 T MRI scanner. To determine the BOLD signal time course, we presented 1 s broadband noise bursts between widely spaced scan acquisitions at randomized delays (1-12 s in 1 s increments) prior to each scan. Baseline trials in which no stimulus was presented were also acquired. Our results indicate that the BOLD response peaks at about 3.5 s in primary auditory cortex (AI) and at about 4.5 s in non-primary areas (AII, PAF) of cat auditory cortex. The observed peak latency is within the range reported for humans and non-human primates (3-4 s). The time course of hemodynamic activity in cat auditory cortex also occurs on a comparatively shorter scale than in cat visual cortex. The results of this study will provide a foundation for future auditory fMRI studies in the cat to incorporate these hemodynamic response properties into appropriate analyses of cat auditory cortex. Copyright © 2012 Elsevier Inc. All rights reserved.
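The sampled-delay approach described above can be illustrated numerically: the sketch below samples a toy gamma-variate hemodynamic response at the study's 1-12 s stimulus-to-scan delays and reads off the delay with the largest response. The gamma-variate shape and its parameters are illustrative assumptions, not the study's fitted model.

```python
import numpy as np

def gamma_hrf(t, peak=3.5, shape=4.0):
    """Toy gamma-variate hemodynamic response peaking near `peak` seconds."""
    t = np.asarray(t, dtype=float)
    h = (t / peak) ** shape * np.exp(-shape * (t - peak) / peak)
    return np.where(t > 0, h, 0.0)

# Sample the response at the 1-12 s stimulus-to-scan delays used in the study
delays = np.arange(1, 13)          # seconds
samples = gamma_hrf(delays)

# Estimated peak latency: the sampled delay with the largest response
peak_latency = delays[np.argmax(samples)]
print(peak_latency)                # -> 4 (nearest sampled delay to the ~3.5 s peak)
```

With a finer delay grid, the estimate would converge on the underlying 3.5 s peak; the 1 s sampling grain used here bounds the latency resolution of this design.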
Self-regulation of the primary auditory cortex via directed attention mediated by real-time fMRI neurofeedback
Sherwood, M. S.
2017-05-05
Conference report presented at the 2017 Radiological Society of North America Conference, addressing reduction of auditory cortex hyperactivity through self-regulation of the primary auditory cortex (A1) based on real-time functional magnetic resonance imaging neurofeedback.
Bidet-Caulet, Aurélie; Fischer, Catherine; Besle, Julien; Aguera, Pierre-Emmanuel; Giard, Marie-Helene; Bertrand, Olivier
2007-08-29
In noisy environments, we use auditory selective attention to actively ignore distracting sounds and select relevant information, as when following one particular conversation at a cocktail party. The present electrophysiological study aims at deciphering the spatiotemporal organization of the effect of selective attention on the representation of concurrent sounds in the human auditory cortex. Sound onset asynchrony was manipulated to induce the segregation of two concurrent auditory streams. Each stream consisted of amplitude-modulated tones at different carrier and modulation frequencies. Electrophysiological recordings were performed in epileptic patients with pharmacologically resistant partial epilepsy, implanted with depth electrodes in the temporal cortex. Patients were presented with the stimuli while they either performed an auditory distracting task or actively selected one of the two concurrent streams. Selective attention was found to affect steady-state responses in the primary auditory cortex, and transient and sustained evoked responses in secondary auditory areas. The results provide new insights into the neural mechanisms of auditory selective attention: stream selection during sound rivalry would be facilitated not only by enhancing the neural representation of relevant sounds, but also by reducing the representation of irrelevant information in the auditory cortex. Finally, they suggest a specialization of the left hemisphere in the attentional selection of fine-grained acoustic information.
Chaves-Coira, Irene; Barros-Zulaica, Natali; Rodrigo-Angulo, Margarita; Núñez, Ángel
2016-01-01
Neocortical cholinergic activity plays a fundamental role in sensory processing and cognitive functions. Previous results have suggested a refined anatomical and functional topographical organization of basal forebrain (BF) projections that may control cortical sensory processing in a specific manner. We have used retrograde anatomical procedures to demonstrate the existence of specific neuronal groups in the BF involved in the control of specific sensory cortices. Fluoro-Gold (FlGo) and Fast Blue (FB) fluorescent retrograde tracers were deposited into the primary somatosensory (S1) and primary auditory (A1) cortices in mice. Our results revealed that the BF is a heterogeneous area in which neurons projecting to different cortical areas are segregated into different neuronal groups. Most of the neurons located in the horizontal limb of the diagonal band of Broca (HDB) projected to the S1 cortex, indicating that this area is specialized in the sensory processing of tactile stimuli. However, the nucleus basalis magnocellularis (B) shows similar numbers of cells projecting to the S1 and A1 cortices. In addition, we analyzed the cholinergic effects on S1 and A1 cortical sensory responses by optogenetic stimulation of BF neurons in urethane-anesthetized transgenic mice. We used transgenic mice expressing the light-activated cation channel channelrhodopsin-2, tagged with a fluorescent protein (ChR2-YFP), under the control of the choline acetyltransferase (ChAT) promoter. Cortical evoked potentials were induced by whisker deflections or by auditory clicks. In agreement with the anatomical results, optogenetic HDB stimulation induced more extensive facilitation of tactile evoked potentials in S1 than of auditory evoked potentials in A1, whereas optogenetic stimulation of the B nucleus facilitated tactile and auditory evoked potentials to a similar extent.
Consequently, our results suggest that cholinergic projections to the cortex are organized into segregated pools of neurons that may modulate specific cortical areas. PMID:27147975
Language networks in anophthalmia: maintained hierarchy of processing in 'visual' cortex.
Watkins, Kate E; Cowey, Alan; Alexander, Iona; Filippini, Nicola; Kennedy, James M; Smith, Stephen M; Ragge, Nicola; Bridge, Holly
2012-05-01
Imaging studies in blind subjects have consistently shown that sensory and cognitive tasks evoke activity in the occipital cortex, which is normally visual. The precise areas involved and degree of activation are dependent upon the cause and age of onset of blindness. Here, we investigated the cortical language network at rest and during an auditory covert naming task in five bilaterally anophthalmic subjects, who have never received visual input. When listening to auditory definitions and covertly retrieving words, these subjects activated lateral occipital cortex bilaterally in addition to the language areas activated in sighted controls. This activity was significantly greater than that present in a control condition of listening to reversed speech. The lateral occipital cortex was also recruited into a left-lateralized resting-state network that usually comprises anterior and posterior language areas. Levels of activation to the auditory naming and reversed speech conditions did not differ in the calcarine (striate) cortex. This primary 'visual' cortex was not recruited to the left-lateralized resting-state network and showed high interhemispheric correlation of activity at rest, as is typically seen in unimodal cortical areas. In contrast, the interhemispheric correlation of resting activity in extrastriate areas was reduced in anophthalmia to the level of cortical areas that are heteromodal, such as the inferior frontal gyrus. Previous imaging studies in the congenitally blind show that primary visual cortex is activated in higher-order tasks, such as language and memory to a greater extent than during more basic sensory processing, resulting in a reversal of the normal hierarchy of functional organization across 'visual' areas. Our data do not support such a pattern of organization in anophthalmia. 
Instead, the patterns of activity during task and the functional connectivity at rest are consistent with the known hierarchy of processing normally seen for vision in these areas. The differences in cortical organization between bilateral anophthalmia and other forms of congenital blindness are considered to be due to the total absence of stimulation of 'visual' cortex by light or retinal activity in the former condition, and suggest development of subcortical auditory input to the geniculo-striate pathway.
ERIC Educational Resources Information Center
Bornstein, Joan L.
The booklet outlines ways to help children with learning disabilities in specific subject areas. Characteristic behavior and remedial exercises are listed for seven areas of auditory problems: auditory reception, auditory association, auditory discrimination, auditory figure-ground, auditory closure and sound blending, auditory memory, and grammar…
Perrone-Bertolotti, Marcela; Kujala, Jan; Vidal, Juan R; Hamame, Carlos M; Ossandon, Tomas; Bertrand, Olivier; Minotti, Lorella; Kahane, Philippe; Jerbi, Karim; Lachaux, Jean-Philippe
2012-12-05
As you might experience it while reading this sentence, silent reading often involves an imagery speech component: we can hear our own "inner voice" pronouncing words mentally. Recent functional magnetic resonance imaging studies have associated that component with increased metabolic activity in the auditory cortex, including voice-selective areas. It remains to be determined, however, whether this activation arises automatically from early bottom-up visual inputs or whether it depends on late top-down control processes modulated by task demands. To answer this question, we collaborated with four epileptic human patients recorded with intracranial electrodes in the auditory cortex for therapeutic purposes, and measured high-frequency (50-150 Hz) "gamma" activity as a proxy of population level spiking activity. Temporal voice-selective areas (TVAs) were identified with an auditory localizer task and monitored as participants viewed words flashed on screen. We compared neural responses depending on whether words were attended or ignored and found a significant increase of neural activity in response to words, strongly enhanced by attention. In one of the patients, we could record that response at 800 ms in TVAs, but also at 700 ms in the primary auditory cortex and at 300 ms in the ventral occipital temporal cortex. Furthermore, single-trial analysis revealed a considerable jitter between activation peaks in visual and auditory cortices. Altogether, our results demonstrate that the multimodal mental experience of reading is in fact a heterogeneous complex of asynchronous neural responses, and that auditory and visual modalities often process distinct temporal frames of our environment at the same time.
Ranaweera, Ruwan D; Kwon, Minseok; Hu, Shuowen; Tamer, Gregory G; Luh, Wen-Ming; Talavage, Thomas M
2016-01-01
This study investigated the hemisphere-specific effects of the temporal pattern of imaging-related acoustic noise on auditory cortex activation. Hemodynamic responses (HDRs) to five temporal patterns of imaging noise, corresponding to noise generated by unique combinations of imaging volume and effective repetition time (TR), were obtained using a stroboscopic event-related paradigm with extra-long (≥27.5 s) TR to minimize inter-acquisition effects. In addition to confirmation that fMRI responses in auditory cortex do not behave in a linear manner, temporal patterns of imaging noise were found to modulate both the shape and spatial extent of hemodynamic responses, with classically non-auditory areas exhibiting responses to longer-duration noise conditions. Hemispheric analysis revealed the right primary auditory cortex to be more sensitive than the left to the presence of imaging-related acoustic noise. Right primary auditory cortex responses were significantly larger during all conditions. This asymmetry of response to imaging-related acoustic noise could lead to different baseline activation levels during acquisition schemes using short TR, inducing an observed asymmetry in the responses to an intended acoustic stimulus through limitations of dynamic range rather than differences in neuronal processing of the stimulus. These results emphasize the importance of accounting for the temporal pattern of the acoustic noise when comparing findings across different fMRI studies, especially those involving acoustic stimulation. Copyright © 2015 Elsevier B.V. All rights reserved.
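As a rough illustration of the stroboscopic event-related design described above, the sketch below builds a shuffled trial schedule that places a stimulus at each of the 1-12 s pre-acquisition delays several times and intermixes silent baseline trials. The trial counts, the uniform repetition per delay, and the shuffling scheme are assumptions for illustration, not the study's actual protocol.

```python
import random

def build_schedule(n_per_delay=3, n_baseline=6, seed=0):
    """Shuffled list of ('noise', delay_s) and ('baseline', None) trials."""
    delays = [d for d in range(1, 13) for _ in range(n_per_delay)]
    trials = [("noise", d) for d in delays] + [("baseline", None)] * n_baseline
    rng = random.Random(seed)   # fixed seed for a reproducible ordering
    rng.shuffle(trials)
    return trials

schedule = build_schedule()
print(len(schedule))            # 12 delays x 3 repeats + 6 baselines = 42 trials
```

Randomizing the stimulus-to-scan delay in this way lets each acquisition sample a different point on the hemodynamic response, so the full time course can be reconstructed across trials despite the sparse scanning.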
The role of primary auditory and visual cortices in temporal processing: A tDCS approach.
Mioni, G; Grondin, S; Forgione, M; Fracasso, V; Mapelli, D; Stablum, F
2016-10-15
Many studies have shown that visual stimuli are frequently experienced as shorter than equivalent auditory stimuli. These findings suggest that timing is distributed across many brain areas and that "different clocks" might be involved in temporal processing. The aim of this study was to investigate, by applying tDCS over V1 and A1, the specific role of the primary sensory cortices (visual or auditory) in temporal processing. Forty-eight university students were included in the study. Twenty-four participants were stimulated over A1 and 24 over V1. Participants performed time bisection tasks in the visual and auditory modalities, involving standard durations of 300 ms (short) and 900 ms (long). When tDCS was delivered over A1, no effect of stimulation was observed on perceived duration, but we observed higher temporal variability under anodal stimulation compared to sham, and higher variability in the visual compared to the auditory modality. When tDCS was delivered over V1, an underestimation of perceived duration and higher variability were observed in the visual compared to the auditory modality. Our results showed more variability of visual temporal processing under tDCS stimulation. These results suggest a modality-independent role of A1 in temporal processing and a modality-specific role of V1 in the processing of temporal intervals in the visual modality. Copyright © 2016 Elsevier B.V. All rights reserved.
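For context, a time bisection task is typically scored by estimating the bisection point: the duration judged "long" on 50% of trials. The sketch below interpolates it from a psychometric function spanning the 300 ms and 900 ms standards; the response proportions are invented for illustration and are not data from this study.

```python
import numpy as np

# Comparison durations between the short (300 ms) and long (900 ms) anchors,
# with an invented proportion of "long" responses at each duration.
durations = np.array([300, 400, 500, 600, 700, 800, 900])   # ms
p_long = np.array([0.02, 0.10, 0.30, 0.55, 0.80, 0.93, 0.98])

# Bisection point: the duration at which p("long") crosses 0.5,
# found here by linear interpolation on the psychometric function.
bisection_point = np.interp(0.5, p_long, durations)
print(bisection_point)   # -> 580.0 ms for these invented proportions
```

A bisection point below the 600 ms arithmetic mean of the anchors, as in this invented example, is the direction of shift usually reported for interval timing; the temporal variability discussed above would correspond to the slope of this psychometric function.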
Dual Gamma Rhythm Generators Control Interlaminar Synchrony in Auditory Cortex
Ainsworth, Matthew; Lee, Shane; Cunningham, Mark O.; Roopun, Anita K.; Traub, Roger D.; Kopell, Nancy J.; Whittington, Miles A.
2013-01-01
Rhythmic activity in populations of cortical neurons accompanies, and may underlie, many aspects of primary sensory processing and short-term memory. Activity in the gamma band (30 Hz to >100 Hz) is associated with such cognitive tasks and is thought to provide a substrate for temporal coupling of spatially separate regions of the brain. However, such coupling requires close matching of frequencies in co-active areas, and because the nominal gamma band is so spectrally broad, it may not constitute a single underlying process. Here we show that, for inhibition-based gamma rhythms in vitro in rat neocortical slices, mechanistically distinct local circuit generators exist in different laminae of rat primary auditory cortex. A persistent, 30-45 Hz, gap-junction-dependent gamma rhythm dominates rhythmic activity in supragranular layers 2/3, whereas a tonic depolarization-dependent, 50-80 Hz, pyramidal/interneuron gamma rhythm is expressed in granular layer 4 with strong glutamatergic excitation. As a consequence, altering the degree of excitation of the auditory cortex causes bifurcation in the gamma frequency spectrum and can effectively switch temporal control of layer 5 from supragranular to granular layers. Computational modeling predicts that the pattern of interlaminar connections may help to stabilize this bifurcation. The data suggest that different strategies are used by primary auditory cortex to represent weak and strong inputs, with principal cell firing rate becoming increasingly important as excitation strength increases. PMID:22114273
Processing of spectral and amplitude envelope of animal vocalizations in the human auditory cortex.
Altmann, Christian F; Gomes de Oliveira Júnior, Cícero; Heinemann, Linda; Kaiser, Jochen
2010-08-01
In daily life, we usually identify sounds effortlessly and efficiently. Two properties are particularly salient and of importance for sound identification: the sound's overall spectral envelope and its temporal amplitude envelope. In this study, we aimed at investigating the representation of these two features in the human auditory cortex by using a functional magnetic resonance imaging adaptation paradigm. We presented pairs of sound stimuli derived from animal vocalizations that preserved the time-averaged frequency spectrum of the animal vocalizations and the amplitude envelope. We presented the pairs in four different conditions: (a) pairs with the same amplitude envelope and mean spectral envelope, (b) same amplitude envelope, but different mean spectral envelope, (c) different amplitude envelope, but same mean spectral envelope and (d) both different amplitude envelope and mean spectral envelope. We found fMRI adaptation effects for both the mean spectral envelope and the amplitude envelope of animal vocalizations in overlapping cortical areas in the bilateral superior temporal gyrus posterior to Heschl's gyrus. Areas sensitive to the amplitude envelope extended further anteriorly along the lateral superior temporal gyrus in the left hemisphere, while areas sensitive to the spectral envelope extended further anteriorly along the right lateral superior temporal gyrus. Posterior tonotopic areas within the left superior temporal lobe displayed sensitivity for the mean spectrum. Our findings suggest involvement of primary auditory areas in the representation of spectral cues and encoding of general spectro-temporal features of natural sounds in non-primary posterior and lateral superior temporal cortex. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
Kolarik, Andrew J; Moore, Brian C J; Zahorik, Pavel; Cirstea, Silvia; Pardhan, Shahina
2016-02-01
Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.
Cortical systems associated with covert music rehearsal.
Langheim, Frederick J P; Callicott, Joseph H; Mattay, Venkata S; Duyn, Jeff H; Weinberger, Daniel R
2002-08-01
Musical representation and overt music production are necessarily complex cognitive phenomena. While overt musical performance may be observed and studied, the act of performance itself necessarily skews results toward the importance of primary sensorimotor and auditory cortices. However, imagined musical performance (IMP) represents a complex behavioral task involving components suited to exploring the physiological underpinnings of musical cognition in music performance without the sensorimotor and auditory confounds of overt performance. We mapped the blood oxygenation level-dependent fMRI activation response associated with IMP in experienced musicians independent of the piece imagined. IMP consistently activated supplementary motor and premotor areas, right superior parietal lobule, right inferior frontal gyrus, bilateral mid-frontal gyri, and bilateral lateral cerebellum in contrast with rest, in a manner distinct from fingertapping versus rest and passive listening to the same piece versus rest. These data implicate an associative network independent of primary sensorimotor and auditory activity, likely representing the cortical elements most intimately linked to music production.
Magosso, Elisa; Bertini, Caterina; Cuppini, Cristiano; Ursino, Mauro
2016-10-01
Hemianopic patients retain some abilities to integrate audiovisual stimuli in the blind hemifield, showing both modulation of visual perception by auditory stimuli and modulation of auditory perception by visual stimuli. Indeed, conscious detection of a visual target in the blind hemifield can be improved by a spatially coincident auditory stimulus (auditory enhancement of visual detection), while a visual stimulus in the blind hemifield can improve localization of a spatially coincident auditory stimulus (visual enhancement of auditory localization). To gain more insight into the neural mechanisms underlying these two perceptual phenomena, we propose a neural network model including areas of neurons representing the retina, primary visual cortex (V1), extrastriate visual cortex, auditory cortex and the Superior Colliculus (SC). The visual and auditory modalities in the network interact via both direct cortical-cortical connections and subcortical-cortical connections involving the SC; the latter, in particular, integrates visual and auditory information and projects back to the cortices. Hemianopic patients were simulated by unilaterally lesioning V1, and preserving spared islands of V1 tissue within the lesion, to analyze the role of residual V1 neurons in mediating audiovisual integration. The network is able to reproduce the audiovisual phenomena in hemianopic patients, linking perceptions to neural activations, and disentangles the individual contribution of specific neural circuits and areas via sensitivity analyses. The study suggests i) a common key role of SC-cortical connections in mediating the two audiovisual phenomena; ii) a different role of visual cortices in the two phenomena: auditory enhancement of conscious visual detection being conditional on surviving V1 islands, while visual enhancement of auditory localization persisting even after complete V1 damage. 
The present study may contribute to advance understanding of the audiovisual dialogue between cortical and subcortical structures in healthy and unisensory deficit conditions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Sequencing the Cortical Processing of Pitch-Evoking Stimuli using EEG Analysis and Source Estimation
Butler, Blake E.; Trainor, Laurel J.
2012-01-01
Cues to pitch include spectral cues that arise from tonotopic organization and temporal cues that arise from firing patterns of auditory neurons. fMRI studies suggest a common pitch center is located just beyond primary auditory cortex along the lateral aspect of Heschl’s gyrus, but little work has examined the stages of processing for the integration of pitch cues. Using electroencephalography, we recorded cortical responses to high-pass filtered iterated rippled noise (IRN) and high-pass filtered complex harmonic stimuli, which differ in temporal and spectral content. The two stimulus types were matched for pitch saliency, and a mismatch negativity (MMN) response was elicited by infrequent pitch changes. The P1 and N1 components of event-related potentials (ERPs) are thought to arise from primary and secondary auditory areas, respectively, and to result from simple feature extraction. MMN is generated in secondary auditory cortex and is thought to act on feature-integrated auditory objects. We found that peak latencies of both P1 and N1 occur later in response to IRN stimuli than to complex harmonic stimuli, but found no latency differences between stimulus types for MMN. The location of each ERP component was estimated based on iterative fitting of regional sources in the auditory cortices. The sources of both the P1 and N1 components elicited by IRN stimuli were located dorsal to those elicited by complex harmonic stimuli, whereas no differences were observed for MMN sources across stimuli. Furthermore, the MMN component was located between the P1 and N1 components, consistent with fMRI studies indicating a common pitch region in lateral Heschl’s gyrus. These results suggest that while the spectral and temporal processing of different pitch-evoking stimuli involves different cortical areas during early processing, by the time the object-related MMN response is formed, these cues have been integrated into a common representation of pitch. PMID:22740836
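The latency comparison described above rests on simple peak-picking within component-specific time windows of the averaged waveform. The sketch below extracts P1 and N1 peak latencies from a synthetic ERP; the waveform shape, window boundaries, and component polarities are illustrative assumptions, not the study's data or analysis pipeline.

```python
import numpy as np

fs = 1000                         # sampling rate, Hz
t = np.arange(0, 0.4, 1 / fs)     # 0-400 ms epoch

# Synthetic ERP: a positive P1 near 50 ms and a negative N1 near 100 ms,
# modeled as Gaussian bumps purely for illustration.
erp = (2.0 * np.exp(-((t - 0.05) ** 2) / (2 * 0.01 ** 2))
       - 3.0 * np.exp(-((t - 0.10) ** 2) / (2 * 0.015 ** 2)))

def peak_latency_ms(erp, t, lo, hi, polarity):
    """Latency (ms) of the extremum of the given polarity in [lo, hi] s."""
    win = (t >= lo) & (t <= hi)
    idx = np.argmax(polarity * erp[win])
    return t[win][idx] * 1000

p1 = peak_latency_ms(erp, t, 0.03, 0.08, +1)   # positive-going P1 window
n1 = peak_latency_ms(erp, t, 0.08, 0.15, -1)   # negative-going N1 window
```

Comparing such latencies between stimulus types (here, IRN versus complex harmonic tones) is what reveals whether a component is delayed for one stimulus class; window choices must be fixed in advance to avoid biasing the comparison.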
Silent music reading: auditory imagery and visuotonal modality transfer in singers and non-singers.
Hoppe, Christian; Splittstößer, Christoph; Fliessbach, Klaus; Trautner, Peter; Elger, Christian E; Weber, Bernd
2014-11-01
In daily life, responses are often facilitated by anticipatory imagery of expected targets which are announced by associated stimuli from different sensory modalities. Silent music reading represents an intriguing case of visuotonal modality transfer in working memory as it induces highly defined auditory imagery on the basis of presented visuospatial information (i.e. musical notes). Using functional MRI and a delayed sequence matching-to-sample paradigm, we compared brain activations during retention intervals (10s) of visual (VV) or tonal (TT) unimodal maintenance versus visuospatial-to-tonal modality transfer (VT) tasks. Visual or tonal sequences were comprised of six elements, white squares or tones, which were low, middle, or high regarding vertical screen position or pitch, respectively (presentation duration: 1.5s). For the cross-modal condition (VT, session 3), the visuospatial elements from condition VV (session 1) were re-defined as low, middle or high "notes" indicating low, middle or high tones from condition TT (session 2), respectively, and subjects had to match tonal sequences (probe) to previously presented note sequences. Tasks alternately had low or high cognitive load. To evaluate possible effects of music reading expertise, 15 singers and 15 non-musicians were included. Scanner task performance was excellent in both groups. Despite identity of applied visuospatial stimuli, visuotonal modality transfer versus visual maintenance (VT>VV) induced "inhibition" of visual brain areas and activation of primary and higher auditory brain areas which exceeded auditory activation elicited by tonal stimulation (VT>TT). This transfer-related visual-to-auditory activation shift occurred in both groups but was more pronounced in experts. Frontoparietal areas were activated by higher cognitive load but not by modality transfer. 
The auditory brain showed a potential to anticipate expected auditory target stimuli on the basis of non-auditory information and sensory brain activation rather mirrored expectation than stimulation. Silent music reading probably relies on these basic neurocognitive mechanisms. Copyright © 2014 Elsevier Inc. All rights reserved.
Neural Responses to Complex Auditory Rhythms: The Role of Attending
Chapin, Heather L.; Zanto, Theodore; Jantzen, Kelly J.; Kelso, Scott J. A.; Steinberg, Fred; Large, Edward W.
2010-01-01
The aim of this study was to explore the role of attention in pulse and meter perception using complex rhythms. We used a selective attention paradigm in which participants attended to either a complex auditory rhythm or a visually presented word list. Performance on a reproduction task was used to gauge whether participants were attending to the appropriate stimulus. We hypothesized that attention to complex rhythms – which contain no energy at the pulse frequency – would lead to activations in motor areas involved in pulse perception. Moreover, because multiple repetitions of a complex rhythm are needed to perceive a pulse, activations in pulse-related areas would be seen only after sufficient time had elapsed for pulse perception to develop. Selective attention was also expected to modulate activity in sensory areas specific to the modality. We found that selective attention to rhythms led to increased BOLD responses in basal ganglia, and basal ganglia activity was observed only after the rhythms had cycled enough times for a stable pulse percept to develop. These observations suggest that attention is needed to recruit motor activations associated with the perception of pulse in complex rhythms. Moreover, attention to the auditory stimulus enhanced activity in an attentional sensory network including primary auditory cortex, insula, anterior cingulate, and prefrontal cortex, and suppressed activity in sensory areas associated with attending to the visual stimulus. PMID:21833279
Involvement of the human midbrain and thalamus in auditory deviance detection.
Cacciaglia, Raffaele; Escera, Carles; Slabu, Lavinia; Grimm, Sabine; Sanjuán, Ana; Ventura-Campos, Noelia; Ávila, César
2015-02-01
Prompt detection of unexpected changes in the sensory environment is critical for survival. In the auditory domain, the occurrence of a rare stimulus triggers a cascade of neurophysiological events spanning over multiple time-scales. Besides the role of the mismatch negativity (MMN), whose cortical generators are located in supratemporal areas, cumulative evidence suggests that violations of auditory regularities can be detected earlier and lower in the auditory hierarchy. Recent human scalp recordings have shown signatures of auditory mismatch responses at shorter latencies than those of the MMN. Moreover, animal single-unit recordings have demonstrated that rare stimulus changes cause a release from stimulus-specific adaptation in neurons of the primary auditory cortex, the medial geniculate body (MGB), and the inferior colliculus (IC). Although these data suggest that change detection is a pervasive property of the auditory system which may reside upstream of cortical sites, direct evidence for the involvement of subcortical stages in the human auditory novelty system is lacking. Using event-related functional magnetic resonance imaging during a frequency oddball paradigm, we here report that auditory deviance detection occurs in the MGB and the IC of healthy human participants. By implementing a random condition controlling for neural refractoriness effects, we show that auditory change detection in these subcortical stations involves the encoding of statistical regularities from the acoustic input. These results provide the first direct evidence of the existence of multiple mismatch detectors nested at different levels along the human ascending auditory pathway. Copyright © 2015 Elsevier Ltd. All rights reserved.
Sensory-motor interactions for vocal pitch monitoring in non-primary human auditory cortex.
Greenlee, Jeremy D W; Behroozmand, Roozbeh; Larson, Charles R; Jackson, Adam W; Chen, Fangxiang; Hansen, Daniel R; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A
2013-01-01
The neural mechanisms underlying processing of auditory feedback during self-vocalization are poorly understood. One technique used to study the role of auditory feedback involves shifting the pitch of the feedback that a speaker receives, known as pitch-shifted feedback. We utilized a pitch shift self-vocalization and playback paradigm to investigate the underlying neural mechanisms of audio-vocal interaction. High-resolution electrocorticography (ECoG) signals were recorded directly from auditory cortex of 10 human subjects while they vocalized and received brief downward (-100 cents) pitch perturbations in their voice auditory feedback (speaking task). ECoG was also recorded when subjects passively listened to playback of their own pitch-shifted vocalizations. Feedback pitch perturbations elicited average evoked potential (AEP) and event-related band power (ERBP) responses, primarily in the high gamma (70-150 Hz) range, in focal areas of non-primary auditory cortex on superior temporal gyrus (STG). The AEPs and high gamma responses were both modulated by speaking compared with playback in a subset of STG contacts. From these contacts, a majority showed significant enhancement of high gamma power and AEP responses during speaking while the remaining contacts showed attenuated response amplitudes. The speaking-induced enhancement effect suggests that engaging the vocal motor system can modulate auditory cortical processing of self-produced sounds in such a way as to increase neural sensitivity for feedback pitch error detection. It is likely that mechanisms such as efference copies may be involved in this process, and modulation of AEP and high gamma responses imply that such modulatory effects may affect different cortical generators within distinctive functional networks that drive voice production and control. PMID:23577157
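As an aside on the units used in this paradigm: a pitch shift expressed in cents maps to a frequency ratio of 2^(cents/1200), so the -100 cent perturbation corresponds to lowering the feedback frequency by about 5.6%. A minimal sketch of that conversion (the function name and the 220 Hz example fundamental are ours, purely illustrative):

```python
def cents_to_ratio(cents: float) -> float:
    """Convert a pitch shift in cents to a frequency multiplier.

    By definition, 1200 cents equal one octave (a frequency ratio of 2),
    so the multiplier is 2 raised to (cents / 1200).
    """
    return 2.0 ** (cents / 1200.0)

# A -100 cent perturbation lowers frequency to ~94.4% of the original.
ratio = cents_to_ratio(-100)
shifted_f0 = 220.0 * ratio  # e.g. a hypothetical 220 Hz voice fundamental after the shift
```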
Takahashi, Kuniyuki; Hishida, Ryuichi; Kubota, Yamato; Kudoh, Masaharu; Takahashi, Sugata; Shibuki, Katsuei
2006-03-01
Functional brain imaging using endogenous fluorescence of mitochondrial flavoprotein is useful for investigating mouse cortical activities via the intact skull, which is thin and sufficiently transparent in mice. We applied this method to investigate auditory cortical plasticity regulated by acoustic environments. Normal mice of the C57BL/6 strain, reared in various acoustic environments for at least 4 weeks after birth, were anaesthetized with urethane (1.7 g/kg, i.p.). Auditory cortical images of endogenous green fluorescence in blue light were recorded by a cooled CCD camera via the intact skull. Cortical responses elicited by tonal stimuli (5, 10 and 20 kHz) exhibited mirror-symmetrical tonotopic maps in the primary auditory cortex (AI) and anterior auditory field (AAF). Depression of auditory cortical responses, in terms of response duration, was observed in sound-deprived mice compared with naïve mice reared in a normal acoustic environment. When mice were exposed to an environmental tonal stimulus at 10 kHz for more than 4 weeks after birth, the cortical responses were potentiated in a frequency-specific manner with respect to the peak amplitude of the responses in AI, but not the size of the responsive areas. Changes in AAF were less clear than those in AI. To determine which synapses were modified by acoustic environments, neural responses in cortical slices were investigated with endogenous fluorescence imaging. The vertical thickness of responsive areas after supragranular electrical stimulation was significantly reduced in the slices obtained from sound-deprived mice. These results suggest that acoustic environments regulate the development of vertical intracortical circuits in the mouse auditory cortex.
Retrosplenial cortex is required for the retrieval of remote memory for auditory cues.
Todd, Travis P; Mehlman, Max L; Keene, Christopher S; DeAngeli, Nicole E; Bucci, David J
2016-06-01
The retrosplenial cortex (RSC) has a well-established role in contextual and spatial learning and memory, consistent with its known connectivity with visuo-spatial association areas. In contrast, RSC appears to have little involvement with delay fear conditioning to an auditory cue. However, all previous studies have examined the contribution of the RSC to recently acquired auditory fear memories. Since neocortical regions have been implicated in the permanent storage of remote memories, we examined the contribution of the RSC to remotely acquired auditory fear memories. In Experiment 1, retrieval of a remotely acquired auditory fear memory was impaired when permanent lesions (either electrolytic or neurotoxic) were made several weeks after initial conditioning. In Experiment 2, using a chemogenetic approach, we observed impairments in the retrieval of remote memory for an auditory cue when the RSC was temporarily inactivated during testing. In Experiment 3, after injection of a retrograde tracer into the RSC, we observed labeled cells in primary and secondary auditory cortices, as well as the claustrum, indicating that the RSC receives direct projections from auditory regions. Overall our results indicate the RSC has a critical role in the retrieval of remotely acquired auditory fear memories, and we suggest this is related to the quality of the memory, with less precise memories being RSC dependent. © 2016 Todd et al.; Published by Cold Spring Harbor Laboratory Press.
Neural plasticity expressed in central auditory structures with and without tinnitus
Roberts, Larry E.; Bosnyak, Daniel J.; Thompson, David C.
2012-01-01
Sensory training therapies for tinnitus are based on the assumption that, notwithstanding neural changes related to tinnitus, auditory training can alter the response properties of neurons in auditory pathways. To assess this assumption, we investigated whether brain changes induced by sensory training in tinnitus sufferers and measured by electroencephalography (EEG) are similar to those induced in age and hearing loss matched individuals without tinnitus trained on the same auditory task. Auditory training was given using a 5 kHz 40-Hz amplitude-modulated (AM) sound that was in the tinnitus frequency region of the tinnitus subjects and enabled extraction of the 40-Hz auditory steady-state response (ASSR) and P2 transient response known to localize to primary and non-primary auditory cortex, respectively. P2 amplitude increased over training sessions equally in participants with tinnitus and in control subjects, suggesting normal remodeling of non-primary auditory regions in tinnitus. However, training-induced changes in the ASSR differed between the tinnitus and control groups. In controls the phase delay between the 40-Hz response and stimulus waveforms reduced by about 10° over training, in agreement with previous results obtained in young normal hearing individuals. However, ASSR phase did not change significantly with training in the tinnitus group, although some participants showed phase shifts resembling controls. On the other hand, ASSR amplitude increased with training in the tinnitus group, whereas in controls this response (which is difficult to remodel in young normal hearing subjects) did not change with training. These results suggest that neural changes related to tinnitus altered how neural plasticity was expressed in the region of primary but not non-primary auditory cortex. Auditory training did not reduce tinnitus loudness although a small effect on the tinnitus spectrum was detected. PMID:22654738
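To put the reported ~10° phase change in physical terms: for a 40-Hz steady-state response, phase maps to latency via the stimulus modulation period (25 ms), so a 10° phase reduction corresponds to roughly 0.7 ms. A small sketch of that arithmetic (the function name is ours, for illustration only):

```python
def assr_phase_to_latency_ms(phase_deg: float, mod_freq_hz: float = 40.0) -> float:
    """Convert a steady-state response phase (in degrees) to latency in ms.

    One full cycle (360 degrees) of a 40-Hz modulation lasts 1000/40 = 25 ms,
    so latency = (phase / 360) * period.
    """
    period_ms = 1000.0 / mod_freq_hz
    return (phase_deg / 360.0) * period_ms

# The ~10 degree phase reduction over training corresponds to ~0.69 ms.
latency_shift = assr_phase_to_latency_ms(10.0)
```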
Brain activity related to phonation in young patients with adductor spasmodic dysphonia.
Kiyuna, Asanori; Maeda, Hiroyuki; Higa, Asano; Shingaki, Kouta; Uehara, Takayuki; Suzuki, Mikio
2014-06-01
This study investigated the brain activities during phonation of young patients with adductor spasmodic dysphonia (ADSD) of relatively short disease duration (<10 years). Six subjects with ADSD of short duration (mean age: 24.3 years; mean disease duration: 41 months) and six healthy controls (mean age: 30.8 years) underwent functional magnetic resonance imaging (fMRI) using a sparse sampling method to identify brain activity during vowel phonation (/i:/). Intragroup and intergroup analyses were performed using statistical parametric mapping software. Areas of activation in the ADSD and control groups were similar to those reported previously for vowel phonation. All of the activated areas were observed bilaterally and symmetrically. Intergroup analysis revealed higher brain activities in the ADSD group in the auditory-related areas (Brodmann's areas [BA] 40, 41), motor speech areas (BA44, 45), bilateral insula (BA13), bilateral cerebellum, and middle frontal gyrus (BA46). Areas with lower activation were in the left primary sensory area (BA1-3) and bilateral subcortical nucleus (putamen and globus pallidus). The auditory cortical responses observed may reflect that young ADSD patients control their voice by use of the motor speech area, insula, inferior parietal cortex, and cerebellum. Neural activity in the primary sensory area and basal ganglia may affect the voice symptoms of young ADSD patients with short disease duration. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Primary Auditory Cortex Regulates Threat Memory Specificity
ERIC Educational Resources Information Center
Wigestrand, Mattis B.; Schiff, Hillary C.; Fyhn, Marianne; LeDoux, Joseph E.; Sears, Robert M.
2017-01-01
Distinguishing threatening from nonthreatening stimuli is essential for survival and stimulus generalization is a hallmark of anxiety disorders. While auditory threat learning produces long-lasting plasticity in primary auditory cortex (Au1), it is not clear whether such Au1 plasticity regulates memory specificity or generalization. We used…
Sensory maps in the claustrum of the cat.
Olson, C R; Graybiel, A M
1980-12-04
The claustrum is a telencephalic cell group (Fig. 1A, B) possessing widespread reciprocal connections with the neocortex. In this regard, it bears a unique and striking resemblance to the thalamus. We have now examined the anatomical ordering of pathways linking the claustrum with sensory areas of the cat neocortex and, in parallel electrophysiological experiments, have studied the functional organization of claustral sensory zones so identified. Our findings indicate that there are discrete visual and somatosensory subdivisions in the claustrum interconnected with the corresponding primary sensory areas of the neocortex and that the respective zones contain orderly retinotopic and somatotopic maps. A third claustral region receiving fibre projections from the auditory cortex in or near area Ep was found to contain neurones responsive to auditory stimulation. We conclude that loops connecting sensory areas of the neocortex with satellite zones in the claustrum contribute to the early processing of exteroceptive information by the forebrain.
Teki, Sundeep; Barnes, Gareth R; Penny, William D; Iverson, Paul; Woodhead, Zoe V J; Griffiths, Timothy D; Leff, Alexander P
2013-06-01
In this study, we used magnetoencephalography and a mismatch paradigm to investigate speech processing in stroke patients with auditory comprehension deficits and age-matched control subjects. We probed connectivity within and between the two temporal lobes in response to phonemic (different word) and acoustic (same word) oddballs using dynamic causal modelling. We found stronger modulation of self-connections as a function of phonemic differences for control subjects versus aphasics in left primary auditory cortex and bilateral superior temporal gyrus. The patients showed stronger modulation of connections from right primary auditory cortex to right superior temporal gyrus (feed-forward) and from left primary auditory cortex to right primary auditory cortex (interhemispheric). This differential connectivity can be explained on the basis of a predictive coding theory which suggests increased prediction error and decreased sensitivity to phonemic boundaries in the aphasics' speech network in both hemispheres. Within the aphasics, we also found behavioural correlates with connection strengths: a negative correlation between phonemic perception and an inter-hemispheric connection (left superior temporal gyrus to right superior temporal gyrus), and positive correlation between semantic performance and a feedback connection (right superior temporal gyrus to right primary auditory cortex). Our results suggest that aphasics with impaired speech comprehension have less veridical speech representations in both temporal lobes, and rely more on the right hemisphere auditory regions, particularly right superior temporal gyrus, for processing speech. Despite this presumed compensatory shift in network connectivity, the patients remain significantly impaired. PMID:23715097
Langguth, Berthold; Zowe, Marc; Landgrebe, Michael; Sand, Philipp; Kleinjung, Tobias; Binder, Harald; Hajak, Göran; Eichhammer, Peter
2006-01-01
Auditory phantom perceptions are associated with hyperactivity of the central auditory system. Neuronavigation-guided repetitive transcranial magnetic stimulation (rTMS) of the area of increased activity has been shown to reduce tinnitus perception. This study aimed to develop an easily applicable standard procedure for transcranial magnetic stimulation of the primary auditory cortex and to investigate this coil positioning strategy for the treatment of chronic tinnitus in clinical practice. The left gyrus of Heschl was targeted in 25 healthy subjects using a frameless stereotactic system. Based on the individual scalp coordinates of the coil, a positioning strategy with reference to the 10-20 EEG system was developed. Using this coil positioning approach, we started an open treatment trial: 28 patients with chronic tinnitus received 10 sessions of rTMS (intensity 110% of motor threshold, 1 Hz, 2000 stimuli/day). As the scalp coordinates for stimulating the primary auditory cortex fell within a range of about 20 mm in diameter, a standard procedure for coil positioning could be defined. Clinical validation of this coil positioning method resulted in a significant improvement of tinnitus complaints (p<0.001). The newly developed coil positioning strategy may offer an easier-to-use stimulation approach for treating chronic tinnitus compared with highly sophisticated, imaging-guided treatment methods.
Integrating Information from Different Senses in the Auditory Cortex
King, Andrew J.; Walker, Kerry M.M.
2015-01-01
Multisensory integration was once thought to be the domain of brain areas high in the cortical hierarchy, with early sensory cortical fields devoted to unisensory processing of inputs from their given set of sensory receptors. More recently, a wealth of evidence documenting visual and somatosensory responses in auditory cortex, even as early as the primary fields, has changed this view of cortical processing. These multisensory inputs may serve to enhance responses to sounds that are accompanied by other sensory cues, effectively making them easier to hear, but may also act more selectively to shape the receptive field properties of auditory cortical neurons to the location or identity of these events. We discuss the new, converging evidence that multiplexing of neural signals may play a key role in informatively encoding and integrating signals in auditory cortex across multiple sensory modalities. We highlight some of the many open research questions that exist about the neural mechanisms that give rise to multisensory integration in auditory cortex, which should be addressed in future experimental and theoretical studies. PMID:22798035
Pérez, Miguel Ángel; Pérez-Valenzuela, Catherine; Rojas-Thomas, Felipe; Ahumada, Juan; Fuenzalida, Marco; Dagnino-Subiabre, Alexies
2013-08-29
Chronic stress induces dendritic atrophy in the rat primary auditory cortex (A1), a key brain area for auditory attention. The aim of this study was to determine whether repeated restraint stress affects auditory attention and synaptic transmission in A1. Male Sprague-Dawley rats were trained in a two-alternative choice task (2-ACT), a behavioral paradigm to study auditory attention in rats. Trained animals that reached a performance over 80% of correct trials in the 2-ACT were randomly assigned to control and restraint stress experimental groups. To analyze the effects of restraint stress on the auditory attention, trained rats of both groups were subjected to 50 2-ACT trials one day before and one day after of the stress period. A difference score was determined by subtracting the number of correct trials after from those before the stress protocol. Another set of rats was used to study the synaptic transmission in A1. Restraint stress decreased the number of correct trials by 28% compared to the performance of control animals (p < 0.001). Furthermore, stress reduced the frequency of spontaneous inhibitory postsynaptic currents (sIPSC) and miniature IPSC in A1, whereas glutamatergic efficacy was not affected. Our results demonstrate that restraint stress decreased auditory attention and GABAergic synaptic efficacy in A1. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.
1984-08-01
AUDITORY DISPLAYS. This work reviews the areas of monaural and binaural signal detection, auditory discrimination and localization, and reaction times to auditory stimuli, covering the major areas of auditory processing in humans.
Neural Representation of Concurrent Vowels in Macaque Primary Auditory Cortex
Micheyl, Christophe; Steinschneider, Mitchell
2016-01-01
Abstract Successful speech perception in real-world environments requires that the auditory system segregate competing voices that overlap in frequency and time into separate streams. Vowels are major constituents of speech and are comprised of frequencies (harmonics) that are integer multiples of a common fundamental frequency (F0). The pitch and identity of a vowel are determined by its F0 and spectral envelope (formant structure), respectively. When two spectrally overlapping vowels differing in F0 are presented concurrently, they can be readily perceived as two separate “auditory objects” with pitches at their respective F0s. A difference in pitch between two simultaneous vowels provides a powerful cue for their segregation, which in turn, facilitates their individual identification. The neural mechanisms underlying the segregation of concurrent vowels based on pitch differences are poorly understood. Here, we examine neural population responses in macaque primary auditory cortex (A1) to single and double concurrent vowels (/a/ and /i/) that differ in F0 such that they are heard as two separate auditory objects with distinct pitches. We find that neural population responses in A1 can resolve, via a rate-place code, lower harmonics of both single and double concurrent vowels. Furthermore, we show that the formant structures, and hence the identities, of single vowels can be reliably recovered from the neural representation of double concurrent vowels. We conclude that A1 contains sufficient spectral information to enable concurrent vowel segregation and identification by downstream cortical areas. PMID:27294198
Plasticity of spatial hearing: behavioural effects of cortical inactivation
Nodal, Fernando R; Bajo, Victoria M; King, Andrew J
2012-01-01
The contribution of auditory cortex to spatial information processing was explored behaviourally in adult ferrets by reversibly deactivating different cortical areas by subdural placement of a polymer that released the GABAA agonist muscimol over a period of weeks. The spatial extent and time course of cortical inactivation were determined electrophysiologically. Muscimol-Elvax was placed bilaterally over the anterior (AEG), middle (MEG) or posterior ectosylvian gyrus (PEG), so that different regions of the auditory cortex could be deactivated in different cases. Sound localization accuracy in the horizontal plane was assessed by measuring both the initial head orienting and approach-to-target responses made by the animals. Head orienting behaviour was unaffected by silencing any region of the auditory cortex, whereas the accuracy of approach-to-target responses to brief sounds (40 ms noise bursts) was reduced by muscimol-Elvax but not by drug-free implants. Modest but significant localization impairments were observed after deactivating the MEG, AEG or PEG, although the largest deficits were produced in animals in which the MEG, where the primary auditory fields are located, was silenced. We also examined experience-induced spatial plasticity by reversibly plugging one ear. In control animals, localization accuracy for both approach-to-target and head orienting responses was initially impaired by monaural occlusion, but recovered with training over the next few days. Deactivating any part of the auditory cortex resulted in less complete recovery than in controls, with the largest deficits observed after silencing the higher-level cortical areas in the AEG and PEG. Although suggesting that each region of auditory cortex contributes to spatial learning, differences in the localization deficits and degree of adaptation between groups imply a regional specialization in the processing of spatial information across the auditory cortex. PMID:22547635
Werner, Sebastian; Noppeney, Uta
2010-02-17
Multisensory interactions have been demonstrated in a distributed neural system encompassing primary sensory and higher-order association areas. However, their distinct functional roles in multisensory integration remain unclear. This functional magnetic resonance imaging study dissociated the functional contributions of three cortical levels to multisensory integration in object categorization. Subjects actively categorized or passively perceived noisy auditory and visual signals emanating from everyday actions with objects. The experiment included two 2 x 2 factorial designs that manipulated either (1) the presence/absence or (2) the informativeness of the sensory inputs. These experimental manipulations revealed three patterns of audiovisual interactions. (1) In primary auditory cortices (PACs), a concurrent visual input increased the stimulus salience by amplifying the auditory response regardless of task-context. Effective connectivity analyses demonstrated that this automatic response amplification is mediated via both direct and indirect [via superior temporal sulcus (STS)] connectivity to visual cortices. (2) In STS and intraparietal sulcus (IPS), audiovisual interactions sustained the integration of higher-order object features and predicted subjects' audiovisual benefits in object categorization. (3) In the left ventrolateral prefrontal cortex (vlPFC), explicit semantic categorization resulted in suppressive audiovisual interactions as an index for multisensory facilitation of semantic retrieval and response selection. In conclusion, multisensory integration emerges at multiple processing stages within the cortical hierarchy. The distinct profiles of audiovisual interactions dissociate audiovisual salience effects in PACs, formation of object representations in STS/IPS and audiovisual facilitation of semantic categorization in vlPFC. 
Furthermore, in STS/IPS, the profiles of audiovisual interactions were behaviorally relevant and predicted subjects' multisensory benefits in performance accuracy.
Information fusion via isocortex-based Area 37 modeling
NASA Astrophysics Data System (ADS)
Peterson, James K.
2004-08-01
A simplified model of information processing in the brain can be constructed using primary sensory input from two modalities (auditory and visual) and recurrent connections to the limbic subsystem. Information fusion would then occur in Area 37 of the temporal cortex. The creation of meta concepts from the low order primary inputs is managed by models of isocortex processing. Isocortex algorithms are used to model parietal (auditory), occipital (visual), temporal (polymodal fusion) cortex and the limbic system. Each of these four modules is constructed out of five cortical stacks in which each stack consists of three vertically oriented six layer isocortex models. The input to output training of each cortical model uses the OCOS (on center - off surround) and FFP (folded feedback pathway) circuitry of (Grossberg, 1) which is inherently a recurrent network type of learning characterized by the identification of perceptual groups. Models of this sort are thus closely related to cognitive models as it is difficult to divorce the sensory processing subsystems from the higher level processing in the associative cortex. The overall software architecture presented is biologically based and is presented as a potential architectural prototype for the development of novel sensory fusion strategies. The algorithms are motivated to some degree by specific data from projects on musical composition and autonomous fine art painting programs, but only in the sense that these projects use two specific types of auditory and visual cortex data. Hence, the architectures are presented for an artificial information processing system which utilizes two disparate sensory sources. The exact nature of the two primary sensory input streams is irrelevant.
The cholinergic basal forebrain in the ferret and its inputs to the auditory cortex
Bajo, Victoria M; Leach, Nicholas D; Cordery, Patricia M; Nodal, Fernando R; King, Andrew J
2014-01-01
Cholinergic inputs to the auditory cortex can modulate sensory processing and regulate stimulus-specific plasticity according to the behavioural state of the subject. In order to understand how acetylcholine achieves this, it is essential to elucidate the circuitry by which cholinergic inputs influence the cortex. In this study, we described the distribution of cholinergic neurons in the basal forebrain and their inputs to the auditory cortex of the ferret, a species used increasingly in studies of auditory learning and plasticity. Cholinergic neurons in the basal forebrain, visualized by choline acetyltransferase and p75 neurotrophin receptor immunocytochemistry, were distributed through the medial septum, diagonal band of Broca, and nucleus basalis magnocellularis. Epipial tracer deposits and injections of the immunotoxin ME20.4-SAP (monoclonal antibody specific for the p75 neurotrophin receptor conjugated to saporin) in the auditory cortex showed that cholinergic inputs originate almost exclusively in the ipsilateral nucleus basalis. Moreover, tracer injections in the nucleus basalis revealed a pattern of labelled fibres and terminal fields that resembled acetylcholinesterase fibre staining in the auditory cortex, with the heaviest labelling in layers II/III and in the infragranular layers. Labelled fibres with small en-passant varicosities and simple terminal swellings were observed throughout all auditory cortical regions. The widespread distribution of cholinergic inputs from the nucleus basalis to both primary and higher level areas of the auditory cortex suggests that acetylcholine is likely to be involved in modulating many aspects of auditory processing. PMID:24945075
Missing a trick: Auditory load modulates conscious awareness in audition.
Fairnie, Jake; Moore, Brian C J; Remington, Anna
2016-07-01
In the visual domain there is considerable evidence supporting the Load Theory of Attention and Cognitive Control, which holds that conscious perception of background stimuli depends on the level of perceptual load involved in a primary task. However, literature on the applicability of this theory to the auditory domain is limited and, in many cases, inconsistent. Here we present a novel "auditory search task" that allows systematic investigation of the impact of auditory load on auditory conscious perception. An array of simultaneous, spatially separated sounds was presented to participants. On half the trials, a critical stimulus was presented concurrently with the array. Participants were asked to detect which of 2 possible targets was present in the array (primary task), and whether the critical stimulus was present or absent (secondary task). Increasing the auditory load of the primary task (raising the number of sounds in the array) consistently reduced the ability to detect the critical stimulus. This indicates that, at least in certain situations, load theory applies in the auditory domain. The implications of this finding are discussed both with respect to our understanding of typical audition and for populations with altered auditory processing. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Holcomb, H H; Medoff, D R; Caudill, P J; Zhao, Z; Lahti, A C; Dannals, R F; Tamminga, C A
1998-09-01
Tone recognition is partially subserved by neural activity in the right frontal and primary auditory cortices. First, we determined the brain areas associated with tone perception and recognition. This study then examined how regional cerebral blood flow (rCBF) in these and other brain regions correlates with the behavioral characteristics of a difficult tone recognition task. rCBF changes were assessed using H2(15)O positron emission tomography. Subtraction procedures were used to localize significant change regions, and correlational analyses were applied to determine how response times (RT) predicted rCBF patterns. Twelve trained normal volunteers were studied in three conditions: REST, sensory motor control (SMC) and decision (DEC). The SMC-REST contrast revealed bilateral activation of primary auditory cortices, cerebellum and bilateral inferior frontal gyri. DEC-SMC produced significant clusters in the right middle and inferior frontal gyri, insula and claustrum; the anterior cingulate gyrus and supplementary motor area; the left insula/claustrum; and the left cerebellum. Correlational analyses, RT versus rCBF from DEC scans, showed a positive correlation in right inferior and middle frontal cortex; rCBF in bilateral auditory cortices and cerebellum exhibited significant negative correlations with RT. These changes suggest that neural activity in the right frontal, superior temporal and cerebellar regions shifts back and forth in magnitude depending on whether tone recognition RT is relatively fast or slow during a difficult, accurate assessment.
Auditory, Vestibular and Cognitive Effects due to Repeated Blast Exposure on the Warfighter
2012-10-01
Gaze Horizontal (Left and Right) Description: The primary purpose of the Gaze Horizontal subtest was to detect nystagmus when the head is fixed and the eyes are gazing off center from the primary (straight ahead) gaze position. This test is designed...physiological target area and examiner instructions for testing): Spontaneous Nystagmus, Smooth Harmonic Acceleration (.01, .08, .32, .64, 1.75
Moyer, Caitlin E.; Delevich, Kristen M.; Fish, Kenneth N.; Asafu-Adjei, Josephine K.; Sampson, Allan R.; Dorph-Petersen, Karl-Anton; Lewis, David A.; Sweet, Robert A.
2012-01-01
Background Schizophrenia is associated with perceptual and physiological auditory processing impairments that may result from primary auditory cortex excitatory and inhibitory circuit pathology. High-frequency oscillations are important for auditory function and are often reported to be disrupted in schizophrenia. These oscillations may, in part, depend on upregulation of gamma-aminobutyric acid synthesis by glutamate decarboxylase 65 (GAD65) in response to high interneuron firing rates. It is not known whether levels of GAD65 protein or GAD65-expressing boutons are altered in schizophrenia. Methods We studied two cohorts of subjects with schizophrenia and matched control subjects, comprising 27 pairs of subjects. Relative fluorescence intensity, density, volume, and number of GAD65-immunoreactive boutons in primary auditory cortex were measured using quantitative confocal microscopy and stereologic sampling methods. Bouton fluorescence intensities were used to compare the relative expression of GAD65 protein within boutons between diagnostic groups. Additionally, we assessed the correlation between previously measured dendritic spine densities and GAD65-immunoreactive bouton fluorescence intensities. Results GAD65-immunoreactive bouton fluorescence intensity was reduced by 40% in subjects with schizophrenia and was correlated with previously measured reduced spine density. The reduction was greater in subjects who were not living independently at time of death. In contrast, GAD65-immunoreactive bouton density and number were not altered in deep layer 3 of primary auditory cortex of subjects with schizophrenia. Conclusions Decreased expression of GAD65 protein within inhibitory boutons could contribute to auditory impairments in schizophrenia. The correlated reductions in dendritic spines and GAD65 protein suggest a relationship between inhibitory and excitatory synapse pathology in primary auditory cortex. PMID:22624794
Scott, Gregory D; Karns, Christina M; Dow, Mark W; Stevens, Courtney; Neville, Helen J
2014-01-01
Brain reorganization associated with altered sensory experience clarifies the critical role of neuroplasticity in development. An example is enhanced peripheral visual processing associated with congenital deafness, but the neural systems supporting this have not been fully characterized. A gap in our understanding of deafness-enhanced peripheral vision is the contribution of primary auditory cortex. Previous studies of auditory cortex that use anatomical normalization across participants were limited by inter-subject variability of Heschl's gyrus. In addition to reorganized auditory cortex (cross-modal plasticity), a second gap in our understanding is the contribution of altered modality-specific cortices (visual intramodal plasticity in this case), as well as supramodal and multisensory cortices, especially when target detection is required across contrasts. Here we address these gaps by comparing fMRI signal change for peripheral vs. perifoveal visual stimulation (11-15° vs. 2-7°) in congenitally deaf and hearing participants in a blocked experimental design with two analytical approaches: a Heschl's gyrus region of interest analysis and a whole brain analysis. Our results using individually-defined primary auditory cortex (Heschl's gyrus) indicate that fMRI signal change for more peripheral stimuli was greater than perifoveal in deaf but not in hearing participants. Whole-brain analyses revealed differences between deaf and hearing participants for peripheral vs. perifoveal visual processing in extrastriate visual cortex including primary auditory cortex, MT+/V5, superior-temporal auditory, and multisensory and/or supramodal regions, such as posterior parietal cortex (PPC), frontal eye fields, anterior cingulate, and supplementary eye fields. Overall, these data demonstrate the contribution of neuroplasticity in multiple systems including primary auditory cortex, supramodal, and multisensory regions, to altered visual processing in congenitally deaf adults.
Frontal Cortex Activation Causes Rapid Plasticity of Auditory Cortical Processing
Winkowski, Daniel E.; Bandyopadhyay, Sharba; Shamma, Shihab A.
2013-01-01
Neurons in the primary auditory cortex (A1) can show rapid changes in receptive fields when animals are engaged in sound detection and discrimination tasks. The source of a signal to A1 that triggers these changes is suspected to be in frontal cortical areas. How or whether activity in frontal areas can influence activity and sensory processing in A1 and the detailed changes occurring in A1 on the level of single neurons and in neuronal populations remain uncertain. Using electrophysiological techniques in mice, we found that pairing orbitofrontal cortex (OFC) stimulation with sound stimuli caused rapid changes in the sound-driven activity within A1 that are largely mediated by noncholinergic mechanisms. By integrating in vivo two-photon Ca2+ imaging of A1 with OFC stimulation, we found that pairing OFC activity with sounds caused dynamic and selective changes in sensory responses of neural populations in A1. Further, analysis of changes in signal and noise correlation after OFC pairing revealed improvement in neural population-based discrimination performance within A1. This improvement was frequency specific and dependent on correlation changes. These OFC-induced influences on auditory responses resemble behavior-induced influences on auditory responses and demonstrate that OFC activity could underlie the coordination of rapid, dynamic changes in A1 to dynamic sensory environments. PMID:24227723
Auditory-Cortex Short-Term Plasticity Induced by Selective Attention
Jääskeläinen, Iiro P.; Ahveninen, Jyrki
2014-01-01
The ability to concentrate on relevant sounds in the acoustic environment is crucial for everyday function and communication. Converging lines of evidence suggest that transient functional changes in auditory-cortex neurons, “short-term plasticity”, might explain this fundamental function. Under conditions of strongly focused attention, enhanced processing of attended sounds can take place at very early latencies (~50 ms from sound onset) in primary auditory cortex and possibly even at earlier latencies in subcortical structures. More robust selective-attention short-term plasticity is manifested as modulation of responses peaking at ~100 ms from sound onset in functionally specialized nonprimary auditory-cortical areas, by way of stimulus-specific reshaping of neuronal receptive fields that supports filtering of selectively attended sound features from task-irrelevant ones. Such effects have been shown to take hold within seconds of shifting the attentional focus. There are findings suggesting that the reshaping of neuronal receptive fields is even stronger at longer auditory-cortex response latencies (~300 ms from sound onset). These longer-latency short-term plasticity effects seem to build up more gradually, within tens of seconds after shifting the focus of attention. Importantly, some of the auditory-cortical short-term plasticity effects observed during selective attention predict enhancements in behaviorally measured sound discrimination performance. PMID:24551458
Zheng, Leilei; Chai, Hao; Yu, Shaohua; Xu, You; Chen, Wanzhen; Wang, Wei
2015-01-01
The exact mechanism behind auditory hallucinations in schizophrenia remains unknown. A corollary discharge dysfunction hypothesis has been put forward, but it requires further confirmation. Electroencephalography (EEG) of the Deutsch octave illusion might offer more insight, by demonstrating an abnormal cerebral activation similar to that under auditory hallucinations in schizophrenic patients. We invited 23 first-episode schizophrenic patients with auditory hallucinations and 23 healthy participants to listen to silence and two sound sequences, which consisted of alternating 400- and 800-Hz tones. EEG spectral power and coherence values of different frequency bands, including theta rhythm (3.5-7.5 Hz), were computed using 32 scalp electrodes. Task-related spectral power changes and task-related coherence differences were also calculated. Clinical characteristics of patients were rated using the Positive and Negative Syndrome Scale. After both sequences of octave illusion, the task-related theta power change values of frontal and temporal areas were significantly lower, and the task-related theta coherence difference values of intrahemispheric frontal-temporal areas were significantly higher in schizophrenic patients than in healthy participants. Moreover, the task-related power change values in both hemispheres were negatively correlated and the task-related coherence difference values in the right hemisphere were positively correlated with the hallucination score in schizophrenic patients. We only tested the Deutsch octave illusion in primary schizophrenic patients with acute first episode. Further studies might adopt other illusions or employ other forms of schizophrenia. Our results showed a lower activation but higher connection within frontal and temporal areas in schizophrenic patients under octave illusion. 
This suggests that an oversynchronized but weak frontal area exerts an action on the ipsilateral temporal area, which supports the corollary discharge dysfunction hypothesis. © 2014 S. Karger AG, Basel.
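The two spectral measures reported above, band-limited theta power and interhemispheric coherence, can be sketched with a Welch-style estimator. This is a minimal illustration, not the authors' EEG pipeline; the function name, segment length, and windowing choices are ours, with only the theta band edges (3.5-7.5 Hz) taken from the abstract.

```python
import numpy as np

def theta_power_and_coherence(x, y, fs, seg_len=256, lo=3.5, hi=7.5):
    """Welch-style band power of x and mean magnitude-squared coherence
    between x and y inside the theta band (3.5-7.5 Hz)."""
    n_seg = len(x) // seg_len
    win = np.hanning(seg_len)
    sxx = syy = sxy = 0.0
    for i in range(n_seg):
        # Windowed FFT of each segment, then accumulate (cross-)spectra
        xs = np.fft.rfft(x[i * seg_len:(i + 1) * seg_len] * win)
        ys = np.fft.rfft(y[i * seg_len:(i + 1) * seg_len] * win)
        sxx = sxx + np.abs(xs) ** 2
        syy = syy + np.abs(ys) ** 2
        sxy = sxy + xs * np.conj(ys)
    freqs = np.fft.rfftfreq(seg_len, 1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    power = sxx[band].mean() / n_seg                     # relative theta power of x
    coh = np.abs(sxy[band]) ** 2 / (sxx[band] * syy[band])  # per-bin coherence
    return power, coh.mean()
```

Two channels sharing a strong theta component yield coherence near 1 in the band; an unrelated channel yields coherence near 1/n_seg, the estimator's bias floor.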
2012-04-14
flow or electrical activity in the primary auditory cortex and sound intensity level. Other studies (Brechmann et al., 2002; Hart et al., 2003; Tanji et...duration. Decoding of perceived loudness from brain signals may have important applications for the calibration of stimulation levels of cochlear implants
Decoding sound level in the marmoset primary auditory cortex.
Sun, Wensheng; Marongelli, Ellisha N; Watkins, Paul V; Barbour, Dennis L
2017-10-01
Neurons that respond favorably to a particular sound level have been observed throughout the central auditory system, becoming steadily more common at higher processing areas. One theory about the role of these level-tuned or nonmonotonic neurons is the level-invariant encoding of sounds. To investigate this theory, we simulated various subpopulations of neurons by drawing from real primary auditory cortex (A1) neuron responses and surveyed their performance in forming different sound level representations. Pure nonmonotonic subpopulations did not provide the best level-invariant decoding; instead, mixtures of monotonic and nonmonotonic neurons provided the most accurate decoding. For level-fidelity decoding, the inclusion of nonmonotonic neurons slightly improved or did not change decoding accuracy until they constituted a high proportion. These results indicate that nonmonotonic neurons fill an encoding role complementary to, rather than alternate to, monotonic neurons. NEW & NOTEWORTHY Neurons with nonmonotonic rate-level functions are unique to the central auditory system. These level-tuned neurons have been proposed to account for invariant sound perception across sound levels. Through systematic simulations based on real neuron responses, this study shows that neuron populations perform sound encoding optimally when containing both monotonic and nonmonotonic neurons. The results indicate that instead of working independently, nonmonotonic neurons complement the function of monotonic neurons in different sound-encoding contexts. Copyright © 2017 the American Physiological Society.
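The simulation logic described above, building subpopulations of monotonic and nonmonotonic rate-level functions and reading sound level back out of their joint response, can be sketched as follows. This is a hypothetical toy version using simple parametric tuning curves; the study itself drew from recorded A1 responses, and every parameter value below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
levels = np.arange(0, 80, 10)  # candidate sound levels (dB SPL)

def monotonic(level, thresh, slope=8.0):
    """Sigmoidal (monotonic) rate-level function, peak rate 50 spk/s."""
    return 50.0 / (1.0 + np.exp(-(level - thresh) / slope))

def nonmonotonic(level, best, width=12.0):
    """Level-tuned (nonmonotonic) Gaussian rate-level function."""
    return 50.0 * np.exp(-((level - best) / width) ** 2)

def population_rates(level, n_mono=10, n_non=10):
    """Mean firing rates of a mixed monotonic/nonmonotonic population."""
    mono = [monotonic(level, t) for t in np.linspace(10, 60, n_mono)]
    non = [nonmonotonic(level, b) for b in np.linspace(10, 70, n_non)]
    return np.array(mono + non)

# Noise-free response templates, one per candidate level
templates = np.stack([population_rates(L) for L in levels])

def decode(level, noise_sd=3.0):
    """Template-matching (nearest-template) read-out of sound level."""
    obs = population_rates(level) + rng.normal(0.0, noise_sd, templates.shape[1])
    return int(levels[np.argmin(((templates - obs) ** 2).sum(axis=1))])
```

Varying the monotonic/nonmonotonic mix in `population_rates` is the kind of manipulation the study used to compare decoding accuracy across subpopulation compositions.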
Tuning in to the Voices: A Multisite fMRI Study of Auditory Hallucinations
Ford, Judith M.; Roach, Brian J.; Jorgensen, Kasper W.; Turner, Jessica A.; Brown, Gregory G.; Notestine, Randy; Bischoff-Grethe, Amanda; Greve, Douglas; Wible, Cynthia; Lauriello, John; Belger, Aysenil; Mueller, Bryon A.; Calhoun, Vincent; Preda, Adrian; Keator, David; O'Leary, Daniel S.; Lim, Kelvin O.; Glover, Gary; Potkin, Steven G.; Mathalon, Daniel H.
2009-01-01
Introduction: Auditory hallucinations or voices are experienced by 75% of people diagnosed with schizophrenia. We presumed that auditory cortex of schizophrenia patients who experience hallucinations is tonically “tuned” to internal auditory channels, at the cost of processing external sounds, both speech and nonspeech. Accordingly, we predicted that patients who hallucinate would show less auditory cortical activation to external acoustic stimuli than patients who did not. Methods: At 9 Functional Imaging Biomedical Informatics Research Network (FBIRN) sites, whole-brain images from 106 patients and 111 healthy comparison subjects were collected while subjects performed an auditory target detection task. Data were processed with the FBIRN processing stream. A region of interest analysis extracted activation values from primary (BA41) and secondary auditory cortex (BA42), auditory association cortex (BA22), and middle temporal gyrus (BA21). Patients were sorted into hallucinators (n = 66) and nonhallucinators (n = 40) based on symptom ratings done during the previous week. Results: Hallucinators had less activation to probe tones in left primary auditory cortex (BA41) than nonhallucinators. This effect was not seen on the right. Discussion: Although “voices” are the anticipated sensory experience, it appears that even primary auditory cortex is “turned on” and “tuned in” to process internal acoustic information at the cost of processing external sounds. Although this study was not designed to probe cortical competition for auditory resources, we were able to take advantage of the data and find significant effects, perhaps because of the power afforded by such a large sample. PMID:18987102
Long-range synchrony of gamma oscillations and auditory hallucination symptoms in schizophrenia
Mulert, C.; Kirsch; Pascual-Marqui, Roberto; McCarley, Robert W.; Spencer, Kevin M.
2010-01-01
Phase locking in the gamma-band range has been shown to be diminished in patients with schizophrenia. Moreover, there have been reports of positive correlations between phase locking in the gamma-band range and positive symptoms, especially hallucinations. The aim of the present study was to use a new methodological approach in order to investigate gamma-band phase synchronization between the left and right auditory cortex in patients with schizophrenia and its relationship to auditory hallucinations. Subjects were 18 patients with chronic schizophrenia (SZ) and 16 healthy control (HC) subjects. Auditory hallucination symptom scores were obtained using the Scale for the Assessment of Positive Symptoms. Stimuli were 40-Hz binaural click trains. The generators of the 40-Hz auditory steady-state response (ASSR) were localized using eLORETA and, based on the computed intracranial signals, lagged interhemispheric phase locking between primary and secondary auditory cortices was analyzed. Current source density of the 40-Hz ASSR was significantly diminished in SZ compared with HC in the right superior and middle temporal gyrus (p<0.05). Interhemispheric phase locking was reduced in SZ compared with HC for the primary auditory cortices (p<0.05) but not in the secondary auditory cortices. A significant positive correlation was found between auditory hallucination symptom scores and phase synchronization between the primary auditory cortices (p<0.05, corrected for multiple testing) but not for the secondary auditory cortices. These results suggest that long-range synchrony of gamma oscillations is disturbed in schizophrenia and that this deficit is related to clinical symptoms such as auditory hallucinations. PMID:20713096
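Interhemispheric phase locking of the kind analyzed here can be illustrated with a phase-locking value (PLV) computed from analytic-signal phases. A minimal sketch assuming band-limited inputs; it does not reproduce the eLORETA source estimation or the lagged-phase correction used in the study, and the function names are ours.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (same construction as a Hilbert transform)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)          # spectral multiplier: keep DC, double positives
    h[0] = 1.0
    if n % 2 == 0:
        h[1:n // 2] = 2.0
        h[n // 2] = 1.0      # Nyquist bin kept once for even n
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)

def plv(x, y):
    """Phase-locking value: 1 = constant phase lag, ~0 = unrelated phases."""
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return float(np.abs(np.mean(np.exp(1j * dphi))))
```

For two 40-Hz responses with a fixed lag, `plv` approaches 1; for a signal paired with unrelated noise it falls toward zero, mirroring the reduced interhemispheric locking reported in the patient group.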
Hierarchical differences in population coding within auditory cortex.
Downer, Joshua D; Niwa, Mamiko; Sutter, Mitchell L
2017-08-01
Most models of auditory cortical (AC) population coding have focused on primary auditory cortex (A1). Thus our understanding of how neural coding for sounds progresses along the cortical hierarchy remains obscure. To illuminate this, we recorded from two AC fields: A1 and middle lateral belt (ML) of rhesus macaques. We presented amplitude-modulated (AM) noise during both passive listening and while the animals performed an AM detection task ("active" condition). In both fields, neurons exhibit monotonic AM-depth tuning, with A1 neurons mostly exhibiting increasing rate-depth functions and ML neurons approximately evenly distributed between increasing and decreasing functions. We measured noise correlation (r_noise) between simultaneously recorded neurons and found that whereas engagement decreased average r_noise in A1, engagement increased average r_noise in ML. This finding surprised us, because attentive states are commonly reported to decrease average r_noise. We analyzed the effect of r_noise on AM coding in both A1 and ML and found that whereas engagement-related shifts in r_noise in A1 enhance AM coding, r_noise shifts in ML have little effect. These results imply that the effect of r_noise differs between sensory areas, based on the distribution of tuning properties among the neurons within each population. A possible explanation of this is that higher areas need to encode nonsensory variables (e.g., attention, choice, and motor preparation), which impart common noise, thus increasing r_noise. Therefore, the hierarchical emergence of r_noise-robust population coding (e.g., as we observed in ML) enhances the ability of sensory cortex to integrate cognitive and sensory information without a loss of sensory fidelity. NEW & NOTEWORTHY Prevailing models of population coding of sensory information are based on a limited subset of neural structures.
An important and under-explored question in neuroscience is how distinct areas of sensory cortex differ in their population coding strategies. In this study, we compared population coding between primary and secondary auditory cortex. Our findings demonstrate striking differences between the two areas and highlight the importance of considering the diversity of neural structures as we develop models of population coding. Copyright © 2017 the American Physiological Society.
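The dependence of noise-correlation effects on tuning similarity can be shown in a two-neuron toy model: shared noise hurts a linear decoder when the neurons prefer the same stimulus, but helps when their preferences are opposite, as in ML's mix of increasing and decreasing rate-depth functions. This is an illustrative sketch, not the study's analysis; all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(tuning, stim, r_noise, n_trials=2000, sd=1.0):
    """Trial-by-trial responses of two neurons with correlated ('shared') noise."""
    cov = sd ** 2 * np.array([[1.0, r_noise], [r_noise, 1.0]])
    means = np.array([t[stim] for t in tuning])
    return rng.multivariate_normal(means, cov, size=n_trials)

def decode_accuracy(tuning, r_noise):
    """Accuracy of a linear read-out discriminating stimulus 0 vs. stimulus 1."""
    a = simulate(tuning, 0, r_noise)
    b = simulate(tuning, 1, r_noise)
    w = np.array([t[1] - t[0] for t in tuning])      # mean-difference axis
    crit = ((a @ w).mean() + (b @ w).mean()) / 2.0   # midpoint criterion
    return (((a @ w) < crit).mean() + ((b @ w) > crit).mean()) / 2.0

# (mean response to stim 0, mean response to stim 1) for each neuron
same = [(0.0, 1.0), (0.0, 1.0)]  # similar tuning: shared noise hurts decoding
opp = [(0.0, 1.0), (1.0, 0.0)]   # opposite tuning: shared noise cancels out
```

With similar tuning, positive r_noise adds variance along the discriminant axis; with opposite tuning, the same shared noise projects off that axis and discrimination improves, which is one way a mixed population can remain robust to common noise.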
Contextual modulation of primary visual cortex by auditory signals.
Petro, L S; Paton, A T; Muckli, L
2017-02-19
Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195-201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256-1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame.This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.
Nieto-Diego, Javier; Malmierca, Manuel S.
2016-01-01
Stimulus-specific adaptation (SSA) in single neurons of the auditory cortex was suggested to be a potential neural correlate of the mismatch negativity (MMN), a widely studied component of the auditory event-related potentials (ERP) that is elicited by changes in the auditory environment. However, several aspects on this SSA/MMN relation remain unresolved. SSA occurs in the primary auditory cortex (A1), but detailed studies on SSA beyond A1 are lacking. To study the topographic organization of SSA, we mapped the whole rat auditory cortex with multiunit activity recordings, using an oddball paradigm. We demonstrate that SSA occurs outside A1 and differs between primary and nonprimary cortical fields. In particular, SSA is much stronger and develops faster in the nonprimary than in the primary fields, paralleling the organization of subcortical SSA. Importantly, strong SSA is present in the nonprimary auditory cortex within the latency range of the MMN in the rat and correlates with an MMN-like difference wave in the simultaneously recorded local field potentials (LFP). We present new and strong evidence linking SSA at the cellular level to the MMN, a central tool in cognitive and clinical neuroscience. PMID:26950883
Phillips, D P; Farmer, M E
1990-11-15
This paper explores the nature of the processing disorder that underlies the speech discrimination deficit in the syndrome of acquired word deafness following pathology of the primary auditory cortex. A critical examination of the evidence on this disorder revealed the following. First, the most profound forms of the condition are expressed not only in an isolation of the cerebral linguistic processor from auditory input, but in a failure of even the perceptual elaboration of the relevant sounds. Second, in agreement with earlier studies, we conclude that the perceptual dimension disturbed in word deafness is a temporal one. We argue, however, that it is not a generalized disorder of auditory temporal processing, but one which is largely restricted to the processing of sounds with temporal content in the milliseconds to tens-of-milliseconds time frame. The perceptual elaboration of sounds with temporal content outside that range, in either direction, may survive the disorder. Third, we present neurophysiological evidence that the primary auditory cortex has a special role in the representation of auditory events in that time frame, but not in the representation of auditory events with temporal grains outside that range.
Auditory Resting-State Network Connectivity in Tinnitus: A Functional MRI Study
Maudoux, Audrey; Lefebvre, Philippe; Cabay, Jean-Evrard; Demertzi, Athena; Vanhaudenhuyse, Audrey; Laureys, Steven; Soddu, Andrea
2012-01-01
The underlying functional neuroanatomy of tinnitus remains poorly understood. Few studies have focused on functional cerebral connectivity changes in tinnitus patients. The aim of this study was to test whether functional MRI “resting-state” connectivity patterns in the auditory network differ between tinnitus patients and normal controls. Thirteen chronic tinnitus subjects and fifteen age-matched healthy controls were studied on a 3 Tesla MRI scanner. Connectivity was investigated using independent component analysis and an automated component selection approach taking into account the spatial and temporal properties of each component. Connectivity in extra-auditory regions such as brainstem, basal ganglia/NAc, cerebellum, parahippocampal, right prefrontal, parietal, and sensorimotor areas was found to be increased in tinnitus subjects. The right primary auditory cortex, left prefrontal, left fusiform gyrus, and bilateral occipital regions showed a decreased connectivity in tinnitus. These results show that there is a modification of cortical and subcortical functional connectivity in tinnitus encompassing attentional, mnemonic, and emotional networks. Our data corroborate the hypothesized implication of non-auditory regions in tinnitus physiopathology and suggest that various regions of the brain seem involved in the persistent awareness of the phenomenon as well as in the development of the associated distress leading to disabling chronic tinnitus. PMID:22574141
Milner, Rafał; Rusiniak, Mateusz; Lewandowska, Monika; Wolak, Tomasz; Ganc, Małgorzata; Piątkowska-Janko, Ewa; Bogorodzki, Piotr; Skarżyński, Henryk
2014-01-01
Background: The neural underpinnings of auditory information processing have often been investigated using the odd-ball paradigm, in which infrequent sounds (deviants) are presented within a regular train of frequent stimuli (standards). Traditionally, this paradigm has been applied using either high temporal resolution (EEG) or high spatial resolution (fMRI, PET). However, used separately, these techniques cannot provide information on both the location and time course of particular neural processes. The goal of this study was to investigate the neural correlates of auditory processes with a fine spatio-temporal resolution. A simultaneous auditory evoked potentials (AEP) and functional magnetic resonance imaging (fMRI) technique (AEP-fMRI), together with an odd-ball paradigm, was used. Material/Methods: Six healthy volunteers, aged 20–35 years, participated in an odd-ball simultaneous AEP-fMRI experiment. AEPs in response to acoustic stimuli were used to model bioelectric intracerebral generators, and electrophysiological results were integrated with fMRI data. Results: fMRI activation evoked by standard stimuli was found to occur mainly in the primary auditory cortex. Activity in these regions overlapped with intracerebral bioelectric sources (dipoles) of the N1 component. Dipoles of the N1/P2 complex in response to standard stimuli were also found in the auditory pathway between the thalamus and the auditory cortex. Deviant stimuli induced fMRI activity in the anterior cingulate gyrus, insula, and parietal lobes. Conclusions: The present study showed that neural processes evoked by standard stimuli occur predominantly in subcortical and cortical structures of the auditory pathway. Deviants activate areas non-specific for auditory information processing. PMID:24413019
de Borst, Aline W; de Gelder, Beatrice
2017-08-01
Previous studies have shown that the early visual cortex contains content-specific representations of stimuli during visual imagery, and that these representational patterns of imagery content have a perceptual basis. To date, there is little evidence for the presence of a similar organization in the auditory and tactile domains. Using fMRI-based multivariate pattern analyses we showed that primary somatosensory, auditory, motor, and visual cortices are discriminative for imagery of touch versus sound. In the somatosensory, motor and visual cortices the imagery modality discriminative patterns were similar to perception modality discriminative patterns, suggesting that top-down modulations in these regions rely on similar neural representations as bottom-up perceptual processes. Moreover, we found evidence for content-specific representations of the stimuli during auditory imagery in the primary somatosensory and primary motor cortices. Both the imagined emotions and the imagined identities of the auditory stimuli could be successfully classified in these regions. © The Author 2016. Published by Oxford University Press. All rights reserved.
Auditory processing deficits in individuals with primary open-angle glaucoma.
Rance, Gary; O'Hare, Fleur; O'Leary, Stephen; Starr, Arnold; Ly, Anna; Cheng, Belinda; Tomlin, Dani; Graydon, Kelley; Chisari, Donella; Trounce, Ian; Crowston, Jonathan
2012-01-01
The high energy demand of the auditory and visual pathways renders these sensory systems prone to diseases that impair mitochondrial function. Primary open-angle glaucoma, a neurodegenerative disease of the optic nerve, has recently been associated with a spectrum of mitochondrial abnormalities. This study sought to investigate auditory processing in individuals with open-angle glaucoma. DESIGN/STUDY SAMPLE: Twenty-seven subjects with open-angle glaucoma underwent electrophysiologic (auditory brainstem response), auditory temporal processing (amplitude modulation detection), and speech perception (monosyllabic words in quiet and background noise) assessment in each ear. A cohort of age-, gender-, and hearing-level-matched control subjects was also tested. While the majority of glaucoma subjects in this study demonstrated normal auditory function, a significant number (6/27 subjects, 22%) showed abnormal auditory brainstem responses and impaired auditory perception in one or both ears. The finding that a significant proportion of subjects with open-angle glaucoma presented with auditory dysfunction provides evidence of systemic neuronal susceptibility. Affected individuals may suffer significant communication difficulties in everyday listening situations.
Tani, Toshiki; Abe, Hiroshi; Hayami, Taku; Banno, Taku; Kitamura, Naohito; Mashiko, Hiromi
2018-01-01
Natural sound is composed of various frequencies. Although the core region of the primate auditory cortex has functionally defined sound frequency preference maps, how the map is organized in the auditory areas of the belt and parabelt regions is not well known. In this study, we investigated the functional organizations of the core, belt, and parabelt regions encompassed by the lateral sulcus and the superior temporal sulcus in the common marmoset (Callithrix jacchus). Using optical intrinsic signal imaging, we obtained evoked responses to band-pass noise stimuli in a range of sound frequencies (0.5–16 kHz) in anesthetized adult animals and visualized the preferred sound frequency map on the cortical surface. We characterized the functionally defined organization using histologically defined brain areas in the same animals. We found tonotopic representation of a set of sound frequencies (low to high) within the primary (A1), rostral (R), and rostrotemporal (RT) areas of the core region. In the belt region, the tonotopic representation existed only in the mediolateral (ML) area. This representation was symmetric with that found in A1 along the border between areas A1 and ML. The functional structure was not very clear in the anterolateral (AL) area. Low frequencies were mainly preferred in the rostrotemporal lateral (RTL) area, while high frequencies were preferred in the caudolateral (CL) area. There was a portion of the parabelt region that strongly responded to higher sound frequencies (>5.8 kHz) along the border between the rostral parabelt (RPB) and caudal parabelt (CPB) regions. PMID:29736410
Changes in resting-state connectivity in musicians with embouchure dystonia.
Haslinger, Bernhard; Noé, Jonas; Altenmüller, Eckart; Riedl, Valentin; Zimmer, Claus; Mantel, Tobias; Dresel, Christian
2017-03-01
Embouchure dystonia is a highly disabling task-specific dystonia in professional brass musicians leading to spasms of perioral muscles while playing the instrument. As they are asymptomatic at rest, resting-state functional magnetic resonance imaging in these patients can reveal changes in functional connectivity within and between brain networks independent from dystonic symptoms. We therefore compared embouchure dystonia patients to healthy musicians with resting-state functional magnetic resonance imaging in combination with independent component analyses. Patients showed increased functional connectivity of the bilateral sensorimotor mouth area and right secondary somatosensory cortex, but reduced functional connectivity of the bilateral sensorimotor hand representation, left inferior parietal cortex, and mesial premotor cortex within the lateral motor function network. Within the auditory function network, the functional connectivity of bilateral secondary auditory cortices, right posterior parietal cortex and left sensorimotor hand area was increased, the functional connectivity of right primary auditory cortex, right secondary somatosensory cortex, right sensorimotor mouth representation, bilateral thalamus, and anterior cingulate cortex was reduced. Negative functional connectivity between the cerebellar and lateral motor function network and positive functional connectivity between the cerebellar and primary visual network were reduced. Abnormal resting-state functional connectivity of sensorimotor representations of affected and unaffected body parts suggests a pathophysiological predisposition for abnormal sensorimotor and audiomotor integration in embouchure dystonia. Altered connectivity to the cerebellar network highlights the important role of the cerebellum in this disease. © 2016 International Parkinson and Movement Disorder Society.
Information flow in the auditory cortical network
Hackett, Troy A.
2011-01-01
Auditory processing in the cerebral cortex comprises an interconnected network of auditory and auditory-related areas distributed throughout the forebrain. The nexus of auditory activity is located in temporal cortex among several specialized areas, or fields, that receive dense inputs from the medial geniculate complex. These areas are collectively referred to as auditory cortex. Auditory activity is extended beyond auditory cortex via connections with auditory-related areas elsewhere in the cortex. Within this network, information flows between areas to and from countless targets, but in a manner that is characterized by orderly regional, areal and laminar patterns. These patterns reflect some of the structural constraints that passively govern the flow of information at all levels of the network. In addition, the exchange of information within these circuits is dynamically regulated by intrinsic neurochemical properties of projecting neurons and their targets. This article begins with an overview of the principal circuits and how each is related to information flow along major axes of the network. The discussion then turns to a description of neurochemical gradients along these axes, highlighting recent work on glutamate transporters in the thalamocortical projections to auditory cortex. The article concludes with a brief discussion of relevant neurophysiological findings as they relate to structural gradients in the network. PMID:20116421
Hierarchical Processing of Auditory Objects in Humans
Kumar, Sukhbinder; Stephan, Klaas E; Warren, Jason D; Friston, Karl J; Griffiths, Timothy D
2007-01-01
This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's Gyrus (HG), containing the primary auditory cortex, planum temporale (PT), and superior temporal sulcus (STS), and the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS and an increase of the HG to PT connection during spectral envelope analysis. The work supports a computational model of auditory object processing, based on the abstraction of spectro-temporal “templates” in the PT before further analysis of the abstracted form in anterior temporal lobe areas. PMID:17542641
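Bayesian model selection over a model family, as used in the abstract above, amounts to comparing the models' (approximate) log evidences; under a flat prior over models, posterior model probabilities are a softmax of the log evidences. The sketch below shows only that generic comparison step; the numbers and function name are hypothetical, not values from the study.

```python
import numpy as np

def model_posteriors(log_evidences):
    """Posterior model probabilities from log model evidences,
    assuming a flat prior over models (softmax of log evidence)."""
    le = np.asarray(log_evidences, dtype=float)
    le -= le.max()          # subtract max for numerical stability
    p = np.exp(le)
    return p / p.sum()

# Hypothetical log evidences for, e.g., a serial vs. a parallel model
posts = model_posteriors([-1200.0, -1205.0])
print(posts)                # first (serial) model strongly favoured
```

Note how a log-evidence difference of only 5 already concentrates nearly all posterior mass on one model, which is why such comparisons are usually reported as "strong support" thresholds on the log Bayes factor.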
Neurophysiological Effects of Meditation Based on Evoked and Event Related Potential Recordings
Singh, Nilkamal; Telles, Shirley
2015-01-01
Evoked potentials (EPs) are a relatively noninvasive method to assess the integrity of sensory pathways. As the neural generators for most of the components are relatively well worked out, EPs have been used to understand the changes occurring during meditation. Event-related potentials (ERPs) yield useful information about the response to tasks, usually assessing attention. A brief review of the literature yielded eleven studies on EPs and seventeen on ERPs from 1978 to 2014. The EP studies covered short, mid, and long latency EPs, using both auditory and visual modalities. ERP studies reported the effects of meditation on tasks such as the auditory oddball paradigm, the attentional blink task, mismatch negativity, and affective picture viewing among others. Both EPs and ERPs were recorded in several meditations detailed in the review. Maximum changes occurred in mid latency (auditory) EPs suggesting that maximum changes occur in the corresponding neural generators in the thalamus, thalamic radiations, and primary auditory cortical areas. ERP studies showed meditation can increase attention and enhance efficiency of brain resource allocation with greater emotional control. PMID:26137479
Wegrzyn, Martin; Herbert, Cornelia; Ethofer, Thomas; Flaisch, Tobias; Kissler, Johanna
2017-11-01
Visually presented emotional words are processed preferentially and effects of emotional content are similar to those of explicit attention deployment in that both amplify visual processing. However, auditory processing of emotional words is less well characterized and interactions between emotional content and task-induced attention have not been fully understood. Here, we investigate auditory processing of emotional words, focussing on how auditory attention to positive and negative words impacts their cerebral processing. A Functional magnetic resonance imaging (fMRI) study manipulating word valence and attention allocation was performed. Participants heard negative, positive and neutral words to which they either listened passively or attended by counting negative or positive words, respectively. Regardless of valence, active processing compared to passive listening increased activity in primary auditory cortex, left intraparietal sulcus, and right superior frontal gyrus (SFG). The attended valence elicited stronger activity in left inferior frontal gyrus (IFG) and left SFG, in line with these regions' role in semantic retrieval and evaluative processing. No evidence for valence-specific attentional modulation in auditory regions or distinct valence-specific regional activations (i.e., negative > positive or positive > negative) was obtained. Thus, allocation of auditory attention to positive and negative words can substantially increase their processing in higher-order language and evaluative brain areas without modulating early stages of auditory processing. Inferior and superior frontal brain structures mediate interactions between emotional content, attention, and working memory when prosodically neutral speech is processed. Copyright © 2017 Elsevier Ltd. All rights reserved.
76 FR 61655 - Definition of Part 15 Auditory Assistance Device
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-05
... allocated on a primary basis for radio astronomy, and the 74.8-75.2 MHz band is allocated on a primary basis... radiodetermination, radio astronomy, and TV broadcast services are in bands adjacent to the part 15 auditory...
Diminished Auditory Responses during NREM Sleep Correlate with the Hierarchy of Language Processing
Wilf, Meytal; Ramot, Michal; Furman-Haran, Edna; Arzi, Anat; Levkovitz, Yechiel; Malach, Rafael
2016-01-01
Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high order processing stations. PMID:27310812
Lamas, Verónica; Estévez, Sheila; Pernía, Marianni; Plaza, Ignacio; Merchán, Miguel A
2017-10-11
The rat auditory cortex (AC) is becoming popular among auditory neuroscience investigators who are interested in experience-dependent plasticity, auditory perceptual processes, and cortical control of sound processing in the subcortical auditory nuclei. To address new challenges, a procedure to accurately locate and surgically expose the auditory cortex would expedite this research effort. Stereotactic neurosurgery is routinely used in pre-clinical research in animal models to engraft a needle or electrode at a pre-defined location within the auditory cortex. In the following protocol, we use stereotactic methods in a novel way. We identify four coordinate points over the surface of the temporal bone of the rat to define a window that, once opened, accurately exposes both the primary (A1) and secondary (Dorsal and Ventral) cortices of the AC. Using this method, we then perform a surgical ablation of the AC. After such a manipulation is performed, it is necessary to assess the localization, size, and extension of the lesions made in the cortex. Thus, we also describe a method to easily locate the AC ablation postmortem using a coordinate map constructed by transferring the cytoarchitectural limits of the AC to the surface of the brain. The combination of the stereotactically-guided location and ablation of the AC with the localization of the injured area in a coordinate map postmortem facilitates the validation of information obtained from the animal, and leads to a better analysis and comprehension of the data.
Theoretical Limitations on Functional Imaging Resolution in Auditory Cortex
Chen, Thomas L.; Watkins, Paul V.; Barbour, Dennis L.
2010-01-01
Functional imaging can reveal detailed organizational structure in cerebral cortical areas, but neuronal response features and local neural interconnectivity can influence the resulting images, possibly limiting the inferences that can be drawn about neural function. Discerning the fundamental principles of organizational structure in the auditory cortex of multiple species has been somewhat challenging historically both with functional imaging and with electrophysiology. A possible limitation affecting any methodology using pooled neuronal measures may be the relative distribution of response selectivity throughout the population of auditory cortex neurons. One neuronal response type inherited from the cochlea, for example, exhibits a receptive field that increases in size (i.e., decreases in selectivity) at higher stimulus intensities. Even though these neurons appear to represent a minority of auditory cortex neurons, they are likely to contribute disproportionately to the activity detected in functional images, especially if intense sounds are used for stimulation. To evaluate the potential influence of neuronal subpopulations upon functional images of primary auditory cortex, a model array representing cortical neurons was probed with virtual imaging experiments under various assumptions about the local circuit organization. As expected, different neuronal subpopulations were activated preferentially under different stimulus conditions. In fact, stimulus protocols that can preferentially excite selective neurons, resulting in a relatively sparse activation map, have the potential to improve the effective resolution of functional auditory cortical images. These experimental results also make predictions about auditory cortex organization that can be tested with refined functional imaging experiments. PMID:20079343
Potential Mechanisms Underlying Intercortical Signal Regulation via Cholinergic Neuromodulators
Whittington, Miles A.; Kopell, Nancy J.
2015-01-01
The dynamical behavior of the cortex is extremely complex, with different areas and even different layers of a cortical column displaying different temporal patterns. A major open question is how the signals from different layers and different brain regions are coordinated in a flexible manner to support function. Here, we considered interactions between primary auditory cortex and adjacent association cortex. Using a biophysically based model, we show how top-down signals in the beta and gamma regimes can interact with a bottom-up gamma rhythm to provide regulation of signals between the cortical areas and among layers. The flow of signals depends on cholinergic modulation: with only glutamatergic drive, we show that top-down gamma rhythms may block sensory signals. In the presence of cholinergic drive, top-down beta rhythms can lift this blockade and allow signals to flow reciprocally between primary sensory and parietal cortex. SIGNIFICANCE STATEMENT Flexible coordination of multiple cortical areas is critical for complex cognitive functions, but how this is accomplished is not understood. Using computational models, we studied the interactions between primary auditory cortex (A1) and association cortex (Par2). Our model is capable of replicating interaction patterns observed in vitro and the simulations predict that the coordination between top-down gamma and beta rhythms is central to the gating process regulating bottom-up sensory signaling projected from A1 to Par2 and that cholinergic modulation allows this coordination to occur. PMID:26558772
Karak, Somdatta; Jacobs, Julie S; Kittelmann, Maike; Spalthoff, Christian; Katana, Radoslaw; Sivan-Loukianova, Elena; Schon, Michael A; Kernan, Maurice J; Eberl, Daniel F; Göpfert, Martin C
2015-11-26
Much like vertebrate hair cells, the chordotonal sensory neurons that mediate hearing in Drosophila are motile and amplify the mechanical input of the ear. Because the neurons bear mechanosensory primary cilia whose microtubule axonemes display dynein arms, we hypothesized that their motility is powered by dyneins. Here, we describe two axonemal dynein proteins that are required for Drosophila auditory neuron function, localize to their primary cilia, and differently contribute to mechanical amplification in hearing. Promoter fusions revealed that the two axonemal dynein genes Dmdnah3 (=CG17150) and Dmdnai2 (=CG6053) are expressed in chordotonal neurons, including the auditory ones in the fly's ear. Null alleles of both dyneins equally abolished electrical auditory neuron responses, yet whereas mutations in Dmdnah3 facilitated mechanical amplification, amplification was abolished by mutations in Dmdnai2. Epistasis analysis revealed that Dmdnah3 acts downstream of Nan-Iav channels in controlling the amplificatory gain. Dmdnai2, in addition to being required for amplification, was essential for outer dynein arms in auditory neuron cilia. This establishes diverse roles of axonemal dyneins in Drosophila auditory neuron function and links auditory neuron motility to primary cilia and axonemal dyneins. Mutant defects in sperm competition suggest that both dyneins also function in sperm motility.
Temporal characteristics of audiovisual information processing.
Fuhrmann Alpert, Galit; Hein, Grit; Tsai, Nancy; Naumer, Marcus J; Knight, Robert T
2008-05-14
In complex natural environments, auditory and visual information often have to be processed simultaneously. Previous functional magnetic resonance imaging (fMRI) studies focused on the spatial localization of brain areas involved in audiovisual (AV) information processing, but the temporal characteristics of AV information flow in these regions remained unclear. In this study, we used fMRI and a novel information-theoretic approach to study the flow of AV sensory information. Subjects passively perceived sounds and images of objects presented either alone or simultaneously. Applying the measure of mutual information, we computed for each voxel the latency in which the blood oxygenation level-dependent signal had the highest information content about the preceding stimulus. The results indicate that, after AV stimulation, the earliest informative activity occurs in right Heschl's gyrus, left primary visual cortex, and the posterior portion of the superior temporal gyrus, which is known as a region involved in object-related AV integration. Informative activity in the anterior portion of superior temporal gyrus, middle temporal gyrus, right occipital cortex, and inferior frontal cortex was found at a later latency. Moreover, AV presentation resulted in shorter latencies in multiple cortical areas compared with isolated auditory or visual presentation. The results provide evidence for bottom-up processing from primary sensory areas into higher association areas during AV integration in humans and suggest that AV presentation shortens processing time in early sensory cortices.
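The mutual-information latency analysis described above can be sketched roughly as follows: for each voxel, bin the BOLD signal sampled at a series of lags after stimulus onset, compute the mutual information between the binned samples and the stimulus category, and keep the lag with the highest information content. Everything below (quartile binning, the lag grid, all variable and function names) is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def discrete_mi(x, y):
    """Mutual information (in nats) between two discrete sequences."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))   # joint probability
            if pxy > 0:
                px, py = np.mean(x == xv), np.mean(y == yv)
                mi += pxy * np.log(pxy / (px * py))
    return mi

def best_latency(stim_labels, voxel_ts, onsets, lags):
    """Return the lag (in samples after onset) at which this voxel's
    binned BOLD response is most informative about the stimulus category."""
    scores = []
    for lag in lags:
        samples = voxel_ts[onsets + lag]
        # discretize the continuous signal into quartile bins
        binned = np.digitize(samples, np.quantile(samples, [0.25, 0.5, 0.75]))
        scores.append(discrete_mi(stim_labels, binned))
    return lags[int(np.argmax(scores))]
```

Mapping this latency voxel-by-voxel would produce the kind of timing map the study reports, with earlier informative latencies in primary sensory areas and later ones in association cortex.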
Joachimsthaler, Bettina; Uhlmann, Michaela; Miller, Frank; Ehret, Günter; Kurt, Simone
2014-01-01
Because of its great genetic potential, the mouse (Mus musculus) has become a popular model species for studies on hearing and sound processing along the auditory pathways. Here, we present the first comparative study on the representation of neuronal response parameters to tones in primary and higher-order auditory cortical fields of awake mice. We quantified 12 neuronal properties of tone processing in order to estimate similarities and differences of function between the fields, and to discuss how far auditory cortex (AC) function in the mouse is comparable to that in awake monkeys and cats. Extracellular recordings were made from 1400 small clusters of neurons from cortical layers III/IV in the primary fields AI (primary auditory field) and AAF (anterior auditory field), and the higher-order fields AII (second auditory field) and DP (dorsoposterior field). Field specificity was shown with regard to spontaneous activity, correlation between spontaneous and evoked activity, tone response latency, sharpness of frequency tuning, temporal response patterns (occurrence of phasic responses, phasic-tonic responses, tonic responses, and off-responses), and degree of variation between the characteristic frequency (CF) and the best frequency (BF) (CF–BF relationship). Field similarities were noted as significant correlations between CFs and BFs, V-shaped frequency tuning curves, similar minimum response thresholds and non-monotonic rate-level functions in approximately two-thirds of the neurons. Comparative and quantitative analyses showed that the measured response characteristics were, to various degrees, susceptible to influences of anesthetics. Therefore, studies of neuronal responses in the awake AC are important in order to establish adequate relationships between neuronal data and auditory perception and acoustic response behavior. PMID:24506843
Guinchard, A-C; Ghazaleh, Naghmeh; Saenz, M; Fornari, E; Prior, J O; Maeder, P; Adib, S; Maire, R
2016-11-01
We studied possible brain changes with functional MRI (fMRI) and fluorodeoxyglucose positron emission tomography (FDG-PET) in a patient with a rare, high-intensity "objective tinnitus" (high-level SOAEs) in the left ear of 10 years' duration, with no associated hearing loss. This is the first case of objective cochlear tinnitus to be investigated with functional neuroimaging. The objective cochlear tinnitus was measured by Spontaneous Otoacoustic Emissions (SOAE) equipment (frequency 9689 Hz, intensity 57 dB SPL) and was clearly audible to anyone standing near the patient. Functional modifications in primary auditory areas and other brain regions were evaluated using 3T and 7T fMRI and FDG-PET. In the fMRI evaluations, a saturation of the auditory cortex at the tinnitus frequency was observed, but the global cortical tonotopic organization remained intact when compared to the results of fMRI of healthy subjects. The FDG-PET showed no evidence of an increase or decrease of activity in the auditory cortices or in the limbic system as compared to normal subjects. In this patient with high-intensity objective cochlear tinnitus, fMRI and FDG-PET showed no significant brain reorganization in auditory areas and/or in the limbic system, as reported in the literature in patients with chronic subjective tinnitus. Copyright © 2016 Elsevier B.V. All rights reserved.
Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders; Fedder, Jens; Laurberg, Peter; Østergaard, John R.; Hertz, Jens Michael; Pedersen, Anders Degn; Gravholt, Claus Højbjerg
2016-01-01
Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words “GREEN” or “RED” were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying “GREEN” or “RED” had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system. PMID:26958463
Functional mapping of the primate auditory system.
Poremba, Amy; Saunders, Richard C; Crane, Alison M; Cook, Michelle; Sokoloff, Louis; Mishkin, Mortimer
2003-01-24
Cerebral auditory areas were delineated in the awake, passively listening, rhesus monkey by comparing the rates of glucose utilization in an intact hemisphere and in an acoustically isolated contralateral hemisphere of the same animal. The auditory system defined in this way occupied large portions of cerebral tissue, an extent probably second only to that of the visual system. Cortically, the activated areas included the entire superior temporal gyrus and large portions of the parietal, prefrontal, and limbic lobes. Several auditory areas overlapped with previously identified visual areas, suggesting that the auditory system, like the visual system, contains separate pathways for processing stimulus quality, location, and motion.
Scanning silence: mental imagery of complex sounds.
Bunzeck, Nico; Wuestenberg, Torsten; Lutz, Kai; Heinze, Hans-Jochen; Jancke, Lutz
2005-07-15
In this functional magnetic resonance imaging (fMRI) study, we investigated the neural basis of mental auditory imagery of familiar complex sounds that did not contain language or music. In the first condition (perception), the subjects watched familiar scenes and listened to the corresponding sounds that were presented simultaneously. In the second condition (imagery), the same scenes were presented silently and the subjects had to mentally imagine the appropriate sounds. During the third condition (control), the participants watched a scrambled version of the scenes without sound. To overcome the disadvantages of the stray acoustic scanner noise in auditory fMRI experiments, we applied a sparse temporal sampling technique with five functional clusters that were acquired at the end of each movie presentation. Compared to the control condition, we found bilateral activations in the primary and secondary auditory cortices (including Heschl's gyrus and planum temporale) during perception of complex sounds. In contrast, the imagery condition elicited bilateral hemodynamic responses only in the secondary auditory cortex (including the planum temporale). No significant activity was observed in the primary auditory cortex. The results show that imagery and perception of complex sounds that do not contain language or music rely on overlapping neural correlates of the secondary but not primary auditory cortex.
An eye movement analysis of the effect of interruption modality on primary task resumption.
Ratwani, Raj; Trafton, J Gregory
2010-06-01
We examined the effect of interruption modality (visual or auditory) on primary task (visual) resumption to determine which modality was the least disruptive. Theories examining interruption modality have focused on specific periods of the interruption timeline. Preemption theory has focused on the switch from the primary task to the interrupting task. Multiple resource theory has focused on interrupting tasks that are to be performed concurrently with the primary task. Our focus was on examining how interruption modality influences task resumption. We leverage the memory-for-goals theory, which suggests that maintaining an associative link between environmental cues and the suspended primary task goal is important for resumption. Three interruption modality conditions were examined: auditory interruption with the primary task visible, auditory interruption with a blank screen occluding the primary task, and a visual interruption occluding the primary task. Reaction time and eye movement data were collected. The auditory condition with the primary task visible was the least disruptive. Eye movement data suggest that participants in this condition were actively maintaining an associative link between relevant environmental cues on the primary task interface and the suspended primary task goal during the interruption. These data suggest that maintaining cue association is the important factor for reducing the disruptiveness of interruptions, not interruption modality. Interruption-prone computing environments should be designed to allow for the user to have access to relevant primary task cues during an interruption to minimize disruptiveness.
Neurons and Objects: The Case of Auditory Cortex
Nelken, Israel; Bar-Yosef, Omer
2008-01-01
Sounds are encoded into electrical activity in the inner ear, where they are represented (roughly) as patterns of energy in narrow frequency bands. However, sounds are perceived in terms of their high-order properties. It is generally believed that this transformation is performed along the auditory hierarchy, with low-level physical cues computed at early stages of the auditory system and high-level abstract qualities at high-order cortical areas. The functional position of primary auditory cortex (A1) in this scheme is unclear – is it ‘early’, encoding physical cues, or is it ‘late’, already encoding abstract qualities? Here we argue that neurons in cat A1 show sensitivity to high-level features of sounds. In particular, these neurons may already show sensitivity to ‘auditory objects’. The evidence for this claim comes from studies in which individual sounds are presented singly and in mixtures. Many neurons in cat A1 respond to mixtures in the same way they respond to one of the individual components of the mixture, and in many cases neurons may respond to a low-level component of the mixture rather than to the acoustically dominant one, even though the same neurons respond to the acoustically-dominant component when presented alone. PMID:18982113
Click train encoding in primary and non-primary auditory cortex of anesthetized macaque monkeys.
Oshurkova, E; Scheich, H; Brosch, M
2008-06-02
We studied encoding of temporally modulated sounds in 28 multiunits in the primary auditory cortical field (AI) and in 35 multiunits in the secondary auditory cortical field (caudomedial auditory cortical field, CM) by presenting periodic click trains with click rates between 1 and 300 Hz lasting for 2-4 s. We found that all multiunits increased or decreased their firing rate during the steady state portion of the click train and that all except two multiunits synchronized their firing to individual clicks in the train. Rate increases and synchronized responses were most prevalent and strongest at low click rates, as expressed by best modulation frequency, limiting frequency, percentage of responsive multiunits, and average rate response and vector strength. Synchronized responses occurred up to 100 Hz; rate response occurred up to 300 Hz. Both auditory fields responded similarly to low click rates but differed at click rates above approximately 12 Hz at which more multiunits in AI than in CM exhibited synchronized responses and increased rate responses and more multiunits in CM exhibited decreased rate responses. These findings suggest that the auditory cortex of macaque monkeys encodes temporally modulated sounds similar to the auditory cortex of other mammals. Together with other observations presented in this and other reports, our findings also suggest that AI and CM have largely overlapping sensitivities for acoustic stimulus features but encode these features differently.
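Synchronization of firing to individual clicks, as summarized above by vector strength, can be computed as the length of the mean resultant vector of spike phases relative to the click cycle. A minimal sketch (function name and implementation are ours, not from the study):

```python
import numpy as np

def vector_strength(spike_times, click_rate_hz):
    """Vector strength of spike synchronization to a periodic click train.

    Each spike time (in seconds) is mapped to a phase of the click cycle;
    the vector strength is the length of the mean resultant vector:
    1.0 = perfect phase locking, values near 0 = no locking.
    """
    phases = 2.0 * np.pi * click_rate_hz * np.asarray(spike_times)
    return float(np.abs(np.exp(1j * phases).mean()))
```

Spikes falling at a fixed phase of every click period yield a value near 1, while spike times unrelated to the click rate yield a value near 0.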
Sadovsky, Alexander J.
2013-01-01
Mapping the flow of activity through neocortical microcircuits provides key insights into the underlying circuit architecture. Using a comparative analysis we determined the extent to which the dynamics of microcircuits in mouse primary somatosensory barrel field (S1BF) and auditory (A1) neocortex generalize. We imaged the simultaneous dynamics of up to 1126 neurons spanning multiple columns and layers using high-speed multiphoton imaging. The temporal progression and reliability of reactivation of circuit events in both regions suggested common underlying cortical design features. We used circuit activity flow to generate functional connectivity maps, or graphs, to test the microcircuit hypothesis within a functional framework. S1BF and A1 present a useful test of the postulate as both regions map sensory input anatomically, but each area appears organized according to different design principles. We projected the functional topologies into anatomical space and found benchmarks of organization that had been previously described using physiology and anatomical methods, consistent with a close mapping between anatomy and functional dynamics. By comparing graphs representing activity flow we found that each region is similarly organized as highlighted by hallmarks of small world, scale free, and hierarchical modular topologies. Models of prototypical functional circuits from each area of cortex were sufficient to recapitulate experimentally observed circuit activity. Convergence to common behavior by these models was accomplished using preferential attachment to scale from an auditory up to a somatosensory circuit. These functional data imply that the microcircuit hypothesis be framed as scalable principles of neocortical circuit design. PMID:23986241
ERIC Educational Resources Information Center
Ikeda, Kohei; Higashi, Toshio; Sugawara, Kenichi; Tomori, Kounosuke; Kinoshita, Hiroshi; Kasai, Tatsuya
2012-01-01
The effect of visual and auditory enhancements of finger movement on corticospinal excitability during motor imagery (MI) was investigated using the transcranial magnetic stimulation technique. Motor-evoked potentials were elicited from the abductor digiti minimi muscle during MI with auditory, visual, and combined auditory and visual information, and no…
Okuda, Yuji; Shikata, Hiroshi; Song, Wen-Jie
2011-09-01
As a step to develop an auditory prosthesis based on cortical stimulation, we tested whether a single train of pulses applied to the primary auditory cortex (AI) could elicit classically conditioned behavior in guinea pigs. Animals were trained using a tone as the conditioned stimulus and an electrical shock to the right eyelid as the unconditioned stimulus. After conditioning, a train of 11 pulses applied to the left AI induced the conditioned eye-blink response. Cortical stimulation induced no response after extinction. Our results support the feasibility of an auditory prosthesis based on electrical stimulation of the cortex. Copyright © 2011 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
High-Field Functional Imaging of Pitch Processing in Auditory Cortex of the Cat
Butler, Blake E.; Hall, Amee J.; Lomber, Stephen G.
2015-01-01
The perception of pitch is a widely studied and hotly debated topic in human hearing. Many of these studies combine functional imaging techniques with stimuli designed to disambiguate the percept of pitch from frequency information present in the stimulus. While useful in identifying potential “pitch centres” in cortex, the existence of truly pitch-responsive neurons requires single neuron-level measures that can only be undertaken in animal models. While a number of animals have been shown to be sensitive to pitch, few studies have addressed the location of cortical generators of pitch percepts in non-human models. The current study uses high-field functional magnetic resonance imaging (fMRI) of the feline brain in an attempt to identify regions of cortex that show increased activity in response to pitch-evoking stimuli. Cats were presented with iterated rippled noise (IRN) stimuli, narrowband noise stimuli with the same spectral profile but no perceivable pitch, and a processed IRN stimulus in which phase components were randomized to preserve slowly changing modulations in the absence of pitch (IRNo). Pitch-related activity was not observed to occur in either primary auditory cortex (A1) or the anterior auditory field (AAF) which comprise the core auditory cortex in cats. Rather, cortical areas surrounding the posterior ectosylvian sulcus responded preferentially to the IRN stimulus when compared to narrowband noise, with group analyses revealing bilateral activity centred in the posterior auditory field (PAF). This study demonstrates that fMRI is useful for identifying pitch-related processing in cat cortex, and identifies cortical areas that warrant further investigation. Moreover, we have taken the first steps in identifying a useful animal model for the study of pitch perception. PMID:26225563
Intracerebral evidence of rhythm transform in the human auditory cortex.
Nozaradan, Sylvie; Mouraux, André; Jonas, Jacques; Colnat-Coulbois, Sophie; Rossion, Bruno; Maillard, Louis
2017-07-01
Musical entrainment is shared by all human cultures and the perception of a periodic beat is a cornerstone of this entrainment behavior. Here, we investigated whether beat perception might have its roots in the earliest stages of auditory cortical processing. Local field potentials were recorded from 8 patients implanted with depth-electrodes in Heschl's gyrus and the planum temporale (55 recording sites in total), usually considered as human primary and secondary auditory cortices. Using a frequency-tagging approach, we show that both low-frequency (<30 Hz) and high-frequency (>30 Hz) neural activities in these structures faithfully track auditory rhythms through frequency-locking to the rhythm envelope. A selective gain in amplitude of the response frequency-locked to the beat frequency was observed for the low-frequency activities but not for the high-frequency activities, and was sharper in the planum temporale, especially for the more challenging syncopated rhythm. Hence, this gain process is not systematic in all activities produced in these areas and depends on the complexity of the rhythmic input. Moreover, this gain was disrupted when the rhythm was presented at fast speed, revealing low-pass response properties which could account for the propensity to perceive a beat only within the musical tempo range. Together, these observations show that, even though part of these neural transforms of rhythms could already take place in subcortical auditory processes, the earliest auditory cortical processes shape the neural representation of rhythmic inputs in favor of the emergence of a periodic beat.
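The frequency-tagging approach described above quantifies the response frequency-locked to the rhythm by reading out the amplitude spectrum of the neural signal at the tagged frequency. A minimal sketch, assuming the epoch length is chosen so that the target frequency falls on an FFT bin (function name and scaling convention are ours):

```python
import numpy as np

def tagged_amplitude(signal, fs, target_hz):
    """Amplitude of the spectral component at target_hz.

    Uses an FFT of the full epoch, so target_hz should be a multiple of
    the spectral resolution fs / len(signal) to avoid leakage.
    """
    # Scale |rfft| by 2/N so a pure sinusoid of amplitude A reads out as A.
    spectrum = np.abs(np.fft.rfft(signal)) * 2.0 / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return float(spectrum[np.argmin(np.abs(freqs - target_hz))])
```

Comparing this read-out at the beat frequency against neighboring frequencies (or between conditions) is one common way to express the selective gain the study reports.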
Differential coding of conspecific vocalizations in the ventral auditory cortical stream.
Fukushima, Makoto; Saunders, Richard C; Leopold, David A; Mishkin, Mortimer; Averbeck, Bruno B
2014-03-26
The mammalian auditory cortex integrates spectral and temporal acoustic features to support the perception of complex sounds, including conspecific vocalizations. Here we investigate coding of vocal stimuli in different subfields in macaque auditory cortex. We simultaneously measured auditory evoked potentials over a large swath of primary and higher order auditory cortex along the supratemporal plane in three animals chronically using high-density microelectrocorticographic arrays. To evaluate the capacity of neural activity to discriminate individual stimuli in these high-dimensional datasets, we applied a regularized multivariate classifier to evoked potentials to conspecific vocalizations. We found a gradual decrease in the level of overall classification performance along the caudal to rostral axis. Furthermore, the performance in the caudal sectors was similar across individual stimuli, whereas the performance in the rostral sectors significantly differed for different stimuli. Moreover, the information about vocalizations in the caudal sectors was similar to the information about synthetic stimuli that contained only the spectral or temporal features of the original vocalizations. In the rostral sectors, however, the classification for vocalizations was significantly better than that for the synthetic stimuli, suggesting that conjoined spectral and temporal features were necessary to explain differential coding of vocalizations in the rostral areas. We also found that this coding in the rostral sector was carried primarily in the theta frequency band of the response. These findings illustrate a progression in neural coding of conspecific vocalizations along the ventral auditory pathway.
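The abstract does not specify the regularized multivariate classifier. One common choice for high-dimensional evoked-potential features is a shrinkage-regularized nearest-centroid (LDA-style) decoder, sketched below; the shrinkage scheme and all names are assumptions, not the authors' implementation.

```python
import numpy as np

def fit_regularized_centroids(X, y, shrinkage=0.5):
    """Fit a shrinkage-regularized nearest-centroid classifier.

    X : (n_trials, n_features) evoked-potential features
    y : (n_trials,) integer stimulus labels
    The pooled covariance is shrunk toward a scaled identity matrix,
    keeping it invertible when n_features approaches n_trials.
    """
    classes = np.unique(y)
    means = np.stack([X[y == c].mean(axis=0) for c in classes])
    resid = X - means[np.searchsorted(classes, y)]
    cov = resid.T @ resid / len(X)
    p = X.shape[1]
    cov = (1 - shrinkage) * cov + shrinkage * (np.trace(cov) / p) * np.eye(p)
    return classes, means, np.linalg.inv(cov)

def predict(X, classes, means, prec):
    """Assign each trial to the class with the smallest Mahalanobis distance."""
    d = np.stack([np.einsum('ij,jk,ik->i', X - m, prec, X - m) for m in means])
    return classes[np.argmin(d, axis=0)]
```

Classification performance per stimulus, as compared across caudal and rostral sectors in the study, would then be the fraction of held-out trials this decoder labels correctly.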
Beckers, Gabriël J L; Gahr, Manfred
2012-08-01
Auditory systems bias responses to sounds that are unexpected on the basis of recent stimulus history, a phenomenon that has been widely studied using sequences of unmodulated tones (mismatch negativity; stimulus-specific adaptation). Such a paradigm, however, does not directly reflect problems that neural systems normally solve for adaptive behavior. We recorded multiunit responses in the caudomedial auditory forebrain of anesthetized zebra finches (Taeniopygia guttata) at 32 sites simultaneously, to contact calls that recur probabilistically at a rate that is used in communication. Neurons in secondary, but not primary, auditory areas respond preferentially to calls when they are unexpected (deviant) compared with the same calls when they are expected (standard). This response bias is predominantly due to sites more often not responding to standard events than to deviant events. When two call stimuli alternate between standard and deviant roles, most sites exhibit a response bias to deviant events of both stimuli. This suggests that biases are not based on a use-dependent decrease in response strength but involve a more complex mechanism that is sensitive to auditory deviance per se. Furthermore, between many secondary sites, responses are tightly synchronized, a phenomenon that is driven by internal neuronal interactions rather than by the timing of stimulus acoustic features. We hypothesize that this deviance-sensitive, internally synchronized network of neurons is involved in the involuntary capturing of attention by unexpected and behaviorally potentially relevant events in natural auditory scenes.
Schall, Sonja; von Kriegstein, Katharina
2014-01-01
It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers' voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker's face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.
Rapid extraction of auditory feature contingencies.
Bendixen, Alexandra; Prinz, Wolfgang; Horváth, János; Trujillo-Barreto, Nelson J; Schröger, Erich
2008-07-01
Contingent relations between sensory events render the environment predictable and thus facilitate adaptive behavior. The human capacity to detect such relations has been comprehensively demonstrated in paradigms in which contingency rules were task-relevant or in which they applied to motor behavior. The extent to which contingencies can also be extracted from events that are unrelated to the current goals of the organism has remained largely unclear. The present study addressed the emergence of contingency-related effects for behaviorally irrelevant auditory stimuli and the cortical areas involved in the processing of such contingency rules. Contingent relations between different features of temporally separate events were embedded in a new dynamic protocol. Participants were presented with the auditory stimulus sequences while their attention was captured by a video. The mismatch negativity (MMN) component of the event-related brain potential (ERP) was employed as an electrophysiological correlate of contingency detection. MMN generators were localized by means of scalp current density (SCD) and primary current density (PCD) analyses with variable resolution electromagnetic tomography (VARETA). Results show that task-irrelevant contingencies can be extracted from about fifteen to twenty successive events conforming to the contingent relation. Topographic and tomographic analyses reveal the involvement of the auditory cortex in the processing of contingency violations. The present data provide evidence for the rapid encoding of complex extrapolative relations in sensory areas. This capacity is of fundamental importance for the organism in its attempt to model the sensory environment outside the focus of attention.
Interdependent encoding of pitch, timbre and spatial location in auditory cortex
Bizley, Jennifer K.; Walker, Kerry M. M.; Silverman, Bernard W.; King, Andrew J.; Schnupp, Jan W. H.
2009-01-01
Because we can perceive the pitch, timbre and spatial location of a sound source independently, it seems natural to suppose that cortical processing of sounds might separate out spatial from non-spatial attributes. Indeed, recent studies support the existence of anatomically segregated ‘what’ and ‘where’ cortical processing streams. However, few attempts have been made to measure the responses of individual neurons in different cortical fields to sounds that vary simultaneously across spatial and non-spatial dimensions. We recorded responses to artificial vowels presented in virtual acoustic space to investigate the representations of pitch, timbre and sound source azimuth in both core and belt areas of ferret auditory cortex. A variance decomposition technique was used to quantify the way in which altering each parameter changed neural responses. Most units were sensitive to two or more of these stimulus attributes. Whilst indicating that neural encoding of pitch, location and timbre cues is distributed across auditory cortex, significant differences in average neuronal sensitivity were observed across cortical areas and depths, which could form the basis for the segregation of spatial and non-spatial cues at higher cortical levels. Some units exhibited significant non-linear interactions between particular combinations of pitch, timbre and azimuth. These interactions were most pronounced for pitch and timbre and were less commonly observed between spatial and non-spatial attributes. Such non-linearities were most prevalent in primary auditory cortex, although they tended to be small compared with stimulus main effects. PMID:19228960
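The variance decomposition technique used above can be illustrated with a standard two-way sum-of-squares partition: the fraction of a unit's response variance attributable to each stimulus dimension and to their interaction. This is a minimal two-factor sketch (the study varied three parameters; names are ours):

```python
import numpy as np

def variance_decomposition(responses):
    """Partition response variance across two stimulus dimensions.

    responses : (n_a, n_b) mean responses for each combination of two
    stimulus parameters (e.g. pitch x azimuth). Returns the fraction of
    total variance explained by each main effect and their interaction.
    """
    grand = responses.mean()
    a_eff = responses.mean(axis=1) - grand          # main effect of factor A
    b_eff = responses.mean(axis=0) - grand          # main effect of factor B
    inter = responses - grand - a_eff[:, None] - b_eff[None, :]
    total = ((responses - grand) ** 2).sum()
    n_a, n_b = responses.shape
    return {
        'A': n_b * (a_eff ** 2).sum() / total,
        'B': n_a * (b_eff ** 2).sum() / total,
        'AxB': (inter ** 2).sum() / total,
    }
```

The three fractions sum to one, and a purely additive response surface yields a near-zero interaction term, which is how non-linear interactions between stimulus attributes can be detected.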
Auditory interfaces: The human perceiver
NASA Technical Reports Server (NTRS)
Colburn, H. Steven
1991-01-01
A brief introduction to the basic auditory abilities of the human perceiver with particular attention toward issues that may be important for the design of auditory interfaces is presented. The importance of appropriate auditory inputs to observers with normal hearing is probably related to the role of hearing as an omnidirectional, early warning system and to its role as the primary vehicle for communication of strong personal feelings.
Topographic EEG activations during timbre and pitch discrimination tasks using musical sounds.
Auzou, P; Eustache, F; Etevenon, P; Platel, H; Rioux, P; Lambert, J; Lechevalier, B; Zarifian, E; Baron, J C
1995-01-01
Successive auditory stimulation sequences were presented binaurally to 18 young normal volunteers. Five conditions were investigated: two reference tasks, assumed to involve passive listening to pairs of musical sounds, and three discrimination tasks, one dealing with pitch and two with timbre (either with or without the attack). A symmetrical montage of 16 EEG channels was recorded for each subject across the different conditions. Two quantitative parameters of EEG activity were compared among the different sequences within five distinct frequency bands. As compared to a rest (no stimulation) condition, both passive listening conditions led to changes in primary auditory cortex areas. Both discrimination tasks for pitch and timbre led to right hemisphere EEG changes, organized in two poles: an anterior one and a posterior one. After discussing the electrophysiological aspects of this work, these results are interpreted in terms of a network including the right temporal neocortex and the right frontal lobe to maintain the acoustical information in an auditory working memory necessary to carry out the discrimination task.
Sharma, Mridula; Purdy, Suzanne C; Kelly, Andrea S
2012-07-01
The primary purpose of the study was to compare intervention approaches for children with auditory processing disorder (APD): bottom-up training including activities focused on auditory perception, discrimination, and phonological awareness, and top-down training including a range of language activities. Another purpose was to determine the benefits of personal FM systems. The study is a randomized controlled trial in which participants were allocated to groups receiving one of the two interventions, with and without personal FM, or to a no-intervention group. The six-week intervention included weekly one-hour sessions with a therapist in the clinic, plus 1-2 hours per week of parent-directed homework. 55 children (7 to 13 years) with APD participated in the study. Intervention outcomes included reading, language, and auditory processing. Positive outcomes were observed for both training approaches and personal FM systems on several measures. Pre-intervention nonverbal IQ, age, and severity of APD did not influence outcomes. Performance of control group participants did not change when retested after the intervention period. Both intervention approaches were beneficial and there were additional benefits with the use of personal FM. Positive results were not limited to the areas specifically targeted by the interventions.
Franken, Matthias K; Eisner, Frank; Acheson, Daniel J; McQueen, James M; Hagoort, Peter; Schoffelen, Jan-Mathijs
2018-06-21
Speaking is a complex motor skill which requires near-instantaneous integration of sensory and motor-related information. Current theory hypothesizes a complex interplay between motor and auditory processes during speech production, involving the online comparison of the speech output with an internally generated forward model. To examine the neural correlates of this intricate interplay between sensory and motor processes, the current study uses altered auditory feedback (AAF) in combination with magnetoencephalography (MEG). Participants vocalized the vowel /e/ and heard auditory feedback that was temporarily pitch-shifted by only 25 cents, while neural activity was recorded with MEG. As a control condition, participants also heard the recordings of the same auditory feedback that they heard in the first half of the experiment, now without vocalizing. The participants were not aware of any perturbation of the auditory feedback. We found that auditory cortical areas responded more strongly to the pitch shifts during vocalization. In addition, auditory feedback perturbation resulted in spectral power increases in the θ and lower β bands, predominantly in sensorimotor areas. These results are in line with current models of speech production, suggesting that auditory cortical areas are involved in an active comparison between a forward model's prediction and the actual sensory input. Subsequently, these areas interact with motor areas to generate a motor response. Furthermore, the results suggest that θ and β power increases support auditory-motor interaction, motor error detection and/or sensory prediction processing.
Hardy, Chris J D; Agustus, Jennifer L; Marshall, Charles R; Clark, Camilla N; Russell, Lucy L; Bond, Rebecca L; Brotherhood, Emilie V; Thomas, David L; Crutch, Sebastian J; Rohrer, Jonathan D; Warren, Jason D
2017-07-27
Non-verbal auditory impairment is increasingly recognised in the primary progressive aphasias (PPAs), but its relationship to speech processing and brain substrates has not been defined. Here we addressed these issues in patients representing the non-fluent variant (nfvPPA) and semantic variant (svPPA) syndromes of PPA. We studied 19 patients with PPA in relation to 19 healthy older individuals. We manipulated three key auditory parameters in sequences of spoken syllables: temporal regularity, phonemic spectral structure, and prosodic predictability (an index of fundamental information content, or entropy). The ability of participants to process these parameters was assessed using two-alternative, forced-choice tasks, and neuroanatomical associations of task performance were assessed using voxel-based morphometry of patients' brain magnetic resonance images. Relative to healthy controls, both the nfvPPA and svPPA groups had impaired processing of phonemic spectral structure and signal predictability, while the nfvPPA group additionally had impaired processing of temporal regularity in speech signals. Task performance correlated with standard disease severity and neurolinguistic measures. Across the patient cohort, performance on the temporal regularity task was associated with grey matter in the left supplementary motor area and right caudate, performance on the phoneme processing task was associated with grey matter in the left supramarginal gyrus, and performance on the prosodic predictability task was associated with grey matter in the right putamen. Our findings suggest that PPA syndromes may be underpinned by more generic deficits of auditory signal analysis, with a distributed cortico-subcortical neuroanatomical substrate extending beyond the canonical language network. This has implications for syndrome classification and biomarker development.
Spatial localization deficits and auditory cortical dysfunction in schizophrenia
Perrin, Megan A.; Butler, Pamela D.; DiCostanzo, Joanna; Forchelli, Gina; Silipo, Gail; Javitt, Daniel C.
2014-01-01
Background Schizophrenia is associated with deficits in the ability to discriminate auditory features such as pitch and duration that localize to primary cortical regions. Lesions of primary vs. secondary auditory cortex also produce differentiable effects on ability to localize and discriminate free-field sound, with primary cortical lesions affecting variability as well as accuracy of response. Variability of sound localization has not previously been studied in schizophrenia. Methods The study compared performance between patients with schizophrenia (n=21) and healthy controls (n=20) on sound localization and spatial discrimination tasks using low frequency tones generated from seven speakers concavely arranged with 30 degrees separation. Results For the sound localization task, patients showed reduced accuracy (p=0.004) and greater overall response variability (p=0.032), particularly in the right hemifield. Performance was also impaired on the spatial discrimination task (p=0.018). On both tasks, poorer accuracy in the right hemifield was associated with greater cognitive symptom severity. Better accuracy in the left hemifield was associated with greater hallucination severity on the sound localization task (p=0.026), but no significant association was found for the spatial discrimination task. Conclusion Patients show impairments in both sound localization and spatial discrimination of sounds presented free-field, with a pattern comparable to that of individuals with right superior temporal lobe lesions that include primary auditory cortex (Heschl’s gyrus). Right primary auditory cortex dysfunction may protect against hallucinations by influencing laterality of functioning. PMID:20619608
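The accuracy and response-variability measures contrasted in this study can be summarized per target speaker. A small sketch under assumed conventions (azimuths in degrees; `localization_summary` is a hypothetical helper, not the study's analysis code):

```python
import numpy as np

def localization_summary(targets, responses):
    """Summarize free-field localization as accuracy and variability.

    `targets` and `responses` are matched arrays of azimuths in degrees
    (e.g., seven speakers at 30-degree separation). Accuracy is reported
    as mean absolute error; variability as the standard deviation of the
    responses to each target, averaged over targets.
    """
    targets = np.asarray(targets, float)
    responses = np.asarray(responses, float)
    accuracy = np.abs(responses - targets).mean()
    variability = np.mean(
        [responses[targets == t].std(ddof=1) for t in np.unique(targets)]
    )
    return accuracy, variability
```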
Escera, Carles; Leung, Sumie; Grimm, Sabine
2014-07-01
Detection of changes in the acoustic environment is critical for survival, as it prevents missing potentially relevant events outside the focus of attention. In humans, deviance detection based on acoustic regularity encoding has been associated with a brain response derived from the human EEG, the mismatch negativity (MMN) auditory evoked potential, peaking at about 100-200 ms from deviance onset. By its long latency and cerebral generators, the cortical nature of both the processes of regularity encoding and deviance detection has been assumed. Yet, intracellular, extracellular, single-unit and local-field potential recordings in rats and cats have shown much earlier (circa 20-30 ms) and hierarchically lower (primary auditory cortex, medial geniculate body, inferior colliculus) deviance-related responses. Here, we review the recent evidence obtained with the complex auditory brainstem response (cABR), the middle latency response (MLR) and magnetoencephalography (MEG) demonstrating that human auditory deviance detection based on regularity encoding-rather than on refractoriness-occurs at latencies and in neural networks comparable to those revealed in animals. Specifically, encoding of simple acoustic-feature regularities and detection of corresponding deviance, such as an infrequent change in frequency or location, occur in the latency range of the MLR, in separate auditory cortical regions from those generating the MMN, and even at the level of human auditory brainstem. In contrast, violations of more complex regularities, such as those defined by the alternation of two different tones or by feature conjunctions (i.e., frequency and location) fail to elicit MLR correlates but elicit sizable MMNs. 
Altogether, these findings support the emerging view that deviance detection is a basic principle of the functional organization of the auditory system, and that regularity encoding and deviance detection is organized in ascending levels of complexity along the auditory pathway expanding from the brainstem up to higher-order areas of the cerebral cortex.
ERIC Educational Resources Information Center
Mokhemar, Mary Ann
This kit for assessing central auditory processing disorders (CAPD) in children in grades 1 through 8 includes 3 books, 14 full-color cards with picture scenes, and a card depicting a phone key pad, all contained in a sturdy carrying case. The units in each of the three books correspond with auditory skill areas most commonly addressed in…
Pérez-Valenzuela, Catherine; Gárate-Pérez, Macarena F.; Sotomayor-Zárate, Ramón; Delano, Paul H.; Dagnino-Subiabre, Alexies
2016-01-01
Chronic stress impairs auditory attention in rats, and monoamines regulate neurotransmission in the primary auditory cortex (A1), a brain area that modulates auditory attention. In this context, we hypothesized that norepinephrine (NE) levels in A1 correlate with the auditory attention performance of chronically stressed rats. The first objective of this research was to evaluate whether chronic stress affects monoamine levels in A1. Male Sprague–Dawley rats were subjected to chronic restraint stress, and monoamine levels were measured by high-performance liquid chromatography (HPLC) with electrochemical detection. Chronically stressed rats had lower levels of NE in A1 than did controls, while chronic stress did not affect serotonin (5-HT) and dopamine (DA) levels. The second aim was to determine the effects of reboxetine (a selective inhibitor of NE reuptake) on auditory attention and NE levels in A1. Rats were trained to discriminate between two tones of different frequencies in a two-alternative choice task (2-ACT), a behavioral paradigm to study auditory attention in rats. Trained animals that reached a performance of ≥80% correct trials in the 2-ACT were randomly assigned to control and stress experimental groups. To analyze the effects of chronic stress on the auditory task, trained rats of both groups were subjected to 50 2-ACT trials 1 day before and 1 day after the chronic stress period. A difference score (DS) was determined by subtracting the number of correct trials after the chronic stress protocol from those before. An unexpected result was that vehicle-treated control rats and vehicle-treated chronically stressed rats had similar performances in the attentional task, suggesting that repeated injections with vehicle were stressful for control animals and deteriorated their auditory attention.
In this regard, both auditory attention and NE levels in A1 were higher in chronically stressed rats treated with reboxetine than in vehicle-treated animals. These results indicate that NE plays a key role in A1 function and in the auditory attention of stressed rats during tone discrimination. PMID:28082872
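The behavioral measures described in this abstract, the ≥80% training criterion and the difference score (DS), reduce to simple arithmetic. A sketch with hypothetical function names:

```python
def reached_criterion(correct, total=50, threshold=0.80):
    """Training criterion from the abstract: >=80% correct in the 2-ACT."""
    return correct / total >= threshold

def difference_score(correct_before, correct_after):
    """Correct trials after chronic stress subtracted from those before
    (50 trials per session); positive values indicate a post-stress
    decline in attentional performance."""
    return correct_before - correct_after
```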
Schall, Sonja; von Kriegstein, Katharina
2014-01-01
It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers’ voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker’s face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas. PMID:24466026
Representations of pitch and slow modulation in auditory cortex
Barker, Daphne; Plack, Christopher J.; Hall, Deborah A.
2013-01-01
Iterated ripple noise (IRN) is a type of pitch-evoking stimulus that is commonly used in neuroimaging studies of pitch processing. When contrasted with a spectrally matched Gaussian noise, it is known to produce a consistent response in a region of auditory cortex that includes an area antero-lateral to the primary auditory fields (lateral Heschl's gyrus). The IRN-related response has often been attributed to pitch, although recent evidence suggests that it is more likely driven by slowly varying spectro-temporal modulations not related to pitch. The present functional magnetic resonance imaging (fMRI) study showed that both pitch-related temporal regularity and slow modulations elicited a significantly greater response than a baseline Gaussian noise in an area that has been pre-defined as pitch-responsive. The region was sensitive to both pitch salience and slow modulation salience. The responses to pitch and spectro-temporal modulations interacted in a saturating manner, suggesting that there may be an overlap in the populations of neurons coding these features. However, the interaction may have been influenced by the fact that the two pitch stimuli used (IRN and unresolved harmonic complexes) differed in terms of pitch salience. Finally, the results support previous findings suggesting that the cortical response to IRN is driven in part by slow modulations, not by pitch. PMID:24106464
Crinion, Jenny; Price, Cathy J
2005-12-01
Previous studies have suggested that recovery of speech comprehension after left hemisphere infarction may depend on a mechanism in the right hemisphere. However, the role that distinct right hemisphere regions play in speech comprehension following left hemisphere stroke has not been established. Here, we used functional magnetic resonance imaging (fMRI) to investigate narrative speech activation in 18 neurologically normal subjects and 17 patients with left hemisphere stroke and a history of aphasia. Activation for listening to meaningful stories relative to meaningless reversed speech was identified in the normal subjects and in each patient. Second level analyses were then used to investigate how story activation changed with the patients' auditory sentence comprehension skills and surprise story recognition memory tests post-scanning. Irrespective of lesion site, performance on tests of auditory sentence comprehension was positively correlated with activation in the right lateral superior temporal region, anterior to primary auditory cortex. In addition, when the stroke spared the left temporal cortex, good performance on tests of auditory sentence comprehension was also correlated with the left posterior superior temporal cortex (Wernicke's area). In distinct contrast to this, good story recognition memory predicted left inferior frontal and right cerebellar activation. The implication of this double dissociation in the effects of auditory sentence comprehension and story recognition memory is that left frontal and left temporal activations are dissociable. Our findings strongly support the role of the right temporal lobe in processing narrative speech and, in particular, auditory sentence comprehension following left hemisphere aphasic stroke. In addition, they highlight the importance of the right anterior superior temporal cortex where the response was dissociated from that in the left posterior temporal lobe.
Cortical modulation of auditory processing in the midbrain
Bajo, Victoria M.; King, Andrew J.
2013-01-01
In addition to their ascending pathways that originate at the receptor cells, all sensory systems are characterized by extensive descending projections. Although the size of these connections often outweighs those that carry information in the ascending auditory pathway, we still have a relatively poor understanding of the role they play in sensory processing. In the auditory system one of the main corticofugal projections links layer V pyramidal neurons with the inferior colliculus (IC) in the midbrain. All auditory cortical fields contribute to this projection, with the primary areas providing the largest outputs to the IC. In addition to medium and large pyramidal cells in layer V, a variety of cell types in layer VI make a small contribution to the ipsilateral corticocollicular projection. Cortical neurons innervate the three IC subdivisions bilaterally, although the contralateral projection is relatively small. The dorsal and lateral cortices of the IC are the principal targets of corticocollicular axons, but input to the central nucleus has also been described in some studies and is distinctive in its laminar topographic organization. Focal electrical stimulation and inactivation studies have shown that the auditory cortex can modify almost every aspect of the response properties of IC neurons, including their sensitivity to sound frequency, intensity, and location. Along with other descending pathways in the auditory system, the corticocollicular projection appears to continually modulate the processing of acoustical signals at subcortical levels. In particular, there is growing evidence that these circuits play a critical role in the plasticity of neural processing that underlies the effects of learning and experience on auditory perception by enabling changes in cortical response properties to spread to subcortical nuclei. PMID:23316140
Hertz, Uri; Amedi, Amir
2015-01-01
The classical view of sensory processing involves independent processing in sensory cortices and multisensory integration in associative areas. This hierarchical structure has been challenged by evidence of multisensory responses in sensory areas, and dynamic weighting of sensory inputs in associative areas, thus far reported independently. Here, we used a visual-to-auditory sensory substitution algorithm (SSA) to manipulate the information conveyed by sensory inputs while keeping the stimuli intact. During scan sessions before and after SSA learning, subjects were presented with visual images and auditory soundscapes. The findings reveal 2 dynamic processes. First, crossmodal attenuation of sensory cortices changed direction after SSA learning from visual attenuations of the auditory cortex to auditory attenuations of the visual cortex. Secondly, associative areas changed their sensory response profile from strongest response for visual to that for auditory. The interaction between these phenomena may play an important role in multisensory processing. Consistent features were also found in the sensory dominance in sensory areas and audiovisual convergence in associative area Middle Temporal Gyrus. These 2 factors allow for both stability and a fast, dynamic tuning of the system when required. PMID:24518756
Visual and Auditory Input in Second-Language Speech Processing
ERIC Educational Resources Information Center
Hardison, Debra M.
2010-01-01
The majority of studies in second-language (L2) speech processing have involved unimodal (i.e., auditory) input; however, in many instances, speech communication involves both visual and auditory sources of information. Some researchers have argued that multimodal speech is the primary mode of speech perception (e.g., Rosenblum 2005). Research on…
Rhone, Ariane E; Nourski, Kirill V; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A; McMurray, Bob
In everyday conversation, viewing a talker's face can provide information about the timing and content of an upcoming speech signal, resulting in improved intelligibility. Using electrocorticography, we tested whether human auditory cortex in Heschl's gyrus (HG) and on superior temporal gyrus (STG) and motor cortex on precentral gyrus (PreC) were responsive to visual/gestural information prior to the onset of sound and whether early stages of auditory processing were sensitive to the visual content (speech syllable versus non-speech motion). Event-related band power (ERBP) in the high gamma band was content-specific prior to acoustic onset on STG and PreC, and ERBP in the beta band differed in all three areas. Following sound onset, we found no evidence for content-specificity in HG, evidence for visual specificity in PreC, and specificity for both modalities in STG. These results support models of audio-visual processing in which sensory information is integrated in non-primary cortical areas.
Touch activates human auditory cortex.
Schürmann, Martin; Caetano, Gina; Hlushchuk, Yevhen; Jousmäki, Veikko; Hari, Riitta
2006-05-01
Vibrotactile stimuli can facilitate hearing, both in hearing-impaired and in normally hearing people. Accordingly, the sounds of hands exploring a surface contribute to the explorer's haptic percepts. As a possible brain basis of such phenomena, functional brain imaging has identified activations specific to audiotactile interaction in secondary somatosensory cortex, auditory belt area, and posterior parietal cortex, depending on the quality and relative salience of the stimuli. We studied 13 subjects with non-invasive functional magnetic resonance imaging (fMRI) to search for auditory brain areas that would be activated by touch. Vibration bursts of 200 Hz were delivered to the subjects' fingers and palm and tactile pressure pulses to their fingertips. Noise bursts served to identify auditory cortex. Vibrotactile-auditory co-activation, addressed with minimal smoothing to obtain a conservative estimate, was found in an 85 mm³ region in the posterior auditory belt area. This co-activation could be related to facilitated hearing at the behavioral level, reflecting the analysis of sound-like temporal patterns in vibration. However, even tactile pulses (without any vibration) activated parts of the posterior auditory belt area, which therefore might subserve processing of audiotactile events that arise during dynamic contact between hands and environment.
Dynamic speech representations in the human temporal lobe.
Leonard, Matthew K; Chang, Edward F
2014-09-01
Speech perception requires rapid integration of acoustic input with context-dependent knowledge. Recent methodological advances have allowed researchers to identify underlying information representations in primary and secondary auditory cortex and to examine how context modulates these representations. We review recent studies that focus on contextual modulations of neural activity in the superior temporal gyrus (STG), a major hub for spectrotemporal encoding. Recent findings suggest a highly interactive flow of information processing through the auditory ventral stream, including influences of higher-level linguistic and metalinguistic knowledge, even within individual areas. Such mechanisms may give rise to more abstract representations, such as those for words. We discuss the importance of characterizing representations of context-dependent and dynamic patterns of neural activity in the approach to speech perception research.
Mismatch Negativity in Recent-Onset and Chronic Schizophrenia: A Current Source Density Analysis
Fulham, W. Ross; Michie, Patricia T.; Ward, Philip B.; Rasser, Paul E.; Todd, Juanita; Johnston, Patrick J.; Thompson, Paul M.; Schall, Ulrich
2014-01-01
Mismatch negativity (MMN) is a component of the event-related potential elicited by deviant auditory stimuli. It is presumed to index pre-attentive monitoring of changes in the auditory environment. MMN amplitude is smaller in groups of individuals with schizophrenia compared to healthy controls. We compared duration-deviant MMN in 16 recent-onset and 19 chronic schizophrenia patients versus age- and sex-matched controls. Reduced frontal MMN was found in both patient groups; it involved reduced hemispheric asymmetry and was correlated with Global Assessment of Functioning (GAF) and negative symptom ratings. A cortically-constrained LORETA analysis, incorporating anatomical data from each individual's MRI, was performed to generate a current source density model of the MMN response over time. This model suggested MMN generation within a temporal, parietal and frontal network, which was right hemisphere dominant only in controls. An exploratory analysis revealed reduced CSD in patients in superior and middle temporal cortex, inferior and superior parietal cortex, precuneus, anterior cingulate, and superior and middle frontal cortex. A region of interest (ROI) analysis was performed. For the early phase of the MMN, patients had reduced bilateral temporal and parietal response and no lateralisation in frontal ROIs. For late MMN, patients had reduced bilateral parietal response and no lateralisation in temporal ROIs. In patients, correlations revealed a link between GAF and the MMN response in parietal cortex. In controls, the frontal response onset was 17 ms later than the temporal and parietal response. In patients, onset latency of the MMN response was delayed in secondary, but not primary, auditory cortex. However, amplitude reductions were observed in both primary and secondary auditory cortex. 
These latency delays may indicate relatively intact information processing upstream of the primary auditory cortex, but impaired primary auditory cortex or cortico-cortical or thalamo-cortical communication with higher auditory cortices as a core deficit in schizophrenia. PMID:24949859
Beitel, Ralph E.; Schreiner, Christoph E.; Leake, Patricia A.
2016-01-01
In profoundly deaf cats, behavioral training with intracochlear electric stimulation (ICES) can improve temporal processing in the primary auditory cortex (AI). To investigate whether similar effects are manifest in the auditory midbrain, ICES was initiated in neonatally deafened cats either during development after short durations of deafness (8 wk of age) or in adulthood after long durations of deafness (≥3.5 yr). All of these animals received behaviorally meaningless, “passive” ICES. Some animals also received behavioral training with ICES. Two long-deaf cats received no ICES prior to acute electrophysiological recording. After several months of passive ICES and behavioral training, animals were anesthetized, and neuronal responses to pulse trains of increasing rates were recorded in the central (ICC) and external (ICX) nuclei of the inferior colliculus. Neuronal temporal response patterns (repetition rate coding, minimum latencies, response precision) were compared with results from recordings made in the AI of the same animals (Beitel RE, Vollmer M, Raggio MW, Schreiner CE. J Neurophysiol 106: 944–959, 2011; Vollmer M, Beitel RE. J Neurophysiol 106: 2423–2436, 2011). Passive ICES in long-deaf cats remediated severely degraded temporal processing in the ICC and had no effects in the ICX. In contrast to observations in the AI, behaviorally relevant ICES had no effects on temporal processing in the ICC or ICX, with the single exception of shorter latencies in the ICC in short-deaf cats. The results suggest that independent of deafness duration passive stimulation and behavioral training differentially transform temporal processing in auditory midbrain and cortex, and primary auditory cortex emerges as a pivotal site for behaviorally driven neuronal temporal plasticity in the deaf cat. NEW & NOTEWORTHY Behaviorally relevant vs. 
passive electric stimulation of the auditory nerve differentially affects neuronal temporal processing in the central nucleus of the inferior colliculus (ICC) and the primary auditory cortex (AI) in profoundly short-deaf and long-deaf cats. Temporal plasticity in the ICC depends on a critical amount of electric stimulation, independent of its behavioral relevance. In contrast, the AI emerges as a pivotal site for behaviorally driven neuronal temporal plasticity in the deaf auditory system. PMID:27733594
Neural mechanisms underlying auditory feedback control of speech
Reilly, Kevin J.; Guenther, Frank H.
2013-01-01
The neural substrates underlying auditory feedback control of speech were investigated using a combination of functional magnetic resonance imaging (fMRI) and computational modeling. Neural responses were measured while subjects spoke monosyllabic words under two conditions: (i) normal auditory feedback of their speech, and (ii) auditory feedback in which the first formant frequency of their speech was unexpectedly shifted in real time. Acoustic measurements showed compensation to the shift within approximately 135 ms of onset. Neuroimaging revealed increased activity in bilateral superior temporal cortex during shifted feedback, indicative of neurons coding mismatches between expected and actual auditory signals, as well as right prefrontal and Rolandic cortical activity. Structural equation modeling revealed increased influence of bilateral auditory cortical areas on right frontal areas during shifted speech, indicating that projections from auditory error cells in posterior superior temporal cortex to motor correction cells in right frontal cortex mediate auditory feedback control of speech. PMID:18035557
Engle, James R.; Recanzone, Gregg H.
2012-01-01
Age-related hearing deficits are a leading cause of disability among the aged. While some forms of hearing deficits are peripheral in origin, others are centrally mediated. One such deficit is the ability to localize sounds, a critical component for segregating different acoustic objects and events, which is dependent on the auditory cortex. Recent evidence indicates that in aged animals the normal sharpening of spatial tuning between neurons in primary auditory cortex to the caudal lateral field does not occur as it does in younger animals. As a decrease in inhibition with aging is common in the ascending auditory system, it is possible that this lack of spatial tuning sharpening is due to a decrease in inhibition at different periods within the response. It is also possible that spatial tuning was decreased as a consequence of reduced inhibition at non-best locations. In this report we found that aged animals had greater activity throughout the response period, but primarily during the onset of the response. This was most prominent at non-best directions, which is consistent with the hypothesis that inhibition is a primary mechanism for sharpening spatial tuning curves. We also noted that in aged animals the latency of the response was much shorter than in younger animals, which is consistent with a decrease in pre-onset inhibition. These results can be interpreted in the context of a failure of the timing and efficiency of feed-forward thalamo-cortical and cortico-cortical circuits in aged animals. Such a mechanism, if generalized across cortical areas, could play a major role in age-related cognitive decline. PMID:23316160
AUDITORY ASSOCIATIVE MEMORY AND REPRESENTATIONAL PLASTICITY IN THE PRIMARY AUDITORY CORTEX
Weinberger, Norman M.
2009-01-01
Historically, the primary auditory cortex has been largely ignored as a substrate of auditory memory, perhaps because studies of associative learning could not reveal the plasticity of receptive fields (RFs). The use of a unified experimental design, in which RFs are obtained before and after standard training (e.g., classical and instrumental conditioning) revealed associative representational plasticity, characterized by facilitation of responses to tonal conditioned stimuli (CSs) at the expense of other frequencies, producing CS-specific tuning shifts. Associative representational plasticity (ARP) possesses the major attributes of associative memory: it is highly specific, discriminative, rapidly acquired, consolidates over hours and days and can be retained indefinitely. The nucleus basalis cholinergic system is sufficient both for the induction of ARP and for the induction of specific auditory memory, including control of the amount of remembered acoustic details. Extant controversies regarding the form, function and neural substrates of ARP appear largely to reflect different assumptions, which are explicitly discussed. The view that the forms of plasticity are task-dependent is supported by ongoing studies in which auditory learning involves CS-specific decreases in threshold or bandwidth without affecting frequency tuning. Future research needs to focus on the factors that determine ARP and their functions in hearing and in auditory memory. PMID:17344002
Cholinergic Neuromodulation Controls Directed Temporal Communication in Neocortex in Vitro
Roopun, Anita K.; LeBeau, Fiona E.N.; Rammell, James; Cunningham, Mark O.; Traub, Roger D.; Whittington, Miles A.
2010-01-01
Acetylcholine is the primary neuromodulator involved in cortical arousal in mammals. Cholinergic modulation is involved in conscious awareness, memory formation and attention – processes that involve intercommunication between different cortical regions. Such communication is achieved in part through temporal structuring of neuronal activity by population rhythms, particularly in the beta and gamma frequency ranges (12–80 Hz). Here we demonstrate, using in vitro and in silico models, that spectrally identical patterns of beta2 and gamma rhythms are generated in primary sensory areas and polymodal association areas by fundamentally different local circuit mechanisms: Glutamatergic excitation induced beta2 frequency population rhythms only in layer 5 association cortex whereas cholinergic neuromodulation induced this rhythm only in layer 5 primary sensory cortex. This region-specific sensitivity of local circuits to cholinergic modulation allowed for control of the extent of cortical temporal interactions. Furthermore, the contrasting mechanisms underlying these beta2 rhythms produced a high degree of directionality, favouring an influence of association cortex over primary auditory cortex. PMID:20407636
Lewandowski, Brian; Vyssotski, Alexei; Hahnloser, Richard H R; Schmidt, Marc
2013-06-01
Communication between auditory and vocal motor nuclei is essential for vocal learning. In songbirds, the nucleus interfacialis of the nidopallium (NIf) is part of a sensorimotor loop, along with auditory nucleus avalanche (Av) and song system nucleus HVC, that links the auditory and song systems. Most of the auditory information comes through this sensorimotor loop, with the projection from NIf to HVC representing the largest single source of auditory information to the song system. In addition to providing the majority of HVC's auditory input, NIf is also the primary driver of spontaneous activity and premotor-like bursting during sleep in HVC. Like HVC and RA, two nuclei critical for song learning and production, NIf exhibits behavioral-state dependent auditory responses and strong motor bursts that precede song output. NIf also exhibits extended periods of fast gamma oscillations following vocal production. Based on the converging evidence from studies of physiology and functional connectivity it would be reasonable to expect NIf to play an important role in the learning, maintenance, and production of song. Surprisingly, however, lesions of NIf in adult zebra finches have no effect on song production or maintenance. Only the plastic song produced by juvenile zebra finches during the sensorimotor phase of song learning is affected by NIf lesions. In this review, we carefully examine what is known about NIf at the anatomical, physiological, and behavioral levels. We reexamine conclusions drawn from previous studies in the light of our current understanding of the song system, and establish what can be said with certainty about NIf's involvement in song learning, maintenance, and production. Finally, we review recent theories of song learning integrating possible roles for NIf within these frameworks and suggest possible parallels between NIf and sensorimotor areas that form part of the neural circuitry for speech processing in humans. 
PMID:23603062
Common Sense in Choice: The Effect of Sensory Modality on Neural Value Representations.
Shuster, Anastasia; Levy, Dino J
2018-01-01
Although it is well established that the ventromedial prefrontal cortex (vmPFC) represents value using a common currency across categories of rewards, it is unknown whether the vmPFC represents value irrespective of the sensory modality in which alternatives are presented. In the current study, male and female human subjects completed a decision-making task while their neural activity was recorded using functional magnetic resonance imaging. On each trial, subjects chose between a safe alternative and a lottery, which was presented visually or aurally. A univariate conjunction analysis revealed that the anterior portion of the vmPFC tracks subjective value (SV) irrespective of the sensory modality. Using a novel cross-modality multivariate classifier, we were able to decode auditory value based on visual trials and vice versa. In addition, we found that the visual and auditory sensory cortices, which were identified using functional localizers, are also sensitive to the value of stimuli, albeit in a modality-specific manner. Whereas both primary and higher-order auditory cortices represented auditory SV (aSV), only a higher-order visual area represented visual SV (vSV). These findings expand our understanding of the common currency network of the brain and shed a new light on the interplay between sensory and value information processing.
PMID:29619408
Direct Recordings of Pitch Responses from Human Auditory Cortex
Griffiths, Timothy D.; Kumar, Sukhbinder; Sedley, William; Nourski, Kirill V.; Kawasaki, Hiroto; Oya, Hiroyuki; Patterson, Roy D.; Brugge, John F.; Howard, Matthew A.
2010-01-01
Summary Pitch is a fundamental percept with a complex relationship to the associated sound structure [1]. Pitch perception requires brain representation of both the structure of the stimulus and the pitch that is perceived. We describe direct recordings of local field potentials from human auditory cortex made while subjects perceived the transition between noise and a noise with a regular repetitive structure in the time domain at the millisecond level called regular-interval noise (RIN) [2]. RIN is perceived to have a pitch when the rate is above the lower limit of pitch [3], at approximately 30 Hz. Sustained time-locked responses are observed to be related to the temporal regularity of the stimulus, commonly emphasized as a relevant stimulus feature in models of pitch perception (e.g., [1]). Sustained oscillatory responses are also demonstrated in the high gamma range (80–120 Hz). The regularity responses occur irrespective of whether the response is associated with pitch perception. In contrast, the oscillatory responses only occur for pitch. Both responses occur in primary auditory cortex and adjacent nonprimary areas. The research suggests that two types of pitch-related activity occur in humans in early auditory cortex: time-locked neural correlates of stimulus regularity and an oscillatory response related to the pitch percept. PMID:20605456
Hearing loss in older adults affects neural systems supporting speech comprehension.
Peelle, Jonathan E; Troiani, Vanessa; Grossman, Murray; Wingfield, Arthur
2011-08-31
Hearing loss is one of the most common complaints in adults over the age of 60 and a major contributor to difficulties in speech comprehension. To examine the effects of hearing ability on the neural processes supporting spoken language processing in humans, we used functional magnetic resonance imaging to monitor brain activity while older adults with age-normal hearing listened to sentences that varied in their linguistic demands. Individual differences in hearing ability predicted the degree of language-driven neural recruitment during auditory sentence comprehension in bilateral superior temporal gyri (including primary auditory cortex), thalamus, and brainstem. In a second experiment, we examined the relationship of hearing ability to cortical structural integrity using voxel-based morphometry, demonstrating a significant linear relationship between hearing ability and gray matter volume in primary auditory cortex. Together, these results suggest that even moderate declines in peripheral auditory acuity lead to a systematic downregulation of neural activity during the processing of higher-level aspects of speech, and may also contribute to loss of gray matter volume in primary auditory cortex. More generally, these findings support a resource-allocation framework in which individual differences in sensory ability help define the degree to which brain regions are recruited in service of a particular task.
PMID:21880924
ERIC Educational Resources Information Center
Hollander, Cara; de Andrade, Victor Manuel
2014-01-01
Schools located near to airports are exposed to high levels of noise which can cause cognitive, health, and hearing problems. Therefore, this study sought to explore whether this noise may cause auditory language processing (ALP) problems in primary school learners. Sixty-one children attending schools exposed to high levels of noise were matched…
Intrinsic, stimulus-driven and task-dependent connectivity in human auditory cortex.
Häkkinen, Suvi; Rinne, Teemu
2018-06-01
A hierarchical and modular organization is a central hypothesis in the current primate model of auditory cortex (AC) but lacks validation in humans. Here we investigated whether fMRI connectivity at rest and during active tasks is informative of the functional organization of human AC. Identical pitch-varying sounds were presented during a visual discrimination (i.e. no directed auditory attention), pitch discrimination, and two versions of pitch n-back memory tasks. Analysis based on fMRI connectivity at rest revealed a network structure consisting of six modules in supratemporal plane (STP), temporal lobe, and inferior parietal lobule (IPL) in both hemispheres. In line with the primate model, in which higher-order regions have more longer-range connections than primary regions, areas encircling the STP module showed the highest inter-modular connectivity. Multivariate pattern analysis indicated significant connectivity differences between the visual task and rest (driven by the presentation of sounds during the visual task), between auditory and visual tasks, and between pitch discrimination and pitch n-back tasks. Further analyses showed that these differences were particularly due to connectivity modulations between the STP and IPL modules. While the results are generally in line with the primate model, they highlight the important role of human IPL during the processing of both task-irrelevant and task-relevant auditory information. Importantly, the present study shows that fMRI connectivity at rest, during presentation of sounds, and during active listening provides novel information about the functional organization of human AC.
Meaning in the avian auditory cortex: Neural representation of communication calls
Elie, Julie E; Theunissen, Frédéric E
2014-01-01
Understanding how the brain extracts the behavioral meaning carried by specific vocalization types that can be emitted by various vocalizers and in different conditions is a central question in auditory research. This semantic categorization is a fundamental process required for acoustic communication and presupposes discriminative and invariance properties of the auditory system for conspecific vocalizations. Songbirds have been used extensively to study vocal learning, but the communicative function of all their vocalizations and their neural representation has yet to be examined. In our research, we first generated a library containing almost the entire zebra finch vocal repertoire and organized communication calls along 9 different categories based on their behavioral meaning. We then investigated the neural representations of these semantic categories in the primary and secondary auditory areas of 6 anesthetized zebra finches. To analyze how single units encode these call categories, we described neural responses in terms of their discrimination, selectivity and invariance properties. Quantitative measures for these neural properties were obtained using an optimal decoder based both on spike counts and spike patterns. Information theoretic metrics show that almost half of the single units encode semantic information. Neurons achieve higher discrimination of these semantic categories by being more selective and more invariant. These results demonstrate that computations necessary for semantic categorization of meaningful vocalizations are already present in the auditory cortex and emphasize the value of a neuro-ethological approach to understand vocal communication. PMID:25728175
Task-specific reorganization of the auditory cortex in deaf humans
Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin
2017-01-01
The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior–lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain. PMID:28069964
The Role of Auditory Cues in the Spatial Knowledge of Blind Individuals
ERIC Educational Resources Information Center
Papadopoulos, Konstantinos; Papadimitriou, Kimon; Koutsoklenis, Athanasios
2012-01-01
The study presented here sought to explore the role of auditory cues in the spatial knowledge of blind individuals by examining the relation between the perceived auditory cues and the landscape of a given area and by investigating how blind individuals use auditory cues to create cognitive maps. The findings reveal that several auditory cues…
Dual streams of auditory afferents target multiple domains in the primate prefrontal cortex
Romanski, L. M.; Tian, B.; Fritz, J.; Mishkin, M.; Goldman-Rakic, P. S.; Rauschecker, J. P.
2009-01-01
‘What’ and ‘where’ visual streams define ventrolateral object and dorsolateral spatial processing domains in the prefrontal cortex of nonhuman primates. We looked for similar streams for auditory–prefrontal connections in rhesus macaques by combining microelectrode recording with anatomical tract-tracing. Injection of multiple tracers into physiologically mapped regions AL, ML and CL of the auditory belt cortex revealed that anterior belt cortex was reciprocally connected with the frontal pole (area 10), rostral principal sulcus (area 46) and ventral prefrontal regions (areas 12 and 45), whereas the caudal belt was mainly connected with the caudal principal sulcus (area 46) and frontal eye fields (area 8a). Thus separate auditory streams originate in caudal and rostral auditory cortex and target spatial and non-spatial domains of the frontal lobe, respectively. PMID:10570492
Syllabic (~2-5 Hz) and fluctuation (~1-10 Hz) ranges in speech and auditory processing
Edwards, Erik; Chang, Edward F.
2013-01-01
Given recent interest in syllabic rates (~2-5 Hz) for speech processing, we review the perception of “fluctuation” range (~1-10 Hz) modulations during listening to speech and technical auditory stimuli (AM and FM tones and noises, and ripple sounds). We find evidence that the temporal modulation transfer function (TMTF) of human auditory perception is not simply low-pass in nature, but rather exhibits a peak in sensitivity in the syllabic range (~2-5 Hz). We also address human and animal neurophysiological evidence, and argue that this bandpass tuning arises at the thalamocortical level and is more associated with non-primary regions than primary regions of cortex. The bandpass rather than low-pass TMTF has implications for modeling auditory central physiology and speech processing: this implicates temporal contrast rather than simple temporal integration, with contrast enhancement for dynamic stimuli in the fluctuation range. PMID:24035819
Inspector, Michael; Manor, David; Amir, Noam; Kushnir, Tamar; Karni, Avi
2013-01-01
Objectives Intonation may serve as a cue for facilitated recognition and processing of spoken words and it has been suggested that the pitch contour of spoken words is implicitly remembered. Thus, using the repetition suppression (RS) effect of BOLD-fMRI signals, we tested whether the same spoken words are differentially processed in language and auditory brain areas depending on whether or not they retain an arbitrary intonation pattern. Experimental design Words were presented repeatedly in three blocks for passive and active listening tasks. There were three prosodic conditions in each of which a different set of words was used and specific task-irrelevant intonation changes were applied: (i) All words presented in a set flat monotonous pitch contour (ii) Each word had an arbitrary pitch contour that was set throughout the three repetitions. (iii) Each word had a different arbitrary pitch contour in each of its repetition. Principal findings The repeated presentations of words with a set pitch contour, resulted in robust behavioral priming effects as well as in significant RS of the BOLD signals in primary auditory cortex (BA 41), temporal areas (BA 21 22) bilaterally and in Broca's area. However, changing the intonation of the same words on each successive repetition resulted in reduced behavioral priming and the abolition of RS effects. Conclusions Intonation patterns are retained in memory even when the intonation is task-irrelevant. Implicit memory traces for the pitch contour of spoken words were reflected in facilitated neuronal processing in auditory and language associated areas. Thus, the results lend support for the notion that prosody and specifically pitch contour is strongly associated with the memory representation of spoken words. PMID:24391713
Neural stem/progenitor cell properties of glial cells in the adult mouse auditory nerve
Lang, Hainan; Xing, Yazhi; Brown, LaShardai N.; Samuvel, Devadoss J.; Panganiban, Clarisse H.; Havens, Luke T.; Balasubramanian, Sundaravadivel; Wegner, Michael; Krug, Edward L.; Barth, Jeremy L.
2015-01-01
The auditory nerve is the primary conveyor of hearing information from sensory hair cells to the brain. It has been believed that loss of the auditory nerve is irreversible in the adult mammalian ear, resulting in sensorineural hearing loss. We examined the regenerative potential of the auditory nerve in a mouse model of auditory neuropathy. Following neuronal degeneration, quiescent glial cells converted to an activated state showing a decrease in nuclear chromatin condensation, altered histone deacetylase expression and up-regulation of numerous genes associated with neurogenesis or development. Neurosphere formation assays showed that adult auditory nerves contain neural stem/progenitor cells (NSPs) that were within a Sox2-positive glial population. Production of neurospheres from auditory nerve cells was stimulated by acute neuronal injury and hypoxic conditioning. These results demonstrate that a subset of glial cells in the adult auditory nerve exhibit several characteristics of NSPs and are therefore potential targets for promoting auditory nerve regeneration. PMID:26307538
Primary auditory cortex regulates threat memory specificity.
Wigestrand, Mattis B; Schiff, Hillary C; Fyhn, Marianne; LeDoux, Joseph E; Sears, Robert M
2017-01-01
Distinguishing threatening from nonthreatening stimuli is essential for survival and stimulus generalization is a hallmark of anxiety disorders. While auditory threat learning produces long-lasting plasticity in primary auditory cortex (Au1), it is not clear whether such Au1 plasticity regulates memory specificity or generalization. We used muscimol infusions in rats to show that discriminatory threat learning requires Au1 activity specifically during memory acquisition and retrieval, but not during consolidation. Memory specificity was similarly disrupted by infusion of PKMζ inhibitor peptide (ZIP) during memory storage. Our findings show that Au1 is required at critical memory phases and suggest that Au1 plasticity enables stimulus discrimination. © 2016 Wigestrand et al.; Published by Cold Spring Harbor Laboratory Press.
Beer, Anton L.; Plank, Tina; Meyer, Georg; Greenlee, Mark W.
2013-01-01
Functional magnetic resonance imaging (MRI) showed that the superior temporal and occipital cortex are involved in multisensory integration. Probabilistic fiber tracking based on diffusion-weighted MRI suggests that multisensory processing is supported by white matter connections between auditory cortex and the temporal and occipital lobe. Here, we present a combined functional MRI and probabilistic fiber tracking study that reveals multisensory processing mechanisms that remained undetected by either technique alone. Ten healthy participants passively observed visually presented lip or body movements, heard speech or body action sounds, or were exposed to a combination of both. Bimodal stimulation engaged a temporal-occipital brain network including the multisensory superior temporal sulcus (msSTS), the lateral superior temporal gyrus (lSTG), and the extrastriate body area (EBA). A region-of-interest (ROI) analysis showed multisensory interactions (e.g., subadditive responses to bimodal compared to unimodal stimuli) in the msSTS, the lSTG, and the EBA region. Moreover, sounds elicited responses in the medial occipital cortex. Probabilistic tracking revealed white matter tracts between the auditory cortex and the medial occipital cortex, the inferior occipital cortex (IOC), and the superior temporal sulcus (STS). However, STS terminations of auditory cortex tracts showed limited overlap with the msSTS region. Instead, msSTS was connected to primary sensory regions via intermediate nodes in the temporal and occipital cortex. Similarly, the lSTG and EBA regions showed limited direct white matter connections but instead were connected via intermediate nodes. Our results suggest that multisensory processing in the STS is mediated by separate brain areas that form a distinct network in the lateral temporal and inferior occipital cortex. PMID:23407860
Auditory Neuroscience: Temporal Anticipation Enhances Cortical Processing
Walker, Kerry M. M.; King, Andrew J.
2015-01-01
Summary A recent study shows that expectation about the timing of behaviorally-relevant sounds enhances the responses of neurons in the primary auditory cortex and improves the accuracy and speed with which animals respond to those sounds. PMID:21481759
Visual activity predicts auditory recovery from deafness after adult cochlear implantation.
Strelnikov, Kuzma; Rouger, Julien; Demonet, Jean-François; Lagleyre, Sebastien; Fraysse, Bernard; Deguine, Olivier; Barone, Pascal
2013-12-01
Modern cochlear implantation technologies allow deaf patients to understand auditory speech; however, the implants deliver only a coarse auditory input and patients must use long-term adaptive processes to achieve coherent percepts. In adults with post-lingual deafness, the greatest progress in speech recovery is observed during the first year after cochlear implantation, but there is large variability in cochlear implant outcomes and in the temporal evolution of recovery. It has been proposed that when profoundly deaf subjects receive a cochlear implant, the visual cross-modal reorganization of the brain is deleterious for auditory speech recovery. We tested this hypothesis in post-lingually deaf adults by analysing whether brain activity shortly after implantation correlated with the level of auditory recovery 6 months later. Based on brain activity induced by a speech-processing task, we found strong positive correlations in areas outside the auditory cortex. The highest positive correlations were found in the occipital cortex involved in visual processing, as well as in the posterior-temporal cortex known for audio-visual integration. A further area that positively correlated with auditory speech recovery was localized in the left inferior frontal area known for speech processing. Our results demonstrate that the functional level of the visual modality is related to the proficiency of auditory recovery. Based on the positive correlation of visual activity with auditory speech recovery, we suggest that the visual modality may facilitate the perception of the word's auditory counterpart in communicative situations. The link demonstrated between visual activity and auditory speech perception indicates that visuoauditory synergy is crucial for cross-modal plasticity and for fostering speech-comprehension recovery in adult cochlear-implanted deaf patients.
Raij, Tuukka T.; Riekki, Tapani J.J.
2012-01-01
Neuronal underpinnings of auditory verbal hallucination remain poorly understood. One suggested mechanism is brain activation that is similar to verbal imagery but occurs without the proper activation of the neuronal systems that are required to tag the origins of verbal imagery in one's mind. Such neuronal systems involve the supplementary motor area. The supplementary motor area has been associated with awareness of intention to make a hand movement, but whether this region is related to the sense of ownership of one's verbal thought remains poorly known. We hypothesized that the supplementary motor area is related to the distinction between one's own mental processing (auditory verbal imagery) and similar processing that is attributed to non-self author (auditory verbal hallucination). To test this hypothesis, we asked patients to signal the onset and offset of their auditory verbal hallucinations during functional magnetic resonance imaging. During non-hallucination periods, we asked the same patients to imagine the hallucination they had previously experienced. In addition, healthy control subjects signaled the onset and offset of self-paced imagery of similar voices. Both hallucinations and the imagery of hallucinations were associated with similar activation strengths of the fronto-temporal language-related circuitries, but the supplementary motor area was activated more strongly during the imagery than during hallucination. These findings suggest that auditory verbal hallucination resembles verbal imagery in language processing, but without the involvement of the supplementary motor area, which may subserve the sense of ownership of one's own verbal imagery. PMID:24179739
Seymour, Jenessa L; Low, Kathy A; Maclin, Edward L; Chiarelli, Antonio M; Mathewson, Kyle E; Fabiani, Monica; Gratton, Gabriele; Dye, Matthew W G
2017-01-01
Theories of brain plasticity propose that, in the absence of input from the preferred sensory modality, some specialized brain areas may be recruited when processing information from other modalities, which may result in improved performance. The Useful Field of View task has previously been used to demonstrate that early deafness positively impacts peripheral visual attention. The current study sought to determine the neural changes associated with those deafness-related enhancements in visual performance. Based on previous findings, we hypothesized that recruitment of posterior portions of Brodmann area 22, a brain region most commonly associated with auditory processing, would be correlated with peripheral selective attention as measured using the Useful Field of View task. We report data from severely to profoundly deaf adults and normal-hearing controls who performed the Useful Field of View task while cortical activity was recorded using the event-related optical signal. Behavioral performance, obtained in a separate session, showed that deaf subjects had lower thresholds (i.e., better performance) on the Useful Field of View task. The event-related optical data indicated greater activity for the deaf adults than for the normal-hearing controls during the task in the posterior portion of Brodmann area 22 in the right hemisphere. Furthermore, the behavioral thresholds correlated significantly with this neural activity. This work provides further support for the hypothesis that cross-modal plasticity in deaf individuals appears in higher-order auditory cortices, whereas no similar evidence was obtained for primary auditory areas. It is also the only neuroimaging study to date that has linked deaf-related changes in the right temporal lobe to visual task performance outside of the imaging environment. The event-related optical signal is a valuable technique for studying cross-modal plasticity in deaf humans. The non-invasive and relatively quiet characteristics of this technique have great potential utility in research with clinical populations such as deaf children and adults who have received cochlear or auditory brainstem implants. Copyright © 2016 Elsevier B.V. All rights reserved.
A bilateral cortical network responds to pitch perturbations in speech feedback
Kort, Naomi S.; Nagarajan, Srikantan S.; Houde, John F.
2014-01-01
Auditory feedback is used to monitor and correct for errors in speech production, and one of the clearest demonstrations of this is the pitch perturbation reflex. During ongoing phonation, speakers respond rapidly to shifts of the pitch of their auditory feedback, altering their pitch production to oppose the direction of the applied pitch shift. In this study, we examine the timing of activity within a network of brain regions thought to be involved in mediating this behavior. To isolate auditory feedback processing relevant for motor control of speech, we used magnetoencephalography (MEG) to compare neural responses to speech onset and to transient (400 ms) pitch feedback perturbations during speaking with responses to identical acoustic stimuli during passive listening. We found overlapping, but distinct bilateral cortical networks involved in monitoring speech onset and feedback alterations in ongoing speech. Responses to speech onset during speaking were suppressed in bilateral auditory and left ventral supramarginal gyrus/posterior superior temporal sulcus (vSMG/pSTS). In contrast, during pitch perturbations, activity was enhanced in bilateral vSMG/pSTS, bilateral premotor cortex, right primary auditory cortex, and left higher order auditory cortex. We also found speaking-induced delays in responses to both unaltered and altered speech in bilateral primary and secondary auditory regions, the left vSMG/pSTS and right premotor cortex. The network dynamics reveal the cortical processing involved in both detecting the speech error and updating the motor plan to create the new pitch output. These results implicate vSMG/pSTS as critical in both monitoring auditory feedback and initiating rapid compensation to feedback errors. PMID:24076223
Phonological Processing in Human Auditory Cortical Fields
Woods, David L.; Herron, Timothy J.; Cate, Anthony D.; Kang, Xiaojian; Yund, E. W.
2011-01-01
We used population-based cortical-surface analysis of functional magnetic imaging data to characterize the processing of consonant–vowel–consonant syllables (CVCs) and spectrally matched amplitude-modulated noise bursts (AMNBs) in human auditory cortex as subjects attended to auditory or visual stimuli in an intermodal selective attention paradigm. Average auditory cortical field (ACF) locations were defined using tonotopic mapping in a previous study. Activations in auditory cortex were defined by two stimulus-preference gradients: (1) Medial belt ACFs preferred AMNBs and lateral belt and parabelt fields preferred CVCs. This preference extended into core ACFs with medial regions of primary auditory cortex (A1) and the rostral field preferring AMNBs and lateral regions preferring CVCs. (2) Anterior ACFs showed smaller activations but more clearly defined stimulus preferences than did posterior ACFs. Stimulus preference gradients were unaffected by auditory attention suggesting that ACF preferences reflect the automatic processing of different spectrotemporal sound features. PMID:21541252
Clinical significance and developmental changes of auditory-language-related gamma activity
Kojima, Katsuaki; Brown, Erik C.; Rothermel, Robert; Carlson, Alanna; Fuerst, Darren; Matsuzaki, Naoyuki; Shah, Aashit; Atkinson, Marie; Basha, Maysaa; Mittal, Sandeep; Sood, Sandeep; Asano, Eishi
2012-01-01
OBJECTIVE We determined the clinical impact and developmental changes of auditory-language-related augmentation of gamma activity at 50–120 Hz recorded on electrocorticography (ECoG). METHODS We analyzed data from 77 epileptic patients ranging from 4 to 56 years of age. We determined the effects of seizure-onset zone, electrode location, and patient age upon gamma-augmentation elicited by an auditory-naming task. RESULTS Gamma-augmentation was less frequently elicited within seizure-onset sites compared to other sites. Regardless of age, gamma-augmentation most often involved the 80–100 Hz frequency band. Gamma-augmentation initially involved bilateral superior-temporal regions, followed by left-side dominant involvement in the middle-temporal, medial-temporal, inferior-frontal, dorsolateral-premotor, and medial-frontal regions, and concluded with bilateral inferior-Rolandic involvement. Compared to younger patients, those older than 10 years had a larger proportion of left dorsolateral-premotor and right inferior-frontal sites showing gamma-augmentation. The incidence of a post-operative language deficit requiring speech therapy was predicted by the number of resected sites with gamma-augmentation in the superior-temporal, inferior-frontal, dorsolateral-premotor, and inferior-Rolandic regions of the left hemisphere assumed to contain essential language function (r2=0.59; p=0.001; odds ratio=6.04 [95% confidence-interval: 2.26 to 16.15]). CONCLUSIONS Auditory-language-related gamma-augmentation can provide additional information useful to localize the primary language areas. SIGNIFICANCE These results derived from a large sample of patients support the utility of auditory-language-related gamma-augmentation in presurgical evaluation. PMID:23141882
NASA Astrophysics Data System (ADS)
Mulligan, B. E.; Goodman, L. S.; McBride, D. K.; Mitchell, T. M.; Crosby, T. N.
1984-08-01
This work reviews the areas of auditory attention, recognition, memory and auditory perception of patterns, pitch, and loudness. The review was written from the perspective of human engineering and focuses primarily on auditory processing of information contained in acoustic signals. The impetus for this effort was to establish a data base to be utilized in the design and evaluation of acoustic displays.
Multiple Transmitter Receptors in Regions and Layers of the Human Cerebral Cortex
Zilles, Karl; Palomero-Gallagher, Nicola
2017-01-01
We measured the densities (fmol/mg protein) of 15 different receptors of various transmitter systems in the supragranular, granular and infragranular strata of 44 areas of visual, somatosensory, auditory and multimodal association systems of the human cerebral cortex. Receptor densities were obtained after labeling of the receptors using quantitative in vitro receptor autoradiography in human postmortem brains. The mean density of each receptor type over all cortical layers and of each of the three major strata varies between cortical regions. In a single cortical area, the multi-receptor fingerprints of its strata (i.e., polar plots, each visualizing the densities of multiple different receptor types in supragranular, granular or infragranular layers of the same cortical area) differ in shape and size indicating regional and laminar specific balances between the receptors. Furthermore, the three strata are clearly segregated into well definable clusters by their receptor fingerprints. Fingerprints of different cortical areas systematically vary between functional networks, and with the hierarchical levels within sensory systems. Primary sensory areas are clearly separated from all other cortical areas particularly by their very high muscarinic M2 and nicotinic α4β2 receptor densities, and to a lesser degree also by noradrenergic α2 and serotonergic 5-HT2 receptors. Early visual areas of the dorsal and ventral streams are segregated by their multi-receptor fingerprints. The results are discussed on the background of functional segregation, cortical hierarchies, microstructural types, and the horizontal (layers) and vertical (columns) organization in the cerebral cortex. We conclude that a cortical column is composed of segments, which can be assigned to the cortical strata. The segments differ by their patterns of multi-receptor balances, indicating different layer-specific signal processing mechanisms. 
Additionally, the differences between the strata- and area-specific fingerprints of the 44 areas reflect the segregation of the cerebral cortex into functionally and topographically definable groups of cortical areas (visual, auditory, somatosensory, limbic, motor), and reveal their hierarchical position (primary and unimodal (early) sensory to higher sensory and finally to multimodal association areas). Highlights: Densities of transmitter receptors vary between areas of human cerebral cortex. Multi-receptor fingerprints segregate cortical layers. The densities of all examined receptor types together reach highest values in the supragranular stratum of all areas. The lowest values are found in the infragranular stratum. Multi-receptor fingerprints of entire areas and their layers segregate functional systems. Cortical types (primary sensory, motor, multimodal association) differ in their receptor fingerprints. PMID:28970785
NASA Astrophysics Data System (ADS)
Mulligan, B. E.; Goodman, L. S.; McBride, D. K.; Mitchell, T. M.; Crosby, T. N.
1984-08-01
This work reviews the areas of monaural and binaural signal detection, auditory discrimination and localization, and reaction times to acoustic signals. The review was written from the perspective of human engineering and focuses primarily on auditory processing of information contained in acoustic signals. The impetus for this effort was to establish a data base to be utilized in the design and evaluation of acoustic displays. Appendix 1 also contains citations of the scientific literature on which the answers to each question were based. There are nineteen questions and answers, and more than two hundred citations in the list of references given in Appendix 2. This is one of two related works, the other of which reviewed the literature in the areas of auditory attention, recognition memory, and auditory perception of patterns, pitch, and loudness.
Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong
2013-01-01
This article aims to investigate whether auditory stimuli in the horizontal plane, particularly those originating from behind the participant, affect audiovisual integration, using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants; auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants; and audiovisual stimuli that included both visual stimuli and auditory stimuli originating from one of the four locations were presented simultaneously. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in the front or back of the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160–200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360–400 milliseconds. Our results confirmed that audiovisual integration was elicited even when auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be more sensitive to information received from behind than from either side. PMID:23799097
Thalamic input to auditory cortex is locally heterogeneous but globally tonotopic
Vasquez-Lopez, Sebastian A; Weissenberger, Yves; Lohse, Michael; Keating, Peter; King, Andrew J
2017-01-01
Topographic representation of the receptor surface is a fundamental feature of sensory cortical organization. This is imparted by the thalamus, which relays information from the periphery to the cortex. To better understand the rules governing thalamocortical connectivity and the origin of cortical maps, we used in vivo two-photon calcium imaging to characterize the properties of thalamic axons innervating different layers of mouse auditory cortex. Although tonotopically organized at a global level, we found that the frequency selectivity of individual thalamocortical axons is surprisingly heterogeneous, even in layers 3b/4 of the primary cortical areas, where the thalamic input is dominated by the lemniscal projection. We also show that thalamocortical input to layer 1 includes collaterals from axons innervating layers 3b/4 and is largely in register with the main input targeting those layers. Such locally varied thalamocortical projections may be useful in enabling rapid contextual modulation of cortical frequency representations. PMID:28891466
Brain-stem evoked potentials and noise effects in seagulls.
Counter, S A
1985-01-01
Brain-stem auditory evoked potentials (BAEP) recorded from the seagull were large-amplitude, short-latency, vertex-positive deflections that originate in the eighth nerve and several brain-stem nuclei. BAEP waveforms were similar in latency and configuration to those reported for certain other lower vertebrates and some mammals. BAEP recorded at several pure-tone frequencies throughout the seagull's auditory spectrum showed an area of heightened auditory sensitivity between 1 and 3 kHz. This range was also found to be the primary bandwidth of the vocalization output of young seagulls. Masking by white noise and pure tones had marked effects on several parameters of the BAEP. In general, the tone- and click-induced BAEP were either reduced or obliterated by both pure-tone and white-noise maskers at specific signal-to-noise ratios and high intensity levels. The masking effects observed in this study may be related to the manner in which seagulls respond to intense environmental noise. One possible conclusion is that intense environmental noise, such as aircraft engine noise, may severely alter the seagull's localization apparatus and induce sonogenic stress, both of which could cause collisions with low-flying aircraft.
Mapping a lateralization gradient within the ventral stream for auditory speech perception.
Specht, Karsten
2013-01-01
Recent models on speech perception propose a dual-stream processing network, with a dorsal stream, extending from the posterior temporal lobe of the left hemisphere through inferior parietal areas into the left inferior frontal gyrus, and a ventral stream that is assumed to originate in the primary auditory cortex in the upper posterior part of the temporal lobe and to extend toward the anterior part of the temporal lobe, where it may connect to the ventral part of the inferior frontal gyrus. This article describes and reviews the results from a series of complementary functional magnetic resonance imaging studies that aimed to trace the hierarchical processing network for speech comprehension within the left and right hemisphere with a particular focus on the temporal lobe and the ventral stream. As hypothesized, the results demonstrate a bilateral involvement of the temporal lobes in the processing of speech signals. However, an increasing leftward asymmetry was detected from auditory-phonetic to lexico-semantic processing and along the posterior-anterior axis, thus forming a "lateralization" gradient. This increasing leftward lateralization was particularly evident for the left superior temporal sulcus and more anterior parts of the temporal lobe.
Evaluation of auditory perception development in neonates by event-related potential technique.
Zhang, Qinfen; Li, Hongxin; Zheng, Aibin; Dong, Xuan; Tu, Wenjuan
2017-08-01
To investigate auditory perception development in neonates and correlate it with days after birth, left and right hemisphere development, and sex, using the event-related potential (ERP) technique. Sixty full-term neonates, consisting of 32 males and 28 females, aged 2-28 days, were included in this study. An auditory oddball paradigm was used to elicit ERPs. N2 wave latencies and areas were recorded at different days after birth to study the relationship between auditory perception and age and to compare the left and right hemispheres, and male and female neonates. Average waveforms of ERPs in neonates evolved from relatively irregular flat-bottomed troughs to relatively regular steep-sided ripples. A good linear relationship between ERPs and days after birth was observed. As days after birth increased, N2 latencies gradually and significantly shortened, and N2 areas gradually and significantly increased (both P<0.01). N2 areas in the central part of the brain were significantly greater, and N2 latencies in the central part significantly shorter, in the left hemisphere compared with the right, indicative of left-hemisphere dominance (both P<0.05). N2 areas were greater and N2 latencies shorter in female neonates compared with males. The neonatal period is one of rapid auditory perception development. In the days following birth, the auditory perception ability of neonates gradually increases. This occurs predominantly in the left hemisphere, with auditory perception ability appearing to develop earlier in female neonates than in males. ERP can be used as an objective index to evaluate auditory perception development in neonates. Copyright © 2017 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.
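The linear trend reported above (N2 latency shortening as days after birth increase) can be sketched with an ordinary least-squares fit. The numbers below are synthetic, chosen only to illustrate the shape of such an analysis; they are not the study's data.

```python
import numpy as np

# Synthetic illustration of the reported linear trend: N2 latency (ms)
# shortens with increasing days after birth. Slope, intercept, and noise
# level are made-up values for demonstration only.
rng = np.random.default_rng(1)
days = np.arange(2, 29)                                # 2-28 days after birth
latency = 320.0 - 2.5 * days + rng.normal(scale=3.0, size=days.size)

# Ordinary least-squares fit of latency on age
slope, intercept = np.polyfit(days, latency, 1)
r = np.corrcoef(days, latency)[0, 1]
print(f"slope = {slope:.2f} ms/day, r = {r:.2f}")      # negative slope, strong fit
```

A negative fitted slope with a correlation near -1 would correspond to the "good linear relationship" the abstract describes.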
Kell, Alexander J E; Yamins, Daniel L K; Shook, Erica N; Norman-Haignere, Sam V; McDermott, Josh H
2018-05-02
A core goal of auditory neuroscience is to build quantitative models that predict cortical responses to natural sounds. Reasoning that a complete model of auditory cortex must solve ecologically relevant tasks, we optimized hierarchical neural networks for speech and music recognition. The best-performing network contained separate music and speech pathways following early shared processing, potentially replicating human cortical organization. The network performed both tasks as well as humans and exhibited human-like errors despite not being optimized to do so, suggesting common constraints on network and human performance. The network predicted fMRI voxel responses substantially better than traditional spectrotemporal filter models throughout auditory cortex. It also provided a quantitative signature of cortical representational hierarchy: primary and non-primary responses were best predicted by intermediate and late network layers, respectively. The results suggest that task optimization provides a powerful set of tools for modeling sensory systems. Copyright © 2018 Elsevier Inc. All rights reserved.
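The shared-then-branched architecture this abstract describes (early layers shared across tasks, then separate speech and music pathways) can be illustrated with a minimal numerical sketch. All layer sizes, weights, and the input are illustrative assumptions for exposition, not the published model.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Rectified-linear nonlinearity applied between layers
    return np.maximum(x, 0.0)

class BranchedNet:
    """Toy shared-then-branched network: early processing is shared,
    then separate 'speech' and 'music' heads produce task outputs.
    Sizes and random weights are illustrative, not the published model."""

    def __init__(self, n_in=64, n_shared=32, n_words=10, n_genres=4):
        self.w_shared = rng.normal(scale=0.1, size=(n_in, n_shared))
        self.w_speech = rng.normal(scale=0.1, size=(n_shared, n_words))
        self.w_music = rng.normal(scale=0.1, size=(n_shared, n_genres))

    def forward(self, x):
        h = relu(x @ self.w_shared)                  # early shared processing
        return h @ self.w_speech, h @ self.w_music   # task-specific pathways

net = BranchedNet()
cochleagram = rng.normal(size=(1, 64))  # stand-in for one cochleagram frame
speech_logits, music_logits = net.forward(cochleagram)
print(speech_logits.shape, music_logits.shape)  # (1, 10) (1, 4)
```

The design point is simply that both task outputs depend on the same early representation, mirroring the abstract's claim that task-specific pathways follow early shared processing.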
Gonzálvez, Gloria G; Trimmel, Karin; Haag, Anja; van Graan, Louis A; Koepp, Matthias J; Thompson, Pamela J; Duncan, John S
2016-12-01
Verbal fluency functional MRI (fMRI) is used for predicting language deficits after anterior temporal lobe resection (ATLR) for temporal lobe epilepsy (TLE), but primarily engages frontal lobe areas. In this observational study we investigated fMRI paradigms using visual and auditory stimuli, which predominantly involve language areas resected during ATLR. Twenty-three controls and 33 patients (20 left (LTLE), 13 right (RTLE)) were assessed using three fMRI paradigms: verbal fluency; auditory naming, with a contrast of auditory reversed speech; and picture naming, with a contrast of scrambled pictures and blurred faces. Group analysis showed bilateral temporal activations for auditory naming and picture naming. Correcting for auditory and visual input (by subtracting activations resulting from auditory reversed speech and from scrambled pictures and blurred faces, respectively) resulted in left-lateralised activations for patients and controls, which was more pronounced for LTLE compared to RTLE patients. Individual subject activations at a threshold of T>2.5, extent >10 voxels, showed that verbal fluency activated predominantly the left inferior frontal gyrus (IFG) in 90% of LTLE, 92% of RTLE, and 65% of controls, compared to right IFG activations in only 15% of LTLE and RTLE and 26% of controls. Middle temporal (MTG) or superior temporal gyrus (STG) activations were seen on the left in 30% of LTLE, 23% of RTLE, and 52% of controls, and on the right in 15% of LTLE, 15% of RTLE, and 35% of controls. Auditory naming activated temporal areas more frequently than did verbal fluency (LTLE: 93%/73%; RTLE: 92%/58%; controls: 82%/70% (left/right)). Controlling for auditory input resulted in predominantly left-sided temporal activations. Picture naming resulted in temporal lobe activations less frequently than did auditory naming (LTLE 65%/55%; RTLE 53%/46%; controls 52%/35% (left/right)). Controlling for visual input had left-lateralising effects. 
Auditory and picture naming activated temporal lobe structures, which are resected during ATLR, more frequently than did verbal fluency. Controlling for auditory and visual input resulted in more left-lateralised activations. We hypothesise that these paradigms may be more predictive of postoperative language decline than verbal fluency fMRI. Copyright © 2016 Elsevier B.V. All rights reserved.
Primary Synovial Sarcoma of External Auditory Canal: A Case Report.
Devi, Aarani; Jayakumar, Krishnannair L L
2017-07-20
Synovial sarcoma is a rare malignant tumor of mesenchymal origin. Primary synovial sarcoma of the ear is extremely rare and to date only two cases have been published in English medical literature. Though the tumor is reported to have an aggressive nature, early diagnosis and treatment may improve the outcome. Here, we report a rare case of synovial sarcoma of the external auditory canal in an 18-year-old male who was managed by chemotherapy and referred for palliation due to tumor progression.
Auditory-motor interaction revealed by fMRI: speech, music, and working memory in area Spt.
Hickok, Gregory; Buchsbaum, Bradley; Humphries, Colin; Muftuler, Tugan
2003-07-01
The concept of auditory-motor interaction pervades speech science research, yet the cortical systems supporting this interface have not been elucidated. Drawing on experimental designs used in recent work in sensory-motor integration in the cortical visual system, we used fMRI in an effort to identify human auditory regions with both sensory and motor response properties, analogous to single-unit responses in known visuomotor integration areas. The sensory phase of the task involved listening to speech (nonsense sentences) or music (novel piano melodies); the "motor" phase of the task involved covert rehearsal/humming of the auditory stimuli. A small set of areas in the superior temporal and temporal-parietal cortex responded both during the listening phase and the rehearsal/humming phase. A left lateralized region in the posterior Sylvian fissure at the parietal-temporal boundary, area Spt, showed particularly robust responses to both phases of the task. Frontal areas also showed combined auditory + rehearsal responsivity consistent with the claim that the posterior activations are part of a larger auditory-motor integration circuit. We hypothesize that this circuit plays an important role in speech development as part of the network that enables acoustic-phonetic input to guide the acquisition of language-specific articulatory-phonetic gestures; this circuit may play a role in analogous musical abilities. In the adult, this system continues to support aspects of speech production, and, we suggest, supports verbal working memory.
Sex differences in the representation of call stimuli in a songbird secondary auditory area
Giret, Nicolas; Menardy, Fabien; Del Negro, Catherine
2015-01-01
Understanding how communication sounds are encoded in the central auditory system is critical to deciphering the neural bases of acoustic communication. Songbirds use learned or unlearned vocalizations in a variety of social interactions. They have telencephalic auditory areas specialized for processing natural sounds and considered as playing a critical role in the discrimination of behaviorally relevant vocal sounds. The zebra finch, a highly social songbird species, forms lifelong pair bonds. Only male zebra finches sing. However, both sexes produce the distance call when placed in visual isolation. This call is sexually dimorphic, is learned only in males and provides support for individual recognition in both sexes. Here, we assessed whether auditory processing of distance calls differs between paired males and females by recording spiking activity in a secondary auditory area, the caudolateral mesopallium (CLM), while presenting the distance calls of a variety of individuals, including the bird itself, the mate, familiar and unfamiliar males and females. In males, the CLM is potentially involved in auditory feedback processing important for vocal learning. Based on both the analyses of spike rates and temporal aspects of discharges, our results clearly indicate that call-evoked responses of CLM neurons are sexually dimorphic, being stronger, lasting longer, and conveying more information about calls in males than in females. In addition, how auditory responses vary among call types differs between sexes. In females, response strength differs between familiar male and female calls. In males, temporal features of responses reveal a sensitivity to the bird's own call. These findings provide evidence that sexual dimorphism occurs in higher-order processing areas within the auditory system. 
They suggest a sexual dimorphism in the function of the CLM, which may contribute to transmitting information about self-generated calls in males and to storing information about the bird's auditory experience in females. PMID:26578918
Relationship between Speech Production and Perception in People Who Stutter.
Lu, Chunming; Long, Yuhang; Zheng, Lifen; Shi, Guang; Liu, Li; Ding, Guosheng; Howell, Peter
2016-01-01
Speech production difficulties are apparent in people who stutter (PWS). PWS also have difficulties in speech perception compared to controls. It is unclear whether the speech perception difficulties in PWS are independent of, or related to, their speech production difficulties. To investigate this issue, functional MRI data were collected on 13 PWS and 13 controls whilst the participants performed a speech production task and a speech perception task. PWS performed more poorly than controls in the perception task and the poorer performance was associated with a functional activity difference in the left anterior insula (part of the speech motor area) compared to controls. PWS also showed a functional activity difference in this and the surrounding area [left inferior frontal cortex (IFC)/anterior insula] in the production task compared to controls. Conjunction analysis showed that the functional activity differences between PWS and controls in the left IFC/anterior insula coincided across the perception and production tasks. Furthermore, Granger Causality Analysis on the resting-state fMRI data of the participants showed that the causal connection from the left IFC/anterior insula to an area in the left primary auditory cortex (Heschl's gyrus) differed significantly between PWS and controls. The strength of this connection correlated significantly with performance in the perception task. These results suggest that speech perception difficulties in PWS are associated with anomalous functional activity in the speech motor area, and the altered functional connectivity from this area to the auditory area plays a role in the speech perception difficulties of PWS.
Rapid Effects of Hearing Song on Catecholaminergic Activity in the Songbird Auditory Pathway
Matragrano, Lisa L.; Beaulieu, Michaël; Phillip, Jessica O.; Rae, Ali I.; Sanford, Sara E.; Sockman, Keith W.; Maney, Donna L.
2012-01-01
Catecholaminergic (CA) neurons innervate sensory areas and affect the processing of sensory signals. For example, in birds, CA fibers innervate the auditory pathway at each level, including the midbrain, thalamus, and forebrain. We have shown previously that in female European starlings, CA activity in the auditory forebrain can be enhanced by exposure to attractive male song for one week. It is not known, however, whether hearing song can initiate that activity more rapidly. Here, we exposed estrogen-primed, female white-throated sparrows to conspecific male song and looked for evidence of rapid synthesis of catecholamines in auditory areas. In one hemisphere of the brain, we used immunohistochemistry to detect the phosphorylation of tyrosine hydroxylase (TH), a rate-limiting enzyme in the CA synthetic pathway. We found that immunoreactivity for TH phosphorylated at serine 40 increased dramatically in the auditory forebrain, but not the auditory thalamus and midbrain, after 15 min of song exposure. In the other hemisphere, we used high pressure liquid chromatography to measure catecholamines and their metabolites. We found that two dopamine metabolites, dihydroxyphenylacetic acid and homovanillic acid, increased in the auditory forebrain but not the auditory midbrain after 30 min of exposure to conspecific song. Our results are consistent with the hypothesis that exposure to a behaviorally relevant auditory stimulus rapidly induces CA activity, which may play a role in auditory responses. PMID:22724011
Jiang, Fang; Stecker, G. Christopher; Boynton, Geoffrey M.; Fine, Ione
2016-01-01
Early blind subjects exhibit superior abilities for processing auditory motion, which are accompanied by enhanced BOLD responses to auditory motion within hMT+ and reduced responses within right planum temporale (rPT). Here, by comparing BOLD responses to auditory motion in hMT+ and rPT within sighted controls, early blind, late blind, and sight-recovery individuals, we were able to separately examine the effects of developmental and adult visual deprivation on cortical plasticity within these two areas. We find that both the enhanced auditory motion responses in hMT+ and the reduced functionality in rPT are driven by the absence of visual experience early in life; neither loss nor recovery of vision later in life had a discernable influence on plasticity within these areas. Cortical plasticity as a result of blindness has generally been presumed to be mediated by competition across modalities within a given cortical region. The reduced functionality within rPT as a result of early visual loss implicates an additional mechanism for cross-modal plasticity as a result of early blindness: competition across different cortical areas for a functional role. PMID:27458357
Sanju, Himanshu Kumar; Kumar, Prawin
2016-10-01
Introduction Mismatch Negativity is a negative component of the event-related potential (ERP) elicited by any discriminable changes in auditory stimulation. Objective The present study aimed to assess pre-attentive auditory discrimination skill with fine and gross differences between auditory stimuli. Method Seventeen normal-hearing individuals participated in the study with informed consent. To assess pre-attentive auditory discrimination skill with a fine difference between auditory stimuli, we recorded mismatch negativity (MMN) with a pair of stimuli (pure tones), using /1000 Hz/ and /1010 Hz/ with /1000 Hz/ as frequent stimulus and /1010 Hz/ as infrequent stimulus. Similarly, we used /1000 Hz/ and /1100 Hz/ with /1000 Hz/ as frequent stimulus and /1100 Hz/ as infrequent stimulus to assess pre-attentive auditory discrimination skill with a gross difference between auditory stimuli. We analyzed MMN for onset latency, offset latency, peak latency, peak amplitude, and area under the curve parameters. Result Results revealed that MMN was present in only 64% of the individuals in both conditions. Further, Multivariate Analysis of Variance (MANOVA) showed no significant difference in all measures of MMN (onset latency, offset latency, peak latency, peak amplitude, and area under the curve) in both conditions. Conclusion The present study showed similar pre-attentive skills for both conditions: fine (1000 Hz and 1010 Hz) and gross (1000 Hz and 1100 Hz) differences in auditory stimuli at a higher level (endogenous) of the auditory system.
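The MMN measures the authors analyzed (peak latency, peak amplitude, and area under the curve) can be computed from a difference waveform along these lines. The waveform below is a synthetic stand-in, not recorded data:

```python
import numpy as np

# Hypothetical MMN difference waveform (deviant minus standard ERP):
# a negative-going Gaussian deflection peaking near 150 ms. Invented values.
fs = 1000                               # sampling rate, Hz
t = np.arange(0, 0.400, 1 / fs)         # 0-400 ms epoch
mmn = -3.0 * np.exp(-((t - 0.150) ** 2) / (2 * 0.030 ** 2))  # microvolts

# Peak latency and amplitude: the most negative point of the waveform
peak_idx = np.argmin(mmn)
peak_latency_ms = t[peak_idx] * 1000
peak_amplitude_uv = mmn[peak_idx]

# Area under the curve: rectangular integration of the rectified waveform
area_uv_s = np.sum(np.abs(mmn)) / fs

print(f"peak {peak_latency_ms:.0f} ms, {peak_amplitude_uv:.1f} uV, "
      f"area {area_uv_s:.3f} uV*s")
```

Onset and offset latencies would be read off similarly, e.g. as the first and last samples where the deflection crosses a criterion amplitude.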
Primary Auditory Cortex is Required for Anticipatory Motor Response.
Li, Jingcheng; Liao, Xiang; Zhang, Jianxiong; Wang, Meng; Yang, Nian; Zhang, Jun; Lv, Guanghui; Li, Haohong; Lu, Jian; Ding, Ran; Li, Xingyi; Guang, Yu; Yang, Zhiqi; Qin, Han; Jin, Wenjun; Zhang, Kuan; He, Chao; Jia, Hongbo; Zeng, Shaoqun; Hu, Zhian; Nelken, Israel; Chen, Xiaowei
2017-06-01
The ability of the brain to predict future events based on the pattern of recent sensory experience is critical for guiding an animal's behavior. Neocortical circuits for ongoing processing of sensory stimuli are extensively studied, but their contributions to the anticipation of upcoming sensory stimuli remain less understood. We therefore used in vivo cellular imaging and fiber photometry to record mouse primary auditory cortex to elucidate its role in processing anticipated stimulation. We found neuronal ensembles in layers 2/3, 4, and 5 which were activated in relationship to anticipated sound events following rhythmic stimulation. These neuronal activities correlated with the occurrence of anticipatory motor responses in an auditory learning task. Optogenetic manipulation experiments revealed an essential role of such neuronal activities in producing the anticipatory behavior. These results strongly suggest that the neural circuits of primary sensory cortex are critical for coding predictive information and transforming it into anticipatory motor behavior. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Fundamental deficits of auditory perception in Wernicke's aphasia.
Robson, Holly; Grube, Manon; Lambon Ralph, Matthew A; Griffiths, Timothy D; Sage, Karen
2013-01-01
This work investigates the nature of the comprehension impairment in Wernicke's aphasia (WA), by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. WA, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated with auditory linguistic comprehension impairments, functional-imaging indicates that these areas may not be specific to speech processing but part of a network for generic auditory analysis. We examined analysis of basic acoustic stimuli in WA participants (n = 10) using auditory stimuli reflective of theories of cortical auditory processing and of speech cues. Auditory spectral, temporal and spectro-temporal analysis was assessed using pure-tone frequency discrimination, frequency modulation (FM) detection and the detection of dynamic modulation (DM) in "moving ripple" stimuli. All tasks used criterion-free, adaptive measures of threshold to ensure reliable results at the individual level. Participants with WA showed normal frequency discrimination but significant impairments in FM and DM detection, relative to age- and hearing-matched controls at the group level (n = 10). At the individual level, there was considerable variation in performance, and thresholds for both FM and DM detection correlated significantly with auditory comprehension abilities in the WA participants. These results demonstrate the co-occurrence of a deficit in fundamental auditory processing of temporal and spectro-temporal non-verbal stimuli in WA, which may have a causal contribution to the auditory language comprehension impairment. Results are discussed in the context of traditional neuropsychology and current models of cortical auditory processing. Copyright © 2012 Elsevier Ltd. All rights reserved.
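One common criterion-free adaptive procedure of the kind the study describes is a 2-down/1-up staircase, which converges near the 70.7%-correct point. The exact rule the authors used is not stated here, so this sketch is illustrative only:

```python
# Sketch of a 2-down/1-up adaptive staircase for threshold estimation.
# `respond` simulates a listener: it returns True if the stimulus at the
# given level is detected/discriminated correctly.
def staircase(respond, start=20.0, step=2.0, n_reversals=8):
    level, direction, reversals, correct_run = start, -1, [], 0
    while len(reversals) < n_reversals:
        if respond(level):
            correct_run += 1
            if correct_run == 2:            # two correct in a row: harder
                correct_run = 0
                if direction == +1:         # track changed direction: reversal
                    reversals.append(level)
                direction = -1
                level -= step
        else:                               # one wrong: easier
            correct_run = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step
    # Threshold estimate: mean of the last reversal levels
    tail = reversals[-6:]
    return sum(tail) / len(tail)

# Deterministic toy listener who detects anything above level 10
threshold = staircase(lambda lvl: lvl > 10)
print(round(threshold, 1))
```

With this noiseless toy listener the track oscillates between levels 10 and 12, so the estimate lands at 11, between the detectable and undetectable levels; real psychophysical data would of course be stochastic.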
Zhang, G-Y; Yang, M; Liu, B; Huang, Z-C; Li, J; Chen, J-Y; Chen, H; Zhang, P-P; Liu, L-J; Wang, J; Teng, G-J
2016-01-28
Previous studies often report that early auditory deprivation or congenital deafness contributes to cross-modal reorganization in the auditory-deprived cortex, and this cross-modal reorganization limits clinical benefit from cochlear prosthetics. However, there are inconsistencies among study results on cortical reorganization in those subjects with long-term unilateral sensorineural hearing loss (USNHL). It is also unclear whether there exists a similar cross-modal plasticity of the auditory cortex for acquired monaural deafness and early or congenital deafness. To address this issue, we constructed directional brain functional networks based on entropy connectivity of resting-state functional MRI and examined changes in these networks. Thirty-four long-term USNHL individuals and seventeen normally hearing individuals participated in the test, and all USNHL patients had acquired deafness. We found that certain brain regions of the sensorimotor and visual networks presented enhanced synchronous output entropy connectivity with the left primary auditory cortex in the left long-term USNHL individuals as compared with normally hearing individuals. In particular, the left USNHL group showed more significant changes of entropy connectivity than the right USNHL group. No significant plastic changes were observed in the right USNHL group. Our results indicate that the left primary auditory cortex (non-auditory-deprived cortex) in patients with left USNHL has been reorganized by visual and sensorimotor modalities through cross-modal plasticity. Furthermore, the cross-modal reorganization also alters the directional brain functional networks. Auditory deprivation from the left or right side generates different influences on the human brain. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
Park, Hyojin; Ince, Robin A A; Schyns, Philippe G; Thut, Gregor; Gross, Joachim
2015-06-15
Humans show a remarkable ability to understand continuous speech even under adverse listening conditions. This ability critically relies on dynamically updated predictions of incoming sensory information, but exactly how top-down predictions improve speech processing is still unclear. Brain oscillations are a likely mechanism for these top-down predictions [1, 2]. Quasi-rhythmic components in speech are known to entrain low-frequency oscillations in auditory areas [3, 4], and this entrainment increases with intelligibility [5]. We hypothesize that top-down signals from frontal brain areas causally modulate the phase of brain oscillations in auditory cortex. We use magnetoencephalography (MEG) to monitor brain oscillations in 22 participants during continuous speech perception. We characterize prominent spectral components of speech-brain coupling in auditory cortex and use causal connectivity analysis (transfer entropy) to identify the top-down signals driving this coupling more strongly during intelligible speech than during unintelligible speech. We report three main findings. First, frontal and motor cortices significantly modulate the phase of speech-coupled low-frequency oscillations in auditory cortex, and this effect depends on intelligibility of speech. Second, top-down signals are significantly stronger for left auditory cortex than for right auditory cortex. Third, speech-auditory cortex coupling is enhanced as a function of stronger top-down signals. Together, our results suggest that low-frequency brain oscillations play a role in implementing predictive top-down control during continuous speech perception and that top-down control is largely directed at left auditory cortex. This suggests a close relationship between (left-lateralized) speech production areas and the implementation of top-down control in continuous speech perception. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
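A simple measure of the speech-brain coupling described here is the phase-locking value (PLV) between the speech envelope and a cortical signal entrained at the same rate. This sketch uses synthetic signals and PLV rather than the paper's MEG data and transfer-entropy analysis:

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(1)

# Synthetic illustration: a quasi-rhythmic "speech envelope" at a 4 Hz
# syllable rate and a noisy "auditory cortex" oscillation entrained to it.
fs, dur = 200, 10.0
t = np.arange(0, dur, 1 / fs)
speech_env = np.sin(2 * np.pi * 4 * t)
brain = np.sin(2 * np.pi * 4 * t + 0.5) + 0.3 * rng.standard_normal(t.size)

# Instantaneous phase via the analytic signal (Hilbert transform)
phase_speech = np.angle(hilbert(speech_env))
phase_brain = np.angle(hilbert(brain))

# PLV: magnitude of the mean phase-difference vector (1 = perfect locking)
plv = np.abs(np.mean(np.exp(1j * (phase_speech - phase_brain))))
print(f"PLV = {plv:.2f}")
```

A constant phase lag (here 0.5 rad) does not reduce the PLV; only phase jitter does, which is why entrainment measures of this family are insensitive to fixed conduction delays.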
Cortical Interactions Underlying the Production of Speech Sounds
ERIC Educational Resources Information Center
Guenther, Frank H.
2006-01-01
Speech production involves the integration of auditory, somatosensory, and motor information in the brain. This article describes a model of speech motor control in which a feedforward control system, involving premotor and primary motor cortex and the cerebellum, works in concert with auditory and somatosensory feedback control systems that…
The Kindergarten Auditory Screening Test as a Predictor of Reading Disability
ERIC Educational Resources Information Center
Margolis, Howard
1976-01-01
Correlation coefficients were obtained between the Kindergarten Auditory Screening Test (KAST), the Metropolitan Readiness Test (MRT), and the Gates MacGinitie Reading Tests, Primary Form (GMRT). Neither the coefficients obtained nor an examination of extreme groups indicated that the KAST was an effective predictor of reading disability. (Author)
Auditory hallucinations: nomenclature and classification.
Blom, Jan Dirk; Sommer, Iris E C
2010-03-01
The literature on the possible neurobiologic correlates of auditory hallucinations is expanding rapidly. For an adequate understanding and linking of this emerging knowledge, a clear and uniform nomenclature is a prerequisite. The primary purpose of the present article is to provide an overview of the nomenclature and classification of auditory hallucinations. Relevant data were obtained from books, PubMed, Embase, and the Cochrane Library. The results are presented in the form of several classificatory arrangements of auditory hallucinations, governed by the principles of content, perceived source, perceived vivacity, relation to the sleep-wake cycle, and association with suspected neurobiologic correlates. This overview underscores the necessity to reappraise the concepts of auditory hallucinations developed during the era of classic psychiatry, to incorporate them into our current nomenclature and classification of auditory hallucinations, and to test them empirically with the aid of the structural and functional imaging techniques currently available.
Pratt, Hillel; Bleich, Naomi; Mittelman, Nomi
2015-11-01
Spatio-temporal distributions of cortical activity to audio-visual presentations of meaningless vowel-consonant-vowels and the effects of audio-visual congruence/incongruence, with emphasis on the McGurk effect, were studied. The McGurk effect occurs when a clearly audible syllable with one consonant is presented simultaneously with a visual presentation of a face articulating a syllable with a different consonant, and the resulting percept is a syllable with a consonant other than the auditorily presented one. Twenty subjects listened to pairs of audio-visually congruent or incongruent utterances and indicated whether pair members were the same or not. Source current densities of event-related potentials to the first utterance in the pair were estimated and effects of stimulus-response combinations, brain area, hemisphere, and clarity of visual articulation were assessed. Auditory cortex, superior parietal cortex, and middle temporal cortex were the most consistently involved areas across experimental conditions. Early (<200 msec) processing of the consonant was overall prominent in the left hemisphere, except right hemisphere prominence in superior parietal cortex and secondary visual cortex. Clarity of visual articulation impacted activity in secondary visual cortex and Wernicke's area. McGurk perception was associated with decreased activity in primary and secondary auditory cortices and Wernicke's area before 100 msec, and increased activity around 100 msec, which decreased again around 180 msec. Activity in Broca's area was unaffected by McGurk perception and was only increased to congruent audio-visual stimuli 30-70 msec following consonant onset. The results suggest left hemisphere prominence in the effects of stimulus and response conditions on eight brain areas involved in dynamically distributed parallel processing of audio-visual integration. 
Initially (30-70 msec) subcortical contributions to auditory cortex, superior parietal cortex, and middle temporal cortex occur. During 100-140 msec, peristriate visual influences and Wernicke's area join in the processing. Resolution of incongruent audio-visual inputs is then attempted, and if successful, McGurk perception occurs and cortical activity in left hemisphere further increases between 170 and 260 msec.
Auditory perceptual simulation: Simulating speech rates or accents?
Zhou, Peiyun; Christianson, Kiel
2016-07-01
When readers engage in Auditory Perceptual Simulation (APS) during silent reading, they mentally simulate characteristics of voices attributed to a particular speaker or a character depicted in the text. Previous research found that auditory perceptual simulation of a faster native English speaker during silent reading led to shorter reading times than auditory perceptual simulation of a slower non-native English speaker. Yet, it was uncertain whether this difference was triggered by the different speech rates of the speakers, or by the difficulty of simulating an unfamiliar accent. The current study investigates this question by comparing faster Indian-English speech and slower American-English speech in the auditory perceptual simulation paradigm. Analyses of reading times of individual words and the full sentence reveal that the auditory perceptual simulation effect again modulated reading rate, and auditory perceptual simulation of the faster Indian-English speech led to faster reading rates compared to auditory perceptual simulation of the slower American-English speech. The comparison between this experiment and the data from Zhou and Christianson (2016) demonstrates further that the "speakers'" speech rates, rather than the difficulty of simulating a non-native accent, is the primary mechanism underlying auditory perceptual simulation effects. Copyright © 2016 Elsevier B.V. All rights reserved.
Auditory and visual spatial impression: Recent studies of three auditoria
NASA Astrophysics Data System (ADS)
Nguyen, Andy; Cabrera, Densil
2004-10-01
Auditory spatial impression is widely studied for its contribution to auditorium acoustical quality. By contrast, visual spatial impression in auditoria has received relatively little attention in formal studies. This paper reports results from a series of experiments investigating the auditory and visual spatial impression of concert auditoria. For auditory stimuli, a fragment of an anechoic recording of orchestral music was convolved with calibrated binaural impulse responses, which had been made with the dummy head microphone at a wide range of positions in three auditoria and the sound source on the stage. For visual stimuli, greyscale photographs were used, taken at the same positions in the three auditoria, with a visual target on the stage. Subjective experiments were conducted with auditory stimuli alone, visual stimuli alone, and visual and auditory stimuli combined. In these experiments, subjects rated apparent source width, listener envelopment, intimacy and source distance (auditory stimuli), and spaciousness, envelopment, stage dominance, intimacy and target distance (visual stimuli). Results show target distance to be of primary importance in auditory and visual spatial impression, thereby providing a basis for covariance between some attributes of auditory and visual spatial impression. Nevertheless, some attributes of spatial impression diverge between the senses.
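The auralization step described here (convolving an anechoic recording with binaural impulse responses measured in the hall) reduces to a per-ear convolution. The source signal and impulse responses below are toy stand-ins, not the study's calibrated measurements:

```python
import numpy as np

# Dry (anechoic) source: a short 440 Hz tone as a stand-in for the
# orchestral fragment used in the study.
fs = 8000
anechoic = np.sin(2 * np.pi * 440 * np.arange(0, 0.5, 1 / fs))

# Toy binaural impulse responses: direct sound plus one early reflection
# per ear, with a small interaural delay and level difference on the right.
ir_left = np.zeros(400)
ir_left[0], ir_left[240] = 1.0, 0.5
ir_right = np.zeros(400)
ir_right[5], ir_right[260] = 0.9, 0.4

# Auralization: convolve the dry signal with each ear's impulse response
left = np.convolve(anechoic, ir_left)
right = np.convolve(anechoic, ir_right)
binaural = np.stack([left, right])   # 2 x N stimulus for headphone playback

print(binaural.shape)
```

In practice the impulse responses would be full dummy-head measurements at each seat, and playback would be calibrated to the measured sound levels.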
Skouras, Stavros; Lohmann, Gabriele
2018-01-01
Sound is a potent elicitor of emotions. Auditory core, belt and parabelt regions have anatomical connections to a large array of limbic and paralimbic structures which are involved in the generation of affective activity. However, little is known about the functional role of auditory cortical regions in emotion processing. Using functional magnetic resonance imaging and music stimuli that evoke joy or fear, our study reveals that anterior and posterior regions of auditory association cortex have emotion-characteristic functional connectivity with limbic/paralimbic (insula, cingulate cortex, and striatum), somatosensory, visual, motor-related, and attentional structures. We found that these regions have remarkably high emotion-characteristic eigenvector centrality, revealing that they have influential positions within emotion-processing brain networks with “small-world” properties. By contrast, primary auditory fields showed surprisingly strong emotion-characteristic functional connectivity with intra-auditory regions. Our findings demonstrate that the auditory cortex hosts regions that are influential within networks underlying the affective processing of auditory information. We anticipate our results to incite research specifying the role of the auditory cortex—and sensory systems in general—in emotion processing, beyond the traditional view that sensory cortices have merely perceptual functions. PMID:29385142
A Systematic Review of Mapping Strategies for the Sonification of Physical Quantities
Dubus, Gaël; Bresin, Roberto
2013-01-01
The field of sonification has progressed greatly over the past twenty years and currently constitutes an established area of research. This article aims at exploiting and organizing the knowledge accumulated in previous experimental studies to build a foundation for future sonification works. A systematic review of these studies may reveal trends in sonification design, and therefore support the development of design guidelines. To this end, we have reviewed and analyzed 179 scientific publications related to sonification of physical quantities. Using a bottom-up approach, we set up a list of conceptual dimensions belonging to both physical and auditory domains. Mappings used in the reviewed works were identified, forming a database of 495 entries. Frequency of use was analyzed among these conceptual dimensions as well as higher-level categories. Results confirm two hypotheses formulated in a preliminary study: pitch is by far the most used auditory dimension in sonification applications, and spatial auditory dimensions are almost exclusively used to sonify kinematic quantities. To detect successful as well as unsuccessful sonification strategies, assessment of mapping efficiency conducted in the reviewed works was considered. Results show that a proper evaluation of sonification mappings is performed only in a marginal proportion of publications. Additional aspects of the publication database were investigated: historical distribution of sonification works is presented, projects are classified according to their primary function, and the sonic material used in the auditory display is discussed. Finally, a mapping-based approach for characterizing sonification is proposed. PMID:24358192
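The frequency analysis described in the review reduces to tallying mapping pairs across the database and counting uses per auditory dimension. A minimal sketch of that tally in Python; the entries below are hypothetical illustrations, not taken from the 495-entry database:

```python
from collections import Counter

# Hypothetical excerpt of a sonification-mapping database:
# each entry pairs a physical quantity with an auditory dimension.
mappings = [
    ("velocity", "pitch"), ("temperature", "pitch"), ("pressure", "loudness"),
    ("position", "azimuth"), ("velocity", "pitch"), ("force", "tempo"),
    ("position", "azimuth"), ("acceleration", "pitch"),
]

# Frequency of use per auditory dimension, as in the review's analysis.
auditory_counts = Counter(aud for _, aud in mappings)
most_used, count = auditory_counts.most_common(1)[0]
print(most_used, count)  # pitch dominates in this toy excerpt, as in the review

# Check the second finding: are spatial auditory dimensions (here, azimuth)
# used exclusively to sonify kinematic quantities?
kinematic = {"position", "velocity", "acceleration"}
spatial_only_kinematic = all(
    phys in kinematic for phys, aud in mappings if aud == "azimuth"
)
print(spatial_only_kinematic)
```

On this toy excerpt both of the review's hypotheses hold: pitch is the most frequently used auditory dimension, and the spatial dimension is mapped only from kinematic quantities.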
Neural correlates of auditory scene analysis and perception
Cohen, Yale E.
2014-01-01
The auditory system is designed to transform acoustic information from low-level sensory representations into perceptual representations. These perceptual representations are the computational result of the auditory system's ability to group and segregate spectral, spatial and temporal regularities in the acoustic environment into stable perceptual units (i.e., sounds or auditory objects). Current evidence suggests that the cortex, specifically the ventral auditory pathway, is responsible for the computations most closely related to perceptual representations. Here, we discuss how the transformations along the ventral auditory pathway relate to auditory percepts, with special attention paid to the processing of vocalizations and categorization, and explore recent models of how these areas may carry out these computations. PMID:24681354
Suga, Nobuo
2011-01-01
The central auditory system consists of the lemniscal and nonlemniscal systems. The thalamic lemniscal and non-lemniscal auditory nuclei differ from each other in response properties and neural connectivities. The cortical auditory areas receiving the projections from these thalamic nuclei interact with each other through corticocortical projections and project down to the subcortical auditory nuclei. This corticofugal (descending) system forms multiple feedback loops with the ascending system. The corticocortical and corticofugal projections modulate auditory signal processing and play an essential role in the plasticity of the auditory system. Focal electric stimulation (comparable to repetitive tonal stimulation) of the lemniscal system evokes three major types of changes in the physiological properties, such as the tuning to specific values of acoustic parameters, of cortical and subcortical auditory neurons through different combinations of facilitation and inhibition. For such changes, a neuromodulator, acetylcholine, plays an essential role. Electric stimulation of the nonlemniscal system evokes changes in the lemniscal system that are different from those evoked by lemniscal stimulation. Auditory signals ascending from the lemniscal and nonlemniscal thalamic nuclei to the cortical auditory areas appear to be selected or adjusted by a “differential” gating mechanism. Conditioning for associative learning and pseudo-conditioning for nonassociative learning respectively elicit tone-specific and nonspecific plastic changes. The lemniscal, corticofugal and cholinergic systems are involved in eliciting the former, but not the latter. The current article reviews recent progress in research on corticocortical and corticofugal modulations of the auditory system and its plasticity elicited by conditioning and pseudo-conditioning. PMID:22155273
Primary Synovial Sarcoma of External Auditory Canal: A Case Report
Jayakumar, Krishnannair L.
2017-01-01
Synovial sarcoma is a rare malignant tumor of mesenchymal origin. Primary synovial sarcoma of the ear is extremely rare and to date only two cases have been published in English medical literature. Though the tumor is reported to have an aggressive nature, early diagnosis and treatment may improve the outcome. Here, we report a rare case of synovial sarcoma of the external auditory canal in an 18-year-old male who was managed by chemotherapy and referred for palliation due to tumor progression. PMID:28948118
Rieger, Kathryn; Rarra, Marie-Helene; Moor, Nicolas; Diaz Hernandez, Laura; Baenninger, Anja; Razavi, Nadja; Dierks, Thomas; Hubl, Daniela; Koenig, Thomas
2018-03-01
Previous studies showed a global reduction of the event-related potential component N100 in patients with schizophrenia, a phenomenon that is even more pronounced during auditory verbal hallucinations. This reduction presumably results from dysfunctional activation of the primary auditory cortex by inner speech, which reduces its responsiveness to external stimuli. With this study, we tested the feasibility of enhancing the responsiveness of the primary auditory cortex to external stimuli by upregulating the event-related potential component N100 in healthy control subjects. A total of 15 healthy subjects performed 8 double-sessions of EEG-neurofeedback training over 2 weeks. A linear mixed-effects model showed a significant active learning effect within sessions (t = 5.99, P < .001) against an unspecific habituation effect that lowered the N100 amplitude over time. Across sessions, a significant increase in the passive condition (t = 2.42, P = .03), termed the carry-over effect, was observed. Given that the carry-over effect is one of the ultimate aims of neurofeedback, it seems reasonable to apply this neurofeedback training protocol to influence the N100 amplitude in patients with schizophrenia. This intervention could provide an alternative treatment option for auditory verbal hallucinations in these patients.
A physiologically based model for temporal envelope encoding in human primary auditory cortex.
Dugué, Pierre; Le Bouquin-Jeannès, Régine; Edeline, Jean-Marc; Faucon, Gérard
2010-09-01
Communication sounds exhibit temporal envelope fluctuations in the low frequency range (<70 Hz) and human speech has prominent 2-16 Hz modulations with a maximum at 3-4 Hz. Here, we propose a new phenomenological model of the human auditory pathway (from cochlea to primary auditory cortex) to simulate responses to amplitude-modulated white noise. To validate the model, performance was estimated by quantifying temporal modulation transfer functions (TMTFs). Previous models considered either the lower stages of the auditory system (up to the inferior colliculus) or only the thalamocortical loop. The present model, divided into two stages, is based on anatomical and physiological findings and includes the entire auditory pathway. The first stage, from the outer ear to the colliculus, incorporates inhibitory interneurons in the cochlear nucleus to increase performance at high stimulus levels. The second stage takes into account the anatomical connections of the thalamocortical system and includes the fast and slow excitatory and inhibitory currents. After optimizing the parameters of the model to reproduce the diversity of TMTFs obtained from human subjects, a patient-specific model was derived and the parameters were optimized to effectively reproduce both spontaneous activity and the oscillatory part of the evoked response. Copyright (c) 2010 Elsevier B.V. All rights reserved.
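The stimuli driving such TMTF measurements are amplitude-modulated white noise: a noise carrier multiplied by a sinusoidal envelope. A minimal generator as a sketch (parameter names are ours, not the paper's):

```python
import math
import random

def am_white_noise(duration_s, fs, mod_freq_hz, mod_depth):
    """Gaussian white noise n(t) with the envelope (1 + m*sin(2*pi*fm*t)).

    mod_depth m lies in [0, 1]; TMTFs probe modulation frequencies fm
    across roughly the 2-70 Hz range discussed in the abstract.
    """
    n = int(duration_s * fs)
    return [
        random.gauss(0.0, 1.0)
        * (1.0 + mod_depth * math.sin(2.0 * math.pi * mod_freq_hz * i / fs))
        for i in range(n)
    ]

# One second of noise modulated at 4 Hz, near the peak of the speech
# envelope spectrum mentioned above.
random.seed(0)
stimulus = am_white_noise(1.0, 16000, 4.0, 0.8)
print(len(stimulus))  # 16000 samples
```

Sweeping `mod_freq_hz` and measuring the synchrony of the simulated cortical response at each modulation frequency is what yields a TMTF.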
Evaluation of high-resolution MRI for preoperative screening for cochlear implantation
NASA Astrophysics Data System (ADS)
Madzivire, Mambidzeni; Camp, Jon J.; Lane, John; Witte, Robert J.; Robb, Richard A.
2002-05-01
The success of a cochlear implant is dependent on a functioning auditory nerve. An accurate noninvasive method for screening cochlear implant patients to help determine viability of the auditory nerve would allow physicians to better predict the success of the operation. In this study we measured the size of the auditory nerve relative to the size of the juxtaposed facial nerve and correlated these measurements with audiologic test results. The study involved 15 patients and three normal volunteers. Noninvasive high-resolution bilateral MR images were acquired from both 1.5T and 3T scanners. The images were reformatted to obtain an anatomically referenced oblique plane perpendicular to the auditory nerve. The cross-sectional areas of the auditory and facial nerves were determined in this plane. Assessment of the data is encouraging. The ratios of auditory to facial nerve size in the control subjects are close to the expected value of 1.0. Patient data ratios range from 0.73 to 1.3, with numbers significantly less than 1.0 suggesting auditory nerve atrophy. The acoustic nerve area correlated with audiologic test findings, particularly (R² = 0.68) with the count of words understood from a list of 100 words. These preliminary analyses suggest that a size threshold may be determined to differentiate functional from nonfunctional auditory nerves.
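The screening logic implied by these ratios can be sketched as follows. The 0.8 atrophy cutoff is our illustrative assumption, not a threshold established by the study, which deliberately leaves the threshold open:

```python
import math

def cross_section_area(diameter_mm: float) -> float:
    """Area of a roughly circular nerve cross-section, in mm^2."""
    return math.pi * (diameter_mm / 2.0) ** 2

def nerve_ratio(auditory_area: float, facial_area: float) -> float:
    """Auditory-to-facial nerve area ratio; ~1.0 expected in controls."""
    return auditory_area / facial_area

def flag_possible_atrophy(ratio: float, threshold: float = 0.8) -> bool:
    """Hypothetical screening rule: ratios well below 1.0 suggest atrophy.

    The 0.8 cutoff is illustrative only; the study reports patient ratios
    from 0.73 to 1.3 without fixing a clinical threshold.
    """
    return ratio < threshold

# The lowest and a normal ratio from the reported 0.73-1.3 patient range.
print(flag_possible_atrophy(nerve_ratio(0.73, 1.0)))  # True
print(flag_possible_atrophy(nerve_ratio(1.0, 1.0)))   # False
```

Normalising by the juxtaposed facial nerve, rather than using absolute area, makes the measure robust to between-subject differences in overall nerve calibre.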
Network and external perturbation induce burst synchronisation in cat cerebral cortex
NASA Astrophysics Data System (ADS)
Lameu, Ewandson L.; Borges, Fernando S.; Borges, Rafael R.; Batista, Antonio M.; Baptista, Murilo S.; Viana, Ricardo L.
2016-05-01
The brains of mammals are divided into different cortical areas that are anatomically connected, forming larger networks which perform cognitive tasks. The cat cerebral cortex is composed of 65 areas organised into the visual, auditory, somatosensory-motor and frontolimbic cognitive regions. We have built a network of networks, in which networks are connected among themselves according to the connections observed in the cat cortical areas, aiming to study how inputs drive the synchronous behaviour in this cat brain-like network. We show that without external perturbations it is possible to observe a high level of bursting synchronisation between neurons within almost all areas, except for the auditory area. Bursting synchronisation appears between neurons in the auditory region when an external perturbation is applied in another cognitive area. This is clear evidence that burst synchronisation and collective behaviour in the brain might be a process mediated by other brain areas under stimulation.
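The paper couples bursting neuron models across the cat connectivity matrix, which is not reproduced here. As a minimal stand-in for the core mechanism, namely coupling driving a population toward synchrony, a globally coupled Kuramoto phase model with its order parameter can be sketched:

```python
import math
import random

def kuramoto_step(phases, omegas, K, dt=0.05, force=0.0):
    """One Euler step of globally coupled Kuramoto phase oscillators.

    `force` models a common external perturbation driving all units,
    analogous to the stimulation applied in the study.
    """
    n = len(phases)
    new_phases = []
    for i, th in enumerate(phases):
        coupling = (K / n) * sum(math.sin(tj - th) for tj in phases)
        new_phases.append(th + dt * (omegas[i] + coupling + force))
    return new_phases

def order_parameter(phases):
    """Kuramoto order parameter r in [0, 1]; r = 1 means full synchrony."""
    n = len(phases)
    re = sum(math.cos(t) for t in phases) / n
    im = sum(math.sin(t) for t in phases) / n
    return math.hypot(re, im)

random.seed(0)
n = 50
phases = [random.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
omegas = [random.gauss(1.0, 0.1) for _ in range(n)]  # similar natural frequencies

r_start = order_parameter(phases)
for _ in range(400):
    phases = kuramoto_step(phases, omegas, K=2.0)
r_end = order_parameter(phases)
print(r_start < r_end)  # coupling pulls the population toward synchrony
```

Phase oscillators are a deliberate simplification of the bursting dynamics studied in the paper, but the synchronisation measure plays the same role: tracking how external drive and network coupling raise collective order.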
Reduced variability of auditory alpha activity in chronic tinnitus.
Schlee, Winfried; Schecklmann, Martin; Lehner, Astrid; Kreuzer, Peter M; Vielsmeier, Veronika; Poeppl, Timm B; Langguth, Berthold
2014-01-01
Subjective tinnitus is characterized by the conscious perception of a phantom sound which is usually more prominent under silence. Resting state recordings without any auditory stimulation demonstrated a decrease of cortical alpha activity in temporal areas of subjects with an ongoing tinnitus perception. This is often interpreted as an indicator for enhanced excitability of the auditory cortex in tinnitus. In this study we want to further investigate this effect by analysing the moment-to-moment variability of the alpha activity in temporal areas. Magnetoencephalographic resting state recordings of 21 tinnitus subjects and 21 healthy controls were analysed with respect to the mean and the variability of spectral power in the alpha frequency band over temporal areas. A significant decrease of auditory alpha activity was detected for the low alpha frequency band (8-10 Hz) but not for the upper alpha band (10-12 Hz). Furthermore, we found a significant decrease of alpha variability for the tinnitus group. This result was significant for the lower alpha frequency range and not significant for the upper alpha frequencies. Tinnitus subjects with a longer history of tinnitus showed less variability of their auditory alpha activity which might be an indicator for reduced adaptability of the auditory cortex in chronic tinnitus.
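The moment-to-moment variability measure analysed above reduces to computing band power per epoch and then its spread across epochs. A minimal sketch on synthetic data (the MEG pipeline itself is not reproduced; signal parameters are illustrative):

```python
import math
import random

def band_power(signal, fs, f_lo, f_hi):
    """Spectral power summed over DFT bins in [f_lo, f_hi] Hz (pure Python)."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            power += (re * re + im * im) / n
    return power

def alpha_variability(epochs, fs):
    """Mean and SD of low-alpha (8-10 Hz) power across epochs.

    The SD is the moment-to-moment variability compared between groups.
    """
    powers = [band_power(e, fs, 8.0, 10.0) for e in epochs]
    mean = sum(powers) / len(powers)
    var = sum((p - mean) ** 2 for p in powers) / len(powers)
    return mean, math.sqrt(var)

# Synthetic demo: a 9 Hz alpha rhythm whose amplitude fluctuates per epoch.
random.seed(1)
fs, n_samples = 100, 100  # 1-second epochs
epochs = []
for _ in range(10):
    amp = random.uniform(0.5, 1.5)  # fluctuating alpha amplitude
    epochs.append([amp * math.sin(2 * math.pi * 9 * i / fs) for i in range(n_samples)])

mean_p, sd_p = alpha_variability(epochs, fs)
print(sd_p > 0)  # amplitude fluctuations appear as power variability
```

In the study's terms, the tinnitus group would correspond to epochs with less amplitude fluctuation: the same analysis would then return a smaller SD for comparable mean power.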
Simulation of talking faces in the human brain improves auditory speech recognition
von Kriegstein, Katharina; Dogan, Özgür; Grüter, Martina; Giraud, Anne-Lise; Kell, Christian A.; Grüter, Thomas; Kleinschmidt, Andreas; Kiebel, Stefan J.
2008-01-01
Human face-to-face communication is essentially audiovisual. Typically, people talk to us face-to-face, providing concurrent auditory and visual input. Understanding someone is easier when there is visual input, because visual cues like mouth and tongue movements provide complementary information about speech content. Here, we hypothesized that, even in the absence of visual input, the brain optimizes both auditory-only speech and speaker recognition by harvesting speaker-specific predictions and constraints from distinct visual face-processing areas. To test this hypothesis, we performed behavioral and neuroimaging experiments in two groups: subjects with a face recognition deficit (prosopagnosia) and matched controls. The results show that observing a specific person talking for 2 min improves subsequent auditory-only speech and speaker recognition for this person. In both prosopagnosics and controls, behavioral improvement in auditory-only speech recognition was based on an area typically involved in face-movement processing. Improvement in speaker recognition was only present in controls and was based on an area involved in face-identity processing. These findings challenge current unisensory models of speech processing, because they show that, in auditory-only speech, the brain exploits previously encoded audiovisual correlations to optimize communication. We suggest that this optimization is based on speaker-specific audiovisual internal models, which are used to simulate a talking face. PMID:18436648
The role of the primary auditory cortex in the neural mechanism of auditory verbal hallucinations
Kompus, Kristiina; Falkenberg, Liv E.; Bless, Josef J.; Johnsen, Erik; Kroken, Rune A.; Kråkvik, Bodil; Larøi, Frank; Løberg, Else-Marie; Vedul-Kjelsås, Einar; Westerhausen, René; Hugdahl, Kenneth
2013-01-01
Auditory verbal hallucinations (AVHs) are a subjective experience of “hearing voices” in the absence of corresponding physical stimulation in the environment. The most remarkable feature of AVHs is their perceptual quality, that is, the experience is subjectively often as vivid as hearing an actual voice, as opposed to mental imagery or auditory memories. This has led to propositions that dysregulation of the primary auditory cortex (PAC) is a crucial component of the neural mechanism of AVHs. One possible mechanism by which the PAC could give rise to the experience of hallucinations is aberrant patterns of neuronal activity whereby the PAC is overly sensitive to activation arising from internal processing, while being less responsive to external stimulation. In this paper, we review recent research relevant to the role of the PAC in the generation of AVHs. We present new data from a functional magnetic resonance imaging (fMRI) study, examining the responsivity of the left and right PAC to parametrical modulation of the intensity of auditory verbal stimulation, and corresponding attentional top-down control in non-clinical participants with AVHs, and non-clinical participants with no AVHs. Non-clinical hallucinators showed reduced activation to speech sounds but intact attentional modulation in the right PAC. Additionally, we present data from a group of schizophrenia patients with AVHs, who do not show attentional modulation of left or right PAC. The context-appropriate modulation of the PAC may be a protective factor in non-clinical hallucinations. PMID:23630479
Auditory cortex of bats and primates: managing species-specific calls for social communication
Kanwal, Jagmeet S.; Rauschecker, Josef P.
2014-01-01
Individuals of many animal species communicate with each other using sounds or “calls” that are made up of basic acoustic patterns and their combinations. We are interested in questions about the processing of communication calls and their representation within the mammalian auditory cortex. Our studies compare in particular two species for which a large body of data has accumulated: the mustached bat and the rhesus monkey. We conclude that the brains of both species share a number of functional and organizational principles, which differ only in the extent to which and how they are implemented. For instance, neurons in both species use “combination-sensitivity” (nonlinear spectral and temporal integration of stimulus components) as a basic mechanism to enable exquisite sensitivity to and selectivity for particular call types. Whereas combination-sensitivity is already found abundantly at the primary auditory cortical level and also at subcortical levels in bats, it becomes prevalent only at the level of the lateral belt in the secondary auditory cortex of monkeys. Another common principle is a parallel-hierarchical framework for processing complex sounds up to the level of the auditory cortex in bats and an organization into parallel-hierarchical, cortico-cortical auditory processing streams in monkeys. Response specialization of neurons seems to be more pronounced in bats than in monkeys, whereas a functional specialization into “what” and “where” streams in the cerebral cortex is more pronounced in monkeys than in bats. These differences, in part, are due to the increased number and larger size of auditory areas in the parietal and frontal cortex in primates. Accordingly, the computational prowess of neural networks and the functional hierarchy resulting in specializations is established early and accelerated across brain regions in bats.
The principles proposed here for the neural “management” of species-specific calls in bats and primates can be tested by studying the details of call processing in additional species. Also, computational modeling in conjunction with coordinated studies in bats and monkeys can help to clarify the fundamental question of perceptual invariance (or “constancy”) in call recognition, which has obvious relevance for understanding speech perception and its disorders in humans. PMID:17485400
Kenet, T.; Froemke, R. C.; Schreiner, C. E.; Pessah, I. N.; Merzenich, M. M.
2007-01-01
Noncoplanar polychlorinated biphenyls (PCBs) are widely dispersed in human environments and tissues. Here, an exemplar noncoplanar PCB was fed to rat dams during gestation and throughout three subsequent nursing weeks. Although the hearing sensitivity and brainstem auditory responses of pups were normal, exposure resulted in the abnormal development of the primary auditory cortex (A1). A1 was irregularly shaped and marked by internal nonresponsive zones, its topographic organization was grossly abnormal or reversed in about half of the exposed pups, the balance of neuronal inhibition to excitation for A1 neurons was disturbed, and the critical period plasticity that underlies normal postnatal auditory system development was significantly altered. These findings demonstrate that developmental exposure to this class of environmental contaminant alters cortical development. It is proposed that exposure to noncoplanar PCBs may contribute to common developmental disorders, especially in populations with heritable imbalances in neurotransmitter systems that regulate the ratio of inhibition and excitation in the brain. We conclude that the health implications associated with exposure to noncoplanar PCBs in human populations merit a more careful examination. PMID:17460041
Positron Emission Tomography in Cochlear Implant and Auditory Brainstem Implant Recipients.
ERIC Educational Resources Information Center
Miyamoto, Richard T.; Wong, Donald
2001-01-01
Positron emission tomography imaging was used to evaluate the brain's response to auditory stimulation, including speech, in deaf adults (five with cochlear implants and one with an auditory brainstem implant). Functional speech processing was associated with activation in areas classically associated with speech processing. (Contains five…
Children's Auditory Perception of Movement of Traffic Sounds.
ERIC Educational Resources Information Center
Pfeffer, K.; Barnecutt, P.
1996-01-01
Examined children's auditory perception of traffic sounds, focusing on identification of vehicle movement. Subjects were 60 children of 5, 8, and 11 years. Results indicated that the auditory perception of movement was a problem area for children, especially five-year olds. Discussed the role of attention-demanding characteristics of some traffic…
Brain state-dependent abnormal LFP activity in the auditory cortex of a schizophrenia mouse model
Nakao, Kazuhito; Nakazawa, Kazu
2014-01-01
In schizophrenia, evoked 40-Hz auditory steady-state responses (ASSRs) are impaired, which reflects the sensory deficits in this disorder, and baseline spontaneous oscillatory activity also appears to be abnormal. It has been debated whether the evoked ASSR impairments are due to the possible increase in baseline power. GABAergic interneuron-specific NMDA receptor (NMDAR) hypofunction mutant mice mimic some behavioral and pathophysiological aspects of schizophrenia. To determine the presence and extent of sensory deficits in these mutant mice, we recorded spontaneous local field potential (LFP) activity and its click-train evoked ASSRs from primary auditory cortex of awake, head-restrained mice. Baseline spontaneous LFP power in the pre-stimulus period before application of the first click trains was augmented at a wide range of frequencies. However, when repetitive ASSR stimuli were presented every 20 s, averaged spontaneous LFP power amplitudes during the inter-ASSR stimulus intervals in the mutant mice became indistinguishable from the levels of control mice. Nonetheless, the evoked 40-Hz ASSR power and their phase locking to click trains were robustly impaired in the mutants, although the evoked 20-Hz ASSRs were also somewhat diminished. These results suggested that NMDAR hypofunction in cortical GABAergic neurons confers two brain state-dependent LFP abnormalities in the auditory cortex: (1) a broadband increase in spontaneous LFP power in the absence of external inputs, and (2) a robust deficit in the evoked ASSR power and its phase-locking despite normal baseline LFP power magnitude during the repetitive auditory stimuli. The “paradoxically” high spontaneous LFP activity of the primary auditory cortex in the absence of external stimuli may possibly contribute to the emergence of schizophrenia-related aberrant auditory perception. PMID:25018691
van den Hurk, Job; Van Baelen, Marc; Op de Beeck, Hans P.
2017-01-01
To what extent does functional brain organization rely on sensory input? Here, we show that for the penultimate visual-processing region, ventral-temporal cortex (VTC), visual experience is not the origin of its fundamental organizational property, category selectivity. In the fMRI study reported here, we presented 14 congenitally blind participants with face-, body-, scene-, and object-related natural sounds and presented 20 healthy controls with both auditory and visual stimuli from these categories. Using macroanatomical alignment, response mapping, and surface-based multivoxel pattern analysis, we demonstrated that VTC in blind individuals shows robust discriminatory responses elicited by the four categories and that these patterns of activity in blind subjects could successfully predict the visual categories in sighted controls. These findings were confirmed in a subset of blind participants born without eyes and thus deprived of all light perception since conception. The sounds also could be decoded in primary visual and primary auditory cortex, but these regions did not sustain generalization across modalities. Surprisingly, although not as strong as visual responses, selectivity for auditory stimulation in visual cortex was stronger in blind individuals than in controls. The opposite was observed in primary auditory cortex. Overall, we demonstrated a striking similarity in the cortical response layout of VTC in blind individuals and sighted controls, demonstrating that the overall category-selective map in extrastriate cortex develops independently from visual experience. PMID:28507127
Functional magnetic resonance imaging (FMRI) with auditory stimulation in songbirds.
Van Ruijssevelt, Lisbeth; De Groof, Geert; Van der Kant, Anne; Poirier, Colline; Van Audekerke, Johan; Verhoye, Marleen; Van der Linden, Annemie
2013-06-03
The neurobiology of birdsong, as a model for human speech, is a pronounced area of research in behavioral neuroscience. Whereas electrophysiology and molecular approaches allow the investigation of either different stimuli in a few neurons or one stimulus in large parts of the brain, blood oxygenation level dependent (BOLD) functional Magnetic Resonance Imaging (fMRI) allows combining both advantages, i.e. comparing the neural activation induced by different stimuli in the entire brain at once. fMRI in songbirds is challenging because of the small size of their brains and because their bones, and especially their skull, comprise numerous air cavities, inducing severe susceptibility artifacts. Gradient-echo (GE) BOLD fMRI has been successfully applied to songbirds (1-5) (for a review, see (6)). These studies focused on the primary and secondary auditory brain areas, which are regions free of susceptibility artifacts. However, because processes of interest may occur beyond these regions, whole brain BOLD fMRI is required using an MRI sequence less susceptible to these artifacts. This can be achieved by using spin-echo (SE) BOLD fMRI (7,8). In this article, we describe how to use this technique in zebra finches (Taeniopygia guttata), which are small songbirds with a bodyweight of 15-25 g extensively studied in behavioral neurosciences of birdsong. The main topic of fMRI studies on songbirds is song perception and song learning. The auditory nature of the stimuli combined with the weak BOLD sensitivity of SE (compared to GE) based fMRI sequences makes the implementation of this technique very challenging.
Structural changes of the corpus callosum in tinnitus
Diesch, Eugen; Schummer, Verena; Kramer, Martin; Rupp, Andre
2012-01-01
Objectives: In tinnitus, several brain regions seem to be structurally altered, including the medial partition of Heschl's gyrus (mHG), the site of the primary auditory cortex. The mHG is smaller in tinnitus patients than in healthy controls. The corpus callosum (CC) is the main interhemispheric commissure of the brain connecting the auditory areas of the left and the right hemisphere. Here, we investigate whether tinnitus status is associated with CC volume. Methods: The midsagittal cross-sectional area of the CC was examined in tinnitus patients and healthy controls in which an examination of the mHG had been carried out earlier. The CC was extracted and segmented into subregions which were defined according to the most common CC morphometry schemes introduced by Witelson (1989) and Hofer and Frahm (2006). Results: For both CC segmentation schemes, the CC posterior midbody was smaller in male patients than in male healthy controls and the isthmus, the anterior midbody, and the genu were larger in female patients than in female controls. With CC size normalized relative to mHG volume, the normalized CC splenium was larger in male patients than male controls and the normalized CC splenium, the isthmus and the genu were larger in female patients than female controls. Normalized CC segment size expresses callosal interconnectivity relative to auditory cortex volume. Conclusion: It may be argued that the predominant function of the CC is excitatory. The stronger callosal interconnectivity in tinnitus patients, compared to healthy controls, may facilitate the emergence and maintenance of a positive feedback loop between tinnitus generators located in the two hemispheres. PMID:22470322
Shared neural substrates for song discrimination in parental and parasitic songbirds.
Louder, Matthew I M; Voss, Henning U; Manna, Thomas J; Carryl, Sophia S; London, Sarah E; Balakrishnan, Christopher N; Hauber, Mark E
2016-05-27
In many social animals, early exposure to conspecific stimuli is critical for the development of accurate species recognition. Obligate brood parasitic songbirds, however, forego parental care and young are raised by heterospecific hosts in the absence of conspecific stimuli. Having evolved from non-parasitic, parental ancestors, how brood parasites recognize their own species remains unclear. In parental songbirds (e.g. zebra finch Taeniopygia guttata), the primary and secondary auditory forebrain areas are known to be critical in the differential processing of conspecific vs. heterospecific songs. Here we demonstrate that the same auditory brain regions underlie song discrimination in adult brood parasitic pin-tailed whydahs (Vidua macroura), a close relative of the zebra finch lineage. Similar to zebra finches, whydahs showed stronger behavioral responses during conspecific vs. heterospecific song and tone pips as well as increased neural responses within the auditory forebrain, as measured by both functional magnetic resonance imaging (fMRI) and immediate early gene (IEG) expression. Given parallel behavioral and neuroanatomical patterns of song discrimination, our results suggest that the evolutionary transition to brood parasitism from parental songbirds likely involved an "evolutionary tinkering" of existing proximate mechanisms, rather than the wholesale reworking of the neural substrates of species recognition. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Relationship between Speech Production and Perception in People Who Stutter
Lu, Chunming; Long, Yuhang; Zheng, Lifen; Shi, Guang; Liu, Li; Ding, Guosheng; Howell, Peter
2016-01-01
Speech production difficulties are apparent in people who stutter (PWS). PWS also have difficulties in speech perception compared to controls. It is unclear whether the speech perception difficulties in PWS are independent of, or related to, their speech production difficulties. To investigate this issue, functional MRI data were collected on 13 PWS and 13 controls whilst the participants performed a speech production task and a speech perception task. PWS performed more poorly than controls in the perception task and the poorer performance was associated with a functional activity difference in the left anterior insula (part of the speech motor area) compared to controls. PWS also showed a functional activity difference in this and the surrounding area [left inferior frontal cortex (IFC)/anterior insula] in the production task compared to controls. Conjunction analysis showed that the functional activity differences between PWS and controls in the left IFC/anterior insula coincided across the perception and production tasks. Furthermore, Granger Causality Analysis on the resting-state fMRI data of the participants showed that the causal connection from the left IFC/anterior insula to an area in the left primary auditory cortex (Heschl’s gyrus) differed significantly between PWS and controls. The strength of this connection correlated significantly with performance in the perception task. These results suggest that speech perception difficulties in PWS are associated with anomalous functional activity in the speech motor area, and the altered functional connectivity from this area to the auditory area plays a role in the speech perception difficulties of PWS. PMID:27242487
Matragrano, Lisa L.; Sanford, Sara E.; Salvante, Katrina G.; Beaulieu, Michaël; Sockman, Keith W.; Maney, Donna L.
2011-01-01
Because no organism lives in an unchanging environment, sensory processes must remain plastic so that in any context, they emphasize the most relevant signals. As the behavioral relevance of sociosexual signals changes along with reproductive state, the perception of those signals is altered by reproductive hormones such as estradiol (E2). We showed previously that in white-throated sparrows, immediate early gene responses in the auditory pathway of females are selective for conspecific male song only when plasma E2 is elevated to breeding-typical levels. In this study, we looked for evidence that E2-dependent modulation of auditory responses is mediated by serotonergic systems. In female nonbreeding white-throated sparrows treated with E2, the density of fibers immunoreactive for serotonin transporter innervating the auditory midbrain and rostral auditory forebrain increased compared with controls. E2 treatment also increased the concentration of the serotonin metabolite 5-HIAA in the caudomedial mesopallium of the auditory forebrain. In a second experiment, females exposed to 30 min of conspecific male song had higher levels of 5-HIAA in the caudomedial nidopallium of the auditory forebrain than birds not exposed to song. Overall, we show that in this seasonal breeder, (1) serotonergic fibers innervate auditory areas; (2) the density of those fibers is higher in females with breeding-typical levels of E2 than in nonbreeding, untreated females; and (3) serotonin is released in the auditory forebrain within minutes in response to conspecific vocalizations. Our results are consistent with the hypothesis that E2 acts via serotonin systems to alter auditory processing. PMID:21942431
Auditory Perceptual Abilities Are Associated with Specific Auditory Experience
Zaltz, Yael; Globerson, Eitan; Amir, Noam
2017-01-01
The extent to which auditory experience can shape general auditory perceptual abilities is still under debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement in perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested in frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, reflecting enhanced perceptual skills for the key features of pop music, in which minuscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience, by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested on the DLF and DLT tasks in which musicians had been found superior. Results showed superior abilities on the DLF task for guitar players, though no difference between the groups on the DLT task, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment was conducted, testing a possible influence of vowel density in native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels), and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels), were tested in a formant discrimination task. This is the linguistic equivalent of a DLS task. 
Results showed that German speakers had superior formant discrimination, demonstrating highly specific effects for auditory linguistic experience as well. Overall, results suggest that auditory superiority is associated with the specific auditory exposure. PMID:29238318
Representations of Pitch and Timbre Variation in Human Auditory Cortex
2017-01-01
Pitch and timbre are two primary dimensions of auditory perception, but how they are represented in the human brain remains a matter of contention. Some animal studies of auditory cortical processing have suggested modular processing, with different brain regions preferentially coding for pitch or timbre, whereas other studies have suggested a distributed code for different attributes across the same population of neurons. This study tested whether variations in pitch and timbre elicit activity in distinct regions of the human temporal lobes. Listeners were presented with sequences of sounds that varied in either fundamental frequency (eliciting changes in pitch) or spectral centroid (eliciting changes in brightness, an important attribute of timbre), with the degree of pitch or timbre variation in each sequence parametrically manipulated. The BOLD responses from auditory cortex increased with increasing sequence variance along each perceptual dimension. The spatial extent, region, and laterality of the cortical regions most responsive to variations in pitch or timbre at the univariate level of analysis were largely overlapping. However, patterns of activation in response to pitch or timbre variations were discriminable in most subjects at an individual level using multivoxel pattern analysis, suggesting a distributed coding of the two dimensions bilaterally in human auditory cortex. SIGNIFICANCE STATEMENT Pitch and timbre are two crucial aspects of auditory perception. Pitch governs our perception of musical melodies and harmonies, and conveys both prosodic and (in tone languages) lexical information in speech. Brightness—an aspect of timbre or sound quality—allows us to distinguish different musical instruments and speech sounds. Frequency-mapping studies have revealed tonotopic organization in primary auditory cortex, but the use of pure tones or noise bands has precluded the possibility of dissociating pitch from brightness. 
Our results suggest a distributed code, with no clear anatomical distinctions between auditory cortical regions responsive to changes in either pitch or timbre, but also reveal a population code that can differentiate between changes in either dimension within the same cortical regions. PMID:28025255
Predictive Ensemble Decoding of Acoustical Features Explains Context-Dependent Receptive Fields.
Yildiz, Izzet B; Mesgarani, Nima; Deneve, Sophie
2016-12-07
A primary goal of auditory neuroscience is to identify the sound features extracted and represented by auditory neurons. Linear encoding models, which describe neural responses as a function of the stimulus, have been primarily used for this purpose. Here, we provide theoretical arguments and experimental evidence in support of an alternative approach, based on decoding the stimulus from the neural response. We used a Bayesian normative approach to predict the responses of neurons detecting relevant auditory features, despite ambiguities and noise. We compared the model predictions to recordings from the primary auditory cortex of ferrets and found that: (1) the decoding filters of auditory neurons resemble the filters learned from the statistics of speech sounds; (2) the decoding model captures the dynamics of responses better than a linear encoding model of similar complexity; and (3) the decoding model accounts for the accuracy with which the stimulus is represented in neural activity, whereas the linear encoding model performs very poorly. Most importantly, our model predicts that neuronal responses are fundamentally shaped by "explaining away," a divisive competition between alternative interpretations of the auditory scene. Neural responses in the auditory cortex are dynamic, nonlinear, and hard to predict. Traditionally, encoding models have been used to describe neural responses as a function of the stimulus. However, in addition to external stimulation, neural activity is strongly modulated by the responses of other neurons in the network. We hypothesized that auditory neurons aim to collectively decode their stimulus. In particular, a stimulus feature that is decoded (or explained away) by one neuron is not explained by another. We demonstrated that this novel Bayesian decoding model is better at capturing the dynamic responses of cortical neurons in ferrets. 
Whereas the linear encoding model poorly reflects selectivity of neurons, the decoding model can account for the strong nonlinearities observed in neural data. Copyright © 2016 Yildiz et al.
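The core "explaining away" idea — units that jointly decode the stimulus, so a feature accounted for by one neuron is competitively removed from the input of the others — can be caricatured in a few lines. This is not the authors' Bayesian model; the function and parameters below are invented for illustration and simply run gradient descent on the reconstruction error of a shared stimulus:

```python
import numpy as np

def explain_away_decode(stimulus, dictionary, n_iter=200, lr=0.1):
    """Toy sketch of 'explaining away': each unit's response is driven by
    the residual stimulus left unexplained by the other units, so a feature
    decoded by one unit is removed from its neighbors' effective input.
    dictionary: (n_units, n_features) preferred features, one row per unit."""
    responses = np.zeros(dictionary.shape[0])
    for _ in range(n_iter):
        residual = stimulus - dictionary.T @ responses   # part not yet explained
        responses += lr * (dictionary @ residual)        # units compete for the residual
        responses = np.maximum(responses, 0.0)           # firing rates are non-negative
    return responses, dictionary.T @ responses
```

With overlapping dictionary rows, the divisive competition means a unit can fall silent when another unit already explains its preferred feature, which is the qualitative nonlinearity the abstract describes.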
Zimmermann, Jacqueline F; Moscovitch, Morris; Alain, Claude
2016-06-01
Attention to memory describes the process of attending to memory traces when the object is no longer present. It has been studied primarily for representations of visual stimuli, with only a few studies examining attention to sound object representations in short-term memory. Here, we review the interplay of attention and auditory memory with an emphasis on 1) attending to auditory memory in the absence of related external stimuli (i.e., reflective attention) and 2) effects of existing memory on guiding attention. Attention to auditory memory is discussed in the context of change deafness, and we argue that failures to detect changes in our auditory environments are most likely the result of a faulty comparison system of incoming and stored information. Objects are the primary building blocks of auditory attention, but attention can also be directed to individual features (e.g., pitch). We review short-term and long-term memory guided modulation of attention based on characteristic features, location, and/or semantic properties of auditory objects, and propose that auditory attention to memory pathways emerge after sensory memory. A neural model for auditory attention to memory is developed, which comprises two separate pathways in the parietal cortex, one involved in attention to higher-order features and the other involved in attention to sensory information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.
Aedo, Cristian; Terreros, Gonzalo; León, Alex; Delano, Paul H.
2016-01-01
Background and Objective The auditory efferent system is a complex network of descending pathways, which mainly originate in the primary auditory cortex and are directed to several auditory subcortical nuclei. These descending pathways are connected to olivocochlear neurons, which in turn make synapses with auditory nerve neurons and outer hair cells (OHC) of the cochlea. The olivocochlear function can be studied using contralateral acoustic stimulation, which suppresses auditory nerve and cochlear responses. In the present work, we tested the proposal that the corticofugal effects that modulate the strength of the olivocochlear reflex on auditory nerve responses are produced through cholinergic synapses between medial olivocochlear (MOC) neurons and OHCs via alpha-9/10 nicotinic receptors. Methods We used wild type (WT) and alpha-9 nicotinic receptor knock-out (KO) mice, which lack cholinergic transmission between MOC neurons and OHCs, to record auditory cortex evoked potentials and to evaluate the consequences of auditory cortex electrical microstimulation on the effects produced by contralateral acoustic stimulation on auditory brainstem responses (ABR). Results Auditory cortex evoked potentials at 15 kHz were similar in WT and KO mice. We found that auditory cortex microstimulation produces an enhancement of contralateral noise suppression of ABR waves I and III in WT mice but not in KO mice. On the other hand, corticofugal modulations of wave V amplitudes were significant in both genotypes. Conclusion These findings show that the corticofugal modulation of contralateral acoustic suppressions of auditory nerve (ABR wave I) and superior olivary complex (ABR wave III) responses are mediated through MOC synapses. PMID:27195498
ERIC Educational Resources Information Center
Furno, Lois Ehrler
2012-01-01
Effective learning occurs in auditory environments. Background noise is inherent to classrooms; recommended levels are 15 decibels softer than instruction, a target that is rarely achieved. Learning is diminished by interference with the auditory reception of information, especially for students who are hard of hearing or have other diagnoses. Sound-field…
ERIC Educational Resources Information Center
Rosen, Stuart; Adlard, Alan; van der Lely, Heather K. J.
2009-01-01
Purpose: We investigated claims that specific language impairment (SLI) typically arises from nonspeech auditory deficits by measuring tone-in-noise thresholds in a relatively homogeneous SLI subgroup exhibiting a primary deficit restricted to grammar (Grammatical[G]-SLI). Method: Fourteen children (mostly teenagers) with G-SLI were compared to…
Verbal Recall of Auditory and Visual Signals by Normal and Deficient Reading Children.
ERIC Educational Resources Information Center
Levine, Maureen Julianne
Verbal recall of bisensory memory tasks was compared among 48 9- to 12-year old boys in three groups: normal readers, primary deficit readers, and secondary deficit readers. Auditory and visual stimulus pairs composed of digits, which incorporated variations of intersensory and intrasensory conditions were administered to Ss through a Bell and…
De Martino, Federico; Moerel, Michelle; Ugurbil, Kamil; Goebel, Rainer; Yacoub, Essa; Formisano, Elia
2015-12-29
Columnar arrangements of neurons with similar preference have been suggested as the fundamental processing units of the cerebral cortex. Within these columnar arrangements, feed-forward information enters at middle cortical layers whereas feedback information arrives at superficial and deep layers. This interplay of feed-forward and feedback processing is at the core of perception and behavior. Here we provide in vivo evidence consistent with a columnar organization of the processing of sound frequency in the human auditory cortex. We measure submillimeter functional responses to sound frequency sweeps at high magnetic fields (7 tesla) and show that frequency preference is stable through cortical depth in primary auditory cortex. Furthermore, we demonstrate that, in this highly columnar cortex, task demands sharpen the frequency tuning in superficial cortical layers more than in middle or deep layers. These findings are pivotal to understanding mechanisms of neural information processing and flow during the active perception of sounds.
Memory for sound, with an ear toward hearing in complex auditory scenes.
Snyder, Joel S; Gregg, Melissa K
2011-10-01
An area of research that has experienced recent growth is the study of memory during perception of simple and complex auditory scenes. These studies have provided important information about how well auditory objects are encoded in memory and how well listeners can notice changes in auditory scenes. These are significant developments because they present an opportunity to better understand how we hear in realistic situations, how higher-level aspects of hearing such as semantics and prior exposure affect perception, and the similarities and differences between auditory perception and perception in other modalities, such as vision and touch. The research also poses exciting challenges for behavioral and neural models of how auditory perception and memory work.
Review of auditory subliminal psychodynamic activation experiments.
Fudin, R; Benjamin, C
1991-12-01
Subliminal psychodynamic activation experiments using auditory stimuli have yielded only a modicum of support for the contention that such activation produces predictable behavioral changes. Problems in many auditory subliminal psychodynamic activation experiments indicate that those predictions have not been tested adequately. The auditory mode of presentation, however, has several methodological advantages over the visual one, the method used in the vast majority of subliminal psychodynamic activation experiments. Consequently, it should be considered in subsequent research in this area.
Testing the dual-pathway model for auditory processing in human cortex.
Zündorf, Ida C; Lewald, Jörg; Karnath, Hans-Otto
2016-01-01
Analogous to the visual system, auditory information has been proposed to be processed in two largely segregated streams: an anteroventral ("what") pathway mainly subserving sound identification and a posterodorsal ("where") stream mainly subserving sound localization. Despite the popularity of this assumption, the degree of separation of spatial and non-spatial auditory information processing in cortex is still under discussion. In the present study, a statistical approach was implemented to investigate potential behavioral dissociations for spatial and non-spatial auditory processing in stroke patients, and voxel-wise lesion analyses were used to uncover their neural correlates. The results generally provided support for anatomically and functionally segregated auditory networks. However, some degree of anatomo-functional overlap between "what" and "where" aspects of processing was found in the superior pars opercularis of right inferior frontal gyrus (Brodmann area 44), suggesting the potential existence of a shared target area of both auditory streams in this region. Moreover, beyond the typically defined posterodorsal stream (i.e., posterior superior temporal gyrus, inferior parietal lobule, and superior frontal sulcus), occipital lesions were found to be associated with sound localization deficits. These results, indicating anatomically and functionally complex cortical networks for spatial and non-spatial auditory processing, are roughly consistent with the dual-pathway model of auditory processing in its original form, but argue for the need to refine and extend this widely accepted hypothesis. Copyright © 2015 Elsevier Inc. All rights reserved.
Antenatal Corticosteroid Exposure Disrupts Myelination in the Auditory Nerve of Preterm Sheep.
Rittenschober-Böhm, Judith; Rodger, Jennifer; Jobe, Alan H; Kallapur, Suhas G; Doherty, Dorota A; Kramer, Boris W; Payne, Matthew S; Archer, Michael; Rittenschober, Christian; Newnham, John P; Miura, Yuichiro; Berger, Angelika; Matthews, Stephen G; Kemp, Matthew W
2018-04-17
Antenatal corticosteroids (ACS) improve preterm neonatal outcomes. However, uncertainty remains regarding the safety of ACS exposure for the developing fetus, particularly its neurosensory development. We investigated the effect of single and multiple ACS exposures on auditory nerve development in an ovine model of pregnancy. Ewes with a single fetus (gestational age [GA] 100 days) received an intramuscular injection of 150 mg medroxyprogesterone-acetate, followed by intramuscular injections of (i) betamethasone (0.5 mg/kg) on days 104, 111, and 118 GA; (ii) betamethasone on day 104 and saline on days 111 and 118 GA; or (iii) saline on days 104, 111, and 118 GA, with delivery on day 125 GA. Transmission electron microscope images of lamb auditory nerve preparations were digitally analyzed to determine auditory nerve morphology and myelination. Relative to the control, mean auditory nerve myelin area was significantly increased in the multiple-treatment group (p < 0.001), but not in the single-treatment group. Myelin thickness was significantly increased only in a subgroup analysis of axons whose myelin thickness exceeded the median value (p < 0.001). Morphological assessments showed that the increased myelin area was due to an increased likelihood of decompacted areas (p = 0.005; OR = 2.14, 95% CI 1.26-3.63; 31.6 vs. 18.2% in controls) and irregular myelin deposition (p = 0.001; OR = 5.91, 95% CI 2.16-16.19; 49.0 vs. 16.8% in controls) in the myelin sheath. In preterm sheep, ACS exposure increased auditory nerve myelin area, potentially due to disruption of normal myelin deposition. © 2018 S. Karger AG, Basel.
Suga, Nobuo
2018-04-01
For echolocation, mustached bats emit velocity-sensitive orientation sounds (pulses) containing a constant-frequency component consisting of four harmonics (CF1–4). They show unique behavior called Doppler-shift compensation for Doppler-shifted echoes and hunting behavior for frequency and amplitude modulated echoes from fluttering insects. Their peripheral auditory system is highly specialized for fine frequency analysis of CF2 (∼61.0 kHz) and detecting echo CF2 from fluttering insects. In their central auditory system, lateral inhibition occurring at multiple levels sharpens V-shaped frequency-tuning curves at the periphery and creates sharp spindle-shaped tuning curves and amplitude tuning. The large CF2-tuned area of the auditory cortex systematically represents the frequency and amplitude of CF2 in a frequency-versus-amplitude map. "CF/CF" neurons are tuned to a specific combination of pulse CF1 and Doppler-shifted echo CF2 or CF3. They are tuned to specific velocities. CF/CF neurons cluster in the CC ("C" stands for CF) and DIF (dorsal intrafossa) areas of the auditory cortex. The CC area has the velocity map for Doppler imaging. The DIF area serves particularly for Doppler imaging of other bats approaching in cruising flight. To optimize the processing of behaviorally relevant sounds, cortico-cortical interactions and corticofugal feedback modulate the frequency tuning of cortical and sub-cortical auditory neurons and cochlear hair cells through a neural net consisting of positive feedback associated with lateral inhibition. Copyright © 2018 Elsevier B.V. All rights reserved.
Dai, Jennifer B; Chen, Yining; Sakata, Jon T
2018-05-21
Distinguishing between familiar and unfamiliar individuals is an important task that shapes the expression of social behavior. As such, identifying the neural populations involved in processing and learning the sensory attributes of individuals is important for understanding mechanisms of behavior. Catecholamine-synthesizing neurons have been implicated in sensory processing, but relatively little is known about their contribution to auditory learning and processing across various vertebrate taxa. Here we investigated the extent to which immediate early gene expression in catecholaminergic circuitry reflects information about the familiarity of social signals and predicts immediate early gene expression in sensory processing areas in songbirds. We found that male zebra finches readily learned to differentiate between familiar and unfamiliar acoustic signals ('songs') and that playback of familiar songs led to fewer catecholaminergic neurons in the locus coeruleus (but not in the ventral tegmental area, substantia nigra, or periaqueductal gray) expressing the immediate early gene, EGR-1, than playback of unfamiliar songs. The pattern of EGR-1 expression in the locus coeruleus was similar to that observed in two auditory processing areas implicated in auditory learning and memory, namely the caudomedial nidopallium (NCM) and the caudal medial mesopallium (CMM), suggesting a contribution of catecholamines to sensory processing. Consistent with this, the pattern of catecholaminergic innervation onto auditory neurons co-varied with the degree to which song playback affected the relative intensity of EGR-1 expression. Together, our data support the contention that catecholamines like norepinephrine contribute to social recognition and the processing of social information. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
GABA Immunoreactivity in Auditory and Song Control Brain Areas of Zebra Finches
Pinaud, Raphael; Mello, Claudio V.
2009-01-01
Inhibitory transmission is critical to sensory and motor processing and is believed to play a role in experience-dependent plasticity. The main inhibitory neurotransmitter in vertebrates, GABA, has been implicated in both sensory and motor aspects of vocalization in songbirds. To understand the role of GABAergic mechanisms in vocal communication, GABAergic elements must be characterized fully. Hence, we investigated GABA immunohistochemistry in the zebra finch brain, emphasizing auditory areas and song control nuclei. Several nuclei of the ascending auditory pathway showed a moderate to high density of GABAergic neurons including the cochlear nuclei, nucleus laminaris, superior olivary nucleus, mesencephalic nucleus lateralis pars dorsalis, and nucleus ovoidalis. Telencephalic auditory areas, including field L subfields L1, L2a and L3, as well as the caudomedial nidopallium (NCM) and mesopallium (CMM), contained GABAergic cells at particularly high densities. Considerable GABA labeling was also seen in the shelf area of caudodorsal nidopallium, and the cup area in the arcopallium, as well as in area X, the lateral magnocellular nucleus of the anterior nidopallium, the robust nucleus of the arcopallium and nidopallial nucleus HVC. GABAergic cells were typically small, most likely local inhibitory interneurons, although large GABA-positive cells that were sparsely distributed were also identified. GABA-positive neurites and puncta were identified in most nuclei of the ascending auditory pathway and in song control nuclei. Our data are in accordance with a prominent role of GABAergic mechanisms in regulating the neural circuits involved in song perceptual processing, motor production, and vocal learning in songbirds. PMID:17466487
Source Space Estimation of Oscillatory Power and Brain Connectivity in Tinnitus
Zobay, Oliver; Palmer, Alan R.; Hall, Deborah A.; Sereda, Magdalena; Adjamian, Peyman
2015-01-01
Tinnitus is the perception of an internally generated sound that is postulated to emerge as a result of structural and functional changes in the brain. However, the precise pathophysiology of tinnitus remains unknown. Llinas’ thalamocortical dysrhythmia model suggests that neural deafferentation due to hearing loss causes a dysregulation of coherent activity between thalamus and auditory cortex. This leads to a pathological coupling of theta and gamma oscillatory activity in the resting state, localised to the auditory cortex where normally alpha oscillations should occur. Numerous studies also suggest that tinnitus perception relies on the interplay between auditory and non-auditory brain areas. According to the Global Brain Model, a network of global fronto-parietal-cingulate areas is important in the generation and maintenance of the conscious perception of tinnitus. Thus, the distress experienced by many individuals with tinnitus is related to the top-down influence of this global network on auditory areas. In this magnetoencephalographic study, we compare resting-state oscillatory activity of tinnitus participants and normal-hearing controls to examine effects on spectral power as well as functional and effective connectivity. The analysis is based on beamformer source projection and an atlas-based region-of-interest approach. We find increased functional connectivity within the auditory cortices in the alpha band. A significant increase is also found for the effective connectivity from a global brain network to the auditory cortices in the alpha and beta bands. We do not find evidence of effects on spectral power. Overall, our results provide only limited support for the thalamocortical dysrhythmia and Global Brain models of tinnitus. PMID:25799178
McCullagh, Elizabeth A; Salcedo, Ernesto; Huntsman, Molly M; Klug, Achim
2017-11-01
Hyperexcitability and the imbalance of excitation/inhibition are among the leading causes of abnormal sensory processing in Fragile X syndrome (FXS). The precise timing and distribution of excitation and inhibition is crucial for auditory processing at the level of the auditory brainstem, which is responsible for sound localization ability. Sound localization is one of the sensory abilities disrupted by loss of the Fragile X Mental Retardation 1 (Fmr1) gene. Using triple immunofluorescence staining we tested whether there were alterations in the number and size of presynaptic structures for the three primary neurotransmitters (glutamate, glycine, and GABA) in the auditory brainstem of Fmr1 knockout mice. We found decreases in either glycinergic or GABAergic inhibition to the medial nucleus of the trapezoid body (MNTB) specific to the tonotopic location within the nucleus. MNTB is one of the primary inhibitory nuclei in the auditory brainstem and participates in the sound localization process with fast and well-timed inhibition. Thus, a decrease in inhibitory afferents to MNTB neurons should lead to greater inhibitory output from this nucleus to its projection targets. In contrast, we did not see any other significant alterations in balance of excitation/inhibition in any of the other auditory brainstem nuclei measured, suggesting that the alterations observed in the MNTB are both nucleus and frequency specific. We furthermore show that glycinergic inhibition may be an important contributor to imbalances in excitation and inhibition in FXS and that the auditory brainstem is a useful circuit for testing these imbalances. © 2017 Wiley Periodicals, Inc.
Prospects for Replacement of Auditory Neurons by Stem Cells
Shi, Fuxin; Edge, Albert S.B.
2013-01-01
Sensorineural hearing loss is caused by degeneration of hair cells or auditory neurons. Spiral ganglion cells, the primary afferent neurons of the auditory system, are patterned during development and send out projections to hair cells and to the brainstem under the control of largely unknown guidance molecules. The neurons do not regenerate after loss and even damage to their projections tends to be permanent. The genesis of spiral ganglion neurons and their synapses forms a basis for regenerative approaches. In this review we critically present the current experimental findings on auditory neuron replacement. We discuss the latest advances with a focus on (a) exogenous stem cell transplantation into the cochlea for neural replacement, (b) expression of local guidance signals in the cochlea after loss of auditory neurons, (c) the possibility of neural replacement from an endogenous cell source, and (d) functional changes from cell engraftment. PMID:23370457
The spectrotemporal filter mechanism of auditory selective attention
Lakatos, Peter; Musacchia, Gabriella; O’Connell, Monica N.; Falchier, Arnaud Y.; Javitt, Daniel C.; Schroeder, Charles E.
2013-01-01
While we have convincing evidence that attention to auditory stimuli modulates neuronal responses at or before the level of primary auditory cortex (A1), the underlying physiological mechanisms are unknown. We found that attending to rhythmic auditory streams resulted in the entrainment of ongoing oscillatory activity reflecting rhythmic excitability fluctuations in A1. Strikingly, while the rhythm of the entrained oscillations in A1 neuronal ensembles reflected the temporal structure of the attended stream, the phase depended on the attended frequency content. Counter-phase entrainment across differently tuned A1 regions resulted in both the amplification and sharpening of responses at attended time points, in essence acting as a spectrotemporal filter mechanism. Our data suggest that selective attention generates a dynamically evolving model of attended auditory stimulus streams in the form of modulatory subthreshold oscillations across tonotopically organized neuronal ensembles in A1 that enhances the representation of attended stimuli. PMID:23439126
Noise-invariant Neurons in the Avian Auditory Cortex: Hearing the Song in Noise
Moore, R. Channing; Lee, Tyler; Theunissen, Frédéric E.
2013-01-01
Given the extraordinary ability of humans and animals to recognize communication signals over a background of noise, describing noise invariant neural responses is critical not only to pinpoint the brain regions that are mediating our robust perceptions but also to understand the neural computations that are performing these tasks and the underlying circuitry. Although invariant neural responses, such as rotation-invariant face cells, are well described in the visual system, high-level auditory neurons that can represent the same behaviorally relevant signal in a range of listening conditions have yet to be discovered. Here we found neurons in a secondary area of the avian auditory cortex that exhibit noise-invariant responses in the sense that they responded with similar spike patterns to song stimuli presented in silence and over a background of naturalistic noise. By characterizing the neurons' tuning in terms of their responses to modulations in the temporal and spectral envelope of the sound, we then show that noise invariance is partly achieved by selectively responding to long sounds with sharp spectral structure. Finally, to demonstrate that such computations could explain noise invariance, we designed a biologically inspired noise-filtering algorithm that can be used to separate song or speech from noise. This novel noise-filtering method performs as well as other state-of-the-art de-noising algorithms and could be used in clinical or consumer oriented applications. Our biologically inspired model also shows how high-level noise-invariant responses could be created from neural responses typically found in primary auditory cortex. PMID:23505354
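The biologically inspired filtering principle described above (preferring sounds that are extended in time and have sharp spectral structure) can be caricatured in a few lines of code. The sketch below is a toy illustration, not the authors' algorithm; the function name, window size, and threshold are our own. It keeps time-frequency cells of a spectrogram whose energy is sustained across neighboring time frames and zeroes brief, noise-like energy.

```python
def denoise_mask(spec, win=3, thresh=0.6):
    """Toy noise filter: keep time-frequency cells whose energy is
    sustained over roughly 2*win+1 neighboring time frames (long sounds),
    and zero short-lived, noise-like energy.

    `spec` is a list of frequency rows, each a list of energies over time.
    """
    out = []
    for row in spec:
        n = len(row)
        kept = []
        for t, energy in enumerate(row):
            lo, hi = max(0, t - win), min(n, t + win + 1)
            local_mean = sum(row[lo:hi]) / (hi - lo)
            # sustained energy survives; transient bursts are zeroed
            kept.append(energy if local_mean >= thresh else 0.0)
        out.append(kept)
    return out
```

A row carrying a sustained tone passes through unchanged, while a row whose energy is concentrated in a single brief burst is suppressed; a real system would operate on actual spectrograms with tuned or learned thresholds.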
Premotor cortex is sensitive to auditory-visual congruence for biological motion.
Wuerger, Sophie M; Parkes, Laura; Lewis, Penelope A; Crocker-Buque, Alex; Rutschmann, Roland; Meyer, Georg F
2012-03-01
The auditory and visual perception systems have developed special processing strategies for ecologically valid motion stimuli, utilizing some of the statistical properties of the real world. A well-known example is the perception of biological motion, for example, the perception of a human walker. The aim of the current study was to identify the cortical network involved in the integration of auditory and visual biological motion signals. We first determined the cortical regions of auditory and visual coactivation (Experiment 1); a conjunction analysis based on unimodal brain activations identified four regions: middle temporal area, inferior parietal lobule, ventral premotor cortex, and cerebellum. The brain activations arising from bimodal motion stimuli (Experiment 2) were then analyzed within these regions of coactivation. Auditory footsteps were presented concurrently with either an intact visual point-light walker (biological motion) or a scrambled point-light walker; auditory and visual motion in depth (walking direction) could either be congruent or incongruent. Our main finding is that motion incongruency (across modalities) increases the activity in the ventral premotor cortex, but only if the visual point-light walker is intact. Our results extend our current knowledge by providing new evidence consistent with the idea that the premotor area assimilates information across the auditory and visual modalities by comparing the incoming sensory input with an internal representation.
Kostopoulos, Penelope; Petrides, Michael
2016-02-16
There is evidence from the visual, verbal, and tactile memory domains that the midventrolateral prefrontal cortex plays a critical role in the top-down modulation of activity within posterior cortical areas for the selective retrieval of specific aspects of a memorized experience, a functional process often referred to as active controlled retrieval. In the present functional neuroimaging study, we explore the neural bases of active retrieval for auditory nonverbal information, about which almost nothing is known. Human participants were scanned with functional magnetic resonance imaging (fMRI) in a task in which they were presented with short melodies from different locations in a simulated virtual acoustic environment within the scanner and were then instructed to retrieve selectively either the particular melody presented or its location. There were significant activity increases specifically within the midventrolateral prefrontal region during the selective retrieval of nonverbal auditory information. During the selective retrieval of information from auditory memory, the right midventrolateral prefrontal region increased its interaction with the auditory temporal region and the inferior parietal lobule in the right hemisphere. These findings provide evidence that the midventrolateral prefrontal cortical region interacts with specific posterior cortical areas in the human cerebral cortex for the selective retrieval of object and location features of an auditory memory experience.
Visual face-movement sensitive cortex is relevant for auditory-only speech recognition.
Riedel, Philipp; Ragert, Patrick; Schelinski, Stefanie; Kiebel, Stefan J; von Kriegstein, Katharina
2015-07-01
It is commonly assumed that the recruitment of visual areas during audition is not relevant for performing auditory tasks ('auditory-only view'). According to an alternative view, however, the recruitment of visual cortices is thought to optimize auditory-only task performance ('auditory-visual view'). This alternative view is based on functional magnetic resonance imaging (fMRI) studies. These studies have shown, for example, that even if there is only auditory input available, face-movement sensitive areas within the posterior superior temporal sulcus (pSTS) are involved in understanding what is said (auditory-only speech recognition). This is particularly the case when speakers are known audio-visually, that is, after brief voice-face learning. Here we tested whether left pSTS involvement is causally related to performance in auditory-only speech recognition when speakers are known by face. To test this hypothesis, we applied cathodal transcranial direct current stimulation (tDCS) to the pSTS during (i) visual-only speech recognition of a speaker known only visually to participants and (ii) auditory-only speech recognition of speakers they had learned by voice and face. We defined the cathode as the active electrode to down-regulate cortical excitability by hyperpolarization of neurons. tDCS to the pSTS interfered with visual-only speech recognition performance compared to a control group without pSTS stimulation (tDCS to BA6/44 or sham). Critically, compared to controls, pSTS stimulation additionally decreased auditory-only speech recognition performance selectively for voice-face learned speakers. These results are important in two ways. First, they provide direct evidence that the pSTS is causally involved in visual-only speech recognition; this confirms a long-standing prediction of current face-processing models. Second, they show that the visual face-sensitive pSTS is causally involved in optimizing auditory-only speech recognition. 
These results are in line with the 'auditory-visual view' of auditory speech perception, which assumes that auditory speech recognition is optimized by using predictions from previously encoded speaker-specific audio-visual internal models. Copyright © 2015 Elsevier Ltd. All rights reserved.
Hu, Xiao-Su; Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory
2017-01-01
Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus have to date not been well translated to the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting-state functional connectivity between human auditory and non-auditory brain regions in normal-hearing participants with bilateral subjective tinnitus and in controls, before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices), and functional brain connectivity was measured during a 60-second baseline period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000 Hz), broadband noise and silence. Functional connectivity was measured between all channel pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. 
Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to conscious phantom sound perception and potentially serve as an objective measure of central neural pathology. PMID:28604786
Jakkamsetti, Vikram; Chang, Kevin Q.
2012-01-01
Environmental enrichment induces powerful changes in the adult cerebral cortex. Studies in primary sensory cortex have observed that environmental enrichment modulates neuronal response strength, selectivity, speed of response, and synchronization to rapid sensory input. Other reports suggest that nonprimary sensory fields are more plastic than primary sensory cortex. The consequences of environmental enrichment on information processing in nonprimary sensory cortex have yet to be studied. Here we examine physiological effects of enrichment in the posterior auditory field (PAF), a field distinguished from primary auditory cortex (A1) by wider receptive fields, slower response times, and a greater preference for slowly modulated sounds. Environmental enrichment induced a significant increase in spectral and temporal selectivity in PAF. PAF neurons exhibited narrower receptive fields and responded significantly faster and for a briefer period to sounds after enrichment. Enrichment increased time-locking to rapidly successive sensory input in PAF neurons. Compared with previous enrichment studies in A1, we observe a greater magnitude of reorganization in PAF after environmental enrichment. Along with other reports observing greater reorganization in nonprimary sensory cortex, our results in PAF suggest that nonprimary fields might have a greater capacity for reorganization compared with primary fields. PMID:22131375
Auditory Cortex Basal Activity Modulates Cochlear Responses in Chinchillas
León, Alex; Elgueda, Diego; Silva, María A.; Hamamé, Carlos M.; Delano, Paul H.
2012-01-01
Background: The auditory efferent system has unique neuroanatomical pathways that connect the cerebral cortex with sensory receptor cells. Pyramidal neurons located in layers V and VI of the primary auditory cortex constitute descending projections to the thalamus, inferior colliculus, and even directly to the superior olivary complex and the cochlear nucleus. Efferent pathways are connected to the cochlear receptor by the olivocochlear system, which innervates outer hair cells and auditory nerve fibers. The functional role of the cortico-olivocochlear efferent system remains debated. We hypothesized that auditory cortex basal activity modulates cochlear and auditory-nerve afferent responses through the efferent system. Methodology/Principal Findings: Cochlear microphonics (CM), auditory-nerve compound action potentials (CAP) and auditory cortex evoked potentials (ACEP) were recorded in twenty anesthetized chinchillas, before, during and after auditory cortex deactivation by two methods: lidocaine microinjections or cortical cooling with cryoloops. Auditory cortex deactivation induced a transient reduction in ACEP amplitudes in fifteen animals (deactivation experiments) and a permanent reduction in five chinchillas (lesion experiments). We found significant changes in the amplitude of CM in both types of experiments, the most common effect being a CM decrease, observed in fifteen animals. Concomitant with the CM amplitude changes, we found CAP increases in seven chinchillas and CAP reductions in thirteen animals. Although ACEP amplitudes recovered completely after ninety minutes in deactivation experiments, only partial recovery was observed in the magnitudes of cochlear responses. Conclusions/Significance: These results show that blocking ongoing auditory cortex activity modulates CM and CAP responses, demonstrating that cortico-olivocochlear circuits regulate auditory nerve and cochlear responses through a basal efferent tone. 
The diversity of the obtained effects suggests that there are at least two functional pathways from the auditory cortex to the cochlea. PMID:22558383
Auditory presentation and synchronization in Adobe Flash and HTML5/JavaScript Web experiments.
Reimers, Stian; Stewart, Neil
2016-09-01
Substantial recent research has examined the accuracy of presentation durations and response time measurements for visually presented stimuli in Web-based experiments, with a general conclusion that accuracy is acceptable for most kinds of experiments. However, many areas of behavioral research use auditory stimuli instead of, or in addition to, visual stimuli. Much less is known about auditory accuracy using standard Web-based testing procedures. We used a millisecond-accurate Black Box Toolkit to measure the actual durations of auditory stimuli and the synchronization of auditory and visual presentation onsets. We examined the distribution of timings for 100 presentations of auditory and visual stimuli across two computers with different specifications, three commonly used browsers, and code written in either Adobe Flash or JavaScript. We also examined different coding options for attempting to synchronize the auditory and visual onsets. Overall, we found that auditory durations were very consistent, but that the lags between visual and auditory onsets varied substantially across browsers and computer systems.
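The timing analysis described here, summarizing the distribution of lags between visual and auditory onsets over repeated presentations, reduces to simple statistics over per-trial onset timestamps. A minimal sketch of that reduction (our own illustrative helper, not code from the study or from the Black Box Toolkit):

```python
from statistics import mean, stdev

def onset_lag_summary(visual_onsets_ms, audio_onsets_ms):
    """Per-trial audio-visual lag (audio onset minus visual onset, in ms)
    and summary statistics, as one would compute from logged timestamps."""
    lags = [a - v for v, a in zip(visual_onsets_ms, audio_onsets_ms)]
    return {
        "lags": lags,
        "mean": mean(lags),        # average asynchrony
        "sd": stdev(lags),         # trial-to-trial variability
        "worst": max(lags, key=abs),
    }
```

For example, three trials with visual onsets at 0, 1000 and 2000 ms and audio onsets at 35, 1042 and 2021 ms give lags of 35, 42 and 21 ms; a large `sd` across trials would indicate the kind of browser- and system-dependent variability the study reports.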
Auditory connections and functions of prefrontal cortex
Plakke, Bethany; Romanski, Lizabeth M.
2014-01-01
The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition. PMID:25100931
A role for descending auditory cortical projections in songbird vocal learning
Mandelblat-Cerf, Yael; Las, Liora; Denisenko, Natalia; Fee, Michale S
2014-01-01
Many learned motor behaviors are acquired by comparing ongoing behavior with an internal representation of correct performance, rather than using an explicit external reward. For example, juvenile songbirds learn to sing by comparing their song with the memory of a tutor song. At present, the brain regions subserving song evaluation are not known. In this study, we report several findings suggesting that song evaluation involves an avian 'cortical' area previously shown to project to the dopaminergic midbrain and other downstream targets. We find that this ventral portion of the intermediate arcopallium (AIV) receives inputs from auditory cortical areas, and that lesions of AIV result in significant deficits in vocal learning. Additionally, AIV neurons exhibit fast responses to disruptive auditory feedback presented during singing, but not during nonsinging periods. Our findings suggest that auditory cortical areas may guide learning by transmitting song evaluation signals to the dopaminergic midbrain and/or other subcortical targets. DOI: http://dx.doi.org/10.7554/eLife.02152.001 PMID:24935934
Abrams, Daniel A; Nicol, Trent; White-Schwoch, Travis; Zecker, Steven; Kraus, Nina
2017-05-01
Speech perception relies on a listener's ability to simultaneously resolve multiple temporal features in the speech signal. Little is known regarding neural mechanisms that enable the simultaneous coding of concurrent temporal features in speech. Here we show that two categories of temporal features in speech, the low-frequency speech envelope and periodicity cues, are processed by distinct neural mechanisms within the same population of cortical neurons. We measured population activity in primary auditory cortex of anesthetized guinea pig in response to three variants of a naturally produced sentence. Results show that the envelope of population responses closely tracks the speech envelope, and this cortical activity more closely reflects wider bandwidths of the speech envelope compared to narrow bands. Additionally, neuronal populations represent the fundamental frequency of speech robustly with phase-locked responses. Importantly, these two temporal features of speech are simultaneously observed within neuronal ensembles in auditory cortex in response to clear, conversation, and compressed speech exemplars. Results show that auditory cortical neurons are adept at simultaneously resolving multiple temporal features in extended speech sentences using discrete coding mechanisms. Copyright © 2017 Elsevier B.V. All rights reserved.
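The two temporal features contrasted above, the low-frequency amplitude envelope and the periodicity (fundamental-frequency) cue, can both be recovered from a waveform with elementary signal processing, which makes the distinction concrete. A minimal, illustrative sketch on a synthetic 100 Hz harmonic signal (the functions and parameters are our own demo choices, not the study's methods):

```python
import math

def envelope(x, win):
    """Low-frequency amplitude envelope: mean absolute value over a
    sliding window of +/- win samples (a crude rectify-and-smooth)."""
    out = []
    for i in range(len(x)):
        lo, hi = max(0, i - win), min(len(x), i + win + 1)
        out.append(sum(abs(v) for v in x[lo:hi]) / (hi - lo))
    return out

def f0_autocorr(x, fs, fmin=50.0, fmax=400.0):
    """Fundamental frequency estimate: the lag (within a plausible pitch
    range) that maximizes the signal's autocorrelation."""
    lags = range(int(fs / fmax), int(fs / fmin) + 1)
    best = max(lags, key=lambda lag: sum(x[i] * x[i - lag]
                                         for i in range(lag, len(x))))
    return fs / best

# Synthetic periodic signal: 100 Hz fundamental plus two harmonics.
fs = 8000
x = [sum(math.sin(2 * math.pi * 100 * k * n / fs) / k for k in (1, 2, 3))
     for n in range(1600)]
```

Here `f0_autocorr(x, fs)` recovers the 100 Hz fundamental from the periodicity of the waveform, while `envelope` tracks slower amplitude fluctuations; by analogy, the cortical population carries the former as phase-locked activity and the latter in its response envelope.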
Emotion modulates activity in the 'what' but not 'where' auditory processing pathway.
Kryklywy, James H; Macpherson, Ewan A; Greening, Steven G; Mitchell, Derek G V
2013-11-15
Auditory cortices can be separated into dissociable processing pathways similar to those observed in the visual domain. Emotional stimuli elicit enhanced neural activation within sensory cortices when compared to neutral stimuli. This effect is particularly notable in the ventral visual stream. Little is known, however, about how emotion interacts with dorsal processing streams, and essentially nothing is known about the impact of emotion on auditory stimulus localization. In the current study, we used fMRI in concert with individualized auditory virtual environments to investigate the effect of emotion during an auditory stimulus localization task. Surprisingly, participants were significantly slower to localize emotional relative to neutral sounds. A separate localizer scan was performed to isolate neural regions sensitive to stimulus location independent of emotion. When applied to the main experimental task, a significant main effect of location, but not emotion, was found in this ROI. A whole-brain analysis of the data revealed that posterior-medial regions of auditory cortex were modulated by sound location; however, additional anterior-lateral areas of auditory cortex demonstrated enhanced neural activity to emotional compared to neutral stimuli. The latter region resembled areas described in dual pathway models of auditory processing as the 'what' processing stream, prompting a follow-up task to generate an identity-sensitive ROI (the 'what' pathway) independent of location and emotion. Within this region, significant main effects of location and emotion were identified, as well as a significant interaction. These results suggest that emotion modulates activity in the 'what,' but not the 'where,' auditory processing pathway. Copyright © 2013 Elsevier Inc. All rights reserved.
Ai, Hiroyuki; Kai, Kazuki; Kumaraswamy, Ajayrama; Ikeno, Hidetoshi; Wachtler, Thomas
2017-11-01
Female honeybees use the "waggle dance" to communicate the location of nectar sources to their hive mates. Distance information is encoded in the duration of the waggle phase (von Frisch, 1967). During the waggle phase, the dancer produces trains of vibration pulses, which are detected by the follower bees via Johnston's organ located on the antennae. To uncover the neural mechanisms underlying the encoding of distance information in the waggle dance follower, we investigated morphology, physiology, and immunohistochemistry of interneurons arborizing in the primary auditory center of the honeybee (Apis mellifera). We identified major interneuron types, named DL-Int-1, DL-Int-2, and bilateral DL-dSEG-LP, that responded with different spiking patterns to vibration pulses applied to the antennae. Experimental and computational analyses suggest that inhibitory connections play a role in encoding and processing the duration of vibration pulse trains in the primary auditory center of the honeybee. SIGNIFICANCE STATEMENT The waggle dance represents a form of symbolic communication used by honeybees to convey the location of food sources via species-specific sound. The brain mechanisms used to decipher this symbolic information are unknown. We examined interneurons in the honeybee primary auditory center and identified different neuron types with specific properties. The results of our computational analyses suggest that inhibitory connections play a role in encoding waggle dance signals. Our results are critical for understanding how the honeybee deciphers information from the sound produced by the waggle dance and provide new insights regarding how common neural mechanisms are used by different species to achieve communication. Copyright © 2017 the authors.
Speech training alters consonant and vowel responses in multiple auditory cortex fields
Engineer, Crystal T.; Rahebi, Kimiya C.; Buell, Elizabeth P.; Fink, Melyssa K.; Kilgard, Michael P.
2015-01-01
Speech sounds evoke unique neural activity patterns in primary auditory cortex (A1). Extensive speech sound discrimination training alters A1 responses. While the neighboring auditory cortical fields each contain information about speech sound identity, each field processes speech sounds differently. We hypothesized that while all fields would exhibit training-induced plasticity following speech training, there would be unique differences in how each field changes. In this study, rats were trained to discriminate speech sounds by consonant or vowel in quiet and in varying levels of background speech-shaped noise. Local field potential and multiunit responses were recorded from four auditory cortex fields in rats that had received 10 weeks of speech discrimination training. Our results reveal that training alters speech evoked responses in each of the auditory fields tested. The neural response to consonants was significantly stronger in anterior auditory field (AAF) and A1 following speech training. The neural response to vowels following speech training was significantly weaker in ventral auditory field (VAF) and posterior auditory field (PAF). This differential plasticity of consonant and vowel sound responses may result from the greater paired pulse depression, expanded low frequency tuning, reduced frequency selectivity, and lower tone thresholds, which occurred across the four auditory fields. These findings suggest that alterations in the distributed processing of behaviorally relevant sounds may contribute to robust speech discrimination. PMID:25827927
Demopoulos, Carly; Hopkins, Joyce; Kopald, Brandon E; Paulson, Kim; Doyle, Lauren; Andrews, Whitney E; Lewine, Jeffrey David
2015-11-01
The primary aim of this study was to examine whether there is an association between magnetoencephalography-based (MEG) indices of basic cortical auditory processing and vocal affect recognition (VAR) ability in individuals with autism spectrum disorder (ASD). MEG data were collected from 25 children/adolescents with ASD and 12 control participants using a paired-tone paradigm to measure quality of auditory physiology, sensory gating, and rapid auditory processing. Group differences were examined in auditory processing and vocal affect recognition ability. The relationship between differences in auditory processing and vocal affect recognition deficits was examined in the ASD group. Replicating prior studies, participants with ASD showed longer M1n latencies and impaired rapid processing compared with control participants. These variables were significantly related to VAR, with the linear combination of auditory processing variables accounting for approximately 30% of the variability after controlling for age and language skills in participants with ASD. VAR deficits in ASD are typically interpreted as part of a core, higher order dysfunction of the "social brain"; however, these results suggest they also may reflect basic deficits in auditory processing that compromise the extraction of socially relevant cues from the auditory environment. As such, they also suggest that therapeutic targeting of sensory dysfunction in ASD may have additional positive implications for other functional deficits. (c) 2015 APA, all rights reserved.
Dozo, M T
1987-01-01
A natural endocranial cast representing the complete brain of a specimen of Hapalops indifferens is described. Compared with the brains of extant Tardigrada, the cast shows a telencephalic morphology and a pattern of neocortical sulci that resemble the brain of Bradypus more than that of Choloepus. The neocortical sulci are homologized with the lateral (or corono-lateral), suprasylvian, and pseudosylvian sulci. Taking into account studies of cortical maps in Bradypus and the notable similarity of the pattern of neocortical sulci between Bradypus and H. indifferens, the probable positions of the primary somatosensory and motor areas, the secondary somatosensory area, and the visual and auditory areas are inferred. As in Bradypus, the primary somatosensory and motor somatotopic organizations would overlap and would not be mirror images; they would show a predominance of the forelimb area. The relative brain size of H. indifferens is similar to, or greater than, that of sloths of the genus Bradypus. The close resemblance between Bradypus and Hapalops in brain morphology and relative brain size is congruent with current hypotheses of the phylogenetic relations between fossil and recent Tardigrada.
Happel, Max F. K.; Ohl, Frank W.
2017-01-01
Robust perception of auditory objects over a large range of sound intensities is a fundamental feature of the auditory system. However, firing characteristics of single neurons across the entire auditory system, like the frequency tuning, can change significantly with stimulus intensity. Physiological correlates of level-constancy of auditory representations hence should be manifested on the level of larger neuronal assemblies or population patterns. In this study we have investigated how information about frequency and sound level is integrated at the circuit level in the primary auditory cortex (AI) of the Mongolian gerbil. We used a combination of pharmacological silencing of corticocortically relayed activity and laminar current source density (CSD) analysis. Our data demonstrate that, with increasing stimulus intensity, progressively lower frequencies evoke the maximal impulse response within the cortical input layers at a given cortical site, an effect inherited from thalamocortical synaptic inputs. We further identified a temporally precise intercolumnar synaptic convergence of early thalamocortical and horizontal corticocortical inputs. Later tone-evoked activity in upper layers showed a preservation of broad tonotopic tuning across sound levels without shifts towards lower frequencies. Synaptic integration within corticocortical circuits may hence contribute to a level-robust representation of auditory information at the neuronal population level in the auditory cortex. PMID:28046062
Auditory perception modulated by word reading.
Cao, Liyu; Klepp, Anne; Schnitzler, Alfons; Gross, Joachim; Biermann-Ruben, Katja
2016-10-01
Theories of embodied cognition positing that sensorimotor areas are indispensable during language comprehension are supported by neuroimaging and behavioural studies. Among others, the auditory system has been suggested to be important for understanding sound-related words (visually presented) and the motor system for action-related words. In this behavioural study, using a sound detection task embedded in a lexical decision task, we show that in participants with high lexical decision performance sound verbs improve auditory perception. The amount of modulation was correlated with lexical decision performance. Our study provides convergent behavioural evidence of auditory cortex involvement in word processing, supporting the view of embodied language comprehension concerning the auditory domain.
ERIC Educational Resources Information Center
Osnes, Berge; Hugdahl, Kenneth; Hjelmervik, Helene; Specht, Karsten
2012-01-01
In studies on auditory speech perception, participants are often asked to perform active tasks, e.g. decide whether the perceived sound is a speech sound or not. However, information about the stimulus, inherent in such tasks, may induce expectations that cause altered activations not only in the auditory cortex, but also in frontal areas such as…
Dutke, Stephan; Jaitner, Thomas; Berse, Timo; Barenberg, Jonathan
2014-02-01
Research on effects of acute physical exercise on performance in a concurrent cognitive task has generated equivocal evidence. Processing efficiency theory predicts that concurrent physical exercise can increase resource requirements for sustaining cognitive performance even when the level of performance is unaffected. This hypothesis was tested in a dual-task experiment. Sixty young adults worked on a primary auditory attention task and a secondary interval production task while cycling on a bicycle ergometer. Physical load (cycling) and cognitive load of the primary task were manipulated. Neither physical nor cognitive load affected primary task performance, but both factors interacted on secondary task performance. Sustaining primary task performance under increased physical and/or cognitive load increased resource consumption as indicated by decreased secondary task performance. Results demonstrated that physical exercise effects on cognition might be underestimated when only single task performance is the focus.
Increased Early Processing of Task-Irrelevant Auditory Stimuli in Older Adults
Tusch, Erich S.; Alperin, Brittany R.; Holcomb, Phillip J.; Daffner, Kirk R.
2016-01-01
The inhibitory deficit hypothesis of cognitive aging posits that older adults’ inability to adequately suppress processing of irrelevant information is a major source of cognitive decline. Prior research has demonstrated that in response to task-irrelevant auditory stimuli there is an age-associated increase in the amplitude of the N1 wave, an ERP marker of early perceptual processing. Here, we tested predictions derived from the inhibitory deficit hypothesis that the age-related increase in N1 would be 1) observed under an auditory-ignore, but not auditory-attend condition, 2) attenuated in individuals with high executive capacity (EC), and 3) augmented by increasing cognitive load of the primary visual task. ERPs were measured in 114 well-matched young, middle-aged, young-old, and old-old adults, designated as having high or average EC based on neuropsychological testing. Under the auditory-ignore (visual-attend) task, participants ignored auditory stimuli and responded to rare target letters under low and high load. Under the auditory-attend task, participants ignored visual stimuli and responded to rare target tones. Results confirmed an age-associated increase in N1 amplitude to auditory stimuli under the auditory-ignore but not auditory-attend task. Contrary to predictions, EC did not modulate the N1 response. The load effect was the opposite of expectation: the N1 to task-irrelevant auditory events was smaller under high load. Finally, older adults did not simply fail to suppress the N1 to auditory stimuli in the task-irrelevant modality; they generated a larger response than to identical stimuli in the task-relevant modality. In summary, several of the study’s findings do not fit the inhibitory-deficit hypothesis of cognitive aging, which may need to be refined or supplemented by alternative accounts. PMID:27806081
The role of Broca's area in speech perception: evidence from aphasia revisited.
Hickok, Gregory; Costanzo, Maddalena; Capasso, Rita; Miceli, Gabriele
2011-12-01
Motor theories of speech perception have been re-vitalized as a consequence of the discovery of mirror neurons. Some authors have even promoted a strong version of the motor theory, arguing that the motor speech system is critical for perception. Part of the evidence that is cited in favor of this claim is the observation from the early 1980s that individuals with Broca's aphasia, and therefore inferred damage to Broca's area, can have deficits in speech sound discrimination. Here we re-examine this issue in 24 patients with radiologically confirmed lesions to Broca's area and various degrees of associated non-fluent speech production. Patients performed two same-different discrimination tasks involving pairs of CV syllables, one in which both CVs were presented auditorily, and the other in which one syllable was auditorily presented and the other visually presented as an orthographic form; word comprehension was also assessed using word-to-picture matching tasks in both auditory and visual forms. Discrimination performance on the all-auditory task was four standard deviations above chance, as measured using d', and was unrelated to the degree of non-fluency in the patients' speech production. Performance on the auditory-visual task, however, was worse than, and not correlated with, the all-auditory task. The auditory-visual task was related to the degree of speech non-fluency. Word comprehension was at ceiling for the auditory version (97% accuracy) and near ceiling for the orthographic version (90% accuracy). We conclude that the motor speech system is not necessary for speech perception as measured both by discrimination and comprehension paradigms, but may play a role in orthographic decoding or in auditory-visual matching of phonological forms. 2011 Elsevier Inc. All rights reserved.
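The d' figure reported for the all-auditory task is the standard signal-detection sensitivity index. A minimal sketch of its computation from trial counts (Python standard library only; the log-linear correction shown is one common convention, not necessarily the one the authors applied):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = Z(hit rate) - Z(false-alarm rate) for a same-different task.

    A log-linear correction (add 0.5 to each count) keeps rates of
    exactly 0 or 1 away from the infinite tails of the inverse normal.
    """
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return z(hit_rate) - z(fa_rate)
```

Equal hit and false-alarm rates give d' = 0 (chance); the four-standard-deviation performance described above corresponds to d' = 4.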
Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O
2008-09-16
Event-related potential studies revealed an early posterior negativity (EPN) for emotional compared to neutral pictures. Exploring the emotion-attention relationship, a previous study observed that a primary visual discrimination task interfered with the emotional modulation of the EPN component. To specify the locus of interference, the present study assessed the fate of selective visual emotion processing while attention is directed towards the auditory modality. While simply viewing a rapid and continuous stream of pleasant, neutral, and unpleasant pictures in one experimental condition, processing demands of a concurrent auditory target discrimination task were systematically varied in three further experimental conditions. Participants successfully performed the auditory task as revealed by behavioral performance and selected event-related potential components. Replicating previous results, emotional pictures were associated with a larger posterior negativity compared to neutral pictures. Of main interest, increasing demands of the auditory task did not modulate the selective processing of emotional visual stimuli. With regard to the locus of interference, selective emotion processing as indexed by the EPN does not seem to reflect shared processing resources of visual and auditory modality.
Sullivan, Jessica R; Osman, Homira; Schafer, Erin C
2015-06-01
The objectives of the current study were to examine the effect of noise (-5 dB SNR) on auditory comprehension and to examine its relationship with working memory. It was hypothesized that noise has a negative impact on information processing, auditory working memory, and comprehension. Children with normal hearing between the ages of 8 and 10 years were administered working memory and comprehension tasks in quiet and noise. The comprehension measure comprised 5 domains: main idea, details, reasoning, vocabulary, and understanding messages. Performance on auditory working memory and comprehension tasks was significantly poorer in noise than in quiet. The reasoning, details, understanding, and vocabulary subtests were particularly affected in noise (p < .05). The relationship between auditory working memory and comprehension was stronger in noise than in quiet, suggesting an increased contribution of working memory. These data suggest that school-age children's auditory working memory and comprehension are negatively affected by noise. Performance on comprehension tasks in noise is strongly related to demands placed on working memory, supporting the theory that degraded listening conditions draw resources away from the primary task.
Gopalakrishnan, R; Burgess, R C; Plow, E B; Floden, D P; Machado, A G
2015-09-24
Pain anticipation plays a critical role in pain chronification and results in disability due to pain avoidance. It is important to understand how different sensory modalities (auditory, visual or tactile) may influence pain anticipation as different strategies could be applied to mitigate anticipatory phenomena and chronification. In this study, using a countdown paradigm, we evaluated with magnetoencephalography the neural networks associated with pain anticipation elicited by different sensory modalities in normal volunteers. When encountered with well-established cues that signaled pain, visual and somatosensory cortices engaged the pain neuromatrix areas early during the countdown process, whereas the auditory cortex displayed delayed processing. In addition, during pain anticipation, the visual cortex displayed independent processing capabilities after learning the contextual meaning of cues from associative and limbic areas. Interestingly, cross-modal activation was also evident and strong when visual and tactile cues signaled upcoming pain. Dorsolateral prefrontal cortex and mid-cingulate cortex showed significant activity during pain anticipation regardless of modality. Our results show pain anticipation is processed with great time efficiency by a highly specialized and hierarchical network. The highest degree of higher-order processing is modulated by context (pain) rather than content (modality) and rests within the associative limbic regions, corroborating their intrinsic role in chronification. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
Male-to-female gender dysphoria: Gender-specific differences in resting-state networks.
Clemens, Benjamin; Junger, Jessica; Pauly, Katharina; Neulen, Josef; Neuschaefer-Rube, Christiane; Frölich, Dirk; Mingoia, Gianluca; Derntl, Birgit; Habel, Ute
2017-05-01
Recent research found gender-related differences in resting-state functional connectivity (rs-FC) measured by functional magnetic resonance imaging (fMRI). To the best of our knowledge, there are no studies examining the differences in rs-FC between men, women, and individuals who report a discrepancy between their anatomical sex and their gender identity, i.e. gender dysphoria (GD). To address this important issue, we present the first fMRI study systematically investigating the differences in typical resting-state networks (RSNs) and hormonal treatment effects in 26 male-to-female GD individuals (MtFs) compared with 19 men and 20 women. Differences between male and female control groups were found only in the auditory RSN, whereas differences between both control groups and MtFs were found in the auditory and fronto-parietal RSNs, including both primary sensory areas (e.g. calcarine gyrus) and higher order cognitive areas such as the middle and posterior cingulate and dorsomedial prefrontal cortex. Overall, differences in MtFs compared with men and women were more pronounced before cross-sex hormonal treatment. Interestingly, rs-FC between MtFs and women did not differ significantly after treatment. When comparing hormonally untreated and treated MtFs, we found differences in connectivity of the calcarine gyrus and thalamus in the context of the auditory network, as well as the inferior frontal gyrus in context of the fronto-parietal network. Our results provide first evidence that MtFs exhibit patterns of rs-FC which are different from both their assigned and their aspired gender, indicating an intermediate position between the two sexes. We suggest that the present study constitutes a starting point for future research designed to clarify whether the brains of individuals with GD are more similar to their assigned or their aspired gender.
High lead exposure and auditory sensory-neural function in Andean children.
Counter, S A; Vahter, M; Laurell, G; Buchanan, L H; Ortega, F; Skerfving, S
1997-01-01
We investigated blood lead (B-Pb) and mercury (B-Hg) levels and auditory sensory-neural function in 62 Andean school children living in a Pb-contaminated area of Ecuador and 14 children in a neighboring gold mining area with no known Pb exposure. The median B-Pb level for the 62 children in the Pb-exposed group was 52.6 micrograms/dl (range 9.9-110.0 micrograms/dl) compared with 6.4 micrograms/dl (range 3.9-12.0 micrograms/dl) for the children in the non-Pb-exposed group; the differences were statistically significant (p < 0.001). Auditory thresholds for the Pb-exposed group were normal at the pure tone frequencies of 0.25-8 kHz over the entire range of B-Pb levels. Auditory brain stem response tests in seven children with high B-Pb levels showed normal absolute peak and interpeak latencies. The median B-Hg levels were 0.16 micrograms/dl (range 0.04-0.58 micrograms/dl) for children in the Pb-exposed group and 0.22 micrograms/dl (range 0.1-0.44 micrograms/dl) for children in the non-Pb-exposed gold mining area, and showed no significant relationship to auditory function. PMID:9222138
Chen, Yu-Chen; Xia, Wenqing; Chen, Huiyou; Feng, Yuan; Xu, Jin-Jing; Gu, Jian-Ping; Salvi, Richard; Yin, Xindao
2017-05-01
The phantom sound of tinnitus is believed to be triggered by aberrant neural activity in the central auditory pathway, but since this debilitating condition is often associated with emotional distress and anxiety, these comorbidities likely arise from maladaptive functional connections to limbic structures such as the amygdala and hippocampus. To test this hypothesis, resting-state functional magnetic resonance imaging (fMRI) was used to identify aberrant effective connectivity of the amygdala and hippocampus in tinnitus patients and to determine the relationship with tinnitus characteristics. Chronic tinnitus patients (n = 26) and age-, sex-, and education-matched healthy controls (n = 23) were included. Both groups were comparable for hearing level. Granger causality analysis utilizing the amygdala and hippocampus as seed regions was used to investigate the directional connectivity and the relationship with tinnitus duration or distress. Relative to healthy controls, tinnitus patients demonstrated abnormal directional connectivity of the amygdala and hippocampus, including primary and association auditory cortex, and other non-auditory areas. Importantly, scores on the Tinnitus Handicap Questionnaires were positively correlated with increased connectivity from the left amygdala to left superior temporal gyrus (r = 0.570, P = 0.005), and from the right amygdala to right superior temporal gyrus (r = 0.487, P = 0.018). Moreover, enhanced effective connectivity from the right hippocampus to left transverse temporal gyrus was correlated with tinnitus duration (r = 0.452, P = 0.030). The results showed that tinnitus distress strongly correlates with enhanced effective connectivity that is directed from the amygdala to the auditory cortex. The longer the phantom sensation persists, the more likely acute tinnitus is to become permanently encoded by memory traces in the hippocampus. Hum Brain Mapp 38:2384-2397, 2017. © 2017 Wiley Periodicals, Inc.
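The directional ("effective") connectivity above rests on Granger causality: the past of one signal improves prediction of another beyond that signal's own past. A minimal pairwise sketch (Python/NumPy; the lag order and the log variance-ratio statistic are illustrative simplifications of what fMRI toolboxes actually fit):

```python
import numpy as np

def granger(x, y, lags=2):
    """Granger causality from x to y as a log residual-variance ratio.

    Fits two ordinary-least-squares AR models for y: one on y's own past
    (restricted), one on the past of both y and x (full). Values near 0
    mean x's history adds nothing; larger values mean x "Granger-causes" y.
    """
    n = len(y)
    Y = y[lags:]
    own = np.column_stack([y[lags - k:n - k] for k in range(1, lags + 1)])
    both = np.column_stack([own] + [x[lags - k:n - k] for k in range(1, lags + 1)])

    def resid_var(X):
        X = np.column_stack([np.ones(len(Y)), X])  # add intercept column
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        return np.mean((Y - X @ beta) ** 2)

    return np.log(resid_var(own) / resid_var(both))
```

On two series where y is driven by lagged x, granger(x, y) comes out large while granger(y, x) stays near zero, which is the asymmetry the seed-region analysis exploits.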
Bais, Leonie; Vercammen, Ans; Stewart, Roy; van Es, Frank; Visser, Bert; Aleman, André; Knegtering, Henderikus
2014-01-01
Background Repetitive transcranial magnetic stimulation of the left temporo-parietal junction area has been studied as a treatment option for auditory verbal hallucinations. Although the right temporo-parietal junction area has also shown involvement in the genesis of auditory verbal hallucinations, no studies have used bilateral stimulation. Moreover, little is known about the durability of effects. We studied the short- and long-term effects of 1 Hz treatment of the left temporo-parietal junction area in schizophrenia patients with persistent auditory verbal hallucinations, compared to sham stimulation, and added an extra treatment arm of bilateral TPJ area stimulation. Methods In this randomized controlled trial, 51 patients diagnosed with schizophrenia and persistent auditory verbal hallucinations were randomly allocated to treatment of the left or bilateral temporo-parietal junction area or sham treatment. Patients were treated for six days, twice daily for 20 minutes. Short-term efficacy was measured with the Positive and Negative Syndrome Scale (PANSS), the Auditory Hallucinations Rating Scale (AHRS), and the Positive and Negative Affect Scale (PANAS). We included follow-up measures with the AHRS and PANAS at four weeks and three months. Results The interaction between time and treatment for Hallucination item P3 of the PANSS showed a trend for significance, caused by a small reduction of scores in the left group. Although self-reported hallucination scores, as measured with the AHRS and PANAS, decreased significantly during the trial period, there were no differences between the three treatment groups. Conclusion We did not find convincing evidence for the efficacy of left-sided rTMS, compared to sham rTMS. Moreover, bilateral rTMS was not superior to left rTMS or sham in improving AVH. Optimizing treatment parameters may result in stronger evidence for the efficacy of rTMS treatment of AVH.
Moreover, future research should consider investigating factors predicting individual response. Trial Registration Dutch Trial Register NTR1813 PMID:25329799
Neural mechanisms underlying sound-induced visual motion perception: An fMRI study.
Hidaka, Souta; Higuchi, Satomi; Teramoto, Wataru; Sugita, Yoichi
2017-07-01
Studies of crossmodal interactions in motion perception have reported activation in several brain areas, including those related to motion processing and/or sensory association, in response to multimodal (e.g., visual and auditory) stimuli that were both in motion. Recent studies have demonstrated that sounds can trigger illusory visual apparent motion to static visual stimuli (sound-induced visual motion: SIVM): A visual stimulus blinking at a fixed location is perceived to be moving laterally when an alternating left-right sound is also present. Here, we investigated brain activity related to the perception of SIVM using a 7T functional magnetic resonance imaging technique. Specifically, we focused on the patterns of neural activities in SIVM and visually induced visual apparent motion (VIVM). We observed shared activations in the middle occipital area (V5/hMT), which is thought to be involved in visual motion processing, for SIVM and VIVM. Moreover, as compared to VIVM, SIVM resulted in greater activation in the superior temporal area and dominant functional connectivity between the V5/hMT area and the areas related to auditory and crossmodal motion processing. These findings indicate that similar but partially different neural mechanisms could be involved in auditory-induced and visually-induced motion perception, and neural signals in auditory, visual, and crossmodal motion processing areas closely and directly interact in the perception of SIVM. Copyright © 2017 Elsevier B.V. All rights reserved.
Background sounds contribute to spectrotemporal plasticity in primary auditory cortex.
Moucha, Raluca; Pandya, Pritesh K; Engineer, Navzer D; Rathbun, Daniel L; Kilgard, Michael P
2005-05-01
The mammalian auditory system evolved to extract meaningful information from complex acoustic environments. Spectrotemporal selectivity of auditory neurons provides a potential mechanism to represent natural sounds. Experience-dependent plasticity mechanisms can remodel the spectrotemporal selectivity of neurons in primary auditory cortex (A1). Electrical stimulation of the cholinergic nucleus basalis (NB) enables plasticity in A1 that parallels natural learning and is specific to acoustic features associated with NB activity. In this study, we used NB stimulation to explore how cortical networks reorganize after experience with frequency-modulated (FM) sweeps, and how background stimuli contribute to spectrotemporal plasticity in rat auditory cortex. Pairing an 8-4 kHz FM sweep with NB stimulation 300 times per day for 20 days decreased tone thresholds, frequency selectivity, and response latency of A1 neurons in the region of the tonotopic map activated by the sound. In an attempt to modify neuronal response properties across all of A1 the same NB activation was paired in a second group of rats with five downward FM sweeps, each spanning a different octave. No changes in FM selectivity or receptive field (RF) structure were observed when the neural activation was distributed across the cortical surface. However, the addition of unpaired background sweeps of different rates or direction was sufficient to alter RF characteristics across the tonotopic map in a third group of rats. These results extend earlier observations that cortical neurons can develop stimulus specific plasticity and indicate that background conditions can strongly influence cortical plasticity.
Sensory-to-motor integration during auditory repetition: a combined fMRI and lesion study
Parker Jones, ‘Ōiwi; Prejawa, Susan; Hope, Thomas M. H.; Oberhuber, Marion; Seghier, Mohamed L.; Leff, Alex P.; Green, David W.; Price, Cathy J.
2014-01-01
The aim of this paper was to investigate the neurological underpinnings of auditory-to-motor translation during auditory repetition of unfamiliar pseudowords. We tested two different hypotheses. First we used functional magnetic resonance imaging in 25 healthy subjects to determine whether a functionally defined area in the left temporo-parietal junction (TPJ), referred to as Sylvian-parietal-temporal region (Spt), reflected the demands on auditory-to-motor integration during the repetition of pseudowords relative to a semantically mediated nonverbal sound-naming task. The experiment also allowed us to test alternative accounts of Spt function, namely that Spt is involved in subvocal articulation or auditory processing that can be driven either bottom-up or top-down. The results did not provide convincing evidence that activation increased in either Spt or any other cortical area when non-semantic auditory inputs were being translated into motor outputs. Instead, the results were most consistent with Spt responding to bottom-up or top-down auditory processing, independent of the demands on auditory-to-motor integration. Second, we investigated the lesion sites in eight patients who had selective difficulties repeating heard words but with preserved word comprehension, picture naming and verbal fluency (i.e., conduction aphasia). All eight patients had white-matter tract damage in the vicinity of the arcuate fasciculus and only one of the eight patients had additional damage to the Spt region, defined functionally in our fMRI data. Our results are therefore most consistent with the neurological tradition that emphasizes the importance of the arcuate fasciculus in the non-semantic integration of auditory and motor speech processing. PMID:24550807
Gestures, vocalizations, and memory in language origins.
Aboitiz, Francisco
2012-01-01
This article discusses the possible homologies between the human language networks and comparable auditory projection systems in the macaque brain, in an attempt to reconcile two existing views on language evolution: one that emphasizes hand control and gestures, and the other that emphasizes auditory-vocal mechanisms. The capacity for language is based on relatively well defined neural substrates whose rudiments have been traced in the non-human primate brain. At its core, this circuit constitutes an auditory-vocal sensorimotor circuit with two main components, a "ventral pathway" connecting anterior auditory regions with anterior ventrolateral prefrontal areas, and a "dorsal pathway" connecting auditory areas with parietal areas and with posterior ventrolateral prefrontal areas via the arcuate fasciculus and the superior longitudinal fasciculus. In humans, the dorsal circuit is especially important for phonological processing and phonological working memory, capacities that are critical for language acquisition and for complex syntax processing. In the macaque, the homolog of the dorsal circuit overlaps with an inferior parietal-premotor network for hand and gesture selection that is under voluntary control, while vocalizations are largely fixed and involuntary. The recruitment of the dorsal component for vocalization behavior in the human lineage, together with a direct cortical control of the subcortical vocalizing system, are proposed to represent a fundamental innovation in human evolution, generating an inflection point that permitted the explosion of vocal language and human communication. In this context, vocal communication and gesturing have a common history in primate communication.
Fang, Yuxing; Chen, Quanjing; Lingnau, Angelika; Han, Zaizhu; Bi, Yanchao
2016-01-01
The observation of other people's actions recruits a network of areas including the inferior frontal gyrus (IFG), the inferior parietal lobule (IPL), and posterior middle temporal gyrus (pMTG). These regions have been shown to be activated through both visual and auditory inputs. Intriguingly, previous studies found no engagement of IFG and IPL for deaf participants during non-linguistic action observation, leading to the proposal that auditory experience or sign language usage might shape the functionality of these areas. To understand which variables induce plastic changes in areas recruited during the processing of other people's actions, we examined the effects of tasks (action understanding and passive viewing) and effectors (arm actions vs. leg actions), as well as sign language experience in a group of 12 congenitally deaf signers and 13 hearing participants. In Experiment 1, we found a stronger activation during an action recognition task in comparison to a low-level visual control task in IFG, IPL and pMTG in both deaf signers and hearing individuals, but no effect of auditory or sign language experience. In Experiment 2, we replicated the results of the first experiment using a passive viewing task. Together, our results provide robust evidence demonstrating that the response obtained in IFG, IPL, and pMTG during action recognition and passive viewing is not affected by auditory or sign language experience, adding further support for the supra-modal nature of these regions.
Distinct Cortical Pathways for Music and Speech Revealed by Hypothesis-Free Voxel Decomposition
Norman-Haignere, Sam
2015-01-01
The organization of human auditory cortex remains unresolved, due in part to the small stimulus sets common to fMRI studies and the overlap of neural populations within voxels. To address these challenges, we measured fMRI responses to 165 natural sounds and inferred canonical response profiles (“components”) whose weighted combinations explained voxel responses throughout auditory cortex. This analysis revealed six components, each with interpretable response characteristics despite being unconstrained by prior functional hypotheses. Four components embodied selectivity for particular acoustic features (frequency, spectrotemporal modulation, pitch). Two others exhibited pronounced selectivity for music and speech, respectively, and were not explainable by standard acoustic features. Anatomically, music and speech selectivity concentrated in distinct regions of non-primary auditory cortex. However, music selectivity was weak in raw voxel responses, and its detection required a decomposition method. Voxel decomposition identifies primary dimensions of response variation across natural sounds, revealing distinct cortical pathways for music and speech. PMID:26687225
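The component analysis described above factors the sound × voxel response matrix into a few canonical response profiles plus per-voxel weights. The paper's actual method is a non-parametric decomposition; as a hedged stand-in, a truncated SVD (Python/NumPy) illustrates the same matrix structure, with the function name chosen here for illustration:

```python
import numpy as np

def decompose(D, k):
    """Factor a (sounds x voxels) response matrix D into k components.

    Returns R (sounds x k response profiles) and W (k x voxels weights)
    with D ~= R @ W: every voxel's response across the sound set is
    modeled as a weighted combination of k shared profiles.
    """
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    R = U[:, :k] * s[:k]   # scale profiles by singular values
    W = Vt[:k]             # per-voxel weights on each component
    return R, W
```

If the data truly lie in a k-dimensional component space, the rank-k reconstruction R @ W recovers the matrix essentially exactly; the paper's contribution is choosing a decomposition whose components are also interpretable.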
Jacobsen, Leslie K; Slotkin, Theodore A; Mencl, W Einar; Frost, Stephen J; Pugh, Kenneth R
2007-12-01
Prenatal exposure to active maternal tobacco smoking elevates risk of cognitive and auditory processing deficits, and of smoking in offspring. Recent preclinical work has demonstrated a sex-specific pattern of reduction in cortical cholinergic markers following prenatal, adolescent, or combined prenatal and adolescent exposure to nicotine, the primary psychoactive component of tobacco smoke. Given the importance of cortical cholinergic neurotransmission to attentional function, we examined auditory and visual selective and divided attention in 181 male and female adolescent smokers and nonsmokers with and without prenatal exposure to maternal smoking. Groups did not differ in age, educational attainment, symptoms of inattention, or years of parent education. A subset of 63 subjects also underwent functional magnetic resonance imaging while performing an auditory and visual selective and divided attention task. Among females, exposure to tobacco smoke during prenatal or adolescent development was associated with reductions in auditory and visual attention performance accuracy that were greatest in female smokers with prenatal exposure (combined exposure). Among males, combined exposure was associated with marked deficits in auditory attention, suggesting greater vulnerability of neurocircuitry supporting auditory attention to insult stemming from developmental exposure to tobacco smoke in males. Activation of brain regions that support auditory attention was greater in adolescents with prenatal or adolescent exposure to tobacco smoke relative to adolescents with neither prenatal nor adolescent exposure to tobacco smoke. These findings extend earlier preclinical work and suggest that, in humans, prenatal and adolescent exposure to nicotine exerts gender-specific deleterious effects on auditory and visual attention, with concomitant alterations in the efficiency of neurocircuitry supporting auditory attention.
Impact of Educational Level on Performance on Auditory Processing Tests.
Murphy, Cristina F B; Rabelo, Camila M; Silagi, Marcela L; Mansur, Letícia L; Schochat, Eliane
2016-01-01
Research has demonstrated that a higher level of education is associated with better performance on cognitive tests among middle-aged and elderly people. However, the effects of education on auditory processing skills have not yet been evaluated. Previous demonstrations of sensory-cognitive interactions in the aging process indicate the potential importance of this topic. Therefore, the primary purpose of this study was to investigate the performance of middle-aged and elderly people with different levels of formal education on auditory processing tests. A total of 177 adults with no evidence of cognitive, psychological or neurological conditions took part in the research. The participants completed a series of auditory assessments, including dichotic digit, frequency pattern and speech-in-noise tests. A working memory test was also performed to investigate the extent to which auditory processing and cognitive performance were associated. The results demonstrated positive but weak correlations between years of schooling and performance on all of the tests applied. The factor "years of schooling" was also one of the best predictors of frequency pattern and speech-in-noise test performance. Additionally, performance on the working memory, frequency pattern and dichotic digit tests was also correlated, suggesting that the influence of educational level on auditory processing performance might be associated with the cognitive demand of the auditory processing tests rather than with auditory sensory aspects themselves. Longitudinal research is required to investigate the causal relationship between educational level and auditory processing skills.
Happel, Max F K; Jeschke, Marcus; Ohl, Frank W
2010-08-18
Primary sensory cortex integrates sensory information from afferent feedforward thalamocortical projection systems and convergent intracortical microcircuits. Both input systems have been demonstrated to provide different aspects of sensory information. Here we have used high-density recordings of laminar current source density (CSD) distributions in primary auditory cortex of Mongolian gerbils in combination with pharmacological silencing of cortical activity and analysis of the residual CSD, to dissociate the feedforward thalamocortical contribution and the intracortical contribution to spectral integration. We found a temporally highly precise integration of both types of inputs when the stimulation frequency was in close spectral neighborhood of the best frequency of the measurement site, in which the overlap between both inputs is maximal. Local intracortical connections provide both directly feedforward excitatory and modulatory input from adjacent cortical sites, which determine how concurrent afferent inputs are integrated. Through separate excitatory horizontal projections, terminating in cortical layers II/III, information about stimulus energy in greater spectral distance is provided even over long cortical distances. These projections effectively broaden spectral tuning width. Based on these data, we suggest a mechanism of spectral integration in primary auditory cortex that is based on temporally precise interactions of afferent thalamocortical inputs and different short- and long-range intracortical networks. The proposed conceptual framework allows integration of different and partly controversial anatomical and physiological models of spectral integration in the literature.
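The laminar CSD analysis described here is conventionally computed as the negative second spatial derivative of the local field potential across electrode depth. A minimal sketch of that standard estimator (this is not the authors' pipeline; the electrode spacing and conductivity values are illustrative assumptions):

```python
import numpy as np

def csd_second_derivative(lfp, spacing_um=100.0, sigma=0.3):
    """Estimate laminar CSD as the negative second spatial derivative of
    the LFP along the electrode axis.

    lfp        : array of shape (channels, time), channels ordered by depth
    spacing_um : inter-electrode spacing in micrometres (assumed value)
    sigma      : tissue conductivity in S/m (assumed value)
    """
    h = spacing_um * 1e-6  # spacing in metres
    # second spatial difference across neighbouring channels
    d2 = lfp[2:, :] - 2.0 * lfp[1:-1, :] + lfp[:-2, :]
    return -sigma * d2 / h ** 2  # shape: (channels - 2, time)

# toy input: 16 recording depths, 1000 time samples of synthetic LFP
rng = np.random.default_rng(0)
lfp = rng.standard_normal((16, 1000))
csd = csd_second_derivative(lfp)  # the two outermost channels drop out
```

Note that the finite-difference estimate is undefined at the two boundary channels, so a 16-channel probe yields a 14-channel CSD profile.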
Kantrowitz, J T; Hoptman, M J; Leitman, D I; Silipo, G; Javitt, D C
2014-01-01
Intact sarcasm perception is a crucial component of social cognition and mentalizing (the ability to understand the mental state of oneself and others). In sarcasm, tone of voice is used to negate the literal meaning of an utterance. In particular, changes in pitch are used to distinguish between sincere and sarcastic utterances. Schizophrenia patients show well-replicated deficits in auditory function and functional connectivity (FC) within and between auditory cortical regions. In this study we investigated the contributions of auditory deficits to sarcasm perception in schizophrenia. Auditory measures including pitch processing, auditory emotion recognition (AER) and sarcasm detection were obtained from 76 patients with schizophrenia/schizo-affective disorder and 72 controls. Resting-state FC (rsFC) was obtained from a subsample and was analyzed using seeds placed in both auditory cortex and meta-analysis-defined core-mentalizing regions relative to auditory performance. Patients showed large effect-size deficits across auditory measures. Sarcasm deficits correlated significantly with general functioning and impaired pitch processing both across groups and within the patient group alone. Patients also showed reduced sensitivity to alterations in mean pitch and variability. For patients, sarcasm discrimination correlated exclusively with the level of rsFC within primary auditory regions whereas for controls, correlations were observed exclusively within core-mentalizing regions (the right posterior superior temporal gyrus, anterior superior temporal sulcus and insula, and left posterior medial temporal gyrus). These findings confirm the contribution of auditory deficits to theory of mind (ToM) impairments in schizophrenia, and demonstrate that FC within auditory, but not core-mentalizing, regions is rate limiting with respect to sarcasm detection in schizophrenia.
Concentric scheme of monkey auditory cortex
Kosaki, Hiroko; Saunders, Richard C.; Mishkin, Mortimer
2003-04-01
The cytoarchitecture of the rhesus monkey's auditory cortex was examined using immunocytochemical staining with parvalbumin, calbindin-D28K, and SMI32, as well as staining for cytochrome oxidase (CO). The results suggest that Kaas and Hackett's scheme of the auditory cortices can be extended to include five concentric rings surrounding an inner core. The inner core, containing areas A1 and R, is the most densely stained with parvalbumin and CO and can be separated on the basis of laminar patterns of SMI32 staining into lateral and medial subdivisions. From the inner core to the fifth (outermost) ring, parvalbumin staining gradually decreases and calbindin staining gradually increases. The first ring corresponds to Kaas and Hackett's auditory belt, and the second, to their parabelt. SMI32 staining revealed a clear border between these two. Rings 2 through 5 extend laterally into the dorsal bank of the superior temporal sulcus. The results also suggest that the rostral tip of the outermost ring adjoins the rostroventral part of the insula (area Pro) and the temporal pole, while the caudal tip adjoins the ventral part of area 7a.
Wolak, Tomasz; Cieśla, Katarzyna; Rusiniak, Mateusz; Piłka, Adam; Lewandowska, Monika; Pluta, Agnieszka; Skarżyński, Henryk; Skarżyński, Piotr H
2016-11-28
BACKGROUND The goal of the fMRI experiment was to explore the involvement of central auditory structures in pathomechanisms of a behaviorally manifested auditory temporary threshold shift in humans. MATERIAL AND METHODS The material included 18 healthy volunteers with normal hearing. Subjects in the exposure group were presented with 15 min of binaural acoustic overstimulation of narrowband noise (3 kHz central frequency) at 95 dB(A). The control group was not exposed to noise but instead relaxed in silence. Auditory fMRI was performed in 1 session before and 3 sessions after acoustic overstimulation and involved 3.5-4.5 kHz sweeps. RESULTS The outcomes of the study indicate a possible effect of acoustic overstimulation on central processing, with decreased brain responses to auditory stimulation up to 20 min after exposure to noise. The effect can be seen already in the primary auditory cortex. Decreased BOLD signal change can be due to increased excitation thresholds and/or increased spontaneous activity of auditory neurons throughout the auditory system. CONCLUSIONS The trial shows that fMRI can be a valuable tool in acoustic overstimulation studies but has to be used with caution and considered complementary to audiological measures. Further methodological improvements are needed to distinguish the effects of TTS and neuronal habituation to repetitive stimulation.
Auditory Cortical Plasticity Drives Training-Induced Cognitive Changes in Schizophrenia
Dale, Corby L.; Brown, Ethan G.; Fisher, Melissa; Herman, Alexander B.; Dowling, Anne F.; Hinkley, Leighton B.; Subramaniam, Karuna; Nagarajan, Srikantan S.; Vinogradov, Sophia
2016-01-01
Schizophrenia is characterized by dysfunction in basic auditory processing, as well as higher-order operations of verbal learning and executive functions. We investigated whether targeted cognitive training of auditory processing improves neural responses to speech stimuli, and how these changes relate to higher-order cognitive functions. Patients with schizophrenia performed an auditory syllable identification task during magnetoencephalography before and after 50 hours of either targeted cognitive training or a computer games control. Healthy comparison subjects were assessed at baseline and after a 10 week no-contact interval. Prior to training, patients (N = 34) showed reduced M100 response in primary auditory cortex relative to healthy participants (N = 13). At reassessment, only the targeted cognitive training patient group (N = 18) exhibited increased M100 responses. Additionally, this group showed increased induced high gamma band activity within left dorsolateral prefrontal cortex immediately after stimulus presentation, and later in bilateral temporal cortices. Training-related changes in neural activity correlated with changes in executive function scores but not verbal learning and memory. These data suggest that computerized cognitive training that targets auditory and verbal learning operations enhances both sensory responses in auditory cortex as well as engagement of prefrontal regions, as indexed during an auditory processing task with low demands on working memory. This neural circuit enhancement is in turn associated with better executive function but not verbal memory. PMID:26152668
Gómez-Nieto, Ricardo; Horta-Júnior, José de Anchieta C.; Castellano, Orlando; Millian-Morell, Lymarie; Rubio, Maria E.; López, Dolores E.
2014-01-01
The acoustic startle reflex (ASR) is a survival mechanism of alarm, which rapidly alerts the organism to a sudden loud auditory stimulus. In rats, the primary ASR circuit encompasses three serially connected structures: cochlear root neurons (CRNs), neurons in the caudal pontine reticular nucleus (PnC), and motoneurons in the medulla and spinal cord. It is well-established that both CRNs and PnC neurons receive short-latency auditory inputs to mediate the ASR. Here, we investigated the anatomical origin and functional role of these inputs using a multidisciplinary approach that combines morphological, electrophysiological and behavioral techniques. Anterograde tracer injections into the cochlea suggest that CRNs somata and dendrites receive inputs depending, respectively, on their basal or apical cochlear origin. Confocal colocalization experiments demonstrated that these cochlear inputs are immunopositive for the vesicular glutamate transporter 1 (VGLUT1). Using extracellular recordings in vivo followed by subsequent tracer injections, we investigated the response of PnC neurons after contra-, ipsi-, and bilateral acoustic stimulation and identified the source of their auditory afferents. Our results showed that the binaural firing rate of PnC neurons was higher than the monaural, exhibiting higher spike discharges with contralateral than ipsilateral acoustic stimulations. Our histological analysis confirmed the CRNs as the principal source of short-latency acoustic inputs, and indicated that other areas of the cochlear nucleus complex are not likely to innervate PnC. Behaviorally, we observed a strong reduction of ASR amplitude in monaural earplugged rats that corresponds with the binaural summation process shown in our electrophysiological findings. 
Our study contributes to a better understanding of the role of neuronal mechanisms in auditory alerting behaviors and provides strong evidence that the CRNs-PnC pathway mediates fast neurotransmission and binaural summation of the ASR. PMID:25120419
Furutani, Rui
2008-09-01
The present investigation carried out Nissl, Klüver-Barrera, and Golgi studies of the cerebral cortex in three distinct genera of oceanic dolphins (Risso's dolphin, striped dolphin, and bottlenose dolphin) to identify and classify cortical laminar and cytoarchitectonic structures in four distinct functional areas: primary motor (M1), primary sensory (S1), primary visual (V1), and primary auditory (A1) cortices. The laminar and cytoarchitectonic organization of each of these cortical areas was similar among the three dolphin species. M1 was visualized as a five-layer structure that included the molecular layer (layer I), external granular layer (layer II), external pyramidal layer (layer III), internal pyramidal layer (layer V), and fusiform layer (layer VI). The internal granular layer was absent. The cetacean sensory-related cortical areas S1, V1, and A1 were also found to have a five-layer organization comprising layers I, II, III, V, and VI. In particular, A1 was characterized by the broadest layers I and II and a well-developed band of pyramidal neurons in layers III (sublayers IIIa, IIIb, and IIIc) and V. A patch organization consisting of layer IIIb pyramidal neurons was detected in S1 and V1, but not in A1. The laminar patterns of V1 and S1 were similar, but the cytoarchitectonic structures of the two areas were different. V1 was characterized by a broader layer II than that of S1, and also contained specialized pyramidal and multipolar stellate neurons in layers III and V.
Transient human auditory cortex activation during volitional attention shifting
Uhlig, Christian Harm; Gutschalk, Alexander
2017-01-01
While strong activation of auditory cortex is generally found for exogenous orienting of attention, endogenous, intra-modal shifting of auditory attention has not yet been demonstrated to evoke transient activation of the auditory cortex. Here, we used fMRI to test if endogenous shifting of attention is also associated with transient activation of the auditory cortex. In contrast to previous studies, attention shifts were completely self-initiated and not cued by transient auditory or visual stimuli. Stimuli were two dichotic, continuous streams of tones, whose perceptual grouping was not ambiguous. Participants were instructed to continuously focus on one of the streams and switch between the two after a while, indicating the time and direction of each attentional shift by pressing one of two response buttons. The BOLD response around the time of the button presses revealed robust activation of the auditory cortex, along with activation of a distributed task network. To test if the transient auditory cortex activation was specifically related to auditory orienting, a self-paced motor task was added, where participants were instructed to ignore the auditory stimulation while they pressed the response buttons in alternation and at a similar pace. Results showed that attentional orienting produced stronger activity in auditory cortex, but auditory cortex activation was also observed for button presses without focused attention to the auditory stimulus. The response related to attention shifting was stronger contralateral to the side where attention was shifted to. Contralateral-dominant activation was also observed in dorsal parietal cortex areas, confirming previous observations for auditory attention shifting in studies that used auditory cues. PMID:28273110
Neural substrates related to auditory working memory comparisons in dyslexia: An fMRI study
CONWAY, TIM; HEILMAN, KENNETH M.; GOPINATH, KAUNDINYA; PECK, KYUNG; BAUER, RUSSELL; BRIGGS, RICHARD W.; TORGESEN, JOSEPH K.; CROSSON, BRUCE
2010-01-01
Adult readers with developmental phonological dyslexia exhibit significant difficulty comparing pseudowords and pure tones in auditory working memory (AWM). This suggests deficient AWM skills in adults diagnosed with dyslexia. Despite behavioral differences, it is unknown whether the neural substrates of AWM differ between adults diagnosed with dyslexia and normal readers. Prior neuroimaging of adults diagnosed with dyslexia and normal readers, and post-mortem findings of neural structural anomalies in adults diagnosed with dyslexia, support the hypothesis of atypical neural activity in temporoparietal and inferior frontal regions during AWM tasks in adults diagnosed with dyslexia. We used fMRI during two binaural AWM tasks (pseudoword or pure-tone comparisons) in adults diagnosed with dyslexia (n = 11) and normal readers (n = 11). For both AWM tasks, adults diagnosed with dyslexia exhibited greater activity in left posterior superior temporal (BA 22) and inferior parietal regions (BA 40) than normal readers. Comparing neural activity between groups and between stimulus contrasts (pseudowords vs. tones), adults diagnosed with dyslexia showed greater primary auditory cortex activity (BA 42; tones > pseudowords) than normal readers. Thus, greater activity in primary auditory, posterior superior temporal, and inferior parietal cortices during linguistic and non-linguistic AWM tasks in adults diagnosed with dyslexia compared to normal readers indicates differences in the neural substrates of AWM comparison tasks. PMID:18577292
[Feasibility of auditory cortical stimulation for the treatment of tinnitus. Three case reports].
Litré, C-F; Giersky, F; Theret, E; Leveque, M; Peruzzi, P; Rousseaux, P
2010-08-01
Tinnitus is a public health issue in France. Around 1% of the population is affected and 30,000 people are handicapped in their daily life. The treatments available for disabling tinnitus have until now been disappointing. We report our experience with the treatment of these patients in neurosurgery. Between 2006 and 2008, repetitive transcranial magnetic stimulation (rTMS) was performed following several supraliminal and subliminal protocols in 16 patients whose mean age was 47 years (range, 35-71). All patients underwent anatomical and functional MRI of the auditory cortex before and 18 h after rTMS, to straddle the primary and secondary auditory cortices. All patients underwent audiometric testing by an ENT physician. Nine patients responded to rTMS. After these investigations, two quadripolar electrodes (Resume), connected to a stimulating device implanted under the skin (Synergy, from Medtronic), were implanted extradurally in three patients. The electrodes were placed between the primary and secondary auditory cortices. The mean follow-up was 25 months and significant improvement was found in these patients. The feasibility of cortical stimulation in the symptomatic treatment of tinnitus was demonstrated by this preparatory work. The intermediate- and long-term therapeutic effects remain to be evaluated.
Neurotrophic factor intervention restores auditory function in deafened animals
Shinohara, Takayuki; Bredberg, Göran; Ulfendahl, Mats; Pyykkö, Ilmari; Petri Olivius, N.; Kaksonen, Risto; Lindström, Bo; Altschuler, Richard; Miller, Josef M.
2002-02-01
A primary cause of deafness is damage of receptor cells in the inner ear. Clinically, it has been demonstrated that effective functionality can be provided by electrical stimulation of the auditory nerve, thus bypassing damaged receptor cells. However, subsequent to sensory cell loss there is a secondary degeneration of the afferent nerve fibers, resulting in reduced effectiveness of such cochlear prostheses. The effects of neurotrophic factors were tested in a guinea pig cochlear prosthesis model. After chemical deafening to mimic the clinical situation, the neurotrophic factors brain-derived neurotrophic factor and an analogue of ciliary neurotrophic factor were infused directly into the cochlea of the inner ear for 26 days by using an osmotic pump system. An electrode introduced into the cochlea was used to elicit auditory responses just as in patients implanted with cochlear prostheses. Intervention with brain-derived neurotrophic factor and the ciliary neurotrophic factor analogue not only increased the survival of auditory spiral ganglion neurons, but significantly enhanced the functional responsiveness of the auditory system as measured by using electrically evoked auditory brainstem responses. This demonstration that neurotrophin intervention enhances threshold sensitivity within the auditory system will have great clinical importance for the treatment of deaf patients with cochlear prostheses. The findings have direct implications for the enhancement of responsiveness in deafferented peripheral nerves.
de Pesters, A; Coon, W G; Brunner, P; Gunduz, A; Ritaccio, A L; Brunet, N M; de Weerd, P; Roberts, M J; Oostenveld, R; Fries, P; Schalk, G
2016-07-01
Performing different tasks, such as generating motor movements or processing sensory input, requires the recruitment of specific networks of neuronal populations. Previous studies suggested that power variations in the alpha band (8-12Hz) may implement such recruitment of task-specific populations by increasing cortical excitability in task-related areas while inhibiting population-level cortical activity in task-unrelated areas (Klimesch et al., 2007; Jensen and Mazaheri, 2010). However, the precise temporal and spatial relationships between the modulatory function implemented by alpha oscillations and population-level cortical activity remained undefined. Furthermore, while several studies suggested that alpha power indexes task-related populations across large and spatially separated cortical areas, it was largely unclear whether alpha power also differentially indexes smaller networks of task-related neuronal populations. Here we addressed these questions by investigating the temporal and spatial relationships of electrocorticographic (ECoG) power modulations in the alpha band and in the broadband gamma range (70-170Hz, indexing population-level activity) during auditory and motor tasks in five human subjects and one macaque monkey. In line with previous research, our results confirm that broadband gamma power accurately tracks task-related behavior and that alpha power decreases in task-related areas. More importantly, they demonstrate that alpha power suppression lags population-level activity in auditory areas during the auditory task, but precedes it in motor areas during the motor task. This suppression of alpha power in task-related areas was accompanied by an increase in areas not related to the task. In addition, we show for the first time that these differential modulations of alpha power could be observed not only across widely distributed systems (e.g., motor vs. auditory system), but also within the auditory system. 
Specifically, alpha power was suppressed in the locations within the auditory system that most robustly responded to particular sound stimuli. Altogether, our results provide experimental evidence for a mechanism that preferentially recruits task-related neuronal populations by increasing cortical excitability in task-related cortical areas and decreasing cortical excitability in task-unrelated areas. This mechanism is implemented by variations in alpha power and is common to humans and the non-human primate under study. These results contribute to an increasingly refined understanding of the mechanisms underlying the selection of the specific neuronal populations required for task execution.
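The alpha (8-12 Hz) and broadband gamma (70-170 Hz) power modulations contrasted above are typically derived from a power spectral density estimate. A minimal sketch using Welch's method on a synthetic signal (this is not the authors' ECoG pipeline; the sampling rate and test signal are assumptions for illustration):

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, f_lo, f_hi):
    """Mean spectral power of x within [f_lo, f_hi] Hz (Welch's method)."""
    f, pxx = welch(x, fs=fs, nperseg=fs)  # 1-s segments -> 1 Hz resolution
    mask = (f >= f_lo) & (f <= f_hi)
    return pxx[mask].mean()

fs = 1000                           # Hz, assumed sampling rate
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
# 10 Hz (alpha-band) sinusoid embedded in broadband noise
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
alpha = band_power(x, fs, 8, 12)    # dominated by the 10 Hz component
gamma = band_power(x, fs, 70, 170)  # noise floor only
```

For this test signal the alpha estimate exceeds the gamma estimate, since the sinusoid falls inside the alpha band and only noise occupies the gamma range.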
D’Angiulli, Amedeo; Griffiths, Gordon; Marmolejo-Ramos, Fernando
2015-01-01
The neural correlates of visualization underlying word comprehension were examined in preschool children. On each trial, a concrete or abstract word was delivered binaurally (part 1: post-auditory visualization), followed by a four-picture array (a target plus three distractors; part 2: matching visualization). Children were to select the picture matching the word they heard in part 1. Event-related potentials (ERPs) locked to each stimulus presentation and task interval were averaged over sets of trials of increasing word abstractness. ERP time-course during both parts of the task showed that early activity (i.e., <300 ms) was predominant in response to concrete words, while activity in response to abstract words became evident only at intermediate (i.e., 300–699 ms) and late (i.e., 700–1000 ms) ERP intervals. Specifically, ERP topography showed that while early activity during post-auditory visualization was linked to left temporo-parietal areas for concrete words, early activity during matching visualization occurred mostly in occipito-parietal areas for concrete words, but more anteriorly in centro-parietal areas for abstract words. In intermediate ERPs, post-auditory visualization coincided with parieto-occipital and parieto-frontal activity in response to both concrete and abstract words, while in matching visualization a parieto-central activity was common to both types of words. In the late ERPs for both types of words, post-auditory visualization involved right-hemispheric activity following a postero-anterior pathway sequence: occipital, parietal, and temporal areas; conversely, matching visualization involved left-hemispheric activity following an antero-posterior pathway sequence: frontal, temporal, parietal, and occipital areas.
These results suggest that, for both concrete and abstract words, meaning in young children depends on variably complex visualization processes integrating visuo-auditory experiences and supramodal embodied representations. PMID:26175697
Functional anatomic studies of memory retrieval for auditory words and visual pictures.
Buckner, R L; Raichle, M E; Miezin, F M; Petersen, S E
1996-10-01
Functional neuroimaging with positron emission tomography was used to study brain areas activated during memory retrieval. Subjects (n = 15) recalled items from a recent study episode (episodic memory) during two paired-associate recall tasks. The tasks differed in that PICTURE RECALL required pictorial retrieval, whereas AUDITORY WORD RECALL required word retrieval. Word REPETITION and REST served as two reference tasks. Comparing recall with repetition revealed the following observations. (1) Right anterior prefrontal activation (similar to that seen in several previous experiments), in addition to bilateral frontal-opercular and anterior cingulate activations. (2) An anterior subdivision of medial frontal cortex [pre-supplementary motor area (SMA)] was activated, which could be dissociated from a more posterior area (SMA proper). (3) Parietal areas were activated, including a posterior medial area near precuneus, that could be dissociated from an anterior parietal area that was deactivated. (4) Multiple medial and lateral cerebellar areas were activated. Comparing recall with rest revealed similar activations, except right prefrontal activation was minimal and activations related to motor and auditory demands became apparent (e.g., bilateral motor and temporal cortex). Directly comparing picture recall with auditory word recall revealed few notable activations. Taken together, these findings suggest a pathway that is commonly used during the episodic retrieval of picture and word stimuli under these conditions. Many areas in this pathway overlap with areas previously activated by a different set of retrieval tasks using stem-cued recall, demonstrating their generality. Examination of activations within individual subjects in relation to structural magnetic resonance images provided anatomic information about the location of these activations.
Such data, when combined with the dissociations between functional areas, provide an increasingly detailed picture of the brain pathways involved in episodic retrieval tasks.
Neuroanatomical and resting state EEG power correlates of central hearing loss in older adults.
Giroud, Nathalie; Hirsiger, Sarah; Muri, Raphaela; Kegel, Andrea; Dillier, Norbert; Meyer, Martin
2018-01-01
To gain more insight into central hearing loss, we investigated the relationship between cortical thickness and surface area, speech-relevant resting state EEG power, and above-threshold auditory measures in older adults and younger controls. Twenty-three older adults and 13 younger controls were tested with an adaptive auditory test battery to measure not only traditional pure-tone thresholds, but also above individual thresholds of temporal and spectral processing. The participants' speech recognition in noise (SiN) was evaluated, and a T1-weighted MRI image obtained for each participant. We then determined the cortical thickness (CT) and mean cortical surface area (CSA) of auditory and higher speech-relevant regions of interest (ROIs) with FreeSurfer. Further, we obtained resting state EEG from all participants as well as data on the intrinsic theta and gamma power lateralization, the latter in accordance with predictions of the Asymmetric Sampling in Time hypothesis regarding speech processing (Poeppel, Speech Commun 41:245-255, 2003). Methodological steps involved the calculation of age-related differences in behavior, anatomy and EEG power lateralization, followed by multiple regressions with anatomical ROIs as predictors for auditory performance. We then determined anatomical regressors for theta and gamma lateralization, and further constructed all regressions to investigate age as a moderator variable. Behavioral results indicated that older adults performed worse in temporal and spectral auditory tasks, and in SiN, despite having normal peripheral hearing as signaled by the audiogram. These behavioral age-related distinctions were accompanied by lower CT in all ROIs, while CSA was not different between the two age groups. Age modulated the regressions specifically in right auditory areas, where a thicker cortex was associated with better auditory performance in older adults. 
Moreover, a thicker right supratemporal sulcus predicted more rightward theta lateralization, indicating the functional relevance of the right auditory areas in older adults. The question of how age-related cortical thinning and intrinsic EEG architecture relate to central hearing loss has so far not been addressed. Here, we provide the first neuroanatomical and neurofunctional evidence that cortical thinning and lateralization of speech-relevant frequency band power relate to the extent of age-related central hearing loss in older adults. The results are discussed within the current frameworks of speech processing and aging.
Neural network retuning and neural predictors of learning success associated with cello training.
Wollman, Indiana; Penhune, Virginia; Segado, Melanie; Carpentier, Thibaut; Zatorre, Robert J
2018-06-26
The auditory and motor neural systems are closely intertwined, enabling people to carry out tasks such as playing a musical instrument whose mapping between action and sound is extremely sophisticated. While the dorsal auditory stream has been shown to mediate these audio-motor transformations, little is known about how such mapping emerges with training. Here, we use longitudinal training on a cello as a model for brain plasticity during the acquisition of specific complex skills, including continuous and many-to-one audio-motor mapping, and we investigate individual differences in learning. We trained participants with no musical background to play on a specially designed MRI-compatible cello and scanned them before and after 1 and 4 wk of training. Activation of the auditory-to-motor dorsal cortical stream emerged rapidly during the training and was similarly activated during passive listening and cello performance of trained melodies. This network activation was independent of performance accuracy and therefore appears to be a prerequisite of music playing. In contrast, greater recruitment of regions involved in auditory encoding and motor control over the training was related to better musical proficiency. Additionally, pre-supplementary motor area activity and its connectivity with the auditory cortex during passive listening before training was predictive of final training success, revealing the integrative function of this network in auditory-motor information processing. Together, these results clarify the critical role of the dorsal stream and its interaction with auditory areas in complex audio-motor learning.
Infants’ brain responses to speech suggest Analysis by Synthesis
Kuhl, Patricia K.; Ramírez, Rey R.; Bosseler, Alexis; Lin, Jo-Fu Lotus; Imada, Toshiaki
2014-01-01
Historic theories of speech perception (Motor Theory and Analysis by Synthesis) invoked listeners’ knowledge of speech production to explain speech perception. Neuroimaging data show that adult listeners activate motor brain areas during speech perception. In two experiments using magnetoencephalography (MEG), we investigated motor brain activation, as well as auditory brain activation, during discrimination of native and nonnative syllables in infants at two ages that straddle the developmental transition from language-universal to language-specific speech perception. Adults are also tested in Exp. 1. MEG data revealed that 7-mo-old infants activate auditory (superior temporal) as well as motor brain areas (Broca’s area, cerebellum) in response to speech, and equivalently for native and nonnative syllables. However, in 11- and 12-mo-old infants, native speech activates auditory brain areas to a greater degree than nonnative, whereas nonnative speech activates motor brain areas to a greater degree than native speech. This double dissociation in 11- to 12-mo-old infants matches the pattern of results obtained in adult listeners. Our infant data are consistent with Analysis by Synthesis: auditory analysis of speech is coupled with synthesis of the motor plans necessary to produce the speech signal. The findings have implications for: (i) perception-action theories of speech perception, (ii) the impact of “motherese” on early language learning, and (iii) the “social-gating” hypothesis and humans’ development of social understanding. PMID:25024207
Auditory motion processing after early blindness
Jiang, Fang; Stecker, G. Christopher; Fine, Ione
2014-01-01
Studies showing that occipital cortex responds to auditory and tactile stimuli after early blindness are often interpreted as demonstrating that early blind subjects “see” auditory and tactile stimuli. However, it is not clear whether these occipital responses directly mediate the perception of auditory/tactile stimuli, or simply modulate or augment responses within other sensory areas. We used fMRI pattern classification to categorize the perceived direction of motion for both coherent and ambiguous auditory motion stimuli. In sighted individuals, perceived motion direction was accurately categorized based on neural responses within the planum temporale (PT) and right lateral occipital cortex (LOC). Within early blind individuals, auditory motion decisions for both stimuli were successfully categorized from responses within the human middle temporal complex (hMT+), but not the PT or right LOC. These findings suggest that early blind responses within hMT+ are associated with the perception of auditory motion, and that these responses in hMT+ may usurp some of the functions of nondeprived PT. Thus, our results provide further evidence that blind individuals do indeed “see” auditory motion. PMID:25378368
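The abstract above does not specify which pattern classifier was used; as an illustration of the general fMRI decoding approach (categorizing perceived motion direction from multivoxel response patterns), here is a minimal nearest-centroid sketch. All names, the two-direction labels, and the toy voxel patterns are hypothetical, not taken from the study.

```python
import numpy as np

def train_centroids(patterns, labels):
    """Mean voxel response pattern per class (e.g., motion direction)."""
    return {lab: patterns[labels == lab].mean(axis=0) for lab in np.unique(labels)}

def classify(centroids, pattern):
    """Assign a new response pattern to the class with the nearest centroid."""
    return min(centroids, key=lambda lab: np.linalg.norm(pattern - centroids[lab]))

# Toy training data: 4 trials x 2 voxels, two motion directions
patterns = np.array([[1.0, 0.0], [1.1, 0.1], [0.0, 1.0], [0.1, 1.2]])
labels = np.array(["left", "left", "right", "right"])
centroids = train_centroids(patterns, labels)
```

In practice the classifier would be cross-validated across scanning runs; the nearest-centroid rule simply stands in for whatever linear classifier a study employs.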
Learning-dependent plasticity in human auditory cortex during appetitive operant conditioning.
Puschmann, Sebastian; Brechmann, André; Thiel, Christiane M
2013-11-01
Animal experiments provide evidence that learning to associate an auditory stimulus with a reward causes representational changes in auditory cortex. However, most studies did not investigate the temporal formation of learning-dependent plasticity during the task but rather compared auditory cortex receptive fields before and after conditioning. Here we present a functional magnetic resonance imaging study on learning-related plasticity in the human auditory cortex during operant appetitive conditioning. Participants had to learn to associate a specific category of frequency-modulated tones with a reward. Only participants who learned this association developed learning-dependent plasticity in left auditory cortex over the course of the experiment. No differential responses to reward predicting and nonreward predicting tones were found in auditory cortex in nonlearners. In addition, learners showed similar learning-induced differential responses to reward-predicting and nonreward-predicting tones in the ventral tegmental area and the nucleus accumbens, two core regions of the dopaminergic neurotransmitter system. This may indicate a dopaminergic influence on the formation of learning-dependent plasticity in auditory cortex, as it has been suggested by previous animal studies. Copyright © 2012 Wiley Periodicals, Inc.
Characterization of auditory synaptic inputs to gerbil perirhinal cortex
Kotak, Vibhakar C.; Mowery, Todd M.; Sanes, Dan H.
2015-01-01
The representation of acoustic cues involves regions downstream from the auditory cortex (ACx). One such area, the perirhinal cortex (PRh), processes sensory signals containing mnemonic information. Therefore, our goal was to assess whether PRh receives auditory inputs from the auditory thalamus (MG) and ACx in an auditory thalamocortical brain slice preparation and characterize these afferent-driven synaptic properties. When the MG or ACx was electrically stimulated, synaptic responses were recorded from the PRh neurons. Blockade of type A gamma-aminobutyric acid (GABA-A) receptors dramatically increased the amplitude of evoked excitatory potentials. Stimulation of the MG or ACx also evoked calcium transients in most PRh neurons. Separately, when fluoro ruby was injected in ACx in vivo, anterogradely labeled axons and terminals were observed in the PRh. Collectively, these data show that the PRh integrates auditory information from the MG and ACx and that auditory driven inhibition dominates the postsynaptic responses in a non-sensory cortical region downstream from the ACx. PMID:26321918
Rimmele, Johanna Maria; Sussman, Elyse; Poeppel, David
2015-02-01
Listening situations with multiple talkers or background noise are common in everyday communication and are particularly demanding for older adults. Here we review current research on auditory perception in aging individuals in order to gain insights into the challenges of listening under noisy conditions. Informationally rich temporal structure in auditory signals--over a range of time scales from milliseconds to seconds--renders temporal processing central to perception in the auditory domain. We discuss the role of temporal structure in auditory processing, in particular from a perspective relevant for hearing in background noise, and focusing on sensory memory, auditory scene analysis, and speech perception. Interestingly, these auditory processes, usually studied in an independent manner, show considerable overlap of processing time scales, even though each has its own 'privileged' temporal regimes. By integrating perspectives on temporal structure processing in these three areas of investigation, we aim to highlight similarities typically not recognized. Copyright © 2014 Elsevier B.V. All rights reserved.
Bell, Brittany A; Phan, Mimi L; Vicario, David S
2015-03-01
How do social interactions form and modulate the neural representations of specific complex signals? This question can be addressed in the songbird auditory system. Like humans, songbirds learn to vocalize by imitating tutors heard during development. These learned vocalizations are important in reproductive and social interactions and in individual recognition. As a model for the social reinforcement of particular songs, male zebra finches were trained to peck for a food reward in response to one song stimulus (GO) and to withhold responding for another (NoGO). After performance reached criterion, single and multiunit neural responses to both trained and novel stimuli were obtained from multiple electrodes inserted bilaterally into two songbird auditory processing areas [caudomedial mesopallium (CMM) and caudomedial nidopallium (NCM)] of awake, restrained birds. Neurons in these areas undergo stimulus-specific adaptation to repeated song stimuli, and responses to familiar stimuli adapt more slowly than to novel stimuli. The results show that auditory responses differed in NCM and CMM for trained (GO and NoGO) stimuli vs. novel song stimuli. When subjects were grouped by the number of training days required to reach criterion, fast learners showed larger neural responses and faster stimulus-specific adaptation to all stimuli than slow learners in both areas. Furthermore, responses in NCM of fast learners were more strongly left-lateralized than in slow learners. Thus auditory responses in these sensory areas not only encode stimulus familiarity, but also reflect behavioral reinforcement in our paradigm, and can potentially be modulated by social interactions. Copyright © 2015 the American Physiological Society.
Psychophysical and Neural Correlates of Auditory Attraction and Aversion
NASA Astrophysics Data System (ADS)
Patten, Kristopher Jakob
This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based on auditory scene analysis, the process through which listeners parse auditory signals into individual acoustic sources. The first experiment tests and confirms that a self-rated pleasantness continuum reliably exists across 20 diverse stimuli (r = .48). In addition, the pleasantness continuum correlated with the physical acoustic characteristics of consonance/dissonance (r = .78), which can facilitate auditory parsing processes. The second experiment uses an fMRI block design to test blood oxygen level dependent (BOLD) changes elicited by a subset of 5 exemplar stimuli chosen from Experiment 1 that are evenly distributed over the pleasantness continuum. Specifically, it tests and confirms that the pleasantness continuum produces systematic changes in brain activity for unpleasant acoustic stimuli beyond what occurs with pleasant auditory stimuli. Results revealed that the combination of two positively and two negatively valenced experimental sounds compared to one neutral baseline control elicited BOLD increases in the primary auditory cortex, specifically the bilateral superior temporal gyrus, and left dorsomedial prefrontal cortex; the latter being consistent with a frontal decision-making process common in identification tasks. The negatively-valenced stimuli yielded additional BOLD increases in the left insula, which typically indicates processing of visceral emotions. The positively-valenced stimuli did not yield any significant BOLD activation, consistent with consonant, harmonic stimuli being the prototypical acoustic pattern of auditory objects that is optimal for auditory scene analysis.
Both the psychophysical findings of Experiment 1 and the neural findings of Experiment 2 support the conclusion that consonance is an important dimension of sound, processed in a manner that aids auditory parsing and the functional representation of acoustic objects, and that it is a principal feature of pleasing auditory stimuli.
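The correlations the study reports (r = .48 for rating reliability, r = .78 between pleasantness and consonance/dissonance) are ordinary Pearson coefficients. A minimal, dependency-free sketch of that computation on two rating lists (the function name and sample data are illustrative, not the study's):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of ratings."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)
```

Perfectly linearly related ratings give r = 1.0; perfectly opposed ratings give r = -1.0, with the reported values falling in between.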
Constructing Noise-Invariant Representations of Sound in the Auditory Pathway
Rabinowitz, Neil C.; Willmore, Ben D. B.; King, Andrew J.; Schnupp, Jan W. H.
2013-01-01
Identifying behaviorally relevant sounds in the presence of background noise is one of the most important and poorly understood challenges faced by the auditory system. An elegant solution to this problem would be for the auditory system to represent sounds in a noise-invariant fashion. Since a major effect of background noise is to alter the statistics of the sounds reaching the ear, noise-invariant representations could be promoted by neurons adapting to stimulus statistics. Here we investigated the extent of neuronal adaptation to the mean and contrast of auditory stimulation as one ascends the auditory pathway. We measured these forms of adaptation by presenting complex synthetic and natural sounds, recording neuronal responses in the inferior colliculus and primary fields of the auditory cortex of anaesthetized ferrets, and comparing these responses with a sophisticated model of the auditory nerve. We find that the strength of both forms of adaptation increases as one ascends the auditory pathway. To investigate whether this adaptation to stimulus statistics contributes to the construction of noise-invariant sound representations, we also presented complex, natural sounds embedded in stationary noise, and used a decoding approach to assess the noise tolerance of the neuronal population code. We find that the code for complex sounds in the periphery is affected more by the addition of noise than the cortical code. We also find that noise tolerance is correlated with adaptation to stimulus statistics, so that populations that show the strongest adaptation to stimulus statistics are also the most noise-tolerant. This suggests that the increase in adaptation to sound statistics from auditory nerve to midbrain to cortex is an important stage in the construction of noise-invariant sound representations in the higher auditory brain. PMID:24265596
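The core idea in the abstract above is that adapting to the mean and contrast of the recent stimulus can make a neural representation invariant to stationary background noise. A toy numpy sketch of that principle (not the authors' model): normalizing a stimulus stream by its own mean and standard deviation removes additive offsets and multiplicative gain changes.

```python
import numpy as np

def adapt_to_statistics(stim, eps=1e-6):
    """Toy mean/contrast adaptation: z-score a stimulus stream.

    Subtracting the local mean and dividing by the local standard
    deviation makes the output largely invariant to additive offsets
    and gain changes, such as those a stationary background introduces.
    """
    mu = stim.mean()
    sigma = stim.std()
    return (stim - mu) / (sigma + eps)
```

A stimulus and a shifted, rescaled copy of it map to (nearly) the same adapted representation, which is the sense in which such adaptation promotes noise tolerance.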
Sharma, Anu; Campbell, Julia; Cardon, Garrett
2015-02-01
Cortical development is dependent on extrinsic stimulation. As such, sensory deprivation, as in congenital deafness, can dramatically alter functional connectivity and growth in the auditory system. Cochlear implants ameliorate deprivation-induced delays in maturation by directly stimulating the central nervous system, and thereby restoring auditory input. The scenario in which hearing is lost due to deafness and then reestablished via a cochlear implant provides a window into the development of the central auditory system. Converging evidence from electrophysiologic and brain imaging studies of deaf animals and children fitted with cochlear implants has allowed us to elucidate the details of the time course for auditory cortical maturation under conditions of deprivation. Here, we review how the P1 cortical auditory evoked potential (CAEP) provides useful insight into sensitive period cut-offs for development of the primary auditory cortex in deaf children fitted with cochlear implants. Additionally, we present new data on similar sensitive period dynamics in higher-order auditory cortices, as measured by the N1 CAEP in cochlear implant recipients. Furthermore, cortical re-organization, secondary to sensory deprivation, may take the form of compensatory cross-modal plasticity. We provide new case-study evidence that cross-modal re-organization, in which intact sensory modalities (i.e., vision and somatosensation) recruit cortical regions associated with deficient sensory modalities (i.e., auditory) in cochlear implanted children may influence their behavioral outcomes with the implant. Improvements in our understanding of developmental neuroplasticity in the auditory system should lead to harnessing central auditory plasticity for superior clinical technique. Copyright © 2014 Elsevier B.V. All rights reserved.
Noto, M; Nishikawa, J; Tateno, T
2016-03-24
A sound interrupted by silence is perceived as discontinuous. However, when high-intensity noise is inserted during the silence, the missing sound may be perceptually restored and be heard as uninterrupted. This illusory phenomenon is called auditory induction. Recent electrophysiological studies have revealed that auditory induction is associated with the primary auditory cortex (A1). Although experimental evidence has been accumulating, the neural mechanisms underlying auditory induction in A1 neurons are poorly understood. To elucidate this, we used both experimental and computational approaches. First, using an optical imaging method, we characterized population responses across auditory cortical fields to sound and identified five subfields in rats. Next, we examined neural population activity related to auditory induction with high temporal and spatial resolution in the rat auditory cortex (AC), including the A1 and several other AC subfields. Our imaging results showed that tone-burst stimuli interrupted by a silent gap elicited early phasic responses to the first tone and similar or smaller responses to the second tone following the gap. In contrast, tone stimuli interrupted by broadband noise (BN), considered to cause auditory induction, considerably suppressed or eliminated responses to the tone following the noise. Additionally, tone-burst stimuli that were interrupted by notched noise centered at the tone frequency, which is considered to decrease the strength of auditory induction, partially restored the second responses from the suppression caused by BN. To phenomenologically mimic the neural population activity in the A1 and thus investigate the mechanisms underlying auditory induction, we constructed a computational model from the periphery through the AC, including a nonlinear dynamical system. The computational model successfully reproduced some of the above-mentioned experimental results. Therefore, our results suggest that a nonlinear, self-exciting system is a key element for qualitatively reproducing A1 population activity and for understanding the underlying mechanisms. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
Audiovisual Association Learning in the Absence of Primary Visual Cortex.
Seirafi, Mehrdad; De Weerd, Peter; Pegna, Alan J; de Gelder, Beatrice
2015-01-01
Learning audiovisual associations is mediated by the primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extra-geniculate pathways and especially the superior colliculus (SC) in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit audiovisual association learning task with two different colors, red and purple (the latter known to only minimally activate the extra-geniculate pathway). Interestingly, the patient learned the association between an auditory cue and a visual stimulus only when the unseen visual stimulus was red, but not when it was purple. The current study presents the first evidence showing the possibility of audiovisual association learning in humans with lesioned striate cortex. Furthermore, in line with animal studies, it supports an important role for the SC in audiovisual associative learning.
Background sounds contribute to spectrotemporal plasticity in primary auditory cortex
Moucha, Raluca; Pandya, Pritesh K.; Engineer, Navzer D.; Rathbun, Daniel L.
2010-01-01
The mammalian auditory system evolved to extract meaningful information from complex acoustic environments. Spectrotemporal selectivity of auditory neurons provides a potential mechanism to represent natural sounds. Experience-dependent plasticity mechanisms can remodel the spectrotemporal selectivity of neurons in primary auditory cortex (A1). Electrical stimulation of the cholinergic nucleus basalis (NB) enables plasticity in A1 that parallels natural learning and is specific to acoustic features associated with NB activity. In this study, we used NB stimulation to explore how cortical networks reorganize after experience with frequency-modulated (FM) sweeps, and how background stimuli contribute to spectrotemporal plasticity in rat auditory cortex. Pairing an 8–4 kHz FM sweep with NB stimulation 300 times per day for 20 days decreased tone thresholds, frequency selectivity, and response latency of A1 neurons in the region of the tonotopic map activated by the sound. In an attempt to modify neuronal response properties across all of A1, the same NB activation was paired in a second group of rats with five downward FM sweeps, each spanning a different octave. No changes in FM selectivity or receptive field (RF) structure were observed when the neural activation was distributed across the cortical surface. However, the addition of unpaired background sweeps of different rates or direction was sufficient to alter RF characteristics across the tonotopic map in a third group of rats. These results extend earlier observations that cortical neurons can develop stimulus specific plasticity and indicate that background conditions can strongly influence cortical plasticity. PMID:15616812
Attention to sound improves auditory reliability in audio-tactile spatial optimal integration.
Vercillo, Tiziana; Gori, Monica
2015-01-01
The role of attention on multisensory processing is still poorly understood. In particular, it is unclear whether directing attention toward a sensory cue dynamically reweights cue reliability during integration of multiple sensory signals. In this study, we investigated the impact of attention in combining audio-tactile signals in an optimal fashion. We used the Maximum Likelihood Estimation (MLE) model to predict audio-tactile spatial localization on the body surface. We developed a new audio-tactile device composed of several small units, each consisting of a speaker and a tactile vibrator independently controllable by external software. We tested participants in an attentional and a non-attentional condition. In the attentional experiment, participants performed a dual task paradigm: they were required to evaluate the duration of a sound while performing an audio-tactile spatial task. Three unisensory or multisensory stimuli (conflicting or non-conflicting sounds and vibrations arranged along the horizontal axis) were presented sequentially. In the primary task participants had to evaluate in a space bisection task the position of the second stimulus (the probe) with respect to the others (the standards). In the secondary task they had to report occasional changes in duration of the second auditory stimulus. In the non-attentional task participants had only to perform the primary task (space bisection). Our results showed enhanced auditory precision (and auditory weights) in the auditory attentional condition with respect to the control non-attentional condition. The results of this study support the idea that modality-specific attention modulates multisensory integration.
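The MLE model referenced above combines cues by weighting each with its reliability (inverse variance), so a more reliable cue pulls the fused estimate toward itself and the fused variance is lower than either cue alone. A minimal sketch of that standard formula (function and argument names are illustrative):

```python
def mle_combine(est_a, var_a, est_t, var_t):
    """Optimal (MLE) fusion of an auditory and a tactile location estimate.

    Each cue is weighted by its inverse variance:
        w_a = (1/var_a) / (1/var_a + 1/var_t)
    The fused variance 1 / (1/var_a + 1/var_t) is always at most
    the smaller of the two single-cue variances.
    """
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_t)
    w_t = 1.0 - w_a
    fused = w_a * est_a + w_t * est_t
    fused_var = 1.0 / (1.0 / var_a + 1.0 / var_t)
    return fused, fused_var
```

Under this model, if attention improves auditory precision (reduces var_a), the auditory weight rises and the fused percept shifts toward the auditory estimate, which is the reweighting effect the study tested.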
Harper, Nicol S; Schoppe, Oliver; Willmore, Ben D B; Cui, Zhanfeng; Schnupp, Jan W H; King, Andrew J
2016-11-01
Cortical sensory neurons are commonly characterized using the receptive field, the linear dependence of their response on the stimulus. In primary auditory cortex neurons can be characterized by their spectrotemporal receptive fields, the spectral and temporal features of a sound that linearly drive a neuron. However, receptive fields do not capture the fact that the response of a cortical neuron results from the complex nonlinear network in which it is embedded. By fitting a nonlinear feedforward network model (a network receptive field) to cortical responses to natural sounds, we reveal that primary auditory cortical neurons are sensitive over a substantially larger spectrotemporal domain than is seen in their standard spectrotemporal receptive fields. Furthermore, the network receptive field, a parsimonious network consisting of 1-7 sub-receptive fields that interact nonlinearly, consistently better predicts neural responses to auditory stimuli than the standard receptive fields. The network receptive field reveals separate excitatory and inhibitory sub-fields with different nonlinear properties, and interaction of the sub-fields gives rise to important operations such as gain control and conjunctive feature detection. The conjunctive effects, where neurons respond only if several specific features are present together, enable increased selectivity for particular complex spectrotemporal structures, and may constitute an important stage in sound recognition. In conclusion, we demonstrate that fitting auditory cortical neural responses with feedforward network models expands on simple linear receptive field models in a manner that yields substantially improved predictive power and reveals key nonlinear aspects of cortical processing, while remaining easy to interpret in a physiological context.
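The baseline model the abstract contrasts with the network receptive field is the standard linear spectrotemporal receptive field (STRF): the predicted firing rate at each time bin is a weighted sum over a window of recent spectrogram history. A minimal numpy sketch of that linear prediction (array shapes and names are illustrative; the authors' network model adds nonlinearly interacting sub-fields on top of this):

```python
import numpy as np

def strf_predict(strf, spectrogram):
    """Linear STRF prediction of a neuron's firing rate.

    strf        : (n_freq, n_lag) weight matrix; last column = current bin
    spectrogram : (n_freq, n_time) stimulus power
    Returns a (n_time,) rate prediction: each time bin is the weighted
    sum of the preceding n_lag spectrogram columns.
    """
    n_freq, n_lag = strf.shape
    _, n_time = spectrogram.shape
    rate = np.zeros(n_time)
    for t in range(n_time):
        lo = max(0, t - n_lag + 1)
        chunk = spectrogram[:, lo:t + 1]       # stimulus history up to time t
        w = strf[:, n_lag - chunk.shape[1]:]   # align most recent bin with last lag
        rate[t] = np.sum(w * chunk)
    return rate
```

A network receptive field replaces this single linear filter with several such sub-filters whose outputs pass through nonlinearities before being combined, which is what yields the gain control and conjunctive feature detection described above.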
Effect of the environment on the dendritic morphology of the rat auditory cortex
Bose, Mitali; Muñoz-Llancao, Pablo; Roychowdhury, Swagata; Nichols, Justin A.; Jakkamsetti, Vikram; Porter, Benjamin; Byrapureddy, Rajasekhar; Salgado, Humberto; Kilgard, Michael P.; Aboitiz, Francisco; Dagnino-Subiabre, Alexies; Atzori, Marco
2010-01-01
The present study aimed to identify morphological correlates of environment-induced changes at excitatory synapses of the primary auditory cortex (A1). We used the Golgi-Cox stain technique to compare the dendritic properties of pyramidal cells in Sprague-Dawley rats exposed to different environmental manipulations. Sholl analysis, dendritic length measures, and spine density counts were used to monitor the effects of sensory deafness and an auditory version of environmental enrichment (EE). We found that deafness decreased apical dendritic length leaving basal dendritic length unchanged, whereas EE selectively increased basal dendritic length without changing apical dendritic length. In contrast, deafness decreased and EE increased spine density in both basal and apical dendrites of A1 layer 2/3 (LII/III) neurons. To determine whether stress contributed to the observed morphological changes in A1, we studied neural morphology in a restraint-induced model that lacked behaviorally relevant acoustic cues. We found that stress selectively decreased apical dendritic length in the auditory but not in the visual primary cortex. Similar to the acoustic manipulation, stress-induced changes in dendritic length showed a layer-specific pattern: LII/III neurons from stressed animals had normal apical dendrites but shorter basal dendrites, while infragranular neurons (layers V and VI) displayed shorter apical dendrites but normal basal dendrites. The same treatment did not induce similar changes in the visual cortex, demonstrating that the auditory cortex is an exquisitely sensitive target of neocortical plasticity, and that prolonged exposure to different acoustic as well as emotional environmental manipulation may produce specific changes in dendritic shape and spine density. PMID:19771593
Koehler, Seth D.; Shore, Susan E.
2015-01-01
Central auditory circuits are influenced by the somatosensory system, a relationship that may underlie tinnitus generation. In the guinea pig dorsal cochlear nucleus (DCN), pairing spinal trigeminal nucleus (Sp5) stimulation with tones at specific intervals and orders facilitated or suppressed subsequent tone-evoked neural responses, reflecting spike timing-dependent plasticity (STDP). Furthermore, after noise-induced tinnitus, bimodal responses in DCN were shifted from Hebbian to anti-Hebbian timing rules with less discrete temporal windows, suggesting a role for bimodal plasticity in tinnitus. Here, we aimed to determine if multisensory STDP principles like those in DCN also exist in primary auditory cortex (A1), and whether they change following noise-induced tinnitus. Tone-evoked and spontaneous neural responses were recorded before and 15 min after bimodal stimulation in which the intervals and orders of auditory-somatosensory stimuli were randomized. Tone-evoked and spontaneous firing rates were influenced by the interval and order of the bimodal stimuli, and in sham-controls Hebbian-like timing rules predominated as was seen in DCN. In noise-exposed animals with and without tinnitus, timing rules shifted away from those found in sham-controls to more anti-Hebbian rules. Only those animals with evidence of tinnitus showed increased spontaneous firing rates, a purported neurophysiological correlate of tinnitus in A1. Together, these findings suggest that bimodal plasticity is also evident in A1 following noise damage and may have implications for tinnitus generation and therapeutic intervention across the central auditory circuit. PMID:26289461
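The Hebbian versus anti-Hebbian timing rules described here are conventionally modeled as an exponential STDP window over the bimodal stimulus interval. The following is a hedged sketch with illustrative parameters, not values fitted to these recordings:

```python
import math

def stdp_weight_change(dt_ms, a_plus=1.0, a_minus=1.0, tau_ms=20.0, hebbian=True):
    """Synaptic weight change for a somatosensory-then-auditory interval dt_ms.

    Hebbian rule: facilitation when the somatosensory stimulus leads (dt > 0),
    suppression when it lags (dt < 0). An anti-Hebbian rule flips the sign, as
    reported after noise exposure. Amplitudes and time constant are assumptions.
    """
    if dt_ms > 0:
        dw = a_plus * math.exp(-dt_ms / tau_ms)
    else:
        dw = -a_minus * math.exp(dt_ms / tau_ms)
    return dw if hebbian else -dw

# Somatosensory stimulation leading by 10 ms facilitates under the Hebbian rule,
# but suppresses under the anti-Hebbian rule seen after noise exposure.
print(stdp_weight_change(10), stdp_weight_change(10, hebbian=False))
```

The "less discrete temporal windows" reported in DCN would correspond to larger values of tau_ms in this parameterization.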
Double dissociation of 'what' and 'where' processing in auditory cortex.
Lomber, Stephen G; Malhotra, Shveta
2008-05-01
Studies of cortical connections or neuronal function in different cerebral areas support the hypothesis that parallel cortical processing streams, similar to those identified in visual cortex, may exist in the auditory system. However, this model has not yet been behaviorally tested. We used reversible cooling deactivation to investigate whether the individual regions in cat nonprimary auditory cortex that are responsible for processing the pattern of an acoustic stimulus or localizing a sound in space could be doubly dissociated in the same animal. We found that bilateral deactivation of the posterior auditory field resulted in deficits in a sound-localization task, whereas bilateral deactivation of the anterior auditory field resulted in deficits in a pattern-discrimination task, but not vice versa. These findings support a model of cortical organization that proposes that identifying an acoustic stimulus ('what') and its spatial location ('where') are processed in separate streams in auditory cortex.
To, Wing Ting; Ost, Jan; Hart, John; De Ridder, Dirk; Vanneste, Sven
2017-01-01
Tinnitus is the perception of a sound in the absence of a corresponding external sound source. Research has suggested that functional abnormalities in tinnitus patients involve auditory as well as non-auditory brain areas. Transcranial electrical stimulation (tES), such as transcranial direct current stimulation (tDCS) to the dorsolateral prefrontal cortex and transcranial random noise stimulation (tRNS) to the auditory cortex, has demonstrated modulation of brain activity to transiently suppress tinnitus symptoms. Targeting two core regions of the tinnitus network by tES might establish a promising strategy to enhance treatment effects. This proof-of-concept study aims to investigate the effect of a multisite tES treatment protocol on tinnitus intensity and distress. A total of 40 tinnitus patients were enrolled in this study and received either bifrontal tDCS or the multisite treatment of bifrontal tDCS followed by bilateral auditory cortex tRNS. Both groups were treated in eight sessions (twice a week for 4 weeks). Our results show that the multisite treatment protocol resulted in more pronounced effects than the bifrontal tDCS protocol or the waiting-list group, suggesting an added value of auditory cortex tRNS over the bifrontal tDCS protocol alone for tinnitus patients. These findings support the involvement of auditory as well as non-auditory brain areas in the pathophysiology of tinnitus and support the potential efficacy of network stimulation in the treatment of neurological disorders. This multisite tES treatment protocol proved to be safe and feasible for clinical routine in tinnitus patients.
Scheich, Henning; Brechmann, André; Brosch, Michael; Budinger, Eike; Ohl, Frank W; Selezneva, Elena; Stark, Holger; Tischmeyer, Wolfgang; Wetzel, Wolfram
2011-01-01
Two phenomena of auditory cortex activity have recently attracted attention, namely that the primary field can show different types of learning-related changes of sound representation and that during learning even this early auditory cortex is under strong multimodal influence. Based on neuronal recordings in animal auditory cortex during instrumental tasks, in this review we put forward the hypothesis that these two phenomena serve to derive the task-specific meaning of sounds by associative learning. To understand the implications of this tenet, it is helpful to realize how a behavioral meaning is usually derived for novel environmental sounds. For this purpose, associations with other sensory, e.g. visual, information are mandatory to develop a connection between a sound and its behaviorally relevant cause and/or the context of sound occurrence. This makes it plausible that in instrumental tasks various non-auditory sensory and procedural contingencies of sound generation become co-represented by neuronal firing in auditory cortex. Information related to reward or to avoidance of discomfort during task learning, which is essentially non-auditory, is also co-represented. The reinforcement influence points to the dopaminergic internal reward system, whose local role in memory consolidation in auditory cortex is well established. Thus, during a trial of task performance, the neuronal responses to the sounds are embedded in a sequence of representations of such non-auditory information. The embedded auditory responses show task-related modulations falling into types that correspond to three basic logical classifications that may be performed with a perceptual item, ranging from simple detection through discrimination to categorization. This hierarchy of classifications determines the semantic "same-different" relationships among sounds.
Different cognitive classifications appear to be a consequence of the learning task and lead to recruitment of different excitatory and inhibitory mechanisms and to distinct spatiotemporal metrics of map activation to represent a sound. The described non-auditory firing and modulations of auditory responses suggest that auditory cortex, by collecting all necessary information, functions as a "semantic processor" that deduces the task-specific meaning of sounds through learning. © 2010. Published by Elsevier B.V.
A Generative Model of Speech Production in Broca’s and Wernicke’s Areas
Price, Cathy J.; Crinion, Jenny T.; MacSweeney, Mairéad
2011-01-01
Speech production involves the generation of an auditory signal from the articulators and vocal tract. When the intended auditory signal does not match the produced sounds, subsequent articulatory commands can be adjusted to reduce the difference between the intended and produced sounds. This requires an internal model of the intended speech output that can be compared to the produced speech. The aim of this functional imaging study was to identify brain activation related to the internal model of speech production after activation related to vocalization, auditory feedback, and movement in the articulators had been controlled. There were four conditions: silent articulation of speech, non-speech mouth movements, finger tapping, and visual fixation. In the speech conditions, participants produced the mouth movements associated with the words “one” and “three.” We eliminated auditory feedback from the spoken output by instructing participants to articulate these words without producing any sound. The non-speech mouth movement conditions involved lip pursing and tongue protrusions to control for movement in the articulators. The main difference between our speech and non-speech mouth movement conditions is that prior experience producing speech sounds leads to the automatic and covert generation of auditory and phonological associations that may play a role in predicting auditory feedback. We found that, relative to non-speech mouth movements, silent speech activated Broca’s area in the left dorsal pars opercularis and Wernicke’s area in the left posterior superior temporal sulcus. We discuss these results in the context of a generative model of speech production and propose that Broca’s and Wernicke’s areas may be involved in predicting the speech output that follows articulation. These predictions could provide a mechanism by which rapid movement of the articulators is precisely matched to the intended speech outputs during future articulations. PMID:21954392
Fukushima, Makoto; Saunders, Richard C; Leopold, David A; Mishkin, Mortimer; Averbeck, Bruno B
2012-06-07
In the absence of sensory stimuli, spontaneous activity in the brain has been shown to exhibit organization at multiple spatiotemporal scales. In the macaque auditory cortex, responses to acoustic stimuli are tonotopically organized within multiple, adjacent frequency maps aligned in a caudorostral direction on the supratemporal plane (STP) of the lateral sulcus. Here, we used chronic microelectrocorticography to investigate the correspondence between sensory maps and spontaneous neural fluctuations in the auditory cortex. We first mapped tonotopic organization across 96 electrodes spanning approximately two centimeters along the primary and higher auditory cortex. In separate sessions, we then observed that spontaneous activity at the same sites exhibited spatial covariation that reflected the tonotopic map of the STP. This observation demonstrates a close relationship between functional organization and spontaneous neural activity in the sensory cortex of the awake monkey. Copyright © 2012 Elsevier Inc. All rights reserved.
Degraded speech sound processing in a rat model of fragile X syndrome
Engineer, Crystal T.; Centanni, Tracy M.; Im, Kwok W.; Rahebi, Kimiya C.; Buell, Elizabeth P.; Kilgard, Michael P.
2014-01-01
Fragile X syndrome is the most common inherited form of intellectual disability and the leading genetic cause of autism. Impaired phonological processing in fragile X syndrome interferes with the development of language skills. Although auditory cortex responses are known to be abnormal in fragile X syndrome, it is not clear how these differences impact speech sound processing. This study provides the first evidence that the cortical representation of speech sounds is impaired in Fmr1 knockout rats, despite normal speech discrimination behavior. Evoked potentials and spiking activity in response to speech sounds, noise burst trains, and tones were significantly degraded in primary auditory cortex, anterior auditory field and the ventral auditory field. Neurometric analysis of speech evoked activity using a pattern classifier confirmed that activity in these fields contains significantly less information about speech sound identity in Fmr1 knockout rats compared to control rats. Responses were normal in the posterior auditory field, which is associated with sound localization. The greatest impairment was observed in the ventral auditory field, which is related to emotional regulation. Dysfunction in the ventral auditory field may contribute to poor emotional regulation in fragile X syndrome and may help explain the observation that later auditory evoked responses are more disturbed in fragile X syndrome compared to earlier responses. Rodent models of fragile X syndrome are likely to prove useful for understanding the biological basis of fragile X syndrome and for testing candidate therapies. PMID:24713347
Penhune, V B; Zatorre, R J; Feindel, W H
1999-03-01
This experiment examined the participation of the auditory cortex of the temporal lobe in the perception and retention of rhythmic patterns. Four patient groups were tested on a paradigm contrasting reproduction of auditory and visual rhythms: those with right or left anterior temporal lobe removals which included Heschl's gyrus (HG), the region of primary auditory cortex (RT-A and LT-A); and patients with right or left anterior temporal lobe removals which did not include HG (RT-a and LT-a). Estimation of lesion extent in HG using an MRI-based probabilistic map indicated that, in the majority of subjects, the lesion was confined to the anterior secondary auditory cortex located on the anterior-lateral extent of HG. On the rhythm reproduction task, RT-A patients were impaired in retention of auditory but not visual rhythms, particularly when accurate reproduction of stimulus durations was required. In contrast, LT-A patients as well as both RT-a and LT-a patients were relatively unimpaired on this task. None of the patient groups was impaired in the ability to make an adequate motor response. Further, they were unimpaired when using a dichotomous response mode, indicating that they were able to adequately differentiate the stimulus durations and, when given an alternative method of encoding, to retain them. Taken together, these results point to a specific role for the right anterior secondary auditory cortex in the retention of a precise analogue representation of auditory tonal patterns.
Cellular generators of the cortical auditory evoked potential initial component.
Steinschneider, M; Tenke, C E; Schroeder, C E; Javitt, D C; Simpson, G V; Arezzo, J C; Vaughan, H G
1992-01-01
Cellular generators of the initial cortical auditory evoked potential (AEP) component were determined by analyzing laminar profiles of click-evoked AEPs, current source density, and multiple unit activity (MUA) in primary auditory cortex of awake monkeys. The initial AEP component is a surface-negative wave, N8, that peaks at 8-9 msec and inverts in polarity below lamina 4. N8 is generated by a lamina 4 current sink and a deeper current source. Simultaneous MUA is present from lower lamina 3 to the subjacent white matter. Findings indicate that thalamocortical afferents are a generator of N8 and support a role for lamina 4 stellate cells. Relationships to the human AEP are discussed.
Distinct Effects of Trial-Driven and Task Set-Related Control in Primary Visual Cortex
Vaden, Ryan J.; Visscher, Kristina M.
2015-01-01
Task sets are task-specific configurations of cognitive processes that facilitate task-appropriate reactions to stimuli. While it is established that the trial-by-trial deployment of visual attention to expected stimuli influences neural responses in primary visual cortex (V1) in a retinotopically specific manner, it is not clear whether the mechanisms that help maintain a task set over many trials also operate with similar retinotopic specificity. Here, we address this question by using BOLD fMRI to characterize how portions of V1 that are specialized for different eccentricities respond during distinct components of an attention-demanding discrimination task: cue-driven preparation for a trial, trial-driven processing, task-initiation at the beginning of a block of trials, and task-maintenance throughout a block of trials. Tasks required either unimodal attention to an auditory or a visual stimulus or selective intermodal attention to the visual or auditory component of simultaneously presented visual and auditory stimuli. We found that while the retinotopic patterns of trial-driven and cue-driven activity depended on the attended stimulus, the retinotopic patterns of task-initiation and task-maintenance activity did not. Further, only the retinotopic patterns of trial-driven activity were found to depend on the presence of intermodal distraction. Participants who performed well on the intermodal selective attention tasks showed strong task-specific modulations of both trial-driven and task-maintenance activity. Importantly, task-related modulations of trial-driven and task-maintenance activity were in opposite directions. Together, these results confirm that there are (at least) two different processes for top-down control of V1: One, working trial-by-trial, differently modulates activity across different eccentricity sectors—portions of V1 corresponding to different visual eccentricities. 
The second process works across longer epochs of task performance, and does not differ among eccentricity sectors. These results are discussed in the context of previous literature examining top-down control of visual cortical areas. PMID:26163806
Anatomical Substrates of Visual and Auditory Miniature Second-language Learning
Newman-Norlund, Roger D.; Frey, Scott H.; Petitto, Laura-Ann; Grafton, Scott T.
2007-01-01
Longitudinal changes in brain activity during second language (L2) acquisition of a miniature finite-state grammar, named Wernickese, were identified with functional magnetic resonance imaging (fMRI). Participants learned either a visual sign language form or an auditory-verbal form to equivalent proficiency levels. Brain activity during sentence comprehension while hearing/viewing stimuli was assessed at low, medium, and high levels of proficiency in three separate fMRI sessions. Activation in the left inferior frontal gyrus (Broca's area) correlated positively with improving L2 proficiency, whereas activity in the right-hemisphere (RH) homologue was negatively correlated for both auditory and visual forms of the language. Activity in sequence learning areas including the premotor cortex and putamen also correlated with L2 proficiency. Modality-specific differences in the blood oxygenation level-dependent signal accompanying L2 acquisition were localized to the planum temporale (PT). Participants learning the auditory form exhibited decreasing reliance on bilateral PT sites across sessions. In the visual form, bilateral PT sites increased in activity between Session 1 and Session 2, then decreased in left PT activity from Session 2 to Session 3. Comparison of L2 laterality (as compared to L1 laterality) in auditory and visual groups failed to demonstrate greater RH lateralization for the visual versus auditory L2. These data establish a common role for Broca's area in language acquisition irrespective of the perceptual form of the language and suggest that L2s are processed similarly to first languages even when learned after the "critical period." The right frontal cortex was not preferentially recruited by visual language after accounting for phonetic/structural complexity and performance. PMID:17129186
Mapping perception to action in piano practice: a longitudinal DC-EEG study
Bangert, Marc; Altenmüller, Eckart O
2003-01-01
Background Performing music requires fast auditory and motor processing. Regarding professional musicians, recent brain imaging studies have demonstrated that auditory stimulation produces a co-activation of motor areas, whereas silent tapping of musical phrases evokes a co-activation in auditory regions. Whether this is obtained via a specific cerebral relay station is unclear. Furthermore, the time course of plasticity has not yet been addressed. Results Changes in cortical activation patterns (DC-EEG potentials) induced by short (20 minute) and long term (5 week) piano learning were investigated during auditory and motoric tasks. Two beginner groups were trained. The 'map' group was allowed to learn the standard piano key-to-pitch map. For the 'no-map' group, random assignment of keys to tones prevented such a map. Auditory-sensorimotor EEG co-activity occurred within only 20 minutes. The effect was enhanced after 5-week training, contributing elements of both perception and action to the mental representation of the instrument. The 'map' group demonstrated significant additional activity of right anterior regions. Conclusion We conclude that musical training triggers instant plasticity in the cortex, and that right-hemispheric anterior areas provide an audio-motor interface for the mental representation of the keyboard. PMID:14575529
A Brain for Speech. Evolutionary Continuity in Primate and Human Auditory-Vocal Processing
Aboitiz, Francisco
2018-01-01
In this review article, I propose a continuous evolution from the auditory-vocal apparatus and its mechanisms of neural control in non-human primates, to the peripheral organs and the neural control of human speech. Although there is an overall conservatism both in peripheral systems and in central neural circuits, a few changes were critical for the expansion of vocal plasticity and the elaboration of proto-speech in early humans. Two of the most relevant changes were the acquisition of direct cortical control of the vocal fold musculature and the consolidation of an auditory-vocal articulatory circuit, encompassing auditory areas in the temporoparietal junction and prefrontal and motor areas in the frontal cortex. This articulatory loop, also referred to as the phonological loop, enhanced vocal working memory capacity, enabling early humans to learn increasingly complex utterances. The auditory-vocal circuit became progressively coupled to multimodal systems conveying information about objects and events, which gradually led to the acquisition of modern speech. Gestural communication has accompanied the development of vocal communication since very early in human evolution, and although both systems co-evolved tightly in the beginning, at some point speech became the main channel of communication. PMID:29636657
Discriminating between auditory and motor cortical responses to speech and non-speech mouth sounds
Agnew, Z.K.; McGettigan, C.; Scott, S.K.
2012-01-01
Several perspectives on speech perception posit a central role for the representation of articulations in speech comprehension, supported by evidence for premotor activation when participants listen to speech. However no experiments have directly tested whether motor responses mirror the profile of selective auditory cortical responses to native speech sounds, or whether motor and auditory areas respond in different ways to sounds. We used fMRI to investigate cortical responses to speech and non-speech mouth (ingressive click) sounds. Speech sounds activated bilateral superior temporal gyri more than other sounds, a profile not seen in motor and premotor cortices. These results suggest that there are qualitative differences in the ways that temporal and motor areas are activated by speech and click sounds: anterior temporal lobe areas are sensitive to the acoustic/phonetic properties while motor responses may show more generalised responses to the acoustic stimuli. PMID:21812557
Temporal lobe stimulation reveals anatomic distinction between auditory naming processes.
Hamberger, M J; Seidel, W T; Goodman, R R; Perrine, K; McKhann, G M
2003-05-13
Language errors induced by cortical stimulation can provide insight into function(s) supported by the area stimulated. The authors observed that some stimulation-induced errors during auditory description naming were characterized by tip-of-the-tongue responses or paraphasic errors, suggesting expressive difficulty, whereas others were qualitatively different, suggesting receptive difficulty. They hypothesized that these two response types reflected disruption at different stages of auditory verbal processing and that these "subprocesses" might be supported by anatomically distinct cortical areas. Their objective was to explore the topographic distribution of error types in auditory verbal processing. Twenty-one patients requiring left temporal lobe surgery underwent preresection language mapping using direct cortical stimulation. Auditory naming was tested at temporal sites extending from 1 cm from the anterior tip to the parietal operculum. Errors were dichotomized as either "expressive" or "receptive," and the topographic distribution of error types was explored. Sites associated with the two error types were topographically distinct from one another. Most receptive sites were located in the middle portion of the superior temporal gyrus (STG), whereas most expressive sites fell outside this region, scattered along lateral temporal and temporoparietal cortex. Results raise clinical questions regarding the inclusion of the STG in temporal lobe epilepsy surgery and suggest that more detailed cortical mapping might enable better prediction of postoperative language decline. From a theoretical perspective, results carry implications regarding the understanding of structure-function relations underlying temporal lobe mediation of auditory language processing.
Xia, Shuang; Song, TianBin; Che, Jing; Li, Qiang; Chai, Chao; Zheng, Meizhu; Shen, Wen
2017-01-01
Early hearing deprivation could affect the development of auditory, language, and vision ability. Insufficient or absent stimulation of the auditory cortex during sensitive periods of plasticity can affect the development of hearing, language, and vision. Twenty-three infants with congenital severe sensorineural hearing loss (CSSHL) and 17 age- and sex-matched normal-hearing subjects were recruited. The amplitude of low-frequency fluctuations (ALFF) and regional homogeneity (ReHo) of the auditory, language, and vision related brain areas were compared between deaf infants and normal subjects. Compared with normal-hearing subjects, decreased ALFF and ReHo were observed in auditory and language-related cortex, and increased ALFF and ReHo were observed in vision-related cortex, suggesting that hearing and language function were impaired and vision function was enhanced due to the loss of hearing. ALFF of left Brodmann area 45 (BA45) was negatively correlated with deafness duration in infants with CSSHL, whereas ALFF of right BA39 was positively correlated with deafness duration. In conclusion, ALFF and ReHo can reflect abnormal brain function in language, auditory, and visual information processing in infants with CSSHL. This demonstrates that the development of auditory, language, and vision processing function has been affected by congenital severe sensorineural hearing loss before 4 years of age.
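ALFF, as used in this study, is commonly computed as the mean spectral amplitude of a voxel's BOLD time series within the 0.01-0.08 Hz band. A minimal sketch under that assumption follows; the TR and toy signal are invented for illustration:

```python
import numpy as np

def alff(bold, tr_s, band=(0.01, 0.08)):
    """Amplitude of low-frequency fluctuations for one demeaned BOLD time series.

    Assumption: ALFF = mean single-sided FFT amplitude within `band` (Hz).
    """
    bold = np.asarray(bold, dtype=float)
    bold = bold - bold.mean()                    # remove DC offset
    freqs = np.fft.rfftfreq(bold.size, d=tr_s)   # frequency axis in Hz
    amp = np.abs(np.fft.rfft(bold)) / bold.size  # amplitude spectrum
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return amp[mask].mean()

# Toy series: 200 volumes at TR = 2 s; a 0.05 Hz component falls in-band.
rng = np.random.default_rng(0)
t = np.arange(200) * 2.0
slow = np.sin(2 * np.pi * 0.05 * t)
print(alff(slow + 0.1 * rng.standard_normal(t.size), tr_s=2.0))
```

An in-band oscillation yields a high ALFF, while an equally strong oscillation above 0.08 Hz contributes almost nothing; group comparisons such as those in this study are then made voxelwise on these values.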
Auditory subliminal stimulation: a re-examination.
Urban, M J
1992-04-01
Unconscious or subliminal perception has historically been a thorny issue in psychology. It has been the subject of debate and experimentation since the turn of the century. While psychologists now agree that the phenomenon of visual subliminal stimulation is real, disagreement continues over the effects of such stimulation as well as over its existence in other sensory modalities, notably the auditory. The present paper provides an overview of unresolved issues in auditory subliminal stimulation that explains much of the difficulty encountered in experimental work in this area. A context is proposed for considering the effects of auditory subliminal stimulation, and an overview of current investigations in this field is provided.
Cardin, Jessica A; Raksin, Jonathan N; Schmidt, Marc F
2005-04-01
Sensorimotor integration in the avian song system is crucial for both learning and maintenance of song, a vocal motor behavior. Although a number of song system areas demonstrate both sensory and motor characteristics, their exact roles in auditory and premotor processing are unclear. In particular, it is unknown whether input from the forebrain nucleus interface of the nidopallium (NIf), which exhibits both sensory and premotor activity, is necessary for both auditory and premotor processing in its target, HVC. Here we show that bilateral NIf lesions result in long-term loss of HVC auditory activity but do not impair song production. NIf is thus a major source of auditory input to HVC, but an intact NIf is not necessary for motor output in adult zebra finches.
Fisher, Simon D.; Reynolds, John N. J.
2014-01-01
Anatomical investigations have revealed connections between the intralaminar thalamic nuclei and areas such as the superior colliculus (SC) that receive short latency input from visual and auditory primary sensory areas. The intralaminar nuclei in turn project to the major input nucleus of the basal ganglia, the striatum, providing this nucleus with a source of subcortical excitatory input. Together with a converging input from the cerebral cortex, and a neuromodulatory dopaminergic input from the midbrain, the components previously found necessary for reinforcement learning in the basal ganglia are present. With this intralaminar sensory input, the basal ganglia are thought to play a primary role in determining what aspect of an organism’s own behavior has caused salient environmental changes. Additionally, subcortical loops through thalamic and basal ganglia nuclei are proposed to play a critical role in action selection. In this mini review we will consider the anatomical and physiological evidence underlying the existence of these circuits. We will propose how the circuits interact to modulate basal ganglia output and solve common behavioral learning problems of agency determination and action selection. PMID:24765070
Cortical mechanisms for the segregation and representation of acoustic textures.
Overath, Tobias; Kumar, Sukhbinder; Stewart, Lauren; von Kriegstein, Katharina; Cusack, Rhodri; Rees, Adrian; Griffiths, Timothy D
2010-02-10
Auditory object analysis requires two fundamental perceptual processes: the definition of the boundaries between objects, and the abstraction and maintenance of an object's characteristic features. Although it is intuitive to assume that the detection of the discontinuities at an object's boundaries precedes the subsequent precise representation of the object, the specific underlying cortical mechanisms for segregating and representing auditory objects within the auditory scene are unknown. We investigated the cortical bases of these two processes for one type of auditory object, an "acoustic texture," composed of multiple frequency-modulated ramps. In these stimuli, we independently manipulated the statistical rules governing (1) the frequency-time space within individual textures (comprising ramps with a given spectrotemporal coherence) and (2) the boundaries between textures (adjacent textures with different spectrotemporal coherences). Using functional magnetic resonance imaging, we show mechanisms defining boundaries between textures with different coherences in primary and association auditory cortices, whereas texture coherence is represented only in association cortex. Furthermore, participants' superior detection of boundaries across which texture coherence increased (as opposed to decreased) was reflected in a greater neural response in auditory association cortex at these boundaries. The results suggest a hierarchical mechanism for processing acoustic textures that is relevant to auditory object analysis: boundaries between objects are first detected as a change in statistical rules over frequency-time space, before a representation that corresponds to the characteristics of the perceived object is formed.
Impey, Danielle; Knott, Verner
2015-08-01
Membrane potentials and brain plasticity are basic modes of cerebral information processing. Both can be externally (non-invasively) modulated by weak transcranial direct current stimulation (tDCS). Polarity-dependent, reversible, circumscribed increases and decreases in cortical excitability and function induced by tDCS have been observed following stimulation of motor and visual cortices, but relatively little research has been conducted with respect to the auditory cortex. The aim of this pilot study was to examine the effects of tDCS on auditory sensory discrimination in healthy participants (N = 12) assessed with the mismatch negativity (MMN) brain event-related potential (ERP). In a randomized, double-blind, sham-controlled design, participants received anodal tDCS over the primary auditory cortex (2 mA for 20 min) in one session and 'sham' stimulation (i.e., no stimulation except an initial 30 s ramp-up) in the other session. MMN elicited by changes in auditory pitch was enhanced after anodal tDCS compared to 'sham' stimulation, with the effects being evident in individuals with relatively reduced (vs. increased) baseline amplitudes and with relatively small (vs. large) pitch deviants. Additional studies are needed to further explore relationships between tDCS-related parameters, auditory stimulus features and individual differences prior to assessing the utility of this tool for treating auditory processing deficits in psychiatric and/or neurological disorders.
Behavioral and subcortical signatures of musical expertise in Mandarin Chinese speakers
Tervaniemi, Mari; Aalto, Daniel
2018-01-01
Both musical training and native language have been shown to have experience-based plastic effects on auditory processing. However, the combined effects within individuals are unclear. Recent research suggests that musical training and tone language speaking are not clearly additive in their effects on processing of auditory features and that there may be a disconnect between perceptual and neural signatures of auditory feature processing. The literature has only recently begun to investigate the effects of musical expertise on basic auditory processing for different linguistic groups. This work provides a profile of primary auditory feature discrimination for Mandarin-speaking musicians and nonmusicians. The musicians showed enhanced perceptual discrimination for both frequency and duration, as well as enhanced duration discrimination in a multifeature discrimination task, compared to nonmusicians. However, there were no differences between the groups in duration processing of nonspeech sounds at a subcortical level or in subcortical frequency representation of a nonnative tone contour, for f0 or for the first or second formant region. The results indicate that musical expertise provides a cognitive, but not subcortical, advantage in a population of Mandarin speakers. PMID:29300756
Ivanova, Tamara; Matthews, Andrew; Gross, Christina; Mappus, Rudolph C.; Gollnick, Clare; Swanson, Andrew; Bassell, Gary J.; Liu, Robert C.
2011-01-01
Acquiring the behavioral significance of a sound has repeatedly been shown to correlate with long term changes in response properties of neurons in the adult primary auditory cortex. However, the molecular and cellular basis for such changes is still poorly understood. To address this, we have begun examining the auditory cortical expression of an activity-dependent effector immediate early gene (IEG) with documented roles in synaptic plasticity and memory consolidation in the hippocampus: Arc/Arg3.1. For initial characterization, we applied a repeated 10 minute (24 hour separation) sound exposure paradigm to determine the strength and consistency of sound-evoked Arc/Arg3.1 mRNA expression in the absence of explicit behavioral contingencies for the sound. We used 3D surface reconstruction methods in conjunction with fluorescent in-situ hybridization (FISH) to assess the layer-specific sub-cellular compartmental expression of Arc/Arg3.1 mRNA. We unexpectedly found that both the intranuclear and cytoplasmic patterns of expression depended on the prior history of sound stimulation. Specifically, the percentage of neurons with expression only in the cytoplasm increased for repeated versus singular sound exposure, while intranuclear expression decreased. In contrast, the total cellular expression did not differ, consistent with prior IEG studies of primary auditory cortex. Our results were specific for cortical layers 3–6, as there was virtually no sound driven Arc/Arg3.1 mRNA in layers 1–2 immediately after stimulation. Our results are consistent with the kinetics and/or detectability of cortical sub-cellular Arc/Arg3.1 mRNA expression being altered by the initial exposure to the sound, suggesting exposure-induced modifications in the cytoplasmic Arc/Arg3.1 mRNA pool. PMID:21334422
Merrett, Zalie; Rossell, Susan L; Castle, David J
2016-07-01
There is substantial clinical and empirical evidence to suggest that approximately 50% of individuals with borderline personality disorder experience auditory verbal hallucinations. However, there is limited research investigating the phenomenology of these voices. The aim of this study was to review and compare our current understanding of auditory verbal hallucinations in borderline personality disorder with auditory verbal hallucinations in patients with a psychotic disorder, to critically analyse existing studies investigating auditory verbal hallucinations in borderline personality disorder, and to identify gaps in current knowledge, which will help direct future research. The literature was searched using the electronic databases Scopus, PubMed and MEDLINE. Relevant studies were included if they were written in English, were empirical studies specifically addressing auditory verbal hallucinations and borderline personality disorder, were peer reviewed, used only adult humans with a sample in which borderline personality disorder was the primary diagnosis, and included a comparison group with a primary psychotic disorder such as schizophrenia. Our search strategy revealed a total of 16 articles investigating the phenomenology of auditory verbal hallucinations in borderline personality disorder. Some studies provided evidence to suggest that the voice experiences in borderline personality disorder are similar to those experienced by people with schizophrenia; for example, they occur inside the head and often involve persecutory voices. Other studies revealed some differences between schizophrenia and borderline personality disorder voice experiences, with the borderline personality disorder voices sounding more derogatory and self-critical in nature and the voice-hearers' responses to the voices being more emotionally resistive. Furthermore, in one study, the schizophrenia group's voices resulted in more disruption in daily functioning.
These studies are, however, limited in number and do not provide definitive evidence of these differences. The limited research examining auditory verbal hallucinations experiences in borderline personality disorder poses a significant diagnostic and treatment challenge. A deeper understanding of the precise phenomenological characteristics will help us in terms of diagnostic distinction as well as inform treatments. © The Royal Australian and New Zealand College of Psychiatrists 2016.
Wang, Wuyi; Viswanathan, Shivakumar; Lee, Taraz; Grafton, Scott T
2016-01-01
Cortical theta band oscillations (4-8 Hz) in EEG signals have been shown to be important for a variety of different cognitive control operations in visual attention paradigms. However, the synchronization sources of these signals, as defined by fMRI BOLD activity, and the extent to which theta oscillations play a role in multimodal attention remain unknown. Here we investigated the extent to which cross-modal visual and auditory attention impacts theta oscillations. Using a simultaneous EEG-fMRI paradigm, healthy human participants performed an attentional vigilance task with six cross-modal conditions using naturalistic stimuli. To assess supramodal mechanisms, modulation of theta oscillation amplitude for attention to either visual or auditory stimuli was correlated with BOLD activity by conjunction analysis. Negative correlations were localized to cortical regions associated with the default mode network, and positive correlations to ventral premotor areas. Modality-associated attention to visual stimuli was marked by a positive correlation of theta and BOLD activity in fronto-parietal areas that was not observed in the auditory condition. A positive correlation of theta and BOLD activity was observed in auditory cortex, while a negative correlation was observed in visual cortex during auditory attention. The data support a supramodal interaction of theta activity with DMN function, and modality-associated processes within fronto-parietal networks related to top-down, theta-related cognitive control in cross-modal visual attention. In sensory cortices, on the other hand, theta activity shows opposing effects during cross-modal auditory attention.
Evaluation of Techniques Used to Estimate Cortical Feature Maps
Katta, Nalin; Chen, Thomas L.; Watkins, Paul V.; Barbour, Dennis L.
2011-01-01
Functional properties of neurons are often distributed nonrandomly within a cortical area and form topographic maps that reveal insights into neuronal organization and interconnection. Some functional maps, such as in visual cortex, are fairly straightforward to discern with a variety of techniques, while other maps, such as in auditory cortex, have resisted easy characterization. In order to determine appropriate protocols for establishing accurate functional maps in auditory cortex, artificial topographic maps were probed under various conditions, and the accuracy of estimates formed from the actual maps was quantified. Under these conditions, low-complexity maps such as sound frequency can be estimated accurately with as few as 25 total samples (e.g., electrode penetrations or imaging pixels) if neural responses are averaged together. More samples are required to achieve the highest estimation accuracy for higher complexity maps, and averaging improves map estimate accuracy even more than increasing sampling density. Undersampling without averaging can result in misleading map estimates, while undersampling with averaging can lead to the false conclusion of no map when one actually exists. Uniform sample spacing only slightly improves map estimation over nonuniform sample spacing typical of serial electrode penetrations. Tessellation plots commonly used to visualize maps estimated using nonuniform sampling are always inferior to linearly interpolated estimates, although differences are slight at higher sampling densities. Within primary auditory cortex, then, multiunit sampling with at least 100 samples would likely result in reasonable feature map estimates for all but the highest complexity maps and the highest variability that might be expected. PMID:21889537
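The sampling trade-offs described in this abstract (sample count, averaging, interpolation quality) can be illustrated with a toy simulation. The sketch below is not the authors' actual protocol: the map shape, noise level and sample counts are invented for illustration. It samples a smooth synthetic "tonotopic" gradient at random sites, reconstructs the map by nearest-neighbour (tessellation-style) interpolation, and compares the reconstruction error with and without averaging repeated noisy measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic low-complexity "tonotopic" map: best frequency varies
# smoothly along one axis of a 50x50 cortical patch.
grid = 50
yy, xx = np.mgrid[0:grid, 0:grid]
true_map = np.sin(xx / grid * np.pi)  # smooth gradient in [0, 1]

def estimate_error(n_samples, n_repeats, noise_sd=0.3):
    """Sample the map at random sites, average repeated noisy measurements,
    reconstruct by nearest-neighbour interpolation, and return RMS error."""
    sites = rng.integers(0, grid, size=(n_samples, 2))
    noisy = true_map[sites[:, 0], sites[:, 1]][:, None] \
        + rng.normal(0.0, noise_sd, size=(n_samples, n_repeats))
    values = noisy.mean(axis=1)  # averaging across repeated measurements
    # Nearest-neighbour reconstruction (the "tessellation plot" estimate).
    dist = (yy[..., None] - sites[:, 0]) ** 2 + (xx[..., None] - sites[:, 1]) ** 2
    estimate = values[np.argmin(dist, axis=-1)]
    return float(np.sqrt(np.mean((estimate - true_map) ** 2)))

err_single = estimate_error(n_samples=25, n_repeats=1)
err_avg = estimate_error(n_samples=25, n_repeats=10)
print(f"25 samples, no averaging:  RMS error {err_single:.3f}")
print(f"25 samples, 10x averaging: RMS error {err_avg:.3f}")
```

At the same sampling density, the averaged estimate shows a markedly lower RMS error, mirroring the abstract's point that averaging can improve map estimates more than increasing sampling density.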
Mulert, C; Juckel, G; Augustin, H; Hegerl, U
2002-10-01
The loudness dependency of the auditory evoked potentials (LDAEP) is used as an indicator of the central serotonergic system and predicts clinical response to serotonin agonists. So far, LDAEP has typically been investigated with dipole source analysis, because with this method the primary and secondary auditory cortex (with high versus low serotonergic innervation) can be separated, at least in part. We have developed a new analysis procedure that uses an MRI probabilistic map of the primary auditory cortex in Talairach space and analyzed the current density in this region of interest with low resolution electromagnetic tomography (LORETA). LORETA is a tomographic localization method that calculates the current density distribution in Talairach space. In a group of patients with major depression (n=15), this new method predicted the response to a selective serotonin reuptake inhibitor (citalopram) at least as well as the traditional dipole source analysis method (P=0.019 vs. P=0.028). The correlation of improvement on the Hamilton Scale is significant with the LORETA-LDAEP values (0.56; P=0.031) but not with the dipole source analysis LDAEP values (0.43; P=0.11). The new tomographic LDAEP analysis is a promising tool for the analysis of the central serotonergic system.
Wang, Jie; Wu, Dongyu; Chen, Yan; Yuan, Ying; Zhang, Meikui
2013-08-09
We investigated the effects of transcranial direct current stimulation (tDCS) on language improvement and cortical activation in nonfluent variant primary progressive aphasia (nfvPPA). A 67-year-old woman diagnosed with nfvPPA received sham tDCS for 5 days over the left posterior perisylvian region (PPR) in the morning and over left Broca's area in the afternoon in Phases A1 and A2, and anodal tDCS for 5 days over the left PPR in the morning and over left Broca's area in the afternoon in Phases B1 and B2. Auditory word comprehension, picture naming, oral word reading and word repetition subtests of the Psycholinguistic Assessment in Chinese Aphasia (PACA) were administered before and after each phase. The EEG nonlinear index of approximate entropy (ApEn) was calculated before Phase A1, and after Phases B1 and B2. Our findings revealed that the patient improved greatly in the four subtests after anodal tDCS and that ApEn indices increased in both stimulated and non-stimulated areas. We demonstrated that anodal tDCS over the left PPR and Broca's area can improve language performance in nfvPPA. tDCS may be used as an alternative therapeutic tool for PPA. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
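Approximate entropy (ApEn), the EEG nonlinear index used in this study, quantifies the (ir)regularity of a time series: lower values mean a more regular, predictable signal. A minimal NumPy sketch of the standard Pincus formulation follows; the test signals, series length and parameters (m = 2, tolerance r = 0.2 × SD) are illustrative defaults, not the study's exact pipeline:

```python
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    """Approximate entropy (Pincus formulation) of a 1-D time series.
    Lower ApEn = more regular/predictable; higher = more irregular."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()  # tolerance, commonly 0.2 * SD of the signal

    def phi(m):
        n = len(x) - m + 1
        # All overlapping templates of length m.
        templates = np.array([x[i:i + m] for i in range(n)])
        # Chebyshev (max-norm) distance between every pair of templates.
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        # Fraction of templates within tolerance r (self-matches included).
        c = np.mean(dist <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(1)
regular = np.sin(np.linspace(0, 8 * np.pi, 300))  # highly regular signal
irregular = rng.standard_normal(300)              # white noise
print(f"ApEn(sine):  {approximate_entropy(regular):.3f}")
print(f"ApEn(noise): {approximate_entropy(irregular):.3f}")
```

A regular signal such as a sine wave yields a low ApEn, while white noise yields a much higher value.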
Petrini, Karin; Crabbe, Frances; Sheridan, Carol; Pollick, Frank E
2011-04-29
In humans, emotions from music serve important communicative roles. Despite a growing interest in the neural basis of music perception, action and emotion, the majority of previous studies in this area have focused on the auditory aspects of music performances. Here we investigate how the brain processes the emotions elicited by audiovisual music performances. We used event-related functional magnetic resonance imaging, and in Experiment 1 we defined the areas responding to audiovisual (musician's movements with music), visual (musician's movements only), and auditory emotional (music only) displays. Subsequently a region of interest analysis was performed to examine if any of the areas detected in Experiment 1 showed greater activation for emotionally mismatching performances (combining the musician's movements with mismatching emotional sound) than for emotionally matching music performances (combining the musician's movements with matching emotional sound) as presented in Experiment 2 to the same participants. The insula and the left thalamus were found to respond consistently to visual, auditory and audiovisual emotional information and to have increased activation for emotionally mismatching displays in comparison with emotionally matching displays. In contrast, the right thalamus was found to respond to audiovisual emotional displays and to have similar activation for emotionally matching and mismatching displays. These results suggest that the insula and left thalamus have an active role in detecting emotional correspondence between auditory and visual information during music performances, whereas the right thalamus has a different role.
Early auditory processing in area V5/MT+ of the congenitally blind brain.
Watkins, Kate E; Shakespeare, Timothy J; O'Donoghue, M Clare; Alexander, Iona; Ragge, Nicola; Cowey, Alan; Bridge, Holly
2013-11-13
Previous imaging studies of congenital blindness have studied individuals with heterogeneous causes of blindness, which may influence the nature and extent of cross-modal plasticity. Here, we scanned a homogeneous group of blind people with bilateral congenital anophthalmia, a condition in which both eyes fail to develop, and, as a result, the visual pathway is not stimulated by either light or retinal waves. This model of congenital blindness presents an opportunity to investigate the effects of very early visual deafferentation on the functional organization of the brain. In anophthalmic animals, the occipital cortex receives direct subcortical auditory input. We hypothesized that this pattern of subcortical reorganization ought to result in a topographic mapping of auditory frequency information in the occipital cortex of anophthalmic people. Using functional MRI, we examined auditory-evoked activity to pure tones of high, medium, and low frequencies. Activity in the superior temporal cortex was significantly reduced in anophthalmic compared with sighted participants. In the occipital cortex, a region corresponding to the cytoarchitectural area V5/MT+ was activated in the anophthalmic participants but not in sighted controls. Whereas previous studies in the blind indicate that this cortical area is activated to auditory motion, our data show it is also active for trains of pure tone stimuli and in some anophthalmic participants shows a topographic mapping (tonotopy). Therefore, this region appears to be performing early sensory processing, possibly served by direct subcortical input from the pulvinar to V5/MT+.
Pre-attentive auditory discrimination skill in Indian classical vocal musicians and non-musicians.
Sanju, Himanshu Kumar; Kumar, Prawin
2016-09-01
To test for pre-attentive auditory discrimination skills in Indian classical vocal musicians and non-musicians. Mismatch negativity (MMN) was recorded to test for pre-attentive auditory discrimination skills with a pair of stimuli of /1000 Hz/ and /1100 Hz/, with /1000 Hz/ as the frequent stimulus and /1100 Hz/ as the infrequent stimulus. Onset, offset and peak latencies were the latency parameters considered, whereas peak amplitude and area under the curve were considered for amplitude analysis. Fifty participants were included in the study: the experimental group comprised 25 adult Indian classical vocal musicians, and 25 age-matched non-musicians served as the control group. Experimental group participants had a minimum of 10 years of professional experience in Indian classical vocal music, whereas control group participants did not have any formal training in music. Descriptive statistics showed better waveform morphology in the experimental group than in the control group. MANOVA showed significantly better onset latency, peak amplitude and area under the curve in the experimental group but no significant difference in the offset and peak latencies between the two groups. The present study probably points towards the enhancement of pre-attentive auditory discrimination skills in Indian classical vocal musicians compared to non-musicians. It indicates that Indian classical musical training enhances pre-attentive auditory discrimination skills in musicians, leading to higher peak amplitude and a greater area under the curve compared to non-musicians.
Furutani, Rui
2008-01-01
The present investigation carried out Nissl, Klüver-Barrera, and Golgi studies of the cerebral cortex in three distinct genera of oceanic dolphins (Risso's dolphin, striped dolphin and bottlenose dolphin) to identify and classify cortical laminar and cytoarchitectonic structures in four distinct functional areas, including primary motor (M1), primary sensory (S1), primary visual (V1), and primary auditory (A1) cortices. The laminar and cytoarchitectonic organization of each of these cortical areas was similar among the three dolphin species. M1 was visualized as a five-layer structure that included the molecular layer (layer I), external granular layer (layer II), external pyramidal layer (layer III), internal pyramidal layer (layer V), and fusiform layer (layer VI). The internal granular layer was absent. The cetacean sensory-related cortical areas S1, V1, and A1 were also found to have a five-layer organization comprising layers I, II, III, V and VI. In particular, A1 was characterized by the broadest layers I and II and a developed band of pyramidal neurons in layers III (sublayers IIIa, IIIb and IIIc) and V. A patch organization consisting of layer IIIb pyramidal neurons was detected in S1 and V1, but not in A1. The laminar patterns of V1 and S1 were similar, but the cytoarchitectonic structures of the two areas were different. V1 was characterized by a broader layer II than that of S1, and also contained specialized pyramidal and multipolar stellate neurons in layers III and V. PMID:18625031
Auditory conflict and congruence in frontotemporal dementia.
Clark, Camilla N; Nicholas, Jennifer M; Agustus, Jennifer L; Hardy, Christopher J D; Russell, Lucy L; Brotherhood, Emilie V; Dick, Katrina M; Marshall, Charles R; Mummery, Catherine J; Rohrer, Jonathan D; Warren, Jason D
2017-09-01
Impaired analysis of signal conflict and congruence may contribute to diverse socio-emotional symptoms in frontotemporal dementias; however, the underlying mechanisms have not been defined. Here we addressed this issue in patients with behavioural variant frontotemporal dementia (bvFTD; n = 19) and semantic dementia (SD; n = 10) relative to healthy older individuals (n = 20). We created auditory scenes in which the semantic and emotional congruity of constituent sounds were independently probed; associated tasks controlled for auditory perceptual similarity, scene parsing and semantic competence. Neuroanatomical correlates of auditory congruity processing were assessed using voxel-based morphometry. Relative to healthy controls, both the bvFTD and SD groups had impaired semantic and emotional congruity processing (after taking auditory control task performance into account) and reduced affective integration of sounds into scenes. Grey matter correlates of auditory semantic congruity processing were identified in distributed regions encompassing prefrontal, parieto-temporal and insular areas, and correlates of auditory emotional congruity in partly overlapping temporal, insular and striatal regions. Our findings suggest that decoding of auditory signal relatedness may probe a generic cognitive mechanism and neural architecture underpinning frontotemporal dementia syndromes. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
A Brain System for Auditory Working Memory.
Kumar, Sukhbinder; Joseph, Sabine; Gander, Phillip E; Barascud, Nicolas; Halpern, Andrea R; Griffiths, Timothy D
2016-04-20
The brain basis for auditory working memory, the process of actively maintaining sounds in memory over short periods of time, is controversial. Using functional magnetic resonance imaging in human participants, we demonstrate that the maintenance of single tones in memory is associated with activation in auditory cortex. In addition, sustained activation was observed in hippocampus and inferior frontal gyrus. Multivoxel pattern analysis showed that patterns of activity in auditory cortex and left inferior frontal gyrus distinguished the tone that was maintained in memory. Functional connectivity during maintenance was demonstrated between auditory cortex and both the hippocampus and inferior frontal cortex. The data support a system for auditory working memory based on the maintenance of sound-specific representations in auditory cortex by projections from higher-order areas, including the hippocampus and frontal cortex. In this work, we demonstrate a system for maintaining sound in working memory based on activity in auditory cortex, hippocampus, and frontal cortex, and functional connectivity among them. Specifically, our work makes three advances from the previous work. First, we robustly demonstrate hippocampal involvement in all phases of auditory working memory (encoding, maintenance, and retrieval): the role of hippocampus in working memory is controversial. Second, using a pattern classification technique, we show that activity in the auditory cortex and inferior frontal gyrus is specific to the maintained tones in working memory. Third, we show long-range connectivity of auditory cortex to hippocampus and frontal cortex, which may be responsible for keeping such representations active during working memory maintenance. Copyright © 2016 Kumar et al.
Avey, Marc T; Hoeschele, Marisa; Moscicki, Michele K; Bloomfield, Laurie L; Sturdy, Christopher B
2011-01-01
Songbird auditory areas (i.e., CMM and NCM) are preferentially activated by playback of conspecific vocalizations relative to heterospecific vocalizations and arbitrary noise. Here, we asked whether the neural response to auditory stimulation is not simply preferential for conspecific vocalizations but also for the information conveyed by the vocalization. Black-capped chickadees use their chick-a-dee mobbing call to recruit conspecifics and other avian species to mob perched predators. Mobbing calls produced in response to smaller, higher-threat predators contain more "D" notes compared to those produced in response to larger, lower-threat predators and thus convey the degree of threat posed by predators. We specifically asked whether the neural response varies with the degree of threat conveyed by the mobbing calls of chickadees and whether the neural response is the same for actual predator calls that correspond to the degree of threat of the chickadee mobbing calls. Our results demonstrate that, as degree of threat increases in conspecific chickadee mobbing calls, there is a corresponding increase in immediate early gene (IEG) expression in telencephalic auditory areas. We also demonstrate that, as the degree of threat increases for the heterospecific predator, there is a corresponding increase in IEG expression in the auditory areas. Furthermore, there was no significant difference in the amount of IEG expression between conspecific mobbing calls and heterospecific predator calls of the same degree of threat. In a second experiment, using hand-reared chickadees without predator experience, we found more IEG expression in response to mobbing calls than to corresponding predator calls, indicating that degree of threat is learned. Our results demonstrate that degree of threat corresponds to neural activity in the auditory areas, that threat can be conveyed by the signals of different species, and that these signals must be learned.
Functional significance of the electrocorticographic auditory responses in the premotor cortex.
Tanji, Kazuyo; Sakurada, Kaori; Funiu, Hayato; Matsuda, Kenichiro; Kayama, Takamasa; Ito, Sayuri; Suzuki, Kyoko
2015-01-01
Other than the well-known motor activities of the precentral gyrus, functional magnetic resonance imaging (fMRI) studies have found that the ventral part of the precentral gyrus is activated in response to linguistic auditory stimuli. It has been proposed that the premotor cortex in the precentral gyrus is responsible for the comprehension of speech, but the precise function of this area is still debated because patients with frontal lesions that include the precentral gyrus do not exhibit disturbances in speech comprehension. We report on a patient who underwent resection of a tumor in the precentral gyrus with electrocorticographic recordings while she performed a verb generation task during awake craniotomy. Consistent with previous fMRI studies, high-gamma band auditory activity was observed in the precentral gyrus. Due to the location of the tumor, the patient underwent resection of the auditory-responsive precentral area, which resulted in the post-operative expression of a characteristic articulatory disturbance known as apraxia of speech (AOS). The language function of the patient was otherwise preserved and she exhibited intact comprehension of both spoken and written language. The present findings demonstrate that a lesion restricted to the ventral precentral gyrus is sufficient for the expression of AOS and suggest that the auditory-responsive area plays an important role in the execution of fluent speech rather than the comprehension of speech. These findings also confirm that the function of the premotor area is predominantly motor in nature and that its sensory responses are more consistent with the "sensory theory of speech production," in which it was proposed that sensory representations are used to guide motor-articulatory processes.
Learning to Encode Timing: Mechanisms of Plasticity in the Auditory Brainstem
Tzounopoulos, Thanos; Kraus, Nina
2009-01-01
Mechanisms of plasticity have traditionally been ascribed to higher-order sensory processing areas such as the cortex, whereas early sensory processing centers have been considered largely hard-wired. In agreement with this view, the auditory brainstem has been viewed as a nonplastic site, important for preserving temporal information and minimizing transmission delays. However, recent groundbreaking results from animal models and human studies have revealed remarkable evidence for cellular and behavioral mechanisms for learning and memory in the auditory brainstem. PMID:19477149
Gavrilescu, M; Rossell, S; Stuart, G W; Shea, T L; Innes-Brown, H; Henshall, K; McKay, C; Sergejew, A A; Copolov, D; Egan, G F
2010-07-01
Previous research has reported auditory processing deficits that are specific to schizophrenia patients with a history of auditory hallucinations (AH). One explanation for these findings is that there are abnormalities in the interhemispheric connectivity of auditory cortex pathways in AH patients; as yet this explanation has not been experimentally investigated. We assessed the interhemispheric connectivity of both primary (A1) and secondary (A2) auditory cortices in n=13 AH patients, n=13 schizophrenia patients without auditory hallucinations (non-AH) and n=16 healthy controls using functional connectivity measures from functional magnetic resonance imaging (fMRI) data. Functional connectivity was estimated from resting state fMRI data using regions of interest defined for each participant based on functional activation maps in response to passive listening to words. Additionally, stimulus-induced responses were regressed out of the stimulus data and the functional connectivity was estimated for the same regions to investigate the reliability of the estimates. AH patients had significantly reduced interhemispheric connectivity in both A1 and A2 when compared with non-AH patients and healthy controls; the latter two groups did not differ in functional connectivity. Further, this pattern of findings was similar across the two datasets, indicating the reliability of our estimates. These data have identified a trait deficit specific to AH patients. Since this deficit was characterized within both A1 and A2, it is expected to result in the disruption of multiple auditory functions, for example, the integration of basic auditory information between hemispheres (via A1) and higher-order language processing abilities (via A2).
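Functional connectivity of the kind estimated in this study is commonly quantified as the Pearson correlation between mean ROI time series, optionally after regressing out a stimulus-induced response, as the authors did for their reliability check. The sketch below uses synthetic signals and invented ROI names; it illustrates the general approach, not the study's analysis code:

```python
import numpy as np

rng = np.random.default_rng(2)
n_timepoints = 200

# Synthetic mean BOLD time series for hypothetical left/right A1 ROIs:
# a shared fluctuation plus ROI-specific noise models interhemispheric coupling.
shared = rng.standard_normal(n_timepoints)
left_a1 = shared + 0.5 * rng.standard_normal(n_timepoints)
right_a1 = shared + 0.5 * rng.standard_normal(n_timepoints)

def functional_connectivity(ts_a, ts_b, confound=None):
    """Pearson correlation between two ROI time series, optionally after
    regressing out a confound time series (e.g. a stimulus-induced response)."""
    series = []
    for ts in (ts_a, ts_b):
        ts = np.asarray(ts, dtype=float)
        if confound is not None:
            coeffs = np.polyfit(confound, ts, 1)    # linear fit to the confound
            ts = ts - np.polyval(coeffs, confound)  # keep the residuals
        series.append(ts)
    return float(np.corrcoef(series[0], series[1])[0, 1])

fc = functional_connectivity(left_a1, right_a1)
print(f"Interhemispheric A1 connectivity: r = {fc:.2f}")
```

Passing the shared fluctuation as `confound` removes the common signal and drives the correlation toward zero, which is the logic behind regressing out stimulus-induced responses before estimating resting connectivity.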
Dampney, Roger
2018-01-01
The midbrain periaqueductal gray (PAG) plays a major role in generating different types of behavioral responses to emotional stressors. This review focuses on the role of the dorsolateral (dl) portion of the PAG, which, on the basis of anatomical and functional studies, appears to have a unique and distinctive role in generating behavioral, cardiovascular and respiratory responses to real and perceived emotional stressors. In particular, the dlPAG, but not other parts of the PAG, receives direct inputs from the primary auditory cortex and from the secondary visual cortex. In addition, there are strong direct inputs to the dlPAG, but not other parts of the PAG, from regions within the medial prefrontal cortex that in primates correspond to cortical areas 10m, 25 and 32. I first summarise the evidence that the inputs to the dlPAG arising from visual, auditory and olfactory signals trigger defensive behavioral responses, supported by appropriate cardiovascular and respiratory effects, when such signals indicate the presence of a real external threat, such as the presence of a predator. I then consider the functional roles of the direct inputs from the medial prefrontal cortex, and propose the hypothesis that these inputs are activated by perceived threats that are generated as a consequence of complex cognitive processes. I further propose that the inputs from areas 10m, 25 and 32 are activated under different circumstances. The input from cortical area 10m is of special interest, because this cortical area exists only in primates and is much larger in the brain of humans than in all other primates. PMID:29881334
[Functional anatomy of the cochlear nerve and the central auditory system].
Simon, E; Perrot, X; Mertens, P
2009-04-01
The auditory pathways are a system of afferent fibers (through the cochlear nerve) and efferent fibers (through the vestibular nerve), which are not limited to a simple information transmitting system but create a veritable integration of the sound stimulus at the different levels, by analyzing its three fundamental elements: frequency (pitch), intensity, and spatial localization of the sound source. From the cochlea to the primary auditory cortex, the auditory fibers are organized anatomically in relation to the characteristic frequency of the sound signal that they transmit (tonotopy). Coding the intensity of the sound signal is based on temporal recruitment (the number of action potentials) and spatial recruitment (the number of inner hair cells recruited near the cell of the frequency that is characteristic of the stimulus). Because of binaural hearing, commissural pathways at each level of the auditory system and integration of the phase shift and the difference in intensity between signals coming from both ears, spatial localization of the sound source is possible. Finally, through the efferent fibers in the vestibular nerve, higher centers exercise control over the activity of the cochlea and adjust the peripheral hearing organ to external sound conditions, thus protecting the auditory system or increasing sensitivity by the attention given to the signal.
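The interaural phase-shift (time difference) cue for spatial localization described in this record can be illustrated with a toy computation: find the lag that maximizes the cross-correlation between the two ear signals, then invert a simple spherical-head model, ITD ≈ (2r/c)·sin θ. The head radius, the model itself, and the function names below are illustrative assumptions.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, in air at ~20 °C
HEAD_RADIUS = 0.0875     # m, an assumed average head radius

def interaural_lag(left, right, max_lag):
    """Lag (in samples) of `right` relative to `left` that maximizes
    the cross-correlation between the two ear signals."""
    def xcorr(lag):
        return sum(left[i] * right[i + lag]
                   for i in range(len(left)) if 0 <= i + lag < len(right))
    return max(range(-max_lag, max_lag + 1), key=xcorr)

def azimuth_degrees(itd_seconds):
    """Invert the simple model ITD = (2r/c)*sin(theta); clamped for robustness."""
    s = max(-1.0, min(1.0, itd_seconds * SPEED_OF_SOUND / (2 * HEAD_RADIUS)))
    return math.degrees(math.asin(s))
```

For example, a click arriving 3 samples later at the right ear at a 44.1 kHz sampling rate (ITD ≈ 68 µs) maps to an azimuth of roughly 8 degrees under this model.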
The harmonic organization of auditory cortex.
Wang, Xiaoqin
2013-12-17
A fundamental structure of sounds encountered in the natural environment is harmonicity. Harmonicity is an essential component of music found in all cultures. It is also a unique feature of vocal communication sounds such as human speech and animal vocalizations. Harmonics in sounds are produced by a variety of acoustic generators and reflectors in the natural environment, including the vocal apparatuses of humans and animal species as well as musical instruments of many types. We live in an acoustic world full of harmonicity. Given the widespread existence of harmonicity in many aspects of the hearing environment, it is natural to expect that it be reflected in the evolution and development of the auditory systems of both humans and animals, in particular the auditory cortex. Recent neuroimaging and neurophysiology experiments have identified regions of non-primary auditory cortex in humans and non-human primates that have selective responses to harmonic pitches. Accumulating evidence has also shown that neurons in many regions of the auditory cortex exhibit characteristic responses to harmonically related frequencies beyond the range of pitch. Together, these findings suggest that a fundamental organizational principle of auditory cortex is based on harmonicity. Such an organization likely plays an important role in music processing by the brain. It may also form the basis of the preference for particular classes of music and voice sounds.
Aizenberg, Mark; Mwilambwe-Tshilobo, Laetitia; Briguglio, John J.; Natan, Ryan G.; Geffen, Maria N.
2015-01-01
The ability to discriminate tones of different frequencies is fundamentally important for everyday hearing. While neurons in the primary auditory cortex (AC) respond differentially to tones of different frequencies, whether and how AC regulates auditory behaviors that rely on frequency discrimination remains poorly understood. Here, we find that the level of activity of inhibitory neurons in AC controls frequency specificity in innate and learned auditory behaviors that rely on frequency discrimination. Photoactivation of parvalbumin-positive interneurons (PVs) improved the ability of the mouse to detect a shift in tone frequency, whereas photosuppression of PVs impaired the performance. Furthermore, photosuppression of PVs during discriminative auditory fear conditioning increased generalization of conditioned response across tone frequencies, whereas PV photoactivation preserved normal specificity of learning. The observed changes in behavioral performance were correlated with bidirectional changes in the magnitude of tone-evoked responses, consistent with predictions of a model of a coupled excitatory-inhibitory cortical network. Direct photoactivation of excitatory neurons, which did not change tone-evoked response magnitude, did not affect behavioral performance in either task. Our results identify a new function for inhibition in the auditory cortex, demonstrating that it can improve or impair acuity of innate and learned auditory behaviors that rely on frequency discrimination. PMID:26629746
Steinschneider, Mitchell; Micheyl, Christophe
2014-01-01
The ability to attend to a particular sound in a noisy environment is an essential aspect of hearing. To accomplish this feat, the auditory system must segregate sounds that overlap in frequency and time. Many natural sounds, such as human voices, consist of harmonics of a common fundamental frequency (F0). Such harmonic complex tones (HCTs) evoke a pitch corresponding to their F0. A difference in pitch between simultaneous HCTs provides a powerful cue for their segregation. The neural mechanisms underlying concurrent sound segregation based on pitch differences are poorly understood. Here, we examined neural responses in monkey primary auditory cortex (A1) to two concurrent HCTs that differed in F0 such that they are heard as two separate “auditory objects” with distinct pitches. We found that A1 can resolve, via a rate-place code, the lower harmonics of both HCTs, a prerequisite for deriving their pitches and for their perceptual segregation. Onset asynchrony between the HCTs enhanced the neural representation of their harmonics, paralleling their improved perceptual segregation in humans. Pitches of the concurrent HCTs could also be temporally represented by neuronal phase-locking at their respective F0s. Furthermore, a model of A1 responses using harmonic templates could qualitatively reproduce psychophysical data on concurrent sound segregation in humans. Finally, we identified a possible intracortical homolog of the “object-related negativity” recorded noninvasively in humans, which correlates with the perceptual segregation of concurrent sounds. Findings indicate that A1 contains sufficient spectral and temporal information for segregating concurrent sounds based on differences in pitch. PMID:25209282
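The idea that a rate-place representation of resolved harmonics, matched against harmonic templates, suffices to assign spectral components to two concurrent F0s can be caricatured with a simple harmonic-sieve score. The tolerance, number of harmonics, and candidate set below are arbitrary illustrative choices, not the authors' model.

```python
def harmonic_score(peaks_hz, f0, n_harmonics=4, tol=0.03):
    """Fraction of the first n harmonics of f0 found among resolved spectral peaks
    (a peak counts as a hit if it lies within `tol` relative error of a harmonic)."""
    harmonics = [f0 * k for k in range(1, n_harmonics + 1)]
    hits = sum(any(abs(p - h) / h < tol for p in peaks_hz) for h in harmonics)
    return hits / n_harmonics

def best_f0(peaks_hz, candidates_hz):
    """Candidate F0 whose harmonic template best matches the observed peaks."""
    return max(candidates_hz, key=lambda f0: harmonic_score(peaks_hz, f0))
```

With the resolved peaks of two concurrent harmonic tones (say F0 = 200 Hz and 320 Hz), the 200 Hz template scores a perfect match while the 320 Hz template still scores well, mirroring how two pitches can both be recovered from one rate-place profile.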
Cortical plasticity as a mechanism for storing Bayesian priors in sensory perception.
Köver, Hania; Bao, Shaowen
2010-05-05
Human perception of ambiguous sensory signals is biased by prior experiences. It is not known how such prior information is encoded, retrieved and combined with sensory information by neurons. Previous authors have suggested dynamic encoding mechanisms for prior information, whereby top-down modulation of firing patterns on a trial-by-trial basis creates short-term representations of priors. Although such a mechanism may well account for perceptual bias arising in the short-term, it does not account for the often irreversible and robust changes in perception that result from long-term, developmental experience. Based on the finding that more frequently experienced stimuli gain greater representations in sensory cortices during development, we reasoned that prior information could be stored in the size of cortical sensory representations. For the case of auditory perception, we use a computational model to show that prior information about sound frequency distributions may be stored in the size of primary auditory cortex frequency representations, read-out by elevated baseline activity in all neurons and combined with sensory-evoked activity to generate a perception that conforms to Bayesian integration theory. Our results suggest an alternative neural mechanism for experience-induced long-term perceptual bias in the context of auditory perception. They make the testable prediction that the extent of such perceptual prior bias is modulated by both the degree of cortical reorganization and the magnitude of spontaneous activity in primary auditory cortex. Given that cortical over-representation of frequently experienced stimuli, as well as perceptual bias towards such stimuli is a common phenomenon across sensory modalities, our model may generalize to sensory perception, rather than being specific to auditory perception.
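The proposed read-out, in which prior information stored in the size of cortical frequency representations is combined with sensory evidence, can be sketched as a discrete Bayesian integration: the prior is encoded by the number of neurons per preferred frequency, and perception is the posterior mean. The Gaussian likelihood and the toy neuron counts below are illustrative assumptions, not the authors' model.

```python
import math

def likelihood(observed, preferred, sigma=1.0):
    """Gaussian likelihood of the observation given a neuron's preferred frequency."""
    return math.exp(-0.5 * ((observed - preferred) / sigma) ** 2)

def perceived_frequency(observed, neuron_counts, sigma=1.0):
    """Posterior-mean estimate where the prior over frequency is encoded by the
    number of neurons representing each frequency (i.e., cortical map size)."""
    post = {f: n * likelihood(observed, f, sigma) for f, n in neuron_counts.items()}
    z = sum(post.values())
    return sum(f * p for f, p in post.items()) / z
```

Over-representing one frequency pulls ambiguous percepts toward it, which is exactly the experience-induced bias the model is meant to explain.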
Out-of-synchrony speech entrainment in developmental dyslexia.
Molinaro, Nicola; Lizarazu, Mikel; Lallier, Marie; Bourguignon, Mathieu; Carreiras, Manuel
2016-08-01
Developmental dyslexia is a reading disorder often characterized by reduced awareness of speech units. Whether the neural source of this phonological disorder in dyslexic readers results from the malfunctioning of the primary auditory system or damaged feedback communication between higher-order phonological regions (i.e., left inferior frontal regions) and the auditory cortex is still under dispute. Here we recorded magnetoencephalographic (MEG) signals from 20 dyslexic readers and 20 age-matched controls while they were listening to ∼10-s-long spoken sentences. Compared to controls, dyslexic readers had (1) an impaired neural entrainment to speech in the delta band (0.5-1 Hz); (2) a reduced delta synchronization in both the right auditory cortex and the left inferior frontal gyrus; and (3) an impaired feedforward functional coupling between neural oscillations in the right auditory cortex and the left inferior frontal regions. This shows that during speech listening, individuals with developmental dyslexia present reduced neural synchrony to low-frequency speech oscillations in primary auditory regions that hinders higher-order speech processing steps. The present findings, thus, strengthen proposals assuming that improper low-frequency acoustic entrainment affects speech sampling. This low speech-brain synchronization has the strong potential to cause severe consequences for both phonological and reading skills. Interestingly, the reduced speech-brain synchronization in dyslexic readers compared to normal readers (and its higher-order consequences across the speech processing network) appears preserved through the development from childhood to adulthood. Thus, the evaluation of speech-brain synchronization could possibly serve as a diagnostic tool for early detection of children at risk of dyslexia. Hum Brain Mapp 37:2767-2783, 2016. © 2016 Wiley Periodicals, Inc.
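Speech-brain entrainment of the kind measured in this study is commonly quantified with a phase-locking value (PLV) between the speech envelope and the neural signal in the band of interest. The sketch below uses a single-bin Fourier phase estimate per window, which is a crude illustrative stand-in for the authors' MEG analysis; the window scheme and function names are assumptions.

```python
import cmath
import math

def phase_at(signal, freq_hz, fs):
    """Phase of `signal` at freq_hz, via a single-bin Fourier projection."""
    acc = sum(x * cmath.exp(-2j * math.pi * freq_hz * k / fs)
              for k, x in enumerate(signal))
    return cmath.phase(acc)

def phase_locking_value(sig_a, sig_b, freq_hz, fs, win):
    """PLV over non-overlapping windows; 1.0 means a perfectly consistent
    phase lag between the two signals at freq_hz."""
    vecs = []
    n = min(len(sig_a), len(sig_b))
    for start in range(0, n - win + 1, win):
        pa = phase_at(sig_a[start:start + win], freq_hz, fs)
        pb = phase_at(sig_b[start:start + win], freq_hz, fs)
        vecs.append(cmath.exp(1j * (pa - pb)))
    return abs(sum(vecs)) / len(vecs)
```

Two signals with a fixed delta-band phase lag yield a PLV near 1, whereas a drifting phase relation (as in impaired entrainment) drives the PLV down.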
Auditory cortex asymmetry, altered minicolumn spacing and absence of ageing effects in schizophrenia
Casanova, Manuel F.; Switala, Andy E.; Crow, Timothy J.
2008-01-01
The superior temporal gyrus, which contains the auditory cortex, including the planum temporale, is the most consistently altered neocortical structure in schizophrenia (Shenton ME, Dickey CC, Frumin M, McCarley RW. A review of MRI findings in schizophrenia. Schizophr Res 2001; 49: 1–52). Auditory hallucinations are associated with abnormalities in this region and activation in Heschl's gyrus. Our review of 34 MRI and 5 post-mortem studies of planum temporale reveals that half of those measuring region size reported a change in schizophrenia, usually consistent with a reduction in the left hemisphere and a relative increase in the right hemisphere. Furthermore, female subjects are under-represented in the literature and insight from sex differences may be lost. Here we present evidence from post-mortem brain (N = 21 patients, compared with 17 previously reported controls) that normal age-associated changes in planum temporale are not found in schizophrenia. These age-associated differences are reported in an adult population (age range 29–90 years) and were not found in the primary auditory cortex of Heschl's gyrus, indicating that they are selective to the more plastic regions of association cortex involved in cognition. Areas and volumes of Heschl's gyrus and planum temporale and the separation of the minicolumns that are held to be the structural units of the cerebral cortex were assessed in patients. Minicolumn distribution in planum temporale and Heschl's gyrus was assessed on Nissl-stained sections by semi-automated microscope image analysis. The cortical surface area of planum temporale in the left hemisphere (usually asymmetrically larger) was positively correlated with its constituent minicolumn spacing in patients and controls. Surface area asymmetry of planum temporale was reduced in patients with schizophrenia by a reduction in the left hemisphere (F = 7.7, df 1,32, P < 0.01). 
The relationship between cortical asymmetry and the connecting, interhemispheric callosal white matter was also investigated; minicolumn asymmetry of both Heschl's gyrus and planum temporale was correlated with axon number in the wrong subregions of the corpus callosum in patients. The spacing of minicolumns was altered in a sex-dependent manner due to the absence of age-related minicolumn thinning in schizophrenia. This is interpreted as a failure of adult neuroplasticity that maintains neuropil space. The arrested capacity to absorb anomalous events and cognitive demands may confer vulnerability to schizophrenic symptoms when adult neuroplastic demands are not met. PMID:18819990
Reliance on auditory feedback in children with childhood apraxia of speech.
Iuzzini-Seigel, Jenya; Hogan, Tiffany P; Guarino, Anthony J; Green, Jordan R
2015-01-01
Children with childhood apraxia of speech (CAS) have been hypothesized to continuously monitor their speech through auditory feedback to minimize speech errors. We used an auditory masking paradigm to determine the effect of attenuating auditory feedback on speech in 30 children: 9 with CAS, 10 with speech delay, and 11 with typical development. The masking only affected the speech of children with CAS as measured by voice onset time and vowel space area. These findings provide preliminary support for greater reliance on auditory feedback among children with CAS. Readers of this article should be able to (i) describe the motivation for investigating the role of auditory feedback in children with CAS; (ii) report the effects of feedback attenuation on speech production in children with CAS, speech delay, and typical development, and (iii) understand how the current findings may support a feedforward program deficit in children with CAS. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Correlation between the characteristics of resonance and aging of the external ear.
Silva, Aline Papin Roedas da; Blasca, Wanderléia Quinhoneiro; Lauris, José Roberto Pereira; Oliveira, Jerusa Roberta Massola de
2014-01-01
Aging causes changes in the external ear, such as collapse of the external auditory canal and a senile tympanic membrane. Knowledge of these changes is relevant to the diagnosis of hearing loss and the selection of hearing aids. For this reason, this study aimed to verify the influence of anatomical changes of the external ear on ear-canal resonance in the elderly. The sample consisted of objective measures of the external ear in elderly subjects with canal collapse (group A), with a senile tympanic membrane (group B), and without alteration of the external auditory canal or tympanic membrane (group C), and in adults without alteration of the external ear (group D). In this retrospective clinical study, measures from individuals with and without alteration of the external ear were compared using the gain and response of the external-ear resonance and the frequency of the primary peak for the right ear. In groups A, B, and C there was a statistically significant difference for the Real Ear Unaided Response (REUR) and Real Ear Unaided Gain (REUG), but not for the peak frequency. Between groups A and B, significant differences were shown in REUR and REUG. Between groups C and D, differences were statistically significant for REUR and REUG, but not for the frequency of the primary peak. These alterations influence external-ear resonance, decreasing its amplitude; the frequency of the primary peak, however, is not affected.
Anomal, Renata; de Villers-Sidani, Etienne; Merzenich, Michael M; Panizzutti, Rogerio
2013-01-01
Sensory experience powerfully shapes cortical sensory representations during an early developmental "critical period" of plasticity. In the rat primary auditory cortex (A1), the experience-dependent plasticity is exemplified by significant, long-lasting distortions in frequency representation after mere exposure to repetitive frequencies during the second week of life. In the visual system, the normal unfolding of critical period plasticity is strongly dependent on the elaboration of brain-derived neurotrophic factor (BDNF), which promotes the establishment of inhibition. Here, we tested the hypothesis that BDNF signaling plays a role in the experience-dependent plasticity induced by pure tone exposure during the critical period in the primary auditory cortex. Elvax resin implants filled with either a blocking antibody against BDNF or the BDNF protein were placed on the A1 of rat pups throughout the critical period window. These pups were then exposed to 7 kHz pure tone for 7 consecutive days and their frequency representations were mapped. BDNF blockade completely prevented the shaping of cortical tuning by experience and resulted in poor overall frequency tuning in A1. By contrast, BDNF infusion on the developing A1 amplified the effect of 7 kHz tone exposure compared to control. These results indicate that BDNF signaling participates in the experience-dependent plasticity induced by pure tone exposure during the critical period in A1.
[Three applications and the challenge of the big data in otology].
Lei, Guanxiong; Li, Jianan; Shen, Weidong; Yang, Shiming
2016-03-01
With the expansion of human activity, more and more fields are confronting big-data problems. The emergence of big data requires researchers to update their research paradigms and to develop new technical methods. This review discusses the opportunities and challenges that big data may bring to the areas of auditory implantation, the deafness genome, and auditory pathophysiology, and points out that appropriate theories and methods are needed to turn this expectation into reality.
Auditory evoked functions in ground crew working in high noise environment of Mumbai airport.
Thakur, L; Anand, J P; Banerjee, P K
2004-10-01
The continuous exposure to the relatively high level of noise in the surroundings of an airport is likely to affect the central pathway of the auditory system as well as the cognitive functions of the people working in that environment. The Brainstem Auditory Evoked Responses (BAER), Mid Latency Response (MLR) and P300 response of the ground crew employees working in Mumbai airport were studied to evaluate the effects of continuous exposure to the high level of noise in the surroundings of the airport on these responses. BAER, P300 and MLR were recorded using a Nicolet Compact-4 (USA) instrument. Audiometry was also monitored with the help of a GSI-16 audiometer. There was a significant increase in the peak III latency of the BAER in the subjects exposed to noise compared to controls, with no change in their P300 values. The exposed group showed hearing loss at different frequencies. The exposure to the high level of noise caused a considerable decline in auditory conduction up to the level of the brainstem, with no significant change in conduction in the midbrain, subcortical areas, auditory cortex and associated areas. There was also no significant change in cognitive function as measured by the P300 response.
Audio–visual interactions for motion perception in depth modulate activity in visual area V3A
Ogawa, Akitoshi; Macaluso, Emiliano
2013-01-01
Multisensory signals can enhance the spatial perception of objects and events in the environment. Changes of visual size and auditory intensity provide us with the main cues about motion direction in depth. However, frequency changes in audition and binocular disparity in vision also contribute to the perception of motion in depth. Here, we presented subjects with several combinations of auditory and visual depth-cues to investigate multisensory interactions during processing of motion in depth. The task was to discriminate the direction of auditory motion in depth according to increasing or decreasing intensity. Rising or falling auditory frequency provided an additional within-audition cue that matched or did not match the intensity change (i.e. intensity-frequency (IF) “matched vs. unmatched” conditions). In two-thirds of the trials, a task-irrelevant visual stimulus moved either in the same or opposite direction of the auditory target, leading to audio–visual “congruent vs. incongruent” between-modalities depth-cues. Furthermore, these conditions were presented either with or without binocular disparity. Behavioral data showed that the best performance was observed in the audio–visual congruent condition with IF matched. Brain imaging results revealed maximal response in visual area V3A when all cues provided congruent and reliable depth information (i.e. audio–visual congruent, IF-matched condition including disparity cues). Analyses of effective connectivity revealed increased coupling from auditory cortex to V3A specifically in audio–visual congruent trials. We conclude that within- and between-modalities cues jointly contribute to the processing of motion direction in depth, and that they do so via dynamic changes of connectivity between visual and auditory cortices. PMID:23333414
Niederleitner, Bertram; Gutierrez-Ibanez, Cristian; Krabichler, Quirin; Weigel, Stefan; Luksch, Harald
2017-02-15
Processing multimodal sensory information is vital for behaving animals in many contexts. The barn owl, an auditory specialist, is a classic model for studying multisensory integration. In the barn owl, spatial auditory information is conveyed to the optic tectum (TeO) by a direct projection from the external nucleus of the inferior colliculus (ICX). In contrast, evidence of an integration of visual and auditory information in auditory generalist avian species is completely lacking. In particular, it is not known whether in auditory generalist species the ICX projects to the TeO at all. Here we use various retrograde and anterograde tracing techniques both in vivo and in vitro, intracellular fillings of neurons in vitro, and whole-cell patch recordings to characterize the connectivity between ICX and TeO in the chicken. We found that there is a direct projection from ICX to the TeO in the chicken, although this is small and only to the deeper layers (layers 13-15) of the TeO. However, we found a relay area interposed among the IC, the TeO, and the isthmic complex that receives strong synaptic input from the ICX and projects broadly upon the intermediate and deep layers of the TeO. This area is an external portion of the formatio reticularis lateralis (FRLx). In addition to the projection to the TeO, cells in FRLx send, via collaterals, descending projections through tectopontine-tectoreticular pathways. This newly described connection from the inferior colliculus to the TeO provides a solid basis for visual-auditory integration in an auditory generalist bird. J. Comp. Neurol. 525:513-534, 2017. © 2016 Wiley Periodicals, Inc.
Kamal, Brishna; Holman, Constance; de Villers-Sidani, Etienne
2013-01-01
Age-related impairments in the primary auditory cortex (A1) include poor tuning selectivity, neural desynchronization, and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain, and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signal and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and cortical myelin as previously documented in the aged group. Most of these changes reversed after returning the rats to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function. PMID:24062649
Non-visual spatial tasks reveal increased interactions with stance postural control.
Woollacott, Marjorie; Vander Velde, Timothy
2008-05-07
The current investigation aimed to contrast the level and quality of dual-task interactions resulting from the combined performance of a challenging primary postural task and three specific, yet categorically dissociated, secondary central executive tasks. Experiments determined the extent to which modality (visual vs. auditory) and code (non-spatial vs. spatial) specific cognitive resources contributed to postural interference in young adults (n=9) in a dual-task setting. We hypothesized that the different forms of executive n-back task processing employed (visual-object, auditory-object and auditory-spatial) would display contrasting levels of interactions with tandem Romberg stance postural control, and that interactions within the spatial domain would be revealed as most vulnerable to dual-task interactions. Across all cognitive tasks employed, including auditory-object (aOBJ), auditory-spatial (aSPA), and visual-object (vOBJ) tasks, increasing n-back task complexity produced correlated increases in verbal reaction time measures. Increasing cognitive task complexity also resulted in consistent decreases in judgment accuracy. Postural performance was significantly influenced by the type of cognitive loading delivered. At comparable levels of cognitive task difficulty (n-back demands and accuracy judgments) the performance of challenging auditory-spatial tasks produced significantly greater levels of postural sway than either the auditory-object or visual-object based tasks. These results suggest that it is the employment of limited non-visual spatially based coding resources that may underlie previously observed visual dual-task interference effects with stance postural control in healthy young adults.
Song decrystallization in adult zebra finches does not require the song nucleus NIf.
Roy, Arani; Mooney, Richard
2009-08-01
In adult male zebra finches, transecting the vocal nerve causes previously stable (i.e., crystallized) song to slowly degrade, presumably because of the resulting distortion in auditory feedback. How and where distorted feedback interacts with song motor networks to induce this process of song decrystallization remains unknown. The song premotor nucleus HVC is a potential site where auditory feedback signals could interact with song motor commands. Although the forebrain nucleus interface of the nidopallium (NIf) appears to be the primary auditory input to HVC, NIf lesions made in adult zebra finches do not trigger song decrystallization. One possibility is that NIf lesions do not interfere with song maintenance, but do compromise the adult zebra finch's ability to express renewed vocal plasticity in response to feedback perturbations. To test this idea, we bilaterally lesioned NIf and then transected the vocal nerve in adult male zebra finches. We found that bilateral NIf lesions did not prevent nerve section-induced song decrystallization. To test the extent to which the NIf lesions disrupted auditory processing in the song system, we made in vivo extracellular recordings in HVC and a downstream anterior forebrain pathway (AFP) in NIf-lesioned birds. We found strong and selective auditory responses to the playback of the birds' own song persisted in HVC and the AFP following NIf lesions. These findings suggest that auditory inputs to the song system other than NIf, such as the caudal mesopallium, could act as a source of auditory feedback signals to the song motor network.
Neural Biomarkers for Dyslexia, ADHD, and ADD in the Auditory Cortex of Children.
Serrallach, Bettina; Groß, Christine; Bernhofs, Valdis; Engelmann, Dorte; Benner, Jan; Gündert, Nadine; Blatow, Maria; Wengenroth, Martina; Seitz, Angelika; Brunner, Monika; Seither, Stefan; Parncutt, Richard; Schneider, Peter; Seither-Preisler, Annemarie
2016-01-01
Dyslexia, attention deficit hyperactivity disorder (ADHD), and attention deficit disorder (ADD) show distinct clinical profiles that may include auditory and language-related impairments. Currently, an objective brain-based diagnosis of these developmental disorders is still unavailable. We investigated the neuro-auditory systems of dyslexic, ADHD, ADD, and age-matched control children (N = 147) using neuroimaging, magnetoencephalography, and psychoacoustics. All disorder subgroups exhibited an oversized left planum temporale and an abnormal interhemispheric asynchrony (10-40 ms) of the primary auditory evoked P1-response. Considering right auditory cortex morphology, bilateral P1 source waveform shapes, and auditory performance, the three disorder subgroups could be reliably differentiated with outstanding accuracies of 89-98%. We therefore for the first time provide differential biomarkers for a brain-based diagnosis of dyslexia, ADHD, and ADD. The method not only allowed for clear discrimination between two subtypes of attentional disorders (ADHD and ADD), a topic controversially discussed for decades in the scientific community, but also revealed the potential for objectively identifying comorbid cases. Notably, in children who played a musical instrument, the observed interhemispheric asynchronies were reduced by about two-thirds after three and a half years of training, suggesting a strong beneficial influence of musical experience on brain development. These findings might have far-reaching implications for both research and practice and enable a profound understanding of the brain-related etiology, diagnosis, and musically based therapy of common auditory-related developmental disorders and learning disabilities. PMID:27471442
Forebrain pathway for auditory space processing in the barn owl.
Cohen, Y E; Miller, G L; Knudsen, E I
1998-02-01
The forebrain plays an important role in many aspects of sound localization behavior. Yet, the forebrain pathway that processes auditory spatial information is not known for any species. Using standard anatomic labeling techniques, we took a "top-down" approach to trace the flow of auditory spatial information from an output area of the forebrain sound localization pathway (the auditory archistriatum, AAr), back through the forebrain, and into the auditory midbrain. Previous work has demonstrated that AAr units are specialized for auditory space processing. The results presented here show that the AAr receives afferent input from Field L both directly and indirectly via the caudolateral neostriatum. Afferent input to Field L originates mainly in the auditory thalamus, nucleus ovoidalis, which, in turn, receives input from the central nucleus of the inferior colliculus. In addition, we confirmed previously reported projections of the AAr to the basal ganglia, the external nucleus of the inferior colliculus (ICX), the deep layers of the optic tectum, and various brain stem nuclei. A series of inactivation experiments demonstrated that the sharp tuning of AAr sites for binaural spatial cues depends on Field L input but not on input from the auditory space map in the midbrain ICX: pharmacological inactivation of Field L completely eliminated auditory responses in the AAr, whereas bilateral ablation of the midbrain ICX had no appreciable effect on AAr responses. We conclude, therefore, that the forebrain sound localization pathway can process auditory spatial information independently of the midbrain localization pathway.
Exploring the extent and function of higher-order auditory cortex in rhesus monkeys.
Poremba, Amy; Mishkin, Mortimer
2007-07-01
Just as cortical visual processing continues far beyond the boundaries of early visual areas, so too does cortical auditory processing continue far beyond the limits of early auditory areas. In passively listening rhesus monkeys examined with metabolic mapping techniques, cortical areas reactive to auditory stimulation were found to include the entire length of the superior temporal gyrus (STG) as well as several other regions within the temporal, parietal, and frontal lobes. Comparison of these widespread activations with those from an analogous study in vision supports the notion that audition, like vision, is served by several cortical processing streams, each specialized for analyzing a different aspect of sensory input, such as stimulus quality, location, or motion. Exploration with different classes of acoustic stimuli demonstrated that most portions of STG show greater activation on the right than on the left regardless of stimulus class. However, there is a striking shift to left-hemisphere "dominance" during passive listening to species-specific vocalizations, though this reverse asymmetry is observed only in the region of temporal pole. The mechanism for this left temporal pole "dominance" appears to be suppression of the right temporal pole by the left hemisphere, as demonstrated by a comparison of the results in normal monkeys with those in split-brain monkeys. PMID:17321703
Neural Correlates of Sound Localization in Complex Acoustic Environments
Zündorf, Ida C.; Lewald, Jörg; Karnath, Hans-Otto
2013-01-01
Listening to and understanding people in a “cocktail-party situation” is a remarkable feature of the human auditory system. Here we investigated the neural correlates of the ability to localize a particular sound among others in an acoustically cluttered environment in healthy subjects. In a sound localization task, five different natural sounds were presented from five virtual spatial locations during functional magnetic resonance imaging (fMRI). Activity related to auditory stream segregation was revealed in the posterior superior temporal gyrus bilaterally, the anterior insula, the supplementary motor area, and a frontoparietal network. Moreover, the results indicated critical roles of the left planum temporale in extracting the sound of interest from among acoustic distracters and of the precuneus in orienting spatial attention to the target sound. We hypothesized that the left-sided lateralization of the planum temporale activation is related to the higher specialization of the left hemisphere for the analysis of spectrotemporal sound features. Furthermore, the precuneus, a brain area known to be involved in the computation of spatial coordinates across diverse frames of reference for reaching to objects, also seems to be a crucial area for accurately determining the locations of auditory targets in an acoustically complex scene of multiple sound sources. The precuneus thus may not only be involved in visuo-motor processes, but may also subserve related functions in the auditory modality. PMID:23691185
Visual and auditory perception in preschool children at risk for dyslexia.
Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina
2014-11-01
Recently, there has been renewed interest in the perceptual problems of dyslexics. One contentious research issue in this area has been the nature of the perceptual deficit; another is the causal role of this deficit in dyslexia. Most studies have been carried out in adult and child literates; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared preschool children with and without risk for dyslexia on auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that both the visual and the auditory perception of children at risk for dyslexia are impaired. The comparison between groups in auditory and visual perception shows that the achievement of children at risk was lower than that of children without risk for dyslexia on the temporal tasks. There were no differences between groups on the auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptual processing affected both linguistic and nonlinguistic stimuli. Our conclusion is that children at risk for dyslexia show auditory and visual perceptual deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems, and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptual deficits are not a consequence of failing to learn to read; thus, these findings support the temporal processing deficit theory.
Barker, Matthew D; Purdy, Suzanne C
2016-01-01
This research investigates a novel method for identifying and assessing school-aged children with poor auditory processing using a tablet computer. Feasibility and test-retest reliability were investigated by examining the percentage of Group 1 participants able to complete the tasks and developmental effects on performance. Concurrent validity was investigated against traditional tests of auditory processing using Group 2. There were 847 students aged 5 to 13 years in Group 1, and 46 aged 5 to 14 years in Group 2. Some tasks could not be completed by the youngest participants. Significant correlations were found between results in most auditory processing areas assessed by the Feather Squadron test and traditional auditory processing tests. Test-retest comparisons indicated good reliability for most of the Feather Squadron assessments and some of the traditional tests. The results indicate that the Feather Squadron assessment is a time-efficient, feasible, concurrently valid, and reliable approach for measuring auditory processing in school-aged children. Clinically, this may be a useful option for audiologists when performing auditory processing assessments, as it is a relatively fast, engaging, and easy way to assess auditory processing abilities. Further research is needed to investigate the construct validity of this new assessment by examining the association between performance on Feather Squadron and objective evoked potential, lesion, and/or functional imaging measures of auditory function.
Kornysheva, Katja; Schubotz, Ricarda I.
2011-01-01
Integrating auditory and motor information often requires precise timing, as in speech and music. In humans, the position of the ventral premotor cortex (PMv) in the dorsal auditory stream renders this area a node for auditory-motor integration. Yet, it remains unknown whether the PMv is critical for auditory-motor timing and which activity increases help to preserve task performance following its disruption. Sixteen healthy volunteers participated in two sessions, with fMRI measured at baseline and following repetitive transcranial magnetic stimulation (rTMS) of either the left PMv or a control region. Subjects synchronized left or right finger tapping to sub-second beat rates of auditory rhythms in the experimental task, and produced self-paced tapping during spectrally matched auditory stimuli in the control task. Left PMv rTMS impaired auditory-motor synchronization accuracy in the first sub-block following stimulation (p<0.01, Bonferroni corrected), but spared motor timing and attention to the task. Task-related activity increased in the homologous right PMv, but did not predict the behavioral effect of rTMS. In contrast, the anterior midline cerebellum revealed the most pronounced activity increase in less impaired subjects. The present findings suggest a critical role of the left PMv in feed-forward computations enabling accurate auditory-motor timing, which can be compensated for by activity modulations in the cerebellum, but not in the homologous region contralateral to stimulation. PMID:21738657
Stehberg, Jimmy; Dang, Phat T; Frostig, Ron D
2014-01-01
Research based on functional imaging and neuronal recordings in the barrel cortex subdivision of primary somatosensory cortex (SI) of the adult rat has revealed novel aspects of structure-function relationships in this cortex. Specifically, it has demonstrated that single whisker stimulation evokes subthreshold neuronal activity that spreads symmetrically within gray matter from the appropriate barrel area, crosses cytoarchitectural borders of SI and reaches deeply into other unimodal primary cortices such as primary auditory (AI) and primary visual (VI). It was further demonstrated that this spread is supported by a spatially matching underlying diffuse network of border-crossing, long-range projections that could also reach deeply into AI and VI. Here we seek to determine whether such a network of border-crossing, long-range projections is unique to barrel cortex or characterizes also other primary, unimodal sensory cortices and therefore could directly connect them. Using anterograde (BDA) and retrograde (CTb) tract-tracing techniques, we demonstrate that such diffuse horizontal networks directly and mutually connect VI, AI and SI. These findings suggest that diffuse, border-crossing axonal projections connecting directly primary cortices are an important organizational motif common to all major primary sensory cortices in the rat. Potential implications of these findings for topics including cortical structure-function relationships, multisensory integration, functional imaging, and cortical parcellation are discussed. PMID:25309339
Garg, Arun; Schwartz, Daniel; Stevens, Alexander A.
2007-01-01
What happens in vision-related cortical areas when congenitally blind (CB) individuals orient attention to spatial locations? Previous neuroimaging of sighted individuals has found overlapping activation in a network of frontoparietal areas, including the frontal eye-fields (FEF), during both overt (with eye movement) and covert (without eye movement) shifts of spatial attention. Since voluntary eye movement planning seems irrelevant in CB individuals, their FEF neurons should be recruited for alternative functions if the attentional role of FEF in sighted individuals is due only to eye movement planning. Recent neuroimaging of the blind has also reported activation in medial occipital areas, normally associated with visual processing, during a diverse set of non-visual tasks, but their response to attentional shifts remains poorly understood. Here, we used event-related fMRI to explore FEF and medial occipital areas in CB individuals and sighted controls with eyes closed (SC) performing a covert attention orienting task, using endogenous verbal cues and spatialized auditory targets. We found robust stimulus-locked FEF activation in all CB subjects, similar to but stronger than that in SC, suggesting that FEF plays a role in the endogenous orienting of covert spatial attention even in individuals in whom voluntary eye movements are irrelevant. We also found robust activation in bilateral medial occipital cortex in CB but not in SC subjects. The response decreased below baseline following endogenous verbal cues but increased following auditory targets, suggesting that the medial occipital area in CB does not directly engage during cued orienting of attention but may be recruited for the processing of spatialized auditory targets. PMID:17397882
Neural networks mediating sentence reading in the deaf
Hirshorn, Elizabeth A.; Dye, Matthew W. G.; Hauser, Peter C.; Supalla, Ted R.; Bavelier, Daphne
2014-01-01
The present work addresses the neural bases of sentence reading in deaf populations. To better understand the relative role of deafness and spoken language knowledge in shaping the neural networks that mediate sentence reading, three populations with different degrees of English knowledge and depth of hearing loss were included: deaf signers, oral deaf, and hearing individuals. The three groups were matched for reading comprehension and scanned while reading sentences. A similar neural network of left perisylvian areas was observed, supporting the view of a shared network of areas for reading despite differences in hearing and English knowledge. However, differences were observed, in particular in the auditory cortex, with deaf signers and oral deaf showing the greatest bilateral superior temporal gyrus (STG) recruitment as compared to hearing individuals. Importantly, within deaf individuals, the same STG area in the left hemisphere showed greater recruitment as hearing loss increased. To further understand the functional role of such auditory cortex re-organization after deafness, connectivity analyses were performed from the STG regions identified above. Connectivity from the left STG toward areas typically associated with semantic processing (BA45 and thalami) was greater in deaf signers and in oral deaf as compared to hearing. In contrast, connectivity from left STG toward areas identified with speech-based processing was greater in hearing and in oral deaf as compared to deaf signers. These results support the growing literature indicating recruitment of auditory areas after congenital deafness for visually-mediated language functions, and establish that both auditory deprivation and language experience shape its functional reorganization. Implications for differential reliance on semantic vs. phonological pathways during reading in the three groups are discussed. PMID:24959127
Brown, Erik C.; Rothermel, Robert; Nishida, Masaaki; Juhász, Csaba; Muzik, Otto; Hoechstetter, Karsten; Sood, Sandeep; Chugani, Harry T.; Asano, Eishi
2008-01-01
We determined whether high-frequency gamma-oscillations (50-150 Hz) were induced by simple auditory communication over the language network areas in children with focal epilepsy. Four children (ages 7, 9, 10, and 16 years) with intractable left-hemispheric focal epilepsy underwent extraoperative electrocorticography (ECoG) as well as language mapping using neurostimulation and auditory-language-induced gamma-oscillations on ECoG. The audible communication was recorded concurrently and integrated with the ECoG recording to allow accurate time-locking during ECoG analysis. In three children, who successfully completed the auditory-language task, high-frequency gamma-augmentation sequentially involved: i) the posterior superior temporal gyrus when listening to the question, ii) the posterior lateral temporal region and the posterior frontal region in the interval between question completion and the patient’s vocalization, and iii) the pre- and post-central gyri immediately preceding and during the patient’s vocalization. The youngest child, who had attention deficits, failed to cooperate during the auditory-language task, and high-frequency gamma-augmentation was noted only in the posterior superior temporal gyrus when audible questions were given. The size of the language areas suggested by statistically significant high-frequency gamma-augmentation was larger than that defined by neurostimulation. The present method can provide in vivo imaging of electrophysiological activities over the language network areas during language processes. Further studies are warranted to determine whether recording of language-induced gamma-oscillations can supplement language mapping using neurostimulation in the presurgical evaluation of children with focal epilepsy. PMID:18455440
Meas, Steven J.; Zhang, Chun-Li; Dabdoub, Alain
2018-01-01
Disabling hearing loss affects over 5% of the world’s population and impacts the lives of individuals from all age groups. Within the next three decades, the worldwide incidence of hearing impairment is expected to double. Since a leading cause of hearing loss is the degeneration of primary auditory neurons (PANs), the sensory neurons of the auditory system that receive input from mechanosensory hair cells in the cochlea, it may be possible to restore hearing by regenerating PANs. A direct reprogramming approach can be used to convert the resident spiral ganglion glial cells into induced neurons to restore hearing. This review summarizes recent advances in reprogramming glia in the CNS to suggest future steps for regenerating the peripheral auditory system. In the coming years, direct reprogramming of spiral ganglion glial cells has the potential to become one of the leading biological strategies to treat hearing impairment. PMID:29593497
Sun, Hongyu; Takesian, Anne E; Wang, Ting Ting; Lippman-Bell, Jocelyn J; Hensch, Takao K; Jensen, Frances E
2018-05-29
Heightened neural excitability in infancy and childhood results in increased susceptibility to seizures. Such early-life seizures are associated with language deficits and autism that can result from aberrant development of the auditory cortex. Here, we show that early-life seizures disrupt a critical period (CP) for tonotopic map plasticity in primary auditory cortex (A1). We show that this CP is characterized by a prevalence of "silent," NMDA-receptor (NMDAR)-only, glutamate receptor synapses in auditory cortex that become "unsilenced" due to activity-dependent AMPA receptor (AMPAR) insertion. Induction of seizures prior to this CP occludes tonotopic map plasticity by prematurely unsilencing NMDAR-only synapses. Further, brief treatment with the AMPAR antagonist NBQX following seizures, prior to the CP, prevents synapse unsilencing and permits subsequent A1 plasticity. These findings reveal that early-life seizures modify CP regulators and suggest that therapeutic targets for early post-seizure treatment can rescue CP plasticity.
Abnormal auditory synchronization in stuttering: A magnetoencephalographic study.
Kikuchi, Yoshikazu; Okamoto, Tsuyoshi; Ogata, Katsuya; Hagiwara, Koichi; Umezaki, Toshiro; Kenjo, Masamutsu; Nakagawa, Takashi; Tobimatsu, Shozo
2017-02-01
In a previous magnetoencephalographic study, we showed both functional and structural reorganization of the right auditory cortex and impaired left auditory cortex function in people who stutter (PWS). In the present work, we reevaluated the same dataset to further investigate how the right and left auditory cortices interact to compensate for stuttering. We evaluated bilateral N100m latencies as well as indices of local and inter-hemispheric phase synchronization of the auditory cortices. The left N100m latency was significantly prolonged relative to the right N100m latency in PWS, while healthy control participants did not show any inter-hemispheric differences in latency. A phase-locking factor (PLF) analysis, which indicates the degree of local phase synchronization, demonstrated enhanced alpha-band synchrony in the right auditory area of PWS. A phase-locking value (PLV) analysis of inter-hemispheric synchronization demonstrated significant elevations in the beta band between the right and left auditory cortices in PWS. In addition, right PLF and PLVs were positively correlated with stuttering frequency in PWS. Taken together, our data suggest that increased right hemispheric local phase synchronization and increased inter-hemispheric phase synchronization are electrophysiological correlates of a compensatory mechanism for impaired left auditory processing in PWS.
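The phase-locking value (PLV) used in this study is a standard measure of inter-site phase synchronization across trials. As a minimal illustration (a sketch of the general measure, not the authors' analysis pipeline), it can be computed from band-passed signals by extracting instantaneous phase with the Hilbert transform and averaging unit phasors of the phase difference over trials:

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Inter-site phase synchronization across trials.

    x, y : arrays of shape (n_trials, n_samples) containing band-passed
    signals from two recording sites. Returns the PLV per time sample,
    a value in [0, 1]: 1 = identical phase difference on every trial,
    values near 0 = random relative phase across trials.
    """
    phi_x = np.angle(hilbert(x, axis=1))  # instantaneous phase, site 1
    phi_y = np.angle(hilbert(y, axis=1))  # instantaneous phase, site 2
    # Average unit phasors of the phase difference across trials.
    return np.abs(np.mean(np.exp(1j * (phi_x - phi_y)), axis=0))
```

A consistent phase lag between the two sites yields PLV near 1 even when the absolute phase varies from trial to trial, which is what distinguishes the PLV from simple amplitude correlation.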
Vicario, David S.
2017-01-01
Sensory and motor brain structures work in collaboration during perception. To evaluate their respective contributions, the present study recorded neural responses to auditory stimulation at multiple sites simultaneously in both the higher-order auditory area NCM and the premotor area HVC of the songbird brain in awake zebra finches (Taeniopygia guttata). Bird’s own song (BOS) and various conspecific songs (CON) were presented in both blocked and shuffled sequences. Neural responses showed plasticity in the form of stimulus-specific adaptation, with markedly different dynamics between the two structures. In NCM, the response decrease with repetition of each stimulus was gradual and long-lasting and did not differ between the stimuli or the stimulus presentation sequences. In contrast, HVC responses to CON stimuli decreased much more rapidly in the blocked than in the shuffled sequence. Furthermore, this decrease was more transient in HVC than in NCM, as shown by differential dynamics in the shuffled sequence. Responses to BOS in HVC decreased more gradually than to CON stimuli. The quality of neural representations, computed as the mutual information between stimuli and neural activity, was higher in NCM than in HVC. Conversely, internal functional correlations, estimated as the coherence between recording sites, were greater in HVC than in NCM. The cross-coherence between the two structures was weak and limited to low frequencies. These findings suggest that auditory communication signals are processed according to very different but complementary principles in NCM and HVC, a contrast that may inform study of the auditory and motor pathways for human speech processing. NEW & NOTEWORTHY Neural responses to auditory stimulation in sensory area NCM and premotor area HVC of the songbird forebrain show plasticity in the form of stimulus-specific adaptation with markedly different dynamics. 
These two structures also differ in stimulus representations and internal functional correlations. Accordingly, NCM seems to process the individually specific complex vocalizations of others based on prior familiarity, while HVC responses appear to be modulated by transitions and/or timing in the ongoing sequence of sounds. PMID:28031398
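The mutual information between stimuli and neural activity mentioned above has a simple plug-in estimate once responses are discretized. The following sketch is an illustration of that general estimator only (the authors' method may differ, e.g. in binning or bias correction):

```python
import numpy as np
from collections import Counter

def mutual_information_bits(stimuli, responses):
    """Plug-in estimate of I(S;R) in bits from paired discrete observations.

    stimuli, responses : equal-length sequences of hashable labels
    (e.g. stimulus IDs and binned spike counts).
    """
    n = len(stimuli)
    p_s = Counter(stimuli)                    # stimulus counts
    p_r = Counter(responses)                  # response counts
    p_sr = Counter(zip(stimuli, responses))   # joint counts
    # I(S;R) = sum over (s,r) of p(s,r) * log2( p(s,r) / (p(s) * p(r)) );
    # with counts c, p(s,r)/(p(s)p(r)) = c * n / (c_s * c_r).
    return sum((c / n) * np.log2(c * n / (p_s[s] * p_r[r]))
               for (s, r), c in p_sr.items())
```

With two equiprobable stimuli and perfectly discriminating responses the estimate is 1 bit; if the response carries no stimulus information it is 0. Note that the plug-in estimator is biased upward for small trial counts, which is why published analyses typically apply a bias correction.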
Chaves, Patrícia P; Valdoria, Ciara M C; Amorim, M Clara P; Vasconcelos, Raquel O
2017-09-01
Studies addressing structure-function relationships of the fish auditory system during development are sparse compared to other taxa. The Batrachoididae has become an important group to investigate mechanisms of auditory plasticity and evolution of auditory-vocal systems. A recent study reported ontogenetic improvements in the inner ear saccule sensitivity of the Lusitanian toadfish, Halobatrachus didactylus, but whether this results from changes in the sensory morphology remains unknown. We investigated how the macula and organization of auditory receptors in the saccule and utricle change during growth in this species. Inner ear sensory epithelia were removed from the end organs of previously PFA-fixed specimens, from non-vocal posthatch fry (<1.4 cm, standard length) to adults (>23 cm). Epithelia were phalloidin-stained and analysed for area, shape, number and orientation patterns of hair cells (HC), and number and size of saccular supporting cells (SC). Saccular macula area expanded 41x in total, and significantly more (relative to body length) among vocal juveniles (2.3-2.9 cm). Saccular HC number increased 25x but HC density decreased, suggesting that HC addition is slower relative to epithelial growth. While SC density decreased, SC apical area increased, contributing to the epithelial expansion. The utricle revealed increased HC density (striolar region) and less epithelial expansion (5x) with growth, contrasting with the saccule that may have a different developmental pattern due to its larger size and main auditory functions. Both macula shape and HC orientation patterns were already established in the posthatch fry and retained throughout growth in both end organs. We suggest that previously reported ontogenetic improvements in saccular sensitivity might be associated with changes in HC number (not density), size and/or molecular mechanisms controlling HC sensitivity. This is one of the first studies investigating the ontogenetic development of the saccule and utricle in a vocal fish and how it potentially relates to auditory enhancement for acoustic communication. Copyright © 2017 Elsevier B.V. All rights reserved.
The topography of frequency and time representation in primate auditory cortices
Baumann, Simon; Joly, Olivier; Rees, Adrian; Petkov, Christopher I; Sun, Li; Thiele, Alexander; Griffiths, Timothy D
2015-01-01
Natural sounds can be characterised by their spectral content and temporal modulation, but how the brain is organized to analyse these two critical sound dimensions remains uncertain. Using functional magnetic resonance imaging, we demonstrate a topographical representation of amplitude modulation rate in the auditory cortex of awake macaques. The representation of this temporal dimension is organized in approximately concentric bands of equal rates across the superior temporal plane in both hemispheres, progressing from high rates in the posterior core to low rates in the anterior core and lateral belt cortex. In A1 the resulting gradient of modulation rate runs approximately perpendicular to the axis of the tonotopic gradient, suggesting an orthogonal organisation of spectral and temporal sound dimensions. In auditory belt areas this relationship is more complex. The data suggest a continuous representation of modulation rate across several physiological areas, in contradistinction to a separate representation of frequency within each area. DOI: http://dx.doi.org/10.7554/eLife.03256.001 PMID:25590651
Łukaszewicz-Moszyńska, Zuzanna; Lachowska, Magdalena; Niemczyk, Kazimierz
2014-01-01
The purpose of this study was to evaluate possible relationships between duration of cochlear implant use and results of positron emission tomography (PET) measurements in the temporal lobes performed while subjects listened to speech stimuli. Other aspects investigated were whether implantation side impacts significantly on cortical representations of functions related to understanding speech (ipsi- or contralateral to the implanted side) and whether any correlation exists between cortical activation and speech therapy results. Objective cortical responses to acoustic stimulation were measured, using PET, in nine cochlear implant patients (age range: 15 to 50 years). All the patients suffered from bilateral deafness, were right-handed, and had no additional neurological deficits. They underwent PET imaging three times: immediately after the first fitting of the speech processor (activation of the cochlear implant), and one and two years later. A tendency towards increasing levels of activation in areas of the primary and secondary auditory cortex on the left side of the brain was observed. There was no clear effect of the side of implantation (left or right) on the degree of cortical activation in the temporal lobe. However, the PET results showed a correlation between degree of cortical activation and speech therapy results. PMID:25306122
Rogalsky, Corianne; Love, Tracy; Driscoll, David; Anderson, Steven W.; Hickok, Gregory
2013-01-01
The discovery of mirror neurons in macaque has led to a resurrection of motor theories of speech perception. Although the majority of lesion and functional imaging studies have associated perception with the temporal lobes, it has also been proposed that the ‘human mirror system’, which prominently includes Broca’s area, is the neurophysiological substrate of speech perception. Although numerous studies have demonstrated a tight link between sensory and motor speech processes, few have directly assessed the critical prediction of mirror neuron theories of speech perception, namely that damage to the human mirror system should cause severe deficits in speech perception. The present study measured speech perception abilities of patients with lesions involving motor regions in the left posterior frontal lobe and/or inferior parietal lobule (i.e., the proposed human ‘mirror system’). Performance was at or near ceiling in patients with fronto-parietal lesions. It is only when the lesion encroaches on auditory regions in the temporal lobe that perceptual deficits are evident. This suggests that ‘mirror system’ damage does not disrupt speech perception, but rather that auditory systems are the primary substrate for speech perception. PMID:21207313
Mapping a lateralization gradient within the ventral stream for auditory speech perception
Specht, Karsten
2013-01-01
Recent models on speech perception propose a dual-stream processing network, with a dorsal stream, extending from the posterior temporal lobe of the left hemisphere through inferior parietal areas into the left inferior frontal gyrus, and a ventral stream that is assumed to originate in the primary auditory cortex in the upper posterior part of the temporal lobe and to extend toward the anterior part of the temporal lobe, where it may connect to the ventral part of the inferior frontal gyrus. This article describes and reviews the results from a series of complementary functional magnetic resonance imaging studies that aimed to trace the hierarchical processing network for speech comprehension within the left and right hemisphere with a particular focus on the temporal lobe and the ventral stream. As hypothesized, the results demonstrate a bilateral involvement of the temporal lobes in the processing of speech signals. However, an increasing leftward asymmetry was detected from auditory–phonetic to lexico-semantic processing and along the posterior–anterior axis, thus forming a “lateralization” gradient. This increasing leftward lateralization was particularly evident for the left superior temporal sulcus and more anterior parts of the temporal lobe. PMID:24106470
Speech-Processing Fatigue in Children: Auditory Event-Related Potential and Behavioral Measures
ERIC Educational Resources Information Center
Key, Alexandra P.; Gustafson, Samantha J.; Rentmeester, Lindsey; Hornsby, Benjamin W. Y.; Bess, Fred H.
2017-01-01
Purpose: Fatigue related to speech processing is an understudied area that may have significant negative effects, especially in children who spend the majority of their school days listening to classroom instruction. Method: This study examined the feasibility of using auditory P300 responses and behavioral indices (lapses of attention and…
ERIC Educational Resources Information Center
Hill, P. R.; Hogben, J. H.; Bishop, D. M. V.
2005-01-01
It has been proposed that specific language impairment (SLI) is caused by an impairment of auditory processing, but it is unclear whether this problem affects temporal processing, frequency discrimination (FD), or both. Furthermore, there are few longitudinal studies in this area, making it hard to establish whether any deficit represents a…
Lebedeva, I S; Akhadov, T A; Petriaĭkin, A V; Kaleda, V G; Barkhatova, A N; Golubev, S A; Rumiantseva, E E; Vdovenko, A M; Fufaeva, E A; Semenova, N A
2011-01-01
Six patients in remission after a first episode of juvenile schizophrenia and seven sex- and age-matched mentally healthy subjects were examined by fMRI and ERP methods. The auditory oddball paradigm was applied. Differences in P300 parameters did not reach the level of significance; however, a significantly higher hemodynamic response to target stimuli was found in patients bilaterally in the supramarginal gyrus and in the right medial frontal gyrus, which points to pathology of these brain areas in the support of auditory selective attention.
Perceptual and academic patterns of learning-disabled/gifted students.
Waldron, K A; Saphire, D G
1992-04-01
This research explored ways gifted children with learning disabilities perceive and recall auditory and visual input and apply this information to reading, mathematics, and spelling. 24 learning-disabled/gifted children and a matched control group of normally achieving gifted students were tested for oral reading, word recognition and analysis, listening comprehension, and spelling. In mathematics, they were tested for numeration, mental and written computation, word problems, and numerical reasoning. To explore perception and memory skills, students were administered formal tests of visual and auditory memory as well as auditory discrimination of sounds. Their responses to reading and to mathematical computations were further considered for evidence of problems in visual discrimination, visual sequencing, and visual spatial areas. Analyses indicated that these learning-disabled/gifted students were significantly weaker than controls in their decoding skills, in spelling, and in most areas of mathematics. They were also significantly weaker in auditory discrimination and memory, and in visual discrimination, sequencing, and spatial abilities. Conclusions are that these underlying perceptual and memory deficits may be related to students' academic problems.
Saunders, Gabrielle H; Echt, Katharina V
2012-01-01
Combat exposures to blast can result in both peripheral damage to the ears and eyes and central damage to the auditory and visual processing areas in the brain. The functional effects of the latter include visual, auditory, and cognitive processing difficulties that manifest as deficits in attention, memory, and problem solving--symptoms similar to those seen in individuals with visual and auditory processing disorders. Coexisting damage to the auditory and visual system is referred to as dual sensory impairment (DSI). The number of Operation Iraqi Freedom/Operation Enduring Freedom Veterans with DSI is vast; yet currently no established models or guidelines exist for assessment, rehabilitation, or service-delivery practice. In this article, we review the current state of knowledge regarding blast exposure and DSI and outline the many unknowns in this area. Further, we propose a model for clinical assessment and rehabilitation of blast-related DSI that includes development of a coordinated team-based approach to target activity limitations and participation restrictions in order to enhance reintegration, recovery, and quality of life.
Processing of band-passed noise in the lateral auditory belt cortex of the rhesus monkey.
Rauschecker, Josef P; Tian, Biao
2004-06-01
Neurons in the lateral belt areas of rhesus monkey auditory cortex were stimulated with band-passed noise (BPN) bursts of different bandwidths and center frequencies. Most neurons responded much more vigorously to these sounds than to tone bursts of a single frequency, and it thus became possible to elicit a clear response in 85% of lateral belt neurons. Tuning to center frequency and bandwidth of the BPN bursts was analyzed. Best center frequency varied along the rostrocaudal direction, with 2 reversals defining borders between areas. We confirmed the existence of 2 belt areas (AL and ML) that were laterally adjacent to the core areas (R and A1, respectively) and a third area (CL) adjacent to area CM on the supratemporal plane (STP). All 3 lateral belt areas were cochleotopically organized with their frequency gradients collinear to those of the adjacent STP areas. Although A1 neurons responded best to pure tones and their responses decreased with increasing bandwidth, 63% of the lateral belt neurons were tuned to bandwidths between 1/3 and 2 octaves and showed either one or multiple peaks. The results are compared with previous data from visual cortex and are discussed in the context of spectral integration, whereby the lateral belt forms a relatively early stage of processing in the cortical hierarchy, giving rise to parallel streams for the identification of auditory objects and their localization in space.
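As an aside for readers unfamiliar with the stimulus class, a band-passed noise burst of a given center frequency and octave bandwidth can be sketched as follows. This is a minimal illustration of the stimulus concept only, not the authors' generation code; the sample rate, duration, and brick-wall spectral filtering are assumptions.

```python
import numpy as np

def bpn_burst(center_hz, bandwidth_oct, dur_s=0.1, fs=48000, seed=0):
    """Band-passed noise burst: white noise restricted to a band of
    `bandwidth_oct` octaves centered geometrically on `center_hz`."""
    rng = np.random.default_rng(seed)
    n = int(dur_s * fs)
    noise = rng.standard_normal(n)
    # Band edges at center_hz * 2^(+/- bandwidth/2)
    lo = center_hz * 2 ** (-bandwidth_oct / 2)
    hi = center_hz * 2 ** (bandwidth_oct / 2)
    # Zero all spectral components outside the band (brick-wall filter)
    spec = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0
    return np.fft.irfft(spec, n)

burst = bpn_burst(1000.0, 1.0)  # 1 kHz center, 1-octave bandwidth
```

Sweeping `center_hz` while varying `bandwidth_oct` between 1/3 and 2 octaves would reproduce the kind of stimulus grid the abstract describes.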
Deviance sensitivity in the auditory cortex of freely moving rats
2018-01-01
Deviance sensitivity is the specific response to a surprising stimulus, one that violates expectations set by the past stimulation stream. In audition, deviance sensitivity is often conflated with stimulus-specific adaptation (SSA), the decrease in responses to a common stimulus that only partially generalizes to other, rare stimuli. SSA is usually measured using oddball sequences, where a common (standard) tone and a rare (deviant) tone are randomly intermixed. However, the larger responses to a tone when deviant do not necessarily represent deviance sensitivity. Deviance sensitivity is commonly tested using a control sequence in which many different tones serve as the standard, eliminating the expectations set by the standard ('deviant among many standards'). When the response to a tone when deviant (against a single standard) is larger than the responses to the same tone in the control sequence, it is concluded that true deviance sensitivity occurs. In primary auditory cortex of anesthetized rats, responses to deviants and to the same tones in the control condition are comparable in size. We recorded local field potentials and multiunit activity from the auditory cortex of awake, freely moving rats, implanted with 32-channel drivable microelectrode arrays and using telemetry. We observed highly significant SSA in the awake state. Moreover, the responses to a tone when deviant were significantly larger than the responses to the same tone in the control condition. These results establish the presence of true deviance sensitivity in primary auditory cortex in awake rats. PMID:29874246
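The sequence logic of the oddball and 'deviant among many standards' designs described in this abstract can be sketched as follows. The tone frequencies, trial count, and deviant probability are illustrative assumptions, not the authors' protocol.

```python
import random

def oddball_sequence(standard, deviant, n_trials=400, p_deviant=0.05, seed=1):
    """Classic oddball: one common standard tone, one rare deviant tone."""
    rng = random.Random(seed)
    return [deviant if rng.random() < p_deviant else standard
            for _ in range(n_trials)]

def many_standards_control(tones, n_trials=400, seed=1):
    """Control sequence: each tone is equiprobable, so no expectation
    is set by a single repeated standard."""
    rng = random.Random(seed)
    return [rng.choice(tones) for _ in range(n_trials)]

# Example: 1 kHz standard with a 1.2 kHz deviant; the control draws
# uniformly from a set of 20 tones that includes the deviant frequency.
odd = oddball_sequence(1000.0, 1200.0)
ctrl = many_standards_control([1000.0 * 1.2 ** k for k in range(20)])
```

True deviance sensitivity is then inferred by comparing the response to the same tone when it is the rare deviant in `odd` versus one of many equiprobable tones in `ctrl`.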
Götz, Theresa; Hanke, David; Huonker, Ralph; Weiss, Thomas; Klingner, Carsten; Brodoehl, Stefan; Baumbach, Philipp; Witte, Otto W
2017-06-01
We often close our eyes to improve perception. Recent results have shown a decrease of perception thresholds accompanied by an increase in somatosensory activity after eye closure. However, does somatosensory spatial discrimination also benefit from eye closure? We previously showed that spatial discrimination is accompanied by a reduction of somatosensory activity. Using magnetoencephalography, we analyzed the magnitude of primary somatosensory (somatosensory P50m) and primary auditory activity (auditory P50m) during a one-back discrimination task in 21 healthy volunteers. In complete darkness, participants were requested to pay attention to either the somatosensory or auditory stimulation and asked to open or close their eyes every 6.5 min. Somatosensory P50m was reduced during a task requiring the distinguishing of stimulus location changes at the distal phalanges of different fingers. The somatosensory P50m was further reduced and detection performance was higher during eyes open. A similar reduction was found for the auditory P50m during a task requiring the distinguishing of changing tones. The function of eye closure is more than controlling visual input. It might be advantageous for perception because it is an effective way to reduce interference from other modalities, but disadvantageous for spatial discrimination because it requires at least one top-down processing stage. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
2016-01-01
Abstract Cortical mapping techniques using fMRI have been instrumental in identifying the boundaries of topological (neighbor‐preserving) maps in early sensory areas. The presence of topological maps beyond early sensory areas raises the possibility that they might play a significant role in other cognitive systems, and that topological mapping might help to delineate areas involved in higher cognitive processes. In this study, we combine surface‐based visual, auditory, and somatomotor mapping methods with a naturalistic reading comprehension task in the same group of subjects to provide a qualitative and quantitative assessment of the cortical overlap between sensory‐motor maps in all major sensory modalities, and reading processing regions. Our results suggest that cortical activation during naturalistic reading comprehension overlaps more extensively with topological sensory‐motor maps than has been heretofore appreciated. Reading activation in regions adjacent to occipital lobe and inferior parietal lobe almost completely overlaps visual maps, whereas a significant portion of frontal activation for reading in dorsolateral and ventral prefrontal cortex overlaps both visual and auditory maps. Even classical language regions in superior temporal cortex are partially overlapped by topological visual and auditory maps. By contrast, the main overlap with somatomotor maps is restricted to a small region on the anterior bank of the central sulcus near the border between the face and hand representations of M‐I. Hum Brain Mapp 37:2784–2810, 2016. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc. PMID:27061771
Butler, Blake E; Chabot, Nicole; Kral, Andrej; Lomber, Stephen G
2017-01-01
Crossmodal plasticity takes place following sensory loss, such that areas that normally process the missing modality are reorganized to provide compensatory function in the remaining sensory systems. For example, congenitally deaf cats outperform normal hearing animals on localization of visual stimuli presented in the periphery, and this advantage has been shown to be mediated by the posterior auditory field (PAF). In order to determine the nature of the anatomical differences that underlie this phenomenon, we injected a retrograde tracer into PAF of congenitally deaf animals and quantified the thalamic and cortical projections to this field. The pattern of projections from areas throughout the brain was determined to be qualitatively similar to that previously demonstrated in normal hearing animals, but with twice as many projections arising from non-auditory cortical areas. In addition, small ectopic projections were observed from a number of fields in visual cortex, including areas 19, 20a, 20b, and 21b, and area 7 of parietal cortex. These areas did not show projections to PAF in cats deafened ototoxically near the onset of hearing, and provide a possible mechanism for crossmodal reorganization of PAF. These, along with the possible contributions of other mechanisms, are considered. Copyright © 2016 Elsevier B.V. All rights reserved.
Blom, Jan Dirk
2015-01-01
Auditory hallucinations constitute a phenomenologically rich group of endogenously mediated percepts which are associated with psychiatric, neurologic, otologic, and other medical conditions, but which are also experienced by 10-15% of all healthy individuals in the general population. The group of phenomena is probably best known for its verbal auditory subtype, but it also includes musical hallucinations, echo of reading, exploding-head syndrome, and many other types. The subgroup of verbal auditory hallucinations has been studied extensively with the aid of neuroimaging techniques, and from those studies emerges an outline of a functional as well as a structural network of widely distributed brain areas involved in their mediation. The present chapter provides an overview of the various types of auditory hallucination described in the literature, summarizes our current knowledge of the auditory networks involved in their mediation, and draws on ideas from the philosophy of science and network science to reconceptualize the auditory hallucinatory experience, and point out directions for future research into its neurobiologic substrates. In addition, it provides an overview of known associations with various clinical conditions and of the existing evidence for pharmacologic and non-pharmacologic treatments. © 2015 Elsevier B.V. All rights reserved.
Headphone and Head-Mounted Visual Displays for Virtual Environments
NASA Technical Reports Server (NTRS)
Begault, Duran R.; Ellis, Stephen R.; Wenzel, Elizabeth M.; Trejo, Leonard J. (Technical Monitor)
1998-01-01
A realistic auditory environment can contribute to both the overall subjective sense of presence in a virtual display, and to a quantitative metric predicting human performance. Here, the role of audio in a virtual display and the importance of auditory-visual interaction are examined. Conjectures are proposed regarding the effectiveness of audio compared to visual information for creating a sensation of immersion, the frame of reference within a virtual display, and the compensation of visual fidelity by supplying auditory information. Future areas of research are outlined for improving simulations of virtual visual and acoustic spaces. This paper will describe some of the intersensory phenomena that arise during operator interaction within combined visual and auditory virtual environments. Conjectures regarding audio-visual interaction will be proposed.
Attention and Working Memory in Adolescents with Autism Spectrum Disorder: A Functional MRI Study.
Rahko, Jukka S; Vuontela, Virve A; Carlson, Synnöve; Nikkinen, Juha; Hurtig, Tuula M; Kuusikko-Gauffin, Sanna; Mattila, Marja-Leena; Jussila, Katja K; Remes, Jukka J; Jansson-Verkasalo, Eira M; Aronen, Eeva T; Pauls, David L; Ebeling, Hanna E; Tervonen, Osmo; Moilanen, Irma K; Kiviniemi, Vesa J
2016-06-01
The present study examined attention and memory load-dependent differences in the brain activation and deactivation patterns between adolescents with autism spectrum disorders (ASDs) and typically developing (TD) controls using functional magnetic resonance imaging. Attentional (0-back) and working memory (WM; 2-back) processing and load differences (0 vs. 2-back) were analysed. WM-related areas activated and default mode network deactivated normally in ASDs as a function of task load. ASDs performed the attentional 0-back task similarly to TD controls but showed increased deactivation in cerebellum and right temporal cortical areas and weaker activation in other cerebellar areas. Increasing task load resulted in multiple responses in ASDs compared to TD and in inadequate modulation of brain activity in right insula, primary somatosensory, motor and auditory cortices. The changes during attentional task may reflect compensatory mechanisms enabling normal behavioral performance. The inadequate memory load-dependent modulation of activity suggests diminished compensatory potential in ASD.
Identification of a pathway for intelligible speech in the left temporal lobe
Scott, Sophie K.; Blank, C. Catrin; Rosen, Stuart; Wise, Richard J. S.
2017-01-01
Summary It has been proposed that the identification of sounds, including species-specific vocalizations, by primates depends on anterior projections from the primary auditory cortex, an auditory pathway analogous to the ventral route proposed for the visual identification of objects. We have identified a similar route in the human for understanding intelligible speech. Using PET imaging to identify separable neural subsystems within the human auditory cortex, we used a variety of speech and speech-like stimuli with equivalent acoustic complexity but varying intelligibility. We have demonstrated that the left superior temporal sulcus responds to the presence of phonetic information, but its anterior part only responds if the stimulus is also intelligible. This novel observation demonstrates a left anterior temporal pathway for speech comprehension. PMID:11099443
Evaluating the Precision of Auditory Sensory Memory as an Index of Intrusion in Tinnitus.
Barrett, Doug J K; Pilling, Michael
The purpose of this study was to investigate the potential of measures of auditory short-term memory (ASTM) to provide a clinical measure of intrusion in tinnitus. Response functions for six normal listeners on a delayed pitch discrimination task were contrasted in three conditions designed to manipulate attention in the presence and absence of simulated tinnitus: (1) no-tinnitus, (2) ignore-tinnitus, and (3) attend-tinnitus. Delayed pitch discrimination functions were more variable in the presence of simulated tinnitus when listeners were asked to divide attention between the primary task and the amplitude of the tinnitus tone. Changes in the variability of auditory short-term memory may provide a novel means of quantifying the level of intrusion associated with the tinnitus percept during listening.
Changes in Properties of Auditory Nerve Synapses following Conductive Hearing Loss.
Zhuang, Xiaowen; Sun, Wei; Xu-Friedman, Matthew A
2017-01-11
Auditory activity plays an important role in the development of the auditory system. Decreased activity can result from conductive hearing loss (CHL) associated with otitis media, which may lead to long-term perceptual deficits. The effects of CHL have been mainly studied at later stages of the auditory pathway, but early stages remain less examined. However, changes in early stages could be important because they would affect how information about sounds is conveyed to higher-order areas for further processing and localization. We examined the effects of CHL at auditory nerve synapses onto bushy cells in the mouse anteroventral cochlear nucleus following occlusion of the ear canal. These synapses, called endbulbs of Held, normally show strong depression in voltage-clamp recordings in brain slices. After 1 week of CHL, endbulbs showed even greater depression, reflecting higher release probability. We observed no differences in quantal size between control and occluded mice. We confirmed these observations using mean-variance analysis and the integration method, which also revealed that the number of release sites decreased after occlusion. Consistent with this, synaptic puncta immunopositive for VGLUT1 decreased in area after occlusion. The level of depression and number of release sites both showed recovery after returning to normal conditions. Finally, bushy cells fired fewer action potentials in response to evoked synaptic activity after occlusion, likely because of increased depression and decreased input resistance. These effects appear to reflect a homeostatic, adaptive response of auditory nerve synapses to reduced activity. These effects may have important implications for perceptual changes following CHL. Normal hearing is important to everyday life, but abnormal auditory experience during development can lead to processing disorders. For example, otitis media reduces sound to the ear, which can cause long-lasting deficits in language skills and verbal production, but the location of the problem is unknown. Here, we show that occluding the ear causes synapses at the very first stage of the auditory pathway to modify their properties, by decreasing in size and increasing the likelihood of releasing neurotransmitter. This causes synapses to deplete faster, which reduces fidelity at central targets of the auditory nerve, which could affect perception. Temporary hearing loss could cause similar changes at later stages of the auditory pathway, which could contribute to disorders in behavior. Copyright © 2017 the authors 0270-6474/17/370323-10$15.00/0.
Nilakantan, Aneesha S; Voss, Joel L; Weintraub, Sandra; Mesulam, M-Marsel; Rogalski, Emily J
2017-06-01
Primary progressive aphasia (PPA) is clinically defined by an initial loss of language function and preservation of other cognitive abilities, including episodic memory. While PPA primarily affects the left-lateralized perisylvian language network, some clinical neuropsychological tests suggest concurrent initial memory loss. The goal of this study was to test recognition memory of objects and words in the visual and auditory modality to separate language-processing impairments from retentive memory in PPA. Individuals with non-semantic PPA had longer reaction times and higher false alarms for auditory word stimuli compared to visual object stimuli. Moreover, false alarms for auditory word recognition memory were related to cortical thickness within the left inferior frontal gyrus and left temporal pole, while false alarms for visual object recognition memory was related to cortical thickness within the right-temporal pole. This pattern of results suggests that specific vulnerability in processing verbal stimuli can hinder episodic memory in PPA, and provides evidence for differential contributions of the left and right temporal poles in word and object recognition memory. Copyright © 2017 Elsevier Ltd. All rights reserved.
The harmonic organization of auditory cortex
Wang, Xiaoqin
2013-01-01
A fundamental structure of sounds encountered in the natural environment is harmonicity. Harmonicity is an essential component of music found in all cultures. It is also a unique feature of vocal communication sounds such as human speech and animal vocalizations. Harmonics in sounds are produced by a variety of acoustic generators and reflectors in the natural environment, including vocal apparatuses of humans and animal species as well as musical instruments of many types. We live in an acoustic world full of harmonicity. Given the widespread existence of harmonicity in many aspects of the hearing environment, it is natural to expect it to be reflected in the evolution and development of the auditory systems of both humans and animals, in particular the auditory cortex. Recent neuroimaging and neurophysiology experiments have identified regions of non-primary auditory cortex in humans and non-human primates that have selective responses to harmonic pitches. Accumulating evidence has also shown that neurons in many regions of the auditory cortex exhibit characteristic responses to harmonically related frequencies beyond the range of pitch. Together, these findings suggest that a fundamental organizational principle of auditory cortex is based on harmonicity. Such an organization likely plays an important role in music processing by the brain. It may also form the basis of the preference for particular classes of music and voice sounds. PMID:24381544
Multimodal lexical processing in auditory cortex is literacy skill dependent.
McNorgan, Chris; Awati, Neha; Desroches, Amy S; Booth, James R
2014-09-01
Literacy is a uniquely human cross-modal cognitive process wherein visual orthographic representations become associated with auditory phonological representations through experience. Developmental studies provide insight into how experience-dependent changes in brain organization influence phonological processing as a function of literacy. Previous investigations show a synchrony-dependent influence of letter presentation on individual phoneme processing in superior temporal sulcus; others demonstrate recruitment of primary and associative auditory cortex during cross-modal processing. We sought to determine whether brain regions supporting phonological processing of larger lexical units (monosyllabic words) over larger time windows is sensitive to cross-modal information, and whether such effects are literacy dependent. Twenty-two children (age 8-14 years) made rhyming judgments for sequentially presented word and pseudoword pairs presented either unimodally (auditory- or visual-only) or cross-modally (audiovisual). Regression analyses examined the relationship between literacy and congruency effects (overlapping orthography and phonology vs. overlapping phonology-only). We extend previous findings by showing that higher literacy is correlated with greater congruency effects in auditory cortex (i.e., planum temporale) only for cross-modal processing. These skill effects were specific to known words and occurred over a large time window, suggesting that multimodal integration in posterior auditory cortex is critical for fluent reading. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Brain bases for auditory stimulus-driven figure-ground segregation.
Teki, Sundeep; Chait, Maria; Kumar, Sukhbinder; von Kriegstein, Katharina; Griffiths, Timothy D
2011-01-05
Auditory figure-ground segregation, listeners' ability to selectively hear out a sound of interest from a background of competing sounds, is a fundamental aspect of scene analysis. In contrast to the disordered acoustic environment we experience during everyday listening, most studies of auditory segregation have used relatively simple, temporally regular signals. We developed a new figure-ground stimulus that incorporates stochastic variation of the figure and background that captures the rich spectrotemporal complexity of natural acoustic scenes. Figure and background signals overlap in spectrotemporal space, but vary in the statistics of fluctuation, such that the only way to extract the figure is by integrating the patterns over time and frequency. Our behavioral results demonstrate that human listeners are remarkably sensitive to the appearance of such figures. In a functional magnetic resonance imaging experiment, aimed at investigating preattentive, stimulus-driven, auditory segregation mechanisms, naive subjects listened to these stimuli while performing an irrelevant task. Results demonstrate significant activations in the intraparietal sulcus (IPS) and the superior temporal sulcus related to bottom-up, stimulus-driven figure-ground decomposition. We did not observe any significant activation in the primary auditory cortex. Our results support a role for automatic, bottom-up mechanisms in the IPS in mediating stimulus-driven, auditory figure-ground segregation, which is consistent with accumulating evidence implicating the IPS in structuring sensory input and perceptual organization.
Seither-Preisler, Annemarie; Parncutt, Richard; Schneider, Peter
2014-08-13
Playing a musical instrument is associated with numerous neural processes that continuously modify the human brain and may facilitate characteristic auditory skills. In a longitudinal study, we investigated the auditory and neural plasticity of musical learning in 111 young children (aged 7-9 y) as a function of the intensity of instrumental practice and musical aptitude. Because of the frequent co-occurrence of central auditory processing disorders and attentional deficits, we also tested 21 children with attention deficit (hyperactivity) disorder [AD(H)D]. Magnetic resonance imaging and magnetoencephalography revealed enlarged Heschl's gyri and enhanced right-left hemispheric synchronization of the primary evoked response (P1) to harmonic complex sounds in children who spent more time practicing a musical instrument. The anatomical characteristics were positively correlated with frequency discrimination, reading, and spelling skills. Conversely, AD(H)D children showed reduced volumes of Heschl's gyri and enhanced volumes of the plana temporalia that were associated with a distinct bilateral P1 asynchrony. This may indicate a risk for central auditory processing disorders that are often associated with attentional and literacy problems. The longitudinal comparisons revealed a very high stability of auditory cortex morphology and gray matter volumes, suggesting that the combined anatomical and functional parameters are neural markers of musicality and attention deficits. Educational and clinical implications are considered.
Differential Effects of Noise and Music Signals on the Behavior of Children
NASA Astrophysics Data System (ADS)
ANDO, Y.
2001-03-01
A theory based on the model of how the auditory-brain system perceives primary sensations is used to explain the differential effects of noise and music signals on the sleep of babies and on the performance of mental tasks by children. In a previous study by Ando and Hattori [1], it was found that sleeping babies (2-4 months old) whose mothers had begun living in a noisy area before conception or during the first five months of pregnancy did not react to daily aircraft noise but did react to music. In another previous study by Ando et al. [2], the percentage of the pupils in a "V-type relaxation" state during an adding task in a quiet living area was much greater when pupils heard music than when they heard noise. These phenomena are explained here by the difference between the temporal factors extracted from the running autocorrelation function of the noise and music signals.
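The "running autocorrelation function" invoked here is the frame-by-frame normalized autocorrelation of a signal, whose dominant non-zero lag tracks periodicity over time. The following is a minimal illustrative sketch of that computation, not the analysis pipeline used by Ando and colleagues; the function name and parameters are assumptions for illustration.

```python
import numpy as np

def running_acf(signal, frame_len, hop, max_lag):
    """Normalized autocorrelation of successive frames of a 1-D signal.

    Returns an array of shape (n_frames, max_lag + 1); row i holds the
    autocorrelation of frame i at lags 0..max_lag, scaled so lag 0 is 1.
    """
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        frame = frame - frame.mean()
        # full cross-correlation of the frame with itself; keep lags >= 0
        full = np.correlate(frame, frame, mode="full")[frame_len - 1:]
        frames.append(full[:max_lag + 1] / full[0])
    return np.array(frames)

# Example: a 440 Hz tone sampled at 8 kHz has an ACF peak near the
# pitch period (8000 / 440 ~ 18 samples) in every frame.
fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)
acf = running_acf(tone, frame_len=400, hop=200, max_lag=40)
peak_lag = 1 + np.argmax(acf[0, 1:])  # skip the trivial lag-0 peak
```

A noise signal run through the same function would show no stable peak across frames, which is the kind of temporal-factor difference the abstract appeals to.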
Mismatch Negativity with Visual-only and Audiovisual Speech
Ponton, Curtis W.; Bernstein, Lynne E.; Auer, Edward T.
2009-01-01
The functional organization of cortical speech processing is thought to be hierarchical, increasing in complexity and proceeding from primary sensory areas centrifugally. The current study used the mismatch negativity (MMN) obtained with electrophysiology (EEG) to investigate the early latency period of visual speech processing under both visual-only (VO) and audiovisual (AV) conditions. Current density reconstruction (CDR) methods were used to model the cortical MMN generator locations. MMNs were obtained with VO and AV speech stimuli at early latencies (approximately 82-87 ms peak in time waveforms relative to the acoustic onset) and in regions of the right lateral temporal and parietal cortices. Latencies were consistent with bottom-up processing of the visible stimuli. We suggest that a visual pathway extracts phonetic cues from visible speech, and that previously reported effects of AV speech in classical early auditory areas, given later reported latencies, could be attributable to modulatory feedback from visual phonetic processing. PMID:19404730
Intersubject synchronization of cortical activity during natural vision.
Hasson, Uri; Nir, Yuval; Levy, Ifat; Fuhrmann, Galit; Malach, Rafael
2004-03-12
To what extent do all brains work alike during natural conditions? We explored this question by letting five subjects freely view half an hour of a popular movie while undergoing functional brain imaging. Applying an unbiased analysis in which spatiotemporal activity patterns in one brain were used to "model" activity in another brain, we found a striking level of voxel-by-voxel synchronization between individuals, not only in primary and secondary visual and auditory areas but also in association cortices. The results reveal a surprising tendency of individual brains to "tick collectively" during natural vision. The intersubject synchronization consisted of a widespread cortical activation pattern correlated with emotionally arousing scenes and regionally selective components. The characteristics of these activations were revealed with the use of an open-ended "reverse-correlation" approach, which inverts the conventional analysis by letting the brain signals themselves "pick up" the optimal stimuli for each specialized cortical area.
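The voxel-by-voxel synchronization described above reduces, at its core, to correlating the time course of each voxel in one brain with the time course of the corresponding voxel in another. A minimal sketch of that intersubject correlation on synthetic data follows; it is not the authors' "reverse-correlation" analysis, and the array shapes and names are illustrative assumptions.

```python
import numpy as np

def intersubject_correlation(data_a, data_b):
    """Voxel-by-voxel Pearson correlation between two subjects.

    data_a, data_b: arrays of shape (n_voxels, n_timepoints) holding the
    same voxels' time courses in two brains viewing the same movie.
    Returns one correlation coefficient per voxel.
    """
    a = data_a - data_a.mean(axis=1, keepdims=True)
    b = data_b - data_b.mean(axis=1, keepdims=True)
    num = (a * b).sum(axis=1)
    den = np.sqrt((a ** 2).sum(axis=1) * (b ** 2).sum(axis=1))
    return num / den

# Toy demonstration: a shared stimulus-driven component plus
# subject-specific noise yields positive ISC in every voxel.
rng = np.random.default_rng(0)
shared = rng.standard_normal((100, 300))           # 100 voxels, 300 TRs
subj1 = shared + 0.5 * rng.standard_normal((100, 300))
subj2 = shared + 0.5 * rng.standard_normal((100, 300))
isc = intersubject_correlation(subj1, subj2)
```

Voxels driven mostly by idiosyncratic activity would show correlations near zero under the same computation, which is how stimulus-locked regions stand out.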
Seshagiri, Chandran V.; Delgutte, Bertrand
2007-01-01
The complex anatomical structure of the central nucleus of the inferior colliculus (ICC), the principal auditory nucleus in the midbrain, may provide the basis for functional organization of auditory information. To investigate this organization, we used tetrodes to record from neighboring neurons in the ICC of anesthetized cats and studied the similarity and difference among the responses of these neurons to pure-tone stimuli using widely used physiological characterizations. Consistent with the tonotopic arrangement of neurons in the ICC and reports of a threshold map, we found a high degree of correlation in the best frequencies (BFs) of neighboring neurons, which were mostly <3 kHz in our sample, and the pure-tone thresholds among neighboring neurons. However, width of frequency tuning, shapes of the frequency response areas, and temporal discharge patterns showed little or no correlation among neighboring neurons. Because the BF and threshold are measured at levels near the threshold and the characteristic frequency (CF), neighboring neurons may receive similar primary inputs tuned to their CF; however, at higher levels, additional inputs from other frequency channels may be recruited, introducing greater variability in the responses. There was also no correlation among neighboring neurons' sensitivity to interaural time differences (ITD) measured with binaural beats. However, the characteristic phases (CPs) of neighboring neurons revealed a significant correlation. Because the CP is related to the neural mechanisms generating the ITD sensitivity, this result is consistent with segregation of inputs to the ICC from the lateral and medial superior olives. PMID:17671101
Szymanski, Francois D; Rabinowitz, Neil C; Magri, Cesare; Panzeri, Stefano; Schnupp, Jan W H
2011-11-02
Recent studies have shown that the phase of low-frequency local field potentials (LFPs) in sensory cortices carries a significant amount of information about complex naturalistic stimuli, yet the laminar circuit mechanisms and the aspects of stimulus dynamics responsible for generating this phase information remain essentially unknown. Here we investigated these issues by means of an information theoretic analysis of LFPs and current source densities (CSDs) recorded with laminar multi-electrode arrays in the primary auditory area of anesthetized rats during complex acoustic stimulation (music and broadband 1/f stimuli). We found that most LFP phase information originated from discrete "CSD events" consisting of granular-superficial layer dipoles of short duration and large amplitude, which we hypothesize to be triggered by transient thalamocortical activation. These CSD events occurred at rates of 2-4 Hz during both stimulation with complex sounds and silence. During stimulation with complex sounds, these events reliably reset the LFP phases at specific times during the stimulation history. These facts suggest that the informativeness of LFP phase in rat auditory cortex is the result of transient, large-amplitude events, of the "evoked" or "driving" type, reflecting strong depolarization in thalamo-recipient layers of cortex. Finally, the CSD events were characterized by a small number of discrete types of infragranular activation. The extent to which infragranular regions were activated was stimulus dependent. These patterns of infragranular activations may reflect a categorical evaluation of stimulus episodes by the local circuit to determine whether to pass on stimulus information through the output layers.
Keitel, Anne; Gross, Joachim
2016-06-01
The human brain can be parcellated into diverse anatomical areas. We investigated whether rhythmic brain activity in these areas is characteristic and can be used for automatic classification. To this end, resting-state MEG data of 22 healthy adults was analysed. Power spectra of 1-s long data segments for atlas-defined brain areas were clustered into spectral profiles ("fingerprints"), using k-means and Gaussian mixture (GM) modelling. We demonstrate that individual areas can be identified from these spectral profiles with high accuracy. Our results suggest that each brain area engages in different spectral modes that are characteristic for individual areas. Clustering of brain areas according to similarity of spectral profiles reveals well-known brain networks. Furthermore, we demonstrate task-specific modulations of auditory spectral profiles during auditory processing. These findings have important implications for the classification of regional spectral activity and allow for novel approaches in neuroimaging and neurostimulation in health and disease.
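The clustering of 1-s power spectra into area-specific "fingerprints" described above can be illustrated with k-means on simulated spectra. This is a toy sketch under assumed parameters (Gaussian spectral bumps, two simulated areas), not the authors' k-means/Gaussian-mixture pipeline on atlas-defined MEG data.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
freqs = np.arange(1, 41)  # 1-40 Hz bins

def segments(peak_hz, n):
    """Simulate n 1-s power spectra with a Gaussian bump at peak_hz."""
    bump = np.exp(-0.5 * ((freqs - peak_hz) / 2.0) ** 2)
    return bump + 0.05 * rng.random((n, freqs.size))

# Two simulated "areas": one alpha-dominated (10 Hz), one beta-dominated (20 Hz)
spectra = np.vstack([segments(10, 50), segments(20, 50)])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(spectra)

# Segments simulated from the same area should share one cluster label,
# i.e. each area has a characteristic spectral profile
same_area_consistency = (labels[:50] == labels[0]).mean()
```

With well-separated spectral modes the cluster labels recover the area identity of every segment; the paper's contribution is showing that real cortical areas are separable in this way.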
Age-equivalent top-down modulation during cross-modal selective attention.
Guerreiro, Maria J S; Anguera, Joaquin A; Mishra, Jyoti; Van Gerven, Pascal W M; Gazzaley, Adam
2014-12-01
Selective attention involves top-down modulation of sensory cortical areas, such that responses to relevant information are enhanced whereas responses to irrelevant information are suppressed. Suppression of irrelevant information, unlike enhancement of relevant information, has been shown to be deficient in aging. Although these attentional mechanisms have been well characterized within the visual modality, little is known about these mechanisms when attention is selectively allocated across sensory modalities. The present EEG study addressed this issue by testing younger and older participants in three different tasks: Participants attended to the visual modality and ignored the auditory modality, attended to the auditory modality and ignored the visual modality, or passively perceived information presented through either modality. We found overall modulation of visual and auditory processing during cross-modal selective attention in both age groups. Top-down modulation of visual processing was observed as a trend toward enhancement of visual information in the setting of auditory distraction, but no significant suppression of visual distraction when auditory information was relevant. Top-down modulation of auditory processing, on the other hand, was observed as suppression of auditory distraction when visual stimuli were relevant, but no significant enhancement of auditory information in the setting of visual distraction. In addition, greater visual enhancement was associated with better recognition of relevant visual information, and greater auditory distractor suppression was associated with a better ability to ignore auditory distraction. There were no age differences in these effects, suggesting that when relevant and irrelevant information are presented through different sensory modalities, selective attention remains intact in older age.
Brenowitz, Eliot A; Lent, Karin; Rubel, Edwin W
2007-06-20
An important area of research in neuroscience is understanding what properties of brain structure and function are stimulated by sensory experience and behavioral performance. We tested the roles of experience and behavior in seasonal plasticity of the neural circuits that regulate learned song behavior in adult songbirds. Neurons in these circuits receive auditory input and show selective auditory responses to conspecific song. We asked whether auditory input or song production contribute to seasonal growth of telencephalic song nuclei. Adult male Gambel's white-crowned sparrows were surgically deafened, which eliminates auditory input and greatly reduces song production. These birds were then exposed to photoperiod and hormonal conditions that regulate the growth of song nuclei. We measured the volumes of the nuclei HVC, robust nucleus of arcopallium (RA), and area X at 7 and 30 d after exposure to long days plus testosterone in deafened and normally hearing birds. We also assessed song production and examined protein kinase C (PKC) expression because previous research reported that immunostaining for PKC increases transiently after deafening. Deafening did not delay or block the growth of the song nuclei to their full breeding-condition size. PKC activity in RA was not altered by deafening in the sparrows. Song continued to be well structured for up to 10 months after deafening, but song production decreased almost eightfold. These results suggest that neither auditory input nor high rates of song production are necessary for seasonal growth of the adult song control system in this species.
Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.
Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo
2013-02-16
We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common for all three tasks within each modality or interaction of processing task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of processing irrelevant speech presumably distracting the phonological task involving the letters.
ERIC Educational Resources Information Center
Ebert, Kerry Danahy
2014-01-01
Background: Sentence repetition performance is attracting increasing interest as a valuable clinical marker for primary (or specific) language impairment (LI) in both monolingual and bilingual populations. Multiple aspects of memory appear to contribute to sentence repetition performance, but non-verbal memory has not yet been considered. Aims: To…
Spatiotemporal differentiation in auditory and motor regions during auditory phoneme discrimination.
Aerts, Annelies; Strobbe, Gregor; van Mierlo, Pieter; Hartsuiker, Robert J; Corthals, Paul; Santens, Patrick; De Letter, Miet
2017-06-01
Auditory phoneme discrimination (APD) is supported by both auditory and motor regions through a sensorimotor interface embedded in a fronto-temporo-parietal cortical network. However, the specific spatiotemporal organization of this network during APD with respect to different types of phonemic contrasts is still unclear. Here, we use source reconstruction, applied to event-related potentials in a group of 47 participants, to uncover a potential spatiotemporal differentiation in these brain regions during a passive and active APD task with respect to place of articulation (PoA), voicing and manner of articulation (MoA). Results demonstrate that in an early stage (50-110 ms), auditory, motor and sensorimotor regions elicit more activation during the passive and active APD task with MoA and the active APD task with voicing compared to PoA. In a later stage (130-175 ms), the same auditory and motor regions elicit more activation during the APD task with PoA compared to MoA and voicing, yet only in the active condition, implying important timing differences. Degree of attention influences a frontal network during the APD task with PoA, whereas auditory regions are more affected during the APD task with MoA and voicing. Based on these findings, it can be tentatively suggested that APD is supported by the integration of early activation of auditory-acoustic properties in superior temporal regions, more pronounced for MoA and voicing, and later auditory-to-motor integration in sensorimotor areas, more pronounced for PoA.
Stuart Gatehouse: The International Perspective
Van Tasell, Dianne J.; Levitt, Harry
2008-01-01
The international contributions of Stuart Gatehouse are reviewed in three areas: as a scientist, as an advisor to health policy makers, and as a participant in international conferences. He was able, as no other auditory scientist of his time, to bridge the gap between scientific and clinical research. His ability to apply sound scientific principles to issues of clinical importance was most apparent in his work in three main areas of his research: acclimatization to amplified speech, auditory disability and hearing aid benefit, and candidature for linear and nonlinear signal processing. PMID:18567589
Emmert, Kirsten; Kopel, Rotem; Koush, Yury; Maire, Raphael; Senn, Pascal; Van De Ville, Dimitri; Haller, Sven
2017-01-01
The emerging technique of real-time fMRI neurofeedback trains individuals to regulate their own brain activity via feedback from an fMRI measure of neural activity. Optimum feedback presentation has yet to be determined, particularly when working with clinical populations. To this end, we compared continuous against intermittent feedback in subjects with tinnitus. Fourteen participants with tinnitus completed the whole experiment consisting of nine runs (3 runs × 3 days). Prior to the neurofeedback, the target region was localized within the auditory cortex using auditory stimulation (1 kHz tone pulsating at 6 Hz) in an ON-OFF block design. During neurofeedback runs, participants received either continuous (n = 7, age 46.84 ± 12.01, Tinnitus Functional Index (TFI) 49.43 ± 15.70) or intermittent feedback (only after the regulation block) (n = 7, age 47.42 ± 12.39, TFI 49.82 ± 20.28). Participants were asked to decrease auditory cortex activity that was presented to them by a moving bar. In the first and the last session, participants also underwent arterial spin labeling (ASL) and resting-state fMRI imaging. We assessed tinnitus severity using the TFI questionnaire before all sessions, directly after all sessions and six weeks after all sessions. We then compared neuroimaging results from neurofeedback using a general linear model (GLM) and region-of-interest analysis as well as behavior measures employing a repeated-measures ANOVA. In addition, we looked at the seed-based connectivity of the auditory cortex using resting-state data and the cerebral blood flow using ASL data. GLM group analysis revealed that a considerable part of the target region within the auditory cortex was significantly deactivated during neurofeedback. When comparing continuous and intermittent feedback groups, the continuous group showed a stronger deactivation of parts of the target region, specifically the secondary auditory cortex. 
This result was confirmed in the region-of-interest analysis that showed a significant down-regulation effect for the continuous but not the intermittent group. Additionally, continuous feedback led to a slightly stronger effect over time while intermittent feedback showed best results in the first session. Behaviorally, there was no significant effect on the total TFI score, though on a descriptive level TFI scores tended to decrease after all sessions and in the six weeks follow up in the continuous group. Seed-based connectivity with a fixed-effects analysis revealed that functional connectivity increased over sessions in the posterior cingulate cortex, premotor area and part of the insula when looking at all patients while cerebral blood flow did not change significantly over time. Overall, these results show that continuous feedback is suitable for long-term neurofeedback experiments while intermittent feedback presentation promises good results for single session experiments when using the auditory cortex as a target region. In particular, the down-regulation effect is more pronounced in the secondary auditory cortex, which might be more susceptible to voluntary modulation in comparison to a primary sensory region.
Retrosplenial Cortex Is Required for the Retrieval of Remote Memory for Auditory Cues
ERIC Educational Resources Information Center
Todd, Travis P.; Mehlman, Max L.; Keene, Christopher S.; DeAngeli, Nicole E.; Bucci, David J.
2016-01-01
The retrosplenial cortex (RSC) has a well-established role in contextual and spatial learning and memory, consistent with its known connectivity with visuo-spatial association areas. In contrast, RSC appears to have little involvement with delay fear conditioning to an auditory cue. However, all previous studies have examined the contribution of…
Song Decrystallization in Adult Zebra Finches Does Not Require the Song Nucleus NIf
Roy, Arani; Mooney, Richard
2009-01-01
In adult male zebra finches, transecting the vocal nerve causes previously stable (i.e., crystallized) song to slowly degrade, presumably because of the resulting distortion in auditory feedback. How and where distorted feedback interacts with song motor networks to induce this process of song decrystallization remains unknown. The song premotor nucleus HVC is a potential site where auditory feedback signals could interact with song motor commands. Although the forebrain nucleus interface of the nidopallium (NIf) appears to be the primary auditory input to HVC, NIf lesions made in adult zebra finches do not trigger song decrystallization. One possibility is that NIf lesions do not interfere with song maintenance, but do compromise the adult zebra finch's ability to express renewed vocal plasticity in response to feedback perturbations. To test this idea, we bilaterally lesioned NIf and then transected the vocal nerve in adult male zebra finches. We found that bilateral NIf lesions did not prevent nerve section–induced song decrystallization. To test the extent to which the NIf lesions disrupted auditory processing in the song system, we made in vivo extracellular recordings in HVC and a downstream anterior forebrain pathway (AFP) in NIf-lesioned birds. We found strong and selective auditory responses to the playback of the birds' own song persisted in HVC and the AFP following NIf lesions. These findings suggest that auditory inputs to the song system other than NIf, such as the caudal mesopallium, could act as a source of auditory feedback signals to the song motor network. PMID:19515953
Avey, Marc T; Phillmore, Leslie S; MacDougall-Shackleton, Scott A
2005-12-07
Sensory-driven immediate early gene (IEG) expression has been a key tool to explore auditory perceptual areas in the avian brain. Most work on IEG expression in songbirds such as zebra finches has focused on playback of acoustic stimuli and its effect on auditory processing areas such as the caudal medial mesopallium (CMM) and caudal medial nidopallium (NCM). However, in a natural setting, the courtship displays of songbirds (including zebra finches) include visual as well as acoustic components. To determine whether the visual stimulus of a courting male modifies song-induced expression of the IEG ZENK in the auditory forebrain, we exposed male and female zebra finches to acoustic (song) and visual (dancing) components of courtship. Birds were played digital movies with either combined audio and video, audio only, video only, or neither audio nor video (control). We found significantly increased levels of Zenk response in the auditory region CMM in the two treatment groups exposed to acoustic stimuli compared to the control group. The video-only group had an intermediate response, suggesting a potential effect of visual input on activity in these auditory brain regions. Finally, we unexpectedly found a lateralization of Zenk response that was independent of sex, brain region, or treatment condition, such that Zenk immunoreactivity was consistently higher in the left hemisphere than in the right and the majority of individual birds were left-hemisphere dominant.
Audio-tactile integration and the influence of musical training.
Kuchenbuch, Anja; Paraskevopoulos, Evangelos; Herholz, Sibylle C; Pantev, Christo
2014-01-01
Perception of our environment is a multisensory experience; information from different sensory systems like the auditory, visual and tactile is constantly integrated. Complex tasks that require high temporal and spatial precision of multisensory integration put strong demands on the underlying networks but it is largely unknown how task experience shapes multisensory processing. Long-term musical training is an excellent model for brain plasticity because it shapes the human brain at functional and structural levels, affecting a network of brain areas. In the present study we used magnetoencephalography (MEG) to investigate how audio-tactile perception is integrated in the human brain and if musicians show enhancement of the corresponding activation compared to non-musicians. Using a paradigm that allowed the investigation of combined and separate auditory and tactile processing, we found a multisensory incongruency response, generated in frontal, cingulate and cerebellar regions, an auditory mismatch response generated mainly in the auditory cortex and a tactile mismatch response generated in frontal and cerebellar regions. The influence of musical training was seen in the audio-tactile as well as in the auditory condition, indicating enhanced higher-order processing in musicians, while the sources of the tactile MMN were not influenced by long-term musical training. Consistent with the predictive coding model, more basic, bottom-up sensory processing was relatively stable and less affected by expertise, whereas areas for top-down models of multisensory expectancies were modulated by training.
Avey, Marc T.; Hoeschele, Marisa; Moscicki, Michele K.; Bloomfield, Laurie L.; Sturdy, Christopher B.
2011-01-01
Songbird auditory areas (i.e., CMM and NCM) are preferentially activated to playback of conspecific vocalizations relative to heterospecific and arbitrary noise [1]–[2]. Here, we asked if the neural response to auditory stimulation is not simply preferential for conspecific vocalizations but also for the information conveyed by the vocalization. Black-capped chickadees use their chick-a-dee mobbing call to recruit conspecifics and other avian species to mob perched predators [3]. Mobbing calls produced in response to smaller, higher-threat predators contain more “D” notes compared to those produced in response to larger, lower-threat predators and thus convey the degree of threat of predators [4]. We specifically asked whether the neural response varies with the degree of threat conveyed by the mobbing calls of chickadees and whether the neural response is the same for actual predator calls that correspond to the degree of threat of the chickadee mobbing calls. Our results demonstrate that, as degree of threat increases in conspecific chickadee mobbing calls, there is a corresponding increase in immediate early gene (IEG) expression in telencephalic auditory areas. We also demonstrate that as the degree of threat increases for the heterospecific predator, there is a corresponding increase in IEG expression in the auditory areas. Furthermore, there was no significant difference in the amount of IEG expression between conspecific mobbing calls and heterospecific predator calls of the same degree of threat. In a second experiment, using hand-reared chickadees without predator experience, we found more IEG expression in response to mobbing calls than corresponding predator calls, indicating that degree of threat is learned. Our results demonstrate that degree of threat corresponds to neural activity in the auditory areas, that threat can be conveyed by the signals of different species, and that these signals must be learned. PMID:21909363
Emri, Miklós; Glaub, Teodóra; Berecz, Roland; Lengyel, Zsolt; Mikecz, Pál; Repa, Imre; Bartók, Eniko; Degrell, István; Trón, Lajos
2006-05-01
Cognitive deficit is an essential feature of schizophrenia. One of the simple cognitive tasks generally used to characterize specific cognitive dysfunctions is the auditory "oddball" paradigm. During this task, two different tones are presented with different repetition frequencies and the subject is asked to pay attention and to respond to the less frequent tone. The aim of the present study was to apply positron emission tomography (PET) to measure the regional brain blood flow changes induced by an auditory oddball task in healthy volunteers and in stable schizophrenic patients in order to detect activation differences between the two groups. Eight healthy volunteers and 11 schizophrenic patients were studied. The subjects carried out a specific auditory oddball task while cerebral activation, measured via the regional distribution of [15O]-butanol activity changes in the PET camera, was recorded. Task-related activation differed significantly between the patients and controls. The healthy volunteers displayed significant activation in the anterior cingulate area (Brodmann Area, BA32), while in the schizophrenic patients the activated area was wider, including the mediofrontal regions (BA32 and BA10). The distance between the locations of maximal activation of the two populations was 33 mm and the cluster size was about twice as large in the patient group. The present results demonstrate that the perfusion changes induced in the schizophrenic patients by this cognitive task extend over a larger part of the mediofrontal cortex than in the healthy volunteers. The different pattern of activation observed during the auditory oddball task in the schizophrenic patients suggests that a larger cortical area, and consequently a larger variety of neuronal networks, is involved in the cognitive processes in these patients. The dispersion of stimulus processing during a cognitive task requiring sustained attention and stimulus discrimination may play an important role in the pathomechanism of the disorder.
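The oddball paradigm described in the abstract above has a simple generative structure: a frequent standard tone interleaved with a rare target (deviant) tone that the subject must respond to. A minimal sketch of such a trial sequence; the tone frequencies, trial count, and deviant probability are illustrative assumptions, not the study's parameters:

```python
import random

def oddball_sequence(n_trials=100, p_deviant=0.2,
                     standard=1000, deviant=2000, seed=0):
    """Generate a simple auditory oddball trial sequence.

    Tones are labeled by frequency (Hz); the rare 'deviant' tone is the
    less frequent target the subject is asked to respond to.
    """
    rng = random.Random(seed)
    return [deviant if rng.random() < p_deviant else standard
            for _ in range(n_trials)]

seq = oddball_sequence()
print(sum(t == 2000 for t in seq), "deviant trials out of", len(seq))
```

Seeding the generator makes the sequence reproducible across subjects, which is typical for such paradigms.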
The effect of early visual deprivation on the neural bases of multisensory processing.
Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte
2015-06-01
Developmental vision is deemed to be necessary for the maturation of multisensory cortical circuits. Thus far, this has only been investigated in animal studies, which have shown that congenital visual deprivation markedly reduces the capability of neurons to integrate cross-modal inputs. The present study investigated the effect of transient congenital visual deprivation on the neural mechanisms of multisensory processing in humans. We used functional magnetic resonance imaging to compare responses of visual and auditory cortical areas to visual, auditory and audio-visual stimulation in cataract-reversal patients and normally sighted controls. The results showed that cataract-reversal patients, unlike normally sighted controls, did not exhibit multisensory integration in auditory areas. Furthermore, cataract-reversal patients, but not normally sighted controls, exhibited lower visual cortical processing within visual cortex during audio-visual stimulation than during visual stimulation. These results indicate that congenital visual deprivation affects the capability of cortical areas to integrate cross-modal inputs in humans, possibly because visual processing is suppressed during cross-modal stimulation. Arguably, the lack of vision in the first months after birth may result in a reorganization of visual cortex, including the suppression of noisy visual input from the deprived retina in order to reduce interference during auditory processing. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Cheyne, Susan M; Thompson, Claire J H; Phillips, Abigail C; Hill, Robyn M C; Limin, Suwido H
2008-01-01
We demonstrate that although auditory sampling is a useful tool, this method alone will not provide a truly accurate indication of population size, density and distribution of gibbons in an area. If auditory sampling alone is employed, we show that data collection must take place over a sufficient period to account for variation in calling patterns across seasons. The population of Hylobates albibarbis in the Sabangau catchment, Central Kalimantan, Indonesia, was surveyed from July to December 2005 using methods established previously. In addition, auditory sampling was complemented by detailed behavioural data on six habituated groups within the study area. Here we compare results from this study to those of a 1-month study conducted in 2004. The total population of the Sabangau catchment is estimated to be in the tens of thousands, though numbers, distribution and density for the different forest subtypes vary considerably. We propose that future density surveys of gibbons must include data from all forest subtypes where gibbons are found and that extrapolating from one forest subtype is likely to yield inaccurate density and population estimates. We also propose that auditory census be carried out using at least three listening posts (LPs) in order to increase the area sampled and the chances of hearing groups. Our results suggest that the Sabangau catchment contains one of the largest remaining contiguous populations of Bornean agile gibbon.
Plasticity of white matter connectivity in phonetics experts.
Vandermosten, Maaike; Price, Cathy J; Golestani, Narly
2016-09-01
Phonetics experts are highly trained to analyze and transcribe speech, both with respect to faster changing, phonetic features, and to more slowly changing, prosodic features. Previously we reported that, compared to non-phoneticians, phoneticians had greater local brain volume in bilateral auditory cortices and the left pars opercularis of Broca's area, with training-related differences in the grey-matter volume of the left pars opercularis in the phoneticians group (Golestani et al. 2011). In the present study, we used diffusion MRI to examine white matter microstructure, indexed by fractional anisotropy, in (1) the long segment of arcuate fasciculus (AF_long), which is a well-known language tract that connects Broca's area, including left pars opercularis, to the temporal cortex, and in (2) the fibers arising from the auditory cortices. Most of these auditory fibers belong to three validated language tracts, namely to the AF_long, the posterior segment of the arcuate fasciculus and the middle longitudinal fasciculus. We found training-related differences in phoneticians in left AF_long, as well as group differences relative to non-experts in the auditory fibers (including the auditory fibers belonging to the left AF_long). Taken together, the results of both studies suggest that grey matter structural plasticity arising from phonetic transcription training in Broca's area is accompanied by changes to the white matter fibers connecting this very region to the temporal cortex. Our findings suggest expertise-related changes in white matter fibers connecting fronto-temporal functional hubs that are important for phonetic processing. Further studies can pursue this hypothesis by examining the dynamics of these expertise related grey and white matter changes as they arise during phonetic training.
Recognition Memory for Braille or Spoken Words: An fMRI study in Early Blind
Burton, Harold; Sinclair, Robert J.; Agato, Alvin
2012-01-01
We examined cortical activity in early blind during word recognition memory. Nine participants were blind at birth and one by 1.5 years. In an event-related design, we studied blood oxygen level-dependent responses to studied (“old”) compared to novel (“new”) words. Presentation mode was in Braille or spoken. Responses were larger for identified “new” words read with Braille in bilateral lower and higher tier visual areas and primary somatosensory cortex. Responses to spoken “new” words were larger in bilateral primary and accessory auditory cortex. Auditory cortex was unresponsive to Braille words and occipital cortex responded to spoken words but not differentially with “old”/“new” recognition. Left dorsolateral prefrontal cortex had larger responses to “old” words only with Braille. Larger occipital cortex responses to “new” Braille words suggested verbal memory based on the mechanism of recollection. A previous report in sighted noted larger responses for “new” words studied in association with pictures that created a distinctiveness heuristic source factor which enhanced recollection during remembering. Prior behavioral studies in early blind noted an exceptional ability to recall words. Utilization of this skill by participants in the current study possibly engendered recollection that augmented remembering “old” words. A larger response when identifying “new” words possibly resulted from exhaustive recollection of the sensory properties of “old” words in modality appropriate sensory cortices. The uniqueness of a memory role for occipital cortex is in its cross-modal responses to coding tactile properties of Braille. The latter possibly reflects a “sensory echo” that aids recollection. PMID:22251836
Washington, Stuart D.
2012-01-01
Species-specific vocalizations of mammals, including humans, contain slow and fast frequency modulations (FMs) as well as tone and noise bursts. In this study, we established sex-specific hemispheric differences in the tonal and FM response characteristics of neurons in the Doppler-shifted constant-frequency processing area in the mustached bat's primary auditory cortex (A1). We recorded single-unit cortical activity from the right and left A1 in awake bats in response to the presentation of tone bursts and linear FM sweeps that are contained within their echolocation and/or communication sounds. Peak response latencies to neurons' preferred or best FMs were significantly longer on the right compared with the left in both sexes, and in males this right-left difference was also present for the most excitatory tone burst. Based on peak response magnitudes, right hemispheric A1 neurons in males preferred low-rate, narrowband FMs, whereas those on the left were less selective, responding to FMs with a variety of rates and bandwidths. The distributions of parameters for best FMs in females were similar on the two sides. Together, our data provide the first strong physiological support of a sex-specific, spectrotemporal hemispheric asymmetry for the representation of tones and FMs in a nonhuman mammal. Specifically, our results demonstrate a left hemispheric bias in males for the representation of a diverse array of FMs differing in rate and bandwidth. We propose that these asymmetries underlie lateralized processing of communication sounds and are common to species as divergent as bats and humans. PMID:22649207
Neuronal correlates of perception, imagery, and memory for familiar tunes.
Herholz, Sibylle C; Halpern, Andrea R; Zatorre, Robert J
2012-06-01
We used fMRI to investigate the neuronal correlates of encoding and recognizing heard and imagined melodies. Ten participants were shown lyrics of familiar verbal tunes; they either heard the tune along with the lyrics, or they had to imagine it. In a subsequent surprise recognition test, they had to identify the titles of tunes that they had heard or imagined earlier. The functional data showed substantial overlap during melody perception and imagery, including secondary auditory areas. During imagery compared with perception, an extended network including pFC, SMA, intraparietal sulcus, and cerebellum showed increased activity, in line with the increased processing demands of imagery. Functional connectivity of anterior right temporal cortex with frontal areas was increased during imagery compared with perception, indicating that these areas form an imagery-related network. Activity in right superior temporal gyrus and pFC was correlated with the subjective rating of imagery vividness. Similar to the encoding phase, the recognition task recruited overlapping areas, including inferior frontal cortex associated with memory retrieval, as well as left middle temporal gyrus. The results present new evidence for the cortical network underlying goal-directed auditory imagery, with a prominent role of the right pFC both for the subjective impression of imagery vividness and for on-line mental monitoring of imagery-related activity in auditory areas.
Sommer, Iris E; Selten, Jean-Paul; Diederen, Kelly M; Blom, Jan Dirk
2010-01-01
This study proposes a theoretical framework which dissects auditory verbal hallucinations (AVH) into 2 essential components: audibility and alienation. Audibility, the perceptual aspect of AVH, may result from a disinhibition of the auditory cortex in response to self-generated speech. In isolation, this aspect leads to audible thoughts: Gedankenlautwerden. The second component is alienation, which is the failure to recognize the content of AVH as self-generated. This failure may be related to the fact that cerebral activity associated with AVH is predominantly present in the speech production area of the right hemisphere. Since normal inner speech is derived from the left speech area, an aberrant source may lead to confusion about the origin of the language fragments. When alienation is not accompanied by audibility, it will result in the experience of thought insertion. The 2 hypothesized components are illustrated using case vignettes. Copyright 2010 S. Karger AG, Basel.
Auditory discrimination therapy (ADT) for tinnitus management.
Herraiz, C; Diges, I; Cobo, P
2007-01-01
Auditory discrimination training (ADT) is a procedure designed to increase the cortical areas responding to trained frequencies (damaged cochlear areas with cortical misrepresentation) and to shrink the neighboring over-represented ones (the tinnitus pitch). In a prospective descriptive study of 27 patients with high-frequency tinnitus, the severity of the tinnitus was measured using a visual analog scale (VAS) and the tinnitus handicap inventory (THI). Patients performed a 10-min auditory discrimination task twice a day for one month. Discontinuous 4 kHz pure tones were mixed randomly with short broadband noise sounds through an MP3 system. After the treatment, mean VAS scores were reduced from 5.2 to 4.5 (p=0.000) and the THI decreased from 26.2% to 21.3% (p=0.000). Forty percent of the patients had improvement in tinnitus perception (RESP). Comparing the ADT group with a control group showed statistically significant improvement of their tinnitus as assessed by RESP, VAS, and THI.
Lount, Sarah A; Purdy, Suzanne C; Hand, Linda
2017-01-01
International evidence suggests youth offenders have greater difficulties with oral language than their nonoffending peers. This study examined the hearing, auditory processing, and language skills of male youth offenders and remandees (YORs) in New Zealand. Thirty-three male YORs, aged 14-17 years, were recruited from 2 youth justice residences, plus 39 similarly aged male students from local schools for comparison. Testing comprised tympanometry, self-reported hearing, pure-tone audiometry, 4 auditory processing tests, 2 standardized language tests, and a nonverbal intelligence test. Twenty-one (64%) of the YORs were identified as language impaired (LI), compared with 4 (10%) of the controls. Performance on all language measures was significantly worse in the YOR group, as were their hearing thresholds. Nine (27%) of the YOR group versus 7 (18%) of the control group fulfilled criteria for auditory processing disorder. Only 1 YOR versus 5 controls had an auditory processing disorder without LI. Language was an area of significant difficulty for YORs. Difficulties with auditory processing were more likely to be accompanied by LI in this group, compared with the controls. Provision of speech-language therapy services and awareness of auditory and language difficulties should be addressed in youth justice systems.
Metabotropic glutamate receptors in auditory processing
Lu, Yong
2014-01-01
As the major excitatory neurotransmitter used in the vertebrate brain, glutamate activates ionotropic and metabotropic glutamate receptors (mGluRs), which mediate fast and slow neuronal actions, respectively. Important modulatory roles of mGluRs have been shown in many brain areas, and drugs targeting mGluRs have been developed for treatment of brain disorders. Here, I review the studies on mGluRs in the auditory system. Anatomical expression of mGluRs in the cochlear nucleus has been well characterized, while data for other auditory nuclei await more systematic investigations at both the light and electron microscopy levels. The physiology of mGluRs has been extensively studied using in vitro brain slice preparations, with a focus on the lower auditory brainstem in both mammals and birds. These in vitro physiological studies have revealed that mGluRs participate in neurotransmission, regulate ionic homeostasis, induce synaptic plasticity, and maintain the balance between excitation and inhibition in a variety of auditory structures. However, very few in vivo physiological studies on mGluRs in auditory processing have been undertaken at the systems level. Many questions regarding the essential roles of mGluRs in auditory processing still remain unanswered and more rigorous basic research is warranted. PMID:24909898
Sensory-Motor Networks Involved in Speech Production and Motor Control: An fMRI Study
Behroozmand, Roozbeh; Shebek, Rachel; Hansen, Daniel R.; Oya, Hiroyuki; Robin, Donald A.; Howard, Matthew A.; Greenlee, Jeremy D.W.
2015-01-01
Speaking is one of the most complex motor behaviors developed to facilitate human communication. The underlying neural mechanisms of speech involve sensory-motor interactions that incorporate feedback information for online monitoring and control of produced speech sounds. In the present study, we adopted an auditory feedback pitch perturbation paradigm and combined it with functional magnetic resonance imaging (fMRI) recordings in order to identify brain areas involved in speech production and motor control. Subjects underwent fMRI scanning while they produced a steady vowel sound /a/ (speaking) or listened to the playback of their own vowel production (playback). During each condition, the auditory feedback from vowel production was either normal (no perturbation) or randomly perturbed by an upward (+600 cents) pitch-shift stimulus. Analysis of BOLD responses during speaking (with and without shift) vs. rest revealed activation of a complex network including bilateral superior temporal gyrus (STG), Heschl's gyrus, precentral gyrus, supplementary motor area (SMA), Rolandic operculum, postcentral gyrus and right inferior frontal gyrus (IFG). Performance correlation analysis showed that the subjects produced compensatory vocal responses that significantly correlated with BOLD response increases in bilateral STG and left precentral gyrus. However, during playback, the activation network was limited to cortical auditory areas including bilateral STG and Heschl's gyrus. Moreover, the contrast between speaking vs. playback highlighted a distinct functional network that included bilateral precentral gyrus, SMA, IFG, postcentral gyrus and insula. These findings suggest that speech motor control involves feedback error detection in sensory (e.g. auditory) cortices that subsequently activate motor-related areas for the adjustment of speech parameters during speaking. PMID:25623499
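The +600-cent perturbation in the abstract above has a fixed acoustic meaning: cents are a logarithmic pitch unit with 1200 cents per octave, so a shift of c cents multiplies the fundamental frequency by 2^(c/1200). A quick numerical check (the function name is illustrative):

```python
def cents_to_ratio(cents):
    """Convert a pitch shift in cents to a frequency ratio.

    100 cents = 1 semitone; 1200 cents = 1 octave (a ratio of 2).
    """
    return 2 ** (cents / 1200)

# A +600-cent shift is half an octave, i.e. sqrt(2) times the original F0:
print(round(cents_to_ratio(600), 4))  # -> 1.4142
```

So a vowel produced at 120 Hz would be fed back at roughly 170 Hz under this perturbation.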
Distributed neural signatures of natural audiovisual speech and music in the human auditory cortex.
Salmi, Juha; Koistinen, Olli-Pekka; Glerean, Enrico; Jylänki, Pasi; Vehtari, Aki; Jääskeläinen, Iiro P; Mäkelä, Sasu; Nummenmaa, Lauri; Nummi-Kuisma, Katarina; Nummi, Ilari; Sams, Mikko
2017-08-15
During a conversation or when listening to music, auditory and visual information are combined automatically into audiovisual objects. However, it is still poorly understood how specific types of visual information shape neural processing of sounds in lifelike stimulus environments. Here we applied multi-voxel pattern analysis to investigate how naturally matching visual input modulates supratemporal cortex activity during processing of naturalistic acoustic speech, singing and instrumental music. Bayesian logistic regression classifiers with sparsity-promoting priors were trained to predict whether the stimulus was audiovisual or auditory, and whether it contained piano playing, speech, or singing. The predictive performances of the classifiers were tested by leaving out one participant at a time for testing and training the model on the remaining 15 participants. The signature patterns associated with unimodal auditory stimuli encompassed distributed locations mostly in the middle and superior temporal gyrus (STG/MTG). A pattern regression analysis, based on a continuous acoustic model, revealed that activity in some of these MTG and STG areas was associated with acoustic features present in speech and music stimuli. Concurrent visual stimulus modulated activity in bilateral MTG (speech), lateral aspect of right anterior STG (singing), and bilateral parietal opercular cortex (piano). Our results suggest that specific supratemporal brain areas are involved in processing complex natural speech, singing, and piano playing, and other brain areas located in anterior (facial speech) and posterior (music-related hand actions) supratemporal cortex are influenced by related visual information. Those anterior and posterior supratemporal areas have been linked to stimulus identification and sensory-motor integration, respectively. Copyright © 2017 Elsevier Inc. All rights reserved.
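The leave-one-participant-out testing scheme described in the abstract above is a generic cross-validation pattern. The sketch below illustrates it with a plain nearest-centroid classifier on toy data; the study's actual Bayesian logistic regression with sparsity-promoting priors is not reproduced here, and all names and data are illustrative:

```python
import statistics

def nearest_centroid_predict(train, test_x):
    """Classify test_x by the closest class-mean of the training examples.

    train: list of (feature_vector, label) pairs.
    """
    by_label = {}
    for x, y in train:
        by_label.setdefault(y, []).append(x)
    centroids = {y: [statistics.mean(col) for col in zip(*xs)]
                 for y, xs in by_label.items()}
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda y: sq_dist(centroids[y], test_x))

def leave_one_participant_out(data):
    """Accuracy under leave-one-participant-out cross-validation.

    data: {participant_id: [(feature_vector, label), ...]}.
    Each participant is held out once; the classifier is trained on the rest.
    """
    correct = total = 0
    for held_out in data:
        train = [t for p, trials in data.items() if p != held_out
                 for t in trials]
        for x, y in data[held_out]:
            correct += nearest_centroid_predict(train, x) == y
            total += 1
    return correct / total

# Toy data: two conditions with cleanly separable mean activity patterns.
data = {p: [([1.0 + p, 0.0], "auditory"), ([0.0, 1.0 + p], "audiovisual")]
        for p in range(4)}
print(leave_one_participant_out(data))  # separable toy data -> 1.0
```

Holding out whole participants, rather than random trials, is what licenses the claim that the decoded patterns generalize across subjects.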
Sood, Mariam R; Sereno, Martin I
2016-08-01
Cortical mapping techniques using fMRI have been instrumental in identifying the boundaries of topological (neighbor-preserving) maps in early sensory areas. The presence of topological maps beyond early sensory areas raises the possibility that they might play a significant role in other cognitive systems, and that topological mapping might help to delineate areas involved in higher cognitive processes. In this study, we combine surface-based visual, auditory, and somatomotor mapping methods with a naturalistic reading comprehension task in the same group of subjects to provide a qualitative and quantitative assessment of the cortical overlap between sensory-motor maps in all major sensory modalities, and reading processing regions. Our results suggest that cortical activation during naturalistic reading comprehension overlaps more extensively with topological sensory-motor maps than has been heretofore appreciated. Reading activation in regions adjacent to occipital lobe and inferior parietal lobe almost completely overlaps visual maps, whereas a significant portion of frontal activation for reading in dorsolateral and ventral prefrontal cortex overlaps both visual and auditory maps. Even classical language regions in superior temporal cortex are partially overlapped by topological visual and auditory maps. By contrast, the main overlap with somatomotor maps is restricted to a small region on the anterior bank of the central sulcus near the border between the face and hand representations of M-I. Hum Brain Mapp 37:2784-2810, 2016. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
Adult Plasticity in the Subcortical Auditory Pathway of the Maternal Mouse
Miranda, Jason A.; Shepard, Kathryn N.; McClintock, Shannon K.; Liu, Robert C.
2014-01-01
Subcortical auditory nuclei were traditionally viewed as non-plastic in adulthood so that acoustic information could be stably conveyed to higher auditory areas. Studies in a variety of species, including humans, now suggest that prolonged acoustic training can drive long-lasting brainstem plasticity. The neurobiological mechanisms for such changes are not well understood in natural behavioral contexts due to a relative dearth of in vivo animal models in which to study this. Here, we demonstrate in a mouse model that a natural life experience with increased demands on the auditory system – motherhood – is associated with improved temporal processing in the subcortical auditory pathway. We measured the auditory brainstem response (ABR) to test whether mothers and pup-naïve virgin mice differed in temporal responses to both broadband and tone stimuli, including ultrasonic frequencies found in mouse pup vocalizations. Mothers had shorter latencies for early ABR peaks, indicating plasticity in the auditory nerve and the cochlear nucleus. Shorter interpeak latency between waves IV and V also suggests plasticity in the inferior colliculus. Hormone manipulations revealed that these cannot be explained solely by estrogen levels experienced during pregnancy and parturition in mothers. In contrast, we found that pup-care experience, independent of pregnancy and parturition, contributes to shortening ABR latencies. These results suggest that acoustic experience in the maternal context imparts plasticity on early auditory processing that lasts beyond pup weaning. In addition to establishing an animal model for exploring adult auditory brainstem plasticity in a neuroethological context, our results have broader implications for models of perceptual, behavioral and neural changes that arise during maternity, where subcortical sensorineural plasticity has not previously been considered. PMID:24992362
Gudi-Mindermann, Helene; Rimmele, Johanna M; Nolte, Guido; Bruns, Patrick; Engel, Andreas K; Röder, Brigitte
2018-04-12
The functional relevance of crossmodal activation (e.g. auditory activation of occipital brain regions) in congenitally blind individuals is still not fully understood. The present study tested whether the occipital cortex of blind individuals is integrated into a challenged functional network. A working memory (WM) training over four sessions was implemented. Congenitally blind and matched sighted participants were adaptively trained with an n-back task employing either voices (auditory training) or tactile stimuli (tactile training). In addition, a minimally demanding 1-back task served as an active control condition. Power and functional connectivity of EEG activity evolving during the maintenance period of an auditory 2-back task, which was run prior to and after the WM training, were analyzed. Modality-specific (following auditory training) and modality-independent WM training effects (following both auditory and tactile training) were assessed. Improvements in auditory WM were observed in all groups, and blind and sighted individuals did not differ in training gains. Auditory and tactile training of sighted participants led, relative to the active control group, to an increase in fronto-parietal theta-band power, suggesting a training-induced strengthening of the existing modality-independent WM network. No power effects were observed in the blind. Rather, after auditory training the blind showed a decrease in theta-band connectivity between central, parietal, and occipital electrodes compared to the blind tactile training and active control groups. Furthermore, in the blind auditory training increased beta-band connectivity between fronto-parietal, central and occipital electrodes. In the congenitally blind, these findings suggest a stronger integration of occipital areas into the auditory WM network. Copyright © 2018 Elsevier B.V. All rights reserved.
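The n-back task at the core of the training above has a compact formal definition: a trial is a target when the current stimulus matches the one presented n steps earlier (with n = 2 for the 2-back task, and n = 1 for the control condition). A minimal scoring sketch; the stimuli and function names are illustrative:

```python
def nback_targets(stimuli, n=2):
    """Mark each position that matches the stimulus n steps back."""
    return [i >= n and stimuli[i] == stimuli[i - n]
            for i in range(len(stimuli))]

def score_nback(stimuli, responses, n=2):
    """Fraction of trials where the response (True = 'target') is correct."""
    targets = nback_targets(stimuli, n)
    return sum(r == t for r, t in zip(responses, targets)) / len(stimuli)

stim = ["A", "B", "A", "C", "A", "D"]
print(nback_targets(stim))  # [False, False, True, False, True, False]
```

Adaptive training then raises or lowers n (or the presentation rate) to hold accuracy near a criterion, which is what "adaptively trained" refers to.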
Vahaba, Daniel M; Macedo-Lima, Matheus; Remage-Healey, Luke
2017-01-01
Vocal learning occurs during an experience-dependent, age-limited critical period early in development. In songbirds, vocal learning begins when presinging birds acquire an auditory memory of their tutor's song (sensory phase) followed by the onset of vocal production and refinement (sensorimotor phase). Hearing is necessary throughout the vocal learning critical period. One key brain area for songbird auditory processing is the caudomedial nidopallium (NCM), a telencephalic region analogous to mammalian auditory cortex. Despite NCM's established role in auditory processing, it is unclear how the response properties of NCM neurons may shift across development. Moreover, communication processing in NCM is rapidly enhanced by local 17β-estradiol (E2) administration in adult songbirds; however, the function of dynamically fluctuating E2 in NCM during development is unknown. We collected bilateral extracellular recordings in NCM coupled with reverse microdialysis delivery in juvenile male zebra finches (Taeniopygia guttata) across the vocal learning critical period. We found that auditory-evoked activity and coding accuracy were substantially higher in the NCM of sensory-aged animals compared to sensorimotor-aged animals. Further, we observed both age-dependent and lateralized effects of local E2 administration on sensory processing. In sensory-aged subjects, E2 decreased auditory responsiveness across both hemispheres; however, a similar trend was observed in age-matched control subjects. In sensorimotor-aged subjects, E2 dampened auditory responsiveness in left NCM but enhanced auditory responsiveness in right NCM. Our results reveal an age-dependent physiological shift in auditory processing and lateralized E2 sensitivity that each precisely track a key neural "switch point" from purely sensory (pre-singing) to sensorimotor (singing) in developing songbirds. PMID:29255797
Auditory short-term memory in the primate auditory cortex.
Scott, Brian H; Mishkin, Mortimer
2016-06-01
Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.
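The delayed-match-to-sample design reviewed above can be sketched as trial-generation logic: present a sample stimulus, impose a delay, then present a test stimulus that either repeats the sample (a 'match') or differs (a 'non-match'). The stimulus names and match probability below are illustrative, not taken from the review:

```python
import random

def dms_trial(stimuli, p_match=0.5, rng=random):
    """Generate one delayed-match-to-sample (DMS) trial.

    Returns (sample, test, label): the test stimulus repeats the sample
    with probability p_match, otherwise a different stimulus is drawn.
    """
    sample = rng.choice(stimuli)
    if rng.random() < p_match:
        return sample, sample, "match"
    test = rng.choice([s for s in stimuli if s != sample])
    return sample, test, "non-match"

rng = random.Random(1)
for sample, test, label in (dms_trial(["toneA", "toneB", "noise"], rng=rng)
                            for _ in range(5)):
    print(sample, "->", test, label)
```

Inserting distractor sounds into the delay period, as the review notes, is what distinguishes passive short-term memory from rehearsal-based working memory in this design.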