Yanagihara, Shin; Yazaki-Sugiyama, Yoko
2018-04-12
Behavioral states of animals, such as observing the behavior of a conspecific, modify signal perception and/or sensations that influence state-dependent higher cognitive behavior, such as learning. Recent studies have shown that neuronal responsiveness to sensory signals is modified when animals are engaged in social interactions with others or in locomotor activities. However, how these changes produce state-dependent differences in higher cognitive function is still largely unknown. Zebra finches, which have served as the premier songbird model, learn to sing from early auditory experiences with tutors. They also learn from playback of recorded songs; however, learning can be greatly improved when song models are provided through social communication with tutors (Eales, 1989; Chen et al., 2016). Recently we found a subset of neurons in the higher-level auditory cortex of juvenile zebra finches that exhibit highly selective auditory responses to the tutor song after song learning, suggesting an auditory memory trace of the tutor song (Yanagihara and Yazaki-Sugiyama, 2016). Here we show that auditory responses of these selective neurons became greater when juveniles were paired with their tutors, while responses of non-selective neurons did not change. These results suggest that social interaction modulates cortical activity and might function in state-dependent song learning. Copyright © 2018 Elsevier B.V. All rights reserved.
Phillips, Derrick J; Schei, Jennifer L; Meighan, Peter C; Rector, David M
2011-11-01
Auditory evoked potential (AEP) components correspond to sequential activation of brain structures within the auditory pathway and reveal neural activity during sensory processing. To investigate state-dependent modulation of stimulus intensity response profiles within different brain structures, we assessed AEP components across both stimulus intensity and state. We implanted adult female Sprague-Dawley rats (N = 6) with electrodes to measure EEG, EKG, and EMG. Intermittent auditory stimuli (6-12 s) varying from 50 to 75 dBa were delivered over a 24-h period. Data were parsed into 2-s epochs and scored for wake/sleep state. All AEP components increased in amplitude with increased stimulus intensity during wake. During quiet sleep, however, only the early latency response (ELR) showed this relationship, while the middle latency response (MLR) increased at the highest 75 dBa intensity, and the late latency response (LLR) showed no significant change across the stimulus intensities tested. During rapid eye movement sleep (REM), both ELR and LLR increased, similar to wake, but MLR was severely attenuated. Stimulation intensity and the corresponding AEP response profile were dependent on both brain structure and sleep state. Lower brain structures maintained stimulus intensity and neural response relationships during sleep. This relationship was not observed in the cortex, implying state-dependent modification of stimulus intensity coding. Since cortical AEP amplitude is not reliably modulated by stimulus intensity during sleep, differences between responses to paired 75/50 dBa stimuli could be used to determine state better than individual intensities.
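The paired-intensity idea above lends itself to a simple sketch: average stimulus-locked epochs separately for the two intensities and take the difference wave, which should collapse in states where intensity no longer modulates the response. The helper names, window length, and synthetic onsets below are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def evoked_potential(eeg, onsets, fs, win_s=0.5):
    """Average stimulus-locked epochs to obtain an AEP (hypothetical helper)."""
    n = int(win_s * fs)
    epochs = np.stack([eeg[o:o + n] for o in onsets if o + n <= len(eeg)])
    return epochs.mean(axis=0)

def paired_intensity_difference(eeg, onsets_75, onsets_50, fs):
    """Difference wave between 75 dBa and 50 dBa AEPs.

    A near-flat difference wave would indicate that intensity is not
    modulating the response, as reported for cortical components in sleep.
    """
    aep_75 = evoked_potential(eeg, onsets_75, fs)
    aep_50 = evoked_potential(eeg, onsets_50, fs)
    return aep_75 - aep_50
```

On synthetic data where 75 dBa responses are twice the amplitude of 50 dBa responses, the difference wave recovers the single-response waveform; on state-matched data with no intensity modulation it would approach zero.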
The what, where and how of auditory-object perception.
Bizley, Jennifer K; Cohen, Yale E
2013-10-01
The fundamental perceptual unit in hearing is the 'auditory object'. Similar to visual objects, auditory objects are the computational result of the auditory system's capacity to detect, extract, segregate and group spectrotemporal regularities in the acoustic environment; the multitude of acoustic stimuli around us together form the auditory scene. However, unlike the visual scene, resolving the component objects within the auditory scene crucially depends on their temporal structure. Neural correlates of auditory objects are found throughout the auditory system. However, neural responses do not become correlated with a listener's perceptual reports until the level of the cortex. The roles of different neural structures and the contribution of different cognitive states to the perception of auditory objects are not yet fully understood.
Tinnitus Intensity Dependent Gamma Oscillations of the Contralateral Auditory Cortex
van der Loo, Elsa; Gais, Steffen; Congedo, Marco; Vanneste, Sven; Plazier, Mark; Menovsky, Tomas; Van de Heyning, Paul; De Ridder, Dirk
2009-01-01
Background: Non-pulsatile tinnitus is considered a subjective auditory phantom phenomenon present in 10 to 15% of the population. Tinnitus as a phantom phenomenon is related to hyperactivity and reorganization of the auditory cortex. Magnetoencephalography studies demonstrate a correlation between gamma band activity in the contralateral auditory cortex and the presence of tinnitus. The present study aims to investigate the relation between objective gamma-band activity in the contralateral auditory cortex and subjective tinnitus loudness scores. Methods and Findings: In unilateral tinnitus patients (N = 15; 10 right, 5 left) source analysis of resting state electroencephalographic gamma band oscillations shows a strong positive correlation with Visual Analogue Scale loudness scores in the contralateral auditory cortex (max r = 0.73, p<0.05). Conclusion: Auditory phantom percepts thus show similar sound level dependent activation of the contralateral auditory cortex as observed in normal audition. In view of recent consciousness models and tinnitus network models these results suggest tinnitus loudness is coded by gamma band activity in the contralateral auditory cortex but might not, by itself, be responsible for tinnitus perception. PMID:19816597
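The core analysis here, a correlation between gamma-band power and VAS loudness scores across patients, can be sketched minimally. The band limits and the FFT-based power estimate below are assumptions; the study itself correlated source-localized activity, not raw channel spectra.

```python
import numpy as np

def bandpower(x, fs, band=(30.0, 45.0)):
    """Mean FFT power of signal x within a frequency band (gamma by default)."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def loudness_correlation(gamma_power, vas_scores):
    """Pearson r between per-subject gamma power and VAS loudness scores."""
    return np.corrcoef(gamma_power, vas_scores)[0, 1]
```

A value near the reported max r = 0.73 would indicate that louder-rated tinnitus co-occurs with stronger contralateral gamma activity.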
Auditory Verbal Experience and Agency in Waking, Sleep Onset, REM, and Non-REM Sleep.
Speth, Jana; Harley, Trevor A; Speth, Clemens
2017-04-01
We present one of the first quantitative studies on auditory verbal experiences ("hearing voices") and auditory verbal agency (inner speech, and specifically "talking to (imaginary) voices or characters") in healthy participants across states of consciousness. Tools of quantitative linguistic analysis were used to measure participants' implicit knowledge of auditory verbal experiences (VE) and auditory verbal agencies (VA), displayed in mentation reports from four different states. Analysis was conducted on a total of 569 mentation reports from rapid eye movement (REM) sleep, non-REM sleep, sleep onset, and waking. Physiology was controlled with the Nightcap sleep-wake mentation monitoring system. Sleep-onset hallucinations, traditionally at the focus of scientific attention on auditory verbal hallucinations, showed the lowest degree of VE and VA, whereas REM sleep showed the highest degrees. Degrees of different linguistic-pragmatic aspects of VE and VA likewise depend on the physiological states. The quantity and pragmatics of VE and VA are a function of the physiologically distinct state of consciousness in which they are conceived. Copyright © 2016 Cognitive Science Society, Inc.
Prestimulus influences on auditory perception from sensory representations and decision processes.
Kayser, Stephanie J; McNair, Steven W; Kayser, Christoph
2016-04-26
The qualities of perception depend not only on the sensory inputs but also on the brain state before stimulus presentation. Although the collective evidence from neuroimaging studies for a relation between prestimulus state and perception is strong, the interpretation in the context of sensory computations or decision processes has remained difficult. In the auditory system, for example, previous studies have reported a wide range of effects in terms of the perceptually relevant frequency bands and state parameters (phase/power). To dissociate influences of state on earlier sensory representations and higher-level decision processes, we collected behavioral and EEG data in human participants performing two auditory discrimination tasks relying on distinct acoustic features. Using single-trial decoding, we quantified the relation between prestimulus activity, relevant sensory evidence, and choice in different task-relevant EEG components. Within auditory networks, we found that phase had no direct influence on choice, whereas power in task-specific frequency bands affected the encoding of sensory evidence. Within later-activated frontoparietal regions, theta and alpha phase had a direct influence on choice, without involving sensory evidence. These results delineate two consistent mechanisms by which prestimulus activity shapes perception. However, the timescales of the relevant neural activity depend on the specific brain regions engaged by the respective task.
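A minimal sketch of extracting the two prestimulus state parameters the abstract contrasts, band-limited phase and power on single trials. The band-pass/Hilbert approach, filter order, and function name are my assumptions, not the authors' decoding pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def prestim_phase_power(trials, fs, band, stim_idx):
    """Single-trial prestimulus phase and power in a given frequency band.

    trials: array (n_trials, n_samples); stim_idx: sample of stimulus onset.
    Returns phase (rad) and log-power at the sample just before onset.
    """
    b, a = butter(4, np.array(band) / (fs / 2), btype="bandpass")
    analytic = hilbert(filtfilt(b, a, trials, axis=1), axis=1)
    phase = np.angle(analytic[:, stim_idx - 1])
    power = np.log(np.abs(analytic[:, stim_idx - 1]) ** 2)
    return phase, power
```

These per-trial values could then be regressed against choice or against decoded sensory evidence, which is the dissociation the study draws.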
Brain state-dependent abnormal LFP activity in the auditory cortex of a schizophrenia mouse model
Nakao, Kazuhito; Nakazawa, Kazu
2014-01-01
In schizophrenia, evoked 40-Hz auditory steady-state responses (ASSRs) are impaired, which reflects the sensory deficits in this disorder, and baseline spontaneous oscillatory activity also appears to be abnormal. It has been debated whether the evoked ASSR impairments are due to the possible increase in baseline power. GABAergic interneuron-specific NMDA receptor (NMDAR) hypofunction mutant mice mimic some behavioral and pathophysiological aspects of schizophrenia. To determine the presence and extent of sensory deficits in these mutant mice, we recorded spontaneous local field potential (LFP) activity and its click-train evoked ASSRs from primary auditory cortex of awake, head-restrained mice. Baseline spontaneous LFP power in the pre-stimulus period before application of the first click trains was augmented at a wide range of frequencies. However, when repetitive ASSR stimuli were presented every 20 s, averaged spontaneous LFP power amplitudes during the inter-ASSR stimulus intervals in the mutant mice became indistinguishable from the levels of control mice. Nonetheless, the evoked 40-Hz ASSR power and their phase locking to click trains were robustly impaired in the mutants, although the evoked 20-Hz ASSRs were also somewhat diminished. These results suggested that NMDAR hypofunction in cortical GABAergic neurons confers two brain state-dependent LFP abnormalities in the auditory cortex: (1) a broadband increase in spontaneous LFP power in the absence of external inputs, and (2) a robust deficit in the evoked ASSR power and its phase-locking despite normal baseline LFP power magnitude during the repetitive auditory stimuli. The “paradoxically” high spontaneous LFP activity of the primary auditory cortex in the absence of external stimuli may possibly contribute to the emergence of schizophrenia-related aberrant auditory perception. PMID:25018691
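The two evoked measures contrasted above, ASSR power and phase locking to the click train, can both be computed from stimulus-locked epochs at the stimulation frequency. The FFT-bin approach and the phase-locking value (PLV) definition below are standard conventions, assumed here rather than taken from the paper.

```python
import numpy as np

def assr_metrics(trials, fs, f0=40.0):
    """Evoked power and inter-trial phase locking at the ASSR frequency.

    trials: array (n_trials, n_samples) of stimulus-locked LFP epochs.
    Uses the FFT bin nearest f0; PLV = modulus of the mean unit phase vector.
    """
    n = trials.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = np.argmin(np.abs(freqs - f0))
    spectra = np.fft.rfft(trials, axis=1)[:, k]
    # Averaging complex spectra before taking the magnitude keeps only the
    # phase-consistent (evoked) part of the response.
    evoked_power = np.abs(spectra.mean()) ** 2
    plv = np.abs(np.mean(spectra / np.abs(spectra)))
    return evoked_power, plv
```

Phase-jittered trials drive both metrics down even when single-trial power is unchanged, which is why evoked power and PLV can dissociate from baseline spontaneous power.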
Delayed Auditory Feedback and Movement
ERIC Educational Resources Information Center
Pfordresher, Peter Q.; Dalla Bella, Simone
2011-01-01
It is well known that timing of rhythm production is disrupted by delayed auditory feedback (DAF), and that disruption varies with delay length. We tested the hypothesis that disruption depends on the state of the movement trajectory at the onset of DAF. Participants tapped isochronous rhythms at a rate specified by a metronome while hearing DAF…
A Circuit for Motor Cortical Modulation of Auditory Cortical Activity
Nelson, Anders; Schneider, David M.; Takatoh, Jun; Sakurai, Katsuyasu; Wang, Fan
2013-01-01
Normal hearing depends on the ability to distinguish self-generated sounds from other sounds, and this ability is thought to involve neural circuits that convey copies of motor command signals to various levels of the auditory system. Although such interactions at the cortical level are believed to facilitate auditory comprehension during movements and drive auditory hallucinations in pathological states, the synaptic organization and function of circuitry linking the motor and auditory cortices remain unclear. Here we describe experiments in the mouse that characterize circuitry well suited to transmit motor-related signals to the auditory cortex. Using retrograde viral tracing, we established that neurons in superficial and deep layers of the medial agranular motor cortex (M2) project directly to the auditory cortex and that the axons of some of these deep-layer cells also target brainstem motor regions. Using in vitro whole-cell physiology, optogenetics, and pharmacology, we determined that M2 axons make excitatory synapses in the auditory cortex but exert a primarily suppressive effect on auditory cortical neuron activity mediated in part by feedforward inhibition involving parvalbumin-positive interneurons. Using in vivo intracellular physiology, optogenetics, and sound playback, we also found that directly activating M2 axon terminals in the auditory cortex suppresses spontaneous and stimulus-evoked synaptic activity in auditory cortical neurons and that this effect depends on the relative timing of motor cortical activity and auditory stimulation. These experiments delineate the structural and functional properties of a corticocortical circuit that could enable movement-related suppression of auditory cortical activity. PMID:24005287
Olulade, O; Hu, S; Gonzalez-Castillo, J; Tamer, G G; Luh, W-M; Ulmer, J L; Talavage, T M
2011-07-01
A confounding factor in auditory functional magnetic resonance imaging (fMRI) experiments is the presence of the acoustic noise inherently associated with the echo planar imaging acquisition technique. Previous studies have demonstrated that this noise can induce unwanted neuronal responses that can mask stimulus-induced responses. Similarly, activation accumulated over multiple stimuli has been demonstrated to elevate the baseline, thus reducing the dynamic range available for subsequent responses. To best evaluate responses to auditory stimuli, it is necessary to account for the presence of all recent acoustic stimulation, beginning with an understanding of the attenuating effects brought about by interactions among induced unwanted neuronal responses and responses to desired auditory stimuli. This study focuses on the characterization of the duration of this temporal memory and qualitative assessment of the associated response attenuation. Two experimental parameters--inter-stimulus interval (ISI) and repetition time (TR)--were varied during an fMRI experiment in which participants were asked to passively attend to an auditory stimulus. Results present evidence of a state-dependent interaction between induced responses. As expected, attenuating effects of these interactions become less significant as TR and ISI increase and, in contrast to previous work, persist up to 18 s after a stimulus presentation. Copyright © 2011 Elsevier B.V. All rights reserved.
Electrophysiological measurement of human auditory function
NASA Technical Reports Server (NTRS)
Galambos, R.
1975-01-01
Knowledge of the human auditory evoked response is reviewed, including methods of determining this response, the way particular changes in the stimulus are coupled to specific changes in the response, and how the state of mind of the listener will influence the response. Important practical applications of this basic knowledge are discussed. Measurement of the brainstem evoked response, for instance, can state unequivocally how well the peripheral auditory apparatus functions. It might then be developed into a useful hearing test, especially for infants and preverbal or nonverbal children. Clinical applications of measuring the brain waves evoked 100 msec and later after the auditory stimulus are undetermined. These waves are clearly related to brain events associated with cognitive processing of acoustic signals, since their properties depend upon where the listener directs his attention and whether and when he expects the signal.
Matragrano, Lisa L.; Sanford, Sara E.; Salvante, Katrina G.; Beaulieu, Michaël; Sockman, Keith W.; Maney, Donna L.
2011-01-01
Because no organism lives in an unchanging environment, sensory processes must remain plastic so that in any context, they emphasize the most relevant signals. As the behavioral relevance of sociosexual signals changes along with reproductive state, the perception of those signals is altered by reproductive hormones such as estradiol (E2). We showed previously that in white-throated sparrows, immediate early gene responses in the auditory pathway of females are selective for conspecific male song only when plasma E2 is elevated to breeding-typical levels. In this study, we looked for evidence that E2-dependent modulation of auditory responses is mediated by serotonergic systems. In female nonbreeding white-throated sparrows treated with E2, the density of fibers immunoreactive for serotonin transporter innervating the auditory midbrain and rostral auditory forebrain increased compared with controls. E2 treatment also increased the concentration of the serotonin metabolite 5-HIAA in the caudomedial mesopallium of the auditory forebrain. In a second experiment, females exposed to 30 min of conspecific male song had higher levels of 5-HIAA in the caudomedial nidopallium of the auditory forebrain than birds not exposed to song. Overall, we show that in this seasonal breeder, (1) serotonergic fibers innervate auditory areas; (2) the density of those fibers is higher in females with breeding-typical levels of E2 than in nonbreeding, untreated females; and (3) serotonin is released in the auditory forebrain within minutes in response to conspecific vocalizations. Our results are consistent with the hypothesis that E2 acts via serotonin systems to alter auditory processing. PMID:21942431
Influence of anxiety, depression and looming cognitive style on auditory looming perception.
Riskind, John H; Kleiman, Evan M; Seifritz, Erich; Neuhoff, John
2014-01-01
Previous studies show that individuals with an anticipatory auditory looming bias over-estimate the closeness of a sound source that approaches them. Our present study bridges cognitive clinical and perception research, and provides evidence that anxiety symptoms and a particular putative cognitive style that creates vulnerability for anxiety (looming cognitive style, or LCS) are related to how people perceive this ecologically fundamental auditory warning signal. The effects of anxiety symptoms on the anticipatory auditory looming effect synergistically depend on the dimension of perceived personal danger assessed by the LCS (physical or social threat). Depression symptoms, in contrast to anxiety symptoms, predict a diminution of the auditory looming bias. Findings broaden our understanding of the links between cognitive-affective states and auditory perception processes and lend further support to past studies providing evidence that the looming cognitive style is related to bias in threat processing. Copyright © 2013 Elsevier Ltd. All rights reserved.
Responses in Rat Core Auditory Cortex are Preserved during Sleep Spindle Oscillations
Sela, Yaniv; Vyazovskiy, Vladyslav V.; Cirelli, Chiara; Tononi, Giulio; Nir, Yuval
2016-01-01
Study Objectives: Sleep is defined as a reversible state of reduction in sensory responsiveness and immobility. A long-standing hypothesis suggests that a high arousal threshold during non-rapid eye movement (NREM) sleep is mediated by sleep spindle oscillations, impairing thalamocortical transmission of incoming sensory stimuli. Here we set out to test this idea directly by examining sensory-evoked neuronal spiking activity during natural sleep. Methods: We compared neuronal (n = 269) and multiunit activity (MUA), as well as local field potentials (LFP) in rat core auditory cortex (A1) during NREM sleep, comparing responses to sounds depending on the presence or absence of sleep spindles. Results: We found that sleep spindles robustly modulated the timing of neuronal discharges in A1. However, responses to sounds were nearly identical for all measured signals including isolated neurons, MUA, and LFPs (all differences < 10%). Furthermore, in 10% of trials, auditory stimulation led to an early termination of the sleep spindle oscillation around 150–250 msec following stimulus onset. Finally, active ON states and inactive OFF periods during slow waves in NREM sleep affected the auditory response in opposite ways, depending on stimulus intensity. Conclusions: Responses in core auditory cortex are well preserved regardless of sleep spindles recorded in that area, suggesting that thalamocortical sensory relay remains functional during sleep spindles, and that sensory disconnection in sleep is mediated by other mechanisms. Citation: Sela Y, Vyazovskiy VV, Cirelli C, Tononi G, Nir Y. Responses in rat core auditory cortex are preserved during sleep spindle oscillations. SLEEP 2016;39(5):1069–1082. PMID:26856904
Woodruff, P W; Wright, I C; Bullmore, E T; Brammer, M; Howard, R J; Williams, S C; Shapleske, J; Rossell, S; David, A S; McGuire, P K; Murray, R M
1997-12-01
The authors explored whether abnormal functional lateralization of temporal cortical language areas in schizophrenia was associated with a predisposition to auditory hallucinations and whether the auditory hallucinatory state would reduce the temporal cortical response to external speech. Functional magnetic resonance imaging was used to measure the blood-oxygenation-level-dependent signal induced by auditory perception of speech in three groups of male subjects: eight schizophrenic patients with a history of auditory hallucinations (trait-positive), none of whom was currently hallucinating; seven schizophrenic patients without such a history (trait-negative); and eight healthy volunteers. Seven schizophrenic patients were also examined while they were actually experiencing severe auditory verbal hallucinations and again after their hallucinations had diminished. Voxel-by-voxel comparison of the median power of subjects' responses to periodic external speech revealed that this measure was reduced in the left superior temporal gyrus but increased in the right middle temporal gyrus in the combined schizophrenic groups relative to the healthy comparison group. Comparison of the trait-positive and trait-negative patients revealed no clear difference in the power of temporal cortical activation. Comparison of patients when experiencing severe hallucinations and when hallucinations were mild revealed reduced responsivity of the temporal cortex, especially the right middle temporal gyrus, to external speech during the former state. These results suggest that schizophrenia is associated with a reduced left and increased right temporal cortical response to auditory perception of speech, with little distinction between patients who differ in their vulnerability to hallucinations. 
The auditory hallucinatory state is associated with reduced activity in temporal cortical regions that overlap with those that normally process external speech, possibly because of competition for common neurophysiological resources.
Lustenberger, Caroline; Patel, Yogi A; Alagapan, Sankaraleengam; Page, Jessica M; Price, Betsy; Boyle, Michael R; Fröhlich, Flavio
2018-04-01
Auditory rhythmic sensory stimulation modulates brain oscillations by increasing phase-locking to the temporal structure of the stimuli and by increasing the power of specific frequency bands, resulting in Auditory Steady State Responses (ASSR). The ASSR is altered in different diseases of the central nervous system such as schizophrenia. However, in order to use the ASSR as biological markers for disease states, it needs to be understood how different vigilance states and underlying brain activity affect the ASSR. Here, we compared the effects of auditory rhythmic stimuli on EEG brain activity during wake and NREM sleep, investigated the influence of the presence of dominant sleep rhythms on the ASSR, and delineated the topographical distribution of these modulations. Participants (14 healthy males, 20-33 years) completed on the same day a 60 min nap session and two 30 min wakefulness sessions (before and after the nap). During these sessions, amplitude modulated (AM) white noise auditory stimuli at different frequencies were applied. High-density EEG was continuously recorded and time-frequency analyses were performed to assess ASSR during wakefulness and NREM periods. Our analysis revealed that, depending on the electrode location, stimulation frequency applied, and window/frequencies analysed, the ASSR was significantly modulated by sleep pressure (before and after sleep), vigilance state (wake vs. NREM sleep), and the presence of slow wave activity and sleep spindles. Furthermore, AM stimuli increased spindle activity during NREM sleep but not during wakefulness. Thus, (1) electrode location, sleep history, vigilance state and ongoing brain activity need to be carefully considered when investigating ASSR and (2) auditory rhythmic stimuli during sleep might represent a powerful tool to boost sleep spindles. Copyright © 2017 Elsevier Inc. All rights reserved.
Dynamic crossmodal links revealed by steady-state responses in auditory-visual divided attention.
de Jong, Ritske; Toffanin, Paolo; Harbers, Marten
2010-01-01
Frequency tagging has often been used to study intramodal attention but not intermodal attention. We used EEG and simultaneous frequency tagging of auditory and visual sources to study intermodal focused and divided attention in detection and discrimination performance. Divided-attention costs were smaller, but still significant, in detection than in discrimination. The auditory steady-state response (SSR) showed no effects of attention at frontocentral locations, but did so at occipital locations where it was evident only when attention was divided between audition and vision. Similarly, the visual SSR at occipital locations was substantially enhanced when attention was divided across modalities. Both effects were equally present in detection and discrimination. We suggest that both effects reflect a common cause: An attention-dependent influence of auditory information processing on early cortical stages of visual information processing, mediated by enhanced effective connectivity between the two modalities under conditions of divided attention. Copyright (c) 2009 Elsevier B.V. All rights reserved.
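Frequency tagging rests on a simple fact: a stimulus amplitude-modulated or flickered at frequency f drives a steady-state spectral peak at f, so the response to each modality can be read out separately from one EEG spectrum. A minimal sketch of that readout (the tag frequencies below are illustrative, not those used in the study):

```python
import numpy as np

def tagged_amplitudes(eeg, fs, tags=(20.0, 15.0)):
    """Spectral amplitude at each tagging frequency.

    Example tags: an auditory stream AM-modulated at 20 Hz and a visual
    stream flickering at 15 Hz (both frequencies are assumptions here).
    """
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    # Scale |FFT| so an on-bin sinusoid of amplitude A reads out as A.
    amp = np.abs(np.fft.rfft(eeg)) * 2.0 / len(eeg)
    return {f: amp[np.argmin(np.abs(freqs - f))] for f in tags}
```

Comparing these per-tag amplitudes across focused versus divided-attention conditions is the essence of the SSR analysis described above.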
Auditory modulation of wind-elicited walking behavior in the cricket Gryllus bimaculatus.
Fukutomi, Matasaburo; Someya, Makoto; Ogawa, Hiroto
2015-12-01
Animals flexibly change their locomotion triggered by an identical stimulus depending on the environmental context and behavioral state. This indicates that additional sensory inputs in different modality from the stimulus triggering the escape response affect the neuronal circuit governing that behavior. However, how the spatio-temporal relationships between these two stimuli effect a behavioral change remains unknown. We studied this question, using crickets, which respond to a short air-puff by oriented walking activity mediated by the cercal sensory system. In addition, an acoustic stimulus, such as conspecific 'song' received by the tympanal organ, elicits a distinct oriented locomotion termed phonotaxis. In this study, we examined the cross-modal effects on wind-elicited walking when an acoustic stimulus was preceded by an air-puff and tested whether the auditory modulation depends on the coincidence of the direction of both stimuli. A preceding 10 kHz pure tone biased the wind-elicited walking in a backward direction and elevated a threshold of the wind-elicited response, whereas other movement parameters, including turn angle, reaction time, walking speed and distance were unaffected. The auditory modulations, however, did not depend on the coincidence of the stimulus directions. A preceding sound consistently altered the wind-elicited walking direction and response probability throughout the experimental sessions, meaning that the auditory modulation did not result from previous experience or associative learning. These results suggest that the cricket nervous system is able to integrate auditory and air-puff stimuli, and modulate the wind-elicited escape behavior depending on the acoustic context. © 2015. Published by The Company of Biologists Ltd.
Hanson, Jessica L.; Hurley, Laura M.
2014-01-01
In the face of changing behavioral situations, plasticity of sensory systems can be a valuable mechanism to facilitate appropriate behavioral responses. In the auditory system, the neurotransmitter serotonin is an important messenger for context-dependent regulation because it is sensitive to both external events and internal state, and it modulates neural activity. In male mice, serotonin increases in the auditory midbrain region, the inferior colliculus (IC), in response to changes in behavioral context such as restriction stress and social contact. Female mice have not been measured in similar contexts, although the serotonergic system is sexually dimorphic in many ways. In the present study, we investigated the effects of sex, experience and estrous state on the fluctuation of serotonin in the IC across contexts, as well as potential relationships between behavior and serotonin. Contrary to our expectation, there were no sex differences in increases of serotonin in response to a restriction stimulus. Both sexes had larger increases in second exposures, suggesting experience plays a role in serotonergic release in the IC. In females, serotonin increased during both restriction and interactions with males; however, the increase was more rapid during restriction. There was no effect of female estrous phase on the serotonergic change for either context, but serotonin was related to behavioral activity in females interacting with males. These results show that changes in behavioral context induce increases in serotonin in the IC by a mechanism that appears to be uninfluenced by sex or estrous state, but may depend on experience and behavioral activity. PMID:24198252
Fergus, Daniel J; Feng, Ni Y; Bass, Andrew H
2015-10-14
Successful animal communication depends on a receiver's ability to detect a sender's signal. Exemplars of adaptive sender-receiver coupling include acoustic communication, often important in the context of seasonal reproduction. During the reproductive summer season, both male and female midshipman fish (Porichthys notatus) exhibit similar increases in the steroid-dependent frequency sensitivity of the saccule, the main auditory division of the inner ear. This form of auditory plasticity enhances detection of the higher frequency components of the multi-harmonic, long-duration advertisement calls produced repetitively by males during summer nights of peak vocal and spawning activity. The molecular basis of this seasonal auditory plasticity has not been fully resolved. Here, we utilize an unbiased transcriptomic RNA sequencing approach to identify differentially expressed transcripts within the saccule's hair cell epithelium of reproductive summer and non-reproductive winter fish. We assembled 74,027 unique transcripts from our saccular epithelial sequence reads. Of these, 6.4 % and 3.0 % were upregulated in the reproductive and non-reproductive saccular epithelium, respectively. Gene ontology (GO) term enrichment analyses of the differentially expressed transcripts showed that the reproductive saccular epithelium was transcriptionally, translationally, and metabolically more active than the non-reproductive epithelium. Furthermore, the expression of a specific suite of candidate genes, including ion channels and components of steroid-signaling pathways, was upregulated in the reproductive compared to the non-reproductive saccular epithelium. We found reported auditory functions for 14 candidate genes upregulated in the reproductive midshipman saccular epithelium, 8 of which are enriched in mouse hair cells, validating their hair cell-specific functions across vertebrates. 
We identified a suite of differentially expressed genes belonging to neurotransmission and steroid-signaling pathways, consistent with previous work showing the importance of these characters in regulating hair cell auditory sensitivity in midshipman fish and, more broadly, vertebrates. The results were also consistent with auditory hair cells being generally more physiologically active when animals are in a reproductive state, a time of enhanced sensory-motor coupling between the auditory periphery and the upper harmonics of vocalizations. Together with several new candidate genes, our results identify discrete patterns of gene expression linked to frequency- and steroid-dependent plasticity of hair cell auditory sensitivity.
Allopregnanolone induces state-dependent fear via the bed nucleus of the stria terminalis
Acca, Gillian M.; Mathew, Abel S.; Jin, Jingji; Maren, Stephen; Nagaya, Naomi
2017-01-01
Gonadal steroids and their metabolites have been shown to be important modulators of emotional behavior. Allopregnanolone (ALLO), for example, is a metabolite of progesterone that has been linked to anxiety-related disorders such as posttraumatic stress disorder. In rodents, it has been shown to reduce anxiety in a number of behavioral paradigms including Pavlovian fear conditioning. We have recently found that expression of conditioned contextual (but not auditory) freezing in rats can be suppressed by infusion of ALLO into the bed nucleus of the stria terminalis (BNST). To further explore the nature of this effect, we infused ALLO into the BNST of male rats prior to both conditioning and testing. We found that suppression of contextual fear occurred when the hormone was present during either conditioning or testing but not during both procedures, suggesting that ALLO acts in a state-dependent manner within the BNST. A shift in interoceptive context during testing for animals conditioned under ALLO provided further support for this mechanism of hormonal action on contextual fear. Interestingly, infusions of ALLO into the basolateral amygdala produced a state-independent suppression of both conditioned contextual and auditory freezing. Altogether, these results suggest that ALLO can influence the acquisition and expression of fear memories by both state-dependent and state-independent mechanisms. PMID:28104355
Opposing and following responses in sensorimotor speech control: Why responses go both ways.
Franken, Matthias K; Acheson, Daniel J; McQueen, James M; Hagoort, Peter; Eisner, Frank
2018-06-04
When talking, speakers continuously monitor and use the auditory feedback of their own voice to control and inform speech production processes. When speakers are provided with auditory feedback that is perturbed in real time, most of them compensate for this by opposing the feedback perturbation. But some responses follow the perturbation. In the present study, we investigated whether the state of the speech production system at perturbation onset may determine what type of response (opposing or following) is made. The results suggest that whether a perturbation-related response is opposing or following depends on ongoing fluctuations of the production system: The system initially responds by doing the opposite of what it was doing. This effect and the nontrivial proportion of following responses suggest that current production models are inadequate: They need to account for why responses to unexpected sensory feedback depend on the production system's state at the time of perturbation.
Cortical evoked responses associated with arousal from sleep.
Phillips, Derrick J; Schei, Jennifer L; Meighan, Peter C; Rector, David M
2011-01-01
To determine if low-level intermittent auditory stimuli have the potential to disrupt sleep during 24-h recordings, we assessed arousal occurrence at varying stimulus intensities. Additionally, if stimulus-generated evoked response potential (ERP) components provide a metric of the underlying cortical state, then a particular ERP structure may precede an arousal. Physiological electrodes measuring EEG, EKG, and EMG were implanted into 5 adult female Sprague-Dawley rats. We delivered auditory stimuli of varying intensities (50-75 dBa sound pressure level, SPL) at random intervals of 6-12 s over a 24-hour period. Recordings were divided into 2-s epochs and scored for sleep/wake state. Following each stimulus, we identified whether the animal stayed asleep or woke. We then sorted the stimuli depending on prior and post-stimulus state, and measured ERP components. Auditory stimuli did not produce a significant increase in the number of arousals compared to silent control periods. Overall, arousal from REM sleep occurred more often than from quiet sleep. ERPs preceding an arousal had decreased mean area and shorter N1 latency. Low-level auditory stimuli did not fragment the animals' sleep, since we observed no significant change in arousal occurrence. Arousals that occurred within 4 s of a stimulus exhibited ERP mean area and latency features similar to those of ERPs generated during wake, indicating that the underlying cortical tissue state may contribute to the physiological conditions required for arousal.
Sisneros, Joseph A
2009-03-01
The plainfin midshipman fish (Porichthys notatus Girard, 1854) is a vocal species of batrachoidid fish that generates acoustic signals for intraspecific communication during social and reproductive activity and has become a good model for investigating the neural and endocrine mechanisms of vocal-acoustic communication. Reproductively active female plainfin midshipman fish use their auditory sense to detect and locate "singing" males, which produce a multiharmonic advertisement call to attract females for spawning. The seasonal onset of male advertisement calling in the midshipman fish coincides with an increase in the range of frequency sensitivity of the female's inner ear saccule, the main organ of hearing, thus leading to enhanced encoding of the dominant frequency components of male advertisement calls. Non-reproductive females treated with either testosterone or 17β-estradiol exhibit a dramatic increase in the inner ear's frequency sensitivity that mimics the reproductive female's auditory phenotype and leads to an increased detection of the male's advertisement call. This novel form of auditory plasticity provides an adaptable mechanism that enhances coupling between sender and receiver in vocal communication. This review focuses on recent evidence for seasonal reproductive-state and steroid-dependent plasticity of auditory frequency sensitivity in the peripheral auditory system of the midshipman fish. The potential steroid-dependent mechanism(s) that lead to this novel form of auditory and behavioral plasticity are also discussed. © 2009 ISZS, Blackwell Publishing and IOZ/CAS.
Learning-dependent plasticity in human auditory cortex during appetitive operant conditioning.
Puschmann, Sebastian; Brechmann, André; Thiel, Christiane M
2013-11-01
Animal experiments provide evidence that learning to associate an auditory stimulus with a reward causes representational changes in auditory cortex. However, most studies did not investigate the temporal formation of learning-dependent plasticity during the task but rather compared auditory cortex receptive fields before and after conditioning. We here present a functional magnetic resonance imaging study on learning-related plasticity in the human auditory cortex during operant appetitive conditioning. Participants had to learn to associate a specific category of frequency-modulated tones with a reward. Only participants who learned this association developed learning-dependent plasticity in left auditory cortex over the course of the experiment. No differential responses to reward predicting and nonreward predicting tones were found in auditory cortex in nonlearners. In addition, learners showed similar learning-induced differential responses to reward-predicting and nonreward-predicting tones in the ventral tegmental area and the nucleus accumbens, two core regions of the dopaminergic neurotransmitter system. This may indicate a dopaminergic influence on the formation of learning-dependent plasticity in auditory cortex, as it has been suggested by previous animal studies. Copyright © 2012 Wiley Periodicals, Inc.
A novel hybrid auditory BCI paradigm combining ASSR and P300.
Kaongoen, Netiwit; Jo, Sungho
2017-03-01
Brain-computer interface (BCI) is a technology that provides an alternative way of communication by translating brain activities into digital commands. Because vision-dependent BCIs cannot be used by patients with visual impairment, auditory stimuli have been used to substitute for the conventional visual stimuli. This paper introduces a hybrid auditory BCI that combines the auditory steady state response (ASSR) and a spatial-auditory P300 BCI to improve the performance of the auditory BCI system. The system works by simultaneously presenting auditory stimuli with different pitches and amplitude modulation (AM) frequencies to the user, with beep sounds occurring randomly among all sound sources. Attention to different auditory stimuli yields different ASSRs, and beep sounds trigger the P300 response when they occur in the target channel, so the system can utilize both features for classification. The proposed ASSR/P300 hybrid auditory BCI system achieves 85.33% accuracy with a 9.11 bits/min information transfer rate (ITR) on a binary classification problem, outperforming both the P300 BCI system (74.58% accuracy, 4.18 bits/min ITR) and the ASSR BCI system (66.68% accuracy, 2.01 bits/min ITR). The system is completely vision-independent. This work demonstrates that combining ASSR and P300 BCIs into a hybrid system can yield better performance and could help in the development of future auditory BCIs. Copyright © 2017 Elsevier B.V. All rights reserved.
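The abstract does not state how the ITR figures were computed; a common choice in BCI reports is the Wolpaw formula, sketched below under that assumption (the function name and the selection rate are illustrative, not taken from the paper).

```python
import math

def wolpaw_itr(n_classes, accuracy, selections_per_min):
    """Wolpaw information transfer rate in bits/min.

    A standard BCI metric; assumed here, since the paper's exact
    ITR definition is not given in the abstract."""
    p = accuracy
    bits = math.log2(n_classes)  # bits per selection at perfect accuracy
    if 0 < p < 1:
        # Penalty for errors, spread uniformly over the wrong classes
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n_classes - 1))
    return bits * selections_per_min

# Bits per selection implied by the reported 85.33% binary accuracy:
print(round(wolpaw_itr(2, 0.8533, 1), 3))  # → 0.398
```

At roughly 0.398 bits per selection, the reported 9.11 bits/min would correspond to about 23 selections per minute (one selection every ~2.6 s), if the Wolpaw formula was indeed the one used.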
Strait, Dana L.; Kraus, Nina
2013-01-01
Experience-dependent characteristics of auditory function, especially with regard to speech-evoked auditory neurophysiology, have garnered increasing attention in recent years. This interest stems from both pragmatic and theoretical concerns as it bears implications for the prevention and remediation of language-based learning impairment in addition to providing insight into mechanisms engendering experience-dependent changes in human sensory function. Musicians provide an attractive model for studying the experience-dependency of auditory processing in humans due to their distinctive neural enhancements compared to nonmusicians. We have only recently begun to address whether these enhancements are observable early in life, during the initial years of music training when the auditory system is under rapid development, as well as later in life, after the onset of the aging process. Here we review neural enhancements in musically trained individuals across the life span in the context of cellular mechanisms that underlie learning, identified in animal models. Musicians’ subcortical physiologic enhancements are interpreted according to a cognitive framework for auditory learning, providing a model by which to study mechanisms of experience-dependent changes in auditory function in humans. PMID:23988583
Toward a reliable gaze-independent hybrid BCI combining visual and natural auditory stimuli.
Barbosa, Sara; Pires, Gabriel; Nunes, Urbano
2016-03-01
Brain computer interfaces (BCIs) are one of the last communication options for patients in the locked-in state (LIS). For complete LIS patients, interfaces must be gaze-independent due to their eye impairment. However, unimodal gaze-independent approaches typically present levels of performance substantially lower than gaze-dependent approaches. The combination of multimodal stimuli has been pointed as a viable way to increase users' performance. A hybrid visual and auditory (HVA) P300-based BCI combining simultaneously visual and auditory stimulation is proposed. Auditory stimuli are based on natural meaningful spoken words, increasing stimuli discrimination and decreasing user's mental effort in associating stimuli to the symbols. The visual part of the interface is covertly controlled ensuring gaze-independency. Four conditions were experimentally tested by 10 healthy participants: visual overt (VO), visual covert (VC), auditory (AU) and covert HVA. Average online accuracy for the hybrid approach was 85.3%, which is more than 32% over VC and AU approaches. Questionnaires' results indicate that the HVA approach was the less demanding gaze-independent interface. Interestingly, the P300 grand average for HVA approach coincides with an almost perfect sum of P300 evoked separately by VC and AU tasks. The proposed HVA-BCI is the first solution simultaneously embedding natural spoken words and visual words to provide a communication lexicon. Online accuracy and task demand of the approach compare favorably with state-of-the-art. The proposed approach shows that the simultaneous combination of visual covert control and auditory modalities can effectively improve the performance of gaze-independent BCIs. Copyright © 2015 Elsevier B.V. All rights reserved.
Brain function assessment in different conscious states.
Ozgoren, Murat; Bayazit, Onur; Kocaaslan, Sibel; Gokmen, Necati; Oniz, Adile
2010-06-03
The study of brain functioning is a major challenge in neuroscience, as the human brain performs dynamic, ever-changing information processing. The challenge is compounded in conditions where the brain undergoes major changes, the so-called different conscious states. Although an exact definition of consciousness remains elusive, there are certain conditions whose descriptions have reached a consensus. Sleep and anesthesia are conditions that are separable from each other and from wakefulness. The aim of our group has been to tackle the issue of brain functioning by setting up similar research conditions for these three conscious states. To achieve this goal, we designed an auditory stimulation battery with changing conditions, recorded during 40-channel EEG polygraph (Nuamps) sessions. The stimuli (modified mismatch, auditory evoked, etc.) were administered both in the operating room and in the sleep lab via an Embedded Interactive Stimulus Unit developed in our lab. The overall study provided results for three domains of consciousness. To monitor the changes, we incorporated Bispectral Index (BIS) monitoring into both the sleep and anesthesia conditions. The first-stage results provided a basic understanding of these altered states: auditory stimuli were successfully processed in both light and deep sleep stages. Anesthesia produces a sudden change in brain responsiveness; therefore, dosage-dependent anesthetic administration proved useful. Auditory processing was exemplified by targeting the N1 wave, with a thorough analysis from spectrogram to sLORETA. The frequency components were observed to shift throughout the stages. Propofol administration and the deeper sleep stages both resulted in a decrease of the N1 component. sLORETA revealed similar activity at BA7 in sleep (BIS 70) and at a target propofol concentration of 1.2 microg/mL. 
The current study utilized a similar stimulation and recording system and incorporated BIS-dependent values to validate a common approach to sleep and anesthesia. Accordingly, the brain exhibits a complex behavioral pattern, dynamically changing its responsiveness in accordance with stimulations and states.
Sensitivity and specificity of auditory steady‐state response testing
Rabelo, Camila Maia; Schochat, Eliane
2011-01-01
INTRODUCTION: The ASSR test is an electrophysiological test that evaluates, among other aspects, neural synchrony, based on the frequency or amplitude modulation of tones. OBJECTIVE: The aim of this study was to determine the sensitivity and specificity of auditory steady-state response testing in detecting lesions and dysfunctions of the central auditory nervous system. METHODS: Seventy volunteers were divided into three groups: those with normal hearing; those with mesial temporal sclerosis; and those with central auditory processing disorder. All subjects underwent auditory steady-state response testing of both ears at 500 Hz and 2000 Hz (frequency modulation, 46 Hz). The difference between auditory steady-state response-estimated thresholds and behavioral thresholds (audiometric evaluation) was calculated. RESULTS: Estimated thresholds were significantly higher in the mesial temporal sclerosis group than in the normal and central auditory processing disorder groups. In addition, relative to the normal group, the difference between auditory steady-state response-estimated and behavioral thresholds was greater in the mesial temporal sclerosis group than in the central auditory processing disorder group. DISCUSSION: Research focusing on central auditory nervous system (CANS) lesions has shown that individuals with CANS lesions present a greater difference between ASSR-estimated thresholds and actual behavioral thresholds, with ASSR-estimated thresholds being significantly worse than behavioral thresholds in subjects with CANS insults. This is most likely because the disorder prevents the transmission of the sound stimulus from being in phase with the received stimulus, resulting in asynchronous transmitter release. Another possible cause of the greater difference between the ASSR-estimated thresholds and the behavioral thresholds is impaired temporal resolution. 
CONCLUSIONS: The overall sensitivity of auditory steady‐state response testing was lower than its overall specificity. Although the overall specificity was high, it was lower in the central auditory processing disorder group than in the mesial temporal sclerosis group. Overall sensitivity was also lower in the central auditory processing disorder group than in the mesial temporal sclerosis group. PMID:21437442
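For readers weighing the sensitivity and specificity comparisons above, both quantities are simple ratios over diagnostic confusion counts. The sketch below uses made-up counts purely for illustration; none of the numbers come from the study.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts, chosen only to mirror the qualitative pattern
# reported above (overall sensitivity lower than overall specificity):
sens, spec = sensitivity_specificity(tp=18, fn=12, tn=35, fp=5)
print(sens, spec)  # → 0.6 0.875
```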
Bhandiwad, Ashwin A; Whitchurch, Elizabeth A; Colleye, Orphal; Zeddies, David G; Sisneros, Joseph A
2017-03-01
Adult female and nesting (type I) male midshipman fish (Porichthys notatus) exhibit an adaptive form of auditory plasticity for the enhanced detection of social acoustic signals. Whether this adaptive plasticity also occurs in "sneaker" type II males is unknown. Here, we characterize auditory-evoked potentials recorded from hair cells in the saccule of reproductive and non-reproductive "sneaker" type II male midshipman to determine whether this sexual phenotype exhibits seasonal, reproductive state-dependent changes in auditory sensitivity and frequency response to behaviorally relevant auditory stimuli. Saccular potentials were recorded from the middle and caudal region of the saccule while sound was presented via an underwater speaker. Our results indicate saccular hair cells from reproductive type II males had thresholds based on measures of sound pressure and acceleration (re 1 µPa and 1 m s⁻², respectively) that were ~8-21 dB lower than non-reproductive type II males across a broad range of frequencies, which include the dominant higher frequencies in type I male vocalizations. This increase in type II auditory sensitivity may potentially facilitate eavesdropping by sneaker males and their assessment of vocal type I males for the selection of cuckoldry sites during the breeding season.
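To give a feel for the ~8-21 dB threshold differences reported above: a level difference in dB converts to a sound-pressure ratio as 10^(dB/20). The helper below is a generic conversion sketch, not code from the study.

```python
def db_to_pressure_ratio(db):
    """Sound-pressure ratio for a level difference in dB
    (20 * log10(ratio) = dB, so ratio = 10 ** (dB / 20))."""
    return 10 ** (db / 20)

# The reported 8-21 dB lower thresholds correspond to roughly
# 2.5x to 11x lower sound pressures at detection threshold:
print(round(db_to_pressure_ratio(8), 2), round(db_to_pressure_ratio(21), 1))  # → 2.51 11.2
```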
Geissler, Diana B.; Schmidt, H. Sabine; Ehret, Günter
2016-01-01
Activation of the auditory cortex (AC) by a given sound pattern is plastic, depending, in largely unknown ways, on the physiological state and the behavioral context of the receiving animal and on the receiver's experience with the sounds. Such plasticity can be inferred when house mouse mothers respond maternally to pup ultrasounds right after parturition while naïve females have to learn to respond. Here we use c-FOS immunocytochemistry to quantify highly activated neurons in the AC fields and layers of seven groups of mothers and naïve females who have different knowledge about, and are differently motivated to respond to, acoustic models of pup ultrasounds of different behavioral significance. Profiles of FOS-positive cells in the AC primary fields (AI, AAF), the ultrasonic field (UF), the secondary field (AII), and the dorsoposterior field (DP) suggest that activation reflects: in AI, AAF, and UF, the integration of sound properties with animal state-dependent factors; in the higher-order field AII, the news value of a given sound in the behavioral context; and in the higher-order field DP, the level of maternal motivation and, via a left-hemisphere activation advantage, the recognition of the meaning of sounds in the given context. Anesthesia reduced activation in all fields, especially in cortical layers 2/3. Thus, plasticity in the AC is field-specific, preparing different outputs of AC fields in the process of perceiving, recognizing, and responding to communication sounds. Further, the activation profiles of the auditory cortical fields suggest a differentiation between brains hormonally primed to know (mothers) and brains that acquired knowledge via implicit learning (naïve females). In this way, auditory cortical activation discriminates between instinctive (mothers) and learned (naïve females) cognition. PMID:27013959
Tzaneva, L
1996-09-01
From the audiological point of view, the discomfort threshold is not yet well understood, and its significance for work physiology and hygiene is insufficiently clarified. This paper discusses the results of a study of the discomfort threshold in 385 operators from the State Company "Kremikovtzi", divided into 4 groups (3 groups according to length of service and one control group). The most prominent changes were found in operators with tonal auditory thresholds elevated to 45 dB and over 50 dB, with high confidence probability. The observed changes fall into 3 groups: 1. elevated tonal auditory threshold (up to 30 dB) without a decrease of the discomfort threshold; 2. decreased discomfort threshold (by about 15-20 dB) with an elevated tonal auditory threshold (up to 45 dB); 3. decreased discomfort threshold with a markedly elevated (over 50 dB) tonal auditory threshold. The auditory range of operators in groups III and IV (those with the longest length of service) is narrowed, and distorted in the latter. This pathophysiological phenomenon can be explained by an enhanced effect of sound irritation and the presence of a recruitment phenomenon, with possible involvement of the central part of the auditory analyzer. It is concluded that the discomfort threshold is a sensitive indicator of the state of individual norms for speech-sound-noise discomfort. Comparing the discomfort threshold with hygienic standards and the noise levels at each particular working place can serve as a criterion for professional selection for work under masking noise conditions and for tolerance of such noise, with respect to reaching the individual discomfort level depending on the intensity of the speech-sound-noise signals at a particular working place.
Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu
2015-01-01
Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828
Xiang, Juanjuan; Simon, Jonathan; Elhilali, Mounya
2010-01-01
Processing of complex acoustic scenes depends critically on the temporal integration of sensory information as sounds evolve naturally over time. It has previously been speculated that this process is guided by both innate mechanisms of temporal processing in the auditory system and top-down mechanisms of attention, and possibly other schema-based processes. In an effort to unravel the neural underpinnings of these processes and their role in scene analysis, we combine magnetoencephalography (MEG) with behavioral measures in humans in the context of polyrhythmic tone sequences. While keeping sensory input unchanged, we manipulate subjects' attention to one of two competing rhythmic streams in the same sequence. The results reveal that the neural representation of the attended rhythm is significantly enhanced both in its steady-state power and in its spatial phase coherence relative to its unattended state, closely correlating with its perceptual detectability for each listener. Interestingly, the data reveal a differential efficiency of rhythmic rates on the order of a few hertz during the streaming process, closely following known neural and behavioral measures of temporal modulation sensitivity in the auditory system. These findings establish a direct link, mediated by processes of attention, between known temporal modulation tuning in the auditory system (particularly at the level of auditory cortex) and the temporal integration of perceptual features in a complex acoustic scene. PMID:20826671
Sisneros, Joseph A
2009-08-01
The plainfin midshipman fish, Porichthys notatus, is a seasonally breeding species of marine teleost fish that generates acoustic signals for intraspecific social and reproductive-related communication. Female midshipman use the inner ear saccule as the main acoustic endorgan for hearing to detect and locate vocalizing males that produce multiharmonic advertisement calls during the breeding season. Previous work showed that the frequency sensitivity of midshipman auditory saccular afferents changed seasonally with female reproductive state such that summer reproductive females became better suited than winter nonreproductive females to encode the dominant higher harmonics of the male advertisement calls. The focus of this study was to test the hypothesis that seasonal reproductive-dependent changes in saccular afferent tuning is paralleled by similar changes in saccular sensitivity at the level of the hair-cell receptor. Here, I examined the evoked response properties of midshipman saccular hair cells from winter nonreproductive and summer reproductive females to determine if reproductive state affects the frequency response and threshold of the saccule to behaviorally relevant single tone stimuli. Saccular potentials were recorded from populations of hair cells in vivo while sound was presented by an underwater speaker. Results indicate that saccular hair cells from reproductive females had thresholds that were approximately 8 to 13 dB lower than nonreproductive females across a broad range of frequencies that included the dominant higher harmonic components and the fundamental frequency of the male's advertisement call. These seasonal-reproductive-dependent changes in thresholds varied differentially across the three (rostral, middle, and caudal) regions of the saccule. 
Such reproductive-dependent changes in saccule sensitivity may represent an adaptive plasticity of the midshipman auditory sense to enhance mate detection, recognition, and localization during the breeding season.
NASA Astrophysics Data System (ADS)
Xiao, Jun; Xie, Qiuyou; He, Yanbin; Yu, Tianyou; Lu, Shenglin; Huang, Ningmeng; Yu, Ronghao; Li, Yuanqing
2016-09-01
The Coma Recovery Scale-Revised (CRS-R) is a consistent and sensitive behavioral assessment standard for disorders of consciousness (DOC) patients. However, the CRS-R has limitations due to its dependence on behavioral markers, which has led to a high rate of misdiagnosis. Brain-computer interfaces (BCIs), which directly detect brain activities without any behavioral expression, can be used to evaluate a patient’s state. In this study, we explored the application of BCIs in assisting CRS-R assessments of DOC patients. Specifically, an auditory passive EEG-based BCI system with an oddball paradigm was proposed to facilitate the evaluation of one item of the auditory function scale in the CRS-R - the auditory startle. The results obtained from five healthy subjects validated the efficacy of the BCI system. Nineteen DOC patients participated in the CRS-R and BCI assessments, of which three patients exhibited no responses in the CRS-R assessment but were responsive to auditory startle in the BCI assessment. These results revealed that a proportion of DOC patients who have no behavioral responses in the CRS-R assessment can generate neural responses, which can be detected by our BCI system. Therefore, the proposed BCI may provide more sensitive results than the CRS-R and thus assist CRS-R behavioral assessments.
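The oddball paradigm used in the BCI above rests on contrasting averaged responses to rare deviant versus frequent standard stimuli. A minimal, hypothetical sketch of that contrast on synthetic epochs (the analysis window, epoch counts, and effect size are invented for illustration, not taken from the study):

```python
import numpy as np

def oddball_response(standard, deviant, fs, window=(0.1, 0.3)):
    """Mean standard-vs-deviant ERP difference in a post-stimulus window.

    standard, deviant: (n_epochs, n_samples) arrays time-locked to stimulus
    onset. Returns the difference-wave amplitude averaged over the window,
    a crude stand-in for the mismatch-type response an oddball paradigm targets.
    """
    i0, i1 = int(window[0] * fs), int(window[1] * fs)
    diff = deviant.mean(axis=0) - standard.mean(axis=0)
    return diff[i0:i1].mean()

# Synthetic demo: deviants carry an extra deflection at ~200 ms
rng = np.random.default_rng(1)
fs, n = 500, 250                       # 0.5 s epochs at 500 Hz
t = np.arange(n) / fs
erp = np.exp(-((t - 0.2) ** 2) / (2 * 0.03 ** 2))  # Gaussian bump at 200 ms
standard = rng.standard_normal((200, n))
deviant = rng.standard_normal((100, n)) + erp

print(oddball_response(standard, deviant, fs) > 0.2)  # True
```

Averaging suppresses epoch-to-epoch noise, so a response present only in deviant epochs survives in the difference wave, which is the detectable signature a passive BCI can exploit.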
Enhanced attention-dependent activity in the auditory cortex of older musicians.
Zendel, Benjamin Rich; Alain, Claude
2014-01-01
Musical training improves auditory processing abilities, which correlates with neuroplastic changes in exogenous (input-driven) and endogenous (attention-dependent) components of auditory event-related potentials (ERPs). Evidence suggests that musicians, compared to non-musicians, experience less age-related decline in auditory processing abilities. Here, we investigated whether lifelong musicianship mitigates age-related decline in exogenous or endogenous processing by measuring auditory ERPs in younger and older musicians and non-musicians while they either attended to auditory stimuli or watched a muted subtitled movie of their choice. Both age- and musical training-related differences were observed in the exogenous components; however, the differences between musicians and non-musicians were similar across the lifespan. These results suggest that exogenous auditory ERPs are enhanced in musicians, but decline with age at the same rate. On the other hand, attention-related activity, modeled in the right auditory cortex using a discrete spatiotemporal source analysis, was selectively enhanced in older musicians. This suggests that older musicians use a compensatory strategy to overcome age-related decline in peripheral and exogenous processing of acoustic information. Copyright © 2014 Elsevier Inc. All rights reserved.
Auditory beat stimulation and its effects on cognition and mood States.
Chaieb, Leila; Wilpert, Elke Caroline; Reber, Thomas P; Fell, Juergen
2015-01-01
Auditory beat stimulation may be a promising new tool for the manipulation of cognitive processes and the modulation of mood states. Here, we aim to review the literature examining the most current applications of auditory beat stimulation and its targets. We give a brief overview of research on auditory steady-state responses and its relationship to auditory beat stimulation (ABS). We have summarized relevant studies investigating the neurophysiological changes related to ABS and how they impact upon the design of appropriate stimulation protocols. Focusing on binaural-beat stimulation, we then discuss the role of monaural- and binaural-beat frequencies in cognition and mood states, in addition to their efficacy in targeting disease symptoms. We aim to highlight important points concerning stimulation parameters and try to address why there are often contradictory findings with regard to the outcomes of ABS.
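For readers unfamiliar with the stimuli under review: a binaural beat presents slightly different tone frequencies to each ear, so the "beat" arises only in the brain, whereas a monaural beat mixes both tones in the same channel and so beats physically in amplitude. A simple synthesis sketch (carrier and beat frequencies chosen arbitrarily; function names are not from the reviewed literature):

```python
import numpy as np

def binaural_beat(carrier_hz, beat_hz, duration_s, fs=44100):
    """Left/right sine tones offset by beat_hz; the listener perceives the
    difference frequency as a beat even though neither ear receives it."""
    t = np.arange(int(duration_s * fs)) / fs
    left = np.sin(2 * np.pi * carrier_hz * t)
    right = np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)
    return np.stack([left, right], axis=1)   # (n_samples, 2) stereo signal

def monaural_beat(carrier_hz, beat_hz, duration_s, fs=44100):
    """Both tones mixed into one channel: the amplitude physically beats."""
    t = np.arange(int(duration_s * fs)) / fs
    mono = (np.sin(2 * np.pi * carrier_hz * t)
            + np.sin(2 * np.pi * (carrier_hz + beat_hz) * t))
    return mono / 2.0  # normalize to stay within [-1, 1]

stereo = binaural_beat(440.0, 10.0, 1.0)     # 10 Hz beat at a 440 Hz carrier
print(stereo.shape)  # (44100, 2)
```

The distinction matters for the review's discussion of monaural versus binaural frequencies: only the monaural signal contains energy at the beat rate in the acoustic waveform itself.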
Vahaba, Daniel M; Macedo-Lima, Matheus; Remage-Healey, Luke
2017-01-01
Vocal learning occurs during an experience-dependent, age-limited critical period early in development. In songbirds, vocal learning begins when presinging birds acquire an auditory memory of their tutor's song (sensory phase) followed by the onset of vocal production and refinement (sensorimotor phase). Hearing is necessary throughout the vocal learning critical period. One key brain area for songbird auditory processing is the caudomedial nidopallium (NCM), a telencephalic region analogous to mammalian auditory cortex. Despite NCM's established role in auditory processing, it is unclear how the response properties of NCM neurons may shift across development. Moreover, communication processing in NCM is rapidly enhanced by local 17β-estradiol (E2) administration in adult songbirds; however, the function of dynamically fluctuating E 2 in NCM during development is unknown. We collected bilateral extracellular recordings in NCM coupled with reverse microdialysis delivery in juvenile male zebra finches ( Taeniopygia guttata ) across the vocal learning critical period. We found that auditory-evoked activity and coding accuracy were substantially higher in the NCM of sensory-aged animals compared to sensorimotor-aged animals. Further, we observed both age-dependent and lateralized effects of local E 2 administration on sensory processing. In sensory-aged subjects, E 2 decreased auditory responsiveness across both hemispheres; however, a similar trend was observed in age-matched control subjects. In sensorimotor-aged subjects, E 2 dampened auditory responsiveness in left NCM but enhanced auditory responsiveness in right NCM. Our results reveal an age-dependent physiological shift in auditory processing and lateralized E 2 sensitivity that each precisely track a key neural "switch point" from purely sensory (pre-singing) to sensorimotor (singing) in developing songbirds.
Spiking in auditory cortex following thalamic stimulation is dominated by cortical network activity
Krause, Bryan M.; Raz, Aeyal; Uhlrich, Daniel J.; Smith, Philip H.; Banks, Matthew I.
2014-01-01
The state of the sensory cortical network can have a profound impact on neural responses and perception. In rodent auditory cortex, sensory responses are reported to occur in the context of network events, similar to brief UP states, that produce “packets” of spikes and are associated with synchronized synaptic input (Bathellier et al., 2012; Hromadka et al., 2013; Luczak et al., 2013). However, traditional models based on data from visual and somatosensory cortex predict that ascending sensory thalamocortical (TC) pathways sequentially activate cells in layers 4 (L4), L2/3, and L5. The relationship between these two spatio-temporal activity patterns is unclear. Here, we used calcium imaging and electrophysiological recordings in murine auditory TC brain slices to investigate the laminar response pattern to stimulation of TC afferents. We show that although monosynaptically driven spiking in response to TC afferents occurs, the vast majority of spikes fired following TC stimulation occurs during brief UP states and outside the context of the L4>L2/3>L5 activation sequence. Specifically, monosynaptic subthreshold TC responses with similar latencies were observed throughout layers 2–6, presumably via synapses onto dendritic processes located in L3 and L4. However, monosynaptic spiking was rare, and occurred primarily in L4 and L5 non-pyramidal cells. By contrast, during brief, TC-induced UP states, spiking was dense and occurred primarily in pyramidal cells. These network events always involved infragranular layers, whereas involvement of supragranular layers was variable. During UP states, spike latencies were comparable between infragranular and supragranular cells. These data are consistent with a model in which activation of auditory cortex, especially supragranular layers, depends on internally generated network events that represent a non-linear amplification process, are initiated by infragranular cells and tightly regulated by feed-forward inhibitory cells. 
PMID:25285071
Leske, Sabine; Ruhnau, Philipp; Frey, Julia; Lithari, Chrysa; Müller, Nadia; Hartmann, Thomas; Weisz, Nathan
2015-01-01
An ever-increasing number of studies are pointing to the importance of network properties of the brain for understanding behavior such as conscious perception. However, with regards to the influence of prestimulus brain states on perception, this network perspective has rarely been taken. Our recent framework predicts that brain regions crucial for a conscious percept are coupled prior to stimulus arrival, forming pre-established pathways of information flow and influencing perceptual awareness. Using magnetoencephalography (MEG) and graph theoretical measures, we investigated auditory conscious perception in a near-threshold (NT) task and found strong support for this framework. Relevant auditory regions showed an increased prestimulus interhemispheric connectivity. The left auditory cortex was characterized by a hub-like behavior and an enhanced integration into the brain functional network prior to perceptual awareness. Right auditory regions were decoupled from non-auditory regions, presumably forming an integrated information processing unit with the left auditory cortex. In addition, we show for the first time for the auditory modality that local excitability, measured by decreased alpha power in the auditory cortex, increases prior to conscious percepts. Importantly, we were able to show that connectivity states seem to be largely independent from local excitability states in the context of a NT paradigm. PMID:26408799
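Graph-theoretic measures such as the hub-like behavior reported above are typically derived from a connectivity matrix over brain regions. A toy illustration of two crude hub indices, degree and strength (the threshold and connectivity values are invented for the example and do not come from the study):

```python
import numpy as np

def degree_and_strength(conn, threshold=0.3):
    """Node degree (count of suprathreshold links) and strength (summed
    connectivity) from a symmetric connectivity matrix - two simple
    indices by which a 'hub' node stands out."""
    adj = (conn > threshold) & ~np.eye(conn.shape[0], dtype=bool)
    degree = adj.sum(axis=1)
    strength = np.where(adj, conn, 0.0).sum(axis=1)
    return degree, strength

# Toy 4-node network where node 0 couples strongly to everyone (a 'hub')
conn = np.array([
    [0.0, 0.8, 0.7, 0.9],
    [0.8, 0.0, 0.1, 0.2],
    [0.7, 0.1, 0.0, 0.1],
    [0.9, 0.2, 0.1, 0.0],
])
deg, stren = degree_and_strength(conn)
print(deg)  # [3 1 1 1]
```

In the prestimulus-connectivity framework above, a region like left auditory cortex would show elevated degree/strength before consciously perceived stimuli relative to missed ones.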
Xie, Zilong; Reetzke, Rachel; Chandrasekaran, Bharath
2018-05-24
Increasing visual perceptual load can reduce pre-attentive auditory cortical activity to sounds, a reflection of the limited and shared attentional resources for sensory processing across modalities. Here, we demonstrate that modulating visual perceptual load can impact the early sensory encoding of speech sounds, and that the impact of visual load is highly dependent on the predictability of the incoming speech stream. Participants (n = 20, 9 females) performed a visual search task of high (target similar to distractors) and low (target dissimilar to distractors) perceptual load, while early auditory electrophysiological responses were recorded to native speech sounds. Speech sounds were presented either in a 'repetitive context', or a less predictable 'variable context'. Independent of auditory stimulus context, pre-attentive auditory cortical activity was reduced during high visual load, relative to low visual load. We applied a data-driven machine learning approach to decode speech sounds from the early auditory electrophysiological responses. Decoding performance was found to be poorer under conditions of high (relative to low) visual load, when the incoming acoustic stream was predictable. When the auditory stimulus context was less predictable, decoding performance was substantially greater for the high (relative to low) visual load conditions. Our results provide support for shared attentional resources between visual and auditory modalities that substantially influence the early sensory encoding of speech signals in a context-dependent manner. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
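The data-driven decoding described above is not specified in detail in this summary; as one hypothetical stand-in, a nearest-centroid classifier on simulated evoked-response features conveys the general idea of decoding speech-sound identity from early auditory responses (all dimensions and class structure are invented):

```python
import numpy as np

def nearest_centroid_decode(train_X, train_y, test_X):
    """Classify each test trial by its nearest class-mean in feature space -
    a minimal stand-in for decoding stimulus identity from neural data."""
    classes = np.unique(train_y)
    centroids = np.stack([train_X[train_y == c].mean(axis=0) for c in classes])
    dists = ((test_X[:, None, :] - centroids[None]) ** 2).sum(axis=2)
    return classes[dists.argmin(axis=1)]

rng = np.random.default_rng(2)
# Two simulated speech-sound classes with offset mean response patterns
X0 = rng.standard_normal((60, 20)) + 1.0
X1 = rng.standard_normal((60, 20)) - 1.0
X = np.vstack([X0, X1])
y = np.array([0] * 60 + [1] * 60)

# Hold out every other trial for testing
pred = nearest_centroid_decode(X[::2], y[::2], X[1::2])
accuracy = (pred == y[1::2]).mean()
print(accuracy > 0.9)  # True
```

Under the study's logic, conditions that degrade early sensory encoding (e.g., high visual load with a predictable stream) would shrink the class separation and thus lower decoding accuracy.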
Multimodal lexical processing in auditory cortex is literacy skill dependent.
McNorgan, Chris; Awati, Neha; Desroches, Amy S; Booth, James R
2014-09-01
Literacy is a uniquely human cross-modal cognitive process wherein visual orthographic representations become associated with auditory phonological representations through experience. Developmental studies provide insight into how experience-dependent changes in brain organization influence phonological processing as a function of literacy. Previous investigations show a synchrony-dependent influence of letter presentation on individual phoneme processing in superior temporal sulcus; others demonstrate recruitment of primary and associative auditory cortex during cross-modal processing. We sought to determine whether brain regions supporting phonological processing of larger lexical units (monosyllabic words) over larger time windows is sensitive to cross-modal information, and whether such effects are literacy dependent. Twenty-two children (age 8-14 years) made rhyming judgments for sequentially presented word and pseudoword pairs presented either unimodally (auditory- or visual-only) or cross-modally (audiovisual). Regression analyses examined the relationship between literacy and congruency effects (overlapping orthography and phonology vs. overlapping phonology-only). We extend previous findings by showing that higher literacy is correlated with greater congruency effects in auditory cortex (i.e., planum temporale) only for cross-modal processing. These skill effects were specific to known words and occurred over a large time window, suggesting that multimodal integration in posterior auditory cortex is critical for fluent reading. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Auditory brainstem response to complex sounds: a tutorial
Skoe, Erika; Kraus, Nina
2010-01-01
This tutorial provides a comprehensive overview of the methodological approach to collecting and analyzing auditory brainstem responses to complex sounds (cABRs). cABRs provide a window into how behaviorally relevant sounds such as speech and music are processed in the brain. Because temporal and spectral characteristics of sounds are preserved in this subcortical response, cABRs can be used to assess specific impairments and enhancements in auditory processing. Notably, subcortical function is neither passive nor hardwired but dynamically interacts with higher-level cognitive processes to refine how sounds are transcribed into neural code. This experience-dependent plasticity, which can occur on a number of time scales (e.g., life-long experience with speech or music, short-term auditory training, online auditory processing), helps shape sensory perception. Thus, by being an objective and non-invasive means for examining cognitive function and experience-dependent processes in sensory activity, cABRs have considerable utility in the study of populations where auditory function is of interest (e.g., auditory experts such as musicians, persons with hearing loss, auditory processing and language disorders). This tutorial is intended for clinicians and researchers seeking to integrate cABRs into their clinical and/or research programs. PMID:20084007
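One common cABR analysis is a stimulus-to-response cross-correlation, exploiting the fact that the subcortical response preserves the stimulus waveform at a short neural delay. A self-contained sketch on synthetic data (the sampling rate, 7 ms delay, and noise level are assumptions for the demo, not values from the tutorial):

```python
import numpy as np

def stimulus_response_lag(stim, resp, fs, max_lag_ms=15):
    """Lag (in ms) at which the averaged response best correlates with the
    stimulus. Because the brainstem response tracks the stimulus waveform,
    the peak correlation lag estimates the neural transmission delay."""
    max_lag = int(max_lag_ms * fs / 1000)
    corrs = [np.corrcoef(stim[:len(stim) - lag], resp[lag:])[0, 1]
             for lag in range(max_lag + 1)]
    return int(np.argmax(corrs)) * 1000.0 / fs

rng = np.random.default_rng(3)
fs = 10000
t = np.arange(int(0.05 * fs)) / fs          # 50 ms of a 100 Hz 'speech-like' F0
stim = np.sin(2 * np.pi * 100 * t)
delay = int(0.007 * fs)                     # simulate a 7 ms neural delay
resp = np.roll(stim, delay) * 0.3
resp[:delay] = 0.0
resp += 0.05 * rng.standard_normal(len(resp))

print(stimulus_response_lag(stim, resp, fs))  # 7.0
```

Note the search window must stay under one stimulus period for periodic stimuli, or the correlation peak becomes ambiguous; the 15 ms window here is safely below the 10 ms period.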
Kuriki, Shinya; Kobayashi, Yusuke; Kobayashi, Takanari; Tanaka, Keita; Uchikawa, Yoshinori
2013-02-01
The auditory steady-state response (ASSR) is a weak potential or magnetic response elicited by periodic acoustic stimuli, with a maximum response at about a 40-Hz periodicity. Most previous studies using amplitude-modulated (AM) tones employed long-lasting tones of more than 10 s, so the characteristics of the ASSR elicited by short AM tones have remained unclear. In this study, we examined the magnetoencephalographic (MEG) ASSR using a sequence of sinusoidal AM tones 0.78 s in length with tone frequencies of 440-990 Hz, spanning about one octave. The amplitude of the ASSR was invariant across tone frequencies when the sound pressure level was adjusted along an equal-loudness curve. The amplitude also did not depend on the presence of a preceding tone or on its frequency. When the sound level of the AM tones varied with tone frequency over the same 440-990 Hz range, the ASSR amplitude varied in proportion to the sound level. These characteristics are favorable for the use of the ASSR in studying temporal processing of auditory information in the auditory cortex. The lack of adaptation in the ASSR elicited by a sequence of short tones may be ascribed to the neural activity of the widely accepted generator of the magnetic ASSR in the primary auditory cortex. Copyright © 2012 Elsevier B.V. All rights reserved.
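The ASSR amplitude at the modulation frequency can be read directly off the response spectrum. A minimal sketch using a 0.78 s tone length as in the study above, but with otherwise invented parameters (the carrier, the toy "response", and its amplitude are assumptions):

```python
import numpy as np

def assr_amplitude(signal, fs, mod_freq):
    """Spectral amplitude at the modulation frequency - the conventional
    ASSR measure for an amplitude-modulated tone."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal) * 2
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return spectrum[np.argmin(np.abs(freqs - mod_freq))]

fs = 8000
t = np.arange(int(0.78 * fs)) / fs            # 0.78 s AM tone, as in the study
carrier, fm = 440.0, 40.0                     # 40 Hz AM: peak ASSR periodicity
am_tone = (1 + np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * carrier * t)

# A toy 'response' that follows the 40 Hz envelope, plus sensor noise
response = (0.5 * np.sin(2 * np.pi * fm * t)
            + 0.1 * np.random.default_rng(4).standard_normal(len(t)))

print(round(assr_amplitude(response, fs, fm), 1))  # ~0.5
```

Because 0.78 s is not an integer number of 40 Hz cycles, some spectral leakage slightly reduces the measured amplitude; windowing or snapping the analysis length to whole modulation cycles would tighten the estimate.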
Source analysis of auditory steady-state responses in acoustic and electric hearing.
Luke, Robert; De Vos, Astrid; Wouters, Jan
2017-02-15
Speech is a complex signal containing a broad variety of acoustic information. For accurate speech reception, the listener must perceive modulations over a range of envelope frequencies. Perception of these modulations is particularly important for cochlear implant (CI) users, as all commercial devices use envelope coding strategies. Prolonged deafness affects the auditory pathway. However, little is known of how cochlear implantation affects the neural processing of modulated stimuli. This study investigates and contrasts the neural processing of envelope rate modulated signals in acoustic and CI listeners. Auditory steady-state responses (ASSRs) are used to study the neural processing of amplitude modulated (AM) signals. A beamforming technique is applied to determine the increase in neural activity relative to a control condition, with particular attention paid to defining the accuracy and precision of this technique relative to other tomographies. In a cohort of 44 acoustic listeners, the location, activity and hemispheric lateralisation of ASSRs is characterised while systematically varying the modulation rate (4, 10, 20, 40 and 80Hz) and stimulation ear (right, left and bilateral). We demonstrate a complex pattern of laterality depending on both modulation rate and stimulation ear that is consistent with, and extends, existing literature. We present a novel extension to the beamforming method which facilitates source analysis of electrically evoked auditory steady-state responses (EASSRs). In a cohort of 5 right implanted unilateral CI users, the neural activity is determined for the 40Hz rate and compared to the acoustic cohort. Results indicate that CI users activate typical thalamic locations for 40Hz stimuli. However, complementary to studies of transient stimuli, the CI population has atypical hemispheric laterality, preferentially activating the contralateral hemisphere. Copyright © 2016. Published by Elsevier Inc.
System and algorithm for evaluation of human auditory analyzer state
NASA Astrophysics Data System (ADS)
Bachynskiy, Mykhaylo V.; Azarkhov, Oleksandr Yu.; Shtofel, Dmytro Kh.; Horbatiuk, Svitlana M.; Ławicki, Tomasz; Kalizhanova, Aliya; Smailova, Saule; Askarova, Nursanat
2017-08-01
The paper discusses the evaluation of the human auditory state with technical means. It considers the disadvantages of existing clinical audiometry methods and systems, and proposes a pulsometry-based method for evaluating the state of the auditory analyzer that makes the medical examination more objective and efficient. The method uses two optoelectronic sensors located on the carotid artery and the ear lobe. On this basis, a biotechnical system for evaluation and stimulation of the human auditory analyzer state was developed, and its hardware and software were substantiated. Different stimulation modes of the designed system were tested and the influence of the procedure on the patient was studied.
Auditory hallucinations induced by trazodone
Shiotsuki, Ippei; Terao, Takeshi; Ishii, Nobuyoshi; Hatano, Koji
2014-01-01
A 26-year-old female outpatient presenting with a depressive state suffered from auditory hallucinations at night. Her auditory hallucinations did not respond to blonanserin or paliperidone, but partially responded to risperidone. In view of the possibility that her auditory hallucinations began after starting trazodone, trazodone was discontinued, leading to a complete resolution of her auditory hallucinations. Furthermore, even after risperidone was decreased and discontinued, her auditory hallucinations did not recur. These findings suggest that trazodone may induce auditory hallucinations in some susceptible patients. PMID:24700048
In search of an auditory engram.
Fritz, Jonathan; Mishkin, Mortimer; Saunders, Richard C
2005-06-28
Monkeys trained preoperatively on a task designed to assess auditory recognition memory were impaired after removal of either the rostral superior temporal gyrus or the medial temporal lobe but were unaffected by lesions of the rhinal cortex. Behavioral analysis indicated that this result occurred because the monkeys did not or could not use long-term auditory recognition, and so depended instead on short-term working memory, which is unaffected by rhinal lesions. The findings suggest that monkeys may be unable to place representations of auditory stimuli into a long-term store, and thus question whether the monkey's cerebral memory mechanisms in audition are intrinsically different from those in other sensory modalities. Furthermore, the findings raise the possibility that language is unique to humans not only because it depends on speech but also because it requires long-term auditory memory.
Cell-specific gain modulation by synaptically released zinc in cortical circuits of audition.
Anderson, Charles T; Kumar, Manoj; Xiong, Shanshan; Tzounopoulos, Thanos
2017-09-09
In many excitatory synapses, mobile zinc is found within glutamatergic vesicles and is coreleased with glutamate. Ex vivo studies established that synaptically released (synaptic) zinc inhibits excitatory neurotransmission at lower frequencies of synaptic activity but enhances steady state synaptic responses during higher frequencies of activity. However, it remains unknown how synaptic zinc affects neuronal processing in vivo. Here, we imaged the sound-evoked neuronal activity of the primary auditory cortex in awake mice. We discovered that synaptic zinc enhanced the gain of sound-evoked responses in CaMKII-expressing principal neurons, but it reduced the gain of parvalbumin- and somatostatin-expressing interneurons. This modulation was sound intensity-dependent and, in part, NMDA receptor-independent. By establishing a previously unknown link between synaptic zinc and gain control of auditory cortical processing, our findings advance understanding about cortical synaptic mechanisms and create a new framework for approaching and interpreting the role of the auditory cortex in sound processing.
Zucki, Fernanda; Morata, Thais C; Duarte, Josilene L; Ferreira, Maria Cecília F; Salgado, Manoel H; Alvarenga, Kátia F
The literature has reported an association between lead exposure and auditory effects, based on clinical and experimental studies. However, there is no consensus regarding the effects of lead on the auditory system, or their correlation with the concentration of the metal in the blood. To investigate the maturation state of the auditory system, specifically the auditory nerve and brainstem, in rats exposed to lead acetate and supplemented with ferrous sulfate. 30 weanling male rats (Rattus norvegicus, Wistar) were distributed into six groups of five animals each and exposed to one of two concentrations of lead acetate (100 or 400mg/L) and supplemented with ferrous sulfate (20mg/kg). The maturation state of the auditory nerve and brainstem was analyzed using the Brainstem Auditory Evoked Potential before and after lead exposure. The concentrations of lead in blood and brainstem were analyzed using Inductively Coupled Plasma-Mass Spectrometry. We verified that the concentrations of Pb in blood and brainstem were highly correlated (r=0.951; p<0.0001). Both concentrations of lead acetate affected the maturation state of the auditory system, with slower maturation in the regions corresponding to the auditory nerve (wave I) and the cochlear nuclei (wave II). Ferrous sulfate supplementation significantly reduced the concentration of lead in blood and brainstem in the group exposed to the lower concentration of lead (100mg/L), but not in the group exposed to the higher concentration (400mg/L). These findings indicate that lead acetate can have deleterious effects on the maturation of the auditory nerve and brainstem (cochlear nucleus region), as detected by the Brainstem Auditory Evoked Potentials, and that ferrous sulfate can partially mitigate this effect. Copyright © 2017 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. All rights reserved.
Theoretical Tinnitus Framework: A Neurofunctional Model.
Ghodratitoostani, Iman; Zana, Yossi; Delbem, Alexandre C B; Sani, Siamak S; Ekhtiari, Hamed; Sanchez, Tanit G
2016-01-01
Subjective tinnitus is the conscious (attended) awareness perception of sound in the absence of an external source and can be classified as an auditory phantom perception. Earlier literature establishes three distinct states of conscious perception as unattended, attended, and attended awareness conscious perception. The current tinnitus development models depend on the role of external events congruently paired with the causal physical events that precipitate the phantom perception. We propose a novel Neurofunctional Tinnitus Model to indicate that the conscious (attended) awareness perception of phantom sound is essential in activating the cognitive-emotional value. The cognitive-emotional value plays a crucial role in governing attention allocation as well as developing annoyance within tinnitus clinical distress. Structurally, the Neurofunctional Tinnitus Model includes the peripheral auditory system, the thalamus, the limbic system, brainstem, basal ganglia, striatum, and the auditory along with prefrontal cortices. Functionally, we assume the model includes presence of continuous or intermittent abnormal signals at the peripheral auditory system or midbrain auditory paths. Depending on the availability of attentional resources, the signals may or may not be perceived. The cognitive valuation process strengthens the lateral-inhibition and noise canceling mechanisms in the mid-brain, which leads to the cessation of sound perception and renders the signal evaluation irrelevant. However, the "sourceless" sound is eventually perceived and can be cognitively interpreted as suspicious or an indication of a disease in which the cortical top-down processes weaken the noise canceling effects. This results in an increase in cognitive and emotional negative reactions such as depression and anxiety. The negative or positive cognitive-emotional feedbacks within the top-down approach may have no relation to the previous experience of the patients. 
They can also be associated with aversive stimuli, similar to the abnormal neural activity generating the phantom sound. Cognitive and emotional reactions depend on general personality biases toward evaluative conditioning, combined with a cognitive-emotional negative appraisal of stimuli, as in patients presenting with hypochondria. We acknowledge that the proposed Neurofunctional Tinnitus Model does not cover all tinnitus variations and patients. To support our model, we present evidence from several studies using neuroimaging, electrophysiology, brain lesion, and behavioral techniques.
ERIC Educational Resources Information Center
Hughes, Robert W.; Vachon, Francois; Jones, Dylan M.
2007-01-01
The disruption of short-term memory by to-be-ignored auditory sequences (the changing-state effect) has often been characterized as attentional capture by deviant events (deviation effect). However, the present study demonstrates that changing-state and deviation effects are functionally distinct forms of auditory distraction: The disruption of…
Diminished auditory sensory gating during active auditory verbal hallucinations.
Thoma, Robert J; Meier, Andrew; Houck, Jon; Clark, Vincent P; Lewine, Jeffrey D; Turner, Jessica; Calhoun, Vince; Stephen, Julia
2017-10-01
Auditory sensory gating, assessed in a paired-click paradigm, indicates the extent to which incoming stimuli are filtered, or "gated", in auditory cortex. Gating is typically computed as the ratio of the peak amplitude of the event-related potential (ERP) to a second click (S2) divided by the peak amplitude of the ERP to a first click (S1). Higher gating ratios are purportedly indicative of incomplete suppression of S2 and considered to represent sensory processing dysfunction. In schizophrenia, hallucination severity is positively correlated with gating ratios, and it was hypothesized that a failure of sensory control processes early in auditory sensation (gating) may represent a larger system failure within the auditory data stream, resulting in auditory verbal hallucinations (AVH). EEG data were collected while patients (N=12) with treatment-resistant AVH pressed a button to indicate the beginning (AVH-on) and end (AVH-off) of each AVH during a paired click protocol. For each participant, separate gating ratios were computed for the P50, N100, and P200 components for each of the AVH-off and AVH-on states. AVH trait severity was assessed using the Psychotic Symptoms Rating Scales AVH Total score (PSYRATS). The results of a mixed model ANOVA revealed an overall effect for AVH state, such that gating ratios were significantly higher during the AVH-on state than during AVH-off for all three components. PSYRATS score was significantly and negatively correlated with N100 gating ratio only in the AVH-off state. These findings link onset of AVH with a failure of an empirically defined auditory inhibition system, auditory sensory gating, and pave the way for a sensory gating model of AVH. Copyright © 2017 Elsevier B.V. All rights reserved.
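The S2/S1 ratio defined in this abstract is straightforward to compute. A minimal sketch, using toy Gaussian ERPs invented for illustration (not the study's data or analysis pipeline):

```python
# Hypothetical sketch of the paired-click gating ratio described above:
# peak amplitude of the ERP to the second click (S2) divided by the peak
# amplitude of the ERP to the first click (S1). All data are made up;
# real ERPs would be averaged epochs for P50, N100, or P200 windows.
import numpy as np

def gating_ratio(erp_s1, erp_s2):
    """Return the S2/S1 peak-amplitude ratio; higher values imply weaker suppression."""
    peak_s1 = np.max(np.abs(erp_s1))
    peak_s2 = np.max(np.abs(erp_s2))
    return peak_s2 / peak_s1

# Toy ERPs: the S2 response is half the size of S1 -> ratio 0.5 (strong gating)
t = np.linspace(0, 0.25, 250)                       # 250 ms epoch
erp_s1 = 4.0 * np.exp(-((t - 0.05) ** 2) / 1e-4)    # P50-like peak, ~4 uV
erp_s2 = 2.0 * np.exp(-((t - 0.05) ** 2) / 1e-4)    # suppressed repeat, ~2 uV
print(round(gating_ratio(erp_s1, erp_s2), 2))       # 0.5
```

A ratio near 1.0 in this scheme would indicate the incomplete S2 suppression the abstract associates with sensory processing dysfunction.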
Absence of both auditory evoked potentials and auditory percepts dependent on timing cues.
Starr, A; McPherson, D; Patterson, J; Don, M; Luxford, W; Shannon, R; Sininger, Y; Tonakawa, L; Waring, M
1991-06-01
An 11-yr-old girl had an absence of sensory components of auditory evoked potentials (brainstem, middle and long-latency) to click and tone burst stimuli that she could clearly hear. Psychoacoustic tests revealed a marked impairment of those auditory perceptions dependent on temporal cues, that is, lateralization of binaural clicks, change of binaural masked threshold with changes in signal phase, binaural beats, detection of paired monaural clicks, monaural detection of a silent gap in a sound, and monaural threshold elevation for short duration tones. In contrast, auditory functions reflecting intensity or frequency discriminations (difference limens) were only minimally impaired. Pure tone audiometry showed a moderate (50 dB) bilateral hearing loss with a disproportionate severe loss of word intelligibility. Those auditory evoked potentials that were preserved included (1) cochlear microphonics reflecting hair cell activity; (2) cortical sustained potentials reflecting processing of slowly changing signals; and (3) long-latency cognitive components (P300, processing negativity) reflecting endogenous auditory cognitive processes. Both the evoked potential and perceptual deficits are attributed to changes in temporal encoding of acoustic signals perhaps occurring at the synapse between hair cell and eighth nerve dendrites. The results from this patient are discussed in relation to previously published cases with absent auditory evoked potentials and preserved hearing.
Geva, R; Eshel, R; Leitner, Y; Fattal-Valevski, A; Harel, S
2008-12-01
Recent reports showed that children born with intrauterine growth restriction (IUGR) are at greater risk of experiencing verbal short-term memory span (STM) deficits that may impede their learning capacities at school. It is still unknown whether these deficits are modality dependent. This long-term, prospective design study examined modality-dependent verbal STM functions in children who were diagnosed at birth with IUGR (n = 138) and a control group (n = 64). Their STM skills were evaluated individually at 9 years of age with four conditions of the Visual-Aural Digit Span Test (VADS; Koppitz, 1981): auditory-oral, auditory-written, visuospatial-oral and visuospatial-written. Cognitive competence was evaluated with the short form of the Wechsler Intelligence Scales for Children--revised (WISC-R95; Wechsler, 1998). We found IUGR-related specific auditory-oral STM deficits (p < .036) in conjunction with two double dissociations: an auditory-visuospatial (p < .014) and an input-output processing distinction (p < .014). Cognitive competence had a significant effect on all four conditions; however, the effect of IUGR on the auditory-oral condition was not overridden by the effect of intelligence quotient (IQ). Intrauterine growth restriction affects global competence and inter-modality processing, as well as distinct auditory input processing related to verbal STM functions. The findings support a long-term relationship between prenatal aberrant head growth and auditory verbal STM deficits by the end of the first decade of life. Empirical, clinical and educational implications are presented.
Interaction of language, auditory and memory brain networks in auditory verbal hallucinations.
Ćurčić-Blake, Branislava; Ford, Judith M; Hubl, Daniela; Orlov, Natasza D; Sommer, Iris E; Waters, Flavie; Allen, Paul; Jardri, Renaud; Woodruff, Peter W; David, Olivier; Mulert, Christoph; Woodward, Todd S; Aleman, André
2017-01-01
Auditory verbal hallucinations (AVH) occur in psychotic disorders, but also as a symptom of other conditions and even in healthy people. Several current theories on the origin of AVH converge, with neuroimaging studies suggesting that the language, auditory and memory/limbic networks are of particular relevance. However, reconciliation of these theories with experimental evidence is missing. We review 50 studies investigating functional (EEG and fMRI) and anatomic (diffusion tensor imaging) connectivity in these networks, and explore the evidence supporting abnormal connectivity in these networks associated with AVH. We distinguish between functional connectivity during an actual hallucination experience (symptom capture) and functional connectivity during either the resting state or a task comparing individuals who hallucinate with those who do not (symptom association studies). Symptom capture studies clearly reveal a pattern of increased coupling among the auditory, language and striatal regions. Anatomical and symptom association functional studies suggest that the interhemispheric connectivity between posterior auditory regions may depend on the phase of illness, with increases in non-psychotic individuals and first episode patients and decreases in chronic patients. Leading hypotheses involving concepts as unstable memories, source monitoring, top-down attention, and hybrid models of hallucinations are supported in part by the published connectivity data, although several caveats and inconsistencies remain. Specifically, possible changes in fronto-temporal connectivity are still under debate. Precise hypotheses concerning the directionality of connections deduced from current theoretical approaches should be tested using experimental approaches that allow for discrimination of competing hypotheses. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Forlano, Paul M; Sisneros, Joseph A
2016-01-01
The plainfin midshipman fish (Porichthys notatus) is a well-studied model to understand the neural and endocrine mechanisms underlying vocal-acoustic communication across vertebrates. It is well established that steroid hormones such as estrogen drive seasonal peripheral auditory plasticity in female Porichthys in order to better encode the male's advertisement call. However, little is known of the neural substrates that underlie the motivation and coordinated behavioral response to auditory social signals. Catecholamines, which include dopamine and noradrenaline, are good candidates for this function, as they are thought to modulate the salience of and reinforce appropriate behavior to socially relevant stimuli. This chapter summarizes our recent studies which aimed to characterize catecholamine innervation in the central and peripheral auditory system of Porichthys as well as test the hypotheses that innervation of the auditory system is seasonally plastic and catecholaminergic neurons are activated in response to conspecific vocalizations. Of particular significance is the discovery of direct dopaminergic innervation of the saccule, the main hearing end organ, by neurons in the diencephalon, which also robustly innervate the cholinergic auditory efferent nucleus in the hindbrain. Seasonal changes in dopamine innervation in both these areas appear dependent on reproductive state in females and may ultimately function to modulate the sensitivity of the peripheral auditory system as an adaptation to the seasonally changing soundscape. Diencephalic dopaminergic neurons are indeed active in response to exposure to midshipman vocalizations and are in a perfect position to integrate the detection and appropriate motor response to conspecific acoustic signals for successful reproduction.
Context-dependent plasticity in the subcortical encoding of linguistic pitch patterns.
Lau, Joseph C Y; Wong, Patrick C M; Chandrasekaran, Bharath
2017-02-01
We examined the mechanics of online experience-dependent auditory plasticity by assessing the influence of prior context on the frequency-following responses (FFRs), which reflect phase-locked responses from neural ensembles within the subcortical auditory system. FFRs were elicited to a Cantonese falling lexical pitch pattern from 24 native speakers of Cantonese in a variable context, wherein the falling pitch pattern randomly occurred in the context of two other linguistic pitch patterns; in a patterned context, wherein, the falling pitch pattern was presented in a predictable sequence along with two other pitch patterns, and in a repetitive context, wherein the falling pitch pattern was presented with 100% probability. We found that neural tracking of the stimulus pitch contour was most faithful and accurate when listening context was patterned and least faithful when the listening context was variable. The patterned context elicited more robust pitch tracking relative to the repetitive context, suggesting that context-dependent plasticity is most robust when the context is predictable but not repetitive. Our study demonstrates a robust influence of prior listening context that works to enhance online neural encoding of linguistic pitch patterns. We interpret these results as indicative of an interplay between contextual processes that are responsive to predictability as well as novelty in the presentation context. Human auditory perception in dynamic listening environments requires fine-tuning of sensory signal based on behaviorally relevant regularities in listening context, i.e., online experience-dependent plasticity. Our finding suggests what partly underlie online experience-dependent plasticity are interplaying contextual processes in the subcortical auditory system that are responsive to predictability as well as novelty in listening context. 
These findings add to the literature that looks to establish the neurophysiological bases of auditory system plasticity, a central issue in auditory neuroscience. Copyright © 2017 the American Physiological Society.
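The pitch-tracking "faithfulness" this record describes is commonly quantified by correlating the stimulus F0 contour with the F0 contour extracted from the FFR. A hedged sketch on simulated contours (the function name and all values are invented for illustration; the authors' actual pipeline is not reproduced here):

```python
# Illustrative FFR pitch-tracking accuracy metric: Pearson correlation
# between the stimulus F0 contour and the F0 contour recovered from the
# response. Contours below are simulated; a real analysis would estimate
# F0 frame-by-frame (e.g., via short-term autocorrelation) from recordings.
import numpy as np

def tracking_accuracy(stim_f0, resp_f0):
    """Correlation between stimulus and response F0 contours (1.0 = perfect)."""
    return float(np.corrcoef(stim_f0, resp_f0)[0, 1])

frames = np.linspace(0, 1, 100)
stim_f0 = 220.0 - 60.0 * frames                      # falling pitch, 220 -> 160 Hz
rng = np.random.default_rng(1)
faithful = stim_f0 + 2.0 * rng.standard_normal(100)  # patterned-context-like tracking
noisy = stim_f0 + 25.0 * rng.standard_normal(100)    # variable-context-like tracking

# The cleaner contour correlates more strongly with the stimulus
print(tracking_accuracy(stim_f0, faithful) > tracking_accuracy(stim_f0, noisy))  # True
```

Under this metric, the patterned-context condition in the study would yield values closer to 1.0 than the variable context.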
Lewandowski, Brian; Vyssotski, Alexei; Hahnloser, Richard H R; Schmidt, Marc
2013-06-01
Communication between auditory and vocal motor nuclei is essential for vocal learning. In songbirds, the nucleus interfacialis of the nidopallium (NIf) is part of a sensorimotor loop, along with auditory nucleus avalanche (Av) and song system nucleus HVC, that links the auditory and song systems. Most of the auditory information comes through this sensorimotor loop, with the projection from NIf to HVC representing the largest single source of auditory information to the song system. In addition to providing the majority of HVC's auditory input, NIf is also the primary driver of spontaneous activity and premotor-like bursting during sleep in HVC. Like HVC and RA, two nuclei critical for song learning and production, NIf exhibits behavioral-state dependent auditory responses and strong motor bursts that precede song output. NIf also exhibits extended periods of fast gamma oscillations following vocal production. Based on the converging evidence from studies of physiology and functional connectivity it would be reasonable to expect NIf to play an important role in the learning, maintenance, and production of song. Surprisingly, however, lesions of NIf in adult zebra finches have no effect on song production or maintenance. Only the plastic song produced by juvenile zebra finches during the sensorimotor phase of song learning is affected by NIf lesions. In this review, we carefully examine what is known about NIf at the anatomical, physiological, and behavioral levels. We reexamine conclusions drawn from previous studies in the light of our current understanding of the song system, and establish what can be said with certainty about NIf's involvement in song learning, maintenance, and production. Finally, we review recent theories of song learning integrating possible roles for NIf within these frameworks and suggest possible parallels between NIf and sensorimotor areas that form part of the neural circuitry for speech processing in humans. 
Copyright © 2013 Elsevier Ltd. All rights reserved.
A functional MRI study of happy and sad affective states induced by classical music.
Mitterschiffthaler, Martina T; Fu, Cynthia H Y; Dalton, Jeffrey A; Andrew, Christopher M; Williams, Steven C R
2007-11-01
The present study investigated the functional neuroanatomy of transient mood changes in response to Western classical music. In a pilot experiment, 53 healthy volunteers (mean age: 32.0; SD = 9.6) evaluated their emotional responses to 60 classical musical pieces using a visual analogue scale (VAS) ranging from 0 (sad) through 50 (neutral) to 100 (happy). Twenty pieces were found to accurately induce the intended emotional states with good reliability, consisting of 5 happy, 5 sad, and 10 emotionally unevocative, neutral musical pieces. In a subsequent functional magnetic resonance imaging (fMRI) study, the blood oxygenation level dependent (BOLD) signal contrast was measured in response to the mood state induced by each musical stimulus in a separate group of 16 healthy participants (mean age: 29.5; SD = 5.5). Mood state ratings during scanning were made by a VAS, which confirmed the emotional valence of the selected stimuli. Increased BOLD signal contrast during presentation of happy music was found in the ventral and dorsal striatum, anterior cingulate, parahippocampal gyrus, and auditory association areas. With sad music, increased BOLD signal responses were noted in the hippocampus/amygdala and auditory association areas. Presentation of neutral music was associated with increased BOLD signal responses in the insula and auditory association areas. Our findings suggest that an emotion processing network in response to music integrates the ventral and dorsal striatum, areas involved in reward experience and movement; the anterior cingulate, which is important for targeting attention; and medial temporal areas, traditionally found in the appraisal and processing of emotions. Copyright 2006 Wiley-Liss, Inc.
Jaeger, Manuela; Bleichner, Martin G; Bauer, Anna-Katharina R; Mirkovic, Bojana; Debener, Stefan
2018-02-27
The acoustic envelope of human speech correlates with the syllabic rate (4-8 Hz) and carries important information for intelligibility, which is typically compromised in multi-talker, noisy environments. In order to better understand the dynamics of selective auditory attention to low-frequency modulated sound sources, we conducted a two-stream auditory steady-state response (ASSR) selective attention electroencephalogram (EEG) study. The two streams consisted of 4 and 7 Hz amplitude- and frequency-modulated sounds presented from the left and right side. One of the two streams had to be attended while the other had to be ignored. The attended stream always contained a target, allowing for behavioral confirmation of the attention manipulation. EEG ASSR power analysis revealed a significant increase in 7 Hz power for the attend conditions compared to the ignore conditions. There was no significant difference in 4 Hz power when the 4 Hz stream had to be attended compared to when it had to be ignored. This lack of 4 Hz attention modulation could be explained by a distracting effect of a third frequency at 3 Hz (the beat frequency) perceivable when the 4 and 7 Hz streams are presented simultaneously. Taken together, our results show that low-frequency modulations at the syllabic rate are modulated by selective spatial attention. Whether attention effects act as enhancement of the attended stream or suppression of the to-be-ignored stream may depend on how well auditory streams can be segregated.
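Frequency tagging of this kind is typically analyzed by reading spectral power at each stream's modulation rate. An illustrative sketch on a synthetic signal (not the authors' pipeline; the sampling rate, amplitudes, and noise level are assumptions), where the attended 7 Hz stream is given the larger response:

```python
# Frequency-tagging sketch: estimate ASSR power at each stream's modulation
# rate from the spectrum of a synthetic 'EEG'. Note also that 7 - 4 = 3 Hz
# is the beat frequency the abstract suggests may have distracted listeners.
import numpy as np

fs = 250.0                        # sampling rate in Hz (assumed)
t = np.arange(0, 20, 1 / fs)      # 20 s epoch -> 0.05 Hz frequency resolution
rng = np.random.default_rng(0)
eeg = (0.5 * np.sin(2 * np.pi * 4 * t)        # 4 Hz stream response
       + 1.0 * np.sin(2 * np.pi * 7 * t)      # 7 Hz stream response (attended)
       + 0.2 * rng.standard_normal(t.size))   # background noise

spec = np.abs(np.fft.rfft(eeg)) / t.size      # normalized amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def power_at(f_target):
    """Spectral amplitude at the bin nearest the tagged frequency."""
    return spec[np.argmin(np.abs(freqs - f_target))]

print(power_at(7.0) > power_at(4.0))  # True: stronger tagged response at 7 Hz
```

A long epoch (or averaging across epochs) is what makes the tagged peaks stand out above the broadband EEG noise floor.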
Auditory steady-state response in cochlear implant patients.
Torres-Fortuny, Alejandro; Arnaiz-Marquez, Isabel; Hernández-Pérez, Heivet; Eimil-Suárez, Eduardo
2018-03-19
Auditory steady-state responses to continuous amplitude-modulated tones at rates between 70 and 110 Hz have been proposed as a feasible alternative for objective frequency-specific audiometry in cochlear implant subjects. The aim of the present study is to obtain physiological thresholds by means of the auditory steady-state response in cochlear implant patients (Clarion HiRes 90K), with acoustic stimulation, under free-field conditions, and to verify its biological origin. Eleven subjects comprised the sample. Four amplitude-modulated tones of 500, 1000, 2000 and 4000 Hz were used as stimuli, using the multiple-frequency technique. The auditory steady-state response was also recorded at an intensity of 0 dB HL, with a non-specific stimulus, and using a masking technique. The study enabled electrophysiological thresholds to be obtained for each subject in the sample. There were no auditory steady-state responses in either the 0 dB or non-specific stimulus recordings. It was possible to obtain the masking thresholds. Differences between behavioral and electrophysiological thresholds of -6±16, -2±13, 0±22 and -8±18 dB were identified at frequencies of 500, 1000, 2000 and 4000 Hz respectively. The auditory steady-state response seems to be a suitable technique to evaluate the hearing threshold in cochlear implant subjects. Copyright © 2018 Sociedad Española de Otorrinolaringología y Cirugía de Cabeza y Cuello. Publicado por Elsevier España, S.L.U. All rights reserved.
Pilocarpine Seizures Cause Age-Dependent Impairment in Auditory Location Discrimination
ERIC Educational Resources Information Center
Neill, John C.; Liu, Zhao; Mikati, Mohammad; Holmes, Gregory L.
2005-01-01
Children who have status epilepticus have continuous or rapidly repeating seizures that may be life-threatening and may cause life-long changes in brain and behavior. The extent to which status epilepticus causes deficits in auditory discrimination is unknown. A naturalistic auditory location discrimination method was used to evaluate this…
Auditory Perceptual Abilities Are Associated with Specific Auditory Experience
Zaltz, Yael; Globerson, Eitan; Amir, Noam
2017-01-01
The extent to which auditory experience can shape general auditory perceptual abilities is still under constant debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement in perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested in frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, illuminating enhanced perceptual skills in the key features of pop music, in which miniscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience, by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested in the DLF and DLT tasks that were found superior for musicians. Results showed superior abilities on the DLF task for guitar players, though no difference between the groups in DLT, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment was conducted, testing a possible influence of vowel density in native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels), and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels), were tested in a formant discrimination task. This is the linguistic equivalent of a DLS task. 
Results showed that German speakers had superior formant discrimination, demonstrating highly specific effects for auditory linguistic experience as well. Overall, results suggest that auditory superiority is associated with the specific auditory exposure. PMID:29238318
Petrac, D C; Bedwell, J S; Renk, K; Orem, D M; Sims, V
2009-07-01
There have been relatively few studies on the relationship between recent perceived environmental stress and cognitive performance, and the existing studies do not control for state anxiety during the cognitive testing. The current study addressed this need by examining recent self-reported environmental stress and divided attention performance, while controlling for state anxiety. Fifty-four university undergraduates who self-reported a wide range of perceived recent stress (10-item perceived stress scale) completed both single and dual (simultaneous auditory and visual stimuli) continuous performance tests. Partial correlation analysis showed a statistically significant positive correlation between perceived stress and the auditory omission errors from the dual condition, after controlling for state anxiety and auditory omission errors from the single condition (r = 0.41). This suggests that increased environmental stress relates to decreased divided attention performance in auditory vigilance. In contrast, an increase in state anxiety (controlling for perceived stress) was related to a decrease in auditory omission errors from the dual condition (r = - 0.37), which suggests that state anxiety may improve divided attention performance. Results suggest that further examination of the neurobiological consequences of environmental stress on divided attention and other executive functioning tasks is needed.
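The partial correlations reported in this record (stress versus errors, controlling for state anxiety) can be computed by correlating regression residuals. A minimal sketch on synthetic data (the variable names, effect sizes, and sample values are invented; this is not the study's dataset):

```python
# First-order partial correlation r(x, y | z): correlate the residuals of
# x and y after regressing each on the covariate z. Data are simulated to
# mimic the design (n = 54), not taken from the study.
import numpy as np

def partial_corr(x, y, z):
    """Pearson correlation of x and y with z partialled out via linear regression."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)  # residuals of x ~ z
    ry = y - np.polyval(np.polyfit(z, y, 1), z)  # residuals of y ~ z
    return float(np.corrcoef(rx, ry)[0, 1])

rng = np.random.default_rng(0)
z = rng.standard_normal(54)                  # e.g., state anxiety
x = z + rng.standard_normal(54)              # e.g., perceived stress
y = 0.5 * x + z + rng.standard_normal(54)    # e.g., omission errors
print(partial_corr(x, y, z))                 # positive: stress-error link survives control
```

Regressing out the covariate first is algebraically equivalent to the standard partial-correlation formula built from the three pairwise correlations.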
Restoring auditory cortex plasticity in adult mice by restricting thalamic adenosine signaling
Blundon, Jay A.; Roy, Noah C.; Teubner, Brett J. W.; ...
2017-06-30
Circuits in the auditory cortex are highly susceptible to acoustic influences during an early postnatal critical period. The auditory cortex selectively expands neural representations of enriched acoustic stimuli, a process important for human language acquisition. Adults lack this plasticity. We show in the murine auditory cortex that juvenile plasticity can be reestablished in adulthood if acoustic stimuli are paired with disruption of ecto-5'-nucleotidase–dependent adenosine production or A1–adenosine receptor signaling in the auditory thalamus. This plasticity occurs at the level of cortical maps and individual neurons in the auditory cortex of awake adult mice and is associated with long-term improvement of tone-discrimination abilities. We determined that, in adult mice, disrupting adenosine signaling in the thalamus rejuvenates plasticity in the auditory cortex and improves auditory perception.
Neuronal correlates of visual and auditory alertness in the DMT and ketamine model of psychosis.
Daumann, J; Wagner, D; Heekeren, K; Neukirch, A; Thiel, C M; Gouzoulis-Mayfrank, E
2010-10-01
Deficits in attentional functions belong to the core cognitive symptoms in schizophrenic patients. Alertness is a nonselective attention component that refers to a state of general readiness that improves stimulus processing and response initiation. The main goal of the present study was to investigate cerebral correlates of alertness in the human 5-HT2A agonist and N-methyl-D-aspartic acid (NMDA) antagonist model of psychosis. Fourteen healthy volunteers participated in a randomized double-blind, cross-over event-related functional magnetic resonance imaging (fMRI) study with dimethyltryptamine (DMT) and S-ketamine. A target detection task with cued and uncued trials in both the visual and the auditory modality was used. Administration of DMT led to decreased blood oxygenation level-dependent response during performance of an alertness task, particularly in extrastriate regions during visual alerting and in temporal regions during auditory alerting. In general, the effects for the visual modality were more pronounced. In contrast, administration of S-ketamine led to increased cortical activation in the left insula and precentral gyrus in the auditory modality. The results of the present study might deliver more insight into potential differences and overlapping pathomechanisms in schizophrenia. These conclusions must remain preliminary and should be explored by further fMRI studies with schizophrenic patients performing modality-specific alertness tasks.
Threshold and Beyond: Modeling The Intensity Dependence of Auditory Responses
2007-01-01
In many studies of auditory-evoked responses to low-intensity sounds, the response amplitude appears to increase roughly linearly with the sound level in decibels (dB), corresponding to a logarithmic intensity dependence. But the auditory system is assumed to be linear in the low-intensity limit. The goal of this study was to resolve the seeming contradiction. Based on assumptions about the rate-intensity functions of single auditory-nerve fibers and the pattern of cochlear excitation caused by a tone, a model for the gross response of the population of auditory nerve fibers was developed. In accordance with signal detection theory, the model denies the existence of a threshold. This implies that regarding the detection of a significant stimulus-related effect, a reduction in sound intensity can always be compensated for by increasing the measurement time, at least in theory. The model suggests that the gross response is proportional to intensity when the latter is low (range I), and a linear function of sound level at higher intensities (range III). For intensities in between, it is concluded that noisy experimental data may provide seemingly irrefutable evidence of a linear dependence on sound pressure (range II). In view of the small response amplitudes that are to be expected for intensity range I, direct observation of the predicted proportionality with intensity will generally be a challenging task for an experimenter. Although the model was developed for the auditory nerve, the basic conclusions are probably valid for higher levels of the auditory system, too, and might help to improve models for loudness at threshold. PMID:18008105
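A minimal function with the two asymptotes the model predicts is R(I) = A·log(1 + I/I0): proportional to intensity I when I ≪ I0 (range I) and linear in sound level in dB when I ≫ I0 (range III), with a crossover region in between (range II). This is a sketch of the qualitative behavior only; the constants `gain` and `i0` are arbitrary placeholders, not values fitted in the study.

```python
import math

def gross_response(intensity, i0=1e-3, gain=1.0):
    """Toy rate-intensity curve: ~ gain*intensity/i0 for intensity << i0
    (range I, proportional to intensity), ~ linear in dB for
    intensity >> i0 (range III)."""
    return gain * math.log1p(intensity / i0)

# Range I: halving the intensity roughly halves the response.
low_ratio = gross_response(1e-6) / gross_response(2e-6)
# Range III: each factor-of-10 (10 dB) step adds a near-constant increment.
step1 = gross_response(1e1) - gross_response(1e0)
step2 = gross_response(1e2) - gross_response(1e1)
```

Note that the function has no threshold parameter, consistent with the signal-detection-theory stance of the abstract: the response merely becomes small, never zero, at low intensity.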
Stimulus-specific adaptation and deviance detection in the inferior colliculus
Ayala, Yaneri A.; Malmierca, Manuel S.
2013-01-01
Deviancy detection in the continuous flow of sensory information into the central nervous system is of vital importance for animals. The task requires neuronal mechanisms that allow for an efficient representation of the environment by removing statistically redundant signals. Recently, the neuronal principles of auditory deviance detection have been approached by studying the phenomenon of stimulus-specific adaptation (SSA). SSA is a reduction in the responsiveness of a neuron to a common or repetitive sound while the neuron remains highly sensitive to rare sounds (Ulanovsky et al., 2003). This phenomenon could enhance the saliency of unexpected, deviant stimuli against a background of repetitive signals. SSA shares many similarities with the evoked potential known as the “mismatch negativity,” (MMN) and it has been linked to cognitive process such as auditory memory and scene analysis (Winkler et al., 2009) as well as to behavioral habituation (Netser et al., 2011). Neurons exhibiting SSA can be found at several levels of the auditory pathway, from the inferior colliculus (IC) up to the auditory cortex (AC). In this review, we offer an account of the state-of-the art of SSA studies in the IC with the aim of contributing to the growing interest in the single-neuron electrophysiology of auditory deviance detection. The dependence of neuronal SSA on various stimulus features, e.g., probability of the deviant stimulus and repetition rate, and the roles of the AC and inhibition in shaping SSA at the level of the IC are addressed. PMID:23335883
Adaptation to vocal expressions reveals multistep perception of auditory emotion.
Bestelmeyer, Patricia E G; Maurage, Pierre; Rouger, Julien; Latinus, Marianne; Belin, Pascal
2014-06-11
The human voice carries speech as well as important nonlinguistic signals that influence our social interactions. Among these cues that impact our behavior and communication with other people is the perceived emotional state of the speaker. A theoretical framework for the neural processing stages of emotional prosody has suggested that auditory emotion is perceived in multiple steps (Schirmer and Kotz, 2006) involving low-level auditory analysis and integration of the acoustic information followed by higher-level cognition. Empirical evidence for this multistep processing chain, however, is still sparse. We examined this question using functional magnetic resonance imaging and a continuous carry-over design (Aguirre, 2007) to measure brain activity while volunteers listened to non-speech-affective vocalizations morphed on a continuum between anger and fear. Analyses dissociated neuronal adaptation effects induced by similarity in perceived emotional content between consecutive stimuli from those induced by their acoustic similarity. We found that bilateral voice-sensitive auditory regions as well as right amygdala coded the physical difference between consecutive stimuli. In contrast, activity in bilateral anterior insulae, medial superior frontal cortex, precuneus, and subcortical regions such as bilateral hippocampi depended predominantly on the perceptual difference between morphs. Our results suggest that the processing of vocal affect recognition is a multistep process involving largely distinct neural networks. Amygdala and auditory areas predominantly code emotion-related acoustic information while more anterior insular and prefrontal regions respond to the abstract, cognitive representation of vocal affect. Copyright © 2014 Bestelmeyer et al.
Seki, Yoshimasa; Okanoya, Kazuo
2008-02-01
Both visual and auditory information are important for songbirds, especially in developmental and sexual contexts. To investigate bimodal cognition in songbirds, the authors conducted audiovisual discrimination training in Bengalese finches. The authors used two types of stimulus: an "artificial stimulus," which is a combination of simple figures and sound, and a "biological stimulus," consisting of video images of singing males along with their songs. The authors found that while both sexes predominantly used visual cues in the discrimination tasks, males tended to be more dependent on auditory information for the biological stimulus. Female responses were always dependent on the visual stimulus for both stimulus types. Only males changed their discrimination strategy according to stimulus type. Although males used both visual and auditory cues for the biological stimulus, they responded to the artificial stimulus depending only on visual information, as the females did. These findings suggest a sex difference in innate auditory sensitivity. (c) 2008 APA.
de Hoz, Livia; Gierej, Dorota; Lioudyno, Victoria; Jaworski, Jacek; Blazejczyk, Magda; Cruces-Solís, Hugo; Beroun, Anna; Lebitko, Tomasz; Nikolaev, Tomasz; Knapska, Ewelina; Nelken, Israel; Kaczmarek, Leszek
2018-05-01
The behavioral changes that comprise operant learning are associated with plasticity in early sensory cortices as well as with modulation of gene expression, but the connection between the behavioral, electrophysiological, and molecular changes is only partially understood. We specifically manipulated c-Fos expression, a hallmark of learning-induced synaptic plasticity, in auditory cortex of adult mice using a novel approach based on RNA interference. Locally blocking c-Fos expression caused a specific behavioral deficit in a sound discrimination task, in parallel with decreased cortical experience-dependent plasticity, without affecting baseline excitability or basic auditory processing. Thus, c-Fos-dependent experience-dependent cortical plasticity is necessary for frequency discrimination in an operant behavioral task. Our results connect behavioral, molecular and physiological changes and demonstrate a role of c-Fos in experience-dependent plasticity and learning.
Is the Role of External Feedback in Auditory Skill Learning Age Dependent?
Zaltz, Yael; Roth, Daphne Ari-Even; Kishon-Rabin, Liat
2017-12-20
The purpose of this study is to investigate the role of external feedback in auditory perceptual learning of school-age children as compared with that of adults. Forty-eight children (7-9 years of age) and 64 adults (20-35 years of age) conducted a training session using an auditory frequency discrimination (difference limen for frequency) task, with external feedback (EF) provided for half of them. Data supported the following findings: (a) Children learned the difference limen for frequency task only when EF was provided. (b) The ability of the children to benefit from EF was associated with better cognitive skills. (c) Adults showed significant learning whether EF was provided or not. (d) In children, within-session learning following training was dependent on the provision of feedback, whereas between-sessions learning occurred irrespective of feedback. EF was found beneficial for auditory skill learning of 7-9-year-old children but not for young adults. The data support the supervised Hebbian model for auditory skill learning, suggesting combined bottom-up internal neural feedback controlled by top-down monitoring. In the case of immature executive functions, EF enhanced auditory skill learning. This study has implications for the design of training protocols in the auditory modality for different age groups, as well as for special populations.
Hu, Xiao-Su; Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory
2017-01-01
Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus to date have not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing, bilateral subjective tinnitus and controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices) and functional brain connectivity was measured during a 60-second baseline/period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000Hz), broadband noise and silence. Functional connectivity was measured between all channel-pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. 
Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to conscious phantom sound perception and potentially serve as an objective measure of central neural pathology. PMID:28604786
Lovelace, Jonathan W.; Wen, Teresa H.; Reinhard, Sarah; Hsu, Mike S.; Sidhu, Harpreet; Ethell, Iryna M.; Binder, Devin K.; Razak, Khaleel A.
2016-01-01
Sensory processing deficits are common in autism spectrum disorders, but the underlying mechanisms are unclear. Fragile X Syndrome (FXS) is a leading genetic cause of intellectual disability and autism. Electrophysiological responses in humans with FXS show reduced habituation with sound repetition and this deficit may underlie auditory hypersensitivity in FXS. Our previous study in Fmr1 knockout (KO) mice revealed an unusually long state of increased sound-driven excitability in auditory cortical neurons suggesting that cortical responses to repeated sounds may exhibit abnormal habituation as in humans with FXS. Here, we tested this prediction by comparing cortical event related potentials (ERP) recorded from wildtype (WT) and Fmr1 KO mice. We report a repetition-rate dependent reduction in habituation of N1 amplitude in Fmr1 KO mice and show that matrix metalloproteinase-9 (MMP-9), one of the known FMRP targets, contributes to the reduced ERP habituation. Our studies demonstrate a significant up-regulation of MMP-9 levels in the auditory cortex of adult Fmr1 KO mice, whereas a genetic deletion of Mmp-9 reverses ERP habituation deficits in Fmr1 KO mice. Although the N1 amplitude of Mmp-9/Fmr1 DKO recordings was larger than WT and KO recordings, the habituation of ERPs in Mmp-9/Fmr1 DKO mice is similar to WT mice implicating MMP-9 as a potential target for reversing sensory processing deficits in FXS. Together these data establish ERP habituation as a translation-relevant, physiological pre-clinical marker of auditory processing deficits in FXS and suggest that abnormal MMP-9 regulation is a mechanism underlying auditory hypersensitivity in FXS. PMID:26850918
Anatomical Substrates of Visual and Auditory Miniature Second-language Learning
Newman-Norlund, Roger D.; Frey, Scott H.; Petitto, Laura-Ann; Grafton, Scott T.
2007-01-01
Longitudinal changes in brain activity during second language (L2) acquisition of a miniature finite-state grammar, named Wernickese, were identified with functional magnetic resonance imaging (fMRI). Participants learned either a visual sign language form or an auditory-verbal form to equivalent proficiency levels. Brain activity during sentence comprehension while hearing/viewing stimuli was assessed at low, medium, and high levels of proficiency in three separate fMRI sessions. Activation in the left inferior frontal gyrus (Broca's area) correlated positively with improving L2 proficiency, whereas activity in the right-hemisphere (RH) homologue was negatively correlated for both auditory and visual forms of the language. Activity in sequence learning areas including the premotor cortex and putamen also correlated with L2 proficiency. Modality-specific differences in the blood oxygenation level-dependent signal accompanying L2 acquisition were localized to the planum temporale (PT). Participants learning the auditory form exhibited decreasing reliance on bilateral PT sites across sessions. In the visual form, bilateral PT sites increased in activity between Session 1 and Session 2, then decreased in left PT activity from Session 2 to Session 3. Comparison of L2 laterality (as compared to L1 laterality) in auditory and visual groups failed to demonstrate greater RH lateralization for the visual versus auditory L2. These data establish a common role for Broca's area in language acquisition irrespective of the perceptual form of the language and suggest that L2s are processed similar to first languages even when learned after the "critical period." The right frontal cortex was not preferentially recruited by visual language after accounting for phonetic/structural complexity and performance. PMID:17129186
Cecere, Roberto; Gross, Joachim; Thut, Gregor
2016-06-01
The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if trainable by within-condition practice. Together, these results provide crucial evidence that audiovisual temporal binding for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration. © 2016 The Authors. 
European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
ERIC Educational Resources Information Center
Ceponiene, Rita; Service, Elisabet; Kurjenluoma, Sanna; Cheour, Marie; Naatanen, Risto
1999-01-01
Compared the mismatch-negativity (MMN) component of auditory event-related brain potentials to explore the relationship between phonological short-term memory and auditory-sensory processing in 7- to 9-year olds scoring the highest and lowest on a pseudoword repetition test. Found that high and low repeaters differed in MMN amplitude to speech…
Nir, Yuval; Vyazovskiy, Vladyslav V.; Cirelli, Chiara; Banks, Matthew I.; Tononi, Giulio
2015-01-01
Sleep entails a disconnection from the external environment. By and large, sensory stimuli do not trigger behavioral responses and are not consciously perceived as they usually are in wakefulness. Traditionally, sleep disconnection was ascribed to a thalamic “gate,” which would prevent signal propagation along ascending sensory pathways to primary cortical areas. Here, we compared single-unit and LFP responses in core auditory cortex as freely moving rats spontaneously switched between wakefulness and sleep states. Despite robust differences in baseline neuronal activity, both the selectivity and the magnitude of auditory-evoked responses were comparable across wakefulness, Nonrapid eye movement (NREM) and rapid eye movement (REM) sleep (pairwise differences <8% between states). The processing of deviant tones was also compared in sleep and wakefulness using an oddball paradigm. Robust stimulus-specific adaptation (SSA) was observed following the onset of repetitive tones, and the strength of SSA effects (13–20%) was comparable across vigilance states. Thus, responses in core auditory cortex are preserved across sleep states, suggesting that evoked activity in primary sensory cortices is driven by external physical stimuli with little modulation by vigilance state. We suggest that sensory disconnection during sleep occurs at a stage later than primary sensory areas. PMID:24323498
A model of head-related transfer functions based on a state-space analysis
NASA Astrophysics Data System (ADS)
Adams, Norman Herkamp
This dissertation develops and validates a novel state-space method for binaural auditory display. Binaural displays seek to immerse a listener in a 3D virtual auditory scene with a pair of headphones. The challenge for any binaural display is to compute the two signals to supply to the headphones. The present work considers a general framework capable of synthesizing a wide variety of auditory scenes. The framework models collections of head-related transfer functions (HRTFs) simultaneously. This framework improves the flexibility of contemporary displays, but it also compounds the steep computational cost of the display. The cost is reduced dramatically by formulating the collection of HRTFs in the state-space and employing order-reduction techniques to design efficient approximants. Order-reduction techniques based on the Hankel-operator are found to yield accurate low-cost approximants. However, the inter-aural time difference (ITD) of the HRTFs degrades the time-domain response of the approximants. Fortunately, this problem can be circumvented by employing a state-space architecture that allows the ITD to be modeled outside of the state-space. Accordingly, three state-space architectures are considered. Overall, a multiple-input, single-output (MISO) architecture yields the best compromise between performance and flexibility. The state-space approximants are evaluated both empirically and psychoacoustically. An array of truncated FIR filters is used as a pragmatic reference system for comparison. For a fixed cost bound, the state-space systems yield lower approximation error than FIR arrays for D>10, where D is the number of directions in the HRTF collection. A series of headphone listening tests are also performed to validate the state-space approach, and to estimate the minimum order N of indiscriminable approximants. For D = 50, the state-space systems yield order thresholds less than half those of the FIR arrays. 
Depending upon the stimulus uncertainty, a minimum state-space order of 7≤N≤23 appears to be adequate. In conclusion, the proposed state-space method enables a more flexible and immersive binaural display with low computational cost.
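The MISO architecture described above, with the ITD factored out of the state space, can be sketched in a few lines. The matrices below are toy placeholders, not fitted HRTF data, and the integer-sample delay is a simplification of a real fractional ITD.

```python
import numpy as np

def statespace_filter(A, B, C, D, u):
    """Run a discrete-time state-space model on input sequence u:
    y[n] = C x[n] + D u[n];  x[n+1] = A x[n] + B u[n]."""
    x = np.zeros(A.shape[0])
    y = np.empty(len(u))
    for n, un in enumerate(u):
        y[n] = C @ x + D * un
        x = A @ x + B * un
    return y

def with_itd(y, delay_samples):
    """Model the inter-aural time difference outside the state space
    as a pure integer-sample delay, as the architecture allows."""
    return np.concatenate([np.zeros(delay_samples), y[:len(y) - delay_samples]])

# Toy order-1 'HRTF' and a 2-sample ITD for the far ear.
A = np.array([[0.5]]); B = np.array([1.0]); C = np.array([1.0]); D = 0.0
impulse = np.zeros(8); impulse[0] = 1.0
near = statespace_filter(A, B, C, D, impulse)
far = with_itd(near, 2)
```

Keeping the delay outside the recursion is the point of the design choice discussed above: the order-reduced state-space approximant only has to model the minimum-phase magnitude response, not the ITD, which degrades Hankel-based reduction when left inside.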
Pre-attentive, context-specific representation of fear memory in the auditory cortex of rat.
Funamizu, Akihiro; Kanzaki, Ryohei; Takahashi, Hirokazu
2013-01-01
Neural representation in the auditory cortex is rapidly modulated by both top-down attention and bottom-up stimulus properties, in order to improve perception in a given context. Learning-induced, pre-attentive, map plasticity has been also studied in the anesthetized cortex; however, little attention has been paid to rapid, context-dependent modulation. We hypothesize that context-specific learning leads to pre-attentively modulated, multiplex representation in the auditory cortex. Here, we investigate map plasticity in the auditory cortices of anesthetized rats conditioned in a context-dependent manner, such that a conditioned stimulus (CS) of a 20-kHz tone and an unconditioned stimulus (US) of a mild electrical shock were associated only under a noisy auditory context, but not in silence. After the conditioning, although no distinct plasticity was found in the tonotopic map, tone-evoked responses were more noise-resistive than pre-conditioning. Yet, the conditioned group showed a reduced spread of activation to each tone with noise, but not with silence, associated with a sharpening of frequency tuning. The encoding accuracy index of neurons showed that conditioning deteriorated the accuracy of tone-frequency representations in noisy condition at off-CS regions, but not at CS regions, suggesting that arbitrary tones around the frequency of the CS were more likely perceived as the CS in a specific context, where CS was associated with US. These results together demonstrate that learning-induced plasticity in the auditory cortex occurs in a context-dependent manner.
Bender, Stephan; Behringer, Stephanie; Freitag, Christine M; Resch, Franz; Weisbrod, Matthias
2010-12-01
To elucidate the contributions of modality-dependent post-processing in auditory, motor and visual cortical areas to short-term memory. We compared late negative waves (N700) during the post-processing of single lateralized stimuli which were separated by long intertrial intervals across the auditory, motor and visual modalities. Tasks either required or competed with attention to post-processing of preceding events, i.e. active short-term memory maintenance. N700 indicated that cortical post-processing exceeded short movements as well as short auditory or visual stimuli for over half a second without intentional short-term memory maintenance. Modality-specific topographies pointed towards sensory (respectively motor) generators with comparable time-courses across the different modalities. Lateralization and amplitude of auditory/motor/visual N700 were enhanced by active short-term memory maintenance compared to attention to current perceptions or passive stimulation. The memory-related N700 increase followed the characteristic time-course and modality-specific topography of the N700 without intentional memory-maintenance. Memory-maintenance-related lateralized negative potentials may be related to a less lateralised modality-dependent post-processing N700 component which occurs also without intentional memory maintenance (automatic memory trace or effortless attraction of attention). Encoding to short-term memory may involve controlled attention to modality-dependent post-processing. Similar short-term memory processes may exist in the auditory, motor and visual systems. Copyright © 2010 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Effects of visual working memory on brain information processing of irrelevant auditory stimuli.
Qu, Jiagui; Rizak, Joshua D; Zhao, Lun; Li, Minghong; Ma, Yuanye
2014-01-01
Selective attention has traditionally been viewed as a sensory processing modulator that promotes cognitive processing efficiency by favoring relevant stimuli while inhibiting irrelevant stimuli. However, the cross-modal processing of irrelevant information during working memory (WM) has been rarely investigated. In this study, the modulation of irrelevant auditory information by the brain during a visual WM task was investigated. The N100 auditory evoked potential (N100-AEP) following an auditory click was used to evaluate the selective attention to auditory stimulus during WM processing and at rest. N100-AEP amplitudes were found to be significantly affected in the left-prefrontal, mid-prefrontal, right-prefrontal, left-frontal, and mid-frontal regions while performing a high WM load task. In contrast, no significant differences were found between N100-AEP amplitudes in WM states and rest states under a low WM load task in all recorded brain regions. Furthermore, no differences were found between the time latencies of N100-AEP troughs in WM states and rest states while performing either the high or low WM load task. These findings suggested that the prefrontal cortex (PFC) may integrate information from different sensory channels to protect perceptual integrity during cognitive processing.
Thresholding of auditory cortical representation by background noise
Liang, Feixue; Bai, Lin; Tao, Huizhong W.; Zhang, Li I.; Xiao, Zhongju
2014-01-01
It is generally thought that background noise can mask auditory information. However, how noise specifically transforms neuronal auditory processing in a level-dependent manner remains to be carefully determined. Here, with in vivo loose-patch cell-attached recordings in layer 4 of the rat primary auditory cortex (A1), we systematically examined how continuous wideband noise of different levels affected the receptive field properties of individual neurons. We found that background noise, when above a certain critical/effective level, resulted in an elevation of the intensity threshold for tone-evoked responses. This increase in threshold was linearly dependent on the noise intensity above the critical level. As such, the tonal receptive field (TRF) of individual neurons was translated upward as an entirety toward high intensities along the intensity domain. This preserved the preferred characteristic frequency (CF) and the overall shape of the TRF, but reduced the frequency response range and enhanced frequency selectivity at the same stimulus intensity. Such translational effects on intensity threshold were observed in both excitatory and fast-spiking inhibitory neurons, as well as in both monotonic and nonmonotonic (intensity-tuned) A1 neurons. Our results suggest that in a noise background, fundamental auditory representations are modulated through a background-level-dependent linear shift along the intensity domain, which is equivalent to reducing the stimulus intensity. PMID:25426029
Cerebral responses to local and global auditory novelty under general anesthesia
Uhrig, Lynn; Janssen, David; Dehaene, Stanislas; Jarraya, Béchir
2017-01-01
Primate brains can detect a variety of unexpected deviations in auditory sequences. The local-global paradigm dissociates two hierarchical levels of auditory predictive coding by examining the brain responses to first-order (local) and second-order (global) sequence violations. Using the macaque model, we previously demonstrated that, in the awake state, local violations cause focal auditory responses while global violations activate a brain circuit comprising prefrontal, parietal and cingulate cortices. Here we used the same local-global auditory paradigm to clarify the encoding of hierarchical auditory regularities in anesthetized monkeys and compared their brain responses, measured with fMRI, to those obtained in the awake state. Both propofol, a GABAA agonist, and ketamine, an NMDA antagonist, left intact or even enhanced the cortical response to auditory inputs. The local effect vanished during propofol anesthesia and shifted spatially during ketamine anesthesia compared with wakefulness. Under increasing levels of propofol, we observed a progressive disorganization of the global effect in prefrontal, parietal and cingulate cortices, and its complete suppression under ketamine anesthesia. Anesthesia also suppressed thalamic activations to the global effect. These results suggest that anesthesia preserves initial auditory processing but disturbs both short-term and long-term auditory predictive coding mechanisms. The disorganization of auditory novelty processing under anesthesia relates to a loss of thalamic responses to novelty and to a disruption of higher-order functional cortical networks in parietal, prefrontal and cingulate cortices. PMID:27502046
Newborn hearing screening update for midwifery practice.
Narrigan, D
2000-01-01
Neonatal identification of congenital hearing impairment allows interventions during the first 3 years, the critical period for language and speech development. Two recently developed biophysical testing methods offer simple, accurate, and relatively inexpensive means to identify the one to three in 1,000 healthy newborns with hearing loss. Universal screening for auditory system integrity is advocated because almost half of all newborns with hearing impairment have no risk factors associated with this impairment. Critics of universal screening cite the high rate of false positive tests (up to 7%), which increases program costs through follow-up and re-testing of large numbers of infants to ensure that the few affected infants are identified. As of early 2000, 24 states had introduced some type of auditory screening program, and the U.S. Congress had passed legislation with appropriations mandating state-based auditory screening for all newborns. Midwives practicing in states already mandating biophysical screening need to comply with their local requirements; those in other states may voluntarily incorporate new auditory test methods into practice.
2013-01-01
Background At present, there is no consensus on how to clinically assess localisation to sound in patients recovering from coma. Here we studied auditory localisation using the patient's own name as compared to a meaningless sound (i.e., a ringing bell). Methods Eighty-six post-comatose patients diagnosed with a vegetative state/unresponsive wakefulness syndrome or a minimally conscious state were prospectively included. Localisation of auditory stimulation (i.e., head or eye orientation toward the sound) was assessed using the patient's own name as compared to a ringing bell. Statistical analyses used binomial testing with Bonferroni correction for multiple comparisons. Results 37 (43%) of the 86 studied patients showed localisation to auditory stimulation. More patients (n=34, 40%) oriented the head or eyes to their own name than to the ringing bell (n=20, 23%; p<0.001). Conclusions When assessing auditory function in disorders of consciousness, using the patient's own name is shown here to be more suitable for eliciting a response than a neutral sound. PMID:23506054
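The Bonferroni-corrected binomial testing described in this abstract can be sketched in a few lines; note this is an illustrative reconstruction under assumed parameters (the nominal chance rate and the specific comparisons are my assumptions, not the authors' reported analysis):

```python
from math import comb

def binom_sf(k, n, p):
    """One-sided binomial tail: P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def bonferroni(pvals):
    """Bonferroni adjustment: multiply each p-value by the number of tests, cap at 1."""
    m = len(pvals)
    return [min(1.0, p * m) for p in pvals]

# Illustrative numbers from the abstract (34/86 localised to own name, 20/86 to
# the bell), each tested against a hypothetical 10% spontaneous-orientation rate,
# with correction for the two comparisons.
p_name = binom_sf(34, 86, 0.10)
p_bell = binom_sf(20, 86, 0.10)
p_name_adj, p_bell_adj = bonferroni([p_name, p_bell])
```

The paired own-name-versus-bell contrast reported in the paper would instead compare responses within patients; the sketch only shows the mechanics of a Bonferroni-corrected binomial test.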
Auditory Processing Learning Disability, Suicidal Ideation, and Transformational Faith
ERIC Educational Resources Information Center
Bailey, Frank S.; Yocum, Russell G.
2015-01-01
The purpose of this personal experience as a narrative investigation is to describe how an auditory processing learning disability exacerbated--and how spirituality and religiosity relieved--suicidal ideation, through the lived experiences of an individual born and raised in the United States. The study addresses: (a) how an auditory processing…
Temporal Integration of Auditory Information Is Invariant to Temporal Grouping Cues
Liu, Andrew S K; Tsunada, Joji; Gold, Joshua I; Cohen, Yale E
2015-01-01
Auditory perception depends on the temporal structure of incoming acoustic stimuli. Here, we examined whether a temporal manipulation that affects the perceptual grouping also affects the time dependence of decisions regarding those stimuli. We designed a novel discrimination task that required human listeners to decide whether a sequence of tone bursts was increasing or decreasing in frequency. We manipulated temporal perceptual-grouping cues by changing the time interval between the tone bursts, which led to listeners hearing the sequences as a single sound for short intervals or discrete sounds for longer intervals. Despite these strong perceptual differences, this manipulation did not affect the efficiency of how auditory information was integrated over time to form a decision. Instead, the grouping manipulation affected subjects' speed-accuracy trade-offs. These results indicate that the temporal dynamics of evidence accumulation for auditory perceptual decisions can be invariant to manipulations that affect the perceptual grouping of the evidence.
Separating pitch chroma and pitch height in the human brain
Warren, J. D.; Uppenkamp, S.; Patterson, R. D.; Griffiths, T. D.
2003-01-01
Musicians recognize pitch as having two dimensions. On the keyboard, these are illustrated by the octave and the cycle of notes within the octave. In perception, these dimensions are referred to as pitch height and pitch chroma, respectively. Pitch chroma provides a basis for presenting acoustic patterns (melodies) that do not depend on the particular sound source. In contrast, pitch height provides a basis for segregation of notes into streams to separate sound sources. This paper reports a functional magnetic resonance experiment designed to search for distinct mappings of these two types of pitch change in the human brain. The results show that chroma change is specifically represented anterior to primary auditory cortex, whereas height change is specifically represented posterior to primary auditory cortex. We propose that tracking of acoustic information streams occurs in anterior auditory areas, whereas the segregation of sound objects (a crucial aspect of auditory scene analysis) depends on posterior areas. PMID:12909719
Murphy, Kathleen M; Saunders, Muriel D; Saunders, Richard R; Olswang, Lesley B
2004-01-01
The effects of different types and amounts of environmental stimuli (visual and auditory) on microswitch use and behavioral states of three individuals with profound multiple impairments were examined. Each individual's switch use and behavioral states were measured under three setting conditions: natural stimuli (typical visual and auditory stimuli in a recreational situation), reduced visual stimuli, and reduced visual and auditory stimuli. Results demonstrated differential switch use in all participants across the varying environmental setting conditions. No consistent effects of environmental condition on behavioral state were observed. Predominant behavioral state scores and switch use did not systematically covary for any participant. Results suggest the importance of considering environmental stimuli in relation to switch use when working with individuals with profound multiple impairments.
Neural coding strategies in auditory cortex.
Wang, Xiaoqin
2007-07-01
In contrast to the visual system, the auditory system has longer subcortical pathways and more spiking synapses between the peripheral receptors and the cortex. This unique organization reflects the needs of the auditory system to extract behaviorally relevant information from a complex acoustic environment using strategies different from those used by other sensory systems. The neural representations of acoustic information in auditory cortex can be characterized by three types: (1) isomorphic (faithful) representations of acoustic structures; (2) non-isomorphic transformations of acoustic features and (3) transformations from acoustical to perceptual dimensions. The challenge facing auditory neurophysiologists is to understand the nature of the latter two transformations. In this article, I will review recent studies from our laboratory regarding temporal discharge patterns in auditory cortex of awake marmosets and cortical representations of time-varying signals. Findings from these studies show that (1) firing patterns of neurons in auditory cortex are dependent on stimulus optimality and context and (2) the auditory cortex forms internal representations of sounds that are no longer faithful replicas of their acoustic structures.
The Effect of Cognitive Control on Different Types of Auditory Distraction.
Bell, Raoul; Röer, Jan P; Marsh, John E; Storch, Dunja; Buchner, Axel
2017-09-01
Deviant as well as changing auditory distractors interfere with short-term memory. According to the duplex model of auditory distraction, the deviation effect is caused by a shift of attention while the changing-state effect is due to obligatory order processing. This theory predicts that foreknowledge should reduce the deviation effect but should have no effect on the changing-state effect. We compared the effect of foreknowledge on the two phenomena directly within the same experiment. In a pilot study, specific foreknowledge failed to reduce either the changing-state effect or the deviation effect, but it reduced disruption by sentential speech, suggesting that the effects of foreknowledge on auditory distraction may increase with the complexity of the stimulus material. Given the unexpected nature of this finding, we tested whether the same finding would be obtained in (a) a direct preregistered replication in Germany and (b) an additional replication with translated stimulus materials in Sweden.
Diminished Auditory Responses during NREM Sleep Correlate with the Hierarchy of Language Processing
Wilf, Meytal; Ramot, Michal; Furman-Haran, Edna; Arzi, Anat; Levkovitz, Yechiel; Malach, Rafael
2016-01-01
Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high order processing stations. PMID:27310812
Kantrowitz, J T; Hoptman, M J; Leitman, D I; Silipo, G; Javitt, D C
2014-01-01
Intact sarcasm perception is a crucial component of social cognition and mentalizing (the ability to understand the mental state of oneself and others). In sarcasm, tone of voice is used to negate the literal meaning of an utterance. In particular, changes in pitch are used to distinguish between sincere and sarcastic utterances. Schizophrenia patients show well-replicated deficits in auditory function and functional connectivity (FC) within and between auditory cortical regions. In this study we investigated the contributions of auditory deficits to sarcasm perception in schizophrenia. Auditory measures including pitch processing, auditory emotion recognition (AER) and sarcasm detection were obtained from 76 patients with schizophrenia/schizo-affective disorder and 72 controls. Resting-state FC (rsFC) was obtained from a subsample and was analyzed using seeds placed in both auditory cortex and meta-analysis-defined core-mentalizing regions relative to auditory performance. Patients showed large effect-size deficits across auditory measures. Sarcasm deficits correlated significantly with general functioning and impaired pitch processing both across groups and within the patient group alone. Patients also showed reduced sensitivity to alterations in mean pitch and variability. For patients, sarcasm discrimination correlated exclusively with the level of rsFC within primary auditory regions whereas for controls, correlations were observed exclusively within core-mentalizing regions (the right posterior superior temporal gyrus, anterior superior temporal sulcus and insula, and left posterior medial temporal gyrus). These findings confirm the contribution of auditory deficits to theory of mind (ToM) impairments in schizophrenia, and demonstrate that FC within auditory, but not core-mentalizing, regions is rate limiting with respect to sarcasm detection in schizophrenia.
Hasni, Anita A; Adamson, Lauren B; Williamson, Rebecca A; Robins, Diana L
2017-12-01
Theory of mind (ToM) gradually develops during the preschool years. Measures of ToM usually target visual experience, but auditory experiences also provide valuable social information. Given differences between the visual and auditory modalities (e.g., sights persist, sounds fade) and the important role environmental input plays in social-cognitive development, we asked whether modality might influence the progression of ToM development. The current study expands Wellman and Liu's ToM scale (2004) by testing 66 preschoolers using five standard visual ToM tasks and five newly crafted auditory ToM tasks. Age and gender effects were found, with 4- and 5-year-olds demonstrating greater ToM abilities than 3-year-olds and girls passing more tasks than boys; there was no significant effect of modality. Both visual and auditory tasks formed a scalable set. These results indicate that there is considerable consistency in when children are able to use visual and auditory inputs to reason about various aspects of others' mental states. Copyright © 2017 Elsevier Inc. All rights reserved.
Di Bonito, Maria; Studer, Michèle
2017-01-01
During development, the organization of the auditory system into distinct functional subcircuits depends on the spatially and temporally ordered sequence of neuronal specification, differentiation, migration and connectivity. Regional patterning along the antero-posterior axis and neuronal subtype specification along the dorso-ventral axis intersect to determine proper neuronal fate and assembly of rhombomere-specific auditory subcircuits. By taking advantage of the increasing number of transgenic mouse lines, recent studies have expanded the knowledge of developmental mechanisms involved in the formation and refinement of the auditory system. Here, we summarize several findings dealing with the molecular and cellular mechanisms that underlie the assembly of central auditory subcircuits during mouse development, focusing primarily on the rhombomeric and dorso-ventral origin of auditory nuclei and their associated molecular genetic pathways. PMID:28469562
Auditory psychophysics and perception.
Hirsh, I J; Watson, C S
1996-01-01
In this review of auditory psychophysics and perception, we cite some important books, research monographs, and research summaries from the past decade. Within auditory psychophysics, we have singled out some topics of current importance: cross-spectral processing, timbre and pitch, and methodological developments. Complex sounds and complex listening tasks have been the subject of new studies in auditory perception. We review especially work that concerns auditory pattern perception, with emphasis on temporal aspects of the patterns and on patterns that do not depend on the cognitive structures often involved in the perception of speech and music. Finally, we comment on individual differences that are sufficiently important to call into question the goal of characterizing the auditory properties of the typical, average, adult listener. Among the important factors that give rise to these individual differences are those involved in selective processing and attention.
ERIC Educational Resources Information Center
Ugwuanyi, L. T.; Adaka, T. A.
2015-01-01
The paper focused on the effect of auditory training on reading comprehension of children with hearing impairment in Enugu State. A total of 33 children with conductive, sensory neural and mixed hearing loss were sampled for the study in the two schools for the Deaf in Enugu State. The design employed for the study was a quasi experiment (pre-test…
NASA Technical Reports Server (NTRS)
Dornhoffer, John L.; Mamiya, N.; Bray, P.; Skinner, Robert D.; Garcia-Rill, Edgar
2002-01-01
Sopite syndrome, characterized by loss of initiative, sensitivity to normally innocuous sensory stimuli, and impaired concentration amounting to a sensory gating deficit, is commonly associated with Space Motion Sickness (SMS). The amplitude of the P50 potential is a measure of level of arousal, and a paired-stimulus paradigm can be used to measure sensory gating. We used the rotary chair to elicit the sensory mismatch that occurs with SMS by overstimulating the vestibular apparatus. The effects of rotation on the manifestation of the P50 midlatency auditory evoked response were then assessed as a measure of arousal and distractibility. Results showed that rotation-induced motion sickness produced no change in the level of arousal but did produce a significant deficit in sensory gating, indicating that some of the attentional and cognitive deficits observed with SMS may be due to distractibility induced by decreased habituation to repetitive stimuli.
Listening to Rhythmic Music Reduces Connectivity within the Basal Ganglia and the Reward System.
Brodal, Hans P; Osnes, Berge; Specht, Karsten
2017-01-01
Music can trigger emotional responses in a more direct way than any other stimulus. In particular, music-evoked pleasure involves brain networks that are part of the reward system. Furthermore, rhythmic music stimulates the basal ganglia and may trigger involuntary movements to the beat. In the present study, we created a continuously playing rhythmic, dance floor-like composition in which the ambient noise from the MR scanner was incorporated as an additional instrument of rhythm. By treating this continuous stimulation paradigm as a variant of resting state, the data were analyzed with stochastic dynamic causal modeling (sDCM), which was used to explore functional dependencies and interactions between core areas of auditory perception, rhythm processing, and reward processing. The sDCM model was a fully connected model with the following areas: auditory cortex, putamen/pallidum, and ventral striatum/nucleus accumbens of both hemispheres. The resulting estimated parameters were compared to ordinary resting-state data acquired without additional continuous stimulation. Besides reduced connectivity within the basal ganglia, the results indicated reduced functional connectivity of the reward system, namely of the right ventral striatum/nucleus accumbens from and to the basal ganglia and auditory network, while listening to rhythmic music. In addition, the right ventral striatum/nucleus accumbens also demonstrated a change in its hemodynamic parameter, reflecting an increased level of activation. These converging results may indicate that the dopaminergic reward system reduces its functional connectivity and relinquishes its constraints on other areas when we listen to rhythmic music. PMID:28400717
Neural stem/progenitor cell properties of glial cells in the adult mouse auditory nerve
Lang, Hainan; Xing, Yazhi; Brown, LaShardai N.; Samuvel, Devadoss J.; Panganiban, Clarisse H.; Havens, Luke T.; Balasubramanian, Sundaravadivel; Wegner, Michael; Krug, Edward L.; Barth, Jeremy L.
2015-01-01
The auditory nerve is the primary conveyor of hearing information from sensory hair cells to the brain. It has been believed that loss of the auditory nerve is irreversible in the adult mammalian ear, resulting in sensorineural hearing loss. We examined the regenerative potential of the auditory nerve in a mouse model of auditory neuropathy. Following neuronal degeneration, quiescent glial cells converted to an activated state showing a decrease in nuclear chromatin condensation, altered histone deacetylase expression and up-regulation of numerous genes associated with neurogenesis or development. Neurosphere formation assays showed that adult auditory nerves contain neural stem/progenitor cells (NSPs) that were within a Sox2-positive glial population. Production of neurospheres from auditory nerve cells was stimulated by acute neuronal injury and hypoxic conditioning. These results demonstrate that a subset of glial cells in the adult auditory nerve exhibit several characteristics of NSPs and are therefore potential targets for promoting auditory nerve regeneration. PMID:26307538
Intrinsic network activity in tinnitus investigated using functional MRI
Leaver, Amber M.; Turesky, Ted K.; Seydell-Greenwald, Anna; Morgan, Susan; Kim, Hung J.; Rauschecker, Josef P.
2016-01-01
Tinnitus is an increasingly common disorder in which patients experience phantom auditory sensations, usually ringing or buzzing in the ear. Tinnitus pathophysiology has been repeatedly shown to involve both auditory and non-auditory brain structures, making network-level studies of tinnitus critical. In this magnetic resonance imaging (MRI) study, we used two resting-state functional connectivity (RSFC) approaches to better understand functional network disturbances in tinnitus. First, we demonstrated tinnitus-related reductions in RSFC between specific brain regions and resting-state networks (RSNs), defined by independent components analysis (ICA) and chosen for their overlap with structures known to be affected in tinnitus. Then, we restricted ICA to data from tinnitus patients, and identified one RSN not apparent in control data. This tinnitus RSN included auditory-sensory regions like inferior colliculus and medial Heschl’s gyrus, as well as classically non-auditory regions like the mediodorsal nucleus of the thalamus, striatum, lateral prefrontal and orbitofrontal cortex. Notably, patients’ reported tinnitus loudness was positively correlated with RSFC between the mediodorsal nucleus and the tinnitus RSN, indicating that this network may underlie the auditory-sensory experience of tinnitus. These data support the idea that tinnitus involves network dysfunction, and further stress the importance of communication between auditory-sensory and fronto-striatal circuits in tinnitus pathophysiology. PMID:27091485
Sound localization by echolocating bats
NASA Astrophysics Data System (ADS)
Aytekin, Murat
Echolocating bats emit ultrasonic vocalizations and listen to echoes reflected back from objects in the path of the sound beam to build a spatial representation of their surroundings. Important to understanding the representation of space through echolocation are detailed studies of the cues used for localization, the sonar emission patterns and how this information is assembled. This thesis includes three studies, one on the directional properties of the sonar receiver, one on the directional properties of the sonar transmitter, and a model that demonstrates the role of action in building a representation of auditory space. The general importance of this work to a broader understanding of spatial localization is discussed. Investigations of the directional properties of the sonar receiver reveal that interaural level difference and monaural spectral notch cues are both dependent on sound source azimuth and elevation. This redundancy allows flexibility that an echolocating bat may need when coping with complex computational demands for sound localization. Using a novel method to measure bat sonar emission patterns from freely behaving bats, I show that the sonar beam shape varies between vocalizations. Consequently, the auditory system of a bat may need to adapt its computations to accurately localize objects using changing acoustic inputs. Extra-auditory signals that carry information about pinna position and beam shape are required for auditory localization of sound sources. The auditory system must learn associations between extra-auditory signals and acoustic spatial cues. Furthermore, the auditory system must adapt to changes in acoustic input that occur with changes in pinna position and vocalization parameters. These demands on the nervous system suggest that sound localization is achieved through the interaction of behavioral control and acoustic inputs. A sensorimotor model demonstrates how an organism can learn space through auditory-motor contingencies. 
The model also reveals how different aspects of sound localization, such as experience-dependent acquisition, adaptation, and extra-auditory influences, can be brought together under a comprehensive framework. This thesis presents a foundation for understanding the representation of auditory space that builds upon acoustic cues, motor control, and learning dynamic associations between action and auditory inputs.
Rapid recalibration of speech perception after experiencing the McGurk illusion.
Lüttke, Claudia S; Pérez-Bellido, Alexis; de Lange, Floris P
2018-03-01
The human brain can quickly adapt to changes in the environment. One example is phonetic recalibration: a speech sound is interpreted differently depending on the accompanying visual speech, and this interpretation persists in the absence of visual information. Here, we examined the mechanisms of phonetic recalibration. Participants categorized the auditory syllables /aba/ and /ada/, which were sometimes preceded by the so-called McGurk stimuli (in which an /aba/ sound, due to visual /aga/ input, is often perceived as 'ada'). We found that only one trial of exposure to the McGurk illusion was sufficient to induce a recalibration effect, i.e. an auditory /aba/ stimulus was subsequently more often perceived as 'ada'. Furthermore, phonetic recalibration took place only when auditory and visual inputs were integrated to 'ada' (McGurk illusion). Moreover, this recalibration depended on the sensory similarity between the preceding and current auditory stimulus. Finally, signal detection theoretic analysis showed that McGurk-induced phonetic recalibration resulted in both a criterion shift towards /ada/ and a reduced sensitivity to distinguish between /aba/ and /ada/ sounds. The current study shows that phonetic recalibration depends on the perceptual integration of audiovisual information and leads to a perceptual shift in phoneme categorization.
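The signal detection theoretic analysis mentioned in this abstract separates response bias (criterion) from true discriminability (d'). A minimal sketch of that decomposition; the hit and false-alarm rates below are hypothetical illustrations, not the study's data:

```python
from statistics import NormalDist

def sdt_measures(hit_rate, false_alarm_rate):
    """Sensitivity (d') and criterion (c) from hit and false-alarm
    rates, e.g. treating 'ada' responses to /ada/ as hits and 'ada'
    responses to /aba/ as false alarms."""
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

# Hypothetical rates before and after McGurk exposure: a shift toward
# 'ada' appears as a more negative criterion, and reduced
# discriminability as a smaller d'.
d_pre, c_pre = sdt_measures(0.90, 0.10)
d_post, c_post = sdt_measures(0.92, 0.30)
```

On these illustrative numbers, the post-exposure criterion is more negative (biased toward 'ada') and d' is smaller, matching the pattern the abstract reports.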
Synchronization with competing visual and auditory rhythms: bouncing ball meets metronome.
Hove, Michael J; Iversen, John R; Zhang, Allen; Repp, Bruno H
2013-07-01
Synchronization of finger taps with periodically flashing visual stimuli is known to be much more variable than synchronization with an auditory metronome. When one of these rhythms is the synchronization target and the other serves as a distracter at various temporal offsets, strong auditory dominance is observed. However, it has recently been shown that visuomotor synchronization improves substantially with moving stimuli such as a continuously bouncing ball. The present study pitted a bouncing ball against an auditory metronome in a target-distracter synchronization paradigm, with the participants being auditory experts (musicians) and visual experts (video gamers and ball players). Synchronization was still less variable with auditory than with visual target stimuli in both groups. For musicians, auditory stimuli tended to be more distracting than visual stimuli, whereas the opposite was the case for the visual experts. Overall, there was no main effect of distracter modality. Thus, a distracting spatiotemporal visual rhythm can be as effective as a distracting auditory rhythm in its capacity to perturb synchronous movement, but its effectiveness also depends on modality-specific expertise.
Reproduction of auditory and visual standards in monochannel cochlear implant users.
Kanabus, Magdalena; Szelag, Elzbieta; Kolodziejczyk, Iwona; Szuchnik, Joanna
2004-01-01
The temporal reproduction of standard durations ranging from 1 to 9 seconds was investigated in monochannel cochlear implant (CI) users and in normally hearing subjects for the auditory and visual modality. The results showed that the pattern of performance in patients depended on their level of auditory comprehension. Results for CI users, who displayed relatively good auditory comprehension, did not differ from those of normally hearing subjects for both modalities. Patients with poor auditory comprehension significantly overestimated shorter auditory standards (1, 1.5 and 2.5 s), compared to both patients with good comprehension and controls. For the visual modality the between-group comparisons were not significant. These deficits in the reproduction of auditory standards were explained in accordance with both the attentional-gate model and the role of working memory in prospective time judgment. The impairments described above can influence the functioning of the temporal integration mechanism that is crucial for auditory speech comprehension on the level of words and phrases. We postulate that the deficits in time reproduction of short standards may be one of the possible reasons for poor speech understanding in monochannel CI users.
Brown, Trecia A; Joanisse, Marc F; Gati, Joseph S; Hughes, Sarah M; Nixon, Pam L; Menon, Ravi S; Lomber, Stephen G
2013-01-01
Much of what is known about the cortical organization for audition in humans draws from studies of auditory cortex in the cat. However, these data build largely on electrophysiological recordings that are both highly invasive and provide less evidence concerning macroscopic patterns of brain activation. Optical imaging, using intrinsic signals or dyes, allows visualization of surface-based activity but is also quite invasive. Functional magnetic resonance imaging (fMRI) overcomes these limitations by providing a large-scale perspective of distributed activity across the brain in a non-invasive manner. The present study used fMRI to characterize stimulus-evoked activity in auditory cortex of an anesthetized (ketamine/isoflurane) cat, focusing specifically on the blood-oxygen-level-dependent (BOLD) signal time course. Functional images were acquired for adult cats in a 7 T MRI scanner. To determine the BOLD signal time course, we presented 1 s broadband noise bursts between widely spaced scan acquisitions at randomized delays (1-12 s in 1 s increments) prior to each scan. Baseline trials in which no stimulus was presented were also acquired. Our results indicate that the BOLD response peaks at about 3.5 s in primary auditory cortex (AI) and at about 4.5 s in non-primary areas (AII, PAF) of cat auditory cortex. The observed peak latency is within the range reported for humans and non-human primates (3-4 s). The time course of hemodynamic activity in cat auditory cortex also occurs on a comparatively shorter scale than in cat visual cortex. The results of this study will provide a foundation for future auditory fMRI studies in the cat to incorporate these hemodynamic response properties into appropriate analyses of cat auditory cortex. Copyright © 2012 Elsevier Inc. All rights reserved.
Auditory Signal Processing in Communication: Perception and Performance of Vocal Sounds
Prather, Jonathan F.
2013-01-01
Learning and maintaining the sounds we use in vocal communication require accurate perception of the sounds we hear performed by others and feedback-dependent imitation of those sounds to produce our own vocalizations. Understanding how the central nervous system integrates auditory and vocal-motor information to enable communication is a fundamental goal of systems neuroscience, and insights into the mechanisms of those processes will profoundly enhance clinical therapies for communication disorders. Gaining the high-resolution insight necessary to define the circuits and cellular mechanisms underlying human vocal communication is presently impractical. Songbirds are the best animal model of human speech, and this review highlights recent insights into the neural basis of auditory perception and feedback-dependent imitation in those animals. Neural correlates of song perception are present in auditory areas, and those correlates are preserved in the auditory responses of downstream neurons that are also active when the bird sings. Initial tests indicate that singing-related activity in those downstream neurons is associated with vocal-motor performance as opposed to the bird simply hearing itself sing. Therefore, action potentials related to auditory perception and action potentials related to vocal performance are co-localized in individual neurons. Conceptual models of song learning involve comparison of vocal commands and the associated auditory feedback to compute an error signal that is used to guide refinement of subsequent song performances, yet the sites of that comparison remain unknown. Convergence of sensory and motor activity onto individual neurons points to a possible mechanism through which auditory and vocal-motor signals may be linked to enable learning and maintenance of the sounds used in vocal communication. PMID:23827717
Ostrand, Rachel; Blumstein, Sheila E.; Ferreira, Victor S.; Morgan, James L.
2016-01-01
Human speech perception often includes both an auditory and visual component. A conflict in these signals can result in the McGurk illusion, in which the listener perceives a fusion of the two streams, implying that information from both has been integrated. We report two experiments investigating whether auditory-visual integration of speech occurs before or after lexical access, and whether the visual signal influences lexical access at all. Subjects were presented with McGurk or Congruent primes and performed a lexical decision task on related or unrelated targets. Although subjects perceived the McGurk illusion, McGurk and Congruent primes with matching real-word auditory signals equivalently primed targets that were semantically related to the auditory signal, but not targets related to the McGurk percept. We conclude that the time course of auditory-visual integration is dependent on the lexicality of the auditory and visual input signals, and that listeners can lexically access one word and yet consciously perceive another. PMID:27011021
Johnsen, Erik; Hugdahl, Kenneth; Fusar-Poli, Paolo; Kroken, Rune A; Kompus, Kristiina
2013-01-01
Experiencing auditory verbal hallucinations is a prominent symptom in schizophrenia that also occurs in subjects at enhanced risk for psychosis and in the general population. Drug treatment of auditory hallucinations is challenging, because the current understanding is limited with respect to the neural mechanisms involved, as well as how CNS drugs, such as antipsychotics, influence the subjective experience and neurophysiology of hallucinations. In this article, the authors review studies of the effect of antipsychotic medication on brain activation as measured with functional MRI in patients with auditory verbal hallucinations. First, the authors examine the neural correlates of ongoing auditory hallucinations. Then, the authors critically discuss studies addressing the antipsychotic effect on the neural correlates of complex cognitive tasks. Current evidence suggests that blood-oxygen-level-dependent effects of antipsychotic drugs reflect specific, regional effects, but studies on the neuropharmacology of auditory hallucinations are scarce. Future directions for pharmacological neuroimaging of auditory hallucinations are discussed.
Mechanisms of spectral and temporal integration in the mustached bat inferior colliculus
Wenstrup, Jeffrey James; Nataraj, Kiran; Sanchez, Jason Tait
2012-01-01
This review describes mechanisms and circuitry underlying combination-sensitive response properties in the auditory brainstem and midbrain. Combination-sensitive neurons, performing a type of auditory spectro-temporal integration, respond to specific, properly timed combinations of spectral elements in vocal signals and other acoustic stimuli. While these neurons are known to occur in the auditory forebrain of many vertebrate species, the work described here establishes their origin in the auditory brainstem and midbrain. Focusing on the mustached bat, we review several major findings: (1) Combination-sensitive responses involve facilitatory interactions, inhibitory interactions, or both when activated by distinct spectral elements in complex sounds. (2) Combination-sensitive responses are created in distinct stages: inhibition arises mainly in lateral lemniscal nuclei of the auditory brainstem, while facilitation arises in the inferior colliculus (IC) of the midbrain. (3) Spectral integration underlying combination-sensitive responses requires a low-frequency input tuned well below a neuron's characteristic frequency (ChF). Low-ChF neurons in the auditory brainstem project to high-ChF regions in brainstem or IC to create combination sensitivity. (4) At their sites of origin, both facilitatory and inhibitory combination-sensitive interactions depend on glycinergic inputs and are eliminated by glycine receptor blockade. Surprisingly, facilitatory interactions in IC depend almost exclusively on glycinergic inputs and are largely independent of glutamatergic and GABAergic inputs. (5) The medial nucleus of the trapezoid body (MNTB), the lateral lemniscal nuclei, and the IC play critical roles in creating combination-sensitive responses. We propose that these mechanisms, based on work in the mustached bat, apply to a broad range of mammals and other vertebrates that depend on temporally sensitive integration of information across the audible spectrum. PMID:23109917
Inconsistent Effect of Arousal on Early Auditory Perception
Bolders, Anna C.; Band, Guido P. H.; Stallen, Pieter Jan M.
2017-01-01
Mood has been shown to influence cognitive performance. However, little is known about the influence of mood on sensory processing, specifically in the auditory domain. With the current study, we sought to investigate how auditory processing of neutral sounds is affected by the mood state of the listener. This was tested in two experiments by measuring masked auditory detection thresholds before and after a standard mood-induction procedure. In the first experiment (N = 76), mood was induced by imagining a mood-appropriate event combined with listening to mood-inducing music. In the second experiment (N = 80), imagining was combined with affective picture viewing to exclude any possibility of confounding the results by acoustic properties of the music. In both experiments, the thresholds were determined by means of an adaptive staircase tracking method in a two-interval forced-choice task. Masked detection thresholds were compared between participants in four different moods (calm, happy, sad, and anxious), which enabled differentiation of mood effects along the dimensions of arousal and pleasure. Results of the two experiments were analyzed both separately and in a combined analysis. The first experiment showed that, while there was no impact of pleasure level on the masked threshold, lower arousal was associated with a lower threshold (higher masked sensitivity). However, as indicated by an interaction effect between experiment and arousal, arousal had a different effect on the threshold in Experiment 2, which showed a trend in the opposite direction. These results show that the effect of arousal on auditory masked sensitivity may depend on the modality of the mood-inducing stimuli. As clear conclusions regarding the genuineness of the arousal effect on the masked threshold cannot be drawn, suggestions for further research that could clarify this issue are provided. PMID:28424639
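The adaptive staircase tracking method described in this abstract can be illustrated with a generic 2-down/1-up rule, which converges near the ~70.7%-correct point of the psychometric function. This is a sketch only: the step size, starting level, number of trials, and simulated observer are assumptions, not the study's parameters.

```python
import random

def run_staircase(true_threshold, start_level=60.0, step=2.0,
                  n_trials=200, seed=1):
    """Simulate a 2-down/1-up adaptive staircase: the signal level
    drops after two consecutive correct responses and rises after
    each error, converging near the ~70.7%-correct point."""
    rng = random.Random(seed)
    level = start_level
    streak = 0
    last_direction = 0
    reversals = []
    for _ in range(n_trials):
        # Toy psychometric observer for a two-interval forced choice:
        # chance (0.5) far below threshold, near-perfect far above it.
        p_correct = 0.5 + 0.5 / (1.0 + 10 ** ((true_threshold - level) / 4.0))
        if rng.random() < p_correct:
            streak += 1
            if streak == 2:
                streak = 0
                level -= step
                if last_direction == +1:
                    reversals.append(level)
                last_direction = -1
        else:
            streak = 0
            level += step
            if last_direction == -1:
                reversals.append(level)
            last_direction = +1
    # Threshold estimate: mean level over the last few reversals.
    tail = reversals[-6:]
    return sum(tail) / len(tail)

estimate = run_staircase(true_threshold=50.0)
```

With the toy observer above, the tracked level oscillates around the simulated threshold, and averaging the final reversal levels yields the threshold estimate.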
Effect of negative emotions evoked by light, noise and taste on trigeminal thermal sensitivity.
Yang, Guangju; Baad-Hansen, Lene; Wang, Kelun; Xie, Qiu-Fei; Svensson, Peter
2014-11-07
Patients with migraine often have impaired somatosensory function and experience headache attacks triggered by exogenous stimulus, such as light, sound or taste. This study aimed to assess the influence of three controlled conditioning stimuli (visual, auditory and gustatory stimuli and combined stimuli) on affective state and thermal sensitivity in healthy human participants. All participants attended four experimental sessions with visual, auditory and gustatory conditioning stimuli and combination of all stimuli, in a randomized sequence. In each session, the somatosensory sensitivity was tested in the perioral region with use of thermal stimuli with and without the conditioning stimuli. Positive and Negative Affect States (PANAS) were assessed before and after the tests. Subject based ratings of the conditioning and test stimuli in addition to skin temperature and heart rate as indicators of arousal responses were collected in real time during the tests. The three conditioning stimuli all induced significant increases in negative PANAS scores (paired t-test, P ≤0.016). Compared with baseline, the increases were in a near dose-dependent manner during visual and auditory conditioning stimulation. No significant effects of any single conditioning stimuli were observed on trigeminal thermal sensitivity (P ≥0.051) or arousal parameters (P ≥0.057). The effects of combined conditioning stimuli on subjective ratings (P ≤0.038) and negative affect (P = 0.011) were stronger than those of single stimuli. All three conditioning stimuli provided a simple way to evoke a negative affective state without physical arousal or influence on trigeminal thermal sensitivity. Multisensory conditioning had stronger effects but also failed to modulate thermal sensitivity, suggesting that so-called exogenous trigger stimuli e.g. bright light, noise, unpleasant taste in patients with migraine may require a predisposed or sensitized nervous system.
Estradiol-dependent modulation of auditory processing and selectivity in songbirds
Maney, Donna; Pinaud, Raphael
2011-01-01
The steroid hormone estradiol plays an important role in reproductive development and behavior and modulates a wide array of physiological and cognitive processes. Recently, reports from several research groups have converged to show that estradiol also powerfully modulates sensory processing, specifically, the physiology of central auditory circuits in songbirds. These investigators have discovered that (1) behaviorally-relevant auditory experience rapidly increases estradiol levels in the auditory forebrain; (2) estradiol instantaneously enhances the responsiveness and coding efficiency of auditory neurons; (3) these changes are mediated by a non-genomic effect of brain-generated estradiol on the strength of inhibitory neurotransmission; and (4) estradiol regulates biochemical cascades that induce the expression of genes involved in synaptic plasticity. Together, these findings have established estradiol as a central regulator of auditory function and intensified the need to consider brain-based mechanisms, in addition to peripheral organ dysfunction, in hearing pathologies associated with estrogen deficiency. PMID:21146556
Moving in time: Bayesian causal inference explains movement coordination to auditory beats
Elliott, Mark T.; Wing, Alan M.; Welchman, Andrew E.
2014-01-01
Many everyday skilled actions depend on moving in time with signals that are embedded in complex auditory streams (e.g. musical performance, dancing or simply holding a conversation). Such behaviour is apparently effortless; however, it is not known how humans combine auditory signals to support movement production and coordination. Here, we test how participants synchronize their movements when there are potentially conflicting auditory targets to guide their actions. Participants tapped their fingers in time with two simultaneously presented metronomes of equal tempo, but differing in phase and temporal regularity. Synchronization therefore depended on integrating the two timing cues into a single-event estimate or treating the cues as independent and thereby selecting one signal over the other. We show that a Bayesian inference process explains the situations in which participants choose to integrate or separate signals, and predicts motor timing errors. Simulations of this causal inference process demonstrate that this model provides a better description of the data than other plausible models. Our findings suggest that humans exploit a Bayesian inference process to control movement timing in situations where the origin of auditory signals needs to be resolved. PMID:24850915
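The causal inference process described in this abstract weighs fusing the two metronome cues against treating them as separate sources, according to the posterior probability that they share a common cause. A minimal sketch of that computation; the cue noise levels, prior probability of a common cause, and prior width are illustrative assumptions, not the authors' fitted values.

```python
import math

def causal_inference_estimate(t1, t2, sigma1, sigma2,
                              p_common=0.5, sigma_prior=100.0):
    """Bayesian causal inference for two timing cues: fuse them when a
    common cause is likely, otherwise fall back to tracking cue 1.
    Returns (timing estimate, posterior probability of a common cause)."""
    var_c = sigma1 ** 2 + sigma2 ** 2
    # Likelihood of the observed cue discrepancy under one shared cause.
    like_common = (math.exp(-(t1 - t2) ** 2 / (2 * var_c))
                   / math.sqrt(2 * math.pi * var_c))
    # Under independent causes, the discrepancy is spread over a broad
    # prior on possible offsets between the two sources (approximation).
    var_i = var_c + sigma_prior ** 2
    like_indep = 1.0 / math.sqrt(2 * math.pi * var_i)
    post_common = (like_common * p_common
                   / (like_common * p_common + like_indep * (1 - p_common)))
    # Reliability-weighted fusion of the two cues.
    w1 = sigma2 ** 2 / var_c
    fused = w1 * t1 + (1 - w1) * t2
    # Model averaging: blend fusion and segregation by the posterior.
    return post_common * fused + (1 - post_common) * t1, post_common

# Small discrepancy -> cues likely share a cause and get integrated;
# large discrepancy -> cues are treated as separate sources.
est_close, p_close = causal_inference_estimate(0.0, 10.0, 20.0, 20.0)
est_far, p_far = causal_inference_estimate(0.0, 200.0, 20.0, 20.0)
```

This captures the qualitative behavior the abstract reports: integration when cues are plausibly from one source, segregation (tracking one cue) when they conflict strongly.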
Revisiting Arieti's “Listening Attitude” and Hallucinated Voices
Hoffman, Ralph E.
2010-01-01
Silvano Arieti proposed that auditory/verbal hallucinations (AVHs) are triggered by momentary states of heightened auditory attention that he identified as a “listening attitude.” Studies and clinical observations by our group support this view. Patients enrolled in our repetitive transcranial magnetic stimulation trials, if experiencing a significant curtailment of these hallucinations, often report an episodic sense that their voices are still occurring even if they no longer can be heard, suggesting episodic states of heightened auditory expectancy. Moreover, a functional magnetic resonance study reported by our group detected activation in the left insula prior to hallucination events. This finding is suggestive of activation in the same region detected in healthy subjects during “auditory search” in response to ambiguous sounds when anticipating meaningful speech. AVHs often are experienced with a deep emotional salience and may occur in the context of dramatic social isolation that together could reinforce heightened auditory expectancy. These findings and clinical observations suggest that Arieti's original formulation deserves further study. PMID:20363873
Temporal Integration of Auditory Information Is Invariant to Temporal Grouping Cues
Tsunada, Joji
2015-01-01
Auditory perception depends on the temporal structure of incoming acoustic stimuli. Here, we examined whether a temporal manipulation that affects perceptual grouping also affects the time dependence of decisions regarding those stimuli. We designed a novel discrimination task that required human listeners to decide whether a sequence of tone bursts was increasing or decreasing in frequency. We manipulated temporal perceptual-grouping cues by changing the time interval between the tone bursts, which led to listeners hearing the sequences as a single sound for short intervals or as discrete sounds for longer intervals. Despite these strong perceptual differences, this manipulation did not affect the efficiency with which auditory information was integrated over time to form a decision. Instead, the grouping manipulation affected subjects' speed-accuracy trade-offs. These results indicate that the temporal dynamics of evidence accumulation for auditory perceptual decisions can be invariant to manipulations that affect the perceptual grouping of the evidence. PMID:26464975
Seeing the Song: Left Auditory Structures May Track Auditory-Visual Dynamic Alignment
Mossbridge, Julia A.; Grabowecky, Marcia; Suzuki, Satoru
2013-01-01
Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment. PMID:24194873
States of Awareness I: Subliminal Perception Relationship to Situational Awareness
1993-05-01
In one experiment, the visual detection threshold was raised by simultaneous auditory stimulation involving subliminal emotional words. Similar results ... an assessment was made of the effects of both subliminal and supraliminal auditory accessory stimulation (white noise) on a visual detection task ... stimulation investigation. Both subliminal and supraliminal auditory stimulation were employed to evaluate possible differential effects in visual illusions
Auditory-vocal mirroring in songbirds.
Mooney, Richard
2014-01-01
Mirror neurons are theorized to serve as a neural substrate for spoken language in humans, but the existence and functions of auditory-vocal mirror neurons in the human brain remain largely matters of speculation. Songbirds resemble humans in their capacity for vocal learning and depend on their learned songs to facilitate courtship and individual recognition. Recent neurophysiological studies have detected putative auditory-vocal mirror neurons in a sensorimotor region of the songbird's brain that plays an important role in expressive and receptive aspects of vocal communication. This review discusses the auditory and motor-related properties of these cells, considers their potential role in song learning and communication in relation to classical studies of birdsong, and points to the circuit and developmental mechanisms that may give rise to auditory-vocal mirroring in the songbird's brain.
Bidelman, Gavin M
2016-10-01
Musical training is associated with behavioral and neurophysiological enhancements in auditory processing for both musical and nonmusical sounds (e.g., speech). Yet, whether the benefits of musicianship extend beyond enhancements to auditory-specific skills and impact multisensory (e.g., audiovisual) processing has yet to be fully validated. Here, we investigated multisensory integration of auditory and visual information in musicians and nonmusicians using a double-flash illusion, whereby the presentation of multiple auditory stimuli (beeps) concurrent with a single visual object (flash) induces an illusory perception of multiple flashes. We parametrically varied the onset asynchrony between auditory and visual events (leads and lags of ±300 ms) to quantify participants' "temporal window" of integration, i.e., stimuli in which auditory and visual cues were fused into a single percept. Results show that musically trained individuals were both faster and more accurate at processing concurrent audiovisual cues than their nonmusician peers; nonmusicians had a higher susceptibility for responding to audiovisual illusions and perceived double flashes over an extended range of onset asynchronies compared to trained musicians. Moreover, temporal window estimates indicated that musicians' windows (<100 ms) were ~2-3× shorter than nonmusicians' (~200 ms), suggesting more refined multisensory integration and audiovisual binding. Collectively, findings indicate a more refined binding of auditory and visual cues in musically trained individuals. We conclude that experience-dependent plasticity of intensive musical experience extends beyond simple listening skills, improving multimodal processing and the integration of multiple sensory systems in a domain-general manner.
Neuronal activity in primate auditory cortex during the performance of audiovisual tasks.
Brosch, Michael; Selezneva, Elena; Scheich, Henning
2015-03-01
This study aimed at a deeper understanding of which cognitive and motivational aspects of tasks affect auditory cortical activity. To this end we trained two macaque monkeys to perform two different tasks on the same audiovisual stimulus and to do this with two different sizes of water rewards. The monkeys had to touch a bar after a tone had been turned on together with an LED, and to hold the bar until either the tone (auditory task) or the LED (visual task) was turned off. In 399 multiunits recorded from core fields of auditory cortex, we confirmed that during task engagement neurons responded to auditory and non-auditory stimuli that were task-relevant, such as light and water. We also confirmed that firing rates slowly increased or decreased for several seconds during various phases of the tasks. Responses to non-auditory stimuli and slow firing changes were observed during both the auditory and the visual task, with some differences between them. There was also a weak task-dependent modulation of the responses to auditory stimuli. In contrast to these cognitive aspects, motivational aspects of the tasks were not reflected in the firing, except during delivery of the water reward. In conclusion, the present study supports our previous proposal that there are two response types in the auditory cortex that represent the timing and the type of auditory and non-auditory elements of auditory tasks, as well as the associations between elements. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Stimulus-Dependent Flexibility in Non-Human Auditory Pitch Processing
ERIC Educational Resources Information Center
Bregman, Micah R.; Patel, Aniruddh D.; Gentner, Timothy Q.
2012-01-01
Songbirds and humans share many parallels in vocal learning and auditory sequence processing. However, the two groups differ notably in their abilities to recognize acoustic sequences shifted in absolute pitch (pitch height). Whereas humans maintain accurate recognition of words or melodies over large pitch height changes, songbirds are…
Gavrilescu, M; Rossell, S; Stuart, G W; Shea, T L; Innes-Brown, H; Henshall, K; McKay, C; Sergejew, A A; Copolov, D; Egan, G F
2010-07-01
Previous research has reported auditory processing deficits that are specific to schizophrenia patients with a history of auditory hallucinations (AH). One explanation for these findings is that there are abnormalities in the interhemispheric connectivity of auditory cortex pathways in AH patients; as yet this explanation has not been experimentally investigated. We assessed the interhemispheric connectivity of both primary (A1) and secondary (A2) auditory cortices in n=13 AH patients, n=13 schizophrenia patients without auditory hallucinations (non-AH) and n=16 healthy controls using functional connectivity measures from functional magnetic resonance imaging (fMRI) data. Functional connectivity was estimated from resting state fMRI data using regions of interest defined for each participant based on functional activation maps in response to passive listening to words. Additionally, stimulus-induced responses were regressed out of the stimulus data and the functional connectivity was estimated for the same regions to investigate the reliability of the estimates. AH patients had significantly reduced interhemispheric connectivity in both A1 and A2 when compared with non-AH patients and healthy controls. The latter two groups did not show any differences in functional connectivity. Further, this pattern of findings was similar across the two datasets, indicating the reliability of our estimates. These data have identified a trait deficit specific to AH patients. Since this deficit was characterized within both A1 and A2 it is expected to result in the disruption of multiple auditory functions, for example, the integration of basic auditory information between hemispheres (via A1) and higher-order language processing abilities (via A2).
Missing a trick: Auditory load modulates conscious awareness in audition.
Fairnie, Jake; Moore, Brian C J; Remington, Anna
2016-07-01
In the visual domain there is considerable evidence supporting the Load Theory of Attention and Cognitive Control, which holds that conscious perception of background stimuli depends on the level of perceptual load involved in a primary task. However, literature on the applicability of this theory to the auditory domain is limited and, in many cases, inconsistent. Here we present a novel "auditory search task" that allows systematic investigation of the impact of auditory load on auditory conscious perception. An array of simultaneous, spatially separated sounds was presented to participants. On half the trials, a critical stimulus was presented concurrently with the array. Participants were asked to detect which of 2 possible targets was present in the array (primary task), and whether the critical stimulus was present or absent (secondary task). Increasing the auditory load of the primary task (raising the number of sounds in the array) consistently reduced the ability to detect the critical stimulus. This indicates that, at least in certain situations, load theory applies in the auditory domain. The implications of this finding are discussed both with respect to our understanding of typical audition and for populations with altered auditory processing. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Visual and auditory steady-state responses in attention-deficit/hyperactivity disorder.
Khaleghi, Ali; Zarafshan, Hadi; Mohammadi, Mohammad Reza
2018-05-22
We designed a study to investigate the patterns of the steady-state visual evoked potential (SSVEP) and auditory steady-state response (ASSR) in adolescents with attention-deficit/hyperactivity disorder (ADHD) when performing a motor response inhibition task. Thirty 12- to 18-year-old adolescents with ADHD and 30 healthy control adolescents underwent an electroencephalogram (EEG) examination during steady-state stimuli when performing a stop-signal task. Then, we calculated the amplitude and phase of the steady-state responses in both visual and auditory modalities. Results showed that adolescents with ADHD had a significantly poorer performance in the stop-signal task during both visual and auditory stimuli. The SSVEP amplitude of the ADHD group was larger than that of the healthy control group in most regions of the brain, whereas the ASSR amplitude of the ADHD group was smaller than that of the healthy control group in some brain regions (e.g., right hemisphere). In conclusion, poorer task performance (especially inattention) and neurophysiological results in ADHD demonstrate a possible impairment in the interconnection of the association cortices in the parietal and temporal lobes and the prefrontal cortex. Also, the motor control problems in ADHD may arise from neural deficits in the frontoparietal and occipitoparietal systems and other brain structures such as cerebellum.
Phencyclidine Disrupts the Auditory Steady State Response in Rats
Leishman, Emma; O’Donnell, Brian F.; Millward, James B.; Vohs, Jenifer L.; Rass, Olga; Krishnan, Giri P.; Bolbecker, Amanda R.; Morzorati, Sandra L.
2015-01-01
The Auditory Steady-State Response (ASSR) in the electroencephalogram (EEG) is usually reduced in schizophrenia (SZ), particularly to 40 Hz stimulation. The gamma frequency ASSR deficit has been attributed to N-methyl-D-aspartate receptor (NMDAR) hypofunction. We tested whether the NMDAR antagonist, phencyclidine (PCP), produced similar ASSR deficits in rats. EEG was recorded from awake rats via intracranial electrodes overlaying the auditory cortex and at the vertex of the skull. ASSRs to click trains were recorded at 10, 20, 30, 40, 50, and 55 Hz and measured by ASSR Mean Power (MP) and Phase Locking Factor (PLF). In Experiment 1, the effect of different subcutaneous doses of PCP (1.0, 2.5 and 4.0 mg/kg) on the ASSR in 12 rats was assessed. In Experiment 2, ASSRs were compared in PCP-treated rats and control rats at baseline, after acute injection (5 mg/kg), following two weeks of subchronic, continuous administration (5 mg/kg/day), and one week after drug cessation. Acute administration of PCP increased PLF and MP at frequencies of stimulation below 50 Hz, and decreased responses at higher frequencies at the auditory cortex site. Acute administration had a less pronounced effect at the vertex site, with a reduction of either PLF or MP observed at frequencies above 20 Hz. Acute effects increased in magnitude with higher doses of PCP. Consistent effects were not observed after subchronic PCP administration. These data indicate that acute administration of PCP, an NMDAR antagonist, produces an increase in ASSR synchrony and power at low frequencies of stimulation and a reduction of high-frequency (>40 Hz) ASSR activity in rats. Subchronic, continuous administration of PCP, on the other hand, has little impact on ASSRs. Thus, while ASSRs are highly sensitive to NMDAR antagonists, their translational utility as a cross-species biomarker for NMDAR hypofunction in SZ and other disorders may be dependent on dose and schedule. PMID:26258486
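Mean Power and Phase Locking Factor are standard trial-based ASSR measures: PLF is the inter-trial phase coherence of the FFT coefficient at the stimulation frequency, bounded between 0 and 1. A minimal numpy sketch (the simulated 40 Hz trials and all parameter values are illustrative assumptions, not the study's recordings):

```python
import numpy as np

def assr_measures(trials, fs, freq):
    """Phase Locking Factor (PLF, in [0, 1]) and Mean Power at a
    stimulation frequency. trials has shape (n_trials, n_samples)."""
    n = trials.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = np.argmin(np.abs(freqs - freq))          # nearest FFT bin
    spectra = np.fft.rfft(trials, axis=1)[:, k]  # complex coefficient per trial
    plf = float(np.abs(np.mean(spectra / np.abs(spectra))))
    mean_power = float(np.mean(np.abs(spectra) ** 2))
    return plf, mean_power

# Synthetic 40 Hz responses with small phase jitter -> PLF close to 1
fs, f0 = 1000, 40.0
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(0)
trials = np.stack([np.sin(2 * np.pi * f0 * t + rng.normal(0.0, 0.2))
                   + 0.5 * rng.standard_normal(t.size)
                   for _ in range(50)])
plf, power = assr_measures(trials, fs, f0)
```

Larger phase jitter across trials lowers PLF toward 0 while Mean Power can stay high, which is why the two measures are reported separately.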
Modality-dependent effect of motion information in sensory-motor synchronised tapping.
Ono, Kentaro
2018-05-14
Synchronised action is important for everyday life. Generally, the auditory domain is more sensitive for coding temporal information, and previous studies have shown that auditory-motor synchronisation is much more precise than visuo-motor synchronisation. Interestingly, adding motion information improves synchronisation with visual stimuli and the advantage of the auditory modality seems to diminish. However, whether adding motion information also improves auditory-motor synchronisation remains unknown. This study compared tapping accuracy with a stationary or moving stimulus in both auditory and visual modalities. Participants were instructed to tap in synchrony with the onset of a sound or flash in the stationary condition, while these stimuli were perceived as moving from side to side in the motion condition. The results demonstrated that synchronised tapping with a moving visual stimulus was significantly more accurate than tapping with a stationary visual stimulus, as previous studies have shown. However, tapping with a moving auditory stimulus was significantly poorer than tapping with a stationary auditory stimulus. Although motion information impaired audio-motor synchronisation, an advantage of the auditory modality over the visual modality remained. These findings likely result from the higher temporal resolution of the auditory domain, which may in turn reflect physiological and structural differences between the auditory and visual pathways in the brain. Copyright © 2018 Elsevier B.V. All rights reserved.
Altoè, Alessandro; Pulkki, Ville; Verhulst, Sarah
2018-07-01
The basolateral membrane of the mammalian inner hair cell (IHC) expresses large voltage- and Ca2+-gated outward K+ currents. To quantify how the voltage-dependent activation of the K+ channels affects the functionality of the auditory nerve innervating the IHC, this study adopts a model of mechanical-to-neural transduction in which the basolateral K+ conductances of the IHC can be made voltage-dependent or not. The model shows that the voltage-dependent activation of the K+ channels (i) enhances the phase-locking properties of the auditory fiber (AF) responses; (ii) enables the auditory nerve to encode a large dynamic range of sound levels; (iii) enables the AF responses to synchronize precisely with the envelope of amplitude-modulated stimuli; and (iv) is responsible for the steep offset responses of the AFs. These results suggest that the basolateral K+ channels play a major role in determining the well-known response properties of the AFs and challenge the classical view that describes the IHC membrane as an electrical low-pass filter. In contrast to previous models of the IHC-AF complex, this study ascribes many of the AF response properties to fairly basic mechanisms in the IHC membrane rather than to complex mechanisms in the synapse. Copyright © 2018 Elsevier B.V. All rights reserved.
Rapid recalibration of speech perception after experiencing the McGurk illusion
Pérez-Bellido, Alexis; de Lange, Floris P.
2018-01-01
The human brain can quickly adapt to changes in the environment. One example is phonetic recalibration: a speech sound is interpreted differently depending on the accompanying visual speech, and this interpretation persists in the absence of visual information. Here, we examined the mechanisms of phonetic recalibration. Participants categorized the auditory syllables /aba/ and /ada/, which were sometimes preceded by the so-called McGurk stimuli (in which an /aba/ sound, due to visual /aga/ input, is often perceived as ‘ada’). We found that a single trial of exposure to the McGurk illusion was sufficient to induce a recalibration effect, i.e. an auditory /aba/ stimulus was subsequently more often perceived as ‘ada’. Furthermore, phonetic recalibration took place only when auditory and visual inputs were integrated to ‘ada’ (McGurk illusion). Moreover, this recalibration depended on the sensory similarity between the preceding and current auditory stimulus. Finally, a signal detection theory analysis showed that McGurk-induced phonetic recalibration resulted in both a criterion shift towards /ada/ and a reduced sensitivity to distinguish between /aba/ and /ada/ sounds. The current study shows that phonetic recalibration is dependent on the perceptual integration of audiovisual information and leads to a perceptual shift in phoneme categorization. PMID:29657743
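The criterion shift and sensitivity reduction reported here are the two standard signal detection theory quantities, computable from hit and false-alarm counts. A minimal sketch (the log-linear correction and the example counts are illustrative assumptions, not the study's analysis code or data):

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') and criterion (c) from trial counts, using a
    log-linear correction so that extreme rates stay finite."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = z(hit_rate) - z(fa_rate)           # sensitivity
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias
    return d_prime, criterion

# Hypothetical counts: 40/50 hits and 10/50 false alarms
d_prime, criterion = sdt_measures(40, 10, 10, 40)
```

A recalibration-induced bias toward ‘ada’ would show up as a shift in `criterion` between conditions, with any perceptual blurring of the /aba/-/ada/ contrast appearing as a drop in `d_prime`.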
Hemispheric differences in processing of vocalizations depend on early experience.
Phan, Mimi L; Vicario, David S
2010-02-02
An intriguing phenomenon in the neurobiology of language is lateralization: the dominant role of one hemisphere in a particular function. Lateralization is not exclusive to language because lateral differences are observed in other sensory modalities, behaviors, and animal species. Despite much scientific attention, the function of lateralization, its possible dependence on experience, and the functional implications of such dependence have yet to be clearly determined. We have explored the role of early experience in the development of lateralized sensory processing in the brain, using the songbird model of vocal learning. By controlling exposure to natural vocalizations (through isolation, song tutoring, and muting), we manipulated the postnatal auditory environment of developing zebra finches, and then assessed effects on hemispheric specialization for communication sounds in adulthood. Using bilateral multielectrode recordings from a forebrain auditory area known to selectively process species-specific vocalizations, we found that auditory responses to species-typical songs and long calls, in both male and female birds, were stronger in the right hemisphere than in the left, and that right-side responses adapted more rapidly to stimulus repetition. We describe specific instances, particularly in males, where these lateral differences show an influence of auditory experience with song and/or the bird's own voice during development.
Methodological challenges and solutions in auditory functional magnetic resonance imaging
Peelle, Jonathan E.
2014-01-01
Functional magnetic resonance imaging (fMRI) studies involve substantial acoustic noise. This review covers the difficulties posed by such noise for auditory neuroscience, as well as a number of possible solutions that have emerged. Acoustic noise can affect the processing of auditory stimuli by making them inaudible or unintelligible, and can result in reduced sensitivity to auditory activation in auditory cortex. Equally importantly, acoustic noise may also lead to increased listening effort, meaning that even when auditory stimuli are perceived, neural processing may differ from when the same stimuli are presented in quiet. These and other challenges have motivated a number of approaches for collecting auditory fMRI data. Although using a continuous echoplanar imaging (EPI) sequence provides high quality imaging data, these data may also be contaminated by background acoustic noise. Traditional sparse imaging has the advantage of avoiding acoustic noise during stimulus presentation, but at a cost of reduced temporal resolution. Recently, three classes of techniques have been developed to circumvent these limitations. The first is Interleaved Silent Steady State (ISSS) imaging, a variation of sparse imaging that involves collecting multiple volumes following a silent period while maintaining steady-state longitudinal magnetization. The second involves active noise control to limit the impact of acoustic scanner noise. Finally, novel MRI sequences that reduce the amount of acoustic noise produced during fMRI make the use of continuous scanning a more practical option. Together these advances provide unprecedented opportunities for researchers to collect high-quality data of hemodynamic responses to auditory stimuli using fMRI. PMID:25191218
Neural dynamics underlying attentional orienting to auditory representations in short-term memory.
Backer, Kristina C; Binns, Malcolm A; Alain, Claude
2015-01-21
Sounds are ephemeral. Thus, coherent auditory perception depends on "hearing" back in time: retrospectively attending that which was lost externally but preserved in short-term memory (STM). Current theories of auditory attention assume that sound features are integrated into a perceptual object, that multiple objects can coexist in STM, and that attention can be deployed to an object in STM. Recording electroencephalography from humans, we tested these assumptions, elucidating feature-general and feature-specific neural correlates of auditory attention to STM. Alpha/beta oscillations and frontal and posterior event-related potentials indexed feature-general top-down attentional control to one of several coexisting auditory representations in STM. In particular, task performance during attentional orienting was correlated with alpha/low-beta desynchronization (i.e., power suppression). However, attention to one feature could occur without simultaneous processing of the second feature of the representation. Therefore, auditory attention to memory relies on both feature-specific and feature-general neural dynamics. Copyright © 2015 the authors.
Neurotoxicity of trimethyltin in rat cochlear organotypic cultures
Yu, Jintao; Ding, Dalian; Sun, Hong; Salvi, Richard; Roth, Jerome A.
2015-01-01
Trimethyltin (TMT), which has a variety of industrial and agricultural applications, is a neurotoxin known to affect the auditory system as well as the central nervous system (CNS) of humans and experimental animals. However, the mechanisms underlying TMT-induced auditory dysfunction are poorly understood. To gain insights into the neurotoxic effect of TMT on the peripheral auditory system, we treated cochlear organotypic cultures with concentrations of TMT ranging from 5 to 100 μM for 24 h. Interestingly, TMT preferentially damaged auditory nerve fibers and spiral ganglion neurons in a dose-dependent manner, but had no noticeable effects on the sensory hair cells at the doses employed. TMT-induced damage to auditory neurons was associated with significant soma shrinkage, nuclear condensation and activation of caspase-3, biomarkers indicative of apoptotic cell death. Our findings show that TMT is exclusively neurotoxic in rat cochlear organotypic cultures and that TMT-induced auditory neuron death occurs through a caspase-mediated apoptotic pathway. PMID:25957118
Prins, John M; Brooks, Diane M; Thompson, Charles M; Lurie, Diana I
2010-12-01
Lead (Pb) exposure is a risk factor for neurological dysfunction. How Pb produces these behavioral deficits is unknown, but Pb exposure during development is associated with auditory temporal processing deficits in both humans and animals. Pb disrupts cellular energy metabolism, and efficient energy production is crucial for auditory neurons to maintain high rates of synaptic activity. The voltage-dependent anion channel (VDAC) is involved in the regulation of mitochondrial physiology and is a critical component in controlling mitochondrial energy production. We have previously demonstrated that VDAC is an in vitro target for Pb; therefore, VDAC may represent a potential target for Pb in the auditory system. In order to determine whether Pb alters VDAC expression in central auditory neurons, CBA/CaJ mice (n=3-5/group) were exposed to 0.01 mM or 0.1 mM Pb acetate during development via drinking water. At P21, immunohistochemistry revealed a significant decrease in VDAC in neurons of the Medial Nucleus of the Trapezoid Body. Western blot analysis confirmed that Pb results in a significant decrease in VDAC. Decreases in VDAC expression could lead to an upregulation of other cellular energy producing systems as a compensatory mechanism, and a Pb-induced increase in brain-type creatine kinase is observed in auditory regions of the brainstem. In addition, comparative proteomic analysis shows that several proteins of the glycolytic pathway, the phosphocreatine circuit, and oxidative phosphorylation are also upregulated in response to developmental Pb exposure. Thus, Pb-induced decreases in VDAC could have a significant effect on the function of auditory neurons. Copyright © 2010 Elsevier Inc. All rights reserved.
2001-05-01
displays were discussed with the test pilots that we interviewed. The pilots had mixed opinions on tactile and auditory displays. Positive comments...were noted concerning three-dimensional auditory displays, although some stated that the pilot could easily ignore the aural tone. Others complained...recognition technology was not reliable enough and worried about problems with surrounding auditory signals from anti-G straining maneuvers, oxygen
Reduced variability of auditory alpha activity in chronic tinnitus.
Schlee, Winfried; Schecklmann, Martin; Lehner, Astrid; Kreuzer, Peter M; Vielsmeier, Veronika; Poeppl, Timm B; Langguth, Berthold
2014-01-01
Subjective tinnitus is characterized by the conscious perception of a phantom sound which is usually more prominent under silence. Resting state recordings without any auditory stimulation demonstrated a decrease of cortical alpha activity in temporal areas of subjects with an ongoing tinnitus perception. This is often interpreted as an indicator for enhanced excitability of the auditory cortex in tinnitus. In this study we want to further investigate this effect by analysing the moment-to-moment variability of the alpha activity in temporal areas. Magnetoencephalographic resting state recordings of 21 tinnitus subjects and 21 healthy controls were analysed with respect to the mean and the variability of spectral power in the alpha frequency band over temporal areas. A significant decrease of auditory alpha activity was detected for the low alpha frequency band (8-10 Hz) but not for the upper alpha band (10-12 Hz). Furthermore, we found a significant decrease of alpha variability for the tinnitus group. This result was significant for the lower alpha frequency range and not significant for the upper alpha frequencies. Tinnitus subjects with a longer history of tinnitus showed less variability of their auditory alpha activity which might be an indicator for reduced adaptability of the auditory cortex in chronic tinnitus.
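Moment-to-moment variability of band power, as analysed here, amounts to computing spectral power in the alpha band separately for each short epoch and then measuring the dispersion of those values across epochs. A minimal sketch of one plausible version of that computation (the synthetic 9 Hz epochs, parameter values, and the coefficient-of-variation summary are illustrative assumptions, not the study's MEG pipeline):

```python
import numpy as np

def band_power_per_epoch(epochs, fs, band=(8.0, 10.0)):
    """Mean spectral power in a frequency band, one value per epoch.
    epochs has shape (n_epochs, n_samples); band defaults to low alpha."""
    n = epochs.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(epochs, axis=1)) ** 2 / n
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[:, mask].mean(axis=1)

# Synthetic epochs with a 9 Hz rhythm of fluctuating amplitude; the
# moment-to-moment variability is the dispersion of power across epochs
fs = 250
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(1)
epochs = np.stack([rng.uniform(0.5, 1.5) * np.sin(2 * np.pi * 9.0 * t)
                   + 0.3 * rng.standard_normal(t.size)
                   for _ in range(30)])
power = band_power_per_epoch(epochs, fs)
variability = float(power.std() / power.mean())  # coefficient of variation
```

Reduced alpha variability in a tinnitus group would correspond to a smaller dispersion of the per-epoch `power` values over temporal sensors.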
Stuttering Inhibition via Altered Auditory Feedback during Scripted Telephone Conversations
ERIC Educational Resources Information Center
Hudock, Daniel; Kalinowski, Joseph
2014-01-01
Background: Overt stuttering is inhibited by approximately 80% when people who stutter read aloud as they hear an altered form of their speech feedback to them. However, levels of stuttering inhibition vary from 60% to 100% depending on speaking situation and signal presentation. For example, binaural presentations of delayed auditory feedback…
ERIC Educational Resources Information Center
Savage, Melissa N.
2014-01-01
Some students with disabilities develop a dependence on others for support and can benefit from self-management strategies to increase independence. Self-operated auditory prompting systems are an effective self-management intervention used to increase independence for students with disabilities while continuing to provide the support that they…
The Role of Auditory Feedback in the Encoding of Paralinguistic Responses.
ERIC Educational Resources Information Center
Plazewski, Joseph G.; Allen, Vernon L.
Twenty college students participated in an examination of the role of auditory feedback in the encoding of paralinguistic affect by adults. A dependent measure indicating the accuracy of paralinguistic communication of affect was obtained by comparing the level of affect that encoders intended to produce with ratings of vocal intonations from…
Infants' Auditory Enumeration: Evidence for Analog Magnitudes in the Small Number Range
ERIC Educational Resources Information Center
vanMarle, Kristy; Wynn, Karen
2009-01-01
Vigorous debate surrounds the issue of whether infants use different representational mechanisms to discriminate small and large numbers. We report evidence for ratio-dependent performance in infants' discrimination of small numbers of auditory events, suggesting that infants can use analog magnitudes to represent small values, at least in the…
The effects of divided attention on auditory priming.
Mulligan, Neil W; Duke, Marquinn; Cooper, Angela W
2007-09-01
Traditional theorizing stresses the importance of attentional state during encoding for later memory, based primarily on research with explicit memory. Recent research has begun to investigate the role of attention in implicit memory but has focused almost exclusively on priming in the visual modality. The present experiments examined the effect of divided attention on auditory implicit memory, using auditory perceptual identification, word-stem completion and word-fragment completion. Participants heard study words under full attention conditions or while simultaneously carrying out a distractor task (the divided attention condition). In Experiment 1, a distractor task with low response frequency failed to disrupt later auditory priming (but diminished explicit memory as assessed with auditory recognition). In Experiment 2, a distractor task with greater response frequency disrupted priming on all three of the auditory priming tasks as well as the explicit test. These results imply that although auditory priming is less reliant on attention than explicit memory, it is still greatly affected by at least some divided-attention manipulations. These results are consistent with research using visual priming tasks and have relevance for hypotheses regarding attention and auditory priming.
Porges, Stephen W; Macellaio, Matthew; Stanfill, Shannon D; McCue, Kimberly; Lewis, Gregory F; Harden, Emily R; Handelman, Mika; Denver, John; Bazhenova, Olga V; Heilman, Keri J
2013-06-01
The current study evaluated processes underlying two common symptoms (i.e., state regulation problems and deficits in auditory processing) associated with a diagnosis of autism spectrum disorders. Although these symptoms have been treated in the literature as unrelated, when informed by the Polyvagal Theory, these symptoms may be viewed as the predictable consequences of depressed neural regulation of an integrated social engagement system, in which there is down regulation of neural influences to the heart (i.e., via the vagus) and to the middle ear muscles (i.e., via the facial and trigeminal cranial nerves). Respiratory sinus arrhythmia (RSA) and heart period were monitored to evaluate state regulation during a baseline and two auditory processing tasks (i.e., the SCAN tests for Filtered Words and Competing Words), which were used to evaluate auditory processing performance. Children with a diagnosis of autism spectrum disorders (ASD) were contrasted with age-matched typically developing children. The current study identified three features that distinguished the ASD group from a group of typically developing children: 1) baseline RSA, 2) direction of RSA reactivity, and 3) auditory processing performance. In the ASD group, the pattern of change in RSA during the attention demanding SCAN tests moderated the relation between performance on the Competing Words test and IQ. In addition, in a subset of ASD participants, auditory processing performance improved and RSA increased following an intervention designed to improve auditory processing. Copyright © 2012 Elsevier B.V. All rights reserved.
Long Term Memory for Noise: Evidence of Robust Encoding of Very Short Temporal Acoustic Patterns.
Viswanathan, Jayalakshmi; Rémy, Florence; Bacon-Macé, Nadège; Thorpe, Simon J
2016-01-01
Recent research has demonstrated that humans are able to implicitly encode and retain repeating patterns in meaningless auditory noise. Our study aimed at testing the robustness of long-term implicit recognition memory for these learned patterns. Participants performed a cyclic/non-cyclic discrimination task, during which they were presented with either 1-s cyclic noises (CNs) (the two halves of the noise were identical) or 1-s plain random noises (Ns). Among CNs and Ns presented once, target CNs were implicitly presented multiple times within a block, and implicit recognition of these target CNs was tested 4 weeks later using a similar cyclic/non-cyclic discrimination task. Furthermore, robustness of implicit recognition memory was tested by presenting participants with looped (shifting the origin) and scrambled (chopping sounds into 10- and 20-ms bits before shuffling) versions of the target CNs. We found that participants had robust implicit recognition memory for learned noise patterns after 4 weeks, right from the first presentation. Additionally, this memory was remarkably resistant to acoustic transformations, such as looping and scrambling of the sounds. Finally, implicit recognition of sounds was dependent on participant's discrimination performance during learning. Our findings suggest that meaningless temporal features as short as 10 ms can be implicitly stored in long-term auditory memory. Moreover, successful encoding and storage of such fine features may vary between participants, possibly depending on individual attention and auditory discrimination abilities. 
Significance Statement: Meaningless auditory patterns could be implicitly encoded and stored in long-term memory. Acoustic transformations of learned meaningless patterns could be implicitly recognized after 4 weeks. Implicit long-term memories can be formed for meaningless auditory features as short as 10 ms. Successful encoding and long-term implicit recognition of meaningless patterns may strongly depend on individual attention and auditory discrimination abilities.
Temporal variability of spectro-temporal receptive fields in the anesthetized auditory cortex.
Meyer, Arne F; Diepenbrock, Jan-Philipp; Ohl, Frank W; Anemüller, Jörn
2014-01-01
Temporal variability of neuronal response characteristics during sensory stimulation is a ubiquitous phenomenon that may reflect processes such as stimulus-driven adaptation, top-down modulation or spontaneous fluctuations. It poses a challenge to functional characterization methods such as the receptive field, since these often assume stationarity. We propose a novel method for estimation of sensory neurons' receptive fields that extends the classic static linear receptive field model to the time-varying case. Here, the long-term estimate of the static receptive field serves as the mean of a probabilistic prior distribution from which the short-term, temporally localized receptive field may deviate stochastically with time-varying standard deviation. The corresponding generalized linear model permits robust characterization of temporal variability in receptive field structure, also for highly non-Gaussian stimulus ensembles. We computed and analyzed short-term auditory spectro-temporal receptive field (STRF) estimates with a characteristic temporal resolution of 5-30 s, based on model simulations and responses from a total of 60 single-unit recordings in anesthetized Mongolian gerbil auditory midbrain and cortex. Stimulation was performed with short (100 ms) overlapping frequency-modulated tones. Results demonstrate identification of time-varying STRFs, with obtained predictive model likelihoods exceeding those from baseline static STRF estimation. Quantitative characterization reveals a higher degree of STRF variability in auditory cortex than in midbrain. Cluster analysis indicates that significant deviations from the long-term static STRF are brief, but reliably estimated. We hypothesize that the observed variability more likely reflects spontaneous or state-dependent internal fluctuations that interact with stimulus-induced processing, rather than experimental or stimulus design.
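The idea of a long-term static estimate acting as a prior for short-term, windowed estimates can be illustrated, in a deliberately simplified Gaussian form, as ridge-style shrinkage of a windowed least-squares fit toward the static filter. This is a schematic sketch under that simplification, not the authors' generalized linear model; all names and data are invented for illustration:

```python
import numpy as np

def local_strf(X, y, w_static, lam):
    """Windowed receptive-field estimate shrunk toward the long-term
    static estimate: argmin_w ||y - X w||^2 + lam * ||w - w_static||^2.
    Closed form: (X'X + lam*I)^(-1) (X'y + lam*w_static)."""
    d = X.shape[1]
    A = X.T @ X + lam * np.eye(d)
    b = X.T @ y + lam * w_static
    return np.linalg.solve(A, b)

# With lam -> 0 the window's data speaks for itself; with large lam the
# local estimate collapses onto the static one (synthetic data)
rng = np.random.default_rng(3)
X = rng.standard_normal((200, 5))   # stimulus design matrix for one window
w_true = np.arange(1.0, 6.0)        # ground-truth filter
y = X @ w_true                      # noiseless response, for clarity
w_free = local_strf(X, y, np.zeros(5), 0.0)
w_shrunk = local_strf(X, y, w_true + 1.0, 1e12)
```

In this toy form, the shrinkage weight `lam` plays the role of the inverse prior variance: a small prior standard deviation pins the short-term STRF to the long-term one, while a large one lets it track fast fluctuations.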
Forebrain pathway for auditory space processing in the barn owl.
Cohen, Y E; Miller, G L; Knudsen, E I
1998-02-01
The forebrain plays an important role in many aspects of sound localization behavior. Yet, the forebrain pathway that processes auditory spatial information is not known for any species. Using standard anatomic labeling techniques, we used a "top-down" approach to trace the flow of auditory spatial information from an output area of the forebrain sound localization pathway (the auditory archistriatum, AAr), back through the forebrain, and into the auditory midbrain. Previous work has demonstrated that AAr units are specialized for auditory space processing. The results presented here show that the AAr receives afferent input from Field L both directly and indirectly via the caudolateral neostriatum. Afferent input to Field L originates mainly in the auditory thalamus, nucleus ovoidalis, which, in turn, receives input from the central nucleus of the inferior colliculus. In addition, we confirmed previously reported projections of the AAr to the basal ganglia, the external nucleus of the inferior colliculus (ICX), the deep layers of the optic tectum, and various brain stem nuclei. A series of inactivation experiments demonstrated that the sharp tuning of AAr sites for binaural spatial cues depends on Field L input but not on input from the auditory space map in the midbrain ICX: pharmacological inactivation of Field L completely eliminated auditory responses in the AAr, whereas bilateral ablation of the midbrain ICX had no appreciable effect on AAr responses. We conclude, therefore, that the forebrain sound localization pathway can process auditory spatial information independently of the midbrain localization pathway.
Grahn, Jessica A.; Henry, Molly J.; McAuley, J. Devin
2011-01-01
How we measure time and integrate temporal cues from different sensory modalities are fundamental questions in neuroscience. Sensitivity to a “beat” (such as that routinely perceived in music) differs substantially between auditory and visual modalities. Here we examined beat sensitivity in each modality, and examined cross-modal influences, using functional magnetic resonance imaging (fMRI) to characterize brain activity during perception of auditory and visual rhythms. In separate fMRI sessions, participants listened to auditory sequences or watched visual sequences. The order of auditory and visual sequence presentation was counterbalanced so that cross-modal order effects could be investigated. Participants judged whether sequences were speeding up or slowing down, and the pattern of tempo judgments was used to derive a measure of sensitivity to an implied beat. As expected, participants were less sensitive to an implied beat in visual sequences than in auditory sequences. However, visual sequences produced a stronger sense of beat when preceded by auditory sequences with identical temporal structure. Moreover, increases in brain activity were observed in the bilateral putamen for visual sequences preceded by auditory sequences when compared to visual sequences without prior auditory exposure. No such order-dependent differences (behavioral or neural) were found for the auditory sequences. The results provide further evidence for the role of the basal ganglia in internal generation of the beat and suggest that an internal auditory rhythm representation may be activated during visual rhythm perception. PMID:20858544
The spectrotemporal filter mechanism of auditory selective attention
Lakatos, Peter; Musacchia, Gabriella; O’Connell, Monica N.; Falchier, Arnaud Y.; Javitt, Daniel C.; Schroeder, Charles E.
2013-01-01
While we have convincing evidence that attention to auditory stimuli modulates neuronal responses at or before the level of primary auditory cortex (A1), the underlying physiological mechanisms are unknown. We found that attending to rhythmic auditory streams resulted in the entrainment of ongoing oscillatory activity reflecting rhythmic excitability fluctuations in A1. Strikingly, while the rhythm of the entrained oscillations in A1 neuronal ensembles reflected the temporal structure of the attended stream, the phase depended on the attended frequency content. Counter-phase entrainment across differently tuned A1 regions resulted in both the amplification and sharpening of responses at attended time points, in essence acting as a spectrotemporal filter mechanism. Our data suggest that selective attention generates a dynamically evolving model of attended auditory stimulus streams in the form of modulatory subthreshold oscillations across tonotopically organized neuronal ensembles in A1 that enhances the representation of attended stimuli. PMID:23439126
Retrosplenial cortex is required for the retrieval of remote memory for auditory cues.
Todd, Travis P; Mehlman, Max L; Keene, Christopher S; DeAngeli, Nicole E; Bucci, David J
2016-06-01
The retrosplenial cortex (RSC) has a well-established role in contextual and spatial learning and memory, consistent with its known connectivity with visuo-spatial association areas. In contrast, RSC appears to have little involvement with delay fear conditioning to an auditory cue. However, all previous studies have examined the contribution of the RSC to recently acquired auditory fear memories. Since neocortical regions have been implicated in the permanent storage of remote memories, we examined the contribution of the RSC to remotely acquired auditory fear memories. In Experiment 1, retrieval of a remotely acquired auditory fear memory was impaired when permanent lesions (either electrolytic or neurotoxic) were made several weeks after initial conditioning. In Experiment 2, using a chemogenetic approach, we observed impairments in the retrieval of remote memory for an auditory cue when the RSC was temporarily inactivated during testing. In Experiment 3, after injection of a retrograde tracer into the RSC, we observed labeled cells in primary and secondary auditory cortices, as well as the claustrum, indicating that the RSC receives direct projections from auditory regions. Overall our results indicate the RSC has a critical role in the retrieval of remotely acquired auditory fear memories, and we suggest this is related to the quality of the memory, with less precise memories being RSC dependent. © 2016 Todd et al.; Published by Cold Spring Harbor Laboratory Press.
Baltus, Alina; Herrmann, Christoph Siegfried
2016-06-01
Oscillatory EEG activity in the human brain with frequencies in the gamma range (approx. 30-80 Hz) is known to be relevant for a large number of cognitive processes. Interestingly, each subject reveals an individual frequency of the auditory gamma-band response (GBR) that coincides with the peak in the auditory steady state response (ASSR). A common resonance frequency of auditory cortex seems to underlie both the individual frequency of the GBR and the peak of the ASSR. This review sheds light on the functional role of oscillatory gamma activity for auditory processing. For successful processing, the auditory system has to track changes in auditory input over time and store information about past events in memory, which allows the construction of auditory objects. Recent findings support the idea of gamma oscillations being involved in the partitioning of auditory input into discrete samples to facilitate higher order processing. We review experiments that seem to suggest that inter-individual differences in the resonance frequency are behaviorally relevant for gap detection and speech processing. A possible application of these resonance frequencies for brain-computer interfaces is illustrated with regard to optimized individual presentation rates for auditory input to correspond with endogenous oscillatory activity. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.
LIF potentiates the NT-3-mediated survival of spiral ganglia neurones in vitro.
Marzella, P L; Clark, G M; Shepherd, R K; Bartlett, P F; Kilpatrick, T J
1997-05-06
The survival of auditory neurones depends on the continued supply of trophic factors. Early postnatal spiral ganglion cells (SGC) in a dissociated cell culture were used as a model of auditory innervation to test the trophic factors leukaemia inhibitory factor (LIF) and neurotrophin-3 (NT-3) for their ability, individually or in combination, to promote neuronal survival. The findings suggest that LIF supports neuronal survival in a concentration-dependent manner. Moreover, LIF potentiated NT-3-mediated spiral ganglion neuronal survival in a synergistic fashion.
Integration of auditory and vibrotactile stimuli: Effects of frequency
Wilson, E. Courtenay; Reed, Charlotte M.; Braida, Louis D.
2010-01-01
Perceptual integration of vibrotactile and auditory sinusoidal tone pulses was studied in detection experiments as a function of stimulation frequency. Vibrotactile stimuli were delivered through a single channel vibrator to the left middle fingertip. Auditory stimuli were presented diotically through headphones in a background of 50 dB sound pressure level broadband noise. Detection performance for combined auditory-tactile presentations was measured using stimulus levels that yielded 63% to 77% correct unimodal performance. In Experiment 1, the vibrotactile stimulus was 250 Hz and the auditory stimulus varied between 125 and 2000 Hz. In Experiment 2, the auditory stimulus was 250 Hz and the tactile stimulus varied between 50 and 400 Hz. In Experiment 3, the auditory and tactile stimuli were always equal in frequency and ranged from 50 to 400 Hz. The highest rates of detection for the combined-modality stimulus were obtained when stimulating frequencies in the two modalities were equal or closely spaced (and within the Pacinian range). Combined-modality detection for closely spaced frequencies was generally consistent with an algebraic sum model of perceptual integration; wider-frequency spacings were generally better fit by a Pythagorean sum model. Thus, perceptual integration of auditory and tactile stimuli at near-threshold levels appears to depend both on absolute frequency and relative frequency of stimulation within each modality. PMID:21117754
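The two integration models contrasted in this abstract have simple closed forms: an algebraic-sum model predicts that combined sensitivity is the linear sum of the unimodal sensitivities, while a Pythagorean-sum model predicts the root of the summed squares, as expected for independent channels. A minimal sketch of the two predictions (the d' values below are hypothetical illustrations, not data from the study):

```python
import math

def algebraic_sum(d_aud: float, d_tac: float) -> float:
    """Algebraic-sum prediction: unimodal sensitivities add linearly."""
    return d_aud + d_tac

def pythagorean_sum(d_aud: float, d_tac: float) -> float:
    """Pythagorean-sum prediction: combination of independent channels."""
    return math.sqrt(d_aud ** 2 + d_tac ** 2)

# Hypothetical near-threshold unimodal sensitivities
d_aud, d_tac = 1.0, 1.0
print(algebraic_sum(d_aud, d_tac))    # 2.0
print(pythagorean_sum(d_aud, d_tac))  # ~1.414
```

On this toy example the algebraic-sum prediction exceeds the Pythagorean-sum prediction, which is why the two models are distinguishable from combined-modality detection rates.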
Evaluation of high-resolution MRI for preoperative screening for cochlear implantation
NASA Astrophysics Data System (ADS)
Madzivire, Mambidzeni; Camp, Jon J.; Lane, John; Witte, Robert J.; Robb, Richard A.
2002-05-01
The success of a cochlear implant depends on a functioning auditory nerve. An accurate noninvasive method for screening cochlear implant patients to help determine the viability of the auditory nerve would allow physicians to better predict the success of the operation. In this study we measured the size of the auditory nerve relative to the size of the juxtaposed facial nerve and correlated these measurements with audiologic test results. The study involved 15 patients and three normal volunteers. Noninvasive high-resolution bilateral MRI images were acquired from both 1.5T and 3T scanners. The images were reformatted to obtain an anatomically referenced oblique plane perpendicular to the auditory nerve. The cross-sectional areas of the auditory and facial nerves were determined in this plane. Assessment of the data is encouraging. The ratios of auditory to facial nerve size in the control subjects are close to the expected value of 1.0. Patient data ratios range from 0.73 to 1.3, with values significantly less than 1.0 suggesting auditory nerve atrophy. The acoustic nerve area correlated with audiologic test findings, particularly (R² = 0.68) with the count of words understood from a list of 100 words. These preliminary analyses suggest that a size threshold may be determined to differentiate functional from nonfunctional auditory nerves.
Happel, Max F. K.; Ohl, Frank W.
2017-01-01
Robust perception of auditory objects over a large range of sound intensities is a fundamental feature of the auditory system. However, firing characteristics of single neurons across the entire auditory system, like the frequency tuning, can change significantly with stimulus intensity. Physiological correlates of level-constancy of auditory representations hence should be manifested on the level of larger neuronal assemblies or population patterns. In this study we have investigated how information of frequency and sound level is integrated on the circuit-level in the primary auditory cortex (AI) of the Mongolian gerbil. We used a combination of pharmacological silencing of corticocortically relayed activity and laminar current source density (CSD) analysis. Our data demonstrate that with increasing stimulus intensities progressively lower frequencies lead to the maximal impulse response within cortical input layers at a given cortical site inherited from thalamocortical synaptic inputs. We further identified a temporally precise intercolumnar synaptic convergence of early thalamocortical and horizontal corticocortical inputs. Later tone-evoked activity in upper layers showed a preservation of broad tonotopic tuning across sound levels without shifts towards lower frequencies. Synaptic integration within corticocortical circuits may hence contribute to a level-robust representation of auditory information on a neuronal population level in the auditory cortex. PMID:28046062
Processing of pitch and location in human auditory cortex during visual and auditory tasks.
Häkkinen, Suvi; Ovaska, Noora; Rinne, Teemu
2015-01-01
The relationship between stimulus-dependent and task-dependent activations in human auditory cortex (AC) during pitch and location processing is not well understood. In the present functional magnetic resonance imaging study, we investigated the processing of task-irrelevant and task-relevant pitch and location during discrimination, n-back, and visual tasks. We tested three hypotheses: (1) According to prevailing auditory models, stimulus-dependent processing of pitch and location should be associated with enhanced activations in distinct areas of the anterior and posterior superior temporal gyrus (STG), respectively. (2) Based on our previous studies, task-dependent activation patterns during discrimination and n-back tasks should be similar when these tasks are performed on sounds varying in pitch or location. (3) Previous studies in humans and animals suggest that pitch and location tasks should enhance activations especially in those areas that also show activation enhancements associated with stimulus-dependent pitch and location processing, respectively. Consistent with our hypotheses, we found stimulus-dependent sensitivity to pitch and location in anterolateral STG and anterior planum temporale (PT), respectively, in line with the view that these features are processed in separate parallel pathways. Further, task-dependent activations during discrimination and n-back tasks were associated with enhanced activations in anterior/posterior STG and posterior STG/inferior parietal lobule (IPL) irrespective of stimulus features. However, direct comparisons between pitch and location tasks performed on identical sounds revealed no significant activation differences. These results suggest that activations during pitch and location tasks are not strongly affected by enhanced stimulus-dependent activations to pitch or location. 
We also found that activations in PT were strongly modulated by task requirements and that areas in the inferior parietal lobule (IPL) showed task-dependent activation modulations, but no systematic activations to pitch or location. Based on these results, we argue that activations during pitch and location tasks cannot be explained by enhanced stimulus-specific processing alone, but rather that activations in human AC depend in a complex manner on the requirements of the task at hand.
How do neurons work together? Lessons from auditory cortex.
Harris, Kenneth D; Bartho, Peter; Chadderton, Paul; Curto, Carina; de la Rocha, Jaime; Hollender, Liad; Itskov, Vladimir; Luczak, Artur; Marguet, Stephan L; Renart, Alfonso; Sakata, Shuzo
2011-01-01
Recordings of single neurons have yielded great insights into the way acoustic stimuli are represented in auditory cortex. However, any one neuron functions as part of a population whose combined activity underlies cortical information processing. Here we review some results obtained by recording simultaneously from auditory cortical populations and individual morphologically identified neurons, in urethane-anesthetized and unanesthetized passively listening rats. Auditory cortical populations produced structured activity patterns both in response to acoustic stimuli, and spontaneously without sensory input. Population spike time patterns were broadly conserved across multiple sensory stimuli and spontaneous events, exhibiting a generally conserved sequential organization lasting approximately 100 ms. Both spontaneous and evoked events exhibited sparse, spatially localized activity in layer 2/3 pyramidal cells, and densely distributed activity in larger layer 5 pyramidal cells and putative interneurons. Laminar propagation differed, however, with spontaneous activity spreading upward from deep layers and slowly across columns, but sensory responses initiating in presumptive thalamorecipient layers and spreading rapidly across columns. In both unanesthetized and urethane-anesthetized rats, global activity fluctuated between a "desynchronized" state characterized by low-amplitude, high-frequency local field potentials and a "synchronized" state of larger, lower-frequency waves. Computational studies suggested that responses could be predicted by a simple dynamical system model fitted to the spontaneous activity immediately preceding stimulus presentation. Fitting this model to the data yielded a nonlinear self-exciting system model in synchronized states and an approximately linear system in desynchronized states. We comment on the significance of these results for auditory cortical processing of acoustic and non-acoustic information. © 2010 Elsevier B.V. All rights reserved.
Crossmodal attention switching: auditory dominance in temporal discrimination tasks.
Lukas, Sarah; Philipp, Andrea M; Koch, Iring
2014-11-01
Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual-auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are differentially effectively processed depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual-auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set. Copyright © 2014 Elsevier B.V. All rights reserved.
Touch activates human auditory cortex.
Schürmann, Martin; Caetano, Gina; Hlushchuk, Yevhen; Jousmäki, Veikko; Hari, Riitta
2006-05-01
Vibrotactile stimuli can facilitate hearing, both in hearing-impaired and in normally hearing people. Accordingly, the sounds of hands exploring a surface contribute to the explorer's haptic percepts. As a possible brain basis of such phenomena, functional brain imaging has identified activations specific to audiotactile interaction in secondary somatosensory cortex, auditory belt area, and posterior parietal cortex, depending on the quality and relative salience of the stimuli. We studied 13 subjects with non-invasive functional magnetic resonance imaging (fMRI) to search for auditory brain areas that would be activated by touch. Vibration bursts of 200 Hz were delivered to the subjects' fingers and palm and tactile pressure pulses to their fingertips. Noise bursts served to identify auditory cortex. Vibrotactile-auditory co-activation, addressed with minimal smoothing to obtain a conservative estimate, was found in an 85-mm³ region in the posterior auditory belt area. This co-activation could be related to facilitated hearing at the behavioral level, reflecting the analysis of sound-like temporal patterns in vibration. However, even tactile pulses (without any vibration) activated parts of the posterior auditory belt area, which therefore might subserve processing of audiotactile events that arise during dynamic contact between hands and environment.
Modalities of memory: is reading lips like hearing voices?
Maidment, David W; Macken, Bill; Jones, Dylan M
2013-12-01
Functional similarities in verbal memory performance across presentation modalities (written, heard, lipread) are often taken to point to a common underlying representational form upon which the modalities converge. We show here instead that the pattern of performance depends critically on presentation modality and different mechanisms give rise to superficially similar effects across modalities. Lipread recency is underpinned by different mechanisms to auditory recency, and while the effect of an auditory suffix on an auditory list is due to the perceptual grouping of the suffix with the list, the corresponding effect with lipread speech is due to misidentification of the lexical content of the lipread suffix. Further, while a lipread suffix does not disrupt auditory recency, an auditory suffix does disrupt recency for lipread lists. However, this effect is due to attentional capture ensuing from the presentation of an unexpected auditory event, and is evident both with verbal and nonverbal auditory suffixes. These findings add to a growing body of evidence that short-term verbal memory performance is determined by modality-specific perceptual and motor processes, rather than by the storage and manipulation of phonological representations. Copyright © 2013 Elsevier B.V. All rights reserved.
Auditory salience using natural soundscapes.
Huang, Nicholas; Elhilali, Mounya
2017-03-01
Salience describes the phenomenon by which an object stands out from a scene. While its underlying processes are extensively studied in vision, mechanisms of auditory salience remain largely unknown. Previous studies have used well-controlled auditory scenes to shed light on some of the acoustic attributes that drive the salience of sound events. Unfortunately, the use of constrained stimuli in addition to a lack of well-established benchmarks of salience judgments hampers the development of comprehensive theories of sensory-driven auditory attention. The present study explores auditory salience in a set of dynamic natural scenes. A behavioral measure of salience is collected by having human volunteers listen to two concurrent scenes and indicate continuously which one attracts their attention. By using natural scenes, the study takes a data-driven rather than experimenter-driven approach to exploring the parameters of auditory salience. The findings indicate that the space of auditory salience is multidimensional (spanning loudness, pitch, spectral shape, as well as other acoustic attributes), nonlinear and highly context-dependent. Importantly, the results indicate that contextual information about the entire scene over both short and long scales needs to be considered in order to properly account for perceptual judgments of salience.
Sleep-dependent consolidation benefits fast transfer of time interval training.
Chen, Lihan; Guo, Lu; Bao, Ming
2017-03-01
A previous study has shown that short training (15 min) in explicitly discriminating temporal intervals between two paired auditory beeps, or between two paired tactile taps, can significantly improve observers' ability to classify the perceptual states of visual Ternus apparent motion, whereas training on task-irrelevant sensory properties did not improve visual timing (Chen and Zhou in Exp Brain Res 232(6):1855-1864, 2014). The present study examined the role of 'consolidation' after training on temporally task-irrelevant properties, and whether a pure delay (i.e., blank consolidation) following the pretest of the target task would improve visual interval timing, typified in the visual Ternus display. A pretest-training-posttest procedure was adopted, with discrimination of Ternus apparent motion as the probe. Extended implicit training of timing, in which the time intervals between paired auditory beeps or paired tactile taps were manipulated but the task was discrimination of auditory pitch or tactile intensity, did not lead to training benefits (Exps 1 and 3); however, a delay of 24 h after implicit training of timing, including solving 'Sudoku puzzles,' made the otherwise absent training benefits observable (Exps 2, 4, 5 and 6). These improvements in performance were not due to a practice effect for Ternus motion (Exp 7). A general 'blank' consolidation period of 24 h also made improvements in visual timing observable (Exp 8). Taken together, the current findings indicate that sleep-dependent consolidation exerts a general effect, potentially triggering and maintaining neuroplastic changes in the intrinsic (timing) network that enhance the ability of time perception.
The Role of the Auditory Brainstem in Processing Linguistically-Relevant Pitch Patterns
ERIC Educational Resources Information Center
Krishnan, Ananthanarayan; Gandour, Jackson T.
2009-01-01
Historically, the brainstem has been neglected as a part of the brain involved in language processing. We review recent evidence of language-dependent effects in pitch processing based on comparisons of native vs. nonnative speakers of a tonal language from electrophysiological recordings in the auditory brainstem. We argue that there is enhancing…
Perrone-Bertolotti, Marcela; Kujala, Jan; Vidal, Juan R; Hamame, Carlos M; Ossandon, Tomas; Bertrand, Olivier; Minotti, Lorella; Kahane, Philippe; Jerbi, Karim; Lachaux, Jean-Philippe
2012-12-05
As you might experience it while reading this sentence, silent reading often involves an imagery speech component: we can hear our own "inner voice" pronouncing words mentally. Recent functional magnetic resonance imaging studies have associated that component with increased metabolic activity in the auditory cortex, including voice-selective areas. It remains to be determined, however, whether this activation arises automatically from early bottom-up visual inputs or whether it depends on late top-down control processes modulated by task demands. To answer this question, we collaborated with four epileptic human patients recorded with intracranial electrodes in the auditory cortex for therapeutic purposes, and measured high-frequency (50-150 Hz) "gamma" activity as a proxy of population level spiking activity. Temporal voice-selective areas (TVAs) were identified with an auditory localizer task and monitored as participants viewed words flashed on screen. We compared neural responses depending on whether words were attended or ignored and found a significant increase of neural activity in response to words, strongly enhanced by attention. In one of the patients, we could record that response at 800 ms in TVAs, but also at 700 ms in the primary auditory cortex and at 300 ms in the ventral occipital temporal cortex. Furthermore, single-trial analysis revealed a considerable jitter between activation peaks in visual and auditory cortices. Altogether, our results demonstrate that the multimodal mental experience of reading is in fact a heterogeneous complex of asynchronous neural responses, and that auditory and visual modalities often process distinct temporal frames of our environment at the same time.
Teichert, Tobias
2017-10-01
Amplitudes of auditory evoked potentials (AEP) increase with the intensity/loudness of sounds (loudness-dependence of AEP, LDAEP) and with the time between adjacent sounds (time-dependence of AEP, TDAEP). Both blunted LDAEP and blunted TDAEP are markers of altered auditory function in schizophrenia (SZ). However, while blunted LDAEP has been attributed to altered serotonergic function, blunted TDAEP has been linked to altered NMDA receptor function. Despite phenomenological similarities of the two effects, no common pharmacological underpinnings have been identified. To test whether LDAEP and TDAEP are both affected by NMDA receptor blockade, two rhesus macaques passively listened to auditory clicks of 5 different intensities presented with stimulus-onset asynchronies ranging between 0.2 and 6.4 s. Eight AEP components were analyzed, including the N85, the presumed human N1 homolog. LDAEP and TDAEP were estimated as the slopes of AEP amplitude with intensity and with the logarithm of stimulus-onset asynchrony, respectively. On different days, AEPs were collected after systemic injection of MK-801 or vehicle. Both TDAEP and LDAEP of the N85 were blunted by the NMDA blocker MK-801 and recapitulate the SZ phenotype. In summary, LDAEP and TDAEP share important pharmacological commonalities that may help identify a common pharmacological intervention to normalize both electrophysiological phenotypes in SZ. Copyright © 2017 Elsevier B.V. All rights reserved.
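As defined in this abstract, LDAEP and TDAEP are simply regression slopes: component amplitude against stimulus intensity, and amplitude against the logarithm of stimulus-onset asynchrony. A minimal sketch of that estimation (the amplitude values below are hypothetical, not measurements from the study):

```python
import numpy as np

def aep_slope(predictor: np.ndarray, amplitudes: np.ndarray) -> float:
    """Least-squares slope of AEP amplitude against a predictor variable."""
    slope, _intercept = np.polyfit(predictor, amplitudes, 1)
    return slope

# Hypothetical N85 amplitudes (µV) at five click intensities (dB)
intensity = np.array([60.0, 65.0, 70.0, 75.0, 80.0])
amp_by_intensity = np.array([1.0, 1.4, 1.9, 2.3, 2.9])
ldaep = aep_slope(intensity, amp_by_intensity)  # µV per dB

# Hypothetical amplitudes across stimulus-onset asynchronies (s);
# TDAEP is the slope against log(SOA), not raw SOA
soa = np.array([0.2, 0.4, 0.8, 1.6, 3.2, 6.4])
amp_by_soa = np.array([0.8, 1.1, 1.5, 1.8, 2.2, 2.5])
tdaep = aep_slope(np.log(soa), amp_by_soa)

print(ldaep, tdaep)
```

Blunting of either measure, as reported after MK-801, would appear here as a flatter (smaller) slope.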
Timescale- and Sensory Modality-Dependency of the Central Tendency of Time Perception.
Murai, Yuki; Yotsumoto, Yuko
2016-01-01
When individuals are asked to reproduce intervals of stimuli that are intermixedly presented at various times, longer intervals are often underestimated and shorter intervals overestimated. This phenomenon may be attributed to the central tendency of time perception, and suggests that our brain optimally encodes a stimulus interval based on current stimulus input and prior knowledge of the distribution of stimulus intervals. Two distinct systems are thought to be recruited in the perception of sub- and supra-second intervals. Sub-second timing is subject to local sensory processing, whereas supra-second timing depends on more centralized mechanisms. To clarify the factors that influence time perception, the present study investigated how both sensory modality and timescale affect the central tendency. In Experiment 1, participants were asked to reproduce sub- or supra-second intervals, defined by visual or auditory stimuli. In the sub-second range, the magnitude of the central tendency was significantly larger for visual intervals compared to auditory intervals, while visual and auditory intervals exhibited a correlated and comparable central tendency in the supra-second range. In Experiment 2, the ability to discriminate sub-second intervals in the reproduction task was controlled across modalities by using an interval discrimination task. Even when the ability to discriminate intervals was controlled, visual intervals exhibited a larger central tendency than auditory intervals in the sub-second range. In addition, the magnitude of the central tendency for visual and auditory sub-second intervals was significantly correlated. These results suggest that a common modality-independent mechanism is responsible for the supra-second central tendency, and that both the modality-dependent and modality-independent components of the timing system contribute to the central tendency in the sub-second range.
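The "optimal encoding" account sketched above is commonly formalized as Bayesian shrinkage toward the mean of the prior distribution of intervals: the estimate is a reliability-weighted average of the sensory measurement and the prior mean, so a noisier channel is pulled more strongly toward the center. A toy illustration of that idea (not the authors' fitted model; all parameter values are hypothetical):

```python
def bayes_estimate(measured: float, prior_mean: float,
                   sensory_var: float, prior_var: float) -> float:
    """Reliability-weighted interval estimate: the larger the sensory
    noise, the stronger the shrinkage toward the prior mean, i.e. the
    larger the central tendency."""
    w = prior_var / (prior_var + sensory_var)  # weight on the measurement
    return w * measured + (1 - w) * prior_mean

# Hypothetical sub-second setting: prior over intervals centered at 600 ms
prior_mean, prior_var = 600.0, 100.0 ** 2

# For the same 900 ms interval, a noisier "visual" channel yields an
# estimate pulled closer to 600 ms than a more precise "auditory" channel.
visual = bayes_estimate(900.0, prior_mean, sensory_var=150.0 ** 2, prior_var=prior_var)
auditory = bayes_estimate(900.0, prior_mean, sensory_var=50.0 ** 2, prior_var=prior_var)
print(visual, auditory)
```

Under these made-up variances the visual estimate lands nearer the prior mean than the auditory one, mirroring the larger visual central tendency reported for sub-second intervals.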
Thalamic and parietal brain morphology predicts auditory category learning.
Scharinger, Mathias; Henry, Molly J; Erb, Julia; Meyer, Lars; Obleser, Jonas
2014-01-01
Auditory categorization is a vital skill involving the attribution of meaning to acoustic events, engaging domain-specific (i.e., auditory) as well as domain-general (e.g., executive) brain networks. A listener's ability to categorize novel acoustic stimuli should therefore depend on both, with the domain-general network being particularly relevant for adaptively changing listening strategies and directing attention to relevant acoustic cues. Here we assessed adaptive listening behavior, using complex acoustic stimuli with an initially salient (but later degraded) spectral cue and a secondary, duration cue that remained nondegraded. We employed voxel-based morphometry (VBM) to identify cortical and subcortical brain structures whose individual neuroanatomy predicted task performance and the ability to optimally switch to making use of temporal cues after spectral degradation. Behavioral listening strategies were assessed by logistic regression and revealed mainly strategy switches in the expected direction, with considerable individual differences. Gray-matter probability in the left inferior parietal lobule (BA 40) and left precentral gyrus was predictive of "optimal" strategy switch, while gray-matter probability in thalamic areas, comprising the medial geniculate body, co-varied with overall performance. Taken together, our findings suggest that successful auditory categorization relies on domain-specific neural circuits in the ascending auditory pathway, while adaptive listening behavior depends more on brain structure in parietal cortex, enabling the (re)direction of attention to salient stimulus properties. © 2013 Published by Elsevier Ltd.
Chun, Sungkun; Du, Fei; Westmoreland, Joby J.; Han, Seung Baek; Wang, Yong-Dong; Eddins, Donnie; Bayazitov, Ildar T.; Devaraju, Prakash; Yu, Jing; Mellado Lagarde, Marcia M.; Anderson, Kara; Zakharenko, Stanislav S.
2016-01-01
Although 22q11.2 deletion syndrome (22q11DS) is associated with early-life behavioral abnormalities, affected individuals are also at high risk for the development of schizophrenia symptoms, including psychosis, later in life. Auditory thalamocortical projections recently emerged as a neural circuit specifically disrupted in 22q11DS mouse models, in which haploinsufficiency of the microRNA-processing gene Dgcr8 resulted in the elevation of the dopamine receptor Drd2 in the auditory thalamus, an abnormal sensitivity of thalamocortical projections to antipsychotics, and an abnormal acoustic-startle response. Here we show that these auditory thalamocortical phenotypes have a delayed onset in 22q11DS mouse models and are associated with an age-dependent reduction of the microRNA miR-338-3p, which targets Drd2 and is enriched in the thalamus of both humans and mice. Replenishing depleted miR-338-3p in mature 22q11DS mice rescued the thalamocortical abnormalities, and miR-338-3p deletion/knockdown mimicked thalamocortical and behavioral deficits and eliminated their age dependence. Therefore, miR-338-3p depletion is necessary and sufficient to disrupt auditory thalamocortical signaling in 22q11DS mouse models and may mediate the pathogenic mechanism of 22q11DS-related psychosis and control its late onset. PMID:27892953
Evaluation of sounds for hybrid and electric vehicles operating at low speed
DOT National Transportation Integrated Search
2012-10-22
Electric vehicles (EV) and hybrid electric vehicles (HEVs), operated at low speeds may reduce auditory cues used by pedestrians to assess the state of nearby traffic creating a safety issue. This field study compares the auditory detectability of num...
Tsaneva, L
1993-01-01
The results of an investigation of the discomfort threshold in 385 operators from the firm "Kremikovtsi" are discussed. The most pronounced changes, at a high confidence level, are found in operators with tonal auditory thresholds raised up to 45 dB and above 50 dB. The observed changes in the discomfort threshold fall into three groups: 1) a raised tonal auditory threshold (up to 30 dB) without a decrease in the discomfort threshold; 2) a discomfort threshold decreased by about 15-20 dB with a tonal auditory threshold raised up to 45 dB; 3) a decreased discomfort threshold against the background of a tonal auditory threshold raised above 50 dB. Four figures present audiograms illustrating the state of the tonal auditory threshold, the field of hearing, and the discomfort threshold. The field of hearing of the operators in groups III and IV is narrowed, and in the latter also deformed. This pathophysiological phenomenon is explained by the increased effect of sound irritation and the presence of a recruitment phenomenon, with possible involvement of the central end of the auditory analyser. It is emphasized that the discomfort threshold is a sensitive index of each operator's individual tolerance for speech-sound-noise discomfort.(ABSTRACT TRUNCATED AT 250 WORDS)
Investigating brain response to music: a comparison of different fMRI acquisition schemes.
Mueller, Karsten; Mildner, Toralf; Fritz, Thomas; Lepsien, Jöran; Schwarzbauer, Christian; Schroeter, Matthias L; Möller, Harald E
2011-01-01
Functional magnetic resonance imaging (fMRI) in auditory experiments is a challenge, because the scanning procedure produces considerable noise that can interfere with the auditory paradigm. The noise might either mask the auditory material presented, or interfere with stimuli designed to evoke emotions because it sounds loud and rather unpleasant. Therefore, scanning paradigms that allow interleaved auditory stimulation and image acquisition appear to be advantageous. The sparse temporal sampling (STS) technique uses a very long repetition time in order to achieve stimulus presentation in the absence of scanner noise. Although only relatively few volumes are acquired for the resulting data sets, recent studies using this method have yielded remarkable results. A new development is the interleaved silent steady state (ISSS) technique. Compared with STS, this method is capable of acquiring several volumes in the time frame between the auditory trials (while the magnetization is kept in a steady state during stimulus presentation). In order to draw conclusions about the optimum fMRI procedure with auditory stimulation, different echo-planar imaging (EPI) acquisition schemes were compared: continuous scanning, STS, and ISSS. The total acquisition time of each sequence was adjusted to about 12.5 min. The results indicate that the ISSS approach exhibits the highest sensitivity in detecting subtle activity in sub-cortical brain regions. Copyright © 2010 Elsevier Inc. All rights reserved.
Lovelace, Jonathan W; Wen, Teresa H; Reinhard, Sarah; Hsu, Mike S; Sidhu, Harpreet; Ethell, Iryna M; Binder, Devin K; Razak, Khaleel A
2016-05-01
Sensory processing deficits are common in autism spectrum disorders, but the underlying mechanisms are unclear. Fragile X Syndrome (FXS) is a leading genetic cause of intellectual disability and autism. Electrophysiological responses in humans with FXS show reduced habituation with sound repetition, and this deficit may underlie auditory hypersensitivity in FXS. Our previous study in Fmr1 knockout (KO) mice revealed an unusually long state of increased sound-driven excitability in auditory cortical neurons, suggesting that cortical responses to repeated sounds may exhibit abnormal habituation as in humans with FXS. Here, we tested this prediction by comparing cortical event related potentials (ERP) recorded from wildtype (WT) and Fmr1 KO mice. We report a repetition-rate-dependent reduction in habituation of N1 amplitude in Fmr1 KO mice and show that matrix metalloproteinase-9 (MMP-9), one of the known FMRP targets, contributes to the reduced ERP habituation. Our studies demonstrate a significant up-regulation of MMP-9 levels in the auditory cortex of adult Fmr1 KO mice, whereas a genetic deletion of Mmp-9 reverses ERP habituation deficits in Fmr1 KO mice. Although the N1 amplitude of Mmp-9/Fmr1 DKO recordings was larger than that of WT and KO recordings, the habituation of ERPs in Mmp-9/Fmr1 DKO mice is similar to that of WT mice, implicating MMP-9 as a potential target for reversing sensory processing deficits in FXS. Together these data establish ERP habituation as a translation-relevant, physiological pre-clinical marker of auditory processing deficits in FXS and suggest that abnormal MMP-9 regulation is a mechanism underlying auditory hypersensitivity in FXS. Fragile X Syndrome (FXS) is the leading known genetic cause of autism spectrum disorders. Individuals with FXS show symptoms of auditory hypersensitivity. These symptoms may arise due to sustained neural responses to repeated sounds, but the underlying mechanisms remain unclear.
For the first time, this study shows deficits in habituation of neural responses to repeated sounds in Fmr1 KO mice, as seen in humans with FXS. We also report an abnormally high level of matrix metalloproteinase-9 (MMP-9) in the auditory cortex of Fmr1 KO mice and that deletion of Mmp-9 from Fmr1 KO mice reverses habituation deficits. These data provide a translation-relevant electrophysiological biomarker for sensory deficits in FXS and implicate MMP-9 as a target for drug discovery. Copyright © 2016 Elsevier Inc. All rights reserved.
Assembly of the Auditory Circuitry by a Hox Genetic Network in the Mouse Brainstem
Di Bonito, Maria; Narita, Yuichi; Avallone, Bice; Sequino, Luigi; Mancuso, Marta; Andolfi, Gennaro; Franzè, Anna Maria; Puelles, Luis; Rijli, Filippo M.; Studer, Michèle
2013-01-01
Rhombomeres (r) contribute to brainstem auditory nuclei during development. Hox genes are determinants of rhombomere-derived fate and neuronal connectivity. Little is known about the contribution of individual rhombomeres and their associated Hox codes to auditory sensorimotor circuitry. Here, we show that r4 contributes to functionally linked sensory and motor components, including the ventral nucleus of lateral lemniscus, posterior ventral cochlear nuclei (VCN), and motor olivocochlear neurons. Assembly of the r4-derived auditory components is involved in sound perception and depends on regulatory interactions between Hoxb1 and Hoxb2. Indeed, in Hoxb1 and Hoxb2 mutant mice the transmission of low-level auditory stimuli is lost, resulting in hearing impairments. On the other hand, Hoxa2 regulates the Rig1 axon guidance receptor and controls contralateral projections from the anterior VCN to the medial nucleus of the trapezoid body, a circuit involved in sound localization. Thus, individual rhombomeres and their associated Hox codes control the assembly of distinct functionally segregated sub-circuits in the developing auditory brainstem. PMID:23408898
Toscano, Massimiliano; Viganò, Alessandro; Puledda, Francesca; Verzina, Angela; Rocco, Andrea; Lenzi, Gian Luigi; Di Piero, Vittorio
2014-01-01
Anger and aggressive behavior (AB) are two of the main post-stroke behavioral manifestations, which may reflect either an anger trait (TA) or a state condition of anger (SA). The serotonergic system is thought to exert inhibitory control over aggressive impulses. Nevertheless, whether 5HT plays the same role in TA and in SA is still debated. Intensity dependence of auditory evoked potentials (IDAP) is thought to be inversely related to central 5HT tone. The aim of this study was to evaluate, in acute stroke patients, the involvement of the 5HT system in AB by means of IDAP. Consecutive stroke patients were evaluated and compared with healthy controls. The Spielberger Trait Anger Scale (STAS) was used to assess AB, SA and TA. Patients with AB and TA showed significantly increased IDAP values, whereas patients with SA had significantly lower IDAP, indicating an increased 5HT tone. In acute stroke patients with AB, there is a decreased central 5HT tone. Surprisingly, we found opposite 5HT features in patients with TA and in those showing SA, suggesting that the hypothesis of aggression based on 5HT deficiency requires further investigation. This might open new strategies for the treatment of post-stroke AB. © 2014 S. Karger AG, Basel.
Decoding spectrotemporal features of overt and covert speech from the human cortex
Martin, Stéphanie; Brunner, Peter; Holdgraf, Chris; Heinze, Hans-Jochen; Crone, Nathan E.; Rieger, Jochem; Schalk, Gerwin; Knight, Robert T.; Pasley, Brian N.
2014-01-01
Auditory perception and auditory imagery have been shown to activate overlapping brain regions. We hypothesized that these phenomena also share a common underlying neural representation. To assess this, we used intracranial electrocorticography (ECoG) recordings from epileptic patients performing an out-loud or a silent reading task. In these tasks, short stories scrolled across a video screen in two conditions: subjects read the same stories both aloud (overt) and silently (covert). In a control condition the subject remained in a resting state. We first built a high gamma (70–150 Hz) neural decoding model to reconstruct spectrotemporal auditory features of self-generated overt speech. We then evaluated whether this same model could reconstruct auditory speech features in the covert speech condition. Two speech models were tested: a spectrogram and a modulation-based feature space. For the overt condition, reconstruction accuracy was evaluated as the correlation between original and predicted speech features, and was significant in each subject (p < 10−5; paired two-sample t-test). For the covert speech condition, dynamic time warping was first used to realign the covert speech reconstruction with the corresponding original speech from the overt condition. Reconstruction accuracy was then evaluated as the correlation between original and reconstructed speech features. Covert reconstruction accuracy was compared to the accuracy obtained from reconstructions in the baseline control condition. Reconstruction accuracy for the covert condition was significantly better than for the control condition (p < 0.005; paired two-sample t-test). The superior temporal gyrus, pre- and post-central gyrus provided the highest reconstruction information. The relationship between overt and covert speech reconstruction depended on anatomy. 
These results provide evidence that auditory representations of covert speech can be reconstructed from models that are built from an overt speech data set, supporting a partially shared neural substrate. PMID:24904404
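The covert-condition scoring described above hinges on dynamic time warping (DTW): the covert reconstruction is first realigned to the overt reference, and only then is the feature correlation computed. A simplified sketch of that step with toy one-dimensional envelopes (the actual study aligned spectrogram and modulation features, not sinusoids):

```python
import numpy as np

def dtw_path(a, b):
    """Dynamic time warping between two 1-D feature sequences.
    Returns the optimal alignment path as (index_a, index_b) pairs."""
    n, m = len(a), len(b)
    cost = np.abs(a[:, None] - b[None, :])        # local distances
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(
                acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    # Backtrack from the end to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# A reference envelope and a temporally warped "reconstruction" of it.
t = np.linspace(0.0, 1.0, 200)
overt = np.sin(2 * np.pi * 3 * t)
covert = np.sin(2 * np.pi * 3 * t**1.3)           # warped copy

path = dtw_path(covert, overt)
ia, ib = np.array(path).T
r_raw = np.corrcoef(covert, overt)[0, 1]          # before realignment
r_aligned = np.corrcoef(covert[ia], overt[ib])[0, 1]  # after realignment
print(r_raw, r_aligned)
```

The aligned correlation recovers the shared structure that the raw, misaligned correlation underestimates, which is exactly why realignment precedes accuracy scoring in the covert condition.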
Bidet-Caulet, Aurélie; Fischer, Catherine; Besle, Julien; Aguera, Pierre-Emmanuel; Giard, Marie-Helene; Bertrand, Olivier
2007-08-29
In noisy environments, we use auditory selective attention to actively ignore distracting sounds and select relevant information, as during a cocktail party to follow one particular conversation. The present electrophysiological study aims at deciphering the spatiotemporal organization of the effect of selective attention on the representation of concurrent sounds in the human auditory cortex. Sound onset asynchrony was manipulated to induce the segregation of two concurrent auditory streams. Each stream consisted of amplitude modulated tones at different carrier and modulation frequencies. Electrophysiological recordings were performed in epileptic patients with pharmacologically resistant partial epilepsy, implanted with depth electrodes in the temporal cortex. Patients were presented with the stimuli while they either performed an auditory distracting task or actively selected one of the two concurrent streams. Selective attention was found to affect steady-state responses in the primary auditory cortex, and transient and sustained evoked responses in secondary auditory areas. The results provide new insights on the neural mechanisms of auditory selective attention: stream selection during sound rivalry would be facilitated not only by enhancing the neural representation of relevant sounds, but also by reducing the representation of irrelevant information in the auditory cortex. Finally, they suggest a specialization of the left hemisphere in the attentional selection of fine-grained acoustic information.
Using Auditory Steady State Responses to Outline the Functional Connectivity in the Tinnitus Brain
Schlee, Winfried; Weisz, Nathan; Bertrand, Olivier; Hartmann, Thomas; Elbert, Thomas
2008-01-01
Background Tinnitus is an auditory phantom perception that is most likely generated in the central nervous system. Most tinnitus research has concentrated on the auditory system. However, it was recently suggested that non-auditory structures are also involved in a global network that encodes subjective tinnitus. We tested this assumption using auditory steady state responses to entrain the tinnitus network and investigated long-range functional connectivity across various non-auditory brain regions. Methods and Findings Using whole-head magnetoencephalography we investigated cortical connectivity by means of phase synchronization in tinnitus subjects and healthy controls. We found evidence for a deviating pattern of long-range functional connectivity in tinnitus that was strongly correlated with individual ratings of the tinnitus percept. Phase couplings between the anterior cingulum and the right frontal lobe and phase couplings between the anterior cingulum and the right parietal lobe showed significant condition x group interactions and were correlated with the individual tinnitus distress ratings only in the tinnitus condition and not in the control conditions. Conclusions To the best of our knowledge this is the first study to demonstrate the existence of a global tinnitus network of long-range cortical connections outside the central auditory system. This result extends the current knowledge of how tinnitus is generated in the brain. We propose that this global extent of the tinnitus network is crucial for the continuous perception of the tinnitus tone, and that a therapeutic intervention able to change this network should result in relief of tinnitus. PMID:19005566
Source Space Estimation of Oscillatory Power and Brain Connectivity in Tinnitus
Zobay, Oliver; Palmer, Alan R.; Hall, Deborah A.; Sereda, Magdalena; Adjamian, Peyman
2015-01-01
Tinnitus is the perception of an internally generated sound that is postulated to emerge as a result of structural and functional changes in the brain. However, the precise pathophysiology of tinnitus remains unknown. Llinas' thalamocortical dysrhythmia model suggests that neural deafferentation due to hearing loss causes a dysregulation of coherent activity between thalamus and auditory cortex. This leads to a pathological coupling of theta and gamma oscillatory activity in the resting state, localised to the auditory cortex where normally alpha oscillations should occur. Numerous studies also suggest that tinnitus perception relies on the interplay between auditory and non-auditory brain areas. According to the Global Brain Model, a network of global fronto-parietal-cingulate areas is important in the generation and maintenance of the conscious perception of tinnitus. Thus, the distress experienced by many individuals with tinnitus is related to the top-down influence of this global network on auditory areas. In this magnetoencephalographic study, we compare resting-state oscillatory activity of tinnitus participants and normal-hearing controls to examine effects on spectral power as well as functional and effective connectivity. The analysis is based on beamformer source projection and an atlas-based region-of-interest approach. We find increased functional connectivity within the auditory cortices in the alpha band. A significant increase is also found for the effective connectivity from a global brain network to the auditory cortices in the alpha and beta bands. We do not find evidence of effects on spectral power. Overall, our results provide only limited support for the thalamocortical dysrhythmia and Global Brain models of tinnitus. PMID:25799178
Auditory detectability of hybrid electric vehicles by pedestrians who are blind
DOT National Transportation Integrated Search
2010-11-15
Quieter cars such as electric vehicles (EVs) and hybrid electric vehicles (HEVs) may reduce auditory cues used by pedestrians to assess the state of nearby traffic and, as a result, their use may have an adverse impact on pedestrian safety. In order ...
Effects of auditory selective attention on chirp evoked auditory steady state responses.
Bohr, Andreas; Bernarding, Corinna; Strauss, Daniel J; Corona-Strauss, Farah I
2011-01-01
Auditory steady state responses (ASSRs) are frequently used to assess auditory function. Recently, interest in the effects of attention on ASSRs has increased. In this paper, we investigated for the first time possible effects of attention on ASSRs evoked by amplitude-modulated and frequency-modulated chirp paradigms. Different paradigms were designed using chirps with low and high frequency content, and the stimulation was presented in monaural and dichotic modalities. A total of 10 young subjects participated in the study; they were instructed to ignore the stimuli and, after a second repetition, they had to detect a deviant stimulus. In the time domain analysis, we found enhanced amplitudes for the attended conditions. Furthermore, we noticed higher amplitude values for the condition using frequency-modulated low-frequency chirps evoked by monaural stimulation. The largest difference between the attended and unattended modalities was exhibited in the dichotic case of the amplitude-modulated condition using chirps with low frequency content.
Sensory Intelligence for Extraction of an Abstract Auditory Rule: A Cross-Linguistic Study.
Guo, Xiao-Tao; Wang, Xiao-Dong; Liang, Xiu-Yuan; Wang, Ming; Chen, Lin
2018-02-21
In a complex linguistic environment, while speech sounds can greatly vary, some shared features are often invariant. These invariant features constitute so-called abstract auditory rules. Our previous study has shown that with auditory sensory intelligence, the human brain can automatically extract the abstract auditory rules in the speech sound stream, presumably serving as the neural basis for speech comprehension. However, whether the sensory intelligence for extraction of abstract auditory rules in speech is inherent or experience-dependent remains unclear. To address this issue, we constructed a complex speech sound stream using auditory materials in Mandarin Chinese, in which syllables had a flat lexical tone but differed in other acoustic features to form an abstract auditory rule. This rule was occasionally and randomly violated by the syllables with the rising, dipping or falling tone. We found that both Chinese and foreign speakers detected the violations of the abstract auditory rule in the speech sound stream at a pre-attentive stage, as revealed by the whole-head recordings of mismatch negativity (MMN) in a passive paradigm. However, MMNs peaked earlier in Chinese speakers than in foreign speakers. Furthermore, Chinese speakers showed different MMN peak latencies for the three deviant types, which paralleled recognition points. These findings indicate that the sensory intelligence for extraction of abstract auditory rules in speech sounds is innate but shaped by language experience. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
Bosen, Adam K.; Fleming, Justin T.; Brown, Sarah E.; Allen, Paul D.; O'Neill, William E.; Paige, Gary D.
2016-01-01
Vision typically has better spatial accuracy and precision than audition, and as a result often captures auditory spatial perception when visual and auditory cues are presented together. One determinant of visual capture is the amount of spatial disparity between auditory and visual cues: when disparity is small visual capture is likely to occur, and when disparity is large visual capture is unlikely. Previous experiments have used two methods to probe how visual capture varies with spatial disparity. First, congruence judgment assesses perceived unity between cues by having subjects report whether or not auditory and visual targets came from the same location. Second, auditory localization assesses the graded influence of vision on auditory spatial perception by having subjects point to the remembered location of an auditory target presented with a visual target. Previous research has shown that when both tasks are performed concurrently they produce similar measures of visual capture, but this may not hold when tasks are performed independently. Here, subjects alternated between tasks independently across three sessions. A Bayesian inference model of visual capture was used to estimate perceptual parameters for each session, which were compared across tasks. Results demonstrated that the range of audio-visual disparities over which visual capture was likely to occur were narrower in auditory localization than in congruence judgment, which the model indicates was caused by subjects adjusting their prior expectation that targets originated from the same location in a task-dependent manner. PMID:27815630
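The Bayesian inference model of visual capture referenced above can be illustrated as a causal-inference sketch: the posterior probability that the auditory and visual cues share a common source gates how strongly vision pulls the auditory estimate, so capture is strong at small disparities and breaks down at large ones. All parameter values and the flat independent-source likelihood below are illustrative assumptions, not fitted values from the study:

```python
import numpy as np

def visual_capture(xa, xv, sigma_a=8.0, sigma_v=2.0, sigma_p=20.0,
                   p_common=0.5):
    """Toy causal-inference estimate of perceived auditory location.
    xa, xv: auditory and visual cue locations (deg, hypothetical)."""
    # Likelihood of the observed disparity under a common source,
    # versus a flat likelihood for two independent sources.
    var_sum = sigma_a**2 + sigma_v**2
    like_c1 = np.exp(-0.5 * (xa - xv)**2 / var_sum) \
        / np.sqrt(2 * np.pi * var_sum)
    like_c2 = 1.0 / (2 * sigma_p)                 # toy flat alternative
    post_c1 = like_c1 * p_common / (
        like_c1 * p_common + like_c2 * (1 - p_common))
    # Reliability-weighted fusion if common, auditory-only otherwise;
    # the reported estimate averages the two, weighted by the posterior.
    fused = (xa / sigma_a**2 + xv / sigma_v**2) \
        / (1 / sigma_a**2 + 1 / sigma_v**2)
    return post_c1 * fused + (1 - post_c1) * xa, post_c1

# Small disparity: strong capture. Large disparity: capture breaks down.
est_small, p_small = visual_capture(xa=0.0, xv=5.0)
est_large, p_large = visual_capture(xa=0.0, xv=40.0)
print(est_small, p_small, est_large, p_large)
```

In this framework, the task-dependent difference the study reports corresponds to subjects using different values of the prior `p_common` for localization versus congruence judgment.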
Human auditory steady state responses to binaural and monaural beats.
Schwarz, D W F; Taylor, P
2005-03-01
Binaural beat sensations depend upon a central combination of two different temporally encoded tones, separately presented to the two ears. We tested the feasibility of recording an auditory steady state evoked response (ASSR) at the binaural beat frequency in order to find a measure for temporal coding of sound in the human EEG. We stimulated each ear with a distinct tone, both differing in frequency by 40Hz, to record a binaural beat ASSR. As control, we evoked a beat ASSR in response to both tones in the same ear. We band-pass filtered the EEG at 40Hz, averaged with respect to stimulus onset and compared ASSR amplitudes and phases, extracted from a sinusoidal non-linear regression fit to a 40Hz period average. A 40Hz binaural beat ASSR was evoked at a low mean stimulus frequency (400Hz) but became undetectable beyond 3kHz. Its amplitude was smaller than that of the acoustic beat ASSR, which was evoked at low and high frequencies. Both ASSR types had maxima at fronto-central leads and displayed a fronto-occipital phase delay of several ms. The dependence of the 40Hz binaural beat ASSR on stimuli at low, temporally coded tone frequencies suggests that it may objectively assess temporal sound coding ability. The phase shift across the electrode array is evidence for more than one origin of the 40Hz oscillations. The binaural beat ASSR is an evoked response, with novel diagnostic potential, to a signal that is not present in the stimulus, but generated within the brain.
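The amplitude and phase extraction described above (a sinusoidal fit to a 40 Hz period average) has a closed-form least-squares solution, because A·sin(ωt+φ) = a·sin(ωt) + b·cos(ωt) is linear in a and b. A sketch with synthetic data (the sampling rate, noise level, and true amplitude/phase below are assumptions for illustration):

```python
import numpy as np

fs = 1000.0                     # sampling rate (Hz), illustrative
f0 = 40.0                       # ASSR / beat frequency
t = np.arange(0.0, 1.0, 1.0 / fs)

# Synthetic "period average": a 40 Hz component buried in noise.
rng = np.random.default_rng(1)
eeg = 1.5 * np.sin(2 * np.pi * f0 * t + 0.8) \
    + rng.normal(scale=1.0, size=t.size)

# Ordinary least squares on the sine/cosine basis recovers the
# amplitude and phase of the 40 Hz component in closed form.
X = np.column_stack([np.sin(2 * np.pi * f0 * t),
                     np.cos(2 * np.pi * f0 * t)])
a, b = np.linalg.lstsq(X, eeg, rcond=None)[0]
amplitude = np.hypot(a, b)      # A = sqrt(a^2 + b^2)
phase = np.arctan2(b, a)        # phi = atan2(b, a)
print(amplitude, phase)
```

Comparing `amplitude` and `phase` across electrodes is the kind of computation behind the fronto-central maxima and fronto-occipital phase delay the abstract reports.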
ERIC Educational Resources Information Center
Bidelman, Gavin M.; Gandour, Jackson T.; Krishnan, Ananthanarayan
2011-01-01
Neural encoding of pitch in the auditory brainstem is known to be shaped by long-term experience with language or music, implying that early sensory processing is subject to experience-dependent neural plasticity. In language, pitch patterns consist of sequences of continuous, curvilinear contours; in music, pitch patterns consist of relatively…
Elevated correlations in neuronal ensembles of mouse auditory cortex following parturition.
Rothschild, Gideon; Cohen, Lior; Mizrahi, Adi; Nelken, Israel
2013-07-31
The auditory cortex is malleable by experience. Previous studies of auditory plasticity have described experience-dependent changes in response profiles of single neurons or changes in global tonotopic organization. However, experience-dependent changes in the dynamics of local neural populations have remained unexplored. In this study, we examined the influence of a dramatic yet natural experience in the life of female mice, giving birth and becoming a mother, on single neurons and neuronal ensembles in the primary auditory cortex (A1). Using in vivo two-photon calcium imaging and electrophysiological recordings from layer 2/3 in A1 of mothers and age-matched virgin mice, we monitored changes in the responses to a set of artificial and natural sounds. Population dynamics underwent large changes as measured by pairwise and higher-order correlations, with noise correlations increasing as much as twofold in lactating mothers. Concomitantly, changes in response properties of single neurons were modest and selective. Remarkably, despite the large changes in correlations, information about stimulus identity remained essentially the same in the two groups. Our results demonstrate changes in the correlation structure of neuronal activity as a result of a natural life event.
Hao, Yongxin; Jing, He; Bi, Qiang; Zhang, Jiaozhen; Qin, Ling; Yang, Pingting
2014-12-15
Though accumulating literature implicates cytokines in the pathophysiology of mental disorders, the role of interleukin-6 (IL-6) in learning and memory functions remains unresolved. The present study was undertaken to investigate the effect of IL-6 on amygdala-dependent fear learning. Adult Wistar rats were used along with the auditory fear conditioning test and pharmacological techniques. The data showed that infusions of IL-6, aimed at the amygdala, dose-dependently impaired the acquisition and extinction of conditioned fear. In addition, Western blot analysis confirmed that JAK/STAT was transiently activated (phosphorylated) by the IL-6 treatment. Moreover, rats treated with JSI-124, a JAK/STAT3 inhibitor, prior to the IL-6 treatment showed a significant decrease in the IL-6-induced impairments of fear conditioning. Taken together, our results demonstrate that the learning behavior of rats in auditory fear conditioning can be modulated by IL-6 via the amygdala. Furthermore, JAK/STAT3 activation in the amygdala seems to play a role in the IL-6-mediated behavioral alterations of rats in auditory fear learning. Copyright © 2014 Elsevier B.V. All rights reserved.
Hale, Matthew D; Zaman, Arshad; Morrall, Matthew C H J; Chumas, Paul; Maguire, Melissa J
2018-03-01
Presurgical evaluation for temporal lobe epilepsy routinely assesses speech and memory lateralization and anatomic localization of the motor and visual areas but not baseline musical processing. This is paramount for a musician. Although validated tools exist to assess musical ability, there are no reported functional magnetic resonance imaging (fMRI) paradigms to assess musical processing. We examined the utility of a novel fMRI paradigm in an 18-year-old left-handed pianist who underwent surgery for a left temporal low-grade ganglioglioma. Preoperative evaluation consisted of neuropsychological evaluation, T1-weighted and T2-weighted magnetic resonance imaging, and fMRI. Auditory blood oxygen level-dependent fMRI was performed using a dedicated auditory scanning sequence. Three separate auditory investigations were conducted: listening to, humming, and thinking about a musical piece. All auditory fMRI paradigms activated the primary auditory cortex with varying degrees of auditory lateralization. Thinking about the piece additionally activated the primary visual cortices (bilaterally) and right dorsolateral prefrontal cortex. Humming demonstrated left-sided predominance of auditory cortex activation with activity observed in close proximity to the tumor. This study demonstrated an fMRI paradigm for evaluating musical processing that could form part of preoperative assessment for patients undergoing temporal lobe surgery for epilepsy. Copyright © 2017 Elsevier Inc. All rights reserved.
Cecere, Roberto; Gross, Joachim; Willis, Ashleigh; Thut, Gregor
2017-05-24
In multisensory integration, processing in one sensory modality is enhanced by complementary information from other modalities. Intersensory timing is crucial in this process because only inputs reaching the brain within a restricted temporal window are perceptually bound. Previous research in the audiovisual field has investigated various features of the temporal binding window, revealing asymmetries in its size and plasticity depending on the leading input: auditory-visual (AV) or visual-auditory (VA). Here, we tested whether separate neuronal mechanisms underlie this AV-VA dichotomy in humans. We recorded high-density EEG while participants performed an audiovisual simultaneity judgment task including various AV-VA asynchronies and unisensory control conditions (visual-only, auditory-only) and tested whether AV and VA processing generate different patterns of brain activity. After isolating the multisensory components of AV-VA event-related potentials (ERPs) from the sum of their unisensory constituents, we ran a time-resolved topographical representational similarity analysis (tRSA) comparing the AV and VA ERP maps. Spatial cross-correlation matrices were built from real data to index the similarity between the AV and VA maps at each time point (500 ms window after stimulus) and then correlated with two alternative similarity model matrices: AV maps = VA maps versus AV maps ≠ VA maps. The tRSA results favored the AV maps ≠ VA maps model across all time points, suggesting that audiovisual temporal binding (indexed by synchrony perception) engages different neural pathways depending on the leading sense. The existence of such a dual route supports recent theoretical accounts proposing that multiple binding mechanisms are implemented in the brain to accommodate different information parsing strategies in auditory and visual sensory systems. 
SIGNIFICANCE STATEMENT Intersensory timing is a crucial aspect of multisensory integration, determining whether and how inputs in one modality enhance stimulus processing in another modality. Our research demonstrates that evaluating synchrony of auditory-leading (AV) versus visual-leading (VA) audiovisual stimulus pairs is characterized by two distinct patterns of brain activity. This suggests that audiovisual integration is not a unitary process and that different binding mechanisms are recruited in the brain based on the leading sense. These mechanisms may be relevant for supporting different classes of multisensory operations, for example, auditory enhancement of visual attention (AV) and visual enhancement of auditory speech (VA). Copyright © 2017 Cecere et al.
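The core step of the time-resolved topographical RSA described above reduces, at each time point, to a spatial correlation between two scalp maps. A minimal sketch of that step, using synthetic random data in place of real ERP topographies (the array shapes and variable names are illustrative assumptions, not the authors' actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: ERP topographies for the AV and VA conditions,
# shape (n_timepoints, n_electrodes). In the real analysis these would
# be the multisensory ERP maps after subtracting the summed unisensory ERPs.
n_time, n_elec = 250, 128
av_maps = rng.standard_normal((n_time, n_elec))
va_maps = rng.standard_normal((n_time, n_elec))

def spatial_correlation(a, b):
    """Pearson correlation between two scalp maps (one time point each)."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Time-resolved similarity: one AV-vs-VA map correlation per time point.
similarity = np.array([spatial_correlation(av_maps[t], va_maps[t])
                       for t in range(n_time)])

# Under the "AV maps = VA maps" model this curve should be high;
# under "AV maps != VA maps" it should hover near zero.
print(similarity.shape)  # (250,)
```

The model comparison in the paper then correlates such similarity matrices against the two candidate model matrices; the sketch above only illustrates the per-time-point map correlation.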
Salicylate-induced cochlear impairments, cortical hyperactivity and re-tuning, and tinnitus.
Chen, Guang-Di; Stolzberg, Daniel; Lobarinas, Edward; Sun, Wei; Ding, Dalian; Salvi, Richard
2013-01-01
High doses of sodium salicylate (SS) have long been known to induce temporary hearing loss and tinnitus, effects attributed to cochlear dysfunction. However, our recent publications reviewed here show that SS can induce profound, permanent, and unexpected changes in the cochlea and central nervous system. Prolonged treatment with SS permanently decreased the cochlear compound action potential (CAP) amplitude in vivo. In vitro, high dose SS resulted in a permanent loss of spiral ganglion neurons and nerve fibers, but did not damage hair cells. Acute treatment with high-dose SS produced a frequency-dependent decrease in the amplitude of distortion product otoacoustic emissions and CAP. Losses were greatest at low and high frequencies, but least at the mid-frequencies (10-20 kHz), the mid-frequency band that corresponds to the tinnitus pitch measured behaviorally. In the auditory cortex, medial geniculate body and amygdala, high-dose SS enhanced sound-evoked neural responses at high stimulus levels, but it suppressed activity at low intensities and elevated response threshold. When SS was applied directly to the auditory cortex or amygdala, it only enhanced sound evoked activity, but did not elevate response threshold. Current source density analysis revealed enhanced current flow into the supragranular layer of auditory cortex following systemic SS treatment. Systemic SS treatment also altered tuning in auditory cortex and amygdala; low frequency and high frequency multiunit clusters up-shifted or down-shifted their characteristic frequency into the 10-20 kHz range thereby altering auditory cortex tonotopy and enhancing neural activity at mid-frequencies corresponding to the tinnitus pitch. 
These results suggest that SS-induced hyperactivity in auditory cortex originates in the central nervous system, that the amygdala potentiates these effects and that the SS-induced tonotopic shifts in auditory cortex, the putative neural correlate of tinnitus, arises from the interaction between the frequency-dependent losses in the cochlea and hyperactivity in the central nervous system. Copyright © 2012 Elsevier B.V. All rights reserved.
Qiao, Zhengxue; Yang, Aiying; Qiu, Xiaohui; Yang, Xiuxian; Zhang, Congpei; Zhu, Xiongzhao; He, Jincai; Wang, Lin; Bai, Bing; Sun, Hailian; Zhao, Lun; Yang, Yanjie
2015-10-30
Gender differences in rates of major depressive disorder (MDD) are well established, but gender differences in cognitive function have been little studied. Auditory mismatch negativity (MMN) was used to investigate gender differences in pre-attentive information processing in first episode MDD. In the deviant-standard reverse oddball paradigm, duration auditory MMN was obtained in 30 patients (15 males) and 30 age-/education-matched controls. Over frontal-central areas, mean amplitude of increment MMN (to a 150-ms deviant tone) was smaller in female than male patients; there was no sex difference in decrement MMN (to a 50-ms deviant tone). Neither increment nor decrement MMN differed between female and male patients over temporal areas. Frontal-central MMN and temporal MMN did not differ between male and female controls in any condition. Over frontal-central areas, mean amplitude of increment MMN was smaller in female patients than female controls; there was no difference in decrement MMN. Neither increment nor decrement MMN differed between female patients and female controls over temporal areas. Frontal-central MMN and temporal MMN did not differ between male patients and male controls. Mean amplitude of increment MMN in female patients did not correlate with symptoms, suggesting this sex-specific deficit is a trait- not a state-dependent phenomenon. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Spectrotemporal dynamics of auditory cortical synaptic receptive field plasticity.
Froemke, Robert C; Martins, Ana Raquel O
2011-09-01
The nervous system must dynamically represent sensory information in order for animals to perceive and operate within a complex, changing environment. Receptive field plasticity in the auditory cortex allows cortical networks to organize around salient features of the sensory environment during postnatal development, and then subsequently refine these representations depending on behavioral context later in life. Here we review the major features of auditory cortical receptive field plasticity in young and adult animals, focusing on modifications to frequency tuning of synaptic inputs. Alteration in the patterns of acoustic input, including sensory deprivation and tonal exposure, leads to rapid adjustments of excitatory and inhibitory strengths that collectively determine the suprathreshold tuning curves of cortical neurons. Long-term cortical plasticity also requires co-activation of subcortical neuromodulatory control nuclei such as the cholinergic nucleus basalis, particularly in adults. Regardless of developmental stage, regulation of inhibition seems to be a general mechanism by which changes in sensory experience and neuromodulatory state can remodel cortical receptive fields. We discuss recent findings suggesting that the microdynamics of synaptic receptive field plasticity unfold as a multi-phase set of distinct phenomena, initiated by disrupting the balance between excitation and inhibition, and eventually leading to wide-scale changes to many synapses throughout the cortex. These changes are coordinated to enhance the representations of newly-significant stimuli, possibly for improved signal processing and language learning in humans. Copyright © 2011 Elsevier B.V. All rights reserved.
Chen, Yu-Chen; Li, Xiaowei; Liu, Lijie; Wang, Jian; Lu, Chun-Qiang; Yang, Ming; Jiao, Yun; Zang, Feng-Chao; Radziwon, Kelly; Chen, Guang-Di; Sun, Wei; Krishnan Muthaiah, Vijaya Prakash; Salvi, Richard; Teng, Gao-Jun
2015-01-01
Hearing loss often triggers an inescapable buzz (tinnitus) and causes everyday sounds to become intolerably loud (hyperacusis), but exactly where and how this occurs in the brain is unknown. To identify the neural substrate for these debilitating disorders, we induced both tinnitus and hyperacusis with an ototoxic drug (salicylate) and used behavioral, electrophysiological, and functional magnetic resonance imaging (fMRI) techniques to identify the tinnitus–hyperacusis network. Salicylate depressed the neural output of the cochlea, but vigorously amplified sound-evoked neural responses in the amygdala, medial geniculate, and auditory cortex. Resting-state fMRI revealed hyperactivity in an auditory network composed of inferior colliculus, medial geniculate, and auditory cortex with side branches to cerebellum, amygdala, and reticular formation. Functional connectivity revealed enhanced coupling within the auditory network and segments of the auditory network and cerebellum, reticular formation, amygdala, and hippocampus. A testable model accounting for distress, arousal, and gating of tinnitus and hyperacusis is proposed. DOI: http://dx.doi.org/10.7554/eLife.06576.001 PMID:25962854
Most, Tova; Aviner, Chen
2009-01-01
This study evaluated the benefits of cochlear implant (CI) with regard to emotion perception of participants differing in their age of implantation, in comparison to hearing aid users and adolescents with normal hearing (NH). Emotion perception was examined by having the participants identify happiness, anger, surprise, sadness, fear, and disgust. The emotional content was placed upon the same neutral sentence. The stimuli were presented in auditory, visual, and combined auditory-visual modes. The results revealed better auditory identification by the participants with NH in comparison to all groups of participants with hearing loss (HL). No differences were found among the groups with HL in each of the 3 modes. Although auditory-visual perception was better than visual-only perception for the participants with NH, no such differentiation was found among the participants with HL. The results question the efficiency of some currently used CIs in providing the acoustic cues required to identify the speaker's emotional state.
Modulating Human Auditory Processing by Transcranial Electrical Stimulation
Heimrath, Kai; Fiene, Marina; Rufener, Katharina S.; Zaehle, Tino
2016-01-01
Transcranial electrical stimulation (tES) has become a valuable research tool for the investigation of neurophysiological processes underlying human action and cognition. In recent years, striking evidence for the neuromodulatory effects of transcranial direct current stimulation, transcranial alternating current stimulation, and transcranial random noise stimulation has emerged. While the wealth of knowledge has been gained about tES in the motor domain and, to a lesser extent, about its ability to modulate human cognition, surprisingly little is known about its impact on perceptual processing, particularly in the auditory domain. Moreover, while only a few studies systematically investigated the impact of auditory tES, it has already been applied in a large number of clinical trials, leading to a remarkable imbalance between basic and clinical research on auditory tES. Here, we review the state of the art of tES application in the auditory domain focussing on the impact of neuromodulation on acoustic perception and its potential for clinical application in the treatment of auditory related disorders. PMID:27013969
Ponnath, Abhilash; Farris, Hamilton E.
2014-01-01
Descending circuitry can modulate auditory processing, biasing sensitivity to particular stimulus parameters and locations. Using awake in vivo single unit recordings, this study tested whether electrical stimulation of the thalamus modulates auditory excitability and relative binaural sensitivity in neurons of the amphibian midbrain. In addition, by using electrical stimuli that were either longer than the acoustic stimuli (i.e., seconds) or presented on a sound-by-sound basis (ms), experiments addressed whether the form of modulation depended on the temporal structure of the electrical stimulus. Following long duration electrical stimulation (3–10 s of 20 Hz square pulses), excitability (spikes/acoustic stimulus) to free-field noise stimuli decreased by 32%, but returned over 600 s. In contrast, sound-by-sound electrical stimulation using a single 2 ms duration electrical pulse 25 ms before each noise stimulus caused faster and varied forms of modulation: modulation lasted <2 s and, in different cells, excitability either decreased, increased or shifted in latency. Within cells, the modulatory effect of sound-by-sound electrical stimulation varied between different acoustic stimuli, including for different male calls, suggesting modulation is specific to certain stimulus attributes. For binaural units, modulation depended on the ear of input, as sound-by-sound electrical stimulation preceding dichotic acoustic stimulation caused asymmetric modulatory effects: sensitivity shifted for sounds at only one ear, or by different relative amounts for both ears. This caused a change in the relative difference in binaural sensitivity. Thus, sound-by-sound electrical stimulation revealed fast and ear-specific (i.e., lateralized) auditory modulation that is potentially suited to shifts in auditory attention during sound segregation in the auditory scene. PMID:25120437
Stekelenburg, Jeroen J; Keetels, Mirjam
2016-05-01
The Colavita effect refers to the phenomenon that when confronted with an audiovisual stimulus, observers report more often to have perceived the visual than the auditory component. The Colavita effect depends on low-level stimulus factors such as spatial and temporal proximity between the unimodal signals. Here, we examined whether the Colavita effect is modulated by synesthetic congruency between visual size and auditory pitch. If the Colavita effect depends on synesthetic congruency, we expect a larger Colavita effect for synesthetically congruent size/pitch (large visual stimulus/low-pitched tone; small visual stimulus/high-pitched tone) than synesthetically incongruent (large visual stimulus/high-pitched tone; small visual stimulus/low-pitched tone) combinations. Participants had to identify stimulus type (visual, auditory or audiovisual). The study replicated the Colavita effect because participants reported more often the visual than auditory component of the audiovisual stimuli. Synesthetic congruency had, however, no effect on the magnitude of the Colavita effect. EEG recordings to congruent and incongruent audiovisual pairings showed a late frontal congruency effect at 400-550 ms and an occipitoparietal effect at 690-800 ms with neural sources in the anterior cingulate and premotor cortex for the 400- to 550-ms window and premotor cortex, inferior parietal lobule and the posterior middle temporal gyrus for the 690- to 800-ms window. The electrophysiological data show that synesthetic congruency was probably detected in a processing stage subsequent to the Colavita effect. We conclude that-in a modality detection task-the Colavita effect can be modulated by low-level structural factors but not by higher-order associations between auditory and visual inputs.
Cortical Development and Neuroplasticity in Auditory Neuropathy Spectrum Disorder
Sharma, Anu; Cardon, Garrett
2015-01-01
Cortical development is dependent to a large extent on stimulus-driven input. Auditory Neuropathy Spectrum Disorder (ANSD) is a recently described form of hearing impairment where neural dys-synchrony is the predominant characteristic. Children with ANSD provide a unique platform to examine the effects of asynchronous and degraded afferent stimulation on cortical auditory neuroplasticity and behavioral processing of sound. In this review, we describe patterns of auditory cortical maturation in children with ANSD. The disruption of cortical maturation that leads to these various patterns includes high levels of intra-individual cortical variability and deficits in cortical phase synchronization of oscillatory neural responses. These neurodevelopmental changes, which are constrained by sensitive periods for central auditory maturation, are correlated with behavioral outcomes for children with ANSD. Overall, we hypothesize that patterns of cortical development in children with ANSD appear to be markers of the severity of the underlying neural dys-synchrony, providing prognostic indicators of success of clinical intervention with amplification and/or electrical stimulation. PMID:26070426
Visual attention modulates brain activation to angry voices.
Mothes-Lasch, Martin; Mentzel, Hans-Joachim; Miltner, Wolfgang H R; Straube, Thomas
2011-06-29
In accordance with influential models proposing prioritized processing of threat, previous studies have shown automatic brain responses to angry prosody in the amygdala and the auditory cortex under auditory distraction conditions. However, it is unknown whether the automatic processing of angry prosody is also observed during cross-modal distraction. The current fMRI study investigated brain responses to angry versus neutral prosodic stimuli during visual distraction. During scanning, participants were exposed to angry or neutral prosodic stimuli while visual symbols were displayed simultaneously. By means of task requirements, participants either attended to the voices or to the visual stimuli. While the auditory task revealed pronounced activation in the auditory cortex and amygdala to angry versus neutral prosody, this effect was absent during the visual task. Thus, our results show a limitation of the automaticity of the activation of the amygdala and auditory cortex to angry prosody. The activation of these areas to threat-related voices depends on modality-specific attention.
AUDITORY ASSOCIATIVE MEMORY AND REPRESENTATIONAL PLASTICITY IN THE PRIMARY AUDITORY CORTEX
Weinberger, Norman M.
2009-01-01
Historically, the primary auditory cortex has been largely ignored as a substrate of auditory memory, perhaps because studies of associative learning could not reveal the plasticity of receptive fields (RFs). The use of a unified experimental design, in which RFs are obtained before and after standard training (e.g., classical and instrumental conditioning) revealed associative representational plasticity, characterized by facilitation of responses to tonal conditioned stimuli (CSs) at the expense of other frequencies, producing CS-specific tuning shifts. Associative representational plasticity (ARP) possesses the major attributes of associative memory: it is highly specific, discriminative, rapidly acquired, consolidates over hours and days and can be retained indefinitely. The nucleus basalis cholinergic system is sufficient both for the induction of ARP and for the induction of specific auditory memory, including control of the amount of remembered acoustic details. Extant controversies regarding the form, function and neural substrates of ARP appear largely to reflect different assumptions, which are explicitly discussed. The view that the forms of plasticity are task-dependent is supported by ongoing studies in which auditory learning involves CS-specific decreases in threshold or bandwidth without affecting frequency tuning. Future research needs to focus on the factors that determine ARP and their functions in hearing and in auditory memory. PMID:17344002
Modulation of cannabinoid signaling by amygdala α2-adrenergic system in fear conditioning.
Nasehi, Mohammad; Zamanparvar, Majid; Ebrahimi-Ghiri, Mohaddeseh; Zarrindast, Mohammad-Reza
2016-03-01
The noradrenergic system plays a critical role in the modulation of emotional state, primarily related to anxiety, arousal, and stress. Growing evidence suggests that the endocannabinoid system mediates stress responses and emotional homeostasis, in part, by targeting noradrenergic circuits. In addition, there is an interaction between the cannabinoid and noradrenergic systems that has significant functional and behavioral implications. Considering the importance of these systems in forming memories for fearful events, we investigated the involvement of basolateral amygdala (BLA) α2-adrenoceptors in ACPA (a selective cannabinoid CB1 agonist)-induced inhibition of the acquisition of contextual and auditory conditioned fear. A contextual and auditory fear conditioning apparatus was used to assess fear memory in adult male NMRI mice. Pre-training intraperitoneal administration of ACPA decreased the percentage of freezing time in the contextual (at doses of 0.05 and 0.1 mg/kg) and auditory (at a dose of 0.1 mg/kg) fear conditioning tasks, indicating a memory acquisition deficit. The same result was observed with intra-BLA microinjection of clonidine (0.001-0.5 μg/mouse, for both memories), an α2-adrenoceptor agonist, and yohimbine (at doses of 0.005 and 0.05 μg/mouse for contextual and 0.05 μg/mouse for auditory fear memory), an α2-adrenoceptor antagonist. In addition, intra-BLA microinjection of clonidine (0.0005 μg/mouse) did not alter the ACPA response in either condition, while the same dose of yohimbine potentiated the ACPA response at the lower dose on contextual fear memory. It is concluded that BLA α2-adrenergic receptors may be involved in context- but not tone-dependent fear memory impairment induced by activation of CB1 receptors. Copyright © 2015. Published by Elsevier B.V.
Effect of signal to noise ratio on the speech perception ability of older adults
Shojaei, Elahe; Ashayeri, Hassan; Jafari, Zahra; Zarrin Dast, Mohammad Reza; Kamali, Koorosh
2016-01-01
Background: Speech perception ability depends on auditory and extra-auditory elements. The signal-to-noise ratio (SNR) is an extra-auditory element that affects the ability to follow speech normally and maintain a conversation. Difficulty perceiving speech in noise is a common complaint of the elderly. In this study, the importance of SNR magnitude as an extra-auditory effect on speech perception in noise was examined in the elderly. Methods: The speech perception in noise (SPIN) test was conducted on 25 elderly participants who had bilateral low–mid frequency normal hearing thresholds at three SNRs in the presence of ipsilateral white noise. Participants were selected by convenience sampling. Cognitive screening was done using the Persian Mini Mental State Examination (MMSE). Results: Independent t-tests, ANOVA, and Pearson correlation were used for statistical analysis. There was a significant difference in word discrimination scores in silence and at the three SNRs in both ears (p≤0.047). Moreover, there was a significant difference in word discrimination scores for paired SNRs (0 and +5, 0 and +10, and +5 and +10; p≤0.04). No significant correlation was found between age and word recognition scores in silence or at the three SNRs in either ear (p≥0.386). Conclusion: Our results revealed that decreasing the signal level and increasing the competing noise considerably reduced speech perception ability in elderly listeners with normal low–mid frequency hearing thresholds. These results support the critical role of SNR in speech perception ability in the elderly. Furthermore, they indicate that normal-hearing elderly listeners require compensatory strategies to maintain normal speech perception in challenging acoustic situations. PMID:27390712
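The SNR manipulation underlying studies like this one is simple to express numerically: SNR in dB is ten times the base-10 log of the signal-to-noise power ratio, and the noise can be rescaled to hit a target SNR. A small sketch using synthetic waveforms (the function names and signals are illustrative assumptions, not the study's stimuli):

```python
import numpy as np

def snr_db(signal, noise):
    """Signal-to-noise ratio in dB from two equal-length waveforms."""
    p_signal = np.mean(np.square(signal))
    p_noise = np.mean(np.square(noise))
    return 10.0 * np.log10(p_signal / p_noise)

def scale_noise_for_snr(signal, noise, target_db):
    """Rescale the noise so that snr_db(signal, scaled_noise) == target_db."""
    current = snr_db(signal, noise)
    # Amplitude scale factor: each 20 dB of SNR change is a factor of 10.
    return noise * 10 ** ((current - target_db) / 20.0)

rng = np.random.default_rng(1)
speech = rng.standard_normal(16000)  # stand-in for a speech waveform
noise = rng.standard_normal(16000)   # stand-in for white masking noise

# Produce the 0, +5, and +10 dB SNR conditions used in the study design.
for target in (0, 5, 10):
    scaled = scale_noise_for_snr(speech, noise, target)
    print(round(snr_db(speech, scaled), 1))
```

The design choice of fixing the speech level and rescaling the noise (rather than the reverse) keeps the target signal at a constant presentation level across conditions.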
Anomal, Renata; de Villers-Sidani, Etienne; Merzenich, Michael M; Panizzutti, Rogerio
2013-01-01
Sensory experience powerfully shapes cortical sensory representations during an early developmental "critical period" of plasticity. In the rat primary auditory cortex (A1), the experience-dependent plasticity is exemplified by significant, long-lasting distortions in frequency representation after mere exposure to repetitive frequencies during the second week of life. In the visual system, the normal unfolding of critical period plasticity is strongly dependent on the elaboration of brain-derived neurotrophic factor (BDNF), which promotes the establishment of inhibition. Here, we tested the hypothesis that BDNF signaling plays a role in the experience-dependent plasticity induced by pure tone exposure during the critical period in the primary auditory cortex. Elvax resin implants filled with either a blocking antibody against BDNF or the BDNF protein were placed on the A1 of rat pups throughout the critical period window. These pups were then exposed to 7 kHz pure tone for 7 consecutive days and their frequency representations were mapped. BDNF blockade completely prevented the shaping of cortical tuning by experience and resulted in poor overall frequency tuning in A1. By contrast, BDNF infusion on the developing A1 amplified the effect of 7 kHz tone exposure compared to control. These results indicate that BDNF signaling participates in the experience-dependent plasticity induced by pure tone exposure during the critical period in A1.
Psychophysical and Neural Correlates of Auditory Attraction and Aversion
NASA Astrophysics Data System (ADS)
Patten, Kristopher Jakob
This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based on auditory scene analysis, the process through which listeners parse auditory signals into individual acoustic sources. The first experiment tests and confirms that a self-rated pleasantness continuum reliably exists for 20 various stimuli (r = .48). In addition, the pleasantness continuum correlated with the physical acoustic characteristics of consonance/dissonance (r = .78), which can facilitate auditory parsing processes. The second experiment uses an fMRI block design to test blood oxygen level dependent (BOLD) changes elicited by a subset of 5 exemplar stimuli chosen from Experiment 1 that are evenly distributed over the pleasantness continuum. Specifically, it tests and confirms that the pleasantness continuum produces systematic changes in brain activity for unpleasant acoustic stimuli beyond what occurs with pleasant auditory stimuli. Results revealed that the combination of two positively and two negatively valenced experimental sounds compared to one neutral baseline control elicited BOLD increases in the primary auditory cortex, specifically the bilateral superior temporal gyrus, and left dorsomedial prefrontal cortex; the latter being consistent with a frontal decision-making process common in identification tasks. The negatively-valenced stimuli yielded additional BOLD increases in the left insula, which typically indicates processing of visceral emotions. The positively-valenced stimuli did not yield any significant BOLD activation, consistent with consonant, harmonic stimuli being the prototypical acoustic pattern of auditory objects that is optimal for auditory scene analysis. 
Both the psychophysical findings of Experiment 1 and the neural processing findings of Experiment 2 support that consonance is an important dimension of sound that is processed in a manner that aids auditory parsing and functional representation of acoustic objects and was found to be a principal feature of pleasing auditory stimuli.
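The pleasantness-consonance relationship reported in Experiment 1 is a Pearson correlation across the 20 stimuli. A hedged sketch of that computation with made-up illustrative ratings (the numbers below are not the study's data, and the 0-1 consonance index is an assumed scale):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two rating vectors."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical values for 20 stimuli: mean self-rated pleasantness and an
# acoustic consonance index. Purely illustrative, not the dissertation's data.
pleasantness = [2.1, 3.4, 5.6, 4.2, 6.1, 1.8, 2.9, 5.0, 4.8, 3.1,
                6.3, 2.4, 3.9, 5.4, 4.5, 1.9, 2.7, 5.9, 4.0, 3.6]
consonance   = [0.2, 0.4, 0.8, 0.5, 0.9, 0.1, 0.3, 0.7, 0.7, 0.4,
                0.9, 0.2, 0.5, 0.8, 0.6, 0.1, 0.3, 0.9, 0.5, 0.4]

r = pearson_r(pleasantness, consonance)
print(round(r, 2))
```

With real ratings, an r of .78 (as reported above) would indicate that consonance accounts for roughly 60% of the variance (r²) in rated pleasantness.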
Sharma, Anu; Campbell, Julia; Cardon, Garrett
2015-02-01
Cortical development is dependent on extrinsic stimulation. As such, sensory deprivation, as in congenital deafness, can dramatically alter functional connectivity and growth in the auditory system. Cochlear implants ameliorate deprivation-induced delays in maturation by directly stimulating the central nervous system, and thereby restoring auditory input. The scenario in which hearing is lost due to deafness and then reestablished via a cochlear implant provides a window into the development of the central auditory system. Converging evidence from electrophysiologic and brain imaging studies of deaf animals and children fitted with cochlear implants has allowed us to elucidate the details of the time course for auditory cortical maturation under conditions of deprivation. Here, we review how the P1 cortical auditory evoked potential (CAEP) provides useful insight into sensitive period cut-offs for development of the primary auditory cortex in deaf children fitted with cochlear implants. Additionally, we present new data on similar sensitive period dynamics in higher-order auditory cortices, as measured by the N1 CAEP in cochlear implant recipients. Furthermore, cortical re-organization, secondary to sensory deprivation, may take the form of compensatory cross-modal plasticity. We provide new case-study evidence that cross-modal re-organization, in which intact sensory modalities (i.e., vision and somatosensation) recruit cortical regions associated with deficient sensory modalities (i.e., auditory) in cochlear implanted children may influence their behavioral outcomes with the implant. Improvements in our understanding of developmental neuroplasticity in the auditory system should lead to harnessing central auditory plasticity for superior clinical technique. Copyright © 2014 Elsevier B.V. All rights reserved.
Effect of Human Auditory Efferent Feedback on Cochlear Gain and Compression
Drga, Vit; Plack, Christopher J.
2014-01-01
The mammalian auditory system includes a brainstem-mediated efferent pathway from the superior olivary complex by way of the medial olivocochlear system, which reduces the cochlear response to sound (Warr and Guinan, 1979; Liberman et al., 1996). The human medial olivocochlear response has an onset delay of between 25 and 40 ms and rise and decay constants in the region of 280 and 160 ms, respectively (Backus and Guinan, 2006). Physiological studies with nonhuman mammals indicate that onset and decay characteristics of efferent activation are dependent on the temporal and level characteristics of the auditory stimulus (Bacon and Smith, 1991; Guinan and Stankovic, 1996). This study uses a novel psychoacoustical masking technique using a precursor sound to obtain a measure of the efferent effect in humans. This technique avoids confounds currently associated with other psychoacoustical measures. Both temporal and level dependency of the efferent effect was measured, providing a comprehensive measure of the effect of human auditory efferents on cochlear gain and compression. Results indicate that a precursor (>20 dB SPL) induced efferent activation, resulting in a decrease in both maximum gain and maximum compression, with linearization of the compressive function for input sound levels between 50 and 70 dB SPL. Estimated gain decreased as precursor level increased, and increased as the silent interval between the precursor and combined masker-signal stimulus increased, consistent with a decay of the efferent effect. Human auditory efferent activation linearizes the cochlear response for mid-level sounds while reducing maximum gain. PMID:25392499
Multisensory Integration Strategy for Modality-Specific Loss of Inhibition Control in Older Adults.
Lee, Ahreum; Ryu, Hokyoung; Kim, Jae-Kwan; Jeong, Eunju
2018-04-11
Older adults are known to have less cognitive control capability and greater susceptibility to distraction than young adults. Previous studies have reported age-related problems in selective attention and inhibitory control, yielding mixed results depending on the modality and context in which stimuli and tasks were presented. The purpose of the study was to empirically demonstrate a modality-specific loss of inhibitory control in processing audio-visual information with ageing. A group of 30 young adults (mean age = 25.23, standard deviation (SD) = 1.86) and 22 older adults (mean age = 55.91, SD = 4.92) performed the audio-visual contour identification task (AV-CIT). We compared performance of visual/auditory identification (Uni-V, Uni-A) with that of visual/auditory identification in the presence of distraction in the counterpart modality (Multi-V, Multi-A). The findings showed a modality-specific effect on inhibitory control. Uni-V performance was significantly better than Multi-V, indicating that auditory distraction significantly hampered visual target identification. However, Multi-A performance was significantly enhanced compared to Uni-A, indicating that auditory target performance was significantly enhanced by visual distraction. Additional analysis showed an age-specific effect on enhancement between Uni-A and Multi-A depending on the level of visual inhibition. Together, our findings indicated that the loss of visual inhibitory control was beneficial for auditory target identification presented in a multimodal context in older adults. A likely multisensory information processing strategy in the older adults is further discussed in relation to aged cognition.
Language networks in anophthalmia: maintained hierarchy of processing in 'visual' cortex.
Watkins, Kate E; Cowey, Alan; Alexander, Iona; Filippini, Nicola; Kennedy, James M; Smith, Stephen M; Ragge, Nicola; Bridge, Holly
2012-05-01
Imaging studies in blind subjects have consistently shown that sensory and cognitive tasks evoke activity in the occipital cortex, which is normally visual. The precise areas involved and degree of activation are dependent upon the cause and age of onset of blindness. Here, we investigated the cortical language network at rest and during an auditory covert naming task in five bilaterally anophthalmic subjects, who have never received visual input. When listening to auditory definitions and covertly retrieving words, these subjects activated lateral occipital cortex bilaterally in addition to the language areas activated in sighted controls. This activity was significantly greater than that present in a control condition of listening to reversed speech. The lateral occipital cortex was also recruited into a left-lateralized resting-state network that usually comprises anterior and posterior language areas. Levels of activation to the auditory naming and reversed speech conditions did not differ in the calcarine (striate) cortex. This primary 'visual' cortex was not recruited to the left-lateralized resting-state network and showed high interhemispheric correlation of activity at rest, as is typically seen in unimodal cortical areas. In contrast, the interhemispheric correlation of resting activity in extrastriate areas was reduced in anophthalmia to the level of cortical areas that are heteromodal, such as the inferior frontal gyrus. Previous imaging studies in the congenitally blind show that primary visual cortex is activated in higher-order tasks, such as language and memory to a greater extent than during more basic sensory processing, resulting in a reversal of the normal hierarchy of functional organization across 'visual' areas. Our data do not support such a pattern of organization in anophthalmia. 
Instead, the patterns of activity during the task and the functional connectivity at rest are consistent with the known hierarchy of processing in these areas normally seen for vision. The differences in cortical organization between bilateral anophthalmia and other forms of congenital blindness are considered to be due to the total absence of stimulation of 'visual' cortex by light or retinal activity in the former condition, suggesting the development of subcortical auditory input to the geniculo-striate pathway.
Integrated trimodal SSEP experimental setup for visual, auditory and tactile stimulation
NASA Astrophysics Data System (ADS)
Kuś, Rafał; Spustek, Tomasz; Zieleniewska, Magdalena; Duszyk, Anna; Rogowski, Piotr; Suffczyński, Piotr
2017-12-01
Objective. Steady-state evoked potentials (SSEPs), the brain responses to repetitive stimulation, are commonly used in both clinical practice and scientific research. Particular brain mechanisms underlying SSEPs in different modalities (i.e. visual, auditory and tactile) are very complex and still not completely understood. Each response has distinct resonant frequencies and exhibits a particular brain topography. Moreover, the topography can be frequency-dependent, as in case of auditory potentials. However, to study each modality separately and also to investigate multisensory interactions through multimodal experiments, a proper experimental setup appears to be of critical importance. The aim of this study was to design and evaluate a novel SSEP experimental setup providing a repetitive stimulation in three different modalities (visual, tactile and auditory) with a precise control of stimuli parameters. Results from a pilot study with a stimulation in a particular modality and in two modalities simultaneously prove the feasibility of the device to study SSEP phenomenon. Approach. We developed a setup of three separate stimulators that allows for a precise generation of repetitive stimuli. Besides sequential stimulation in a particular modality, parallel stimulation in up to three different modalities can be delivered. Stimulus in each modality is characterized by a stimulation frequency and a waveform (sine or square wave). We also present a novel methodology for the analysis of SSEPs. Main results. Apart from constructing the experimental setup, we conducted a pilot study with both sequential and simultaneous stimulation paradigms. EEG signals recorded during this study were analyzed with advanced methodology based on spatial filtering and adaptive approximation, followed by statistical evaluation. Significance. We developed a novel experimental setup for performing SSEP experiments. In this sense our study continues the ongoing research in this field. 
On the other hand, the described setup, along with the presented methodology, is a considerable improvement on and extension of state-of-the-art methods in the field. The flexibility of the device, together with the developed analysis methodology, can lead to further development of diagnostic methods and provide deeper insight into information processing in the human brain.
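The stimulus generation the setup requires (a sine- or square-wave modulation at a modality-specific frequency, delivered sequentially or in parallel) might be sketched as follows; the frequencies, duration, and sampling rate below are arbitrary examples, not the parameters used in the study:

```python
import numpy as np

def steady_state_stimulus(freq_hz, duration_s, fs=44100, waveform="sine"):
    """Generate a repetitive stimulation envelope for one modality.

    `waveform` selects a sine or square wave at the stimulation
    frequency; values lie in [0, 1] so the envelope can modulate any
    carrier (light intensity, tone amplitude, vibration).
    """
    t = np.arange(int(duration_s * fs)) / fs
    if waveform == "sine":
        return 0.5 * (1.0 + np.sin(2 * np.pi * freq_hz * t))
    if waveform == "square":
        return (np.sin(2 * np.pi * freq_hz * t) >= 0).astype(float)
    raise ValueError("waveform must be 'sine' or 'square'")

# Parallel stimulation: an independent frequency per modality
envelopes = {
    "visual": steady_state_stimulus(15.0, 2.0),
    "auditory": steady_state_stimulus(40.0, 2.0),
    "tactile": steady_state_stimulus(23.0, 2.0, waveform="square"),
}
```

Distinct stimulation frequencies per modality let the corresponding SSEP responses be separated in the EEG spectrum during simultaneous stimulation.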
Horiuchi, Timothy K.
2011-01-01
Short-term synaptic plasticity acts as a time- and firing rate-dependent filter that mediates the transmission of information across synapses. In the avian auditory brainstem, specific forms of plasticity are expressed at different terminals of the same auditory nerve fibers and contribute to the divergence of acoustic timing and intensity information. To identify key differences in the plasticity properties, we made patch-clamp recordings from neurons in the cochlear nucleus responsible for intensity coding, nucleus angularis, and measured the time course of the recovery of excitatory postsynaptic currents following short-term synaptic depression. These synaptic responses showed a very rapid recovery, following a bi-exponential time course with a fast time constant of ~40 ms and a dependence on the presynaptic activity levels, resulting in a crossing over of the recovery trajectories following high-rate versus low-rate stimulation trains. We also show that the recorded recovery in the intensity pathway differs from similar recordings in the timing pathway, specifically the cochlear nucleus magnocellularis, in two ways: (1) a fast recovery that was not due to recovery from postsynaptic receptor desensitization and (2) a recovery trajectory that was characterized by a non-monotonic bump that may be due in part to facilitation mechanisms more prevalent in the intensity pathway. We tested whether a previously proposed model of synaptic transmission based on vesicle depletion and sequential steps of vesicle replenishment could account for the recovery responses, and found it was insufficient, suggesting an activity-dependent feedback mechanism is present. We propose that the rapid recovery following depression allows improved coding of natural auditory signals that often consist of sound bursts separated by short gaps. PMID:21409439
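The recovery time course described above can be written as a bi-exponential function; a minimal NumPy sketch follows (the fast time constant uses the reported ~40 ms, while the slow component's amplitude and time constant are invented for illustration):

```python
import numpy as np

def recovery(t_ms, a_fast=0.6, tau_fast=40.0, a_slow=0.4, tau_slow=1000.0):
    """Fraction of the synaptic response recovered t ms after depression.

    Bi-exponential recovery: a fast component (tau ~40 ms, as reported
    for nucleus angularis) plus a slower component whose amplitude and
    time constant here are illustrative placeholders.
    """
    return (1.0
            - a_fast * np.exp(-t_ms / tau_fast)
            - a_slow * np.exp(-t_ms / tau_slow))
```

At t = 0 the response is fully depressed (recovery = 0) and it approaches full recovery monotonically; the fast component dominates over the first tens of milliseconds, consistent with coding of sounds separated by short gaps.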
Categorization of extremely brief auditory stimuli: domain-specific or domain-general processes?
Bigand, Emmanuel; Delbé, Charles; Gérard, Yannick; Tillmann, Barbara
2011-01-01
The present study investigated the minimum amount of auditory stimulation that allows differentiation of spoken voices, instrumental music, and environmental sounds. Three new findings were reported. 1) All stimuli were categorized above chance level with 50-ms segments. 2) When a peak-level normalization was applied, music and voices started to be accurately categorized with 20-ms segments. When the root-mean-square (RMS) energy of the stimuli was equalized, voice stimuli were better recognized than music and environmental sounds. 3) Further psychoacoustical analyses suggest that the categorization of extremely brief auditory stimuli depends on the variability of their spectral envelope within the stimulus set used. These last two findings challenge the interpretation of the voice superiority effect reported in previously published studies and propose a more parsimonious interpretation in terms of an emerging property of auditory categorization processes.
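The two level-normalization schemes the study compares can be sketched in a few lines (a hedged NumPy illustration on synthetic noise, not the authors' stimulus set; the 0.1 RMS target is an arbitrary choice):

```python
import numpy as np

def peak_normalize(x, peak=1.0):
    """Scale a segment so its maximum absolute sample equals `peak`."""
    return x * (peak / np.max(np.abs(x)))

def rms_normalize(x, target_rms=0.1):
    """Scale a segment so its root-mean-square energy equals `target_rms`."""
    return x * (target_rms / np.sqrt(np.mean(x ** 2)))

# A 20-ms segment at 44.1 kHz is only 882 samples long
fs = 44100
segment = np.random.default_rng(0).normal(size=int(0.020 * fs))
```

Peak normalization equates maximum amplitude while leaving overall energy free to vary; RMS equalization equates energy while peak levels vary, which is why the two schemes can yield different categorization results.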
Cardon, Garrett; Campbell, Julia; Sharma, Anu
2013-01-01
The developing auditory cortex is highly plastic. As such, the cortex is both primed to mature normally and at risk for re-organizing abnormally, depending upon numerous factors that determine central maturation. From a clinical perspective, at least two major components of development can be manipulated: 1) input to the cortex and 2) the timing of cortical input. Children with sensorineural hearing loss (SNHL) and auditory neuropathy spectrum disorder (ANSD) have provided a model of early deprivation of sensory input to the cortex, and demonstrated the resulting plasticity and development that can occur upon introduction of stimulation. In this article, we review several fundamental principles of cortical development and plasticity and discuss the clinical applications in children with SNHL and ANSD who receive intervention with hearing aids and/or cochlear implants. PMID:22668761
Auditory cortical function during verbal episodic memory encoding in Alzheimer's disease.
Dhanjal, Novraj S; Warren, Jane E; Patel, Maneesh C; Wise, Richard J S
2013-02-01
Episodic memory encoding of a verbal message depends upon initial registration, which requires sustained auditory attention followed by deep semantic processing of the message. Motivated by previous data demonstrating modulation of auditory cortical activity during sustained attention to auditory stimuli, we investigated the response of the human auditory cortex during encoding of sentences to episodic memory. Subsequently, we investigated this response in patients with mild cognitive impairment (MCI) and probable Alzheimer's disease (pAD). Using functional magnetic resonance imaging, 31 healthy participants were studied. The response in 18 MCI and 18 pAD patients was then determined, and compared to 18 matched healthy controls. Subjects heard factual sentences, and subsequent retrieval performance indicated successful registration and episodic encoding. The healthy subjects demonstrated that suppression of auditory cortical responses was related to greater success in encoding heard sentences; and that this was also associated with greater activity in the semantic system. In contrast, there was reduced auditory cortical suppression in patients with MCI, and absence of suppression in pAD. Administration of a central cholinesterase inhibitor (ChI) partially restored the suppression in patients with pAD, and this was associated with an improvement in verbal memory. Verbal episodic memory impairment in AD is associated with altered auditory cortical function, reversible with a ChI. Although these results may indicate the direct influence of pathology in auditory cortex, they are also likely to indicate a partially reversible impairment of feedback from neocortical systems responsible for sustained attention and semantic processing. Copyright © 2012 American Neurological Association.
Neuroanatomical and resting state EEG power correlates of central hearing loss in older adults.
Giroud, Nathalie; Hirsiger, Sarah; Muri, Raphaela; Kegel, Andrea; Dillier, Norbert; Meyer, Martin
2018-01-01
To gain more insight into central hearing loss, we investigated the relationship between cortical thickness and surface area, speech-relevant resting state EEG power, and above-threshold auditory measures in older adults and younger controls. Twenty-three older adults and 13 younger controls were tested with an adaptive auditory test battery to measure not only traditional pure-tone thresholds but also temporal and spectral processing above individual thresholds. The participants' speech recognition in noise (SiN) was evaluated, and a T1-weighted MRI image was obtained for each participant. We then determined the cortical thickness (CT) and mean cortical surface area (CSA) of auditory and higher speech-relevant regions of interest (ROIs) with FreeSurfer. Further, we obtained resting state EEG from all participants as well as data on the intrinsic theta and gamma power lateralization, the latter in accordance with predictions of the Asymmetric Sampling in Time hypothesis regarding speech processing (Poeppel, Speech Commun 41:245-255, 2003). Methodological steps involved the calculation of age-related differences in behavior, anatomy and EEG power lateralization, followed by multiple regressions with anatomical ROIs as predictors for auditory performance. We then determined anatomical regressors for theta and gamma lateralization, and further constructed all regressions to investigate age as a moderator variable. Behavioral results indicated that older adults performed worse in temporal and spectral auditory tasks, and in SiN, despite having normal peripheral hearing as signaled by the audiogram. These behavioral age-related distinctions were accompanied by lower CT in all ROIs, while CSA was not different between the two age groups. Age modulated the regressions specifically in right auditory areas, where a thicker cortex was associated with better auditory performance in older adults. 
Moreover, a thicker right supratemporal sulcus predicted more rightward theta lateralization, indicating the functional relevance of the right auditory areas in older adults. The question how age-related cortical thinning and intrinsic EEG architecture relates to central hearing loss has so far not been addressed. Here, we provide the first neuroanatomical and neurofunctional evidence that cortical thinning and lateralization of speech-relevant frequency band power relates to the extent of age-related central hearing loss in older adults. The results are discussed within the current frameworks of speech processing and aging.
An investigation of the relation between sibilant production and somatosensory and auditory acuity
Ghosh, Satrajit S.; Matthies, Melanie L.; Maas, Edwin; Hanson, Alexandra; Tiede, Mark; Ménard, Lucie; Guenther, Frank H.; Lane, Harlan; Perkell, Joseph S.
2010-01-01
The relation between auditory acuity, somatosensory acuity and the magnitude of produced sibilant contrast was investigated with data from 18 participants. To measure auditory acuity, stimuli from a synthetic sibilant continuum ([s]-[ʃ]) were used in a four-interval, two-alternative forced choice adaptive-staircase discrimination task. To measure somatosensory acuity, small plastic domes with grooves of different spacing were pressed against each participant’s tongue tip and the participant was asked to identify one of four possible orientations of the grooves. Sibilant contrast magnitudes were estimated from productions of the words ‘said,’ ‘shed,’ ‘sid,’ and ‘shid’. Multiple linear regression revealed a significant relation indicating that a combination of somatosensory and auditory acuity measures predicts produced acoustic contrast. When the participants were divided into high- and low-acuity groups based on their median somatosensory and auditory acuity measures, separate ANOVA analyses with sibilant contrast as the dependent variable yielded a significant main effect for each acuity group. These results provide evidence that sibilant productions have auditory as well as somatosensory goals and are consistent with prior results and the theoretical framework underlying the DIVA model of speech production. PMID:21110603
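The regression analysis described (two acuity measures jointly predicting produced sibilant contrast) can be illustrated on synthetic data; all coefficients, noise levels, and scores below are invented, and only the analysis structure follows the abstract:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 18  # one observation per participant, as in the study

# Synthetic acuity scores (higher = better); values are invented
auditory = rng.normal(0.0, 1.0, n)
somatosensory = rng.normal(0.0, 1.0, n)
# Contrast depends on both acuities plus noise (coefficients invented)
contrast = (1.0 + 0.5 * auditory + 0.4 * somatosensory
            + rng.normal(0.0, 0.1, n))

# Design matrix with intercept; ordinary least-squares fit
X = np.column_stack([np.ones(n), auditory, somatosensory])
beta, *_ = np.linalg.lstsq(X, contrast, rcond=None)
predicted = X @ beta
r2 = 1.0 - np.sum((contrast - predicted) ** 2) / np.sum(
    (contrast - contrast.mean()) ** 2)
```

With both predictors contributing, the fitted coefficients recover the (invented) positive relations and the model explains most of the variance, mirroring the finding that a combination of somatosensory and auditory acuity predicts produced acoustic contrast.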
Akram, Sahar; Presacco, Alessandro; Simon, Jonathan Z.; Shamma, Shihab A.; Babadi, Behtash
2015-01-01
The underlying mechanism of how the human brain solves the cocktail party problem is largely unknown. Recent neuroimaging studies, however, suggest salient temporal correlations between the auditory neural response and the attended auditory object. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects, we propose a decoding approach for tracking the attentional state while subjects are selectively listening to one of the two speech streams embedded in a competing-speaker environment. We develop a biophysically-inspired state-space model to account for the modulation of the neural response with respect to the attentional state of the listener. The constructed decoder is based on a maximum a posteriori (MAP) estimate of the state parameters via the Expectation Maximization (EM) algorithm. Using only the envelope of the two speech streams as covariates, the proposed decoder enables us to track the attentional state of the listener with a temporal resolution of the order of seconds, together with statistical confidence intervals. We evaluate the performance of the proposed model using numerical simulations and experimentally measured evoked MEG responses from the human brain. Our analysis reveals considerable performance gains provided by the state-space model in terms of temporal resolution, computational complexity and decoding accuracy. PMID:26436490
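The authors' state-space/EM decoder is more elaborate than fits a short sketch; as a hedged illustration of the underlying idea (tracking the attended stream from speech-envelope correlations at a resolution of a few seconds), here is a simple correlation-based baseline on purely synthetic signals:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 100           # envelope sampling rate (Hz), illustrative
win = 2 * fs       # 2-s decoding windows: order-of-seconds resolution

env_a = np.abs(rng.normal(size=30 * fs))   # speaker A envelope
env_b = np.abs(rng.normal(size=30 * fs))   # speaker B envelope
# Synthetic "neural response" that tracks speaker A plus noise
neural = 0.8 * env_a + 0.2 * rng.normal(size=env_a.size)

def decode_attention(neural, env_a, env_b, win):
    """Label each window with the speaker whose envelope correlates best."""
    labels = []
    for start in range(0, neural.size - win + 1, win):
        seg = slice(start, start + win)
        r_a = np.corrcoef(neural[seg], env_a[seg])[0, 1]
        r_b = np.corrcoef(neural[seg], env_b[seg])[0, 1]
        labels.append("A" if r_a > r_b else "B")
    return labels

labels = decode_attention(neural, env_a, env_b, win)
```

The state-space formulation in the paper improves on this kind of windowed baseline by smoothing the attentional trajectory over time and attaching statistical confidence intervals to each estimate.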
Neural plasticity and its initiating conditions in tinnitus.
Roberts, L E
2018-03-01
Deafferentation caused by cochlear pathology (which can be hidden from the audiogram) activates forms of neural plasticity in auditory pathways, generating tinnitus and its associated conditions including hyperacusis. This article discusses tinnitus mechanisms and suggests how these mechanisms may relate to those involved in normal auditory information processing. Research findings from animal models of tinnitus and from electromagnetic imaging of tinnitus patients are reviewed which pertain to the role of deafferentation and neural plasticity in tinnitus and hyperacusis. Auditory neurons compensate for deafferentation by increasing their input/output functions (gain) at multiple levels of the auditory system. Forms of homeostatic plasticity are believed to be responsible for this neural change, which increases the spontaneous and driven activity of neurons in central auditory structures in animals expressing behavioral evidence of tinnitus. Another tinnitus correlate, increased neural synchrony among the affected neurons, is forged by spike-timing-dependent neural plasticity in auditory pathways. Slow oscillations generated by bursting thalamic neurons verified in tinnitus animals appear to modulate neural plasticity in the cortex, integrating tinnitus neural activity with information in brain regions supporting memory, emotion, and consciousness which exhibit increased metabolic activity in tinnitus patients. The latter process may be induced by transient auditory events in normal processing but it persists in tinnitus, driven by phantom signals from the auditory pathway. Several tinnitus therapies attempt to suppress tinnitus through plasticity, but repeated sessions will likely be needed to prevent tinnitus activity from returning owing to deafferentation as its initiating condition.
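The central gain increase following deafferentation can be caricatured with a minimal homeostatic rule, in which a multiplicative gain adjusts until the output rate matches a set point (a sketch with invented rates, learning rate, and set point; not a model from the article):

```python
import numpy as np

def homeostatic_gain(input_rate, gain=1.0, target=10.0, lr=0.05, steps=200):
    """Minimal homeostatic plasticity rule: output = gain * input.

    The gain grows when output falls below the target rate and shrinks
    when it exceeds it, settling near target / input_rate. Modeling
    deafferentation as a drop in `input_rate`, the rule compensates by
    raising the gain, which also amplifies any residual (spontaneous)
    input, as in the tinnitus account above.
    """
    for _ in range(steps):
        output = gain * input_rate
        gain += lr * (target - output) / max(input_rate, 1e-9)
    return gain

gain_normal = homeostatic_gain(input_rate=10.0)  # intact input
gain_deaff = homeostatic_gain(input_rate=2.0)    # deafferented input
```

The deafferented condition converges to a much higher gain, illustrating how homeostatic compensation can raise spontaneous and driven activity downstream of a damaged periphery.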
Lebedeva, I S; Akhadov, T A; Petriaĭkin, A V; Kaleda, V G; Barkhatova, A N; Golubev, S A; Rumiantseva, E E; Vdovenko, A M; Fufaeva, E A; Semenova, N A
2011-01-01
Six patients in remission after a first episode of juvenile schizophrenia and seven sex- and age-matched mentally healthy subjects were examined with fMRI and ERP methods, using the auditory oddball paradigm. Differences in P300 parameters did not reach significance; however, a significantly higher hemodynamic response to target stimuli was found in patients bilaterally in the supramarginal gyrus and in the right medial frontal gyrus, pointing to dysfunction of these brain areas in supporting auditory selective attention.
USDA-ARS?s Scientific Manuscript database
This paper reviews the literature and reports on the current state of knowledge regarding the potential for managers to use visual (VC), auditory (AC), and olfactory (OC) cues to manage foraging behavior and spatial distribution of rangeland livestock. We present evidence that free-ranging livestock...
Molina, S J; Capani, F; Guelman, L R
2016-04-01
It has been previously shown that different extra-auditory alterations can be induced in animals exposed to noise at 15 days. However, data regarding exposure of younger animals, which do not have a functional auditory system, have not been obtained yet. Moreover, the possibility of finding a helpful strategy to restore these changes has not yet been explored. Therefore, the aims of the present work were to test age-related differences in diverse hippocampal-dependent behavioral measurements that might be affected in noise-exposed rats, as well as to evaluate the effectiveness of a potential neuroprotective strategy, the enriched environment (EE), on noise-induced behavioral alterations. Male Wistar rats of 7 and 15 days were exposed to moderate levels of noise for two hours. At weaning, animals were separated and reared either in standard or in EE cages for one week. At 28 days of age, different hippocampal-dependent behavioral assessments were performed. Results show that rats exposed to noise at 7 and 15 days were differentially affected. Moreover, EE was effective in restoring all altered variables when animals were exposed at 7 days, while only a few were restored in rats exposed at 15 days. The present findings suggest that noise exposure was capable of triggering significant hippocampal-related behavioral alterations that differed depending on the age of exposure. In addition, it could be proposed that hearing structures did not seem to be necessarily involved in the generation of noise-induced hippocampal-related behaviors, as they were observed even in animals with an immature auditory pathway. Finally, it could be hypothesized that the differential restoration achieved by EE rearing might also depend on the degree of maturation at the time of exposure and the variable evaluated, with younger animals being more susceptible to environmental manipulations. Copyright © 2016 Elsevier B.V. All rights reserved.
Krishnan, Ananthanarayan; Gandour, Jackson T
2014-12-01
Pitch is a robust perceptual attribute that plays an important role in speech, language, and music. As such, it provides an analytic window to evaluate how neural activity relevant to pitch undergoes transformation from early sensory to later cognitive stages of processing in a well-coordinated hierarchical network that is subject to experience-dependent plasticity. We review recent evidence of language experience-dependent effects in pitch processing based on comparisons of native vs. nonnative speakers of a tonal language from electrophysiological recordings in the auditory brainstem and auditory cortex. We present evidence that shows enhanced representation of linguistically-relevant pitch dimensions or features at both the brainstem and cortical levels with a stimulus-dependent preferential activation of the right hemisphere in native speakers of a tone language. We argue that neural representation of pitch-relevant information in the brainstem and early sensory level processing in the auditory cortex is shaped by the perceptual salience of domain-specific features. While both stages of processing are shaped by language experience, neural representations are transformed and fundamentally different at each biological level of abstraction. The representation of pitch-relevant information in the brainstem is more fine-grained spectrotemporally as it reflects sustained neural phase-locking to pitch-relevant periodicities contained in the stimulus. In contrast, the cortical pitch-relevant neural activity reflects primarily a series of transient temporal neural events synchronized to certain temporal attributes of the pitch contour. We argue that experience-dependent enhancement of pitch representation for Chinese listeners most likely reflects an interaction between higher-level cognitive processes and early sensory-level processing to improve representations of behaviorally-relevant features that contribute optimally to perception. 
It is our view that long-term experience shapes this adaptive process wherein the top-down connections provide selective gating of inputs to both cortical and subcortical structures to enhance neural responses to specific behaviorally-relevant attributes of the stimulus. A theoretical framework for a neural network is proposed involving coordination between local, feedforward, and feedback components that can account for experience-dependent enhancement of pitch representations at multiple levels of the auditory pathway. The ability to record brainstem and cortical pitch relevant responses concurrently may provide a new window to evaluate the online interplay between feedback, feedforward, and local intrinsic components in the hierarchical processing of pitch relevant information.
Resting-state brain networks revealed by granger causal connectivity in frogs.
Xue, Fei; Fang, Guangzhan; Yue, Xizi; Zhao, Ermi; Brauth, Steven E; Tang, Yezhong
2016-10-15
Resting-state networks (RSNs) refer to the spontaneous brain activity generated under resting conditions, which maintain the dynamic connectivity of functional brain networks for automatic perception or higher order cognitive functions. Here, Granger causal connectivity analysis (GCCA) was used to explore brain RSNs in the music frog (Babina daunchina) during different behavioral activity phases. The results reveal that a causal network in the frog brain can be identified during the resting state which reflects both brain lateralization and sexual dimorphism. Specifically (1) ascending causal connections from the left mesencephalon to both sides of the telencephalon are significantly higher than those from the right mesencephalon, while the right telencephalon gives rise to the strongest efferent projections among all brain regions; (2) causal connections from the left mesencephalon in females are significantly higher than those in males and (3) these connections are similar during both the high and low behavioral activity phases in this species although almost all electroencephalograph (EEG) spectral bands showed higher power in the high activity phase for all nodes. The functional features of this network match important characteristics of auditory perception in this species. Thus we propose that this causal network maintains auditory perception during the resting state for unexpected auditory inputs as resting-state networks do in other species. These results are also consistent with the idea that females are more sensitive to auditory stimuli than males during the reproductive season. In addition, these results imply that even when not behaviorally active, the frogs remain vigilant for detecting external stimuli. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
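A minimal lag-1 version of the Granger causality measure underlying GCCA can be sketched with ordinary least squares (illustrative only; real analyses use multi-lag models, significance testing, and many channels):

```python
import numpy as np

def granger_strength(x, y, lag=1):
    """Log-ratio Granger statistic: does the past of `y` help predict `x`?

    Compares the residual variance of an AR model for x using only its
    own past against a model that also includes the past of y. Larger
    values indicate a stronger y -> x causal influence. Minimal lag-1
    sketch, not a substitute for a full GCCA toolbox.
    """
    x_now, x_past, y_past = x[lag:], x[:-lag], y[:-lag]
    # Restricted model: x(t) ~ x(t-1)
    Xr = np.column_stack([np.ones_like(x_past), x_past])
    res_r = x_now - Xr @ np.linalg.lstsq(Xr, x_now, rcond=None)[0]
    # Full model: x(t) ~ x(t-1) + y(t-1)
    Xf = np.column_stack([np.ones_like(x_past), x_past, y_past])
    res_f = x_now - Xf @ np.linalg.lstsq(Xf, x_now, rcond=None)[0]
    return np.log(np.var(res_r) / np.var(res_f))

# Synthetic example: `drive` causally precedes `follower`
rng = np.random.default_rng(7)
drive = rng.normal(size=5000)
follower = (np.concatenate([[0.0], 0.9 * drive[:-1]])
            + 0.1 * rng.normal(size=5000))
```

Applied channel-pairwise to EEG from different brain regions, this kind of asymmetry (drive -> follower strong, follower -> drive near zero) is what lets GCCA recover directed connections such as the ascending mesencephalon-to-telencephalon projections reported here.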
Rieger, Kathryn; Rarra, Marie-Helene; Diaz Hernandez, Laura; Hubl, Daniela; Koenig, Thomas
2018-03-01
Auditory verbal hallucinations depend on a broad neurobiological network ranging from the auditory system to language as well as memory-related processes. As part of this, the auditory N100 event-related potential (ERP) component is attenuated in patients with schizophrenia, with stronger attenuation occurring during auditory verbal hallucinations. Changes in the N100 component assumingly reflect disturbed responsiveness of the auditory system toward external stimuli in schizophrenia. With this premise, we investigated the therapeutic utility of neurofeedback training to modulate the auditory-evoked N100 component in patients with schizophrenia and associated auditory verbal hallucinations. Ten patients completed electroencephalography neurofeedback training for modulation of N100 (treatment condition) or another unrelated component, P200 (control condition). On a behavioral level, only the control group showed a tendency for symptom improvement in the Positive and Negative Syndrome Scale total score in a pre-/postcomparison ( t (4) = 2.71, P = .054); however, no significant differences were found in specific hallucination related symptoms ( t (7) = -0.53, P = .62). There was no significant overall effect of neurofeedback training on ERP components in our paradigm; however, we were able to identify different learning patterns, and found a correlation between learning and improvement in auditory verbal hallucination symptoms across training sessions ( r = 0.664, n = 9, P = .05). This effect results, with cautious interpretation due to the small sample size, primarily from the treatment group ( r = 0.97, n = 4, P = .03). In particular, a within-session learning parameter showed utility for predicting symptom improvement with neurofeedback training. In conclusion, patients with schizophrenia and associated auditory verbal hallucinations who exhibit a learning pattern more characterized by within-session aptitude may benefit from electroencephalography neurofeedback. 
Furthermore, independent of the training group, a significant spatial pre/post difference was found in the event-related component P200 (P = .04).
Domain-specific impairment of source memory following a right posterior medial temporal lobe lesion.
Peters, Jan; Koch, Benno; Schwarz, Michael; Daum, Irene
2007-01-01
This single case analysis of memory performance in a patient with an ischemic lesion affecting posterior but not anterior right medial temporal lobe (MTL) indicates that source memory can be disrupted in a domain-specific manner. The patient showed normal recognition memory for gray-scale photos of objects (visual condition) and spoken words (auditory condition). While memory for visual source (texture/color of the background against which pictures appeared) was within the normal range, auditory source memory (male/female speaker voice) was at chance level, a performance pattern significantly different from the control group. This dissociation is consistent with recent fMRI evidence of anterior/posterior MTL dissociations depending upon the nature of source information (visual texture/color vs. auditory speaker voice). The findings are in good agreement with the view of dissociable memory processing by the perirhinal cortex (anterior MTL) and parahippocampal cortex (posterior MTL), depending upon the neocortical input that these regions receive. (c) 2007 Wiley-Liss, Inc.
Local inhibition modulates learning-dependent song encoding in the songbird auditory cortex
Thompson, Jason V.; Jeanne, James M.
2013-01-01
Changes in inhibition during development are well documented, but the role of inhibition in adult learning-related plasticity is not understood. In songbirds, vocal recognition learning alters the neural representation of songs across the auditory forebrain, including the caudomedial nidopallium (NCM), a region analogous to mammalian secondary auditory cortices. Here, we block local inhibition with the iontophoretic application of gabazine, while simultaneously measuring song-evoked spiking activity in NCM of European starlings trained to recognize sets of conspecific songs. We find that local inhibition differentially suppresses the responses to learned and unfamiliar songs and enhances spike-rate differences between learned categories of songs. These learning-dependent response patterns emerge, in part, through inhibitory modulation of selectivity for song components and the masking of responses to specific acoustic features without altering spectrotemporal tuning. The results describe a novel form of inhibitory modulation of the encoding of learned categories and demonstrate that inhibition plays a central role in shaping the responses of neurons to learned, natural signals. PMID:23155175
King, J.R.; Faugeras, F.; Gramfort, A.; Schurger, A.; El Karoui, I.; Sitt, J.D.; Rohaut, B.; Wacongne, C.; Labyt, E.; Bekinschtein, T.; Cohen, L.; Naccache, L.; Dehaene, S.
2017-01-01
Detecting residual consciousness in unresponsive patients is a major clinical concern and a challenge for theoretical neuroscience. To tackle this issue, we recently designed a paradigm that dissociates two electro-encephalographic (EEG) responses to auditory novelty. Whereas a local change in pitch automatically elicits a mismatch negativity (MMN), a change in global sound sequence leads to a late P300b response. The latter component is thought to be present only when subjects consciously perceive the global novelty. Unfortunately, it can be difficult to detect because individual variability is high, especially in clinical recordings. Here, we show that multivariate pattern classifiers can extract subject-specific EEG patterns and predict single-trial local or global novelty responses. We first validate our method with 38 high-density EEG, MEG and intracranial EEG recordings. We empirically demonstrate that our approach circumvents the issues associated with multiple comparisons and individual variability while improving the statistics. Moreover, we confirm in control subjects that local responses are robust to distraction whereas global responses depend on attention. We then investigate 104 vegetative state (VS), minimally conscious state (MCS) and conscious state (CS) patients recorded with high-density EEG. For the local response, the proportion of significant decoding scores (M = 60%) does not vary with the state of consciousness. By contrast, for the global response, only 14% of the VS patients' EEG recordings presented a significant effect, compared to 31% in MCS patients' and 52% in CS patients'. In conclusion, single-trial multivariate decoding of novelty responses provides valuable information in non-communicating patients and paves the way towards real-time monitoring of the state of consciousness. PMID:23859924
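The single-trial multivariate decoding described above can be illustrated with a toy sketch: a cross-validated nearest-centroid classifier separating two classes of simulated "novelty" trials. The signal parameters and the classifier choice are illustrative assumptions, not the authors' actual pipeline (which applied more powerful pattern classifiers to real EEG/MEG recordings):

```python
import random
import statistics

random.seed(0)

def make_trial(label, n_channels=8, effect=1.0, noise=2.0):
    # Simulated single-trial feature vector: class 1 ("global deviant")
    # carries a small added P3b-like deflection on every channel.
    base = effect if label == 1 else 0.0
    return [base + random.gauss(0.0, noise) for _ in range(n_channels)], label

def nearest_centroid_fit(trials):
    # One centroid per class: channel-wise mean over that class's trials.
    out = {}
    for lab in (0, 1):
        vecs = [x for x, l in trials if l == lab]
        out[lab] = [statistics.fmean(col) for col in zip(*vecs)]
    return out

def predict(centroids, x):
    def sq_dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda lab: sq_dist(centroids[lab]))

def cross_val_score(trials, k=5):
    # k-fold cross-validation: fit on k-1 folds, score on the held-out fold.
    folds = [trials[i::k] for i in range(k)]
    scores = []
    for i in range(k):
        train = [t for j, f in enumerate(folds) if j != i for t in f]
        test = folds[i]
        model = nearest_centroid_fit(train)
        scores.append(sum(predict(model, x) == l for x, l in test) / len(test))
    return statistics.fmean(scores)

trials = [make_trial(i % 2) for i in range(200)]
score = cross_val_score(trials)
print(f"mean decoding accuracy: {score:.2f}")  # well above the 0.5 chance level
```

The same logic underlies the paper's approach: decoding accuracy reliably above chance on held-out trials is evidence that the response pattern is present, even when individual variability makes fixed-channel group statistics unreliable.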
Electrophysiological measurement of human auditory function
NASA Technical Reports Server (NTRS)
Galambos, R.
1975-01-01
Contingent negative variations in the presence and amplitudes of brain potentials evoked by sound are considered. Evidence is presented that evoked brain responses to auditory stimuli are clearly related to brain events associated with cognitive processing of acoustic signals, since their properties depend upon where the listener directs attention, whether the signal is an expected event or a surprise, and when a sound that is listened for is finally heard.
Temporal factors affecting somatosensory–auditory interactions in speech processing
Ito, Takayuki; Gracco, Vincent L.; Ostry, David J.
2014-01-01
Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has been shown also to influence speech perceptual processing (Ito et al., 2009). In the present study, we addressed further the relationship between somatosensory information and speech perceptual processing by addressing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory–auditory interaction in speech perception. We examined the changes in event-related potentials (ERPs) in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation compared to individual unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation the amplitude of the ERP was reliably different from the two unisensory potentials. More importantly, the magnitude of the ERP difference varied as a function of the relative timing of the somatosensory–auditory stimulation. Event-related activity change due to stimulus timing was seen between 160 and 220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory–auditory convergence and suggest the contribution of somatosensory information for speech processing process is dependent on the specific temporal order of sensory inputs in speech production. PMID:25452733
Filling-in visual motion with sounds.
Väljamäe, A; Soto-Faraco, S
2008-10-01
Information about the motion of objects can be extracted by multiple sensory modalities, and, as a consequence, object motion perception typically involves the integration of multi-sensory information. Often, in naturalistic settings, the flow of such information can be rather discontinuous (e.g. a cat racing through the furniture in a cluttered room is partly seen and partly heard). This study addressed audio-visual interactions in the perception of time-sampled object motion by measuring adaptation after-effects. We found significant auditory after-effects following adaptation to unisensory auditory and visual motion in depth, sampled at 12.5 Hz. The visually induced (cross-modal) auditory motion after-effect was eliminated if visual adaptors flashed at half of the rate (6.25 Hz). Remarkably, the addition of the high-rate acoustic flutter (12.5 Hz) to this ineffective, sparsely time-sampled, visual adaptor restored the auditory after-effect to a level comparable to what was seen with high-rate bimodal adaptors (flashes and beeps). Our results suggest that this auditory-induced reinstatement of the motion after-effect from the poor visual signals resulted from the occurrence of sound-induced illusory flashes. This effect was found to be dependent both on the directional congruency between modalities and on the rate of auditory flutter. The auditory filling-in of time-sampled visual motion supports the feasibility of using reduced frame rate visual content in multisensory broadcasting and virtual reality applications.
Activation of auditory cortex by anticipating and hearing emotional sounds: an MEG study.
Yokosawa, Koichi; Pamilo, Siina; Hirvenkari, Lotta; Hari, Riitta; Pihko, Elina
2013-01-01
To study how auditory cortical processing is affected by anticipating and hearing of long emotional sounds, we recorded auditory evoked magnetic fields with a whole-scalp MEG device from 15 healthy adults who were listening to emotional or neutral sounds. Pleasant, unpleasant, or neutral sounds, each lasting for 6 s, were played in a random order, preceded by 100-ms cue tones (0.5, 1, or 2 kHz) 2 s before the onset of the sound. The cue tones, indicating the valence of the upcoming emotional sounds, evoked typical transient N100m responses in the auditory cortex. During the rest of the anticipation period (until the beginning of the emotional sound), auditory cortices of both hemispheres generated slow shifts of the same polarity as N100m. During anticipation, the relative strengths of the auditory-cortex signals depended on the upcoming sound: towards the end of the anticipation period the activity became stronger when the subject was anticipating emotional rather than neutral sounds. During the actual emotional and neutral sounds, sustained fields were predominant in the left hemisphere for all sounds. The measured DC MEG signals during both anticipation and hearing of emotional sounds implied that following the cue that indicates the valence of the upcoming sound, the auditory-cortex activity is modulated by the upcoming sound category during the anticipation period.
Impey, Danielle; Knott, Verner
2015-08-01
Membrane potentials and brain plasticity are basic modes of cerebral information processing. Both can be externally (non-invasively) modulated by weak transcranial direct current stimulation (tDCS). Polarity-dependent tDCS-induced reversible circumscribed increases and decreases in cortical excitability and functional changes have been observed following stimulation of motor and visual cortices but relatively little research has been conducted with respect to the auditory cortex. The aim of this pilot study was to examine the effects of tDCS on auditory sensory discrimination in healthy participants (N = 12) assessed with the mismatch negativity (MMN) brain event-related potential (ERP). In a randomized, double-blind, sham-controlled design, participants received anodal tDCS over the primary auditory cortex (2 mA for 20 min) in one session and 'sham' stimulation (i.e., no stimulation except initial ramp-up for 30 s) in the other session. MMN elicited by changes in auditory pitch was found to be enhanced after receiving anodal tDCS compared to 'sham' stimulation, with the effects being evidenced in individuals with relatively reduced (vs. increased) baseline amplitudes and with relatively small (vs. large) pitch deviants. Additional studies are needed to further explore relationships between tDCS-related parameters, auditory stimulus features and individual differences prior to assessing the utility of this tool for treating auditory processing deficits in psychiatric and/or neurological disorders.
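As context for the MMN measure used above: the component is conventionally quantified as the deviant-minus-standard difference wave, averaged over a post-stimulus window. A minimal sketch with fabricated waveforms (all amplitudes and latencies are illustrative, not data from this study):

```python
import math
import statistics

# Simulated grand-average ERPs (µV) sampled every 4 ms, 0-400 ms post-stimulus.
# Values are fabricated for illustration only.
t = [i * 4 for i in range(101)]
standard = [-1.0 * math.exp(-((ms - 100) / 40) ** 2) for ms in t]   # N1-like wave
deviant = [s - 1.5 * math.exp(-((ms - 160) / 50) ** 2)              # extra negativity
           for ms, s in zip(t, standard)]

# MMN: deviant-minus-standard difference wave, mean amplitude in 100-250 ms.
diff = [d - s for d, s in zip(deviant, standard)]
window = [v for ms, v in zip(t, diff) if 100 <= ms <= 250]
mmn_amplitude = statistics.fmean(window)
print(round(mmn_amplitude, 2))  # negative, as expected for an MMN
```

An "enhanced" MMN, as reported after anodal tDCS, corresponds to a more negative value of this windowed mean.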
Dick, Frederic K; Lehet, Matt I; Callaghan, Martina F; Keller, Tim A; Sereno, Martin I; Holt, Lori L
2017-12-13
Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation (acoustic frequency) might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe. 
Although diverse pathologies reduce quality of life by impacting such spectrally directed auditory attention, its neurobiological bases are unclear. We demonstrate that human primary and nonprimary auditory cortical activation is modulated by spectrally directed attention in a manner that recapitulates its tonotopic sensory organization. Further, the graded activation profiles evoked by single-frequency bands are correlated with attentionally driven activation when these bands are presented in complex soundscapes. Finally, we observe a strong concordance in the degree of cortical myelination and the strength of tonotopic activation across several auditory cortical regions. Copyright © 2017 Dick et al.
Müller, Nadia; Keil, Julian; Obleser, Jonas; Schulz, Hannah; Grunwald, Thomas; Bernays, René-Ludwig; Huppertz, Hans-Jürgen; Weisz, Nathan
2013-10-01
Our brain has the capacity of providing an experience of hearing even in the absence of auditory stimulation. This can be seen as illusory conscious perception. While increasing evidence postulates that conscious perception requires specific brain states that systematically relate to specific patterns of oscillatory activity, the relationship between auditory illusions and oscillatory activity remains mostly unexplained. To investigate this we recorded brain activity with magnetoencephalography and collected intracranial data from epilepsy patients while participants listened to familiar as well as unknown music that was partly replaced by sections of pink noise. We hypothesized that participants have a stronger experience of hearing music throughout noise when the noise sections are embedded in familiar compared to unfamiliar music. This was supported by the behavioral results showing that participants rated the perception of music during noise as stronger when noise was presented in a familiar context. Time-frequency data show that the illusory perception of music is associated with a decrease in auditory alpha power pointing to increased auditory cortex excitability. Furthermore, the right auditory cortex is concurrently synchronized with the medial temporal lobe, putatively mediating memory aspects associated with the music illusion. We thus assume that neuronal activity in the highly excitable auditory cortex is shaped through extensive communication between the auditory cortex and the medial temporal lobe, thereby generating the illusion of hearing music during noise. Copyright © 2013 Elsevier Inc. All rights reserved.
The cholinergic basal forebrain in the ferret and its inputs to the auditory cortex
Bajo, Victoria M; Leach, Nicholas D; Cordery, Patricia M; Nodal, Fernando R; King, Andrew J
2014-01-01
Cholinergic inputs to the auditory cortex can modulate sensory processing and regulate stimulus-specific plasticity according to the behavioural state of the subject. In order to understand how acetylcholine achieves this, it is essential to elucidate the circuitry by which cholinergic inputs influence the cortex. In this study, we described the distribution of cholinergic neurons in the basal forebrain and their inputs to the auditory cortex of the ferret, a species used increasingly in studies of auditory learning and plasticity. Cholinergic neurons in the basal forebrain, visualized by choline acetyltransferase and p75 neurotrophin receptor immunocytochemistry, were distributed through the medial septum, diagonal band of Broca, and nucleus basalis magnocellularis. Epipial tracer deposits and injections of the immunotoxin ME20.4-SAP (monoclonal antibody specific for the p75 neurotrophin receptor conjugated to saporin) in the auditory cortex showed that cholinergic inputs originate almost exclusively in the ipsilateral nucleus basalis. Moreover, tracer injections in the nucleus basalis revealed a pattern of labelled fibres and terminal fields that resembled acetylcholinesterase fibre staining in the auditory cortex, with the heaviest labelling in layers II/III and in the infragranular layers. Labelled fibres with small en-passant varicosities and simple terminal swellings were observed throughout all auditory cortical regions. The widespread distribution of cholinergic inputs from the nucleus basalis to both primary and higher level areas of the auditory cortex suggests that acetylcholine is likely to be involved in modulating many aspects of auditory processing. PMID:24945075
Crowell, Sara E.; Wells-Berlin, Alicia M.; Therrien, Ronald E.; Yannuzzi, Sally E.; Carr, Catherine E.
2016-01-01
Auditory sensitivity was measured in a species of diving duck that is not often kept in captivity, the lesser scaup. Behavioral (psychoacoustics) and electrophysiological [the auditory brainstem response (ABR)] methods were used to measure in-air auditory sensitivity, and the resulting audiograms were compared. Both approaches yielded audiograms with similar U-shapes and regions of greatest sensitivity (2000−3000 Hz). However, ABR thresholds were higher than psychoacoustic thresholds at all frequencies. This difference was least at the highest frequency tested using both methods (5700 Hz) and greatest at 1000 Hz, where the ABR threshold was 26.8 dB higher than the behavioral measure of threshold. This difference is commonly reported in studies involving many different species. These results highlight the usefulness of each method, depending on the testing conditions and availability of the animals.
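The ABR/behavioral comparison above reduces to a per-frequency threshold offset. In the sketch below, only the 26.8 dB offset at 1000 Hz comes from the abstract; every other threshold is a placeholder chosen to respect the reported shape (best sensitivity 2000-3000 Hz, smallest gap at 5700 Hz):

```python
# Thresholds (dB) by frequency (Hz). Illustrative values except the 1000 Hz offset.
behavioral = {1000: 40.0, 2000: 25.0, 2860: 24.0, 4000: 32.0, 5700: 45.0}
abr = {1000: 66.8, 2000: 48.0, 2860: 46.0, 4000: 52.0, 5700: 55.0}

# ABR thresholds exceed psychoacoustic thresholds at every frequency;
# the offset quantifies how far apart the two audiograms sit.
offset = {f: abr[f] - behavioral[f] for f in behavioral}
largest_gap = max(offset, key=offset.get)   # frequency with the biggest mismatch
smallest_gap = min(offset, key=offset.get)  # frequency where the methods agree best
print(largest_gap, smallest_gap)  # 1000 Hz worst, 5700 Hz best, as reported above
```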
Neural correlates of auditory short-term memory in rostral superior temporal cortex
Scott, Brian H.; Mishkin, Mortimer; Yin, Pingbo
2014-01-01
Background: Auditory short-term memory (STM) in the monkey is less robust than visual STM and may depend on a retained sensory trace, which is likely to reside in the higher-order cortical areas of the auditory ventral stream. Results: We recorded from the rostral superior temporal cortex as monkeys performed serial auditory delayed-match-to-sample (DMS). A subset of neurons exhibited modulations of their firing rate during the delay between sounds, during the sensory response, or both. This distributed subpopulation carried a predominantly sensory signal modulated by the mnemonic context of the stimulus. Excitatory and suppressive effects on match responses were dissociable in their timing, and in their resistance to sounds intervening between the sample and match. Conclusions: Like the monkeys' behavioral performance, these neuronal effects differ from those reported in the same species during visual DMS, suggesting different neural mechanisms for retaining dynamic sounds and static images in STM. PMID:25456448
Motor contributions to the temporal precision of auditory attention
Morillon, Benjamin; Schroeder, Charles E.; Wyart, Valentin
2014-01-01
In temporal—or dynamic—attending theory, it is proposed that motor activity helps to synchronize temporal fluctuations of attention with the timing of events in a task-relevant stream, thus facilitating sensory selection. Here we develop a mechanistic behavioural account for this theory by asking human participants to track a slow reference beat, by noiseless finger pressing, while extracting auditory target tones delivered on-beat and interleaved with distractors. We find that overt rhythmic motor activity improves the segmentation of auditory information by enhancing sensitivity to target tones while actively suppressing distractor tones. This effect is triggered by cyclic fluctuations in sensory gain locked to individual motor acts, scales parametrically with the temporal predictability of sensory events and depends on the temporal alignment between motor and attention fluctuations. Together, these findings reveal how top-down influences associated with a rhythmic motor routine sharpen sensory representations, enacting auditory ‘active sensing’. PMID:25314898
The role of working memory in auditory selective attention.
Dalton, Polly; Santangelo, Valerio; Spence, Charles
2009-11-01
A growing body of research now demonstrates that working memory plays an important role in controlling the extent to which irrelevant visual distractors are processed during visual selective attention tasks (e.g., Lavie, Hirst, De Fockert, & Viding, 2004). Recently, it has been shown that the successful selection of tactile information also depends on the availability of working memory (Dalton, Lavie, & Spence, 2009). Here, we investigate whether working memory plays a role in auditory selective attention. Participants focused their attention on short continuous bursts of white noise (targets) while attempting to ignore pulsed bursts of noise (distractors). Distractor interference in this auditory task, as measured in terms of the difference in performance between congruent and incongruent distractor trials, increased significantly under high (vs. low) load in a concurrent working-memory task. These results provide the first evidence demonstrating a causal role for working memory in reducing interference by irrelevant auditory distractors.
Higgins, Irina; Stringer, Simon; Schnupp, Jan
2017-01-01
The nature of the code used in the auditory cortex to represent complex auditory stimuli, such as naturally spoken words, remains a matter of debate. Here we argue that such representations are encoded by stable spatio-temporal patterns of firing within cell assemblies known as polychronous groups, or PGs. We develop a physiologically grounded, unsupervised spiking neural network model of the auditory brain with local, biologically realistic, spike-time dependent plasticity (STDP) learning, and show that the plastic cortical layers of the network develop PGs which convey substantially more information about the speaker independent identity of two naturally spoken word stimuli than does rate encoding that ignores the precise spike timings. We furthermore demonstrate that such informative PGs can only develop if the input spatio-temporal spike patterns to the plastic cortical areas of the model are relatively stable.
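The spike-time dependent plasticity rule at the heart of the model above can be sketched with the standard pairwise exponential STDP kernel. Parameter values below are generic textbook choices, not those of the Higgins et al. model:

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pairwise STDP kernel; dt = t_post - t_pre in ms.

    Pre-before-post spiking (dt > 0) potentiates the synapse;
    post-before-pre (dt < 0) depresses it, each decaying exponentially
    with the spike-time difference.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    if dt < 0:
        return -a_minus * math.exp(dt / tau)
    return 0.0

# A synapse repeatedly driven with causal (pre-before-post) pairings grows
# until it saturates at the upper weight bound.
w = 0.5
for _ in range(100):
    w = min(1.0, max(0.0, w + stdp_dw(5.0)))
print(round(w, 3))
```

It is exactly this sensitivity to precise relative spike timing that lets the model's polychronous groups carry more stimulus information than a rate code that ignores spike times.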
Identification of a motor to auditory pathway important for vocal learning
Roberts, Todd F.; Hisey, Erin; Tanaka, Masashi; Kearney, Matthew; Chattree, Gaurav; Yang, Cindy F.; Shah, Nirao M.; Mooney, Richard
2017-01-01
Learning to vocalize depends on the ability to adaptively modify the temporal and spectral features of vocal elements. Neurons that convey motor-related signals to the auditory system are theorized to facilitate vocal learning, but the identity and function of such neurons remain unknown. Here we identify a previously unknown neuron type in the songbird brain that transmits vocal motor signals to the auditory cortex. Genetically ablating these neurons in juveniles disrupted their ability to imitate features of an adult tutor’s song. Ablating these neurons in adults had little effect on previously learned songs, but interfered with their ability to adaptively modify the duration of vocal elements and largely prevented the degradation of song’s temporal features normally caused by deafening. These findings identify a motor to auditory circuit essential to vocal imitation and to the adaptive modification of vocal timing. PMID:28504672
NASA Astrophysics Data System (ADS)
Mhatre, Natasha; Robert, Daniel
2018-05-01
Tree cricket hearing shows all the features of an actively amplified auditory system, particularly spontaneous oscillations (SOs) of the tympanal membrane. As expected from an actively amplified auditory system, SO frequency and the peak frequency in evoked responses as observed in sensitivity spectra are correlated. Sensitivity spectra also show compressive non-linearity at this frequency, i.e., a reduction in peak height and sharpness with increasing stimulus amplitude. Both SO and amplified frequency also change with ambient temperature, allowing the auditory system to maintain a filter that is matched to song frequency. In tree crickets, remarkably, song frequency varies with ambient temperature. Interestingly, active amplification has been reported to be switched ON and OFF. The mechanism of this switch is as yet unknown. In order to gain insights into this switch, we recorded and analysed SOs as the auditory system transitioned from the passive (OFF) state to the active (ON) state. We found that while SO amplitude did not follow a fixed pattern, SO frequency changed during the ON-OFF transition. SOs were first detected above noise levels at low frequencies, sometimes well below the known song frequency range (0.5-1 kHz lower). SO frequency was observed to increase over the next ~30 minutes, in the absence of any ambient temperature change, before settling at a frequency within the range of conspecific song. We examine the frequency shift in SO spectra with temperature and during the ON/OFF transition and discuss the mechanistic implications. To our knowledge, such modulation of active auditory amplification, and its dynamics, is unique among auditory animals.
Farahani, Ehsan Darestani; Goossens, Tine; Wouters, Jan; van Wieringen, Astrid
2017-03-01
Investigating the neural generators of auditory steady-state responses (ASSRs), i.e., auditory evoked brain responses, with a wide range of screening and diagnostic applications, has been the focus of various studies for many years. Most of these studies employed a priori assumptions regarding the number and location of neural generators. The aim of this study is to reconstruct ASSR sources with minimal assumptions in order to gain in-depth insight into the number and location of brain regions that are activated in response to low- as well as high-frequency acoustically amplitude-modulated signals. In order to reconstruct ASSR sources, we applied independent component analysis with subsequent equivalent dipole modeling to single-subject EEG data (young adults, 20-30 years of age). These data were based on white noise stimuli, amplitude modulated at 4, 20, 40, or 80 Hz. The independent components that exhibited a significant ASSR were clustered among all participants by means of a probabilistic clustering method based on a Gaussian mixture model. Results suggest that a widely distributed network of sources, located in cortical as well as subcortical regions, is active in response to 4, 20, 40, and 80 Hz amplitude-modulated noises. Some of these sources are located beyond the central auditory pathway. Comparison of brain sources in response to different modulation frequencies suggested that the identified brain sources in the brainstem, the left and the right auditory cortex show a higher responsiveness to 40 Hz than to the other modulation frequencies. Copyright © 2017 Elsevier Inc. All rights reserved.
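The stimuli described (white noise amplitude modulated at 4, 20, 40, or 80 Hz) can be constructed roughly as follows; the sampling rate, modulation depth, and level calibration here are assumptions for illustration, not the study's exact protocol:

```python
import math
import random

def am_noise(mod_hz, dur_s=1.0, fs=16000, depth=1.0, seed=0):
    """White noise sinusoidally amplitude-modulated at mod_hz.

    The envelope (1 + depth*sin)/(1 + depth) is scaled to stay in [0, 1],
    so samples remain within [-1, 1]. Illustrative construction only.
    """
    rng = random.Random(seed)
    n = int(dur_s * fs)
    return [
        (1.0 + depth * math.sin(2 * math.pi * mod_hz * i / fs)) / (1.0 + depth)
        * rng.uniform(-1.0, 1.0)
        for i in range(n)
    ]
```

An ASSR paradigm would then present such stimuli at each modulation frequency (e.g. `am_noise(40)`) while recording EEG, with the steady-state response expected at the modulation frequency.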
Park, Hyojin; Ince, Robin A A; Schyns, Philippe G; Thut, Gregor; Gross, Joachim
2015-06-15
Humans show a remarkable ability to understand continuous speech even under adverse listening conditions. This ability critically relies on dynamically updated predictions of incoming sensory information, but exactly how top-down predictions improve speech processing is still unclear. Brain oscillations are a likely mechanism for these top-down predictions [1, 2]. Quasi-rhythmic components in speech are known to entrain low-frequency oscillations in auditory areas [3, 4], and this entrainment increases with intelligibility [5]. We hypothesize that top-down signals from frontal brain areas causally modulate the phase of brain oscillations in auditory cortex. We use magnetoencephalography (MEG) to monitor brain oscillations in 22 participants during continuous speech perception. We characterize prominent spectral components of speech-brain coupling in auditory cortex and use causal connectivity analysis (transfer entropy) to identify the top-down signals driving this coupling more strongly during intelligible speech than during unintelligible speech. We report three main findings. First, frontal and motor cortices significantly modulate the phase of speech-coupled low-frequency oscillations in auditory cortex, and this effect depends on intelligibility of speech. Second, top-down signals are significantly stronger for left auditory cortex than for right auditory cortex. Third, speech-auditory cortex coupling is enhanced as a function of stronger top-down signals. Together, our results suggest that low-frequency brain oscillations play a role in implementing predictive top-down control during continuous speech perception and that top-down control is largely directed at left auditory cortex. This suggests a close relationship between (left-lateralized) speech production areas and the implementation of top-down control in continuous speech perception. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.
Moossavi, Abdollah; Mehrkian, Saiedeh; Lotfi, Yones; Faghihzadeh, Soghrat; Sajedi, Hamed
2014-11-01
Auditory processing disorder (APD) describes a complex and heterogeneous disorder characterized by poor speech perception, especially in noisy environments. APD may be responsible for a range of sensory processing deficits associated with learning difficulties. There is no general consensus about the nature of APD and how the disorder should be assessed or managed. This study assessed the effect of cognitive abilities (working memory capacity) on sound lateralization in children with auditory processing disorders, in order to determine how "auditory cognition" interacts with APD. The participants in this cross-sectional comparative study were 20 typically developing children and 17 children with a diagnosed auditory processing disorder (9-11 years old). Sound lateralization abilities were investigated using inter-aural time differences (ITD) and inter-aural intensity differences (IID) with two stimuli (high-pass and low-pass noise) in nine perceived positions. Working memory capacity was evaluated using non-word repetition and forward and backward digit span tasks. Linear regression was employed to measure the degree of association between working memory capacity and localization tests between the two groups. Children in the APD group had consistently lower scores than typically developing subjects in lateralization and working memory capacity measures. The results showed that working memory capacity had a significant negative correlation with ITD errors, especially with the high-pass noise stimulus, but not with IID errors in APD children. The study highlights the impact of working memory capacity on auditory lateralization. The findings of this research indicate that the extent to which working memory influences auditory processing depends on the type of auditory processing and the nature of the stimulus/listening situation. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Park, Hyojin; Ince, Robin A.A.; Schyns, Philippe G.; Thut, Gregor; Gross, Joachim
2015-01-01
Summary Humans show a remarkable ability to understand continuous speech even under adverse listening conditions. This ability critically relies on dynamically updated predictions of incoming sensory information, but exactly how top-down predictions improve speech processing is still unclear. Brain oscillations are a likely mechanism for these top-down predictions [1, 2]. Quasi-rhythmic components in speech are known to entrain low-frequency oscillations in auditory areas [3, 4], and this entrainment increases with intelligibility [5]. We hypothesize that top-down signals from frontal brain areas causally modulate the phase of brain oscillations in auditory cortex. We use magnetoencephalography (MEG) to monitor brain oscillations in 22 participants during continuous speech perception. We characterize prominent spectral components of speech-brain coupling in auditory cortex and use causal connectivity analysis (transfer entropy) to identify the top-down signals driving this coupling more strongly during intelligible speech than during unintelligible speech. We report three main findings. First, frontal and motor cortices significantly modulate the phase of speech-coupled low-frequency oscillations in auditory cortex, and this effect depends on intelligibility of speech. Second, top-down signals are significantly stronger for left auditory cortex than for right auditory cortex. Third, speech-auditory cortex coupling is enhanced as a function of stronger top-down signals. Together, our results suggest that low-frequency brain oscillations play a role in implementing predictive top-down control during continuous speech perception and that top-down control is largely directed at left auditory cortex. This suggests a close relationship between (left-lateralized) speech production areas and the implementation of top-down control in continuous speech perception. PMID:26028433
Baltus, Alina; Vosskuhl, Johannes; Boetzel, Cindy; Herrmann, Christoph Siegfried
2018-05-13
Recent research provides evidence for a functional role of brain oscillations for perception. For example, auditory temporal resolution seems to be linked to individual gamma frequency of auditory cortex. Individual gamma frequency not only correlates with performance in between-channel gap detection tasks but can be modulated via auditory transcranial alternating current stimulation. Modulation of individual gamma frequency is accompanied by an improvement in gap detection performance. Aging changes electrophysiological frequency components and sensory processing mechanisms. Therefore, we conducted a study to investigate the link between individual gamma frequency and gap detection performance in elderly people using auditory transcranial alternating current stimulation. In a within-subject design, twelve participants were electrically stimulated with two individualized transcranial alternating current stimulation frequencies: 3 Hz above their individual gamma frequency (experimental condition) and 4 Hz below their individual gamma frequency (control condition) while they were performing a between-channel gap detection task. As expected, individual gamma frequencies correlated significantly with gap detection performance at baseline. In the experimental condition, transcranial alternating current stimulation modulated gap detection performance. In the control condition, stimulation did not modulate gap detection performance. In addition, in elderly people, the effect of transcranial alternating current stimulation on auditory temporal resolution seems to be dependent on endogenous frequencies in auditory cortex: elderly people with slower individual gamma frequencies and lower auditory temporal resolution benefit from auditory transcranial alternating current stimulation and show increased gap detection performance during stimulation. Our results strongly suggest individualized transcranial alternating current stimulation protocols for successful modulation of performance.
This article is protected by copyright. All rights reserved.
Effects of underwater noise on auditory sensitivity of a cyprinid fish.
Scholik, A R; Yan, H Y
2001-02-01
The ability of a fish to interpret acoustic information in its environment is crucial for its survival. Thus, it is important to understand how underwater noise affects fish hearing. In this study, the fathead minnow (Pimephales promelas) was used to examine: (1) the immediate effects of white noise exposure (0.3-4.0 kHz, 142 dB re: 1 microPa) on auditory thresholds and (2) recovery after exposure. Audiograms were measured using the auditory brainstem response protocol and compared to baseline audiograms of fathead minnows not exposed to noise. Immediately after exposure to 24 h of white noise, five out of the eight frequencies tested showed a significantly higher threshold compared to the baseline fish. Recovery was found to depend on both duration of noise exposure and auditory frequency. These results support the hypothesis that the auditory threshold of the fathead minnow can be altered by white noise, especially in its most sensitive hearing range (0.8-2.0 kHz), and provide evidence that these effects can be long term (>14 days).
Sound tuning of amygdala plasticity in auditory fear conditioning
Park, Sungmo; Lee, Junuk; Park, Kyungjoon; Kim, Jeongyeon; Song, Beomjong; Hong, Ingie; Kim, Jieun; Lee, Sukwon; Choi, Sukwoo
2016-01-01
Various auditory tones have been used as conditioned stimuli (CS) for fear conditioning, but researchers have largely neglected the effect that different types of auditory tones may have on fear memory processing. Here, we report that at lateral amygdala (LA) synapses (a storage site for fear memory), conditioning with different types of auditory CSs (2.8 kHz tone, white noise, FM tone) recruits distinct forms of long-term potentiation (LTP) and inserts calcium-permeable AMPA receptors (CP-AMPARs) for variable periods. White noise or FM tone conditioning produced brief insertion (<6 hr after conditioning) of CP-AMPARs, whereas 2.8 kHz tone conditioning induced more persistent insertion (≥6 hr). Consistently, conditioned fear to the 2.8 kHz tone, but not to white noise or FM tones, was erased by reconsolidation-update (which depends on the insertion of CP-AMPARs at LA synapses) when it was performed 6 hr after conditioning. Our data suggest that conditioning with different auditory CSs recruits distinct forms of LA synaptic plasticity, resulting in more malleable fear memory to some tones than to others. PMID:27488731
Developmental changes in automatic rule-learning mechanisms across early childhood.
Mueller, Jutta L; Friederici, Angela D; Männel, Claudia
2018-06-27
Infants' ability to learn complex linguistic regularities from early on has been revealed by electrophysiological studies indicating that 3-month-olds, but not adults, can automatically detect non-adjacent dependencies between syllables. While different ERP responses in adults and infants suggest that both linguistic rule learning and its link to basic auditory processing undergo developmental changes, systematic investigations of the developmental trajectories are scarce. In the present study, we assessed 2- and 4-year-olds' ERP indicators of pitch discrimination and linguistic rule learning in a syllable-based oddball design. To test for the relation between auditory discrimination and rule learning, ERP responses to pitch changes were used as a predictor for potential linguistic rule-learning effects. Results revealed that 2-year-olds, but not 4-year-olds, showed ERP markers of rule learning. Although 2-year-olds' rule learning was not dependent on differences in pitch perception, 4-year-old children demonstrated a dependency, such that those children who showed more pronounced responses to pitch changes still showed an effect of rule learning. These results narrow down the developmental decline of the ability for automatic linguistic rule learning to the age between 2 and 4 years, and, moreover, point towards a strong modification of this change by auditory processes. At an age when the ability for automatic linguistic rule learning phases out, rule learning can still be observed in children with enhanced auditory responses. The observed interrelations are plausible causes for age-of-acquisition effects and inter-individual differences in language learning. © 2018 John Wiley & Sons Ltd.
Maruska, Karen P; Ung, Uyhun S; Fernald, Russell D
2012-01-01
Sexual reproduction in all animals depends on effective communication between signalers and receivers. Many fish species, especially the African cichlids, are well known for their bright coloration and the importance of visual signaling during courtship and mate choice, but little is known about what role acoustic communication plays during mating and how it contributes to sexual selection in this phenotypically diverse group of vertebrates. Here we examined acoustic communication during reproduction in the social cichlid fish, Astatotilapia burtoni. We characterized the sounds and associated behaviors produced by dominant males during courtship, tested for differences in hearing ability associated with female reproductive state and male social status, and then tested the hypothesis that female mate preference is influenced by male sound production. We show that dominant males produce intentional courtship sounds in close proximity to females, and that sounds are spectrally similar to their hearing abilities. Females were 2-5-fold more sensitive to low frequency sounds in the spectral range of male courtship sounds when they were sexually-receptive compared to during the mouthbrooding parental phase. Hearing thresholds were also negatively correlated with circulating sex-steroid levels in females but positively correlated in males, suggesting a potential role for steroids in reproductive-state auditory plasticity. Behavioral experiments showed that receptive females preferred to affiliate with males that were associated with playback of courtship sounds compared to noise controls, indicating that acoustic information is likely important for female mate choice. These data show for the first time in a Tanganyikan cichlid that acoustic communication is important during reproduction as part of a multimodal signaling repertoire, and that perception of auditory information changes depending on the animal's internal physiological state. 
Our results highlight the importance of examining non-visual sensory modalities as potential substrates for sexual selection contributing to the incredible phenotypic diversity of African cichlid fishes.
Temporal auditory aspects in children with poor school performance and associated factors.
Rezende, Bárbara Antunes; Lemos, Stela Maris Aguiar; Medeiros, Adriane Mesquita de
2016-01-01
To investigate the auditory temporal aspects in children with poor school performance aged 7-12 years and their association with behavioral aspects, health perception, school and health profiles, and sociodemographic factors. This is an observational, analytical, cross-sectional study including 89 children with poor school performance aged 7-12 years enrolled in the municipal public schools of a municipality in Minas Gerais state, participants of Specialized Educational Assistance. The first stage of the study was conducted with the subjects' parents aiming to collect information on sociodemographic aspects, health profile, and educational records. In addition, the parents responded to the Strengths and Difficulties Questionnaire (SDQ). The second stage was conducted with the children in order to investigate their health self-perception and perform the auditory assessment, which consisted of meatoscopy, Transient Otoacoustic Emissions, and tests that evaluated the aspects of simple auditory temporal ordering and auditory temporal resolution. Tests assessing the temporal aspects of auditory processing were considered as response variables, and the explanatory variables were grouped for univariate and multivariate logistic regression analyses. The level of significance was set at 5%. A statistically significant correlation was found between the auditory temporal aspects and the variables age, gender, grade repetition, and health self-perception. Children with poor school performance presented changes in the auditory temporal aspects. The temporal abilities assessed suggest an association with different factors such as maturational process, health self-perception, and school records.
Multisensory Integration Strategy for Modality-Specific Loss of Inhibition Control in Older Adults
Ryu, Hokyoung; Kim, Jae-Kwan; Jeong, Eunju
2018-01-01
Older adults are known to have lesser cognitive control capability and greater susceptibility to distraction than young adults. Previous studies have reported age-related problems in selective attention and inhibitory control, yielding mixed results depending on modality and context in which stimuli and tasks were presented. The purpose of the study was to empirically demonstrate a modality-specific loss of inhibitory control in processing audio-visual information with ageing. A group of 30 young adults (mean age = 25.23, Standard Deviation (SD) = 1.86) and 22 older adults (mean age = 55.91, SD = 4.92) performed the audio-visual contour identification task (AV-CIT). We compared performance of visual/auditory identification (Uni-V, Uni-A) with that of visual/auditory identification in the presence of distraction in counterpart modality (Multi-V, Multi-A). The findings showed a modality-specific effect on inhibitory control. Uni-V performance was significantly better than Multi-V, indicating that auditory distraction significantly hampered visual target identification. However, Multi-A performance was significantly enhanced compared to Uni-A, indicating that auditory target performance was significantly enhanced by visual distraction. Additional analysis showed an age-specific effect on enhancement between Uni-A and Multi-A depending on the level of visual inhibition. Together, our findings indicated that the loss of visual inhibitory control was beneficial for the auditory target identification presented in a multimodal context in older adults. A likely multisensory information processing strategy in the older adults was further discussed in relation to aged cognition. PMID:29641462
Koehler, Seth D.; Shore, Susan E.
2015-01-01
Central auditory circuits are influenced by the somatosensory system, a relationship that may underlie tinnitus generation. In the guinea pig dorsal cochlear nucleus (DCN), pairing spinal trigeminal nucleus (Sp5) stimulation with tones at specific intervals and orders facilitated or suppressed subsequent tone-evoked neural responses, reflecting spike-timing-dependent plasticity (STDP). Furthermore, after noise-induced tinnitus, bimodal responses in DCN were shifted from Hebbian to anti-Hebbian timing rules with less discrete temporal windows, suggesting a role for bimodal plasticity in tinnitus. Here, we aimed to determine if multisensory STDP principles like those in DCN also exist in primary auditory cortex (A1), and whether they change following noise-induced tinnitus. Tone-evoked and spontaneous neural responses were recorded before and 15 min after bimodal stimulation in which the intervals and orders of auditory-somatosensory stimuli were randomized. Tone-evoked and spontaneous firing rates were influenced by the interval and order of the bimodal stimuli, and in sham-controls Hebbian-like timing rules predominated, as was seen in DCN. In noise-exposed animals with and without tinnitus, timing rules shifted away from those found in sham-controls to more anti-Hebbian rules. Only those animals with evidence of tinnitus showed increased spontaneous firing rates, a purported neurophysiological correlate of tinnitus in A1. Together, these findings suggest that bimodal plasticity is also evident in A1 following noise damage and may have implications for tinnitus generation and therapeutic intervention across the central auditory circuit. PMID:26289461
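As a rough illustration of the Hebbian vs. anti-Hebbian distinction drawn above, a measured timing rule (response change as a function of somatosensory-auditory interval) can be labeled by the sign of its overlap with a canonical Hebbian window. The window shape, time constant, and criterion below are illustrative assumptions, not the paper's analysis:

```python
import math

def hebbian_window(interval_ms, tau_ms=10.0):
    """Canonical Hebbian-like timing window: facilitation when the
    somatosensory stimulus leads (interval > 0), suppression when the
    tone leads (interval < 0), decaying with the absolute interval."""
    if interval_ms == 0:
        return 0.0
    return math.copysign(math.exp(-abs(interval_ms) / tau_ms), interval_ms)

def classify_rule(intervals_ms, responses):
    """Label a measured timing rule by the sign of its overlap with the
    canonical window (an illustrative criterion only)."""
    score = sum(hebbian_window(t) * r for t, r in zip(intervals_ms, responses))
    return "Hebbian" if score > 0 else "anti-Hebbian"
```

A shift from Hebbian to anti-Hebbian rules, as reported after noise exposure, would correspond to the measured responses flipping sign relative to this canonical window.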
A case of high noise sensitivity
NASA Astrophysics Data System (ADS)
Murata, M.; Sakamoto, H.
1995-10-01
A case of noise sensitivity with a five-year follow-up period is reported. The patient was a 34-year-old single man who was diagnosed as having psychosomatic disorder triggered by two stressful life events in rapid succession, with secondary hypersensitivity to noise. Hypersensitivity to light and cold also developed later in the clinical course. The auditory threshold was within the normal range. The discomfort threshold, as a measure of the noise sensitivity secondary to mental illness, was measured repeatedly using audiometric test tones. The discomfort threshold varied depending upon his mental status, ranging from 40-50 dB in the comparatively poorer mental state to 70-95 dB in the relatively good mental state. The features of noise sensitivity, including that secondary to mental illness, are discussed.
Functional modeling of the human auditory brainstem response to broadband stimulation
Verhulst, Sarah; Bharadwaj, Hari M.; Mehraei, Golbarg; Shera, Christopher A.; Shinn-Cunningham, Barbara G.
2015-01-01
Population responses such as the auditory brainstem response (ABR) are commonly used for hearing screening, but the relationship between single-unit physiology and scalp-recorded population responses is not well understood. Computational models that integrate physiologically realistic models of single-unit auditory-nerve (AN), cochlear nucleus (CN) and inferior colliculus (IC) cells with models of broadband peripheral excitation can be used to simulate ABRs and thereby link detailed knowledge of animal physiology to human applications. Existing functional ABR models fail to capture the empirically observed 1.2–2 ms ABR wave-V latency-vs-intensity decrease that is thought to arise from level-dependent changes in cochlear excitation and firing synchrony across different tonotopic sections. This paper proposes an approach where level-dependent cochlear excitation patterns, which reflect human cochlear filter tuning parameters, drive AN fibers to yield realistic level-dependent properties of the ABR wave-V. The number of free model parameters is minimal, producing a model in which various sources of hearing impairment can easily be simulated on an individualized and frequency-dependent basis. The model fits latency-vs-intensity functions observed in human ABRs and otoacoustic emissions while maintaining rate-level and threshold characteristics of single-unit AN fibers. The simulations help to reveal which tonotopic regions dominate ABR waveform peaks at different stimulus intensities. PMID:26428802
Hasegawa, Naoya; Takeda, Kenta; Sakuma, Moe; Mani, Hiroki; Maejima, Hiroshi; Asaka, Tadayoshi
2017-10-01
Augmented sensory biofeedback (BF) for postural control is widely used to improve postural stability. However, the effective sensory information in BF systems of motor learning for postural control is still unknown. The purpose of this study was to investigate the learning effects of visual versus auditory BF training in dynamic postural control. Eighteen healthy young adults were randomly divided into two groups (visual BF and auditory BF). In test sessions, participants were asked to bring the real-time center of pressure (COP) in line with a hidden target by body sway in the sagittal plane. The target moved in seven cycles of sine curves at 0.23 Hz in the vertical direction on a monitor. In training sessions, the visual and auditory BF groups were required to change the magnitude of a visual circle and a sound, respectively, according to the distance between the COP and target in order to reach the target. The perceptual magnitudes of visual and auditory BF were equalized according to Stevens' power law. At the retention test, the auditory but not visual BF group demonstrated decreased postural performance errors in both the spatial and temporal parameters under the no-feedback condition. These findings suggest that visual BF increases the dependence on visual information to control postural performance, while auditory BF may enhance the integration of the proprioceptive sensory system, which contributes to motor learning without BF. These results suggest that auditory BF training improves motor learning of dynamic postural control. Copyright © 2017 Elsevier B.V. All rights reserved.
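The Stevens' power law equalization mentioned above (perceived magnitude psi = k * phi**a) can be sketched as follows: invert the law per modality so that the visual and auditory stimuli evoke the same perceived magnitude for a given COP-target error. The exponents here are illustrative textbook values, not those used in the study:

```python
def stimulus_for_percept(psi, exponent, k=1.0):
    """Invert Stevens' law psi = k * phi**exponent to get the physical
    intensity phi that evokes perceived magnitude psi."""
    return (psi / k) ** (1.0 / exponent)

def equalized_feedback(error, gain=1.0, exp_visual=0.7, exp_auditory=0.67):
    """Map a COP-target error to visual and auditory stimulus intensities
    matched in perceived magnitude (exponents are assumed values)."""
    psi = gain * error  # target perceived magnitude for this error
    return (stimulus_for_percept(psi, exp_visual),
            stimulus_for_percept(psi, exp_auditory))
```

Because the assumed exponents differ, the same perceived magnitude requires different physical scalings in each modality, which is the point of the equalization step.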
Connecting the ear to the brain: molecular mechanisms of auditory circuit assembly
Appler, Jessica M.; Goodrich, Lisa V.
2011-01-01
Our sense of hearing depends on precisely organized circuits that allow us to sense, perceive, and respond to complex sounds in our environment, from music and language to simple warning signals. Auditory processing begins in the cochlea of the inner ear, where sounds are detected by sensory hair cells and then transmitted to the central nervous system by spiral ganglion neurons, which faithfully preserve the frequency, intensity, and timing of each stimulus. During the assembly of auditory circuits, spiral ganglion neurons establish precise connections that link hair cells in the cochlea to target neurons in the auditory brainstem, develop specific firing properties, and elaborate unusual synapses both in the periphery and in the CNS. Understanding how spiral ganglion neurons acquire these unique properties is a key goal in auditory neuroscience, as these neurons represent the sole input of auditory information to the brain. In addition, the best currently available treatment for many forms of deafness is the cochlear implant, which compensates for lost hair cell function by directly stimulating the auditory nerve. Historically, studies of the auditory system have lagged behind other sensory systems due to the small size and inaccessibility of the inner ear. With the advent of new molecular genetic tools, this gap is narrowing. Here, we summarize recent insights into the cellular and molecular cues that guide the development of spiral ganglion neurons, from their origin in the proneurosensory domain of the otic vesicle to the formation of specialized synapses that ensure rapid and reliable transmission of sound information from the ear to the brain. PMID:21232575
ERIC Educational Resources Information Center
Hessler, Dorte; Jonkers, Roel; Stowe, Laurie; Bastiaanse, Roelien
2013-01-01
In the current ERP study, an active oddball task was carried out, testing pure tones and auditory, visual and audiovisual syllables. For pure tones, an MMN, an N2b, and a P3 were found, confirming traditional findings. Auditory syllables evoked an N2 and a P3. We found that the amplitude of the P3 depended on the distance between standard and…
Noise-robust speech recognition through auditory feature detection and spike sequence decoding.
Schafer, Phillip B; Jin, Dezhe Z
2014-03-01
Speech recognition in noisy conditions is a major challenge for computer systems, but the human brain performs it routinely and accurately. Automatic speech recognition (ASR) systems that are inspired by neuroscience can potentially bridge the performance gap between humans and machines. We present a system for noise-robust isolated word recognition that works by decoding sequences of spikes from a population of simulated auditory feature-detecting neurons. Each neuron is trained to respond selectively to a brief spectrotemporal pattern, or feature, drawn from the simulated auditory nerve response to speech. The neural population conveys the time-dependent structure of a sound by its sequence of spikes. We compare two methods for decoding the spike sequences--one using a hidden Markov model-based recognizer, the other using a novel template-based recognition scheme. In the latter case, words are recognized by comparing their spike sequences to template sequences obtained from clean training data, using a similarity measure based on the length of the longest common sub-sequence. Using isolated spoken digits from the AURORA-2 database, we show that our combined system outperforms a state-of-the-art robust speech recognizer at low signal-to-noise ratios. Both the spike-based encoding scheme and the template-based decoding offer gains in noise robustness over traditional speech recognition methods. Our system highlights potential advantages of spike-based acoustic coding and provides a biologically motivated framework for robust ASR development.
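The template-based decoder above scores a word by the length of the longest common subsequence (LCS) between spike-label sequences. A minimal dynamic-programming sketch of that similarity measure; the normalization choice and the toy recognizer are illustrative assumptions, not the paper's exact pipeline:

```python
def lcs_length(a, b):
    """Dynamic-programming length of the longest common subsequence."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def lcs_similarity(seq, template):
    """Similarity of a spike-label sequence to a template, scaled to 0..1.
    Normalizing by the longer sequence is an assumption for illustration."""
    if not seq or not template:
        return 0.0
    return lcs_length(seq, template) / max(len(seq), len(template))

def recognize(seq, templates):
    """Return the word whose template spike sequence is most similar."""
    return max(templates, key=lambda word: lcs_similarity(seq, templates[word]))
```

LCS tolerates insertions and deletions in the spike sequence, which is what makes this comparison robust to noise-induced spurious or missing spikes.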
Hausfeld, Lars; Riecke, Lars; Formisano, Elia
2018-06-01
Often in everyday life, we encounter auditory scenes comprising multiple simultaneous sounds and succeed in selectively attending to only one of them, typically the most relevant for ongoing behavior. Studies using basic sounds and two-talker stimuli have shown that auditory selective attention aids this ability by enhancing the neural representations of the attended sound in auditory cortex. It remains unknown, however, whether and how this selective attention mechanism operates on representations of auditory scenes containing natural sounds of different categories. In this high-field fMRI study we presented participants with simultaneous voices and musical instruments while manipulating their focus of attention. We found an attentional enhancement of neural sound representations in temporal cortex - as defined by spatial activation patterns - at locations that depended on the attended category (i.e., voices or instruments). In contrast, we found that in frontal cortex the site of enhancement was independent of the attended category, and the same regions could flexibly represent any attended sound regardless of its category. These results help to elucidate the interacting mechanisms of bottom-up and top-down processing when listening to real-life scenes composed of multiple sound categories. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Beal, Deryk S; Cheyne, Douglas O; Gracco, Vincent L; Quraan, Maher A; Taylor, Margot J; De Nil, Luc F
2010-10-01
We used magnetoencephalography to investigate auditory evoked responses to speech vocalizations and non-speech tones in adults who do and do not stutter. Neuromagnetic field patterns were recorded as participants listened to a 1 kHz tone, playback of their own productions of the vowel /i/ and vowel-initial words, and actively generated the vowel /i/ and vowel-initial words. Activation of the auditory cortex at approximately 50 and 100 ms was observed during all tasks. A reduction in the peak amplitudes of the M50 and M100 components was observed during the active generation versus passive listening tasks dependent on the stimuli. Adults who stutter did not differ in the amount of speech-induced auditory suppression relative to fluent speakers. Adults who stutter had shorter M100 latencies for the actively generated speaking tasks in the right hemisphere relative to the left hemisphere but the fluent speakers showed similar latencies across hemispheres. During passive listening tasks, adults who stutter had longer M50 and M100 latencies than fluent speakers. The results suggest that there are timing, rather than amplitude, differences in auditory processing during speech in adults who stutter and are discussed in relation to hypotheses of auditory-motor integration breakdown in stuttering. Copyright 2010 Elsevier Inc. All rights reserved.
Spatial Hearing with Incongruent Visual or Auditory Room Cues
Gil-Carvajal, Juan C.; Cubick, Jens; Santurette, Sébastien; Dau, Torsten
2016-01-01
In day-to-day life, humans usually perceive the location of sound sources as outside their heads. This externalized auditory spatial perception can be reproduced through headphones by recreating the sound pressure generated by the source at the listener’s eardrums. This requires the acoustical features of the recording environment and listener’s anatomy to be recorded at the listener’s ear canals. Although the resulting auditory images can be indistinguishable from real-world sources, their externalization may be less robust when the playback and recording environments differ. Here we tested whether a mismatch between playback and recording room reduces perceived distance, azimuthal direction, and compactness of the auditory image, and whether this is mostly due to incongruent auditory cues or to expectations generated from the visual impression of the room. Perceived distance ratings decreased significantly when collected in a more reverberant environment than the recording room, whereas azimuthal direction and compactness remained room independent. Moreover, modifying visual room-related cues had no effect on these three attributes, while incongruent auditory room-related cues between the recording and playback room did affect distance perception. Consequently, the external perception of virtual sounds depends on the degree of congruency between the acoustical features of the environment and the stimuli. PMID:27853290
Haslbeck, Friederike Barbara; Bassler, Dirk
2018-01-01
Human and animal studies demonstrate that early auditory experiences influence brain development. The findings are particularly crucial following preterm birth, as the plasticity of auditory regions and cortex development are heavily dependent on the quality of auditory stimulation. Brain maturation in preterm infants may be affected, among other things, by the overwhelming auditory environment of the neonatal intensive care unit (NICU). Conversely, auditory deprivation (e.g., the lack of the regular intrauterine rhythms of the maternal heartbeat and the maternal voice) may also have an impact on brain maturation. Therefore, a nurturing enrichment of the auditory environment for preterm infants is warranted. Creative music therapy (CMT) addresses these demands by offering infant-directed singing in lullaby style that is continually adapted to the neonate's needs. The therapeutic approach is tailored to the individual developmental stage, entrained to the breathing rhythm, and adapted to the subtle expressions of the newborn. Not only the therapist and the neonate but also the parents play a role in CMT. In this article, we describe how to apply music therapy in a neonatal intensive care environment to support very preterm infants and their families. We speculate that the enriched musical experience may promote brain development, and we critically discuss the available evidence in support of our assumption.
Whispering - The hidden side of auditory communication.
Frühholz, Sascha; Trost, Wiebke; Grandjean, Didier
2016-11-15
Whispering is a unique expression mode that is specific to auditory communication. Individuals switch their vocalization mode to whispering especially when affected by inner emotions in certain social contexts, such as in intimate relationships or intimidating social interactions. Although this context-dependent whispering is adaptive, whispered voices are acoustically far less rich than phonated voices and thus impose higher demands on listeners' hearing and on the neural auditory decoding needed to recognize their socio-affective value. The neural dynamics underlying this recognition, especially from whispered voices, are largely unknown. Here we show that whispered voices in humans are considerably impoverished as quantified by an entropy measure of spectral acoustic information, and this missing information needs large-scale neural compensation in terms of auditory and cognitive processing. Notably, recognizing the socio-affective information from voices was slightly more difficult from whispered voices, probably based on missing tonal information. While phonated voices elicited extended activity in auditory regions for decoding of relevant tonal and time information and the valence of voices, whispered voices elicited activity in a complex auditory-frontal brain network. Our data suggest that a large-scale multidirectional brain network compensates for the impoverished sound quality of socially meaningful environmental signals to support their accurate recognition and valence attribution. Copyright © 2016 Elsevier Inc. All rights reserved.
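The abstract quantifies the acoustic impoverishment of whispered voices with "an entropy measure of spectral acoustic information". A generic Shannon spectral-entropy sketch of the kind of measure involved; the paper's exact estimator is not specified in the abstract, so treat this as illustrative:

```python
import math

def spectral_entropy(power_spectrum, normalize=True):
    """Shannon entropy of a power spectrum treated as a probability
    distribution over frequency bins (spectrum must have positive total power).
    Higher values mean spectral energy is spread more evenly across bins."""
    total = sum(power_spectrum)
    probs = [p / total for p in power_spectrum if p > 0]
    h = -sum(p * math.log2(p) for p in probs)
    if normalize:
        h /= math.log2(len(power_spectrum))  # scale to [0, 1]
    return h

# Flat spectra score near 1; spectra dominated by a single peak score near 0.
```

Under a measure of this kind, a voice whose energy is concentrated in a few noisy bands (as in whispering) yields lower entropy than a harmonically rich phonated voice, matching the "impoverished" characterization in the abstract.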
Outcome survey of auditory-verbal graduates: study of clinical efficacy.
Goldberg, D M; Flexer, C
1993-05-01
Audiologists must be knowledgeable about the efficacy of aural habilitation practices because we are often the first professionals to inform parents about their child's hearing impairment. The purpose of this investigation was to document the status of graduates of one aural habilitation option: auditory-verbal. A consumer survey was completed by graduates of auditory-verbal programs in the United States and Canada. Graduates were queried regarding degree and etiology of hearing loss, age of onset, amplification, and educational and employment history, among other topics. Results indicated that the majority of the respondents were integrated into regular learning and living environments.
40 Hz Auditory Steady-State Response Is a Pharmacodynamic Biomarker for Cortical NMDA Receptors.
Sivarao, Digavalli V; Chen, Ping; Senapati, Arun; Yang, Yili; Fernandes, Alda; Benitex, Yulia; Whiterock, Valerie; Li, Yu-Wen; Ahlijanian, Michael K
2016-08-01
Schizophrenia patients exhibit dysfunctional gamma oscillations in response to simple auditory stimuli or more complex cognitive tasks, a phenomenon explained by reduced NMDA transmission within inhibitory/excitatory cortical networks. Indeed, a simple steady-state auditory click stimulation paradigm at gamma frequency (~40 Hz) has been reproducibly shown to reduce entrainment as measured by electroencephalography (EEG) in patients. However, some investigators have reported increased phase locking factor (PLF) and power in response to 40 Hz auditory stimulus in patients. Interestingly, preclinical literature also reflects this contradiction. We investigated whether a graded deficiency in NMDA transmission can account for such disparate findings by administering subanesthetic ketamine (1-30 mg/kg, i.v.) or vehicle to conscious rats (n=12) and testing their EEG entrainment to 40 Hz click stimuli at various time points (~7-62 min after treatment). In separate cohorts, we examined in vivo NMDA channel occupancy and tissue exposure to contextualize ketamine effects. We report a robust inverse relationship between PLF and NMDA occupancy 7 min after dosing. Moreover, ketamine could produce inhibition or disinhibition of the 40 Hz response in a temporally dynamic manner. These results provide for the first time empirical data to understand how cortical NMDA transmission deficit may lead to opposite modulation of the auditory steady-state response (ASSR). Importantly, our findings posit that 40 Hz ASSR is a pharmacodynamic biomarker for cortical NMDA function that is also robustly translatable. Besides schizophrenia, such a functional biomarker may be of value to neuropsychiatric disorders such as bipolar disorder and autism spectrum disorder, where 40 Hz ASSR deficits have been documented.
Familiar auditory sensory training in chronic traumatic brain injury: a case study.
Sullivan, Emily Galassi; Guernon, Ann; Blabas, Brett; Herrold, Amy A; Pape, Theresa L-B
2018-04-01
The evaluation and treatment of patients with prolonged periods of seriously impaired consciousness following traumatic brain injury (TBI), such as a vegetative or minimally conscious state, pose considerable challenges, particularly in the chronic phases of recovery. This blinded crossover study explored the effects of familiar auditory sensory training (FAST) compared with a sham stimulation in a patient seven years post severe TBI. Baseline data were collected over 4 weeks to account for variability in status with neurobehavioral measures, including the Disorders of Consciousness scale (DOCS), Coma Near Coma scale (CNC), and Consciousness Screening Algorithm. Pre-stimulation neurophysiological assessments were also completed, namely brainstem auditory evoked potentials (BAEP) and somatosensory evoked potentials (SSEP). Results revealed a significant improvement in the DOCS neurobehavioral findings after FAST, which was not maintained during the sham. BAEP findings also improved, with maintenance of these improvements following sham stimulation as evidenced by repeat testing. The results emphasize the importance of continued evaluation and treatment of individuals in chronic states of seriously impaired consciousness with a variety of tools. Further study of auditory stimulation as a passive treatment paradigm for this population is warranted. Implications for Rehabilitation: Clinicians should be equipped with treatment options to enhance neurobehavioral improvements when traditional treatment methods fail to deliver or maintain functional behavioral changes. Routine assessment is crucial to detect subtle changes in neurobehavioral function even in chronic states of disordered consciousness and to determine potential preserved cognitive abilities that may not be evident due to unreliable motor responses given motoric impairments.
Familiar Auditory Stimulation Training (FAST) is an ideal passive stimulation that can be supplied by families, allied health clinicians and nursing staff of all levels.
Neural plasticity expressed in central auditory structures with and without tinnitus
Roberts, Larry E.; Bosnyak, Daniel J.; Thompson, David C.
2012-01-01
Sensory training therapies for tinnitus are based on the assumption that, notwithstanding neural changes related to tinnitus, auditory training can alter the response properties of neurons in auditory pathways. To assess this assumption, we investigated whether brain changes induced by sensory training in tinnitus sufferers and measured by electroencephalography (EEG) are similar to those induced in age and hearing loss matched individuals without tinnitus trained on the same auditory task. Auditory training was given using a 5 kHz 40-Hz amplitude-modulated (AM) sound that was in the tinnitus frequency region of the tinnitus subjects and enabled extraction of the 40-Hz auditory steady-state response (ASSR) and P2 transient response known to localize to primary and non-primary auditory cortex, respectively. P2 amplitude increased over training sessions equally in participants with tinnitus and in control subjects, suggesting normal remodeling of non-primary auditory regions in tinnitus. However, training-induced changes in the ASSR differed between the tinnitus and control groups. In controls the phase delay between the 40-Hz response and stimulus waveforms reduced by about 10° over training, in agreement with previous results obtained in young normal hearing individuals. However, ASSR phase did not change significantly with training in the tinnitus group, although some participants showed phase shifts resembling controls. On the other hand, ASSR amplitude increased with training in the tinnitus group, whereas in controls this response (which is difficult to remodel in young normal hearing subjects) did not change with training. These results suggest that neural changes related to tinnitus altered how neural plasticity was expressed in the region of primary but not non-primary auditory cortex. Auditory training did not reduce tinnitus loudness although a small effect on the tinnitus spectrum was detected. PMID:22654738
Symbolic Analysis of Heart Rate Variability During Exposure to Musical Auditory Stimulation.
Vanderlei, Franciele Marques; de Abreu, Luiz Carlos; Garner, David Matthew; Valenti, Vitor Engrácia
2016-01-01
In recent years, the application of nonlinear methods for analysis of heart rate variability (HRV) has increased. However, studies on the influence of music on cardiac autonomic modulation in those circumstances are rare. The research team aimed to evaluate the acute effects on HRV of selected auditory stimulation by 2 musical styles, measuring the results using nonlinear methods of analysis: Shannon entropy, symbolic analysis, and correlation-dimension analysis. This was a prospective controlled study in which the volunteers were exposed to music and variables were compared between control (no auditory stimulation) and exposure-to-music conditions. All procedures were performed in a sound-proofed room at the Faculty of Science and Technology at São Paulo State University (UNESP), São Paulo, Brazil. Participants were 22 healthy female students, aged between 18 and 30 y. Prior to the actual intervention, the participants remained at rest for 20 min, and then they were exposed to one of the selected types of music, either classical baroque (64-84 dB) or heavy metal (75-84 dB). Each musical session lasted a total of 5 min and 15 s. Up to 1 wk later, the participants listened to the second type of music. The 2 types of music were delivered in a random sequence that depended on the group to which the participant was assigned. The study analyzed the following HRV indices: Shannon entropy; the symbolic-analysis indices 0V%, 1V%, 2LV%, and 2ULV%; and correlation dimension. During exposure to auditory stimulation by heavy-metal or classical baroque music, the study established no statistically significant variations in Shannon entropy, the symbolic-analysis indices 0V%, 1V%, and 2ULV%, or the correlation dimension. However, during heavy-metal music, the 2LV% index in the symbolic analysis was reduced compared with the controls.
Auditory stimulation with the heavy-metal music reduced the parasympathetic modulation of HRV, whereas no significant changes occurred in cardiac autonomic modulation during exposure to the classical music.
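The symbolic-analysis indices reported above (0V%, 1V%, 2LV%, 2ULV%) come from a standard coarse-graining of the RR-interval series: quantize the intervals into a few levels, slide a three-beat window, and classify each three-symbol word by the number and direction of its variations. A minimal sketch under common conventions (six quantization levels, range-based bin edges), which may differ in detail from the study's implementation:

```python
def symbolic_hrv(rr, levels=6):
    """Porta-style symbolic analysis of an RR-interval series (>= 3 beats).
    Quantize into `levels` bins, slide a 3-beat window, classify each word:
      0V   - no variation (all three symbols equal)
      1V   - exactly one variation between consecutive symbols
      2LV  - two like variations (both changes in the same direction)
      2ULV - two unlike variations (changes in opposite directions)
    Returns the percentage of words in each family."""
    lo, hi = min(rr), max(rr)
    width = (hi - lo) / levels or 1.0   # constant series collapses to one bin
    sym = [min(int((x - lo) / width), levels - 1) for x in rr]
    counts = {"0V": 0, "1V": 0, "2LV": 0, "2ULV": 0}
    words = [sym[i:i + 3] for i in range(len(sym) - 2)]
    for a, b, c in words:
        d1, d2 = b - a, c - b
        if d1 == 0 and d2 == 0:
            counts["0V"] += 1
        elif d1 == 0 or d2 == 0:
            counts["1V"] += 1
        elif d1 * d2 > 0:
            counts["2LV"] += 1
        else:
            counts["2ULV"] += 1
    n = len(words)
    return {k: 100.0 * v / n for k, v in counts.items()}
```

In this framework, 0V% is usually read as a sympathetic marker and 2LV%/2ULV% as vagal markers, which is how the reduced 2LV% during heavy-metal music supports the parasympathetic interpretation above.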
Mochida, Takemi; Gomi, Hiroaki; Kashino, Makio
2010-11-08
There has been plentiful evidence of kinesthetically induced rapid compensation for unanticipated perturbation in speech articulatory movements. However, the role of auditory information in stabilizing articulation has been little studied except for the control of voice fundamental frequency, voice amplitude and vowel formant frequencies. Although the influence of auditory information on the articulatory control process is evident in unintended speech errors caused by delayed auditory feedback, the direct and immediate effect of auditory alteration on the movements of articulators has not been clarified. This work examined whether temporal changes in the auditory feedback of bilabial plosives immediately affects the subsequent lip movement. We conducted experiments with an auditory feedback alteration system that enabled us to replace or block speech sounds in real time. Participants were asked to produce the syllable /pa/ repeatedly at a constant rate. During the repetition, normal auditory feedback was interrupted, and one of three pre-recorded syllables /pa/, /Φa/, or /pi/, spoken by the same participant, was presented once at a different timing from the anticipated production onset, while no feedback was presented for subsequent repetitions. Comparisons of the labial distance trajectories under altered and normal feedback conditions indicated that the movement quickened during the short period immediately after the alteration onset, when /pa/ was presented 50 ms before the expected timing. Such change was not significant under other feedback conditions we tested. The earlier articulation rapidly induced by the progressive auditory input suggests that a compensatory mechanism helps to maintain a constant speech rate by detecting errors between the internally predicted and actually provided auditory information associated with self movement. 
The timing- and context-dependent effects of feedback alteration suggest that the sensory error detection works in a temporally asymmetric window where acoustic features of the syllable to be produced may be coded.
Auditory system dysfunction in Alzheimer disease and its prodromal states: A review.
Swords, Gabriel M; Nguyen, Lydia T; Mudar, Raksha A; Llano, Daniel A
2018-07-01
Recent findings suggest that both peripheral and central auditory system dysfunction occur in the prodromal stages of Alzheimer Disease (AD), and therefore may represent early indicators of the disease. In addition, loss of auditory function itself leads to communication difficulties, social isolation and poor quality of life for both patients with AD and their caregivers. Developing a greater understanding of auditory dysfunction in early AD may shed light on the mechanisms of disease progression and carry diagnostic and therapeutic importance. Herein, we review the literature on hearing abilities in AD and its prodromal stages investigated through methods such as pure-tone audiometry, dichotic listening tasks, and evoked response potentials. We propose that screening for peripheral and central auditory dysfunction in at-risk populations is a low-cost and effective means to identify early AD pathology and provides an entry point for therapeutic interventions that enhance the quality of life of AD patients. Copyright © 2018 Elsevier B.V. All rights reserved.
Auditory Confrontation Naming in Alzheimer’s Disease
Brandt, Jason; Bakker, Arnold; Maroof, David Aaron
2010-01-01
Naming is a fundamental aspect of language and is virtually always assessed with visual confrontation tests. Tests of the ability to name objects by their characteristic sounds would be particularly useful in the assessment of visually impaired patients, and may be particularly sensitive in Alzheimer’s disease (AD). We developed an Auditory Naming Task, requiring the identification of the source of environmental sounds (i.e., animal calls, musical instruments, vehicles) and multiple-choice recognition of those not identified. In two separate studies, mild-to-moderate AD patients performed more poorly than cognitively normal elderly on the Auditory Naming Task. This task was also more difficult than two versions of a comparable Visual Naming Task, and correlated more highly with Mini-Mental State Exam score. Internal consistency reliability was acceptable, although ROC analysis revealed auditory naming to be slightly less successful than visual confrontation naming in discriminating AD patients from normal subjects. Nonetheless, our Auditory Naming Test may prove useful in research and clinical practice, especially with visually-impaired patients. PMID:20981630
Hill, N Jeremy; Moinuddin, Aisha; Häuser, Ann-Katrin; Kienzle, Stephan; Schalk, Gerwin
2012-01-01
Most brain-computer interface (BCI) systems require users to modulate brain signals in response to visual stimuli. Thus, they may not be useful to people with limited vision, such as those with severe paralysis. One important approach for overcoming this issue is auditory streaming, an approach whereby a BCI system is driven by shifts of attention between two simultaneously presented auditory stimulus streams. Motivated by the long-term goal of translating such a system into a reliable, simple yes-no interface for clinical usage, we aim to answer two main questions. First, we asked which of two previously published variants provides superior performance: a fixed-phase (FP) design in which the streams have equal period and opposite phase, or a drifting-phase (DP) design where the periods are unequal. We found FP to be superior to DP (p = 0.002): average performance levels were 80 and 72% correct, respectively. We were also able to show, in a pilot with one subject, that auditory streaming can support continuous control and neurofeedback applications: by shifting attention between ongoing left and right auditory streams, the subject was able to control the position of a paddle in a computer game. Second, we examined whether the system is dependent on eye movements, since it is known that eye movements and auditory attention may influence each other, and any dependence on the ability to move one's eyes would be a barrier to translation to paralyzed users. We discovered that, despite instructions, some subjects did make eye movements that were indicative of the direction of attention. However, there was no correlation, across subjects, between the reliability of the eye movement signal and the reliability of the BCI system, indicating that our system was configured to work independently of eye movement. Together, these findings are an encouraging step forward toward BCIs that provide practical communication and control options for the most severely paralyzed users.
Auditory steady state response in sound field.
Hernández-Pérez, H; Torres-Fortuny, A
2013-02-01
Physiological and behavioral responses were compared in normal-hearing subjects via analyses of the auditory steady-state response (ASSR) and conventional audiometry under sound field conditions. The auditory stimuli, presented through a loudspeaker, consisted of four carrier tones (500, 1000, 2000, and 4000 Hz), presented singly for behavioral testing but combined (multiple frequency technique), to estimate thresholds using the ASSR. Twenty normal-hearing adults were examined. The average differences between the physiological and behavioral thresholds were between 17 and 22 dB HL. The Spearman rank correlation between ASSR and behavioral thresholds was significant for all frequencies (p < 0.05). Significant differences were found in the ASSR amplitude among frequencies, and strong correlations between the ASSR amplitude and the stimulus level (p < 0.05). The ASSR in sound field testing was found to yield hearing threshold estimates deemed to be reasonably well correlated with behaviorally assessed thresholds.
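The threshold comparison above uses the Spearman rank correlation, i.e., the Pearson correlation computed on the ranks of the paired ASSR and behavioral thresholds. A self-contained sketch with average ranks for ties; any example data here would be illustrative, not the study's measurements:

```python
def _ranks(values):
    """1-based average ranks; tied values share the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = (sum((a - mx) ** 2 for a in rx)) ** 0.5
    sy = (sum((b - my) ** 2 for b in ry)) ** 0.5
    return cov / (sx * sy)
```

Being rank-based, this statistic tolerates the constant 17-22 dB offset between physiological and behavioral thresholds: a monotone relationship alone yields a high correlation.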
Schrode, Katrina M.; Bee, Mark A.
2015-01-01
Sensory systems function most efficiently when processing natural stimuli, such as vocalizations, and it is thought that this reflects evolutionary adaptation. Among the best-described examples of evolutionary adaptation in the auditory system are the frequent matches between spectral tuning in both the peripheral and central auditory systems of anurans (frogs and toads) and the frequency spectra of conspecific calls. Tuning to the temporal properties of conspecific calls is less well established, and in anurans has so far been documented only in the central auditory system. Using auditory-evoked potentials, we asked whether there are species-specific or sex-specific adaptations of the auditory systems of gray treefrogs (Hyla chrysoscelis) and green treefrogs (H. cinerea) to the temporal modulations present in conspecific calls. Modulation rate transfer functions (MRTFs) constructed from auditory steady-state responses revealed that each species was more sensitive than the other to the modulation rates typical of conspecific advertisement calls. In addition, auditory brainstem responses (ABRs) to paired clicks indicated relatively better temporal resolution in green treefrogs, which could represent an adaptation to the faster modulation rates present in the calls of this species. MRTFs and recovery of ABRs to paired clicks were generally similar between the sexes, and we found no evidence that males were more sensitive than females to the temporal modulation patterns characteristic of the aggressive calls used in male–male competition. Together, our results suggest that efficient processing of the temporal properties of behaviorally relevant sounds begins at potentially very early stages of the anuran auditory system that include the periphery. PMID:25617467
Calderón-Garcidueñas, Lilian; González-González, Luis O; Kulesza, Randy J; Fech, Tatiana M; Pérez-Guillé, Gabriela; Luna, Miguel Angel Jiménez-Bravo; Soriano-Rosales, Rosa Eugenia; Solorio, Edelmira; Miramontes-Higuera, José de Jesús; Gómez-Maqueo Chew, Aline; Bernal-Morúa, Alexia F; Mukherjee, Partha S; Torres-Jardón, Ricardo; Mills, Paul C; Wilson, Wayne J; Pérez-Guillé, Beatriz; D'Angiulli, Amedeo
2017-10-01
Delayed central conduction times in the auditory brainstem have been observed in Mexico City (MC) healthy children exposed to fine particulate matter (PM2.5) and ozone (O3) above the current United States Environmental Protection Agency (US-EPA) standards. MC children have α-synuclein brainstem accumulation and medial superior olivary complex (MSO) dysmorphology. The present study used a dog model to investigate the potential effects of air pollution on the function and morphology of the auditory brainstem. Twenty-four dogs living in clean air vs MC, average age 37.1 ± 26.3 months, underwent brainstem auditory evoked potential (BAEP) measurements. Eight dogs (4 MC, 4 controls) were analysed for auditory brainstem morphology and histopathology. MC dogs showed ventral cochlear nuclei hypotrophy and MSO dysmorphology, with a significant decrease in cell body size, decreased neuronal packing density with regions of the nucleus devoid of neurons, and marked gliosis. MC dogs showed significantly delayed BAEP absolute wave I, III and V latencies compared to controls. MC dogs show auditory nuclei dysmorphology and BAEPs consistent with an alteration of the generator sites of the auditory brainstem response waveform. This study puts forward the usefulness of BAEPs for studying auditory brainstem neurodegenerative changes associated with air pollution in dogs. Recognition of the role of non-invasive BAEPs in urban dogs is warranted to elucidate novel neurodegenerative pathways linked to air pollution and as a promising early diagnostic strategy for Alzheimer's disease. Copyright © 2017 Elsevier Inc. All rights reserved.
The cholinergic basal forebrain in the ferret and its inputs to the auditory cortex.
Bajo, Victoria M; Leach, Nicholas D; Cordery, Patricia M; Nodal, Fernando R; King, Andrew J
2014-09-01
Cholinergic inputs to the auditory cortex can modulate sensory processing and regulate stimulus-specific plasticity according to the behavioural state of the subject. In order to understand how acetylcholine achieves this, it is essential to elucidate the circuitry by which cholinergic inputs influence the cortex. In this study, we described the distribution of cholinergic neurons in the basal forebrain and their inputs to the auditory cortex of the ferret, a species used increasingly in studies of auditory learning and plasticity. Cholinergic neurons in the basal forebrain, visualized by choline acetyltransferase and p75 neurotrophin receptor immunocytochemistry, were distributed through the medial septum, diagonal band of Broca, and nucleus basalis magnocellularis. Epipial tracer deposits and injections of the immunotoxin ME20.4-SAP (monoclonal antibody specific for the p75 neurotrophin receptor conjugated to saporin) in the auditory cortex showed that cholinergic inputs originate almost exclusively in the ipsilateral nucleus basalis. Moreover, tracer injections in the nucleus basalis revealed a pattern of labelled fibres and terminal fields that resembled acetylcholinesterase fibre staining in the auditory cortex, with the heaviest labelling in layers II/III and in the infragranular layers. Labelled fibres with small en-passant varicosities and simple terminal swellings were observed throughout all auditory cortical regions. The widespread distribution of cholinergic inputs from the nucleus basalis to both primary and higher level areas of the auditory cortex suggests that acetylcholine is likely to be involved in modulating many aspects of auditory processing. © 2014 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
2017-01-01
In multisensory integration, processing in one sensory modality is enhanced by complementary information from other modalities. Intersensory timing is crucial in this process because only inputs reaching the brain within a restricted temporal window are perceptually bound. Previous research in the audiovisual field has investigated various features of the temporal binding window, revealing asymmetries in its size and plasticity depending on the leading input: auditory–visual (AV) or visual–auditory (VA). Here, we tested whether separate neuronal mechanisms underlie this AV–VA dichotomy in humans. We recorded high-density EEG while participants performed an audiovisual simultaneity judgment task including various AV–VA asynchronies and unisensory control conditions (visual-only, auditory-only) and tested whether AV and VA processing generate different patterns of brain activity. After isolating the multisensory components of AV–VA event-related potentials (ERPs) from the sum of their unisensory constituents, we ran a time-resolved topographical representational similarity analysis (tRSA) comparing the AV and VA ERP maps. Spatial cross-correlation matrices were built from real data to index the similarity between the AV and VA maps at each time point (500 ms window after stimulus) and then correlated with two alternative similarity model matrices: AVmaps = VAmaps versus AVmaps ≠ VAmaps. The tRSA results favored the AVmaps ≠ VAmaps model across all time points, suggesting that audiovisual temporal binding (indexed by synchrony perception) engages different neural pathways depending on the leading sense. The existence of such dual route supports recent theoretical accounts proposing that multiple binding mechanisms are implemented in the brain to accommodate different information parsing strategies in auditory and visual sensory systems. 
SIGNIFICANCE STATEMENT Intersensory timing is a crucial aspect of multisensory integration, determining whether and how inputs in one modality enhance stimulus processing in another modality. Our research demonstrates that evaluating synchrony of auditory-leading (AV) versus visual-leading (VA) audiovisual stimulus pairs is characterized by two distinct patterns of brain activity. This suggests that audiovisual integration is not a unitary process and that different binding mechanisms are recruited in the brain based on the leading sense. These mechanisms may be relevant for supporting different classes of multisensory operations, for example, auditory enhancement of visual attention (AV) and visual enhancement of auditory speech (VA). PMID:28450537
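The per-time-point spatial comparison at the heart of the tRSA described above can be sketched as follows. This is a minimal illustration only: the channel count, sampling rate, and data are random placeholders (assumptions, not the study's recordings), and only the step of correlating the AV and VA topographies across electrodes at each time point is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ERP topographies (channels x time points) for the
# audio-leading (AV) and visual-leading (VA) conditions, after the
# sum of the unisensory responses has been subtracted out.
n_channels, n_times = 64, 250          # e.g. 500 ms at 500 Hz (assumed)
av_maps = rng.standard_normal((n_channels, n_times))
va_maps = rng.standard_normal((n_channels, n_times))

def spatial_correlation(map_a, map_b):
    """Pearson correlation between two scalp maps across channels."""
    return np.corrcoef(map_a, map_b)[0, 1]

# Time-resolved similarity between the AV and VA topographies.
similarity = np.array([
    spatial_correlation(av_maps[:, t], va_maps[:, t])
    for t in range(n_times)
])

# Under an "AVmaps = VAmaps" model this trace should stay high at every
# time point; under "AVmaps != VAmaps" it should stay low.
print(similarity.shape)  # (250,)
```

In the full analysis this similarity trace would then be correlated against the two model matrices to decide which account fits the data better.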
Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study.
Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong
2015-01-01
A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli consisting of tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190-210 ms, for 1 kHz stimuli from 170-200 ms, for 2.5 kHz stimuli from 140-200 ms, and for 5 kHz stimuli from 100-200 ms. These findings suggest that a higher frequency sound signal paired with visual stimuli might be processed or integrated earlier, despite the auditory stimuli being task-irrelevant information. Furthermore, audiovisual integration in late-latency (300-340 ms) ERPs with fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirm that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies.
Kamal, Brishna; Holman, Constance; de Villers-Sidani, Etienne
2013-01-01
Age-related impairments in the primary auditory cortex (A1) include poor tuning selectivity, neural desynchronization, and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain, and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signal and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and cortical myelin as previously documented in the aged group. Most of these changes reversed after returning the rats to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function. PMID:24062649
Lamas, Verónica; Estévez, Sheila; Pernía, Marianni; Plaza, Ignacio; Merchán, Miguel A
2017-10-11
The rat auditory cortex (AC) is becoming popular among auditory neuroscience investigators interested in experience-dependent plasticity, auditory perceptual processes, and cortical control of sound processing in the subcortical auditory nuclei. To address new challenges, a procedure to accurately locate and surgically expose the auditory cortex would expedite this research effort. Stereotactic neurosurgery is routinely used in pre-clinical research in animal models to engraft a needle or electrode at a pre-defined location within the auditory cortex. In the following protocol, we use stereotactic methods in a novel way. We identify four coordinate points over the surface of the temporal bone of the rat to define a window that, once opened, accurately exposes both the primary (A1) and secondary (dorsal and ventral) cortices of the AC. Using this method, we then perform a surgical ablation of the AC. After such a manipulation is performed, it is necessary to assess the localization, size, and extension of the lesions made in the cortex. Thus, we also describe a method to easily locate the AC ablation postmortem using a coordinate map constructed by transferring the cytoarchitectural limits of the AC to the surface of the brain. The combination of the stereotactically guided location and ablation of the AC with the localization of the injured area in a coordinate map postmortem facilitates the validation of information obtained from the animal, and leads to a better analysis and comprehension of the data.
SDI Software Technology Program Plan Version 1.5
1987-06-01
computer generation of auditory communication of meaningful speech. Most speech synthesizers are based on mathematical models of the human vocal tract, but...oral/auditory and multimodal communications. Although such state-of-the-art interaction technology has not fully matured, user experience has...superior pattern matching capabilities and the subliminal intuitive deduction capability. The error performance of humans can be helped by careful
ERIC Educational Resources Information Center
Dawes, Piers; Bishop, Dorothy
2009-01-01
Background: Auditory Processing Disorder (APD) does not feature in mainstream diagnostic classifications such as the "Diagnostic and Statistical Manual of Mental Disorders, 4th Edition" (DSM-IV), but is frequently diagnosed in the United States, Australia and New Zealand, and is becoming more frequently diagnosed in the United Kingdom. Aims: To…
ERIC Educational Resources Information Center
Roth, Daphne Ari-Even; Kishon-Rabin, Liat; Hildesheimer, Minka; Karni, Avi
2005-01-01
Large gains in performance, evolving hours after practice has terminated, were reported in a number of visual and some motor learning tasks, as well as recently in an auditory nonverbal discrimination task. It was proposed that these gains reflect a latent phase of experience-triggered memory consolidation in human skill learning. It is not clear,…
Jacoby, Oscar; Hall, Sarah E; Mattingley, Jason B
2012-07-16
Mechanisms of attention are required to prioritise goal-relevant sensory events under conditions of stimulus competition. According to the perceptual load model of attention, the extent to which task-irrelevant inputs are processed is determined by the relative demands of discriminating the target: the more perceptually demanding the target task, the less unattended stimuli will be processed. Although much evidence supports the perceptual load model for competing stimuli within a single sensory modality, the effects of perceptual load in one modality on distractor processing in another is less clear. Here we used steady-state evoked potentials (SSEPs) to measure neural responses to irrelevant visual checkerboard stimuli while participants performed either a visual or auditory task that varied in perceptual load. Consistent with perceptual load theory, increasing visual task load suppressed SSEPs to the ignored visual checkerboards. In contrast, increasing auditory task load enhanced SSEPs to the ignored visual checkerboards. This enhanced neural response to irrelevant visual stimuli under auditory load suggests that exhausting capacity within one modality selectively compromises inhibitory processes required for filtering stimuli in another. Copyright © 2012 Elsevier Inc. All rights reserved.
Neuronal effects of nicotine during auditory selective attention.
Smucny, Jason; Olincy, Ann; Eichman, Lindsay S; Tregellas, Jason R
2015-06-01
Although the attention-enhancing effects of nicotine have been behaviorally and neurophysiologically well-documented, its localized functional effects during selective attention are poorly understood. In this study, we examined the neuronal effects of nicotine during auditory selective attention in healthy human nonsmokers. We hypothesized to observe significant effects of nicotine in attention-associated brain areas, driven by nicotine-induced increases in activity as a function of increasing task demands. A single-blind, prospective, randomized crossover design was used to examine neuronal response associated with a go/no-go task after 7 mg nicotine or placebo patch administration in 20 individuals who underwent functional magnetic resonance imaging at 3T. The task design included two levels of difficulty (ordered vs. random stimuli) and two levels of auditory distraction (silence vs. noise). Significant treatment × difficulty × distraction interaction effects on neuronal response were observed in the hippocampus, ventral parietal cortex, and anterior cingulate. In contrast to our hypothesis, U and inverted U-shaped dependencies were observed between the effects of nicotine on response and task demands, depending on the brain area. These results suggest that nicotine may differentially affect neuronal response depending on task conditions. These results have important theoretical implications for understanding how cholinergic tone may influence the neurobiology of selective attention.
Seeing tones and hearing rectangles - Attending to simultaneous auditory and visual events
NASA Technical Reports Server (NTRS)
Casper, Patricia A.; Kantowitz, Barry H.
1985-01-01
The allocation of attention in dual-task situations depends on both the overall and the momentary demands associated with both tasks. Subjects in an inclusive- or reaction-time task responded to changes in simultaneous sequences of discrete auditory and visual stimuli. Performance on individual trials was affected by (1) the ratio of stimuli in the two tasks, (2) response demands of the two tasks, and (3) patterns inherent in the demands of one task.
Hofmann, G; Kraak, W
1976-08-31
The impact of various acoustic stimuli upon the compound action potential of the auditory nerve in guinea pigs was investigated by means of the averaging method. It was found that the potential amplitude within the measuring range increases with the logarithm of the rising sound pressure velocity. Unlike evoked response audiometry (ERA), this potential seems unsuitable for furnishing information on the frequency-dependent threshold course.
Mulert, C; Juckel, G; Augustin, H; Hegerl, U
2002-10-01
The loudness dependency of the auditory evoked potential (LDAEP) is used as an indicator of the central serotonergic system and predicts clinical response to serotonin agonists. So far, LDAEP has typically been investigated with dipole source analysis, because with this method the primary and secondary auditory cortex (with high versus low serotonergic innervation) can be separated, at least in part. We have developed a new analysis procedure that uses an MRI probabilistic map of the primary auditory cortex in Talairach space and analyzed the current density in this region of interest with low resolution electromagnetic tomography (LORETA). LORETA is a tomographic localization method that calculates the current density distribution in Talairach space. In a group of patients with major depression (n=15), this new method predicted the response to a selective serotonin reuptake inhibitor (citalopram) at least as well as the traditional dipole source analysis method (P=0.019 vs. P=0.028). The correlation of the improvement on the Hamilton Scale was significant with the LORETA LDAEP values (0.56; P=0.031) but not with the dipole source analysis LDAEP values (0.43; P=0.11). The new tomographic LDAEP analysis is a promising tool for the analysis of the central serotonergic system.
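As a numerical illustration of the loudness-dependency measure itself, the sketch below fits the conventional slope of evoked-potential amplitude against stimulus intensity. The intensity levels and amplitudes are invented for illustration (assumptions, not data from this study), which used LORETA current density rather than scalp amplitudes.

```python
import numpy as np

# Hypothetical N1/P2 peak-to-peak amplitudes (µV) at five stimulus
# intensities (dB SPL); the LDAEP is conventionally reported as the
# least-squares slope of amplitude on intensity.
intensities = np.array([60.0, 70.0, 80.0, 90.0, 100.0])
amplitudes = np.array([4.1, 5.0, 6.2, 7.1, 8.3])

# Linear fit: slope in µV/dB, then scaled to µV per 10 dB.
slope_per_db, intercept = np.polyfit(intensities, amplitudes, 1)
ldaep = slope_per_db * 10.0
print(round(ldaep, 2))  # 1.05 (µV per 10 dB)
```

A steeper slope is conventionally read as weaker central serotonergic neurotransmission, which is what links this measure to predicted SSRI response.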
Investigating three types of continuous auditory feedback in visuo-manual tracking.
Boyer, Éric O; Bevilacqua, Frédéric; Susini, Patrick; Hanneton, Sylvain
2017-03-01
The use of continuous auditory feedback for motor control and learning is still understudied and deserves more attention regarding fundamental mechanisms and applications. This paper presents the results of three experiments studying the contribution of task-, error-, and user-related sonification to visuo-manual tracking and assessing its benefits for sensorimotor learning. First results show that sonification can help decrease the tracking error, as well as increase the energy in participants' movements. In the second experiment, when feedback presence was alternated, the user-related sonification did not show feedback dependency effects, contrary to the error- and task-related feedback. In the third experiment, a reduced exposure of 50% diminished the positive effect of sonification on performance, whereas the increase of average energy with sound was still significant. In a retention test performed the next day without auditory feedback, movement energy was still superior for the groups previously trained with the feedback. Although performance was not affected by sound, a learning effect was measurable in both sessions, and the user-related group improved its performance in the retention test as well. These results confirm that continuous auditory feedback can be beneficial for movement training and also show an interesting effect of sonification on movement energy. User-related sonification can prevent feedback dependency and increase retention. Consequently, sonification of the user's own motion appears to be a promising solution to support movement learning with interactive feedback.
Leitmeyer, Katharina; Glutz, Andrea; Radojevic, Vesna; Setz, Cristian; Huerzeler, Nathan; Bumann, Helen; Bodmer, Daniel; Brand, Yves
2015-01-01
Rapamycin is an antifungal agent with immunosuppressive properties. Rapamycin inhibits the mammalian target of rapamycin (mTOR) by blocking mTOR complex 1 (mTORC1). mTOR is an atypical serine/threonine protein kinase that controls cell growth, cell proliferation, and cell metabolism. However, less is known about the mTOR pathway in the inner ear. First, we evaluated whether or not the two mTOR complexes (mTORC1 and mTORC2, respectively) are present in the mammalian cochlea. Next, tissue explants of 5-day-old rats were treated with increasing concentrations of rapamycin to explore the effects of rapamycin on auditory hair cells and spiral ganglion neurons. Auditory hair cell survival, spiral ganglion neuron number, length of neurites, and neuronal survival were analyzed in vitro. Our data indicate that both mTOR complexes are expressed in the mammalian cochlea. We observed that inhibition of mTOR by rapamycin results in dose-dependent damage to auditory hair cells. Moreover, spiral ganglion neurite number and neurite length were significantly decreased at all concentrations used compared to control, in a dose-dependent manner. Our data indicate that mTOR may play a role in the survival of hair cells and modulates spiral ganglion neuronal outgrowth and neurite formation. PMID:25918725
2016-01-01
Abstract Successful language comprehension critically depends on our ability to link linguistic expressions to the entities they refer to. Without reference resolution, newly encountered language cannot be related to previously acquired knowledge. The human experience includes many different types of referents, some visual, some auditory, some very abstract. Does the neural basis of reference resolution depend on the nature of the referents, or do our brains use a modality-general mechanism for linking meanings to referents? Here we report evidence for both. Using magnetoencephalography (MEG), we varied both the modality of referents, which consisted either of visual or auditory objects, and the point at which reference resolution was possible within sentences. Source-localized MEG responses revealed brain activity associated with reference resolution that was independent of the modality of the referents, localized to the medial parietal lobe and starting ∼415 ms after the onset of reference resolving words. A modality-specific response to reference resolution in auditory domains was also found, in the vicinity of auditory cortex. Our results suggest that referential language processing cannot be reduced to processing in classical language regions and representations of the referential domain in modality-specific neural systems. Instead, our results suggest that reference resolution engages medial parietal cortex, which supports a mechanism for referential processing regardless of the content modality. PMID:28058272
Dual Gamma Rhythm Generators Control Interlaminar Synchrony in Auditory Cortex
Ainsworth, Matthew; Lee, Shane; Cunningham, Mark O.; Roopun, Anita K.; Traub, Roger D.; Kopell, Nancy J.; Whittington, Miles A.
2013-01-01
Rhythmic activity in populations of cortical neurons accompanies, and may underlie, many aspects of primary sensory processing and short-term memory. Activity in the gamma band (30 Hz up to > 100 Hz) is associated with such cognitive tasks and is thought to provide a substrate for temporal coupling of spatially separate regions of the brain. However, such coupling requires close matching of frequencies in co-active areas, and because the nominal gamma band is so spectrally broad, it may not constitute a single underlying process. Here we show that, for inhibition-based gamma rhythms in vitro in rat neocortical slices, mechanistically distinct local circuit generators exist in different laminae of rat primary auditory cortex. A persistent, 30 – 45 Hz, gap-junction-dependent gamma rhythm dominates rhythmic activity in supragranular layers 2/3, whereas a tonic depolarization-dependent, 50 – 80 Hz, pyramidal/interneuron gamma rhythm is expressed in granular layer 4 with strong glutamatergic excitation. As a consequence, altering the degree of excitation of the auditory cortex causes bifurcation in the gamma frequency spectrum and can effectively switch temporal control of layer 5 from supragranular to granular layers. Computational modeling predicts the pattern of interlaminar connections may help to stabilize this bifurcation. The data suggest that different strategies are used by primary auditory cortex to represent weak and strong inputs, with principal cell firing rate becoming increasingly important as excitation strength increases. PMID:22114273
Neural Substrates of Auditory Emotion Recognition Deficits in Schizophrenia.
Kantrowitz, Joshua T; Hoptman, Matthew J; Leitman, David I; Moreno-Ortega, Marta; Lehrfeld, Jonathan M; Dias, Elisa; Sehatpour, Pejman; Laukka, Petri; Silipo, Gail; Javitt, Daniel C
2015-11-04
Deficits in auditory emotion recognition (AER) are a core feature of schizophrenia and a key component of social cognitive impairment. AER deficits are tied behaviorally to impaired ability to interpret tonal ("prosodic") features of speech that normally convey emotion, such as modulations in base pitch (F0M) and pitch variability (F0SD). These modulations can be recreated using synthetic frequency modulated (FM) tones that mimic the prosodic contours of specific emotional stimuli. The present study investigates neural mechanisms underlying impaired AER using a combined event-related potential/resting-state functional connectivity (rsfMRI) approach in 84 schizophrenia/schizoaffective disorder patients and 66 healthy comparison subjects. Mismatch negativity (MMN) to FM tones was assessed in 43 patients/36 controls. rsfMRI between auditory cortex and medial temporal (insula) regions was assessed in 55 patients/51 controls. The relationship between AER, MMN to FM tones, and rsfMRI was assessed in the subset who performed all assessments (14 patients, 21 controls). As predicted, patients showed robust reductions in MMN across FM stimulus type (p = 0.005), particularly to modulations in F0M, along with impairments in AER and FM tone discrimination. MMN source analysis indicated dipoles in both auditory cortex and anterior insula, whereas rsfMRI analyses showed reduced auditory-insula connectivity. MMN to FM tones and functional connectivity together accounted for ∼50% of the variance in AER performance across individuals. These findings demonstrate that impaired preattentive processing of tonal information and reduced auditory-insula connectivity are critical determinants of social cognitive dysfunction in schizophrenia, and thus represent key targets for future research and clinical intervention. 
Schizophrenia patients show deficits in the ability to infer emotion based upon tone of voice [auditory emotion recognition (AER)] that drive impairments in social cognition and global functional outcome. This study evaluated neural substrates of impaired AER in schizophrenia using a combined event-related potential/resting-state fMRI approach. Patients showed impaired mismatch negativity response to emotionally relevant frequency modulated tones along with impaired functional connectivity between auditory and medial temporal (anterior insula) cortex. These deficits contributed in parallel to impaired AER and accounted for ∼50% of variance in AER performance. Overall, these findings demonstrate the importance of both auditory-level dysfunction and impaired auditory/insula connectivity in the pathophysiology of social cognitive dysfunction in schizophrenia. Copyright © 2015 the authors 0270-6474/15/3514910-13$15.00/0.
Lacerda, Clara Fonseca; Silva, Luciana Oliveira e; de Tavares Canto, Roberto Sérgio; Cheik, Nadia Carla
2012-01-01
Summary Introduction: The aging process causes structural and functional changes in hearing, compromising postural control and central processing. Studies have addressed the need to identify risk factors harmful to auditory health and to safety in elderly people affected by auditory deficits and balance alterations. Objective: To evaluate the effect of hearing aids on quality of life, balance, and fear of falling in elderly individuals with bilateral hearing loss. Method: A clinical and experimental study was carried out with 56 elderly individuals with sensorineural hearing loss who were fitted with individual hearing aids (AASI). Participants answered the Short Form Health Survey (SF-36) quality-of-life questionnaire and the Falls Efficacy Scale-International (FES-I), and completed the Berg Balance Scale (BBS). After 4 months, those who had adapted to the hearing aid were reevaluated. Results: 50% of the participants adapted to the hearing aid. Males had greater difficulty adapting to the device, whereas age, degree of loss, and the presence of tinnitus or vertigo did not interfere with adaptation. After hearing aid fitting, quality of life improved in the General Health State and Functional Capacity domains, tinnitus improved, and self-confidence increased. Conclusion: The use of hearing aids improved quality-of-life domains, which was reflected in better self-confidence and, in the long run, in a reduction of the fear of falling in elderly individuals with sensorineural hearing loss. PMID:25991930
Top-down and bottom-up modulation of brain structures involved in auditory discrimination.
Diekhof, Esther K; Biedermann, Franziska; Ruebsamen, Rudolf; Gruber, Oliver
2009-11-10
Auditory deviancy detection comprises both automatic and voluntary processing. Here, we investigated the neural correlates of different components of the sensory discrimination process using functional magnetic resonance imaging. Subliminal auditory processing of deviant events that were not detected led to activation in left superior temporal gyrus. On the other hand, both correct detection of deviancy and false alarms activated a frontoparietal network of attentional processing and response selection, i.e. this network was activated regardless of the physical presence of deviant events. Finally, activation in the putamen, anterior cingulate and middle temporal cortex depended on factual stimulus representations and occurred only during correct deviancy detection. These results indicate that sensory discrimination may rely on dynamic bottom-up and top-down interactions.
Analysis of speech sounds is left-hemisphere predominant at 100-150ms after sound onset.
Rinne, T; Alho, K; Alku, P; Holi, M; Sinkkonen, J; Virtanen, J; Bertrand, O; Näätänen, R
1999-04-06
Hemispheric specialization of human speech processing has been found in brain imaging studies using fMRI and PET. Due to the restricted time resolution, these methods cannot, however, determine the stage of auditory processing at which this specialization first emerges. We used a dense electrode array covering the whole scalp to record the mismatch negativity (MMN), an event-related brain potential (ERP) automatically elicited by occasional changes in sounds, which ranged from non-phonetic (tones) to phonetic (vowels). MMN can be used to probe auditory central processing on a millisecond scale with no attention-dependent task requirements. Our results indicate that speech processing occurs predominantly in the left hemisphere at the early, pre-attentive level of auditory analysis.
Modeling Auditory-Haptic Interface Cues from an Analog Multi-line Telephone
NASA Technical Reports Server (NTRS)
Begault, Durand R.; Anderson, Mark R.; Bittner, Rachael M.
2012-01-01
The Western Electric Company produced a multi-line telephone during the 1940s-1970s using a six-button interface design that provided robust tactile, haptic and auditory cues regarding the "state" of the communication system. This multi-line telephone was used as a model for a trade study comparison of two interfaces: a touchscreen interface (iPad) versus a pressure-sensitive strain gauge button interface (Phidget USB interface controllers). The experiment and its results are detailed in the authors' AES 133rd convention paper "Multimodal Information Management: Evaluation of Auditory and Haptic Cues for NextGen Communication Displays". This Engineering Brief describes how the interface logic, visual indications, and auditory cues of the original telephone were synthesized using MAX/MSP, including the logic for line selection, line hold, and priority line activation.
Developmental differences in auditory detection and localization of approaching vehicles.
Barton, Benjamin K; Lew, Roger; Kovesdi, Casey; Cottrell, Nicholas D; Ulrich, Thomas
2013-04-01
Pedestrian safety is a significant problem in the United States, with thousands being injured each year. Multiple risk factors exist, but one poorly understood factor is pedestrians' ability to attend to vehicles using auditory cues. Auditory information in the pedestrian setting is increasing in importance with the growing number of quieter hybrid and all-electric vehicles on America's roadways that do not emit sound cues pedestrians expect from an approaching vehicle. Our study explored developmental differences in pedestrians' detection and localization of approaching vehicles. Fifty children ages 6-9 years, and 35 adults participated. Participants' performance varied significantly by age, and with increasing speed and direction of the vehicle's approach. Results underscore the importance of understanding children's and adults' use of auditory cues for pedestrian safety and highlight the need for further research. Copyright © 2013 Elsevier Ltd. All rights reserved.
Keesom, Sarah M; Morningstar, Mitchell D; Sandlain, Rebecca; Wise, Bradley M; Hurley, Laura M
2018-05-12
Early-life experiences, including maternal deprivation and social isolation during adolescence, have a profound influence on a range of adult social behaviors. Post-weaning social isolation in rodents influences behavior in part through the alteration of neuromodulatory systems, including the serotonergic system. Of significance to social behavior, the serotonergic system richly innervates brain areas involved in vocal communication, including the auditory system. However, the influence of isolation on serotonergic input to the auditory system remains underexplored. Here, we assess whether 4 weeks of post-weaning individual housing alters serotonergic fiber density in the inferior colliculus (IC), an auditory midbrain nucleus in which serotonin alters auditory-evoked activity. Individually housed male and female mice were compared to conspecifics housed socially in groups of three. Serotonergic projections were subsequently visualized with an antibody to the serotonin transporter, which labels serotonergic fibers with relatively high selectivity. Fiber densities were estimated in the three major subregions of the IC using line-scan intensity analysis. Individually housed female mice showed a significantly reduced fiber density relative to socially housed females, which was accompanied by a lower body weight in individually housed females. In contrast, social isolation did not affect serotonergic fiber density in the IC of males. This finding suggests that sensitivity of the serotonergic system to social isolation is sex-dependent, which could be due to a sex difference in the effect of isolation on psychosocial stress. Since serotonin availability depends on social context, this finding further suggests that social isolation can alter the acute social regulation of auditory processing. Copyright © 2018. Published by Elsevier B.V.
Cook, Peter; Rouse, Andrew; Wilson, Margaret; Reichmuth, Colleen
2013-11-01
Is the ability to entrain motor activity to a rhythmic auditory stimulus, that is "keep a beat," dependent on neural adaptations supporting vocal mimicry? That is the premise of the vocal learning and synchronization hypothesis, recently advanced to explain the basis of this behavior (A. Patel, 2006, Musical Rhythm, Linguistic Rhythm, and Human Evolution, Music Perception, 24, 99-104). Prior to the current study, only vocal mimics, including humans, cockatoos, and budgerigars, have been shown to be capable of motoric entrainment. Here we demonstrate that a less vocally flexible animal, a California sea lion (Zalophus californianus), can learn to entrain head bobbing to an auditory rhythm meeting three criteria: a behavioral response that does not reproduce the stimulus; performance transfer to a range of novel tempos; and entrainment to complex, musical stimuli. These findings show that the capacity for entrainment of movement to rhythmic sounds does not depend on a capacity for vocal mimicry, and may be more widespread in the animal kingdom than previously hypothesized.
Assessment of Styrene Oxide Neurotoxicity Using In Vitro Auditory Cortex Networks
Gopal, Kamakshi V.; Wu, Calvin; Moore, Ernest J.; Gross, Guenter W.
2011-01-01
Styrene oxide (SO) (C8H8O), the major metabolite of styrene (C6H5CH=CH2), is widely used in industrial applications. Styrene and SO are neurotoxic and cause damaging effects on the auditory system. However, little is known about their concentration-dependent electrophysiological and morphological effects. We used spontaneously active auditory cortex networks (ACNs) growing on microelectrode arrays (MEA) to characterize neurotoxic effects of SO. Acute application of 0.1 to 3.0 mM SO showed concentration-dependent inhibition of spike activity with no noticeable morphological changes. The spike rate IC50 (concentration inducing 50% inhibition) was 511 ± 60 μM (n = 10). Subchronic (5 hr) single applications of 0.5 mM SO also showed 50% activity reduction with no overt changes in morphology. The results imply that electrophysiological toxicity precedes cytotoxicity. Five-hour exposures to 2 mM SO revealed neuronal death, irreversible activity loss, and pronounced glial swelling. Paradoxical “protection” by 40 μM bicuculline suggests binding of SO to GABA receptors. PMID:23724250
Transfer characteristics of the hair cell's afferent synapse
NASA Astrophysics Data System (ADS)
Keen, Erica C.; Hudspeth, A. J.
2006-04-01
The sense of hearing depends on fast, finely graded neurotransmission at the ribbon synapses connecting hair cells to afferent nerve fibers. The processing that occurs at this first chemical synapse in the auditory pathway determines the quality and extent of the information conveyed to the central nervous system. Knowledge of the synapse's input-output function is therefore essential for understanding how auditory stimuli are encoded. To investigate the transfer function at the hair cell's synapse, we developed a preparation of the bullfrog's amphibian papilla. In the portion of this receptor organ representing stimuli of 400-800 Hz, each afferent nerve fiber forms several synaptic terminals onto one to three hair cells. By performing simultaneous voltage-clamp recordings from presynaptic hair cells and postsynaptic afferent fibers, we established that the rate of evoked vesicle release, as determined from the average postsynaptic current, depends linearly on the amplitude of the presynaptic Ca2+ current. This result implies that, for receptor potentials in the physiological range, the hair cell's synapse transmits information with high fidelity. Keywords: auditory system, exocytosis, glutamate, ribbon synapse, synaptic vesicle
Moyer, Caitlin E.; Delevich, Kristen M.; Fish, Kenneth N.; Asafu-Adjei, Josephine K.; Sampson, Allan R.; Dorph-Petersen, Karl-Anton; Lewis, David A.; Sweet, Robert A.
2012-01-01
Background: Schizophrenia is associated with perceptual and physiological auditory processing impairments that may result from primary auditory cortex excitatory and inhibitory circuit pathology. High-frequency oscillations are important for auditory function and are often reported to be disrupted in schizophrenia. These oscillations may, in part, depend on upregulation of gamma-aminobutyric acid synthesis by glutamate decarboxylase 65 (GAD65) in response to high interneuron firing rates. It is not known whether levels of GAD65 protein or GAD65-expressing boutons are altered in schizophrenia. Methods: We studied two cohorts of subjects with schizophrenia and matched control subjects, comprising 27 pairs of subjects. Relative fluorescence intensity, density, volume, and number of GAD65-immunoreactive boutons in primary auditory cortex were measured using quantitative confocal microscopy and stereologic sampling methods. Bouton fluorescence intensities were used to compare the relative expression of GAD65 protein within boutons between diagnostic groups. Additionally, we assessed the correlation between previously measured dendritic spine densities and GAD65-immunoreactive bouton fluorescence intensities. Results: GAD65-immunoreactive bouton fluorescence intensity was reduced by 40% in subjects with schizophrenia and was correlated with previously measured reduced spine density. The reduction was greater in subjects who were not living independently at time of death. In contrast, GAD65-immunoreactive bouton density and number were not altered in deep layer 3 of primary auditory cortex of subjects with schizophrenia. Conclusions: Decreased expression of GAD65 protein within inhibitory boutons could contribute to auditory impairments in schizophrenia. The correlated reductions in dendritic spines and GAD65 protein suggest a relationship between inhibitory and excitatory synapse pathology in primary auditory cortex. PMID:22624794
Fonseca, P J; Correia, T
2007-05-01
The effects of temperature on hearing in the cicada Tettigetta josei were studied. The activity of the auditory nerve and the responses of auditory interneurons to stimuli of different frequencies and intensities were recorded at different temperatures ranging from 16 degrees C to 29 degrees C. Firstly, in order to investigate the temperature dependence of hearing processes, we analyzed its effects on auditory tuning, sensitivity, latency and Q(10dB). Increasing temperature led to an upward shift of the characteristic hearing frequency, to an increase in sensitivity and to a decrease in the latency of the auditory response both in the auditory nerve recordings (periphery) and in some interneurons at the metathoracic-abdominal ganglionic complex (MAC). Characteristic frequency shifts were only observed at low frequency (3-8 kHz). No changes were seen in Q(10dB). Different tuning mechanisms underlying frequency selectivity may explain the results observed. Secondly, we investigated the role of the mechanical sensory structures that participate in the transduction process. Laser vibrometry measurements revealed that the vibrations of the tympanum and tympanal apodeme are temperature independent in the biologically relevant range (18-35 degrees C). Since the above mentioned effects of temperature are present in the auditory nerve recordings, the observed shifts in frequency tuning must be performed by mechanisms intrinsic to the receptor cells. Finally, the role of potassium channels in the response of the auditory system was investigated using a specific inhibitor of these channels, tetraethylammonium (TEA). TEA caused shifts on tuning and sensitivity of the summed response of the receptors similar to the effects of temperature. Thus, potassium channels are implicated in the tuning of the receptor cells.
Liao, Hsin-I; Yoneya, Makoto; Kidani, Shunsuke; Kashino, Makio; Furukawa, Shigeto
2016-01-01
A unique sound that deviates from a repetitive background sound induces signature neural responses, such as mismatch negativity and the novelty P3 response, in electro-encephalography studies. Here we show that a deviant auditory stimulus induces a human pupillary dilation response (PDR) that is sensitive to the stimulus properties, irrespective of whether attention is directed to the sounds or not. In an auditory oddball sequence, we used white noise and 2000-Hz tones as oddballs against repeated 1000-Hz tones. Participants' pupillary responses were recorded while they listened to the auditory oddball sequence. In Experiment 1, they were not involved in any task. Results show that pupils dilated to the noise oddballs for approximately 4 s, but no such PDR was found for the 2000-Hz tone oddballs. In Experiment 2, two types of visual oddballs were presented synchronously with the auditory oddballs. Participants discriminated the auditory or visual oddballs while trying to ignore stimuli from the other modality. The purpose of this manipulation was to direct attention to or away from the auditory sequence. In Experiment 3, the visual oddballs and the auditory oddballs were always presented asynchronously to prevent residuals of attention on to-be-ignored oddballs due to their concurrence with the attended oddballs. Results show that pupils dilated to both the noise and 2000-Hz tone oddballs in all conditions. Most importantly, PDRs to noise were larger than those to the 2000-Hz tone oddballs regardless of the attention condition in both experiments. The overall results suggest that the stimulus-dependent factor of the PDR appears to be independent of attention. PMID:26924959
Paladini, Rebecca E.; Diana, Lorenzo; Zito, Giuseppe A.; Nyffeler, Thomas; Wyss, Patric; Mosimann, Urs P.; Müri, René M.; Nef, Tobias
2018-01-01
Cross-modal spatial cueing can affect performance in a visual search task. For example, search performance improves if a visual target and an auditory cue originate from the same spatial location, and it deteriorates if they originate from different locations. Moreover, it has recently been postulated that multisensory settings, i.e., experimental settings in which critical stimuli are concurrently presented in different sensory modalities (e.g., visual and auditory), may trigger asymmetries in visuospatial attention, with facilitation observed for visual stimuli presented in the right compared to the left visual space. However, it remains unclear whether auditory cueing of attention differentially affects search performance in the left and the right hemifields in audio-visual search tasks. The present study investigated whether spatial asymmetries would occur in a search task with cross-modal spatial cueing. Participants completed a visual search task that contained no auditory cues (i.e., a unimodal visual condition), spatially congruent, spatially incongruent, and spatially non-informative auditory cues. To further assess participants’ accuracy in localising the auditory cues, a unimodal auditory spatial localisation task was also administered. The results demonstrated no left/right asymmetries in the unimodal visual search condition. Both an additional incongruent and a spatially non-informative auditory cue resulted in lateral asymmetries, with search times increased for targets presented in the left compared to the right hemifield. No such spatial asymmetry was observed in the congruent condition. However, participants’ performance in the congruent condition was modulated by their tone localisation accuracy. 
The findings of the present study demonstrate that spatial asymmetries in multisensory processing depend on the validity of the cross-modal cues, and occur under specific attentional conditions, i.e., when visual attention has to be reoriented towards the left hemifield. PMID:29293637
Perception of stochastically undersampled sound waveforms: a model of auditory deafferentation
Lopez-Poveda, Enrique A.; Barrios, Pablo
2013-01-01
Auditory deafferentation, or permanent loss of auditory nerve afferent terminals, occurs after noise overexposure and aging and may accompany many forms of hearing loss. It could cause significant auditory impairment but is undetected by regular clinical tests and so its effects on perception are poorly understood. Here, we hypothesize and test a neural mechanism by which deafferentation could deteriorate perception. The basic idea is that the spike train produced by each auditory afferent resembles a stochastically digitized version of the sound waveform and that the quality of the waveform representation in the whole nerve depends on the number of aggregated spike trains or auditory afferents. We reason that because spikes occur stochastically in time with a higher probability for high- than for low-intensity sounds, more afferents would be required for the nerve to faithfully encode high-frequency or low-intensity waveform features than low-frequency or high-intensity features. Deafferentation would thus degrade the encoding of these features. We further reason that due to the stochastic nature of nerve firing, the degradation would be greater in noise than in quiet. This hypothesis is tested using a vocoder. Sounds were filtered through ten adjacent frequency bands. For the signal in each band, multiple stochastically subsampled copies were obtained to roughly mimic different stochastic representations of that signal conveyed by different auditory afferents innervating a given cochlear region. These copies were then aggregated to obtain an acoustic stimulus. Tone detection and speech identification tests were performed by young, normal-hearing listeners using different numbers of stochastic samplers per frequency band in the vocoder. Results support the hypothesis that stochastic undersampling of the sound waveform, inspired by deafferentation, impairs speech perception in noise more than in quiet, consistent with auditory aging effects. PMID:23882176
Sousa, Ana Constantino; Didoné, Dayane Domeneghini; Sleifer, Pricila
2017-01-01
Introduction: Preterm neonates are at risk of changes in their auditory system development, which explains the need for auditory monitoring of this population. The Auditory Steady-State Response (ASSR) is an objective method that allows obtaining electrophysiological thresholds, with greater applicability in the neonatal and pediatric population. Objective: The purpose of this study is to compare ASSR thresholds in preterm and term infants evaluated during two stages. Method: The study included 63 normal-hearing neonates: 33 preterm and 30 term. They underwent assessment of ASSR in both ears simultaneously through insert phones at frequencies of 500 to 4000 Hz with the amplitude modulated from 77 to 103 Hz. We presented the intensity at a decreasing level to detect the minimum level of responses. At 18 months, 26 of the 33 preterm infants returned for a new ASSR assessment and were compared with 30 full-term infants. We compared between groups according to gestational age. Results: Electrophysiological thresholds were higher in preterm than in full-term neonates (p < 0.05) at the first testing. There were no significant differences between ears and gender. At 18 months, there was no difference between groups (p > 0.05) in all the variables described. Conclusion: In the first evaluation, preterm infants had higher thresholds in ASSR. There was no difference at 18 months of age, showing the auditory maturation of preterm infants throughout their development. PMID:28680486
Pinaud, Raphael; Terleph, Thomas A.; Tremere, Liisa A.; Phan, Mimi L.; Dagostin, André A.; Leão, Ricardo M.; Mello, Claudio V.; Vicario, David S.
2008-01-01
The role of GABA in the central processing of complex auditory signals is not fully understood. We have studied the involvement of GABAA-mediated inhibition in the processing of birdsong, a learned vocal communication signal requiring intact hearing for its development and maintenance. We focused on caudomedial nidopallium (NCM), an area analogous to parts of the mammalian auditory cortex with selective responses to birdsong. We present evidence that GABAA-mediated inhibition plays a pronounced role in NCM's auditory processing of birdsong. Using immunocytochemistry, we show that approximately half of NCM's neurons are GABAergic. Whole cell patch-clamp recordings in a slice preparation demonstrate that, at rest, spontaneously active GABAergic synapses inhibit excitatory inputs onto NCM neurons via GABAA receptors. Multi-electrode electrophysiological recordings in awake birds show that local blockade of GABAA-mediated inhibition in NCM markedly affects the temporal pattern of song-evoked responses in NCM without modifications in frequency tuning. Surprisingly, this blockade increases the phasic and largely suppresses the tonic response component, reflecting dynamic relationships of inhibitory networks that could include disinhibition. Thus processing of learned natural communication sounds in songbirds, and possibly other vocal learners, may depend on complex interactions of inhibitory networks. PMID:18480371
Kolarik, Andrew J; Cirstea, Silvia; Pardhan, Shahina
2013-02-01
Totally blind listeners often demonstrate better than normal capabilities when performing spatial hearing tasks. Accurate representation of three-dimensional auditory space requires the processing of available distance information between the listener and the sound source; however, auditory distance cues vary greatly depending upon the acoustic properties of the environment, and it is not known which distance cues are important to totally blind listeners. Our data show that totally blind listeners display better performance compared to sighted age-matched controls for distance discrimination tasks in anechoic and reverberant virtual rooms simulated using a room-image procedure. Totally blind listeners use two major auditory distance cues to stationary sound sources, level and direct-to-reverberant ratio, more effectively than sighted controls for many of the virtual distances tested. These results show that significant compensation among totally blind listeners for virtual auditory spatial distance leads to benefits across a range of simulated acoustic environments. No significant differences in performance were observed between listeners with partial non-correctable visual losses and sighted controls, suggesting that sensory compensation for virtual distance does not occur for listeners with partial vision loss.
Sex-specific cognitive abnormalities in early-onset psychosis.
Ruiz-Veguilla, Miguel; Moreno-Granados, Josefa; Salcedo-Marin, Maria D; Barrigon, Maria L; Blanco-Morales, Maria J; Igunza, Evelio; Cañabate, Anselmo; Garcia, Maria D; Guijarro, Teresa; Diaz-Atienza, Francisco; Ferrin, Maite
2017-01-01
Brain maturation differs depending on the area of the brain and sex. Girls show an earlier peak in maturation of the prefrontal cortex. Although differences between adult females and males with schizophrenia have been widely studied, there has been less research in girls and boys with psychosis. The purpose of this study was to examine differences in verbal and visual memory, verbal working memory, auditory attention, processing speed, and cognitive flexibility between boys and girls. We compared a group of 80 boys and girls with first-episode psychosis to a group of controls. We found interactions between group and sex in verbal working memory (p = 0.04) and auditory attention (p = 0.01). The female controls showed better working memory (p = 0.01) and auditory attention (p = 0.001) than males. However, we did not find any sex differences in working memory (p = 0.91) or auditory attention (p = 0.93) in the psychosis group. These results are consistent with the presence of sex-modulated cognitive profiles at first presentation of early-onset psychosis.
Harrison, Neil R; Woodhouse, Rob
2016-05-01
Previous research has demonstrated that threatening, compared to neutral pictures, can bias attention towards non-emotional auditory targets. Here we investigated which subcomponents of attention contributed to the influence of emotional visual stimuli on auditory spatial attention. Participants indicated the location of an auditory target, after brief (250 ms) presentation of a spatially non-predictive peripheral visual cue. Responses to targets were faster at the location of the preceding visual cue, compared to at the opposite location (cue validity effect). The cue validity effect was larger for targets following pleasant and unpleasant cues compared to neutral cues, for right-sided targets. For unpleasant cues, the crossmodal cue validity effect was driven by delayed attentional disengagement, and for pleasant cues, it was driven by enhanced engagement. We conclude that both pleasant and unpleasant visual cues influence the distribution of attention across modalities and that the associated attentional mechanisms depend on the valence of the visual cue.
Sun, Hongyu; Takesian, Anne E; Wang, Ting Ting; Lippman-Bell, Jocelyn J; Hensch, Takao K; Jensen, Frances E
2018-05-29
Heightened neural excitability in infancy and childhood results in increased susceptibility to seizures. Such early-life seizures are associated with language deficits and autism that can result from aberrant development of the auditory cortex. Here, we show that early-life seizures disrupt a critical period (CP) for tonotopic map plasticity in primary auditory cortex (A1). We show that this CP is characterized by a prevalence of "silent," NMDA-receptor (NMDAR)-only, glutamate receptor synapses in auditory cortex that become "unsilenced" due to activity-dependent AMPA receptor (AMPAR) insertion. Induction of seizures prior to this CP occludes tonotopic map plasticity by prematurely unsilencing NMDAR-only synapses. Further, brief treatment with the AMPAR antagonist NBQX following seizures, prior to the CP, prevents synapse unsilencing and permits subsequent A1 plasticity. These findings reveal that early-life seizures modify CP regulators and suggest that therapeutic targets for early post-seizure treatment can rescue CP plasticity. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Neural correlates of auditory short-term memory in rostral superior temporal cortex.
Scott, Brian H; Mishkin, Mortimer; Yin, Pingbo
2014-12-01
Auditory short-term memory (STM) in the monkey is less robust than visual STM and may depend on a retained sensory trace, which is likely to reside in the higher-order cortical areas of the auditory ventral stream. We recorded from the rostral superior temporal cortex as monkeys performed serial auditory delayed match-to-sample (DMS). A subset of neurons exhibited modulations of their firing rate during the delay between sounds, during the sensory response, or during both. This distributed subpopulation carried a predominantly sensory signal modulated by the mnemonic context of the stimulus. Excitatory and suppressive effects on match responses were dissociable in their timing and in their resistance to sounds intervening between the sample and match. Like the monkeys' behavioral performance, these neuronal effects differ from those reported in the same species during visual DMS, suggesting different neural mechanisms for retaining dynamic sounds and static images in STM. Copyright © 2014 Elsevier Ltd. All rights reserved.
Baseline vestibular and auditory findings in a trial of post-concussive syndrome
Meehan, Anna; Searing, Elizabeth; Weaver, Lindell; Lewandowski, Andrew
2016-01-01
Previous studies have reported high rates of auditory and vestibular-balance deficits immediately following head injury. This study uses a comprehensive battery of assessments to characterize auditory and vestibular function in 71 U.S. military service members with chronic symptoms following mild traumatic brain injury that did not resolve with traditional interventions. The majority of the study population reported hearing loss (70%) and recent vestibular symptoms (83%). Central auditory deficits were most prevalent, with 58% of participants failing the SCAN3:A screening test and 45% showing abnormal responses on auditory steady-state response testing presented at a suprathreshold intensity. Only 17% of the participants had abnormal hearing (>25 dB hearing loss) based on the pure-tone average. Objective vestibular testing supported significant deficits in this population, regardless of whether the participant self-reported active symptoms. Composite score on the Sensory Organization Test was lower than expected from normative data (mean 69.6 ± 15.6). High abnormality rates were found in funduscopy torsion (58%), oculomotor assessments (49%), ocular and cervical vestibular evoked myogenic potentials (46% and 33%, respectively), and monothermal calorics (40%). It is recommended that a full peripheral and central auditory, oculomotor, and vestibular-balance evaluation be completed on military service members who have sustained head trauma. Keywords: vestibular tests, vestibulo-ocular reflex, central auditory dysfunction, mild traumatic brain injury, post-concussive symptoms, hearing
Click train encoding in primary and non-primary auditory cortex of anesthetized macaque monkeys.
Oshurkova, E; Scheich, H; Brosch, M
2008-06-02
We studied encoding of temporally modulated sounds in 28 multiunits in the primary auditory cortical field (AI) and in 35 multiunits in the secondary auditory cortical field (caudomedial auditory cortical field, CM) by presenting periodic click trains with click rates between 1 and 300 Hz lasting for 2-4 s. We found that all multiunits increased or decreased their firing rate during the steady state portion of the click train and that all except two multiunits synchronized their firing to individual clicks in the train. Rate increases and synchronized responses were most prevalent and strongest at low click rates, as expressed by best modulation frequency, limiting frequency, percentage of responsive multiunits, and average rate response and vector strength. Synchronized responses occurred up to 100 Hz; rate response occurred up to 300 Hz. Both auditory fields responded similarly to low click rates but differed at click rates above approximately 12 Hz at which more multiunits in AI than in CM exhibited synchronized responses and increased rate responses and more multiunits in CM exhibited decreased rate responses. These findings suggest that the auditory cortex of macaque monkeys encodes temporally modulated sounds similar to the auditory cortex of other mammals. Together with other observations presented in this and other reports, our findings also suggest that AI and CM have largely overlapping sensitivities for acoustic stimulus features but encode these features differently.
Lilienthal, Hellmuth; van der Ven, Leo T M; Piersma, Aldert H; Vos, Josephus G
2009-02-25
Hexabromocyclododecane (HBCD) is a widely used brominated flame retardant which has recently been detected in many environmental matrices. Data from a subacute toxicity study indicated dose-related effects, particularly on the pituitary-thyroid axis and retinoids, in female rats. Brominated and chlorinated aromatic hydrocarbons are also reported to exert effects on the nervous system. Several investigations revealed a pronounced sensitivity of the dopaminergic system and auditory functions to polychlorinated biphenyls. Therefore, the present experiment examined whether HBCD affects these targets. Rats were exposed to 0, 0.1, 0.3, 1, 3, 10, 30 or 100 mg HBCD/kg body weight via the diet. Exposure started before mating and was continued during mating, gestation, lactation, and after weaning in offspring. Haloperidol-induced catalepsy and brainstem auditory evoked potentials (BAEPs) were used to assess dopamine-dependent behavior and hearing function in adult male and female offspring. On the catalepsy test, reduced latencies to movement onset were observed mainly in female offspring, indicating influences on dopamine-dependent behavior. The overall pattern of BAEP alterations, with increased thresholds and prolonged latencies of early waves, suggested a predominant cochlear effect. Effects were dose-dependent, with lower bounds of benchmark doses (BMDL) between ≤1 and 10 mg/kg body weight for both catalepsy and BAEP thresholds. Tissue concentrations at the BMDL values obtained in this study were 3-4 orders of magnitude higher than current exposure levels in humans.
NASA Astrophysics Data System (ADS)
Lyon, Richard F.
2011-11-01
A cascade of two-pole-two-zero filters with level-dependent pole and zero dampings, with few parameters, can provide a good match to human psychophysical and physiological data. The model has been fitted to data on detection threshold for tones in notched-noise masking, including bandwidth and filter shape changes over a wide range of levels, and has been shown to provide better fits with fewer parameters compared to other auditory filter models such as gammachirps. Originally motivated as an efficient machine implementation of auditory filtering related to the WKB analysis method of cochlear wave propagation, such filter cascades also provide good fits to mechanical basilar membrane data, and to auditory nerve data, including linear low-frequency tail response, level-dependent peak gain, sharp tuning curves, nonlinear compression curves, level-independent zero-crossing times in the impulse response, realistic instantaneous frequency glides, and appropriate level-dependent group delay even with minimum-phase response. As part of exploring different level-dependent parameterizations of such filter cascades, we have identified a simple sufficient condition for stable zero-crossing times, based on the shifting property of the Laplace transform: simply move all the s-domain poles and zeros by equal amounts in the real-s direction. Such pole-zero filter cascades are efficient front ends for machine hearing applications, such as music information retrieval, content identification, speech recognition, and sound indexing.
Synchronization to auditory and visual rhythms in hearing and deaf individuals
Iversen, John R.; Patel, Aniruddh D.; Nicodemus, Brenda; Emmorey, Karen
2014-01-01
A striking asymmetry in human sensorimotor processing is that humans synchronize movements to rhythmic sound with far greater precision than to temporally equivalent visual stimuli (e.g., to an auditory vs. a flashing visual metronome). Traditionally, this finding is thought to reflect a fundamental difference in auditory vs. visual processing, i.e., superior temporal processing by the auditory system and/or privileged coupling between the auditory and motor systems. It is unclear whether this asymmetry is an inevitable consequence of brain organization or whether it can be modified (or even eliminated) by stimulus characteristics or by experience. With respect to stimulus characteristics, we found that a moving, colliding visual stimulus (a silent image of a bouncing ball with a distinct collision point on the floor) was able to drive synchronization nearly as accurately as sound in hearing participants. To study the role of experience, we compared synchronization to flashing metronomes in hearing and profoundly deaf individuals. Deaf individuals performed better than hearing individuals when synchronizing with visual flashes, suggesting that cross-modal plasticity enhances the ability to synchronize with temporally discrete visual stimuli. Furthermore, when deaf (but not hearing) individuals synchronized with the bouncing ball, their tapping patterns suggest that visual timing may access higher-order beat perception mechanisms for deaf individuals. These results indicate that the auditory advantage in rhythmic synchronization is more experience- and stimulus-dependent than has been previously reported. PMID:25460395
The plastic ear and perceptual relearning in auditory spatial perception
Carlile, Simon
2014-01-01
The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear molds to modify the spectral cues, resulting in significant degradation in localization performance. Following chronic exposure (10–60 days) performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This raises the question of what the teacher signal is for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of motor state in auditory localization, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear molds or through virtual auditory space stimulation using non-individualized spectral cues. The work with ear molds demonstrates that a relatively short period of training involving audio-motor feedback (5–10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide spatial cues but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses. PMID:25147497
Alpha Rhythms in Audition: Cognitive and Clinical Perspectives
Weisz, Nathan; Hartmann, Thomas; Müller, Nadia; Lorenz, Isabel; Obleser, Jonas
2011-01-01
Like the visual and the sensorimotor systems, the auditory system exhibits pronounced alpha-like resting oscillatory activity. Due to the relatively small spatial extent of auditory cortical areas, this rhythmic activity is less obvious and frequently masked by non-auditory alpha generators when recording non-invasively using magnetoencephalography (MEG) or electroencephalography (EEG). Following stimulation with sounds, marked desynchronizations can be observed between 6 and 12 Hz, which can be localized to the auditory cortex. However, knowledge about the functional relevance of the auditory alpha rhythm has remained scarce so far. Results from the visual and sensorimotor systems have fuelled the hypothesis that alpha activity reflects a state of functional inhibition. The current article pursues several intentions: (1) Firstly, we review and present our own evidence (MEG, EEG, sEEG) for the existence of an auditory alpha-like rhythm independent of visual or motor generators, something that is occasionally met with skepticism. (2) In a second part we will discuss tinnitus and how this audiological symptom may relate to reduced background alpha. The clinical part will give an introduction to a method which aims to modulate the neurophysiological activity hypothesized to underlie this distressing disorder. Using neurofeedback, one is able to directly target relevant oscillatory activity. Preliminary data point to a high potential of this approach for treating tinnitus. (3) Finally, in a cognitive neuroscientific part we will show that auditory alpha is modulated by anticipation/expectations with and without auditory stimulation. We will also introduce ideas and initial evidence that alpha oscillations are involved in the most complex capability of the auditory system, namely speech perception.
The evidence presented in this article corroborates findings from other modalities, indicating that alpha-like activity plays a universal inhibitory role across sensory modalities. PMID:21687444
Schrode, Katrina M; Bee, Mark A
2015-03-01
Sensory systems function most efficiently when processing natural stimuli, such as vocalizations, and it is thought that this reflects evolutionary adaptation. Among the best-described examples of evolutionary adaptation in the auditory system are the frequent matches between spectral tuning in both the peripheral and central auditory systems of anurans (frogs and toads) and the frequency spectra of conspecific calls. Tuning to the temporal properties of conspecific calls is less well established, and in anurans has so far been documented only in the central auditory system. Using auditory-evoked potentials, we asked whether there are species-specific or sex-specific adaptations of the auditory systems of gray treefrogs (Hyla chrysoscelis) and green treefrogs (H. cinerea) to the temporal modulations present in conspecific calls. Modulation rate transfer functions (MRTFs) constructed from auditory steady-state responses revealed that each species was more sensitive than the other to the modulation rates typical of conspecific advertisement calls. In addition, auditory brainstem responses (ABRs) to paired clicks indicated relatively better temporal resolution in green treefrogs, which could represent an adaptation to the faster modulation rates present in the calls of this species. MRTFs and recovery of ABRs to paired clicks were generally similar between the sexes, and we found no evidence that males were more sensitive than females to the temporal modulation patterns characteristic of the aggressive calls used in male-male competition. Together, our results suggest that efficient processing of the temporal properties of behaviorally relevant sounds begins at potentially very early stages of the anuran auditory system that include the periphery. © 2015. Published by The Company of Biologists Ltd.
Auditory processing disorders and problems with hearing-aid fitting in old age.
Antonelli, A R
1978-01-01
The hearing handicap experienced by elderly subjects depends only partially on end-organ impairment. Not only does neural unit loss along the central auditory pathways contribute to decreased speech discrimination, but learning processes are also slowed. Diotic listening in elderly people seems to hasten learning of discrimination in critical conditions, such as sensitized speech. This fact, together with the binaural gain through the binaural release from masking, stresses the superiority, on theoretical grounds, of binaural over monaural hearing-aid fitting.
Temporal processing and adaptation in the songbird auditory forebrain.
Nagel, Katherine I; Doupe, Allison J
2006-09-21
Songbird auditory neurons must encode the dynamics of natural sounds at many volumes. We investigated how neural coding depends on the distribution of stimulus intensities. Using reverse-correlation, we modeled responses to amplitude-modulated sounds as the output of a linear filter and a nonlinear gain function, then asked how filters and nonlinearities depend on the stimulus mean and variance. Filter shape depended strongly on mean amplitude (volume): at low mean, most neurons integrated sound over many milliseconds, while at high mean, neurons responded more to local changes in amplitude. Increasing the variance (contrast) of amplitude modulations had less effect on filter shape but decreased the gain of firing in most cells. Both filter and gain changes occurred rapidly after a change in statistics, suggesting that they represent nonlinearities in processing. These changes may permit neurons to signal effectively over a wider dynamic range and are reminiscent of findings in other sensory systems.
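The model described here is a standard linear–nonlinear (LN) cascade: a linear filter applied to the sound-amplitude envelope, followed by a static nonlinear gain function. The sketch below is illustrative only; the filter shapes, gain values, and sigmoid are assumptions, not the paper's fitted parameters. It shows how the two reported regimes, slow integration at low mean amplitude and sensitivity to local amplitude changes at high mean, can be expressed in the same framework:

```python
import math
import random

def ln_model_rate(stimulus, filt, gain, threshold):
    # Linear stage: convolve the amplitude envelope with the filter
    # (most recent sample aligned with filt[0]).
    rates = []
    for t in range(len(stimulus)):
        drive = sum(filt[k] * stimulus[t - k]
                    for k in range(min(len(filt), t + 1)))
        # Nonlinear stage: static sigmoidal gain function (spikes/s)
        rates.append(gain / (1.0 + math.exp(-(drive - threshold))))
    return rates

random.seed(0)
# Amplitude-modulated stimulus envelope (arbitrary units)
stim = [1.0 + 0.5 * random.uniform(-1, 1) for _ in range(200)]

# A biphasic, differentiating filter: responds to local changes
# in amplitude, as reported at high mean sound levels
diff_filt = [1.0, -1.0]
# A slow integrating filter, as reported at low mean levels
integ_filt = [0.1] * 10

r_diff = ln_model_rate(stim, diff_filt, gain=100.0, threshold=0.0)
r_integ = ln_model_rate(stim, integ_filt, gain=100.0, threshold=0.5)
```

With the differentiating filter the drive tracks local envelope changes, while with the integrating filter it tracks the smoothed recent amplitude, so the same nonlinear stage yields the two qualitatively different response regimes the study reports.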
Schelinski, Stefanie; Riedel, Philipp; von Kriegstein, Katharina
2014-12-01
In auditory-only conditions, for example when we listen to someone on the phone, it is essential to recognize quickly and accurately what is said (speech recognition). Previous studies have shown that speech recognition performance in auditory-only conditions is better if the speaker is known not only by voice, but also by face. Here, we tested the hypothesis that such an improvement in auditory-only speech recognition depends on the ability to lip-read. To test this, we recruited a group of adults with autism spectrum disorder (ASD), a condition associated with difficulties in lip-reading, and typically developed controls. All participants were trained to identify six speakers by name and voice. Three speakers were learned by a video showing their face and three others were learned in a matched control condition without face. After training, participants performed an auditory-only speech recognition test that consisted of sentences spoken by the trained speakers. As a control condition, the test also included speaker identity recognition on the same auditory material. The results showed that, in the control group, performance in speech recognition was improved for speakers known by face in comparison to speakers learned in the matched control condition without face. The ASD group lacked such a performance benefit. For the ASD group, auditory-only speech recognition was even worse for speakers known by face compared to speakers not known by face. In speaker identity recognition, the ASD group performed worse than the control group independently of whether the speakers were learned with or without face. Two additional visual experiments showed that the ASD group performed worse in lip-reading, whereas face identity recognition was within the normal range. The findings support the view that auditory-only communication involves specific visual mechanisms.
Further, they indicate that in ASD, speaker-specific dynamic visual information is not available to optimize auditory-only speech recognition. Copyright © 2014 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Krishnan, Ananthanarayan; Gandour, Jackson T.; Smalt, Christopher J.; Bidelman, Gavin M.
2010-01-01
Experience-dependent enhancement of neural encoding of pitch in the auditory brainstem has been observed for only specific portions of native pitch contours exhibiting high rates of pitch acceleration, irrespective of speech or nonspeech contexts. This experiment allows us to determine whether this language-dependent advantage transfers to…
Kuriki, Shinya; Yokosawa, Koichi; Takahashi, Makoto
2013-01-01
The auditory illusory perception “scale illusion” occurs when a tone of ascending scale is presented in one ear, a tone of descending scale is presented simultaneously in the other ear, and vice versa. Most listeners hear illusory percepts of smooth pitch contours of the higher half of the scale in the right ear and the lower half in the left ear. Little is known about neural processes underlying the scale illusion. In this magnetoencephalographic study, we recorded steady-state responses to amplitude-modulated short tones having illusion-inducing pitch sequences, where the sound level of the modulated tones was manipulated to decrease monotonically with increase in pitch. The steady-state responses were decomposed into right- and left-sound components by means of separate modulation frequencies. It was found that the time course of the magnitude of response components of illusion-perceiving listeners was significantly correlated with smooth pitch contour of illusory percepts and that the time course of response components of stimulus-perceiving listeners was significantly correlated with discontinuous pitch contour of stimulus percepts in addition to the contour of illusory percepts. The results suggest that the percept of illusory pitch sequence was represented in the neural activity in or near the primary auditory cortex, i.e., the site of generation of auditory steady-state response, and that perception of scale illusion is maintained by automatic low-level processing. PMID:24086676
Discrimination of timbre in early auditory responses of the human brain.
Seol, Jaeho; Oh, MiAe; Kim, June Sic; Jin, Seung-Hyun; Kim, Sun Il; Chung, Chun Kee
2011-01-01
The issue of how differences in timbre are represented in the neural response has not been well addressed, particularly with regard to the relevant brain mechanisms. Here we employed phasing and clipping of tones to produce auditory stimuli differing in timbre, reflecting its multidimensional nature. We investigated the auditory response and sensory gating using magnetoencephalography (MEG). Thirty-five healthy subjects without hearing deficit participated in the experiments. Pairs of tones, identical or different in timbre, were presented in a conditioning (S1)–testing (S2) paradigm with an interval of 500 ms. The magnitudes of the auditory M50 and M100 responses differed with timbre in both hemispheres. This result supports the view that timbre, at least as varied by phasing and clipping, is discriminated in early auditory processing. The effect of S1 on the response to the second stimulus in a pair occurred in the M100 of the left hemisphere, whereas in the right hemisphere both M50 and M100 responses to S2 reflected whether the two stimuli in a pair were the same or not. Both M50 and M100 magnitudes differed with presentation order (S1 vs. S2) for both same and different conditions in both hemispheres. Our results demonstrate that the auditory response depends on timbre characteristics. Moreover, auditory sensory gating is determined not by the stimulus that directly evokes the response, but rather by whether or not the two stimuli are identical in timbre.
Different auditory feedback control for echolocation and communication in horseshoe bats.
Liu, Ying; Feng, Jiang; Metzner, Walter
2013-01-01
Auditory feedback from the animal's own voice is essential during bat echolocation: to optimize signal detection, bats continuously adjust various call parameters in response to changing echo signals. Auditory feedback seems also necessary for controlling many bat communication calls, although it remains unclear how auditory feedback control differs in echolocation and communication. We tackled this question by analyzing echolocation and communication in greater horseshoe bats, whose echolocation pulses are dominated by a constant frequency component that matches the frequency range they hear best. To maintain echoes within this "auditory fovea", horseshoe bats constantly adjust their echolocation call frequency depending on the frequency of the returning echo signal. This Doppler-shift compensation (DSC) behavior represents one of the most precise forms of sensory-motor feedback known. We examined the variability of echolocation pulses emitted at rest (resting frequencies, RFs) and one type of communication signal which resembles an echolocation pulse but is much shorter (short constant frequency communication calls, SCFs) and produced only during social interactions. We found that while RFs varied from day to day, corroborating earlier studies in other constant frequency bats, SCF-frequencies remained unchanged. In addition, RFs overlapped for some bats whereas SCF-frequencies were always distinctly different. This indicates that auditory feedback during echolocation changed with varying RFs but remained constant or may have been absent during emission of SCF calls for communication. This fundamentally different feedback mechanism for echolocation and communication may have enabled these bats to use SCF calls for individual recognition whereas they adjusted RF calls to accommodate the daily shifts of their auditory fovea.
Auditory temporal processing in healthy aging: a magnetoencephalographic study
Sörös, Peter; Teismann, Inga K; Manemann, Elisabeth; Lütkenhöner, Bernd
2009-01-01
Background: Impaired speech perception is one of the major sequelae of aging. In addition to peripheral hearing loss, central deficits of auditory processing are supposed to contribute to the deterioration of speech perception in older individuals. To test the hypothesis that auditory temporal processing is compromised in aging, auditory evoked magnetic fields were recorded during stimulation with sequences of 4 rapidly recurring speech sounds in 28 healthy individuals aged 20–78 years. Results: The decrement of the N1m amplitude during rapid auditory stimulation was not significantly different between older and younger adults. The amplitudes of the middle-latency P1m wave and of the long-latency N1m, however, were significantly larger in older than in younger participants. Conclusion: The results of the present study do not provide evidence for the hypothesis that auditory temporal processing, as measured by the decrement (short-term habituation) of the major auditory evoked component, the N1m wave, is impaired in aging. The differences between these magnetoencephalographic findings and previously published behavioral data might be explained by differences in the experimental setting between the present study and previous behavioral studies, in terms of speech rate, attention, and masking noise. Significantly larger amplitudes of the P1m and N1m waves suggest that the cortical processing of individual sounds differs between younger and older individuals. This result adds to the growing evidence that brain functions, such as sensory processing, motor control and cognitive processing, can change during healthy aging, presumably due to experience-dependent neuroplastic mechanisms. PMID:19351410
Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study
Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong
2015-01-01
A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190–210 ms, for 1 kHz stimuli from 170–200 ms, for 2.5 kHz stimuli from 140–200 ms, and for 5 kHz stimuli from 100–200 ms. These findings suggest that a higher-frequency sound signal paired with visual stimuli might be processed or integrated earlier, despite the auditory stimuli being task-irrelevant information. Furthermore, audiovisual integration in late-latency (300–340 ms) ERPs with fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirm that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies. PMID:26384256
Lau, Bonnie K; Ruggles, Dorea R; Katyal, Sucharit; Engel, Stephen A; Oxenham, Andrew J
2017-01-01
Short-term training can lead to improvements in behavioral discrimination of auditory and visual stimuli, as well as enhanced EEG responses to those stimuli. In the auditory domain, fluency with tonal languages and musical training has been associated with long-term cortical and subcortical plasticity, but less is known about the effects of shorter-term training. This study combined electroencephalography (EEG) and behavioral measures to investigate short-term learning and neural plasticity in both auditory and visual domains. Forty adult participants were divided into four groups. Three groups trained on one of three tasks, involving discrimination of auditory fundamental frequency (F0), auditory amplitude modulation rate (AM), or visual orientation (VIS). The fourth (control) group received no training. Pre- and post-training tests, as well as retention tests 30 days after training, involved behavioral discrimination thresholds, steady-state visually evoked potentials (SSVEP) to the flicker frequencies of visual stimuli, and auditory envelope-following responses simultaneously evoked and measured in response to rapid stimulus F0 (EFR), thought to reflect subcortical generators, and slow amplitude modulation (ASSR), thought to reflect cortical generators. Enhancement of the ASSR was observed in both auditory-trained groups, not specific to the AM-trained group, whereas enhancement of the SSVEP was found only in the visually-trained group. No evidence was found for changes in the EFR. The results suggest that some aspects of neural plasticity can develop rapidly and may generalize across tasks but not across modalities. Behaviorally, the pattern of learning was complex, with significant cross-task and cross-modal learning effects.
Self-grounding visual, auditory and olfactory autobiographical memories.
Knez, Igor; Ljunglöf, Louise; Arshamian, Artin; Willander, Johan
2017-07-01
Given that autobiographical memory provides a cognitive foundation for the self, we investigated the relative importance of visual, auditory and olfactory autobiographical memories for the self. Thirty subjects, with a mean age of 35.4 years, participated in a study involving a three × three within-subject design containing nine different types of autobiographical memory cues: pictures, sounds and odors presented with neutral, positive and negative valences. Visual autobiographical memories were shown to involve higher cognitive and emotional constituents for the self than auditory and olfactory ones. Furthermore, there was a trend for positive autobiographical memories to contribute increasingly to both cognitive and emotional components of the self from olfactory to auditory to visually cued autobiographical memories, with the reverse trend for negative autobiographical memories. Finally, and independently of modality, positive affective states were shown to be more involved in autobiographical memory than negative ones. Copyright © 2017 Elsevier Inc. All rights reserved.
Binaural beats increase interhemispheric alpha-band coherence between auditory cortices.
Solcà, Marco; Mottaz, Anaïs; Guggisberg, Adrian G
2016-02-01
Binaural beats (BBs) are an auditory illusion occurring when two tones of slightly different frequency are presented separately to each ear. BBs have been suggested to alter physiological and cognitive processes through synchronization of the brain hemispheres. To test this, we recorded electroencephalograms (EEG) at rest and while participants listened to BBs or a monaural control condition during which both tones were presented to both ears. We calculated for each condition the interhemispheric coherence, which expressed the synchrony between neural oscillations of both hemispheres. Compared to monaural beats and resting state, BBs enhanced interhemispheric coherence between the auditory cortices. Beat frequencies in the alpha (10 Hz) and theta (4 Hz) frequency range both increased interhemispheric coherence selectively at alpha frequencies. In a second experiment, we evaluated whether this coherence increase has a behavioral aftereffect on binaural listening. No effects were observed in a dichotic digit task performed immediately after BBs presentation. Our results suggest that BBs enhance alpha-band oscillation synchrony between the auditory cortices during auditory stimulation. This effect seems to reflect binaural integration rather than entrainment. Copyright © 2015 Elsevier B.V. All rights reserved.
Neural Correlates of Multisensory Perceptual Learning
Powers, Albert R.; Hevey, Matthew A.; Wallace, Mark T.
2012-01-01
The brain’s ability to bind incoming auditory and visual stimuli depends critically on the temporal structure of this information. Specifically, there exists a temporal window of audiovisual integration within which stimuli are highly likely to be perceived as part of the same environmental event. Several studies have described the temporal bounds of this window, but few have investigated its malleability. Recently, our laboratory has demonstrated that a perceptual training paradigm is capable of eliciting a 40% narrowing in the width of this window that is stable for at least one week after cessation of training. In the current study we sought to reveal the neural substrates of these changes. Eleven human subjects completed an audiovisual simultaneity judgment training paradigm, immediately before and after which they performed the same task during an event-related 3T fMRI session. The posterior superior temporal sulcus (pSTS) and areas of auditory and visual cortex exhibited robust BOLD decreases following training, and resting state and effective connectivity analyses revealed significant increases in coupling among these cortices after training. These results provide the first evidence of the neural correlates underlying changes in multisensory temporal binding and that likely represent the substrate for a multisensory temporal binding window. PMID:22553032
Perceptual Plasticity for Auditory Object Recognition
Heald, Shannon L. M.; Van Hedger, Stephen C.; Nusbaum, Howard C.
2017-01-01
In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as “noise” in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not incorporated currently in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context. 
To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we draw upon examples of perceptual categories that are thought to be highly stable. This framework suggests that the process of auditory recognition cannot be divorced from the short-term context in which an auditory object is presented. Implications for auditory category acquisition and extant models of auditory perception, both cognitive and neural, are discussed. PMID:28588524
Møller, Cecilie; Højlund, Andreas; Bærentsen, Klaus B; Hansen, Niels Chr; Skewes, Joshua C; Vuust, Peter
2018-05-01
Perception is fundamentally a multisensory experience. The principle of inverse effectiveness (PoIE) states that multisensory gain is maximal when responses to the unisensory constituents of the stimuli are weak. It is one of the basic principles underlying multisensory processing of spatiotemporally corresponding crossmodal stimuli and is well established at behavioral as well as neural levels. It is not yet clear, however, how modality-specific stimulus features influence discrimination of subtle changes in a crossmodally corresponding feature belonging to another modality. Here, we tested the hypothesis that reliance on visual cues to pitch discrimination follows the PoIE at the interindividual level (i.e., varies with varying levels of auditory-only pitch discrimination abilities). Using an oddball pitch discrimination task, we measured the effect of varying visually perceived vertical position in participants exhibiting a wide range of pitch discrimination abilities (i.e., musicians and nonmusicians). Visual cues significantly enhanced pitch discrimination as measured by the sensitivity index d', and more so in the crossmodally congruent than incongruent condition. The magnitude of gain caused by compatible visual cues was associated with individual pitch discrimination thresholds, as predicted by the PoIE. This was not the case for the magnitude of the congruence effect, which was unrelated to individual pitch discrimination thresholds, indicating that the pitch-height association is robust to variations in auditory skills. Our findings shed light on individual differences in multisensory processing by suggesting that relevant multisensory information that crucially aids some perceivers' performance may be of less importance to others, depending on their unisensory abilities.
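The sensitivity index d' used in this study is a standard signal-detection measure computed from hit and false-alarm rates. A minimal sketch of the computation, with made-up rates (the numbers are illustrative, not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    # d' = z(hit rate) - z(false-alarm rate), where z is the inverse
    # of the standard normal cumulative distribution function
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Illustrative rates for hypothetical congruent vs. incongruent trials
d_congruent = d_prime(0.90, 0.10)    # ~2.56
d_incongruent = d_prime(0.75, 0.20)  # ~1.52
```

A higher d' reflects better discrimination of the pitch change independently of response bias, which is why it is preferred over raw accuracy in tasks of this kind.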
Handschuh, Juliane
2014-01-01
Dopaminergic neurotransmission in primary auditory cortex (AI) has been shown to be involved in learning and memory functions. Moreover, dopaminergic projections and D1/D5 receptor distributions display a layer-dependent organization, suggesting specific functions in the cortical circuitry. However, the circuit effects of dopaminergic neurotransmission in sensory cortex and their possible roles in perception, learning, and memory are largely unknown. Here, we investigated layer-specific circuit effects of dopaminergic neuromodulation using current source density (CSD) analysis in AI of Mongolian gerbils. Pharmacological stimulation of D1/D5 receptors increased auditory-evoked synaptic currents in infragranular layers, prolonging local thalamocortical input via positive feedback between infragranular output and granular input. Subsequently, dopamine promoted sustained cortical activation by prolonged recruitment of long-range corticocortical networks. A detailed circuit analysis combining layer-specific intracortical microstimulation (ICMS), CSD analysis, and pharmacological cortical silencing revealed that cross-laminar feedback enhanced by dopamine relied on a positive, fast-acting recurrent corticoefferent loop, most likely relayed via local thalamic circuits. Behavioral signal detection analysis further showed that activation of corticoefferent output by infragranular ICMS, which mimicked auditory activation under dopaminergic influence, was most effective in eliciting a behaviorally detectable signal. Our results show that D1/D5-mediated dopaminergic modulation in sensory cortex regulates positive recurrent corticoefferent feedback, which enhances states of high, persistent activity in sensory cortex evoked by behaviorally relevant stimuli. In boosting horizontal network interactions, this potentially promotes the readout of task-related information from cortical synapses and improves behavioral stimulus detection. PMID:24453315
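Current source density (CSD) analysis, used in the study above to localize layer-specific synaptic currents, estimates local transmembrane current as the negative second spatial derivative of the laminar LFP profile. A toy sketch, with assumed contact spacing and conductivity values rather than the study's recording parameters:

```python
import numpy as np

def csd(lfp, spacing_mm=0.1, sigma=0.3):
    """One-dimensional CSD: -sigma * d2V/dz2, approximated by the
    second difference across equidistant laminar contacts.
    lfp: array of shape (channels, time); sigma in S/m (assumed)."""
    d2 = lfp[:-2] - 2 * lfp[1:-1] + lfp[2:]
    return -sigma * d2 / spacing_mm ** 2

# A negative LFP deflection confined to the middle contact shows up
# as a current sink (negative CSD) at that depth.
lfp = np.zeros((5, 4))
lfp[2] = -1.0
print(csd(lfp)[1])  # negative values: a sink at the middle channel
```

The two edge channels are lost to the second difference, which is why the output has two fewer rows than the input.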
Hashimoto, Ryu-Ichiro; Itahashi, Takashi; Okada, Rieko; Hasegawa, Sayaka; Tani, Masayuki; Kato, Nobumasa; Mimura, Masaru
2018-01-01
Abnormalities in functional brain networks in schizophrenia have been studied by examining intrinsic and extrinsic brain activity under various experimental paradigms. However, the identified patterns of abnormal functional connectivity (FC) vary depending on the adopted paradigms. Thus, it is unclear whether and how these patterns are inter-related. In order to assess relationships between abnormal patterns of FC during intrinsic activity and those during extrinsic activity, we adopted a data-fusion approach and applied partial least square (PLS) analyses to FC datasets from 25 patients with chronic schizophrenia and 25 age- and sex-matched normal controls. For the input to the PLS analyses, we generated a pair of FC maps during the resting state (REST) and the auditory deviance response (ADR) from each participant using the common seed region in the left middle temporal gyrus, which is a focus of activity associated with auditory verbal hallucinations (AVHs). PLS correlation (PLS-C) analysis revealed that patients with schizophrenia have significantly lower loadings of a component containing positive FCs in default-mode network regions during REST and a component containing positive FCs in the auditory and attention-related networks during ADR. Specifically, loadings of the REST component were significantly correlated with the severities of positive symptoms and AVH in patients with schizophrenia. The co-occurrence of such altered FC patterns during REST and ADR was replicated using PLS regression, wherein FC patterns during REST are modeled to predict patterns during ADR. These findings provide an integrative understanding of altered FCs during intrinsic and extrinsic activity underlying core schizophrenia symptoms.
SanMiguel, Iria; Corral, María-José; Escera, Carles
2008-07-01
The sensitivity of involuntary attention to top-down modulation was tested using an auditory-visual distraction task and a working memory (WM) load manipulation in subjects performing a simple visual classification task while ignoring contingent auditory stimulation. The sounds were repetitive standard tones (80%) and environmental novel sounds (20%). Distraction caused by the novel sounds was compared across a 1-back WM condition and a no-memory control condition, both involving the comparison of two digits. Event-related brain potentials (ERPs) to the sounds were recorded, and the N1/MMN (mismatch negativity), novelty-P3, and RON components were identified in the novel minus standard difference waveforms. Distraction was reduced in the WM condition, both behaviorally and as indexed by an attenuation of the late phase of the novelty-P3. The transient/change detection mechanism indexed by MMN was not affected by the WM manipulation. Sustained slow frontal and parietal waveforms related to WM processes were found on the standard ERPs. The present results indicate that distraction caused by irrelevant novel sounds is reduced when a WM component is involved in the task, and that this modulation by WM load takes place at a late stage of the orienting response, all in all confirming that involuntary attention is under the control of top-down mechanisms. Moreover, as these results contradict predictions of the load theory of selective attention and cognitive control, it is suggested that the WM load effects on distraction depend on the nature of the distractor-target relationships.
Speech processing in children with functional articulation disorders.
Gósy, Mária; Horváth, Viktória
2015-03-01
This study explored auditory speech processing and comprehension abilities in 5-8-year-old monolingual Hungarian children with functional articulation disorders (FADs) and their typically developing peers. Our main hypothesis was that children with FAD would show co-existing auditory speech processing disorders, with different levels of these skills depending on the nature of the receptive processes. The tasks included (i) sentence and non-word repetitions, (ii) non-word discrimination and (iii) sentence and story comprehension. Results suggest that the auditory speech processing of children with FAD is underdeveloped compared with that of typically developing children, and largely varies across task types. In addition, there are differences between children with FAD and controls in all age groups from 5 to 8 years. Our results have several clinical implications.
Identification of a pathway for intelligible speech in the left temporal lobe
Scott, Sophie K.; Blank, C. Catrin; Rosen, Stuart; Wise, Richard J. S.
2017-01-01
It has been proposed that the identification of sounds, including species-specific vocalizations, by primates depends on anterior projections from the primary auditory cortex, an auditory pathway analogous to the ventral route proposed for the visual identification of objects. We have identified a similar route in the human for understanding intelligible speech. Using PET imaging to identify separable neural subsystems within the human auditory cortex, we used a variety of speech and speech-like stimuli with equivalent acoustic complexity but varying intelligibility. We have demonstrated that the left superior temporal sulcus responds to the presence of phonetic information, but its anterior part only responds if the stimulus is also intelligible. This novel observation demonstrates a left anterior temporal pathway for speech comprehension. PMID:11099443
A temperature rise reduces trial-to-trial variability of locust auditory neuron responses.
Eberhard, Monika J B; Schleimer, Jan-Hendrik; Schreiber, Susanne; Ronacher, Bernhard
2015-09-01
The neurophysiology of ectothermic animals, such as insects, is affected by environmental temperature, as their body temperature fluctuates with ambient conditions. Changes in temperature alter properties of neurons and, consequently, have an impact on the processing of information. Nevertheless, nervous system function is often maintained over a broad temperature range, exhibiting a surprising robustness to variations in temperature. A special problem arises for acoustically communicating insects, as in these animals mate recognition and mate localization typically rely on the decoding of fast amplitude modulations in calling and courtship songs. In the auditory periphery, however, temporal resolution is constrained by intrinsic neuronal noise. Such noise predominantly arises from the stochasticity of ion channel gating and potentially impairs the processing of sensory signals. On the basis of intracellular recordings of locust auditory neurons, we show that intrinsic neuronal variability on the level of spikes is reduced with increasing temperature. We use a detailed mathematical model including stochastic ion channel gating to shed light on the underlying biophysical mechanisms in auditory receptor neurons: because of a redistribution of channel-induced current noise toward higher frequencies and specifics of the temperature dependence of the membrane impedance, membrane potential noise is indeed reduced at higher temperatures. This finding holds under generic conditions and physiologically plausible assumptions on the temperature dependence of the channels' kinetics and peak conductances. We demonstrate that the identified mechanism also can explain the experimentally observed reduction of spike timing variability at higher temperatures. Copyright © 2015 the American Physiological Society.
Schädler, Marc René; Warzybok, Anna; Ewert, Stephan D; Kollmeier, Birger
2016-05-01
A framework for simulating auditory discrimination experiments, based on an approach from Schädler, Warzybok, Hochmuth, and Kollmeier [(2015). Int. J. Audiol. 54, 100-107] which was originally designed to predict speech recognition thresholds, is extended to also predict psychoacoustic thresholds. The proposed framework is used to assess the suitability of different auditory-inspired feature sets for a range of auditory discrimination experiments that included psychoacoustic as well as speech recognition experiments in noise. The considered experiments were 2 kHz tone-in-broadband-noise simultaneous masking depending on the tone length, spectral masking with simultaneously presented tone signals and narrow-band noise maskers, and German Matrix sentence test reception threshold in stationary and modulated noise. The employed feature sets included spectro-temporal Gabor filter bank features, Mel-frequency cepstral coefficients, logarithmically scaled Mel-spectrograms, and the internal representation of the Perception Model from Dau, Kollmeier, and Kohlrausch [(1997). J. Acoust. Soc. Am. 102(5), 2892-2905]. The proposed framework was successfully employed to simulate all experiments with a common parameter set and obtain objective thresholds with fewer assumptions compared to traditional modeling approaches. Depending on the feature set, the simulated reference-free thresholds were found to agree with, and hence to predict, empirical data from the literature. Across-frequency processing was found to be crucial to accurately model the lower speech reception threshold in modulated noise conditions than in stationary noise conditions.
Corley, Michael J; Caruso, Michael J; Takahashi, Lorey K
2012-01-18
Posttraumatic stress disorder (PTSD) is characterized by stress-induced symptoms including exaggerated fear memories, hypervigilance and hyperarousal. However, we are unaware of an animal model that investigates these hallmarks of PTSD especially in relation to fear extinction and habituation. Therefore, to develop a valid animal model of PTSD, we exposed rats to different intensities of footshock stress to determine their effects on either auditory predator odor fear extinction or habituation of fear sensitization. In Experiment 1, rats were exposed to acute footshock stress (no shock control, 0.4 mA, or 0.8 mA) immediately prior to auditory fear conditioning training involving the pairing of auditory clicks with a cloth containing cat odor. When presented to the conditioned auditory clicks in the next 5 days of extinction testing conducted in a runway apparatus with a hide box, rats in the two shock groups engaged in higher levels of freezing and head out vigilance-like behavior from the hide box than the no shock control group. This increase in fear behavior during extinction testing was likely due to auditory activation of the conditioned fear state because Experiment 2 demonstrated that conditioned fear behavior was not broadly increased in the absence of the conditioned auditory stimulus. Experiment 3 was then conducted to determine whether acute exposure to stress induces a habituation resistant sensitized fear state. We found that rats exposed to 0.8 mA footshock stress and subsequently tested for 5 days in the runway hide box apparatus with presentations of nonassociative auditory clicks exhibited high initial levels of freezing, followed by head out behavior and culminating in the occurrence of locomotor hyperactivity. In addition, Experiment 4 indicated that without delivery of nonassociative auditory clicks, 0.8 mA footshock stressed rats did not exhibit robust increases in sensitized freezing and locomotor hyperactivity, albeit head out vigilance-like behavior continued to be observed. In summary, our animal model provides novel information on the effects of different intensities of footshock stress, auditory-predator odor fear conditioning, and their interactions on facilitating either extinction-resistant or habituation-resistant fear-related behavior. These results lay the foundation for exciting new investigations of the hallmarks of PTSD that include the stress-induced formation and persistence of traumatic memories and sensitized fear. Copyright © 2011 Elsevier Inc. All rights reserved.
Accounting for rate-dependent category boundary shifts in speech perception.
Bosker, Hans Rutger
2017-01-01
The perception of temporal contrasts in speech is known to be influenced by the speech rate in the surrounding context. This rate-dependent perception is suggested to involve general auditory processes because it is also elicited by nonspeech contexts, such as pure tone sequences. Two general auditory mechanisms have been proposed to underlie rate-dependent perception: durational contrast and neural entrainment. This study compares the predictions of these two accounts of rate-dependent speech perception by means of four experiments, in which participants heard tone sequences followed by Dutch target words ambiguous between /ɑs/ "ash" and /a:s/ "bait". Tone sequences varied in the duration of tones (short vs. long) and in the presentation rate of the tones (fast vs. slow). Results show that the duration of preceding tones did not influence target perception in any of the experiments, thus challenging durational contrast as explanatory mechanism behind rate-dependent perception. Instead, the presentation rate consistently elicited a category boundary shift, with faster presentation rates inducing more /a:s/ responses, but only if the tone sequence was isochronous. Therefore, this study proposes an alternative, neurobiologically plausible account of rate-dependent perception involving neural entrainment of endogenous oscillations to the rate of a rhythmic stimulus.
Attentional capture by taboo words: A functional view of auditory distraction.
Röer, Jan P; Körner, Ulrike; Buchner, Axel; Bell, Raoul
2017-06-01
It is well established that task-irrelevant, to-be-ignored speech adversely affects serial short-term memory (STM) for visually presented items compared with a quiet control condition. However, there is an ongoing debate about whether the semantic content of the speech has the capacity to capture attention and to disrupt memory performance. In the present article, we tested whether taboo words are more difficult to ignore than neutral words. Taboo words or neutral words were presented as (a) steady state sequences in which the same distractor word was repeated, (b) changing state sequences in which different distractor words were presented, and (c) auditory deviant sequences in which a single distractor word deviated from a sequence of repeated words. Experiments 1 and 2 showed that taboo words disrupted performance more than neutral words. This taboo effect did not habituate and it did not differ between individuals with high and low working memory capacity. In Experiments 3 and 4, in which only a single deviant taboo word was presented, no taboo effect was obtained. These results do not support the idea that the processing of the auditory distractors' semantic content is the result of occasional attention switches to the auditory modality. Instead, the overall pattern of results is more in line with a functional view of auditory distraction, according to which the to-be-ignored modality is routinely monitored for potentially important stimuli (e.g., self-relevant or threatening information), the detection of which draws processing resources away from the primary task. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Effect of Mild Cognitive Impairment and Alzheimer Disease on Auditory Steady-State Responses
Shahmiri, Elaheh; Jafari, Zahra; Noroozian, Maryam; Zendehbad, Azadeh; Haddadzadeh Niri, Hassan; Yoonessi, Ali
2017-01-01
Introduction: Mild Cognitive Impairment (MCI), a disorder of elderly people, is difficult to diagnose and often progresses to Alzheimer Disease (AD). The temporal region is among the first areas impaired in the early stages of AD. Therefore, auditory cortical evoked potentials could be a valuable neuromarker for detecting MCI and AD. Methods: In this study, the thresholds of the Auditory Steady-State Response (ASSR) to 40 Hz and 80 Hz modulation were compared between AD, MCI, and control groups. A total of 42 participants (12 with AD, 15 with MCI, and 15 elderly normal controls) were tested for ASSR. Hearing thresholds at 500, 1000, and 2000 Hz in both ears with modulation rates of 40 and 80 Hz were obtained. Results: In normal subjects, significant differences were observed in estimated ASSR thresholds between the 2 modulation rates at all 3 frequencies in both ears. In the MCI group, the difference was significant only at 500 Hz, and no significant differences were observed in the AD group. In addition, significant differences were observed between the normal subjects and AD patients in estimated ASSR thresholds at both modulation rates and all 3 frequencies in both ears. A significant difference was also observed between the normal and MCI groups at 2000 Hz. An increase in estimated 40 Hz ASSR thresholds in patients with AD and MCI suggests neural changes in auditory cortex compared with normal ageing. Conclusion: Auditory threshold estimation with low and high modulation rates by the ASSR test could be a potentially helpful test for detecting cognitive impairment. PMID:29158880
Probability of detecting band-tailed pigeons during call-broadcast versus auditory surveys
Kirkpatrick, C.; Conway, C.J.; Hughes, K.M.; Devos, J.C.
2007-01-01
Estimates of population trend for the interior subspecies of band-tailed pigeon (Patagioenas fasciata fasciata) are not available because no standardized survey method exists for monitoring the interior subspecies. We evaluated 2 potential band-tailed pigeon survey methods (auditory and call-broadcast surveys) from 2002 to 2004 in 5 mountain ranges in southern Arizona, USA, and in mixed-conifer forest throughout the state. Both auditory and call-broadcast surveys produced low numbers of cooing pigeons detected per survey route (x̄ ≤ 0.67) and had relatively high temporal variance in average number of cooing pigeons detected during replicate surveys (CV ≥ 161%). However, compared to auditory surveys, use of call-broadcast increased 1) the percentage of replicate surveys on which ≥1 cooing pigeon was detected by an average of 16%, and 2) the number of cooing pigeons detected per survey route by an average of 29%, with this difference being greatest during the first 45 minutes of the morning survey period. Moreover, the probability of detecting a cooing pigeon was 27% greater during call-broadcast (0.80) versus auditory (0.63) surveys. We found that cooing pigeons were most common in mixed-conifer forest in southern Arizona, and density of male pigeons in mixed-conifer forest throughout the state averaged 0.004 (SE = 0.001) pigeons/ha. Our results are the first to show that call-broadcast increases the probability of detecting band-tailed pigeons (or any species of Columbidae) during surveys. Call-broadcast surveys may provide a useful method for monitoring populations of the interior subspecies of band-tailed pigeon in areas where other survey methods are inappropriate.
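The 27% figure quoted above is simply the relative increase between the two reported detection probabilities; a one-liner reproduces it from the study's own numbers:

```python
def relative_increase(p_new, p_old):
    """Relative gain in detection probability, as a percentage."""
    return 100 * (p_new - p_old) / p_old

# Call-broadcast (0.80) versus auditory-only (0.63) surveys
print(round(relative_increase(0.80, 0.63)))  # 27
```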
Auditory closed-loop stimulation of the sleep slow oscillation enhances memory.
Ngo, Hong-Viet V; Martinetz, Thomas; Born, Jan; Mölle, Matthias
2013-05-08
Brain rhythms regulate information processing in different states to enable learning and memory formation. The <1 Hz sleep slow oscillation hallmarks slow-wave sleep and is critical to memory consolidation. Here we show in sleeping humans that auditory stimulation in phase with the ongoing rhythmic occurrence of slow oscillation up states profoundly enhances the slow oscillation rhythm, phase-coupled spindle activity, and, consequently, the consolidation of declarative memory. Stimulation out of phase with the ongoing slow oscillation rhythm remained ineffective. Closed-loop in-phase stimulation provides a straight-forward tool to enhance sleep rhythms and their functional efficacy. Copyright © 2013 Elsevier Inc. All rights reserved.
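The logic of in-phase stimulation can be sketched offline: detect the up states (local maxima) of an idealized slow oscillation and treat them as trigger times. A real closed-loop system must predict the phase online from running EEG; the clean 0.8 Hz sinusoid below is a stand-in for illustration, not the study's signal:

```python
import numpy as np

fs = 100.0                        # Hz, assumed sampling rate
t = np.arange(0, 10, 1 / fs)      # 10 s of simulated signal
so = np.sin(2 * np.pi * 0.8 * t)  # idealized 0.8 Hz slow oscillation

# Trigger a click at each local maximum (slow-oscillation up state)
is_peak = (so[1:-1] > so[:-2]) & (so[1:-1] > so[2:])
trigger_times = t[1:-1][is_peak]
print(len(trigger_times))  # 8 up states fall within the 10 s window
```

Out-of-phase stimulation, the control condition in the study, would correspond to triggering at the troughs instead.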
Mode-Locking Behavior of Izhikevich Neuron Under Periodic External Forcing
NASA Astrophysics Data System (ADS)
Farokhniaee, Amirali; Large, Edward
2015-03-01
In this study we obtained the regions of existence of various mode-locked states on the forcing period-strength plane, known as Arnold tongues, for Izhikevich neurons. The study is based on the model of Izhikevich (2003), a normal-form reduction of the Hodgkin-Huxley neuron. This model is much simpler, in terms of the dimension of the coupled nonlinear differential equations, than other existing models, yet excellent at generating the complex spiking patterns observed in real neurons. Many neurons in the auditory system of the brain must encode amplitude variations of a periodic signal. Under periodic stimulation these neurons display rich dynamical states, including mode-locked and chaotic responses. Periodic stimuli such as sinusoidal waves and amplitude-modulated (AM) sounds can lead to various forms of n:m mode-locked states, similar to the mode-locking phenomenon in a laser resonance cavity. Obtaining Arnold tongues provides useful insight into the organization of mode-locking behavior of neurons under periodic forcing. Hence we can describe the construction of harmonic and subharmonic responses in the early processing stages of the auditory system, such as the auditory nerve and cochlear nucleus.
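The Izhikevich (2003) model is a two-variable quadratic integrate-and-fire system with a reset rule. A minimal Euler-integrated sketch under sinusoidal forcing follows; the parameters are the standard regular-spiking set chosen for illustration, not the values used in the study:

```python
import numpy as np

def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.25):
    """Simulate v' = 0.04 v^2 + 5v + 140 - u + I(t), u' = a(bv - u),
    with reset v -> c, u -> u + d whenever v >= 30 mV.
    I: input current per time step; returns spike times in ms."""
    v, u = -65.0, -65.0 * b
    spikes = []
    for k, i_t in enumerate(I):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + i_t)
        u += dt * a * (b * v - u)
        if v >= 30.0:
            spikes.append(k * dt)
            v, u = c, u + d
    return spikes

t = np.arange(0, 1000.0, 0.25)                   # 1 s in ms
I = 10.0 + 5.0 * np.sin(2 * np.pi * 0.004 * t)   # 4 Hz periodic forcing
spikes = izhikevich(I)
print(len(spikes) > 0)  # the forced neuron settles into spiking
```

Sweeping the forcing period and strength and classifying the ratio of spikes to forcing cycles would trace out the n:m Arnold tongues described in the abstract.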
Sonnleitner, Andreas; Treder, Matthias Sebastian; Simon, Michael; Willmann, Sven; Ewald, Arne; Buchner, Axel; Schrauf, Michael
2014-01-01
Driver distraction is responsible for a substantial number of traffic accidents. This paper describes the impact of an auditory secondary task on drivers' mental states during a primary driving task. N=20 participants performed the test procedure in a car following task with repeated forced braking on a non-public test track. Performance measures (provoked reaction time to brake lights) and brain activity (EEG alpha spindles) were analyzed to describe distracted drivers. Further, a classification approach was used to investigate whether alpha spindles can predict drivers' mental states. Results show that reaction times and alpha spindle rate increased with time-on-task. Moreover, brake reaction times and alpha spindle rate were significantly higher while driving with the auditory secondary task as opposed to driving only. In single-trial classification, a combination of spindle parameters yielded a median classification error of about 8% in discriminating distracted from alert driving. Reduced driving performance (i.e., prolonged brake reaction times) during increased cognitive load is assumed to be indicated by EEG alpha spindles, enabling the quantification of driver distraction in experiments on public roads without verbally assessing the drivers' mental states. Copyright © 2013 Elsevier Ltd. All rights reserved.
Intracerebral evidence of rhythm transform in the human auditory cortex.
Nozaradan, Sylvie; Mouraux, André; Jonas, Jacques; Colnat-Coulbois, Sophie; Rossion, Bruno; Maillard, Louis
2017-07-01
Musical entrainment is shared by all human cultures and the perception of a periodic beat is a cornerstone of this entrainment behavior. Here, we investigated whether beat perception might have its roots in the earliest stages of auditory cortical processing. Local field potentials were recorded from 8 patients implanted with depth-electrodes in Heschl's gyrus and the planum temporale (55 recording sites in total), usually considered as human primary and secondary auditory cortices. Using a frequency-tagging approach, we show that both low-frequency (<30 Hz) and high-frequency (>30 Hz) neural activities in these structures faithfully track auditory rhythms through frequency-locking to the rhythm envelope. A selective gain in amplitude of the response frequency-locked to the beat frequency was observed for the low-frequency activities but not for the high-frequency activities, and was sharper in the planum temporale, especially for the more challenging syncopated rhythm. Hence, this gain process is not systematic in all activities produced in these areas and depends on the complexity of the rhythmic input. Moreover, this gain was disrupted when the rhythm was presented at fast speed, revealing low-pass response properties which could account for the propensity to perceive a beat only within the musical tempo range. Together, these observations show that, even though part of these neural transforms of rhythms could already take place in subcortical auditory processes, the earliest auditory cortical processes shape the neural representation of rhythmic inputs in favor of the emergence of a periodic beat.
Gordon, K A; Papsin, B C; Harrison, R V
2007-08-01
The role of apical versus basal cochlear implant electrode stimulation on central auditory development was examined. We hypothesized that, in children with early onset deafness, auditory development evoked by basal electrode stimulation would differ from that evoked more apically. Responses of the auditory nerve and brainstem, evoked by an apical and a basal implant electrode, were measured over the first year of cochlear implant use in 50 children with early onset severe to profound deafness who used hearing aids prior to implantation. Responses at initial stimulation were of larger amplitude and shorter latency when evoked by the apical electrode. No significant effects of residual hearing or age were found on initial response amplitudes or latencies. With implant use, responses evoked by both electrodes showed decreases in wave and interwave latencies reflecting decreased neural conduction time through the brainstem. Apical versus basal differences persisted with implant experience with one exception; eIII-eV interlatency differences decreased with implant use. Acute stimulation shows prolongation of basally versus apically evoked auditory nerve and brainstem responses in children with severe to profound deafness. Interwave latencies reflecting neural conduction along the caudal and rostral portions of the brainstem decreased over the first year of implant use. Differences in neural conduction times evoked by apical versus basal electrode stimulation persisted in the caudal but not rostral brainstem. Activity-dependent changes of the auditory brainstem occur in response to both apical and basal cochlear implant electrode stimulation.
Selective attention in normal and impaired hearing.
Shinn-Cunningham, Barbara G; Best, Virginia
2008-12-01
A common complaint among listeners with hearing loss (HL) is that they have difficulty communicating in common social settings. This article reviews how normal-hearing listeners cope in such settings, especially how they focus attention on a source of interest. Results of experiments with normal-hearing listeners suggest that the ability to selectively attend depends on the ability to analyze the acoustic scene and to form perceptual auditory objects properly. Unfortunately, sound features important for auditory object formation may not be robustly encoded in the auditory periphery of HL listeners. In turn, impaired auditory object formation may interfere with the ability to filter out competing sound sources. Peripheral degradations are also likely to reduce the salience of higher-order auditory cues such as location, pitch, and timbre, which enable normal-hearing listeners to select a desired sound source out of a sound mixture. Degraded peripheral processing is also likely to increase the time required to form auditory objects and focus selective attention so that listeners with HL lose the ability to switch attention rapidly (a skill that is particularly important when trying to participate in a lively conversation). Finally, peripheral deficits may interfere with strategies that normal-hearing listeners employ in complex acoustic settings, including the use of memory to fill in bits of the conversation that are missed. Thus, peripheral hearing deficits are likely to cause a number of interrelated problems that challenge the ability of HL listeners to communicate in social settings requiring selective attention.
Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex.
Sloas, David C; Zhuo, Ran; Xue, Hongbo; Chambers, Anna R; Kolaczyk, Eric; Polley, Daniel B; Sen, Kamal
2016-01-01
Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices.
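The abstract's core statistical tool, the generalized additive model, fits the neural response as a sum of smooth functions of each stimulus dimension. As a minimal, self-contained sketch (not the authors' implementation), the classic backfitting algorithm for a purely additive model can be written in plain numpy with a crude bin-averaging smoother; the simulated two-dimensional "stimulus" and response below are illustrative, and a real GAM would use penalized splines and explicit interaction terms:

```python
import numpy as np

def bin_smooth(x, r, n_bins=10):
    """Crude smoother: average the partial residuals r within bins of x."""
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    means = np.array([r[idx == b].mean() if np.any(idx == b) else 0.0
                      for b in range(n_bins)])
    return means[idx]

def backfit(X, y, n_iter=20):
    """Backfitting for an additive model y ~ f1(x1) + ... + fp(xp)."""
    n, p = X.shape
    f = np.zeros((n, p))
    intercept = y.mean()
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual: remove intercept and all other components
            partial = y - intercept - f[:, np.arange(p) != j].sum(axis=1)
            fj = bin_smooth(X[:, j], partial)
            f[:, j] = fj - fj.mean()   # center each component for identifiability
    return intercept, f

# Simulated neural response depending additively on two stimulus dimensions
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 2))
y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.1, 2000)

intercept, f = backfit(X, y)
pred = intercept + f.sum(axis=1)
r2 = 1 - np.var(y - pred) / np.var(y)
print(round(r2, 2))
```

When the fitted additive model leaves structured residuals, that is exactly the evidence for cross-dimension interactions that the study reports; a full GAM would add bivariate smooth terms f(x1, x2) to capture them.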
A framework for testing and comparing binaural models.
Dietz, Mathias; Lestang, Jean-Hugues; Majdak, Piotr; Stern, Richard M; Marquardt, Torsten; Ewert, Stephan D; Hartmann, William M; Goodman, Dan F M
2018-03-01
Auditory research has a rich history of combining experimental evidence with computational simulations of auditory processing in order to deepen our theoretical understanding of how sound is processed in the ears and in the brain. Despite significant progress in the amount of detail and breadth covered by auditory models, for many components of the auditory pathway there are still different model approaches that are often not equivalent but rather in conflict with each other. Similarly, some experimental studies yield conflicting results, which has led to controversies. These can best be resolved by a systematic comparison of multiple experimental data sets and model approaches. Binaural processing is a prominent example of how the development of quantitative theories can advance our understanding of the phenomena, but there remain several unresolved questions for which competing model approaches exist. This article discusses a number of current unresolved or disputed issues in binaural modelling, as well as some of the significant challenges in comparing binaural models with each other and with the experimental data. We introduce an auditory model framework, which we believe can become a useful infrastructure for resolving some of the current controversies. It runs models through the same paradigms that are used experimentally. The core of the proposed framework is an interface that connects three components irrespective of their underlying programming language: the experiment software, an auditory pathway model, and task-dependent decision stages called artificial observers that provide the same output format as the test subject. Copyright © 2017 Elsevier B.V. All rights reserved.
Beckers, Gabriël J L; Gahr, Manfred
2012-08-01
Auditory systems bias responses to sounds that are unexpected on the basis of recent stimulus history, a phenomenon that has been widely studied using sequences of unmodulated tones (mismatch negativity; stimulus-specific adaptation). Such a paradigm, however, does not directly reflect problems that neural systems normally solve for adaptive behavior. We recorded multiunit responses in the caudomedial auditory forebrain of anesthetized zebra finches (Taeniopygia guttata) at 32 sites simultaneously, to contact calls that recur probabilistically at a rate that is used in communication. Neurons in secondary, but not primary, auditory areas respond preferentially to calls when they are unexpected (deviant) compared with the same calls when they are expected (standard). This response bias is predominantly due to sites more often not responding to standard events than to deviant events. When two call stimuli alternate between standard and deviant roles, most sites exhibit a response bias to deviant events of both stimuli. This suggests that biases are not based on a use-dependent decrease in response strength but involve a more complex mechanism that is sensitive to auditory deviance per se. Furthermore, between many secondary sites, responses are tightly synchronized, a phenomenon that is driven by internal neuronal interactions rather than by the timing of stimulus acoustic features. We hypothesize that this deviance-sensitive, internally synchronized network of neurons is involved in the involuntary capturing of attention by unexpected and behaviorally potentially relevant events in natural auditory scenes.
Characteristics of hearing and echolocation in under-studied odontocete species
NASA Astrophysics Data System (ADS)
Smith, Adam B.
All odontocetes (toothed whales and dolphins) studied to date have been shown to echolocate. They use sound as their primary means for foraging, navigation, and communication with conspecifics and are thus considered acoustic specialists. However, the vast majority of what is known about odontocete acoustic systems comes from only a handful of the 76 recognized extant species. The research presented in this dissertation investigated basic characteristics of odontocete hearing and echolocation, including auditory temporal resolution, auditory pathways, directional hearing, and transmission beam characteristics, in individuals of five different odontocete species that are understudied. Modulation rate transfer functions were measured from formerly stranded individuals of four different species (Stenella longirostris, Feresa attenuata, Globicephala melas, Mesoplodon densirostris) using non-invasive auditory evoked potential methods. All individuals showed acute auditory temporal resolution that was comparable to other studied odontocete species. Using the same electrophysiological methods, auditory pathways and directional hearing were investigated in a Risso's dolphin (Grampus griseus) using both localized and far-field acoustic stimuli. The dolphin's hearing showed significant, frequency dependent asymmetry to localized sound presented on the right and left sides of its head. The dolphin also showed acute, but mostly symmetrical, directional auditory sensitivity to sounds presented in the far-field. Furthermore, characteristics of the echolocation transmission beam of this same individual Risso's dolphin were measured using a 16 element hydrophone array. The dolphin exhibited both single and dual lobed beam shapes that were more directional than similar measurements from a bottlenose dolphin, harbor porpoise, and false killer whale.
A Dynamic Compressive Gammachirp Auditory Filterbank
Irino, Toshio; Patterson, Roy D.
2008-01-01
It is now common to use knowledge about human auditory processing in the development of audio signal processors. Until recently, however, such systems were limited by their linearity. The auditory filter system is known to be level-dependent as evidenced by psychophysical data on masking, compression, and two-tone suppression. However, there were no analysis/synthesis schemes with nonlinear filterbanks. This paper describe18300060s such a scheme based on the compressive gammachirp (cGC) auditory filter. It was developed to extend the gammatone filter concept to accommodate the changes in psychophysical filter shape that are observed to occur with changes in stimulus level in simultaneous, tone-in-noise masking. In models of simultaneous noise masking, the temporal dynamics of the filtering can be ignored. Analysis/synthesis systems, however, are intended for use with speech sounds where the glottal cycle can be long with respect to auditory time constants, and so they require specification of the temporal dynamics of auditory filter. In this paper, we describe a fast-acting level control circuit for the cGC filter and show how psychophysical data involving two-tone suppression and compression can be used to estimate the parameter values for this dynamic version of the cGC filter (referred to as the “dcGC” filter). One important advantage of analysis/synthesis systems with a dcGC filterbank is that they can inherit previously refined signal processing algorithms developed with conventional short-time Fourier transforms (STFTs) and linear filterbanks. PMID:19330044
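The gammachirp extends the gammatone impulse response with a log-time chirp term. As a hedged numpy sketch (the passive gammachirp only, with textbook-style parameter values chosen for illustration rather than the fitted values or the fast-acting level control described in the paper):

```python
import numpy as np

def erb(f):
    """Equivalent rectangular bandwidth (Glasberg & Moore approximation), in Hz."""
    return 24.7 * (4.37 * f / 1000.0 + 1.0)

def gammachirp(fr, fs, dur=0.025, n=4, b=1.019, c=-2.0, phase=0.0):
    """Impulse response of a passive gammachirp filter.

    fr : asymptotic frequency (Hz); fs : sample rate (Hz).
    The c*ln(t) term produces a frequency glide; with c < 0 the
    instantaneous frequency rises toward fr. Setting c = 0 reduces
    this to a plain gammatone filter.
    """
    t = np.arange(1, int(dur * fs) + 1) / fs   # start at t > 0 so ln(t) is defined
    env = t ** (n - 1) * np.exp(-2 * np.pi * b * erb(fr) * t)
    ir = env * np.cos(2 * np.pi * fr * t + c * np.log(t) + phase)
    return ir / np.max(np.abs(ir))             # peak-normalize

fs = 16000
ir = gammachirp(2000.0, fs)   # 25 ms impulse response at a 2 kHz centre frequency
```

Filtering a signal is then an ordinary convolution with `ir`; the compressive (level-dependent) behaviour in the paper comes from cascading this passive filter with a level-controlled high-pass asymmetric function, which is omitted here.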
On pure word deafness, temporal processing, and the left hemisphere.
Stefanatos, Gerry A; Gershkoff, Arthur; Madigan, Sean
2005-07-01
Pure word deafness (PWD) is a rare neurological syndrome characterized by severe difficulties in understanding and reproducing spoken language, with sparing of written language comprehension and speech production. The pathognomonic disturbance of auditory comprehension appears to be associated with a breakdown in processes involved in mapping auditory input to lexical representations of words, but the functional locus of this disturbance and the localization of the responsible lesion have long been disputed. We report here on a woman with PWD resulting from a circumscribed unilateral infarct involving the left superior temporal lobe who demonstrated significant problems processing transitional spectrotemporal cues in both speech and nonspeech sounds. On speech discrimination tasks, she exhibited poor differentiation of stop consonant-vowel syllables distinguished by voicing onset and brief formant frequency transitions. Isolated formant transitions could be reliably discriminated only at very long durations (> 200 ms). By contrast, click fusion threshold, which depends on millisecond-level resolution of brief auditory events, was normal. These results suggest that the problems with speech analysis in this case were not secondary to general constraints on auditory temporal resolution. Rather, they point to a disturbance of left hemisphere auditory mechanisms that preferentially analyze rapid spectrotemporal variations in frequency. The findings have important implications for our conceptualization of PWD and its subtypes.
Meyer, Georg F.; Wong, Li Ting; Timson, Emma; Perfect, Philip; White, Mark D.
2012-01-01
We argue that objective fidelity evaluation of virtual environments, such as flight simulation, should be human-performance-centred and task-specific rather than measure the match between simulation and physical reality. We show how principled experimental paradigms and behavioural models to quantify human performance in simulated environments that have emerged from research in multisensory perception provide a framework for the objective evaluation of the contribution of individual cues to human performance measures of fidelity. We present three examples in a flight simulation environment as a case study: Experiment 1: Detection and categorisation of auditory and kinematic motion cues; Experiment 2: Performance evaluation in a target-tracking task; Experiment 3: Transferrable learning of auditory motion cues. We show how the contribution of individual cues to human performance can be robustly evaluated for each task and that the contribution is highly task dependent. The same auditory cues that can be discriminated and are optimally integrated in experiment 1, do not contribute to target-tracking performance in an in-flight refuelling simulation without training, experiment 2. In experiment 3, however, we demonstrate that the auditory cue leads to significant, transferrable, performance improvements with training. We conclude that objective fidelity evaluation requires a task-specific analysis of the contribution of individual cues. PMID:22957068
Ito, Masanori; Kado, Naoki; Suzuki, Toshiaki; Ando, Hiroshi
2013-01-01
[Purpose] The purpose of this study was to investigate the influence of external pacing with periodic auditory stimuli on the control of periodic movement. [Subjects and Methods] Eighteen healthy subjects performed self-paced, synchronization-continuation, and syncopation-continuation tapping. Inter-onset intervals were 1,000, 2,000 and 5,000 ms. The variability of inter-tap intervals was compared between the different pacing conditions and between self-paced tapping and each continuation phase. [Results] There were no significant differences in the mean and standard deviation of the inter-tap interval between pacing conditions. For the 1,000 and 5,000 ms tasks, there were significant differences in the mean inter-tap interval following auditory pacing compared with self-pacing. For the 2,000 ms syncopation condition and 5,000 ms task, there were significant differences from self-pacing in the standard deviation of the inter-tap interval following auditory pacing. [Conclusion] These results suggest that the accuracy of periodic movement with intervals of 1,000 and 5,000 ms can be improved by the use of auditory pacing. However, the consistency of periodic movement is mainly dependent on the inherent skill of the individual; thus, improvement of consistency based on pacing is unlikely. PMID:24259932
The effect of noise exposure during the developmental period on the function of the auditory system.
Bureš, Zbyněk; Popelář, Jiří; Syka, Josef
2017-09-01
Recently, there has been growing evidence that development and maturation of the auditory system depends substantially on the afferent activity supplying inputs to the developing centers. In cases when this activity is altered during early ontogeny as a consequence of, e.g., an unnatural acoustic environment or acoustic trauma, the structure and function of the auditory system may be severely affected. Pathological alterations may be found in populations of ribbon synapses of the inner hair cells, in the structure and function of neuronal circuits, or in auditory driven behavioral and psychophysical performance. Three characteristics of the developmental impairment are of key importance: first, they often persist to adulthood, permanently influencing the quality of life of the subject; second, their manifestations are different and sometimes even contradictory to the impairments induced by noise trauma in adulthood; third, they may be 'hidden' and difficult to diagnose by standard audiometric procedures used in clinical practice. This paper reviews the effects of early interventions to the auditory system, in particular, of sound exposure during ontogeny. We summarize the results of recent morphological, electrophysiological, and behavioral experiments, discuss the putative mechanisms and hypotheses, and draw possible consequences for human neonatal medicine and noise health. Copyright © 2016 Elsevier B.V. All rights reserved.
Curculigo orchioides protects cisplatin-induced cell damage.
Kang, Tong Ho; Hong, Bin Na; Jung, Su-Young; Lee, Jeong-Han; So, Hong-Seob; Park, Raekil; You, Yong-Ouk
2013-01-01
Cisplatin is commonly used as a chemotherapeutic agent against many human cancers. However, it generates reactive oxygen species (ROS) and has serious dose-limiting side effects, including ototoxicity. The roots of Curculigo orchioides (C. orchioides) have been used to treat auditory diseases such as tinnitus and hearing loss in Chinese traditional medicine. In the present study, we investigated the protective effects of an ethanol extract obtained from C. orchioides rhizome (COR) on cisplatin-induced cell damage in auditory cells (HEI-OC1). COR (2.5-25 μg/ml) inhibited cisplatin-induced HEI-OC1 cell damage in a dose-dependent manner. To investigate the protective mechanism of COR on cisplatin cytotoxicity in HEI-OC1 cells, we measured the effects of COR on ROS generation and lipid peroxidation in cisplatin-treated cells as well as its scavenging activities against superoxide radicals, hydroxyl radicals, hydrogen peroxide, and DPPH radicals. COR (1-25 μg/ml) had scavenging activities against superoxide radicals, hydroxyl radicals, hydrogen peroxide, and DPPH radicals, as well as reduced lipid peroxidation. In in vivo experiments, COR was shown to reduce the cochlear and peripheral auditory function impairments caused by cisplatin-induced auditory damage in mice. These results indicate that COR protects from cisplatin-induced auditory damage by inhibiting lipid peroxidation and scavenging activities against free radicals.
Dynamic speech representations in the human temporal lobe.
Leonard, Matthew K; Chang, Edward F
2014-09-01
Speech perception requires rapid integration of acoustic input with context-dependent knowledge. Recent methodological advances have allowed researchers to identify underlying information representations in primary and secondary auditory cortex and to examine how context modulates these representations. We review recent studies that focus on contextual modulations of neural activity in the superior temporal gyrus (STG), a major hub for spectrotemporal encoding. Recent findings suggest a highly interactive flow of information processing through the auditory ventral stream, including influences of higher-level linguistic and metalinguistic knowledge, even within individual areas. Such mechanisms may give rise to more abstract representations, such as those for words. We discuss the importance of characterizing representations of context-dependent and dynamic patterns of neural activity in the approach to speech perception research. Copyright © 2014 Elsevier Ltd. All rights reserved.
Spatialized audio improves call sign recognition during multi-aircraft control.
Kim, Sungbin; Miller, Michael E; Rusnock, Christina F; Elshaw, John J
2018-07-01
We investigated the impact of a spatialized audio display on response time, workload, and accuracy while monitoring auditory information for relevance. The human ability to differentiate sound direction implies that spatial audio may be used to encode information. Therefore, it is hypothesized that spatial audio cues can be applied to aid differentiation of critical versus noncritical verbal auditory information. We used a human performance model and a laboratory study involving 24 participants to examine the effect of applying a notional, automated parser to present audio in a particular ear depending on information relevance. Operator workload and performance were assessed while subjects listened for and responded to relevant audio cues associated with critical information among additional noncritical information. Encoding relevance through spatial location in a spatial audio display system (as opposed to monophonic, binaural presentation) significantly reduced response time and workload, particularly for noncritical information. Future auditory displays employing spatial cues to indicate relevance have the potential to reduce workload and improve operator performance in similar task domains. Furthermore, these displays have the potential to reduce the dependence of workload and performance on the number of audio cues. Published by Elsevier Ltd.
Wang, Xiao-Dong; Wang, Ming; Chen, Lin
2013-09-01
In Mandarin Chinese, a tonal language, pitch level and pitch contour are two dimensions of lexical tones according to their acoustic features (i.e., pitch patterns). A change in pitch level features a step change whereas that in pitch contour features a continuous variation in voice pitch. Currently, relatively little is known about the hemispheric lateralization for the processing of each dimension. To address this issue, we made whole-head electrical recordings of mismatch negativity in native Chinese speakers in response to the contrast of Chinese lexical tones in each dimension. We found that pre-attentive auditory processing of pitch level was obviously lateralized to the right hemisphere whereas there is a tendency for that of pitch contour to be lateralized to the left. We also found that the brain responded faster to pitch level than to pitch contour at a pre-attentive stage. These results indicate that the hemispheric lateralization for early auditory processing of lexical tones depends on the pitch level and pitch contour, and suggest an underlying inter-hemispheric interactive mechanism for the processing. © 2013 Elsevier Ltd. All rights reserved.
Modality Specific Cerebro-Cerebellar Activations in Verbal Working Memory: An fMRI Study
Kirschen, Matthew P.; Chen, S. H. Annabel; Desmond, John E.
2010-01-01
Verbal working memory (VWM) engages frontal and temporal/parietal circuits subserving the phonological loop, as well as, superior and inferior cerebellar regions which have projections from these neocortical areas. Different cerebro-cerebellar circuits may be engaged for integrating aurally- and visually-presented information for VWM. The present fMRI study investigated load (2, 4, or 6 letters) and modality (auditory and visual) dependent cerebro-cerebellar VWM activation using a Sternberg task. FMRI revealed modality-independent activations in left frontal (BA 6/9/44), insular, cingulate (BA 32), and bilateral inferior parietal/supramarginal (BA 40) regions, as well as in bilateral superior (HVI) and right inferior (HVIII) cerebellar regions. Visual presentation evoked prominent activations in right superior (HVI/CrusI) cerebellum, bilateral occipital (BA19) and left parietal (BA7/40) cortex while auditory presentation showed robust activations predominately in bilateral temporal regions (BA21/22). In the cerebellum, we noted a visual to auditory emphasis of function progressing from superior to inferior and from lateral to medial regions. These results extend our previous findings of fMRI activation in cerebro-cerebellar networks during VWM, and demonstrate both modality dependent commonalities and differences in activations with increasing memory load. PMID:20714061
Crinion, Jenny; Price, Cathy J
2005-12-01
Previous studies have suggested that recovery of speech comprehension after left hemisphere infarction may depend on a mechanism in the right hemisphere. However, the role that distinct right hemisphere regions play in speech comprehension following left hemisphere stroke has not been established. Here, we used functional magnetic resonance imaging (fMRI) to investigate narrative speech activation in 18 neurologically normal subjects and 17 patients with left hemisphere stroke and a history of aphasia. Activation for listening to meaningful stories relative to meaningless reversed speech was identified in the normal subjects and in each patient. Second level analyses were then used to investigate how story activation changed with the patients' auditory sentence comprehension skills and surprise story recognition memory tests post-scanning. Irrespective of lesion site, performance on tests of auditory sentence comprehension was positively correlated with activation in the right lateral superior temporal region, anterior to primary auditory cortex. In addition, when the stroke spared the left temporal cortex, good performance on tests of auditory sentence comprehension was also correlated with the left posterior superior temporal cortex (Wernicke's area). In distinct contrast to this, good story recognition memory predicted left inferior frontal and right cerebellar activation. The implication of this double dissociation in the effects of auditory sentence comprehension and story recognition memory is that left frontal and left temporal activations are dissociable. Our findings strongly support the role of the right temporal lobe in processing narrative speech and, in particular, auditory sentence comprehension following left hemisphere aphasic stroke. In addition, they highlight the importance of the right anterior superior temporal cortex where the response was dissociated from that in the left posterior temporal lobe.
Yoo, Ga Eul; Kim, Soo Ji
2016-01-01
Given the increasing evidence demonstrating the effects of rhythmic auditory cueing for motor rehabilitation of stroke patients, this synthesized analysis is needed in order to improve rehabilitative practice and maximize clinical effectiveness. This study aimed to systematically analyze the literature on rhythmic auditory cueing for motor rehabilitation of stroke patients by highlighting the outcome variables, type of cueing, and stage of stroke. A systematic review with meta-analysis of randomized controlled or clinically controlled trials was conducted. Electronic databases and music therapy journals were searched for studies including stroke, the use of rhythmic auditory cueing, and motor outcomes, such as gait and upper-extremity function. A total of 10 studies (RCT or CCT) with 356 individuals were included for meta-analysis. There were large effect sizes (Hedges's g = 0.984 for walking velocity; Hedges's g = 0.840 for cadence; Hedges's g = 0.760 for stride length; and Hedges's g = 0.456 for Fugl-Meyer test scores) in the use of rhythmic auditory cueing. Additional subgroup analysis demonstrated that although the type of rhythmic cueing and stage of stroke did not lead to statistically substantial group differences, the effect sizes and heterogeneity values in each subgroup implied possible differences in treatment effect. This study corroborates the beneficial effects of rhythmic auditory cueing, supporting its expanded application to broadened areas of rehabilitation for stroke patients. It also suggests future investigation of how outcomes differ depending on the type and intensity of rhythmic auditory cueing provided. © the American Music Therapy Association 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
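The Hedges's g values reported above are standardized mean differences with a small-sample bias correction. A minimal sketch of the standard computation, using hypothetical treatment-vs-control walking-velocity numbers (the means, SDs, and group sizes below are made up for illustration, not taken from the review's included trials):

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Hedges's g: Cohen's d scaled by the small-sample correction factor J."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                 # Cohen's d
    df = n1 + n2 - 2
    j = 1 - 3 / (4 * df - 1)           # bias-correction factor J
    return j * d

# Hypothetical walking-velocity data (m/s): cued group vs. control group
g = hedges_g(0.85, 0.20, 20, 0.65, 0.22, 20)
print(round(g, 3))   # → 0.932
```

Because J < 1, g is always slightly smaller in magnitude than the uncorrected d, with the correction mattering most for small trials like those typically pooled in stroke rehabilitation meta-analyses.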
A verbal behavior analysis of auditory hallucinations
Burns, Caleb E. S.; Heiby, Elaine M.; Tharp, Roland G.
1983-01-01
A review of recent research on the non-medical control of auditory hallucinations is presented. It is suggested that the decreases in hallucinatory behavior obtained in studies using aversive contingencies may be attributable to the disruption of the chains of behavior involved. The results of several additional studies are interpreted as indicating that methods of stimulus control and the use of incompatible behaviors may be effective in reducing the rate of auditory hallucinations. Research relating auditory hallucinations to subvocalizations is presented in support of the view that hallucinatory phenomena are sometimes related to the subject's own vocal productions. Skinner's views (1934, 1936, 1953, 1957, 1980) are then presented as possible explanations of some hallucinatory behavior. It is suggested that some auditory hallucinations consist of the mishearing of environmental and physiological stimuli as voices in a fashion similar to that which Skinner observed in his work with the verbal summator. The maintenance of long chains of such responses may be largely attributable to self-intraverbal influences (such as are present during automatic writing). With some auditory hallucinations, this progression involves first mishearing ambiguous stimuli as voices and then attributing the voices to some cause (e.g., insanity, the television, radio, or God). Later, the frequent and ongoing chains of such behavior may contaminate other verbal responses. Such verbal behavior may be parasitic on “normal verbal behavior” (and hence, not directly dependent on consequences for maintenance), may be cued by various stimuli (including respiration), and may interfere with other covert and overt behavior. Several studies to investigate this view are presented. It is hoped that such research will lead to a better understanding of the major issues involved in the etiology and treatment of auditory hallucinations in particular and perhaps of psychosis in general. PMID:22478583
Auditory decision aiding in supervisory control of multiple unmanned aerial vehicles.
Donmez, Birsen; Cummings, M L; Graham, Hudson D
2009-10-01
This article is an investigation of the effectiveness of sonifications, which are continuous auditory alerts mapped to the state of a monitored task, in supporting unmanned aerial vehicle (UAV) supervisory control. UAV supervisory control requires monitoring a UAV across multiple tasks (e.g., course maintenance) via a predominantly visual display, which currently is supported with discrete auditory alerts. Sonification has been shown to enhance monitoring performance in domains such as anesthesiology by allowing an operator to immediately determine an entity's (e.g., patient) current and projected states, and is a promising alternative to discrete alerts in UAV control. However, minimal research compares sonification to discrete alerts, and no research assesses the effectiveness of sonification for monitoring multiple entities (e.g., multiple UAVs). The authors conducted an experiment with 39 military personnel, using a simulated setup. Participants controlled single and multiple UAVs and received sonifications or discrete alerts based on UAV course deviations and late target arrivals. Regardless of the number of UAVs supervised, the course deviation sonification resulted in reactions to course deviations that were 1.9 s faster, a 19% enhancement, compared with discrete alerts. However, course deviation sonifications interfered with the effectiveness of discrete late arrival alerts in general and with operator responses to late arrivals when supervising multiple vehicles. Sonifications can outperform discrete alerts when designed to aid operators to predict future states of monitored tasks. However, sonifications may mask other auditory alerts and interfere with other monitoring tasks that require divided attention. This research has implications for supervisory control display design.
You are only as old as you sound: auditory aftereffects in vocal age perception.
Zäske, Romi; Schweinberger, Stefan R
2011-12-01
High-level adaptation not only biases the perception of faces, but also causes transient distortions in auditory perception of non-linguistic voice information about gender, identity, and emotional intonation. Here we report a novel auditory aftereffect in perceiving vocal age: age estimates were elevated in age-morphed test voices when preceded by adaptor voices of young speakers (∼20 yrs), compared to old adaptor voices (∼70 yrs). This vocal age aftereffect (VAAE) complements a recently reported face aftereffect (Schweinberger et al., 2010) and points to selective neuronal coding of vocal age. Intriguingly, post-adaptation assessment revealed that VAAEs could persist for minutes after adaptation, although reduced in magnitude. As an important qualification, VAAEs during post-adaptation were modulated by gender congruency between speaker and listener. For both male and female listeners, VAAEs were much reduced for test voices of opposite gender. Overall, this study establishes a new auditory aftereffect in the perception of vocal age. We offer a tentative sociobiological explanation for the differential, gender-dependent recovery from vocal age adaptation. Copyright © 2011 Elsevier B.V. All rights reserved.
Rohmann, Kevin N.; Bass, Andrew H.
2011-01-01
Vertebrates displaying seasonal shifts in reproductive behavior provide the opportunity to investigate bidirectional plasticity in sensory function. The midshipman teleost fish exhibits steroid-dependent plasticity in frequency encoding by eighth nerve auditory afferents. In this study, evoked potentials were recorded in vivo from the saccule, the main auditory division of the inner ear of most teleosts, to test the hypothesis that males and females exhibit seasonal changes in hair cell physiology in relation to seasonal changes in plasma levels of steroids. Thresholds across the predominant frequency range of natural vocalizations were significantly less in both sexes in reproductive compared with non-reproductive conditions, with differences greatest at frequencies corresponding to call upper harmonics. A subset of non-reproductive males exhibiting an intermediate saccular phenotype had elevated testosterone levels, supporting the hypothesis that rising steroid levels induce non-reproductive to reproductive transitions in saccular physiology. We propose that elevated levels of steroids act via long-term (days to weeks) signaling pathways to upregulate ion channel expression generating higher resonant frequencies characteristic of non-mammalian auditory hair cells, thereby lowering acoustic thresholds. PMID:21562181
Klump, Georg M.; Tollin, Daniel J.
2016-01-01
The auditory brainstem response (ABR) is a sound-evoked, non-invasively measured electrical potential representing the sum of neuronal activity in the auditory brainstem and midbrain. ABR peak amplitudes and latencies are widely used in human and animal auditory research and for clinical screening. The binaural interaction component (BIC) of the ABR is the difference between the sum of the monaural ABRs and the ABR obtained with binaural stimulation. The BIC comprises a series of distinct waves, the largest of which (DN1) has been used for evaluating binaural hearing in both normal hearing and hearing-impaired listeners. Based on data from animal and human studies, we discuss the possible anatomical and physiological bases of the BIC (DN1 in particular). The effects of electrode placement and stimulus characteristics on the binaurally evoked ABR are evaluated. We review how inter-aural time and intensity differences affect the BIC and, analyzing these dependencies, draw conclusions about the mechanism underlying the generation of the BIC. Finally, the utility of the BIC for clinical diagnosis is summarized. PMID:27232077
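The BIC defined in the abstract above is a simple waveform difference; a minimal sketch of that computation, assuming three equal-length NumPy arrays sampled on a common time base (the function name and toy waveforms are illustrative, not from the study):

```python
import numpy as np

def binaural_interaction_component(abr_left, abr_right, abr_binaural):
    """BIC = binaural ABR - (left monaural ABR + right monaural ABR).

    A nonzero BIC indicates binaural interaction; its largest
    wave is conventionally labelled DN1.
    """
    predicted_sum = abr_left + abr_right  # linear sum of monaural responses
    return abr_binaural - predicted_sum

# toy example: identical monaural responses, sub-additive binaural response
t = np.linspace(0.0, 0.01, 100)          # 10 ms epoch
mono = np.sin(2 * np.pi * 500 * t)       # placeholder monaural waveform
binaural = 1.6 * mono                    # less than the 2x linear prediction
bic = binaural_interaction_component(mono, mono, binaural)
```

A negative-going BIC, as in this toy case, reflects a binaural response smaller than the sum of the monaural responses.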
Temporal processing dysfunction in schizophrenia.
Carroll, Christine A; Boggs, Jennifer; O'Donnell, Brian F; Shekhar, Anantha; Hetrick, William P
2008-07-01
Schizophrenia may be associated with a fundamental disturbance in the temporal coordination of information processing in the brain, leading to classic symptoms of schizophrenia such as thought disorder and disorganized and contextually inappropriate behavior. Despite the growing interest and centrality of time-dependent conceptualizations of the pathophysiology of schizophrenia, there remains a paucity of research directly examining overt timing performance in the disorder. Accordingly, the present study investigated timing in schizophrenia using a well-established task of time perception. Twenty-three individuals with schizophrenia and 22 non-psychiatric control participants completed a temporal bisection task, which required participants to make temporal judgments about auditory and visually presented durations ranging from 300 to 600 ms. Both schizophrenia and control groups displayed greater visual compared to auditory timing variability, with no difference between groups in the visual modality. However, individuals with schizophrenia exhibited less temporal precision than controls in the perception of auditory durations. These findings correlated with parameter estimates obtained from a quantitative model of time estimation, and provide evidence of a fundamental deficit in temporal auditory precision in schizophrenia.
Subcortical encoding of sound is enhanced in bilinguals and relates to executive function advantages
Krizman, Jennifer; Marian, Viorica; Shook, Anthony; Skoe, Erika; Kraus, Nina
2012-01-01
Bilingualism profoundly affects the brain, yielding functional and structural changes in cortical regions dedicated to language processing and executive function [Crinion J, et al. (2006) Science 312:1537–1540; Kim KHS, et al. (1997) Nature 388:171–174]. Comparatively, musical training, another type of sensory enrichment, translates to expertise in cognitive processing and refined biological processing of sound in both cortical and subcortical structures. Therefore, we asked whether bilingualism can also promote experience-dependent plasticity in subcortical auditory processing. We found that adolescent bilinguals, listening to the speech syllable [da], encoded the stimulus more robustly than age-matched monolinguals. Specifically, bilinguals showed enhanced encoding of the fundamental frequency, a feature known to underlie pitch perception and grouping of auditory objects. This enhancement was associated with executive function advantages. Thus, through experience-related tuning of attention, the bilingual auditory system becomes highly efficient in automatically processing sound. This study provides biological evidence for system-wide neural plasticity in auditory experts that facilitates a tight coupling of sensory and cognitive functions. PMID:22547804
Karimi, D; Mondor, T A; Mann, D D
2008-01-01
The operation of agricultural vehicles is a multitask activity that requires proper distribution of attentional resources. Human factors theories suggest that proper utilization of the operator's sensory capacities under such conditions can improve the operator's performance and reduce the operator's workload. Using a tractor driving simulator, this study investigated whether auditory cues can be used to improve performance of the operator of an agricultural vehicle. Steering of a vehicle was simulated in visual mode (where driving error was shown to the subject using a lightbar) and in auditory mode (where a pair of speakers was used to convey the driving error direction and/or magnitude). A secondary task was also introduced in order to simulate the monitoring of an attached machine. This task included monitoring of two identical displays, which were placed behind the simulator, and responding to them, when needed, using a joystick. This task was also implemented in auditory mode (in which a beep signaled the subject to push the proper button when a response was needed) and in visual mode (in which there was no beep and visual monitoring of the displays was necessary). Two levels of difficulty of the monitoring task were used. Deviation of the simulated vehicle from a desired straight line was used as the measure of performance in the steering task, and reaction time to the displays was used as the measure of performance in the monitoring task. Results of the experiments showed that steering performance was significantly better when steering was a visual task (driving errors were 40% to 60% of the driving errors in auditory mode), although subjective evaluations showed that auditory steering could be easier, depending on the implementation. Performance in the monitoring task was significantly better for auditory implementation (reaction time was approximately 6 times shorter), and this result was strongly supported by subjective ratings.
The majority of the subjects preferred the combination of visual mode for the steering task and auditory mode for the monitoring task.
Aging effects on functional auditory and visual processing using fMRI with variable sensory loading.
Cliff, Michael; Joyce, Dan W; Lamar, Melissa; Dannhauser, Thomas; Tracy, Derek K; Shergill, Sukhwinder S
2013-05-01
Traditionally, studies investigating the functional implications of age-related structural brain alterations have focused on higher cognitive processes; by increasing stimulus load, these studies assess behavioral and neurophysiological performance. In order to understand age-related changes in these higher cognitive processes, it is crucial to examine changes in visual and auditory processes that are the gateways to higher cognitive functions. This study provides evidence for age-related functional decline in visual and auditory processing, and regional alterations in functional brain processing, using non-invasive neuroimaging. Using functional magnetic resonance imaging (fMRI), younger (n=11; mean age=31) and older (n=10; mean age=68) adults were imaged while observing flashing checkerboard images (passive visual stimuli) and hearing word lists (passive auditory stimuli) across varying stimuli presentation rates. Younger adults showed greater overall levels of temporal and occipital cortical activation than older adults for both auditory and visual stimuli. The relative change in activity as a function of stimulus presentation rate showed differences between young and older participants. In visual cortex, the older group showed a decrease in fMRI blood oxygen level dependent (BOLD) signal magnitude as stimulus frequency increased, whereas the younger group showed a linear increase. In auditory cortex, the younger group showed a relative increase as a function of word presentation rate, while older participants showed a relatively stable magnitude of fMRI BOLD response across all rates. When analyzing participants across all ages, only the auditory cortical activation showed a continuous, monotonically decreasing BOLD signal magnitude as a function of age. Our preliminary findings show an age-related decline in demand-related, passive early sensory processing. 
As stimulus demand increases, visual and auditory cortex do not show increases in activity in older compared to younger people. This may negatively impact on the fidelity of information available to higher cognitive processing. Such evidence may inform future studies focused on cognitive decline in aging. Copyright © 2012 Elsevier Ltd. All rights reserved.
Pantev, Christo; Okamoto, Hidehiko; Teismann, Henning
2012-01-01
Over the past 15 years, we have studied plasticity in the human auditory cortex by means of magnetoencephalography (MEG). Two main topics nurtured our curiosity: the effects of musical training on plasticity in the auditory system, and the effects of lateral inhibition. One of our plasticity studies found that listening to notched music for 3 h inhibited the neuronal activity in the auditory cortex that corresponded to the center-frequency of the notch, suggesting suppression of neural activity by lateral inhibition. Subsequent research on this topic found that suppression was notably dependent upon the notch width employed, that the lower notch-edge induced stronger attenuation of neural activity than the higher notch-edge, and that auditory focused attention strengthened the inhibitory networks. Crucially, the overall effects of lateral inhibition on human auditory cortical activity were stronger than the habituation effects. Based on these results we developed a novel treatment strategy for tonal tinnitus-tailor-made notched music training (TMNMT). By notching the music energy spectrum around the individual tinnitus frequency, we intended to attract lateral inhibition to auditory neurons involved in tinnitus perception. So far, the training strategy has been evaluated in two studies. The results of the initial long-term controlled study (12 months) supported the validity of the treatment concept: subjective tinnitus loudness and annoyance were significantly reduced after TMNMT but not when notching spared the tinnitus frequencies. Correspondingly, tinnitus-related auditory evoked fields (AEFs) were significantly reduced after training. The subsequent short-term (5 days) training study indicated that training was more effective in the case of tinnitus frequencies ≤ 8 kHz compared to tinnitus frequencies >8 kHz, and that training should be employed over a long-term in order to induce more persistent effects. 
Further development and evaluation of TMNMT therapy are planned. A goal is to transfer this novel, completely non-invasive and low-cost treatment approach for tonal tinnitus into routine clinical practice. PMID:22754508
Predictive Ensemble Decoding of Acoustical Features Explains Context-Dependent Receptive Fields.
Yildiz, Izzet B; Mesgarani, Nima; Deneve, Sophie
2016-12-07
A primary goal of auditory neuroscience is to identify the sound features extracted and represented by auditory neurons. Linear encoding models, which describe neural responses as a function of the stimulus, have been primarily used for this purpose. Here, we provide theoretical arguments and experimental evidence in support of an alternative approach, based on decoding the stimulus from the neural response. We used a Bayesian normative approach to predict the responses of neurons detecting relevant auditory features, despite ambiguities and noise. We compared the model predictions to recordings from the primary auditory cortex of ferrets and found that: (1) the decoding filters of auditory neurons resemble the filters learned from the statistics of speech sounds; (2) the decoding model captures the dynamics of responses better than a linear encoding model of similar complexity; and (3) the decoding model accounts for the accuracy with which the stimulus is represented in neural activity, whereas the linear encoding model performs very poorly. Most importantly, our model predicts that neuronal responses are fundamentally shaped by "explaining away," a divisive competition between alternative interpretations of the auditory scene. Neural responses in the auditory cortex are dynamic, nonlinear, and hard to predict. Traditionally, encoding models have been used to describe neural responses as a function of the stimulus. However, in addition to external stimulation, neural activity is strongly modulated by the responses of other neurons in the network. We hypothesized that auditory neurons aim to collectively decode their stimulus. In particular, a stimulus feature that is decoded (or explained away) by one neuron is not explained by another. We demonstrated that this novel Bayesian decoding model is better at capturing the dynamic responses of cortical neurons in ferrets.
Whereas the linear encoding model poorly reflects selectivity of neurons, the decoding model can account for the strong nonlinearities observed in neural data. Copyright © 2016 Yildiz et al.
Effect of EEG Referencing Methods on Auditory Mismatch Negativity
Mahajan, Yatin; Peter, Varghese; Sharma, Mridula
2017-01-01
Auditory event-related potentials (ERPs) have consistently been used in the investigation of auditory and cognitive processing in the research and clinical laboratories. There is currently no consensus on the choice of appropriate reference for auditory ERPs. The most commonly used references in auditory ERP research are the mathematically linked-mastoids (LM) and average referencing (AVG). Since LM and AVG referencing procedures do not solve the issue of electrically-neutral reference, Reference Electrode Standardization Technique (REST) was developed to create a neutral reference for EEG recordings. The aim of the current research is to compare the influence of the reference on amplitude and latency of auditory mismatch negativity (MMN) as a function of magnitude of frequency deviance across three commonly used electrode montages (16, 32, and 64-channel) using REST, LM, and AVG reference procedures. The current study was designed to determine if the three reference methods capture the variation in amplitude and latency of MMN with the deviance magnitude. We recorded MMN from 12 normal hearing young adults in an auditory oddball paradigm with 1,000 Hz pure tone as standard and 1,030, 1,100, and 1,200 Hz as small, medium and large frequency deviants, respectively. The EEG data recorded to these sounds was re-referenced using REST, LM, and AVG methods across 16-, 32-, and 64-channel EEG electrode montages. Results revealed that while the latency of MMN decreased with increment in frequency of deviant sounds, no effect of frequency deviance was present for amplitude of MMN. There was no effect of referencing procedure on the experimental effect tested. The amplitude of MMN was largest when the ERP was computed using LM referencing and the REST referencing produced the largest amplitude of MMN for 64-channel montage. There was no effect of electrode-montage on AVG referencing induced ERPs. 
Contrary to our predictions, the results suggest that the auditory MMN elicited as a function of increments in frequency deviance does not depend on the choice of referencing procedure. The results also suggest that auditory ERPs generated using REST referencing are contingent on the electrode arrays more than the AVG referencing. PMID:29066945
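The two conventional re-referencing schemes compared in this study reduce to simple channel arithmetic; a minimal sketch, assuming EEG data as a channels-by-samples NumPy array (REST is omitted because it requires a head-model-derived lead field; function names are illustrative):

```python
import numpy as np

def rereference_linked_mastoids(eeg, left_mastoid_idx, right_mastoid_idx):
    """Linked-mastoid (LM) reference: subtract the mean of the two
    mastoid channels from every channel."""
    ref = (eeg[left_mastoid_idx] + eeg[right_mastoid_idx]) / 2.0
    return eeg - ref

def rereference_average(eeg):
    """Average (AVG) reference: subtract the mean across all channels
    at each time sample."""
    return eeg - eeg.mean(axis=0, keepdims=True)
```

Note that the AVG reference depends on the full electrode montage (each channel's mean includes every other channel), which is one reason montage size can interact with the choice of reference.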
Estimates of auditory risk from outdoor impulse noise. II: Civilian firearms.
Flamme, Gregory A; Wong, Adam; Liebe, Kevin; Lynd, James
2009-01-01
Firearm impulses are common noise exposures in the United States. This study records, describes and analyzes impulses produced outdoors by civilian firearms with respect to the amount of auditory risk they pose to the unprotected listener under various listening conditions. Risk estimates were obtained using three contemporary damage risk criteria (DRC) including a waveform parameter-based approach (peak SPL and B-duration), an energy-based criterion (A-weighted SEL and equivalent continuous level) and a physiological model (AHAAH). Results from these DRC were converted into a number of maximum permissible unprotected exposures to facilitate interpretation. Acoustic characteristics of firearm impulses differed substantially across guns, ammunition, and microphone location. The type of gun, ammunition and the microphone location all significantly affected estimates of auditory risk from firearms. Vast differences in maximum permissible exposures were observed; the rank order of the differences varied with the source of the impulse. Unprotected exposure to firearm noise is not recommended, but people electing to fire a gun without hearing protection should be advised to minimize auditory risk through careful selection of ammunition and shooting environment. Small-caliber guns with long barrels and guns loaded with the least powerful ammunition tend to be associated with the least auditory risk.
Auditory Resting-State Network Connectivity in Tinnitus: A Functional MRI Study
Maudoux, Audrey; Lefebvre, Philippe; Cabay, Jean-Evrard; Demertzi, Athena; Vanhaudenhuyse, Audrey; Laureys, Steven; Soddu, Andrea
2012-01-01
The underlying functional neuroanatomy of tinnitus remains poorly understood. Few studies have focused on functional cerebral connectivity changes in tinnitus patients. The aim of this study was to test if functional MRI “resting-state” connectivity patterns in auditory network differ between tinnitus patients and normal controls. Thirteen chronic tinnitus subjects and fifteen age-matched healthy controls were studied on a 3 tesla MRI. Connectivity was investigated using independent component analysis and an automated component selection approach taking into account the spatial and temporal properties of each component. Connectivity in extra-auditory regions such as brainstem, basal ganglia/NAc, cerebellum, parahippocampal, right prefrontal, parietal, and sensorimotor areas was found to be increased in tinnitus subjects. The right primary auditory cortex, left prefrontal, left fusiform gyrus, and bilateral occipital regions showed a decreased connectivity in tinnitus. These results show that there is a modification of cortical and subcortical functional connectivity in tinnitus encompassing attentional, mnemonic, and emotional networks. Our data corroborate the hypothesized implication of non-auditory regions in tinnitus physiopathology and suggest that various regions of the brain seem involved in the persistent awareness of the phenomenon as well as in the development of the associated distress leading to disabling chronic tinnitus. PMID:22574141
Xia, Shuang; Song, TianBin; Che, Jing; Li, Qiang; Chai, Chao; Zheng, Meizhu; Shen, Wen
2017-01-01
Early hearing deprivation could affect the development of auditory, language, and vision ability. Insufficient or no stimulation of the auditory cortex during the sensitive periods of plasticity could affect the function of hearing, language, and vision development. Twenty-three infants with congenital severe sensorineural hearing loss (CSSHL) and 17 age- and sex-matched normal-hearing subjects were recruited. The amplitude of low frequency fluctuations (ALFF) and regional homogeneity (ReHo) of the auditory, language, and vision related brain areas were compared between deaf infants and normal subjects. Compared with normal hearing subjects, decreased ALFF and ReHo were observed in auditory and language-related cortex. Increased ALFF and ReHo were observed in vision related cortex, which suggests that hearing and language function were impaired and vision function was enhanced due to the loss of hearing. ALFF of left Brodmann area 45 (BA45) was negatively correlated with deaf duration in infants with CSSHL. ALFF of right BA39 was positively correlated with deaf duration in infants with CSSHL. In conclusion, ALFF and ReHo can reflect the abnormal brain function in language, auditory, and visual information processing in infants with CSSHL. This demonstrates that the development of auditory, language, and vision processing function has been affected by congenital severe sensorineural hearing loss before 4 years of age.
Light Effects on Behavioural Performance Depend on the Individual State of Vigilance
Correa, Ángel; Barba, Antonio; Padilla, Francisca
2016-01-01
Research has shown that exposure to bright white light or blue-enriched light enhances alertness, but this effect is not consistently observed in tasks demanding high-level cognition (e.g., Sustained Attention to Response Task—SART, which measures inhibitory control). Individual differences in sensitivity to light effects might be mediated by variations in the basal level of arousal. We tested this hypothesis by measuring the participants’ behavioural state of vigilance before light exposure, through the Psychomotor Vigilance Task. Then we compared the effects of a blue-enriched vs. dim light at nighttime on the performance of the auditory SART, by controlling for individual differences in basal arousal. The results replicated the alerting effects of blue-enriched light, as indexed by lower values of both proximal temperature and distal-proximal gradient. The main finding was that lighting effects on SART performance were highly variable across individuals and depended on their prior state of vigilance. Specifically, participants with higher levels of basal vigilance before light exposure benefited most from blue-enriched lighting, responding faster in the SART. These results highlight the importance of considering basal vigilance to define the boundary conditions of light effects on cognitive performance. Our study adds to current research delineating the complex and reciprocal interactions between lighting effects, arousal, cognitive task demands and behavioural performance. PMID:27820822
Alderson-Day, Ben; Diederen, Kelly; Fernyhough, Charles; Ford, Judith M.; Horga, Guillermo; Margulies, Daniel S.; McCarthy-Jones, Simon; Northoff, Georg; Shine, James M.; Turner, Jessica; van de Ven, Vincent; van Lutterveld, Remko; Waters, Flavie; Jardri, Renaud
2016-01-01
In recent years, there has been increasing interest in the potential for alterations to the brain’s resting-state networks (RSNs) to explain various kinds of psychopathology. RSNs provide an intriguing new explanatory framework for hallucinations, which can occur in different modalities and population groups, but which remain poorly understood. This collaboration from the International Consortium on Hallucination Research (ICHR) reports on the evidence linking resting-state alterations to auditory hallucinations (AH) and provides a critical appraisal of the methodological approaches used in this area. In the report, we describe findings from resting connectivity fMRI in AH (in schizophrenia and nonclinical individuals) and compare them with findings from neurophysiological research, structural MRI, and research on visual hallucinations (VH). In AH, various studies show resting connectivity differences in left-hemisphere auditory and language regions, as well as atypical interaction of the default mode network and RSNs linked to cognitive control and salience. As the latter are also evident in studies of VH, this points to a domain-general mechanism for hallucinations alongside modality-specific changes to RSNs in different sensory regions. However, we also observed high methodological heterogeneity in the current literature, affecting the ability to make clear comparisons between studies. To address this, we provide some methodological recommendations and options for future research on the resting state and hallucinations. PMID:27280452
Cell-intrinsic mechanisms of temperature compensation in a grasshopper sensory receptor neuron
Roemschied, Frederic A; Eberhard, Monika JB; Schleimer, Jan-Hendrik; Ronacher, Bernhard; Schreiber, Susanne
2014-01-01
Changes in temperature affect biochemical reaction rates and, consequently, neural processing. The nervous systems of poikilothermic animals must have evolved mechanisms enabling them to retain their functionality under varying temperatures. Auditory receptor neurons of grasshoppers respond to sound in a surprisingly temperature-compensated manner: firing rates depend moderately on temperature, with average Q10 values around 1.5. Analysis of conductance-based neuron models reveals that temperature compensation of spike generation can be achieved solely relying on cell-intrinsic processes and despite a strong dependence of ion conductances on temperature. Remarkably, this type of temperature compensation need not come at an additional metabolic cost of spike generation. Firing rate-based information transfer is likely to increase with temperature and we derive predictions for an optimal temperature dependence of the tympanal transduction process fostering temperature compensation. The example of auditory receptor neurons demonstrates how neurons may exploit single-cell mechanisms to cope with multiple constraints in parallel. DOI: http://dx.doi.org/10.7554/eLife.02078.001 PMID:24843016
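The Q10 values reported above quantify how firing rate scales with temperature; a minimal sketch of the standard Q10 relation, rate(T) = rate_ref * Q10^((T - T_ref)/10) (the function name and example numbers are illustrative, not taken from the study):

```python
def q10_scaled_rate(rate_ref, temp_ref_c, temp_c, q10=1.5):
    """Scale a firing rate from a reference temperature to temp_c
    using the Q10 relation: rate(T) = rate_ref * q10 ** ((T - T_ref) / 10).

    q10 = 1.0 means perfect temperature compensation; the grasshopper
    receptors described above average around q10 = 1.5.
    """
    return rate_ref * q10 ** ((temp_c - temp_ref_c) / 10.0)

# with Q10 = 1.5, a 10 degree C warming multiplies the rate by 1.5
rate_40 = q10_scaled_rate(100.0, temp_ref_c=30.0, temp_c=40.0)
rate_20 = q10_scaled_rate(100.0, temp_ref_c=30.0, temp_c=20.0)
```

For comparison, many voltage-gated ion conductances have Q10 values of 2 to 3, which is why a whole-cell firing-rate Q10 near 1.5 indicates substantial compensation.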
Todd, N P M; Paillard, A C; Kluk, K; Whittle, E; Colebatch, J G
2014-06-01
Todd et al. (2014) have recently demonstrated the presence of vestibular dependent changes both in the morphology and in the intensity dependence of auditory evoked potentials (AEPs) when passing through the vestibular threshold as determined by vestibular evoked myogenic potentials (VEMPs). In this paper we extend this work by comparing left vs. right ear stimulation and by conducting a source analysis of the resulting evoked potentials of short and long latency. Ten healthy, right-handed subjects were recruited and evoked potentials were recorded to both left- and right-ear sound stimulation, above and below vestibular threshold. Below VEMP threshold, typical AEPs were recorded, consisting of mid-latency (MLR) waves Na and Pa followed by long latency AEPs (LAEPs) N1 and P2. In the supra-threshold condition, the expected changes in morphology were observed, consisting of: (1) short-latency vestibular evoked potentials (VsEPs) which have no auditory correlate, i.e. the ocular VEMP (OVEMP) and inion response related potentials; (2) a later deflection, labelled N42/P52, followed by the LAEPs N1 and P2. Statistical analysis of the vestibular dependent responses indicated a contralateral effect for inion related short-latency responses and a left-ear/right-hemisphere advantage for the long-latency responses. Source analysis indicated that the short-latency effects may be mediated by a contralateral projection to left cerebellum, while the long-latency effects were mediated by a contralateral projection to right cingulate cortex. In addition we found evidence of a possible vestibular contribution to the auditory T-complex in radial temporal lobe sources. These last results raise the possibility that acoustic activation of the otolith organs could potentially contribute to auditory processing. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
Integration and segregation in auditory streaming
NASA Astrophysics Data System (ADS)
Almonte, Felix; Jirsa, Viktor K.; Large, Edward W.; Tuller, Betty
2005-12-01
We aim to capture the perceptual dynamics of auditory streaming using a neurally inspired model of auditory processing. Traditional approaches view streaming as a competition of streams, realized within a tonotopically organized neural network. In contrast, we view streaming to be a dynamic integration process which resides at locations other than the sensory specific neural subsystems. This process finds its realization in the synchronization of neural ensembles or in the existence of informational convergence zones. Our approach uses two interacting dynamical systems, in which the first system responds to incoming acoustic stimuli and transforms them into a spatiotemporal neural field dynamics. The second system is a classification system coupled to the neural field and evolves to a stationary state. These states are identified with a single perceptual stream or multiple streams. Several results in human perception are modelled including temporal coherence and fission boundaries [L.P.A.S. van Noorden, Temporal coherence in the perception of tone sequences, Ph.D. Thesis, Eindhoven University of Technology, The Netherlands, 1975], and crossing of motions [A.S. Bregman, Auditory Scene Analysis: The Perceptual Organization of Sound, MIT Press, 1990]. Our model predicts phenomena such as the existence of two streams with the same pitch, which cannot be explained by the traditional stream competition models. An experimental study is performed to provide proof of existence of this phenomenon. The model elucidates possible mechanisms that may underlie perceptual phenomena.
Wilson, Uzma S.; Kaf, Wafaa A.; Danesh, Ali A.; Lichtenhan, Jeffery T.
2016-01-01
Objective: To determine the clinical utility of narrow-band chirp evoked 40-Hz sinusoidal auditory steady state responses (s-ASSR) in the assessment of low-frequency hearing in noisy participants. Design: Tone bursts and narrow-band chirps were used to evoke auditory brainstem responses (tb-ABR) and 40-Hz s-ASSR thresholds, respectively, with the Kalman-weighted filtering technique, and these were compared to behavioral thresholds at 500, 2000, and 4000 Hz. A repeated-measures ANOVA with post-hoc t-tests and simple regression analyses were performed for each of the three stimulus frequencies. Study Sample: Thirty young adults aged 18–25 with normal hearing participated in this study. Results: When 4000 equivalent response averages were used, mean s-ASSR thresholds at 500, 2000, and 4000 Hz were 17–22 dB lower (better) than when 2000 averages were used. Mean tb-ABR thresholds were lower by 11–15 dB at 2000 and 4000 Hz when twice as many equivalent response averages were used, while mean tb-ABR thresholds at 500 Hz were indistinguishable regardless of additional response averaging. Conclusion: Narrow-band chirp evoked 40-Hz s-ASSR requires a ~15 dB smaller correction factor than tb-ABR for estimating low-frequency auditory thresholds in noisy participants when adequate response averaging is used. PMID:26795555
Núñez-Batalla, Faustino; Noriega-Iglesias, Sabel; Guntín-García, Maite; Carro-Fernández, Pilar; Llorente-Pendás, José Luis
2016-01-01
Conventional audiometry is the gold standard for quantifying and describing hearing loss. Alternative methods become necessary to assess subjects who are too young to respond reliably. Auditory evoked potentials constitute the most widely used method for determining hearing thresholds objectively; however, this stimulus is not frequency specific. The advent of the auditory steady-state response (ASSR) leads to more specific threshold determination. The current study describes and compares ASSR, auditory brainstem response (ABR) and conventional behavioural tone audiometry thresholds in a group of infants with various degrees of hearing loss. A comparison was made between ASSR, ABR and behavioural hearing thresholds in 35 infants detected in the neonatal hearing screening program. Mean difference scores (±SD) between ABR and high-frequency ABR thresholds were 11.2 dB (±13) and 10.2 dB (±11). Pearson correlations between the ASSR and audiometry thresholds were 0.80 and 0.91 (500 Hz); 0.84 and 0.82 (1000 Hz); 0.85 and 0.84 (2000 Hz); and 0.83 and 0.82 (4000 Hz). The ASSR technique is a valuable extension of the clinical test battery for hearing-impaired children. Copyright © 2015 Elsevier España, S.L.U. and Sociedad Española de Otorrinolaringología y Cirugía de Cabeza y Cuello. All rights reserved.
Attention-driven auditory cortex short-term plasticity helps segregate relevant sounds from noise.
Ahveninen, Jyrki; Hämäläinen, Matti; Jääskeläinen, Iiro P; Ahlfors, Seppo P; Huang, Samantha; Lin, Fa-Hsuan; Raij, Tommi; Sams, Mikko; Vasios, Christos E; Belliveau, John W
2011-03-08
How can we concentrate on relevant sounds in noisy environments? A "gain model" suggests that auditory attention simply amplifies relevant and suppresses irrelevant afferent inputs. However, it is unclear whether this suffices when attended and ignored features overlap to stimulate the same neuronal receptive fields. A "tuning model" suggests that, in addition to gain, attention modulates feature selectivity of auditory neurons. We recorded magnetoencephalography, EEG, and functional MRI (fMRI) while subjects attended to tones delivered to one ear and ignored opposite-ear inputs. The attended ear was switched every 30 s to quantify how quickly the effects evolve. To produce overlapping inputs, the tones were presented alone vs. during white-noise masking notch-filtered ±1/6 octaves around the tone center frequencies. Amplitude modulation (39 vs. 41 Hz in opposite ears) was applied for "frequency tagging" of attention effects on maskers. Noise masking reduced early (50-150 ms; N1) auditory responses to unattended tones. In support of the tuning model, selective attention canceled out this attenuating effect but did not modulate the gain of 50-150 ms activity to nonmasked tones or steady-state responses to the maskers themselves. These tuning effects originated at nonprimary auditory cortices, purportedly occupied by neurons that, without attention, have wider frequency tuning than ±1/6 octaves. The attentional tuning evolved rapidly, during the first few seconds after attention switching, and correlated with behavioral discrimination performance. In conclusion, a simple gain model alone cannot explain auditory selective attention. In nonprimary auditory cortices, attention-driven short-term plasticity retunes neurons to segregate relevant sounds from noise.
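The "frequency tagging" manipulation described above — amplitude-modulating each ear's input at a distinct rate so its steady-state response can be tracked at that rate in the recording — can be sketched as follows. The carrier frequency, sampling rate, and modulation depth below are illustrative assumptions, not the study's stimulus parameters.

```python
import numpy as np

def am_tone(carrier_hz, mod_hz, dur_s, fs=8000, depth=1.0):
    """Amplitude-modulated tone; the modulation rate 'tags' the stimulus so
    its steady-state response appears at mod_hz in the EEG/MEG spectrum."""
    t = np.arange(int(dur_s * fs)) / fs
    envelope = 1.0 + depth * np.sin(2 * np.pi * mod_hz * t)
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

# Opposite-ear tags as in the study: 39 Hz to one ear, 41 Hz to the other.
left_ear = am_tone(1000, 39, 1.0)
right_ear = am_tone(1000, 41, 1.0)
```

Because the two ears carry different tags, responses to each masker can be read out independently from spectral peaks at 39 and 41 Hz.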
Moradi, Shahram; Wahlin, Anna; Hällgren, Mathias; Rönnberg, Jerker; Lidestam, Björn
2017-01-01
This study aimed to examine the efficacy and maintenance of short-term (one-session) gated audiovisual speech training for improving auditory sentence identification in noise in experienced elderly hearing-aid users. Twenty-five hearing aid users (16 men and 9 women), with an average age of 70.8 years, were randomly divided into an experimental (audiovisual training, n = 14) and a control (auditory training, n = 11) group. Participants underwent gated speech identification tasks comprising Swedish consonants and words presented at 65 dB sound pressure level with a 0 dB signal-to-noise ratio (steady-state broadband noise), in audiovisual or auditory-only training conditions. The Hearing-in-Noise Test was employed to measure participants' auditory sentence identification in noise before the training (pre-test), promptly after training (post-test), and 1 month after training (one-month follow-up). The results showed that audiovisual training improved auditory sentence identification in noise promptly after the training (post-test vs. pre-test scores); furthermore, this improvement was maintained 1 month after the training (one-month follow-up vs. pre-test scores). Such improvement was not observed in the control group, either promptly after the training or at the one-month follow-up. However, neither a significant between-groups difference nor a group-by-session interaction was observed. Audiovisual training may be considered in aural rehabilitation of hearing aid users to improve listening capabilities in noisy conditions. However, the lack of a significant between-groups effect (audiovisual vs. auditory) or of a group-by-session interaction calls for further research.
Zhang, G-Y; Yang, M; Liu, B; Huang, Z-C; Li, J; Chen, J-Y; Chen, H; Zhang, P-P; Liu, L-J; Wang, J; Teng, G-J
2016-01-28
Previous studies often report that early auditory deprivation or congenital deafness contributes to cross-modal reorganization in the auditory-deprived cortex, and this cross-modal reorganization limits clinical benefit from cochlear prosthetics. However, there are inconsistencies among study results on cortical reorganization in subjects with long-term unilateral sensorineural hearing loss (USNHL). It is also unclear whether a similar cross-modal plasticity of the auditory cortex exists for acquired monaural deafness as for early or congenital deafness. To address this issue, we constructed directional brain functional networks based on entropy connectivity of resting-state functional MRI and examined changes in these networks. Thirty-four long-term USNHL individuals and seventeen normally hearing individuals participated in the study, and all USNHL patients had acquired deafness. We found that certain brain regions of the sensorimotor and visual networks presented enhanced synchronous output entropy connectivity with the left primary auditory cortex in the left long-term USNHL individuals as compared with normally hearing individuals. In particular, the left USNHL group showed more significant changes in entropy connectivity than the right USNHL group; no significant plastic changes were observed in the right USNHL group. Our results indicate that the left primary auditory cortex (the non-auditory-deprived cortex) in patients with left USNHL has been reorganized by visual and sensorimotor modalities through cross-modal plasticity. Furthermore, this cross-modal reorganization also alters the directional brain functional networks. Auditory deprivation from the left or right side generates different influences on the human brain. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
Musical experience sharpens human cochlear tuning.
Bidelman, Gavin M; Nelms, Caitlin; Bhagat, Shaum P
2016-05-01
The mammalian cochlea functions as a filter bank that performs a spectral, Fourier-like decomposition on the acoustic signal. While tuning can be compromised (e.g., broadened with hearing impairment), whether or not human cochlear frequency resolution can be sharpened through experiential factors (e.g., training or learning) has not yet been established. Previous studies have demonstrated sharper psychophysical tuning curves in trained musicians compared to nonmusicians, implying superior peripheral tuning. However, these findings are based on perceptual masking paradigms, and reflect engagement of the entire auditory system rather than cochlear tuning per se. Here, by directly mapping physiological tuning curves from stimulus frequency otoacoustic emissions (SFOAEs)-cochlear emitted sounds-we show that estimates of human cochlear tuning in a high-frequency cochlear region (4 kHz) are further sharpened (by a factor of 1.5×) in musicians and improve with the number of years of their auditory training. These findings were corroborated by measurements of psychophysical tuning curves (PTCs) derived via simultaneous masking, which similarly showed sharper tuning in musicians. Comparisons between SFOAEs and PTCs revealed closer correspondence between physiological and behavioral curves in musicians, indicating that tuning is also more consistent between different levels of auditory processing in trained ears. Our findings demonstrate an experience-dependent enhancement in the resolving power of the cochlear sensory epithelium and the spectral resolution of human hearing, and provide a peripheral account for the auditory perceptual benefits observed in musicians. Both local and feedback (e.g., medial olivocochlear efferent) mechanisms are discussed as potential substrates for experience-dependent tuning. Copyright © 2016 Elsevier B.V. All rights reserved.
Dependence of auditory spatial updating on vestibular, proprioceptive, and efference copy signals
Genzel, Daria; Firzlaff, Uwe; Wiegrebe, Lutz
2016-01-01
Humans localize sounds by comparing inputs across the two ears, resulting in a head-centered representation of sound-source position. When the head moves, information about head movement must be combined with the head-centered estimate to correctly update the world-centered sound-source position. Spatial updating has been extensively studied in the visual system, but less is known about how head movement signals interact with binaural information during auditory spatial updating. In the current experiments, listeners compared the world-centered azimuthal position of two sound sources presented before and after a head rotation that depended on condition. In the active condition, subjects rotated their head by ∼35° to the left or right, following a pretrained trajectory. In the passive condition, subjects were rotated along the same trajectory in a rotating chair. In the cancellation condition, subjects rotated their head as in the active condition, but the chair was counter-rotated on the basis of head-tracking data such that the head effectively remained fixed in space while the body rotated beneath it. Subjects updated most accurately in the passive condition but erred in the active and cancellation conditions. Performance is interpreted as reflecting the accuracy of perceived head rotation across conditions, which is modeled as a linear combination of proprioceptive/efference copy signals and vestibular signals. Resulting weights suggest that auditory updating is dominated by vestibular signals but with significant contributions from proprioception/efference copy. Overall, results shed light on the interplay of sensory and motor signals that determine the accuracy of auditory spatial updating. PMID:27169504
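The linear-combination model used to interpret these results can be sketched in a few lines. The weights below are placeholders, not the fitted values from the study; the point is only the form of the model, with perceived head rotation as a weighted sum of the vestibular and proprioceptive/efference-copy estimates.

```python
def perceived_rotation(vestibular_deg, proprio_deg, w_vest=0.7, w_prop=0.3):
    """Perceived head rotation as a weighted sum of two head-movement cues.
    Weights are illustrative; the study fits them to subjects' errors."""
    return w_vest * vestibular_deg + w_prop * proprio_deg

# The three conditions, with a true 35-degree head-in-space rotation:
active = perceived_rotation(35.0, 35.0)       # head turns: both cues agree
passive = perceived_rotation(35.0, 0.0)       # chair turns: vestibular only
cancellation = perceived_rotation(0.0, 35.0)  # head fixed in space: neck cues only
```

Under such a model, updating is most accurate where the available cues' weighted sum comes closest to the true rotation, which is how the reported vestibular dominance is inferred from the pattern of errors across conditions.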
Statistical context shapes stimulus-specific adaptation in human auditory cortex
Henry, Molly J.; Fromboluti, Elisa Kim; McAuley, J. Devin
2015-01-01
Stimulus-specific adaptation is the phenomenon whereby neural response magnitude decreases with repeated stimulation. Inconsistencies between recent nonhuman animal recordings and computational modeling suggest dynamic influences on stimulus-specific adaptation. The present human electroencephalography (EEG) study investigates the potential role of statistical context in dynamically modulating stimulus-specific adaptation by examining the auditory cortex-generated N1 and P2 components. As in previous studies of stimulus-specific adaptation, listeners were presented with oddball sequences in which the presentation of a repeated tone was infrequently interrupted by rare spectral changes taking on three different magnitudes. Critically, the statistical context varied with respect to the probability of small versus large spectral changes within oddball sequences (half of the time a small change was most probable; in the other half a large change was most probable). We observed larger N1 and P2 amplitudes (i.e., release from adaptation) for all spectral changes in the small-change compared with the large-change statistical context. The increase in response magnitude also held for responses to tones presented with high probability, indicating that statistical adaptation can overrule stimulus probability per se in its influence on neural responses. Computational modeling showed that the degree of coadaptation in auditory cortex changed depending on the statistical context, which in turn affected stimulus-specific adaptation. Thus the present data demonstrate that stimulus-specific adaptation in human auditory cortex critically depends on statistical context. Finally, the present results challenge the implicit assumption of stationarity of neural response magnitudes that governs the practice of isolating established deviant-detection responses such as the mismatch negativity. PMID:25652920
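The statistical-context manipulation — oddball sequences in which the same deviants occur, but the probability that a deviant is small versus large differs between contexts — can be sketched as a simple generator. The probabilities below are illustrative placeholders, not the study's exact values.

```python
import random

def oddball_sequence(n, deviant_p=0.15, small_given_deviant_p=0.75, seed=0):
    """Generate an oddball sequence of 'standard' tones with rare deviants.
    small_given_deviant_p sets the statistical context: in a 'small-change'
    context most deviants are small; in a 'large-change' context, use a low
    value so most deviants are large."""
    rng = random.Random(seed)
    seq = []
    for _ in range(n):
        if rng.random() < deviant_p:
            seq.append('small' if rng.random() < small_given_deviant_p else 'large')
        else:
            seq.append('standard')
    return seq
```

Crossing identical deviants with different contexts (e.g. `small_given_deviant_p=0.75` vs `0.25`) is what lets the study separate the effect of statistical context from stimulus probability per se.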
Impairing the useful field of view in natural scenes: Tunnel vision versus general interference.
Ringer, Ryan V; Throneburg, Zachary; Johnson, Aaron P; Kramer, Arthur F; Loschky, Lester C
2016-01-01
A fundamental issue in visual attention is the relationship between the useful field of view (UFOV), the region of visual space where information is encoded within a single fixation, and eccentricity. A common assumption is that impairing attentional resources reduces the size of the UFOV (i.e., tunnel vision). However, most research has not accounted for eccentricity-dependent changes in spatial resolution, potentially conflating fixed visual properties with flexible changes in visual attention. Williams (1988, 1989) argued that foveal loads are necessary to reduce the size of the UFOV, producing tunnel vision. Without a foveal load, it is argued that the attentional decrement is constant across the visual field (i.e., general interference). However, other research asserts that auditory working memory (WM) loads produce tunnel vision. To date, foveal versus auditory WM loads have not been compared to determine if they differentially change the size of the UFOV. In two experiments, we tested the effects of a foveal (rotated L vs. T discrimination) task and an auditory WM (N-back) task on an extrafoveal (Gabor) discrimination task. Gabor patches were scaled for size and processing time to produce equal performance across the visual field under single-task conditions, thus removing the confound of eccentricity-dependent differences in visual sensitivity. The results showed that although both foveal and auditory loads reduced Gabor orientation sensitivity, only the foveal load interacted with retinal eccentricity to produce tunnel vision, clearly demonstrating task-specific changes to the form of the UFOV. This has theoretical implications for understanding the UFOV.
Mina, Faten; Attina, Virginie; Duroc, Yvan; Veuillet, Evelyne; Truy, Eric; Thai-Van, Hung
2017-01-01
Auditory steady state responses (ASSRs) in cochlear implant (CI) patients are contaminated by the spread of a continuous CI electrical stimulation artifact. The aim of this work was to model the electrophysiological mixture of the CI artifact and the corresponding evoked potentials on scalp electrodes in order to evaluate the performance of denoising algorithms in eliminating the CI artifact in a controlled environment. The basis of the proposed computational framework is a neural mass model representing the nodes of the auditory pathways. Six main contributors to auditory evoked potentials from the cochlear level and up to the auditory cortex were taken into consideration. The simulated dynamics were then projected into a 3-layer realistic head model. 32-channel scalp recordings of the CI artifact-response were then generated by solving the electromagnetic forward problem. As an application, the framework’s simulated 32-channel datasets were used to compare the performance of 4 commonly used Independent Component Analysis (ICA) algorithms: infomax, extended infomax, jade and fastICA in eliminating the CI artifact. As expected, two major components were detectable in the simulated datasets, a low frequency component at the modulation frequency and a pulsatile high frequency component related to the stimulation frequency. The first can be attributed to the phase-locked ASSR and the second to the stimulation artifact. Among the ICA algorithms tested, simulations showed that infomax was the most efficient and reliable in denoising the CI artifact-response mixture. Denoising algorithms can induce undesirable deformation of the signal of interest in real CI patient recordings. The proposed framework is a valuable tool for evaluating these algorithms in a controllable environment ahead of experimental or clinical applications. PMID:28350887
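A toy version of the evaluation described above — mix a slow response-like source with a pulsatile artifact-like source into two channels, then ask ICA to unmix them — might look like the following, using scikit-learn's FastICA (one of the four algorithms compared) on entirely synthetic signals; none of the numbers correspond to the paper's simulated datasets.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
fs = 1000
t = np.arange(2 * fs) / fs

# Two synthetic sources: a 40 Hz response-like sinusoid and a sparse
# pulsatile train (crude stand-ins for the ASSR and the CI artifact).
response = np.sin(2 * np.pi * 40 * t)
artifact = (np.sin(2 * np.pi * 250 * t) > 0.99).astype(float)
S = np.c_[response, artifact]

A = np.array([[1.0, 0.5], [0.7, 1.2]])  # arbitrary 2x2 mixing matrix
X = S @ A.T + 0.01 * rng.standard_normal(S.shape)

ica = FastICA(n_components=2, random_state=0)
estimated = ica.fit_transform(X)  # recovered sources (sign/scale arbitrary)
```

In the full framework the "channels" come from a realistic head model rather than a fixed mixing matrix, and performance is judged by how cleanly the artifact component can be removed without distorting the response.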
NASA Astrophysics Data System (ADS)
Deprez, Hanne; Gransier, Robin; Hofmann, Michael; van Wieringen, Astrid; Wouters, Jan; Moonen, Marc
2018-02-01
Objective. Electrically evoked auditory steady-state responses (EASSRs) are potentially useful for objective cochlear implant (CI) fitting and follow-up of the auditory maturation in infants and children with a CI. EASSRs are recorded in the electro-encephalogram (EEG) in response to electrical stimulation with continuous pulse trains, and are distorted by significant CI artifacts related to this electrical stimulation. The aim of this study is to evaluate a CI artifacts attenuation method based on independent component analysis (ICA) for three EASSR datasets. Approach. ICA has often been used to remove CI artifacts from the EEG to record transient auditory responses, such as cortical evoked auditory potentials. Independent components (ICs) corresponding to CI artifacts are then often manually identified. In this study, an ICA based CI artifacts attenuation method was developed and evaluated for EASSR measurements with varying CI artifacts and EASSR characteristics. Artifactual ICs were automatically identified based on their spectrum. Main results. For 40 Hz amplitude modulation (AM) stimulation at comfort level, in high SNR recordings, ICA succeeded in removing CI artifacts from all recording channels, without distorting the EASSR. For lower SNR recordings, with 40 Hz AM stimulation at lower levels, or 90 Hz AM stimulation, ICA either distorted the EASSR or could not remove all CI artifacts in most subjects, except for two of the seven subjects tested with low level 40 Hz AM stimulation. Noise levels were reduced after ICA was applied, and up to 29 ICs were rejected, suggesting poor ICA separation quality. Significance. We hypothesize that ICA is capable of separating CI artifacts from the EASSR in cases where the contralateral hemisphere is EASSR-dominated. For small EASSRs or large CI artifact amplitudes, ICA separation quality is insufficient to ensure complete CI artifacts attenuation without EASSR distortion.
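The automatic identification step — flagging ICs as artifactual "based on their spectrum" — might, in minimal form, look like the sketch below: flag any component whose power concentrates near a stimulation-related frequency. The tolerance and power-ratio thresholds are assumptions for illustration, not the paper's criteria.

```python
import numpy as np

def flag_artifact_ics(ics, fs, stim_hz, tol_hz=2.0, power_ratio=0.5):
    """Return indices of ICs (rows of `ics`) whose spectrum is dominated by
    power within tol_hz of stim_hz — a crude spectrum-based artifact flag."""
    n = ics.shape[1]
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    band = np.abs(freqs - stim_hz) < tol_hz
    flagged = []
    for i, ic in enumerate(ics):
        power = np.abs(np.fft.rfft(ic)) ** 2
        if power[band].sum() / power.sum() > power_ratio:
            flagged.append(i)
    return flagged
```

Flagged components would then be zeroed before back-projecting the remaining ICs to the scalp channels, which is the usual ICA artifact-attenuation workflow.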
Park, Jangho; Chung, Seockhoon; Lee, Jiho; Sung, Joo Hyun; Cho, Seung Woo; Sim, Chang Sun
2017-04-12
Excessive noise affects human health and interferes with daily activities. Although environmental noise may not directly cause mental illness, it may accelerate and intensify the development of latent mental disorders. Noise sensitivity (NS) is considered a moderator of non-auditory noise effects. In the present study, we aimed to assess whether NS is associated with non-auditory effects. We recruited a community sample of 1836 residents residing in Ulsan and Seoul, South Korea. From July to November 2015, participants were interviewed regarding their demographic characteristics, socioeconomic status, medical history, and NS. The non-auditory effects of noise were assessed using the Center for Epidemiologic Studies Depression scale, Insomnia Severity Index, State Trait Anxiety Inventory state subscale, and Stress Response Inventory-Modified Form. Individual noise levels were recorded from noise maps. A three-model multivariate logistic regression analysis was performed to identify factors that might affect psychiatric illnesses. Participants ranged in age from 19 to 91 years (mean: 47.0 ± 16.1 years), and 37.9% (n = 696) were male. Participants with high NS were more likely to have been diagnosed with diabetes and hyperlipidemia and to use psychiatric medication. The multivariable analysis indicated that even after adjusting for noise-related variables, sociodemographic factors, medical illness, and duration of residence, subjects in the high NS group were more than 2 times more likely to experience depression and insomnia and 1.9 times more likely to have anxiety, compared with those in the low NS group. Noise exposure level was not identified as an explanatory variable. NS increases susceptibility and hence moderates the reactions of individuals to noise. NS, rather than noise itself, is associated with an elevated susceptibility to non-auditory effects.
22q11.2 Deletion Syndrome Is Associated With Impaired Auditory Steady-State Gamma Response
Pellegrino, Giovanni; Birknow, Michelle Rosgaard; Kjær, Trine Nørgaard; Baaré, William Frans Christiaan; Didriksen, Michael; Olsen, Line; Werge, Thomas; Mørup, Morten; Siebner, Hartwig Roman
2018-01-01
Abstract Background The 22q11.2 deletion syndrome confers a markedly increased risk for schizophrenia. 22q11.2 deletion carriers without manifest psychotic disorder offer the possibility to identify functional abnormalities that precede clinical onset. Since schizophrenia is associated with a reduced cortical gamma response to auditory stimulation at 40 Hz, we hypothesized that the 40 Hz auditory steady-state response (ASSR) may be attenuated in nonpsychotic individuals with a 22q11.2 deletion. Methods Eighteen young nonpsychotic 22q11.2 deletion carriers and a control group of 27 noncarriers with comparable age range (12–25 years) and sex ratio underwent 128-channel EEG. We recorded the cortical ASSR to a 40 Hz train of clicks, given either at a regular inter-stimulus interval of 25 ms or at irregular intervals jittered between 11 and 37 ms. Results Healthy noncarriers expressed a stable ASSR to regular but not to irregular 40 Hz click stimulation. Both gamma power and inter-trial phase coherence of the ASSR were markedly reduced in the 22q11.2 deletion group. The ability to phase-lock cortical gamma activity to regular auditory 40 Hz stimulation correlated with the individual expression of negative symptoms in deletion carriers (ρ = −0.487, P = .041). Conclusions Nonpsychotic 22q11.2 deletion carriers lack efficient phase locking of evoked gamma activity to regular 40 Hz auditory stimulation. This abnormality indicates a dysfunction of fast intracortical oscillatory processing in the gamma band. Since the ASSR was attenuated in nonpsychotic deletion carriers, ASSR deficiency may constitute a premorbid risk marker of schizophrenia. PMID:28521049
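Inter-trial phase coherence, one of the two ASSR measures reported above, is the magnitude of the average unit phase vector across trials at the stimulation frequency: 1 for perfect phase locking, near 0 for random phases. A minimal sketch follows, with a single-frequency projection standing in for a proper time-frequency decomposition.

```python
import numpy as np

def itpc(trials, fs, freq_hz):
    """Inter-trial phase coherence at freq_hz.
    trials: array of shape (n_trials, n_samples)."""
    t = np.arange(trials.shape[1]) / fs
    basis = np.exp(-2j * np.pi * freq_hz * t)
    phases = np.angle(trials @ basis)          # one phase estimate per trial
    return float(np.abs(np.mean(np.exp(1j * phases))))
```

A population that phase-locks to the regular 40 Hz click train yields ITPC near 1; the deletion group's reduced ITPC indicates that trial-to-trial phase alignment to the stimulus is lost, independent of raw gamma power.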
Heo, Jeong; Baek, Hyun Jae; Hong, Seunghyeok; Chang, Min Hye; Lee, Jeong Su; Park, Kwang Suk
2017-05-01
Patients with total locked-in syndrome are conscious; however, they cannot express themselves because most of their voluntary muscles are paralyzed, and many of these patients have lost their eyesight. To improve the quality of life of these patients, there is an increasing need for communication-supporting technologies that leverage the remaining senses of the patient along with physiological signals. The auditory steady-state response (ASSR) is an electro-physiologic response to auditory stimulation that is amplitude-modulated by a specific frequency. By leveraging the phenomenon whereby ASSR is modulated by mind concentration, a brain-computer interface paradigm was proposed to classify the selective attention of the patient. In this paper, we propose an auditory stimulation method to minimize auditory stress by replacing the monotone carrier with familiar music and natural sounds for an ergonomic system. Piano and violin instrumentals were employed in the music sessions; the sounds of water streaming and cicadas singing were used in the natural sound sessions. Six healthy subjects participated in the experiment. Electroencephalograms were recorded using four electrodes (Cz, Oz, T7 and T8). Seven sessions were performed using different stimuli. The spectral power at 38 and 42 Hz and their ratio for each electrode were extracted as features. Linear discriminant analysis was utilized to classify the selections for each subject. In offline analysis, the average classification accuracies with a modulation index of 1.0 were 89.67% and 87.67% using music and natural sounds, respectively. In online experiments, the average classification accuracies were 88.3% and 80.0% using music and natural sounds, respectively. Using the proposed method, we obtained significantly higher user-acceptance scores, while maintaining a high average classification accuracy. Copyright © 2017 Elsevier Ltd. All rights reserved.
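The feature extraction this entry describes (narrow-band spectral power at the 38 and 42 Hz modulation frequencies, plus their ratio) can be sketched as follows. The sampling rate, epoch length, signal amplitudes, and the simple ratio threshold (standing in for the study's LDA classifier) are illustrative assumptions, not values from the paper:

```python
import numpy as np

def band_power(signal, fs, freq, bw=1.0):
    """Power of `signal` within +/- `bw` Hz of `freq`, via the real FFT."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return spectrum[np.abs(freqs - freq) <= bw].sum()

# Synthetic 2 s epoch: attending the 38 Hz-modulated stream boosts 38 Hz power.
fs = 500
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
epoch = (2.0 * np.sin(2 * np.pi * 38 * t)
         + 0.5 * np.sin(2 * np.pi * 42 * t)
         + 0.3 * rng.standard_normal(t.size))

p38 = band_power(epoch, fs, 38.0)
p42 = band_power(epoch, fs, 42.0)
ratio = p38 / p42            # the power-ratio feature used in the study
attended_38 = ratio > 1.0    # toy decision rule in place of trained LDA
```

In the actual system these features would be computed per electrode (Cz, Oz, T7, T8) and fed to a trained linear discriminant rather than a fixed threshold.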
Kim, Seung-Goo; Knösche, Thomas R
2017-08-01
Absolute pitch (AP) is the ability to recognize pitch chroma of tonal sound without external references, providing a unique model of the human auditory system (Zatorre: Nat Neurosci 6 () 692-695). In a previous study (Kim and Knösche: Hum Brain Mapp () 3486-3501), we identified enhanced intracortical myelination in the right planum polare (PP) in musicians with AP, which could be a potential site for perceptual processing of pitch chroma information. We speculated that this area, which initiates the ventral auditory pathway, might be crucially involved in the perceptual stage of the AP process in the context of the "dual pathway hypothesis" that suggests the role of the ventral pathway in processing nonspatial information related to the identity of an auditory object (Rauschecker: Eur J Neurosci 41 () 579-585). To test our conjecture on the ventral pathway, we investigated resting state functional connectivity (RSFC) using functional magnetic resonance imaging (fMRI) from musicians with varying degrees of AP. Should our hypothesis be correct, RSFC via the ventral pathway is expected to be stronger in musicians with AP, whereas such a group effect is not predicted in the RSFC via the dorsal pathway. In the current data, we found greater RSFC between the right PP and bilateral anteroventral auditory cortices in musicians with AP. In contrast, we did not find any group difference in the RSFC of the planum temporale (PT) between musicians with and without AP. We believe that these findings support our conjecture on the critical role of the ventral pathway in AP recognition. Hum Brain Mapp 38:3899-3916, 2017. © 2017 Wiley Periodicals, Inc.
Leicht, Gregor; Vauth, Sebastian; Polomac, Nenad; Andreou, Christina; Rauh, Jonas; Mußmann, Marius; Karow, Anne; Mulert, Christoph
2016-01-01
Objectives. Abnormalities of oscillatory gamma activity are supposed to reflect a core pathophysiological mechanism underlying cognitive disturbances in schizophrenia. The auditory evoked gamma-band response (aeGBR) is known to be reduced across all stages of the disease. The present study aimed to elucidate alterations of an aeGBR-specific network mediated by gamma oscillations in the high-risk state of psychosis (HRP) by means of functional magnetic resonance imaging (fMRI) informed by electroencephalography (EEG). Methods. EEG and fMRI were simultaneously recorded from 27 HRP individuals and 26 healthy controls (HC) during performance of a cognitively demanding auditory reaction task. We used single trial coupling of the aeGBR with the corresponding blood oxygen level dependent response (EEG-informed fMRI). Results. A gamma-band–specific network was significantly less active in HRP subjects than in HC (random effects analysis, P < .01, Bonferroni-corrected for multiple comparisons), accompanied by worse task performance. This network involved the bilateral auditory cortices, the thalamus and frontal brain regions including the anterior cingulate cortex, as well as the bilateral dorsolateral prefrontal cortex. Conclusions. For the first time we report a reduced activation of an aeGBR-specific network in HRP subjects, revealed by EEG-informed fMRI. Because the HRP reflects the clinical risk for conversion to psychotic disorders including schizophrenia, and the aeGBR has repeatedly been shown to be altered in patients with schizophrenia, the results of our study point towards a potential applicability of aeGBR disturbances as a marker for the prediction of transition of HRP subjects to schizophrenia. PMID:26163477
Reliability-Weighted Integration of Audiovisual Signals Can Be Modulated by Top-down Attention
Noppeney, Uta
2018-01-01
Abstract Behaviorally, it is well established that human observers integrate signals near-optimally weighted in proportion to their reliabilities as predicted by maximum likelihood estimation. Yet, despite abundant behavioral evidence, it is unclear how the human brain accomplishes this feat. In a spatial ventriloquist paradigm, participants were presented with auditory, visual, and audiovisual signals and reported the location of the auditory or the visual signal. Combining psychophysics, multivariate functional MRI (fMRI) decoding, and models of maximum likelihood estimation (MLE), we characterized the computational operations underlying audiovisual integration at distinct cortical levels. We estimated observers’ behavioral weights by fitting psychometric functions to participants’ localization responses. Likewise, we estimated the neural weights by fitting neurometric functions to spatial locations decoded from regional fMRI activation patterns. Our results demonstrate that low-level auditory and visual areas encode predominantly the spatial location of the signal component of a region’s preferred auditory (or visual) modality. By contrast, intraparietal sulcus forms spatial representations by integrating auditory and visual signals weighted by their reliabilities. Critically, the neural and behavioral weights and the variance of the spatial representations depended not only on the sensory reliabilities as predicted by the MLE model but also on participants’ modality-specific attention and report (i.e., visual vs. auditory). These results suggest that audiovisual integration is not exclusively determined by bottom-up sensory reliabilities. Instead, modality-specific attention and report can flexibly modulate how intraparietal sulcus integrates sensory signals into spatial representations to guide behavioral responses (e.g., localization and orienting). PMID:29527567
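The maximum likelihood estimation model this entry tests weights each sensory cue by its reliability, i.e., its inverse variance. A worked sketch of the standard formula, with illustrative cue locations and variances (the numbers are not taken from the study):

```python
def mle_fuse(x_a, var_a, x_v, var_v):
    """Reliability-weighted (maximum likelihood) fusion of two location cues.
    Reliability r = 1/variance; the fused variance is below either single cue's."""
    r_a, r_v = 1.0 / var_a, 1.0 / var_v
    w_a = r_a / (r_a + r_v)          # weight on the auditory cue
    w_v = r_v / (r_a + r_v)          # weight on the visual cue
    x_hat = w_a * x_a + w_v * x_v    # fused location estimate
    var_hat = 1.0 / (r_a + r_v)      # variance of the fused estimate
    return x_hat, var_hat

# Illustrative numbers: a noisy auditory cue at 10 deg, a sharp visual cue at 2 deg.
x_hat, var_hat = mle_fuse(x_a=10.0, var_a=4.0, x_v=2.0, var_v=1.0)
# weights w_a = 0.2, w_v = 0.8, so x_hat = 0.2*10 + 0.8*2 = 3.6 and var_hat = 0.8
```

The study's finding is that the empirical neural and behavioral weights deviate from these purely reliability-determined values when attention and report modality change.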
Dose-dependent suppression by ethanol of transient auditory 40-Hz response.
Jääskeläinen, I P; Hirvonen, J; Saher, M; Pekkonen, E; Sillanaukee, P; Näätänen, R; Tiitinen, H
2000-02-01
Acute alcohol (ethanol) challenge is known to induce various cognitive disturbances, yet the neural basis of the effect is poorly known. The auditory transient evoked gamma-band (40-Hz) oscillatory responses have been suggested to be associated with various perceptual and cognitive functions in humans; however, alcohol effects on auditory 40-Hz responses have not been investigated to date. The objective of the study was to test the dose-related impact of alcohol on auditory transient evoked 40-Hz responses during a selective-attention task. Ten healthy social drinkers ingested, in four separate sessions, 0.00, 0.25, 0.50, or 0.75 g/kg of 10% (v/v) alcohol solution. The order of the sessions was randomized and a double-blind procedure was employed. During a selective attention task, 300-Hz standard and 330-Hz deviant tones were presented to the left ear, and 1000-Hz standards and 1100-Hz deviants to the right ear of the subjects (P=0.425 for each standard, P=0.075 for each deviant). The subjects attended to a designated ear, and were to detect the deviants therein while ignoring tones to the other ear. The auditory transient evoked 40-Hz responses elicited by both the attended and unattended standard tones were significantly suppressed by the 0.50 and 0.75 g/kg alcohol doses. Alcohol suppresses auditory transient evoked 40-Hz oscillations even at moderate blood alcohol concentrations. Given the putative role of gamma-band oscillations in cognition, this finding could be associated with certain alcohol-induced cognitive deficits.
Reconstructing the spectrotemporal modulations of real-life sounds from fMRI response patterns
Santoro, Roberta; Moerel, Michelle; De Martino, Federico; Valente, Giancarlo; Ugurbil, Kamil; Yacoub, Essa; Formisano, Elia
2017-01-01
Ethological views of brain functioning suggest that sound representations and computations in the auditory neural system are finely optimized to process and discriminate behaviorally relevant acoustic features and sounds (e.g., spectrotemporal modulations in the songs of zebra finches). Here, we show that modeling of neural sound representations in terms of frequency-specific spectrotemporal modulations enables accurate and specific reconstruction of real-life sounds from high-resolution functional magnetic resonance imaging (fMRI) response patterns in the human auditory cortex. Region-based analyses indicated that response patterns in separate portions of the auditory cortex are informative of distinctive sets of spectrotemporal modulations. Most relevantly, results revealed that in early auditory regions, and progressively more in surrounding regions, temporal modulations in a range relevant for speech analysis (∼2–4 Hz) were reconstructed more faithfully than other temporal modulations. In early auditory regions, this effect was frequency-dependent and only present for lower frequencies (<∼2 kHz), whereas for higher frequencies, reconstruction accuracy was higher for faster temporal modulations. Further analyses suggested that auditory cortical processing optimized for the fine-grained discrimination of speech and vocal sounds underlies this enhanced reconstruction accuracy. In sum, the present study introduces an approach to embed models of neural sound representations in the analysis of fMRI response patterns. Furthermore, it reveals that, in the human brain, even general-purpose and fundamental neural processing mechanisms are shaped by the physical features of real-world stimuli that are most relevant for behavior (i.e., speech, voice). PMID:28420788
Knowledge of response location alone is not sufficient to generate social inhibition of return.
Welsh, Timothy N; Manzone, Joseph; McDougall, Laura
2014-11-01
Previous research has revealed that the inhibition of return (IOR) effect emerges when individuals respond to a target at the same location as their own previous response or the previous response of a co-actor. The latter social IOR effect is thought to occur because the observation of a co-actor's response evokes a representation of that action in the observer, and this observation-evoked response code subsequently activates the inhibitory mechanisms underlying IOR. The present study was conducted to determine if knowledge of the co-actor's response alone is sufficient to evoke social IOR. Pairs of participants completed responses to targets that appeared at different button locations. Button contact generated location-contingent auditory stimuli (high and low tones in Experiment 1 and colour words in Experiment 2). In the Full condition, the observer saw the response and heard the auditory stimuli. In the Auditory Only condition, the observer did not see the co-actor's response, but heard the auditory stimuli generated via button contact to indicate response endpoint. It was found that, although significant individual and social IOR effects emerged in the Full conditions, there were no social IOR effects in the Auditory Only conditions. These findings suggest that knowledge of the co-actor's response alone via auditory information is not sufficient to activate the inhibitory processes leading to IOR. The activation of the mechanisms that lead to social IOR seems to be dependent on processing channels that code the spatial characteristics of action. Copyright © 2014 Elsevier B.V. All rights reserved.
Developmental Profiling of Spiral Ganglion Neurons Reveals Insights into Auditory Circuit Assembly
Lu, Cindy C.; Appler, Jessica M.; Houseman, E. Andres; Goodrich, Lisa V.
2011-01-01
The sense of hearing depends on the faithful transmission of sound information from the ear to the brain by spiral ganglion (SG) neurons. However, how SG neurons develop the connections and properties that underlie auditory processing is largely unknown. We catalogued gene expression in mouse SG neurons from embryonic day 12 (E12), when SG neurons first extend projections, up until postnatal day 15 (P15), after the onset of hearing. For comparison, we also analyzed the closely-related vestibular ganglion (VG). Gene ontology analysis confirmed enriched expression of genes associated with gene regulation and neurite outgrowth at early stages, with the SG and VG often expressing different members of the same gene family. At later stages, the neurons transcribe more genes related to mature function, and exhibit a dramatic increase in immune gene expression. Comparisons of the two populations revealed enhanced expression of TGFβ pathway components in SG neurons and established new markers that consistently distinguish auditory and vestibular neurons. Unexpectedly, we found that Gata3, a transcription factor commonly associated with auditory development, is also expressed in VG neurons at early stages. We therefore defined new cohorts of transcription factors and axon guidance molecules that are uniquely expressed in SG neurons and may drive auditory-specific aspects of their differentiation and wiring. We show that one of these molecules, the receptor guanylyl cyclase Npr2, is required for bifurcation of the SG central axon. Hence, our data set provides a useful resource for uncovering the molecular basis of specific auditory circuit assembly events. PMID:21795542
The effect of aborting ongoing movements on end point position estimation.
Itaguchi, Yoshihiro; Fukuzawa, Kazuyoshi
2013-11-01
The present study investigated the impact of motor commands to abort ongoing movement on position estimation. Participants carried out visually guided reaching movements on a horizontal plane with their eyes open. By setting a mirror above their arm, however, they could not see the arm, only the start and target points. They estimated the position of their fingertip based solely on proprioception after their reaching movement was stopped before reaching the target. The participants stopped reaching as soon as they heard an auditory cue or were mechanically prevented from moving any further by an obstacle in their path. These reaching movements were carried out at two different speeds (fast or slow). It was assumed that additional motor commands to abort ongoing movement were required and that their magnitude was high, low, and zero, in the auditory-fast condition, the auditory-slow condition, and both the obstacle conditions, respectively. There were two main results. (1) When the participants voluntarily stopped a fast movement in response to the auditory cue (the auditory-fast condition), they showed more underestimates than in the other three conditions. This underestimate effect was positively related to movement velocity. (2) An inverted-U-shaped bias pattern as a function of movement distance was observed consistently, except in the auditory-fast condition. These findings indicate that voluntarily stopping fast ongoing movement created a negative bias in the position estimate, supporting the idea that additional motor commands or efforts to abort planned movement are involved with the position estimation system. In addition, spatially probabilistic inference and signal-dependent noise may explain the underestimate effect of aborting ongoing movement.
Biased and unbiased perceptual decision-making on vocal emotions.
Dricu, Mihai; Ceravolo, Leonardo; Grandjean, Didier; Frühholz, Sascha
2017-11-24
Perceptual decision-making on emotions involves gathering sensory information about the affective state of another person and forming a decision on the likelihood of a particular state. These perceptual decisions can be of varying complexity as determined by different contexts. We used functional magnetic resonance imaging and a region of interest approach to investigate the brain activation and functional connectivity behind two forms of perceptual decision-making. More complex unbiased decisions on affective voices recruited an extended bilateral network consisting of the posterior inferior frontal cortex, the orbitofrontal cortex, the amygdala, and voice-sensitive areas in the auditory cortex. Less complex biased decisions on affective voices distinctly recruited the right mid inferior frontal cortex, pointing to a functional distinction in this region following decisional requirements. Furthermore, task-induced neural connectivity revealed stronger connections between these frontal, auditory, and limbic regions during unbiased relative to biased decision-making on affective voices. Together, the data shows that different types of perceptual decision-making on auditory emotions have distinct patterns of activations and functional coupling that follow the decisional strategies and cognitive mechanisms involved during these perceptual decisions.
EEG Responses to Auditory Stimuli for Automatic Affect Recognition
Hettich, Dirk T.; Bolinger, Elaina; Matuz, Tamara; Birbaumer, Niels; Rosenstiel, Wolfgang; Spüler, Martin
2016-01-01
Brain state classification for communication and control has been well established in the area of brain-computer interfaces over the last decades. Recently, the passive and automatic extraction of additional information regarding the psychological state of users from neurophysiological signals has gained increased attention in the interdisciplinary field of affective computing. We investigated how well specific emotional reactions, induced by auditory stimuli, can be detected in EEG recordings. We introduce an auditory emotion induction paradigm based on the International Affective Digitized Sounds 2nd Edition (IADS-2) database also suitable for disabled individuals. Stimuli are grouped in three valence categories: unpleasant, neutral, and pleasant. Significant differences in time-domain event-related potentials are found in the electroencephalogram (EEG) between unpleasant and neutral, as well as pleasant and neutral conditions over midline electrodes. Time domain data were classified in three binary classification problems using a linear support vector machine (SVM) classifier. We discuss three classification performance measures in the context of affective computing and outline some strategies for conducting and reporting affect classification studies. PMID:27375410
Is Rest Really Rest? Resting State Functional Connectivity during Rest and Motor Task Paradigms.
Jurkiewicz, Michael T; Crawley, Adrian P; Mikulis, David J
2018-04-18
Numerous studies have identified the default mode network (DMN) within the brain of healthy individuals, which has been attributed to the ongoing mental activity of the brain during the wakeful resting-state. While the DMN is engaged during specific resting-state fMRI paradigms, it remains unclear whether traditional block-design simple-movement fMRI experiments significantly influence the default mode network or other areas. Using blood-oxygen level dependent (BOLD) fMRI we characterized the pattern of functional connectivity in healthy subjects during a resting-state paradigm and compared this to the same resting-state analysis performed on motor task data residual time courses after regressing out the task paradigm. Using seed-voxel analysis to define the DMN, the executive control network (ECN), and sensorimotor, auditory and visual networks, the resting-state analysis of the residual time courses demonstrated reduced functional connectivity in the motor network and reduced connectivity between the insula and the ECN compared to the standard resting-state datasets. Overall, performance of simple self-directed motor tasks does little to change the resting-state functional connectivity across the brain, especially in non-motor areas. This would suggest that previously acquired fMRI studies incorporating simple block-design motor tasks could be mined retrospectively for assessment of resting-state connectivity.
Schumacher, Joseph W.; Schneider, David M.
2011-01-01
The majority of sensory physiology experiments have used anesthesia to facilitate the recording of neural activity. Current techniques allow researchers to study sensory function in the context of varying behavioral states. To reconcile results across multiple behavioral and anesthetic states, it is important to consider how and to what extent anesthesia plays a role in shaping neural response properties. The role of anesthesia has been the subject of much debate, but the extent to which sensory coding properties are altered by anesthesia has yet to be fully defined. In this study we asked how urethane, an anesthetic commonly used for avian and mammalian sensory physiology, affects the coding of complex communication vocalizations (songs) and simple artificial stimuli in the songbird auditory midbrain. We measured spontaneous and song-driven spike rates, spectrotemporal receptive fields, and neural discriminability from responses to songs in single auditory midbrain neurons. In the same neurons, we recorded responses to pure tone stimuli ranging in frequency and intensity. Finally, we assessed the effect of urethane on population-level representations of birdsong. Results showed that intrinsic neural excitability is significantly depressed by urethane but that spectral tuning, single neuron discriminability, and population representations of song do not differ significantly between unanesthetized and anesthetized animals. PMID:21543752
Vestibular receptors contribute to cortical auditory evoked potentials.
Todd, Neil P M; Paillard, Aurore C; Kluk, Karolina; Whittle, Elizabeth; Colebatch, James G
2014-03-01
Acoustic sensitivity of the vestibular apparatus is well-established, but the contribution of vestibular receptors to the late auditory evoked potentials of cortical origin is unknown. Evoked potentials from 500 Hz tone pips were recorded using 70 channel EEG at several intensities below and above the vestibular acoustic threshold, as determined by vestibular evoked myogenic potentials (VEMPs). In healthy subjects both auditory mid- and long-latency auditory evoked potentials (AEPs), consisting of Na, Pa, N1 and P2 waves, were observed in the sub-threshold conditions. However, in passing through the vestibular threshold, systematic changes were observed in the morphology of the potentials and in the intensity dependence of their amplitude and latency. These changes were absent in a patient without functioning vestibular receptors. In particular, for the healthy subjects there was a fronto-central negativity, which appeared at about 42 ms, referred to as an N42, prior to the AEP N1. Source analysis of both the N42 and N1 indicated involvement of cingulate cortex, as well as bilateral superior temporal cortex. Our findings are best explained by vestibular receptors contributing to what were hitherto considered as purely auditory evoked potentials and in addition tentatively identify a new component that appears to be primarily of vestibular origin. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
Binaural auditory beats affect long-term memory.
Garcia-Argibay, Miguel; Santed, Miguel A; Reales, José M
2017-12-08
The presentation of two pure tones to each ear separately with a slight difference in their frequency results in the perception of a single tone that fluctuates in amplitude at a frequency that equals the difference of interaural frequencies. This perceptual phenomenon is known as binaural auditory beats, and it is thought to entrain electrocortical activity and enhance cognition functions such as attention and memory. The aim of this study was to determine the effect of binaural auditory beats on long-term memory. Participants (n = 32) were kept blind to the goal of the study and performed both the free recall and recognition tasks after being exposed to binaural auditory beats, either in the beta (20 Hz) or theta (5 Hz) frequency bands and white noise as a control condition. Exposure to beta-frequency binaural beats yielded a greater proportion of correctly recalled words and a higher sensitivity index d' in recognition tasks, while theta-frequency binaural-beat presentation lessened the number of correctly remembered words and the sensitivity index. On the other hand, we could not find differences in the conditional probability for recall given recognition between beta and theta frequencies and white noise, suggesting that the observed changes in recognition were due to the recollection component. These findings indicate that the presentation of binaural auditory beats can affect long-term memory both positively and negatively, depending on the frequency used.
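The stimulus construction described in the first sentence of this entry, two pure tones differing slightly in frequency, one per ear, can be sketched as follows. The 400 Hz carrier, sampling rate, and duration are illustrative assumptions; only the study's beat frequencies of 20 Hz (beta) and 5 Hz (theta) are retained:

```python
import numpy as np

def binaural_beat(carrier_hz, beat_hz, fs=44100, dur=1.0):
    """Stereo tone pair whose interaural frequency difference equals the
    desired beat frequency, perceived centrally as amplitude fluctuation."""
    t = np.arange(int(fs * dur)) / fs
    left = np.sin(2 * np.pi * carrier_hz * t)
    right = np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)
    return np.stack([left, right])

beta = binaural_beat(400.0, 20.0)   # beta-band beat, as in the study
theta = binaural_beat(400.0, 5.0)   # theta-band beat, as in the study
# Summing the channels exposes the fluctuation: by the identity
# sin(a) + sin(b) = 2 sin((a+b)/2) cos((a-b)/2), the rectified envelope
# |2 cos(pi * beat_hz * t)| fluctuates at exactly beat_hz.
```

Note that in actual presentation each ear receives only its own tone; the beat percept arises from binaural interaction in the brainstem rather than acoustic mixing.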
Top-down modulation of visual and auditory cortical processing in aging.
Guerreiro, Maria J S; Eck, Judith; Moerel, Michelle; Evers, Elisabeth A T; Van Gerven, Pascal W M
2015-02-01
Age-related cognitive decline has been accounted for by an age-related deficit in top-down attentional modulation of sensory cortical processing. In light of recent behavioral findings showing that age-related differences in selective attention are modality dependent, our goal was to investigate the role of sensory modality in age-related differences in top-down modulation of sensory cortical processing. This question was addressed by testing younger and older individuals in several memory tasks while undergoing fMRI. Throughout these tasks, perceptual features were kept constant while attentional instructions were varied, allowing us to devise all combinations of relevant and irrelevant, visual and auditory information. We found no top-down modulation of auditory sensory cortical processing in either age group. In contrast, we found top-down modulation of visual cortical processing in both age groups, and this effect did not differ between age groups. That is, older adults enhanced cortical processing of relevant visual information and suppressed cortical processing of visual distractors during auditory attention to the same extent as younger adults. The present results indicate that older adults are capable of suppressing irrelevant visual information in the context of cross-modal auditory attention, and thereby challenge the view that age-related attentional and cognitive decline is due to a general deficit in the ability to suppress irrelevant information. Copyright © 2014 Elsevier B.V. All rights reserved.
Plastic brain mechanisms for attaining auditory temporal order judgment proficiency.
Bernasconi, Fosco; Grivel, Jeremy; Murray, Micah M; Spierer, Lucas
2010-04-15
Accurate perception of the order of occurrence of sensory information is critical for the building up of coherent representations of the external world from ongoing flows of sensory inputs. While some psychophysical evidence reports that performance on temporal perception can improve, the underlying neural mechanisms remain unresolved. Using electrical neuroimaging analyses of auditory evoked potentials (AEPs), we identified the brain dynamics and mechanism supporting improvements in auditory temporal order judgment (TOJ) during the course of the first vs. latter half of the experiment. Training-induced changes in brain activity were first evident 43-76 ms post stimulus onset and followed from topographic, rather than pure strength, AEP modulations. Improvements in auditory TOJ accuracy thus followed from changes in the configuration of the underlying brain networks during the initial stages of sensory processing. Source estimations revealed an increase in the lateralization of initially bilateral posterior sylvian region (PSR) responses at the beginning of the experiment to left-hemisphere dominance at its end. Further supporting the critical role of left and right PSR in auditory TOJ proficiency, as the experiment progressed, responses in the left and right PSR went from being correlated to un-correlated. These collective findings provide insights on the neurophysiologic mechanism and plasticity of temporal processing of sounds and are consistent with models based on spike timing dependent plasticity. Copyright 2010 Elsevier Inc. All rights reserved.
Kokinous, Jenny; Tavano, Alessandro; Kotz, Sonja A; Schröger, Erich
2017-02-01
The role of spatial frequencies (SF) is highly debated in emotion perception, but previous work suggests the importance of low SFs for detecting emotion in faces. Furthermore, emotion perception essentially relies on the rapid integration of multimodal information from faces and voices. We used EEG to test the functional relevance of SFs in the integration of emotional and non-emotional audiovisual stimuli. While viewing dynamic face-voice pairs, participants were asked to identify auditory interjections, and the electroencephalogram (EEG) was recorded. Audiovisual integration was measured as auditory facilitation, indexed by the extent of the auditory N1 amplitude suppression in audiovisual compared to an auditory only condition. We found an interaction of SF filtering and emotion in the auditory response suppression. For neutral faces, larger N1 suppression ensued in the unfiltered and high SF conditions as compared to the low SF condition. Angry face perception led to a larger N1 suppression in the low SF condition. While the results for the neutral faces indicate that perceptual quality in terms of SF content plays a major role in audiovisual integration, the results for angry faces suggest that early multisensory integration of emotional information favors low SF neural processing pathways, overruling the predictive value of the visual signal per se. Copyright © 2016 Elsevier B.V. All rights reserved.
Menashe, Shay
2017-01-01
The main aim of the present study was to determine whether adult dyslexic readers demonstrate the "Asynchrony Theory" (Breznitz [Reading Fluency: Synchronization of Processes, Lawrence Erlbaum and Associates, Mahwah, NJ, USA, 2006]) when selective attention is studied. Event-related potentials (ERPs) and behavioral parameters were collected from a nonimpaired readers group and a dyslexic readers group performing alphabetic and nonalphabetic tasks. The dyslexic readers group demonstrated asynchrony between the auditory and visual modalities when processing alphabetic stimuli. These findings held for both behavioral and ERP parameters. Unlike the dyslexic readers, the nonimpaired readers showed synchronized speed of processing in the auditory and visual modalities while processing alphabetic stimuli. The current study suggests that established reading is dependent on synchronization between the auditory and visual modalities even when it comes to selective attention.
Auditory cortex of newborn bats is prewired for echolocation.
Kössl, Manfred; Voss, Cornelia; Mora, Emanuel C; Macias, Silvio; Foeller, Elisabeth; Vater, Marianne
2012-04-10
Neuronal computation of object distance from echo delay is an essential task that echolocating bats must master for spatial orientation and the capture of prey. In the dorsal auditory cortex of bats, neurons specifically respond to combinations of short frequency-modulated components of emitted call and delayed echo. These delay-tuned neurons are thought to serve in target range calculation. It is unknown whether neuronal correlates of active space perception are established by experience-dependent plasticity or by innate mechanisms. Here we demonstrate that in the first postnatal week, before onset of echolocation and flight, dorsal auditory cortex already contains functional circuits that calculate distance from the temporal separation of a simulated pulse and echo. This innate cortical implementation of a purely computational processing mechanism for sonar ranging should enhance survival of juvenile bats when they first engage in active echolocation behaviour and flight.
Mhatre, Natasha; Pollack, Gerald; Mason, Andrew
2016-04-01
Tree cricket males produce tonal songs, used for mate attraction and male-male interactions. Active mechanics tunes hearing to conspecific song frequency. However, tree cricket song frequency increases with temperature, presenting a problem for tuned listeners. We show that the actively amplified frequency increases with temperature, thus shifting mechanical and neuronal auditory tuning to maintain a match with conspecific song frequency. Active auditory processes are known from several taxa, but their adaptive function has rarely been demonstrated. We show that tree crickets harness active processes to ensure that auditory tuning remains matched to conspecific song frequency, despite changing environmental conditions and signal characteristics. Adaptive tuning allows tree crickets to selectively detect potential mates or rivals over large distances and is likely to bestow a strong selective advantage by reducing mate-finding effort and facilitating intermale interactions. © 2016 The Author(s).
Mental Imagery Induces Cross-Modal Sensory Plasticity and Changes Future Auditory Perception.
Berger, Christopher C; Ehrsson, H Henrik
2018-04-01
Can what we imagine in our minds change how we perceive the world in the future? A continuous process of multisensory integration and recalibration is responsible for maintaining a correspondence between the senses (e.g., vision, touch, audition) and, ultimately, a stable and coherent perception of our environment. This process depends on the plasticity of our sensory systems. The so-called ventriloquism aftereffect (a shift in the perceived localization of sounds presented alone after repeated exposure to spatially mismatched auditory and visual stimuli) is a clear example of this type of plasticity in the audiovisual domain. In a series of six studies with 24 participants each, we investigated an imagery-induced ventriloquism aftereffect in which imagining a visual stimulus elicits the same frequency-specific auditory aftereffect as actually seeing one. These results demonstrate that mental imagery can recalibrate the senses and induce the same cross-modal sensory plasticity as real sensory stimuli.
Perceptual congruency of audio-visual speech affects ventriloquism with bilateral visual stimuli.
Kanaya, Shoko; Yokosawa, Kazuhiko
2011-02-01
Many studies on multisensory processes have focused on performance in simplified experimental situations, with a single stimulus in each sensory modality. However, these results cannot necessarily be applied to explain our perceptual behavior in natural scenes where various signals exist within one sensory modality. We investigated the role of audio-visual syllable congruency on participants' auditory localization bias or the ventriloquism effect using spoken utterances and two videos of a talking face. Salience of facial movements was also manipulated. Results indicated that more salient visual utterances attracted participants' auditory localization. Congruent pairing of audio-visual utterances elicited greater localization bias than incongruent pairing, while previous studies have reported little dependency on the reality of stimuli in ventriloquism. Moreover, audio-visual illusory congruency, owing to the McGurk effect, caused substantial visual interference on auditory localization. Multisensory performance appears more flexible and adaptive in this complex environment than in previous studies.
Reality of auditory verbal hallucinations.
Raij, Tuukka T; Valkonen-Korhonen, Minna; Holi, Matti; Therman, Sebastian; Lehtonen, Johannes; Hari, Riitta
2009-11-01
Distortion of the sense of reality, actualized in delusions and hallucinations, is the key feature of psychosis, but the underlying neuronal correlates remain largely unknown. We studied 11 highly functioning subjects with schizophrenia or schizoaffective disorder while they rated the reality of auditory verbal hallucinations (AVH) during functional magnetic resonance imaging (fMRI). The subjective reality of AVH correlated strongly and specifically with the hallucination-related activation strength of the inferior frontal gyri (IFG), including Broca's language region. Furthermore, how real the subjects experienced a hallucination to be depended on the hallucination-related coupling between the IFG, the ventral striatum, the auditory cortex, the right posterior temporal lobe, and the cingulate cortex. Our findings suggest that the subjective reality of AVH is related to motor mechanisms of speech comprehension, with contributions from sensory and salience-detection-related brain regions as well as circuitries related to self-monitoring and the experience of agency.
Pinto, Hyorrana Priscila Pereira; Carvalho, Vinícius Rezende; Medeiros, Daniel de Castro; Almeida, Ana Flávia Santos; Mendes, Eduardo Mazoni Andrade Marçal; Moraes, Márcio Flávio Dutra
2017-04-07
Epilepsy is a neurological disease related to the occurrence of pathological oscillatory activity, but the basic physiological mechanisms of seizure remain to be understood. Our working hypothesis is that specific sensory processing circuits may present an abnormally enhanced predisposition for coordinated firing in the dysfunctional brain. Such facilitated entrainment could share a similar mechanistic process with those expediting the propagation of epileptiform activity throughout the brain. To test this hypothesis, we employed the Wistar audiogenic rat (WAR) reflex animal model, which is characterized by seizures reliably triggered by sound. Sound stimulation was modulated in amplitude to produce an auditory steady-state evoked response (ASSR; ~53.71 Hz) that covers bottom-up and top-down processing on a time scale compatible with the dynamics of the epileptic condition. Data from inferior colliculus (IC) c-Fos immunohistochemistry and electrographic recordings were gathered for both the control Wistar group and WARs. Under 85-dB SPL auditory stimulation, compared to controls, the WARs presented a higher number of Fos-positive cells (at the IC and auditory temporal lobe) and a significant increase in ASSR-normalized energy. Similarly, the 110-dB SPL sound stimulation also statistically increased ASSR-normalized energy during ictal and post-ictal periods. However, at the transition from the physiological to the pathological state (pre-ictal period), the WAR ASSR analysis demonstrated a decline in normalized energy and a significant increase in circular variance values compared to those of controls. These results indicate an enhanced coordinated-firing state for WARs, except immediately before seizure onset (suggesting pre-ictal neuronal desynchronization with the external sensory drive). These results suggest a competing myriad of interferences among different networks that after seizure onset converge to a massive oscillatory circuit. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
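The pre-ictal desynchronization above was quantified with the circular variance of ASSR phases. As an illustrative aside (a minimal sketch, not the authors' analysis pipeline; the function name is hypothetical), circular variance can be computed as one minus the mean resultant length of the unit phasors:

```python
import numpy as np

def circular_variance(phases):
    """Circular variance of phase angles given in radians.

    Returns a value in [0, 1]: 0 when all phases coincide
    (tight phase locking to the stimulus), approaching 1 when
    phases are spread uniformly around the circle
    (desynchronization from the external sensory drive).
    """
    # Mean resultant length R of the unit phasors exp(i*phase)
    r = np.abs(np.mean(np.exp(1j * np.asarray(phases, dtype=float))))
    return 1.0 - r
```

On this reading, the WARs' pre-ictal rise in circular variance corresponds to a drop in the mean resultant length of trial-wise ASSR phases.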
ERIC Educational Resources Information Center
Yoncheva, Yuliya N.; Maurer, Urs; Zevin, Jason D.; McCandliss, Bruce D.
2013-01-01
ERP responses to spoken words are sensitive to both rhyming effects and effects of associated spelling patterns. Are such effects automatically elicited by spoken words or dependent on selectively attending to phonology? To address this question, ERP responses to spoken word pairs were investigated under two equally demanding listening tasks that…
Forward Masking of the Speech-Evoked Auditory Brainstem Response.
Hodge, Sarah E; Menezes, Denise C; Brown, Kevin D; Grose, John H
2018-02-01
The hypothesis tested was that forward masking of the speech-evoked auditory brainstem response (sABR) increases peak latency as an inverse function of masker-signal interval (Δt), and that the overall persistence of forward masking is age dependent. Older listeners exhibit deficits in forward masking. If forward-masked sABRs provide an objective measure of the susceptibility of speech sounds to prior stimulation, then this provides a novel approach to examining the age dependence of temporal processing. A /da/ stimulus forward masked by speech-shaped noise (Δt = 4-64 ms) was used to measure sABRs in 10 younger and nine older participants. Forward masking of subsegments of the /da/ stimulus (Δt = 16 ms) and click trains (Δt = 0-64 ms) was also measured. Forward-masked sABRs from young participants showed an increase in latency with decreasing Δt for the initial peak. Latency shifts for later peaks were smaller and more uniform. None of the peak latencies returned to baseline by Δt = 64 ms. Forward-masked /da/ subsegments showed peak latency shifts that did not depend simply on peak position, while forward-masked click trains showed latency shifts that were dependent on click position. The sABRs from older adults were less robust but confirmed the viability of the approach. Forward masking of the sABR provides an objective measure of the susceptibility of the auditory system to prior stimulation. Failure of recovery functions to return to baseline suggests an interaction between forward masking by the prior masker and temporal effects within the stimulus itself.
Proceedings of the Ship Production Symposium, held in New Orleans, Louisiana, on 2-4 September 1992
1992-09-01
Effects of Transcranial Direct Current Stimulation on Expression of Immediate Early Genes (IEG’s)
2015-12-01
Analysis of stimulus-related activity in rat auditory cortex using complex spectral coefficients
Krause, Bryan M.
2013-01-01
The neural mechanisms of sensory responses recorded from the scalp or cortical surface remain controversial. Evoked vs. induced response components (i.e., changes in mean vs. variance) are associated with bottom-up vs. top-down processing, but trial-by-trial response variability can confound this interpretation. Phase reset of ongoing oscillations has also been postulated to contribute to sensory responses. In this article, we present evidence that responses under passive listening conditions are dominated by variable evoked response components. We measured the mean, variance, and phase of complex time-frequency coefficients of epidurally recorded responses to acoustic stimuli in rats. During the stimulus, changes in mean, variance, and phase tended to co-occur. After the stimulus, there was a small, low-frequency offset response in the mean and modest, prolonged desynchronization in the alpha band. Simulations showed that trial-by-trial variability in the mean can account for most of the variance and phase changes observed during the stimulus. This variability was state dependent, with smallest variability during periods of greatest arousal. Our data suggest that cortical responses to auditory stimuli reflect variable inputs to the cortical network. These analyses suggest that caution should be exercised when interpreting variance and phase changes in terms of top-down cortical processing. PMID:23657279
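The decomposition described above (evoked component as the across-trial mean, induced component as the across-trial variance, plus inter-trial phase consistency) can be sketched from complex time-frequency coefficients. This is an illustrative computation under assumed array conventions, not the authors' code; the function name is hypothetical:

```python
import numpy as np

def tf_summary(coeffs):
    """Summarize complex time-frequency coefficients across trials.

    coeffs: complex array of shape (n_trials, n_freqs, n_times),
    e.g. wavelet or STFT coefficients computed per trial.
    Returns the across-trial mean (evoked component), the
    across-trial variance (induced component), and the
    inter-trial phase coherence (1 = perfect phase locking).
    """
    evoked = coeffs.mean(axis=0)
    induced = coeffs.var(axis=0)       # real-valued for complex input
    phasors = coeffs / np.abs(coeffs)  # unit-magnitude phase vectors
    itpc = np.abs(phasors.mean(axis=0))
    return evoked, induced, itpc
```

The article's point can be restated in these terms: trial-by-trial variability in the evoked part can itself inflate the variance and phase measures, so induced and phase changes need not imply separate top-down processes.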
Cortical thickness as a contributor to abnormal oscillations in schizophrenia?
Edgar, J Christopher; Chen, Yu-Han; Lanza, Matthew; Howell, Breannan; Chow, Vivian Y; Heiken, Kory; Liu, Song; Wootton, Cassandra; Hunter, Michael A; Huang, Mingxiong; Miller, Gregory A; Cañive, José M
2014-01-01
Although brain rhythms depend on brain structure (e.g., gray and white matter), to our knowledge associations between brain oscillations and structure have not been investigated in healthy controls (HC) or in individuals with schizophrenia (SZ). Observing function-structure relationships, for example establishing an association between brain oscillations (defined in terms of amplitude or phase) and cortical gray matter, might inform models on the origins of psychosis. Given evidence of functional and structural abnormalities in primary/secondary auditory regions in SZ, the present study examined how superior temporal gyrus (STG) structure relates to auditory STG low-frequency and 40 Hz steady-state activity. Given changes in brain activity as a function of age, age-related associations in STG oscillatory activity were also examined. Thirty-nine individuals with SZ and 29 HC were recruited. 40 Hz amplitude-modulated tones of 1 s duration were presented. MEG and T1-weighted sMRI data were obtained. Using the sources localizing 40 Hz evoked steady-state activity (300 to 950 ms), left and right STG total power and inter-trial coherence were computed. Time-frequency group differences and associations with STG structure and age were also examined. Decreased total power and inter-trial coherence in SZ were observed in the left STG for initial post-stimulus low-frequency activity (~ 50 to 200 ms, ~ 4 to 16 Hz) as well as 40 Hz steady-state activity (~ 400 to 1000 ms). Left STG 40 Hz total power and inter-trial coherence were positively associated with left STG cortical thickness in HC, not in SZ. Left STG post-stimulus low-frequency and 40 Hz total power were positively associated with age, again only in controls. Left STG low-frequency and steady-state gamma abnormalities distinguish SZ and HC. Disease-associated damage to STG gray matter in schizophrenia may disrupt the age-related left STG gamma-band function-structure relationships observed in controls.
Evaluating dedicated and intrinsic models of temporal encoding by varying context
Spencer, Rebecca M.C.; Karmarkar, Uma; Ivry, Richard B.
2009-01-01
Two general classes of models have been proposed to account for how people process temporal information in the milliseconds range. Dedicated models entail a mechanism in which time is explicitly encoded; examples include clock–counter models and functional delay lines. Intrinsic models, such as state-dependent networks (SDN), represent time as an emergent property of the dynamics of neural processing. An important property of SDN is that the encoding of duration is context dependent since the representation of an interval will vary as a function of the initial state of the network. Consistent with this assumption, duration discrimination thresholds for auditory intervals spanning 100 ms are elevated when an irrelevant tone is presented at varying times prior to the onset of the test interval. We revisit this effect in two experiments, considering attentional issues that may also produce such context effects. The disruptive effect of a variable context was eliminated or attenuated when the intervals between the irrelevant tone and test interval were made dissimilar or the duration of the test interval was increased to 300 ms. These results indicate how attentional processes can influence the perception of brief intervals, as well as point to important constraints for SDN models. PMID:19487188
Kricos, Patricia B.
2006-01-01
The number and proportion of older adults in the United States population is increasing, and more clinical audiologists will be called upon to deliver hearing care to the approximately 35% to 50% of them who experience hearing difficulties. In recent years, the characteristics and sources of receptive communication difficulties in older individuals have been investigated by hearing scientists, cognitive psychologists, and audiologists. It is becoming increasingly apparent that cognitive compromises and psychoacoustic auditory processing disorders associated with aging may contribute to communication difficulties in this population. This paper presents an overview of best practices, based on our current knowledge base, for clinical management of older individuals with limitations in cognitive or psychoacoustic auditory processing capabilities, or both, that accompany aging. PMID:16528428
Li, Wenjing; Li, Jianhong; Xian, Junfang; Lv, Bin; Li, Meng; Wang, Chunheng; Li, Yong; Liu, Zhaohui; Liu, Sha; Wang, Zhenchang; He, Huiguang; Sabel, Bernhard A
2013-01-01
Prelingual deafness has been shown to lead to brain reorganization as demonstrated by functional parameters, but the anatomical evidence remains controversial. The present study investigated hemispheric asymmetry changes in deaf subjects using MRI, hypothesizing changes in auditory-, language-, or visual-related regions after early deafness. Prelingually deaf adolescents (n = 16) and age- and gender-matched normal controls (n = 16) were recruited, and hemispheric asymmetry was evaluated with voxel-based morphometry (VBM) from MRI combined with analysis of cortical thickness (CTh). Deaf adolescents showed more rightward asymmetries (L < R) of grey matter volume (GMV) in the cerebellum and more leftward CTh asymmetries (L > R) in the posterior cingulate gyrus and gyrus rectus. More rightward CTh asymmetries were observed in the precuneus, middle and superior frontal gyri, and middle occipital gyrus. The duration of hearing aid use was correlated with the asymmetry of GMV in the cerebellum and of CTh in the gyrus rectus. Interestingly, the asymmetry of the auditory cortex was preserved in deaf subjects. When the brain is deprived of auditory input early in life, there are signs of irreversible morphological asymmetry changes in different brain regions, but also signs of reorganization and plasticity that are dependent on hearing aid use, i.e. use-dependent.
Auditory enhancement of visual perception at threshold depends on visual abilities.
Caclin, Anne; Bouchet, Patrick; Djoulah, Farida; Pirat, Elodie; Pernier, Jacques; Giard, Marie-Hélène
2011-06-17
Whether or not multisensory interactions can improve detection thresholds, and thus widen the range of perceptible events is a long-standing debate. Here we revisit this question, by testing the influence of auditory stimuli on visual detection threshold, in subjects exhibiting a wide range of visual-only performance. Above the perceptual threshold, crossmodal interactions have indeed been reported to depend on the subject's performance when the modalities are presented in isolation. We thus tested normal-seeing subjects and short-sighted subjects wearing their usual glasses. We used a paradigm limiting potential shortcomings of previous studies: we chose a criterion-free threshold measurement procedure and precluded exogenous cueing effects by systematically presenting a visual cue whenever a visual target (a faint Gabor patch) might occur. Using this carefully controlled procedure, we found that concurrent sounds only improved visual detection thresholds in the sub-group of subjects exhibiting the poorest performance in the visual-only conditions. In these subjects, for oblique orientations of the visual stimuli (but not for vertical or horizontal targets), the auditory improvement was still present when visual detection was already helped with flanking visual stimuli generating a collinear facilitation effect. These findings highlight that crossmodal interactions are most efficient to improve perceptual performance when an isolated modality is deficient. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Comastri, S. A.; Martin, G.; Simon, J. M.; Angarano, C.; Dominguez, S.; Luzzi, F.; Lanusse, M.; Ranieri, M. V.; Boccio, C. M.
2008-04-01
In Optometry and in Audiology, the routine tests used to prescribe correction lenses and hearing aids are, respectively, the visual acuity test (the first chart with letters was developed by Snellen in 1862) and conventional pure tone audiometry (the first audiometer with electrical current was devised by Hartmann in 1878). At present there are psychophysical, non-invasive tests that, besides evaluating visual and auditory performance globally, even in cases catalogued as normal according to routine tests, supply early information regarding diseases such as diabetes, hypertension, renal failure, cardiovascular problems, etc. In Optometry, one of these tests is the achromatic luminance contrast sensitivity test (introduced by Schade in 1956). In Audiology, one of these tests is high-frequency pure tone audiometry (introduced a few decades ago), which yields information on pathologies affecting the basal cochlea and complements the data resulting from conventional audiometry. These utilities of the contrast sensitivity test and of pure tone audiometry derive from the facts that Fourier components constitute the basis for synthesizing the stimuli present at the entrance of the visual and auditory systems; that the responses of these systems depend on frequency; and that the patient's psychophysical state affects frequency processing. The frequency of interest in the former test is the effective spatial frequency (the inverse of the angle subtended at the eye by one cycle of a sinusoidal grating, measured in cycles/degree) and, in the latter, the temporal frequency (measured in cycles/sec). Both tests have similar durations and consist in determining the patient's threshold (the multiplicative inverse of the contrast, or the additive inverse of the sound intensity level) for each harmonic stimulus present at the system entrance (sinusoidal grating or pure tone sound).
In this article, the frequencies, standard normality curves, and abnormal threshold shifts inherent to the contrast sensitivity test (which for simplicity could be termed "visionmetry") and to pure tone audiometry (also termed the auditory sensitivity test) are analyzed, with the purpose of helping to disseminate their ability to supply early information associated with pathologies not solely related to the visual and auditory systems, respectively.
Attention-driven auditory cortex short-term plasticity helps segregate relevant sounds from noise
Ahveninen, Jyrki; Hämäläinen, Matti; Jääskeläinen, Iiro P.; Ahlfors, Seppo P.; Huang, Samantha; Raij, Tommi; Sams, Mikko; Vasios, Christos E.; Belliveau, John W.
2011-01-01
How can we concentrate on relevant sounds in noisy environments? A “gain model” suggests that auditory attention simply amplifies relevant and suppresses irrelevant afferent inputs. However, it is unclear whether this suffices when attended and ignored features overlap to stimulate the same neuronal receptive fields. A “tuning model” suggests that, in addition to gain, attention modulates feature selectivity of auditory neurons. We recorded magnetoencephalography, EEG, and functional MRI (fMRI) while subjects attended to tones delivered to one ear and ignored opposite-ear inputs. The attended ear was switched every 30 s to quantify how quickly the effects evolve. To produce overlapping inputs, the tones were presented alone vs. during white-noise masking notch-filtered ±1/6 octaves around the tone center frequencies. Amplitude modulation (39 vs. 41 Hz in opposite ears) was applied for “frequency tagging” of attention effects on maskers. Noise masking reduced early (50–150 ms; N1) auditory responses to unattended tones. In support of the tuning model, selective attention canceled out this attenuating effect but did not modulate the gain of 50–150 ms activity to nonmasked tones or steady-state responses to the maskers themselves. These tuning effects originated at nonprimary auditory cortices, purportedly occupied by neurons that, without attention, have wider frequency tuning than ±1/6 octaves. The attentional tuning evolved rapidly, during the first few seconds after attention switching, and correlated with behavioral discrimination performance. In conclusion, a simple gain model alone cannot explain auditory selective attention. In nonprimary auditory cortices, attention-driven short-term plasticity retunes neurons to segregate relevant sounds from noise. PMID:21368107
Moradi, Shahram; Wahlin, Anna; Hällgren, Mathias; Rönnberg, Jerker; Lidestam, Björn
2017-01-01
This study aimed to examine the efficacy and maintenance of short-term (one-session) gated audiovisual speech training for improving auditory sentence identification in noise in experienced elderly hearing-aid users. Twenty-five hearing aid users (16 men and 9 women), with an average age of 70.8 years, were randomly divided into an experimental (audiovisual training, n = 14) and a control (auditory training, n = 11) group. Participants underwent gated speech identification tasks comprising Swedish consonants and words presented at 65 dB sound pressure level with a 0 dB signal-to-noise ratio (steady-state broadband noise), in audiovisual or auditory-only training conditions. The Hearing-in-Noise Test was employed to measure participants' auditory sentence identification in noise before the training (pre-test), promptly after training (post-test), and 1 month after training (one-month follow-up). The results showed that audiovisual training improved auditory sentence identification in noise promptly after the training (post-test vs. pre-test scores); furthermore, this improvement was maintained 1 month after the training (one-month follow-up vs. pre-test scores). No such improvement was observed in the control group, either promptly after the training or at the one-month follow-up. However, neither a significant between-groups difference nor a group-by-session interaction was observed. Conclusion: Audiovisual training may be considered in the aural rehabilitation of hearing aid users to improve listening capabilities in noisy conditions. However, the lack of a significant between-groups effect (audiovisual vs. auditory) or of a group-by-session interaction calls for further research. PMID:28348542
Impact of Spatial and Verbal Short-Term Memory Load on Auditory Spatial Attention Gradients.
Golob, Edward J; Winston, Jenna; Mock, Jeffrey R
2017-01-01
Short-term memory load can impair attentional control, but prior work shows that the extent of the effect ranges from being very general to very specific. One factor for the mixed results may be reliance on point estimates of memory load effects on attention. Here we used auditory attention gradients as an analog measure to map-out the impact of short-term memory load over space. Verbal or spatial information was maintained during an auditory spatial attention task and compared to no-load. Stimuli were presented from five virtual locations in the frontal azimuth plane, and subjects focused on the midline. Reaction times progressively increased for lateral stimuli, indicating an attention gradient. Spatial load further slowed responses at lateral locations, particularly in the left hemispace, but had little effect at midline. Verbal memory load had no (Experiment 1), or a minimal (Experiment 2) influence on reaction times. Spatial and verbal load increased switch costs between memory encoding and attention tasks relative to the no load condition. The findings show that short-term memory influences the distribution of auditory attention over space; and that the specific pattern depends on the type of information in short-term memory.
Neural basis of processing threatening voices in a crowded auditory world
Mothes-Lasch, Martin; Becker, Michael P. I.; Miltner, Wolfgang H. R.
2016-01-01
In real world situations, we typically listen to voice prosody against a background crowded with auditory stimuli. Voices and background can both contain behaviorally relevant features and both can be selectively in the focus of attention. Adequate responses to threat-related voices under such conditions require that the brain unmixes reciprocally masked features depending on variable cognitive resources. It is unknown which brain systems instantiate the extraction of behaviorally relevant prosodic features under varying combinations of prosody valence, auditory background complexity and attentional focus. Here, we used event-related functional magnetic resonance imaging to investigate the effects of high background sound complexity and attentional focus on brain activation to angry and neutral prosody in humans. Results show that prosody effects in mid superior temporal cortex were gated by background complexity but not attention, while prosody effects in the amygdala and anterior superior temporal cortex were gated by attention but not background complexity, suggesting distinct emotional prosody processing limitations in different regions. Crucially, if attention was focused on the highly complex background, the differential processing of emotional prosody was prevented in all brain regions, suggesting that in a distracting, complex auditory world even threatening voices may go unnoticed. PMID:26884543
Selective Attention and Sensory Modality in Aging: Curses and Blessings.
Van Gerven, Pascal W M; Guerreiro, Maria J S
2016-01-01
The notion that selective attention is compromised in older adults as a result of impaired inhibitory control is well established. Yet it is primarily based on empirical findings covering the visual modality. Auditory and, especially, cross-modal selective attention are remarkably underexposed in the literature on aging. In the past 5 years, we have attempted to fill these voids by investigating performance of younger and older adults on equivalent tasks covering all four combinations of visual or auditory target, and visual or auditory distractor information. In doing so, we have demonstrated that older adults are especially impaired in auditory selective attention with visual distraction. This pattern was not mirrored by our psychophysiological studies, however, in which both enhancement of target processing and suppression of distractor processing appeared to be age equivalent. We currently conclude that: (1) age-related differences in selective attention are modality dependent; (2) age-related differences in selective attention are limited; and (3) it remains an open question whether modality-specific age differences in selective attention are due to impaired distractor inhibition, impaired target enhancement, or both. These conclusions put the longstanding inhibitory deficit hypothesis of aging in a new perspective.
Individualization of music-based rhythmic auditory cueing in Parkinson's disease.
Bella, Simone Dalla; Dotov, Dobromir; Bardy, Benoît; de Cock, Valérie Cochen
2018-06-04
Gait dysfunctions in Parkinson's disease can be partly relieved by rhythmic auditory cueing. This consists of asking patients to walk with a rhythmic auditory stimulus such as a metronome or music. The effect on gait is visible immediately in terms of increased speed and stride length. Moreover, training programs based on rhythmic cueing can have long-term benefits. The effect of rhythmic cueing, however, varies from one patient to another. Patients' response to the stimulation may depend on rhythmic abilities, which often deteriorate with the disease. Relatively spared abilities to track the beat favor a positive response to rhythmic cueing. On the other hand, most patients with poor rhythmic abilities either do not respond to the cues or experience gait worsening when walking with cues. An individualized approach to rhythmic auditory cueing with music is proposed to cope with this variability in patients' response. This approach calls for using assistive mobile technologies capable of delivering cues that adapt in real time to patients' gait kinematics, thus affording step synchronization to the beat. Individualized rhythmic cueing can provide a safe and cost-effective alternative to standard cueing that patients may want to use in their everyday lives. © 2018 New York Academy of Sciences.
Differential cognitive and perceptual correlates of print reading versus braille reading.
Veispak, Anneli; Boets, Bart; Ghesquière, Pol
2013-01-01
The relations between reading, auditory, speech, phonological and tactile spatial processing are investigated in a Dutch speaking sample of blind braille readers as compared to sighted print readers. Performance is assessed in blind and sighted children and adults. Regarding phonological ability, braille readers perform equally well compared to print readers on phonological awareness, better on verbal short-term memory and significantly worse on lexical retrieval. The groups do not differ on speech perception or auditory processing. Braille readers, however, have more sensitive fingers than print readers. Investigation of the relations between these cognitive and perceptual skills and reading performance indicates that in the group of braille readers auditory temporal processing has a longer lasting and stronger impact not only on phonological abilities, which have to satisfy the high processing demands of the strictly serial language input, but also directly on the reading ability itself. Print readers switch between grapho-phonological and lexical reading modes depending on the familiarity of the items. Furthermore, the auditory temporal processing and speech perception, which were substantially interrelated with phonological processing, had no direct associations with print reading measures. Copyright © 2012 Elsevier Ltd. All rights reserved.
Luo, Hao; Ni, Jing-Tian; Li, Zhi-Hao; Li, Xiao-Ou; Zhang, Da-Ren; Zeng, Fan-Gang; Chen, Lin
2006-01-01
In tonal languages such as Mandarin Chinese, a lexical tone carries semantic information and is preferentially processed in the left brain hemisphere of native speakers as revealed by the functional MRI or positron emission tomography studies, which likely measure the temporally aggregated neural events including those at an attentive stage of auditory processing. Here, we demonstrate that early auditory processing of a lexical tone at a preattentive stage is actually lateralized to the right hemisphere. We frequently presented to native Mandarin Chinese speakers a meaningful auditory word with a consonant-vowel structure and infrequently varied either its lexical tone or initial consonant using an odd-ball paradigm to create a contrast resulting in a change in word meaning. The lexical tone contrast evoked a stronger preattentive response, as revealed by whole-head electric recordings of the mismatch negativity, in the right hemisphere than in the left hemisphere, whereas the consonant contrast produced an opposite pattern. Given the distinct acoustic features between a lexical tone and a consonant, this opposite lateralization pattern suggests the dependence of hemisphere dominance mainly on acoustic cues before speech input is mapped into a semantic representation in the processing stream. PMID:17159136
Kryklywy, James H; Macpherson, Ewan A; Mitchell, Derek G V
2018-04-01
Emotion can have diverse effects on behaviour and perception, modulating function in some circumstances and sometimes having little effect. Recently, it was identified that part of the heterogeneity of emotional effects could be due to a dissociable representation of emotion in dual pathway models of sensory processing. Our previous fMRI experiment using traditional univariate analyses showed that emotion modulated processing in the auditory 'what' but not 'where' processing pathway. The current study aims to further investigate this dissociation using a more recently emerging multi-voxel pattern analysis (MVPA) searchlight approach. While undergoing fMRI, participants localized sounds of varying emotional content. A searchlight multi-voxel pattern analysis was conducted to identify activity patterns predictive of sound location and/or emotion. Relative to the prior univariate analysis, MVPA indicated larger overlapping spatial and emotional representations of sound within early secondary regions associated with auditory localization. However, consistent with the univariate analysis, these two dimensions were increasingly segregated in late secondary and tertiary regions of the auditory processing streams. These results, while complementary to our original univariate analyses, highlight the utility of multiple analytic approaches for neuroimaging, particularly for neural processes with known representations dependent on population coding.
Edgar, J Christopher; Fisk, Charles L; Liu, Song; Pandey, Juhi; Herrington, John D; Schultz, Robert T; Roberts, Timothy P L
2016-01-01
γ (∼30-80 Hz) brain rhythms are thought to be abnormal in neurodevelopmental disorders such as schizophrenia and autism spectrum disorder (ASD). In adult populations, auditory 40-Hz click trains or 40-Hz amplitude-modulated tones are used to assess the integrity of superior temporal gyrus (STG) 40-Hz γ-band circuits. As STG 40-Hz auditory steady-state responses (ASSRs) are not fully developed in children, tasks using these stimuli may not be optimal in younger patient populations. The present study examined this issue in typically developing (TD) children as well as in children with ASD, using source localization to directly assess activity in the principal generators of the 40-Hz ASSR in the left and right primary/secondary auditory cortices. 40-Hz amplitude-modulated tones of 1 s duration were binaurally presented while magnetoencephalography data were obtained from 48 TD children (45 males; 7-14 years old) and 42 ASD children (38 males; 8-14 years old). T1-weighted structural MRI was obtained. Using single dipoles anatomically constrained to each participant's left and right Heschl's Gyrus, left and right 40-Hz ASSR total power (TP) and intertrial coherence (ITC) measures were obtained. Associations between 40-Hz ASSR TP, ITC and age as well as STG gray matter cortical thickness (CT) were assessed. Group STG function and structure differences were also examined. TD and ASD did not differ in 40-Hz ASSR TP or ITC. In TD and ASD, age was associated with left and right 40-Hz ASSR ITC (p < 0.01). The interaction term was not significant, indicating in both groups a ∼0.01/year increase in ITC. 40-Hz ASSR TP and ITC were greater in the right than left STG. Groups did not differ in STG CT, and no associations were observed between 40-Hz ASSR activity and STG CT. Finally, right STG transient γ (50-100 ms and 30-50 Hz) was greater in TD versus ASD (significant for TP, trend for ITC).
The 40-Hz ASSR develops, in part, via an age-related increase in neural synchrony. Greater right than left 40-Hz ASSRs (ITC and TP) suggested earlier maturation of right versus left STG neural network(s). Given a ∼0.01/year increase in ITC, 40-Hz ASSRs were weak or absent in many of the younger participants, suggesting that 40-Hz driving stimuli are not optimal for examining STG 40-Hz auditory neural circuits in younger populations. Given the caveat that 40-Hz auditory steady-state neural networks are poorly assessed in children, the present analyses did not point to atypical development of STG 40-Hz ASSRs in higher-functioning children with ASD. Although groups did not differ in 40-Hz auditory steady-state activity, replicating previous studies, there was evidence for greater right STG transient γ activity in TD versus ASD. © 2016 S. Karger AG, Basel.
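The intertrial coherence (ITC) measure central to this study is conventionally the magnitude of the across-trial mean of unit phase vectors at the stimulation frequency. A minimal sketch in Python (NumPy) on synthetic data; the sampling rate, trial count, and noise level are chosen purely for illustration, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur, f0 = 1000, 1.0, 40.0   # sampling rate (Hz), trial length (s), ASSR frequency
t = np.arange(int(fs * dur)) / fs
n_trials = 60

def itc_at(freq, trials, fs):
    """Intertrial coherence at one frequency: magnitude of the across-trial
    mean of unit phase vectors (1 = perfect phase locking, ~0 = random phase)."""
    spectra = np.fft.rfft(trials, axis=1)
    k = int(round(freq * trials.shape[1] / fs))   # FFT bin index for `freq`
    phases = np.angle(spectra[:, k])
    return np.abs(np.mean(np.exp(1j * phases)))

# A phase-locked 40-Hz response buried in noise, versus noise alone.
locked = np.sin(2 * np.pi * f0 * t) + rng.normal(0, 2.0, (n_trials, t.size))
noise = rng.normal(0, 2.0, (n_trials, t.size))

itc_locked = itc_at(f0, locked, fs)   # close to 1
itc_noise = itc_at(f0, noise, fs)     # small, roughly 1/sqrt(n_trials)
```

Weak or absent phase locking, as reported for younger participants, would show up as an ITC value near the noise floor rather than near 1.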
Alderson-Day, Ben; Diederen, Kelly; Fernyhough, Charles; Ford, Judith M; Horga, Guillermo; Margulies, Daniel S; McCarthy-Jones, Simon; Northoff, Georg; Shine, James M; Turner, Jessica; van de Ven, Vincent; van Lutterveld, Remko; Waters, Flavie; Jardri, Renaud
2016-09-01
In recent years, there has been increasing interest in the potential for alterations to the brain's resting-state networks (RSNs) to explain various kinds of psychopathology. RSNs provide an intriguing new explanatory framework for hallucinations, which can occur in different modalities and population groups, but which remain poorly understood. This collaboration from the International Consortium on Hallucination Research (ICHR) reports on the evidence linking resting-state alterations to auditory hallucinations (AH) and provides a critical appraisal of the methodological approaches used in this area. In the report, we describe findings from resting connectivity fMRI in AH (in schizophrenia and nonclinical individuals) and compare them with findings from neurophysiological research, structural MRI, and research on visual hallucinations (VH). In AH, various studies show resting connectivity differences in left-hemisphere auditory and language regions, as well as atypical interaction of the default mode network and RSNs linked to cognitive control and salience. As the latter are also evident in studies of VH, this points to a domain-general mechanism for hallucinations alongside modality-specific changes to RSNs in different sensory regions. However, we also observed high methodological heterogeneity in the current literature, affecting the ability to make clear comparisons between studies. To address this, we provide some methodological recommendations and options for future research on the resting state and hallucinations. © The Author 2016. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center.
Changes in resting-state connectivity in musicians with embouchure dystonia.
Haslinger, Bernhard; Noé, Jonas; Altenmüller, Eckart; Riedl, Valentin; Zimmer, Claus; Mantel, Tobias; Dresel, Christian
2017-03-01
Embouchure dystonia is a highly disabling task-specific dystonia in professional brass musicians leading to spasms of perioral muscles while playing the instrument. As they are asymptomatic at rest, resting-state functional magnetic resonance imaging in these patients can reveal changes in functional connectivity within and between brain networks independently of dystonic symptoms. We therefore compared embouchure dystonia patients to healthy musicians with resting-state functional magnetic resonance imaging in combination with independent component analyses. Patients showed increased functional connectivity of the bilateral sensorimotor mouth area and right secondary somatosensory cortex, but reduced functional connectivity of the bilateral sensorimotor hand representation, left inferior parietal cortex, and mesial premotor cortex within the lateral motor function network. Within the auditory function network, the functional connectivity of bilateral secondary auditory cortices, right posterior parietal cortex and left sensorimotor hand area was increased, while the functional connectivity of right primary auditory cortex, right secondary somatosensory cortex, right sensorimotor mouth representation, bilateral thalamus, and anterior cingulate cortex was reduced. Negative functional connectivity between the cerebellar and lateral motor function network and positive functional connectivity between the cerebellar and primary visual network were reduced. Abnormal resting-state functional connectivity of sensorimotor representations of affected and unaffected body parts suggests a pathophysiological predisposition for abnormal sensorimotor and audiomotor integration in embouchure dystonia. Altered connectivity to the cerebellar network highlights the important role of the cerebellum in this disease. © 2016 International Parkinson and Movement Disorder Society.
Neural Entrainment to Rhythmically Presented Auditory, Visual, and Audio-Visual Speech in Children
Power, Alan James; Mead, Natasha; Barnes, Lisa; Goswami, Usha
2012-01-01
Auditory cortical oscillations have been proposed to play an important role in speech perception. It is suggested that the brain may take temporal “samples” of information from the speech stream at different rates, phase resetting ongoing oscillations so that they are aligned with similar frequency bands in the input (“phase locking”). Information from these frequency bands is then bound together for speech perception. To date, there are no explorations of neural phase locking and entrainment to speech input in children. However, it is clear from studies of language acquisition that infants use both visual speech information and auditory speech information in learning. In order to study neural entrainment to speech in typically developing children, we use a rhythmic entrainment paradigm (underlying 2 Hz or delta rate) based on repetition of the syllable “ba,” presented in either the auditory modality alone, the visual modality alone, or as auditory-visual speech (via a “talking head”). To ensure attention to the task, children aged 13 years were asked to press a button as fast as possible when the “ba” stimulus violated the rhythm for each stream type. Rhythmic violation depended on delaying the occurrence of a “ba” in the isochronous stream. Neural entrainment was demonstrated for all stream types, and individual differences in standardized measures of language processing were related to auditory entrainment at the theta rate. Further, there was significant modulation of the preferred phase of auditory entrainment in the theta band when visual speech cues were present, indicating cross-modal phase resetting. The rhythmic entrainment paradigm developed here offers a method for exploring individual differences in oscillatory phase locking during development. In particular, a method for assessing neural entrainment and cross-modal phase resetting would be useful for exploring developmental learning difficulties thought to involve temporal sampling, such as dyslexia. 
PMID:22833726
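The "preferred phase" of entrainment reported above is typically the circular mean of single-trial oscillatory phases, and a cross-modal phase reset shows up as a shift in that mean between conditions. A minimal sketch in Python (NumPy) on synthetic phase data; the von Mises concentration, trial count, and the ~0.8 rad audio-visual shift are hypothetical values for illustration, not results from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 80

# Hypothetical theta-band phases at stimulus onset for two stream types,
# drawn from von Mises distributions with different mean directions.
aud_phase = rng.vonmises(mu=0.2, kappa=4.0, size=n_trials)   # auditory only
av_phase = rng.vonmises(mu=1.0, kappa=4.0, size=n_trials)    # audio-visual

def preferred_phase(phases):
    """Circular mean: the angle of the resultant of unit phase vectors."""
    return np.angle(np.mean(np.exp(1j * phases)))

# Cross-modal phase shift, wrapped into (-pi, pi].
shift = preferred_phase(av_phase) - preferred_phase(aud_phase)
shift = np.angle(np.exp(1j * shift))
print(f"cross-modal phase shift: {shift:.2f} rad")
```

Ordinary arithmetic means are wrong for angles (e.g. the mean of -179° and +179° is not 0°), which is why the resultant-vector form is used here.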
Evaluation of peripheral auditory pathways and brainstem in obstructive sleep apnea.
Matsumura, Erika; Matas, Carla Gentile; Magliaro, Fernanda Cristina Leite; Pedreño, Raquel Meirelles; Lorenzi-Filho, Geraldo; Sanches, Seisse Gabriela Gandolfi; Carvallo, Renata Mota Mamede
2016-11-25
Obstructive sleep apnea causes changes in normal sleep architecture, fragmenting it chronically with intermittent hypoxia and leading to serious long-term health consequences. It is believed that the occurrence of respiratory events during sleep, such as apnea and hypopnea, can impair the transmission of nerve impulses along the auditory pathway, which is highly dependent on the supply of oxygen. However, this association is not well established in the literature. To compare the evaluation of the peripheral auditory pathway and brainstem among individuals with and without obstructive sleep apnea. The sample consisted of 38 adult males, mean age 35.8 (±7.2) years, divided into four groups matched for age and Body Mass Index. The groups were classified based on polysomnography into: control (n=10), mild obstructive sleep apnea (n=11), moderate obstructive sleep apnea (n=8), and severe obstructive sleep apnea (n=9). All study subjects denied a history of risk for hearing loss and underwent audiometry, tympanometry, acoustic reflex testing, and Brainstem Auditory Evoked Response. Statistical analyses were performed using three-factor ANOVA, two-factor ANOVA, the chi-square test, and Fisher's exact test. The significance level for all tests was 5%. There was no difference between the groups for hearing thresholds, tympanometry, or the evaluated Brainstem Auditory Evoked Response parameters. An association was observed between the presence of obstructive sleep apnea and changes in the absolute latency of wave V (p=0.03). There was an association between moderate obstructive sleep apnea and a change in the latency of wave V (p=0.01). The presence of obstructive sleep apnea is associated with changes in nerve conduction of acoustic stimuli in the auditory pathway in the brainstem. The increase in obstructive sleep apnea severity does not promote worsening of responses assessed by audiometry, tympanometry, and Brainstem Auditory Evoked Response.
Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Neural correlates of short-term memory in primate auditory cortex
Bigelow, James; Rossi, Breein; Poremba, Amy
2014-01-01
Behaviorally-relevant sounds such as conspecific vocalizations are often available for only a brief amount of time; thus, goal-directed behavior frequently depends on auditory short-term memory (STM). Despite its ecological significance, the neural processes underlying auditory STM remain poorly understood. To investigate the role of the auditory cortex in STM, single- and multi-unit activity was recorded from the primary auditory cortex (A1) of two monkeys performing an auditory STM task using simple and complex sounds. Each trial consisted of a sample and test stimulus separated by a 5-s retention interval. A brief wait period followed the test stimulus, after which subjects pressed a button if the sounds were identical (match trials) or withheld button presses if they were different (non-match trials). A number of units exhibited significant changes in firing rate for portions of the retention interval, although these changes were rarely sustained. Instead, they were most frequently observed during the early and late portions of the retention interval, with inhibition being observed more frequently than excitation. At the population level, responses elicited on match trials were briefly suppressed early in the sound period relative to non-match trials. However, during the latter portion of the sound, firing rates increased significantly for match trials and remained elevated throughout the wait period. Related patterns of activity were observed in prior experiments from our lab in the dorsal temporal pole (dTP) and prefrontal cortex (PFC) of the same animals. The data suggest that early match suppression occurs in both A1 and the dTP, whereas later match enhancement occurs first in the PFC, followed by A1 and later in dTP. Because match enhancement occurs first in the PFC, we speculate that enhancement observed in A1 and dTP may reflect top–down feedback. Overall, our findings suggest that A1 forms part of the larger neural system recruited during auditory STM. 
PMID:25177266
Schoppe, Oliver; King, Andrew J.; Schnupp, Jan W.H.; Harper, Nicol S.
2016-01-01
Adaptation to stimulus statistics, such as the mean level and contrast of recently heard sounds, has been demonstrated at various levels of the auditory pathway. It allows the nervous system to operate over the wide range of intensities and contrasts found in the natural world. Yet current standard models of the response properties of auditory neurons do not incorporate such adaptation. Here we present a model of neural responses in the ferret auditory cortex (the IC Adaptation model), which takes into account adaptation to mean sound level at a lower level of processing: the inferior colliculus (IC). The model performs high-pass filtering with frequency-dependent time constants on the sound spectrogram, followed by half-wave rectification, and passes the output to a standard linear–nonlinear (LN) model. We find that the IC Adaptation model consistently predicts cortical responses better than the standard LN model for a range of synthetic and natural stimuli. The IC Adaptation model introduces no extra free parameters, so it improves predictions without sacrificing parsimony. Furthermore, the time constants of adaptation in the IC appear to be matched to the statistics of natural sounds, suggesting that neurons in the auditory midbrain predict the mean level of future sounds and adapt their responses appropriately. SIGNIFICANCE STATEMENT An ability to accurately predict how sensory neurons respond to novel stimuli is critical if we are to fully characterize their response properties. Attempts to model these responses have had a distinguished history, but it has proven difficult to improve their predictive power significantly beyond that of simple, mostly linear receptive field models. Here we show that auditory cortex receptive field models benefit from a nonlinear preprocessing stage that replicates known adaptation properties of the auditory midbrain. 
This improves their predictive power across a wide range of stimuli but keeps model complexity low as it introduces no new free parameters. Incorporating the adaptive coding properties of neurons will likely improve receptive field models in other sensory modalities too. PMID:26758822
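The preprocessing stage described above — per-channel high-pass filtering of the sound spectrogram with frequency-dependent time constants, followed by half-wave rectification — can be sketched as subtraction of a leaky running mean. This is a loose illustration of the idea, not the authors' implementation; the exponential-mean form, time constants, and bin size below are assumptions:

```python
import numpy as np

def ic_adaptation(spectrogram, taus, dt=0.005):
    """Sketch of an IC-Adaptation-style preprocessing stage.

    For each frequency channel, subtract an exponential running mean with a
    channel-specific time constant (high-pass filtering, i.e. adaptation to
    mean level), then half-wave rectify. The output would then feed a
    standard linear-nonlinear (LN) model.

    spectrogram: array (n_channels, n_timebins)
    taus: per-channel adaptation time constants in seconds
    dt: spectrogram time-bin width in seconds
    """
    out = np.zeros_like(spectrogram)
    mean = spectrogram[:, 0].copy()            # running mean per channel
    alphas = dt / np.asarray(taus, dtype=float)  # leak per time bin
    for i in range(spectrogram.shape[1]):
        mean += alphas * (spectrogram[:, i] - mean)   # adapt toward current level
        out[:, i] = np.maximum(spectrogram[:, i] - mean, 0.0)  # HP + rectify
    return out

# A sustained level step in one channel: the adapted output peaks at onset
# and decays back toward zero as the running mean catches up.
spec = np.zeros((1, 400))
spec[0, 100:] = 1.0
resp = ic_adaptation(spec, taus=[0.1], dt=0.005)
```

The transient-onset response followed by decay is the signature of adaptation to mean sound level; crucially, such a stage adds no free parameters to fit, consistent with the parsimony point made in the abstract.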