Sample records for multichannel auditory brain

  1. Consequences of Broad Auditory Filters for Identification of Multichannel-Compressed Vowels

    ERIC Educational Resources Information Center

    Souza, Pamela; Wright, Richard; Bor, Stephanie

    2012-01-01

    Purpose: In view of previous findings (Bor, Souza, & Wright, 2008) that some listeners are more susceptible to spectral changes from multichannel compression (MCC) than others, this study addressed the extent to which differences in effects of MCC were related to differences in auditory filter width. Method: Listeners were recruited in 3 groups:…

  2. Cooperative dynamics in auditory brain response

    NASA Astrophysics Data System (ADS)

    Kwapień, J.; Drożdż, S.; Liu, L. C.; Ioannides, A. A.

    1998-11-01

    Simultaneous estimates of activity in the left and right auditory cortex of five normal human subjects were extracted from multichannel magnetoencephalography recordings. Left, right, and binaural stimulations were used, in separate runs, for each subject. The resulting time series of left and right auditory cortex activity were analyzed using the concept of mutual information. The analysis constitutes an objective method to address the nature of interhemispheric correlations in response to auditory stimulations. The results provide clear evidence of the occurrence of such correlations mediated by a direct information transport, with clear laterality effects: as a rule, the contralateral hemisphere leads by 10-20 ms, as can be seen in the average signal. The strength of the interhemispheric coupling, which cannot be extracted from the average data, is found to be highly variable from subject to subject, but remarkably stable for each subject.
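
The interhemispheric analysis described in this record can be sketched in a few lines: a histogram estimator of mutual information between the two cortical time series, scanned across time lags to find which hemisphere leads. The bin count, lag range, and test signals are illustrative assumptions, not the authors' implementation.

```python
import math
import random

def mutual_information(x, y, bins=8):
    """Histogram estimate of I(X;Y) in bits between two sample sequences."""
    n = len(x)
    def idx(v, lo, hi):
        # map a value into one of `bins` equal-width bins, clamped to the last bin
        return min(bins - 1, int((v - lo) / (hi - lo + 1e-12) * bins))
    lx, hx, ly, hy = min(x), max(x), min(y), max(y)
    joint = [[0] * bins for _ in range(bins)]
    for xi, yi in zip(x, y):
        joint[idx(xi, lx, hx)][idx(yi, ly, hy)] += 1
    px = [sum(row) / n for row in joint]
    py = [sum(joint[i][j] for i in range(bins)) / n for j in range(bins)]
    mi = 0.0
    for i in range(bins):
        for j in range(bins):
            pxy = joint[i][j] / n
            if pxy > 0:
                mi += pxy * math.log2(pxy / (px[i] * py[j]))
    return mi

def leading_lag(left, right, max_lag=10):
    """Lag (in samples) maximizing I(left[t]; right[t + lag]).
    A positive result means `right` is a delayed copy, i.e. `left` leads."""
    def pairs(lag):
        if lag >= 0:
            return left[:len(left) - lag], right[lag:]
        return left[-lag:], right[:len(right) + lag]
    return max(range(-max_lag, max_lag + 1),
               key=lambda lag: mutual_information(*pairs(lag)))
```

A delayed copy of a signal is recovered at its true lag because the mutual information peaks where the paired samples become deterministic functions of each other.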

  3. Consequences of broad auditory filters for identification of multichannel-compressed vowels

    PubMed Central

    Souza, Pamela; Wright, Richard; Bor, Stephanie

    2012-01-01

    Purpose: In view of previous findings (Bor, Souza & Wright, 2008) that some listeners are more susceptible to spectral changes from multichannel compression (MCC) than others, this study addressed the extent to which differences in effects of MCC were related to differences in auditory filter width. Method: Listeners were recruited in three groups: listeners with flat sensorineural loss, listeners with sloping sensorineural loss, and a control group of listeners with normal hearing. Individual auditory filter measurements were obtained at 500 and 2000 Hz. The filter widths were related to identification of vowels processed with 16-channel MCC and with a control (linear) condition. Results: Listeners with flat loss had broader filters at 500 Hz but not at 2000 Hz, compared to listeners with sloping loss. Vowel identification was poorer for MCC compared to linear amplification. Listeners with flat loss made more errors than listeners with sloping loss, and there was a significant relationship between filter width and the effects of MCC. Conclusions: Broadened auditory filters can reduce the ability to process amplitude-compressed vowel spectra. This suggests that individual frequency selectivity is one factor which influences benefit of MCC, when a high number of compression channels are used. PMID:22207696

  4. The auditory and non-auditory brain areas involved in tinnitus. An emergent property of multiple parallel overlapping subnetworks

    PubMed Central

    Vanneste, Sven; De Ridder, Dirk

    2012-01-01

    Tinnitus is the perception of a sound in the absence of an external sound source. It is characterized by sensory components such as the perceived loudness, the lateralization, the tinnitus type (pure tone, noise-like) and associated emotional components, such as distress and mood changes. Source localization of quantitative electroencephalography (qEEG) data demonstrates the involvement of auditory brain areas as well as several non-auditory brain areas such as the anterior cingulate cortex (dorsal and subgenual), auditory cortex (primary and secondary), dorsal lateral prefrontal cortex, insula, supplementary motor area, orbitofrontal cortex (including the inferior frontal gyrus), parahippocampus, posterior cingulate cortex and the precuneus, in different aspects of tinnitus. Explaining these non-auditory brain areas as constituents of separable subnetworks, each reflecting a specific aspect of the tinnitus percept, increases the explanatory power of the non-auditory brain areas' involvement in tinnitus. Thus, the unified percept of tinnitus can be considered an emergent property of multiple parallel dynamically changing and partially overlapping subnetworks, each with a specific spontaneous oscillatory pattern and functional connectivity signature. PMID:22586375

  5. Opposite brain laterality in analogous auditory and visual tests.

    PubMed

    Oltedal, Leif; Hugdahl, Kenneth

    2017-11-01

    Laterality for language processing can be assessed by auditory and visual tasks. Typically, a right ear/right visual half-field (VHF) advantage is observed, reflecting left-hemispheric lateralization for language. Historically, auditory tasks have shown more consistent and reliable results when compared to VHF tasks. While few studies have compared analogous tasks applied to both sensory modalities for the same participants, one such study by Voyer and Boudreau [(2003). Cross-modal correlation of auditory and visual language laterality tasks: a serendipitous finding. Brain Cogn, 53(2), 393-397] found opposite laterality for visual and auditory language tasks. We adapted an experimental paradigm based on a dichotic listening and VHF approach, and applied the combined language paradigm in two separate experiments, including fMRI in the second experiment to measure brain activation in addition to behavioural data. The first experiment showed a right-ear advantage for the auditory task, but a left half-field advantage for the visual task. The second experiment confirmed these findings, with opposite laterality effects for the visual and auditory tasks. In conclusion, we replicate the finding by Voyer and Boudreau (2003) and support their interpretation that these visual and auditory language tasks measure different cognitive processes.

  6. Optimal electrode selection for multi-channel electroencephalogram based detection of auditory steady-state responses.

    PubMed

    Van Dun, Bram; Wouters, Jan; Moonen, Marc

    2009-07-01

    Auditory steady-state responses (ASSRs) are used for hearing threshold estimation at audiometric frequencies. Hearing impaired newborns, in particular, benefit from this technique as it allows for a more precise diagnosis than traditional techniques, and a hearing aid can be better fitted at an early age. However, measurement duration of current single-channel techniques is still too long for widespread clinical use. This paper evaluates the practical performance of a multi-channel electroencephalogram (EEG) processing strategy based on a detection theory approach. A minimum electrode set is determined for ASSRs with frequencies between 80 and 110 Hz using eight-channel EEG measurements of ten normal-hearing adults. This set provides a near-optimal hearing threshold estimate for all subjects and improves response detection significantly for EEG data with numerous artifacts. Multi-channel processing does not significantly improve response detection for EEG data with few artifacts. In this case, best response detection is obtained when noise-weighted averaging is applied on single-channel data. The same test setup (eight channels, ten normal-hearing subjects) is also used to determine a minimum electrode setup for 10-Hz ASSRs. This configuration allows near-optimal signal-to-noise ratios to be recorded for 80% of subjects.
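
The noise-weighted averaging mentioned as the best single-channel strategy can be illustrated with a minimal sketch: each epoch is weighted by the inverse of its estimated noise variance, so artifact-laden trials contribute less to the averaged response. Using the whole-epoch variance as the noise proxy is a simplifying assumption, not the paper's estimator.

```python
def noise_weighted_average(epochs, noise_estimator=None):
    """Average single-channel EEG epochs with inverse-variance weights."""
    def variance(e):
        m = sum(e) / len(e)
        return sum((v - m) ** 2 for v in e) / len(e)
    est = noise_estimator or variance
    # one weight per epoch: noisier epochs get smaller weights
    weights = [1.0 / max(est(e), 1e-12) for e in epochs]
    total = sum(weights)
    n = len(epochs[0])
    return [sum(w * e[t] for w, e in zip(weights, epochs)) / total
            for t in range(n)]
```

With two clean epochs and one artifact-dominated epoch, the result stays close to the clean response, whereas a plain average would be pulled far off.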

  7. Optimal spatiotemporal representation of multichannel EEG for recognition of brain states associated with distinct visual stimulus

    NASA Astrophysics Data System (ADS)

    Hramov, Alexander; Musatov, Vyacheslav Yu.; Runnova, Anastasija E.; Efremova, Tatiana Yu.; Koronovskii, Alexey A.; Pisarchik, Alexander N.

    2018-04-01

    In this paper we propose an approach based on artificial neural networks for recognition of different human brain states associated with distinct visual stimuli. Based on the developed numerical technique and the analysis of obtained experimental multichannel EEG data, we optimize the spatiotemporal representation of multichannel EEG to provide close to 97% accuracy in recognition of the EEG brain states during visual perception. Different interpretations of an ambiguous image produce different oscillatory patterns in the human EEG, with similar features for every interpretation. Since these features are inherent to all subjects, a single artificial neural network can classify the associated brain states of other subjects with high accuracy.

  8. Tinnitus alters resting state functional connectivity (RSFC) in human auditory and non-auditory brain regions as measured by functional near-infrared spectroscopy (fNIRS)

    PubMed Central

    Hu, Xiao-Su; Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory

    2017-01-01

    Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus to date have not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing, bilateral subjective tinnitus and controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices) and functional brain connectivity was measured during a 60-second baseline/period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000 Hz), broadband noise and silence. Functional connectivity was measured between all channel-pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to conscious phantom…
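
Resting-state functional connectivity "between all channel-pairs", as used in this record, typically reduces to a correlation matrix over channel time series. A minimal sketch (Pearson correlation as the connectivity measure, which is a common choice but not necessarily the authors' exact metric):

```python
import math

def pearson(a, b):
    """Pearson correlation between two equal-length channel time series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def connectivity_matrix(channels):
    """Correlation between all channel pairs of a multichannel recording."""
    return [[pearson(ci, cj) for cj in channels] for ci in channels]
```

Comparing such matrices before and after stimulation, and between groups, is what yields the "enhanced" or "decreased" connectivity statements in the abstract.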

  9. A Brain System for Auditory Working Memory.

    PubMed

    Kumar, Sukhbinder; Joseph, Sabine; Gander, Phillip E; Barascud, Nicolas; Halpern, Andrea R; Griffiths, Timothy D

    2016-04-20

    The brain basis for auditory working memory, the process of actively maintaining sounds in memory over short periods of time, is controversial. Using functional magnetic resonance imaging in human participants, we demonstrate that the maintenance of single tones in memory is associated with activation in auditory cortex. In addition, sustained activation was observed in hippocampus and inferior frontal gyrus. Multivoxel pattern analysis showed that patterns of activity in auditory cortex and left inferior frontal gyrus distinguished the tone that was maintained in memory. Functional connectivity during maintenance was demonstrated between auditory cortex and both the hippocampus and inferior frontal cortex. The data support a system for auditory working memory based on the maintenance of sound-specific representations in auditory cortex by projections from higher-order areas, including the hippocampus and frontal cortex. In this work, we demonstrate a system for maintaining sound in working memory based on activity in auditory cortex, hippocampus, and frontal cortex, and functional connectivity among them. Specifically, our work makes three advances from the previous work. First, we robustly demonstrate hippocampal involvement in all phases of auditory working memory (encoding, maintenance, and retrieval): the role of hippocampus in working memory is controversial. Second, using a pattern classification technique, we show that activity in the auditory cortex and inferior frontal gyrus is specific to the maintained tones in working memory. Third, we show long-range connectivity of auditory cortex to hippocampus and frontal cortex, which may be responsible for keeping such representations active during working memory maintenance. Copyright © 2016 Kumar et al.

  10. Thalamic and parietal brain morphology predicts auditory category learning.

    PubMed

    Scharinger, Mathias; Henry, Molly J; Erb, Julia; Meyer, Lars; Obleser, Jonas

    2014-01-01

    Auditory categorization is a vital skill involving the attribution of meaning to acoustic events, engaging domain-specific (i.e., auditory) as well as domain-general (e.g., executive) brain networks. A listener's ability to categorize novel acoustic stimuli should therefore depend on both, with the domain-general network being particularly relevant for adaptively changing listening strategies and directing attention to relevant acoustic cues. Here we assessed adaptive listening behavior, using complex acoustic stimuli with an initially salient (but later degraded) spectral cue and a secondary, duration cue that remained nondegraded. We employed voxel-based morphometry (VBM) to identify cortical and subcortical brain structures whose individual neuroanatomy predicted task performance and the ability to optimally switch to making use of temporal cues after spectral degradation. Behavioral listening strategies were assessed by logistic regression and revealed mainly strategy switches in the expected direction, with considerable individual differences. Gray-matter probability in the left inferior parietal lobule (BA 40) and left precentral gyrus was predictive of "optimal" strategy switch, while gray-matter probability in thalamic areas, comprising the medial geniculate body, co-varied with overall performance. Taken together, our findings suggest that successful auditory categorization relies on domain-specific neural circuits in the ascending auditory pathway, while adaptive listening behavior depends more on brain structure in parietal cortex, enabling the (re)direction of attention to salient stimulus properties. © 2013 Published by Elsevier Ltd.

  11. Impact of mild traumatic brain injury on auditory brain stem dysfunction in mouse model.

    PubMed

    Amanipour, Reza M; Frisina, Robert D; Cresoe, Samantha A; Parsons, Teresa J; Zhu, Xiaoxia; Borlongan, Cesario V; Walton, Joseph P

    2016-08-01

    The auditory brainstem response (ABR) is an electrophysiological test that examines the functionality of the auditory nerve and brainstem. Traumatic brain injury (TBI) can be detected if prolonged peak latency is observed in ABR measurements, since latency measures the neural conduction time in the brainstem, and an increase in latency can be a sign of pathological lesion at the auditory brainstem level. The ABR is elicited by brief sounds that can be used to measure hearing sensitivity as well as temporal processing. Reduction in peak amplitudes and increases in latency are indicative of dysfunction in the auditory nerve and/or central auditory pathways. In this study we used sixteen young adult mice that were divided into two groups: sham and mild traumatic brain injury (mTBI), with ABR measurements obtained prior to, and at 2, 6, and 14 weeks after injury. Abnormal ABRs were observed for the nine TBI cases as early as two weeks after injury and the deficits lasted for fourteen weeks after injury. Results indicated a significant reduction in the Peak 1 (P1) and Peak 4 (P4) amplitudes to the first noise burst, as well as an increase in P1 and P4 latencies following mTBI. These results are the first to demonstrate auditory sound processing deficits in a rodent model of mild TBI.

  12. Plastic brain mechanisms for attaining auditory temporal order judgment proficiency.

    PubMed

    Bernasconi, Fosco; Grivel, Jeremy; Murray, Micah M; Spierer, Lucas

    2010-04-15

    Accurate perception of the order of occurrence of sensory information is critical for the building up of coherent representations of the external world from ongoing flows of sensory inputs. While some psychophysical evidence reports that performance on temporal perception can improve, the underlying neural mechanisms remain unresolved. Using electrical neuroimaging analyses of auditory evoked potentials (AEPs), we identified the brain dynamics and mechanism supporting improvements in auditory temporal order judgment (TOJ) during the course of the first vs. latter half of the experiment. Training-induced changes in brain activity were first evident 43-76 ms post stimulus onset and followed from topographic, rather than pure strength, AEP modulations. Improvements in auditory TOJ accuracy thus followed from changes in the configuration of the underlying brain networks during the initial stages of sensory processing. Source estimations revealed an increase in the lateralization of initially bilateral posterior sylvian region (PSR) responses at the beginning of the experiment to left-hemisphere dominance at its end. Further supporting the critical role of left and right PSR in auditory TOJ proficiency, as the experiment progressed, responses in the left and right PSR went from being correlated to un-correlated. These collective findings provide insights on the neurophysiologic mechanism and plasticity of temporal processing of sounds and are consistent with models based on spike timing dependent plasticity. Copyright 2010 Elsevier Inc. All rights reserved.

  13. Brain Region-Specific Activity Patterns after Recent or Remote Memory Retrieval of Auditory Conditioned Fear

    ERIC Educational Resources Information Center

    Kwon, Jeong-Tae; Jhang, Jinho; Kim, Hyung-Su; Lee, Sujin; Han, Jin-Hee

    2012-01-01

    Memory is thought to be sparsely encoded throughout multiple brain regions forming unique memory trace. Although evidence has established that the amygdala is a key brain site for memory storage and retrieval of auditory conditioned fear memory, it remains elusive whether the auditory brain regions may be involved in fear memory storage or…

  14. Shaping the aging brain: role of auditory input patterns in the emergence of auditory cortical impairments

    PubMed Central

    Kamal, Brishna; Holman, Constance; de Villers-Sidani, Etienne

    2013-01-01

    Age-related impairments in the primary auditory cortex (A1) include poor tuning selectivity, neural desynchronization, and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain, and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signals and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and cortical myelin as previously documented in the aged group. Most of these changes reversed after returning the rats to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function. PMID:24062649

  15. Behavioral and electrophysiological auditory processing measures in traumatic brain injury after acoustically controlled auditory training: a long-term study

    PubMed Central

    Figueiredo, Carolina Calsolari; de Andrade, Adriana Neves; Marangoni-Castan, Andréa Tortosa; Gil, Daniela; Suriano, Italo Capraro

    2015-01-01

    Objective: To investigate the long-term efficacy of acoustically controlled auditory training in adults after traumatic brain injury. Methods: A total of six audiologically normal individuals aged between 20 and 37 years were studied. They suffered severe traumatic brain injury with diffuse axonal lesion and underwent an acoustically controlled auditory training program approximately one year before. The results obtained in the behavioral and electrophysiological evaluation of auditory processing immediately after acoustically controlled auditory training were compared to reassessment findings, one year later. Results: Quantitative analysis of the auditory brainstem response showed increased absolute latency of all waves and interpeak intervals, bilaterally, when comparing both evaluations. Moreover, the amplitude of all waves increased; the increase was statistically significant for wave V in the right ear and wave III in the left ear. As to P3, decreased latency and increased amplitude were found for both ears in reassessment. The previous and current behavioral assessments showed similar results, except for the staggered spondaic words in the left ear and the number of errors on the dichotic consonant-vowel test. Conclusion: The acoustically controlled auditory training was effective in the long term, since better latency and amplitude results were observed in the electrophysiological evaluation, in addition to stability of behavioral measures after one-year training. PMID:26676270

  16. Maximum-likelihood estimation of channel-dependent trial-to-trial variability of auditory evoked brain responses in MEG

    PubMed Central

    2014-01-01

    Background: We propose a mathematical model for multichannel assessment of the trial-to-trial variability of auditory evoked brain responses in magnetoencephalography (MEG). Methods: Following the work of de Munck et al., our approach is based on maximum likelihood estimation and involves an approximation of the spatio-temporal covariance of the contaminating background noise by means of the Kronecker product of its spatial and temporal covariance matrices. Extending the work of de Munck et al., where the trial-to-trial variability of the responses was considered identical for all channels, we evaluate it for each individual channel. Results: Simulations with two equivalent current dipoles (ECDs) with different trial-to-trial variability, one seeded in each of the auditory cortices, were used to study the applicability of the proposed methodology on the sensor level and revealed spatial selectivity of the trial-to-trial estimates. In addition, we simulated a scenario with neighboring ECDs, to show limitations of the method. We also present an illustrative example of the application of this methodology to real MEG data taken from an auditory experimental paradigm, where we found hemispheric lateralization of the habituation effect to multiple stimulus presentation. Conclusions: The proposed algorithm is capable of reconstructing lateralization effects of the trial-to-trial variability of evoked responses, i.e. when an ECD of only one hemisphere habituates, whereas the activity of the other hemisphere is not subject to habituation. Hence, it may be a useful tool in paradigms that assume lateralization effects, e.g., those involving language processing. PMID:24939398
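
A drastically simplified, single-channel illustration of the idea (assuming white noise, whereas the paper models the noise covariance as a Kronecker product of spatial and temporal terms): the maximum-likelihood amplitude of a known response template in each trial is its least-squares projection, and the spread of those amplitudes across trials quantifies that channel's trial-to-trial variability. The template and trial data are illustrative.

```python
import math

def trial_amplitudes(trials, template):
    """Least-squares (ML under white noise) amplitude of `template`
    in each single-trial recording of one channel."""
    ss = sum(v * v for v in template)
    return [sum(x * s for x, s in zip(trial, template)) / ss
            for trial in trials]

def trial_to_trial_sd(trials, template):
    """Standard deviation of the per-trial amplitudes: a simple
    channel-wise variability measure."""
    a = trial_amplitudes(trials, template)
    m = sum(a) / len(a)
    return math.sqrt(sum((v - m) ** 2 for v in a) / len(a))
```

Computing this per channel, rather than once for all channels, mirrors the paper's extension of the de Munck et al. model.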

  17. Decoding power-spectral profiles from FMRI brain activities during naturalistic auditory experience.

    PubMed

    Hu, Xintao; Guo, Lei; Han, Junwei; Liu, Tianming

    2017-02-01

    Recent studies have demonstrated a close relationship between computational acoustic features and neural brain activities, and have largely advanced our understanding of auditory information processing in the human brain. Along this line, we proposed a multidisciplinary study to examine whether power spectral density (PSD) profiles can be decoded from brain activities during naturalistic auditory experience. The study was performed on a high resolution functional magnetic resonance imaging (fMRI) dataset acquired when participants freely listened to the audio-description of the movie "Forrest Gump". Representative PSD profiles existing in the audio-movie were identified by clustering the audio samples according to their PSD descriptors. Support vector machine (SVM) classifiers were trained to differentiate the representative PSD profiles using corresponding fMRI brain activities. Based on PSD profile decoding, we explored how the neural decodability correlated to power intensity and frequency deviants. Our experimental results demonstrated that PSD profiles can be reliably decoded from brain activities. We also suggested a sigmoidal relationship between the neural decodability and power intensity deviants of PSD profiles. Our study in addition substantiates the feasibility and advantage of the naturalistic paradigm for studying neural encoding of complex auditory information.
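
The audio side of such a pipeline can be sketched as follows: a direct-DFT power spectral density descriptor per audio window and, as a simple stand-in for the SVM classifiers used in the study, nearest-centroid assignment to representative PSD profiles. Both pieces are illustrative simplifications, not the study's actual feature extraction.

```python
import cmath

def psd(signal):
    """Power spectral density via a direct DFT (fine for short windows)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) ** 2 / n
            for k in range(n // 2)]

def nearest_profile(descriptor, centroids):
    """Index of the representative PSD profile closest to `descriptor`
    (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(centroids)),
               key=lambda i: dist(descriptor, centroids[i]))
```

A pure sinusoid at DFT bin k produces a PSD that peaks exactly at index k, which is what makes such descriptors separable by a downstream classifier.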

  18. Multichannel Brain-Signal-Amplifying and Digitizing System

    NASA Technical Reports Server (NTRS)

    Gevins, Alan

    2005-01-01

    An apparatus has been developed for use in acquiring multichannel electroencephalographic (EEG) data from a human subject. EEG apparatuses with many channels in use heretofore have been too heavy and bulky to be worn, and have been limited in dynamic range to no more than 18 bits. The present apparatus is small and light enough to be worn by the subject. It is capable of amplifying EEG signals and digitizing them to 22 bits in as many as 150 channels. The apparatus is controlled by software and is plugged into the USB port of a personal computer. This apparatus makes it possible, for the first time, to obtain high-resolution functional EEG images of a thinking brain in a real-life, ambulatory setting outside a research laboratory or hospital.
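
The quoted bit depths translate directly into dynamic range: an ideal b-bit converter spans 20·log10(2^b) dB. A quick back-of-the-envelope check (plain arithmetic, not taken from the article):

```python
import math

def dynamic_range_db(bits):
    """Ideal dynamic range of a b-bit quantizer in decibels."""
    return 20.0 * math.log10(2 ** bits)
```

By this measure, 22 bits gives roughly 132 dB, about 24 dB more headroom than the earlier 18-bit limit.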

  19. Multichannel audio monitor for detecting electrical signals.

    PubMed

    Friesen, W O; Stent, G S

    1978-12-01

    The multichannel audio monitor (MUCAM) permits the simultaneous auditory monitoring of concurrent trains of electrical signals generated by as many as eight different sources. The basic working principle of this device is the modulation of the amplitude of a given pure tone by the incoming signals of each input channel. The MUCAM thus converts a complex, multichannel, temporal signal sequence into a musical melody suitable for instant, subliminal pattern analysis by the human ear. Neurophysiological experiments requiring multi-electrode recordings have provided one useful application of the MUCAM.
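
The MUCAM's working principle, amplitude modulation of one pure tone per channel summed into a single audible stream, can be sketched as follows. The tone frequencies, sample rate, and envelope handling are illustrative assumptions, not the published circuit.

```python
import math

def mucam_mix(channels, tones_hz, rate=8000, duration=0.5):
    """Mix one pure tone per input channel, amplitude-modulated by that
    channel's (non-negative) signal envelope, into a single audio track."""
    n = int(rate * duration)
    out = []
    for i in range(n):
        t = i / rate
        sample = 0.0
        for env, f in zip(channels, tones_hz):
            # sample the envelope at this audio frame's time position
            e = env[min(int(t / duration * len(env)), len(env) - 1)]
            sample += e * math.sin(2 * math.pi * f * t)
        out.append(sample / len(channels))
    return out
```

Because each source owns a distinct carrier pitch, concurrent spike trains become separable voices in the resulting "melody", which is what makes the pattern audible at a glance.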

  20. Scale-Free Brain Quartet: Artistic Filtering of Multi-Channel Brainwave Music

    PubMed Central

    Wu, Dan; Li, Chaoyi; Yao, Dezhong

    2013-01-01

    To listen to the brain activities as a piece of music, we proposed the scale-free brainwave music (SFBM) technology, which translated scalp EEGs into music notes according to the power law of both EEG and music. In the present study, the methodology was extended for deriving a quartet from multi-channel EEGs with artistic beat and tonality filtering. EEG data from each electrode were first translated into a MIDI sequence by SFBM. Then, these sequences were processed by a beat filter that adjusted the duration of notes according to the characteristic frequency, and the sequences were further filtered from atonal to tonal according to a key defined by the analysis of the original music pieces. Resting EEGs with eyes closed and open of 40 subjects were utilized for music generation. The results revealed that the scale-free exponents of the music before and after filtering were different: the filtered music showed greater variation between the eyes-closed (EC) and eyes-open (EO) conditions, and the pitch scale exponents of the filtered music were closer to 1 and thus closer to classical music. Furthermore, the tempo of the filtered music with eyes closed was significantly slower than that with eyes open. With the original materials obtained from multi-channel EEGs, and a little creative filtering following the composition process of a potential artist, the resulting brainwave quartet opened a new window to look into the brain in an audible musical way. In fact, as the artistic beat and tonal filters were derived from the brainwaves, the filtered music maintained the essential properties of the brain activities in a more musical style. It might harmonically distinguish the different states of the brain activities, and therefore it provided a method to analyze EEGs from a relaxed audio perspective. PMID:23717527
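
A hypothetical, much-reduced sketch of an SFBM-style mapping with beat and tonality filtering: log-amplitude maps to a MIDI pitch (a stand-in for the published power-law mapping), durations snap to a beat grid, and pitches snap down into a key. The note range, key, and beat value are illustrative choices, not the paper's parameters.

```python
import math

C_MAJOR = {0, 2, 4, 5, 7, 9, 11}  # pitch classes of the key used for tonality filtering

def amplitude_to_pitch(amp, lo=1e-3, hi=1.0, low_note=48, high_note=84):
    """Map an EEG amplitude to a MIDI note number on a log scale."""
    amp = min(max(amp, lo), hi)
    frac = (math.log(amp) - math.log(lo)) / (math.log(hi) - math.log(lo))
    return round(low_note + frac * (high_note - low_note))

def tonal_filter(note):
    """Snap a MIDI note down to the nearest pitch class in the key."""
    while note % 12 not in C_MAJOR:
        note -= 1
    return note

def beat_filter(duration, beat=0.25):
    """Quantize a note duration (in seconds) to the nearest beat multiple."""
    return max(beat, round(duration / beat) * beat)
```

Running each electrode's note stream through these two filters, then playing the streams together, is the quartet-building step the abstract describes.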

  21. Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.

    PubMed

    Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo

    2013-02-16

    We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common for all three tasks within each modality or interaction of processing task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of the processing of irrelevant speech that would presumably distract from the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.

  3. Selective entrainment of brain oscillations drives auditory perceptual organization.

    PubMed

    Costa-Faidella, Jordi; Sussman, Elyse S; Escera, Carles

    2017-10-01

    Perceptual sound organization supports our ability to make sense of the complex acoustic environment, to understand speech and to enjoy music. However, the neuronal mechanisms underlying the subjective experience of perceiving univocal auditory patterns that can be listened to, despite hearing all sounds in a scene, are poorly understood. We hereby investigated the manner in which competing sound organizations are simultaneously represented by specific brain activity patterns and the way attention and task demands prime the internal model generating the current percept. Using a selective attention task on ambiguous auditory stimulation coupled with EEG recordings, we found that the phase of low-frequency oscillatory activity dynamically tracks multiple sound organizations concurrently. However, whereas the representation of ignored sound patterns is circumscribed to auditory regions, large-scale oscillatory entrainment in auditory, sensory-motor and executive-control network areas reflects the active perceptual organization, thereby giving rise to the subjective experience of a unitary percept. Copyright © 2017 Elsevier Inc. All rights reserved.
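The phase tracking described above is typically quantified with measures such as inter-trial phase coherence (ITC). As a minimal illustration (not the authors' actual EEG pipeline), the sketch below estimates the phase of a slow rhythm on each synthetic trial and compares ITC for phase-locked versus phase-jittered trials; all signal parameters are hypothetical:

```python
import cmath
import math
import random

def phase_at_freq(trial, freq, srate):
    """Phase of one trial at `freq` (Hz): a single-frequency DFT term."""
    acc = 0j
    for k, x in enumerate(trial):
        acc += x * cmath.exp(-2j * math.pi * freq * k / srate)
    return cmath.phase(acc)

def itc(phases):
    """Inter-trial phase coherence: length of the mean unit phasor.
    1.0 = identical phase on every trial, ~0 = uniformly random phases."""
    return abs(sum(cmath.exp(1j * p) for p in phases) / len(phases))

random.seed(0)
srate, freq, n = 250, 2.0, 250          # 1 s trials, 2 Hz rhythm

entrained, jittered = [], []
for _ in range(40):
    jitter = random.uniform(-math.pi, math.pi)
    # Entrained condition: the 2 Hz component keeps a fixed phase across trials.
    locked = [math.cos(2 * math.pi * freq * k / srate)
              + 0.5 * random.gauss(0, 1) for k in range(n)]
    # Control condition: the phase varies randomly from trial to trial.
    rand = [math.cos(2 * math.pi * freq * k / srate + jitter)
            + 0.5 * random.gauss(0, 1) for k in range(n)]
    entrained.append(phase_at_freq(locked, freq, srate))
    jittered.append(phase_at_freq(rand, freq, srate))

itc_locked = itc(entrained)   # near 1: phases align across trials
itc_random = itc(jittered)    # near 0: no consistent phase
```

With 40 trials, the random-phase condition still shows a small positive ITC (the estimator is biased by roughly 1/sqrt(n_trials)), which is why real analyses compare against a surrogate or baseline distribution.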

  4. Brain Metabolism during Hallucination-Like Auditory Stimulation in Schizophrenia

    PubMed Central

    Horga, Guillermo; Fernández-Egea, Emilio; Mané, Anna; Font, Mireia; Schatz, Kelly C.; Falcon, Carles; Lomeña, Francisco; Bernardo, Miguel; Parellada, Eduard

    2014-01-01

Auditory verbal hallucinations (AVH) in schizophrenia are typically characterized by rich emotional content. Despite the prominent role of emotion in regulating normal perception, the neural interface between emotion-processing regions such as the amygdala and auditory regions involved in perception remains relatively unexplored in AVH. Here, we studied brain metabolism using FDG-PET in 9 remitted patients with schizophrenia who had previously reported severe AVH during an acute psychotic episode and 8 matched healthy controls. Participants were scanned twice: (1) at rest and (2) during the perception of aversive auditory stimuli mimicking the content of AVH. Compared to controls, remitted patients showed an exaggerated response to the AVH-like stimuli in limbic and paralimbic regions, including the left amygdala. Furthermore, patients displayed abnormally strong connections between the amygdala and auditory regions of the cortex and thalamus, along with abnormally weak connections between the amygdala and medial prefrontal cortex. These results suggest that abnormal modulation of the auditory cortex by limbic-thalamic structures might be involved in the pathophysiology of AVH and may potentially account for the emotional features that characterize hallucinatory percepts in schizophrenia. PMID:24416328

  5. Brainstem auditory-evoked potentials as an objective tool for evaluating hearing dysfunction in traumatic brain injury.

    PubMed

    Lew, Henry L; Lee, Eun Ha; Miyoshi, Yasushi; Chang, Douglas G; Date, Elaine S; Jerger, James F

    2004-03-01

    Because of the violent nature of traumatic brain injury, traumatic brain injury patients are susceptible to various types of trauma involving the auditory system. We report a case of a 55-yr-old man who presented with communication problems after traumatic brain injury. Initial results from behavioral audiometry and Weber/Rinne tests were not reliable because of poor cooperation. He was transferred to our service for inpatient rehabilitation, where review of the initial head computed tomographic scan showed only left temporal bone fracture. Brainstem auditory-evoked potential was then performed to evaluate his hearing function. The results showed bilateral absence of auditory-evoked responses, which strongly suggested bilateral deafness. This finding led to a follow-up computed tomographic scan, with focus on bilateral temporal bones. A subtle transverse fracture of the right temporal bone was then detected, in addition to the left temporal bone fracture previously identified. Like children with hearing impairment, traumatic brain injury patients may not be able to verbalize their auditory deficits in a timely manner. If hearing loss is suspected in a patient who is unable to participate in traditional behavioral audiometric testing, brainstem auditory-evoked potential may be an option for evaluating hearing dysfunction.

  6. Brain Mapping of Language and Auditory Perception in High-Functioning Autistic Adults: A PET Study.

    ERIC Educational Resources Information Center

    Muller, R-A.; Behen, M. E.; Rothermel, R. D.; Chugani, D. C.; Muzik, O.; Mangner, T. J.; Chugani, H. T.

    1999-01-01

    A study used positron emission tomography (PET) to study patterns of brain activation during auditory processing in five high-functioning adults with autism. Results found that participants showed reversed hemispheric dominance during the verbal auditory stimulation and reduced activation of the auditory cortex and cerebellum. (CR)

  7. Connecting the ear to the brain: molecular mechanisms of auditory circuit assembly

    PubMed Central

    Appler, Jessica M.; Goodrich, Lisa V.

    2011-01-01

    Our sense of hearing depends on precisely organized circuits that allow us to sense, perceive, and respond to complex sounds in our environment, from music and language to simple warning signals. Auditory processing begins in the cochlea of the inner ear, where sounds are detected by sensory hair cells and then transmitted to the central nervous system by spiral ganglion neurons, which faithfully preserve the frequency, intensity, and timing of each stimulus. During the assembly of auditory circuits, spiral ganglion neurons establish precise connections that link hair cells in the cochlea to target neurons in the auditory brainstem, develop specific firing properties, and elaborate unusual synapses both in the periphery and in the CNS. Understanding how spiral ganglion neurons acquire these unique properties is a key goal in auditory neuroscience, as these neurons represent the sole input of auditory information to the brain. In addition, the best currently available treatment for many forms of deafness is the cochlear implant, which compensates for lost hair cell function by directly stimulating the auditory nerve. Historically, studies of the auditory system have lagged behind other sensory systems due to the small size and inaccessibility of the inner ear. With the advent of new molecular genetic tools, this gap is narrowing. Here, we summarize recent insights into the cellular and molecular cues that guide the development of spiral ganglion neurons, from their origin in the proneurosensory domain of the otic vesicle to the formation of specialized synapses that ensure rapid and reliable transmission of sound information from the ear to the brain. PMID:21232575

  8. Brain region-specific activity patterns after recent or remote memory retrieval of auditory conditioned fear.

    PubMed

    Kwon, Jeong-Tae; Jhang, Jinho; Kim, Hyung-Su; Lee, Sujin; Han, Jin-Hee

    2012-09-19

Memory is thought to be sparsely encoded throughout multiple brain regions, forming a unique memory trace. Although evidence has established that the amygdala is a key brain site for memory storage and retrieval of auditory conditioned fear memory, it remains elusive whether the auditory brain regions are involved in fear memory storage or retrieval. To investigate this possibility, we systematically imaged the brain activity patterns in the lateral amygdala, MGm/PIN, and AuV/TeA using activity-dependent induction of the immediate early gene zif268 after recent and remote memory retrieval of auditory conditioned fear. Consistent with the critical role of the amygdala in fear memory, the zif268 activity in the lateral amygdala was significantly increased after both recent and remote memory retrieval. Interestingly, however, the density of zif268 (+) neurons in both MGm/PIN and AuV/TeA, particularly in layers IV and VI, was increased only after remote but not recent fear memory retrieval compared to control groups. Further analysis of zif268 signals in AuV/TeA revealed that the conditioned tone elicited stronger zif268 induction than the familiar tone in each individual zif268 (+) neuron after recent memory retrieval. Taken together, our results support the conclusion that the lateral amygdala is a key brain site for permanent fear memory storage and suggest that MGm/PIN and AuV/TeA might play a role in remote memory storage or retrieval of auditory conditioned fear, or, alternatively, that these auditory brain regions might process familiar and conditioned tone information differently at recent and remote time points.

  9. The importance of individual frequencies of endogenous brain oscillations for auditory cognition - A short review.

    PubMed

    Baltus, Alina; Herrmann, Christoph Siegfried

    2016-06-01

    Oscillatory EEG activity in the human brain with frequencies in the gamma range (approx. 30-80Hz) is known to be relevant for a large number of cognitive processes. Interestingly, each subject reveals an individual frequency of the auditory gamma-band response (GBR) that coincides with the peak in the auditory steady state response (ASSR). A common resonance frequency of auditory cortex seems to underlie both the individual frequency of the GBR and the peak of the ASSR. This review sheds light on the functional role of oscillatory gamma activity for auditory processing. For successful processing, the auditory system has to track changes in auditory input over time and store information about past events in memory which allows the construction of auditory objects. Recent findings support the idea of gamma oscillations being involved in the partitioning of auditory input into discrete samples to facilitate higher order processing. We review experiments that seem to suggest that inter-individual differences in the resonance frequency are behaviorally relevant for gap detection and speech processing. A possible application of these resonance frequencies for brain computer interfaces is illustrated with regard to optimized individual presentation rates for auditory input to correspond with endogenous oscillatory activity. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.
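The individual resonance frequency discussed above is located as a spectral peak. A minimal sketch (with a hypothetical 40 Hz resonance and synthetic data, not the ASSR analysis used in the reviewed studies) that scans the gamma band for the frequency of maximal power:

```python
import math

def power_at(x, freq, srate):
    """Power of signal x at one frequency (a single DFT term)."""
    re = sum(v * math.cos(2 * math.pi * freq * k / srate) for k, v in enumerate(x))
    im = sum(v * math.sin(2 * math.pi * freq * k / srate) for k, v in enumerate(x))
    return re * re + im * im

srate = 500
t = [k / srate for k in range(srate)]             # 1 s of synthetic response
resonance = 40.0                                  # hypothetical individual peak
# Dominant 40 Hz component plus a weaker 60 Hz component
x = [math.sin(2 * math.pi * resonance * s)
     + 0.3 * math.sin(2 * math.pi * 60.0 * s) for s in t]

candidates = range(30, 81)                        # scan approx. 30-80 Hz in 1 Hz steps
peak = max(candidates, key=lambda f: power_at(x, f, srate))   # -> 40
```

In practice the ASSR peak would be estimated from averaged responses to amplitude-modulated tones at several modulation rates, but the argmax-over-frequency step is the same.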

  10. Magnetoencephalographic accuracy profiles for the detection of auditory pathway sources.

    PubMed

    Bauer, Martin; Trahms, Lutz; Sander, Tilmann

    2015-04-01

The detection limits for cortical and brain stem sources associated with the auditory pathway are examined in order to analyse brain responses at the limits of the audible frequency range. The results obtained from this study are also relevant to other issues of auditory brain research. A complementary approach consisting of recordings of magnetoencephalographic (MEG) data and simulations of magnetic field distributions is presented in this work. A biomagnetic phantom consisting of a spherical volume filled with a saline solution and four current dipoles is built. The magnetic fields outside of the phantom generated by the current dipoles are then measured for a range of applied electric dipole moments with a planar multichannel SQUID magnetometer device and a helmet MEG gradiometer device. The magnetometer system was included because it is expected to be more sensitive to brain stem sources than a gradiometer system. The same electrical and geometrical configuration is simulated in a forward calculation. From both the measured and the simulated data, the dipole positions are estimated using an inverse calculation. Results are obtained for the reconstruction accuracy as a function of applied electric dipole moment and depth of the current dipole. We found that both systems can localize cortical and subcortical sources at physiological dipole strengths, even for brain stem sources. Further, we found that a planar magnetometer system is more suitable if the position of the brain source can be restricted to a limited region of the brain. If this is not the case, a helmet-shaped sensor system offers more accurate source estimation.
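The forward/inverse procedure described above can be illustrated with a toy model. The sketch below uses the free-space field of a current dipole (real MEG source modeling uses a conducting-sphere or boundary-element head model, which this deliberately omits) and recovers source depth by grid search; the sensor geometry and dipole moment are hypothetical:

```python
MU0_4PI = 1e-7  # mu_0 / 4*pi in SI units

def bz(sensor, src, q):
    """z-component of the free-space field of a current dipole:
    B(r) = (mu0/4pi) * q x (r - r0) / |r - r0|^3   (toy model, no conductor)."""
    rx = sensor[0] - src[0]
    ry = sensor[1] - src[1]
    rz = sensor[2] - src[2]
    d3 = (rx * rx + ry * ry + rz * rz) ** 1.5
    # (q x R)_z = qx*Ry - qy*Rx
    return MU0_4PI * (q[0] * ry - q[1] * rx) / d3

# Planar sensor grid at z = 0, 2 cm spacing (roughly a planar magnetometer array)
sensors = [(x * 0.02, y * 0.02, 0.0) for x in range(-5, 6) for y in range(-5, 6)]

true_depth = 0.06                      # 6 cm below the array (hypothetical)
q = (20e-9, 0.0, 0.0)                  # 20 nAm dipole moment, x-oriented
measured = [bz(s, (0.0, 0.0, -true_depth), q) for s in sensors]

# Inverse step: grid-search the source depth that best reproduces the data
def residual(depth):
    model = [bz(s, (0.0, 0.0, -depth), q) for s in sensors]
    return sum((m - b) ** 2 for m, b in zip(model, measured))

depths = [d / 1000 for d in range(20, 121)]   # 2 cm .. 12 cm in 1 mm steps
best = min(depths, key=residual)              # recovers 0.06 (noise-free data)
```

With sensor noise added, the residual minimum flattens for deeper sources, which is the toy-model analogue of the depth-dependent reconstruction accuracy the study measures.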

  11. Auditory Temporal Processing Deficits in Chronic Stroke: A Comparison of Brain Damage Lateralization Effect.

    PubMed

    Jafari, Zahra; Esmaili, Mahdiye; Delbari, Ahmad; Mehrpour, Masoud; Mohajerani, Majid H

    2016-06-01

    There have been a few reports about the effects of chronic stroke on auditory temporal processing abilities and no reports regarding the effects of brain damage lateralization on these abilities. Our study was performed on 2 groups of chronic stroke patients to compare the effects of hemispheric lateralization of brain damage and of age on auditory temporal processing. Seventy persons with normal hearing, including 25 normal controls, 25 stroke patients with damage to the right brain, and 20 stroke patients with damage to the left brain, without aphasia and with an age range of 31-71 years were studied. A gap-in-noise (GIN) test and a duration pattern test (DPT) were conducted for each participant. Significant differences were found between the 3 groups for GIN threshold, overall GIN percent score, and DPT percent score in both ears (P ≤ .001). For all stroke patients, performance in both GIN and DPT was poorer in the ear contralateral to the damaged hemisphere, which was significant in DPT and in 2 measures of GIN (P ≤ .046). Advanced age had a negative relationship with temporal processing abilities for all 3 groups. In cases of confirmed left- or right-side stroke involving auditory cerebrum damage, poorer auditory temporal processing is associated with the ear contralateral to the damaged cerebral hemisphere. Replication of our results and the use of GIN and DPT tests for the early diagnosis of auditory processing deficits and for monitoring the effects of aural rehabilitation interventions are recommended. Copyright © 2016 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  12. Simulation of talking faces in the human brain improves auditory speech recognition

    PubMed Central

    von Kriegstein, Katharina; Dogan, Özgür; Grüter, Martina; Giraud, Anne-Lise; Kell, Christian A.; Grüter, Thomas; Kleinschmidt, Andreas; Kiebel, Stefan J.

    2008-01-01

    Human face-to-face communication is essentially audiovisual. Typically, people talk to us face-to-face, providing concurrent auditory and visual input. Understanding someone is easier when there is visual input, because visual cues like mouth and tongue movements provide complementary information about speech content. Here, we hypothesized that, even in the absence of visual input, the brain optimizes both auditory-only speech and speaker recognition by harvesting speaker-specific predictions and constraints from distinct visual face-processing areas. To test this hypothesis, we performed behavioral and neuroimaging experiments in two groups: subjects with a face recognition deficit (prosopagnosia) and matched controls. The results show that observing a specific person talking for 2 min improves subsequent auditory-only speech and speaker recognition for this person. In both prosopagnosics and controls, behavioral improvement in auditory-only speech recognition was based on an area typically involved in face-movement processing. Improvement in speaker recognition was only present in controls and was based on an area involved in face-identity processing. These findings challenge current unisensory models of speech processing, because they show that, in auditory-only speech, the brain exploits previously encoded audiovisual correlations to optimize communication. We suggest that this optimization is based on speaker-specific audiovisual internal models, which are used to simulate a talking face. PMID:18436648

  13. Bigger Brains or Bigger Nuclei? Regulating the Size of Auditory Structures in Birds

    PubMed Central

    Kubke, M. Fabiana; Massoglia, Dino P.; Carr, Catherine E.

    2012-01-01

Increases in the size of the neuronal structures that mediate specific behaviors are believed to be related to enhanced computational performance. It is not clear, however, what developmental and evolutionary mechanisms mediate these changes, nor whether an increase in the size of a given neuronal population is a general mechanism to achieve enhanced computational ability. We addressed the issue of size by analyzing the variation in the relative number of cells of auditory structures in auditory specialists and generalists. We show that bird species with different auditory specializations exhibit variation in the relative size of their hindbrain auditory nuclei. In the barn owl, an auditory specialist, the hindbrain auditory nuclei involved in the computation of sound location show hyperplasia. This hyperplasia was also found in songbirds, but not in non-auditory specialists. The hyperplasia of auditory nuclei was also not seen in birds with large body weight, suggesting that the total number of cells is selected for in auditory specialists. In barn owls, differences observed in the relative size of the auditory nuclei might be attributed to modifications in neurogenesis and cell death. Thus, hyperplasia of circuits used for auditory computation accompanies auditory specialization in different orders of birds. PMID:14726625

  14. Training leads to increased auditory brain-computer interface performance of end-users with motor impairments.

    PubMed

    Halder, S; Käthner, I; Kübler, A

    2016-02-01

Auditory brain-computer interfaces are an assistive technology that can restore communication for motor-impaired end-users. Such non-visual brain-computer interface paradigms are of particular importance for end-users who may lose or have lost gaze control. We attempted to show that motor-impaired end-users can learn to control an auditory speller on the basis of event-related potentials. Five end-users with motor impairments, two of whom had additional visual impairments, participated in five sessions. We applied a newly developed auditory brain-computer interface paradigm with natural sounds and directional cues. Three of five end-users learned to select symbols using this method. Averaged over all five end-users the information transfer rate increased by more than 1800% from the first session (0.17 bits/min) to the last session (3.08 bits/min). The two best end-users achieved information transfer rates of 5.78 bits/min and accuracies of 92%. Our results show that an auditory BCI with a combination of natural sounds and directional cues can be controlled by end-users with motor impairments. Training improves the performance of end-users to the level of healthy controls. To our knowledge, this is the first time end-users with motor impairments controlled an auditory brain-computer interface speller with such high accuracy and information transfer rates. Further, our results demonstrate that operating a BCI with event-related potentials benefits from training and, specifically, that end-users may require more than one session to develop their full potential. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
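Information transfer rates such as those quoted above are conventionally computed with Wolpaw's formula, which combines the number of selectable symbols, the selection accuracy, and the selection rate. A sketch, using a hypothetical 36-symbol speller since the abstract does not state the symbol-set size or selection timing:

```python
import math

def bits_per_selection(n, p):
    """Wolpaw information transfer per selection for an n-target speller
    at accuracy p: log2(n) + p*log2(p) + (1-p)*log2((1-p)/(n-1))."""
    if p >= 1.0:
        return math.log2(n)     # perfect accuracy transfers the full log2(n)
    if p <= 0.0:
        return 0.0
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

def itr_bits_per_min(n, p, selections_per_min):
    """Information transfer rate in bits/min."""
    return bits_per_selection(n, p) * selections_per_min

# Hypothetical example: 36-symbol speller at 92% accuracy
b = bits_per_selection(36, 0.92)        # about 4.36 bits per selection
```

At one selection every few minutes this yields ITRs in the low bits/min range, which is the regime the abstract reports; the exact figures depend on the paradigm's actual symbol count and timing.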

  15. The Relationship between Phonological and Auditory Processing and Brain Organization in Beginning Readers

    ERIC Educational Resources Information Center

    Pugh, Kenneth R.; Landi, Nicole; Preston, Jonathan L.; Mencl, W. Einar; Austin, Alison C.; Sibley, Daragh; Fulbright, Robert K.; Seidenberg, Mark S.; Grigorenko, Elena L.; Constable, R. Todd; Molfese, Peter; Frost, Stephen J.

    2013-01-01

    We employed brain-behavior analyses to explore the relationship between performance on tasks measuring phonological awareness, pseudoword decoding, and rapid auditory processing (all predictors of reading (dis)ability) and brain organization for print and speech in beginning readers. For print-related activation, we observed a shared set of…

  16. [Forensic application of brainstem auditory evoked potential in patients with brain concussion].

    PubMed

    Zheng, Xing-Bin; Li, Sheng-Yan; Huang, Si-Xing; Ma, Ke-Xin

    2008-12-01

To investigate changes of brainstem auditory evoked potential (BAEP) in patients with brain concussion. Nineteen patients with brain concussion were studied with BAEP examination. The data were compared with those of healthy persons reported in the literature. The abnormal rate of BAEP for patients with brain concussion was 89.5%. There was a statistically significant difference between the abnormal rate of patients and that of healthy persons (P<0.05). The abnormal rate of BAEP in the brainstem pathway for patients with brain concussion was 73.7%, indicating dysfunction of the brainstem in those patients. BAEP might be helpful in the forensic diagnosis of brain concussion.

  17. Multichannel brain recordings in behaving Drosophila reveal oscillatory activity and local coherence in response to sensory stimulation and circuit activation

    PubMed Central

    Paulk, Angelique C.; Zhou, Yanqiong; Stratton, Peter; Liu, Li

    2013-01-01

    Neural networks in vertebrates exhibit endogenous oscillations that have been associated with functions ranging from sensory processing to locomotion. It remains unclear whether oscillations may play a similar role in the insect brain. We describe a novel “whole brain” readout for Drosophila melanogaster using a simple multichannel recording preparation to study electrical activity across the brain of flies exposed to different sensory stimuli. We recorded local field potential (LFP) activity from >2,000 registered recording sites across the fly brain in >200 wild-type and transgenic animals to uncover specific LFP frequency bands that correlate with: 1) brain region; 2) sensory modality (olfactory, visual, or mechanosensory); and 3) activity in specific neural circuits. We found endogenous and stimulus-specific oscillations throughout the fly brain. Central (higher-order) brain regions exhibited sensory modality-specific increases in power within narrow frequency bands. Conversely, in sensory brain regions such as the optic or antennal lobes, LFP coherence, rather than power, best defined sensory responses across modalities. By transiently activating specific circuits via expression of TrpA1, we found that several circuits in the fly brain modulate LFP power and coherence across brain regions and frequency domains. However, activation of a neuromodulatory octopaminergic circuit specifically increased neuronal coherence in the optic lobes during visual stimulation while decreasing coherence in central brain regions. Our multichannel recording and brain registration approach provides an effective way to track activity simultaneously across the fly brain in vivo, allowing investigation of functional roles for oscillations in processing sensory stimuli and modulating behavior. PMID:23864378
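The power-versus-coherence distinction drawn above can be made concrete with a minimal trial-based coherence estimate at a single frequency (a simplification of standard Welch-style coherence; all signal parameters below are hypothetical):

```python
import cmath
import math
import random

def complex_amp(trial, freq, srate):
    """Complex amplitude of one trial at `freq` (a single DFT term)."""
    return sum(x * cmath.exp(-2j * math.pi * freq * k / srate)
               for k, x in enumerate(trial))

def coherence(trials_a, trials_b, freq, sr):
    """Magnitude-squared coherence at one frequency, estimated across trials:
    |<X * conj(Y)>|^2 / (<|X|^2> * <|Y|^2>)."""
    xs = [complex_amp(a, freq, sr) for a in trials_a]
    ys = [complex_amp(b, freq, sr) for b in trials_b]
    sxy = sum(x * y.conjugate() for x, y in zip(xs, ys)) / len(xs)
    sxx = sum(abs(x) ** 2 for x in xs) / len(xs)
    syy = sum(abs(y) ** 2 for y in ys) / len(ys)
    return abs(sxy) ** 2 / (sxx * syy)

random.seed(1)
sr, f, n = 200, 10.0, 200       # 1 s trials, 10 Hz "LFP" rhythm

coupled_a, coupled_b, indep_b = [], [], []
for _ in range(50):
    ph = random.uniform(-math.pi, math.pi)       # trial-varying phase...
    ph2 = random.uniform(-math.pi, math.pi)      # ...and an unrelated phase
    coupled_a.append([math.cos(2 * math.pi * f * k / sr + ph)
                      + 0.5 * random.gauss(0, 1) for k in range(n)])
    coupled_b.append([math.cos(2 * math.pi * f * k / sr + ph + 0.3)
                      + 0.5 * random.gauss(0, 1) for k in range(n)])  # fixed lag
    indep_b.append([math.cos(2 * math.pi * f * k / sr + ph2)
                    + 0.5 * random.gauss(0, 1) for k in range(n)])

c_coupled = coherence(coupled_a, coupled_b, f, sr)   # high: consistent lag
c_indep = coherence(coupled_a, indep_b, f, sr)       # low: no phase relation
```

Note that both channel pairs have identical 10 Hz power; only the consistency of their phase relation differs, which is exactly the property the study found most informative in sensory regions.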

  18. Effects of visual working memory on brain information processing of irrelevant auditory stimuli.

    PubMed

    Qu, Jiagui; Rizak, Joshua D; Zhao, Lun; Li, Minghong; Ma, Yuanye

    2014-01-01

Selective attention has traditionally been viewed as a sensory processing modulator that promotes cognitive processing efficiency by favoring relevant stimuli while inhibiting irrelevant stimuli. However, the cross-modal processing of irrelevant information during working memory (WM) has rarely been investigated. In this study, the modulation of irrelevant auditory information by the brain during a visual WM task was investigated. The N100 auditory evoked potential (N100-AEP) following an auditory click was used to evaluate selective attention to auditory stimuli during WM processing and at rest. N100-AEP amplitudes were found to be significantly affected in the left-prefrontal, mid-prefrontal, right-prefrontal, left-frontal, and mid-frontal regions while performing a high WM load task. In contrast, no significant differences were found between N100-AEP amplitudes in WM states and rest states under a low WM load task in all recorded brain regions. Furthermore, no differences were found between the time latencies of N100-AEP troughs in WM states and rest states while performing either the high or low WM load task. These findings suggest that the prefrontal cortex (PFC) may integrate information from different sensory channels to protect perceptual integrity during cognitive processing.
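Evoked potentials like the N100-AEP above are obtained by averaging time-locked epochs, which attenuates background EEG roughly as 1/sqrt(number of trials). A self-contained sketch with synthetic single trials (the deflection shape, latency, and noise level are hypothetical):

```python
import math
import random

random.seed(2)
srate = 1000                      # 1 kHz sampling; epochs span 0-300 ms post-click
n_samples = 300

def n100_waveform(k):
    """Idealized negative deflection peaking 100 ms after the click (microvolts)."""
    return -2.0 * math.exp(-((k - 100) ** 2) / (2 * 15.0 ** 2))

# Single trials = evoked response buried in background EEG noise (SNR << 1)
trials = [[n100_waveform(k) + random.gauss(0, 5.0) for k in range(n_samples)]
          for _ in range(200)]

# Averaging across 200 trials shrinks the noise by about sqrt(200) ~ 14x
erp = [sum(t[k] for t in trials) / len(trials) for k in range(n_samples)]

# N100 amplitude: most negative value in an 80-120 ms window
n100_amp = min(erp[80:121])       # close to the true -2.0 microvolt peak
```

Trough latency (the index of that minimum) is extracted the same way, which is the second measure the study compares between WM and rest states.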

  19. Endogenous Delta/Theta Sound-Brain Phase Entrainment Accelerates the Buildup of Auditory Streaming.

    PubMed

    Riecke, Lars; Sack, Alexander T; Schroeder, Charles E

    2015-12-21

    In many natural listening situations, meaningful sounds (e.g., speech) fluctuate in slow rhythms among other sounds. When a slow rhythmic auditory stream is selectively attended, endogenous delta (1‒4 Hz) oscillations in auditory cortex may shift their timing so that higher-excitability neuronal phases become aligned with salient events in that stream [1, 2]. As a consequence of this stream-brain phase entrainment [3], these events are processed and perceived more readily than temporally non-overlapping events [4-11], essentially enhancing the neural segregation between the attended stream and temporally noncoherent streams [12]. Stream-brain phase entrainment is robust to acoustic interference [13-20] provided that target stream-evoked rhythmic activity can be segregated from noncoherent activity evoked by other sounds [21], a process that usually builds up over time [22-27]. However, it has remained unclear whether stream-brain phase entrainment functionally contributes to this buildup of rhythmic streams or whether it is merely an epiphenomenon of it. Here, we addressed this issue directly by experimentally manipulating endogenous stream-brain phase entrainment in human auditory cortex with non-invasive transcranial alternating current stimulation (TACS) [28-30]. We assessed the consequences of these manipulations on the perceptual buildup of the target stream (the time required to recognize its presence in a noisy background), using behavioral measures in 20 healthy listeners performing a naturalistic listening task. Experimentally induced cyclic 4-Hz variations in stream-brain phase entrainment reliably caused a cyclic 4-Hz pattern in perceptual buildup time. Our findings demonstrate that strong endogenous delta/theta stream-brain phase entrainment accelerates the perceptual emergence of task-relevant rhythmic streams in noisy environments. Copyright © 2015 Elsevier Ltd. All rights reserved.
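A cyclic pattern in buildup time like the one reported above is naturally quantified by fitting a cosine to buildup times as a function of TACS phase. A sketch with hypothetical, noise-free data (the authors' actual statistical procedure is not specified in the abstract):

```python
import math

# Hypothetical buildup times (s) measured at eight equally spaced 4-Hz TACS
# phases; the true underlying modulation here is 2.0 + 0.4*cos(phi - 0.5).
phases = [2 * math.pi * i / 8 for i in range(8)]
times = [2.0 + 0.4 * math.cos(p - 0.5) for p in phases]

# Least-squares cosine fit: T(phi) ~ m + a*cos(phi) + b*sin(phi). With equally
# spaced phases the regressors are orthogonal, so the fit is closed-form.
m = sum(times) / len(times)
a = 2 / len(times) * sum(t * math.cos(p) for t, p in zip(times, phases))
b = 2 / len(times) * sum(t * math.sin(p) for t, p in zip(times, phases))

depth = math.hypot(a, b)          # modulation depth: recovers 0.4 s
pref_phase = math.atan2(b, a)     # phase of slowest buildup: recovers 0.5 rad
```

With noisy data the same fit is applied per listener and the fitted modulation depth is tested against zero (or against phase-shuffled surrogates) across the group.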

  20. Maturation of the auditory t-complex brain response across adolescence.

    PubMed

    Mahajan, Yatin; McArthur, Genevieve

    2013-02-01

    Adolescence is a time of great change in the brain in terms of structure and function. It is possible to track the development of neural function across adolescence using auditory event-related potentials (ERPs). This study tested if the brain's functional processing of sound changed across adolescence. We measured passive auditory t-complex peaks to pure tones and consonant-vowel (CV) syllables in 90 children and adolescents aged 10-18 years, as well as 10 adults. Across adolescence, Na amplitude increased to tones and speech at the right, but not left, temporal site. Ta amplitude decreased at the right temporal site for tones, and at both sites for speech. The Tb remained constant at both sites. The Na and Ta appeared to mature later in the right than left hemisphere. The t-complex peaks Na and Tb exhibited left lateralization and Ta showed right lateralization. Thus, the functional processing of sound continued to develop across adolescence and into adulthood. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.

  1. Multichannel electrical stimulation of the auditory nerve in man. I. Basic psychophysics.

    PubMed

    Shannon, R V

    1983-08-01

Basic psychophysical measurements were obtained from three patients implanted with multichannel cochlear implants. This paper presents measurements from stimulation of a single channel at a time (either monopolar or bipolar). The shape of the threshold vs. frequency curve can be partially related to the membrane biophysics of the remaining spiral ganglion and/or dendrites. Nerve survival in the region of the electrode may produce some increase in the dynamic range on that electrode. Loudness was related to the stimulus amplitude by a power law with exponents between 1.6 and 3.4, depending on frequency. Intensity discrimination was better than for normal auditory stimulation, but not enough to offset the small dynamic range for electrical stimulation. Measures of temporal integration were comparable to those of normal listeners, indicating a central mechanism that is still intact in implant patients. No frequency analysis of the electrical signal was observed. Each electrode produced a unique pitch sensation, but these sensations were not simply related to the tonotopic position of the stimulated electrode. Pitch increased over more than 4 octaves (for one patient) as the frequency was increased from 100 to 300 Hz, but above 300 Hz no pitch change was observed. Possibly the major limitation of single-channel cochlear implants is the 1-2 ms integration time (probably due to the capacitive properties of the nerve membrane, which acts as a low-pass filter at 100 Hz). Another limitation of electrical stimulation is that there is no spectral analysis of the electrical waveform so that temporal waveform alone determines the effective stimulus.
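The loudness power law reported above (exponents between 1.6 and 3.4) can be estimated by linear regression in log-log coordinates. A sketch with hypothetical, noise-free magnitude-estimation data:

```python
import math

def fit_power_law(amps, loudness):
    """Fit L = k * A^p by least squares in log-log space:
    log L = log k + p * log A, so the slope is the exponent p."""
    xs = [math.log(a) for a in amps]
    ys = [math.log(l) for l in loudness]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    p = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    k = math.exp(my - p * mx)
    return k, p

# Hypothetical magnitude-estimation data generated with exponent 2.5
amps = [0.1, 0.2, 0.4, 0.8, 1.6]           # stimulus current amplitude (mA)
loud = [3.0 * a ** 2.5 for a in amps]       # loudness estimates
k, p = fit_power_law(amps, loud)            # recovers k = 3.0, p = 2.5
```

Per-frequency fits of this form would produce one exponent per stimulation frequency, matching how the abstract reports exponents "depending on frequency".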

  2. The SRI24 multichannel brain atlas: construction and applications

    NASA Astrophysics Data System (ADS)

    Rohlfing, Torsten; Zahr, Natalie M.; Sullivan, Edith V.; Pfefferbaum, Adolf

    2008-03-01

We present a new standard atlas of the human brain based on magnetic resonance images. The atlas was generated using unbiased population registration from high-resolution images obtained by multichannel-coil acquisition at 3T in a group of 24 normal subjects. The final atlas comprises three anatomical channels (T1-weighted, early and late spin echo), three diffusion-related channels (fractional anisotropy, mean diffusivity, diffusion-weighted image), and three tissue probability maps (CSF, gray matter, white matter). The atlas is dynamic in that it is implicitly represented by nonrigid transformations between the 24 subject images, as well as distortion-correction alignments between the image channels in each subject. The atlas can, therefore, be generated at essentially arbitrary image resolutions and orientations (e.g., AC/PC aligned), without compounding interpolation artifacts. We demonstrate in this paper two different applications of the atlas: (a) region definition by label propagation in a fiber tracking study is enabled by the increased sharpness of our atlas compared with other available atlases, and (b) spatial normalization is enabled by its average shape property. In summary, our atlas has unique features and will be made available to the scientific community as a resource and reference system for future imaging-based studies of the human brain.

  3. The impact of multichannel microelectrode recording (MER) in deep brain stimulation of the basal ganglia.

    PubMed

    Kinfe, Thomas M; Vesper, Jan

    2013-01-01

Deep brain stimulation (DBS) of the basal ganglia (Ncl. subthalamicus, Ncl. ventralis intermedius thalami, globus pallidus internus) has become an evidence-based and well-established treatment option in otherwise refractory movement disorders. The Ncl. subthalamicus (STN) is the target of choice in Parkinson's disease. However, a considerable discussion is currently ongoing with regard to the necessity for microelectrode recording (MER) in DBS surgery. The present review provides an overview of deep brain stimulation and MER of the STN in patients with Parkinson's disease. A detailed description is given of the multichannel MER systems currently available for DBS of the basal ganglia, especially of the STN, as a useful tool for target refinement. Furthermore, an overview is given of the historical aspects, spatial mapping of the STN by MER, and its impact on accuracy and precision in current functional stereotactic neurosurgery. The pros of target refinement by MER on the one hand, and the cons, including increased bleeding risk, increased operation time, the choice of local or general anesthesia, and single- versus multichannel microelectrode recording, on the other, are discussed in detail. Finally, the authors favor the use of MER with intraoperative testing combined with imaging to achieve more precise electrode placement, aiming to improve clinical outcome in therapy-resistant movement disorders.

  4. Prediction of Auditory and Visual P300 Brain-Computer Interface Aptitude

    PubMed Central

    Halder, Sebastian; Hammer, Eva Maria; Kleih, Sonja Claudia; Bogdan, Martin; Rosenstiel, Wolfgang; Birbaumer, Niels; Kübler, Andrea

    2013-01-01

    Objective Brain-computer interfaces (BCIs) provide a non-muscular communication channel for patients with late-stage motoneuron disease (e.g., amyotrophic lateral sclerosis (ALS)) or otherwise motor-impaired people and are also used for motor rehabilitation in chronic stroke. Differences in the ability to use a BCI vary from person to person and from session to session. A reliable predictor of aptitude would allow for the selection of suitable BCI paradigms. For this reason, we investigated whether P300 BCI aptitude could be predicted from a short experiment with a standard auditory oddball. Methods Forty healthy participants performed an electroencephalography (EEG) based visual and auditory P300-BCI spelling task in a single session. In addition, prior to each session an auditory oddball was presented. Features extracted from the auditory oddball were analyzed with respect to predictive power for BCI aptitude. Results Correlation between auditory oddball response and P300 BCI accuracy revealed a strong relationship between accuracy and N2 amplitude and the amplitude of a late ERP component between 400 and 600 ms. Interestingly, the P3 amplitude of the auditory oddball response was not correlated with accuracy. Conclusions Event-related potentials recorded during a standard auditory oddball session moderately predict aptitude in an auditory P300 BCI and strongly in a visual P300 BCI. The predictor will allow for faster paradigm selection. Significance Our method will reduce strain on patients because unsuccessful training may be avoided, provided the results can be generalized to the patient population. PMID:23457444

  5. Using Auditory Steady State Responses to Outline the Functional Connectivity in the Tinnitus Brain

    PubMed Central

    Schlee, Winfried; Weisz, Nathan; Bertrand, Olivier; Hartmann, Thomas; Elbert, Thomas

    2008-01-01

    Background Tinnitus is an auditory phantom perception that is most likely generated in the central nervous system. Most tinnitus research has concentrated on the auditory system. However, it was recently suggested that non-auditory structures are also involved in a global network that encodes subjective tinnitus. We tested this assumption using auditory steady state responses to entrain the tinnitus network and investigated long-range functional connectivity across various non-auditory brain regions. Methods and Findings Using whole-head magnetoencephalography we investigated cortical connectivity by means of phase synchronization in tinnitus subjects and healthy controls. We found evidence for a deviating pattern of long-range functional connectivity in tinnitus that was strongly correlated with individual ratings of the tinnitus percept. Phase couplings between the anterior cingulum and the right frontal lobe, and between the anterior cingulum and the right parietal lobe, showed significant condition × group interactions and were correlated with individual tinnitus distress ratings only in the tinnitus condition and not in the control conditions. Conclusions To the best of our knowledge this is the first study to demonstrate the existence of a global tinnitus network of long-range cortical connections outside the central auditory system. This result extends the current knowledge of how tinnitus is generated in the brain. We propose that this global extent of the tinnitus network is crucial for the continuous perception of the tinnitus tone, and that a therapeutic intervention able to change this network should result in relief of tinnitus. PMID:19005566

  6. Effects of training and motivation on auditory P300 brain-computer interface performance.

    PubMed

    Baykara, E; Ruf, C A; Fioravanti, C; Käthner, I; Simon, N; Kleih, S C; Kübler, A; Halder, S

    2016-01-01

    Brain-computer interface (BCI) technology aims at helping end-users with severe motor paralysis to communicate with their environment without using the natural output pathways of the brain. For end-users in complete paralysis, loss of gaze control may necessitate non-visual BCI systems. The present study investigated the effect of training on performance with an auditory P300 multi-class speller paradigm. For half of the participants, spatial cues were added to the auditory stimuli to see whether performance can be further optimized. The influence of motivation, mood and workload on performance and P300 component was also examined. In five sessions, 16 healthy participants were instructed to spell several words by attending to animal sounds representing the rows and columns of a 5 × 5 letter matrix. 81% of the participants achieved an average online accuracy of ⩾ 70%. From the first to the fifth session information transfer rates increased from 3.72 bits/min to 5.63 bits/min. Motivation significantly influenced P300 amplitude and online ITR. No significant facilitative effect of spatial cues on performance was observed. Training improves performance in an auditory BCI paradigm. Motivation influences performance and P300 amplitude. The described auditory BCI system may help end-users to communicate independently of gaze control with their environment. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
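    The bits/min figures reported above are conventionally computed with the standard Wolpaw information transfer rate, which combines the number of selectable classes, the selection accuracy, and the time per selection. A sketch of that formula (the 5 × 5 matrix implies N = 25 classes; the trial duration used below is illustrative, not taken from the study):

```python
import math

def wolpaw_itr(n_classes, accuracy, trial_s):
    """Wolpaw information transfer rate in bits/min for an N-class speller.

    Accuracy at or below chance (1/N) is clamped to 0 bits/min.
    """
    n, p = n_classes, accuracy
    if p <= 1.0 / n:
        return 0.0
    bits = math.log2(n) + p * math.log2(p)
    if p < 1.0:
        bits += (1.0 - p) * math.log2((1.0 - p) / (n - 1))
    return bits * 60.0 / trial_s
```

    At 100% accuracy with one selection per minute on 25 classes, this yields log2(25) ≈ 4.64 bits/min; higher accuracy or shorter trials raise the rate.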

  7. High-density EEG characterization of brain responses to auditory rhythmic stimuli during wakefulness and NREM sleep.

    PubMed

    Lustenberger, Caroline; Patel, Yogi A; Alagapan, Sankaraleengam; Page, Jessica M; Price, Betsy; Boyle, Michael R; Fröhlich, Flavio

    2018-04-01

    Auditory rhythmic sensory stimulation modulates brain oscillations by increasing phase-locking to the temporal structure of the stimuli and by increasing the power of specific frequency bands, resulting in Auditory Steady State Responses (ASSR). The ASSR is altered in different diseases of the central nervous system such as schizophrenia. However, before the ASSR can be used as a biological marker for disease states, it needs to be understood how different vigilance states and underlying brain activity affect the ASSR. Here, we compared the effects of auditory rhythmic stimuli on EEG brain activity during wake and NREM sleep, investigated the influence of the presence of dominant sleep rhythms on the ASSR, and delineated the topographical distribution of these modulations. Participants (14 healthy males, 20-33 years) completed, on the same day, a 60-min nap session and two 30-min wakefulness sessions (before and after the nap). During these sessions, amplitude-modulated (AM) white noise auditory stimuli at different frequencies were applied. High-density EEG was continuously recorded and time-frequency analyses were performed to assess the ASSR during wakefulness and NREM periods. Our analysis revealed that, depending on the electrode location, the stimulation frequency applied, and the window/frequencies analysed, the ASSR was significantly modulated by sleep pressure (before vs. after sleep), vigilance state (wake vs. NREM sleep), and the presence of slow wave activity and sleep spindles. Furthermore, AM stimuli increased spindle activity during NREM sleep but not during wakefulness. Thus, (1) electrode location, sleep history, vigilance state and ongoing brain activity need to be carefully considered when investigating the ASSR, and (2) auditory rhythmic stimuli during sleep might represent a powerful tool to boost sleep spindles. Copyright © 2017 Elsevier Inc. All rights reserved.
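    An amplitude-modulated white-noise stimulus of the kind described above can be sketched in a few lines: broadband noise whose envelope oscillates sinusoidally at the modulation frequency. This is a hypothetical generator for illustration only; the sample rate and modulation depth are assumptions, not parameters from the study:

```python
import math
import random

def am_white_noise(mod_hz, dur_s, fs=16000, depth=1.0):
    """White noise with a sinusoidal amplitude envelope at mod_hz Hz.

    The envelope is normalized so samples stay within [-1, 1];
    depth=1.0 means full modulation (envelope dips to zero).
    """
    n_samples = int(dur_s * fs)
    out = []
    for n in range(n_samples):
        env = (1.0 + depth * math.sin(2.0 * math.pi * mod_hz * n / fs)) / (1.0 + depth)
        out.append(random.uniform(-1.0, 1.0) * env)
    return out
```

    A 40 Hz modulation rate, for instance, is the classic choice for eliciting a strong ASSR in EEG.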

  8. Comparison of temporal properties of auditory single units in response to cochlear infrared laser stimulation recorded with multi-channel and single tungsten electrodes

    NASA Astrophysics Data System (ADS)

    Tan, Xiaodong; Xia, Nan; Young, Hunter; Richter, Claus-Peter

    2015-02-01

    Auditory prostheses may benefit from Infrared Neural Stimulation (INS) because optical stimulation allows for spatially selective activation of neuron populations. Selective activation of neurons in the cochlear spiral ganglion can be determined in the central nucleus of the inferior colliculus (ICC) because the tonotopic organization of frequencies in the cochlea is maintained throughout the auditory pathway. The activation profile of INS is well represented in the ICC by multichannel electrodes (MCEs). To characterize single unit properties in response to INS, however, single tungsten electrodes (STEs) should be used because of their better signal-to-noise ratio. In this study, we compared the temporal properties of ICC single units recorded with MCEs and STEs in order to characterize the response properties of single auditory neurons in response to INS in guinea pigs. The length along the cochlea stimulated with infrared radiation corresponded to a frequency range of about 0.6 octaves, similar to that recorded with STEs. The temporal properties of single units recorded with MCEs showed higher maximum rates, shorter latencies, and higher firing efficiencies compared to those recorded with STEs. When the preset amplitude threshold for triggering MCE recordings was raised to twice the noise level, the temporal properties of the single units became similar to those obtained with STEs. Indistinguishable neural activities from multiple sources in MCE recordings could be responsible for the response property difference between MCEs and STEs. Thus, caution should be taken in single unit recordings with MCEs.

  9. Cross contrast multi-channel image registration using image synthesis for MR brain images.

    PubMed

    Chen, Min; Carass, Aaron; Jog, Amod; Lee, Junghoon; Roy, Snehashis; Prince, Jerry L

    2017-02-01

    Multi-modal deformable registration is important for many medical image analysis tasks such as atlas alignment, image fusion, and distortion correction. Whereas a conventional method would register images with different modalities using modality independent features or information theoretic metrics such as mutual information, this paper presents a new framework that addresses the problem using a two-channel registration algorithm capable of using mono-modal similarity measures such as sum of squared differences or cross-correlation. To make it possible to use these same-modality measures, image synthesis is used to create proxy images for the opposite modality as well as intensity-normalized images from each of the two available images. The new deformable registration framework was evaluated by performing intra-subject deformation recovery, intra-subject boundary alignment, and inter-subject label transfer experiments using multi-contrast magnetic resonance brain imaging data. Three different multi-channel registration algorithms were evaluated, revealing that the framework is robust to the multi-channel deformable registration algorithm that is used. With a single exception, all results demonstrated improvements when compared against single channel registrations using the same algorithm with mutual information. Copyright © 2016 Elsevier B.V. All rights reserved.
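    The mono-modal similarity measures this framework relies on, sum of squared differences and cross-correlation, are simple to state. A minimal sketch over 1-D intensity lists (real registration packages evaluate these over full image volumes, often within an optimization loop; the function names here are illustrative):

```python
def ssd(a, b):
    """Sum of squared differences between two intensity profiles.

    Lower values mean more similar images; 0 means identical.
    """
    return sum((x - y) ** 2 for x, y in zip(a, b))

def ncc(a, b):
    """Normalized cross-correlation in [-1, 1]; 1 means a perfect
    linear (intensity-scaled) match, which is why synthesis of a
    same-modality proxy image makes this measure applicable."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)
```

    Note that NCC is invariant to global intensity scaling (a profile and its doubled copy correlate at 1.0), whereas SSD is not, which is one reason the two measures can behave differently during registration.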

  10. Discrimination of timbre in early auditory responses of the human brain.

    PubMed

    Seol, Jaeho; Oh, MiAe; Kim, June Sic; Jin, Seung-Hyun; Kim, Sun Il; Chung, Chun Kee

    2011-01-01

    The issue of how differences in timbre are represented in the neural response has not been well addressed, particularly with regard to the relevant brain mechanisms. Here we employed phasing and clipping of tones to produce auditory stimuli differing along the multiple dimensions of timbre. We investigated the auditory response and sensory gating using magnetoencephalography (MEG). Thirty-five healthy subjects without hearing deficit participated in the experiments. Tone pairs, either identical or differing in timbre, were presented in a conditioning (S1)-testing (S2) paradigm with an interval of 500 ms. The magnitudes of the auditory M50 and M100 responses differed with timbre in both hemispheres. This result supports the view that timbre, at least as varied by phasing and clipping, is discriminated in early auditory processing. The effect of S1 on the response to the second stimulus in a pair occurred in the M100 of the left hemisphere, whereas only in the right hemisphere did both M50 and M100 responses to S2 reflect whether the two stimuli in a pair were the same or not. Both M50 and M100 magnitudes differed with presentation order (S1 vs. S2) for both same and different conditions in both hemispheres. Our results demonstrate that the auditory response depends on timbre characteristics. Moreover, auditory sensory gating is determined not by the stimulus that directly evokes the response, but rather by whether or not the two stimuli are identical in timbre.

  11. Enhanced peripheral visual processing in congenitally deaf humans is supported by multiple brain regions, including primary auditory cortex.

    PubMed

    Scott, Gregory D; Karns, Christina M; Dow, Mark W; Stevens, Courtney; Neville, Helen J

    2014-01-01

    Brain reorganization associated with altered sensory experience clarifies the critical role of neuroplasticity in development. An example is enhanced peripheral visual processing associated with congenital deafness, but the neural systems supporting this have not been fully characterized. A gap in our understanding of deafness-enhanced peripheral vision is the contribution of primary auditory cortex. Previous studies of auditory cortex that use anatomical normalization across participants were limited by inter-subject variability of Heschl's gyrus. In addition to reorganized auditory cortex (cross-modal plasticity), a second gap in our understanding is the contribution of altered modality-specific cortices (visual intramodal plasticity in this case), as well as supramodal and multisensory cortices, especially when target detection is required across contrasts. Here we address these gaps by comparing fMRI signal change for peripheral vs. perifoveal visual stimulation (11-15° vs. 2-7°) in congenitally deaf and hearing participants in a blocked experimental design with two analytical approaches: a Heschl's gyrus region of interest analysis and a whole brain analysis. Our results using individually-defined primary auditory cortex (Heschl's gyrus) indicate that fMRI signal change for more peripheral stimuli was greater than perifoveal in deaf but not in hearing participants. Whole-brain analyses revealed differences between deaf and hearing participants for peripheral vs. perifoveal visual processing in extrastriate visual cortex including primary auditory cortex, MT+/V5, superior-temporal auditory, and multisensory and/or supramodal regions, such as posterior parietal cortex (PPC), frontal eye fields, anterior cingulate, and supplementary eye fields. Overall, these data demonstrate the contribution of neuroplasticity in multiple systems including primary auditory cortex, supramodal, and multisensory regions, to altered visual processing in congenitally deaf adults.

  12. Brain activity in patients with unilateral sensorineural hearing loss during auditory perception in noisy environments.

    PubMed

    Yamamoto, Katsura; Tabei, Kenichi; Katsuyama, Narumi; Taira, Masato; Kitamura, Ken

    2017-01-01

    Patients with unilateral sensorineural hearing loss (UHL) often complain of hearing difficulties in noisy environments. To clarify this, we compared brain activation in patients with UHL with that of healthy participants during speech perception in a noisy environment, using functional magnetic resonance imaging (fMRI). A pure tone of 1 kHz, or 14 monosyllabic speech sounds at 65‒70 dB accompanied by MRI scan noise at 75 dB, were presented to both ears for 1 second each, and participants were instructed to press a button when they could hear the pure tone or speech sound. Based on the activation areas of healthy participants, the primary auditory cortex, the anterior auditory association areas, and the posterior auditory association areas were set as regions of interest (ROI). In each of these regions, we compared brain activity between healthy participants and patients with UHL. The results revealed that patients with right-side UHL showed different brain activity in the right posterior auditory area during perception of pure tones versus monosyllables. Clinically, left-side and right-side UHL are not presently differentiated and are similarly diagnosed and treated; however, the results of this study suggest that a laterality-specific treatment should be chosen.

  13. Resolving the neural dynamics of visual and auditory scene processing in the human brain: a methodological approach

    PubMed Central

    Teng, Santani

    2017-01-01

    In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044019

  14. Resolving the neural dynamics of visual and auditory scene processing in the human brain: a methodological approach.

    PubMed

    Cichy, Radoslaw Martin; Teng, Santani

    2017-02-19

    In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.

  15. Human auditory evoked potentials in the assessment of brain function during major cardiovascular surgery.

    PubMed

    Rodriguez, Rosendo A

    2004-06-01

    Focal neurologic and intellectual deficits or memory problems are relatively frequent after cardiac surgery. These complications have been associated with cerebral hypoperfusion, embolization, and inflammation that occur during or after surgery. Auditory evoked potentials, a neurophysiologic technique that evaluates the function of neural structures from the auditory nerve to the cortex, provide useful information about the functional status of the brain during major cardiovascular procedures. Skepticism regarding the presence of artifacts or difficulty in their interpretation has outweighed considerations of its potential utility and noninvasiveness. This paper reviews the evidence of their potential applications in several aspects of the management of cardiac surgery patients. The sensitivity of auditory evoked potentials to the effects of changes in brain temperature makes them useful for monitoring cerebral hypothermia and rewarming during cardiopulmonary bypass. The close relationship between evoked potential waveforms and specific anatomic structures facilitates the assessment of the functional integrity of the central nervous system in cardiac surgery patients. This feature may also be relevant in the management of critical patients under sedation and coma or in the evaluation of their prognosis during critical care. Their objectivity, reproducibility, and relative insensitivity to learning effects make auditory evoked potentials attractive for the cognitive assessment of cardiac surgery patients. From a clinical perspective, auditory evoked potentials represent an additional window for the study of underlying cerebral processes in healthy and diseased patients. From a research standpoint, this technology offers opportunities for a better understanding of the particular cerebral deficits associated with patients who are undergoing major cardiovascular procedures.

  16. Connectivity in the human brain dissociates entropy and complexity of auditory inputs

    PubMed Central

    Nastase, Samuel A.; Iacovella, Vittorio; Davis, Ben; Hasson, Uri

    2015-01-01

    Complex systems are described according to two central dimensions: (a) the randomness of their output, quantified via entropy; and (b) their complexity, which reflects the organization of a system's generators. Whereas some approaches hold that complexity can be reduced to uncertainty or entropy, an axiom of complexity science is that signals with very high or very low entropy are generated by relatively non-complex systems, while complex systems typically generate outputs with entropy peaking between these two extremes. In understanding their environment, individuals would benefit from coding for both input entropy and complexity; entropy indexes uncertainty and can inform probabilistic coding strategies, whereas complexity reflects a concise and abstract representation of the underlying environmental configuration, which can serve independent purposes, e.g., as a template for generalization and rapid comparisons between environments. Using functional neuroimaging, we demonstrate that, in response to passively processed auditory inputs, functional integration patterns in the human brain track both the entropy and complexity of the auditory signal. Connectivity between several brain regions scaled monotonically with input entropy, suggesting sensitivity to uncertainty, whereas connectivity between other regions tracked entropy in a convex manner consistent with sensitivity to input complexity. These findings suggest that the human brain simultaneously tracks the uncertainty of sensory data and effectively models their environmental generators. PMID:25536493
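    Entropy as used here for a discrete auditory sequence is the Shannon entropy of its symbol distribution. A minimal illustration (note that, as the abstract stresses, this quantifies randomness of the output only; it says nothing about the complexity of the generator, which peaks at intermediate entropy):

```python
import math
from collections import Counter

def shannon_entropy(seq):
    """Shannon entropy in bits per symbol of a discrete sequence."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

    A constant sequence has entropy 0, a fair alternation of two symbols has entropy 1 bit/symbol, and a uniform draw over four symbols has 2 bits/symbol, the two extremes and the middle of the entropy range the study manipulates.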

  17. Auditory brain development in premature infants: the importance of early experience.

    PubMed

    McMahon, Erin; Wintermark, Pia; Lahav, Amir

    2012-04-01

    Preterm infants in the neonatal intensive care unit (NICU) often close their eyes in response to bright lights, but they cannot close their ears in response to loud sounds. The sudden transition from the womb to the overly noisy world of the NICU increases the vulnerability of these high-risk newborns. There is a growing concern that the excess noise typically experienced by NICU infants disrupts their growth and development, putting them at risk for hearing, language, and cognitive disabilities. Preterm neonates are especially sensitive to noise because their auditory system is at a critical period of neurodevelopment, and they are no longer shielded by maternal tissue. This paper discusses the developmental milestones of the auditory system and suggests ways to enhance the quality control and type of sounds delivered to NICU infants. We argue that positive auditory experience is essential for early brain maturation and may be a contributing factor for healthy neurodevelopment. Further research is needed to optimize the hospital environment for preterm newborns and to increase their potential to develop into healthy children. © 2012 New York Academy of Sciences.

  18. Multi-channel linear descriptors for event-related EEG collected in brain computer interface.

    PubMed

    Pei, Xiao-mei; Zheng, Chong-xun; Xu, Jin; Bin, Guang-yu; Wang, Hong-wu

    2006-03-01

    Event-related EEG data within 8-30 Hz, recorded during imagination of left or right hand movement, were investigated using three multi-channel linear descriptors: spatial complexity (omega), field power (sigma), and frequency of field changes (phi). The analyses indicate that a two-channel version of omega, sigma and phi can reflect the antagonistic ERD/ERS patterns over contralateral and ipsilateral areas and also characterize different phases of the changing brain states in the event-related paradigm. Based on the selected two-channel linear descriptors, the left and right hand motor imagery tasks were classified with satisfactory results, which testifies to the validity of the three linear descriptors omega, sigma and phi for characterizing event-related EEG. These preliminary results show that omega and sigma, together with phi, have good separability for left and right hand motor imagery tasks and could be considered for classification of two classes of EEG patterns in brain computer interface applications.
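    For the two-channel case, two of these descriptors have a simple closed form. The sketch below follows the usual Wackermann-style definitions: Sigma as mean field power and Omega as the exponential entropy of the normalized eigenvalue spectrum of the channel covariance; Phi, which requires the temporal derivative of the field, is omitted. This is an illustrative reconstruction under those assumptions, not the authors' code:

```python
import math

def two_channel_descriptors(x, y):
    """Two-channel linear descriptors from the 2x2 channel covariance.

    Returns (sigma, omega): sigma is the root mean channel variance
    (field power); omega ranges from 1 (fully correlated channels,
    one active source component) to 2 (uncorrelated, equal power).
    Assumes the signals have nonzero variance.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cxx = sum((a - mx) ** 2 for a in x) / n
    cyy = sum((b - my) ** 2 for b in y) / n
    cxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    # Closed-form eigenvalues of the symmetric matrix [[cxx, cxy], [cxy, cyy]].
    tr, det = cxx + cyy, cxx * cyy - cxy * cxy
    disc = max(tr * tr - 4.0 * det, 0.0) ** 0.5
    lam = [(tr + disc) / 2.0, (tr - disc) / 2.0]
    sigma = (tr / 2.0) ** 0.5
    lam_n = [l / tr for l in lam if l > 1e-12]   # normalized spectrum
    omega = math.exp(-sum(l * math.log(l) for l in lam_n))
    return sigma, omega
```

    Identical channels therefore give omega = 1, while uncorrelated equal-power channels give omega = 2, the spread that lets omega track ERD/ERS asymmetries between a contralateral and an ipsilateral electrode.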

  19. Hippocampal volume and auditory attention on a verbal memory task with adult survivors of pediatric brain tumor.

    PubMed

    Jayakar, Reema; King, Tricia Z; Morris, Robin; Na, Sabrina

    2015-03-01

    We examined the nature of verbal memory deficits and the possible hippocampal underpinnings in long-term adult survivors of childhood brain tumor. 35 survivors (M = 24.10 ± 4.93 years at testing; 54% female), on average 15 years post-diagnosis, and 59 typically developing adults (M = 22.40 ± 4.35 years, 54% female) participated. Automated FMRIB Software Library (FSL) tools were used to measure hippocampal, putamen, and whole brain volumes. The California Verbal Learning Test-Second Edition (CVLT-II) was used to assess verbal memory. Hippocampal, F(1, 91) = 4.06, ηp² = .04; putamen, F(1, 91) = 11.18, ηp² = .11; and whole brain, F(1, 92) = 18.51, ηp² = .17, volumes were significantly lower for survivors than controls (p < .05). Hippocampus and putamen volumes were significantly correlated (r = .62, p < .001) with each other, but not with total brain volume (r = .09; r = .08), for survivors and controls. Verbal memory indices of auditory attention list span (Trial 1: F(1, 92) = 12.70, η² = .12) and final list learning (Trial 5: F(1, 92) = 6.01, η² = .06) were significantly lower for survivors (p < .05). Total hippocampal volume in survivors was significantly correlated (r = .43, p = .01) with auditory attention, but none of the other CVLT-II indices. Secondary analyses for the effect of treatment factors are presented. Volumetric differences between survivors and controls exist for the whole brain and for subcortical structures on average 15 years post-diagnosis. Treatment factors seem to have a unique effect on subcortical structures. Memory differences between survivors and controls are largely contingent upon auditory attention list span. Only hippocampal volume is associated with the auditory attention list span component of verbal memory. These findings are particularly robust for survivors treated with radiation. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  20. The musical centers of the brain: Vladimir E. Larionov (1857-1929) and the functional neuroanatomy of auditory perception.

    PubMed

    Triarhou, Lazaros C; Verina, Tatyana

    2016-11-01

    In 1899 a landmark paper entitled "On the musical centers of the brain" was published in Pflügers Archiv, based on work carried out in the Anatomo-Physiological Laboratory of the Neuropsychiatric Clinic of Vladimir M. Bekhterev (1857-1927) in St. Petersburg, Imperial Russia. The author of that paper was Vladimir E. Larionov (1857-1929), a military doctor and devoted brain scientist, who pursued the problem of the localization of function in the canine and human auditory cortex. His data detailed the existence of tonotopy in the temporal lobe and further demonstrated centrifugal auditory pathways emanating from the auditory cortex and directed to the opposite hemisphere and lower brain centers. Larionov's discoveries have been largely considered as findings of the Bekhterev school. Perhaps this is why there are limited resources on Larionov, especially keeping in mind his military medical career and the fact that after 1917 he just seems to have practiced otorhinolaryngology in Odessa. Larionov died two years after Bekhterev's mysterious death of 1927. The present study highlights the pioneering contributions of Larionov to auditory neuroscience, trusting that the life and work of Vladimir Efimovich will finally, and deservedly, emerge from the shadow of his celebrated master, Vladimir Mikhailovich. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Auditory-vocal mirroring in songbirds.

    PubMed

    Mooney, Richard

    2014-01-01

    Mirror neurons are theorized to serve as a neural substrate for spoken language in humans, but the existence and functions of auditory-vocal mirror neurons in the human brain remain largely matters of speculation. Songbirds resemble humans in their capacity for vocal learning and depend on their learned songs to facilitate courtship and individual recognition. Recent neurophysiological studies have detected putative auditory-vocal mirror neurons in a sensorimotor region of the songbird's brain that plays an important role in expressive and receptive aspects of vocal communication. This review discusses the auditory and motor-related properties of these cells, considers their potential role in song learning and communication in relation to classical studies of birdsong, and points to the circuit and developmental mechanisms that may give rise to auditory-vocal mirroring in the songbird's brain.

  2. "Where do auditory hallucinations come from?"--a brain morphometry study of schizophrenia patients with inner or outer space hallucinations.

    PubMed

    Plaze, Marion; Paillère-Martinot, Marie-Laure; Penttilä, Jani; Januel, Dominique; de Beaurepaire, Renaud; Bellivier, Franck; Andoh, Jamila; Galinowski, André; Gallarda, Thierry; Artiges, Eric; Olié, Jean-Pierre; Mangin, Jean-François; Martinot, Jean-Luc; Cachia, Arnaud

    2011-01-01

    Auditory verbal hallucinations are a cardinal symptom of schizophrenia. Bleuler and Kraepelin distinguished 2 main classes of hallucinations: hallucinations heard outside the head (outer space, or external, hallucinations) and hallucinations heard inside the head (inner space, or internal, hallucinations). This distinction has been confirmed by recent phenomenological studies that identified 3 independent dimensions in auditory hallucinations: language complexity, self-other misattribution, and spatial location. Brain imaging studies in schizophrenia patients with auditory hallucinations have already investigated language complexity and self-other misattribution, but the neural substrate of hallucination spatial location remains unknown. Magnetic resonance images of 45 right-handed patients with schizophrenia and persistent auditory hallucinations and 20 healthy right-handed subjects were acquired. Two homogeneous subgroups of patients were defined based on the hallucination spatial location: patients with only outer space hallucinations (N=12) and patients with only inner space hallucinations (N=15). Between-group differences were then assessed using 2 complementary brain morphometry approaches: voxel-based morphometry and sulcus-based morphometry. Convergent anatomical differences were detected between the patient subgroups in the right temporoparietal junction (rTPJ). In comparison to healthy subjects, opposite deviations in white matter volumes and sulcus displacements were found in patients with inner space hallucination and patients with outer space hallucination. The current results indicate that spatial location of auditory hallucinations is associated with the rTPJ anatomy, a key region of the "where" auditory pathway. The detected tilt in the sulcal junction suggests deviations during early brain maturation, when the superior temporal sulcus and its anterior terminal branch appear and merge.

  3. Evoked potential correlates of selective attention with multi-channel auditory inputs

    NASA Technical Reports Server (NTRS)

    Schwent, V. L.; Hillyard, S. A.

    1975-01-01

    Ten subjects were presented with random, rapid sequences of four auditory tones which were separated in pitch and apparent spatial position. The N1 component of the auditory vertex evoked potential (EP) measured relative to a baseline was observed to increase with attention. It was concluded that the N1 enhancement reflects a finely tuned selective attention to one stimulus channel among several concurrent, competing channels. This EP enhancement probably increases with increased information load on the subject.

  4. Mother’s voice and heartbeat sounds elicit auditory plasticity in the human brain before full gestation

    PubMed Central

    Webb, Alexandra R.; Heller, Howard T.; Benson, Carol B.; Lahav, Amir

    2015-01-01

    Brain development is largely shaped by early sensory experience. However, it is currently unknown whether, how early, and to what extent the newborn’s brain is shaped by exposure to maternal sounds when the brain is most sensitive to early life programming. The present study examined this question in 40 infants born extremely prematurely (between 25- and 32-wk gestation) in the first month of life. Newborns were randomized to receive auditory enrichment in the form of audio recordings of maternal sounds (including their mother’s voice and heartbeat) or routine exposure to hospital environmental noise. The groups were otherwise medically and demographically comparable. Cranial ultrasonography measurements were obtained at 30 ± 3 d of life. Results show that newborns exposed to maternal sounds had a significantly larger auditory cortex (AC) bilaterally compared with control newborns receiving standard care. The magnitude of the right and left AC thickness was significantly correlated with gestational age but not with the duration of sound exposure. Measurements of head circumference and the widths of the frontal horn (FH) and the corpus callosum (CC) were not significantly different between the two groups. This study provides evidence for experience-dependent plasticity in the primary AC before the brain has reached full-term maturation. Our results demonstrate that despite the immaturity of the auditory pathways, the AC is more adaptive to maternal sounds than environmental noise. Further studies are needed to better understand the neural processes underlying this early brain plasticity and its functional implications for future hearing and language development. PMID:25713382

  5. Turning down the noise: the benefit of musical training on the aging auditory brain.

    PubMed

    Alain, Claude; Zendel, Benjamin Rich; Hutka, Stefanie; Bidelman, Gavin M

    2014-02-01

    Age-related decline in hearing abilities is a ubiquitous part of aging, and commonly impacts speech understanding, especially when there are competing sound sources. While such age effects are partially due to changes within the cochlea, difficulties typically exist beyond measurable hearing loss, suggesting that central brain processes, as opposed to simple peripheral mechanisms (e.g., hearing sensitivity), play a critical role in governing hearing abilities late into life. Current training regimens aimed at improving central auditory processing abilities have experienced limited success in promoting listening benefits. Interestingly, recent studies suggest that in young adults, musical training positively modifies neural mechanisms, providing robust, long-lasting improvements to hearing abilities as well as to non-auditory tasks that engage cognitive control. These results offer the encouraging possibility that musical training might be used to counteract age-related changes in auditory cognition commonly observed in older adults. Here, we review studies that have examined the effects of age and musical experience on auditory cognition with an emphasis on auditory scene analysis. We infer that musical training may offer potential benefits to complex listening and might be utilized as a means to delay or even attenuate declines in auditory perception and cognition that often emerge later in life. Copyright © 2013 Elsevier B.V. All rights reserved.

  6. Can You Hear Me Now? Musical Training Shapes Functional Brain Networks for Selective Auditory Attention and Hearing Speech in Noise

    PubMed Central

    Strait, Dana L.; Kraus, Nina

    2011-01-01

    Even in the quietest of rooms, our senses are perpetually inundated by a barrage of sounds, requiring the auditory system to adapt to a variety of listening conditions in order to extract signals of interest (e.g., one speaker's voice amidst others). Brain networks that promote selective attention are thought to sharpen the neural encoding of a target signal, suppressing competing sounds and enhancing perceptual performance. Here, we ask: does musical training benefit cortical mechanisms that underlie selective attention to speech? To answer this question, we assessed the impact of selective auditory attention on cortical auditory-evoked response variability in musicians and non-musicians. Outcomes indicate strengthened brain networks for selective auditory attention in musicians in that musicians but not non-musicians demonstrate decreased prefrontal response variability with auditory attention. Results are interpreted in the context of previous work documenting perceptual and subcortical advantages in musicians for the hearing and neural encoding of speech in background noise. Musicians’ neural proficiency for selectively engaging and sustaining auditory attention to language indicates a potential benefit of music for auditory training. Given the importance of auditory attention for the development and maintenance of language-related skills, musical training may aid in the prevention, habilitation, and remediation of individuals with a wide range of attention-based language, listening and learning impairments. PMID:21716636

  7. Auditory-musical processing in autism spectrum disorders: a review of behavioral and brain imaging studies.

    PubMed

    Ouimet, Tia; Foster, Nicholas E V; Tryfon, Ana; Hyde, Krista L

    2012-04-01

    Autism spectrum disorder (ASD) is a complex neurodevelopmental condition characterized by atypical social and communication skills, repetitive behaviors, and atypical visual and auditory perception. Studies in vision have reported enhanced detailed ("local") processing but diminished holistic ("global") processing of visual features in ASD. Individuals with ASD also show enhanced processing of simple visual stimuli but diminished processing of complex visual stimuli. Relative to the visual domain, auditory global-local distinctions, and the effects of stimulus complexity on auditory processing in ASD, are less clear. However, one remarkable finding is that many individuals with ASD have enhanced musical abilities, such as superior pitch processing. This review provides a critical evaluation of behavioral and brain imaging studies of auditory processing with respect to current theories in ASD. We have focused on auditory-musical processing in terms of global versus local processing and simple versus complex sound processing. This review contributes to a better understanding of auditory processing differences in ASD. A deeper comprehension of sensory perception in ASD is key to better defining ASD phenotypes and, in turn, may lead to better interventions. © 2012 New York Academy of Sciences.

  8. Connectivity in the human brain dissociates entropy and complexity of auditory inputs.

    PubMed

    Nastase, Samuel A; Iacovella, Vittorio; Davis, Ben; Hasson, Uri

    2015-03-01

    Complex systems are described according to two central dimensions: (a) the randomness of their output, quantified via entropy; and (b) their complexity, which reflects the organization of a system's generators. Whereas some approaches hold that complexity can be reduced to uncertainty or entropy, an axiom of complexity science is that signals with very high or very low entropy are generated by relatively non-complex systems, while complex systems typically generate outputs with entropy peaking between these two extremes. In understanding their environment, individuals would benefit from coding for both input entropy and complexity; entropy indexes uncertainty and can inform probabilistic coding strategies, whereas complexity reflects a concise and abstract representation of the underlying environmental configuration, which can serve independent purposes, e.g., as a template for generalization and rapid comparisons between environments. Using functional neuroimaging, we demonstrate that, in response to passively processed auditory inputs, functional integration patterns in the human brain track both the entropy and complexity of the auditory signal. Connectivity between several brain regions scaled monotonically with input entropy, suggesting sensitivity to uncertainty, whereas connectivity between other regions tracked entropy in a convex manner consistent with sensitivity to input complexity. These findings suggest that the human brain simultaneously tracks the uncertainty of sensory data and effectively models their environmental generators. Copyright © 2014. Published by Elsevier Inc.
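The entropy dimension described above is the randomness of a system's output, conventionally quantified as Shannon entropy of the output's symbol distribution. A minimal sketch (function name and toy sequences are illustrative, not from the study):

```python
import math
from collections import Counter

def shannon_entropy(sequence):
    """Shannon entropy (in bits) of a sequence's empirical symbol distribution."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(shannon_entropy("abcd"))  # uniform over 4 symbols -> 2.0 bits (maximal)
print(shannon_entropy("aaaa") == 0.0)  # deterministic output -> 0 bits
```

Note that, as the abstract's axiom states, both of these extremes (maximal and zero entropy) can be produced by very simple generators; complexity is a separate measure of the generator's organization.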

  9. Metabotropic glutamate receptors in auditory processing

    PubMed Central

    Lu, Yong

    2014-01-01

    As the major excitatory neurotransmitter used in the vertebrate brain, glutamate activates ionotropic and metabotropic glutamate receptors (mGluRs), which mediate fast and slow neuronal actions, respectively. Important modulatory roles of mGluRs have been shown in many brain areas, and drugs targeting mGluRs have been developed for treatment of brain disorders. Here, I review the studies on mGluRs in the auditory system. Anatomical expression of mGluRs in the cochlear nucleus has been well characterized, while data for other auditory nuclei await more systematic investigations at both the light and electron microscopy levels. The physiology of mGluRs has been extensively studied using in vitro brain slice preparations, with a focus on the lower auditory brainstem in both mammals and birds. These in vitro physiological studies have revealed that mGluRs participate in neurotransmission, regulate ionic homeostasis, induce synaptic plasticity, and maintain the balance between excitation and inhibition in a variety of auditory structures. However, very few in vivo physiological studies on mGluRs in auditory processing have been undertaken at the systems level. Many questions regarding the essential roles of mGluRs in auditory processing still remain unanswered and more rigorous basic research is warranted. PMID:24909898

  10. Preliminary Studies on Differential Expression of Auditory Functional Genes in the Brain After Repeated Blast Exposures

    DTIC Science & Technology

    2012-01-01

    Exposed mice showed significant injury (Figure); the injury level was greater on the medial contralateral side of the brain than on the ipsilateral side. [JRRD, Volume 49, Number 7, 2012, Pages 1153–1162.] The study examined the differential expression of auditory functional genes in the brain after repeated blast exposures, assaying hearing-related genes in different regions of the brain 6 h after repeated blast exposures in mice. Preliminary data showed that the expression of …

  11. Multichannel spatial auditory display for speech communications

    NASA Technical Reports Server (NTRS)

    Begault, D. R.; Erbe, T.; Wenzel, E. M. (Principal Investigator)

    1994-01-01

    A spatial auditory display for multiple speech communications was developed at NASA/Ames Research Center. Input is spatialized by the use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four-letter call signs used by launch personnel at NASA against diotic speech babble. Spatial positions at 30 degrees azimuth increments were evaluated. The results from eight subjects showed a maximum intelligibility improvement of about 6-7 dB when the signal was spatialized to 60 or 90 degrees azimuth positions.

  12. Multichannel spatial auditory display for speech communications.

    PubMed

    Begault, D R; Erbe, T

    1994-10-01

    A spatial auditory display for multiple speech communications was developed at NASA/Ames Research Center. Input is spatialized by the use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four-letter call signs used by launch personnel at NASA against diotic speech babble. Spatial positions at 30 degrees azimuth increments were evaluated. The results from eight subjects showed a maximum intelligibility improvement of about 6-7 dB when the signal was spatialized to 60 or 90 degrees azimuth positions.

  13. Multichannel Spatial Auditory Display for Speech Communications

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Erbe, Tom

    1994-01-01

    A spatial auditory display for multiple speech communications was developed at NASA/Ames Research Center. Input is spatialized by the use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four-letter call signs used by launch personnel at NASA against diotic speech babble. Spatial positions at 30 degree azimuth increments were evaluated. The results from eight subjects showed a maximum intelligibility improvement of about 6-7 dB when the signal was spatialized to 60 or 90 degree azimuth positions.
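The adaptive staircase method these records mention adjusts the signal level up after an error and down after a correct response, homing in on a listener's threshold. A minimal deterministic simulation (the simulated listener, step size, and stopping rule are illustrative assumptions, not the study's actual parameters):

```python
def run_staircase(threshold=-6.0, start=0.0, step=2.0, max_reversals=8):
    """1-up/1-down adaptive staircase with a deterministic simulated listener
    (correct whenever the level is at or above threshold); tracks the point
    where responses flip, estimated as the mean of the last reversal levels."""
    level, direction, reversals = start, None, []
    for _ in range(200):
        correct = level >= threshold           # simulated listener response
        new_dir = -1 if correct else +1        # correct -> make the task harder
        if direction is not None and new_dir != direction:
            reversals.append(level)            # direction change = a reversal
            if len(reversals) >= max_reversals:
                break
        direction = new_dir
        level += new_dir * step
    return sum(reversals[-4:]) / 4             # mean of last 4 reversal levels

print(run_staircase())  # -7.0: the estimate brackets the -6 dB threshold
```

Real procedures use probabilistic listeners and rules such as 2-down/1-up to target other points on the psychometric function; this sketch only shows the reversal-tracking mechanic.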

  14. Auditory motion-specific mechanisms in the primate brain

    PubMed Central

    Baumann, Simon; Dheerendra, Pradeep; Joly, Olivier; Hunter, David; Balezeau, Fabien; Sun, Li; Rees, Adrian; Petkov, Christopher I.; Thiele, Alexander; Griffiths, Timothy D.

    2017-01-01

    This work examined the mechanisms underlying auditory motion processing in the auditory cortex of awake monkeys using functional magnetic resonance imaging (fMRI). We tested to what extent auditory motion analysis can be explained by the linear combination of static spatial mechanisms, spectrotemporal processes, and their interaction. We found that the posterior auditory cortex, including A1 and the surrounding caudal belt and parabelt, is involved in auditory motion analysis. Static spatial and spectrotemporal processes were able to fully explain motion-induced activation in most parts of the auditory cortex, including A1, but not in circumscribed regions of the posterior belt and parabelt cortex. We show that in these regions motion-specific processes contribute to the activation, providing the first demonstration that auditory motion is not simply deduced from changes in static spatial location. These results demonstrate that parallel mechanisms for motion and static spatial analysis coexist within the auditory dorsal stream. PMID:28472038

  15. Touch activates human auditory cortex.

    PubMed

    Schürmann, Martin; Caetano, Gina; Hlushchuk, Yevhen; Jousmäki, Veikko; Hari, Riitta

    2006-05-01

    Vibrotactile stimuli can facilitate hearing, both in hearing-impaired and in normally hearing people. Accordingly, the sounds of hands exploring a surface contribute to the explorer's haptic percepts. As a possible brain basis of such phenomena, functional brain imaging has identified activations specific to audiotactile interaction in secondary somatosensory cortex, auditory belt area, and posterior parietal cortex, depending on the quality and relative salience of the stimuli. We studied 13 subjects with non-invasive functional magnetic resonance imaging (fMRI) to search for auditory brain areas that would be activated by touch. Vibration bursts of 200 Hz were delivered to the subjects' fingers and palm and tactile pressure pulses to their fingertips. Noise bursts served to identify auditory cortex. Vibrotactile-auditory co-activation, addressed with minimal smoothing to obtain a conservative estimate, was found in an 85-mm3 region in the posterior auditory belt area. This co-activation could be related to facilitated hearing at the behavioral level, reflecting the analysis of sound-like temporal patterns in vibration. However, even tactile pulses (without any vibration) activated parts of the posterior auditory belt area, which therefore might subserve processing of audiotactile events that arise during dynamic contact between hands and environment.

  16. Multichannel optical brain imaging to separate cerebral vascular, tissue metabolic, and neuronal effects of cocaine

    NASA Astrophysics Data System (ADS)

    Ren, Hugang; Luo, Zhongchi; Yuan, Zhijia; Pan, Yingtian; Du, Congwu

    2012-02-01

    Characterization of cerebral hemodynamic and oxygenation metabolic changes, as well as neuronal function, is of great importance to the study of brain function and relevant brain disorders such as drug addiction. Compared with other neuroimaging modalities, optical imaging techniques have the potential for high spatiotemporal resolution and dissection of the changes in cerebral blood flow (CBF), blood volume (CBV), hemoglobin oxygenation, and intracellular Ca ([Ca2+]i), which serve as markers of vascular function, tissue metabolism, and neuronal activity, respectively. Recently, we developed a multiwavelength imaging system and integrated it into a surgical microscope. Three LEDs of λ1=530 nm, λ2=570 nm, and λ3=630 nm were used for exciting [Ca2+]i fluorescence labeled by Rhod2 (AM) and for sensing total hemoglobin (i.e., CBV) and deoxygenated hemoglobin, whereas one laser diode of λ4=830 nm was used for laser speckle imaging to form a CBF map of the brain. These light sources were time-shared for illumination of the brain and synchronized with the exposure of a CCD camera to acquire multichannel images of the brain. Our animal studies indicated that this optical approach enabled simultaneous mapping of cocaine-induced changes in CBF, CBV, oxygenated and deoxygenated hemoglobin, and [Ca2+]i in the cortical brain. Its high spatiotemporal resolution (30 μm, 10 Hz) and large field of view (4×5 mm²) make it well suited as a neuroimaging tool for the study of brain function.
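Laser speckle imaging, used above to map CBF, rests on the speckle contrast statistic K = σ/μ of the raw intensity values: moving scatterers (flowing blood) blur the speckle pattern and lower K. A minimal sketch of the statistic itself, on made-up pixel values (real pipelines compute K in small sliding windows of the camera frame):

```python
import math

def speckle_contrast(intensities):
    """Speckle contrast K = sigma / mu of a patch of pixel intensities.
    Lower K means more speckle blurring, i.e. faster flow in that region."""
    n = len(intensities)
    mu = sum(intensities) / n
    var = sum((x - mu) ** 2 for x in intensities) / n
    return math.sqrt(var) / mu

print(speckle_contrast([5.0, 15.0, 5.0, 15.0]))  # high-variance patch -> K = 0.5
print(speckle_contrast([10.0, 10.0, 10.0, 10.0]))  # fully blurred patch -> K = 0.0
```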

  17. Analysis of wave III of brain stem auditory evoked potential waveforms during microvascular decompression of cranial nerve VII for hemifacial spasm.

    PubMed

    Thirumala, Parthasarathy D; Krishnaiah, Balaji; Crammond, Donald J; Habeych, Miguel E; Balzer, Jeffrey R

    2014-04-01

    Intraoperative monitoring of brain stem auditory evoked potentials during microvascular decompression (MVD) can prevent hearing loss (HL). Previous studies have shown that changes in wave III (wIII) are an early and sensitive sign of auditory nerve injury. The aim was to evaluate changes in the amplitude and latency of wIII of the brain stem auditory evoked potential during MVD and their association with postoperative HL. Hearing loss was classified by American Academy of Otolaryngology - Head and Neck Surgery (AAO-HNS) criteria, based on changes in pure tone audiometry and speech discrimination score. A retrospective analysis of wIII in patients who underwent intraoperative monitoring with brain stem auditory evoked potentials during MVD was performed. A univariate logistic regression analysis was performed on the independent variables amplitude and latency of wIII, at maximal change and at On-Skin, a final recording at the time of skin closure. A further analysis of the same variables was performed adjusting for loss of the wave. The latency of wIII was not found to be significantly different between the groups with and without HL. The amplitude of wIII was significantly decreased in the group with HL. Regression analysis did not find increased odds of HL with changes in the amplitude of wIII. Changes in wave III did not increase the odds of HL in patients who underwent brain stem auditory evoked potentials during MVD. This information might be valuable in evaluating wIII as an alarm criterion during MVD to prevent HL.

  18. Widespread Brain Areas Engaged during a Classical Auditory Streaming Task Revealed by Intracranial EEG

    PubMed Central

    Dykstra, Andrew R.; Halgren, Eric; Thesen, Thomas; Carlson, Chad E.; Doyle, Werner; Madsen, Joseph R.; Eskandar, Emad N.; Cash, Sydney S.

    2011-01-01

    The auditory system must constantly decompose the complex mixture of sound arriving at the ear into perceptually independent streams constituting accurate representations of individual sources in the acoustic environment. How the brain accomplishes this task is not well understood. The present study combined a classic behavioral paradigm with direct cortical recordings from neurosurgical patients with epilepsy in order to further describe the neural correlates of auditory streaming. Participants listened to sequences of pure tones alternating in frequency and indicated whether they heard one or two “streams.” The intracranial EEG was simultaneously recorded from sub-dural electrodes placed over temporal, frontal, and parietal cortex. Like healthy subjects, patients heard one stream when the frequency separation between tones was small and two when it was large. Robust evoked-potential correlates of frequency separation were observed over widespread brain areas. Waveform morphology was highly variable across individual electrode sites both within and across gross brain regions. Surprisingly, few evoked-potential correlates of perceptual organization were observed after controlling for physical stimulus differences. The results indicate that the cortical areas engaged during the streaming task are more complex and widespread than has been demonstrated by previous work, and that, by-and-large, correlates of bistability during streaming are probably located on a spatial scale not assessed – or in a brain area not examined – by the present study. PMID:21886615

  19. A Wearable Multi-Channel fNIRS System for Brain Imaging in Freely Moving Subjects

    PubMed Central

    Piper, Sophie K.; Krueger, Arne; Koch, Stefan P.; Mehnert, Jan; Habermehl, Christina; Steinbrink, Jens; Obrig, Hellmuth; Schmitz, Christoph H.

    2013-01-01

    Functional near infrared spectroscopy (fNIRS) is a versatile neuroimaging tool with an increasing acceptance in the neuroimaging community. While often lauded for its portability, most of the fNIRS setups employed in neuroscientific research still impose usage in a laboratory environment. We present a wearable, multi-channel fNIRS imaging system for functional brain imaging in unrestrained settings. The system operates without optical fiber bundles, using eight dual wavelength light emitting diodes and eight electro-optical sensors, which can be placed freely on the subject's head for direct illumination and detection. Its performance is tested on N = 8 subjects in a motor execution paradigm performed under three different exercising conditions: (i) during outdoor bicycle riding, (ii) while pedaling on a stationary training bicycle, and (iii) sitting still on the training bicycle. Following left hand gripping, we observe a significant decrease in the deoxyhemoglobin concentration over the contralateral motor cortex in all three conditions. A significant task-related ΔHbO2 increase was seen for the non-pedaling condition. Although the gross movements involved in pedaling and steering a bike induced more motion artifacts than carrying out the same task while sitting still, we found no significant differences in the shape or amplitude of the HbR time courses for outdoor or indoor cycling and sitting still. We demonstrate the general feasibility of using wearable multi-channel NIRS during strenuous exercise in natural, unrestrained settings and discuss the origins and effects of data artifacts. We provide quantitative guidelines for taking condition-dependent signal quality into account to allow the comparison of data across various levels of physical exercise. To the best of our knowledge, this is the first demonstration of functional NIRS brain imaging during an outdoor activity in a real life situation in humans. PMID:23810973

  20. Tactile and bone-conduction auditory brain computer interface for vision and hearing impaired users.

    PubMed

    Rutkowski, Tomasz M; Mori, Hiromu

    2015-04-15

    The paper presents a report on a recently developed BCI alternative for users suffering from impaired vision (lack of focus or eye-movements) or from the so-called "ear-blocking-syndrome" (limited hearing). We report on our recent studies of the extent to which vibrotactile stimuli delivered to the head of a user can serve as a platform for a brain computer interface (BCI) paradigm. In the proposed tactile and bone-conduction auditory BCI, multiple novel head positions are used to evoke combined somatosensory and auditory (via the bone conduction effect) P300 brain responses, in order to define a multimodal tactile and bone-conduction auditory brain computer interface (tbcaBCI). To further remove EEG interference and to improve P300 response classification, a synchrosqueezing transform (SST) is applied. SST outperforms classical time-frequency analysis methods for non-linear and non-stationary signals such as EEG, and the proposed method is also computationally more efficient than empirical mode decomposition. The SST filtering allows for online EEG preprocessing, which is essential in the case of BCI. Experimental results with healthy BCI-naive users performing online tbcaBCI validate the paradigm, while the feasibility of the concept is illustrated through information transfer rate case studies. We present a comparison of the proposed SST-based preprocessing method, combined with a logistic regression (LR) classifier, against classical preprocessing and LDA-based classification BCI techniques. The proposed tbcaBCI paradigm, together with the data-driven preprocessing methods, is a step forward in robust BCI applications research. Copyright © 2014 Elsevier B.V. All rights reserved.
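The logistic regression (LR) classifier paired with SST preprocessing above maps a per-trial feature (e.g., a P300 amplitude score) to a target/non-target probability. A minimal one-feature sketch trained by gradient descent on synthetic data (all feature values and parameters are hypothetical, not from the study):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, lr=0.5, epochs=500):
    """Fit p(target | x) = sigmoid(w * x + b) by batch gradient descent
    on the cross-entropy loss."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = sum((sigmoid(w * x + b) - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum((sigmoid(w * x + b) - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# synthetic single-trial scores: target trials (label 1) tend to score higher
xs = [0.2, 0.4, 0.5, 1.5, 1.8, 2.1]
ys = [0,   0,   0,   1,   1,   1]
w, b = train_logistic(xs, ys)
print(sigmoid(w * 2.0 + b) > 0.5)  # True: a high score is classified as target
```

In an online BCI the same model would be fit on calibration trials and then applied trial-by-trial; this sketch only shows the classifier, not the EEG feature extraction.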

  1. Auditory agnosia.

    PubMed

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.

  2. Verbal auditory agnosia in a patient with traumatic brain injury: A case report.

    PubMed

    Kim, Jong Min; Woo, Seung Beom; Lee, Zeeihn; Heo, Sung Jae; Park, Donghwi

    2018-03-01

    Verbal auditory agnosia is the selective inability to recognize verbal sounds. Patients with this disorder lose the ability to understand language, write from dictation, and repeat words, with a preserved ability to identify nonverbal sounds. However, to the best of our knowledge, there has been no report of verbal auditory agnosia in an adult patient with traumatic brain injury. We describe such a patient. He was able to clearly distinguish between language and nonverbal sounds, and he did not have any difficulty in identifying environmental sounds. However, he did not follow oral commands and could not repeat or take dictation of words. On the other hand, he had fluent and comprehensible speech, and was able to read and understand written words and sentences. He was diagnosed with verbal auditory agnosia. He received speech therapy and cognitive rehabilitation during his hospitalization, and he practiced understanding verbal language with written sentences provided alongside. Two months after hospitalization, he regained his ability to understand some verbal words. Six months after hospitalization, his ability to understand verbal language had improved to an understandable level when speech was produced slowly in front of his eyes, but his comprehension of spoken language was still at the word level, not the sentence level. This case teaches that the evaluation of auditory functions, as well as cognition and language functions, is important for accurate diagnosis and appropriate treatment, because verbal auditory agnosia tends to be easily misdiagnosed as hearing impairment, cognitive dysfunction, or sensory aphasia.

  3. Electrical Brain Responses to an Auditory Illusion and the Impact of Musical Expertise

    PubMed Central

    Ioannou, Christos I.; Pereda, Ernesto; Lindsen, Job P.; Bhattacharya, Joydeep

    2015-01-01

    The presentation of two sinusoidal tones, one to each ear, with a slight frequency mismatch yields an auditory illusion of a beating frequency equal to the frequency difference between the two tones; this is known as binaural beat (BB). The effect of brief BB stimulation on scalp EEG is not conclusively demonstrated. Further, no studies have examined the impact of musical training associated with BB stimulation, yet musicians' brains are often associated with enhanced auditory processing. In this study, we analysed EEG brain responses from two groups, musicians and non-musicians, when stimulated by short presentation (1 min) of binaural beats with beat frequency varying from 1 Hz to 48 Hz. We focused our analysis on alpha and gamma band EEG signals, and they were analysed in terms of spectral power, and functional connectivity as measured by two phase synchrony based measures, phase locking value and phase lag index. Finally, these measures were used to characterize the degree of centrality, segregation and integration of the functional brain network. We found that beat frequencies belonging to alpha band produced the most significant steady-state responses across groups. Further, processing of low frequency (delta, theta, alpha) binaural beats had significant impact on cortical network patterns in the alpha band oscillations. Altogether these results provide a neurophysiological account of cortical responses to BB stimulation at varying frequencies, and demonstrate a modulation of cortico-cortical connectivity in musicians' brains, and further suggest a kind of neuronal entrainment of a linear and nonlinear relationship to the beating frequencies. PMID:26065708

  4. Electrical Brain Responses to an Auditory Illusion and the Impact of Musical Expertise.

    PubMed

    Ioannou, Christos I; Pereda, Ernesto; Lindsen, Job P; Bhattacharya, Joydeep

    2015-01-01

    The presentation of two sinusoidal tones, one to each ear, with a slight frequency mismatch yields an auditory illusion of a beating frequency equal to the frequency difference between the two tones; this is known as a binaural beat (BB). The effect of brief BB stimulation on scalp EEG has not been conclusively demonstrated. Further, no studies have examined the impact of musical training on responses to BB stimulation, yet musicians' brains are often associated with enhanced auditory processing. In this study, we analysed EEG brain responses from two groups, musicians and non-musicians, stimulated by short presentations (1 min) of binaural beats with beat frequencies varying from 1 Hz to 48 Hz. We focused our analysis on alpha- and gamma-band EEG signals, which were analysed in terms of spectral power and of functional connectivity as measured by two phase-synchrony-based measures, the phase locking value and the phase lag index. Finally, these measures were used to characterize the degree of centrality, segregation, and integration of the functional brain network. We found that beat frequencies belonging to the alpha band produced the most significant steady-state responses across groups. Further, processing of low-frequency (delta, theta, alpha) binaural beats had a significant impact on cortical network patterns in the alpha-band oscillations. Altogether these results provide a neurophysiological account of cortical responses to BB stimulation at varying frequencies, demonstrate a modulation of cortico-cortical connectivity in musicians' brains, and further suggest a kind of neuronal entrainment with both linear and nonlinear relationships to the beating frequencies.

  5. Effect of low-frequency rTMS on electromagnetic tomography (LORETA) and regional brain metabolism (PET) in schizophrenia patients with auditory hallucinations.

    PubMed

    Horacek, Jiri; Brunovsky, Martin; Novak, Tomas; Skrdlantova, Lucie; Klirova, Monika; Bubenikova-Valesova, Vera; Krajca, Vladimir; Tislerova, Barbora; Kopecek, Milan; Spaniel, Filip; Mohr, Pavel; Höschl, Cyril

    2007-01-01

    Auditory hallucinations are characteristic symptoms of schizophrenia with high clinical importance. It has repeatedly been reported that low-frequency repetitive transcranial magnetic stimulation (rTMS) applied over the left temporoparietal cortex reduces auditory hallucinations, but a neuroimaging study elucidating the effect of rTMS on auditory hallucinations has yet to be published. Our aim was to evaluate the distribution of neuronal electrical activity and the changes in brain metabolism after low-frequency rTMS in patients with auditory hallucinations. Low-frequency rTMS (0.9 Hz, 100% of motor threshold, 20 min) applied to the left temporoparietal cortex was used for 10 days in the treatment of medication-resistant auditory hallucinations in schizophrenia (n = 12). The effect of rTMS on low-resolution brain electromagnetic tomography (LORETA) and brain metabolism ((18)FDG PET) was measured before and after 2 weeks of treatment. We found a significant improvement in the total and positive symptoms (PANSS) and on the hallucination scales (HCS, AHRS). The rTMS decreased brain metabolism in the left superior temporal gyrus and in interconnected regions, and effected increases in the contralateral cortex and in the frontal lobes. We detected a decrease in current densities (LORETA) for the beta-1 and beta-3 bands in the left temporal lobe, whereas an increase was found for the beta-2 band contralaterally. Our findings imply that the effect is connected with decreased metabolism in the cortex underlying the rTMS site, while facilitation of metabolism is propagated through transcallosal and intrahemispheric connections. The LORETA results indicate that the neuroplastic changes affect functional laterality and provide the substrate for the metabolic effect. (c) 2007 S. Karger AG, Basel.

  6. Effect of neonatal asphyxia on the impairment of the auditory pathway by recording auditory brainstem responses in newborn piglets: a new experimentation model to study the perinatal hypoxic-ischemic damage on the auditory system.

    PubMed

    Alvarez, Francisco Jose; Revuelta, Miren; Santaolalla, Francisco; Alvarez, Antonia; Lafuente, Hector; Arteaga, Olatz; Alonso-Alconada, Daniel; Sanchez-del-Rey, Ana; Hilario, Enrique; Martinez-Ibargüen, Agustin

    2015-01-01

    Hypoxia-ischemia (HI) is a major perinatal problem that results in severe damage to the brain, impairing the normal development of the auditory system. The purpose of the present study was to examine the effect of perinatal asphyxia on the auditory pathway by recording auditory brain responses in a novel animal model in newborn piglets. Hypoxia-ischemia was induced in 1-3 day-old piglets by clamping both carotid arteries for 30 minutes with vascular occluders and lowering the fraction of inspired oxygen. We compared the auditory brain responses (ABRs) of newborn piglets exposed to acute hypoxia-ischemia (n = 6) and a control group with no such exposure (n = 10). ABRs were recorded for both ears before the start of the experiment (baseline), after 30 minutes of HI injury, and every 30 minutes for 6 h after the HI injury. Auditory brain responses were altered during the hypoxic-ischemic insult but recovered 30-60 minutes later. Hypoxia-ischemia appeared to induce auditory functional damage by increasing I-V interpeak latencies and decreasing wave I, III and V amplitudes, although the differences were not significant. The described experimental model of hypoxia-ischemia in newborn piglets may be useful for studying the effect of perinatal asphyxia on the impairment of the auditory pathway.

  7. Far-field brainstem responses evoked by vestibular and auditory stimuli exhibit increases in interpeak latency as brain temperature is decreased

    NASA Technical Reports Server (NTRS)

    Hoffman, L. F.; Horowitz, J. M.

    1984-01-01

    The effect of decreasing brain temperature on the brainstem auditory evoked response (BAER) in rats was investigated. Voltage pulses applied to a piezoelectric crystal attached to the skull were used to stimulate the auditory system by means of bone-conducted vibrations. The responses were recorded at brain temperatures of 37 C and 34 C. The peaks of the BAER recorded at 34 C were delayed relative to the peaks of the 37 C wave, and the later peaks were more delayed than the earlier peaks. These results indicate that interpeak latency increases as brain temperature is decreased. Preliminary experiments, in which responses to brief angular accelerations were used to measure the brainstem vestibular evoked response (BVER), have also indicated increases in interpeak latency as brain temperature is lowered.

  8. The auditory brain-stem response to complex sounds: a potential biomarker for guiding treatment of psychosis.

    PubMed

    Tarasenko, Melissa A; Swerdlow, Neal R; Makeig, Scott; Braff, David L; Light, Gregory A

    2014-01-01

    Cognitive deficits limit psychosocial functioning in schizophrenia. For many patients, cognitive remediation approaches have yielded encouraging results. Nevertheless, therapeutic response is variable, and outcome studies consistently identify individuals who respond minimally to these interventions. Biomarkers that can assist in identifying patients likely to benefit from particular forms of cognitive remediation are needed. Here, we describe an event-related potential (ERP) biomarker - the auditory brain-stem response (ABR) to complex sounds (cABR) - that appears to be particularly well-suited for predicting response to at least one form of cognitive remediation that targets auditory information processing. Uniquely, the cABR quantifies the fidelity of sound encoded at the level of the brainstem and midbrain. This ERP biomarker has revealed auditory processing abnormalities in various neurodevelopmental disorders, correlates with functioning across several cognitive domains, and appears to be responsive to targeted auditory training. We present preliminary cABR data from 18 schizophrenia patients and propose further investigation of this biomarker for predicting and tracking response to cognitive interventions.

  9. Multichannel optical mapping: investigation of depth information

    NASA Astrophysics Data System (ADS)

    Sase, Ichiro; Eda, Hideo; Seiyama, Akitoshi; Tanabe, Hiroki C.; Takatsuki, Akira; Yanagida, Toshio

    2001-06-01

    Near infrared (NIR) light has become a powerful tool for non-invasive imaging of human brain activity. Many systems have been developed to capture the changes in regional brain blood flow and hemoglobin oxygenation that occur in the human cortex in response to neural activity. We have developed a multi-channel reflectance imaging system, which can be used as a 'mapping device' and also as a 'multi-channel spectrophotometer'. In the present study, we visualized changes in the hemodynamics of the human occipital region in multiple ways. (1) Stimulating the left and right primary visual cortex independently, by showing sector-shaped checkerboards sequentially over the contralateral visual field, resulted in corresponding hemodynamic changes in the 'mapping' measurement. (2) Simultaneous measurement of functional MRI and NIR (changes in total hemoglobin) during visual stimulation showed good spatial and temporal correlation between the two. (3) Placing multiple channels densely over the occipital region demonstrated spatial patterns more precisely, and depth information was also acquired by placing each pair of illumination and detection fibers at various distances. These results indicate that the optical method can provide data for 3D analysis of human brain functions.

  10. Auditory-evoked cortical activity: contribution of brain noise, phase locking, and spectral power

    PubMed Central

    Harris, Kelly C.; Vaden, Kenneth I.; Dubno, Judy R.

    2017-01-01

    Background The N1-P2 is an obligatory cortical response that can reflect the representation of spectral and temporal characteristics of an auditory stimulus. Traditionally, mean amplitudes and latencies of the prominent peaks in the averaged response are compared across experimental conditions. Analyses of the peaks in the averaged response only reflect a subset of the data contained within the electroencephalogram (EEG) signal. We used single-trial analysis techniques to identify the contribution of brain noise, neural synchrony, and spectral power to the generation of P2 amplitude, and how these variables may change across age groups. This information is important for appropriate interpretation of event-related potential (ERP) results and in the understanding of age-related neural pathologies. Methods EEG was measured from 25 younger and 25 older normal-hearing adults. Age-related and individual differences in P2 response amplitudes, and variability in brain noise, phase locking value (PLV), and spectral power (4–8 Hz), were assessed from electrode FCz. Model testing and linear regression were used to determine the extent to which brain noise, PLV, and spectral power uniquely predicted P2 amplitudes and varied by age group. Results Younger adults had significantly larger P2 amplitudes, PLV, and power compared to older adults. Brain noise did not differ between age groups. The results of regression testing revealed that brain noise and PLV, but not spectral power, were unique predictors of P2 amplitudes. Model fit was significantly better in younger than in older adults. Conclusions ERP analyses are intended to provide a better understanding of the underlying neural mechanisms that contribute to individual and group differences in behavior. The current results support that age-related declines in neural synchrony contribute to smaller P2 amplitudes in older normal-hearing adults. Based on our results, we discuss potential models in which differences in neural
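
    The phase locking value used above has a standard definition: the magnitude of the across-trial mean of unit phase vectors. A minimal NumPy/SciPy sketch, with Hilbert-based phase extraction standing in for whatever band-limiting pipeline the study actually used:

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(trials):
    """Inter-trial phase locking value for one channel.

    trials: (n_trials, n_samples) array of single-trial signals.
    Returns one PLV per sample: |mean over trials of exp(i*phase)|,
    where 1.0 means perfect phase locking and values near 0 mean
    random phase across trials.
    """
    phase = np.angle(hilbert(trials, axis=1))  # instantaneous phase per trial
    return np.abs(np.mean(np.exp(1j * phase), axis=0))
```

    Identical trials give a PLV of 1 at every sample, while trials with random phase give values that shrink roughly as 1/sqrt(n_trials), which is why low neural synchrony shows up as reduced PLV even when single-trial power is unchanged.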

  11. Effect of hearing aids on auditory function in infants with perinatal brain injury and severe hearing loss.

    PubMed

    Moreno-Aguirre, Alma Janeth; Santiago-Rodríguez, Efraín; Harmony, Thalía; Fernández-Bouzas, Antonio

    2012-01-01

    Approximately 2-4% of newborns with perinatal risk factors present with hearing loss. Our aim was to analyze the effect of hearing aid use on auditory function evaluated based on otoacoustic emissions (OAEs), auditory brain responses (ABRs) and auditory steady state responses (ASSRs) in infants with perinatal brain injury and profound hearing loss. A prospective, longitudinal study of auditory function in infants with profound hearing loss. Right side hearing before and after hearing aid use was compared with left side hearing (not stimulated and used as control). All infants were subjected to OAE, ABR and ASSR evaluations before and after hearing aid use. The average ABR threshold decreased from 90.0 to 80.0 dB (p = 0.003) after six months of hearing aid use. In the left ear, which was used as a control, the ABR threshold decreased from 94.6 to 87.6 dB, which was not significant (p>0.05). In addition, the ASSR threshold in the 4000-Hz frequency decreased from 89 dB to 72 dB (p = 0.013) after six months of right ear hearing aid use; the other frequencies in the right ear and all frequencies in the left ear did not show significant differences in any of the measured parameters (p>0.05). OAEs were absent in the baseline test and showed no changes after hearing aid use in the right ear (p>0.05). This study provides evidence that early hearing aid use decreases the hearing threshold in ABR and ASSR assessments with no functional modifications in the auditory receptor, as evaluated by OAEs.

  12. Effect of Hearing Aids on Auditory Function in Infants with Perinatal Brain Injury and Severe Hearing Loss

    PubMed Central

    Moreno-Aguirre, Alma Janeth; Santiago-Rodríguez, Efraín; Harmony, Thalía; Fernández-Bouzas, Antonio

    2012-01-01

    Background Approximately 2–4% of newborns with perinatal risk factors present with hearing loss. Our aim was to analyze the effect of hearing aid use on auditory function evaluated based on otoacoustic emissions (OAEs), auditory brain responses (ABRs) and auditory steady state responses (ASSRs) in infants with perinatal brain injury and profound hearing loss. Methodology/Principal Findings A prospective, longitudinal study of auditory function in infants with profound hearing loss. Right side hearing before and after hearing aid use was compared with left side hearing (not stimulated and used as control). All infants were subjected to OAE, ABR and ASSR evaluations before and after hearing aid use. The average ABR threshold decreased from 90.0 to 80.0 dB (p = 0.003) after six months of hearing aid use. In the left ear, which was used as a control, the ABR threshold decreased from 94.6 to 87.6 dB, which was not significant (p>0.05). In addition, the ASSR threshold in the 4000-Hz frequency decreased from 89 dB to 72 dB (p = 0.013) after six months of right ear hearing aid use; the other frequencies in the right ear and all frequencies in the left ear did not show significant differences in any of the measured parameters (p>0.05). OAEs were absent in the baseline test and showed no changes after hearing aid use in the right ear (p>0.05). Conclusions/Significance This study provides evidence that early hearing aid use decreases the hearing threshold in ABR and ASSR assessments with no functional modifications in the auditory receptor, as evaluated by OAEs. PMID:22808289

  13. Development and modulation of intrinsic membrane properties control the temporal precision of auditory brain stem neurons.

    PubMed

    Franzen, Delwen L; Gleiss, Sarah A; Berger, Christina; Kümpfbeck, Franziska S; Ammer, Julian J; Felmy, Felix

    2015-01-15

    Passive and active membrane properties determine the voltage responses of neurons. Within the auditory brain stem, refinements in these intrinsic properties during late postnatal development usually generate short integration times and precise action-potential generation. This developmentally acquired temporal precision is crucial for auditory signal processing. How the interactions of these intrinsic properties develop in concert to enable auditory neurons to transfer information with high temporal precision has not yet been elucidated in detail. Here, we show how the developmental interaction of intrinsic membrane parameters generates high firing precision. We performed in vitro recordings from neurons of postnatal days 9-28 in the ventral nucleus of the lateral lemniscus of Mongolian gerbils, an auditory brain stem structure that converts excitatory to inhibitory information with high temporal precision. During this developmental period, the input resistance and capacitance decrease, and action potentials acquire faster kinetics and enhanced precision. Depending on the stimulation time course, the input resistance and capacitance contribute differentially to action-potential thresholds. The decrease in input resistance, however, is sufficient to explain the enhanced action-potential precision. Alterations in passive membrane properties also interact with a developmental change in potassium currents to generate the emergence of the mature firing pattern, characteristic of coincidence-detector neurons. Cholinergic receptor-mediated depolarizations further modulate this intrinsic excitability profile by eliciting changes in the threshold and firing pattern, irrespective of the developmental stage. Thus our findings reveal how intrinsic membrane properties interact developmentally to promote temporally precise information processing. Copyright © 2015 the American Physiological Society.
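
    The link between the passive properties above and integration time is the membrane time constant tau = R*C: as input resistance and capacitance fall during development, tau shortens and the voltage response to a given current becomes briefer. A worked numerical sketch; the resistance and capacitance values are illustrative, not the paper's measurements:

```python
import numpy as np

def tau_ms(R_megaohm, C_picofarad):
    """Membrane time constant tau = R*C.
    MOhm * pF = microseconds, so divide by 1000 for milliseconds."""
    return R_megaohm * C_picofarad * 1e-3

def step_response(I_nA, R_megaohm, C_picofarad, t_ms):
    """Passive voltage deflection (mV) at time t after current-step
    onset: V(t) = I*R*(1 - exp(-t/tau)); nA * MOhm = mV."""
    tau = tau_ms(R_megaohm, C_picofarad)
    return I_nA * R_megaohm * (1.0 - np.exp(-t_ms / tau))

# Illustrative values: an immature neuron with R = 200 MOhm, C = 40 pF
# integrates over tau = 8 ms; a mature one with R = 50 MOhm, C = 20 pF
# over tau = 1 ms, so the same synaptic current produces a much
# briefer (more temporally precise) depolarization.
```

    The steady-state deflection I*R also falls with input resistance, which is why threshold behavior depends on the stimulation time course as well as on tau.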

  14. Multi-channel spatial auditory display for speech communications

    NASA Astrophysics Data System (ADS)

    Begault, Durand; Erbe, Tom

    1993-10-01

    A spatial auditory display for multiple speech communications was developed at NASA-Ames Research Center. Input is spatialized by use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four letter call signs used by launch personnel at NASA, against diotic speech babble. Spatial positions at 30 deg azimuth increments were evaluated. The results from eight subjects showed a maximal intelligibility improvement of about 6 to 7 dB when the signal was spatialized to 60 deg or 90 deg azimuth positions.
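
    The adaptive staircase mentioned above adjusts the signal level trial by trial, stepping down after correct responses and up after errors, and estimates threshold from the reversal points. A minimal sketch of one common variant, a one-up/two-down rule converging on the ~70.7% correct point; the study's exact rule and step size are not specified here:

```python
def staircase(respond, start_db, step_db, n_reversals=8):
    """One-up/two-down adaptive staircase.

    respond(level) -> True for a correct identification at that level.
    Assumes the listener eventually errs as the level drops; returns
    the mean of the reversal levels as the threshold estimate."""
    level = start_db
    correct_in_row = 0
    direction = -1              # -1: track descending, +1: ascending
    reversals = []
    while len(reversals) < n_reversals:
        if respond(level):
            correct_in_row += 1
            if correct_in_row == 2:     # two correct in a row -> harder
                correct_in_row = 0
                if direction == +1:     # turning point: record reversal
                    reversals.append(level)
                direction = -1
                level -= step_db
        else:                           # any error -> easier
            correct_in_row = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step_db
    return sum(reversals) / len(reversals)
```

    With a deterministic simulated listener whose threshold is 0 dB, the reversals straddle 0 dB and their mean estimates the threshold.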

  15. Multi-channel spatial auditory display for speech communications

    NASA Technical Reports Server (NTRS)

    Begault, Durand; Erbe, Tom

    1993-01-01

    A spatial auditory display for multiple speech communications was developed at NASA-Ames Research Center. Input is spatialized by use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four letter call signs used by launch personnel at NASA, against diotic speech babble. Spatial positions at 30 deg azimuth increments were evaluated. The results from eight subjects showed a maximal intelligibility improvement of about 6 to 7 dB when the signal was spatialized to 60 deg or 90 deg azimuth positions.

  16. GABA Immunoreactivity in Auditory and Song Control Brain Areas of Zebra Finches

    PubMed Central

    Pinaud, Raphael; Mello, Claudio V.

    2009-01-01

    Inhibitory transmission is critical to sensory and motor processing and is believed to play a role in experience-dependent plasticity. The main inhibitory neurotransmitter in vertebrates, GABA, has been implicated in both sensory and motor aspects of vocalization in songbirds. To understand the role of GABAergic mechanisms in vocal communication, GABAergic elements must be characterized fully. Hence, we investigated GABA immunohistochemistry in the zebra finch brain, emphasizing auditory areas and song control nuclei. Several nuclei of the ascending auditory pathway showed a moderate to high density of GABAergic neurons including the cochlear nuclei, nucleus laminaris, superior olivary nucleus, mesencephalic nucleus lateralis pars dorsalis, and nucleus ovoidalis. Telencephalic auditory areas, including field L subfields L1, L2a and L3, as well as the caudomedial nidopallium (NCM) and mesopallium (CMM), contained GABAergic cells at particularly high densities. Considerable GABA labeling was also seen in the shelf area of caudodorsal nidopallium, and the cup area in the arcopallium, as well as in area X, the lateral magnocellular nucleus of the anterior nidopallium, the robust nucleus of the arcopallium and nidopallial nucleus HVC. GABAergic cells were typically small, most likely local inhibitory interneurons, although large GABA-positive cells that were sparsely distributed were also identified. GABA-positive neurites and puncta were identified in most nuclei of the ascending auditory pathway and in song control nuclei. Our data are in accordance with a prominent role of GABAergic mechanisms in regulating the neural circuits involved in song perceptual processing, motor production, and vocal learning in songbirds. PMID:17466487

  17. Estimating the Intended Sound Direction of the User: Toward an Auditory Brain-Computer Interface Using Out-of-Head Sound Localization

    PubMed Central

    Nambu, Isao; Ebisawa, Masashi; Kogure, Masumi; Yano, Shohei; Hokari, Haruhide; Wada, Yasuhiro

    2013-01-01

    The auditory brain-computer interface (BCI) using electroencephalograms (EEG) is a subject of intensive study. Auditory BCIs can use many characteristics of stimuli, such as tone, pitch, and voice, as cues. Spatial information on auditory stimuli also provides useful information for a BCI. However, in a portable system, virtual auditory stimuli have to be presented spatially through earphones or headphones instead of loudspeakers. We investigated the possibility of an auditory BCI using the out-of-head sound localization technique, which enables us to present virtual auditory stimuli to users from any direction through earphones. The feasibility of a BCI using this technique was evaluated in an EEG oddball experiment and offline analysis. A virtual auditory stimulus was presented to the subject from one of six directions. Using a support vector machine, we were able to classify from EEG signals whether the subject attended the direction of a presented stimulus. The mean accuracy across subjects was 70.0% in the single-trial classification. When we used trial-averaged EEG signals as inputs to the classifier, the mean accuracy across seven subjects reached 89.5% (for 10-trial averaging). Further analysis showed that the P300 event-related potential responses from 200 to 500 ms in central and posterior regions of the brain contributed to the classification. In comparison with the results obtained from a loudspeaker experiment, we confirmed that stimulus presentation by out-of-head sound localization achieved similar event-related potential responses and classification performances. These results suggest that out-of-head sound localization enables us to provide a high-performance, loudspeaker-less, portable BCI system. PMID:23437338
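
    The jump from 70.0% single-trial accuracy to 89.5% with 10-trial averaging reflects a general effect: averaging k same-class trials raises ERP signal-to-noise by roughly sqrt(k). A sketch on synthetic data, with scikit-learn's SVC standing in for the study's support vector machine; all data parameters are illustrative:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic "ERP" features: attended trials carry a small P300-like
# bump in a few features, buried in unit-variance noise.
n_trials, n_feats = 400, 20
template = np.zeros(n_feats)
template[8:13] = 0.5                 # a stand-in "200-500 ms" window
y = rng.integers(0, 2, n_trials)
X = rng.standard_normal((n_trials, n_feats)) + np.outer(y, template)

def average_trials(X, y, k):
    """Average k same-class trials, boosting SNR by about sqrt(k)."""
    Xa, ya = [], []
    for c in (0, 1):
        Xc = X[y == c]
        for i in range(0, len(Xc) - k + 1, k):
            Xa.append(Xc[i:i + k].mean(axis=0))
            ya.append(c)
    return np.array(Xa), np.array(ya)

single = cross_val_score(SVC(), X, y, cv=5).mean()
Xa, ya = average_trials(X, y, 10)
avg10 = cross_val_score(SVC(), Xa, ya, cv=5).mean()
```

    On this synthetic data the 10-trial-averaged accuracy should exceed the single-trial accuracy, mirroring the pattern reported in the study; averaging trades decision rate for reliability, a standard BCI design trade-off.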

  18. Incorporating modern neuroscience findings to improve brain-computer interfaces: tracking auditory attention.

    PubMed

    Wronkiewicz, Mark; Larson, Eric; Lee, Adrian Kc

    2016-10-01

    Brain-computer interface (BCI) technology allows users to generate actions based solely on their brain signals. However, current non-invasive BCIs generally classify brain activity recorded from surface electroencephalography (EEG) electrodes, which can hinder the application of findings from modern neuroscience research. In this study, we use source imaging, a neuroimaging technique that projects EEG signals onto the surface of the brain, in a BCI classification framework. This allowed us to incorporate prior research from functional neuroimaging to target activity from a cortical region involved in auditory attention. Classifiers trained to detect attention switches performed better with source imaging projections than with EEG sensor signals. Within source imaging, including subject-specific anatomical MRI information (instead of using a generic head model) further improved classification performance. This source-based strategy also reduced accuracy variability across three dimensionality reduction techniques, a major design choice in most BCIs. Our work shows that source imaging provides clear quantitative and qualitative advantages to BCIs and highlights the value of incorporating modern neuroscience knowledge and methods into BCI systems.

  19. The Auditory Brain-Stem Response to Complex Sounds: A Potential Biomarker for Guiding Treatment of Psychosis

    PubMed Central

    Tarasenko, Melissa A.; Swerdlow, Neal R.; Makeig, Scott; Braff, David L.; Light, Gregory A.

    2014-01-01

    Cognitive deficits limit psychosocial functioning in schizophrenia. For many patients, cognitive remediation approaches have yielded encouraging results. Nevertheless, therapeutic response is variable, and outcome studies consistently identify individuals who respond minimally to these interventions. Biomarkers that can assist in identifying patients likely to benefit from particular forms of cognitive remediation are needed. Here, we describe an event-related potential (ERP) biomarker – the auditory brain-stem response (ABR) to complex sounds (cABR) – that appears to be particularly well-suited for predicting response to at least one form of cognitive remediation that targets auditory information processing. Uniquely, the cABR quantifies the fidelity of sound encoded at the level of the brainstem and midbrain. This ERP biomarker has revealed auditory processing abnormalities in various neurodevelopmental disorders, correlates with functioning across several cognitive domains, and appears to be responsive to targeted auditory training. We present preliminary cABR data from 18 schizophrenia patients and propose further investigation of this biomarker for predicting and tracking response to cognitive interventions. PMID:25352811

  20. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study.

    PubMed

    Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale

    2017-04-01

    There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times for auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.

  1. The effects of neck flexion on cerebral potentials evoked by visual, auditory and somatosensory stimuli and focal brain blood flow in related sensory cortices

    PubMed Central

    2012-01-01

    Background A flexed neck posture leads to non-specific activation of the brain. Sensory evoked cerebral potentials and focal brain blood flow have been used to evaluate the activation of the sensory cortex. We investigated the effects of a flexed neck posture on the cerebral potentials evoked by visual, auditory and somatosensory stimuli and focal brain blood flow in the related sensory cortices. Methods Twelve healthy young adults received right visual hemi-field, binaural auditory and left median nerve stimuli while sitting with the neck in a resting and flexed (20° flexion) position. Sensory evoked potentials were recorded from the right occipital region, Cz in accordance with the international 10–20 system, and 2 cm posterior from C4, during visual, auditory and somatosensory stimulations. The oxidative-hemoglobin concentration was measured in the respective sensory cortex using near-infrared spectroscopy. Results Latencies of the late component of all sensory evoked potentials significantly shortened, and the amplitude of auditory evoked potentials increased when the neck was in a flexed position. Oxidative-hemoglobin concentrations in the left and right visual cortices were higher during visual stimulation in the flexed neck position. The left visual cortex is responsible for receiving the visual information. In addition, oxidative-hemoglobin concentrations in the bilateral auditory cortex during auditory stimulation, and in the right somatosensory cortex during somatosensory stimulation, were higher in the flexed neck position. Conclusions Visual, auditory and somatosensory pathways were activated by neck flexion. The sensory cortices were selectively activated, reflecting the modalities in sensory projection to the cerebral cortex and inter-hemispheric connections. PMID:23199306

  2. Auditory hallucinations.

    PubMed

    Blom, Jan Dirk

    2015-01-01

    Auditory hallucinations constitute a phenomenologically rich group of endogenously mediated percepts which are associated with psychiatric, neurologic, otologic, and other medical conditions, but which are also experienced by 10-15% of all healthy individuals in the general population. The group of phenomena is probably best known for its verbal auditory subtype, but it also includes musical hallucinations, echo of reading, exploding-head syndrome, and many other types. The subgroup of verbal auditory hallucinations has been studied extensively with the aid of neuroimaging techniques, and from those studies emerges an outline of a functional as well as a structural network of widely distributed brain areas involved in their mediation. The present chapter provides an overview of the various types of auditory hallucination described in the literature, summarizes our current knowledge of the auditory networks involved in their mediation, and draws on ideas from the philosophy of science and network science to reconceptualize the auditory hallucinatory experience, and point out directions for future research into its neurobiologic substrates. In addition, it provides an overview of known associations with various clinical conditions and of the existing evidence for pharmacologic and non-pharmacologic treatments. © 2015 Elsevier B.V. All rights reserved.

  3. Early auditory processing in area V5/MT+ of the congenitally blind brain.

    PubMed

    Watkins, Kate E; Shakespeare, Timothy J; O'Donoghue, M Clare; Alexander, Iona; Ragge, Nicola; Cowey, Alan; Bridge, Holly

    2013-11-13

    Previous imaging studies of congenital blindness have studied individuals with heterogeneous causes of blindness, which may influence the nature and extent of cross-modal plasticity. Here, we scanned a homogeneous group of blind people with bilateral congenital anophthalmia, a condition in which both eyes fail to develop, and, as a result, the visual pathway is not stimulated by either light or retinal waves. This model of congenital blindness presents an opportunity to investigate the effects of very early visual deafferentation on the functional organization of the brain. In anophthalmic animals, the occipital cortex receives direct subcortical auditory input. We hypothesized that this pattern of subcortical reorganization ought to result in a topographic mapping of auditory frequency information in the occipital cortex of anophthalmic people. Using functional MRI, we examined auditory-evoked activity to pure tones of high, medium, and low frequencies. Activity in the superior temporal cortex was significantly reduced in anophthalmic compared with sighted participants. In the occipital cortex, a region corresponding to the cytoarchitectural area V5/MT+ was activated in the anophthalmic participants but not in sighted controls. Whereas previous studies in the blind indicate that this cortical area is activated to auditory motion, our data show it is also active for trains of pure tone stimuli and in some anophthalmic participants shows a topographic mapping (tonotopy). Therefore, this region appears to be performing early sensory processing, possibly served by direct subcortical input from the pulvinar to V5/MT+.

  4. An auditory oddball brain-computer interface for binary choices.

    PubMed

    Halder, S; Rea, M; Andreoni, R; Nijboer, F; Hammer, E M; Kleih, S C; Birbaumer, N; Kübler, A

    2010-04-01

    Brain-computer interfaces (BCIs) provide non-muscular communication for individuals diagnosed with late-stage motoneuron disease (e.g., amyotrophic lateral sclerosis (ALS)). In the final stages of the disease, a BCI cannot rely on the visual modality. This study examined a method to achieve high accuracies using auditory stimuli only. We propose an auditory BCI based on a three-stimulus paradigm. This paradigm is similar to the standard oddball but includes an additional target (i.e., two target stimuli, one frequent stimulus). Three versions of the task were evaluated, in which the target stimuli differed in loudness, pitch or direction. Twenty healthy participants achieved an average information transfer rate (ITR) of up to 2.46 bits/min and accuracies of 78.5%. Most subjects (14 of 20) achieved their best performance with targets differing in pitch. With this study, the viability of the paradigm was shown for healthy participants; it will next be evaluated with individuals diagnosed with ALS or locked-in syndrome (LIS) after stroke. The BCI presented here offers communication with binary choices (yes/no) independent of vision. As it requires little time per selection, it may constitute a reliable means of communication for patients who have lost all motor function and have a short attention span. 2009 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
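    The ITR quoted above follows from Wolpaw's standard definition, B = log2(N) + P·log2(P) + (1-P)·log2((1-P)/(N-1)) bits per selection; the per-minute figure additionally depends on the time per selection, which the abstract does not state. A minimal sketch:

```python
import math

def wolpaw_itr_bits_per_selection(n_classes: int, accuracy: float) -> float:
    """Wolpaw information transfer rate in bits per selection:
    B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))."""
    n, p = n_classes, accuracy
    if not 0.0 < p < 1.0:
        raise ValueError("accuracy must be strictly between 0 and 1")
    return (math.log2(n)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

# Binary (yes/no) BCI at the 78.5% accuracy reported above:
bits = wolpaw_itr_bits_per_selection(2, 0.785)
print(f"{bits:.3f} bits/selection")  # about 0.249
```

    At roughly 0.25 bits per selection, reaching 2.46 bits/min would correspond to on the order of ten selections per minute.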

  5. Stereotactically-guided Ablation of the Rat Auditory Cortex, and Localization of the Lesion in the Brain.

    PubMed

    Lamas, Verónica; Estévez, Sheila; Pernía, Marianni; Plaza, Ignacio; Merchán, Miguel A

    2017-10-11

    The rat auditory cortex (AC) is becoming popular among auditory neuroscience investigators who are interested in experience-dependent plasticity, auditory perceptual processes, and cortical control of sound processing in the subcortical auditory nuclei. To address new challenges, a procedure to accurately locate and surgically expose the auditory cortex would expedite this research effort. Stereotactic neurosurgery is routinely used in pre-clinical research in animal models to engraft a needle or electrode at a pre-defined location within the auditory cortex. In the following protocol, we use stereotactic methods in a novel way. We identify four coordinate points over the surface of the temporal bone of the rat to define a window that, once opened, accurately exposes both the primary (A1) and secondary (Dorsal and Ventral) cortices of the AC. Using this method, we then perform a surgical ablation of the AC. After such a manipulation is performed, it is necessary to assess the localization, size, and extension of the lesions made in the cortex. Thus, we also describe a method to easily locate the AC ablation postmortem using a coordinate map constructed by transferring the cytoarchitectural limits of the AC to the surface of the brain. The combination of the stereotactically-guided location and ablation of the AC with the localization of the injured area in a coordinate map postmortem facilitates the validation of information obtained from the animal, and leads to a better analysis and comprehension of the data.

  6. Familiar auditory sensory training in chronic traumatic brain injury: a case study.

    PubMed

    Sullivan, Emily Galassi; Guernon, Ann; Blabas, Brett; Herrold, Amy A; Pape, Theresa L-B

    2018-04-01

    The evaluation and treatment of patients with prolonged periods of seriously impaired consciousness following traumatic brain injury (TBI), such as a vegetative or minimally conscious state, pose considerable challenges, particularly in the chronic phases of recovery. This blinded crossover study explored the effects of familiar auditory sensory training (FAST) compared with a sham stimulation in a patient seven years post severe TBI. Baseline data were collected over 4 weeks to account for variability in status with neurobehavioral measures, including the Disorders of Consciousness scale (DOCS), Coma Near Coma scale (CNC), and Consciousness Screening Algorithm. Pre-stimulation neurophysiological assessments were completed as well, namely Brainstem Auditory Evoked Potentials (BAEP) and Somatosensory Evoked Potentials (SSEP). Results revealed a significant improvement in the DOCS neurobehavioral findings after FAST, which was not maintained during the sham. BAEP findings also improved, with maintenance of these improvements following sham stimulation as evidenced by repeat testing. The results emphasize the importance of continued evaluation and treatment of individuals in chronic states of seriously impaired consciousness with a variety of tools. Further study of auditory stimulation as a passive treatment paradigm for this population is warranted. Implications for Rehabilitation Clinicians should be equipped with treatment options to enhance neurobehavioral improvements when traditional treatment methods fail to deliver or maintain functional behavioral changes. Routine assessment is crucial to detect subtle changes in neurobehavioral function even in chronic states of disordered consciousness and to determine potential preserved cognitive abilities that may not be evident due to unreliable motor responses given motoric impairments. Familiar Auditory Stimulation Training (FAST) is an ideal passive stimulation that can be supplied by families, allied health

  7. Synchrony of auditory brain responses predicts behavioral ability to keep still in children with autism spectrum disorder: Auditory-evoked response in children with autism spectrum disorder.

    PubMed

    Yoshimura, Yuko; Kikuchi, Mitsuru; Hiraishi, Hirotoshi; Hasegawa, Chiaki; Takahashi, Tetsuya; Remijn, Gerard B; Oi, Manabu; Munesue, Toshio; Higashida, Haruhiro; Minabe, Yoshio

    2016-01-01

    The auditory-evoked P1m, recorded by magnetoencephalography, reflects a central auditory processing ability in human children. One recent study revealed that asynchrony of P1m between the right and left hemispheres reflected a central auditory processing disorder (i.e., attention deficit hyperactivity disorder, ADHD) in children. However, to date, the relationship between auditory P1m right-left hemispheric synchronization and the comorbidity of hyperactivity in children with autism spectrum disorder (ASD) is unknown. In this study, building on the previous report of P1m asynchrony in children with ADHD, we investigated whether voice-evoked P1m right-left hemispheric synchronization is related to the symptom of hyperactivity in children with ASD. In addition to synchronization, we investigated right-left hemispheric lateralization. Our findings failed to demonstrate significant differences in these values between ASD children with and without the symptom of hyperactivity, which was evaluated using the Autism Diagnostic Observation Schedule, Generic (ADOS-G) subscale. However, there was a significant correlation between the degree of hemispheric synchronization and the ability to keep still during the 12-minute MEG recording periods. Our results also suggested that asynchrony in the bilateral brain auditory processing system is associated with ADHD-like symptoms in children with ASD.

  8. Nonlocal atlas-guided multi-channel forest learning for human brain labeling

    PubMed Central

    Ma, Guangkai; Gao, Yaozong; Wu, Guorong; Wu, Ligang; Shen, Dinggang

    2016-01-01

    Purpose: It is important for many quantitative brain studies to label meaningful anatomical regions in MR brain images. However, due to high complexity of brain structures and ambiguous boundaries between different anatomical regions, the anatomical labeling of MR brain images is still quite a challenging task. In many existing label fusion methods, appearance information is widely used. However, since local anatomy in the human brain is often complex, the appearance information alone is limited in characterizing each image point, especially for identifying the same anatomical structure across different subjects. Recent progress in computer vision suggests that the context features can be very useful in identifying an object from a complex scene. In light of this, the authors propose a novel learning-based label fusion method by using both low-level appearance features (computed from the target image) and high-level context features (computed from warped atlases or tentative labeling maps of the target image). Methods: In particular, the authors employ a multi-channel random forest to learn the nonlinear relationship between these hybrid features and target labels (i.e., corresponding to certain anatomical structures). Specifically, at each of the iterations, the random forest will output tentative labeling maps of the target image, from which the authors compute spatial label context features and then use in combination with original appearance features of the target image to refine the labeling. Moreover, to accommodate the high inter-subject variations, the authors further extend their learning-based label fusion to a multi-atlas scenario, i.e., they train a random forest for each atlas and then obtain the final labeling result according to the consensus of results from all atlases. Results: The authors have comprehensively evaluated their method on both public LONI_LBPA40 and IXI datasets. To quantitatively evaluate the labeling accuracy, the authors use the
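    The final multi-atlas step described above reduces to a consensus over per-atlas labelings. A minimal sketch of that fusion step only (the per-atlas random forests are elided; `per_atlas_labels`, one predicted label per voxel per atlas, is a hypothetical input):

```python
from collections import Counter

def consensus_labeling(per_atlas_labels):
    """Fuse per-atlas label predictions by majority vote, voxel by voxel.

    per_atlas_labels: list of label sequences, one per atlas,
    all of equal length (one label per voxel).
    """
    fused = []
    for votes in zip(*per_atlas_labels):
        # most_common(1) returns the single most frequent label.
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused

# Three atlases disagree on the middle voxel; the majority wins.
print(consensus_labeling([[1, 2, 3], [1, 2, 3], [1, 5, 3]]))  # [1, 2, 3]
```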

  9. Nonlocal atlas-guided multi-channel forest learning for human brain labeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Guangkai; Gao, Yaozong; Wu, Guorong

    Purpose: It is important for many quantitative brain studies to label meaningful anatomical regions in MR brain images. However, due to high complexity of brain structures and ambiguous boundaries between different anatomical regions, the anatomical labeling of MR brain images is still quite a challenging task. In many existing label fusion methods, appearance information is widely used. However, since local anatomy in the human brain is often complex, the appearance information alone is limited in characterizing each image point, especially for identifying the same anatomical structure across different subjects. Recent progress in computer vision suggests that the context features can be very useful in identifying an object from a complex scene. In light of this, the authors propose a novel learning-based label fusion method by using both low-level appearance features (computed from the target image) and high-level context features (computed from warped atlases or tentative labeling maps of the target image). Methods: In particular, the authors employ a multi-channel random forest to learn the nonlinear relationship between these hybrid features and target labels (i.e., corresponding to certain anatomical structures). Specifically, at each of the iterations, the random forest will output tentative labeling maps of the target image, from which the authors compute spatial label context features and then use in combination with original appearance features of the target image to refine the labeling. Moreover, to accommodate the high inter-subject variations, the authors further extend their learning-based label fusion to a multi-atlas scenario, i.e., they train a random forest for each atlas and then obtain the final labeling result according to the consensus of results from all atlases. Results: The authors have comprehensively evaluated their method on both public LONI-LBPA40 and IXI datasets. To quantitatively evaluate the labeling accuracy, the authors

  10. Specialization of the auditory processing in harbor porpoise, characterized by brain-stem potentials

    NASA Astrophysics Data System (ADS)

    Bibikov, Nikolay G.

    2002-05-01

    Brain-stem auditory evoked potentials (BAEPs) were recorded from the head surface of three awake harbor porpoises (Phocoena phocoena). A silver disk placed on the skin surface above the vertex bone was used as the active electrode. The experiments were performed at the Karadag biological station (the Crimea peninsula). Clicks and tone bursts were used as stimuli. The temporal and frequency selectivity of the auditory system was estimated using the methods of simultaneous and forward masking. An evident minimum of the BAEP thresholds was observed in the range of 125-135 kHz, where the main spectral component of the species-specific echolocation signal is located. In this frequency range the tonal forward masking demonstrated a strong frequency selectivity. An off-response to such tone bursts was a typical observation. An evident BAEP could be recorded up to frequencies of 190-200 kHz; however, outside the acoustic fovea the frequency selectivity was rather poor. Temporal resolution was estimated by measuring BAEP recovery functions for double clicks, double tone bursts, and double noise bursts. The half-time of BAEP recovery was in the range of 0.1-0.2 ms. The data indicate that the porpoise auditory system is strongly adapted to detect closely spaced ultrasonic sounds like species-specific locating signals and their echoes.

  11. Neural network approach in multichannel auditory event-related potential analysis.

    PubMed

    Wu, F Y; Slater, J D; Ramsay, R E

    1994-04-01

    Although there are presently no clearly defined criteria for the assessment of P300 event-related potential (ERP) abnormality, statistical analysis strongly indicates that such criteria exist for classifying control subjects and patients with diseases resulting in neuropsychological impairment, such as multiple sclerosis (MS). We have demonstrated the feasibility of artificial neural network (ANN) methods in classifying ERP waveforms measured at a single channel (Cz) from control subjects and MS patients. In this paper, we report the results of multichannel ERP analysis and a modified network analysis methodology to enhance automation of the classification rule extraction process. The proposed methodology significantly reduces the work of statistical analysis. It also helps to standardize the criteria of P300 ERP assessment and facilitates computer-aided analysis of neuropsychological function.
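    As a toy illustration of the general approach (not the network or data used in the paper), a single-neuron classifier trained on synthetic stand-ins for per-subject ERP feature vectors can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for ERP feature vectors (assumed data, not the
# paper's): two classes of 8-dimensional features with shifted means.
X = np.vstack([rng.normal(0.0, 1.0, (100, 8)),
               rng.normal(1.5, 1.0, (100, 8))])
y = np.repeat([0.0, 1.0], 100)

# A single-layer "network" (logistic regression) trained by batch
# gradient descent -- far simpler than the ANNs used in the paper.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid activation
    w -= 0.5 * (X.T @ (p - y)) / len(y)      # gradient of logistic loss
    b -= 0.5 * (p - y).mean()

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
accuracy = ((p > 0.5) == (y == 1.0)).mean()
print(accuracy > 0.9)  # True: the two classes are well separated
```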

  12. Brain hyper-reactivity to auditory novel targets in children with high-functioning autism.

    PubMed

    Gomot, Marie; Belmonte, Matthew K; Bullmore, Edward T; Bernard, Frédéric A; Baron-Cohen, Simon

    2008-09-01

    Although communication and social difficulties in autism have received a great deal of research attention, the other key diagnostic feature, extreme repetitive behaviour and unusually narrow interests, has been addressed less often. Also known as 'resistance to change', this may be related to atypical processing of infrequent, novel stimuli. This can be tested at sensory and neural levels. Our aims were to (i) examine auditory novelty detection and its neural basis in children with autism spectrum conditions (ASC) and (ii) test for brain activation patterns that correlate quantitatively with the number of autistic traits, as a test of the dimensional nature of ASC. The present study employed event-related fMRI during a novel auditory detection paradigm. Participants were twelve 10- to 15-year-old children with ASC and a group of 12 age-, IQ- and sex-matched typical controls. The ASC group responded faster to novel target stimuli. Group differences in brain activity mainly involved the right prefrontal-premotor and the left inferior parietal regions, which were more activated in the ASC group than in controls. In both groups, activation of prefrontal regions during target detection was positively correlated with Autism Spectrum Quotient scores measuring the number of autistic traits. These findings suggest that target detection in autism is associated not only with superior behavioural performance (shorter reaction time) but also with activation of a more widespread network of brain regions. This pattern also shows quantitative variation with the number of autistic traits, in a continuum that extends to the normal population. This finding may shed light on the neurophysiological process underlying narrow interests and what clinically is called 'need for sameness'.

  13. Sex-Specific Brain Deficits in Auditory Processing in an Animal Model of Cocaine-Related Schizophrenic Disorders

    PubMed Central

    Broderick, Patricia A.; Rosenbaum, Taylor

    2013-01-01

    Cocaine is a psychostimulant in the pharmacological class of drugs called local anesthetics. Interestingly, cocaine is the only drug in this class that has a chemical formula comprised of a tropane ring and is, moreover, addictive. The correlation between tropane and addiction is well-studied. Another well-studied correlation is that between psychosis induced by cocaine and the psychosis endogenously present in the schizophrenic patient. Indeed, both of these psychoses exhibit much the same behavioral as well as neurochemical properties across species. Therefore, in order to study the link between schizophrenia and cocaine addiction, we used a behavioral paradigm called acoustic startle. We used this acoustic startle paradigm in female versus male Sprague-Dawley animals to discriminate possible sex differences in responses to startle. The startle method operates through auditory pathways in brain via a network of sensorimotor gating processes within auditory cortex, cochlear nuclei, inferior and superior colliculi, pontine reticular nuclei, in addition to mesocorticolimbic brain reward and nigrostriatal motor circuitries. This paper is the first to report sex differences in response to acoustic stimuli in Sprague-Dawley animals (Rattus norvegicus), although such gender responses to acoustic startle have been reported in humans (Swerdlow et al. 1997 [1]). The startle method monitors pre-pulse inhibition (PPI) as a measure of the loss of sensorimotor gating in the brain's neuronal auditory network; auditory deficiencies can lead to sensory overload and subsequently cognitive dysfunction. Cocaine addicts and schizophrenic patients, as well as cocaine-treated animals, are reported to exhibit symptoms of defective PPI (Geyer et al., 2001 [2]). Key findings are: (a) Cocaine significantly reduced PPI in both sexes. (b) Females were significantly more sensitive than males; reduced PPI was greater in females than in males. (c) Physiological saline had no effect on startle in either sex.

  14. Cerebral responses to local and global auditory novelty under general anesthesia

    PubMed Central

    Uhrig, Lynn; Janssen, David; Dehaene, Stanislas; Jarraya, Béchir

    2017-01-01

    Primate brains can detect a variety of unexpected deviations in auditory sequences. The local-global paradigm dissociates two hierarchical levels of auditory predictive coding by examining the brain responses to first-order (local) and second-order (global) sequence violations. Using the macaque model, we previously demonstrated that, in the awake state, local violations cause focal auditory responses while global violations activate a brain circuit comprising prefrontal, parietal and cingulate cortices. Here we used the same local-global auditory paradigm to clarify the encoding of hierarchical auditory regularities in anesthetized monkeys and compared their brain responses to those obtained in the awake state as measured with fMRI. Both propofol, a GABAA agonist, and ketamine, an NMDA antagonist, left intact or even enhanced the cortical response to auditory inputs. The local effect vanished during propofol anesthesia and shifted spatially during ketamine anesthesia compared with wakefulness. Under increasing levels of propofol, we observed a progressive disorganization of the global effect in prefrontal, parietal and cingulate cortices and its complete suppression under ketamine anesthesia. Anesthesia also suppressed thalamic activations to the global effect. These results suggest that anesthesia preserves initial auditory processing, but disturbs both short-term and long-term auditory predictive coding mechanisms. The disorganization of auditory novelty processing under anesthesia relates to a loss of thalamic responses to novelty and to a disruption of higher-order functional cortical networks in parietal, prefrontal and cingulate cortices. PMID:27502046

  15. Auditory spatial processing in Alzheimer’s disease

    PubMed Central

    Golden, Hannah L.; Nicholas, Jennifer M.; Yong, Keir X. X.; Downey, Laura E.; Schott, Jonathan M.; Mummery, Catherine J.; Crutch, Sebastian J.

    2015-01-01

    The location and motion of sounds in space are important cues for encoding the auditory world. Spatial processing is a core component of auditory scene analysis, a cognitively demanding function that is vulnerable in Alzheimer’s disease. Here we designed a novel neuropsychological battery based on a virtual space paradigm to assess auditory spatial processing in patient cohorts with clinically typical Alzheimer’s disease (n = 20) and its major variant syndrome, posterior cortical atrophy (n = 12) in relation to healthy older controls (n = 26). We assessed three dimensions of auditory spatial function: externalized versus non-externalized sound discrimination, moving versus stationary sound discrimination and stationary auditory spatial position discrimination, together with non-spatial auditory and visual spatial control tasks. Neuroanatomical correlates of auditory spatial processing were assessed using voxel-based morphometry. Relative to healthy older controls, both patient groups exhibited impairments in detection of auditory motion, and stationary sound position discrimination. The posterior cortical atrophy group showed greater impairment for auditory motion processing and the processing of a non-spatial control complex auditory property (timbre) than the typical Alzheimer’s disease group. Voxel-based morphometry in the patient cohort revealed grey matter correlates of auditory motion detection and spatial position discrimination in right inferior parietal cortex and precuneus, respectively. These findings delineate auditory spatial processing deficits in typical and posterior Alzheimer’s disease phenotypes that are related to posterior cortical regions involved in both syndromic variants and modulated by the syndromic profile of brain degeneration. Auditory spatial deficits contribute to impaired spatial awareness in Alzheimer’s disease and may constitute a novel perceptual model for probing brain network disintegration across the Alzheimer

  16. [A novel method of multi-channel feature extraction combining multivariate autoregression and multiple-linear principal component analysis].

    PubMed

    Wang, Jinjia; Zhang, Yanna

    2015-02-01

    Brain-computer interface (BCI) systems identify brain signals by extracting features from them. In view of the limitations of the autoregressive-model feature extraction method and of traditional principal component analysis in dealing with multichannel signals, this paper presents a multichannel feature extraction method that combines the multivariate autoregressive (MVAR) model with multilinear principal component analysis (MPCA), applied to the recognition of magnetoencephalography (MEG) and electroencephalography (EEG) signals. First, we calculated the MVAR model coefficient matrix of the MEG/EEG signals, and then reduced its dimensionality using MPCA. Finally, we recognized the brain signals with a Bayes classifier. The key innovation of our investigation is that we extended the traditional single-channel feature extraction method to the multi-channel case. We then carried out experiments using the data groups IV-III and IV-I. The experimental results proved that the method proposed in this paper is feasible.
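    A minimal sketch of the MVAR stage only, assuming numpy (the MPCA reduction and the Bayes classifier are elided, and the data here are simulated rather than MEG/EEG):

```python
import numpy as np

def mvar_coefficients(x: np.ndarray, order: int) -> np.ndarray:
    """Least-squares estimate of MVAR coefficient matrices.

    x: (n_samples, n_channels) multichannel signal.
    Returns A with shape (order, n_channels, n_channels) such that
    x[t] ~= sum_k A[k] @ x[t - 1 - k].
    """
    n, c = x.shape
    # Regressor row for time t: [x[t-1], ..., x[t-order]] concatenated.
    X = np.hstack([x[order - k : n - k] for k in range(1, order + 1)])
    Y = x[order:]
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)      # (order*c, c)
    return coef.T.reshape(c, order, c).transpose(1, 0, 2)

# Recover the generating matrix of a simulated 2-channel AR(1) process.
rng = np.random.default_rng(0)
A_true = np.array([[0.5, 0.1],
                   [0.0, 0.3]])
x = np.zeros((5000, 2))
for t in range(1, 5000):
    x[t] = A_true @ x[t - 1] + rng.normal(0.0, 0.1, 2)

A_hat = mvar_coefficients(x, order=1)
print(np.round(A_hat[0], 2))  # close to A_true
```

    The flattened coefficient array (or a stack of them across trials) would then be the input to the dimensionality-reduction step.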

  17. From Complex B1 Mapping to Local SAR Estimation for Human Brain MR Imaging Using Multi-channel Transceiver Coil at 7T

    PubMed Central

    Zhang, Xiaotong; Schmitter, Sebastian; Van de Moortel, Pierre-François; Liu, Jiaen

    2014-01-01

    Elevated Specific Absorption Rate (SAR) associated with increased main magnetic field strength remains a major safety concern in ultra-high-field (UHF) Magnetic Resonance Imaging (MRI) applications. The calculation of local SAR requires knowledge of the electric field induced by radiofrequency (RF) excitation and of the local electrical properties of tissues. Since the electric field distribution cannot be directly mapped in conventional MR measurements, SAR estimation is usually performed using numerical model-based electromagnetic simulations which, however, are highly time consuming and cannot account for the specific anatomy and tissue properties of the subject undergoing a scan. In the present study, starting from the measurable RF magnetic fields (B1) in MRI, we conducted a series of mathematical deductions to estimate the local, voxel-wise and subject-specific SAR for each single coil element using a multi-channel transceiver array coil. We first evaluated the feasibility of this approach in numerical simulations including two different human head models. We further conducted an experimental study in a physical phantom and in two human subjects at 7T using a multi-channel transceiver head coil. Accuracy of the results is discussed in the context of predicting local SAR in the human brain at UHF MRI using multi-channel RF transmission. PMID:23508259
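    Once the electric field magnitude has been estimated, the voxel-wise (unaveraged) SAR follows from the standard definition SAR = σ|E|²/(2ρ). A trivial sketch with illustrative tissue values (assumed, not taken from the study):

```python
def local_sar(e_field_peak: float, conductivity: float, density: float) -> float:
    """Voxel-wise SAR in W/kg from the peak electric-field magnitude |E| (V/m),
    tissue conductivity sigma (S/m) and mass density rho (kg/m^3):
    SAR = sigma * |E|^2 / (2 * rho)."""
    return conductivity * e_field_peak ** 2 / (2 * density)

# Illustrative grey-matter-like values (assumed, not from the study):
print(local_sar(100.0, 0.6, 1040.0))  # about 2.88 W/kg
```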

  18. Multichannel Compressive Sensing MRI Using Noiselet Encoding

    PubMed Central

    Pawar, Kamlesh; Egan, Gary; Zhang, Jingxin

    2015-01-01

    The incoherence between measurement and sparsifying transform matrices and the restricted isometry property (RIP) of the measurement matrix are two of the key factors determining the performance of compressive sensing (CS). In CS-MRI, the randomly under-sampled Fourier matrix is used as the measurement matrix and the wavelet transform is usually used as the sparsifying transform matrix. However, the incoherence between the randomly under-sampled Fourier matrix and the wavelet matrix is not optimal, which can deteriorate the performance of CS-MRI. Using the mathematical result that noiselets are maximally incoherent with wavelets, this paper introduces the noiselet unitary bases as the measurement matrix to improve the incoherence and RIP in CS-MRI. Based on an empirical RIP analysis comparing the multichannel noiselet and multichannel Fourier measurement matrices in CS-MRI, we propose a multichannel compressive sensing (MCS) framework to take advantage of the multichannel data acquisition used in MRI scanners. Simulations are presented in the MCS framework to compare the performance of noiselet encoding reconstructions and Fourier encoding reconstructions at different acceleration factors. The comparisons indicate that the multichannel noiselet measurement matrix has better RIP than its Fourier counterpart, and that noiselet-encoded MCS-MRI outperforms Fourier-encoded MCS-MRI in preserving image resolution and can achieve higher acceleration factors. To demonstrate the feasibility of the proposed noiselet encoding scheme, a pulse sequence with tailored spatially selective RF excitation pulses was designed and implemented on a 3T scanner to acquire the data in the noiselet domain from a phantom and a human brain. The results indicate that noiselet encoding preserves image resolution better than Fourier encoding. PMID:25965548
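    The incoherence argument can be made concrete: for two orthonormal bases of C^n, the mutual coherence μ = √n · max|⟨φ_i, ψ_j⟩| ranges from 1 (maximally incoherent, e.g. the spike/Fourier pair) to √n. A small numpy sketch, not specific to the paper's noiselet construction:

```python
import numpy as np

def mutual_coherence(phi: np.ndarray, psi: np.ndarray) -> float:
    """mu(Phi, Psi) = sqrt(n) * max_ij |<phi_i, psi_j>| for two n x n
    bases whose *rows* are the basis vectors."""
    n = phi.shape[0]
    return float(np.sqrt(n) * np.abs(phi.conj() @ psi.T).max())

n = 8
identity = np.eye(n)                          # "spike" basis
fourier = np.fft.fft(np.eye(n)) / np.sqrt(n)  # unitary DFT basis
print(mutual_coherence(identity, fourier))    # 1.0: maximally incoherent
print(mutual_coherence(identity, identity))   # sqrt(8): maximally coherent
```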

  19. Age-related changes in auditory nerve-inner hair cell connections, hair cell numbers, auditory brain stem response and gap detection in UM-HET4 mice.

    PubMed

    Altschuler, R A; Dolan, D F; Halsey, K; Kanicki, A; Deng, N; Martin, C; Eberle, J; Kohrman, D C; Miller, R A; Schacht, J

    2015-04-30

    This study compared the timing of appearance of three components of age-related hearing loss that determine the pattern and severity of presbycusis: the functional and structural pathologies of sensory cells and neurons and changes in gap detection (GD), the latter as an indicator of auditory temporal processing. Using UM-HET4 mice, genetically heterogeneous mice derived from four inbred strains, we studied the integrity of inner and outer hair cells by position along the cochlear spiral, inner hair cell-auditory nerve connections, spiral ganglion neurons (SGN), and determined auditory thresholds, as well as pre-pulse and gap inhibition of the acoustic startle reflex (ASR). Comparisons were made between mice of 5-7, 22-24 and 27-29 months of age. There was individual variability among mice in the onset and extent of age-related auditory pathology. At 22-24 months of age a moderate to large loss of outer hair cells was restricted to the apical third of the cochlea and threshold shifts in the auditory brain stem response were minimal. There was also a large and significant loss of inner hair cell-auditory nerve connections and a significant reduction in GD. The expression of Ntf3 in the cochlea was significantly reduced. At 27-29 months of age there was no further change in the mean number of synaptic connections per inner hair cell or in GD, but a moderate to large loss of outer hair cells was found across all cochlear turns as well as significantly increased ABR threshold shifts at 4, 12, 24 and 48 kHz. A statistical analysis of correlations on an individual animal basis revealed that neither the hair cell loss nor the ABR threshold shifts correlated with loss of GD or with the loss of connections, consistent with independent pathological mechanisms. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  20. Brain state-dependent abnormal LFP activity in the auditory cortex of a schizophrenia mouse model

    PubMed Central

    Nakao, Kazuhito; Nakazawa, Kazu

    2014-01-01

In schizophrenia, evoked 40-Hz auditory steady-state responses (ASSRs) are impaired, which reflects the sensory deficits in this disorder, and baseline spontaneous oscillatory activity also appears to be abnormal. It has been debated whether the evoked ASSR impairments are due to the possible increase in baseline power. GABAergic interneuron-specific NMDA receptor (NMDAR) hypofunction mutant mice mimic some behavioral and pathophysiological aspects of schizophrenia. To determine the presence and extent of sensory deficits in these mutant mice, we recorded spontaneous local field potential (LFP) activity and its click-train evoked ASSRs from primary auditory cortex of awake, head-restrained mice. Baseline spontaneous LFP power in the pre-stimulus period before application of the first click trains was augmented at a wide range of frequencies. However, when repetitive ASSR stimuli were presented every 20 s, averaged spontaneous LFP power amplitudes during the inter-ASSR stimulus intervals in the mutant mice became indistinguishable from the levels of control mice. Nonetheless, the evoked 40-Hz ASSR power and its phase locking to click trains were robustly impaired in the mutants, although the evoked 20-Hz ASSRs were also somewhat diminished. These results suggested that NMDAR hypofunction in cortical GABAergic neurons confers two brain state-dependent LFP abnormalities in the auditory cortex: (1) a broadband increase in spontaneous LFP power in the absence of external inputs, and (2) a robust deficit in the evoked ASSR power and its phase-locking despite normal baseline LFP power magnitude during the repetitive auditory stimuli. The “paradoxically” high spontaneous LFP activity of the primary auditory cortex in the absence of external stimuli may possibly contribute to the emergence of schizophrenia-related aberrant auditory perception. PMID:25018691
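The two ASSR quantities this record relies on (evoked power and inter-trial phase locking to the click train) can be sketched as a minimal computation. This is an illustrative sketch, not the study's analysis pipeline: the function name, parameter values, and synthetic trial data are all assumptions.

```python
import numpy as np

def assr_power_and_plv(trials, fs, freq=40.0):
    """Evoked power and inter-trial phase locking at `freq` Hz.

    trials : (n_trials, n_samples) array of single-trial epochs.
    Returns (evoked_power, plv); plv near 1 means tight phase locking.
    """
    n = trials.shape[1]
    t = np.arange(n) / fs
    basis = np.exp(-2j * np.pi * freq * t)          # complex sinusoid at stimulus rate
    coeffs = trials @ basis / n                     # complex amplitude per trial
    evoked_power = np.abs(coeffs.mean()) ** 2       # power of the phase-locked average
    plv = np.abs(np.exp(1j * np.angle(coeffs)).mean())  # phase-locking value
    return evoked_power, plv

# Synthetic demo: trials phase-locked to a 40-Hz train vs. random-phase trials.
fs, dur, n_tr = 1000, 0.5, 50
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(0)
locked = np.array([np.sin(2 * np.pi * 40 * t) + rng.normal(0, 1, t.size)
                   for _ in range(n_tr)])
jittered = np.array([np.sin(2 * np.pi * 40 * t + rng.uniform(0, 2 * np.pi))
                     + rng.normal(0, 1, t.size) for _ in range(n_tr)])
_, plv_locked = assr_power_and_plv(locked, fs)
_, plv_jit = assr_power_and_plv(jittered, fs)
```

A deficit like the one reported would show up as the mutant group's PLV falling toward the jittered value while baseline power stays matched.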

  1. An integrated system for dynamic control of auditory perspective in a multichannel sound field

    NASA Astrophysics Data System (ADS)

    Corey, Jason Andrew

    An integrated system providing dynamic control of sound source azimuth, distance and proximity to a room boundary within a simulated acoustic space is proposed for use in multichannel music and film sound production. The system has been investigated, implemented, and psychoacoustically tested within the ITU-R BS.775 recommended five-channel (3/2) loudspeaker layout. The work brings together physical and perceptual models of room simulation to allow dynamic placement of virtual sound sources at any location of a simulated space within the horizontal plane. The control system incorporates a number of modules including simulated room modes, "fuzzy" sources, and tracking early reflections, whose parameters are dynamically changed according to sound source location within the simulated space. The control functions of the basic elements, derived from theories of perception of a source in a real room, have been carefully tuned to provide efficient, effective, and intuitive control of a sound source's perceived location. Seven formal listening tests were conducted to evaluate the effectiveness of the algorithm design choices. The tests evaluated: (1) loudness calibration of multichannel sound images; (2) the effectiveness of distance control; (3) the resolution of distance control provided by the system; (4) the effectiveness of the proposed system when compared to a commercially available multichannel room simulation system in terms of control of source distance and proximity to a room boundary; (5) the role of tracking early reflection patterns on the perception of sound source distance; (6) the role of tracking early reflection patterns on the perception of lateral phantom images. The listening tests confirm the effectiveness of the system for control of perceived sound source distance, proximity to room boundaries, and azimuth, through fine, dynamic adjustment of parameters according to source location. All of the parameters are grouped and controlled together to

  2. Communication and control by listening: toward optimal design of a two-class auditory streaming brain-computer interface.

    PubMed

    Hill, N Jeremy; Moinuddin, Aisha; Häuser, Ann-Katrin; Kienzle, Stephan; Schalk, Gerwin

    2012-01-01

    Most brain-computer interface (BCI) systems require users to modulate brain signals in response to visual stimuli. Thus, they may not be useful to people with limited vision, such as those with severe paralysis. One important approach for overcoming this issue is auditory streaming, an approach whereby a BCI system is driven by shifts of attention between two simultaneously presented auditory stimulus streams. Motivated by the long-term goal of translating such a system into a reliable, simple yes-no interface for clinical usage, we aim to answer two main questions. First, we asked which of two previously published variants provides superior performance: a fixed-phase (FP) design in which the streams have equal period and opposite phase, or a drifting-phase (DP) design where the periods are unequal. We found FP to be superior to DP (p = 0.002): average performance levels were 80 and 72% correct, respectively. We were also able to show, in a pilot with one subject, that auditory streaming can support continuous control and neurofeedback applications: by shifting attention between ongoing left and right auditory streams, the subject was able to control the position of a paddle in a computer game. Second, we examined whether the system is dependent on eye movements, since it is known that eye movements and auditory attention may influence each other, and any dependence on the ability to move one's eyes would be a barrier to translation to paralyzed users. We discovered that, despite instructions, some subjects did make eye movements that were indicative of the direction of attention. However, there was no correlation, across subjects, between the reliability of the eye movement signal and the reliability of the BCI system, indicating that our system was configured to work independently of eye movement. Together, these findings are an encouraging step forward toward BCIs that provide practical communication and control options for the most severely paralyzed users.

  3. Towards User-Friendly Spelling with an Auditory Brain-Computer Interface: The CharStreamer Paradigm

    PubMed Central

    Höhne, Johannes; Tangermann, Michael

    2014-01-01

Realizing the decoding of brain signals into control commands, brain-computer interfaces (BCI) aim to establish an alternative communication pathway for locked-in patients. In contrast to most visual BCI approaches which use event-related potentials (ERP) of the electroencephalogram, auditory BCI systems are challenged with ERP responses, which are less class-discriminant between attended and unattended stimuli. Furthermore, these auditory approaches have more complex interfaces, which impose a substantial workload on their users. Aiming for a maximally user-friendly spelling interface, this study introduces a novel auditory paradigm: “CharStreamer”. The speller can be used with an instruction as simple as “please attend to what you want to spell”. The stimuli of CharStreamer comprise 30 spoken sounds of letters and actions. As each of them is represented by the sound of itself and not by an artificial substitute, it can be selected in a one-step procedure. The mental mapping effort (sound stimuli to actions) is thus minimized. Usability is further accounted for by an alphabetical stimulus presentation: contrary to random presentation orders, the user can foresee the presentation time of the target letter sound. Healthy, normal hearing users (n = 10) of the CharStreamer paradigm displayed ERP responses that systematically differed between target and non-target sounds. Class-discriminant features, however, varied individually from the typical N1-P2 complex and P3 ERP components found in control conditions with random sequences. To fully exploit the sequential presentation structure of CharStreamer, novel data analysis approaches and classification methods were introduced. The results of online spelling tests showed that a competitive spelling speed can be achieved with CharStreamer. With respect to user rating, it clearly outperforms a control setup with random presentation sequences. PMID:24886978

  4. Unsupervised learning of temporal features for word categorization in a spiking neural network model of the auditory brain.

    PubMed

    Higgins, Irina; Stringer, Simon; Schnupp, Jan

    2017-01-01

    The nature of the code used in the auditory cortex to represent complex auditory stimuli, such as naturally spoken words, remains a matter of debate. Here we argue that such representations are encoded by stable spatio-temporal patterns of firing within cell assemblies known as polychronous groups, or PGs. We develop a physiologically grounded, unsupervised spiking neural network model of the auditory brain with local, biologically realistic, spike-time dependent plasticity (STDP) learning, and show that the plastic cortical layers of the network develop PGs which convey substantially more information about the speaker independent identity of two naturally spoken word stimuli than does rate encoding that ignores the precise spike timings. We furthermore demonstrate that such informative PGs can only develop if the input spatio-temporal spike patterns to the plastic cortical areas of the model are relatively stable.
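The network model itself is not reproduced here, but the pair-based STDP learning rule such models typically build on can be sketched. The window parameters (`a_plus`, `a_minus`, `tau`) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=0.020):
    """Pair-based STDP weight change for spike-time difference dt = t_post - t_pre (s).

    Pre-before-post (dt > 0) potentiates; post-before-pre (dt < 0) depresses,
    with both effects decaying exponentially as |dt| grows.
    """
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

# Causal pairing (pre leads post by 5 ms) strengthens the synapse;
# the reverse ordering weakens it.
dw_pot = stdp_dw(0.005)
dw_dep = stdp_dw(-0.005)
```

Under a rule like this, inputs with stable spatio-temporal spike patterns repeatedly hit the same causal pairings, which is what allows polychronous groups to consolidate.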

  5. Unsupervised learning of temporal features for word categorization in a spiking neural network model of the auditory brain

    PubMed Central

    Stringer, Simon

    2017-01-01

    The nature of the code used in the auditory cortex to represent complex auditory stimuli, such as naturally spoken words, remains a matter of debate. Here we argue that such representations are encoded by stable spatio-temporal patterns of firing within cell assemblies known as polychronous groups, or PGs. We develop a physiologically grounded, unsupervised spiking neural network model of the auditory brain with local, biologically realistic, spike-time dependent plasticity (STDP) learning, and show that the plastic cortical layers of the network develop PGs which convey substantially more information about the speaker independent identity of two naturally spoken word stimuli than does rate encoding that ignores the precise spike timings. We furthermore demonstrate that such informative PGs can only develop if the input spatio-temporal spike patterns to the plastic cortical areas of the model are relatively stable. PMID:28797034

  6. Auditory and Cognitive Factors Associated with Speech-in-Noise Complaints following Mild Traumatic Brain Injury.

    PubMed

    Hoover, Eric C; Souza, Pamela E; Gallun, Frederick J

    2017-04-01

Auditory complaints following mild traumatic brain injury (MTBI) are common, but few studies have addressed the role of auditory temporal processing in speech recognition complaints. In this study, deficits understanding speech in a background of speech noise following MTBI were evaluated with the goal of comparing the relative contributions of auditory and nonauditory factors. A matched-groups design was used in which a group of listeners with a history of MTBI were compared to a group matched in age and pure-tone thresholds, as well as a control group of young listeners with normal hearing (YNH). Of the 33 listeners who participated in the study, 13 were included in the MTBI group (mean age = 46.7 yr), 11 in the Matched group (mean age = 49 yr), and 9 in the YNH group (mean age = 20.8 yr). Speech-in-noise deficits were evaluated using subjective measures as well as monaural word (Words-in-Noise test) and sentence (Quick Speech-in-Noise test) tasks, and a binaural spatial release task. Performance on these measures was compared to psychophysical tasks that evaluate monaural and binaural temporal fine structure and spectral resolution. Cognitive measures of attention, processing speed, and working memory were evaluated as possible causes of differences between MTBI and Matched groups that might contribute to speech-in-noise perception deficits. A high proportion of listeners in the MTBI group reported difficulty understanding speech in noise (84%) compared to the Matched group (9.1%), and listeners who reported difficulty were more likely to have abnormal results on objective measures of speech in noise. No significant group differences were found between the MTBI and Matched listeners on any of the measures reported, but the number of abnormal tests differed across groups. Regression analysis revealed that a combination of auditory and nonauditory factors contributed to monaural speech-in-noise scores, but the benefit of spatial separation was

  7. Brain activity is related to individual differences in the number of items stored in auditory short-term memory for pitch: evidence from magnetoencephalography.

    PubMed

    Grimault, Stephan; Nolden, Sophie; Lefebvre, Christine; Vachon, François; Hyde, Krista; Peretz, Isabelle; Zatorre, Robert; Robitaille, Nicolas; Jolicoeur, Pierre

    2014-07-01

    We used magnetoencephalography (MEG) to examine brain activity related to the maintenance of non-verbal pitch information in auditory short-term memory (ASTM). We focused on brain activity that increased with the number of items effectively held in memory by the participants during the retention interval of an auditory memory task. We used very simple acoustic materials (i.e., pure tones that varied in pitch) that minimized activation from non-ASTM related systems. MEG revealed neural activity in frontal, temporal, and parietal cortices that increased with a greater number of items effectively held in memory by the participants during the maintenance of pitch representations in ASTM. The present results reinforce the functional role of frontal and temporal cortices in the retention of pitch information in ASTM. This is the first MEG study to provide both fine spatial localization and temporal resolution on the neural mechanisms of non-verbal ASTM for pitch in relation to individual differences in the capacity of ASTM. This research contributes to a comprehensive understanding of the mechanisms mediating the representation and maintenance of basic non-verbal auditory features in the human brain. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. Klinefelter syndrome has increased brain responses to auditory stimuli and motor output, but not to visual stimuli or Stroop adaptation

    PubMed Central

    Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders; Fedder, Jens; Laurberg, Peter; Østergaard, John R.; Hertz, Jens Michael; Pedersen, Anders Degn; Gravholt, Claus Højbjerg

    2016-01-01

    Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words “GREEN” or “RED” were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying “GREEN” or “RED” had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system. PMID:26958463

  9. Klinefelter syndrome has increased brain responses to auditory stimuli and motor output, but not to visual stimuli or Stroop adaptation.

    PubMed

    Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders; Fedder, Jens; Laurberg, Peter; Østergaard, John R; Hertz, Jens Michael; Pedersen, Anders Degn; Gravholt, Claus Højbjerg

    2016-01-01

    Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words "GREEN" or "RED" were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying "GREEN" or "RED" had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system.

  10. Interaction of language, auditory and memory brain networks in auditory verbal hallucinations.

    PubMed

    Ćurčić-Blake, Branislava; Ford, Judith M; Hubl, Daniela; Orlov, Natasza D; Sommer, Iris E; Waters, Flavie; Allen, Paul; Jardri, Renaud; Woodruff, Peter W; David, Olivier; Mulert, Christoph; Woodward, Todd S; Aleman, André

    2017-01-01

Auditory verbal hallucinations (AVH) occur in psychotic disorders, but also as a symptom of other conditions and even in healthy people. Several current theories on the origin of AVH converge, with neuroimaging studies suggesting that the language, auditory and memory/limbic networks are of particular relevance. However, reconciliation of these theories with experimental evidence is missing. We review 50 studies investigating functional (EEG and fMRI) and anatomic (diffusion tensor imaging) connectivity in these networks, and explore the evidence supporting abnormal connectivity in these networks associated with AVH. We distinguish between functional connectivity during an actual hallucination experience (symptom capture) and functional connectivity during either the resting state or a task comparing individuals who hallucinate with those who do not (symptom association studies). Symptom capture studies clearly reveal a pattern of increased coupling among the auditory, language and striatal regions. Anatomical and symptom association functional studies suggest that the interhemispheric connectivity between posterior auditory regions may depend on the phase of illness, with increases in non-psychotic individuals and first episode patients and decreases in chronic patients. Leading hypotheses involving concepts such as unstable memories, source monitoring, top-down attention, and hybrid models of hallucinations are supported in part by the published connectivity data, although several caveats and inconsistencies remain. Specifically, possible changes in fronto-temporal connectivity are still under debate. Precise hypotheses concerning the directionality of connections deduced from current theoretical approaches should be tested using experimental approaches that allow for discrimination of competing hypotheses. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  11. Auditory neuroimaging with fMRI and PET.

    PubMed

    Talavage, Thomas M; Gonzalez-Castillo, Javier; Scott, Sophie K

    2014-01-01

    For much of the past 30 years, investigations of auditory perception and language have been enhanced or even driven by the use of functional neuroimaging techniques that specialize in localization of central responses. Beginning with investigations using positron emission tomography (PET) and gradually shifting primarily to usage of functional magnetic resonance imaging (fMRI), auditory neuroimaging has greatly advanced our understanding of the organization and response properties of brain regions critical to the perception of and communication with the acoustic world in which we live. As the complexity of the questions being addressed has increased, the techniques, experiments and analyses applied have also become more nuanced and specialized. A brief review of the history of these investigations sets the stage for an overview and analysis of how these neuroimaging modalities are becoming ever more effective tools for understanding the auditory brain. We conclude with a brief discussion of open methodological issues as well as potential clinical applications for auditory neuroimaging. This article is part of a Special Issue entitled Human Auditory Neuroimaging. Copyright © 2013 Elsevier B.V. All rights reserved.

  12. Auditory short-term memory in the primate auditory cortex.

    PubMed

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.

  13. Comparisons of MRI images, and auditory-related and vocal-related protein expressions in the brain of echolocation bats and rodents.

    PubMed

    Hsiao, Chun-Jen; Hsu, Chih-Hsiang; Lin, Ching-Lung; Wu, Chung-Hsin; Jen, Philip Hung-Sun

    2016-08-17

    Although echolocating bats and other mammals share the basic design of laryngeal apparatus for sound production and auditory system for sound reception, they have a specialized laryngeal mechanism for ultrasonic sound emissions as well as a highly developed auditory system for processing species-specific sounds. Because the sounds used by bats for echolocation and rodents for communication are quite different, there must be differences in the central nervous system devoted to producing and processing species-specific sounds between them. The present study examines the difference in the relative size of several brain structures and expression of auditory-related and vocal-related proteins in the central nervous system of echolocation bats and rodents. Here, we report that bats using constant frequency-frequency-modulated sounds (CF-FM bats) and FM bats for echolocation have a larger volume of midbrain nuclei (inferior and superior colliculi) and cerebellum relative to the size of the brain than rodents (mice and rats). However, the former have a smaller volume of the cerebrum and olfactory bulb, but greater expression of otoferlin and forkhead box protein P2 than the latter. Although the size of both midbrain colliculi is comparable in both CF-FM and FM bats, CF-FM bats have a larger cerebrum and greater expression of otoferlin and forkhead box protein P2 than FM bats. These differences in brain structure and protein expression are discussed in relation to their biologically relevant sounds and foraging behavior.

  14. Auditory short-term memory in the primate auditory cortex

    PubMed Central

    Scott, Brian H.; Mishkin, Mortimer

    2015-01-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active ‘working memory’ bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a ‘match’ stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. PMID:26541581

  15. Subthalamic nucleus deep brain stimulation affects distractor interference in auditory working memory.

    PubMed

    Camalier, Corrie R; Wang, Alice Y; McIntosh, Lindsey G; Park, Sohee; Neimat, Joseph S

    2017-03-01

    Computational and theoretical accounts hypothesize the basal ganglia play a supramodal "gating" role in the maintenance of working memory representations, especially in preservation from distractor interference. There are currently two major limitations to this account. The first is that supporting experiments have focused exclusively on the visuospatial domain, leaving questions as to whether such "gating" is domain-specific. The second is that current evidence relies on correlational measures, as it is extremely difficult to causally and reversibly manipulate subcortical structures in humans. To address these shortcomings, we examined non-spatial, auditory working memory performance during reversible modulation of the basal ganglia, an approach afforded by deep brain stimulation of the subthalamic nucleus. We found that subthalamic nucleus stimulation impaired auditory working memory performance, specifically in the group tested in the presence of distractors, even though the distractors were predictable and completely irrelevant to the encoding of the task stimuli. This study provides key causal evidence that the basal ganglia act as a supramodal filter in working memory processes, further adding to our growing understanding of their role in cognition. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Electrophysiological measurement of human auditory function

    NASA Technical Reports Server (NTRS)

    Galambos, R.

    1975-01-01

Knowledge of the human auditory evoked response is reviewed, including methods of determining this response, the way particular changes in the stimulus are coupled to specific changes in the response, and how the state of mind of the listener will influence the response. Important practical applications of this basic knowledge are discussed. Measurement of the brainstem evoked response, for instance, can state unequivocally how well the peripheral auditory apparatus functions. It might then be developed into a useful hearing test, especially for infants and preverbal or nonverbal children. Clinical applications of measuring the brain waves evoked 100 msec and later after the auditory stimulus are undetermined. These waves are clearly related to brain events associated with cognitive processing of acoustic signals, since their properties depend upon where the listener directs his attention and whether and when he expects the signal.

  17. Auditory Neuroimaging with fMRI and PET

    PubMed Central

    Talavage, Thomas M.; Gonzalez-Castillo, Javier; Scott, Sophie K.

    2013-01-01

    For much of the past 30 years, investigations of auditory perception and language have been enhanced or even driven by the use of functional neuroimaging techniques that specialize in localization of central responses. Beginning with investigations using positron emission tomography (PET) and gradually shifting primarily to usage of functional magnetic resonance imaging (fMRI), auditory neuroimaging has greatly advanced our understanding of the organization and response properties of brain regions critical to the perception of and communication with the acoustic world in which we live. As the complexity of the questions being addressed has increased, the techniques, experiments and analyses applied have also become more nuanced and specialized. A brief review of the history of these investigations sets the stage for an overview and analysis of how these neuroimaging modalities are becoming ever more effective tools for understanding the auditory brain. We conclude with a brief discussion of open methodological issues as well as potential clinical applications for auditory neuroimaging. PMID:24076424

  18. Prestimulus Network Integration of Auditory Cortex Predisposes Near-Threshold Perception Independently of Local Excitability

    PubMed Central

    Leske, Sabine; Ruhnau, Philipp; Frey, Julia; Lithari, Chrysa; Müller, Nadia; Hartmann, Thomas; Weisz, Nathan

    2015-01-01

    An ever-increasing number of studies point to the importance of network properties of the brain for understanding behavior such as conscious perception. However, with regard to the influence of prestimulus brain states on perception, this network perspective has rarely been taken. Our recent framework predicts that brain regions crucial for a conscious percept are coupled prior to stimulus arrival, forming pre-established pathways of information flow that influence perceptual awareness. Using magnetoencephalography (MEG) and graph-theoretical measures, we investigated auditory conscious perception in a near-threshold (NT) task and found strong support for this framework. Relevant auditory regions showed increased prestimulus interhemispheric connectivity. The left auditory cortex was characterized by hub-like behavior and an enhanced integration into the brain's functional network prior to perceptual awareness. Right auditory regions were decoupled from non-auditory regions, presumably forming an integrated information-processing unit with the left auditory cortex. In addition, we show for the first time for the auditory modality that local excitability, measured by decreased alpha power in the auditory cortex, increases prior to conscious percepts. Importantly, we were able to show that connectivity states seem to be largely independent of local excitability states in the context of a NT paradigm. PMID:26408799
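    A toy illustration of the kind of graph measure involved (node degree as a proxy for hub-like integration). The connectivity values, the binarization threshold, and the labeling of region 0 as a "left auditory" hub are entirely hypothetical; real MEG studies use weighted, statistically thresholded connectivity rather than this simplification:

```python
import numpy as np

def degree_centrality(conn, threshold=0.3):
    """Node degree from a functional connectivity matrix after
    thresholding - a simple measure of how hub-like each region is."""
    A = (np.abs(conn) > threshold).astype(int)
    np.fill_diagonal(A, 0)        # ignore self-connections
    return A.sum(axis=1)

# Hypothetical 5-region connectivity; region 0 plays the hub.
conn = np.array([
    [1.0, 0.6, 0.5, 0.4, 0.5],
    [0.6, 1.0, 0.1, 0.2, 0.1],
    [0.5, 0.1, 1.0, 0.1, 0.2],
    [0.4, 0.2, 0.1, 1.0, 0.1],
    [0.5, 0.1, 0.2, 0.1, 1.0],
])
degrees = degree_centrality(conn)  # region 0 connects to all others
```

In this sketch region 0 has degree 4 while every other region connects only back to it, the kind of asymmetry a hub analysis is designed to expose.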

  19. Spatiotemporal reconstruction of auditory steady-state responses to acoustic amplitude modulations: Potential sources beyond the auditory pathway.

    PubMed

    Farahani, Ehsan Darestani; Goossens, Tine; Wouters, Jan; van Wieringen, Astrid

    2017-03-01

    Investigating the neural generators of auditory steady-state responses (ASSRs), i.e., auditory evoked brain responses, with a wide range of screening and diagnostic applications, has been the focus of various studies for many years. Most of these studies employed a priori assumptions regarding the number and location of neural generators. The aim of this study is to reconstruct ASSR sources with minimal assumptions in order to gain in-depth insight into the number and location of brain regions that are activated in response to low- as well as high-frequency acoustically amplitude modulated signals. In order to reconstruct ASSR sources, we applied independent component analysis with subsequent equivalent dipole modeling to single-subject EEG data (young adults, 20-30 years of age). These data were based on white noise stimuli, amplitude modulated at 4, 20, 40, or 80 Hz. The independent components that exhibited a significant ASSR were clustered among all participants by means of a probabilistic clustering method based on a Gaussian mixture model. Results suggest that a widely distributed network of sources, located in cortical as well as subcortical regions, is active in response to 4, 20, 40, and 80 Hz amplitude modulated noises. Some of these sources are located beyond the central auditory pathway. Comparison of brain sources in response to different modulation frequencies suggested that the identified brain sources in the brainstem, the left and the right auditory cortex show a higher responsiveness to 40 Hz than to the other modulation frequencies. Copyright © 2017 Elsevier Inc. All rights reserved.
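    Deciding whether a component exhibits a significant ASSR usually comes down to testing for a spectral peak at the modulation frequency against neighboring bins. The following is an illustrative sketch on synthetic data, not the study's actual pipeline (which combined ICA, equivalent dipole modeling, and Gaussian-mixture clustering); the function name and neighbor count are assumptions:

```python
import numpy as np
from numpy.fft import rfft, rfftfreq

def assr_snr(signal, fs, mod_freq, n_neighbors=10):
    """Ratio of the spectral magnitude at the modulation frequency to
    the mean of neighboring bins - a common ASSR detection statistic."""
    spec = np.abs(rfft(signal))
    freqs = rfftfreq(len(signal), 1 / fs)
    k = np.argmin(np.abs(freqs - mod_freq))
    neighbors = np.r_[spec[k - n_neighbors:k],
                      spec[k + 1:k + 1 + n_neighbors]]
    return spec[k] / neighbors.mean()

# Demo: a 40 Hz response embedded in noise yields a high SNR at 40 Hz
# but not at an arbitrary control frequency.
fs = 250
t = np.arange(0, 8, 1 / fs)
rng = np.random.default_rng(1)
x = 0.5 * np.sin(2 * np.pi * 40 * t) + rng.normal(0, 1, t.size)
snr40 = assr_snr(x, fs, 40)
snr23 = assr_snr(x, fs, 23)
```

A component whose SNR at the modulation frequency clears some criterion would then be passed on to source modeling.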

  20. Fast learning of simple perceptual discriminations reduces brain activation in working memory and in high-level auditory regions.

    PubMed

    Daikhin, Luba; Ahissar, Merav

    2015-07-01

    Introducing simple stimulus regularities facilitates learning of both simple and complex tasks. This facilitation may reflect an implicit change in the strategies used to solve the task when successful predictions regarding incoming stimuli can be formed. We studied the modifications in brain activity associated with fast perceptual learning based on regularity detection. We administered a two-tone frequency discrimination task and measured brain activation (fMRI) under two conditions: with and without a repeated reference tone. Although participants could not explicitly tell the difference between these two conditions, the introduced regularity affected both performance and the pattern of brain activation. The "No-Reference" condition induced a larger activation in frontoparietal areas known to be part of the working memory network. However, only the condition with a reference showed fast learning, which was accompanied by a reduction of activity in two regions: the left intraparietal area, involved in stimulus retention, and the posterior superior-temporal area, involved in representing auditory regularities. We propose that this joint reduction reflects a reduction in the need for online storage of the compared tones. We further suggest that this change reflects an implicit strategic shift "backwards" from reliance mainly on working memory networks in the "No-Reference" condition to increased reliance on detected regularities stored in high-level auditory networks.

  1. Auditory perception vs. recognition: representation of complex communication sounds in the mouse auditory cortical fields.

    PubMed

    Geissler, Diana B; Ehret, Günter

    2004-02-01

    Details of brain areas for acoustical Gestalt perception and the recognition of species-specific vocalizations are not known. Here we show how spectral properties and the recognition of the acoustical Gestalt of wriggling calls of mouse pups based on a temporal property are represented in auditory cortical fields and an association area (dorsal field) of the pups' mothers. We stimulated either with a call model releasing maternal behaviour at a high rate (call recognition) or with two models of low behavioural significance (perception without recognition). Brain activation was quantified using c-Fos immunocytochemistry, counting Fos-positive cells in electrophysiologically mapped auditory cortical fields and the dorsal field. A frequency-specific labelling in two primary auditory fields is related to call perception but not to the discrimination of the biological significance of the call models used. Labelling related to call recognition is present in the second auditory field (AII). A left hemisphere advantage of labelling in the dorsoposterior field seems to reflect an integration of call recognition with maternal responsiveness. The dorsal field is activated only in the left hemisphere. The spatial extent of Fos-positive cells within the auditory cortex and its fields is larger in the left than in the right hemisphere. Our data show that a left hemisphere advantage in processing of a species-specific vocalization up to recognition is present in mice. The differential representation of vocalizations of high vs. low biological significance, as seen only in higher-order and not in primary fields of the auditory cortex, is discussed in the context of perceptual strategies.

  2. A Case of Generalized Auditory Agnosia with Unilateral Subcortical Brain Lesion

    PubMed Central

    Suh, Hyee; Kim, Soo Yeon; Kim, Sook Hee; Chang, Jae Hyeok; Shin, Yong Beom; Ko, Hyun-Yoon

    2012-01-01

    The mechanisms and functional anatomy underlying the early stages of speech perception are still not well understood. Auditory agnosia is a deficit of auditory object processing, defined as an inability to recognize spoken language and/or nonverbal environmental sounds and music despite adequate hearing, while spontaneous speech, reading and writing are preserved. Usually, either bilateral or unilateral temporal lobe lesions, especially of the transverse gyrus, are responsible for auditory agnosia. Subcortical lesions without cortical damage rarely cause auditory agnosia. We present a 73-year-old right-handed male with generalized auditory agnosia caused by a unilateral subcortical lesion. He was unable to repeat or write to dictation, but his spontaneous speech was fluent and comprehensible. He could understand and read written words and phrases. His auditory brainstem evoked potentials and audiometry were intact. This case suggests that a subcortical lesion involving the unilateral acoustic radiation can cause generalized auditory agnosia. PMID:23342322

  3. Mechanism of auditory hypersensitivity in human autism using autism model rats.

    PubMed

    Ida-Eto, Michiru; Hara, Nao; Ohkawara, Takeshi; Narita, Masaaki

    2017-04-01

    Auditory hypersensitivity is one of the major complications in autism spectrum disorder. The aim of this study was to investigate whether the auditory brain center is affected in autism model rats. Autism model rats were prepared by prenatal exposure to thalidomide on embryonic days 9 and 10 in pregnant rats. The superior olivary complex (SOC), a complex of auditory nuclei, was immunostained with anti-calbindin d28k antibody at postnatal day 50. In autism model rats, SOC immunoreactivity was markedly decreased. Immunostaining of SOC auditory fibers was also weak in autism model rats. Surprisingly, the size of the medial nucleus of the trapezoid body, a nucleus exerting inhibitory function in the SOC, was significantly decreased in autism model rats. Auditory hypersensitivity may be, in part, due to impairment of inhibitory processing by the auditory brain center. © 2016 Japan Pediatric Society.

  4. Visual influences on auditory spatial learning

    PubMed Central

    King, Andrew J.

    2008-01-01

    The visual and auditory systems frequently work together to facilitate the identification and localization of objects and events in the external world. Experience plays a critical role in establishing and maintaining congruent visual–auditory associations, so that the different sensory cues associated with targets that can be both seen and heard are synthesized appropriately. For stimulus location, visual information is normally more accurate and reliable and provides a reference for calibrating the perception of auditory space. During development, vision plays a key role in aligning neural representations of space in the brain, as revealed by the dramatic changes produced in auditory responses when visual inputs are altered, and is used throughout life to resolve short-term spatial conflicts between these modalities. However, accurate, and even supra-normal, auditory localization abilities can be achieved in the absence of vision, and the capacity of the mature brain to relearn to localize sound in the presence of substantially altered auditory spatial cues does not require visuomotor feedback. Thus, while vision is normally used to coordinate information across the senses, the neural circuits responsible for spatial hearing can be recalibrated in a vision-independent fashion. Nevertheless, early multisensory experience appears to be crucial for the emergence of an ability to match signals from different sensory modalities and therefore for the outcome of audiovisual-based rehabilitation of deaf patients in whom hearing has been restored by cochlear implantation. PMID:18986967

  5. Psychological Predictors of Visual and Auditory P300 Brain-Computer Interface Performance

    PubMed Central

    Hammer, Eva M.; Halder, Sebastian; Kleih, Sonja C.; Kübler, Andrea

    2018-01-01

    Brain-Computer Interfaces (BCIs) provide communication channels independent of muscular control. In the current study we used two versions of the P300-BCI: one based on visual and the other on auditory stimulation. Up to now, data on the impact of psychological variables on P300-BCI control are scarce. Hence, our goal was to identify new predictors with a comprehensive psychological test battery. A total of N = 40 healthy BCI novices took part in a visual and an auditory BCI session. Psychological variables were measured with an electronic test battery including clinical, personality, and performance tests. The personality factor “emotional stability” was negatively correlated (Spearman's rho = −0.416; p < 0.01) and an output variable of the non-verbal learning test (NVLT), which can be interpreted as ability to learn, was positively correlated (Spearman's rho = 0.412; p < 0.01) with visual P300-BCI performance. In a linear regression analysis both independent variables explained 24% of the variance. “Emotional stability” was also negatively related to auditory P300-BCI performance (Spearman's rho = −0.377; p < 0.05), but failed to reach significance in the regression analysis. Psychological parameters seem to play a moderate role in visual P300-BCI performance. “Emotional stability” was identified as a new predictor, indicating that BCI users who characterize themselves as calm and rational showed worse BCI performance. The positive relation between the ability to learn and BCI performance corroborates the notion that learning may also constitute an important factor for P300-based BCIs. Further studies are needed to consolidate or reject the presented predictors. PMID:29867319
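    The analysis pattern described above (rank correlations between predictors and performance, then a linear model to report explained variance) can be sketched as follows. All data here are synthetic and the effect directions are merely simulated to mirror the reported signs; none of these numbers come from the study:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-participant predictor scores and BCI accuracy.
rng = np.random.default_rng(42)
n = 40
emotional_stability = rng.normal(50, 10, n)
learning_score = rng.normal(0, 1, n)
# Simulate the reported directions: negative and positive relations.
bci_accuracy = (70 - 0.4 * emotional_stability + 5 * learning_score
                + rng.normal(0, 3, n))

rho_es, p_es = spearmanr(emotional_stability, bci_accuracy)
rho_ls, p_ls = spearmanr(learning_score, bci_accuracy)

# Explained variance (R^2) of a two-predictor linear model.
X = np.column_stack([np.ones(n), emotional_stability, learning_score])
beta, *_ = np.linalg.lstsq(X, bci_accuracy, rcond=None)
resid = bci_accuracy - X @ beta
r2 = 1 - resid.var() / bci_accuracy.var()
```

Spearman's rho is used rather than Pearson's r because questionnaire scores are often ordinal and not normally distributed.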

  6. [Brainstem auditory evoked potentials in neurophysiological assessment of brain stem dysfunction in patients with atherostenosis of vertebral arteries].

    PubMed

    Maksimova, M Yu; Sermagambetova, Zh N; Skrylev, S I; Fedin, P A; Koshcheev, A Yu; Shchipakin, V L; Sinicyn, I A

    To assess brain stem dysfunction in patients with hemodynamically significant stenosis of the vertebral arteries (VA) using short-latency brainstem auditory evoked potentials (BAEP). The study group included 50 patients (mean age 64±6 years) with hemodynamically significant extracranial VA stenosis. These patients had BAEP abnormalities including prolongation of the I-V interpeak interval and of the peak V latency, as well as reduction of the peak I amplitude. After transluminal balloon angioplasty with stenting of the VA stenoses, peak V latency shortened compared with the preoperative period, reflecting improved brain stem conduction. Atherostenosis of the vertebral arteries is characterized by signs of brain stem dysfunction, predominantly in the pontomesencephalic brain stem. After transluminal balloon angioplasty with stenting of the VA, an improvement of brain stem conductive functions was observed.

  7. Towards a truly mobile auditory brain-computer interface: exploring the P300 to take away.

    PubMed

    De Vos, Maarten; Gandras, Katharina; Debener, Stefan

    2014-01-01

    In a previous study we presented a low-cost, small, and wireless 14-channel EEG system suitable for field recordings (Debener et al., 2012, Psychophysiology). In the present follow-up study we investigated whether a single-trial P300 response can be reliably measured with this system while subjects freely walk outdoors. Twenty healthy participants performed a three-class auditory oddball task, which included rare target and non-target distractor stimuli presented with equal probabilities of 16%. Data were recorded in a seated (control condition) and in a walking condition, both of which were realized outdoors. A significantly larger P300 event-related potential amplitude was evident for targets compared to distractors (p<.001), but no significant interaction with recording condition emerged. P300 single-trial analysis was performed with regularized stepwise linear discriminant analysis and revealed above chance-level classification accuracies for most participants (19 out of 20 for the seated, 16 out of 20 for the walking condition), with mean classification accuracies of 71% (seated) and 64% (walking). Moreover, the resulting information transfer rates for the seated and walking conditions were comparable to a recently published laboratory auditory brain-computer interface (BCI) study. This leads us to conclude that a truly mobile auditory BCI system is feasible. © 2013.
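    Single-trial P300 classification with regularized LDA can be illustrated on synthetic features. The study used regularized stepwise LDA; the sketch below substitutes scikit-learn's shrinkage-regularized LDA, a related but not identical method, and the feature dimensions and effect size are invented:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic single-trial features: targets carry a small P300-like offset.
rng = np.random.default_rng(7)
n_trials, n_features = 200, 32        # e.g. channels x time bins, flattened
X = rng.normal(0, 1, (n_trials, n_features))
y = np.repeat([0, 1], n_trials // 2)  # 0 = distractor, 1 = target
X[y == 1, :8] += 0.8                  # evoked deflection on early features

# Shrinkage regularization keeps the covariance estimate stable when
# trials are few relative to the feature count.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
acc = cross_val_score(clf, X, y, cv=5).mean()
```

Cross-validated accuracy well above the 50% chance level is what "above chance-level classification" refers to in the abstract.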

  8. Anatomy, Physiology and Function of the Auditory System

    NASA Astrophysics Data System (ADS)

    Kollmeier, Birger

    The human ear consists of the outer ear (pinna or concha, outer ear canal, tympanic membrane), the middle ear (middle ear cavity with the three ossicles malleus, incus and stapes) and the inner ear (the cochlea, which is connected via the vestibule to the three semicircular canals, which provide the sense of balance). The cochlea is connected to the brain stem via the eighth cranial nerve, i.e. the vestibulocochlear nerve (nervus statoacusticus). Subsequently, the acoustical information is processed by the brain at various levels of the auditory system. An overview of the anatomy of the auditory system is provided in Figure 1.

  9. Visual activity predicts auditory recovery from deafness after adult cochlear implantation.

    PubMed

    Strelnikov, Kuzma; Rouger, Julien; Demonet, Jean-François; Lagleyre, Sebastien; Fraysse, Bernard; Deguine, Olivier; Barone, Pascal

    2013-12-01

    Modern cochlear implantation technologies allow deaf patients to understand auditory speech; however, the implants deliver only a coarse auditory input, and patients must rely on long-term adaptive processes to achieve coherent percepts. In adults with post-lingual deafness, most of the progress in speech recovery occurs during the first year after cochlear implantation, but there is a large range of variability in the level of cochlear implant outcomes and in the temporal evolution of recovery. It has been proposed that when profoundly deaf subjects receive a cochlear implant, visual cross-modal reorganization of the brain is deleterious for auditory speech recovery. We tested this hypothesis in post-lingually deaf adults by analysing whether brain activity shortly after implantation correlated with the level of auditory recovery 6 months later. Based on brain activity induced by a speech-processing task, we found strong positive correlations in areas outside the auditory cortex. The highest positive correlations were found in the occipital cortex involved in visual processing, as well as in the posterior-temporal cortex known for audio-visual integration. The other area that positively correlated with auditory speech recovery was localized in the left inferior frontal area, known for speech processing. Our results demonstrate that the functional level of the visual modality is related to the proficiency of auditory recovery. Based on the positive correlation of visual activity with auditory speech recovery, we suggest that the visual modality may facilitate the perception of a word's auditory counterpart in communicative situations. The link demonstrated between visual activity and auditory speech perception indicates that visuoauditory synergy is crucial for cross-modal plasticity and for fostering speech-comprehension recovery in adult cochlear-implanted deaf patients.

  10. You can't stop the music: reduced auditory alpha power and coupling between auditory and memory regions facilitate the illusory perception of music during noise.

    PubMed

    Müller, Nadia; Keil, Julian; Obleser, Jonas; Schulz, Hannah; Grunwald, Thomas; Bernays, René-Ludwig; Huppertz, Hans-Jürgen; Weisz, Nathan

    2013-10-01

    Our brain has the capacity of providing an experience of hearing even in the absence of auditory stimulation. This can be seen as illusory conscious perception. While increasing evidence postulates that conscious perception requires specific brain states that systematically relate to specific patterns of oscillatory activity, the relationship between auditory illusions and oscillatory activity remains mostly unexplained. To investigate this we recorded brain activity with magnetoencephalography and collected intracranial data from epilepsy patients while participants listened to familiar as well as unknown music that was partly replaced by sections of pink noise. We hypothesized that participants have a stronger experience of hearing music throughout noise when the noise sections are embedded in familiar compared to unfamiliar music. This was supported by the behavioral results showing that participants rated the perception of music during noise as stronger when noise was presented in a familiar context. Time-frequency data show that the illusory perception of music is associated with a decrease in auditory alpha power pointing to increased auditory cortex excitability. Furthermore, the right auditory cortex is concurrently synchronized with the medial temporal lobe, putatively mediating memory aspects associated with the music illusion. We thus assume that neuronal activity in the highly excitable auditory cortex is shaped through extensive communication between the auditory cortex and the medial temporal lobe, thereby generating the illusion of hearing music during noise. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. BabySQUID: A mobile, high-resolution multichannel magnetoencephalography system for neonatal brain assessment

    NASA Astrophysics Data System (ADS)

    Okada, Yoshio; Pratt, Kevin; Atwood, Christopher; Mascarenas, Anthony; Reineman, Richard; Nurminen, Jussi; Paulson, Douglas

    2006-02-01

    We developed a prototype of a mobile, high-resolution, multichannel magnetoencephalography (MEG) system, called babySQUID, for assessing brain functions in newborns and infants. Unlike electroencephalography, MEG signals are not distorted by the scalp or the fontanels and sutures in the skull. Thus, brain activity can be measured and localized with MEG as if the sensors were above an exposed brain. The babySQUID is housed in a moveable cart small enough to be transported from one room to another. To assess brain functions, one places the baby on the bed of the cart and the head on its headrest with the MEG sensors just below. The sensor array consists of 76 first-order axial gradiometers, each with a pickup coil diameter of 6 mm and a baseline of 30 mm, in a high-density array with a spacing of 12-14 mm center-to-center. The pickup coils are 6±1 mm below the outer surface of the headrest. The short gap provides unprecedented sensitivity, since the scalp and skull are thin (as little as 3-4 mm altogether) in babies. In an electromagnetically unshielded room in a hospital, the field sensitivity at 1 kHz was ~17 fT/√Hz. The noise was reduced from ~400 to 200 fT/√Hz at 1 Hz using a reference cancellation technique, and further to ~40 fT/√Hz using a gradient common mode rejection technique. Although the residual environmental magnetic noise interfered with the operation of the babySQUID, the instrument functioned sufficiently well to detect spontaneous brain signals from babies with a signal-to-noise ratio (SNR) of as much as 7.6:1. In a magnetically shielded room, the field sensitivity was 17 fT/√Hz at 20 Hz and 30 fT/√Hz at 1 Hz without implementation of reference or gradient cancellation. The sensitivity was sufficiently high to detect spontaneous brain activity from a 7-month-old baby with an SNR of as much as 40:1, and evoked somatosensory responses with a 50 Hz bandwidth after as little as four averages. We expect that both the noise and the sensor gap can be reduced further by
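    Reference cancellation of the kind mentioned in the abstract can be sketched as a least-squares projection: noise-only reference sensors are regressed onto the measurement channel and the fitted contribution subtracted. This is a generic illustration, not the babySQUID implementation, and the coupling coefficients below are invented:

```python
import numpy as np

def reference_cancel(signal_ch, ref_chs):
    """Subtract the least-squares projection of reference-sensor noise
    from a signal channel (one common reference-cancellation scheme).

    signal_ch -- (n_samples,) measurement channel
    ref_chs   -- (n_refs, n_samples) references seeing noise only
    """
    R = ref_chs.T                                 # (n_samples, n_refs)
    w, *_ = np.linalg.lstsq(R, signal_ch, rcond=None)
    return signal_ch - R @ w

# Demo: brain signal plus environmental noise that also couples into
# the reference sensor (with some independent sensor noise of its own).
rng = np.random.default_rng(3)
t = np.arange(0, 2, 1 / 500)
brain = 0.1 * np.sin(2 * np.pi * 10 * t)
noise = rng.normal(0, 1, t.size)
refs = np.vstack([noise + 0.05 * rng.normal(0, 1, t.size)])
raw = brain + 0.9 * noise
clean = reference_cancel(raw, refs)
```

Because the brain signal is essentially uncorrelated with the reference channel, the projection removes mostly the shared environmental noise, which is how an unshielded system can still reach usable SNR.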

  12. An Evaluation of Training with an Auditory P300 Brain-Computer Interface for the Japanese Hiragana Syllabary

    PubMed Central

    Halder, Sebastian; Takano, Kouji; Ora, Hiroki; Onishi, Akinari; Utsumi, Kota; Kansaku, Kenji

    2016-01-01

    Gaze-independent brain-computer interfaces (BCIs) are a possible communication channel for persons with paralysis. We investigated whether it is possible to use auditory stimuli to create a BCI for the Japanese Hiragana syllabary, which has 46 Hiragana characters. Additionally, we investigated whether training has an effect on accuracy despite the large number of different stimuli involved. Able-bodied participants (N = 6) were asked to select 25 syllables (out of fifty possible choices) using a two-step procedure: first the consonant (ten choices) and then the vowel (five choices). This was repeated on 3 separate days. Additionally, a person with spinal cord injury (SCI) participated in the experiment. Four out of six healthy participants reached Hiragana syllable accuracies above 70%, and the information transfer rate increased from 1.7 bits/min in the first session to 3.2 bits/min in the third session. The accuracy of the participant with SCI increased from 12% (0.2 bits/min) to 56% (2 bits/min) in session three. Reliable selections from a 10 × 5 matrix using auditory stimuli were possible, and performance improved with training. We were able to show that auditory P300 BCIs can be used for communication with up to fifty symbols. This enables the use of the technology of auditory P300 BCIs with a variety of applications. PMID:27746716
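    Bits/min figures like those quoted above are commonly computed with the Wolpaw information transfer rate formula, which combines the number of possible selections, the selection accuracy, and the selection rate. Whether this study used exactly this formula is an assumption; the sketch merely shows the standard calculation:

```python
import math

def wolpaw_itr(n_choices, accuracy, selections_per_min):
    """Information transfer rate (bits/min), Wolpaw formula:
    B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))."""
    n, p = n_choices, accuracy
    if p >= 1.0:
        bits = math.log2(n)       # perfect accuracy
    elif p <= 1.0 / n:
        bits = 0.0                # at or below chance carries no information
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * selections_per_min

# e.g. a ten-choice step at 90% accuracy, two selections per minute:
itr = wolpaw_itr(10, 0.9, 2)
```

Note that the formula assumes equiprobable choices and errors spread evenly over the wrong options, so it is an idealization of any real BCI session.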

  14. Brain bases for auditory stimulus-driven figure-ground segregation.

    PubMed

    Teki, Sundeep; Chait, Maria; Kumar, Sukhbinder; von Kriegstein, Katharina; Griffiths, Timothy D

    2011-01-05

    Auditory figure-ground segregation, listeners' ability to selectively hear out a sound of interest from a background of competing sounds, is a fundamental aspect of scene analysis. In contrast to the disordered acoustic environment we experience during everyday listening, most studies of auditory segregation have used relatively simple, temporally regular signals. We developed a new figure-ground stimulus that incorporates stochastic variation of the figure and background that captures the rich spectrotemporal complexity of natural acoustic scenes. Figure and background signals overlap in spectrotemporal space, but vary in the statistics of fluctuation, such that the only way to extract the figure is by integrating the patterns over time and frequency. Our behavioral results demonstrate that human listeners are remarkably sensitive to the appearance of such figures. In a functional magnetic resonance imaging experiment, aimed at investigating preattentive, stimulus-driven, auditory segregation mechanisms, naive subjects listened to these stimuli while performing an irrelevant task. Results demonstrate significant activations in the intraparietal sulcus (IPS) and the superior temporal sulcus related to bottom-up, stimulus-driven figure-ground decomposition. We did not observe any significant activation in the primary auditory cortex. Our results support a role for automatic, bottom-up mechanisms in the IPS in mediating stimulus-driven, auditory figure-ground segregation, which is consistent with accumulating evidence implicating the IPS in structuring sensory input and perceptual organization.

  15. The Application of the International Classification of Functioning, Disability and Health to Functional Auditory Consequences of Mild Traumatic Brain Injury

    PubMed Central

    Werff, Kathy R. Vander

    2016-01-01

    This article reviews the auditory consequences of mild traumatic brain injury (mTBI) within the context of the International Classification of Functioning, Disability and Health (ICF). Because of growing awareness of mTBI as a public health concern and the diverse and heterogeneous nature of the individual consequences, it is important to provide audiologists and other health care providers with a better understanding of potential implications in the assessment of levels of function and disability for individual interdisciplinary remediation planning. In consideration of body structures and function, the mechanisms of injury that may result in peripheral or central auditory dysfunction in mTBI are reviewed, along with a broader scope of effects of injury to the brain. The activity limitations and participation restrictions that may affect assessment and management in the context of an individual's personal factors and their environment are considered. Finally, a review of management strategies for mTBI from an audiological perspective as part of a multidisciplinary team is included. PMID:27489400

  16. Web-based multi-channel analyzer

    DOEpatents

    Gritzo, Russ E.

    2003-12-23

    The present invention provides an improved multi-channel analyzer designed to conveniently gather, process, and distribute spectrographic pulse data. The multi-channel analyzer may operate on a computer system having memory, a processor, and the capability to connect to a network and to receive digitized spectrographic pulses. The multi-channel analyzer may have a software module, integrated with a general-purpose operating system, that may receive digitized spectrographic pulses at rates of at least 10,000 pulses per second. The multi-channel analyzer may further have a user-level software module that may receive user-specified controls dictating the operation of the multi-channel analyzer, making the multi-channel analyzer customizable by the end user. The user-level software may further categorize and conveniently distribute spectrographic pulse data employing non-proprietary, standard communication protocols and formats.
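    The core of any multi-channel analyzer is binning digitized pulse heights into amplitude "channels" to build a spectrum. A minimal sketch of that step; the channel count, voltage range, and the synthetic photopeak are illustrative assumptions, not details from the patent:

```python
import numpy as np

def bin_pulses(pulse_heights, n_channels=1024, v_max=10.0):
    """Accumulate digitized pulse heights into an MCA-style spectrum:
    one counter ("channel") per amplitude bin."""
    spectrum, _ = np.histogram(pulse_heights, bins=n_channels,
                               range=(0.0, v_max))
    return spectrum

rng = np.random.default_rng(5)
# Hypothetical detector output: a photopeak near 6.6 V over a flat
# background of random pulses.
pulses = np.concatenate([rng.normal(6.6, 0.05, 5000),
                         rng.uniform(0, 10, 2000)])
spectrum = bin_pulses(pulses)
peak_channel = int(np.argmax(spectrum))       # channel of the photopeak
```

A networked MCA of the kind described would then serialize `spectrum` in a standard format for distribution to clients.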

  17. Whispering - The hidden side of auditory communication.

    PubMed

    Frühholz, Sascha; Trost, Wiebke; Grandjean, Didier

    2016-11-15

    Whispering is a unique expression mode that is specific to auditory communication. Individuals switch their vocalization mode to whispering especially when affected by inner emotions in certain social contexts, such as in intimate relationships or intimidating social interactions. Although this context-dependent whispering is adaptive, whispered voices are acoustically far less rich than phonated voices and thus place higher demands on hearing and on neural auditory decoding when listeners recognize their socio-affective value. The neural dynamics underlying this recognition, especially from whispered voices, are largely unknown. Here we show that whispered voices in humans are considerably impoverished, as quantified by an entropy measure of spectral acoustic information, and that this missing information requires large-scale neural compensation in terms of auditory and cognitive processing. Notably, recognizing the socio-affective information from voices was slightly more difficult from whispered voices, probably based on missing tonal information. While phonated voices elicited extended activity in auditory regions for decoding of relevant tonal and time information and the valence of voices, whispered voices elicited activity in a complex auditory-frontal brain network. Our data suggest that a large-scale multidirectional brain network compensates for the impoverished sound quality of socially meaningful environmental signals to support their accurate recognition and valence attribution. Copyright © 2016 Elsevier Inc. All rights reserved.
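    The abstract's "entropy measure of spectral acoustic information" is unspecified; one common formulation is the Shannon entropy of the normalized power spectrum, sketched below on synthetic signals. Tonal (phonated-like) signals concentrate energy in few frequency bins and score low; noise-like (whisper-like) signals score high:

```python
import numpy as np

def spectral_entropy(x, eps=1e-12):
    """Shannon entropy (bits) of the normalized power spectrum.
    Low = energy concentrated in few frequencies (tonal);
    high = flat, noise-like spectrum."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    p = psd / (psd.sum() + eps)
    return float(-(p * np.log2(p + eps)).sum())

rng = np.random.default_rng(9)
t = np.arange(0, 1, 1 / 8000)
tonal = np.sin(2 * np.pi * 220 * t)   # phonated-like: one dominant partial
noisy = rng.normal(0, 1, t.size)      # whisper-like: noise-dominated
```

Real voice analyses would compute this per frame over short windows rather than over a whole one-second signal.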

  18. Persistent neural activity in auditory cortex is related to auditory working memory in humans and nonhuman primates

    PubMed Central

    Huang, Ying; Matysiak, Artur; Heil, Peter; König, Reinhard; Brosch, Michael

    2016-01-01

    Working memory is the cognitive capacity of short-term storage of information for goal-directed behaviors. Where and how this capacity is implemented in the brain are unresolved questions. We show that auditory cortex stores information by persistent changes of neural activity. We separated activity related to working memory from activity related to other mental processes by having humans and monkeys perform different tasks with varying working memory demands on the same sound sequences. Working memory was reflected in the spiking activity of individual neurons in auditory cortex and in the activity of neuronal populations, that is, in local field potentials and magnetic fields. Our results provide direct support for the idea that temporary storage of information recruits the same brain areas that also process the information. Because similar activity was observed in the two species, the cellular bases of some auditory working memory processes in humans can be studied in monkeys. DOI: http://dx.doi.org/10.7554/eLife.15441.001 PMID:27438411

  19. Spatiotemporal differentiation in auditory and motor regions during auditory phoneme discrimination.

    PubMed

    Aerts, Annelies; Strobbe, Gregor; van Mierlo, Pieter; Hartsuiker, Robert J; Corthals, Paul; Santens, Patrick; De Letter, Miet

    2017-06-01

Auditory phoneme discrimination (APD) is supported by both auditory and motor regions through a sensorimotor interface embedded in a fronto-temporo-parietal cortical network. However, the specific spatiotemporal organization of this network during APD with respect to different types of phonemic contrasts is still unclear. Here, we use source reconstruction, applied to event-related potentials in a group of 47 participants, to uncover a potential spatiotemporal differentiation in these brain regions during a passive and active APD task with respect to place of articulation (PoA), voicing and manner of articulation (MoA). Results demonstrate that in an early stage (50-110 ms), auditory, motor and sensorimotor regions elicit more activation during the passive and active APD task with MoA and the active APD task with voicing compared to PoA. In a later stage (130-175 ms), the same auditory and motor regions elicit more activation during the APD task with PoA compared to MoA and voicing, yet only in the active condition, implying important timing differences. Degree of attention influences a frontal network during the APD task with PoA, whereas auditory regions are more affected during the APD task with MoA and voicing. Based on these findings, it can be cautiously suggested that APD is supported by the integration of early activation of auditory-acoustic properties in superior temporal regions, more pronounced for MoA and voicing, and later auditory-to-motor integration in sensorimotor areas, more pronounced for PoA.

  20. Transcriptional maturation of the mouse auditory forebrain.

    PubMed

    Hackett, Troy A; Guo, Yan; Clause, Amanda; Hackett, Nicholas J; Garbett, Krassimira; Zhang, Pan; Polley, Daniel B; Mirnics, Karoly

    2015-08-14

    The maturation of the brain involves the coordinated expression of thousands of genes, proteins and regulatory elements over time. In sensory pathways, gene expression profiles are modified by age and sensory experience in a manner that differs between brain regions and cell types. In the auditory system of altricial animals, neuronal activity increases markedly after the opening of the ear canals, initiating events that culminate in the maturation of auditory circuitry in the brain. This window provides a unique opportunity to study how gene expression patterns are modified by the onset of sensory experience through maturity. As a tool for capturing these features, next-generation sequencing of total RNA (RNAseq) has tremendous utility, because the entire transcriptome can be screened to index expression of any gene. To date, whole transcriptome profiles have not been generated for any central auditory structure in any species at any age. In the present study, RNAseq was used to profile two regions of the mouse auditory forebrain (A1, primary auditory cortex; MG, medial geniculate) at key stages of postnatal development (P7, P14, P21, adult) before and after the onset of hearing (~P12). Hierarchical clustering, differential expression, and functional geneset enrichment analyses (GSEA) were used to profile the expression patterns of all genes. Selected genesets related to neurotransmission, developmental plasticity, critical periods and brain structure were highlighted. An accessible repository of the entire dataset was also constructed that permits extraction and screening of all data from the global through single-gene levels. To our knowledge, this is the first whole transcriptome sequencing study of the forebrain of any mammalian sensory system. Although the data are most relevant for the auditory system, they are generally applicable to forebrain structures in the visual and somatosensory systems, as well. The main findings were: (1) Global gene expression

  1. Estradiol-dependent modulation of auditory processing and selectivity in songbirds

    PubMed Central

    Maney, Donna; Pinaud, Raphael

    2011-01-01

    The steroid hormone estradiol plays an important role in reproductive development and behavior and modulates a wide array of physiological and cognitive processes. Recently, reports from several research groups have converged to show that estradiol also powerfully modulates sensory processing, specifically, the physiology of central auditory circuits in songbirds. These investigators have discovered that (1) behaviorally-relevant auditory experience rapidly increases estradiol levels in the auditory forebrain; (2) estradiol instantaneously enhances the responsiveness and coding efficiency of auditory neurons; (3) these changes are mediated by a non-genomic effect of brain-generated estradiol on the strength of inhibitory neurotransmission; and (4) estradiol regulates biochemical cascades that induce the expression of genes involved in synaptic plasticity. Together, these findings have established estradiol as a central regulator of auditory function and intensified the need to consider brain-based mechanisms, in addition to peripheral organ dysfunction, in hearing pathologies associated with estrogen deficiency. PMID:21146556

  2. The function of BDNF in the adult auditory system.

    PubMed

    Singer, Wibke; Panford-Walsh, Rama; Knipper, Marlies

    2014-01-01

    The inner ear of vertebrates is specialized to perceive sound, gravity and movements. Each of the specialized sensory organs within the cochlea (sound) and vestibular system (gravity, head movements) transmits information to specific areas of the brain. During development, brain-derived neurotrophic factor (BDNF) orchestrates the survival and outgrowth of afferent fibers connecting the vestibular organ and those regions in the cochlea that map information for low frequency sound to central auditory nuclei and higher-auditory centers. The role of BDNF in the mature inner ear is less understood. This is mainly due to the fact that constitutive BDNF mutant mice are postnatally lethal. Only in the last few years has the improved technology of performing conditional cell specific deletion of BDNF in vivo allowed the study of the function of BDNF in the mature developed organ. This review provides an overview of the current knowledge of the expression pattern and function of BDNF in the peripheral and central auditory system from just prior to the first auditory experience onwards. A special focus will be put on the differential mechanisms in which BDNF drives refinement of auditory circuitries during the onset of sensory experience and in the adult brain. This article is part of the Special Issue entitled 'BDNF Regulation of Synaptic Structure, Function, and Plasticity'. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Music training for the development of auditory skills.

    PubMed

    Kraus, Nina; Chandrasekaran, Bharath

    2010-08-01

    The effects of music training in relation to brain plasticity have caused excitement, evident from the popularity of books on this topic among scientists and the general public. Neuroscience research has shown that music training leads to changes throughout the auditory system that prime musicians for listening challenges beyond music processing. This effect of music training suggests that, akin to physical exercise and its impact on body fitness, music is a resource that tones the brain for auditory fitness. Therefore, the role of music in shaping individual development deserves consideration.

  4. Hearing loss and the central auditory system: Implications for hearing aids

    NASA Astrophysics Data System (ADS)

    Frisina, Robert D.

    2003-04-01

Hearing loss can result from disorders or damage to the ear (peripheral auditory system) or the brain (central auditory system). Here, the basic structure and function of the central auditory system will be highlighted as relevant to cases of permanent hearing loss where assistive devices (hearing aids) are called for. The parts of the brain used for hearing are altered in two basic ways in instances of hearing loss: (1) Damage to the ear can reduce the number and nature of input channels that the brainstem receives from the ear, causing plasticity of the central auditory system. This plasticity may partially compensate for the peripheral loss, or add new abnormalities such as distorted speech processing or tinnitus. (2) In some situations, damage to the brain can occur independently of the ear, as may occur in cases of head trauma, tumors or aging. Implications of deficits to the central auditory system for speech perception in noise, hearing aid use and future innovative circuit designs will be provided to set the stage for subsequent presentations in this special educational session. [Work supported by NIA-NIH Grant P01 AG09524 and the International Center for Hearing & Speech Research, Rochester, NY.]

  5. An auditory brain-computer interface evoked by natural speech

    NASA Astrophysics Data System (ADS)

    Lopez-Gordo, M. A.; Fernandez, E.; Romero, S.; Pelayo, F.; Prieto, Alberto

    2012-06-01

Brain-computer interfaces (BCIs) are mainly intended for people unable to perform any muscular movement, such as patients in a complete locked-in state. The majority of BCIs interact visually with the user, either in the form of stimulation or biofeedback. However, the reliance of visual BCIs on eye movements limits their ultimate use, because they require subjects to gaze, explore and shift eye-gaze using their muscles, thus excluding patients in a complete locked-in state or with unresponsive wakefulness syndrome. In this study, we present a novel fully auditory EEG-BCI based on a dichotic listening paradigm using human voice for stimulation. This interface has been evaluated with healthy volunteers, achieving an average information transmission rate of 1.5 bits/min in full-length trials and 2.7 bits/min using the optimal length of trials, recorded with only one channel and without formal training. This novel technique opens the door to a more natural communication with users unable to use visual BCIs, with promising results in terms of performance, usability, training and cognitive effort.
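BCI information transmission rates like the 1.5 and 2.7 bits/min reported here are conventionally computed with the Wolpaw formula from the number of choices, the classification accuracy, and the trial rate. A sketch (the accuracy and trial-rate values in the example are illustrative, not taken from the study):

```python
import math

def itr_bits_per_min(n_choices, accuracy, trials_per_min):
    """Wolpaw information transfer rate, in bits/min, for 0 < accuracy <= 1.

    Bits per selection: log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)),
    scaled by the number of selections per minute.
    """
    p, n = accuracy, n_choices
    bits = math.log2(n)
    if 0.0 < p < 1.0:
        bits += p * math.log2(p) + (1.0 - p) * math.log2((1.0 - p) / (n - 1))
    return bits * trials_per_min

# e.g. a binary dichotic-listening choice at 90% accuracy, 5 trials/min:
rate = itr_bits_per_min(2, 0.90, 5)
```

At chance accuracy (50% for two choices) the rate is zero, which is why shortening trials only helps up to the point where accuracy starts to collapse, the trade-off behind the study's "optimal length of trials".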

  6. Understanding perception of active noise control system through multichannel EEG analysis.

    PubMed

    Bagha, Sangeeta; Tripathy, R K; Nanda, Pranati; Preetam, C; Das, Debi Prasad

    2018-06-01

In this Letter, a method is proposed to investigate the effect of noise with and without active noise control (ANC) on multichannel electroencephalogram (EEG) signal. The multichannel EEG signal is recorded during different listening conditions such as silent, music, noise, ANC with background noise and ANC with both background noise and music. The multiscale analysis of EEG signal of each channel is performed using the discrete wavelet transform. The multivariate multiscale matrices are formulated based on the sub-band signals of each EEG channel. The singular value decomposition is applied to the multivariate matrices of multichannel EEG at significant scales. The singular value features at significant scales and the extreme learning machine classifier with three different activation functions are used for classification of multichannel EEG signal. The experimental results demonstrate that, for ANC with noise and ANC with noise and music classes, the proposed method has sensitivity values of 75.831% (p < 0.001) and 99.31% (p < 0.001), respectively. The method has an accuracy value of 83.22% for the classification of EEG signal with music and ANC with music as stimuli. The important finding of this study is that by the introduction of ANC, music can be better perceived by the human brain.
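The feature-extraction chain described here (per-channel wavelet sub-bands, a multivariate channels-by-coefficients matrix per scale, singular values as features) can be sketched as follows. This is a simplified illustration using a Haar wavelet and all scales; the Letter's wavelet choice, "significant scales" selection, and the extreme learning machine classifier are omitted:

```python
import numpy as np

def haar_dwt_details(x, levels):
    """Multilevel Haar DWT; returns the detail sub-band at each level."""
    details = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        pairs = a[: len(a) // 2 * 2].reshape(-1, 2)
        details.append((pairs[:, 0] - pairs[:, 1]) / np.sqrt(2))
        a = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)   # approximation
    return details

def singular_value_features(eeg, levels=3):
    """Per-scale singular values of the (channels x coefficients) matrix."""
    feats = []
    for lvl in range(levels):
        m = np.vstack([haar_dwt_details(ch, levels)[lvl] for ch in eeg])
        feats.extend(np.linalg.svd(m, compute_uv=False))
    return np.array(feats)

rng = np.random.default_rng(2)
eeg = rng.standard_normal((8, 512))        # 8 channels, 512 samples
features = singular_value_features(eeg)    # 8 singular values x 3 scales
```

The singular values summarize how strongly the channels covary within each frequency band, which is what lets a downstream classifier separate the listening conditions.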

  7. Flat Engineered Multichannel Reflectors

    NASA Astrophysics Data System (ADS)

    Asadchy, V. S.; Díaz-Rubio, A.; Tcvetkova, S. N.; Kwon, D.-H.; Elsakka, A.; Albooyeh, M.; Tretyakov, S. A.

    2017-07-01

    Recent advances in engineered gradient metasurfaces have enabled unprecedented opportunities for light manipulation using optically thin sheets, such as anomalous refraction, reflection, or focusing of an incident beam. Here, we introduce a concept of multichannel functional metasurfaces, which are able to control incoming and outgoing waves in a number of propagation directions simultaneously. In particular, we reveal a possibility to engineer multichannel reflectors. Under the assumption of reciprocity and energy conservation, we find that there exist three basic functionalities of such reflectors: specular, anomalous, and retroreflections. Multichannel response of a general flat reflector can be described by a combination of these functionalities. To demonstrate the potential of the introduced concept, we design and experimentally test three different multichannel reflectors: three- and five-channel retroreflectors and a three-channel power splitter. Furthermore, by extending the concept to reflectors supporting higher-order Floquet harmonics, we forecast the emergence of other multichannel flat devices, such as isolating mirrors, complex splitters, and multi-functional gratings.

  8. McGurk illusion recalibrates subsequent auditory perception

    PubMed Central

    Lüttke, Claudia S.; Ekman, Matthias; van Gerven, Marcel A. J.; de Lange, Floris P.

    2016-01-01

Visual information can alter auditory perception. This is clearly illustrated by the well-known McGurk illusion, where an auditory /aba/ and a visual /aga/ are merged to the percept of ‘ada’. It is less clear however whether such a change in perception may recalibrate subsequent perception. Here we asked whether the altered auditory perception due to the McGurk illusion affects subsequent auditory perception, i.e. whether this process of fusion may cause a recalibration of the auditory boundaries between phonemes. Participants categorized auditory and audiovisual speech stimuli as /aba/, /ada/ or /aga/ while activity patterns in their auditory cortices were recorded using fMRI. Interestingly, following a McGurk illusion, an auditory /aba/ was more often misperceived as ‘ada’. Furthermore, we observed a neural counterpart of this recalibration in the early auditory cortex. When the auditory input /aba/ was perceived as ‘ada’, activity patterns bore stronger resemblance to activity patterns elicited by /ada/ sounds than when they were correctly perceived as /aba/. Our results suggest that upon experiencing the McGurk illusion, the brain shifts the neural representation of an /aba/ sound towards /ada/, culminating in a recalibration in perception of subsequent auditory input. PMID:27611960

  9. Grey matter connectivity within and between auditory, language and visual systems in prelingually deaf adolescents.

    PubMed

    Li, Wenjing; Li, Jianhong; Wang, Zhenchang; Li, Yong; Liu, Zhaohui; Yan, Fei; Xian, Junfang; He, Huiguang

    2015-01-01

Previous studies have shown brain reorganizations after early deprivation of auditory sensory input. However, changes of grey matter connectivity have not yet been investigated in prelingually deaf adolescents. In the present study, we aimed to investigate changes of grey matter connectivity within and between auditory, language and visual systems in prelingually deaf adolescents. We recruited 16 prelingually deaf adolescents and 16 age- and gender-matched normal controls, and extracted the grey matter volume as the structural characteristic from 14 regions of interest involved in auditory, language or visual processing to investigate the changes of grey matter connectivity within and between auditory, language and visual systems. Sparse inverse covariance estimation (SICE) was utilized to construct grey matter connectivity between these brain regions. The results show that prelingually deaf adolescents present weaker grey matter connectivity within auditory and visual systems, and that connectivity between language and visual systems declined. Notably, significantly increased brain connectivity was found between auditory and visual systems in prelingually deaf adolescents. Our results indicate "cross-modal" plasticity after deprivation of the auditory input in prelingually deaf adolescents, especially between auditory and visual systems. In addition, auditory deprivation and visual deficits might affect the connectivity pattern within language and visual systems in prelingually deaf adolescents.
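The study estimates the connectivity graph with sparse inverse covariance estimation (SICE, i.e. the graphical lasso). As a rough stand-in that avoids an external solver, the sketch below computes a ridge-regularized precision matrix from per-subject regional grey matter volumes, converts it to partial correlations, and thresholds it to a sparse graph; real SICE enforces sparsity during estimation rather than by post-hoc thresholding, so treat this only as a sketch of the idea:

```python
import numpy as np

def sparse_precision(volumes, ridge=0.1, threshold=0.1):
    """Stand-in for SICE: partial correlations from a regularized precision
    matrix, thresholded into a binary connectivity graph.

    volumes: (subjects x ROIs) grey matter volumes.
    """
    x = volumes - volumes.mean(axis=0)
    cov = x.T @ x / len(x)
    prec = np.linalg.inv(cov + ridge * np.eye(cov.shape[0]))
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)          # partial correlations
    np.fill_diagonal(pcorr, 1.0)
    adj = (np.abs(pcorr) > threshold) & ~np.eye(len(d), dtype=bool)
    return pcorr, adj

rng = np.random.default_rng(3)
vols = rng.standard_normal((32, 14))        # e.g. 32 subjects x 14 ROIs
pcorr, adj = sparse_precision(vols)
```

Partial correlations (off-diagonal entries of the scaled precision matrix) capture *direct* coupling between two regions after conditioning on all others, which is why the inverse covariance, rather than the plain correlation matrix, is the natural object for this kind of connectivity analysis.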

  10. Complications of pediatric auditory brain stem implantation via retrosigmoid approach.

    PubMed

    Bayazit, Yildirim A; Abaday, Ayça; Dogulu, Fikret; Göksu, Nebil

    2011-01-01

We aimed to present the complications of auditory brain stem implantations (ABI) in pediatric patients that were performed via the retrosigmoid approach. Between March 2007 and February 2010, five prelingually deaf children underwent ABI (Medel device) operation via the retrosigmoid approach. All children had severe cochlear malformations. The ages ranged from 20 months to 5 years. The perioperative complications encountered in 2 patients were evaluated retrospectively. No intraoperative complication was observed in the patients. Cerebrospinal fluid (CSF) leakage was the most common postoperative complication and was seen in 2 patients. The CSF leak triggered a cascade of comorbidities and prolonged hospitalization. Pediatric ABI surgery can lead to morbidity. The CSF leak is the most common complication encountered in the retrosigmoid approach. The other complications usually result from long-term hospital stay during the treatment period of the CSF leak. Therefore, every attempt must be made to prevent the occurrence of CSF leaks in pediatric ABI operations. Copyright © 2011 S. Karger AG, Basel.

  11. Neural Entrainment to Auditory Imagery of Rhythms.

    PubMed

    Okawa, Haruki; Suefusa, Kaori; Tanaka, Toshihisa

    2017-01-01

    A method of reconstructing perceived or imagined music by analyzing brain activity has not yet been established. As a first step toward developing such a method, we aimed to reconstruct the imagery of rhythm, which is one element of music. It has been reported that a periodic electroencephalogram (EEG) response is elicited while a human imagines a binary or ternary meter on a musical beat. However, it is not clear whether or not brain activity synchronizes with fully imagined beat and meter without auditory stimuli. To investigate neural entrainment to imagined rhythm during auditory imagery of beat and meter, we recorded EEG while nine participants (eight males and one female) imagined three types of rhythm without auditory stimuli but with visual timing, and then we analyzed the amplitude spectra of the EEG. We also recorded EEG while the participants only gazed at the visual timing as a control condition to confirm the visual effect. Furthermore, we derived features of the EEG using canonical correlation analysis (CCA) and conducted an experiment to individually classify the three types of imagined rhythm from the EEG. The results showed that classification accuracies exceeded the chance level in all participants. These results suggest that auditory imagery of meter elicits a periodic EEG response that changes at the imagined beat and meter frequency even in the fully imagined conditions. This study represents the first step toward the realization of a method for reconstructing the imagined music from brain activity.
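The CCA-based classification described above can be sketched by correlating the multichannel EEG with sine/cosine reference sets at each candidate beat frequency and picking the best-matching set, a standard CCA recipe also used in SSVEP decoding. The frequencies and the simulated, entrained EEG below are illustrative assumptions, not the study's stimuli or data:

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(qx.T @ qy, compute_uv=False)
    return float(min(1.0, s[0]))

def rhythm_references(freq, fs, n, harmonics=2):
    """Sine/cosine reference set at an imagined-beat frequency and harmonics."""
    t = np.arange(n) / fs
    cols = []
    for h in range(1, harmonics + 1):
        cols += [np.sin(2 * np.pi * h * freq * t),
                 np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(cols)

# Simulated 4-channel EEG entrained to an imagined 2 Hz beat.
fs, n = 256, 1024
t = np.arange(n) / fs
rng = np.random.default_rng(4)
eeg = np.column_stack([np.sin(2 * np.pi * 2.0 * t) + 0.5 * rng.standard_normal(n)
                       for _ in range(4)])
scores = {f: max_canonical_corr(eeg, rhythm_references(f, fs, n))
          for f in (1.5, 2.0, 3.0)}
best = max(scores, key=scores.get)
```

CCA finds the spatial filter (linear combination of channels) that maximally correlates with the reference set, so a weak periodic response distributed across electrodes can still yield a clear winner at the imagined-beat frequency.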

  12. Nonverbal auditory agnosia with lesion to Wernicke's area.

    PubMed

    Saygin, Ayse Pinar; Leech, Robert; Dick, Frederic

    2010-01-01

    We report the case of patient M, who suffered unilateral left posterior temporal and parietal damage, brain regions typically associated with language processing. Language function largely recovered since the infarct, with no measurable speech comprehension impairments. However, the patient exhibited a severe impairment in nonverbal auditory comprehension. We carried out extensive audiological and behavioral testing in order to characterize M's unusual neuropsychological profile. We also examined the patient's and controls' neural responses to verbal and nonverbal auditory stimuli using functional magnetic resonance imaging (fMRI). We verified that the patient exhibited persistent and severe auditory agnosia for nonverbal sounds in the absence of verbal comprehension deficits or peripheral hearing problems. Acoustical analyses suggested that his residual processing of a minority of environmental sounds might rely on his speech processing abilities. In the patient's brain, contralateral (right) temporal cortex as well as perilesional (left) anterior temporal cortex were strongly responsive to verbal, but not to nonverbal sounds, a pattern that stands in marked contrast to the controls' data. This substantial reorganization of auditory processing likely supported the recovery of M's speech processing.

  13. Auditory brain stem response and cortical evoked potentials in children with type 1 diabetes mellitus.

    PubMed

    Radwan, Heba Mohammed; El-Gharib, Amani Mohamed; Erfan, Adel Ali; Emara, Afaf Ahmad

    2017-05-01

Delay in ABR and CAEPs wave latencies in children with type 1 DM indicates that there is abnormality in the neural conduction in DM patients. The duration of DM has a greater effect on auditory function than the control of DM. Diabetes mellitus (DM) is a common endocrine and metabolic disorder. Evoked potentials offer the possibility to perform a functional evaluation of neural pathways in the central nervous system. To investigate the effect of type 1 diabetes mellitus (T1DM) on auditory brain stem response (ABR) and cortical evoked potentials (CAEPs), this study included two groups: a control group (GI), which consisted of 20 healthy children with normal peripheral hearing, and a study group (GII), which consisted of 30 children with type 1 DM. Basic audiological evaluation, ABR, and CAEPs were done in both groups. Delayed absolute latencies of ABR and CAEPs waves were found. Amplitudes showed no significant difference between both groups. A positive correlation was found between ABR wave latencies and duration of DM. No correlation was found between ABR, CAEPs, and glycated hemoglobin.

  14. Frontal top-down signals increase coupling of auditory low-frequency oscillations to continuous speech in human listeners.

    PubMed

    Park, Hyojin; Ince, Robin A A; Schyns, Philippe G; Thut, Gregor; Gross, Joachim

    2015-06-15

    Humans show a remarkable ability to understand continuous speech even under adverse listening conditions. This ability critically relies on dynamically updated predictions of incoming sensory information, but exactly how top-down predictions improve speech processing is still unclear. Brain oscillations are a likely mechanism for these top-down predictions [1, 2]. Quasi-rhythmic components in speech are known to entrain low-frequency oscillations in auditory areas [3, 4], and this entrainment increases with intelligibility [5]. We hypothesize that top-down signals from frontal brain areas causally modulate the phase of brain oscillations in auditory cortex. We use magnetoencephalography (MEG) to monitor brain oscillations in 22 participants during continuous speech perception. We characterize prominent spectral components of speech-brain coupling in auditory cortex and use causal connectivity analysis (transfer entropy) to identify the top-down signals driving this coupling more strongly during intelligible speech than during unintelligible speech. We report three main findings. First, frontal and motor cortices significantly modulate the phase of speech-coupled low-frequency oscillations in auditory cortex, and this effect depends on intelligibility of speech. Second, top-down signals are significantly stronger for left auditory cortex than for right auditory cortex. Third, speech-auditory cortex coupling is enhanced as a function of stronger top-down signals. Together, our results suggest that low-frequency brain oscillations play a role in implementing predictive top-down control during continuous speech perception and that top-down control is largely directed at left auditory cortex. This suggests a close relationship between (left-lateralized) speech production areas and the implementation of top-down control in continuous speech perception. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.

  15. Frontal Top-Down Signals Increase Coupling of Auditory Low-Frequency Oscillations to Continuous Speech in Human Listeners

    PubMed Central

    Park, Hyojin; Ince, Robin A.A.; Schyns, Philippe G.; Thut, Gregor; Gross, Joachim

    2015-01-01

Humans show a remarkable ability to understand continuous speech even under adverse listening conditions. This ability critically relies on dynamically updated predictions of incoming sensory information, but exactly how top-down predictions improve speech processing is still unclear. Brain oscillations are a likely mechanism for these top-down predictions [1, 2]. Quasi-rhythmic components in speech are known to entrain low-frequency oscillations in auditory areas [3, 4], and this entrainment increases with intelligibility [5]. We hypothesize that top-down signals from frontal brain areas causally modulate the phase of brain oscillations in auditory cortex. We use magnetoencephalography (MEG) to monitor brain oscillations in 22 participants during continuous speech perception. We characterize prominent spectral components of speech-brain coupling in auditory cortex and use causal connectivity analysis (transfer entropy) to identify the top-down signals driving this coupling more strongly during intelligible speech than during unintelligible speech. We report three main findings. First, frontal and motor cortices significantly modulate the phase of speech-coupled low-frequency oscillations in auditory cortex, and this effect depends on intelligibility of speech. Second, top-down signals are significantly stronger for left auditory cortex than for right auditory cortex. Third, speech-auditory cortex coupling is enhanced as a function of stronger top-down signals. Together, our results suggest that low-frequency brain oscillations play a role in implementing predictive top-down control during continuous speech perception and that top-down control is largely directed at left auditory cortex. This suggests a close relationship between (left-lateralized) speech production areas and the implementation of top-down control in continuous speech perception. PMID:26028433

  16. Electrophysiological measurement of human auditory function

    NASA Technical Reports Server (NTRS)

    Galambos, R.

    1975-01-01

Contingent negative variations in the presence and amplitudes of brain potentials evoked by sound are considered. Evidence is presented that evoked brain stem responses to auditory stimuli are clearly related to brain events associated with cognitive processing of acoustic signals, since their properties depend upon where the listener directs his attention, whether the signal is an expected event or a surprise, and when a listened-for sound is finally heard.

  17. A three-wavelength multi-channel brain functional imager based on digital lock-in photon-counting technique

    NASA Astrophysics Data System (ADS)

    Ding, Xuemei; Wang, Bingyuan; Liu, Dongyuan; Zhang, Yao; He, Jie; Zhao, Huijuan; Gao, Feng

    2018-02-01

During the past two decades there has been a dramatic rise in the use of functional near-infrared spectroscopy (fNIRS) as a neuroimaging technique in cognitive neuroscience research. Diffuse optical tomography (DOT) and optical topography (OT) can be employed as the optical imaging techniques for brain activity investigation. However, most current imagers with analogue detection are limited by sensitivity and dynamic range. Although photon-counting detection can significantly improve detection sensitivity, the intrinsic nature of sequential excitations reduces temporal resolution. To improve temporal resolution, sensitivity and dynamic range, we develop a multi-channel continuous-wave (CW) system for brain functional imaging based on a novel lock-in photon-counting technique. The system consists of 60 light-emitting diode (LED) sources at three wavelengths of 660 nm, 780 nm and 830 nm, which are modulated by current-stabilized square-wave signals at different frequencies, and 12 photomultiplier tubes (PMTs) based on the lock-in photon-counting technique. This design combines the ultra-high sensitivity of the photon-counting technique with the parallelism of the digital lock-in technique. We can therefore acquire the diffused light intensity for all the source-detector pairs (SD-pairs) in parallel. The performance assessments of the system are conducted using phantom experiments, and demonstrate its excellent measurement linearity, negligible inter-channel crosstalk, strong noise robustness and high temporal resolution.
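The lock-in photon-counting idea, in which each source is square-wave modulated at its own frequency so that all source-detector pairs can be demodulated from a single count stream in parallel, can be sketched as follows. All frequencies, rates, and count levels are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

fs = 10_000                      # detector sampling rate (Hz), 1 s of data
t = np.arange(fs) / fs
f1, f2 = 170.0, 230.0            # per-source modulation frequencies (Hz)
ref1 = (np.sign(np.sin(2 * np.pi * f1 * t)) + 1) / 2   # 0/1 on-off square waves
ref2 = (np.sign(np.sin(2 * np.pi * f2 * t)) + 1) / 2

# Mean photon counts per sample: two modulated sources plus flat background.
a1, a2, bg = 3.0, 1.0, 0.5
rng = np.random.default_rng(5)
counts = rng.poisson(a1 * ref1 + a2 * ref2 + bg)   # shot-noise-limited stream

def lockin(counts, ref):
    """Digital lock-in: project the count stream onto the zero-mean reference."""
    r = ref - ref.mean()
    return float(np.dot(counts, r) / np.dot(r, r))

est1, est2 = lockin(counts, ref1), lockin(counts, ref2)   # per-source intensity
```

Because each reference is zero-mean, the unmodulated background drops out of the projection, and with well-separated modulation frequencies the cross-talk between sources is negligible, which is what allows all sources to stay on simultaneously instead of being excited sequentially.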

  18. Habituation deficit of auditory N100m in patients with fibromyalgia.

    PubMed

    Choi, W; Lim, M; Kim, J S; Chung, C K

    2016-11-01

Habituation refers to the brain's inhibitory mechanism against sensory overload, and its brain correlate has been investigated in the form of a well-defined event-related potential, N100 (N1). Fibromyalgia is an extensively described chronic pain syndrome with concurrent manifestations of reduced tolerance and enhanced sensation of painful and non-painful stimulation, suggesting an association with central amplification of all sensory domains. Among diverse sensory modalities, we utilized repetitive auditory stimulation to explore the anomalous sensory information processing in fibromyalgia as evidenced by N1 habituation. Auditory N1 was assessed in 19 fibromyalgia patients and 21 age-, education- and gender-matched healthy control subjects under the duration-deviant passive oddball paradigm and magnetoencephalography recording. The brain signals of the first standard stimulus (following each deviant) and the last standard stimulus (preceding each deviant) were analysed to identify N1 responses. N1 amplitude difference and adjusted amplitude ratio were computed as habituation indices. Fibromyalgia patients showed lower N1 amplitude difference (left hemisphere: p = 0.004; right hemisphere: p = 0.034) and adjusted N1 amplitude ratio (left hemisphere: p = 0.001; right hemisphere: p = 0.052) than healthy control subjects, indicating deficient auditory habituation. Further, an augmented N1 amplitude pattern (p = 0.029) during stimulus repetition was observed in fibromyalgia patients. Fibromyalgia patients failed to demonstrate auditory N1 habituation to repetitively presented stimuli, which indicates their compromised early auditory information processing. Our findings provide neurophysiological evidence of inhibitory failure and cortical augmentation in fibromyalgia. WHAT'S ALREADY KNOWN ABOUT THIS TOPIC?: Fibromyalgia has been associated with altered filtering of irrelevant somatosensory input. However, whether this abnormality can extend to the auditory sensory
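The habituation indices named here are an N1 amplitude difference and an adjusted amplitude ratio. The abstract does not define them, so the formulas in this sketch (first-minus-last difference, and a difference-over-sum ratio) are plausible stand-ins labeled as such, not the paper's definitions:

```python
import numpy as np

def habituation_indices(first_amps, last_amps):
    """Illustrative per-subject N1 habituation indices.

    first_amps / last_amps: N1 amplitudes to the first and last standard
    stimulus in each train. ASSUMED definitions: the paper's exact
    formulas may differ.
    """
    first = float(np.mean(first_amps))
    last = float(np.mean(last_amps))
    amp_diff = first - last                        # habituation -> positive
    adj_ratio = (first - last) / (first + last)    # bounded in [-1, 1]
    return amp_diff, adj_ratio

# A habituating subject: responses shrink with stimulus repetition.
d, r = habituation_indices([10.2, 9.8, 10.5], [6.1, 5.9, 6.3])
```

Under these definitions a habituating subject yields positive indices, while the fibromyalgia pattern reported above (responses that fail to shrink, or even grow) would push both indices toward zero or below.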

  19. Estrogenic modulation of auditory processing: a vertebrate comparison

    PubMed Central

    Caras, Melissa L.

    2013-01-01

    Sex-steroid hormones are well-known regulators of vocal motor behavior in several organisms. A large body of evidence now indicates that these same hormones modulate processing at multiple levels of the ascending auditory pathway. The goal of this review is to provide a comparative analysis of the role of estrogens in vertebrate auditory function. Four major conclusions can be drawn from the literature: First, estrogens may influence the development of the mammalian auditory system. Second, estrogenic signaling protects the mammalian auditory system from noise- and age-related damage. Third, estrogens optimize auditory processing during periods of reproductive readiness in multiple vertebrate lineages. Finally, brain-derived estrogens can act locally to enhance auditory response properties in at least one avian species. This comparative examination may lead to a better appreciation of the role of estrogens in the processing of natural vocalizations and may provide useful insights toward alleviating auditory dysfunctions emanating from hormonal imbalances. PMID:23911849

  20. Neural Biomarkers for Dyslexia, ADHD, and ADD in the Auditory Cortex of Children.

    PubMed

    Serrallach, Bettina; Groß, Christine; Bernhofs, Valdis; Engelmann, Dorte; Benner, Jan; Gündert, Nadine; Blatow, Maria; Wengenroth, Martina; Seitz, Angelika; Brunner, Monika; Seither, Stefan; Parncutt, Richard; Schneider, Peter; Seither-Preisler, Annemarie

    2016-01-01

    Dyslexia, attention deficit hyperactivity disorder (ADHD), and attention deficit disorder (ADD) show distinct clinical profiles that may include auditory and language-related impairments. Currently, an objective brain-based diagnosis of these developmental disorders is still unavailable. We investigated the neuro-auditory systems of dyslexic, ADHD, ADD, and age-matched control children (N = 147) using neuroimaging, magnetoencephalography, and psychoacoustics. All disorder subgroups exhibited an oversized left planum temporale and an abnormal interhemispheric asynchrony (10-40 ms) of the primary auditory evoked P1-response. Considering right auditory cortex morphology, bilateral P1 source waveform shapes, and auditory performance, the three disorder subgroups could be reliably differentiated with outstanding accuracies of 89-98%. We therefore for the first time provide differential biomarkers for a brain-based diagnosis of dyslexia, ADHD, and ADD. The method not only allowed for clear discrimination between two subtypes of attentional disorders (ADHD and ADD), a topic controversially discussed in the scientific community for decades, but also revealed the potential for objectively identifying comorbid cases. Notably, in children playing a musical instrument, the observed interhemispheric asynchronies were reduced by about two-thirds after three and a half years of training, suggesting a strong beneficial influence of musical experience on brain development. These findings might have far-reaching implications for both research and practice and enable a profound understanding of the brain-related etiology, diagnosis, and musically based therapy of common auditory-related developmental disorders and learning disabilities.

  1. Neural Biomarkers for Dyslexia, ADHD, and ADD in the Auditory Cortex of Children

    PubMed Central

    Serrallach, Bettina; Groß, Christine; Bernhofs, Valdis; Engelmann, Dorte; Benner, Jan; Gündert, Nadine; Blatow, Maria; Wengenroth, Martina; Seitz, Angelika; Brunner, Monika; Seither, Stefan; Parncutt, Richard; Schneider, Peter; Seither-Preisler, Annemarie

    2016-01-01

    Dyslexia, attention deficit hyperactivity disorder (ADHD), and attention deficit disorder (ADD) show distinct clinical profiles that may include auditory and language-related impairments. Currently, an objective brain-based diagnosis of these developmental disorders is still unavailable. We investigated the neuro-auditory systems of dyslexic, ADHD, ADD, and age-matched control children (N = 147) using neuroimaging, magnetoencephalography, and psychoacoustics. All disorder subgroups exhibited an oversized left planum temporale and an abnormal interhemispheric asynchrony (10–40 ms) of the primary auditory evoked P1-response. Considering right auditory cortex morphology, bilateral P1 source waveform shapes, and auditory performance, the three disorder subgroups could be reliably differentiated with outstanding accuracies of 89–98%. We therefore for the first time provide differential biomarkers for a brain-based diagnosis of dyslexia, ADHD, and ADD. The method not only allowed for clear discrimination between two subtypes of attentional disorders (ADHD and ADD), a topic controversially discussed in the scientific community for decades, but also revealed the potential for objectively identifying comorbid cases. Notably, in children playing a musical instrument, the observed interhemispheric asynchronies were reduced by about two-thirds after three and a half years of training, suggesting a strong beneficial influence of musical experience on brain development. These findings might have far-reaching implications for both research and practice and enable a profound understanding of the brain-related etiology, diagnosis, and musically based therapy of common auditory-related developmental disorders and learning disabilities. PMID:27471442

  2. Neurotrophic factor intervention restores auditory function in deafened animals

    NASA Astrophysics Data System (ADS)

    Shinohara, Takayuki; Bredberg, Göran; Ulfendahl, Mats; Pyykkö, Ilmari; Petri Olivius, N.; Kaksonen, Risto; Lindström, Bo; Altschuler, Richard; Miller, Josef M.

    2002-02-01

    A primary cause of deafness is damage of receptor cells in the inner ear. Clinically, it has been demonstrated that effective functionality can be provided by electrical stimulation of the auditory nerve, thus bypassing damaged receptor cells. However, subsequent to sensory cell loss there is a secondary degeneration of the afferent nerve fibers, resulting in reduced effectiveness of such cochlear prostheses. The effects of neurotrophic factors were tested in a guinea pig cochlear prosthesis model. After chemical deafening to mimic the clinical situation, the neurotrophic factors brain-derived neurotrophic factor and an analogue of ciliary neurotrophic factor were infused directly into the cochlea of the inner ear for 26 days by using an osmotic pump system. An electrode introduced into the cochlea was used to elicit auditory responses just as in patients implanted with cochlear prostheses. Intervention with brain-derived neurotrophic factor and the ciliary neurotrophic factor analogue not only increased the survival of auditory spiral ganglion neurons, but significantly enhanced the functional responsiveness of the auditory system as measured by using electrically evoked auditory brainstem responses. This demonstration that neurotrophin intervention enhances threshold sensitivity within the auditory system will have great clinical importance for the treatment of deaf patients with cochlear prostheses. The findings have direct implications for the enhancement of responsiveness in deafferented peripheral nerves.

  3. Bilinguals at the "cocktail party": dissociable neural activity in auditory-linguistic brain regions reveals neurobiological basis for nonnative listeners' speech-in-noise recognition deficits.

    PubMed

    Bidelman, Gavin M; Dexter, Lauren

    2015-04-01

    We examined a consistent deficit observed in bilinguals: poorer speech-in-noise (SIN) comprehension for their nonnative language. We recorded neuroelectric mismatch potentials in mono- and bi-lingual listeners in response to contrastive speech sounds in noise. Behaviorally, late bilinguals required ∼10dB more favorable signal-to-noise ratios to match monolinguals' SIN abilities. Source analysis of cortical activity demonstrated monotonic increase in response latency with noise in superior temporal gyrus (STG) for both groups, suggesting parallel degradation of speech representations in auditory cortex. Contrastively, we found differential speech encoding between groups within inferior frontal gyrus (IFG)-adjacent to Broca's area-where noise delays observed in nonnative listeners were offset in monolinguals. Notably, brain-behavior correspondences double dissociated between language groups: STG activation predicted bilinguals' SIN, whereas IFG activation predicted monolinguals' performance. We infer higher-order brain areas act compensatorily to enhance impoverished sensory representations but only when degraded speech recruits linguistic brain mechanisms downstream from initial auditory-sensory inputs. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. A novel hybrid auditory BCI paradigm combining ASSR and P300.

    PubMed

    Kaongoen, Netiwit; Jo, Sungho

    2017-03-01

    Brain-computer interface (BCI) is a technology that provides an alternative means of communication by translating brain activities into digital commands. Because vision-dependent BCIs cannot be used by patients with visual impairment, auditory stimuli have been used to substitute for the conventional visual stimuli. This paper introduces a hybrid auditory BCI that combines the auditory steady-state response (ASSR) and spatial-auditory P300 BCI to improve the performance of auditory BCI systems. The system works by simultaneously presenting auditory stimuli with different pitches and amplitude modulation (AM) frequencies to the user, with beep sounds occurring randomly among all sound sources. Attention to different auditory stimuli yields different ASSRs, and beep sounds trigger the P300 response when they occur in the target channel, so the system can use both features for classification. The proposed ASSR/P300 hybrid auditory BCI achieves 85.33% accuracy with a 9.11 bits/min information transfer rate (ITR) in a binary classification problem, outperforming both a P300 BCI (74.58% accuracy, 4.18 bits/min ITR) and an ASSR BCI (66.68% accuracy, 2.01 bits/min ITR) on the same binary-class problem. The system is completely vision-independent. This work demonstrates that combining ASSR and P300 into a hybrid system can yield better performance and could aid the development of future auditory BCIs. Copyright © 2017 Elsevier B.V. All rights reserved.
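
    The bits/min figures quoted above are conventionally computed with the Wolpaw information transfer rate formula; the sketch below reproduces it. The trial duration used in the example is an assumption for illustration, not a value reported by the study.

```python
import math

def wolpaw_itr(n_classes, accuracy, trial_sec):
    """Information transfer rate in bits/min via the standard Wolpaw
    formula -- the usual metric behind 'bits/min' figures in BCI work.
    Intended for accuracies at or above chance."""
    p, n = accuracy, n_classes
    bits = math.log2(n)  # bits per perfectly classified selection
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * (60.0 / trial_sec)

# Binary selection at 85.33% accuracy; the 2.6 s trial length is a
# hypothetical value chosen only to illustrate the bits/min scale:
rate = wolpaw_itr(2, 0.8533, 2.6)
```

At chance accuracy (50% for two classes) the formula correctly yields 0 bits/min regardless of trial length.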

  5. An anatomical and functional topography of human auditory cortical areas

    PubMed Central

    Moerel, Michelle; De Martino, Federico; Formisano, Elia

    2014-01-01

    While advances in magnetic resonance imaging (MRI) throughout the last decades have enabled the detailed anatomical and functional inspection of the human brain non-invasively, to date there is no consensus regarding the precise subdivision and topography of the areas forming the human auditory cortex. Here, we propose a topography of the human auditory areas based on insights on the anatomical and functional properties of human auditory areas as revealed by studies of cyto- and myelo-architecture and fMRI investigations at ultra-high magnetic field (7 Tesla). Importantly, we illustrate that—whereas a group-based approach to analyze functional (tonotopic) maps is appropriate to highlight the main tonotopic axis—the examination of tonotopic maps at single subject level is required to detail the topography of primary and non-primary areas that may be more variable across subjects. Furthermore, we show that considering multiple maps indicative of anatomical (i.e., myelination) as well as of functional properties (e.g., broadness of frequency tuning) is helpful in identifying auditory cortical areas in individual human brains. We propose and discuss a topography of areas that is consistent with old and recent anatomical post-mortem characterizations of the human auditory cortex and that may serve as a working model for neuroscience studies of auditory functions. PMID:25120426

  6. Auditory Hallucinations and the Brain's Resting-State Networks: Findings and Methodological Observations.

    PubMed

    Alderson-Day, Ben; Diederen, Kelly; Fernyhough, Charles; Ford, Judith M; Horga, Guillermo; Margulies, Daniel S; McCarthy-Jones, Simon; Northoff, Georg; Shine, James M; Turner, Jessica; van de Ven, Vincent; van Lutterveld, Remko; Waters, Flavie; Jardri, Renaud

    2016-09-01

    In recent years, there has been increasing interest in the potential for alterations to the brain's resting-state networks (RSNs) to explain various kinds of psychopathology. RSNs provide an intriguing new explanatory framework for hallucinations, which can occur in different modalities and population groups, but which remain poorly understood. This collaboration from the International Consortium on Hallucination Research (ICHR) reports on the evidence linking resting-state alterations to auditory hallucinations (AH) and provides a critical appraisal of the methodological approaches used in this area. In the report, we describe findings from resting connectivity fMRI in AH (in schizophrenia and nonclinical individuals) and compare them with findings from neurophysiological research, structural MRI, and research on visual hallucinations (VH). In AH, various studies show resting connectivity differences in left-hemisphere auditory and language regions, as well as atypical interaction of the default mode network and RSNs linked to cognitive control and salience. As the latter are also evident in studies of VH, this points to a domain-general mechanism for hallucinations alongside modality-specific changes to RSNs in different sensory regions. However, we also observed high methodological heterogeneity in the current literature, affecting the ability to make clear comparisons between studies. To address this, we provide some methodological recommendations and options for future research on the resting state and hallucinations. © The Author 2016. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center.

  7. Plasticity in neuromagnetic cortical responses suggests enhanced auditory object representation

    PubMed Central

    2013-01-01

    Background Auditory perceptual learning persistently modifies neural networks in the central nervous system. Central auditory processing comprises a hierarchy of sound analysis and integration, which transforms an acoustical signal into a meaningful object for perception. Based on latencies and source locations of auditory evoked responses, we investigated which stage of central processing undergoes neuroplastic changes when gaining auditory experience during passive listening and active perceptual training. Young healthy volunteers participated in a five-day training program to identify two pre-voiced versions of the stop-consonant syllable ‘ba’, which is an unusual speech sound to English listeners. Magnetoencephalographic (MEG) brain responses were recorded during two pre-training and one post-training sessions. Underlying cortical sources were localized, and the temporal dynamics of auditory evoked responses were analyzed. Results After both passive listening and active training, the amplitude of the P2m wave with latency of 200 ms increased considerably. By this latency, the integration of stimulus features into an auditory object for further conscious perception is considered to be complete. Therefore the P2m changes were discussed in the light of auditory object representation. Moreover, P2m sources were localized in anterior auditory association cortex, which is part of the antero-ventral pathway for object identification. The amplitude of the earlier N1m wave, which is related to processing of sensory information, did not change over the time course of the study. Conclusion The P2m amplitude increase and its persistence over time constitute a neuroplastic change. The P2m gain likely reflects enhanced object representation after stimulus experience and training, which enables listeners to improve their ability for scrutinizing fine differences in pre-voicing time. Different trajectories of brain and behaviour changes suggest that the preceding effect

  8. Putting it in Context: Linking Auditory Processing with Social Behavior Circuits in the Vertebrate Brain.

    PubMed

    Petersen, Christopher L; Hurley, Laura M

    2017-10-01

    Context is critical to the adaptive value of communication. Sensory systems such as the auditory system represent an important juncture at which information on physiological state or social valence can be added to communicative information. However, the neural pathways that convey context to the auditory system are not well understood. The serotonergic system offers an excellent model to address these types of questions. Serotonin fluctuates in the mouse inferior colliculus (IC), an auditory midbrain region important for species-specific vocalizations, during specific social and non-social contexts. Furthermore, serotonin is an indicator of the valence of event-based changes within individual social interactions. We propose a model in which the brain's social behavior network serves as an afferent effector of the serotonergic dorsal raphe nucleus in order to gate contextual release of serotonin in the IC. Specifically, discrete vasopressinergic nuclei within the hypothalamus and extended amygdala that project to the dorsal raphe are functionally engaged during contexts in which serotonin fluctuates in the IC. Since serotonin strongly influences the responses of IC neurons to social vocalizations, this pathway could serve as a feedback loop whereby integrative social centers modulate their own sources of input. The end result of this feedback would be to produce a process that is geared, from sensory input to motor output, toward responding appropriately to a dynamic external world. © The Author 2017. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved. For permissions please email: journals.permissions@oup.com.

  9. Pilocarpine Seizures Cause Age-Dependent Impairment in Auditory Location Discrimination

    ERIC Educational Resources Information Center

    Neill, John C.; Liu, Zhao; Mikati, Mohammad; Holmes, Gregory L.

    2005-01-01

    Children who have status epilepticus have continuous or rapidly repeating seizures that may be life-threatening and may cause life-long changes in brain and behavior. The extent to which status epilepticus causes deficits in auditory discrimination is unknown. A naturalistic auditory location discrimination method was used to evaluate this…

  10. A corollary discharge maintains auditory sensitivity during sound production

    NASA Astrophysics Data System (ADS)

    Poulet, James F. A.; Hedwig, Berthold

    2002-08-01

    Speaking and singing present the auditory system of the caller with two fundamental problems: discriminating between self-generated and external auditory signals and preventing desensitization. In humans and many other vertebrates, auditory neurons in the brain are inhibited during vocalization but little is known about the nature of the inhibition. Here we show, using intracellular recordings of auditory neurons in the singing cricket, that presynaptic inhibition of auditory afferents and postsynaptic inhibition of an identified auditory interneuron occur in phase with the song pattern. Presynaptic and postsynaptic inhibition persist in a fictively singing, isolated cricket central nervous system and are therefore the result of a corollary discharge from the singing motor network. Mimicking inhibition in the interneuron by injecting hyperpolarizing current suppresses its spiking response to a 100-dB sound pressure level (SPL) acoustic stimulus and maintains its response to subsequent, quieter stimuli. Inhibition by the corollary discharge reduces the neural response to self-generated sound and protects the cricket's auditory pathway from self-induced desensitization.

  11. The effect of contextual auditory stimuli on virtual spatial navigation in patients with focal hemispheric lesions.

    PubMed

    Cogné, Mélanie; Knebel, Jean-François; Klinger, Evelyne; Bindschaedler, Claire; Rapin, Pierre-André; Joseph, Pierre-Alain; Clarke, Stephanie

    2018-01-01

    Topographical disorientation is a frequent deficit among patients suffering from brain injury. Spatial navigation can be explored in this population using virtual reality environments, even in the presence of motor or sensory disorders. Furthermore, the positive or negative impact of specific stimuli can be investigated. We studied how auditory stimuli influence the performance of brain-injured patients in a navigational task, using the Virtual Action Planning-Supermarket (VAP-S) with the addition of contextual ("sonar effect" and "name of product") and non-contextual ("periodic randomised noises") auditory stimuli. The study included 22 patients with a first unilateral hemispheric brain lesion and 17 healthy age-matched control subjects. After a software familiarisation, all subjects were tested without auditory stimuli, with a sonar effect or periodic random sounds in a random order, and with the stimulus "name of product". Contextual auditory stimuli improved patient performance more than control group performance. The contextual stimuli most benefited patients with severe executive dysfunction or severe unilateral neglect. These results indicate that contextual auditory stimuli are useful in the assessment of navigational abilities in brain-damaged patients and that they should be used in rehabilitation paradigms.

  12. Repeated restraint stress impairs auditory attention and GABAergic synaptic efficacy in the rat auditory cortex.

    PubMed

    Pérez, Miguel Ángel; Pérez-Valenzuela, Catherine; Rojas-Thomas, Felipe; Ahumada, Juan; Fuenzalida, Marco; Dagnino-Subiabre, Alexies

    2013-08-29

    Chronic stress induces dendritic atrophy in the rat primary auditory cortex (A1), a key brain area for auditory attention. The aim of this study was to determine whether repeated restraint stress affects auditory attention and synaptic transmission in A1. Male Sprague-Dawley rats were trained in a two-alternative choice task (2-ACT), a behavioral paradigm to study auditory attention in rats. Trained animals that reached a performance over 80% of correct trials in the 2-ACT were randomly assigned to control and restraint stress experimental groups. To analyze the effects of restraint stress on auditory attention, trained rats of both groups were subjected to 50 2-ACT trials one day before and one day after the stress period. A difference score was determined by subtracting the number of correct trials after the stress protocol from those before it. Another set of rats was used to study synaptic transmission in A1. Restraint stress decreased the number of correct trials by 28% compared to the performance of control animals (p < 0.001). Furthermore, stress reduced the frequency of spontaneous inhibitory postsynaptic currents (sIPSC) and miniature IPSC in A1, whereas glutamatergic efficacy was not affected. Our results demonstrate that restraint stress decreased auditory attention and GABAergic synaptic efficacy in A1. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.

  13. A lateralized functional auditory network is involved in anuran sexual selection.

    PubMed

    Xue, Fei; Fang, Guangzhan; Yue, Xizi; Zhao, Ermi; Brauth, Steven E; Tang, Yezhong

    2016-12-01

    Right ear advantage (REA) exists in many land vertebrates in which the right ear and left hemisphere preferentially process conspecific acoustic stimuli such as those related to sexual selection. Although ecological and neural mechanisms for sexual selection have been widely studied, the brain networks involved are still poorly understood. In this study we used multi-channel electroencephalographic data in combination with Granger causal connectivity analysis to demonstrate, for the first time, that the auditory neural network interconnecting the left and right midbrain and forebrain functions asymmetrically in the Emei music frog (Babina daunchina), an anuran species which exhibits REA. The results showed that the network was lateralized. Ascending connections between the mesencephalon and telencephalon were stronger on the left side while descending ones were stronger on the right, which matched the REA in this species and implied that inhibition from the forebrain may partly underlie the REA. Connections from the telencephalon to the ipsilateral mesencephalon in response to white noise were highest in the non-reproductive stage, while those in response to advertisement calls were highest in the reproductive stage, implying a shift in attentional resources and life strategy upon entering the reproductive season. Finally, these connection changes were sexually dimorphic, revealing sex differences in reproductive roles.
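
    The Granger causal connectivity analysis mentioned above can be illustrated in its simplest bivariate form: channel y "Granger-causes" channel x if y's past improves the prediction of x beyond x's own past. This is only a sketch of the idea, not the study's actual pipeline, which would include model-order selection and significance testing.

```python
import numpy as np

def granger_causality(x, y, order=2):
    """Directed influence of y on x: log ratio of residual sums of
    squares for an autoregressive model of x fitted without vs. with
    y's past values. Larger values mean y's history helps predict x."""
    n = len(x)
    # lagged predictors: column k holds the (k+1)-step-past values
    own = np.column_stack([x[order - k - 1:n - k - 1] for k in range(order)])
    full = np.column_stack([own] + [y[order - k - 1:n - k - 1, None]
                                    for k in range(order)])
    target = x[order:]

    def rss(design):
        beta, *_ = np.linalg.lstsq(design, target, rcond=None)
        resid = target - design @ beta
        return resid @ resid

    return np.log(rss(own) / rss(full))
```

Applied to every ordered pair of electrodes, such values form the directed connectivity matrix whose left/right asymmetries the study reports.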

  14. Positron Emission Tomography in Cochlear Implant and Auditory Brainstem Implant Recipients.

    ERIC Educational Resources Information Center

    Miyamoto, Richard T.; Wong, Donald

    2001-01-01

    Positron emission tomography imaging was used to evaluate the brain's response to auditory stimulation, including speech, in deaf adults (five with cochlear implants and one with an auditory brainstem implant). Functional speech processing was associated with activation in areas classically associated with speech processing. (Contains five…

  15. Silent music reading: auditory imagery and visuotonal modality transfer in singers and non-singers.

    PubMed

    Hoppe, Christian; Splittstößer, Christoph; Fliessbach, Klaus; Trautner, Peter; Elger, Christian E; Weber, Bernd

    2014-11-01

    In daily life, responses are often facilitated by anticipatory imagery of expected targets which are announced by associated stimuli from different sensory modalities. Silent music reading represents an intriguing case of visuotonal modality transfer in working memory as it induces highly defined auditory imagery on the basis of presented visuospatial information (i.e. musical notes). Using functional MRI and a delayed sequence matching-to-sample paradigm, we compared brain activations during retention intervals (10s) of visual (VV) or tonal (TT) unimodal maintenance versus visuospatial-to-tonal modality transfer (VT) tasks. Visual or tonal sequences were comprised of six elements, white squares or tones, which were low, middle, or high regarding vertical screen position or pitch, respectively (presentation duration: 1.5s). For the cross-modal condition (VT, session 3), the visuospatial elements from condition VV (session 1) were re-defined as low, middle or high "notes" indicating low, middle or high tones from condition TT (session 2), respectively, and subjects had to match tonal sequences (probe) to previously presented note sequences. Tasks alternately had low or high cognitive load. To evaluate possible effects of music reading expertise, 15 singers and 15 non-musicians were included. Scanner task performance was excellent in both groups. Despite identity of applied visuospatial stimuli, visuotonal modality transfer versus visual maintenance (VT>VV) induced "inhibition" of visual brain areas and activation of primary and higher auditory brain areas which exceeded auditory activation elicited by tonal stimulation (VT>TT). This transfer-related visual-to-auditory activation shift occurred in both groups but was more pronounced in experts. Frontoparietal areas were activated by higher cognitive load but not by modality transfer. The auditory brain showed a potential to anticipate expected auditory target stimuli on the basis of non-auditory information and

  16. Neural basis of processing threatening voices in a crowded auditory world

    PubMed Central

    Mothes-Lasch, Martin; Becker, Michael P. I.; Miltner, Wolfgang H. R.

    2016-01-01

    In real world situations, we typically listen to voice prosody against a background crowded with auditory stimuli. Voices and background can both contain behaviorally relevant features and both can be selectively in the focus of attention. Adequate responses to threat-related voices under such conditions require that the brain unmixes reciprocally masked features depending on variable cognitive resources. It is unknown which brain systems instantiate the extraction of behaviorally relevant prosodic features under varying combinations of prosody valence, auditory background complexity and attentional focus. Here, we used event-related functional magnetic resonance imaging to investigate the effects of high background sound complexity and attentional focus on brain activation to angry and neutral prosody in humans. Results show that prosody effects in mid superior temporal cortex were gated by background complexity but not attention, while prosody effects in the amygdala and anterior superior temporal cortex were gated by attention but not background complexity, suggesting distinct emotional prosody processing limitations in different regions. Crucially, if attention was focused on the highly complex background, the differential processing of emotional prosody was prevented in all brain regions, suggesting that in a distracting, complex auditory world even threatening voices may go unnoticed. PMID:26884543

  17. Least squares restoration of multichannel images

    NASA Technical Reports Server (NTRS)

    Galatsanos, Nikolas P.; Katsaggelos, Aggelos K.; Chin, Roland T.; Hillery, Allen D.

    1991-01-01

    Multichannel restoration using both within- and between-channel deterministic information is considered. A multichannel image is a set of image planes that exhibit cross-plane similarity. Existing optimal restoration filters for single-plane images yield suboptimal results when applied to multichannel images, since between-channel information is not utilized. Multichannel least squares restoration filters are developed using the set theoretic and the constrained optimization approaches. A geometric interpretation of the estimates of both filters is given. Color images (three-channel imagery with red, green, and blue components) are considered. Constraints that capture the within- and between-channel properties of color images are developed. Issues associated with the computation of the two estimates are addressed. A spatially adaptive, multichannel least squares filter that utilizes local within- and between-channel image properties is proposed. Experiments using color images are described.
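
    The constrained-optimization estimate described above amounts to a regularized least-squares solve over the stacked channel vector. A minimal dense sketch follows; the names H, C, and lam are illustrative, and practical filters would exploit block-circulant structure and the FFT rather than a dense solve.

```python
import numpy as np

def multichannel_ls_restore(H, y, C, lam):
    """Regularized multichannel least-squares restoration:
        x_hat = argmin_x ||y - H x||^2 + lam * ||C x||^2
    where x stacks all channels (e.g. R, G, B planes), H is the blur
    operator acting on the stacked vector, and C encodes within- and
    between-channel smoothness constraints. The cross-channel blocks
    of C are what distinguish this from restoring each plane alone."""
    A = H.T @ H + lam * (C.T @ C)
    return np.linalg.solve(A, H.T @ y)
```

With an identity blur and no constraint the estimate reduces to the observation itself, a quick sanity check on the normal equations.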

  18. Delta, theta, beta, and gamma brain oscillations index levels of auditory sentence processing.

    PubMed

    Mai, Guangting; Minett, James W; Wang, William S-Y

    2016-06-01

    A growing number of studies indicate that multiple ranges of brain oscillations, especially the delta (δ, <4Hz), theta (θ, 4-8Hz), beta (β, 13-30Hz), and gamma (γ, 30-50Hz) bands, are engaged in speech and language processing. It is not clear, however, how these oscillations relate to functional processing at different linguistic hierarchical levels. Using scalp electroencephalography (EEG), the current study tested the hypothesis that phonological and the higher-level linguistic (semantic/syntactic) organizations during auditory sentence processing are indexed by distinct EEG signatures derived from the δ, θ, β, and γ oscillations. We analyzed specific EEG signatures while subjects listened to Mandarin speech stimuli in three different conditions in order to dissociate phonological and semantic/syntactic processing: (1) sentences comprising valid disyllabic words assembled in a valid syntactic structure (real-word condition); (2) utterances with morphologically valid syllables, but not constituting valid disyllabic words (pseudo-word condition); and (3) backward versions of the real-word and pseudo-word conditions. We tested four signatures: band power, EEG-acoustic entrainment (EAE), cross-frequency coupling (CFC), and inter-electrode renormalized partial directed coherence (rPDC). The results show significant effects of band power and EAE of δ and θ oscillations for phonological, rather than semantic/syntactic processing, indicating the importance of tracking δ- and θ-rate phonetic patterns during phonological analysis. We also found significant β-related effects, suggesting tracking of EEG to the acoustic stimulus (high-β EAE), memory processing (θ-low-β CFC), and auditory-motor interactions (20-Hz rPDC) during phonological analysis. For semantic/syntactic processing, we obtained a significant effect of γ power, suggesting lexical memory retrieval or processing grammatical word categories. Based on these findings, we confirm that scalp EEG
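
    Of the four signatures tested, band power is the simplest to illustrate. Below is a minimal periodogram-based sketch using the band edges given above; the other signatures (EAE, CFC, rPDC) require substantially more machinery and are not shown.

```python
import numpy as np

def band_power(signal, fs, bands):
    """Mean periodogram power of a 1-D signal in each named frequency
    band. A minimal sketch of the 'band power' EEG signature only."""
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in bands.items()}

# Band edges as given in the abstract (Hz):
bands = {"delta": (0.5, 4), "theta": (4, 8), "beta": (13, 30), "gamma": (30, 50)}
fs = 200
t = np.arange(4 * fs) / fs
sig = np.sin(2 * np.pi * 6 * t)  # a pure 6 Hz tone falls in the theta band
powers = band_power(sig, fs, bands)
```

For the 6 Hz test tone, the theta-band power dominates all other bands, as expected.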

  19. Noise Trauma Induced Plastic Changes in Brain Regions outside the Classical Auditory Pathway

    PubMed Central

    Chen, Guang-Di; Sheppard, Adam; Salvi, Richard

    2017-01-01

    The effects of intense noise exposure on the classical auditory pathway have been extensively investigated; however, little is known about the effects of noise-induced hearing loss on non-classical auditory areas in the brain such as the lateral amygdala (LA) and striatum (Str). To address this issue, we compared the noise-induced changes in spontaneous and tone-evoked responses from multiunit clusters (MUC) in the LA and Str with those seen in auditory cortex (AC). High-frequency octave band noise (10–20 kHz) and narrow band noise (16–20 kHz) induced permanent threshold shifts (PTS) at high frequencies within and above the noise band but not at low frequencies. While the noise trauma significantly elevated spontaneous discharge rate (SR) in the AC, SRs in the LA and Str were only slightly increased across all frequencies. The high-frequency noise trauma affected tone-evoked firing rates in a frequency- and time-dependent manner, and the changes appeared to be related to the severity of the noise trauma. In the LA, tone-evoked firing rates were reduced at the high frequencies (the trauma area), whereas firing rates were enhanced at the low frequencies or at the edge frequency, depending on the severity of hearing loss at the high frequencies. The firing rate temporal profile changed from a broad plateau to one sharp, delayed peak. In the AC, tone-evoked firing rates were depressed at high frequencies and enhanced at low frequencies, while the firing rate temporal profiles became substantially broader. In contrast, firing rates in the Str were generally decreased, and firing rate temporal profiles became more phasic and less prolonged. The altered firing rate and pattern at low frequencies induced by high-frequency hearing loss could have perceptual consequences: the tone-evoked hyperactivity in low-frequency MUC could manifest as hyperacusis, whereas the discharge pattern changes could affect temporal resolution and integration. PMID:26701290

  20. Entrainment to an auditory signal: Is attention involved?

    PubMed

    Kunert, Richard; Jongman, Suzanne R

    2017-01-01

    Many natural auditory signals, including music and language, change periodically. The effect of such auditory rhythms on the brain, however, is unclear. One widely held view, dynamic attending theory, proposes that the attentional system entrains to the rhythm and increases attention at moments of rhythmic salience. In support, 2 experiments reported here show reduced response times to visual letter strings shown at auditory rhythm peaks, compared with rhythm troughs. However, we argue that an account invoking the entrainment of general attention should further predict that rhythm entrainment also influences memory for visual stimuli. In 2 pseudoword memory experiments we find evidence against this prediction: whether a pseudoword is shown during an auditory rhythm peak or not is irrelevant for its later recognition memory in silence. Other attention manipulations, dividing attention and focusing attention, did result in a memory effect. This raises doubts about the suggested attentional nature of rhythm entrainment. We interpret our findings as support for auditory rhythm perception being based on auditory-motor entrainment, not general attention entrainment.

  1. Characterization of auditory synaptic inputs to gerbil perirhinal cortex

    PubMed Central

    Kotak, Vibhakar C.; Mowery, Todd M.; Sanes, Dan H.

    2015-01-01

    The representation of acoustic cues involves regions downstream from the auditory cortex (ACx). One such area, the perirhinal cortex (PRh), processes sensory signals containing mnemonic information. Therefore, our goal was to assess whether PRh receives auditory inputs from the auditory thalamus (MG) and ACx in an auditory thalamocortical brain slice preparation, and to characterize these afferent-driven synaptic properties. When the MG or ACx was electrically stimulated, synaptic responses were recorded from PRh neurons. Blockade of type A gamma-aminobutyric acid (GABA-A) receptors dramatically increased the amplitude of evoked excitatory potentials. Stimulation of the MG or ACx also evoked calcium transients in most PRh neurons. Separately, when Fluoro-Ruby was injected into the ACx in vivo, anterogradely labeled axons and terminals were observed in the PRh. Collectively, these data show that the PRh integrates auditory information from the MG and ACx and that auditory-driven inhibition dominates the postsynaptic responses in a non-sensory cortical region downstream from the ACx. PMID:26321918

  2. Altered auditory function in rats exposed to hypergravic fields

    NASA Technical Reports Server (NTRS)

    Jones, T. A.; Hoffman, L.; Horowitz, J. M.

    1982-01-01

    The effect of an orthodynamic hypergravic field of 6 G on the brainstem auditory projections was studied in rats. The brain temperature and EEG activity were recorded in the rats during 6 G orthodynamic acceleration and auditory brainstem responses were used to monitor auditory function. Results show that all animals exhibited auditory brainstem responses which indicated impaired conduction and transmission of brainstem auditory signals during the exposure to the 6 G acceleration field. Significant increases in central conduction time were observed for peaks 3N, 4P, 4N, and 5P (N = negative, P = positive), while the absolute latency values for these same peaks were also significantly increased. It is concluded that these results, along with those for fields below 4 G (Jones and Horowitz, 1981), indicate that impaired function proceeds in a rostro-caudal progression as field strength is increased.

  3. Auditory pathways: are 'what' and 'where' appropriate?

    PubMed

    Hall, Deborah A

    2003-05-13

    New evidence confirms that the auditory system encompasses temporal, parietal and frontal brain regions, some of which partly overlap with the visual system. But common assumptions about the functional homologies between sensory systems may be misleading.

  4. The Effect of Early Visual Deprivation on the Neural Bases of Auditory Processing.

    PubMed

    Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte

    2016-02-03

    Transient congenital visual deprivation affects visual and multisensory processing. In contrast, the extent to which it affects auditory processing has not been investigated systematically. Research in permanently blind individuals has revealed brain reorganization during auditory processing, involving both intramodal and crossmodal plasticity. The present study investigated the effect of transient congenital visual deprivation on the neural bases of auditory processing in humans. Cataract-reversal individuals and normally sighted controls performed a speech-in-noise task while undergoing functional magnetic resonance imaging. Although there were no behavioral group differences, groups differed in auditory cortical responses: in the normally sighted group, auditory cortex activation increased with increasing noise level, whereas in the cataract-reversal group, no activation difference was observed across noise levels. An auditory activation of visual cortex was not observed at the group level in cataract-reversal individuals. The present data suggest prevailing auditory processing advantages after transient congenital visual deprivation, even many years after sight restoration. The present study demonstrates that people whose sight was restored after a transient period of congenital blindness show more efficient cortical processing of auditory stimuli (here speech), similarly to what has been observed in congenitally permanently blind individuals. These results underscore the importance of early sensory experience in permanently shaping brain function.

  5. [Communication and auditory behavior obtained by auditory evoked potentials in mammals, birds, amphibians, and reptiles].

    PubMed

    Arch-Tirado, Emilio; Collado-Corona, Miguel Angel; Morales-Martínez, José de Jesús

    2004-01-01

    amphibians, Rana catesbeiana (bullfrog, 30 animals); reptiles, Sceloporus torquatus (common small lizard, 22 animals); birds, Columba livia (common dove, 20 animals); and mammals, Cavia porcellus (guinea pig, 20 animals). With regard to lodging, all animals were maintained at the Institute of Human Communication Disorders, were fed food appropriate to each species, and had water available ad libitum. For recording of auditory brain stem evoked potentials, the amphibians, birds, and mammals were anesthetized with injected ketamine (20, 25, and 50 mg/kg, respectively); the reptiles were anesthetized by cooling (6 °C). Needle electrodes were placed on an imaginary line crossing the midsagittal line between the ears and eyes, behind the right ear, and behind the left ear. Stimulation was delivered in a noise-free room by a free-field loudspeaker. The signal was filtered between 100 and 3,000 Hz and analyzed with an evoked-potential computer (Racia APE 78). The evoked waves of the amphibians showed greater latency than those of the other species; in the reptiles, latency was reduced in comparison with the amphibians, and in the birds still lower latency values were observed. Latencies in the guinea pigs were greater than in the doves, but they responded to stimulation 10 dB lower, demonstrating the best auditory threshold of the four species studied. Finally, it was corroborated that the auditory threshold of each species decreases as one ascends the phylogenetic scale. From these recordings we were able to say that the brain stem evoked response becomes more complex and shows lower absolute latency values as one ascends the phylogenetic scale; thus the observed auditory thresholds accord with the phylogenetic ordering of the species studied. These data indicate that the processing of auditory information is more complex in more

  6. A supramodal brain substrate of word form processing--an fMRI study on homonym finding with auditory and visual input.

    PubMed

    Balthasar, Andrea J R; Huber, Walter; Weis, Susanne

    2011-09-02

    Homonym processing in German is of theoretical interest as homonyms specifically involve word form information. In a previous study (Weis et al., 2001), we found inferior parietal activation as a correlate of successfully finding a homonym from written stimuli. The present study tries to clarify the underlying mechanism and to examine to what extent the previous homonym effect depends on visual as opposed to auditory input modality. Eighteen healthy subjects were examined using an event-related functional magnetic resonance imaging paradigm. Participants had to find and articulate a homonym in relation to two spoken or written words. A semantic-lexical task - oral naming from two-word definitions - was used as a control condition. When comparing brain activation for solved homonym trials to both brain activation for unsolved homonyms and solved definition trials, we obtained two activation patterns that characterised both auditory and visual processing. Semantic-lexical processing was related to bilateral inferior frontal activation, whereas left inferior parietal activation was associated with finding the correct homonym. As the inferior parietal activation during successful access to the word form of a homonym was independent of input modality, it might be the substrate of access to word form knowledge.

  7. Auditory evoked potentials.

    PubMed

    De Cosmo, G; Aceto, P; Clemente, A; Congedo, E

    2004-05-01

    Auditory evoked potentials (AEPs) are an electrical manifestation of the brain's response to an auditory stimulus. Mid-latency auditory evoked potentials (MLAEPs) and the coherent frequency of the AEP are the most promising for monitoring depth of anaesthesia. MLAEPs show graded changes with increasing anaesthetic concentration over the clinical concentration range: the latencies of Pa and Nb lengthen and their amplitudes decrease. These waveform changes are similar with both inhaled and intravenous anaesthetics, and changes in the latency of the Pa and Nb waves are highly correlated with the transition from wakefulness to loss of consciousness. MLAEP recording may also provide information about cerebral processing of auditory input, probably because it reflects activity in the temporal lobe/primary auditory cortex, sites involved in the elaboration of sounds and in a complex mechanism of implicit (non-declarative) memory processing. The coherent frequency has been found to be disrupted by anaesthetics and to be implicated in attentional mechanisms. These results support the concept that the AEP reflects the balance between the arousal effects of surgical stimulation and the depressant effects of anaesthetics. However, AEPs are not a perfect measure of anaesthetic depth: they cannot predict patient movement during surgery, and the signal may be affected by muscle artefacts, diathermy, and other electrical interference in the operating theatre. In conclusion, once the reliability of AEP recording is proven and signal acquisition improves, it is likely to become a routine feature of clinical anaesthetic practice.

  8. Premotor cortex is sensitive to auditory-visual congruence for biological motion.

    PubMed

    Wuerger, Sophie M; Parkes, Laura; Lewis, Penelope A; Crocker-Buque, Alex; Rutschmann, Roland; Meyer, Georg F

    2012-03-01

    The auditory and visual perception systems have developed special processing strategies for ecologically valid motion stimuli, utilizing some of the statistical properties of the real world. A well-known example is the perception of biological motion, for example, the perception of a human walker. The aim of the current study was to identify the cortical network involved in the integration of auditory and visual biological motion signals. We first determined the cortical regions of auditory and visual coactivation (Experiment 1); a conjunction analysis based on unimodal brain activations identified four regions: middle temporal area, inferior parietal lobule, ventral premotor cortex, and cerebellum. The brain activations arising from bimodal motion stimuli (Experiment 2) were then analyzed within these regions of coactivation. Auditory footsteps were presented concurrently with either an intact visual point-light walker (biological motion) or a scrambled point-light walker; auditory and visual motion in depth (walking direction) could either be congruent or incongruent. Our main finding is that motion incongruency (across modalities) increases the activity in the ventral premotor cortex, but only if the visual point-light walker is intact. Our results extend our current knowledge by providing new evidence consistent with the idea that the premotor area assimilates information across the auditory and visual modalities by comparing the incoming sensory input with an internal representation.

  9. Evaluation of auditory brain stem evoked response in newborns with pathologic hyperbilirubinemia in Mashhad, Iran.

    PubMed

    Okhravi, Tooba; Tarvij Eslami, Saeedeh; Hushyar Ahmadi, Ali; Nassirian, Hossain; Najibpour, Reza

    2015-02-01

    Neonatal jaundice is a common cause of sensorineural hearing loss in children. We aimed to detect the neurotoxic effects of pathologic hyperbilirubinemia on the brain stem and auditory tract by auditory brain stem evoked response (ABR), which could reveal early effects of hyperbilirubinemia. This case-control study was performed on newborns with pathologic hyperbilirubinemia. The inclusion criteria were healthy term and near-term (35 - 37 weeks) newborns with pathologic hyperbilirubinemia with serum bilirubin values of ≥ 7 mg/dL, ≥ 10 mg/dL and ≥ 14 mg/dL at the first, second and third day of life, respectively, and with bilirubin concentration ≥ 18 mg/dL at over 72 hours of life. The exclusion criteria included family history of or diseases causing sensorineural hearing loss, use of ototoxic medications within the preceding five days, convulsion, congenital craniofacial anomalies, birth trauma, preterm birth < 35 weeks, birth weight < 1500 g, asphyxia, and mechanical ventilation for five days or more. A total of 48 newborns with hyperbilirubinemia met the enrolment criteria as the case group, and 49 healthy newborns served as the control group; all were hospitalized in a university teaching hospital (22 Bahman) in Mashhad, a city in north-eastern Iran. ABR was performed on both groups. The evaluated variables were latency time, interpeak interval time, and loss of waves. The mean latencies of waves I, III and V of ABR were significantly higher in the pathologic hyperbilirubinemia group compared with the controls (P < 0.001). In addition, the mean interpeak intervals (IPI) of waves I-III, I-V and III-V of ABR were significantly higher in the pathologic hyperbilirubinemia group compared with the controls (P < 0.001). For example, the mean latency of wave I was significantly higher in the right ear of the case group than in controls (2.16 ± 0.26 vs. 1.77 ± 0.15 milliseconds, respectively) (P < 0.001). Pathologic hyperbilirubinemia causes acute
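The ABR measures compared above — absolute wave latencies and interpeak intervals (IPIs) — are related by simple subtraction. A sketch of that arithmetic: the wave I latencies are the group means quoted in the abstract, while the wave III and V values are invented for illustration only.

```python
def interpeak_intervals(latency_ms):
    """Derive the I-III, I-V and III-V interpeak intervals (ms) from the
    absolute latencies of ABR waves I, III and V."""
    w1, w3, w5 = latency_ms["I"], latency_ms["III"], latency_ms["V"]
    return {"I-III": w3 - w1, "I-V": w5 - w1, "III-V": w5 - w3}

# Wave I latencies from the abstract; wave III/V values are hypothetical.
control = interpeak_intervals({"I": 1.77, "III": 3.90, "V": 5.70})
case = interpeak_intervals({"I": 2.16, "III": 4.50, "V": 6.50})
# Prolonged IPIs in the case group would suggest slowed brain stem conduction.
```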

  10. Potassium conductance dynamics confer robust spike-time precision in a neuromorphic model of the auditory brain stem

    PubMed Central

    Boahen, Kwabena

    2013-01-01

    A fundamental question in neuroscience is how neurons perform precise operations despite inherent variability. This question also applies to neuromorphic engineering, where low-power microchips emulate the brain using large populations of diverse silicon neurons. Biological neurons in the auditory pathway display precise spike timing, critical for sound localization and interpretation of complex waveforms such as speech, even though they are a heterogeneous population. Silicon neurons are also heterogeneous, due to a key design constraint in neuromorphic engineering: smaller transistors offer lower power consumption and more neurons per unit area of silicon, but also more variability between transistors and thus between silicon neurons. Utilizing this variability in a neuromorphic model of the auditory brain stem with 1,080 silicon neurons, we found that a low-voltage-activated potassium conductance (gKL) enables precise spike timing via two mechanisms: statically reducing the resting membrane time constant and dynamically suppressing late synaptic inputs. The relative contribution of these two mechanisms is unknown because blocking gKL in vitro eliminates dynamic adaptation but also lengthens the membrane time constant. We replaced gKL with a static leak in silico to recover the short membrane time constant and found that silicon neurons could mimic the spike-time precision of their biological counterparts, but only over a narrow range of stimulus intensities and biophysical parameters. The dynamics of gKL were required for precise spike timing robust to stimulus variation across a heterogeneous population of silicon neurons, thus explaining how neural and neuromorphic systems may perform precise operations despite inherent variability. PMID:23554436

  11. Neural preservation underlies speech improvement from auditory deprivation in young cochlear implant recipients.

    PubMed

    Feng, Gangyi; Ingvalson, Erin M; Grieco-Calub, Tina M; Roberts, Megan Y; Ryan, Maura E; Birmingham, Patrick; Burrowes, Delilah; Young, Nancy M; Wong, Patrick C M

    2018-01-30

    Although cochlear implantation enables some children to attain age-appropriate speech and language development, communicative delays persist in others, and outcomes are quite variable and difficult to predict, even for children implanted early in life. To understand the neurobiological basis of this variability, we used presurgical neural morphological data obtained from MRI of individual pediatric cochlear implant (CI) candidates implanted younger than 3.5 years to predict variability of their speech-perception improvement after surgery. We first compared neuroanatomical density and spatial pattern similarity of CI candidates to that of age-matched children with normal hearing, which allowed us to detail neuroanatomical networks that were either affected or unaffected by auditory deprivation. This information enables us to build machine-learning models to predict the individual children's speech development following CI. We found that regions of the brain that were unaffected by auditory deprivation, in particular the auditory association and cognitive brain regions, produced the highest accuracy, specificity, and sensitivity in patient classification and the most precise prediction results. These findings suggest that brain areas unaffected by auditory deprivation are critical to developing closer to typical speech outcomes. Moreover, the findings suggest that determination of the type of neural reorganization caused by auditory deprivation before implantation is valuable for predicting post-CI language outcomes for young children.

  12. Prediction in the service of comprehension: modulated early brain responses to omitted speech segments.

    PubMed

    Bendixen, Alexandra; Scharinger, Mathias; Strauß, Antje; Obleser, Jonas

    2014-04-01

    Speech signals are often compromised by disruptions originating from external (e.g., masking noise) or internal (e.g., inaccurate articulation) sources. Speech comprehension thus entails detecting and replacing missing information based on predictive and restorative neural mechanisms. The present study targets predictive mechanisms by investigating the influence of a speech segment's predictability on early, modality-specific electrophysiological responses to this segment's omission. Predictability was manipulated in simple physical terms in a single-word framework (Experiment 1) or in more complex semantic terms in a sentence framework (Experiment 2). In both experiments, final consonants of the German words Lachs ([laks], salmon) or Latz ([lats], bib) were occasionally omitted, resulting in the syllable La ([la], no semantic meaning), while brain responses were measured with multi-channel electroencephalography (EEG). In both experiments, the occasional presentation of the fragment La elicited a larger omission response when the final speech segment had been predictable. The omission response occurred ∼125-165 msec after the expected onset of the final segment and showed characteristics of the omission mismatch negativity (MMN), with generators in auditory cortical areas. Suggestive of a general auditory predictive mechanism at work, this main observation was robust against varying source of predictive information or attentional allocation, differing between the two experiments. Source localization further suggested the omission response enhancement by predictability to emerge from left superior temporal gyrus and left angular gyrus in both experiments, with additional experiment-specific contributions. These results are consistent with the existence of predictive coding mechanisms in the central auditory system, and suggestive of the general predictive properties of the auditory system to support spoken word recognition.

  13. Multi-channel polarized thermal emitter

    DOEpatents

    Lee, Jae-Hwang; Ho, Kai-Ming; Constant, Kristen P

    2013-07-16

    A multi-channel polarized thermal emitter (PTE) is presented. The multi-channel PTE can emit polarized thermal radiation without using a polarizer at normal emergence. The multi-channel PTE consists of two layers of metallic gratings on a monolithic and homogeneous metallic plate. It can be fabricated by a low-cost soft lithography technique called two-polymer microtransfer molding. The spectral positions of the mid-infrared (MIR) radiation peaks can be tuned by changing the periodicity of the gratings, and the spectral separation between peaks is tuned by changing the mutual angle between the orientations of the two gratings.

  14. Auditory cortex stimulation to suppress tinnitus: mechanisms and strategies.

    PubMed

    Zhang, Jinsheng

    2013-01-01

    Brain stimulation is an important method used to modulate neural activity and suppress tinnitus. Several auditory and non-auditory brain regions have been targeted for stimulation. This paper reviews recent progress on auditory cortex (AC) stimulation to suppress tinnitus and its underlying neural mechanisms and stimulation strategies. At the same time, the author provides his opinions and hypotheses on both animal and human models. The author also proposes a medial geniculate body (MGB)-thalamic reticular nucleus (TRN) gating mechanism to reflect tinnitus-related neural information coming from upstream and downstream projection structures. The upstream structures include the lower auditory brainstem and midbrain structures; the downstream structures include the AC and certain limbic centers. Both upstream and downstream information is involved in a dynamic gating mechanism in the MGB together with the TRN. When abnormal gating occurs at the thalamic level, the spilled-out information interacts with the AC to generate tinnitus. The tinnitus signals at the MGB-TRN gate may be modulated by different forms of stimulation, including brain stimulation: each stimulation acts as a gain modulator to control the level of tinnitus signals at the MGB-TRN gate. This hypothesis may explain why different types of stimulation can induce tinnitus suppression. Depending on the tinnitus etiology, MGB-TRN gating may differ in level and dynamics, causing variability in the tinnitus suppression induced by different gain controllers. This may explain why the suppression of tinnitus induced by one type of stimulation varies across individual patients.

  15. Mind the Gap: Two Dissociable Mechanisms of Temporal Processing in the Auditory System

    PubMed Central

    Anderson, Lucy A.

    2016-01-01

    High temporal acuity of auditory processing underlies perception of speech and other rapidly varying sounds. A common measure of auditory temporal acuity in humans is the threshold for detection of brief gaps in noise. Gap-detection deficits, observed in developmental disorders, are considered evidence for “sluggish” auditory processing. Here we show, in a mouse model of gap-detection deficits, that auditory brain sensitivity to brief gaps in noise can be impaired even without a general loss of central auditory temporal acuity. Extracellular recordings in three different subdivisions of the auditory thalamus in anesthetized mice revealed a stimulus-specific, subdivision-specific deficit in thalamic sensitivity to brief gaps in noise in experimental animals relative to controls. Neural responses to brief gaps in noise were reduced, but responses to other rapidly changing stimuli unaffected, in lemniscal and nonlemniscal (but not polysensory) subdivisions of the medial geniculate body. Through experiments and modeling, we demonstrate that the observed deficits in thalamic sensitivity to brief gaps in noise arise from reduced neural population activity following noise offsets, but not onsets. These results reveal dissociable sound-onset-sensitive and sound-offset-sensitive channels underlying auditory temporal processing, and suggest that gap-detection deficits can arise from specific impairment of the sound-offset-sensitive channel. SIGNIFICANCE STATEMENT The experimental and modeling results reported here suggest a new hypothesis regarding the mechanisms of temporal processing in the auditory system. Using a mouse model of auditory temporal processing deficits, we demonstrate the existence of specific abnormalities in auditory thalamic activity following sound offsets, but not sound onsets. These results reveal dissociable sound-onset-sensitive and sound-offset-sensitive mechanisms underlying auditory processing of temporally varying sounds. Furthermore, the

  16. Emotional context enhances auditory novelty processing in superior temporal gyrus.

    PubMed

    Domínguez-Borràs, Judith; Trautmann, Sina-Alexa; Erhard, Peter; Fehr, Thorsten; Herrmann, Manfred; Escera, Carles

    2009-07-01

    Visualizing emotionally loaded pictures intensifies peripheral reflexes toward sudden auditory stimuli, suggesting that the emotional context may potentiate responses elicited by novel events in the acoustic environment. However, psychophysiological results have reported that attentional resources available to sounds become depleted, as attention allocation to emotional pictures increases. These findings have raised the challenging question of whether an emotional context actually enhances or attenuates auditory novelty processing at a central level in the brain. To solve this issue, we used functional magnetic resonance imaging to first identify brain activations induced by novel sounds (NOV) when participants made a color decision on visual stimuli containing both negative (NEG) and neutral (NEU) facial expressions. We then measured modulation of these auditory responses by the emotional load of the task. Contrary to what was assumed, activation induced by NOV in superior temporal gyrus (STG) was enhanced when subjects responded to faces with a NEG emotional expression compared with NEU ones. Accordingly, NOV yielded stronger behavioral disruption on subjects' performance in the NEG context. These results demonstrate that the emotional context modulates the excitability of auditory and possibly multimodal novelty cerebral regions, enhancing acoustic novelty processing in a potentially harming environment.

  17. The human auditory evoked response

    NASA Technical Reports Server (NTRS)

    Galambos, R.

    1974-01-01

    Figures are presented of computer-averaged auditory evoked responses (AERs) that point to the existence of a completely endogenous brain event. A series of regular clicks or tones was administered to the ear, and 'odd-balls' of different intensity or frequency, respectively, were included. Subjects were asked either to ignore the sounds (to read or do something else) or to attend to the stimuli. When they listened and counted the odd-balls, a P3 wave occurred at 300 msec after the stimulus. When the odd-balls consisted of omitted clicks or tone bursts, a similar response was observed. This could not have come from the auditory nerve, but only from the cortex. It is evidence of recognition, a conscious process.

  18. Auditory brainstem response to complex sounds: a tutorial

    PubMed Central

    Skoe, Erika; Kraus, Nina

    2010-01-01

    This tutorial provides a comprehensive overview of the methodological approach to collecting and analyzing auditory brainstem responses to complex sounds (cABRs). cABRs provide a window into how behaviorally relevant sounds such as speech and music are processed in the brain. Because temporal and spectral characteristics of sounds are preserved in this subcortical response, cABRs can be used to assess specific impairments and enhancements in auditory processing. Notably, subcortical function is neither passive nor hardwired but dynamically interacts with higher-level cognitive processes to refine how sounds are transcribed into neural code. This experience-dependent plasticity, which can occur on a number of time scales (e.g., life-long experience with speech or music, short-term auditory training, online auditory processing), helps shape sensory perception. Thus, by being an objective and non-invasive means for examining cognitive function and experience-dependent processes in sensory activity, cABRs have considerable utility in the study of populations where auditory function is of interest (e.g., auditory experts such as musicians, persons with hearing loss, auditory processing and language disorders). This tutorial is intended for clinicians and researchers seeking to integrate cABRs into their clinical and/or research programs. PMID:20084007

  19. Multichannel electrochemical microbial detection unit

    NASA Technical Reports Server (NTRS)

    Wilkins, J. R.; Young, R. N.; Boykin, E. H.

    1978-01-01

    The paper describes the design and capabilities of a compact multichannel electrochemical unit devised to detect bacteria and automatically indicate the detection time. By connecting this unit to a strip-chart recorder, a permanent record is obtained of the end points and growth curves for each of eight channels. The experimental setup utilizing the multichannel unit consists of a test tube (25 by 150 mm) containing a combination redox electrode plus 18 ml of lauryl tryptose broth, positioned in a 35 °C water bath. Leads from the electrodes are connected to the multichannel unit, which in turn is connected to a strip-chart recorder. After addition of 2.0 ml of inoculum to the test tubes, depression of the push-button starter activates the electronics, timer, and indicator light for each channel. The multichannel unit is employed to test tenfold dilutions of various members of the Enterobacteriaceae group, and a typical dose-response curve is presented.

  20. Developing Brain Vital Signs: Initial Framework for Monitoring Brain Function Changes Over Time

    PubMed Central

    Ghosh Hajra, Sujoy; Liu, Careesa C.; Song, Xiaowei; Fickling, Shaun; Liu, Luke E.; Pawlowski, Gabriela; Jorgensen, Janelle K.; Smith, Aynsley M.; Schnaider-Beeri, Michal; Van Den Broek, Rudi; Rizzotti, Rowena; Fisher, Kirk; D'Arcy, Ryan C. N.

    2016-01-01

    Clinical assessment of brain function relies heavily on indirect behavior-based tests. Unfortunately, behavior-based assessments are subjective and therefore susceptible to several confounding factors. Event-related brain potentials (ERPs), derived from electroencephalography (EEG), are often used to provide objective, physiological measures of brain function. Historically, ERPs have been characterized extensively within research settings, with limited but growing clinical applications. Over the past 20 years, we have developed clinical ERP applications for the evaluation of functional status following serious injury and/or disease. This work has identified an important gap: the need for a clinically accessible framework to evaluate ERP measures. Crucially, this enables baseline measures before brain dysfunction occurs, and might enable the routine collection of brain function metrics in the future much like blood pressure measures today. Here, we propose such a framework for extracting specific ERPs as potential “brain vital signs.” This framework enabled the translation/transformation of complex ERP data into accessible metrics of brain function for wider clinical utilization. To formalize the framework, three essential ERPs were selected as initial indicators: (1) the auditory N100 (Auditory sensation); (2) the auditory oddball P300 (Basic attention); and (3) the auditory speech processing N400 (Cognitive processing). First step validation was conducted on healthy younger and older adults (age range: 22–82 years). Results confirmed specific ERPs at the individual level (86.81–98.96%), verified predictable age-related differences (P300 latency delays in older adults, p < 0.05), and demonstrated successful linear transformation into the proposed brain vital sign (BVS) framework (basic attention latency sub-component of BVS framework reflects delays in older adults, p < 0.05). The findings represent an initial critical step in developing, extracting, and

  1. Multimodal lexical processing in auditory cortex is literacy skill dependent.

    PubMed

    McNorgan, Chris; Awati, Neha; Desroches, Amy S; Booth, James R

    2014-09-01

    Literacy is a uniquely human cross-modal cognitive process wherein visual orthographic representations become associated with auditory phonological representations through experience. Developmental studies provide insight into how experience-dependent changes in brain organization influence phonological processing as a function of literacy. Previous investigations show a synchrony-dependent influence of letter presentation on individual phoneme processing in superior temporal sulcus; others demonstrate recruitment of primary and associative auditory cortex during cross-modal processing. We sought to determine whether brain regions supporting phonological processing of larger lexical units (monosyllabic words) over larger time windows is sensitive to cross-modal information, and whether such effects are literacy dependent. Twenty-two children (age 8-14 years) made rhyming judgments for sequentially presented word and pseudoword pairs presented either unimodally (auditory- or visual-only) or cross-modally (audiovisual). Regression analyses examined the relationship between literacy and congruency effects (overlapping orthography and phonology vs. overlapping phonology-only). We extend previous findings by showing that higher literacy is correlated with greater congruency effects in auditory cortex (i.e., planum temporale) only for cross-modal processing. These skill effects were specific to known words and occurred over a large time window, suggesting that multimodal integration in posterior auditory cortex is critical for fluent reading. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  2. Ultrasound Produces Extensive Brain Activation via a Cochlear Pathway.

    PubMed

    Guo, Hongsun; Hamilton, Mark; Offutt, Sarah J; Gloeckner, Cory D; Li, Tianqi; Kim, Yohan; Legon, Wynn; Alford, Jamu K; Lim, Hubert H

    2018-06-06

    Ultrasound (US) can noninvasively activate intact brain circuits, making it a promising neuromodulation technique. However, little is known about the underlying mechanism. Here, we apply transcranial US and perform brain mapping studies in guinea pigs using extracellular electrophysiology. We find that US elicits extensive activation across cortical and subcortical brain regions. However, transection of the auditory nerves or removal of cochlear fluids eliminates the US-induced activity, revealing an indirect auditory mechanism for US neural activation. Our findings indicate that US activates the ascending auditory system through a cochlear pathway, which can activate other non-auditory regions through cross-modal projections. This cochlear pathway mechanism challenges the idea that US can directly activate neurons in the intact brain, suggesting that future US stimulation studies will need to control for this effect to reach reliable conclusions. Copyright © 2018 Elsevier Inc. All rights reserved.

  3. Can spectro-temporal complexity explain the autistic pattern of performance on auditory tasks?

    PubMed

    Samson, Fabienne; Mottron, Laurent; Jemel, Boutheina; Belin, Pascal; Ciocca, Valter

    2006-01-01

    To test the hypothesis that the level of neural complexity explains the relative level of performance and brain activity in autistic individuals, available behavioural, ERP, and imaging findings related to the perception of increasingly complex auditory material under various processing tasks in autism were reviewed. Tasks involving simple material (pure tones) and/or low-level operations (detection, labelling, chord disembedding, detection of pitch changes) show a superior level of performance and shorter ERP latencies. In contrast, tasks involving spectrally and temporally dynamic material and/or complex operations (evaluation, attention) are poorly performed by autistics, or generate inferior ERP activity or brain activation. The neural complexity required to perform auditory tasks may therefore explain the pattern of performance and activation of autistic individuals during auditory tasks.

  4. Multichannel, Active Low-Pass Filters

    NASA Technical Reports Server (NTRS)

    Lev, James J.

    1989-01-01

    Multichannel integrated circuits cascaded to obtain matched characteristics. Gain and phase characteristics of channels of multichannel, multistage, active, low-pass filter matched by making filter of cascaded multichannel integrated-circuit operational amplifiers. Concept takes advantage of inherent equality of electrical characteristics of nominally-identical circuit elements made on same integrated-circuit chip. Characteristics of channels vary identically with changes in temperature. If additional matched channels needed, chips containing more than two operational amplifiers apiece (e.g., commercial quad operational amplifiers) used. Concept applicable to variety of equipment requiring matched gain and phase in multiple channels - radar, test instruments, communication circuits, and equipment for electronic countermeasures.
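    The matching idea above can be illustrated with an idealized model (an assumption for illustration, not the actual NTRS circuit): channels built from identical first-order low-pass sections have gain and phase responses that track exactly, and any component mismatch shows up directly in the ratio of the channel responses.

    ```python
    import numpy as np

    def cascaded_lowpass(f, fc, stages=2):
        """Complex response of `stages` identical first-order low-pass
        sections -- an idealized stand-in for matched on-chip op-amp stages."""
        return (1.0 / (1 + 1j * f / fc)) ** stages

    f = np.logspace(1, 5, 50)                        # 10 Hz .. 100 kHz
    ch_a = cascaded_lowpass(f, fc=1e3)               # two channels on one chip:
    ch_b = cascaded_lowpass(f, fc=1e3)               # identical corner frequencies
    gain_mismatch_db = 20 * np.log10(np.abs(ch_a / ch_b))  # identically zero
    phase_mismatch = np.angle(ch_a / ch_b)                 # identically zero

    # A 1% corner-frequency mismatch (as with off-chip parts) breaks the matching
    ch_c = cascaded_lowpass(f, fc=1.01e3)
    worst_phase_err = np.max(np.abs(np.angle(ch_a / ch_c)))
    ```

    The corner frequency, stage count, and frequency grid here are arbitrary illustrative choices.
    
    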

  5. A trade-off between somatosensory and auditory related brain activity during object naming but not reading.

    PubMed

    Seghier, Mohamed L; Hope, Thomas M H; Prejawa, Susan; Parker Jones, 'Ōiwi; Vitkovitch, Melanie; Price, Cathy J

    2015-03-18

    The parietal operculum, particularly the cytoarchitectonic area OP1 of the secondary somatosensory area (SII), is involved in somatosensory feedback. Using fMRI with 58 human subjects, we investigated task-dependent differences in SII/OP1 activity during three familiar speech production tasks: object naming, reading and repeatedly saying "1-2-3." Bilateral SII/OP1 was significantly suppressed (relative to rest) during object naming, to a lesser extent when repeatedly saying "1-2-3" and not at all during reading. These results cannot be explained by task difficulty but the contrasting difference between naming and reading illustrates how the demands on somatosensory activity change with task, even when motor output (i.e., production of object names) is matched. To investigate what determined SII/OP1 deactivation during object naming, we searched the whole brain for areas where activity increased as that in SII/OP1 decreased. This across subject covariance analysis revealed a region in the right superior temporal sulcus (STS) that lies within the auditory cortex, and is activated by auditory feedback during speech production. The tradeoff between activity in SII/OP1 and STS was not observed during reading, which showed significantly more activation than naming in both SII/OP1 and STS bilaterally. These findings suggest that, although object naming is more error prone than reading, subjects can afford to rely more or less on somatosensory or auditory feedback during naming. In contrast, fast and efficient error-free reading places more consistent demands on both types of feedback, perhaps because of the potential for increased competition between lexical and sublexical codes at the articulatory level. Copyright © 2015 Seghier et al.

  6. Music and natural sounds in an auditory steady-state response based brain-computer interface to increase user acceptance.

    PubMed

    Heo, Jeong; Baek, Hyun Jae; Hong, Seunghyeok; Chang, Min Hye; Lee, Jeong Su; Park, Kwang Suk

    2017-05-01

    Patients with total locked-in syndrome are conscious; however, they cannot express themselves because most of their voluntary muscles are paralyzed, and many of these patients have lost their eyesight. To improve the quality of life of these patients, there is an increasing need for communication-supporting technologies that leverage the remaining senses of the patient along with physiological signals. The auditory steady-state response (ASSR) is an electro-physiologic response to auditory stimulation that is amplitude-modulated by a specific frequency. By leveraging the phenomenon whereby ASSR is modulated by mind concentration, a brain-computer interface paradigm was proposed to classify the selective attention of the patient. In this paper, we propose an auditory stimulation method to minimize auditory stress by replacing the monotone carrier with familiar music and natural sounds for an ergonomic system. Piano and violin instrumentals were employed in the music sessions; the sounds of water streaming and cicadas singing were used in the natural sound sessions. Six healthy subjects participated in the experiment. Electroencephalograms were recorded using four electrodes (Cz, Oz, T7 and T8). Seven sessions were performed using different stimuli. The spectral power at 38 and 42 Hz and their ratio for each electrode were extracted as features. Linear discriminant analysis was utilized to classify the selections for each subject. In offline analysis, the average classification accuracies with a modulation index of 1.0 were 89.67% and 87.67% using music and natural sounds, respectively. In online experiments, the average classification accuracies were 88.3% and 80.0% using music and natural sounds, respectively. Using the proposed method, we obtained significantly higher user-acceptance scores, while maintaining a high average classification accuracy. Copyright © 2017 Elsevier Ltd. All rights reserved.
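    The feature extraction described above (spectral power at the two modulation frequencies) can be sketched as follows. The synthetic trial, sampling rate, and the simple power-comparison rule standing in for the paper's linear discriminant analysis are assumptions for illustration only.

    ```python
    import numpy as np

    def assr_features(eeg, fs, mod_freqs=(38.0, 42.0)):
        """Periodogram power at the ASSR modulation frequencies."""
        n = len(eeg)
        psd = np.abs(np.fft.rfft(eeg * np.hanning(n))) ** 2 / n
        freqs = np.fft.rfftfreq(n, 1.0 / fs)
        return [psd[np.argmin(np.abs(freqs - f))] for f in mod_freqs]

    # Synthetic 2 s single-channel trial at 256 Hz: attention to the
    # 38 Hz-modulated stream dominates the response
    rng = np.random.default_rng(0)
    fs = 256
    t = np.arange(0, 2, 1 / fs)
    eeg = (np.sin(2 * np.pi * 38 * t)
           + 0.1 * np.sin(2 * np.pi * 42 * t)
           + 0.05 * rng.standard_normal(t.size))
    p38, p42 = assr_features(eeg, fs)
    choice = "38 Hz stream" if p38 > p42 else "42 Hz stream"
    ```

    With a 2 s window the FFT bin spacing is 0.5 Hz, so 38 and 42 Hz fall exactly on bins; real trials would use the four recorded electrodes and a trained classifier.
    
    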

  7. Auditory-motor learning influences auditory memory for music.

    PubMed

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.

  8. Changes of the directional brain networks related with brain plasticity in patients with long-term unilateral sensorineural hearing loss.

    PubMed

    Zhang, G-Y; Yang, M; Liu, B; Huang, Z-C; Li, J; Chen, J-Y; Chen, H; Zhang, P-P; Liu, L-J; Wang, J; Teng, G-J

    2016-01-28

    Previous studies often report that early auditory deprivation or congenital deafness contributes to cross-modal reorganization in the auditory-deprived cortex, and this cross-modal reorganization limits clinical benefit from cochlear prosthetics. However, there are inconsistencies among study results on cortical reorganization in those subjects with long-term unilateral sensorineural hearing loss (USNHL). It is also unclear whether there exists a similar cross-modal plasticity of the auditory cortex for acquired monaural deafness and early or congenital deafness. To address this issue, we constructed directional brain functional networks based on entropy connectivity of resting-state functional MRI and examined changes in these networks. Thirty-four long-term USNHL individuals and seventeen normally hearing individuals participated in the test, and all USNHL patients had acquired deafness. We found that certain brain regions of the sensorimotor and visual networks presented enhanced synchronous output entropy connectivity with the left primary auditory cortex in left long-term USNHL individuals as compared with normally hearing individuals. In particular, the left USNHL group showed more significant changes in entropy connectivity than the right USNHL group. No significant plastic changes were observed in the right USNHL group. Our results indicate that the left primary auditory cortex (non-auditory-deprived cortex) in patients with left USNHL has been reorganized by visual and sensorimotor modalities through cross-modal plasticity. Furthermore, the cross-modal reorganization also alters the directional brain functional networks. Auditory deprivation from the left or right side generates different influences on the human brain. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  9. The frequency modulated auditory evoked response (FMAER), a technical advance for study of childhood language disorders: cortical source localization and selected case studies

    PubMed Central

    2013-01-01

    Background Language comprehension requires decoding of complex, rapidly changing speech streams. Detecting changes of frequency modulation (FM) within speech is hypothesized as essential for accurate phoneme detection, and thus, for spoken word comprehension. Despite past demonstration of FM auditory evoked response (FMAER) utility in language disorder investigations, it is seldom utilized clinically. This report's purpose is to facilitate clinical use by explaining analytic pitfalls, demonstrating sites of cortical origin, and illustrating potential utility. Results FMAERs collected from children with language disorders, including Developmental Dysphasia, Landau-Kleffner syndrome (LKS), and autism spectrum disorder (ASD) and also normal controls - utilizing multi-channel reference-free recordings assisted by discrete source analysis - provided demonstrations of cortical origin and examples of clinical utility. Recordings from inpatient epileptics with indwelling cortical electrodes provided direct assessment of FMAER origin. The FMAER is shown to normally arise from bilateral posterior superior temporal gyri and immediate temporal lobe surround. Childhood language disorders associated with prominent receptive deficits demonstrate absent left or bilateral FMAER temporal lobe responses. When receptive language is spared, the FMAER may remain present bilaterally. Analyses based upon mastoid or ear reference electrodes are shown to result in erroneous conclusions. Serial FMAER studies may dynamically track status of underlying language processing in LKS. FMAERs in ASD with language impairment may be normal or abnormal. Cortical FMAERs can locate language cortex when conventional cortical stimulation does not. Conclusion The FMAER measures the processing by the superior temporal gyri and adjacent cortex of rapid frequency modulation within an auditory stream. Clinical disorders associated with receptive deficits are shown to demonstrate absent left or bilateral

  10. Multiple sound source localization using gammatone auditory filtering and direct sound componence detection

    NASA Astrophysics Data System (ADS)

    Chen, Huaiyu; Cao, Li

    2017-06-01

    In order to research multiple sound source localization with room reverberation and background noise, we analyze the shortcomings of the traditional broadband MUSIC method and of ordinary auditory-filtering-based broadband MUSIC, and then propose a new broadband MUSIC algorithm with gammatone auditory filtering, frequency component selection control, and detection of the ascending segment of the direct sound componence. The proposed algorithm controls frequency components within the frequency band of interest in the multichannel bandpass filter stage. Detecting the direct sound componence of the sound source to suppress room reverberation interference is also proposed, whose merits are fast calculation and avoiding the use of more complex de-reverberation processing algorithms. Besides, the pseudo-spectra of different frequency channels are weighted by their maximum amplitude for every speech frame. The proposed method performs well in both simulation and real room reverberation experiments. Dynamic multiple sound source localization results indicate that the average absolute error of the azimuth estimated by the proposed algorithm is smaller and the histogram result has higher angular resolution.
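    The core step underlying any such broadband variant is the narrowband MUSIC pseudo-spectrum computed per frequency channel. A minimal numpy sketch of that core is below; the gammatone filter bank, direct-sound detection, and per-frame amplitude weighting from the paper are omitted, and the array geometry, source angle, and frequency are illustrative assumptions.

    ```python
    import numpy as np

    def music_spectrum(X, n_src, mic_pos, freq,
                       angles=np.linspace(-90, 90, 181), c=343.0):
        """Narrowband MUSIC pseudo-spectrum for a linear microphone array.

        X: (mics, snapshots) complex narrowband snapshots at one frequency."""
        R = X @ X.conj().T / X.shape[1]            # spatial covariance
        _, vecs = np.linalg.eigh(R)                # eigenvalues ascending
        En = vecs[:, : X.shape[0] - n_src]         # noise subspace
        k = 2 * np.pi * freq / c
        spec = np.empty(angles.size)
        for i, a in enumerate(np.deg2rad(angles)):
            sv = np.exp(-1j * k * mic_pos * np.sin(a)) / np.sqrt(mic_pos.size)
            spec[i] = 1.0 / np.abs(sv.conj() @ En @ En.conj().T @ sv)
        return angles, spec

    # One 1 kHz source at +30 degrees, 4 mics at half-wavelength spacing
    rng = np.random.default_rng(1)
    freq, c = 1000.0, 343.0
    mics = np.arange(4) * (c / freq / 2)
    sv = np.exp(-1j * 2 * np.pi * freq / c * mics * np.sin(np.deg2rad(30)))
    s = rng.standard_normal(200) + 1j * rng.standard_normal(200)
    X = sv[:, None] * s + 0.01 * (rng.standard_normal((4, 200))
                                  + 1j * rng.standard_normal((4, 200)))
    angles, spec = music_spectrum(X, n_src=1, mic_pos=mics, freq=freq)
    est = angles[np.argmax(spec)]                  # pseudo-spectrum peaks near +30
    ```

    A broadband version would repeat this per auditory channel and combine the weighted pseudo-spectra.
    
    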

  11. Baseline vestibular and auditory findings in a trial of post-concussive syndrome

    PubMed

    Meehan, Anna; Searing, Elizabeth; Weaver, Lindell; Lewandowski, Andrew

    2016-01-01

    Previous studies have reported high rates of auditory and vestibular-balance deficits immediately following head injury. This study uses a comprehensive battery of assessments to characterize auditory and vestibular function in 71 U.S. military service members with chronic symptoms following mild traumatic brain injury that did not resolve with traditional interventions. The majority of the study population reported hearing loss (70%) and recent vestibular symptoms (83%). Central auditory deficits were most prevalent, with 58% of participants failing the SCAN3:A screening test and 45% showing abnormal responses on auditory steady-state response testing presented at a suprathreshold intensity. Only 17% of the participants had abnormal hearing (>25 dB hearing loss) based on the pure-tone average. Objective vestibular testing supported significant deficits in this population, regardless of whether the participant self-reported active symptoms. The composite score on the Sensory Organization Test was lower than expected from normative data (mean 69.6 ± 15.6). High abnormality rates were found in funduscopy torsion (58%), oculomotor assessments (49%), ocular and cervical vestibular evoked myogenic potentials (46% and 33%, respectively), and monothermal calorics (40%). It is recommended that a full peripheral and central auditory, oculomotor, and vestibular-balance evaluation be completed on military service members who have sustained head trauma. Keywords: vestibular tests, vestibulo-ocular reflex, central auditory dysfunction, mild traumatic brain injury, post-concussive symptoms, hearing.

  12. Simultaneous acquisition of multiple auditory-motor transformations in speech

    PubMed Central

    Rochet-Capellan, Amelie; Ostry, David J.

    2011-01-01

    The brain easily generates the movement that is needed in a given situation. Yet surprisingly, the results of experimental studies suggest that it is difficult to acquire more than one skill at a time. To do so, it has generally been necessary to link the required movement to arbitrary cues. In the present study, we show that speech motor learning provides an informative model for the acquisition of multiple sensorimotor skills. During training, subjects are required to repeat aloud individual words in random order while auditory feedback is altered in real-time in different ways for the different words. We find that subjects can quite readily and simultaneously modify their speech movements to correct for these different auditory transformations. This multiple learning occurs effortlessly without explicit cues and without any apparent awareness of the perturbation. The ability to simultaneously learn several different auditory-motor transformations is consistent with the idea that in speech motor learning, the brain acquires instance specific memories. The results support the hypothesis that speech motor learning is fundamentally local. PMID:21325534

  13. The impact of variation in low-frequency interaural cross correlation on auditory spatial imagery in stereophonic loudspeaker reproduction

    NASA Astrophysics Data System (ADS)

    Martens, William

    2005-04-01

    Several attributes of auditory spatial imagery associated with stereophonic sound reproduction are strongly modulated by variation in interaural cross correlation (IACC) within low frequency bands. Nonetheless, a standard practice in bass management for two-channel and multichannel loudspeaker reproduction is to mix low-frequency musical content to a single channel for reproduction via a single driver (e.g., a subwoofer). This paper reviews the results of psychoacoustic studies which support the conclusion that reproduction via multiple drivers of decorrelated low-frequency signals significantly affects such important spatial attributes as auditory source width (ASW), auditory source distance (ASD), and listener envelopment (LEV). A variety of methods have been employed in these tests, including forced choice discrimination and identification, and direct ratings of both global dissimilarity and distinct attributes. Contrary to assumptions that underlie industrial standards established in 1994 by ITU-R Recommendation BS.775-1, these findings imply that substantial stereophonic spatial information exists within audio signals at frequencies below the 80 to 120 Hz range of prescribed subwoofer cutoff frequencies, and that loudspeaker reproduction of decorrelated signals at frequencies as low as 50 Hz can have an impact upon auditory spatial imagery. [Work supported by VRQ.]
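    IACC in this literature is commonly defined as the peak of the normalized interaural cross-correlation over short lags (about ±1 ms, roughly the maximum interaural delay). A minimal sketch follows; the lag range and the white-noise test signals are assumptions here, not the stimuli or band-limiting used in the reviewed studies.

    ```python
    import numpy as np

    def iacc(left, right, fs, max_lag_ms=1.0):
        """Peak normalized interaural cross-correlation over +/- max_lag_ms."""
        max_lag = int(fs * max_lag_ms / 1000.0)
        norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
        best = 0.0
        for lag in range(-max_lag, max_lag + 1):
            l = left[max(0, -lag): len(left) - max(lag, 0)]
            r = right[max(0, lag): len(right) - max(-lag, 0)]
            best = max(best, abs(np.sum(l * r)) / norm)
        return best

    rng = np.random.default_rng(2)
    fs = 48000
    mono = rng.standard_normal(fs)      # 1 s of noise sent to both ears
    decorr = rng.standard_normal(fs)    # independent noise per ear
    full = iacc(mono, mono, fs)         # identical signals: IACC near 1
    low = iacc(mono, decorr, fs)        # decorrelated signals: IACC near 0
    ```

    Mixing the bass to a single subwoofer forces the low-frequency IACC toward 1, which is exactly the condition the reviewed studies found to reduce ASW and LEV.
    
    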

  14. Auditory memory function in expert chess players

    PubMed Central

    Fattahi, Fariba; Geshani, Ahmad; Jafari, Zahra; Jalaie, Shohreh; Salman Mahini, Mona

    2015-01-01

    Background: Chess is a game that involves many aspects of high level cognition such as memory, attention, focus and problem solving. Long term practice of chess can improve cognition performances and behavioral skills. Auditory memory, as a kind of memory, can be influenced by strengthening processes following long term chess playing like other behavioral skills because of common processing pathways in the brain. The purpose of this study was to evaluate the auditory memory function of expert chess players using the Persian version of dichotic auditory-verbal memory test. Methods: The Persian version of dichotic auditory-verbal memory test was performed for 30 expert chess players aged 20-35 years and 30 non-chess players who were matched by different conditions; the participants in both groups were randomly selected. The performance of the two groups was compared by independent samples t-test using SPSS version 21. Results: The mean score of dichotic auditory-verbal memory test between the two groups, expert chess players and non-chess players, revealed a significant difference (p ≤ 0.001). The difference between the ears scores for expert chess players (p = 0.023) and non-chess players (p = 0.013) was significant. Gender had no effect on the test results. Conclusion: Auditory memory function in expert chess players was significantly better compared to non-chess players. It seems that increased auditory memory function is related to strengthening cognitive performances due to playing chess for a long time. PMID:26793666

  15. Auditory memory function in expert chess players.

    PubMed

    Fattahi, Fariba; Geshani, Ahmad; Jafari, Zahra; Jalaie, Shohreh; Salman Mahini, Mona

    2015-01-01

    Chess is a game that involves many aspects of high level cognition such as memory, attention, focus and problem solving. Long term practice of chess can improve cognition performances and behavioral skills. Auditory memory, as a kind of memory, can be influenced by strengthening processes following long term chess playing like other behavioral skills because of common processing pathways in the brain. The purpose of this study was to evaluate the auditory memory function of expert chess players using the Persian version of dichotic auditory-verbal memory test. The Persian version of dichotic auditory-verbal memory test was performed for 30 expert chess players aged 20-35 years and 30 non-chess players who were matched by different conditions; the participants in both groups were randomly selected. The performance of the two groups was compared by independent samples t-test using SPSS version 21. The mean score of dichotic auditory-verbal memory test between the two groups, expert chess players and non-chess players, revealed a significant difference (p ≤ 0.001). The difference between the ears scores for expert chess players (p = 0.023) and non-chess players (p = 0.013) was significant. Gender had no effect on the test results. Auditory memory function in expert chess players was significantly better compared to non-chess players. It seems that increased auditory memory function is related to strengthening cognitive performances due to playing chess for a long time.
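    The group comparison reported above reduces to a pooled-variance independent-samples t statistic. A numpy-only sketch is below; the study used SPSS, and the group means, spread, and scores here are made-up numbers for illustration.

    ```python
    import numpy as np

    def independent_t(a, b):
        """Two-sample Student t statistic with pooled variance."""
        na, nb = len(a), len(b)
        sp2 = ((na - 1) * np.var(a, ddof=1)
               + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
        return (np.mean(a) - np.mean(b)) / np.sqrt(sp2 * (1.0 / na + 1.0 / nb))

    # Hypothetical memory-test scores for 30 players per group
    rng = np.random.default_rng(3)
    chess = rng.normal(21.0, 2.0, 30)
    controls = rng.normal(18.0, 2.0, 30)
    t_stat = independent_t(chess, controls)   # large positive t: group difference
    ```

    The p-value would then come from the t distribution with na + nb − 2 = 58 degrees of freedom.
    
    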

  16. Reboxetine Improves Auditory Attention and Increases Norepinephrine Levels in the Auditory Cortex of Chronically Stressed Rats

    PubMed Central

    Pérez-Valenzuela, Catherine; Gárate-Pérez, Macarena F.; Sotomayor-Zárate, Ramón; Delano, Paul H.; Dagnino-Subiabre, Alexies

    2016-01-01

    Chronic stress impairs auditory attention in rats and monoamines regulate neurotransmission in the primary auditory cortex (A1), a brain area that modulates auditory attention. In this context, we hypothesized that norepinephrine (NE) levels in A1 correlate with the auditory attention performance of chronically stressed rats. The first objective of this research was to evaluate whether chronic stress affects monoamine levels in A1. Male Sprague–Dawley rats were subjected to chronic stress (restraint stress) and monoamine levels were measured by high-performance liquid chromatography (HPLC) with electrochemical detection. Chronically stressed rats had lower levels of NE in A1 than did controls, while chronic stress did not affect serotonin (5-HT) and dopamine (DA) levels. The second aim was to determine the effects of reboxetine (a selective inhibitor of NE reuptake) on auditory attention and NE levels in A1. Rats were trained to discriminate between two tones of different frequencies in a two-alternative choice task (2-ACT), a behavioral paradigm to study auditory attention in rats. Trained animals that reached a performance of ≥80% correct trials in the 2-ACT were randomly assigned to control and stress experimental groups. To analyze the effects of chronic stress on the auditory task, trained rats of both groups were subjected to 50 2-ACT trials 1 day before and 1 day after the chronic stress period. A difference score (DS) was determined by subtracting the number of correct trials after the chronic stress protocol from those before. An unexpected result was that vehicle-treated control rats and vehicle-treated chronically stressed rats had similar performances in the attentional task, suggesting that repeated injections with vehicle were stressful for control animals and deteriorated their auditory attention. In this regard, both auditory attention and NE levels in A1 were higher in chronically stressed rats treated with reboxetine than in vehicle

  17. Recovery function of the human brain stem auditory-evoked potential.

    PubMed

    Kevanishvili, Z; Lagidze, Z

    1979-01-01

    Amplitude reduction and peak latency prolongation were observed in the human brain stem auditory-evoked potential (BEP) with preceding (conditioning) stimulation. At a conditioning interval (CI) of 5 ms the alteration of BEP was greater than at a CI of 10 ms. At a CI of 10 ms the amplitudes of some BEP components (e.g. waves I and II) were more decreased than those of others (e.g. wave V), while the peak latency prolongation did not show any obvious component selectivity. At a CI of 5 ms, the extent of the amplitude decrement of individual BEP components differed less, while the increase in the peak latencies of the later components was greater than that of the earlier components. The alterations of the parameters of the test BEPs at both CIs are ascribed to the desynchronization of intrinsic neural events. The differential amplitude reduction at a CI of 10 ms is explained by the different durations of neural firings determining various effects of desynchronization upon the amplitudes of individual BEP components. The decrease in the extent of the component selectivity and the preferential increase in the peak latencies of the later BEP components observed at a CI of 5 ms are explained by the intensification of the mechanism of the relative refractory period.

  18. No auditory experience, no tinnitus: Lessons from subjects with congenital and acquired single-sided deafness.

    PubMed

    Lee, Sang-Yeon; Nam, Dong Woo; Koo, Ja-Won; De Ridder, Dirk; Vanneste, Sven; Song, Jae-Jin

    2017-10-01

    Recent studies have adopted the Bayesian brain model to explain the generation of tinnitus in subjects with auditory deafferentation. That is, as the human brain works in a Bayesian manner to reduce environmental uncertainty, missing auditory information due to hearing loss may cause auditory phantom percepts, i.e., tinnitus. This type of deafferentation-induced auditory phantom percept should be preceded by auditory experience because the fill-in phenomenon, namely tinnitus, is based upon auditory prediction and the resultant prediction error. For example, a recent animal study observed the absence of tinnitus in cats with congenital single-sided deafness (SSD; Eggermont and Kral, Hear Res 2016). However, no human studies have investigated the presence and characteristics of tinnitus in subjects with congenital SSD. Thus, the present study sought to reveal differences in the generation of tinnitus between subjects with congenital SSD and those with acquired SSD to evaluate the replicability of previous animal studies. This study enrolled 20 subjects with congenital SSD and 44 subjects with acquired SSD and examined the presence and characteristics of tinnitus in the groups. None of the 20 subjects with congenital SSD perceived tinnitus on the affected side, whereas 30 of 44 subjects with acquired SSD experienced tinnitus on the affected side. Additionally, there were significant positive correlations between tinnitus characteristics and the audiometric characteristics of the SSD. In accordance with the findings of the recent animal study, tinnitus was absent in subjects with congenital SSD, but relatively frequent in subjects with acquired SSD, which suggests that the development of tinnitus should be preceded by auditory experience. In other words, subjects with profound congenital peripheral deafferentation do not develop auditory phantom percepts because no auditory predictions are available from the Bayesian brain. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Auditory middle latency responses differ in right- and left-handed subjects: an evaluation through topographic brain mapping.

    PubMed

    Mohebbi, Mehrnaz; Mahmoudian, Saeid; Alborzi, Marzieh Sharifian; Najafi-Koopaie, Mojtaba; Farahani, Ehsan Darestani; Farhadi, Mohammad

    2014-09-01

    We investigated the association of handedness with auditory middle latency responses (AMLRs) using topographic brain mapping, comparing amplitudes and latencies in frontocentral and hemispheric regions of interest (ROIs). The study included 44 healthy subjects with normal hearing (22 left handed and 22 right handed). AMLRs were recorded from 29 scalp electrodes in response to binaural 4-kHz tone bursts. Frontocentral ROI comparisons revealed that Pa and Pb amplitudes were significantly larger in the left-handed than the right-handed group. Topographic brain maps showed different distributions in AMLR components between the two groups. In hemispheric comparisons, Pa amplitude differed significantly across groups. A left-hemisphere emphasis of Pa was found in the right-handed group but not in the left-handed group. This study provides evidence that handedness is associated with AMLR components in frontocentral and hemispheric ROIs. Handedness should be considered an essential factor in the clinical or experimental use of AMLRs.

  20. Seeing sounds and hearing colors: an event-related potential study of auditory-visual synesthesia.

    PubMed

    Goller, Aviva I; Otten, Leun J; Ward, Jamie

    2009-10-01

    In auditory-visual synesthesia, sounds automatically elicit conscious and reliable visual experiences. It is presently unknown whether this reflects early or late processes in the brain. It is also unknown whether adult audiovisual synesthesia resembles auditory-induced visual illusions that can sometimes occur in the general population or whether it resembles the electrophysiological deflection over occipital sites that has been noted in infancy and has been likened to synesthesia. Electrical brain activity was recorded from adult synesthetes and control participants who were played brief tones and required to monitor for an infrequent auditory target. The synesthetes were instructed to attend either to the auditory or to the visual (i.e., synesthetic) dimension of the tone, whereas the controls attended to the auditory dimension alone. There were clear differences between synesthetes and controls that emerged early (100 msec after tone onset). These differences tended to lie in deflections of the auditory-evoked potential (e.g., the auditory N1, P2, and N2) rather than the presence of an additional posterior deflection. The differences occurred irrespective of what the synesthetes attended to (although attention had a late effect). The results suggest that differences between synesthetes and others occur early in time, and that synesthesia is qualitatively different from similar effects found in infants and certain auditory-induced visual illusions in adults. In addition, we report two novel cases of synesthesia in which colors elicit sounds, and vice versa.

  1. Behavioral and brain pattern differences between acting and observing in an auditory task

    PubMed Central

    Karanasiou, Irene S; Papageorgiou, Charalabos; Tsianaka, Eleni I; Matsopoulos, George K; Ventouras, Errikos M; Uzunoglu, Nikolaos K

    2009-01-01

    Background Recent research has shown that errors seem to influence the patterns of brain activity. Additionally, current notions support the idea that similar brain mechanisms are activated during acting and observing. The aim of the present study was to examine the patterns of brain activity of actors and observers elicited upon receiving feedback information about the actor's response. Methods The task used in the present research was an auditory identification task that included both acting and observing settings, ensuring concurrent ERP measurements of both participants. The performance of the participants was investigated in conditions of varying complexity. ERP data were analyzed with regard to the conditions of acting and observing, in conjunction with correct and erroneous responses. Results The obtained results showed that the complexity induced by cue dissimilarity between trials was a demodulating factor leading to poorer performance. The electrophysiological results suggest that feedback information results in different intensities of the ERP patterns of observers and actors depending on whether the actor had made an error or not. The LORETA source localization method yielded significantly larger electrical activity in the supplementary motor area (Brodmann area 6), the posterior cingulate gyrus (Brodmann area 31/23), and the parietal lobe (precuneus/Brodmann area 7/5). Conclusion These findings suggest that feedback information has a different effect on the intensities of the ERP patterns of actors and observers depending on whether the actor committed an error. Certain neural systems, including the medial frontal area, posterior cingulate gyrus, and precuneus, may mediate these modulating effects. Further research is needed to elucidate in more detail the neuroanatomical and neuropsychological substrates of these systems. PMID:19154586

  2. The harmonic organization of auditory cortex

    PubMed Central

    Wang, Xiaoqin

    2013-01-01

    A fundamental structure of sounds encountered in the natural environment is harmonicity. Harmonicity is an essential component of music found in all cultures. It is also a unique feature of vocal communication sounds such as human speech and animal vocalizations. Harmonics in sounds are produced by a variety of acoustic generators and reflectors in the natural environment, including the vocal apparatuses of humans and animal species as well as musical instruments of many types. We live in an acoustic world full of harmonicity. Given the widespread existence of harmonicity in many aspects of the hearing environment, it is natural to expect that it be reflected in the evolution and development of the auditory systems of both humans and animals, in particular the auditory cortex. Recent neuroimaging and neurophysiology experiments have identified regions of non-primary auditory cortex in humans and non-human primates that have selective responses to harmonic pitches. Accumulating evidence has also shown that neurons in many regions of the auditory cortex exhibit characteristic responses to harmonically related frequencies beyond the range of pitch. Together, these findings suggest that a fundamental organizational principle of auditory cortex is based on harmonicity. Such an organization likely plays an important role in music processing by the brain. It may also form the basis of the preference for particular classes of music and voice sounds. PMID:24381544

  3. The harmonic organization of auditory cortex.

    PubMed

    Wang, Xiaoqin

    2013-12-17

    A fundamental structure of sounds encountered in the natural environment is harmonicity. Harmonicity is an essential component of music found in all cultures. It is also a unique feature of vocal communication sounds such as human speech and animal vocalizations. Harmonics in sounds are produced by a variety of acoustic generators and reflectors in the natural environment, including the vocal apparatuses of humans and animal species as well as musical instruments of many types. We live in an acoustic world full of harmonicity. Given the widespread existence of harmonicity in many aspects of the hearing environment, it is natural to expect that it be reflected in the evolution and development of the auditory systems of both humans and animals, in particular the auditory cortex. Recent neuroimaging and neurophysiology experiments have identified regions of non-primary auditory cortex in humans and non-human primates that have selective responses to harmonic pitches. Accumulating evidence has also shown that neurons in many regions of the auditory cortex exhibit characteristic responses to harmonically related frequencies beyond the range of pitch. Together, these findings suggest that a fundamental organizational principle of auditory cortex is based on harmonicity. Such an organization likely plays an important role in music processing by the brain. It may also form the basis of the preference for particular classes of music and voice sounds.

  4. Tracking the voluntary control of auditory spatial attention with event-related brain potentials.

    PubMed

    Störmer, Viola S; Green, Jessica J; McDonald, John J

    2009-03-01

    A lateralized event-related potential (ERP) component elicited by attention-directing cues (ADAN) has been linked to frontal-lobe control but is often absent when spatial attention is deployed in the auditory modality. Here, we tested the hypothesis that ERP activity associated with frontal-lobe control of auditory spatial attention is distributed bilaterally by comparing ERPs elicited by attention-directing cues and neutral cues in a unimodal auditory task. This revealed an initial ERP positivity over the anterior scalp and a later ERP negativity over the parietal scalp. Distributed source analysis indicated that the anterior positivity was generated primarily in bilateral prefrontal cortices, whereas the more posterior negativity was generated in parietal and temporal cortices. The anterior ERP positivity likely reflects frontal-lobe attentional control, whereas the subsequent ERP negativity likely reflects anticipatory biasing of activity in auditory cortex.

  5. Human-Avatar Symbiosis for the Treatment of Auditory Verbal Hallucinations in Schizophrenia through Virtual/Augmented Reality and Brain-Computer Interfaces

    PubMed Central

    Fernández-Caballero, Antonio; Navarro, Elena; Fernández-Sotos, Patricia; González, Pascual; Ricarte, Jorge J.; Latorre, José M.; Rodriguez-Jimenez, Roberto

    2017-01-01

    This perspective paper addresses the future of alternative treatments for auditory verbal hallucinations (AVH) in patients with schizophrenia that complement pharmacological therapy with a social and cognitive approach. AVH, the perception of voices in the absence of auditory stimulation, are a severe mental health symptom. Virtual/augmented reality (VR/AR) and brain-computer interfaces (BCI) are technologies that are increasingly used in medical and psychological applications. Our position is that their combined use in computer-based therapies offers still unforeseen possibilities for the treatment of physical and mental disabilities. The paper therefore anticipates that researchers and clinicians will follow a pathway toward human-avatar symbiosis for AVH by taking full advantage of these new technologies. This outlook entails addressing challenging issues in the understanding of the non-pharmacological treatment of schizophrenia-related disorders and in the exploitation of VR/AR and BCI to achieve a true human-avatar symbiosis. PMID:29209193

  6. Human-Avatar Symbiosis for the Treatment of Auditory Verbal Hallucinations in Schizophrenia through Virtual/Augmented Reality and Brain-Computer Interfaces.

    PubMed

    Fernández-Caballero, Antonio; Navarro, Elena; Fernández-Sotos, Patricia; González, Pascual; Ricarte, Jorge J; Latorre, José M; Rodriguez-Jimenez, Roberto

    2017-01-01

    This perspective paper addresses the future of alternative treatments for auditory verbal hallucinations (AVH) in patients with schizophrenia that complement pharmacological therapy with a social and cognitive approach. AVH, the perception of voices in the absence of auditory stimulation, are a severe mental health symptom. Virtual/augmented reality (VR/AR) and brain-computer interfaces (BCI) are technologies that are increasingly used in medical and psychological applications. Our position is that their combined use in computer-based therapies offers still unforeseen possibilities for the treatment of physical and mental disabilities. The paper therefore anticipates that researchers and clinicians will follow a pathway toward human-avatar symbiosis for AVH by taking full advantage of these new technologies. This outlook entails addressing challenging issues in the understanding of the non-pharmacological treatment of schizophrenia-related disorders and in the exploitation of VR/AR and BCI to achieve a true human-avatar symbiosis.

  7. Brain activity during divided and selective attention to auditory and visual sentence comprehension tasks

    PubMed Central

    Moisala, Mona; Salmela, Viljami; Salo, Emma; Carlson, Synnöve; Vuontela, Virve; Salonen, Oili; Alho, Kimmo

    2015-01-01

    Using functional magnetic resonance imaging (fMRI), we measured brain activity of human participants while they performed a sentence congruence judgment task in either the visual or auditory modality separately, or in both modalities simultaneously. Significant performance decrements were observed when attention was divided between the two modalities compared with when one modality was selectively attended. Compared with selective attention (i.e., single tasking), divided attention (i.e., dual-tasking) did not recruit additional cortical regions, but resulted in increased activity in medial and lateral frontal regions which were also activated by the component tasks when performed separately. Areas involved in semantic language processing were revealed predominantly in the left lateral prefrontal cortex by contrasting incongruent with congruent sentences. These areas also showed significant activity increases during divided attention in relation to selective attention. In the sensory cortices, no crossmodal inhibition was observed during divided attention when compared with selective attention to one modality. Our results suggest that the observed performance decrements during dual-tasking are due to interference of the two tasks because they utilize the same part of the cortex. Moreover, semantic dual-tasking did not appear to recruit additional brain areas in comparison with single tasking, and no crossmodal inhibition was observed during intermodal divided attention. PMID:25745395

  8. Brain activity during divided and selective attention to auditory and visual sentence comprehension tasks.

    PubMed

    Moisala, Mona; Salmela, Viljami; Salo, Emma; Carlson, Synnöve; Vuontela, Virve; Salonen, Oili; Alho, Kimmo

    2015-01-01

    Using functional magnetic resonance imaging (fMRI), we measured brain activity of human participants while they performed a sentence congruence judgment task in either the visual or auditory modality separately, or in both modalities simultaneously. Significant performance decrements were observed when attention was divided between the two modalities compared with when one modality was selectively attended. Compared with selective attention (i.e., single tasking), divided attention (i.e., dual-tasking) did not recruit additional cortical regions, but resulted in increased activity in medial and lateral frontal regions which were also activated by the component tasks when performed separately. Areas involved in semantic language processing were revealed predominantly in the left lateral prefrontal cortex by contrasting incongruent with congruent sentences. These areas also showed significant activity increases during divided attention in relation to selective attention. In the sensory cortices, no crossmodal inhibition was observed during divided attention when compared with selective attention to one modality. Our results suggest that the observed performance decrements during dual-tasking are due to interference of the two tasks because they utilize the same part of the cortex. Moreover, semantic dual-tasking did not appear to recruit additional brain areas in comparison with single tasking, and no crossmodal inhibition was observed during intermodal divided attention.

  9. The role of auditory transient and deviance processing in distraction of task performance: a combined behavioral and event-related brain potential study

    PubMed Central

    Berti, Stefan

    2013-01-01

    Distraction of goal-oriented performance by a sudden change in the auditory environment is an everyday life experience. Different types of changes can be distracting, including a sudden onset of a transient sound and a slight deviation of otherwise regular auditory background stimulation. With regard to deviance detection, it is assumed that slight changes in a continuous sequence of auditory stimuli are detected by a predictive coding mechanism, and it has been demonstrated that this mechanism is capable of distracting ongoing task performance. In contrast, it remains open whether transient detection—which does not rely on predictive coding mechanisms—can trigger behavioral distraction, too. In the present study, the effect of rare auditory changes on visual task performance is tested in an auditory-visual cross-modal distraction paradigm. The rare changes are either embedded within a continuous standard stimulation (triggering deviance detection) or are presented within an otherwise silent situation (triggering transient detection). In the event-related brain potentials, deviants elicited the mismatch negativity (MMN) while transients elicited an enhanced N1 component, mirroring pre-attentive change detection in both conditions but on the basis of different neuro-cognitive processes. These sensory components are followed by attention-related ERP components including the P3a and the reorienting negativity (RON). This demonstrates that both types of changes trigger switches of attention. Finally, distraction of task performance is observable, too, but the impact of deviants is greater than that of transients. These findings suggest different routes of distraction allowing for the automatic processing of a wide range of potentially relevant changes in the environment as a prerequisite for adaptive behavior. PMID:23874278

  10. Evidence of a visual-to-auditory cross-modal sensory gating phenomenon as reflected by the human P50 event-related brain potential modulation.

    PubMed

    Lebib, Riadh; Papo, David; de Bode, Stella; Baudonnière, Pierre Marie

    2003-05-08

    We investigated the existence of a cross-modal sensory gating reflected by the modulation of an early electrophysiological index, the P50 component. We analyzed event-related brain potentials elicited by audiovisual speech stimuli manipulated along two dimensions: congruency and discriminability. The results showed that the P50 was attenuated when visual and auditory speech information were redundant (i.e. congruent), in comparison with this same event-related potential component elicited with discrepant audiovisual dubbing. When hard to discriminate, however, bimodal incongruent speech stimuli elicited a similar pattern of P50 attenuation. We concluded that a visual-to-auditory cross-modal sensory gating phenomenon exists. These results corroborate previous findings revealing a very early audiovisual interaction during speech perception. Finally, we postulated that the sensory gating system includes a cross-modal dimension.

  11. Auditory peripersonal space in humans.

    PubMed

    Farnè, Alessandro; Làdavas, Elisabetta

    2002-10-01

    In the present study we report neuropsychological evidence of the existence of an auditory peripersonal space representation around the head in humans and its characteristics. In a group of right brain-damaged patients with tactile extinction, we found that a sound delivered near the ipsilesional side of the head (20 cm) strongly extinguished a tactile stimulus delivered to the contralesional side of the head (cross-modal auditory-tactile extinction). By contrast, when an auditory stimulus was presented far from the head (70 cm), cross-modal extinction was dramatically reduced. This spatially specific cross-modal extinction was most consistently found (i.e., both in the front and back spaces) when a complex sound was presented, like a white noise burst. Pure tones produced spatially specific cross-modal extinction when presented in the back space, but not in the front space. In addition, the most severe cross-modal extinction emerged when sounds came from behind the head, thus showing that the back space is more sensitive than the front space to the sensory interaction of auditory-tactile inputs. Finally, when cross-modal effects were investigated by reversing the spatial arrangement of cross-modal stimuli (i.e., touch on the right and sound on the left), we found that an ipsilesional tactile stimulus, although inducing a small amount of cross-modal tactile-auditory extinction, did not produce any spatial-specific effect. Therefore, the selective aspects of cross-modal interaction found near the head cannot be explained by a competition between a damaged left spatial representation and an intact right spatial representation. Thus, consistent with neurophysiological evidence from monkeys, our findings strongly support the existence, in humans, of an integrated cross-modal system coding auditory and tactile stimuli near the body, that is, in the peripersonal space.

  12. Brainstem auditory evoked responses in an equine patient population: part I--adult horses.

    PubMed

    Aleman, M; Holliday, T A; Nieto, J E; Williams, D C

    2014-01-01

    The brainstem auditory evoked response (BAER) has been an underused diagnostic modality in horses, as evidenced by few reports on the subject. To describe BAER findings, common clinical signs, and causes of hearing loss in adult horses. Study group, 76 horses; control group, 8 horses. Retrospective. BAER records from the Clinical Neurophysiology Laboratory were reviewed from the years 1982 to 2013. Peak latencies, amplitudes, and interpeak intervals were measured when visible. Horses were grouped under disease categories. Descriptive statistics and a post hoc Bonferroni test were performed. Fifty-seven of 76 horses had BAER deficits. There was no breed or sex predisposition, with the exception of American Paint horses diagnosed with congenital sensorineural deafness. Eighty-six percent (n = 49/57) of the horses were younger than 16 years of age. The most common causes of BAER abnormalities were temporohyoid osteoarthropathy (THO, n = 20/20; abnormalities/total), congenital sensorineural deafness in Paint horses (17/17), multifocal brain disease (13/16), and otitis media/interna (4/4). Auditory loss was bilateral and unilateral in 74% (n = 42/57) and 26% (n = 15/57) of the horses, respectively. The most common causes of bilateral auditory loss were sensorineural deafness, THO, and multifocal brain disease, whereas THO and otitis were the most common causes of unilateral deficits. Auditory deficits should be investigated in horses with altered behavior, THO, multifocal brain disease, otitis, and in horses with certain coat and eye color patterns. BAER testing is an objective and noninvasive diagnostic modality to assess auditory function in horses. Copyright © 2014 by the American College of Veterinary Internal Medicine.

  13. A longitudinal study of auditory evoked field and language development in young children.

    PubMed

    Yoshimura, Yuko; Kikuchi, Mitsuru; Ueno, Sanae; Shitamichi, Kiyomi; Remijn, Gerard B; Hiraishi, Hirotoshi; Hasegawa, Chiaki; Furutani, Naoki; Oi, Manabu; Munesue, Toshio; Tsubokawa, Tsunehisa; Higashida, Haruhiro; Minabe, Yoshio

    2014-11-01

    The relationship between language development in early childhood and the maturation of brain functions related to the human voice remains unclear. Because the development of the auditory system likely correlates with language development in young children, we investigated the relationship between the auditory evoked field (AEF) and language development using non-invasive child-customized magnetoencephalography (MEG) in a longitudinal design. Twenty typically developing children were recruited (aged 36-75 months old at the first measurement). These children were re-investigated 11-25 months after the first measurement. The AEF component P1m was examined to investigate the developmental changes in each participant's neural brain response to vocal stimuli. In addition, we examined the relationships between brain responses and language performance. P1m peak amplitude in response to vocal stimuli significantly increased in both hemispheres in the second measurement compared to the first measurement. However, no differences were observed in P1m latency. Notably, our results reveal that children with greater increases in P1m amplitude in the left hemisphere performed better on linguistic tests. Thus, our results indicate that P1m evoked by vocal stimuli is a neurophysiological marker for language development in young children. Additionally, MEG is a technique that can be used to investigate the maturation of the auditory cortex based on auditory evoked fields in young children. This study is the first to demonstrate a significant relationship between the development of the auditory processing system and the development of language abilities in young children. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. The cortical language circuit: from auditory perception to sentence comprehension.

    PubMed

    Friederici, Angela D

    2012-05-01

    Over the years, a large body of work on the brain basis of language comprehension has accumulated, paving the way for the formulation of a comprehensive model. The model proposed here describes the functional neuroanatomy of the different processing steps from auditory perception to comprehension as located in different gray matter brain regions. It also specifies the information flow between these regions, taking into account white matter fiber tract connections. Bottom-up, input-driven processes proceeding from the auditory cortex to the anterior superior temporal cortex and from there to the prefrontal cortex, as well as top-down, controlled and predictive processes from the prefrontal cortex back to the temporal cortex are proposed to constitute the cortical language circuit. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. Options for Auditory Training for Adults with Hearing Loss.

    PubMed

    Olson, Anne D

    2015-11-01

    Hearing aid devices alone do not adequately compensate for sensory losses despite significant technological advances in digital technology. Overall use rates of amplification among adults with hearing loss remain low, and overall satisfaction and performance in noise can be improved. Although improved technology may partially address some listening problems, auditory training may be another alternative to improve speech recognition in noise and satisfaction with devices. The literature underlying auditory plasticity following placement of sensory devices suggests that additional auditory training may be needed for reorganization of the brain to occur. Furthermore, training may be required to acquire optimal performance from devices. Several auditory training programs that are readily accessible for adults with hearing loss, hearing aids, or cochlear implants are described. Programs that can be accessed via Web-based formats and smartphone technology are reviewed. A summary table is provided for easy access to programs with descriptions of features that allow hearing health care providers to assist clients in selecting the most appropriate auditory training program to fit their needs.

  16. Music training alters the course of adolescent auditory development

    PubMed Central

    Tierney, Adam T.; Krizman, Jennifer; Kraus, Nina

    2015-01-01

    Fundamental changes in brain structure and function during adolescence are well-characterized, but the extent to which experience modulates adolescent neurodevelopment is not. Musical experience provides an ideal case for examining this question because the influence of music training begun early in life is well-known. We investigated the effects of in-school music training, previously shown to enhance auditory skills, versus another in-school training program that did not focus on development of auditory skills (active control). We tested adolescents on neural responses to sound and language skills before they entered high school (pretraining) and again 3 y later. Here, we show that in-school music training begun in high school prolongs the stability of subcortical sound processing and accelerates maturation of cortical auditory responses. Although phonological processing improved in both the music training and active control groups, the enhancement was greater in adolescents who underwent music training. Thus, music training initiated as late as adolescence can enhance neural processing of sound and confer benefits for language skills. These results establish the potential for experience-driven brain plasticity during adolescence and demonstrate that in-school programs can engender these changes. PMID:26195739

  17. Music training alters the course of adolescent auditory development.

    PubMed

    Tierney, Adam T; Krizman, Jennifer; Kraus, Nina

    2015-08-11

    Fundamental changes in brain structure and function during adolescence are well-characterized, but the extent to which experience modulates adolescent neurodevelopment is not. Musical experience provides an ideal case for examining this question because the influence of music training begun early in life is well-known. We investigated the effects of in-school music training, previously shown to enhance auditory skills, versus another in-school training program that did not focus on development of auditory skills (active control). We tested adolescents on neural responses to sound and language skills before they entered high school (pretraining) and again 3 y later. Here, we show that in-school music training begun in high school prolongs the stability of subcortical sound processing and accelerates maturation of cortical auditory responses. Although phonological processing improved in both the music training and active control groups, the enhancement was greater in adolescents who underwent music training. Thus, music training initiated as late as adolescence can enhance neural processing of sound and confer benefits for language skills. These results establish the potential for experience-driven brain plasticity during adolescence and demonstrate that in-school programs can engender these changes.

  18. Diagnosing Dyslexia: The Screening of Auditory Laterality.

    ERIC Educational Resources Information Center

    Johansen, Kjeld

    A study investigated whether a correlation exists between the degree and nature of left-brain laterality and specific reading and spelling difficulties. Subjects, 50 normal readers and 50 reading disabled persons native to the island of Bornholm, had their auditory laterality screened using pure-tone audiometry and dichotic listening. Results…

  19. Brain activity underlying auditory perceptual learning during short period training: simultaneous fMRI and EEG recording

    PubMed Central

    2013-01-01

    Background There is an accumulating body of evidence indicating that neuronal functional specificity to basic sensory stimulation is mutable and subject to experience. Although fMRI experiments have investigated changes in brain activity after relative to before perceptual learning, brain activity during perceptual learning has not been explored. This work investigated brain activity related to auditory frequency discrimination learning using a variational Bayesian approach for source localization, during simultaneous EEG and fMRI recording. We investigated whether the practice effects are determined solely by activity in stimulus-driven mechanisms or whether high-level attentional mechanisms, which are linked to the perceptual task, control the learning process. Results The results of fMRI analyses revealed significant attention- and learning-related activity in the left and right superior temporal gyrus (STG) as well as the left inferior frontal gyrus (IFG). Current source localization of simultaneously recorded EEG data was estimated using a variational Bayesian method. Analysis of current localized to the left inferior frontal gyrus and the right superior temporal gyrus revealed gamma band activity correlated with behavioral performance. Conclusions Rapid improvement in task performance is accompanied by plastic changes in the sensory cortex as well as superior areas gated by selective attention. Together the fMRI and EEG results suggest that gamma band activity in the right STG and left IFG plays an important role during perceptual learning. PMID:23316957

  20. Marking multi-channel silicon-substrate electrode recording sites using radiofrequency lesions.

    PubMed

    Brozoski, Thomas J; Caspary, Donald M; Bauer, Carol A

    2006-01-30

    Silicon-substrate multi-channel electrodes (multiprobes) have proven useful in a variety of electrophysiological tasks. When using multiprobes it is often useful to identify the site of each channel, e.g., when recording single-unit activity from a heterogeneous structure. Lesion marking of electrode sites has been used for many years. Electrolytic, or direct current (DC), lesions have been used successfully to mark multiprobe sites in rat hippocampus [Townsend G, Peloquin P, Kloosterman F, Hetke JF, Leung LS. Recording and marking with silicon multichannel electrodes. Brain Res Brain Res Protoc 2002;9:122-9]. The present method used radio-frequency (rf) lesions to distinctly mark each of the 16 recording sites of 16-channel linear array multiprobes in chinchilla inferior colliculus. A commercial radio-frequency lesioner was used as the current source, in conjunction with custom connectors adapted to the multiprobe configuration. In vitro bench testing was used to establish current-voltage-time parameters, as well as to check multiprobe integrity and radio-frequency performance. In in vivo application, visualization of individual-channel multiprobe recording sites was clear in 21 out of 33 sets of collicular serial sections (i.e., probe tracks) obtained from acute experimental subjects (maximum post-lesion survival time of 2 h). Advantages of the rf method include well-documented methods of in vitro calibration as well as low impact on probe integrity. The rf method of marking individual-channel sites should be useful in a variety of applications.

  1. Visual and auditory steady-state responses in attention-deficit/hyperactivity disorder.

    PubMed

    Khaleghi, Ali; Zarafshan, Hadi; Mohammadi, Mohammad Reza

    2018-05-22

    We designed a study to investigate the patterns of the steady-state visual evoked potential (SSVEP) and auditory steady-state response (ASSR) in adolescents with attention-deficit/hyperactivity disorder (ADHD) when performing a motor response inhibition task. Thirty 12- to 18-year-old adolescents with ADHD and 30 healthy control adolescents underwent an electroencephalogram (EEG) examination during steady-state stimuli when performing a stop-signal task. Then, we calculated the amplitude and phase of the steady-state responses in both visual and auditory modalities. Results showed that adolescents with ADHD had a significantly poorer performance in the stop-signal task during both visual and auditory stimuli. The SSVEP amplitude of the ADHD group was larger than that of the healthy control group in most regions of the brain, whereas the ASSR amplitude of the ADHD group was smaller than that of the healthy control group in some brain regions (e.g., right hemisphere). In conclusion, poorer task performance (especially inattention) and neurophysiological results in ADHD demonstrate a possible impairment in the interconnection of the association cortices in the parietal and temporal lobes and the prefrontal cortex. Also, the motor control problems in ADHD may arise from neural deficits in the frontoparietal and occipitoparietal systems and other brain structures such as cerebellum.

  2. Multi-voxel Patterns Reveal Functionally Differentiated Networks Underlying Auditory Feedback Processing of Speech

    PubMed Central

    Zheng, Zane Z.; Vicente-Grabovetsky, Alejandro; MacDonald, Ewen N.; Munhall, Kevin G.; Cusack, Rhodri; Johnsrude, Ingrid S.

    2013-01-01

    The everyday act of speaking involves the complex processes of speech motor control. An important component of control is monitoring, detection and processing of errors when auditory feedback does not correspond to the intended motor gesture. Here we show, using fMRI and converging operations within a multi-voxel pattern analysis framework, that this sensorimotor process is supported by functionally differentiated brain networks. During scanning, a real-time speech-tracking system was employed to deliver two acoustically different types of distorted auditory feedback or unaltered feedback while human participants were vocalizing monosyllabic words, and to present the same auditory stimuli while participants were passively listening. Whole-brain analysis of neural-pattern similarity revealed three functional networks that were differentially sensitive to distorted auditory feedback during vocalization, compared to during passive listening. One network of regions appears to encode an ‘error signal’ irrespective of acoustic features of the error: this network, including right angular gyrus, right supplementary motor area, and bilateral cerebellum, yielded consistent neural patterns across acoustically different, distorted feedback types, only during articulation (not during passive listening). In contrast, a fronto-temporal network appears sensitive to the speech features of auditory stimuli during passive listening; this preference for speech features was diminished when the same stimuli were presented as auditory concomitants of vocalization. A third network, showing a distinct functional pattern from the other two, appears to capture aspects of both neural response profiles. Taken together, our findings suggest that auditory feedback processing during speech motor control may rely on multiple, interactive, functionally differentiated neural systems. PMID:23467350

  3. Stretchable multichannel antennas in soft wireless optoelectronic implants for optogenetics.

    PubMed

    Park, Sung Il; Shin, Gunchul; McCall, Jordan G; Al-Hasani, Ream; Norris, Aaron; Xia, Li; Brenner, Daniel S; Noh, Kyung Nim; Bang, Sang Yun; Bhatti, Dionnet L; Jang, Kyung-In; Kang, Seung-Kyun; Mickle, Aaron D; Dussor, Gregory; Price, Theodore J; Gereau, Robert W; Bruchas, Michael R; Rogers, John A

    2016-12-13

    Optogenetic methods to modulate cells and signaling pathways via targeted expression and activation of light-sensitive proteins have greatly accelerated the process of mapping complex neural circuits and defining their roles in physiological and pathological contexts. Recently demonstrated technologies based on injectable, microscale inorganic light-emitting diodes (μ-ILEDs) with wireless control and power delivery strategies offer important functionality in such experiments, by eliminating the external tethers associated with traditional fiber optic approaches. Existing wireless μ-ILED embodiments allow, however, illumination only at a single targeted region of the brain with a single optical wavelength and over spatial ranges of operation that are constrained by the radio frequency power transmission hardware. Here we report stretchable, multiresonance antennas and battery-free schemes for multichannel wireless operation of independently addressable, multicolor μ-ILEDs with fully implantable, miniaturized platforms. This advance, as demonstrated through in vitro and in vivo studies using thin, mechanically soft systems that separately control as many as three different μ-ILEDs, relies on specially designed stretchable antennas in which parallel capacitive coupling circuits yield several independent, well-separated operating frequencies, as verified through experimental and modeling results. When used in combination with active motion-tracking antenna arrays, these devices enable multichannel optogenetic research on complex behavioral responses in groups of animals over large areas at low levels of radio frequency power (<1 W). Studies of the regions of the brain that are involved in sleep arousal (locus coeruleus) and preference/aversion (nucleus accumbens) demonstrate the unique capabilities of these technologies.

  4. Stretchable multichannel antennas in soft wireless optoelectronic implants for optogenetics

    PubMed Central

    Park, Sung Il; Shin, Gunchul; McCall, Jordan G.; Al-Hasani, Ream; Norris, Aaron; Xia, Li; Brenner, Daniel S.; Noh, Kyung Nim; Bang, Sang Yun; Bhatti, Dionnet L.; Jang, Kyung-In; Kang, Seung-Kyun; Mickle, Aaron D.; Dussor, Gregory; Price, Theodore J.; Gereau, Robert W.; Bruchas, Michael R.; Rogers, John A.

    2016-01-01

    Optogenetic methods to modulate cells and signaling pathways via targeted expression and activation of light-sensitive proteins have greatly accelerated the process of mapping complex neural circuits and defining their roles in physiological and pathological contexts. Recently demonstrated technologies based on injectable, microscale inorganic light-emitting diodes (μ-ILEDs) with wireless control and power delivery strategies offer important functionality in such experiments, by eliminating the external tethers associated with traditional fiber optic approaches. Existing wireless μ-ILED embodiments allow, however, illumination only at a single targeted region of the brain with a single optical wavelength and over spatial ranges of operation that are constrained by the radio frequency power transmission hardware. Here we report stretchable, multiresonance antennas and battery-free schemes for multichannel wireless operation of independently addressable, multicolor μ-ILEDs with fully implantable, miniaturized platforms. This advance, as demonstrated through in vitro and in vivo studies using thin, mechanically soft systems that separately control as many as three different μ-ILEDs, relies on specially designed stretchable antennas in which parallel capacitive coupling circuits yield several independent, well-separated operating frequencies, as verified through experimental and modeling results. When used in combination with active motion-tracking antenna arrays, these devices enable multichannel optogenetic research on complex behavioral responses in groups of animals over large areas at low levels of radio frequency power (<1 W). Studies of the regions of the brain that are involved in sleep arousal (locus coeruleus) and preference/aversion (nucleus accumbens) demonstrate the unique capabilities of these technologies. PMID:27911798

  5. Conserved mechanisms of vocalization coding in mammalian and songbird auditory midbrain.

    PubMed

    Woolley, Sarah M N; Portfors, Christine V

    2013-11-01

    The ubiquity of social vocalizations among animals provides the opportunity to identify conserved mechanisms of auditory processing that subserve communication. Identifying auditory coding properties that are shared across vocal communicators will provide insight into how human auditory processing leads to speech perception. Here, we compare auditory response properties and neural coding of social vocalizations in auditory midbrain neurons of mammalian and avian vocal communicators. The auditory midbrain is a nexus of auditory processing because it receives and integrates information from multiple parallel pathways and provides the ascending auditory input to the thalamus. The auditory midbrain is also the first region in the ascending auditory system where neurons show complex tuning properties that are correlated with the acoustics of social vocalizations. Single unit studies in mice, bats and zebra finches reveal shared principles of auditory coding including tonotopy, excitatory and inhibitory interactions that shape responses to vocal signals, nonlinear response properties that are important for auditory coding of social vocalizations and modulation tuning. Additionally, single neuron responses in the mouse and songbird midbrain are reliable, selective for specific syllables, and rely on spike timing for neural discrimination of distinct vocalizations. We propose that future research on auditory coding of vocalizations in mouse and songbird midbrain neurons adopt similar experimental and analytical approaches so that conserved principles of vocalization coding may be distinguished from those that are specialized for each species. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.

  6. Multichannel Compression, Temporal Cues, and Audibility.

    ERIC Educational Resources Information Center

    Souza, Pamela E.; Turner, Christopher W.

    1998-01-01

    The effect of the reduction of the temporal envelope produced by multichannel compression on recognition was examined in 16 listeners with hearing loss, with particular focus on audibility of the speech signal. Multichannel compression improved speech recognition when superior audibility was provided by a two-channel compression system over linear…

  7. Auditory hedonic phenotypes in dementia: A behavioural and neuroanatomical analysis

    PubMed Central

    Fletcher, Phillip D.; Downey, Laura E.; Golden, Hannah L.; Clark, Camilla N.; Slattery, Catherine F.; Paterson, Ross W.; Schott, Jonathan M.; Rohrer, Jonathan D.; Rossor, Martin N.; Warren, Jason D.

    2015-01-01

    Patients with dementia may exhibit abnormally altered liking for environmental sounds and music but such altered auditory hedonic responses have not been studied systematically. Here we addressed this issue in a cohort of 73 patients representing major canonical dementia syndromes (behavioural variant frontotemporal dementia (bvFTD), semantic dementia (SD), progressive nonfluent aphasia (PNFA) amnestic Alzheimer's disease (AD)) using a semi-structured caregiver behavioural questionnaire and voxel-based morphometry (VBM) of patients' brain MR images. Behavioural responses signalling abnormal aversion to environmental sounds, aversion to music or heightened pleasure in music (‘musicophilia’) occurred in around half of the cohort but showed clear syndromic and genetic segregation, occurring in most patients with bvFTD but infrequently in PNFA and more commonly in association with MAPT than C9orf72 mutations. Aversion to sounds was the exclusive auditory phenotype in AD whereas more complex phenotypes including musicophilia were common in bvFTD and SD. Auditory hedonic alterations correlated with grey matter loss in a common, distributed, right-lateralised network including antero-mesial temporal lobe, insula, anterior cingulate and nucleus accumbens. Our findings suggest that abnormalities of auditory hedonic processing are a significant issue in common dementias. Sounds may constitute a novel probe of brain mechanisms for emotional salience coding that are targeted by neurodegenerative disease. PMID:25929717

  8. Neural stem/progenitor cell properties of glial cells in the adult mouse auditory nerve

    PubMed Central

    Lang, Hainan; Xing, Yazhi; Brown, LaShardai N.; Samuvel, Devadoss J.; Panganiban, Clarisse H.; Havens, Luke T.; Balasubramanian, Sundaravadivel; Wegner, Michael; Krug, Edward L.; Barth, Jeremy L.

    2015-01-01

    The auditory nerve is the primary conveyor of hearing information from sensory hair cells to the brain. It has been believed that loss of the auditory nerve is irreversible in the adult mammalian ear, resulting in sensorineural hearing loss. We examined the regenerative potential of the auditory nerve in a mouse model of auditory neuropathy. Following neuronal degeneration, quiescent glial cells converted to an activated state showing a decrease in nuclear chromatin condensation, altered histone deacetylase expression and up-regulation of numerous genes associated with neurogenesis or development. Neurosphere formation assays showed that adult auditory nerves contain neural stem/progenitor cells (NSPs) that were within a Sox2-positive glial population. Production of neurospheres from auditory nerve cells was stimulated by acute neuronal injury and hypoxic conditioning. These results demonstrate that a subset of glial cells in the adult auditory nerve exhibit several characteristics of NSPs and are therefore potential targets for promoting auditory nerve regeneration. PMID:26307538

  9. Multistability in auditory stream segregation: a predictive coding view

    PubMed Central

    Winkler, István; Denham, Susan; Mill, Robert; Bőhm, Tamás M.; Bendixen, Alexandra

    2012-01-01

    Auditory stream segregation involves linking temporally separate acoustic events into one or more coherent sequences. For any non-trivial sequence of sounds, many alternative descriptions can be formed, only one or very few of which emerge in awareness at any time. Evidence from studies showing bi-/multistability in auditory streaming suggests that some, perhaps many of the alternative descriptions are represented in the brain in parallel and that they continuously vie for conscious perception. Here, based on a predictive coding view, we consider the nature of these sound representations and how they compete with each other. Predictive processing helps to maintain perceptual stability by signalling the continuation of previously established patterns as well as the emergence of new sound sources. It also provides a measure of how well each of the competing representations describes the current acoustic scene. This account of auditory stream segregation has been tested on perceptual data obtained in the auditory streaming paradigm. PMID:22371621

  10. Representations of Pitch and Timbre Variation in Human Auditory Cortex

    PubMed Central

    2017-01-01

    Pitch and timbre are two primary dimensions of auditory perception, but how they are represented in the human brain remains a matter of contention. Some animal studies of auditory cortical processing have suggested modular processing, with different brain regions preferentially coding for pitch or timbre, whereas other studies have suggested a distributed code for different attributes across the same population of neurons. This study tested whether variations in pitch and timbre elicit activity in distinct regions of the human temporal lobes. Listeners were presented with sequences of sounds that varied in either fundamental frequency (eliciting changes in pitch) or spectral centroid (eliciting changes in brightness, an important attribute of timbre), with the degree of pitch or timbre variation in each sequence parametrically manipulated. The BOLD responses from auditory cortex increased with increasing sequence variance along each perceptual dimension. The spatial extent, region, and laterality of the cortical regions most responsive to variations in pitch or timbre at the univariate level of analysis were largely overlapping. However, patterns of activation in response to pitch or timbre variations were discriminable in most subjects at an individual level using multivoxel pattern analysis, suggesting a distributed coding of the two dimensions bilaterally in human auditory cortex. SIGNIFICANCE STATEMENT Pitch and timbre are two crucial aspects of auditory perception. Pitch governs our perception of musical melodies and harmonies, and conveys both prosodic and (in tone languages) lexical information in speech. Brightness—an aspect of timbre or sound quality—allows us to distinguish different musical instruments and speech sounds. Frequency-mapping studies have revealed tonotopic organization in primary auditory cortex, but the use of pure tones or noise bands has precluded the possibility of dissociating pitch from brightness. Our results suggest a

  11. Spatial and temporal relationships of electrocorticographic alpha and gamma activity during auditory processing.

    PubMed

    Potes, Cristhian; Brunner, Peter; Gunduz, Aysegul; Knight, Robert T; Schalk, Gerwin

    2014-08-15

    Neuroimaging approaches have implicated multiple brain sites in musical perception, including the posterior part of the superior temporal gyrus and adjacent perisylvian areas. However, the detailed spatial and temporal relationship of neural signals that support auditory processing is largely unknown. In this study, we applied a novel inter-subject analysis approach to electrophysiological signals recorded from the surface of the brain (electrocorticography (ECoG)) in ten human subjects. This approach allowed us to reliably identify those ECoG features that were related to the processing of a complex auditory stimulus (i.e., continuous piece of music) and to investigate their spatial, temporal, and causal relationships. Our results identified stimulus-related modulations in the alpha (8-12 Hz) and high gamma (70-110 Hz) bands at neuroanatomical locations implicated in auditory processing. Specifically, we identified stimulus-related ECoG modulations in the alpha band in areas adjacent to primary auditory cortex, which are known to receive afferent auditory projections from the thalamus (80 of a total of 15,107 tested sites). In contrast, we identified stimulus-related ECoG modulations in the high gamma band not only in areas close to primary auditory cortex but also in other perisylvian areas known to be involved in higher-order auditory processing, and in superior premotor cortex (412/15,107 sites). Across all implicated areas, modulations in the high gamma band preceded those in the alpha band by 280 ms, and activity in the high gamma band causally predicted alpha activity, but not vice versa (Granger causality, p < 1e-8). Additionally, detailed analyses using Granger causality identified causal relationships of high gamma activity between distinct locations in early auditory pathways within superior temporal gyrus (STG) and posterior STG, between posterior STG and inferior frontal cortex, and between STG and premotor cortex. Evidence suggests that these
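    The directional claims in this record rest on Granger causality: one signal "Granger-causes" another if its past improves prediction of the other beyond the other's own past. As an illustration only (not the authors' ECoG pipeline, and using synthetic series with hypothetical coupling parameters), a minimal NumPy sketch of the nested-model F test behind pairwise Granger causality:

    ```python
    # Minimal sketch of pairwise Granger causality via nested AR models.
    # Hypothetical synthetic data; not the paper's analysis pipeline.
    import numpy as np

    def granger_f(x, y, lag=2):
        """F statistic for 'x Granger-causes y' at the given lag order."""
        n = len(y)
        Y = y[lag:]
        # Restricted model: y predicted from its own lags only.
        own = np.column_stack([y[lag - k:n - k] for k in range(1, lag + 1)])
        # Unrestricted model: own lags plus lags of x.
        full = np.column_stack([own] + [x[lag - k:n - k] for k in range(1, lag + 1)])

        def rss(X):
            X1 = np.column_stack([np.ones(len(Y)), X])  # add intercept
            beta, *_ = np.linalg.lstsq(X1, Y, rcond=None)
            r = Y - X1 @ beta
            return r @ r

        rss_r, rss_u = rss(own), rss(full)
        df1, df2 = lag, len(Y) - full.shape[1] - 1
        return ((rss_r - rss_u) / df1) / (rss_u / df2)

    # Synthetic example: x drives y with a one-sample delay.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(2000)
    y = np.zeros(2000)
    for t in range(2, 2000):
        y[t] = 0.6 * x[t - 1] + 0.2 * y[t - 1] + 0.1 * rng.standard_normal()

    print(granger_f(x, y), granger_f(y, x))  # x->y should dominate y->x
    ```

    A large F in one direction and a near-null F in the other is the asymmetry the abstract reports for high gamma predicting alpha; production analyses (e.g., `statsmodels.tsa.stattools.grangercausalitytests`) additionally report p-values and handle lag selection.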

  12. Integration and segregation in auditory scene analysis

    NASA Astrophysics Data System (ADS)

    Sussman, Elyse S.

    2005-03-01

    Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing that suggests that the auditory scene is rapidly organized into distinct streams and the integration of sequential elements to perceptual units takes place on the already formed streams. This would allow for the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.

  13. Multivariate sensitivity to voice during auditory categorization.

    PubMed

    Lee, Yune Sang; Peelle, Jonathan E; Kraemer, David; Lloyd, Samuel; Granger, Richard

    2015-09-01

    Past neuroimaging studies have documented discrete regions of human temporal cortex that are more strongly activated by conspecific voice sounds than by nonvoice sounds. However, the mechanisms underlying this voice sensitivity remain unclear. In the present functional MRI study, we took a novel approach to examining voice sensitivity, in which we applied a signal detection paradigm to the assessment of multivariate pattern classification among several living and nonliving categories of auditory stimuli. Within this framework, voice sensitivity can be interpreted as a distinct neural representation of brain activity that correctly distinguishes human vocalizations from other auditory object categories. Across a series of auditory categorization tests, we found that bilateral superior and middle temporal cortex consistently exhibited robust sensitivity to human vocal sounds. Although the strongest categorization was in distinguishing human voice from other categories, subsets of these regions were also able to distinguish reliably between nonhuman categories, suggesting a general role in auditory object categorization. Our findings complement the current evidence of cortical sensitivity to human vocal sounds by revealing that the greatest sensitivity during categorization tasks is devoted to distinguishing voice from nonvoice categories within human temporal cortex. Copyright © 2015 the American Physiological Society.

  14. Diminished Auditory Responses during NREM Sleep Correlate with the Hierarchy of Language Processing

    PubMed Central

    Wilf, Meytal; Ramot, Michal; Furman-Haran, Edna; Arzi, Anat; Levkovitz, Yechiel; Malach, Rafael

    2016-01-01

    Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high order processing stations. PMID:27310812

  15. Diminished Auditory Responses during NREM Sleep Correlate with the Hierarchy of Language Processing.

    PubMed

    Wilf, Meytal; Ramot, Michal; Furman-Haran, Edna; Arzi, Anat; Levkovitz, Yechiel; Malach, Rafael

    2016-01-01

    Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high order processing stations.

  16. Recent advances in exploring the neural underpinnings of auditory scene perception

    PubMed Central

    Snyder, Joel S.; Elhilali, Mounya

    2017-01-01

    Studies of auditory scene analysis have traditionally relied on paradigms using artificial sounds—and conventional behavioral techniques—to elucidate how we perceptually segregate auditory objects or streams from each other. In the past few decades, however, there has been growing interest in uncovering the neural underpinnings of auditory segregation using human and animal neuroscience techniques, as well as computational modeling. This largely reflects the growth in the fields of cognitive neuroscience and computational neuroscience and has led to new theories of how the auditory system segregates sounds in complex arrays. The current review focuses on neural and computational studies of auditory scene perception published in the past few years. Following the progress that has been made in these studies, we describe (1) theoretical advances in our understanding of the most well-studied aspects of auditory scene perception, namely segregation of sequential patterns of sounds and concurrently presented sounds; (2) the diversification of topics and paradigms that have been investigated; and (3) how new neuroscience techniques (including invasive neurophysiology in awake humans, genotyping, and brain stimulation) have been used in this field. PMID:28199022

  17. What is extinguished in auditory extinction?

    PubMed

    Deouell, L Y; Soroker, N

    2000-09-11

    Extinction is a frequent sequel of brain damage in which patients disregard (extinguish) the contralesional stimulus and report only the more ipsilesional one of a pair of stimuli presented simultaneously. We investigated the possibility of a dissociation between the detection and the identification of extinguished phonemes. Fourteen right hemisphere damaged patients with severe auditory extinction were examined using a paradigm that separated the localization of stimuli from the identification of their phonetic content. Patients reported the identity of left-sided phonemes while extinguishing them at the same time, in the traditional sense of the term. This dissociation suggests that auditory extinction is more about acknowledging the existence of a stimulus in the contralesional hemispace than about the actual processing of the stimulus.

  18. Forebrain pathway for auditory space processing in the barn owl.

    PubMed

    Cohen, Y E; Miller, G L; Knudsen, E I

    1998-02-01

    The forebrain plays an important role in many aspects of sound localization behavior. Yet the forebrain pathway that processes auditory spatial information is not known for any species. With standard anatomic labeling techniques, we used a "top-down" approach to trace the flow of auditory spatial information from an output area of the forebrain sound localization pathway (the auditory archistriatum, AAr), back through the forebrain, and into the auditory midbrain. Previous work has demonstrated that AAr units are specialized for auditory space processing. The results presented here show that the AAr receives afferent input from Field L both directly and indirectly via the caudolateral neostriatum. Afferent input to Field L originates mainly in the auditory thalamus, nucleus ovoidalis, which, in turn, receives input from the central nucleus of the inferior colliculus. In addition, we confirmed previously reported projections of the AAr to the basal ganglia, the external nucleus of the inferior colliculus (ICX), the deep layers of the optic tectum, and various brain stem nuclei. A series of inactivation experiments demonstrated that the sharp tuning of AAr sites for binaural spatial cues depends on Field L input but not on input from the auditory space map in the midbrain ICX: pharmacological inactivation of Field L completely eliminated auditory responses in the AAr, whereas bilateral ablation of the midbrain ICX had no appreciable effect on AAr responses. We conclude, therefore, that the forebrain sound localization pathway can process auditory spatial information independently of the midbrain localization pathway.

  19. Neurophysiological Studies of Auditory Verbal Hallucinations

    PubMed Central

    Ford, Judith M.; Dierks, Thomas; Fisher, Derek J.; Herrmann, Christoph S.; Hubl, Daniela; Kindler, Jochen; Koenig, Thomas; Mathalon, Daniel H.; Spencer, Kevin M.; Strik, Werner; van Lutterveld, Remko

    2012-01-01

    We discuss 3 neurophysiological approaches to study auditory verbal hallucinations (AVH). First, we describe “state” (or symptom capture) studies where periods with and without hallucinations are compared “within” a patient. These studies take 2 forms: passive studies, where brain activity during these states is compared, and probe studies, where brain responses to sounds during these states are compared. EEG (electroencephalography) and MEG (magnetoencephalography) data point to frontal and temporal lobe activity, the latter resulting in competition with external sounds for auditory resources. Second, we discuss “trait” studies where EEG and MEG responses to sounds are recorded from patients who hallucinate and those who do not. They suggest a tendency to hallucinate is associated with competition for auditory processing resources. Third, we discuss studies addressing possible mechanisms of AVH, including spontaneous neural activity, abnormal self-monitoring, and dysfunctional interregional communication. While most studies show differences in EEG and MEG responses between patients and controls, far fewer show symptom relationships. We conclude that efforts to understand the pathophysiology of AVH using EEG and MEG have been hindered by poor anatomical resolution of the EEG and MEG measures, poor assessment of symptoms, poor understanding of the phenomenon, poor models of the phenomenon, decoupling of the symptoms from the neurophysiology due to medications and comorbidities, and the possibility that the schizophrenia diagnosis breeds truer than the symptoms it comprises. These problems are common to studies of other psychiatric symptoms and should be considered when attempting to understand the basic neural mechanisms responsible for them. PMID:22368236

  20. Multichannel linear descriptors analysis for event-related EEG of vascular dementia patients during visual detection task.

    PubMed

    Lou, Wutao; Xu, Jin; Sheng, Hengsong; Zhao, Songzhen

    2011-11-01

    Multichannel EEG recorded under task conditions can carry additional information about cognition, but this has not been widely investigated in vascular dementia (VaD) research. The purpose of this study was to explore differences in brain functional state between VaD patients and normal controls during a detection task. Three multichannel linear descriptors, i.e. spatial complexity (Ω), field strength (Σ) and frequency of field changes (Φ), were applied to four frequency bands (delta, theta, alpha and beta) of multichannel event-related EEG signals from 12 VaD patients (mean age ± SD: 69.25 ± 10.56 years; MMSE score ± SD: 22.58 ± 4.42) and 12 age-matched healthy subjects (mean age ± SD: 67.17 ± 5.97 years; MMSE score ± SD: 29.08 ± 0.9). The correlations between the three measures and MMSE scores were also analysed. Compared with normal controls, VaD patients showed a significantly higher Ω value in the delta (p = 0.013) and theta (p = 0.021) frequency bands, and a lower Σ value (p = 0.011) and a higher Φ value (p = 0.008) in the delta frequency band. The MMSE scores were negatively correlated with the Ω (r = -0.52, p = 0.01) and Φ (r = -0.47, p = 0.02) values in the delta frequency band. The results indicated that VaD patients presented reduced synchronization in the slow frequency bands during target detection, suggesting that more neurons might be activated in VaD patients than in normal controls. The Ω and Φ measures in the delta frequency band might be used to evaluate the degree of cognitive dysfunction. The multichannel linear descriptors are promising measures for revealing differences in brain function between VaD patients and normal subjects, and could potentially be used to evaluate the degree of cognitive dysfunction in VaD patients. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
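    These descriptors come from Wackermann's linear-descriptor framework. As a rough illustration (an assumed formulation, not the authors' implementation), the field strength Σ is the RMS amplitude of the average-referenced field, and the spatial complexity Ω is the exponential of the entropy of the normalized eigenvalue spectrum of the spatial covariance matrix:

    ```python
    import numpy as np

    def field_strength(eeg):
        """Sigma: RMS amplitude of the average-referenced field.
        eeg: array of shape (n_channels, n_samples)."""
        v = eeg - eeg.mean(axis=0, keepdims=True)   # average reference
        return float(np.sqrt((v ** 2).mean()))

    def spatial_complexity(eeg):
        """Omega: effective number of uncorrelated spatial processes,
        exp of the entropy of the normalized eigenvalues of the spatial
        covariance matrix (Omega = 1 for a fully synchronized field)."""
        v = eeg - eeg.mean(axis=0, keepdims=True)
        cov = v @ v.T / v.shape[1]                  # spatial covariance
        lam = np.linalg.eigvalsh(cov)
        lam = lam[lam > lam.max() * 1e-12]          # drop numerical zeros
        p = lam / lam.sum()
        return float(np.exp(-(p * np.log(p)).sum()))
    ```

    Under this reading, a higher Ω means a less synchronized, more spatially complex field, which matches the paper's interpretation of the elevated delta-band Ω in VaD patients as reduced synchronization.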

  1. A Student-Made Inexpensive Multichannel Pipet

    ERIC Educational Resources Information Center

    Dragojlovic, Veljko

    2009-01-01

    An inexpensive multichannel pipet designed to deliver small volumes of liquid simultaneously to wells in a multiwell plate can be prepared by students in a single laboratory period. The multichannel pipet is made of disposable plastic 1 mL syringes and drilled plastic plates, which are used to make plunger and barrel assemblies. Application of the…

  2. Precise auditory-vocal mirroring in neurons for learned vocal communication.

    PubMed

    Prather, J F; Peters, S; Nowicki, S; Mooney, R

    2008-01-17

    Brain mechanisms for communication must establish a correspondence between sensory and motor codes used to represent the signal. One idea is that this correspondence is established at the level of single neurons that are active when the individual performs a particular gesture or observes a similar gesture performed by another individual. Although neurons that display a precise auditory-vocal correspondence could facilitate vocal communication, they have yet to be identified. Here we report that a certain class of neurons in the swamp sparrow forebrain displays a precise auditory-vocal correspondence. We show that these neurons respond in a temporally precise fashion to auditory presentation of certain note sequences in this songbird's repertoire and to similar note sequences in other birds' songs. These neurons display nearly identical patterns of activity when the bird sings the same sequence, and disrupting auditory feedback does not alter this singing-related activity, indicating it is motor in nature. Furthermore, these neurons innervate striatal structures important for song learning, raising the possibility that singing-related activity in these cells is compared to auditory feedback to guide vocal learning.

  3. Effects of musical training on the auditory cortex in children.

    PubMed

    Trainor, Laurel J; Shahin, Antoine; Roberts, Larry E

    2003-11-01

    Several studies of the effects of musical experience on sound representations in the auditory cortex are reviewed. Auditory evoked potentials are compared in response to pure tones, violin tones, and piano tones in adult musicians versus nonmusicians as well as in 4- to 5-year-old children who have either had or not had extensive musical experience. In addition, the effects of auditory frequency discrimination training in adult nonmusicians on auditory evoked potentials are examined. It was found that the P2-evoked response is larger in both adult and child musicians than in nonmusicians and that auditory training enhances this component in nonmusician adults. The results suggest that the P2 is particularly neuroplastic and that the effects of musical experience can be seen early in development. They also suggest that although the effects of musical training on cortical representations may be greater if training begins in childhood, the adult brain is also open to change. These results are discussed with respect to potential benefits of early musical training as well as potential benefits of musical experience in aging.

  4. Auditory priming improves neural synchronization in auditory-motor entrainment.

    PubMed

    Crasta, Jewel E; Thaut, Michael H; Anderson, Charles W; Davies, Patricia L; Gavin, William J

    2018-05-22

    Neurophysiological research has shown that auditory and motor systems interact during movement to rhythmic auditory stimuli through a process called entrainment. This study explores the neural oscillations underlying auditory-motor entrainment using electroencephalography. Forty young adults were randomly assigned to one of two control conditions, an auditory-only condition or a motor-only condition, prior to a rhythmic auditory-motor synchronization condition (referred to as the combined condition). Participants assigned to the auditory-only condition (auditory-first group) listened to 400 trials of auditory stimuli presented every 800 ms, while those in the motor-only condition (motor-first group) were asked to tap rhythmically every 800 ms without any external stimuli. Following their control condition, all participants completed an auditory-motor combined condition that required tapping along with auditory stimuli every 800 ms. As expected, the neural processes for the combined condition differed for each group compared to its respective control condition. Time-frequency analysis of total power at an electrode site on the left central scalp (C3) indicated that the neural oscillations elicited by auditory stimuli, especially in the beta and gamma range, drove the auditory-motor entrainment. For the combined condition, the auditory-first group had significantly lower evoked power in a region of interest representing sensorimotor processing (4-20 Hz) and less total power in a region associated with anticipation and predictive timing (13-16 Hz) than the motor-first group. Thus, the auditory-only condition served as a priming facilitator of the neural processes in the combined condition, more so than the motor-only condition. Results suggest that even brief periods of rhythmic training of the auditory system lead to neural efficiency that facilitates the motor system during entrainment. These findings have implications for interventions
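    Time-frequency analysis of total power of the kind described is commonly computed by convolving each trial with a complex Morlet wavelet and averaging the resulting power across trials. A generic single-frequency sketch of that approach (an assumed method, not the authors' actual pipeline):

    ```python
    import numpy as np

    def morlet_power(signal, fs, freq, n_cycles=7.0):
        """Power time course at `freq` (Hz) via complex Morlet wavelet
        convolution. For *total* power, apply this per trial and average
        the power (not the complex output) across trials, so that both
        phase-locked and non-phase-locked activity are retained."""
        sigma_t = n_cycles / (2.0 * np.pi * freq)        # wavelet width (s)
        t = np.arange(-4 * sigma_t, 4 * sigma_t, 1.0 / fs)
        wavelet = np.exp(2j * np.pi * freq * t - t**2 / (2 * sigma_t**2))
        wavelet /= np.abs(wavelet).sum()                 # amplitude normalization
        analytic = np.convolve(signal, wavelet, mode="same")
        return np.abs(analytic) ** 2
    ```

    Sweeping `freq` over, say, 4-50 Hz and stacking the outputs yields the kind of time-frequency map from which beta (13-30 Hz) and gamma power at C3 could be compared between groups.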

  5. Modulation frequency as a cue for auditory speed perception.

    PubMed

    Senna, Irene; Parise, Cesare V; Ernst, Marc O

    2017-07-12

    Unlike vision, the mechanisms underlying auditory motion perception are poorly understood. Here we describe an auditory motion illusion revealing a novel cue to auditory speed perception: the temporal frequency of amplitude modulation (AM-frequency), typical for rattling sounds. Naturally, corrugated objects sliding across each other generate rattling sounds whose AM-frequency tends to directly correlate with speed. We found that AM-frequency modulates auditory speed perception in a highly systematic fashion: moving sounds with higher AM-frequency are perceived as moving faster than sounds with lower AM-frequency. Even more interestingly, sounds with higher AM-frequency also induce stronger motion aftereffects. This reveals the existence of specialized neural mechanisms for auditory motion perception, which are sensitive to AM-frequency. Thus, in spatial hearing, the brain successfully capitalizes on the AM-frequency of rattling sounds to estimate the speed of moving objects. This tightly parallels previous findings in motion vision, where spatio-temporal frequency of moving displays systematically affects both speed perception and the magnitude of the motion aftereffects. Such an analogy with vision suggests that motion detection may rely on canonical computations, with similar neural mechanisms shared across the different modalities. © 2017 The Author(s).

  6. Exploratory study of once-daily transcranial direct current stimulation (tDCS) as a treatment for auditory hallucinations in schizophrenia.

    PubMed

    Fröhlich, F; Burrello, T N; Mellin, J M; Cordle, A L; Lustenberger, C M; Gilmore, J H; Jarskog, L F

    2016-03-01

    Auditory hallucinations are resistant to pharmacotherapy in about 25% of adults with schizophrenia. Treatment with noninvasive brain stimulation would provide a welcomed additional tool for the clinical management of auditory hallucinations. A recent study found a significant reduction in auditory hallucinations in people with schizophrenia after five days of twice-daily transcranial direct current stimulation (tDCS) that simultaneously targeted left dorsolateral prefrontal cortex and left temporo-parietal cortex. We hypothesized that once-daily tDCS with stimulation electrodes over left frontal and temporo-parietal areas reduces auditory hallucinations in patients with schizophrenia. We performed a randomized, double-blind, sham-controlled study that evaluated five days of daily tDCS of the same cortical targets in 26 outpatients with schizophrenia and schizoaffective disorder with auditory hallucinations. We found a significant reduction in auditory hallucinations measured by the Auditory Hallucination Rating Scale (F(2,50) = 12.22, p < 0.0001) that was not specific to the treatment group (F(2,48) = 0.43, p = 0.65). No significant change of overall schizophrenia symptom severity measured by the Positive and Negative Syndrome Scale was observed. The lack of efficacy of tDCS for treatment of auditory hallucinations and the pronounced response in the sham-treated group in this study contrasts with the previous finding and demonstrates the need for further optimization and evaluation of noninvasive brain stimulation strategies. In particular, higher cumulative doses and higher treatment frequencies of tDCS together with strategies to reduce placebo responses should be investigated. Additionally, consideration of more targeted stimulation to engage specific deficits in temporal organization of brain activity in patients with auditory hallucinations may be warranted. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  7. Auditory perception in the aging brain: the role of inhibition and facilitation in early processing.

    PubMed

    Stothart, George; Kazanina, Nina

    2016-11-01

    Aging affects the interplay between peripheral and cortical auditory processing. Previous studies have demonstrated that older adults are less able to regulate afferent sensory information and are more sensitive to distracting information. Using auditory event-related potentials we investigated the role of cortical inhibition on auditory and audiovisual processing in younger and older adults. Across puretone, auditory and audiovisual speech paradigms older adults showed a consistent pattern of inhibitory deficits, manifested as increased P50 and/or N1 amplitudes and an absent or significantly reduced N2. Older adults were still able to use congruent visual articulatory information to aid auditory processing but appeared to require greater neural effort to resolve conflicts generated by incongruent visual information. In combination, the results provide support for the Inhibitory Deficit Hypothesis of aging. They extend previous findings into the audiovisual domain and highlight older adults' ability to benefit from congruent visual information during speech processing. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  8. Source Space Estimation of Oscillatory Power and Brain Connectivity in Tinnitus

    PubMed Central

    Zobay, Oliver; Palmer, Alan R.; Hall, Deborah A.; Sereda, Magdalena; Adjamian, Peyman

    2015-01-01

    Tinnitus is the perception of an internally generated sound that is postulated to emerge as a result of structural and functional changes in the brain. However, the precise pathophysiology of tinnitus remains unknown. Llinas’ thalamocortical dysrhythmia model suggests that neural deafferentation due to hearing loss causes a dysregulation of coherent activity between thalamus and auditory cortex. This leads to a pathological coupling of theta and gamma oscillatory activity in the resting state, localised to the auditory cortex where normally alpha oscillations should occur. Numerous studies also suggest that tinnitus perception relies on the interplay between auditory and non-auditory brain areas. According to the Global Brain Model, a network of global fronto-parietal-cingulate areas is important in the generation and maintenance of the conscious perception of tinnitus. Thus, the distress experienced by many individuals with tinnitus is related to the top-down influence of this global network on auditory areas. In this magnetoencephalographic study, we compare resting-state oscillatory activity of tinnitus participants and normal-hearing controls to examine effects on spectral power as well as functional and effective connectivity. The analysis is based on beamformer source projection and an atlas-based region-of-interest approach. We find increased functional connectivity within the auditory cortices in the alpha band. A significant increase is also found for the effective connectivity from a global brain network to the auditory cortices in the alpha and beta bands. We do not find evidence of effects on spectral power. Overall, our results provide only limited support for the thalamocortical dysrhythmia and Global Brain models of tinnitus. PMID:25799178

  9. Functional neuroanatomy of auditory scene analysis in Alzheimer's disease

    PubMed Central

    Golden, Hannah L.; Agustus, Jennifer L.; Goll, Johanna C.; Downey, Laura E.; Mummery, Catherine J.; Schott, Jonathan M.; Crutch, Sebastian J.; Warren, Jason D.

    2015-01-01

    Auditory scene analysis is a demanding computational process that is performed automatically and efficiently by the healthy brain but vulnerable to the neurodegenerative pathology of Alzheimer's disease. Here we assessed the functional neuroanatomy of auditory scene analysis in Alzheimer's disease using the well-known ‘cocktail party effect’ as a model paradigm whereby stored templates for auditory objects (e.g., hearing one's spoken name) are used to segregate auditory ‘foreground’ and ‘background’. Patients with typical amnestic Alzheimer's disease (n = 13) and age-matched healthy individuals (n = 17) underwent functional 3T-MRI using a sparse acquisition protocol with passive listening to auditory stimulus conditions comprising the participant's own name interleaved with or superimposed on multi-talker babble, and spectrally rotated (unrecognisable) analogues of these conditions. Name identification (conditions containing the participant's own name contrasted with spectrally rotated analogues) produced extensive bilateral activation involving superior temporal cortex in both the AD and healthy control groups, with no significant differences between groups. Auditory object segregation (conditions with interleaved name sounds contrasted with superimposed name sounds) produced activation of right posterior superior temporal cortex in both groups, again with no differences between groups. However, the cocktail party effect (interaction of own name identification with auditory object segregation processing) produced activation of right supramarginal gyrus in the AD group that was significantly enhanced compared with the healthy control group. The findings delineate an altered functional neuroanatomical profile of auditory scene analysis in Alzheimer's disease that may constitute a novel computational signature of this neurodegenerative pathology. PMID:26029629

  10. Voxel-based morphometry of auditory and speech-related cortex in stutterers.

    PubMed

    Beal, Deryk S; Gracco, Vincent L; Lafaille, Sophie J; De Nil, Luc F

    2007-08-06

    Stutterers demonstrate unique functional neural activation patterns during speech production, including reduced auditory activation, relative to nonstutterers. The extent to which these functional differences are accompanied by abnormal morphology of the brain in stutterers is unclear. This study examined the neuroanatomical differences in speech-related cortex between stutterers and nonstutterers using voxel-based morphometry. Results revealed significant differences in localized grey matter and white matter densities of left and right hemisphere regions involved in auditory processing and speech production.

  11. Auditory and visual connectivity gradients in frontoparietal cortex

    PubMed Central

    Hellyer, Peter J.; Wise, Richard J. S.; Leech, Robert

    2016-01-01

    A frontoparietal network of brain regions is often implicated in both auditory and visual information processing. Although it is possible that the same set of multimodal regions subserves both modalities, there is increasing evidence that there is a differentiation of sensory function within frontoparietal cortex. Magnetic resonance imaging (MRI) in humans was used to investigate whether different frontoparietal regions showed intrinsic biases in connectivity with visual or auditory modalities. Structural connectivity was assessed with diffusion tractography and functional connectivity was tested using functional MRI. A dorsal–ventral gradient of function was observed, where connectivity with visual cortex dominates dorsal frontal and parietal connections, while connectivity with auditory cortex dominates ventral frontal and parietal regions. A gradient was also observed along the posterior–anterior axis, although in opposite directions in prefrontal and parietal cortices. The results suggest that the location of neural activity within frontoparietal cortex may be influenced by these intrinsic biases toward visual and auditory processing. Thus, the location of activity in frontoparietal cortex may be influenced as much by stimulus modality as the cognitive demands of a task. It was concluded that stimulus modality was spatially encoded throughout frontal and parietal cortices, and it was speculated that such an arrangement allows for top-down modulation of modality-specific information to occur within higher-order cortex. This could provide a potentially faster and more efficient pathway by which top-down selection between sensory modalities could occur, by constraining modulations to within frontal and parietal regions, rather than long-range connections to sensory cortices. Hum Brain Mapp 38:255–270, 2017. © 2016 Wiley Periodicals, Inc. PMID:27571304

  12. Infants' brain responses to speech suggest analysis by synthesis.

    PubMed

    Kuhl, Patricia K; Ramírez, Rey R; Bosseler, Alexis; Lin, Jo-Fu Lotus; Imada, Toshiaki

    2014-08-05

    Historic theories of speech perception (Motor Theory and Analysis by Synthesis) invoked listeners' knowledge of speech production to explain speech perception. Neuroimaging data show that adult listeners activate motor brain areas during speech perception. In two experiments using magnetoencephalography (MEG), we investigated motor brain activation, as well as auditory brain activation, during discrimination of native and nonnative syllables in infants at two ages that straddle the developmental transition from language-universal to language-specific speech perception. Adults were also tested in Experiment 1. MEG data revealed that 7-mo-old infants activate auditory (superior temporal) as well as motor brain areas (Broca's area, cerebellum) in response to speech, and equivalently for native and nonnative syllables. However, in 11- and 12-mo-old infants, native speech activates auditory brain areas to a greater degree than nonnative, whereas nonnative speech activates motor brain areas to a greater degree than native speech. This double dissociation in 11- to 12-mo-old infants matches the pattern of results obtained in adult listeners. Our infant data are consistent with Analysis by Synthesis: auditory analysis of speech is coupled with synthesis of the motor plans necessary to produce the speech signal. The findings have implications for: (i) perception-action theories of speech perception, (ii) the impact of "motherese" on early language learning, and (iii) the "social-gating" hypothesis and humans' development of social understanding.

  13. Children's Performance on Pseudoword Repetition Depends on Auditory Trace Quality: Evidence from Event-Related Potentials.

    ERIC Educational Resources Information Center

    Ceponiene, Rita; Service, Elisabet; Kurjenluoma, Sanna; Cheour, Marie; Naatanen, Risto

    1999-01-01

    Compared the mismatch-negativity (MMN) component of auditory event-related brain potentials to explore the relationship between phonological short-term memory and auditory-sensory processing in 7- to 9-year olds scoring the highest and lowest on a pseudoword repetition test. Found that high and low repeaters differed in MMN amplitude to speech…

  14. A Brain for Speech. Evolutionary Continuity in Primate and Human Auditory-Vocal Processing

    PubMed Central

    Aboitiz, Francisco

    2018-01-01

    In this review article, I propose a continuous evolution from the auditory-vocal apparatus and its mechanisms of neural control in non-human primates, to the peripheral organs and the neural control of human speech. Although there is an overall conservatism both in peripheral systems and in central neural circuits, a few changes were critical for the expansion of vocal plasticity and the elaboration of proto-speech in early humans. Two of the most relevant changes were the acquisition of direct cortical control of the vocal fold musculature and the consolidation of an auditory-vocal articulatory circuit, encompassing auditory areas in the temporoparietal junction and prefrontal and motor areas in the frontal cortex. This articulatory loop, also referred to as the phonological loop, enhanced vocal working memory capacity, enabling early humans to learn increasingly complex utterances. The auditory-vocal circuit became progressively coupled to multimodal systems conveying information about objects and events, which gradually led to the acquisition of modern speech. Gestural communication has accompanied the development of vocal communication since very early in human evolution, and although both systems co-evolved tightly in the beginning, at some point speech became the main channel of communication. PMID:29636657

  15. Hearing Scenes: A Neuromagnetic Signature of Auditory Source and Reverberant Space Separation

    PubMed Central

    Oliva, Aude

    2017-01-01

    Perceiving the geometry of surrounding space is a multisensory process, crucial to contextualizing object perception and guiding navigation behavior. Humans can make judgments about surrounding spaces from reverberation cues, caused by sounds reflecting off multiple interior surfaces. However, it remains unclear how the brain represents reverberant spaces separately from sound sources. Here, we report separable neural signatures of auditory space and source perception during magnetoencephalography (MEG) recording as subjects listened to brief sounds convolved with monaural room impulse responses (RIRs). The decoding signature of sound sources began at 57 ms after stimulus onset and peaked at 130 ms, while space decoding started at 138 ms and peaked at 386 ms. Importantly, these neuromagnetic responses were readily dissociable in form and time: while sound source decoding exhibited an early and transient response, the neural signature of space was sustained and independent of the original source that produced it. The reverberant space response was robust to variations in sound source, and vice versa, indicating a generalized response not tied to specific source-space combinations. These results provide the first neuromagnetic evidence for robust, dissociable auditory source and reverberant space representations in the human brain and reveal the temporal dynamics of how auditory scene analysis extracts percepts from complex naturalistic auditory signals. PMID:28451630

  16. Simulation and testing of a multichannel system for 3D sound localization

    NASA Astrophysics Data System (ADS)

    Matthews, Edward Albert

    Three-dimensional (3D) audio involves the ability to localize sound anywhere in a three-dimensional space. 3D audio can be used to provide the listener with the perception of moving sounds and can provide a realistic listening experience for applications such as gaming, video conferencing, movies, and concerts. The purpose of this research is to simulate and test 3D audio by incorporating auditory localization techniques in a multi-channel speaker system. The objective is to develop an algorithm that can place an audio event in a desired location by calculating and controlling the gain factors of each speaker. A MATLAB simulation displays the location of the speakers and perceived sound, which is verified through experimentation. The scenario in which the listener is not equidistant from each of the speakers is also investigated and simulated. This research is envisioned to lead to a better understanding of human localization of sound, and will contribute to a more realistic listening experience.
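    The gain-factor idea can be illustrated with constant-power amplitude panning between an adjacent speaker pair, plus a simple inverse-distance correction for a non-equidistant listener. This is a textbook-style sketch under those assumptions, not the thesis's actual algorithm:

    ```python
    import math

    def pan_gains(source_deg, left_deg, right_deg):
        """Constant-power gains placing a phantom source between two
        speakers at the given azimuths (degrees). Sources outside the
        speaker arc are clamped onto the nearer speaker."""
        frac = (source_deg - left_deg) / (right_deg - left_deg)
        frac = min(max(frac, 0.0), 1.0)
        theta = frac * math.pi / 2.0
        g_left, g_right = math.cos(theta), math.sin(theta)
        return g_left, g_right   # g_left**2 + g_right**2 == 1

    def distance_compensated(gain, speaker_dist_m, ref_dist_m):
        """Boost a farther speaker's gain so perceived level stays
        balanced for a non-equidistant listener (simple inverse-distance
        model, an assumption)."""
        return gain * (speaker_dist_m / ref_dist_m)
    ```

    For example, a source straight ahead of a ±30° stereo pair gets equal gains of 1/√2 on each speaker, keeping total radiated power constant as the source moves across the arc.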

  17. Optimal resource allocation for novelty detection in a human auditory memory.

    PubMed

    Sinkkonen, J; Kaski, S; Huotilainen, M; Ilmoniemi, R J; Näätänen, R; Kaila, K

    1996-11-04

    A theory of resource allocation for neuronal low-level filtering is presented, based on an analysis of optimal resource allocation in simple environments. A quantitative prediction of the theory was verified in measurements of the magnetic mismatch response (MMR), an auditory event-related magnetic response of the human brain. The amplitude of the MMR was found to be directly proportional to the information conveyed by the stimulus. To the extent that the amplitude of the MMR can be used to measure resource usage by the auditory cortex, this finding supports our theory that, at least for early auditory processing, energy resources are used in proportion to the information content of incoming stimulus flow.
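    One standard reading of "information conveyed by the stimulus" is Shannon self-information: a deviant presented with probability p carries -log2(p) bits, so the prediction is that MMR amplitude grows in proportion to -log2(p). A minimal illustration (this is the generic information-theoretic quantity, not the paper's full resource-allocation model):

    ```python
    import math

    def self_information_bits(p):
        """Shannon self-information (in bits) of a stimulus occurring
        with probability p; rarer stimuli convey more information."""
        if not 0.0 < p <= 1.0:
            raise ValueError("p must be in (0, 1]")
        return -math.log2(p)
    ```

    Under this reading, a deviant on 12.5% of trials conveys 3 bits, three times the information of a 50% deviant, and the theory predicts a correspondingly larger MMR amplitude.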

  18. Auditory evoked field measurement using magneto-impedance sensors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, K., E-mail: o-kabou@echo.nuee.nagoya-u.ac.jp; Tajima, S.; Song, D.

    The magnetic field of the human brain is extremely weak, and it is mostly measured and monitored in the magnetoencephalography method using superconducting quantum interference devices. In this study, in order to measure the weak magnetic field of the brain, we constructed a magneto-impedance sensor (MI sensor) system that can cancel out the background noise without any magnetic shield. Based on our previous studies of brain wave measurements, we used two MI sensors in this system for monitoring both cerebral hemispheres. In this study, we recorded and compared the auditory evoked field signals of the subject, including the N100 (or N1) and the P300 (or P3) brain waves. The results suggest that the MI sensor can be applied to brain activity measurement.

  19. Synchronisation signatures in the listening brain: a perspective from non-invasive neuroelectrophysiology.

    PubMed

    Weisz, Nathan; Obleser, Jonas

    2014-01-01

    Human magneto- and electroencephalography (M/EEG) are capable of tracking brain activity at millisecond temporal resolution in an entirely non-invasive manner, a feature that offers unique opportunities to uncover the spatiotemporal dynamics of the hearing brain. In general, precise synchronisation of neural activity within as well as across distributed regions is likely to subserve any cognitive process, with auditory cognition being no exception. Brain oscillations, in a range of frequencies, are a putative hallmark of this synchronisation process. Embedded in a larger effort to relate human cognition to brain oscillations, a field of research is emerging on how synchronisation within, as well as between, brain regions may shape auditory cognition. Combined with much improved source localisation and connectivity techniques, it has become possible to study directly the neural activity of auditory cortex with unprecedented spatio-temporal fidelity and to uncover frequency-specific long-range connectivities across the human cerebral cortex. In the present review, we will summarise recent contributions mainly of our laboratories to this emerging domain. We present (1) a more general introduction on how to study local as well as interareal synchronisation in human M/EEG; (2) how these networks may subserve and influence illusory auditory perception (clinical and non-clinical) and (3) auditory selective attention; and (4) how oscillatory networks further reflect and impact on speech comprehension. This article is part of a Special Issue entitled Human Auditory Neuroimaging. Copyright © 2013 Elsevier B.V. All rights reserved.

  20. Visual attention modulates brain activation to angry voices.

    PubMed

    Mothes-Lasch, Martin; Mentzel, Hans-Joachim; Miltner, Wolfgang H R; Straube, Thomas

    2011-06-29

    In accordance with influential models proposing prioritized processing of threat, previous studies have shown automatic brain responses to angry prosody in the amygdala and the auditory cortex under auditory distraction conditions. However, it is unknown whether the automatic processing of angry prosody is also observed during cross-modal distraction. The current fMRI study investigated brain responses to angry versus neutral prosodic stimuli during visual distraction. During scanning, participants were exposed to angry or neutral prosodic stimuli while visual symbols were displayed simultaneously. By means of task requirements, participants either attended to the voices or to the visual stimuli. While the auditory task revealed pronounced activation in the auditory cortex and amygdala to angry versus neutral prosody, this effect was absent during the visual task. Thus, our results show a limitation of the automaticity of the activation of the amygdala and auditory cortex to angry prosody. The activation of these areas to threat-related voices depends on modality-specific attention.

  1. Is auditory perceptual timing a core deficit of developmental coordination disorder?

    PubMed

    Trainor, Laurel J; Chang, Andrew; Cairney, John; Li, Yao-Chuen

    2018-05-09

    Time is an essential dimension for perceiving and processing auditory events, and for planning and producing motor behaviors. Developmental coordination disorder (DCD) is a neurodevelopmental disorder, affecting 5-6% of children, that is characterized by deficits in motor skills. Studies show that children with DCD have motor timing and sensorimotor timing deficits. We suggest that auditory perceptual timing deficits may also be a core characteristic of DCD. This idea is consistent with evidence from several domains: (1) motor-related brain regions are often involved in auditory timing processes; (2) DCD has high comorbidity with dyslexia and attention-deficit/hyperactivity disorder, both of which are known to be associated with auditory timing deficits; (3) a few studies report deficits in auditory-motor timing among children with DCD; and (4) our preliminary behavioral and neuroimaging results show that children with DCD at ages 6 and 7 have deficits in auditory time discrimination compared to typically developing children. We propose directions for investigating auditory perceptual timing in DCD using various behavioral and neuroimaging approaches. From a clinical perspective, research findings can potentially improve our understanding of the etiology of DCD, identify early biomarkers of DCD, and be used to develop evidence-based interventions for DCD involving auditory-motor training. © 2018 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals, Inc. on behalf of The New York Academy of Sciences.

  2. Early but not late-blindness leads to enhanced auditory perception.

    PubMed

    Wan, Catherine Y; Wood, Amanda G; Reutens, David C; Wilson, Sarah J

    2010-01-01

    The notion that blindness leads to superior non-visual abilities has been postulated for centuries. Compared to sighted individuals, blind individuals show different patterns of brain activation when performing auditory tasks. To date, no study has controlled for musical experience, which is known to influence auditory skills. The present study tested 33 blind (11 congenital, 11 early-blind, 11 late-blind) participants and 33 matched sighted controls. We showed that the performance of blind participants was better than that of sighted participants on a range of auditory perception tasks, even when musical experience was controlled for. This advantage was observed only for individuals who became blind early in life, and was even more pronounced for individuals who were blind from birth. Years of blindness did not predict task performance. Here, we provide compelling evidence that superior auditory abilities in blind individuals are not explained by musical experience alone. These results have implications for the development of sensory substitution devices, particularly for late-blind individuals.

  3. Egocentric and allocentric representations in auditory cortex

    PubMed Central

    Brimijoin, W. Owen; Bizley, Jennifer K.

    2017-01-01

    A key function of the brain is to provide a stable representation of an object’s location in the world. In hearing, sound azimuth and elevation are encoded by neurons throughout the auditory system, and auditory cortex is necessary for sound localization. However, the coordinate frame in which neurons represent sound space remains undefined: classical spatial receptive fields in head-fixed subjects can be explained either by sensitivity to sound source location relative to the head (egocentric) or relative to the world (allocentric encoding). This coordinate frame ambiguity can be resolved by studying freely moving subjects; here we recorded spatial receptive fields in the auditory cortex of freely moving ferrets. We found that most spatially tuned neurons represented sound source location relative to the head across changes in head position and direction. In addition, we also recorded a small number of neurons in which sound location was represented in a world-centered coordinate frame. We used measurements of spatial tuning across changes in head position and direction to explore the influence of sound source distance and speed of head movement on auditory cortical activity and spatial tuning. Modulation depth of spatial tuning increased with distance for egocentric but not allocentric units, whereas, for both populations, modulation was stronger at faster movement speeds. Our findings suggest that early auditory cortex primarily represents sound source location relative to ourselves but that a minority of cells can represent sound location in the world independent of our own position. PMID:28617796

  4. Integrating Information from Different Senses in the Auditory Cortex

    PubMed Central

    King, Andrew J.; Walker, Kerry M.M.

    2015-01-01

    Multisensory integration was once thought to be the domain of brain areas high in the cortical hierarchy, with early sensory cortical fields devoted to unisensory processing of inputs from their given set of sensory receptors. More recently, a wealth of evidence documenting visual and somatosensory responses in auditory cortex, even as early as the primary fields, has changed this view of cortical processing. These multisensory inputs may serve to enhance responses to sounds that are accompanied by other sensory cues, effectively making them easier to hear, but may also act more selectively to shape the receptive field properties of auditory cortical neurons to the location or identity of these events. We discuss the new, converging evidence that multiplexing of neural signals may play a key role in informatively encoding and integrating signals in auditory cortex across multiple sensory modalities. We highlight some of the many open research questions that exist about the neural mechanisms that give rise to multisensory integration in auditory cortex, which should be addressed in future experimental and theoretical studies. PMID:22798035

  5. Psychophysical and Neural Correlates of Auditory Attraction and Aversion

    NASA Astrophysics Data System (ADS)

    Patten, Kristopher Jakob

    This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based on auditory scene analysis, the process through which listeners parse auditory signals into individual acoustic sources. The first experiment tests and confirms that a self-rated pleasantness continuum reliably exists for 20 varied stimuli (r = .48). In addition, the pleasantness continuum correlated with the physical acoustic characteristics of consonance/dissonance (r = .78), which can facilitate auditory parsing processes. The second experiment uses an fMRI block design to test blood oxygen level dependent (BOLD) changes elicited by a subset of 5 exemplar stimuli chosen from Experiment 1 that are evenly distributed over the pleasantness continuum. Specifically, it tests and confirms that the pleasantness continuum produces systematic changes in brain activity for unpleasant acoustic stimuli beyond what occurs with pleasant auditory stimuli. Results revealed that the combination of two positively and two negatively valenced experimental sounds compared to one neutral baseline control elicited BOLD increases in the primary auditory cortex, specifically the bilateral superior temporal gyrus, and left dorsomedial prefrontal cortex; the latter being consistent with a frontal decision-making process common in identification tasks. The negatively valenced stimuli yielded additional BOLD increases in the left insula, which typically indicates processing of visceral emotions. The positively valenced stimuli did not yield any significant BOLD activation, consistent with consonant, harmonic stimuli being the prototypical acoustic pattern of auditory objects that is optimal for auditory scene analysis. Both the psychophysical findings of Experiment 1 and the neural processing findings of Experiment 2 support that consonance is an important dimension of sound that is processed in a manner that aids

  6. Brain blood flow changes measured by positron emission tomography during an auditory cognitive task in healthy volunteers and in schizophrenic patients.

    PubMed

    Emri, Miklós; Glaub, Teodóra; Berecz, Roland; Lengyel, Zsolt; Mikecz, Pál; Repa, Imre; Bartók, Eniko; Degrell, István; Trón, Lajos

    2006-05-01

    Cognitive deficit is an essential feature of schizophrenia. One of the generally used simple cognitive tasks to characterize specific cognitive dysfunctions is the auditory "oddball" paradigm. During this task, two different tones are presented with different repetition frequencies and the subject is asked to pay attention and to respond to the less frequent tone. The aim of the present study was to apply positron emission tomography (PET) to measure the regional brain blood flow changes induced by an auditory oddball task in healthy volunteers and in stable schizophrenic patients in order to detect activation differences between the two groups. Eight healthy volunteers and 11 schizophrenic patients were studied. The subjects carried out a specific auditory oddball task while cerebral activation, measured via the regional distribution of [15O]-butanol activity changes in the PET camera, was recorded. Task-related activation differed significantly between the patients and controls. The healthy volunteers displayed significant activation in the anterior cingulate area (Brodmann Area 32; BA32), while in the schizophrenic patients the activated area was wider, including the mediofrontal regions (BA32 and BA10). The distance between the locations of maximal activation of the two populations was 33 mm, and the cluster size was about twice as large in the patient group. The present results demonstrate that the perfusion changes induced in the schizophrenic patients by this cognitive task extend over a larger part of the mediofrontal cortex than in the healthy volunteers. The different pattern of activation observed during the auditory oddball task in the schizophrenic patients suggests that a larger cortical area, and consequently a larger variety of neuronal networks, is involved in the cognitive processes of these patients. The dispersion of stimulus processing during a cognitive task requiring sustained attention and stimulus discrimination may play an important role in the

  7. Auditory attention enhances processing of positive and negative words in inferior and superior prefrontal cortex.

    PubMed

    Wegrzyn, Martin; Herbert, Cornelia; Ethofer, Thomas; Flaisch, Tobias; Kissler, Johanna

    2017-11-01

    Visually presented emotional words are processed preferentially and effects of emotional content are similar to those of explicit attention deployment in that both amplify visual processing. However, auditory processing of emotional words is less well characterized and interactions between emotional content and task-induced attention have not been fully understood. Here, we investigate auditory processing of emotional words, focussing on how auditory attention to positive and negative words impacts their cerebral processing. A Functional magnetic resonance imaging (fMRI) study manipulating word valence and attention allocation was performed. Participants heard negative, positive and neutral words to which they either listened passively or attended by counting negative or positive words, respectively. Regardless of valence, active processing compared to passive listening increased activity in primary auditory cortex, left intraparietal sulcus, and right superior frontal gyrus (SFG). The attended valence elicited stronger activity in left inferior frontal gyrus (IFG) and left SFG, in line with these regions' role in semantic retrieval and evaluative processing. No evidence for valence-specific attentional modulation in auditory regions or distinct valence-specific regional activations (i.e., negative > positive or positive > negative) was obtained. Thus, allocation of auditory attention to positive and negative words can substantially increase their processing in higher-order language and evaluative brain areas without modulating early stages of auditory processing. Inferior and superior frontal brain structures mediate interactions between emotional content, attention, and working memory when prosodically neutral speech is processed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Auditory Hallucinations as Translational Psychiatry: Evidence from Magnetic Resonance Imaging

    PubMed Central

    Hugdahl, Kenneth

    2017-01-01

    In this invited review article, I present a translational perspective and overview of our research on auditory hallucinations in schizophrenia at the University of Bergen, Norway, with a focus on the neuronal mechanisms underlying the phenomenology of experiencing “hearing voices”. An auditory verbal hallucination (i.e. hearing a voice) is defined as a sensory experience in the absence of a corresponding external sensory source that could explain the phenomenological experience. I suggest a general frame or scheme for the study of auditory verbal hallucinations, called Levels of Explanation. Using a Levels of Explanation approach, mental phenomena can be described and explained at different levels (cultural, clinical, cognitive, brain-imaging, cellular and molecular). Another way of saying this is that, to advance knowledge in a research field, it is not only necessary to replicate findings, but also to show how evidence obtained with one method, and at one level of explanation, converges with evidence obtained with another method at another level. To achieve breakthroughs in our understanding of auditory verbal hallucinations, we have to advance vertically through the various levels, rather than the more common approach of staying at our favourite level and advancing horizontally (e.g., more advanced techniques and data acquisition analyses). The horizontal expansion will, however, not advance a deeper understanding of how an auditory verbal hallucination spontaneously starts and stops. Finally, I present data from the clinical, cognitive, brain-imaging, and cellular levels, where data from one level validate and support data at another level, called converging of evidence. Using a translational approach, the current status of auditory verbal hallucinations is that they implicate speech perception areas in the left temporal lobe, impairing perception of and attention to external sounds. Preliminary results also show that amygdala is implicated in the emotional

  9. Auditory Hallucinations as Translational Psychiatry: Evidence from Magnetic Resonance Imaging.

    PubMed

    Hugdahl, Kenneth

    2017-12-01

    In this invited review article, I present a translational perspective and overview of our research on auditory hallucinations in schizophrenia at the University of Bergen, Norway, with a focus on the neuronal mechanisms underlying the phenomenology of experiencing "hearing voices". An auditory verbal hallucination (i.e. hearing a voice) is defined as a sensory experience in the absence of a corresponding external sensory source that could explain the phenomenological experience. I suggest a general frame or scheme for the study of auditory verbal hallucinations, called Levels of Explanation. Using a Levels of Explanation approach, mental phenomena can be described and explained at different levels (cultural, clinical, cognitive, brain-imaging, cellular and molecular). Another way of saying this is that, to advance knowledge in a research field, it is not only necessary to replicate findings, but also to show how evidence obtained with one method, and at one level of explanation, converges with evidence obtained with another method at another level. To achieve breakthroughs in our understanding of auditory verbal hallucinations, we have to advance vertically through the various levels, rather than the more common approach of staying at our favourite level and advancing horizontally (e.g., more advanced techniques and data acquisition analyses). The horizontal expansion will, however, not advance a deeper understanding of how an auditory verbal hallucination spontaneously starts and stops. Finally, I present data from the clinical, cognitive, brain-imaging, and cellular levels, where data from one level validate and support data at another level, called converging of evidence. Using a translational approach, the current status of auditory verbal hallucinations is that they implicate speech perception areas in the left temporal lobe, impairing perception of and attention to external sounds. Preliminary results also show that amygdala is implicated in the emotional

  10. A Non-canonical Reticular-Limbic Central Auditory Pathway via Medial Septum Contributes to Fear Conditioning.

    PubMed

    Zhang, Guang-Wei; Sun, Wen-Jian; Zingg, Brian; Shen, Li; He, Jufang; Xiong, Ying; Tao, Huizhong W; Zhang, Li I

    2018-01-17

    In the mammalian brain, auditory information is known to be processed along a central ascending pathway leading to auditory cortex (AC). Whether there exist any major pathways beyond this canonical auditory neuraxis remains unclear. In awake mice, we found that auditory responses in entorhinal cortex (EC) cannot be explained by a previously proposed relay from AC based on response properties. By combining anatomical tracing and optogenetic/pharmacological manipulations, we discovered that EC received auditory input primarily from the medial septum (MS), rather than AC. A previously uncharacterized auditory pathway was then revealed: it branched from the cochlear nucleus, and via caudal pontine reticular nucleus, pontine central gray, and MS, reached EC. Neurons along this non-canonical auditory pathway responded selectively to high-intensity broadband noise, but not pure tones. Disruption of the pathway resulted in an impairment of specifically noise-cued fear conditioning. This reticular-limbic pathway may thus function in processing aversive acoustic signals. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Anatomical Substrates of Visual and Auditory Miniature Second-language Learning

    PubMed Central

    Newman-Norlund, Roger D.; Frey, Scott H.; Petitto, Laura-Ann; Grafton, Scott T.

    2007-01-01

    Longitudinal changes in brain activity during second language (L2) acquisition of a miniature finite-state grammar, named Wernickese, were identified with functional magnetic resonance imaging (fMRI). Participants learned either a visual sign language form or an auditory-verbal form to equivalent proficiency levels. Brain activity during sentence comprehension while hearing/viewing stimuli was assessed at low, medium, and high levels of proficiency in three separate fMRI sessions. Activation in the left inferior frontal gyrus (Broca’s area) correlated positively with improving L2 proficiency, whereas activity in the right-hemisphere (RH) homologue was negatively correlated for both auditory and visual forms of the language. Activity in sequence learning areas including the premotor cortex and putamen also correlated with L2 proficiency. Modality-specific differences in the blood oxygenation level-dependent signal accompanying L2 acquisition were localized to the planum temporale (PT). Participants learning the auditory form exhibited decreasing reliance on bilateral PT sites across sessions. In the visual form, bilateral PT sites increased in activity between Session 1 and Session 2, then decreased in left PT activity from Session 2 to Session 3. Comparison of L2 laterality (as compared to L1 laterality) in auditory and visual groups failed to demonstrate greater RH lateralization for the visual versus auditory L2. These data establish a common role for Broca’s area in language acquisition irrespective of the perceptual form of the language and suggest that L2s are processed similar to first languages even when learned after the ‘‘critical period.’’ The right frontal cortex was not preferentially recruited by visual language after accounting for phonetic/structural complexity and performance. PMID:17129186

  12. Effect of transcranial direct current stimulation (tDCS) on MMN-indexed auditory discrimination: a pilot study.

    PubMed

    Impey, Danielle; Knott, Verner

    2015-08-01

    Membrane potentials and brain plasticity are basic modes of cerebral information processing. Both can be externally (non-invasively) modulated by weak transcranial direct current stimulation (tDCS). Polarity-dependent tDCS-induced reversible circumscribed increases and decreases in cortical excitability and functional changes have been observed following stimulation of motor and visual cortices but relatively little research has been conducted with respect to the auditory cortex. The aim of this pilot study was to examine the effects of tDCS on auditory sensory discrimination in healthy participants (N = 12) assessed with the mismatch negativity (MMN) brain event-related potential (ERP). In a randomized, double-blind, sham-controlled design, participants received anodal tDCS over the primary auditory cortex (2 mA for 20 min) in one session and 'sham' stimulation (i.e., no stimulation except initial ramp-up for 30 s) in the other session. MMN elicited by changes in auditory pitch was found to be enhanced after receiving anodal tDCS compared to 'sham' stimulation, with the effects being evidenced in individuals with relatively reduced (vs. increased) baseline amplitudes and with relatively small (vs. large) pitch deviants. Additional studies are needed to further explore relationships between tDCS-related parameters, auditory stimulus features and individual differences prior to assessing the utility of this tool for treating auditory processing deficits in psychiatric and/or neurological disorders.

  13. Infants’ brain responses to speech suggest Analysis by Synthesis

    PubMed Central

    Kuhl, Patricia K.; Ramírez, Rey R.; Bosseler, Alexis; Lin, Jo-Fu Lotus; Imada, Toshiaki

    2014-01-01

    Historic theories of speech perception (Motor Theory and Analysis by Synthesis) invoked listeners’ knowledge of speech production to explain speech perception. Neuroimaging data show that adult listeners activate motor brain areas during speech perception. In two experiments using magnetoencephalography (MEG), we investigated motor brain activation, as well as auditory brain activation, during discrimination of native and nonnative syllables in infants at two ages that straddle the developmental transition from language-universal to language-specific speech perception. Adults were also tested in Experiment 1. MEG data revealed that 7-mo-old infants activate auditory (superior temporal) as well as motor brain areas (Broca’s area, cerebellum) in response to speech, and equivalently for native and nonnative syllables. However, in 11- and 12-mo-old infants, native speech activates auditory brain areas to a greater degree than nonnative, whereas nonnative speech activates motor brain areas to a greater degree than native speech. This double dissociation in 11- to 12-mo-old infants matches the pattern of results obtained in adult listeners. Our infant data are consistent with Analysis by Synthesis: auditory analysis of speech is coupled with synthesis of the motor plans necessary to produce the speech signal. The findings have implications for: (i) perception-action theories of speech perception, (ii) the impact of “motherese” on early language learning, and (iii) the “social-gating” hypothesis and humans’ development of social understanding. PMID:25024207

  14. Functional auditory disorders.

    PubMed

    Baguley, D M; Cope, T E; McFerran, D J

    2016-01-01

    There are a number of auditory symptom syndromes that can develop without an organic basis. Some of these, such as nonorganic hearing loss, affect populations similar to those presenting with functional somatosensory and motor symptoms, while others, such as musical hallucination, affect populations with a significantly different demographic and require different treatment strategies. Many of these conditions owe their origin to measurably abnormal peripheral sensory pathology or brain network activity, but their pathological impact is often due, at least in part, to overamplification of the salience of these phenomena. For each syndrome, this chapter briefly outlines a definition, demographics, investigations, putative mechanisms, and treatment strategies. Consideration is given to what extent they can be considered to have a functional basis. Treatments are in many cases pragmatic and rudimentary, needing more work to be done in integrating insights from behavioral and cognitive psychology to auditory neuroscience. The audiology literature has historically equated the term functional with malingering, although this perception is, thankfully, slowly changing. These disorders transcend the disciplines of audiology, otorhinolaryngology, neurology and psychiatry, and a multidisciplinary approach is often rewarding. © 2016 Elsevier B.V. All rights reserved.

  15. An auditory multiclass brain-computer interface with natural stimuli: Usability evaluation with healthy participants and a motor impaired end user.

    PubMed

    Simon, Nadine; Käthner, Ivo; Ruf, Carolin A; Pasqualotto, Emanuele; Kübler, Andrea; Halder, Sebastian

    2014-01-01

    Brain-computer interfaces (BCIs) can serve as muscle-independent communication aids. Persons who are unable to control their eye muscles (e.g., in the completely locked-in state), or who have severe visual impairments for other reasons, need BCI systems that do not rely on the visual modality. For this reason, BCIs that employ auditory stimuli have been suggested. In this study, a multiclass BCI spelling system was implemented that uses animal voices with directional cues to code rows and columns of a letter matrix. To reveal possible training effects with the system, 11 healthy participants performed spelling tasks on 2 consecutive days. In a second step, the system was tested by a participant with amyotrophic lateral sclerosis (ALS) in two sessions. In the first session, healthy participants spelled with an average accuracy of 76% (3.29 bits/min), which increased to 90% (4.23 bits/min) on the second day. Spelling accuracy for the participant with ALS was 20% in the first and 47% in the second session. The results indicate a strong training effect for both the healthy participants and the participant with ALS. While healthy participants reached high accuracies in the first and second sessions, accuracies for the participant with ALS were not sufficient for satisfactory communication in either session. More training sessions might be needed to improve spelling accuracies. The study demonstrated the feasibility of the auditory BCI with healthy users and stressed the importance of training with auditory multiclass BCIs, especially for potential end users affected by disease.
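    Bits/min figures like those reported above are information transfer rates; the Wolpaw formula is a common way to compute them. This is a generic sketch — the abstract does not state which formula or matrix size the study actually used:

```python
import math

def wolpaw_itr(n_classes, accuracy, selections_per_min):
    """Information transfer rate (bits/min) per the Wolpaw formula.

    n_classes          : number of selectable targets
    accuracy           : probability of a correct selection, in (0, 1]
    selections_per_min : how many selections the user makes per minute
    """
    p, n = accuracy, n_classes
    bits = math.log2(n)
    if 0.0 < p < 1.0:
        # penalty for errors, assuming mistakes spread evenly over targets
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * selections_per_min

# A perfectly accurate 36-target speller transfers log2(36) bits
# per selection; chance-level performance transfers zero.
```

    Note that the rate rises with both accuracy and selection speed, which is why the training effect reported above improves bits/min as well as percent correct.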

  16. Neuropsychopharmacology of auditory hallucinations: insights from pharmacological functional MRI and perspectives for future research.

    PubMed

    Johnsen, Erik; Hugdahl, Kenneth; Fusar-Poli, Paolo; Kroken, Rune A; Kompus, Kristiina

    2013-01-01

    Experiencing auditory verbal hallucinations is a prominent symptom in schizophrenia that also occurs in subjects at enhanced risk for psychosis and in the general population. Drug treatment of auditory hallucinations is challenging, because the current understanding is limited with respect to the neural mechanisms involved, as well as how CNS drugs, such as antipsychotics, influence the subjective experience and neurophysiology of hallucinations. In this article, the authors review studies of the effect of antipsychotic medication on brain activation as measured with functional MRI in patients with auditory verbal hallucinations. First, the authors examine the neural correlates of ongoing auditory hallucinations. Then, the authors critically discuss studies addressing the antipsychotic effect on the neural correlates of complex cognitive tasks. Current evidence suggests that blood oxygen level-dependant effects of antipsychotic drugs reflect specific, regional effects but studies on the neuropharmacology of auditory hallucinations are scarce. Future directions for pharmacological neuroimaging of auditory hallucinations are discussed.

  17. Analysis of the influence of memory content of auditory stimuli on the memory content of EEG signal.

    PubMed

    Namazi, Hamidreza; Khosrowabadi, Reza; Hussaini, Jamal; Habibi, Shaghayegh; Farid, Ali Akhavan; Kulish, Vladimir V

    2016-08-30

    One of the major challenges in brain research is to relate the structural features of an auditory stimulus to structural features of the electroencephalogram (EEG) signal. Memory content is an important feature of the EEG signal and, accordingly, of the brain. On the other hand, memory content can also be defined for the stimulus itself. Despite all the work done on the effects of stimuli on the human EEG and brain memory, no study has addressed the memory of the stimulus or the relationship that may exist between the memory content of the stimulus and the memory content of the EEG signal. For this purpose we consider the Hurst exponent as the measure of memory. This study reveals the plasticity of human EEG signals in relation to auditory stimuli. For the first time we demonstrate that the memory content of an EEG signal shifts towards the memory content of the auditory stimulus used. The results of this analysis showed that an auditory stimulus with higher memory content causes a larger increment in the memory content of the EEG signal. To verify this result, we use approximate entropy as an indicator of time-series randomness. The capability observed in this research can be further investigated in relation to human memory.
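    The abstract uses the Hurst exponent as its memory measure but does not say how it was estimated. One common estimator is rescaled-range (R/S) analysis; the sketch below is a generic illustration of that technique, not the authors' pipeline (the `hurst_rs` name and dyadic chunking scheme are assumptions). H near 0.5 indicates no memory; H above 0.5 indicates persistent, long-memory behavior.

    ```python
    import numpy as np

    def hurst_rs(x, min_chunk=8):
        """Estimate the Hurst exponent by rescaled-range (R/S) analysis."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        sizes, rs_means = [], []
        size = min_chunk
        while size <= n // 2:
            rs = []
            for start in range(0, n - size + 1, size):
                chunk = x[start:start + size]
                dev = np.cumsum(chunk - chunk.mean())  # cumulative deviation
                r = dev.max() - dev.min()              # range of the deviation
                s = chunk.std()                        # scale by chunk std
                if s > 0:
                    rs.append(r / s)
            if rs:
                sizes.append(size)
                rs_means.append(np.mean(rs))
            size *= 2
        # H is the slope of log E[R/S] versus log(chunk size)
        slope, _ = np.polyfit(np.log(sizes), np.log(rs_means), 1)
        return slope
    ```

    On white noise this yields roughly 0.5, while an integrated (random-walk) series yields a substantially larger estimate, consistent with its stronger memory.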

  18. Analysis of the influence of memory content of auditory stimuli on the memory content of EEG signal

    PubMed Central

    Namazi, Hamidreza; Kulish, Vladimir V.

    2016-01-01

    One of the major challenges in brain research is to relate the structural features of an auditory stimulus to structural features of the electroencephalogram (EEG) signal. Memory content is an important feature of the EEG signal and, accordingly, of the brain. On the other hand, memory content can also be defined for the stimulus itself. Despite all the work done on the effects of stimuli on the human EEG and brain memory, no study has addressed the memory of the stimulus or the relationship that may exist between the memory content of the stimulus and the memory content of the EEG signal. For this purpose we consider the Hurst exponent as the measure of memory. This study reveals the plasticity of human EEG signals in relation to auditory stimuli. For the first time we demonstrate that the memory content of an EEG signal shifts towards the memory content of the auditory stimulus used. The results of this analysis showed that an auditory stimulus with higher memory content causes a larger increment in the memory content of the EEG signal. To verify this result, we use approximate entropy as an indicator of time-series randomness. The capability observed in this research can be further investigated in relation to human memory. PMID:27528219

  19. Auditory Technology and Its Impact on Bilingual Deaf Education

    ERIC Educational Resources Information Center

    Mertes, Jennifer

    2015-01-01

    Brain imaging studies suggest that children can simultaneously develop, learn, and use two languages. A visual language, such as American Sign Language (ASL), facilitates development at the earliest possible moments in a child's life. Spoken language development can be delayed due to diagnostic evaluations, device fittings, and auditory skill…

  20. Brain-Computer Interface application: auditory serial interface to control a two-class motor-imagery-based wheelchair.

    PubMed

    Ron-Angevin, Ricardo; Velasco-Álvarez, Francisco; Fernández-Rodríguez, Álvaro; Díaz-Estrella, Antonio; Blanca-Mena, María José; Vizcaíno-Martín, Francisco Javier

    2017-05-30

    Certain diseases affect brain areas that control the movements of the patients' body, thereby limiting their autonomy and communication capacity. Research in the field of Brain-Computer Interfaces aims to provide patients with an alternative communication channel based not on muscular activity but on the processing of brain signals. Through these systems, subjects can control external devices such as spellers to communicate, robotic prostheses to restore limb movements, or domotic systems. The present work focuses on the non-muscular control of a robotic wheelchair. A proposal to control a wheelchair through a Brain-Computer Interface based on the discrimination of only two mental tasks is presented in this study. The wheelchair displacement is performed with discrete movements. The control signals used are sensorimotor rhythms modulated through a right-hand motor imagery task or a mental idle state. The peculiarity of the control system is that it is based on a serial auditory interface that provides the user with four navigation commands. The use of two mental tasks to select commands may facilitate control and reduce error rates compared to other endogenous control systems for wheelchairs. Seventeen subjects initially participated in the study; nine of them completed the three sessions of the proposed protocol. After the first calibration session, seven subjects were discarded due to low control of their electroencephalographic signals; nine out of the ten remaining subjects controlled a virtual wheelchair during the second session, and these same nine subjects achieved a mean accuracy above 0.83 in the real wheelchair control session. The results suggest that more extensive training with the proposed control system can be an effective and safe option that will allow the displacement of a wheelchair in a controlled environment for potential users suffering from some types of motor neuron diseases.

  1. Processing and Analysis of Multichannel Extracellular Neuronal Signals: State-of-the-Art and Challenges

    PubMed Central

    Mahmud, Mufti; Vassanelli, Stefano

    2016-01-01

    In recent years, multichannel neuronal signal acquisition systems have allowed scientists to focus on research questions which were otherwise impossible. They act as a powerful means to study brain (dys)functions in in vivo and in vitro animal models. Typically, each session of electrophysiological experiments with a multichannel data acquisition system generates a large amount of raw data. For example, a 128-channel signal acquisition system with 16-bit A/D conversion and a 20 kHz sampling rate will generate approximately 17 GB of data per hour (uncompressed). This poses an important and challenging problem of inferring conclusions from the large amounts of acquired data. Thus, automated signal processing and analysis tools are becoming a key component in neuroscience research, facilitating extraction of relevant information from neuronal recordings in a reasonable time. The purpose of this review is to introduce the reader to the current state of the art of open-source packages for (semi)automated processing and analysis of multichannel extracellular neuronal signals (i.e., neuronal spikes, local field potentials, electroencephalogram, etc.), and the existing neuroinformatics infrastructure for tool and data sharing. The review concludes by pinpointing some major challenges that are being faced, including the development of novel benchmarking techniques, cloud-based distributed processing and analysis tools, and novel means to share and standardize data. PMID:27313507
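    The ~17 GB/hour figure can be checked directly: 128 channels at 2 bytes per sample and 20,000 samples per second comes to about 18.4 GB per hour in decimal units, or about 17.2 when expressed in binary GiB, which matches the quoted value. A one-line sketch of the arithmetic:

    ```python
    def raw_data_per_hour_bytes(n_channels: int, bits_per_sample: int,
                                sample_rate_hz: int) -> int:
        """Uncompressed bytes produced by one hour of continuous acquisition."""
        return n_channels * (bits_per_sample // 8) * sample_rate_hz * 3600

    total = raw_data_per_hour_bytes(128, 16, 20_000)  # 18,432,000,000 bytes
    gib = total / 2**30                               # about 17.2 GiB
    ```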

  2. Mapping Frequency-Specific Tone Predictions in the Human Auditory Cortex at High Spatial Resolution.

    PubMed

    Berlot, Eva; Formisano, Elia; De Martino, Federico

    2018-05-23

    Auditory inputs reaching our ears are often incomplete, but our brains nevertheless transform them into rich and complete perceptual phenomena such as meaningful conversations or pleasurable music. It has been hypothesized that our brains extract regularities in inputs, which enables us to predict the upcoming stimuli, leading to efficient sensory processing. However, it is unclear whether tone predictions are encoded with similar specificity as perceived signals. Here, we used high-field fMRI to investigate whether human auditory regions encode one of the most defining characteristics of auditory perception: the frequency of predicted tones. Two pairs of tone sequences were presented in ascending or descending directions, with the last tone omitted in half of the trials. Every pair of incomplete sequences contained identical sounds, but was associated with different expectations about the last tone (a high- or low-frequency target). This allowed us to disambiguate predictive signaling from sensory-driven processing. We recorded fMRI responses from eight female participants during passive listening to complete and incomplete sequences. Inspection of the specificity and spatial patterns of responses revealed that target frequencies were encoded similarly during their presentations and during omissions, suggesting frequency-specific encoding of predicted tones in the auditory cortex (AC). Importantly, frequency specificity of predictive signaling was observed already at the earliest level of the auditory cortical hierarchy: in the primary AC. Our findings provide evidence for content-specific predictive processing starting at the earliest cortical levels. SIGNIFICANCE STATEMENT Given the abundance of sensory information around us in any given moment, it has been proposed that our brain uses contextual information to prioritize and form predictions about incoming signals. However, there remains a surprising lack of understanding of the specificity and content of such

  3. The integration processing of the visual and auditory information in videos of real-world events: an ERP study.

    PubMed

    Liu, Baolin; Wang, Zhongning; Jin, Zhixing

    2009-09-11

    In real life, the human brain usually receives information through visual and auditory channels and processes this multisensory information, but studies on the integrated processing of dynamic visual and auditory information are relatively few. In this paper, we designed an experiment in which common-scenario, real-world videos with matched and mismatched actions (images) and sounds were presented as stimuli, aiming to study the integrated processing of synchronized visual and auditory information in videos of real-world events in the human brain using event-related potential (ERP) methods. Experimental results showed that videos with mismatched actions (images) and sounds elicited a larger P400 than videos with matched actions (images) and sounds. We believe that the P400 waveform might be related to the cognitive integration of mismatched multisensory information in the human brain. The results also indicated that synchronized multisensory streams interfere with each other, which influences the outcome of cognitive integration.

  4. Rodent Auditory Perception: Critical Band Limitations and Plasticity

    PubMed Central

    King, Julia; Insanally, Michele; Jin, Menghan; Martins, Ana Raquel O.; D'amour, James A.; Froemke, Robert C.

    2015-01-01

    What do animals hear? While it remains challenging to adequately assess sensory perception in animal models, it is important to determine perceptual abilities in model systems to understand how physiological processes and plasticity relate to perception, learning, and cognition. Here we discuss hearing in rodents, reviewing previous and recent behavioral experiments querying acoustic perception in rats and mice, and examining the relation between behavioral data and electrophysiological recordings from the central auditory system. We focus on measurements of critical bands, which are psychoacoustic phenomena that seem to have a neural basis in the functional organization of the cochlea and the inferior colliculus. We then discuss how behavioral training, brain stimulation, and neuropathology impact auditory processing and perception. PMID:25827498

  5. Long-range correlation properties in timing of skilled piano performance: the influence of auditory feedback and deep brain stimulation.

    PubMed

    Herrojo Ruiz, María; Hong, Sang Bin; Hennig, Holger; Altenmüller, Eckart; Kühn, Andrea A

    2014-01-01

    Unintentional timing deviations during musical performance can be conceived of as timing errors. However, recent research on humanizing computer-generated music has demonstrated that timing fluctuations that exhibit long-range temporal correlations (LRTC) are preferred by human listeners. This preference can be accounted for by the ubiquitous presence of LRTC in human tapping and rhythmic performances. Interestingly, the manifestation of LRTC in tapping behavior seems to be driven in a subject-specific manner by the LRTC properties of resting-state background cortical oscillatory activity. In this framework, the current study aimed to investigate whether propagation of timing deviations during the skilled, memorized piano performance (without metronome) of 17 professional pianists exhibits LRTC and whether the structure of the correlations is influenced by the presence or absence of auditory feedback. As an additional goal, we set out to investigate the influence of altering the dynamics along the cortico-basal-ganglia-thalamo-cortical network via deep brain stimulation (DBS) on the LRTC properties of musical performance. Specifically, we investigated temporal deviations during the skilled piano performance of a non-professional pianist who was treated with subthalamic-deep brain stimulation (STN-DBS) due to severe Parkinson's disease, with predominant tremor affecting his right upper extremity. In the tremor-affected right hand, the timing fluctuations of the performance exhibited random correlations with DBS OFF. By contrast, DBS restored long-range dependency in the temporal fluctuations, corresponding with the general motor improvement on DBS. Overall, the present investigations demonstrate the presence of LRTC in skilled piano performances, indicating that unintentional temporal deviations are correlated over a wide range of time scales. This phenomenon is stable after removal of the auditory feedback, but is altered by STN-DBS, which suggests that cortico
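    Long-range temporal correlations of the kind discussed above are conventionally quantified with detrended fluctuation analysis (DFA), where the scaling exponent alpha is about 0.5 for uncorrelated (random) fluctuations and above 0.5 for LRTC. The abstract does not name the authors' estimator, so the following is a generic DFA sketch under that assumption, not their pipeline:

    ```python
    import numpy as np

    def dfa_alpha(x, scales=None):
        """DFA scaling exponent: slope of log F(s) versus log s.
        alpha ~ 0.5 -> uncorrelated; alpha > 0.5 -> long-range correlations."""
        x = np.asarray(x, dtype=float)
        y = np.cumsum(x - x.mean())  # integrated profile of the series
        n = len(y)
        if scales is None:
            scales = [2**k for k in range(4, int(np.log2(n // 4)) + 1)]
        fluct = []
        for s in scales:
            n_win = n // s
            segs = y[:n_win * s].reshape(n_win, s)
            t = np.arange(s)
            resid_sq = []
            for seg in segs:
                coef = np.polyfit(t, seg, 1)  # linear detrend per window
                resid_sq.append(np.mean((seg - np.polyval(coef, t))**2))
            fluct.append(np.sqrt(np.mean(resid_sq)))  # RMS fluctuation F(s)
        alpha, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
        return alpha
    ```

    Applied to inter-keystroke timing deviations, a result near 0.5 would correspond to the random correlations reported for the tremor-affected hand with DBS OFF, and a larger exponent to the restored long-range dependency with DBS ON.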

  6. Structural and functional correlates for language efficiency in auditory word processing.

    PubMed

    Jung, JeYoung; Kim, Sunmi; Cho, Hyesuk; Nam, Kichun

    2017-01-01

    This study aims to provide a convergent understanding of the neural basis of auditory word processing efficiency using multimodal imaging. We investigated the structural and functional correlates of word processing efficiency in healthy individuals. We acquired two structural imaging modalities (T1-weighted imaging and diffusion tensor imaging) and functional magnetic resonance imaging (fMRI) during auditory word processing (phonological and semantic tasks). Our results showed that better phonological performance was predicted by greater thalamus activity. In contrast, better semantic performance was associated with less activation in the left posterior middle temporal gyrus (pMTG), supporting the neural efficiency hypothesis that better task performance requires less brain activation. Furthermore, our network analysis revealed that a semantic network including the left anterior temporal lobe (ATL), dorsolateral prefrontal cortex (DLPFC), and pMTG was correlated with semantic efficiency. Notably, this network operated in a neurally efficient manner during auditory word processing. Structurally, the DLPFC and cingulum contributed to word processing efficiency, and the parietal cortex also showed a significant association with it. Our results demonstrate that the two aspects of word processing efficiency, phonology and semantics, are supported by different brain regions and, importantly, that each region serves efficiency in a different way depending on the aspect of word processing. Our findings suggest that word processing efficiency is achieved through the collaboration of multiple brain regions involved in language and general cognitive function, both structurally and functionally.

  7. Structural and functional correlates for language efficiency in auditory word processing

    PubMed Central

    Kim, Sunmi; Cho, Hyesuk; Nam, Kichun

    2017-01-01

    This study aims to provide a convergent understanding of the neural basis of auditory word processing efficiency using multimodal imaging. We investigated the structural and functional correlates of word processing efficiency in healthy individuals. We acquired two structural imaging modalities (T1-weighted imaging and diffusion tensor imaging) and functional magnetic resonance imaging (fMRI) during auditory word processing (phonological and semantic tasks). Our results showed that better phonological performance was predicted by greater thalamus activity. In contrast, better semantic performance was associated with less activation in the left posterior middle temporal gyrus (pMTG), supporting the neural efficiency hypothesis that better task performance requires less brain activation. Furthermore, our network analysis revealed that a semantic network including the left anterior temporal lobe (ATL), dorsolateral prefrontal cortex (DLPFC), and pMTG was correlated with semantic efficiency. Notably, this network operated in a neurally efficient manner during auditory word processing. Structurally, the DLPFC and cingulum contributed to word processing efficiency, and the parietal cortex also showed a significant association with it. Our results demonstrate that the two aspects of word processing efficiency, phonology and semantics, are supported by different brain regions and, importantly, that each region serves efficiency in a different way depending on the aspect of word processing. Our findings suggest that word processing efficiency is achieved through the collaboration of multiple brain regions involved in language and general cognitive function, both structurally and functionally. PMID:28892503

  8. Brain-computer interfaces using capacitive measurement of visual or auditory steady-state responses

    NASA Astrophysics Data System (ADS)

    Baek, Hyun Jae; Kim, Hyun Seok; Heo, Jeong; Lim, Yong Gyu; Park, Kwang Suk

    2013-04-01

    Objective. Brain-computer interface (BCI) technologies have been intensely studied to provide alternative communication tools entirely independent of neuromuscular activities. Current BCI technologies use electroencephalogram (EEG) acquisition methods that require unpleasant gel injections, impractical preparations and clean-up procedures. The next generation of BCI technologies requires practical, user-friendly, nonintrusive EEG platforms in order to facilitate the application of laboratory work in real-world settings. Approach. A capacitive electrode that does not require an electrolytic gel or direct electrode-scalp contact is a potential alternative to the conventional wet electrode in future BCI systems. We have proposed a new capacitive EEG electrode that contains a conductive polymer-sensing surface, which enhances electrode performance. This paper presents results from five subjects who exhibited visual or auditory steady-state responses according to BCI using these new capacitive electrodes. The steady-state visual evoked potential (SSVEP) spelling system and the auditory steady-state response (ASSR) binary decision system were employed. Main results. Offline tests demonstrated BCI performance high enough to be used in a BCI system (accuracy: 95.2%, ITR: 19.91 bpm for SSVEP BCI (6 s), accuracy: 82.6%, ITR: 1.48 bpm for ASSR BCI (14 s)) with the analysis time being slightly longer than that when wet electrodes were employed with the same BCI system (accuracy: 91.2%, ITR: 25.79 bpm for SSVEP BCI (4 s), accuracy: 81.3%, ITR: 1.57 bpm for ASSR BCI (12 s)). Subjects performed online BCI under the SSVEP paradigm in copy spelling mode and under the ASSR paradigm in selective attention mode with a mean information transfer rate (ITR) of 17.78 ± 2.08 and 0.7 ± 0.24 bpm, respectively. Significance. The results of these experiments demonstrate the feasibility of using our capacitive EEG electrode in BCI systems. This capacitive electrode may become a flexible and

  9. Automated brain tumor segmentation using spatial accuracy-weighted hidden Markov Random Field.

    PubMed

    Nie, Jingxin; Xue, Zhong; Liu, Tianming; Young, Geoffrey S; Setayesh, Kian; Guo, Lei; Wong, Stephen T C

    2009-09-01

    A variety of algorithms have been proposed for brain tumor segmentation from multi-channel sequences; however, most of them require isotropic or pseudo-isotropic resolution of the MR images. Although co-registration and interpolation of low-resolution sequences, such as T2-weighted images, onto the space of a high-resolution image, such as a T1-weighted image, can be performed prior to segmentation, the results are usually limited by partial volume effects due to interpolation of the low-resolution images. To improve the quality of tumor segmentation in clinical applications, where low-resolution sequences are commonly used together with high-resolution images, we propose an algorithm based on a Spatial accuracy-weighted Hidden Markov random field and Expectation maximization (SHE) approach for both automated tumor and enhanced-tumor segmentation. SHE incorporates the spatial interpolation accuracy of the low-resolution images into the optimization procedure of the Hidden Markov Random Field (HMRF) to segment tumor using multi-channel MR images with different resolutions, e.g., high-resolution T1-weighted and low-resolution T2-weighted images. In experiments, we evaluated this algorithm using a set of simulated multi-channel brain MR images with known ground-truth tissue segmentation and also applied it to a dataset of MR images obtained during clinical trials of brain tumor chemotherapy. The results show that more accurate tumor segmentation can be obtained compared with conventional multi-channel segmentation algorithms.

  10. Reality of auditory verbal hallucinations.

    PubMed

    Raij, Tuukka T; Valkonen-Korhonen, Minna; Holi, Matti; Therman, Sebastian; Lehtonen, Johannes; Hari, Riitta

    2009-11-01

    Distortion of the sense of reality, actualized in delusions and hallucinations, is the key feature of psychosis, but the underlying neuronal correlates remain largely unknown. We studied 11 highly functioning subjects with schizophrenia or schizoaffective disorder while they rated the reality of auditory verbal hallucinations (AVH) during functional magnetic resonance imaging (fMRI). The subjective reality of AVH correlated strongly and specifically with the hallucination-related activation strength of the inferior frontal gyri (IFG), including Broca's language region. Furthermore, how real a hallucination the subjects experienced depended on the hallucination-related coupling between the IFG, the ventral striatum, the auditory cortex, the right posterior temporal lobe, and the cingulate cortex. Our findings suggest that the subjective reality of AVH is related to motor mechanisms of speech comprehension, with contributions from sensory and salience-detection-related brain regions as well as circuitries related to self-monitoring and the experience of agency.

  11. Reality of auditory verbal hallucinations

    PubMed Central

    Valkonen-Korhonen, Minna; Holi, Matti; Therman, Sebastian; Lehtonen, Johannes; Hari, Riitta

    2009-01-01

    Distortion of the sense of reality, actualized in delusions and hallucinations, is the key feature of psychosis, but the underlying neuronal correlates remain largely unknown. We studied 11 highly functioning subjects with schizophrenia or schizoaffective disorder while they rated the reality of auditory verbal hallucinations (AVH) during functional magnetic resonance imaging (fMRI). The subjective reality of AVH correlated strongly and specifically with the hallucination-related activation strength of the inferior frontal gyri (IFG), including Broca's language region. Furthermore, how real a hallucination the subjects experienced depended on the hallucination-related coupling between the IFG, the ventral striatum, the auditory cortex, the right posterior temporal lobe, and the cingulate cortex. Our findings suggest that the subjective reality of AVH is related to motor mechanisms of speech comprehension, with contributions from sensory and salience-detection-related brain regions as well as circuitries related to self-monitoring and the experience of agency. PMID:19620178

  12. Summary statistics in auditory perception.

    PubMed

    McDermott, Josh H; Schemitsch, Michael; Simoncelli, Eero P

    2013-04-01

    Sensory signals are transduced at high resolution, but their structure must be stored in a more compact format. Here we provide evidence that the auditory system summarizes the temporal details of sounds using time-averaged statistics. We measured discrimination of 'sound textures' that were characterized by particular statistical properties, as normally result from the superposition of many acoustic features in auditory scenes. When listeners discriminated examples of different textures, performance improved with excerpt duration. In contrast, when listeners discriminated different examples of the same texture, performance declined with duration, a paradoxical result given that the information available for discrimination grows with duration. These results indicate that once these sounds are of moderate length, the brain's representation is limited to time-averaged statistics, which, for different examples of the same texture, converge to the same values with increasing duration. Such statistical representations produce good categorical discrimination, but limit the ability to discern temporal detail.
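    The paradoxical result above, worse same-texture discrimination at longer durations, follows from a property of time-averaged statistics: as excerpts lengthen, the statistics of independent excerpts of the same texture converge to common values. A toy numerical illustration of that convergence (the "texture" here is plain gaussian noise with a crude abs() envelope, purely an assumption for demonstration, not the authors' stimuli):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def excerpt_stats(duration_samples: int) -> np.ndarray:
        """Time-averaged summary statistics (mean, std) of one excerpt."""
        env = np.abs(rng.standard_normal(duration_samples))
        return np.array([env.mean(), env.std()])

    def stat_distance(duration_samples: int) -> float:
        """Distance between summary statistics of two excerpts of the same texture."""
        return float(np.linalg.norm(
            excerpt_stats(duration_samples) - excerpt_stats(duration_samples)))

    # Average statistic-space distance for short versus long excerpts
    short = np.mean([stat_distance(100) for _ in range(200)])
    long_ = np.mean([stat_distance(10_000) for _ in range(200)])
    ```

    Because the distance shrinks with duration, a listener who retains only such time-averaged statistics would find two long excerpts of the same texture nearly indistinguishable, as reported.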

  13. Auditory Evoked Potentials from the Frog Eighth Nerve

    DTIC Science & Technology

    1989-09-01

    superior olivary nucleus, 10-100 ms in the torus semicircularis, 30-120 ms in the thalamus, and greater than 30 ms in the telencephalon. ... Mudry, K.M. and Capranica, R.R., Evoked auditory activity within the telencephalon of the bullfrog (Rana catesbeiana), Brain Res., 182 (1980).

  14. Multichannel fiber-based diffuse reflectance spectroscopy for the rat brain exposed to a laser-induced shock wave: comparison between ipsi- and contralateral hemispheres

    NASA Astrophysics Data System (ADS)

    Miyaki, Mai; Kawauchi, Satoko; Okuda, Wataru; Nawashiro, Hiroshi; Takemura, Toshiya; Sato, Shunichi; Nishidate, Izumi

    2015-03-01

    Due to the considerable increase in terrorism using explosive devices, blast-induced traumatic brain injury (bTBI) is receiving much attention worldwide. However, little is known about the pathology and mechanism of bTBI. In our previous study, we found that cortical spreading depolarization (CSD) occurred in the hemisphere exposed to a laser-induced shock wave (LISW), followed by long-lasting hypoxemia-oligemia. However, there is no information on the events occurring in the contralateral hemisphere. In this study, we performed multichannel fiber-based diffuse reflectance spectroscopy on the rat brain exposed to an LISW and compared the results for the ipsilateral and contralateral hemispheres. A pair of optical fibers was placed on each of the exposed right and left parietal bones; white light was delivered to the brain through the source fibers, and diffuse reflectance signals were collected with the detection fibers for both hemispheres. An LISW was applied to the left (ipsilateral) hemisphere. By analyzing the reflectance signals, we evaluated the occurrence of CSD, blood volume, and oxygen saturation for both hemispheres. In the ipsilateral hemisphere, we observed the occurrence of CSD and long-lasting hypoxemia-oligemia in all rats examined (n=8), as in our previous study. In the contralateral hemisphere, on the other hand, no CSD was observed, but we observed oligemia in 7 of 8 rats and hypoxemia in 1 of 8 rats, suggesting a mechanism causing hypoxemia and/or oligemia in the contralateral hemisphere that is not directly associated with CSD.

  15. Compression of auditory space during forward self-motion.

    PubMed

    Teramoto, Wataru; Sakamoto, Shuichi; Furune, Fumimasa; Gyoba, Jiro; Suzuki, Yôiti

    2012-01-01

    Spatial inputs from the auditory periphery can change with movements of the head or whole body relative to the sound source. Nevertheless, humans perceive a stable auditory environment and react appropriately to a sound source. This suggests that the inputs are reinterpreted in the brain while being integrated with information about the movements. Little is known, however, about how these movements modulate auditory perceptual processing. Here, we investigate the effect of linear acceleration on auditory space representation. Participants were passively transported forward/backward at constant accelerations using a robotic wheelchair. An array of loudspeakers was aligned parallel to the motion direction along a wall to the right of the listener. A short noise burst was presented during the self-motion from one of the loudspeakers when the listener's physical coronal plane reached the location of one of the speakers (null point). In Experiments 1 and 2, the participants indicated in which direction the sound was presented, forward or backward relative to their subjective coronal plane. The results showed that the sound position aligned with the subjective coronal plane was displaced ahead of the null point only during forward self-motion, and that the magnitude of the displacement increased with increasing acceleration. Experiment 3 investigated the structure of the auditory space in the traveling direction during forward self-motion. The sounds were presented at various distances from the null point. The participants indicated the perceived sound location by pointing with a rod. All the sounds that were actually located in the traveling direction were perceived as being biased towards the null point. These results suggest a distortion of the auditory space in the direction of movement during forward self-motion. The underlying mechanism might involve anticipatory spatial shifts in the auditory receptive field locations driven by afferent signals from

  16. Preattentive extraction of abstract feature conjunctions from auditory stimulation as reflected by the mismatch negativity (MMN).

    PubMed

    Paavilainen, P; Simola, J; Jaramillo, M; Näätänen, R; Winkler, I

    2001-03-01

    Brain mechanisms extracting invariant information from varying auditory inputs were studied using the mismatch-negativity (MMN) brain response. We wished to determine whether the preattentive sound-analysis mechanisms, reflected by MMN, are capable of extracting invariant relationships based on abstract conjunctions between two sound features. The standard stimuli varied over a large range in frequency and intensity dimensions following the rule that the higher the frequency, the louder the intensity. The occasional deviant stimuli violated this frequency-intensity relationship and elicited an MMN. The results demonstrate that preattentive processing of auditory stimuli extends to unexpectedly complex relationships between the stimulus features.

  17. Functional associations at global brain level during perception of an auditory illusion by applying maximal information coefficient

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Joydeep; Pereda, Ernesto; Ioannou, Christos

    2018-02-01

    Maximal information coefficient (MIC) is a recently introduced information-theoretic measure of functional association with promising potential for application to high-dimensional complex data sets. Here, we applied MIC to reveal the nature of the functional associations between different brain regions during the perception of binaural beats (BB); BB is an auditory illusion that occurs when two sinusoidal tones of slightly different frequency are presented separately to each ear and an illusory beat at the difference frequency is perceived. We recorded sixty-four-channel EEG from two groups of participants, musicians and non-musicians, during the presentation of BB, and systematically varied the frequency difference from 1 Hz to 48 Hz. Participants were also presented with non-binaural beat (NBB) stimuli, in which the same frequency was presented to both ears. Across groups, as compared to NBB, (i) BB conditions produced the most robust changes in the MIC values at the whole-brain level when the frequency differences were in the classical alpha range (8-12 Hz), and (ii) the number of electrode pairs showing nonlinear associations decreased gradually with increasing frequency difference. Between groups, significant effects were found for BBs in the broad gamma frequency range (34-48 Hz), but no such effects were observed during NBB. Altogether, these results reveal the nature of functional associations at the whole-brain level during binaural beat perception and demonstrate the usefulness of MIC in characterizing interregional neural dependencies.
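    A binaural-beat stimulus of the kind described is straightforward to synthesize: one pure tone per ear, with the two frequencies differing by the desired beat frequency. A minimal sketch (the carrier frequency, duration, and sample rate below are arbitrary illustrative choices, not the study's parameters):

    ```python
    import numpy as np

    def binaural_beat(carrier_hz=400.0, beat_hz=10.0, duration_s=1.0, fs=44_100):
        """Left/right sine tones whose frequency difference equals beat_hz.
        The beat is illusory: it arises only when the two ears' signals
        are combined centrally, not in either channel alone."""
        t = np.arange(int(duration_s * fs)) / fs
        left = np.sin(2 * np.pi * carrier_hz * t)
        right = np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)
        return left, right

    # A 10 Hz (alpha-range) beat, the condition the study found most robust
    left, right = binaural_beat(carrier_hz=400.0, beat_hz=10.0)
    ```

    Sweeping `beat_hz` from 1 to 48 would reproduce the frequency-difference manipulation; setting it to 0 gives the NBB control, in which both ears receive the same tone.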

  18. Brain stem auditory-evoked response of the nonanesthetized dog.

    PubMed

    Marshall, A E

    1985-04-01

    The brain stem auditory-evoked response was measured from a group of 24 healthy dogs under conditions suitable for clinical diagnostic use. The waveforms were identified, and amplitude ratios, latencies, and interpeak latencies were analyzed. The group was subdivided into subgroups based on tranquilization, nontranquilization, sex, and weight. Differences were not observed among any of these subgroups. All dogs responded to the click stimulus from 30 dB to 90 dB, but only 62.5% of the dogs responded at 5 dB. The total number of peaks averaged 1.6 at 5 dB, increased linearly to 6.5 at 50 dB, and remained at 6.5 up to 90 dB. The frequency of recognizability of each wave was tabulated for each stimulus intensity tested; recognizability increased with increasing stimulus intensity. Amplitudes of waves increased with increasing stimulus intensity, but were highly variable. The 4th wave had the greatest amplitude at the lower stimulus intensities, and the 1st wave had the greatest amplitude at the higher stimulus intensities. The amplitude ratio of the 1st to 5th wave was greater than 1 at stimulus intensities of 50 dB or less, and was 1 at stimulus intensities greater than 50 dB. Interpeak latencies did not change with stimulus intensity. Peak latencies of the 1st to 6th waves averaged 2.03, 2.72, 3.23, 4.14, 4.41, and 6.05 ms, respectively, at a 5-dB hearing level; latencies of these 6 waves at 90 dB were 0.92, 1.79, 2.46, 3.03, 3.47, and 4.86 ms, respectively. Latency decreased by 0.009 to 0.014 ms/dB across waves.

  19. Auditory Reserve and the Legacy of Auditory Experience

    PubMed Central

    Skoe, Erika; Kraus, Nina

    2014-01-01

    Musical training during childhood has been linked to more robust encoding of sound later in life. We take this as evidence for an auditory reserve: a mechanism by which individuals capitalize on earlier life experiences to promote auditory processing. We assert that early auditory experiences guide how the reserve develops and is maintained over the lifetime. Experiences that occur after childhood, or which are limited in nature, are theorized to affect the reserve, although their influence on sensory processing may be less long-lasting and may potentially fade over time if not repeated. This auditory reserve may help to explain individual differences in how individuals cope with auditory impoverishment or loss of sensorineural function. PMID:25405381

  20. Rapid Effects of Hearing Song on Catecholaminergic Activity in the Songbird Auditory Pathway

    PubMed Central

    Matragrano, Lisa L.; Beaulieu, Michaël; Phillip, Jessica O.; Rae, Ali I.; Sanford, Sara E.; Sockman, Keith W.; Maney, Donna L.

    2012-01-01

    Catecholaminergic (CA) neurons innervate sensory areas and affect the processing of sensory signals. For example, in birds, CA fibers innervate the auditory pathway at each level, including the midbrain, thalamus, and forebrain. We have shown previously that in female European starlings, CA activity in the auditory forebrain can be enhanced by exposure to attractive male song for one week. It is not known, however, whether hearing song can initiate that activity more rapidly. Here, we exposed estrogen-primed, female white-throated sparrows to conspecific male song and looked for evidence of rapid synthesis of catecholamines in auditory areas. In one hemisphere of the brain, we used immunohistochemistry to detect the phosphorylation of tyrosine hydroxylase (TH), a rate-limiting enzyme in the CA synthetic pathway. We found that immunoreactivity for TH phosphorylated at serine 40 increased dramatically in the auditory forebrain, but not the auditory thalamus and midbrain, after 15 min of song exposure. In the other hemisphere, we used high pressure liquid chromatography to measure catecholamines and their metabolites. We found that two dopamine metabolites, dihydroxyphenylacetic acid and homovanillic acid, increased in the auditory forebrain but not the auditory midbrain after 30 min of exposure to conspecific song. Our results are consistent with the hypothesis that exposure to a behaviorally relevant auditory stimulus rapidly induces CA activity, which may play a role in auditory responses. PMID:22724011

  1. Developmental changes in distinguishing concurrent auditory objects.

    PubMed

    Alain, Claude; Theunissen, Eef L; Chevalier, Hélène; Batty, Magali; Taylor, Margot J

    2003-04-01

    Children have considerable difficulties in identifying speech in noise. In the present study, we examined age-related differences in central auditory functions that are crucial for parsing co-occurring auditory events using behavioral and event-related brain potential measures. Seventeen pre-adolescent children and 17 adults were presented with complex sounds containing multiple harmonics, one of which could be 'mistuned' so that it was no longer an integer multiple of the fundamental. Both children and adults were more likely to report hearing the mistuned harmonic as a separate sound with an increase in mistuning. However, children were less sensitive in detecting mistuning across all levels as revealed by lower d' scores than adults. The perception of two concurrent auditory events was accompanied by a negative wave that peaked at about 160 ms after sound onset. In both age groups, the negative wave, referred to as the 'object-related negativity' (ORN), increased in amplitude with mistuning. The ORN was larger in children than in adults despite a lower d' score. Together, the behavioral and electrophysiological results suggest that concurrent sound segregation is probably adult-like in pre-adolescent children, but that children are inefficient in processing the information following the detection of mistuning. These findings also suggest that processes involved in distinguishing concurrent auditory objects continue to mature during adolescence.

  2. Constructing Noise-Invariant Representations of Sound in the Auditory Pathway

    PubMed Central

    Rabinowitz, Neil C.; Willmore, Ben D. B.; King, Andrew J.; Schnupp, Jan W. H.

    2013-01-01

    Identifying behaviorally relevant sounds in the presence of background noise is one of the most important and poorly understood challenges faced by the auditory system. An elegant solution to this problem would be for the auditory system to represent sounds in a noise-invariant fashion. Since a major effect of background noise is to alter the statistics of the sounds reaching the ear, noise-invariant representations could be promoted by neurons adapting to stimulus statistics. Here we investigated the extent of neuronal adaptation to the mean and contrast of auditory stimulation as one ascends the auditory pathway. We measured these forms of adaptation by presenting complex synthetic and natural sounds, recording neuronal responses in the inferior colliculus and primary fields of the auditory cortex of anaesthetized ferrets, and comparing these responses with a sophisticated model of the auditory nerve. We find that the strength of both forms of adaptation increases as one ascends the auditory pathway. To investigate whether this adaptation to stimulus statistics contributes to the construction of noise-invariant sound representations, we also presented complex, natural sounds embedded in stationary noise, and used a decoding approach to assess the noise tolerance of the neuronal population code. We find that the code for complex sounds in the periphery is affected more by the addition of noise than the cortical code. We also find that noise tolerance is correlated with adaptation to stimulus statistics, so that populations that show the strongest adaptation to stimulus statistics are also the most noise-tolerant. This suggests that the increase in adaptation to sound statistics from auditory nerve to midbrain to cortex is an important stage in the construction of noise-invariant sound representations in the higher auditory brain. PMID:24265596

  3. Auditory, Tactile, and Audiotactile Information Processing Following Visual Deprivation

    ERIC Educational Resources Information Center

    Occelli, Valeria; Spence, Charles; Zampini, Massimiliano

    2013-01-01

    We highlight the results of those studies that have investigated the plastic reorganization processes that occur within the human brain as a consequence of visual deprivation, as well as how these processes give rise to behaviorally observable changes in the perceptual processing of auditory and tactile information. We review the evidence showing…

  4. Task-specific reorganization of the auditory cortex in deaf humans

    PubMed Central

    Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin

    2017-01-01

    The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior–lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain. PMID:28069964

  5. Task-specific reorganization of the auditory cortex in deaf humans.

    PubMed

    Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin

    2017-01-24

    The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior-lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain.

  6. Partially Overlapping Brain Networks for Singing and Cello Playing.

    PubMed

    Segado, Melanie; Hollinger, Avrum; Thibodeau, Joseph; Penhune, Virginia; Zatorre, Robert J

    2018-01-01

    This research uses an MR-compatible cello to compare functional brain activation during singing and cello playing within the same individuals, to determine the extent to which arbitrary auditory-motor associations, like those required to play the cello, co-opt functional brain networks that evolved for singing. Musical instrument playing and singing both require highly specific associations between sounds and movements. Because both are used to produce musical sounds, it is often assumed in the literature that their neural underpinnings are highly similar. However, singing is an evolutionarily old human trait, and the auditory-motor associations used for singing are also used for speech and non-speech vocalizations. This sets it apart from the arbitrary auditory-motor associations required to play musical instruments. The pitch range of the cello is similar to that of the human voice, but cello playing is completely independent of the vocal apparatus and can therefore be used to dissociate the auditory-vocal network from the auditory-motor network. While in the MR scanner, 11 expert cellists listened to and subsequently produced individual tones either by singing or by cello playing. All participants were able to sing and play the target tones in tune (<50 cents deviation from target). We found that brain activity during cello playing directly overlaps with brain activity during singing in many areas within the auditory-vocal network. These include the primary motor, dorsal premotor, and supplementary motor cortices (M1, dPMC, SMA), the primary and periprimary auditory cortices within the superior temporal gyrus (STG) including Heschl's gyrus, the anterior insula (aINS), anterior cingulate cortex (ACC), intraparietal sulcus (IPS), and cerebellum, but notably exclude the periaqueductal gray (PAG) and basal ganglia (putamen). Second, we found that activity within the overlapping areas is positively correlated with, and therefore likely contributing to, both

  7. Partially Overlapping Brain Networks for Singing and Cello Playing

    PubMed Central

    Segado, Melanie; Hollinger, Avrum; Thibodeau, Joseph; Penhune, Virginia; Zatorre, Robert J.

    2018-01-01

    This research uses an MR-compatible cello to compare functional brain activation during singing and cello playing within the same individuals, to determine the extent to which arbitrary auditory-motor associations, like those required to play the cello, co-opt functional brain networks that evolved for singing. Musical instrument playing and singing both require highly specific associations between sounds and movements. Because both are used to produce musical sounds, it is often assumed in the literature that their neural underpinnings are highly similar. However, singing is an evolutionarily old human trait, and the auditory-motor associations used for singing are also used for speech and non-speech vocalizations. This sets it apart from the arbitrary auditory-motor associations required to play musical instruments. The pitch range of the cello is similar to that of the human voice, but cello playing is completely independent of the vocal apparatus and can therefore be used to dissociate the auditory-vocal network from the auditory-motor network. While in the MR scanner, 11 expert cellists listened to and subsequently produced individual tones either by singing or by cello playing. All participants were able to sing and play the target tones in tune (<50 cents deviation from target). We found that brain activity during cello playing directly overlaps with brain activity during singing in many areas within the auditory-vocal network. These include the primary motor, dorsal premotor, and supplementary motor cortices (M1, dPMC, SMA), the primary and periprimary auditory cortices within the superior temporal gyrus (STG) including Heschl's gyrus, the anterior insula (aINS), anterior cingulate cortex (ACC), intraparietal sulcus (IPS), and cerebellum, but notably exclude the periaqueductal gray (PAG) and basal ganglia (putamen). Second, we found that activity within the overlapping areas is positively correlated with, and therefore likely contributing to, both

  8. How do auditory cortex neurons represent communication sounds?

    PubMed

    Gaucher, Quentin; Huetz, Chloé; Gourévitch, Boris; Laudanski, Jonathan; Occelli, Florian; Edeline, Jean-Marc

    2013-11-01

    A major goal in auditory neuroscience is to characterize how communication sounds are represented at the cortical level. The present review investigates the role of the auditory cortex in the processing of speech, bird songs, and other vocalizations, all of which are spectrally and temporally highly structured sounds. Whereas earlier studies simply looked for neurons exhibiting higher firing rates to particular conspecific vocalizations than to modified, artificially synthesized versions, more recent studies have determined the coding capacity of temporal spike patterns, which are prominent in primary and non-primary areas (and also in non-auditory cortical areas). In several cases, this information seems to be correlated with the behavioral performance of human or animal subjects, suggesting that spike-timing-based coding strategies might set the foundations of our perceptual abilities. Also, it is now clear that the responses of auditory cortex neurons are highly nonlinear and that their responses to natural stimuli cannot be predicted from their responses to artificial stimuli such as moving ripples and broadband noises. Since auditory cortex neurons cannot follow rapid fluctuations of the envelope of vocalizations, they respond only at specific time points during communication sounds, which can serve as temporal markers for integrating the temporal and spectral processing taking place at subcortical relays. Thus, the temporally sparse code of auditory cortex neurons can be considered a first step in generating high-level representations of communication sounds independent of the acoustic characteristics of these sounds. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.

  9. Highly Efficient Compression Algorithms for Multichannel EEG.

    PubMed

    Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda

    2018-05-01

    The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques for single- and multichannel EEG data, including Huffman coding, arithmetic coding, Markov prediction, linear prediction, context-based error modeling, multivariate autoregression (MVAR), and a low-complexity bivariate model, are examined and their performances compared. Furthermore, a high-compression algorithm named general MVAR and a modified context-based error model for multichannel EEG are proposed. The resulting compression algorithm achieves a higher relative compression ratio than the existing methods, 70.64% on average and up to 83.06% in some cases. The proposed methods are designed to compress large amounts of multichannel EEG data efficiently so that data storage and transmission bandwidth can be used effectively. They have been validated on several experimental multichannel EEG recordings of different subjects and on publicly available standard databases. The satisfactory parametric measures of these methods, namely percent root-mean-square distortion, peak signal-to-noise ratio, root-mean-square error, and cross-correlation, show their superiority over state-of-the-art compression methods.
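The predictor-plus-entropy-coder structure common to these lossless schemes can be illustrated in a few lines. The sketch below uses a first-order linear predictor and zlib as a stand-in entropy coder (the paper's actual coders are Huffman and arithmetic coding); the synthetic signal and all parameters are illustrative:

```python
import zlib

import numpy as np

def compress_channel(x):
    """Lossless single-channel scheme: predict each sample from the
    previous one, then entropy-code the (highly redundant) residual."""
    x = np.asarray(x, dtype=np.int16)
    residual = np.empty_like(x)
    residual[0] = x[0]                 # first sample passes through
    residual[1:] = x[1:] - x[:-1]      # prediction error x[n] - x[n-1]
    return zlib.compress(residual.tobytes(), level=9)

def compression_ratio_percent(raw_bytes, comp_bytes):
    # relative size reduction, as reported in the abstract (e.g. 70.64%)
    return 100.0 * (1.0 - len(comp_bytes) / len(raw_bytes))

# smooth synthetic "EEG-like" trace: slow oscillation plus small noise
rng = np.random.default_rng(0)
n = np.arange(10_000)
sig = (200 * np.sin(2 * np.pi * n / 250) + rng.normal(0, 2, n.size)).astype(np.int16)
comp = compress_channel(sig)
ratio = compression_ratio_percent(sig.tobytes(), comp)
```

Decompression inverts the two stages: decoding recovers the residuals and a cumulative sum reconstructs every sample exactly, which is what makes the scheme lossless. The multichannel methods in the paper extend the predictor across channels (MVAR) to exploit inter-channel redundancy as well.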

  10. Segregating the neural correlates of physical and perceived change in auditory input using the change deafness effect.

    PubMed

    Puschmann, Sebastian; Weerda, Riklef; Klump, Georg; Thiel, Christiane M

    2013-05-01

    Psychophysical experiments show that auditory change detection can be disturbed in situations in which listeners have to monitor complex auditory input. We made use of this change deafness effect to segregate the neural correlates of physical change in auditory input from brain responses related to conscious change perception in an fMRI experiment. Participants listened to two successively presented complex auditory scenes, which consisted of six auditory streams, and had to decide whether scenes were identical or whether the frequency of one stream was changed between presentations. Our results show that physical changes in auditory input, independent of successful change detection, are represented at the level of auditory cortex. Activations related to conscious change perception, independent of physical change, were found in the insula and the ACC. Moreover, our data provide evidence for significant effective connectivity between auditory cortex and the insula in the case of correctly detected auditory changes, but not for missed changes. This underlines the importance of the insula/anterior cingulate network for conscious change detection.

  11. A prediction of templates in the auditory cortex system

    NASA Astrophysics Data System (ADS)

    Ghanbeigi, Kimia

    In this study, the variation of human auditory evoked mismatch field amplitudes in response to complex tones was investigated as a function of the removal of single partials during the onset period. Two findings emerged: (1) elimination of a single frequency in a sound stimulus plays a significant role in sound recognition by the human brain; (2) comparing the mismatch responses to a single frequency elimination in the "starting transient" versus the "sustained part" of the stimulus shows that the brain is more sensitive to frequency elimination in the starting transient. The study involved 4 healthy subjects with normal hearing. Neural activity was recorded with whole-head MEG during stimulation, and the spatial location of the sources in the auditory cortex was verified by comparison with MRI images. In the first set of stimuli, rare ('deviant') tones were randomly embedded in a string of repetitive ('standard') tones with five selected onset frequencies, with randomly varying interstimulus intervals; in the deviant tones, one of the frequency components was omitted during the onset period relative to the standard tones. The frequency of the test partial of the complex tone was intentionally selected to preclude its reinsertion through the generation of harmonics or combination tones by the nonlinearity of the ear, the electronic equipment, or brain processing. In the second set of stimuli, structured in time as above, repetitive ('standard') tones with five selected sustained frequency components were embedded in a string of rare ('deviant') tones in which one of these selected frequencies was omitted from the sustained tone; the same careful frequency selection and the same considerations for choosing the test partial were applied. Results: by comparing the MMN of the two data sets, the relative

  12. The added value of auditory cortex transcranial random noise stimulation (tRNS) after bifrontal transcranial direct current stimulation (tDCS) for tinnitus.

    PubMed

    To, Wing Ting; Ost, Jan; Hart, John; De Ridder, Dirk; Vanneste, Sven

    2017-01-01

    Tinnitus is the perception of a sound in the absence of a corresponding external sound source. Research has suggested that functional abnormalities in tinnitus patients involve auditory as well as non-auditory brain areas. Transcranial electrical stimulation (tES), such as transcranial direct current stimulation (tDCS) to the dorsolateral prefrontal cortex and transcranial random noise stimulation (tRNS) to the auditory cortex, has been shown to modulate brain activity and transiently suppress tinnitus symptoms. Targeting two core regions of the tinnitus network by tES might be a promising strategy to enhance treatment effects. This proof-of-concept study investigates the effect of a multisite tES treatment protocol on tinnitus intensity and distress. A total of 40 tinnitus patients were enrolled in this study and received either bifrontal tDCS or the multisite treatment of bifrontal tDCS followed by bilateral auditory cortex tRNS. Both groups were treated in eight sessions (twice a week for 4 weeks). Our results show that the multisite protocol produced more pronounced effects than the bifrontal tDCS protocol or the waiting-list group, suggesting an added value of auditory cortex tRNS over bifrontal tDCS alone for tinnitus patients. These findings support the involvement of auditory as well as non-auditory brain areas in the pathophysiology of tinnitus and support the efficacy of network stimulation in the treatment of neurological disorders. The multisite tES treatment protocol proved to be safe and feasible for clinical routine in tinnitus patients.

  13. Analyzing pitch chroma and pitch height in the human brain.

    PubMed

    Warren, Jason D; Uppenkamp, Stefan; Patterson, Roy D; Griffiths, Timothy D

    2003-11-01

    The perceptual pitch dimensions of chroma and height have distinct representations in the human brain: chroma is represented in cortical areas anterior to primary auditory cortex, whereas height is represented posterior to primary auditory cortex.

  14. Auditory short-term memory capacity correlates with gray matter density in the left posterior STS in cognitively normal and dyslexic adults.

    PubMed

    Richardson, Fiona M; Ramsden, Sue; Ellis, Caroline; Burnett, Stephanie; Megnin, Odette; Catmur, Caroline; Schofield, Tom M; Leff, Alex P; Price, Cathy J

    2011-12-01

    A central feature of auditory STM is its item-limited processing capacity. We investigated whether auditory STM capacity correlated with regional gray and white matter in the structural MRI images from 74 healthy adults, 40 of whom had a prior diagnosis of developmental dyslexia whereas 34 had no history of any cognitive impairment. Using whole-brain statistics, we identified a region in the left posterior STS where gray matter density was positively correlated with forward digit span, backward digit span, and performance on a "spoonerisms" task that required both auditory STM and phoneme manipulation. Across tasks and participant groups, the correlation was highly significant even when variance related to reading and auditory nonword repetition was factored out. Although the dyslexics had poorer phonological skills, the effect of auditory STM capacity in the left STS was the same as in the cognitively normal group. We also illustrate that the anatomical location of this effect is in proximity to a lesion site recently associated with reduced auditory STM capacity in patients with stroke damage. This result, therefore, indicates that gray matter density in the posterior STS predicts auditory STM capacity in the healthy and damaged brain. In conclusion, we suggest that our present findings are consistent with the view that there is an overlap between the mechanisms that support language processing and auditory STM.

  15. Auditory-visual integration modulates location-specific repetition suppression of auditory responses.

    PubMed

    Shrem, Talia; Murray, Micah M; Deouell, Leon Y

    2017-11-01

    Space is a dimension shared by different modalities, but at what stage spatial encoding is affected by multisensory processes is unclear. Early studies observed attenuation of N1/P2 auditory evoked responses following repetition of sounds from the same location. Here, we asked whether this effect is modulated by audiovisual interactions. In two experiments, using a repetition-suppression paradigm, we presented pairs of tones in free field, where the test stimulus was a tone presented at a fixed lateral location. Experiment 1 established a neural index of auditory spatial sensitivity, by comparing the degree of attenuation of the response to test stimuli when they were preceded by an adapter sound at the same location versus 30° or 60° away. We found that the degree of attenuation at the P2 latency was inversely related to the spatial distance between the test stimulus and the adapter stimulus. In Experiment 2, the adapter stimulus was a tone presented from the same location or a more medial location than the test stimulus. The adapter stimulus was accompanied by a simultaneous flash displayed orthogonally from one of the two locations. Sound-flash incongruence reduced accuracy in a same-different location discrimination task (i.e., the ventriloquism effect) and reduced the location-specific repetition-suppression at the P2 latency. Importantly, this multisensory effect included topographic modulations, indicative of changes in the relative contribution of underlying sources across conditions. Our findings suggest that the auditory response at the P2 latency is affected by spatially selective brain activity, which is affected crossmodally by visual information. © 2017 Society for Psychophysiological Research.

  16. Multichannel activity propagation across an engineered axon network

    NASA Astrophysics Data System (ADS)

    Chen, H. Isaac; Wolf, John A.; Smith, Douglas H.

    2017-04-01

    These results provide insight into how the brain potentially processes information and generates the neural code and could guide the development of clinical therapies based on multichannel brain stimulation.

  17. Intensity non-uniformity correction using N3 on 3-T scanners with multichannel phased array coils

    PubMed Central

    Boyes, Richard G.; Gunter, Jeff L.; Frost, Chris; Janke, Andrew L.; Yeatman, Thomas; Hill, Derek L.G.; Bernstein, Matt A.; Thompson, Paul M.; Weiner, Michael W.; Schuff, Norbert; Alexander, Gene E.; Killiany, Ronald J.; DeCarli, Charles; Jack, Clifford R.; Fox, Nick C.

    2008-01-01

    Measures of structural brain change based on longitudinal MR imaging are increasingly important but can be degraded by intensity non-uniformity. This non-uniformity can be more pronounced at higher field strengths, or when using multichannel receiver coils. We assessed the ability of the non-parametric non-uniform intensity normalization (N3) technique to correct non-uniformity in 72 volumetric brain MR scans from the preparatory phase of the Alzheimer’s Disease Neuroimaging Initiative (ADNI). Normal elderly subjects (n = 18) were scanned on different 3-T scanners with a multichannel phased array receiver coil at baseline, using magnetization prepared rapid gradient echo (MP-RAGE) and spoiled gradient echo (SPGR) pulse sequences, and again 2 weeks later. When applying N3, we used five brain masks of varying accuracy and four spline smoothing distances (d = 50, 100, 150 and 200 mm) to ascertain which combination of parameters optimally reduces the non-uniformity. We used the normalized white matter intensity variance (standard deviation/mean) to ascertain quantitatively the correction for a single scan; we used the variance of the normalized difference image to assess quantitatively the consistency of the correction over time from registered scan pairs. Our results showed statistically significant (p < 0.01) improvement in uniformity for individual scans and reduction in the normalized difference image variance when using masks that identified distinct brain tissue classes, and when using smaller spline smoothing distances (e.g., 50-100 mm) for both MP-RAGE and SPGR pulse sequences. These optimized settings may assist future large-scale studies where 3-T scanners and phased array receiver coils are used, such as ADNI, so that intensity non-uniformity does not influence the power of MR imaging to detect disease progression and the factors that influence it. PMID:18063391
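The two quantitative measures used in this evaluation are simple to compute. A minimal sketch, with a synthetic 2-D array standing in for a brain volume (masks, values, and the bias-field shape are all illustrative):

```python
import numpy as np

def wm_intensity_cv(image, wm_mask):
    """Normalized white-matter intensity variance (std/mean): lower
    values indicate a more uniform scan, i.e. a better correction."""
    vals = image[wm_mask]
    return float(vals.std() / vals.mean())

def normalized_diff_variance(scan_a, scan_b, brain_mask):
    """Variance of the normalized difference image from a registered
    scan pair: the consistency-over-time measure."""
    a, b = scan_a[brain_mask], scan_b[brain_mask]
    d = (a - b) / a.mean()
    return float(d.var())

rng = np.random.default_rng(1)
flat = rng.normal(100.0, 5.0, (32, 32))    # uniform "white matter" patch
mask = np.ones_like(flat, dtype=bool)
bias = np.linspace(0.8, 1.2, 32)[None, :]  # multiplicative bias field
shaded = flat * bias                       # non-uniform version of the scan
```

Applying a multiplicative bias field raises the coefficient of variation of white-matter intensities; undoing that inflation is exactly what N3 aims to achieve, and the mask and spline-distance choices in the study control how well it succeeds.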

  18. Physiological modulators of Kv3.1 channels adjust firing patterns of auditory brain stem neurons.

    PubMed

    Brown, Maile R; El-Hassar, Lynda; Zhang, Yalan; Alvaro, Giuseppe; Large, Charles H; Kaczmarek, Leonard K

    2016-07-01

    Many rapidly firing neurons, including those in the medial nucleus of the trapezoid body (MNTB) in the auditory brain stem, express "high threshold" voltage-gated Kv3.1 potassium channels that activate only at positive potentials and are required for stimuli to generate rapid trains of action potentials. We now describe the actions of two imidazolidinedione derivatives, AUT1 and AUT2, which modulate Kv3.1 channels. Using Chinese hamster ovary cells stably expressing rat Kv3.1 channels, we found that lower concentrations of these compounds shift the voltage of activation of Kv3.1 currents toward negative potentials, increasing currents evoked by depolarization from typical neuronal resting potentials. Single-channel recordings also showed that AUT1 shifted the open probability of Kv3.1 to more negative potentials. Higher concentrations of AUT2 also shifted inactivation to negative potentials. The effects of lower and higher concentrations could be mimicked in numerical simulations by increasing rates of activation and inactivation, respectively, with no change in intrinsic voltage dependence. In brain slice recordings of mouse MNTB neurons, both AUT1 and AUT2 modulated firing rate at high rates of stimulation, a result predicted by numerical simulations. Our results suggest that pharmaceutical modulation of Kv3.1 currents represents a novel avenue for manipulation of neuronal excitability and has the potential for therapeutic benefit in the treatment of hearing disorders. Copyright © 2016 the American Physiological Society.
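
    The leftward shift of channel activation described above can be illustrated with a standard Boltzmann activation curve. The half-activation voltages and slope below are illustrative values only, not parameters fitted by the study:

```python
import math

def open_probability(v_mv, v_half_mv, slope_mv=8.0):
    """Boltzmann activation curve: steady-state open probability of a
    voltage-gated channel at membrane potential v_mv (mV)."""
    return 1.0 / (1.0 + math.exp(-(v_mv - v_half_mv) / slope_mv))

# Illustrative, not fitted: shifting the half-activation voltage toward
# negative potentials (as AUT1/AUT2 do) raises the open probability
# available near a typical subthreshold potential of -40 mV.
baseline = open_probability(-40.0, v_half_mv=10.0)  # "high threshold" channel
shifted = open_probability(-40.0, v_half_mv=0.0)    # modulator applied
```

    Even a modest negative shift increases the potassium current recruited by small depolarizations, which is the mechanism the abstract invokes for faster repolarization and sustained high-rate firing.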

  20. Training Humans to Categorize Monkey Calls: Auditory Feature- and Category-Selective Neural Tuning Changes.

    PubMed

    Jiang, Xiong; Chevillet, Mark A; Rauschecker, Josef P; Riesenhuber, Maximilian

    2018-04-18

    Grouping auditory stimuli into common categories is essential for a variety of auditory tasks, including speech recognition. We trained human participants to categorize auditory stimuli from a large novel set of morphed monkey vocalizations. Using fMRI-rapid adaptation (fMRI-RA) and multi-voxel pattern analysis (MVPA) techniques, we gained evidence that categorization training results in two distinct sets of changes: sharpened tuning to monkey call features (without explicit category representation) in left auditory cortex and category selectivity for different types of calls in lateral prefrontal cortex. In addition, the sharpness of neural selectivity in left auditory cortex, as estimated with both fMRI-RA and MVPA, predicted the steepness of the categorical boundary, whereas categorical judgment correlated with release from adaptation in the left inferior frontal gyrus. These results support the theory that auditory category learning follows a two-stage model analogous to the visual domain, suggesting general principles of perceptual category learning in the human brain. Copyright © 2018 Elsevier Inc. All rights reserved.

  1. Two distinct auditory-motor circuits for monitoring speech production as revealed by content-specific suppression of auditory cortex.

    PubMed

    Ylinen, Sari; Nora, Anni; Leminen, Alina; Hakala, Tero; Huotilainen, Minna; Shtyrov, Yury; Mäkelä, Jyrki P; Service, Elisabet

    2015-06-01

    Speech production, both overt and covert, down-regulates the activation of auditory cortex. This is thought to be due to forward prediction of the sensory consequences of speech, contributing to a feedback control mechanism for speech production. Critically, however, these regulatory effects should be specific to speech content to enable accurate speech monitoring. To determine the extent to which such forward prediction is content-specific, we recorded the brain's neuromagnetic responses to heard multisyllabic pseudowords during covert rehearsal in working memory, contrasted with a control task. The cortical auditory processing of target syllables was significantly suppressed during rehearsal compared with control, but only when they matched the rehearsed items. This critical specificity to speech content enables accurate speech monitoring by forward prediction, as proposed by current models of speech production. The one-to-one phonological motor-to-auditory mappings also appear to serve the maintenance of information in phonological working memory. Further findings of right-hemispheric suppression in the case of whole-item matches and left-hemispheric enhancement for last-syllable mismatches suggest that speech production is monitored by 2 auditory-motor circuits operating on different timescales: Finer grain in the left versus coarser grain in the right hemisphere. Taken together, our findings provide hemisphere-specific evidence of the interface between inner and heard speech. © The Author 2014. Published by Oxford University Press. All rights reserved.

  2. Inter-subject synchronization of brain responses during natural music listening

    PubMed Central

    Abrams, Daniel A.; Ryali, Srikanth; Chen, Tianwen; Chordia, Parag; Khouzam, Amirah; Levitin, Daniel J.; Menon, Vinod

    2015-01-01

    Music is a cultural universal and a rich part of the human experience. However, little is known about common brain systems that support the processing and integration of extended, naturalistic ‘real-world’ music stimuli. We examined this question by presenting extended excerpts of symphonic music, and two pseudomusical stimuli in which the temporal and spectral structure of the Natural Music condition were disrupted, to non-musician participants undergoing functional brain imaging and analysing synchronized spatiotemporal activity patterns between listeners. We found that music synchronizes brain responses across listeners in bilateral auditory midbrain and thalamus, primary auditory and auditory association cortex, right-lateralized structures in frontal and parietal cortex, and motor planning regions of the brain. These effects were greater for natural music compared to the pseudo-musical control conditions. Remarkably, inter-subject synchronization in the inferior colliculus and medial geniculate nucleus was also greater for the natural music condition, indicating that synchronization at these early stages of auditory processing is not simply driven by spectro-temporal features of the stimulus. Increased synchronization during music listening was also evident in a right-hemisphere fronto-parietal attention network and bilateral cortical regions involved in motor planning. While these brain structures have previously been implicated in various aspects of musical processing, our results are the first to show that these regions track structural elements of a musical stimulus over extended time periods lasting minutes. Our results show that a hierarchical distributed network is synchronized between individuals during the processing of extended musical sequences, and provide new insight into the temporal integration of complex and biologically salient auditory sequences. PMID:23578016

  4. Prestimulus influences on auditory perception from sensory representations and decision processes

    PubMed Central

    Kayser, Stephanie J.; McNair, Steven W.; Kayser, Christoph

    2016-01-01

    The qualities of perception depend not only on the sensory inputs but also on the brain state before stimulus presentation. Although the collective evidence from neuroimaging studies for a relation between prestimulus state and perception is strong, the interpretation in the context of sensory computations or decision processes has remained difficult. In the auditory system, for example, previous studies have reported a wide range of effects in terms of the perceptually relevant frequency bands and state parameters (phase/power). To dissociate influences of state on earlier sensory representations and higher-level decision processes, we collected behavioral and EEG data in human participants performing two auditory discrimination tasks relying on distinct acoustic features. Using single-trial decoding, we quantified the relation between prestimulus activity, relevant sensory evidence, and choice in different task-relevant EEG components. Within auditory networks, we found that phase had no direct influence on choice, whereas power in task-specific frequency bands affected the encoding of sensory evidence. Within later-activated frontoparietal regions, theta and alpha phase had a direct influence on choice, without involving sensory evidence. These results delineate two consistent mechanisms by which prestimulus activity shapes perception. However, the timescales of the relevant neural activity depend on the specific brain regions engaged by the respective task. PMID:27071110

  6. EEG Responses to Auditory Stimuli for Automatic Affect Recognition

    PubMed Central

    Hettich, Dirk T.; Bolinger, Elaina; Matuz, Tamara; Birbaumer, Niels; Rosenstiel, Wolfgang; Spüler, Martin

    2016-01-01

    Brain state classification for communication and control has been well established in the area of brain-computer interfaces over the last decades. Recently, the passive and automatic extraction of additional information regarding the psychological state of users from neurophysiological signals has gained increased attention in the interdisciplinary field of affective computing. We investigated how well specific emotional reactions, induced by auditory stimuli, can be detected in EEG recordings. We introduce an auditory emotion induction paradigm based on the International Affective Digitized Sounds 2nd Edition (IADS-2) database also suitable for disabled individuals. Stimuli are grouped in three valence categories: unpleasant, neutral, and pleasant. Significant differences in time domain domain event-related potentials are found in the electroencephalogram (EEG) between unpleasant and neutral, as well as pleasant and neutral conditions over midline electrodes. Time domain data were classified in three binary classification problems using a linear support vector machine (SVM) classifier. We discuss three classification performance measures in the context of affective computing and outline some strategies for conducting and reporting affect classification studies. PMID:27375410
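
    The three binary classification problems formed from the three valence categories can be sketched as one-vs-one pairings. The study used a linear SVM; the minimal nearest-class-mean classifier below is only a library-free stand-in for a linear decision rule, and all names are illustrative:

```python
import numpy as np

def pairwise_problems(labels=("unpleasant", "neutral", "pleasant")):
    """The three binary (one-vs-one) problems formed from three classes."""
    return [(a, b) for i, a in enumerate(labels) for b in labels[i + 1:]]

class NearestMeanClassifier:
    """Library-free stand-in for the linear SVM used in the study:
    assign a trial's feature vector to the closer class mean."""
    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        self.classes_ = np.unique(y)
        self.means_ = {c: X[y == c].mean(axis=0) for c in self.classes_}
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        dists = np.stack([np.linalg.norm(X - self.means_[c], axis=1)
                          for c in self.classes_])
        return self.classes_[dists.argmin(axis=0)]
```

    In practice each of the three pairwise problems would be trained and evaluated separately on the ERP feature vectors of the two classes involved.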

  7. Auditory temporal processing in healthy aging: a magnetoencephalographic study

    PubMed Central

    Sörös, Peter; Teismann, Inga K; Manemann, Elisabeth; Lütkenhöner, Bernd

    2009-01-01

    Background Impaired speech perception is one of the major sequelae of aging. In addition to peripheral hearing loss, central deficits of auditory processing are supposed to contribute to the deterioration of speech perception in older individuals. To test the hypothesis that auditory temporal processing is compromised in aging, auditory evoked magnetic fields were recorded during stimulation with sequences of 4 rapidly recurring speech sounds in 28 healthy individuals aged 20 – 78 years. Results The decrement of the N1m amplitude during rapid auditory stimulation was not significantly different between older and younger adults. The amplitudes of the middle-latency P1m wave and of the long-latency N1m, however, were significantly larger in older than in younger participants. Conclusion The results of the present study do not provide evidence for the hypothesis that auditory temporal processing, as measured by the decrement (short-term habituation) of the major auditory evoked component, the N1m wave, is impaired in aging. The differences between these magnetoencephalographic findings and previously published behavioral data might be explained by differences in the experimental setting between the present study and previous behavioral studies, in terms of speech rate, attention, and masking noise. Significantly larger amplitudes of the P1m and N1m waves suggest that the cortical processing of individual sounds differs between younger and older individuals. This result adds to the growing evidence that brain functions, such as sensory processing, motor control and cognitive processing, can change during healthy aging, presumably due to experience-dependent neuroplastic mechanisms. PMID:19351410

  8. Is sensorimotor BCI performance influenced differently by mono, stereo, or 3-D auditory feedback?

    PubMed

    McCreadie, Karl A; Coyle, Damien H; Prasad, Girijesh

    2014-05-01

    Imagination of movement can be used as a control method for a brain-computer interface (BCI) allowing communication for the physically impaired. Visual feedback within such a closed loop system excludes those with visual problems and hence there is a need for alternative sensory feedback pathways. In the context of substituting the visual channel for the auditory channel, this study aims to add to the limited evidence that it is possible to substitute visual feedback for its auditory equivalent and assess the impact this has on BCI performance. Secondly, the study aims to determine for the first time if the type of auditory feedback method influences motor imagery performance significantly. Auditory feedback is presented using a stepped approach of single (mono), double (stereo), and multiple (vector base amplitude panning as an audio game) loudspeaker arrangements. Visual feedback involves a ball-basket paradigm and a spaceship game. Each session consists of either auditory or visual feedback only with runs of each type of feedback presentation method applied in each session. Results from seven subjects across five sessions of each feedback type (visual, auditory) (10 sessions in total) show that auditory feedback is a suitable substitute for the visual equivalent and that there are no statistical differences in the type of auditory feedback presented across five sessions.

  9. Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception.

    PubMed

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers' voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker's face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.

  10. Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans

    PubMed Central

    Hattori, Yuko; Tomonaga, Masaki; Matsuzawa, Tetsuro

    2015-01-01

    Humans tend to spontaneously align their movements in response to visual (e.g., swinging pendulum) and auditory rhythms (e.g., hearing music while walking). Particularly in the case of the response to auditory rhythms, neuroscientific research has indicated that motor resources are also recruited while perceiving an auditory rhythm (or regular pulse), suggesting a tight link between the auditory and motor systems in the human brain. However, the evolutionary origin of spontaneous responses to auditory rhythms is unclear. Here, we report that chimpanzees and humans show a similar distractor effect in perceiving isochronous rhythms during rhythmic movement. We used isochronous auditory rhythms as distractor stimuli during self-paced alternate tapping of two keys of an electronic keyboard by humans and chimpanzees. When the tempo was similar to their spontaneous motor tempo, tapping onset was influenced by intermittent entrainment to auditory rhythms. Although this effect itself is not an advanced rhythmic ability such as dancing or singing, our results suggest that, to some extent, the biological foundation for spontaneous responses to auditory rhythms was already deeply rooted in the common ancestor of chimpanzees and humans, 6 million years ago. This also suggests the possibility of a common attentional mechanism, as proposed by the dynamic attending theory, underlying the effect of perceiving external rhythms on motor movement. PMID:26132703

  11. The Central Auditory Processing Kit[TM]. Book 1: Auditory Memory [and] Book 2: Auditory Discrimination, Auditory Closure, and Auditory Synthesis [and] Book 3: Auditory Figure-Ground, Auditory Cohesion, Auditory Binaural Integration, and Compensatory Strategies.

    ERIC Educational Resources Information Center

    Mokhemar, Mary Ann

    This kit for assessing central auditory processing disorders (CAPD), in children in grades 1 through 8 includes 3 books, 14 full-color cards with picture scenes, and a card depicting a phone key pad, all contained in a sturdy carrying case. The units in each of the three books correspond with auditory skill areas most commonly addressed in…

  12. Hierarchical Processing of Auditory Objects in Humans

    PubMed Central

    Kumar, Sukhbinder; Stephan, Klaas E; Warren, Jason D; Friston, Karl J; Griffiths, Timothy D

    2007-01-01

    This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's Gyrus (HG), containing the primary auditory cortex, planum temporale (PT), and superior temporal sulcus (STS), and the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS and an increase of the HG to PT connection during spectral envelope analysis. The work supports a computational model of auditory object processing, based on the abstraction of spectro-temporal “templates” in the PT before further analysis of the abstracted form in anterior temporal lobe areas. PMID:17542641

  13. To Modulate and Be Modulated: Estrogenic Influences on Auditory Processing of Communication Signals within a Socio-Neuro-Endocrine Framework

    PubMed Central

    Yoder, Kathleen M.; Vicario, David S.

    2012-01-01

    Gonadal hormones modulate behavioral responses to sexual stimuli, and communication signals can also modulate circulating hormone levels. In several species, these combined effects appear to underlie a two-way interaction between circulating gonadal hormones and behavioral responses to socially salient stimuli. Recent work in songbirds has shown that manipulating local estradiol levels in the auditory forebrain produces physiological changes that affect discrimination of conspecific vocalizations and can affect behavior. These studies provide new evidence that estrogens can directly alter auditory processing and indirectly alter the behavioral response to a stimulus. These studies show that: 1. Local estradiol action within an auditory area is necessary for socially-relevant sounds to induce normal physiological responses in the brains of both sexes; 2. These physiological effects occur much more quickly than predicted by the classical time-frame for genomic effects; 3. Estradiol action within the auditory forebrain enables behavioral discrimination among socially-relevant sounds in males; and 4. Estradiol is produced locally in the male brain during exposure to particular social interactions. The accumulating evidence suggests a socio-neuro-endocrinology framework in which estradiol is essential to auditory processing, is increased by a socially relevant stimulus, acts rapidly to shape perception of subsequent stimuli experienced during social interactions, and modulates behavioral responses to these stimuli. Brain estrogens are likely to function similarly in both songbird sexes because aromatase and estrogen receptors are present in both male and female forebrain. Estrogenic modulation of perception in songbirds and perhaps other animals could fine-tune male advertising signals and female ability to discriminate them, facilitating mate selection by modulating behaviors. Keywords: Estrogens, Songbird, Social Context, Auditory Perception PMID:22201281

  14. Hallucination- and speech-specific hypercoupling in frontotemporal auditory and language networks in schizophrenia using combined task-based fMRI data: An fBIRN study.

    PubMed

    Lavigne, Katie M; Woodward, Todd S

    2018-04-01

    Hypercoupling of activity in speech-perception-specific brain networks has been proposed to play a role in the generation of auditory-verbal hallucinations (AVHs) in schizophrenia; however, it is unclear whether this hypercoupling extends to nonverbal auditory perception. We investigated this by comparing schizophrenia patients with and without AVHs, and healthy controls, on task-based functional magnetic resonance imaging (fMRI) data combining verbal speech perception (SP), inner verbal thought generation (VTG), and nonverbal auditory oddball detection (AO). Data from two previously published fMRI studies were simultaneously analyzed using group constrained principal component analysis for fMRI (group fMRI-CPCA), which allowed for comparison of task-related functional brain networks across groups and tasks while holding the brain networks under study constant, leading to determination of the degree to which networks are common to verbal and nonverbal perception conditions, and which show coordinated hyperactivity in hallucinations. Three functional brain networks emerged: (a) auditory-motor, (b) language processing, and (c) default-mode (DMN) networks. Combining the AO and sentence tasks allowed the auditory-motor and language networks to separately emerge, whereas they were aggregated when individual tasks were analyzed. AVH patients showed greater coordinated activity (deactivity for DMN regions) than non-AVH patients during SP in all networks, but this did not extend to VTG or AO. This suggests that the hypercoupling in AVH patients in speech-perception-related brain networks is specific to perceived speech, and does not extend to perceived nonspeech or inner verbal thought generation. © 2017 Wiley Periodicals, Inc.

  15. Auditory training improves auditory performance in cochlear implanted children.

    PubMed

    Roman, Stephane; Rochette, Françoise; Triglia, Jean-Michel; Schön, Daniele; Bigand, Emmanuel

    2016-07-01

    While the positive benefits of pediatric cochlear implantation on language perception skills are now proven, the heterogeneity of outcomes remains high. The understanding of this heterogeneity and possible strategies to minimize it is of utmost importance. Our scope here is to test the effects of an auditory training strategy, "sound in Hands", using playful tasks grounded on the theoretical and empirical findings of cognitive sciences. Indeed, several basic auditory operations, such as auditory scene analysis (ASA) are not trained in the usual therapeutic interventions in deaf children. However, as they constitute a fundamental basis in auditory cognition, their development should imply general benefit in auditory processing and in turn enhance speech perception. The purpose of the present study was to determine whether cochlear implanted children could improve auditory performances in trained tasks and whether they could develop a transfer of learning to a phonetic discrimination test. Nineteen prelingually unilateral cochlear implanted children without additional handicap (4-10 year-olds) were recruited. The four main auditory cognitive processing (identification, discrimination, ASA and auditory memory) were stimulated and trained in the Experimental Group (EG) using Sound in Hands. The EG followed 20 training weekly sessions of 30 min and the untrained group was the control group (CG). Two measures were taken for both groups: before training (T1) and after training (T2). EG showed a significant improvement in the identification, discrimination and auditory memory tasks. The improvement in the ASA task did not reach significance. CG did not show any significant improvement in any of the tasks assessed. Most importantly, improvement was visible in the phonetic discrimination test for EG only. Moreover, younger children benefited more from the auditory training program to develop their phonetic abilities compared to older children, supporting the idea that

  16. Auditory Perceptual Abilities Are Associated with Specific Auditory Experience

    PubMed Central

    Zaltz, Yael; Globerson, Eitan; Amir, Noam

    2017-01-01

    The extent to which auditory experience can shape general auditory perceptual abilities is still under constant debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement in perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested in frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, illuminating enhanced perceptual skills in the key features of pop music, in which minuscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience, by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested in the DLF and DLT tasks that were found superior for musicians. Results showed superior abilities on the DLF task for guitar players, though no difference between the groups in DLT, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment was conducted, testing a possible influence of vowel density in native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels), and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels), were tested in a formant discrimination task. This is the linguistic equivalent of a DLS task. Results showed that German speakers had superior formant discrimination

  17. Auditory and visual interhemispheric communication in musicians and non-musicians.

    PubMed

    Woelfle, Rebecca; Grahn, Jessica A

    2013-01-01

    The corpus callosum (CC) is a brain structure composed of axon fibres linking the right and left hemispheres. Musical training is associated with larger midsagittal cross-sectional area of the CC, suggesting that interhemispheric communication may be faster in musicians. Here we compared interhemispheric transmission times (ITTs) for musicians and non-musicians. ITT was measured by comparing simple reaction times to stimuli presented to the same hemisphere that controlled a button-press response (uncrossed reaction time), or to the contralateral hemisphere (crossed reaction time). Both visual and auditory stimuli were tested. We predicted that the crossed-uncrossed difference (CUD) for musicians would be smaller than for non-musicians as a result of faster interhemispheric transfer times. We did not expect a difference in CUDs between the visual and auditory modalities for either musicians or non-musicians, as previous work indicates that interhemispheric transfer may happen through the genu of the CC, which contains motor fibres rather than sensory fibres. There were no significant differences in CUDs between musicians and non-musicians. However, auditory CUDs were significantly smaller than visual CUDs. Although this auditory-visual difference was larger in musicians than non-musicians, the interaction between modality and musical training was not significant. Therefore, although musical training does not significantly affect ITT, the crossing of auditory information between hemispheres appears to be faster than visual information, perhaps because subcortical pathways play a greater role for auditory interhemispheric transfer.
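
The crossed-uncrossed difference (CUD) described above is just the difference between mean crossed and mean uncrossed reaction times. A minimal sketch of the computation, with hypothetical reaction-time values (none taken from the study):

```python
import numpy as np

# Hypothetical reaction times in milliseconds; values are illustrative only.
uncrossed_rt = np.array([312.0, 305.0, 298.0, 321.0])  # stimulus in the hemisphere controlling the response
crossed_rt = np.array([316.0, 309.0, 301.0, 326.0])    # stimulus contralateral to the responding hemisphere

# Crossed-uncrossed difference: an estimate of interhemispheric transfer time.
cud = crossed_rt.mean() - uncrossed_rt.mean()
```

A smaller CUD would be read as faster interhemispheric transfer, which is the comparison the study makes between musicians and non-musicians and between modalities.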

  18. Auditory and Visual Interhemispheric Communication in Musicians and Non-Musicians

    PubMed Central

    Woelfle, Rebecca; Grahn, Jessica A.

    2013-01-01

    The corpus callosum (CC) is a brain structure composed of axon fibres linking the right and left hemispheres. Musical training is associated with larger midsagittal cross-sectional area of the CC, suggesting that interhemispheric communication may be faster in musicians. Here we compared interhemispheric transmission times (ITTs) for musicians and non-musicians. ITT was measured by comparing simple reaction times to stimuli presented to the same hemisphere that controlled a button-press response (uncrossed reaction time), or to the contralateral hemisphere (crossed reaction time). Both visual and auditory stimuli were tested. We predicted that the crossed-uncrossed difference (CUD) for musicians would be smaller than for non-musicians as a result of faster interhemispheric transfer times. We did not expect a difference in CUDs between the visual and auditory modalities for either musicians or non-musicians, as previous work indicates that interhemispheric transfer may happen through the genu of the CC, which contains motor fibres rather than sensory fibres. There were no significant differences in CUDs between musicians and non-musicians. However, auditory CUDs were significantly smaller than visual CUDs. Although this auditory-visual difference was larger in musicians than non-musicians, the interaction between modality and musical training was not significant. Therefore, although musical training does not significantly affect ITT, the crossing of auditory information between hemispheres appears to be faster than visual information, perhaps because subcortical pathways play a greater role for auditory interhemispheric transfer. PMID:24386382

  19. Tuning in to the Voices: A Multisite fMRI Study of Auditory Hallucinations

    PubMed Central

    Ford, Judith M.; Roach, Brian J.; Jorgensen, Kasper W.; Turner, Jessica A.; Brown, Gregory G.; Notestine, Randy; Bischoff-Grethe, Amanda; Greve, Douglas; Wible, Cynthia; Lauriello, John; Belger, Aysenil; Mueller, Bryon A.; Calhoun, Vincent; Preda, Adrian; Keator, David; O'Leary, Daniel S.; Lim, Kelvin O.; Glover, Gary; Potkin, Steven G.; Mathalon, Daniel H.

    2009-01-01

    Introduction: Auditory hallucinations or voices are experienced by 75% of people diagnosed with schizophrenia. We presumed that auditory cortex of schizophrenia patients who experience hallucinations is tonically “tuned” to internal auditory channels, at the cost of processing external sounds, both speech and nonspeech. Accordingly, we predicted that patients who hallucinate would show less auditory cortical activation to external acoustic stimuli than patients who did not. Methods: At 9 Functional Imaging Biomedical Informatics Research Network (FBIRN) sites, whole-brain images from 106 patients and 111 healthy comparison subjects were collected while subjects performed an auditory target detection task. Data were processed with the FBIRN processing stream. A region of interest analysis extracted activation values from primary (BA41) and secondary auditory cortex (BA42), auditory association cortex (BA22), and middle temporal gyrus (BA21). Patients were sorted into hallucinators (n = 66) and nonhallucinators (n = 40) based on symptom ratings done during the previous week. Results: Hallucinators had less activation to probe tones in left primary auditory cortex (BA41) than nonhallucinators. This effect was not seen on the right. Discussion: Although “voices” are the anticipated sensory experience, it appears that even primary auditory cortex is “turned on” and “tuned in” to process internal acoustic information at the cost of processing external sounds. Although this study was not designed to probe cortical competition for auditory resources, we were able to take advantage of the data and find significant effects, perhaps because of the power afforded by such a large sample. PMID:18987102

  20. Auditory training and challenges associated with participation and compliance.

    PubMed

    Sweetow, Robert W; Sabes, Jennifer Henderson

    2010-10-01

    When individuals have hearing loss, physiological changes in their brain interact with relearning of sound patterns. Some individuals utilize compensatory strategies that may result in successful hearing aid use. Others, however, are not so fortunate. Modern hearing aids can provide audibility but may not rectify reduced spectral and temporal resolution, susceptibility to noise interference, or degradation of cognitive skills, such as declining auditory memory and slower speed of processing associated with aging. Frequently, these deficits are not identified during a typical "hearing aid evaluation." Aural rehabilitation has long been advocated to enhance communication but has not been considered time- or cost-effective. Home-based, interactive adaptive computer therapy programs are available that are designed to engage the adult hearing-impaired listener in the hearing aid fitting process, provide listening strategies, build confidence, and address cognitive changes. Despite the availability of these programs, many patients and professionals are reluctant to engage in and complete therapy. The purposes of this article are to discuss the need for identifying auditory and nonauditory factors that may adversely affect the overall audiological rehabilitation process, to discuss important features that should be incorporated into training, and to examine reasons for the lack of compliance with therapeutic options. Possible solutions to maximizing compliance are explored. Only a small proportion of audiologists (fewer than 10%) offer auditory training to patients with hearing impairment, even though auditory training appears to lower the rate of hearing aid returns for credit. Patients to whom auditory training programs are recommended often do not complete the training, however. Compliance for a cohort of home-based auditory therapy trainees was less than 30%. Activities to increase patient compliance with auditory training protocols are proposed. American Academy of Audiology.

  1. Maturation of the auditory system in clinically normal puppies as reflected by the brain stem auditory-evoked potential wave V latency-intensity curve and rarefaction-condensation differential potentials.

    PubMed

    Poncelet, L C; Coppens, A G; Meuris, S I; Deltenre, P F

    2000-11-01

    To evaluate auditory maturation in puppies. Ten clinically normal Beagle puppies were examined repeatedly from days 11 to 36 after birth (8 measurements). Click-evoked brain stem auditory-evoked potentials (BAEP) were obtained in response to rarefaction and condensation click stimuli from 90 dB normal hearing level down to the wave V threshold, in steps of 10 dB. Responses were added, providing an equivalent to alternate-polarity clicks, and subtracted, providing the rarefaction-condensation differential potential (RCDP). Steps of 5 dB were used to determine the thresholds of RCDP and wave V. The slope of the low-intensity segment of the wave V latency-intensity curve was calculated. The intensity range over which RCDP could not be recorded (ie, the pre-RCDP range) was calculated by subtracting the threshold of wave V from the threshold of RCDP. The slope of the low-intensity segment of the wave V latency-intensity curve evolved with age, changing from (mean +/- SD) -90.8 +/- 41.6 to -27.8 +/- 4.1 μs/dB. Similar results were obtained from days 23 through 36. The pre-RCDP range diminished as puppies became older, decreasing from 40.0 +/- 7.5 to 20.5 +/- 6.4 dB. Changes in the slope of the latency-intensity curve with age suggest enlargement of the audible range of frequencies toward high frequencies up to the third week after birth. The decrease in the pre-RCDP range may indicate an increase of the audible range of frequencies toward low frequencies. Age-related reference values will assist clinicians in detecting hearing loss in puppies.
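
The two derived measures in this abstract are straightforward to compute: the pre-RCDP range is a threshold difference, and the latency-intensity slope is a linear fit over the low-intensity segment. A sketch with illustrative numbers (the thresholds and latencies below are hypothetical, not the study's data):

```python
import numpy as np

# Illustrative wave V latency-intensity values for the low-intensity segment
# (intensities in dB NHL, latencies in microseconds); all numbers are hypothetical.
intensity_db = np.array([10.0, 20.0, 30.0, 40.0])
latency_us = np.array([9200.0, 8300.0, 7400.0, 6500.0])

# Slope of the low-intensity segment of the latency-intensity curve (us/dB).
slope, intercept = np.polyfit(intensity_db, latency_us, 1)

# Pre-RCDP range: threshold of RCDP minus threshold of wave V (dB).
rcdp_threshold_db = 55.0
wave_v_threshold_db = 15.0
pre_rcdp_range_db = rcdp_threshold_db - wave_v_threshold_db
```

With these illustrative values the fit gives a slope of -90 us/dB and a pre-RCDP range of 40 dB, i.e., numbers in the same range the study reports for young puppies.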

  2. Protective effects of brain-derived neurotrophic factor on the noise-damaged cochlear spiral ganglion.

    PubMed

    Zhai, S-Q; Guo, W; Hu, Y-Y; Yu, N; Chen, Q; Wang, J-Z; Fan, M; Yang, W-Y

    2011-05-01

    To explore the protective effects of brain-derived neurotrophic factor on the noise-damaged cochlear spiral ganglion. Recombinant adenovirus brain-derived neurotrophic factor vector, recombinant adenovirus LacZ and artificial perilymph were prepared. Guinea pigs with audiometric auditory brainstem response thresholds of more than 75 dB SPL, measured seven days after four hours of noise exposure at 135 dB SPL, were divided into three groups. Adenovirus brain-derived neurotrophic factor vector, adenovirus LacZ and perilymph were infused into the cochleae of the three groups, variously. Eight weeks later, the cochleae were stained immunohistochemically and the spiral ganglion cells counted. The auditory brainstem response threshold recorded before and seven days after noise exposure did not differ significantly between the three groups. However, eight weeks after cochlear perfusion, the group receiving brain-derived neurotrophic factor had a significantly decreased auditory brainstem response threshold and increased spiral ganglion cell count, compared with the adenovirus LacZ and perilymph groups. When administered via cochlear infusion following noise damage, brain-derived neurotrophic factor appears to improve the auditory threshold, and to have a protective effect on the spiral ganglion cells.

  3. When instructions fail. The effects of stimulus control training on brain injury survivors' attending and reporting during hearing screenings.

    PubMed

    Schlund, M W

    2000-10-01

    Bedside hearing screenings are routinely conducted by speech and language pathologists for brain injury survivors during rehabilitation. Cognitive deficits resulting from brain injury, however, may interfere with obtaining estimates of auditory thresholds. Poor comprehension or attention deficits often compromise patients' ability to follow procedural instructions. This article describes the effects of jointly applying behavioral and psychophysical methods to improve two severely brain-injured survivors' attending and reporting during presentation of auditory test stimuli. Treatment consisted of stimulus control training that involved differentially reinforcing responding in the presence and absence of an auditory test tone. Subsequent hearing screenings were conducted with novel auditory test tones and a common titration procedure. Results showed that prior stimulus control training improved attending and reporting such that hearing screenings could be conducted and estimates of auditory thresholds obtained.

  4. Speech acquisition predicts regions of enhanced cortical response to auditory stimulation in autism spectrum individuals.

    PubMed

    Samson, F; Zeffiro, T A; Doyon, J; Benali, H; Mottron, L

    2015-09-01

    A continuum of phenotypes makes up the autism spectrum (AS). In particular, individuals show large differences in language acquisition, ranging from precocious speech to severe speech onset delay. However, the neurological origin of this heterogeneity remains unknown. Here, we sought to determine whether AS individuals differing in speech acquisition show different cortical responses to auditory stimulation and morphometric brain differences. Whole-brain activity following exposure to non-social sounds was investigated. Individuals in the AS were classified according to the presence or absence of Speech Onset Delay (AS-SOD and AS-NoSOD, respectively) and were compared with IQ-matched typically developing individuals (TYP). AS-NoSOD participants displayed greater task-related activity than TYP in the inferior frontal gyrus and peri-auditory middle and superior temporal gyri, which are associated with language processing. Conversely, the AS-SOD group showed enhanced activity only in the vicinity of the auditory cortex. We detected no differences in brain structure between groups. This is the first study to demonstrate the existence of differences in functional brain activity between AS individuals divided according to their pattern of speech development. These findings support the Trigger-threshold-target model and indicate that the occurrence of speech onset delay in AS individuals depends on the location of cortical functional reallocation, which favors perception in AS-SOD and language in AS-NoSOD. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  5. Functional magnetic resonance imaging (FMRI) with auditory stimulation in songbirds.

    PubMed

    Van Ruijssevelt, Lisbeth; De Groof, Geert; Van der Kant, Anne; Poirier, Colline; Van Audekerke, Johan; Verhoye, Marleen; Van der Linden, Annemie

    2013-06-03

    The neurobiology of birdsong, as a model for human speech, is a prominent area of research in behavioral neuroscience. Whereas electrophysiology and molecular approaches allow the investigation of either different stimuli in a few neurons or one stimulus in large parts of the brain, blood oxygenation level dependent (BOLD) functional Magnetic Resonance Imaging (fMRI) combines both advantages, i.e., comparing the neural activation induced by different stimuli across the entire brain at once. fMRI in songbirds is challenging because of the small size of their brains and because their bones, and especially their skull, comprise numerous air cavities, inducing important susceptibility artifacts. Gradient-echo (GE) BOLD fMRI has been successfully applied to songbirds (1-5) (for a review, see (6)). These studies focused on the primary and secondary auditory brain areas, which are regions free of susceptibility artifacts. However, because processes of interest may occur beyond these regions, whole-brain BOLD fMRI is required, using an MRI sequence less susceptible to these artifacts. This can be achieved by using spin-echo (SE) BOLD fMRI (7,8). In this article, we describe how to use this technique in zebra finches (Taeniopygia guttata), which are small songbirds with a body weight of 15-25 g that are extensively studied in the behavioral neuroscience of birdsong. The main topic of fMRI studies on songbirds is song perception and song learning. The auditory nature of the stimuli, combined with the weak BOLD sensitivity of SE-based (compared to GE-based) fMRI sequences, makes the implementation of this technique very challenging.

  6. Blast-induced tinnitus and hyperactivity in the auditory cortex of rats.

    PubMed

    Luo, Hao; Pace, Edward; Zhang, Jinsheng

    2017-01-06

    Blast exposure can cause tinnitus and hearing impairment by damaging the auditory periphery and through direct impact to the brain, which trigger neural plasticity in both auditory and non-auditory centers. However, the underlying neurophysiological mechanisms of blast-induced tinnitus are still unknown. In this study, we induced tinnitus in rats using blast exposure and investigated changes in spontaneous firing and bursting activity in the auditory cortex (AC) at one day, one month, and three months after blast exposure. Our results showed that spontaneous activity in the tinnitus-positive group began changing at one month after blast exposure, and manifested as robust hyperactivity across all frequency regions at three months after exposure. We also observed an increased bursting rate in the low-frequency region at one month after blast exposure and in all frequency regions at three months after exposure. Taken together, spontaneous firing and bursting activity in the AC played an important role in blast-induced chronic tinnitus as opposed to acute tinnitus, thus favoring a bottom-up mechanism. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  7. Neural plasticity expressed in central auditory structures with and without tinnitus

    PubMed Central

    Roberts, Larry E.; Bosnyak, Daniel J.; Thompson, David C.

    2012-01-01

    Sensory training therapies for tinnitus are based on the assumption that, notwithstanding neural changes related to tinnitus, auditory training can alter the response properties of neurons in auditory pathways. To assess this assumption, we investigated whether brain changes induced by sensory training in tinnitus sufferers and measured by electroencephalography (EEG) are similar to those induced in age and hearing loss matched individuals without tinnitus trained on the same auditory task. Auditory training was given using a 5 kHz 40-Hz amplitude-modulated (AM) sound that was in the tinnitus frequency region of the tinnitus subjects and enabled extraction of the 40-Hz auditory steady-state response (ASSR) and P2 transient response known to localize to primary and non-primary auditory cortex, respectively. P2 amplitude increased over training sessions equally in participants with tinnitus and in control subjects, suggesting normal remodeling of non-primary auditory regions in tinnitus. However, training-induced changes in the ASSR differed between the tinnitus and control groups. In controls the phase delay between the 40-Hz response and stimulus waveforms reduced by about 10° over training, in agreement with previous results obtained in young normal hearing individuals. However, ASSR phase did not change significantly with training in the tinnitus group, although some participants showed phase shifts resembling controls. On the other hand, ASSR amplitude increased with training in the tinnitus group, whereas in controls this response (which is difficult to remodel in young normal hearing subjects) did not change with training. These results suggest that neural changes related to tinnitus altered how neural plasticity was expressed in the region of primary but not non-primary auditory cortex. Auditory training did not reduce tinnitus loudness although a small effect on the tinnitus spectrum was detected. PMID:22654738
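
The 40-Hz ASSR amplitude and phase measures discussed above can be estimated from a trial-averaged response by reading off the FFT bin at the stimulation frequency. A minimal sketch on synthetic data (sampling rate, response amplitude, phase, and noise level are all illustrative, not values from the study):

```python
import numpy as np

fs = 1000.0                       # sampling rate in Hz; illustrative
t = np.arange(0, 1.0, 1 / fs)     # one second of trial-averaged data
rng = np.random.default_rng(0)

# Synthetic trial-averaged EEG: a 40-Hz steady-state response plus residual noise.
eeg = 1.5 * np.sin(2 * np.pi * 40 * t + 0.3) + 0.1 * rng.standard_normal(t.size)

# Amplitude and phase of the 40-Hz component from the corresponding FFT bin.
spectrum = np.fft.rfft(eeg)
freqs = np.fft.rfftfreq(eeg.size, 1 / fs)
bin40 = np.argmin(np.abs(freqs - 40.0))
amplitude = 2 * np.abs(spectrum[bin40]) / eeg.size
phase = np.angle(spectrum[bin40])
```

The phase value, compared against the stimulus waveform's phase, gives the phase delay whose training-induced shift the study tracks; the amplitude gives the ASSR amplitude measure.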

  8. Selective Neuronal Activation by Cochlear Implant Stimulation in Auditory Cortex of Awake Primate

    PubMed Central

    Johnson, Luke A.; Della Santina, Charles C.

    2016-01-01

    Despite the success of cochlear implants (CIs) in human populations, most users perform poorly in noisy environments and in music and tonal-language perception. How CI devices engage the brain at the single-neuron level has remained largely unknown, in particular in the primate brain. By comparing neuronal responses with acoustic and CI stimulation in marmoset monkeys unilaterally implanted with a CI electrode array, we discovered that CI stimulation was surprisingly ineffective at activating many neurons in auditory cortex, particularly in the hemisphere ipsilateral to the CI. Further analyses revealed that the CI-nonresponsive neurons were narrowly tuned to frequency and sound level when probed with acoustic stimuli; such neurons likely play a role in perceptual behaviors requiring fine frequency and level discrimination, tasks that CI users find especially challenging. These findings suggest potential deficits in central auditory processing of CI stimulation and provide important insights into factors responsible for poor CI user performance in a wide range of perceptual tasks. SIGNIFICANCE STATEMENT The cochlear implant (CI) is the most successful neural prosthetic device to date and has restored hearing in hundreds of thousands of deaf individuals worldwide. However, despite its huge successes, CI users still face many perceptual limitations, and the brain mechanisms involved in hearing through CI devices remain poorly understood. By directly comparing single-neuron responses to acoustic and CI stimulation in auditory cortex of awake marmoset monkeys, we discovered that neurons unresponsive to CI stimulation were sharply tuned to frequency and sound level. Our results point out a major deficit in central auditory processing of CI stimulation and provide important insights into mechanisms underlying the poor CI user performance in a wide range of perceptual tasks. PMID:27927962

  9. Multichannel biomedical time series clustering via hierarchical probabilistic latent semantic analysis.

    PubMed

    Wang, Jin; Sun, Xiangping; Nahavandi, Saeid; Kouzani, Abbas; Wu, Yuchuan; She, Mary

    2014-11-01

    Biomedical time series clustering that automatically groups a collection of time series according to their internal similarity is of importance for medical record management and inspection, such as bio-signal archiving and retrieval. In this paper, a novel framework is proposed that automatically groups a set of unlabelled multichannel biomedical time series according to their internal structural similarity. Specifically, we treat a multichannel biomedical time series as a document and extract local segments from the time series as words. We extend a topic model, the Hierarchical probabilistic Latent Semantic Analysis (H-pLSA), which was originally developed for visual motion analysis, to cluster a set of unlabelled multichannel time series. The H-pLSA models each channel of the multichannel time series using a local pLSA in the first layer. The topics learned in the local pLSAs are then fed to a global pLSA in the second layer to discover the categories of multichannel time series. Experiments on a dataset extracted from multichannel electrocardiography (ECG) signals demonstrate that the proposed method performs better than previous state-of-the-art approaches and is relatively robust to variations of parameters, including the length of local segments and the dictionary size. Although the experimental evaluation used multichannel ECG signals in a biometric scenario, the proposed algorithm is a universal framework for clustering multichannel biomedical time series according to their structural similarity, with many applications in biomedical time series management. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
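
The "segments as words, channels as documents" front end the authors describe can be sketched without the pLSA layers themselves. In the sketch below, the window length, step, codebook size, and the plain k-means quantizer are illustrative stand-ins for the paper's dictionary-learning step, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 2-channel biomedical time series (e.g. ECG-like); values illustrative.
series = rng.standard_normal((2, 500))

# "Words": local segments extracted with a sliding window from each channel.
win, step = 20, 10
segments = np.array([series[ch, i:i + win]
                     for ch in range(series.shape[0])
                     for i in range(0, series.shape[1] - win + 1, step)])

# A tiny k-means codebook quantizes segments into dictionary words.
k = 8
centroids = segments[rng.choice(len(segments), k, replace=False)]
for _ in range(20):
    labels = np.argmin(((segments[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    for j in range(k):
        if np.any(labels == j):
            centroids[j] = segments[labels == j].mean(axis=0)

# Each channel becomes a "document": a histogram of codebook word counts,
# which is the input a local pLSA would model in the first layer.
per_channel = len(segments) // series.shape[0]
docs = np.array([np.bincount(labels[c * per_channel:(c + 1) * per_channel], minlength=k)
                 for c in range(series.shape[0])])
```

The two-layer topic model then operates on these per-channel word histograms; that part is omitted here.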

  10. Deep brain stimulation of the ventral hippocampus restores deficits in processing of auditory evoked potentials in a rodent developmental disruption model of schizophrenia

    PubMed Central

    Ewing, Samuel G.; Grace, Anthony A.

    2012-01-01

    Existing antipsychotic drugs are most effective at treating the positive symptoms of schizophrenia, but their relative efficacy is low and they are associated with considerable side effects. In this study deep brain stimulation of the ventral hippocampus was performed in a rodent model of schizophrenia (MAM-E17) in an attempt to alleviate one set of neurophysiological alterations observed in this disorder. Bipolar stimulating electrodes were fabricated and implanted, bilaterally, into the ventral hippocampus of rats. High frequency stimulation was delivered bilaterally via a custom-made stimulation device and both spectral analysis (power and coherence) of resting state local field potentials and amplitude of auditory evoked potential components during a standard inhibitory gating paradigm were examined. MAM rats exhibited alterations in specific components of the auditory evoked potential in the infralimbic cortex, the core of the nucleus accumbens, mediodorsal thalamic nucleus, and ventral hippocampus in the left hemisphere only. DBS was effective in reversing these evoked deficits in the infralimbic cortex and the mediodorsal thalamic nucleus of MAM-treated rats to levels similar to those observed in control animals. In contrast stimulation did not alter evoked potentials in control rats. No deficits or stimulation-induced alterations were observed in the prelimbic and orbitofrontal cortices, the shell of the nucleus accumbens or ventral tegmental area. These data indicate a normalization of deficits in generating auditory evoked potentials induced by a developmental disruption by acute high frequency, electrical stimulation of the ventral hippocampus. PMID:23269227
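
The resting-state spectral measures mentioned above (power and coherence of local field potentials) are standard estimates. A sketch on synthetic two-channel data using Welch's method (sampling rate, frequencies, and noise levels are illustrative, not recording parameters from the study):

```python
import numpy as np
from scipy import signal

fs = 500.0
t = np.arange(0, 10.0, 1 / fs)
rng = np.random.default_rng(1)

# Two synthetic "LFP" channels sharing a common 8-Hz component (hypothetical data).
common = np.sin(2 * np.pi * 8 * t)
lfp_a = common + 0.5 * rng.standard_normal(t.size)
lfp_b = common + 0.5 * rng.standard_normal(t.size)

# Power via Welch's method, inter-site coupling via magnitude-squared coherence.
freqs, power_a = signal.welch(lfp_a, fs=fs, nperseg=1024)
freqs_c, coh = signal.coherence(lfp_a, lfp_b, fs=fs, nperseg=1024)
peak8 = np.argmin(np.abs(freqs_c - 8.0))
```

Here the shared 8-Hz rhythm produces both a power peak near 8 Hz and high coherence at that frequency, the kind of band-limited coupling such analyses quantify.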

  11. Primary Generators of Visually Evoked Field Potentials Recorded in the Macaque Auditory Cortex.

    PubMed

    Kajikawa, Yoshinao; Smiley, John F; Schroeder, Charles E

    2017-10-18

    Prior studies have reported "local" field potential (LFP) responses to faces in the macaque auditory cortex and have suggested that such face-LFPs may be substrates of audiovisual integration. However, although field potentials (FPs) may reflect the synaptic currents of neurons near the recording electrode, due to the use of a distant reference electrode, they often reflect those of synaptic activity occurring in distant sites as well. Thus, FP recordings within a given brain region (e.g., auditory cortex) may be "contaminated" by activity generated elsewhere in the brain. To determine whether face responses are indeed generated within macaque auditory cortex, we recorded FPs and concomitant multiunit activity with linear array multielectrodes across auditory cortex in three macaques (one female), and applied current source density (CSD) analysis to the laminar FP profile. CSD analysis revealed no appreciable local generator contribution to the visual FP in auditory cortex, although we did note an increase in the amplitude of visual FP with cortical depth, suggesting that their generators are located below auditory cortex. In the underlying inferotemporal cortex, we found polarity inversions of the main visual FP components accompanied by robust CSD responses and large-amplitude multiunit activity. These results indicate that face-evoked FP responses in auditory cortex are not generated locally but are volume-conducted from other face-responsive regions. In broader terms, our results underscore the caution that, unless far-field contamination is removed, LFPs in general may reflect such "far-field" activity, in addition to, or in absence of, local synaptic responses. SIGNIFICANCE STATEMENT Field potentials (FPs) can index neuronal population activity that is not evident in action potentials. However, due to volume conduction, FPs may reflect activity in distant neurons superimposed upon that of neurons close to the recording electrode. This is problematic as the
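
CSD analysis of a laminar FP profile, as used above, is conventionally approximated by the second spatial derivative of voltage across equally spaced contacts. A minimal sketch (the contact spacing, conductivity value, and synthetic LFP profile are illustrative assumptions, not the study's recordings):

```python
import numpy as np

h = 100e-6        # electrode contact spacing in meters; illustrative
sigma = 0.3       # assumed tissue conductivity in S/m
depth = np.arange(16) * h

# Synthetic laminar LFP profile at one time point (V): a smooth dipolar shape.
lfp = 1e-4 * np.sin(2 * np.pi * depth / depth[-1])

# Second-spatial-derivative CSD estimate at the interior contacts:
# CSD_i = -sigma * (phi[i+1] - 2*phi[i] + phi[i-1]) / h**2
csd = -sigma * (lfp[2:] - 2 * lfp[1:-1] + lfp[:-2]) / h**2
```

Because volume-conducted far-field potentials vary smoothly across contacts, this second difference largely cancels them, which is why an absent CSD response alongside a sizeable FP is evidence against a local generator.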

  12. Primary Generators of Visually Evoked Field Potentials Recorded in the Macaque Auditory Cortex

    PubMed Central

    Smiley, John F.; Schroeder, Charles E.

    2017-01-01

    Prior studies have reported “local” field potential (LFP) responses to faces in the macaque auditory cortex and have suggested that such face-LFPs may be substrates of audiovisual integration. However, although field potentials (FPs) may reflect the synaptic currents of neurons near the recording electrode, due to the use of a distant reference electrode, they often reflect those of synaptic activity occurring in distant sites as well. Thus, FP recordings within a given brain region (e.g., auditory cortex) may be “contaminated” by activity generated elsewhere in the brain. To determine whether face responses are indeed generated within macaque auditory cortex, we recorded FPs and concomitant multiunit activity with linear array multielectrodes across auditory cortex in three macaques (one female), and applied current source density (CSD) analysis to the laminar FP profile. CSD analysis revealed no appreciable local generator contribution to the visual FP in auditory cortex, although we did note an increase in the amplitude of visual FP with cortical depth, suggesting that their generators are located below auditory cortex. In the underlying inferotemporal cortex, we found polarity inversions of the main visual FP components accompanied by robust CSD responses and large-amplitude multiunit activity. These results indicate that face-evoked FP responses in auditory cortex are not generated locally but are volume-conducted from other face-responsive regions. In broader terms, our results underscore the caution that, unless far-field contamination is removed, LFPs in general may reflect such “far-field” activity, in addition to, or in absence of, local synaptic responses. SIGNIFICANCE STATEMENT Field potentials (FPs) can index neuronal population activity that is not evident in action potentials. However, due to volume conduction, FPs may reflect activity in distant neurons superimposed upon that of neurons close to the recording electrode. This is

  13. Neuroestrogen signaling in the songbird auditory cortex propagates into a sensorimotor network via an `interface' nucleus

    PubMed Central

    Pawlisch, Benjamin A.; Remage-Healey, Luke

    2014-01-01

    Neuromodulators rapidly alter activity of neural circuits and can therefore shape higher-order functions, such as sensorimotor integration. Increasing evidence suggests that brain-derived estrogens, such as 17-β-estradiol, can act rapidly to modulate sensory processing. However, less is known about how rapid estrogen signaling can impact downstream circuits. Past studies have demonstrated that estradiol levels increase within the songbird auditory cortex (the caudomedial nidopallium, NCM) during social interactions. Local estradiol signaling enhances the auditory-evoked firing rate of neurons in NCM to a variety of stimuli, while also enhancing the selectivity of auditory-evoked responses of neurons in a downstream sensorimotor brain region, HVC (proper name). Since these two brain regions are not directly connected, we employed dual extracellular recordings in HVC and the upstream nucleus interfacialis of the nidopallium (NIf) during manipulations of estradiol within NCM to better understand the pathway by which estradiol signaling propagates to downstream circuits. NIf has direct input into HVC, passing auditory information into the vocal motor output pathway, and is a possible source of the neural selectivity within HVC. Here, during acute estradiol administration in NCM, NIf neurons showed increases in baseline firing rates and auditory-evoked firing rates to all stimuli. Furthermore, when estradiol synthesis was blocked in NCM, we observed simultaneous decreases in the selectivity of NIf and HVC neurons. These effects were not due to direct estradiol actions because NIf has little to no capability for local estrogen synthesis or estrogen receptors, and these effects were specific to NIf because other neurons immediately surrounding NIf did not show these changes. Our results demonstrate that transsynaptic, rapid fluctuations in neuroestrogens are transmitted into NIf and subsequently HVC, both regions important for sensorimotor integration. Overall, these

  14. Encoding and Decoding of Multi-Channel ICMS in Macaque Somatosensory Cortex.

    PubMed

    Dadarlat, Maria C; Sabes, Philip N

    2016-01-01

    Naturalistic control of brain-machine interfaces will require artificial proprioception, potentially delivered via intracortical microstimulation (ICMS). We have previously shown that multi-channel ICMS can guide a monkey's reaching to unseen targets in a planar workspace. Here, we expand on that work, asking how ICMS is decoded into target angle and distance by analyzing the performance of a monkey when ICMS feedback was degraded. From the resulting pattern of errors, we found that the animal's estimate of target direction was consistent with a weighted circular-mean strategy, close to the optimal decoding strategy given the ICMS encoding. These results support our previous finding that animals can learn to use this artificial sensory feedback in an efficient and naturalistic manner.
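The weighted circular-mean decoding strategy described above can be sketched in a few lines: each channel contributes a unit vector in its cued direction, scaled by a weight, and the decoded angle is the direction of the resultant vector. The angles and weights below are hypothetical illustration values, not data from the study.

```python
import math

def weighted_circular_mean(angles_rad, weights):
    """Weighted circular mean: direction of the weighted sum of unit vectors."""
    s = sum(w * math.sin(a) for a, w in zip(angles_rad, weights))
    c = sum(w * math.cos(a) for a, w in zip(angles_rad, weights))
    return math.atan2(s, c)

# Example: three channels cueing directions clustered around 90 degrees.
angles = [math.radians(d) for d in (80, 90, 100)]
estimate = weighted_circular_mean(angles, [1.0, 2.0, 1.0])
print(round(math.degrees(estimate), 1))  # symmetric weights -> 90.0
```

Averaging on the circle (rather than averaging raw angles) avoids wrap-around errors near 0/360 degrees, which is why the circular mean is the natural decoder for directional cues.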

  15. Transcranial alternating current stimulation modulates auditory temporal resolution in elderly people.

    PubMed

    Baltus, Alina; Vosskuhl, Johannes; Boetzel, Cindy; Herrmann, Christoph Siegfried

    2018-05-13

    Recent research provides evidence for a functional role of brain oscillations for perception. For example, auditory temporal resolution seems to be linked to individual gamma frequency of auditory cortex. Individual gamma frequency not only correlates with performance in between-channel gap detection tasks but can be modulated via auditory transcranial alternating current stimulation. Modulation of individual gamma frequency is accompanied by an improvement in gap detection performance. Aging changes electrophysiological frequency components and sensory processing mechanisms. Therefore, we conducted a study to investigate the link between individual gamma frequency and gap detection performance in elderly people using auditory transcranial alternating current stimulation. In a within-subject design, twelve participants were electrically stimulated with two individualized transcranial alternating current stimulation frequencies: 3 Hz above their individual gamma frequency (experimental condition) and 4 Hz below their individual gamma frequency (control condition) while they were performing a between-channel gap detection task. As expected, individual gamma frequencies correlated significantly with gap detection performance at baseline. In the experimental condition, transcranial alternating current stimulation modulated gap detection performance; in the control condition, stimulation did not. In addition, in older adults the effect of transcranial alternating current stimulation on auditory temporal resolution seems to depend on endogenous frequencies in auditory cortex: older adults with slower individual gamma frequencies and lower auditory temporal resolution benefit from auditory transcranial alternating current stimulation and show increased gap detection performance during stimulation. Our results strongly suggest individualized transcranial alternating current stimulation protocols for successful modulation of performance.

  16. Brain Connectivity Networks and the Aesthetic Experience of Music.

    PubMed

    Reybrouck, Mark; Vuust, Peter; Brattico, Elvira

    2018-06-12

    Listening to music is above all a human experience, which becomes an aesthetic experience when an individual immerses himself/herself in the music, dedicating attention to perceptual-cognitive-affective interpretation and evaluation. The study of these processes where the individual perceives, understands, enjoys and evaluates a set of auditory stimuli has mainly been focused on the effect of music on specific brain structures, as measured with neurophysiology and neuroimaging techniques. The very recent application of network science algorithms to brain research allows an insight into the functional connectivity between brain regions. These studies in network neuroscience have identified distinct circuits that function during goal-directed tasks and resting states. We review recent neuroimaging findings which indicate that music listening is traceable in terms of network connectivity and activations of target regions in the brain, in particular between the auditory cortex, the reward brain system and brain regions active during mind wandering.

  17. 47 CFR 76.613 - Interference from a multichannel video programming distributor (MVPD).

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 4 2013-10-01 2013-10-01 false Interference from a multichannel video... (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Technical Standards § 76.613 Interference from a multichannel video programming distributor (MVPD). (a) Harmful interference is...

  2. [Some electrophysiological and hemodynamic characteristics of auditory selective attention in norm and schizophrenia].

    PubMed

    Lebedeva, I S; Akhadov, T A; Petriaĭkin, A V; Kaleda, V G; Barkhatova, A N; Golubev, S A; Rumiantseva, E E; Vdovenko, A M; Fufaeva, E A; Semenova, N A

    2011-01-01

    Six patients in remission after a first episode of juvenile schizophrenia and seven sex- and age-matched mentally healthy subjects were examined with fMRI and ERP methods. The auditory oddball paradigm was applied. Differences in P300 parameters did not reach significance; however, a significantly higher hemodynamic response to target stimuli was found in patients bilaterally in the supramarginal gyrus and in the right medial frontal gyrus, pointing to a dysfunction of these brain areas in supporting auditory selective attention.

  3. Multichannel film dosimetry with nonuniformity correction.

    PubMed

    Micke, Andre; Lewis, David F; Yu, Xiang

    2011-05-01

    A new method to evaluate radiochromic film dosimetry data scanned in multiple color channels is presented. This work was undertaken to demonstrate that the multichannel method is fundamentally superior to the traditional single channel method. The multichannel method allows for the separation and removal of the non-dose-dependent portions of a film image leaving a residual image that is dependent only on absorbed dose. Radiochromic films were exposed to 10 x 10 cm radiation fields (Co-60 and 6 MV) at doses up to about 300 cGy. The films were scanned in red-green-blue (RGB) format on a flatbed color scanner and measured to build calibration tables relating the absorbed dose to the response of the film in each of the color channels. Film images were converted to dose maps using two methods. The first method used the response from a single color channel and the second method used the response from all three color channels. The multichannel method allows for the separation of the scanned signal into one part that is dose-dependent and another part that is dose-independent and enables the correction of a variety of disturbances in the digitized image including nonuniformities in the active coating on the radiochromic film as well as scanner related artifacts. The fundamental mathematics of the two methods is described and the dose maps calculated from film images using the two methods are compared and analyzed. The multichannel dosimetry method was shown to be an effective way to separate out non-dose-dependent abnormalities from radiochromic dosimetry film images. The process was shown to remove disturbances in the scanned images caused by nonhomogeneity of the radiochromic film and artifacts caused by the scanner and to improve the integrity of the dose information. Multichannel dosimetry also reduces random noise in the dose images and mitigates scanner-related artifacts such as lateral position dependence. 
In providing an ability to calculate dose maps from data in
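The separation step can be sketched as a per-pixel least-squares fit. Assuming, purely for illustration, a linear response model in which each color channel responds to dose with its own sensitivity while a coating-thickness disturbance perturbs all channels equally (the real radiochromic response is nonlinear, and these coefficients are invented), three channels over-determine the two unknowns:

```python
import numpy as np

# Hypothetical linear model: channel response = a_k * dose + b_k * disturbance.
a = np.array([1.00, 0.60, 0.25])   # invented dose sensitivities (R, G, B)
b = np.array([1.00, 1.00, 1.00])   # thickness disturbance affects all channels alike

def separate(responses):
    """Least-squares fit of (dose, disturbance) from one 3-channel pixel."""
    A = np.column_stack([a, b])
    x, *_ = np.linalg.lstsq(A, responses, rcond=None)
    return x  # (dose, disturbance)

true_dose, true_dist = 200.0, 5.0
pixel = a * true_dose + b * true_dist   # simulate a disturbed pixel
dose, dist = separate(pixel)
print(round(float(dose), 3), round(float(dist), 3))
```

Because the dose and disturbance signatures across the three channels are linearly independent, the fit recovers both components; this is the sense in which the extra channels let non-dose artifacts be removed.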

  4. Binaural beats increase interhemispheric alpha-band coherence between auditory cortices.

    PubMed

    Solcà, Marco; Mottaz, Anaïs; Guggisberg, Adrian G

    2016-02-01

    Binaural beats (BBs) are an auditory illusion occurring when two tones of slightly different frequency are presented separately to each ear. BBs have been suggested to alter physiological and cognitive processes through synchronization of the brain hemispheres. To test this, we recorded electroencephalograms (EEG) at rest and while participants listened to BBs or a monaural control condition during which both tones were presented to both ears. We calculated for each condition the interhemispheric coherence, which expressed the synchrony between neural oscillations of both hemispheres. Compared to monaural beats and resting state, BBs enhanced interhemispheric coherence between the auditory cortices. Beat frequencies in the alpha (10 Hz) and theta (4 Hz) frequency range both increased interhemispheric coherence selectively at alpha frequencies. In a second experiment, we evaluated whether this coherence increase has a behavioral aftereffect on binaural listening. No effects were observed in a dichotic digit task performed immediately after BBs presentation. Our results suggest that BBs enhance alpha-band oscillation synchrony between the auditory cortices during auditory stimulation. This effect seems to reflect binaural integration rather than entrainment. Copyright © 2015 Elsevier B.V. All rights reserved.
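Interhemispheric coherence of the kind computed here can be estimated from Welch-averaged cross-spectra. A minimal sketch using synthetic signals (a shared 10 Hz component plus independent noise; these are not the study's EEG data):

```python
import numpy as np
from scipy.signal import coherence

fs, dur = 256, 8.0
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(0)

common = np.sin(2 * np.pi * 10 * t)                 # shared 10 Hz "oscillation"
left = common + 0.5 * rng.standard_normal(t.size)   # plus independent noise
right = common + 0.5 * rng.standard_normal(t.size)

f, Cxy = coherence(left, right, fs=fs, nperseg=256)
c10 = float(Cxy[np.argmin(np.abs(f - 10))])         # coherence at shared frequency
c_noise = float(Cxy[(f > 30) & (f < 50)].mean())    # band with no shared signal
print(round(c10, 2), round(c_noise, 2))
```

Coherence is high only where the two channels share phase-locked activity, which is why it serves as a synchrony index between hemispheres.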

  5. Demonstrations of simple and complex auditory psychophysics for multiple platforms and environments

    NASA Astrophysics Data System (ADS)

    Horowitz, Seth S.; Simmons, Andrea M.; Blue, China

    2005-09-01

    Sound is arguably the most widely perceived and pervasive form of energy in our world, and among the least understood, in part due to the complexity of its underlying principles. A series of interactive displays has been developed which demonstrates that the nature of sound involves the propagation of energy through space, and illustrates the definition of psychoacoustics, which is how listeners map the physical aspects of sound and vibration onto their brains. These displays use auditory illusions and commonly experienced music and sound in novel presentations (using interactive computer algorithms) to show that what you hear is not always what you get. The areas covered in these demonstrations range from simple and complex auditory localization, which illustrate why humans are bad at echolocation but excellent at determining the contents of auditory space, to auditory illusions that manipulate fine phase information and make the listener think their head is changing size. Another demonstration shows how auditory and visual localization coincide and sound can be used to change visual tracking. These demonstrations are designed to run on a wide variety of student accessible platforms including web pages, stand-alone presentations, or even hardware-based systems for museum displays.

  6. Local and Global Auditory Processing: Behavioral and ERP Evidence

    PubMed Central

    Sanders, Lisa D.; Poeppel, David

    2007-01-01

    Differential processing of local and global visual features is well established. Global precedence effects, differences in event-related potentials (ERPs) elicited when attention is focused on local versus global levels, and hemispheric specialization for local and global features all indicate that relative scale of detail is an important distinction in visual processing. Observing analogous differential processing of local and global auditory information would suggest that scale of detail is a general organizational principle of the brain. However, to date the research on auditory local and global processing has primarily focused on music perception or on the perceptual analysis of relatively higher and lower frequencies. The study described here suggests that temporal aspects of auditory stimuli better capture the local-global distinction. By combining short (40 ms) frequency modulated tones in series to create global auditory patterns (500 ms), we independently varied whether pitch increased or decreased over short time spans (local) and longer time spans (global). Accuracy and reaction time measures revealed better performance for global judgments and asymmetric interference effects that were modulated by the amount of pitch change. ERPs recorded while participants listened to identical sounds and indicated the direction of pitch change at the local or global levels provided evidence for differential processing similar to that found in ERP studies employing hierarchical visual stimuli. ERP measures failed to provide evidence for lateralization of local and global auditory perception, but differences in distributions suggest preferential processing in more ventral and dorsal areas, respectively. PMID:17113115

  7. Auditory stream segregation in children with Asperger syndrome

    PubMed Central

    Lepistö, T.; Kuitunen, A.; Sussman, E.; Saalasti, S.; Jansson-Verkasalo, E.; Nieminen-von Wendt, T.; Kujala, T.

    2009-01-01

    Individuals with Asperger syndrome (AS) often have difficulties in perceiving speech in noisy environments. The present study investigated whether this might be explained by deficient auditory stream segregation ability, that is, by a more basic difficulty in separating simultaneous sound sources from each other. To this end, auditory event-related brain potentials were recorded from a group of school-aged children with AS and a group of age-matched controls using a paradigm specifically developed for studying stream segregation. Differences in the amplitudes of ERP components were found between groups only in the stream segregation conditions and not for simple feature discrimination. The results indicated that children with AS have difficulties in segregating concurrent sound streams, which ultimately may contribute to the difficulties in speech-in-noise perception. PMID:19751798

  8. Long-term variability of importance of brain regions in evolving epileptic brain networks

    NASA Astrophysics Data System (ADS)

    Geier, Christian; Lehnertz, Klaus

    2017-04-01

    We investigate the temporal and spatial variability of the importance of brain regions in evolving epileptic brain networks. We construct these networks from multiday, multichannel electroencephalographic data recorded from 17 epilepsy patients and use centrality indices to assess the importance of brain regions. Time-resolved indications of highest importance fluctuate over time to varying degrees, albeit with some periodic temporal structure that can mostly be attributed to phenomena unrelated to the disease. In contrast, relevant aspects of the epileptic process contribute only marginally. Indications of highest importance also exhibit pronounced alternations between various brain regions, which is of relevance for studies aiming at an improved understanding of the epileptic process with graph-theoretical approaches. Nonetheless, these findings may guide new developments for individualized diagnosis, treatment, and control.
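A centrality index of the kind used above can be illustrated on a toy network. The five-node adjacency matrix below is invented (not patient data); plain power iteration yields eigenvector centrality, one standard importance measure:

```python
import numpy as np

# Hypothetical 5-node "functional network" (symmetric adjacency, invented edges).
A = np.array([
    [0, 1, 1, 1, 1],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [1, 0, 1, 0, 1],
    [1, 0, 0, 1, 0],
], dtype=float)

def eigenvector_centrality(adj, iters=200):
    """Power iteration: a node is important if its neighbours are important."""
    v = np.ones(adj.shape[0])
    for _ in range(iters):
        v = adj @ v
        v /= np.linalg.norm(v)
    return v

c = eigenvector_centrality(A)
most_central = int(np.argmax(c))
print(most_central)  # node 0 is the most densely connected in this toy graph
```

Tracking which node carries the highest centrality in successive time windows is exactly the kind of time-resolved importance analysis the abstract describes.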

  9. A multichannel amplitude and relative-phase controller for active sound quality control

    NASA Astrophysics Data System (ADS)

    Mosquera-Sánchez, Jaime A.; Desmet, Wim; de Oliveira, Leopoldo P. R.

    2017-05-01

    The enhancement of the sound quality of periodic disturbances for a number of listeners within an enclosure often confronts difficulties caused by cross-channel interference, which arises from simultaneously profiling the primary sound at each error sensor. These interferences may deteriorate the original sound for each listener, which is an unacceptable result from the point of view of sound quality control. In this paper we provide experimental evidence on controlling both amplitude and relative-phase functions of stationary complex primary sounds for a number of listeners within a cavity, attaining amplifications of twice the original value, reductions on the order of 70 dB, and relative-phase shifts between ± π rad, still in a free-of-interference control scenario. To accomplish such demanding control targets, we designed a multichannel active sound profiling scheme that bases its operation on exchanging time-domain control signals among the control units during uptime. Provided the real parts of the eigenvalues of persistently excited control matrices are positive, the proposed multichannel array is able to counterbalance cross-channel interferences while attaining demanding control targets. Moreover, regularization of unstable control matrices does not prevent the proposed array from providing free-of-interference amplitude and relative-phase control, although system performance degrades as a function of the amount of regularization needed. The assessment of Loudness and Roughness metrics on the controlled primary sound shows that the proposed distributed control scheme noticeably outperforms current techniques, since active amplitude- and/or relative-phase-based enhancement of the auditory qualities of a primary sound no longer implies causing interference among different positions. In this regard, experimental results also confirm the effectiveness of the proposed scheme on stably enhancing the sound qualities of periodic sounds for
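The stability condition quoted above, positive real parts for the eigenvalues of the persistently excited control matrices, is easy to check numerically. The 2x2 matrices below are invented stand-ins, not the experimental control matrices:

```python
import numpy as np

def stable(control_matrix):
    """True iff every eigenvalue has a positive real part."""
    return bool(np.all(np.linalg.eigvals(control_matrix).real > 0))

# Invented examples: one satisfying the condition, one violating it.
G_ok = np.array([[2.0, -0.5], [0.4, 1.5]])    # complex pair with real part 1.75
G_bad = np.array([[0.1, 2.0], [2.0, 0.1]])    # eigenvalues 2.1 and -1.9
print(stable(G_ok), stable(G_bad))  # True False
```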

  10. Emotion modulates activity in the 'what' but not 'where' auditory processing pathway.

    PubMed

    Kryklywy, James H; Macpherson, Ewan A; Greening, Steven G; Mitchell, Derek G V

    2013-11-15

    Auditory cortices can be separated into dissociable processing pathways similar to those observed in the visual domain. Emotional stimuli elicit enhanced neural activation within sensory cortices when compared to neutral stimuli. This effect is particularly notable in the ventral visual stream. Little is known, however, about how emotion interacts with dorsal processing streams, and essentially nothing is known about the impact of emotion on auditory stimulus localization. In the current study, we used fMRI in concert with individualized auditory virtual environments to investigate the effect of emotion during an auditory stimulus localization task. Surprisingly, participants were significantly slower to localize emotional relative to neutral sounds. A separate localizer scan was performed to isolate neural regions sensitive to stimulus location independent of emotion. When applied to the main experimental task, a significant main effect of location, but not emotion, was found in this ROI. A whole-brain analysis of the data revealed that posterior-medial regions of auditory cortex were modulated by sound location; however, additional anterior-lateral areas of auditory cortex demonstrated enhanced neural activity to emotional compared to neutral stimuli. The latter region resembled areas described in dual pathway models of auditory processing as the 'what' processing stream, prompting a follow-up task to generate an identity-sensitive ROI (the 'what' pathway) independent of location and emotion. Within this region, significant main effects of location and emotion were identified, as well as a significant interaction. These results suggest that emotion modulates activity in the 'what,' but not the 'where,' auditory processing pathway. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. Auditory and visual cortex of primates: a comparison of two sensory systems

    PubMed Central

    Rauschecker, Josef P.

    2014-01-01

    A comparative view of the brain, comparing related functions across species and sensory systems, offers a number of advantages. In particular, it allows separating the formal purpose of a model structure from its implementation in specific brains. Models of auditory cortical processing can be conceived by analogy to the visual cortex, incorporating neural mechanisms that are found in both the visual and auditory systems. Examples of such canonical features on the columnar level are direction selectivity, size/bandwidth selectivity, as well as receptive fields with segregated versus overlapping on- and off-sub-regions. On a larger scale, parallel processing pathways have been envisioned that represent the two main facets of sensory perception: 1) identification of objects and 2) processing of space. Expanding this model in terms of sensorimotor integration and control offers an overarching view of cortical function independent of sensory modality. PMID:25728177

  12. Least squares restoration of multi-channel images

    NASA Technical Reports Server (NTRS)

    Chin, Roland T.; Galatsanos, Nikolas P.

    1989-01-01

    In this paper, a least squares filter for the restoration of multichannel imagery is presented. The restoration filter is based on a linear, space-invariant imaging model and makes use of an iterative matrix inversion algorithm. The restoration utilizes both within-channel (spatial) and cross-channel information as constraints. Experiments using color images (three-channel imagery with red, green, and blue components) were performed to evaluate the filter's performance and to compare it with other monochrome and multichannel filters.
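Iterative least-squares restoration of this kind can be sketched with a Landweber-type iteration, one common iterative scheme for solving the linear imaging model y = Hx (whether it matches the paper's exact matrix-inversion algorithm is not specified here). The 1-D blur matrix and signal are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
# Simple invented blur: each sample mixes with its two neighbours.
H = np.eye(n) + 0.3 * np.eye(n, k=1) + 0.3 * np.eye(n, k=-1)
x_true = rng.random(n)
y = H @ x_true                       # degraded observation

def landweber(H, y, iters=500):
    """Iterative least squares: x <- x + step * H^T (y - H x)."""
    step = 1.0 / np.linalg.norm(H, 2) ** 2   # small enough to guarantee convergence
    x = np.zeros(H.shape[1])
    for _ in range(iters):
        x = x + step * H.T @ (y - H @ x)
    return x

x_hat = landweber(H, y)
max_err = float(np.max(np.abs(x_hat - x_true)))
print(f"max restoration error: {max_err:.2e}")
```

In the multichannel setting the same idea applies with H extended to include cross-channel coupling, so both spatial and cross-channel information constrain the solution.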

  13. Effects of Background Music on Objective and Subjective Performance Measures in an Auditory BCI.

    PubMed

    Zhou, Sijie; Allison, Brendan Z; Kübler, Andrea; Cichocki, Andrzej; Wang, Xingyu; Jin, Jing

    2016-01-01

    Several studies have explored brain computer interface (BCI) systems based on auditory stimuli, which could help patients with visual impairments. Usability and user satisfaction are important considerations in any BCI. Although background music can influence emotion and performance in other task environments, and many users may wish to listen to music while using a BCI, auditory and other BCIs are typically studied without background music. Some work has explored the possibility of using polyphonic music in auditory BCI systems. However, this approach requires users with good musical skills, and has not been explored in online experiments. Our hypothesis was that an auditory BCI with background music would be preferred by subjects over a similar BCI without background music, without any difference in BCI performance. We introduce a simple paradigm (which does not require musical skill) using percussion instrument sound stimuli and background music, and evaluated it in both offline and online experiments. The results showed that subjects preferred the auditory BCI with background music. Different performance measures did not reveal any significant performance effect when comparing background music vs. no background music. Since the addition of background music does not impair BCI performance but is preferred by users, auditory (and perhaps other) BCIs should consider including it. Our study also indicates that auditory BCIs can be effective even if the auditory channel is simultaneously otherwise engaged.

  14. Functional Connectivity between Face-Movement and Speech-Intelligibility Areas during Auditory-Only Speech Perception

    PubMed Central

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers’ voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker’s face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas. PMID:24466026

  15. Sensory Intelligence for Extraction of an Abstract Auditory Rule: A Cross-Linguistic Study.

    PubMed

    Guo, Xiao-Tao; Wang, Xiao-Dong; Liang, Xiu-Yuan; Wang, Ming; Chen, Lin

    2018-02-21

    In a complex linguistic environment, while speech sounds can greatly vary, some shared features are often invariant. These invariant features constitute so-called abstract auditory rules. Our previous study has shown that with auditory sensory intelligence, the human brain can automatically extract the abstract auditory rules in the speech sound stream, presumably serving as the neural basis for speech comprehension. However, whether the sensory intelligence for extraction of abstract auditory rules in speech is inherent or experience-dependent remains unclear. To address this issue, we constructed a complex speech sound stream using auditory materials in Mandarin Chinese, in which syllables had a flat lexical tone but differed in other acoustic features to form an abstract auditory rule. This rule was occasionally and randomly violated by the syllables with the rising, dipping or falling tone. We found that both Chinese and foreign speakers detected the violations of the abstract auditory rule in the speech sound stream at a pre-attentive stage, as revealed by the whole-head recordings of mismatch negativity (MMN) in a passive paradigm. However, MMNs peaked earlier in Chinese speakers than in foreign speakers. Furthermore, Chinese speakers showed different MMN peak latencies for the three deviant types, which paralleled recognition points. These findings indicate that the sensory intelligence for extraction of abstract auditory rules in speech sounds is innate but shaped by language experience. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.

  16. Single electrode micro-stimulation of rat auditory cortex: an evaluation of behavioral performance.

    PubMed

    Rousche, Patrick J; Otto, Kevin J; Reilly, Mark P; Kipke, Daryl R

    2003-05-01

    A combination of electrophysiological mapping, behavioral analysis and cortical micro-stimulation was used to explore the interrelation between the auditory cortex and behavior in the adult rat. Auditory discriminations were evaluated in eight rats trained to discriminate the presence or absence of a 75 dB pure tone stimulus. A probe trial technique was used to obtain intensity generalization gradients that described response probabilities to mid-level tones between 0 and 75 dB. The same rats were then chronically implanted in the auditory cortex with a 16 or 32 channel tungsten microwire electrode array. Implanted animals were then trained to discriminate the presence of single electrode micro-stimulation of magnitude 90 microA (22.5 nC/phase). Intensity generalization gradients were created to obtain the response probabilities to mid-level current magnitudes ranging from 0 to 90 microA on 36 different electrodes in six of the eight rats. The 50% point (the current level resulting in 50% detections) varied from 16.7 to 69.2 microA, with an overall mean of 42.4 (+/-8.1) microA across all single electrodes. Cortical micro-stimulation induced sensory-evoked behavior with characteristics similar to those of normal auditory stimuli. The results highlight the importance of the auditory cortex in a discrimination task and suggest that micro-stimulation of the auditory cortex might be an effective means of graded transfer of auditory information directly to the brain as part of a cortical auditory prosthesis.
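The 50% point of an intensity generalization gradient can be read off by interpolating the first crossing of the 0.5 response probability. The gradient below is a hypothetical example spanning the study's 0-90 microA probe range, not its actual data:

```python
# Hypothetical generalization gradient: probe current (microA) vs. detection probability.
currents = [0, 15, 30, 45, 60, 75, 90]
p_detect = [0.02, 0.10, 0.30, 0.55, 0.80, 0.92, 0.97]

def fifty_percent_point(x, p, criterion=0.5):
    """Linearly interpolate the first crossing of the criterion probability."""
    for (x0, p0), (x1, p1) in zip(zip(x, p), zip(x[1:], p[1:])):
        if p0 < criterion <= p1:
            return x0 + (criterion - p0) * (x1 - x0) / (p1 - p0)
    raise ValueError("gradient never crosses the criterion")

print(round(fifty_percent_point(currents, p_detect), 1))  # -> 42.0
```

In practice a sigmoid (e.g. logistic) fit is often used instead of piecewise-linear interpolation; the crossing-point idea is the same.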

  17. Symmetry and asymmetry in the human brain

    NASA Astrophysics Data System (ADS)

    Hugdahl, Kenneth

    2005-10-01

    Structural and functional asymmetry in the human brain and nervous system is reviewed in a historical perspective, focusing on the pioneering work of Broca, Wernicke, Sperry, and Geschwind. Structural and functional asymmetry is exemplified from work done in our laboratory on auditory laterality using an empirical procedure called dichotic listening. This also involves different ways of validating the dichotic listening procedure against both invasive and non-invasive techniques, including PET and fMRI blood flow recordings. A major argument is that the human brain shows a substantial interaction between structural, or "bottom-up", asymmetry and cognitive, or "top-down", modulation, through focusing attention on the right or left side of auditory space. These results open up a more dynamic and interactive view of functional brain asymmetry than the traditional static view that the brain is lateralized, or asymmetric, only for specific stimuli and stimulus properties.

  18. Seeing the Song: Left Auditory Structures May Track Auditory-Visual Dynamic Alignment

    PubMed Central

    Mossbridge, Julia A.; Grabowecky, Marcia; Suzuki, Satoru

    2013-01-01

    Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment. PMID:24194873

  19. Auditory Habituation in the Fetus and Neonate: An fMEG Study

    ERIC Educational Resources Information Center

    Muenssinger, Jana; Matuz, Tamara; Schleger, Franziska; Kiefer-Schmidt, Isabelle; Goelz, Rangmar; Wacker-Gussmann, Annette; Birbaumer, Niels; Preissl, Hubert

    2013-01-01

    Habituation--the most basic form of learning--is used to evaluate central nervous system (CNS) maturation and to detect abnormalities in fetal brain development. In the current study, habituation, stimulus specificity and dishabituation of auditory evoked responses were measured in fetuses and newborns using fetal magnetoencephalography (fMEG). An…

  20. Restoration of multichannel microwave radiometric images

    NASA Technical Reports Server (NTRS)

    Chin, R. T.; Yeh, C. L.; Olson, W. S.

    1983-01-01

    A constrained iterative image restoration method is applied to multichannel diffraction-limited imagery. This method is based on the Gerchberg-Papoulis algorithm utilizing incomplete information and partial constraints. The procedure is described in terms of orthogonal projection operators that project iteratively onto two prescribed subspaces. Some of its properties and limitations are also presented. The selection of appropriate constraints was emphasized in a practical application. Multichannel microwave images, each having different spatial resolution, were restored to a common highest resolution to demonstrate the effectiveness of the method. Both noise-free and noisy images were used in this investigation.
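The alternating-projection idea behind the Gerchberg-Papoulis method can be illustrated in one dimension: iterate between a projection enforcing a band limit and a projection enforcing the known samples. A minimal numpy sketch under assumed parameters (signal length, band limit, and sampling mask are all illustrative, not the paper's 2-D microwave setting):

```python
import numpy as np

def band_limit(x, cutoff_bins):
    """Project onto the subspace of signals band-limited to cutoff_bins."""
    X = np.fft.rfft(x)
    X[cutoff_bins:] = 0.0
    return np.fft.irfft(X, n=x.size)

def gerchberg_papoulis(observed, known_mask, cutoff_bins, n_iter=200):
    """Alternate the band-limit and data-consistency projections."""
    x = observed * known_mask                      # start from known samples
    for _ in range(n_iter):
        x = band_limit(x, cutoff_bins)             # projection 1: band limit
        x[known_mask] = observed[known_mask]       # projection 2: known data
    return band_limit(x, cutoff_bins)

# band-limited test signal with roughly half its samples missing
n = 128
t = np.arange(n)
truth = np.sin(2 * np.pi * 3 * t / n) + 0.5 * np.cos(2 * np.pi * 5 * t / n)
rng = np.random.default_rng(1)
mask = rng.random(n) < 0.5
restored = gerchberg_papoulis(truth, mask, cutoff_bins=8)
```

With enough known samples of a genuinely band-limited signal, the iteration converges to the unique reconstruction consistent with both constraints.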

  1. Auditory mismatch impairments are characterized by core neural dysfunctions in schizophrenia

    PubMed Central

    Gaebler, Arnim Johannes; Mathiak, Klaus; Koten, Jan Willem; König, Andrea Anna; Koush, Yury; Weyer, David; Depner, Conny; Matentzoglu, Simeon; Edgar, James Christopher; Willmes, Klaus; Zvyagintsev, Mikhail

    2015-01-01

    Major theories on the neural basis of schizophrenic core symptoms highlight aberrant salience network activity (insula and anterior cingulate cortex), prefrontal hypoactivation, sensory processing deficits as well as an impaired connectivity between temporal and prefrontal cortices. The mismatch negativity is a potential biomarker of schizophrenia and its reduction might be a consequence of each of these mechanisms. In contrast to previous electroencephalographic studies, functional magnetic resonance imaging may disentangle the involved brain networks at high spatial resolution and determine contributions from localized brain responses and functional connectivity to the schizophrenic impairments. Twenty-four patients and 24 matched control subjects underwent functional magnetic resonance imaging during an optimized auditory mismatch task. Haemodynamic responses and functional connectivity were compared between groups. These data sets further entered a diagnostic classification analysis to assess impairments on the individual patient level. In the control group, mismatch responses were detected in the auditory cortex, prefrontal cortex and the salience network (insula and anterior cingulate cortex). Furthermore, mismatch processing was associated with a deactivation of the visual system and the dorsal attention network, indicating a shift of resources from the visual to the auditory domain. The patients exhibited reduced activation in all of the respective systems (right auditory cortex, prefrontal cortex, and the salience network) as well as reduced deactivation of the visual system and the dorsal attention network. Group differences were most prominent in the anterior cingulate cortex and adjacent prefrontal areas. The latter regions also exhibited a reduced functional connectivity with the auditory cortex in the patients. In the classification analysis, haemodynamic responses yielded a maximal accuracy of 83% based on four features; functional connectivity

  2. Effects of an NMDA antagonist on the auditory mismatch negativity response to transcranial direct current stimulation.

    PubMed

    Impey, Danielle; de la Salle, Sara; Baddeley, Ashley; Knott, Verner

    2017-05-01

    Transcranial direct current stimulation (tDCS) is a non-invasive form of brain stimulation which uses a weak constant current to alter cortical excitability and activity temporarily. tDCS-induced increases in neuronal excitability and performance improvements have been observed following anodal stimulation of brain regions associated with visual and motor functions, but relatively little research has been conducted with respect to auditory processing. Recently, pilot study results indicate that anodal tDCS can increase auditory deviance detection, whereas cathodal tDCS decreases auditory processing, as measured by a brain-based event-related potential (ERP), mismatch negativity (MMN). As evidence has shown that tDCS lasting effects may be dependent on N-methyl-D-aspartate (NMDA) receptor activity, the current study investigated the use of dextromethorphan (DMO), an NMDA antagonist, to assess possible modulation of tDCS's effects on both MMN and working memory performance. The study, conducted in 12 healthy volunteers, involved four laboratory test sessions within a randomised, placebo and sham-controlled crossover design that compared pre- and post-anodal tDCS over the auditory cortex (2 mA for 20 minutes to excite cortical activity temporarily and locally) and sham stimulation (i.e. device is turned off) during both DMO (50 mL) and placebo administration. Anodal tDCS increased MMN amplitudes with placebo administration. Significant increases were not seen with sham stimulation or with anodal stimulation during DMO administration. With sham stimulation (i.e. no stimulation), DMO decreased MMN amplitudes. Findings from this study contribute to the understanding of underlying neurobiological mechanisms mediating tDCS sensory and memory improvements.
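Mismatch negativity of the kind measured here is conventionally taken from the deviant-minus-standard difference wave: the most negative value within a post-stimulus latency window (roughly 100-250 ms). A minimal sketch with synthetic averaged ERPs (window, sampling rate, and waveforms are illustrative assumptions):

```python
import numpy as np

def mmn_amplitude(standard_erp, deviant_erp, times, window=(0.1, 0.25)):
    """MMN amplitude: minimum of the deviant-minus-standard difference
    wave inside the given post-stimulus latency window (seconds)."""
    diff = np.asarray(deviant_erp) - np.asarray(standard_erp)
    in_win = (times >= window[0]) & (times <= window[1])
    return diff[in_win].min()

# synthetic averaged ERPs: the deviant carries an extra negativity near 150 ms
fs = 500.0
times = np.arange(0, 0.4, 1.0 / fs)
standard = 2.0 * np.exp(-((times - 0.10) / 0.02) ** 2)   # early positivity
deviant = standard - 3.0 * np.exp(-((times - 0.15) / 0.03) ** 2)
amp = mmn_amplitude(standard, deviant, times)
```

Here the difference wave bottoms out at -3.0 at 150 ms, so the function returns that value; a larger (more negative) amplitude corresponds to stronger deviance detection.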

  3. On wavelet analysis of auditory evoked potentials.

    PubMed

    Bradley, A P; Wilson, W J

    2004-05-01

    To determine a preferred wavelet transform (WT) procedure for multi-resolution analysis (MRA) of auditory evoked potentials (AEP). A number of WT algorithms, mother wavelets, and pre-processing techniques were examined by way of critical theoretical discussion followed by experimental testing of key points using real and simulated auditory brain-stem response (ABR) waveforms. Conclusions from these examinations were then tested on a normative ABR dataset. The results of the various experiments are reported in detail. Optimal AEP WT MRA is most likely to occur when an over-sampled discrete wavelet transformation (DWT) is used, utilising a smooth (regularity ≥ 3) and symmetrical (linear phase) mother wavelet, and a reflection boundary extension policy. This study demonstrates the practical importance of, and explains how to minimize potential artefacts due to, 4 inter-related issues relevant to AEP WT MRA, namely shift variance, phase distortion, reconstruction smoothness, and boundary artefacts.
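The recommendations above (an over-sampled transform, a smooth symmetric mother wavelet, reflection boundary extension) can be illustrated with a single undecimated ("à trous") decomposition level. This sketch uses a generic symmetric B-spline smoothing kernel, not any specific wavelet evaluated in the study:

```python
import numpy as np

def atrous_level(signal, kernel=(1, 4, 6, 4, 1)):
    """One undecimated decomposition level: smooth approximation plus detail.

    The symmetric (linear-phase) kernel avoids phase distortion, reflection
    padding avoids boundary artefacts, and keeping full length (no decimation)
    avoids shift variance. By construction, approx + detail equals the input.
    """
    k = np.asarray(kernel, dtype=float)
    k /= k.sum()
    half = k.size // 2
    padded = np.pad(signal, half, mode="reflect")   # reflection boundary policy
    approx = np.convolve(padded, k, mode="valid")   # smooth approximation
    detail = signal - approx                        # detail coefficients
    return approx, detail

# simulated ABR-like waveform: a fast transient riding on a slow drift
t = np.linspace(0, 10e-3, 500)
abr = (np.sin(2 * np.pi * 900 * t) * np.exp(-((t - 5e-3) / 1e-3) ** 2)
       + 0.2 * t / t.max())
approx, detail = atrous_level(abr)
```

Because no decimation is applied, both outputs keep the input length, and summing them reconstructs the waveform exactly.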

  4. The Development of Auditory Perception in Children Following Auditory Brainstem Implantation

    PubMed Central

    Colletti, Liliana; Shannon, Robert V.; Colletti, Vittorio

    2014-01-01

    Auditory brainstem implants (ABI) can provide useful auditory perception and language development in deaf children who are not able to use a cochlear implant (CI). We prospectively followed up a consecutive group of 64 deaf children for up to 12 years after ABI implantation. The etiology of deafness in these children was: cochlear nerve aplasia in 49, auditory neuropathy in 1, cochlear malformations in 8, bilateral cochlear post-meningitic ossification in 3, NF2 in 2, and bilateral cochlear fractures due to a head injury in 1. Thirty-five children had other congenital non-auditory disabilities. Twenty-two children had previous CIs with no benefit. Fifty-eight children were fitted with the Cochlear 24 ABI device and six with the MedEl ABI device, and all children followed the same rehabilitation program. Auditory perceptual abilities were evaluated on the Categories of Auditory Performance (CAP) scale. No child was lost to follow-up and there were no exclusions from the study. All children showed significant improvement in auditory perception with implant experience. Seven children (11%) were able to achieve the highest score on the CAP test; they were able to converse on the telephone within 3 years of implantation. Twenty children (31.3%) achieved open set speech recognition (CAP score of 5 or greater) and 30 (46.9%) achieved a CAP level of 4 or greater. Of the 29 children without non-auditory disabilities, 18 (62%) achieved a CAP score of 5 or greater with the ABI. All children showed continued improvements in auditory skills over time. The long-term results of ABI implantation reveal significant auditory benefit in most children, and open set auditory recognition in many. PMID:25377987

  5. A neural network model of normal and abnormal auditory information processing.

    PubMed

    Du, X; Jansen, B H

    2011-08-01

    The ability of the brain to attenuate the response to irrelevant sensory stimulation is referred to as sensory gating. A gating deficiency has been reported in schizophrenia. To study the neural mechanisms underlying sensory gating, a neuroanatomically inspired model of auditory information processing has been developed. The mathematical model consists of lumped parameter modules representing the thalamus (TH), the thalamic reticular nucleus (TRN), auditory cortex (AC), and prefrontal cortex (PC). It was found that the membrane potential of the pyramidal cells in the PC module replicated auditory evoked potentials, recorded from the scalp of healthy individuals, in response to pure tones. Also, the model produced substantial attenuation of the response to the second of a pair of identical stimuli, just as seen in actual human experiments. We also tested the viewpoint that schizophrenia is associated with a deficit in prefrontal dopamine (DA) activity, which would lower the excitatory and inhibitory feedback gains in the AC and PC modules. Lowering these gains by less than 10% resulted in model behavior resembling the brain activity seen in schizophrenia patients, and replicated the reported gating deficits. The model suggests that the TRN plays a critical role in sensory gating, with the smaller response to a second tone arising from a reduction in inhibition of TH by the TRN. Copyright © 2011 Elsevier Ltd. All rights reserved.
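The paired-stimulus attenuation the model reproduces is commonly summarized as an S2/S1 gating ratio: the response amplitude to the second stimulus divided by that to the first (smaller means stronger gating). A minimal sketch on a synthetic paired-click ERP (stimulus timing, search window, and amplitudes are illustrative assumptions):

```python
import numpy as np

def gating_ratio(erp, times, s1_onset, s2_onset, search=(0.04, 0.08)):
    """S2/S1 suppression ratio from peak amplitudes in a fixed window
    (seconds after each stimulus onset, e.g. around an early positive peak)."""
    def peak(onset):
        win = (times >= onset + search[0]) & (times <= onset + search[1])
        return erp[win].max()
    return peak(s2_onset) / peak(s1_onset)

# synthetic paired-click response: second response attenuated to 30%
fs = 1000.0
times = np.arange(0, 1.0, 1.0 / fs)

def resp(onset, amp):
    # Gaussian response peaking 50 ms after stimulus onset
    return amp * np.exp(-((times - onset - 0.05) / 0.01) ** 2)

erp = resp(0.0, 10.0) + resp(0.5, 3.0)
ratio = gating_ratio(erp, times, s1_onset=0.0, s2_onset=0.5)
```

For this synthetic trace the ratio comes out at 0.3, i.e. the second response is suppressed to 30% of the first, the pattern the model reproduces for healthy gating.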

  6. Age-related delay in visual and auditory evoked responses is mediated by white- and grey-matter differences.

    PubMed

    Price, D; Tyler, L K; Neto Henriques, R; Campbell, K L; Williams, N; Treder, M S; Taylor, J R; Henson, R N A

    2017-06-09

    Slowing is a common feature of ageing, yet a direct relationship between neural slowing and brain atrophy is yet to be established in healthy humans. We combine magnetoencephalographic (MEG) measures of neural processing speed with magnetic resonance imaging (MRI) measures of white and grey matter in a large population-derived cohort to investigate the relationship between age-related structural differences and visual evoked field (VEF) and auditory evoked field (AEF) delay across two different tasks. Here we use a novel technique to show that VEFs exhibit a constant delay, whereas AEFs exhibit delay that accumulates over time. White-matter (WM) microstructure in the optic radiation partially mediates visual delay, suggesting increased transmission time, whereas grey matter (GM) in auditory cortex partially mediates auditory delay, suggesting less efficient local processing. Our results demonstrate that age has dissociable effects on neural processing speed, and that these effects relate to different types of brain atrophy.

  7. Age-related delay in visual and auditory evoked responses is mediated by white- and grey-matter differences

    PubMed Central

    Price, D.; Tyler, L. K.; Neto Henriques, R.; Campbell, K. L.; Williams, N.; Treder, M.S.; Taylor, J. R.; Brayne, Carol; Bullmore, Edward T.; Calder, Andrew C.; Cusack, Rhodri; Dalgleish, Tim; Duncan, John; Matthews, Fiona E.; Marslen-Wilson, William D.; Rowe, James B.; Shafto, Meredith A.; Cheung, Teresa; Davis, Simon; Geerligs, Linda; Kievit, Rogier; McCarrey, Anna; Mustafa, Abdur; Samu, David; Tsvetanov, Kamen A.; van Belle, Janna; Bates, Lauren; Emery, Tina; Erzinglioglu, Sharon; Gadie, Andrew; Gerbase, Sofia; Georgieva, Stanimira; Hanley, Claire; Parkin, Beth; Troy, David; Auer, Tibor; Correia, Marta; Gao, Lu; Green, Emma; Allen, Jodie; Amery, Gillian; Amunts, Liana; Barcroft, Anne; Castle, Amanda; Dias, Cheryl; Dowrick, Jonathan; Fair, Melissa; Fisher, Hayley; Goulding, Anna; Grewal, Adarsh; Hale, Geoff; Hilton, Andrew; Johnson, Frances; Johnston, Patricia; Kavanagh-Williamson, Thea; Kwasniewska, Magdalena; McMinn, Alison; Norman, Kim; Penrose, Jessica; Roby, Fiona; Rowland, Diane; Sargeant, John; Squire, Maggie; Stevens, Beth; Stoddart, Aldabra; Stone, Cheryl; Thompson, Tracy; Yazlik, Ozlem; Barnes, Dan; Dixon, Marie; Hillman, Jaya; Mitchell, Joanne; Villis, Laura; Henson, R. N. A.

    2017-01-01

    Slowing is a common feature of ageing, yet a direct relationship between neural slowing and brain atrophy is yet to be established in healthy humans. We combine magnetoencephalographic (MEG) measures of neural processing speed with magnetic resonance imaging (MRI) measures of white and grey matter in a large population-derived cohort to investigate the relationship between age-related structural differences and visual evoked field (VEF) and auditory evoked field (AEF) delay across two different tasks. Here we use a novel technique to show that VEFs exhibit a constant delay, whereas AEFs exhibit delay that accumulates over time. White-matter (WM) microstructure in the optic radiation partially mediates visual delay, suggesting increased transmission time, whereas grey matter (GM) in auditory cortex partially mediates auditory delay, suggesting less efficient local processing. Our results demonstrate that age has dissociable effects on neural processing speed, and that these effects relate to different types of brain atrophy. PMID:28598417

  8. Identification of a motor to auditory pathway important for vocal learning

    PubMed Central

    Roberts, Todd F.; Hisey, Erin; Tanaka, Masashi; Kearney, Matthew; Chattree, Gaurav; Yang, Cindy F.; Shah, Nirao M.; Mooney, Richard

    2017-01-01

    Summary Learning to vocalize depends on the ability to adaptively modify the temporal and spectral features of vocal elements. Neurons that convey motor-related signals to the auditory system are theorized to facilitate vocal learning, but the identity and function of such neurons remain unknown. Here we identify a previously unknown neuron type in the songbird brain that transmits vocal motor signals to the auditory cortex. Genetically ablating these neurons in juveniles disrupted their ability to imitate features of an adult tutor’s song. Ablating these neurons in adults had little effect on previously learned songs, but interfered with their ability to adaptively modify the duration of vocal elements and largely prevented the degradation of song’s temporal features normally caused by deafening. These findings identify a motor to auditory circuit essential to vocal imitation and to the adaptive modification of vocal timing. PMID:28504672

  9. Origins of task-specific sensory-independent organization in the visual and auditory brain: neuroscience evidence, open questions and clinical implications.

    PubMed

    Heimler, Benedetta; Striem-Amit, Ella; Amedi, Amir

    2015-12-01

    Evidence of task-specific sensory-independent (TSSI) plasticity from blind and deaf populations has led to a better understanding of brain organization. However, the principles determining the origins of this plasticity remain unclear. We review recent data suggesting that a combination of the connectivity bias and sensitivity to task-distinctive features might account for TSSI plasticity in the sensory cortices as a whole, from the higher-order occipital/temporal cortices to the primary sensory cortices. We discuss current theories and evidence, open questions and related predictions. Finally, given the rapid progress in visual and auditory restoration techniques, we address the crucial need to develop effective rehabilitation approaches for sensory recovery. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  10. Blast-Induced Tinnitus and Elevated Central Auditory and Limbic Activity in Rats: A Manganese-Enhanced MRI and Behavioral Study.

    PubMed

    Ouyang, Jessica; Pace, Edward; Lepczyk, Laura; Kaufman, Michael; Zhang, Jessica; Perrine, Shane A; Zhang, Jinsheng

    2017-07-07

    Blast-induced tinnitus is the number one service-connected disability that currently affects military personnel and veterans. To elucidate its underlying mechanisms, we subjected 13 Sprague Dawley adult rats to unilateral 14 psi blast exposure to induce tinnitus and measured auditory and limbic brain activity using manganese-enhanced MRI (MEMRI). Tinnitus was evaluated with a gap detection acoustic startle reflex paradigm, while hearing status was assessed with prepulse inhibition (PPI) and auditory brainstem responses (ABRs). Both anxiety and cognitive functioning were assessed using elevated plus maze and Morris water maze, respectively. Five weeks after blast exposure, 8 of the 13 blasted rats exhibited chronic tinnitus. While acoustic PPI remained intact and ABR thresholds recovered, the ABR wave P1-N1 amplitude reduction persisted in all blast-exposed rats. No differences in spatial cognition were observed, but blasted rats as a whole exhibited increased anxiety. MEMRI data revealed a bilateral increase in activity along the auditory pathway and in certain limbic regions of rats with tinnitus compared to age-matched controls. Taken together, our data suggest that while blast-induced tinnitus may play a role in auditory and limbic hyperactivity, the non-auditory effects of blast and potential traumatic brain injury may also exert an effect.

  11. MULTICHANNEL ANALYZER

    DOEpatents

    Kelley, G.G.

    1959-11-10

    A multichannel pulse analyzer having several window amplifiers, each amplifier serving one group of channels, with a single fast pulse-lengthener and a single novel interrogation circuit serving all channels is described. A pulse followed too closely timewise by another pulse is disregarded by the interrogation circuit to prevent errors due to pulse pileup. The window amplifiers are connected to the pulse lengthener output, rather than the linear amplifier output, so need not have the fast response characteristic formerly required.
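The interrogation-circuit behavior described (a pulse followed too closely by another is disregarded) can be sketched as a filter over pulse arrival times; the resolving-time value below is an illustrative assumption, not a figure from the patent:

```python
def reject_pileup(arrival_times, resolving_time):
    """Keep only pulses separated from both neighbours by more than
    resolving_time, mimicking an interrogation circuit that discards
    any pulse arriving too close to another (pileup rejection)."""
    kept = []
    n = len(arrival_times)
    for i, t in enumerate(arrival_times):
        ok_prev = i == 0 or t - arrival_times[i - 1] > resolving_time
        ok_next = i == n - 1 or arrival_times[i + 1] - t > resolving_time
        if ok_prev and ok_next:
            kept.append(t)
    return kept

pulses = [0.0, 10.0, 25.0, 10.5, 40.0]
pulses.sort()                                   # arrival order, microseconds
clean = reject_pileup(pulses, resolving_time=2.0)
```

Here the 10.0/10.5 pair falls inside the 2 µs resolving time and both members are discarded, leaving only the well-separated pulses for analysis.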

  12. An Auditory BCI System for Assisting CRS-R Behavioral Assessment in Patients with Disorders of Consciousness

    NASA Astrophysics Data System (ADS)

    Xiao, Jun; Xie, Qiuyou; He, Yanbin; Yu, Tianyou; Lu, Shenglin; Huang, Ningmeng; Yu, Ronghao; Li, Yuanqing

    2016-09-01

    The Coma Recovery Scale-Revised (CRS-R) is a consistent and sensitive behavioral assessment standard for disorders of consciousness (DOC) patients. However, the CRS-R has limitations due to its dependence on behavioral markers, which has led to a high rate of misdiagnosis. Brain-computer interfaces (BCIs), which directly detect brain activities without any behavioral expression, can be used to evaluate a patient’s state. In this study, we explored the application of BCIs in assisting CRS-R assessments of DOC patients. Specifically, an auditory passive EEG-based BCI system with an oddball paradigm was proposed to facilitate the evaluation of one item of the auditory function scale in the CRS-R - the auditory startle. The results obtained from five healthy subjects validated the efficacy of the BCI system. Nineteen DOC patients participated in the CRS-R and BCI assessments, of which three patients exhibited no responses in the CRS-R assessment but were responsive to auditory startle in the BCI assessment. These results revealed that a proportion of DOC patients who have no behavioral responses in the CRS-R assessment can generate neural responses, which can be detected by our BCI system. Therefore, the proposed BCI may provide more sensitive results than the CRS-R and thus assist CRS-R behavioral assessments.

  13. An Auditory BCI System for Assisting CRS-R Behavioral Assessment in Patients with Disorders of Consciousness.

    PubMed

    Xiao, Jun; Xie, Qiuyou; He, Yanbin; Yu, Tianyou; Lu, Shenglin; Huang, Ningmeng; Yu, Ronghao; Li, Yuanqing

    2016-09-13

    The Coma Recovery Scale-Revised (CRS-R) is a consistent and sensitive behavioral assessment standard for disorders of consciousness (DOC) patients. However, the CRS-R has limitations due to its dependence on behavioral markers, which has led to a high rate of misdiagnosis. Brain-computer interfaces (BCIs), which directly detect brain activities without any behavioral expression, can be used to evaluate a patient's state. In this study, we explored the application of BCIs in assisting CRS-R assessments of DOC patients. Specifically, an auditory passive EEG-based BCI system with an oddball paradigm was proposed to facilitate the evaluation of one item of the auditory function scale in the CRS-R - the auditory startle. The results obtained from five healthy subjects validated the efficacy of the BCI system. Nineteen DOC patients participated in the CRS-R and BCI assessments, of which three patients exhibited no responses in the CRS-R assessment but were responsive to auditory startle in the BCI assessment. These results revealed that a proportion of DOC patients who have no behavioral responses in the CRS-R assessment can generate neural responses, which can be detected by our BCI system. Therefore, the proposed BCI may provide more sensitive results than the CRS-R and thus assist CRS-R behavioral assessments.

  14. Highly scalable multichannel mesh electronics for stable chronic brain electrophysiology.

    PubMed

    Fu, Tian-Ming; Hong, Guosong; Viveros, Robert D; Zhou, Tao; Lieber, Charles M

    2017-11-21

    Implantable electrical probes have led to advances in neuroscience, brain-machine interfaces, and treatment of neurological diseases, yet they remain limited in several key aspects. Ideally, an electrical probe should be capable of recording from large numbers of neurons across multiple local circuits and, importantly, allow stable tracking of the evolution of these neurons over the entire course of study. Silicon probes based on microfabrication can yield large-scale, high-density recording but face challenges of chronic gliosis and instability due to mechanical and structural mismatch with the brain. Ultraflexible mesh electronics, on the other hand, have demonstrated negligible chronic immune response and stable long-term brain monitoring at single-neuron level, although, to date, it has been limited to 16 channels. Here, we present a scalable scheme for highly multiplexed mesh electronics probes to bridge the gap between scalability and flexibility, where 32 to 128 channels per probe were implemented while the crucial brain-like structure and mechanics were maintained. Combining this mesh design with multisite injection, we demonstrate stable 128-channel local field potential and single-unit recordings from multiple brain regions in awake restrained mice over 4 mo. In addition, the newly integrated mesh is used to validate stable chronic recordings in freely behaving mice. This scalable scheme for mesh electronics together with demonstrated long-term stability represent important progress toward the realization of ideal implantable electrical probes allowing for mapping and tracking single-neuron level circuit changes associated with learning, aging, and neurodegenerative diseases. Copyright © 2017 the Author(s). Published by PNAS.

  15. Tinnitus and hyperacusis involve hyperactivity and enhanced connectivity in auditory-limbic-arousal-cerebellar network

    PubMed Central

    Chen, Yu-Chen; Li, Xiaowei; Liu, Lijie; Wang, Jian; Lu, Chun-Qiang; Yang, Ming; Jiao, Yun; Zang, Feng-Chao; Radziwon, Kelly; Chen, Guang-Di; Sun, Wei; Krishnan Muthaiah, Vijaya Prakash; Salvi, Richard; Teng, Gao-Jun

    2015-01-01

    Hearing loss often triggers an inescapable buzz (tinnitus) and causes everyday sounds to become intolerably loud (hyperacusis), but exactly where and how this occurs in the brain is unknown. To identify the neural substrate for these debilitating disorders, we induced both tinnitus and hyperacusis with an ototoxic drug (salicylate) and used behavioral, electrophysiological, and functional magnetic resonance imaging (fMRI) techniques to identify the tinnitus–hyperacusis network. Salicylate depressed the neural output of the cochlea, but vigorously amplified sound-evoked neural responses in the amygdala, medial geniculate, and auditory cortex. Resting-state fMRI revealed hyperactivity in an auditory network composed of inferior colliculus, medial geniculate, and auditory cortex with side branches to cerebellum, amygdala, and reticular formation. Functional connectivity revealed enhanced coupling within the auditory network and between segments of the auditory network and the cerebellum, reticular formation, amygdala, and hippocampus. A testable model accounting for distress, arousal, and gating of tinnitus and hyperacusis is proposed. DOI: http://dx.doi.org/10.7554/eLife.06576.001 PMID:25962854

  16. Lifespan Differences in Nonlinear Dynamics during Rest and Auditory Oddball Performance

    ERIC Educational Resources Information Center

    Muller, Viktor; Lindenberger, Ulman

    2012-01-01

    Electroencephalographic recordings (EEG) were used to assess age-associated differences in nonlinear brain dynamics during both rest and auditory oddball performance in children aged 9.0-12.8 years, younger adults, and older adults. We computed nonlinear coupling dynamics and dimensional complexity, and also determined spectral alpha power as an…

  17. Auditory agnosia due to long-term severe hydrocephalus caused by spina bifida - specific auditory pathway versus nonspecific auditory pathway.

    PubMed

    Zhang, Qing; Kaga, Kimitaka; Hayashi, Akimasa

    2011-07-01

    A 27-year-old female showed auditory agnosia after long-term severe hydrocephalus due to congenital spina bifida. After years of hydrocephalus, she gradually suffered from hearing loss in her right ear at 19 years of age, followed by her left ear. During the time when she retained some ability to hear, she experienced severe difficulty in distinguishing verbal, environmental, and musical instrumental sounds. However, her auditory brainstem response and distortion product otoacoustic emissions were largely intact in the left ear. Her bilateral auditory cortices were preserved, as shown by neuroimaging, whereas her auditory radiations were severely damaged owing to progressive hydrocephalus. Although she had a complete bilateral hearing loss, she felt great pleasure when exposed to music. After years of self-training to read lips, she regained fluent ability to communicate. Clinical manifestations of this patient indicate that auditory agnosia can occur after long-term hydrocephalus due to spina bifida; the secondary auditory pathway may play a role in both auditory perception and hearing rehabilitation.

  18. Auditory Spatial Perception: Auditory Localization

    DTIC Science & Technology

    2012-05-01

    Figure 5. Auditory pathways in the central nervous system. LE – left ear, RE – right ear, AN – auditory nerve, CN – cochlear nucleus, TB – trapezoid body, SOC – superior olivary complex, LL – lateral lemniscus, IC – inferior colliculus. Adapted from Aharonson and... Fibers leaving the left and right inner ear connect directly to the synaptic inputs of the cochlear nucleus (CN) on the same (ipsilateral) side of...

  19. Impaired pitch perception and memory in congenital amusia: the deficit starts in the auditory cortex.

    PubMed

    Albouy, Philippe; Mattout, Jérémie; Bouet, Romain; Maby, Emmanuel; Sanchez, Gaëtan; Aguera, Pierre-Emmanuel; Daligault, Sébastien; Delpuech, Claude; Bertrand, Olivier; Caclin, Anne; Tillmann, Barbara

    2013-05-01

    Congenital amusia is a lifelong disorder of music perception and production. The present study investigated the cerebral bases of impaired pitch perception and memory in congenital amusia using behavioural measures, magnetoencephalography and voxel-based morphometry. Congenital amusics and matched control subjects performed two melodic tasks (a melodic contour task and an easier transposition task); they had to indicate whether sequences of six tones (presented in pairs) were the same or different. Behavioural data indicated that in comparison with control participants, amusics' short-term memory was impaired for the melodic contour task, but not for the transposition task. The major finding was that pitch processing and short-term memory deficits can be traced down to amusics' early brain responses during encoding of the melodic information. Temporal and frontal generators of the N100m evoked by each note of the melody were abnormally recruited in the amusic brain. Dynamic causal modelling of the N100m further revealed decreased intrinsic connectivity in both auditory cortices, increased lateral connectivity between auditory cortices as well as a decreased right fronto-temporal backward connectivity in amusics relative to control subjects. Abnormal functioning of this fronto-temporal network was also shown during the retention interval and the retrieval of melodic information. In particular, induced gamma oscillations in right frontal areas were decreased in amusics during the retention interval. Using voxel-based morphometry, we confirmed morphological brain anomalies in terms of white and grey matter concentration in the right inferior frontal gyrus and the right superior temporal gyrus in the amusic brain. The convergence between functional and structural brain differences strengthens the hypothesis of abnormalities in the fronto-temporal pathway of the amusic brain. Our data provide first evidence of altered functioning of the auditory cortices during pitch

  20. Hearing, feeling or seeing a beat recruits a supramodal network in the auditory dorsal stream.

    PubMed

    Araneda, Rodrigo; Renier, Laurent; Ebner-Karestinos, Daniela; Dricot, Laurence; De Volder, Anne G

    2017-06-01

    Hearing a beat recruits a wide neural network that involves the auditory cortex and motor planning regions. Perceiving a beat can potentially be achieved via vision or even touch, but it is currently not clear whether a common neural network underlies beat processing. Here, we used functional magnetic resonance imaging (fMRI) to test to what extent the neural network involved in beat processing is supramodal, that is, is the same in the different sensory modalities. Brain activity changes in 27 healthy volunteers were monitored while they were attending to the same rhythmic sequences (with and without a beat) in audition, vision and the vibrotactile modality. We found a common neural network for beat detection in the three modalities that involved parts of the auditory dorsal pathway. Within this network, only the putamen and the supplementary motor area (SMA) showed specificity to the beat, while the brain activity in the putamen covaried with the beat detection speed. These results highlighted the implication of the auditory dorsal stream in beat detection, confirmed the important role played by the putamen in beat detection and indicated that the neural network for beat detection is mostly supramodal. This constitutes a new example of convergence of the same functional attributes into one centralized representation in the brain. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  1. Hearing voices in the resting brain: A review of intrinsic functional connectivity research on auditory verbal hallucinations

    PubMed Central

    Alderson-Day, Ben; McCarthy-Jones, Simon; Fernyhough, Charles

    2018-01-01

    Resting state networks (RSNs) are thought to reflect the intrinsic functional connectivity of brain regions. Alterations to RSNs have been proposed to underpin various kinds of psychopathology, including the occurrence of auditory verbal hallucinations (AVH). This review outlines the main hypotheses linking AVH and the resting state, and assesses the evidence for alterations to intrinsic connectivity provided by studies of resting fMRI in AVH. The influence of hallucinations during data acquisition, medication confounds, and movement are also considered. Despite a large variety of analytic methods and designs being deployed, it is possible to conclude that resting connectivity in the left temporal lobe in general and left superior temporal gyrus in particular are disrupted in AVH. There is also preliminary evidence of atypical connectivity in the default mode network and its interaction with other RSNs. Recommendations for future research include the adoption of a common analysis protocol to allow for more overlapping datasets and replication of intrinsic functional connectivity alterations. PMID:25956256

  2. Neuronal effects of nicotine during auditory selective attention.

    PubMed

    Smucny, Jason; Olincy, Ann; Eichman, Lindsay S; Tregellas, Jason R

    2015-06-01

    Although the attention-enhancing effects of nicotine have been behaviorally and neurophysiologically well-documented, its localized functional effects during selective attention are poorly understood. In this study, we examined the neuronal effects of nicotine during auditory selective attention in healthy human nonsmokers. We hypothesized that we would observe significant effects of nicotine in attention-associated brain areas, driven by nicotine-induced increases in activity as a function of increasing task demands. A single-blind, prospective, randomized crossover design was used to examine neuronal response associated with a go/no-go task after 7 mg nicotine or placebo patch administration in 20 individuals who underwent functional magnetic resonance imaging at 3T. The task design included two levels of difficulty (ordered vs. random stimuli) and two levels of auditory distraction (silence vs. noise). Significant treatment × difficulty × distraction interaction effects on neuronal response were observed in the hippocampus, ventral parietal cortex, and anterior cingulate. In contrast to our hypothesis, U-shaped and inverted-U-shaped dependencies were observed between the effects of nicotine on response and task demands, depending on the brain area. These results suggest that nicotine may differentially affect neuronal response depending on task conditions. These results have important theoretical implications for understanding how cholinergic tone may influence the neurobiology of selective attention.

  3. Determination of optimum "multi-channel surface wave method" field parameters.

    DOT National Transportation Integrated Search

    2012-12-01

    Multi-channel surface wave methods (especially the multi-channel analyses of surface wave method; MASW) are routinely used to : determine the shear-wave velocity of the subsurface to depths of 100 feet for site classification purposes. Users are awar...

  4. Integrated trimodal SSEP experimental setup for visual, auditory and tactile stimulation

    NASA Astrophysics Data System (ADS)

    Kuś, Rafał; Spustek, Tomasz; Zieleniewska, Magdalena; Duszyk, Anna; Rogowski, Piotr; Suffczyński, Piotr

    2017-12-01

    Objective. Steady-state evoked potentials (SSEPs), the brain responses to repetitive stimulation, are commonly used in both clinical practice and scientific research. Particular brain mechanisms underlying SSEPs in different modalities (i.e. visual, auditory and tactile) are very complex and still not completely understood. Each response has distinct resonant frequencies and exhibits a particular brain topography. Moreover, the topography can be frequency-dependent, as in the case of auditory potentials. However, to study each modality separately and also to investigate multisensory interactions through multimodal experiments, a proper experimental setup appears to be of critical importance. The aim of this study was to design and evaluate a novel SSEP experimental setup providing a repetitive stimulation in three different modalities (visual, tactile and auditory) with a precise control of stimuli parameters. Results from a pilot study with a stimulation in a particular modality and in two modalities simultaneously prove the feasibility of the device to study the SSEP phenomenon. Approach. We developed a setup of three separate stimulators that allows for a precise generation of repetitive stimuli. Besides sequential stimulation in a particular modality, parallel stimulation in up to three different modalities can be delivered. Stimulus in each modality is characterized by a stimulation frequency and a waveform (sine or square wave). We also present a novel methodology for the analysis of SSEPs. Main results. Apart from constructing the experimental setup, we conducted a pilot study with both sequential and simultaneous stimulation paradigms. EEG signals recorded during this study were analyzed with advanced methodology based on spatial filtering and adaptive approximation, followed by statistical evaluation. Significance. We developed a novel experimental setup for performing SSEP experiments. In this sense our study continues the ongoing research in this field.
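The stimulus specification described above (a stimulation frequency plus a sine or square waveform per modality) can be sketched in code. The sample rate, function name and parameter values below are illustrative assumptions, not details from the paper:

```python
import numpy as np

FS = 1000.0  # assumed output sample rate in Hz (hypothetical)

def stimulus_waveform(freq_hz, duration_s, shape="sine"):
    """Repetitive SSEP drive signal: sine or square wave."""
    t = np.arange(int(round(duration_s * FS))) / FS
    phase = 2.0 * np.pi * freq_hz * t
    if shape == "sine":
        return np.sin(phase)
    return np.sign(np.sin(phase))  # square wave

# e.g. a 7 Hz square-wave drive for a 2 s vibrotactile train
drive = stimulus_waveform(7.0, 2.0, shape="square")
```

In a real setup the waveform would be streamed to each stimulator's DAC; the point here is only that each modality's drive is fully specified by a frequency and a shape.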

  5. VGLUT1 and VGLUT2 mRNA expression in the primate auditory pathway

    PubMed Central

    Hackett, Troy A.; Takahata, Toru; Balaram, Pooja

    2011-01-01

    The vesicular glutamate transporters (VGLUTs) regulate storage and release of glutamate in the brain. In adult animals, the VGLUT1 and VGLUT2 isoforms are widely expressed and differentially distributed, suggesting that neural circuits exhibit distinct modes of glutamate regulation. Studies in rodents suggest that VGLUT1 and VGLUT2 mRNA expression patterns are partly complementary, with VGLUT1 expressed at higher levels in cortex and VGLUT2 prominent subcortically, but with overlapping distributions in some nuclei. In primates, VGLUT gene expression has not been previously studied in any part of the brain. The purposes of the present study were to document the regional expression of VGLUT1 and VGLUT2 mRNA in the auditory pathway through A1 in cortex, and to determine whether their distributions are comparable to rodents. In situ hybridization with antisense riboprobes revealed that VGLUT2 was strongly expressed by neurons in the cerebellum and most major auditory nuclei, including the dorsal and ventral cochlear nuclei, medial and lateral superior olivary nuclei, central nucleus of the inferior colliculus, sagulum, and all divisions of the medial geniculate. VGLUT1 was densely expressed in the hippocampus and ventral cochlear nuclei, and at reduced levels in other auditory nuclei. In auditory cortex, neurons expressing VGLUT1 were widely distributed in layers II – VI of the core, belt and parabelt regions. VGLUT2 was most strongly expressed by neurons in layers IIIb and IV, weakly by neurons in layers II – IIIa, and at very low levels in layers V – VI. The findings indicate that VGLUT2 is strongly expressed by neurons at all levels of the subcortical auditory pathway, and by neurons in the middle layers of cortex, whereas VGLUT1 is strongly expressed by most if not all glutamatergic neurons in auditory cortex and at variable levels among auditory subcortical nuclei. These patterns imply that VGLUT2 is the main vesicular glutamate transporter in subcortical

  6. VGLUT1 and VGLUT2 mRNA expression in the primate auditory pathway.

    PubMed

    Hackett, Troy A; Takahata, Toru; Balaram, Pooja

    2011-04-01

    The vesicular glutamate transporters (VGLUTs) regulate the storage and release of glutamate in the brain. In adult animals, the VGLUT1 and VGLUT2 isoforms are widely expressed and differentially distributed, suggesting that neural circuits exhibit distinct modes of glutamate regulation. Studies in rodents suggest that VGLUT1 and VGLUT2 mRNA expression patterns are partly complementary, with VGLUT1 expressed at higher levels in the cortex and VGLUT2 prominent subcortically, but with overlapping distributions in some nuclei. In primates, VGLUT gene expression has not been previously studied in any part of the brain. The purposes of the present study were to document the regional expression of VGLUT1 and VGLUT2 mRNA in the auditory pathway through A1 in the cortex, and to determine whether their distributions are comparable to rodents. In situ hybridization with antisense riboprobes revealed that VGLUT2 was strongly expressed by neurons in the cerebellum and most major auditory nuclei, including the dorsal and ventral cochlear nuclei, medial and lateral superior olivary nuclei, central nucleus of the inferior colliculus, sagulum, and all divisions of the medial geniculate. VGLUT1 was densely expressed in the hippocampus and ventral cochlear nuclei, and at reduced levels in other auditory nuclei. In the auditory cortex, neurons expressing VGLUT1 were widely distributed in layers II-VI of the core, belt and parabelt regions. VGLUT2 was expressed most strongly by neurons in layers IIIb and IV, weakly by neurons in layers II-IIIa, and at very low levels in layers V-VI. The findings indicate that VGLUT2 is strongly expressed by neurons at all levels of the subcortical auditory pathway, and by neurons in the middle layers of the cortex, whereas VGLUT1 is strongly expressed by most if not all glutamatergic neurons in the auditory cortex and at variable levels among auditory subcortical nuclei. These patterns imply that VGLUT2 is the main vesicular glutamate transporter in

  7. Suppression and facilitation of auditory neurons through coordinated acoustic and midbrain stimulation: investigating a deep brain stimulator for tinnitus

    NASA Astrophysics Data System (ADS)

    Offutt, Sarah J.; Ryan, Kellie J.; Konop, Alexander E.; Lim, Hubert H.

    2014-12-01

    Objective. The inferior colliculus (IC) is the primary processing center of auditory information in the midbrain and is one site of tinnitus-related activity. One potential option for suppressing the tinnitus percept is through deep brain stimulation via the auditory midbrain implant (AMI), which is designed for hearing restoration and is already being implanted in deaf patients who also have tinnitus. However, to assess the feasibility of AMI stimulation for tinnitus treatment we first need to characterize the functional connectivity within the IC. Previous studies have suggested modulatory projections from the dorsal cortex of the IC (ICD) to the central nucleus of the IC (ICC), though the functional properties of these projections need to be determined. Approach. In this study, we investigated the effects of electrical stimulation of the ICD on acoustic-driven activity within the ICC in ketamine-anesthetized guinea pigs. Main Results. We observed that ICD stimulation induces both suppressive and facilitatory changes across ICC that can occur immediately during stimulation and remain after stimulation. Additionally, ICD stimulation paired with broadband noise stimulation at a specific delay can induce greater suppressive than facilitatory effects, especially when stimulating in more rostral and medial ICD locations. Significance. These findings demonstrate that ICD stimulation can induce specific types of plastic changes in ICC activity, which may be relevant for treating tinnitus. By using the AMI with electrode sites positioned within the ICD and the ICC, the modulatory effects of ICD stimulation can be tested directly in tinnitus patients.

  8. Deep brain stimulation of the ventral hippocampus restores deficits in processing of auditory evoked potentials in a rodent developmental disruption model of schizophrenia.

    PubMed

    Ewing, Samuel G; Grace, Anthony A

    2013-02-01

    Existing antipsychotic drugs are most effective at treating the positive symptoms of schizophrenia but their relative efficacy is low and they are associated with considerable side effects. In this study deep brain stimulation of the ventral hippocampus was performed in a rodent model of schizophrenia (MAM-E17) in an attempt to alleviate one set of neurophysiological alterations observed in this disorder. Bipolar stimulating electrodes were fabricated and implanted, bilaterally, into the ventral hippocampus of rats. High frequency stimulation was delivered bilaterally via a custom-made stimulation device and both spectral analysis (power and coherence) of resting state local field potentials and amplitude of auditory evoked potential components during a standard inhibitory gating paradigm were examined. MAM rats exhibited alterations in specific components of the auditory evoked potential in the infralimbic cortex, the core of the nucleus accumbens, mediodorsal thalamic nucleus, and ventral hippocampus in the left hemisphere only. DBS was effective in reversing these evoked deficits in the infralimbic cortex and the mediodorsal thalamic nucleus of MAM-treated rats to levels similar to those observed in control animals. In contrast stimulation did not alter evoked potentials in control rats. No deficits or stimulation-induced alterations were observed in the prelimbic and orbitofrontal cortices, the shell of the nucleus accumbens or ventral tegmental area. These data indicate a normalization of deficits in generating auditory evoked potentials induced by a developmental disruption by acute high frequency, electrical stimulation of the ventral hippocampus. Copyright © 2012 Elsevier B.V. All rights reserved.

  9. Multichannel analysis of surface waves

    USGS Publications Warehouse

    Park, C.B.; Miller, R.D.; Xia, J.

    1999-01-01

    The frequency-dependent properties of Rayleigh-type surface waves can be utilized for imaging and characterizing the shallow subsurface. Most surface-wave analysis relies on the accurate calculation of phase velocities for the horizontally traveling fundamental-mode Rayleigh wave acquired by stepping out a pair of receivers at intervals based on calculated ground roll wavelengths. Interference by coherent source-generated noise inhibits the reliability of shear-wave velocities determined through inversion of the whole wave field. Among these nonplanar, nonfundamental-mode Rayleigh waves (noise) are body waves, scattered and nonsource-generated surface waves, and higher-mode surface waves. The degree to which each of these types of noise contaminates the dispersion curve and, ultimately, the inverted shear-wave velocity profile is dependent on frequency as well as distance from the source. Multichannel recording permits effective identification and isolation of noise according to distinctive trace-to-trace coherency in arrival time and amplitude. An added advantage is the speed and redundancy of the measurement process. Decomposition of a multichannel record into a time variable-frequency format, similar to an uncorrelated Vibroseis record, permits analysis and display of each frequency component in a unique and continuous format. Coherent noise contamination can then be examined and its effects appraised in both frequency and offset space. Separation of frequency components permits real-time maximization of the S/N ratio during acquisition and subsequent processing steps. Linear separation of each ground roll frequency component allows calculation of phase velocities by simply measuring the linear slope of each frequency component. Breaks in coherent surface-wave arrivals, observable on the decomposed record, can be compensated for during acquisition and processing. 
Multichannel recording permits single-measurement surveying of a broad depth range, high levels of
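The final step the abstract describes (calculating phase velocities from the linear slope of each separated frequency component) can be sketched numerically. The offsets and arrival times below are synthetic, and the helper name is an assumption:

```python
import numpy as np

def phase_velocity(offsets_m, arrival_times_s):
    """Phase velocity of one frequency component from the linear
    slope (slowness, in s/m) of its moveout across the spread."""
    slowness, _intercept = np.polyfit(offsets_m, arrival_times_s, 1)
    return 1.0 / slowness  # m/s

# synthetic ground-roll component travelling at 200 m/s
offsets = np.array([5.0, 10.0, 15.0, 20.0, 25.0])   # receiver offsets, m
times = offsets / 200.0                             # linear moveout
v = phase_velocity(offsets, times)
```

Repeating this fit for each frequency component of the decomposed multichannel record yields the dispersion curve (velocity versus frequency) that is subsequently inverted for shear-wave velocity.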

  10. Different neural activities support auditory working memory in musicians and bilinguals.

    PubMed

    Alain, Claude; Khatamian, Yasha; He, Yu; Lee, Yunjo; Moreno, Sylvain; Leung, Ada W S; Bialystok, Ellen

    2018-05-17

    Musical training and bilingualism benefit executive functioning and working memory (WM); however, the brain networks supporting this advantage are not well specified. Here, we used functional magnetic resonance imaging and the n-back task to assess WM for spatial (sound location) and nonspatial (sound category) auditory information in musician monolinguals (musicians), nonmusician bilinguals (bilinguals), and nonmusician monolinguals (controls). Musicians outperformed bilinguals and controls on the nonspatial WM task. Overall, spatial and nonspatial WM were associated with greater activity in dorsal and ventral brain regions, respectively. Increasing WM load yielded similar recruitment of the anterior-posterior attention network in all three groups. In both tasks and both levels of difficulty, musicians showed lower brain activity than controls in the superior frontal gyrus and dorsolateral prefrontal cortex (DLPFC) bilaterally, a finding that may reflect improved and more efficient use of neural resources. Bilinguals showed enhanced activity in language-related areas (i.e., left DLPFC and left supramarginal gyrus) relative to musicians and controls, which could be associated with the need to suppress interference associated with competing semantic activations from multiple languages. These findings indicate that the auditory WM advantage in musicians and bilinguals is mediated by different neural networks specific to each life experience. © 2018 New York Academy of Sciences.

  11. Aging effects on functional auditory and visual processing using fMRI with variable sensory loading.

    PubMed

    Cliff, Michael; Joyce, Dan W; Lamar, Melissa; Dannhauser, Thomas; Tracy, Derek K; Shergill, Sukhwinder S

    2013-05-01

    Traditionally, studies investigating the functional implications of age-related structural brain alterations have focused on higher cognitive processes; by increasing stimulus load, these studies assess behavioral and neurophysiological performance. In order to understand age-related changes in these higher cognitive processes, it is crucial to examine changes in visual and auditory processes that are the gateways to higher cognitive functions. This study provides evidence for age-related functional decline in visual and auditory processing, and regional alterations in functional brain processing, using non-invasive neuroimaging. Using functional magnetic resonance imaging (fMRI), younger (n=11; mean age=31) and older (n=10; mean age=68) adults were imaged while observing flashing checkerboard images (passive visual stimuli) and hearing word lists (passive auditory stimuli) across varying stimulus presentation rates. Younger adults showed greater overall levels of temporal and occipital cortical activation than older adults for both auditory and visual stimuli. The relative change in activity as a function of stimulus presentation rate showed differences between young and older participants. In visual cortex, the older group showed a decrease in fMRI blood oxygen level dependent (BOLD) signal magnitude as stimulus frequency increased, whereas the younger group showed a linear increase. In auditory cortex, the younger group showed a relative increase as a function of word presentation rate, while older participants showed a relatively stable magnitude of fMRI BOLD response across all rates. When analyzing participants across all ages, only the auditory cortical activation showed a continuous, monotonically decreasing BOLD signal magnitude as a function of age. Our preliminary findings show an age-related decline in demand-related, passive early sensory processing. As stimulus demand increases, visual and auditory cortex do not show increases in activity in older

  12. Auditory hallucinations: A review of the ERC “VOICE” project

    PubMed Central

    Hugdahl, Kenneth

    2015-01-01

    In this invited review I provide a selective overview of recent research on brain mechanisms and cognitive processes involved in auditory hallucinations. The review is focused on research carried out in the “VOICE” ERC Advanced Grant Project, funded by the European Research Council, but I also review and discuss the literature in general. Auditory hallucinations are suggested to be perceptual phenomena, with a neuronal origin in the speech perception areas in the temporal lobe. The phenomenology of auditory hallucinations is conceptualized along three domains, or dimensions; a perceptual dimension, experienced as someone speaking to the patient; a cognitive dimension, experienced as an inability to inhibit, or ignore the voices, and an emotional dimension, experienced as the “voices” having primarily a negative, or sinister, emotional tone. I will review cognitive, imaging, and neurochemistry data related to these dimensions, primarily the first two. The reviewed data are summarized in a model that sees auditory hallucinations as initiated from temporal lobe neuronal hyper-activation that draws attentional focus inward, and which is not inhibited due to frontal lobe hypo-activation. It is further suggested that this is maintained through abnormal glutamate and possibly gamma-amino-butyric-acid transmitter mediation, which could point towards new pathways for pharmacological treatment. A final section discusses new methods of acquiring quantitative data on the phenomenology and subjective experience of auditory hallucination that goes beyond standard interview questionnaires, by suggesting an iPhone/iPod app. PMID:26110121

  13. Cerebral processing of auditory stimuli in patients with irritable bowel syndrome

    PubMed Central

    Andresen, Viola; Poellinger, Alexander; Tsrouya, Chedwa; Bach, Dominik; Stroh, Albrecht; Foerschler, Annette; Georgiewa, Petra; Schmidtmann, Marco; van der Voort, Ivo R; Kobelt, Peter; Zimmer, Claus; Wiedenmann, Bertram; Klapp, Burghard F; Monnikes, Hubert

    2006-01-01

    AIM: To determine by brain functional magnetic resonance imaging (fMRI) whether cerebral processing of non-visceral stimuli is altered in irritable bowel syndrome (IBS) patients compared with healthy subjects. To circumvent spinal viscerosomatic convergence mechanisms, we used auditory stimulation, and to identify a possible influence of psychological factors the stimuli differed in their emotional quality. METHODS: In 8 IBS patients and 8 controls, fMRI measurements were performed using a block design of 4 auditory stimuli of different emotional quality (pleasant sounds of chimes, unpleasant peep (2000 Hz), neutral words, and emotional words). A gradient echo T2*-weighted sequence was used for the functional scans. Statistical maps were constructed using the general linear model. RESULTS: To emotional auditory stimuli, IBS patients relative to controls responded with stronger deactivations in a greater variety of emotional processing regions, while the response patterns, unlike in controls, did not differentiate between distressing and pleasant sounds. To neutral auditory stimuli, by contrast, only IBS patients responded with large significant activations. CONCLUSION: Altered cerebral response patterns to auditory stimuli in emotional stimulus-processing regions suggest that altered sensory processing in IBS may not be specific for visceral sensation, but might reflect generalized changes in emotional sensitivity and affective reactivity, possibly associated with the psychological comorbidity often found in IBS patients. PMID:16586541

  14. Multichannel-Hadamard calibration of high-order adaptive optics systems.

    PubMed

    Guo, Youming; Rao, Changhui; Bao, Hua; Zhang, Ang; Zhang, Xuejun; Wei, Kai

    2014-06-02

    We present a novel technique of calibrating the interaction matrix for high-order adaptive optics systems, called the multichannel-Hadamard method. In this method, the deformable mirror actuators are first divided into a series of channels according to their coupling relationship, and then the voltage-oriented Hadamard method is applied to these channels. Taking the 595-element adaptive optics system as an example, the procedure is described in detail. The optimal channel dividing is discussed and tested by numerical simulation. The proposed method is also compared with the voltage-oriented Hadamard-only method and the multichannel-only method by experiments. Results show that the multichannel-Hadamard method can produce significant improvement in interaction matrix measurement.
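The Hadamard half of the scheme can be illustrated with a toy simulation: voltage patterns given by the columns of a Hadamard matrix H are applied instead of single-actuator pokes, and the interaction matrix is recovered using H⁻¹ = Hᵀ/n, which averages the measurement noise down relative to poking one actuator at a time. The channel grouping that distinguishes the authors' method is omitted here, and the matrices below are synthetic assumptions:

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix; n must be a power of 2."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

rng = np.random.default_rng(0)
n_act, n_meas = 8, 16                           # toy actuator / sensor counts
D_true = rng.standard_normal((n_meas, n_act))   # "true" interaction matrix
H = hadamard(n_act)                             # each column: one voltage pattern
noise = 0.01 * rng.standard_normal((n_meas, n_act))
S = D_true @ H + noise                          # measured slopes per pattern
D_est = S @ H.T / n_act                         # recover: H^-1 = H^T / n
```

Because every actuator participates in every pattern at full stroke, each element of `D_est` is an average over n measurements, which is the source of the signal-to-noise advantage of Hadamard calibration.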

  15. Musical training induces functional and structural auditory-motor network plasticity in young adults.

    PubMed

    Li, Qiongling; Wang, Xuetong; Wang, Shaoyi; Xie, Yongqi; Li, Xinwei; Xie, Yachao; Li, Shuyu

    2018-05-01

    Playing music requires a strong coupling of perception and action mediated by multimodal integration of brain regions, which can be described as network connections measured by anatomical and functional correlations between regions. However, the structural and functional connectivities within and between the auditory and sensorimotor networks after long-term musical training remain largely uninvestigated. Here, we compared the structural connectivity (SC) and resting-state functional connectivity (rs-FC) within and between the two networks in 29 novice healthy young adults before and after musical training (piano) with those of another 27 novice participants who were evaluated longitudinally but with no intervention. In addition, a correlation analysis was performed between the changes in FC or SC with practice time in the training group. As expected, participants in the training group showed increased FC within the sensorimotor network and increased FC and SC of the auditory-motor network after musical training. Interestingly, we further found that the changes in FC within the sensorimotor network and SC of the auditory-motor network were positively correlated with practice time. Our results indicate that musical training could induce enhanced local interaction and global integration between musical performance-related regions, which provides insights into the mechanism of brain plasticity in young adults. © 2018 Wiley Periodicals, Inc.

  16. Multi-channel retarding field analyzer for EAST

    NASA Astrophysics Data System (ADS)

    Henkel, M.; Höschen, D.; Liang, Y.; Li, Y.; Liu, S. C.; Nicolai, D.; Sandri, N.; Satheeswaran, G.; Yan, N.; Zhang, H. X.; the EAST team

    2018-05-01

    A multi-channel retarding field analyzer (MC-RFA) comprising two RFA modules and two Langmuir probes was developed to measure the ion and electron temperature profiles within the scrape-off layer, for investigations of the interplay between magnetic topology and plasma transport at the plasma boundary. The MC-RFA probe was designed for the stellarator W7-X, and first measurements were carried out at the tokamak EAST. The probe head allows simultaneous multi-channel measurements of both ion and electron temperature. The measured ion currents can also be used for radial correlation measurements.

  17. Resting-state brain networks revealed by Granger causal connectivity in frogs.

    PubMed

    Xue, Fei; Fang, Guangzhan; Yue, Xizi; Zhao, Ermi; Brauth, Steven E; Tang, Yezhong

    2016-10-15

    Resting-state networks (RSNs) refer to the spontaneous brain activity generated under resting conditions, which maintain the dynamic connectivity of functional brain networks for automatic perception or higher order cognitive functions. Here, Granger causal connectivity analysis (GCCA) was used to explore brain RSNs in the music frog (Babina daunchina) during different behavioral activity phases. The results reveal that a causal network in the frog brain can be identified during the resting state which reflects both brain lateralization and sexual dimorphism. Specifically (1) ascending causal connections from the left mesencephalon to both sides of the telencephalon are significantly higher than those from the right mesencephalon, while the right telencephalon gives rise to the strongest efferent projections among all brain regions; (2) causal connections from the left mesencephalon in females are significantly higher than those in males and (3) these connections are similar during both the high and low behavioral activity phases in this species although almost all electroencephalograph (EEG) spectral bands showed higher power in the high activity phase for all nodes. The functional features of this network match important characteristics of auditory perception in this species. Thus we propose that this causal network maintains auditory perception during the resting state for unexpected auditory inputs as resting-state networks do in other species. These results are also consistent with the idea that females are more sensitive to auditory stimuli than males during the reproductive season. In addition, these results imply that even when not behaviorally active, the frogs remain vigilant for detecting external stimuli. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
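Pairwise Granger causality of the kind used in GCCA compares the residual variance of an autoregressive model of one signal with and without lagged terms of the other: if adding the other signal's past shrinks the residuals, it "Granger-causes" the first. A minimal sketch on synthetic data (the model order, coupling and sample count are arbitrary choices, not the study's settings):

```python
import numpy as np

def _lags(s, p):
    """Matrix of p lagged copies of s, rows aligned with s[p:]."""
    n = len(s)
    return np.column_stack([s[p - 1 - k : n - 1 - k] for k in range(p)])

def granger(x, y, p=2):
    """Granger causality y -> x: log ratio of restricted-to-full
    AR residual sums of squares."""
    target = x[p:]
    ones = np.ones(len(target))
    X_restricted = np.column_stack([_lags(x, p), ones])
    X_full = np.column_stack([_lags(x, p), _lags(y, p), ones])

    def rss(A):
        beta, *_ = np.linalg.lstsq(A, target, rcond=None)
        resid = target - A @ beta
        return resid @ resid

    return np.log(rss(X_restricted) / rss(X_full))

# synthetic pair: y drives x with one sample of lag, not vice versa
rng = np.random.default_rng(0)
n = 2000
y = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + 0.8 * y[t - 1] + rng.standard_normal()

gc_y_to_x = granger(x, y)  # substantially > 0
gc_x_to_y = granger(y, x)  # near zero
```

Applied to EEG channels from the brain regions of interest, the asymmetry of such pairwise values is what yields directed networks like the ascending mesencephalon-to-telencephalon connections reported above.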

  18. Stuttering adults' lack of pre-speech auditory modulation normalizes when speaking with delayed auditory feedback.

    PubMed

    Daliri, Ayoub; Max, Ludo

    2018-02-01

    Auditory modulation during speech movement planning is limited in adults who stutter (AWS), but the functional relevance of the phenomenon itself remains unknown. We investigated for AWS and adults who do not stutter (AWNS) (a) a potential relationship between pre-speech auditory modulation and auditory feedback contributions to speech motor learning and (b) the effect on pre-speech auditory modulation of real-time versus delayed auditory feedback. Experiment I used a sensorimotor adaptation paradigm to estimate auditory-motor speech learning. Using acoustic speech recordings, we quantified subjects' formant frequency adjustments across trials when continually exposed to formant-shifted auditory feedback. In Experiment II, we used electroencephalography to determine the same subjects' extent of pre-speech auditory modulation (reductions in auditory evoked potential N1 amplitude) when probe tones were delivered prior to speaking versus not speaking. To manipulate subjects' ability to monitor real-time feedback, we included speaking conditions with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF). Experiment I showed that auditory-motor learning was limited for AWS versus AWNS, and the extent of learning was negatively correlated with stuttering frequency. Experiment II yielded several key findings: (a) our prior finding of limited pre-speech auditory modulation in AWS was replicated; (b) DAF caused a decrease in auditory modulation for most AWNS but an increase for most AWS; and (c) for AWS, the amount of auditory modulation when speaking with DAF was positively correlated with stuttering frequency. Lastly, AWNS showed no correlation between pre-speech auditory modulation (Experiment II) and extent of auditory-motor learning (Experiment I) whereas AWS showed a negative correlation between these measures. Thus, findings suggest that AWS show deficits in both pre-speech auditory modulation and auditory-motor learning; however, limited pre

  19. An online brain-computer interface based on shifting attention to concurrent streams of auditory stimuli

    PubMed Central

    Hill, N J; Schölkopf, B

    2012-01-01

    We report on the development and online testing of an EEG-based brain-computer interface (BCI) that aims to be usable by completely paralysed users—for whom visual or motor-system-based BCIs may not be suitable, and among whom reports of successful BCI use have so far been very rare. The current approach exploits covert shifts of attention to auditory stimuli in a dichotic-listening stimulus design. To compare the efficacy of event-related potentials (ERPs) and steady-state auditory evoked potentials (SSAEPs), the stimuli were designed such that they elicited both ERPs and SSAEPs simultaneously. Trial-by-trial feedback was provided online, based on subjects’ modulation of N1 and P3 ERP components measured during single 5-second stimulation intervals. All 13 healthy subjects were able to use the BCI, with performance in a binary left/right choice task ranging from 75% to 96% correct across subjects (mean 85%). BCI classification was based on the contrast between stimuli in the attended stream and stimuli in the unattended stream, making use of every stimulus, rather than contrasting frequent standard and rare “oddball” stimuli. SSAEPs were assessed offline: for all subjects, spectral components at the two exactly-known modulation frequencies allowed discrimination of pre-stimulus from stimulus intervals, and of left-only stimuli from right-only stimuli when one side of the dichotic stimulus pair was muted. However, attention-modulation of SSAEPs was not sufficient for single-trial BCI communication, even when the subject’s attention was clearly focused well enough to allow classification of the same trials via ERPs. ERPs clearly provided a superior basis for BCI. The ERP results are a promising step towards the development of a simple-to-use, reliable yes/no communication system for users in the most severely paralysed states, as well as potential attention-monitoring and -training applications outside the context of assistive technology. PMID:22333135
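The attended-versus-unattended contrast underlying the classifier can be illustrated with a toy simulation, assuming (as a deliberate simplification of the study's ERP analysis) that attention enhances a P3-like deflection in the attended stream's averaged response, and classifying each trial by mean amplitude in a P3 window. All waveforms and parameters below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 100                          # toy sampling rate, Hz
t = np.arange(fs) / fs            # one 1 s epoch
p3 = np.exp(-((t - 0.35) ** 2) / (2 * 0.05 ** 2))  # P3-like bump ~350 ms

def simulate_trial(attended):
    """Per-stream averaged responses; the attended stream gains a P3."""
    streams = 0.5 * rng.standard_normal((2, t.size))
    streams[attended] += p3
    return streams

win = (t > 0.25) & (t < 0.45)     # P3 scoring window
n_trials, correct = 40, 0
for _ in range(n_trials):
    attended = int(rng.integers(2))               # 0 = left, 1 = right
    left, right = simulate_trial(attended)
    decision = int(right[win].mean() > left[win].mean())
    correct += decision == attended
accuracy = correct / n_trials
```

The study's online system used trained classifiers over N1 and P3 features rather than a fixed window comparison, but the same principle applies: every stimulus in the attended stream contributes evidence, rather than only rare oddballs.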

  20. An online brain-computer interface based on shifting attention to concurrent streams of auditory stimuli

    NASA Astrophysics Data System (ADS)

    Hill, N. J.; Schölkopf, B.

    2012-04-01

    We report on the development and online testing of an electroencephalogram-based brain-computer interface (BCI) that aims to be usable by completely paralysed users—for whom visual or motor-system-based BCIs may not be suitable, and among whom reports of successful BCI use have so far been very rare. The current approach exploits covert shifts of attention to auditory stimuli in a dichotic-listening stimulus design. To compare the efficacy of event-related potentials (ERPs) and steady-state auditory evoked potentials (SSAEPs), the stimuli were designed such that they elicited both ERPs and SSAEPs simultaneously. Trial-by-trial feedback was provided online, based on subjects' modulation of N1 and P3 ERP components measured during single 5 s stimulation intervals. All 13 healthy subjects were able to use the BCI, with performance in a binary left/right choice task ranging from 75% to 96% correct across subjects (mean 85%). BCI classification was based on the contrast between stimuli in the attended stream and stimuli in the unattended stream, making use of every stimulus, rather than contrasting frequent standard and rare ‘oddball’ stimuli. SSAEPs were assessed offline: for all subjects, spectral components at the two exactly known modulation frequencies allowed discrimination of pre-stimulus from stimulus intervals, and of left-only stimuli from right-only stimuli when one side of the dichotic stimulus pair was muted. However, attention modulation of SSAEPs was not sufficient for single-trial BCI communication, even when the subject's attention was clearly focused well enough to allow classification of the same trials via ERPs. ERPs clearly provided a superior basis for BCI. The ERP results are a promising step towards the development of a simple-to-use, reliable yes/no communication system for users in the most severely paralysed states, as well as potential attention-monitoring and -training applications outside the context of assistive technology.
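The classification principle described above (contrasting responses to attended-stream versus unattended-stream stimuli, using every stimulus) can be illustrated with a minimal synthetic sketch. All parameters below (epoch length, evoked amplitudes, noise level) are invented for illustration and are not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(attended_left, n_stim=10, n_samp=50):
    """Return per-stimulus epochs for the left and right streams.

    Attended-stream stimuli carry a larger evoked deflection
    (amplitudes are arbitrary illustration values)."""
    def stream(attended):
        amp = 2.0 if attended else 0.8
        evoked = amp * np.hanning(n_samp)          # toy ERP shape
        return evoked + rng.normal(0, 1.0, (n_stim, n_samp))
    return stream(attended_left), stream(not attended_left)

def classify(left_epochs, right_epochs):
    """Attended side = stream with the larger mean evoked amplitude."""
    return 'left' if left_epochs.mean() > right_epochs.mean() else 'right'

labels = rng.random(200) < 0.5                     # True = attend left
correct = sum(
    classify(*simulate_trial(lab)) == ('left' if lab else 'right')
    for lab in labels
)
print(f"accuracy: {correct / len(labels):.2f}")
```

With the generous signal-to-noise ratio chosen here, the attended stream is identified on nearly every trial; real single-trial EEG is far noisier, which is why the study pools evidence over all stimuli within each 5 s interval.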

  1. Effect of delayed auditory feedback on stuttering with and without central auditory processing disorders.

    PubMed

    Picoloto, Luana Altran; Cardoso, Ana Cláudia Vieira; Cerqueira, Amanda Venuti; Oliveira, Cristiane Moço Canhetti de

    2017-12-07

    To verify the effect of delayed auditory feedback on the speech fluency of individuals who stutter with and without central auditory processing disorders. The participants were twenty individuals who stutter, aged 7 to 17 years, divided into two groups: a Stuttering Group with Auditory Processing Disorders (SGAPD) of 10 individuals with central auditory processing disorders, and a Stuttering Group (SG) of 10 individuals without such disorders. Procedures comprised fluency assessment under non-altered auditory feedback (NAF) and delayed auditory feedback (DAF), and assessment of stuttering severity and central auditory processing (CAP). Phono Tools software was used to impose a 100-millisecond delay in the auditory feedback. The Wilcoxon signed-rank test was used for the intragroup analysis and the Mann-Whitney test for the intergroup analysis. DAF caused a statistically significant reduction in the SG: in the frequency score of stuttering-like disfluencies on the Stuttering Severity Instrument, in the number of blocks and repetitions of monosyllabic words, and in the duration of stuttering-like disfluencies. Delayed auditory feedback had no statistically significant effect on the fluency of the SGAPD, the individuals who stutter with auditory processing disorders. Thus, the effect of delayed auditory feedback on speech fluency differed between the groups: fluency improved only in the individuals without auditory processing disorder.

  2. Areas activated during naturalistic reading comprehension overlap topological visual, auditory, and somatomotor maps.

    PubMed

    Sood, Mariam R; Sereno, Martin I

    2016-08-01

    Cortical mapping techniques using fMRI have been instrumental in identifying the boundaries of topological (neighbor-preserving) maps in early sensory areas. The presence of topological maps beyond early sensory areas raises the possibility that they might play a significant role in other cognitive systems, and that topological mapping might help to delineate areas involved in higher cognitive processes. In this study, we combine surface-based visual, auditory, and somatomotor mapping methods with a naturalistic reading comprehension task in the same group of subjects to provide a qualitative and quantitative assessment of the cortical overlap between sensory-motor maps in all major sensory modalities, and reading processing regions. Our results suggest that cortical activation during naturalistic reading comprehension overlaps more extensively with topological sensory-motor maps than has been heretofore appreciated. Reading activation in regions adjacent to occipital lobe and inferior parietal lobe almost completely overlaps visual maps, whereas a significant portion of frontal activation for reading in dorsolateral and ventral prefrontal cortex overlaps both visual and auditory maps. Even classical language regions in superior temporal cortex are partially overlapped by topological visual and auditory maps. By contrast, the main overlap with somatomotor maps is restricted to a small region on the anterior bank of the central sulcus near the border between the face and hand representations of M-I. Hum Brain Mapp 37:2784-2810, 2016. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  3. Brain Networks of Novelty-Driven Involuntary and Cued Voluntary Auditory Attention Shifting

    PubMed Central

    Huang, Samantha; Belliveau, John W.; Tengshe, Chinmayi; Ahveninen, Jyrki

    2012-01-01

    In everyday life, we need a capacity to flexibly shift attention between alternative sound sources. However, relatively little work has been done to elucidate the mechanisms of attention shifting in the auditory domain. Here, we used a mixed event-related/sparse-sampling fMRI approach to investigate this essential cognitive function. In each 10-sec trial, subjects were instructed to wait for an auditory “cue” signaling the location where a subsequent “target” sound was likely to be presented. The target was occasionally replaced by an unexpected “novel” sound in the uncued ear, to trigger involuntary attention shifting. To maximize the attention effects, cues, targets, and novels were embedded within dichotic 800-Hz vs. 1500-Hz pure-tone “standard” trains. The sound of clustered fMRI acquisition (starting at t = 7.82 sec) served as a controlled trial-end signal. Our approach revealed notable activation differences between the conditions. Cued voluntary attention shifting activated the superior intraparietal sulcus (IPS), whereas novelty-triggered involuntary orienting activated the inferior IPS and certain subareas of the precuneus. Clearly more widespread activations were observed during voluntary than involuntary orienting in the premotor cortex, including the frontal eye fields. Moreover, we found evidence for a frontoinsular-cingular attentional control network, consisting of the anterior insula, inferior frontal cortex, and medial frontal cortices, which were activated during both target discrimination and voluntary attention shifting. Finally, novels and targets activated much wider areas of superior temporal auditory cortices than shifting cues. PMID:22937153

  4. Top-down and bottom-up modulation of brain structures involved in auditory discrimination.

    PubMed

    Diekhof, Esther K; Biedermann, Franziska; Ruebsamen, Rudolf; Gruber, Oliver

    2009-11-10

    Auditory deviancy detection comprises both automatic and voluntary processing. Here, we investigated the neural correlates of different components of the sensory discrimination process using functional magnetic resonance imaging. Subliminal auditory processing of deviant events that were not detected led to activation in left superior temporal gyrus. On the other hand, both correct detection of deviancy and false alarms activated a frontoparietal network of attentional processing and response selection, i.e. this network was activated regardless of the physical presence of deviant events. Finally, activation in the putamen, anterior cingulate and middle temporal cortex depended on factual stimulus representations and occurred only during correct deviancy detection. These results indicate that sensory discrimination may rely on dynamic bottom-up and top-down interactions.

  5. Activity in the left auditory cortex is associated with individual impulsivity in time discounting.

    PubMed

    Han, Ruokang; Takahashi, Taiki; Miyazaki, Akane; Kadoya, Tomoka; Kato, Shinya; Yokosawa, Koichi

    2015-01-01

    Impulsivity dictates individual decision-making behavior. Therefore, it can reflect consumption behavior and risk of addiction, and thus underlies social activities as well. Neuroscience has been applied to explain social activities; however, the brain function controlling impulsivity has remained unclear. It is known that impulsivity is related to individual time perception, i.e., a person who perceives a certain physical time as being longer is impulsive. Here we show that activity of the left auditory cortex is related to individual impulsivity. Individual impulsivity was evaluated by a self-answered questionnaire in twelve healthy right-handed adults, and auditory-cortex activity in both hemispheres while listening to continuous tones was recorded by magnetoencephalography. Sustained activity of the left auditory cortex was significantly correlated with impulsivity, that is, larger sustained activity indicated stronger impulsivity. The results suggest that the left auditory cortex represents time perception, probably because the area is involved in speech perception, and that it thereby indirectly represents impulsivity.

  6. Modulation of auditory processing during speech movement planning is limited in adults who stutter

    PubMed Central

    Daliri, Ayoub; Max, Ludo

    2015-01-01

    Stuttering is associated with atypical structural and functional connectivity in sensorimotor brain areas, in particular premotor, motor, and auditory regions. It remains unknown, however, which specific mechanisms of speech planning and execution are affected by these neurological abnormalities. To investigate pre-movement sensory modulation, we recorded 12 stuttering and 12 nonstuttering adults’ auditory evoked potentials in response to probe tones presented prior to speech onset in a delayed-response speaking condition vs. no-speaking control conditions (silent reading; seeing nonlinguistic symbols). Findings indicate that, during speech movement planning, the nonstuttering group showed a statistically significant modulation of auditory processing (reduced N1 amplitude) that was not observed in the stuttering group. Thus, the obtained results provide electrophysiological evidence in support of the hypothesis that stuttering is associated with deficiencies in modulating the cortical auditory system during speech movement planning. This specific sensorimotor integration deficiency may contribute to inefficient feedback monitoring and, consequently, speech dysfluencies. PMID:25796060

  7. A role for descending auditory cortical projections in songbird vocal learning

    PubMed Central

    Mandelblat-Cerf, Yael; Las, Liora; Denisenko, Natalia; Fee, Michale S

    2014-01-01

    Many learned motor behaviors are acquired by comparing ongoing behavior with an internal representation of correct performance, rather than using an explicit external reward. For example, juvenile songbirds learn to sing by comparing their song with the memory of a tutor song. At present, the brain regions subserving song evaluation are not known. In this study, we report several findings suggesting that song evaluation involves an avian 'cortical' area previously shown to project to the dopaminergic midbrain and other downstream targets. We find that this ventral portion of the intermediate arcopallium (AIV) receives inputs from auditory cortical areas, and that lesions of AIV result in significant deficits in vocal learning. Additionally, AIV neurons exhibit fast responses to disruptive auditory feedback presented during singing, but not during nonsinging periods. Our findings suggest that auditory cortical areas may guide learning by transmitting song evaluation signals to the dopaminergic midbrain and/or other subcortical targets. DOI: http://dx.doi.org/10.7554/eLife.02152.001 PMID:24935934

  8. State-dependent changes in cortical gain control as measured by auditory evoked responses to varying intensity stimuli.

    PubMed

    Phillips, Derrick J; Schei, Jennifer L; Meighan, Peter C; Rector, David M

    2011-11-01

    Auditory evoked potential (AEP) components correspond to sequential activation of brain structures within the auditory pathway and reveal neural activity during sensory processing. To investigate state-dependent modulation of stimulus intensity response profiles within different brain structures, we assessed AEP components across both stimulus intensity and state. We implanted adult female Sprague-Dawley rats (N = 6) with electrodes to measure EEG, EKG, and EMG. Intermittent auditory stimuli (6-12 s) varying from 50 to 75 dBA were delivered over a 24-h period. Data were parsed into 2-s epochs and scored for wake/sleep state. All AEP components increased in amplitude with increased stimulus intensity during wake. During quiet sleep, however, only the early latency response (ELR) showed this relationship, while the middle latency response (MLR) increased only at the highest 75 dBA intensity, and the late latency response (LLR) showed no significant change across the stimulus intensities tested. During rapid eye movement sleep (REM), both ELR and LLR increased, similar to wake, but MLR was severely attenuated. Stimulation intensity and the corresponding AEP response profile were dependent on both brain structure and sleep state. Lower brain structures maintained stimulus intensity and neural response relationships during sleep. This relationship was not observed in the cortex, implying state-dependent modification of stimulus intensity coding. Since cortical AEP amplitude is not modulated by stimulus intensity during sleep, differences between responses to paired 75/50 dBA stimuli could be used to determine state better than individual intensities.

  9. Interhemispheric coupling between the posterior sylvian regions impacts successful auditory temporal order judgment.

    PubMed

    Bernasconi, Fosco; Grivel, Jeremy; Murray, Micah M; Spierer, Lucas

    2010-07-01

    Accurate perception of the temporal order of sensory events is a prerequisite in numerous functions ranging from language comprehension to motor coordination. We investigated the spatio-temporal brain dynamics of auditory temporal order judgment (aTOJ) using electrical neuroimaging analyses of auditory evoked potentials (AEPs) recorded while participants completed a near-threshold task requiring spatial discrimination of left-right and right-left sound sequences. AEPs to sound pairs modulated topographically as a function of aTOJ accuracy over the 39-77 ms post-stimulus period, indicating the engagement of distinct configurations of brain networks during early auditory processing stages. Source estimations revealed that accurate and inaccurate performance were linked to bilateral posterior sylvian regions (PSR) activity. However, activity within left, but not right, PSR predicted behavioral performance, suggesting that left PSR activity during early encoding phases of pairs of auditory spatial stimuli appears critical for the perception of their order of occurrence. Correlation analyses of source estimations further revealed that activity between left and right PSR was significantly correlated in the inaccurate but not accurate condition, indicating that aTOJ accuracy depends on the functional decoupling between homotopic PSR areas. These results support a model of temporal order processing wherein behaviorally relevant temporal information--i.e. a temporal 'stamp'--is extracted within the early stages of cortical processes within left PSR but critically modulated by inputs from right PSR. We discuss our results with regard to current models of temporal order processing, namely gating and latency mechanisms. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  10. BDNF gene delivery within and beyond templated agarose multi-channel guidance scaffolds enhances peripheral nerve regeneration

    NASA Astrophysics Data System (ADS)

    Gao, Mingyong; Lu, Paul; Lynam, Dan; Bednark, Bridget; Campana, W. Marie; Sakamoto, Jeff; Tuszynski, Mark

    2016-12-01

    Objective. We combined implantation of multi-channel templated agarose scaffolds with growth factor gene delivery to examine whether this combinatorial treatment can enhance peripheral axonal regeneration through long sciatic nerve gaps. Approach. 15 mm long scaffolds were templated into highly organized, strictly linear channels, mimicking the linear organization of natural nerves into fascicles of related function. Scaffolds were filled with syngeneic bone marrow stromal cells (MSCs) secreting the growth factor brain-derived neurotrophic factor (BDNF), and lentiviral vectors expressing BDNF were injected into the sciatic nerve segment distal to the scaffold implantation site. Main results. Twelve weeks after injury, scaffolds supported highly linear regeneration of host axons across the 15 mm lesion gap. The incorporation of BDNF-secreting cells into scaffolds significantly increased axonal regeneration, and additional injection of viral vectors expressing BDNF into the distal segment of the transected nerve significantly enhanced axonal regeneration beyond the lesion. Significance. Combinatorial treatment with multichannel bioengineered scaffolds and distal growth factor delivery significantly improves peripheral nerve repair, rivaling the gold standard of autografts.

  11. Differential responses of primary auditory cortex in autistic spectrum disorder with auditory hypersensitivity.

    PubMed

    Matsuzaki, Junko; Kagitani-Shimono, Kuriko; Goto, Tetsu; Sanefuji, Wakako; Yamamoto, Tomoka; Sakai, Saeko; Uchida, Hiroyuki; Hirata, Masayuki; Mohri, Ikuko; Yorifuji, Shiro; Taniike, Masako

    2012-01-25

    The aim of this study was to investigate the differential responses of the primary auditory cortex to auditory stimuli in autistic spectrum disorder with or without auditory hypersensitivity. Auditory-evoked field values were obtained from 18 boys (nine with and nine without auditory hypersensitivity) with autistic spectrum disorder and 12 age-matched controls. Autistic disorder with hypersensitivity showed significantly more delayed M50/M100 peak latencies than autistic disorder without hypersensitivity or the control. M50 dipole moments in the hypersensitivity group were larger than those in the other two groups [corrected]. M50/M100 peak latencies were correlated with the severity of auditory hypersensitivity; furthermore, severe hypersensitivity induced more behavioral problems. This study indicates that auditory hypersensitivity in autistic spectrum disorder is a characteristic response of the primary auditory cortex, possibly resulting from neurological immaturity or functional abnormalities in this region. © 2012 Wolters Kluwer Health | Lippincott Williams & Wilkins.

  12. Chronic auditory hallucinations in schizophrenic patients: MR analysis of the coincidence between functional and morphologic abnormalities.

    PubMed

    Martí-Bonmatí, Luis; Lull, Juan José; García-Martí, Gracián; Aguilar, Eduardo J; Moratal-Pérez, David; Poyatos, Cecilio; Robles, Montserrat; Sanjuán, Julio

    2007-08-01

    To prospectively evaluate whether functional magnetic resonance (MR) imaging abnormalities associated with auditory emotional stimuli coexist with focal brain reductions in schizophrenic patients with chronic auditory hallucinations. Institutional review board approval was obtained and all participants gave written informed consent. Twenty-one right-handed male patients with schizophrenia and persistent hallucinations (started to hear hallucinations at a mean age of 23 years +/- 10, with 15 years +/- 8 of mean illness duration) and 10 healthy paired participants (same ethnic group [white], age, and education level [secondary school]) were studied. Functional echo-planar T2*-weighted (after both emotional and neutral auditory stimulation) and morphometric three-dimensional gradient-recalled echo T1-weighted MR images were analyzed using Statistical Parametric Mapping (SPM2) software. Brain activation images were obtained by subtracting responses to neutral (nonemotional) words from responses to emotional words. Anatomic differences were explored by optimized voxel-based morphometry. The functional and morphometric MR images were overlaid to depict voxels identified as significant by both techniques. A coincidence map was generated by multiplying the emotion-contrast functional MR map and the volume-decrement morphometric map. Statistical analysis used the general linear model, Student t tests, random effects analyses, and analysis of covariance with a correction for multiple comparisons following the false discovery rate method. Large coinciding brain clusters (P < .005) were found in the left and right middle temporal and superior temporal gyri. Smaller coinciding clusters were found in the left posterior and right anterior cingular gyri, left inferior frontal gyrus, and middle occipital gyrus. The middle and superior temporal and the cingular gyri are closely related to the abnormal neural network involved in the auditory emotional dysfunction seen in schizophrenic patients.
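The coincidence-map construction described (multiplying a thresholded functional map by a thresholded morphometric map, so that only voxels flagged by both analyses survive) can be sketched as follows. The grid size, t-values, and threshold below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy statistical maps over a small voxel grid (t-values; illustration only).
fmri_t = rng.normal(0, 1, (8, 8, 8))
vbm_t = rng.normal(0, 1, (8, 8, 8))
fmri_t[2:4, 2:4, 2:4] += 4.0        # shared "cluster" present in both maps
vbm_t[2:4, 2:4, 2:4] += 4.0

t_thresh = 2.5                       # arbitrary significance cutoff

# Binarize each map, then multiply: a voxel survives only if it is
# suprathreshold in BOTH the functional and the morphometric analysis.
coincidence = (fmri_t > t_thresh).astype(int) * (vbm_t > t_thresh).astype(int)
print("coinciding voxels:", int(coincidence.sum()))
```

The multiplication acts as a voxelwise logical AND, which is why coinciding clusters can only appear where both techniques report an effect.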

  13. Classification of single-trial auditory events using dry-wireless EEG during real and motion simulated flight.

    PubMed

    Callan, Daniel E; Durantin, Gautier; Terzibas, Cengiz

    2015-01-01

    Application of neuro-augmentation technology based on dry-wireless EEG may be considerably beneficial for aviation and space operations because of the inherent dangers involved. In this study we evaluate classification performance of perceptual events using a dry-wireless EEG system during motion-platform-based flight simulation and actual flight in an open-cockpit biplane, to determine if the system can be used in the presence of considerable environmental and physiological artifacts. A passive task involving 200 random auditory presentations of a chirp sound was used for evaluation. The advantage of this auditory task is that it does not interfere with the perceptual motor processes involved with piloting the plane. Classification was based on identifying the presentation of a chirp sound vs. silent periods. Independent component analysis (ICA) and Kalman filtering were assessed for their ability to enhance classification performance by extracting brain activity related to the auditory event from other non-task-related brain activity and artifacts. The results of permutation testing revealed that single trial classification of presence or absence of an auditory event was significantly above chance for all conditions on a novel test set. The best performance could be achieved with both ICA and Kalman filtering relative to no processing: Platform Off (83.4% vs. 78.3%), Platform On (73.1% vs. 71.6%), Biplane Engine Off (81.1% vs. 77.4%), and Biplane Engine On (79.2% vs. 66.1%). This experiment demonstrates that dry-wireless EEG can be used in environments with considerable vibration, wind, acoustic noise, and physiological artifacts and achieve good single trial classification performance that is necessary for future successful application of neuro-augmentation technology based on brain-machine interfaces.
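A minimal sketch of the underlying single-trial discrimination (chirp present vs. silent period) is given below, using simple template matching on synthetic epochs. The study's actual pipeline, including the ICA and Kalman-filtering preprocessing, is not reproduced here, and every signal parameter is invented:

```python
import numpy as np

rng = np.random.default_rng(2)
n_samp = 64
t = np.linspace(0, 1, n_samp)
# Toy evoked response to a chirp (not the study's measured waveform).
chirp_erp = np.sin(2 * np.pi * (2 + 6 * t) * t) * np.hanning(n_samp)

def epoch(has_chirp):
    """One synthetic single-trial epoch: evoked response (if any) + noise."""
    sig = chirp_erp if has_chirp else np.zeros(n_samp)
    return sig + rng.normal(0, 1.0, n_samp)

# Build a template by averaging labeled "training" trials, then classify
# new trials by projecting each epoch onto the template.
train = np.mean([epoch(True) for _ in range(40)], axis=0)

def predict(ep):
    return ep @ train > 0.5 * (train @ train)   # half the template energy

labels = rng.random(200) < 0.5                  # True = chirp presented
preds = [predict(epoch(lab)) for lab in labels]
acc = np.mean([p == l for p, l in zip(preds, labels)])
print(f"single-trial accuracy: {acc:.2f}")
```

Averaging training trials plays the same noise-suppression role that the artifact-removal steps play in the study: anything uncorrelated with the event cancels out of the template.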

  14. A Review of Auditory Prediction and Its Potential Role in Tinnitus Perception.

    PubMed

    Durai, Mithila; O'Keeffe, Mary G; Searchfield, Grant D

    2018-06-01

    The precise mechanisms underlying tinnitus perception and distress are still not fully understood. A recent proposition is that auditory prediction errors and related memory representations may play a role in driving tinnitus perception. It is of interest to further explore this. To obtain a comprehensive narrative synthesis of current research in relation to auditory prediction and its potential role in tinnitus perception and severity. A narrative review methodological framework was followed. The key words Prediction Auditory, Memory Prediction Auditory, Tinnitus AND Memory, Tinnitus AND Prediction in Article Title, Abstract, and Keywords were extensively searched on four databases: PubMed, Scopus, SpringerLink, and PsycINFO. All study types were selected from 2000-2016 (end of 2016) and had the following exclusion criteria applied: minimum age of participants <18, nonhuman participants, and article not available in English. Reference lists of articles were reviewed to identify any further relevant studies. Articles were short-listed based on title relevance. After reading the abstracts and with consensus made between coauthors, a total of 114 studies were selected for charting data. The hierarchical predictive coding model, based on the Bayesian brain hypothesis, attentional modulation, and top-down feedback, serves as the fundamental framework in the current literature for how auditory prediction may occur. Predictions are integral to speech and music processing, as well as in sequential processing and identification of auditory objects during auditory streaming. Although deviant responses are observable from middle latency time ranges, the mismatch negativity (MMN) waveform is the most commonly studied electrophysiological index of auditory irregularity detection. However, limitations may apply when interpreting findings because of the debatable origin of the MMN and its restricted ability to model real-life, more complex auditory phenomena. Cortical oscillatory

  15. Auditory, visual and auditory-visual memory and sequencing performance in typically developing children.

    PubMed

    Pillai, Roshni; Yathiraj, Asha

    2017-09-01

    The study evaluated whether there exists a difference/relation in the way four different memory skills (memory score, sequencing score, memory span, & sequencing span) are processed through the auditory modality, visual modality and combined modalities. Four memory skills were evaluated on 30 typically developing children aged 7 years and 8 years across three modality conditions (auditory, visual, & auditory-visual). Analogous auditory and visual stimuli were presented to evaluate the three modality conditions across the two age groups. The children obtained significantly higher memory scores through the auditory modality compared to the visual modality. Likewise, their memory scores were significantly higher through the auditory-visual modality condition than through the visual modality. However, no effect of modality was observed on the sequencing scores as well as for the memory and the sequencing span. A good agreement was seen between the different modality conditions that were studied (auditory, visual, & auditory-visual) for the different memory skills measures (memory scores, sequencing scores, memory span, & sequencing span). A relatively lower agreement was noted only between the auditory and visual modalities as well as between the visual and auditory-visual modality conditions for the memory scores, measured using Bland-Altman plots. The study highlights the efficacy of using analogous stimuli to assess the auditory, visual as well as combined modalities. The study supports the view that the performance of children on different memory skills was better through the auditory modality compared to the visual modality. Copyright © 2017 Elsevier B.V. All rights reserved.
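Bland-Altman agreement of the kind reported here is computed from paired scores as the bias (mean difference) and the 95% limits of agreement (bias ± 1.96 × SD of the differences). A minimal sketch with hypothetical scores (the numbers are invented, not the study's data):

```python
import numpy as np

def bland_altman(a, b):
    """Return bias and 95% limits of agreement for paired measurements."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)   # sample SD of the paired differences
    return bias, (bias - loa, bias + loa)

# Hypothetical memory scores for the same children in two conditions.
auditory = [18, 20, 17, 22, 19, 21, 18, 20]
visual   = [15, 18, 16, 19, 17, 18, 16, 18]

bias, (lo, hi) = bland_altman(auditory, visual)
print(f"bias={bias:.2f}, limits of agreement=({lo:.2f}, {hi:.2f})")
```

A Bland-Altman plot simply scatters each pair's difference against its mean and draws these three horizontal lines; narrow limits around a small bias indicate good agreement between the two conditions.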

  16. Auditory distance perception in humans: a review of cues, development, neuronal bases, and effects of sensory loss.

    PubMed

    Kolarik, Andrew J; Moore, Brian C J; Zahorik, Pavel; Cirstea, Silvia; Pardhan, Shahina

    2016-02-01

    Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.

  17. Functional connectivity during phonemic and semantic verbal fluency test: a multi-channel near infrared spectroscopy study (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Huang, Chun-Jung; Sun, Chia-Wei; Chou, Po-Han; Chuang, Ching-Cheng

    2016-03-01

    Verbal fluency tests (VFT) are widely used neuropsychological tests of frontal lobe function and have been frequently used in various functional brain mapping studies. There are two versions of the VFT based on the type of cue: the letter fluency task (LFT) and the category fluency task (CFT). However, brain connectivity across the fronto-temporal regions during the VFTs has not been elucidated to date. In this study we hypothesized that different patterns of cortical functional connectivity over bilateral fronto-temporal regions would be observed by means of multi-channel fNIRS in the LFT and the CFT, respectively. Our results from fNIRS (ETG-4000) showed different patterns of brain functional connectivity consistent with these different cognitive requirements. We demonstrate stronger brain functional connectivity over frontal and temporal regions during the LFT than the CFT, in line with previous fNIRS studies of brain activity demonstrating increased frontal and temporal region activation during the LFT and CFT, and more pronounced frontal activation during the LFT.
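Functional connectivity between optode channels is commonly estimated as pairwise Pearson correlation of the channels' hemodynamic time courses. The sketch below uses synthetic signals; the ETG-4000 channel layout and the study's exact connectivity measure are assumptions, not details taken from the abstract:

```python
import numpy as np

rng = np.random.default_rng(3)
n_samp = 500
drive = rng.normal(0, 1, n_samp)          # shared "task" fluctuation

# Four toy fNIRS channels: 0 and 1 share the driving signal (connected),
# 2 and 3 are independent noise (unconnected).
channels = np.stack([
    drive + 0.5 * rng.normal(0, 1, n_samp),
    drive + 0.5 * rng.normal(0, 1, n_samp),
    rng.normal(0, 1, n_samp),
    rng.normal(0, 1, n_samp),
])

conn = np.corrcoef(channels)              # 4x4 Pearson connectivity matrix
print(np.round(conn, 2))
```

Comparing such matrices between conditions (here, LFT vs. CFT) is one standard way to quantify "more" or "less" connectivity over a set of regions.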

  18. Neural correlates of accelerated auditory processing in children engaged in music training.

    PubMed

    Habibi, Assal; Cahn, B Rael; Damasio, Antonio; Damasio, Hanna

    2016-10-01

    Several studies comparing adult musicians and non-musicians have shown that music training is associated with brain differences. It is unknown, however, whether these differences result from lengthy musical training, from pre-existing biological traits, or from social factors favoring musicality. As part of an ongoing 5-year longitudinal study, we investigated the effects of a music training program on the auditory development of children, over the course of two years, beginning at age 6-7. The training was group-based and inspired by El Sistema. We compared the children in the music group with two comparison groups of children of the same socio-economic background, one involved in sports training, another not involved in any systematic training. Prior to participating, children who began training in music did not differ from those in the comparison groups in any of the assessed measures. After two years, we now observe that children in the music group, but not in the two comparison groups, show an enhanced ability to detect changes in tonal environment and an accelerated maturity of auditory processing as measured by cortical auditory evoked potentials to musical notes. Our results suggest that music training may result in stimulus-specific brain changes in school-aged children. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. A novel method of brainstem auditory evoked potentials using complex verbal stimuli.

    PubMed

    Kouni, Sophia N; Koutsojannis, Constantinos; Ziavra, Nausika; Giannopoulos, Sotirios

    2014-08-01

    The click- and tone-evoked auditory brainstem responses are widely used in clinical practice because of their consistency and predictability. More recently, speech-evoked responses have been used to evaluate subcortical processing of complex signals that is not revealed by responses to clicks and tones. Disyllabic stimuli corresponding to familiar words can induce a pattern of voltage fluctuations in the brainstem resulting in a familiar waveform, and they can yield better information about brainstem nuclei along the ascending central auditory pathway. We describe a new method using the disyllabic word "baba," corresponding to the English "daddy," which is common to many languages spanning from West Africa through the Eastern Mediterranean to East Asia. The method was applied to 20 young adults institutionally diagnosed as dyslexic (10 subjects) or mildly dyslexic (10 subjects), who were matched for sex, age, education, hearing sensitivity, and IQ with 20 normal subjects. The absolute peak latencies of the negative wave C and the A-C interpeak latencies elicited by the verbal stimulus "baba" were significantly increased in the dyslexic group compared with the control group. The method is easy to apply and helpful for diagnosing abnormalities affecting the auditory pathway, for identifying subjects with early perceptual and cortical representation abnormalities, and for selecting suitable therapeutic and rehabilitation management.

  20. Mutism and auditory agnosia due to bilateral insular damage--role of the insula in human communication.

    PubMed

    Habib, M; Daquin, G; Milandre, L; Royere, M L; Rey, M; Lanteri, A; Salamon, G; Khalil, R

    1995-03-01

    We report a case of transient mutism and persistent auditory agnosia due to two successive ischemic infarcts mainly involving the insular cortex of both hemispheres. During the 'mutic' period, which lasted about 1 month, the patient did not respond to any auditory stimuli and made no effort to communicate. On follow-up examinations, language competence had reappeared almost intact, but a massive auditory agnosia for non-verbal sounds was observed. From close inspection of the lesion site, as determined with magnetic resonance imaging, and from a study of auditory evoked potentials, it is concluded that bilateral insular damage was crucial to both the expressive and receptive components of the syndrome. The role of the insula in verbal and non-verbal communication is discussed in the light of anatomical descriptions of the pattern of connectivity of the insular cortex.

  1. Early neural disruption and auditory processing outcomes in rodent models: implications for developmental language disability

    PubMed Central

    Fitch, R. Holly; Alexander, Michelle L.; Threlkeld, Steven W.

    2013-01-01

    Most researchers in the field of neural plasticity are familiar with the “Kennard Principle,” which posits a positive relationship between age at brain injury and severity of subsequent deficits (plateauing in adulthood). As an example, a child with left hemispherectomy can recover seemingly normal language, while an adult with focal injury to sub-regions of left temporal and/or frontal cortex can suffer dramatic and permanent language loss. Here we present data regarding the impact of early brain injury in rat models as a function of type and timing, measuring long-term behavioral outcomes via auditory discrimination tasks varying in temporal demand. These tasks were created to model (in rodents) aspects of human sensory processing that may correlate, both developmentally and functionally, with typical and atypical language. We found that bilateral focal lesions to the cortical plate in rats during active neuronal migration led to worse auditory outcomes than comparable lesions induced after cortical migration was complete. Conversely, unilateral hypoxic-ischemic (HI) injuries (similar to those seen in premature infants and term infants with birth complications) led to permanent auditory processing deficits when induced at a neurodevelopmental point comparable to human “term,” but only transient deficits (undetectable in adulthood) when induced in a “preterm” window. Convergent evidence suggests that regardless of when or how disruption of early neural development occurs, the consequences may be particularly deleterious to rapid auditory processing (RAP) outcomes when they trigger developmental alterations that extend into subcortical structures (i.e., lower sensory processing stations). Collective findings hold implications for the study of behavioral outcomes following early brain injury as well as genetic/environmental disruption, and are relevant to our understanding of the neurologic risk factors underlying developmental language disability in

  2. Neuroestrogen signaling in the songbird auditory cortex propagates into a sensorimotor network via an 'interface' nucleus.

    PubMed

    Pawlisch, B A; Remage-Healey, L

    2015-01-22

    Neuromodulators rapidly alter activity of neural circuits and can therefore shape higher order functions, such as sensorimotor integration. Increasing evidence suggests that brain-derived estrogens, such as 17-β-estradiol, can act rapidly to modulate sensory processing. However, less is known about how rapid estrogen signaling can impact downstream circuits. Past studies have demonstrated that estradiol levels increase within the songbird auditory cortex (the caudomedial nidopallium, NCM) during social interactions. Local estradiol signaling enhances the auditory-evoked firing rate of neurons in NCM to a variety of stimuli, while also enhancing the selectivity of auditory-evoked responses of neurons in a downstream sensorimotor brain region, HVC (proper name). Since these two brain regions are not directly connected, we employed dual extracellular recordings in HVC and the upstream nucleus interfacialis of the nidopallium (NIf) during manipulations of estradiol within NCM to better understand the pathway by which estradiol signaling propagates to downstream circuits. NIf has direct input into HVC, passing auditory information into the vocal motor output pathway, and is a possible source of the neural selectivity within HVC. Here, during acute estradiol administration in NCM, NIf neurons showed increases in baseline firing rates and auditory-evoked firing rates to all stimuli. Furthermore, when estradiol synthesis was blocked in NCM, we observed simultaneous decreases in the selectivity of NIf and HVC neurons. These effects were not due to direct estradiol actions because NIf has little to no capability for local estrogen synthesis or estrogen receptors, and these effects were specific to NIf because other neurons immediately surrounding NIf did not show these changes. Our results demonstrate that transsynaptic, rapid fluctuations in neuroestrogens are transmitted into NIf and subsequently HVC, both regions important for sensorimotor integration. Overall, these

  3. Rapid extraction of auditory feature contingencies.

    PubMed

    Bendixen, Alexandra; Prinz, Wolfgang; Horváth, János; Trujillo-Barreto, Nelson J; Schröger, Erich

    2008-07-01

    Contingent relations between sensory events render the environment predictable and thus facilitate adaptive behavior. The human capacity to detect such relations has been comprehensively demonstrated in paradigms in which contingency rules were task-relevant or in which they applied to motor behavior. The extent to which contingencies can also be extracted from events that are unrelated to the current goals of the organism has remained largely unclear. The present study addressed the emergence of contingency-related effects for behaviorally irrelevant auditory stimuli and the cortical areas involved in the processing of such contingency rules. Contingent relations between different features of temporally separate events were embedded in a new dynamic protocol. Participants were presented with the auditory stimulus sequences while their attention was captured by a video. The mismatch negativity (MMN) component of the event-related brain potential (ERP) was employed as an electrophysiological correlate of contingency detection. MMN generators were localized by means of scalp current density (SCD) and primary current density (PCD) analyses with variable resolution electromagnetic tomography (VARETA). Results show that task-irrelevant contingencies can be extracted from about fifteen to twenty successive events conforming to the contingent relation. Topographic and tomographic analyses reveal the involvement of the auditory cortex in the processing of contingency violations. The present data provide evidence for the rapid encoding of complex extrapolative relations in sensory areas. This capacity is of fundamental importance for the organism in its attempt to model the sensory environment outside the focus of attention.

  4. Characterization of the blood-oxygen level-dependent (BOLD) response in cat auditory cortex using high-field fMRI.

    PubMed

    Brown, Trecia A; Joanisse, Marc F; Gati, Joseph S; Hughes, Sarah M; Nixon, Pam L; Menon, Ravi S; Lomber, Stephen G

    2013-01-01

    Much of what is known about the cortical organization for audition in humans draws from studies of auditory cortex in the cat. However, these data build largely on electrophysiological recordings that are highly invasive and provide limited evidence concerning macroscopic patterns of brain activation. Optical imaging, using intrinsic signals or dyes, allows visualization of surface-based activity but is also quite invasive. Functional magnetic resonance imaging (fMRI) overcomes these limitations by providing a large-scale perspective of distributed activity across the brain in a non-invasive manner. The present study used fMRI to characterize stimulus-evoked activity in auditory cortex of the anesthetized (ketamine/isoflurane) cat, focusing specifically on the blood-oxygen-level-dependent (BOLD) signal time course. Functional images were acquired for adult cats in a 7 T MRI scanner. To determine the BOLD signal time course, we presented 1-s broadband noise bursts between widely spaced scan acquisitions at randomized delays (1-12 s in 1-s increments) prior to each scan. Baseline trials in which no stimulus was presented were also acquired. Our results indicate that the BOLD response peaks at about 3.5 s in primary auditory cortex (AI) and at about 4.5 s in non-primary areas (AII, PAF) of cat auditory cortex. The observed peak latency is within the range reported for humans and non-human primates (3-4 s). The time course of hemodynamic activity in cat auditory cortex also occurs on a comparatively shorter scale than in cat visual cortex. The results of this study will provide a foundation for future auditory fMRI studies in the cat to incorporate these hemodynamic response properties into appropriate analyses of cat auditory cortex. Copyright © 2012 Elsevier Inc. All rights reserved.

  5. Stability of auditory discrimination and novelty processing in physiological aging.

    PubMed

    Raggi, Alberto; Tasca, Domenica; Rundo, Francesco; Ferri, Raffaele

    2013-01-01

    Complex higher-order cognitive functions and their possible changes with aging are central objectives of cognitive neuroscience. Event-related potentials (ERPs) allow investigators to probe the earliest stages of information processing. The N100, mismatch negativity (MMN), and P3a are auditory ERP components that reflect automatic sensory discrimination. The aim of the present study was to determine whether N100, MMN, and P3a parameters are stable in healthy aged subjects compared with those of normal young adults. Normal young adults and older participants were assessed using standardized cognitive functional instruments, and their ERPs were obtained with auditory stimulation at two different interstimulus intervals during a passive paradigm. All individuals were within the normal range on cognitive tests. No significant differences were found for any ERP parameter obtained from the two age groups. This study shows that aging is characterized by stability of auditory discrimination and novelty processing. This is important for establishing normative data for the detection of subtle preclinical changes due to abnormal brain aging.

  6. Translational control of auditory imprinting and structural plasticity by eIF2α.

    PubMed

    Batista, Gervasio; Johnson, Jennifer Leigh; Dominguez, Elena; Costa-Mattioli, Mauro; Pena, Jose L

    2016-12-23

    The formation of imprinted memories during a critical period is crucial for vital behaviors, including filial attachment. Yet little is known about the underlying molecular mechanisms. Using a combination of behavior, pharmacology, in vivo surface sensing of translation (SUnSET), and DiOlistic labeling, we found that translational control by the eukaryotic translation initiation factor 2 alpha (eIF2α) bidirectionally regulates auditory but not visual imprinting and the related changes in structural plasticity in chickens. Increased phosphorylation of eIF2α (p-eIF2α) reduces translation rates and spine plasticity, and selectively impairs auditory imprinting. By contrast, inhibition of an eIF2α kinase or blocking the translational program controlled by p-eIF2α enhances auditory imprinting. Importantly, these manipulations are able to reopen the critical period. Thus, we have identified a translational control mechanism that selectively underlies auditory imprinting. Restoring translational control of eIF2α holds promise for rejuvenating adult brain plasticity and restoring learning and memory in a variety of cognitive disorders.

  7. Brain correlates of the orientation of auditory spatial attention onto speaker location in a "cocktail-party" situation.

    PubMed

    Lewald, Jörg; Hanenberg, Christina; Getzmann, Stephan

    2016-10-01

    Successful speech perception in complex auditory scenes with multiple competing speakers requires spatial segregation of auditory streams into perceptually distinct and coherent auditory objects and focusing of attention toward the speaker of interest. Here, we focused on the neural basis of this remarkable capacity of the human auditory system and investigated the spatiotemporal sequence of neural activity within the cortical network engaged in solving the "cocktail-party" problem. Twenty-eight subjects localized a target word in the presence of three competing sound sources. The analysis of the ERPs revealed an anterior contralateral subcomponent of the N2 (N2ac), computed as the difference waveform for targets to the left minus targets to the right. The N2ac peaked at about 500 ms after stimulus onset, and its amplitude was correlated with better localization performance. Cortical source localization for the contrast of left versus right targets at the time of the N2ac revealed a maximum in the region around left superior frontal sulcus and frontal eye field, both of which are known to be involved in processing of auditory spatial information. In addition, a posterior-contralateral late positive subcomponent (LPCpc) occurred at a latency of about 700 ms. Both these subcomponents are potential correlates of allocation of spatial attention to the target under cocktail-party conditions. © 2016 Society for Psychophysiological Research.
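
    The N2ac described above is computed as a difference waveform (the ERP for left targets minus the ERP for right targets), with component latency read off the resulting wave. As a minimal illustration of that kind of computation (not the authors' analysis pipeline; trial data and sampling rate below are invented), one might write:

```python
def erp_average(trials):
    """Grand-average ERP: mean across trials at each sample point."""
    n = len(trials)
    return [sum(samples) / n for samples in zip(*trials)]

def n2ac_difference_wave(left_trials, right_trials):
    """Difference waveform: left-target ERP minus right-target ERP."""
    left, right = erp_average(left_trials), erp_average(right_trials)
    return [l - r for l, r in zip(left, right)]

def peak_latency_ms(wave, fs_hz):
    """Latency (ms) of the largest-magnitude deflection in the wave."""
    idx = max(range(len(wave)), key=lambda i: abs(wave[i]))
    return 1000.0 * idx / fs_hz
```
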

  8. List mode multichannel analyzer

    DOEpatents

    Archer, Daniel E [Livermore, CA; Luke, S John [Pleasanton, CA; Mauger, G Joseph [Livermore, CA; Riot, Vincent J [Berkeley, CA; Knapp, David A [Livermore, CA

    2007-08-07

    A digital list mode multichannel analyzer (MCA) built around a programmable FPGA device for onboard data analysis and on-the-fly modification of system detection/operating parameters. The device is capable of collecting and processing data in very small time bins (<1 millisecond) when used in histogramming mode, or of operating as a list mode MCA.

  9. Auditory Attraction: Activation of Visual Cortex by Music and Sound in Williams Syndrome

    ERIC Educational Resources Information Center

    Thornton-Wells, Tricia A.; Cannistraci, Christopher J.; Anderson, Adam W.; Kim, Chai-Youn; Eapen, Mariam; Gore, John C.; Blake, Randolph; Dykens, Elisabeth M.

    2010-01-01

    Williams syndrome is a genetic neurodevelopmental disorder with a distinctive phenotype, including cognitive-linguistic features, nonsocial anxiety, and a strong attraction to music. We performed functional MRI studies examining brain responses to musical and other types of auditory stimuli in young adults with Williams syndrome and typically…

  10. Decoding the auditory brain with canonical component analysis.

    PubMed

    de Cheveigné, Alain; Wong, Daniel D E; Di Liberto, Giovanni M; Hjortkjær, Jens; Slaney, Malcolm; Lalor, Edmund

    2018-05-15

    The relation between a stimulus and the evoked brain response can shed light on perceptual processes within the brain. Signals derived from this relation can also be harnessed to control external devices for Brain Computer Interface (BCI) applications. While the classic event-related potential (ERP) is appropriate for isolated stimuli, more sophisticated "decoding" strategies are needed to address continuous stimuli such as speech, music or environmental sounds. Here we describe an approach based on Canonical Correlation Analysis (CCA) that finds the optimal transform to apply to both the stimulus and the response to reveal correlations between the two. Compared to prior methods based on forward or backward models for stimulus-response mapping, CCA finds significantly higher correlation scores, thus providing increased sensitivity to relatively small effects, and supports classifier schemes that yield higher classification scores. CCA strips the brain response of variance unrelated to the stimulus, and the stimulus representation of variance that does not affect the response, and thus improves observations of the relation between stimulus and response. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
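
    CCA finds linear transforms of the stimulus representation and of the multichannel response that maximize the correlation between the two. As a conceptual sketch only (practical implementations solve a generalized eigenvalue problem rather than grid-searching, and the data here are invented), the first-canonical-pair objective for two-dimensional data can be made explicit by brute force:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx > 0 and sy > 0 else 0.0

def first_canonical_correlation(X, Y, steps=90):
    """Best |correlation| over unit-norm weight vectors for 2-D stimulus
    features X and 2-D response channels Y (lists of (a, b) samples)."""
    best = 0.0
    for i in range(steps):
        wx = (math.cos(math.pi * i / steps), math.sin(math.pi * i / steps))
        u = [wx[0] * a + wx[1] * b for a, b in X]        # stimulus projection
        for j in range(steps):
            wy = (math.cos(math.pi * j / steps), math.sin(math.pi * j / steps))
            v = [wy[0] * a + wy[1] * b for a, b in Y]    # response projection
            best = max(best, abs(pearson(u, v)))
    return best
```

    The grid search makes the objective transparent: CCA reports the correlation that survives after each side has been projected onto its most stimulus-relevant (respectively response-relevant) direction.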

  11. Brain function assessment in different conscious states.

    PubMed

    Ozgoren, Murat; Bayazit, Onur; Kocaaslan, Sibel; Gokmen, Necati; Oniz, Adile

    2010-06-03

    The study of brain function is a major challenge in neuroscience, as the human brain performs dynamic and ever-changing information processing. The problem is compounded in conditions where the brain undergoes major changes, the so-called different conscious states. Even though consciousness is hard to define exactly, there are certain conditions on whose descriptions a consensus has been reached. Sleep and anesthesia are distinct conditions, separable from each other and from wakefulness. The aim of our group has been to tackle brain function by setting up similar research conditions for these three conscious states. To achieve this goal we designed an auditory stimulation battery with changing conditions, recorded during 40-channel EEG polygraph (Nuamps) sessions. The stimuli (modified mismatch, auditory evoked, etc.) were administered both in the operating room and in the sleep laboratory via an Embedded Interactive Stimulus Unit developed in our laboratory. The overall study provided results for three domains of consciousness. To monitor the changes, we incorporated Bispectral Index (BIS) monitoring into both the sleep and anesthesia conditions. The first-stage results provided a basic understanding of these altered states: auditory stimuli were successfully processed in both light and deep sleep stages. Anesthesia produces a sudden change in brain responsiveness; a dosage-dependent anesthetic administration therefore proved useful. Auditory processing was exemplified by targeting the N1 wave, with a thorough analysis from spectrogram to sLORETA. The frequency components were observed to shift across the stages. Both propofol administration and the deeper sleep stages resulted in a decrease of the N1 component. sLORETA revealed similar activity at BA7 in sleep (BIS 70) and at a target propofol concentration of 1.2 microg/mL. The current study

  12. Robust detection of multiple sclerosis lesions from intensity-normalized multi-channel MRI

    NASA Astrophysics Data System (ADS)

    Karpate, Yogesh; Commowick, Olivier; Barillot, Christian

    2015-03-01

    Multiple sclerosis (MS) is a disease with heterogeneous evolution among patients. Quantitative analysis of longitudinal magnetic resonance images (MRI) provides a spatial analysis of the brain tissues, which may lead to the discovery of biomarkers of disease evolution. Better understanding of the disease will improve the discovery of pathogenic mechanisms, allowing for patient-adapted therapeutic strategies. To characterize MS lesions, we propose a novel paradigm for detecting white matter lesions based on a statistical framework. It aims at studying the benefits of using multi-channel MRI to detect statistically significant differences between each individual MS patient and a database of control subjects. The framework consists of two components. First, intensity standardization is conducted to minimize inter-subject intensity differences arising from variability in the acquisition process and across scanners. The intensity normalization maps intensities using parameters obtained from a robust Gaussian Mixture Model (GMM) estimation that is not affected by the presence of MS lesions. The second component compares the multi-channel MRI of each MS patient with an atlas built from the control subjects, thereby allowing us to look for differences in normal-appearing white matter and in and around the lesions of each patient. Experimental results demonstrate that our technique accurately detects significant differences in lesions, consequently improving MS lesion detection.
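
    A much-simplified sketch of the two-stage pipeline described in this record: here a robust median/IQR standardization stands in for the paper's GMM-based normalization, and the patient is compared voxelwise with control statistics via z-scores (the z-threshold of 3 and all data are illustrative, not from the paper):

```python
import statistics

def standardize(intensities):
    """Map intensities to a common scale using robust location/spread
    (median and interquartile range), resistant to lesion outliers."""
    med = statistics.median(intensities)
    q = statistics.quantiles(intensities, n=4)
    iqr = (q[2] - q[0]) or 1.0
    return [(v - med) / iqr for v in intensities]

def lesion_candidates(patient, control_mean, control_std, z_thresh=3.0):
    """Flag voxels whose standardized intensity deviates from controls."""
    flags = []
    for v, m, s in zip(patient, control_mean, control_std):
        z = (v - m) / s if s > 0 else 0.0
        flags.append(abs(z) > z_thresh)
    return flags
```
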

  13. Optical multichannel sensing of skin blood pulsations

    NASA Astrophysics Data System (ADS)

    Spigulis, Janis; Erts, Renars; Kukulis, Indulis; Ozols, Maris; Prieditis, Karlis

    2004-09-01

    Time-resolved detection and analysis of skin back-scattered optical signals (reflection photoplethysmography, or PPG) provide information on skin blood volume pulsations and can serve for cardiovascular assessment. The multi-channel PPG concept was developed and clinically verified in this study. Portable two- and four-channel PPG monitoring devices were designed for real-time data acquisition and processing. The multi-channel devices were successfully applied to cardiovascular fitness tests and to early detection of arterial occlusions in the extremities. The optically measured heartbeat pulse wave propagation made it possible to estimate relative arterial resistances for numerous patients and healthy volunteers.
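
    Pulse wave propagation between two PPG channels can be estimated from the lag that best aligns the two signals; dividing the sensor separation by that transit time gives a propagation velocity. This is a generic sketch, not the authors' algorithm (the sampling rate and distance values are hypothetical):

```python
def best_lag(ref, probe, max_lag):
    """Lag (in samples) at which the probe channel best aligns with the
    reference channel, by maximizing their cross-correlation."""
    def score(lag):
        return sum(ref[i] * probe[i + lag] for i in range(len(ref) - lag))
    return max(range(max_lag + 1), key=score)

def pulse_wave_velocity(lag_samples, fs_hz, distance_m):
    """Velocity (m/s) from transit time between two measurement sites."""
    transit_s = lag_samples / fs_hz
    return distance_m / transit_s
```
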

  14. [Monitoring of brain function].

    PubMed

    Doi, Matsuyuki

    2012-01-01

    Despite being the most important organ, the brain is disproportionately unmonitored in anesthesia settings compared with other systems, such as the cardiorespiratory system. To optimize the level of anesthesia, it is important to quantify the brain activity suppressed by anesthetic agents. Adverse cerebral outcomes remain a continuing problem in patients undergoing various surgical procedures. By providing information on a range of physiologic parameters, brain monitoring may contribute to improved perioperative outcomes. This article addresses various brain monitoring equipment, including the bispectral index (BIS), auditory evoked potentials (AEP), near-infrared spectroscopy (NIRS), transcranial Doppler ultrasonography (TCD), and jugular venous oxygen saturation (SjvO2).

  15. Electrostimulation mapping of comprehension of auditory and visual words.

    PubMed

    Roux, Franck-Emmanuel; Miskin, Krasimir; Durand, Jean-Baptiste; Sacko, Oumar; Réhault, Emilie; Tanova, Rositsa; Démonet, Jean-François

    2015-10-01

    In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2754 cortical sites tested). Language functions were studied with a special focus on comprehension of auditory and visual words and the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of 4 different images controlled for familiarity), and also auditory object (sound recognition) and Token test tasks. Ninety-two auditory comprehension interference sites were observed. We found that the process of auditory comprehension involved a few, fine-grained, sub-centimetre cortical territories. Early stages of speech comprehension seem to relate to two posterior regions in the left superior temporal gyrus. Downstream lexical-semantic speech processing and sound analysis involved 2 pathways, along the anterior part of the left superior temporal gyrus, and posteriorly around the supramarginal and middle temporal gyri. Electrostimulation experimentally dissociated perceptual consciousness attached to speech comprehension. The initial word discrimination process can be considered as an "automatic" stage, the attention feedback not being impaired by stimulation as would be the case at the lexical-semantic stage. Multimodal organization of the superior temporal gyrus was also detected since some neurones could be involved in comprehension of visual material and naming. These findings demonstrate a fine-grained, sub-centimetre cortical representation of speech comprehension processing mainly in the left superior temporal gyrus and are in line with those described in dual stream models of language comprehension processing. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Auditory Spatial Layout

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  17. Transplantation of conditionally immortal auditory neuroblasts to the auditory nerve.

    PubMed

    Sekiya, Tetsuji; Holley, Matthew C; Kojima, Ken; Matsumoto, Masahiro; Helyer, Richard; Ito, Juichi

    2007-04-01

    Cell transplantation is a realistic potential therapy for replacement of auditory sensory neurons and could benefit patients with cochlear implants or acoustic neuropathies. The procedure involves many experimental variables, including the nature and conditioning of donor cells, surgical technique and degree of degeneration in the host tissue. It is essential to control these variables in order to develop cell transplantation techniques effectively. We have characterized a conditionally immortal, mouse cell line suitable for transplantation to the auditory nerve. Structural and physiological markers defined the cells as early auditory neuroblasts that lacked neuronal, voltage-gated sodium or calcium currents and had an undifferentiated morphology. When transplanted into the auditory nerves of rats in vivo, the cells migrated peripherally and centrally and aggregated to form coherent, ectopic 'ganglia'. After 7 days they expressed beta 3-tubulin and adopted a similar morphology to native spiral ganglion neurons. They also developed bipolar projections aligned with the host nerves. There was no evidence for uncontrolled proliferation in vivo and cells survived for at least 63 days. If cells were transplanted with the appropriate surgical technique then the auditory brainstem responses were preserved. We have shown that immortal cell lines can potentially be used in the mammalian ear, that it is possible to differentiate significant numbers of cells within the auditory nerve tract and that surgery and cell injection can be achieved with no damage to the cochlea and with minimal degradation of the auditory brainstem response.

  18. Encoding of Natural Sounds at Multiple Spectral and Temporal Resolutions in the Human Auditory Cortex

    PubMed Central

    Santoro, Roberta; Moerel, Michelle; De Martino, Federico; Goebel, Rainer; Ugurbil, Kamil; Yacoub, Essa; Formisano, Elia

    2014-01-01

    Functional neuroimaging research provides detailed observations of the response patterns that natural sounds (e.g. human voices and speech, animal cries, environmental sounds) evoke in the human brain. The computational and representational mechanisms underlying these observations, however, remain largely unknown. Here we combine high spatial resolution (3 and 7 Tesla) functional magnetic resonance imaging (fMRI) with computational modeling to reveal how natural sounds are represented in the human brain. We compare competing models of sound representations and select the model that most accurately predicts fMRI response patterns to natural sounds. Our results show that the cortical encoding of natural sounds entails the formation of multiple representations of sound spectrograms with different degrees of spectral and temporal resolution. The cortex derives these multi-resolution representations through frequency-specific neural processing channels and through the combined analysis of the spectral and temporal modulations in the spectrogram. Furthermore, our findings suggest that a spectral-temporal resolution trade-off may govern the modulation tuning of neuronal populations throughout the auditory cortex. Specifically, our fMRI results suggest that neuronal populations in posterior/dorsal auditory regions preferably encode coarse spectral information with high temporal precision. Vice-versa, neuronal populations in anterior/ventral auditory regions preferably encode fine-grained spectral information with low temporal precision. We propose that such a multi-resolution analysis may be crucially relevant for flexible and behaviorally-relevant sound processing and may constitute one of the computational underpinnings of functional specialization in auditory cortex. PMID:24391486

  19. The Role of the Auditory Brainstem in Processing Linguistically-Relevant Pitch Patterns

    ERIC Educational Resources Information Center

    Krishnan, Ananthanarayan; Gandour, Jackson T.

    2009-01-01

    Historically, the brainstem has been neglected as a part of the brain involved in language processing. We review recent evidence of language-dependent effects in pitch processing based on comparisons of native vs. nonnative speakers of a tonal language from electrophysiological recordings in the auditory brainstem. We argue that there is enhancing…

  20. Representation of particle motion in the auditory midbrain of a developing anuran.

    PubMed

    Simmons, Andrea Megela

    2015-07-01

    In bullfrog tadpoles, a "deaf period" of lessened responsiveness to the pressure component of sounds, evident during the end of the late larval period, has been identified in the auditory midbrain. But coding of underwater particle motion in the vestibular medulla remains stable over all of larval development, with no evidence of a "deaf period." Neural coding of particle motion in the auditory midbrain was assessed to determine if a "deaf period" for this mode of stimulation exists in this brain area in spite of its absence from the vestibular medulla. Recording sites throughout the developing laminar and medial principal nuclei show relatively stable thresholds to z-axis particle motion, up until the "deaf period." Thresholds then begin to increase from this point up through the rest of metamorphic climax, and significantly fewer responsive sites can be located. The representation of particle motion in the auditory midbrain is less robust during later compared to earlier larval stages, overlapping with but also extending beyond the restricted "deaf period" for pressure stimulation. The decreased functional representation of particle motion in the auditory midbrain throughout metamorphic climax may reflect ongoing neural reorganization required to mediate the transition from underwater to amphibious life.