Science.gov

Sample records for multichannel auditory brain

  1. Multichannel Spatial Auditory Display for Speech Communications

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Erbe, Tom

    1994-01-01

    A spatial auditory display for multiple speech communications was developed at NASA/Ames Research Center. Input is spatialized by the use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four-letter call signs used by launch personnel at NASA against diotic speech babble. Spatial positions at 30-degree azimuth increments were evaluated. The results from eight subjects showed a maximum intelligibility improvement of about 6-7 dB when the signal was spatialized to 60- or 90-degree azimuth positions.
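
    The spatialization step described above amounts to convolving the monaural input with a left-ear and a right-ear FIR filter derived from simplified head-related transfer functions. Below is a minimal Python sketch of that idea; the sampling rate, filter length, and placeholder HRIR arrays are hypothetical stand-ins, not the coefficients used on the Motorola 56001 hardware.

```python
import numpy as np
from scipy.signal import lfilter

def spatialize(mono, hrir_left, hrir_right):
    """Render a mono signal to a binaural pair by FIR filtering with a
    simplified head-related impulse response (HRIR) for each ear."""
    left = lfilter(hrir_left, [1.0], mono)
    right = lfilter(hrir_right, [1.0], mono)
    return np.stack([left, right])

# Placeholder data: one second of noise standing in for speech, and random
# 64-tap HRIRs standing in for measured filters at a 60-degree azimuth.
fs = 16000
mono = np.random.randn(fs)
hrir_l = np.random.randn(64) * 0.1
hrir_r = np.random.randn(64) * 0.1
print(spatialize(mono, hrir_l, hrir_r).shape)   # (2, 16000)
```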

  2. Multichannel spatial auditory display for speech communications

    NASA Technical Reports Server (NTRS)

    Begault, D. R.; Erbe, T.; Wenzel, E. M. (Principal Investigator)

    1994-01-01

    A spatial auditory display for multiple speech communications was developed at NASA/Ames Research Center. Input is spatialized by the use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four-letter call signs used by launch personnel at NASA against diotic speech babble. Spatial positions at 30-degree azimuth increments were evaluated. The results from eight subjects showed a maximum intelligibility improvement of about 6-7 dB when the signal was spatialized to 60- or 90-degree azimuth positions.

  3. Functional development in the infant brain for auditory pitch processing.

    PubMed

    Homae, Fumitaka; Watanabe, Hama; Nakano, Tamami; Taga, Gentaro

    2012-03-01

    Understanding how the developing brain processes auditory information is a critical step toward clarifying infants' perception of speech and music. We have previously reported that the infant brain perceives pitch information in speech sounds. Here, we used multichannel near-infrared spectroscopy to examine whether the infant brain is sensitive to pitch-change information in auditory sequences. Three types of auditory sequences with distinct temporal structures of pitch changes were presented to 3- and 6-month-old infants: a long condition of 12 successive tones forming a chromatic scale (600 ms), a short condition of four successive tones forming a chromatic scale (200 ms), and a random condition of random tone sequences (50 ms per tone). The conditions differed only in the sequential order of the tones, which determines the pitch changes between successive tones. We found that the bilateral temporal regions of infants at both ages showed significant activation under the three conditions. Stimulus-dependent activation was observed in the right temporoparietal region of both infant groups; the 3- and 6-month-old infants showed the most prominent activation under the random and short conditions, respectively. Our findings indicate that the infant brain, which shows functional differentiation and lateralization in auditory-related areas, is capable of responding to more than single tones of pitch information. These results suggest that, in the course of development, the right temporoparietal region of infants increases its sensitivity to auditory sequences whose temporal structures resemble those of syllables in speech sounds. PMID:21488136

  4. Consequences of Broad Auditory Filters for Identification of Multichannel-Compressed Vowels

    ERIC Educational Resources Information Center

    Souza, Pamela; Wright, Richard; Bor, Stephanie

    2012-01-01

    Purpose: In view of previous findings (Bor, Souza, & Wright, 2008) that some listeners are more susceptible to spectral changes from multichannel compression (MCC) than others, this study addressed the extent to which differences in effects of MCC were related to differences in auditory filter width. Method: Listeners were recruited in 3 groups:…

  5. Improving auditory steady-state response detection using independent component analysis on multichannel EEG data.

    PubMed

    Van Dun, Bram; Wouters, Jan; Moonen, Marc

    2007-07-01

    Over the last decade, the detection of auditory steady-state responses (ASSR) has been developed for reliable hearing threshold estimation at audiometric frequencies. Unfortunately, the duration of ASSR measurement can be long, which is impractical for wide-scale clinical application. In this paper, we propose independent component analysis (ICA) as a tool to improve ASSR detection in recorded single-channel as well as multichannel electroencephalogram (EEG) data. We conclude that ICA is able to reduce measurement duration significantly. For a multichannel implementation, near-optimal performance is obtained with five-channel recordings. PMID:17605353
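
    A rough illustration of the approach, assuming a generic ICA implementation rather than the authors' exact pipeline: decompose the multichannel EEG into independent components and keep the component with the strongest spectral peak at the ASSR modulation frequency. The channel count, sampling rate, and 90-Hz modulation frequency below are hypothetical.

```python
import numpy as np
from sklearn.decomposition import FastICA

def assr_snr(x, fs, f_mod, n_noise_bins=10):
    """Power at the modulation frequency divided by the mean power of
    neighbouring FFT bins (a simple signal-to-noise measure)."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, 1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f_mod)))
    noise = np.r_[spec[k - n_noise_bins:k], spec[k + 1:k + 1 + n_noise_bins]]
    return spec[k] / noise.mean()

fs, f_mod = 1000, 90.0                     # hypothetical sampling and AM rate
eeg = np.random.randn(8, 60 * fs)          # placeholder 8-channel recording
sources = FastICA(n_components=8, random_state=0).fit_transform(eeg.T).T
best = max(range(sources.shape[0]), key=lambda i: assr_snr(sources[i], fs, f_mod))
print("component with the strongest ASSR-like peak:", best)
```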

  6. A Brain System for Auditory Working Memory

    PubMed Central

    Joseph, Sabine; Gander, Phillip E.; Barascud, Nicolas; Halpern, Andrea R.; Griffiths, Timothy D.

    2016-01-01

    The brain basis for auditory working memory, the process of actively maintaining sounds in memory over short periods of time, is controversial. Using functional magnetic resonance imaging in human participants, we demonstrate that the maintenance of single tones in memory is associated with activation in auditory cortex. In addition, sustained activation was observed in hippocampus and inferior frontal gyrus. Multivoxel pattern analysis showed that patterns of activity in auditory cortex and left inferior frontal gyrus distinguished the tone that was maintained in memory. Functional connectivity during maintenance was demonstrated between auditory cortex and both the hippocampus and inferior frontal cortex. The data support a system for auditory working memory based on the maintenance of sound-specific representations in auditory cortex by projections from higher-order areas, including the hippocampus and frontal cortex. SIGNIFICANCE STATEMENT In this work, we demonstrate a system for maintaining sound in working memory based on activity in auditory cortex, hippocampus, and frontal cortex, and functional connectivity among them. Specifically, our work makes three advances from the previous work. First, we robustly demonstrate hippocampal involvement in all phases of auditory working memory (encoding, maintenance, and retrieval): the role of hippocampus in working memory is controversial. Second, using a pattern classification technique, we show that activity in the auditory cortex and inferior frontal gyrus is specific to the maintained tones in working memory. Third, we show long-range connectivity of auditory cortex to hippocampus and frontal cortex, which may be responsible for keeping such representations active during working memory maintenance. PMID:27098693
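
    As a generic illustration of the multivoxel pattern analysis logic used above (not the authors' actual pipeline), a linear classifier can be cross-validated on trial-wise voxel patterns from an auditory-cortex region of interest; the data below are synthetic placeholders.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic stand-in: 120 maintenance-period trials, 200 voxels in an
# auditory-cortex ROI, and a weak pattern distinguishing the two tones.
n_trials, n_voxels = 120, 200
y = rng.integers(0, 2, n_trials)            # which tone was maintained
X = rng.normal(0.0, 1.0, (n_trials, n_voxels))
X[y == 1, :20] += 0.5                       # tone-specific activity pattern

acc = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f}")
```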

  7. Auditory pattern perception in 'split brain' patients.

    PubMed

    Musiek, F E; Pinheiro, M L; Wilson, D H

    1980-10-01

    Three "split brain" subjects with normal peripheral hearing were tested on identifying monaurally presented auditory intensity and frequency patterns. One subject was tested before commissurotomy, ten days later, and one year after surgery. Results indicated that sectioning the corpus callosum dramatically affects the ability to verbally report both intensity and frequency patterns. However, the ability of the subjects to correctly "hum" frequency patterns was not impaired. Thus, it appears for a correct verbal report of an auditory pattern, interhemispheric transfer of acoustic information is required, while "humming" the pattern does not. Further application of this finding implicates auditory pattern tasks as as a potentially valuable test for detecting problems of higher auditory processing, particularly those affecting interhemispheric interaction. PMID:7417089

  8. Multi-channel spatial auditory display for speech communications

    NASA Technical Reports Server (NTRS)

    Begault, Durand; Erbe, Tom

    1993-01-01

    A spatial auditory display for multiple speech communications was developed at NASA-Ames Research Center. Input is spatialized by use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four-letter call signs used by launch personnel at NASA, against diotic speech babble. Spatial positions at 30 deg azimuth increments were evaluated. The results from eight subjects showed a maximal intelligibility improvement of about 6 to 7 dB when the signal was spatialized to 60 deg or 90 deg azimuth positions.

  9. Rapid acquisition of auditory subcortical steady state responses using multichannel recordings

    PubMed Central

    Bharadwaj, Hari M.; Shinn-Cunningham, Barbara G.

    2015-01-01

    Objective: Auditory subcortical steady state responses (SSSRs), also known as frequency following responses (FFRs), provide a non-invasive measure of phase-locked neural responses to acoustic and cochlear-induced periodicities. SSSRs have been used both clinically and in basic neurophysiological investigation of auditory function. SSSR data acquisition typically involves thousands of presentations of each stimulus type, sometimes in two polarities, with acquisition times often exceeding an hour per subject. Here, we present a novel approach to reduce the data acquisition times significantly. Methods: Because the sources of the SSSR are deep compared to the primary noise sources, namely background spontaneous cortical activity, the SSSR varies more smoothly over the scalp than the noise. We exploit this property and extract SSSRs efficiently, using multichannel recordings and an eigendecomposition of the complex cross-channel spectral density matrix. Results: Our proposed method yields SNR improvement exceeding a factor of 3 compared to traditional single-channel methods. Conclusions: It is possible to reduce data acquisition times for SSSRs significantly with our approach. Significance: The proposed method allows SSSRs to be recorded for several stimulus conditions within a single session and also makes it possible to acquire both SSSRs and cortical EEG responses without increasing the session length. PMID:24525091
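
    A minimal numpy sketch of the general idea, assuming the response of interest sits at a single known frequency: estimate each channel's complex spectral value at that frequency for every epoch, form the cross-channel spectral density matrix, and use its principal eigenvector as a spatial filter. The epoch count, channel count, and frequencies are hypothetical, and this is not the authors' exact estimator.

```python
import numpy as np

def sssr_spatial_filter(epochs, fs, f0):
    """Principal eigenvector of the complex cross-channel spectral density
    matrix at frequency f0, plus the per-epoch response it extracts.
    epochs: array of shape (n_epochs, n_channels, n_samples)."""
    n_epochs, n_chan, n_samp = epochs.shape
    freqs = np.fft.rfftfreq(n_samp, 1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f0)))
    X = np.fft.rfft(epochs, axis=-1)[:, :, k]            # (epochs, channels)
    csd = (X[:, :, None] * X[:, None, :].conj()).mean(axis=0)
    evals, evecs = np.linalg.eigh(csd)                   # Hermitian CSD matrix
    w = evecs[:, -1]                                     # dominant spatial filter
    return w, X @ w.conj()                               # per-epoch complex response

# Hypothetical data: 500 epochs, 32 channels, 0.4-s epochs, 100-Hz response.
fs, f0 = 10000, 100.0
epochs = np.random.randn(500, 32, 4000)
w, resp = sssr_spatial_filter(epochs, fs, f0)
print(np.abs(resp).mean())
```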

  10. Exploring functional connectivity networks with multichannel brain array coils.

    PubMed

    Anteraper, Sheeba Arnold; Whitfield-Gabrieli, Susan; Keil, Boris; Shannon, Steven; Gabrieli, John D; Triantafyllou, Christina

    2013-01-01

    The use of multichannel array head coils in functional and structural magnetic resonance imaging (MRI) provides increased signal-to-noise ratio (SNR), higher sensitivity, and parallel imaging capabilities. However, their benefits remain to be systematically explored in the context of resting-state functional connectivity MRI (fcMRI). In this study, we compare signal detectability within and between commercially available multichannel brain coils, a 32-channel (32Ch) and a 12-channel (12Ch) coil at 3T, in a high-resolution regime to accurately map resting-state networks. We investigate whether the 32Ch coil can extract and map fcMRI more efficiently and robustly than the 12Ch coil using seed-based and graph-theory-based analyses. Our findings demonstrate that although the 12Ch coil can be used to reveal resting-state connectivity maps, the 32Ch coil provides more detailed functional connectivity maps (using seed-based analysis) as well as increased global and local efficiency, and cost (using graph-theory-based analysis), in a number of widely reported resting-state networks. The exploration of subcortical networks, which are scarcely reported due to limitations in spatial resolution and coil sensitivity, also proved beneficial with the 32Ch coil. Further, comparisons regarding the data acquisition time required to successfully map these networks indicated that scan time can be reduced by 50% when a coil with an increased number of channels (i.e., 32Ch) is used. Switching to multichannel arrays in resting-state fcMRI could, therefore, provide both detailed functional connectivity maps and acquisition time reductions, which could further benefit imaging special subject populations, such as patients or pediatric populations, who have less tolerance for lengthy imaging sessions. PMID:23510203
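
    For the graph-theory part of such an analysis, global and local efficiency can be computed from a thresholded functional-connectivity matrix. A sketch using networkx follows, with a synthetic region-by-time matrix and an arbitrary correlation threshold standing in for real resting-state data and the study's actual preprocessing.

```python
import numpy as np
import networkx as nx

def efficiency_from_timeseries(ts, threshold=0.3):
    """Global and local efficiency of a binary graph obtained by thresholding
    the region-by-region correlation matrix.
    ts: array of shape (n_regions, n_timepoints)."""
    corr = np.corrcoef(ts)
    np.fill_diagonal(corr, 0.0)
    G = nx.from_numpy_array((np.abs(corr) > threshold).astype(int))
    return nx.global_efficiency(G), nx.local_efficiency(G)

# Synthetic stand-in for parcellated resting-state data: 90 regions, 240 TRs.
ts = np.random.randn(90, 240)
g_eff, l_eff = efficiency_from_timeseries(ts)
print(f"global efficiency {g_eff:.3f}, local efficiency {l_eff:.3f}")
```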

  11. Multichannel Brain-Signal-Amplifying and Digitizing System

    NASA Technical Reports Server (NTRS)

    Gevins, Alan

    2005-01-01

    An apparatus has been developed for use in acquiring multichannel electroencephalographic (EEG) data from a human subject. EEG apparatuses with many channels in use heretofore have been too heavy and bulky to be worn, and have been limited in dynamic range to no more than 18 bits. The present apparatus is small and light enough to be worn by the subject. It is capable of amplifying EEG signals and digitizing them to 22 bits in as many as 150 channels. The apparatus is controlled by software and is plugged into the USB port of a personal computer. This apparatus makes it possible, for the first time, to obtain high-resolution functional EEG images of a thinking brain in a real-life, ambulatory setting outside a research laboratory or hospital.

  12. The SRI24 Multi-Channel Brain Atlas

    PubMed Central

    Rohlfing, Torsten; Zahr, Natalie M.; Sullivan, Edith V.; Pfefferbaum, Adolf

    2009-01-01

    We present a new standard atlas of the human brain based on magnetic resonance images. The atlas was generated using unbiased population registration from high-resolution images obtained by multichannel-coil acquisition at 3T in a group of 24 normal subjects. The final atlas comprises three anatomical channels (T1-weighted, early and late spin echo), three diffusion-related channels (fractional anisotropy, mean diffusivity, diffusion-weighted image), and three tissue probability maps (CSF, gray matter, white matter). The atlas is dynamic in that it is implicitly represented by nonrigid transformations between the 24 subject images, as well as distortion-correction alignments between the image channels in each subject. The atlas can, therefore, be generated at essentially arbitrary image resolutions and orientations (e.g., AC/PC aligned), without compounding interpolation artifacts. We demonstrate in this paper two different applications of the atlas: (a) region definition by label propagation in a fiber tracking study is enabled by the increased sharpness of our atlas compared with other available atlases, and (b) spatial normalization is enabled by its average shape property. In summary, our atlas has unique features and will be made available to the scientific community as a resource and reference system for future imaging-based studies of the human brain. PMID:19183706

  13. The SRI24 multichannel brain atlas: construction and applications

    NASA Astrophysics Data System (ADS)

    Rohlfing, Torsten; Zahr, Natalie M.; Sullivan, Edith V.; Pfefferbaum, Adolf

    2008-03-01

    We present a new standard atlas of the human brain based on magnetic resonance images. The atlas was generated using unbiased population registration from high-resolution images obtained by multichannel-coil acquisition at 3T in a group of 24 normal subjects. The final atlas comprises three anatomical channels (T1-weighted, early and late spin echo), three diffusion-related channels (fractional anisotropy, mean diffusivity, diffusion-weighted image), and three tissue probability maps (CSF, gray matter, white matter). The atlas is dynamic in that it is implicitly represented by nonrigid transformations between the 24 subject images, as well as distortion-correction alignments between the image channels in each subject. The atlas can, therefore, be generated at essentially arbitrary image resolutions and orientations (e.g., AC/PC aligned), without compounding interpolation artifacts. We demonstrate in this paper two different applications of the atlas: (a) region definition by label propagation in a fiber tracking study is enabled by the increased sharpness of our atlas compared with other available atlases, and (b) spatial normalization is enabled by its average shape property. In summary, our atlas has unique features and will be made available to the scientific community as a resource and reference system for future imaging-based studies of the human brain.

  14. The utility of multichannel local field potentials for brain-machine interfaces

    NASA Astrophysics Data System (ADS)

    Hwang, Eun Jung; Andersen, Richard A.

    2013-08-01

    Objective. Local field potentials (LFPs) that carry information about the subject's motor intention have the potential to serve as a complement or alternative to spike signals for brain-machine interfaces (BMIs). The goal of this study is to assess the utility of LFPs for BMIs by characterizing the largely unknown information coding properties of multichannel LFPs. Approach. Two monkeys were implanted, each with a 16-channel electrode array, in the parietal reach region where both LFPs and spikes are known to encode the subject's intended reach target. We examined how multichannel LFPs recorded during a reach task jointly carry reach target information, and compared the LFP performance to simultaneously recorded multichannel spikes. Main Results. LFPs yielded a higher number of channels that were informative about reach targets than spikes. Single channel LFPs provided more accurate target information than single channel spikes. However, LFPs showed significantly larger signal and noise correlations across channels than spikes. Reach target decoders performed worse when using multichannel LFPs than multichannel spikes. The underperformance of multichannel LFPs was mostly due to their larger noise correlation because noise de-correlated multichannel LFPs produced a decoding accuracy comparable to multichannel spikes. Despite the high noise correlation, decoders using LFPs in addition to spikes outperformed decoders using only spikes. Significance. These results demonstrate that multichannel LFPs could effectively complement spikes for BMI applications by yielding more informative channels. The utility of multichannel LFPs may be further augmented if their high noise correlation can be taken into account by decoders.
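
    The role of noise correlations can be illustrated with a toy decoding example (not the authors' decoder): a correlation-blind classifier improves once the shared trial-to-trial noise across channels is whitened out. All dimensions and the noise structure below are synthetic.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Toy data: 16-channel LFP features, 8 reach targets, 40 trials per target,
# with trial-to-trial noise that is correlated across channels.
n_targets, n_trials, n_chan = 8, 40, 16
target_means = rng.normal(0.0, 1.0, (n_targets, n_chan))
mixing = 0.4 * rng.normal(0.0, 1.0, (n_chan, n_chan)) + np.eye(n_chan)
y = np.repeat(np.arange(n_targets), n_trials)
X = target_means[y] + rng.normal(0.0, 1.0, (y.size, n_chan)) @ mixing.T

# Whitening matrix estimated from the within-target (noise) covariance.
residuals = X - target_means[y]
cov = np.cov(residuals, rowvar=False)
evals, evecs = np.linalg.eigh(cov)
whiten = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T

clf = GaussianNB()                      # ignores correlations between channels
acc_raw = cross_val_score(clf, X, y, cv=5).mean()
acc_dec = cross_val_score(clf, X @ whiten, y, cv=5).mean()
print(f"accuracy with correlated noise {acc_raw:.2f}, after decorrelation {acc_dec:.2f}")
```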

  15. [Analysis of auditory information in the brain of the cetacean].

    PubMed

    Popov, V V; Supin, A Ia

    2006-01-01

    A distinctive feature of the cetacean brain is the exceptional development of the auditory neural centres. The location of the projection sensory areas, including the auditory areas, in the cetacean cortex differs essentially from that in other mammals. Evoked-potential (EP) characteristics indicate the presence of several functional divisions in the auditory cortex. Physiological studies of the cetacean auditory centres have mainly been performed using the EP technique. Of the several types of EPs, the short-latency auditory EP has been studied most thoroughly. In cetaceans, it is characterised by exceptionally high temporal resolution, with an integration time of about 0.3 ms, corresponding to a cut-off frequency of 1700 Hz. This far exceeds the temporal resolution of hearing in terrestrial mammals. The frequency selectivity of hearing in cetaceans was measured using several variants of the masking technique. Its acuity exceeds that of most terrestrial mammals (except bats). This acute frequency selectivity enables discrimination of the finest spectral patterns of auditory signals. PMID:16613059

  16. [Auditory hallucinations in lesions of the brain stem].

    PubMed

    Cambier, J; Decroix, J P; Masson, C

    1987-01-01

    Since the publication by Jean Lhermitte in 1922 of his paper on hallucinosis, the peduncular type has been described as a purely visual phenomenon. However, limited brain stem lesions can give rise to analogous manifestations in the auditory field. Five cases of auditory hallucinosis are reviewed, the first four resulting from a lesion of the tegmentum of the pons responsible for contralateral hemi-anesthesia and homolateral facial palsy with paralysis of laterality. Central-type hypoacusis and a severe disorder of sound localization revealed a lesion of the trapezoid body. The fifth case resulted from a peduncular lesion in the region supplied by the superior cerebellar artery, the auditory deficit being related to a lesion of the inferior corpus quadrigeminum. In one patient, the auditory hallucinosis was followed by a period of visual hallucinations and oneiric delusions. Both auditory and visual hallucinosis can be related to hypnagogic hallucinations. Dream mechanisms (the geniculo-occipital spike system) escape from the normal inhibitory control exerted by the raphe nuclei. Auditory deafferentation could predispose to auditory hallucinosis. PMID:3629075

  17. The Human Brain Maintains Contradictory and Redundant Auditory Sensory Predictions

    PubMed Central

    Pieszek, Marika; Widmann, Andreas; Gruber, Thomas; Schröger, Erich

    2013-01-01

    Computational and experimental research has revealed that auditory sensory predictions are derived from regularities of the current environment by using internal generative models. However, so far, what has not been addressed is how the auditory system handles situations giving rise to redundant or even contradictory predictions derived from different sources of information. To this end, we measured error signals in the event-related brain potentials (ERPs) in response to violations of auditory predictions. Sounds could be predicted on the basis of overall probability, i.e., one sound was presented frequently and another sound rarely. Furthermore, each sound was predicted by an informative visual cue. Participants' task was to use the cue and to discriminate the two sounds as fast as possible. Violations of the probability-based prediction (i.e., a rare sound) as well as violations of the visual-auditory prediction (i.e., an incongruent sound) elicited error signals in the ERPs (Mismatch Negativity [MMN] and Incongruency Response [IR]). The respective error signals were observed even when the overall probability and the visual symbol predicted different sounds. That is, the auditory system concurrently maintains and tests contradictory predictions. Moreover, if the same sound was predicted, we observed an additive error signal (scalp potential and primary current density) equaling the sum of the specific error signals. Thus, the auditory system maintains and tolerates functionally independently represented redundant and contradictory predictions. We argue that the auditory system exploits all currently active regularities in order to optimally prepare for future events. PMID:23308266

  18. Shaping the aging brain: role of auditory input patterns in the emergence of auditory cortical impairments

    PubMed Central

    Kamal, Brishna; Holman, Constance; de Villers-Sidani, Etienne

    2013-01-01

    Age-related impairments in the primary auditory cortex (A1) include poor tuning selectivity, neural desynchronization, and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain, and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signal and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and cortical myelin as previously documented in the aged group. Most of these changes reversed after returning the rats to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function. PMID:24062649

  19. Neural mechanisms of auditory categorization: from across brain areas to within local microcircuits

    PubMed Central

    Tsunada, Joji; Cohen, Yale E.

    2014-01-01

    Categorization enables listeners to efficiently encode and respond to auditory stimuli. Behavioral evidence for auditory categorization has been well documented across a broad range of human and non-human animal species. Moreover, neural correlates of auditory categorization have been documented in a variety of different brain regions in the ventral auditory pathway, which is thought to underlie auditory-object processing and auditory perception. Here, we review and discuss how neural representations of auditory categories are transformed across different scales of neural organization in the ventral auditory pathway: from across different brain areas to within local microcircuits. We propose different neural transformations across different scales of neural organization in auditory categorization. Along the ascending auditory system in the ventral pathway, there is a progression in the encoding of categories from simple acoustic categories to categories for abstract information. On the other hand, in local microcircuits, different classes of neurons differentially compute categorical information. PMID:24987324

  20. Analysis of auditory information in the brains of cetaceans.

    PubMed

    Popov, V V; Supin, A Ya

    2007-03-01

    A characteristic feature of the brains of toothed cetaceans is the exceptional development of the auditory neural centers. The location of the projection sensory zones, including the auditory zones, in the cetacean cortex is significantly different from that in other mammals. The characteristics of evoked potentials demonstrate the existence of several functional subdivisions in the auditory cortex. Physiological studies of the auditory neural centers of cetaceans have been performed predominantly using the evoked potentials method. Of the several types of evoked potentials available for non-invasive recording, the most detailed studies have been performed using short-latency auditory evoked potentials (SLAEP). SLAEP in cetaceans are characterized by exceptionally high time resolution, with integration times of about 0.3 msec, which on the frequency scale corresponds to a cut-off frequency of 1700 Hz. This is more than an order of magnitude greater than the time resolution of hearing in terrestrial mammals. The frequency selectivity of hearing in cetaceans has been measured using several versions of the masking method. The acuity of frequency selectivity in cetaceans is several times greater than that in most terrestrial mammals (except bats). The acute frequency selectivity allows the discrimination of very fine spectral patterns of sound signals. PMID:17294105

  1. Brain Metabolism during Hallucination-Like Auditory Stimulation in Schizophrenia

    PubMed Central

    Horga, Guillermo; Fernández-Egea, Emilio; Mané, Anna; Font, Mireia; Schatz, Kelly C.; Falcon, Carles; Lomeña, Francisco; Bernardo, Miguel; Parellada, Eduard

    2014-01-01

    Auditory verbal hallucinations (AVH) in schizophrenia are typically characterized by rich emotional content. Despite the prominent role of emotion in regulating normal perception, the neural interface between emotion-processing regions such as the amygdala and auditory regions involved in perception remains relatively unexplored in AVH. Here, we studied brain metabolism using FDG-PET in 9 remitted patients with schizophrenia who previously reported severe AVH during an acute psychotic episode and 8 matched healthy controls. Participants were scanned twice: (1) at rest and (2) during the perception of aversive auditory stimuli mimicking the content of AVH. Compared to controls, remitted patients showed an exaggerated response to the AVH-like stimuli in limbic and paralimbic regions, including the left amygdala. Furthermore, patients displayed abnormally strong connections between the amygdala and auditory regions of the cortex and thalamus, along with abnormally weak connections between the amygdala and medial prefrontal cortex. These results suggest that abnormal modulation of the auditory cortex by limbic-thalamic structures might be involved in the pathophysiology of AVH and may potentially account for the emotional features that characterize hallucinatory percepts in schizophrenia. PMID:24416328

  2. Amplitude-modulated stimuli reveal auditory-visual interactions in brain activity and brain connectivity

    PubMed Central

    Laing, Mark; Rees, Adrian; Vuong, Quoc C.

    2015-01-01

    The temporal congruence between auditory and visual signals coming from the same source can be a powerful means by which the brain integrates information from different senses. To investigate how the brain uses temporal information to integrate auditory and visual information from continuous yet unfamiliar stimuli, we used amplitude-modulated tones and size-modulated shapes with which we could manipulate the temporal congruence between the sensory signals. These signals were independently modulated at a slow or a fast rate. Participants were presented with auditory-only, visual-only, or auditory-visual (AV) trials in the fMRI scanner. On AV trials, the auditory and visual signal could have the same (AV congruent) or different modulation rates (AV incongruent). Using psychophysiological interaction analyses, we found that auditory regions showed increased functional connectivity predominantly with frontal regions for AV incongruent relative to AV congruent stimuli. We further found that superior temporal regions, shown previously to integrate auditory and visual signals, showed increased connectivity with frontal and parietal regions for the same contrast. Our findings provide evidence that both activity in a network of brain regions and their connectivity are important for AV integration, and help to bridge the gap between transient and familiar AV stimuli used in previous studies. PMID:26483710

  3. Concurrent brain responses to separate auditory and visual targets

    PubMed Central

    Mitchell, Daniel J.; Hauk, Olaf; Beste, Christian; Pizzella, Vittorio; Duncan, John

    2015-01-01

    In the attentional blink, a target event (T1) strongly interferes with perception of a second target (T2) presented within a few hundred milliseconds. Concurrently, the brain's electromagnetic response to the second target is suppressed, especially a late negative-positive EEG complex including the traditional P3 wave. An influential theory proposes that conscious perception requires access to a distributed, frontoparietal global workspace, explaining the attentional blink by strong mutual inhibition between concurrent workspace representations. Often, however, the attentional blink is reduced or eliminated for targets in different sensory modalities, suggesting a limit to such global inhibition. Using functional magnetic resonance imaging, we confirm that visual and auditory targets produce similar, distributed patterns of frontoparietal activity. In an attentional blink EEG/MEG design, however, an auditory T1 and visual T2 are identified without mutual interference, with largely preserved electromagnetic responses to T2. The results suggest parallel brain responses to target events in different sensory modalities. PMID:26084914

  4. Concurrent brain responses to separate auditory and visual targets.

    PubMed

    Finoia, Paola; Mitchell, Daniel J; Hauk, Olaf; Beste, Christian; Pizzella, Vittorio; Duncan, John

    2015-08-01

    In the attentional blink, a target event (T1) strongly interferes with perception of a second target (T2) presented within a few hundred milliseconds. Concurrently, the brain's electromagnetic response to the second target is suppressed, especially a late negative-positive EEG complex including the traditional P3 wave. An influential theory proposes that conscious perception requires access to a distributed, frontoparietal global workspace, explaining the attentional blink by strong mutual inhibition between concurrent workspace representations. Often, however, the attentional blink is reduced or eliminated for targets in different sensory modalities, suggesting a limit to such global inhibition. Using functional magnetic resonance imaging, we confirm that visual and auditory targets produce similar, distributed patterns of frontoparietal activity. In an attentional blink EEG/MEG design, however, an auditory T1 and visual T2 are identified without mutual interference, with largely preserved electromagnetic responses to T2. The results suggest parallel brain responses to target events in different sensory modalities. PMID:26084914

  5. Infant Auditory Processing and Event-related Brain Oscillations

    PubMed Central

    Musacchia, Gabriella; Ortiz-Mantilla, Silvia; Realpe-Bonilla, Teresa; Roesler, Cynthia P.; Benasich, April A.

    2015-01-01

    Rapid auditory processing and acoustic change detection abilities play a critical role in allowing human infants to efficiently process the fine spectral and temporal changes that are characteristic of human language. These abilities lay the foundation for effective language acquisition, allowing infants to home in on the sounds of their native language. Invasive procedures in animals and scalp-recorded potentials from human adults suggest that simultaneous, rhythmic activity (oscillations) between and within brain regions is fundamental to sensory development, determining the resolution with which incoming stimuli are parsed. At this time, little is known about oscillatory dynamics in human infant development. However, animal neurophysiology and adult EEG data provide the basis for a strong hypothesis that rapid auditory processing in infants is mediated by oscillatory synchrony in discrete frequency bands. In order to investigate this, 128-channel, high-density EEG responses of 4-month-old infants to frequency change in tone pairs, presented in two rate conditions (Rapid: 70 msec ISI; Control: 300 msec ISI), were examined. To determine the frequency band and magnitude of activity, auditory evoked response averages were first co-registered with age-appropriate brain templates. Next, the principal components of the response were identified and localized using a two-dipole model of brain activity. Single-trial analysis of oscillatory power showed a robust index of frequency-change processing in bursts of theta-band (3-8 Hz) activity in both right and left auditory cortices, with left activation more prominent in the Rapid condition. These methods have produced data that are not only some of the first reported evoked oscillation analyses in infants, but are also, importantly, the product of a well-established method of recording and analyzing clean, meticulously collected infant EEG and ERPs. In this article, we describe our method for infant EEG net
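
    Single-trial band power of the kind described above is commonly estimated by band-pass filtering and squaring the Hilbert envelope; the sketch below follows that generic recipe with synthetic epochs and a hypothetical sampling rate, and does not reproduce the dipole-based source analysis used in the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def theta_power(trials, fs, band=(3.0, 8.0)):
    """Single-trial theta-band power: band-pass filter each epoch and take
    the squared magnitude of its Hilbert transform (analytic envelope).
    trials: array of shape (n_trials, n_samples)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trials, axis=-1)
    return np.abs(hilbert(filtered, axis=-1)) ** 2

# Synthetic epochs: 200 trials of 1 s at a hypothetical 250-Hz sampling rate.
fs = 250
trials = np.random.randn(200, fs)
power = theta_power(trials, fs)
print(power.mean(axis=0).shape)        # average theta power time course
```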

  6. Evoked potential correlates of selective attention with multi-channel auditory inputs

    NASA Technical Reports Server (NTRS)

    Schwent, V. L.; Hillyard, S. A.

    1975-01-01

    Ten subjects were presented with random, rapid sequences of four auditory tones which were separated in pitch and apparent spatial position. The N1 component of the auditory vertex evoked potential (EP) measured relative to a baseline was observed to increase with attention. It was concluded that the N1 enhancement reflects a finely tuned selective attention to one stimulus channel among several concurrent, competing channels. This EP enhancement probably increases with increased information load on the subject.

  7. An auditory brain-computer interface using virtual sound field.

    PubMed

    Gao, Haiyang; Ouyang, Minhui; Zhang, Dan; Hong, Bo

    2011-01-01

    Brain-computer interfaces (BCIs) exploring the auditory communication channel might be preferable for amyotrophic lateral sclerosis (ALS) patients with poor sight or with the visual system being occupied for other uses. Spatial attention was proven to be able to modulate the event-related potentials (ERPs); yet up to now, there is no auditory BCI based on virtual sound field. In this study, auditory spatial attention was introduced by using stimuli in a virtual sound field. Subjects attended selectively to the virtual location of the target sound and discriminated its relevant properties. The concurrently recorded ERP components and the users' performance were compared with those of the paradigm where all sounds were presented in the frontal direction. The early ERP components (100-250 ms) and the simulated online accuracies indicated that spatial attention indeed added effective discriminative information for BCI classification. The proposed auditory paradigm using virtual sound field may lead to a high-performance and portable BCI system. PMID:22255354

  8. Brain Region-Specific Activity Patterns after Recent or Remote Memory Retrieval of Auditory Conditioned Fear

    ERIC Educational Resources Information Center

    Kwon, Jeong-Tae; Jhang, Jinho; Kim, Hyung-Su; Lee, Sujin; Han, Jin-Hee

    2012-01-01

    Memory is thought to be sparsely encoded throughout multiple brain regions forming unique memory trace. Although evidence has established that the amygdala is a key brain site for memory storage and retrieval of auditory conditioned fear memory, it remains elusive whether the auditory brain regions may be involved in fear memory storage or…

  9. Development of auditory-specific brain rhythm in infants.

    PubMed

    Fujioka, Takako; Mourad, Nasser; Trainor, Laurel J

    2011-02-01

    Human infants rapidly develop their auditory perceptual abilities and acquire culture-specific knowledge in speech and music in the second 6 months of life. In the adult brain, neural rhythm around 10 Hz in the temporal lobes is thought to reflect sound analysis and subsequent cognitive processes such as memory and attention. To study when and how such rhythm emerges in infancy, we examined electroencephalogram (EEG) recordings in infants 4 and 12 months of age during sound stimulation and silence. In the 4-month-olds, the amplitudes of narrowly tuned 4-Hz brain rhythm, recorded from bilateral temporal electrodes, were modulated by sound stimuli. In the 12-month-olds, the sound-induced modulation occurred at a faster 6-Hz rhythm at temporofrontal locations. The brain rhythms in the older infants consisted of more complex components, as is evident even in individual data. These findings suggest that auditory-specific rhythmic neural activity, which is already established before 6 months of age, involves more speed-efficient long-range neural networks by the age of 12 months, when long-term memory for native phoneme representation and for musical rhythmic features is formed. We suggest that maturation of distinct rhythmic components occurs in parallel, and that sensory-specific functions bound to particular thalamo-cortical networks are transferred to newly developed higher-order networks step by step until adult hierarchical neural oscillatory mechanisms are achieved across the whole brain. PMID:21226773

  10. Brain Mapping of Language and Auditory Perception in High-Functioning Autistic Adults: A PET Study.

    ERIC Educational Resources Information Center

    Muller, R-A.; Behen, M. E.; Rothermel, R. D.; Chugani, D. C.; Muzik, O.; Mangner, T. J.; Chugani, H. T.

    1999-01-01

    A study used positron emission tomography (PET) to study patterns of brain activation during auditory processing in five high-functioning adults with autism. Results found that participants showed reversed hemispheric dominance during the verbal auditory stimulation and reduced activation of the auditory cortex and cerebellum. (CR)

  11. Tactual and auditory vigilance in split-brain man.

    PubMed Central

    Dimond, S J

    1979-01-01

    Two studies are reported of tactual and auditory vigilance performance in patients with a split-brain or partial commissurotomy to examine the attentional behaviour of the right and left hemisphere, and to identify defects in attention which may be related to the division of the cerebral commissures. The performance of the right hemisphere on all tasks of sustained attention so far studied was substantially better than that of the left. Considerable depletion of concentration was observed for the total split-brain group but not in patients with partial commissurotomy. One of the more unusual phenomena of the split-brain condition is that gaps of attention, often lasting many seconds, occur predominantly on the left hemisphere. The switch to a different type of signal on the same hemisphere does not stop them but the switching of signals from one hemisphere to another does. The defect is interpreted as a failure of attention peculiar to the individual hemisphere under test. PMID:762586

  12. An integrated system for dynamic control of auditory perspective in a multichannel sound field

    NASA Astrophysics Data System (ADS)

    Corey, Jason Andrew

    An integrated system providing dynamic control of sound source azimuth, distance and proximity to a room boundary within a simulated acoustic space is proposed for use in multichannel music and film sound production. The system has been investigated, implemented, and psychoacoustically tested within the ITU-R BS.775 recommended five-channel (3/2) loudspeaker layout. The work brings together physical and perceptual models of room simulation to allow dynamic placement of virtual sound sources at any location of a simulated space within the horizontal plane. The control system incorporates a number of modules including simulated room modes, "fuzzy" sources, and tracking early reflections, whose parameters are dynamically changed according to sound source location within the simulated space. The control functions of the basic elements, derived from theories of perception of a source in a real room, have been carefully tuned to provide efficient, effective, and intuitive control of a sound source's perceived location. Seven formal listening tests were conducted to evaluate the effectiveness of the algorithm design choices. The tests evaluated: (1) loudness calibration of multichannel sound images; (2) the effectiveness of distance control; (3) the resolution of distance control provided by the system; (4) the effectiveness of the proposed system when compared to a commercially available multichannel room simulation system in terms of control of source distance and proximity to a room boundary; (5) the role of tracking early reflection patterns on the perception of sound source distance; (6) the role of tracking early reflection patterns on the perception of lateral phantom images. The listening tests confirm the effectiveness of the system for control of perceived sound source distance, proximity to room boundaries, and azimuth, through fine, dynamic adjustment of parameters according to source location. All of the parameters are grouped and controlled together to
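
    The azimuth-control component of such a display ultimately reduces to distributing gain across loudspeaker pairs. A minimal constant-power pairwise panning sketch follows; it only illustrates the panning law for one adjacent pair of the 3/2 layout and omits the distance, room-mode, fuzzy-source, and early-reflection modules described above.

```python
import numpy as np

def constant_power_pair(az_target, az_a, az_b):
    """Constant-power gains for a phantom source between two loudspeakers
    at azimuths az_a and az_b (degrees), for a source at az_target."""
    frac = np.clip((az_target - az_a) / (az_b - az_a), 0.0, 1.0)
    return np.cos(frac * np.pi / 2), np.sin(frac * np.pi / 2)

# Example: a source at 15 degrees panned between Centre (0) and Left (+30)
# of the 3/2 layout; the squared gains always sum to one (constant power).
g_c, g_l = constant_power_pair(15.0, 0.0, 30.0)
print(f"gain C {g_c:.3f}, gain L {g_l:.3f}, total power {g_c**2 + g_l**2:.3f}")
```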

  13. The auditory and non-auditory brain areas involved in tinnitus. An emergent property of multiple parallel overlapping subnetworks.

    PubMed

    Vanneste, Sven; De Ridder, Dirk

    2012-01-01

    Tinnitus is the perception of a sound in the absence of an external sound source. It is characterized by sensory components such as the perceived loudness, the lateralization, the tinnitus type (pure tone, noise-like) and associated emotional components, such as distress and mood changes. Source localization of quantitative electroencephalography (qEEG) data demonstrates the involvement of auditory brain areas as well as several non-auditory brain areas such as the anterior cingulate cortex (dorsal and subgenual), auditory cortex (primary and secondary), dorsal lateral prefrontal cortex, insula, supplementary motor area, orbitofrontal cortex (including the inferior frontal gyrus), parahippocampus, posterior cingulate cortex and the precuneus, in different aspects of tinnitus. Explaining these non-auditory brain areas as constituents of separable subnetworks, each reflecting a specific aspect of the tinnitus percept, increases the explanatory power of their involvement in tinnitus. Thus, the unified percept of tinnitus can be considered an emergent property of multiple parallel dynamically changing and partially overlapping subnetworks, each with a specific spontaneous oscillatory pattern and functional connectivity signature. PMID:22586375

  14. Brain network interactions in auditory, visual and linguistic processing.

    PubMed

    Horwitz, Barry; Braun, Allen R

    2004-05-01

    In this paper, we discuss the importance of network interactions between brain regions in mediating performance of sensorimotor and cognitive tasks, including those associated with language processing. Functional neuroimaging methods, especially PET and fMRI, provide data that are obtained essentially simultaneously from much of the brain and are thus ideal for assessing interregional functional interactions. Two ways to use these types of data to assess network interactions are presented. First, using PET, we demonstrate that anterior and posterior perisylvian language areas have stronger functional connectivity during spontaneous narrative production than during other less linguistically demanding production tasks. Second, we show how one can use large-scale neural network modeling to relate neural activity to the hemodynamically-based data generated by fMRI and PET. We review two versions of a model of object processing - one for visual and one for auditory objects. The regions comprising the models include primary and secondary sensory cortex, association cortex in the temporal lobe, and prefrontal cortex. Each model incorporates specific assumptions about how neurons in each of these areas function, and how neurons in the different areas are interconnected with each other. Each model is able to perform a delayed match-to-sample task for simple objects (simple shapes for the visual model; tonal contours for the auditory model). We find that the simulated electrical activities in each region are similar to those observed in nonhuman primates performing analogous tasks, and the absolute values of the simulated integrated synaptic activity in each brain region match human fMRI/PET data. Thus, this type of modeling provides a way to understand the neural bases for the sensorimotor and cognitive tasks of interest. PMID:15068921

  15. Atypical Bilateral Brain Synchronization in the Early Stage of Human Voice Auditory Processing in Young Children with Autism.

    PubMed

    Kurita, Toshiharu; Kikuchi, Mitsuru; Yoshimura, Yuko; Hiraishi, Hirotoshi; Hasegawa, Chiaki; Takahashi, Tetsuya; Hirosawa, Tetsu; Furutani, Naoki; Higashida, Haruhiro; Ikeda, Takashi; Mutou, Kouhei; Asada, Minoru; Minabe, Yoshio

    2016-01-01

    Autism spectrum disorder (ASD) has been postulated to involve impaired neuronal cooperation in large-scale neural networks, including cortico-cortical interhemispheric circuitry. In the context of ASD, alterations in both peripheral and central auditory processes have also attracted a great deal of interest because these changes appear to represent pathophysiological processes; therefore, many prior studies have focused on atypical auditory responses in ASD. The auditory evoked field (AEF), recorded by magnetoencephalography, and the synchronization of these processes between right and left hemispheres was recently suggested to reflect various cognitive abilities in children. However, to date, no previous study has focused on AEF synchronization in ASD subjects. To assess global coordination across spatially distributed brain regions, the analysis of Omega complexity from multichannel neurophysiological data was proposed. Using Omega complexity analysis, we investigated the global coordination of AEFs in 3-8-year-old typically developing (TD) children (n = 50) and children with ASD (n = 50) in 50-ms time-windows. Children with ASD displayed significantly higher Omega complexities compared with TD children in the time-window of 0-50 ms, suggesting lower whole brain synchronization in the early stage of the P1m component. When we analyzed the left and right hemispheres separately, no significant differences in any time-windows were observed. These results suggest lower right-left hemispheric synchronization in children with ASD compared with TD children. Our study provides new evidence of aberrant neural synchronization in young children with ASD by investigating auditory evoked neural responses to the human voice. PMID:27074011
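
    Omega complexity is conventionally computed from the eigenvalue spectrum of the spatial covariance matrix of a multichannel window; the sketch below follows that standard definition (exponentiated entropy of the normalized eigenvalues) on synthetic data and does not reproduce the study's exact preprocessing or windowing.

```python
import numpy as np

def omega_complexity(window):
    """Omega complexity of a multichannel window (channels x samples): the
    exponentiated entropy of the normalized eigenvalues of the spatial
    covariance matrix. Values near 1 indicate strong global synchronization;
    values approaching the channel count indicate none."""
    centered = window - window.mean(axis=1, keepdims=True)
    cov = centered @ centered.T / window.shape[1]
    evals = np.clip(np.linalg.eigvalsh(cov), 0.0, None)
    p = evals / evals.sum()
    p = p[p > 0]
    return float(np.exp(-np.sum(p * np.log(p))))

# Hypothetical 50-ms MEG window: 160 channels, 50 samples at 1 kHz.
window = np.random.randn(160, 50)
print(f"Omega = {omega_complexity(window):.1f}")
```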

  16. Atypical Bilateral Brain Synchronization in the Early Stage of Human Voice Auditory Processing in Young Children with Autism

    PubMed Central

    Kurita, Toshiharu; Kikuchi, Mitsuru; Yoshimura, Yuko; Hiraishi, Hirotoshi; Hasegawa, Chiaki; Takahashi, Tetsuya; Hirosawa, Tetsu; Furutani, Naoki; Higashida, Haruhiro; Ikeda, Takashi; Mutou, Kouhei; Asada, Minoru; Minabe, Yoshio

    2016-01-01

    Autism spectrum disorder (ASD) has been postulated to involve impaired neuronal cooperation in large-scale neural networks, including cortico-cortical interhemispheric circuitry. In the context of ASD, alterations in both peripheral and central auditory processes have also attracted a great deal of interest because these changes appear to represent pathophysiological processes; therefore, many prior studies have focused on atypical auditory responses in ASD. The auditory evoked field (AEF), recorded by magnetoencephalography, and the synchronization of these processes between right and left hemispheres was recently suggested to reflect various cognitive abilities in children. However, to date, no previous study has focused on AEF synchronization in ASD subjects. To assess global coordination across spatially distributed brain regions, the analysis of Omega complexity from multichannel neurophysiological data was proposed. Using Omega complexity analysis, we investigated the global coordination of AEFs in 3–8-year-old typically developing (TD) children (n = 50) and children with ASD (n = 50) in 50-ms time-windows. Children with ASD displayed significantly higher Omega complexities compared with TD children in the time-window of 0–50 ms, suggesting lower whole brain synchronization in the early stage of the P1m component. When we analyzed the left and right hemispheres separately, no significant differences in any time-windows were observed. These results suggest lower right-left hemispheric synchronization in children with ASD compared with TD children. Our study provides new evidence of aberrant neural synchronization in young children with ASD by investigating auditory evoked neural responses to the human voice. PMID:27074011

  17. Behavioral and electrophysiological auditory processing measures in traumatic brain injury after acoustically controlled auditory training: a long-term study

    PubMed Central

    Figueiredo, Carolina Calsolari; de Andrade, Adriana Neves; Marangoni-Castan, Andréa Tortosa; Gil, Daniela; Suriano, Italo Capraro

    2015-01-01

    Objective: To investigate the long-term efficacy of acoustically controlled auditory training in adults after traumatic brain injury. Methods: A total of six audiologically normal individuals aged between 20 and 37 years were studied. They had suffered severe traumatic brain injury with diffuse axonal lesion and had undergone an acoustically controlled auditory training program approximately one year before. The results obtained in the behavioral and electrophysiological evaluation of auditory processing immediately after acoustically controlled auditory training were compared to reassessment findings one year later. Results: Quantitative analysis of the auditory brainstem response showed increased absolute latency of all waves and interpeak intervals, bilaterally, when comparing both evaluations. The amplitude of all waves also increased; the increase was statistically significant for wave V in the right ear and wave III in the left ear. As to P3, decreased latency and increased amplitude were found for both ears at reassessment. The previous and current behavioral assessments showed similar results, except for the staggered spondaic words test in the left ear and the number of errors on the dichotic consonant-vowel test. Conclusion: The acoustically controlled auditory training was effective in the long run, since better latency and amplitude results were observed in the electrophysiological evaluation, in addition to stability of behavioral measures one year after training. PMID:26676270

  18. Bigger Brains or Bigger Nuclei? Regulating the Size of Auditory Structures in Birds

    PubMed Central

    Kubke, M. Fabiana; Massoglia, Dino P.; Carr, Catherine E.

    2012-01-01

    Increases in the size of the neuronal structures that mediate specific behaviors are believed to be related to enhanced computational performance. It is not clear, however, what developmental and evolutionary mechanisms mediate these changes, nor whether an increase in the size of a given neuronal population is a general mechanism to achieve enhanced computational ability. We addressed the issue of size by analyzing the variation in the relative number of cells of auditory structures in auditory specialists and generalists. We show that bird species with different auditory specializations exhibit variation in the relative size of their hindbrain auditory nuclei. In the barn owl, an auditory specialist, the hindbrain auditory nuclei involved in the computation of sound location show hyperplasia. This hyperplasia was also found in songbirds, but not in non-auditory specialists. The hyperplasia of auditory nuclei was also not seen in birds with large body weight, suggesting that the total number of cells is selected for in auditory specialists. In barn owls, differences observed in the relative size of the auditory nuclei might be attributed to modifications in neurogenesis and cell death. Thus, hyperplasia of circuits used for auditory computation accompanies auditory specialization in different orders of birds. PMID:14726625

  19. Evoked potential correlates of selective attention with multi-channel auditory inputs.

    PubMed

    Schwent, V L; Hillyard, S A

    1975-02-01

    Ten subjects were presented with a random sequence of 50 msec tone pips at a rapid rate (averaging one tone every 225 msec). The tones came from four different sound sources or sensory "channels", each having a different pitch (2000, 4000, 1000, and 500 c/sec, respectively) and perceived spatial position (spaced equidistant across the head). Within each sensory "channel" a random 10% of the tones were of a slightly higher pitch (designated as "targets"). The subject attended to one channel at a time for 7.5 min and counted the targets in that channel. The auditory evoked vertex potential elicited by a channel of stimuli when attended was compared with the mean vertex potential elicited by those same stimuli when the other three channels were being attended. The N1 component (latency 80-130 msec) measured relative to a baseline revealed an increase with attention (82% in the baseline-to-N1 measure, P less than 10-). It was concluded that: (1) this N1 enhancement could not be attributed to peripheral mechanisms acting on sensory transmission; (2) this N1 enhancement reflects a "finely tuned" selective attention to one channel of stimuli among several concurrent and competing channels; and (3) a probable relationship exists between the information load on the subject and the magnitude of this EP enhancement with selective attention. PMID:45943
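
    Measuring N1 relative to a baseline, as described, corresponds to averaging the epochs and taking the most negative deflection in roughly the 80-130 msec window with the pre-stimulus mean subtracted; a small sketch with synthetic epochs and a hypothetical sampling rate follows.

```python
import numpy as np

def n1_amplitude(epochs, fs, pre_ms=100, window_ms=(80, 130)):
    """Average the epochs and measure N1 as the most negative deflection in
    the given post-stimulus window, relative to the pre-stimulus baseline.
    epochs: array of shape (n_trials, n_samples); stimulus onset at pre_ms."""
    erp = epochs.mean(axis=0)
    pre = int(pre_ms * fs / 1000)
    baseline = erp[:pre].mean()
    lo = pre + int(window_ms[0] * fs / 1000)
    hi = pre + int(window_ms[1] * fs / 1000)
    return erp[lo:hi].min() - baseline

# Synthetic attended vs unattended epochs at a hypothetical 500-Hz rate.
fs = 500
attended = np.random.randn(300, 300)
unattended = np.random.randn(300, 300)
print(n1_amplitude(attended, fs), n1_amplitude(unattended, fs))
```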

  20. Quantitative map of multiple auditory cortical regions with a stereotaxic fine-scale atlas of the mouse brain

    PubMed Central

    Tsukano, Hiroaki; Horie, Masao; Hishida, Ryuichi; Takahashi, Kuniyuki; Takebayashi, Hirohide; Shibuki, Katsuei

    2016-01-01

    Optical imaging studies have recently revealed the presence of multiple auditory cortical regions in the mouse brain. We have previously demonstrated, using flavoprotein fluorescence imaging, at least six regions in the mouse auditory cortex, including the anterior auditory field (AAF), primary auditory cortex (AI), the secondary auditory field (AII), dorsoanterior field (DA), dorsomedial field (DM), and dorsoposterior field (DP). While multiple regions in the visual cortex and somatosensory cortex have been annotated and consolidated in recent brain atlases, the multiple auditory cortical regions have not yet been presented from a coronal view. In the current study, we obtained regional coordinates of the six auditory cortical regions of the C57BL/6 mouse brain and illustrated these regions on template coronal brain slices. These results should reinforce the existing mouse brain atlases and support future studies in the auditory cortex. PMID:26924462

  1. The importance of individual frequencies of endogenous brain oscillations for auditory cognition - A short review.

    PubMed

    Baltus, Alina; Herrmann, Christoph Siegfried

    2016-06-01

    Oscillatory EEG activity in the human brain with frequencies in the gamma range (approx. 30-80Hz) is known to be relevant for a large number of cognitive processes. Interestingly, each subject reveals an individual frequency of the auditory gamma-band response (GBR) that coincides with the peak in the auditory steady state response (ASSR). A common resonance frequency of auditory cortex seems to underlie both the individual frequency of the GBR and the peak of the ASSR. This review sheds light on the functional role of oscillatory gamma activity for auditory processing. For successful processing, the auditory system has to track changes in auditory input over time and store information about past events in memory which allows the construction of auditory objects. Recent findings support the idea of gamma oscillations being involved in the partitioning of auditory input into discrete samples to facilitate higher order processing. We review experiments that seem to suggest that inter-individual differences in the resonance frequency are behaviorally relevant for gap detection and speech processing. A possible application of these resonance frequencies for brain computer interfaces is illustrated with regard to optimized individual presentation rates for auditory input to correspond with endogenous oscillatory activity. This article is part of a Special Issue entitled SI: Auditory working memory. PMID:26453287

  2. [Verbal auditory agnosia: SPECT study of the brain].

    PubMed

    Carmona, C; Casado, I; Fernández-Rojas, J; Garín, J; Rayo, J I

    1995-01-01

    Verbal auditory agnosia is rare in clinical practice. Clinically, it is characterized by impaired comprehension and repetition of speech, while reading, writing, and spontaneous speech are preserved. It is thus distinguished from generalized auditory agnosia by the preserved ability to recognize nonverbal sounds. We present the clinical picture of a forty-year-old, right-handed woman who developed verbal auditory agnosia after bilateral temporal ischemic infarcts due to atrial fibrillation secondary to dilated cardiomyopathy. Neurophysiological studies with pure tone threshold audiometry, brainstem auditory evoked potentials, and cortical auditory evoked potentials showed sparing of peripheral hearing and an intact auditory pathway in the brainstem but impaired cortical responses. Cranial CT scan revealed two large hypodense areas involving the cortico-subcortical regions of both temporal lobes. Cerebral SPECT using 99mTc-HMPAO as radiotracer showed hypoperfusion in the posterior part of both frontal lobes next to Roland's fissure and in both temporal lobes just anterior to the Sylvian fissure. PMID:8556589

  3. Multichannel optical brain imaging to separate cerebral vascular, tissue metabolic, and neuronal effects of cocaine

    NASA Astrophysics Data System (ADS)

    Ren, Hugang; Luo, Zhongchi; Yuan, Zhijia; Pan, Yingtian; Du, Congwu

    2012-02-01

    Characterization of cerebral hemodynamic and oxygenation metabolic changes, as well as neuronal function, is of great importance to the study of brain function and relevant brain disorders such as drug addiction. Compared with other neuroimaging modalities, optical imaging techniques have the potential for high spatiotemporal resolution and dissection of the changes in cerebral blood flow (CBF), blood volume (CBV), hemoglobin oxygenation, and intracellular Ca ([Ca2+]i), which serve as markers of vascular function, tissue metabolism, and neuronal activity, respectively. Recently, we developed a multiwavelength imaging system and integrated it into a surgical microscope. Three LEDs (λ=530 nm, 570 nm, and 630 nm) were used for exciting [Ca2+]i fluorescence labeled by Rhod2 (AM) and for sensing total hemoglobin (i.e., CBV) and deoxygenated hemoglobin, whereas one laser diode at 830 nm was used for laser speckle imaging to form a CBF map of the brain. These light sources were time-shared for illumination of the brain and synchronized with the exposure of the CCD camera to acquire multichannel images of the brain. Our animal studies indicated that this optical approach enabled simultaneous mapping of cocaine-induced changes in CBF, CBV, oxygenated and deoxygenated hemoglobin, and [Ca2+]i in the cortical brain. Its high spatiotemporal resolution (30 μm, 10 Hz) and large field of view (4x5 mm2) make it an advanced neuroimaging tool for brain functional studies.
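
    The CBF map in this system comes from laser speckle imaging. As a minimal sketch of how such flow maps are commonly derived (not necessarily the authors' exact pipeline), spatial speckle contrast K = σ/⟨I⟩ can be computed over a small sliding window of a raw speckle frame, with lower contrast loosely indicating faster flow; the window size and the gamma-distributed synthetic frame below are purely illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(raw, win=7):
    """Spatial speckle contrast K = sigma / mean over a sliding window.

    Lower K roughly corresponds to faster flow (more blurring of the speckle
    pattern during the camera exposure)."""
    raw = raw.astype(np.float64)
    mean = uniform_filter(raw, size=win)
    mean_sq = uniform_filter(raw ** 2, size=win)
    var = np.clip(mean_sq - mean ** 2, 0.0, None)
    return np.sqrt(var) / (mean + 1e-12)

# Synthetic frame for illustration; a real system would use the 830 nm camera frames.
frame = np.random.gamma(shape=2.0, scale=50.0, size=(256, 256))
K = speckle_contrast(frame)
flow_index = 1.0 / (K ** 2 + 1e-12)   # a common simple proxy for relative flow
```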

  4. An auditory brain-computer interface evoked by natural speech

    NASA Astrophysics Data System (ADS)

    Lopez-Gordo, M. A.; Fernandez, E.; Romero, S.; Pelayo, F.; Prieto, Alberto

    2012-06-01

    Brain-computer interfaces (BCIs) are mainly intended for people unable to perform any muscular movement, such as patients in a complete locked-in state. The majority of BCIs interact visually with the user, either in the form of stimulation or biofeedback. However, visual BCIs are limited in their ultimate use because they require the subjects to gaze, explore, and shift eye-gaze using their muscles, thus excluding patients in a complete locked-in state or with unresponsive wakefulness syndrome. In this study, we present a novel, fully auditory EEG-BCI based on a dichotic listening paradigm using human voice for stimulation. This interface was evaluated with healthy volunteers, achieving an average information transmission rate of 1.5 bits min-1 in full-length trials and 2.7 bits min-1 using the optimal trial length, recorded with only one channel and without formal training. This novel technique opens the door to more natural communication with users unable to use visual BCIs, with promising results in terms of performance, usability, training and cognitive effort.
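
    The bits-per-minute figures quoted above are information transfer rates. The abstract does not state the exact formula, but a common choice in the BCI literature is Wolpaw's definition, sketched below with hypothetical values for the number of classes, accuracy, and trial length.

```python
import math

def wolpaw_itr_bits_per_min(n_classes, accuracy, trial_seconds):
    """Information transfer rate (Wolpaw-style) in bits per minute."""
    if accuracy <= 1.0 / n_classes:
        return 0.0
    bits = math.log2(n_classes)
    if accuracy < 1.0:
        bits += accuracy * math.log2(accuracy)
        bits += (1 - accuracy) * math.log2((1 - accuracy) / (n_classes - 1))
    return bits * (60.0 / trial_seconds)

# Hypothetical numbers for illustration only (binary dichotic-listening choice).
print(wolpaw_itr_bits_per_min(n_classes=2, accuracy=0.85, trial_seconds=15.0))
```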

  5. Effectiveness of direct and non-direct auditory stimulation on coma arousal after traumatic brain injury.

    PubMed

    Park, Soohyun; Davis, Alice E

    2016-08-01

    The aim of this study was to evaluate the effect of direct and non-direct auditory stimulation on arousal in coma patients with severe traumatic brain injury and to compare the effects of direct vs. non-direct auditory stimulation. A crossover intervention study design was used. Nine participants who were comatose after a severe traumatic brain injury underwent direct and non-direct auditory stimulation. Direct auditory stimulation requires a higher level of interpersonal interaction between the patient and the stimuli, such as the voices of family members, orientation by a nurse or family member, and familiar music. In contrast, non-direct auditory stimuli were more general, less familiar, less interactive, indirect, and not lively, such as general music and TV sounds. Participants received both direct and non-direct auditory stimulation in randomized order for 15 minutes. Recovery of consciousness was measured with the Glasgow Coma Scale (GCS) and the Sensory Stimulation Assessment Measure (SSAM). The Friedman test with post hoc Wilcoxon signed-rank comparisons was used for data analysis. Patients who received both direct and non-direct auditory stimulation exhibited significantly increased GCS (p = 0.008) and SSAM scores (p = 0.008) over baseline. The improvement in SSAM scores after direct auditory stimulation was significantly greater than that after non-direct auditory stimulation (p = 0.021), but there was no statistically significant difference in GCS scores (p = 0.139). Auditory stimulation, in particular direct auditory stimulation, might be useful for improving the recovery of consciousness and increasing the arousal of comatose patients. The SSAM is more useful than the GCS for detecting subtle changes from stimulation intervention. PMID:27241789

  6. Scale-Free Brain Quartet: Artistic Filtering of Multi-Channel Brainwave Music

    PubMed Central

    Wu, Dan; Li, Chaoyi; Yao, Dezhong

    2013-01-01

    To listen to brain activity as a piece of music, we proposed the scale-free brainwave music (SFBM) technology, which translates scalp EEG into music notes according to the power law of both EEG and music. In the present study, the methodology was extended to derive a quartet from multi-channel EEG with artistic beat and tonality filtering. EEG data from multiple electrodes were first translated into MIDI sequences by SFBM. These sequences were then processed by a beat filter, which adjusted the duration of notes in terms of the characteristic frequency, and were further filtered from atonal to tonal according to a key defined by the analysis of the original music pieces. Resting EEGs with eyes closed and open from 40 subjects were used for music generation. The results revealed that the scale-free exponents of the music before and after filtering were different: the filtered music showed larger variety between the eyes-closed (EC) and eyes-open (EO) conditions, and the pitch scale exponents of the filtered music were closer to 1, making it more similar to classical music. Furthermore, the tempo of the filtered music with eyes closed was significantly slower than that with eyes open. With the original materials obtained from multi-channel EEG, and a little creative filtering following the composition process of a potential artist, the resulting brainwave quartet opened a new window to look into the brain in an audible, musical way. In fact, as the artistic beat and tonal filters were derived from the brainwaves, the filtered music maintained the essential properties of the brain activities in a more musical style. It may harmonically distinguish different states of brain activity and therefore provides a way to analyze EEG from a relaxed, auditory perspective. PMID:23717527
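
    For readers unfamiliar with EEG-to-music translation, the sketch below maps a single EEG epoch to a MIDI pitch and a note duration. The published SFBM rules use power-law relationships calibrated on EEG and music statistics, so the logarithmic amplitude-to-pitch mapping and the microvolt range assumed here are only illustrative stand-ins for the general idea.

```python
import numpy as np

def eeg_epoch_to_midi_note(epoch, fs, pitch_range=(36, 96)):
    """Illustrative mapping of one EEG epoch to a (pitch, duration) pair.

    Amplitude is log-compressed onto a MIDI pitch range and the epoch length
    sets the note duration; this is a stand-in, not the exact SFBM mapping."""
    amp = np.ptp(epoch)                              # peak-to-peak amplitude of the epoch
    lo, hi = pitch_range
    # log-compress amplitude into [0, 1], assuming roughly 1-100 microvolt signals
    x = np.clip(np.log10(amp + 1e-6) / 2.0, 0.0, 1.0)
    pitch = int(round(lo + (1.0 - x) * (hi - lo)))   # larger waves -> lower notes
    duration_s = len(epoch) / fs
    return pitch, duration_s

fs = 250.0
epoch = np.random.randn(int(0.5 * fs)) * 20.0        # fake 0.5 s epoch in microvolts
print(eeg_epoch_to_midi_note(epoch, fs))
```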

  7. Maximum-likelihood estimation of channel-dependent trial-to-trial variability of auditory evoked brain responses in MEG

    PubMed Central

    2014-01-01

    Background We propose a mathematical model for multichannel assessment of the trial-to-trial variability of auditory evoked brain responses in magnetoencephalography (MEG). Methods Following the work of de Munck et al., our approach is based on the maximum likelihood estimation and involves an approximation of the spatio-temporal covariance of the contaminating background noise by means of the Kronecker product of its spatial and temporal covariance matrices. Extending the work of de Munck et al., where the trial-to-trial variability of the responses was considered identical to all channels, we evaluate it for each individual channel. Results Simulations with two equivalent current dipoles (ECDs) with different trial-to-trial variability, one seeded in each of the auditory cortices, were used to study the applicability of the proposed methodology on the sensor level and revealed spatial selectivity of the trial-to-trial estimates. In addition, we simulated a scenario with neighboring ECDs, to show limitations of the method. We also present an illustrative example of the application of this methodology to real MEG data taken from an auditory experimental paradigm, where we found hemispheric lateralization of the habituation effect to multiple stimulus presentation. Conclusions The proposed algorithm is capable of reconstructing lateralization effects of the trial-to-trial variability of evoked responses, i.e. when an ECD of only one hemisphere habituates, whereas the activity of the other hemisphere is not subject to habituation. Hence, it may be a useful tool in paradigms that assume lateralization effects, like, e.g., those involving language processing. PMID:24939398
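
    The key modeling device in this record is the Kronecker factorization of the spatio-temporal noise covariance into a spatial and a temporal factor. Below is a minimal sketch of estimating those two factors from baseline MEG segments with a simple "flip-flop" alternation; the cited method embeds this idea in a full maximum-likelihood model of the evoked responses with per-channel trial-to-trial variability, which is not reproduced here. Array shapes are hypothetical.

```python
import numpy as np

def kronecker_noise_factors(noise_trials, n_iter=10):
    """Estimate spatial (C_s) and temporal (C_t) covariance factors so that the
    noise covariance is approximated by kron(C_t, C_s).

    noise_trials: array of shape (n_trials, n_channels, n_times), e.g. baseline
    segments, assumed zero-mean. Uses a simple flip-flop alternation."""
    n_trials, n_ch, n_t = noise_trials.shape
    C_t = np.eye(n_t)
    for _ in range(n_iter):
        inv_Ct = np.linalg.inv(C_t)
        C_s = sum(X @ inv_Ct @ X.T for X in noise_trials) / (n_trials * n_t)
        inv_Cs = np.linalg.inv(C_s)
        C_t = sum(X.T @ inv_Cs @ X for X in noise_trials) / (n_trials * n_ch)
    return C_s, C_t

rng = np.random.default_rng(0)
noise = rng.standard_normal((50, 16, 40))   # hypothetical MEG baseline segments
C_s, C_t = kronecker_noise_factors(noise)
```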

  8. Are Auditory Hallucinations Related to the Brain's Resting State Activity? A 'Neurophenomenal Resting State Hypothesis'

    PubMed Central

    2014-01-01

    While several hypotheses about the neural mechanisms underlying auditory verbal hallucinations (AVH) have been suggested, the exact role of the recently highlighted intrinsic resting state activity of the brain remains unclear. Based on recent findings, we therefore developed what we call the 'resting state hypothesis' of AVH. Our hypothesis suggests that AVH may be traced back to abnormally elevated resting state activity in the auditory cortex itself, abnormal modulation of the auditory cortex by anterior cortical midline regions as part of the default-mode network, and neural confusion between auditory cortical resting state changes and stimulus-induced activity. We discuss evidence in favour of this 'resting state hypothesis' and show its correspondence with phenomenal, i.e., subjective-experiential, features as explored in phenomenological accounts. We therefore speak of a 'neurophenomenal resting state hypothesis' of auditory hallucinations in schizophrenia. PMID:25598821

  9. The Relationship between Phonological and Auditory Processing and Brain Organization in Beginning Readers

    ERIC Educational Resources Information Center

    Pugh, Kenneth R.; Landi, Nicole; Preston, Jonathan L.; Mencl, W. Einar; Austin, Alison C.; Sibley, Daragh; Fulbright, Robert K.; Seidenberg, Mark S.; Grigorenko, Elena L.; Constable, R. Todd; Molfese, Peter; Frost, Stephen J.

    2013-01-01

    We employed brain-behavior analyses to explore the relationship between performance on tasks measuring phonological awareness, pseudoword decoding, and rapid auditory processing (all predictors of reading (dis)ability) and brain organization for print and speech in beginning readers. For print-related activation, we observed a shared set of…

  10. Comparison of temporal properties of auditory single units in response to cochlear infrared laser stimulation recorded with multi-channel and single tungsten electrodes

    NASA Astrophysics Data System (ADS)

    Tan, Xiaodong; Xia, Nan; Young, Hunter; Richter, Claus-Peter

    2015-02-01

    Auditory prostheses may benefit from Infrared Neural Stimulation (INS) because optical stimulation allows for spatially selective activation of neuron populations. Selective activation of neurons in the cochlear spiral ganglion can be determined in the central nucleus of the inferior colliculus (ICC) because the tonotopic organization of frequencies in the cochlea is maintained throughout the auditory pathway. The activation profile of INS is well represented in the ICC by multichannel electrodes (MCEs). To characterize single unit properties in response to INS, however, single tungsten electrodes (STEs) should be used because of their better signal-to-noise ratio. In this study, we compared the temporal properties of ICC single units recorded with MCEs and STEs in order to characterize the response properties of single auditory neurons in response to INS in guinea pigs. The length along the cochlea stimulated with infrared radiation corresponded to a frequency range of about 0.6 octaves, similar to that recorded with STEs. The temporal properties of single units recorded with MCEs showed higher maximum rates, shorter latencies, and higher firing efficiencies compared to those recorded with STEs. When the preset amplitude threshold for triggering MCE recordings was raised to twice the noise level, the temporal properties of the single units became similar to those obtained with STEs. Indistinguishable neural activity from multiple sources in MCE recordings could be responsible for the difference in response properties between MCEs and STEs. Thus, caution should be taken in single unit recordings with MCEs.

  11. A multi-channel telemetry system for brain microstimulation in freely roaming animals.

    PubMed

    Xu, Shaohua; Talwar, Sanjiv K; Hawley, Emerson S; Li, Lei; Chapin, John K

    2004-02-15

    A system is described that enables an experimenter to remotely deliver electrical pulse train stimuli to multiple different locations in the brains of freely moving rats. The system consists of two separate components: a transmitter base station controlled by a PC operator, and an integrated receiver-microprocessor pack worn on the back of the animal, which connects to suitably implanted brain locations. The backpack is small and light, so that small animal subjects can easily carry it. Under remote command from the PC, the backpack can be configured to provide biphasic pulse trains of arbitrarily specified parameters. A feature of the system is that it generates precise brain-stimulation behavioral effects using the direct constant-voltage TTL output of the backpack microprocessor. The system performs with high fidelity even in complex environments over a distance of about 300 m. Rat self-stimulation tests showed that this system produced the same behavioral responses as a conventional constant-current stimulator. This system enables a variety of multi-channel brain stimulation experiments in freely moving animals. We have employed it to develop a new animal behavior model ("virtual" conditioning) for the neurophysiological study of spatial learning, in which a rat can be accurately guided to navigate various terrains. PMID:14757345
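
    To make the stimulus parameters concrete, the sketch below builds a charge-balanced biphasic pulse train as a sampled waveform from amplitude, phase width, pulse rate, and train duration. The parameter names and values are hypothetical; the actual backpack generates such trains in firmware from remotely transmitted settings rather than from a sampled array.

```python
import numpy as np

def biphasic_pulse_train(fs, amplitude_v, pulse_width_s, pulse_rate_hz, train_s):
    """Charge-balanced biphasic pulse train as a sampled waveform.

    Each pulse is a cathodic phase immediately followed by an anodic phase of
    equal width and amplitude; parameters are illustrative only."""
    n = int(round(train_s * fs))
    wave = np.zeros(n)
    width = max(1, int(round(pulse_width_s * fs)))
    period = int(round(fs / pulse_rate_hz))
    for start in range(0, n - 2 * width, period):
        wave[start:start + width] = -amplitude_v              # cathodic phase
        wave[start + width:start + 2 * width] = amplitude_v   # anodic phase
    return wave

# e.g. a 100 Hz train of 0.5 ms/phase pulses at 3 V for 200 ms, sampled at 50 kHz
train = biphasic_pulse_train(fs=50_000, amplitude_v=3.0, pulse_width_s=0.0005,
                             pulse_rate_hz=100.0, train_s=0.2)
```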

  12. Multichannel biomagnetic system for study of electrical activity in the brain and heart.

    PubMed

    Schneider, S; Hoenig, E; Reichenberger, H; Abraham-Fuchs, K; Moshage, W; Oppelt, A; Stefan, H; Weikl, A; Wirth, A

    1990-09-01

    The authors designed a multichannel system for noninvasive measurement of the extremely weak magnetic fields generated by the brain and the heart. It uses a flat array of 37 superconducting magnetic field-sensing coils connected to sophisticated superconducting quantum interference devices. To prevent interference from external electromagnetic fields, the system is operated inside a shielded room. Complete sets of coherent data, even from spontaneous events, can be recorded. System performance was evaluated with phantom measurements and evoked-response studies. A spatial resolution of a few millimeters and a temporal resolution of a millisecond were obtained. First results in patients with partial epilepsy and investigations of the cardiac conductive pathway indicate that biomagnetism is now ready for a systematic clinical evaluation. Interpretation of measurements was facilitated by highlighting biomagnetically localized electrical activity in three-dimensional digital magnetic resonance images. PMID:2389043

  13. Auditory motion in the sighted and blind: Early visual deprivation triggers a large-scale imbalance between auditory and "visual" brain regions.

    PubMed

    Dormal, Giulia; Rezk, Mohamed; Yakobov, Esther; Lepore, Franco; Collignon, Olivier

    2016-07-01

    How early blindness reorganizes the brain circuitry that supports auditory motion processing remains controversial. We used fMRI to characterize brain responses to in-depth, laterally moving, and static sounds in early blind and sighted individuals. Whole-brain univariate analyses revealed that the right posterior middle temporal gyrus and superior occipital gyrus selectively responded to both in-depth and laterally moving sounds only in the blind. These regions overlapped with regions selective for visual motion (hMT+/V5 and V3A) that were independently localized in the sighted. In the early blind, the right planum temporale showed enhanced functional connectivity with right occipito-temporal regions during auditory motion processing and a concomitant reduced functional connectivity with parietal and frontal regions. Whole-brain searchlight multivariate analyses demonstrated higher auditory motion decoding in the right posterior middle temporal gyrus in the blind compared to the sighted, while decoding accuracy was enhanced in the auditory cortex bilaterally in the sighted compared to the blind. Analyses targeting individually defined visual area hMT+/V5 however indicated that auditory motion information could be reliably decoded within this area even in the sighted group. Taken together, the present findings demonstrate that early visual deprivation triggers a large-scale imbalance between auditory and "visual" brain regions that typically support the processing of motion information. PMID:27107468

  14. Diffusion tensor imaging of dolphin brains reveals direct auditory pathway to temporal lobe

    PubMed Central

    Berns, Gregory S.; Cook, Peter F.; Foxley, Sean; Jbabdi, Saad; Miller, Karla L.; Marino, Lori

    2015-01-01

    The brains of odontocetes (toothed whales) look grossly different from their terrestrial relatives. Because of their adaptation to the aquatic environment and their reliance on echolocation, the odontocetes' auditory system is both unique and crucial to their survival. Yet, scant data exist about the functional organization of the cetacean auditory system. A predominant hypothesis is that the primary auditory cortex lies in the suprasylvian gyrus along the vertex of the hemispheres, with this position induced by expansion of ‘associative′ regions in lateral and caudal directions. However, the precise location of the auditory cortex and its connections are still unknown. Here, we used a novel diffusion tensor imaging (DTI) sequence in archival post-mortem brains of a common dolphin (Delphinus delphis) and a pantropical dolphin (Stenella attenuata) to map their sensory and motor systems. Using thalamic parcellation based on traditionally defined regions for the primary visual (V1) and auditory cortex (A1), we found distinct regions of the thalamus connected to V1 and A1. But in addition to suprasylvian-A1, we report here, for the first time, the auditory cortex also exists in the temporal lobe, in a region near cetacean-A2 and possibly analogous to the primary auditory cortex in related terrestrial mammals (Artiodactyla). Using probabilistic tract tracing, we found a direct pathway from the inferior colliculus to the medial geniculate nucleus to the temporal lobe near the sylvian fissure. Our results demonstrate the feasibility of post-mortem DTI in archival specimens to answer basic questions in comparative neurobiology in a way that has not previously been possible and shows a link between the cetacean auditory system and those of terrestrial mammals. Given that fresh cetacean specimens are relatively rare, the ability to measure connectivity in archival specimens opens up a plethora of possibilities for investigating neuroanatomy in cetaceans and other species

  15. Diffusion tensor imaging of dolphin brains reveals direct auditory pathway to temporal lobe.

    PubMed

    Berns, Gregory S; Cook, Peter F; Foxley, Sean; Jbabdi, Saad; Miller, Karla L; Marino, Lori

    2015-07-22

    The brains of odontocetes (toothed whales) look grossly different from their terrestrial relatives. Because of their adaptation to the aquatic environment and their reliance on echolocation, the odontocetes' auditory system is both unique and crucial to their survival. Yet, scant data exist about the functional organization of the cetacean auditory system. A predominant hypothesis is that the primary auditory cortex lies in the suprasylvian gyrus along the vertex of the hemispheres, with this position induced by expansion of 'associative' regions in lateral and caudal directions. However, the precise location of the auditory cortex and its connections are still unknown. Here, we used a novel diffusion tensor imaging (DTI) sequence in archival post-mortem brains of a common dolphin (Delphinus delphis) and a pantropical dolphin (Stenella attenuata) to map their sensory and motor systems. Using thalamic parcellation based on traditionally defined regions for the primary visual (V1) and auditory cortex (A1), we found distinct regions of the thalamus connected to V1 and A1. But in addition to suprasylvian-A1, we report here, for the first time, the auditory cortex also exists in the temporal lobe, in a region near cetacean-A2 and possibly analogous to the primary auditory cortex in related terrestrial mammals (Artiodactyla). Using probabilistic tract tracing, we found a direct pathway from the inferior colliculus to the medial geniculate nucleus to the temporal lobe near the sylvian fissure. Our results demonstrate the feasibility of post-mortem DTI in archival specimens to answer basic questions in comparative neurobiology in a way that has not previously been possible and shows a link between the cetacean auditory system and those of terrestrial mammals. Given that fresh cetacean specimens are relatively rare, the ability to measure connectivity in archival specimens opens up a plethora of possibilities for investigating neuroanatomy in cetaceans and other species

  16. Brain region-specific activity patterns after recent or remote memory retrieval of auditory conditioned fear.

    PubMed

    Kwon, Jeong-Tae; Jhang, Jinho; Kim, Hyung-Su; Lee, Sujin; Han, Jin-Hee

    2012-01-01

    Memory is thought to be sparsely encoded throughout multiple brain regions, forming a unique memory trace. Although evidence has established that the amygdala is a key brain site for the storage and retrieval of auditory conditioned fear memory, it remains elusive whether the auditory brain regions may be involved in fear memory storage or retrieval. To investigate this possibility, we systematically imaged the brain activity patterns in the lateral amygdala, MGm/PIN, and AuV/TeA using activity-dependent induction of the immediate early gene zif268 after recent and remote memory retrieval of auditory conditioned fear. Consistent with the critical role of the amygdala in fear memory, zif268 activity in the lateral amygdala was significantly increased after both recent and remote memory retrieval. Interestingly, however, the density of zif268 (+) neurons in both MGm/PIN and AuV/TeA, particularly in layers IV and VI, was increased only after remote but not recent fear memory retrieval compared to control groups. Further analysis of zif268 signals in AuV/TeA revealed that the conditioned tone induced stronger zif268 induction than a familiar tone in each individual zif268 (+) neuron after recent memory retrieval. Taken together, our results support the lateral amygdala as a key brain site for permanent fear memory storage and suggest that MGm/PIN and AuV/TeA might play a role in remote memory storage or retrieval of auditory conditioned fear, or, alternatively, that these auditory brain regions might process familiar or conditioned tone information differently at recent and remote time points. PMID:22993170

  17. The SRI24 Multi-Channel Brain Atlas: Construction and Applications.

    PubMed

    Rohlfing, Torsten; Zahr, Natalie M; Sullivan, Edith V; Pfefferbaum, Adolf

    2008-01-01

    We present a new standard atlas of the human brain based on magnetic resonance images. The atlas was generated using unbiased population registration from high-resolution images obtained by multichannel-coil acquisition at 3T in a group of 24 normal subjects. The final atlas comprises three anatomical channels (T(1)-weighted, early and late spin echo), three diffusion-related channels (fractional anisotropy, mean diffusivity, diffusion-weighted image), and three tissue probability maps (CSF, gray matter, white matter). The atlas is dynamic in that it is implicitly represented by nonrigid transformations between the 24 subject images, as well as distortion-correction alignments between the image channels in each subject. The atlas can, therefore, be generated at essentially arbitrary image resolutions and orientations (e.g., AC/PC aligned), without compounding interpolation artifacts. We demonstrate in this paper two different applications of the atlas: (a) region definition by label propagation in a fiber tracking study is enabled by the increased sharpness of our atlas compared with other available atlases, and (b) spatial normalization is enabled by its average shape property. In summary, our atlas has unique features and will be made available to the scientific community as a resource and reference system for future imaging-based studies of the human brain. PMID:19183706

  18. Non-local Atlas-guided Multi-channel Forest Learning for Human Brain Labeling

    PubMed Central

    Ma, Guangkai; Gao, Yaozong; Wu, Guorong; Wu, Ligang; Shen, Dinggang

    2015-01-01

    Labeling MR brain images into anatomically meaningful regions is important in many quantitative brain researches. In many existing label fusion methods, appearance information is widely used. Meanwhile, recent progress in computer vision suggests that the context feature is very useful in identifying an object from a complex scene. In light of this, we propose a novel learning-based label fusion method by using both low-level appearance features (computed from the target image) and high-level context features (computed from warped atlases or tentative labeling maps of the target image). In particular, we employ a multi-channel random forest to learn the nonlinear relationship between these hybrid features and the target labels (i.e., corresponding to certain anatomical structures). Moreover, to accommodate the high inter-subject variations, we further extend our learning-based label fusion to a multi-atlas scenario, i.e., we train a random forest for each atlas and then obtain the final labeling result according to the consensus of all atlases. We have comprehensively evaluated our method on both LONI-LBPA40 and IXI datasets, and achieved the highest labeling accuracy, compared to the state-of-the-art methods in the literature. PMID:26942235
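
    A minimal sketch of the label-fusion idea described above: per-voxel feature vectors combine target-image appearance with context from a warped atlas labeling, one random forest is trained per atlas, and the atlas forests are fused by averaging class probabilities. The feature extraction, array shapes, and scikit-learn usage are illustrative assumptions, not the authors' implementation (which additionally iterates the context features over several stages, as described in the later, expanded record of this work).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def voxel_features(intensity_patch, warped_label_patch):
    """Hypothetical hybrid feature vector for one voxel: low-level appearance
    features from the target image plus label-context features from a warped atlas."""
    return np.concatenate([intensity_patch.ravel(), warped_label_patch.ravel()])

def train_per_atlas_forests(atlas_feature_sets, atlas_voxel_labels, n_trees=100):
    """One forest per atlas, as in the multi-atlas extension described above."""
    forests = []
    for X, y in zip(atlas_feature_sets, atlas_voxel_labels):
        rf = RandomForestClassifier(n_estimators=n_trees, random_state=0)
        rf.fit(X, y)
        forests.append(rf)
    return forests

def fuse_labels(forests, X_target):
    """Consensus over atlases: average class probabilities, take the argmax."""
    classes = forests[0].classes_            # assumes all forests saw the same label set
    probs = np.mean([rf.predict_proba(X_target) for rf in forests], axis=0)
    return classes[np.argmax(probs, axis=1)]

# Synthetic demo with 3 hypothetical atlases, 4 anatomical labels, 20 features per voxel.
rng = np.random.default_rng(0)
Xs = [rng.standard_normal((200, 20)) for _ in range(3)]
ys = [rng.integers(0, 4, 200) for _ in range(3)]
forests = train_per_atlas_forests(Xs, ys, n_trees=20)
print(fuse_labels(forests, rng.standard_normal((5, 20))))
```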

  19. Multi-channel linear descriptors for event-related EEG collected in brain computer interface

    NASA Astrophysics Data System (ADS)

    Pei, Xiao-mei; Zheng, Chong-xun; Xu, Jin; Bin, Guang-yu; Wang, Hong-wu

    2006-03-01

    Using three multi-channel linear descriptors, i.e. spatial complexity (Ω), field power (Σ), and frequency of field changes (Φ), event-related EEG data within 8-30 Hz were investigated during imagination of left or right hand movement. Studies on the event-related EEG data indicate that a two-channel version of Ω, Σ and Φ can reflect the antagonistic ERD/ERS patterns over contralateral and ipsilateral areas and also characterize the different phases of the changing brain states in the event-related paradigm. Based on the selected two-channel linear descriptors, the left and right hand motor imagery tasks were classified with satisfactory results, which testifies to the validity of the three linear descriptors Ω, Σ and Φ for characterizing event-related EEG. The preliminary results show that Ω and Σ, together with Φ, have good separability for left and right hand motor imagery tasks, and could be considered for classification of two classes of EEG patterns in brain computer interface applications.
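
    The descriptors Ω, Σ and Φ follow Wackermann's multichannel linear descriptors. Below is a sketch of one common way to compute them for a (channels x samples) epoch; the exact normalizations used in the study may differ, and the two-channel test epoch is synthetic.

```python
import numpy as np

def wackermann_descriptors(eeg, fs):
    """Sigma (mean field strength), Phi (generalized frequency of field changes),
    and Omega (spatial complexity from the eigenvalue spectrum of the channel
    covariance) for an epoch of shape (channels, samples)."""
    u = eeg - eeg.mean(axis=0, keepdims=True)        # average reference
    k, n = u.shape
    sigma = np.sqrt(np.sum(u ** 2) / (n * k))        # RMS field strength
    du = np.diff(u, axis=1) * fs                     # time derivative of the field
    phi = np.sqrt(np.sum(du ** 2) / np.sum(u[:, 1:] ** 2)) / (2 * np.pi)
    lam = np.linalg.eigvalsh(np.cov(u))              # eigenvalues of channel covariance
    lam = np.clip(lam, 1e-15, None)
    p = lam / lam.sum()
    omega = np.exp(-np.sum(p * np.log(p)))           # entropy-based spatial complexity
    return sigma, phi, omega

rng = np.random.default_rng(1)
epoch = rng.standard_normal((2, 500))                # hypothetical two-channel epoch
print(wackermann_descriptors(epoch, fs=250.0))
```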

  20. [New method for the clinical study of the auditory pathway in the brainstem and cerebral primary and secondary auditory cortex using averaged auditory brain mapping for 15 seconds after sound stimulation].

    PubMed

    Ried Undurraga, E; Ried Goycoolea, E; Cristian Martínez, T

    1999-01-01

    A practical new method for the clinical examination of the auditory pathway from the ear to the brain is presented. Averaging of 4000 stimuli produces a graphic image of evoked potentials in the brainstem and both cerebral hemispheres. We report the results of examination with this new method in 60 normal ears of 30 healthy young people to determine the normal pattern of cerebral processing of evoked auditory signals 5 to 15 milliseconds after stimulating the ear. It is concluded that the examination is useful for studying auditory signal processing in the brain. It also demonstrated that the primary and secondary auditory cortexes are not the destination of the auditory pathway, but relay stations. PMID:10491469

  1. Turning down the noise: the benefit of musical training on the aging auditory brain.

    PubMed

    Alain, Claude; Zendel, Benjamin Rich; Hutka, Stefanie; Bidelman, Gavin M

    2014-02-01

    Age-related decline in hearing abilities is a ubiquitous part of aging, and commonly impacts speech understanding, especially when there are competing sound sources. While such age effects are partially due to changes within the cochlea, difficulties typically exist beyond measurable hearing loss, suggesting that central brain processes, as opposed to simple peripheral mechanisms (e.g., hearing sensitivity), play a critical role in governing hearing abilities late into life. Current training regimens aimed at improving central auditory processing abilities have had limited success in promoting listening benefits. Interestingly, recent studies suggest that in young adults, musical training positively modifies neural mechanisms, providing robust, long-lasting improvements to hearing abilities as well as to non-auditory tasks that engage cognitive control. These results offer the encouraging possibility that musical training might be used to counteract age-related changes in auditory cognition commonly observed in older adults. Here, we reviewed studies that have examined the effects of age and musical experience on auditory cognition, with an emphasis on auditory scene analysis. We infer that musical training may offer potential benefits to complex listening and might be utilized as a means to delay or even attenuate declines in auditory perception and cognition that often emerge later in life. PMID:23831039

  2. Auditory-musical processing in autism spectrum disorders: a review of behavioral and brain imaging studies.

    PubMed

    Ouimet, Tia; Foster, Nicholas E V; Tryfon, Ana; Hyde, Krista L

    2012-04-01

    Autism spectrum disorder (ASD) is a complex neurodevelopmental condition characterized by atypical social and communication skills, repetitive behaviors, and atypical visual and auditory perception. Studies in vision have reported enhanced detailed ("local") processing but diminished holistic ("global") processing of visual features in ASD. Individuals with ASD also show enhanced processing of simple visual stimuli but diminished processing of complex visual stimuli. Relative to the visual domain, auditory global-local distinctions, and the effects of stimulus complexity on auditory processing in ASD, are less clear. However, one remarkable finding is that many individuals with ASD have enhanced musical abilities, such as superior pitch processing. This review provides a critical evaluation of behavioral and brain imaging studies of auditory processing with respect to current theories in ASD. We have focused on auditory-musical processing in terms of global versus local processing and simple versus complex sound processing. This review contributes to a better understanding of auditory processing differences in ASD. A deeper comprehension of sensory perception in ASD is key to better defining ASD phenotypes and, in turn, may lead to better interventions. PMID:22524375

  3. A blueprint for vocal learning: auditory predispositions from brains to genomes.

    PubMed

    Wheatcroft, David; Qvarnström, Anna

    2015-08-01

    Memorizing and producing complex strings of sound are requirements for spoken human language. We share these behaviours with likely more than 4000 species of songbirds, making birds our primary model for studying the cognitive basis of vocal learning and, more generally, an important model for how memories are encoded in the brain. In songbirds, as in humans, the sounds that a juvenile learns later in life depend on auditory memories formed early in development. Experiments on a wide variety of songbird species suggest that the formation and lability of these auditory memories, in turn, depend on auditory predispositions that stimulate learning when a juvenile hears relevant, species-typical sounds. We review evidence that variation in key features of these auditory predispositions are determined by variation in genes underlying the development of the auditory system. We argue that increased investigation of the neuronal basis of auditory predispositions expressed early in life in combination with modern comparative genomic approaches may provide insights into the evolution of vocal learning. PMID:26246333

  4. A blueprint for vocal learning: auditory predispositions from brains to genomes

    PubMed Central

    Wheatcroft, David; Qvarnström, Anna

    2015-01-01

    Memorizing and producing complex strings of sound are requirements for spoken human language. We share these behaviours with likely more than 4000 species of songbirds, making birds our primary model for studying the cognitive basis of vocal learning and, more generally, an important model for how memories are encoded in the brain. In songbirds, as in humans, the sounds that a juvenile learns later in life depend on auditory memories formed early in development. Experiments on a wide variety of songbird species suggest that the formation and lability of these auditory memories, in turn, depend on auditory predispositions that stimulate learning when a juvenile hears relevant, species-typical sounds. We review evidence that variation in key features of these auditory predispositions are determined by variation in genes underlying the development of the auditory system. We argue that increased investigation of the neuronal basis of auditory predispositions expressed early in life in combination with modern comparative genomic approaches may provide insights into the evolution of vocal learning. PMID:26246333

  5. BabySQUID: A mobile, high-resolution multichannel magnetoencephalography system for neonatal brain assessment

    NASA Astrophysics Data System (ADS)

    Okada, Yoshio; Pratt, Kevin; Atwood, Christopher; Mascarenas, Anthony; Reineman, Richard; Nurminen, Jussi; Paulson, Douglas

    2006-02-01

    We developed a prototype of a mobile, high-resolution, multichannel magnetoencephalography (MEG) system, called babySQUID, for assessing brain functions in newborns and infants. Unlike electroencephalography, MEG signals are not distorted by the scalp or the fontanels and sutures in the skull. Thus, brain activity can be measured and localized with MEG as if the sensors were above an exposed brain. The babySQUID is housed in a moveable cart small enough to be transported from one room to another. To assess brain functions, one places the baby on the bed of the cart and the head on its headrest with MEG sensors just below. The sensor array consists of 76 first-order axial gradiometers, each with a pickup coil diameter of 6 mm and a baseline of 30 mm, in a high-density array with a spacing of 12-14 mm center-to-center. The pickup coils are 6 ± 1 mm below the outer surface of the headrest. The short gap provides unprecedented sensitivity since the scalp and skull are thin (as little as 3-4 mm altogether) in babies. In an electromagnetically unshielded room in a hospital, the field sensitivity at 1 kHz was ~17 fT/√Hz. The noise was reduced from ~400 to 200 fT/√Hz at 1 Hz using a reference cancellation technique and further to ~40 fT/√Hz using a gradient common mode rejection technique. Although the residual environmental magnetic noise interfered with the operation of the babySQUID, the instrument functioned sufficiently well to detect spontaneous brain signals from babies with a signal-to-noise ratio (SNR) of as much as 7.6:1. In a magnetically shielded room, the field sensitivity was 17 fT/√Hz at 20 Hz and 30 fT/√Hz at 1 Hz without implementation of reference or gradient cancellation. The sensitivity was sufficiently high to detect spontaneous brain activity from a 7-month-old baby with an SNR of as much as 40:1 and evoked somatosensory responses with a 50 Hz bandwidth after as little as four averages. We expect that both the noise and the sensor gap can be reduced further by

  6. Multichannel brain recordings in behaving Drosophila reveal oscillatory activity and local coherence in response to sensory stimulation and circuit activation

    PubMed Central

    Paulk, Angelique C.; Zhou, Yanqiong; Stratton, Peter; Liu, Li

    2013-01-01

    Neural networks in vertebrates exhibit endogenous oscillations that have been associated with functions ranging from sensory processing to locomotion. It remains unclear whether oscillations may play a similar role in the insect brain. We describe a novel “whole brain” readout for Drosophila melanogaster using a simple multichannel recording preparation to study electrical activity across the brain of flies exposed to different sensory stimuli. We recorded local field potential (LFP) activity from >2,000 registered recording sites across the fly brain in >200 wild-type and transgenic animals to uncover specific LFP frequency bands that correlate with: 1) brain region; 2) sensory modality (olfactory, visual, or mechanosensory); and 3) activity in specific neural circuits. We found endogenous and stimulus-specific oscillations throughout the fly brain. Central (higher-order) brain regions exhibited sensory modality-specific increases in power within narrow frequency bands. Conversely, in sensory brain regions such as the optic or antennal lobes, LFP coherence, rather than power, best defined sensory responses across modalities. By transiently activating specific circuits via expression of TrpA1, we found that several circuits in the fly brain modulate LFP power and coherence across brain regions and frequency domains. However, activation of a neuromodulatory octopaminergic circuit specifically increased neuronal coherence in the optic lobes during visual stimulation while decreasing coherence in central brain regions. Our multichannel recording and brain registration approach provides an effective way to track activity simultaneously across the fly brain in vivo, allowing investigation of functional roles for oscillations in processing sensory stimuli and modulating behavior. PMID:23864378
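
    The two quantities contrasted in this record, band-limited LFP power and coherence between recording sites, can be illustrated with standard spectral estimators. The sketch below uses Welch power spectra and magnitude-squared coherence on synthetic two-channel data sharing a 20 Hz component; sampling rate, band, and signals are hypothetical.

```python
import numpy as np
from scipy.signal import welch, coherence

fs = 1000.0                                    # hypothetical sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)
shared = np.sin(2 * np.pi * 20 * t)            # common 20 Hz component
lfp_a = shared + 0.5 * rng.standard_normal(t.size)
lfp_b = shared + 0.5 * rng.standard_normal(t.size)

f, pxx = welch(lfp_a, fs=fs, nperseg=1024)     # power spectrum of one site
f, cxy = coherence(lfp_a, lfp_b, fs=fs, nperseg=1024)

band = (f >= 15) & (f <= 25)
print("20 Hz band power:", pxx[band].mean(), "coherence:", cxy[band].mean())
```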

  7. Auditory evoked responses in musicians during passive vowel listening are modulated by functional connectivity between bilateral auditory-related brain regions.

    PubMed

    Kühnis, Jürg; Elmer, Stefan; Jäncke, Lutz

    2014-12-01

    Currently, there is striking evidence showing that professional musical training can substantially alter the response properties of auditory-related cortical fields. Such plastic changes have previously been shown not only to abet the processing of musical sounds, but likewise spectral and temporal aspects of speech. Therefore, here we used the EEG technique and measured a sample of musicians and nonmusicians while the participants were passively exposed to artificial vowels in the context of an oddball paradigm. Thereby, we evaluated whether increased intracerebral functional connectivity between bilateral auditory-related brain regions may promote sensory specialization in musicians, as reflected by altered cortical N1 and P2 responses. This assumption builds on the reasoning that sensory specialization is dependent, at least in part, on the amount of synchronization between the two auditory-related cortices. Results clearly revealed that auditory-evoked N1 responses were shaped by musical expertise. In addition, in line with our reasoning, musicians showed an overall increased intracerebral functional connectivity (as indexed by lagged phase synchronization) in theta, alpha, and beta bands. Finally, within-group correlative analyses indicated a relationship between intracerebral beta band connectivity and cortical N1 responses, but only within the musicians' group. Taken together, we provide the first electrophysiological evidence for a relationship between musical expertise, auditory-evoked brain responses, and intracerebral functional connectivity among auditory-related brain regions. PMID:24893742
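
    The connectivity measure used in this study is lagged phase synchronization. As a simplified, related illustration (it does not remove the zero-lag, volume-conduction component that the lagged measure discards), the sketch below computes a band-limited phase-locking value between two signals; the filter settings and synthetic signals are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(x, y, fs, band=(13.0, 30.0)):
    """Phase-locking value between two signals within a frequency band.

    A simplified stand-in for lagged phase synchronization, which additionally
    discards the zero-lag (volume-conduction) contribution."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phx = np.angle(hilbert(filtfilt(b, a, x)))
    phy = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phx - phy))))

fs = 250.0
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(3)
x = np.sin(2 * np.pi * 20 * t) + 0.3 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 20 * t + 0.8) + 0.3 * rng.standard_normal(t.size)
print(phase_locking_value(x, y, fs))
```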

  8. Auditory information processing during human sleep as revealed by event-related brain potentials.

    PubMed

    Atienza, M; Cantero, J L; Escera, C

    2001-11-01

    The main goal of this review is to elucidate to what extent pre-attentive auditory information processing is affected during human sleep. Evidence from event-related brain potential (ERP) studies indicates that auditory information processing is selectively affected, even at early phases, across the different stages of the sleep-wakefulness continuum. According to these studies, 3 main conclusions are drawn: (1) the sleeping brain is able to automatically detect stimulus occurrence and trigger an orienting response towards that stimulus if its degree of novelty is large; (2) auditory stimuli are represented in the auditory system and maintained for a period of time in sensory memory, making automatic change detection during sleep possible; and (3) there are specific brain mechanisms (sleep-specific ERP components associated with the presence of vertex waves and K-complexes) by which information processing can be improved during non-rapid eye movement sleep. However, the markedly altered amplitude and latency of the waking ERPs during the different stages of sleep suggest deficits in the building and maintenance of a neural representation of the stimulus, as well as in the process by which neural events lead to an orienting response toward such a stimulus. The deactivation during sleep of areas in the dorsolateral prefrontal cortex that contribute to the generation of these ERP components is hypothesized to be one of the main causes of the attenuated amplitude of these ERPs during human sleep. PMID:11682341

  9. Nonlocal atlas-guided multi-channel forest learning for human brain labeling

    PubMed Central

    Ma, Guangkai; Gao, Yaozong; Wu, Guorong; Wu, Ligang; Shen, Dinggang

    2016-01-01

    Purpose: It is important for many quantitative brain studies to label meaningful anatomical regions in MR brain images. However, due to high complexity of brain structures and ambiguous boundaries between different anatomical regions, the anatomical labeling of MR brain images is still quite a challenging task. In many existing label fusion methods, appearance information is widely used. However, since local anatomy in the human brain is often complex, the appearance information alone is limited in characterizing each image point, especially for identifying the same anatomical structure across different subjects. Recent progress in computer vision suggests that the context features can be very useful in identifying an object from a complex scene. In light of this, the authors propose a novel learning-based label fusion method by using both low-level appearance features (computed from the target image) and high-level context features (computed from warped atlases or tentative labeling maps of the target image). Methods: In particular, the authors employ a multi-channel random forest to learn the nonlinear relationship between these hybrid features and target labels (i.e., corresponding to certain anatomical structures). Specifically, at each of the iterations, the random forest will output tentative labeling maps of the target image, from which the authors compute spatial label context features and then use in combination with original appearance features of the target image to refine the labeling. Moreover, to accommodate the high inter-subject variations, the authors further extend their learning-based label fusion to a multi-atlas scenario, i.e., they train a random forest for each atlas and then obtain the final labeling result according to the consensus of results from all atlases. Results: The authors have comprehensively evaluated their method on both public LONI_LBPA40 and IXI datasets. To quantitatively evaluate the labeling accuracy, the authors use the

  10. Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.

    PubMed

    Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo

    2013-02-16

    We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common for all three tasks within each modality or interaction of processing task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of processing irrelevant speech presumably distracting the phonological task involving the letters. PMID:23261663

  11. Coding space-time stimulus dynamics in auditory brain maps

    PubMed Central

    Wang, Yunyan; Gutfreund, Yoram; Peña, José L.

    2014-01-01

    Sensory maps are often distorted representations of the environment, where ethologically-important ranges are magnified. The implication of a biased representation extends beyond increased acuity for having more neurons dedicated to a certain range. Because neurons are functionally interconnected, non-uniform representations influence the processing of high-order features that rely on comparison across areas of the map. Among these features are time-dependent changes of the auditory scene generated by moving objects. How sensory representation affects high order processing can be approached in the map of auditory space of the owl's midbrain, where locations in the front are over-represented. In this map, neurons are selective not only to location but also to location over time. The tuning to space over time leads to direction selectivity, which is also topographically organized. Across the population, neurons tuned to peripheral space are more selective to sounds moving into the front. The distribution of direction selectivity can be explained by spatial and temporal integration on the non-uniform map of space. Thus, the representation of space can induce biased computation of a second-order stimulus feature. This phenomenon is likely observed in other sensory maps and may be relevant for behavior. PMID:24782781

  12. Brain Network Interactions in Auditory, Visual and Linguistic Processing

    ERIC Educational Resources Information Center

    Horwitz, Barry; Braun, Allen R.

    2004-01-01

    In the paper, we discuss the importance of network interactions between brain regions in mediating performance of sensorimotor and cognitive tasks, including those associated with language processing. Functional neuroimaging, especially PET and fMRI, provide data that are obtained essentially simultaneously from much of the brain, and thus are…

  13. The relationship between phonological and auditory processing and brain organization in beginning readers

    PubMed Central

    PUGH, Kenneth R.; LANDI, Nicole; PRESTON, Jonathan L.; MENCL, W. Einar; AUSTIN, Alison C.; SIBLEY, Daragh; FULBRIGHT, Robert K.; SEIDENBERG, Mark S.; GRIGORENKO, Elena L.; CONSTABLE, R. Todd; MOLFESE, Peter; FROST, Stephen J.

    2012-01-01

    We employed brain-behavior analyses to explore the relationship between performance on tasks measuring phonological awareness, pseudoword decoding, and rapid auditory processing (all predictors of reading (dis)ability) and brain organization for print and speech in beginning readers. For print-related activation, we observed a shared set of skill-correlated regions, including left hemisphere temporoparietal and occipitotemporal sites, as well as inferior frontal, visual, visual attention, and subcortical components. For speech-related activation, shared variance among reading skill measures was most prominently correlated with activation in left hemisphere inferior frontal gyrus and precuneus. Implications for brain-based models of literacy acquisition are discussed. PMID:22572517

  14. Neurogenesis in the brain auditory pathway of a marsupial, the northern native cat (Dasyurus hallucatus)

    SciTech Connect

    Aitkin, L.; Nelson, J.; Farrington, M.; Swann, S. )

    1991-07-08

    Neurogenesis in the auditory pathway of the marsupial Dasyurus hallucatus was studied. Intraperitoneal injections of tritiated thymidine (20-40 microCi) were made into pouch-young varying from 1 to 56 days pouch-life. Animals were killed as adults and brain sections were prepared for autoradiography and counterstained with a Nissl stain. Neurons in the ventral cochlear nucleus were generated prior to 3 days pouch-life, in the superior olive at 5-7 days, and in the dorsal cochlear nucleus over a prolonged period. Inferior collicular neurogenesis lagged behind that in the medial geniculate, the latter taking place between days 3 and 9 and the former between days 7 and 22. Neurogenesis began in the auditory cortex on day 9 and was completed by about day 42. Thus neurogenesis was complete in the medullary auditory nuclei before that in the midbrain commenced, and in the medial geniculate before that in the auditory cortex commenced. The time course of neurogenesis in the auditory pathway of the native cat was very similar to that in another marsupial, the brushtail possum. For both, neurogenesis occurred earlier than in eutherian mammals of a similar size but was more protracted.

  15. Development and modulation of intrinsic membrane properties control the temporal precision of auditory brain stem neurons.

    PubMed

    Franzen, Delwen L; Gleiss, Sarah A; Berger, Christina; Kümpfbeck, Franziska S; Ammer, Julian J; Felmy, Felix

    2015-01-15

    Passive and active membrane properties determine the voltage responses of neurons. Within the auditory brain stem, refinements in these intrinsic properties during late postnatal development usually generate short integration times and precise action-potential generation. This developmentally acquired temporal precision is crucial for auditory signal processing. How the interactions of these intrinsic properties develop in concert to enable auditory neurons to transfer information with high temporal precision has not yet been elucidated in detail. Here, we show how the developmental interaction of intrinsic membrane parameters generates high firing precision. We performed in vitro recordings from neurons of postnatal days 9-28 in the ventral nucleus of the lateral lemniscus of Mongolian gerbils, an auditory brain stem structure that converts excitatory to inhibitory information with high temporal precision. During this developmental period, the input resistance and capacitance decrease, and action potentials acquire faster kinetics and enhanced precision. Depending on the stimulation time course, the input resistance and capacitance contribute differentially to action-potential thresholds. The decrease in input resistance, however, is sufficient to explain the enhanced action-potential precision. Alterations in passive membrane properties also interact with a developmental change in potassium currents to generate the emergence of the mature firing pattern, characteristic of coincidence-detector neurons. Cholinergic receptor-mediated depolarizations further modulate this intrinsic excitability profile by eliciting changes in the threshold and firing pattern, irrespective of the developmental stage. Thus our findings reveal how intrinsic membrane properties interact developmentally to promote temporally precise information processing. PMID:25355963
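
    The role of input resistance and capacitance in setting integration time can be illustrated with a passive RC membrane: the time constant is τ = R_in · C_m and the step response is V(t) = I · R_in · (1 − e^(−t/τ)). The sketch below compares two hypothetical parameter sets loosely standing in for immature and mature neurons; real neurons of this nucleus also have the active conductances discussed in the abstract.

```python
import numpy as np

def passive_membrane_response(i_amp_na, r_in_mohm, c_m_pf, t_ms):
    """Voltage response (mV) of a passive RC membrane to a current step.

    V(t) = I * R_in * (1 - exp(-t / tau)), tau = R_in * C_m.
    Units: current in nA, resistance in MOhm, capacitance in pF, time in ms."""
    tau_ms = r_in_mohm * c_m_pf * 1e-3          # MOhm * pF = microseconds -> ms
    v_mv = i_amp_na * r_in_mohm * (1 - np.exp(-t_ms / tau_ms))
    return v_mv, tau_ms

t = np.linspace(0, 20, 200)                               # ms
young = passive_membrane_response(0.2, 300.0, 30.0, t)    # hypothetical immature values
mature = passive_membrane_response(0.2, 60.0, 15.0, t)    # hypothetical mature values
print("tau immature:", young[1], "ms; tau mature:", mature[1], "ms")
```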

  16. Localized Brain Activation Related to the Strength of Auditory Learning in a Parrot

    PubMed Central

    Matsushita, Masanori; Matsuda, Yasushi; Takeuchi, Hiro-Aki; Satoh, Ryohei; Watanabe, Aiko; Zandbergen, Matthijs A.; Manabe, Kazuchika; Kawashima, Takashi; Bolhuis, Johan J.

    2012-01-01

    Parrots and songbirds learn their vocalizations from a conspecific tutor, much like human infants acquire spoken language. Parrots can learn human words and it has been suggested that they can use them to communicate with humans. The caudomedial pallium in the parrot brain is homologous with that of songbirds, and analogous to the human auditory association cortex, involved in speech processing. Here we investigated neuronal activation, measured as expression of the protein product of the immediate early gene ZENK, in relation to auditory learning in the budgerigar (Melopsittacus undulatus), a parrot. Budgerigar males successfully learned to discriminate two Japanese words spoken by another male conspecific. Re-exposure to the two discriminanda led to increased neuronal activation in the caudomedial pallium, but not in the hippocampus, compared to untrained birds that were exposed to the same words, or were not exposed to words. Neuronal activation in the caudomedial pallium of the experimental birds was correlated significantly and positively with the percentage of correct responses in the discrimination task. These results suggest that in a parrot, the caudomedial pallium is involved in auditory learning. Thus, in parrots, songbirds and humans, analogous brain regions may contain the neural substrate for auditory learning and memory. PMID:22701714

  17. Brain stem auditory evoked responses in human infants and adults

    NASA Technical Reports Server (NTRS)

    Hecox, K.; Galambos, R.

    1974-01-01

    Brain stem evoked potentials were recorded by conventional scalp electrodes in infants (3 weeks to 3 years of age) and adults. The latency of one of the major response components (wave V) is shown to be a function both of click intensity and the age of the subject; this latency at a given signal strength shortens postnatally to reach the adult value (about 6 msec) by 12 to 18 months of age. The demonstrated reliability and limited variability of these brain stem electrophysiological responses provide the basis for an optimistic estimate of their usefulness as an objective method for assessing hearing in infants and adults.

  18. Inconsistencies in the correlation between loss of brain stem auditory evoked response waves and postoperative deafness.

    PubMed

    Mustain, W D; al-Mefty, O; Anand, V K

    1992-07-01

    This case underscores the difficulty of predicting postoperative hearing status from brain stem auditory evoked response (BAER) monitoring when wave I is preserved and all later waves are lost. During an operation involving the base of the skull, sudden and irreversible loss of all BAER waves beyond wave I occurred unilaterally. Wave I was preserved, with reduced amplitude and minimal latency shift. There was no permanent postoperative hearing sensitivity loss or speech discrimination loss. PMID:1494930

  19. Endogenous Delta/Theta Sound-Brain Phase Entrainment Accelerates the Buildup of Auditory Streaming.

    PubMed

    Riecke, Lars; Sack, Alexander T; Schroeder, Charles E

    2015-12-21

    In many natural listening situations, meaningful sounds (e.g., speech) fluctuate in slow rhythms among other sounds. When a slow rhythmic auditory stream is selectively attended, endogenous delta (1‒4 Hz) oscillations in auditory cortex may shift their timing so that higher-excitability neuronal phases become aligned with salient events in that stream [1, 2]. As a consequence of this stream-brain phase entrainment [3], these events are processed and perceived more readily than temporally non-overlapping events [4-11], essentially enhancing the neural segregation between the attended stream and temporally noncoherent streams [12]. Stream-brain phase entrainment is robust to acoustic interference [13-20] provided that target stream-evoked rhythmic activity can be segregated from noncoherent activity evoked by other sounds [21], a process that usually builds up over time [22-27]. However, it has remained unclear whether stream-brain phase entrainment functionally contributes to this buildup of rhythmic streams or whether it is merely an epiphenomenon of it. Here, we addressed this issue directly by experimentally manipulating endogenous stream-brain phase entrainment in human auditory cortex with non-invasive transcranial alternating current stimulation (TACS) [28-30]. We assessed the consequences of these manipulations on the perceptual buildup of the target stream (the time required to recognize its presence in a noisy background), using behavioral measures in 20 healthy listeners performing a naturalistic listening task. Experimentally induced cyclic 4-Hz variations in stream-brain phase entrainment reliably caused a cyclic 4-Hz pattern in perceptual buildup time. Our findings demonstrate that strong endogenous delta/theta stream-brain phase entrainment accelerates the perceptual emergence of task-relevant rhythmic streams in noisy environments. PMID:26628008

  20. The SRI24 multichannel atlas of normal adult human brain structure.

    PubMed

    Rohlfing, Torsten; Zahr, Natalie M; Sullivan, Edith V; Pfefferbaum, Adolf

    2010-05-01

    This article describes the SRI24 atlas, a new standard reference system of normal human brain anatomy, that was created using template-free population registration of high-resolution magnetic resonance images acquired at 3T in a group of 24 normal control subjects. The atlas comprises anatomical channels (T1, T2, and proton density weighted), diffusion-related channels (fractional anisotropy, mean diffusivity, longitudinal diffusivity, mean diffusion-weighted image), tissue channels (CSF probability, gray matter probability, white matter probability, tissue labels), and two cortical parcellation maps. The SRI24 atlas enables multichannel atlas-to-subject image registration. It is uniquely versatile in that it is equally suited for the two fundamentally different atlas applications: label propagation and spatial normalization. Label propagation, herein demonstrated using diffusion tensor image fiber tracking, is enabled by the increased sharpness of the SRI24 atlas compared with other available atlases. Spatial normalization, herein demonstrated using data from a young-old group comparison study, is enabled by its unbiased average population shape property. For both propagation and normalization, we also report the results of quantitative comparisons with seven other published atlases: Colin27, MNI152, ICBM452 (warp5 and air12), and LPBA40 (SPM5, FLIRT, AIR). Our results suggest that the SRI24 atlas, although based on 3T MR data, allows equally accurate spatial normalization of data acquired at 1.5T as the comparison atlases, all of which are based on 1.5T data. Furthermore, the SRI24 atlas is as suitable for label propagation as the comparison atlases and detailed enough to allow delineation of anatomical structures for this purpose directly in the atlas. PMID:20017133
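
    The label-propagation application mentioned above amounts to resampling a discrete atlas label volume into a subject's space once a registration has been computed. The following is a minimal sketch of that resampling step only, not the atlas's own pipeline; the affine mapping, label volume, and function name are illustrative assumptions, and nearest-neighbor interpolation is used so that label values stay intact.

      import numpy as np
      from scipy import ndimage

      def propagate_labels(atlas_labels, subject_to_atlas_affine, offset, subject_shape):
          # Resample a discrete atlas label volume into subject space.
          # The 3x3 matrix and offset map subject voxel coordinates to atlas voxel
          # coordinates and are assumed to come from a separate registration step.
          # order=0 (nearest neighbor) keeps label values intact.
          return ndimage.affine_transform(
              atlas_labels, subject_to_atlas_affine, offset=offset,
              output_shape=subject_shape, order=0, mode="constant", cval=0)

      # Toy example: an identity mapping simply copies the labels.
      labels = np.zeros((8, 8, 8), dtype=np.int16)
      labels[2:5, 2:5, 2:5] = 7
      out = propagate_labels(labels, np.eye(3), offset=0.0, subject_shape=labels.shape)
      assert (out == labels).all()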

  1. The SRI24 Multi-Channel Atlas of Normal Adult Human Brain Structure

    PubMed Central

    Rohlfing, Torsten; Zahr, Natalie M.; Sullivan, Edith V.; Pfefferbaum, Adolf

    2010-01-01

    This paper describes the SRI24 atlas, a new standard reference system of normal human brain anatomy, that was created using template-free population registration of high-resolution magnetic resonance images acquired at 3T in a group of 24 normal control subjects. The atlas comprises anatomical channels (T1, T2, and proton density weighted), diffusion-related channels (fractional anisotropy, mean diffusivity, longitudinal diffusivity, mean diffusion-weighted image), tissue channels (CSF probability, gray matter probability, white matter probability, tissue labels), and two cortical parcellation maps. The SRI24 atlas enables multi-channel atlas-to-subject image registration. It is uniquely versatile in that it is equally suited for the two fundamentally different atlas applications: label propagation and spatial normalization. Label propagation, herein demonstrated using DTI fiber tracking, is enabled by the increased sharpness of the SRI24 atlas compared with other available atlases. Spatial normalization, herein demonstrated using data from a young-old group comparison study, is enabled by its unbiased average population shape property. For both propagation and normalization, we also report the results of quantitative comparisons with seven other published atlases: Colin27, MNI152, ICBM452 (warp5 and air12), and LPBA40 (SPM5, FLIRT, AIR). Our results suggest that the SRI24 atlas, although based on 3T MR data, allows equally accurate spatial normalization of data acquired at 1.5T as the comparison atlases, all of which are based on 1.5T data. Furthermore, the SRI24 atlas is as suitable for label propagation as the comparison atlases and detailed enough to allow delineation of anatomical structures for this purpose directly in the atlas. PMID:20017133

  2. Can an auditory illusion trick the brain into turning down tinnitus?

    PubMed

    Fletcher, M D; Wiggins, I M

    2014-07-01

    Tinnitus, the phantom perception of sound with no external source, affects an estimated 10-15% of the adult population. Current treatments for this oftentimes distressing condition are of limited effectiveness. The "central gain" model proposes that tinnitus arises from an increase in the responsiveness, or gain, of neurons in central auditory pathways, triggered by damage to the auditory periphery. It has been suggested that tinnitus might be treated by compensating for the peripheral damage, thereby restoring normal levels of input to the central pathways, and hence reducing central gain. Unfortunately, when tinnitus originates with permanent damage to the auditory periphery, it may be impossible to compensate for this damage directly. However, we hypothesize that tinnitus may be treated by tricking the brain into believing that it temporarily receives normal levels of input at frequencies where peripheral damage has occurred. We identify an auditory illusion that seems capable, in principle, of achieving this objective. If effective, this approach would offer a safe, accessible, and non-invasive treatment for tinnitus. PMID:24767808

  3. Can You Hear Me Now? Musical Training Shapes Functional Brain Networks for Selective Auditory Attention and Hearing Speech in Noise

    PubMed Central

    Strait, Dana L.; Kraus, Nina

    2011-01-01

    Even in the quietest of rooms, our senses are perpetually inundated by a barrage of sounds, requiring the auditory system to adapt to a variety of listening conditions in order to extract signals of interest (e.g., one speaker's voice amidst others). Brain networks that promote selective attention are thought to sharpen the neural encoding of a target signal, suppressing competing sounds and enhancing perceptual performance. Here, we ask: does musical training benefit cortical mechanisms that underlie selective attention to speech? To answer this question, we assessed the impact of selective auditory attention on cortical auditory-evoked response variability in musicians and non-musicians. Outcomes indicate strengthened brain networks for selective auditory attention in musicians in that musicians but not non-musicians demonstrate decreased prefrontal response variability with auditory attention. Results are interpreted in the context of previous work documenting perceptual and subcortical advantages in musicians for the hearing and neural encoding of speech in background noise. Musicians’ neural proficiency for selectively engaging and sustaining auditory attention to language indicates a potential benefit of music for auditory training. Given the importance of auditory attention for the development and maintenance of language-related skills, musical training may aid in the prevention, habilitation, and remediation of individuals with a wide range of attention-based language, listening and learning impairments. PMID:21716636

  4. The WIN-speller: a new intuitive auditory brain-computer interface spelling application

    PubMed Central

    Kleih, Sonja C.; Herweg, Andreas; Kaufmann, Tobias; Staiger-Sälzer, Pit; Gerstner, Natascha; Kübler, Andrea

    2015-01-01

    The objective of this study was to test the usability of a new auditory Brain-Computer Interface (BCI) application for communication. We introduce a word-based, intuitive auditory spelling paradigm, the WIN-speller. In the WIN-speller, letters are grouped by words, such as the word KLANG representing the letters A, G, K, L, and N. Thereby, the decoding step between perceiving a code and translating it to the stimuli it represents becomes superfluous. We tested 11 healthy volunteers and four end-users with motor impairment in the copy spelling mode. Spelling was successful with an average accuracy of 84% in the healthy sample. Three of the end-users communicated with average accuracies of 80% or higher while one user was not able to communicate reliably. Even though further evaluation is required, the WIN-speller represents a potential alternative for BCI-based communication in end-users. PMID:26500476
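
    The word-based coding idea can be illustrated with a short sketch. Only the code word KLANG and its letters are taken from the abstract; the other code words below are invented placeholders, not the WIN-speller's actual vocabulary.

      # Hypothetical code book for a word-based auditory speller.
      CODE_WORDS = {
          "KLANG": set("AGKLN"),   # from the abstract: KLANG stands for A, G, K, L, N
          "WORT":  set("ORTW"),    # invented placeholder
          "BUCH":  set("BCHU"),    # invented placeholder
      }

      def select_letter(word_choice, letter_choice):
          # Two-step selection: the user first attends to a code word, then to one
          # of the letters that word represents; no abstract code must be memorized.
          letters = CODE_WORDS[word_choice]
          if letter_choice not in letters:
              raise ValueError(f"{letter_choice!r} is not represented by {word_choice!r}")
          return letter_choice

      # Attending to KLANG and then to K spells the letter K.
      assert select_letter("KLANG", "K") == "K"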

  5. The Wellcome Prize Lecture. A map of auditory space in the mammalian brain: neural computation and development.

    PubMed

    King, A J

    1993-09-01

    The experiments described in this review have demonstrated that the SC contains a two-dimensional map of auditory space, which is synthesized within the brain using a combination of monaural and binaural localization cues. There is also an adaptive fusion of auditory and visual space in this midbrain nucleus, providing for a common access to the motor pathways that control orientation behaviour. This necessitates a highly plastic relationship between the visual and auditory systems, both during postnatal development and in adult life. Because of the independent mobility of different sense organs, gating mechanisms are incorporated into the auditory representation to provide up-to-date information about the spatial orientation of the eyes and ears. The SC therefore provides a valuable model system for studying a number of important issues in brain function, including the neural coding of sound location, the co-ordination of spatial information between different sensory systems, and the integration of sensory signals with motor outputs. PMID:8240794

  6. Psychophysical and neural correlates of noise-induced tinnitus in animals: Intra- and inter-auditory and non-auditory brain structure studies.

    PubMed

    Zhang, Jinsheng; Luo, Hao; Pace, Edward; Li, Liang; Liu, Bin

    2016-04-01

    Tinnitus, a ringing in the ear or head without an external sound source, is a prevalent health problem. It is often associated with a number of limbic-associated disorders such as anxiety, sleep disturbance, and emotional distress. Thus, to investigate tinnitus, it is important to consider both auditory and non-auditory brain structures. This paper summarizes the psychophysical, immunocytochemical and electrophysiological evidence found in rats or hamsters with behavioral evidence of tinnitus. Behaviorally, we tested for tinnitus using a conditioned suppression/avoidance paradigm, gap detection acoustic reflex behavioral paradigm, and our newly developed conditioned licking suppression paradigm. Our new tinnitus behavioral paradigm requires relatively short baseline training, examines frequency specification of tinnitus perception, and achieves sensitive tinnitus testing at an individual level. To test for tinnitus-related anxiety and cognitive impairment, we used the elevated plus maze and Morris water maze. Our results showed that not all animals with tinnitus demonstrate anxiety and cognitive impairment. Immunocytochemically, we found that animals with tinnitus manifested increased Fos-like immunoreactivity (FLI) in both auditory and non-auditory structures. The manner in which FLI appeared suggests that lower brainstem structures may be involved in acute tinnitus whereas the midbrain and cortex are involved in more chronic tinnitus. Meanwhile, animals with tinnitus also manifested increased FLI in non-auditory brain structures that are involved in autonomic reactions, stress, arousal and attention. Electrophysiologically, we found that rats with tinnitus developed increased spontaneous firing in the auditory cortex (AC) and amygdala (AMG), as well as intra- and inter-AC and AMG neurosynchrony, which demonstrate that tinnitus may be actively produced and maintained by the interactions between the AC and AMG. PMID:26299842

  7. Responses to Vocalizations and Auditory Controls in the Human Newborn Brain

    PubMed Central

    Cristia, Alejandrina; Minagawa, Yasuyo; Dupoux, Emmanuel

    2014-01-01

    In the adult brain, speech can recruit a brain network that is overlapping with, but not identical to, that involved in perceiving non-linguistic vocalizations. Using the same stimuli that had been presented to human 4-month-olds and adults, as well as adult macaques, we sought to shed light on the cortical networks engaged when human newborns process diverse vocalization types. Near infrared spectroscopy was used to register the response of 40 newborns' perisylvian regions when stimulated with speech, human and macaque emotional vocalizations, as well as auditory controls where the formant structure was destroyed but the long-term spectrum was retained. Left fronto-temporal and parietal regions were significantly activated in the comparison of stimulation versus rest, with unclear selectivity in cortical activation. These results for the newborn brain are qualitatively and quantitatively compared with previously reported findings in newborns, older human infants, adult humans, and adult macaques. PMID:25517997

  8. Synchrony of auditory brain responses predicts behavioral ability to keep still in children with autism spectrum disorder: Auditory-evoked response in children with autism spectrum disorder.

    PubMed

    Yoshimura, Yuko; Kikuchi, Mitsuru; Hiraishi, Hirotoshi; Hasegawa, Chiaki; Takahashi, Tetsuya; Remijn, Gerard B; Oi, Manabu; Munesue, Toshio; Higashida, Haruhiro; Minabe, Yoshio

    2016-01-01

    The auditory-evoked P1m, recorded by magnetoencephalography, reflects a central auditory processing ability in human children. One recent study revealed that asynchrony of P1m between the right and left hemispheres reflected a central auditory processing disorder (i.e., attention deficit hyperactivity disorder, ADHD) in children. However, to date, the relationship between auditory P1m right-left hemispheric synchronization and the comorbidity of hyperactivity in children with autism spectrum disorder (ASD) is unknown. In this study, based on a previous report of P1m asynchrony in children with ADHD, we investigated whether voice-evoked P1m right-left hemispheric synchronization is related to the symptom of hyperactivity in children with ASD. In addition to synchronization, we investigated the right-left hemispheric lateralization. Our findings failed to demonstrate significant differences in these values between ASD children with and without the symptom of hyperactivity, which was evaluated using the Autism Diagnostic Observational Schedule, Generic (ADOS-G) subscale. However, there was a significant correlation between the degrees of hemispheric synchronization and the ability to keep still during 12-minute MEG recording periods. Our results also suggested that asynchrony in the bilateral brain auditory processing system is associated with ADHD-like symptoms in children with ASD. PMID:27551667

  9. Connectivity in the human brain dissociates entropy and complexity of auditory inputs.

    PubMed

    Nastase, Samuel A; Iacovella, Vittorio; Davis, Ben; Hasson, Uri

    2015-03-01

    Complex systems are described according to two central dimensions: (a) the randomness of their output, quantified via entropy; and (b) their complexity, which reflects the organization of a system's generators. Whereas some approaches hold that complexity can be reduced to uncertainty or entropy, an axiom of complexity science is that signals with very high or very low entropy are generated by relatively non-complex systems, while complex systems typically generate outputs with entropy peaking between these two extremes. In understanding their environment, individuals would benefit from coding for both input entropy and complexity; entropy indexes uncertainty and can inform probabilistic coding strategies, whereas complexity reflects a concise and abstract representation of the underlying environmental configuration, which can serve independent purposes, e.g., as a template for generalization and rapid comparisons between environments. Using functional neuroimaging, we demonstrate that, in response to passively processed auditory inputs, functional integration patterns in the human brain track both the entropy and complexity of the auditory signal. Connectivity between several brain regions scaled monotonically with input entropy, suggesting sensitivity to uncertainty, whereas connectivity between other regions tracked entropy in a convex manner consistent with sensitivity to input complexity. These findings suggest that the human brain simultaneously tracks the uncertainty of sensory data and effectively models their environmental generators. PMID:25536493

  10. Connectivity in the human brain dissociates entropy and complexity of auditory inputs☆

    PubMed Central

    Nastase, Samuel A.; Iacovella, Vittorio; Davis, Ben; Hasson, Uri

    2015-01-01

    Complex systems are described according to two central dimensions: (a) the randomness of their output, quantified via entropy; and (b) their complexity, which reflects the organization of a system's generators. Whereas some approaches hold that complexity can be reduced to uncertainty or entropy, an axiom of complexity science is that signals with very high or very low entropy are generated by relatively non-complex systems, while complex systems typically generate outputs with entropy peaking between these two extremes. In understanding their environment, individuals would benefit from coding for both input entropy and complexity; entropy indexes uncertainty and can inform probabilistic coding strategies, whereas complexity reflects a concise and abstract representation of the underlying environmental configuration, which can serve independent purposes, e.g., as a template for generalization and rapid comparisons between environments. Using functional neuroimaging, we demonstrate that, in response to passively processed auditory inputs, functional integration patterns in the human brain track both the entropy and complexity of the auditory signal. Connectivity between several brain regions scaled monotonically with input entropy, suggesting sensitivity to uncertainty, whereas connectivity between other regions tracked entropy in a convex manner consistent with sensitivity to input complexity. These findings suggest that the human brain simultaneously tracks the uncertainty of sensory data and effectively models their environmental generators. PMID:25536493
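
    The entropy dimension discussed in the two records above can be illustrated with a minimal sketch: Shannon entropy of an amplitude-binned signal, with a regular tone yielding lower entropy than broadband noise. The binning scheme and simulated signals are illustrative choices, not the authors' actual entropy estimator.

      import numpy as np

      def shannon_entropy(samples, n_bins=16):
          # Shannon entropy (in bits) of a signal after uniform amplitude binning.
          counts, _ = np.histogram(samples, bins=n_bins)
          p = counts / counts.sum()
          p = p[p > 0]                         # drop empty bins
          return float(-(p * np.log2(p)).sum())

      rng = np.random.default_rng(0)
      t = np.arange(8000) / 8000.0
      tone = np.sin(2 * np.pi * 440 * t)       # highly regular signal
      noise = rng.uniform(-1.0, 1.0, t.size)   # maximally random signal
      print(shannon_entropy(tone), shannon_entropy(noise))  # the noise scores higher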

  11. Effects of Visual and Auditory Background on Reading Achievement Test Performance of Brain-Injured and Non Brain-Injured Children.

    ERIC Educational Resources Information Center

    Carter, John L.

    Forty-two brain injured boys and 42 non brain injured boys (aged 11-6 to 12-6) were tested to determine the effects of increasing amounts of visual and auditory distraction on reading performance. The Stanford Achievement Reading Comprehension Test was administered with three degrees of distraction. The visual distraction consisted of either very…

  12. An Auditory-Tactile Visual Saccade-Independent P300 Brain-Computer Interface.

    PubMed

    Yin, Erwei; Zeyl, Timothy; Saab, Rami; Hu, Dewen; Zhou, Zongtan; Chau, Tom

    2016-02-01

    Most P300 event-related potential (ERP)-based brain-computer interface (BCI) studies focus on gaze shift-dependent BCIs, which cannot be used by people who have lost voluntary eye movement. However, the performance of visual saccade-independent P300 BCIs is generally poor. To improve saccade-independent BCI performance, we propose a bimodal P300 BCI approach that simultaneously employs auditory and tactile stimuli. The proposed P300 BCI is a vision-independent system because no visual interaction is required of the user. Specifically, we designed a direction-congruent bimodal paradigm by randomly and simultaneously presenting auditory and tactile stimuli from the same direction. Furthermore, the channels and number of trials were tailored to each user to improve online performance. With 12 participants, the average online information transfer rate (ITR) of the bimodal approach improved by 45.43% and 51.05% over that attained, respectively, with the auditory and tactile approaches individually. Importantly, the average online ITR of the bimodal approach, including the break time between selections, reached 10.77 bits/min. These findings suggest that the proposed bimodal system holds promise as a practical visual saccade-independent P300 BCI. PMID:26678249
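
    The information transfer rate figures quoted above are conventionally obtained from the Wolpaw formula, which converts selection accuracy and speed into bits per minute. The sketch below shows that computation; the target count, accuracy, and selection time are illustrative numbers, not the study's exact parameters.

      import math

      def wolpaw_itr(n_targets, accuracy, seconds_per_selection):
          # Information transfer rate in bits/min (Wolpaw formula).
          # accuracy is the fraction of correct selections, expected in (0, 1].
          p, n = accuracy, n_targets
          if p >= 1.0:
              bits = math.log2(n)
          else:
              bits = (math.log2(n) + p * math.log2(p)
                      + (1.0 - p) * math.log2((1.0 - p) / (n - 1)))
          return bits * 60.0 / seconds_per_selection

      # Illustrative numbers only: 8 targets, 85% accuracy, 10 s per selection.
      print(round(wolpaw_itr(8, 0.85, 10.0), 2))   # roughly 11.8 bits/min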

  13. Specialization of the auditory processing in harbor porpoise, characterized by brain-stem potentials

    NASA Astrophysics Data System (ADS)

    Bibikov, Nikolay G.

    2002-05-01

    Brain-stem auditory evoked potentials (BAEPs) were recorded from the head surface of three awake harbor porpoises (Phocoena phocoena). A silver disk placed on the skin surface above the vertex bone was used as the active electrode. The experiments were performed at the Karadag biological station (the Crimea peninsula). Clicks and tone bursts were used as stimuli. The temporal and frequency selectivity of the auditory system was estimated using the methods of simultaneous and forward masking. An evident minimum of the BAEP thresholds was observed in the range of 125-135 kHz, where the main spectral component of the species-specific echolocation signal is located. In this frequency range the tonal forward masking demonstrated a strong frequency selectivity. An off-response to such tone bursts was a typical observation. An evident BAEP could be recorded up to frequencies of 190-200 kHz; however, outside the acoustical fovea the frequency selectivity was rather poor. Temporal resolution was estimated by measuring BAEP recovery functions for double clicks, double tone bursts, and double noise bursts. The half-time of BAEP recovery was in the range of 0.1-0.2 ms. The data indicate that the porpoise auditory system is strongly adapted to detect closely spaced ultrasonic sounds such as species-specific locating signals and echoes.
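
    The recovery half-time reported above is typically read off a fitted recovery curve relating the second response's relative amplitude to the inter-stimulus interval. A minimal sketch of such a fit follows; the exponential model and the data points are invented for illustration, although the resulting half-time falls in the 0.1-0.2 ms range mentioned in the abstract.

      import numpy as np
      from scipy.optimize import curve_fit

      def recovery(interval_ms, half_time_ms):
          # Fraction of the single-stimulus response recovered at a given interval;
          # by construction, half of the response has recovered at the half-time.
          return 1.0 - 0.5 ** (interval_ms / half_time_ms)

      # Invented example data: relative amplitude of the second response.
      intervals = np.array([0.05, 0.1, 0.2, 0.4, 0.8])        # ms between stimuli
      amplitudes = np.array([0.20, 0.38, 0.60, 0.85, 0.97])    # fraction of control

      (half_time,), _ = curve_fit(recovery, intervals, amplitudes, p0=[0.15])
      print(f"estimated recovery half-time: {half_time:.2f} ms")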

  14. Auditory brain development in premature infants: the importance of early experience.

    PubMed

    McMahon, Erin; Wintermark, Pia; Lahav, Amir

    2012-04-01

    Preterm infants in the neonatal intensive care unit (NICU) often close their eyes in response to bright lights, but they cannot close their ears in response to loud sounds. The sudden transition from the womb to the overly noisy world of the NICU increases the vulnerability of these high-risk newborns. There is a growing concern that the excess noise typically experienced by NICU infants disrupts their growth and development, putting them at risk for hearing, language, and cognitive disabilities. Preterm neonates are especially sensitive to noise because their auditory system is at a critical period of neurodevelopment, and they are no longer shielded by maternal tissue. This paper discusses the developmental milestones of the auditory system and suggests ways to enhance the quality control and type of sounds delivered to NICU infants. We argue that positive auditory experience is essential for early brain maturation and may be a contributing factor for healthy neurodevelopment. Further research is needed to optimize the hospital environment for preterm newborns and to increase their potential to develop into healthy children. PMID:22524335

  15. Simultaneous EEG-fMRI brain signatures of auditory cue utilization

    PubMed Central

    Scharinger, Mathias; Herrmann, Björn; Nierhaus, Till; Obleser, Jonas

    2014-01-01

    Optimal utilization of acoustic cues during auditory categorization is a vital skill, particularly when informative cues become occluded or degraded. Consequently, the acoustic environment requires flexible choosing and switching amongst available cues. The present study targets the brain functions underlying such changes in cue utilization. Participants performed a categorization task with immediate feedback on acoustic stimuli from two categories that varied in duration and spectral properties, while we simultaneously recorded Blood Oxygenation Level Dependent (BOLD) responses in fMRI and electroencephalograms (EEGs). In the first half of the experiment, categories could be best discriminated by spectral properties. Halfway through the experiment, spectral degradation rendered the stimulus duration the more informative cue. Behaviorally, degradation decreased the likelihood of utilizing spectral cues. Spectrally degrading the acoustic signal led to increased alpha power compared to nondegraded stimuli. The EEG-informed fMRI analyses revealed that alpha power correlated with BOLD changes in inferior parietal cortex and right posterior superior temporal gyrus (including planum temporale). In both areas, spectral degradation led to a weaker coupling of BOLD response to behavioral utilization of the spectral cue. These data provide converging evidence from behavioral modeling, electrophysiology, and hemodynamics that (a) increased alpha power mediates the inhibition of uninformative (here spectral) stimulus features, and that (b) the parietal attention network supports optimal cue utilization in auditory categorization. The results highlight the complex cortical processing of auditory categorization under realistic listening challenges. PMID:24926232
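
    The alpha-power measure that drives the EEG-informed fMRI analysis can be estimated per trial from the power spectral density. The sketch below shows one common way to do this (a Welch periodogram averaged over an 8-12 Hz band); the sampling rate, band limits, and simulated signal are assumptions for illustration, not the study's exact pipeline.

      import numpy as np
      from scipy.signal import welch

      def alpha_power(eeg_trial, fs, band=(8.0, 12.0)):
          # Mean spectral power in the alpha band for one single-channel EEG trial.
          freqs, psd = welch(eeg_trial, fs=fs, nperseg=min(len(eeg_trial), int(2 * fs)))
          in_band = (freqs >= band[0]) & (freqs <= band[1])
          return float(psd[in_band].mean())

      fs = 250.0                                 # assumed sampling rate (Hz)
      t = np.arange(0, 2.0, 1.0 / fs)
      rng = np.random.default_rng(1)
      trial = np.sin(2 * np.pi * 10.0 * t) + 0.5 * rng.standard_normal(t.size)
      print(alpha_power(trial, fs))              # dominated by the 10 Hz component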

  16. Brain responses to altered auditory feedback during musical keyboard production: an fMRI study.

    PubMed

    Pfordresher, Peter Q; Mantell, James T; Brown, Steven; Zivadinov, Robert; Cox, Jennifer L

    2014-03-27

    Alterations of auditory feedback during piano performance can be profoundly disruptive. Furthermore, different alterations can yield different types of disruptive effects. Whereas alterations of feedback synchrony disrupt performed timing, alterations of feedback pitch contents can disrupt accuracy. The current research tested whether these behavioral dissociations correlate with differences in brain activity. Twenty pianists performed simple piano keyboard melodies while being scanned in a 3-T magnetic resonance imaging (MRI) scanner. In different conditions they experienced normal auditory feedback, altered auditory feedback (asynchronous delays or altered pitches), or control conditions that excluded movement or sound. Behavioral results replicated past findings. Neuroimaging data suggested that asynchronous delays led to increased activity in Broca's area and its right homologue, whereas disruptive alterations of pitch elevated activations in the cerebellum, area Spt, inferior parietal lobule, and the anterior cingulate cortex. Both disruptive conditions increased activations in the supplementary motor area. These results provide the first evidence of neural responses associated with perception/action mismatch during keyboard production. PMID:24513403

  17. Auditory vocabulary of the right hemisphere following brain bisection or hemidecortication.

    PubMed

    Zaidel, E

    1976-09-01

    Unilateral scores of two commissurotomy and three (one left and two right) hemispherectomy patients were obtained on standardized auditory language comprehension tests which use pointing responses to a pictorial array. Unilateral performance by the commissurotomy patients was achieved by restricting the pictorial array to one visual half field, using a novel contact lens system which permits ocular scanning of the lateralized stimulus and self-monitoring of task performance. Using the Peabody and Ammons Picture Vocabulary Tests, the auditory vocabulary in the disconnected or isolated right hemispheres was found to be equivalent to that of normal subjects of ages 8:1 to 16:3 with a mean of 11:7 (eleven years and 7 months old). At the same time, standardized aphasia tests showed that the picture vocabulary in the right hemispheres is similar to that of a heterogeneous population of aphasics, even though the right hemispheres did not behave quite like any classical aphasic diagnostic group. No significant differences were found between right hemisphere comprehension of object vs. action names. Results indicated that vocabulary as a function of word frequency followed the same pattern in the right and left hemisphere although the right hemisphere was consistently lower. This parallel between the two hemispheres was conjectured to reflect some similar or even shared lexical structures in the two hemispheres. Together with other data on the performance of the right hemisphere on the Token Test (Zaidel, 1976), the results suggest a complex model of the development of language laterality in the brain, in which some, but not all, auditory language functions continue to develop in the right hemisphere past what is generally regarded as the critical period for language acquisition. In general, auditory language comprehension is better characterized as that of an "average aphasic" than that of a child of a specific age. PMID:1000988

  18. Age-related Changes in Auditory Nerve – Inner Hair Cell Connections, Hair Cell Numbers, Auditory Brain Stem Response and Gap Detection in UM-HET4 Mice

    PubMed Central

    Altschuler, RA; Dolan, DF; Halsey, K; Kanicki, A; Deng, N; Martin, C; Eberle, J; Kohrman, DC; Miller, RA; Schacht, J

    2015-01-01

    This study compared the timing of appearance of three components of age-related hearing loss that determine the pattern and severity of presbycusis: the functional and structural pathologies of sensory cells and neurons and changes in Gap Detection, the latter as an indicator of auditory temporal processing. Using UM-HET4 mice, genetically heterogeneous mice derived from four inbred strains, we studied the integrity of inner and outer hair cells by position along the cochlear spiral, inner hair cell-auditory nerve connections, spiral ganglion neurons, and determined auditory thresholds, as well as pre-pulse and gap inhibition of the acoustic startle reflex (ASR). Comparisons were made between mice of 5-7, 22-24 and 27-29 months of age. There was individual variability among mice in the onset and extent of age-related auditory pathology. At 22-24 months of age a moderate to large loss of outer hair cells was restricted to the apical third of the cochlea and threshold shifts in auditory brain stem response were minimal. There was also a large and significant loss of inner hair cell – auditory nerve connections and a significant reduction in Gap Detection. The expression of Ntf3 in the cochlea was significantly reduced. At 27-29 months of age there was no further change in the mean number of synaptic connections per inner hair cell or in gap detection, but a moderate to large loss of outer hair cells was found across all cochlear turns as well as significantly increased ABR threshold shifts at 4, 12, 24 and 48 kHz. A statistical analysis of correlations on an individual animal basis revealed that neither the hair cell loss nor the ABR threshold shifts correlated with loss of gap detection or with the loss of connections, consistent with independent pathological mechanisms. PMID:25665752

  19. Age-related changes in auditory nerve-inner hair cell connections, hair cell numbers, auditory brain stem response and gap detection in UM-HET4 mice.

    PubMed

    Altschuler, R A; Dolan, D F; Halsey, K; Kanicki, A; Deng, N; Martin, C; Eberle, J; Kohrman, D C; Miller, R A; Schacht, J

    2015-04-30

    This study compared the timing of appearance of three components of age-related hearing loss that determine the pattern and severity of presbycusis: the functional and structural pathologies of sensory cells and neurons and changes in gap detection (GD), the latter as an indicator of auditory temporal processing. Using UM-HET4 mice, genetically heterogeneous mice derived from four inbred strains, we studied the integrity of inner and outer hair cells by position along the cochlear spiral, inner hair cell-auditory nerve connections, spiral ganglion neurons (SGN), and determined auditory thresholds, as well as pre-pulse and gap inhibition of the acoustic startle reflex (ASR). Comparisons were made between mice of 5-7, 22-24 and 27-29 months of age. There was individual variability among mice in the onset and extent of age-related auditory pathology. At 22-24 months of age a moderate to large loss of outer hair cells was restricted to the apical third of the cochlea and threshold shifts in the auditory brain stem response were minimal. There was also a large and significant loss of inner hair cell-auditory nerve connections and a significant reduction in GD. The expression of Ntf3 in the cochlea was significantly reduced. At 27-29 months of age there was no further change in the mean number of synaptic connections per inner hair cell or in GD, but a moderate to large loss of outer hair cells was found across all cochlear turns as well as significantly increased ABR threshold shifts at 4, 12, 24 and 48 kHz. A statistical analysis of correlations on an individual animal basis revealed that neither the hair cell loss nor the ABR threshold shifts correlated with loss of GD or with the loss of connections, consistent with independent pathological mechanisms. PMID:25665752

  20. Case study: auditory brain responses in a minimally verbal child with autism and cerebral palsy

    PubMed Central

    Yau, Shu H.; McArthur, Genevieve; Badcock, Nicholas A.; Brock, Jon

    2015-01-01

    An estimated 30% of individuals with autism spectrum disorders (ASD) remain minimally verbal into late childhood, but research on cognition and brain function in ASD focuses almost exclusively on those with good or only moderately impaired language. Here we present a case study investigating auditory processing of GM, a nonverbal child with ASD and cerebral palsy. At the age of 8 years, GM was tested using magnetoencephalography (MEG) whilst passively listening to speech sounds and complex tones. Where typically developing children and verbal autistic children all demonstrated similar brain responses to speech and nonspeech sounds, GM produced much stronger responses to nonspeech than speech, particularly in the 65–165 ms (M50/M100) time window post-stimulus onset. GM was retested aged 10 years using electroencephalography (EEG) whilst passively listening to pure tone stimuli. Consistent with her MEG response to complex tones, GM showed an unusually early and strong response to pure tones in her EEG responses. The consistency of the MEG and EEG data in this single case study demonstrate both the potential and the feasibility of these methods in the study of minimally verbal children with ASD. Further research is required to determine whether GM's atypical auditory responses are characteristic of other minimally verbal children with ASD or of other individuals with cerebral palsy. PMID:26150768

  1. Electrical Brain Responses to an Auditory Illusion and the Impact of Musical Expertise

    PubMed Central

    Ioannou, Christos I.; Pereda, Ernesto; Lindsen, Job P.; Bhattacharya, Joydeep

    2015-01-01

    The presentation of two sinusoidal tones, one to each ear, with a slight frequency mismatch yields an auditory illusion of a beating frequency equal to the frequency difference between the two tones; this is known as binaural beat (BB). The effect of brief BB stimulation on scalp EEG is not conclusively demonstrated. Further, no studies have examined the impact of musical training associated with BB stimulation, yet musicians' brains are often associated with enhanced auditory processing. In this study, we analysed EEG brain responses from two groups, musicians and non-musicians, when stimulated by short presentation (1 min) of binaural beats with beat frequency varying from 1 Hz to 48 Hz. We focused our analysis on alpha and gamma band EEG signals, and they were analysed in terms of spectral power, and functional connectivity as measured by two phase-synchrony-based measures, phase locking value and phase lag index. Finally, these measures were used to characterize the degree of centrality, segregation and integration of the functional brain network. We found that beat frequencies belonging to the alpha band produced the most significant steady-state responses across groups. Further, processing of low frequency (delta, theta, alpha) binaural beats had a significant impact on cortical network patterns in the alpha band oscillations. Altogether, these results provide a neurophysiological account of cortical responses to BB stimulation at varying frequencies, demonstrate a modulation of cortico-cortical connectivity in musicians' brains, and further suggest a form of neuronal entrainment bearing both linear and nonlinear relationships to the beating frequencies. PMID:26065708
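
    Of the two synchrony measures named above, the phase locking value has a particularly compact definition: the magnitude of the average phase-difference vector between two signals. A minimal sketch follows; the simulated signals are illustrative, and in practice both channels would first be band-pass filtered to the frequency band of interest.

      import numpy as np
      from scipy.signal import hilbert

      def phase_locking_value(x, y):
          # PLV between two narrow-band signals: 1 = perfectly phase locked,
          # values near 0 = no consistent phase relationship.
          phase_x = np.angle(hilbert(x))
          phase_y = np.angle(hilbert(y))
          return float(np.abs(np.mean(np.exp(1j * (phase_x - phase_y)))))

      fs = 256.0
      t = np.arange(0, 4.0, 1.0 / fs)
      rng = np.random.default_rng(2)
      a = np.sin(2 * np.pi * 10.0 * t + 0.3)     # 10 Hz oscillation
      b = np.sin(2 * np.pi * 10.0 * t + 1.1)     # same frequency, fixed phase lag
      c = rng.standard_normal(t.size)            # unrelated broadband noise
      print(phase_locking_value(a, b))           # close to 1
      print(phase_locking_value(a, c))           # much lower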

  2. A comprehensive approach to the segmentation of multichannel three-dimensional MR brain images in multiple sclerosis.

    PubMed

    Datta, Sushmita; Narayana, Ponnada A

    2013-01-01

    Accurate classification and quantification of brain tissues is important for monitoring disease progression, measurement of atrophy, and correlating magnetic resonance (MR) measures with clinical disability. Classification of MR brain images in the presence of lesions, such as multiple sclerosis (MS), is particularly challenging. Images obtained with lower resolution often suffer from partial volume averaging leading to false classifications. While partial volume averaging can be reduced by acquiring volumetric images at high resolution, image segmentation and quantification can be technically challenging. In this study, we integrated the brain anatomical knowledge with non-parametric and parametric statistical classifiers for automatically classifying tissues and lesions on high resolution multichannel three-dimensional images acquired on 60 MS brains. The results of automatic lesion segmentation were reviewed by the expert. The agreement between results obtained by the automated analysis and the expert was excellent as assessed by the quantitative metrics, low absolute volume difference percent (36.18 ± 34.90), low average symmetric surface distance (1.64 mm ± 1.30 mm), high true positive rate (84.75 ± 12.69), and low false positive rate (34.10 ± 16.00). The segmented results were also in close agreement with the corrected results as assessed by Bland-Altman and regression analyses. Finally, our lesion segmentation was validated using the MS lesion segmentation grand challenge dataset (MICCAI 2008). PMID:24179773
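
    Two of the agreement metrics quoted above have straightforward definitions on binary lesion masks. The sketch below computes the absolute volume difference percent and simple voxel-wise true/false positive rates on toy arrays; note that the MS lesion challenge scores some of these quantities per lesion rather than per voxel, so this illustrates the idea rather than the challenge's exact protocol.

      import numpy as np

      def volume_difference_percent(seg, ref):
          # Absolute segmented-volume difference, as a percentage of the reference volume.
          return float(abs(seg.sum() - ref.sum()) / ref.sum() * 100.0)

      def voxelwise_rates(seg, ref):
          # Voxel-wise true positive rate and false positive rate (in percent).
          tp = np.logical_and(seg, ref).sum()
          fp = np.logical_and(seg, np.logical_not(ref)).sum()
          return float(100.0 * tp / ref.sum()), float(100.0 * fp / seg.sum())

      # Toy one-dimensional masks standing in for 3-D lesion segmentations.
      ref = np.array([0, 1, 1, 1, 1, 0, 0, 0], dtype=bool)   # expert reference
      seg = np.array([0, 0, 1, 1, 1, 1, 0, 0], dtype=bool)   # automated result
      print(volume_difference_percent(seg, ref))              # 0.0 (same volume)
      print(voxelwise_rates(seg, ref))                        # (75.0, 25.0)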

  3. Enhanced peripheral visual processing in congenitally deaf humans is supported by multiple brain regions, including primary auditory cortex

    PubMed Central

    Scott, Gregory D.; Karns, Christina M.; Dow, Mark W.; Stevens, Courtney; Neville, Helen J.

    2014-01-01

    Brain reorganization associated with altered sensory experience clarifies the critical role of neuroplasticity in development. An example is enhanced peripheral visual processing associated with congenital deafness, but the neural systems supporting this have not been fully characterized. A gap in our understanding of deafness-enhanced peripheral vision is the contribution of primary auditory cortex. Previous studies of auditory cortex that use anatomical normalization across participants were limited by inter-subject variability of Heschl's gyrus. In addition to reorganized auditory cortex (cross-modal plasticity), a second gap in our understanding is the contribution of altered modality-specific cortices (visual intramodal plasticity in this case), as well as supramodal and multisensory cortices, especially when target detection is required across contrasts. Here we address these gaps by comparing fMRI signal change for peripheral vs. perifoveal visual stimulation (11–15° vs. 2–7°) in congenitally deaf and hearing participants in a blocked experimental design with two analytical approaches: a Heschl's gyrus region of interest analysis and a whole brain analysis. Our results using individually-defined primary auditory cortex (Heschl's gyrus) indicate that fMRI signal change for more peripheral stimuli was greater than perifoveal in deaf but not in hearing participants. Whole-brain analyses revealed differences between deaf and hearing participants for peripheral vs. perifoveal visual processing in extrastriate visual cortex including primary auditory cortex, MT+/V5, superior-temporal auditory, and multisensory and/or supramodal regions, such as posterior parietal cortex (PPC), frontal eye fields, anterior cingulate, and supplementary eye fields. Overall, these data demonstrate the contribution of neuroplasticity in multiple systems including primary auditory cortex, supramodal, and multisensory regions, to altered visual processing in congenitally deaf humans.

  4. Design of the multi-channel electroencephalography-based brain-computer interface with novel dry sensors.

    PubMed

    Wu, Shang-Lin; Liao, Lun-De; Liou, Chang-Hong; Chen, Shi-An; Ko, Li-Wei; Chen, Bo-Wei; Wang, Po-Sheng; Chen, Sheng-Fu; Lin, Chin-Teng

    2012-01-01

    Traditional brain-computer interface (BCI) systems measure electroencephalography (EEG) signals with wet sensors that require conductive gel and skin preparation. To overcome these limitations, a wireless and wearable multi-channel EEG-based BCI system is proposed in this study, comprising a wireless EEG data acquisition device, dry spring-loaded sensors, and a size-adjustable soft cap. The dry spring-loaded sensors are made of metal conductors and can measure EEG signals without skin preparation or conductive gel. In addition, the size-adjustable soft cap can be fitted properly to the user's head. The results show that the proposed system can measure EEG signals properly and effectively with the developed cap and sensors, even during movement. In summary, the developed wireless and wearable BCI system is suitable for cognitive neuroscience applications. PMID:23366259

  5. Research of brain activation regions of "yes" and "no" responses by auditory stimulations in human EEG

    NASA Astrophysics Data System (ADS)

    Hu, Min; Liu, GuoZhong

    2011-11-01

    People with neuromuscular disorders have difficulty communicating with the outside world. For a patient with a disorder of consciousness (DOC), it is very important to clinicians and the patient's family to distinguish between the vegetative state (VS) and the minimally conscious state (MCS). A diagnosis of VS means that the hope of recovery is greatly reduced, which may lead the family to abandon treatment. Brain-computer interfaces (BCIs) aim to help these patients by analyzing their electroencephalogram (EEG). This paper focuses on identifying the brain regions that are activated when a subject responds "yes" or "no" to an auditory question. When the brain concentrates, the phase of activity in the related area becomes ordered rather than disordered, so we analyzed the EEG in terms of phase. Seven healthy subjects volunteered to participate in the experiment, and a total of 84 repeated stimulation trials were recorded. First, the signal was decomposed into frequency bands using a wavelet method. Second, the phase of the EEG was extracted with the Hilbert transform. Finally, we computed the approximate entropy and information entropy of each EEG frequency band. The results show that central areas are activated when people respond "yes", whereas central and temporal areas are activated when people respond "no". This conclusion is consistent with findings from magnetic resonance imaging. The study provides a theoretical and algorithmic basis for designing BCI equipment for people with neuromuscular disorders.
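
    The band-wise phase extraction described above can be sketched in a few lines. In the sketch, a zero-phase Butterworth band-pass stands in for the wavelet decomposition of the original method, and the sampling rate, band limits, and simulated signal are illustrative assumptions.

      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert

      def band_phase(eeg, fs, low, high, order=4):
          # Instantaneous phase of one EEG frequency band.
          # A zero-phase Butterworth band-pass stands in for the wavelet step;
          # the Hilbert transform then yields the analytic phase.
          b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
          narrow = filtfilt(b, a, eeg)
          return np.angle(hilbert(narrow))

      fs = 200.0
      t = np.arange(0, 5.0, 1.0 / fs)
      rng = np.random.default_rng(3)
      eeg = np.sin(2 * np.pi * 6.0 * t) + 0.5 * rng.standard_normal(t.size)  # theta + noise
      theta_phase = band_phase(eeg, fs, 4.0, 8.0)
      print(theta_phase[:5])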

  6. Brain hyper-reactivity to auditory novel targets in children with high-functioning autism.

    PubMed

    Gomot, Marie; Belmonte, Matthew K; Bullmore, Edward T; Bernard, Frédéric A; Baron-Cohen, Simon

    2008-09-01

    Although communication and social difficulties in autism have received a great deal of research attention, the other key diagnostic feature, extreme repetitive behaviour and unusual narrow interests, has been addressed less often. Also known as 'resistance to change' this may be related to atypical processing of infrequent, novel stimuli. This can be tested at sensory and neural levels. Our aims were to (i) examine auditory novelty detection and its neural basis in children with autism spectrum conditions (ASC) and (ii) test for brain activation patterns that correlate quantitatively with number of autistic traits as a test of the dimensional nature of ASC. The present study employed event-related fMRI during a novel auditory detection paradigm. Participants were twelve 10- to 15-year-old children with ASC and a group of 12 age-, IQ- and sex-matched typical controls. The ASC group responded faster to novel target stimuli. Group differences in brain activity mainly involved the right prefrontal-premotor and the left inferior parietal regions, which were more activated in the ASC group than in controls. In both groups, activation of prefrontal regions during target detection was positively correlated with Autism Spectrum Quotient scores measuring the number of autistic traits. These findings suggest that target detection in autism is associated not only with superior behavioural performance (shorter reaction time) but also with activation of a more widespread network of brain regions. This pattern also shows quantitative variation with number of autistic traits, in a continuum that extends to the normal population. This finding may shed light on the neurophysiological process underlying narrow interests and what clinically is called 'need for sameness'. PMID:18669482

  7. Auditory agnosia.

    PubMed

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. PMID:25726291

  8. Audio representations of multi-channel EEG: a new tool for diagnosis of brain disorders

    PubMed Central

    Vialatte, François B; Dauwels, Justin; Musha, Toshimitsu; Cichocki, Andrzej

    2012-01-01

    Objective: The objective of this paper is to develop audio representations of electroencephalographic (EEG) multichannel signals, useful for medical practitioners and neuroscientists. The fundamental question explored in this paper is whether clinically valuable information contained in the EEG, not available from the conventional graphical EEG representation, might become apparent through audio representations. Methods and Materials: Music scores are generated from sparse time-frequency maps of EEG signals. Specifically, EEG signals of patients with mild cognitive impairment (MCI) and (healthy) control subjects are considered. Statistical differences in the audio representations of MCI patients and control subjects are assessed through mathematical complexity indexes as well as a perception test; in the latter, participants try to distinguish between audio sequences from MCI patients and control subjects. Results: Several characteristics of the audio sequences, including sample entropy, number of notes, and synchrony, are significantly different in MCI patients and control subjects (Mann-Whitney p < 0.01). Moreover, the participants of the perception test were able to accurately classify the audio sequences (89% correctly classified). Conclusions: The proposed audio representation of multi-channel EEG signals helps to understand the complex structure of EEG. Promising results were obtained on a clinical EEG data set. PMID:23383399
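
    One simple way to realize a "music score" from sparse time-frequency peaks, in the spirit of the method above, is to map each peak's frequency to the nearest musical (MIDI) note and its amplitude to loudness. The scaling of EEG frequencies into the audible range and the velocity mapping below are illustrative assumptions, not the authors' exact sonification rules.

      import math

      def freq_to_midi_note(freq_hz):
          # Nearest MIDI note number for a frequency (A4 = 440 Hz = note 69).
          return int(round(69 + 12 * math.log2(freq_hz / 440.0)))

      def peak_to_note(peak_freq_hz, peak_amplitude, amp_max, audio_scale=100.0):
          # Map one sparse time-frequency peak of the EEG to a (pitch, loudness) pair.
          # Multiplying the EEG frequency by audio_scale shifts it into audible range.
          pitch = freq_to_midi_note(peak_freq_hz * audio_scale)
          velocity = int(round(127 * peak_amplitude / amp_max))
          return pitch, velocity

      # Example: a 10 Hz alpha burst at 60% of the maximum amplitude.
      print(peak_to_note(10.0, peak_amplitude=0.6, amp_max=1.0))   # (83, 76)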

  9. High-Resolution Mapping of Myeloarchitecture In Vivo: Localization of Auditory Areas in the Human Brain.

    PubMed

    De Martino, Federico; Moerel, Michelle; Xu, Junqian; van de Moortele, Pierre-Francois; Ugurbil, Kamil; Goebel, Rainer; Yacoub, Essa; Formisano, Elia

    2015-10-01

    The precise delineation of auditory areas in vivo remains problematic. Histological analysis of postmortem tissue indicates that the relation of areal borders to macroanatomical landmarks is variable across subjects. Furthermore, functional parcellation schemes based on measures of, for example, frequency preference (tonotopy) remain controversial. Here, we propose a 7 Tesla magnetic resonance imaging method that enables the anatomical delineation of auditory cortical areas in vivo and in individual brains, through the high-resolution visualization (0.6 × 0.6 × 0.6 mm³) of intracortical anatomical contrast related to myelin. The approach combines the acquisition and analysis of images with multiple MR contrasts (T1, T2*, and proton density). Compared with previous methods, the proposed solution is feasible at high fields and time efficient, which allows collecting myelin-related and functional images within the same measurement session. Our results show that a data-driven analysis of cortical depth-dependent profiles of anatomical contrast allows identifying a most densely myelinated cortical region on the medial Heschl's gyrus. Analyses of functional responses show that this region includes neuronal populations with typical primary functional properties (single tonotopic gradient and narrow frequency tuning), thus indicating that it may correspond to the human homolog of monkey A1. PMID:24994817

  10. Auditory Hallucinations and the Brain's Resting-State Networks: Findings and Methodological Observations.

    PubMed

    Alderson-Day, Ben; Diederen, Kelly; Fernyhough, Charles; Ford, Judith M; Horga, Guillermo; Margulies, Daniel S; McCarthy-Jones, Simon; Northoff, Georg; Shine, James M; Turner, Jessica; van de Ven, Vincent; van Lutterveld, Remko; Waters, Flavie; Jardri, Renaud

    2016-09-01

    In recent years, there has been increasing interest in the potential for alterations to the brain's resting-state networks (RSNs) to explain various kinds of psychopathology. RSNs provide an intriguing new explanatory framework for hallucinations, which can occur in different modalities and population groups, but which remain poorly understood. This collaboration from the International Consortium on Hallucination Research (ICHR) reports on the evidence linking resting-state alterations to auditory hallucinations (AH) and provides a critical appraisal of the methodological approaches used in this area. In the report, we describe findings from resting connectivity fMRI in AH (in schizophrenia and nonclinical individuals) and compare them with findings from neurophysiological research, structural MRI, and research on visual hallucinations (VH). In AH, various studies show resting connectivity differences in left-hemisphere auditory and language regions, as well as atypical interaction of the default mode network and RSNs linked to cognitive control and salience. As the latter are also evident in studies of VH, this points to a domain-general mechanism for hallucinations alongside modality-specific changes to RSNs in different sensory regions. However, we also observed high methodological heterogeneity in the current literature, affecting the ability to make clear comparisons between studies. To address this, we provide some methodological recommendations and options for future research on the resting state and hallucinations. PMID:27280452

  11. Brain-Generated Estradiol Drives Long-Term Optimization of Auditory Coding to Enhance the Discrimination of Communication Signals

    PubMed Central

    Tremere, Liisa A.; Pinaud, Raphael

    2011-01-01

    Auditory processing and hearing-related pathologies are heavily influenced by steroid hormones in a variety of vertebrate species including humans. The hormone estradiol has been recently shown to directly modulate the gain of central auditory neurons, in real-time, by controlling the strength of inhibitory transmission via a non-genomic mechanism. The functional relevance of this modulation, however, remains unknown. Here we show that estradiol generated in the songbird homologue of the mammalian auditory association cortex, rapidly enhances the effectiveness of the neural coding of complex, learned acoustic signals in awake zebra finches. Specifically, estradiol increases mutual information rates, coding efficiency and the neural discrimination of songs. These effects are mediated by estradiol’s modulation of both rate and temporal coding of auditory signals. Interference with the local action or production of estradiol in the auditory forebrain of freely-behaving animals disrupts behavioral responses to songs, but not to other behaviorally-relevant communication signals. Our findings directly show that estradiol is a key regulator of auditory function in the adult vertebrate brain. PMID:21368039

  12. A brain-computer interface controlled auditory event-related potential (p300) spelling system for locked-in patients.

    PubMed

    Kübler, Andrea; Furdea, Adrian; Halder, Sebastian; Hammer, Eva Maria; Nijboer, Femke; Kotchoubey, Boris

    2009-03-01

    Using brain-computer interfaces (BCI) humans can select letters or other targets on a computer screen without any muscular involvement. An intensively investigated kind of BCI is based on the recording of visual event-related brain potentials (ERP). However, some severely paralyzed patients who need a BCI for communication have impaired vision or lack control of gaze movement, thus making a BCI depending on visual input no longer feasible. In an effort to render the ERP-BCI usable for this group of patients, the ERP-BCI was adapted to auditory stimulation. Letters of the alphabet were assigned to cells in a 5 x 5 matrix. Rows of the matrix were coded with numbers 1 to 5, and columns with numbers 6 to 10, and the numbers were presented auditorily. To select a letter, users had to first select the row and then the column containing the desired letter. Four severely paralyzed patients in the end-stage of a neurodegenerative disease were examined. All patients performed above chance level. Spelling accuracy was significantly lower with the auditory system as compared with a similar visual system. Patients reported difficulties in concentrating on the task when presented with the auditory system. In future studies, the auditory ERP-BCI should be adjusted by taking into consideration specific features of severely paralyzed patients, such as reduced attention span. This adjustment in combination with more intensive training will show whether an auditory ERP-BCI can become an option for visually impaired patients. PMID:19351359
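
    As an illustration of the row/column coding scheme described above, the following minimal Python sketch maps an auditorily cued row code (1-5) and column code (6-10) to a letter. The A-Y matrix layout is an assumption made only for illustration; the study states only that letters of the alphabet were assigned to the 5 x 5 matrix.

```python
# Minimal sketch of the row/column coding used by the auditory ERP speller
# described above: rows are cued with numbers 1-5, columns with 6-10, and a
# letter is selected by first choosing its row, then its column.
# The A..Y layout (Z omitted) is an illustrative assumption.

import string

MATRIX = [list(string.ascii_uppercase[i * 5:(i + 1) * 5]) for i in range(5)]  # 5 x 5, A..Y

def select_letter(row_code: int, col_code: int) -> str:
    """Map an auditorily cued row code (1-5) and column code (6-10) to a letter."""
    row = row_code - 1        # codes 1-5  -> row indices 0-4
    col = col_code - 6        # codes 6-10 -> column indices 0-4
    return MATRIX[row][col]

print(select_letter(2, 8))    # second row, third column -> 'H'
```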

  13. Persistent frontal P300 brain potential suggests abnormal processing of auditory information in distractible children.

    PubMed

    Kilpeläinen, R; Luoma, L; Herrgård, E; Yppärilä, H; Partanen, J; Karhu, J

    1999-11-01

    The P300 event-related potential (ERP) was studied at the beginning, in the middle, and at the end of an auditory stimulus discrimination task in 70 normal 9-year-old children. Easily distractible children showed frontally a short-latency P300 response to target stimuli throughout the task, whereas in the non-distractible children the corresponding response was distinctly smaller and also showed a tendency to decrease in size towards the end of the task. The short-latency frontal P300 response reflects activation of the brain's orienting networks, and it normally decreases in size when stimuli lose their 'novelty value' with stimulus repetition. The persistent frontal P300 suggests that distractible children continued to show enhanced orienting to stimuli that should have already been well encoded and/or categorized. PMID:10599853

  14. Frequency tuning of the dolphin's hearing as revealed by auditory brain-stem response with notch-noise masking.

    PubMed

    Popov, V V; Supin, A Y; Klishin, V O

    1997-12-01

    Notch-noise masking was used to measure frequency tuning in a dolphin (Tursiops truncatus) in a simultaneous-masking paradigm in conjunction with auditory brain-stem evoked potential recording. Measurements were made at probe frequencies of 64, 76, 90, and 108 kHz. The data were analyzed by fitting the rounded-exponent model of the auditory filters to the experimental data. The fitting parameter values corresponded to the filter tuning as follows: QER (center frequency divided by equivalent rectangular bandwidths) of 35 to 36.5 and Q10 dB of 18 to 19 at all tested frequencies. PMID:9407671
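
    The rounded-exponent (roex) filter model referred to above relates the fitted parameter p to both tuning measures: ERB = 4·fc/p, so QER = p/4, and the 10-dB bandwidth follows from solving W(g) = 0.1 numerically. The sketch below illustrates this relationship; the parameter value is chosen only to reproduce the reported order of magnitude and is not taken from the study's fits.

```python
# Hedged sketch of the single-parameter rounded-exponent (roex) auditory filter:
# W(g) = (1 + p*g) * exp(-p*g), with g the normalized frequency deviation |f - fc| / fc.
# ERB = 4*fc/p, hence Q_ER = fc/ERB = p/4; the -10 dB quality factor follows from W(g) = 0.1.

import numpy as np
from scipy.optimize import brentq

def roex(g, p):
    return (1.0 + p * g) * np.exp(-p * g)

def q_values(p):
    q_er = p / 4.0                                        # ERB-based quality factor
    g10 = brentq(lambda g: roex(g, p) - 0.1, 1e-6, 1.0)   # half-width at -10 dB (normalized)
    q10 = 1.0 / (2.0 * g10)
    return q_er, q10

q_er, q10 = q_values(p=142.0)     # p ~ 142 gives Q_ER ~ 35.5, in the reported range
print(f"Q_ER = {q_er:.1f}, Q10dB = {q10:.1f}")
```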

  15. Proteome rearrangements after auditory learning: high-resolution profiling of synapse-enriched protein fractions from mouse brain.

    PubMed

    Kähne, Thilo; Richter, Sandra; Kolodziej, Angela; Smalla, Karl-Heinz; Pielot, Rainer; Engler, Alexander; Ohl, Frank W; Dieterich, Daniela C; Seidenbecher, Constanze; Tischmeyer, Wolfgang; Naumann, Michael; Gundelfinger, Eckart D

    2016-07-01

    Learning and memory processes are accompanied by rearrangements of synaptic protein networks. While various studies have demonstrated the regulation of individual synaptic proteins during these processes, much less is known about the complex regulation of synaptic proteomes. Recently, we reported that auditory discrimination learning in mice is associated with a relative down-regulation of proteins involved in the structural organization of synapses in various brain regions. Aiming at the identification of biological processes and signaling pathways involved in auditory memory formation, here, a label-free quantification approach was utilized to identify regulated synaptic junctional proteins and phosphoproteins in the auditory cortex, frontal cortex, hippocampus, and striatum of mice 24 h after the learning experiment. Twenty proteins, including postsynaptic scaffolds, actin-remodeling proteins, and RNA-binding proteins, were regulated in at least three brain regions pointing to common, cross-regional mechanisms. Most of the detected synaptic proteome changes were, however, restricted to individual brain regions. For example, several members of the Septin family of cytoskeletal proteins were up-regulated only in the hippocampus, while Septin-9 was down-regulated in the hippocampus, the frontal cortex, and the striatum. Meta analyses utilizing several databases were employed to identify underlying cellular functions and biological pathways. Data are available via ProteomeExchange with identifier PXD003089. How does the protein composition of synapses change in different brain areas upon auditory learning? We unravel discrete proteome changes in mouse auditory cortex, frontal cortex, hippocampus, and striatum functionally implicated in the learning process. We identify not only common but also area-specific biological pathways and cellular processes modulated 24 h after training, indicating individual contributions of the regions to memory processing. PMID

  16. Evaluation of Auditory Brain Stems Evoked Response in Newborns With Pathologic Hyperbilirubinemia in Mashhad, Iran

    PubMed Central

    Okhravi, Tooba; Tarvij Eslami, Saeedeh; Hushyar Ahmadi, Ali; Nassirian, Hossain; Najibpour, Reza

    2015-01-01

    Background: Neonatal jaundice is a common cause of sensorineural hearing loss in children. Objectives: We aimed to detect the neurotoxic effects of pathologic hyperbilirubinemia on the brain stem and auditory tract by auditory brain stem evoked response (ABR), which could predict early effects of hyperbilirubinemia. Patients and Methods: This case-control study was performed on newborns with pathologic hyperbilirubinemia. The inclusion criteria were healthy term and near-term (35 - 37 weeks) newborns with pathologic hyperbilirubinemia, with serum bilirubin values of ≥ 7 mg/dL, ≥ 10 mg/dL and ≥ 14 mg/dL at the first, second and third day of life, respectively, and with a bilirubin concentration ≥ 18 mg/dL at over 72 hours of life. The exclusion criteria included family history of and diseases causing sensorineural hearing loss, use of ototoxic medications within the preceding five days, convulsion, congenital craniofacial anomalies, birth trauma, preterm newborns < 35 weeks, birth weight < 1500 g, asphyxia, and mechanical ventilation for five days or more. A total of 48 newborns with hyperbilirubinemia met the enrolment criteria as the case group and 49 healthy newborns served as the control group; all were hospitalized in a university teaching hospital (22 Bahman) in Mashhad, a north-eastern city of Iran. ABR was performed on both groups. The evaluated variables were wave latencies, interpeak intervals, and loss of waves. Results: The mean latencies of waves I, III and V of the ABR were significantly higher in the pathologic hyperbilirubinemia group compared with the controls (P < 0.001). In addition, the mean interpeak intervals (IPI) of waves I-III, I-V and III-V of the ABR were significantly higher in the pathologic hyperbilirubinemia group compared with the controls (P < 0.001). For example, the mean latency of wave I was significantly higher in the right ear of the case group than in controls (2.16 ± 0.26 vs. 1.77 ± 0.15 milliseconds, respectively) (P

  17. Noninvasive brain stimulation for the treatment of auditory verbal hallucinations in schizophrenia: methods, effects and challenges

    PubMed Central

    Kubera, Katharina M.; Barth, Anja; Hirjak, Dusan; Thomann, Philipp A.; Wolf, Robert C.

    2015-01-01

    This mini-review focuses on noninvasive brain stimulation techniques as an augmentation method for the treatment of persistent auditory verbal hallucinations (AVH) in patients with schizophrenia. Paradigmatically, we place emphasis on transcranial magnetic stimulation (TMS). We specifically discuss rationales of stimulation and consider methodological questions together with issues of phenotypic diversity in individuals with drug-refractory and persistent AVH. Eventually, we provide a brief outlook for future investigations and treatment directions. Taken together, current evidence suggests TMS as a promising method in the treatment of AVH. Low-frequency stimulation of the superior temporal cortex (STC) may reduce symptom severity and frequency. Yet clinical effects are of relatively short duration and effect sizes appear to decrease over time along with publication of larger trials. Apart from considering other innovative stimulation techniques, such as transcranial Direct Current Stimulation (tDCS), and optimizing stimulation protocols, treatment of AVH using noninvasive brain stimulation will essentially rely on accurate identification of potential responders and non-responders for these treatment modalities. In this regard, future studies will need to consider distinct phenotypic presentations of AVH in patients with schizophrenia, together with the putative functional neurocircuitry underlying these phenotypes. PMID:26528145

  18. Hemispheric asymmetry of primary auditory cortex and Heschl’s gyrus in schizophrenia and nonpsychiatric brains

    PubMed Central

    Smiley, John F.; Hackett, Troy A.; Preuss, Todd M.; Bleiwas, Cynthia; Figarsky, Khadija; Mann, J. John; Rosoklija, Gorazd; Javitt, Daniel C.; Dwork, Andrew J.

    2013-01-01

    Heschl’s gyrus (HG) is reported to have a normal left>right hemispheric volume asymmetry, and reduced asymmetry in schizophrenia. Primary auditory cortex (A1) occupies the caudal-medial surface of HG, but it is unclear if A1 has normal asymmetry, or whether its asymmetry is altered in schizophrenia. To address these issues, we compared bilateral gray matter volumes of HG and A1, and neuron density and number in A1, in autopsy brains from male subjects with or without schizophrenia. Comparison of diagnostic groups did not reveal altered gray matter volumes, neuron density, neuron number or hemispheric asymmetries in schizophrenia. With respect to hemispheric differences, HG displayed a clear left>right asymmetry of gray matter volume. Area A1 occupied nearly half of HG, but had less consistent volume asymmetry, that was clearly present only in a subgroup of archival brains from elderly subjects. Neuron counts, in layers IIIb-c and V-VI, showed that the A1 volume asymmetry reflected differences in neuron number, and was not caused simply by changes in neuron density. Our findings confirm previous reports of striking hemispheric asymmetry of HG, and additionally show evidence that A1 has a corresponding asymmetry, although less consistent than that of HG. PMID:24148910

  19. Physiological modulators of Kv3.1 channels adjust firing patterns of auditory brain stem neurons.

    PubMed

    Brown, Maile R; El-Hassar, Lynda; Zhang, Yalan; Alvaro, Giuseppe; Large, Charles H; Kaczmarek, Leonard K

    2016-07-01

    Many rapidly firing neurons, including those in the medial nucleus of the trapezoid body (MNTB) in the auditory brain stem, express "high threshold" voltage-gated Kv3.1 potassium channels that activate only at positive potentials and are required for stimuli to generate rapid trains of action potentials. We now describe the actions of two imidazolidinedione derivatives, AUT1 and AUT2, which modulate Kv3.1 channels. Using Chinese hamster ovary cells stably expressing rat Kv3.1 channels, we found that lower concentrations of these compounds shift the voltage of activation of Kv3.1 currents toward negative potentials, increasing currents evoked by depolarization from typical neuronal resting potentials. Single-channel recordings also showed that AUT1 shifted the open probability of Kv3.1 to more negative potentials. Higher concentrations of AUT2 also shifted inactivation to negative potentials. The effects of lower and higher concentrations could be mimicked in numerical simulations by increasing rates of activation and inactivation, respectively, with no change in intrinsic voltage dependence. In brain slice recordings of mouse MNTB neurons, both AUT1 and AUT2 modulated firing rate at high rates of stimulation, a result predicted by numerical simulations. Our results suggest that pharmaceutical modulation of Kv3.1 currents represents a novel avenue for manipulation of neuronal excitability and has the potential for therapeutic benefit in the treatment of hearing disorders. PMID:27052580
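
    As a rough illustration (not the authors' model), the sketch below shows how shifting the half-activation voltage of a Boltzmann-type, Kv3.1-like conductance toward negative potentials increases the current available at a depolarized test potential; all parameter values are placeholders.

```python
# Illustrative sketch of a negative shift in half-activation voltage for a
# Kv3.1-like conductance, using a simple Boltzmann activation curve.
# V_half, slope and g_max are placeholder values, not fitted parameters.

import numpy as np

E_K = -90.0      # mV, potassium reversal potential
g_max = 10.0     # nS, maximal conductance (arbitrary)

def kv3_current(v, v_half, slope=8.0):
    """Steady-state Kv3.1-like current (pA) at membrane potential v (mV)."""
    activation = 1.0 / (1.0 + np.exp(-(v - v_half) / slope))
    return g_max * activation * (v - E_K)

v_test = -10.0   # mV, a depolarized potential reached during firing
control = kv3_current(v_test, v_half=+10.0)   # control: high-threshold activation
shifted = kv3_current(v_test, v_half=-5.0)    # modulator: activation shifted negative
print(f"I at {v_test} mV: control {control:.1f} pA, shifted {shifted:.1f} pA")
```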

  20. Brain activity during divided and selective attention to auditory and visual sentence comprehension tasks

    PubMed Central

    Moisala, Mona; Salmela, Viljami; Salo, Emma; Carlson, Synnöve; Vuontela, Virve; Salonen, Oili; Alho, Kimmo

    2015-01-01

    Using functional magnetic resonance imaging (fMRI), we measured brain activity of human participants while they performed a sentence congruence judgment task in either the visual or auditory modality separately, or in both modalities simultaneously. Significant performance decrements were observed when attention was divided between the two modalities compared with when one modality was selectively attended. Compared with selective attention (i.e., single tasking), divided attention (i.e., dual-tasking) did not recruit additional cortical regions, but resulted in increased activity in medial and lateral frontal regions which were also activated by the component tasks when performed separately. Areas involved in semantic language processing were revealed predominantly in the left lateral prefrontal cortex by contrasting incongruent with congruent sentences. These areas also showed significant activity increases during divided attention in relation to selective attention. In the sensory cortices, no crossmodal inhibition was observed during divided attention when compared with selective attention to one modality. Our results suggest that the observed performance decrements during dual-tasking are due to interference of the two tasks because they utilize the same part of the cortex. Moreover, semantic dual-tasking did not appear to recruit additional brain areas in comparison with single tasking, and no crossmodal inhibition was observed during intermodal divided attention. PMID:25745395

  1. Spect-studies of the brain with stimulation of the auditory cortex.

    PubMed

    Schadel, A

    1988-01-01

    The radiopharmaceutical N-isopropyl-p-iodoamphetamine (IMP) permits a new approach to the study of cerebral perfusion and function. We advanced the hypothesis that IMP uptake in the auditory cortex increases during stimulation with white noise. Auditory stimulation activates the auditory cortex, which is marked by increased IMP uptake. Increased IMP uptake in the left auditory region during stimulation of the right ear provides further evidence of the crossing of central auditory pathways to the contralateral side. PMID:3265798

  2. An online brain-computer interface based on shifting attention to concurrent streams of auditory stimuli

    NASA Astrophysics Data System (ADS)

    Hill, N. J.; Schölkopf, B.

    2012-04-01

    We report on the development and online testing of an electroencephalogram-based brain-computer interface (BCI) that aims to be usable by completely paralysed users—for whom visual or motor-system-based BCIs may not be suitable, and among whom reports of successful BCI use have so far been very rare. The current approach exploits covert shifts of attention to auditory stimuli in a dichotic-listening stimulus design. To compare the efficacy of event-related potentials (ERPs) and steady-state auditory evoked potentials (SSAEPs), the stimuli were designed such that they elicited both ERPs and SSAEPs simultaneously. Trial-by-trial feedback was provided online, based on subjects' modulation of N1 and P3 ERP components measured during single 5 s stimulation intervals. All 13 healthy subjects were able to use the BCI, with performance in a binary left/right choice task ranging from 75% to 96% correct across subjects (mean 85%). BCI classification was based on the contrast between stimuli in the attended stream and stimuli in the unattended stream, making use of every stimulus, rather than contrasting frequent standard and rare ‘oddball’ stimuli. SSAEPs were assessed offline: for all subjects, spectral components at the two exactly known modulation frequencies allowed discrimination of pre-stimulus from stimulus intervals, and of left-only stimuli from right-only stimuli when one side of the dichotic stimulus pair was muted. However, attention modulation of SSAEPs was not sufficient for single-trial BCI communication, even when the subject's attention was clearly focused well enough to allow classification of the same trials via ERPs. ERPs clearly provided a superior basis for BCI. The ERP results are a promising step towards the development of a simple-to-use, reliable yes/no communication system for users in the most severely paralysed states, as well as potential attention-monitoring and -training applications outside the context of assistive technology.

  3. Brain-derived neurotrophic factor modulates auditory function in the hearing cochlea.

    PubMed

    Sly, David J; Hampson, Amy J; Minter, Ricki L; Heffer, Leon F; Li, Jack; Millard, Rodney E; Winata, Leon; Niasari, Allen; O'Leary, Stephen J

    2012-02-01

    Neurotrophins prevent spiral ganglion neuron (SGN) degeneration in animal models of ototoxin-induced deafness and may be used in the future to improve the hearing of cochlear implant patients. It is increasingly common for patients with residual hearing to undergo cochlear implantation. However, the effect of neurotrophin treatment on acoustic hearing is not known. In this study, brain-derived neurotrophic factor (BDNF) was applied to the round window membrane of adult guinea pigs for 4 weeks using a cannula attached to a mini-osmotic pump. SGN survival was first assessed in ototoxically deafened guinea pigs to establish that the delivery method was effective. Increased survival of SGNs was observed in the basal and middle cochlear turns of deafened guinea pigs treated with BDNF, confirming that delivery to the cochlea was successful. The effects of BDNF treatment in animals with normal hearing were then assessed using distortion product otoacoustic emissions (DPOAEs), pure tone, and click-evoked auditory brainstem responses (ABRs). DPOAE assessment indicated a mild deficit of 5 dB SPL in treated and control groups at 1 and 4 weeks after cannula placement. In contrast, ABR evaluation showed that BDNF lowered thresholds at specific frequencies (8 and 16 kHz) after 1 and 4 weeks posttreatment when compared to the control cohort receiving Ringer's solution. Longer treatment for 4 weeks not only widened the range of frequencies ameliorated from 2 to 32 kHz but also lowered the threshold by at least 28 dB SPL at frequencies ≥16 kHz. BDNF treatment for 4 weeks also increased the amplitude of the ABR response when compared to either the control cohort or prior to treatment. We show that BDNF applied to the round window reduces auditory thresholds and could potentially be used clinically to protect residual hearing following cochlear implantation. PMID:22086147

  4. Delta, theta, beta, and gamma brain oscillations index levels of auditory sentence processing.

    PubMed

    Mai, Guangting; Minett, James W; Wang, William S-Y

    2016-06-01

    A growing number of studies indicate that multiple ranges of brain oscillations, especially the delta (δ, <4Hz), theta (θ, 4-8Hz), beta (β, 13-30Hz), and gamma (γ, 30-50Hz) bands, are engaged in speech and language processing. It is not clear, however, how these oscillations relate to functional processing at different linguistic hierarchical levels. Using scalp electroencephalography (EEG), the current study tested the hypothesis that phonological and the higher-level linguistic (semantic/syntactic) organizations during auditory sentence processing are indexed by distinct EEG signatures derived from the δ, θ, β, and γ oscillations. We analyzed specific EEG signatures while subjects listened to Mandarin speech stimuli in three different conditions in order to dissociate phonological and semantic/syntactic processing: (1) sentences comprising valid disyllabic words assembled in a valid syntactic structure (real-word condition); (2) utterances with morphologically valid syllables, but not constituting valid disyllabic words (pseudo-word condition); and (3) backward versions of the real-word and pseudo-word conditions. We tested four signatures: band power, EEG-acoustic entrainment (EAE), cross-frequency coupling (CFC), and inter-electrode renormalized partial directed coherence (rPDC). The results show significant effects of band power and EAE of δ and θ oscillations for phonological, rather than semantic/syntactic processing, indicating the importance of tracking δ- and θ-rate phonetic patterns during phonological analysis. We also found significant β-related effects, suggesting tracking of EEG to the acoustic stimulus (high-β EAE), memory processing (θ-low-β CFC), and auditory-motor interactions (20-Hz rPDC) during phonological analysis. For semantic/syntactic processing, we obtained a significant effect of γ power, suggesting lexical memory retrieval or processing grammatical word categories. Based on these findings, we confirm that scalp EEG
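
    Of the four signatures tested above, band power is the simplest; the sketch below estimates δ, θ, β and γ band power for one EEG channel with Welch's method, using the band edges quoted in the abstract. The sampling rate and the signal itself are placeholders.

```python
# Minimal sketch of band-power estimation for one scalp EEG channel using
# Welch's method; band edges follow the delta/theta/beta/gamma ranges above.

import numpy as np
from scipy.signal import welch

fs = 250                                   # Hz, assumed sampling rate
eeg = np.random.randn(60 * fs)             # placeholder for one 60-s EEG channel

bands = {"delta": (1, 4), "theta": (4, 8), "beta": (13, 30), "gamma": (30, 50)}

freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)      # 2-s windows, 0.5-Hz resolution
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    power = np.trapz(psd[mask], freqs[mask])        # integrate PSD over the band
    print(f"{name:>5}: {power:.3f} (arbitrary units)")
```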

  5. Noise trauma induced plastic changes in brain regions outside the classical auditory pathway.

    PubMed

    Chen, G-D; Sheppard, A; Salvi, R

    2016-02-19

    The effects of intense noise exposure on the classical auditory pathway have been extensively investigated; however, little is known about the effects of noise-induced hearing loss on non-classical auditory areas in the brain such as the lateral amygdala (LA) and striatum (Str). To address this issue, we compared the noise-induced changes in spontaneous and tone-evoked responses from multiunit clusters (MUC) in the LA and Str with those seen in the auditory cortex (AC) in rats. High-frequency octave band noise (10-20 kHz) and narrow band noise (16-20 kHz) induced permanent threshold shifts at high frequencies within and above the noise band but not at low frequencies. While the noise trauma significantly elevated the spontaneous discharge rate (SR) in the AC, SRs in the LA and Str were only slightly increased across all frequencies. The high-frequency noise trauma affected tone-evoked firing rates in a frequency- and time-dependent manner, and the changes appeared to be related to the severity of the noise trauma. In the LA, tone-evoked firing rates were reduced at the high frequencies (trauma area) whereas firing rates were enhanced at the low frequencies or at the edge frequency, depending on the severity of hearing loss at the high frequencies. The firing rate temporal profile changed from a broad plateau to one sharp, delayed peak. In the AC, tone-evoked firing rates were depressed at high frequencies and enhanced at the low frequencies while the firing rate temporal profiles became substantially broader. In contrast, firing rates in the Str were generally decreased and firing rate temporal profiles became more phasic and less prolonged. The altered firing rate and pattern at low frequencies induced by high-frequency hearing loss could have perceptual consequences. The tone-evoked hyperactivity in low-frequency MUC could manifest as hyperacusis whereas the discharge pattern changes could affect temporal resolution and integration. PMID:26701290

  6. Quantitative complexity analysis in multi-channel intracranial EEG recordings from epilepsy brains

    PubMed Central

    Liu, Chang-Chia; Pardalos, Panos M.; Chaovalitwongse, W. Art; Shiau, Deng-Shan; Ghacibeh, Georges; Suharitdamrong, Wichai; Sackellares, J. Chris

    2008-01-01

    Epilepsy is a brain disorder characterized clinically by temporary but recurrent disturbances of brain function that may or may not be associated with destruction or loss of consciousness and abnormal behavior. The human brain is composed of more than 10^10 neurons, each of which receives electrical impulses, known as action potentials, from other neurons via synapses, and sends electrical impulses via a single output line (the axon) to a similar number of neurons. When neuronal networks are active, they produce a change in voltage potential, which can be captured by an electroencephalogram (EEG). The EEG recordings represent time series that correspond to neurological activity as a function of time. By analyzing the EEG recordings, we sought to evaluate the degree of underlying dynamical complexity prior to seizure onset. Through the utilization of dynamical measurements, it is possible to classify the state of the brain according to the underlying dynamical properties of the EEG recordings. In results from two patients with temporal lobe epilepsy (TLE), the degree of complexity was observed to converge to lower values prior to epileptic seizures, in epileptic as well as non-epileptic regions. The dynamical measurements appear to reflect changes in the EEG's dynamical structure. We suggest that nonlinear dynamical analysis can provide useful information for detecting relative changes in brain dynamics, which cannot be detected by conventional linear analysis. PMID:19079790
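
    The abstract does not name the specific dynamical measure used; as one common stand-in, the sketch below estimates sample entropy, a model-free complexity index that likewise decreases as a signal becomes more regular. The embedding dimension and tolerance are illustrative choices, not the study's settings.

```python
# Sample entropy as an illustrative complexity measure for an EEG-like signal.
# Lower values indicate a more regular (less complex) signal.

import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy of a 1-D signal x with embedding dimension m."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    n = len(x)

    def count_matches(dim):
        templates = np.array([x[i:i + dim] for i in range(n - dim)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)  # Chebyshev distance
            count += np.sum(dist <= r)
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(0)
noisy = rng.standard_normal(1000)                    # irregular segment
regular = np.sin(np.linspace(0, 40 * np.pi, 1000))   # more regular segment
print(sample_entropy(noisy), sample_entropy(regular))  # regular segment scores lower
```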

  7. An online multi-channel SSVEP-based brain-computer interface using a canonical correlation analysis method

    NASA Astrophysics Data System (ADS)

    Bin, Guangyu; Gao, Xiaorong; Yan, Zheng; Hong, Bo; Gao, Shangkai

    2009-08-01

    In recent years, there has been increasing interest in using steady-state visual evoked potential (SSVEP) in brain-computer interface (BCI) systems. However, several aspects of current SSVEP-based BCI systems need improvement, specifically in relation to speed, user variation and ease of use. With these improvements in mind, this paper presents an online multi-channel SSVEP-based BCI system using a canonical correlation analysis (CCA) method for extraction of the frequency information associated with the SSVEP. The key parameters, channel location, window length and the number of harmonics, are investigated using offline data, and the results are used to guide the design of the online system. An SSVEP-based BCI system with six targets, which uses nine channel locations in the occipital and parietal lobes, a window length of 2 s and the first harmonic, is used for online testing on 12 subjects. The results show that the proposed BCI system has a high performance, achieving an average accuracy of 95.3% and an information transfer rate of 58 ± 9.6 bit min⁻¹. The positive characteristics of the proposed system are that channel selection and parameter optimization are not required, that harmonic frequencies can be used, and that user variation is low and setup is easy.
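
    A hedged sketch of the CCA step described above: each candidate stimulation frequency is represented by sine/cosine reference signals, and the frequency whose references yield the largest canonical correlation with the multi-channel EEG window is selected. The frequencies, window length and channel count below are illustrative, not the study's configuration.

```python
# CCA-based SSVEP frequency detection: correlate a multi-channel EEG window
# with sine/cosine references for each candidate frequency and pick the best.

import numpy as np
from sklearn.cross_decomposition import CCA

fs, window_s, n_channels = 250, 2.0, 9
t = np.arange(int(fs * window_s)) / fs

def references(freq, n_harmonics=1):
    """Sine/cosine reference matrix (samples x 2*n_harmonics) for one frequency."""
    cols = []
    for h in range(1, n_harmonics + 1):
        cols += [np.sin(2 * np.pi * h * freq * t), np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(cols)

def detect_frequency(eeg_segment, candidate_freqs):
    """Return the candidate frequency with the largest first canonical correlation."""
    scores = []
    for f in candidate_freqs:
        u, v = CCA(n_components=1).fit_transform(eeg_segment, references(f))
        scores.append(abs(np.corrcoef(u.ravel(), v.ravel())[0, 1]))
    return candidate_freqs[int(np.argmax(scores))]

# Toy check: an EEG window dominated by a 10-Hz component is classified as 10 Hz.
eeg = 0.5 * np.sin(2 * np.pi * 10 * t)[:, None] + 0.2 * np.random.randn(len(t), n_channels)
print(detect_frequency(eeg, [8.0, 10.0, 12.0, 15.0]))
```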

  8. Design, simulation and experimental validation of a novel flexible neural probe for deep brain stimulation and multichannel recording

    NASA Astrophysics Data System (ADS)

    Lai, Hsin-Yi; Liao, Lun-De; Lin, Chin-Teng; Hsu, Jui-Hsiang; He, Xin; Chen, You-Yin; Chang, Jyh-Yeong; Chen, Hui-Fen; Tsang, Siny; Shih, Yen-Yu I.

    2012-06-01

    An implantable micromachined neural probe with multichannel electrode arrays for both neural signal recording and electrical stimulation was designed, simulated and experimentally validated for deep brain stimulation (DBS) applications. The developed probe has a rough three-dimensional microstructure on the electrode surface to maximize the electrode-tissue contact area. The flexible, polyimide-based microelectrode arrays were each composed of a long shaft (14.9 mm in length) and 16 electrodes (5 µm thick and with a diameter of 16 µm). The ability of these arrays to record and stimulate specific areas in a rat brain was evaluated. Moreover, we have developed a finite element model (FEM) applied to an electric field to evaluate the volume of tissue activated (VTA) by DBS as a function of the stimulation parameters. The signal-to-noise ratio ranged from 4.4 to 5 over a 50 day recording period, indicating that the laboratory-designed neural probe is reliable and may be used successfully for long-term recordings. The somatosensory evoked potentials (SSEP) obtained by thalamic stimulation and the in vivo electrode-electrolyte interface impedance measurements were stable for 50 days, demonstrating that the neural probe is feasible for long-term stimulation. A strong, linear positive correlation was observed among the simulated VTA, the absolute value of the SSEP during the 200 ms post-stimulus period (ΣSSEP) and c-Fos expression, indicating that the simulated VTA has perfect sensitivity to predict the evoked responses (c-Fos expression). This laboratory-designed neural probe and its FEM simulation represent a simple, functionally effective technique for studying DBS and neural recordings in animal models.

  9. Estimation of Temporary Change of Brain Activities in Auditory Oddball Paradigm

    NASA Astrophysics Data System (ADS)

    Fukami, Tadanori; Koyanagi, Yusuke; Tanno, Yukinori; Shimada, Takamasa; Akatsuka, Takao; Saito, Yoichi

    In this research, we estimated the temporal change of brain activities in an auditory oddball paradigm by moving an analysis time window. An advantage of this method is that it can acquire rough changes of activated areas even with data having low time resolution. Eight normal subjects participated in the study, which consisted of a random series of 30 target and 70 nontarget stimuli. We investigated the activated areas in three analysis time sections: from stimulus onset to 5 seconds after the stimulus (time section A), from 2 to 7 seconds after (B), and from 4 to 9 seconds after (C). In time section A, the representative activated areas included regions encompassing the superior temporal gyrus centered around the inferior frontal gyrus, the left precentral gyrus corresponding to Brodmann area 6 (BA 6), the right fusiform gyrus corresponding to BA 20, the bilateral medial frontal gyrus, and the right inferior temporal gyrus. In B, activations were seen in the bilateral cerebellum, the inferior frontal gyrus, and a region including the left motor area. In C, the bilateral postcentral gyrus, left cingulate gyrus, right cerebellum and right insula were activated. Most activations were consistent with previous studies.
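
    The moving analysis window can be expressed compactly in code; the sketch below extracts the three overlapping 5-s sections (starting 0, 2 and 4 s after a stimulus onset) from a continuous recording. The sampling rate, onsets and data are placeholders.

```python
# Sketch of the three overlapping 5-s analysis sections (A, B, C) following
# each stimulus onset, as described above.  All numbers except the window
# offsets and length are placeholders.

import numpy as np

fs = 100                                     # Hz, assumed sampling rate
data = np.random.randn(600 * fs)             # placeholder continuous recording
onsets_s = np.arange(5.0, 595.0, 6.0)        # assumed stimulus onset times (s)

def windows(onset_s, starts=(0.0, 2.0, 4.0), length_s=5.0):
    """Return the three analysis sections following one stimulus onset."""
    return [data[int((onset_s + s) * fs): int((onset_s + s + length_s) * fs)]
            for s in starts]

section_a, section_b, section_c = windows(onsets_s[0])
print(section_a.shape, section_b.shape, section_c.shape)   # (500,) each
```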

  10. Hyperpolarization-independent maturation and refinement of GABA/glycinergic connections in the auditory brain stem.

    PubMed

    Lee, Hanmi; Bach, Eva; Noh, Jihyun; Delpire, Eric; Kandler, Karl

    2016-03-01

    During development GABA and glycine synapses are initially excitatory before they gradually become inhibitory. This transition is due to a developmental increase in the activity of neuronal potassium-chloride cotransporter 2 (KCC2), which shifts the chloride equilibrium potential (ECl) to values more negative than the resting membrane potential. While the role of early GABA and glycine depolarizations in neuronal development has become increasingly clear, the role of the transition to hyperpolarization in synapse maturation and circuit refinement has remained an open question. Here we investigated this question by examining the maturation and developmental refinement of GABA/glycinergic and glutamatergic synapses in the lateral superior olive (LSO), a binaural auditory brain stem nucleus, in KCC2-knockdown mice, in which GABA and glycine remain depolarizing. We found that many key events in the development of synaptic inputs to the LSO, such as changes in neurotransmitter phenotype, strengthening and elimination of GABA/glycinergic connection, and maturation of glutamatergic synapses, occur undisturbed in KCC2-knockdown mice compared with wild-type mice. These results indicate that maturation of inhibitory and excitatory synapses in the LSO is independent of the GABA and glycine depolarization-to-hyperpolarization transition. PMID:26655825

  11. Non-invasive Brain Stimulation and Auditory Verbal Hallucinations: New Techniques and Future Directions

    PubMed Central

    Moseley, Peter; Alderson-Day, Ben; Ellison, Amanda; Jardri, Renaud; Fernyhough, Charles

    2016-01-01

    Auditory verbal hallucinations (AVHs) are the experience of hearing a voice in the absence of any speaker. Results from recent attempts to treat AVHs with neurostimulation (rTMS or tDCS) to the left temporoparietal junction have not been conclusive, but suggest that it may be a promising treatment option for some individuals. Some evidence suggests that the therapeutic effect of neurostimulation on AVHs may result from modulation of cortical areas involved in the ability to monitor the source of self-generated information. Here, we provide a brief overview of cognitive models and neurostimulation paradigms associated with treatment of AVHs, and discuss techniques that could be explored in the future to improve the efficacy of treatment, including alternating current and random noise stimulation. Technical issues surrounding the use of neurostimulation as a treatment option are discussed (including methods to localize the targeted cortical area, and the state-dependent effects of brain stimulation), as are issues surrounding the acceptability of neurostimulation for adolescent populations and individuals who experience qualitatively different types of AVH. PMID:26834541

  12. Brain electrical activity evoked by mental formation of auditory expectations and images.

    PubMed

    Janata, P

    2001-01-01

    Evidence for the brain's derivation of explicit expectancies in an ongoing sensory context has been well established by studies of the P300 and processing negativity (PN) components of the event-related potential (ERP). "Emitted potentials" generated in the absence of sensory input by unexpected stimulus omissions also exhibit a P300 component and provide another perspective on patterns of brain activity related to the processing of expectancies. The studies described herein extend earlier emitted potential findings in several aspects. First, high-density (128-channel) EEG recordings are used for topographical mapping of emitted potentials. Second, the primary focus is on emitted potential components preceding the P300, i.e. those components that are more likely to resemble ERP components associated with sensory processing. Third, the dependence of emitted potentials on attention is assessed. Fourth, subjects' knowledge of the structure of an auditory stimulus sequence is modulated so that emitted potentials can be compared between conditions that are identical in physical aspects but differ in terms of subjects' expectations regarding the sequence structure. Finally, a novel task is used to elicit emitted potentials, in which subjects explicitly imagine the continuations of simple melodies. In this task, subjects mentally complete melodic fragments in the appropriate tempo, even though they know with absolute certainty that no sensory stimulus will occur. Emitted potentials were elicited only when subjects actively formed expectations or images. The topographies of the initial portion of the emitted potentials were significantly correlated with the N100 topography elicited by corresponding acoustic stimuli, but uncorrelated with the topographies of corresponding silence control periods. PMID:11302397

  13. Klinefelter syndrome has increased brain responses to auditory stimuli and motor output, but not to visual stimuli or Stroop adaptation

    PubMed Central

    Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders; Fedder, Jens; Laurberg, Peter; Østergaard, John R.; Hertz, Jens Michael; Pedersen, Anders Degn; Gravholt, Claus Højbjerg

    2016-01-01

    Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words “GREEN” or “RED” were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying “GREEN” or “RED” had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system. PMID:26958463

  14. Klinefelter syndrome has increased brain responses to auditory stimuli and motor output, but not to visual stimuli or Stroop adaptation.

    PubMed

    Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders; Fedder, Jens; Laurberg, Peter; Østergaard, John R; Hertz, Jens Michael; Pedersen, Anders Degn; Gravholt, Claus Højbjerg

    2016-01-01

    Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words "GREEN" or "RED" were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying "GREEN" or "RED" had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system. PMID:26958463

  15. Mother’s voice and heartbeat sounds elicit auditory plasticity in the human brain before full gestation

    PubMed Central

    Webb, Alexandra R.; Heller, Howard T.; Benson, Carol B.; Lahav, Amir

    2015-01-01

    Brain development is largely shaped by early sensory experience. However, it is currently unknown whether, how early, and to what extent the newborn’s brain is shaped by exposure to maternal sounds when the brain is most sensitive to early life programming. The present study examined this question in 40 infants born extremely prematurely (between 25- and 32-wk gestation) in the first month of life. Newborns were randomized to receive auditory enrichment in the form of audio recordings of maternal sounds (including their mother’s voice and heartbeat) or routine exposure to hospital environmental noise. The groups were otherwise medically and demographically comparable. Cranial ultrasonography measurements were obtained at 30 ± 3 d of life. Results show that newborns exposed to maternal sounds had a significantly larger auditory cortex (AC) bilaterally compared with control newborns receiving standard care. The magnitude of the right and left AC thickness was significantly correlated with gestational age but not with the duration of sound exposure. Measurements of head circumference and the widths of the frontal horn (FH) and the corpus callosum (CC) were not significantly different between the two groups. This study provides evidence for experience-dependent plasticity in the primary AC before the brain has reached full-term maturation. Our results demonstrate that despite the immaturity of the auditory pathways, the AC is more adaptive to maternal sounds than environmental noise. Further studies are needed to better understand the neural processes underlying this early brain plasticity and its functional implications for future hearing and language development. PMID:25713382

  16. Brain-computer interfaces using capacitive measurement of visual or auditory steady-state responses

    NASA Astrophysics Data System (ADS)

    Baek, Hyun Jae; Kim, Hyun Seok; Heo, Jeong; Lim, Yong Gyu; Park, Kwang Suk

    2013-04-01

    Objective. Brain-computer interface (BCI) technologies have been intensely studied to provide alternative communication tools entirely independent of neuromuscular activities. Current BCI technologies use electroencephalogram (EEG) acquisition methods that require unpleasant gel injections, impractical preparations and clean-up procedures. The next generation of BCI technologies requires practical, user-friendly, nonintrusive EEG platforms in order to facilitate the application of laboratory work in real-world settings. Approach. A capacitive electrode that does not require an electrolytic gel or direct electrode-scalp contact is a potential alternative to the conventional wet electrode in future BCI systems. We have proposed a new capacitive EEG electrode that contains a conductive polymer-sensing surface, which enhances electrode performance. This paper presents results from five subjects who exhibited visual or auditory steady-state responses according to BCI using these new capacitive electrodes. The steady-state visual evoked potential (SSVEP) spelling system and the auditory steady-state response (ASSR) binary decision system were employed. Main results. Offline tests demonstrated BCI performance high enough to be used in a BCI system (accuracy: 95.2%, ITR: 19.91 bpm for SSVEP BCI (6 s), accuracy: 82.6%, ITR: 1.48 bpm for ASSR BCI (14 s)) with the analysis time being slightly longer than that when wet electrodes were employed with the same BCI system (accuracy: 91.2%, ITR: 25.79 bpm for SSVEP BCI (4 s), accuracy: 81.3%, ITR: 1.57 bpm for ASSR BCI (12 s)). Subjects performed online BCI under the SSVEP paradigm in copy spelling mode and under the ASSR paradigm in selective attention mode with a mean information transfer rate (ITR) of 17.78 ± 2.08 and 0.7 ± 0.24 bpm, respectively. Significance. The results of these experiments demonstrate the feasibility of using our capacitive EEG electrode in BCI systems. This capacitive electrode may become a flexible and
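
    The information transfer rates quoted above relate accuracy, number of targets and selection time through the standard Wolpaw ITR formula; the sketch below implements that generic definition, which may differ in detail from the exact computation used in the study.

```python
# Wolpaw information transfer rate (bits per minute) for a BCI selection,
# given the number of targets, classification accuracy and selection time.

import math

def wolpaw_itr(n_targets: int, accuracy: float, selection_time_s: float) -> float:
    """Information transfer rate in bits per minute (standard Wolpaw definition)."""
    n, p = n_targets, accuracy
    bits = math.log2(n)
    if 0.0 < p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / selection_time_s

# Example: a binary (yes/no) decision at 82.6% accuracy every 14 s.
print(f"{wolpaw_itr(2, 0.826, 14.0):.2f} bpm")
```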

  17. Are you listening? Brain activation associated with sustained nonspatial auditory attention in the presence and absence of stimulation.

    PubMed

    Seydell-Greenwald, Anna; Greenberg, Adam S; Rauschecker, Josef P

    2014-05-01

    Neuroimaging studies investigating the voluntary (top-down) control of attention largely agree that this process recruits several frontal and parietal brain regions. Since most studies used attention tasks requiring several higher-order cognitive functions (e.g. working memory, semantic processing, temporal integration, spatial orienting) as well as different attentional mechanisms (attention shifting, distractor filtering), it is unclear what exactly the observed frontoparietal activations reflect. The present functional magnetic resonance imaging study investigated, within the same participants, signal changes in (1) a "Simple Attention" task in which participants attended to a single melody, (2) a "Selective Attention" task in which they simultaneously ignored another melody, and (3) a "Beep Monitoring" task in which participants listened in silence for a faint beep. Compared to resting conditions with identical stimulation, all tasks produced robust activation increases in auditory cortex, cross-modal inhibition in visual and somatosensory cortex, and decreases in the default mode network, indicating that participants were indeed focusing their attention on the auditory domain. However, signal increases in frontal and parietal brain areas were only observed for tasks 1 and 2, but completely absent for task 3. These results lead to the following conclusions: under most conditions, frontoparietal activations are crucial for attention since they subserve higher-order cognitive functions inherently related to attention. However, under circumstances that minimize other demands, nonspatial auditory attention in the absence of stimulation can be maintained without concurrent frontal or parietal activations. PMID:23913818

  18. Repetition suppression and repetition enhancement underlie auditory memory-trace formation in the human brain: an MEG study.

    PubMed

    Recasens, Marc; Leung, Sumie; Grimm, Sabine; Nowak, Rafal; Escera, Carles

    2015-03-01

    The formation of echoic memory traces has traditionally been inferred from the enhanced responses to its deviations. The mismatch negativity (MMN), an auditory event-related potential (ERP) elicited between 100 and 250ms after sound deviation is an indirect index of regularity encoding that reflects a memory-based comparison process. Recently, repetition positivity (RP) has been described as a candidate ERP correlate of direct memory trace formation. RP consists of repetition suppression and enhancement effects occurring in different auditory components between 50 and 250ms after sound onset. However, the neuronal generators engaged in the encoding of repeated stimulus features have received little interest. This study intends to investigate the neuronal sources underlying the formation and strengthening of new memory traces by employing a roving-standard paradigm, where trains of different frequencies and different lengths are presented randomly. Source generators of repetition enhanced (RE) and suppressed (RS) activity were modeled using magnetoencephalography (MEG) in healthy subjects. Our results show that, in line with RP findings, N1m (~95-150ms) activity is suppressed with stimulus repetition. In addition, we observed the emergence of a sustained field (~230-270ms) that showed RE. Source analysis revealed neuronal generators of RS and RE located in both auditory and non-auditory areas, like the medial parietal cortex and frontal areas. The different timing and location of neural generators involved in RS and RE points to the existence of functionally separated mechanisms devoted to acoustic memory-trace formation in different auditory processing stages of the human brain. PMID:25528656

  19. On the temporal window of auditory-brain system in connection with subjective responses

    NASA Astrophysics Data System (ADS)

    Mouri, Kiminori

    2003-08-01

    The human auditory-brain system processes information extracted from the autocorrelation function (ACF) of the source signal and the interaural cross-correlation function (IACF) of binaural sound signals, which are associated with the left and right cerebral hemispheres, respectively. The purpose of this dissertation is to determine the desirable temporal window (2T: integration interval) for the ACF and IACF mechanisms. For the ACF mechanism, the change of Φ(0), i.e., the power of the ACF, was associated with the change of loudness, and it is shown that the recommended temporal window is given as about 30(τe)min [s]. The value of (τe)min is the minimum value of the effective duration of the running ACF of the source signal. It is worth noting from the EEG experiment that the most preferred delay time of the first reflection sound is determined by the piece indicating (τe)min in the source signal. For the IACF mechanism, the temporal window is determined as follows: the measured range of τIACC corresponding to the subjective angle of the moving image sound depends on the temporal window. Here, the moving image was simulated by the use of two loudspeakers located at ±20° in the horizontal plane, reproducing amplitude-modulated band-limited noise alternately. It is found that the temporal window has a wide range of values from 0.03 to 1 [s] for modulation frequencies below 0.2 Hz. Thesis advisor: Yoichi Ando. Copies of this thesis, written in English, can be obtained from Kiminori Mouri, 5-3-3-1110 Harayama-dai, Sakai city, Osaka 590-0132, Japan. E-mail address: km529756@aol.com
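
    A simplified sketch of the two ACF quantities referred to above, Φ(0) and the effective duration τe (taken here as the delay at which the envelope of the normalized ACF first decays to 0.1, i.e. -10 dB); the signal, window length and decay criterion are illustrative simplifications of the definitions in the abstract.

```python
# Running ACF of one 2T window of a toy signal: Phi(0) (power term) and a
# simplified effective-duration estimate tau_e from the -10 dB envelope point.

import numpy as np
from scipy.signal import find_peaks

fs = 8000
t = np.arange(0, 2.0, 1 / fs)
signal = np.sin(2 * np.pi * 440 * t) * np.exp(-t / 0.5)   # toy decaying tone

def running_acf(x, start, two_t=0.5):
    """Normalized ACF of the window [start, start + 2T) seconds of x."""
    seg = x[int(start * fs): int((start + two_t) * fs)]
    acf = np.correlate(seg, seg, mode="full")[len(seg) - 1:]
    return acf[0], acf / acf[0]            # Phi(0) and normalized ACF

def effective_duration(norm_acf):
    """Delay (s) at which the envelope of |ACF| first decays below 0.1."""
    peaks, _ = find_peaks(np.abs(norm_acf))
    below = peaks[np.abs(norm_acf[peaks]) < 0.1]
    return below[0] / fs if below.size else len(norm_acf) / fs

phi0, norm_acf = running_acf(signal, start=0.0)
print(f"Phi(0) = {phi0:.1f}, tau_e ~ {effective_duration(norm_acf) * 1e3:.1f} ms")
```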

  20. From Complex B1 Mapping to Local SAR Estimation for Human Brain MR Imaging Using Multi-channel Transceiver Coil at 7T

    PubMed Central

    Zhang, Xiaotong; Schmitter, Sebastian; Van de Moortel, Pierre-François; Liu, Jiaen

    2014-01-01

    Elevated Specific Absorption Rate (SAR) associated with increased main magnetic field strength remains as a major safety concern in ultra-high-field (UHF) Magnetic Resonance Imaging (MRI) applications. The calculation of local SAR requires the knowledge of the electric field induced by radiofrequency (RF) excitation, and the local electrical properties of tissues. Since electric field distribution cannot be directly mapped in conventional MR measurements, SAR estimation is usually performed using numerical model-based electromagnetic simulations which, however, are highly time consuming and cannot account for the specific anatomy and tissue properties of the subject undergoing a scan. In the present study, starting from the measurable RF magnetic fields (B1) in MRI, we conducted a series of mathematical deduction to estimate the local, voxel-wise and subject-specific SAR for each single coil element using a multi-channel transceiver array coil. We first evaluated the feasibility of this approach in numerical simulations including two different human head models. We further conducted experimental study in a physical phantom and in two human subjects at 7T using a multi-channel transceiver head coil. Accuracy of the results is discussed in the context of predicting local SAR in the human brain at UHF MRI using multi-channel RF transmission. PMID:23508259
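
    For background, the local (point) SAR being estimated above is conventionally written as SAR = σ|E|²/(2ρ) for a time-harmonic field amplitude; the sketch below evaluates this expression voxel-wise given an electric-field map, conductivity and density. It does not reproduce the study's B1-based estimation of the field itself, and all values are placeholders.

```python
# Voxel-wise point SAR = sigma * |E|^2 / (2 * rho), given a complex E-field
# amplitude map and tissue property maps.  All arrays are placeholders.

import numpy as np

shape = (64, 64, 64)                                    # assumed voxel grid
e_field = np.full(shape, 30 + 10j, dtype=complex)       # V/m, placeholder RF E-field amplitude
sigma = np.full(shape, 0.6)                             # S/m, placeholder conductivity
rho = np.full(shape, 1040.0)                            # kg/m^3, placeholder tissue density

local_sar = sigma * np.abs(e_field) ** 2 / (2.0 * rho)  # W/kg per voxel
print(f"peak local SAR: {local_sar.max():.3f} W/kg")
```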

  1. Plasticity in the neural coding of auditory space in the mammalian brain

    NASA Astrophysics Data System (ADS)

    King, Andrew J.; Parsons, Carl H.; Moore, David R.

    2000-10-01

    Sound localization relies on the neural processing of monaural and binaural spatial cues that arise from the way sounds interact with the head and external ears. Neurophysiological studies of animals raised with abnormal sensory inputs show that the map of auditory space in the superior colliculus is shaped during development by both auditory and visual experience. An example of this plasticity is provided by monaural occlusion during infancy, which leads to compensatory changes in auditory spatial tuning that tend to preserve the alignment between the neural representations of visual and auditory space. Adaptive changes also take place in sound localization behavior, as demonstrated by the fact that ferrets raised and tested with one ear plugged learn to localize as accurately as control animals. In both cases, these adjustments may involve greater use of monaural spectral cues provided by the other ear. Although plasticity in the auditory space map seems to be restricted to development, adult ferrets show some recovery of sound localization behavior after long-term monaural occlusion. The capacity for behavioral adaptation is, however, task dependent, because auditory spatial acuity and binaural unmasking (a measure of the spatial contribution to the "cocktail party effect") are permanently impaired by chronically plugging one ear, both in infancy but especially in adulthood. Experience-induced plasticity allows the neural circuitry underlying sound localization to be customized to individual characteristics, such as the size and shape of the head and ears, and to compensate for natural conductive hearing losses, including those associated with middle ear disease in infancy.

  2. Management of auditory hallucinations as a sequela of traumatic brain injury: a case report and a relevant literature review.

    PubMed

    Dobry, Yuriy; Novakovic, Vladan; Barkin, Robert L; Sundaram, Vikram K

    2014-01-01

    A patient with progressively worsening auditory hallucinations and a 30-year history of traumatic brain injury (TBI) is reported. To formulate a comprehensive diagnostic and treatment approach to patients with auditory sensory disturbances and other neuropsychiatric sequelae of TBI, an electronic search of the major behavioral science databases (PubMed, PsycINFO, Medline) and a textbook review were conducted to retrieve studies detailing the clinical characteristics, biological mechanisms, and therapeutic approaches to post-TBI psychosis. Additional references were incorporated from the bibliographies of the retrieved articles. Although infrequent, auditory hallucinations are a debilitating complication of TBI that can manifest 4-5 years after the occurrence of TBI. Because the age range of TBI survivors is 15-24 years, and the chance of developing post-TBI psychosis is reported to be up to 20%, this chronic neuropsychiatric complication and the available treatment options warrant close scrutiny from the clinical and biomedical research communities. Our case report and literature review demonstrate a clear need for large, well-designed randomized trials to compare the properties and efficacies of different, available, and promising pharmacotherapy agents for the treatment of post-TBI psychosis. PMID:24263164

  3. Brain systems for encoding and retrieval of auditory-verbal memory. An in vivo study in humans.

    PubMed

    Fletcher, P C; Frith, C D; Grasby, P M; Shallice, T; Frackowiak, R S; Dolan, R J

    1995-04-01

    Long-term auditory-verbal memory comprises, at a neuropsychological level, a number of distinct cognitive processes. In the present study we determined the brain systems engaged during encoding (experiment 1) and retrieval (experiment 2) of episodic auditory-verbal material. In the separate experiments, PET measurements of regional cerebral blood flow (rCBF), an index of neural activity, were performed in normal volunteers during either the encoding or the retrieval of paired word associates. In experiment 1, a dual task interference paradigm was used to isolate areas involved in episodic encoding from those which would be concurrently activated by other cognitive processes associated with the presentation of paired associates, notably priming. In experiment 2, we used the cued retrieval of paired associates from episodic or from semantic memory in order to isolate the neural correlates of episodic memories. Encoding of episodic memory was associated with activation of the left prefrontal cortex and the retrosplenial area of the cingulate cortex, while retrieval from episodic memory was associated with activation of the precuneus bilaterally and of the right prefrontal cortex. These results are compatible with the patterns of activation reported in a previous PET memory experiment in which encoding and retrieval were studied concurrently. They also indicate that separate brain systems are engaged during the encoding and retrieval phases of episodic auditory-verbal memory. Retrieval from episodic memory engages a different, but overlapping, system to that engaged by retrieval from semantic memory, a finding that lends functional anatomical support to this neuropsychological distinction. PMID:7735882

  4. The Application of the International Classification of Functioning, Disability and Health to Functional Auditory Consequences of Mild Traumatic Brain Injury.

    PubMed

    Werff, Kathy R Vander

    2016-08-01

    This article reviews the auditory consequences of mild traumatic brain injury (mTBI) within the context of the International Classification of Functioning, Disability and Health (ICF). Because of growing awareness of mTBI as a public health concern and the diverse and heterogeneous nature of the individual consequences, it is important to provide audiologists and other health care providers with a better understanding of potential implications in the assessment of levels of function and disability for individual interdisciplinary remediation planning. In consideration of body structures and function, the mechanisms of injury that may result in peripheral or central auditory dysfunction in mTBI are reviewed, along with a broader scope of effects of injury to the brain. The activity limitations and participation restrictions that may affect assessment and management in the context of an individual's personal factors and their environment are considered. Finally, a review of management strategies for mTBI from an audiological perspective as part of a multidisciplinary team is included. PMID:27489400

  5. Altered Small-World Brain Networks in Temporal Lobe in Patients with Schizophrenia Performing an Auditory Oddball Task

    PubMed Central

    Yu, Qingbao; Sui, Jing; Rachakonda, Srinivas; He, Hao; Pearlson, Godfrey; Calhoun, Vince D.

    2011-01-01

    The functional architecture of the human brain has been extensively described in terms of complex networks characterized by efficient small-world features. Recent functional magnetic resonance imaging (fMRI) studies have found altered small-world topological properties of brain functional networks in patients with schizophrenia (SZ) during the resting state. However, little is known about the small-world properties of brain networks in the context of a task. In this study, we investigated the topological properties of human brain functional networks derived from fMRI during an auditory oddball (AOD) task. Data were obtained from 20 healthy controls and 20 patients with SZ. Left and right task-related networks, each consisting of the most strongly activated voxels in the temporal lobe of the corresponding hemisphere, were analyzed separately; all voxels were identified by group independent component analysis. Connectivity of the left and right task-related networks was estimated by partial correlation analysis and thresholded to construct a set of undirected graphs. Small-worldness values were decreased in both hemispheres in SZ. In addition, SZ showed longer shortest path length and lower global efficiency only in the left task-related networks. These results suggest that small-world attributes of the task-related networks are altered in SZ during the AOD task, providing further evidence for dysfunctional connectivity in SZ. PMID:21369355
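
    As a rough illustration of the graph measures named above (clustering, shortest path length, global efficiency, small-worldness), the sketch below thresholds a (partial) correlation matrix into an undirected graph and compares it with size-matched random graphs. The threshold, the random-graph reference, and the use of networkx are assumptions for illustration, not the authors' pipeline:

      import numpy as np
      import networkx as nx

      def small_world_metrics(corr, threshold, n_rand=20, seed=0):
          """Threshold a correlation matrix into an undirected graph and return
          clustering C, characteristic path length L, global efficiency, and
          small-worldness sigma relative to size-matched random graphs."""
          adj = np.abs(corr) > threshold
          np.fill_diagonal(adj, False)
          G = nx.from_numpy_array(adj.astype(int))
          C = nx.average_clustering(G)
          L = nx.average_shortest_path_length(G)   # assumes the thresholded graph is connected
          e_glob = nx.global_efficiency(G)
          c_rand, l_rand = [], []
          for k in range(n_rand):
              R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=seed + k)
              if not nx.is_connected(R):
                  R = R.subgraph(max(nx.connected_components(R), key=len))
              c_rand.append(nx.average_clustering(R))
              l_rand.append(nx.average_shortest_path_length(R))
          sigma = (C / np.mean(c_rand)) / (L / np.mean(l_rand))
          return C, L, e_glob, sigma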

  6. Intracranial Recording and Source Localization of Auditory Brain Responses Elicited at the 50 ms Latency in Three Children Aged from 3 to 16 Years

    PubMed Central

    Asano, Eishi; Gumenyuk, Valentina; Juhász, Csaba; Wagner, Michael; Rothermel, Robert D.; Chugani, Harry T.

    2013-01-01

    Maturational studies of the auditory-evoked brain response at the 50 ms latency provide insight into why this response is aberrant in a number of psychiatric disorders of developmental origin. Here, using intracranial recordings, we found that the neuronal activity of the primary contributors to this response can be localised to the lateral part of Heschl's gyrus as early as 3.5 years of age. These results support the notion that deviations in the cognitive function(s) attributed to the auditory P50 in adults might involve abnormalities in neuronal activity of the frontal lobe or in the interaction between the frontal and temporal lobes. Validation and localisation of the progenitors of the adults' P50 in young children is a much-needed step toward understanding the biological significance of the different subcomponents that comprise the auditory P50 in the adult brain. In combination with other approaches investigating the neuronal mechanisms of the auditory P50, the present results contribute to a greater understanding of how and why the neuronal activity underlying this response is aberrant in a number of brain dysfunctions. Moreover, the present source localisation results for the auditory response at the 50 ms latency might be useful in paediatric neurosurgical practice. PMID:19701702

  7. Brain Correlates of Early Auditory Processing Are Attenuated by Expectations for Time and Pitch

    ERIC Educational Resources Information Center

    Lange, Kathrin

    2009-01-01

    The present study investigated how auditory processing is modulated by expectations for time and pitch by analyzing reaction times and event-related potentials (ERPs). In two experiments, tone sequences were presented to the participants, who had to discriminate whether the last tone of the sequence contained a short gap or was continuous…

  8. Characteristics of Auditory Agnosia in a Child with Severe Traumatic Brain Injury: A Case Report

    ERIC Educational Resources Information Center

    Hattiangadi, Nina; Pillion, Joseph P.; Slomine, Beth; Christensen, James; Trovato, Melissa K.; Speedie, Lynn J.

    2005-01-01

    We present a case that is unusual in many respects from other documented incidences of auditory agnosia, including the mechanism of injury, age of the individual, and location of neurological insult. The clinical presentation is one of disturbance in the perception of spoken language, music, pitch, emotional prosody, and temporal auditory…

  9. UNRECOGNIZED ERRORS DUE TO ANALOG FILTERING OF THE BRAIN-STEM AUDITORY EVOKED RESPONSE

    EPA Science Inventory

    The brainstem auditory evoked response (BAER) is used as a tool both in clinical evaluation and in toxicological research, where the subject is most often the laboratory rat. As in other species, interpretation of the rat BAER waveform is based on the latencies and amplitudes of ...

  10. Reduced resting-state brain activity in the default mode network in children with (central) auditory processing disorders

    PubMed Central

    2014-01-01

    Background In recent years there has been growing interest in central auditory processing disorder ((C)APD). However, the neural correlates of (C)APD are poorly understood. Previous neuroimaging experiments have shown changes in the intrinsic activity of the brain in various cognitive deficits and brain disorders. The present study investigated spontaneous brain activity in (C)APD subjects with resting-state fMRI (rs-fMRI). Methods Thirteen children diagnosed with (C)APD and fifteen age- and gender-matched controls participated in an rs-fMRI study during which they were asked to relax while keeping their eyes open. Two different rs-fMRI analysis techniques were used: Regional Homogeneity (ReHo) and Independent Component Analysis (ICA), a combination that has rarely been applied. Results Both methods of analysis showed comparable patterns of DMN activity within groups. Additionally, ReHo analysis revealed increased co-activation of the superior frontal gyrus and the posterior cingulate cortex/precuneus in controls compared with the (C)APD group. ICA yielded inconsistent results across groups. Conclusions Our ReHo results suggest that children with (C)APD present reduced regional homogeneity in brain regions considered part of the default mode network (DMN). These findings might contribute to a better understanding of the neural mechanisms of (C)APD. PMID:25261349
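
    ReHo is conventionally computed as Kendall's coefficient of concordance (KCC) over the time series of a voxel and its immediate neighbors (typically 27 voxels). The abstract does not spell out the formula, so the sketch below is a generic implementation of that convention rather than the authors' exact procedure:

      import numpy as np
      from scipy.stats import rankdata

      def kendalls_w(ts):
          """Kendall's W (ReHo) for a (K, n) array holding K voxel time series of
          length n; 1 means perfectly concordant ranks, 0 means no concordance."""
          K, n = ts.shape
          ranks = np.apply_along_axis(rankdata, 1, ts)  # rank each time series over time
          rank_sums = ranks.sum(axis=0)                 # rank sum at each time point
          s = np.sum((rank_sums - rank_sums.mean()) ** 2)
          return 12.0 * s / (K ** 2 * (n ** 3 - n))

      # Example: a voxel plus its 26 neighbors, 180 time points of synthetic data.
      print(kendalls_w(np.random.randn(27, 180)))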

  11. Far-field brainstem responses evoked by vestibular and auditory stimuli exhibit increases in interpeak latency as brain temperature is decreased

    NASA Technical Reports Server (NTRS)

    Hoffman, L. F.; Horowitz, J. M.

    1984-01-01

    The effect of decreasing brain temperature on the brainstem auditory evoked response (BAER) in rats was investigated. Voltage pulses applied to a piezoelectric crystal attached to the skull delivered bone-conducted vibratory stimuli to the auditory system. The responses were recorded at brain temperatures of 37 C and 34 C. The peaks of the BAER recorded at 34 C were delayed relative to those of the 37 C waveform, and the later peaks were more delayed than the earlier peaks. These results indicate that interpeak latency increases as brain temperature is decreased. Preliminary experiments, in which responses to brief angular accelerations were used to measure the brainstem vestibular evoked response (BVER), have also indicated increases in interpeak latency as brain temperature is lowered.

  12. A Trade-Off between Somatosensory and Auditory Related Brain Activity during Object Naming But Not Reading

    PubMed Central

    Hope, Thomas M.H.; Prejawa, Susan; Parker Jones, ‘Ōiwi; Vitkovitch, Melanie; Price, Cathy J.

    2015-01-01

    The parietal operculum, particularly the cytoarchitectonic area OP1 of the secondary somatosensory area (SII), is involved in somatosensory feedback. Using fMRI with 58 human subjects, we investigated task-dependent differences in SII/OP1 activity during three familiar speech production tasks: object naming, reading and repeatedly saying “1-2-3.” Bilateral SII/OP1 was significantly suppressed (relative to rest) during object naming, to a lesser extent when repeatedly saying “1-2-3” and not at all during reading. These results cannot be explained by task difficulty but the contrasting difference between naming and reading illustrates how the demands on somatosensory activity change with task, even when motor output (i.e., production of object names) is matched. To investigate what determined SII/OP1 deactivation during object naming, we searched the whole brain for areas where activity increased as that in SII/OP1 decreased. This across subject covariance analysis revealed a region in the right superior temporal sulcus (STS) that lies within the auditory cortex, and is activated by auditory feedback during speech production. The tradeoff between activity in SII/OP1 and STS was not observed during reading, which showed significantly more activation than naming in both SII/OP1 and STS bilaterally. These findings suggest that, although object naming is more error prone than reading, subjects can afford to rely more or less on somatosensory or auditory feedback during naming. In contrast, fast and efficient error-free reading places more consistent demands on both types of feedback, perhaps because of the potential for increased competition between lexical and sublexical codes at the articulatory level. PMID:25788691

  13. Evaluating auditory stream segregation of SAM tone sequences by subjective and objective psychoacoustical tasks, and brain activity

    PubMed Central

    Dolležal, Lena-Vanessa; Brechmann, André; Klump, Georg M.; Deike, Susann

    2014-01-01

    Auditory stream segregation refers to a segregated percept of signal streams with different acoustic features. Different approaches have been pursued in studies of stream segregation. In psychoacoustics, stream segregation has mostly been investigated with a subjective task asking the subjects to report their percept. Few studies have applied an objective task in which stream segregation is evaluated indirectly by determining thresholds for a percept that depends on whether auditory streams are segregated or not. Furthermore, both perceptual measures and physiological measures of brain activity have been employed but only little is known about their relation. How the results from different tasks and measures are related is evaluated in the present study using examples relying on the ABA- stimulation paradigm that apply the same stimuli. We presented A and B signals that were sinusoidally amplitude modulated (SAM) tones providing purely temporal, spectral or both types of cues to evaluate perceptual stream segregation and its physiological correlate. Which types of cues are most prominent was determined by the choice of carrier and modulation frequencies (fmod) of the signals. In the subjective task subjects reported their percept and in the objective task we measured their sensitivity for detecting time-shifts of B signals in an ABA- sequence. As a further measure of processes underlying stream segregation we employed functional magnetic resonance imaging (fMRI). SAM tone parameters were chosen to evoke an integrated (1-stream), a segregated (2-stream), or an ambiguous percept by adjusting the fmod difference between A and B tones (Δfmod). The results of both psychoacoustical tasks are significantly correlated. BOLD responses in fMRI depend on Δfmod between A and B SAM tones. The effect of Δfmod, however, differs between auditory cortex and frontal regions suggesting differences in representation related to the degree of perceptual ambiguity of the sequences
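
    The stimuli described are SAM tones arranged in ABA- triplets, with the A-B difference carried only by the modulation frequency (Δfmod). A minimal sketch of such a sequence, using purely illustrative carrier and modulation values rather than the study's parameters:

      import numpy as np

      def sam_tone(fc, fmod, dur, fs=44100, depth=1.0):
          """Sinusoidally amplitude-modulated tone: carrier fc, modulation rate fmod (Hz)."""
          t = np.arange(int(dur * fs)) / fs
          return 0.5 * (1.0 + depth * np.sin(2 * np.pi * fmod * t)) * np.sin(2 * np.pi * fc * t)

      def aba_sequence(fc, fmod_a, fmod_b, tone_dur=0.125, n_triplets=10, fs=44100):
          """ABA- triplets built from SAM tones differing only in modulation rate;
          the '-' slot is a silent gap lasting one tone duration."""
          a = sam_tone(fc, fmod_a, tone_dur, fs)
          b = sam_tone(fc, fmod_b, tone_dur, fs)
          gap = np.zeros_like(a)
          return np.concatenate([np.concatenate([a, b, a, gap]) for _ in range(n_triplets)])

      seq = aba_sequence(fc=1000, fmod_a=30, fmod_b=90)  # illustrative Δfmod, not the study's values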

  14. A case of ataxic diplegia, mental retardation, congenital nystagmus and abnormal auditory brain stem responses showing only waves I and II.

    PubMed

    Aiba, K; Yokochi, K; Ishikawa, T

    1986-01-01

    A three-year-old boy is presented who had ataxic diplegia, mental retardation, horizontal pendular nystagmus with head nodding, and abnormal auditory brain stem responses showing only waves I and II. His clinical features coincided with recent reports in the Japanese literature of cases of a new syndrome that is congenital in origin and seen only in boys. PMID:3826555

  15. Suppression and facilitation of auditory neurons through coordinated acoustic and midbrain stimulation: investigating a deep brain stimulator for tinnitus

    NASA Astrophysics Data System (ADS)

    Offutt, Sarah J.; Ryan, Kellie J.; Konop, Alexander E.; Lim, Hubert H.

    2014-12-01

    Objective. The inferior colliculus (IC) is the primary processing center of auditory information in the midbrain and is one site of tinnitus-related activity. One potential option for suppressing the tinnitus percept is through deep brain stimulation via the auditory midbrain implant (AMI), which is designed for hearing restoration and is already being implanted in deaf patients who also have tinnitus. However, to assess the feasibility of AMI stimulation for tinnitus treatment we first need to characterize the functional connectivity within the IC. Previous studies have suggested modulatory projections from the dorsal cortex of the IC (ICD) to the central nucleus of the IC (ICC), though the functional properties of these projections need to be determined. Approach. In this study, we investigated the effects of electrical stimulation of the ICD on acoustic-driven activity within the ICC in ketamine-anesthetized guinea pigs. Main Results. We observed ICD stimulation induces both suppressive and facilitatory changes across ICC that can occur immediately during stimulation and remain after stimulation. Additionally, ICD stimulation paired with broadband noise stimulation at a specific delay can induce greater suppressive than facilitatory effects, especially when stimulating in more rostral and medial ICD locations. Significance. These findings demonstrate that ICD stimulation can induce specific types of plastic changes in ICC activity, which may be relevant for treating tinnitus. By using the AMI with electrode sites positioned with the ICD and the ICC, the modulatory effects of ICD stimulation can be tested directly in tinnitus patients.

  16. Processing of species-specific auditory patterns in the cricket brain by ascending, local, and descending neurons during standing and walking

    PubMed Central

    Zorović, M.

    2011-01-01

    The recognition of the male calling song is essential for phonotaxis in female crickets. We investigated the responses toward different models of song patterns by ascending, local, and descending neurons in the brain of standing and walking crickets. We describe results for two ascending, three local, and two descending interneurons. Characteristic dendritic and axonal arborizations of the local and descending neurons indicate a flow of auditory information from the ascending interneurons toward the lateral accessory lobes and point toward the relevance of this brain region for cricket phonotaxis. Two aspects of auditory processing were studied: the tuning of interneuron activity to pulse repetition rate and the precision of pattern copying. Whereas ascending neurons exhibited weak, low-pass properties, local neurons showed both low- and band-pass properties, and descending neurons represented clear band-pass filters. Accurate copying of single pulses was found at all three levels of the auditory pathway. Animals were walking on a trackball, which allowed an assessment of the effect that walking has on auditory processing. During walking, all neurons were additionally activated, and in most neurons, the spike rate was correlated to walking velocity. The number of spikes elicited by a chirp increased with walking only in ascending neurons, whereas the peak instantaneous spike rate of the auditory responses increased on all levels of the processing pathway. Extra spiking activity resulted in a somewhat degraded copying of the pulse pattern in most neurons. PMID:21346206

  17. Processing of species-specific auditory patterns in the cricket brain by ascending, local, and descending neurons during standing and walking.

    PubMed

    Zorović, M; Hedwig, B

    2011-05-01

    The recognition of the male calling song is essential for phonotaxis in female crickets. We investigated the responses toward different models of song patterns by ascending, local, and descending neurons in the brain of standing and walking crickets. We describe results for two ascending, three local, and two descending interneurons. Characteristic dendritic and axonal arborizations of the local and descending neurons indicate a flow of auditory information from the ascending interneurons toward the lateral accessory lobes and point toward the relevance of this brain region for cricket phonotaxis. Two aspects of auditory processing were studied: the tuning of interneuron activity to pulse repetition rate and the precision of pattern copying. Whereas ascending neurons exhibited weak, low-pass properties, local neurons showed both low- and band-pass properties, and descending neurons represented clear band-pass filters. Accurate copying of single pulses was found at all three levels of the auditory pathway. Animals were walking on a trackball, which allowed an assessment of the effect that walking has on auditory processing. During walking, all neurons were additionally activated, and in most neurons, the spike rate was correlated to walking velocity. The number of spikes elicited by a chirp increased with walking only in ascending neurons, whereas the peak instantaneous spike rate of the auditory responses increased on all levels of the processing pathway. Extra spiking activity resulted in a somewhat degraded copying of the pulse pattern in most neurons. PMID:21346206

  18. Brain activity underlying auditory perceptual learning during short period training: simultaneous fMRI and EEG recording

    PubMed Central

    2013-01-01

    Background There is an accumulating body of evidence indicating that neuronal functional specificity to basic sensory stimulation is mutable and subject to experience. Although fMRI experiments have investigated changes in brain activity after, relative to before, perceptual learning, brain activity during perceptual learning has not been explored. This work investigated brain activity related to auditory frequency discrimination learning during simultaneous EEG and fMRI recording, using a variational Bayesian approach for source localization. We investigated whether practice effects are determined solely by activity in stimulus-driven mechanisms or whether high-level attentional mechanisms, which are linked to the perceptual task, control the learning process. Results The fMRI analyses revealed significant attention- and learning-related activity in the left and right superior temporal gyri (STG) as well as in the left inferior frontal gyrus (IFG). Current sources of the simultaneously recorded EEG data were estimated using a variational Bayesian method. Analysis of the current localized to the left inferior frontal gyrus and the right superior temporal gyrus revealed gamma-band activity correlated with behavioral performance. Conclusions Rapid improvement in task performance is accompanied by plastic changes in the sensory cortex as well as in higher areas gated by selective attention. Together, the fMRI and EEG results suggest that gamma-band activity in the right STG and left IFG plays an important role during perceptual learning. PMID:23316957

  19. Potassium conductance dynamics confer robust spike-time precision in a neuromorphic model of the auditory brain stem

    PubMed Central

    Boahen, Kwabena

    2013-01-01

    A fundamental question in neuroscience is how neurons perform precise operations despite inherent variability. This question also applies to neuromorphic engineering, where low-power microchips emulate the brain using large populations of diverse silicon neurons. Biological neurons in the auditory pathway display precise spike timing, critical for sound localization and interpretation of complex waveforms such as speech, even though they are a heterogeneous population. Silicon neurons are also heterogeneous, due to a key design constraint in neuromorphic engineering: smaller transistors offer lower power consumption and more neurons per unit area of silicon, but also more variability between transistors and thus between silicon neurons. Utilizing this variability in a neuromorphic model of the auditory brain stem with 1,080 silicon neurons, we found that a low-voltage-activated potassium conductance (gKL) enables precise spike timing via two mechanisms: statically reducing the resting membrane time constant and dynamically suppressing late synaptic inputs. The relative contribution of these two mechanisms is unknown because blocking gKL in vitro eliminates dynamic adaptation but also lengthens the membrane time constant. We replaced gKL with a static leak in silico to recover the short membrane time constant and found that silicon neurons could mimic the spike-time precision of their biological counterparts, but only over a narrow range of stimulus intensities and biophysical parameters. The dynamics of gKL were required for precise spike timing robust to stimulus variation across a heterogeneous population of silicon neurons, thus explaining how neural and neuromorphic systems may perform precise operations despite inherent variability. PMID:23554436
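
    The two roles attributed to gKL above (statically shortening the resting membrane time constant and dynamically suppressing late inputs) can be illustrated with a single-compartment membrane in which the low-voltage-activated conductance is either gated by voltage or frozen at its resting value (the "static leak" control). The parameters below are generic values loosely inspired by cochlear-nucleus-style models, not the silicon-neuron or biophysical values used in the study:

      import numpy as np

      def simulate(i_inj, dt=0.01, dynamic_gkl=True):
          """Passive membrane plus a low-voltage-activated K+ conductance (gKL).
          dynamic_gkl=True: a first-order gate w tracks voltage and clamps late inputs.
          dynamic_gkl=False: gKL frozen at its resting value (static leak control)."""
          C, gL, EL = 12.0, 2.0, -65.0   # pF, nS, mV
          gKL, EK = 20.0, -90.0          # nS, mV
          winf = lambda v: 1.0 / (1.0 + np.exp(-(v + 48.0) / 6.0))
          tau_w = 1.5                     # ms
          V = EL
          w = w_rest = winf(EL)
          trace = np.empty(len(i_inj))
          for k, i in enumerate(i_inj):
              g = gKL * (w if dynamic_gkl else w_rest)
              V += dt * (-gL * (V - EL) - g * (V - EK) + i) / C
              if dynamic_gkl:
                  w += dt * (winf(V) - w) / tau_w
              trace[k] = V
          return trace

      t = np.arange(0, 20, 0.01)                         # ms
      step = 300.0 * ((t > 2) & (t < 12)).astype(float)  # pA current step
      v_dyn = simulate(step, dynamic_gkl=True)           # depolarization sags back toward rest
      v_static = simulate(step, dynamic_gkl=False)       # sustained depolarization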

  20. Long-range correlation properties in timing of skilled piano performance: the influence of auditory feedback and deep brain stimulation

    PubMed Central

    Herrojo Ruiz, María; Hong, Sang Bin; Hennig, Holger; Altenmüller, Eckart; Kühn, Andrea A.

    2014-01-01

    Unintentional timing deviations during musical performance can be conceived of as timing errors. However, recent research on humanizing computer-generated music has demonstrated that timing fluctuations that exhibit long-range temporal correlations (LRTC) are preferred by human listeners. This preference can be accounted for by the ubiquitous presence of LRTC in human tapping and rhythmic performances. Interestingly, the manifestation of LRTC in tapping behavior seems to be driven in a subject-specific manner by the LRTC properties of resting-state background cortical oscillatory activity. In this framework, the current study aimed to investigate whether propagation of timing deviations during the skilled, memorized piano performance (without metronome) of 17 professional pianists exhibits LRTC and whether the structure of the correlations is influenced by the presence or absence of auditory feedback. As an additional goal, we set out to investigate the influence of altering the dynamics along the cortico-basal-ganglia-thalamo-cortical network via deep brain stimulation (DBS) on the LRTC properties of musical performance. Specifically, we investigated temporal deviations during the skilled piano performance of a non-professional pianist who was treated with subthalamic-deep brain stimulation (STN-DBS) due to severe Parkinson's disease, with predominant tremor affecting his right upper extremity. In the tremor-affected right hand, the timing fluctuations of the performance exhibited random correlations with DBS OFF. By contrast, DBS restored long-range dependency in the temporal fluctuations, corresponding with the general motor improvement on DBS. Overall, the present investigations demonstrate the presence of LRTC in skilled piano performances, indicating that unintentional temporal deviations are correlated over a wide range of time scales. This phenomenon is stable after removal of the auditory feedback, but is altered by STN-DBS, which suggests that cortico
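
    The abstract does not name the estimator used to quantify LRTC; a common choice for timing series of this kind is detrended fluctuation analysis (DFA), whose scaling exponent is about 0.5 for uncorrelated fluctuations and lies between 0.5 and 1 when long-range correlations are present. A minimal sketch, offered as an assumption about the general approach rather than the authors' implementation:

      import numpy as np

      def dfa(x, scales):
          """Detrended fluctuation analysis of a 1-D series x (e.g., inter-keystroke
          timing deviations); returns the scaling exponent alpha."""
          y = np.cumsum(x - np.mean(x))                  # integrated (profile) signal
          fluct = []
          for s in scales:
              n_seg = len(y) // s
              segs = y[:n_seg * s].reshape(n_seg, s)
              t = np.arange(s)
              rms = [np.sqrt(np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2))
                     for seg in segs]                    # remove the local linear trend per window
              fluct.append(np.mean(rms))
          return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

      scales = np.array([4, 8, 16, 32, 64])
      print(dfa(np.random.randn(2000), scales))          # ~0.5 for white noise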

  1. Long-range correlation properties in timing of skilled piano performance: the influence of auditory feedback and deep brain stimulation.

    PubMed

    Herrojo Ruiz, María; Hong, Sang Bin; Hennig, Holger; Altenmüller, Eckart; Kühn, Andrea A

    2014-01-01

    Unintentional timing deviations during musical performance can be conceived of as timing errors. However, recent research on humanizing computer-generated music has demonstrated that timing fluctuations that exhibit long-range temporal correlations (LRTC) are preferred by human listeners. This preference can be accounted for by the ubiquitous presence of LRTC in human tapping and rhythmic performances. Interestingly, the manifestation of LRTC in tapping behavior seems to be driven in a subject-specific manner by the LRTC properties of resting-state background cortical oscillatory activity. In this framework, the current study aimed to investigate whether propagation of timing deviations during the skilled, memorized piano performance (without metronome) of 17 professional pianists exhibits LRTC and whether the structure of the correlations is influenced by the presence or absence of auditory feedback. As an additional goal, we set out to investigate the influence of altering the dynamics along the cortico-basal-ganglia-thalamo-cortical network via deep brain stimulation (DBS) on the LRTC properties of musical performance. Specifically, we investigated temporal deviations during the skilled piano performance of a non-professional pianist who was treated with subthalamic-deep brain stimulation (STN-DBS) due to severe Parkinson's disease, with predominant tremor affecting his right upper extremity. In the tremor-affected right hand, the timing fluctuations of the performance exhibited random correlations with DBS OFF. By contrast, DBS restored long-range dependency in the temporal fluctuations, corresponding with the general motor improvement on DBS. Overall, the present investigations demonstrate the presence of LRTC in skilled piano performances, indicating that unintentional temporal deviations are correlated over a wide range of time scales. This phenomenon is stable after removal of the auditory feedback, but is altered by STN-DBS, which suggests that cortico

  2. “Where Do Auditory Hallucinations Come From?”—A Brain Morphometry Study of Schizophrenia Patients With Inner or Outer Space Hallucinations

    PubMed Central

    Plaze, Marion; Paillère-Martinot, Marie-Laure; Penttilä, Jani; Januel, Dominique; de Beaurepaire, Renaud; Bellivier, Franck; Andoh, Jamila; Galinowski, André; Gallarda, Thierry; Artiges, Eric; Olié, Jean-Pierre; Mangin, Jean-François; Martinot, Jean-Luc

    2011-01-01

    Auditory verbal hallucinations are a cardinal symptom of schizophrenia. Bleuler and Kraepelin distinguished 2 main classes of hallucinations: hallucinations heard outside the head (outer space, or external, hallucinations) and hallucinations heard inside the head (inner space, or internal, hallucinations). This distinction has been confirmed by recent phenomenological studies that identified 3 independent dimensions in auditory hallucinations: language complexity, self-other misattribution, and spatial location. Brain imaging studies in schizophrenia patients with auditory hallucinations have already investigated language complexity and self-other misattribution, but the neural substrate of hallucination spatial location remains unknown. Magnetic resonance images of 45 right-handed patients with schizophrenia and persistent auditory hallucinations and 20 healthy right-handed subjects were acquired. Two homogeneous subgroups of patients were defined based on the hallucination spatial location: patients with only outer space hallucinations (N = 12) and patients with only inner space hallucinations (N = 15). Between-group differences were then assessed using 2 complementary brain morphometry approaches: voxel-based morphometry and sulcus-based morphometry. Convergent anatomical differences were detected between the patient subgroups in the right temporoparietal junction (rTPJ). In comparison to healthy subjects, opposite deviations in white matter volumes and sulcus displacements were found in patients with inner space hallucination and patients with outer space hallucination. The current results indicate that spatial location of auditory hallucinations is associated with the rTPJ anatomy, a key region of the “where” auditory pathway. The detected tilt in the sulcal junction suggests deviations during early brain maturation, when the superior temporal sulcus and its anterior terminal branch appear and merge. PMID:19666833

  3. MULTICHANNEL ANALYZER

    DOEpatents

    Kelley, G.G.

    1959-11-10

    A multichannel pulse analyzer having several window amplifiers, each amplifier serving one group of channels, with a single fast pulse-lengthener and a single novel interrogation circuit serving all channels is described. A pulse followed too closely timewise by another pulse is disregarded by the interrogation circuit to prevent errors due to pulse pileup. The window amplifiers are connected to the pulse lengthener output, rather than the linear amplifier output, so need not have the fast response characteristic formerly required.
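
    The pile-up safeguard described here (a pulse followed too closely in time by another pulse is disregarded) amounts to a simple timing rule. The sketch below expresses that logic digitally, purely as an illustration; the patented interrogation circuit is analog and its actual resolving time is not given in the record:

      def accept_pulses(arrival_times_us, dead_time_us=5.0):
          """Disregard any pulse followed by another pulse within dead_time_us
          (hypothetical resolving time), mimicking the pile-up rejection."""
          accepted = []
          for i, t in enumerate(arrival_times_us):
              followed_too_closely = (i + 1 < len(arrival_times_us)
                                      and arrival_times_us[i + 1] - t < dead_time_us)
              if not followed_too_closely:
                  accepted.append(t)
          return accepted

      print(accept_pulses([0.0, 12.0, 13.5, 30.0]))  # 12.0 is dropped -> [0.0, 13.5, 30.0]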

  4. [Topography of the Event-Related Brain Responses during Discrimination of Auditory Motion in Humans].

    PubMed

    Shestopalova, L B; Petropavlovskaia, E A; Vaitulevich, S Ph; Nikitin, N I

    2015-01-01

    The present study investigates the hemispheric asymmetry of auditory event-related potentials (ERPs) and mismatch negativity (MMN) during passive discrimination of moving sound stimuli presented according to the oddball paradigm. Sound movement to the left or right of the head midline was produced by linear changes in interaural time delay (ITD). The right-hemispheric N1 and P2 responses were more prominent than the left-hemispheric ones, especially in the fronto-lateral region. By contrast, the N250 and MMN responses demonstrated contralateral dominance in the fronto-lateral and fronto-medial regions. The direction of sound motion had no significant effect on ERP or MMN topography. The right-hemispheric asymmetry of N1 increased with sound velocity, and maximal asymmetry of P2 was obtained with short stimulus trajectories. The contralateral bias of N250 and MMN increased with the spatial difference between standard and deviant stimuli. The results showed different types of hemispheric asymmetry for the early and late ERP components, which could reflect the activity of distinct neural populations involved in the sensory and cognitive processing of the auditory input. PMID:26860001

  5. Hearing status in neonatal hyperbilirubinemia by auditory brain stem evoked response and transient evoked otoacoustic emission.

    PubMed

    Baradaranfar, Mohammad Hossein; Atighechi, Saeid; Dadgarnia, Mohammad Hossein; Jafari, Rozita; Karimi, Ghasem; Mollasadeghi, Abolfazl; Eslami, Zia; Baradarnfar, Amin

    2011-01-01

    Hyperbilirubinemia in the neonatal period is one of the major factors that can damage the auditory system; if left untreated, it may cause cerebral damage. This study aimed to evaluate the impact of hyperbilirubinemia on the hearing of neonates. The study was conducted on 35 newborn babies with jaundice (bilirubin more than 20 mg/dL). Auditory brainstem response (ABR) and transient evoked otoacoustic emission (TEOAE) tests were performed after treatment and again one year later. ABR results indicated that 26 children (74.3%) had normal hearing, whereas 9 (25.7%) had a hearing impairment. In the TEOAE test, 30 children (85.7%) passed, whereas the remaining babies (14.3%) failed. Comparison of the two test results pointed to auditory neuropathy/auditory dys-synchrony in 5 babies. Given the high incidence of auditory neuropathy/auditory dys-synchrony among hyperbilirubinemic babies, screening in this regard seems reasonable. Our results emphasize the need for further studies in this area. PMID:21598220

  6. Alterations in brain-stem auditory evoked potentials among drug addicts

    PubMed Central

    Garg, Sonia; Sharma, Rajeev; Mittal, Shilekh; Thapar, Satish

    2015-01-01

    Objective: To compare the absolute latencies, interpeak latencies, and amplitudes of the different waveforms of the brainstem auditory evoked potentials (BAEP) between drug abusers and controls, and to identify early neurological damage in persons who abuse different drugs so that proper counseling and timely intervention can be undertaken. Methods: In this cross-sectional study, BAEPs were assessed with a data acquisition and analysis system in 58 male drug abusers aged 15-45 years and in 30 age-matched healthy controls. The absolute peak latencies and interpeak latencies of the BAEP were analyzed using one-way ANOVA and Student's t-test. The study was carried out at GGS Medical College, Faridkot, Punjab, India between July 2012 and May 2013. Results: The differences in absolute peak latencies and interpeak latencies of the BAEP between the 2 groups were statistically significant in both ears (p<0.05). However, the group difference in the amplitude ratio was not statistically significant in either ear. Conclusion: Chronic intoxication with different drugs is associated with prolonged absolute peak latencies and interpeak latencies of the BAEP in drug abusers, reflecting an adverse effect of drug dependence on neural transmission in the central auditory pathways. PMID:26166594

  7. Instrument specific brain activation in sensorimotor and auditory representation in musicians.

    PubMed

    Gebel, B; Braun, Ch; Kaza, E; Altenmüller, E; Lotze, M

    2013-07-01

    Musicians show a remarkable ability to interconnect motor patterns and sensory processing in the somatosensory and auditory domains. Many of these processes are specific for the instrument used. We were interested in the cerebral and cerebellar representations of these instrument-specific changes and therefore applied functional magnetic resonance imaging (fMRI) in two groups of instrumentalists with different instrumental training for comparable periods (approximately 15 years). The first group (trumpet players) uses tight finger and lip interaction; the second (pianists as control group) uses only the extremities for performance. fMRI tasks were balanced for instructions (piano and trumpet notes), sensory feedback (keypad and trumpet), and hand-lip interaction on the trumpet. During fMRI, both groups switched between different devices (trumpet or keypad) and performance was combined with or without auditory feedback. Playing the trumpet without any tone emission or using the mouthpiece showed an instrument training-specific activation increase in trumpet players. This was evident for the posterior-superior cerebellar hemisphere, the dominant primary sensorimotor cortex, and the left Heschl's gyrus. Additionally, trumpet players showed increased activity in the bilateral Heschl's gyrus during actual trumpet playing, although they showed significantly decreased loudness while playing with the mouthpiece in the scanner compared to pianists. PMID:23454048

  8. Auditory imagery: empirical findings.

    PubMed

    Hubbard, Timothy L

    2010-03-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d) auditory imagery's relationship to perception and memory (detection, encoding, recall, mnemonic properties, phonological loop), and (e) individual differences in auditory imagery (in vividness, musical ability and experience, synesthesia, musical hallucinosis, schizophrenia, amusia) are considered. It is concluded that auditory imagery (a) preserves many structural and temporal properties of auditory stimuli, (b) can facilitate auditory discrimination but interfere with auditory detection, (c) involves many of the same brain areas as auditory perception, (d) is often but not necessarily influenced by subvocalization, (e) involves semantically interpreted information and expectancies, (f) involves depictive components and descriptive components, (g) can function as a mnemonic but is distinct from rehearsal, and (h) is related to musical ability and experience (although the mechanisms of that relationship are not clear). PMID:20192565

  9. The influence of cochlear spectral processing on the timing and amplitude of the speech-evoked auditory brain stem response

    PubMed Central

    Nuttall, Helen E.; Moore, David R.; Barry, Johanna G.; Krumbholz, Katrin

    2015-01-01

    The speech-evoked auditory brain stem response (speech ABR) is widely considered to provide an index of the quality of neural temporal encoding in the central auditory pathway. The aim of the present study was to evaluate the extent to which the speech ABR is shaped by spectral processing in the cochlea. High-pass noise masking was used to record speech ABRs from delimited octave-wide frequency bands between 0.5 and 8 kHz in normal-hearing young adults. The latency of the frequency-delimited responses decreased from the lowest to the highest frequency band by up to 3.6 ms. The observed frequency-latency function was compatible with model predictions based on wave V of the click ABR. The frequency-delimited speech ABR amplitude was largest in the 2- to 4-kHz frequency band and decreased toward both higher and lower frequency bands despite the predominance of low-frequency energy in the speech stimulus. We argue that the frequency dependence of speech ABR latency and amplitude results from the decrease in cochlear filter width with decreasing frequency. The results suggest that the amplitude and latency of the speech ABR may reflect interindividual differences in cochlear, as well as central, processing. The high-pass noise-masking technique provides a useful tool for differentiating between peripheral and central effects on the speech ABR. It can be used for further elucidating the neural basis of the perceptual speech deficits that have been associated with individual differences in speech ABR characteristics. PMID:25787954
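
    Frequency-delimited responses under high-pass noise masking are usually obtained with the derived-band technique: the averaged response recorded with the masker cut-off at the band's lower edge is subtracted from the response recorded with the cut-off at its upper edge. The sketch below assumes that convention, which may differ in detail from the derivation used in the study:

      import numpy as np

      def derived_band_response(abr_by_cutoff, f_low, f_high):
          """Derived-band ABR for the band [f_low, f_high]: the response with the
          high-pass masker cut-off at f_high (cochlea responsive below f_high)
          minus the response with the cut-off at f_low (responsive below f_low).
          abr_by_cutoff maps cut-off frequency (Hz) -> averaged waveform."""
          return abr_by_cutoff[f_high] - abr_by_cutoff[f_low]

      # Hypothetical averaged waveforms for cut-offs at 2 and 4 kHz (placeholder arrays).
      waveforms = {2000: np.zeros(512), 4000: np.zeros(512)}
      band_2_to_4_khz = derived_band_response(waveforms, 2000, 4000)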

  10. Brain dynamics that correlate with effects of learning on auditory distance perception

    PubMed Central

    Wisniewski, Matthew G.; Mercado, Eduardo; Church, Barbara A.; Gramann, Klaus; Makeig, Scott

    2014-01-01

    Accuracy in auditory distance perception can improve with practice and varies for sounds differing in familiarity. Here, listeners were trained to judge the distances of English, Bengali, and backwards speech sources pre-recorded at near (2-m) and far (30-m) distances. Listeners' accuracy was tested before and after training. Improvements from pre-test to post-test were greater for forward speech, demonstrating a learning advantage for forward speech sounds. Independent component (IC) processes identified in electroencephalographic (EEG) data collected during pre- and post-testing revealed three clusters of ICs across subjects with stimulus-locked spectral perturbations related to learning and accuracy. One cluster exhibited a transient stimulus-locked increase in 4–8 Hz power (theta event-related synchronization; ERS) that was smaller after training and largest for backwards speech. For a left temporal cluster, 8–12 Hz decreases in power (alpha event-related desynchronization; ERD) were greatest for English speech and less prominent after training. In contrast, a cluster of IC processes centered at or near anterior portions of the medial frontal cortex showed learning-related enhancement of sustained increases in 10–16 Hz power (upper-alpha/low-beta ERS). The degree of this enhancement was positively correlated with the degree of behavioral improvements. Results suggest that neural dynamics in non-auditory cortical areas support distance judgments. Further, frontal cortical networks associated with attentional and/or working memory processes appear to play a role in perceptual learning for source distance. PMID:25538550

  11. Detection of brain magnetic fields with an atomic magnetometer

    NASA Astrophysics Data System (ADS)

    Xia, Hui; Hoffman, Dan; Baranga, Andrei; Romalis, Michael

    2006-05-01

    We report the detection of magnetic fields generated by evoked brain activity with an atomic magnetometer. The measurements are performed with a high-density potassium magnetometer operating in the spin-exchange relaxation-free regime. Compared with SQUID magnetometers, which have so far been the only detectors capable of measuring magnetic fields from the brain, atomic magnetometers offer higher sensitivity and spatial resolution, simple multi-channel recording, and no need for cryogenics. Using a multi-channel photodetector array, we recorded magnetic fields from the brain correlated with an audio tone administered through a non-magnetic earphone. The spatial map of the magnetic field gives information about the location of the brain region responding to the auditory stimulation. Our results demonstrate that the atomic magnetometer is an alternative, low-cost technique for brain imaging applications that requires no cryogenic apparatus.

  12. Auditory evoked potentials to spectro-temporal modulation of complex tones in normal subjects and patients with severe brain injury.

    PubMed

    Jones, S J; Vaz Pato, M; Sprague, L; Stokes, M; Munday, R; Haque, N

    2000-05-01

    In order to assess higher auditory processing capabilities, long-latency auditory evoked potentials (AEPs) were recorded to synthesized musical instrument tones in 22 post-comatose patients with severe brain injury causing variably attenuated behavioural responsiveness. On the basis of normative studies, three different types of spectro-temporal modulation were employed. When a continuous 'clarinet' tone changes pitch once every few seconds, N1/P2 potentials are evoked at latencies of approximately 90 and 180 ms, respectively. Their distribution in the fronto-central region is consistent with generators in the supratemporal cortex of both hemispheres. When the pitch is modulated at a much faster rate ( approximately 16 changes/s), responses to each change are virtually abolished but potentials with similar distribution are still elicited by changing the timbre (e.g. 'clarinet' to 'oboe') every few seconds. These responses appear to represent the cortical processes concerned with spectral pattern analysis and the grouping of frequency components to form sound 'objects'. Following a period of 16/s oscillation between two pitches, a more anteriorly distributed negativity is evoked on resumption of a steady pitch. Various lines of evidence suggest that this is probably equivalent to the 'mismatch negativity' (MMN), reflecting a pre-perceptual, memory-based process for detection of change in spectro-temporal sound patterns. This method requires no off-line subtraction of AEPs evoked by the onset of a tone, and the MMN is produced rapidly and robustly with considerably larger amplitude (usually >5 microV) than that to discontinuous pure tones. In the brain-injured patients, the presence of AEPs to two or more complex tone stimuli (in the combined assessment of two authors who were 'blind' to the clinical and behavioural data) was significantly associated with the demonstrable possession of discriminative hearing (the ability to respond differentially to verbal commands

  13. Multichannel fiber-based diffuse reflectance spectroscopy for the rat brain exposed to a laser-induced shock wave: comparison between ipsi- and contralateral hemispheres

    NASA Astrophysics Data System (ADS)

    Miyaki, Mai; Kawauchi, Satoko; Okuda, Wataru; Nawashiro, Hiroshi; Takemura, Toshiya; Sato, Shunichi; Nishidate, Izumi

    2015-03-01

    Due to the considerable increase in terrorism involving explosive devices, blast-induced traumatic brain injury (bTBI) is receiving much attention worldwide. However, little is known about the pathology and mechanism of bTBI. In our previous study, we found that cortical spreading depolarization (CSD) occurred in the hemisphere exposed to a laser-induced shock wave (LISW) and was followed by long-lasting hypoxemia-oligemia. However, there is no information on the events occurring in the contralateral hemisphere. In this study, we performed multichannel fiber-based diffuse reflectance spectroscopy on the rat brain exposed to an LISW and compared the results for the ipsilateral and contralateral hemispheres. A pair of optical fibers was placed on each of the exposed right and left parietal bones; white light was delivered to the brain through the source fibers, and diffuse reflectance signals were collected with the detection fibers for both hemispheres. An LISW was applied to the left (ipsilateral) hemisphere. By analyzing the reflectance signals, we evaluated the occurrence of CSD, blood volume, and oxygen saturation in both hemispheres. In the ipsilateral hemisphere, we observed the occurrence of CSD and long-lasting hypoxemia-oligemia in all rats examined (n=8), as in our previous study. In the contralateral hemisphere, on the other hand, no CSD was observed, but we observed oligemia in 7 of 8 rats and hypoxemia in 1 of 8 rats, suggesting a mechanism causing hypoxemia and/or oligemia in the contralateral hemisphere that is not directly associated with CSD.

  14. Comparisons of MRI images, and auditory-related and vocal-related protein expressions in the brain of echolocation bats and rodents.

    PubMed

    Hsiao, Chun-Jen; Hsu, Chih-Hsiang; Lin, Ching-Lung; Wu, Chung-Hsin; Jen, Philip Hung-Sun

    2016-08-17

    Although echolocating bats and other mammals share the basic design of laryngeal apparatus for sound production and auditory system for sound reception, they have a specialized laryngeal mechanism for ultrasonic sound emissions as well as a highly developed auditory system for processing species-specific sounds. Because the sounds used by bats for echolocation and rodents for communication are quite different, there must be differences in the central nervous system devoted to producing and processing species-specific sounds between them. The present study examines the difference in the relative size of several brain structures and expression of auditory-related and vocal-related proteins in the central nervous system of echolocation bats and rodents. Here, we report that bats using constant frequency-frequency-modulated sounds (CF-FM bats) and FM bats for echolocation have a larger volume of midbrain nuclei (inferior and superior colliculi) and cerebellum relative to the size of the brain than rodents (mice and rats). However, the former have a smaller volume of the cerebrum and olfactory bulb, but greater expression of otoferlin and forkhead box protein P2 than the latter. Although the size of both midbrain colliculi is comparable in both CF-FM and FM bats, CF-FM bats have a larger cerebrum and greater expression of otoferlin and forkhead box protein P2 than FM bats. These differences in brain structure and protein expression are discussed in relation to their biologically relevant sounds and foraging behavior. PMID:27337384

  15. Brain activity is related to individual differences in the number of items stored in auditory short-term memory for pitch: evidence from magnetoencephalography.

    PubMed

    Grimault, Stephan; Nolden, Sophie; Lefebvre, Christine; Vachon, François; Hyde, Krista; Peretz, Isabelle; Zatorre, Robert; Robitaille, Nicolas; Jolicoeur, Pierre

    2014-07-01

    We used magnetoencephalography (MEG) to examine brain activity related to the maintenance of non-verbal pitch information in auditory short-term memory (ASTM). We focused on brain activity that increased with the number of items effectively held in memory by the participants during the retention interval of an auditory memory task. We used very simple acoustic materials (i.e., pure tones that varied in pitch) that minimized activation from non-ASTM related systems. MEG revealed neural activity in frontal, temporal, and parietal cortices that increased with a greater number of items effectively held in memory by the participants during the maintenance of pitch representations in ASTM. The present results reinforce the functional role of frontal and temporal cortices in the retention of pitch information in ASTM. This is the first MEG study to provide both fine spatial localization and temporal resolution on the neural mechanisms of non-verbal ASTM for pitch in relation to individual differences in the capacity of ASTM. This research contributes to a comprehensive understanding of the mechanisms mediating the representation and maintenance of basic non-verbal auditory features in the human brain. PMID:24642285

  16. Age-Related Changes in Transient and Oscillatory Brain Responses to Auditory Stimulation during Early Adolescence

    ERIC Educational Resources Information Center

    Poulsen, Catherine; Picton, Terence W.; Paus, Tomas

    2009-01-01

    Maturational changes in the capacity to process quickly the temporal envelope of sound have been linked to language abilities in typically developing individuals. As part of a longitudinal study of brain maturation and cognitive development during adolescence, we employed dense-array EEG and spatiotemporal source analysis to characterize…

  17. The combined monitoring of brain stem auditory evoked potentials and intracranial pressure in coma. A study of 57 patients.

    PubMed Central

    García-Larrea, L; Artru, F; Bertrand, O; Pernier, J; Mauguière, F

    1992-01-01

    Continuous monitoring of brainstem auditory evoked potentials (BAEPs) was carried out in 57 comatose patients for periods ranging from 5 hours to 13 days. In 53 cases intracranial pressure (ICP) was also simultaneously monitored. The study of relative changes of evoked potentials over time proved more relevant to prognosis than the mere consideration of "statistical normality" of waveforms; thus progressive degradation of the BAEPs was associated with a bad outcome even if the responses remained within normal limits. Contrary to previous reports, a normal BAEP obtained during the second week of coma did not necessarily indicate a good vital outcome; it could, however, do so in cases with a low probability of secondary insults. The simultaneous study of BAEPs and ICP showed that apparently significant (greater than 40 mm Hg) acute rises in ICP were not always followed by BAEP changes. The stability of BAEPs despite "significant" ICP rises was associated in our patients with a high probability of survival, while prolongation of the central latency of BAEPs in response to ICP modifications was almost invariably followed by brain death. Continuous monitoring of brainstem responses provided a useful physiological counterpart to physical parameters such as ICP. Serial recording of cortical EPs should be added to BAEP monitoring to permit the early detection of rostrocaudal deterioration. PMID:1402970

  18. Neuronal coupling by endogenous electric fields: cable theory and applications to coincidence detector neurons in the auditory brain stem.

    PubMed

    Goldwyn, Joshua H; Rinzel, John

    2016-04-01

    The ongoing activity of neurons generates a spatially and time-varying field of extracellular voltage (Ve). This Ve field reflects population-level neural activity, but does it modulate neural dynamics and the function of neural circuits? We provide a cable theory framework to study how a bundle of model neurons generates Ve and how this Ve feeds back and influences membrane potential (Vm). We find that these "ephaptic interactions" are small but not negligible. The model neural population can generate Ve with millivolt-scale amplitude, and this Ve perturbs the Vm of "nearby" cables and effectively increases their electrotonic length. After using passive cable theory to systematically study ephaptic coupling, we explore a test case: the medial superior olive (MSO) in the auditory brain stem. The MSO is a possible locus of ephaptic interactions: sounds evoke large (millivolt-scale) Ve in vivo in this nucleus. The Ve response is thought to be generated by MSO neurons that perform a known neuronal computation with submillisecond temporal precision (coincidence detection to encode sound source location). Using a biophysically based model of MSO neurons, we find millivolt-scale ephaptic interactions consistent with the passive cable theory results. These subtle membrane potential perturbations induce changes in spike initiation threshold, spike time synchrony, and time difference sensitivity. These results suggest that ephaptic coupling may influence MSO function. PMID:26823512
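
    For reference, a minimal statement of the passive cable framework invoked above; the notation and the form of the ephaptic source term follow a standard textbook derivation and are not necessarily the paper's exact equations. Writing the intracellular potential as Vi = Vm + Ve, the passive cable equation becomes

      \lambda^{2}\,\frac{\partial^{2} V_m}{\partial x^{2}}
        - \tau_m\,\frac{\partial V_m}{\partial t} - V_m
        = -\,\lambda^{2}\,\frac{\partial^{2} V_e}{\partial x^{2}},
      \qquad \lambda = \sqrt{r_m / r_i}, \quad \tau_m = r_m c_m,

    so an extracellular potential with spatial curvature acts as a distributed source that perturbs Vm even in a purely passive cable.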

  19. Diagnostic System Based on the Human AUDITORY-BRAIN Model for Measuring Environmental NOISE—AN Application to Railway Noise

    NASA Astrophysics Data System (ADS)

    SAKAI, H.; HOTEHAMA, T.; ANDO, Y.; PRODI, N.; POMPOLI, R.

    2002-02-01

    Measurements of railway noise were conducted using a diagnostic system for regional environmental noise. The system is based on a model of the human auditory-brain system. The model consists of the interplay of autocorrelators and an interaural crosscorrelator acting on the pressure signals arriving at the two ear entrances, and takes into account the specialization of the left and right human cerebral hemispheres. Different kinds of railway noise were measured through the binaural microphones of a dummy head. To characterize the railway noise, physical factors extracted from the autocorrelation functions (ACF) and the interaural crosscorrelation function (IACF) of the binaural signals were used. The factors extracted from the ACF were (1) the energy represented at the origin of the delay, Φ(0), (2) the effective duration of the envelope of the normalized ACF, τe, (3) the delay time of the first peak, τ1, and (4) its amplitude, φ1. The factors extracted from the IACF were (5) the IACC, (6) the interaural delay time at which the IACC is defined, τIACC, and (7) the width of the IACF at τIACC, WIACC. The factor Φ(0) can be represented as the geometric mean of the energies at the two ears, i.e., the listening level, LL.
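
    The ACF and IACF factors listed above can be computed directly from the binaural pressure signals. The sketch below covers Φ(0), τ1, φ1, the IACC and τIACC (τe and WIACC are omitted for brevity) and uses a simple first-local-maximum search, which only approximates the authors' procedure:

      import numpy as np

      def acf_factors(p, fs):
          """Phi(0) (energy at zero lag) plus tau_1 and phi_1, the delay and
          amplitude of the first peak of the normalized autocorrelation."""
          acf = np.correlate(p, p, mode="full")[len(p) - 1:]
          phi0 = acf[0]
          nacf = acf / phi0
          k = 1
          while k + 1 < len(nacf) and not (nacf[k] > nacf[k - 1] and nacf[k] >= nacf[k + 1]):
              k += 1                                    # first local maximum after zero lag
          return phi0, k / fs, nacf[k]

      def iacc(pl, pr, fs, max_lag_ms=1.0):
          """IACC and tau_IACC: peak of the normalized interaural cross-correlation
          within +/- max_lag_ms, and the interaural delay at which it occurs."""
          max_lag = int(fs * max_lag_ms / 1000)
          lags = np.arange(-max_lag, max_lag + 1)
          norm = np.sqrt(np.sum(pl ** 2) * np.sum(pr ** 2))
          vals = [np.sum(pl[max(0, -l):len(pl) - max(0, l)] *
                         pr[max(0, l):len(pr) - max(0, -l)]) / norm for l in lags]
          i = int(np.argmax(vals))
          return vals[i], lags[i] / fs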

  20. Asymmetries of the human social brain in the visual, auditory and chemical modalities

    PubMed Central

    Brancucci, Alfredo; Lucci, Giuliana; Mazzatenta, Andrea; Tommasi, Luca

    2008-01-01

    Structural and functional asymmetries are present in many regions of the human brain responsible for motor control, sensory and cognitive functions and communication. Here, we focus on hemispheric asymmetries underlying the domain of social perception, broadly conceived as the analysis of information about other individuals based on acoustic, visual and chemical signals. By means of these cues the brain establishes the border between ‘self’ and ‘other’, and interprets the surrounding social world in terms of the physical and behavioural characteristics of conspecifics essential for impression formation and for creating bonds and relationships. We show that, considered from the standpoint of single- and multi-modal sensory analysis, the neural substrates of the perception of voices, faces, gestures, smells and pheromones, as evidenced by modern neuroimaging techniques, are characterized by a general pattern of right-hemispheric functional asymmetry that might benefit from other aspects of hemispheric lateralization rather than constituting a true specialization for social information. PMID:19064350

  1. Mammalian CNS barosensitivity: studied by brain-stem auditory-evoked potential in mice.

    PubMed

    Chen, Ruiyong; Xiao, Weibing; Li, Jun; He, Jia; Chen, Haiting

    2012-01-01

    High pressure nervous syndrome (HPNS) is an instinctive response of higher mammalian nervous functions to increased hydrostatic pressure. Electrophysiological activity of the mammalian central nervous system (CNS), including the brainstem auditory-evoked potential (BAEP), shows characteristic changes under pressure. Here we recorded BAEPs of 63 mice exposed to 0-4.0 MPa. The results showed that interpeak latencies between wave I and wave IV (IPL1-4) and their changes under pressure (deltaIPL1-4) responded to increasing pressure in a biphasic pattern: they shortened under pressure from 0 to 0.7 MPa and then became prolonged. There were significantly negative correlations between baseline IPL1-4s and deltaIPL1-4s (p < 0.01). Individual IPL1-4s appeared to respond to increasing pressure in a relatively steady pattern in accordance with their baseline IPL1-4s. Mice with shorter baseline IPL1-4 showed direct increases in IPL1-4, whereas those with longer baseline IPL1-4 showed a decrease in IPL1-4 under small to moderate pressure that rebounded later. Our results suggest that mammalian CNS functions are susceptible to small to moderate pressures as well as to pressures above 1.0 MPa. Mice, considered as a population, had an "optimum" pressure of about 0.7 MPa, rather than atmospheric pressure, as indicated by the shortest IPL1-4s. An individual's response to high pressure may depend on its baseline biological condition. Our results highlight a new approach for developing a practical strategy for the medical selection of barotolerant candidates for deep diving. The diversity of individual susceptibility to hydrostatic pressure is discussed. The underlying mechanisms of the "optimum" pressure for CNS function and its significance for neurophysiology remain open to further exploration. PMID:22400446

  2. Auditory Verbal Hallucinations and Brain Dysconnectivity in the Perisylvian Language Network: A Multimodal Investigation

    PubMed Central

    Pettersson-Yeo, William; Allen, Paul; Catani, Marco; Williams, Steven; Barsaglini, Alessio; Kambeitz-Ilankovic, Lana M.; McGuire, Philip; Mechelli, Andrea

    2015-01-01

    Neuroimaging studies of schizophrenia have indicated that the development of auditory verbal hallucinations (AVHs) is associated with altered structural and functional connectivity within the perisylvian language network. However, these studies focussed mainly on either structural or functional alterations in patients with chronic schizophrenia. Therefore, they were unable to examine the relationship between the 2 types of measures and could not establish whether the observed alterations would be expressed in the early stage of the illness. We used diffusion tensor imaging and functional magnetic resonance imaging to examine white matter integrity and functional connectivity within the left perisylvian language network of 46 individuals with an at risk mental state for psychosis or a first episode of the illness, including 28 who had developed AVHs (AVH group) and 18 who had not (nonauditory verbal hallucination [nAVH] group), and 22 healthy controls. Inferences were made at P < .05 (corrected). The nAVH group relative to healthy controls showed a reduction of both white matter integrity and functional connectivity as well as a disruption of the normal structure-function relationship along the fronto-temporal pathway. For all measures, the AVH group showed intermediate values between healthy controls and the nAVH group. These findings seem to suggest that, in the early stage of the disorder, a significant impairment of fronto-temporal connectivity is evident in patients who do not experience AVHs. This is consistent with the hypothesis that, whilst mild disruption of connectivity might still enable the emergence of AVHs, more severe alterations may prevent the occurrence of the hallucinatory experience. PMID:24361862

  3. Auditory brain-stem evoked potentials in cat after kainic acid induced neuronal loss. II. Cochlear nucleus.

    PubMed

    Zaaroor, M; Starr, A

    1991-01-01

    Auditory brain-stem potentials (ABRs) were studied in cats for up to 6 weeks after kainic acid had been injected unilaterally into the cochlear nucleus (CN), producing extensive neuronal destruction. The ABR components were labeled by the polarity at the vertex (P, for positive) and their order of appearance (the arabic numerals 1, 2, etc.). Component P1 can be further subdivided into 2 subcomponents, P1a and P1b. The assumed correspondence between the ABR components in cat and man is indicated by providing human Roman numeral designations in parentheses following the feline notation, e.g., P2 (III). To stimulation of the ear ipsilateral to the injection, the ABR changes consisted of a loss of components P2 (III) and P3 (IV), and an attenuation and prolongation of latency of components P4 (V) and P5 (VI). The sustained potential shift from which the components arose was not affected. Wave P1a (I) was also slightly but significantly attenuated, compatible with changes in the excitability of nerve VIII in the cochlea secondary to cochlear nucleus destruction. Unexpectedly, to stimulation of the ear contralateral to the injection side, waves P2 (III), P3 (IV), and P4 (V) were also attenuated and delayed in latency but to a lesser degree than to stimulation of the ear ipsilateral to the injection. Changes in binaural interaction of the ABR following cochlear nucleus lesions were similar to those produced in normal animals by introducing a temporal delay of the input to one ear. The results of the present set of studies using kainic acid to induce neuronal loss in the auditory pathway, when combined with prior lesion and recording experiments, suggest that each of the components of the ABR requires the integrity of an anatomically diffuse system comprising a set of neurons, their axons, and the neurons on which they terminate. Disruption of any portion of the system will alter the amplitude and/or the latency of that component. PMID:1716569

  4. Eye movement preparation causes spatially-specific modulation of auditory processing: New evidence from event-related brain potentials

    PubMed Central

    Gherri, Elena; Driver, Jon; Eimer, Martin

    2009-01-01

    To investigate whether saccade preparation can modulate processing of auditory stimuli in a spatially-specific fashion, ERPs were recorded for a Saccade task, in which the direction of a prepared saccade was cued, prior to an imperative auditory stimulus indicating whether to execute or withhold that saccade. For comparison, we also ran a conventional Covert Attention task, where the same cue now indicated the direction for a covert endogenous attentional shift prior to an auditory target-nontarget discrimination. Lateralised components previously observed during cued shifts of attention (ADAN, LDAP) did not differ significantly across tasks, indicating commonalities between auditory spatial attention and oculomotor control. Moreover, in both tasks, spatially-specific modulation of auditory processing was subsequently found, with enhanced negativity for lateral auditory nontarget stimuli at cued versus uncued locations. This modulation started earlier and was more pronounced for the Covert Attention task, but was also reliably present in the Saccade task, demonstrating that the effects of covert saccade preparation on auditory processing can be similar to effects of endogenous covert attentional orienting, albeit smaller. These findings provide new evidence for similarities but also some differences between oculomotor preparation and shifts of endogenous spatial attention. They also show that saccade preparation can affect not just vision, but also sensory processing of auditory events. PMID:18614157

  5. Brain functional connectivity during the experience of thought blocks in schizophrenic patients with persistent auditory verbal hallucinations: an EEG study.

    PubMed

    Angelopoulos, Elias; Koutsoukos, Elias; Maillis, Antonis; Papadimitriou, George N; Stefanis, Costas

    2014-03-01

    Thought blocks (TBs) are characterized by regular interruptions in the stream of thought. The outward signs are abrupt and repeated interruptions in the flow of conversation or actions, while the subjective experience is that of a total and uncontrollable emptying of the mind. In the very limited literature regarding TB, the phenomenon has been conceptualized as a disturbance of consciousness that can be attributed to stoppages of continuous information processing due to an increase in the volume of information to be processed. In an attempt to investigate potential expressions of the phenomenon in the functional properties of electroencephalographic (EEG) activity, an EEG study was conducted in schizophrenic patients with persisting auditory verbal hallucinations (AVHs) who additionally exhibited TBs. In this case, we hypothesized that the persistent and dense AVHs could serve the role of an increased information flow that the brain is unable to process, a condition that is perceived by the person as TB. Phase synchronization analyses performed on EEG segments during the experience of TBs showed that synchrony values exhibited a long-range common mode of coupling (grouped behavior) among the left temporal area and the remaining central and frontal brain areas. These common synchrony-fluctuation schemes were observed for 0.5 to 2 s and were detected in a 4-s window following the estimated initiation of the phenomenon. The observation was frequency specific and detected in the broad alpha band region (6-12 Hz). The introduction of synchrony entropy (SE) analysis applied to the cumulative synchrony distribution showed that TB states were characterized by an explicit preference of the system to function at low values of synchrony, while the synchrony values were broadly distributed during the recovery state. Our results indicate that during TB states, the phase locking of several brain areas converged uniformly in a narrow band of low synchrony values and in a
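
    The phase-synchronization measure underlying such analyses can be sketched in a few lines. The example below computes a sliding phase-locking value in the 6-12 Hz band between two synthetic channels (stand-ins for a left temporal and a frontal site) and then a simple entropy over the distribution of the windowed synchrony values; the signals, window lengths, and this operationalization of "synchrony entropy" are assumptions made for illustration, since the abstract does not give the exact definitions.

      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert

      fs = 250                                  # sampling rate (Hz), assumed
      t = np.arange(0, 4, 1 / fs)               # a 4-s analysis window, as in the abstract
      rng = np.random.default_rng(1)

      # Hypothetical stand-ins for EEG at a left temporal and a frontal site:
      # a shared 9-Hz rhythm plus independent noise.
      eeg_T3 = np.sin(2 * np.pi * 9 * t) + 0.8 * rng.standard_normal(t.size)
      eeg_Fz = np.sin(2 * np.pi * 9 * t + 0.7) + 0.8 * rng.standard_normal(t.size)

      def band_phase(x, lo=6.0, hi=12.0):
          """Instantaneous phase in the broad alpha band (6-12 Hz)."""
          b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
          return np.angle(hilbert(filtfilt(b, a, x)))

      dphi = band_phase(eeg_T3) - band_phase(eeg_Fz)

      # Sliding phase-locking value in 0.5-s windows (synchrony fluctuates over time)
      win = int(0.5 * fs)
      plv = np.array([np.abs(np.mean(np.exp(1j * dphi[i:i + win])))
                      for i in range(0, dphi.size - win, win // 2)])

      # A simple "synchrony entropy": Shannon entropy of the distribution of the
      # windowed synchrony values (an assumed operationalization for illustration).
      hist, _ = np.histogram(plv, bins=10, range=(0, 1))
      p = hist[hist > 0] / hist.sum()
      sync_entropy = -np.sum(p * np.log2(p))

      print(f"windowed PLV: min={plv.min():.2f} max={plv.max():.2f}")
      print(f"synchrony entropy over the window distribution: {sync_entropy:.2f} bits")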

  6. The role of auditory transient and deviance processing in distraction of task performance: a combined behavioral and event-related brain potential study.

    PubMed

    Berti, Stefan

    2013-01-01

    Distraction of goal-oriented performance by a sudden change in the auditory environment is an everyday life experience. Different types of changes can be distracting, including the sudden onset of a transient sound and a slight deviation from otherwise regular auditory background stimulation. With regard to deviance detection, it is assumed that slight changes in a continuous sequence of auditory stimuli are detected by a predictive coding mechanism, and it has been demonstrated that this mechanism is capable of distracting ongoing task performance. In contrast, it remains open whether transient detection-which does not rely on predictive coding mechanisms-can trigger behavioral distraction, too. In the present study, the effect of rare auditory changes on visual task performance is tested in an auditory-visual cross-modal distraction paradigm. The rare changes are either embedded within a continuous standard stimulation (triggering deviance detection) or are presented within an otherwise silent situation (triggering transient detection). In the event-related brain potentials, deviants elicited the mismatch negativity (MMN) while transients elicited an enhanced N1 component, mirroring pre-attentive change detection in both conditions but on the basis of different neuro-cognitive processes. These sensory components are followed by attention-related ERP components including the P3a and the reorienting negativity (RON). This demonstrates that both types of changes trigger switches of attention. Finally, distraction of task performance is observable, too, but the impact of deviants is higher compared to transients. These findings suggest different routes of distraction allowing for the automatic processing of a wide range of potentially relevant changes in the environment as a pre-requisite for adaptive behavior. PMID:23874278

  7. Evidence of a visual-to-auditory cross-modal sensory gating phenomenon as reflected by the human P50 event-related brain potential modulation.

    PubMed

    Lebib, Riadh; Papo, David; de Bode, Stella; Baudonnière, Pierre Marie

    2003-05-01

    We investigated the existence of a cross-modal sensory gating reflected by the modulation of an early electrophysiological index, the P50 component. We analyzed event-related brain potentials elicited by audiovisual speech stimuli manipulated along two dimensions: congruency and discriminability. The results showed that the P50 was attenuated when visual and auditory speech information were redundant (i.e. congruent), in comparison with this same event-related potential component elicited with discrepant audiovisual dubbing. When hard to discriminate, however, bimodal incongruent speech stimuli elicited a similar pattern of P50 attenuation. We concluded that a visual-to-auditory cross-modal sensory gating phenomenon exists. These results corroborate previous findings revealing a very early audiovisual interaction during speech perception. Finally, we postulated that the sensory gating system includes a cross-modal dimension. PMID:12697279

  8. Conventional and cross-correlation brain-stem auditory evoked responses in the white leghorn chick: rate manipulations

    NASA Technical Reports Server (NTRS)

    Burkard, R.; Jones, S.; Jones, T.

    1994-01-01

    Rate-dependent changes in the chick brain-stem auditory evoked response (BAER) using conventional averaging and a cross-correlation technique were investigated. Five 15- to 19-day-old white leghorn chicks were anesthetized with Chloropent. In each chick, the left ear was acoustically stimulated. Electrical pulses of 0.1-ms duration were shaped, attenuated, and passed through a current driver to an Etymotic ER-2 which was sealed in the ear canal. Electrical activity from stainless-steel electrodes was amplified, filtered (300-3000 Hz) and digitized at 20 kHz. Click levels included 70 and 90 dB peSPL. In each animal, conventional BAERs were obtained at rates ranging from 5 to 90 Hz. BAERs were also obtained using a cross-correlation technique involving pseudorandom pulse sequences called maximum length sequences (MLSs). The minimum time between pulses, called the minimum pulse interval (MPI), ranged from 0.5 to 6 ms. Two BAERs were obtained for each condition. Dependent variables included the latency and amplitude of the cochlear microphonic (CM), wave 2 and wave 3. BAERs were observed in all chicks, for all level by rate combinations for both conventional and MLS BAERs. There was no effect of click level or rate on the latency of the CM. The latency of waves 2 and 3 increased with decreasing click level and increasing rate. CM amplitude decreased with decreasing click level, but was not influenced by click rate for the 70 dB peSPL condition. For the 90 dB peSPL click, CM amplitude was uninfluenced by click rate for conventional averaging. For MLS BAERs, CM amplitude was similar to conventional averaging for longer MPIs.(ABSTRACT TRUNCATED AT 250 WORDS).
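
    The MLS approach can be illustrated with a short Python sketch. Here a maximum length sequence of assumed order 9 drives a click train at an assumed 2-ms minimum pulse interval; a made-up damped-oscillation kernel stands in for the evoked response, and circular cross-correlation with the stimulus sequence recovers it from the overlapping responses. The parameters and the response shape are illustrative, not those of the study.

      import numpy as np
      from scipy.signal import max_len_seq

      fs = 20000                               # 20-kHz digitization, as in the abstract
      order = 9                                # MLS order -> length 2**9 - 1 = 511 pulses
      mls = max_len_seq(order)[0] * 2 - 1      # +/-1 pseudorandom pulse sequence

      mpi = int(0.002 * fs)                    # 2-ms minimum pulse interval (assumed)
      L = mls.size * mpi                       # samples in one full sequence period

      # Hypothetical single-click evoked response (a damped oscillation, stand-in
      # for CM/wave-2/wave-3 morphology) plus background EEG noise.
      t = np.arange(int(0.01 * fs)) / fs       # 10-ms response window
      kernel = np.exp(-t / 0.002) * np.sin(2 * np.pi * 600 * t)

      stim = np.zeros(L)
      stim[::mpi] = mls                        # click train following the MLS pattern
      rng = np.random.default_rng(2)
      recording = np.real(np.fft.ifft(np.fft.fft(stim) *
                                      np.fft.fft(kernel, L)))   # circular convolution
      recording += 0.5 * rng.standard_normal(L)                 # background noise

      # Recovery: circular cross-correlation of the recording with the stimulus
      # sequence; for an MLS this deconvolves the overlapping responses.
      xcorr = np.real(np.fft.ifft(np.fft.fft(recording) * np.conj(np.fft.fft(stim))))
      baer_estimate = xcorr[:t.size] / mls.size

      err = np.max(np.abs(baer_estimate - kernel))
      print(f"max deviation of recovered response from the true kernel: {err:.3f}")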

  9. An fMRI Study of Auditory Orienting and Inhibition of Return in Pediatric Mild Traumatic Brain Injury

    PubMed Central

    Yang, Zhen; Yeo, Ronald A.; Pena, Amanda; Ling, Josef M.; Klimaj, Stefan; Campbell, Richard; Doezema, David

    2012-01-01

    Studies in adult mild traumatic brain injury (mTBI) have shown that two key measures of attention, spatial reorienting and inhibition of return (IOR), are impaired within the first few weeks after injury. However, it is currently unknown whether similar deficits exist following pediatric mTBI. The current study used functional magnetic resonance imaging (fMRI) to investigate the effects of semi-acute mTBI (<3 weeks post-injury) on auditory orienting in 14 pediatric mTBI patients (age 13.50±1.83 years; education: 6.86±1.88 years) and 14 healthy controls (age 13.29±2.09 years; education: 7.21±2.08 years), matched for age and years of education. The results indicated that patients with mTBI showed subtle (i.e., moderate effect sizes) but non-significant deficits on formal neuropsychological testing and during IOR. In contrast, functional imaging results indicated that patients with mTBI demonstrated significantly decreased activation within the bilateral posterior cingulate gyrus, thalamus, basal ganglia, midbrain nuclei, and cerebellum. The spatial topography of hypoactivation was very similar to our previous study in adults, suggesting that subcortical structures may be particularly affected by the initial biomechanical forces in mTBI. Current results also suggest that fMRI may be a more sensitive tool for identifying semi-acute effects of mTBI than the procedures currently used in clinical practice, such as neuropsychological testing and structural scans. fMRI findings could potentially serve as a biomarker for measuring the subtle injury caused by mTBI, and documenting the course of recovery. PMID:22533632

  10. Differences in brain circuitry for appetitive and reactive aggression as revealed by realistic auditory scripts

    PubMed Central

    Moran, James K.; Weierstall, Roland; Elbert, Thomas

    2014-01-01

    Aggressive behavior is thought to divide into two motivational elements: The first being a self-defensively motivated aggression against threat and a second, hedonically motivated “appetitive” aggression. Appetitive aggression is the less understood of the two, often only researched within abnormal psychology. Our approach is to understand it as a universal and adaptive response, and examine the functional neural activity of ordinary men (N = 50) presented with an imaginative listening task involving a murderer describing a kill. We manipulated motivational context in a between-subjects design to evoke appetitive or reactive aggression, against a neutral control, measuring activity with Magnetoencephalography (MEG). Results show differences in left frontal regions in delta (2–5 Hz) and alpha band (8–12 Hz) for aggressive conditions and right parietal delta activity differentiating appetitive and reactive aggression. These results validate the distinction of reward-driven appetitive aggression from reactive aggression in ordinary populations at the level of functional neural brain circuitry. PMID:25538590

  11. Auditory brain-stem evoked potentials in cat after kainic acid induced neuronal loss. I. Superior olivary complex.

    PubMed

    Zaaroor, M; Starr, A

    1991-01-01

    Auditory brain-stem potentials (ABRs) were studied in cats for up to 45 days after kainic acid had been injected unilaterally or bilaterally into the superior olivary complex (SOC) to produce neuronal destruction while sparing fibers of passage and the terminals of axons of extrinsic origin connecting to SOC neurons. The components of the ABR in cat were labeled by their polarity at the vertex (P, for positive) and their order of appearance (the arabic numerals 1, 2, etc.). Component P1 can be further subdivided into 2 subcomponents labeled P1a and P1b. The correspondences we have assumed between the ABR components in cat and man are indicated by providing a Roman numeral designation for the human component in parentheses following the feline notation, e.g., P4 (V). With bilateral SOC destruction, there was a significant and marked attenuation of waves P2 (III), P3 (IV), P4 (V), P5 (VI), and the sustained potential shift (SPS) amounting to as much as 80% of preoperative values. Following unilateral SOC destruction the attenuation of many of these same ABR components, in response to stimulation of either ear, was up to 50%. No component of the ABR was totally abolished even when the SOC was lesioned 100% bilaterally. In unilaterally lesioned cats with extensive neuronal loss (greater than 75%) the latencies of the components beginning at P3 (IV) were delayed to stimulation of the ear ipsilateral to the injection site but not to stimulation of the ear contralateral to the injection. Binaural interaction components of the ABR were affected in proportion to the attenuation of the ABR. These results are compatible with multiple brain regions contributing to the generation of the components of the ABR beginning with P2 (III) and that components P3 (IV), P4 (V), and P5 (VI) and the sustained potential shift depend particularly on the integrity of the neurons of the SOC bilaterally. The neurons of the lateral subdivision (LSO) and the medial nucleus of the trapezoid body

  12. Central auditory disorders: toward a neuropsychology of auditory objects

    PubMed Central

    Goll, Johanna C.; Crutch, Sebastian J.; Warren, Jason D.

    2012-01-01

    Purpose of review: Analysis of the auditory environment, source identification and vocal communication all require efficient brain mechanisms for disambiguating, representing and understanding complex natural sounds as ‘auditory objects’. Failure of these mechanisms leads to a diverse spectrum of clinical deficits. Here we review current evidence concerning the phenomenology, mechanisms and brain substrates of auditory agnosias and related disorders of auditory object processing. Recent findings: Analysis of lesions causing auditory object deficits has revealed certain broad anatomical correlations: deficient parsing of the auditory scene is associated with lesions involving the parieto-temporal junction, while selective disorders of sound recognition occur with more anterior temporal lobe or extra-temporal damage. Distributed neural networks have been increasingly implicated in the pathogenesis of such disorders as developmental dyslexia, congenital amusia and tinnitus. Auditory category deficits may arise from defective interaction of spectrotemporal encoding and executive and mnestic processes. Dedicated brain mechanisms are likely to process specialised sound objects such as voices and melodies. Summary: Emerging empirical evidence suggests a clinically relevant, hierarchical and fractionated neuropsychological model of auditory object processing that provides a framework for understanding auditory agnosias and makes specific predictions to direct future work. PMID:20975559

  13. Bilinguals at the "cocktail party": dissociable neural activity in auditory-linguistic brain regions reveals neurobiological basis for nonnative listeners' speech-in-noise recognition deficits.

    PubMed

    Bidelman, Gavin M; Dexter, Lauren

    2015-04-01

    We examined a consistent deficit observed in bilinguals: poorer speech-in-noise (SIN) comprehension for their nonnative language. We recorded neuroelectric mismatch potentials in mono- and bi-lingual listeners in response to contrastive speech sounds in noise. Behaviorally, late bilinguals required ∼10 dB more favorable signal-to-noise ratios to match monolinguals' SIN abilities. Source analysis of cortical activity demonstrated a monotonic increase in response latency with noise in the superior temporal gyrus (STG) for both groups, suggesting parallel degradation of speech representations in auditory cortex. In contrast, we found differential speech encoding between groups within the inferior frontal gyrus (IFG)-adjacent to Broca's area-where the noise delays observed in nonnative listeners were offset in monolinguals. Notably, brain-behavior correspondences double dissociated between language groups: STG activation predicted bilinguals' SIN, whereas IFG activation predicted monolinguals' performance. We infer that higher-order brain areas act compensatorily to enhance impoverished sensory representations, but only when degraded speech recruits linguistic brain mechanisms downstream from initial auditory-sensory inputs. PMID:25747886

  14. Loss of auditory sensitivity from inner hair cell synaptopathy can be centrally compensated in the young but not old brain.

    PubMed

    Möhrle, Dorit; Ni, Kun; Varakina, Ksenya; Bing, Dan; Lee, Sze Chim; Zimmermann, Ulrike; Knipper, Marlies; Rüttiger, Lukas

    2016-08-01

    A dramatic shift in societal demographics will lead to rapid growth in the number of older people with hearing deficits. Poorer performance in suprathreshold speech understanding and temporal processing with age has previously been linked with progressing inner hair cell (IHC) synaptopathy that precedes age-dependent elevation of auditory thresholds. We compared central sound responsiveness after acoustic trauma in young, middle-aged, and older rats. We demonstrate that IHC synaptopathy progresses from middle age onward and that hearing threshold becomes elevated from old age onward. Interestingly, middle-aged animals could centrally compensate for the loss of auditory fiber activity through an increase in the late auditory brainstem response (ABR) wave, linked to a shortening of central response latencies. In contrast, old animals failed to restore central responsiveness, which correlated with reduced temporal resolution in responding to amplitude changes. These findings may suggest that cochlear IHC synaptopathy with age does not necessarily induce temporal auditory coding deficits, as long as the capacity to generate neuronal gain maintains normal sound-induced central amplitudes. PMID:27318145

  15. A Longitudinal Evaluation of the Speech Perception Capabilities of Children Using Multichannel Tactile Vocoders.

    ERIC Educational Resources Information Center

    Eilers, Rebecca E.; And Others

    1996-01-01

    Thirty children with profound hearing impairments were followed over a three-year period with a semiannual battery of speech perception tests. Testing utilized multichannel tactile vocoders in variations of tactile and/or auditory/visual conditions. Performance in the tactile plus auditory condition generally exceeded that in other conditions,…

  16. fMRI reveals lateralized pattern of brain activity modulated by the metrics of stimuli during auditory rhyme processing.

    PubMed

    Hurschler, Martina A; Liem, Franziskus; Oechslin, Mathias; Stämpfli, Philipp; Meyer, Martin

    2015-08-01

    Our fMRI study investigates auditory rhyme processing in spoken language to further elucidate the topic of functional lateralization of language processing. During scanning, 14 subjects listened to four different types of versed word strings and subsequently performed either a rhyme or a meter detection task. Our results show lateralization to auditory-related temporal regions in the right hemisphere irrespective of task. As for the left hemisphere we report responses in the supramarginal gyrus as well as in the opercular part of the inferior frontal gyrus modulated by the presence of regular meter and rhyme. The interaction of rhyme and meter was associated with increased involvement of the superior temporal sulcus and the putamen of the right hemisphere. Overall, these findings support the notion of right-hemispheric specialization for suprasegmental analyses during processing of spoken sentences and provide neuroimaging evidence for the influence of metrics on auditory rhyme processing. PMID:26025759

  17. Volumetric comparison of auditory brain nuclei in ear-tufted Araucanas with those in other chicken breeds.

    PubMed

    Frahm, H D; Rehkämper, G

    1998-01-01

    Domestic chickens of the breed Araucana have ear-tufts, which affect the structure of the ear canal. Volumes of auditory brainstem nuclei were measured in three chicken breeds in order to evaluate whether the characteristics described for ear-tufted individuals of the Araucana chicken breed (alterations in the outer and middle ear anatomy) are associated with changes in the size of the relevant auditory nuclei. Allometric comparison reveals no size reductions of the angular, laminar and superior olivary nuclei in Araucanas, compared to Japanese Bantams and Brown Leghorns, but a slight increase in the size of the magnocellular nucleus. PMID:9672109

  18. The Role of Animacy in the Real Time Comprehension of Mandarin Chinese: Evidence from Auditory Event-Related Brain Potentials

    ERIC Educational Resources Information Center

    Philipp, Markus; Bornkessel-Schlesewsky, Ina; Bisang, Walter; Schlesewsky, Matthias

    2008-01-01

    Two auditory ERP studies examined the role of animacy in sentence comprehension in Mandarin Chinese by comparing active and passive sentences in simple verb-final (Experiment 1) and relative clause constructions (Experiment 2). In addition to the voice manipulation (which modulated the assignment of actor and undergoer roles to the arguments),…

  19. Information processing becomes slower and predominantly serial in aging: Characterization of response-related brain potentials in an auditory-visual distraction-attention task.

    PubMed

    Cid-Fernández, Susana; Lindín, Mónica; Díaz, Fernando

    2016-01-01

    The aim of this study was to evaluate the effects of aging and of attentional capture provoked by novel auditory stimuli on behavior (reaction time [RT], hits) and on response-related brain potentials (preRFP, CRN, postRFP, parietalRP) to target visual stimuli. Twenty-two young, 27 middle-aged, and 24 old adults performed an auditory-visual distraction-attention task. The RTs and the latencies of the preRFP, postRFP and parietalRP were longer in old and middle-aged than in young participants, reflecting the well-established age-related slowing of processing and performance. The inter-peak latencies (P3b-preRFP, preRFP-parietalRP, parietalRP-postRFP) were also longer in old and middle-aged than in young participants, further indicating an age-related tendency toward an increased predominance of serial (rather than parallel) processing of information, and that the preRFP, CRN, postRFP, and parietalRP represent cognitive processes different from those indexed by the stimulus-related P3b. Finally, a distraction effect on performance (all three groups) and on postRFP latency (only the middle-aged group) was also observed. PMID:26589359

  20. Design and evaluation of area-efficient and wide-range impedance analysis circuit for multichannel high-quality brain signal recording system

    NASA Astrophysics Data System (ADS)

    Iwagami, Takuma; Tani, Takaharu; Ito, Keita; Nishino, Satoru; Harashima, Takuya; Kino, Hisashi; Kiyoyama, Koji; Tanaka, Tetsu

    2016-04-01

    To enable chronic and stable neural recording, we have been developing an implantable multichannel neural recording system with impedance analysis functions. One of the important requirements for high-quality neural signal recording is to maintain good interfaces between the recording electrodes and tissue. We have proposed an impedance analysis circuit with a very small circuit area, which is implemented in a multichannel neural recording and stimulating system. In this paper, we focused on the design of the impedance analysis circuit configuration and the evaluation of a minimal voltage measurement unit. The proposed circuit has a very small circuit area of 0.23 mm2, designed with 0.18 µm CMOS technology, and can measure interface impedances between recording electrodes and tissue over an ultrawide range from 100 Ω to 10 MΩ. In addition, we also successfully acquired interface impedances using the proposed circuit in agarose gel experiments.
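
    The abstract describes an on-chip circuit; as a software-level illustration of the underlying measurement principle (not the authors' implementation), the Python sketch below injects a small sinusoidal test current into an assumed series-resistance-plus-parallel-RC electrode model and recovers the impedance magnitude and phase by synchronous (lock-in style) demodulation. All component values, the test frequency, and the noise level are assumptions.

      import numpy as np

      fs = 100_000                       # sampling rate (Hz), assumed
      f0 = 1_000                         # test frequency (Hz), assumed
      t = np.arange(0, 0.05, 1 / fs)     # 50 full cycles of the test tone

      # Hypothetical electrode-tissue interface: access resistance in series with a
      # parallel RC (a common simplified electrode model), evaluated at f0.
      Rs, Rct, Cdl = 10e3, 1e6, 10e-9
      w = 2 * np.pi * f0
      Z_true = Rs + 1 / (1 / Rct + 1j * w * Cdl)

      I0 = 10e-9                                                        # 10-nA test current
      i_t = I0 * np.sin(w * t)                                          # injected current
      v_t = I0 * np.abs(Z_true) * np.sin(w * t + np.angle(Z_true))      # measured voltage
      v_t = v_t + 2e-6 * np.random.default_rng(3).standard_normal(t.size)   # amplifier noise

      # Synchronous (lock-in style) demodulation against quadrature references
      ref_i, ref_q = np.sin(w * t), np.cos(w * t)
      V = 2 * np.mean(v_t * ref_i) + 2j * np.mean(v_t * ref_q)   # complex voltage phasor
      I = 2 * np.mean(i_t * ref_i) + 2j * np.mean(i_t * ref_q)   # complex current phasor
      Z_est = V / I

      print(f"true |Z| = {abs(Z_true)/1e3:.1f} kOhm, estimated |Z| = {abs(Z_est)/1e3:.1f} kOhm")
      print(f"true phase = {np.degrees(np.angle(Z_true)):.1f} deg, "
            f"estimated phase = {np.degrees(np.angle(Z_est)):.1f} deg")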

  1. The Drosophila Auditory System

    PubMed Central

    Boekhoff-Falk, Grace; Eberl, Daniel F.

    2013-01-01

    Development of a functional auditory system in Drosophila requires specification and differentiation of the chordotonal sensilla of Johnston’s organ (JO) in the antenna, correct axonal targeting to the antennal mechanosensory and motor center (AMMC) in the brain, and synaptic connections to neurons in the downstream circuit. Chordotonal development in JO is functionally complicated by structural, molecular and functional diversity that is not yet fully understood, and construction of the auditory neural circuitry is only beginning to unfold. Here we describe our current understanding of developmental and molecular mechanisms that generate the exquisite functions of the Drosophila auditory system, emphasizing recent progress and highlighting important new questions arising from research on this remarkable sensory system. PMID:24719289

  2. Site of auditory plasticity in the brain stem (VLVp) of the owl revealed by early monaural occlusion.

    PubMed

    Mogdans, J; Knudsen, E I

    1994-12-01

    1. The optic tectum of the barn owl contains a physiological map of interaural level difference (ILD) that underlies, in part, its map of auditory space. Monaural occlusion shifts the range of ILDs experienced by an animal and alters the correspondence of ILDs with source locations. Chronic monaural occlusion during development induces an adaptive shift in the tectal ILD map that compensates for the effects of the earplug. The data presented in this study indicate that one site of plasticity underlying this adaptive adjustment is in the posterior division of the ventral nucleus of the lateral lemniscus (VLVp), the first site of ILD comparison in the auditory pathway. 2. Single and multiple unit sites were recorded in the optic tecta and VLVps of ketamine-anesthetized owls. The owls were raised from 4 wk of age with one ear occluded with an earplug. Auditory testing, using digitally synthesized dichotic stimuli, was carried out 8-16 wk later with the earplug removed. The adaptive adjustment in ILD coding in each bird was quantified as the shift from normal ILD tuning measured in the optic tectum. Evidence of adaptive adjustment in the VLVp was based on statistical differences between the VLVp's ipsilateral and contralateral to the occluded ear in the sensitivity of units to excitatory-ear and inhibitory-ear stimulation. 3. The balance of excitatory to inhibitory influences on VLVp units was shifted in the adaptive direction in six out of eight owls. In three of these owls, adaptive differences in inhibition, but not in excitation, were found. For this group of owls, the patterns of response properties across the two VLVps can only be accounted for by plasticity in the VLVp. For the other three owls, the possibility that the difference between the two VLVps resulted from damage to one of the VLVps could not be eliminated, and for one of these, plasticity at a more peripheral site (in the cochlea or cochlear nucleus) could also explain the data. In the remaining two

  3. Sex, acceleration, brain imaging, and rhesus monkeys: Converging evidence for an evolutionary bias for looming auditory motion

    NASA Astrophysics Data System (ADS)

    Neuhoff, John G.

    2003-04-01

    Increasing acoustic intensity is a primary cue to looming auditory motion. Perceptual overestimation of increasing intensity could provide an evolutionary selective advantage by specifying that an approaching sound source is closer than actual, thus affording advanced warning and more time than expected to prepare for the arrival of the source. Here, multiple lines of converging evidence for this evolutionary hypothesis are presented. First, it is shown that intensity change specifying accelerating source approach changes in loudness more than equivalent intensity change specifying decelerating source approach. Second, consistent with evolutionary hunter-gatherer theories of sex-specific spatial abilities, it is shown that females have a significantly larger bias for rising intensity than males. Third, using functional magnetic resonance imaging in conjunction with approaching and receding auditory motion, it is shown that approaching sources preferentially activate a specific neural network responsible for attention allocation, motor planning, and translating perception into action. Finally, it is shown that rhesus monkeys also exhibit a rising intensity bias by orienting longer to looming tones than to receding tones. Together these results illustrate an adaptive perceptual bias that has evolved because it provides a selective advantage in processing looming acoustic sources. [Work supported by NSF and CDC.]

  4. Touch activates human auditory cortex.

    PubMed

    Schürmann, Martin; Caetano, Gina; Hlushchuk, Yevhen; Jousmäki, Veikko; Hari, Riitta

    2006-05-01

    Vibrotactile stimuli can facilitate hearing, both in hearing-impaired and in normally hearing people. Accordingly, the sounds of hands exploring a surface contribute to the explorer's haptic percepts. As a possible brain basis of such phenomena, functional brain imaging has identified activations specific to audiotactile interaction in secondary somatosensory cortex, auditory belt area, and posterior parietal cortex, depending on the quality and relative salience of the stimuli. We studied 13 subjects with non-invasive functional magnetic resonance imaging (fMRI) to search for auditory brain areas that would be activated by touch. Vibration bursts of 200 Hz were delivered to the subjects' fingers and palm and tactile pressure pulses to their fingertips. Noise bursts served to identify auditory cortex. Vibrotactile-auditory co-activation, addressed with minimal smoothing to obtain a conservative estimate, was found in an 85-mm3 region in the posterior auditory belt area. This co-activation could be related to facilitated hearing at the behavioral level, reflecting the analysis of sound-like temporal patterns in vibration. However, even tactile pulses (without any vibration) activated parts of the posterior auditory belt area, which therefore might subserve processing of audiotactile events that arise during dynamic contact between hands and environment. PMID:16488157

  5. Effect Of Electromagnetic Waves Emitted From Mobile Phone On Brain Stem Auditory Evoked Potential In Adult Males.

    PubMed

    Singh, K

    2015-01-01

    The mobile phone (MP) is a commonly used communication tool, and the electromagnetic waves (EMWs) it emits may pose potential health hazards. We therefore studied the effect of EMWs emitted from the mobile phone on the brainstem auditory evoked potential (BAEP) in male subjects in the age group of 20-40 years. BAEPs were recorded using the standard 10-20 system of electrode placement and sound click stimuli of specified intensity, duration and frequency. The right ear was exposed to EMWs emitted from an MP for about 10 min. On comparison of recordings before and after exposure to the MP in the right ear (found to be the dominant ear), there was a significant increase in the latency of waves II, III (p < 0.05) and V (p < 0.001), an increase in the amplitude of wave I-Ia (p < 0.05) and a decrease in the IPL of waves III-V (p < 0.05) after exposure. No significant change was found in the BAEP waves of the left ear before versus after MP exposure. On comparison of the right ear (routinely exposed, being the dominant ear) and the left ear (not exposed to the MP) before exposure, the IPL of waves III-V and the amplitude of wave V-Va were greater (p < 0.001) in the right ear, whereas the latencies of waves III and IV were longer (p < 0.001) in the left ear. After exposure to the MP, the amplitude of wave V-Va was greater (p < 0.05) in the right ear than in the left ear. In conclusion, EMWs emitted from mobile phones affect the auditory evoked potential. PMID:27530007

  6. Implicit learning of predictable sound sequences modulates human brain responses at different levels of the auditory hierarchy

    PubMed Central

    Lecaignard, Françoise; Bertrand, Olivier; Gimenez, Gérard; Mattout, Jérémie; Caclin, Anne

    2015-01-01

    Deviant stimuli, violating regularities in a sensory environment, elicit the mismatch negativity (MMN), largely described in the event-related potential literature. While it is widely accepted that the MMN reflects more than basic change detection, a comprehensive description of the mental processes modulating this response is still lacking. Within the framework of predictive coding, deviance processing is part of an inference process where prediction errors (the mismatch between incoming sensations and predictions established through experience) are minimized. In this view, the MMN is a measure of prediction error, which yields specific expectations regarding its modulations by various experimental factors. In particular, it predicts that the MMN should decrease as the occurrence of a deviance becomes more predictable. We conducted a passive oddball EEG study and manipulated the predictability of sound sequences by means of different temporal structures. Importantly, our design allows comparing mismatch responses elicited by predictable and unpredictable violations of a simple repetition rule and therefore departs from previous studies that investigate violations of different time-scale regularities. We observed a decrease of the MMN with predictability and, interestingly, a similar effect at earlier latencies, within 70 ms after deviance onset. Following these pre-attentive responses, a reduced P3a was measured in the case of predictable deviants. We conclude that early and late deviance responses reflect prediction errors, triggering belief updating within the auditory hierarchy. Besides, in this passive study, such perceptual inference appears to be modulated by higher-level implicit learning of sequence statistical structures. Our findings argue for a hierarchical model of auditory processing where predictive coding enables implicit extraction of environmental regularities. PMID:26441602

  7. Implicit learning of predictable sound sequences modulates human brain responses at different levels of the auditory hierarchy.

    PubMed

    Lecaignard, Françoise; Bertrand, Olivier; Gimenez, Gérard; Mattout, Jérémie; Caclin, Anne

    2015-01-01

    Deviant stimuli, violating regularities in a sensory environment, elicit the mismatch negativity (MMN), largely described in the event-related potential literature. While it is widely accepted that the MMN reflects more than basic change detection, a comprehensive description of the mental processes modulating this response is still lacking. Within the framework of predictive coding, deviance processing is part of an inference process where prediction errors (the mismatch between incoming sensations and predictions established through experience) are minimized. In this view, the MMN is a measure of prediction error, which yields specific expectations regarding its modulations by various experimental factors. In particular, it predicts that the MMN should decrease as the occurrence of a deviance becomes more predictable. We conducted a passive oddball EEG study and manipulated the predictability of sound sequences by means of different temporal structures. Importantly, our design allows comparing mismatch responses elicited by predictable and unpredictable violations of a simple repetition rule and therefore departs from previous studies that investigate violations of different time-scale regularities. We observed a decrease of the MMN with predictability and, interestingly, a similar effect at earlier latencies, within 70 ms after deviance onset. Following these pre-attentive responses, a reduced P3a was measured in the case of predictable deviants. We conclude that early and late deviance responses reflect prediction errors, triggering belief updating within the auditory hierarchy. Besides, in this passive study, such perceptual inference appears to be modulated by higher-level implicit learning of sequence statistical structures. Our findings argue for a hierarchical model of auditory processing where predictive coding enables implicit extraction of environmental regularities. PMID:26441602

  8. Preferred EEG brain states at stimulus onset in a fixed interstimulus interval equiprobable auditory Go/NoGo task: a definitive study.

    PubMed

    Barry, Robert J; De Blasio, Frances M; De Pascalis, Vilfredo; Karamacoska, Diana

    2014-10-01

    This study examined the occurrence of preferred EEG phase states at stimulus onset in an equiprobable auditory Go/NoGo task with a fixed interstimulus interval, and their effects on the resultant event-related potentials (ERPs). We used a sliding short-time FFT decomposition of the EEG at Cz for each trial to assess prestimulus EEG activity in the delta, theta, alpha and beta bands. We determined the phase of each 2 Hz narrow-band contributing to these four broad bands at 125 ms before each stimulus onset, and for the first time, avoided contamination from poststimulus EEG activity. This phase value was extrapolated 125 ms to obtain the phase at stimulus onset, combined into the broad-band phase, and used to sort trials into four phase groups for each of the four broad bands. For each band, ERPs were derived for each phase from the raw EEG activity at 19 sites. Data sets from each band were separately decomposed using temporal Principal Components Analyses with unrestricted VARIMAX rotation to extract N1-1, PN, P2, P3, SW and LP components. Each component was analysed as a function of EEG phase at stimulus onset in the context of a simple conceptualisation of orthogonal phase effects (cortical negativity vs. positivity, negative driving vs. positive driving, waxing vs. waning). The predicted non-random occurrence of phase-defined brain states was confirmed. The preferred states of negativity, negative driving, and waxing were each associated with more efficient stimulus processing, as reflected in amplitude differences of the components. The present results confirm the existence of preferred brain states and their impact on the efficiency of brain dynamics in perceptual and cognitive processing. PMID:25043955
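
    The core manipulation, estimating narrow-band EEG phase from prestimulus data only and extrapolating it to stimulus onset, can be sketched as follows. The window length, band limits, and the synthetic single-channel trials are assumptions for illustration; the study's actual decomposition used sliding short-time FFTs across several 2-Hz narrow bands and four broad bands.

      import numpy as np

      fs = 250                                     # sampling rate (Hz), assumed
      n_trials = 200
      pre = int(1.0 * fs)                          # 1 s of prestimulus EEG per trial
      rng = np.random.default_rng(4)

      # Hypothetical single-channel (Cz) prestimulus EEG: 10-Hz alpha with a random
      # phase per trial, plus broadband noise.
      t = np.arange(pre) / fs
      trials = (np.sin(2 * np.pi * 10 * t + rng.uniform(0, 2 * np.pi, (n_trials, 1)))
                + rng.standard_normal((n_trials, pre)))

      win = int(0.5 * fs)                          # 500-ms FFT window (assumed length)
      gap = int(0.125 * fs)                        # window ends 125 ms before stimulus onset
      seg = trials[:, pre - gap - win: pre - gap] * np.hanning(win)

      freqs = np.fft.rfftfreq(win, 1 / fs)
      spec = np.fft.rfft(seg, axis=1)
      band = (freqs >= 8) & (freqs <= 13)          # alpha band (assumed)
      k = np.where(band)[0][np.argmax(np.abs(spec[:, band]).mean(axis=0))]

      phase_start = np.angle(spec[:, k])           # phase referenced to the window start
      # Phase at -125 ms (window end), then extrapolated a further 125 ms to stimulus
      # onset, assuming the narrow-band frequency stays constant over that interval.
      phase_m125 = phase_start + 2 * np.pi * freqs[k] * (win / fs)
      phase_onset = np.mod(phase_m125 + 2 * np.pi * freqs[k] * 0.125, 2 * np.pi)

      # Sort trials into four phase groups for subsequent phase-conditioned ERP averaging
      quadrant = np.minimum((phase_onset // (np.pi / 2)).astype(int), 3)
      print("trials per phase quadrant:", np.bincount(quadrant, minlength=4))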

  9. Cross-Modal Recruitment of Primary Visual Cortex by Auditory Stimuli in the Nonhuman Primate Brain: A Molecular Mapping Study

    PubMed Central

    Hirst, Priscilla; Javadi Khomami, Pasha; Gharat, Amol; Zangenehpour, Shahin

    2012-01-01

    Recent studies suggest that exposure to only one component of audiovisual events can lead to cross-modal cortical activation. However, it is not certain whether such crossmodal recruitment can occur in the absence of explicit conditioning, semantic factors, or long-term associations. A recent study demonstrated that crossmodal cortical recruitment can occur even after a brief exposure to bimodal stimuli without semantic association. In addition, the authors showed that the primary visual cortex is under such crossmodal influence. In the present study, we used molecular activity mapping of the immediate early gene zif268. We found that animals, which had previously been exposed to a combination of auditory and visual stimuli, showed an increased number of active neurons in the primary visual cortex when presented with sounds alone. As previously implied, this crossmodal activation appears to be the result of implicit associations of the two stimuli, likely driven by their spatiotemporal characteristics; it was observed after a relatively short period of exposure (~45 min) and lasted for a relatively long period after the initial exposure (~1 day). These results suggest that the previously reported findings may be directly rooted in the increased activity of the neurons occupying the primary visual cortex. PMID:22792489

  10. Noise-gated encoding of slow inputs by auditory brain stem neurons with a low-threshold K+ current.

    PubMed

    Gai, Yan; Doiron, Brent; Kotak, Vibhakar; Rinzel, John

    2009-12-01

    Phasic neurons, which do not fire repetitively to steady depolarization, are found at various stages of the auditory system. Phasic neurons are commonly described as band-pass filters because they do not respond to low-frequency inputs even when the amplitude is large. However, we show that phasic neurons can encode low-frequency inputs when noise is present. With a low-threshold potassium current (I(KLT)), a phasic neuron model responds to rising and falling phases of a subthreshold low-frequency signal with white noise. When the white noise was low-pass filtered, the phasic model also responded to the signal's trough but still not to the peak. In contrast, a tonic neuron model fired mostly to the signal's peak. To test the model predictions, whole cell slice recordings were obtained in the medial (MSO) and lateral (LSO) superior olivary neurons in gerbil from postnatal day 10 (P10) to 22. The phasic MSO neurons with strong I(KLT), mostly from gerbils aged P17 or older, showed firing patterns consistent with the preceding predictions. Moreover, injecting a virtual I(KLT) into weak-phasic MSO and tonic LSO neurons with putative weak or no I(KLT) (from gerbils younger than P17) shifted the neural response from the signal's peak to the rising phase. These findings advance our knowledge about how noise gates the signal pathway and how phasic neurons encode slow envelopes of sounds with high-frequency carriers. PMID:19812289

  11. Maps of the Auditory Cortex.

    PubMed

    Brewer, Alyssa A; Barton, Brian

    2016-07-01

    One of the fundamental properties of the mammalian brain is that sensory regions of cortex are formed of multiple, functionally specialized cortical field maps (CFMs). Each CFM comprises two orthogonal topographical representations, reflecting two essential aspects of sensory space. In auditory cortex, auditory field maps (AFMs) are defined by the combination of tonotopic gradients, representing the spectral aspects of sound (i.e., tones), with orthogonal periodotopic gradients, representing the temporal aspects of sound (i.e., period or temporal envelope). Converging evidence from cytoarchitectural and neuroimaging measurements underlies the definition of 11 AFMs across core and belt regions of human auditory cortex, with likely homology to those of macaque. On a macrostructural level, AFMs are grouped into cloverleaf clusters, an organizational structure also seen in visual cortex. Future research can now use these AFMs to investigate specific stages of auditory processing, key for understanding behaviors such as speech perception and multimodal sensory integration. PMID:27145914

  12. Auditory spatial processing in Alzheimer's disease.

    PubMed

    Golden, Hannah L; Nicholas, Jennifer M; Yong, Keir X X; Downey, Laura E; Schott, Jonathan M; Mummery, Catherine J; Crutch, Sebastian J; Warren, Jason D

    2015-01-01

    The location and motion of sounds in space are important cues for encoding the auditory world. Spatial processing is a core component of auditory scene analysis, a cognitively demanding function that is vulnerable in Alzheimer's disease. Here we designed a novel neuropsychological battery based on a virtual space paradigm to assess auditory spatial processing in patient cohorts with clinically typical Alzheimer's disease (n = 20) and its major variant syndrome, posterior cortical atrophy (n = 12) in relation to healthy older controls (n = 26). We assessed three dimensions of auditory spatial function: externalized versus non-externalized sound discrimination, moving versus stationary sound discrimination and stationary auditory spatial position discrimination, together with non-spatial auditory and visual spatial control tasks. Neuroanatomical correlates of auditory spatial processing were assessed using voxel-based morphometry. Relative to healthy older controls, both patient groups exhibited impairments in detection of auditory motion, and stationary sound position discrimination. The posterior cortical atrophy group showed greater impairment for auditory motion processing and the processing of a non-spatial control complex auditory property (timbre) than the typical Alzheimer's disease group. Voxel-based morphometry in the patient cohort revealed grey matter correlates of auditory motion detection and spatial position discrimination in right inferior parietal cortex and precuneus, respectively. These findings delineate auditory spatial processing deficits in typical and posterior Alzheimer's disease phenotypes that are related to posterior cortical regions involved in both syndromic variants and modulated by the syndromic profile of brain degeneration. Auditory spatial deficits contribute to impaired spatial awareness in Alzheimer's disease and may constitute a novel perceptual model for probing brain network disintegration across the Alzheimer's disease

  13. Decreases in energy and increases in phase locking of event-related oscillations to auditory stimuli occur during adolescence in human and rodent brain.

    PubMed

    Ehlers, Cindy L; Wills, Derek N; Desikan, Anita; Phillips, Evelyn; Havstad, James

    2014-01-01

    Synchrony of phase (phase locking) of event-related oscillations (EROs) within and between different brain areas has been suggested to reflect communication exchange between neural networks and as such may be a sensitive and translational measure of changes in brain remodeling that occur during adolescence. This study sought to investigate developmental changes in EROs using a similar auditory event-related potential (ERP) paradigm in both rats and humans. Energy and phase variability of EROs collected from 38 young adult men (aged 18-25 years), 33 periadolescent boys (aged 10-14 years), 15 male periadolescent rats [at postnatal day (PD) 36] and 19 male adult rats (at PD103) were investigated. Three channels of ERP data (frontal cortex, central cortex and parietal cortex) were collected from the humans using an 'oddball plus noise' paradigm that was presented under passive (no behavioral response required) conditions in the periadolescents and under active conditions (where each subject was instructed to depress a counter each time he detected an infrequent target tone) in adults and adolescents. ERPs were recorded in rats using only the passive paradigm. In order to compare the tasks used in rats to those used in humans, we first studied whether three ERO measures [energy, phase locking index (PLI) within an electrode site and phase difference locking index (PDLI) between different electrode sites] differentiated the 'active' from 'passive' ERP tasks. Secondly, we explored our main question of whether the three ERO measures differentiated adults from periadolescents in a similar manner in both humans and rats. No significant changes were found in measures of ERO energy between the active and passive tasks in the periadolescent human participants. There was a smaller but significant increase in PLI but not PDLI as a function of active task requirements. Developmental differences were found in energy, PLI and PDLI values between the periadolescents and adults in
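
    The three ERO measures named above can be illustrated on synthetic trials. In the Python sketch below (band limits, trial structure, and the theta-band evoked oscillation are assumptions), energy is the trial-averaged squared envelope, the phase locking index (PLI) is the across-trial consistency of phase at one "electrode", and the phase difference locking index (PDLI) is the across-trial consistency of the phase difference between two "electrodes".

      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert

      fs, n_trials, n_samp = 250, 100, 250          # 1-s epochs, assumed
      rng = np.random.default_rng(5)
      t = np.arange(n_samp) / fs

      def make_trials(jitter):
          """Hypothetical evoked 6-Hz oscillation with trial-to-trial phase jitter."""
          phases = rng.normal(0, jitter, (n_trials, 1))
          return np.cos(2 * np.pi * 6 * t + phases) + rng.standard_normal((n_trials, n_samp))

      frontal = make_trials(jitter=0.4)             # more tightly phase-locked
      parietal = make_trials(jitter=1.2)            # more variable phase

      def analytic(x, lo=4, hi=8):
          """Band-pass (4-8 Hz) analytic signal of each trial."""
          b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
          return hilbert(filtfilt(b, a, x, axis=-1), axis=-1)

      za, zb = analytic(frontal), analytic(parietal)

      # ERO energy: trial-averaged squared envelope in the band
      energy = np.mean(np.abs(za) ** 2, axis=0)

      # PLI within an electrode: consistency of phase across trials (0..1)
      pli = np.abs(np.mean(np.exp(1j * np.angle(za)), axis=0))

      # PDLI between electrodes: consistency of the phase *difference* across trials
      pdli = np.abs(np.mean(np.exp(1j * (np.angle(za) - np.angle(zb))), axis=0))

      print(f"peak energy {energy.max():.2f}, peak PLI {pli.max():.2f}, peak PDLI {pdli.max():.2f}")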

  14. Brain Dynamics of Aging: Multiscale Variability of EEG Signals at Rest and during an Auditory Oddball Task

    PubMed Central

    Sleimen-Malkoun, Rita; Perdikis, Dionysios; Müller, Viktor; Blanc, Jean-Luc; Huys, Raoul; Temprado, Jean-Jacques

    2015-01-01

    The present work focused on the study of fluctuations of cortical activity across time scales in young and older healthy adults. The main objective was to offer a comprehensive characterization of the changes of brain (cortical) signal variability during aging, and to link them with known underlying structural, neurophysiological, and functional modifications, as well as with aging theories. We analyzed electroencephalogram (EEG) data of young and elderly adults, which were collected at resting state and during an auditory oddball task. We used a wide battery of metrics that typically are applied separately in the literature, and we compared them with more specific ones that address their limits. Our procedure aimed to overcome some of the methodological limitations of earlier studies and to verify whether previous findings can be reproduced and extended to different experimental conditions. In both rest and task conditions, our results mainly revealed that EEG signals presented systematic age-related changes that were time-scale-dependent with regard to the structure of fluctuations (complexity) but not with regard to their magnitude. Namely, compared with young adults, the cortical fluctuations of the elderly were more complex at shorter time scales, but less complex at longer scales, although always showing a lower variance. Additionally, the elderly showed signs of dedifferentiation, both spatially and between experimental conditions. By integrating these so far isolated findings across time scales, metrics, and conditions, the present study offers an overview of age-related changes in the fluctuations of electrocortical activity while making the link with underlying brain dynamics. PMID:26464983
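
    Two complementary metrics commonly used to characterize signal variability across time scales, in the spirit of the battery described above (though not necessarily the authors' exact metrics), are the variance and the sample entropy of coarse-grained versions of the signal. The Python sketch below computes both for a synthetic EEG-like signal; the signal model, entropy parameters, and scales are assumptions made for illustration.

      import numpy as np

      def coarse_grain(x, scale):
          """Average consecutive non-overlapping blocks of length `scale`."""
          n = x.size // scale
          return x[:n * scale].reshape(n, scale).mean(axis=1)

      def sample_entropy(x, m=2, r_factor=0.2):
          """Simplified SampEn(m, r): -log of the probability that sequences matching
          for m points (Chebyshev distance <= r) also match for m + 1 points."""
          r = r_factor * np.std(x)
          def match_count(mm):
              templates = np.array([x[i:i + mm] for i in range(x.size - mm)])
              d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
              return np.sum(d <= r) - templates.shape[0]      # exclude self-matches
          b, a = match_count(m), match_count(m + 1)
          return -np.log(a / b) if a > 0 and b > 0 else np.nan

      # Hypothetical EEG-like signal with structure on several time scales, built by
      # summing progressively smoothed noise (a crude 1/f-like surrogate).
      rng = np.random.default_rng(6)
      n = 1000
      sig = sum(np.convolve(rng.standard_normal(n), np.ones(2 ** k) / 2 ** k, mode="same")
                / (k + 1) for k in range(6))

      # Two views of variability across scales: magnitude (variance of the
      # coarse-grained series) and structure (sample entropy of the same series).
      for scale in (1, 2, 4, 8):
          cg = coarse_grain(sig, scale)
          print(f"scale {scale}: variance = {cg.var():.3f}, SampEn = {sample_entropy(cg):.2f}")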

  15. Brain Dynamics of Aging: Multiscale Variability of EEG Signals at Rest and during an Auditory Oddball Task.

    PubMed

    Sleimen-Malkoun, Rita; Perdikis, Dionysios; Müller, Viktor; Blanc, Jean-Luc; Huys, Raoul; Temprado, Jean-Jacques; Jirsa, Viktor K

    2015-01-01

    The present work focused on the study of fluctuations of cortical activity across time scales in young and older healthy adults. The main objective was to offer a comprehensive characterization of the changes of brain (cortical) signal variability during aging, and to make the link with known underlying structural, neurophysiological, and functional modifications, as well as aging theories. We analyzed electroencephalogram (EEG) data of young and elderly adults, which were collected at resting state and during an auditory oddball task. We used a wide battery of metrics that typically are separately applied in the literature, and we compared them with more specific ones that address their limits. Our procedure aimed to overcome some of the methodological limitations of earlier studies and verify whether previous findings can be reproduced and extended to different experimental conditions. In both rest and task conditions, our results mainly revealed that EEG signals presented systematic age-related changes that were time-scale-dependent with regard to the structure of fluctuations (complexity) but not with regard to their magnitude. Namely, compared with young adults, the cortical fluctuations of the elderly were more complex at shorter time scales, but less complex at longer scales, although always showing a lower variance. Additionally, the elderly showed signs of dedifferentiation, both spatially and between experimental conditions. By integrating these so far isolated findings across time scales, metrics, and conditions, the present study offers an overview of age-related changes in the fluctuations of electrocortical activity while making the link with underlying brain dynamics. PMID:26464983

  16. Auditory pathways: are 'what' and 'where' appropriate?

    PubMed

    Hall, Deborah A

    2003-05-13

    New evidence confirms that the auditory system encompasses temporal, parietal and frontal brain regions, some of which partly overlap with the visual system. But common assumptions about the functional homologies between sensory systems may be misleading. PMID:12747854

  17. [Brain stem auditory and visual evoked potentials in children and adolescents with Guillain-Barré syndrome].

    PubMed

    Zgorzalewicz, Małgorzata; Zielińska, Mariola; Kilarski, Dariusz

    2004-01-01

    amplitudes were found. These parameters were statistically significant in comparison to the control group. VEP results suggest involvement of the visual pathway in the examined children and adolescents. Evoked potentials (EPs) can be used as a complementary method for the evaluation of clinically silent lesions in the auditory and optic pathways in GBS. PMID:15045865

  18. Auditory system

    NASA Technical Reports Server (NTRS)

    Ades, H. W.

    1973-01-01

    The physical correlates of hearing, i.e., the acoustic stimuli, are reported. The auditory system, consisting of the external ear, middle ear, inner ear, organ of Corti, basilar membrane, hair cells, inner hair cells, outer hair cells, innervation of the hair cells, and transducer mechanisms, is discussed. Both conductive and sensorineural hearing losses are also examined.

  19. Auditory synesthesias.

    PubMed

    Afra, Pegah

    2015-01-01

    Synesthesia is experienced when sensory stimulation of one sensory modality (the inducer) elicits an involuntary or automatic sensation in another sensory modality or different aspect of the same sensory modality (the concurrent). Auditory synesthesias (AS) occur when auditory stimuli trigger a variety of concurrents, or when non-auditory sensory stimulations trigger auditory synesthetic perception. The AS are divided into three types: developmental, acquired, and induced. Developmental AS are not a neurologic disorder but a different way of experiencing one's environment. They are involuntary and highly consistent experiences throughout one's life. Acquired AS have been reported in association with neurologic diseases that cause deafferentation of anterior optic pathways, with pathologic lesions affecting the central nervous system (CNS) outside of the optic pathways, as well as in non-lesional cases associated with migraine and epilepsy. It has also been reported with mood disorders as well as in a single idiopathic case. Induced AS have been reported with experimental and postsurgical blindfolding, as well as with intake of hallucinogens or psychedelics. In this chapter the three different types of synesthesia, their characteristics and phenomenologic differences, as well as their possible neural mechanisms are discussed. PMID:25726281

  20. Harmonic Training and the Formation of Pitch Representation in a Neural Network Model of the Auditory Brain

    PubMed Central

    Ahmad, Nasir; Higgins, Irina; Walker, Kerry M. M.; Stringer, Simon M.

    2016-01-01

    Attempting to explain the perceptual qualities of pitch has proven to be, and remains, a difficult problem. The wide range of sounds which elicit pitch and a lack of agreement across neurophysiological studies on how pitch is encoded by the brain have made this attempt more difficult. In describing the potential neural mechanisms by which pitch may be processed, a number of neural networks have been proposed and implemented. However, no unsupervised neural networks with biologically accurate cochlear inputs have yet been demonstrated. This paper proposes a simple system in which pitch representing neurons are produced in a biologically plausible setting. Purely unsupervised regimes of neural network learning are implemented and these prove to be sufficient in identifying the pitch of sounds with a variety of spectral profiles, including sounds with missing fundamental frequencies and iterated rippled noises. PMID:27047368

  1. Harmonic Training and the Formation of Pitch Representation in a Neural Network Model of the Auditory Brain.

    PubMed

    Ahmad, Nasir; Higgins, Irina; Walker, Kerry M M; Stringer, Simon M

    2016-01-01

    Attempting to explain the perceptual qualities of pitch has proven to be, and remains, a difficult problem. The wide range of sounds which elicit pitch and a lack of agreement across neurophysiological studies on how pitch is encoded by the brain have made this attempt more difficult. In describing the potential neural mechanisms by which pitch may be processed, a number of neural networks have been proposed and implemented. However, no unsupervised neural networks with biologically accurate cochlear inputs have yet been demonstrated. This paper proposes a simple system in which pitch representing neurons are produced in a biologically plausible setting. Purely unsupervised regimes of neural network learning are implemented and these prove to be sufficient in identifying the pitch of sounds with a variety of spectral profiles, including sounds with missing fundamental frequencies and iterated rippled noises. PMID:27047368

  2. List mode multichannel analyzer

    DOEpatents

    Archer, Daniel E.; Luke, S. John; Mauger, G. Joseph; Riot, Vincent J.; Knapp, David A.

    2007-08-07

    A digital list mode multichannel analyzer (MCA) is built around a programmable FPGA device for onboard data analysis and on-the-fly modification of system detection/operating parameters. It is capable of collecting and processing data in very small time bins (<1 millisecond) when used in histogramming mode, or of recording individual events when used as a list mode MCA.
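
    As a rough illustration of the two acquisition modes mentioned in this record, list mode stores one record per detected event (here a hypothetical timestamp/channel pair), and the MCA spectrum for any chosen time window can be histogrammed afterwards; histogramming mode would instead accumulate the spectrum on the fly.

```python
import numpy as np

# Hypothetical list-mode records: one (timestamp, pulse-height channel) pair per event.
events = np.array([(120, 412), (305, 97), (960, 412), (700_250, 2048)],
                  dtype=[("t_ns", np.uint64), ("channel", np.uint16)])

# The raw stream is kept, so an MCA spectrum can be histogrammed afterwards for any
# time window -- e.g. the sub-millisecond bins mentioned in the abstract.
first_ms = events[events["t_ns"] < 1_000_000]
spectrum, _ = np.histogram(first_ms["channel"], bins=4096, range=(0, 4096))
```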

  3. Auditory short-term memory in the primate auditory cortex.

    PubMed

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. PMID:26541581

  4. Impaired auditory selective attention ameliorated by cognitive training with graded exposure to noise in patients with traumatic brain injury.

    PubMed

    Dundon, Neil M; Dockree, Suvi P; Buckley, Vanessa; Merriman, Niamh; Carton, Mary; Clarke, Sarah; Roche, Richard A P; Lalor, Edmund C; Robertson, Ian H; Dockree, Paul M

    2015-08-01

    Patients who suffer traumatic brain injury frequently report difficulty concentrating on tasks and completing routine activities in noisy and distracting environments. Such impairments can have long-term negative psychosocial consequences. A cognitive control function that may underlie this impairment is the capacity to select a goal-relevant signal for further processing while safeguarding it from irrelevant noise. A paradigmatic investigation of this problem was undertaken using a dichotic listening task (study 1) in which comprehension of a stream of speech to one ear was measured in the context of increasing interference from a second stream of irrelevant speech to the other ear. Controls showed an initial decline in performance in the presence of competing speech but thereafter showed adaptation to increasing audibility of irrelevant speech, even at the highest levels of noise. By contrast, patients showed a linear decline in performance with increasing noise. Subsequently, attempts were made to ameliorate this deficit (study 2) using a cognitive training procedure based on attention process training (APT) that included graded exposure to irrelevant noise over the course of training. Patients were assigned to adaptive and non-adaptive training schedules or to a no-training control group. Results showed that both types of training drove improvements in the dichotic listening and in naturalistic tasks of performance in noise. Improvements were also seen on measures of selective attention in the visual domain suggesting transfer of training. We also observed augmentation of event-related potentials (ERPs) linked to target processing (P3b) but no change in ERPs evoked by distractor stimuli (P3a), suggesting that training heightened tuning of target signals, as opposed to gating irrelevant noise. No changes in any of the above measures were observed in a no-training control group. Together these findings present an ecologically valid approach to measure selective

  5. Electrophysiological study of auditory development.

    PubMed

    Lippé, S; Martinez-Montes, E; Arcand, C; Lassonde, M

    2009-12-15

    Cortical auditory evoked potential (CAEP) testing, a non-invasive technique, is widely employed to study auditory brain development. The aim of this study was to investigate the development of the auditory electrophysiological signal without addressing specific abilities such as speech or music discrimination. We were interested in the temporal and spectral domains of conventional auditory evoked potentials. We analyzed cerebral responses to auditory stimulation (broadband noises) in 40 infants and children (1 month to 5 years 6 months) and 10 adults using high-density electrophysiological recording. We hypothesized that the adult auditory response has precursors that can be identified in infant and child responses. Results confirm that complex adult CAEP responses and spectral activity patterns appear after 5 years, showing decreased involvement of lower frequencies and increased involvement of higher frequencies. In addition, the time-locked response to the stimulus and event-related spectral perturbation across frequencies revealed alpha and beta band contributions to the CAEP of infants and toddlers before mutation to the beta and gamma band activity of the adult response. A detailed analysis of electrophysiological responses to a perceptual stimulation revealed general development patterns and developmental precursors of the adult response. PMID:19665050

  6. Source analysis of magnetic field responses from the human auditory cortex elicited by short speech sounds.

    PubMed

    Kuriki, S; Okita, Y; Hirata, Y

    1995-01-01

    We made a detailed source analysis of the magnetic field responses that were elicited in the human brain by different monosyllabic speech sounds, including vowel, plosive, fricative, and nasal speech. Recordings of the magnetic field responses from a lateral area of the left hemisphere of human subjects were made using a multichannel SQUID magnetometer, having 37 field-sensing coils. A single source of the equivalent current dipole of the field was estimated from the spatial distribution of the evoked responses. The estimated sources of an N1m wave occurring at about 100 ms after the stimulus onset of different monosyllables were located close to each other within a 10-mm-sided cube in the three-dimensional space of the brain. Those sources registered on the magnetic resonance images indicated a restricted area in the auditory cortex, including Heschl's gyri in the superior temporal plane. In the spatiotemporal domain the sources exhibited apparent movements, among which anterior shift with latency increase on the anteroposterior axis and inferior shift on the inferosuperior axis were common in the responses to all monosyllables. However, selective movements that depended on the type of consonants were observed on the mediolateral axis; the sources of plosive and fricative responses shifted laterally with latency increase, but the source of the vowel response shifted medially. These spatiotemporal movements of the sources are discussed in terms of dynamic excitation of the cortical neurons in multiple areas of the human auditory cortex. PMID:7621933

  7. Long-term recovery from hippocampal-related behavioral and biochemical abnormalities induced by noise exposure during brain development. Evaluation of auditory pathway integrity.

    PubMed

    Uran, S L; Gómez-Casati, M E; Guelman, L R

    2014-10-01

    Sound is an important part of man's contact with the environment and has served as critical means for survival throughout his evolution. As a result of exposure to noise, physiological functions such as those involving structures of the auditory and non-auditory systems might be damaged. We have previously reported that noise-exposed developing rats elicited hippocampal-related histological, biochemical and behavioral changes. However, no data about the time lapse of these changes were reported. Moreover, measurements of auditory pathway function were not performed in exposed animals. Therefore, with the present work, we aim to test the onset and the persistence of the different extra-auditory abnormalities observed in noise-exposed rats and to evaluate auditory pathway integrity. Fifteen-day-old male Wistar rats were exposed to moderate noise levels (95-97 dB SPL, 2 h a day) for one day (acute noise exposure, ANE) or for 15 days (sub-acute noise exposure, SANE). Hippocampal biochemical determinations as well as short (ST) and long term (LT) behavioral assessments were performed. In addition, histological and functional evaluations of the auditory pathway were carried out in exposed animals. Our results show that hippocampal-related behavioral and biochemical changes (impairments in habituation, recognition and associative memories as well as distortion of anxiety-related behavior, decreases in reactive oxygen species (ROS) levels and increases in antioxidant enzymes activities) induced by noise exposure were almost completely restored by PND 90. In addition, auditory evaluation shows that increased cochlear thresholds observed in exposed rats were re-established at PND 90, although with a remarkable supra-threshold amplitude reduction. These data suggest that noise-induced hippocampal and auditory-related alterations are mostly transient and that the effects of noise on the hippocampus might be, at least in part, mediated by damage to the auditory pathway

  8. Intrinsic firing properties in the avian auditory brain stem allow both integration and encoding of temporally modulated noisy inputs in vitro.

    PubMed

    Kreeger, Lauren J; Arshed, Arslaan; MacLeod, Katrina M

    2012-11-01

    The intrinsic properties of tonically firing neurons in the cochlear nucleus contribute to representing average sound intensity by favoring synaptic integration across auditory nerve inputs, reducing phase locking to fine temporal acoustic structure and enhancing envelope locking. To determine whether tonically firing neurons of the avian cochlear nucleus angularis (NA) resemble ideal integrators, we investigated their firing responses to noisy current injections during whole cell patch-clamp recordings in brain slices. One subclass of neurons (36% of tonically firing neurons, mainly subtype tonic III) showed no significant changes in firing rate with noise fluctuations, acting like pure integrators. In contrast, many tonically firing neurons (>60%, mainly subtype tonic I or II) showed a robust sensitivity to noisy current fluctuations, increasing their firing rates with increased fluctuation amplitudes. For noise-sensitive tonic neurons, the firing rate vs. average current curves with noise had larger maximal firing rates, lower gains, and wider dynamic ranges compared with FI curves for current steps without noise. All NA neurons showed fluctuation-driven patterning of spikes with a high degree of temporal reliability and millisecond spike time precision. Single-spiking neurons in NA also responded to noisy currents with higher firing rates and reliable spike trains, although less precisely than nucleus magnocellularis neurons. Thus some NA neurons function as integrators by encoding average input levels over wide dynamic ranges regardless of current fluctuations, others detect the degree of coherence in the inputs, and most encode the temporal patterns contained in their inputs with a high degree of precision. PMID:22914650

  9. Multichannel Compressive Sensing MRI Using Noiselet Encoding

    PubMed Central

    Pawar, Kamlesh; Egan, Gary; Zhang, Jingxin

    2015-01-01

    The incoherence between measurement and sparsifying transform matrices and the restricted isometry property (RIP) of the measurement matrix are two of the key factors in determining the performance of compressive sensing (CS). In CS-MRI, the randomly under-sampled Fourier matrix is used as the measurement matrix and the wavelet transform is usually used as the sparsifying transform matrix. However, the incoherence between the randomly under-sampled Fourier matrix and the wavelet matrix is not optimal, which can deteriorate the performance of CS-MRI. Using the mathematical result that noiselets are maximally incoherent with wavelets, this paper introduces the noiselet unitary bases as the measurement matrix to improve the incoherence and RIP in CS-MRI. Based on an empirical RIP analysis that compares the multichannel noiselet and multichannel Fourier measurement matrices in CS-MRI, we propose a multichannel compressive sensing (MCS) framework to take advantage of the multichannel data acquisition used in MRI scanners. Simulations are presented in the MCS framework to compare the performance of noiselet encoding reconstructions and Fourier encoding reconstructions at different acceleration factors. The comparisons indicate that the multichannel noiselet measurement matrix has better RIP than its Fourier counterpart, and that noiselet encoded MCS-MRI outperforms Fourier encoded MCS-MRI in preserving image resolution and can achieve higher acceleration factors. To demonstrate the feasibility of the proposed noiselet encoding scheme, a pulse sequence with tailored spatially selective RF excitation pulses was designed and implemented on a 3T scanner to acquire the data in the noiselet domain from a phantom and a human brain. The results indicate that noiselet encoding preserves image resolution better than Fourier encoding. PMID:25965548
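
    The incoherence argument can be made concrete with the mutual coherence mu(Phi, Psi) = sqrt(N) * max_ij |<phi_i, psi_j>| between the measurement and sparsifying bases (mu ranges from 1, maximally incoherent, to sqrt(N)). The sketch below compares a unitary DFT measurement basis against a Haar wavelet basis; noiselets themselves are not implemented here, but the same computation would apply to them.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar wavelet basis for n = 2**k (rows are basis vectors)."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    m = np.vstack([np.kron(h, [1.0, 1.0]), np.kron(np.eye(n // 2), [1.0, -1.0])])
    return m / np.linalg.norm(m, axis=1, keepdims=True)

def mutual_coherence(phi, psi):
    """mu = sqrt(N) * max_ij |<phi_i, psi_j>| for orthonormal bases with rows as vectors."""
    return np.sqrt(phi.shape[0]) * np.max(np.abs(phi.conj() @ psi.T))

n = 64
fourier = np.fft.fft(np.eye(n)) / np.sqrt(n)   # unitary DFT rows as measurement vectors
print(mutual_coherence(fourier, haar_matrix(n)))
# ~8.0 = sqrt(64): Fourier and Haar share the constant (DC) vector, so they are maximally
# coherent; noiselets, by contrast, are maximally incoherent (mu = 1) with Haar wavelets.
```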

  10. Time-resolved multi-channel optical system for assessment of brain oxygenation and perfusion by monitoring of diffuse reflectance and fluorescence

    NASA Astrophysics Data System (ADS)

    Milej, D.; Gerega, A.; Kacprzak, M.; Sawosz, P.; Weigl, W.; Maniewski, R.; Liebert, A.

    2014-03-01

    Time-resolved near-infrared spectroscopy is an optical technique which can be applied in tissue oxygenation assessment. Over the last decade this method has been extensively tested as a potential clinical tool for noninvasive human brain function monitoring and imaging. In the present paper we describe the construction of an instrument which allows for: (i) estimation of changes in brain tissue oxygenation using a two-wavelength spectroscopy approach and (ii) brain perfusion assessment with the use of single-wavelength reflectometry or fluorescence measurements combined with ICG-bolus tracking. A signal processing algorithm based on statistical moments of measured distributions of times of flight of photons is implemented. This data analysis method allows for separation of signals originating from extra- and intracerebral tissue compartments. We present a compact and easily reconfigurable system which can be applied in different types of time-resolved experiments: two-wavelength measurements at 687 and 832 nm, single-wavelength reflectance measurements at 760 nm (which is at the maximum of the ICG absorption spectrum) or fluorescence measurements with excitation at 760 nm. Details of the instrument construction and results of its technical tests are shown. Furthermore, results of in-vivo measurements obtained for various modes of operation of the system are presented.
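
    A minimal version of the moments-based analysis mentioned above: from a measured distribution of times of flight (DTOF) one computes the total photon count, the mean time of flight, and the variance; the higher moments weight late-arriving (deeper-penetrating) photons more strongly, which is what helps separate extra- from intracerebral contributions. The bin values and units below are purely illustrative.

```python
import numpy as np

def dtof_moments(counts, t):
    """Statistical moments of a distribution of times of flight (DTOF) of photons:
    total photon count N, mean time of flight <t>, and variance V."""
    counts = np.asarray(counts, dtype=float)
    t = np.asarray(t, dtype=float)
    n_tot = counts.sum()
    mean_t = np.sum(t * counts) / n_tot
    var_t = np.sum((t - mean_t) ** 2 * counts) / n_tot
    return n_tot, mean_t, var_t

t = np.arange(0.0, 10.0, 0.025)                      # time axis in ns (illustrative)
dtof = 1e4 * np.exp(-((t - 2.0) ** 2) / 0.5) + 50.0  # made-up photon counts per bin
print(dtof_moments(dtof, t))
```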

  11. A subfemtotesla multichannel atomic magnetometer

    NASA Astrophysics Data System (ADS)

    Kominis, I. K.; Kornack, T. W.; Allred, J. C.; Romalis, M. V.

    2003-04-01

    The magnetic field is one of the most fundamental and ubiquitous physical observables, carrying information about all electromagnetic phenomena. For the past 30 years, superconducting quantum interference devices (SQUIDs) operating at 4 K have been unchallenged as ultrahigh-sensitivity magnetic field detectors, with a sensitivity reaching down to 1 fT Hz^(-1/2) (1 fT = 10^(-15) T). They have enabled, for example, mapping of the magnetic fields produced by the brain, and localization of the underlying electrical activity (magnetoencephalography). Atomic magnetometers, based on detection of Larmor spin precession of optically pumped atoms, have approached similar levels of sensitivity using large measurement volumes, but have much lower sensitivity in the more compact designs required for magnetic imaging applications. Higher sensitivity and spatial resolution combined with non-cryogenic operation of atomic magnetometers would enable new applications, including the possibility of mapping non-invasively the cortical modules in the brain. Here we describe a new spin-exchange relaxation-free (SERF) atomic magnetometer, and demonstrate magnetic field sensitivity of 0.54 fT Hz^(-1/2) with a measurement volume of only 0.3 cm^3. Theoretical analysis shows that fundamental sensitivity limits of this device are below 0.01 fT Hz^(-1/2). We also demonstrate simple multichannel operation of the magnetometer, and localization of magnetic field sources with a resolution of 2 mm.

  12. A subfemtotesla multichannel atomic magnetometer.

    PubMed

    Kominis, I K; Kornack, T W; Allred, J C; Romalis, M V

    2003-04-10

    The magnetic field is one of the most fundamental and ubiquitous physical observables, carrying information about all electromagnetic phenomena. For the past 30 years, superconducting quantum interference devices (SQUIDs) operating at 4 K have been unchallenged as ultrahigh-sensitivity magnetic field detectors, with a sensitivity reaching down to 1 fT Hz^(-1/2) (1 fT = 10^(-15) T). They have enabled, for example, mapping of the magnetic fields produced by the brain, and localization of the underlying electrical activity (magnetoencephalography). Atomic magnetometers, based on detection of Larmor spin precession of optically pumped atoms, have approached similar levels of sensitivity using large measurement volumes, but have much lower sensitivity in the more compact designs required for magnetic imaging applications. Higher sensitivity and spatial resolution combined with non-cryogenic operation of atomic magnetometers would enable new applications, including the possibility of mapping non-invasively the cortical modules in the brain. Here we describe a new spin-exchange relaxation-free (SERF) atomic magnetometer, and demonstrate magnetic field sensitivity of 0.54 fT Hz^(-1/2) with a measurement volume of only 0.3 cm^3. Theoretical analysis shows that fundamental sensitivity limits of this device are below 0.01 fT Hz^(-1/2). We also demonstrate simple multichannel operation of the magnetometer, and localization of magnetic field sources with a resolution of 2 mm. PMID:12686995

  13. Multichannel Human Body Communication

    NASA Astrophysics Data System (ADS)

    Przystup, Piotr; Bujnowski, Adam; Wtorek, Jerzy

    2016-01-01

    Human Body Communication is an attractive alternative to traditional wireless communication (Bluetooth, ZigBee) in the case of Body Sensor Networks. Low power, high data rates and data security make it an ideal solution for medical applications. In this paper, signal attenuation at different frequencies, using FR4 electrodes, has been investigated. The performance of single- and multichannel transmission with frequency modulation of an analog signal has been tested. Experimental results show that HBC is a feasible solution for transmitting data between BSN nodes.

  14. Miniature multichannel biotelemeter system

    NASA Technical Reports Server (NTRS)

    Carraway, J. B.; Sumida, J. T. (Inventor)

    1974-01-01

    A miniature multichannel biotelemeter system is described. The system includes a transmitter where signals from different sources are sampled to produce a wavetrain of pulses. The transmitter also separates signals by sync pulses. The pulses amplitude modulate a radio frequency carrier which is received at a receiver unit. There the sync pulses are detected by a demultiplexer which routes the pulses from each different source to a separate output channel where the pulses are used to reconstruct the signals from the particular source.
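
    A toy sketch of the time-division scheme described above, assuming a software-level stream in which a sync marker precedes each frame of one sample per source; the real system encodes this in pulse amplitudes on an RF carrier, and the marker value here is purely hypothetical.

```python
SYNC = -1.0  # assumed out-of-band sync marker; real hardware would use a distinctive pulse

def multiplex(frames):
    """frames: list of per-sample lists, one entry per signal source.
    A sync pulse precedes each frame so the receiver can realign."""
    stream = []
    for frame in frames:
        stream.append(SYNC)
        stream.extend(frame)
    return stream

def demultiplex(stream, n_sources):
    """Detect sync pulses and route each following pulse to its source's output channel."""
    outputs = [[] for _ in range(n_sources)]
    i = 0
    while i < len(stream):
        if stream[i] == SYNC and i + n_sources < len(stream):
            for ch in range(n_sources):
                outputs[ch].append(stream[i + 1 + ch])
            i += 1 + n_sources
        else:
            i += 1  # out of sync: skip until the next sync pulse
    return outputs

# round trip: three sources, two sampling frames
frames = [[0.1, 0.5, 0.9], [0.2, 0.6, 1.0]]
assert demultiplex(multiplex(frames), 3) == [[0.1, 0.2], [0.5, 0.6], [0.9, 1.0]]
```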

  15. Causal contribution of primate auditory cortex to auditory perceptual decision-making

    PubMed Central

    Tsunada, Joji; Liu, Andrew S.K.; Gold, Joshua I.; Cohen, Yale E.

    2015-01-01

    Auditory perceptual decisions are thought to be mediated by the ventral auditory pathway. However, the specific and causal contributions of different brain regions in this pathway, including the middle-lateral (ML) and anterolateral (AL) belt regions of the auditory cortex, to auditory decisions have not been fully identified. To identify these contributions, we recorded from and microstimulated ML and AL sites while monkeys decided whether an auditory stimulus contained more low-frequency or high-frequency tone bursts. Both ML and AL neural activity was modulated by the frequency content of the stimulus. However, only the responses of the most stimulus-sensitive AL neurons were systematically modulated by the monkeys’ choices. Consistent with this observation, microstimulation of AL—but not ML—systematically biased the monkeys’ behavior toward the choice associated with the preferred frequency of the stimulated site. Together, these findings suggest that AL directly and causally contributes sensory evidence used to form this auditory decision. PMID:26656644

  16. Atypical brain lateralisation in the auditory cortex and language performance in 3- to 7-year-old children with high-functioning autism spectrum disorder: a child-customised magnetoencephalography (MEG) study

    PubMed Central

    2013-01-01

    chronological age was a significant predictor of shorter P50m latency in the right hemisphere. Conclusions Using a child-customised MEG device, we studied the P50m component that was evoked through binaural human voice stimuli in young ASD and TD children to examine differences in auditory cortex function that are associated with language development. Our results suggest that there is atypical brain function in the auditory cortex in young children with ASD, regardless of language development. PMID:24103585

  17. The brain-stem auditory-evoked response in the big brown bat (Eptesicus fuscus) to clicks and frequency-modulated sweeps.

    PubMed

    Burkard, R; Moss, C F

    1994-08-01

    Three experiments were performed to evaluate the effects of stimulus level on the brain-stem auditory-evoked response (BAER) in the big brown bat (Eptesicus fuscus), a species that uses frequency-modulated (FM) sonar sounds for echolocation. In experiment 1, the effects of click level on the BAER were investigated. Clicks were presented at levels of 30 to 90 dB pSPL in 10-dB steps. Each animal responded reliably to clicks at levels of 50 dB pSPL and above, showing a BAER containing four peaks in the first 3-4 ms from click onset (waves i-iv). With increasing click level, BAER peak amplitude increased and peak latency decreased. A decrease in the i-iv interval also occurred with increasing click level. In experiment 2, stimuli were 1-ms linear FM sweeps, decreasing in frequency from 100 to 20 kHz. Stimulus levels ranged from 20 to 90 dB pSPL. BAERs to FM sweeps were observed in all animals for levels of 40 dB pSPL and above. These responses were similar to the click-evoked BAER in waveform morphology, with the notable exception of an additional peak observed at the higher levels of FM sweeps. This peak (wave ia) occurred prior to the first wave seen at lower levels (wave ib). As the level of the FM sweep increased, there was a decrease in peak latency and an increase in peak amplitude. Similarity in the magnitude and behavior of the i-iv and ib-iv intervals suggests that wave ib to FM sweeps is the homolog of the wave i response to click stimuli. Experiment 3 tested the hypothesis that wave ia represented activity emanating from more basal cochlear regions than wave ib. FM sweeps (100-20 kHz) were presented at 90 dB pSPL, and broadband noise was raised in level until the BAER was eliminated. This "masked threshold" occurred at 85 dB SPL of noise. At masked threshold, the broadband noise was steeply high-pass filtered at five cutoff frequencies ranging from 20 to 80 kHz. Generally, wave ia was eliminated for masker cutoff frequencies of 56.6 kHz and below, while wave

  18. Auditory Efferent System Modulates Mosquito Hearing.

    PubMed

    Andrés, Marta; Seifert, Marvin; Spalthoff, Christian; Warren, Ben; Weiss, Lukas; Giraldo, Diego; Winkler, Margret; Pauls, Stephanie; Göpfert, Martin C

    2016-08-01

    The performance of vertebrate ears is controlled by auditory efferents that originate in the brain and innervate the ear, synapsing onto hair cell somata and auditory afferent fibers [1-3]. Efferent activity can provide protection from noise and facilitate the detection and discrimination of sound by modulating mechanical amplification by hair cells and transmitter release as well as auditory afferent action potential firing [1-3]. Insect auditory organs are thought to lack efferent control [4-7], but when we inspected mosquito ears, we obtained evidence for its existence. Antibodies against synaptic proteins recognized rows of bouton-like puncta running along the dendrites and axons of mosquito auditory sensory neurons. Electron microscopy identified synaptic and non-synaptic sites of vesicle release, and some of the innervating fibers co-labeled with somata in the CNS. Octopamine, GABA, and serotonin were identified as efferent neurotransmitters or neuromodulators that affect auditory frequency tuning, mechanical amplification, and sound-evoked potentials. Mosquito brains thus modulate mosquito ears, extending the use of auditory efferent systems from vertebrates to invertebrates and adding new levels of complexity to mosquito sound detection and communication. PMID:27476597

  19. Primary auditory cortical responses to electrical stimulation of the thalamus.

    PubMed

    Atencio, Craig A; Shih, Jonathan Y; Schreiner, Christoph E; Cheung, Steven W

    2014-03-01

    Cochlear implant electrical stimulation of the auditory system to rehabilitate deafness has been remarkably successful. Its deployment requires both an intact auditory nerve and a suitably patent cochlear lumen. When disease renders prerequisite conditions impassable, such as in neurofibromatosis type II and cochlear obliterans, alternative treatment targets are considered. Electrical stimulation of the cochlear nucleus and midbrain in humans has delivered encouraging clinical outcomes, buttressing the promise of central auditory prostheses to mitigate deafness in those who are not candidates for cochlear implantation. In this study we explored another possible implant target: the auditory thalamus. In anesthetized cats, we first presented pure tones to determine frequency preferences of thalamic and cortical sites. We then electrically stimulated tonotopically organized thalamic sites while recording from primary auditory cortical sites using a multichannel recording probe. Cathode-leading biphasic thalamic stimulation thresholds that evoked cortical responses were much lower than published accounts of cochlear and midbrain stimulation. Cortical activation dynamic ranges were similar to those reported for cochlear stimulation, but they were narrower than those found through midbrain stimulation. Our results imply that thalamic stimulation can activate auditory cortex at low electrical current levels and suggest an auditory thalamic implant may be a viable central auditory prosthesis. PMID:24335216

  20. Fractional channel multichannel analyzer

    DOEpatents

    Brackenbush, L.W.; Anderson, G.A.

    1994-08-23

    A multichannel analyzer incorporating the features of the present invention obtains the effect of fractional channels thus greatly reducing the number of actual channels necessary to record complex line spectra. This is accomplished by using an analog-to-digital converter in the asynchronous mode, i.e., the gate pulse from the pulse height-to-pulse width converter is not synchronized with the signal from a clock oscillator. This saves power and reduces the number of components required on the board to achieve the effect of radically expanding the number of channels without changing the circuit board. 9 figs.
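
    The fractional-channel effect can be reproduced in a small simulation: because the clock is not synchronized to the gate pulse, its phase is effectively random for each event, so individual events fall into one of two adjacent integer channels while their average recovers the fractional pulse width. The parameters below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def mca_counts(width, clock_period, n_events):
    """Clock edges falling inside a gate of fixed width, with the clock free-running
    (random phase) relative to each gate -- the asynchronous mode of the abstract."""
    phase = rng.uniform(0.0, clock_period, size=n_events)              # unsynchronized clock phase
    return (np.floor((width - phase) / clock_period) + 1).astype(int)  # edges at phase, phase+T, ...

width, T = 7.3, 1.0                    # pulse-height-to-width output equal to 7.3 clock periods
counts = mca_counts(width, T, 100_000)
print(counts.min(), counts.max())      # individual events land in channel 7 or 8 ...
print(counts.mean())                   # ... but the mean recovers ~7.3, the "fractional channel"
```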

  1. Fractional channel multichannel analyzer

    DOEpatents

    Brackenbush, Larry W.; Anderson, Gordon A.

    1994-01-01

    A multichannel analyzer incorporating the features of the present invention obtains the effect of fractional channels thus greatly reducing the number of actual channels necessary to record complex line spectra. This is accomplished by using an analog-to-digital converter in the asynchronous mode, i.e., the gate pulse from the pulse height-to-pulse width converter is not synchronized with the signal from a clock oscillator. This saves power and reduces the number of components required on the board to achieve the effect of radically expanding the number of channels without changing the circuit board.

  2. Separating heart and brain: on the reduction of physiological noise from multichannel functional near-infrared spectroscopy (fNIRS) signals

    NASA Astrophysics Data System (ADS)

    Bauernfeind, G.; Wriessnegger, S. C.; Daly, I.; Müller-Putz, G. R.

    2014-10-01

    Objective. Functional near-infrared spectroscopy (fNIRS) is an emerging technique for the in vivo assessment of functional activity of the cerebral cortex as well as in the field of brain-computer interface (BCI) research. A common challenge for the utilization of fNIRS in these areas is a stable and reliable investigation of the spatio-temporal hemodynamic patterns. However, the recorded patterns may be influenced and superimposed by signals generated from physiological processes, resulting in an inaccurate estimation of the cortical activity. Up to now only a few studies have investigated these influences, and even fewer attempts have been made to remove or reduce them. The present study aims to gain insights into the reduction of physiological rhythms in hemodynamic signals (oxygenated hemoglobin (oxy-Hb), deoxygenated hemoglobin (deoxy-Hb)). Approach. We introduce the use of three different signal processing approaches (spatial filtering, a common average reference (CAR) method; independent component analysis (ICA); and transfer function (TF) models) to reduce the influence of respiratory and blood pressure (BP) rhythms on the hemodynamic responses. Main results. All approaches produce large reductions in BP and respiration influences on the oxy-Hb signals and, therefore, improve the contrast-to-noise ratio (CNR). In contrast, for deoxy-Hb signals CAR and ICA did not improve the CNR. However, for the TF approach, a CNR improvement in deoxy-Hb can also be found. Significance. The present study investigates the application of different signal processing approaches to reduce the influences of physiological rhythms on the hemodynamic responses. In addition to the identification of the best signal processing method, we also show the importance of noise reduction in fNIRS data.
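
    Of the three approaches, the common average reference is the simplest to state: subtract the instantaneous across-channel mean so that components shared by all channels (systemic blood pressure and respiration rhythms) are attenuated. A minimal sketch, assuming the signals are arranged channels-by-samples:

```python
import numpy as np

def common_average_reference(hb):
    """hb: (n_channels, n_samples) oxy-Hb (or deoxy-Hb) time courses.
    Subtracting the instantaneous across-channel mean attenuates components shared
    by all channels, e.g. systemic blood-pressure and respiration rhythms."""
    return hb - hb.mean(axis=0, keepdims=True)
```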

  3. Multichannel analog temperature sensing system

    NASA Astrophysics Data System (ADS)

    Gribble, R.

    1985-08-01

    A multichannel system that protects the numerous and costly water-cooled magnet coils on the translation section of the FRX-C/T magnetic fusion experiment is described. The system comprises a thermistor for each coil, a constant current circuit for each thermistor, and a multichannel analog-to-digital converter interfaced to the computer.
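
    The per-channel conversion implied by this design is straightforward: with a constant current through the thermistor, the digitized voltage gives the resistance by Ohm's law, and a beta-model (or Steinhart-Hart) equation converts resistance to temperature. All constants below are hypothetical, not values from the FRX-C/T system.

```python
import numpy as np

# Hypothetical per-channel constants (not taken from the FRX-C/T instrumentation)
I_EXC = 100e-6              # constant excitation current, A
R0, T0 = 10_000.0, 298.15   # thermistor resistance (ohm) at reference temperature (K)
BETA = 3950.0               # beta coefficient of the thermistor, K

def adc_volts_to_celsius(v):
    """Convert a channel's digitized thermistor voltage to temperature (beta model)."""
    r = v / I_EXC                                      # Ohm's law with constant-current drive
    t_kelvin = 1.0 / (1.0 / T0 + np.log(r / R0) / BETA)
    return t_kelvin - 273.15

# e.g. monitor all coils and trip an interlock above a limit (values are made up)
readings = np.array([0.82, 0.95, 1.10])                # volts from the multichannel ADC
overheated = adc_volts_to_celsius(readings) > 60.0
```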

  4. Neurons Differentiated from Transplanted Stem Cells Respond Functionally to Acoustic Stimuli in the Awake Monkey Brain.

    PubMed

    Wei, Jing-Kuan; Wang, Wen-Chao; Zhai, Rong-Wei; Zhang, Yu-Hua; Yang, Shang-Chuan; Rizak, Joshua; Li, Ling; Xu, Li-Qi; Liu, Li; Pan, Ming-Ke; Hu, Ying-Zhou; Ghanemi, Abdelaziz; Wu, Jing; Yang, Li-Chuan; Li, Hao; Lv, Long-Bao; Li, Jia-Li; Yao, Yong-Gang; Xu, Lin; Feng, Xiao-Li; Yin, Yong; Qin, Dong-Dong; Hu, Xin-Tian; Wang, Zheng-Bo

    2016-07-26

    Here, we examine whether neurons differentiated from transplanted stem cells can integrate into the host neural network and function in awake animals, a goal of transplanted stem cell therapy in the brain. We have developed a technique in which a small "hole" is created in the inferior colliculus (IC) of rhesus monkeys, then stem cells are transplanted in situ to allow for investigation of their integration into the auditory neural network. We found that some transplanted cells differentiated into mature neurons and formed synaptic input/output connections with the host neurons. In addition, c-Fos expression increased significantly in the cells after acoustic stimulation, and multichannel recordings indicated IC specific tuning activities in response to auditory stimulation. These results suggest that the transplanted cells have the potential to functionally integrate into the host neural network. PMID:27425612

  5. Speech recognition for 40 patients receiving multichannel cochlear implants.

    PubMed

    Dowell, R C; Mecklenburg, D J; Clark, G M

    1986-10-01

    We collected data on 40 patients who received the Nucleus multichannel cochlear implant. Results were reviewed to determine if the coding strategy is effective in transmitting the intended speech features and to assess patient benefit in terms of communication skills. All patients demonstrated significant improvement over preoperative results with a hearing aid for both lipreading enhancement and speech recognition without lipreading. Of the patients, 50% demonstrated ability to understand connected discourse with auditory input only. For the 23 patients who were tested 12 months postoperatively, there was substantial improvement in open-set speech recognition. PMID:3755975

  6. Brain

    MedlinePlus

  7. Multichannel birefringent filter

    NASA Technical Reports Server (NTRS)

    Gouxiang, A.; Huefeng, H.

    1985-01-01

    A birefringent filter with a large field of view and no additional polarization is discussed. It plays an important role in observing the solar monochromatic image and the solar vector magnetic field, but it has only one channel. For simultaneous multichannel observations, the solar spectrograph is better than the birefringent filter. A multichannel birefringent filter is therefore proposed for a new telescope at the Huairou reservoir station of Beijing Observatory. By means of N polarizing beam splitters, (N+1) channels can be obtained; in principle the number of channels is unlimited, so the whole solar spectrum could be subdivided. Since the space in a telescope is limited, however, the number of usable channels is also limited. For the new telescope, 5 and 9 channels are being considered, covering the spectral range from 3800 Å to 7000 Å. Many lines are included in this range, for example H, K, H beta, 5324 Å, 5250 Å, 6302 Å, and H alpha, and some of these lines are suited to measuring solar velocity fields. The half width of each channel is determined according to the character of these lines. Moreover, in some channels a solid polarizing Michelson interferometer is considered for measuring the velocity field with 1 m/s accuracy. The advantages of the filter and the problems to be solved are listed.

  8. On-Line Statistical Segmentation of a Non-Speech Auditory Stream in Neonates as Demonstrated by Event-Related Brain Potentials

    ERIC Educational Resources Information Center

    Kudo, Noriko; Nonaka, Yulri; Mizuno, Noriko; Mizuno, Katsumi; Okanoya, Kazuo

    2011-01-01

    The ability to statistically segment a continuous auditory stream is one of the most important preparations for initiating language learning. Such ability is available to human infants at 8 months of age, as shown by a behavioral measurement. However, behavioral study alone cannot determine how early this ability is available. A recent study using…

  9. Estrogenic modulation of auditory processing: a vertebrate comparison

    PubMed Central

    Caras, Melissa L.

    2013-01-01

    Sex-steroid hormones are well-known regulators of vocal motor behavior in several organisms. A large body of evidence now indicates that these same hormones modulate processing at multiple levels of the ascending auditory pathway. The goal of this review is to provide a comparative analysis of the role of estrogens in vertebrate auditory function. Four major conclusions can be drawn from the literature: First, estrogens may influence the development of the mammalian auditory system. Second, estrogenic signaling protects the mammalian auditory system from noise- and age-related damage. Third, estrogens optimize auditory processing during periods of reproductive readiness in multiple vertebrate lineages. Finally, brain-derived estrogens can act locally to enhance auditory response properties in at least one avian species. This comparative examination may lead to a better appreciation of the role of estrogens in the processing of natural vocalizations and may provide useful insights toward alleviating auditory dysfunctions emanating from hormonal imbalances. PMID:23911849

  10. Electrophysiological measurement of human auditory function

    NASA Technical Reports Server (NTRS)

    Galambos, R.

    1975-01-01

    Contingent negative variations in the presence and amplitudes of brain potentials evoked by sound are considered. Evidence is presented that the evoked brain stem response to auditory stimuli is clearly related to brain events associated with cognitive processing of acoustic signals, since their properties depend upon where the listener directs his attention, whether the signal is an expected event or a surprise, and when a listened-for sound is finally heard.

  11. Multichannel signal enhancement

    DOEpatents

    Lewis, Paul S.

    1990-01-01

    A mixed adaptive filter is formulated for the signal processing problem where desired a priori signal information is not available. The formulation generates a least squares problem which enables the filter output to be calculated directly from an input data matrix. In one embodiment, a folded processor array enables bidirectional data flow to solve the recursive problem by back substitution without global communications. In another embodiment, a balanced processor array solves the recursive problem by forward elimination through the array. In a particular application to magnetoencephalography, the mixed adaptive filter enables an evoked response to an auditory stimulus to be identified from only a single trial.
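
    The patented mixed-filter formulation is not reproduced here, but the general idea of computing a least-squares filter output directly from an input data matrix can be sketched as regressing a primary channel on reference channels and keeping the residual, so interference correlated with the references is removed. A generic illustration only:

```python
import numpy as np

def ls_enhance(primary, references):
    """Regress the primary channel on the reference channels and keep the residual,
    removing interference that the references also picked up (generic illustration,
    not the patented mixed-filter formulation).
    primary: (n_samples,); references: (n_samples, n_refs)."""
    w, *_ = np.linalg.lstsq(references, primary, rcond=None)
    return primary - references @ w
```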

  12. Multichannel optical sensing device

    DOEpatents

    Selkowitz, S.E.

    1985-08-16

    A multichannel optical sensing device is disclosed, for measuring the outdoor sky luminance or illuminance or the luminance or illuminance distribution in a room, comprising a plurality of light receptors, an optical shutter matrix including a plurality of liquid crystal optical shutter elements operable by electrical control signals between light transmitting and light stopping conditions, fiber optical elements connected between the receptors and the shutter elements, a microprocessor based programmable control unit for selectively supplying control signals to the optical shutter elements in a programmable sequence, a photodetector including an optical integrating spherical chamber having an input port for receiving the light from the shutter matrix and at least one detector element in the spherical chamber for producing output signals corresponding to the light, and output units for utilizing the output signals including a storage unit having a control connection to the microprocessor based programmable control unit for storing the output signals under the sequence control of the programmable control unit.

  13. Multichannel optical sensing device

    DOEpatents

    Selkowitz, Stephen E.

    1990-01-01

    A multichannel optical sensing device is disclosed, for measuring the outdoor sky luminance or illuminance or the luminance or illuminance distribution in a room, comprising a plurality of light receptors, an optical shutter matrix including a plurality of liquid crystal optical shutter elements operable by electrical control signals between light transmitting and light stopping conditions, fiber optic elements connected between the receptors and the shutter elements, a microprocessor based programmable control unit for selectively supplying control signals to the optical shutter elements in a programmable sequence, a photodetector including an optical integrating spherical chamber having an input port for receiving the light from the shutter matrix and at least one detector element in the spherical chamber for producing output signals corresponding to the light, and output units for utilizing the output signals including a storage unit having a control connection to the microprocessor based programmable control unit for storing the output signals under the sequence control of the programmable control unit.

  14. Software Configurable Multichannel Transceiver

    NASA Technical Reports Server (NTRS)

    Freudinger, Lawrence C.; Cornelius, Harold; Hickling, Ron; Brooks, Walter

    2009-01-01

    Emerging test instrumentation and test scenarios increasingly require network communication to manage complexity. Adapting wireless communication infrastructure to accommodate challenging testing needs can benefit from reconfigurable radio technology. A fundamental requirement for a software-definable radio system is independence from carrier frequencies, one of the radio components that to date has seen only limited progress toward programmability. This paper overviews an ongoing project to validate the viability of a promising chipset that performs conversion of radio frequency (RF) signals directly into digital data for the wireless receiver and, for the transmitter, converts digital data into RF signals. The Software Configurable Multichannel Transceiver (SCMT) enables four transmitters and four receivers in a single unit the size of a commodity disk drive, programmable for any frequency band between 1 MHz and 6 GHz.

  15. McGurk illusion recalibrates subsequent auditory perception.

    PubMed

    Lüttke, Claudia S; Ekman, Matthias; van Gerven, Marcel A J; de Lange, Floris P

    2016-01-01

    Visual information can alter auditory perception. This is clearly illustrated by the well-known McGurk illusion, where an auditory /aba/ and a visual /aga/ are merged to the percept of 'ada'. It is less clear, however, whether such a change in perception may recalibrate subsequent perception. Here we asked whether the altered auditory perception due to the McGurk illusion affects subsequent auditory perception, i.e. whether this process of fusion may cause a recalibration of the auditory boundaries between phonemes. Participants categorized auditory and audiovisual speech stimuli as /aba/, /ada/ or /aga/ while activity patterns in their auditory cortices were recorded using fMRI. Interestingly, following a McGurk illusion, an auditory /aba/ was more often misperceived as 'ada'. Furthermore, we observed a neural counterpart of this recalibration in the early auditory cortex. When the auditory input /aba/ was perceived as 'ada', activity patterns bore stronger resemblance to activity patterns elicited by /ada/ sounds than when they were correctly perceived as /aba/. Our results suggest that upon experiencing the McGurk illusion, the brain shifts the neural representation of an /aba/ sound towards /ada/, culminating in a recalibration in perception of subsequent auditory input. PMID:27611960

  16. McGurk illusion recalibrates subsequent auditory perception

    PubMed Central

    Lüttke, Claudia S.; Ekman, Matthias; van Gerven, Marcel A. J.; de Lange, Floris P.

    2016-01-01

    Visual information can alter auditory perception. This is clearly illustrated by the well-known McGurk illusion, where an auditory /aba/ and a visual /aga/ are merged to the percept of ‘ada’. It is less clear, however, whether such a change in perception may recalibrate subsequent perception. Here we asked whether the altered auditory perception due to the McGurk illusion affects subsequent auditory perception, i.e. whether this process of fusion may cause a recalibration of the auditory boundaries between phonemes. Participants categorized auditory and audiovisual speech stimuli as /aba/, /ada/ or /aga/ while activity patterns in their auditory cortices were recorded using fMRI. Interestingly, following a McGurk illusion, an auditory /aba/ was more often misperceived as ‘ada’. Furthermore, we observed a neural counterpart of this recalibration in the early auditory cortex. When the auditory input /aba/ was perceived as ‘ada’, activity patterns bore stronger resemblance to activity patterns elicited by /ada/ sounds than when they were correctly perceived as /aba/. Our results suggest that upon experiencing the McGurk illusion, the brain shifts the neural representation of an /aba/ sound towards /ada/, culminating in a recalibration in perception of subsequent auditory input. PMID:27611960

  17. Auditory memory function in expert chess players

    PubMed Central

    Fattahi, Fariba; Geshani, Ahmad; Jafari, Zahra; Jalaie, Shohreh; Salman Mahini, Mona

    2015-01-01

    Background: Chess is a game that involves many aspects of high level cognition such as memory, attention, focus and problem solving. Long term practice of chess can improve cognitive performance and behavioral skills. Auditory memory, like other behavioral skills, can be influenced by strengthening processes following long-term chess playing, because of common processing pathways in the brain. The purpose of this study was to evaluate the auditory memory function of expert chess players using the Persian version of the dichotic auditory-verbal memory test. Methods: The Persian version of the dichotic auditory-verbal memory test was performed for 30 expert chess players aged 20-35 years and 30 non chess players who were matched by different conditions; the participants in both groups were randomly selected. The performance of the two groups was compared by independent samples t-test using SPSS version 21. Results: The mean score of the dichotic auditory-verbal memory test between the two groups, expert chess players and non-chess players, revealed a significant difference (p ≤ 0.001). The difference between the ears scores for expert chess players (p = 0.023) and non-chess players (p = 0.013) was significant. Gender had no effect on the test results. Conclusion: Auditory memory function in expert chess players was significantly better compared to non-chess players. It seems that increased auditory memory function is related to strengthening of cognitive performance due to playing chess for a long time. PMID:26793666

  18. On-line statistical segmentation of a non-speech auditory stream in neonates as demonstrated by event-related brain potentials.

    PubMed

    Kudo, Noriko; Nonaka, Yulri; Mizuno, Noriko; Mizuno, Katsumi; Okanoya, Kazuo

    2011-09-01

    The ability to statistically segment a continuous auditory stream is one of the most important preparations for initiating language learning. Such ability is available to human infants at 8 months of age, as shown by a behavioral measurement. However, behavioral study alone cannot determine how early this ability is available. A recent study using measurements of event-related potential (ERP) revealed that neonates are able to detect statistical boundaries within auditory streams of speech syllables. Extending this line of research will allow us to better understand the cognitive preparation for language acquisition that is available to neonates. The aim of the present study was to examine the domain-generality of such statistical segmentation. Neonates were presented with nonlinguistic tone sequences composed of four tritone units, each consisting of three semitones extracted from one octave, for two 5-minute sessions. Only the first tone of each unit evoked a significant positivity in the frontal area during the second session, but not in the first session. This result suggests that the general ability to distinguish units in an auditory stream by statistical information is activated at birth and is probably innately prepared in humans. PMID:21884325
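
    The statistical information exploited in such segmentation is usually formalized as forward transitional probabilities, P(B|A) = count(AB) / count(A); boundaries between units fall where this probability dips. A toy sketch on a symbolic tone stream (labels and threshold are arbitrary):

```python
from collections import Counter

def transition_probs(stream):
    """P(next | current) estimated from bigram counts of a symbolic stream (e.g. tone labels)."""
    bigrams = Counter(zip(stream, stream[1:]))
    unigrams = Counter(stream[:-1])
    return {(a, b): c / unigrams[a] for (a, b), c in bigrams.items()}

def likely_boundaries(stream, probs, threshold=0.9):
    """Positions where the forward transitional probability dips below the threshold."""
    return [i + 1 for i, pair in enumerate(zip(stream, stream[1:])) if probs[pair] < threshold]

# Toy stream built from two three-tone units, "ABC" and "DEF" (labels are arbitrary).
stream = list("ABCDEFDEFABCABCDEF")
probs = transition_probs(stream)
print(likely_boundaries(stream, probs))   # [3, 6, 9, 12, 15]: every unit boundary, no within-unit positions
```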

  19. Reconstructing speech from human auditory cortex.

    PubMed

    Pasley, Brian N; David, Stephen V; Mesgarani, Nima; Flinker, Adeen; Shamma, Shihab A; Crone, Nathan E; Knight, Robert T; Chang, Edward F

    2012-01-01

    How the human auditory system extracts perceptually relevant acoustic features of speech is unknown. To address this question, we used intracranial recordings from nonprimary auditory cortex in the human superior temporal gyrus to determine what acoustic information in speech sounds can be reconstructed from population neural activity. We found that slow and intermediate temporal fluctuations, such as those corresponding to syllable rate, were accurately reconstructed using a linear model based on the auditory spectrogram. However, reconstruction of fast temporal fluctuations, such as syllable onsets and offsets, required a nonlinear sound representation based on temporal modulation energy. Reconstruction accuracy was highest within the range of spectro-temporal fluctuations that have been found to be critical for speech intelligibility. The decoded speech representations allowed readout and identification of individual words directly from brain activity during single trial sound presentations. These findings reveal neural encoding mechanisms of speech acoustic parameters in higher order human auditory cortex. PMID:22303281
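
    The linear reconstruction model referred to here maps time-lagged population activity to the auditory spectrogram; a minimal ridge-regression sketch is shown below, with lags, regularization, and array shapes chosen purely for illustration (the published model's exact features and its nonlinear modulation-energy representation are not reproduced).

```python
import numpy as np

def lagged_design(neural, lags):
    """Stack time-lagged copies of multichannel activity: (n_times, n_electrodes) ->
    (n_times, n_electrodes * n_lags). Edge samples wrap around, which is fine for a sketch."""
    return np.concatenate([np.roll(neural, lag, axis=0) for lag in lags], axis=1)

def fit_reconstruction(neural, spectrogram, lags=range(10), alpha=1.0):
    """Ridge regression from lagged neural activity to each spectrogram frequency band."""
    X = lagged_design(neural, lags)
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ spectrogram)
    return w                                              # (n_electrodes * n_lags, n_freq_bands)

def reconstruct(neural, weights, lags=range(10)):
    return lagged_design(neural, lags) @ weights          # estimated spectrogram
```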

  20. Reconstructing Speech from Human Auditory Cortex

    PubMed Central

    Pasley, Brian N.; David, Stephen V.; Mesgarani, Nima; Flinker, Adeen; Shamma, Shihab A.; Crone, Nathan E.; Knight, Robert T.; Chang, Edward F.

    2012-01-01

    How the human auditory system extracts perceptually relevant acoustic features of speech is unknown. To address this question, we used intracranial recordings from nonprimary auditory cortex in the human superior temporal gyrus to determine what acoustic information in speech sounds can be reconstructed from population neural activity. We found that slow and intermediate temporal fluctuations, such as those corresponding to syllable rate, were accurately reconstructed using a linear model based on the auditory spectrogram. However, reconstruction of fast temporal fluctuations, such as syllable onsets and offsets, required a nonlinear sound representation based on temporal modulation energy. Reconstruction accuracy was highest within the range of spectro-temporal fluctuations that have been found to be critical for speech intelligibility. The decoded speech representations allowed readout and identification of individual words directly from brain activity during single trial sound presentations. These findings reveal neural encoding mechanisms of speech acoustic parameters in higher order human auditory cortex. PMID:22303281

  1. Multichannel SQUID systems for brain research

    SciTech Connect

    Ahonen, A.I.; Hamalainen, M.S.; Kajola, M.J.; Knuutila, J.E.F.; Lounasmaa, O.V.; Simola, J.T.; Vilkman, V.A. (Low Temperature Lab.); Tesche, C.D. (Thomas J. Watson Research Center)

    1991-03-01

    This paper reviews basic principles of magnetoencephalography (MEG) and neuromagnetic instrumentation. The authors' 24-channel system, based on planar gradiometer coils and dc-SQUIDs, is then described. Finally, recent MEG experiments on human somatotopy and focal epilepsy, carried out in the authors' laboratory, are presented.

  2. Auditory spatial processing in Alzheimer’s disease

    PubMed Central

    Golden, Hannah L.; Nicholas, Jennifer M.; Yong, Keir X. X.; Downey, Laura E.; Schott, Jonathan M.; Mummery, Catherine J.; Crutch, Sebastian J.

    2015-01-01

    The location and motion of sounds in space are important cues for encoding the auditory world. Spatial processing is a core component of auditory scene analysis, a cognitively demanding function that is vulnerable in Alzheimer’s disease. Here we designed a novel neuropsychological battery based on a virtual space paradigm to assess auditory spatial processing in patient cohorts with clinically typical Alzheimer’s disease (n = 20) and its major variant syndrome, posterior cortical atrophy (n = 12) in relation to healthy older controls (n = 26). We assessed three dimensions of auditory spatial function: externalized versus non-externalized sound discrimination, moving versus stationary sound discrimination and stationary auditory spatial position discrimination, together with non-spatial auditory and visual spatial control tasks. Neuroanatomical correlates of auditory spatial processing were assessed using voxel-based morphometry. Relative to healthy older controls, both patient groups exhibited impairments in detection of auditory motion, and stationary sound position discrimination. The posterior cortical atrophy group showed greater impairment for auditory motion processing and the processing of a non-spatial control complex auditory property (timbre) than the typical Alzheimer’s disease group. Voxel-based morphometry in the patient cohort revealed grey matter correlates of auditory motion detection and spatial position discrimination in right inferior parietal cortex and precuneus, respectively. These findings delineate auditory spatial processing deficits in typical and posterior Alzheimer’s disease phenotypes that are related to posterior cortical regions involved in both syndromic variants and modulated by the syndromic profile of brain degeneration. Auditory spatial deficits contribute to impaired spatial awareness in Alzheimer’s disease and may constitute a novel perceptual model for probing brain network disintegration across the Alzheimer

  3. Digital restoration of multichannel images

    NASA Technical Reports Server (NTRS)

    Galatsanos, Nikolas P.; Chin, Roland T.

    1989-01-01

    The Wiener solution of a multichannel restoration scheme is presented. Using matrix diagonalization and block-Toeplitz to block-circulant approximation, the inversion of the multichannel, linear space-invariant imaging system becomes feasible by utilizing a fast iterative matrix inversion procedure. The restoration uses both the within-channel (spatial) and between-channel (spectral) correlation; hence, the restored result is a better estimate than that produced by independent channel restoration. Simulations are also presented.
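    Under the block-circulant approximation mentioned above, the multichannel Wiener filter decouples into a small matrix problem per spatial frequency, which is what makes the inversion tractable. The sketch below illustrates that idea for a two-channel toy image with assumed channel-wise blur and white noise; the blur kernels, noise level, and the cross-channel signal spectra (here estimated from the known original, purely for demonstration) are all placeholders, not quantities from the paper.

        import numpy as np

        rng = np.random.default_rng(1)
        N, K = 64, 2                       # image size (N x N) and number of channels (toy sizes)

        # Toy multichannel image with strong between-channel correlation.
        base = rng.standard_normal((N, N))
        x = np.stack([base, 0.8 * base + 0.2 * rng.standard_normal((N, N))])   # shape (K, N, N)

        def blur_otf(width):
            """Optical transfer function of a simple box blur (assumed degradation)."""
            h = np.zeros((N, N))
            h[:width, :width] = 1.0 / width**2
            return np.fft.fft2(h)

        H = np.stack([blur_otf(3), blur_otf(5)])          # per-channel transfer functions
        noise_var = 0.01

        # Degraded observations: channel-wise blur plus white noise.
        y = np.real(np.fft.ifft2(H * np.fft.fft2(x))) + np.sqrt(noise_var) * rng.standard_normal((K, N, N))

        Xf, Yf = np.fft.fft2(x), np.fft.fft2(y)
        x_hat_f = np.zeros_like(Yf)
        for u in range(N):
            for v in range(N):
                Hf = np.diag(H[:, u, v])                             # K x K blur matrix at this frequency
                Sxx = np.outer(Xf[:, u, v], np.conj(Xf[:, u, v]))    # K x K signal cross-spectrum (toy estimate)
                Snn = noise_var * N * N * np.eye(K)                  # white-noise spectrum
                W = Sxx @ Hf.conj().T @ np.linalg.pinv(Hf @ Sxx @ Hf.conj().T + Snn)
                x_hat_f[:, u, v] = W @ Yf[:, u, v]                   # multichannel Wiener estimate

        x_hat = np.real(np.fft.ifft2(x_hat_f))
        print("restoration MSE:", round(float(np.mean((x_hat - x) ** 2)), 4))

    Because the filter at each frequency mixes both channels, the estimate exploits between-channel (spectral) correlation as well as within-channel structure, which is the stated advantage over independent channel restoration.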

  4. Predictive coding of visual-auditory and motor-auditory events: An electrophysiological study.

    PubMed

    Stekelenburg, Jeroen J; Vroomen, Jean

    2015-11-11

    The amplitude of auditory components of the event-related potential (ERP) is attenuated when sounds are self-generated compared to externally generated sounds. This effect has been ascribed to internal forward models predicting the sensory consequences of one's own motor actions. Auditory potentials are also attenuated when a sound is accompanied by a video of anticipatory visual motion that reliably predicts the sound. Here, we investigated whether the neural underpinnings of prediction of upcoming auditory stimuli are similar for motor-auditory (MA) and visual-auditory (VA) events using a stimulus omission paradigm. In the MA condition, a finger tap triggered the sound of a handclap, whereas in the VA condition the same sound was accompanied by a video showing the handclap. In both conditions, the auditory stimulus was omitted in either 50% or 12% of the trials. These auditory omissions induced early and mid-latency ERP components (oN1 and oN2, presumably reflecting prediction and prediction error) and subsequent higher-order error evaluation processes. The oN1 and oN2 of MA and VA were alike in amplitude, topography, and neural sources, even though the origin of the prediction lies in different brain areas (motor versus visual cortex). This suggests that MA and VA predictions activate a sensory template of the sound in auditory cortex. This article is part of a Special Issue entitled SI: Prediction and Attention. PMID:25641042

  5. A unique cellular scaling rule in the avian auditory system.

    PubMed

    Corfield, Jeremy R; Long, Brendan; Krilow, Justin M; Wylie, Douglas R; Iwaniuk, Andrew N

    2016-06-01

    Although it is clear that neural structures scale with body size, the mechanisms of this relationship are not well understood. Several recent studies have shown that the relationship between neuron numbers and brain (or brain region) size is not only different across mammalian orders, but also across auditory and visual regions within the same brains. Among birds, similar cellular scaling rules have not been examined in any detail. Here, we examine the scaling of auditory structures in birds and show that the scaling rules that have been established in the mammalian auditory pathway do not necessarily apply to birds. In galliforms, neuronal densities decrease with increasing brain size, suggesting that auditory brainstem structures increase in size faster than neurons are added; smaller brains have relatively more neurons than larger brains. The cellular scaling rules that apply to auditory brainstem structures in galliforms are, therefore, different from those found in the primate auditory pathway. It is likely that the factors driving this difference are associated with the anatomical specializations required for sound perception in birds, although there is a decoupling of neuron numbers in brain structures and hair cell numbers in the basilar papilla. This study provides significant insight into the allometric scaling of neural structures in birds and improves our understanding of the rules that govern neural scaling across vertebrates. PMID:26002617

  6. Auditory Imagery: Empirical Findings

    ERIC Educational Resources Information Center

    Hubbard, Timothy L.

    2010-01-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d)…

  7. Auditory Training for Central Auditory Processing Disorder.

    PubMed

    Weihing, Jeffrey; Chermak, Gail D; Musiek, Frank E

    2015-11-01

    Auditory training (AT) is an important component of rehabilitation for patients with central auditory processing disorder (CAPD). The present article identifies and describes aspects of AT as they relate to applications in this population. A description of the types of auditory processes along with information on relevant AT protocols that can be used to address these specific deficits is included. Characteristics and principles of effective AT procedures also are detailed in light of research that reflects on their value. Finally, research investigating AT in populations who show CAPD or present with auditory complaints is reported. Although efficacy data in this area are still emerging, current findings support the use of AT for treatment of auditory difficulties. PMID:27587909

  8. The human auditory evoked response

    NASA Technical Reports Server (NTRS)

    Galambos, R.

    1974-01-01

    Figures are presented of computer-averaged auditory evoked responses (AERs) that point to the existence of a completely endogenous brain event. A series of regular clicks or tones was administered to the ear, and 'odd-balls' of different intensity or frequency, respectively, were included. Subjects were asked either to ignore the sounds (to read or do something else) or to attend to the stimuli. When they listened and counted the odd-balls, a P3 wave occurred at 300 msec after the stimulus. When the odd-balls consisted of omitted clicks or tone bursts, a similar response was observed. This could not have come from the auditory nerve, but only from the cortex. It is evidence of recognition, a conscious process.

  9. Reality of auditory verbal hallucinations

    PubMed Central

    Valkonen-Korhonen, Minna; Holi, Matti; Therman, Sebastian; Lehtonen, Johannes; Hari, Riitta

    2009-01-01

    Distortion of the sense of reality, actualized in delusions and hallucinations, is the key feature of psychosis but the underlying neuronal correlates remain largely unknown. We studied 11 highly functioning subjects with schizophrenia or schizoaffective disorder while they rated the reality of auditory verbal hallucinations (AVH) during functional magnetic resonance imaging (fMRI). The subjective reality of AVH correlated strongly and specifically with the hallucination-related activation strength of the inferior frontal gyri (IFG), including the Broca's language region. Furthermore, how real the hallucination that subjects experienced was depended on the hallucination-related coupling between the IFG, the ventral striatum, the auditory cortex, the right posterior temporal lobe, and the cingulate cortex. Our findings suggest that the subjective reality of AVH is related to motor mechanisms of speech comprehension, with contributions from sensory and salience-detection-related brain regions as well as circuitries related to self-monitoring and the experience of agency. PMID:19620178

  10. Electrophysiological measurement of human auditory function

    NASA Technical Reports Server (NTRS)

    Galambos, R.

    1975-01-01

    Knowledge of the human auditory evoked response is reviewed, including methods of determining this response, the way particular changes in the stimulus are coupled to specific changes in the response, and how the state of mind of the listener will influence the response. Important practical applications of this basic knowledge are discussed. Measurement of the brainstem evoked response, for instance, can state unequivocally how well the peripheral auditory apparatus functions. It might then be developed into a useful hearing test, especially for infants and preverbal or nonverbal children. Clinical applications of measuring the brain waves evoked 100 msec and later after the auditory stimulus are undetermined. These waves are clearly related to brain events associated with cognitive processing of acoustic signals, since their properties depend upon where the listener directs his attention and whether, and for how long, he expects the signal.

  11. Multichannel demultiplexer-demodulator

    NASA Technical Reports Server (NTRS)

    Courtois, Hector; Sherry, Mike; Cangiane, Peter; Caso, Greg

    1993-01-01

    One of the critical satellite technologies in a meshed VSAT (very small aperture terminal) satellite communication network utilizing FDMA (frequency division multiple access) uplinks is a multichannel demultiplexer/demodulator (MCDD). TRW Electronic Systems Group developed a proof-of-concept (POC) MCDD using advanced digital technologies. This POC model demonstrates the capability of demultiplexing and demodulating multiple low to medium data rate FDMA uplinks, with potential for expansion to demultiplexing and demodulating hundreds to thousands of narrowband uplinks. The TRW approach uses baseband sampling followed by successive wideband and narrowband channelizers, with each channelizer feeding into a multirate, time-shared demodulator. A full-scale MCDD would consist of an 8-bit A/D sampling at 92.16 MHz, four wideband channelizers capable of demultiplexing eight wideband channels, thirty-two narrowband channelizers capable of demultiplexing one wideband signal into 32 narrowband channels, and thirty-two multirate demodulators. The POC model consists of an 8-bit A/D sampling at 23.04 MHz, one wideband channelizer, 16 narrowband channelizers, and three multirate demodulators. The implementation losses of the wideband and narrowband channels are 0.3 dB and 0.75 dB in Eb/N0 at a 10^-7 bit error rate, respectively.
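    As a highly simplified illustration of the demultiplexing step performed by such channelizers (not TRW's implementation), the sketch below separates two FDMA carriers from a sampled composite signal by mixing each channel to baseband, low-pass filtering, and decimating. The sample rate, carrier frequencies, symbol rate, and filter parameters are invented for the example.

        import numpy as np
        from scipy.signal import firwin, lfilter

        fs = 92_160.0                       # composite sample rate (Hz), arbitrary toy value
        t = np.arange(20_000) / fs

        # Two FDMA "uplinks": BPSK-like tones at different carrier frequencies (toy signals).
        carriers = [10_000.0, 25_000.0]
        rng = np.random.default_rng(0)
        symbols = [np.repeat(rng.choice([-1.0, 1.0], size=200), 100) for _ in carriers]
        composite = sum(sym * np.cos(2 * np.pi * fc * t) for sym, fc in zip(symbols, carriers))
        composite += 0.05 * rng.standard_normal(t.size)

        def channelize(x, fc, bandwidth=2_000.0, decim=16):
            """Shift one FDMA channel to baseband, low-pass filter it, and decimate."""
            baseband = x * np.exp(-2j * np.pi * fc * t)          # complex mix to DC
            taps = firwin(numtaps=129, cutoff=bandwidth, fs=fs)  # narrowband low-pass filter
            filtered = lfilter(taps, 1.0, baseband)
            return filtered[::decim]

        channels = [channelize(composite, fc) for fc in carriers]
        print("per-channel output length:", channels[0].size)

    In a real MCDD the per-channel mixing and filtering would be replaced by an efficient shared structure (for example, a polyphase filter bank), and each decimated channel would then be handed to the multirate demodulator.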

  12. The Brain As a Mixer, I. Preliminary Literature Review: Auditory Integration. Studies in Language and Language Behavior, Progress Report Number VII.

    ERIC Educational Resources Information Center

    Semmel, Melvyn I.; And Others

    Methods to evaluate central hearing deficiencies and to localize brain damage are reviewed beginning with Bocca who showed that patients with temporal lobe tumors made significantly lower discrimination scores in the ear opposite the tumor when speech signals were distorted. Tests were devised to attempt to pinpoint brain damage on the basis of…

  13. Conceptual priming for realistic auditory scenes and for auditory words.

    PubMed

    Frey, Aline; Aramaki, Mitsuko; Besson, Mireille

    2014-02-01

    Two experiments were conducted using both behavioral and Event-Related brain Potentials methods to examine conceptual priming effects for realistic auditory scenes and for auditory words. Prime and target sounds were presented in four stimulus combinations: Sound-Sound, Word-Sound, Sound-Word and Word-Word. Within each combination, targets were conceptually related to the prime, unrelated or ambiguous. In Experiment 1, participants were asked to judge whether the primes and targets fit together (explicit task) and in Experiment 2 they had to decide whether the target was typical or ambiguous (implicit task). In both experiments and in the four stimulus combinations, reaction times and/or error rates were longer/higher and the N400 component was larger to ambiguous targets than to conceptually related targets, thereby pointing to a common conceptual system for processing auditory scenes and linguistic stimuli in both explicit and implicit tasks. However, fine-grained analyses also revealed some differences between experiments and conditions in scalp topography and duration of the priming effects possibly reflecting differences in the integration of perceptual and cognitive attributes of linguistic and nonlinguistic sounds. These results have clear implications for the building-up of virtual environments that need to convey meaning without words. PMID:24378910

  14. Aging and auditory site of lesion.

    PubMed

    Otto, W C; McCandless, G A

    1982-01-01

    The purpose of this study was to examine and quantify the functional auditory problems of presbycusis through a battery of recently developed diagnostic tests and to evaluate the usefulness of these tests with an elderly population. Diagnostic measures used were impedance measures, speech discrimination tests, synthetic sentence identification, compressed speech, two measures of tone decay, the short increment sensitivity index, a digit span test, and auditory brain stem response audiometry. Significant differences were found between scores for elderly subjects and those of young subjects who had similar audiograms. Use of the Metz test as an objective measure of recruitment yielded results suggesting a higher incidence of recruitment than evidenced by previous studies using loudness balancing procedures. The Olsen-Noffsinger procedure of quantifying tone decay revealed a greater difference between age groups than did the Suprathreshold Adaptation Test. Synthetic sentence identification revealed the most consistent age effect among the tests of central auditory function. Auditory brain stem response audiometry revealed several examples of abnormally long interpeak latencies. It is concluded that there is both behavioral and electrophysiological evidence of central and peripheral auditory disorder frequently accompanying senescence. PMID:7095318

  15. A wireless multichannel EEG recording platform.

    PubMed

    Filipe, S; Charvet, G; Foerster, M; Porcherot, J; Bêche, J F; Bonnet, S; Audebert, P; Régis, G; Zongo, B; Robinet, S; Condemine, C; Mestais, C; Guillemaud, R

    2011-01-01

    A wireless multichannel data acquisition system is being designed for electroencephalography (EEG) recording. The system is based on a custom integrated circuit (ASIC) for signal conditioning, amplification, and digitization, and on commercial components for RF transmission. It supports the RF transmission of a 32-channel EEG recording sampled at 1 kHz with a 12-bit resolution. The RF communication uses the MICS band (Medical Implant Communication Service) at 402-405 MHz. This integration is a first step towards a lightweight EEG cap for Brain Computer Interface (BCI) studies. Here, we present the platform architecture and its submodules. In vivo validations are presented with noise characterization and wireless data transfer measurements. PMID:22255783
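    For context, the raw payload implied by the configuration reported above (32 channels sampled at 1 kHz with 12-bit resolution) can be worked out directly; the framing-overhead figure in the sketch is an assumption added only to show how protocol overhead would inflate the on-air rate, not a number from the paper.

        channels = 32
        sample_rate_hz = 1_000
        bits_per_sample = 12

        payload_bps = channels * sample_rate_hz * bits_per_sample
        print(f"raw EEG payload: {payload_bps / 1e3:.0f} kbit/s")   # 384 kbit/s

        # Hypothetical framing/protocol overhead of 20% (illustrative assumption).
        overhead = 0.20
        print(f"on-air rate with overhead: {payload_bps * (1 + overhead) / 1e3:.0f} kbit/s")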

  16. 40 Hz auditory steady state response to linguistic features of stimuli during auditory hallucinations.

    PubMed

    Ying, Jun; Yan, Zheng; Gao, Xiao-rong

    2013-10-01

    The auditory steady-state response (ASSR) may reflect activity from different regions of the brain, depending on the modulation frequency used. In general, responses induced by low rates (≤40 Hz) emanate mostly from central structures of the brain, and responses at high rates (≥80 Hz) emanate mostly from the peripheral auditory nerve or brainstem structures. In addition, the gamma-band ASSR (30-90 Hz) has been reported to play an important role in working memory, speech understanding, and recognition. This paper investigated the 40 Hz ASSR evoked by modulated speech and reversed speech. The speech consisted of Chinese phrases, and the noise-like reversed speech was obtained by temporally reversing the speech. Both auditory stimuli were modulated at a frequency of 40 Hz. Ten healthy subjects and 5 patients with hallucination symptoms participated in the experiment. Results showed a reduction in the left auditory cortex response when healthy subjects listened to the reversed speech compared with the speech. In contrast, when the patients who experienced auditory hallucinations listened to the reversed speech, the left-hemisphere auditory cortex responded more actively. The ASSR results were consistent with the behavioral results of the patients. Therefore, the gamma-band ASSR is expected to be helpful for rapid and objective diagnosis of hallucination in the clinic. PMID:24142731
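    A common way to quantify the 40 Hz ASSR, broadly in line with the approach described above, is to estimate spectral power at the 40 Hz modulation frequency of the recorded response and compare it with the power in neighboring frequency bins. The sketch below does this for a simulated single-channel epoch; the sampling rate, epoch length, and signal-to-noise level are arbitrary assumptions.

        import numpy as np

        fs = 1_000.0                     # sampling rate (Hz), assumed
        duration = 2.0                   # epoch length (s), assumed
        t = np.arange(int(fs * duration)) / fs

        rng = np.random.default_rng(0)
        # Simulated EEG epoch: weak 40 Hz steady-state response buried in noise.
        eeg = 0.5 * np.sin(2 * np.pi * 40.0 * t + 0.3) + rng.standard_normal(t.size)

        spectrum = np.fft.rfft(eeg * np.hanning(t.size))
        freqs = np.fft.rfftfreq(t.size, d=1 / fs)

        bin_40 = np.argmin(np.abs(freqs - 40.0))
        neighbours = np.r_[bin_40 - 10:bin_40 - 2, bin_40 + 3:bin_40 + 11]   # nearby bins as noise estimate

        assr_power = np.abs(spectrum[bin_40]) ** 2
        noise_power = np.mean(np.abs(spectrum[neighbours]) ** 2)
        print(f"40 Hz SNR: {10 * np.log10(assr_power / noise_power):.1f} dB")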

  17. Multichannel Coding of Applause Signals

    NASA Astrophysics Data System (ADS)

    Hotho, Gerard; van de Par, Steven; Breebaart, Jeroen

    2007-12-01

    We develop a parametric multichannel audio codec dedicated to coding signals consisting of a dense series of transient-type events. These signals, of which applause is a typical example, are known to be problematic for such audio codecs. The codec design is based on preservation of both timbre and transient-type event density. It combines very low complexity with a low parameter bit rate (0.2 kbps). In a formal listening test, we compared the proposed codec to the recently standardised MPEG Surround multichannel codec, with an associated parameter bit rate of 9 kbps. We found the new codec to have a significantly higher audio quality than the MPEG Surround codec for the two multichannel applause signals under test. Though this seems promising, the technique presented is not fully mature, for example because issues related to integration of the proposed codec into the MPEG Surround codec were not addressed.

  18. Electrophysiological correlates of auditory change detection and change deafness in complex auditory scenes.

    PubMed

    Puschmann, Sebastian; Sandmann, Pascale; Ahrens, Janina; Thorne, Jeremy; Weerda, Riklef; Klump, Georg; Debener, Stefan; Thiel, Christiane M

    2013-07-15

    Change deafness describes the failure to perceive even intense changes within complex auditory input, if the listener does not attend to the changing sound. Remarkably, previous psychophysical data provide evidence that this effect occurs independently of successful stimulus encoding, indicating that undetected changes are processed to some extent in auditory cortex. Here we investigated cortical representations of detected and undetected auditory changes using electroencephalographic (EEG) recordings and a change deafness paradigm. We applied a one-shot change detection task, in which participants listened successively to three complex auditory scenes, each of them consisting of six simultaneously presented auditory streams. Listeners had to decide whether all scenes were identical or whether the pitch of one stream was changed between the last two presentations. Our data show significantly increased middle-latency Nb responses for both detected and undetected changes as compared to no-change trials. In contrast, only successfully detected changes were associated with a later mismatch response in auditory cortex, followed by increased N2, P3a and P3b responses, originating from hierarchically higher non-sensory brain regions. These results strengthen the view that undetected changes are successfully encoded at sensory level in auditory cortex, but fail to trigger later change-related cortical responses that lead to conscious perception of change. PMID:23466938

  19. Studying brain function with near-infrared spectroscopy concurrently with electroencephalography

    NASA Astrophysics Data System (ADS)

    Tong, Y.; Rooney, E. J.; Bergethon, P. R.; Martin, J. M.; Sassaroli, A.; Ehrenberg, B. L.; Van Toi, Vo; Aggarwal, P.; Ambady, N.; Fantini, S.

    2005-04-01

    Near-infrared spectroscopy (NIRS) has been used for functional brain imaging by employing properly designed source-detector matrices. We demonstrate that by embedding a NIRS source-detector matrix within an electroencephalography (EEG) standard multi-channel cap, we can perform functional brain mapping of hemodynamic response and neuronal response simultaneously. In this study, the P300 endogenous evoked response was generated in human subjects using an auditory odd-ball paradigm while concurrently monitoring the hemodynamic response both spatially and temporally with NIRS. The electrical measurements showed the localization of evoked potential P300, which appeared around 320 ms after the odd-ball stimulus. The NIRS measurements demonstrate a hemodynamic change in the fronto-temporal cortex a few seconds after the appearance of P300.

  20. Multichannel time-slot permuters

    NASA Astrophysics Data System (ADS)

    Jordan, Harry F.; Lee, Kyungsook Y.; Lee, Daeshik

    1993-02-01

    We consider the general switching problem known as time-space-time domain permutations in telecommunications. We present a new set of multichannel time-slot permuters for L parallel frames of M time slots (L = 2^l, M = 2^m). The multichannel time-slot permuters are obtained by combining L x L spatial networks and time-slot permuters for a frame of M time slots. In this paper, the Benes network, the Batcher sorter, and the Lambda network are considered as spatial networks, together with their counterparts, the RJS time-slot permuter, the S time-slot sorter, and the Lambda time-slot permuter.

  1. Brain responses and looking behavior during audiovisual speech integration in infants predict auditory speech comprehension in the second year of life

    PubMed Central

    Kushnerenko, Elena; Tomalski, Przemyslaw; Ballieux, Haiko; Potton, Anita; Birtles, Deidre; Frostick, Caroline; Moore, Derek G.

    2013-01-01

    The use of visual cues during the processing of audiovisual (AV) speech is known to be less efficient in children and adults with language difficulties, and such difficulties are known to be more prevalent in children from low-income populations. In the present study, we followed an economically diverse group of thirty-seven infants longitudinally from 6–9 months to 14–16 months of age. We used eye-tracking to examine whether individual differences in visual attention during AV processing of speech in 6–9-month-old infants, particularly when processing congruent and incongruent auditory and visual speech cues, might be indicative of their later language development. Twenty-two of these 6–9-month-old infants also participated in an event-related potential (ERP) AV task within the same experimental session. Language development was then followed up at the age of 14–16 months, using two measures of language development, the Preschool Language Scale and the Oxford Communicative Development Inventory. The results show that those infants who were less efficient in auditory speech processing at the age of 6–9 months had lower receptive language scores at 14–16 months. A correlational analysis revealed that the pattern of face scanning and ERP responses to audiovisually incongruent stimuli at 6–9 months were both significantly associated with language development at 14–16 months. These findings add to the understanding of individual differences in neural signatures of AV processing and associated looking behavior in infants. PMID:23882240

  2. Attention Modulates the Auditory Cortical Processing of Spatial and Category Cues in Naturalistic Auditory Scenes

    PubMed Central

    Renvall, Hanna; Staeren, Noël; Barz, Claudia S.; Ley, Anke; Formisano, Elia

    2016-01-01

    This combined fMRI and MEG study investigated brain activations during listening and attending to natural auditory scenes. We first recorded, using in-ear microphones, vocal non-speech sounds, and environmental sounds that were mixed to construct auditory scenes containing two concurrent sound streams. During the brain measurements, subjects attended to one of the streams while spatial acoustic information of the scene was either preserved (stereophonic sounds) or removed (monophonic sounds). Compared to monophonic sounds, stereophonic sounds evoked larger blood-oxygenation-level-dependent (BOLD) fMRI responses in the bilateral posterior superior temporal areas, independent of which stimulus attribute the subject was attending to. This finding is consistent with the functional role of these regions in the (automatic) processing of auditory spatial cues. Additionally, significant differences in the cortical activation patterns depending on the target of attention were observed. Bilateral planum temporale and inferior frontal gyrus were preferentially activated when attending to stereophonic environmental sounds, whereas when subjects attended to stereophonic voice sounds, the BOLD responses were larger at the bilateral middle superior temporal gyrus and sulcus, previously reported to show voice sensitivity. In contrast, the time-resolved MEG responses were stronger for mono- than stereophonic sounds in the bilateral auditory cortices at ~360 ms after the stimulus onset when attending to the voice excerpts within the combined sounds. The observed effects suggest that during the segregation of auditory objects from the auditory background, spatial sound cues together with other relevant temporal and spectral cues are processed in an attention-dependent manner at the cortical locations generally involved in sound recognition. More synchronous neuronal activation during monophonic than stereophonic sound processing, as well as (local) neuronal inhibitory mechanisms in

  3. Attending to auditory memory.

    PubMed

    Zimmermann, Jacqueline F; Moscovitch, Morris; Alain, Claude

    2016-06-01

    Attention to memory describes the process of attending to memory traces when the object is no longer present. It has been studied primarily for representations of visual stimuli with only few studies examining attention to sound object representations in short-term memory. Here, we review the interplay of attention and auditory memory with an emphasis on 1) attending to auditory memory in the absence of related external stimuli (i.e., reflective attention) and 2) effects of existing memory on guiding attention. Attention to auditory memory is discussed in the context of change deafness, and we argue that failures to detect changes in our auditory environments are most likely the result of a faulty comparison system of incoming and stored information. Also, objects are the primary building blocks of auditory attention, but attention can also be directed to individual features (e.g., pitch). We review short-term and long-term memory guided modulation of attention based on characteristic features, location, and/or semantic properties of auditory objects, and propose that auditory attention to memory pathways emerge after sensory memory. A neural model for auditory attention to memory is developed, which comprises two separate pathways in the parietal cortex, one involved in attention to higher-order features and the other involved in attention to sensory information. This article is part of a Special Issue entitled SI: Auditory working memory. PMID:26638836

  4. Forebrain pathway for auditory space processing in the barn owl.

    PubMed

    Cohen, Y E; Miller, G L; Knudsen, E I

    1998-02-01

    The forebrain plays an important role in many aspects of sound localization behavior. Yet, the forebrain pathway that processes auditory spatial information is not known for any species. Using standard anatomic labeling techniques, we used a "top-down" approach to trace the flow of auditory spatial information from an output area of the forebrain sound localization pathway (the auditory archistriatum, AAr), back through the forebrain, and into the auditory midbrain. Previous work has demonstrated that AAr units are specialized for auditory space processing. The results presented here show that the AAr receives afferent input from Field L both directly and indirectly via the caudolateral neostriatum. Afferent input to Field L originates mainly in the auditory thalamus, nucleus ovoidalis, which, in turn, receives input from the central nucleus of the inferior colliculus. In addition, we confirmed previously reported projections of the AAr to the basal ganglia, the external nucleus of the inferior colliculus (ICX), the deep layers of the optic tectum, and various brain stem nuclei. A series of inactivation experiments demonstrated that the sharp tuning of AAr sites for binaural spatial cues depends on Field L input but not on input from the auditory space map in the midbrain ICX: pharmacological inactivation of Field L eliminated completely auditory responses in the AAr, whereas bilateral ablation of the midbrain ICX had no appreciable effect on AAr responses. We conclude, therefore, that the forebrain sound localization pathway can process auditory spatial information independently of the midbrain localization pathway. PMID:9463450

  5. Neural stem/progenitor cell properties of glial cells in the adult mouse auditory nerve

    PubMed Central

    Lang, Hainan; Xing, Yazhi; Brown, LaShardai N.; Samuvel, Devadoss J.; Panganiban, Clarisse H.; Havens, Luke T.; Balasubramanian, Sundaravadivel; Wegner, Michael; Krug, Edward L.; Barth, Jeremy L.

    2015-01-01

    The auditory nerve is the primary conveyor of hearing information from sensory hair cells to the brain. It has been believed that loss of the auditory nerve is irreversible in the adult mammalian ear, resulting in sensorineural hearing loss. We examined the regenerative potential of the auditory nerve in a mouse model of auditory neuropathy. Following neuronal degeneration, quiescent glial cells converted to an activated state showing a decrease in nuclear chromatin condensation, altered histone deacetylase expression and up-regulation of numerous genes associated with neurogenesis or development. Neurosphere formation assays showed that adult auditory nerves contain neural stem/progenitor cells (NSPs) that were within a Sox2-positive glial population. Production of neurospheres from auditory nerve cells was stimulated by acute neuronal injury and hypoxic conditioning. These results demonstrate that a subset of glial cells in the adult auditory nerve exhibit several characteristics of NSPs and are therefore potential targets for promoting auditory nerve regeneration. PMID:26307538

  6. Multichannel error correction code decoder

    NASA Technical Reports Server (NTRS)

    Wagner, Paul K.; Ivancic, William D.

    1993-01-01

    A brief overview of a processing satellite for a mesh very-small-aperture terminal (VSAT) communications network is provided. The multichannel error correction code (ECC) decoder system, the uplink signal generation and link simulation equipment, and the time-shared decoder are described. The testing is discussed. Applications of the time-shared decoder are recommended.

  7. Microcomputer to Multichannel Analyzer Interface.

    ERIC Educational Resources Information Center

    Metz, Roger N.

    1982-01-01

    Describes a microcomputer-based multichannel analyzer (MCA) in which the front end is connected to a microcomputer through a custom interface. Thus an MCA System of 1024 channel resolution, programmable in Basic rather than in machine language and having moderate cost, is achieved. (Author/SK)

  8. Midbrain auditory selectivity to natural sounds.

    PubMed

    Wohlgemuth, Melville J; Moss, Cynthia F

    2016-03-01

    This study investigated auditory stimulus selectivity in the midbrain superior colliculus (SC) of the echolocating bat, an animal that relies on hearing to guide its orienting behaviors. Multichannel, single-unit recordings were taken across laminae of the midbrain SC of the awake, passively listening big brown bat, Eptesicus fuscus. Species-specific frequency-modulated (FM) echolocation sound sequences with dynamic spectrotemporal features served as acoustic stimuli along with artificial sound sequences matched in bandwidth, amplitude, and duration but differing in spectrotemporal structure. Neurons in dorsal sensory regions of the bat SC responded selectively to elements within the FM sound sequences, whereas neurons in ventral sensorimotor regions showed broad response profiles to natural and artificial stimuli. Moreover, a generalized linear model (GLM) constructed on responses in the dorsal SC to artificial linear FM stimuli failed to predict responses to natural sounds and vice versa, but the GLM produced accurate response predictions in ventral SC neurons. This result suggests that auditory selectivity in the dorsal extent of the bat SC arises through nonlinear mechanisms, which extract species-specific sensory information. Importantly, auditory selectivity appeared only in responses to stimuli containing the natural statistics of acoustic signals used by the bat for spatial orientation-sonar vocalizations-offering support for the hypothesis that sensory selectivity enables rapid species-specific orienting behaviors. The results of this study are the first, to our knowledge, to show auditory spectrotemporal selectivity to natural stimuli in SC neurons and serve to inform a more general understanding of mechanisms guiding sensory selectivity for natural, goal-directed orienting behaviors. PMID:26884152
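    The cross-prediction logic described above (fit a GLM to responses to one stimulus class, then test how well it predicts responses to the other class) can be sketched with a Poisson regression on spike counts. Everything in the snippet, including the feature representation and the simulated "dorsal-like" nonlinearity, is an invented illustration rather than the study's actual model.

        import numpy as np
        from sklearn.linear_model import PoissonRegressor

        rng = np.random.default_rng(0)
        w = rng.standard_normal(20) * 0.3                    # shared "linear receptive field" (toy)

        def simulate(n_trials, nonlinear):
            """Toy spectrotemporal features and spike counts for one stimulus condition."""
            X = rng.standard_normal((n_trials, 20))
            drive = X @ w
            if nonlinear:                                    # interaction a linear GLM cannot capture
                drive = drive + 1.5 * X[:, 0] * X[:, 1]
            counts = rng.poisson(np.exp(np.clip(drive, -5.0, 3.0)))
            return X, counts

        X_art, y_art = simulate(500, nonlinear=False)        # "artificial FM" condition
        X_nat, y_nat = simulate(500, nonlinear=True)         # "natural sonar" condition

        glm = PoissonRegressor(alpha=1.0).fit(X_art, y_art)  # GLM fitted on the artificial condition
        print("within-condition fit (artificial):", round(glm.score(X_art, y_art), 2))
        print("cross-condition prediction (natural):", round(glm.score(X_nat, y_nat), 2))

    The drop in cross-condition score mirrors the finding that a GLM fitted on artificial FM stimuli fails to predict responses to natural sounds when the underlying response depends nonlinearly on the stimulus.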

  9. Midbrain auditory selectivity to natural sounds

    PubMed Central

    Moss, Cynthia F.

    2016-01-01

    This study investigated auditory stimulus selectivity in the midbrain superior colliculus (SC) of the echolocating bat, an animal that relies on hearing to guide its orienting behaviors. Multichannel, single-unit recordings were taken across laminae of the midbrain SC of the awake, passively listening big brown bat, Eptesicus fuscus. Species-specific frequency-modulated (FM) echolocation sound sequences with dynamic spectrotemporal features served as acoustic stimuli along with artificial sound sequences matched in bandwidth, amplitude, and duration but differing in spectrotemporal structure. Neurons in dorsal sensory regions of the bat SC responded selectively to elements within the FM sound sequences, whereas neurons in ventral sensorimotor regions showed broad response profiles to natural and artificial stimuli. Moreover, a generalized linear model (GLM) constructed on responses in the dorsal SC to artificial linear FM stimuli failed to predict responses to natural sounds and vice versa, but the GLM produced accurate response predictions in ventral SC neurons. This result suggests that auditory selectivity in the dorsal extent of the bat SC arises through nonlinear mechanisms, which extract species-specific sensory information. Importantly, auditory selectivity appeared only in responses to stimuli containing the natural statistics of acoustic signals used by the bat for spatial orientation—sonar vocalizations—offering support for the hypothesis that sensory selectivity enables rapid species-specific orienting behaviors. The results of this study are the first, to our knowledge, to show auditory spectrotemporal selectivity to natural stimuli in SC neurons and serve to inform a more general understanding of mechanisms guiding sensory selectivity for natural, goal-directed orienting behaviors. PMID:26884152

  10. Automatic segmentation and classification of multiple sclerosis in multichannel MRI.

    PubMed

    Akselrod-Ballin, Ayelet; Galun, Meirav; Gomori, John Moshe; Filippi, Massimo; Valsasina, Paola; Basri, Ronen; Brandt, Achi

    2009-10-01

    We introduce a multiscale approach that combines segmentation with classification to detect abnormal brain structures in medical imagery, and demonstrate its utility in automatically detecting multiple sclerosis (MS) lesions in 3-D multichannel magnetic resonance (MR) images. Our method uses segmentation to obtain a hierarchical decomposition of multichannel, anisotropic MR scans. It then produces a rich set of features describing the segments in terms of intensity, shape, location, neighborhood relations, and anatomical context. These features are then fed into a decision forest classifier, trained with data labeled by experts, enabling the detection of lesions at all scales. Unlike common approaches that use voxel-by-voxel analysis, our system can utilize regional properties that are often important for characterizing abnormal brain structures. We provide experiments on two types of real MR images: a multichannel proton-density-, T2-, and T1-weighted dataset of 25 MS patients and a single-channel fluid-attenuated inversion recovery (FLAIR) dataset of 16 MS patients. Comparing our results with lesion delineation by a human expert and with previously extensively validated results shows the promise of the approach. PMID:19758850
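    The classification stage described above (describe each segment with intensity, shape, and location features, then classify segments with a decision forest trained on expert labels) can be outlined as follows. The per-segment features, the toy intensity model, and the forest settings are placeholders for illustration, not the paper's actual feature set or classifier configuration.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)

        def segment_features(n_segments, lesion_fraction=0.15):
            """Toy per-segment descriptors: mean intensity per MR channel, size, and location."""
            is_lesion = rng.random(n_segments) < lesion_fraction
            # In this toy model, lesion segments are brighter on the simulated channels (an assumption).
            intensities = rng.normal(loc=np.where(is_lesion[:, None], 1.0, 0.0), scale=1.0,
                                     size=(n_segments, 3))
            size = rng.lognormal(mean=3.0, sigma=1.0, size=(n_segments, 1))
            location = rng.random((n_segments, 3))
            X = np.hstack([intensities, size, location])
            return X, is_lesion.astype(int)

        X_train, y_train = segment_features(2000)     # segments with (simulated) expert labels
        X_test, y_test = segment_features(500)

        forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
        print("held-out segment accuracy:", round(forest.score(X_test, y_test), 3))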

  11. Organization of projection neurons and local neurons of the primary auditory center in the fruit fly Drosophila melanogaster.

    PubMed

    Matsuo, Eriko; Seki, Haruyoshi; Asai, Tomonori; Morimoto, Takako; Miyakawa, Hiroyoshi; Ito, Kei; Kamikouchi, Azusa

    2016-04-15

    Acoustic communication between insects serves as an excellent model system for analyzing the neuronal mechanisms underlying auditory information processing. However, the detailed organization of auditory neural circuits in the brain has not yet been described. To understand the central auditory pathways, we used the brain of the fruit fly Drosophila melanogaster as a model and performed a large-scale analysis of the interneurons associated with the primary auditory center. By screening expression driver strains and performing single-cell labeling of these strains, we identified 44 types of interneurons innervating the primary auditory center. Five types were local interneurons, whereas the other 39 types were projection interneurons connecting the primary auditory center with other brain regions. The projection neurons comprised three frequency-selective pathways and two frequency-embracive pathways. Mapping of their connection targets revealed that five neuropils in the brain were intensively connected with the primary auditory center: the wedge (WED), the anterior ventrolateral protocerebrum, the posterior ventrolateral protocerebrum (PVLP), the saddle (SAD), and the gnathal ganglia (GNG). In addition, several other neuropils, including visual and olfactory centers in the brain, were directly connected to the primary auditory center. The distribution patterns of the spines and boutons of the identified neurons suggest that auditory information is sent mainly from the primary auditory center to the PVLP, WED, SAD, GNG, and thoracico-abdominal ganglia. Based on these findings, we established the first comprehensive map of secondary auditory interneurons, which indicates the downstream information flow to parallel ascending pathways, multimodal pathways, and descending pathways. PMID:26762251

  12. Altered auditory function in rats exposed to hypergravic fields

    NASA Technical Reports Server (NTRS)

    Jones, T. A.; Hoffman, L.; Horowitz, J. M.

    1982-01-01

    The effect of an orthodynamic hypergravic field of 6 G on the brainstem auditory projections was studied in rats. The brain temperature and EEG activity were recorded in the rats during 6 G orthodynamic acceleration and auditory brainstem responses were used to monitor auditory function. Results show that all animals exhibited auditory brainstem responses which indicated impaired conduction and transmission of brainstem auditory signals during the exposure to the 6 G acceleration field. Significant increases in central conduction time were observed for peaks 3N, 4P, 4N, and 5P (N = negative, P = positive), while the absolute latency values for these same peaks were also significantly increased. It is concluded that these results, along with those for fields below 4 G (Jones and Horowitz, 1981), indicate that impaired function proceeds in a rostro-caudal progression as field strength is increased.

  13. Anatomy, Physiology and Function of the Auditory System

    NASA Astrophysics Data System (ADS)

    Kollmeier, Birger

    The human ear consists of the outer ear (pinna or concha, outer ear canal, tympanic membrane), the middle ear (middle ear cavity with the three ossicles malleus, incus, and stapes), and the inner ear (the cochlea, which is connected to the three semicircular canals by the vestibule, which provides the sense of balance). The cochlea is connected to the brain stem via the eighth cranial nerve, i.e., the vestibulocochlear nerve or nervus statoacusticus. Subsequently, the acoustical information is processed by the brain at various levels of the auditory system. An overview of the anatomy of the auditory system is provided in Figure 1.

  14. What causes auditory distraction?

    PubMed

    Macken, William J; Phelps, Fiona G; Jones, Dylan M

    2009-02-01

    The role of separating task-relevant from task-irrelevant aspects of the environment is typically assigned to the executive functioning of working memory. However, pervasive aspects of auditory distraction have been shown to be unrelated to working memory capacity in a range of studies of individual differences. We measured individual differences in global pattern matching and deliberate recoding of auditory sequences, and showed that, although deliberate processing was related to short-term memory performance, it did not predict the extent to which that performance was disrupted by task-irrelevant sound. Individual differences in global sequence processing were, however, positively related to the degree to which auditory distraction occurred. We argue that much auditory distraction, rather than being a negative function of working memory capacity, is in fact a positive function of the acuity of obligatory auditory processing. PMID:19145024

  15. Spectrotemporal resolution tradeoff in auditory processing as revealed by human auditory brainstem responses and psychophysical indices.

    PubMed

    Bidelman, Gavin M; Syed Khaja, Ameenuddin

    2014-06-20

    Auditory filter theory dictates a physiological compromise between frequency and temporal resolution of cochlear signal processing. We examined neurophysiological correlates of these spectrotemporal tradeoffs in the human auditory system using auditory evoked brain potentials and psychophysical responses. Temporal resolution was assessed using scalp-recorded auditory brainstem responses (ABRs) elicited by paired clicks. The inter-click interval (ICI) between successive pulses was parameterized from 0.7 to 25 ms to map ABR amplitude recovery as a function of stimulus spacing. Behavioral frequency difference limens (FDLs) and auditory filter selectivity (Q10 of psychophysical tuning curves) were obtained to assess relations between behavioral spectral acuity and electrophysiological estimates of temporal resolvability. Neural responses increased monotonically in amplitude with increasing ICI, ranging from total suppression (0.7 ms) to full recovery (25 ms) with a temporal resolution of ∼3-4 ms. ABR temporal thresholds were correlated with behavioral Q10 (frequency selectivity) but not FDLs (frequency discrimination); no correspondence was observed between Q10 and FDLs. Results suggest that finer frequency selectivity, but not discrimination, is associated with poorer temporal resolution. The inverse relation between ABR recovery and perceptual frequency tuning demonstrates a time-frequency tradeoff between the temporal and spectral resolving power of the human auditory system. PMID:24793771
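    The analysis described above can be approximated by fitting a recovery function to ABR amplitude as a function of inter-click interval and correlating the resulting temporal threshold with each listener's Q10. The sketch below does this with simulated listeners; the exponential recovery model, the toy link between tuning sharpness and recovery time, and the group size are assumptions for illustration, not the study's analysis.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import pearsonr

        rng = np.random.default_rng(0)
        icis = np.array([0.7, 1.0, 2.0, 4.0, 8.0, 16.0, 25.0])     # inter-click intervals (ms)

        def recovery(ici, tau):
            """Assumed exponential recovery of ABR amplitude with increasing ICI."""
            return 1.0 - np.exp(-ici / tau)

        n_subjects = 20
        q10 = rng.normal(3.0, 0.8, n_subjects)                      # psychophysical tuning sharpness
        # Toy link: sharper tuning -> slower temporal recovery (the tradeoff under discussion).
        true_tau = np.clip(1.5 + 0.8 * (q10 - q10.mean()), 0.3, None)

        taus = []
        for tau in true_tau:
            amp = recovery(icis, tau) + rng.normal(0, 0.05, icis.size)   # noisy per-subject amplitudes
            fit_tau, _ = curve_fit(recovery, icis, amp, p0=[2.0])
            taus.append(fit_tau[0])

        r, p = pearsonr(q10, taus)
        print(f"correlation between Q10 and ABR recovery time constant: r = {r:.2f}, p = {p:.3g}")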

  16. Auditory-motor learning influences auditory memory for music.

    PubMed

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features. PMID:22271265

  17. Scalable multichannel MRI data acquisition system.

    PubMed

    Bodurka, Jerzy; Ledden, Patrick J; van Gelderen, Peter; Chu, Renxin; de Zwart, Jacco A; Morris, Doug; Duyn, Jeff H

    2004-01-01

    A scalable multichannel digital MRI receiver system was designed to achieve high bandwidth echo-planar imaging (EPI) acquisitions for applications such as BOLD-fMRI. The modular system design allows for easy extension to an arbitrary number of channels. A 16-channel receiver was developed and integrated with a General Electric (GE) Signa 3T VH/3 clinical scanner. Receiver performance was evaluated on phantoms and human volunteers using a custom-built 16-element receive-only brain surface coil array. At an output bandwidth of 1 MHz, a 100% acquisition duty cycle was achieved. Overall system noise figure and dynamic range were better than 0.85 dB and 84 dB, respectively. During repetitive EPI scanning on phantoms, the relative temporal standard deviation of the image intensity time-course was below 0.2%. As compared to the product birdcage head coil, 16-channel reception with the custom array yielded a nearly 6-fold SNR gain in the cerebral cortex and a 1.8-fold SNR gain in the center of the brain. The excellent system stability combined with the increased sensitivity and SENSE capabilities of 16-channel coils are expected to significantly benefit and enhance fMRI applications. PMID:14705057

  18. Auditory Neuroimaging with fMRI and PET

    PubMed Central

    Talavage, Thomas M.; Gonzalez-Castillo, Javier; Scott, Sophie K.

    2013-01-01

    For much of the past 30 years, investigations of auditory perception and language have been enhanced or even driven by the use of functional neuroimaging techniques that specialize in localization of central responses. Beginning with investigations using positron emission tomography (PET) and gradually shifting primarily to usage of functional magnetic resonance imaging (fMRI), auditory neuroimaging has greatly advanced our understanding of the organization and response properties of brain regions critical to the perception of and communication with the acoustic world in which we live. As the complexity of the questions being addressed has increased, the techniques, experiments and analyses applied have also become more nuanced and specialized. A brief review of the history of these investigations sets the stage for an overview and analysis of how these neuroimaging modalities are becoming ever more effective tools for understanding the auditory brain. We conclude with a brief discussion of open methodological issues as well as potential clinical applications for auditory neuroimaging. PMID:24076424

  19. Lateralization of auditory-cortex functions.

    PubMed

    Tervaniemi, Mari; Hugdahl, Kenneth

    2003-12-01

    In the present review, we summarize the most recent findings and current views about the structural and functional basis of human brain lateralization in the auditory modality. The main emphasis is on hemodynamic and electromagnetic data from healthy adult participants with regard to music- vs. speech-sound encoding. Moreover, a selective set of behavioral dichotic-listening (DL) results and clinical findings (e.g., schizophrenia, dyslexia) are included. It is shown that the human brain has a strong predisposition to process speech sounds in the left and music sounds in the right auditory cortex in the temporal lobe. To a great extent, this functional asymmetry rests on an auditory area located at the posterior end of the temporal lobe, the planum temporale (PT). However, the predisposition is not bound to informational sound content but to rapid temporal information, which is more common in speech than in music sounds. Finally, we present evidence for the vulnerability of this functional specialization of sound processing. These altered forms of lateralization may be caused by top-down and bottom-up effects both inter- and intraindividually. In other words, relatively small changes in acoustic sound features or in their familiarity may modify the degree to which the left vs. right auditory areas contribute to sound encoding. PMID:14629926

  20. Formation of associations in auditory cortex by slow changes of tonic firing.

    PubMed

    Brosch, Michael; Selezneva, Elena; Scheich, Henning

    2011-01-01

    We review event-related slow firing changes in the auditory cortex and related brain structures. Two types of changes can be distinguished, namely increases and decreases of firing, lasting in the order of seconds. Triggering events can be auditory stimuli, reinforcers, and behavioral responses. Slow firing changes terminate with reinforcers and possibly with auditory stimuli and behavioral responses. A necessary condition for the emergence of slow firing changes seems to be that subjects have learnt that consecutive sensory or behavioral events are contingent on reinforcement. They disappear when the contingencies are no longer present. Slow firing changes in auditory cortex bear similarities with slow changes of neuronal activity that have been observed in subcortical parts of the auditory system and in other non-sensory brain structures. We propose that slow firing changes in auditory cortex provide a neuronal mechanism for anticipating, memorizing, and associating events that are related to hearing and of behavioral relevance. This may complement the representation of the timing and types of auditory and auditory-related events which may be provided by phasic responses in auditory cortex. The presence of slow firing changes indicates that many more auditory-related aspects of a behavioral procedure are reflected in the neuronal activity of auditory cortex than previously assumed. PMID:20488230

  1. Multichannel Error Correction Code Decoder

    NASA Technical Reports Server (NTRS)

    1996-01-01

    NASA Lewis Research Center's Digital Systems Technology Branch has an ongoing program in modulation, coding, onboard processing, and switching. Recently, NASA completed a project to incorporate a time-shared decoder into the very-small-aperture terminal (VSAT) onboard-processing mesh architecture. The primary goal was to demonstrate a time-shared decoder for a regenerative satellite that uses asynchronous, frequency-division multiple access (FDMA) uplink channels, thereby identifying hardware and power requirements and fault-tolerance issues that would have to be addressed in an operational system. A secondary goal was to integrate and test, in a system environment, two NASA-sponsored, proof-of-concept hardware deliverables: the Harris Corp. high-speed Bose Chaudhuri-Hocquenghem (BCH) codec and the TRW multichannel demultiplexer/demodulator (MCDD). A beneficial byproduct of this project was the development of flexible, multichannel-uplink signal-generation equipment.

  2. Is the auditory sensory memory sensitive to visual information?

    PubMed Central

    Besle, Julien; Fort, Alexandra; Giard, Marie-Hélène

    2005-01-01

    The mismatch negativity (MMN) component of the auditory event-related brain potential can be used as a probe to study the representation of sounds in auditory sensory memory (ASM). Yet, it has been shown that an auditory MMN can also be elicited by an illusory auditory deviance induced by visual changes. This suggests that some visual information may be encoded in ASM and is accessible to the auditory MMN process. However, it is not known whether visual information influences ASM representation for any audiovisual event or whether this phenomenon is limited to specific domains in which strong audiovisual illusions occur. To address this issue, we compared the topographies of MMNs elicited by non-speech audiovisual stimuli deviating from audiovisual standards on the visual dimension, the auditory dimension, or both. Contrary to what occurs with audiovisual illusions, each unimodal deviant elicited a sensory-specific MMN, and the MMN to audiovisual deviants included both sensory components. The visual MMN was, however, different from a genuine visual MMN obtained in a visual-only control oddball paradigm, suggesting that auditory and visual information interact before the MMN process occurs. Furthermore, the MMN to audiovisual deviants was significantly different from the sum of the two sensory-specific MMNs, showing that the processes of visual and auditory change detection are not completely independent. PMID:16041497

  3. Web-based multi-channel analyzer

    DOEpatents

    Gritzo, Russ E.

    2003-12-23

    The present invention provides an improved multi-channel analyzer designed to conveniently gather, process, and distribute spectrographic pulse data. The multi-channel analyzer may operate on a computer system having memory, a processor, and the capability to connect to a network and to receive digitized spectrographic pulses. The multi-channel analyzer may have a software module integrated with a general-purpose operating system that may receive digitized spectrographic pulses at rates of at least 10,000 pulses per second. The multi-channel analyzer may further have a user-level software module that may receive user-specified controls dictating the operation of the multi-channel analyzer, making the multi-channel analyzer customizable by the end-user. The user-level software may further categorize and conveniently distribute spectrographic pulse data employing non-proprietary, standard communication protocols and formats.
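
    At its core, a multi-channel analyzer of this kind accumulates a histogram of digitized pulse heights. The following Python sketch illustrates only that generic bookkeeping, not the patented software architecture; the channel count, ADC range, and function name are assumptions made for the example.

```python
import numpy as np

def build_spectrum(pulse_heights, n_channels=1024, adc_max=4096):
    """Bin digitized pulse heights (ADC units) into an MCA-style spectrum.

    n_channels and adc_max are hypothetical values chosen for illustration.
    """
    pulses = np.asarray(pulse_heights, dtype=float)
    # Map each pulse amplitude to a channel index and clip to the valid range.
    channels = np.clip((pulses / adc_max * n_channels).astype(int), 0, n_channels - 1)
    # Accumulate counts per channel, as a hardware histogram memory would.
    return np.bincount(channels, minlength=n_channels)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated photopeak at roughly one third of full scale plus a flat background.
    pulses = np.concatenate([rng.normal(1365, 20, 5000), rng.uniform(0, 4096, 2000)])
    spectrum = build_spectrum(pulses)
    print("peak channel:", int(spectrum.argmax()), "counts:", int(spectrum.max()))
```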

  4. Multichannel analysis of surface waves

    USGS Publications Warehouse

    Park, C.B.; Miller, R.D.; Xia, J.

    1999-01-01

    The frequency-dependent properties of Rayleigh-type surface waves can be utilized for imaging and characterizing the shallow subsurface. Most surface-wave analysis relies on the accurate calculation of phase velocities for the horizontally traveling fundamental-mode Rayleigh wave acquired by stepping out a pair of receivers at intervals based on calculated ground roll wavelengths. Interference by coherent source-generated noise inhibits the reliability of shear-wave velocities determined through inversion of the whole wave field. Among these nonplanar, nonfundamental-mode Rayleigh waves (noise) are body waves, scattered and nonsource-generated surface waves, and higher-mode surface waves. The degree to which each of these types of noise contaminates the dispersion curve and, ultimately, the inverted shear-wave velocity profile is dependent on frequency as well as distance from the source. Multichannel recording permits effective identification and isolation of noise according to distinctive trace-to-trace coherency in arrival time and amplitude. An added advantage is the speed and redundancy of the measurement process. Decomposition of a multichannel record into a time variable-frequency format, similar to an uncorrelated Vibroseis record, permits analysis and display of each frequency component in a unique and continuous format. Coherent noise contamination can then be examined and its effects appraised in both frequency and offset space. Separation of frequency components permits real-time maximization of the S/N ratio during acquisition and subsequent processing steps. Linear separation of each ground roll frequency component allows calculation of phase velocities by simply measuring the linear slope of each frequency component. Breaks in coherent surface-wave arrivals, observable on the decomposed record, can be compensated for during acquisition and processing. Multichannel recording permits single-measurement surveying of a broad depth range, high levels of
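
    The slope-based phase-velocity calculation mentioned above can be sketched numerically: for one frequency component, the phase of the Fourier coefficient falls off linearly with source-receiver offset, and the slope of that line gives the phase velocity. The snippet below is a toy illustration under idealized assumptions (a noise-free plane Rayleigh wave, invented receiver geometry and sampling), not the authors' processing code.

```python
import numpy as np

def phase_velocity(traces, offsets, dt, freq):
    """Estimate phase velocity at one frequency from the linear slope of
    unwrapped Fourier phase versus source-receiver offset."""
    n = traces.shape[1]
    freqs = np.fft.rfftfreq(n, dt)
    k = int(np.argmin(np.abs(freqs - freq)))          # FFT bin nearest the target frequency
    phase = np.unwrap(np.angle(np.fft.rfft(traces, axis=1)[:, k]))
    slope = np.polyfit(offsets, phase, 1)[0]          # d(phase)/d(offset), rad per metre
    return -2.0 * np.pi * freqs[k] / slope            # c = -2*pi*f / (dphi/dx)

if __name__ == "__main__":
    # Synthetic plane surface wave at 200 m/s recorded by 48 receivers spaced 2 m apart.
    c_true, f0, dt = 200.0, 15.0, 0.001
    offsets = 10.0 + 2.0 * np.arange(48)
    t = np.arange(1000) * dt
    traces = np.array([np.sin(2 * np.pi * f0 * (t - x / c_true)) for x in offsets])
    print("estimated phase velocity:", round(phase_velocity(traces, offsets, dt, f0), 1), "m/s")
```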

  5. Temperature sensitive auditory neuropathy.

    PubMed

    Zhang, Qiujing; Lan, Lan; Shi, Wei; Yu, Lan; Xie, Lin-Yi; Xiong, Fen; Zhao, Cui; Li, Na; Yin, Zifang; Zong, Liang; Guan, Jing; Wang, Dayong; Sun, Wei; Wang, Qiuju

    2016-05-01

    Temperature sensitive auditory neuropathy is a very rare and puzzling disorder. In the present study, we reported three unrelated 2- to 6-year-old children who were diagnosed with auditory neuropathy and who complained of severe hearing loss when they had a fever. Their hearing thresholds varied from the morning to the afternoon. Two of these patients' hearing improved with age, and one patient obtained positive results from a cochlear implant. Genetic analysis revealed that these three patients had otoferlin (OTOF) homozygous or compound heterozygous mutations with the genotypes c.2975_2978delAG/c.4819C>T, c.4819C>T/c.4819C>T, or c.2382_2383delC/c.1621G>A, respectively. Our study suggests that these gene mutations may be the cause of temperature sensitive auditory neuropathy. The long-term follow-up results suggest that the hearing loss in this type of auditory neuropathy may recover with age. PMID:26778470

  6. Auditory motion affects visual biological motion processing.

    PubMed

    Brooks, A; van der Zwan, R; Billard, A; Petreska, B; Clarke, S; Blanke, O

    2007-02-01

    The processing of biological motion is a critical, everyday task performed with remarkable efficiency by human sensory systems. Interest in this ability has focused to a large extent on biological motion processing in the visual modality (see, for example, Cutting, J. E., Moore, C., & Morrison, R. (1988). Masking the motions of human gait. Perception and Psychophysics, 44(4), 339-347). In naturalistic settings, however, it is often the case that biological motion is defined by input to more than one sensory modality. For this reason, here in a series of experiments we investigate behavioural correlates of multisensory, in particular audiovisual, integration in the processing of biological motion cues. More specifically, using a new psychophysical paradigm we investigate the effect of suprathreshold auditory motion on perceptions of visually defined biological motion. Unlike data from previous studies investigating audiovisual integration in linear motion processing [Meyer, G. F. & Wuerger, S. M. (2001). Cross-modal integration of auditory and visual motion signals. Neuroreport, 12(11), 2557-2560; Wuerger, S. M., Hofbauer, M., & Meyer, G. F. (2003). The integration of auditory and motion signals at threshold. Perception and Psychophysics, 65(8), 1188-1196; Alais, D. & Burr, D. (2004). No direction-specific bimodal facilitation for audiovisual motion detection. Cognitive Brain Research, 19, 185-194], we report the existence of direction-selective effects: relative to control (stationary) auditory conditions, auditory motion in the same direction as the visually defined biological motion target increased its detectability, whereas auditory motion in the opposite direction had the inverse effect. Our data suggest these effects do not arise through general shifts in visuo-spatial attention, but instead are a consequence of motion-sensitive, direction-tuned integration mechanisms that are, if not unique to biological visual motion, at least not common to all types of

  7. Subcortical modulation in auditory processing and auditory hallucinations.

    PubMed

    Ikuta, Toshikazu; DeRosse, Pamela; Argyelan, Miklos; Karlsgodt, Katherine H; Kingsley, Peter B; Szeszko, Philip R; Malhotra, Anil K

    2015-12-15

    Hearing perception in individuals with auditory hallucinations has not been well studied. Auditory hallucinations have previously been shown to involve primary auditory cortex activation. This activation suggests that auditory hallucinations activate the terminal of the auditory pathway as if auditory signals are submitted from the cochlea, and that a hallucinatory event is therefore perceived as hearing. The primary auditory cortex is stimulated by some unknown source that is outside of the auditory pathway. The current study aimed to assess the outcomes of stimulating the primary auditory cortex through the auditory pathway in individuals who have experienced auditory hallucinations. Sixteen patients with schizophrenia underwent functional magnetic resonance imaging (fMRI) sessions, as well as hallucination assessments. During the fMRI session, auditory stimuli were presented in one-second intervals at times when scanner noise was absent. Participants listened to auditory stimuli of sine waves (SW) (4-5.5kHz), English words (EW), and acoustically reversed English words (arEW) in a block design fashion. The arEW were employed to deliver the sound of a human voice with minimal linguistic components. Patients' auditory hallucination severity was assessed by the auditory hallucination item of the Brief Psychiatric Rating Scale (BPRS). During perception of arEW when compared with perception of SW, bilateral activation of the globus pallidus correlated with severity of auditory hallucinations. EW when compared with arEW did not correlate with auditory hallucination severity. Our findings suggest that the sensitivity of the globus pallidus to the human voice is associated with the severity of auditory hallucination. PMID:26275927

  8. Auditory Spatial Layout

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  9. [Central auditory prosthesis].

    PubMed

    Lenarz, T; Lim, H; Joseph, G; Reuter, G; Lenarz, M

    2009-06-01

    Deaf patients with severe sensory hearing loss can benefit from a cochlear implant (CI), which stimulates the auditory nerve fibers. However, patients who do not have an intact auditory nerve cannot benefit from a CI. The majority of these patients are neurofibromatosis type 2 (NF2) patients who developed neural deafness due to growth or surgical removal of a bilateral acoustic neuroma. The only current solution is the auditory brainstem implant (ABI), which stimulates the surface of the cochlear nucleus in the brainstem. Although the ABI provides improvement in environmental awareness and lip-reading capabilities, only a few NF2 patients have achieved some limited open set speech perception. In the search for alternative procedures our research group in collaboration with Cochlear Ltd. (Australia) developed a human prototype auditory midbrain implant (AMI), which is designed to electrically stimulate the inferior colliculus (IC). The IC has the potential as a new target for an auditory prosthesis as it provides access to neural projections necessary for speech perception as well as a systematic map of spectral information. In this paper the present status of research and development in the field of central auditory prostheses is presented with respect to technology, surgical technique and hearing results as well as the background concepts of ABI and AMI. PMID:19517084

  10. Auditory perception vs. recognition: representation of complex communication sounds in the mouse auditory cortical fields.

    PubMed

    Geissler, Diana B; Ehret, Günter

    2004-02-01

    Details of brain areas for acoustical Gestalt perception and the recognition of species-specific vocalizations are not known. Here we show how spectral properties and the recognition of the acoustical Gestalt of wriggling calls of mouse pups based on a temporal property are represented in auditory cortical fields and an association area (dorsal field) of the pups' mothers. We stimulated either with a call model releasing maternal behaviour at a high rate (call recognition) or with two models of low behavioural significance (perception without recognition). Brain activation was quantified using c-Fos immunocytochemistry, counting Fos-positive cells in electrophysiologically mapped auditory cortical fields and the dorsal field. A frequency-specific labelling in two primary auditory fields is related to call perception but not to the discrimination of the biological significance of the call models used. Labelling related to call recognition is present in the second auditory field (AII). A left hemisphere advantage of labelling in the dorsoposterior field seems to reflect an integration of call recognition with maternal responsiveness. The dorsal field is activated only in the left hemisphere. The spatial extent of Fos-positive cells within the auditory cortex and its fields is larger in the left than in the right hemisphere. Our data show that a left hemisphere advantage in processing of a species-specific vocalization up to recognition is present in mice. The differential representation of vocalizations of high vs. low biological significance, as seen only in higher-order and not in primary fields of the auditory cortex, is discussed in the context of perceptual strategies. PMID:15009150

  11. Multichanneled puzzle-like encryption

    NASA Astrophysics Data System (ADS)

    Amaya, Dafne; Tebaldi, Myrian; Torroba, Roberto; Bolognini, Néstor

    2008-07-01

    In order to increase the security of data transmission, we propose a multichanneled puzzle-like encryption method. The basic principle relies on decomposing the input information, in the same way as the pieces of a puzzle. Each decomposed part of the input object is encrypted separately in a 4f double random phase mask architecture, by setting the optical parameters to a defined state. Each parameter set defines a channel. In order to retrieve the whole information, it is necessary to properly decrypt and recompose all channels. Computer simulations that confirm our proposal are presented.
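
    For readers unfamiliar with the underlying 4f double random phase encoding, the encryption step performed in each channel can be sketched numerically as below. This is a generic, single-channel toy (the image, the two phase keys, and the function names are invented for illustration); the channel-defining optical parameter settings of the proposed method are not modeled.

```python
import numpy as np

def drpe_encrypt(img, key1, key2):
    """4f double random phase encoding: one mask at the input plane, one at the Fourier plane."""
    m1 = np.exp(2j * np.pi * key1)          # random phase mask at the input plane
    m2 = np.exp(2j * np.pi * key2)          # random phase mask at the Fourier plane
    return np.fft.ifft2(np.fft.fft2(img * m1) * m2)

def drpe_decrypt(cipher, key1, key2):
    """Invert the encoding by applying the conjugate masks in reverse order."""
    m2c = np.exp(-2j * np.pi * key2)
    m1c = np.exp(-2j * np.pi * key1)
    return np.fft.ifft2(np.fft.fft2(cipher) * m2c) * m1c

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.random((64, 64))              # stand-in for one decomposed "puzzle piece"
    key1, key2 = rng.random((64, 64)), rng.random((64, 64))
    recovered = drpe_decrypt(drpe_encrypt(img, key1, key2), key1, key2)
    print("max reconstruction error:", float(np.abs(recovered - img).max()))
```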

  12. Auditory-limbic interactions in chronic tinnitus: Challenges for neuroimaging research.

    PubMed

    Leaver, Amber M; Seydell-Greenwald, Anna; Rauschecker, Josef P

    2016-04-01

    Tinnitus is a widespread auditory disorder affecting approximately 10-15% of the population, often with debilitating consequences. Although tinnitus commonly begins with damage to the auditory system due to loud-noise exposure, aging, or other etiologies, the exact neurophysiological basis of chronic tinnitus remains unknown. Many researchers point to a central auditory origin of tinnitus; however, a growing body of evidence also implicates other brain regions, including the limbic system. Correspondingly, we and others have proposed models of tinnitus in which the limbic and auditory systems both play critical roles and interact with one another. Specifically, we argue that damage to the auditory system generates an initial tinnitus signal, consistent with previous research. In our model, this "transient" tinnitus is suppressed when a limbic frontostriatal network, comprised of ventromedial prefrontal cortex and ventral striatum, successfully modulates thalamocortical transmission in the auditory system. Thus, in chronic tinnitus, limbic-system damage and resulting inefficiency of auditory-limbic interactions prevents proper compensation of the tinnitus signal. Neuroimaging studies utilizing connectivity methods like resting-state fMRI and diffusion MRI continue to uncover tinnitus-related anomalies throughout auditory, limbic, and other brain systems. However, directly assessing interactions between these brain regions and networks has proved to be more challenging. Here, we review existing empirical support for models of tinnitus stressing a critical role for involvement of "non-auditory" structures in tinnitus pathophysiology, and discuss the possible impact of newly refined connectivity techniques from neuroimaging on tinnitus research. PMID:26299843

  13. Auditory temporal resolution is linked to resonance frequency of the auditory cortex.

    PubMed

    Baltus, Alina; Herrmann, Christoph Siegfried

    2015-10-01

    A brief silent gap embedded in an otherwise continuous sound is missed by a human listener when it falls below a certain threshold: the gap detection threshold. This can be interpreted as an indicator that auditory perception is a non-continuous process, during which acoustic input is fragmented into a discrete chain of events. Current research provides evidence for a covariation between rhythmic properties of speech and ongoing rhythmic activity in the brain. Therefore, the discretization of acoustic input is thought to facilitate speech processing. Ongoing oscillations in the auditory cortex are suggested to represent a neuronal mechanism which implements the discretization process and leads to a limited auditory temporal resolution. Since gap detection thresholds seem to vary considerably between individuals, the present study addresses the question of whether individual differences in the frequency of underlying ongoing oscillatory mechanisms can be associated with auditory temporal resolution. To address this question we determined an individual gap detection threshold and a preferred oscillatory frequency for each participant. The preferred frequency of the auditory cortex was identified using an auditory steady state response (ASSR) paradigm: amplitude-modulated sounds with modulation frequencies in the gamma range were presented binaurally; the frequency which elicited the largest spectral amplitude was considered the preferred oscillatory frequency. Our results show that individuals with higher preferred auditory frequencies perform significantly better in the gap detection task. Moreover, this correlation between oscillation frequency and gap detection was supported by high test-retest reliabilities for gap detection thresholds as well as preferred frequencies. PMID:26268810
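
    The individual "preferred frequency" described above is simply the gamma-band modulation frequency that yields the largest steady-state spectral amplitude. A minimal sketch of that selection step is given below, using simulated data; the sampling rate, candidate frequencies, and function name are assumptions. Across listeners, the resulting values would then be correlated with gap detection thresholds.

```python
import numpy as np

def preferred_frequency(eeg_by_freq, fs, mod_freqs):
    """Return the modulation frequency with the largest auditory steady-state amplitude.

    eeg_by_freq : dict mapping modulation frequency (Hz) -> 1-D EEG trace
    fs          : sampling rate in Hz
    mod_freqs   : candidate gamma-band modulation frequencies (Hz)
    """
    amplitudes = []
    for f in mod_freqs:
        x = eeg_by_freq[f] - eeg_by_freq[f].mean()
        spectrum = np.abs(np.fft.rfft(x)) / len(x)
        bin_idx = int(round(f * len(x) / fs))          # FFT bin at the modulation frequency
        amplitudes.append(spectrum[bin_idx])
    return mod_freqs[int(np.argmax(amplitudes))]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    fs, mod_freqs = 1000, [38, 40, 42, 44, 46]
    t = np.arange(2 * fs) / fs
    # Simulated listener whose auditory cortex "resonates" most strongly at 42 Hz.
    eeg_by_freq = {f: (1.5 if f == 42 else 0.5) * np.sin(2 * np.pi * f * t)
                   + rng.normal(0.0, 1.0, t.size) for f in mod_freqs}
    print("preferred frequency:", preferred_frequency(eeg_by_freq, fs, mod_freqs), "Hz")
    # Across listeners, one would then correlate these values with gap detection thresholds,
    # e.g. np.corrcoef(preferred_freqs, gap_thresholds)[0, 1] (hypothetical arrays).
```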

  14. Development of multichannel MEG system at IGCAR

    NASA Astrophysics Data System (ADS)

    Mariyappa, N.; Parasakthi, C.; Gireesan, K.; Sengottuvel, S.; Patel, Rajesh; Janawadkar, M. P.; Radhakrishnan, T. S.; Sundar, C. S.

    2013-02-01

    We describe some of the challenging aspects in the indigenous development of the whole-head multichannel magnetoencephalography (MEG) system at IGCAR, Kalpakkam. These are: i) fabrication and testing of a helmet-shaped sensor array holder made of a polymeric material experimentally tested to be compatible with liquid helium temperatures, ii) the design and fabrication of the PCB adapter modules, keeping in mind inter-track cross-talk considerations between the electrical leads used to provide connections from the SQUIDs at liquid helium temperature (4.2 K) to the electronics at room temperature (300 K), and iii) the use of high-resistance manganin wires for the 86 channels (86×8 leads), essential to reduce the total heat leak, which, however, inevitably causes an attenuation of the SQUID output signal due to the voltage drop in the leads. We have presently populated 22 of the 86 channels, which include 6 reference channels to reject common-mode noise. The whole-head MEG system covering all the lobes of the brain will be progressively assembled when the other three PCB adapter modules, presently under fabrication, become available. The MEG system will be used for a variety of basic and clinical studies, including localization of epileptic foci during pre-surgical mapping in collaboration with neurologists.

  15. Auditory models for speech analysis

    NASA Astrophysics Data System (ADS)

    Maybury, Mark T.

    This paper reviews the psychophysical basis for auditory models and discusses their application to automatic speech recognition. First, an overview of the human auditory system is presented, followed by a review of current knowledge gleaned from neurological and psychoacoustic experimentation. Next, a general framework describes established peripheral auditory models which are based on well-understood properties of the peripheral auditory system. This is followed by a discussion of current enhancements to these models that include nonlinearities and synchrony information as well as other higher auditory functions. Finally, the initial performance of auditory models in the task of speech recognition is examined and additional applications are mentioned.

  16. Auditory hallucinations induced by trazodone

    PubMed Central

    Shiotsuki, Ippei; Terao, Takeshi; Ishii, Nobuyoshi; Hatano, Koji

    2014-01-01

    A 26-year-old female outpatient presenting with a depressive state suffered from auditory hallucinations at night. Her auditory hallucinations did not respond to blonanserin or paliperidone, but partially responded to risperidone. In view of the possibility that her auditory hallucinations began after starting trazodone, trazodone was discontinued, leading to a complete resolution of her auditory hallucinations. Furthermore, even after risperidone was decreased and discontinued, her auditory hallucinations did not recur. These findings suggest that trazodone may induce auditory hallucinations in some susceptible patients. PMID:24700048

  17. Multichannel Learning: Connecting All to Education.

    ERIC Educational Resources Information Center

    Anzalone, Steve, Ed.

    Drafted for the Learning Technologies for Basic Education project, this document assembles case studies which provide an overview of multichannel learning, or the reinforcement of learning through the use of several instructional paths and various media, including print, broadcast, and online. Through the cases, multichannel learning is depicted as an…

  18. Multichannel Compression, Temporal Cues, and Audibility.

    ERIC Educational Resources Information Center

    Souza, Pamela E.; Turner, Christopher W.

    1998-01-01

    The effect of the reduction of the temporal envelope produced by multichannel compression on recognition was examined in 16 listeners with hearing loss, with particular focus on audibility of the speech signal. Multichannel compression improved speech recognition when superior audibility was provided by a two-channel compression system over linear…

  19. Multichannel Analyzer Built from a Microcomputer.

    ERIC Educational Resources Information Center

    Spencer, C. D.; Mueller, P.

    1979-01-01

    Describes a multichannel analyzer built using eight-bit S-100 bus microcomputer hardware. The output modes are an oscilloscope display, printing the data, and sending the data to another computer. Discusses the system's hardware, software, costs, and advantages relative to commercial multichannel analyzers. (Author/GA)

  20. A Student-Made Inexpensive Multichannel Pipet

    ERIC Educational Resources Information Center

    Dragojlovic, Veljko

    2009-01-01

    An inexpensive multichannel pipet designed to deliver small volumes of liquid simultaneously to wells in a multiwell plate can be prepared by students in a single laboratory period. The multichannel pipet is made of disposable plastic 1 mL syringes and drilled plastic plates, which are used to make plunger and barrel assemblies. Application of the…

  1. Resource allocation models of auditory working memory.

    PubMed

    Joseph, Sabine; Teki, Sundeep; Kumar, Sukhbinder; Husain, Masud; Griffiths, Timothy D

    2016-06-01

    Auditory working memory (WM) is the cognitive faculty that allows us to actively hold and manipulate sounds in mind over short periods of time. We develop here a particular perspective on WM for non-verbal, auditory objects as well as for time based on the consideration of possible parallels to visual WM. In vision, there has been a vigorous debate on whether WM capacity is limited to a fixed number of items or whether it represents a limited resource that can be allocated flexibly across items. Resource allocation models predict that the precision with which an item is represented decreases as a function of total number of items maintained in WM because a limited resource is shared among stored objects. We consider here auditory work on sequentially presented objects of different pitch as well as time intervals from the perspective of dynamic resource allocation. We consider whether the working memory resource might be determined by perceptual features such as pitch or timbre, or bound objects comprising multiple features, and we speculate on brain substrates for these behavioural models. This article is part of a Special Issue entitled SI: Auditory working memory. PMID:26835560

  2. Mapping tonotopy in human auditory cortex.

    PubMed

    van Dijk, Pim; Langers, Dave R M

    2013-01-01

    Tonotopy is arguably the most prominent organizational principle in the auditory pathway. Nevertheless, the layout of tonotopic maps in humans is still debated. We present neuroimaging data that robustly identify multiple tonotopic maps in the bilateral auditory cortex. In contrast with some earlier publications, tonotopic gradients were not found to be collinearly aligned along Heschl's gyrus; instead, two tonotopic maps ran diagonally across the anterior and posterior banks of Heschl's gyrus, set at a pronounced angle. On the basis of the direction of the tonotopic gradient, distinct subdivisions of the auditory cortex could be clearly demarcated that suggest homologies with the tonotopic organization in other primates. Finally, we applied our method to tinnitus patients to show that - contradictory to some pathophysiological models - tinnitus does not necessarily involve large-scale tonotopic reorganization. Overall, we expect that tonotopic mapping techniques will significantly enhance our ability to study the hierarchical functional organization of distinct auditory processing centers in the healthy and diseased human brain. PMID:23716248

  3. Integration and segregation in auditory scene analysis

    NASA Astrophysics Data System (ADS)

    Sussman, Elyse S.

    2005-03-01

    Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing that suggests that the auditory scene is rapidly organized into distinct streams and the integration of sequential elements into perceptual units takes place on the already-formed streams. This would allow for the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.

  4. Use of a multichannel cochlear implant in the congenitally and prelingually deaf population.

    PubMed

    Waltzman, S B; Cohen, N L; Shapiro, W H

    1992-04-01

    Fourteen children and three adults, each congenitally and prelinguistically deaf, received the Nucleus multichannel implant. All underwent extensive evaluations and rehabilitation. The surgery was uneventful, and no patients have been lost to follow-up. Results have shown a significant increase in auditory and speech reception and perception skills in all children. Some children have open-set speech recognition using the prosthesis alone. The adults have shown an increased awareness of sound along with minimal improvement in perceptual skills. This supports the concept that early implantation of congenitally and prelinguistically deaf individuals results in improved performance. PMID:1556888

  5. Prestimulus Network Integration of Auditory Cortex Predisposes Near-Threshold Perception Independently of Local Excitability

    PubMed Central

    Leske, Sabine; Ruhnau, Philipp; Frey, Julia; Lithari, Chrysa; Müller, Nadia; Hartmann, Thomas; Weisz, Nathan

    2015-01-01

    An ever-increasing number of studies are pointing to the importance of network properties of the brain for understanding behavior such as conscious perception. However, with regards to the influence of prestimulus brain states on perception, this network perspective has rarely been taken. Our recent framework predicts that brain regions crucial for a conscious percept are coupled prior to stimulus arrival, forming pre-established pathways of information flow and influencing perceptual awareness. Using magnetoencephalography (MEG) and graph theoretical measures, we investigated auditory conscious perception in a near-threshold (NT) task and found strong support for this framework. Relevant auditory regions showed an increased prestimulus interhemispheric connectivity. The left auditory cortex was characterized by a hub-like behavior and an enhanced integration into the brain functional network prior to perceptual awareness. Right auditory regions were decoupled from non-auditory regions, presumably forming an integrated information processing unit with the left auditory cortex. In addition, we show for the first time for the auditory modality that local excitability, measured by decreased alpha power in the auditory cortex, increases prior to conscious percepts. Importantly, we were able to show that connectivity states seem to be largely independent from local excitability states in the context of a NT paradigm. PMID:26408799

  6. Positron Emission Tomography in Cochlear Implant and Auditory Brainstem Implant Recipients.

    ERIC Educational Resources Information Center

    Miyamoto, Richard T.; Wong, Donald

    2001-01-01

    Positron emission tomography imaging was used to evaluate the brain's response to auditory stimulation, including speech, in deaf adults (five with cochlear implants and one with an auditory brainstem implant). Functional speech processing was associated with activation in areas classically associated with speech processing. (Contains five…

  7. Pilocarpine Seizures Cause Age-Dependent Impairment in Auditory Location Discrimination

    ERIC Educational Resources Information Center

    Neill, John C.; Liu, Zhao; Mikati, Mohammad; Holmes, Gregory L.

    2005-01-01

    Children who have status epilepticus have continuous or rapidly repeating seizures that may be life-threatening and may cause life-long changes in brain and behavior. The extent to which status epilepticus causes deficits in auditory discrimination is unknown. A naturalistic auditory location discrimination method was used to evaluate this…

  8. Prestimulus Network Integration of Auditory Cortex Predisposes Near-Threshold Perception Independently of Local Excitability.

    PubMed

    Leske, Sabine; Ruhnau, Philipp; Frey, Julia; Lithari, Chrysa; Müller, Nadia; Hartmann, Thomas; Weisz, Nathan

    2015-12-01

    An ever-increasing number of studies are pointing to the importance of network properties of the brain for understanding behavior such as conscious perception. However, with regards to the influence of prestimulus brain states on perception, this network perspective has rarely been taken. Our recent framework predicts that brain regions crucial for a conscious percept are coupled prior to stimulus arrival, forming pre-established pathways of information flow and influencing perceptual awareness. Using magnetoencephalography (MEG) and graph theoretical measures, we investigated auditory conscious perception in a near-threshold (NT) task and found strong support for this framework. Relevant auditory regions showed an increased prestimulus interhemispheric connectivity. The left auditory cortex was characterized by a hub-like behavior and an enhanced integration into the brain functional network prior to perceptual awareness. Right auditory regions were decoupled from non-auditory regions, presumably forming an integrated information processing unit with the left auditory cortex. In addition, we show for the first time for the auditory modality that local excitability, measured by decreased alpha power in the auditory cortex, increases prior to conscious percepts. Importantly, we were able to show that connectivity states seem to be largely independent from local excitability states in the context of a NT paradigm. PMID:26408799

  9. An anatomical and functional topography of human auditory cortical areas

    PubMed Central

    Moerel, Michelle; De Martino, Federico; Formisano, Elia

    2014-01-01

    While advances in magnetic resonance imaging (MRI) throughout the last decades have enabled the detailed anatomical and functional inspection of the human brain non-invasively, to date there is no consensus regarding the precise subdivision and topography of the areas forming the human auditory cortex. Here, we propose a topography of the human auditory areas based on insights on the anatomical and functional properties of human auditory areas as revealed by studies of cyto- and myelo-architecture and fMRI investigations at ultra-high magnetic field (7 Tesla). Importantly, we illustrate that—whereas a group-based approach to analyze functional (tonotopic) maps is appropriate to highlight the main tonotopic axis—the examination of tonotopic maps at single subject level is required to detail the topography of primary and non-primary areas that may be more variable across subjects. Furthermore, we show that considering multiple maps indicative of anatomical (i.e., myelination) as well as of functional properties (e.g., broadness of frequency tuning) is helpful in identifying auditory cortical areas in individual human brains. We propose and discuss a topography of areas that is consistent with old and recent anatomical post-mortem characterizations of the human auditory cortex and that may serve as a working model for neuroscience studies of auditory functions. PMID:25120426

  10. An anatomical and functional topography of human auditory cortical areas.

    PubMed

    Moerel, Michelle; De Martino, Federico; Formisano, Elia

    2014-01-01

    While advances in magnetic resonance imaging (MRI) throughout the last decades have enabled the detailed anatomical and functional inspection of the human brain non-invasively, to date there is no consensus regarding the precise subdivision and topography of the areas forming the human auditory cortex. Here, we propose a topography of the human auditory areas based on insights on the anatomical and functional properties of human auditory areas as revealed by studies of cyto- and myelo-architecture and fMRI investigations at ultra-high magnetic field (7 Tesla). Importantly, we illustrate that, whereas a group-based approach to analyze functional (tonotopic) maps is appropriate to highlight the main tonotopic axis, the examination of tonotopic maps at single subject level is required to detail the topography of primary and non-primary areas that may be more variable across subjects. Furthermore, we show that considering multiple maps indicative of anatomical (i.e., myelination) as well as of functional properties (e.g., broadness of frequency tuning) is helpful in identifying auditory cortical areas in individual human brains. We propose and discuss a topography of areas that is consistent with old and recent anatomical post-mortem characterizations of the human auditory cortex and that may serve as a working model for neuroscience studies of auditory functions. PMID:25120426

  11. Neurotrophic factor intervention restores auditory function in deafened animals

    NASA Astrophysics Data System (ADS)

    Shinohara, Takayuki; Bredberg, Göran; Ulfendahl, Mats; Pyykkö, Ilmari; Petri Olivius, N.; Kaksonen, Risto; Lindström, Bo; Altschuler, Richard; Miller, Josef M.

    2002-02-01

    A primary cause of deafness is damage of receptor cells in the inner ear. Clinically, it has been demonstrated that effective functionality can be provided by electrical stimulation of the auditory nerve, thus bypassing damaged receptor cells. However, subsequent to sensory cell loss there is a secondary degeneration of the afferent nerve fibers, resulting in reduced effectiveness of such cochlear prostheses. The effects of neurotrophic factors were tested in a guinea pig cochlear prosthesis model. After chemical deafening to mimic the clinical situation, the neurotrophic factors brain-derived neurotrophic factor and an analogue of ciliary neurotrophic factor were infused directly into the cochlea of the inner ear for 26 days by using an osmotic pump system. An electrode introduced into the cochlea was used to elicit auditory responses just as in patients implanted with cochlear prostheses. Intervention with brain-derived neurotrophic factor and the ciliary neurotrophic factor analogue not only increased the survival of auditory spiral ganglion neurons, but significantly enhanced the functional responsiveness of the auditory system as measured by using electrically evoked auditory brainstem responses. This demonstration that neurotrophin intervention enhances threshold sensitivity within the auditory system will have great clinical importance for the treatment of deaf patients with cochlear prostheses. The findings have direct implications for the enhancement of responsiveness in deafferented peripheral nerves.

  12. Least squares restoration of multichannel images

    NASA Technical Reports Server (NTRS)

    Galatsanos, Nikolas P.; Katsaggelos, Aggelos K.; Chin, Roland T.; Hillery, Allen D.

    1991-01-01

    Multichannel restoration using both within- and between-channel deterministic information is considered. A multichannel image is a set of image planes that exhibit cross-plane similarity. Existing optimal restoration filters for single-plane images yield suboptimal results when applied to multichannel images, since between-channel information is not utilized. Multichannel least squares restoration filters are developed using the set theoretic and the constrained optimization approaches. A geometric interpretation of the estimates of both filters is given. Color images (three-channel imagery with red, green, and blue components) are considered. Constraints that capture the within- and between-channel properties of color images are developed. Issues associated with the computation of the two estimates are addressed. A spatially adaptive, multichannel least squares filter that utilizes local within- and between-channel image properties is proposed. Experiments using color images are described.
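
    The flavor of a least squares restoration that exploits both within-channel smoothness and between-channel similarity can be conveyed with a small 1-D, three-channel toy problem. The sketch below is a generic Tikhonov-style formulation written for illustration, not the filters derived in the paper; the blur model, penalty weights, and function name are assumptions.

```python
import numpy as np

def multichannel_ls_restore(g, H, alpha=0.01, beta=0.1):
    """Jointly restore C blurred channels by solving the normal equations of
    sum_c ||g_c - H f_c||^2 + alpha*||D f_c||^2 + beta*sum_{c<c'} ||f_c - f_c'||^2,
    i.e. a data-fit term, a within-channel smoothness term, and a between-channel
    similarity term."""
    C, N = g.shape
    D = np.eye(N) - np.roll(np.eye(N), 1, axis=1)       # first-difference (smoothness) operator
    A = np.zeros((C * N, C * N))
    b = np.zeros(C * N)
    block = H.T @ H + alpha * (D.T @ D)
    for c in range(C):
        A[c * N:(c + 1) * N, c * N:(c + 1) * N] = block + beta * (C - 1) * np.eye(N)
        b[c * N:(c + 1) * N] = H.T @ g[c]
        for c2 in range(C):
            if c2 != c:
                A[c * N:(c + 1) * N, c2 * N:(c2 + 1) * N] = -beta * np.eye(N)
    return np.linalg.solve(A, b).reshape(C, N)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    N = 64
    base = ((np.arange(N) >= 20) & (np.arange(N) < 40)).astype(float)
    truth = np.stack([a * base for a in (1.0, 0.9, 1.1)])                  # three similar channels
    H = sum(np.roll(np.eye(N), k, axis=1) for k in (-1, 0, 1)) / 3.0       # 3-tap moving-average blur
    observed = truth @ H.T + rng.normal(0.0, 0.01, truth.shape)
    restored = multichannel_ls_restore(observed, H, alpha=0.005, beta=0.5)
    print("RMS error of blurred observation:", round(float(np.sqrt(np.mean((observed - truth) ** 2))), 4))
    print("RMS error after joint restoration:", round(float(np.sqrt(np.mean((restored - truth) ** 2))), 4))
```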

  13. Hearing it right: Evidence of hemispheric lateralization in auditory imagery.

    PubMed

    Prete, Giulia; Marzoli, Daniele; Brancucci, Alfredo; Tommasi, Luca

    2016-02-01

    An advantage of the right ear (REA) in auditory processing (especially for verbal content) has been firmly established in decades of behavioral, electrophysiological and neuroimaging research. The laterality of auditory imagery, however, has received little attention, despite its potential relevance for the understanding of auditory hallucinations and related phenomena. In Experiments 1-4 we find that right-handed participants required to imagine hearing a voice or a sound unilaterally show a strong population bias to localize the self-generated auditory image at their right ear, likely the result of left-hemispheric dominance in auditory processing. In Experiments 5-8 - by means of the same paradigm - it was also ascertained that the right-ear bias for hearing imagined voices depends just on auditory attention mechanisms, as biases due to other factors (i.e., lateralized movements) were controlled. These results, suggesting a central role of the left hemisphere in auditory imagery, demonstrate that brain asymmetries can drive strong lateral biases in mental imagery. PMID:26706706

  14. Auditory brainstem response to complex sounds: a tutorial

    PubMed Central

    Skoe, Erika; Kraus, Nina

    2010-01-01

    This tutorial provides a comprehensive overview of the methodological approach to collecting and analyzing auditory brainstem responses to complex sounds (cABRs). cABRs provide a window into how behaviorally relevant sounds such as speech and music are processed in the brain. Because temporal and spectral characteristics of sounds are preserved in this subcortical response, cABRs can be used to assess specific impairments and enhancements in auditory processing. Notably, subcortical function is neither passive nor hardwired but dynamically interacts with higher-level cognitive processes to refine how sounds are transcribed into neural code. This experience-dependent plasticity, which can occur on a number of time scales (e.g., life-long experience with speech or music, short-term auditory training, online auditory processing), helps shape sensory perception. Thus, by being an objective and non-invasive means for examining cognitive function and experience-dependent processes in sensory activity, cABRs have considerable utility in the study of populations where auditory function is of interest (e.g., auditory experts such as musicians, persons with hearing loss, auditory processing and language disorders). This tutorial is intended for clinicians and researchers seeking to integrate cABRs into their clinical and/or research programs. PMID:20084007

  15. No Counterpart of Visual Perceptual Echoes in the Auditory System

    PubMed Central

    İlhan, Barkın; VanRullen, Rufin

    2012-01-01

    It has been previously demonstrated by our group that a visual stimulus made of dynamically changing luminance evokes an echo or reverberation at ∼10 Hz, lasting up to a second. In this study we aimed to reveal whether similar echoes also exist in the auditory modality. A dynamically changing auditory stimulus equivalent to the visual stimulus was designed and employed in two separate series of experiments, and the presence of reverberations was analyzed based on reverse correlations between stimulus sequences and EEG epochs. The first experiment directly compared visual and auditory stimuli: while previous findings of ∼10 Hz visual echoes were verified, no similar echo was found in the auditory modality regardless of frequency. In the second experiment, we tested if auditory sequences would influence the visual echoes when they were congruent or incongruent with the visual sequences. However, the results in that case similarly did not reveal any auditory echoes, nor any change in the characteristics of visual echoes as a function of audio-visual congruence. The negative findings from these experiments suggest that brain oscillations do not equivalently affect early sensory processes in the visual and auditory modalities, and that alpha (8–13 Hz) oscillations play a special role in vision. PMID:23145143
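
    The reverse-correlation analysis referred to above can be sketched in a few lines: with a white (random) stimulus sequence, cross-correlating the stimulus with the EEG at increasing lags yields an estimate of the stimulus-locked impulse response, whose oscillatory tail is the "echo". The code below is a self-contained toy with simulated data; the sampling rate, trial length, simulated kernel, and function name are all assumptions.

```python
import numpy as np

def echo_function(stimulus, eeg, max_lag):
    """Reverse correlation: cross-correlate a random stimulus sequence with the
    simultaneously recorded EEG to estimate the stimulus-locked impulse response."""
    s = (stimulus - stimulus.mean()) / stimulus.std()
    e = eeg - eeg.mean()
    return np.array([np.dot(s[:len(s) - lag], e[lag:]) / (len(s) - lag)
                     for lag in range(max_lag)])

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    fs, dur = 200, 30.0                       # assumed sampling rate (Hz) and trial length (s)
    n = int(fs * dur)
    stim = rng.normal(size=n)                 # dynamically changing random stimulus sequence
    t = np.arange(fs) / fs                    # 1-s impulse response support
    kernel = np.exp(-t / 0.3) * np.sin(2 * np.pi * 10 * t)    # simulated ~10 Hz "echo"
    eeg = np.convolve(stim, kernel)[:n] + rng.normal(0.0, 2.0, n)
    irf = echo_function(stim, eeg, max_lag=len(kernel))
    freqs = np.fft.rfftfreq(len(irf), 1.0 / fs)
    peak = freqs[np.argmax(np.abs(np.fft.rfft(irf - irf.mean())))]
    print("dominant frequency of the recovered impulse response:", peak, "Hz")
```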

  16. A corollary discharge maintains auditory sensitivity during sound production

    NASA Astrophysics Data System (ADS)

    Poulet, James F. A.; Hedwig, Berthold

    2002-08-01

    Speaking and singing present the auditory system of the caller with two fundamental problems: discriminating between self-generated and external auditory signals and preventing desensitization. In humans and many other vertebrates, auditory neurons in the brain are inhibited during vocalization but little is known about the nature of the inhibition. Here we show, using intracellular recordings of auditory neurons in the singing cricket, that presynaptic inhibition of auditory afferents and postsynaptic inhibition of an identified auditory interneuron occur in phase with the song pattern. Presynaptic and postsynaptic inhibition persist in a fictively singing, isolated cricket central nervous system and are therefore the result of a corollary discharge from the singing motor network. Mimicking inhibition in the interneuron by injecting hyperpolarizing current suppresses its spiking response to a 100-dB sound pressure level (SPL) acoustic stimulus and maintains its response to subsequent, quieter stimuli. Inhibition by the corollary discharge reduces the neural response to self-generated sound and protects the cricket's auditory pathway from self-induced desensitization.

  17. Selective corticostriatal plasticity during acquisition of an auditory discrimination task

    PubMed Central

    Xiong, Qiaojie; Znamenskiy, Petr; Zador, Anthony M

    2015-01-01

    Perceptual decisions are based on the activity of sensory cortical neurons, but how organisms learn to transform this activity into appropriate actions remains unknown. Projections from the auditory cortex to the auditory striatum carry information that drives decisions in an auditory frequency discrimination task. To assess the role of these projections in learning, we developed a Channelrhodopsin-2-based assay to selectively probe for synaptic plasticity associated with corticostriatal neurons representing different frequencies. Here we report that learning this auditory discrimination preferentially potentiates corticostriatal synapses from neurons representing either high or low frequencies, depending on reward contingencies. We observed frequency-dependent corticostriatal potentiation in vivo over the course of training, and in vitro in striatal brain slices. Our findings suggest a model in which the corticostriatal synapses made by neurons tuned to different features of the sound are selectively potentiated to enable the learned transformation of sound into action. PMID:25731173

  18. Selective corticostriatal plasticity during acquisition of an auditory discrimination task.

    PubMed

    Xiong, Qiaojie; Znamenskiy, Petr; Zador, Anthony M

    2015-05-21

    Perceptual decisions are based on the activity of sensory cortical neurons, but how organisms learn to transform this activity into appropriate actions remains unknown. Projections from the auditory cortex to the auditory striatum carry information that drives decisions in an auditory frequency discrimination task. To assess the role of these projections in learning, we developed a channelrhodopsin-2-based assay to probe selectively for synaptic plasticity associated with corticostriatal neurons representing different frequencies. Here we report that learning this auditory discrimination preferentially potentiates corticostriatal synapses from neurons representing either high or low frequencies, depending on reward contingencies. We observe frequency-dependent corticostriatal potentiation in vivo over the course of training, and in vitro in striatal brain slices. Our findings suggest a model in which the corticostriatal synapses made by neurons tuned to different features of the sound are selectively potentiated to enable the learned transformation of sound into action. PMID:25731173

  19. Auditory evoked responses to rhythmic sound pulses in dolphins.

    PubMed

    Popov, V V; Supin, A Y

    1998-10-01

    The ability of auditory evoked potentials to follow sound pulse (click or pip) rate was studied in bottlenosed dolphins. Sound pulses were presented in 20-ms rhythmic trains separated by 80-ms pauses. Rhythmic click or pip trains evoked a quasi-sustained response consisting of a sequence of auditory brainstem responses. This was designated as the rate-following response. Rate-following response peak-to-peak amplitude dependence on sound pulse rate was almost flat up to 200 s⁻¹, then displayed a few peaks and valleys superimposed on a low-pass filtering function with a cut-off frequency of 1700 s⁻¹ at a 0.1-amplitude level. Peaks and valleys of the function corresponded to the pattern of the single auditory brainstem response spectrum; the low-pass cut-off frequency was below the auditory brainstem response spectrum bandwidth. Rate-following response frequency composition (magnitudes of the fundamental and harmonics) corresponded to the auditory brainstem response frequency spectrum except for lower fundamental magnitudes at frequencies above 1700 Hz. These regularities were similar for both click and pip trains. The rate-following response to steady-state rhythmic stimulation was similar to the rate-following response evoked by short trains except for a slight amplitude decrease with the rate increase above 10 s⁻¹. The latter effect is attributed to a long-term rate-dependent adaptation in conditions of the steady-state pulse stimulation. PMID:9809455

  20. [Neural Representation of Sound Texture in the Auditory Cortex].

    PubMed

    Shiramatsu Isoguchi, Tomoyo; Takahashi, Hirokazu

    2015-06-01

    Natural sounds have a variety of sound spectra, which produce the so-called textures of sounds. These sound textures are extracted and perceived through interactions of the auditory, emotional, and cognitive systems in our brain. Recent studies have investigated how our brain handles musical sound textures, such as consonant and dissonant chords, or major and minor scales. Accumulating evidence indicates that the mammalian auditory system has adapted to extract the harmonic structure of sounds and that this adaptation plays crucial roles in the perception of the consonance of two-tone chords. In addition, functional magnetic resonance imaging studies have shown that major and minor scales activate not only the auditory system but also the emotional and cognitive systems. Our study revealed that phase synchrony within the auditory cortex of rodents represents the tonality of three-tone chords in a band-specific manner, and these findings support the hypothesis that the auditory system interacts with the emotional and/or cognitive systems. Thus, the neural bases for the perception of sound textures are widely distributed within our brain, and the evolution of these neural systems significantly affects the establishment of musical grammar. PMID:26062583

  1. Role of the auditory system in speech production.

    PubMed

    Guenther, Frank H; Hickok, Gregory

    2015-01-01

    This chapter reviews evidence regarding the role of auditory perception in shaping speech output. Evidence indicates that speech movements are planned to follow auditory trajectories. This in turn is followed by a description of the Directions Into Velocities of Articulators (DIVA) model, which provides a detailed account of the role of auditory feedback in speech motor development and control. A brief description of the higher-order brain areas involved in speech sequencing (including the pre-supplementary motor area and inferior frontal sulcus) is then provided, followed by a description of the Hierarchical State Feedback Control (HSFC) model, which posits internal error detection and correction processes that can detect and correct speech production errors prior to articulation. The chapter closes with a treatment of promising future directions of research into auditory-motor interactions in speech, including the use of intracranial recording techniques such as electrocorticography in humans, the investigation of the potential roles of various large-scale brain rhythms in speech perception and production, and the development of brain-computer interfaces that use auditory feedback to allow profoundly paralyzed users to learn to produce speech using a speech synthesizer. PMID:25726268

  2. Overriding auditory attentional capture.

    PubMed

    Dalton, Polly; Lavie, Nilli

    2007-02-01

    Attentional capture by color singletons during shape search can be eliminated when the target is not a feature singleton (Bacon & Egeth, 1994). This suggests that a "singleton detection" search strategy must be adopted for attentional capture to occur. Here we find similar effects on auditory attentional capture. Irrelevant high-intensity singletons interfered with an auditory search task when the target itself was also a feature singleton. However, singleton interference was eliminated when the target was not a singleton (i.e., when nontargets were made heterogeneous, or when more than one target sound was presented). These results suggest that auditory attentional capture depends on the observer's attentional set, as does visual attentional capture. The suggestion that hearing might act as an early warning system that would always be tuned to unexpected unique stimuli must therefore be modified to accommodate these strategy-dependent capture effects. PMID:17557587

  3. Multi-channel fiber photometry for population neuronal activity recording

    PubMed Central

    Guo, Qingchun; Zhou, Jingfeng; Feng, Qiru; Lin, Rui; Gong, Hui; Luo, Qingming; Zeng, Shaoqun; Luo, Minmin; Fu, Ling

    2015-01-01

    Fiber photometry has become increasingly popular among neuroscientists as a convenient tool for the recording of genetically defined neuronal population in behaving animals. Here, we report the development of the multi-channel fiber photometry system to simultaneously monitor neural activities in several brain areas of an animal or in different animals. In this system, a galvano-mirror modulates and cyclically couples the excitation light to individual multimode optical fiber bundles. A single photodetector collects excited light and the configuration of fiber bundle assembly and the scanner determines the total channel number. We demonstrated that the system exhibited negligible crosstalk between channels and optical signals could be sampled simultaneously with a sample rate of at least 100 Hz for each channel, which is sufficient for recording calcium signals. Using this system, we successfully recorded GCaMP6 fluorescent signals from the bilateral barrel cortices of a head-restrained mouse in a dual-channel mode, and the orbitofrontal cortices of multiple freely moving mice in a triple-channel mode. The multi-channel fiber photometry system would be a valuable tool for simultaneous recordings of population activities in different brain areas of a given animal and different interacting individuals. PMID:26504642
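
    Because a single photodetector serves all channels, the essential software step is demultiplexing the time-multiplexed detector trace back into per-channel signals. The sketch below illustrates that step under simplified assumptions (fixed dwell time per channel, perfect synchronization, invented signal shapes and function names); it is not the authors' acquisition code.

```python
import numpy as np

def demultiplex(detector, n_channels, samples_per_dwell):
    """Split a time-multiplexed photodetector trace into per-channel signals.

    Assumes the scanner dwells on each channel for `samples_per_dwell` consecutive
    samples and cycles through the channels in a fixed order.
    """
    cycle = n_channels * samples_per_dwell
    n_cycles = len(detector) // cycle
    x = np.asarray(detector)[:n_cycles * cycle].reshape(n_cycles, n_channels, samples_per_dwell)
    return x.mean(axis=2).T          # shape (n_channels, n_cycles): one sample per channel per cycle

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    n_channels, samples_per_dwell, n_cycles = 3, 20, 500
    t = np.arange(n_cycles) / 100.0                              # 100 Hz effective per-channel rate
    truth = np.vstack([1.0 + 0.5 * np.sin(2 * np.pi * f * t) for f in (0.5, 1.0, 2.0)])
    # Interleave the channels dwell-block by dwell-block, as the galvo scan would.
    detector = np.repeat(truth.T, samples_per_dwell, axis=1).reshape(-1)
    detector = detector + rng.normal(0.0, 0.05, detector.size)
    recovered = demultiplex(detector, n_channels, samples_per_dwell)
    print("correlation with ground truth per channel:",
          [round(float(np.corrcoef(recovered[c], truth[c])[0, 1]), 3) for c in range(n_channels)])
```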

  4. Psychophysical and Neural Correlates of Auditory Attraction and Aversion

    NASA Astrophysics Data System (ADS)

    Patten, Kristopher Jakob

    This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based on auditory scene analysis, the process through which listeners parse auditory signals into individual acoustic sources. The first experiment tests and confirms that a self-rated pleasantness continuum reliably exists for 20 various stimuli (r = .48). In addition, the pleasantness continuum correlated with the physical acoustic characteristics of consonance/dissonance (r = .78), which can facilitate auditory parsing processes. The second experiment uses an fMRI block design to test blood oxygen level dependent (BOLD) changes elicited by a subset of 5 exemplar stimuli chosen from Experiment 1 that are evenly distributed over the pleasantness continuum. Specifically, it tests and confirms that the pleasantness continuum produces systematic changes in brain activity for unpleasant acoustic stimuli beyond what occurs with pleasant auditory stimuli. Results revealed that the combination of two positively and two negatively valenced experimental sounds compared to one neutral baseline control elicited BOLD increases in the primary auditory cortex, specifically the bilateral superior temporal gyrus, and left dorsomedial prefrontal cortex; the latter being consistent with a frontal decision-making process common in identification tasks. The negatively-valenced stimuli yielded additional BOLD increases in the left insula, which typically indicates processing of visceral emotions. The positively-valenced stimuli did not yield any significant BOLD activation, consistent with consonant, harmonic stimuli being the prototypical acoustic pattern of auditory objects that is optimal for auditory scene analysis. Both the psychophysical findings of Experiment 1 and the neural processing findings of Experiment 2 support that consonance is an important dimension of sound that is processed in a manner that aids

  5. Cross-Modal Plasticity in Higher-Order Auditory Cortex of Congenitally Deaf Cats Does Not Limit Auditory Responsiveness to Cochlear Implants

    PubMed Central

    Baumhoff, Peter; Tillein, Jochen; Lomber, Stephen G.; Hubka, Peter; Kral, Andrej

    2016-01-01

    Congenital sensory deprivation can lead to reorganization of the deprived cortical regions by another sensory system. Such cross-modal reorganization may either compete with or complement the “original“ inputs to the deprived area after sensory restoration and can thus be either adverse or beneficial for sensory restoration. In congenital deafness, a previous inactivation study documented that supranormal visual behavior was mediated by higher-order auditory fields in congenitally deaf cats (CDCs). However, both the auditory responsiveness of “deaf” higher-order fields and interactions between the reorganized and the original sensory input remain unknown. Here, we studied a higher-order auditory field responsible for the supranormal visual function in CDCs, the auditory dorsal zone (DZ). Hearing cats and visual cortical areas served as a control. Using mapping with microelectrode arrays, we demonstrate spatially scattered visual (cross-modal) responsiveness in the DZ, but show that this did not interfere substantially with robust auditory responsiveness elicited through cochlear implants. Visually responsive and auditory-responsive neurons in the deaf auditory cortex formed two distinct populations that did not show bimodal interactions. Therefore, cross-modal plasticity in the deaf higher-order auditory cortex had limited effects on auditory inputs. The moderate number of scattered cross-modally responsive neurons could be the consequence of exuberant connections formed during development that were not pruned postnatally in deaf cats. Although juvenile brain circuits are modified extensively by experience, the main driving input to the cross-modally (visually) reorganized higher-order auditory cortex remained auditory in congenital deafness. SIGNIFICANCE STATEMENT In a common view, the “unused” auditory cortex of deaf individuals is reorganized to a compensatory sensory function during development. According to this view, cross-modal plasticity takes

  6. Multichannel Spectrometer of Time Distribution

    NASA Astrophysics Data System (ADS)

    Akindinova, E. V.; Babenko, A. G.; Vakhtel, V. M.; Evseev, N. A.; Rabotkin, V. A.; Kharitonova, D. D.

    2015-06-01

    For the study and monitoring of the characteristics of radiation fluxes, in particular from radioactive sources (as, for example, in paper [1]), a spectrometer and associated methods of data measurement and processing were developed around the MC-2A multichannel counter (SPC "ASPECT"), which records the time intervals between the arrivals of random events (particle-detector pulses). The spectrometer has four independent channels for registering the times of pulse arrival, together with corresponding amplitude (spectrometric) channels that use the energy spectra to verify that each signal path from detector to amplifier operates in a stationary manner. Alpha radiation is registered by semiconductor detectors with an energy resolution of 16-30 keV. The spectrometer has been used to measure oscillations in the intensity of the 239-Pu alpha-radiation flux, followed by an autocorrelative statistical analysis of the time series of readings.
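    A minimal sketch of the autocorrelative analysis mentioned above, applied to a simulated time series of count-rate readings; the Poisson counts and bin structure are placeholders, not spectrometer data.

```python
# Minimal sketch of an autocorrelation analysis: estimate the normalized
# autocorrelation function of a time series of count-rate readings.
# The simulated Poisson counts are placeholders, not spectrometer data.
import numpy as np

rng = np.random.default_rng(1)
counts = rng.poisson(lam=50, size=2048).astype(float)   # hypothetical readings per time bin

x = counts - counts.mean()
acf = np.correlate(x, x, mode="full")[x.size - 1:]      # lags 0, 1, 2, ...
acf /= acf[0]                                           # normalize so acf[0] == 1

for lag in (0, 1, 10, 100):
    print(f"lag {lag:4d}: r = {acf[lag]:+.3f}")
```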

  7. Primate reaching cued by multichannel spatiotemporal cortical microstimulation.

    PubMed

    Fitzsimmons, N A; Drake, W; Hanson, T L; Lebedev, M A; Nicolelis, M A L

    2007-05-23

    Both humans and animals can discriminate signals delivered to sensory areas of their brains using electrical microstimulation. This opens the possibility of creating an artificial sensory channel that could be implemented in neuroprosthetic devices. Although microstimulation delivered through multiple implanted electrodes could be beneficial for this purpose, appropriate microstimulation protocols have not been developed. Here, we report a series of experiments in which owl monkeys performed reaching movements guided by spatiotemporal patterns of cortical microstimulation delivered to primary somatosensory cortex through chronically implanted multielectrode arrays. The monkeys learned to discriminate microstimulation patterns, and their ability to learn new patterns and new behavioral rules improved during several months of testing. Significantly, information was conveyed to the brain through the interplay of microstimulation patterns delivered to multiple electrodes and the temporal order in which these electrodes were stimulated. This suggests multichannel microstimulation as a viable means of sensorizing neural prostheses. PMID:17522304

  8. An auditory feature detection circuit for sound pattern recognition

    PubMed Central

    Schöneich, Stefan; Kostarakos, Konstantinos; Hedwig, Berthold

    2015-01-01

    From human language to birdsong and the chirps of insects, acoustic communication is based on amplitude and frequency modulation of sound signals. Whereas frequency processing starts at the level of the hearing organs, temporal features of the sound amplitude such as rhythms or pulse rates require processing by central auditory neurons. Besides several theoretical concepts, brain circuits that detect temporal features of a sound signal are poorly understood. We focused on acoustically communicating field crickets and show how five neurons in the brain of females form an auditory feature detector circuit for the pulse pattern of the male calling song. The processing is based on a coincidence detector mechanism that selectively responds when a direct neural response and an intrinsically delayed response to the sound pulses coincide. This circuit provides the basis for auditory mate recognition in field crickets and reveals a principal mechanism of sensory processing underlying the perception of temporal patterns. PMID:26601259

  9. Altered intrinsic connectivity of the auditory cortex in congenital amusia.

    PubMed

    Leveque, Yohana; Fauvel, Baptiste; Groussard, Mathilde; Caclin, Anne; Albouy, Philippe; Platel, Hervé; Tillmann, Barbara

    2016-07-01

    Congenital amusia, a neurodevelopmental disorder of music perception and production, has been associated with abnormal anatomical and functional connectivity in a right frontotemporal pathway. To investigate whether spontaneous connectivity in brain networks involving the auditory cortex is altered in the amusic brain, we ran a seed-based connectivity analysis, contrasting at-rest functional MRI data of amusic and matched control participants. Our results reveal reduced frontotemporal connectivity in amusia during resting state, as well as an overconnectivity between the auditory cortex and the default mode network (DMN). The findings suggest that the auditory cortex is intrinsically more engaged toward internal processes and less available to external stimuli in amusics compared with controls. Beyond amusia, our findings provide new evidence for the link between cognitive deficits in pathology and abnormalities in the connectivity between sensory areas and the DMN at rest. PMID:27009161

  10. Multichannel, Active Low-Pass Filters

    NASA Technical Reports Server (NTRS)

    Lev, James J.

    1989-01-01

    Multichannel integrated circuits cascaded to obtain matched characteristics. Gain and phase characteristics of channels of multichannel, multistage, active, low-pass filter matched by making filter of cascaded multichannel integrated-circuit operational amplifiers. Concept takes advantage of inherent equality of electrical characteristics of nominally-identical circuit elements made on same integrated-circuit chip. Characteristics of channels vary identically with changes in temperature. If additional matched channels needed, chips containing more than two operational amplifiers apiece (e.g., commercial quad operational amplifiers) used. Concept applicable to variety of equipment requiring matched gain and phase in multiple channels - radar, test instruments, communication circuits, and equipment for electronic countermeasures.
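    The matching principle can be illustrated in software: channels built from identical cascaded first-order low-pass stages share the same gain and phase at every frequency. The sketch below evaluates one such channel with a hypothetical 1 kHz cutoff; it illustrates the concept only and is not a model of the NASA circuit.

```python
# Minimal sketch of why cascading identical stages yields matched channels.
# Component values are illustrative, not taken from the tech brief.
import numpy as np
from scipy import signal

fc = 1e3                               # hypothetical cutoff frequency, Hz
w0 = 2 * np.pi * fc

# One first-order stage has H(s) = w0 / (s + w0); a channel cascades two such
# stages, giving w0^2 / (s^2 + 2*w0*s + w0^2). Any channel built from the same
# stages has exactly the same gain and phase curves, hence matched channels.
channel = signal.TransferFunction([w0 * w0], [1, 2 * w0, w0 * w0])

w = 2 * np.pi * np.logspace(1, 5, 5)   # test frequencies in rad/s (10 Hz to 100 kHz)
_, mag_db, phase_deg = signal.bode(channel, w)
for f, m, p in zip(w / (2 * np.pi), mag_db, phase_deg):
    print(f"{f:9.0f} Hz  gain {m:7.2f} dB  phase {p:8.2f} deg")
```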

  11. Multi-channel polarized thermal emitter

    DOEpatents

    Lee, Jae-Hwang; Ho, Kai-Ming; Constant, Kristen P

    2013-07-16

    A multi-channel polarized thermal emitter (PTE) is presented. The multi-channel PTE can emit polarized thermal radiation without using a polarizer at normal emergence. The multi-channel PTE consists of two layers of metallic gratings on a monolithic and homogeneous metallic plate. It can be fabricated by a low-cost soft lithography technique called two-polymer microtransfer molding. The spectral positions of the mid-infrared (MIR) radiation peaks can be tuned by changing the periodicity of the gratings, and the spectral separation between peaks is tuned by changing the mutual angle between the orientations of the two gratings.

  12. A Multichannel Bioluminescence Determination Platform for Bioassays.

    PubMed

    Kim, Sung-Bae; Naganawa, Ryuichi

    2016-01-01

    The present protocol introduces a multichannel bioluminescence determination platform allowing a high sample throughput determination of weak bioluminescence with reduced standard deviations. The platform is designed to carry a multichannel conveyer, an optical filter, and a mirror cap. The platform enables us to near-simultaneously determine ligands in multiple samples without the replacement of the sample tubes. Furthermore, the optical filters beneath the multichannel conveyer are designed to easily discriminate colors during assays. This optical system provides excellent time- and labor-efficiency to users during bioassays. PMID:27424912

  13. Persistent neural activity in auditory cortex is related to auditory working memory in humans and nonhuman primates.

    PubMed

    Huang, Ying; Matysiak, Artur; Heil, Peter; König, Reinhard; Brosch, Michael

    2016-01-01

    Working memory is the cognitive capacity of short-term storage of information for goal-directed behaviors. Where and how this capacity is implemented in the brain are unresolved questions. We show that auditory cortex stores information by persistent changes of neural activity. We separated activity related to working memory from activity related to other mental processes by having humans and monkeys perform different tasks with varying working memory demands on the same sound sequences. Working memory was reflected in the spiking activity of individual neurons in auditory cortex and in the activity of neuronal populations, that is, in local field potentials and magnetic fields. Our results provide direct support for the idea that temporary storage of information recruits the same brain areas that also process the information. Because similar activity was observed in the two species, the cellular bases of some auditory working memory processes in humans can be studied in monkeys. PMID:27438411

  14. Auditory Memory for Timbre

    ERIC Educational Resources Information Center

    McKeown, Denis; Wellsted, David

    2009-01-01

    Psychophysical studies are reported examining how the context of recent auditory stimulation may modulate the processing of new sounds. The question posed is how recent tone stimulation may affect ongoing performance in a discrimination task. In the task, two complex sounds occurred in successive intervals. A single target component of one complex…

  15. Incidental Auditory Category Learning

    PubMed Central

    Gabay, Yafit; Dick, Frederic K.; Zevin, Jason D.; Holt, Lori L.

    2015-01-01

    Very little is known about how auditory categories are learned incidentally, without instructions to search for category-diagnostic dimensions, overt category decisions, or experimenter-provided feedback. This is an important gap because learning in the natural environment does not arise from explicit feedback and there is evidence that the learning systems engaged by traditional tasks are distinct from those recruited by incidental category learning. We examined incidental auditory category learning with a novel paradigm, the Systematic Multimodal Associations Reaction Time (SMART) task, in which participants rapidly detect and report the appearance of a visual target in one of four possible screen locations. Although the overt task is rapid visual detection, a brief sequence of sounds precedes each visual target. These sounds are drawn from one of four distinct sound categories that predict the location of the upcoming visual target. These many-to-one auditory-to-visuomotor correspondences support incidental auditory category learning. Participants incidentally learn categories of complex acoustic exemplars and generalize this learning to novel exemplars and tasks. Further, learning is facilitated when category exemplar variability is more tightly coupled to the visuomotor associations than when the same stimulus variability is experienced across trials. We relate these findings to phonetic category learning. PMID:26010588

  16. Auditory Channel Problems.

    ERIC Educational Resources Information Center

    Mann, Philip H.; Suiter, Patricia A.

    This teacher's guide contains a list of general auditory problem areas where students have the following problems: (a) inability to find or identify source of sound; (b) difficulty in discriminating sounds of words and letters; (c) difficulty with reproducing pitch, rhythm, and melody; (d) difficulty in selecting important from unimportant sounds;…

  17. You can't stop the music: reduced auditory alpha power and coupling between auditory and memory regions facilitate the illusory perception of music during noise.

    PubMed

    Müller, Nadia; Keil, Julian; Obleser, Jonas; Schulz, Hannah; Grunwald, Thomas; Bernays, René-Ludwig; Huppertz, Hans-Jürgen; Weisz, Nathan

    2013-10-01

    Our brain has the capacity of providing an experience of hearing even in the absence of auditory stimulation. This can be seen as illusory conscious perception. While increasing evidence postulates that conscious perception requires specific brain states that systematically relate to specific patterns of oscillatory activity, the relationship between auditory illusions and oscillatory activity remains mostly unexplained. To investigate this we recorded brain activity with magnetoencephalography and collected intracranial data from epilepsy patients while participants listened to familiar as well as unknown music that was partly replaced by sections of pink noise. We hypothesized that participants have a stronger experience of hearing music throughout noise when the noise sections are embedded in familiar compared to unfamiliar music. This was supported by the behavioral results showing that participants rated the perception of music during noise as stronger when noise was presented in a familiar context. Time-frequency data show that the illusory perception of music is associated with a decrease in auditory alpha power pointing to increased auditory cortex excitability. Furthermore, the right auditory cortex is concurrently synchronized with the medial temporal lobe, putatively mediating memory aspects associated with the music illusion. We thus assume that neuronal activity in the highly excitable auditory cortex is shaped through extensive communication between the auditory cortex and the medial temporal lobe, thereby generating the illusion of hearing music during noise. PMID:23664946
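    A minimal sketch of an alpha-band power estimate of the kind used above to index auditory cortex excitability; the synthetic signal, sampling rate, and 8-12 Hz band limits are illustrative assumptions, not parameters from the study.

```python
# Minimal sketch of an alpha-band (8-12 Hz) power estimate from a single sensor.
# The signal here is synthetic noise, not MEG or intracranial data.
import numpy as np
from scipy import signal

fs = 600.0                                    # hypothetical sampling rate, Hz
rng = np.random.default_rng(2)
x = rng.normal(size=int(10 * fs))             # 10 s of placeholder sensor data

sos = signal.butter(4, [8, 12], btype="bandpass", fs=fs, output="sos")
alpha = signal.sosfiltfilt(sos, x)            # zero-phase alpha-band filtering

envelope = np.abs(signal.hilbert(alpha))      # instantaneous alpha amplitude
alpha_power = envelope ** 2
print(f"mean alpha power: {alpha_power.mean():.4f}")
```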

  18. Prediction in the service of comprehension: modulated early brain responses to omitted speech segments.

    PubMed

    Bendixen, Alexandra; Scharinger, Mathias; Strauß, Antje; Obleser, Jonas

    2014-04-01

    Speech signals are often compromised by disruptions originating from external (e.g., masking noise) or internal (e.g., inaccurate articulation) sources. Speech comprehension thus entails detecting and replacing missing information based on predictive and restorative neural mechanisms. The present study targets predictive mechanisms by investigating the influence of a speech segment's predictability on early, modality-specific electrophysiological responses to this segment's omission. Predictability was manipulated in simple physical terms in a single-word framework (Experiment 1) or in more complex semantic terms in a sentence framework (Experiment 2). In both experiments, final consonants of the German words Lachs ([laks], salmon) or Latz ([lats], bib) were occasionally omitted, resulting in the syllable La ([la], no semantic meaning), while brain responses were measured with multi-channel electroencephalography (EEG). In both experiments, the occasional presentation of the fragment La elicited a larger omission response when the final speech segment had been predictable. The omission response occurred ∼125-165 msec after the expected onset of the final segment and showed characteristics of the omission mismatch negativity (MMN), with generators in auditory cortical areas. Suggestive of a general auditory predictive mechanism at work, this main observation was robust against varying source of predictive information or attentional allocation, differing between the two experiments. Source localization further suggested the omission response enhancement by predictability to emerge from left superior temporal gyrus and left angular gyrus in both experiments, with additional experiment-specific contributions. These results are consistent with the existence of predictive coding mechanisms in the central auditory system, and suggestive of the general predictive properties of the auditory system to support spoken word recognition. PMID:24561233
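    A minimal sketch of the epoch averaging that underlies an omission response: single-trial epochs are averaged per condition and mean amplitude is compared in a 125-165 msec window. All values below are simulated placeholders, not EEG data from the experiments.

```python
# Minimal sketch of per-condition epoch averaging and a mean-amplitude measure
# in a 125-165 ms window after the expected segment onset. All data simulated.
import numpy as np

fs = 500.0                                        # hypothetical sampling rate, Hz
t = np.arange(-0.1, 0.4, 1 / fs)                  # epoch time axis, s
rng = np.random.default_rng(3)

def simulate_epochs(n_trials, omission_amp):
    """Noise plus a negative deflection peaking ~145 ms (purely illustrative)."""
    erp = -omission_amp * np.exp(-((t - 0.145) ** 2) / (2 * 0.02 ** 2))
    return erp + rng.normal(scale=2.0, size=(n_trials, t.size))

predictable = simulate_epochs(100, omission_amp=3.0).mean(axis=0)
unpredictable = simulate_epochs(100, omission_amp=1.0).mean(axis=0)

window = (t >= 0.125) & (t <= 0.165)
print("mean amplitude 125-165 ms:",
      f"predictable {predictable[window].mean():+.2f} uV,",
      f"unpredictable {unpredictable[window].mean():+.2f} uV")
```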

  19. Auditory Technology and Its Impact on Bilingual Deaf Education

    ERIC Educational Resources Information Center

    Mertes, Jennifer

    2015-01-01

    Brain imaging studies suggest that children can simultaneously develop, learn, and use two languages. A visual language, such as American Sign Language (ASL), facilitates development at the earliest possible moments in a child's life. Spoken language development can be delayed due to diagnostic evaluations, device fittings, and auditory skill…

  20. Auditory, Tactile, and Audiotactile Information Processing Following Visual Deprivation

    ERIC Educational Resources Information Center

    Occelli, Valeria; Spence, Charles; Zampini, Massimiliano

    2013-01-01

    We highlight the results of those studies that have investigated the plastic reorganization processes that occur within the human brain as a consequence of visual deprivation, as well as how these processes give rise to behaviorally observable changes in the perceptual processing of auditory and tactile information. We review the evidence showing…

  1. Hemodynamic responses in human multisensory and auditory association cortex to purely visual stimulation

    PubMed Central

    Meyer, Martin; Baumann, Simon; Marchina, Sarah; Jancke, Lutz

    2007-01-01

    Background Recent findings of a tight coupling between visual and auditory association cortices during multisensory perception in monkeys and humans raise the question whether consistent paired presentation of simple visual and auditory stimuli prompts conditioned responses in unimodal auditory regions or multimodal association cortex once visual stimuli are presented in isolation in a post-conditioning run. To address this issue fifteen healthy participants partook in a "silent" sparse temporal event-related fMRI study. In the first (visual control) habituation phase they were presented with briefly red flashing visual stimuli. In the second (auditory control) habituation phase they heard brief telephone ringing. In the third (conditioning) phase we coincidently presented the visual stimulus (CS) paired with the auditory stimulus (UCS). In the fourth phase participants either viewed flashes paired with the auditory stimulus (maintenance, CS-) or viewed the visual stimulus in isolation (extinction, CS+) according to a 5:10 partial reinforcement schedule. The participants had no other task than attending to the stimuli and indicating the end of each trial by pressing a button. Results During unpaired visual presentations (preceding and following the paired presentation) we observed significant brain responses beyond primary visual cortex in the bilateral posterior auditory association cortex (planum temporale, planum parietale) and in the right superior temporal sulcus whereas the primary auditory regions were not involved. By contrast, the activity in auditory core regions was markedly larger when participants were presented with auditory stimuli. Conclusion These results demonstrate involvement of multisensory and auditory association areas in perception of unimodal visual stimulation which may reflect the instantaneous forming of multisensory associations and cannot be attributed to sensation of an auditory event. More importantly, we are able to show that brain

  2. Mind the Gap: Two Dissociable Mechanisms of Temporal Processing in the Auditory System

    PubMed Central

    Anderson, Lucy A.

    2016-01-01

    High temporal acuity of auditory processing underlies perception of speech and other rapidly varying sounds. A common measure of auditory temporal acuity in humans is the threshold for detection of brief gaps in noise. Gap-detection deficits, observed in developmental disorders, are considered evidence for “sluggish” auditory processing. Here we show, in a mouse model of gap-detection deficits, that auditory brain sensitivity to brief gaps in noise can be impaired even without a general loss of central auditory temporal acuity. Extracellular recordings in three different subdivisions of the auditory thalamus in anesthetized mice revealed a stimulus-specific, subdivision-specific deficit in thalamic sensitivity to brief gaps in noise in experimental animals relative to controls. Neural responses to brief gaps in noise were reduced, but responses to other rapidly changing stimuli unaffected, in lemniscal and nonlemniscal (but not polysensory) subdivisions of the medial geniculate body. Through experiments and modeling, we demonstrate that the observed deficits in thalamic sensitivity to brief gaps in noise arise from reduced neural population activity following noise offsets, but not onsets. These results reveal dissociable sound-onset-sensitive and sound-offset-sensitive channels underlying auditory temporal processing, and suggest that gap-detection deficits can arise from specific impairment of the sound-offset-sensitive channel. SIGNIFICANCE STATEMENT The experimental and modeling results reported here suggest a new hypothesis regarding the mechanisms of temporal processing in the auditory system. Using a mouse model of auditory temporal processing deficits, we demonstrate the existence of specific abnormalities in auditory thalamic activity following sound offsets, but not sound onsets. These results reveal dissociable sound-onset-sensitive and sound-offset-sensitive mechanisms underlying auditory processing of temporally varying sounds. Furthermore, the

  3. Voiced-speech representation by an analog silicon model of the auditory periphery.

    PubMed

    Liu, W; Andreou, A G; Goldstein, M H

    1992-01-01

    An analog CMOS integration of a model for the auditory periphery is presented. The model consists of middle ear, basilar membrane, and hair cell/synapse modules which are derived from neurophysiological studies. The circuit realization of each module is discussed, and experimental data of each module's response to sinusoidal excitation are given. The nonlinear speech processing capabilities of the system are demonstrated using the voiced syllable |ba|. The multichannel output of the silicon model corresponds to the time-varying instantaneous firing rates of auditory nerve fibers that have different characteristic frequencies. These outputs are similar to the physiologically obtained responses. The actual implementation uses subthreshold CMOS technology and analog continuous-time circuits, resulting in a real-time, micropower device with potential applications as a preprocessor of auditory stimuli. PMID:18276451
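    A rough software analogue of the multichannel output described above: a bank of bandpass filters with different characteristic frequencies applied to a synthetic input. This only sketches the filterbank idea; it does not model the chip's middle-ear or hair-cell/synapse stages, and all parameters are illustrative.

```python
# Rough software analogue of a multichannel cochlear filterbank: bandpass
# channels with log-spaced characteristic frequencies plus half-wave
# rectification. Parameters are illustrative, not taken from the silicon model.
import numpy as np
from scipy import signal

fs = 16000.0
t = np.arange(0, 0.5, 1 / fs)
# crude voiced-like input: a 120 Hz pulse train (placeholder for the syllable /ba/)
x = (np.sin(2 * np.pi * 120 * t) > 0.99).astype(float)

center_freqs = np.geomspace(200, 4000, 8)           # 8 channels, hypothetical CFs
outputs = []
for cf in center_freqs:
    band = [cf / 1.2, cf * 1.2]
    sos = signal.butter(2, band, btype="bandpass", fs=fs, output="sos")
    y = signal.sosfilt(sos, x)
    outputs.append(np.maximum(y, 0.0))               # half-wave rectify, a crude hair-cell stand-in

for cf, y in zip(center_freqs, outputs):
    print(f"CF {cf:7.1f} Hz  mean rectified output {y.mean():.4f}")
```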

  4. Characterization of auditory synaptic inputs to gerbil perirhinal cortex.

    PubMed

    Kotak, Vibhakar C; Mowery, Todd M; Sanes, Dan H

    2015-01-01

    The representation of acoustic cues involves regions downstream from the auditory cortex (ACx). One such area, the perirhinal cortex (PRh), processes sensory signals containing mnemonic information. Therefore, our goal was to assess whether PRh receives auditory inputs from the auditory thalamus (MG) and ACx in an auditory thalamocortical brain slice preparation and characterize these afferent-driven synaptic properties. When the MG or ACx was electrically stimulated, synaptic responses were recorded from the PRh neurons. Blockade of type A gamma-aminobutyric acid (GABA-A) receptors dramatically increased the amplitude of evoked excitatory potentials. Stimulation of the MG or ACx also evoked calcium transients in most PRh neurons. Separately, when fluoro ruby was injected in ACx in vivo, anterogradely labeled axons and terminals were observed in the PRh. Collectively, these data show that the PRh integrates auditory information from the MG and ACx and that auditory driven inhibition dominates the postsynaptic responses in a non-sensory cortical region downstream from the ACx. PMID:26321918

  5. Cross auditory-spatial learning in early-blind individuals.

    PubMed

    Chan, Chetwyn C H; Wong, Alex W K; Ting, Kin-Hung; Whitfield-Gabrieli, Susan; He, Jufang; Lee, Tatia M C

    2012-11-01

    Cross-modal processing enables the utilization of information received via different sensory organs to facilitate more complicated human actions. We used functional MRI on early-blind individuals to study the neural processes associated with cross auditory-spatial learning. The auditory signals, converted from echoes of ultrasonic signals emitted from a navigation device, were novel to the participants. The subjects were trained repeatedly for 4 weeks in associating the auditory signals with different distances. Subjects' blood-oxygenation-level-dependent responses were captured at baseline and after training using a sound-to-distance judgment task. Whole-brain analyses indicated that the task used in the study involved auditory discrimination as well as spatial localization. The learning process was shown to be mediated by the inferior parietal cortex and the hippocampus, suggesting the integration and binding of auditory features to distances. The right cuneus was found to possibly serve a general rather than a specific role, forming an occipital-enhanced network for cross auditory-spatial learning. This functional network is likely to be unique to those with early blindness, since the normal-vision counterparts shared activities only in the parietal cortex. PMID:21932260

  6. Multimodal lexical processing in auditory cortex is literacy skill dependent.

    PubMed

    McNorgan, Chris; Awati, Neha; Desroches, Amy S; Booth, James R

    2014-09-01

    Literacy is a uniquely human cross-modal cognitive process wherein visual orthographic representations become associated with auditory phonological representations through experience. Developmental studies provide insight into how experience-dependent changes in brain organization influence phonological processing as a function of literacy. Previous investigations show a synchrony-dependent influence of letter presentation on individual phoneme processing in superior temporal sulcus; others demonstrate recruitment of primary and associative auditory cortex during cross-modal processing. We sought to determine whether brain regions supporting phonological processing of larger lexical units (monosyllabic words) over larger time windows are sensitive to cross-modal information, and whether such effects are literacy dependent. Twenty-two children (age 8-14 years) made rhyming judgments for sequentially presented word and pseudoword pairs presented either unimodally (auditory- or visual-only) or cross-modally (audiovisual). Regression analyses examined the relationship between literacy and congruency effects (overlapping orthography and phonology vs. overlapping phonology-only). We extend previous findings by showing that higher literacy is correlated with greater congruency effects in auditory cortex (i.e., planum temporale) only for cross-modal processing. These skill effects were specific to known words and occurred over a large time window, suggesting that multimodal integration in posterior auditory cortex is critical for fluent reading. PMID:23588185

  7. Cochleotopic selectivity of a multichannel scala tympani electrode array using the 2-deoxyglucose technique.

    PubMed

    Brown, M; Shepherd, R K; Webster, W R; Martin, R L; Clark, G M

    1992-05-01

    The 2-deoxyglucose (2-DG) technique was used to study the cochleotopic selectivity of a multichannel scala tympani electrode array in four cats, with another acting as an unstimulated control. Each animal was unilaterally deafened and a multichannel electrode array inserted 6 mm into the scala tympani. Thresholds to electrical stimulation were determined by recording electrically evoked auditory brainstem responses (EABRs). Each animal was injected with 2-DG, and electrically stimulated using bipolar electrodes located either distal or proximal to the round window. The contralateral ear was stimulated with acoustic tone pips at frequencies that matched the electrode place. Stimulation of both distal and proximal bipolar electrodes at 3 x EABR threshold evoked localized 2-DG labelling in both the ipsilateral cochlear nucleus (CN) and the contralateral inferior colliculus (IC), which was very similar in orientation and breadth to the labelling evoked by the contralateral tone pips. The cochleotopic position of labelling to proximal stimulation was located in the 24-26 kHz region of each structure, whereas the distal labelling was located around 12 kHz. Distal stimulation at 10 x EABR threshold produced very broad 2-DG labelling in the IC centered around the 12 kHz place. The present 2-DG results clearly illustrate cochleotopic selectivity using multichannel bipolar scala tympani electrodes. The extent of this selectivity is dependent on electrical stimulus levels. The 2-DG technique has great potential in evaluating the efficacy of new electrode array designs. PMID:1618713
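    Place-frequency relationships such as the 12 kHz and 24-26 kHz labelling sites above are often summarized with Greenwood-type maps of the form F(x) = A(10^(ax) - k), with x the fractional distance from the cochlear apex. The sketch below uses the commonly quoted human parameterization purely to illustrate the form of such a map; it is not the feline map relevant to this study, and the positions are arbitrary examples.

```python
# Illustrative Greenwood-type place-frequency map, F(x) = A * (10**(a*x) - k).
# Parameters are the commonly quoted HUMAN values (A=165.4, a=2.1, k=0.88),
# shown only to illustrate the form of such maps; they are not the cat map that
# applies to the study above, and the positions x are arbitrary examples.
def greenwood_frequency(x, A=165.4, a=2.1, k=0.88):
    """Characteristic frequency (Hz) at fractional distance x from the apex."""
    return A * (10 ** (a * x) - k)

for x in (0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f} of cochlear length: ~{greenwood_frequency(x):7.0f} Hz")
```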

  8. Development of the auditory system

    PubMed Central

    Litovsky, Ruth

    2015-01-01

    Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur naturally, and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics, as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear prolonged maturation of auditory development well into the teenage years. Maturation involves auditory pathways. However, non-auditory changes (attention, memory, cognition) play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity. PMID:25726262

  9. Auditory object cognition in dementia.

    PubMed

    Goll, Johanna C; Kim, Lois G; Hailstone, Julia C; Lehmann, Manja; Buckley, Aisling; Crutch, Sebastian J; Warren, Jason D

    2011-07-01

    The cognition of nonverbal sounds in dementia has been relatively little explored. Here we undertook a systematic study of nonverbal sound processing in patient groups with canonical dementia syndromes comprising clinically diagnosed typical amnestic Alzheimer's disease (AD; n=21), progressive nonfluent aphasia (PNFA; n=5), logopenic progressive aphasia (LPA; n=7) and aphasia in association with a progranulin gene mutation (GAA; n=1), and in healthy age-matched controls (n=20). Based on a cognitive framework treating complex sounds as 'auditory objects', we designed a novel neuropsychological battery to probe auditory object cognition at early perceptual (sub-object), object representational (apperceptive) and semantic levels. All patients had assessments of peripheral hearing and general neuropsychological functions in addition to the experimental auditory battery. While a number of aspects of auditory object analysis were impaired across patient groups and were influenced by general executive (working memory) capacity, certain auditory deficits had some specificity for particular dementia syndromes. Patients with AD had a disproportionate deficit of auditory apperception but preserved timbre processing. Patients with PNFA had salient deficits of timbre and auditory semantic processing, but intact auditory size and apperceptive processing. Patients with LPA had a generalised auditory deficit that was influenced by working memory function. In contrast, the patient with GAA showed substantial preservation of auditory function, but a mild deficit of pitch direction processing and a more severe deficit of auditory apperception. The findings provide evidence for separable stages of auditory object analysis and separable profiles of impaired auditory object cognition in different dementia syndromes. PMID:21689671

  10. Functional neuroanatomy of auditory scene analysis in Alzheimer's disease.

    PubMed

    Golden, Hannah L; Agustus, Jennifer L; Goll, Johanna C; Downey, Laura E; Mummery, Catherine J; Schott, Jonathan M; Crutch, Sebastian J; Warren, Jason D

    2015-01-01

    Auditory scene analysis is a demanding computational process that is performed automatically and efficiently by the healthy brain but vulnerable to the neurodegenerative pathology of Alzheimer's disease. Here we assessed the functional neuroanatomy of auditory scene analysis in Alzheimer's disease using the well-known 'cocktail party effect' as a model paradigm whereby stored templates for auditory objects (e.g., hearing one's spoken name) are used to segregate auditory 'foreground' and 'background'. Patients with typical amnestic Alzheimer's disease (n = 13) and age-matched healthy individuals (n = 17) underwent functional 3T-MRI using a sparse acquisition protocol with passive listening to auditory stimulus conditions comprising the participant's own name interleaved with or superimposed on multi-talker babble, and spectrally rotated (unrecognisable) analogues of these conditions. Name identification (conditions containing the participant's own name contrasted with spectrally rotated analogues) produced extensive bilateral activation involving superior temporal cortex in both the AD and healthy control groups, with no significant differences between groups. Auditory object segregation (conditions with interleaved name sounds contrasted with superimposed name sounds) produced activation of right posterior superior temporal cortex in both groups, again with no differences between groups. However, the cocktail party effect (interaction of own name identification with auditory object segregation processing) produced activation of right supramarginal gyrus in the AD group that was significantly enhanced compared with the healthy control group. The findings delineate an altered functional neuroanatomical profile of auditory scene analysis in Alzheimer's disease that may constitute a novel computational signature of this neurodegenerative pathology. PMID:26029629

  11. Functional neuroanatomy of auditory scene analysis in Alzheimer's disease

    PubMed Central

    Golden, Hannah L.; Agustus, Jennifer L.; Goll, Johanna C.; Downey, Laura E.; Mummery, Catherine J.; Schott, Jonathan M.; Crutch, Sebastian J.; Warren, Jason D.

    2015-01-01

    Auditory scene analysis is a demanding computational process that is performed automatically and efficiently by the healthy brain but vulnerable to the neurodegenerative pathology of Alzheimer's disease. Here we assessed the functional neuroanatomy of auditory scene analysis in Alzheimer's disease using the well-known ‘cocktail party effect’ as a model paradigm whereby stored templates for auditory objects (e.g., hearing one's spoken name) are used to segregate auditory ‘foreground’ and ‘background’. Patients with typical amnestic Alzheimer's disease (n = 13) and age-matched healthy individuals (n = 17) underwent functional 3T-MRI using a sparse acquisition protocol with passive listening to auditory stimulus conditions comprising the participant's own name interleaved with or superimposed on multi-talker babble, and spectrally rotated (unrecognisable) analogues of these conditions. Name identification (conditions containing the participant's own name contrasted with spectrally rotated analogues) produced extensive bilateral activation involving superior temporal cortex in both the AD and healthy control groups, with no significant differences between groups. Auditory object segregation (conditions with interleaved name sounds contrasted with superimposed name sounds) produced activation of right posterior superior temporal cortex in both groups, again with no differences between groups. However, the cocktail party effect (interaction of own name identification with auditory object segregation processing) produced activation of right supramarginal gyrus in the AD group that was significantly enhanced compared with the healthy control group. The findings delineate an altered functional neuroanatomical profile of auditory scene analysis in Alzheimer's disease that may constitute a novel computational signature of this neurodegenerative pathology. PMID:26029629

  12. Modulation of Auditory Spatial Attention by Angry Prosody: An fMRI Auditory Dot-Probe Study.

    PubMed

    Ceravolo, Leonardo; Frühholz, Sascha; Grandjean, Didier

    2016-01-01

    Emotional stimuli have been shown to modulate attentional orienting through signals sent by subcortical brain regions that modulate visual perception at early stages of processing. Fewer studies, however, have investigated a similar effect of emotional stimuli on attentional orienting in the auditory domain together with an investigation of brain regions underlying such attentional modulation, which is the general aim of the present study. Therefore, we used an original auditory dot-probe paradigm involving simultaneously presented neutral and angry non-speech vocal utterances lateralized to either the left or the right auditory space, immediately followed by a short and lateralized single sine wave tone presented in the same (valid trial) or in the opposite space as the preceding angry voice (invalid trial). Behavioral results showed an expected facilitation effect for target detection during valid trials while functional data showed greater activation in the middle and posterior superior temporal sulci (STS) and in the medial frontal cortex for valid vs. invalid trials. The use of reaction time facilitation [absolute value of the Z-score of valid-(invalid+neutral)] as a group covariate extended enhanced activity in the amygdalae, auditory thalamus, and visual cortex. Taken together, our results suggest the involvement of a large and distributed network of regions among which the STS, thalamus, and amygdala are crucial for the decoding of angry prosody, as well as for orienting and maintaining attention within an auditory space that was previously primed by a vocal emotional event. PMID:27242420

  13. Modulation of Auditory Spatial Attention by Angry Prosody: An fMRI Auditory Dot-Probe Study

    PubMed Central

    Ceravolo, Leonardo; Frühholz, Sascha; Grandjean, Didier

    2016-01-01

    Emotional stimuli have been shown to modulate attentional orienting through signals sent by subcortical brain regions that modulate visual perception at early stages of processing. Fewer studies, however, have investigated a similar effect of emotional stimuli on attentional orienting in the auditory domain together with an investigation of brain regions underlying such attentional modulation, which is the general aim of the present study. Therefore, we used an original auditory dot-probe paradigm involving simultaneously presented neutral and angry non-speech vocal utterances lateralized to either the left or the right auditory space, immediately followed by a short and lateralized single sine wave tone presented in the same (valid trial) or in the opposite space as the preceding angry voice (invalid trial). Behavioral results showed an expected facilitation effect for target detection during valid trials while functional data showed greater activation in the middle and posterior superior temporal sulci (STS) and in the medial frontal cortex for valid vs. invalid trials. The use of reaction time facilitation [absolute value of the Z-score of valid-(invalid+neutral)] as a group covariate extended enhanced activity in the amygdalae, auditory thalamus, and visual cortex. Taken together, our results suggest the involvement of a large and distributed network of regions among which the STS, thalamus, and amygdala are crucial for the decoding of angry prosody, as well as for orienting and maintaining attention within an auditory space that was previously primed by a vocal emotional event. PMID:27242420

  14. Multichannel blind iterative image restoration.

    PubMed

    Sroubek, Filip; Flusser, Jan

    2003-01-01

    Blind image deconvolution is required in many applications of microscopy imaging, remote sensing, and astronomical imaging. Unfortunately, in a single-channel framework, serious conceptual and numerical problems are often encountered. Very recently, an eigenvector-based method (EVAM) was proposed for a multichannel framework which perfectly determines the convolution masks in a noise-free environment if a channel disparity condition, called co-primeness, is satisfied. We propose a novel iterative algorithm based on recent anisotropic denoising techniques of total variation and a Mumford-Shah functional with the EVAM restoration condition included. A linearization scheme of half-quadratic regularization together with a cell-centered finite difference discretization scheme is used in the algorithm and provides a unified approach to the solution of total variation or Mumford-Shah. The algorithm performs well even on very noisy images and does not require an exact estimation of mask orders. We demonstrate capabilities of the algorithm on synthetic data. Finally, the algorithm is applied to defocused images taken with a digital camera and to data from astronomical ground-based observations of the Sun. PMID:18237981
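    A heavily simplified sketch of one ingredient of such multichannel restoration: gradient descent on a multichannel data-fidelity term plus a smoothed total-variation penalty, assuming the blur kernels are known. The blind (EVAM) kernel estimation, the Mumford-Shah variant, and the half-quadratic scheme of the paper are not reproduced here, and all sizes and weights are illustrative.

```python
# Simplified sketch: gradient descent on sum_k ||h_k * u - z_k||^2 + lam*TV_eps(u)
# with KNOWN blur kernels h_k; not the paper's blind, half-quadratic algorithm.
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(4)
u_true = np.zeros((64, 64)); u_true[20:44, 20:44] = 1.0           # toy image
kernels = [np.ones((3, 3)) / 9.0, np.ones((5, 1)) / 5.0]          # two known blurs
observations = [convolve(u_true, h, mode="wrap")
                + 0.01 * rng.normal(size=u_true.shape) for h in kernels]

def tv_gradient(u, eps=1e-3):
    """Gradient of the smoothed total-variation penalty (periodic boundaries)."""
    gx = np.roll(u, -1, axis=1) - u
    gy = np.roll(u, -1, axis=0) - u
    mag = np.sqrt(gx**2 + gy**2 + eps**2)
    px, py = gx / mag, gy / mag
    return -((px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0)))

u, lam, step = np.zeros_like(u_true), 0.05, 0.2
for _ in range(200):
    grad = lam * tv_gradient(u)
    for h, z in zip(kernels, observations):
        residual = convolve(u, h, mode="wrap") - z
        grad += convolve(residual, h[::-1, ::-1], mode="wrap")    # adjoint of the blur
    u -= step * grad

print("RMSE vs. ground truth:", np.sqrt(np.mean((u - u_true) ** 2)))
```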

  15. The auditory characteristics of children with inner auditory canal stenosis.

    PubMed

    Ai, Yu; Xu, Lei; Li, Li; Li, Jianfeng; Luo, Jianfen; Wang, Mingming; Fan, Zhaomin; Wang, Haibo

    2016-07-01

    Conclusions: This study shows that the prevalence of auditory neuropathy spectrum disorder (ANSD) in children with inner auditory canal (IAC) stenosis is much higher than in those without IAC stenosis, regardless of whether they have other inner ear anomalies. In addition, the auditory characteristics of ANSD with IAC stenosis differ significantly from those of ANSD without any middle or inner ear malformations. Objectives: To describe the auditory characteristics of children with IAC stenosis and to examine whether a narrow inner auditory canal is associated with ANSD. Method: A total of 21 children with inner auditory canal stenosis participated in this study, and a series of auditory tests was administered. In addition, a comparative study of the auditory characteristics of ANSD was conducted, based on whether the children had isolated IAC stenosis. Results: Wave V of the ABR was not observed in any of the patients, while a cochlear microphonic (CM) response was detected in 81.1% of ears with a stenotic IAC. Sixteen of 19 (84.2%) ears with isolated IAC stenosis showed a CM response on auditory brainstem response (ABR) waveforms. There was no significant difference in ANSD characteristics between the children with and without isolated IAC stenosis. PMID:26981851

  16. Experience with a multichannel system for biomagnetic study.

    PubMed

    Schneider, S; Abraham-Fuchs, K; Reichenberger, H; Seifert, H; Hoenig, H E; Röhrlein, G

    1993-11-01

    The components of the biomagnetic multichannel system Krenikon are described. The combination of biomagnetically yielded localizations with anatomic images gained from MR or CT is discussed, as well as the enhancement of the signal-to-noise ratio by using a correlation technique. The overall localization accuracy is tested with technical phantoms. With volunteers, measurements of auditory, visual and somatosensory evoked fields are performed to evaluate the system performance in vivo. Clinical studies were performed mainly with partners from the Universities of Erlangen-Nürnberg and Ulm. The data acquisition time is typically 2-10 min, which is tolerable both for the patient and the clinical staff. Electric potentials, even with invasive electrodes, can be recorded simultaneously with the magnetic fields. MEG gives important information for the presurgical diagnosis of epileptic patients and for the understanding of epilepsy genesis. With MCG, centres of biologic excitation such as ventricular ectopies or accessory bundles in WPW syndrome have been successfully localized. PMID:8274986
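    As a generic illustration of why repeated evoked responses can be recovered from noise, the sketch below averages simulated trials and shows the signal-to-noise ratio growing roughly with the square root of the trial count; the Krenikon system's correlation technique itself is not reproduced, and the field amplitudes are arbitrary.

```python
# Generic illustration of evoked-response averaging: SNR grows roughly with
# sqrt(number of trials). Amplitudes and noise levels are arbitrary placeholders.
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 0.5, 300)
evoked = 1e-13 * np.exp(-((t - 0.1) ** 2) / (2 * 0.01 ** 2))   # toy evoked field (T)
noise_sd = 5e-13

def snr_of_average(n_trials):
    trials = evoked + rng.normal(scale=noise_sd, size=(n_trials, t.size))
    avg = trials.mean(axis=0)
    return evoked.max() / (avg - evoked).std()

for n in (1, 16, 64, 256):
    print(f"{n:4d} trials: SNR ~ {snr_of_average(n):.1f}")
```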

  17. Organization of projection neurons and local neurons of the primary auditory center in the fruit fly Drosophila melanogaster.

    PubMed

    Matsuo, Eriko; Seki, Haruyoshi; Asai, Tomonori; Morimoto, Takako; Miyakawa, Hiroyoshi; Ito, Kei; Kamikouchi, Azusa

    2016-04-15

    Acoustic communication between insects serves as an excellent model system for analyzing the neuronal mechanisms underlying auditory information processing. The detailed organization of auditory neural circuits in the brain has not yet been described. To understand the central auditory pathways, we used the brain of the fruit fly Drosophila melanogaster as a model and performed a large-scale analysis of the interneurons associated with the primary auditory center. By screening expression driver strains and performing single-cell labeling of these strains, we identified 44 types of interneurons innervating the primary auditory center. Five types were local interneurons whereas the other 39 types were projection interneurons connecting the primary auditory center with other brain regions. The projection neurons comprised three frequency-selective pathways and two frequency-embracive pathways. Mapping of their connection targets revealed that five neuropils in the brain were intensively connected with the primary auditory center: the wedge (WED), anterior ventrolateral protocerebrum, posterior ventrolateral protocerebrum (PVLP), saddle (SAD), and gnathal ganglia (GNG). In addition, several other neuropils, including visual and olfactory centers in the brain, were directly connected to the primary auditory center. The distribution patterns of the spines and boutons of the identified neurons suggest that auditory information is sent mainly from the primary auditory center to the PVLP, WED, SAD, GNG, and thoracico-abdominal ganglia. Based on these findings, we established the first comprehensive map of secondary auditory interneurons, which indicates the downstream information flow to parallel ascending pathways, multimodal pathways, and descending pathways. J. Comp. Neurol. 524:1099-1164, 2016. © 2016 Wiley Periodicals, Inc. PMID:26854012

  18. Early hominin auditory capacities.

    PubMed

    Quam, Rolf; Martínez, Ignacio; Rosa, Manuel; Bonmatí, Alejandro; Lorenzo, Carlos; de Ruiter, Darryl J; Moggi-Cecchi, Jacopo; Conde Valverde, Mercedes; Jarabo, Pilar; Menter, Colin G; Thackeray, J Francis; Arsuaga, Juan Luis

    2015-09-01

    Studies of sensory capacities in past life forms have offered new insights into their adaptations and lifeways. Audition is particularly amenable to study in fossils because it is strongly related to physical properties that can be approached through their skeletal structures. We have studied the anatomy of the outer and middle ear in the early hominin taxa Australopithecus africanus and Paranthropus robustus and estimated their auditory capacities. Compared with chimpanzees, the early hominin taxa are derived toward modern humans in their slightly shorter and wider external auditory canal, smaller tympanic membrane, and lower malleus/incus lever ratio, but they remain primitive in the small size of their stapes footplate. Compared with chimpanzees, both early hominin taxa show a heightened sensitivity to frequencies between 1.5 and 3.5 kHz and an occupied band of maximum sensitivity that is shifted toward slightly higher frequencies. The results have implications for sensory ecology and communication, and suggest that the early hominin auditory pattern may have facilitated an increased emphasis on short-range vocal communication in open habitats. PMID:26601261

  19. Early hominin auditory capacities

    PubMed Central

    Quam, Rolf; Martínez, Ignacio; Rosa, Manuel; Bonmatí, Alejandro; Lorenzo, Carlos; de Ruiter, Darryl J.; Moggi-Cecchi, Jacopo; Conde Valverde, Mercedes; Jarabo, Pilar; Menter, Colin G.; Thackeray, J. Francis; Arsuaga, Juan Luis

    2015-01-01

    Studies of sensory capacities in past life forms have offered new insights into their adaptations and lifeways. Audition is particularly amenable to study in fossils because it is strongly related to physical properties that can be approached through their skeletal structures. We have studied the anatomy of the outer and middle ear in the early hominin taxa Australopithecus africanus and Paranthropus robustus and estimated their auditory capacities. Compared with chimpanzees, the early hominin taxa are derived toward modern humans in their slightly shorter and wider external auditory canal, smaller tympanic membrane, and lower malleus/incus lever ratio, but they remain primitive in the small size of their stapes footplate. Compared with chimpanzees, both early hominin taxa show a heightened sensitivity to frequencies between 1.5 and 3.5 kHz and an occupied band of maximum sensitivity that is shifted toward slightly higher frequencies. The results have implications for sensory ecology and communication, and suggest that the early hominin auditory pattern may have facilitated an increased emphasis on short-range vocal communication in open habitats. PMID:26601261

  20. Dynamic multi-channel TMS with reconfigurable coil.

    PubMed

    Jiang, Ruoli; Jansen, Ben H; Sheth, Bhavin R; Chen, Ji

    2013-05-01

    Investigations of the causal involvement of particular brain areas and interconnections in behavior require an external stimulation system with reasonable spatio-temporal resolution. Current transcranial magnetic stimulation (TMS) technology is limited to stimulating a single brain area once in a given trial. Here, we present a feasibility study for a novel TMS system based on multi-channel reconfigurable coils. With this hardware, researchers will be able to stimulate multiple brain sites in any temporal order in a trial. The system employs a wire-mesh coil, constructed using x- and y-directional wires. By varying the current direction and/or strength on each wire, we can configure the proposed mesh-wire coil into a standard loop coil and figure-eight coil of varying size. This provides maximum flexibility to the experimenter in that the location and extent of stimulation on the brain surface can be modified depending on experimental requirement. Moreover, one can dynamically and automatically modify the site(s) of stimulation several times within the span of seconds. By pre-storing various sequences of excitation patterns inside a control unit, one can explore the effect of dynamic TMS on behavior, in associative learning, and as rehabilitative therapy. Here, we present a computer simulation and bench experiments that show the feasibility of the dynamically-reconfigurable coil. PMID:23193321

  1. Seeing sounds and hearing colors: an event-related potential study of auditory-visual synesthesia.

    PubMed

    Goller, Aviva I; Otten, Leun J; Ward, Jamie

    2009-10-01

    In auditory-visual synesthesia, sounds automatically elicit conscious and reliable visual experiences. It is presently unknown whether this reflects early or late processes in the brain. It is also unknown whether adult audiovisual synesthesia resembles auditory-induced visual illusions that can sometimes occur in the general population or whether it resembles the electrophysiological deflection over occipital sites that has been noted in infancy and has been likened to synesthesia. Electrical brain activity was recorded from adult synesthetes and control participants who were played brief tones and required to monitor for an infrequent auditory target. The synesthetes were instructed to attend either to the auditory or to the visual (i.e., synesthetic) dimension of the tone, whereas the controls attended to the auditory dimension alone. There were clear differences between synesthetes and controls that emerged early (100 msec after tone onset). These differences tended to lie in deflections of the auditory-evoked potential (e.g., the auditory N1, P2, and N2) rather than the presence of an additional posterior deflection. The differences occurred irrespective of what the synesthetes attended to (although attention had a late effect). The results suggest that differences between synesthetes and others occur early in time, and that synesthesia is qualitatively different from similar effects found in infants and certain auditory-induced visual illusions in adults. In addition, we report two novel cases of synesthesia in which colors elicit sounds, and vice versa. PMID:18823243

  2. Grey matter connectivity within and between auditory, language and visual systems in prelingually deaf adolescents

    PubMed Central

    Li, Wenjing; Li, Jianhong; Wang, Zhenchang; Li, Yong; Liu, Zhaohui; Yan, Fei; Xian, Junfang; He, Huiguang

    2015-01-01

    Purpose: Previous studies have shown brain reorganization after early deprivation of auditory sensory input. However, changes in grey matter connectivity have not yet been investigated in prelingually deaf adolescents. In the present study, we aimed to investigate changes of grey matter connectivity within and between auditory, language and visual systems in prelingually deaf adolescents. Methods: We recruited 16 prelingually deaf adolescents and 16 age- and gender-matched normal controls, and extracted the grey matter volume as the structural characteristic from 14 regions of interest involved in auditory, language or visual processing to investigate the changes of grey matter connectivity within and between auditory, language and visual systems. Sparse inverse covariance estimation (SICE) was utilized to construct grey matter connectivity between these brain regions. Results: The results show that prelingually deaf adolescents present weaker grey matter connectivity within auditory and visual systems, and connectivity between language and visual systems declined. Notably, significantly increased brain connectivity was found between auditory and visual systems in prelingually deaf adolescents. Conclusions: Our results indicate “cross-modal” plasticity after deprivation of the auditory input in prelingually deaf adolescents, especially between auditory and visual systems. Besides, auditory deprivation and visual deficits might affect the connectivity pattern within language and visual systems in prelingually deaf adolescents. PMID:25698109
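    A minimal sketch of sparse inverse covariance estimation (SICE) as it might be applied to region-of-interest grey matter volumes, using scikit-learn's graphical lasso on simulated data; the subject count and ROI count follow the abstract, while the data and sparsity penalty are placeholders, not values or results from the study.

```python
# Minimal sketch of sparse inverse covariance estimation (SICE) via graphical
# lasso on simulated ROI volumes; data and penalty are placeholders only.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(6)
n_subjects, n_rois = 16, 14                      # the study used 16 subjects per group and 14 ROIs
volumes = rng.normal(size=(n_subjects, n_rois))  # hypothetical grey matter volumes (z-scored)

model = GraphicalLasso(alpha=0.2, max_iter=200).fit(volumes)
precision = model.precision_                     # sparse inverse covariance matrix
connected = (np.abs(precision) > 1e-6) & ~np.eye(n_rois, dtype=bool)
print("number of estimated connections:", connected.sum() // 2)
```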

  3. Spontaneous activity in the developing auditory system.

    PubMed

    Wang, Han Chin; Bergles, Dwight E

    2015-07-01

    Spontaneous electrical activity is a common feature of sensory systems during early development. This sensory-independent neuronal activity has been implicated in promoting their survival and maturation, as well as growth and refinement of their projections to yield circuits that can rapidly extract information about the external world. Periodic bursts of action potentials occur in auditory neurons of mammals before hearing onset. This activity is induced by inner hair cells (IHCs) within the developing cochlea, which establish functional connections with spiral ganglion neurons (SGNs) several weeks before they are capable of detecting external sounds. During this pre-hearing period, IHCs fire periodic bursts of Ca(2+) action potentials that excite SGNs, triggering brief but intense periods of activity that pass through auditory centers of the brain. Although spontaneous activity requires input from IHCs, there is ongoing debate about whether IHCs are intrinsically active and their firing periodically interrupted by external inhibitory input (IHC-inhibition model), or are intrinsically silent and their firing periodically promoted by an external excitatory stimulus (IHC-excitation model). There is accumulating evidence that inner supporting cells in Kölliker's organ spontaneously release ATP during this time, which can induce bursts of Ca(2+) spikes in IHCs that recapitulate many features of auditory neuron activity observed in vivo. Nevertheless, the role of supporting cells in this process remains to be established in vivo. A greater understanding of the molecular mechanisms responsible for generating IHC activity in the developing cochlea will help reveal how these events contribute to the maturation of nascent auditory circuits. PMID:25296716

  4. Auditory interfaces: The human perceiver

    NASA Technical Reports Server (NTRS)

    Colburn, H. Steven

    1991-01-01

    A brief introduction to the basic auditory abilities of the human perceiver with particular attention toward issues that may be important for the design of auditory interfaces is presented. The importance of appropriate auditory inputs to observers with normal hearing is probably related to the role of hearing as an omnidirectional, early warning system and to its role as the primary vehicle for communication of strong personal feelings.

  5. Visual activity predicts auditory recovery from deafness after adult cochlear implantation.

    PubMed

    Strelnikov, Kuzma; Rouger, Julien; Demonet, Jean-François; Lagleyre, Sebastien; Fraysse, Bernard; Deguine, Olivier; Barone, Pascal

    2013-12-01

    Modern cochlear implantation technologies allow deaf patients to understand auditory speech; however, the implants deliver only a coarse auditory input and patients must use long-term adaptive processes to achieve coherent percepts. In adults with post-lingual deafness, the greatest progress in speech recovery is observed during the first year after cochlear implantation, but there is a large range of variability in the level of cochlear implant outcomes and the temporal evolution of recovery. It has been proposed that when profoundly deaf subjects receive a cochlear implant, the visual cross-modal reorganization of the brain is deleterious for auditory speech recovery. We tested this hypothesis in post-lingually deaf adults by analysing whether brain activity shortly after implantation correlated with the level of auditory recovery 6 months later. Based on brain activity induced by a speech-processing task, we found strong positive correlations in areas outside the auditory cortex. The highest positive correlations were found in the occipital cortex involved in visual processing, as well as in the posterior-temporal cortex known for audio-visual integration. The other area that positively correlated with auditory speech recovery was localized in the left inferior frontal area known for speech processing. Our results demonstrate that the functional level of the visual modality is related to the proficiency of auditory recovery. Based on the positive correlation of visual activity with auditory speech recovery, we suggest that the visual modality may facilitate the perception of the word's auditory counterpart in communicative situations. The link demonstrated between visual activity and auditory speech perception indicates that visuoauditory synergy is crucial for cross-modal plasticity and for fostering speech-comprehension recovery in adult cochlear-implanted deaf patients. PMID:24136826

  6. Six Degrees of Auditory Spatial Separation.

    PubMed

    Carlile, Simon; Fox, Alex; Orchard-Mills, Emily; Leung, Johahn; Alais, David

    2016-06-01

    The location of a sound is derived computationally from acoustical cues rather than being inherent in the topography of the input signal, as in vision. Since Lord Rayleigh, the descriptions of that representation have swung between "labeled line" and "opponent process" models. Employing a simple variant of a two-point separation judgment using concurrent speech sounds, we found that spatial discrimination thresholds changed nonmonotonically as a function of the overall separation. Rather than increasing with separation, spatial discrimination thresholds first declined as two-point separation increased before reaching a turning point and increasing thereafter with further separation. This "dipper" function, with a minimum at 6° of separation, was seen for regions around the midline as well as for more lateral regions (30° and 45°). The discrimination thresholds for the binaural localization cues were linear over the same range, so these cannot explain the shape of these functions. These data and a simple computational model indicate that the perception of auditory space involves a local code or multichannel mapping emerging subsequent to the binaural cue coding. PMID:27033087

  7. Influence of aging over 10 years on auditory and vestibular functions in three patients with auditory neuropathy.

    PubMed

    Masuda, Takeshi; Kaga, Kimitaka

    2011-05-01

    The influence of aging on hearing and vestibular function in patients with auditory neuropathy has not been investigated. The purpose of this study was to investigate how hearing and vestibular function in this disease change with aging. The subjects were three female patients with auditory neuropathy. We checked their hearing and vestibular function by speech discrimination tests, ABR, ECochG, DPOAE, caloric test, damped-rotational chair test, and VEMPs. In all three patients, speech discrimination ability and vestibular function markedly declined with aging. However, speech language understanding and higher brain function were less affected by aging. PMID:21198343

  8. Neural Correlates of Auditory Processing, Learning and Memory Formation in Songbirds

    NASA Astrophysics Data System (ADS)

    Pinaud, R.; Terleph, T. A.; Wynne, R. D.; Tremere, L. A.

    Songbirds have emerged as powerful experimental models for the study of auditory processing of complex natural communication signals. Intact hearing is necessary for several behaviors in developing and adult animals including vocal learning, territorial defense, mate selection and individual recognition. These behaviors are thought to require the processing, discrimination and memorization of songs. Although much is known about the brain circuits that participate in sensorimotor (auditory-vocal) integration, especially the "song-control" system, less is known about the anatomical and functional organization of central auditory pathways. Here we discuss findings associated with a telencephalic auditory area known as the caudomedial nidopallium (NCM). NCM has attracted significant interest as it exhibits functional properties that may support higher order auditory functions such as stimulus discrimination and the formation of auditory memories. NCM neurons are vigorously driven by auditory stimuli. Interestingly, these responses are selective to conspecific, relative to heterospecific songs and artificial stimuli. In addition, forms of experience-dependent plasticity occur in NCM and are song-specific. Finally, recent experiments employing high-throughput quantitative proteomics suggest that complex protein regulatory pathways are engaged in NCM as a result of auditory experience. These molecular cascades are likely central to experience-associated plasticity of NCM circuitry and may be part of a network of calcium-driven molecular events that support the formation of auditory memory traces.

  9. Impairments in musical abilities reflected in the auditory brainstem: evidence from congenital amusia.

    PubMed

    Lehmann, Alexandre; Skoe, Erika; Moreau, Patricia; Peretz, Isabelle; Kraus, Nina

    2015-07-01

    Congenital amusia is a neurogenetic condition, characterized by a deficit in music perception and production, not explained by hearing loss, brain damage or lack of exposure to music. Despite inferior musical performance, amusics exhibit normal auditory cortical responses, with abnormal neural correlates suggested to lie beyond auditory cortices. Here we show, using auditory brainstem responses to complex sounds in humans, that fine-grained automatic processing of sounds is impoverished in amusia. Compared with matched non-musician controls, spectral amplitude was decreased in amusics for higher harmonic components of the auditory brainstem response. We also found a delayed response to the early transient aspects of the auditory stimulus in amusics. Neural measures of spectral amplitude and response timing correlated with participants' behavioral assessments of music processing. We demonstrate, for the first time, that amusia affects how complex acoustic signals are processed in the auditory brainstem. This neural signature of amusia mirrors what is observed in musicians, such that the aspects of the auditory brainstem responses that are enhanced in musicians are degraded in amusics. By showing that gradients of music abilities are reflected in the auditory brainstem, our findings have implications not only for current models of amusia but also for auditory functioning in general. PMID:25900043
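    The spectral-amplitude measure referred to above can be illustrated with a short computation: take the Fourier transform of the brainstem response and read off amplitudes at the stimulus fundamental and its harmonics. The sampling rate, fundamental frequency and the synthetic waveform below are assumptions chosen for the example, not the study's recording parameters.

      # Hedged illustration: spectral amplitudes at F0 and its harmonics from a
      # (synthetic) brainstem response; fs, f0 and the toy waveform are assumptions.
      import numpy as np

      def harmonic_amplitudes(response, fs, f0=100.0, n_harmonics=10):
          spectrum = np.abs(np.fft.rfft(response)) / len(response)
          freqs = np.fft.rfftfreq(len(response), d=1.0 / fs)
          # amplitude at the FFT bin nearest each harmonic of f0
          return np.array([spectrum[np.argmin(np.abs(freqs - k * f0))]
                           for k in range(1, n_harmonics + 1)])

      fs = 20000.0
      t = np.arange(0, 0.2, 1.0 / fs)
      toy_response = np.sin(2 * np.pi * 100 * t) + 0.3 * np.sin(2 * np.pi * 200 * t)
      print(harmonic_amplitudes(toy_response, fs)[:3])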

  10. Multichannel DBS halftoning for improved texture quality

    NASA Astrophysics Data System (ADS)

    Slavuj, Radovan; Pedersen, Marius

    2015-01-01

    The paper aims to develop a method for multichannel halftoning based on the Direct Binary Search (DBS) algorithm. We integrate the specifics and benefits of multichannel printing into the halftoning method in order to further improve the texture quality of DBS and to create a halftoning approach suited to multichannel printing. Multichannel printing was originally developed for an extended color gamut; at the same time, the additional channels can help to improve the individual and combined texture of color halftoning. It does so in a manner similar to the introduction of light colors (diluted inks) in printing. Namely, if one treats the Red, Green and Blue inks as light versions of the M+Y, C+Y and C+M combinations, the visibility of unwanted halftoning textures can be reduced. The analogy can be extended to any number of ink combinations, or Neugebauer Primaries (NPs), as the alternative building blocks. The extended variability of printing spatially distributed NPs could provide many practical solutions and improvements in color accuracy and image quality, and could enable spectral printing. This could be done by selecting NPs per dot-area location based on the constraints of the desired reproduction. Replacement with a brighter NP at a given location can induce a color difference, creating a tradeoff between image quality and color accuracy. With multichannel-enabled DBS halftoning, we are able to reduce the visibility of textures and to provide better rendering of transitions, especially in the mid and dark tones.
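    For readers unfamiliar with DBS, the sketch below shows the core idea in its simplest single-channel form: start from a rough binary pattern and accept pixel toggles only when they reduce an error measured through a low-pass "visual" filter. The Gaussian filter, image size and iteration count are stand-in assumptions; the multichannel, Neugebauer-primary version described above builds on this same search.

      # Naive single-channel Direct Binary Search (toggle-only) sketch; the HVS
      # filter is approximated by a small Gaussian and the full error is
      # recomputed per toggle, so this is illustrative, not efficient.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def dbs_halftone(gray, sigma=1.0, iterations=5):
          """gray: 2-D array in [0, 1]; returns a binary halftone of the same shape."""
          halftone = (np.random.random(gray.shape) < gray).astype(float)
          for _ in range(iterations):
              changed = False
              for i in range(gray.shape[0]):
                  for j in range(gray.shape[1]):
                      cost_now = np.sum(gaussian_filter(halftone - gray, sigma) ** 2)
                      halftone[i, j] = 1.0 - halftone[i, j]          # try a toggle
                      cost_new = np.sum(gaussian_filter(halftone - gray, sigma) ** 2)
                      if cost_new >= cost_now:
                          halftone[i, j] = 1.0 - halftone[i, j]      # revert if no gain
                      else:
                          changed = True
              if not changed:
                  break
          return halftone

      print(dbs_halftone(np.full((16, 16), 0.25)).mean())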

  11. Auditory tracts identified with combined fMRI and diffusion tractography.

    PubMed

    Javad, Faiza; Warren, Jason D; Micallef, Caroline; Thornton, John S; Golay, Xavier; Yousry, Tarek; Mancini, Laura

    2014-01-01

    The auditory tracts in the human brain connect the inferior colliculus (IC) and medial geniculate body (MGB) to various components of the auditory cortex (AC). While in non-human primates and in humans the auditory system is differentiated into core, belt and parabelt areas, the correspondence between these areas and anatomical landmarks on the human superior temporal gyri is not straightforward, and at present not completely understood. However, it is not controversial that there is a hierarchical organization of auditory stimulus processing in the auditory system. The aims of this study were to demonstrate that it is possible to non-invasively and robustly identify auditory projections between the auditory thalamus/brainstem and different functional levels of auditory analysis in the cortex of human subjects in vivo by combining functional magnetic resonance imaging (fMRI) with diffusion MRI, and to investigate the possibility of differentiating between different components of the auditory pathways (e.g. projections to areas responsible for sound, pitch and melody processing). We hypothesized that the major limitation in the identification of the auditory pathways is the known problem of crossing fibres and addressed this issue by acquiring DTI with b-values higher than commonly used and adopting a multi-fibre ball-and-stick analysis model combined with probabilistic tractography. Fourteen healthy subjects were studied. Auditory areas were localized functionally using an established hierarchical pitch processing fMRI paradigm. Together, fMRI and diffusion MRI allowed the successful identification of tracts connecting IC with AC in 64 to 86% of hemispheres and left sound areas with homologous areas in the right hemisphere in 86% of hemispheres. The identified tracts corresponded closely with a three-dimensional stereotaxic atlas based on postmortem data. The findings have both neuroscientific and clinical implications for delineation of the human auditory system in vivo.

  12. Auditory Discrimination and Auditory Sensory Behaviours in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Jones, Catherine R. G.; Happe, Francesca; Baird, Gillian; Simonoff, Emily; Marsden, Anita J. S.; Tregay, Jenifer; Phillips, Rebecca J.; Goswami, Usha; Thomson, Jennifer M.; Charman, Tony

    2009-01-01

    It has been hypothesised that auditory processing may be enhanced in autism spectrum disorders (ASD). We tested auditory discrimination ability in 72 adolescents with ASD (39 childhood autism; 33 other ASD) and 57 IQ and age-matched controls, assessing their capacity for successful discrimination of the frequency, intensity and duration…

  13. Practiced musical style shapes auditory skills.

    PubMed

    Vuust, Peter; Brattico, Elvira; Seppänen, Miia; Näätänen, Risto; Tervaniemi, Mari

    2012-04-01

    Musicians' processing of sounds depends highly on instrument, performance practice, and level of expertise. Here, we measured the mismatch negativity (MMN), a preattentive brain response, to six types of musical feature change in musicians playing three distinct styles of music (classical, jazz, and rock/pop) and in nonmusicians using a novel, fast, and musical-sounding multifeature MMN paradigm. We found MMN to all six deviants, showing that MMN paradigms can be adapted to resemble a musical context. Furthermore, we found that jazz musicians had larger MMN amplitude than all other experimental groups across all sound features, indicating greater overall sensitivity to auditory outliers. We also observed a tendency toward shorter latency of the MMN to all feature changes in jazz musicians compared to band musicians. These findings indicate that the characteristics of the style of music played by musicians influence their perceptual skills and the brain processing of sound features embedded in music. PMID:22524351

  14. Reducing temporal fluctuations in MRI with the multichannel method SENSE

    NASA Astrophysics Data System (ADS)

    Moeller, Steen; Van de Moortele, Pierre-Francois; Goerke, Ute; Uğurbil, Kâmil

    2006-03-01

    Multi-channel acquisition is employed in MRI to decrease total imaging time. In this paper, artifact-free images are calculated by utilizing the difference in spatial encoding of the MR signal from neighboring channels. The encoding functions are estimated in the presence of noise and motion. For fMRI studies, the temporal stability of the signal is essential, since neuronal activity in the brain is detected by probing subtle BOLD (blood oxygen level dependent) signal changes. To ensure an artifact-free noise representation, a new type of weighting is used. By effectively selecting and eliminating low-SNR pixels, increased temporal stability is achieved. Using the parallel imaging method SENSE, the proposed method is tested with in vivo data to ensure noise suppression and demonstrate correct assignment of fMRI activation.
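    As background to the SENSE-based method above, the snippet below sketches the standard least-squares unfolding step for an acceleration factor of 2: each pair of pixels that alias onto one another is recovered from the coil images using the coil sensitivity maps. The simulated sensitivities and image are assumptions; the paper's temporal-stability weighting is not reproduced here.

      # Basic SENSE unfolding for R = 2 (standard parallel-imaging step; the
      # low-SNR pixel weighting proposed in the paper is not implemented).
      import numpy as np

      def sense_unfold_r2(aliased, sens):
          """aliased: (n_coils, ny//2, nx) folded coil images,
             sens:    (n_coils, ny, nx) coil sensitivity maps."""
          n_coils, ny_half, nx = aliased.shape
          out = np.zeros((2 * ny_half, nx), dtype=complex)
          for y in range(ny_half):
              for x in range(nx):
                  # the two image pixels that fold onto (y, x)
                  S = np.stack([sens[:, y, x], sens[:, y + ny_half, x]], axis=1)
                  rho, *_ = np.linalg.lstsq(S, aliased[:, y, x], rcond=None)
                  out[y, x], out[y + ny_half, x] = rho
          return out

      ny, nx, n_coils = 8, 8, 4
      rng = np.random.default_rng(1)
      truth = rng.random((ny, nx))
      sens = rng.random((n_coils, ny, nx)) + 0.1
      aliased = sens[:, :ny // 2] * truth[:ny // 2] + sens[:, ny // 2:] * truth[ny // 2:]
      print(np.allclose(sense_unfold_r2(aliased, sens).real, truth))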

  15. Multichannel DC SQUID sensor array for biomagnetic applications

    SciTech Connect

    Hoenig, H.E.; Daalmans, G.M.; Bar, L.; Bommel, F.; Paulus, A.; Uhl, D.; Weisse, H.J. ); Schneider, S.; Seifert, H.; Reichenberger, H.; Abraham-Fuchs, K. )

    1991-03-01

    This paper reports on KRENIKON, a biomagnetic multichannel system developed for medical diagnosis of the brain and heart. 37 axial first-order gradiometers - manufactured as flexible superconducting printed circuits - are arranged in a circular flat array of 19 cm diameter. Additionally, 3 orthogonal magnetometers are provided. The DC SQUIDs are fabricated in all-Nb technology, ten on a chip. The sensor system is operated in a shielded room with two layers of soft magnetic material and one layer of Al. The everyday noise level is 10 fT/√Hz at frequencies above 10 Hz. Within 2 years of operation in a normal urban surrounding, useful clinical applications have been demonstrated (e.g. for epilepsy and heart arrhythmias).

  16. Auditory Evoked Potential Response and Hearing Loss: A Review

    PubMed Central

    Paulraj, M. P; Subramaniam, Kamalraj; Yaccob, Sazali Bin; Adom, Abdul H. Bin; Hema, C. R

    2015-01-01

    Hypoacusis is the most prevalent sensory disability in the world and, consequently, it can impede speech in human beings. One of the best approaches to tackling this issue is to conduct early and effective hearing screening tests using electroencephalography (EEG). EEG-based hearing threshold determination is most suitable for persons who lack verbal communication and behavioral responses to sound stimulation. The auditory evoked potential (AEP) is a type of EEG signal recorded from the scalp in response to an acoustic stimulus. The goal of this review is to assess the current state of knowledge in estimating hearing threshold levels from the AEP response. The AEP response reflects the auditory ability of an individual. An intelligent hearing perception level system makes it possible to examine and determine the functional integrity of the auditory system. Systematic evaluation of EEG-based hearing perception level systems for predicting hearing loss in newborns, infants and people with multiple handicaps will be a priority for future research. PMID:25893012

  17. Auditory Reserve and the Legacy of Auditory Experience

    PubMed Central

    Skoe, Erika; Kraus, Nina

    2014-01-01

    Musical training during childhood has been linked to more robust encoding of sound later in life. We take this as evidence for an auditory reserve: a mechanism by which individuals capitalize on earlier life experiences to promote auditory processing. We assert that early auditory experiences guide how the reserve develops and is maintained over the lifetime. Experiences that occur after childhood, or which are limited in nature, are theorized to affect the reserve, although their influence on sensory processing may be less long-lasting and may potentially fade over time if not repeated. This auditory reserve may help to explain individual differences in how individuals cope with auditory impoverishment or loss of sensorineural function. PMID:25405381

  18. Restoration of multichannel microwave radiometric images

    NASA Technical Reports Server (NTRS)

    Chin, R. T.; Yeh, C.-L.; Olson, W. S.

    1985-01-01

    A constrained iterative image restoration method is applied to multichannel diffraction-limited imagery. This method is based on the Gerchberg-Papoulis algorithm utilizing incomplete information and partial constraints. The procedure is described using the orthogonal projection operators which project onto two prescribed subspaces iteratively. Its properties and limitations are presented. The effect of noise was investigated and a better understanding of the performance of the algorithm with noisy data has been achieved. The restoration scheme with the selection of appropriate constraints was applied to a practical problem. The 6.6, 10.7, 18, and 21 GHz satellite images obtained by the scanning multichannel microwave radiometer (SMMR), each having different spatial resolution, were restored to a common, high resolution (that of the 37 GHz channels) to demonstrate the effectiveness of the method. Both simulated data and real data were used in this study. The restored multichannel images may be utilized to retrieve rainfall distributions.
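    The Gerchberg-Papoulis iteration used here alternates between two projections: re-imposing the measured (diffraction-limited) spectrum in the frequency domain and enforcing prior constraints (support, non-negativity) in the image domain. The one-dimensional toy signal, passband and constraints below are assumptions chosen only to show the mechanics.

      # 1-D sketch of the Gerchberg-Papoulis projection iteration; the signal,
      # passband and support constraint are illustrative assumptions.
      import numpy as np

      def gerchberg_papoulis(measured_spectrum, band_mask, support_mask, n_iter=200):
          estimate = np.fft.ifft(measured_spectrum).real
          for _ in range(n_iter):
              # image-domain projection: keep support, clip negative values
              estimate = np.where(support_mask, np.clip(estimate, 0, None), 0.0)
              spectrum = np.fft.fft(estimate)
              # frequency-domain projection: restore the measured low frequencies
              spectrum[band_mask] = measured_spectrum[band_mask]
              estimate = np.fft.ifft(spectrum).real
          return estimate

      n = 128
      scene = np.zeros(n); scene[40:60] = 1.0
      support = np.zeros(n, bool); support[30:70] = True
      band = np.abs(np.fft.fftfreq(n)) < 0.1            # diffraction-limited passband
      measured = np.where(band, np.fft.fft(scene), 0.0)
      restored = gerchberg_papoulis(measured, band, support)
      print("max abs error:", np.abs(restored - scene).max())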

  19. Multichannel framework for singular quantum mechanics

    SciTech Connect

    Camblong, Horacio E.; Epele, Luis N.; Fanchiotti, Huner; García Canal, Carlos A.; Ordóñez, Carlos R.

    2014-01-15

    A multichannel S-matrix framework for singular quantum mechanics (SQM) subsumes the renormalization and self-adjoint extension methods and resolves its boundary-condition ambiguities. In addition to the standard channel accessible to a distant (“asymptotic”) observer, one supplementary channel opens up at each coordinate singularity, where local outgoing and ingoing singularity waves coexist. The channels are linked by a fully unitary S-matrix, which governs all possible scenarios, including cases with an apparent nonunitary behavior as viewed from asymptotic distances. -- Highlights: •A multichannel framework is proposed for singular quantum mechanics and analogues. •The framework unifies several established approaches for singular potentials. •Singular points are treated as new scattering channels. •Nonunitary asymptotic behavior is subsumed in a unitary multichannel S-matrix. •Conformal quantum mechanics and the inverse quartic potential are highlighted.
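    For orientation, the unitarity property invoked above can be written in standard multichannel scattering notation; this is the generic textbook statement (assumed here), not the paper's specific channel construction.

      % Generic multichannel unitarity condition (standard notation, assumed):
      \[
        S^{\dagger} S = \mathbb{1},
        \qquad
        \sum_{c} \lvert S_{ca} \rvert^{2} = 1
        \quad \text{for every incoming channel } a,
      \]
      % i.e. flux that appears to vanish from the asymptotic channel is accounted
      % for by the channels opened at the coordinate singularities.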

  20. Neural Biomarkers for Dyslexia, ADHD, and ADD in the Auditory Cortex of Children.

    PubMed

    Serrallach, Bettina; Groß, Christine; Bernhofs, Valdis; Engelmann, Dorte; Benner, Jan; Gündert, Nadine; Blatow, Maria; Wengenroth, Martina; Seitz, Angelika; Brunner, Monika; Seither, Stefan; Parncutt, Richard; Schneider, Peter; Seither-Preisler, Annemarie

    2016-01-01

    Dyslexia, attention deficit hyperactivity disorder (ADHD), and attention deficit disorder (ADD) show distinct clinical profiles that may include auditory and language-related impairments. Currently, an objective brain-based diagnosis of these developmental disorders is still unavailable. We investigated the neuro-auditory systems of dyslexic, ADHD, ADD, and age-matched control children (N = 147) using neuroimaging, magnetoencephalography and psychoacoustics. All disorder subgroups exhibited an oversized left planum temporale and an abnormal interhemispheric asynchrony (10-40 ms) of the primary auditory evoked P1-response. Considering right auditory cortex morphology, bilateral P1 source waveform shapes, and auditory performance, the three disorder subgroups could be reliably differentiated with outstanding accuracies of 89-98%. We therefore for the first time provide differential biomarkers for a brain-based diagnosis of dyslexia, ADHD, and ADD. The method not only allowed for clear discrimination between two subtypes of attentional disorders (ADHD and ADD), a topic controversially discussed for decades in the scientific community, but also revealed the potential for objectively identifying comorbid cases. Notably, in children playing a musical instrument, the observed interhemispheric asynchronies were reduced by about two-thirds after three and a half years of training, thus suggesting a strong beneficial influence of music experience on brain development. These findings might have far-reaching implications for both research and practice and enable a profound understanding of the brain-related etiology, diagnosis, and musically based therapy of common auditory-related developmental disorders and learning disabilities. PMID:27471442

  1. Neural Biomarkers for Dyslexia, ADHD, and ADD in the Auditory Cortex of Children

    PubMed Central

    Serrallach, Bettina; Groß, Christine; Bernhofs, Valdis; Engelmann, Dorte; Benner, Jan; Gündert, Nadine; Blatow, Maria; Wengenroth, Martina; Seitz, Angelika; Brunner, Monika; Seither, Stefan; Parncutt, Richard; Schneider, Peter; Seither-Preisler, Annemarie

    2016-01-01

    Dyslexia, attention deficit hyperactivity disorder (ADHD), and attention deficit disorder (ADD) show distinct clinical profiles that may include auditory and language-related impairments. Currently, an objective brain-based diagnosis of these developmental disorders is still unavailable. We investigated the neuro-auditory systems of dyslexic, ADHD, ADD, and age-matched control children (N = 147) using neuroimaging, magnetoencephalography and psychoacoustics. All disorder subgroups exhibited an oversized left planum temporale and an abnormal interhemispheric asynchrony (10–40 ms) of the primary auditory evoked P1-response. Considering right auditory cortex morphology, bilateral P1 source waveform shapes, and auditory performance, the three disorder subgroups could be reliably differentiated with outstanding accuracies of 89–98%. We therefore for the first time provide differential biomarkers for a brain-based diagnosis of dyslexia, ADHD, and ADD. The method not only allowed for clear discrimination between two subtypes of attentional disorders (ADHD and ADD), a topic controversially discussed for decades in the scientific community, but also revealed the potential for objectively identifying comorbid cases. Notably, in children playing a musical instrument, the observed interhemispheric asynchronies were reduced by about two-thirds after three and a half years of training, thus suggesting a strong beneficial influence of music experience on brain development. These findings might have far-reaching implications for both research and practice and enable a profound understanding of the brain-related etiology, diagnosis, and musically based therapy of common auditory-related developmental disorders and learning disabilities. PMID:27471442

  2. Hypermnesia using auditory input.

    PubMed

    Allen, J

    1992-07-01

    The author investigated whether hypermnesia would occur with auditory input. In addition, the author examined the effects of subjects' knowledge that they would later be asked to recall the stimuli. Two groups of 26 subjects each were given three successive recall trials after they listened to an audiotape of 59 high-imagery nouns. The subjects in the uninformed group were not told that they would later be asked to remember the words; those in the informed group were. Hypermnesia was evident, but only in the uninformed group. PMID:1447564

  3. Restoration of multichannel microwave radiometric images

    NASA Technical Reports Server (NTRS)

    Chin, R. T.; Yeh, C. L.; Olson, W. S.

    1983-01-01

    A constrained iterative image restoration method is applied to multichannel diffraction-limited imagery. This method is based on the Gerchberg-Papoulis algorithm utilizing incomplete information and partial constraints. The procedure is described using the orthogonal projection operators which project onto two prescribed subspaces iteratively. Some of its properties and limitations are also presented. The selection of appropriate constraints was emphasized in a practical application. Multichannel microwave images, each having different spatial resolution, were restored to a common highest resolution to demonstrate the effectiveness of the method. Both noise-free and noisy images were used in this investigation.

  4. Software compensated multichannel pressure sensing system

    NASA Technical Reports Server (NTRS)

    Chapman, John J.

    1990-01-01

    A PC-based software system is described which can be used for data acquisition and thermal-error correction of a multichannel pressure-sensor system developed for use in a cryogenic environment. The software incorporates pressure-sensitivity and sensor-offset compensation files into thermal error-correction algorithms, and the sensors are calibrated by simulating the operating conditions. The system is found to be effective in the collecting, storing, and processing of multichannel pressure-sensor data to correct thermally induced offset and sensitivity errors.
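    A minimal sketch of the kind of compensation described, under the assumption that each channel carries calibration tables of offset and sensitivity versus temperature: the tables are interpolated at the measured temperature and applied to the raw reading. All numbers and the file layout below are hypothetical.

      # Hypothetical per-channel thermal compensation: interpolate calibrated
      # offset and sensitivity at the measured temperature, then correct the reading.
      import numpy as np

      cal_temps  = np.array([77.0, 150.0, 225.0, 300.0])   # calibration temperatures (K)
      offset_cal = np.array([0.12, 0.08, 0.05, 0.02])      # volts (one channel)
      sens_cal   = np.array([0.95, 0.98, 1.00, 1.02])      # volts per kPa (one channel)

      def corrected_pressure(raw_volts, temp_k):
          offset = np.interp(temp_k, cal_temps, offset_cal)    # thermally induced offset
          sensitivity = np.interp(temp_k, cal_temps, sens_cal) # thermally induced sensitivity
          return (raw_volts - offset) / sensitivity            # pressure in kPa

      print(corrected_pressure(2.05, 120.0))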

  5. Multichannel blind deconvolution of spatially misaligned images.

    PubMed

    Sroubek, Filip; Flusser, Jan

    2005-07-01

    Existing multichannel blind restoration techniques assume perfect spatial alignment of channels, correct estimation of blur size, and are prone to noise. We developed an alternating minimization scheme based on a maximum a posteriori estimation with a priori distribution of blurs derived from the multichannel framework and a priori distribution of original images defined by the variational integral. This stochastic approach enables us to recover the blurs and the original image from channels severely corrupted by noise. We observe that the exact knowledge of the blur size is not necessary, and we prove that translation misregistration up to a certain extent can be automatically removed in the restoration process. PMID:16028551
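    The alternating-minimization idea can be sketched compactly if the paper's priors are replaced by simple quadratic regularizers and circular convolution is assumed; the image and the per-channel blurs then have closed-form updates in the Fourier domain. This is a rough stand-in for the MAP scheme described above, with all parameters chosen arbitrarily.

      # Rough multichannel alternating-minimization sketch (quadratic regularizers,
      # circular convolution); not the paper's MAP formulation or priors.
      import numpy as np

      def alt_min_blind_deconv(z, n_iter=30, lam=1e-2, gamma=1e-3):
          """z: (n_channels, H, W) degraded observations of the same scene."""
          n_ch, H, W = z.shape
          Z = np.fft.fft2(z)
          # frequency response of a discrete Laplacian (image smoothness prior)
          lap = np.zeros((H, W)); lap[0, 0] = 4
          lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = -1
          L2 = np.abs(np.fft.fft2(lap)) ** 2
          Hf = np.ones((n_ch, H, W), dtype=complex)          # initial blurs: identity
          for _ in range(n_iter):
              # image update: closed-form Wiener-like solve over all channels
              U = (np.conj(Hf) * Z).sum(0) / ((np.abs(Hf) ** 2).sum(0) + lam * L2)
              # blur update, one closed-form solve per channel
              Hf = np.conj(U) * Z / (np.abs(U) ** 2 + gamma)
          return np.fft.ifft2(U).real, np.fft.ifft2(Hf).real

      rng = np.random.default_rng(3)
      image_est, blurs_est = alt_min_blind_deconv(rng.random((2, 32, 32)))
      print(image_est.shape, blurs_est.shape)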

  6. Optical multichannel sensing of skin blood pulsations

    NASA Astrophysics Data System (ADS)

    Spigulis, Janis; Erts, Renars; Kukulis, Indulis; Ozols, Maris; Prieditis, Karlis

    2004-09-01

    Time-resolved detection and analysis of skin back-scattered optical signals (reflection photoplethysmography, or PPG) provide information on skin blood volume pulsations and can serve for cardiovascular assessment. The multi-channel PPG concept has been developed and clinically verified in this study. Portable two- and four-channel PPG monitoring devices have been designed for real-time data acquisition and processing. The multi-channel devices were successfully applied for cardiovascular fitness tests and for early detection of arterial occlusions in extremities. The optically measured heartbeat pulse wave propagation made it possible to estimate relative arterial resistances for numerous patients and healthy volunteers.

  7. Auditory neglect and related disorders.

    PubMed

    Gutschalk, Alexander; Dykstra, Andrew

    2015-01-01

    Neglect is a neurologic disorder, typically associated with lesions of the right hemisphere, in which patients are biased towards their ipsilesional - usually right - side of space while awareness for their contralesional - usually left - side is reduced or absent. Neglect is a multimodal disorder that often includes deficits in the auditory domain. Classically, auditory extinction, in which left-sided sounds that are correctly perceived in isolation are not detected in the presence of synchronous right-sided stimulation, has been considered the primary sign of auditory neglect. However, auditory extinction can also be observed after unilateral auditory cortex lesions and is thus not specific for neglect. Recent research has shown that patients with neglect are also impaired in maintaining sustained attention, on both sides, a fact that is reflected by an impairment of auditory target detection in continuous stimulation conditions. Perhaps the most impressive auditory symptom in full-blown neglect is alloacusis, in which patients mislocalize left-sided sound sources to their right, although even patients with less severe neglect still often show disturbance of auditory spatial perception, most commonly a lateralization bias towards the right. We discuss how these various disorders may be explained by a single model of neglect and review emerging interventions for patient rehabilitation. PMID:25726290

  8. Word Recognition in Auditory Cortex

    ERIC Educational Resources Information Center

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  9. Tinnitus and hyperacusis involve hyperactivity and enhanced connectivity in auditory-limbic-arousal-cerebellar network.

    PubMed

    Chen, Yu-Chen; Li, Xiaowei; Liu, Lijie; Wang, Jian; Lu, Chun-Qiang; Yang, Ming; Jiao, Yun; Zang, Feng-Chao; Radziwon, Kelly; Chen, Guang-Di; Sun, Wei; Krishnan Muthaiah, Vijaya Prakash; Salvi, Richard; Teng, Gao-Jun

    2015-01-01

    Hearing loss often triggers an inescapable buzz (tinnitus) and causes everyday sounds to become intolerably loud (hyperacusis), but exactly where and how this occurs in the brain is unknown. To identify the neural substrate for these debilitating disorders, we induced both tinnitus and hyperacusis with an ototoxic drug (salicylate) and used behavioral, electrophysiological, and functional magnetic resonance imaging (fMRI) techniques to identify the tinnitus-hyperacusis network. Salicylate depressed the neural output of the cochlea, but vigorously amplified sound-evoked neural responses in the amygdala, medial geniculate, and auditory cortex. Resting-state fMRI revealed hyperactivity in an auditory network composed of inferior colliculus, medial geniculate, and auditory cortex with side branches to cerebellum, amygdala, and reticular formation. Functional connectivity revealed enhanced coupling within the auditory network and segments of the auditory network and cerebellum, reticular formation, amygdala, and hippocampus. A testable model accounting for distress, arousal, and gating of tinnitus and hyperacusis is proposed. PMID:25962854

  10. Lateralization of Auditory Language: An EEG Study of Bilingual Crow Indian Adolescents.

    ERIC Educational Resources Information Center

    Vocate, Donna R.

    A study was undertaken to learn whether involvement of the brain's right hemisphere in auditory language processing, a phenomenon found in a previous study of Crow-English bilinguals, was language-specific. Alpha blocking response as measured by electroencephalography (EEG) was used as an indicator of brain activity. It was predicted that (1)…

  11. The Perception of Auditory Motion

    PubMed Central

    Leung, Johahn

    2016-01-01

    The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotational and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029

  12. The Perception of Auditory Motion.

    PubMed

    Carlile, Simon; Leung, Johahn

    2016-01-01

    The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotational and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029

  13. Options for Auditory Training for Adults with Hearing Loss.

    PubMed

    Olson, Anne D

    2015-11-01

    Hearing aid devices alone do not adequately compensate for sensory losses despite significant technological advances in digital technology. Overall use rates of amplification among adults with hearing loss remain low, and overall satisfaction and performance in noise can be improved. Although improved technology may partially address some listening problems, auditory training may be another alternative to improve speech recognition in noise and satisfaction with devices. The literature underlying auditory plasticity following placement of sensory devices suggests that additional auditory training may be needed for reorganization of the brain to occur. Furthermore, training may be required to acquire optimal performance from devices. Several auditory training programs that are readily accessible for adults with hearing loss, hearing aids, or cochlear implants are described. Programs that can be accessed via Web-based formats and smartphone technology are reviewed. A summary table is provided for easy access to programs with descriptions of features that allow hearing health care providers to assist clients in selecting the most appropriate auditory training program to fit their needs. PMID:27587915

  14. Insult-induced adaptive plasticity of the auditory system

    PubMed Central

    Gold, Joshua R.; Bajo, Victoria M.

    2014-01-01

    The brain displays a remarkable capacity for both widespread and region-specific modifications in response to environmental challenges, with adaptive processes bringing about the reweighting of connections in neural networks putatively required for optimizing performance and behavior. As an avenue for investigation, studies centered around changes in the mammalian auditory system, extending from the brainstem to the cortex, have revealed a plethora of mechanisms that operate in the context of sensory disruption after insult, be it lesion-, noise trauma-, drug-, or age-related. Of particular interest in recent work are those aspects of auditory processing which, after sensory disruption, change at multiple—if not all—levels of the auditory hierarchy. These include changes in excitatory, inhibitory and neuromodulatory networks, consistent with theories of homeostatic plasticity; functional alterations in gene expression and in protein levels; as well as broader network processing effects with cognitive and behavioral implications. Nevertheless, substantial debate remains regarding which of these processes may only be sequelae of the original insult, and which may, in fact, be maladaptively compelling further degradation of the organism's competence to cope with its disrupted sensory context. In this review, we aim to examine how the mammalian auditory system responds in the wake of particular insults, and to disambiguate how the changes that develop might underlie a correlated class of phantom disorders, including tinnitus and hyperacusis, which putatively are brought about through maladaptive neuroplastic disruptions to auditory networks governing the spatial and temporal processing of acoustic sensory information. PMID:24904256

  15. Intrahemispheric cortico-cortical connections of the human auditory cortex.

    PubMed

    Cammoun, Leila; Thiran, Jean Philippe; Griffa, Alessandra; Meuli, Reto; Hagmann, Patric; Clarke, Stephanie

    2015-11-01

    The human auditory cortex comprises the supratemporal plane and large parts of the temporal and parietal convexities. We have investigated the relevant intrahemispheric cortico-cortical connections using in vivo DSI tractography combined with landmark-based registration, automatic cortical parcellation and whole-brain structural connection matrices in 20 right-handed male subjects. On the supratemporal plane, the pattern of connectivity was related to the architectonically defined early-stage auditory areas. It revealed a three-tier architecture characterized by a cascade of connections from the primary auditory cortex to six adjacent non-primary areas and from there to the superior temporal gyrus. Graph theory-driven analysis confirmed the cascade-like connectivity pattern and demonstrated a strong degree of segregation and hierarchy within early-stage auditory areas. Putative higher-order areas on the temporal and parietal convexities had more widely spread local connectivity and long-range connections with the prefrontal cortex; analysis of optimal community structure revealed five distinct modules in each hemisphere. The pattern of temporo-parieto-frontal connectivity was partially asymmetrical. In conclusion, the human early-stage auditory cortical connectivity, as revealed by in vivo DSI tractography, has strong similarities with that of non-human primates. The modular architecture and hemispheric asymmetry in higher-order regions is compatible with segregated processing streams and lateralization of cognitive functions. PMID:25173473
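    The community-structure step mentioned above can be illustrated with a toy connectivity matrix and an off-the-shelf modularity routine; networkx's greedy modularity maximization is used here purely as a stand-in, and the matrix, threshold and sizes are assumptions, not the study's data or method.

      # Toy illustration of module (community) detection on a connectivity matrix;
      # the matrix, threshold and the greedy modularity routine are stand-ins.
      import numpy as np
      import networkx as nx
      from networkx.algorithms.community import greedy_modularity_communities

      rng = np.random.default_rng(2)
      conn = rng.random((20, 20)); conn = (conn + conn.T) / 2   # symmetric "connectivity"
      np.fill_diagonal(conn, 0)
      G = nx.from_numpy_array(np.where(conn > 0.7, conn, 0.0))  # keep strong links only
      modules = greedy_modularity_communities(G)
      print(len(modules), "modules of sizes", [len(m) for m in modules])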

  16. Auditory hallucinations: A review of the ERC "VOICE" project.

    PubMed

    Hugdahl, Kenneth

    2015-06-22

    In this invited review I provide a selective overview of recent research on brain mechanisms and cognitive processes involved in auditory hallucinations. The review is focused on research carried out in the "VOICE" ERC Advanced Grant Project, funded by the European Research Council, but I also review and discuss the literature in general. Auditory hallucinations are suggested to be perceptual phenomena, with a neuronal origin in the speech perception areas in the temporal lobe. The phenomenology of auditory hallucinations is conceptualized along three domains, or dimensions: a perceptual dimension, experienced as someone speaking to the patient; a cognitive dimension, experienced as an inability to inhibit or ignore the voices; and an emotional dimension, experienced as the "voices" having primarily a negative, or sinister, emotional tone. I will review cognitive, imaging, and neurochemistry data related to these dimensions, primarily the first two. The reviewed data are summarized in a model that sees auditory hallucinations as initiated from temporal lobe neuronal hyper-activation that draws attentional focus inward, and which is not inhibited due to frontal lobe hypo-activation. It is further suggested that this is maintained through abnormal glutamate and possibly gamma-amino-butyric-acid transmitter mediation, which could point towards new pathways for pharmacological treatment. A final section discusses new methods of acquiring quantitative data on the phenomenology and subjective experience of auditory hallucination that go beyond standard interview questionnaires, by suggesting an iPhone/iPod app. PMID:26110121

  17. Auditory color constancy

    NASA Astrophysics Data System (ADS)

    Kluender, Keith R.; Kiefte, Michael

    2003-10-01

    It is both true and efficient that sensorineural systems respond to change and little else. Perceptual systems do not record absolute level, be it loudness, pitch, brightness, or color. This fact has been demonstrated in every sensory domain. For example, the visual system is remarkable at maintaining color constancy over widely varying illumination such as sunlight and varieties of artificial light (incandescent, fluorescent, etc.) for which spectra reflected from objects differ dramatically. Results will be reported for a series of experiments demonstrating how auditory systems similarly compensate for reliable characteristics of spectral shape in acoustic signals. Specifically, listeners' perception of vowel sounds, characterized by both local (e.g., formants) and broad (e.g., tilt) spectral composition, changes radically depending upon reliable spectral composition of precursor signals. These experiments have been conducted using a variety of precursor signals consisting of meaningful and time-reversed vocoded sentences, as well as novel nonspeech precursors consisting of multiple filter poles modulating sinusoidally across a source spectrum with specific local and broad spectral characteristics. Constancy across widely varying spectral compositions shares much in common with visual color constancy. However, auditory spectral constancy appears to be more effective than visual constancy in compensating for local spectral fluctuations. [Work supported by NIDCD DC-04072.]

  18. Flexible information coding in human auditory cortex during perception, imagery, and STM of complex sounds.

    PubMed

    Linke, Annika C; Cusack, Rhodri

    2015-07-01

    Auditory cortex is the first cortical region of the human brain to process sounds. However, it has recently been shown that its neurons also fire in the absence of direct sensory input, during memory maintenance and imagery. This has commonly been taken to reflect neural coding of the same acoustic information as during the perception of sound. However, the results of the current study suggest that the type of information encoded in auditory cortex is highly flexible. During perception and memory maintenance, neural activity patterns are stimulus specific, reflecting individual sound properties. Auditory imagery of the same sounds evokes similar overall activity in auditory cortex as perception. However, during imagery abstracted, categorical information is encoded in the neural patterns, particularly when individuals are experiencing more vivid imagery. This highlights the necessity to move beyond traditional "brain mapping" inference in human neuroimaging, which assumes common regional activation implies similar mental representations. PMID:25603030

  19. Auditory hedonic phenotypes in dementia: A behavioural and neuroanatomical analysis.

    PubMed

    Fletcher, Phillip D; Downey, Laura E; Golden, Hannah L; Clark, Camilla N; Slattery, Catherine F; Paterson, Ross W; Schott, Jonathan M; Rohrer, Jonathan D; Rossor, Martin N; Warren, Jason D

    2015-06-01

    Patients with dementia may exhibit abnormally altered liking for environmental sounds and music but such altered auditory hedonic responses have not been studied systematically. Here we addressed this issue in a cohort of 73 patients representing major canonical dementia syndromes (behavioural variant frontotemporal dementia (bvFTD), semantic dementia (SD), progressive nonfluent aphasia (PNFA), amnestic Alzheimer's disease (AD)) using a semi-structured caregiver behavioural questionnaire and voxel-based morphometry (VBM) of patients' brain MR images. Behavioural responses signalling abnormal aversion to environmental sounds, aversion to music or heightened pleasure in music ('musicophilia') occurred in around half of the cohort but showed clear syndromic and genetic segregation, occurring in most patients with bvFTD but infrequently in PNFA and more commonly in association with MAPT than C9orf72 mutations. Aversion to sounds was the exclusive auditory phenotype in AD whereas more complex phenotypes including musicophilia were common in bvFTD and SD. Auditory hedonic alterations correlated with grey matter loss in a common, distributed, right-lateralised network including antero-mesial temporal lobe, insula, anterior cingulate and nucleus accumbens. Our findings suggest that abnormalities of auditory hedonic processing are a significant issue in common dementias. Sounds may constitute a novel probe of brain mechanisms for emotional salience coding that are targeted by neurodegenerative disease. PMID:25929717

  20. The frequency modulated auditory evoked response (FMAER), a technical advance for study of childhood language disorders: cortical source localization and selected case studies

    PubMed Central

    2013-01-01

    Background Language comprehension requires decoding of complex, rapidly changing speech streams. Detecting changes of frequency modulation (FM) within speech is hypothesized as essential for accurate phoneme detection, and thus, for spoken word comprehension. Despite past demonstration of FM auditory evoked response (FMAER) utility in language disorder investigations, it is seldom utilized clinically. This report's purpose is to facilitate clinical use by explaining analytic pitfalls, demonstrating sites of cortical origin, and illustrating potential utility. Results FMAERs collected from children with language disorders, including Developmental Dysphasia, Landau-Kleffner syndrome (LKS), and autism spectrum disorder (ASD) and also normal controls - utilizing multi-channel reference-free recordings assisted by discrete source analysis - provided demonstrations of cortical origin and examples of clinical utility. Recordings from inpatient epileptics with indwelling cortical electrodes provided direct assessment of FMAER origin. The FMAER is shown to normally arise from bilateral posterior superior temporal gyri and immediate temporal lobe surround. Childhood language disorders associated with prominent receptive deficits demonstrate absent left or bilateral FMAER temporal lobe responses. When receptive language is spared, the FMAER may remain present bilaterally. Analyses based upon mastoid or ear reference electrodes are shown to result in erroneous conclusions. Serial FMAER studies may dynamically track status of underlying language processing in LKS. FMAERs in ASD with language impairment may be normal or abnormal. Cortical FMAERs can locate language cortex when conventional cortical stimulation does not. Conclusion The FMAER measures the processing by the superior temporal gyri and adjacent cortex of rapid frequency modulation within an auditory stream. Clinical disorders associated with receptive deficits are shown to demonstrate absent left or bilateral

  1. Multichannel analyzers at high rates of input

    NASA Technical Reports Server (NTRS)

    Rudnick, S. J.; Strauss, M. G.

    1969-01-01

    Multichannel analyzer, used with a gating system incorporating pole-zero compensation, pile-up rejection, and baseline-restoration, achieves good resolution at high rates of input. It improves resolution, reduces tailing and rate-contributed continuum, and eliminates spectral shift.

  2. Multi-channel electric aerosol spectrometer

    NASA Astrophysics Data System (ADS)

    Mirme, A.; Noppel, M.; Peil, I.; Salm, J.; Tamm, E.; Tammet, H.

    Multi-channel electric mobility spectrometry is one of the most efficient techniques for the rapid measurement of an unstable aerosol particle size spectrum. The wide measuring range of the spectrometer, extending up to 10 microns, is achieved by applying diffusional and field charging mechanisms simultaneously. On-line data processing is carried out with a microcomputer. Experimental calibration ensures correctness of measurement.

  3. A multi-channel waveform digitizer system

    SciTech Connect

    Bieser, F.; Muller, W.F.J. )

    1990-04-01

    The authors report on the design and performance of a multichannel waveform digitizer system for use with the Multiple Sample Ionization Chamber (MUSIC) Detector at the Bevalac. 128 channels of 20 MHz Flash ADC plus 256 word deep memory are housed in a single crate. Digital thresholds and hit pattern logic facilitate zero suppression during readout which is performed over a standard VME bus.

  4. Asymmetry in primary auditory cortex activity in tinnitus patients and controls.

    PubMed

    Geven, L I; de Kleine, E; Willemsen, A T M; van Dijk, P

    2014-01-01

    Tinnitus is a bothersome phantom sound percept and its neural correlates are not yet disentangled. Previously published papers, using [(18)F]-fluoro-deoxyglucose positron emission tomography (FDG-PET), have suggested an increased metabolism in the left primary auditory cortex in tinnitus patients. This unilateral hyperactivity has been used as a target in localized treatments such as transcranial magnetic stimulation. The purpose of the current study was to test whether left-sided hyperactivity in the auditory cortex is specific to tinnitus or is a general characteristic of the auditory system unrelated to tinnitus. Therefore, FDG-PET was used to measure brain metabolism in 20 tinnitus patients and to compare their results to those in 19 control subjects without tinnitus. In contrast to our expectation, there was no hyperactivity associated with tinnitus. Nevertheless, the activity in the left primary auditory cortex was higher than in the right primary auditory cortex, but this asymmetry was present in both tinnitus patients and control subjects. In contrast, the lateralization in secondary auditory cortex was opposite, with higher activation in the right hemisphere. These data show that hemispheric asymmetries in the metabolic resting activity of the auditory cortex are present, but they are not associated with tinnitus and are a general characteristic of the normal brain. PMID:24161276

  5. A lateralized auditory evoked potential elicited when auditory objects are defined by spatial motion.

    PubMed

    Butcher, Andrew; Govenlock, Stanley W; Tata, Matthew S

    2011-02-01

    Scene analysis involves the process of segmenting a field of overlapping objects from each other and from the background. It is a fundamental stage of perception in both vision and hearing. The auditory system encodes complex cues that allow listeners to find boundaries between sequential objects, even when no gap of silence exists between them. In this sense, object perception in hearing is similar to perceiving visual objects defined by isoluminant color, motion or binocular disparity. Motion is one such cue: when a moving sound abruptly disappears from one location and instantly reappears somewhere else, the listener perceives two sequential auditory objects. Smooth reversals of motion direction do not produce this segmentation. We investigated the brain electrical responses evoked by this spatial segmentation cue and compared them to the familiar auditory evoked potential elicited by sound onsets. Segmentation events evoke a pattern of negative and positive deflections that are unlike those evoked by onsets. We identified a negative component in the waveform - the Lateralized Object-Related Negativity - generated by the hemisphere contralateral to the side on which the new sound appears. The relationship between this component and similar components found in related paradigms is considered. PMID:21056097

  6. Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans.

    PubMed

    Hattori, Yuko; Tomonaga, Masaki; Matsuzawa, Tetsuro

    2015-01-01

    Humans tend to spontaneously align their movements in response to visual (e.g., swinging pendulum) and auditory rhythms (e.g., hearing music while walking). Particularly in the case of the response to auditory rhythms, neuroscientific research has indicated that motor resources are also recruited while perceiving an auditory rhythm (or regular pulse), suggesting a tight link between the auditory and motor systems in the human brain. However, the evolutionary origin of spontaneous responses to auditory rhythms is unclear. Here, we report that chimpanzees and humans show a similar distractor effect in perceiving isochronous rhythms during rhythmic movement. We used isochronous auditory rhythms as distractor stimuli during self-paced alternate tapping of two keys of an electronic keyboard by humans and chimpanzees. When the tempo was similar to their spontaneous motor tempo, tapping onset was influenced by intermittent entrainment to auditory rhythms. Although this effect itself is not an advanced rhythmic ability such as dancing or singing, our results suggest that, to some extent, the biological foundation for spontaneous responses to auditory rhythms was already deeply rooted in the common ancestor of chimpanzees and humans, 6 million years ago. This also suggests the possibility of a common attentional mechanism, as proposed by the dynamic attending theory, underlying the effect of perceiving external rhythms on motor movement. PMID:26132703

  7. Frequency-specific disruptions of neuronal oscillations reveal aberrant auditory processing in schizophrenia.

    PubMed

    Hayrynen, Lauren K; Hamm, Jordan P; Sponheim, Scott R; Clementz, Brett A

    2016-06-01

    Individuals with schizophrenia exhibit abnormalities in evoked brain responses in oddball paradigms. These could result from (a) insufficient salience-related cortical signaling (P300), (b) insufficient suppression of irrelevant aspects of the auditory environment, or (c) excessive neural noise. We tested whether disruption of ongoing auditory steady-state responses at predetermined frequencies informed which of these issues contribute to auditory stimulus relevance processing abnormalities in schizophrenia. Magnetoencephalography data were collected for 15 schizophrenia and 15 healthy subjects during an auditory oddball paradigm (25% targets; 1-s interstimulus interval). Auditory stimuli (pure tones: 1 kHz standards, 2 kHz targets) were administered during four continuous background (auditory steady-state) stimulation conditions: (1) no stimulation, (2) 24 Hz, (3) 40 Hz, and (4) 88 Hz. The modulation of the auditory steady-state response (aSSR) and the evoked responses to the transient stimuli were quantified and compared across groups. In comparison to healthy participants, the schizophrenia group showed greater disruption of the ongoing aSSR by targets regardless of steady-state frequency, and reduced amplitude of both M100 and M300 event-related field components. During the no-stimulation condition, schizophrenia patients showed accentuation of left hemisphere 40 Hz response to both standard and target stimuli, indicating an effort to enhance local stimulus processing. Together, these findings suggest abnormalities in auditory stimulus relevance processing in schizophrenia patients stem from insufficient amplification of salient stimuli. PMID:26933842

  8. Norepinephrine Modulates Coding of Complex Vocalizations in the Songbird Auditory Cortex Independent of Local Neuroestrogen Synthesis.

    PubMed

    Ikeda, Maaya Z; Jeon, Sung David; Cowell, Rosemary A; Remage-Healey, Luke

    2015-06-24

    The catecholamine norepinephrine plays a significant role in auditory processing. Most studies to date have examined the effects of norepinephrine on the neuronal response to relatively simple stimuli, such as tones and calls. It is less clear how norepinephrine shapes the detection of complex syntactical sounds, as well as the coding properties of sensory neurons. Songbirds provide an opportunity to understand how auditory neurons encode complex, learned vocalizations, and the potential role of norepinephrine in modulating the neuronal computations for acoustic communication. Here, we infused norepinephrine into the zebra finch auditory cortex and performed extracellular recordings to study the modulation of song representations in single neurons. Consistent with its proposed role in enhancing signal detection, norepinephrine decreased spontaneous activity and firing during stimuli, yet it significantly enhanced the auditory signal-to-noise ratio. These effects were all mimicked by clonidine, an α-2 receptor agonist. Moreover, a pattern classifier analysis indicated that norepinephrine enhanced the ability of single neurons to accurately encode complex auditory stimuli. Because neuroestrogens are also known to enhance auditory processing in the songbird brain, we tested the hypothesis that norepinephrine actions depend on local estrogen synthesis. Neither norepinephrine nor adrenergic receptor antagonist infusion into the auditory cortex had detectable effects on local estradiol levels. Moreover, pretreatment with fadrozole, a specific aromatase inhibitor, did not block norepinephrine's neuromodulatory effects. Together, these findings indicate that norepinephrine enhances signal detection and information encoding for complex auditory stimuli by suppressing spontaneous "noise" activity and that these actions are independent of local neuroestrogen synthesis. PMID:26109659

  9. Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans

    PubMed Central

    Hattori, Yuko; Tomonaga, Masaki; Matsuzawa, Tetsuro

    2015-01-01

    Humans tend to spontaneously align their movements in response to visual (e.g., swinging pendulum) and auditory rhythms (e.g., hearing music while walking). Particularly in the case of the response to auditory rhythms, neuroscientific research has indicated that motor resources are also recruited while perceiving an auditory rhythm (or regular pulse), suggesting a tight link between the auditory and motor systems in the human brain. However, the evolutionary origin of spontaneous responses to auditory rhythms is unclear. Here, we report that chimpanzees and humans show a similar distractor effect in perceiving isochronous rhythms during rhythmic movement. We used isochronous auditory rhythms as distractor stimuli during self-paced alternate tapping of two keys of an electronic keyboard by humans and chimpanzees. When the tempo was similar to their spontaneous motor tempo, tapping onset was influenced by intermittent entrainment to auditory rhythms. Although this effect itself is not an advanced rhythmic ability such as dancing or singing, our results suggest that, to some extent, the biological foundation for spontaneous responses to auditory rhythms was already deeply rooted in the common ancestor of chimpanzees and humans, 6 million years ago. This also suggests the possibility of a common attentional mechanism, as proposed by the dynamic attending theory, underlying the effect of perceiving external rhythms on motor movement. PMID:26132703

  10. Is sensorimotor BCI performance influenced differently by mono, stereo, or 3-D auditory feedback?

    PubMed

    McCreadie, Karl A; Coyle, Damien H; Prasad, Girijesh

    2014-05-01

    Imagination of movement can be used as a control method for a brain-computer interface (BCI) allowing communication for the physically impaired. Visual feedback within such a closed loop system excludes those with visual problems and hence there is a need for alternative sensory feedback pathways. In the context of substituting the visual channel for the auditory channel, this study aims to add to the limited evidence that it is possible to substitute visual feedback for its auditory equivalent and assess the impact this has on BCI performance. Secondly, the study aims to determine for the first time if the type of auditory feedback method influences motor imagery performance significantly. Auditory feedback is presented using a stepped approach of single (mono), double (stereo), and multiple (vector base amplitude panning as an audio game) loudspeaker arrangements. Visual feedback involves a ball-basket paradigm and a spaceship game. Each session consists of either auditory or visual feedback only with runs of each type of feedback presentation method applied in each session. Results from seven subjects across five sessions of each feedback type (visual, auditory) (10 sessions in total) show that auditory feedback is a suitable substitute for the visual equivalent and that there are no statistical differences in the type of auditory feedback presented across five sessions. PMID:24691154
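
    The vector base amplitude panning (VBAP) condition above distributes a virtual sound source over a loudspeaker array by solving for per-loudspeaker gains from the speaker and source directions. A minimal two-loudspeaker sketch of that gain computation is shown below; the angles, layout, and normalization are illustrative assumptions rather than the study's configuration.

      import numpy as np

      def vbap_pair_gains(source_deg, spk1_deg, spk2_deg):
          """Amplitude gains for a two-loudspeaker base (2-D vector base amplitude panning)."""
          def unit(deg):
              a = np.deg2rad(deg)
              return np.array([np.cos(a), np.sin(a)])
          base = np.column_stack([unit(spk1_deg), unit(spk2_deg)])  # columns = speaker directions
          gains = np.linalg.solve(base, unit(source_deg))           # source = base @ gains
          gains = np.clip(gains, 0.0, None)                         # no negative (out-of-base) gains
          return gains / np.linalg.norm(gains)                      # constant-power normalization

      # Example: pan an auditory cue to 10 degrees on one side of a +/-30 degree speaker pair
      print(vbap_pair_gains(source_deg=-10.0, spk1_deg=30.0, spk2_deg=-30.0))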

  11. Cortical auditory disorders: clinical and psychoacoustic features.

    PubMed Central

    Mendez, M F; Geehan, G R

    1988-01-01

    The symptoms of two patients with bilateral cortical auditory lesions evolved from cortical deafness to other auditory syndromes: generalised auditory agnosia, amusia and/or pure word deafness, and a residual impairment of temporal sequencing. On investigation, both had dysacusis, absent middle latency evoked responses, acoustic errors in sound recognition and matching, inconsistent auditory behaviours, and similarly disturbed psychoacoustic discrimination tasks. These findings indicate that the different clinical syndromes caused by cortical auditory lesions form a spectrum of related auditory processing disorders. Differences between syndromes may depend on the degree of involvement of a primary cortical processing system, the more diffuse accessory system, and possibly the efferent auditory system. PMID:2450968

  12. Hearing in action; auditory properties of neurons in the red nucleus of alert primates

    PubMed Central

    Lovell, Jonathan M.; Mylius, Judith; Scheich, Henning; Brosch, Michael

    2014-01-01

    The responses of neurons in the Red Nucleus pars magnocellularis (RNm) to both tone bursts and electrical stimulation were observed in three cynomolgus monkeys (Macaca fascicularis), in a series of studies primarily designed to characterize the influence of the dopaminergic ventral midbrain on auditory processing. Compared with what is known about its role in motor behavior, little is known about the sensory response properties of neurons in the red nucleus (RN), particularly those concerning the auditory modality. Sites in the RN were recognized by observing electrically evoked body movements characteristic for this deep brain structure. In this study we applied brief monopolar electrical stimulation to 118 deep brain sites at a maximum intensity of 200 μA, thus evoking minimal body movements. Auditory sensitivity of RN neurons was analyzed more thoroughly at 15 sites, with the majority exhibiting broad tuning curves and phase locking up to 1.03 kHz. Since the RN appears to receive inputs from a very early stage of the ascending auditory system, our results suggest that sounds can modify the motor control exerted by this brain nucleus. At selected locations, we also tested for the presence of functional connections between the RN and the auditory cortex by inserting additional microelectrodes into the auditory cortex and investigating how action potentials and local field potentials (LFPs) were affected by electrical stimulation of the RN. PMID:24860417
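
    Phase locking of the kind reported above (up to about 1 kHz) is conventionally quantified with the vector strength of spike times relative to the tone period. The short sketch below computes vector strength and the associated Rayleigh statistic; the simulated spike train is illustrative only.

      import numpy as np

      def vector_strength(spike_times_s, tone_hz):
          """Vector strength in [0, 1]: 1 = perfect phase locking, 0 = none."""
          phases = 2 * np.pi * tone_hz * np.asarray(spike_times_s)
          return np.abs(np.mean(np.exp(1j * phases)))

      def rayleigh_stat(spike_times_s, tone_hz):
          """2*n*VS^2; values above ~13.8 are conventionally taken as significant."""
          vs = vector_strength(spike_times_s, tone_hz)
          return 2 * len(spike_times_s) * vs ** 2

      # Example: spikes jittered around each cycle of a 500 Hz tone
      rng = np.random.default_rng(0)
      spikes = np.arange(0, 0.5, 1 / 500.0) + rng.normal(0, 1e-4, 250)
      print(vector_strength(spikes, 500.0), rayleigh_stat(spikes, 500.0))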

  13. Auditory evoked field measurement using magneto-impedance sensors

    NASA Astrophysics Data System (ADS)

    Wang, K.; Tajima, S.; Song, D.; Hamada, N.; Cai, C.; Uchiyama, T.

    2015-05-01

    The magnetic field of the human brain is extremely weak, and it is typically measured and monitored by magnetoencephalography using superconducting quantum interference devices. In this study, in order to measure the weak magnetic field of the brain, we constructed a Magneto-Impedance sensor (MI sensor) system that can cancel out the background noise without any magnetic shield. Based on our previous studies of brain wave measurements, we used two MI sensors in this system for monitoring both cerebral hemispheres. We recorded and compared the auditory evoked field signals of the subject, including the N100 (or N1) and the P300 (or P3) brain waves. The results suggest that the MI sensor can be applied to brain activity measurement.
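
    The abstract does not specify how the two-sensor system cancels background noise; one common software approach is to regress a reference (background-dominated) channel out of the signal channel, as in a synthetic gradiometer. The sketch below illustrates only that generic idea and is not the authors' implementation.

      import numpy as np

      def cancel_background(signal, reference):
          """Subtract the least-squares projection of a background reference channel."""
          ref = reference - reference.mean()
          sig = signal - signal.mean()
          scale = np.dot(sig, ref) / np.dot(ref, ref)   # optimal scaling of the reference
          return sig - scale * ref

      # Example: an evoked-like deflection buried in 50 Hz background seen by both channels
      fs = 1000.0
      t = np.arange(0, 1.0, 1.0 / fs)
      background = np.sin(2 * np.pi * 50 * t)
      evoked = 0.2 * np.exp(-((t - 0.1) / 0.02) ** 2)   # N100-like deflection
      signal = evoked + 0.9 * background
      cleaned = cancel_background(signal, background + 0.01 * np.random.randn(t.size))
      print(np.corrcoef(cleaned, evoked)[0, 1])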

  14. Auditory evoked field measurement using magneto-impedance sensors

    SciTech Connect

    Wang, K. Tajima, S.; Song, D.; Uchiyama, T.; Hamada, N.; Cai, C.

    2015-05-07

    The magnetic field of the human brain is extremely weak, and it is typically measured and monitored by magnetoencephalography using superconducting quantum interference devices. In this study, in order to measure the weak magnetic field of the brain, we constructed a Magneto-Impedance sensor (MI sensor) system that can cancel out the background noise without any magnetic shield. Based on our previous studies of brain wave measurements, we used two MI sensors in this system for monitoring both cerebral hemispheres. We recorded and compared the auditory evoked field signals of the subject, including the N100 (or N1) and the P300 (or P3) brain waves. The results suggest that the MI sensor can be applied to brain activity measurement.

  15. Auditory perspective taking.

    PubMed

    Martinson, Eric; Brock, Derek

    2013-06-01

    Effective communication with a mobile robot using speech is a difficult problem even when you can control the auditory scene. Robot self-noise or ego noise, echoes and reverberation, and human interference are all common sources of decreased intelligibility. Moreover, in real-world settings, these problems are routinely aggravated by a variety of sources of background noise. Military scenarios can be punctuated by high decibel noise from materiel and weaponry that would easily overwhelm a robot's normal speaking volume. Moreover, in nonmilitary settings, fans, computers, alarms, and transportation noise can cause enough interference to make a traditional speech interface unusable. This work presents and evaluates a prototype robotic interface that uses perspective taking to estimate the effectiveness of its own speech presentation and takes steps to improve intelligibility for human listeners. PMID:23096077

  16. Silent music reading: auditory imagery and visuotonal modality transfer in singers and non-singers.

    PubMed

    Hoppe, Christian; Splittstößer, Christoph; Fliessbach, Klaus; Trautner, Peter; Elger, Christian E; Weber, Bernd

    2014-11-01

    In daily life, responses are often facilitated by anticipatory imagery of expected targets which are announced by associated stimuli from different sensory modalities. Silent music reading represents an intriguing case of visuotonal modality transfer in working memory as it induces highly defined auditory imagery on the basis of presented visuospatial information (i.e. musical notes). Using functional MRI and a delayed sequence matching-to-sample paradigm, we compared brain activations during retention intervals (10s) of visual (VV) or tonal (TT) unimodal maintenance versus visuospatial-to-tonal modality transfer (VT) tasks. Visual or tonal sequences were comprised of six elements, white squares or tones, which were low, middle, or high regarding vertical screen position or pitch, respectively (presentation duration: 1.5s). For the cross-modal condition (VT, session 3), the visuospatial elements from condition VV (session 1) were re-defined as low, middle or high "notes" indicating low, middle or high tones from condition TT (session 2), respectively, and subjects had to match tonal sequences (probe) to previously presented note sequences. Tasks alternately had low or high cognitive load. To evaluate possible effects of music reading expertise, 15 singers and 15 non-musicians were included. Scanner task performance was excellent in both groups. Despite identity of applied visuospatial stimuli, visuotonal modality transfer versus visual maintenance (VT>VV) induced "inhibition" of visual brain areas and activation of primary and higher auditory brain areas which exceeded auditory activation elicited by tonal stimulation (VT>TT). This transfer-related visual-to-auditory activation shift occurred in both groups but was more pronounced in experts. Frontoparietal areas were activated by higher cognitive load but not by modality transfer. The auditory brain showed a potential to anticipate expected auditory target stimuli on the basis of non-auditory information and

  17. Auditory Processing Disorder in Children

    MedlinePlus

    Related topics listed on this MedlinePlus page include Auditory Neuropathy, Autism Spectrum Disorder: Communication Problems in Children, and Dysphagia.

  18. Classroom Demonstrations of Auditory Perception.

    ERIC Educational Resources Information Center

    Haws, LaDawn; Oppy, Brian J.

    2002-01-01

    Presents activities to help students gain understanding about auditory perception. Describes demonstrations that cover topics, such as sound localization, wave cancellation, frequency/pitch variation, and the influence of media on sound propagation. (CMK)

  19. Leiomyoma of External Auditory Canal.

    PubMed

    George, M V; Puthiyapurayil, Jamsheeda

    2016-09-01

    This article reports a case of piloleiomyoma of the external auditory canal, the 7th reported case of leiomyoma of the external auditory canal and only the 2nd arising from the arrectores pilorum muscles; the other five cases were angioleiomyomas arising from blood vessels. A 52-year-old male presented with a mass in the right external auditory canal and decreased hearing of 6 months' duration. Tumor excision was performed via an endaural approach, and the histopathological diagnosis was leiomyoma. Leiomyoma of the external auditory canal is extremely rare because of the scarcity of smooth muscle in the external canal, so it should be considered a very rare differential diagnosis for any tumor or polyp in the ear canal. PMID:27508144

  20. Cortical Synaptic Inhibition Declines during Auditory Learning

    PubMed Central

    von Trapp, Gardiner; Mowery, Todd M.; Kotak, Vibhakar C.; Sanes, Dan H.

    2015-01-01

    Auditory learning is associated with an enhanced representation of acoustic cues in primary auditory cortex, and modulation of inhibitory strength is causally involved in learning. If this inhibitory plasticity is associated with task learning and improvement, its expression should emerge and persist until task proficiency is achieved. We tested this idea by measuring changes to cortical inhibitory synaptic transmission as adult gerbils progressed through the process of associative learning and perceptual improvement. Using either of two procedures, aversive or appetitive conditioning, animals were trained to detect amplitude-modulated noise and then tested daily. Following each training session, a thalamocortical brain slice was generated, and inhibitory synaptic properties were recorded from layer 2/3 pyramidal neurons. Initial associative learning was accompanied by a profound reduction in the amplitude of spontaneous IPSCs (sIPSCs). However, sIPSC amplitude returned to control levels when animals reached asymptotic behavioral performance. In contrast, paired-pulse ratios decreased in trained animals as well as in control animals that experienced unpaired conditioned and unconditioned stimuli. This latter observation suggests that inhibitory release properties are modified during behavioral conditioning, even when an association between the sound and reinforcement cannot occur. These results suggest that associative learning is accompanied by a reduction of postsynaptic inhibitory strength that persists for several days during learning and perceptual improvement. PMID:25904785
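
    The paired-pulse ratio mentioned above is simply the amplitude of the second evoked response divided by that of the first, each measured against its pre-stimulus baseline. A minimal sketch on a synthetic current trace is shown below; stimulus times, kinetics, and sign conventions are illustrative assumptions.

      import numpy as np

      def evoked_amplitude(trace, fs, stim_s, window_s=0.05, baseline_s=0.01):
          """Peak amplitude after one stimulus, relative to the pre-stimulus baseline."""
          i0 = int(stim_s * fs)
          baseline = trace[i0 - int(baseline_s * fs):i0].mean()
          segment = trace[i0:i0 + int(window_s * fs)]
          return np.max(np.abs(segment - baseline))

      def paired_pulse_ratio(trace, fs, stim1_s, stim2_s):
          """PPR = amplitude of the second evoked response / amplitude of the first."""
          return evoked_amplitude(trace, fs, stim2_s) / evoked_amplitude(trace, fs, stim1_s)

      # Example: a synthetic IPSC pair with a depressed second response (PPR < 1)
      fs = 10000.0
      t = np.arange(0, 0.5, 1 / fs)
      ipsc = lambda t0, amp: amp * np.clip(t - t0, 0, None) * np.exp(-np.clip(t - t0, 0, None) / 0.01)
      trace = ipsc(0.1, 120.0) + ipsc(0.15, 80.0)
      print(paired_pulse_ratio(trace, fs, 0.1, 0.15))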

  1. Neural dynamics of phonological processing in the dorsal auditory stream.

    PubMed

    Liebenthal, Einat; Sabri, Merav; Beardsley, Scott A; Mangalathu-Arumana, Jain; Desai, Anjali

    2013-09-25

    Neuroanatomical models hypothesize a role for the dorsal auditory pathway in phonological processing as a feedforward efferent system (Davis and Johnsrude, 2007; Rauschecker and Scott, 2009; Hickok et al., 2011). But the functional organization of the pathway, in terms of time course of interactions between auditory, somatosensory, and motor regions, and the hemispheric lateralization pattern is largely unknown. Here, ambiguous duplex syllables, with elements presented dichotically at varying interaural asynchronies, were used to parametrically modulate phonological processing and associated neural activity in the human dorsal auditory stream. Subjects performed syllable and chirp identification tasks, while event-related potentials and functional magnetic resonance images were concurrently collected. Joint independent component analysis was applied to fuse the neuroimaging data and study the neural dynamics of brain regions involved in phonological processing with high spatiotemporal resolution. Results revealed a highly interactive neural network associated with phonological processing, composed of functional fields in posterior superior temporal gyrus (pSTG), inferior parietal lobule (IPL), and ventral central sulcus (vCS) that were engaged early and almost simultaneously (at 80-100 ms), consistent with a direct influence of articulatory somatomotor areas on phonemic perception. Left hemispheric lateralization was observed 250 ms earlier in IPL and vCS than pSTG, suggesting that functional specialization of somatomotor (and not auditory) areas determined lateralization in the dorsal auditory pathway. The temporal dynamics of the dorsal auditory pathway described here offer a new understanding of its functional organization and demonstrate that temporal information is essential to resolve neural circuits underlying complex behaviors. PMID:24068810
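
    Joint independent component analysis of the kind used above is commonly implemented by concatenating each subject's features from both modalities into one row and decomposing the stacked matrix so that the joint feature maps are the independent sources. The sketch below illustrates that generic scheme with scikit-learn's FastICA on synthetic data; dimensions and preprocessing are assumptions, not the authors' pipeline.

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(0)
      n_subjects, n_erp, n_fmri = 30, 200, 500

      # Synthetic stand-ins for per-subject ERP waveforms and fMRI contrast maps
      erp = rng.standard_normal((n_subjects, n_erp))
      fmri = rng.standard_normal((n_subjects, n_fmri))

      # z-score each modality and concatenate features per subject (the "joint" in joint ICA)
      z = lambda x: (x - x.mean(0)) / x.std(0)
      joint = np.hstack([z(erp), z(fmri)])                # shape (n_subjects, n_erp + n_fmri)

      # Decompose so that the concatenated feature maps are the independent sources
      ica = FastICA(n_components=5, random_state=0, max_iter=1000)
      joint_maps = ica.fit_transform(joint.T).T           # (n_components, n_erp + n_fmri)
      subject_loadings = ica.mixing_                      # (n_subjects, n_components)

      erp_maps, fmri_maps = joint_maps[:, :n_erp], joint_maps[:, n_erp:]
      print(subject_loadings.shape, erp_maps.shape, fmri_maps.shape)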

  2. Morphometric changes in subcortical structures of the central auditory pathway in mice with bilateral nodular heterotopia.

    PubMed

    Truong, Dongnhu T; Rendall, Amanda R; Rosen, Glenn D; Fitch, R Holly

    2015-04-01

    Malformations of cortical development (MCD) have been observed in human reading and language impaired populations. Injury-induced MCD in rodent models of reading disability show morphological changes in the auditory thalamic nucleus (medial geniculate nucleus; MGN) and auditory processing impairments, thus suggesting a link between MCD, MGN, and auditory processing behavior. Previous neuroanatomical examination of a BXD29 recombinant inbred strain (BXD29-Tlr4(lps-2J)/J) revealed MCD consisting of bilateral subcortical nodular heterotopia with partial callosal agenesis. Subsequent behavioral characterization showed a severe impairment in auditory processing-a deficient behavioral phenotype seen across both male and female BXD29-Tlr4(lps-2J)/J mice. In the present study we expanded upon the neuroanatomical findings in the BXD29-Tlr4(lps-2J)/J mutant mouse by investigating whether subcortical changes in cellular morphology are present in neural structures critical to central auditory processing (MGN, and the ventral and dorsal subdivisions of the cochlear nucleus; VCN and DCN, respectively). Stereological assessment of brain tissue of male and female BXD29-Tlr4(lps-2J)/J mice previously tested on an auditory processing battery revealed overall smaller neurons in the MGN of BXD29-Tlr4(lps-2J)/J mutant mice in comparison to BXD29/Ty coisogenic controls, regardless of sex. Interestingly, examination of the VCN and DCN revealed sexually dimorphic changes in neuronal size, with a distribution shift toward larger neurons in female BXD29-Tlr4(lps-2J)/J brains. These effects were not seen in males. Together, the combined data set supports and further expands the observed co-occurrence of MCD, auditory processing impairments, and changes in subcortical anatomy of the central auditory pathway. The current stereological findings also highlight sex differences in neuroanatomical presentation in the presence of a common auditory behavioral phenotype. PMID:25549859

  3. The shape of ears to come: dynamic coding of auditory space.

    PubMed

    King, A J.; Schnupp, J W.H.; Doubell, T P.

    2001-06-01

    In order to pinpoint the location of a sound source, we make use of a variety of spatial cues that arise from the direction-dependent manner in which sounds interact with the head, torso and external ears. Accurate sound localization relies on the neural discrimination of tiny differences in the values of these cues and requires that the brain circuits involved be calibrated to the cues experienced by each individual. There is growing evidence that the capacity for recalibrating auditory localization continues well into adult life. Many details of how the brain represents auditory space and of how those representations are shaped by learning and experience remain elusive. However, it is becoming increasingly clear that the task of processing auditory spatial information is distributed over different regions of the brain, some working hierarchically, others independently and in parallel, and each apparently using different strategies for encoding sound source location. PMID:11390297

  4. Multi-sensory integration in brainstem and auditory cortex.

    PubMed

    Basura, Gregory J; Koehler, Seth D; Shore, Susan E

    2012-11-16

    Tinnitus is the perception of sound in the absence of a physical sound stimulus. It is thought to arise from aberrant neural activity within central auditory pathways that may be influenced by multiple brain centers, including the somatosensory system. Auditory-somatosensory (bimodal) integration occurs in the dorsal cochlear nucleus (DCN), where electrical activation of somatosensory regions alters pyramidal cell spike timing and rates of sound stimuli. Moreover, in conditions of tinnitus, bimodal integration in DCN is enhanced, producing greater spontaneous and sound-driven neural activity, which are neural correlates of tinnitus. In primary auditory cortex (A1), a similar auditory-somatosensory integration has been described in the normal system (Lakatos et al., 2007), where sub-threshold multisensory modulation may be a direct reflection of subcortical multisensory responses (Tyll et al., 2011). The present work utilized simultaneous recordings from both DCN and A1 to directly compare bimodal integration across these separate brain stations of the intact auditory pathway. Four-shank, 32-channel electrodes were placed in DCN and A1 to simultaneously record tone-evoked unit activity in the presence and absence of spinal trigeminal nucleus (Sp5) electrical activation. Bimodal stimulation led to long-lasting facilitation or suppression of single and multi-unit responses to subsequent sound in both DCN and A1. Immediate (bimodal response) and long-lasting (bimodal plasticity) effects of Sp5-tone stimulation were facilitation or suppression of tone-evoked firing rates in DCN and A1 at all Sp5-tone pairing intervals (10, 20, and 40 ms), and greater suppression at 20 ms pairing-intervals for single unit responses. Understanding the complex relationships between DCN and A1 bimodal processing in the normal animal provides the basis for studying its disruption in hearing loss and tinnitus models. This article is part of a Special Issue entitled: Tinnitus Neuroscience

  5. Multi-sensory integration in brainstem and auditory cortex

    PubMed Central

    Basura, Gregory J.; Koehler, Seth D.; Shore, Susan E.

    2012-01-01

    Tinnitus is the perception of sound in the absence of a physical sound stimulus. It is thought to arise from aberrant neural activity within central auditory pathways that may be influenced by multiple brain centers, including the somatosensory system. Auditory-somatosensory (bimodal) integration occurs in the dorsal cochlear nucleus (DCN), where electrical activation of somatosensory regions alters pyramidal cell spike timing and rates of sound stimuli. Moreover, in conditions of tinnitus, bimodal integration in DCN is enhanced, producing greater spontaneous and sound-driven neural activity, which are neural correlates of tinnitus. In primary auditory cortex (A1), a similar auditory-somatosensory integration has been described in the normal system (Lakatos et al. 2007), where sub-threshold multisensory modulation may be a direct reflection of subcortical multisensory responses (Tyll et al. 2011). The present work utilized simultaneous recordings from both DCN and A1 to directly compare bimodal integration across these separate brain stations of the intact auditory pathway. Four-shank, 32-channel electrodes were placed in DCN and A1 to simultaneously record tone-evoked unit activity in the presence and absence of spinal trigeminal nucleus (Sp5) electrical activation. Bimodal stimulation led to long-lasting facilitation or suppression of single and multi-unit responses to subsequent sound in both DCN and A1. Immediate (bimodal response) and long-lasting (bimodal plasticity) effects of Sp5-tone stimulation were facilitation or suppression of tone-evoked firing rates in DCN and A1 at all Sp5-tone pairing intervals (10, 20, and 40 ms), and greater suppression at 20 ms pairing-intervals for single unit responses. Understanding the complex relationships between DCN and A1 bimodal processing in the normal animal provides the basis for studying its disruption in hearing loss and tinnitus models. PMID:22995545

  6. Coupling output of multichannel high power microwaves

    SciTech Connect

    Li Guolin; Shu Ting; Yuan Chengwei; Zhang Jun; Yang Jianhua; Jin Zhenxing; Yin Yi; Wu Dapeng; Zhu Jun; Ren Heming; Yang Jie

    2010-12-15

    The coupling output of multichannel high power microwaves is a promising technique for the development of high power microwave technologies, as it can enhance the output capacities of presently studied devices. Based on investigations of the spatial filtering and waveguide filtering methods, a hybrid filtering method is proposed for the coupling output of multichannel high power microwaves. As an example, a specific structure is designed for the coupling output of S/X/X band three-channel high power microwaves and investigated with the hybrid filtering method. In the experiments, a pulse of 4 GW X band beat waves and a pulse of 1.8 GW S band microwave are obtained.

  7. Multichannel radiometer calibration: a new approach

    NASA Astrophysics Data System (ADS)

    Diaz, Susana; Booth, Charles R.; Armstrong, Roy; Brunat, Claudio; Cabrera, Sergio; Camilion, Carolina; Casiccia, Claudio; Deferrari, Guillermo; Fuenzalida, Humberto; Lovengreen, Charlotte; Paladini, Alejandro; Pedroni, Jorge; Rosales, Alejandro; Zagarese, Horacio; Vernet, Maria

    2005-09-01

    The error in irradiance measured with Sun-calibrated multichannel radiometers may be large when the solar zenith angle (SZA) increases. This could be particularly detrimental in radiometers installed at mid and high latitudes, where SZAs at noon are larger than 50° during part of the year. When a multiregressive methodology, including the total ozone column and SZA, was applied in the calculation of the calibration constant, an important improvement was observed. By combining two different equations, an improvement was obtained at almost all the SZAs in the calibration. An independent test that compared the irradiance of a multichannel instrument and a spectroradiometer installed in Ushuaia, Argentina, was used to confirm the results.
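
    The multiregressive approach described above amounts to modeling the calibration constant as a function of solar zenith angle and total ozone column rather than treating it as fixed. The sketch below fits one plausible functional form by ordinary least squares; the regressors and the synthetic data are illustrative assumptions, since the abstract does not give the exact equations.

      import numpy as np

      def fit_calibration(sza_deg, ozone_du, cal_constant):
          """Least-squares fit of cal ~ b0 + b1*cos(SZA) + b2*ozone (one plausible form)."""
          X = np.column_stack([np.ones_like(sza_deg),
                               np.cos(np.deg2rad(sza_deg)),
                               ozone_du])
          coef, *_ = np.linalg.lstsq(X, cal_constant, rcond=None)
          return coef

      def predict_calibration(coef, sza_deg, ozone_du):
          return coef[0] + coef[1] * np.cos(np.deg2rad(sza_deg)) + coef[2] * ozone_du

      # Example with synthetic calibration data spanning large zenith angles
      rng = np.random.default_rng(1)
      sza = rng.uniform(20, 80, 200)
      ozone = rng.uniform(250, 400, 200)
      cal = 1.0 + 0.3 * np.cos(np.deg2rad(sza)) - 5e-4 * ozone + 0.01 * rng.standard_normal(200)
      coef = fit_calibration(sza, ozone, cal)
      print(predict_calibration(coef, sza_deg=65.0, ozone_du=320.0))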

  8. Multichannel radiometer calibration: a new approach.

    PubMed

    Diaz, Susana; Booth, Charles R; Armstrong, Roy; Brunat, Claudio; Cabrera, Sergio; Camilion, Carolina; Casiccia, Claudio; Deferrari, Guillermo; Fuenzalida, Humberto; Lovengreen, Charlotte; Paladini, Alejandro; Pedroni, Jorge; Rosales, Alejandro; Zagarese, Horacio; Vernet, Maria

    2005-09-10

    The error in irradiance measured with Sun-calibrated multichannel radiometers may be large when the solar zenith angle (SZA) increases. This could be particularly detrimental in radiometers installed at mid and high latitudes, where SZAs at noon are larger than 50 degrees during part of the year. When a multiregressive methodology, including the total ozone column and SZA, was applied in the calculation of the calibration constant, an important improvement was observed. By combining two different equations, an improvement was obtained at almost all the SZAs in the calibration. An independent test that compared the irradiance of a multichannel instrument and a spectroradiometer installed in Ushuaia, Argentina, was used to confirm the results. PMID:16161648

  9. Multichannel simultaneous magnetic induction measurement system (MUSIMITOS).

    PubMed

    Steffen, Matthias; Heimann, Konrad; Bernstein, Nina; Leonhardt, Steffen

    2008-06-01

    Non-contact heart and lung activity monitoring would be a desirable supplement to conventional monitoring techniques. Based on the potential of non-contact magnetic induction measurements, requirements for an adequate monitoring system were estimated. This formed the basis for the development of the presented extendable multichannel simultaneous magnetic induction measurement system (MUSIMITOS). Special focus was given to the dynamic behaviour and simultaneous multichannel measurements, so that the system allows for up to 14 receiver coils working simultaneously at 6 excitation frequencies. Moreover, a real-time software concept for online signal processing visualization in combination with a fast software demodulation is presented. Finally, first steps towards a clinical application are pointed out and technical performance as well as first in vivo measurements are presented. This paper covers some aspects previously presented in Steffen and Leonhardt (2007 Proc. 13th Int. Conf. on Electrical Bioimpedance and the 8th Conf. on Electrical Impedance Tomography, Graz 2007). PMID:18544830
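
    Simultaneous operation at several excitation frequencies, as in the system above, is typically resolved in software by quadrature (lock-in style) demodulation of each receiver channel at each excitation frequency. The sketch below shows that generic idea for a single channel; the frequencies, sampling rate, and averaging used here are assumptions, not MUSIMITOS parameters.

      import numpy as np

      def demodulate(signal, fs, f_carrier):
          """Amplitude and phase of the component at f_carrier (quadrature demodulation)."""
          t = np.arange(signal.size) / fs
          i = 2 * np.mean(signal * np.cos(2 * np.pi * f_carrier * t))   # in-phase
          q = 2 * np.mean(signal * np.sin(2 * np.pi * f_carrier * t))   # quadrature
          return np.hypot(i, q), np.arctan2(q, i)

      # Example: one receiver-coil trace carrying two excitation frequencies at once
      fs = 1_000_000.0
      t = np.arange(0, 0.01, 1 / fs)
      trace = 0.8 * np.cos(2 * np.pi * 100_000 * t) + 0.3 * np.cos(2 * np.pi * 180_000 * t + 0.5)
      for f in (100_000.0, 180_000.0):
          amp, phase = demodulate(trace, fs, f)
          print(f, round(amp, 3), round(phase, 3))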

  10. Auditory nerve disease and auditory neuropathy spectrum disorders.

    PubMed

    Kaga, Kimitaka

    2016-02-01

    In 1996, a new type of bilateral hearing disorder was described and published almost simultaneously by Kaga et al. [1] and Starr et al. [2]. Although the pathophysiology of this disorder as reported by each author was essentially identical, Kaga used the term "auditory nerve disease" and Starr used the term "auditory neuropathy". Auditory neuropathy (AN) in adults is an acquired disorder characterized by mild-to-moderate pure-tone hearing loss, poor speech discrimination, and absence of the auditory brainstem response (ABR), all in the presence of normal cochlear outer hair cell function as indicated by normal distortion product otoacoustic emissions (DPOAEs) and evoked summating potentials (SPs) by electrocochleography (ECoG). A variety of processes and etiologies are thought to be involved in its pathophysiology, including mutations of the OTOF and/or OPA1 genes. Most of the subsequent reports in the literature discuss the various auditory profiles of patients with AN [3,4], and in this report we present the profiles of an additional 17 cases of adult AN. Cochlear implants are useful for the reacquisition of hearing in adult AN although hearing aids are ineffective. In 2008, the new term of Auditory Neuropathy Spectrum Disorders (ANSD) was proposed by the Colorado Children's Hospital group following a comprehensive study of newborn hearing test results. When ABRs were absent and DPOAEs were present in particular cases during newborn screening, they were classified as ANSD. In 2013, our group at the Tokyo Medical Center classified ANSD into three types by following changes in ABRs and DPOAEs over time with development. In Type I, there is normalization of hearing over time; Type II shows a change into profound hearing loss; and Type III is true auditory neuropathy (AN). We emphasize that, in adults, ANSD is not the same as AN. PMID:26209259

  11. Auditory Consonant Trigrams: A Psychometric Update†.

    PubMed

    Shura, Robert D; Rowland, Jared A; Miskey, Holly M

    2016-02-01

    The Auditory Consonant Trigrams (ACT) test was developed to evaluate immediate memory in the absence of rehearsal. There are few psychometric studies of the measure and a lack of normative data using samples from the United States or Veterans. ACT data were examined for 184 participants who passed the Word Memory Test, denied a history of moderate to severe traumatic brain injury (TBI), and consented for research purposes only. Reliability and construct validity were examined and normative data developed using a healthy subsample. Cronbach's α for the ACT total score was 0.79. Regression analyses suggested that years of education, estimated premorbid IQ, psychomotor speed, working memory, and impulsivity had the strongest relationships with performance on the ACT. Performance was unrelated to posttraumatic stress disorder and remote mild TBI, but the presence of major depressive disorder was associated with lower total scores. These results demonstrate the ACT has adequate psychometric properties. PMID:26645315
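
    Cronbach's α, reported above for the ACT total score, is computed from the item variances and the variance of the summed score. The short sketch below implements the standard formula; the toy score matrix is illustrative only.

      import numpy as np

      def cronbach_alpha(scores):
          """scores: array of shape (n_participants, n_items)."""
          scores = np.asarray(scores, dtype=float)
          k = scores.shape[1]
          item_vars = scores.var(axis=0, ddof=1)
          total_var = scores.sum(axis=1).var(ddof=1)
          return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

      # Example: three illustrative subscores for six participants
      scores = np.array([[14, 12, 10],
                         [15, 13, 11],
                         [12, 10,  8],
                         [13, 12,  9],
                         [11,  9,  7],
                         [15, 14, 12]])
      print(round(cronbach_alpha(scores), 2))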

  12. Biomedical Simulation Models of Human Auditory Processes

    NASA Technical Reports Server (NTRS)

    Bicak, Mehmet M. A.

    2012-01-01

    This work describes detailed acoustic engineering models that explore the noise propagation mechanisms associated with noise attenuation and the transmission paths created when hearing protectors such as earplugs and headsets are used in high-noise environments. Biomedical finite element (FE) models are developed from volume computed tomography scan data, which provide explicit external ear, ear canal, middle ear ossicular bone, and cochlea geometry. Results from these studies have enabled a greater understanding of hearing-protector-to-flesh dynamics as well as the prioritization of noise propagation mechanisms. Prioritization of noise mechanisms can form an essential framework for exploring new design principles and methods in both earplug and earcup applications. These models are currently being used in the development of a novel hearing protection evaluation system that can provide experimentally correlated psychoacoustic noise attenuation. Moreover, these FE models can be used to simulate the effects of blast-related impulse noise on human auditory mechanisms and brain tissue.

  13. Frontal top-down signals increase coupling of auditory low-frequency oscillations to continuous speech in human listeners.

    PubMed

    Park, Hyojin; Ince, Robin A A; Schyns, Philippe G; Thut, Gregor; Gross, Joachim

    2015-06-15

    Humans show a remarkable ability to understand continuous speech even under adverse listening conditions. This ability critically relies on dynamically updated predictions of incoming sensory information, but exactly how top-down predictions improve speech processing is still unclear. Brain oscillations are a likely mechanism for these top-down predictions [1, 2]. Quasi-rhythmic components in speech are known to entrain low-frequency oscillations in auditory areas [3, 4], and this entrainment increases with intelligibility [5]. We hypothesize that top-down signals from frontal brain areas causally modulate the phase of brain oscillations in auditory cortex. We use magnetoencephalography (MEG) to monitor brain oscillations in 22 participants during continuous speech perception. We characterize prominent spectral components of speech-brain coupling in auditory cortex and use causal connectivity analysis (transfer entropy) to identify the top-down signals driving this coupling more strongly during intelligible speech than during unintelligible speech. We report three main findings. First, frontal and motor cortices significantly modulate the phase of speech-coupled low-frequency oscillations in auditory cortex, and this effect depends on intelligibility of speech. Second, top-down signals are significantly stronger for left auditory cortex than for right auditory cortex. Third, speech-auditory cortex coupling is enhanced as a function of stronger top-down signals. Together, our results suggest that low-frequency brain oscillations play a role in implementing predictive top-down control during continuous speech perception and that top-down control is largely directed at left auditory cortex. This suggests a close relationship between (left-lateralized) speech production areas and the implementation of top-down control in continuous speech perception. PMID:26028433
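
    Transfer entropy, the directed-connectivity measure used above, compares how well the next sample of a target signal can be predicted with versus without knowledge of the source signal. The sketch below is a minimal binned (discrete) estimator with history length 1 on synthetic data; it is far simpler than the estimators used in MEG practice and is intended only to convey the idea.

      import numpy as np

      def transfer_entropy(source, target, n_bins=4):
          """Discrete transfer entropy TE(source -> target) in bits, history length 1."""
          edges = lambda x: np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
          x = np.digitize(source, edges(source))          # binned source, values 0..n_bins-1
          y = np.digitize(target, edges(target))          # binned target
          y_next, y_past, x_past = y[1:], y[:-1], x[:-1]

          def prob(*vars):
              counts = np.zeros((n_bins,) * len(vars))
              np.add.at(counts, tuple(vars), 1)
              return counts / counts.sum()

          p_xyz = prob(y_next, y_past, x_past)            # p(y_t+1, y_t, x_t)
          p_yz = p_xyz.sum(axis=0)                        # p(y_t, x_t)
          p_xy = p_xyz.sum(axis=2)                        # p(y_t+1, y_t)
          p_y = p_xyz.sum(axis=(0, 2))                    # p(y_t)

          te = 0.0
          for i, j, k in np.argwhere(p_xyz > 0):
              te += p_xyz[i, j, k] * np.log2(p_xyz[i, j, k] * p_y[j] / (p_yz[j, k] * p_xy[i, j]))
          return te

      # Example: the target is partly driven by the source at a one-sample lag
      rng = np.random.default_rng(0)
      src = rng.standard_normal(5000)
      tgt = np.roll(src, 1) + 0.5 * rng.standard_normal(5000)
      print(transfer_entropy(src, tgt), transfer_entropy(tgt, src))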

  14. Theta, beta and gamma rate modulations in the developing auditory system.

    PubMed

    Vanvooren, Sophie; Hofmann, Michael; Poelmans, Hanne; Ghesquière, Pol; Wouters, Jan

    2015-09-01

    In the brain, the temporal analysis of many important auditory features relies on the synchronized firing of neurons to the auditory input rhythm. These so-called neural oscillations play a crucial role in sensory and cognitive processing and deviances in oscillatory activity have shown to be associated with neurodevelopmental disorders. Given the importance of neural auditory oscillations in normal and impaired sensory and cognitive functioning, there has been growing interest in their developmental trajectory from early childhood on. In the present study, neural auditory processing was investigated in typically developing young children (n = 40) and adults (n = 27). In all participants, auditory evoked theta, beta and gamma responses were recorded. The results of this study show maturational differences between children and adults in neural auditory processing at cortical as well as at brainstem level. Neural background noise at cortical level was shown to be higher in children compared to adults. In addition, higher theta response amplitudes were measured in children compared to adults. For beta and gamma rate modulations, different processing asymmetry patterns were observed between both age groups. The mean response phase was also shown to differ significantly between children and adults for all rates. Results suggest that cortical auditory processing of beta develops from a general processing pattern into a more specialized asymmetric processing preference over age. Moreover, the results indicate an enhancement of bilateral representation of monaural sound input at brainstem with age. A dissimilar efficiency of auditory signal transmission from brainstem to cortex along the auditory pathway between children and adults is suggested. These developmental differences might be due to both functional experience-dependent as well as anatomical changes. The findings of the present study offer important information about maturational differences between children

  15. Multichannel cochlear implants in partially ossified cochleas.

    PubMed

    Balkany, T; Gantz, B; Nadol, J B

    1988-01-01

    Deposition of bone within the fluid spaces of the cochlea is encountered commonly in cochlear implant candidates and previously has been considered a relative contraindication to the use of multichannel intracochlear electrodes. This contraindication has been based on possible mechanical difficulty with electrode insertion as well as uncertainty about the potential benefit of the multichannel device in the patient. Fifteen profoundly deaf patients with partial ossification of the basal turn of the cochlea received implants with long intracochlear electrodes (11, Nucleus; 1, University of California at San Francisco/Storz; and 3, Symbion/Inneraid). In 11 cases, ossification had been predicted preoperatively by computed tomographic scan. Electrodes were completely inserted in 14 patients, and partial insertion was accomplished in one patient. All patients currently are using their devices and nine of 12 postlingually deaf patients have achieved some degree of open-set speech discrimination. This series demonstrates that in experienced hands, insertion of long multichannel electrodes into partially ossified cochleas is possible and that results are similar to those achieved in patients who have nonossified cochleas. PMID:3140705

  16. Compact multichannel imaging laser radar receiver

    NASA Astrophysics Data System (ADS)

    Burns, Hoyt N.; Yun, Steven T.; Keltos, Michael L.; Kimmet, James S.

    1999-05-01

    Direct detection imaging Laser Radar (LADAR) produces 3-dimensional range imagery that can be processed to provide target acquisition and precision aimpoint definition in real time. This paper describes the current status of the Parallel Multichannel Imaging LADAR Receiver (PMR), developed under an SBIR Phase II program by the Air Force Research Laboratory, Munitions Directorate (AFRL/MN). The heart of the PMR is the Multichannel Optical Receiver Photonic Hybrid (MORPH), a high performance 16-channel LADAR receiver card which includes fiber-coupled detectors, pulse discrimination, and range counting circuitry on a 3 X 5 inch circuit card. The MORPH provides high downrange resolution (3 inches), multiple-hit (8 per channel) range and reflectance data for each detector. Silicon (Si) and indium gallium arsenide (InGaAs) pin diode or avalanche photodiode (APD) detectors are supported. The modular PMR uses an array of MORPH circuit cards to form a compact multichannel imaging LADAR receiver with any multiple of 16 channels. A 32-channel system measures 3 X 5 X 1.4 inches and weighs 1 lb. A prototype PMR system is currently undergoing field-testing. This paper focuses on field test results and applications of the PMR technology.
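
    The quoted 3-inch downrange resolution maps directly onto the timing resolution of the range counters through the two-way travel of light. A short worked example with nominal values:

      C = 299_792_458.0      # speed of light, m/s
      INCH = 0.0254          # meters per inch

      def range_from_time(round_trip_s):
          """One-way range for a direct-detection LADAR return."""
          return C * round_trip_s / 2.0

      def timing_resolution_for(range_resolution_m):
          """Round-trip timing resolution needed for a given downrange resolution."""
          return 2.0 * range_resolution_m / C

      # 3-inch downrange bins call for counting in roughly half-nanosecond steps
      print(timing_resolution_for(3 * INCH))   # ~5.1e-10 s
      print(range_from_time(1.0e-6))           # a 1 microsecond echo corresponds to ~150 m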

  17. A computerized multichannel platelet aggregometer system.

    PubMed

    Kuzara, D; Zoltan, B J; Greathouse, S L; Jordan, C W; Kohler, C A

    1986-08-01

    Commercially available instrumentation for conducting platelet aggregation studies in clinical and research laboratories consists of one-, two-, or four-channel aggregometers used in conjunction with strip chart recorders. These instruments have limited utility in large-scale drug screening and evaluation of the mode of action of drugs or in the clinical diagnosis of platelet disorders. A new instrument, a computerized multichannel aggregometer system (CMPAS) has been developed to collect, display, and analyze platelet aggregation data. The system is comprised of a 24-channel Born-type aggregometer, interfaced to a Rockwell AIM-65 microcomputer through an analogue-to-digital converter and an Epson dot-matrix printer. Each channel is individually calibrated, and aggregation data can be collected on up to 24 different platelet-rich plasma samples simultaneously. Conversational programs written in BASIC prompt the user for the addition of agonists and inhibitors. The tracings for each channel are displayed simultaneously, and a program automatically analyzes the data to generate the following parameters: baseline optical density, maximum aggregation response, positive and negative slopes, time to peak aggregation, and percentage response. Computerized multichannel aggregometer system data outputs are comparable to data generated by a standard Chronolog aggregometer unit. The advantages of the system include multichannel capability, simultaneous display of all channels allowing relative comparisons between control and experimental groups, and time savings and improved efficiency in conducting and analyzing aggregation experiments. PMID:3755779
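
    The per-channel parameters listed above (baseline optical density, maximum aggregation response, slope, time to peak, and percentage response) are simple derived quantities of each light-transmission tracing. The sketch below computes them for one digitized tracing; the sign convention (aggregation read as a drop in optical density toward a platelet-poor-plasma reference) and all variable names are assumptions.

      import numpy as np

      def analyze_tracing(od, fs, od_ppp, agonist_s):
          """Summary parameters for one aggregation tracing (optical density vs. time)."""
          t = np.arange(od.size) / fs
          i0 = int(agonist_s * fs)
          baseline = od[:i0].mean()                             # baseline optical density
          response = baseline - od                              # aggregation lowers optical density
          i_peak = i0 + np.argmax(response[i0:])
          max_response = response[i_peak]
          percent = 100.0 * max_response / (baseline - od_ppp)  # relative to platelet-poor plasma
          slope = np.max(np.gradient(response[i0:], 1.0 / fs))  # steepest aggregation rate
          return {"baseline_od": baseline,
                  "max_aggregation": max_response,
                  "percent_response": percent,
                  "time_to_peak_s": t[i_peak] - agonist_s,
                  "max_slope_per_s": slope}

      # Example: a synthetic sigmoidal tracing with the agonist added at 30 s
      fs = 2.0
      t = np.arange(0, 300, 1 / fs)
      od = 1.0 - 0.6 / (1 + np.exp(-(t - 120) / 20.0))
      print(analyze_tracing(od, fs, od_ppp=0.2, agonist_s=30.0))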

  18. Recovery from auditory and visual neglect after optokinetic stimulation with pursuit eye movements--transient modulation and enduring treatment effects.

    PubMed

    Kerkhoff, G; Keller, I; Artinger, F; Hildebrandt, H; Marquardt, C; Reinhart, S; Ziegler, W

    2012-05-01

    Optokinetic stimulation (OKS) modulates many facets of the neglect syndrome. This sensory stimulation technique is known to activate multiple brain regions (temporo-parietal cortex, basal ganglia, brain stem, cerebellum), some of which are involved in auditory and visual space coding. Here, we evaluated whether OKS modulates auditory neglect transiently and induces a sustained effect (Study 1), and whether repetitive OKS permanently recovers auditory neglect (Study 2). In Study 1, 20 patients with visuospatial neglect and auditory neglect in an auditory midline task following right-sided stroke were randomly allocated to an experimental and a control group matched for neglect severity and socio-demographic factors. Both groups showed a stable, pathological shift of their auditory subjective median plane (ASMP) in front space to the right side. During leftward OKS the experimental group showed a complete normalization of the shift of the ASMP, which endured until 30 min poststimulation, and returned almost to baseline values 24 h after OKS. In contrast, the control group, who viewed the identical but static dot pattern, showed neither change in their ASMP during this condition, nor any significant change at 30 min or 24 h poststimulation. In Study 2, we show in two samples of neglect patients (N = 3 each) that repetitive leftward OKS with smooth pursuit eye movements as a therapy induces lasting improvements in auditory (the ASMP) and visual neglect while visual scanning therapy yielded no measurable effects on auditory and significantly smaller effects on visual neglect. In conclusion, the experiments show that a single session of OKS induces rapid though transient recovery from auditory neglect including a sustained effect after termination of stimulation, while repetitive OKS therapy yields enduring and multimodal recovery from auditory and visual neglect. OKS therapy with pursuit eye movements therefore represents a multimodally effective and easily applicable

  19. Psychology of auditory perception.

    PubMed

    Lotto, Andrew; Holt, Lori

    2011-09-01

    Audition is often treated as a 'secondary' sensory system behind vision in the study of cognitive science. In this review, we focus on three seemingly simple perceptual tasks to demonstrate the complexity of perceptual-cognitive processing involved in everyday audition. After providing a short overview of the characteristics of sound and their neural encoding, we present a description of the perceptual task of segregating multiple sound events that are mixed together in the signal reaching the ears. Then, we discuss the ability to localize the sound source in the environment. Finally, we provide some data and theory on how listeners categorize complex sounds, such as speech. In particular, we present research on how listeners weigh multiple acoustic cues in making a categorization decision. One conclusion of this review is that it is time for auditory cognitive science to be developed to match what has been done in vision in order for us to better understand how humans communicate with speech and music. WIREs Cogn Sci 2011, 2, 479-489. DOI: 10.1002/wcs.123. PMID:26302301

  20. Differential auditory signal processing in an animal model

    NASA Astrophysics Data System (ADS)

    Lim, Dukhwan; Kim, Chongsun; Chang, Sun O.

    2002-05-01

    Auditory evoked responses were collected in male zebra finches (Poephila guttata) to objectively determine differential frequency selectivity. First, the mating call of the animal was recorded and analyzed for its frequency components using a customized program. Then, auditory brainstem responses and cortical responses of each anesthetized animal were routinely recorded in response to tone bursts of 1-8 kHz derived from the corresponding mating call spectrum. From the results, most mating calls showed relatively consistent spectral structures. The upper limit of the spectrum was well under 10 kHz. The peak energy bands were concentrated in the region less than 5 kHz. The assessment of auditory brainstem responses and cortical evoked potentials showed differential selectivity with a series of characteristic scales. This system appears to be an excellent model to investigate complex sound processing and related language behaviors. These data could also be used in designing effective signal processing strategies in auditory rehabilitation devices such as hearing aids and cochlear implants. [Work supported by Brain Science & Engineering Program from Korean Ministry of Science and Technology.]
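
    Selecting tone-burst frequencies from the spectrum of a recorded mating call, as described above, reduces to locating the dominant peaks of its magnitude spectrum below an upper frequency limit. The sketch below illustrates this with a synthetic call stand-in; a real analysis would load the recorded waveform instead.

      import numpy as np

      def dominant_frequencies(call, fs, n_peaks=4, fmax=10_000.0):
          """Frequencies (Hz) of the n_peaks strongest spectral peaks below fmax."""
          spectrum = np.abs(np.fft.rfft(call * np.hanning(call.size)))
          freqs = np.fft.rfftfreq(call.size, d=1.0 / fs)
          spectrum[freqs > fmax] = 0.0
          # keep only local maxima so adjacent bins of a single peak are not counted twice
          is_peak = (spectrum[1:-1] > spectrum[:-2]) & (spectrum[1:-1] > spectrum[2:])
          peak_idx = np.where(is_peak)[0] + 1
          strongest = peak_idx[np.argsort(spectrum[peak_idx])[::-1][:n_peaks]]
          return np.sort(freqs[strongest])

      # Example: a synthetic "call" with energy near 1.2, 2.5 and 3.8 kHz
      fs = 44_100.0
      t = np.arange(0, 0.2, 1 / fs)
      call = (np.sin(2 * np.pi * 1200 * t) + 0.8 * np.sin(2 * np.pi * 2500 * t)
              + 0.5 * np.sin(2 * np.pi * 3800 * t))
      print(dominant_frequencies(call, fs, n_peaks=3))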

  1. What works in auditory working memory? A neural oscillations perspective.

    PubMed

    Wilsch, Anna; Obleser, Jonas

    2016-06-01

    Working memory is a limited resource: brains can only maintain small amounts of sensory input (memory load) over a brief period of time (memory decay). The dynamics of slow neural oscillations as recorded using magneto- and electroencephalography (M/EEG) provide a window into the neural mechanics of these limitations. Especially oscillations in the alpha range (8-13Hz) are a sensitive marker for memory load. Moreover, according to current models, the resultant working memory load is determined by the relative noise in the neural representation of maintained information. The auditory domain allows memory researchers to apply and test the concept of noise quite literally: Employing degraded stimulus acoustics increases memory load and, at the same time, allows assessing the cognitive resources required to process speech in noise in an ecologically valid and clinically relevant way. The present review first summarizes recent findings on neural oscillations, especially alpha power, and how they reflect memory load and memory decay in auditory working memory. The focus is specifically on memory load resulting from acoustic degradation. These findings are then contrasted with contextual factors that benefit neural as well as behavioral markers of memory performance, by reducing representational noise. We end on discussing the functional role of alpha power in auditory working memory and suggest extensions of the current methodological toolkit. This article is part of a Special Issue entitled SI: Auditory working memory. PMID:26556773

  2. Evaluation of Central Auditory Discrimination Abilities in Older Adults

    PubMed Central

    Freigang, Claudia; Schmidt, Lucas; Wagner, Jan; Eckardt, Rahel; Steinhagen-Thiessen, Elisabeth; Ernst, Arne; Rübsamen, Rudolf

    2011-01-01

    The present study focuses on auditory discrimination abilities in older adults aged 65–89 years. We applied the “Leipzig inventory for patient psychoacoustic” (LIPP), a psychoacoustic test battery specifically designed to identify deficits in central auditory processing. These tests quantify the just noticeable differences (JND) for the three basic acoustic parameters (i.e., frequency, intensity, and signal duration). Three different test modes [monaural, dichotic signal/noise (s/n) and interaural] were used, stimulus level was 35 dB sensation level. The tests are designed as three-alternative forced-choice procedure with a maximum-likelihood procedure estimating p = 0.5 correct response value. These procedures have proven to be highly efficient and provide a reliable outcome. The measurements yielded significant age-dependent deteriorations in the ability to discriminate single acoustic features pointing to progressive impairments in central auditory processing. The degree of deterioration was correlated to the different acoustic features and to the test modes. Most prominent, interaural frequency and signal duration discrimination at low test frequencies was elevated which indicates a deterioration of time- and phase-dependent processing at brain stem and cortical levels. LIPP proves to be an effective tool to identify basic pathophysiological mechanisms and the source of a specific impairment in auditory processing of the elderly. PMID:21577251

  3. Adaptation to Vocal Expressions Reveals Multistep Perception of Auditory Emotion

    PubMed Central

    Maurage, Pierre; Rouger, Julien; Latinus, Marianne; Belin, Pascal

    2014-01-01

    The human voice carries speech as well as important nonlinguistic signals that influence our social interactions. Among these cues that impact our behavior and communication with other people is the perceived emotional state of the speaker. A theoretical framework for the neural processing stages of emotional prosody has suggested that auditory emotion is perceived in multiple steps (Schirmer and Kotz, 2006) involving low-level auditory analysis and integration of the acoustic information followed by higher-level cognition. Empirical evidence for this multistep processing chain, however, is still sparse. We examined this question using functional magnetic resonance imaging and a continuous carry-over design (Aguirre, 2007) to measure brain activity while volunteers listened to non-speech-affective vocalizations morphed on a continuum between anger and fear. Analyses dissociated neuronal adaptation effects induced by similarity in perceived emotional content between consecutive stimuli from those induced by their acoustic similarity. We found that bilateral voice-sensitive auditory regions as well as right amygdala coded the physical difference between consecutive stimuli. In contrast, activity in bilateral anterior insulae, medial superior frontal cortex, precuneus, and subcortical regions such as bilateral hippocampi depended predominantly on the perceptual difference between morphs. Our results suggest that the processing of vocal affect recognition is a multistep process involving largely distinct neural networks. Amygdala and auditory areas predominantly code emotion-related acoustic information while more anterior insular and prefrontal regions respond to the abstract, cognitive representation of vocal affect. PMID:24920615

  4. Auditory scene analysis and sonified visual images. Does consonance negatively impact on object formation when using complex sonified stimuli?

    PubMed Central

    Brown, David J.; Simpson, Andrew J. R.; Proulx, Michael J.

    2015-01-01

    A critical task for the brain is the sensory representation and identification of perceptual objects in the world. When the visual sense is impaired, hearing and touch must take primary roles and in recent times compensatory techniques have been developed that employ the tactile or auditory system as a substitute for the visual system. Visual-to-auditory sonifications provide a complex, feature-based auditory representation that must be decoded and integrated into an object-based representation by the listener. However, we don’t yet know what role the auditory system plays in the object integration stage and whether the principles of auditory scene analysis apply. Here we used coarse sonified images in a two-tone discrimination task to test whether auditory feature-based representations of visual objects would be confounded when their features conflicted with the principles of auditory consonance. We found that listeners (N = 36) performed worse in an object recognition task when the auditory feature-based representation was harmonically consonant. We also found that this conflict was not negated with the provision of congruent audio–visual information. The findings suggest that early auditory processes of harmonic grouping dominate the object formation process and that the complexity of the signal, and additional sensory information have limited effect on this. PMID:26528202
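
    Visual-to-auditory sonification of a coarse image can be illustrated by scanning the image columns over time and giving each row a tone whose frequency rises with vertical position. The sketch below is a generic left-to-right scan in the spirit of such devices; it is not the specific sonification algorithm used in the study.

      import numpy as np

      def sonify(image, fs=22_050, col_dur=0.25, f_lo=400.0, f_hi=3200.0):
          """Map a 2-D array (rows x cols, values 0..1) to audio: columns -> time, rows -> pitch."""
          n_rows, n_cols = image.shape
          # top rows get high frequencies, bottom rows low (log-spaced)
          freqs = f_lo * (f_hi / f_lo) ** np.linspace(1, 0, n_rows)
          t = np.arange(int(col_dur * fs)) / fs
          tones = np.sin(2 * np.pi * np.outer(freqs, t))        # one tone per image row
          audio = np.concatenate([tones.T @ image[:, col] for col in range(n_cols)])
          return audio / np.max(np.abs(audio))                  # normalize to +/-1

      # Example: a small pattern with one tone throughout and a lower tone in the middle columns
      img = np.zeros((8, 4))
      img[2, :] = 1.0
      img[6, 1:3] = 1.0
      print(sonify(img).shape)   # about 4 columns x 0.25 s x 22050 Hz samples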

  5. Auditory scene analysis and sonified visual images. Does consonance negatively impact on object formation when using complex sonified stimuli?

    PubMed

    Brown, David J; Simpson, Andrew J R; Proulx, Michael J

    2015-01-01

    A critical task for the brain is the sensory representation and identification of perceptual objects in the world. When the visual sense is impaired, hearing and touch must take primary roles and in recent times compensatory techniques have been developed that employ the tactile or auditory system as a substitute for the visual system. Visual-to-auditory sonifications provide a complex, feature-based auditory representation that must be decoded and integrated into an object-based representation by the listener. However, we don't yet know what role the auditory system plays in the object integration stage and whether the principles of auditory scene analysis apply. Here we used coarse sonified images in a two-tone discrimination task to test whether auditory feature-based representations of visual objects would be confounded when their features conflicted with the principles of auditory consonance. We found that listeners (N = 36) performed worse in an object recognition task when the auditory feature-based representation was harmonically consonant. We also found that this conflict was not negated with the provision of congruent audio-visual information. The findings suggest that early auditory processes of harmonic grouping dominate the object formation process and that the complexity of the signal, and additional sensory information have limited effect on this. PMID:26528202

  6. Diminished Auditory Responses during NREM Sleep Correlate with the Hierarchy of Language Processing

    PubMed Central

    Wilf, Meytal; Ramot, Michal; Furman-Haran, Edna; Arzi, Anat; Levkovitz, Yechiel; Malach, Rafael

    2016-01-01

    Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high order processing stations. PMID:27310812

  7. Diminished Auditory Responses during NREM Sleep Correlate with the Hierarchy of Language Processing.

    PubMed

    Wilf, Meytal; Ramot, Michal; Furman-Haran, Edna; Arzi, Anat; Levkovitz, Yechiel; Malach, Rafael

    2016-01-01

    Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high order processing stations. PMID:27310812

  8. Demonstration of prosthetic activation of central auditory pathways using (¹⁴C)-2-deoxyglucose

    SciTech Connect

    Evans, D.A.; Niparko, J.K.; Altschuler, R.A.; Frey, K.A.; Miller, J.M.

    1990-02-01

    The cochlear prosthesis is not applicable to patients who lack an implantable cochlea or an intact vestibulocochlear nerve. Direct electrical stimulation of the cochlear nucleus (CN) of the brain stem might provide a method for auditory rehabilitation of these patients. A penetrating CN electrode has been developed and tissue tolerance to this device demonstrated. This study was undertaken to evaluate metabolic activation of central nervous system (CNS) auditory tracts produced by such implants. Regional cerebral glucose use resulting from CN stimulation was estimated in a series of chronically implanted guinea pigs with the use of (¹⁴C)-2-deoxyglucose (2-DG). Enhanced 2-DG uptake was observed in structures of the auditory tract. The activation of central auditory structures achieved with CN stimulation was similar to that produced by acoustic stimulation and by electrical stimulation of the modiolar portion of the auditory nerve in control groups. An interesting banding pattern was observed in the inferior colliculus following CN stimulation, as previously described with acoustic stimulation. This study demonstrates that functional metabolic activation of central auditory pathways can be achieved with a penetrating CNS auditory prosthesis.

  9. A review on auditory space adaptations to altered head-related cues

    PubMed Central

    Mendonça, Catarina

    2014-01-01

    In this article we present a review of current literature on adaptations to altered head-related auditory localization cues. Localization cues can be altered through ear blocks, ear molds, electronic hearing devices, and altered head-related transfer functions (HRTFs). Three main methods have been used to induce auditory space adaptation: sound exposure, training with feedback, and explicit training. Adaptations induced by training, rather than exposure, are consistently faster. Studies on localization with altered head-related cues have reported poor initial localization, but improved accuracy and discriminability with training. Also, studies that displaced the auditory space by altering cue values reported adaptations in perceived source position to compensate for such displacements. Auditory space adaptations can last for a few months even without further contact with the learned cues. In most studies, localization with the subject's own unaltered cues remained intact despite the adaptation to a second set of cues. Generalization is observed from trained to untrained sound source positions, but there is mixed evidence regarding cross-frequency generalization. Multiple brain areas might be involved in auditory space adaptation processes, but the auditory cortex (AC) may play a critical role. Auditory space plasticity may involve context-dependent cue reweighting. PMID:25120422

  10. Auditory and non-auditory effects of noise on health.

    PubMed

    Basner, Mathias; Babisch, Wolfgang; Davis, Adrian; Brink, Mark; Clark, Charlotte; Janssen, Sabine; Stansfeld, Stephen

    2014-04-12

    Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular mechanisms involved in noise-induced hair-cell and nerve damage has substantially increased, and preventive and therapeutic drugs will probably become available within 10 years. Evidence of the non-auditory effects of environmental noise exposure on public health is growing. Observational and experimental studies have shown that noise exposure leads to annoyance, disturbs sleep and causes daytime sleepiness, affects patient outcomes and staff performance in hospitals, increases the occurrence of hypertension and cardiovascular disease, and impairs cognitive performance in schoolchildren. In this Review, we stress the importance of adequate noise prevention and mitigation strategies for public health. PMID:24183105

  11. Auditory and non-auditory effects of noise on health

    PubMed Central

    Basner, Mathias; Babisch, Wolfgang; Davis, Adrian; Brink, Mark; Clark, Charlotte; Janssen, Sabine; Stansfeld, Stephen

    2014-01-01

    Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular mechanisms involved in noise-induced hair-cell and nerve damage has substantially increased, and preventive and therapeutic drugs will probably become available within 10 years. Evidence of the non-auditory effects of environmental noise exposure on public health is growing. Observational and experimental studies have shown that noise exposure leads to annoyance, disturbs sleep and causes daytime sleepiness, affects patient outcomes and staff performance in hospitals, increases the occurrence of hypertension and cardiovascular disease, and impairs cognitive performance in schoolchildren. In this Review, we stress the importance of adequate noise prevention and mitigation strategies for public health. PMID:24183105

  12. Two distinct auditory-motor circuits for monitoring speech production as revealed by content-specific suppression of auditory cortex.

    PubMed

    Ylinen, Sari; Nora, Anni; Leminen, Alina; Hakala, Tero; Huotilainen, Minna; Shtyrov, Yury; Mäkelä, Jyrki P; Service, Elisabet

    2015-06-01

    Speech production, both overt and covert, down-regulates the activation of auditory cortex. This is thought to be due to forward prediction of the sensory consequences of speech, contributing to a feedback control mechanism for speech production. Critically, however, these regulatory effects should be specific to speech content to enable accurate speech monitoring. To determine the extent to which such forward prediction is content-specific, we recorded the brain's neuromagnetic responses to heard multisyllabic pseudowords during covert rehearsal in working memory, contrasted with a control task. The cortical auditory processing of target syllables was significantly suppressed during rehearsal compared with control, but only when they matched the rehearsed items. This critical specificity to speech content enables accurate speech monitoring by forward prediction, as proposed by current models of speech production. The one-to-one phonological motor-to-auditory mappings also appear to serve the maintenance of information in phonological working memory. Further findings of right-hemispheric suppression in the case of whole-item matches and left-hemispheric enhancement for last-syllable mismatches suggest that speech production is monitored by 2 auditory-motor circuits operating on different timescales: Finer grain in the left versus coarser grain in the right hemisphere. Taken together, our findings provide hemisphere-specific evidence of the interface between inner and heard speech. PMID:24414279

  13. Current understanding of auditory neuropathy.

    PubMed

    Boo, Nem-Yun

    2008-12-01

    Auditory neuropathy is defined by the presence of normal evoked otoacoustic emissions (OAE) and absent or abnormal auditory brainstem responses (ABR). The sites of lesion could be at the cochlear inner hair cells, spiral ganglion cells of the cochlea, synapse between the inner hair cells and auditory nerve, or the auditory nerve itself. Genetic, infectious or neonatal/perinatal insults are the 3 most commonly identified underlying causes. Children usually present with delay in speech and language development while adult patients present with hearing loss and disproportionately poor speech discrimination for the degree of hearing loss. Although cochlear implantation is the treatment of choice, current evidence shows that it benefits only those patients with endocochlear lesions, but not those with cochlear nerve deficiency or central nervous system disorders. As auditory neuropathy is a disorder with potential long-term impact on a child's development, early hearing screening using both OAE and ABR should be carried out on all newborns and infants to allow early detection and intervention. PMID:19904452

  14. Individual differences in auditory abilities.

    PubMed

    Kidd, Gary R; Watson, Charles S; Gygi, Brian

    2007-07-01

    Performance on 19 auditory discrimination and identification tasks was measured for 340 listeners with normal hearing. Test stimuli included single tones, sequences of tones, amplitude-modulated and rippled noise, temporal gaps, speech, and environmental sounds. Principal components analysis and structural equation modeling of the data support the existence of a general auditory ability and four specific auditory abilities. The specific abilities are (1) loudness and duration (overall energy) discrimination; (2) sensitivity to temporal envelope variation; (3) identification of highly familiar sounds (speech and nonspeech); and (4) discrimination of unfamiliar simple and complex spectral and temporal patterns. Examination of Scholastic Aptitude Test (SAT) scores for a large subset of the population revealed little or no association between general or specific auditory abilities and general intellectual ability. The findings provide a basis for research to further specify the nature of the auditory abilities. Of particular interest are results suggestive of a familiar sound recognition (FSR) ability, apparently specialized for sound recognition on the basis of limited or distorted information. This FSR ability is independent of normal variation in both spectral-temporal acuity and general intellectual ability. PMID:17614500

  15. Experience-dependent modulation of tonotopic neural responses in human auditory cortex.

    PubMed Central

    Morris, J S; Friston, K J; Dolan, R J

    1998-01-01

    Experience-dependent plasticity of receptive fields in the auditory cortex has been demonstrated by electrophysiological experiments in animals. In the present study we used PET neuroimaging to measure regional brain activity in volunteer human subjects during discriminatory classical conditioning of high (8000 Hz) or low (200 Hz) frequency tones by an aversive 100 dB white noise burst. Conditioning-related, frequency-specific modulation of tonotopic neural responses in the auditory cortex was observed. The modulated regions of the auditory cortex positively covaried with activity in the amygdala, basal forebrain and orbitofrontal cortex, and showed context-specific functional interactions with the medial geniculate nucleus. These results accord with animal single-unit data and support neurobiological models of auditory conditioning and value-dependent neural selection. PMID:9608726

  16. Simultaneous recording of rat auditory cortex and thalamus via a titanium-based, microfabricated, microelectrode device

    NASA Astrophysics Data System (ADS)

    McCarthy, P. T.; Rao, M. P.; Otto, K. J.

    2011-08-01

    Direct recording from sequential processing stations within the brain has provided opportunity for enhancing understanding of important neural circuits, such as the corticothalamic loops underlying auditory, visual, and somatosensory processing. However, the common reliance upon microwire-based electrodes to perform such recordings often necessitates complex surgeries and increases trauma to neural tissues. This paper reports the development of titanium-based, microfabricated, microelectrode devices designed to address these limitations by allowing acute recording from the thalamic nuclei and associated cortical sites simultaneously in a minimally invasive manner. In particular, devices were designed to simultaneously probe rat auditory cortex and auditory thalamus, with the intent of recording auditory response latencies and isolated action potentials within the separate anatomical sites. Details regarding the design, fabrication, and characterization of these devices are presented, as are preliminary results from acute in vivo recording.

  17. Central Gain Restores Auditory Processing following Near-Complete Cochlear Denervation.

    PubMed

    Chambers, Anna R; Resnik, Jennifer; Yuan, Yasheng; Whitton, Jonathon P; Edge, Albert S; Liberman, M Charles; Polley, Daniel B

    2016-02-17

    Sensory organ damage induces a host of cellular and physiological changes in the periphery and the brain. Here, we show that some aspects of auditory processing recover after profound cochlear denervation due to a progressive, compensatory plasticity at higher stages of the central auditory pathway. Lesioning >95% of cochlear nerve afferent synapses, while sparing hair cells, in adult mice virtually eliminated the auditory brainstem response and acoustic startle reflex, yet tone detection behavior was nearly normal. As sound-evoked responses from the auditory nerve grew progressively weaker following denervation, sound-evoked activity in the cortex-and, to a lesser extent, the midbrain-rebounded or surpassed control levels. Increased central gain supported the recovery of rudimentary sound features encoded by firing rate, but not features encoded by precise spike timing such as modulated noise or speech. These findings underscore the importance of central plasticity in the perceptual sequelae of cochlear hearing impairment. PMID:26833137

  18. Steady state visually evoked potential correlates of auditory hallucinations in schizophrenia.

    PubMed

    Line, P; Silberstein, R B; Wright, J J; Copolov, D L

    1998-11-01

    This study attempted to localize regions of brain electrical activity associated with the onset of auditory hallucinations. Changes in Steady State Visually Evoked Potential (SSVEP) topography associated with the onset of spontaneous auditory hallucinations were studied in eight schizophrenic patients. The SSVEP elicited by a spatially uniform sinusoidally varying visual flicker was recorded using a 64-channel electrode helmet. A large and significant decrease in SSVEP latency in the right temporo/parietal region occurred in the second prior to the report of auditory hallucinations. A control task with matching motor movements produced no significant decrease in SSVEP latency in the same right temporo/parietal location. This finding suggests that activity of fine temporal resolution in the neural networks in the right temporo/parietal area may be implicated in the genesis of auditory hallucinations, in conformity with certain neuropsychological theories. PMID:9811555

  19. Modulation of auditory processing during speech movement planning is limited in adults who stutter.

    PubMed

    Daliri, Ayoub; Max, Ludo

    2015-04-01

    Stuttering is associated with atypical structural and functional connectivity in sensorimotor brain areas, in particular premotor, motor, and auditory regions. It remains unknown, however, which specific mechanisms of speech planning and execution are affected by these neurological abnormalities. To investigate pre-movement sensory modulation, we recorded 12 stuttering and 12 nonstuttering adults' auditory evoked potentials in response to probe tones presented prior to speech onset in a delayed-response speaking condition vs. no-speaking control conditions (silent reading; seeing nonlinguistic symbols). Findings indicate that, during speech movement planning, the nonstuttering group showed a statistically significant modulation of auditory processing (reduced N1 amplitude) that was not observed in the stuttering group. Thus, the obtained results provide electrophysiological evidence in support of the hypothesis that stuttering is associated with deficiencies in modulating the cortical auditory system during speech movement planning. This specific sensorimotor integration deficiency may contribute to inefficient feedback monitoring and, consequently, speech dysfluencies. PMID:25796060

  20. Music training alters the course of adolescent auditory development

    PubMed Central

    Tierney, Adam T.; Krizman, Jennifer; Kraus, Nina

    2015-01-01

    Fundamental changes in brain structure and function during adolescence are well-characterized, but the extent to which experience modulates adolescent neurodevelopment is not. Musical experience provides an ideal case for examining this question because the influence of music training begun early in life is well-known. We investigated the effects of in-school music training, previously shown to enhance auditory skills, versus another in-school training program that did not focus on development of auditory skills (active control). We tested adolescents on neural responses to sound and language skills before they entered high school (pretraining) and again 3 y later. Here, we show that in-school music training begun in high school prolongs the stability of subcortical sound processing and accelerates maturation of cortical auditory responses. Although phonological processing improved in both the music training and active control groups, the enhancement was greater in adolescents who underwent music training. Thus, music training initiated as late as adolescence can enhance neural processing of sound and confer benefits for language skills. These results establish the potential for experience-driven brain plasticity during adolescence and demonstrate that in-school programs can engender these changes. PMID:26195739

  1. Music training alters the course of adolescent auditory development.

    PubMed

    Tierney, Adam T; Krizman, Jennifer; Kraus, Nina

    2015-08-11

    Fundamental changes in brain structure and function during adolescence are well-characterized, but the extent to which experience modulates adolescent neurodevelopment is not. Musical experience provides an ideal case for examining this question because the influence of music training begun early in life is well-known. We investigated the effects of in-school music training, previously shown to enhance auditory skills, versus another in-school training program that did not focus on development of auditory skills (active control). We tested adolescents on neural responses to sound and language skills before they entered high school (pretraining) and again 3 y later. Here, we show that in-school music training begun in high school prolongs the stability of subcortical sound processing and accelerates maturation of cortical auditory responses. Although phonological processing improved in both the music training and active control groups, the enhancement was greater in adolescents who underwent music training. Thus, music training initiated as late as adolescence can enhance neural processing of sound and confer benefits for language skills. These results establish the potential for experience-driven brain plasticity during adolescence and demonstrate that in-school programs can engender these changes. PMID:26195739

  2. Hearing loss and the central auditory system: Implications for hearing aids

    NASA Astrophysics Data System (ADS)

    Frisina, Robert D.

    2003-04-01

    Hearing loss can result from disorders or damage to the ear (peripheral auditory system) or the brain (central auditory system). Here, the basic structure and function of the central auditory system will be highlighted as relevant to cases of permanent hearing loss where assistive devices (hearing aids) are called for. The parts of the brain used for hearing are altered in two basic ways in instances of hearing loss: (1) Damage to the ear can reduce the number and nature of input channels that the brainstem receives from the ear, causing plasticity of the central auditory system. This plasticity may partially compensate for the peripheral loss, or add new abnormalities such as distorted speech processing or tinnitus. (2) In some situations, damage to the brain can occur independently of the ear, as may occur in cases of head trauma, tumors or aging. Implications of deficits to the central auditory system for speech perception in noise, hearing aid use and future innovative circuit designs will be provided to set the stage for subsequent presentations in this special educational session. [Work supported by NIA-NIH Grant P01 AG09524 and the International Center for Hearing & Speech Research, Rochester, NY.]

  3. Processing and Analysis of Multichannel Extracellular Neuronal Signals: State-of-the-Art and Challenges

    PubMed Central

    Mahmud, Mufti; Vassanelli, Stefano

    2016-01-01

    In recent years multichannel neuronal signal acquisition systems have allowed scientists to focus on research questions which were otherwise impossible. They act as a powerful means to study brain (dys)functions in in-vivo and in-vitro animal models. Typically, each session of electrophysiological experiments with multichannel data acquisition systems generates a large amount of raw data. For example, a 128-channel signal acquisition system with 16-bit A/D conversion and a 20 kHz sampling rate will generate approximately 17 GB of data per hour (uncompressed). This poses an important and challenging problem of inferring conclusions from the large amounts of acquired data. Thus, automated signal processing and analysis tools are becoming a key component in neuroscience research, facilitating extraction of relevant information from neuronal recordings in a reasonable time. The purpose of this review is to introduce the reader to the current state-of-the-art of open-source packages for (semi)automated processing and analysis of multichannel extracellular neuronal signals (i.e., neuronal spikes, local field potentials, electroencephalogram, etc.), and the existing Neuroinformatics infrastructure for tool and data sharing. The review is concluded by pinpointing some major challenges that are being faced, which include the development of novel benchmarking techniques, cloud-based distributed processing and analysis tools, as well as defining novel means to share and standardize data. PMID:27313507
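
    As a sanity check on the ~17 GB per hour figure quoted above, the short Python sketch below recomputes the uncompressed data rate from the stated parameters (128 channels, 16-bit samples, 20 kHz); whether the original figure was rounded in decimal gigabytes or binary gibibytes is an assumption.

    # Back-of-the-envelope check of the data rate quoted in the abstract.
    channels = 128
    bytes_per_sample = 16 // 8      # 16-bit A/D conversion
    sampling_rate_hz = 20_000       # 20 kHz per channel
    seconds_per_hour = 3_600

    bytes_per_hour = channels * bytes_per_sample * sampling_rate_hz * seconds_per_hour
    print(f"{bytes_per_hour / 1e9:.1f} GB/h")      # ~18.4 GB (decimal gigabytes)
    print(f"{bytes_per_hour / 2**30:.1f} GiB/h")   # ~17.2 GiB, matching the ~17 GB figure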

  4. Processing and Analysis of Multichannel Extracellular Neuronal Signals: State-of-the-Art and Challenges.

    PubMed

    Mahmud, Mufti; Vassanelli, Stefano

    2016-01-01

    In recent years multichannel neuronal signal acquisition systems have allowed scientists to focus on research questions which were otherwise impossible. They act as a powerful means to study brain (dys)functions in in-vivo and in-vitro animal models. Typically, each session of electrophysiological experiments with multichannel data acquisition systems generates a large amount of raw data. For example, a 128-channel signal acquisition system with 16-bit A/D conversion and a 20 kHz sampling rate will generate approximately 17 GB of data per hour (uncompressed). This poses an important and challenging problem of inferring conclusions from the large amounts of acquired data. Thus, automated signal processing and analysis tools are becoming a key component in neuroscience research, facilitating extraction of relevant information from neuronal recordings in a reasonable time. The purpose of this review is to introduce the reader to the current state-of-the-art of open-source packages for (semi)automated processing and analysis of multichannel extracellular neuronal signals (i.e., neuronal spikes, local field potentials, electroencephalogram, etc.), and the existing Neuroinformatics infrastructure for tool and data sharing. The review is concluded by pinpointing some major challenges that are being faced, which include the development of novel benchmarking techniques, cloud-based distributed processing and analysis tools, as well as defining novel means to share and standardize data. PMID:27313507

  5. Generators and Connectivity of the Early Auditory Evoked Gamma Band Response.

    PubMed

    Polomac, Nenad; Leicht, Gregor; Nolte, Guido; Andreou, Christina; Schneider, Till R; Steinmann, Saskia; Engel, Andreas K; Mulert, Christoph

    2015-11-01

    High frequency oscillations in the gamma range are known to be involved in early stages of auditory information processing in terms of synchronization of brain regions, e.g., in cognitive functions. It has been shown using EEG source localisation, as well as simultaneously recorded EEG-fMRI, that the auditory evoked gamma-band response (aeGBR) is modulated by attention. In addition to auditory cortex activity a dorsal anterior cingulate cortex (dACC) generator could be involved. In the present study we investigated aeGBR magnetic fields using magnetoencephalography (MEG). We aimed to localize the aeGBR sources and to characterize their connectivity in relation to mental effort. We recorded aeGBR magnetic fields from 13 healthy participants using a 275-channel CTF-MEG system. The experimental paradigms were two auditory choice reaction tasks with different difficulties and demands for mental effort. We performed source localization with eLORETA and calculated the aeGBR lagged phase synchronization between bilateral auditory cortices and frontal midline structures. The eLORETA analysis revealed sources of the aeGBR within bilateral auditory cortices and in frontal midline structures of the brain including the dACC. Compared to the control condition the dACC source activity was found to be significantly stronger during the performance of the cognitively demanding task. Moreover, this task involved a significantly stronger functional connectivity between auditory cortices and dACC. In accordance with previous EEG and EEG-fMRI investigations, our study confirms an aeGBR generator in the dACC by means of MEG and suggests its involvement in the effortful processing of auditory stimuli. PMID:25926268
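
    As a rough illustration of the kind of phase-synchronization measure referred to above, the Python sketch below computes a simple phase-locking value between two gamma-band signals using a Hilbert transform. The band limits, sampling rate, and the synthetic "auditory cortex" and "dACC" signals are assumptions; the study itself used lagged phase synchronization computed on eLORETA source time courses rather than this simplified measure.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def plv(x, y, fs, band=(35.0, 45.0)):
        """Phase-locking value between two signals in a gamma band
        (simplified illustration of phase synchronization)."""
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        phase_x = np.angle(hilbert(filtfilt(b, a, x)))
        phase_y = np.angle(hilbert(filtfilt(b, a, y)))
        return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

    # Hypothetical example: two noisy 40 Hz signals standing in for source time courses.
    fs = 600.0
    t = np.arange(0, 2.0, 1 / fs)
    auditory = np.sin(2 * np.pi * 40 * t) + 0.5 * np.random.randn(t.size)
    dacc = np.sin(2 * np.pi * 40 * t + 0.3) + 0.5 * np.random.randn(t.size)
    print(f"PLV: {plv(auditory, dacc, fs):.2f}")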

  6. Context effects on auditory distraction

    PubMed Central

    Chen, Sufen; Sussman, Elyse S.

    2014-01-01

    The purpose of the study was to test the hypothesis that sound context modulates the magnitude of auditory distraction, indexed by behavioral and electrophysiological measures. Participants were asked to identify tone duration, while irrelevant changes occurred in tone frequency, tone intensity, and harmonic structure. Frequency deviants were randomly intermixed with standards (Uni-Condition), with intensity deviants (Bi-Condition), and with both intensity and complex deviants (Tri-Condition). Only in the Tri-Condition did the auditory distraction effect reflect the magnitude difference among the frequency and intensity deviants. The mixture of the different types of deviants in the Tri-Condition modulated the perceived level of distraction, demonstrating that the sound context can modulate the effect of deviance level on processing irrelevant acoustic changes in the environment. These findings thus indicate that perceptual contrast plays a role in change detection processes that leads to auditory distraction. PMID:23886958

  7. Multi-Channel neurodegenerative pattern analysis and its application in Alzheimer's disease characterization

    PubMed Central

    Liu, Sidong; Cai, Weidong; Wen, Lingfeng; Feng, David Dagan; Pujol, Sonia; Kikinis, Ron; Fulham, Michael J.; Eberl, Stefan; ADNI

    2014-01-01

    Neuroimaging has played an important role in non-invasive diagnosis and differentiation of neurodegenerative disorders, such as Alzheimer's disease and Mild Cognitive Impairment. Various features have been extracted from the neuroimaging data to characterize the disorders, and these features can be roughly divided into global and local features. Recent studies show a tendency of using local features in disease characterization, since they are capable of identifying the subtle disease-specific patterns associated with the effects of the disease on the human brain. However, problems arise if the neuroimaging database involves multiple disorders or progressive disorders, as disorders of different types or at different progression stages might exhibit different degenerative patterns. It is difficult for researchers to reach consensus on which brain regions can effectively distinguish multiple disorders or multiple progression stages. In this study we proposed a Multi-Channel pattern analysis approach to identify the most discriminative local brain metabolism features for neurodegenerative disorder characterization. We compared our method to global methods and other pattern analysis methods based on clinical expertise or statistics tests. The preliminary results suggested that the proposed Multi-Channel pattern analysis method outperformed other approaches in Alzheimer's disease characterization, while also providing important insights into the underlying pathology of Alzheimer's disease and Mild Cognitive Impairment. PMID:24933011
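
    The abstract does not spell out the implementation, but a generic pattern-analysis pipeline of the kind alluded to (select the most discriminative regional features, then classify) might be sketched as follows in Python with scikit-learn. The synthetic data, feature count, and classifier choice are purely illustrative assumptions and are not the authors' Multi-Channel method.

    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Hypothetical data: regional metabolism features per subject (rows) and region (columns),
    # with labels 0 = control, 1 = MCI, 2 = AD, standing in for the actual imaging-derived features.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(90, 100))
    y = np.repeat([0, 1, 2], 30)

    # Generic pipeline: standardize, keep the most discriminative regional features, classify.
    clf = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=20), SVC(kernel="linear"))
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"Cross-validated accuracy: {scores.mean():.2f}")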

  8. Effects of Auditory Input in Individuation Tasks

    ERIC Educational Resources Information Center

    Robinson, Christopher W.; Sloutsky, Vladimir M.

    2008-01-01

    Under many conditions auditory input interferes with visual processing, especially early in development. These interference effects are often more pronounced when the auditory input is unfamiliar than when the auditory input is familiar (e.g. human speech, pre-familiarized sounds, etc.). The current study extends this research by examining how…

  9. Feature Assignment in Perception of Auditory Figure

    ERIC Educational Resources Information Center

    Gregg, Melissa K.; Samuel, Arthur G.

    2012-01-01

    Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory…

  10. Persistent neural activity in auditory cortex is related to auditory working memory in humans and nonhuman primates

    PubMed Central

    Huang, Ying; Matysiak, Artur; Heil, Peter; König, Reinhard; Brosch, Michael

    2016-01-01

    Working memory is the cognitive capacity of short-term storage of information for goal-directed behaviors. Where and how this capacity is implemented in the brain are unresolved questions. We show that auditory cortex stores information by persistent changes of neural activity. We separated activity related to working memory from activity related to other mental processes by having humans and monkeys perform different tasks with varying working memory demands on the same sound sequences. Working memory was reflected in the spiking activity of individual neurons in auditory cortex and in the activity of neuronal populations, that is, in local field potentials and magnetic fields. Our results provide direct support for the idea that temporary storage of information recruits the same brain areas that also process the information. Because similar activity was observed in the two species, the cellular bases of some auditory working memo