Science.gov

Sample records for multichannel auditory brain

  1. Multichannel spatial auditory display for speech communications

    NASA Technical Reports Server (NTRS)

    Begault, D. R.; Erbe, T.; Wenzel, E. M. (Principal Investigator)

    1994-01-01

    A spatial auditory display for multiple speech communications was developed at NASA/Ames Research Center. Input is spatialized by the use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four-letter call signs used by launch personnel at NASA against diotic speech babble. Spatial positions at 30 degrees azimuth increments were evaluated. The results from eight subjects showed a maximum intelligibility improvement of about 6-7 dB when the signal was spatialized to 60 or 90 degrees azimuth positions.
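
    The spatialization these records describe is, at bottom, FIR filtering of the monaural speech signal with a left- and a right-ear head-related impulse response (HRIR). A minimal Python sketch of that operation follows; the HRIR arrays, sample rate, and filter length are invented placeholders, not the simplified transfer functions used on the Motorola 56001 hardware:

      import numpy as np
      from scipy.signal import lfilter

      fs = 16000  # assumed sample rate, Hz

      def spatialize(mono, hrir_left, hrir_right):
          """Render a mono speech signal at a virtual azimuth by convolving it
          with a pair of head-related impulse responses (FIR filtering)."""
          left = lfilter(hrir_left, 1.0, mono)
          right = lfilter(hrir_right, 1.0, mono)
          return np.stack([left, right], axis=-1)  # (samples, 2) binaural signal

      # Placeholder HRIRs; a real display would store one measured (or simplified,
      # minimum-phase) pair per 30-degree azimuth increment.
      hrir_l = np.random.randn(128) * np.hanning(128)
      hrir_r = np.random.randn(128) * np.hanning(128)
      speech = np.random.randn(fs)  # one second of stand-in "speech"
      binaural = spatialize(speech, hrir_l, hrir_r)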

  2. Multichannel Spatial Auditory Display for Speech Communications

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Erbe, Tom

    1994-01-01

    A spatial auditory display for multiple speech communications was developed at NASA/Ames Research Center. Input is spatialized by the use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four-letter call signs used by launch personnel at NASA against diotic speech babble. Spatial positions at 30 degree azimuth increments were evaluated. The results from eight subjects showed a maximum intelligibility improvement of about 6-7 dB when the signal was spatialized to 60 or 90 degree azimuth positions.

  3. Multichannel spatial auditory display for speech communications.

    PubMed

    Begault, D R; Erbe, T

    1994-10-01

    A spatial auditory display for multiple speech communications was developed at NASA/Ames Research Center. Input is spatialized by the use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four-letter call signs used by launch personnel at NASA against diotic speech babble. Spatial positions at 30 degrees azimuth increments were evaluated. The results from eight subjects showed a maximum intelligibility improvement of about 6-7 dB when the signal was spatialized to 60 or 90 degrees azimuth positions.

  4. Enhanced multi-channel model for auditory spectrotemporal integration.

    PubMed

    Oh, Yonghee; Feth, Lawrence L; Hoglund, Evelyn M

    2015-11-01

    In psychoacoustics, a multi-channel model has traditionally been used to describe detection improvement for multicomponent signals. This model commonly postulates that energy or information within either the frequency or time domain is transformed into a probabilistic decision variable across the auditory channels, and that their weighted linear summation determines optimum detection performance when compared to a critical value such as a decision criterion. In this study, representative integration-based channel models, specifically focused on signal-processing properties of the auditory periphery, are reviewed (e.g., Durlach's channel model). In addition, major limitations of the previous channel models are described when applied to spectral, temporal, and spectrotemporal integration performance by human listeners. Here, integration refers to detection threshold improvements as the number of brief tone bursts in a signal is increased. Previous versions of the multi-channel model underestimate listener performance in these experiments. Further, they are unable to apply a single processing unit to signals which vary simultaneously in time and frequency. Improvements to the previous channel models are proposed by considering more realistic conditions such as correlated signal responses in the auditory channels, nonlinear properties in system performance, and a peripheral processing unit operating in both time and frequency domains.
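
    The "weighted linear summation" account above has a compact closed form: for statistically independent channels, an ideal observer's combined sensitivity is the root-sum-square of the per-channel d' values, and the correlated channel noise mentioned as a needed improvement reduces that gain. A numerical sketch with invented d' values and an assumed equicorrelated noise structure:

      import numpy as np

      d = np.array([0.8, 1.0, 0.6, 0.9])      # hypothetical per-channel d' values

      # Independent channels, optimal weights: d'_combined = sqrt(sum d_i^2)
      d_independent = np.sqrt(np.sum(d ** 2))

      # Equicorrelated channel noise (correlation rho) lowers the combination gain:
      # d'_combined = sqrt(d^T C^-1 d), with C = (1 - rho) I + rho * ones
      rho = 0.4
      n = d.size
      C = (1 - rho) * np.eye(n) + rho * np.ones((n, n))
      d_correlated = np.sqrt(d @ np.linalg.solve(C, d))

      print(f"independent: {d_independent:.2f}   rho=0.4: {d_correlated:.2f}")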

  5. Consequences of Broad Auditory Filters for Identification of Multichannel-Compressed Vowels

    ERIC Educational Resources Information Center

    Souza, Pamela; Wright, Richard; Bor, Stephanie

    2012-01-01

    Purpose: In view of previous findings (Bor, Souza, & Wright, 2008) that some listeners are more susceptible to spectral changes from multichannel compression (MCC) than others, this study addressed the extent to which differences in effects of MCC were related to differences in auditory filter width. Method: Listeners were recruited in 3 groups:…

  6. Multi-channel spatial auditory display for speech communications

    NASA Technical Reports Server (NTRS)

    Begault, Durand; Erbe, Tom

    1993-01-01

    A spatial auditory display for multiple speech communications was developed at NASA-Ames Research Center. Input is spatialized by use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four letter call signs used by launch personnel at NASA, against diotic speech babble. Spatial positions at 30 deg azimuth increments were evaluated. The results from eight subjects showed a maximal intelligibility improvement of about 6 to 7 dB when the signal was spatialized to 60 deg or 90 deg azimuth positions.

  7. Multi-channel spatial auditory display for speech communications

    NASA Astrophysics Data System (ADS)

    Begault, Durand; Erbe, Tom

    1993-10-01

    A spatial auditory display for multiple speech communications was developed at NASA-Ames Research Center. Input is spatialized by use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four letter call signs used by launch personnel at NASA, against diotic speech babble. Spatial positions at 30 deg azimuth increments were evaluated. The results from eight subjects showed a maximal intelligibility improvement of about 6 to 7 dB when the signal was spatialized to 60 deg or 90 deg azimuth positions.

  8. A Brain System for Auditory Working Memory.

    PubMed

    Kumar, Sukhbinder; Joseph, Sabine; Gander, Phillip E; Barascud, Nicolas; Halpern, Andrea R; Griffiths, Timothy D

    2016-04-20

    The brain basis for auditory working memory, the process of actively maintaining sounds in memory over short periods of time, is controversial. Using functional magnetic resonance imaging in human participants, we demonstrate that the maintenance of single tones in memory is associated with activation in auditory cortex. In addition, sustained activation was observed in hippocampus and inferior frontal gyrus. Multivoxel pattern analysis showed that patterns of activity in auditory cortex and left inferior frontal gyrus distinguished the tone that was maintained in memory. Functional connectivity during maintenance was demonstrated between auditory cortex and both the hippocampus and inferior frontal cortex. The data support a system for auditory working memory based on the maintenance of sound-specific representations in auditory cortex by projections from higher-order areas, including the hippocampus and frontal cortex. In this work, we demonstrate a system for maintaining sound in working memory based on activity in auditory cortex, hippocampus, and frontal cortex, and functional connectivity among them. Specifically, our work makes three advances from the previous work. First, we robustly demonstrate hippocampal involvement in all phases of auditory working memory (encoding, maintenance, and retrieval): the role of hippocampus in working memory is controversial. Second, using a pattern classification technique, we show that activity in the auditory cortex and inferior frontal gyrus is specific to the maintained tones in working memory. Third, we show long-range connectivity of auditory cortex to hippocampus and frontal cortex, which may be responsible for keeping such representations active during working memory maintenance. Copyright © 2016 Kumar et al.

  9. Consequences of broad auditory filters for identification of multichannel-compressed vowels

    PubMed Central

    Souza, Pamela; Wright, Richard; Bor, Stephanie

    2012-01-01

    Purpose In view of previous findings (Bor, Souza & Wright, 2008) that some listeners are more susceptible to spectral changes from multichannel compression (MCC) than others, this study addressed the extent to which differences in effects of MCC were related to differences in auditory filter width. Method Listeners were recruited in three groups: listeners with flat sensorineural loss, listeners with sloping sensorineural loss, and a control group of listeners with normal hearing. Individual auditory filter measurements were obtained at 500 and 2000 Hz. The filter widths were related to identification of vowels processed with 16-channel MCC and with a control (linear) condition. Results Listeners with flat loss had broader filters at 500 Hz but not at 2000 Hz, compared to listeners with sloping loss. Vowel identification was poorer for MCC compared to linear amplification. Listeners with flat loss made more errors than listeners with sloping loss, and there was a significant relationship between filter width and the effects of MCC. Conclusions Broadened auditory filters can reduce the ability to process amplitude-compressed vowel spectra. This suggests that individual frequency selectivity is one factor which influences benefit of MCC, when a high number of compression channels are used. PMID:22207696

  10. Auditory brain stem responses in the detection of brain death.

    PubMed

    Ozgirgin, O Nuri; Ozçelik, Tuncay; Sevimli, Nilay Kizilkaya

    2003-01-01

    We evaluated comatose patients by auditory brain stem responses (ABR) to determine the role of ABR in the diagnosis of impending brain death. Sixty comatose patients in the intensive care unit were evaluated by brain stem evoked response audiometry. Correlations were sought between the absence or presence of ABRs and the presenting pathology, the Glasgow Coma Scale (GCS) scores, and ultimate diagnoses. The brain stem responses were totally absent in 41 patients. Presence of wave I could be obtained in only 10 patients. All the waveforms were found in nine patients; however, in eight patients the potentials disappeared as the GCS scores decreased to 3. Detection of wave I alone strongly suggested dysfunction of the brain stem. However, loss of wave I particularly in trauma patients aroused doubt as to whether the absence was associated with auditory end organ injury or brain stem dysfunction. The results suggest that evaluation of ABR may support brain death in a comatose patient (i) when wave I is present alone, (ii) the absence of wave I is accompanied by a documented auditory end organ injury, or (iii) when previously recorded potentials are no longer detectable.

  11. Exploring functional connectivity networks with multichannel brain array coils.

    PubMed

    Anteraper, Sheeba Arnold; Whitfield-Gabrieli, Susan; Keil, Boris; Shannon, Steven; Gabrieli, John D; Triantafyllou, Christina

    2013-01-01

    The use of multichannel array head coils in functional and structural magnetic resonance imaging (MRI) provides increased signal-to-noise ratio (SNR), higher sensitivity, and parallel imaging capabilities. However, their benefits remain to be systematically explored in the context of resting-state functional connectivity MRI (fcMRI). In this study, we compare signal detectability within and between commercially available multichannel brain coils, a 32-Channel (32Ch), and a 12-Channel (12Ch) at 3T, in a high-resolution regime to accurately map resting-state networks. We investigate whether the 32Ch coil can extract and map fcMRI more efficiently and robustly than the 12Ch coil using seed-based and graph-theory-based analyses. Our findings demonstrate that although the 12Ch coil can be used to reveal resting-state connectivity maps, the 32Ch coil provides increased detailed functional connectivity maps (using seed-based analysis) as well as increased global and local efficiency, and cost (using graph-theory-based analysis), in a number of widely reported resting-state networks. The exploration of subcortical networks, which are scarcely reported due to limitations in spatial-resolution and coil sensitivity, also proved beneficial with the 32Ch coil. Further, comparisons regarding the data acquisition time required to successfully map these networks indicated that scan time can be significantly reduced by 50% when a coil with increased number of channels (i.e., 32Ch) is used. Switching to multichannel arrays in resting-state fcMRI could, therefore, provide both detailed functional connectivity maps and acquisition time reductions, which could further benefit imaging special subject populations, such as patients or pediatrics who have less tolerance in lengthy imaging sessions.
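
    The graph-theoretic quantities reported above (cost, global efficiency, local efficiency) are standard measures computed on a thresholded connectivity graph. A minimal sketch with a synthetic ROI-by-ROI correlation matrix; the ROI count, threshold, and use of networkx are illustrative assumptions, not the study's pipeline:

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(0)
      n_rois = 20
      ts = rng.standard_normal((240, n_rois))      # stand-in ROI time series (TRs x ROIs)
      corr = np.corrcoef(ts, rowvar=False)         # functional connectivity matrix

      threshold = 0.25                             # assumed absolute-correlation threshold
      adj = (np.abs(corr) > threshold) & ~np.eye(n_rois, dtype=bool)
      G = nx.from_numpy_array(adj.astype(int))

      cost = nx.density(G)                         # fraction of possible edges retained
      print(cost, nx.global_efficiency(G), nx.local_efficiency(G))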

  12. Multichannel Brain-Signal-Amplifying and Digitizing System

    NASA Technical Reports Server (NTRS)

    Gevins, Alan

    2005-01-01

    An apparatus has been developed for use in acquiring multichannel electroencephalographic (EEG) data from a human subject. EEG apparatuses with many channels in use heretofore have been too heavy and bulky to be worn, and have been limited in dynamic range to no more than 18 bits. The present apparatus is small and light enough to be worn by the subject. It is capable of amplifying EEG signals and digitizing them to 22 bits in as many as 150 channels. The apparatus is controlled by software and is plugged into the USB port of a personal computer. This apparatus makes it possible, for the first time, to obtain high-resolution functional EEG images of a thinking brain in a real-life, ambulatory setting outside a research laboratory or hospital.
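
    For scale, the quoted bit depths translate directly into theoretical dynamic range (about 6.02 dB per bit), so 22 bits gives roughly 132 dB versus roughly 108 dB at the earlier 18-bit limit; a one-line check:

      import math

      for bits in (18, 22):
          print(f"{bits} bits -> {20 * math.log10(2 ** bits):.1f} dB theoretical dynamic range")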

  13. The utility of multichannel local field potentials for brain-machine interfaces

    NASA Astrophysics Data System (ADS)

    Hwang, Eun Jung; Andersen, Richard A.

    2013-08-01

    Objective. Local field potentials (LFPs) that carry information about the subject's motor intention have the potential to serve as a complement or alternative to spike signals for brain-machine interfaces (BMIs). The goal of this study is to assess the utility of LFPs for BMIs by characterizing the largely unknown information coding properties of multichannel LFPs. Approach. Two monkeys were implanted, each with a 16-channel electrode array, in the parietal reach region where both LFPs and spikes are known to encode the subject's intended reach target. We examined how multichannel LFPs recorded during a reach task jointly carry reach target information, and compared the LFP performance to simultaneously recorded multichannel spikes. Main Results. LFPs yielded a higher number of channels that were informative about reach targets than spikes. Single channel LFPs provided more accurate target information than single channel spikes. However, LFPs showed significantly larger signal and noise correlations across channels than spikes. Reach target decoders performed worse when using multichannel LFPs than multichannel spikes. The underperformance of multichannel LFPs was mostly due to their larger noise correlation because noise de-correlated multichannel LFPs produced a decoding accuracy comparable to multichannel spikes. Despite the high noise correlation, decoders using LFPs in addition to spikes outperformed decoders using only spikes. Significance. These results demonstrate that multichannel LFPs could effectively complement spikes for BMI applications by yielding more informative channels. The utility of multichannel LFPs may be further augmented if their high noise correlation can be taken into account by decoders.
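
    The "noise de-correlation" comparison described above can be emulated by shuffling trials independently for each channel within a target condition, which preserves each channel's tuning but destroys trial-by-trial covariance. A sketch on synthetic data; the feature dimensions, the shared-noise model, and the choice of an LDA decoder are all assumptions:

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      n_trials, n_channels, n_targets = 400, 16, 8
      targets = rng.integers(0, n_targets, n_trials)

      # Synthetic per-trial features (e.g., LFP band power) with a shared noise source
      tuning = rng.standard_normal((n_targets, n_channels))
      shared = rng.standard_normal((n_trials, 1))
      X = tuning[targets] + 0.8 * shared + 0.6 * rng.standard_normal((n_trials, n_channels))

      def accuracy(features, labels):
          return cross_val_score(LinearDiscriminantAnalysis(), features, labels, cv=5).mean()

      # De-correlate noise: permute trials per channel within each target condition
      X_shuffled = X.copy()
      for t in range(n_targets):
          idx = np.where(targets == t)[0]
          for ch in range(n_channels):
              X_shuffled[idx, ch] = X[rng.permutation(idx), ch]

      print("intact:", accuracy(X, targets), " noise de-correlated:", accuracy(X_shuffled, targets))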

  14. The SRI24 multichannel brain atlas: construction and applications

    NASA Astrophysics Data System (ADS)

    Rohlfing, Torsten; Zahr, Natalie M.; Sullivan, Edith V.; Pfefferbaum, Adolf

    2008-03-01

    We present a new standard atlas of the human brain based on magnetic resonance images. The atlas was generated using unbiased population registration from high-resolution images obtained by multichannel-coil acquisition at 3T in a group of 24 normal subjects. The final atlas comprises three anatomical channels (T1-weighted, early and late spin echo), three diffusion-related channels (fractional anisotropy, mean diffusivity, diffusion-weighted image), and three tissue probability maps (CSF, gray matter, white matter). The atlas is dynamic in that it is implicitly represented by nonrigid transformations between the 24 subject images, as well as distortion-correction alignments between the image channels in each subject. The atlas can, therefore, be generated at essentially arbitrary image resolutions and orientations (e.g., AC/PC aligned), without compounding interpolation artifacts. We demonstrate in this paper two different applications of the atlas: (a) region definition by label propagation in a fiber tracking study is enabled by the increased sharpness of our atlas compared with other available atlases, and (b) spatial normalization is enabled by its average shape property. In summary, our atlas has unique features and will be made available to the scientific community as a resource and reference system for future imaging-based studies of the human brain.

  15. Spatiotemporal brain dynamics of auditory temporal assimilation.

    PubMed

    Hironaga, Naruhito; Mitsudo, Takako; Hayamizu, Mariko; Nakajima, Yoshitaka; Takeichi, Hiroshige; Tobimatsu, Shozo

    2017-09-12

    Time is a fundamental dimension, but millisecond-level judgments sometimes lead to perceptual illusions. We previously introduced a "time-shrinking illusion" using a psychological paradigm that induces auditory temporal assimilation (ATA). In ATA, the duration of two successive intervals (T1 and T2), marked by three auditory stimuli, can be perceived as equal when they are not. Here, we investigate the spatiotemporal profile of human temporal judgments using magnetoencephalography (MEG). Behavioural results showed typical ATA: participants judged T1 and T2 as equal when T2 - T1 ≤ +80 ms. MEG source-localisation analysis demonstrated that regional activity differences between judgment and no-judgment conditions emerged in the temporoparietal junction (TPJ) during T2. This observation in the TPJ may indicate its involvement in the encoding process when T1 ≠ T2. Activation in the inferior frontal gyrus (IFG) was enhanced irrespective of the stimulus patterns when participants engaged in temporal judgment. Furthermore, just after the final marker, activity in the IFG was enhanced specifically for the time-shrinking pattern. This indicates that activity in the IFG is also related to the illusory perception of time-interval equality. Based on these observations, we propose neural signatures for judgments of temporal equality in the human brain.

  16. Auditory temporal processing deficits in children with periventricular brain injury.

    PubMed

    Downie, Andrea L S; Jakobson, Lorna S; Frisk, Virginia; Ushycky, Irene

    2002-02-01

    The present study investigated whether auditory temporal processing deficits are related to the presence and/or the severity of periventricular brain injury and the reading difficulties experienced by extremely low birthweight (ELBW: birthweight <1000 g) children. Results indicate that ELBW children with mild or severe brain lesions obtained significantly lower scores on a test requiring auditory temporal order judgments than ELBW children without periventricular brain injury or children who were full-term. Structural equation modeling indicated that a model in which auditory temporal processing deficits predicted speech sound discrimination and phonological processing ability provided a better fit for the data than did a second model, which hypothesized that auditory temporal processing deficits are associated with poor reading abilities through a working memory deficit. These findings suggest that an impairment in auditory temporal processing may contribute to the reading difficulties experienced by ELBW children. Copyright 2002 Elsevier Science (USA).

  17. [Brain stem auditory evoked potentials in brain death state].

    PubMed

    Kojder, I; Garell, S; Włodarczyk, E; Sagan, L; Jezewski, D; Slósarek, J

    1998-01-01

    The authors studied auditory brainstem evoked potentials (BAEP) in 27 organ donors aged 40 to 68 years treated in neurosurgery units in Szczecin and Grenoble. Abnormal results were found in all cases. In 63% of cases no evoked potentials were obtained, in 34% only the 1st wave was obtained, and in two cases an evolution towards extinction of activity was observed. The authors believe that morphological extinction of the BAEP proceeds from the later waves to the earlier ones, in agreement with the rostrocaudal direction in which the functions of brain midline structures are extinguished, and that a single recording may therefore yield varying findings.

  1. Auditory brain stem response to complex sounds: a tutorial.

    PubMed

    Skoe, Erika; Kraus, Nina

    2010-06-01

    This tutorial provides a comprehensive overview of the methodological approach to collecting and analyzing auditory brain stem responses to complex sounds (cABRs). cABRs provide a window into how behaviorally relevant sounds such as speech and music are processed in the brain. Because temporal and spectral characteristics of sounds are preserved in this subcortical response, cABRs can be used to assess specific impairments and enhancements in auditory processing. Notably, subcortical auditory function is neither passive nor hardwired but dynamically interacts with higher-level cognitive processes to refine how sounds are transcribed into neural code. This experience-dependent plasticity, which can occur on a number of time scales (e.g., life-long experience with speech or music, short-term auditory training, on-line auditory processing), helps shape sensory perception. Thus, by being an objective and noninvasive means for examining cognitive function and experience-dependent processes in sensory activity, cABRs have considerable utility in the study of populations where auditory function is of interest (e.g., auditory experts such as musicians, and persons with hearing loss, auditory processing, and language disorders). This tutorial is intended for clinicians and researchers seeking to integrate cABRs into their clinical or research programs.
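
    Two of the routine cABR analyses this tutorial covers are stimulus-to-response cross-correlation (lag and fidelity of the subcortical response) and a spectral read-out of the frequency-following portion at the stimulus F0. A toy sketch; the sampling rate, the 100 Hz "F0", and the 8 ms lag are invented stand-ins, not real data:

      import numpy as np
      from scipy.signal import correlate, correlation_lags

      fs = 20000
      t = np.arange(0, 0.2, 1 / fs)
      stimulus = np.sin(2 * np.pi * 100 * t) * np.hanning(t.size)   # stand-in periodic stimulus
      response = np.roll(stimulus, int(0.008 * fs)) + 0.05 * np.random.randn(t.size)

      # Stimulus-to-response cross-correlation: peak lag ~ neural transmission delay
      xc = correlate(response, stimulus, mode="full")
      lags = correlation_lags(response.size, stimulus.size, mode="full") / fs
      print("lag =", 1000 * lags[np.argmax(xc)], "ms")

      # Amplitude of the frequency-following response at the stimulus F0 (100 Hz)
      spectrum = np.abs(np.fft.rfft(response)) / t.size
      freqs = np.fft.rfftfreq(t.size, 1 / fs)
      print("F0 amplitude =", spectrum[np.argmin(np.abs(freqs - 100))])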

  2. Evoked potential correlates of selective attention with multi-channel auditory inputs

    NASA Technical Reports Server (NTRS)

    Schwent, V. L.; Hillyard, S. A.

    1975-01-01

    Ten subjects were presented with random, rapid sequences of four auditory tones which were separated in pitch and apparent spatial position. The N1 component of the auditory vertex evoked potential (EP) measured relative to a baseline was observed to increase with attention. It was concluded that the N1 enhancement reflects a finely tuned selective attention to one stimulus channel among several concurrent, competing channels. This EP enhancement probably increases with increased information load on the subject.
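
    The N1 measurement used here is an average over stimulus-locked epochs followed by a baseline-referenced amplitude read-out in the N1 latency window. A schematic sketch with an invented epoch array; the sampling rate, window (80-150 ms), and channel are assumptions:

      import numpy as np

      fs = 250                                        # assumed EEG sampling rate, Hz
      epochs = np.random.randn(200, int(0.6 * fs))    # trials x samples, -0.1 s to +0.5 s
      times = np.arange(epochs.shape[1]) / fs - 0.1   # seconds relative to tone onset

      erp = epochs.mean(axis=0)                       # evoked potential at the vertex channel
      baseline = erp[times < 0].mean()                # pre-stimulus baseline

      n1_window = (times >= 0.08) & (times <= 0.15)
      n1_amplitude = erp[n1_window].min() - baseline  # N1 is a negative deflection
      print(f"N1 amplitude relative to baseline: {n1_amplitude:.2f} (arbitrary units)")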

  3. Auditory multistability and neurotransmitter concentrations in the human brain.

    PubMed

    Kondo, Hirohito M; Farkas, Dávid; Denham, Susan L; Asai, Tomohisa; Winkler, István

    2017-02-19

    Multistability in perception is a powerful tool for investigating sensory-perceptual transformations, because it produces dissociations between sensory inputs and subjective experience. Spontaneous switching between different perceptual objects occurs during prolonged listening to a sound sequence of tone triplets or repeated words (termed auditory streaming and verbal transformations, respectively). We used these examples of auditory multistability to examine to what extent neurochemical and cognitive factors influence the observed idiosyncratic patterns of switching between perceptual objects. The concentrations of glutamate-glutamine (Glx) and γ-aminobutyric acid (GABA) in brain regions were measured by magnetic resonance spectroscopy, while personality traits and executive functions were assessed using questionnaires and response inhibition tasks. Idiosyncratic patterns of perceptual switching in the two multistable stimulus configurations were identified using a multidimensional scaling (MDS) analysis. Intriguingly, although switching patterns within each individual differed between auditory streaming and verbal transformations, similar MDS dimensions were extracted separately from the two datasets. Individual switching patterns were significantly correlated with Glx and GABA concentrations in auditory cortex and inferior frontal cortex but not with the personality traits and executive functions. Our results suggest that auditory perceptual organization depends on the balance between neural excitation and inhibition in different brain regions. This article is part of the themed issue 'Auditory and visual scene analysis'.
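
    The MDS step mentioned above embeds a matrix of pairwise dissimilarities between individual switching patterns into a few dimensions, which can then be related to the neurochemical measures. A minimal sketch with made-up per-participant switching profiles and scikit-learn:

      import numpy as np
      from sklearn.manifold import MDS

      rng = np.random.default_rng(0)
      profiles = rng.random((15, 40))     # stand-in switching-pattern descriptors, one row per listener

      # Pairwise (Euclidean) dissimilarity between individual switching patterns
      diss = np.linalg.norm(profiles[:, None, :] - profiles[None, :, :], axis=-1)

      mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
      coords = mds.fit_transform(diss)    # (listeners, 2): dimensions to relate to Glx/GABA
      print(coords.shape)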

  4. [Analysis of auditory information in the brain of the cetacean].

    PubMed

    Popov, V V; Supin, A Ia

    2006-01-01

    A distinctive feature of the cetacean brain is the exceptional development of the auditory neural centres. The location of the projection sensory areas, including the auditory areas, in the cetacean cerebral cortex is essentially different from that in other mammals. The characteristics of evoked potentials (EPs) indicate the presence of several functional divisions in the auditory cortex. Physiological studies of the cetacean auditory centres have mainly been performed using the EP technique. Of the several types of EPs, the short-latency auditory EP has been studied most thoroughly. In cetaceans, it is characterised by exceptionally high temporal resolution, with an integration time of about 0.3 ms, corresponding to a cut-off frequency of 1700 Hz. This greatly exceeds the temporal resolution of hearing in terrestrial mammals. The frequency selectivity of hearing in cetaceans was measured using several variants of the masking technique. The acuity of frequency selectivity in cetaceans exceeds that of most terrestrial mammals (excepting the bats). This acute frequency selectivity allows the discrimination of very fine spectral patterns of auditory signals.

  5. The human brain maintains contradictory and redundant auditory sensory predictions.

    PubMed

    Pieszek, Marika; Widmann, Andreas; Gruber, Thomas; Schröger, Erich

    2013-01-01

    Computational and experimental research has revealed that auditory sensory predictions are derived from regularities of the current environment by using internal generative models. However, so far, what has not been addressed is how the auditory system handles situations giving rise to redundant or even contradictory predictions derived from different sources of information. To this end, we measured error signals in the event-related brain potentials (ERPs) in response to violations of auditory predictions. Sounds could be predicted on the basis of overall probability, i.e., one sound was presented frequently and another sound rarely. Furthermore, each sound was predicted by an informative visual cue. Participants' task was to use the cue and to discriminate the two sounds as fast as possible. Violations of the probability based prediction (i.e., a rare sound) as well as violations of the visual-auditory prediction (i.e., an incongruent sound) elicited error signals in the ERPs (Mismatch Negativity [MMN] and Incongruency Response [IR]). Particular error signals were observed even in case the overall probability and the visual symbol predicted different sounds. That is, the auditory system concurrently maintains and tests contradictory predictions. Moreover, if the same sound was predicted, we observed an additive error signal (scalp potential and primary current density) equaling the sum of the specific error signals. Thus, the auditory system maintains and tolerates functionally independently represented redundant and contradictory predictions. We argue that the auditory system exploits all currently active regularities in order to optimally prepare for future events.

  6. An auditory brain-computer interface using virtual sound field.

    PubMed

    Gao, Haiyang; Ouyang, Minhui; Zhang, Dan; Hong, Bo

    2011-01-01

    Brain-computer interfaces (BCIs) exploring the auditory communication channel might be preferable for amyotrophic lateral sclerosis (ALS) patients with poor sight or with the visual system being occupied for other uses. Spatial attention was proven to be able to modulate the event-related potentials (ERPs); yet up to now, there is no auditory BCI based on virtual sound field. In this study, auditory spatial attention was introduced by using stimuli in a virtual sound field. Subjects attended selectively to the virtual location of the target sound and discriminated its relevant properties. The concurrently recorded ERP components and the users' performance were compared with those of the paradigm where all sounds were presented in the frontal direction. The early ERP components (100-250 ms) and the simulated online accuracies indicated that spatial attention indeed added effective discriminative information for BCI classification. The proposed auditory paradigm using virtual sound field may lead to a high-performance and portable BCI system.

  7. Methods for the analysis of auditory processing in the brain.

    PubMed

    Theunissen, Frédéric E; Woolley, Sarah M N; Hsu, Anne; Fremouw, Thane

    2004-06-01

    Understanding song perception and singing behavior in birds requires the study of auditory processing of complex sounds throughout the avian brain. We can divide the basics of auditory perception into two general processes: (1) encoding, the process whereby sound is transformed into neural activity and (2) decoding, the process whereby patterns of neural activity take on perceptual meaning and therefore guide behavioral responses to sounds. In birdsong research, most studies have focused on the decoding process: What are the responses of the specialized auditory neurons in the song control system? and What do they mean for the bird? Recently, new techniques addressing both encoding and decoding have been developed for use in songbirds. Here, we first describe some powerful methods for analyzing what acoustical aspects of complex sounds like songs are encoded by auditory processing neurons in songbird brain. These methods include the estimation and analysis of spectro-temporal receptive fields (STRFs) for auditory neurons. Then we discuss the decoding methods that have been used to understand how songbird neurons may discriminate among different songs and other sounds based on mean spike-count rates.
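
    An STRF of the kind described here is commonly estimated as a regularized linear mapping from the lagged stimulus spectrogram to the neuron's firing rate. A compact ridge-regression sketch on synthetic data; the spectrogram size, lag window, and regularization strength are illustrative choices rather than the authors' method:

      import numpy as np
      from sklearn.linear_model import Ridge

      rng = np.random.default_rng(2)
      n_freq, n_lags, n_time = 16, 10, 3000
      spec = rng.standard_normal((n_time, n_freq))          # stand-in song spectrogram
      true_strf = 0.1 * rng.standard_normal((n_lags, n_freq))

      # Design matrix of lagged spectrogram frames: column block `lag` holds spec[t - lag]
      X = np.zeros((n_time, n_lags * n_freq))
      for lag in range(n_lags):
          X[lag:, lag * n_freq:(lag + 1) * n_freq] = spec[:n_time - lag]

      rate = X @ true_strf.ravel() + 0.5 * rng.standard_normal(n_time)   # noisy firing rate

      strf = Ridge(alpha=10.0).fit(X, rate).coef_.reshape(n_lags, n_freq)
      print("recovery r =", np.corrcoef(strf.ravel(), true_strf.ravel())[0, 1])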

  8. Thalamic and parietal brain morphology predicts auditory category learning.

    PubMed

    Scharinger, Mathias; Henry, Molly J; Erb, Julia; Meyer, Lars; Obleser, Jonas

    2014-01-01

    Auditory categorization is a vital skill involving the attribution of meaning to acoustic events, engaging domain-specific (i.e., auditory) as well as domain-general (e.g., executive) brain networks. A listener's ability to categorize novel acoustic stimuli should therefore depend on both, with the domain-general network being particularly relevant for adaptively changing listening strategies and directing attention to relevant acoustic cues. Here we assessed adaptive listening behavior, using complex acoustic stimuli with an initially salient (but later degraded) spectral cue and a secondary, duration cue that remained nondegraded. We employed voxel-based morphometry (VBM) to identify cortical and subcortical brain structures whose individual neuroanatomy predicted task performance and the ability to optimally switch to making use of temporal cues after spectral degradation. Behavioral listening strategies were assessed by logistic regression and revealed mainly strategy switches in the expected direction, with considerable individual differences. Gray-matter probability in the left inferior parietal lobule (BA 40) and left precentral gyrus was predictive of "optimal" strategy switch, while gray-matter probability in thalamic areas, comprising the medial geniculate body, co-varied with overall performance. Taken together, our findings suggest that successful auditory categorization relies on domain-specific neural circuits in the ascending auditory pathway, while adaptive listening behavior depends more on brain structure in parietal cortex, enabling the (re)direction of attention to salient stimulus properties. © 2013 Published by Elsevier Ltd.

  9. Neural network approach in multichannel auditory event-related potential analysis.

    PubMed

    Wu, F Y; Slater, J D; Ramsay, R E

    1994-04-01

    Even though there are presently no clearly defined criteria for the assessment of P300 event-related potential (ERP) abnormality, it is strongly indicated through statistical analysis that such criteria exist for classifying control subjects and patients with diseases resulting in neuropsychological impairment such as multiple sclerosis (MS). We have demonstrated the feasibility of artificial neural network (ANN) methods in classifying ERP waveforms measured at a single channel (Cz) from control subjects and MS patients. In this paper, we report the results of multichannel ERP analysis and a modified network analysis methodology to enhance automation of the classification rule extraction process. The proposed methodology significantly reduces the work of statistical analysis. It also helps to standardize the criteria of P300 ERP assessment and facilitate the computer-aided analysis on neuropsychological functions.
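
    A present-day analogue of the ANN classification described above is a small feed-forward network applied to per-subject ERP feature vectors (for example, P300 amplitudes and latencies across channels). A sketch with synthetic features; the dimensions, labels, and scikit-learn MLP are assumptions, not the authors' original network:

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(3)
      n_subjects, n_features = 80, 24              # e.g., P300 amplitude/latency per channel
      X = rng.standard_normal((n_subjects, n_features))
      y = rng.integers(0, 2, n_subjects)           # 0 = control, 1 = patient (stand-in labels)
      X[y == 1, :4] += 1.0                         # inject a weak group difference

      clf = make_pipeline(StandardScaler(),
                          MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0))
      print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())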

  10. Analysis of auditory function using brainstem auditory evoked potentials and auditory steady state responses in infants with perinatal brain injury.

    PubMed

    Moreno-Aguirre, Alma Janeth; Santiago-Rodríguez, Efraín; Harmony, Thalía; Fernández-Bouzas, Antonio; Porras-Kattz, Eneida

    2010-02-01

    Approximately 2-4% of newborns with perinatal risk factors present hearing loss. The aim of this study was to analyse the auditory function in infants with perinatal brain injury (PBI). Brainstem auditory evoked potentials (BAEPs), auditory steady state responses (ASSRs), and tympanometry studies were carried out in 294 infants with PBI (586 ears, two infants had unilateral microtia-atresia). BAEPs were abnormal in 158 (27%) ears, ASSRs in 227 (39%), and tympanometry anomalies were present in 131 (22%) ears. When ASSR thresholds were compared with BAEPs, the assessment yielded 92% sensitivity and 68% specificity. When ASSR thresholds were compared with tympanometry results as an indicator of middle-ear pathology, the assessment gave 96% sensitivity and 77% specificity. When BAEP thresholds were compared with tympanometry results, sensitivity was 35% and specificity 95%. In conclusion, BAEPs are a useful test for neonatal auditory screening; they identify sensorineural hearing loss more accurately. ASSRs are more pertinent for identifying conductive hearing loss associated with middle-ear pathology. The consistency and accuracy of these results could be considered in additional studies.
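
    The sensitivity and specificity figures above come from the usual 2x2 comparison of the test under evaluation against a reference test; a worked example with hypothetical counts (for illustration only, not the study's raw data):

      # Hypothetical ear counts: rows split by the reference test (e.g., BAEP)
      tp, fn = 90, 10     # reference-abnormal ears: test abnormal / test normal
      tn, fp = 70, 30     # reference-normal ears:   test normal / test abnormal

      sensitivity = tp / (tp + fn)    # 0.90: abnormal ears correctly flagged
      specificity = tn / (tn + fp)    # 0.70: normal ears correctly passed
      print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")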

  11. Brain Metabolism during Hallucination-Like Auditory Stimulation in Schizophrenia

    PubMed Central

    Horga, Guillermo; Fernández-Egea, Emilio; Mané, Anna; Font, Mireia; Schatz, Kelly C.; Falcon, Carles; Lomeña, Francisco; Bernardo, Miguel; Parellada, Eduard

    2014-01-01

    Auditory verbal hallucinations (AVH) in schizophrenia are typically characterized by rich emotional content. Despite the prominent role of emotion in regulating normal perception, the neural interface between emotion-processing regions such as the amygdala and auditory regions involved in perception remains relatively unexplored in AVH. Here, we studied brain metabolism using FDG-PET in 9 remitted patients with schizophrenia that previously reported severe AVH during an acute psychotic episode and 8 matched healthy controls. Participants were scanned twice: (1) at rest and (2) during the perception of aversive auditory stimuli mimicking the content of AVH. Compared to controls, remitted patients showed an exaggerated response to the AVH-like stimuli in limbic and paralimbic regions, including the left amygdala. Furthermore, patients displayed abnormally strong connections between the amygdala and auditory regions of the cortex and thalamus, along with abnormally weak connections between the amygdala and medial prefrontal cortex. These results suggest that abnormal modulation of the auditory cortex by limbic-thalamic structures might be involved in the pathophysiology of AVH and may potentially account for the emotional features that characterize hallucinatory percepts in schizophrenia. PMID:24416328

  12. Analysis of auditory information in the brains of cetaceans.

    PubMed

    Popov, V V; Supin, A Ya

    2007-03-01

    A characteristic feature of the brains of toothed cetaceans is the exceptional development of the auditory neural centers. The location of the projection sensory zones, including the auditory zones, in the cetacean cortex is significantly different from that in other mammals. The characteristics of evoked potentials demonstrate the existence of several functional subdivisions in the auditory cortex. Physiological studies of the auditory neural centers of cetaceans have been performed predominantly using the evoked potentials method. Of the several types of evoked potentials available for non-invasive recording, the most detailed studies have been performed using short-latency auditory evoked potentials (SLAEP). SLAEP in cetaceans are characterized by exceptionally high time resolution, with integration times of about 0.3 msec, which on the frequency scale corresponds to a cut-off frequency of 1700 Hz. This is more than an order of magnitude greater than the time resolution of hearing in terrestrial mammals. The frequency selectivity of hearing in cetaceans has been measured using several versions of the masking method. The acuity of frequency selectivity in cetaceans is several times greater than that in most terrestrial mammals (except bats). The acute frequency selectivity allows the discrimination of very fine spectral patterns of sound signals.

  13. Brain metabolism during hallucination-like auditory stimulation in schizophrenia.

    PubMed

    Horga, Guillermo; Fernández-Egea, Emilio; Mané, Anna; Font, Mireia; Schatz, Kelly C; Falcon, Carles; Lomeña, Francisco; Bernardo, Miguel; Parellada, Eduard

    2014-01-01

    Auditory verbal hallucinations (AVH) in schizophrenia are typically characterized by rich emotional content. Despite the prominent role of emotion in regulating normal perception, the neural interface between emotion-processing regions such as the amygdala and auditory regions involved in perception remains relatively unexplored in AVH. Here, we studied brain metabolism using FDG-PET in 9 remitted patients with schizophrenia that previously reported severe AVH during an acute psychotic episode and 8 matched healthy controls. Participants were scanned twice: (1) at rest and (2) during the perception of aversive auditory stimuli mimicking the content of AVH. Compared to controls, remitted patients showed an exaggerated response to the AVH-like stimuli in limbic and paralimbic regions, including the left amygdala. Furthermore, patients displayed abnormally strong connections between the amygdala and auditory regions of the cortex and thalamus, along with abnormally weak connections between the amygdala and medial prefrontal cortex. These results suggest that abnormal modulation of the auditory cortex by limbic-thalamic structures might be involved in the pathophysiology of AVH and may potentially account for the emotional features that characterize hallucinatory percepts in schizophrenia.

  14. Brain networks underlying mental imagery of auditory and visual information.

    PubMed

    Zvyagintsev, Mikhail; Clemens, Benjamin; Chechko, Natalya; Mathiak, Krystyna A; Sack, Alexander T; Mathiak, Klaus

    2013-05-01

    Mental imagery is a complex cognitive process that resembles the experience of perceiving an object when this object is not physically present to the senses. It has been shown that, depending on the sensory nature of the object, mental imagery also involves corresponding sensory neural mechanisms. However, it remains unclear which areas of the brain subserve supramodal imagery processes that are independent of the object modality, and which brain areas are involved in modality-specific imagery processes. Here, we conducted a functional magnetic resonance imaging study to reveal supramodal and modality-specific networks of mental imagery for auditory and visual information. A common supramodal brain network independent of imagery modality, two separate modality-specific networks for imagery of auditory and visual information, and a common deactivation network were identified. The supramodal network included brain areas related to attention, memory retrieval, motor preparation and semantic processing, as well as areas considered to be part of the default-mode network and multisensory integration areas. The modality-specific networks comprised brain areas involved in processing of respective modality-specific sensory information. Interestingly, we found that imagery of auditory information led to a relative deactivation within the modality-specific areas for visual imagery, and vice versa. In addition, mental imagery of both auditory and visual information widely suppressed the activity of primary sensory and motor areas, i.e., the deactivation network. These findings have important implications for understanding the mechanisms that are involved in the generation of mental imagery. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  15. Amplitude-modulated stimuli reveal auditory-visual interactions in brain activity and brain connectivity.

    PubMed

    Laing, Mark; Rees, Adrian; Vuong, Quoc C

    2015-01-01

    The temporal congruence between auditory and visual signals coming from the same source can be a powerful means by which the brain integrates information from different senses. To investigate how the brain uses temporal information to integrate auditory and visual information from continuous yet unfamiliar stimuli, we used amplitude-modulated tones and size-modulated shapes with which we could manipulate the temporal congruence between the sensory signals. These signals were independently modulated at a slow or a fast rate. Participants were presented with auditory-only, visual-only, or auditory-visual (AV) trials in the fMRI scanner. On AV trials, the auditory and visual signal could have the same (AV congruent) or different modulation rates (AV incongruent). Using psychophysiological interaction analyses, we found that auditory regions showed increased functional connectivity predominantly with frontal regions for AV incongruent relative to AV congruent stimuli. We further found that superior temporal regions, shown previously to integrate auditory and visual signals, showed increased connectivity with frontal and parietal regions for the same contrast. Our findings provide evidence that both activity in a network of brain regions and their connectivity are important for AV integration, and help to bridge the gap between transient and familiar AV stimuli used in previous studies.
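
    The psychophysiological interaction (PPI) analysis used above augments a voxelwise GLM with the product of a seed region's time course and the psychological contrast (here, AV incongruent vs congruent); condition-dependent coupling appears as the weight on that product term. A heavily reduced sketch that skips HRF deconvolution and nuisance regressors, with all signals synthetic:

      import numpy as np

      rng = np.random.default_rng(4)
      n_vols = 300
      seed = rng.standard_normal(n_vols)                       # auditory seed time course
      psych = np.where(rng.random(n_vols) < 0.5, 1.0, -1.0)    # +1 incongruent, -1 congruent
      ppi = (seed - seed.mean()) * psych                       # interaction regressor

      target = 0.4 * seed + 0.3 * ppi + rng.standard_normal(n_vols)   # synthetic frontal voxel

      X = np.column_stack([np.ones(n_vols), seed, psych, ppi])
      beta, *_ = np.linalg.lstsq(X, target, rcond=None)
      print("PPI beta (condition-dependent coupling):", round(beta[3], 2))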

  16. Amplitude-modulated stimuli reveal auditory-visual interactions in brain activity and brain connectivity

    PubMed Central

    Laing, Mark; Rees, Adrian; Vuong, Quoc C.

    2015-01-01

    The temporal congruence between auditory and visual signals coming from the same source can be a powerful means by which the brain integrates information from different senses. To investigate how the brain uses temporal information to integrate auditory and visual information from continuous yet unfamiliar stimuli, we used amplitude-modulated tones and size-modulated shapes with which we could manipulate the temporal congruence between the sensory signals. These signals were independently modulated at a slow or a fast rate. Participants were presented with auditory-only, visual-only, or auditory-visual (AV) trials in the fMRI scanner. On AV trials, the auditory and visual signal could have the same (AV congruent) or different modulation rates (AV incongruent). Using psychophysiological interaction analyses, we found that auditory regions showed increased functional connectivity predominantly with frontal regions for AV incongruent relative to AV congruent stimuli. We further found that superior temporal regions, shown previously to integrate auditory and visual signals, showed increased connectivity with frontal and parietal regions for the same contrast. Our findings provide evidence that both activity in a network of brain regions and their connectivity are important for AV integration, and help to bridge the gap between transient and familiar AV stimuli used in previous studies. PMID:26483710

  17. Temporal Stability of Multichannel, Multimodal ERP (Event-Related Brain Potentials) Recordings

    DTIC Science & Technology

    1986-06-01

    variability. Early papers by Travis and Gottlober (1936, 1937), Davis and Davis (1936), Rubin (1938) and Williams (1939) suggested that EEG activity... Travis, L. E. & Gottlober, A. Do brain waves have individuality? Science, 1936, 84, 532-533. Travis, L. E. & Gottlober, A. How consistent are an

  18. Infant Auditory Processing and Event-related Brain Oscillations

    PubMed Central

    Musacchia, Gabriella; Ortiz-Mantilla, Silvia; Realpe-Bonilla, Teresa; Roesler, Cynthia P.; Benasich, April A.

    2015-01-01

    Rapid auditory processing and acoustic change detection abilities play a critical role in allowing human infants to efficiently process the fine spectral and temporal changes that are characteristic of human language. These abilities lay the foundation for effective language acquisition; allowing infants to hone in on the sounds of their native language. Invasive procedures in animals and scalp-recorded potentials from human adults suggest that simultaneous, rhythmic activity (oscillations) between and within brain regions are fundamental to sensory development; determining the resolution with which incoming stimuli are parsed. At this time, little is known about oscillatory dynamics in human infant development. However, animal neurophysiology and adult EEG data provide the basis for a strong hypothesis that rapid auditory processing in infants is mediated by oscillatory synchrony in discrete frequency bands. In order to investigate this, 128-channel, high-density EEG responses of 4-month old infants to frequency change in tone pairs, presented in two rate conditions (Rapid: 70 msec ISI and Control: 300 msec ISI) were examined. To determine the frequency band and magnitude of activity, auditory evoked response averages were first co-registered with age-appropriate brain templates. Next, the principal components of the response were identified and localized using a two-dipole model of brain activity. Single-trial analysis of oscillatory power showed a robust index of frequency change processing in bursts of Theta band (3 - 8 Hz) activity in both right and left auditory cortices, with left activation more prominent in the Rapid condition. These methods have produced data that are not only some of the first reported evoked oscillations analyses in infants, but are also, importantly, the product of a well-established method of recording and analyzing clean, meticulously collected, infant EEG and ERPs. In this article, we describe our method for infant EEG net
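
    Single-trial power in a band such as theta (3-8 Hz) is commonly computed by band-pass filtering each trial and squaring the analytic (Hilbert) amplitude, or equivalently by wavelet convolution. A small scipy sketch on one synthetic trial; the sampling rate, filter order, and toy signal are assumptions, not the infant EEG pipeline:

      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert

      fs = 250                                            # assumed sampling rate, Hz
      t = np.arange(0, 1.0, 1 / fs)
      trial = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(t.size)   # 5 Hz "theta" + noise

      b, a = butter(4, [3, 8], btype="bandpass", fs=fs)   # theta band-pass
      theta = filtfilt(b, a, trial)                       # zero-phase filtering
      power = np.abs(hilbert(theta)) ** 2                 # single-trial theta power envelope
      print("mean theta power:", power.mean())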

  19. Effect of prenatal lignocaine on auditory brain stem evoked response.

    PubMed Central

    Bozynski, M E; Schumacher, R E; Deschner, L S; Kileny, P

    1989-01-01

    To test the hypothesis that there would be a positive correlation between the interpeak wave (I-V) interval as measured by auditory brain stem evoked response and the ratio of umbilical cord blood arterial to venous lignocaine concentrations in infants born after maternal epidural anaesthesia, 10 normal infants born at full term by elective caesarean section were studied. Umbilical cord arterial and venous plasma samples were assayed for lignocaine, and auditory brain stem evoked responses were elicited at 35 and 70 dB at less than 4 (test 1) and greater than or equal to 48 hours (test 2). Mean wave I-V intervals were prolonged in test 1 when compared with test 2. Linear regression showed the arterial:venous ratio accounted for 66% (left ear) and 43% (right ear) of the variance in test 1 intervals. No association was found in test 2. In newborn infants, changes in serial auditory brain stem evoked response tests occur after maternal lignocaine epidural anaesthesia and these changes correlate with blood lignocaine concentrations. PMID:2774635

  1. Selective entrainment of brain oscillations drives auditory perceptual organization.

    PubMed

    Costa-Faidella, Jordi; Sussman, Elyse S; Escera, Carles

    2017-07-27

    Perceptual sound organization supports our ability to make sense of the complex acoustic environment, to understand speech and to enjoy music. However, the neuronal mechanisms underlying the subjective experience of perceiving univocal auditory patterns that can be listened to, despite hearing all sounds in a scene, are poorly understood. We hereby investigated the manner in which competing sound organizations are simultaneously represented by specific brain activity patterns and the way attention and task demands prime the internal model generating the current percept. Using a selective attention task on ambiguous auditory stimulation coupled with EEG recordings, we found that the phase of low-frequency oscillatory activity dynamically tracks multiple sound organizations concurrently. However, whereas the representation of ignored sound patterns is circumscribed to auditory regions, large-scale oscillatory entrainment in auditory, sensory-motor and executive-control network areas reflects the active perceptual organization, thereby giving rise to the subjective experience of a unitary percept. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Brain Region-Specific Activity Patterns after Recent or Remote Memory Retrieval of Auditory Conditioned Fear

    ERIC Educational Resources Information Center

    Kwon, Jeong-Tae; Jhang, Jinho; Kim, Hyung-Su; Lee, Sujin; Han, Jin-Hee

    2012-01-01

    Memory is thought to be sparsely encoded throughout multiple brain regions forming unique memory trace. Although evidence has established that the amygdala is a key brain site for memory storage and retrieval of auditory conditioned fear memory, it remains elusive whether the auditory brain regions may be involved in fear memory storage or…

  3. Connections for auditory language in the human brain.

    PubMed

    Gierhan, Sarah M E

    2013-11-01

    The white matter bundles that underlie comprehension and production of language have been investigated for a number of years. Several studies have examined which fiber bundles (or tracts) are involved in auditory language processing, and which kind of language information is transmitted by which fiber tract. However, there is much debate about exactly which fiber tracts are involved, their precise course in the brain, how they should be named, and which functions they fulfill. Therefore, the present article reviews the available language-related literature, and educes a neurocognitive model of the pathways for auditory language processing. Besides providing an overview of the current methods used for relating fiber anatomy to function, this article details the precise anatomy of the fiber tracts and their roles in phonological, semantic and syntactic processing, articulation, and repetition.

  5. Brain Mapping of Language and Auditory Perception in High-Functioning Autistic Adults: A PET Study.

    ERIC Educational Resources Information Center

    Muller, R-A.; Behen, M. E.; Rothermel, R. D.; Chugani, D. C.; Muzik, O.; Mangner, T. J.; Chugani, H. T.

    1999-01-01

    A study used positron emission tomography (PET) to examine patterns of brain activation during auditory processing in five high-functioning adults with autism. Results showed that participants exhibited reversed hemispheric dominance during verbal auditory stimulation and reduced activation of the auditory cortex and cerebellum. (CR)

  6. Tactual and auditory vigilance in split-brain man.

    PubMed Central

    Dimond, S J

    1979-01-01

    Two studies are reported of tactual and auditory vigilance performance in patients with a split-brain or partial commissurotomy to examine the attentional behaviour of the right and left hemisphere, and to identify defects in attention which may be related to the division of the cerebral commissures. The performance of the right hemisphere on all tasks of sustained attention so far studied was substantially better than that of the left. Considerable depletion of concentration was observed for the total split-brain group but not in patients with partial commissurotomy. One of the more unusual phenomena of the split-brain condition is that gaps of attention, often lasting many seconds, occur predominantly on the left hemisphere. The switch to a different type of signal on the same hemisphere does not stop them but the switching of signals from one hemisphere to another does. The defect is interpreted as a failure of attention peculiar to the individual hemisphere under test. PMID:762586

  7. Auditory Brain Stem Responses Recorded With Uncushioned Earphones.

    PubMed

    Marsh, R R; Knightly, C A

    1992-11-01

    Although the cushion is essential to accurate pure-tone audiometry with conventional earphones, it may interfere with the auditory brain stem response (ABR) testing of small infants because of its size and the risk of ear canal collapse. To determine the consequences of ABR testing with an uncushioned earphone, adults were tested with and without the cushion, and probe-tube sound measurements were made. Although removing the cushion results in substantial signal attenuation below 1 kHz, there is little effect on the click-elicited ABR.

  8. Multichannel optical brain imaging to separate cerebral vascular, tissue metabolic, and neuronal effects of cocaine

    NASA Astrophysics Data System (ADS)

    Ren, Hugang; Luo, Zhongchi; Yuan, Zhijia; Pan, Yingtian; Du, Congwu

    2012-02-01

    Characterization of cerebral hemodynamic and oxygenation metabolic changes, as well as neuronal function, is of great importance to the study of brain function and related brain disorders such as drug addiction. Compared with other neuroimaging modalities, optical imaging techniques offer high spatiotemporal resolution and can dissect changes in cerebral blood flow (CBF), cerebral blood volume (CBV), hemoglobin oxygenation and intracellular calcium ([Ca2+]i), which serve as markers of vascular function, tissue metabolism and neuronal activity, respectively. Recently, we developed a multiwavelength imaging system and integrated it into a surgical microscope. Three LEDs at λ1=530nm, λ2=570nm and λ3=630nm were used to excite [Ca2+]i fluorescence labeled by Rhod2 (AM) and to sense total hemoglobin (i.e., CBV) and deoxygenated hemoglobin, whereas an 830nm laser diode was used for laser speckle imaging to map CBF across the brain. These light sources were time-shared for illumination of the brain and synchronized with the exposure of a CCD camera to acquire multichannel images of the brain. Our animal studies indicated that this optical approach enabled simultaneous mapping of cocaine-induced changes in CBF, CBV, oxygenated and deoxygenated hemoglobin, and [Ca2+]i in the cortical brain. Its high spatiotemporal resolution (30μm, 10Hz) and large field of view (4x5 mm2) make it an advanced neuroimaging tool for studies of brain function.
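
    The CBF mapping mentioned above relies on laser speckle imaging. As a hedged illustration (not the authors' implementation), the sketch below computes spatial speckle contrast, K = std/mean, over a sliding window of a single raw speckle frame; the frame and the 7x7 window size are placeholders.

    ```python
    # Hedged sketch of spatial laser speckle contrast, K = std / mean, computed
    # over a sliding window of one raw speckle frame. Frame and window size are
    # illustrative placeholders; lower contrast loosely indicates faster flow.
    import numpy as np
    from scipy.ndimage import uniform_filter

    frame = np.random.rand(480, 640)     # stand-in for a raw speckle image
    win = 7                              # assumed window size (pixels)

    mean = uniform_filter(frame, size=win)
    mean_sq = uniform_filter(frame ** 2, size=win)
    var = np.clip(mean_sq - mean ** 2, 0.0, None)

    contrast = np.sqrt(var) / (mean + 1e-12)
    print("median speckle contrast:", np.median(contrast))
    ```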

  9. The brain's voices: comparing nonclinical auditory hallucinations and imagery.

    PubMed

    Linden, David E J; Thornton, Katy; Kuswanto, Carissa N; Johnston, Stephen J; van de Ven, Vincent; Jackson, Michael C

    2011-02-01

    Although auditory verbal hallucinations are often thought to denote mental illness, the majority of voice hearers do not satisfy the criteria for a psychiatric disorder. Here, we report the first functional imaging study of such nonclinical hallucinations in 7 healthy voice hearers, comparing them with auditory imagery. The human voice area in the superior temporal sulcus was activated during both hallucinations and imagery. Other brain areas supporting both hallucinations and imagery included frontotemporal language areas in the left hemisphere, their contralateral homologues, and the supplementary motor area (SMA). Hallucinations are critically distinguished from imagery by lack of voluntary control. We expected this difference to be reflected in the relative timing of prefrontal and sensory areas. Activity of the SMA indeed preceded that of auditory areas during imagery, whereas during hallucinations the two processes occurred simultaneously. Voluntary control was thus represented in the relative timing of prefrontal and sensory activation, whereas the sense of reality of the sensory experience may be a product of the voice area activation. Our results reveal mechanisms of the generation of sensory experience in the absence of external stimulation and suggest new approaches to the investigation of the neurobiology of psychopathology.

  10. Multichannel cochlear implant for selective neuronal activation and chronic use in the free-moving Mongolian gerbil.

    PubMed

    Wiegner, Armin; Wright, Charles G; Vollmer, Maike

    2016-11-01

    Animal models for chronic multichannel cochlear implant stimulation and selective neuronal activation contribute to a better understanding of auditory signal processing and central neural plasticity. This paper describes the design and surgical implantation of a multichannel cochlear implant (CI) system for chronic use in the free-moving gerbil. For chronic stimulation, adult-deafened gerbils were connected to a multichannel commutator that allowed low resistance cable rotation and stable electric connectivity to the current source. Despite the small scale of the gerbil cochlea and auditory brain regions, final electrophysiological mapping experiments revealed selective and tonotopically organized neuronal activation in the auditory cortex. Contact impedances and electrically evoked auditory brainstem responses were stable over several weeks demonstrating the long-term integrity of the implant and the efficacy of the stimulation. Most animal models on multichannel signal processing and stimulation-induced plasticity are limited to larger animals such as ferrets, cats and primates. Multichannel CI stimulation in the free-moving rodent and evidence for selective neuronal activation in gerbil auditory cortex have not been previously reported. Overall, our results show that the gerbil is a robust rodent model for selective and tonotopically organized multichannel CI stimulation. We anticipate that this model provides a useful tool to develop and test both passive stimulation and behavioral training strategies for plastic reorganization and restoration of degraded unilateral and bilateral central auditory signal processing in the hearing impaired and deaf central auditory system. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Sex differences in brain structure in auditory and cingulate regions.

    PubMed

    Brun, Caroline C; Leporé, Natasha; Luders, Eileen; Chou, Yi-Yu; Madsen, Sarah K; Toga, Arthur W; Thompson, Paul M

    2009-07-01

    We applied a new method to visualize the three-dimensional profile of sex differences in brain structure based on MRI scans of 100 young adults. We compared 50 men with 50 women, matched for age and other relevant demographics. As predicted, left hemisphere auditory and language-related regions were proportionally expanded in women versus men, suggesting a possible structural basis for the widely replicated sex differences in language processing. In men, primary visual, and visuo-spatial association areas of the parietal lobes were proportionally expanded, in line with prior reports of relative strengths in visuo-spatial processing in men. We relate these three-dimensional patterns to prior functional and structural studies, and to theoretical predictions based on nonlinear scaling of brain morphometry.

  12. Atypical Bilateral Brain Synchronization in the Early Stage of Human Voice Auditory Processing in Young Children with Autism

    PubMed Central

    Kurita, Toshiharu; Kikuchi, Mitsuru; Yoshimura, Yuko; Hiraishi, Hirotoshi; Hasegawa, Chiaki; Takahashi, Tetsuya; Hirosawa, Tetsu; Furutani, Naoki; Higashida, Haruhiro; Ikeda, Takashi; Mutou, Kouhei; Asada, Minoru; Minabe, Yoshio

    2016-01-01

    Autism spectrum disorder (ASD) has been postulated to involve impaired neuronal cooperation in large-scale neural networks, including cortico-cortical interhemispheric circuitry. In the context of ASD, alterations in both peripheral and central auditory processes have also attracted a great deal of interest because these changes appear to represent pathophysiological processes; therefore, many prior studies have focused on atypical auditory responses in ASD. The auditory evoked field (AEF), recorded by magnetoencephalography, and the synchronization of these processes between right and left hemispheres was recently suggested to reflect various cognitive abilities in children. However, to date, no previous study has focused on AEF synchronization in ASD subjects. To assess global coordination across spatially distributed brain regions, the analysis of Omega complexity from multichannel neurophysiological data was proposed. Using Omega complexity analysis, we investigated the global coordination of AEFs in 3–8-year-old typically developing (TD) children (n = 50) and children with ASD (n = 50) in 50-ms time-windows. Children with ASD displayed significantly higher Omega complexities compared with TD children in the time-window of 0–50 ms, suggesting lower whole brain synchronization in the early stage of the P1m component. When we analyzed the left and right hemispheres separately, no significant differences in any time-windows were observed. These results suggest lower right-left hemispheric synchronization in children with ASD compared with TD children. Our study provides new evidence of aberrant neural synchronization in young children with ASD by investigating auditory evoked neural responses to the human voice. PMID:27074011

  13. Atypical Bilateral Brain Synchronization in the Early Stage of Human Voice Auditory Processing in Young Children with Autism.

    PubMed

    Kurita, Toshiharu; Kikuchi, Mitsuru; Yoshimura, Yuko; Hiraishi, Hirotoshi; Hasegawa, Chiaki; Takahashi, Tetsuya; Hirosawa, Tetsu; Furutani, Naoki; Higashida, Haruhiro; Ikeda, Takashi; Mutou, Kouhei; Asada, Minoru; Minabe, Yoshio

    2016-01-01

    Autism spectrum disorder (ASD) has been postulated to involve impaired neuronal cooperation in large-scale neural networks, including cortico-cortical interhemispheric circuitry. In the context of ASD, alterations in both peripheral and central auditory processes have also attracted a great deal of interest because these changes appear to represent pathophysiological processes; therefore, many prior studies have focused on atypical auditory responses in ASD. The auditory evoked field (AEF), recorded by magnetoencephalography, and the synchronization of these processes between right and left hemispheres was recently suggested to reflect various cognitive abilities in children. However, to date, no previous study has focused on AEF synchronization in ASD subjects. To assess global coordination across spatially distributed brain regions, the analysis of Omega complexity from multichannel neurophysiological data was proposed. Using Omega complexity analysis, we investigated the global coordination of AEFs in 3-8-year-old typically developing (TD) children (n = 50) and children with ASD (n = 50) in 50-ms time-windows. Children with ASD displayed significantly higher Omega complexities compared with TD children in the time-window of 0-50 ms, suggesting lower whole brain synchronization in the early stage of the P1m component. When we analyzed the left and right hemispheres separately, no significant differences in any time-windows were observed. These results suggest lower right-left hemispheric synchronization in children with ASD compared with TD children. Our study provides new evidence of aberrant neural synchronization in young children with ASD by investigating auditory evoked neural responses to the human voice.

  14. Concurrent brain responses to separate auditory and visual targets.

    PubMed

    Finoia, Paola; Mitchell, Daniel J; Hauk, Olaf; Beste, Christian; Pizzella, Vittorio; Duncan, John

    2015-08-01

    In the attentional blink, a target event (T1) strongly interferes with perception of a second target (T2) presented within a few hundred milliseconds. Concurrently, the brain's electromagnetic response to the second target is suppressed, especially a late negative-positive EEG complex including the traditional P3 wave. An influential theory proposes that conscious perception requires access to a distributed, frontoparietal global workspace, explaining the attentional blink by strong mutual inhibition between concurrent workspace representations. Often, however, the attentional blink is reduced or eliminated for targets in different sensory modalities, suggesting a limit to such global inhibition. Using functional magnetic resonance imaging, we confirm that visual and auditory targets produce similar, distributed patterns of frontoparietal activity. In an attentional blink EEG/MEG design, however, an auditory T1 and visual T2 are identified without mutual interference, with largely preserved electromagnetic responses to T2. The results suggest parallel brain responses to target events in different sensory modalities. Copyright © 2015 the American Physiological Society.

  15. Scale-free brain quartet: artistic filtering of multi-channel brainwave music.

    PubMed

    Wu, Dan; Li, Chaoyi; Yao, Dezhong

    2013-01-01

    To listen to brain activity as a piece of music, we previously proposed the scale-free brainwave music (SFBM) technology, which translates scalp EEGs into music notes according to the power law shared by EEG and music. In the present study, the methodology was extended to derive a quartet from multi-channel EEGs with artistic beat and tonality filtering. EEG data from multiple electrodes were first translated individually into MIDI sequences by SFBM. These sequences were then processed by a beat filter, which adjusted note durations according to the characteristic frequency, and further filtered from atonal to tonal according to a key defined by analysis of the original music pieces. Resting EEGs recorded with eyes closed and eyes open from 40 subjects were used for music generation. The results revealed that the scale-free exponents of the music before and after filtering differed: the filtered music showed greater variation between the eyes-closed (EC) and eyes-open (EO) conditions, and its pitch scale exponents were closer to 1, and thus closer to those of classical music. Furthermore, the tempo of the filtered music with eyes closed was significantly slower than that with eyes open. With the original material obtained from multi-channel EEGs, and a little creative filtering following the composition process of a potential artist, the resulting brainwave quartet opened a new window onto the brain in an audible, musical way. Because the artistic beat and tonal filters were derived from the brainwaves themselves, the filtered music maintained the essential properties of the brain activity in a more musical style. It may therefore distinguish different brain states harmonically, providing a method to analyze EEGs from a relaxed, audio perspective.
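
    To make the EEG-to-music mapping more tangible, here is a toy sketch in the spirit of SFBM: pitch comes from a logarithmic (power-law-friendly) transform of segment amplitude and duration from the local waveform period. The segmentation rule, pitch mapping and data are illustrative assumptions, not the authors' exact algorithm.

    ```python
    # Toy sketch in the spirit of SFBM: each zero-crossing segment of the EEG
    # becomes one note, with pitch from a logarithmic amplitude mapping and
    # duration from the segment length. All mapping rules and data are
    # illustrative assumptions, not the authors' exact algorithm.
    import numpy as np

    fs = 250.0
    eeg = np.random.randn(int(10 * fs))              # 10 s of one channel (placeholder)
    ref_amp = np.median(np.abs(eeg)) + 1e-12         # reference amplitude for pitch mapping

    crossings = np.where(np.diff(np.signbit(eeg)))[0]
    notes = []
    for start, stop in zip(crossings[:-1], crossings[1:]):
        seg = eeg[start:stop + 1]
        amp = np.abs(seg).max() + 1e-12
        pitch = int(np.clip(60 + 12 * np.log2(amp / ref_amp), 21, 108))   # MIDI note number
        duration_s = (stop - start) / fs             # later adjusted by a "beat filter"
        notes.append((pitch, duration_s))

    print(notes[:5])
    ```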

  16. Scale-Free Brain Quartet: Artistic Filtering of Multi-Channel Brainwave Music

    PubMed Central

    Wu, Dan; Li, Chaoyi; Yao, Dezhong

    2013-01-01

    To listen to brain activity as a piece of music, we previously proposed the scale-free brainwave music (SFBM) technology, which translates scalp EEGs into music notes according to the power law shared by EEG and music. In the present study, the methodology was extended to derive a quartet from multi-channel EEGs with artistic beat and tonality filtering. EEG data from multiple electrodes were first translated individually into MIDI sequences by SFBM. These sequences were then processed by a beat filter, which adjusted note durations according to the characteristic frequency, and further filtered from atonal to tonal according to a key defined by analysis of the original music pieces. Resting EEGs recorded with eyes closed and eyes open from 40 subjects were used for music generation. The results revealed that the scale-free exponents of the music before and after filtering differed: the filtered music showed greater variation between the eyes-closed (EC) and eyes-open (EO) conditions, and its pitch scale exponents were closer to 1, and thus closer to those of classical music. Furthermore, the tempo of the filtered music with eyes closed was significantly slower than that with eyes open. With the original material obtained from multi-channel EEGs, and a little creative filtering following the composition process of a potential artist, the resulting brainwave quartet opened a new window onto the brain in an audible, musical way. Because the artistic beat and tonal filters were derived from the brainwaves themselves, the filtered music maintained the essential properties of the brain activity in a more musical style. It may therefore distinguish different brain states harmonically, providing a method to analyze EEGs from a relaxed, audio perspective. PMID:23717527

  17. Development of auditory-specific brain rhythm in infants.

    PubMed

    Fujioka, Takako; Mourad, Nasser; Trainor, Laurel J

    2011-02-01

    Human infants rapidly develop their auditory perceptual abilities and acquire culture-specific knowledge in speech and music in the second 6 months of life. In the adult brain, neural rhythm around 10 Hz in the temporal lobes is thought to reflect sound analysis and subsequent cognitive processes such as memory and attention. To study when and how such rhythm emerges in infancy, we examined electroencephalogram (EEG) recordings in infants 4 and 12 months of age during sound stimulation and silence. In the 4-month-olds, the amplitudes of a narrowly tuned 4-Hz brain rhythm, recorded from bilateral temporal electrodes, were modulated by sound stimuli. In the 12-month-olds, the sound-induced modulation occurred at a faster 6-Hz rhythm at temporofrontal locations. The brain rhythms in the older infants consisted of more complex components, as was evident even in individual data. These findings suggest that auditory-specific rhythmic neural activity, which is already established before 6 months of age, involves more speed-efficient long-range neural networks by the age of 12 months, when long-term memory for native phoneme representation and for musical rhythmic features is formed. We suggest that maturation of distinct rhythmic components occurs in parallel, and that sensory-specific functions bound to particular thalamo-cortical networks are transferred to newly developed higher-order networks step by step until adult hierarchical neural oscillatory mechanisms are achieved across the whole brain. © 2011 The Authors. European Journal of Neuroscience © 2011 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  18. Cross contrast multi-channel image registration using image synthesis for MR brain images.

    PubMed

    Chen, Min; Carass, Aaron; Jog, Amod; Lee, Junghoon; Roy, Snehashis; Prince, Jerry L

    2017-02-01

    Multi-modal deformable registration is important for many medical image analysis tasks such as atlas alignment, image fusion, and distortion correction. Whereas a conventional method would register images with different modalities using modality independent features or information theoretic metrics such as mutual information, this paper presents a new framework that addresses the problem using a two-channel registration algorithm capable of using mono-modal similarity measures such as sum of squared differences or cross-correlation. To make it possible to use these same-modality measures, image synthesis is used to create proxy images for the opposite modality as well as intensity-normalized images from each of the two available images. The new deformable registration framework was evaluated by performing intra-subject deformation recovery, intra-subject boundary alignment, and inter-subject label transfer experiments using multi-contrast magnetic resonance brain imaging data. Three different multi-channel registration algorithms were evaluated, revealing that the framework is robust to the multi-channel deformable registration algorithm that is used. With a single exception, all results demonstrated improvements when compared against single channel registrations using the same algorithm with mutual information. Copyright © 2016 Elsevier B.V. All rights reserved.
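
    The mono-modal similarity measures named above are simple to state. The sketch below evaluates sum of squared differences and normalized cross-correlation between a target volume and a stand-in for a synthesized proxy image; the arrays are random placeholders, not data from this study.

    ```python
    # Hedged sketch of the mono-modal similarity measures named above: sum of
    # squared differences (SSD) and normalized cross-correlation (NCC) between a
    # target volume and a stand-in for its synthesized proxy. Arrays are placeholders.
    import numpy as np

    target = np.random.rand(64, 64, 64)
    proxy = target + 0.05 * np.random.randn(64, 64, 64)   # pretend synthesized image

    ssd = np.mean((target - proxy) ** 2)

    t = (target - target.mean()) / target.std()
    p = (proxy - proxy.mean()) / proxy.std()
    ncc = np.mean(t * p)    # 1.0 = identical up to an affine intensity change

    print(f"SSD = {ssd:.5f}, NCC = {ncc:.4f}")
    ```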

  19. Behavioral and electrophysiological auditory processing measures in traumatic brain injury after acoustically controlled auditory training: a long-term study

    PubMed Central

    Figueiredo, Carolina Calsolari; de Andrade, Adriana Neves; Marangoni-Castan, Andréa Tortosa; Gil, Daniela; Suriano, Italo Capraro

    2015-01-01

    ABSTRACT Objective To investigate the long-term efficacy of acoustically controlled auditory training in adults after traumatic brain injury. Methods A total of six audiologically normal individuals aged between 20 and 37 years were studied. They had suffered severe traumatic brain injury with diffuse axonal lesion and had undergone an acoustically controlled auditory training program approximately one year earlier. The results obtained in the behavioral and electrophysiological evaluation of auditory processing immediately after acoustically controlled auditory training were compared with reassessment findings one year later. Results Quantitative analysis of the auditory brainstem response showed increased absolute latencies of all waves and interpeak intervals, bilaterally, when comparing the two evaluations. The amplitudes of all waves also increased; the increase was statistically significant for wave V in the right ear and wave III in the left ear. As to P3, decreased latency and increased amplitude were found for both ears at reassessment. The previous and current behavioral assessments showed similar results, except for the staggered spondaic words test in the left ear and the number of errors on the dichotic consonant-vowel test. Conclusion The acoustically controlled auditory training was effective in the long run, since better latency and amplitude results were observed in the electrophysiological evaluation, in addition to stability of behavioral measures one year after training. PMID:26676270

  20. Comparison of temporal properties of auditory single units in response to cochlear infrared laser stimulation recorded with multi-channel and single tungsten electrodes

    NASA Astrophysics Data System (ADS)

    Tan, Xiaodong; Xia, Nan; Young, Hunter; Richter, Claus-Peter

    2015-02-01

    Auditory prostheses may benefit from Infrared Neural Stimulation (INS) because optical stimulation allows for spatially selective activation of neuron populations. Selective activation of neurons in the cochlear spiral ganglion can be determined in the central nucleus of the inferior colliculus (ICC) because the tonotopic organization of frequencies in the cochlea is maintained throughout the auditory pathway. The activation profile of INS is well represented in the ICC by multichannel electrodes (MCEs). To characterize single unit properties in response to INS, however, single tungsten electrodes (STEs) should be used because of their better signal-to-noise ratio. In this study, we compared the temporal properties of ICC single units recorded with MCEs and STEs in order to characterize the response properties of single auditory neurons in response to INS in guinea pigs. The length along the cochlea stimulated with infrared radiation corresponded to a frequency range of about 0.6 octaves, similar to that recorded with STEs. The temporal properties of single units recorded with MCEs showed higher maximum rates, shorter latencies, and higher firing efficiencies compared to those recorded with STEs. When the preset amplitude threshold for triggering MCE recordings was raised to twice the noise level, the temporal properties of the single units became similar to those obtained with STEs. Unresolved neural activity from multiple sources in MCE recordings could be responsible for the response property differences between MCEs and STEs. Thus, caution should be taken in single unit recordings with MCEs.

  1. Multichannel neural recording with a 128 Mbps UWB wireless transmitter for implantable brain-machine interfaces.

    PubMed

    Ando, H; Takizawa, K; Yoshida, T; Matsushita, K; Hirata, M; Suzuki, T

    2015-01-01

    To realize a minimally invasive, high-accuracy BMI (brain-machine interface) system, we have already developed a fully implantable wireless BMI system consisting of ECoG neural electrode arrays, neural recording ASICs, a Wi-Fi-based wireless data transmitter and a wireless power receiver with a rechargeable battery. For accurate estimation of movement intentions, it is important for a BMI system to have a large number of recording channels. In this paper, we report a new multi-channel BMI system able to record up to 4096 channels of ECoG data by connecting multiple 64-channel ASICs and time-division multiplexing the recorded data. The system has an ultra-wideband (UWB) wireless unit for transmitting the recorded neural signals out of the body. In preliminary experiments with a human-body-equivalent liquid phantom, we confirmed 4096-channel UWB wireless data transmission in the 128 Mbps mode at distances below 20 mm.
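
    A quick, hedged back-of-the-envelope check of the aggregate data rate such a system must sustain; the per-channel sampling rate and ADC resolution below are assumptions chosen only to illustrate how the 128 Mbps link budget might be filled.

    ```python
    # Back-of-the-envelope data-rate check. The per-channel sampling rate and
    # ADC resolution are assumptions for illustration only; the 128 Mbps figure
    # is the link rate reported above.
    channels = 4096
    sample_rate_hz = 2000        # assumed per-channel sampling rate
    bits_per_sample = 12         # assumed ADC resolution

    payload_mbps = channels * sample_rate_hz * bits_per_sample / 1e6
    print(f"raw payload: {payload_mbps:.1f} Mbps vs. a 128 Mbps link")
    ```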

  2. A Wearable Multi-Channel fNIRS System for Brain Imaging in Freely Moving Subjects

    PubMed Central

    Piper, Sophie K.; Krueger, Arne; Koch, Stefan P.; Mehnert, Jan; Habermehl, Christina; Steinbrink, Jens; Obrig, Hellmuth; Schmitz, Christoph H.

    2013-01-01

    Functional near infrared spectroscopy (fNIRS) is a versatile neuroimaging tool with an increasing acceptance in the neuroimaging community. While often lauded for its portability, most of the fNIRS setups employed in neuroscientific research still impose usage in a laboratory environment. We present a wearable, multi-channel fNIRS imaging system for functional brain imaging in unrestrained settings. The system operates without optical fiber bundles, using eight dual wavelength light emitting diodes and eight electro-optical sensors, which can be placed freely on the subject's head for direct illumination and detection. Its performance is tested on N = 8 subjects in a motor execution paradigm performed under three different exercising conditions: (i) during outdoor bicycle riding, (ii) while pedaling on a stationary training bicycle, and (iii) sitting still on the training bicycle. Following left hand gripping, we observe a significant decrease in the deoxyhemoglobin concentration over the contralateral motor cortex in all three conditions. A significant task-related ΔHbO2 increase was seen for the non-pedaling condition. Although the gross movements involved in pedaling and steering a bike induced more motion artifacts than carrying out the same task while sitting still, we found no significant differences in the shape or amplitude of the HbR time courses for outdoor or indoor cycling and sitting still. We demonstrate the general feasibility of using wearable multi-channel NIRS during strenuous exercise in natural, unrestrained settings and discuss the origins and effects of data artifacts. We provide quantitative guidelines for taking condition-dependent signal quality into account to allow the comparison of data across various levels of physical exercise. To the best of our knowledge, this is the first demonstration of functional NIRS brain imaging during an outdoor activity in a real life situation in humans. PMID:23810973

  3. A wearable multi-channel fNIRS system for brain imaging in freely moving subjects.

    PubMed

    Piper, Sophie K; Krueger, Arne; Koch, Stefan P; Mehnert, Jan; Habermehl, Christina; Steinbrink, Jens; Obrig, Hellmuth; Schmitz, Christoph H

    2014-01-15

    Functional near infrared spectroscopy (fNIRS) is a versatile neuroimaging tool with an increasing acceptance in the neuroimaging community. While often lauded for its portability, most of the fNIRS setups employed in neuroscientific research still impose usage in a laboratory environment. We present a wearable, multi-channel fNIRS imaging system for functional brain imaging in unrestrained settings. The system operates without optical fiber bundles, using eight dual wavelength light emitting diodes and eight electro-optical sensors, which can be placed freely on the subject's head for direct illumination and detection. Its performance is tested on N=8 subjects in a motor execution paradigm performed under three different exercising conditions: (i) during outdoor bicycle riding, (ii) while pedaling on a stationary training bicycle, and (iii) sitting still on the training bicycle. Following left hand gripping, we observe a significant decrease in the deoxyhemoglobin concentration over the contralateral motor cortex in all three conditions. A significant task-related ΔHbO2 increase was seen for the non-pedaling condition. Although the gross movements involved in pedaling and steering a bike induced more motion artifacts than carrying out the same task while sitting still, we found no significant differences in the shape or amplitude of the HbR time courses for outdoor or indoor cycling and sitting still. We demonstrate the general feasibility of using wearable multi-channel NIRS during strenuous exercise in natural, unrestrained settings and discuss the origins and effects of data artifacts. We provide quantitative guidelines for taking condition-dependent signal quality into account to allow the comparison of data across various levels of physical exercise. To the best of our knowledge, this is the first demonstration of functional NIRS brain imaging during an outdoor activity in a real life situation in humans.
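
    The conversion from dual-wavelength optical-density changes to HbO2/HbR changes in fNIRS is usually done with the modified Beer-Lambert law. The sketch below shows that step in isolation; the extinction coefficients, source-detector separation and differential path-length factor are placeholders, not values from this system.

    ```python
    # Hedged sketch of the modified Beer-Lambert step that converts dual-wavelength
    # optical-density changes into HbO2/HbR concentration changes. Extinction
    # coefficients, separation and DPF are placeholders; real analyses use
    # tabulated values for the wavelengths actually installed.
    import numpy as np

    delta_od = np.array([0.012, 0.008])      # ΔOD at the two wavelengths (assumed)

    ext = np.array([[1.5, 3.8],              # [eps_HbO2, eps_HbR] at wavelength 1 (placeholder)
                    [2.5, 1.8]])             # [eps_HbO2, eps_HbR] at wavelength 2 (placeholder)
    separation_cm = 3.0                      # source-detector distance (assumed)
    dpf = 6.0                                # differential path-length factor (assumed)

    delta_conc = np.linalg.solve(ext * separation_cm * dpf, delta_od)
    print("ΔHbO2, ΔHbR:", delta_conc)
    ```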

  4. Bigger Brains or Bigger Nuclei? Regulating the Size of Auditory Structures in Birds

    PubMed Central

    Kubke, M. Fabiana; Massoglia, Dino P.; Carr, Catherine E.

    2012-01-01

    Increases in the size of the neuronal structures that mediate specific behaviors are believed to be related to enhanced computational performance. It is not clear, however, what developmental and evolutionary mechanisms mediate these changes, nor whether an increase in the size of a given neuronal population is a general mechanism to achieve enhanced computational ability. We addressed the issue of size by analyzing the variation in the relative number of cells of auditory structures in auditory specialists and generalists. We show that bird species with different auditory specializations exhibit variation in the relative size of their hindbrain auditory nuclei. In the barn owl, an auditory specialist, the hindbrain auditory nuclei involved in the computation of sound location show hyperplasia. This hyperplasia was also found in songbirds, but not in non-auditory specialists. The hyperplasia of auditory nuclei was also not seen in birds with large body weight, suggesting that the total number of cells is selected for in auditory specialists. In barn owls, differences observed in the relative size of the auditory nuclei might be attributed to modifications in neurogenesis and cell death. Thus, hyperplasia of circuits used for auditory computation accompanies auditory specialization in different orders of birds. PMID:14726625

  5. Quantitative map of multiple auditory cortical regions with a stereotaxic fine-scale atlas of the mouse brain.

    PubMed

    Tsukano, Hiroaki; Horie, Masao; Hishida, Ryuichi; Takahashi, Kuniyuki; Takebayashi, Hirohide; Shibuki, Katsuei

    2016-02-29

    Optical imaging studies have recently revealed the presence of multiple auditory cortical regions in the mouse brain. We have previously demonstrated, using flavoprotein fluorescence imaging, at least six regions in the mouse auditory cortex, including the anterior auditory field (AAF), primary auditory cortex (AI), the secondary auditory field (AII), dorsoanterior field (DA), dorsomedial field (DM), and dorsoposterior field (DP). While multiple regions in the visual cortex and somatosensory cortex have been annotated and consolidated in recent brain atlases, the multiple auditory cortical regions have not yet been presented from a coronal view. In the current study, we obtained regional coordinates of the six auditory cortical regions of the C57BL/6 mouse brain and illustrated these regions on template coronal brain slices. These results should reinforce the existing mouse brain atlases and support future studies in the auditory cortex.

  6. Quantitative map of multiple auditory cortical regions with a stereotaxic fine-scale atlas of the mouse brain

    PubMed Central

    Tsukano, Hiroaki; Horie, Masao; Hishida, Ryuichi; Takahashi, Kuniyuki; Takebayashi, Hirohide; Shibuki, Katsuei

    2016-01-01

    Optical imaging studies have recently revealed the presence of multiple auditory cortical regions in the mouse brain. We have previously demonstrated, using flavoprotein fluorescence imaging, at least six regions in the mouse auditory cortex, including the anterior auditory field (AAF), primary auditory cortex (AI), the secondary auditory field (AII), dorsoanterior field (DA), dorsomedial field (DM), and dorsoposterior field (DP). While multiple regions in the visual cortex and somatosensory cortex have been annotated and consolidated in recent brain atlases, the multiple auditory cortical regions have not yet been presented from a coronal view. In the current study, we obtained regional coordinates of the six auditory cortical regions of the C57BL/6 mouse brain and illustrated these regions on template coronal brain slices. These results should reinforce the existing mouse brain atlases and support future studies in the auditory cortex. PMID:26924462

  7. Conductive Hearing Loss during Infancy: Effects on Later Auditory Brain Stem Electrophysiology.

    ERIC Educational Resources Information Center

    Gunnarson, Adele D.; Finitzo, Terese

    1991-01-01

    Long-term effects on auditory electrophysiology from early fluctuating hearing loss were studied in 27 children, aged 5 to 7 years, who had been evaluated originally in infancy. Findings suggested that early fluctuating hearing loss disrupts later auditory brain stem electrophysiology. (Author/DB)

  8. Tinnitus alters resting state functional connectivity (RSFC) in human auditory and non-auditory brain regions as measured by functional near-infrared spectroscopy (fNIRS).

    PubMed

    San Juan, Juan; Hu, Xiao-Su; Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory

    2017-01-01

    Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus have to date not been well translated to the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing individuals with bilateral subjective tinnitus and in controls, before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices), and functional brain connectivity was measured during a 60-second baseline period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000 Hz), broadband noise and silence. Functional connectivity was measured between all channel pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to conscious phantom sound perception.

  9. Selective attention in an overcrowded auditory scene: implications for auditory-based brain-computer interface design.

    PubMed

    Maddox, Ross K; Cheung, Willy; Lee, Adrian K C

    2012-11-01

    Listeners are good at attending to one auditory stream in a crowded environment. However, is there an upper limit of streams present in an auditory scene at which this selective attention breaks down? Here, participants were asked to attend one stream of spoken letters amidst other letter streams. In half of the trials, an initial primer was played, cueing subjects to the sound configuration. Results indicate that performance increases with token repetitions. Priming provided a performance benefit, suggesting that stream selection, not formation, is the bottleneck associated with attention in an overcrowded scene. Results' implications for brain-computer interfaces are discussed.

  10. A multi-channel magnetic induction tomography measurement system for human brain model imaging.

    PubMed

    Xu, Zheng; Luo, Haijun; He, Wei; He, Chuanhong; Song, Xiaodong; Zhang, Zhanglong

    2009-06-01

    This paper proposes a multi-channel magnetic induction tomography measurement system for biological conductivity imaging in a human brain model. A hemispherical glass bowl filled with a salt solution is used as the human brain model; agar blocks of different conductivity are placed in the solution to simulate intracerebral hemorrhage. The excitation and detection coils are fixed coaxially, and an axial gradiometer is used as the detection coil to cancel the primary field. On the outer surface of the glass bowl, 15 sensor units are arrayed in two circles as the measurement array, and a single sensor unit for cancelling phase drift is placed beside the glass bowl. The phase sensitivity of our system is 0.204° per S/m at an excitation frequency of 120 kHz, and the phase noise lies in the range of -0.03° to +0.05°. Only the coaxial detection coil is available for each excitation coil; therefore, 15 phase measurements are collected in each measurement cycle. Finally, two-dimensional images of the conductivity distribution are obtained using an interpolation algorithm. A frequency-varying experiment indicates that imaging quality improves as the excitation frequency is increased.
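
    A worked example of what the quoted phase sensitivity implies, treating the conversion from measured phase shift to apparent conductivity as a simple linear scaling; the measured phase value is an assumption for illustration.

    ```python
    # Worked example using the phase sensitivity quoted above (0.204 deg per S/m):
    # a measured phase shift maps linearly to an apparent conductivity change.
    # The measured phase value is an assumption for illustration.
    phase_sensitivity_deg = 0.204            # degrees of phase per S/m
    measured_phase_deg = 0.35                # assumed reading from one gradiometer

    apparent_conductivity = measured_phase_deg / phase_sensitivity_deg
    print(f"apparent conductivity change: {apparent_conductivity:.2f} S/m")

    # The quoted phase noise of roughly +/-0.05 deg corresponds to about
    # 0.05 / 0.204 ~= 0.25 S/m of conductivity uncertainty per reading.
    ```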

  11. Non-local Atlas-guided Multi-channel Forest Learning for Human Brain Labeling

    PubMed Central

    Ma, Guangkai; Gao, Yaozong; Wu, Guorong; Wu, Ligang; Shen, Dinggang

    2015-01-01

    Labeling MR brain images into anatomically meaningful regions is important in many quantitative brain researches. In many existing label fusion methods, appearance information is widely used. Meanwhile, recent progress in computer vision suggests that the context feature is very useful in identifying an object from a complex scene. In light of this, we propose a novel learning-based label fusion method by using both low-level appearance features (computed from the target image) and high-level context features (computed from warped atlases or tentative labeling maps of the target image). In particular, we employ a multi-channel random forest to learn the nonlinear relationship between these hybrid features and the target labels (i.e., corresponding to certain anatomical structures). Moreover, to accommodate the high inter-subject variations, we further extend our learning-based label fusion to a multi-atlas scenario, i.e., we train a random forest for each atlas and then obtain the final labeling result according to the consensus of all atlases. We have comprehensively evaluated our method on both LONI-LBPA40 and IXI datasets, and achieved the highest labeling accuracy, compared to the state-of-the-art methods in the literature. PMID:26942235
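
    The core fusion step described above can be sketched with an off-the-shelf random forest: appearance and context features are concatenated per voxel and mapped to anatomical labels. Feature extraction is replaced by random placeholders; only the multi-channel-features-to-forest idea is shown, not the authors' implementation.

    ```python
    # Hedged sketch of the fusion step: a random forest trained on concatenated
    # appearance and context features to predict per-voxel anatomical labels.
    # Feature extraction is replaced by random placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    n_voxels = 5000
    appearance = np.random.randn(n_voxels, 10)       # e.g. local intensity patch features
    context = np.random.randn(n_voxels, 6)           # e.g. warped-atlas label context features
    labels = np.random.randint(0, 4, size=n_voxels)  # 4 hypothetical structures

    X = np.hstack([appearance, context])
    forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

    # In the multi-atlas extension, one forest per atlas would vote and the
    # consensus label would be kept.
    print("training accuracy:", forest.score(X, labels))
    ```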

  12. Multi-channel linear descriptors for event-related EEG collected in brain computer interface

    NASA Astrophysics Data System (ADS)

    Pei, Xiao-mei; Zheng, Chong-xun; Xu, Jin; Bin, Guang-yu; Wang, Hong-wu

    2006-03-01

    Using three multi-channel linear descriptors, i.e. spatial complexity (Ω), field power (Σ) and frequency of field changes (Φ), event-related EEG data in the 8-30 Hz band were investigated during imagination of left- or right-hand movement. Analyses of the event-related EEG data indicate that a two-channel version of Ω, Σ and Φ can reflect the antagonistic ERD/ERS patterns over contralateral and ipsilateral areas and also characterize different phases of the changing brain states in the event-related paradigm. Based on the selected two-channel linear descriptors, the left- and right-hand motor imagery tasks were classified with satisfactory results, which testifies to the validity of the three linear descriptors Ω, Σ and Φ for characterizing event-related EEG. The preliminary results show that Ω and Σ, together with Φ, provide good separability for left- and right-hand motor imagery tasks and could be considered for classifying two classes of EEG patterns in brain-computer interface applications.
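
    The three descriptors have compact definitions (after Wackermann): Σ is the RMS field strength, Φ a generalized frequency, and Ω the effective number of uncorrelated spatial components. The sketch below computes them on a short two-channel window; the random data stand in for band-passed (8-30 Hz) EEG and the sampling rate is an assumption.

    ```python
    # Hedged sketch of the three linear descriptors (after Wackermann) on a short
    # two-channel window. Random data stand in for 8-30 Hz band-passed EEG.
    import numpy as np

    fs = 250.0
    x = np.random.randn(2, int(fs))                  # 2 channels x 1 s window (placeholder)
    u = x - x.mean(axis=1, keepdims=True)            # remove each channel's mean

    # Sigma: RMS field strength over channels and time.
    sigma = np.sqrt(np.mean(u ** 2))

    # Phi: generalized frequency, from the ratio of derivative power to signal power.
    du = np.diff(u, axis=1) * fs
    phi = np.sqrt(np.sum(du ** 2) / np.sum(u[:, :-1] ** 2)) / (2 * np.pi)

    # Omega: effective number of uncorrelated spatial components (between 1 and 2 here).
    lam = np.linalg.eigvalsh(np.cov(u))
    lam = lam / lam.sum()
    omega = np.exp(-np.sum(lam * np.log(lam + 1e-15)))

    print(f"Sigma={sigma:.3f}  Phi={phi:.1f} Hz  Omega={omega:.2f}")
    ```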

  13. Network Analysis of Functional Brain Connectivity Driven by Gamma-Band Auditory Steady-State Response in Auditory Hallucinations.

    PubMed

    Ying, Jun; Zhou, Dan; Lin, Ke; Gao, Xiaorong

    The auditory steady-state response (ASSR) may reflect activity from different regions of the brain. In particular, it has been reported that the gamma-band ASSR plays an important role in working memory, speech understanding, and recognition. Traditionally, the ASSR has been characterized by power spectral density analysis, which cannot capture its overall distributed properties. Functional network analysis has recently been applied in electroencephalography studies. Previous studies on resting or working states found a small-world organization of the brain network, and some researchers have studied dysfunctional networks caused by disease. The present study investigates the brain connection networks of schizophrenia patients with auditory hallucinations during an ASSR task. A directed transfer function is utilized to estimate the brain connectivity patterns, and the structures of the brain networks are analyzed by converting the connectivity matrices into graphs. It is found that, for normal subjects, network connections are mainly distributed over the central and frontal-temporal regions, indicating that the central regions act as information transmission hubs under ASSR stimulation. For patients, the network connections appear disordered. The finding that the path length was larger in patients than in normal subjects under most thresholds provides insight into the structure of the connectivity patterns. The results suggest that, in patients with auditory hallucinations, synchronous oscillations span longer distances across the cortex but form a less efficient network.
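
    The graph-analysis step mentioned above (thresholding a connectivity matrix and computing characteristic path length) can be sketched as follows; the DTF estimation itself is omitted, and the matrix, channel count and threshold are illustrative assumptions.

    ```python
    # Hedged sketch of the graph step: threshold a connectivity matrix (random
    # stand-in for DTF estimates) and compute the characteristic path length.
    # Channel count and threshold are illustrative.
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(0)
    conn = rng.random((32, 32))                 # pretend DTF values between 32 channels
    np.fill_diagonal(conn, 0.0)

    adj = (conn + conn.T) / 2 > 0.7             # symmetrize and binarize
    g = nx.from_numpy_array(adj.astype(int))

    if nx.is_connected(g):
        print("characteristic path length:", nx.average_shortest_path_length(g))
    else:
        giant = g.subgraph(max(nx.connected_components(g), key=len))
        print("path length (largest component):", nx.average_shortest_path_length(giant))
    ```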

  14. Characteristics of auditory agnosia in a child with severe traumatic brain injury: a case report.

    PubMed

    Hattiangadi, Nina; Pillion, Joseph P; Slomine, Beth; Christensen, James; Trovato, Melissa K; Speedie, Lynn J

    2005-01-01

    We present a case that is unusual in many respects from other documented incidences of auditory agnosia, including the mechanism of injury, age of the individual, and location of neurological insult. The clinical presentation is one of disturbance in the perception of spoken language, music, pitch, emotional prosody, and temporal auditory processing in the absence of significant deficits in the comprehension of written language, expressive language production, or peripheral auditory function. Furthermore, the patient demonstrates relatively preserved function in other aspects of audition such as sound localization, voice recognition, and perception of animal noises and environmental sounds. This case study demonstrates that auditory agnosia is possible following traumatic brain injury in a child, and illustrates the necessity of assessment with a wide variety of auditory stimuli to fully characterize auditory agnosia in a single individual.

  15. BabySQUID: A mobile, high-resolution multichannel magnetoencephalography system for neonatal brain assessment

    NASA Astrophysics Data System (ADS)

    Okada, Yoshio; Pratt, Kevin; Atwood, Christopher; Mascarenas, Anthony; Reineman, Richard; Nurminen, Jussi; Paulson, Douglas

    2006-02-01

    We developed a prototype of a mobile, high-resolution, multichannel magnetoencephalography (MEG) system, called babySQUID, for assessing brain functions in newborns and infants. Unlike electroencephalography, MEG signals are not distorted by the scalp or the fontanels and sutures in the skull. Thus, brain activity can be measured and localized with MEG as if the sensors were above an exposed brain. The babySQUID is housed in a moveable cart small enough to be transported from one room to another. To assess brain functions, one places the baby on the bed of the cart and the head on its headrest with the MEG sensors just below. The sensor array consists of 76 first-order axial gradiometers, each with a pickup coil diameter of 6 mm and a baseline of 30 mm, in a high-density array with a center-to-center spacing of 12-14 mm. The pickup coils are 6±1 mm below the outer surface of the headrest. This short gap provides unprecedented sensitivity since the scalp and skull are thin (as little as 3-4 mm altogether) in babies. In an electromagnetically unshielded room in a hospital, the field sensitivity at 1 kHz was ~17 fT/√Hz. The noise was reduced from ~400 to 200 fT/√Hz at 1 Hz using a reference cancellation technique, and further to ~40 fT/√Hz using a gradient common-mode rejection technique. Although the residual environmental magnetic noise interfered with the operation of the babySQUID, the instrument functioned sufficiently well to detect spontaneous brain signals from babies with a signal-to-noise ratio (SNR) of as much as 7.6:1. In a magnetically shielded room, the field sensitivity was 17 fT/√Hz at 20 Hz and 30 fT/√Hz at 1 Hz without reference or gradient cancellation. The sensitivity was sufficiently high to detect spontaneous brain activity from a 7-month-old baby with an SNR of as much as 40:1 and evoked somatosensory responses with a 50 Hz bandwidth after as few as four averages. We expect that both the noise and the sensor gap can be reduced further by

  16. The TLC: a novel auditory nucleus of the mammalian brain.

    PubMed

    Saldaña, Enrique; Viñuela, Antonio; Marshall, Allen F; Fitzpatrick, Douglas C; Aparicio, M-Auxiliadora

    2007-11-28

    We have identified a novel nucleus of the mammalian brain and termed it the tectal longitudinal column (TLC). Basic histologic stains, tract-tracing techniques and three-dimensional reconstructions reveal that the rat TLC is a narrow, elongated structure spanning the midbrain tectum longitudinally. This paired nucleus is located close to the midline, immediately dorsal to the periaqueductal gray matter. It occupies what has traditionally been considered the most medial region of the deep superior colliculus and the most medial region of the inferior colliculus. The TLC differs from the neighboring nuclei of the superior and inferior colliculi and the periaqueductal gray by its distinct connections and cytoarchitecture. Extracellular electrophysiological recordings show that TLC neurons respond to auditory stimuli with physiologic properties that differ from those of neurons in the inferior or superior colliculi. We have identified the TLC in rodents, lagomorphs, carnivores, nonhuman primates, and humans, which indicates that the nucleus is conserved across mammals. The discovery of the TLC reveals an unexpected level of longitudinal organization in the mammalian tectum and raises questions as to the participation of this mesencephalic region in essential, yet completely unexplored, aspects of multisensory and/or sensorimotor integration.

  17. An auditory brain-computer interface evoked by natural speech

    NASA Astrophysics Data System (ADS)

    Lopez-Gordo, M. A.; Fernandez, E.; Romero, S.; Pelayo, F.; Prieto, Alberto

    2012-06-01

    Brain-computer interfaces (BCIs) are mainly intended for people unable to perform any muscular movement, such as patients in a complete locked-in state. The majority of BCIs interact visually with the user, either in the form of stimulation or biofeedback. However, visual BCIs challenge their ultimate use because they require the subjects to gaze, explore and shift eye-gaze using their muscles, thus excluding patients in a complete locked-in state or under the condition of the unresponsive wakefulness syndrome. In this study, we present a novel fully auditory EEG-BCI based on a dichotic listening paradigm using human voice for stimulation. This interface has been evaluated with healthy volunteers, achieving an average information transmission rate of 1.5 bits min-1 in full-length trials and 2.7 bits min-1 using the optimal length of trials, recorded with only one channel and without formal training. This novel technique opens the door to a more natural communication with users unable to use visual BCIs, with promising results in terms of performance, usability, training and cognitive effort.
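
    Bits-per-minute figures like those quoted above are commonly computed with the Wolpaw information-transfer-rate formula. The sketch below implements that formula; the accuracy and trial duration in the example call are assumptions, not this study's numbers.

    ```python
    # Hedged sketch of the Wolpaw information-transfer-rate formula commonly used
    # to report BCI throughput. The accuracy and trial length in the example are
    # assumptions, not this study's numbers.
    import math

    def itr_bits_per_min(n_classes: int, accuracy: float, trial_seconds: float) -> float:
        """Bits per selection (Wolpaw) scaled to selections per minute."""
        bits = math.log2(n_classes)
        if 0.0 < accuracy < 1.0:
            bits += accuracy * math.log2(accuracy)
            bits += (1.0 - accuracy) * math.log2((1.0 - accuracy) / (n_classes - 1))
        return bits * (60.0 / trial_seconds)

    # Example: a binary dichotic-listening decision, 85% accuracy, 10 s trials.
    print(f"{itr_bits_per_min(2, 0.85, 10.0):.2f} bits/min")
    ```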

  18. Multichannel brain recordings in behaving Drosophila reveal oscillatory activity and local coherence in response to sensory stimulation and circuit activation

    PubMed Central

    Paulk, Angelique C.; Zhou, Yanqiong; Stratton, Peter; Liu, Li

    2013-01-01

    Neural networks in vertebrates exhibit endogenous oscillations that have been associated with functions ranging from sensory processing to locomotion. It remains unclear whether oscillations may play a similar role in the insect brain. We describe a novel “whole brain” readout for Drosophila melanogaster using a simple multichannel recording preparation to study electrical activity across the brain of flies exposed to different sensory stimuli. We recorded local field potential (LFP) activity from >2,000 registered recording sites across the fly brain in >200 wild-type and transgenic animals to uncover specific LFP frequency bands that correlate with: 1) brain region; 2) sensory modality (olfactory, visual, or mechanosensory); and 3) activity in specific neural circuits. We found endogenous and stimulus-specific oscillations throughout the fly brain. Central (higher-order) brain regions exhibited sensory modality-specific increases in power within narrow frequency bands. Conversely, in sensory brain regions such as the optic or antennal lobes, LFP coherence, rather than power, best defined sensory responses across modalities. By transiently activating specific circuits via expression of TrpA1, we found that several circuits in the fly brain modulate LFP power and coherence across brain regions and frequency domains. However, activation of a neuromodulatory octopaminergic circuit specifically increased neuronal coherence in the optic lobes during visual stimulation while decreasing coherence in central brain regions. Our multichannel recording and brain registration approach provides an effective way to track activity simultaneously across the fly brain in vivo, allowing investigation of functional roles for oscillations in processing sensory stimuli and modulating behavior. PMID:23864378
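
    The LFP coherence measure used above can be illustrated with a standard spectral estimator. The sketch below computes magnitude-squared coherence between two synthetic signals sharing a 20 Hz component; the signals, sampling rate and band of interest are assumptions.

    ```python
    # Hedged sketch of magnitude-squared coherence between two LFP channels that
    # share a 20 Hz component. Signals, sampling rate and band are assumptions.
    import numpy as np
    from scipy.signal import coherence

    fs = 1000.0
    t = np.arange(0, 10, 1 / fs)
    shared = np.sin(2 * np.pi * 20 * t)                 # common oscillatory drive
    lfp_a = shared + 0.5 * np.random.randn(t.size)
    lfp_b = shared + 0.5 * np.random.randn(t.size)

    f, coh = coherence(lfp_a, lfp_b, fs=fs, nperseg=1024)
    band = (f >= 15) & (f <= 25)
    print("mean 15-25 Hz coherence:", coh[band].mean())
    ```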

  19. Nonlocal atlas-guided multi-channel forest learning for human brain labeling

    SciTech Connect

    Ma, Guangkai; Gao, Yaozong; Wu, Guorong; Wu, Ligang; Shen, Dinggang

    2016-02-15

    Purpose: It is important for many quantitative brain studies to label meaningful anatomical regions in MR brain images. However, due to high complexity of brain structures and ambiguous boundaries between different anatomical regions, the anatomical labeling of MR brain images is still quite a challenging task. In many existing label fusion methods, appearance information is widely used. However, since local anatomy in the human brain is often complex, the appearance information alone is limited in characterizing each image point, especially for identifying the same anatomical structure across different subjects. Recent progress in computer vision suggests that the context features can be very useful in identifying an object from a complex scene. In light of this, the authors propose a novel learning-based label fusion method by using both low-level appearance features (computed from the target image) and high-level context features (computed from warped atlases or tentative labeling maps of the target image). Methods: In particular, the authors employ a multi-channel random forest to learn the nonlinear relationship between these hybrid features and target labels (i.e., corresponding to certain anatomical structures). Specifically, at each of the iterations, the random forest will output tentative labeling maps of the target image, from which the authors compute spatial label context features and then use in combination with original appearance features of the target image to refine the labeling. Moreover, to accommodate the high inter-subject variations, the authors further extend their learning-based label fusion to a multi-atlas scenario, i.e., they train a random forest for each atlas and then obtain the final labeling result according to the consensus of results from all atlases. Results: The authors have comprehensively evaluated their method on both public LONI-LBPA40 and IXI datasets. To quantitatively evaluate the labeling accuracy, the authors use the

  20. Nonlocal atlas-guided multi-channel forest learning for human brain labeling

    PubMed Central

    Ma, Guangkai; Gao, Yaozong; Wu, Guorong; Wu, Ligang; Shen, Dinggang

    2016-01-01

    Purpose: It is important for many quantitative brain studies to label meaningful anatomical regions in MR brain images. However, due to high complexity of brain structures and ambiguous boundaries between different anatomical regions, the anatomical labeling of MR brain images is still quite a challenging task. In many existing label fusion methods, appearance information is widely used. However, since local anatomy in the human brain is often complex, the appearance information alone is limited in characterizing each image point, especially for identifying the same anatomical structure across different subjects. Recent progress in computer vision suggests that the context features can be very useful in identifying an object from a complex scene. In light of this, the authors propose a novel learning-based label fusion method by using both low-level appearance features (computed from the target image) and high-level context features (computed from warped atlases or tentative labeling maps of the target image). Methods: In particular, the authors employ a multi-channel random forest to learn the nonlinear relationship between these hybrid features and target labels (i.e., corresponding to certain anatomical structures). Specifically, at each of the iterations, the random forest will output tentative labeling maps of the target image, from which the authors compute spatial label context features and then use in combination with original appearance features of the target image to refine the labeling. Moreover, to accommodate the high inter-subject variations, the authors further extend their learning-based label fusion to a multi-atlas scenario, i.e., they train a random forest for each atlas and then obtain the final labeling result according to the consensus of results from all atlases. Results: The authors have comprehensively evaluated their method on both public LONI_LBPA40 and IXI datasets. To quantitatively evaluate the labeling accuracy, the authors use the

  1. Neuroanatomy of "hearing voices": a frontotemporal brain structural abnormality associated with auditory hallucinations in schizophrenia.

    PubMed

    Gaser, Christian; Nenadic, Igor; Volz, Hans-Peter; Büchel, Christian; Sauer, Heinrich

    2004-01-01

    Auditory hallucinations are a frequent symptom in schizophrenia. While functional imaging studies have suggested the association of certain patterns of brain activity with sub-syndromes or single symptoms (e.g. positive symptoms such as hallucinations), there has been only limited evidence from structural imaging or post-mortem studies. In this study, we investigated the relation of local brain structural deficits to severity of auditory hallucinations, particularly in perisylvian areas previously reported to be involved in auditory hallucinations. In order to overcome certain limitations of conventional volumetric methods, we used deformation-based morphometry (DBM), a novel automated whole-brain morphometric technique, to assess local gray and white matter deficits in structural magnetic resonance images of 85 schizophrenia patients. We found severity of auditory hallucinations to be significantly correlated (P < 0.001) with volume loss in the left transverse temporal gyrus of Heschl (primary auditory cortex) and left (inferior) supramarginal gyrus, as well as middle/inferior right prefrontal gyri. This demonstrates a pattern of distributed structural abnormalities specific for auditory hallucinations and suggests hallucination-specific alterations in areas of a frontotemporal network for processing auditory information and language.

  2. Auditory and vestibular dysfunction associated with blast-related traumatic brain injury.

    PubMed

    Fausti, Stephen A; Wilmington, Debra J; Gallun, Frederick J; Myers, Paula J; Henry, James A

    2009-01-01

    The dramatic escalation of blast exposure in military deployments has created an unprecedented amount of traumatic brain injury (TBI) and associated auditory impairment. Auditory dysfunction has become the most prevalent individual service-connected disability, with compensation totaling more than 1 billion dollars annually. Impairment due to blast can include peripheral hearing loss, central auditory processing deficits, vestibular impairment, and tinnitus. These deficits are particularly challenging in the TBI population, as symptoms can be mistaken for posttraumatic stress disorder, mental-health issues, and cognitive deficits. In addition, comorbid factors such as attention, cognition, neuronal loss, noise toxicity, etc., can confound assessment, causing misdiagnosis. Furthermore, some auditory impairments, such as sensorineural hearing loss, will continue to progress with age, unlike many other injuries. In the TBI population, significant clinical challenges are the accurate differentiation of auditory and vestibular impairments from multiple, many times overlapping, symptoms and the development of multidisciplinary rehabilitation strategies to improve treatment outcomes and quality of life for these patients.

  3. Lateralized auditory brain function in children with normal reading ability and in children with dyslexia.

    PubMed

    Johnson, Blake W; McArthur, Genevieve; Hautus, Michael; Reid, Melanie; Brock, Jon; Castles, Anne; Crain, Stephen

    2013-03-01

    We examined central auditory processing in typically and atypically developing readers. Concurrent EEG and MEG brain measurements were obtained from a group of 16 children with dyslexia aged 8-12 years, and a group of 16 age-matched children with normal reading ability. Auditory responses were elicited using 500 ms duration broadband noise. These responses were strongly lateralized in control children. Children with dyslexia showed significantly less lateralization of auditory cortical functioning, and a different pattern of development of auditory lateralization with age. These results provide further evidence that the core neurophysiological deficit of dyslexia is a problem in the balance of auditory function between the two hemispheres. Copyright © 2013 Elsevier Ltd. All rights reserved.
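
    The degree of hemispheric lateralization reported in studies like this one is often summarized with a simple laterality index; the snippet below shows one common formulation (a generic illustration with made-up amplitudes, not the specific measure used in this study).

    def laterality_index(left_amplitude, right_amplitude):
        # +1 = fully left-lateralized, -1 = fully right-lateralized, 0 = symmetric.
        return (left_amplitude - right_amplitude) / (left_amplitude + right_amplitude)

    # Example auditory response amplitudes (arbitrary units) for one child.
    print(laterality_index(left_amplitude=8.0, right_amplitude=12.0))   # -0.2, right-biased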

  4. Are Auditory Hallucinations Related to the Brain's Resting State Activity? A 'Neurophenomenal Resting State Hypothesis'

    PubMed Central

    2014-01-01

    While several hypotheses about the neural mechanisms underlying auditory verbal hallucinations (AVH) have been suggested, the exact role of the recently highlighted intrinsic resting state activity of the brain remains unclear. Based on recent findings, we therefore developed what we call the 'resting state hypothesis' of AVH. Our hypothesis suggests that AVH may be traced back to abnormally elevated resting state activity in auditory cortex itself, abnormal modulation of the auditory cortex by anterior cortical midline regions as part of the default-mode network, and neural confusion between auditory cortical resting state changes and stimulus-induced activity. We discuss evidence in favour of our 'resting state hypothesis' and show its correspondence with phenomenal, i.e., subjective-experiential features as explored in phenomenological accounts. Therefore I speak of a 'neurophenomenal resting state hypothesis' of auditory hallucinations in schizophrenia. PMID:25598821

  5. The Relationship between Phonological and Auditory Processing and Brain Organization in Beginning Readers

    ERIC Educational Resources Information Center

    Pugh, Kenneth R.; Landi, Nicole; Preston, Jonathan L.; Mencl, W. Einar; Austin, Alison C.; Sibley, Daragh; Fulbright, Robert K.; Seidenberg, Mark S.; Grigorenko, Elena L.; Constable, R. Todd; Molfese, Peter; Frost, Stephen J.

    2013-01-01

    We employed brain-behavior analyses to explore the relationship between performance on tasks measuring phonological awareness, pseudoword decoding, and rapid auditory processing (all predictors of reading (dis)ability) and brain organization for print and speech in beginning readers. For print-related activation, we observed a shared set of…

  6. The Relationship between Phonological and Auditory Processing and Brain Organization in Beginning Readers

    ERIC Educational Resources Information Center

    Pugh, Kenneth R.; Landi, Nicole; Preston, Jonathan L.; Mencl, W. Einar; Austin, Alison C.; Sibley, Daragh; Fulbright, Robert K.; Seidenberg, Mark S.; Grigorenko, Elena L.; Constable, R. Todd; Molfese, Peter; Frost, Stephen J.

    2013-01-01

    We employed brain-behavior analyses to explore the relationship between performance on tasks measuring phonological awareness, pseudoword decoding, and rapid auditory processing (all predictors of reading (dis)ability) and brain organization for print and speech in beginning readers. For print-related activation, we observed a shared set of…

  7. Gonadotropin-releasing hormone (GnRH) modulates auditory processing in the fish brain.

    PubMed

    Maruska, Karen P; Tricas, Timothy C

    2011-04-01

    Gonadotropin-releasing hormone 1 (GnRH1) neurons control reproductive activity, but GnRH2 and GnRH3 neurons have widespread projections and function as neuromodulators in the vertebrate brain. While these extra-hypothalamic GnRH forms function as olfactory and visual neuromodulators, their potential effect on processing of auditory information is unknown. To test the hypothesis that GnRH modulates the processing of auditory information in the brain, we used immunohistochemistry to determine seasonal variations in these neuropeptide systems, and in vivo single-neuron recordings to identify neuromodulation in the midbrain torus semicircularis of the soniferous damselfish Abudefduf abdominalis. Our results show abundant GnRH-immunoreactive (-ir) axons in auditory processing regions of the midbrain and hindbrain. The number of extra-hypothalamic GnRH somata and the density of GnRH-ir axons within the auditory torus semicircularis also varied across the year, suggesting seasonal changes in GnRH influence of auditory processing. Exogenous application of GnRH (sGnRH and cGnRHII) caused a primarily inhibitory effect on auditory-evoked single neuron responses in the torus semicircularis. In the majority of neurons, GnRH caused a long-lasting decrease in spike rate in response to both tone bursts and playbacks of complex natural sounds. GnRH also decreased response latency and increased auditory thresholds in a frequency and stimulus type-dependent manner. To our knowledge, these results show for the first time in any vertebrate that GnRH can influence context-specific auditory processing in vivo in the brain, and may function to modulate seasonal auditory-mediated social behaviors.

  8. Attention to human speakers in a virtual auditory environment: brain potential evidence.

    PubMed

    Nager, Wido; Dethlefsen, Christina; Münte, Thomas F

    2008-07-18

    Listening to a speech message requires the accurate selection of the relevant auditory input, especially when distracting background noise or other speech messages are present. To investigate such auditory selection processes, we presented three different speech messages simultaneously, spoken by different actors at separate spatial locations (-70°, 0°, and 70° azimuth). Stimuli were recorded using an artificial head with microphones embedded in the "auditory canals" to capture the interaural time and level differences as well as some of the filter properties of the outer ear structures as auditory spatial cues, thus creating a realistic virtual auditory space. In a given experimental run, young healthy participants listened via headphones and attended to either the rightmost or the leftmost message in order to comprehend the story. Superimposed on the speech messages, task-irrelevant probe stimuli (syllables sharing spatial and spectral characteristics, 4 probes/s) were presented and used to generate event-related brain potentials (ERPs) computed from 29 channels of EEG. ERPs to probe stimuli were characterized by a negativity starting at 250 ms, with a contralateral frontal maximum, for probes sharing the spatial/spectral features of the attended story relative to those for the unattended message. The relatively late onset of this attention effect was interpreted to reflect the task demands in this complex auditory environment. This study demonstrates the feasibility of using virtual auditory environments in conjunction with the probe technique to study auditory selection under realistic conditions.
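
    The probe technique boils down to averaging EEG epochs time-locked to the task-irrelevant probes, separately for probes matching the attended and the unattended stream. A minimal sketch with simulated data (hypothetical sampling rate, epoch window, and array shapes; no artifact rejection):

    import numpy as np

    fs = 500                                       # assumed sampling rate (Hz)
    pre, post = 0.1, 0.5                           # epoch window around each probe (s)
    rng = np.random.default_rng(1)

    eeg = rng.normal(size=(29, 120 * fs))          # 29 channels, 2 min of simulated EEG
    probe_onsets = np.arange(1.0, 118.0, 0.25)     # ~4 probes/s, onset times in seconds
    attended = rng.random(probe_onsets.size) < 0.5 # probes sharing attended-stream features

    def average_erp(onsets):
        epochs = []
        for t in onsets:
            i0, i1 = int((t - pre) * fs), int((t + post) * fs)
            ep = eeg[:, i0:i1]
            ep = ep - ep[:, : int(pre * fs)].mean(axis=1, keepdims=True)  # baseline-correct
            epochs.append(ep)
        return np.mean(epochs, axis=0)             # channels x time average (the ERP)

    erp_attended = average_erp(probe_onsets[attended])
    erp_unattended = average_erp(probe_onsets[~attended])
    attention_effect = erp_attended - erp_unattended   # expected negativity from ~250 ms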

  9. Cognitive factors shape brain networks for auditory skills: spotlight on auditory working memory.

    PubMed

    Kraus, Nina; Strait, Dana L; Parbery-Clark, Alexandra

    2012-04-01

    Musicians benefit from real-life advantages, such as a greater ability to hear speech in noise and to remember sounds, although the biological mechanisms driving such advantages remain undetermined. Furthermore, the extent to which these advantages are a consequence of musical training or innate characteristics that predispose a given individual to pursue music training is often debated. Here, we examine biological underpinnings of musicians' auditory advantages and the mediating role of auditory working memory. Results from our laboratory are presented within a framework that emphasizes auditory working memory as a major factor in the neural processing of sound. Within this framework, we provide evidence for music training as a contributing source of these abilities. © 2012 New York Academy of Sciences.

  10. Cognitive factors shape brain networks for auditory skills: spotlight on auditory working memory

    PubMed Central

    Kraus, Nina; Strait, Dana; Parbery-Clark, Alexandra

    2012-01-01

    Musicians benefit from real-life advantages such as a greater ability to hear speech in noise and to remember sounds, although the biological mechanisms driving such advantages remain undetermined. Furthermore, the extent to which these advantages are a consequence of musical training or innate characteristics that predispose a given individual to pursue music training is often debated. Here, we examine biological underpinnings of musicians’ auditory advantages and the mediating role of auditory working memory. Results from our laboratory are presented within a framework that emphasizes auditory working memory as a major factor in the neural processing of sound. Within this framework, we provide evidence for music training as a contributing source of these abilities. PMID:22524346

  11. Interaction of language, auditory and memory brain networks in auditory verbal hallucinations.

    PubMed

    Ćurčić-Blake, Branislava; Ford, Judith M; Hubl, Daniela; Orlov, Natasza D; Sommer, Iris E; Waters, Flavie; Allen, Paul; Jardri, Renaud; Woodruff, Peter W; David, Olivier; Mulert, Christoph; Woodward, Todd S; Aleman, André

    2017-01-01

    Auditory verbal hallucinations (AVH) occur in psychotic disorders, but also as a symptom of other conditions and even in healthy people. Several current theories on the origin of AVH converge, with neuroimaging studies suggesting that the language, auditory and memory/limbic networks are of particular relevance. However, reconciliation of these theories with experimental evidence is missing. We review 50 studies investigating functional (EEG and fMRI) and anatomic (diffusion tensor imaging) connectivity in these networks, and explore the evidence supporting abnormal connectivity in these networks associated with AVH. We distinguish between functional connectivity during an actual hallucination experience (symptom capture) and functional connectivity during either the resting state or a task comparing individuals who hallucinate with those who do not (symptom association studies). Symptom capture studies clearly reveal a pattern of increased coupling among the auditory, language and striatal regions. Anatomical and symptom association functional studies suggest that the interhemispheric connectivity between posterior auditory regions may depend on the phase of illness, with increases in non-psychotic individuals and first episode patients and decreases in chronic patients. Leading hypotheses involving concepts as unstable memories, source monitoring, top-down attention, and hybrid models of hallucinations are supported in part by the published connectivity data, although several caveats and inconsistencies remain. Specifically, possible changes in fronto-temporal connectivity are still under debate. Precise hypotheses concerning the directionality of connections deduced from current theoretical approaches should be tested using experimental approaches that allow for discrimination of competing hypotheses.

  12. An in vivo investigation of first spike latencies in the inferior colliculus in response to multichannel penetrating auditory brainstem implant stimulation

    NASA Astrophysics Data System (ADS)

    Mauger, Stefan J.; Shivdasani, Mohit N.; Rathbone, Graeme D.; Argent, Rebecca E.; Paolini, Antonio G.

    2010-06-01

    The cochlear nucleus (CN) is the first auditory processing site within the brain and the target location of the auditory brainstem implant (ABI), which provides speech perception to patients who cannot benefit from a cochlear implant (CI). Although there is variance between ABI recipient speech performance outcomes, performance is typically low compared with that of CI recipients. Temporal aspects of neural firing such as first spike latency (FSL) are thought to code for many speech features; however, no studies have investigated FSL from CN stimulation. Consequently, ABIs currently do not incorporate CN-specific temporal information. We therefore systematically investigated inferior colliculus (IC) neurons' FSL responses to frequency-specific electrical stimulation of the CN in rats. The range of FSLs from electrical stimulation of many neurons indicates that both monosynaptic and polysynaptic pathways were activated, suggesting initial activation of multiple CN neuron types. Electrical FSLs for a single neuron did not change regardless of the CN frequency region stimulated, indicating highly segregated projections from the CN to the IC. These results present the first evidence of temporal responses to frequency-specific CN electrical stimulation. Understanding the auditory system's temporal response to electrical stimulation will help in future ABI designs and stimulation strategies.
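
    First spike latency is the interval from stimulus onset to the first evoked spike on each trial; a toy computation with made-up spike times is shown below (not the authors' analysis code).

    import numpy as np

    def first_spike_latency(spike_times_ms, stim_onset_ms, window_ms=50.0):
        # Latency of the first spike after stimulus onset, or NaN if none fall in the window.
        post = spike_times_ms[(spike_times_ms >= stim_onset_ms) &
                              (spike_times_ms <= stim_onset_ms + window_ms)]
        return post[0] - stim_onset_ms if post.size else np.nan

    # One IC neuron, three stimulation trials (times in ms; illustrative values only).
    trials = [np.array([12.4, 19.0, 33.2]), np.array([11.8, 15.5]), np.array([40.1])]
    latencies = [first_spike_latency(t, stim_onset_ms=10.0) for t in trials]
    print(latencies, np.nanmean(latencies))   # shorter, less variable FSLs point to more direct pathways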

  13. [Forensic application of brainstem auditory evoked potential in patients with brain concussion].

    PubMed

    Zheng, Xing-Bin; Li, Sheng-Yan; Huang, Si-Xing; Ma, Ke-Xin

    2008-12-01

    To investigate changes of the brainstem auditory evoked potential (BAEP) in patients with brain concussion. Nineteen patients with brain concussion were studied with BAEP examination. The data were compared with those of healthy persons reported in the literature. The abnormal rate of BAEP for patients with brain concussion was 89.5%. There was a statistically significant difference between the abnormal rate of patients and that of healthy persons (P<0.05). The abnormal rate of BAEP in the brainstem pathway for patients with brain concussion was 73.7%, indicating dysfunction of the brainstem in those patients. BAEP might be helpful in the forensic diagnosis of brain concussion.

  14. Prediction of auditory and visual p300 brain-computer interface aptitude.

    PubMed

    Halder, Sebastian; Hammer, Eva Maria; Kleih, Sonja Claudia; Bogdan, Martin; Rosenstiel, Wolfgang; Birbaumer, Niels; Kübler, Andrea

    2013-01-01

    Brain-computer interfaces (BCIs) provide a non-muscular communication channel for patients with late-stage motoneuron disease (e.g., amyotrophic lateral sclerosis (ALS)) or otherwise motor impaired people and are also used for motor rehabilitation in chronic stroke. Differences in the ability to use a BCI vary from person to person and from session to session. A reliable predictor of aptitude would allow for the selection of suitable BCI paradigms. For this reason, we investigated whether P300 BCI aptitude could be predicted from a short experiment with a standard auditory oddball. Forty healthy participants performed an electroencephalography (EEG) based visual and auditory P300-BCI spelling task in a single session. In addition, prior to each session an auditory oddball was presented. Features extracted from the auditory oddball were analyzed with respect to predictive power for BCI aptitude. Correlation between auditory oddball response and P300 BCI accuracy revealed a strong relationship between accuracy and N2 amplitude and the amplitude of a late ERP component between 400 and 600 ms. Interestingly, the P3 amplitude of the auditory oddball response was not correlated with accuracy. Event-related potentials recorded during a standard auditory oddball session moderately predict aptitude in an auditory and highly in a visual P300 BCI. The predictor will allow for faster paradigm selection. Our method will reduce strain on patients because unsuccessful training may be avoided, provided the results can be generalized to the patient population.

  15. Prediction of Auditory and Visual P300 Brain-Computer Interface Aptitude

    PubMed Central

    Halder, Sebastian; Hammer, Eva Maria; Kleih, Sonja Claudia; Bogdan, Martin; Rosenstiel, Wolfgang; Birbaumer, Niels; Kübler, Andrea

    2013-01-01

    Objective: Brain-computer interfaces (BCIs) provide a non-muscular communication channel for patients with late-stage motoneuron disease (e.g., amyotrophic lateral sclerosis (ALS)) or otherwise motor impaired people and are also used for motor rehabilitation in chronic stroke. Differences in the ability to use a BCI vary from person to person and from session to session. A reliable predictor of aptitude would allow for the selection of suitable BCI paradigms. For this reason, we investigated whether P300 BCI aptitude could be predicted from a short experiment with a standard auditory oddball. Methods: Forty healthy participants performed an electroencephalography (EEG) based visual and auditory P300-BCI spelling task in a single session. In addition, prior to each session an auditory oddball was presented. Features extracted from the auditory oddball were analyzed with respect to predictive power for BCI aptitude. Results: Correlation between auditory oddball response and P300 BCI accuracy revealed a strong relationship between accuracy and N2 amplitude and the amplitude of a late ERP component between 400 and 600 ms. Interestingly, the P3 amplitude of the auditory oddball response was not correlated with accuracy. Conclusions: Event-related potentials recorded during a standard auditory oddball session moderately predict aptitude in an auditory and highly in a visual P300 BCI. The predictor will allow for faster paradigm selection. Significance: Our method will reduce strain on patients because unsuccessful training may be avoided, provided the results can be generalized to the patient population. PMID:23457444
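
    The aptitude prediction described in this record amounts to correlating per-participant ERP features from the oddball session with later spelling accuracy. A hedged sketch with simulated values (not the authors' data or analysis pipeline):

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(2)
    n = 40                                               # participants
    n2_amp = rng.normal(-4.0, 1.5, n)                    # simulated N2 amplitudes (microvolts)
    accuracy = 70.0 - 4.0 * n2_amp + rng.normal(0, 8, n) # more negative N2 -> higher accuracy

    r, p = pearsonr(n2_amp, accuracy)
    print(f"r = {r:.2f}, p = {p:.3g}")                   # screening-style predictor of BCI aptitude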

  16. Multi-channel atomic magnetometer for magnetoencephalography: a configuration study.

    PubMed

    Kim, Kiwoong; Begus, Samo; Xia, Hui; Lee, Seung-Kyun; Jazbinsek, Vojko; Trontelj, Zvonko; Romalis, Michael V

    2014-04-01

    Atomic magnetometers are emerging as an alternative to SQUID magnetometers for detection of biological magnetic fields. They have been used to measure both magnetocardiography (MCG) and magnetoencephalography (MEG) signals. One of the virtues of atomic magnetometers is their ability to operate as a multi-channel detector while using many common elements. Here we study two configurations of such a multi-channel atomic magnetometer optimized for MEG detection. We describe measurements of auditory evoked fields (AEF) from a human brain as well as localization of dipolar phantoms and auditory evoked fields. A clear N100m peak in the AEF was observed with a signal-to-noise ratio higher than 10 after averaging 250 stimuli. Currently the intrinsic magnetic noise level is 4 fT/√Hz at 10 Hz. We compare the performance of the two systems with regard to current source localization and discuss future development of atomic MEG systems. © 2013.
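
    The quoted noise floor and averaging count are consistent with the usual square-root-of-N gain from averaging evoked responses; the arithmetic below is a rough plausibility check in which the analysis bandwidth and N100m amplitude are assumed values, not figures from the study.

    import math

    noise_density_fT = 4.0      # intrinsic sensor noise, fT per sqrt(Hz) (from the abstract)
    bandwidth_Hz = 40.0         # assumed analysis bandwidth for the evoked field
    n_averages = 250            # stimuli averaged (from the abstract)
    signal_fT = 100.0           # assumed N100m amplitude at the sensors

    rms_noise = noise_density_fT * math.sqrt(bandwidth_Hz)   # single-trial noise, ~25 fT
    noise_after_avg = rms_noise / math.sqrt(n_averages)      # ~1.6 fT after averaging
    print(rms_noise, noise_after_avg, signal_fT / noise_after_avg)
    # Under these assumed values the averaged SNR is comfortably above the >10 reported.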

  17. Diffusion tensor imaging of dolphin brains reveals direct auditory pathway to temporal lobe

    PubMed Central

    Berns, Gregory S.; Cook, Peter F.; Foxley, Sean; Jbabdi, Saad; Miller, Karla L.; Marino, Lori

    2015-01-01

    The brains of odontocetes (toothed whales) look grossly different from those of their terrestrial relatives. Because of their adaptation to the aquatic environment and their reliance on echolocation, the odontocetes' auditory system is both unique and crucial to their survival. Yet, scant data exist about the functional organization of the cetacean auditory system. A predominant hypothesis is that the primary auditory cortex lies in the suprasylvian gyrus along the vertex of the hemispheres, with this position induced by expansion of 'associative' regions in lateral and caudal directions. However, the precise location of the auditory cortex and its connections are still unknown. Here, we used a novel diffusion tensor imaging (DTI) sequence in archival post-mortem brains of a common dolphin (Delphinus delphis) and a pantropical dolphin (Stenella attenuata) to map their sensory and motor systems. Using thalamic parcellation based on traditionally defined regions for the primary visual (V1) and auditory cortex (A1), we found distinct regions of the thalamus connected to V1 and A1. But in addition to suprasylvian-A1, we report here, for the first time, that the auditory cortex also exists in the temporal lobe, in a region near cetacean-A2 and possibly analogous to the primary auditory cortex in related terrestrial mammals (Artiodactyla). Using probabilistic tract tracing, we found a direct pathway from the inferior colliculus to the medial geniculate nucleus to the temporal lobe near the sylvian fissure. Our results demonstrate the feasibility of post-mortem DTI in archival specimens to answer basic questions in comparative neurobiology in a way that has not previously been possible and show a link between the cetacean auditory system and those of terrestrial mammals. Given that fresh cetacean specimens are relatively rare, the ability to measure connectivity in archival specimens opens up a plethora of possibilities for investigating neuroanatomy in cetaceans and other species.

  18. [Brainstem auditory evoked potentials as a method to assist the diagnosis of brain death].

    PubMed

    Jardim, Mônica; Person, Osmar Clayton; Rapoport, Priscila Bogar

    2008-01-01

    Brainstem auditory evoked potentials in brain death. To verify the agreement between the response in the auditory brainstem audiometry and the clinical outcome, analyzing the pattern of responses to electric stimulation. A cross-sectional study was performed in 30 patients with a Glasgow coma score of 3, submitted to the auditory brainstem audiometry and followed up until their clinical outcome: recovery or death. The test was considered positive for brain death when there was no registry of waves or when there was only the registry of wave I; and negative when there were two or more waves, independently of their latencies. Among the patients who presented positive results for brain death (86.66%), all died; the only patient who recovered presented a negative result, indicating a specificity of 100%. Internal consistency of data was also observed, with an intraclass correlation coefficient of 0.562, obtained using Cronbach's test; and a significant agreement between the test and the clinical outcome using the Kappa test, with a confidence interval of 95% (K = 0.545; p = 0.015). In the present study, the brainstem auditory evoked potential proved highly specific in predicting death of patients with a Glasgow coma score of 3, and was useful in assisting the diagnosis of brain death.
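
    Agreement statistics of the kind reported here (specificity and a Kappa coefficient against the clinical outcome) follow from a 2x2 table; the sketch below uses hypothetical counts in the spirit of the abstract (the per-cell breakdown is not given there, so the resulting Kappa will not match the reported value).

    from sklearn.metrics import cohen_kappa_score, confusion_matrix

    # 1 = BAEP positive for brain death / patient died, 0 = negative / recovered (made-up split).
    baep_result = [1] * 26 + [0] * 4
    outcome     = [1] * 26 + [1, 1, 1, 0]

    tn, fp, fn, tp = confusion_matrix(outcome, baep_result).ravel()
    specificity = tn / (tn + fp)                  # 1.0: no survivor had a positive test
    kappa = cohen_kappa_score(outcome, baep_result)
    print(specificity, round(kappa, 3))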

  19. Usefulness of piezoelectric earphones in recording the brain stem auditory evoked potentials: a new early deflection.

    PubMed

    Hughes, J R; Fino, J

    1980-03-01

    Piezoelectric and electromagnetic earphones are compared in the recording of brain stem auditory evoked responses in man. With the piezoelectric phones the stimulus artifact can be essentially eliminated, permitting an early peak I' to appear at 1.1 msec. Also, other peaks, especially later ones, including one called VIII at 11.0 msec, appeared more consistently with smaller standard deviations of latency.

  20. Brain-stem auditory evoked potentials in children and adolescents with anorexia nervosa.

    PubMed

    Pilecki, Witold; Bolanowski, Marek; Szawrowicz-Pelka, Teresa; Kalka, Dariusz; Janocha, Anna; Salomon, Ewa; Rusiecki, Leslaw; Czerchawski, Leszek; Sobieszczanska, Malgorzata

    2010-01-01

    In the course of anorexia nervosa (AN), the central nervous system (CNS) undergoes both anatomic and functional changes that may cause disturbances of stimulus transmission in the sensory areas of the CNS. Brain-stem auditory evoked potentials (BAEPs) were used in children with AN to test auditory pathway transmission. The study included 37 children and adolescents, aged 10-18 years, with clinically diagnosed AN. BAEPs were recorded after click stimulation at 75 dB intensity. Then, wave I latency (response from the auditory nerve) and inter-peak latency I-V (IPL I-V; response from the brain-stem) were analyzed. Abnormalities in the BAEP recordings were noted in a total of 32.4% of the study patients. Predominantly (in 24.3%), decreased transmission within the brain-stem, expressed as IPL I-V prolongation, was observed. The percentage of abnormal BAEP results and the degree of IPL I-V prolongation both increased with increasing AN severity. The IPL I-V prolongation observed in the AN children reflects disturbed neural transmission in the brain-stem section of the auditory pathway and can be ascribed to impairments of the nerves' myelin sheaths.
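
    The IPL I-V analysis is a direct subtraction of wave latencies; a minimal sketch follows, in which the normative cutoff is an assumed placeholder rather than the study's own limit.

    def interpeak_latency_ms(wave_i_ms, wave_v_ms):
        return wave_v_ms - wave_i_ms

    NORMATIVE_IPL_I_V_MAX_MS = 4.5                 # assumed upper normal limit (placeholder)

    recording = {"wave_I": 1.7, "wave_V": 6.4}     # example latencies in ms
    ipl = interpeak_latency_ms(recording["wave_I"], recording["wave_V"])
    print(ipl, ipl > NORMATIVE_IPL_I_V_MAX_MS)     # 4.7 ms -> flagged as prolonged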

  1. Closed eyes condition increases auditory brain responses in schizophrenia.

    PubMed

    Griskova-Bulanova, Inga; Dapsys, Kastytis; Maciulis, Valentinas; Arnfred, Sidse M

    2013-02-28

    The 40-Hz auditory steady-state responses (ASSRs) of 14 medicated schizophrenic patients were recorded in eyes-open and eyes-closed conditions as previously done in healthy volunteers. Patients show significantly increased precision of the evoked response with eyes closed, and a significant increase of broad-band noise activity when eyes are open. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  2. Turning down the noise: the benefit of musical training on the aging auditory brain.

    PubMed

    Alain, Claude; Zendel, Benjamin Rich; Hutka, Stefanie; Bidelman, Gavin M

    2014-02-01

    Age-related decline in hearing abilities is a ubiquitous part of aging, and commonly impacts speech understanding, especially when there are competing sound sources. While such age effects are partially due to changes within the cochlea, difficulties typically exist beyond measurable hearing loss, suggesting that central brain processes, as opposed to simple peripheral mechanisms (e.g., hearing sensitivity), play a critical role in governing hearing abilities late into life. Current training regimens aimed to improve central auditory processing abilities have experienced limited success in promoting listening benefits. Interestingly, recent studies suggest that in young adults, musical training positively modifies neural mechanisms, providing robust, long-lasting improvements to hearing abilities as well as to non-auditory tasks that engage cognitive control. These results offer the encouraging possibility that musical training might be used to counteract age-related changes in auditory cognition commonly observed in older adults. Here, we reviewed studies that have examined the effects of age and musical experience on auditory cognition with an emphasis on auditory scene analysis. We infer that musical training may offer potential benefits to complex listening and might be utilized as a means to delay or even attenuate declines in auditory perception and cognition that often emerge later in life.

  3. A blueprint for vocal learning: auditory predispositions from brains to genomes

    PubMed Central

    Wheatcroft, David; Qvarnström, Anna

    2015-01-01

    Memorizing and producing complex strings of sound are requirements for spoken human language. We share these behaviours with likely more than 4000 species of songbirds, making birds our primary model for studying the cognitive basis of vocal learning and, more generally, an important model for how memories are encoded in the brain. In songbirds, as in humans, the sounds that a juvenile learns later in life depend on auditory memories formed early in development. Experiments on a wide variety of songbird species suggest that the formation and lability of these auditory memories, in turn, depend on auditory predispositions that stimulate learning when a juvenile hears relevant, species-typical sounds. We review evidence that variation in key features of these auditory predispositions are determined by variation in genes underlying the development of the auditory system. We argue that increased investigation of the neuronal basis of auditory predispositions expressed early in life in combination with modern comparative genomic approaches may provide insights into the evolution of vocal learning. PMID:26246333

  4. Auditory-musical processing in autism spectrum disorders: a review of behavioral and brain imaging studies.

    PubMed

    Ouimet, Tia; Foster, Nicholas E V; Tryfon, Ana; Hyde, Krista L

    2012-04-01

    Autism spectrum disorder (ASD) is a complex neurodevelopmental condition characterized by atypical social and communication skills, repetitive behaviors, and atypical visual and auditory perception. Studies in vision have reported enhanced detailed ("local") processing but diminished holistic ("global") processing of visual features in ASD. Individuals with ASD also show enhanced processing of simple visual stimuli but diminished processing of complex visual stimuli. Relative to the visual domain, auditory global-local distinctions, and the effects of stimulus complexity on auditory processing in ASD, are less clear. However, one remarkable finding is that many individuals with ASD have enhanced musical abilities, such as superior pitch processing. This review provides a critical evaluation of behavioral and brain imaging studies of auditory processing with respect to current theories in ASD. We have focused on auditory-musical processing in terms of global versus local processing and simple versus complex sound processing. This review contributes to a better understanding of auditory processing differences in ASD. A deeper comprehension of sensory perception in ASD is key to better defining ASD phenotypes and, in turn, may lead to better interventions.

  5. Human auditory evoked potentials in the assessment of brain function during major cardiovascular surgery.

    PubMed

    Rodriguez, Rosendo A

    2004-06-01

    Focal neurologic and intellectual deficits or memory problems are relatively frequent after cardiac surgery. These complications have been associated with cerebral hypoperfusion, embolization, and inflammation that occur during or after surgery. Auditory evoked potentials, a neurophysiologic technique that evaluates the function of neural structures from the auditory nerve to the cortex, provide useful information about the functional status of the brain during major cardiovascular procedures. Skepticism regarding the presence of artifacts or difficulty in their interpretation has outweighed considerations of its potential utility and noninvasiveness. This paper reviews the evidence of their potential applications in several aspects of the management of cardiac surgery patients. The sensitivity of auditory evoked potentials to the effects of changes in brain temperature makes them useful for monitoring cerebral hypothermia and rewarming during cardiopulmonary bypass. The close relationship between evoked potential waveforms and specific anatomic structures facilitates the assessment of the functional integrity of the central nervous system in cardiac surgery patients. This feature may also be relevant in the management of critical patients under sedation and coma or in the evaluation of their prognosis during critical care. Their objectivity, reproducibility, and relative insensitivity to learning effects make auditory evoked potentials attractive for the cognitive assessment of cardiac surgery patients. From a clinical perspective, auditory evoked potentials represent an additional window for the study of underlying cerebral processes in healthy and diseased patients. From a research standpoint, this technology offers opportunities for a better understanding of the particular cerebral deficits associated with patients who are undergoing major cardiovascular procedures.

  6. Connecting the ear to the brain: molecular mechanisms of auditory circuit assembly

    PubMed Central

    Appler, Jessica M.; Goodrich, Lisa V.

    2011-01-01

    Our sense of hearing depends on precisely organized circuits that allow us to sense, perceive, and respond to complex sounds in our environment, from music and language to simple warning signals. Auditory processing begins in the cochlea of the inner ear, where sounds are detected by sensory hair cells and then transmitted to the central nervous system by spiral ganglion neurons, which faithfully preserve the frequency, intensity, and timing of each stimulus. During the assembly of auditory circuits, spiral ganglion neurons establish precise connections that link hair cells in the cochlea to target neurons in the auditory brainstem, develop specific firing properties, and elaborate unusual synapses both in the periphery and in the CNS. Understanding how spiral ganglion neurons acquire these unique properties is a key goal in auditory neuroscience, as these neurons represent the sole input of auditory information to the brain. In addition, the best currently available treatment for many forms of deafness is the cochlear implant, which compensates for lost hair cell function by directly stimulating the auditory nerve. Historically, studies of the auditory system have lagged behind other sensory systems due to the small size and inaccessibility of the inner ear. With the advent of new molecular genetic tools, this gap is narrowing. Here, we summarize recent insights into the cellular and molecular cues that guide the development of spiral ganglion neurons, from their origin in the proneurosensory domain of the otic vesicle to the formation of specialized synapses that ensure rapid and reliable transmission of sound information from the ear to the brain. PMID:21232575

  7. Expression of c-fos in auditory and non-auditory brain regions of the gerbil after manipulations that induce tinnitus.

    PubMed

    Wallhäusser-Franke, E; Mahlke, C; Oliva, R; Braun, S; Wenz, G; Langner, G

    2003-12-01

    Subjective tinnitus is a phantom sound sensation that does not result from acoustic stimulation and is audible to the affected subject only. Tinnitus-like sensations in animals can be evoked by procedures that also cause tinnitus in humans. In gerbils, we investigated brain activation after systemic application of sodium salicylate or exposure to loud noise, both known to be reliable tinnitus-inductors. Brains were screened for neurons containing the c-fos protein. After salicylate injections, auditory cortex was the only auditory area with consistently increased numbers of immunoreactive neurons compared to controls. Exposure to impulse noise led to prolonged c-fos expression in auditory cortex and dorsal cochlear nucleus. After both manipulations c-fos expression was increased in the amygdala, in thalamic midline, and intralaminar areas, in frontal cortex, as well as in hypothalamic and brainstem regions involved in behavioral and physiological defensive reactions. Activation of these non-auditory areas was attributed to acute stress, to aversive-affective components and autonomous reactions associated with the treatments and a resulting tinnitus. The present findings are in accordance with former results that provided evidence for suppressed activation in auditory midbrain but enhanced activation of the auditory cortex after injecting high doses of salicylate. In addition, our present results provide evidence that acute stress coinciding with a disruption of hearing may evoke activation of the auditory cortex. We interpret these results in favor of our model of central tinnitus generation.

  8. Auditory Temporal Processing Deficits in Chronic Stroke: A Comparison of Brain Damage Lateralization Effect.

    PubMed

    Jafari, Zahra; Esmaili, Mahdiye; Delbari, Ahmad; Mehrpour, Masoud; Mohajerani, Majid H

    2016-06-01

    There have been a few reports about the effects of chronic stroke on auditory temporal processing abilities and no reports regarding the effects of brain damage lateralization on these abilities. Our study was performed on 2 groups of chronic stroke patients to compare the effects of hemispheric lateralization of brain damage and of age on auditory temporal processing. Seventy persons with normal hearing, including 25 normal controls, 25 stroke patients with damage to the right brain, and 20 stroke patients with damage to the left brain, without aphasia and with an age range of 31-71 years were studied. A gap-in-noise (GIN) test and a duration pattern test (DPT) were conducted for each participant. Significant differences were found between the 3 groups for GIN threshold, overall GIN percent score, and DPT percent score in both ears (P ≤ .001). For all stroke patients, performance in both GIN and DPT was poorer in the ear contralateral to the damaged hemisphere, which was significant in DPT and in 2 measures of GIN (P ≤ .046). Advanced age had a negative relationship with temporal processing abilities for all 3 groups. In cases of confirmed left- or right-side stroke involving auditory cerebrum damage, poorer auditory temporal processing is associated with the ear contralateral to the damaged cerebral hemisphere. Replication of our results and the use of GIN and DPT tests for the early diagnosis of auditory processing deficits and for monitoring the effects of aural rehabilitation interventions are recommended. Copyright © 2016 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  9. Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.

    PubMed

    Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo

    2013-02-16

    We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common for all three tasks within each modality or interaction of processing task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of processing irrelevant speech presumably distracting the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.

  10. Brain Network Interactions in Auditory, Visual and Linguistic Processing

    ERIC Educational Resources Information Center

    Horwitz, Barry; Braun, Allen R.

    2004-01-01

    In the paper, we discuss the importance of network interactions between brain regions in mediating performance of sensorimotor and cognitive tasks, including those associated with language processing. Functional neuroimaging, especially PET and fMRI, provide data that are obtained essentially simultaneously from much of the brain, and thus are…

  11. Multichannel NIRS analysis of brain activity during semantic differential rating of drawing stimuli containing different affective polarities.

    PubMed

    Suzuki, Miho; Gyoba, Jiro; Sakuta, Yuiko

    2005-02-25

    We used 24-channel near-infrared spectroscopy (NIRS) to measure activity in the temporal, parietal, and frontal regions of the brain in eight Japanese women while the participants rated line drawings using semantic differential scales. Participants rated the seven line drawings on 15 bipolar semantic scales, each of which belonged to one of three semantic classes: Evaluation, Activity, or Potency. Suzuki et al. [M. Suzuki, J. Gyoba, Y. Sakuta, Multichannel near-infrared spectroscopy analysis of brain activities during semantic differential rating of drawings, Tohoku Psychologica Folia 62 (2003) 86-98.] had reported previously that the right superior temporal gyrus and the right inferior parietal lobule are associated with Activity rating, while the brain regions around the central fissure were related to Potency rating. Based on these suggestions, we investigated the brain activity in these regions during rating of stimuli containing different affective polarities. When drawings were reported as 'static' or 'calm', oxyhemoglobin concentration was higher around the right superior temporal gyrus as compared to when they were considered 'noisy' or 'excitable'. Oxyhemoglobin concentrations around the central fissure were also higher when drawings were rated as 'soft', 'smooth', or 'blunt' compared to 'hard', 'rough', or 'sharp'. No characteristic oxyhemoglobin changes were found during the ratings on the Evaluation scales. Our results suggest that activation patterns of the temporal and parietal regions are significantly modified by the semantic polarities of Activity and Potency.

  12. Neurogenesis in the brain auditory pathway of a marsupial, the northern native cat (Dasyurus hallucatus)

    SciTech Connect

    Aitkin, L.; Nelson, J.; Farrington, M.; Swann, S. )

    1991-07-08

    Neurogenesis in the auditory pathway of the marsupial Dasyurus hallucatus was studied. Intraperitoneal injections of tritiated thymidine (20-40 microCi) were made into pouch-young varying from 1 to 56 days pouch-life. Animals were killed as adults and brain sections were prepared for autoradiography and counterstained with a Nissl stain. Neurons in the ventral cochlear nucleus were generated prior to 3 days pouch-life, in the superior olive at 5-7 days, and in the dorsal cochlear nucleus over a prolonged period. Inferior collicular neurogenesis lagged behind that in the medial geniculate, the latter taking place between days 3 and 9 and the former between days 7 and 22. Neurogenesis began in the auditory cortex on day 9 and was completed by about day 42. Thus neurogenesis was complete in the medullary auditory nuclei before that in the midbrain commenced, and in the medial geniculate before that in the auditory cortex commenced. The time course of neurogenesis in the auditory pathway of the native cat was very similar to that in another marsupial, the brushtail possum. For both, neurogenesis occurred earlier than in eutherian mammals of a similar size but was more protracted.

  13. Design of the multi-channel electroencephalography-based brain-computer interface with novel dry sensors.

    PubMed

    Wu, Shang-Lin; Liao, Lun-De; Liou, Chang-Hong; Chen, Shi-An; Ko, Li-Wei; Chen, Bo-Wei; Wang, Po-Sheng; Chen, Sheng-Fu; Lin, Chin-Teng

    2012-01-01

    Traditional brain-computer interface (BCI) systems measure electroencephalography (EEG) signals with wet sensors that require conductive gel and skin preparation. To overcome the limitations of traditional BCI systems with conventional wet sensors, a wireless and wearable multi-channel EEG-based BCI system is proposed in this study, comprising a wireless EEG data acquisition device, dry spring-loaded sensors, and a size-adjustable soft cap. The dry spring-loaded sensors are made of metal conductors and can measure EEG signals without skin preparation or conductive gel. In addition, the proposed system provides a size-adjustable soft cap that fits the user's head properly. The results show that the proposed system can properly and effectively measure EEG signals with the developed cap and sensors, even during movement. In short, the developed wireless and wearable BCI system can be used in cognitive neuroscience applications.

  14. Auditory perception and syntactic cognition: brain activity-based decoding within and across subjects.

    PubMed

    Herrmann, Björn; Maess, Burkhard; Kalberlah, Christian; Haynes, John-Dylan; Friederici, Angela D

    2012-05-01

    The present magnetoencephalography study investigated whether the brain states of early syntactic and auditory-perceptual processes can be decoded from single-trial recordings with a multivariate pattern classification approach. In particular, it was investigated whether the early neural activation patterns in response to rule violations in basic auditory perception and in high cognitive processes (syntax) reflect a functional organization that largely generalizes across individuals or is subject-specific. On this account, subjects were auditorily presented with correct sentences, syntactically incorrect sentences, correct sentences including an interaural time difference change, and sentences containing both violations. For the analysis, brain state decoding was carried out within and across subjects with three pairwise classifications. Neural patterns elicited by each of the violation sentences were separately classified with the patterns elicited by the correct sentences. The results revealed the highest decoding accuracies over temporal cortex areas for all three classification types. Importantly, both the magnitude and the spatial distribution of decoding accuracies for the early neural patterns were very similar for within-subject and across-subject decoding. At the same time, across-subject decoding suggested a hemispheric bias, with the most consistent patterns in the left hemisphere. Thus, the present data show that not only auditory-perceptual processing brain states but also cognitive brain states of syntactic rule processing can be decoded from single-trial brain activations. Moreover, the findings indicate that the neural patterns in response to syntactic cognition and auditory perception reflect a functional organization that is highly consistent across individuals. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
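
    Pairwise single-trial decoding of this kind is typically a cross-validated binary classifier over sensor-space patterns. The sketch below uses simulated trials and a logistic-regression decoder as a stand-in for the authors' multivariate pattern classification; the closing comment notes how across-subject decoding would differ.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(3)
    n_trials, n_sensors = 200, 100
    # Simulated single-trial patterns: correct sentences vs. syntactic violations,
    # with a small additive class difference on a subset of "temporal" sensors.
    X = rng.normal(size=(n_trials, n_sensors))
    y = np.repeat([0, 1], n_trials // 2)
    X[y == 1, :20] += 0.4

    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    acc = cross_val_score(clf, X, y, cv=5)            # within-"subject" decoding accuracy
    print(acc.mean())
    # Across-subject decoding would instead train on trials from n-1 subjects and
    # test on the held-out subject (leave-one-subject-out cross-validation).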

  15. Effects of training and motivation on auditory P300 brain-computer interface performance.

    PubMed

    Baykara, E; Ruf, C A; Fioravanti, C; Käthner, I; Simon, N; Kleih, S C; Kübler, A; Halder, S

    2016-01-01

    Brain-computer interface (BCI) technology aims at helping end-users with severe motor paralysis to communicate with their environment without using the natural output pathways of the brain. For end-users in complete paralysis, loss of gaze control may necessitate non-visual BCI systems. The present study investigated the effect of training on performance with an auditory P300 multi-class speller paradigm. For half of the participants, spatial cues were added to the auditory stimuli to see whether performance can be further optimized. The influence of motivation, mood and workload on performance and P300 component was also examined. In five sessions, 16 healthy participants were instructed to spell several words by attending to animal sounds representing the rows and columns of a 5 × 5 letter matrix. 81% of the participants achieved an average online accuracy of ⩾ 70%. From the first to the fifth session information transfer rates increased from 3.72 bits/min to 5.63 bits/min. Motivation significantly influenced P300 amplitude and online ITR. No significant facilitative effect of spatial cues on performance was observed. Training improves performance in an auditory BCI paradigm. Motivation influences performance and P300 amplitude. The described auditory BCI system may help end-users to communicate independently of gaze control with their environment. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
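
    The bits-per-minute figures quoted here are conventionally computed with the Wolpaw information transfer rate; a worked sketch follows, in which the number of classes and the selection rate are assumed example values.

    import math

    def wolpaw_itr_bits_per_min(n_classes, accuracy, selections_per_min):
        # Wolpaw ITR: bits per selection times selections per minute.
        p, n = accuracy, n_classes
        bits = math.log2(n)
        if 0 < p < 1:
            bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
        return bits * selections_per_min

    # 25-letter matrix speller at 80% accuracy with ~2 selections/min (assumed rate).
    print(wolpaw_itr_bits_per_min(n_classes=25, accuracy=0.8, selections_per_min=2.0))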

  16. Localized brain activation related to the strength of auditory learning in a parrot.

    PubMed

    Eda-Fujiwara, Hiroko; Imagawa, Takuya; Matsushita, Masanori; Matsuda, Yasushi; Takeuchi, Hiro-Aki; Satoh, Ryohei; Watanabe, Aiko; Zandbergen, Matthijs A; Manabe, Kazuchika; Kawashima, Takashi; Bolhuis, Johan J

    2012-01-01

    Parrots and songbirds learn their vocalizations from a conspecific tutor, much like human infants acquire spoken language. Parrots can learn human words and it has been suggested that they can use them to communicate with humans. The caudomedial pallium in the parrot brain is homologous with that of songbirds, and analogous to the human auditory association cortex, involved in speech processing. Here we investigated neuronal activation, measured as expression of the protein product of the immediate early gene ZENK, in relation to auditory learning in the budgerigar (Melopsittacus undulatus), a parrot. Budgerigar males successfully learned to discriminate two Japanese words spoken by another male conspecific. Re-exposure to the two discriminanda led to increased neuronal activation in the caudomedial pallium, but not in the hippocampus, compared to untrained birds that were exposed to the same words, or were not exposed to words. Neuronal activation in the caudomedial pallium of the experimental birds was correlated significantly and positively with the percentage of correct responses in the discrimination task. These results suggest that in a parrot, the caudomedial pallium is involved in auditory learning. Thus, in parrots, songbirds and humans, analogous brain regions may contain the neural substrate for auditory learning and memory.

  17. Development and modulation of intrinsic membrane properties control the temporal precision of auditory brain stem neurons.

    PubMed

    Franzen, Delwen L; Gleiss, Sarah A; Berger, Christina; Kümpfbeck, Franziska S; Ammer, Julian J; Felmy, Felix

    2015-01-15

    Passive and active membrane properties determine the voltage responses of neurons. Within the auditory brain stem, refinements in these intrinsic properties during late postnatal development usually generate short integration times and precise action-potential generation. This developmentally acquired temporal precision is crucial for auditory signal processing. How the interactions of these intrinsic properties develop in concert to enable auditory neurons to transfer information with high temporal precision has not yet been elucidated in detail. Here, we show how the developmental interaction of intrinsic membrane parameters generates high firing precision. We performed in vitro recordings from neurons of postnatal days 9-28 in the ventral nucleus of the lateral lemniscus of Mongolian gerbils, an auditory brain stem structure that converts excitatory to inhibitory information with high temporal precision. During this developmental period, the input resistance and capacitance decrease, and action potentials acquire faster kinetics and enhanced precision. Depending on the stimulation time course, the input resistance and capacitance contribute differentially to action-potential thresholds. The decrease in input resistance, however, is sufficient to explain the enhanced action-potential precision. Alterations in passive membrane properties also interact with a developmental change in potassium currents to generate the emergence of the mature firing pattern, characteristic of coincidence-detector neurons. Cholinergic receptor-mediated depolarizations further modulate this intrinsic excitability profile by eliciting changes in the threshold and firing pattern, irrespective of the developmental stage. Thus our findings reveal how intrinsic membrane properties interact developmentally to promote temporally precise information processing.
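
    The link between the reported drop in input resistance and faster, more precise responses follows from the membrane time constant tau = R_in x C_m; a one-line illustration with plausible but invented values:

    def tau_ms(r_in_megaohm, c_m_picofarad):
        # MOhm x pF gives microseconds; divide by 1000 to express tau in milliseconds.
        return r_in_megaohm * c_m_picofarad / 1000.0

    print(tau_ms(150, 40))   # immature-like cell: ~6 ms integration window
    print(tau_ms(40, 25))    # mature-like cell: ~1 ms, favouring temporally precise firing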

  18. Audio representations of multi-channel EEG: a new tool for diagnosis of brain disorders

    PubMed Central

    Vialatte, François B; Dauwels, Justin; Musha, Toshimitsu; Cichocki, Andrzej

    2012-01-01

    Objective: The objective of this paper is to develop audio representations of electroencephalographic (EEG) multichannel signals, useful for medical practitioners and neuroscientists. The fundamental question explored in this paper is whether clinically valuable information contained in the EEG, not available from the conventional graphical EEG representation, might become apparent through audio representations. Methods and Materials: Music scores are generated from sparse time-frequency maps of EEG signals. Specifically, EEG signals of patients with mild cognitive impairment (MCI) and (healthy) control subjects are considered. Statistical differences in the audio representations of MCI patients and control subjects are assessed through mathematical complexity indexes as well as a perception test; in the latter, participants try to distinguish between audio sequences from MCI patients and control subjects. Results: Several characteristics of the audio sequences, including sample entropy, number of notes, and synchrony, are significantly different in MCI patients and control subjects (Mann-Whitney p < 0.01). Moreover, the participants of the perception test were able to accurately classify the audio sequences (89% correctly classified). Conclusions: The proposed audio representation of multi-channel EEG signals helps to understand the complex structure of EEG. Promising results were obtained on a clinical EEG data set. PMID:23383399

  19. Audio representations of multi-channel EEG: a new tool for diagnosis of brain disorders.

    PubMed

    Vialatte, François B; Dauwels, Justin; Musha, Toshimitsu; Cichocki, Andrzej

    2012-01-01

    The objective of this paper is to develop audio representations of electroencephalographic (EEG) multichannel signals, useful for medical practitioners and neuroscientists. The fundamental question explored in this paper is whether clinically valuable information contained in the EEG, not available from the conventional graphical EEG representation, might become apparent through audio representations. Music scores are generated from sparse time-frequency maps of EEG signals. Specifically, EEG signals of patients with mild cognitive impairment (MCI) and (healthy) control subjects are considered. Statistical differences in the audio representations of MCI patients and control subjects are assessed through mathematical complexity indexes as well as a perception test; in the latter, participants try to distinguish between audio sequences from MCI patients and control subjects. Several characteristics of the audio sequences, including sample entropy, number of notes, and synchrony, are significantly different in MCI patients and control subjects (Mann-Whitney p < 0.01). Moreover, the participants of the perception test were able to accurately classify the audio sequences (89% correctly classified). The proposed audio representation of multi-channel EEG signals helps to understand the complex structure of EEG. Promising results were obtained on a clinical EEG data set.

  20. Characteristics of brain stem auditory evoked potentials in children with hearing impairment due to infectious diseases.

    PubMed

    Ječmenica, Jovana Radovan; Opančina, Aleksandra Aleksandar Bajec

    2015-05-01

    Among objective audiologic tests, the most important are tests of brain stem auditory evoked potentials. The objective of the study was to examine the configuration, degree of hearing loss, and response characteristics of auditory brain stem evoked potentials in children whose hearing loss occurred due to infectious disease. A case control study design was used. The study group consisted of 54 patients referred for a hearing test because of infectious diseases caused by various agents or acquired as congenital infections. The infectious agents led to various forms of sensorineural hearing loss. We found deviations from the normal values of absolute and interwave latencies in some children in our group. In the group of children who had diseases such as purulent meningitis, or who were born with rubella virus or cytomegalovirus infection, retrocochlear damage was present in children both with and without cochlear damage. © The Author(s) 2014.

  1. Brain stem auditory evoked responses in human infants and adults

    NASA Technical Reports Server (NTRS)

    Hecox, K.; Galambos, R.

    1974-01-01

    Brain stem evoked potentials were recorded by conventional scalp electrodes in infants (3 weeks to 3 years of age) and adults. The latency of one of the major response components (wave V) is shown to be a function both of click intensity and the age of the subject; this latency at a given signal strength shortens postnatally to reach the adult value (about 6 msec) by 12 to 18 months of age. The demonstrated reliability and limited variability of these brain stem electrophysiological responses provide the basis for an optimistic estimate of their usefulness as an objective method for assessing hearing in infants and adults.

  2. Brain stem auditory evoked responses in human infants and adults

    NASA Technical Reports Server (NTRS)

    Hecox, K.; Galambos, R.

    1974-01-01

    Brain stem evoked potentials were recorded by conventional scalp electrodes in infants (3 weeks to 3 years of age) and adults. The latency of one of the major response components (wave V) is shown to be a function both of click intensity and the age of the subject; this latency at a given signal strength shortens postnatally to reach the adult value (about 6 msec) by 12 to 18 months of age. The demonstrated reliability and limited variability of these brain stem electrophysiological responses provide the basis for an optimistic estimate of their usefulness as an objective method for assessing hearing in infants and adults.

  3. Auditory and Cognitive Factors Associated with Speech-in-Noise Complaints following Mild Traumatic Brain Injury.

    PubMed

    Hoover, Eric C; Souza, Pamela E; Gallun, Frederick J

    2017-04-01

    Auditory complaints following mild traumatic brain injury (MTBI) are common, but few studies have addressed the role of auditory temporal processing in speech recognition complaints. In this study, deficits in understanding speech in a background of speech noise following MTBI were evaluated with the goal of comparing the relative contributions of auditory and nonauditory factors. A matched-groups design was used in which a group of listeners with a history of MTBI were compared to a group matched in age and pure-tone thresholds, as well as a control group of young listeners with normal hearing (YNH). Of the 33 listeners who participated in the study, 13 were included in the MTBI group (mean age = 46.7 yr), 11 in the Matched group (mean age = 49 yr), and 9 in the YNH group (mean age = 20.8 yr). Speech-in-noise deficits were evaluated using subjective measures as well as monaural word (Words-in-Noise test) and sentence (Quick Speech-in-Noise test) tasks, and a binaural spatial release task. Performance on these measures was compared to psychophysical tasks that evaluate monaural and binaural temporal fine-structure processing and spectral resolution. Cognitive measures of attention, processing speed, and working memory were evaluated as possible causes of differences between MTBI and Matched groups that might contribute to speech-in-noise perception deficits. A high proportion of listeners in the MTBI group reported difficulty understanding speech in noise (84%) compared to the Matched group (9.1%), and listeners who reported difficulty were more likely to have abnormal results on objective measures of speech in noise. No significant group differences were found between the MTBI and Matched listeners on any of the measures reported, but the number of abnormal tests differed across groups. Regression analysis revealed that a combination of auditory and auditory processing factors contributed to monaural speech-in-noise scores, but the benefit of spatial separation was

  4. Fast reconfiguration of high-frequency brain networks in response to surprising changes in auditory input.

    PubMed

    Nicol, Ruth M; Chapman, Sandra C; Vértes, Petra E; Nathan, Pradeep J; Smith, Marie L; Shtyrov, Yury; Bullmore, Edward T

    2012-03-01

    How do human brain networks react to dynamic changes in the sensory environment? We measured rapid changes in brain network organization in response to brief, discrete, salient auditory stimuli. We estimated network topology and distance parameters in the immediate central response period, <1 s following auditory presentation of standard tones interspersed with occasional deviant tones in a mismatch-negativity (MMN) paradigm, using magnetoencephalography (MEG) to measure synchronization of high-frequency (gamma band; 33-64 Hz) oscillations in healthy volunteers. We found that global small-world parameters of the networks were conserved between the standard and deviant stimuli. However, surprising or unexpected auditory changes were associated with local changes in clustering of connections between temporal and frontal cortical areas and with increased interlobar, long-distance synchronization during the 120- to 250-ms epoch (coinciding with the MMN-evoked response). Network analysis of human MEG data can resolve fast local topological reconfiguration and more long-range synchronization of high-frequency networks as a systems-level representation of the brain's immediate response to salient stimuli in the dynamically changing sensory environment.
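
    The network measures mentioned here (clustering, small-world topology) are standard graph metrics computed on a thresholded synchronization matrix. The sketch below shows that generic recipe under stated assumptions: the synchrony matrix is random placeholder data, and the sensor count and 15% edge-density threshold are arbitrary choices, not the parameters used in the study.

        # Hedged sketch: derive a binary graph from a gamma-band synchrony matrix and
        # compute clustering and path length; placeholder data, arbitrary threshold.
        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(1)
        n = 30                                   # e.g., 30 MEG sensors (assumed)
        sync = rng.random((n, n))
        sync = (sync + sync.T) / 2               # symmetric synchrony matrix
        np.fill_diagonal(sync, 0.0)

        density = 0.15                           # keep the strongest 15% of edges
        thresh = np.quantile(sync[np.triu_indices(n, k=1)], 1 - density)
        adjacency = (sync >= thresh).astype(int)
        np.fill_diagonal(adjacency, 0)

        g = nx.from_numpy_array(adjacency)
        print("clustering coefficient:", nx.average_clustering(g))
        if nx.is_connected(g):
            print("characteristic path length:", nx.average_shortest_path_length(g))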

  5. Brain maturity in regard to the auditory brainstem response in small-for-date neonates.

    PubMed

    Saintonge, J; Lavoie, A; Lachapelle, J; Côté, R

    1986-01-01

    The auditory brainstem response has been used in neonates at risk of hearing impairment or as a functional measurement of brain maturity. The goal of the present study was to evaluate the auditory brainstem response in small-for-date newborns, in relation to changes observed with fetal maturity in a control group. Compared to controls of similar maturity, a significant delay in the appearance of waves III and V, and in the I-V interwave interval, was observed in the small-for-date newborns, suggesting an alteration of the auditory pathway within the brainstem rather than an impairment of the peripheral auditory apparatus. Indeed, small-for-date newborns reacted to the test in a manner similar to premature babies, in whom such a delay was also observed. Our data suggest a functional brain immaturity in small-for-date newborns during the first days of life, in regard to the auditory evoked potential, which may be related to the alterations in brain development reported with fetal malnutrition.

  6. Discrimination of Timbre in Early Auditory Responses of the Human Brain

    PubMed Central

    Seol, Jaeho; Oh, MiAe; Kim, June Sic; Jin, Seung-Hyun; Kim, Sun Il; Chung, Chun Kee

    2011-01-01

    Background The issue of how differences in timbre are represented in the neural response has still not been well addressed, particularly with regard to the relevant brain mechanisms. Here we employ phasing and clipping of tones to produce auditory stimuli that differ in timbre, reflecting its multidimensional nature, and investigate the auditory response and sensory gating using magnetoencephalography (MEG). Methodology/Principal Findings Thirty-five healthy subjects without hearing deficits participated in the experiments. Pairs of tones with the same or different timbre were presented in a conditioning (S1) – testing (S2) paradigm with an interval of 500 ms. The magnitudes of the auditory M50 and M100 responses differed with timbre in both hemispheres. This result supports the idea that timbre, at least as manipulated by phasing and clipping, is discriminated in early auditory processing. An effect of S1 on the response to the second stimulus in a pair occurred in the M100 of the left hemisphere, whereas only in the right hemisphere did both the M50 and M100 responses to S2 reflect whether the two stimuli in a pair were the same or not. Both M50 and M100 magnitudes also differed with presentation order (S1 vs. S2) for both the same and different conditions in both hemispheres. Conclusions/Significance Our results demonstrate that the auditory response depends on timbre characteristics. Moreover, auditory sensory gating is determined not by the stimulus that directly evokes the response, but rather by whether or not the two stimuli are identical in timbre. PMID:21949807

  7. Discrimination of timbre in early auditory responses of the human brain.

    PubMed

    Seol, Jaeho; Oh, MiAe; Kim, June Sic; Jin, Seung-Hyun; Kim, Sun Il; Chung, Chun Kee

    2011-01-01

    The issue of how differences in timbre are represented in the neural response has still not been well addressed, particularly with regard to the relevant brain mechanisms. Here we employ phasing and clipping of tones to produce auditory stimuli that differ in timbre, reflecting its multidimensional nature, and investigate the auditory response and sensory gating using magnetoencephalography (MEG). Thirty-five healthy subjects without hearing deficits participated in the experiments. Pairs of tones with the same or different timbre were presented in a conditioning (S1)-testing (S2) paradigm with an interval of 500 ms. The magnitudes of the auditory M50 and M100 responses differed with timbre in both hemispheres. This result supports the idea that timbre, at least as manipulated by phasing and clipping, is discriminated in early auditory processing. An effect of S1 on the response to the second stimulus in a pair occurred in the M100 of the left hemisphere, whereas only in the right hemisphere did both the M50 and M100 responses to S2 reflect whether the two stimuli in a pair were the same or not. Both M50 and M100 magnitudes also differed with presentation order (S1 vs. S2) for both the same and different conditions in both hemispheres. Our results demonstrate that the auditory response depends on timbre characteristics. Moreover, auditory sensory gating is determined not by the stimulus that directly evokes the response, but rather by whether or not the two stimuli are identical in timbre.

  8. Brain stem responses evoked by stimulation of the mature cochlear nucleus with an auditory brain stem implant.

    PubMed

    O'Driscoll, Martin; El-Deredy, Wael; Ramsden, Richard T

    2011-01-01

    The Nucleus auditory brain stem implant (ABI) has been used in the hearing rehabilitation of totally deaf individuals for whom a cochlear implant is not an option such as in the case of neurofibromatosis type 2 (NF2). Intraoperative electrically evoked auditory brain stem responses (EABRs) are recorded to assist in the placement of the electrode array over the dorsal and ventral cochlear nuclei in the lateral recess of the IVth ventricle of the brain stem. This study had four objectives: (1) to characterize EABRs evoked by stimulation with an ABI in adolescents and adults with NF2, (2) to evaluate how the EABR morphology relates to auditory sensations elicited from stimulation by an ABI, (3) to establish whether there is evidence of morphology changes in the EABR with site of stimulation by the ABI, and (4) to investigate how the threshold of the EABR relates to behavioral threshold and comfortably loud sensations measured at initial device activation. Intraoperative EABRs were recorded from 34 subjects with ABIs: 19 male and 15 female, mean age 27 yrs (range 12 to 52 yrs). ABI stimulation was applied at seven different sites using either wide bipolar stimulation across the array or in subsections of the array from medial to lateral and inferior to superior. The EABRs were analyzed with respect to morphology, peak latency, and changes in these characteristics with the site of stimulation. In a subset of eight subjects, additional narrow bipolar sites were stimulated to compare the intraoperative EABR threshold levels with the behavioral threshold (T) and comfortably loud (C) levels of stimulation required at initial device activation. EABRs were elicited from 91% of subjects. Morphology varied from one to four vertex-positive peaks with mean latencies of 0.76, 1.53, 2.51, and 3.64 msecs, respectively. The presence of an EABR from stimulation by electrodes across the whole array had a high predictive value for the presence of auditory electrodes at initial device

  9. Brain stem auditory evoked potentials: effects of ovarian steroids correlated with increased incidence of Bell's palsy in pregnancy.

    PubMed

    Ben David, Y; Tal, J; Podoshin, L; Fradis, M; Sharf, M; Pratt, H; Faraggi, D

    1995-07-01

    To investigate the effect of ovarian steroids on the brain stem during changes of estrogen and progesterone blood levels, we recorded brain stem auditory evoked potentials with increased stimulus rates from 26 women treated for sterility by menotropins (Pergonal and Metrodin). These women were divided into three groups according to their estrogen and progesterone blood levels. The brain stem auditory evoked potential results revealed a significant delay of peak III only, with an increased stimulus rate in the group with the highest estrogen level. Estrogen may cause a brain stem synaptic impairment, presumably because of ischemic changes, and thus also may be responsible for a higher incidence of Bell's palsy during pregnancy.

  10. Effect of acupuncture on the auditory evoked brain stem potential in Parkinson's disease.

    PubMed

    Wang, Lingling; He, Chong; Liu, Yueguang; Zhu, Lili

    2002-03-01

    On auditory evoked brain stem potential (ABP) examination, the latency of wave V and the III-V and I-V interpeak intervals were significantly shortened in Parkinson's disease patients of the treatment group (N = 29) after acupuncture treatment. The difference in cumulative scores on Webster's scale was also decreased in the correlation analysis. An increase of dopamine in the brain and of the excitability of dopamine neurons may contribute to the therapeutic effects, described in TCM terms as subduing the pathogenic wind and tranquilizing the mind.

  11. Auditory-evoked potentials as indicator of brain serotonergic activity--first evidence in behaving cats.

    PubMed

    Juckel, G; Molnár, M; Hegerl, U; Csépe, V; Karmos, G

    1997-06-15

    Due to the increasing importance of central serotonergic neurotransmission for pathogenetic concepts and as a target of pharmacotherapeutic interventions in psychiatry, reliable indicators of this system are needed. Several findings from basic and clinical research suggest that the stimulus intensity dependence of auditory evoked potentials (AEP) may be such an indicator of behaviorally relevant aspects of serotonergic activity (Hegerl and Juckel 1993, Biol Psychiatry 33:173-187). In order to study this relationship more directly, epidural recordings over the primary and secondary auditory cortex were conducted in chronically implanted cats under intravenous (i.v.) administration of drugs influencing the serotonergic and other modulatory systems (8-OH-DPAT, m-CPP, ketanserin, DOI, apomorphine, atropine, clonidine). The intensity dependence of the cat AEP component with the highest functional similarity to that of the N1/P2 component in humans was significantly changed by influencing 5-HT1a and 5-HT2 receptors, but not 5-HT1c receptors. This serotonergic modulation of the intensity dependence was found only for the primary auditory cortex, which corresponds to the known differential innervation of the primary and secondary auditory cortex by serotonergic fibers. Our study supports the idea that the intensity dependence of the AEP could be a valuable indicator of brain serotonergic activity; however, this indicator seems to be of only relative specificity, because at least cholinergic effects on the intensity dependence were also observed.
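
    The "intensity dependence" referred to here is usually quantified as the slope of the N1/P2 amplitude regressed against stimulus intensity. A minimal sketch of that regression is shown below, assuming amplitudes have already been extracted per intensity level; the intensity levels and amplitudes are invented placeholders, not data from the study.

        # Hedged sketch: intensity dependence as the slope of amplitude vs. intensity.
        import numpy as np

        intensity_db = np.array([60, 70, 80, 90, 100])         # stimulus levels (dB), assumed
        n1p2_amp_uv = np.array([3.1, 4.0, 5.2, 6.1, 7.4])      # peak-to-peak amplitudes (uV), assumed

        slope, intercept = np.polyfit(intensity_db, n1p2_amp_uv, 1)
        print(f"intensity dependence: {slope:.3f} uV/dB")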

  12. The role of event-related brain potentials in assessing central auditory processing.

    PubMed

    Alain, Claude; Tremblay, Kelly

    2007-01-01

    The perception of complex acoustic signals such as speech and music depends on the interaction between peripheral and central auditory processing. As information travels from the cochlea to primary and associative auditory cortices, the incoming sound is subjected to increasingly more detailed and refined analysis. These various levels of analyses are thought to include low-level automatic processes that detect, discriminate and group sounds that are similar in physical attributes such as frequency, intensity, and location as well as higher-level schema-driven processes that reflect listeners' experience and knowledge of the auditory environment. In this review, we describe studies that have used event-related brain potentials in investigating the processing of complex acoustic signals (e.g., speech, music). In particular, we examine the role of hearing loss on the neural representation of sound and how cognitive factors and learning can help compensate for perceptual difficulties. The notion of auditory scene analysis is used as a conceptual framework for interpreting and studying the perception of sound.

  13. Early auditory processing in area V5/MT+ of the congenitally blind brain.

    PubMed

    Watkins, Kate E; Shakespeare, Timothy J; O'Donoghue, M Clare; Alexander, Iona; Ragge, Nicola; Cowey, Alan; Bridge, Holly

    2013-11-13

    Previous imaging studies of congenital blindness have studied individuals with heterogeneous causes of blindness, which may influence the nature and extent of cross-modal plasticity. Here, we scanned a homogeneous group of blind people with bilateral congenital anophthalmia, a condition in which both eyes fail to develop, and, as a result, the visual pathway is not stimulated by either light or retinal waves. This model of congenital blindness presents an opportunity to investigate the effects of very early visual deafferentation on the functional organization of the brain. In anophthalmic animals, the occipital cortex receives direct subcortical auditory input. We hypothesized that this pattern of subcortical reorganization ought to result in a topographic mapping of auditory frequency information in the occipital cortex of anophthalmic people. Using functional MRI, we examined auditory-evoked activity to pure tones of high, medium, and low frequencies. Activity in the superior temporal cortex was significantly reduced in anophthalmic compared with sighted participants. In the occipital cortex, a region corresponding to the cytoarchitectural area V5/MT+ was activated in the anophthalmic participants but not in sighted controls. Whereas previous studies in the blind indicate that this cortical area is activated to auditory motion, our data show it is also active for trains of pure tone stimuli and in some anophthalmic participants shows a topographic mapping (tonotopy). Therefore, this region appears to be performing early sensory processing, possibly served by direct subcortical input from the pulvinar to V5/MT+.

  14. A vision-free brain-computer interface (BCI) paradigm based on auditory selective attention.

    PubMed

    Kim, Do-Won; Cho, Jae-Hyun; Hwang, Han-Jeong; Lim, Jeong-Hwan; Im, Chang-Hwan

    2011-01-01

    The majority of recently developed brain-computer interface (BCI) systems use visual stimuli or visual feedback. However, BCI paradigms based on visual perception might not be applicable to severely locked-in patients who have lost the ability to control their eye movements or even their vision. In the present study, we investigated the feasibility of a vision-free BCI paradigm based on auditory selective attention. We used the power difference of auditory steady-state responses (ASSRs) when the participant modulates his/her attention to the target auditory stimulus. The auditory stimuli were constructed as two pure-tone burst trains with different beat frequencies (37 and 43 Hz), generated simultaneously from two speakers located at different positions (left and right). Our experimental results showed classification accuracies high enough for a binary decision (64.67%, 30 commands/min, information transfer rate (ITR) = 1.89 bits/min; 74.00%, 12 commands/min, ITR = 2.08 bits/min; 82.00%, 6 commands/min, ITR = 1.92 bits/min; 84.33%, 3 commands/min, ITR = 1.12 bits/min; without any artifact rejection, inter-trial interval = 6 sec). Based on the suggested paradigm, we implemented a first online ASSR-based BCI system, demonstrating the possibility of a totally vision-free BCI system.
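
    The information transfer rates quoted above are consistent with the standard Wolpaw ITR formula, which depends only on the number of classes, the classification accuracy, and the selection rate. The sketch below reproduces the reported figures for the two-class conditions; treating these exact parameters as the authors' own computation is an assumption, although the numbers do match.

        # Hedged sketch: Wolpaw information transfer rate for an N-class BCI.
        import math

        def itr_bits_per_min(n_classes, accuracy, selections_per_min):
            """Bits per selection (standard Wolpaw formula) times selection rate."""
            p, n = accuracy, n_classes
            bits = math.log2(n)
            if 0 < p < 1:
                bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
            return bits * selections_per_min

        # Reproduces the reported binary-decision conditions:
        print(round(itr_bits_per_min(2, 0.6467, 30), 2))   # ~1.89 bits/min
        print(round(itr_bits_per_min(2, 0.7400, 12), 2))   # ~2.08 bits/min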

  15. High Resolution Quantitative Synaptic Proteome Profiling of Mouse Brain Regions After Auditory Discrimination Learning

    PubMed Central

    Kolodziej, Angela; Smalla, Karl-Heinz; Richter, Sandra; Engler, Alexander; Pielot, Rainer; Dieterich, Daniela C.; Tischmeyer, Wolfgang; Naumann, Michael; Kähne, Thilo

    2016-01-01

    The molecular synaptic mechanisms underlying auditory learning and memory remain largely unknown. Here, the workflow of a proteomic study on auditory discrimination learning in mice is described. In this learning paradigm, mice are trained in a shuttle box Go/NoGo-task to discriminate between rising and falling frequency-modulated tones in order to avoid a mild electric foot-shock. The protocol involves the enrichment of synaptosomes from four brain areas, namely the auditory cortex, frontal cortex, hippocampus, and striatum, at different stages of training. Synaptic protein expression patterns obtained from trained mice are compared to naïve controls using a proteomic approach. To achieve sufficient analytical depth, samples are fractionated in three different ways prior to mass spectrometry, namely 1D SDS-PAGE/in-gel digestion, in-solution digestion and phospho-peptide enrichment. High-resolution proteomic analysis on a mass spectrometer and label-free quantification are used to examine synaptic protein profiles in phospho-peptide-depleted and phospho-peptide-enriched fractions of synaptosomal protein samples. A commercial software package is utilized to reveal proteins and phospho-peptides with significantly regulated relative synaptic abundance levels (trained/naïve controls). Common and differential regulation modes for the synaptic proteome in the investigated brain regions of mice after training were observed. Subsequently, meta-analyses utilizing several databases are employed to identify underlying cellular functions and biological pathways. PMID:28060347

  16. High Resolution Quantitative Synaptic Proteome Profiling of Mouse Brain Regions After Auditory Discrimination Learning.

    PubMed

    Kolodziej, Angela; Smalla, Karl-Heinz; Richter, Sandra; Engler, Alexander; Pielot, Rainer; Dieterich, Daniela C; Tischmeyer, Wolfgang; Naumann, Michael; Kähne, Thilo

    2016-12-15

    The molecular synaptic mechanisms underlying auditory learning and memory remain largely unknown. Here, the workflow of a proteomic study on auditory discrimination learning in mice is described. In this learning paradigm, mice are trained in a shuttle box Go/NoGo-task to discriminate between rising and falling frequency-modulated tones in order to avoid a mild electric foot-shock. The protocol involves the enrichment of synaptosomes from four brain areas, namely the auditory cortex, frontal cortex, hippocampus, and striatum, at different stages of training. Synaptic protein expression patterns obtained from trained mice are compared to naïve controls using a proteomic approach. To achieve sufficient analytical depth, samples are fractionated in three different ways prior to mass spectrometry, namely 1D SDS-PAGE/in-gel digestion, in-solution digestion and phospho-peptide enrichment. High-resolution proteomic analysis on a mass spectrometer and label-free quantification are used to examine synaptic protein profiles in phospho-peptide-depleted and phospho-peptide-enriched fractions of synaptosomal protein samples. A commercial software package is utilized to reveal proteins and phospho-peptides with significantly regulated relative synaptic abundance levels (trained/naïve controls). Common and differential regulation modes for the synaptic proteome in the investigated brain regions of mice after training were observed. Subsequently, meta-analyses utilizing several databases are employed to identify underlying cellular functions and biological pathways.

  17. Widespread Brain Areas Engaged during a Classical Auditory Streaming Task Revealed by Intracranial EEG

    PubMed Central

    Dykstra, Andrew R.; Halgren, Eric; Thesen, Thomas; Carlson, Chad E.; Doyle, Werner; Madsen, Joseph R.; Eskandar, Emad N.; Cash, Sydney S.

    2011-01-01

    The auditory system must constantly decompose the complex mixture of sound arriving at the ear into perceptually independent streams constituting accurate representations of individual sources in the acoustic environment. How the brain accomplishes this task is not well understood. The present study combined a classic behavioral paradigm with direct cortical recordings from neurosurgical patients with epilepsy in order to further describe the neural correlates of auditory streaming. Participants listened to sequences of pure tones alternating in frequency and indicated whether they heard one or two “streams.” The intracranial EEG was simultaneously recorded from sub-dural electrodes placed over temporal, frontal, and parietal cortex. Like healthy subjects, patients heard one stream when the frequency separation between tones was small and two when it was large. Robust evoked-potential correlates of frequency separation were observed over widespread brain areas. Waveform morphology was highly variable across individual electrode sites both within and across gross brain regions. Surprisingly, few evoked-potential correlates of perceptual organization were observed after controlling for physical stimulus differences. The results indicate that the cortical areas engaged during the streaming task are more complex and widespread than has been demonstrated by previous work, and that, by-and-large, correlates of bistability during streaming are probably located on a spatial scale not assessed – or in a brain area not examined – by the present study. PMID:21886615

  18. Electrical Brain Responses to an Auditory Illusion and the Impact of Musical Expertise.

    PubMed

    Ioannou, Christos I; Pereda, Ernesto; Lindsen, Job P; Bhattacharya, Joydeep

    2015-01-01

    The presentation of two sinusoidal tones, one to each ear, with a slight frequency mismatch yields an auditory illusion of a beating frequency equal to the frequency difference between the two tones; this is known as a binaural beat (BB). The effect of brief BB stimulation on scalp EEG has not been conclusively demonstrated. Further, no studies have examined the impact of musical training on responses to BB stimulation, yet musicians' brains are often associated with enhanced auditory processing. In this study, we analysed EEG brain responses from two groups, musicians and non-musicians, when stimulated by short presentations (1 min) of binaural beats with beat frequencies varying from 1 Hz to 48 Hz. We focused our analysis on alpha and gamma band EEG signals, which were analysed in terms of spectral power and of functional connectivity as measured by two phase-synchrony-based measures, the phase locking value and the phase lag index. Finally, these measures were used to characterize the degree of centrality, segregation and integration of the functional brain network. We found that beat frequencies belonging to the alpha band produced the most significant steady-state responses across groups. Further, processing of low-frequency (delta, theta, alpha) binaural beats had a significant impact on cortical network patterns in the alpha band oscillations. Altogether these results provide a neurophysiological account of cortical responses to BB stimulation at varying frequencies, demonstrate a modulation of cortico-cortical connectivity in musicians' brains, and further suggest a kind of neuronal entrainment bearing both linear and nonlinear relationships to the beat frequencies.
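
    The phase locking value used here is the magnitude of the average phase-difference vector between two signals. A minimal sketch under stated assumptions is given below; the two test signals are synthetic placeholders with a constant phase lag, and the band-pass filtering step a real pipeline would need is omitted for brevity.

        # Hedged sketch: phase locking value (PLV) between two narrow-band signals.
        # Synthetic placeholder signals; a real pipeline would band-pass filter first.
        import numpy as np
        from scipy.signal import hilbert

        fs, seconds = 250, 4
        t = np.arange(fs * seconds) / fs
        rng = np.random.default_rng(2)
        x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
        y = np.sin(2 * np.pi * 10 * t + 0.8) + 0.5 * rng.standard_normal(t.size)

        phase_x = np.angle(hilbert(x))
        phase_y = np.angle(hilbert(y))
        plv = np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))
        print(f"PLV = {plv:.2f}")   # near 1 for a constant phase lag, near 0 for none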

  19. Toward A High-Throughput Auditory P300-based Brain-Computer Interface

    PubMed Central

    Klobassa, D. S.; Vaughan, T. M.; Brunner, P.; Schwartz, N. E.; Wolpaw, J. R.; Neuper, C.; Sellers, E. W.

    2009-01-01

    Objective Brain-computer interface (BCI) technology can provide severely disabled people with non-muscular communication. For those most severely disabled, limitations in eye mobility or visual acuity may necessitate auditory BCI systems. The present study investigates the efficacy of the use of six environmental sounds to operate a 6×6 P300 Speller. Methods A two-group design was used to ascertain whether participants benefited from visual cues early in training. Group A (N=5) received only auditory stimuli during all 11 sessions, whereas Group AV (N=5) received simultaneous auditory and visual stimuli in initial sessions after which the visual stimuli were systematically removed. Stepwise linear discriminant analysis determined the matrix item that elicited the largest P300 response and thereby identified the desired choice. Results Online results and offline analyses showed that the two groups achieved equivalent accuracy. In the last session, eight of ten participants achieved 50% or more, and four of these achieved 75% or more, online accuracy (2.8% accuracy expected by chance). Mean bit rates averaged about 2 bits/min, and maximum bit rates reached 5.6 bits/min. Conclusions This study indicates that an auditory P300 BCI is feasible, that reasonable classification accuracy and rate of communication are achievable, and that the paradigm should be further evaluated with a group of severely disabled participants who have limited visual mobility. Significance With further development, this auditory P300 BCI could be of substantial value to severely disabled people who cannot use a visual BCI. PMID:19574091
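
    In a 6x6 P300 speller, a classifier scores the response to each row and column flash and the selected item is the intersection of the best-scoring row and column; chance accuracy is therefore 1/36, roughly the 2.8% quoted above. The sketch below shows only that selection rule; the scores and the matrix layout are invented placeholders, and the stepwise LDA training stage is not reproduced.

        # Hedged sketch: item selection in a 6x6 P300 matrix from row/column scores.
        # Classifier scores are placeholders; training (e.g., stepwise LDA) is omitted.
        import numpy as np

        rng = np.random.default_rng(3)
        row_scores = rng.standard_normal(6)
        col_scores = rng.standard_normal(6)
        row_scores[2] += 3.0                     # pretend row 2 held the target
        col_scores[4] += 3.0                     # pretend column 4 held the target

        matrix = np.array([list("ABCDEF"), list("GHIJKL"), list("MNOPQR"),
                           list("STUVWX"), list("YZ1234"), list("56789_")])
        selected = matrix[np.argmax(row_scores), np.argmax(col_scores)]
        print("selected item:", selected, "| chance accuracy:", round(1 / 36, 3))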

  20. The WIN-speller: a new intuitive auditory brain-computer interface spelling application

    PubMed Central

    Kleih, Sonja C.; Herweg, Andreas; Kaufmann, Tobias; Staiger-Sälzer, Pit; Gerstner, Natascha; Kübler, Andrea

    2015-01-01

    The objective of this study was to test the usability of a new auditory Brain-Computer Interface (BCI) application for communication. We introduce a word-based, intuitive auditory spelling paradigm, the WIN-speller. In the WIN-speller, letters are grouped by words, such as the word KLANG representing the letters A, G, K, L, and N. Thereby, the decoding step between perceiving a code and translating it to the stimuli it represents becomes superfluous. We tested 11 healthy volunteers and four end-users with motor impairment in the copy spelling mode. Spelling was successful with an average accuracy of 84% in the healthy sample. Three of the end-users communicated with average accuracies of 80% or higher, while one user was not able to communicate reliably. Even though further evaluation is required, the WIN-speller represents a potential alternative for BCI-based communication in end-users. PMID:26500476

  1. The WIN-speller: a new intuitive auditory brain-computer interface spelling application.

    PubMed

    Kleih, Sonja C; Herweg, Andreas; Kaufmann, Tobias; Staiger-Sälzer, Pit; Gerstner, Natascha; Kübler, Andrea

    2015-01-01

    The objective of this study was to test the usability of a new auditory Brain-Computer Interface (BCI) application for communication. We introduce a word-based, intuitive auditory spelling paradigm, the WIN-speller. In the WIN-speller, letters are grouped by words, such as the word KLANG representing the letters A, G, K, L, and N. Thereby, the decoding step between perceiving a code and translating it to the stimuli it represents becomes superfluous. We tested 11 healthy volunteers and four end-users with motor impairment in the copy spelling mode. Spelling was successful with an average accuracy of 84% in the healthy sample. Three of the end-users communicated with average accuracies of 80% or higher, while one user was not able to communicate reliably. Even though further evaluation is required, the WIN-speller represents a potential alternative for BCI-based communication in end-users.

  2. Role of auditory brain function assessment by SPECT in cochlear implant side selection.

    PubMed

    Di Nardo, W; Giannantonio, S; Di Giuda, D; De Corso, E; Schinaia, L; Paludetti, G

    2013-02-01

    Pre-surgery evaluation, indications for cochlear implantation and expectations in terms of post-operative functional results remain challenging topics in pre-lingually deaf adults. The purpose of our study was to determine the benefits of single photon emission computed tomography (SPECT) assessment in the pre-surgical evaluation of pre-lingually deaf adults who are candidates for cochlear implantation. In 7 pre-lingually, profoundly deaf patients, brain SPECT was performed under baseline conditions and during bilateral simultaneous multi-frequency acoustic stimulation. Six sagittal tomograms of both temporal cortices were used for semi-quantitative analysis in each patient. Percentage increases in cortical perfusion resulting from auditory stimulation were calculated. The results showed an inter-hemispheric asymmetry in the extent and intensity of activation in the stimulated temporal areas. Consistent with the brain activation data, patients were implanted on the side that showed higher activation after acoustic stimulation. Considering the improvement in auditory perception performance, it was possible to identify a relationship between cortical brain activity shown by SPECT and hearing performance, and, even more significantly, a correlation between post-operative functional performance and activation of the most medial part of the sagittal temporal tomograms, corresponding to medium-high frequencies. In light of these findings, we believe that brain SPECT could be considered in the evaluation of deaf patients who are candidates for cochlear implantation, and that it plays a major role in the functional assessment of the auditory cortex of pre-lingually deaf subjects, even if further studies are necessary to conclusively establish its utility. Further developments of this technique are possible by using trans-tympanic electrical stimulation of the cochlear promontory, which could give the opportunity to study completely deaf patients, whose evaluation is objectively difficult

  3. Synchrony of auditory brain responses predicts behavioral ability to keep still in children with autism spectrum disorder: Auditory-evoked response in children with autism spectrum disorder.

    PubMed

    Yoshimura, Yuko; Kikuchi, Mitsuru; Hiraishi, Hirotoshi; Hasegawa, Chiaki; Takahashi, Tetsuya; Remijn, Gerard B; Oi, Manabu; Munesue, Toshio; Higashida, Haruhiro; Minabe, Yoshio

    2016-01-01

    The auditory-evoked P1m, recorded by magnetoencephalography, reflects a central auditory processing ability in human children. One recent study revealed that asynchrony of P1m between the right and left hemispheres reflected a central auditory processing disorder (i.e., attention deficit hyperactivity disorder, ADHD) in children. However, to date, the relationship between auditory P1m right-left hemispheric synchronization and the comorbidity of hyperactivity in children with autism spectrum disorder (ASD) is unknown. In this study, based on the previous report of P1m asynchrony in children with ADHD, we investigated the relationship between voice-evoked P1m right-left hemispheric synchronization and hyperactivity in children with ASD, to clarify whether P1m right-left hemispheric synchronization is related to the symptom of hyperactivity in ASD. In addition to synchronization, we investigated right-left hemispheric lateralization. Our findings failed to demonstrate significant differences in these values between ASD children with and without the symptom of hyperactivity, which was evaluated using the Autism Diagnostic Observation Schedule, Generic (ADOS-G) subscale. However, there was a significant correlation between the degree of hemispheric synchronization and the ability to keep still during 12-minute MEG recording periods. Our results also suggest that asynchrony in the bilateral auditory processing system of the brain is associated with ADHD-like symptoms in children with ASD.

  4. Connectivity in the human brain dissociates entropy and complexity of auditory inputs.

    PubMed

    Nastase, Samuel A; Iacovella, Vittorio; Davis, Ben; Hasson, Uri

    2015-03-01

    Complex systems are described according to two central dimensions: (a) the randomness of their output, quantified via entropy; and (b) their complexity, which reflects the organization of a system's generators. Whereas some approaches hold that complexity can be reduced to uncertainty or entropy, an axiom of complexity science is that signals with very high or very low entropy are generated by relatively non-complex systems, while complex systems typically generate outputs with entropy peaking between these two extremes. In understanding their environment, individuals would benefit from coding for both input entropy and complexity; entropy indexes uncertainty and can inform probabilistic coding strategies, whereas complexity reflects a concise and abstract representation of the underlying environmental configuration, which can serve independent purposes, e.g., as a template for generalization and rapid comparisons between environments. Using functional neuroimaging, we demonstrate that, in response to passively processed auditory inputs, functional integration patterns in the human brain track both the entropy and complexity of the auditory signal. Connectivity between several brain regions scaled monotonically with input entropy, suggesting sensitivity to uncertainty, whereas connectivity between other regions tracked entropy in a convex manner consistent with sensitivity to input complexity. These findings suggest that the human brain simultaneously tracks the uncertainty of sensory data and effectively models their environmental generators. Copyright © 2014. Published by Elsevier Inc.
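
    The entropy/complexity distinction drawn above can be made concrete with the Shannon entropy of a symbol sequence: both a constant and a fully random sequence come from simple generators (minimal and maximal entropy, respectively), while a structured sequence sits in between. The sketch below uses invented example sequences only to illustrate that point; it is not the authors' stimulus set or analysis.

        # Hedged sketch: Shannon entropy of symbol sequences (invented examples).
        import numpy as np
        from collections import Counter

        def shannon_entropy(seq):
            counts = np.array(list(Counter(seq).values()), dtype=float)
            p = counts / counts.sum()
            return float(-(p * np.log2(p)).sum())

        rng = np.random.default_rng(4)
        constant = "A" * 200                               # minimal entropy, simple generator
        random_ = "".join(rng.choice(list("ABCD"), 200))   # maximal entropy, simple generator
        structured = "ABAC" * 50                           # intermediate entropy, structured generator
        for name, seq in [("constant", constant), ("random", random_), ("structured", structured)]:
            print(name, round(shannon_entropy(seq), 2), "bits/symbol")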

  5. Connectivity in the human brain dissociates entropy and complexity of auditory inputs☆

    PubMed Central

    Nastase, Samuel A.; Iacovella, Vittorio; Davis, Ben; Hasson, Uri

    2015-01-01

    Complex systems are described according to two central dimensions: (a) the randomness of their output, quantified via entropy; and (b) their complexity, which reflects the organization of a system's generators. Whereas some approaches hold that complexity can be reduced to uncertainty or entropy, an axiom of complexity science is that signals with very high or very low entropy are generated by relatively non-complex systems, while complex systems typically generate outputs with entropy peaking between these two extremes. In understanding their environment, individuals would benefit from coding for both input entropy and complexity; entropy indexes uncertainty and can inform probabilistic coding strategies, whereas complexity reflects a concise and abstract representation of the underlying environmental configuration, which can serve independent purposes, e.g., as a template for generalization and rapid comparisons between environments. Using functional neuroimaging, we demonstrate that, in response to passively processed auditory inputs, functional integration patterns in the human brain track both the entropy and complexity of the auditory signal. Connectivity between several brain regions scaled monotonically with input entropy, suggesting sensitivity to uncertainty, whereas connectivity between other regions tracked entropy in a convex manner consistent with sensitivity to input complexity. These findings suggest that the human brain simultaneously tracks the uncertainty of sensory data and effectively models their environmental generators. PMID:25536493

  6. An online multi-channel SSVEP-based brain-computer interface using a canonical correlation analysis method.

    PubMed

    Bin, Guangyu; Gao, Xiaorong; Yan, Zheng; Hong, Bo; Gao, Shangkai

    2009-08-01

    In recent years, there has been increasing interest in using steady-state visual evoked potential (SSVEP) in brain-computer interface (BCI) systems. However, several aspects of current SSVEP-based BCI systems need improvement, specifically in relation to speed, user variation and ease of use. With these improvements in mind, this paper presents an online multi-channel SSVEP-based BCI system using a canonical correlation analysis (CCA) method for extraction of frequency information associated with the SSVEP. The key parameters, channel location, window length and the number of harmonics, are investigated using offline data, and the result used to guide the design of the online system. An SSVEP-based BCI system with six targets, which use nine channel locations in the occipital and parietal lobes, a window length of 2 s and the first harmonic, is used for online testing on 12 subjects. The results show that the proposed BCI system has a high performance, achieving an average accuracy of 95.3% and an information transfer rate of 58 +/- 9.6 bit min(-1). The positive characteristics of the proposed system are that channel selection and parameter optimization are not required, the possible use of harmonic frequencies, low user variation and easy setup.
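
    Frequency detection with CCA, as used here, correlates a multichannel EEG window with sine/cosine reference templates at each candidate stimulation frequency (plus harmonics) and picks the frequency with the largest canonical correlation. A minimal sketch under stated assumptions is shown below; the EEG window is synthetic placeholder data and the six candidate frequencies are arbitrary examples, not necessarily those of the original system.

        # Hedged sketch: CCA-based SSVEP frequency detection on a synthetic EEG window.
        import numpy as np
        from sklearn.cross_decomposition import CCA

        fs, win_s, n_ch = 250, 2.0, 9
        t = np.arange(int(fs * win_s)) / fs
        rng = np.random.default_rng(5)
        true_f = 10.0                                        # simulated stimulation frequency
        eeg = 0.3 * np.sin(2 * np.pi * true_f * t)[:, None] + rng.standard_normal((t.size, n_ch))

        def reference(f, n_harmonics=1):
            cols = []
            for h in range(1, n_harmonics + 1):
                cols += [np.sin(2 * np.pi * h * f * t), np.cos(2 * np.pi * h * f * t)]
            return np.column_stack(cols)

        def cca_corr(x, y):
            u, v = CCA(n_components=1).fit_transform(x, y)
            return abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1])

        candidates = [8.0, 9.0, 10.0, 11.0, 12.0, 13.0]      # arbitrary example targets
        scores = {f: cca_corr(eeg, reference(f)) for f in candidates}
        print("detected frequency:", max(scores, key=scores.get))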

  7. An online multi-channel SSVEP-based brain-computer interface using a canonical correlation analysis method

    NASA Astrophysics Data System (ADS)

    Bin, Guangyu; Gao, Xiaorong; Yan, Zheng; Hong, Bo; Gao, Shangkai

    2009-08-01

    In recent years, there has been increasing interest in using steady-state visual evoked potential (SSVEP) in brain-computer interface (BCI) systems. However, several aspects of current SSVEP-based BCI systems need improvement, specifically in relation to speed, user variation and ease of use. With these improvements in mind, this paper presents an online multi-channel SSVEP-based BCI system using a canonical correlation analysis (CCA) method for extraction of frequency information associated with the SSVEP. The key parameters, channel location, window length and the number of harmonics, are investigated using offline data, and the result used to guide the design of the online system. An SSVEP-based BCI system with six targets, which use nine channel locations in the occipital and parietal lobes, a window length of 2 s and the first harmonic, is used for online testing on 12 subjects. The results show that the proposed BCI system has a high performance, achieving an average accuracy of 95.3% and an information transfer rate of 58 ± 9.6 bit min-1. The positive characteristics of the proposed system are that channel selection and parameter optimization are not required, the possible use of harmonic frequencies, low user variation and easy setup.

  8. Experience-based Auditory Predictions Modulate Brain Activity to Silence as do Real Sounds.

    PubMed

    Chouiter, Leila; Tzovara, Athina; Dieguez, Sebastian; Annoni, Jean-Marie; Magezi, David; De Lucia, Marzia; Spierer, Lucas

    2015-10-01

    Interactions between stimuli's acoustic features and experience-based internal models of the environment enable listeners to compensate for the disruptions in auditory streams that are regularly encountered in noisy environments. However, whether auditory gaps are filled in predictively or restored a posteriori remains unclear. The current lack of positive statistical evidence that internal models can actually shape brain activity as would real sounds precludes accepting predictive accounts of filling-in phenomenon. We investigated the neurophysiological effects of internal models by testing whether single-trial electrophysiological responses to omitted sounds in a rule-based sequence of tones with varying pitch could be decoded from the responses to real sounds and by analyzing the ERPs to the omissions with data-driven electrical neuroimaging methods. The decoding of the brain responses to different expected, but omitted, tones in both passive and active listening conditions was above chance based on the responses to the real sound in active listening conditions. Topographic ERP analyses and electrical source estimations revealed that, in the absence of any stimulation, experience-based internal models elicit an electrophysiological activity different from noise and that the temporal dynamics of this activity depend on attention. We further found that the expected change in pitch direction of omitted tones modulated the activity of left posterior temporal areas 140-200 msec after the onset of omissions. Collectively, our results indicate that, even in the absence of any stimulation, internal models modulate brain activity as do real sounds, indicating that auditory filling in can be accounted for by predictive activity.

  9. [Effect of sleep deprivation on visual evoked potentials and brain stem auditory evoked potentials in epileptics].

    PubMed

    Urumova, L T; Kovalenko, G A; Tsunikov, A I; Sumskiĭ, L I

    1984-01-01

    The article reports on the first study of evoked brain activity in epileptic patients (n = 20) following sleep deprivation. Analysis of the data revealed a tendency toward shortening of the peak latencies of the visual evoked potential components in the 100-200 msec range, and of wave V and the III-V interpeak interval of the brain stem auditory evoked potentials, in patients with temporal lobe epilepsy. This phenomenon may indicate the elimination of stabilizing control involving the specific conducting pathways and, possibly, accelerated conduction of the specific sensory signal.

  10. Auditory brain stem responses from graduates of an intensive care nursery using an insert earphone.

    PubMed

    Gorga, M P; Kaminski, J R; Beauchaine, K A

    1988-06-01

    Auditory brain stem responses (ABR) were measured from graduates of an intensive care nursery using an insert earphone. Approximately 95% of all ears had click-evoked ABR thresholds of 30 dB nHL or less. Absolute latencies of waves I and V were within the range observed for a circumaural earphone, once the delay introduced by the insert earphone's sound delivery tube was taken into account. Finally, interpeak latency differences and interaural symmetry were comparable to values observed when a similar group of patients were tested with a circumaural earphone.
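
    The delay mentioned above arises because sound must travel along the insert earphone's delivery tube, and it is simply tube length divided by the speed of sound. The worked example below uses an assumed typical tube length, not a value reported in the study.

        # Hedged sketch: acoustic delay of an insert-earphone sound delivery tube.
        tube_length_m = 0.28          # assumed typical tube length (~28 cm), not from the study
        speed_of_sound_m_s = 343.0    # in air at about 20 degrees C
        delay_ms = tube_length_m / speed_of_sound_m_s * 1e3
        print(f"tube delay ~ {delay_ms:.2f} ms")   # ~0.8 ms to subtract from measured latencies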

  11. Brain stem responses evoked by stimulation with an auditory brain stem implant in children with cochlear nerve aplasia or hypoplasia.

    PubMed

    O'Driscoll, Martin; El-Deredy, Wael; Atas, Ahmet; Sennaroglu, Gonca; Sennaroglu, Levent; Ramsden, Richard T

    2011-01-01

    The inclusion criteria for an auditory brain stem implant (ABI) have been extended beyond the traditional, postlingually deafened adult with Neurofibromatosis type 2, to include children who are born deaf due to cochlear nerve aplasia or hypoplasia and for whom a cochlear implant is not an option. Fitting the ABI for these new candidates presents a challenge, and intraoperative electrically evoked auditory brain stem responses (EABRs) may assist in the surgical placement of the electrode array over the dorsal and ventral cochlear nucleus in the brain stem and in the postoperative programming of the device. This study had four objectives: (1) to characterize the EABR by stimulation of the cochlear nucleus in children, (2) to establish whether there are any changes between the EABR recorded intraoperatively and again just before initial behavioral testing with the device, (3) to establish whether there is evidence of morphology changes in the EABR depending on the site of stimulation with the ABI, and (4) to investigate how the EABR relates to behavioral measurements and the presence of auditory and nonauditory sensations perceived with the ABI at initial device activation. Intra- and postoperative EABRs were recorded from six congenitally deaf children with ABIs, four boys and two girls, mean age 4.2 yrs (range 3.2 to 5.0 yrs). The ABI was stimulated at nine different bipolar sites on the array, and the EABRs recorded were analyzed with respect to the morphology and peak latency with site of stimulation for each recording session. The relationship between the EABR waveforms and the presence or absence of auditory electrodes at initial device activation was investigated. The EABR threshold levels were compared with the behavioral threshold (T) and comfortably loud (C) levels of stimulation required at initial device activation. EABRs were elicited from all children on both test occasions. Responses contained a possible combination of one to three peaks from a total

  12. Effects of Visual and Auditory Background on Reading Achievement Test Performance of Brain-Injured and Non Brain-Injured Children.

    ERIC Educational Resources Information Center

    Carter, John L.

    Forty-two brain injured boys and 42 non brain injured boys (aged 11-6 to 12-6) were tested to determine the effects of increasing amounts of visual and auditory distraction on reading performance. The Stanford Achievement Reading Comprehension Test was administered with three degrees of distraction. The visual distraction consisted of either very…

  13. An Auditory-Tactile Visual Saccade-Independent P300 Brain-Computer Interface.

    PubMed

    Yin, Erwei; Zeyl, Timothy; Saab, Rami; Hu, Dewen; Zhou, Zongtan; Chau, Tom

    2016-02-01

    Most P300 event-related potential (ERP)-based brain-computer interface (BCI) studies focus on gaze shift-dependent BCIs, which cannot be used by people who have lost voluntary eye movement. However, the performance of visual saccade-independent P300 BCIs is generally poor. To improve saccade-independent BCI performance, we propose a bimodal P300 BCI approach that simultaneously employs auditory and tactile stimuli. The proposed P300 BCI is a vision-independent system because no visual interaction is required of the user. Specifically, we designed a direction-congruent bimodal paradigm by randomly and simultaneously presenting auditory and tactile stimuli from the same direction. Furthermore, the channels and number of trials were tailored to each user to improve online performance. With 12 participants, the average online information transfer rate (ITR) of the bimodal approach improved by 45.43% and 51.05% over that attained, respectively, with the auditory and tactile approaches individually. Importantly, the average online ITR of the bimodal approach, including the break time between selections, reached 10.77 bits/min. These findings suggest that the proposed bimodal system holds promise as a practical visual saccade-independent P300 BCI.

  14. Auditory brain development in premature infants: the importance of early experience.

    PubMed

    McMahon, Erin; Wintermark, Pia; Lahav, Amir

    2012-04-01

    Preterm infants in the neonatal intensive care unit (NICU) often close their eyes in response to bright lights, but they cannot close their ears in response to loud sounds. The sudden transition from the womb to the overly noisy world of the NICU increases the vulnerability of these high-risk newborns. There is a growing concern that the excess noise typically experienced by NICU infants disrupts their growth and development, putting them at risk for hearing, language, and cognitive disabilities. Preterm neonates are especially sensitive to noise because their auditory system is at a critical period of neurodevelopment, and they are no longer shielded by maternal tissue. This paper discusses the developmental milestones of the auditory system and suggests ways to enhance the quality control and type of sounds delivered to NICU infants. We argue that positive auditory experience is essential for early brain maturation and may be a contributing factor for healthy neurodevelopment. Further research is needed to optimize the hospital environment for preterm newborns and to increase their potential to develop into healthy children. © 2012 New York Academy of Sciences.

  15. Design, simulation and experimental validation of a novel flexible neural probe for deep brain stimulation and multichannel recording.

    PubMed

    Lai, Hsin-Yi; Liao, Lun-De; Lin, Chin-Teng; Hsu, Jui-Hsiang; He, Xin; Chen, You-Yin; Chang, Jyh-Yeong; Chen, Hui-Fen; Tsang, Siny; Shih, Yen-Yu I

    2012-06-01

    An implantable micromachined neural probe with multichannel electrode arrays for both neural signal recording and electrical stimulation was designed, simulated and experimentally validated for deep brain stimulation (DBS) applications. The developed probe has a rough three-dimensional microstructure on the electrode surface to maximize the electrode-tissue contact area. The flexible, polyimide-based microelectrode arrays were each composed of a long shaft (14.9 mm in length) and 16 electrodes (5 µm thick and with a diameter of 16 µm). The ability of these arrays to record and stimulate specific areas in a rat brain was evaluated. Moreover, we have developed a finite element model (FEM) applied to an electric field to evaluate the volume of tissue activated (VTA) by DBS as a function of the stimulation parameters. The signal-to-noise ratio ranged from 4.4 to 5 over a 50 day recording period, indicating that the laboratory-designed neural probe is reliable and may be used successfully for long-term recordings. The somatosensory evoked potential (SSEP) obtained by thalamic stimulations and in vivo electrode-electrolyte interface impedance measurements was stable for 50 days and demonstrated that the neural probe is feasible for long-term stimulation. A strongly linear (positive correlation) relationship was observed among the simulated VTA, the absolute value of the SSEP during the 200 ms post-stimulus period (ΣSSEP) and c-Fos expression, indicating that the simulated VTA has perfect sensitivity to predict the evoked responses (c-Fos expression). This laboratory-designed neural probe and its FEM simulation represent a simple, functionally effective technique for studying DBS and neural recordings in animal models.

  16. Design, simulation and experimental validation of a novel flexible neural probe for deep brain stimulation and multichannel recording

    NASA Astrophysics Data System (ADS)

    Lai, Hsin-Yi; Liao, Lun-De; Lin, Chin-Teng; Hsu, Jui-Hsiang; He, Xin; Chen, You-Yin; Chang, Jyh-Yeong; Chen, Hui-Fen; Tsang, Siny; Shih, Yen-Yu I.

    2012-06-01

    An implantable micromachined neural probe with multichannel electrode arrays for both neural signal recording and electrical stimulation was designed, simulated and experimentally validated for deep brain stimulation (DBS) applications. The developed probe has a rough three-dimensional microstructure on the electrode surface to maximize the electrode-tissue contact area. The flexible, polyimide-based microelectrode arrays were each composed of a long shaft (14.9 mm in length) and 16 electrodes (5 µm thick and with a diameter of 16 µm). The ability of these arrays to record and stimulate specific areas in a rat brain was evaluated. Moreover, we have developed a finite element model (FEM) applied to an electric field to evaluate the volume of tissue activated (VTA) by DBS as a function of the stimulation parameters. The signal-to-noise ratio ranged from 4.4 to 5 over a 50 day recording period, indicating that the laboratory-designed neural probe is reliable and may be used successfully for long-term recordings. The somatosensory evoked potential (SSEP) obtained by thalamic stimulations and in vivo electrode-electrolyte interface impedance measurements was stable for 50 days and demonstrated that the neural probe is feasible for long-term stimulation. A strongly linear (positive correlation) relationship was observed among the simulated VTA, the absolute value of the SSEP during the 200 ms post-stimulus period (ΣSSEP) and c-Fos expression, indicating that the simulated VTA has perfect sensitivity to predict the evoked responses (c-Fos expression). This laboratory-designed neural probe and its FEM simulation represent a simple, functionally effective technique for studying DBS and neural recordings in animal models.

  17. GABA Immunoreactivity in Auditory and Song Control Brain Areas of Zebra Finches

    PubMed Central

    Pinaud, Raphael; Mello, Claudio V.

    2009-01-01

    Inhibitory transmission is critical to sensory and motor processing and is believed to play a role in experience-dependent plasticity. The main inhibitory neurotransmitter in vertebrates, GABA, has been implicated in both sensory and motor aspects of vocalization in songbirds. To understand the role of GABAergic mechanisms in vocal communication, GABAergic elements must be characterized fully. Hence, we investigated GABA immunohistochemistry in the zebra finch brain, emphasizing auditory areas and song control nuclei. Several nuclei of the ascending auditory pathway showed a moderate to high density of GABAergic neurons including the cochlear nuclei, nucleus laminaris, superior olivary nucleus, mesencephalic nucleus lateralis pars dorsalis, and nucleus ovoidalis. Telencephalic auditory areas, including field L subfields L1, L2a and L3, as well as the caudomedial nidopallium (NCM) and mesopallium (CMM), contained GABAergic cells at particularly high densities. Considerable GABA labeling was also seen in the shelf area of caudodorsal nidopallium, and the cup area in the arcopallium, as well as in area X, the lateral magnocellular nucleus of the anterior nidopallium, the robust nucleus of the arcopallium and nidopallial nucleus HVC. GABAergic cells were typically small, most likely local inhibitory interneurons, although large GABA-positive cells that were sparsely distributed were also identified. GABA-positive neurites and puncta were identified in most nuclei of the ascending auditory pathway and in song control nuclei. Our data are in accordance with a prominent role of GABAergic mechanisms in regulating the neural circuits involved in song perceptual processing, motor production, and vocal learning in songbirds. PMID:17466487

  18. Age-related changes in auditory nerve-inner hair cell connections, hair cell numbers, auditory brain stem response and gap detection in UM-HET4 mice.

    PubMed

    Altschuler, R A; Dolan, D F; Halsey, K; Kanicki, A; Deng, N; Martin, C; Eberle, J; Kohrman, D C; Miller, R A; Schacht, J

    2015-04-30

    This study compared the timing of appearance of three components of age-related hearing loss that determine the pattern and severity of presbycusis: the functional and structural pathologies of sensory cells and neurons and changes in gap detection (GD), the latter as an indicator of auditory temporal processing. Using UM-HET4 mice, genetically heterogeneous mice derived from four inbred strains, we studied the integrity of inner and outer hair cells by position along the cochlear spiral, inner hair cell-auditory nerve connections, spiral ganglion neurons (SGN), and determined auditory thresholds, as well as pre-pulse and gap inhibition of the acoustic startle reflex (ASR). Comparisons were made between mice of 5-7, 22-24 and 27-29 months of age. There was individual variability among mice in the onset and extent of age-related auditory pathology. At 22-24 months of age a moderate to large loss of outer hair cells was restricted to the apical third of the cochlea and threshold shifts in the auditory brain stem response were minimal. There was also a large and significant loss of inner hair cell-auditory nerve connections and a significant reduction in GD. The expression of Ntf3 in the cochlea was significantly reduced. At 27-29 months of age there was no further change in the mean number of synaptic connections per inner hair cell or in GD, but a moderate to large loss of outer hair cells was found across all cochlear turns as well as significantly increased ABR threshold shifts at 4, 12, 24 and 48 kHz. A statistical analysis of correlations on an individual animal basis revealed that neither the hair cell loss nor the ABR threshold shifts correlated with loss of GD or with the loss of connections, consistent with independent pathological mechanisms. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  19. Case study: auditory brain responses in a minimally verbal child with autism and cerebral palsy

    PubMed Central

    Yau, Shu H.; McArthur, Genevieve; Badcock, Nicholas A.; Brock, Jon

    2015-01-01

    An estimated 30% of individuals with autism spectrum disorders (ASD) remain minimally verbal into late childhood, but research on cognition and brain function in ASD focuses almost exclusively on those with good or only moderately impaired language. Here we present a case study investigating auditory processing of GM, a nonverbal child with ASD and cerebral palsy. At the age of 8 years, GM was tested using magnetoencephalography (MEG) whilst passively listening to speech sounds and complex tones. Whereas typically developing children and verbal autistic children all demonstrated similar brain responses to speech and nonspeech sounds, GM produced much stronger responses to nonspeech than speech, particularly in the 65–165 ms (M50/M100) time window post-stimulus onset. GM was retested aged 10 years using electroencephalography (EEG) whilst passively listening to pure tone stimuli. Consistent with her MEG response to complex tones, GM showed an unusually early and strong response to pure tones in her EEG responses. The consistency of the MEG and EEG data in this single case study demonstrates both the potential and the feasibility of these methods in the study of minimally verbal children with ASD. Further research is required to determine whether GM's atypical auditory responses are characteristic of other minimally verbal children with ASD or of other individuals with cerebral palsy. PMID:26150768

  20. Synaptic proteome changes in mouse brain regions upon auditory discrimination learning.

    PubMed

    Kähne, Thilo; Kolodziej, Angela; Smalla, Karl-Heinz; Eisenschmidt, Elke; Haus, Utz-Uwe; Weismantel, Robert; Kropf, Siegfried; Wetzel, Wolfram; Ohl, Frank W; Tischmeyer, Wolfgang; Naumann, Michael; Gundelfinger, Eckart D

    2012-08-01

    Changes in synaptic efficacy underlying learning and memory processes are assumed to be associated with alterations of the protein composition of synapses. Here, we performed a quantitative proteomic screen to monitor changes in the synaptic proteome of four brain areas (auditory cortex, frontal cortex, hippocampus, striatum) during auditory learning. Mice were trained in a shuttle box GO/NO-GO paradigm to discriminate between rising and falling frequency modulated tones to avoid mild electric foot shock. Control-treated mice received corresponding numbers of either the tones or the foot shocks. Six hours and 24 h later, the composition of a fraction enriched in synaptic cytomatrix-associated proteins was compared to that obtained from naïve mice by quantitative mass spectrometry. In the synaptic protein fraction obtained from trained mice, the average percentage (±SEM) of downregulated proteins (59.9 ± 0.5%) exceeded that of upregulated proteins (23.5 ± 0.8%) in the brain regions studied. This effect was significantly smaller in foot shock (42.7 ± 0.6% down, 40.7 ± 1.0% up) and tone controls (43.9 ± 1.0% down, 39.7 ± 0.9% up). These data suggest that learning processes initially induce removal and/or degradation of proteins from presynaptic and postsynaptic cytoskeletal matrices before these structures can acquire a new, postlearning organisation. In silico analysis points to a general role of insulin-like signalling in this process.

  1. Three-channel Lissajous' trajectory of human auditory brain-stem evoked potentials. I. Normative measures.

    PubMed

    Pratt, H; Bleich, N; Martin, W H

    1985-12-01

    Three-channel Lissajous' trajectories (3-CLTs) of auditory brainstem evoked potentials (ABEPs) were obtained from 14 humans (28 ears) in response to 75 dB nHL, 10/sec alternating polarity clicks. A normative set of 3-CLT quantitative measures was calculated and compared with amplitudes and latencies of the simultaneously recorded, single-channel, vertex-mastoid ABEP. The comparison included average values as well as intersubject variability. The 3-CLT measures included: apex latencies; planar segment durations, orientation, size and shape; trajectory-amplitude peaks and their latencies. Apex latencies of 3-CLT were comparable to peak latencies of the vertex-mastoid records, both in absolute values and in intersubject variability. Durations of planar segments were approximately 0.7 msec and their standard deviations were about a half of their average. Individual planar segment orientations were typically within 50 degrees of the normative average. Trajectory amplitudes were, in general, somewhat larger than peak amplitudes of the vertex-mastoid records, while their intersubject variabilities were comparable. Size and shape measures of planar segments were variable across subjects, making their clinical use, in their present form, questionable. The quantitative study of the 3-CLT of auditory brain-stem evoked potentials will enable evaluation of normal, as well as pathological, evoked potentials, to further the understanding and utility of this comprehensive representation of brain-stem function.

  2. Electrical Brain Responses to an Auditory Illusion and the Impact of Musical Expertise

    PubMed Central

    Ioannou, Christos I.; Pereda, Ernesto; Lindsen, Job P.; Bhattacharya, Joydeep

    2015-01-01

    The presentation of two sinusoidal tones, one to each ear, with a slight frequency mismatch yields an auditory illusion of a beating frequency equal to the frequency difference between the two tones; this is known as binaural beat (BB). The effect of brief BB stimulation on scalp EEG is not conclusively demonstrated. Further, no studies have examined the impact of musical training associated with BB stimulation, yet musicians' brains are often associated with enhanced auditory processing. In this study, we analysed EEG brain responses from two groups, musicians and non-musicians, when stimulated by short presentation (1 min) of binaural beats with beat frequency varying from 1 Hz to 48 Hz. We focused our analysis on alpha and gamma band EEG signals, and they were analysed in terms of spectral power, and functional connectivity as measured by two phase synchrony based measures, phase locking value and phase lag index. Finally, these measures were used to characterize the degree of centrality, segregation and integration of the functional brain network. We found that beat frequencies belonging to alpha band produced the most significant steady-state responses across groups. Further, processing of low frequency (delta, theta, alpha) binaural beats had significant impact on cortical network patterns in the alpha band oscillations. Altogether, these results provide a neurophysiological account of cortical responses to BB stimulation at varying frequencies, demonstrate a modulation of cortico-cortical connectivity in musicians' brains, and further suggest a form of neuronal entrainment with both linear and nonlinear relationships to the beating frequencies. PMID:26065708
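
    As a rough illustration of the two connectivity measures named in the abstract, the sketch below computes a phase locking value (PLV) and a phase lag index (PLI) between two band-limited signals via the Hilbert transform. This is the generic textbook formulation of both indices, not the authors' exact pipeline, and the test signals are synthetic.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def plv_and_pli(x: np.ndarray, y: np.ndarray) -> tuple[float, float]:
        """Phase locking value and phase lag index between two narrow-band (band-pass filtered) signals."""
        phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
        plv = np.abs(np.mean(np.exp(1j * phase_diff)))      # 0 = no phase locking, 1 = perfect locking
        pli = np.abs(np.mean(np.sign(np.sin(phase_diff))))  # discounts zero-lag (volume-conduction-like) coupling
        return float(plv), float(pli)

    # Toy example: two noisy 10 Hz (alpha-band) signals sharing a fixed phase lag.
    fs = 250
    t = np.arange(0, 10, 1 / fs)
    x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
    y = np.sin(2 * np.pi * 10 * t + np.pi / 4) + 0.5 * np.random.randn(t.size)
    print(plv_and_pli(x, y))
    ```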

  3. Diffeomorphic brain mapping based on T1-weighted images: improvement of registration accuracy by multichannel mapping.

    PubMed

    Djamanakova, Aigerim; Faria, Andreia V; Hsu, John; Ceritoglu, Can; Oishi, Kenichi; Miller, Michael I; Hillis, Argye E; Mori, Susumu

    2013-01-01

    The aim was to improve image registration accuracy in neurodegenerative populations. This study used primary progressive aphasia, aged control, and young control T1-weighted images. Mapping to a template image was performed using single-channel Large Deformation Diffeomorphic Metric Mapping (LDDMM), a dual-channel method with ventricular anatomy in the second channel, and a dual-channel with appendage method, which utilized a priori knowledge of template ventricular anatomy in the deformable atlas. Our results indicated substantial improvement in the registration accuracy over single-contrast-based brain mapping, mainly in the lateral ventricles and regions surrounding them. Dual-channel mapping significantly (P < 0.001) reduced the number of misclassified lateral ventricle voxels (based on a manually defined reference) over single-channel mapping. The dual-channel (w/appendage) method further reduced (P < 0.001) misclassification over the dual-channel method, indicating that the appendage provides more accurate anatomical correspondence for deformation. Brain anatomical mapping by shape normalization is widely used for quantitative anatomical analysis. However, in many geriatric and neurodegenerative disorders, severe tissue atrophy poses a unique challenge for accurate mapping of voxels, especially around the lateral ventricles. In this study we demonstrate our ability to improve mapping accuracy by incorporating ventricular anatomy in LDDMM and by utilizing a priori knowledge of ventricular anatomy in the deformable atlas. Copyright © 2012 Wiley Periodicals, Inc.

  4. Enhanced peripheral visual processing in congenitally deaf humans is supported by multiple brain regions, including primary auditory cortex.

    PubMed

    Scott, Gregory D; Karns, Christina M; Dow, Mark W; Stevens, Courtney; Neville, Helen J

    2014-01-01

    Brain reorganization associated with altered sensory experience clarifies the critical role of neuroplasticity in development. An example is enhanced peripheral visual processing associated with congenital deafness, but the neural systems supporting this have not been fully characterized. A gap in our understanding of deafness-enhanced peripheral vision is the contribution of primary auditory cortex. Previous studies of auditory cortex that use anatomical normalization across participants were limited by inter-subject variability of Heschl's gyrus. In addition to reorganized auditory cortex (cross-modal plasticity), a second gap in our understanding is the contribution of altered modality-specific cortices (visual intramodal plasticity in this case), as well as supramodal and multisensory cortices, especially when target detection is required across contrasts. Here we address these gaps by comparing fMRI signal change for peripheral vs. perifoveal visual stimulation (11-15° vs. 2-7°) in congenitally deaf and hearing participants in a blocked experimental design with two analytical approaches: a Heschl's gyrus region of interest analysis and a whole brain analysis. Our results using individually-defined primary auditory cortex (Heschl's gyrus) indicate that fMRI signal change for more peripheral stimuli was greater than perifoveal in deaf but not in hearing participants. Whole-brain analyses revealed differences between deaf and hearing participants for peripheral vs. perifoveal visual processing in extrastriate visual cortex including primary auditory cortex, MT+/V5, superior-temporal auditory, and multisensory and/or supramodal regions, such as posterior parietal cortex (PPC), frontal eye fields, anterior cingulate, and supplementary eye fields. Overall, these data demonstrate the contribution of neuroplasticity in multiple systems including primary auditory cortex, supramodal, and multisensory regions, to altered visual processing in congenitally deaf adults.

  5. Research of brain activation regions of "yes" and "no" responses by auditory stimulations in human EEG

    NASA Astrophysics Data System (ADS)

    Hu, Min; Liu, GuoZhong

    2011-11-01

    People with neuromuscular disorders have difficulty communicating with the outside world. Distinguishing the vegetative state (VS) from the minimally conscious state (MCS) in a patient with a disorder of consciousness (DOC) is very important to the clinician and the patient's family. If a patient is diagnosed with VS, the hope of recovery is greatly reduced, which may lead the family to abandon treatment. Brain-computer interfaces (BCIs) aim to help such patients by analyzing their electroencephalogram (EEG). This paper focuses on identifying which brain regions are activated when a subject responds "yes" or "no" to an auditory question. When the brain concentrates, the phase in the related area becomes ordered rather than disorderly, so we analyzed the EEG from the perspective of phase. Seven healthy subjects volunteered to participate in the experiment, and a total of 84 repeated stimulation tests were performed. First, the EEG was decomposed into frequency bands using the wavelet method. Second, the phase of each band was extracted with the Hilbert transform. Finally, the approximate entropy and information entropy of each frequency band were computed. The results show that the central area is activated when people say "yes", whereas the central and temporal areas are activated when people say "no". This conclusion is consistent with magnetic resonance imaging findings. The study provides a theoretical and algorithmic basis for designing BCI equipment for people with neuromuscular disorders.
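
    A minimal sketch of the processing chain described above (wavelet band decomposition, Hilbert phase extraction, then an entropy measure of the phase), assuming standard PyWavelets and SciPy tooling; the wavelet family, decomposition depth, and entropy definitions below are placeholders, since the abstract does not specify them.

    ```python
    import numpy as np
    import pywt
    from scipy.signal import hilbert

    def band_phases(eeg: np.ndarray, wavelet: str = "db4", level: int = 5) -> list[np.ndarray]:
        """Decompose one EEG channel into wavelet detail bands and return the Hilbert phase of each band."""
        coeffs = pywt.wavedec(eeg, wavelet, level=level)
        phases = []
        for i in range(1, len(coeffs)):
            # Reconstruct the signal from one detail level only, then take its instantaneous phase.
            kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
            band = pywt.waverec(kept, wavelet)[: eeg.size]
            phases.append(np.angle(hilbert(band)))
        return phases

    def phase_entropy_bits(phase: np.ndarray, bins: int = 16) -> float:
        """Shannon (information) entropy of the phase distribution; lower values indicate more ordered phase."""
        hist, _ = np.histogram(phase, bins=bins, range=(-np.pi, np.pi))
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())
    ```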

  6. Brain hyper-reactivity to auditory novel targets in children with high-functioning autism.

    PubMed

    Gomot, Marie; Belmonte, Matthew K; Bullmore, Edward T; Bernard, Frédéric A; Baron-Cohen, Simon

    2008-09-01

    Although communication and social difficulties in autism have received a great deal of research attention, the other key diagnostic feature, extreme repetitive behaviour and unusual narrow interests, has been addressed less often. Also known as 'resistance to change' this may be related to atypical processing of infrequent, novel stimuli. This can be tested at sensory and neural levels. Our aims were to (i) examine auditory novelty detection and its neural basis in children with autism spectrum conditions (ASC) and (ii) test for brain activation patterns that correlate quantitatively with number of autistic traits as a test of the dimensional nature of ASC. The present study employed event-related fMRI during a novel auditory detection paradigm. Participants were twelve 10- to 15-year-old children with ASC and a group of 12 age-, IQ- and sex-matched typical controls. The ASC group responded faster to novel target stimuli. Group differences in brain activity mainly involved the right prefrontal-premotor and the left inferior parietal regions, which were more activated in the ASC group than in controls. In both groups, activation of prefrontal regions during target detection was positively correlated with Autism Spectrum Quotient scores measuring the number of autistic traits. These findings suggest that target detection in autism is associated not only with superior behavioural performance (shorter reaction time) but also with activation of a more widespread network of brain regions. This pattern also shows quantitative variation with number of autistic traits, in a continuum that extends to the normal population. This finding may shed light on the neurophysiological process underlying narrow interests and what clinically is called 'need for sameness'.

  7. Auditory agnosia.

    PubMed

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition.

  8. Auditory-evoked cortical activity: contribution of brain noise, phase locking, and spectral power.

    PubMed

    Harris, Kelly C; Vaden, Kenneth I; Dubno, Judy R

    2014-09-01

    The N1-P2 is an obligatory cortical response that can reflect the representation of spectral and temporal characteristics of an auditory stimulus. Traditionally, mean amplitudes and latencies of the prominent peaks in the averaged response are compared across experimental conditions. Analyses of the peaks in the averaged response only reflect a subset of the data contained within the electroencephalogram (EEG) signal. We used single-trial analysis techniques to identify the contribution of brain noise, neural synchrony, and spectral power to the generation of P2 amplitude and how these variables may change across age groups. This information is important for appropriate interpretation of event-related potential (ERP) results and in understanding of age-related neural pathologies. EEG was measured from 25 younger and 25 older normal hearing adults. Age-related and individual differences in P2 response amplitudes, and variability in brain noise, phase locking value (PLV), and spectral power (4-8 Hz) were assessed from electrode FCz. Model testing and linear regression were used to determine the extent to which brain noise, PLV, and spectral power uniquely predicted P2 amplitudes and varied by age group. Younger adults had significantly larger P2 amplitudes, PLV, and power compared to older adults. Brain noise did not differ between age groups. The results of regression testing revealed that brain noise and PLV, but not spectral power, were unique predictors of P2 amplitudes. Model fit was significantly better in younger than in older adults. ERP analyses are intended to provide a better understanding of the underlying neural mechanisms that contribute to individual and group differences in behavior. The current results support that age-related declines in neural synchrony contribute to smaller P2 amplitudes in older normal hearing adults. Based on our results, we discuss potential models in which differences in neural synchrony and brain noise can account for

  9. Brain connectivity and auditory hallucinations: In search of novel noninvasive brain stimulation therapeutic approaches for schizophrenia.

    PubMed

    Thomas, F; Moulier, V; Valéro-Cabré, A; Januel, D

    2016-11-01

    Auditory verbal hallucinations (AVH) are among the most characteristic symptoms of schizophrenia and have been linked to likely disturbances of structural and functional connectivity within frontal, temporal, parietal and subcortical networks involved in language and auditory functions. Resting-state functional magnetic resonance imaging (fMRI) has shown that alterations in the functional connectivity activity of the default-mode network (DMN) may also subtend hallucinations. Noninvasive neurostimulation techniques such as repetitive transcranial magnetic stimulation (rTMS) have the ability to modulate activity of targeted cortical sites and their associated networks, showing a high potential for modulating altered connectivity subtending schizophrenia. Notwithstanding, the clinical benefit of these approaches remains weak and variable. Further studies in the field should foster a better understanding concerning the status of networks subtending AVH and the neural impact of rTMS in relation with symptom improvement. Additionally, the identification and characterization of clinical biomarkers able to predict response to treatment would be a critical asset allowing better care for patients with schizophrenia. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  10. From complex B(1) mapping to local SAR estimation for human brain MR imaging using multi-channel transceiver coil at 7T.

    PubMed

    Zhang, Xiaotong; Schmitter, Sebastian; Van de Moortele, Pierre-Francois; Liu, Jiaen; He, Bin

    2013-06-01

    Elevated specific absorption rate (SAR) associated with increased main magnetic field strength remains a major safety concern in ultra-high-field (UHF) magnetic resonance imaging (MRI) applications. The calculation of local SAR requires knowledge of the electric field induced by radio-frequency (RF) excitation, and the local electrical properties of tissues. Since electric field distribution cannot be directly mapped in conventional MR measurements, SAR estimation is usually performed using numerical model-based electromagnetic simulations which, however, are highly time consuming and cannot account for the specific anatomy and tissue properties of the subject undergoing a scan. In the present study, starting from the measurable RF magnetic fields (B1) in MRI, we conducted a series of mathematical deductions to estimate the local, voxel-wise and subject-specific SAR for each single coil element using a multi-channel transceiver array coil. We first evaluated the feasibility of this approach in numerical simulations including two different human head models. We further conducted an experimental study in a physical phantom and in two human subjects at 7T using a multi-channel transceiver head coil. Accuracy of the results is discussed in the context of predicting local SAR in the human brain at UHF MRI using multi-channel RF transmission.

  11. From Complex B1 Mapping to Local SAR Estimation for Human Brain MR Imaging Using Multi-channel Transceiver Coil at 7T

    PubMed Central

    Zhang, Xiaotong; Schmitter, Sebastian; Van de Moortel, Pierre-François; Liu, Jiaen

    2014-01-01

    Elevated Specific Absorption Rate (SAR) associated with increased main magnetic field strength remains a major safety concern in ultra-high-field (UHF) Magnetic Resonance Imaging (MRI) applications. The calculation of local SAR requires knowledge of the electric field induced by radiofrequency (RF) excitation, and the local electrical properties of tissues. Since electric field distribution cannot be directly mapped in conventional MR measurements, SAR estimation is usually performed using numerical model-based electromagnetic simulations which, however, are highly time consuming and cannot account for the specific anatomy and tissue properties of the subject undergoing a scan. In the present study, starting from the measurable RF magnetic fields (B1) in MRI, we conducted a series of mathematical deductions to estimate the local, voxel-wise and subject-specific SAR for each single coil element using a multi-channel transceiver array coil. We first evaluated the feasibility of this approach in numerical simulations including two different human head models. We further conducted an experimental study in a physical phantom and in two human subjects at 7T using a multi-channel transceiver head coil. Accuracy of the results is discussed in the context of predicting local SAR in the human brain at UHF MRI using multi-channel RF transmission. PMID:23508259
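
    For reference, once the electric field and local tissue properties are known, voxel-wise SAR follows from the standard definition SAR = sigma * |E|^2 / (2 * rho). The snippet below encodes only that definitional step; it is not the authors' B1-based procedure for estimating the electric field itself.

    ```python
    import numpy as np

    def local_sar(e_field_peak: np.ndarray, conductivity: np.ndarray, density: np.ndarray) -> np.ndarray:
        """Voxel-wise SAR in W/kg from peak |E| (V/m), conductivity sigma (S/m), and tissue density rho (kg/m^3)."""
        return conductivity * e_field_peak ** 2 / (2.0 * density)
    ```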

  12. Noise Trauma Induced Plastic Changes in Brain Regions outside the Classical Auditory Pathway

    PubMed Central

    Chen, Guang-Di; Sheppard, Adam; Salvi, Richard

    2017-01-01

    The effects of intense noise exposure on the classical auditory pathway have been extensively investigated; however, little is known about the effects of noise-induced hearing loss on non-classical auditory areas in the brain such as the lateral amygdala (LA) and striatum (Str). To address this issue, we compared the noise-induced changes in spontaneous and tone-evoked responses from multiunit clusters (MUC) in the LA and Str with those seen in auditory cortex (AC). High-frequency octave band noise (10–20 kHz) and narrow band noise (16–20 kHz) induced permanent threshold shifts (PTS) at high frequencies within and above the noise band but not at low frequencies. While the noise trauma significantly elevated spontaneous discharge rate (SR) in the AC, SRs in the LA and Str were only slightly increased across all frequencies. The high-frequency noise trauma affected tone-evoked firing rates in a frequency- and time-dependent manner, and the changes appeared to be related to the severity of the noise trauma. In the LA, tone-evoked firing rates were reduced at the high frequencies (trauma area) whereas firing rates were enhanced at the low frequencies or at the edge frequency, depending on the severity of hearing loss at the high frequencies. The firing rate temporal profile changed from a broad plateau to one sharp, delayed peak. In the AC, tone-evoked firing rates were depressed at high frequencies and enhanced at the low frequencies while the firing rate temporal profiles became substantially broader. In contrast, firing rates in the Str were generally decreased and firing rate temporal profiles became more phasic and less prolonged. The altered firing rate and pattern at low frequencies induced by high-frequency hearing loss could have perceptual consequences. The tone-evoked hyperactivity in low-frequency MUC could manifest as hyperacusis whereas the discharge pattern changes could affect temporal resolution and integration. PMID:26701290

  13. Auditory Hallucinations and the Brain's Resting-State Networks: Findings and Methodological Observations.

    PubMed

    Alderson-Day, Ben; Diederen, Kelly; Fernyhough, Charles; Ford, Judith M; Horga, Guillermo; Margulies, Daniel S; McCarthy-Jones, Simon; Northoff, Georg; Shine, James M; Turner, Jessica; van de Ven, Vincent; van Lutterveld, Remko; Waters, Flavie; Jardri, Renaud

    2016-09-01

    In recent years, there has been increasing interest in the potential for alterations to the brain's resting-state networks (RSNs) to explain various kinds of psychopathology. RSNs provide an intriguing new explanatory framework for hallucinations, which can occur in different modalities and population groups, but which remain poorly understood. This collaboration from the International Consortium on Hallucination Research (ICHR) reports on the evidence linking resting-state alterations to auditory hallucinations (AH) and provides a critical appraisal of the methodological approaches used in this area. In the report, we describe findings from resting connectivity fMRI in AH (in schizophrenia and nonclinical individuals) and compare them with findings from neurophysiological research, structural MRI, and research on visual hallucinations (VH). In AH, various studies show resting connectivity differences in left-hemisphere auditory and language regions, as well as atypical interaction of the default mode network and RSNs linked to cognitive control and salience. As the latter are also evident in studies of VH, this points to a domain-general mechanism for hallucinations alongside modality-specific changes to RSNs in different sensory regions. However, we also observed high methodological heterogeneity in the current literature, affecting the ability to make clear comparisons between studies. To address this, we provide some methodological recommendations and options for future research on the resting state and hallucinations.

  14. Auditory brain stem response and cortical evoked potentials in children with type 1 diabetes mellitus.

    PubMed

    Radwan, Heba Mohammed; El-Gharib, Amani Mohamed; Erfan, Adel Ali; Emara, Afaf Ahmad

    2017-05-01

    Delays in ABR and CAEP wave latencies in children with type 1 DM indicate an abnormality of neural conduction in DM patients. The duration of DM has a greater effect on auditory function than the control of DM. Diabetes mellitus (DM) is a common endocrine and metabolic disorder. Evoked potentials offer the possibility to perform a functional evaluation of neural pathways in the central nervous system. The aim was to investigate the effect of type 1 diabetes mellitus (T1DM) on the auditory brain stem response (ABR) and cortical evoked potentials (CAEPs). This study included two groups: a control group (GI), which consisted of 20 healthy children with normal peripheral hearing, and a study group (GII), which consisted of 30 children with type I DM. Basic audiological evaluation, ABR, and CAEPs were done in both groups. Delayed absolute latencies of ABR and CAEP waves were found. Amplitudes showed no significant difference between the two groups. A positive correlation was found between ABR wave latencies and duration of DM. No correlation was found between ABR, CAEPs, and glycated hemoglobin.

  15. Subthalamic nucleus deep brain stimulation affects distractor interference in auditory working memory.

    PubMed

    Camalier, Corrie R; Wang, Alice Y; McIntosh, Lindsey G; Park, Sohee; Neimat, Joseph S

    2017-03-01

    Computational and theoretical accounts hypothesize the basal ganglia play a supramodal "gating" role in the maintenance of working memory representations, especially in preservation from distractor interference. There are currently two major limitations to this account. The first is that supporting experiments have focused exclusively on the visuospatial domain, leaving questions as to whether such "gating" is domain-specific. The second is that current evidence relies on correlational measures, as it is extremely difficult to causally and reversibly manipulate subcortical structures in humans. To address these shortcomings, we examined non-spatial, auditory working memory performance during reversible modulation of the basal ganglia, an approach afforded by deep brain stimulation of the subthalamic nucleus. We found that subthalamic nucleus stimulation impaired auditory working memory performance, specifically in the group tested in the presence of distractors, even though the distractors were predictable and completely irrelevant to the encoding of the task stimuli. This study provides key causal evidence that the basal ganglia act as a supramodal filter in working memory processes, further adding to our growing understanding of their role in cognition.

  16. Learning to modulate sensorimotor rhythms with stereo auditory feedback for a brain-computer interface.

    PubMed

    McCreadie, Karl A; Coyle, Damien H; Prasad, Girijesh

    2012-01-01

    Motor imagery can be used to modulate sensorimotor rhythms (SMR) enabling detection of voltage fluctuations on the surface of the scalp using electroencephalographic (EEG) electrodes. Feedback is essential in learning how to intentionally modulate SMR in non-muscular communication using a brain-computer interface (BCI). A BCI that is not reliant upon the visual modality for feedback is an attractive means of communication for the blind and the vision impaired and to release the visual channel for other purposes during BCI usage. The aim of this study is to demonstrate the feasibility of replacing the traditional visual feedback modality with stereo auditory feedback. Twenty participants split into equal groups took part in ten BCI sessions involving motor imagery. The visual feedback group performed best using two performance measures but did not show improvement over time whilst the auditory group improved as the study progressed. Multiple loudspeaker presentation of audio allows the listener to intuitively assign each of two classes to the corresponding lateral position in a free-field listening environment.

  17. Sources of variability in auditory brain stem evoked potential measures over time.

    PubMed

    Edwards, R M; Buchwald, J S; Tanguay, P E; Schwafel, J A

    1982-02-01

    Auditory brain stem EPs elicited in 10 normal adults by monaural clicks delivered at 72 dB HL, 20/sec showed no significant change in wave latencies or in the ratio of wave I to wave V amplitude across 250 trial subsets, across 1500 trial blocks within a test session, or across two test sessions separated by several months. Sources of maximum variability were determined by using mean squared differences with all but one condition constant. 'Subjects' was shown to contribute the most variability followed by 'ears', 'sessions' and 'runs'; collapsing across conditions, wave III latencies were found to be the least variable, while wave II showed the most variability. Some EP morphologies showed extra peaks between waves II and IV, missing wave IV or wave IV fused with wave V. Such variations in wave form morphology were independent of EMG amplitude and were characteristic of certain individuals.

  18. Geometrical analysis of human three-channel Lissajous' trajectory of auditory brain-stem evoked potentials.

    PubMed

    Pratt, H; Har'el, Z; Golos, E

    1984-07-01

    Three-channel Lissajous' trajectories (3CLTs) of auditory brain-stem evoked potentials (ABEPs) were obtained for a group of 15 normal humans. 3CLTs were studied using geometric spatial descriptors of rate of bending (curvature), local planarity (torsion), as well as the parameters of best fit planes along the trajectory. Point-by-point analysis enabled objective determination of apices (curvature maxima) which divided the trajectory into curvilinear segments. Consecutive apices defined planar segments (each consisting of two curvilinear segments) along the trajectory. Of the two possibilities for arranging apices in the middle of planar segments, only one yielded consistently planar segments. The alternative set of apex triads represented transitions between truly planar segments. The variability of position and orientation measures of planar segments was greater than that of apex latencies and plane limits. The significance of planes, as well as their generation, in 3CLT of ABEP, remains to be studied.

  19. Comparison of Beyer DT48 and etymotic insert earphones: auditory brain stem response measurements.

    PubMed

    Beauchaine, K A; Kaminski, J R; Gorga, M P

    1987-10-01

    Click-evoked auditory brain stem responses (ABRs) were measured using a Beyer DT48 circumaural earphone and an Etymotic ER-3A insert earphone in a group of normal-hearing subjects. Comparisons were made between time waveforms and amplitude spectra for the two transducers. ABR waveforms, latencies, and thresholds were compared for the two transducers. Click-evoked ABR and behavioral thresholds were comparable for the two earphones. In addition, absolute response-component latencies differed by an amount that was equivalent to the travel time introduced by the insert earphone's sound-delivery tube. Inter-peak latency differences were virtually identical. These findings suggest that the insert earphone is a viable transducer for clinical ABR evaluations. Further, a temporal correction may be all that is necessary to account for the difference between the insert earphone and the circumaural earphone if other characteristics of the transducers are similar.

  20. Interaural attenuation using etymotic ER-3A insert earphones in auditory brain stem response testing.

    PubMed

    Van Campen, L E; Sammeth, C A; Peek, B F

    1990-02-01

    Click interaural attenuation (IA) was measured behaviorally and with the auditory brain stem response (ABR) in two unilaterally deaf adults with Etymotic ER-3A insert earphones, and TDH-39P and TDH-49P supraaural earphones. Stimulus crossover for each set of earphones was also determined with pure-tone audiometry. Pure-tone results agreed with previous research, showing that the ER-3A provided substantially greater IA than the supraaural earphones, particularly for low frequencies. For click stimuli, behavioral and ABR results revealed only modest, if any, improvement in IA with the ER-3A relative to the supraaural earphones. The results of this study suggest that while the ER-3A earphones provide a clear IA advantage for behavioral pure-tone audiometry, they do not eliminate the need for contralateral masking of click stimuli in ABR testing.

  1. Brain dynamics in the auditory Go/NoGo task as a function of EEG frequency.

    PubMed

    Barry, Robert J; De Blasio, Frances; Rushby, Jacqueline A; Clarke, Adam R

    2010-11-01

    We examined relationships between the phase of narrow-band electroencephalographic (EEG) activity at stimulus onset and the resultant event-related potentials (ERPs) in an equiprobable auditory Go/NoGo task with a fixed SOA, in the context of a novel conceptualisation of orthogonal phase effects (cortical negativity vs. positivity, negative driving vs. positive driving, waxing vs. waning). ERP responses to each stimulus type were analysed. Prestimulus narrow-band EEG activity (in 1 Hz bands from 1 to 13 Hz) at Cz was assessed for each trial using FFT decomposition of the EEG data. For each frequency, the cycle at stimulus onset was used to sort trials into four phases, for which ERPs were derived from the raw EEG activity at 9 central sites. The occurrence of preferred phase-defined brain states was confirmed at a number of frequencies, crossing the traditional frequency bands. As expected, these did not differ between Go and NoGo stimuli. These preferred states were associated with more efficient processing of the stimulus, as reflected in differences in latency and amplitude of the N1 and P3 ERP components. The present results, although derived in a different paradigm by EEG decomposition methods different from those used previously, confirm the existence of preferred brain states and their impact on the efficiency of brain dynamics involved in perceptual and cognitive processing.
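
    The phase-sorting step described above can be sketched roughly as follows: Fourier-transform each pre-stimulus epoch, read off the phase of the frequency bin of interest, and bin trials into four phase quadrants. This is an illustrative simplification (windowing details and referencing the phase to stimulus onset rather than to the window start are glossed over), not the authors' exact procedure.

    ```python
    import numpy as np

    def sort_trials_by_phase(prestim: np.ndarray, fs: float, freq: float) -> np.ndarray:
        """Assign each trial to one of four phase quadrants for a given narrow-band frequency.

        prestim: (n_trials, n_samples) pre-stimulus EEG epochs ending at stimulus onset.
        Returns an integer quadrant label (0-3) per trial.
        """
        n = prestim.shape[1]
        spectrum = np.fft.rfft(prestim * np.hanning(n), axis=1)
        bin_idx = int(round(freq * n / fs))         # FFT bin closest to the requested frequency
        phase = np.angle(spectrum[:, bin_idx])      # phase referenced to the start of the window
        return ((phase + np.pi) // (np.pi / 2)).astype(int) % 4
    ```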

  2. Frequency tuning of the dolphin's hearing as revealed by auditory brain-stem response with notch-noise masking.

    PubMed

    Popov, V V; Supin, A Y; Klishin, V O

    1997-12-01

    Notch-noise masking was used to measure frequency tuning in a dolphin (Tursiops truncatus) in a simultaneous-masking paradigm in conjunction with auditory brain-stem evoked potential recording. Measurements were made at probe frequencies of 64, 76, 90, and 108 kHz. The data were analyzed by fitting the rounded-exponent model of the auditory filters to the experimental data. The fitting parameter values corresponded to the filter tuning as follows: QER (center frequency divided by equivalent rectangular bandwidths) of 35 to 36.5 and Q10 dB of 18 to 19 at all tested frequencies.
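
    The quoted QER values translate directly into equivalent rectangular bandwidths, because QER is defined as the filter centre frequency divided by its ERB; the snippet below just reproduces that arithmetic for the 90 kHz probe reported above.

    ```python
    def erb_from_qer(center_freq_hz: float, q_er: float) -> float:
        """Equivalent rectangular bandwidth implied by a QER value (QER = f_c / ERB)."""
        return center_freq_hz / q_er

    print(erb_from_qer(90_000.0, 36.0))  # ~2500 Hz for a 90 kHz probe with QER of about 36
    ```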

  3. Resolving the neural dynamics of visual and auditory scene processing in the human brain: a methodological approach

    PubMed Central

    Teng, Santani

    2017-01-01

    In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044019

  4. Resolving the neural dynamics of visual and auditory scene processing in the human brain: a methodological approach.

    PubMed

    Cichy, Radoslaw Martin; Teng, Santani

    2017-02-19

    In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue 'Auditory and visual scene analysis'.
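
    Of the three methodological pillars listed, representational similarity analysis is the simplest to illustrate compactly: compute a representational dissimilarity matrix (RDM) per measurement modality and correlate the two geometries. The sketch below uses random placeholder data and generic SciPy routines; it is a schematic of the idea, not the authors' analysis.

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    def rdm(patterns: np.ndarray) -> np.ndarray:
        """Condensed representational dissimilarity matrix: 1 - Pearson r between condition patterns."""
        return pdist(patterns, metric="correlation")

    # Hypothetical condition-by-feature patterns from two modalities (e.g. MEG sensors and fMRI voxels).
    rng = np.random.default_rng(1)
    meg_patterns = rng.normal(size=(12, 100))
    fmri_patterns = meg_patterns @ rng.normal(size=(100, 50)) + 0.5 * rng.normal(size=(12, 50))

    # Compare the two representational geometries with a rank correlation of their RDMs.
    rho, p = spearmanr(rdm(meg_patterns), rdm(fmri_patterns))
    print(rho, p)
    ```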

  5. The musical centers of the brain: Vladimir E. Larionov (1857-1929) and the functional neuroanatomy of auditory perception.

    PubMed

    Triarhou, Lazaros C; Verina, Tatyana

    2016-11-01

    In 1899 a landmark paper entitled "On the musical centers of the brain" was published in Pflügers Archiv, based on work carried out in the Anatomo-Physiological Laboratory of the Neuropsychiatric Clinic of Vladimir M. Bekhterev (1857-1927) in St. Petersburg, Imperial Russia. The author of that paper was Vladimir E. Larionov (1857-1929), a military doctor and devoted brain scientist, who pursued the problem of the localization of function in the canine and human auditory cortex. His data detailed the existence of tonotopy in the temporal lobe and further demonstrated centrifugal auditory pathways emanating from the auditory cortex and directed to the opposite hemisphere and lower brain centers. Larionov's discoveries have been largely considered findings of the Bekhterev school. Perhaps this is why there are limited resources on Larionov, especially keeping in mind his military medical career and the fact that after 1917 he just seems to have practiced otorhinolaryngology in Odessa. Larionov died two years after Bekhterev's mysterious death in 1927. The present study highlights the pioneering contributions of Larionov to auditory neuroscience, trusting that the life and work of Vladimir Efimovich will finally, and deservedly, emerge from the shadow of his celebrated master, Vladimir Mikhailovich.

  6. Coding of Visual, Auditory, Rule, and Response Information in the Brain: 10 Years of Multivoxel Pattern Analysis.

    PubMed

    Woolgar, Alexandra; Jackson, Jade; Duncan, John

    2016-10-01

    How is the processing of task information organized in the brain? Many views of brain function emphasize modularity, with different regions specialized for processing different types of information. However, recent accounts also highlight flexibility, pointing especially to the highly consistent pattern of frontoparietal activation across many tasks. Although early insights from functional imaging were based on overall activation levels during different cognitive operations, in the last decade many researchers have used multivoxel pattern analyses to interrogate the representational content of activations, mapping out the brain regions that make particular stimulus, rule, or response distinctions. Here, we drew on 100 searchlight decoding analyses from 57 published papers to characterize the information coded in different brain networks. The outcome was highly structured. Visual, auditory, and motor networks predominantly (but not exclusively) coded visual, auditory, and motor information, respectively. By contrast, the frontoparietal multiple-demand network was characterized by domain generality, coding visual, auditory, motor, and rule information. The contribution of the default mode network and voxels elsewhere was minor. The data suggest a balanced picture of brain organization in which sensory and motor networks are relatively specialized for information in their own domain, whereas a specific frontoparietal network acts as a domain-general "core" with the capacity to code many different aspects of a task.

  7. Training leads to increased auditory brain-computer interface performance of end-users with motor impairments.

    PubMed

    Halder, S; Käthner, I; Kübler, A

    2016-02-01

    Auditory brain-computer interfaces are an assistive technology that can restore communication for motor impaired end-users. Such non-visual brain-computer interface paradigms are of particular importance for end-users that may lose or have lost gaze control. We attempted to show that motor impaired end-users can learn to control an auditory speller on the basis of event-related potentials. Five end-users with motor impairments, two of whom with additional visual impairments, participated in five sessions. We applied a newly developed auditory brain-computer interface paradigm with natural sounds and directional cues. Three of five end-users learned to select symbols using this method. Averaged over all five end-users the information transfer rate increased by more than 1800% from the first session (0.17 bits/min) to the last session (3.08 bits/min). The two best end-users achieved information transfer rates of 5.78 bits/min and accuracies of 92%. Our results show that an auditory BCI with a combination of natural sounds and directional cues, can be controlled by end-users with motor impairment. Training improves the performance of end-users to the level of healthy controls. To our knowledge, this is the first time end-users with motor impairments controlled an auditory brain-computer interface speller with such high accuracy and information transfer rates. Further, our results demonstrate that operating a BCI with event-related potentials benefits from training and specifically end-users may require more than one session to develop their full potential. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  8. Proteome rearrangements after auditory learning: high-resolution profiling of synapse-enriched protein fractions from mouse brain.

    PubMed

    Kähne, Thilo; Richter, Sandra; Kolodziej, Angela; Smalla, Karl-Heinz; Pielot, Rainer; Engler, Alexander; Ohl, Frank W; Dieterich, Daniela C; Seidenbecher, Constanze; Tischmeyer, Wolfgang; Naumann, Michael; Gundelfinger, Eckart D

    2016-07-01

    Learning and memory processes are accompanied by rearrangements of synaptic protein networks. While various studies have demonstrated the regulation of individual synaptic proteins during these processes, much less is known about the complex regulation of synaptic proteomes. Recently, we reported that auditory discrimination learning in mice is associated with a relative down-regulation of proteins involved in the structural organization of synapses in various brain regions. Aiming at the identification of biological processes and signaling pathways involved in auditory memory formation, here, a label-free quantification approach was utilized to identify regulated synaptic junctional proteins and phosphoproteins in the auditory cortex, frontal cortex, hippocampus, and striatum of mice 24 h after the learning experiment. Twenty proteins, including postsynaptic scaffolds, actin-remodeling proteins, and RNA-binding proteins, were regulated in at least three brain regions pointing to common, cross-regional mechanisms. Most of the detected synaptic proteome changes were, however, restricted to individual brain regions. For example, several members of the Septin family of cytoskeletal proteins were up-regulated only in the hippocampus, while Septin-9 was down-regulated in the hippocampus, the frontal cortex, and the striatum. Meta analyses utilizing several databases were employed to identify underlying cellular functions and biological pathways. Data are available via ProteomeExchange with identifier PXD003089. How does the protein composition of synapses change in different brain areas upon auditory learning? We unravel discrete proteome changes in mouse auditory cortex, frontal cortex, hippocampus, and striatum functionally implicated in the learning process. We identify not only common but also area-specific biological pathways and cellular processes modulated 24 h after training, indicating individual contributions of the regions to memory processing.

  9. Combining functional and anatomical connectivity reveals brain networks for auditory language comprehension.

    PubMed

    Saur, Dorothee; Schelter, Björn; Schnell, Susanne; Kratochvil, David; Küpper, Hanna; Kellmeyer, Philipp; Kümmerer, Dorothee; Klöppel, Stefan; Glauche, Volkmar; Lange, Rüdiger; Mader, Wolfgang; Feess, David; Timmer, Jens; Weiller, Cornelius

    2010-02-15

    Cognitive functions are organized in distributed, overlapping, and interacting brain networks. Investigation of those large-scale brain networks is a major task in neuroimaging research. Here, we introduce a novel combination of functional and anatomical connectivity to study the network topology subserving a cognitive function of interest. (i) In a given network, direct interactions between network nodes are identified by analyzing functional MRI time series with the multivariate method of directed partial correlation (dPC). This method provides important improvements over shortcomings that are typical for ordinary (partial) correlation techniques. (ii) For directly interacting pairs of nodes, a region-to-region probabilistic fiber tracking on diffusion tensor imaging data is performed to identify the most probable anatomical white matter fiber tracts mediating the functional interactions. This combined approach is applied to the language domain to investigate the network topology of two levels of auditory comprehension: lower-level speech perception (i.e., phonological processing) and higher-level speech recognition (i.e., semantic processing). For both processing levels, dPC analyses revealed the functional network topology and identified central network nodes by the number of direct interactions with other nodes. Tractography showed that these interactions are mediated by distinct ventral (via the extreme capsule) and dorsal (via the arcuate/superior longitudinal fascicle fiber system) long- and short-distance association tracts as well as commissural fibers. Our findings demonstrate how both processing routines are segregated in the brain on a large-scale network level. Combining dPC with probabilistic tractography is a promising approach to unveil how cognitive functions emerge through interaction of functionally interacting and anatomically interconnected brain regions. Copyright 2009 Elsevier Inc. All rights reserved.

  10. Bigger brains or bigger nuclei? Regulating the size of auditory structures in birds.

    PubMed

    Kubke, M Fabiana; Massoglia, Dino P; Carr, Catherine E

    2004-01-01

    Increases in the size of the neuronal structures that mediate specific behaviors are believed to be related to enhanced computational performance. It is not clear, however, what developmental and evolutionary mechanisms mediate these changes, nor whether an increase in the size of a given neuronal population is a general mechanism to achieve enhanced computational ability. We addressed the issue of size by analyzing the variation in the relative number of cells of auditory structures in auditory specialists and generalists. We show that bird species with different auditory specializations exhibit variation in the relative size of their hindbrain auditory nuclei. In the barn owl, an auditory specialist, the hindbrain auditory nuclei involved in the computation of sound location show hyperplasia. This hyperplasia was also found in songbirds, but not in non-auditory specialists. The hyperplasia of auditory nuclei was also not seen in birds with large body weight, suggesting that the total number of cells is selected for in auditory specialists. In barn owls, differences observed in the relative size of the auditory nuclei might be attributed to modifications in neurogenesis and cell death. Thus, hyperplasia of circuits used for auditory computation accompanies auditory specialization in different orders of birds.

  11. Multi-channel EEG signal feature extraction and pattern recognition on horizontal mental imagination task of 1-D cursor movement for brain computer interface.

    PubMed

    Serdar Bascil, M; Tesneli, Ahmet Y; Temurtas, Feyzullah

    2015-06-01

    Brain computer interfaces (BCIs) based on multi-channel electroencephalogram (EEG) signal processing convert brain signal activity into machine control commands, providing a new way of communicating with a computer through extracted electroencephalographic activity. This paper deals with feature extraction and classification of horizontal mental task patterns for 1-D cursor movement from EEG signals. Hemispheric power changes in the alpha and beta frequency bands are computed and compared, and horizontal cursor control is derived from mental imagination of cursor movements alone. In the first stage, features are extracted with the well-known average signal power or power difference (alpha and beta) method. Principal component analysis is used to reduce feature dimensionality. All features are then classified, and the mental task patterns recognized, by three neural network classifiers (learning vector quantization, a multilayer neural network and a probabilistic neural network), which achieve acceptably good pattern recognition results as evaluated with the k-fold cross-validation technique.
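    The processing chain described (band-power features, principal component analysis, and neural-network classification scored with k-fold cross-validation) can be sketched as below. The sampling rate, data shapes, and the single MLP classifier standing in for the three classifiers compared by the authors are all assumptions for illustration:

        # Sketch of the described chain: alpha/beta band power -> PCA -> classifier,
        # scored with k-fold cross-validation. Shapes and labels are synthetic.
        import numpy as np
        from scipy.signal import welch
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import cross_val_score

        FS = 256  # sampling rate in Hz (assumed)

        def band_power(epochs, lo, hi):
            """epochs: (n_trials, n_channels, n_samples) -> (n_trials, n_channels)."""
            freqs, psd = welch(epochs, fs=FS, nperseg=FS, axis=-1)
            band = (freqs >= lo) & (freqs <= hi)
            return psd[..., band].mean(axis=-1)

        rng = np.random.default_rng(0)
        epochs = rng.standard_normal((120, 8, 2 * FS))   # 120 trials, 8 channels, 2 s
        labels = rng.integers(0, 2, size=120)            # imagined left vs. right

        features = np.hstack([band_power(epochs, 8, 13),    # alpha
                              band_power(epochs, 14, 30)])  # beta

        clf = make_pipeline(StandardScaler(), PCA(n_components=5),
                            MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000))
        print("5-fold accuracy:", cross_val_score(clf, features, labels, cv=5).mean())

    With real recordings, the left- and right-hemisphere channel powers would be compared or differenced before classification, mirroring the hemispheric power comparison the abstract describes.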

  12. Evaluation of Auditory Brain Stem Evoked Response in Newborns With Pathologic Hyperbilirubinemia in Mashhad, Iran

    PubMed Central

    Okhravi, Tooba; Tarvij Eslami, Saeedeh; Hushyar Ahmadi, Ali; Nassirian, Hossain; Najibpour, Reza

    2015-01-01

    Background: Neonatal jaundice is a common cause of sensorineural hearing loss in children. Objectives: We aimed to detect the neurotoxic effects of pathologic hyperbilirubinemia on the brain stem and auditory tract by auditory brain stem evoked response (ABR), which could predict early effects of hyperbilirubinemia. Patients and Methods: This case-control study was performed on newborns with pathologic hyperbilirubinemia. The inclusion criteria were healthy term and near term (35 - 37 weeks) newborns with pathologic hyperbilirubinemia with serum bilirubin values of ≥ 7 mg/dL, ≥ 10 mg/dL and ≥ 14 mg/dL at the first, second and third day of life, respectively, and with bilirubin concentration ≥ 18 mg/dL at over 72 hours of life. The exclusion criteria included family history and diseases causing sensorineural hearing loss, use of ototoxic medications within the preceding five days, convulsion, congenital craniofacial anomalies, birth trauma, preterm newborns < 35 weeks old, birth weight < 1500 g, asphyxia, and mechanical ventilation for five days or more. A total of 48 newborns with hyperbilirubinemia met the enrolment criteria as the case group and 49 healthy newborns as the control group, who were hospitalized in a university teaching hospital (22 Bahman) in Mashhad, a city in north-eastern Iran. ABR was performed on both groups. The evaluated variables were wave latencies, interpeak intervals, and absence of waves. Results: The mean latencies of waves I, III and V of ABR were significantly higher in the pathologic hyperbilirubinemia group compared with the controls (P < 0.001). In addition, the mean interpeak intervals (IPI) of waves I-III, I-V and III-V of ABR were significantly higher in the pathologic hyperbilirubinemia group compared with the controls (P < 0.001). For example, the mean latency of wave I was significantly higher in the right ear of the case group than in controls (2.16 ± 0.26 vs. 1.77 ± 0.15 milliseconds, respectively) (P

  13. Physiological modulators of Kv3.1 channels adjust firing patterns of auditory brain stem neurons.

    PubMed

    Brown, Maile R; El-Hassar, Lynda; Zhang, Yalan; Alvaro, Giuseppe; Large, Charles H; Kaczmarek, Leonard K

    2016-07-01

    Many rapidly firing neurons, including those in the medial nucleus of the trapezoid body (MNTB) in the auditory brain stem, express "high threshold" voltage-gated Kv3.1 potassium channels that activate only at positive potentials and are required for stimuli to generate rapid trains of action potentials. We now describe the actions of two imidazolidinedione derivatives, AUT1 and AUT2, which modulate Kv3.1 channels. Using Chinese hamster ovary cells stably expressing rat Kv3.1 channels, we found that lower concentrations of these compounds shift the voltage of activation of Kv3.1 currents toward negative potentials, increasing currents evoked by depolarization from typical neuronal resting potentials. Single-channel recordings also showed that AUT1 shifted the open probability of Kv3.1 to more negative potentials. Higher concentrations of AUT2 also shifted inactivation to negative potentials. The effects of lower and higher concentrations could be mimicked in numerical simulations by increasing rates of activation and inactivation, respectively, with no change in intrinsic voltage dependence. In brain slice recordings of mouse MNTB neurons, both AUT1 and AUT2 modulated firing rate at high rates of stimulation, a result predicted by numerical simulations. Our results suggest that pharmaceutical modulation of Kv3.1 currents represents a novel avenue for manipulation of neuronal excitability and has the potential for therapeutic benefit in the treatment of hearing disorders.

  14. Brain activity during divided and selective attention to auditory and visual sentence comprehension tasks.

    PubMed

    Moisala, Mona; Salmela, Viljami; Salo, Emma; Carlson, Synnöve; Vuontela, Virve; Salonen, Oili; Alho, Kimmo

    2015-01-01

    Using functional magnetic resonance imaging (fMRI), we measured brain activity of human participants while they performed a sentence congruence judgment task in either the visual or auditory modality separately, or in both modalities simultaneously. Significant performance decrements were observed when attention was divided between the two modalities compared with when one modality was selectively attended. Compared with selective attention (i.e., single tasking), divided attention (i.e., dual-tasking) did not recruit additional cortical regions, but resulted in increased activity in medial and lateral frontal regions which were also activated by the component tasks when performed separately. Areas involved in semantic language processing were revealed predominantly in the left lateral prefrontal cortex by contrasting incongruent with congruent sentences. These areas also showed significant activity increases during divided attention in relation to selective attention. In the sensory cortices, no crossmodal inhibition was observed during divided attention when compared with selective attention to one modality. Our results suggest that the observed performance decrements during dual-tasking are due to interference of the two tasks because they utilize the same part of the cortex. Moreover, semantic dual-tasking did not appear to recruit additional brain areas in comparison with single tasking, and no crossmodal inhibition was observed during intermodal divided attention.

  15. Noninvasive brain stimulation for the treatment of auditory verbal hallucinations in schizophrenia: methods, effects and challenges

    PubMed Central

    Kubera, Katharina M.; Barth, Anja; Hirjak, Dusan; Thomann, Philipp A.; Wolf, Robert C.

    2015-01-01

    This mini-review focuses on noninvasive brain stimulation techniques as an augmentation method for the treatment of persistent auditory verbal hallucinations (AVH) in patients with schizophrenia. Paradigmatically, we place emphasis on transcranial magnetic stimulation (TMS). We specifically discuss rationales of stimulation and consider methodological questions together with issues of phenotypic diversity in individuals with drug-refractory and persistent AVH. Finally, we provide a brief outlook on future investigations and treatment directions. Taken together, current evidence suggests TMS as a promising method in the treatment of AVH. Low-frequency stimulation of the superior temporal cortex (STC) may reduce symptom severity and frequency. Yet clinical effects are of relatively short duration and effect sizes appear to decrease over time along with publication of larger trials. Apart from considering other innovative stimulation techniques, such as transcranial Direct Current Stimulation (tDCS), and optimizing stimulation protocols, treatment of AVH using noninvasive brain stimulation will essentially rely on accurate identification of potential responders and non-responders for these treatment modalities. In this regard, future studies will need to consider distinct phenotypic presentations of AVH in patients with schizophrenia, together with the putative functional neurocircuitry underlying these phenotypes. PMID:26528145

  16. Descending brain neurons in the cricket Gryllus bimaculatus (de Geer): auditory responses and impact on walking.

    PubMed

    Zorović, Maja; Hedwig, Berthold

    2013-01-01

    The activity of four types of sound-sensitive descending brain neurons in the cricket Gryllus bimaculatus was recorded intracellularly while animals were standing or walking on an open-loop trackball system. In a neuron with a contralaterally descending axon, the male calling song elicited responses that copied the pulse pattern of the song during standing and walking. The accuracy of pulse copying increased during walking. Neurons with ipsilaterally descending axons responded weakly to sound only during standing. The responses were mainly to the first pulse of each chirp, whereas the complete pulse pattern of a chirp was not copied. During walking the auditory responses were suppressed in these neurons. The spiking activity of all four neuron types was significantly correlated to forward walking velocity, indicating their relevance for walking. Additionally, injection of depolarizing current elicited walking and/or steering in three of four neuron types described. In none of the neurons was the spiking activity both sufficient and necessary to elicit and maintain walking behaviour. Some neurons showed arborisations in the lateral accessory lobes, pointing to the relevance of this brain region for cricket audition and descending motor control.

  17. [Pathologic findings in auditory brain stem evoked potentials in 10 children with brain stem tumors].

    PubMed

    Kraus, J; Lehovský, M; Procházková, M

    1990-11-01

    The authors examined brain stem acoustic evoked potentials (BAEP) in 10 children aged 2-14 years with tumours in the posterior cranial fossa infiltrating the cerebellum and brain stem. The tumours were diagnosed by computed tomography and, in eight patients, confirmed at operation. A monaural click was used for stimulation. The authors assessed wave latencies and interpeak intervals, investigated the presence of individual waves, and evaluated their amplitudes, comparing the results with normal values. In abnormal findings the latency of components was delayed; in all children the affected components were those generated rostrally from the site of the lesion. In four patients with severe involvement of the brain stem the investigated components were absent. In nine patients the BAEP evoked by monaural stimulation helped to assess the lateralization of the lesion. In eight patients a bilateral abnormality of wave V was recorded. The most sensitive indicator of the site of the lesion was the interval between two consecutive waves.

  18. Musical brains: a study of spontaneous and evoked musical sensations without external auditory stimuli.

    PubMed

    Goycoolea, Marcos V; Mena, Ismael; Neubauer, Sonia G; Levy, Raquel G; Grez, Margarita Fernández; Berger, Claudia G

    2007-07-01

    Our observations confirm that musical sensations with no external stimuli, either spontaneous or evoked, occur in normal individuals and that a biological substrate can be demonstrated by brain single photon emission computed tomography (SPECT). There are individuals, usually musicians, who are seemingly able to evoke and/or have spontaneous musical sensations without external auditory stimuli. However, to date there is no available evidence to determine if it is feasible to have musical sensations without using external sensory receptors, or if there is a biological substrate for these sensations. A group of 100 musicians and another of 150 otolaryngologists were asked if they had spontaneous musical auditory sensations and/or were capable of evoking them. SPECT evaluations with Tc(99m)-HMPAO were conducted in six female musicians while they were evoking these sensations or, in one case, while she was having them spontaneously. In three of them an additional SPECT was conducted under basal conditions (the subjects having been asked to avoid evoking music). In all, 97 of 100 musicians had spontaneous musical sensations; all 100 could evoke and modify them. Of the 150 otolaryngologists, 18 (12%) were musicians. Of the 132 nonmusicians, spontaneous musical sensations occurred in 52 (39.4%); 72 (54.5%) could evoke them and 23 (17.4%) were able to modify them; 58 (43.9%) neither had spontaneous musical sensations nor could evoke them. The musical sensations of the 72 otolaryngologists who could evoke them were less elaborate than those of musicians. NeuroSPECT during voluntary musical autoevocation demonstrated significant (>2 SD) increased activation of executive frontal cortex in Brodmann areas 9 and 10, secondary visual cortex (area 17), and paracingulate (areas 31 and 32). There was also activation in the para-executive frontal cortex (areas 45 and 46). In the basal ganglia there was activation in thalamus and lentiform nucleus. Deactivation below 2 SD was demonstrated by mean

  19. An online brain-computer interface based on shifting attention to concurrent streams of auditory stimuli

    NASA Astrophysics Data System (ADS)

    Hill, N. J.; Schölkopf, B.

    2012-04-01

    We report on the development and online testing of an electroencephalogram-based brain-computer interface (BCI) that aims to be usable by completely paralysed users—for whom visual or motor-system-based BCIs may not be suitable, and among whom reports of successful BCI use have so far been very rare. The current approach exploits covert shifts of attention to auditory stimuli in a dichotic-listening stimulus design. To compare the efficacy of event-related potentials (ERPs) and steady-state auditory evoked potentials (SSAEPs), the stimuli were designed such that they elicited both ERPs and SSAEPs simultaneously. Trial-by-trial feedback was provided online, based on subjects' modulation of N1 and P3 ERP components measured during single 5 s stimulation intervals. All 13 healthy subjects were able to use the BCI, with performance in a binary left/right choice task ranging from 75% to 96% correct across subjects (mean 85%). BCI classification was based on the contrast between stimuli in the attended stream and stimuli in the unattended stream, making use of every stimulus, rather than contrasting frequent standard and rare ‘oddball’ stimuli. SSAEPs were assessed offline: for all subjects, spectral components at the two exactly known modulation frequencies allowed discrimination of pre-stimulus from stimulus intervals, and of left-only stimuli from right-only stimuli when one side of the dichotic stimulus pair was muted. However, attention modulation of SSAEPs was not sufficient for single-trial BCI communication, even when the subject's attention was clearly focused well enough to allow classification of the same trials via ERPs. ERPs clearly provided a superior basis for BCI. The ERP results are a promising step towards the development of a simple-to-use, reliable yes/no communication system for users in the most severely paralysed states, as well as potential attention-monitoring and -training applications outside the context of assistive technology.
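    The decision rule described (scoring every stimulus in both streams and contrasting attended against unattended evidence, rather than relying only on rare oddballs) can be illustrated with a minimal sketch. The linear discriminant classifier, feature dimensionality, and synthetic data below are assumptions, not the authors' implementation:

        # Sketch: a left/right choice from per-stimulus ERP evidence. An LDA scores
        # every epoch, and the side whose stream accumulates more "attended"
        # evidence wins the 5 s trial. All data here are synthetic placeholders.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(1)

        # Training data: ERP feature vectors labelled attended (1) / unattended (0).
        X_train = rng.standard_normal((400, 32))
        y_train = rng.integers(0, 2, size=400)
        lda = LinearDiscriminantAnalysis().fit(X_train, y_train)

        def decide_trial(left_epochs, right_epochs):
            """Return 'left' or 'right' from the stimulus epochs of one trial."""
            left_evidence = lda.decision_function(left_epochs).sum()
            right_evidence = lda.decision_function(right_epochs).sum()
            return "left" if left_evidence > right_evidence else "right"

        print(decide_trial(rng.standard_normal((8, 32)),
                           rng.standard_normal((8, 32))))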

  20. Hippocampal volume and auditory attention on a verbal memory task with adult survivors of pediatric brain tumor.

    PubMed

    Jayakar, Reema; King, Tricia Z; Morris, Robin; Na, Sabrina

    2015-03-01

    We examined the nature of verbal memory deficits and the possible hippocampal underpinnings in long-term adult survivors of childhood brain tumor. 35 survivors (M = 24.10 ± 4.93 years at testing; 54% female), on average 15 years post-diagnosis, and 59 typically developing adults (M = 22.40 ± 4.35 years, 54% female) participated. Automated FMRIB Software Library (FSL) tools were used to measure hippocampal, putamen, and whole brain volumes. The California Verbal Learning Test-Second Edition (CVLT-II) was used to assess verbal memory. Hippocampal, F(1, 91) = 4.06, ηp² = .04; putamen, F(1, 91) = 11.18, ηp² = .11; and whole brain, F(1, 92) = 18.51, ηp² = .17, volumes were significantly lower for survivors than controls (p < .05). Hippocampus and putamen volumes were significantly correlated (r = .62, p < .001) with each other, but not with total brain volume (r = .09; r = .08), for survivors and controls. Verbal memory indices of auditory attention list span (Trial 1: F(1, 92) = 12.70, η² = .12) and final list learning (Trial 5: F(1, 92) = 6.01, η² = .06) were significantly lower for survivors (p < .05). Total hippocampal volume in survivors was significantly correlated (r = .43, p = .01) with auditory attention, but none of the other CVLT-II indices. Secondary analyses for the effect of treatment factors are presented. Volumetric differences between survivors and controls exist for the whole brain and for subcortical structures on average 15 years post-diagnosis. Treatment factors seem to have a unique effect on subcortical structures. Memory differences between survivors and controls are largely contingent upon auditory attention list span. Only hippocampal volume is associated with the auditory attention list span component of verbal memory. These findings are particularly robust for survivors treated with radiation. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  1. Abnormal Effective Connectivity in the Brain is Involved in Auditory Verbal Hallucinations in Schizophrenia.

    PubMed

    Li, Baojuan; Cui, Long-Biao; Xi, Yi-Bin; Friston, Karl J; Guo, Fan; Wang, Hua-Ning; Zhang, Lin-Chuan; Bai, Yuan-Han; Tan, Qing-Rong; Yin, Hong; Lu, Hongbing

    2017-02-21

    Information flow among auditory and language processing-related regions implicated in the pathophysiology of auditory verbal hallucinations (AVHs) in schizophrenia (SZ) remains unclear. In this study, we used stochastic dynamic causal modeling (sDCM) to quantify connections among the left dorsolateral prefrontal cortex (inner speech monitoring), auditory cortex (auditory processing), hippocampus (memory retrieval), thalamus (information filtering), and Broca's area (language production) in 17 first-episode drug-naïve SZ patients with AVHs, 15 without AVHs, and 19 healthy controls using resting-state functional magnetic resonance imaging. Finally, we performed receiver operating characteristic (ROC) analysis and correlation analysis between image measures and symptoms. sDCM revealed an increased sensitivity of auditory cortex to its thalamic afferents and a decrease in hippocampal sensitivity to auditory inputs in SZ patients with AVHs. The area under the ROC curve showed the diagnostic value of these two connections to distinguish SZ patients with AVHs from those without AVHs. Furthermore, we found a positive correlation between the strength of the connectivity from Broca's area to the auditory cortex and the severity of AVHs. These findings demonstrate, for the first time, augmented AVH-specific excitatory afferents from the thalamus to the auditory cortex in SZ patients, resulting in auditory perception without external auditory stimuli. Our results provide insights into the neural mechanisms underlying AVHs in SZ. This thalamic-auditory cortical-hippocampal dysconnectivity may also serve as a diagnostic biomarker of AVHs in SZ and a therapeutic target based on direct in vivo evidence.
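    The ROC step, testing how well the two identified connection strengths separate patients with AVHs from those without, can be illustrated as follows; the connection-strength values are synthetic stand-ins for the sDCM estimates, not the study's data:

        # Sketch: ROC analysis of one effective-connectivity parameter as a marker
        # separating patients with AVHs from patients without (synthetic values).
        import numpy as np
        from sklearn.metrics import roc_auc_score, roc_curve

        rng = np.random.default_rng(2)
        strength_avh = rng.normal(0.6, 0.15, size=17)      # hypothetical values, with AVHs
        strength_no_avh = rng.normal(0.4, 0.15, size=15)   # hypothetical values, without AVHs

        scores = np.concatenate([strength_avh, strength_no_avh])
        labels = np.concatenate([np.ones(17), np.zeros(15)])

        auc = roc_auc_score(labels, scores)
        fpr, tpr, thr = roc_curve(labels, scores)
        best = np.argmax(tpr - fpr)                        # Youden's J
        print(f"AUC = {auc:.2f}; best cut-off = {thr[best]:.2f}")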

  2. A Case of Generalized Auditory Agnosia with Unilateral Subcortical Brain Lesion

    PubMed Central

    Suh, Hyee; Kim, Soo Yeon; Kim, Sook Hee; Chang, Jae Hyeok; Shin, Yong Beom; Ko, Hyun-Yoon

    2012-01-01

    The mechanisms and functional anatomy underlying the early stages of speech perception are still not well understood. Auditory agnosia is a deficit of auditory object processing defined as an inability to recognize spoken language and/or nonverbal environmental sounds and music despite adequate hearing, while spontaneous speech, reading and writing are preserved. Usually, bilateral or unilateral temporal lobe lesions, especially of the transverse gyri, are responsible for auditory agnosia. Subcortical lesions without cortical damage rarely cause auditory agnosia. We present a 73-year-old right-handed male with generalized auditory agnosia caused by a unilateral subcortical lesion. He was not able to repeat or write to dictation, but his spontaneous speech was fluent and comprehensible. He could understand and read written words and phrases. His auditory brainstem evoked potentials and audiometry were intact. This case suggests that a subcortical lesion involving the unilateral acoustic radiation can cause generalized auditory agnosia. PMID:23342322

  3. A case of generalized auditory agnosia with unilateral subcortical brain lesion.

    PubMed

    Suh, Hyee; Shin, Yong-Il; Kim, Soo Yeon; Kim, Sook Hee; Chang, Jae Hyeok; Shin, Yong Beom; Ko, Hyun-Yoon

    2012-12-01

    The mechanisms and functional anatomy underlying the early stages of speech perception are still not well understood. Auditory agnosia is a deficit of auditory object processing defined as an inability to recognize spoken language and/or nonverbal environmental sounds and music despite adequate hearing, while spontaneous speech, reading and writing are preserved. Usually, bilateral or unilateral temporal lobe lesions, especially of the transverse gyri, are responsible for auditory agnosia. Subcortical lesions without cortical damage rarely cause auditory agnosia. We present a 73-year-old right-handed male with generalized auditory agnosia caused by a unilateral subcortical lesion. He was not able to repeat or write to dictation, but his spontaneous speech was fluent and comprehensible. He could understand and read written words and phrases. His auditory brainstem evoked potentials and audiometry were intact. This case suggests that a subcortical lesion involving the unilateral acoustic radiation can cause generalized auditory agnosia.

  4. Effect of stimulus intensity level on auditory middle latency response brain maps in human adults.

    PubMed

    Tucker, D A; Dietrich, S; McPherson, D L; Salamat, M T

    2001-05-01

    Auditory middle latency response (AMLR) brain maps were obtained in 11 young adults with normal hearing. AMLR waveforms were elicited with monaural clicks presented at three stimulus intensity levels (50, 70, and 90 dB nHL). Recordings were made for right and left ear stimulus presentations. All recordings were obtained with each subject awake and with eyes open. Peak-to-peak amplitudes and absolute latencies of the AMLR Pa and Pb waveforms were measured at the Cz electrode site. Pa and Pb waveforms were present 100 percent of the time in response to the 90 dB nHL presentation. The prevalence of Pa and Pb to the 70 dB nHL presentation varied from 86 to 95 percent. The prevalence of Pa and Pb to the 50 dB nHL stimulus never reached 100 percent, ranging from 68 to 77 percent. No significant ear effect was seen for amplitude or latency measures of Pa or Pb. AMLR brain maps of the voltage field distributions of Pa and Pb waveforms showed different topographic features. Scalp topography of the Pa waveform was altered by a reduction in stimulus intensity level. At 90 dB nHL, the Pa brain map showed a large midline positivity over the frontal and central scalp areas. At lower stimulus intensity levels, frontal positivity was reduced, and scalp negativity over occipital regions was increased. Pb scalp topography was also altered by a reduction in stimulus intensity level. Varying the stimulus intensity significantly altered Pa and Pb distributions of amplitude and latency measures. Pa and Pb distributions were skewed regardless of stimulus intensity.

  5. MULTICHANNEL ANALYZER

    DOEpatents

    Kelley, G.G.

    1959-11-10

    A multichannel pulse analyzer having several window amplifiers, each amplifier serving one group of channels, with a single fast pulse-lengthener and a single novel interrogation circuit serving all channels is described. A pulse followed too closely timewise by another pulse is disregarded by the interrogation circuit to prevent errors due to pulse pileup. The window amplifiers are connected to the pulse lengthener output, rather than the linear amplifier output, so need not have the fast response characteristic formerly required.

  6. Auditory brain response modified by temporal deviation of language rhythm: an auditory event-related potential study.

    PubMed

    Jomori, Izumi; Hoshiyama, Minoru

    2009-10-01

    The effects of temporal disruption of language rhythm in Japanese on auditory evoked potentials were investigated in normal subjects. Auditory event-related potentials (AERP) were recorded following syllables presented with a natural or a deviated language rhythm, the latter created by inserting silent intervals of varying duration (0-400 ms) between syllables. In another experiment, the language speed was changed to assess the effect of a deviant rhythm relative to language speed on the AERP. Prolonging the intervals did not affect the N100-P150 components until the inserted interval reached 400 ms, whereas the negative component (early negativity, EN), peaking at 250-300 ms, was enhanced when the interval was 100 ms or more. The N100-P150 components following deviated language rhythms did not change at the fast speed but did at the standard and slow speeds. We considered that the N100-P150 components were changed by the mixed effects of adaptation and prediction related to the reading speed, and that EN was evoked by the deviated language rhythm in a different way from that which caused the N100-P150 changes, possibly via a mismatch detection process between the deviant rhythm and an intrinsic rehearsed rhythm.

  7. Brain stem auditory nuclei and their connections in a carnivorous marsupial, the northern native cat (Dasyurus hallucatus).

    PubMed

    Aitkin, L M; Byers, M; Nelson, J E

    1986-01-01

    The cytoarchitecture and connections of the brain stem auditory nuclei in the marsupial native cat (Dasyurus hallucatus) were studied using Nissl material in conjunction with the retrograde transport of horseradish peroxidase injected into the inferior colliculus. Some features different from those of Eutheria include the disposition of the cochlear nuclear complex medial to the restiform body, a lack of large spherical cells in the anteroventral cochlear nucleus, a small medial superior olive, and a large superior paraolivary nucleus.

  8. Effect of hearing aids on auditory function in infants with perinatal brain injury and severe hearing loss.

    PubMed

    Moreno-Aguirre, Alma Janeth; Santiago-Rodríguez, Efraín; Harmony, Thalía; Fernández-Bouzas, Antonio

    2012-01-01

    Approximately 2-4% of newborns with perinatal risk factors present with hearing loss. Our aim was to analyze the effect of hearing aid use on auditory function evaluated based on otoacoustic emissions (OAEs), auditory brain responses (ABRs) and auditory steady state responses (ASSRs) in infants with perinatal brain injury and profound hearing loss. A prospective, longitudinal study of auditory function in infants with profound hearing loss. Right side hearing before and after hearing aid use was compared with left side hearing (not stimulated and used as control). All infants were subjected to OAE, ABR and ASSR evaluations before and after hearing aid use. The average ABR threshold decreased from 90.0 to 80.0 dB (p = 0.003) after six months of hearing aid use. In the left ear, which was used as a control, the ABR threshold decreased from 94.6 to 87.6 dB, which was not significant (p>0.05). In addition, the ASSR threshold in the 4000-Hz frequency decreased from 89 dB to 72 dB (p = 0.013) after six months of right ear hearing aid use; the other frequencies in the right ear and all frequencies in the left ear did not show significant differences in any of the measured parameters (p>0.05). OAEs were absent in the baseline test and showed no changes after hearing aid use in the right ear (p>0.05). This study provides evidence that early hearing aid use decreases the hearing threshold in ABR and ASSR assessments with no functional modifications in the auditory receptor, as evaluated by OAEs.

  9. Effect of Hearing Aids on Auditory Function in Infants with Perinatal Brain Injury and Severe Hearing Loss

    PubMed Central

    Moreno-Aguirre, Alma Janeth; Santiago-Rodríguez, Efraín; Harmony, Thalía; Fernández-Bouzas, Antonio

    2012-01-01

    Background Approximately 2–4% of newborns with perinatal risk factors present with hearing loss. Our aim was to analyze the effect of hearing aid use on auditory function evaluated based on otoacoustic emissions (OAEs), auditory brain responses (ABRs) and auditory steady state responses (ASSRs) in infants with perinatal brain injury and profound hearing loss. Methodology/Principal Findings A prospective, longitudinal study of auditory function in infants with profound hearing loss. Right side hearing before and after hearing aid use was compared with left side hearing (not stimulated and used as control). All infants were subjected to OAE, ABR and ASSR evaluations before and after hearing aid use. The average ABR threshold decreased from 90.0 to 80.0 dB (p = 0.003) after six months of hearing aid use. In the left ear, which was used as a control, the ABR threshold decreased from 94.6 to 87.6 dB, which was not significant (p>0.05). In addition, the ASSR threshold in the 4000-Hz frequency decreased from 89 dB to 72 dB (p = 0.013) after six months of right ear hearing aid use; the other frequencies in the right ear and all frequencies in the left ear did not show significant differences in any of the measured parameters (p>0.05). OAEs were absent in the baseline test and showed no changes after hearing aid use in the right ear (p>0.05). Conclusions/Significance This study provides evidence that early hearing aid use decreases the hearing threshold in ABR and ASSR assessments with no functional modifications in the auditory receptor, as evaluated by OAEs. PMID:22808289

  10. Communication and control by listening: toward optimal design of a two-class auditory streaming brain-computer interface.

    PubMed

    Hill, N Jeremy; Moinuddin, Aisha; Häuser, Ann-Katrin; Kienzle, Stephan; Schalk, Gerwin

    2012-01-01

    Most brain-computer interface (BCI) systems require users to modulate brain signals in response to visual stimuli. Thus, they may not be useful to people with limited vision, such as those with severe paralysis. One important approach for overcoming this issue is auditory streaming, an approach whereby a BCI system is driven by shifts of attention between two simultaneously presented auditory stimulus streams. Motivated by the long-term goal of translating such a system into a reliable, simple yes-no interface for clinical usage, we aim to answer two main questions. First, we asked which of two previously published variants provides superior performance: a fixed-phase (FP) design in which the streams have equal period and opposite phase, or a drifting-phase (DP) design where the periods are unequal. We found FP to be superior to DP (p = 0.002): average performance levels were 80 and 72% correct, respectively. We were also able to show, in a pilot with one subject, that auditory streaming can support continuous control and neurofeedback applications: by shifting attention between ongoing left and right auditory streams, the subject was able to control the position of a paddle in a computer game. Second, we examined whether the system is dependent on eye movements, since it is known that eye movements and auditory attention may influence each other, and any dependence on the ability to move one's eyes would be a barrier to translation to paralyzed users. We discovered that, despite instructions, some subjects did make eye movements that were indicative of the direction of attention. However, there was no correlation, across subjects, between the reliability of the eye movement signal and the reliability of the BCI system, indicating that our system was configured to work independently of eye movement. Together, these findings are an encouraging step forward toward BCIs that provide practical communication and control options for the most severely paralyzed users.

  11. Risk of depression enhances auditory Pitch discrimination in the brain as indexed by the mismatch negativity.

    PubMed

    Bonetti, L; Haumann, N T; Vuust, P; Kliuchko, M; Brattico, E

    2017-10-01

    Depression is a state of aversion to activity and low mood that affects behaviour, thoughts, feelings and sense of well-being. Moreover, the individual depression trait is associated with altered auditory cortex activation and appraisal of the affective content of sounds. Mismatch negativity responses (MMNs) to acoustic feature changes (pitch, timbre, location, intensity, slide and rhythm) inserted in a musical sequence played in major or minor mode were recorded using magnetoencephalography (MEG) in 88 subclinical participants with depression risk. We found correlations between MMNs to slide and pitch and the level of depression risk reported by participants, indicating that higher MMNs correspond to higher risk of depression. Furthermore we found significantly higher MMN amplitudes to mistuned pitches within a major context compared to MMNs to pitch changes in a minor context. The brains of individuals with depression risk are more responsive to mistuned and fast pitch stimulus changes, even at a pre-attentive level. Considering the altered appraisal of affective contents of sounds in depression and the relevance of spectral pitch features for those contents in music and speech, we propose that individuals with subclinical depression risk are more tuned to tracking sudden pitch changes. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.

  12. Recovery function of the human brain stem auditory-evoked potential.

    PubMed

    Kevanishvili, Z; Lagidze, Z

    1979-01-01

    Amplitude reduction and peak latency prolongation were observed in the human brain stem auditory-evoked potential (BEP) with preceding (conditioning) stimulation. At a conditioning interval (CI) of 5 ms the alteration of BEP was greater than at a CI of 10 ms. At a CI of 10 ms the amplitudes of some BEP components (e.g. waves I and II) were more decreased than those of others (e.g. wave V), while the peak latency prolongation did not show any obvious component selectivity. At a CI of 5 ms, the extent of the amplitude decrement of individual BEP components differed less, while the increase in the peak latencies of the later components was greater than that of the earlier components. The alterations of the parameters of the test BEPs at both CIs are ascribed to the desynchronization of intrinsic neural events. The differential amplitude reduction at a CI of 10 ms is explained by the different durations of neural firings determining various effects of desynchronization upon the amplitudes of individual BEP components. The decrease in the extent of the component selectivity and the preferential increase in the peak latencies of the later BEP components observed at a CI of 5 ms are explained by the intensification of the mechanism of the relative refractory period.

  13. Localization of brain activity during auditory verbal short-term memory derived from magnetic recordings.

    PubMed

    Starr, A; Kristeva, R; Cheyne, D; Lindinger, G; Deecke, L

    1991-09-06

    We have studied magnetic and electrical fields of the brain in normal subjects during the performance of an auditory verbal short-term memory task. On each trial 3 digits, selected from the numbers 'one' through 'nine', were presented for memorization followed by a probe number which could or could not be a member of the preceding memory set. The subject pressed an appropriate response button and accuracy and reaction time were measured. Magnetic fields recorded from up to 63 sites over both hemispheres revealed a transient field at 110 ms to both the memory item and the probe consistent with a dipole source in Heschl's gyrus; a sustained magnetic field between 300 and 800 ms to just the memory items localized to the temporal lobe slightly deeper and posterior to Heschl's gyri; and a sustained magnetic field between 300 and 800 ms to just the probes localized bilaterally to the medio-basal temporal lobes. These results are related to clinical disorders of short-term memory in man.

  14. Non-invasive Brain Stimulation and Auditory Verbal Hallucinations: New Techniques and Future Directions

    PubMed Central

    Moseley, Peter; Alderson-Day, Ben; Ellison, Amanda; Jardri, Renaud; Fernyhough, Charles

    2016-01-01

    Auditory verbal hallucinations (AVHs) are the experience of hearing a voice in the absence of any speaker. Results from recent attempts to treat AVHs with neurostimulation (rTMS or tDCS) to the left temporoparietal junction have not been conclusive, but suggest that it may be a promising treatment option for some individuals. Some evidence suggests that the therapeutic effect of neurostimulation on AVHs may result from modulation of cortical areas involved in the ability to monitor the source of self-generated information. Here, we provide a brief overview of cognitive models and neurostimulation paradigms associated with treatment of AVHs, and discuss techniques that could be explored in the future to improve the efficacy of treatment, including alternating current and random noise stimulation. Technical issues surrounding the use of neurostimulation as a treatment option are discussed (including methods to localize the targeted cortical area, and the state-dependent effects of brain stimulation), as are issues surrounding the acceptability of neurostimulation for adolescent populations and individuals who experience qualitatively different types of AVH. PMID:26834541

  15. Frontal brain activation in premature infants' response to auditory stimuli in neonatal intensive care unit.

    PubMed

    Saito, Yuri; Fukuhara, Rie; Aoyama, Shiori; Toshima, Tamotsu

    2009-07-01

    Focusing on the very limited contact with the mother's voice that NICU infants have in the womb as well as after birth, the present study examined whether these infants can discriminate between their mothers' utterances and those of female nurses, in terms of the emotional bonding that is facilitated by prosodic utterances. Twenty-six premature infants were included in this study, and their cerebral blood flow was measured by near-infrared spectroscopy while they were exposed to auditory stimuli in the form of utterances made by their mothers and female nurses. A two (stimulus: mother and nurse) x two (recording site: right frontal area and left frontal area) analysis of variance (ANOVA) of the relative oxy-Hb values was conducted. The ANOVA showed a significant interaction between stimulus and recording site: the mother's and the nurse's voices elicited similar activation in the left frontal area but different responses in the right frontal area. We presume that the nurse's voice might become associated with pain and stress for premature infants. Our results showed that the premature infants reacted differently to the different voice stimuli. Therefore, we presume that both mothers' and nurses' voices represent positive stimuli for premature infants because both activate the frontal brain. Accordingly, we cannot explain our results only in terms of a state-dependent marker of infantile individual differences, but must also address the stressful trigger that nurses' voices may represent for NICU infants.
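    The 2 (stimulus) x 2 (recording site) ANOVA on relative oxy-Hb values can be sketched with statsmodels. The sketch below uses synthetic data in long format and, for brevity, fits a plain between-groups two-way ANOVA rather than the repeated-measures design the study actually used:

        # Sketch of a 2 (stimulus) x 2 (recording site) ANOVA on relative oxy-Hb,
        # with synthetic data; the real analysis was repeated-measures.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        from statsmodels.formula.api import ols

        rng = np.random.default_rng(3)
        stimulus = np.repeat(["mother", "nurse"], 52)                      # 26 infants x 2 sites
        site = np.tile(np.repeat(["left_frontal", "right_frontal"], 26), 2)
        oxy_hb = rng.normal(0.1, 0.05, size=104)                           # relative oxy-Hb values

        df = pd.DataFrame({"stimulus": stimulus, "site": site, "oxy_hb": oxy_hb})
        model = ols("oxy_hb ~ C(stimulus) * C(site)", data=df).fit()
        print(sm.stats.anova_lm(model, typ=2))   # main effects and interaction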

  16. Hyperpolarization-independent maturation and refinement of GABA/glycinergic connections in the auditory brain stem

    PubMed Central

    Lee, Hanmi; Bach, Eva; Noh, Jihyun; Delpire, Eric

    2015-01-01

    During development GABA and glycine synapses are initially excitatory before they gradually become inhibitory. This transition is due to a developmental increase in the activity of neuronal potassium-chloride cotransporter 2 (KCC2), which shifts the chloride equilibrium potential (ECl) to values more negative than the resting membrane potential. While the role of early GABA and glycine depolarizations in neuronal development has become increasingly clear, the role of the transition to hyperpolarization in synapse maturation and circuit refinement has remained an open question. Here we investigated this question by examining the maturation and developmental refinement of GABA/glycinergic and glutamatergic synapses in the lateral superior olive (LSO), a binaural auditory brain stem nucleus, in KCC2-knockdown mice, in which GABA and glycine remain depolarizing. We found that many key events in the development of synaptic inputs to the LSO, such as changes in neurotransmitter phenotype, strengthening and elimination of GABA/glycinergic connection, and maturation of glutamatergic synapses, occur undisturbed in KCC2-knockdown mice compared with wild-type mice. These results indicate that maturation of inhibitory and excitatory synapses in the LSO is independent of the GABA and glycine depolarization-to-hyperpolarization transition. PMID:26655825

  17. Event-related brain potentials to irrelevant auditory stimuli during selective listening: effects of channel probability.

    PubMed

    Akai, Toshiyuki

    2004-03-01

    The purpose of this study was to identify the cognitive process reflected by a positive deflection to irrelevant auditory stimuli (Pdi) during selective listening. Event-related brain potentials were recorded from 9 participants in a two-channel (left/right ears) selective listening task. Relative event probabilities of the relevant/irrelevant channels were 25%/75%, 50%/50%, and 75%/25%. With increasing probability of the relevant channel, behavioral performances (the reaction time and hit rate) for the targets within the relevant channel improved, reflecting development of a more robust attentional trace. At the same time, the amplitude of the early Pdi (200-300 ms after stimulus onset) elicited by the stimuli in the irrelevant channel with a decreased probability was enhanced in the central region. This positive relation between the strength of the attentional trace and the amplitude of the early Pdi suggests that the early Pdi is elicited by a mismatching between an incoming irrelevant stimulus and an attentional trace.

  18. Brain stem auditory-evoked potentials in different strains of rodents.

    PubMed

    Chen, T J; Chen, S S

    1990-04-01

    This study was conducted to evaluate variations in brain stem auditory-evoked potentials (BAEPs) among different strains of rodents. BAEPs were recorded by routine procedures from rodents of different strains or species. These included 22 Long-Evans, 28 Wistar and 28 Sprague-Dawley rats, and six hamsters. Within the first 10 ms, there were five consistent and reproducible positive waves of BAEPs in each rodent, named I, II, III, IV and V in correspondence with the nomenclature of waves I-VII in human BAEPs. These BAEPs were also similar to those observed in other vertebrates and in human controls. However, there were variations in waveforms and peak latencies among rodents, even in the rats of the same strain that came from different laboratory centres. At optimal stimulation intensity, usually around 90 dB, the mean latencies of the waves varied as follows: I, 1.23-1.53 ms; II, 1.88-2.28 ms; III, 2.62-2.94 ms; IV, 3.49-3.97 ms; and V, 4.47-5.14 ms. They were significantly different between species, but not in different strains of rats if they came from the same animal centre. The conduction time in the central portion illustrated by interpeak latencies between I and III, III and V, and I and V was dependent on the species (P less than 0.05). When recorded in a soundproof incubator, the minimal hearing threshold showed a significant species difference. The animal BAEP model can be employed for evaluating the physiological function or the pathological conditions of the brain stem. The confirmation of BAEP variations among different species or strains will be helpful in deciding which kind of rodents will be appropriate to serve as animal models for the various purposes of BAEP studies.

  19. Rey's Auditory Verbal Learning Test scores can be predicted from whole brain MRI in Alzheimer's disease.

    PubMed

    Moradi, Elaheh; Hallikainen, Ilona; Hänninen, Tuomo; Tohka, Jussi

    2017-01-01

    Rey's Auditory Verbal Learning Test (RAVLT) is a powerful neuropsychological tool for testing episodic memory, widely used for cognitive assessment in dementia and pre-dementia conditions. Several studies have shown that an impairment in RAVLT scores reflects well the underlying pathology caused by Alzheimer's disease (AD), thus making RAVLT an effective early marker to detect AD in persons with memory complaints. We investigated the association between RAVLT scores (RAVLT Immediate and RAVLT Percent Forgetting) and the structural brain atrophy caused by AD. The aim was to comprehensively study to what extent the RAVLT scores are predictable based on structural magnetic resonance imaging (MRI) data using machine learning approaches, as well as to find the most important brain regions for the estimation of RAVLT scores. For this, we built a predictive model to estimate RAVLT scores from gray matter density via an elastic net penalized linear regression model. The proposed approach provided highly significant cross-validated correlation between the estimated and observed RAVLT Immediate (R = 0.50) and RAVLT Percent Forgetting (R = 0.43) in a dataset consisting of 806 AD, mild cognitive impairment (MCI) or healthy subjects. In addition, the selected machine learning method provided more accurate estimates of RAVLT scores than the relevance vector regression used earlier for the estimation of RAVLT based on MRI data. The top predictors were medial temporal lobe structures and amygdala for the estimation of RAVLT Immediate and angular gyrus, hippocampus and amygdala for the estimation of RAVLT Percent Forgetting. Further, the conversion of MCI subjects to AD within 3 years could be predicted based on either observed or estimated RAVLT scores with an accuracy comparable to MRI-based biomarkers.
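    The core estimation step, an elastic-net penalized linear regression from gray matter density to RAVLT scores evaluated by cross-validated correlation, can be sketched as below. The synthetic data, feature count, and parameter grid are assumptions and do not reproduce the authors' preprocessing:

        # Sketch: elastic-net regression from gray matter density features to a
        # memory score, scored by cross-validated correlation (synthetic data).
        import numpy as np
        from scipy.stats import pearsonr
        from sklearn.linear_model import ElasticNetCV
        from sklearn.model_selection import cross_val_predict
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(4)
        n_subjects, n_voxels = 300, 500
        gm_density = rng.standard_normal((n_subjects, n_voxels))

        # Synthetic score driven by a small subset of "voxels" plus noise.
        true_w = np.zeros(n_voxels)
        true_w[:20] = 0.5
        ravlt_immediate = gm_density @ true_w + rng.normal(0, 2.0, n_subjects)

        model = make_pipeline(StandardScaler(),
                              ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5))
        predicted = cross_val_predict(model, gm_density, ravlt_immediate, cv=10)
        r, p = pearsonr(ravlt_immediate, predicted)
        print(f"cross-validated correlation R = {r:.2f} (p = {p:.1e})")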

  20. Auditory neglect.

    PubMed Central

    De Renzi, E; Gentilini, M; Barbieri, C

    1989-01-01

    Auditory neglect was investigated in normal controls and in patients with a recent unilateral hemispheric lesion, by requiring them to detect the interruptions that occurred in one ear in a sound delivered through earphones either mono-aurally or binaurally. Control patients accurately detected interruptions. One left brain damaged (LBD) patient missed only once in the ipsilateral ear while seven of the 30 right brain damaged (RBD) patients missed more than one signal in the monoaural test and nine patients did the same in the binaural test. Omissions were always more marked in the left ear and in the binaural test with a significant ear by test interaction. The lesion of these patients was in the parietal lobe (five patients) and the thalamus (four patients). The relation of auditory neglect to auditory extinction was investigated and found to be equivocal, in that there were seven RBD patients who showed extinction, but not neglect and, more importantly, two patients who exhibited the opposite pattern, thus challenging the view that extinction is a minor form of neglect. Also visual and auditory neglect were not consistently correlated, the former being present in nine RBD patients without auditory neglect and the latter in two RBD patients without visual neglect. The finding that in some RBD patients with auditory neglect omissions also occurred, though with less frequency, in the right ear, points to a right hemisphere participation in the deployment of attention not only to the contralateral, but also to the ipsilateral space. PMID:2732732

  1. Klinefelter syndrome has increased brain responses to auditory stimuli and motor output, but not to visual stimuli or Stroop adaptation

    PubMed Central

    Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders; Fedder, Jens; Laurberg, Peter; Østergaard, John R.; Hertz, Jens Michael; Pedersen, Anders Degn; Gravholt, Claus Højbjerg

    2016-01-01

    Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words “GREEN” or “RED” were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying “GREEN” or “RED” had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system. PMID:26958463

  2. Klinefelter syndrome has increased brain responses to auditory stimuli and motor output, but not to visual stimuli or Stroop adaptation.

    PubMed

    Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders; Fedder, Jens; Laurberg, Peter; Østergaard, John R; Hertz, Jens Michael; Pedersen, Anders Degn; Gravholt, Claus Højbjerg

    2016-01-01

    Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words "GREEN" or "RED" were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying "GREEN" or "RED" had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system.

  3. The Auditory Brain-Stem Response to Complex Sounds: A Potential Biomarker for Guiding Treatment of Psychosis

    PubMed Central

    Tarasenko, Melissa A.; Swerdlow, Neal R.; Makeig, Scott; Braff, David L.; Light, Gregory A.

    2014-01-01

    Cognitive deficits limit psychosocial functioning in schizophrenia. For many patients, cognitive remediation approaches have yielded encouraging results. Nevertheless, therapeutic response is variable, and outcome studies consistently identify individuals who respond minimally to these interventions. Biomarkers that can assist in identifying patients likely to benefit from particular forms of cognitive remediation are needed. Here, we describe an event-related potential (ERP) biomarker – the auditory brain-stem response (ABR) to complex sounds (cABR) – that appears to be particularly well-suited for predicting response to at least one form of cognitive remediation that targets auditory information processing. Uniquely, the cABR quantifies the fidelity of sound encoded at the level of the brainstem and midbrain. This ERP biomarker has revealed auditory processing abnormalities in various neurodevelopmental disorders, correlates with functioning across several cognitive domains, and appears to be responsive to targeted auditory training. We present preliminary cABR data from 18 schizophrenia patients and propose further investigation of this biomarker for predicting and tracking response to cognitive interventions. PMID:25352811

  4. The influence of an auditory distraction on rapid naming after a mild traumatic brain injury: a longitudinal study.

    PubMed

    Barrow, Irene M; Collins, Jay N; Britt, L D

    2006-11-01

    The purpose of this investigation was to examine speeded performance over time and the impact of a common auditory distraction on performance after a mild traumatic brain injury (MTBI). Fourteen adults (ages 18-53) treated for an MTBI and 14 age- and education-matched controls were asked to perform two speeded naming tasks. Both tasks were presented with and without a common auditory distraction. The MTBI group was tested within 5 days and again at 30 days, 60 days, and 6 months postinjury. Latency (ms) and accuracy of response were recorded. Initially, the MTBI group demonstrated significantly longer response latencies and lower accuracy levels for both tasks. Similar results were found at 30 days postinjury. At 60 days postinjury, no significant difference was found for task 1 accuracy. Significant differences remained for task 1 latency, task 2 latency, and task 2 accuracy. At 6 months postinjury, no significant differences were found. The presence of an auditory distraction differentially affected the MTBI group for task 2 accuracy upon initial testing and at 30 days postinjury only. The MTBI group performed both tasks significantly slower and less accurately than the control group upon initial testing and at 30 days postinjury. The presence of pop music further influenced accuracy of complex processing. At 60 days postinjury, accuracy of simple processing returned to preinjury levels and the auditory distraction no longer differentially influenced the MTBI group. All performance differences were resolved at 6 months postinjury.

  5. An Evaluation of Training with an Auditory P300 Brain-Computer Interface for the Japanese Hiragana Syllabary

    PubMed Central

    Halder, Sebastian; Takano, Kouji; Ora, Hiroki; Onishi, Akinari; Utsumi, Kota; Kansaku, Kenji

    2016-01-01

    Gaze-independent brain-computer interfaces (BCIs) are a possible communication channel for persons with paralysis. We investigated whether it is possible to use auditory stimuli to create a BCI for the Japanese Hiragana syllabary, which has 46 Hiragana characters. Additionally, we investigated whether training has an effect on accuracy despite the large number of different stimuli involved. Able-bodied participants (N = 6) were asked to select 25 syllables (out of fifty possible choices) using a two-step procedure: first the consonant (ten choices) and then the vowel (five choices). This was repeated on 3 separate days. Additionally, a person with spinal cord injury (SCI) participated in the experiment. Four out of six healthy participants reached Hiragana syllable accuracies above 70%, and the information transfer rate increased from 1.7 bits/min in the first session to 3.2 bits/min in the third session. The accuracy of the participant with SCI increased from 12% (0.2 bits/min) to 56% (2 bits/min) in session three. Reliable selections from a 10 × 5 matrix using auditory stimuli were possible, and performance increased with training. We were able to show that auditory P300 BCIs can be used for communication with up to fifty symbols. This enables the use of auditory P300 BCI technology in a variety of applications. PMID:27746716
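
    The abstract reports information transfer rates in bits/min but does not state how they were computed. As a rough illustration only, the sketch below implements the standard Wolpaw ITR formula commonly used for P300 spellers; the assumption that this study used the Wolpaw definition, and the example parameters, are the editor's, not the authors'.

        import math

        def wolpaw_itr(n_choices, accuracy, selections_per_min):
            """Bits per minute under the Wolpaw ITR definition (assumed; not stated in the abstract)."""
            p = accuracy
            if not 0.0 < p < 1.0:
                raise ValueError("accuracy must lie strictly between 0 and 1")
            bits_per_selection = (math.log2(n_choices)
                                  + p * math.log2(p)
                                  + (1 - p) * math.log2((1 - p) / (n_choices - 1)))
            return bits_per_selection * selections_per_min

        # Hypothetical example: a ten-choice consonant step at 70% accuracy,
        # assuming one selection per minute.
        print(wolpaw_itr(10, 0.70, 1.0))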

  7. Top-down controlled and bottom-up triggered orienting of auditory attention to pitch activate overlapping brain networks.

    PubMed

    Alho, Kimmo; Salmi, Juha; Koistinen, Sonja; Salonen, Oili; Rinne, Teemu

    2015-11-11

    A number of previous studies have suggested segregated networks of brain areas for top-down controlled and bottom-up triggered orienting of visual attention. However, the corresponding networks involved in auditory attention remain less studied. Our participants attended selectively to a tone stream with either a lower pitch or higher pitch in order to respond to infrequent changes in duration of attended tones. The participants were also required to shift their attention from one stream to the other when guided by a visual arrow cue. In addition to these top-down controlled cued attention shifts, infrequent task-irrelevant louder tones occurred in both streams to trigger attention in a bottom-up manner. Both cued shifts and louder tones were associated with enhanced activity in the superior temporal gyrus and sulcus, temporo-parietal junction, superior parietal lobule, inferior and middle frontal gyri, frontal eye field, supplementary motor area, and anterior cingulate gyrus. Thus, the present findings suggest that in the auditory modality, unlike in vision, top-down controlled and bottom-up triggered attention activate largely the same cortical networks. Comparison of the present results with our previous results from a similar experiment on spatial auditory attention suggests that fronto-parietal networks of attention to location or pitch overlap substantially. However, the auditory areas in the anterior superior temporal cortex might have a more important role in attention to the pitch than location of sounds. This article is part of a Special Issue entitled SI: Prediction and Attention.

  8. An online brain-computer interface based on shifting attention to concurrent streams of auditory stimuli

    PubMed Central

    Hill, N J; Schölkopf, B

    2012-01-01

    We report on the development and online testing of an EEG-based brain-computer interface (BCI) that aims to be usable by completely paralysed users—for whom visual or motor-system-based BCIs may not be suitable, and among whom reports of successful BCI use have so far been very rare. The current approach exploits covert shifts of attention to auditory stimuli in a dichotic-listening stimulus design. To compare the efficacy of event-related potentials (ERPs) and steady-state auditory evoked potentials (SSAEPs), the stimuli were designed such that they elicited both ERPs and SSAEPs simultaneously. Trial-by-trial feedback was provided online, based on subjects’ modulation of N1 and P3 ERP components measured during single 5-second stimulation intervals. All 13 healthy subjects were able to use the BCI, with performance in a binary left/right choice task ranging from 75% to 96% correct across subjects (mean 85%). BCI classification was based on the contrast between stimuli in the attended stream and stimuli in the unattended stream, making use of every stimulus, rather than contrasting frequent standard and rare “oddball” stimuli. SSAEPs were assessed offline: for all subjects, spectral components at the two exactly-known modulation frequencies allowed discrimination of pre-stimulus from stimulus intervals, and of left-only stimuli from right-only stimuli when one side of the dichotic stimulus pair was muted. However, attention-modulation of SSAEPs was not sufficient for single-trial BCI communication, even when the subject’s attention was clearly focused well enough to allow classification of the same trials via ERPs. ERPs clearly provided a superior basis for BCI. The ERP results are a promising step towards the development of a simple-to-use, reliable yes/no communication system for users in the most severely paralysed states, as well as potential attention-monitoring and -training applications outside the context of assistive technology. PMID:22333135

  9. Brain-computer interfaces using capacitive measurement of visual or auditory steady-state responses

    NASA Astrophysics Data System (ADS)

    Baek, Hyun Jae; Kim, Hyun Seok; Heo, Jeong; Lim, Yong Gyu; Park, Kwang Suk

    2013-04-01

    Objective. Brain-computer interface (BCI) technologies have been intensely studied to provide alternative communication tools entirely independent of neuromuscular activities. Current BCI technologies use electroencephalogram (EEG) acquisition methods that require unpleasant gel injections, impractical preparations and clean-up procedures. The next generation of BCI technologies requires practical, user-friendly, nonintrusive EEG platforms in order to facilitate the application of laboratory work in real-world settings. Approach. A capacitive electrode that does not require an electrolytic gel or direct electrode-scalp contact is a potential alternative to the conventional wet electrode in future BCI systems. We have proposed a new capacitive EEG electrode that contains a conductive polymer-sensing surface, which enhances electrode performance. This paper presents results from five subjects who exhibited visual or auditory steady-state responses according to BCI using these new capacitive electrodes. The steady-state visual evoked potential (SSVEP) spelling system and the auditory steady-state response (ASSR) binary decision system were employed. Main results. Offline tests demonstrated BCI performance high enough to be used in a BCI system (accuracy: 95.2%, ITR: 19.91 bpm for SSVEP BCI (6 s), accuracy: 82.6%, ITR: 1.48 bpm for ASSR BCI (14 s)) with the analysis time being slightly longer than that when wet electrodes were employed with the same BCI system (accuracy: 91.2%, ITR: 25.79 bpm for SSVEP BCI (4 s), accuracy: 81.3%, ITR: 1.57 bpm for ASSR BCI (12 s)). Subjects performed online BCI under the SSVEP paradigm in copy spelling mode and under the ASSR paradigm in selective attention mode with a mean information transfer rate (ITR) of 17.78 ± 2.08 and 0.7 ± 0.24 bpm, respectively. Significance. The results of these experiments demonstrate the feasibility of using our capacitive EEG electrode in BCI systems. This capacitive electrode may become a flexible and

  10. Mother’s voice and heartbeat sounds elicit auditory plasticity in the human brain before full gestation

    PubMed Central

    Webb, Alexandra R.; Heller, Howard T.; Benson, Carol B.; Lahav, Amir

    2015-01-01

    Brain development is largely shaped by early sensory experience. However, it is currently unknown whether, how early, and to what extent the newborn’s brain is shaped by exposure to maternal sounds when the brain is most sensitive to early life programming. The present study examined this question in 40 infants born extremely prematurely (between 25- and 32-wk gestation) in the first month of life. Newborns were randomized to receive auditory enrichment in the form of audio recordings of maternal sounds (including their mother’s voice and heartbeat) or routine exposure to hospital environmental noise. The groups were otherwise medically and demographically comparable. Cranial ultrasonography measurements were obtained at 30 ± 3 d of life. Results show that newborns exposed to maternal sounds had a significantly larger auditory cortex (AC) bilaterally compared with control newborns receiving standard care. The magnitude of the right and left AC thickness was significantly correlated with gestational age but not with the duration of sound exposure. Measurements of head circumference and the widths of the frontal horn (FH) and the corpus callosum (CC) were not significantly different between the two groups. This study provides evidence for experience-dependent plasticity in the primary AC before the brain has reached full-term maturation. Our results demonstrate that despite the immaturity of the auditory pathways, the AC is more adaptive to maternal sounds than environmental noise. Further studies are needed to better understand the neural processes underlying this early brain plasticity and its functional implications for future hearing and language development. PMID:25713382

  11. Auditory perception in the aging brain: the role of inhibition and facilitation in early processing.

    PubMed

    Stothart, George; Kazanina, Nina

    2016-11-01

    Aging affects the interplay between peripheral and cortical auditory processing. Previous studies have demonstrated that older adults are less able to regulate afferent sensory information and are more sensitive to distracting information. Using auditory event-related potentials we investigated the role of cortical inhibition on auditory and audiovisual processing in younger and older adults. Across puretone, auditory and audiovisual speech paradigms older adults showed a consistent pattern of inhibitory deficits, manifested as increased P50 and/or N1 amplitudes and an absent or significantly reduced N2. Older adults were still able to use congruent visual articulatory information to aid auditory processing but appeared to require greater neural effort to resolve conflicts generated by incongruent visual information. In combination, the results provide support for the Inhibitory Deficit Hypothesis of aging. They extend previous findings into the audiovisual domain and highlight older adults' ability to benefit from congruent visual information during speech processing.

  12. Are you listening? Brain activation associated with sustained nonspatial auditory attention in the presence and absence of stimulation.

    PubMed

    Seydell-Greenwald, Anna; Greenberg, Adam S; Rauschecker, Josef P

    2014-05-01

    Neuroimaging studies investigating the voluntary (top-down) control of attention largely agree that this process recruits several frontal and parietal brain regions. Since most studies used attention tasks requiring several higher-order cognitive functions (e.g. working memory, semantic processing, temporal integration, spatial orienting) as well as different attentional mechanisms (attention shifting, distractor filtering), it is unclear what exactly the observed frontoparietal activations reflect. The present functional magnetic resonance imaging study investigated, within the same participants, signal changes in (1) a "Simple Attention" task in which participants attended to a single melody, (2) a "Selective Attention" task in which they simultaneously ignored another melody, and (3) a "Beep Monitoring" task in which participants listened in silence for a faint beep. Compared to resting conditions with identical stimulation, all tasks produced robust activation increases in auditory cortex, cross-modal inhibition in visual and somatosensory cortex, and decreases in the default mode network, indicating that participants were indeed focusing their attention on the auditory domain. However, signal increases in frontal and parietal brain areas were only observed for tasks 1 and 2, but completely absent for task 3. These results lead to the following conclusions: under most conditions, frontoparietal activations are crucial for attention since they subserve higher-order cognitive functions inherently related to attention. However, under circumstances that minimize other demands, nonspatial auditory attention in the absence of stimulation can be maintained without concurrent frontal or parietal activations.

  13. Repetition suppression and repetition enhancement underlie auditory memory-trace formation in the human brain: an MEG study.

    PubMed

    Recasens, Marc; Leung, Sumie; Grimm, Sabine; Nowak, Rafal; Escera, Carles

    2015-03-01

    The formation of echoic memory traces has traditionally been inferred from the enhanced responses to its deviations. The mismatch negativity (MMN), an auditory event-related potential (ERP) elicited between 100 and 250ms after sound deviation is an indirect index of regularity encoding that reflects a memory-based comparison process. Recently, repetition positivity (RP) has been described as a candidate ERP correlate of direct memory trace formation. RP consists of repetition suppression and enhancement effects occurring in different auditory components between 50 and 250ms after sound onset. However, the neuronal generators engaged in the encoding of repeated stimulus features have received little interest. This study intends to investigate the neuronal sources underlying the formation and strengthening of new memory traces by employing a roving-standard paradigm, where trains of different frequencies and different lengths are presented randomly. Source generators of repetition enhanced (RE) and suppressed (RS) activity were modeled using magnetoencephalography (MEG) in healthy subjects. Our results show that, in line with RP findings, N1m (~95-150ms) activity is suppressed with stimulus repetition. In addition, we observed the emergence of a sustained field (~230-270ms) that showed RE. Source analysis revealed neuronal generators of RS and RE located in both auditory and non-auditory areas, like the medial parietal cortex and frontal areas. The different timing and location of neural generators involved in RS and RE points to the existence of functionally separated mechanisms devoted to acoustic memory-trace formation in different auditory processing stages of the human brain. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Formulae Describing Subjective Attributes for Sound Fields Based on a Model of the Auditory-Brain System

    NASA Astrophysics Data System (ADS)

    ANDO, Y.; SAKAI, H.; SATO, S.

    2000-04-01

    This article reviews the background of a workable model of the auditory-brain system, and formulae for calculating fundamental subjective attributes derived from the model. The model consists of the autocorrelation mechanisms, the interaural cross-correlation mechanism between the two auditory pathways, and the specialization of the human cerebral hemispheres for temporal and spatial factors of the sound field. Typical fundamental attributes, for example the apparent source width, the missing fundamental, and the speech intelligibility of sound fields in opera houses, are described in terms of the orthogonal spatial factors extracted from the interaural cross-correlation function and the orthogonal temporal factors extracted from the autocorrelation function, respectively. Also, other important subjective attributes of sound fields, namely subjective diffuseness and the subjective preferences of both listeners and performers for a single reflection, are demonstrated here.

  15. On the temporal window of auditory-brain system in connection with subjective responses

    NASA Astrophysics Data System (ADS)

    Mouri, Kiminori

    2003-08-01

    The human auditory-brain system processes information extracted from the autocorrelation function (ACF) of the source signal and the interaural cross-correlation function (IACF) of the binaural sound signals, which are associated with the left and right cerebral hemispheres, respectively. The purpose of this dissertation is to determine the desirable temporal window (2T: integration interval) for the ACF and IACF mechanisms. For the ACF mechanism, the change of Φ(0), i.e., the power of the ACF, was associated with the change of loudness, and it is shown that the recommended temporal window is given as about 30(τe)min [s]. The value of (τe)min is the minimum value of the effective duration of the running ACF of the source signal. It is worth noting from the EEG experiment that the most preferred delay time of the first reflection sound is determined by the piece indicating (τe)min in the source signal. For the IACF mechanism, the temporal window is determined as follows: the measured range of τIACC corresponding to the subjective angle of the moving image sound depends on the temporal window. Here, the moving image was simulated by the use of two loudspeakers located at +/-20° in the horizontal plane, reproducing amplitude-modulated band-limited noise alternately. It is found that the temporal window has a wide range of values from 0.03 to 1 [s] for modulation frequencies below 0.2 Hz. Thesis advisor: Yoichi Ando. Copies of this thesis, written in English, can be obtained from Kiminori Mouri, 5-3-3-1110 Harayama-dai, Sakai city, Osaka 590-0132, Japan. E-mail address: km529756@aol.com
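
    As a concrete illustration of the ACF and IACF quantities referred to in this and the preceding entry, the sketch below computes a normalized ACF frame, a simple operational estimate of the effective duration τe, and the interaural cross-correlation coefficient from a pair of binaural signals. The 10%-decay definition of τe and the +/-1 ms lag range are common conventions in Ando's framework but are assumptions here, not details taken from these abstracts.

        import numpy as np

        def normalized_acf(frame):
            """Normalized ACF phi(tau) of one analysis frame; phi(0) corresponds to the power term."""
            x = frame - frame.mean()
            acf = np.correlate(x, x, mode="full")[len(x) - 1:]   # non-negative lags only
            return acf / acf[0]

        def effective_duration(phi, fs, floor=0.1):
            """tau_e: first delay at which |phi| falls below 10% of phi(0) (assumed operational definition)."""
            below = np.nonzero(np.abs(phi) < floor)[0]
            return below[0] / fs if below.size else len(phi) / fs

        def iacc(left, right, fs, max_lag_ms=1.0):
            """Peak of the normalized interaural cross-correlation within +/-1 ms (equal-length frames assumed)."""
            xl = left - left.mean()
            xr = right - right.mean()
            denom = np.sqrt(np.sum(xl**2) * np.sum(xr**2))
            full = np.correlate(xl, xr, mode="full") / denom
            center = len(xr) - 1                                  # zero-lag index for equal-length inputs
            lag = int(max_lag_ms * 1e-3 * fs)
            return np.abs(full[center - lag:center + lag + 1]).max()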

  16. Immuno-modulator inter-alpha inhibitor proteins ameliorate complex auditory processing deficits in rats with neonatal hypoxic-ischemic brain injury.

    PubMed

    Threlkeld, Steven W; Lim, Yow-Pin; La Rue, Molly; Gaudet, Cynthia; Stonestreet, Barbara S

    2017-03-10

    Hypoxic-ischemic (HI) brain injury is recognized as a significant problem in the perinatal period, contributing to life-long language-learning and other cognitive impairments. Central auditory processing deficits are common in infants with hypoxic-ischemic encephalopathy and have been shown to predict language learning deficits in other at risk infant populations. Inter-alpha inhibitor proteins (IAIPs) are a family of structurally related plasma proteins that modulate the systemic inflammatory response to infection and have been shown to attenuate cell death and improve learning outcomes after neonatal brain injury in rats. Here, we show that systemic administration of IAIPs during the early HI injury cascade ameliorates complex auditory discrimination deficits as compared to untreated HI injured subjects, despite reductions in brain weight. These findings have significant clinical implications for improving central auditory processing deficits linked to language learning in neonates with HI related brain injury.

  17. ARX filtering of single-sweep movement-related brain macropotentials in mono- and multi-channel recordings.

    PubMed

    Capitanio, L; Filligoi, G C; Liberati, D; Cerutti, S; Babiloni, F; Fattorini, L; Urbano, A

    1994-03-01

    A technique of stochastic parametric identification and filtering is applied to the analysis of single-sweep event-related potentials. This procedure, called AutoRegressive with n eXogenous inputs (ARXn), models the recorded signal as the sum of n+1 signals: the background EEG activity, modeled as an autoregressive process driven by white noise, and n signals, one of which represents a filtered version of a reference signal carrying the average information contained in each sweep. The other (n-1) signals could represent various sources of noise (i.e., artifacts, EOG, etc.). An evaluation of the effects of both artifact suppression and accurate selection of the average signal on mono- or multi-channel scalp recordings is presented.
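
    A minimal sketch of the ARX idea described above, with a single exogenous input (the average ERP serving as the reference signal) and the coefficients estimated by ordinary least squares; the model orders and the single-input simplification are the editor's assumptions, not the authors' exact procedure.

        import numpy as np

        def arx_fit(y, u, na, nb):
            """Least-squares fit of y[t] = sum_i a_i*y[t-i] + sum_j b_j*u[t-j] + e[t],
            where y is one recorded sweep and u is the average ERP (exogenous input)."""
            start = max(na, nb)
            rows = []
            for t in range(start, len(y)):
                past_y = y[t - na:t][::-1]            # y[t-1] ... y[t-na]
                past_u = u[t - nb + 1:t + 1][::-1]    # u[t] ... u[t-nb+1]
                rows.append(np.concatenate([past_y, past_u]))
            Phi = np.asarray(rows)
            theta, *_ = np.linalg.lstsq(Phi, y[start:], rcond=None)
            return theta[:na], theta[na:]             # AR coefficients, exogenous (filter) coefficients

    The b coefficients play the role of the per-sweep filter applied to the reference signal; additional exogenous inputs (for example an EOG channel for artifact suppression) would simply add further columns to the regression matrix.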

  18. Differential relationships between personality and brain function in monetary and goal-oriented subjective motivation: multichannel near-infrared spectroscopy of healthy subjects.

    PubMed

    Sato, Toshimasa; Fukuda, Masato; Kameyama, Masaki; Suda, Masashi; Uehara, Toru; Mikuni, Masahiko

    2012-06-01

    The aim of this study was to examine relationships between personality traits and cerebral cortex reactivity under different motivating conditions. Relationships between personality traits assessed using the NEO Personality Inventory-Revised (NEO-PI-R) and cerebral cortex reactivity during a verbal fluency task monitored using multichannel near-infrared spectroscopy (NIRS) were examined under three different motivational conditions (control, monetary reward, and goal-oriented) in healthy young male volunteers. Significant correlations between cerebral cortex reactivity and personality traits were found in the frontopolar region: a positive correlation with agreeableness and a negative correlation with the neuroticism and conscientiousness scores of the NEO-PI-R under the three motivational conditions. Higher scores for agreeableness were more strongly associated with a greater increase in total hemoglobin concentration ([total-Hb]) under the goal-oriented and control conditions than under the monetary reward condition. In addition, higher scores for neuroticism were more strongly associated with a greater increase in deoxygenated hemoglobin concentration ([deoxy-Hb]) under the monetary reward condition than the goal-oriented condition, and higher scores for conscientiousness were more strongly associated with a greater increase in [deoxy-Hb] under control conditions than under the goal-oriented condition. Using multichannel NIRS, we found that certain personality traits of the big-five model are related to frontopolar reactivity. These relationships vary depending on the motivational condition when brain functions are monitored: agreeableness, neuroticism, and conscientiousness are all related to frontopolar reactivity depending on the motivational condition. © 2012 The Authors. Psychiatry and Clinical Neurosciences © 2012 Japanese Society of Psychiatry and Neurology.

  19. Music and natural sounds in an auditory steady-state response based brain-computer interface to increase user acceptance.

    PubMed

    Heo, Jeong; Baek, Hyun Jae; Hong, Seunghyeok; Chang, Min Hye; Lee, Jeong Su; Park, Kwang Suk

    2017-05-01

    Patients with total locked-in syndrome are conscious; however, they cannot express themselves because most of their voluntary muscles are paralyzed, and many of these patients have lost their eyesight. To improve the quality of life of these patients, there is an increasing need for communication-supporting technologies that leverage the remaining senses of the patient along with physiological signals. The auditory steady-state response (ASSR) is an electro-physiologic response to auditory stimulation that is amplitude-modulated by a specific frequency. By leveraging the phenomenon whereby ASSR is modulated by mind concentration, a brain-computer interface paradigm was proposed to classify the selective attention of the patient. In this paper, we propose an auditory stimulation method to minimize auditory stress by replacing the monotone carrier with familiar music and natural sounds for an ergonomic system. Piano and violin instrumentals were employed in the music sessions; the sounds of water streaming and cicadas singing were used in the natural sound sessions. Six healthy subjects participated in the experiment. Electroencephalograms were recorded using four electrodes (Cz, Oz, T7 and T8). Seven sessions were performed using different stimuli. The spectral power at 38 and 42Hz and their ratio for each electrode were extracted as features. Linear discriminant analysis was utilized to classify the selections for each subject. In offline analysis, the average classification accuracies with a modulation index of 1.0 were 89.67% and 87.67% using music and natural sounds, respectively. In online experiments, the average classification accuracies were 88.3% and 80.0% using music and natural sounds, respectively. Using the proposed method, we obtained significantly higher user-acceptance scores, while maintaining a high average classification accuracy. Copyright © 2017 Elsevier Ltd. All rights reserved.
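
    A minimal Python sketch of the feature extraction and classification pipeline described above: spectral power at the 38 and 42 Hz modulation frequencies plus their ratio, fed to linear discriminant analysis. The sampling rate, Welch parameters, and epoch layout are assumptions for illustration, not the authors' settings.

        import numpy as np
        from scipy.signal import welch
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        FS = 250                  # sampling rate in Hz (assumed)
        F_TARGETS = (38.0, 42.0)  # ASSR modulation frequencies from the abstract

        def assr_features(epoch):
            """Power at 38 Hz, power at 42 Hz, and their ratio for one electrode's epoch."""
            freqs, psd = welch(epoch, fs=FS, nperseg=FS)   # 1 Hz frequency resolution
            p38 = psd[np.argmin(np.abs(freqs - F_TARGETS[0]))]
            p42 = psd[np.argmin(np.abs(freqs - F_TARGETS[1]))]
            return np.array([p38, p42, p38 / p42])

        def train_lda(epochs, labels):
            """epochs: (n_trials, n_samples) for one electrode; labels: attended stream per trial."""
            X = np.vstack([assr_features(e) for e in epochs])
            clf = LinearDiscriminantAnalysis()
            clf.fit(X, labels)
            return clf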

  20. Brain bases for auditory stimulus-driven figure-ground segregation.

    PubMed

    Teki, Sundeep; Chait, Maria; Kumar, Sukhbinder; von Kriegstein, Katharina; Griffiths, Timothy D

    2011-01-05

    Auditory figure-ground segregation, listeners' ability to selectively hear out a sound of interest from a background of competing sounds, is a fundamental aspect of scene analysis. In contrast to the disordered acoustic environment we experience during everyday listening, most studies of auditory segregation have used relatively simple, temporally regular signals. We developed a new figure-ground stimulus that incorporates stochastic variation of the figure and background that captures the rich spectrotemporal complexity of natural acoustic scenes. Figure and background signals overlap in spectrotemporal space, but vary in the statistics of fluctuation, such that the only way to extract the figure is by integrating the patterns over time and frequency. Our behavioral results demonstrate that human listeners are remarkably sensitive to the appearance of such figures. In a functional magnetic resonance imaging experiment, aimed at investigating preattentive, stimulus-driven, auditory segregation mechanisms, naive subjects listened to these stimuli while performing an irrelevant task. Results demonstrate significant activations in the intraparietal sulcus (IPS) and the superior temporal sulcus related to bottom-up, stimulus-driven figure-ground decomposition. We did not observe any significant activation in the primary auditory cortex. Our results support a role for automatic, bottom-up mechanisms in the IPS in mediating stimulus-driven, auditory figure-ground segregation, which is consistent with accumulating evidence implicating the IPS in structuring sensory input and perceptual organization.

  1. Multichannel fiber-based diffuse reflectance spectroscopy for the rat brain exposed to a laser-induced shock wave: comparison between ipsi- and contralateral hemispheres

    NASA Astrophysics Data System (ADS)

    Miyaki, Mai; Kawauchi, Satoko; Okuda, Wataru; Nawashiro, Hiroshi; Takemura, Toshiya; Sato, Shunichi; Nishidate, Izumi

    2015-03-01

    Due to the considerable increase in terrorism using explosive devices, blast-induced traumatic brain injury (bTBI) has received much attention worldwide. However, little is known about the pathology and mechanism of bTBI. In our previous study, we found that cortical spreading depolarization (CSD) occurred in the hemisphere exposed to a laser-induced shock wave (LISW), which was followed by long-lasting hypoxemia-oligemia. However, there is no information on the events occurring in the contralateral hemisphere. In this study, we performed multichannel fiber-based diffuse reflectance spectroscopy on the rat brain exposed to an LISW and compared the results for the ipsilateral and contralateral hemispheres. A pair of optical fibers was placed on each of the exposed right and left parietal bones; white light was delivered to the brain through source fibers, and diffuse reflectance signals were collected with detection fibers for both hemispheres. An LISW was applied to the left (ipsilateral) hemisphere. By analyzing the reflectance signals, we evaluated the occurrence of CSD, blood volume and oxygen saturation for both hemispheres. In the ipsilateral hemisphere, we observed the occurrence of CSD and long-lasting hypoxemia-oligemia in all rats examined (n = 8), as in our previous study. In the contralateral hemisphere, on the other hand, no CSD was observed, but we observed oligemia in 7 of 8 rats and hypoxemia in 1 of 8 rats, suggesting a mechanism causing hypoxemia and/or oligemia that is not directly associated with CSD in the contralateral hemisphere.

  2. Reduced auditory M100 asymmetry in schizophrenia and dyslexia: applying a developmental instability approach to assess atypical brain asymmetry.

    PubMed

    Edgar, J Christopher; Yeo, Ron A; Gangestad, Steven W; Blake, Melissa B; Davis, John T; Lewine, Jeffrey D; Cañive, José M

    2006-01-01

    Although atypical structural and functional superior temporal gyrus (STG) asymmetries are frequently observed in patients with schizophrenia and individuals with dyslexia, their significance is unclear. One possibility is that atypical asymmetries reflect a general risk factor that can be seen across multiple neurodevelopmental conditions--a risk factor whose origins are best understood in the context of Developmental Instability (DI) theory. DI measures (minor physical anomalies (MPAs) and fluctuating asymmetries (FAs)) reflect perturbation of the genetic plan. The present study sought to assess whether the presence of peripheral indices of DI predicts anomalous functional auditory cortex asymmetry in schizophrenia patients and dyslexia subjects. The location of the auditory M100 response was used as a measure of functional STG asymmetry, as it has been reported that in controls (but not in subjects with schizophrenia or dyslexia) the M100 source location in the right hemisphere is shifted anterior to that seen for the left hemisphere. Whole-brain auditory evoked magnetic field data were successfully recorded from 14 male schizophrenia patients, 21 male subjects with dyslexia, and 16 normal male control subjects. MPA and FA measures were also obtained. Replicating previous studies, both schizophrenia and dyslexia groups showed less M100 asymmetry than did controls. Schizophrenia and dyslexia subjects also had higher MPA scores than normal controls. Although neither total MPA nor FA measures predicted M100 asymmetry, analyses on individual MPA items revealed a relationship between high palate and M100 asymmetry. Findings suggest that M100 positional asymmetry is not a diagnostically specific feature in several neurodevelopmental conditions. Continued research examining DI and brain asymmetry relationships is warranted.

  3. Towards User-Friendly Spelling with an Auditory Brain-Computer Interface: The CharStreamer Paradigm

    PubMed Central

    Höhne, Johannes; Tangermann, Michael

    2014-01-01

    By decoding brain signals into control commands, brain-computer interfaces (BCIs) aim to establish an alternative communication pathway for locked-in patients. In contrast to most visual BCI approaches, which use event-related potentials (ERPs) of the electroencephalogram, auditory BCI systems are challenged with ERP responses that are less class-discriminant between attended and unattended stimuli. Furthermore, these auditory approaches have more complex interfaces, which impose a substantial workload on their users. Aiming for a maximally user-friendly spelling interface, this study introduces a novel auditory paradigm: "CharStreamer". The speller can be used with an instruction as simple as "please attend to what you want to spell". The stimuli of CharStreamer comprise 30 spoken sounds of letters and actions. As each of them is represented by the sound of itself and not by an artificial substitute, it can be selected in a one-step procedure. The mental mapping effort (sound stimuli to actions) is thus minimized. Usability is further accounted for by an alphabetical stimulus presentation: contrary to random presentation orders, the user can foresee the presentation time of the target letter sound. Healthy, normal-hearing users (n = 10) of the CharStreamer paradigm displayed ERP responses that systematically differed between target and non-target sounds. Class-discriminant features, however, varied individually from the typical N1-P2 complex and P3 ERP components found in control conditions with random sequences. To fully exploit the sequential presentation structure of CharStreamer, novel data analysis approaches and classification methods were introduced. The results of online spelling tests showed that a competitive spelling speed can be achieved with CharStreamer. With respect to user rating, it clearly outperforms a control setup with random presentation sequences. PMID:24886978

  4. Plasticity in the neural coding of auditory space in the mammalian brain

    PubMed Central

    King, Andrew J.; Parsons, Carl H.; Moore, David R.

    2000-01-01

    Sound localization relies on the neural processing of monaural and binaural spatial cues that arise from the way sounds interact with the head and external ears. Neurophysiological studies of animals raised with abnormal sensory inputs show that the map of auditory space in the superior colliculus is shaped during development by both auditory and visual experience. An example of this plasticity is provided by monaural occlusion during infancy, which leads to compensatory changes in auditory spatial tuning that tend to preserve the alignment between the neural representations of visual and auditory space. Adaptive changes also take place in sound localization behavior, as demonstrated by the fact that ferrets raised and tested with one ear plugged learn to localize as accurately as control animals. In both cases, these adjustments may involve greater use of monaural spectral cues provided by the other ear. Although plasticity in the auditory space map seems to be restricted to development, adult ferrets show some recovery of sound localization behavior after long-term monaural occlusion. The capacity for behavioral adaptation is, however, task dependent, because auditory spatial acuity and binaural unmasking (a measure of the spatial contribution to the “cocktail party effect”) are permanently impaired by chronically plugging one ear, both in infancy but especially in adulthood. Experience-induced plasticity allows the neural circuitry underlying sound localization to be customized to individual characteristics, such as the size and shape of the head and ears, and to compensate for natural conductive hearing losses, including those associated with middle ear disease in infancy. PMID:11050215

  5. Auditory middle latency responses differ in right- and left-handed subjects: an evaluation through topographic brain mapping.

    PubMed

    Mohebbi, Mehrnaz; Mahmoudian, Saeid; Alborzi, Marzieh Sharifian; Najafi-Koopaie, Mojtaba; Farahani, Ehsan Darestani; Farhadi, Mohammad

    2014-09-01

    This study investigated the association of handedness with auditory middle latency responses (AMLRs) using topographic brain mapping, comparing amplitudes and latencies in frontocentral and hemispheric regions of interest (ROIs). The study included 44 healthy subjects with normal hearing (22 left-handed and 22 right-handed). AMLRs were recorded from 29 scalp electrodes in response to binaural 4-kHz tone bursts. Frontocentral ROI comparisons revealed that Pa and Pb amplitudes were significantly larger in the left-handed than the right-handed group. Topographic brain maps showed different distributions of AMLR components between the two groups. In hemispheric comparisons, Pa amplitude differed significantly across groups. A left-hemisphere emphasis of Pa was found in the right-handed group but not in the left-handed group. This study provides evidence that handedness is associated with AMLR components in frontocentral and hemispheric ROIs. Handedness should be considered an essential factor in the clinical or experimental use of AMLRs.

  6. Spatiotemporal properties of auditory intensity processing in multisensor MEG.

    PubMed

    Wyss, C; Boers, F; Kawohl, W; Arrubla, J; Vahedipour, K; Dammers, J; Neuner, I; Shah, N J

    2014-11-15

    Loudness dependence of auditory evoked potentials (LDAEP) evaluates loudness processing in the human auditory system and is often altered in patients with psychiatric disorders. Previous research has suggested that this measure may be used as an indicator of the central serotonergic system through the highly serotonergic innervation of the auditory cortex. However, differences among the commonly used analysis approaches (such as source analysis and single-electrode estimation) may lead to different results, putatively due to discrepancies in the underlying structures being measured. Therefore, it is important to learn more about how and where in the brain loudness variation is processed. We conducted a detailed investigation of the LDAEP generators and their temporal dynamics by means of multichannel magnetoencephalography (MEG). Evoked responses to brief tones of five different intensities were recorded from 19 healthy participants. We used magnetic field tomography in order to appropriately localize superficial as well as deep source generators, on which we conducted a time-series analysis. The results showed that apart from the auditory cortex other cortical sources exhibited activation during the N1/P2 time window. Analysis of time courses in the regions of interest revealed a sequential cortical activation from primary sensory areas, particularly the auditory and somatosensory cortex, to the posterior cingulate cortex (PCC) and to the premotor cortex (PMC). The additional activation within the PCC and PMC has implications for the analysis approaches used in LDAEP research. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Auditory skills and brain morphology predict individual differences in adaptation to degraded speech.

    PubMed

    Erb, Julia; Henry, Molly J; Eisner, Frank; Obleser, Jonas

    2012-07-01

    Noise-vocoded speech is a spectrally highly degraded signal, but it preserves the temporal envelope of speech. Listeners vary considerably in their ability to adapt to this degraded speech signal. Here, we hypothesised that individual differences in adaptation to vocoded speech should be predictable by non-speech auditory, cognitive, and neuroanatomical factors. We tested 18 normal-hearing participants in a short-term vocoded speech-learning paradigm (listening to 100 4-band-vocoded sentences). Non-speech auditory skills were assessed using amplitude modulation (AM) rate discrimination, where modulation rates were centred on the speech-relevant rate of 4 Hz. Working memory capacities were evaluated (digit span and nonword repetition), and structural MRI scans were examined for anatomical predictors of vocoded speech learning using voxel-based morphometry. Listeners who learned faster to understand degraded speech also showed smaller thresholds in the AM discrimination task. This ability to adjust to degraded speech is furthermore reflected anatomically in increased grey matter volume in an area of the left thalamus (pulvinar) that is strongly connected to the auditory and prefrontal cortices. Thus, individual non-speech auditory skills and left thalamus grey matter volume can predict how quickly a listener adapts to degraded speech. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. Characteristics of Auditory Agnosia in a Child with Severe Traumatic Brain Injury: A Case Report

    ERIC Educational Resources Information Center

    Hattiangadi, Nina; Pillion, Joseph P.; Slomine, Beth; Christensen, James; Trovato, Melissa K.; Speedie, Lynn J.

    2005-01-01

    We present a case that is unusual in many respects from other documented incidences of auditory agnosia, including the mechanism of injury, age of the individual, and location of neurological insult. The clinical presentation is one of disturbance in the perception of spoken language, music, pitch, emotional prosody, and temporal auditory…

  10. Testing domain-general theories of perceptual awareness with auditory brain responses.

    PubMed

    Snyder, Joel S; Yerkes, Breanne D; Pitts, Michael A

    2015-06-01

    Past research has identified several candidate neural correlates of consciousness (NCCs) during visual perception. Recent research on auditory perception shows promise for establishing the generality of various NCCs across sensory modalities, as well as for revealing differences in how conscious processing unfolds in different sensory systems.

  11. Cortical Evoked Potentials and Hearing Aids in Individuals with Auditory Dys-Synchrony.

    PubMed

    Yuvaraj, Pradeep; Mannarukrishnaiah, Jayaram

    2015-12-01

    The purpose of the present study was to investigate the relationship between cortical processing of speech and benefit from hearing aids in individuals with auditory dys-synchrony. Data were collected from 38 individuals with auditory dys-synchrony. Participants were selected based on hearing thresholds, middle ear reflexes, otoacoustic emissions, and auditory brain stem responses. Cortical evoked potentials were recorded for click and speech stimuli. Participants with auditory dys-synchrony were fitted with bilateral multichannel wide dynamic range compression hearing aids. Aided and unaided speech identification scores for 40 words were obtained for each participant. Hierarchical cluster analysis using Ward's method clearly showed four subgroups of participants with auditory dys-synchrony based on the hearing aid benefit score (aided minus unaided speech identification score). Mean aided and unaided speech identification scores differed significantly in participants with auditory dys-synchrony. However, the mean unaided speech identification scores were not significantly different between the four subgroups. The N2 amplitude and P1 latency of the speech-evoked cortical potentials were significantly different between the four subgroups formed on the basis of hearing aid benefit scores. The results indicated that subgroups of individuals with auditory dys-synchrony who benefit from hearing aids exist. Individuals who benefitted from hearing aids showed decreased N2 amplitudes compared with those who did not. N2 amplitude is associated with greater suppression of background noise while processing speech.
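
    A small sketch of the subgrouping step described above: participants are clustered on the hearing aid benefit score (aided minus unaided speech identification score) with Ward's method and the tree is cut at four clusters. The benefit values below are hypothetical placeholders, not data from the study.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        benefit = np.array([5, 8, 35, 40, 2, 28, 12, 55, 60, 6], dtype=float)  # placeholder scores

        Z = linkage(benefit.reshape(-1, 1), method="ward")   # Ward's method on one feature per participant
        subgroup = fcluster(Z, t=4, criterion="maxclust")    # four subgroups, as in the abstract
        print(subgroup)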

  12. The effects of neck flexion on cerebral potentials evoked by visual, auditory and somatosensory stimuli and focal brain blood flow in related sensory cortices

    PubMed Central

    2012-01-01

    Background A flexed neck posture leads to non-specific activation of the brain. Sensory evoked cerebral potentials and focal brain blood flow have been used to evaluate the activation of the sensory cortex. We investigated the effects of a flexed neck posture on the cerebral potentials evoked by visual, auditory and somatosensory stimuli and focal brain blood flow in the related sensory cortices. Methods Twelve healthy young adults received right visual hemi-field, binaural auditory and left median nerve stimuli while sitting with the neck in a resting and flexed (20° flexion) position. Sensory evoked potentials were recorded from the right occipital region, Cz in accordance with the international 10–20 system, and 2 cm posterior from C4, during visual, auditory and somatosensory stimulations. The oxidative-hemoglobin concentration was measured in the respective sensory cortex using near-infrared spectroscopy. Results Latencies of the late component of all sensory evoked potentials significantly shortened, and the amplitude of auditory evoked potentials increased when the neck was in a flexed position. Oxidative-hemoglobin concentrations in the left and right visual cortices were higher during visual stimulation in the flexed neck position. The left visual cortex is responsible for receiving the visual information. In addition, oxidative-hemoglobin concentrations in the bilateral auditory cortex during auditory stimulation, and in the right somatosensory cortex during somatosensory stimulation, were higher in the flexed neck position. Conclusions Visual, auditory and somatosensory pathways were activated by neck flexion. The sensory cortices were selectively activated, reflecting the modalities in sensory projection to the cerebral cortex and inter-hemispheric connections. PMID:23199306

  13. Detection of brain magnetic fields with an atomic magnetometer

    NASA Astrophysics Data System (ADS)

    Xia, Hui; Hoffman, Dan; Baranga, Andrei; Romalis, Michael

    2006-05-01

    We report detection of magnetic fields generated by evoked brain activity with an atomic magnetometer. The measurements are performed with a high-density potassium magnetometer operating in a spin-exchange relaxation free regime. Compared to SQUID magnetometers which so far have been the only detectors capable of measuring the magnetic fields from the brain, atomic magnetometers have the advantages of higher sensitivity and spatial resolution, simple multi-channel recording, and no need for cryogenics. Using a multi-channel photodetector array we recorded magnetic fields from the brain correlated with an audio tone administered with a non-magnetic earphone. The spatial map of the magnetic field gives information about the location of the brain region responding to the auditory stimulation. Our results demonstrate the atomic magnetometer as an alternative and low cost technique for brain imaging applications, without using cryogenic apparatus.

  14. Far-field brainstem responses evoked by vestibular and auditory stimuli exhibit increases in interpeak latency as brain temperature is decreased

    NASA Technical Reports Server (NTRS)

    Hoffman, L. F.; Horowitz, J. M.

    1984-01-01

    The effect of decreasing of brain temperature on the brainstem auditory evoked response (BAER) in rats was investigated. Voltage pulses, applied to a piezoelectric crystal attached to the skull, were used to evoke stimuli in the auditory system by means of bone-conducted vibrations. The responses were recorded at 37 C and 34 C brain temperatures. The peaks of the BAER recorded at 34 C were delayed in comparison with the peaks from the 37 C wave, and the later peaks were more delayed than the earlier peaks. These results indicate that an increase in the interpeak latency occurs as the brain temperature is decreased. Preliminary experiments, in which responses to brief angular acceleration were used to measure the brainstem vestibular evoked response (BVER), have also indicated increases in the interpeak latency in response to the lowering of brain temperature.

  16. A trade-off between somatosensory and auditory related brain activity during object naming but not reading.

    PubMed

    Seghier, Mohamed L; Hope, Thomas M H; Prejawa, Susan; Parker Jones, 'Ōiwi; Vitkovitch, Melanie; Price, Cathy J

    2015-03-18

    The parietal operculum, particularly the cytoarchitectonic area OP1 of the secondary somatosensory area (SII), is involved in somatosensory feedback. Using fMRI with 58 human subjects, we investigated task-dependent differences in SII/OP1 activity during three familiar speech production tasks: object naming, reading and repeatedly saying "1-2-3." Bilateral SII/OP1 was significantly suppressed (relative to rest) during object naming, to a lesser extent when repeatedly saying "1-2-3" and not at all during reading. These results cannot be explained by task difficulty but the contrasting difference between naming and reading illustrates how the demands on somatosensory activity change with task, even when motor output (i.e., production of object names) is matched. To investigate what determined SII/OP1 deactivation during object naming, we searched the whole brain for areas where activity increased as that in SII/OP1 decreased. This across subject covariance analysis revealed a region in the right superior temporal sulcus (STS) that lies within the auditory cortex, and is activated by auditory feedback during speech production. The tradeoff between activity in SII/OP1 and STS was not observed during reading, which showed significantly more activation than naming in both SII/OP1 and STS bilaterally. These findings suggest that, although object naming is more error prone than reading, subjects can afford to rely more or less on somatosensory or auditory feedback during naming. In contrast, fast and efficient error-free reading places more consistent demands on both types of feedback, perhaps because of the potential for increased competition between lexical and sublexical codes at the articulatory level.

  17. Brain networks of novelty-driven involuntary and cued voluntary auditory attention shifting.

    PubMed

    Huang, Samantha; Belliveau, John W; Tengshe, Chinmayi; Ahveninen, Jyrki

    2012-01-01

    In everyday life, we need a capacity to flexibly shift attention between alternative sound sources. However, relatively little work has been done to elucidate the mechanisms of attention shifting in the auditory domain. Here, we used a mixed event-related/sparse-sampling fMRI approach to investigate this essential cognitive function. In each 10-sec trial, subjects were instructed to wait for an auditory "cue" signaling the location where a subsequent "target" sound was likely to be presented. The target was occasionally replaced by an unexpected "novel" sound in the uncued ear, to trigger involuntary attention shifting. To maximize the attention effects, cues, targets, and novels were embedded within dichotic 800-Hz vs. 1500-Hz pure-tone "standard" trains. The sound of clustered fMRI acquisition (starting at t = 7.82 sec) served as a controlled trial-end signal. Our approach revealed notable activation differences between the conditions. Cued voluntary attention shifting activated the superior intraparietal sulcus (IPS), whereas novelty-triggered involuntary orienting activated the inferior IPS and certain subareas of the precuneus. Clearly more widespread activations were observed during voluntary than involuntary orienting in the premotor cortex, including the frontal eye fields. Moreover, we found evidence for a frontoinsular-cingular attentional control network, consisting of the anterior insula, inferior frontal cortex, and medial frontal cortices, which were activated during both target discrimination and voluntary attention shifting. Finally, novels and targets activated much wider areas of superior temporal auditory cortices than shifting cues.

  18. Suppression and facilitation of auditory neurons through coordinated acoustic and midbrain stimulation: investigating a deep brain stimulator for tinnitus

    NASA Astrophysics Data System (ADS)

    Offutt, Sarah J.; Ryan, Kellie J.; Konop, Alexander E.; Lim, Hubert H.

    2014-12-01

    Objective. The inferior colliculus (IC) is the primary processing center of auditory information in the midbrain and is one site of tinnitus-related activity. One potential option for suppressing the tinnitus percept is through deep brain stimulation via the auditory midbrain implant (AMI), which is designed for hearing restoration and is already being implanted in deaf patients who also have tinnitus. However, to assess the feasibility of AMI stimulation for tinnitus treatment we first need to characterize the functional connectivity within the IC. Previous studies have suggested modulatory projections from the dorsal cortex of the IC (ICD) to the central nucleus of the IC (ICC), though the functional properties of these projections need to be determined. Approach. In this study, we investigated the effects of electrical stimulation of the ICD on acoustic-driven activity within the ICC in ketamine-anesthetized guinea pigs. Main Results. We observed that ICD stimulation induces both suppressive and facilitatory changes across the ICC that can occur immediately during stimulation and remain after stimulation. Additionally, ICD stimulation paired with broadband noise stimulation at a specific delay can induce greater suppressive than facilitatory effects, especially when stimulating in more rostral and medial ICD locations. Significance. These findings demonstrate that ICD stimulation can induce specific types of plastic changes in ICC activity, which may be relevant for treating tinnitus. By using the AMI with electrode sites positioned within the ICD and the ICC, the modulatory effects of ICD stimulation can be tested directly in tinnitus patients.

  19. Three-channel Lissajous' trajectory of human auditory brain-stem evoked potentials. III. Effects of click rate.

    PubMed

    Pratt, H; Bleich, N; Martin, W H

    1986-05-01

    Three-channel Lissajous' trajectories (3-CLT) of the human auditory brain-stem evoked potentials (ABEPs) were recorded from 14 adult subjects using click rates of 10, 55 and 80/sec. The 3-CLTs were analysed and described in terms of their constituent planar segments and their trajectory amplitudes at each stimulus rate. Increasing stimulus rate resulted in an increase in planar segment duration, which was more pronounced for segments 'a' and 'e'; an increase in apex latency, which was more pronounced the later the component; and a decrease in planar segment size and peak trajectory amplitude, which was more pronounced the earlier the component. These findings support the involvement of synaptic efficacy changes in the effects of stimulus rate on the ABEP. The results are explained by overlapping convergence and divergence in the ascending auditory pathway. These results support the notion that the principal generator of each component is activated by the principal generator of the previous component, with some temporal overlap of their activities. Such temporal overlap may be minimized by using low-intensity, high-rate stimuli.

  20. Estimating the Intended Sound Direction of the User: Toward an Auditory Brain-Computer Interface Using Out-of-Head Sound Localization

    PubMed Central

    Nambu, Isao; Ebisawa, Masashi; Kogure, Masumi; Yano, Shohei; Hokari, Haruhide; Wada, Yasuhiro

    2013-01-01

    The auditory Brain-Computer Interface (BCI) using electroencephalograms (EEG) is a subject of intensive study. As a cue, auditory BCIs can deal with many of the characteristics of stimuli such as tone, pitch, and voices. Spatial information on auditory stimuli also provides useful information for a BCI. However, in a portable system, virtual auditory stimuli have to be presented spatially through earphones or headphones, instead of loudspeakers. We investigated the possibility of an auditory BCI using the out-of-head sound localization technique, which enables us to present virtual auditory stimuli to users from any direction, through earphones. The feasibility of a BCI using this technique was evaluated in an EEG oddball experiment and offline analysis. A virtual auditory stimulus was presented to the subject from one of six directions. Using a support vector machine, we were able to classify whether the subject attended the direction of a presented stimulus from EEG signals. The mean accuracy across subjects was 70.0% in the single-trial classification. When we used trial-averaged EEG signals as inputs to the classifier, the mean accuracy across seven subjects reached 89.5% (for 10-trial averaging). Further analysis showed that the P300 event-related potential responses from 200 to 500 ms in central and posterior regions of the brain contributed to the classification. In comparison with the results obtained from a loudspeaker experiment, we confirmed that stimulus presentation by out-of-head sound localization achieved similar event-related potential responses and classification performances. These results suggest that out-of-head sound localization enables us to provide a high-performance and loudspeaker-less portable BCI system. PMID:23437338
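    A minimal sketch of the kind of analysis the abstract describes (not the authors' code): epochs are averaged in groups within each class, mean amplitudes in the 200-500 ms window serve as P300-like features, and a linear support-vector machine is cross-validated. The sampling rate, epoch counts, channel count, and synthetic data are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
fs = 250                                                   # assumed sampling rate (Hz)
n_trials, n_channels, n_samples = 120, 8, int(0.8 * fs)    # epochs of 0.8 s

# Synthetic stand-in epochs; real use would load band-passed EEG epochs
# time-locked to each auditory stimulus, with labels 1 = attended direction.
X_epochs = rng.standard_normal((n_trials, n_channels, n_samples))
y = rng.integers(0, 2, n_trials)
X_epochs[y == 1, :, int(0.2 * fs):int(0.5 * fs)] += 0.5    # fake P300-like deflection

def average_trials(epochs, labels, k=10):
    """Average consecutive groups of k epochs within each class (trial averaging)."""
    Xa, ya = [], []
    for cls in np.unique(labels):
        cls_epochs = epochs[labels == cls]
        for i in range(0, len(cls_epochs) - k + 1, k):
            Xa.append(cls_epochs[i:i + k].mean(axis=0))
            ya.append(cls)
    return np.array(Xa), np.array(ya)

def p300_features(epochs):
    """Mean amplitude per channel in the 200-500 ms window."""
    return epochs[:, :, int(0.2 * fs):int(0.5 * fs)].mean(axis=2)

X_avg, y_avg = average_trials(X_epochs, y, k=10)
scores = cross_val_score(SVC(kernel="linear"), p300_features(X_avg), y_avg, cv=3)
print("cross-validated accuracy:", scores.mean())
```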

  1. Assessment of brain impairment with the Rey Auditory Verbal Learning Test: a comparison with other neuropsychological measures.

    PubMed

    Powell, J B; Cripe, L I; Dodrill, C B

    1991-01-01

    In this study the effectiveness of the Rey Auditory Verbal Learning Test (AVLT) in assessing patients with mixed brain impairment was compared with that of a number of other commonly used neuropsychological measures. Subjects were 50 patients with a mixture of medically confirmed neuropathologies, and 50 controls with no evidence of neurological history. Groups were equated for age, education, and sex. The AVLT was administered as part of a full neuropsychological battery. Results indicated that all seven AVLT recall trials and the total of Trials I-V could significantly differentiate between the two groups (p <.001). The AVLT Trial V score performed best (U = 457.5, p <.0001), correctly predicting group membership for 74% of the subjects. This hit rate was better than that of any other single test in the Halstead-Reitan or Dodrill batteries, and was surpassed only by the Dodrill Discrimination Index. The potential usefulness of this test as part of a neuropsychological battery is discussed.

  2. Brain stem auditory evoked potentials in patients with multiple system atrophy with progressive autonomic failure (Shy-Drager syndrome).

    PubMed Central

    Prasher, D; Bannister, R

    1986-01-01

    Brain stem potentials from three groups of patients, namely those with pure progressive autonomic failure, Parkinson's disease and multisystem atrophy with progressive autonomic failure (Shy-Drager syndrome), were compared with each other and with a group of normal subjects. In virtually all the patients with multisystem atrophy with progressive autonomic failure the brain stem potentials were abnormal, in contrast to normal findings in Parkinson's disease. The closely associated group of patients with progressive autonomic failure alone also revealed no abnormalities of the BAEP. This separation of the two groups, Parkinson's disease and progressive autonomic failure, from multisystem atrophy with progressive autonomic failure is clinically important, as multiple system atrophy of the Shy-Drager type has extra-pyramidal features closely resembling Parkinsonism or a late-onset cerebellar degeneration. From the abnormalities of the brain stem response in multisystem atrophy with progressive autonomic failure, it is clear that some disruption of the auditory pathway occurs in the ponto-medullary region, as in nearly all patients there is a significant delay or reduction in the amplitude of components of the response generated beyond this region. The most likely area involved is the superior olivary complex. PMID:3958741

  3. Long-range correlation properties in timing of skilled piano performance: the influence of auditory feedback and deep brain stimulation.

    PubMed

    Herrojo Ruiz, María; Hong, Sang Bin; Hennig, Holger; Altenmüller, Eckart; Kühn, Andrea A

    2014-01-01

    Unintentional timing deviations during musical performance can be conceived of as timing errors. However, recent research on humanizing computer-generated music has demonstrated that timing fluctuations that exhibit long-range temporal correlations (LRTC) are preferred by human listeners. This preference can be accounted for by the ubiquitous presence of LRTC in human tapping and rhythmic performances. Interestingly, the manifestation of LRTC in tapping behavior seems to be driven in a subject-specific manner by the LRTC properties of resting-state background cortical oscillatory activity. In this framework, the current study aimed to investigate whether propagation of timing deviations during the skilled, memorized piano performance (without metronome) of 17 professional pianists exhibits LRTC and whether the structure of the correlations is influenced by the presence or absence of auditory feedback. As an additional goal, we set out to investigate the influence of altering the dynamics along the cortico-basal-ganglia-thalamo-cortical network via deep brain stimulation (DBS) on the LRTC properties of musical performance. Specifically, we investigated temporal deviations during the skilled piano performance of a non-professional pianist who was treated with subthalamic-deep brain stimulation (STN-DBS) due to severe Parkinson's disease, with predominant tremor affecting his right upper extremity. In the tremor-affected right hand, the timing fluctuations of the performance exhibited random correlations with DBS OFF. By contrast, DBS restored long-range dependency in the temporal fluctuations, corresponding with the general motor improvement on DBS. Overall, the present investigations demonstrate the presence of LRTC in skilled piano performances, indicating that unintentional temporal deviations are correlated over a wide range of time scales. This phenomenon is stable after removal of the auditory feedback, but is altered by STN-DBS, which suggests that cortico
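    The abstract does not spell out the estimator, but LRTC in timing series are commonly quantified with detrended fluctuation analysis (DFA), where a scaling exponent near 0.5 indicates uncorrelated (random) fluctuations and values between 0.5 and 1 indicate long-range dependency. A minimal DFA sketch on a hypothetical series of timing deviations follows; it illustrates the general method, not the study's pipeline.

```python
import numpy as np

def dfa_exponent(x, scales=(4, 8, 16, 32, 64)):
    """Detrended fluctuation analysis: slope of log F(n) vs log n.

    alpha ~ 0.5        -> uncorrelated fluctuations
    0.5 < alpha < 1.0  -> long-range temporal correlations
    """
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())              # integrated (profile) series
    flucts = []
    for n in scales:
        n_windows = len(profile) // n
        segs = profile[:n_windows * n].reshape(n_windows, n)
        t = np.arange(n)
        rms = []
        for seg in segs:
            coeffs = np.polyfit(t, seg, 1)          # linear detrending per window
            rms.append(np.sqrt(np.mean((seg - np.polyval(coeffs, t)) ** 2)))
        flucts.append(np.mean(rms))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

# Hypothetical inter-keystroke timing deviations (ms); white noise should give ~0.5.
rng = np.random.default_rng(2)
print("DFA exponent:", dfa_exponent(rng.standard_normal(1024)))
```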

  4. Long-range correlation properties in timing of skilled piano performance: the influence of auditory feedback and deep brain stimulation

    PubMed Central

    Herrojo Ruiz, María; Hong, Sang Bin; Hennig, Holger; Altenmüller, Eckart; Kühn, Andrea A.

    2014-01-01

    Unintentional timing deviations during musical performance can be conceived of as timing errors. However, recent research on humanizing computer-generated music has demonstrated that timing fluctuations that exhibit long-range temporal correlations (LRTC) are preferred by human listeners. This preference can be accounted for by the ubiquitous presence of LRTC in human tapping and rhythmic performances. Interestingly, the manifestation of LRTC in tapping behavior seems to be driven in a subject-specific manner by the LRTC properties of resting-state background cortical oscillatory activity. In this framework, the current study aimed to investigate whether propagation of timing deviations during the skilled, memorized piano performance (without metronome) of 17 professional pianists exhibits LRTC and whether the structure of the correlations is influenced by the presence or absence of auditory feedback. As an additional goal, we set out to investigate the influence of altering the dynamics along the cortico-basal-ganglia-thalamo-cortical network via deep brain stimulation (DBS) on the LRTC properties of musical performance. Specifically, we investigated temporal deviations during the skilled piano performance of a non-professional pianist who was treated with subthalamic-deep brain stimulation (STN-DBS) due to severe Parkinson's disease, with predominant tremor affecting his right upper extremity. In the tremor-affected right hand, the timing fluctuations of the performance exhibited random correlations with DBS OFF. By contrast, DBS restored long-range dependency in the temporal fluctuations, corresponding with the general motor improvement on DBS. Overall, the present investigations demonstrate the presence of LRTC in skilled piano performances, indicating that unintentional temporal deviations are correlated over a wide range of time scales. This phenomenon is stable after removal of the auditory feedback, but is altered by STN-DBS, which suggests that cortico

  5. Potassium conductance dynamics confer robust spike-time precision in a neuromorphic model of the auditory brain stem

    PubMed Central

    Boahen, Kwabena

    2013-01-01

    A fundamental question in neuroscience is how neurons perform precise operations despite inherent variability. This question also applies to neuromorphic engineering, where low-power microchips emulate the brain using large populations of diverse silicon neurons. Biological neurons in the auditory pathway display precise spike timing, critical for sound localization and interpretation of complex waveforms such as speech, even though they are a heterogeneous population. Silicon neurons are also heterogeneous, due to a key design constraint in neuromorphic engineering: smaller transistors offer lower power consumption and more neurons per unit area of silicon, but also more variability between transistors and thus between silicon neurons. Utilizing this variability in a neuromorphic model of the auditory brain stem with 1,080 silicon neurons, we found that a low-voltage-activated potassium conductance (gKL) enables precise spike timing via two mechanisms: statically reducing the resting membrane time constant and dynamically suppressing late synaptic inputs. The relative contribution of these two mechanisms is unknown because blocking gKL in vitro eliminates dynamic adaptation but also lengthens the membrane time constant. We replaced gKL with a static leak in silico to recover the short membrane time constant and found that silicon neurons could mimic the spike-time precision of their biological counterparts, but only over a narrow range of stimulus intensities and biophysical parameters. The dynamics of gKL were required for precise spike timing robust to stimulus variation across a heterogeneous population of silicon neurons, thus explaining how neural and neuromorphic systems may perform precise operations despite inherent variability. PMID:23554436

  6. “Where Do Auditory Hallucinations Come From?”—A Brain Morphometry Study of Schizophrenia Patients With Inner or Outer Space Hallucinations

    PubMed Central

    Plaze, Marion; Paillère-Martinot, Marie-Laure; Penttilä, Jani; Januel, Dominique; de Beaurepaire, Renaud; Bellivier, Franck; Andoh, Jamila; Galinowski, André; Gallarda, Thierry; Artiges, Eric; Olié, Jean-Pierre; Mangin, Jean-François; Martinot, Jean-Luc

    2011-01-01

    Auditory verbal hallucinations are a cardinal symptom of schizophrenia. Bleuler and Kraepelin distinguished 2 main classes of hallucinations: hallucinations heard outside the head (outer space, or external, hallucinations) and hallucinations heard inside the head (inner space, or internal, hallucinations). This distinction has been confirmed by recent phenomenological studies that identified 3 independent dimensions in auditory hallucinations: language complexity, self-other misattribution, and spatial location. Brain imaging studies in schizophrenia patients with auditory hallucinations have already investigated language complexity and self-other misattribution, but the neural substrate of hallucination spatial location remains unknown. Magnetic resonance images of 45 right-handed patients with schizophrenia and persistent auditory hallucinations and 20 healthy right-handed subjects were acquired. Two homogeneous subgroups of patients were defined based on the hallucination spatial location: patients with only outer space hallucinations (N = 12) and patients with only inner space hallucinations (N = 15). Between-group differences were then assessed using 2 complementary brain morphometry approaches: voxel-based morphometry and sulcus-based morphometry. Convergent anatomical differences were detected between the patient subgroups in the right temporoparietal junction (rTPJ). In comparison to healthy subjects, opposite deviations in white matter volumes and sulcus displacements were found in patients with inner space hallucination and patients with outer space hallucination. The current results indicate that spatial location of auditory hallucinations is associated with the rTPJ anatomy, a key region of the “where” auditory pathway. The detected tilt in the sulcal junction suggests deviations during early brain maturation, when the superior temporal sulcus and its anterior terminal branch appear and merge. PMID:19666833

  7. Brain activity in predominantly-inattentive subtype attention-deficit/hyperactivity disorder during an auditory oddball attention task

    PubMed Central

    Orinstein, Alyssa J.; Stevens, Michael C.

    2014-01-01

    Previous functional neuroimaging studies have found brain activity abnormalities in attention-deficit/hyperactivity disorder (ADHD) on numerous cognitive tasks. However, little is known about brain dysfunction unique to the predominantly-inattentive subtype of ADHD (ADHD-I), despite debate as to whether DSM-IV-defined ADHD subtypes differ in etiology. This study compared brain activity of 18 ADHD-I adolescents (ages 12–18) and 20 non-psychiatric age-matched control participants on a functional magnetic resonance image (fMRI) auditory oddball attention task. ADHD-I participants had significant activation deficits to infrequent target stimuli in bilateral superior temporal gyri, bilateral insula, several midline cingulate/medial frontal gyrus regions, right posterior parietal cortex, thalamus, cerebellum, and brainstem. To novel stimuli, ADHD-I participants had reduced activation in bilateral lateral temporal lobe structures. There were no brain regions where ADHD-I participants had greater hemodynamic activity to targets or novels than controls. Brain activity deficits in ADHD-I participants were found in several regions important to attentional orienting and working memory-related cognitive processes involved in target identification. These results differ from those in previously studied adolescents with combined-subtype ADHD, who had a lesser magnitude of activation abnormalities in frontoparietal regions and relatively more discrete regional deficits to novel stimuli. The divergent findings suggest different etiological factors might underlie attention deficits in different DSM-IV-defined ADHD subtypes, and they have important implications for the DSM-V reconceptualization of subtypes as varying clinical presentations of the same core disorder. PMID:24953999

  8. Brain activity in predominantly-inattentive subtype attention-deficit/hyperactivity disorder during an auditory oddball attention task.

    PubMed

    Orinstein, Alyssa J; Stevens, Michael C

    2014-08-30

    Previous functional neuroimaging studies have found brain activity abnormalities in attention-deficit/hyperactivity disorder (ADHD) on numerous cognitive tasks. However, little is known about brain dysfunction unique to the predominantly-inattentive subtype of ADHD (ADHD-I), despite debate as to whether DSM-IV-defined ADHD subtypes differ in etiology. This study compared brain activity of 18 ADHD-I adolescents (ages 12-18) and 20 non-psychiatric age-matched control participants on a functional magnetic resonance image (fMRI) auditory oddball attention task. ADHD-I participants had significant activation deficits to infrequent target stimuli in bilateral superior temporal gyri, bilateral insula, several midline cingulate/medial frontal gyrus regions, right posterior parietal cortex, thalamus, cerebellum, and brainstem. To novel stimuli, ADHD-I participants had reduced activation in bilateral lateral temporal lobe structures. There were no brain regions where ADHD-I participants had greater hemodynamic activity to targets or novels than controls. Brain activity deficits in ADHD-I participants were found in several regions important to attentional orienting and working memory-related cognitive processes involved in target identification. These results differ from those in previously studied adolescents with combined-subtype ADHD, who had a lesser magnitude of activation abnormalities in frontoparietal regions and relatively more discrete regional deficits to novel stimuli. The divergent findings suggest different etiological factors might underlie attention deficits in different DSM-IV-defined ADHD subtypes, and they have important implications for the DSM-V reconceptualization of subtypes as varying clinical presentations of the same core disorder.

  9. Brain Networks of Novelty-Driven Involuntary and Cued Voluntary Auditory Attention Shifting

    PubMed Central

    Huang, Samantha; Belliveau, John W.; Tengshe, Chinmayi; Ahveninen, Jyrki

    2012-01-01

    In everyday life, we need a capacity to flexibly shift attention between alternative sound sources. However, relatively little work has been done to elucidate the mechanisms of attention shifting in the auditory domain. Here, we used a mixed event-related/sparse-sampling fMRI approach to investigate this essential cognitive function. In each 10-sec trial, subjects were instructed to wait for an auditory “cue” signaling the location where a subsequent “target” sound was likely to be presented. The target was occasionally replaced by an unexpected “novel” sound in the uncued ear, to trigger involuntary attention shifting. To maximize the attention effects, cues, targets, and novels were embedded within dichotic 800-Hz vs. 1500-Hz pure-tone “standard” trains. The sound of clustered fMRI acquisition (starting at t = 7.82 sec) served as a controlled trial-end signal. Our approach revealed notable activation differences between the conditions. Cued voluntary attention shifting activated the superior intraparietal sulcus (IPS), whereas novelty-triggered involuntary orienting activated the inferior IPS and certain subareas of the precuneus. Clearly more widespread activations were observed during voluntary than involuntary orienting in the premotor cortex, including the frontal eye fields. Moreover, we found evidence for a frontoinsular-cingular attentional control network, consisting of the anterior insula, inferior frontal cortex, and medial frontal cortices, which were activated during both target discrimination and voluntary attention shifting. Finally, novels and targets activated much wider areas of superior temporal auditory cortices than shifting cues. PMID:22937153

  10. When the brain plays music: auditory-motor interactions in music perception and production.

    PubMed

    Zatorre, Robert J; Chen, Joyce L; Penhune, Virginia B

    2007-07-01

    Music performance is both a natural human activity, present in all societies, and one of the most complex and demanding cognitive challenges that the human mind can undertake. Unlike most other sensory-motor activities, music performance requires precise timing of several hierarchically organized actions, as well as precise control over pitch interval production, implemented through diverse effectors according to the instrument involved. We review the cognitive neuroscience literature of both motor and auditory domains, highlighting the value of studying interactions between these systems in a musical context, and propose some ideas concerning the role of the premotor cortex in integration of higher order features of music with appropriately timed and organized actions.

  11. Effect of low-frequency rTMS on electromagnetic tomography (LORETA) and regional brain metabolism (PET) in schizophrenia patients with auditory hallucinations.

    PubMed

    Horacek, Jiri; Brunovsky, Martin; Novak, Tomas; Skrdlantova, Lucie; Klirova, Monika; Bubenikova-Valesova, Vera; Krajca, Vladimir; Tislerova, Barbora; Kopecek, Milan; Spaniel, Filip; Mohr, Pavel; Höschl, Cyril

    2007-01-01

    Auditory hallucinations are characteristic symptoms of schizophrenia with high clinical importance. It has repeatedly been reported that low-frequency rTMS can attenuate auditory hallucinations, but a neuroimaging study elucidating the effect of rTMS on auditory hallucinations has yet to be published. The aim was to evaluate the distribution of neuronal electrical activity and the changes in brain metabolism after low-frequency rTMS in patients with auditory hallucinations. Low-frequency rTMS (0.9 Hz, 100% of motor threshold, 20 min) applied to the left temporoparietal cortex was used for 10 days in the treatment of medication-resistant auditory hallucinations in schizophrenia (n = 12). The effect of rTMS on low-resolution brain electromagnetic tomography (LORETA) and brain metabolism ((18)FDG PET) was measured before and after 2 weeks of treatment. We found a significant improvement in the total and positive symptoms (PANSS) and on the hallucination scales (HCS, AHRS). The rTMS decreased brain metabolism in the left superior temporal gyrus and in interconnected regions, and effected increases in the contralateral cortex and in the frontal lobes. We detected a decrease in current densities (LORETA) for the beta-1 and beta-3 bands in the left temporal lobe, whereas an increase was found for the beta-2 band contralaterally. Our findings indicate that the effect is connected with decreased metabolism in the cortex underlying the rTMS site, while facilitation of metabolism is propagated by transcallosal and intrahemispheric connections. The LORETA indicates that the neuroplastic changes affect the functional laterality and provide the substrate for a metabolic effect. (c) 2007 S. Karger AG, Basel.

  12. Brain dynamics that correlate with effects of learning on auditory distance perception.

    PubMed

    Wisniewski, Matthew G; Mercado, Eduardo; Church, Barbara A; Gramann, Klaus; Makeig, Scott

    2014-01-01

    Accuracy in auditory distance perception can improve with practice and varies for sounds differing in familiarity. Here, listeners were trained to judge the distances of English, Bengali, and backwards speech sources pre-recorded at near (2-m) and far (30-m) distances. Listeners' accuracy was tested before and after training. Improvements from pre-test to post-test were greater for forward speech, demonstrating a learning advantage for forward speech sounds. Independent component (IC) processes identified in electroencephalographic (EEG) data collected during pre- and post-testing revealed three clusters of ICs across subjects with stimulus-locked spectral perturbations related to learning and accuracy. One cluster exhibited a transient stimulus-locked increase in 4-8 Hz power (theta event-related synchronization; ERS) that was smaller after training and largest for backwards speech. For a left temporal cluster, 8-12 Hz decreases in power (alpha event-related desynchronization; ERD) were greatest for English speech and less prominent after training. In contrast, a cluster of IC processes centered at or near anterior portions of the medial frontal cortex showed learning-related enhancement of sustained increases in 10-16 Hz power (upper-alpha/low-beta ERS). The degree of this enhancement was positively correlated with the degree of behavioral improvements. Results suggest that neural dynamics in non-auditory cortical areas support distance judgments. Further, frontal cortical networks associated with attentional and/or working memory processes appear to play a role in perceptual learning for source distance.
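    As a simplified per-channel illustration of the event-related spectral measures mentioned here (ERS and ERD), rather than the ICA-based pipeline used in the study: band-pass an epoch, take the Hilbert envelope, and express power as a dB change from the pre-stimulus baseline, so that ERS appears as positive and ERD as negative values. The sampling rate, filter settings, epoch layout, and synthetic data are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 256                                 # assumed EEG sampling rate (Hz)
t = np.arange(-0.5, 1.5, 1 / fs)         # epoch: 0.5 s baseline, 1.5 s post-stimulus

def band_power_change(epochs, band, fs, baseline=(-0.5, 0.0)):
    """dB change of band power relative to the pre-stimulus baseline (ERS > 0, ERD < 0)."""
    b, a = butter(4, np.array(band) / (fs / 2), btype="bandpass")
    power = np.abs(hilbert(filtfilt(b, a, epochs, axis=-1))) ** 2
    power = power.mean(axis=0)                           # average over epochs
    base = power[(t >= baseline[0]) & (t < baseline[1])].mean()
    return 10 * np.log10(power / base)

# Synthetic stand-in epochs (n_epochs x n_samples); real use: single-channel EEG epochs.
rng = np.random.default_rng(3)
epochs = rng.standard_normal((40, t.size))

theta_ers = band_power_change(epochs, (4, 8), fs)     # theta band (4-8 Hz)
alpha_erd = band_power_change(epochs, (8, 12), fs)    # alpha band (8-12 Hz)
print("peak theta change (dB):", theta_ers.max())
```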

  13. Alterations in brain-stem auditory evoked potentials among drug addicts

    PubMed Central

    Garg, Sonia; Sharma, Rajeev; Mittal, Shilekh; Thapar, Satish

    2015-01-01

    Objective: To compare the absolute latencies, the interpeak latencies, and amplitudes of different waveforms of brainstem auditory evoked potentials (BAEP) in different drug abusers and controls, and to identify early neurological damage in persons who abuse different drugs so that proper counseling and timely intervention can be undertaken. Methods: In this cross-sectional study, BAEPs were assessed by a data acquisition and analysis system in 58 male drug abusers in the age group of 15-45 years as well as in 30 age-matched healthy controls. The absolute peak latencies and the interpeak latencies of BAEP were analyzed by applying one-way ANOVA and Student's t-test. The study was carried out at the GGS Medical College, Faridkot, Punjab, India between July 2012 and May 2013. Results: The difference in the absolute peak latencies and interpeak latencies of BAEP in the 2 groups was found to be statistically significant in both ears (p<0.05). However, the difference in the amplitude ratio in both ears was found to be statistically insignificant. Conclusion: Chronic intoxication by different drugs has been extensively associated with prolonged absolute peak latencies and interpeak latencies of BAEP in drug abusers, reflecting an adverse effect of drug dependence on neural transmission in central auditory nerve pathways. PMID:26166594

  14. Brain-Computer Interface application: auditory serial interface to control a two-class motor-imagery-based wheelchair.

    PubMed

    Ron-Angevin, Ricardo; Velasco-Álvarez, Francisco; Fernández-Rodríguez, Álvaro; Díaz-Estrella, Antonio; Blanca-Mena, María José; Vizcaíno-Martín, Francisco Javier

    2017-05-30

    Certain diseases affect brain areas that control the movements of the patients' body, thereby limiting their autonomy and communication capacity. Research in the field of Brain-Computer Interfaces aims to provide patients with an alternative communication channel not based on muscular activity, but on the processing of brain signals. Through these systems, subjects can control external devices such as spellers to communicate, robotic prostheses to restore limb movements, or domotic systems. The present work focuses on the non-muscular control of a robotic wheelchair. A proposal to control a wheelchair through a Brain-Computer Interface based on the discrimination of only two mental tasks is presented in this study. The wheelchair displacement is performed with discrete movements. The control signals used are sensorimotor rhythms modulated through a right-hand motor imagery task or mental idle state. The peculiarity of the control system is that it is based on a serial auditory interface that provides the user with four navigation commands. The use of two mental tasks to select commands may facilitate control and reduce error rates compared to other endogenous control systems for wheelchairs. Seventeen subjects initially participated in the study; nine of them completed the three sessions of the proposed protocol. After the first calibration session, seven subjects were discarded due to low control of their electroencephalographic signals; nine out of ten subjects controlled a virtual wheelchair during the second session; these same nine subjects achieved a mean accuracy above 0.83 on the real wheelchair control session. The results suggest that more extensive training with the proposed control system can be an effective and safe option that will allow the displacement of a wheelchair in a controlled environment for potential users suffering from some types of motor neuron diseases.
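    A toy sketch of the serial (scanning) selection logic described above, assuming a hypothetical mi_detected callback that wraps the online motor-imagery classifier; the real system's timing, auditory feedback, and safety handling are not reproduced here.

```python
COMMANDS = ["forward", "right", "backward", "left"]   # four navigation commands

def serial_selection(mi_detected, commands=COMMANDS):
    """Serial (scanning) interface: commands are offered one at a time via auditory
    cues; a detected right-hand motor-imagery event selects the current command,
    while the idle state lets the scan advance to the next one.

    `mi_detected(cmd)` is a hypothetical callback returning True when the online
    classifier flags motor imagery during that command's presentation window.
    """
    while True:
        for cmd in commands:
            print(f"auditory cue: '{cmd}'")          # stand-in for spoken cue playback
            if mi_detected(cmd):
                return cmd                           # command selected for execution

# Toy usage: pretend the user imagines movement only when 'left' is offered.
chosen = serial_selection(lambda cmd: cmd == "left")
print("selected command:", chosen)
```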

  15. Auditory imagery: empirical findings.

    PubMed

    Hubbard, Timothy L

    2010-03-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d) auditory imagery's relationship to perception and memory (detection, encoding, recall, mnemonic properties, phonological loop), and (e) individual differences in auditory imagery (in vividness, musical ability and experience, synesthesia, musical hallucinosis, schizophrenia, amusia) are considered. It is concluded that auditory imagery (a) preserves many structural and temporal properties of auditory stimuli, (b) can facilitate auditory discrimination but interfere with auditory detection, (c) involves many of the same brain areas as auditory perception, (d) is often but not necessarily influenced by subvocalization, (e) involves semantically interpreted information and expectancies, (f) involves depictive components and descriptive components, (g) can function as a mnemonic but is distinct from rehearsal, and (h) is related to musical ability and experience (although the mechanisms of that relationship are not clear).

  16. Attention effects on auditory scene analysis: insights from event-related brain potentials.

    PubMed

    Spielmann, Mona Isabel; Schröger, Erich; Kotz, Sonja A; Bendixen, Alexandra

    2014-01-01

    Sounds emitted by different sources arrive at our ears as a mixture that must be disentangled before meaningful information can be retrieved. It is still a matter of debate whether this decomposition happens automatically or requires the listener's attention. These opposite positions partly stem from different methodological approaches to the problem. We propose an integrative approach that combines the logic of previous measurements targeting either auditory stream segregation (interpreting a mixture as coming from two separate sources) or integration (interpreting a mixture as originating from only one source). By means of combined behavioral and event-related potential (ERP) measures, our paradigm has the potential to measure stream segregation and integration at the same time, providing the opportunity to obtain positive evidence of either one. This reduces the reliance on zero findings (i.e., the occurrence of stream integration in a given condition can be demonstrated directly, rather than indirectly based on the absence of empirical evidence for stream segregation, and vice versa). With this two-way approach, we systematically manipulate attention devoted to the auditory stimuli (by varying their task relevance) and to their underlying structure (by delivering perceptual tasks that require segregated or integrated percepts). ERP results based on the mismatch negativity (MMN) show no evidence for a modulation of stream integration by attention, while stream segregation results were less clear due to overlapping attention-related components in the MMN latency range. We suggest future studies combining the proposed two-way approach with some improvements in the ERP measurement of sequential stream segregation.

  17. Brain dynamics that correlate with effects of learning on auditory distance perception

    PubMed Central

    Wisniewski, Matthew G.; Mercado, Eduardo; Church, Barbara A.; Gramann, Klaus; Makeig, Scott

    2014-01-01

    Accuracy in auditory distance perception can improve with practice and varies for sounds differing in familiarity. Here, listeners were trained to judge the distances of English, Bengali, and backwards speech sources pre-recorded at near (2-m) and far (30-m) distances. Listeners' accuracy was tested before and after training. Improvements from pre-test to post-test were greater for forward speech, demonstrating a learning advantage for forward speech sounds. Independent component (IC) processes identified in electroencephalographic (EEG) data collected during pre- and post-testing revealed three clusters of ICs across subjects with stimulus-locked spectral perturbations related to learning and accuracy. One cluster exhibited a transient stimulus-locked increase in 4–8 Hz power (theta event-related synchronization; ERS) that was smaller after training and largest for backwards speech. For a left temporal cluster, 8–12 Hz decreases in power (alpha event-related desynchronization; ERD) were greatest for English speech and less prominent after training. In contrast, a cluster of IC processes centered at or near anterior portions of the medial frontal cortex showed learning-related enhancement of sustained increases in 10–16 Hz power (upper-alpha/low-beta ERS). The degree of this enhancement was positively correlated with the degree of behavioral improvements. Results suggest that neural dynamics in non-auditory cortical areas support distance judgments. Further, frontal cortical networks associated with attentional and/or working memory processes appear to play a role in perceptual learning for source distance. PMID:25538550

  18. The influence of cochlear spectral processing on the timing and amplitude of the speech-evoked auditory brain stem response.

    PubMed

    Nuttall, Helen E; Moore, David R; Barry, Johanna G; Krumbholz, Katrin; de Boer, Jessica

    2015-06-01

    The speech-evoked auditory brain stem response (speech ABR) is widely considered to provide an index of the quality of neural temporal encoding in the central auditory pathway. The aim of the present study was to evaluate the extent to which the speech ABR is shaped by spectral processing in the cochlea. High-pass noise masking was used to record speech ABRs from delimited octave-wide frequency bands between 0.5 and 8 kHz in normal-hearing young adults. The latency of the frequency-delimited responses decreased from the lowest to the highest frequency band by up to 3.6 ms. The observed frequency-latency function was compatible with model predictions based on wave V of the click ABR. The frequency-delimited speech ABR amplitude was largest in the 2- to 4-kHz frequency band and decreased toward both higher and lower frequency bands despite the predominance of low-frequency energy in the speech stimulus. We argue that the frequency dependence of speech ABR latency and amplitude results from the decrease in cochlear filter width with decreasing frequency. The results suggest that the amplitude and latency of the speech ABR may reflect interindividual differences in cochlear, as well as central, processing. The high-pass noise-masking technique provides a useful tool for differentiating between peripheral and central effects on the speech ABR. It can be used for further elucidating the neural basis of the perceptual speech deficits that have been associated with individual differences in speech ABR characteristics. Copyright © 2015 the American Physiological Society.

  19. The influence of cochlear spectral processing on the timing and amplitude of the speech-evoked auditory brain stem response

    PubMed Central

    Nuttall, Helen E.; Moore, David R.; Barry, Johanna G.; Krumbholz, Katrin

    2015-01-01

    The speech-evoked auditory brain stem response (speech ABR) is widely considered to provide an index of the quality of neural temporal encoding in the central auditory pathway. The aim of the present study was to evaluate the extent to which the speech ABR is shaped by spectral processing in the cochlea. High-pass noise masking was used to record speech ABRs from delimited octave-wide frequency bands between 0.5 and 8 kHz in normal-hearing young adults. The latency of the frequency-delimited responses decreased from the lowest to the highest frequency band by up to 3.6 ms. The observed frequency-latency function was compatible with model predictions based on wave V of the click ABR. The frequency-delimited speech ABR amplitude was largest in the 2- to 4-kHz frequency band and decreased toward both higher and lower frequency bands despite the predominance of low-frequency energy in the speech stimulus. We argue that the frequency dependence of speech ABR latency and amplitude results from the decrease in cochlear filter width with decreasing frequency. The results suggest that the amplitude and latency of the speech ABR may reflect interindividual differences in cochlear, as well as central, processing. The high-pass noise-masking technique provides a useful tool for differentiating between peripheral and central effects on the speech ABR. It can be used for further elucidating the neural basis of the perceptual speech deficits that have been associated with individual differences in speech ABR characteristics. PMID:25787954

  20. Monitoring therapeutic efficacy of decompressive craniotomy in space occupying cerebellar infarcts using brain-stem auditory evoked potentials.

    PubMed

    Krieger, D; Adams, H P; Rieke, K; Hacke, W

    1993-01-01

    Brain-stem auditory evoked potentials (BAEPs) have been used to gauge effects of brain-stem dysfunction in humans and animal models. The purpose of this study was to evaluate the usefulness of BAEP in monitoring patients undergoing decompressive surgery of the posterior fossa for space occupying cerebellar infarcts. We report on serial BAEP recordings in 11 comatose patients with space occupying cerebellar infarcts undergoing decompressive craniotomy. BAEP studies were performed within 12 h after admission, 24 h following surgery and prior to extubation. BAEP signals were analyzed using latency determination and cross-correlation. Following surgery, 9 patients regained consciousness; 2 patients persisted in a comatose state and died subsequently. BAEP interpeak latency (IPL) I-V assessed prior to surgery exceeded normal values in all patients in whom it could be reliably measured (N = 9). Following decompressive surgery BAEP wave I-V IPL normalized in 5 patients, but remained prolonged despite dramatic clinical improvement in 4 patients. We prospectively computed the coefficient of cross-correlation (MCC) of combined ipsilateral BAEP trials after right and left ear stimulation. In all patients increasing MCC was associated with clinical improvement. Unchanging or decreasing MCC indicated poor outcome. We conclude that serial BAEP studies are an appropriate perioperative monitoring modality in patients with space occupying cerebellar infarcts undergoing decompressive surgery of the posterior fossa. Our study suggests advantages of cross-correlation analysis as an objective signal processing strategy; relevant information can be extracted even if BAEP wave discrimination is impossible due to severe brain-stem dysfunction.
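    The abstract does not give the exact formula, but a cross-correlation coefficient between a baseline and a current averaged BAEP sweep can be sketched as the maximum normalized cross-correlation over a small lag range; the function and data below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def max_cross_correlation(x, y, max_lag=20):
    """Maximum normalized cross-correlation between two BAEP waveforms over small lags.

    A value near 1 means the two responses share the same morphology (stable
    brain-stem transmission); falling values flag degradation even when individual
    waves can no longer be identified.
    """
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    coeffs = []
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            c = np.dot(x[lag:], y[:n - lag]) / (n - lag)
        else:
            c = np.dot(x[:n + lag], y[-lag:]) / (n + lag)
        coeffs.append(c)
    return max(coeffs)

# Hypothetical use: compare the current averaged BAEP sweep against a baseline sweep.
rng = np.random.default_rng(5)
baseline = rng.standard_normal(400)
current = baseline + 0.2 * rng.standard_normal(400)   # slightly noisier repeat
print("MCC:", max_cross_correlation(baseline, current))
```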

  1. To Study Brain Stem Auditory Evoked Potential in Patients with Type 2 Diabetes Mellitus- A Cross- Sectional Comparative Study

    PubMed Central

    Muneshwar, J.N.; Afroz, Sayeeda

    2016-01-01

    Introduction Neuropathy is one of the commonest complications of Diabetes Mellitus (DM). Apart from having peripheral and autonomic neuropathy, patients with type 2 DM may also suffer from sensorineural hearing loss, which is more severe at higher frequencies. However, few studies have done detailed evaluation of the sensory pathway in these patients. In this study brain stem auditory evoked potential is used to detect acoustic and central neuropathy in a group of patients with type 2 DM with controlled and uncontrolled blood sugar. Aim To study brain stem auditory evoked potential in patients of type 2 DM with controlled and uncontrolled blood sugar and to correlate the various parameters, e.g., age (years), weight (kilograms), height (meters), BMI (kg/m2), HbA1c (%), in patients with type 2 DM with controlled and uncontrolled blood sugar. Materials and Methods A cross-sectional comparative study was conducted from January 2014 to January 2015. A total of 60 patients with type 2 DM of either sex, aged 35-50 years, were enrolled from the Diabetic Clinic of the Medicine department of a tertiary care hospital. Based on the value of HbA1c, patients were divided into two groups, with controlled and uncontrolled blood sugar, each comprising 30 patients. BERA (Brainstem Evoked Response Audiometry) was done in both groups on an RMS ALERON 201/401. Recordings were taken at 70 dB, 80 dB and 90 dB at 2 kHz. Absolute latencies of waves I, III and V and interpeak latencies I-III, III-V and I-V were recorded. Results The mean±SD absolute latencies of BERA waves I, III and V and the interpeak latencies I-III, III-V and I-V at 2 kHz and at intensities of 70 dB, 80 dB and 90 dB were significantly prolonged in the uncontrolled DM group compared with the controlled DM group. Conclusion BERA in diabetic patients allows central neuropathy to be detected earlier, particularly in those with uncontrolled blood sugar. PMID:28050358

  2. Age-Related Changes in Transient and Oscillatory Brain Responses to Auditory Stimulation during Early Adolescence

    ERIC Educational Resources Information Center

    Poulsen, Catherine; Picton, Terence W.; Paus, Tomas

    2009-01-01

    Maturational changes in the capacity to process quickly the temporal envelope of sound have been linked to language abilities in typically developing individuals. As part of a longitudinal study of brain maturation and cognitive development during adolescence, we employed dense-array EEG and spatiotemporal source analysis to characterize…

  3. Age-Related Changes in Transient and Oscillatory Brain Responses to Auditory Stimulation during Early Adolescence

    ERIC Educational Resources Information Center

    Poulsen, Catherine; Picton, Terence W.; Paus, Tomas

    2009-01-01

    Maturational changes in the capacity to process quickly the temporal envelope of sound have been linked to language abilities in typically developing individuals. As part of a longitudinal study of brain maturation and cognitive development during adolescence, we employed dense-array EEG and spatiotemporal source analysis to characterize…

  4. Positron emission tomography (PET) analysis of the effects of auditory stimulation on the distribution of (11)C-N-methylchlorphentermine in the brain

    SciTech Connect

    Paschal, C.B.

    1986-06-01

    This experimental work was launched to study how auditory stimulation affects blood flow in the brain. The technique used was Positron Emission Tomography (PET) with (11)C-N-methylchlorphentermine ((11)C-NMCP) as the tracer. (11)C-NMCP acts as a molecular microsphere and thus measures blood flow. The objectives of this work were: to develop, test, and refine an experimental procedure; to design and construct a universally applicable positioning device; and to develop and test a synthesis for a radiopure solution of (11)C-NMCP; all were accomplished. PET was used to observe the brain distribution of (11)C-NMCP during binaural and monaural stimulation states. The data were analyzed by finding the signal intensity in regions of the image that represented the left and right inferior colliculi (ICs), brain structures dedicated to the processing of auditory signals. The binaural tests indicated a statistically significant tendency for slightly higher concentration of the tracer in the left IC than in the right IC. The monaural tests combined with those of the binaural state were not conclusive; however, three of the four cases showed a decrease in tracer uptake in the IC opposite the zero-stimulus ear, as expected. There is some indication that the anesthesia used in the majority of this work may have interfered with blood flow response to auditory stimulation. 39 refs., 17 figs., 3 tabs.

  5. Brain activity is related to individual differences in the number of items stored in auditory short-term memory for pitch: evidence from magnetoencephalography.

    PubMed

    Grimault, Stephan; Nolden, Sophie; Lefebvre, Christine; Vachon, François; Hyde, Krista; Peretz, Isabelle; Zatorre, Robert; Robitaille, Nicolas; Jolicoeur, Pierre

    2014-07-01

    We used magnetoencephalography (MEG) to examine brain activity related to the maintenance of non-verbal pitch information in auditory short-term memory (ASTM). We focused on brain activity that increased with the number of items effectively held in memory by the participants during the retention interval of an auditory memory task. We used very simple acoustic materials (i.e., pure tones that varied in pitch) that minimized activation from non-ASTM related systems. MEG revealed neural activity in frontal, temporal, and parietal cortices that increased with a greater number of items effectively held in memory by the participants during the maintenance of pitch representations in ASTM. The present results reinforce the functional role of frontal and temporal cortices in the retention of pitch information in ASTM. This is the first MEG study to provide both fine spatial localization and temporal resolution on the neural mechanisms of non-verbal ASTM for pitch in relation to individual differences in the capacity of ASTM. This research contributes to a comprehensive understanding of the mechanisms mediating the representation and maintenance of basic non-verbal auditory features in the human brain. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Diagnostic System Based on the Human Auditory-Brain Model for Measuring Environmental Noise—An Application to Railway Noise

    NASA Astrophysics Data System (ADS)

    Sakai, H.; Hotehama, T.; Ando, Y.; Prodi, N.; Pompoli, R.

    2002-02-01

    Measurements of railway noise were conducted by use of a diagnostic system for regional environmental noise. The system is based on a model of the human auditory-brain system. The model consists of the interplay of autocorrelators and an interaural crosscorrelator acting on the pressure signals arriving at the ear entrances, and takes into account the specialization of the left and right human cerebral hemispheres. Different kinds of railway noise were measured through the binaural microphones of a dummy head. To characterize the railway noise, physical factors extracted from the autocorrelation functions (ACF) and the interaural crosscorrelation function (IACF) of the binaural signals were used. The factors extracted from the ACF were (1) the energy represented at the origin of the delay, Φ(0), (2) the effective duration of the envelope of the normalized ACF, τe, (3) the delay time of the first peak, τ1, and (4) its amplitude, φ1. The factors extracted from the IACF were (5) the IACC, (6) the interaural delay time at which the IACC is defined, τIACC, and (7) the width of the IACF at τIACC, WIACC. The factor Φ(0), the geometrical mean of the energies at the two ears, can be expressed as the listening level, LL.
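    A partial numerical sketch of how a few of these factors can be obtained from binaural signals with plain autocorrelation and interaural cross-correlation (Φ(0), τ1, φ1, IACC and τIACC; τe and WIACC are omitted). The sampling rate, lag ranges, and signals below are stand-ins, not the actual diagnostic system's implementation.

```python
import numpy as np

def normalized_acf(x, max_lag):
    """Normalized autocorrelation for lags 0..max_lag samples; also returns energy at lag 0."""
    x = x - x.mean()
    acf = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(max_lag + 1)])
    return acf / acf[0], acf[0]

def iacf_factors(left, right, fs, max_lag_ms=1.0):
    """IACC and tau_IACC from the interaural cross-correlation over +/- 1 ms lags."""
    max_lag = int(max_lag_ms * 1e-3 * fs)
    l = left - left.mean()
    r = right - right.mean()
    norm = np.sqrt(np.dot(l, l) * np.dot(r, r))
    lags = range(-max_lag, max_lag + 1)
    iacf = np.array([np.dot(l[max(0, -k):len(l) - max(0, k)],
                            r[max(0, k):len(r) - max(0, -k)]) for k in lags]) / norm
    i = int(np.argmax(iacf))
    return iacf[i], (i - max_lag) / fs     # IACC, tau_IACC (s)

# Hypothetical binaural recording (stand-ins for dummy-head microphone channels).
fs = 48_000
t = np.arange(0, 0.5, 1 / fs)
rng = np.random.default_rng(6)
left = np.sin(2 * np.pi * 200 * t) + 0.1 * rng.standard_normal(t.size)
right = np.roll(left, 12)                 # roughly 0.25-ms interaural delay

phi, phi0 = normalized_acf(left, max_lag=int(0.05 * fs))
tau1 = int(np.argmax(phi[1:])) + 1        # highest peak after lag 0 (a simplification)
print("Phi(0):", phi0, " tau1 (ms):", 1e3 * tau1 / fs, " phi1:", phi[tau1])
print("IACC, tau_IACC:", iacf_factors(left, right, fs))
```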

  7. Neuronal coupling by endogenous electric fields: cable theory and applications to coincidence detector neurons in the auditory brain stem.

    PubMed

    Goldwyn, Joshua H; Rinzel, John

    2016-04-01

    The ongoing activity of neurons generates a spatially and time-varying field of extracellular voltage (Ve). This Ve field reflects population-level neural activity, but does it modulate neural dynamics and the function of neural circuits? We provide a cable theory framework to study how a bundle of model neurons generates Ve and how this Ve feeds back and influences membrane potential (Vm). We find that these "ephaptic interactions" are small but not negligible. The model neural population can generate Ve with millivolt-scale amplitude, and this Ve perturbs the Vm of "nearby" cables and effectively increases their electrotonic length. After using passive cable theory to systematically study ephaptic coupling, we explore a test case: the medial superior olive (MSO) in the auditory brain stem. The MSO is a possible locus of ephaptic interactions: sounds evoke large (millivolt scale) Ve in vivo in this nucleus. The Ve response is thought to be generated by MSO neurons that perform a known neuronal computation with submillisecond temporal precision (coincidence detection to encode sound source location). Using a biophysically based model of MSO neurons, we find millivolt-scale ephaptic interactions consistent with the passive cable theory results. These subtle membrane potential perturbations induce changes in spike initiation threshold, spike time synchrony, and time difference sensitivity. These results suggest that ephaptic coupling may influence MSO function. Copyright © 2016 the American Physiological Society.
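    The paper's exact formulation is not reproduced in the abstract; a standard way to write a passive cable subject to an imposed extracellular potential (the "activating function" form, offered here as an assumption about the general framework rather than the study's own equations) is:

```latex
% Passive cable with an imposed extracellular field.
% V_m = V_i - V_e (membrane potential), \lambda = space constant, \tau_m = membrane time constant.
\lambda^{2}\,\frac{\partial^{2} V_m}{\partial x^{2}}
  \;-\; \tau_m\,\frac{\partial V_m}{\partial t} \;-\; V_m
  \;=\; -\,\lambda^{2}\,\frac{\partial^{2} V_e}{\partial x^{2}}
```

    The right-hand side shows how the spatial curvature of Ve acts as a distributed drive on Vm, which is one way to read the "small but not negligible" feedback described above.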

  8. Neuronal coupling by endogenous electric fields: cable theory and applications to coincidence detector neurons in the auditory brain stem

    PubMed Central

    Rinzel, John

    2016-01-01

    The ongoing activity of neurons generates a spatially and time-varying field of extracellular voltage (Ve). This Ve field reflects population-level neural activity, but does it modulate neural dynamics and the function of neural circuits? We provide a cable theory framework to study how a bundle of model neurons generates Ve and how this Ve feeds back and influences membrane potential (Vm). We find that these “ephaptic interactions” are small but not negligible. The model neural population can generate Ve with millivolt-scale amplitude, and this Ve perturbs the Vm of “nearby” cables and effectively increases their electrotonic length. After using passive cable theory to systematically study ephaptic coupling, we explore a test case: the medial superior olive (MSO) in the auditory brain stem. The MSO is a possible locus of ephaptic interactions: sounds evoke large (millivolt scale) Ve in vivo in this nucleus. The Ve response is thought to be generated by MSO neurons that perform a known neuronal computation with submillisecond temporal precision (coincidence detection to encode sound source location). Using a biophysically based model of MSO neurons, we find millivolt-scale ephaptic interactions consistent with the passive cable theory results. These subtle membrane potential perturbations induce changes in spike initiation threshold, spike time synchrony, and time difference sensitivity. These results suggest that ephaptic coupling may influence MSO function. PMID:26823512

  9. The combined monitoring of brain stem auditory evoked potentials and intracranial pressure in coma. A study of 57 patients.

    PubMed Central

    García-Larrea, L; Artru, F; Bertrand, O; Pernier, J; Mauguière, F

    1992-01-01

    Continuous monitoring of brainstem auditory evoked potentials (BAEPs) was carried out in 57 comatose patients for periods ranging from 5 hours to 13 days. In 53 cases intracranial pressure (ICP) was also simultaneously monitored. The study of relative changes of evoked potentials over time proved more relevant to prognosis than the mere consideration of "statistical normality" of waveforms; thus progressive degradation of the BAEPs was associated with a bad outcome even if the responses remained within normal limits. Contrary to previous reports, a normal BAEP obtained during the second week of coma did not necessarily indicate a good vital outcome; it could, however, do so in cases with a low probability of secondary insults. The simultaneous study of BAEPs and ICP showed that apparently significant (greater than 40 mm Hg) acute rises in ICP were not always followed by BAEP changes. The stability of BAEPs despite "significant" ICP rises was associated in our patients with a high probability of survival, while prolongation of central latency of BAEPs in response to ICP modifications was almost invariably followed by brain death. Continuous monitoring of brainstem responses provided a useful physiological counterpart to physical parameters such as ICP. Serial recording of cortical EPs should be added to BAEP monitoring to permit the early detection of rostrocaudal deterioration. PMID:1402970

  10. Effect of middle ear effusion on the brain-stem auditory evoked response of Cavalier King Charles Spaniels.

    PubMed

    Harcourt-Brown, Thomas R; Parker, John E; Granger, Nicolas; Jeffery, Nick D

    2011-06-01

    Brain-stem auditory evoked responses (BAER) were assessed in 23 Cavalier King Charles Spaniels with and without middle ear effusion at sound intensities ranging from 10 to 100 dB nHL. Significant differences were found between the median BAER threshold for ears where effusions were present (60 dB nHL), compared to those without (30 dB nHL) (P=0.001). The slopes of the latency-intensity functions from both groups did not differ, but the y-axis intercept was greater in dogs with effusions (P=0.009), consistent with conductive hearing loss. Analysis of the latency-intensity functions suggested the degree of hearing loss due to middle ear effusion was 21 dB (95% confidence interval 10 to 33 dB). The wave I-V inter-wave latency at 90 dB nHL was not significantly different between the two groups. These findings demonstrate that middle ear effusion is associated with a conductive hearing loss of 10-33 dB in affected dogs, despite the fact that all animals studied were considered to have normal hearing by their owners.
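    A small illustration (not the authors' statistical model) of how a parallel shift of the wave V latency-intensity function can be converted into an equivalent conductive attenuation: fit straight lines to each group and divide the intercept difference by the magnitude of the (negative) slope. The numbers below are made up.

```python
import numpy as np

# Hypothetical wave V latency-intensity data (dB nHL, ms); values are illustrative.
intensity = np.array([40, 50, 60, 70, 80, 90], dtype=float)
latency_normal = np.array([7.4, 7.0, 6.6, 6.3, 6.0, 5.8])
latency_effusion = latency_normal + 0.45          # parallel upward shift

# Fit straight latency-intensity functions to each group.
slope_n, intercept_n = np.polyfit(intensity, latency_normal, 1)
slope_e, intercept_e = np.polyfit(intensity, latency_effusion, 1)

# With (near-)equal slopes, a purely conductive loss appears as a horizontal shift:
# the attenuation (dB) that maps one fitted line onto the other.
mean_slope = 0.5 * (slope_n + slope_e)
estimated_loss_db = (intercept_e - intercept_n) / -mean_slope
print("estimated conductive loss (dB):", round(estimated_loss_db, 1))
```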

  11. Three-channel Lissajous' trajectory of human auditory brain-stem evoked potentials. II. Effects of click intensity.

    PubMed

    Martin, W H; Pratt, H; Bleich, N

    1986-01-01

    Three-channel Lissajous' trajectories (3-CLT) of the human auditory brain-stem evoked potentials (ABEP) were recorded from 14 adult subjects at 15, 45, and 75 dB (nHL). The 3-CLTs were analysed and described in terms of their constituent planar segments and their trajectory amplitudes at each stimulus level. At 75 dB, 9 planar segments were observed, each having a unique apex latency, boundaries, duration, shape, size and orientation in voltage space. As stimulus levels were lowered, changes were noted in apex latency and trajectory amplitude which paralleled changes seen under similar experimental conditions using single-channel ABEP. In addition, changes were seen in planar segment number, boundaries, duration, shape, size and orientation, all of which could be related to decreased generator activity. It is proposed that at high stimulus levels, there is significant temporal overlap of generator activity which results in a larger number of planar segments and single-channel peaks than at low stimulus levels. This may be important to the identification of the specific generators of the ABEP and their appropriate clinical usage.

  12. Three-channel Lissajous' trajectory of auditory brain-stem potentials evoked by specific frequency bands (derived responses).

    PubMed

    Kaminer, M; Pratt, H

    1987-02-01

    Three-channel Lissajous' trajectories (3-CLT) of the auditory brain-stem evoked potentials were recorded from 14 adult subjects in response to different frequency bands as well as to unmasked clicks. The frequency bands (8 kHz and above, 4-8 kHz, 2-4 kHz, 1-2 kHz and 1 kHz and under) were obtained by subtraction of waveforms to clicks with high-pass masking at these frequencies (derived responses). The 3-CLTs were analysed and described in terms of their geometrical measures. All 3-CLTs included 5 planar segments whose latencies progressively increased with decreasing stimulus frequency, and whose durations and orientations did not change across frequencies. Apex trajectory amplitudes as well as planar segment sizes decreased from unmasked clicks to the specific frequency bands, and with decreasing frequency. The changes noted for apex latency and trajectory amplitude were paralleled by corresponding changes in amplitude and latency of single-channel records. The changes in 3-CLT measures with changes in stimulus frequency reflect the contribution of different parts of the cochlea. The unchanged measures may be attributed to the unchanged anatomy of the generators under the different stimulus conditions. The results of this study do not support the notion that the broad stimulus bandwidth is responsible for the planarity of 3-CLT segments. In addition, these results indicate that different cochlear processes are responsible for the latency changes observed across stimulus intensities and for those associated with stimulus frequency.
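    A minimal sketch of the derived-response subtraction described above; the band labels and synthetic waveforms are assumptions, and the real procedure also involves matched stimulus levels and masker calibration.

```python
import numpy as np

# Hypothetical averaged click-ABR waveforms recorded under high-pass masking
# noise with different cutoff frequencies (Hz). A high-pass masker silences
# cochlear regions above its cutoff, so each masked response reflects regions
# *below* that cutoff; the arrays here are random stand-ins for real averages.
rng = np.random.default_rng(7)
n_samples = 512
responses = {
    "unmasked": rng.standard_normal(n_samples),
    8000: rng.standard_normal(n_samples),
    4000: rng.standard_normal(n_samples),
    2000: rng.standard_normal(n_samples),
    1000: rng.standard_normal(n_samples),
}

def derived_responses(responses):
    """Derived narrow-band responses by successive subtraction of masked ABRs."""
    cutoffs = sorted(k for k in responses if k != "unmasked")   # [1000, 2000, 4000, 8000]
    derived = {f"{cutoffs[-1]} Hz and above": responses["unmasked"] - responses[cutoffs[-1]]}
    for high, low in zip(cutoffs[::-1], cutoffs[::-1][1:]):
        derived[f"{low}-{high} Hz"] = responses[high] - responses[low]
    derived[f"{cutoffs[0]} Hz and under"] = responses[cutoffs[0]]
    return derived

for band, waveform in derived_responses(responses).items():
    print(band, "peak-to-peak amplitude:", np.ptp(waveform))
```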

  13. Effects of click polarity on auditory brain-stem potentials: a three-channel Lissajous' trajectory study.

    PubMed

    Pratt, H; Bleich, N

    1989-11-01

    Three-channel Lissajous' trajectories (3CLTs) of the auditory brain-stem evoked potentials (ABEP) were recorded from 16 adult subjects (28 ears) in response to rarefaction (R), condensation (C) and alternating polarity (A) clicks. 3CLTs were analysed and described in terms of their geometrical measures. All 3CLTs included 5 planar segments (named 'a', 'b', 'c', 'd' and 'e') whose latencies, durations, orientations, sizes and shapes were not affected by click polarity. Occasionally, planar segment 'd' was not defined, and its absence was paralleled by the absence of peak IV in the Vertex-Mastoid records. A small (0.03 microV) but significant increase was found in the trajectory amplitude of planar segment 'e' with C clicks. The effects of click polarity on 3CLT observed in this study suggest that some previously described ABEP changes with click polarity were the result of interactions between electrode positions and the relative contributions of overlapping generators. The effects on the fourth and fifth ABEP components may be the result of changes in the temporal overlap of the activity of their generators.

  14. Three-channel Lissajous' trajectory of the binaural interaction components in human auditory brain-stem evoked potentials.

    PubMed

    Polyakov, A; Pratt, H

    1994-09-01

    The 3-channel Lissajous' trajectory (3-CLT) of the binaural interaction components (BI) in auditory brain-stem evoked potentials (ABEPs) was derived from 17 normally hearing adults by subtracting the response to binaural clicks (B) from the algebraic sum of monaural responses (L + R). ABEPs were recorded in response to 65 dB nHL, alternating polarity clicks, presented at a rate of 11/sec. A normative set of BI 3-CLT measures was calculated and compared with the corresponding measures of simultaneously recorded, single-channel vertex-left mastoid and vertex-neck derivations of BI and of ABEP L + R and B. 3-CLT measures included: apex latency, amplitude and orientation, as well as planar segment duration and orientation. The results showed 3 apices and associated planar segments ("BdII," "Be" and "Bf") in the 3-CLT of BI which corresponded in latency to the vertex-mastoid and vertex-neck peaks IIIn, V and VI of ABEP L + R and B. These apices corresponded in latency and orientation to apices of the 3-CLT of ABEP L + R and ABEP B. This correspondence suggests that the generators of the BI components lie between the trapezoid body and the output of the inferior colliculus. Durations of BI planar segments were approximately 1.0 msec. Apex amplitudes of BI 3-CLT were larger than the respective peak amplitudes of the vertex-mastoid and vertex-neck recorded BI, while their intersubject variabilities were comparable.
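
    The binaural interaction components follow from the subtraction stated above, BI = (L + R) - B, applied sample-wise to the averaged recordings. A minimal sketch with placeholder arrays:

```python
# Minimal sketch of deriving the binaural interaction components (BI):
# BI = (L + R) - B, computed sample-wise on averaged evoked potentials.
# The arrays here are zero-filled placeholders for real averaged recordings.
import numpy as np

n_channels, n_samples = 3, 240                       # e.g., three orthogonal derivations
abep_left = np.zeros((n_channels, n_samples))        # average response to left-ear clicks
abep_right = np.zeros((n_channels, n_samples))       # average response to right-ear clicks
abep_binaural = np.zeros((n_channels, n_samples))    # average response to binaural clicks

# The BI is the part of the summed monaural responses not present in the
# binaural response; its 3-CLT can then be analysed like any other trajectory.
bi = (abep_left + abep_right) - abep_binaural
```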

  15. Auditory Verbal Hallucinations and Brain Dysconnectivity in the Perisylvian Language Network: A Multimodal Investigation

    PubMed Central

    Pettersson-Yeo, William; Allen, Paul; Catani, Marco; Williams, Steven; Barsaglini, Alessio; Kambeitz-Ilankovic, Lana M.; McGuire, Philip; Mechelli, Andrea

    2015-01-01

    Neuroimaging studies of schizophrenia have indicated that the development of auditory verbal hallucinations (AVHs) is associated with altered structural and functional connectivity within the perisylvian language network. However, these studies focussed mainly on either structural or functional alterations in patients with chronic schizophrenia. Therefore, they were unable to examine the relationship between the 2 types of measures and could not establish whether the observed alterations would be expressed in the early stage of the illness. We used diffusion tensor imaging and functional magnetic resonance imaging to examine white matter integrity and functional connectivity within the left perisylvian language network of 46 individuals with an at risk mental state for psychosis or a first episode of the illness, including 28 who had developed AVHs (AVH group) and 18 who had not (nonauditory verbal hallucination [nAVH] group), and 22 healthy controls. Inferences were made at P < .05 (corrected). The nAVH group relative to healthy controls showed a reduction of both white matter integrity and functional connectivity as well as a disruption of the normal structure-function relationship along the fronto-temporal pathway. For all measures, the AVH group showed intermediate values between healthy controls and the nAVH group. These findings seem to suggest that, in the early stage of the disorder, a significant impairment of fronto-temporal connectivity is evident in patients who do not experience AVHs. This is consistent with the hypothesis that, whilst mild disruption of connectivity might still enable the emergence of AVHs, more severe alterations may prevent the occurrence of the hallucinatory experience. PMID:24361862

  16. Auditory verbal hallucinations and brain dysconnectivity in the perisylvian language network: a multimodal investigation.

    PubMed

    Benetti, Stefania; Pettersson-Yeo, William; Allen, Paul; Catani, Marco; Williams, Steven; Barsaglini, Alessio; Kambeitz-Ilankovic, Lana M; McGuire, Philip; Mechelli, Andrea

    2015-01-01

    Neuroimaging studies of schizophrenia have indicated that the development of auditory verbal hallucinations (AVHs) is associated with altered structural and functional connectivity within the perisylvian language network. However, these studies focussed mainly on either structural or functional alterations in patients with chronic schizophrenia. Therefore, they were unable to examine the relationship between the 2 types of measures and could not establish whether the observed alterations would be expressed in the early stage of the illness. We used diffusion tensor imaging and functional magnetic resonance imaging to examine white matter integrity and functional connectivity within the left perisylvian language network of 46 individuals with an at risk mental state for psychosis or a first episode of the illness, including 28 who had developed AVHs (AVH group) and 18 who had not (nonauditory verbal hallucination [nAVH] group), and 22 healthy controls. Inferences were made at P < .05 (corrected). The nAVH group relative to healthy controls showed a reduction of both white matter integrity and functional connectivity as well as a disruption of the normal structure-function relationship along the fronto-temporal pathway. For all measures, the AVH group showed intermediate values between healthy controls and the nAVH group. These findings seem to suggest that, in the early stage of the disorder, a significant impairment of fronto-temporal connectivity is evident in patients who do not experience AVHs. This is consistent with the hypothesis that, whilst mild disruption of connectivity might still enable the emergence of AVHs, more severe alterations may prevent the occurrence of the hallucinatory experience.

  17. Asymmetries of the human social brain in the visual, auditory and chemical modalities

    PubMed Central

    Brancucci, Alfredo; Lucci, Giuliana; Mazzatenta, Andrea; Tommasi, Luca

    2008-01-01

    Structural and functional asymmetries are present in many regions of the human brain responsible for motor control, sensory and cognitive functions and communication. Here, we focus on hemispheric asymmetries underlying the domain of social perception, broadly conceived as the analysis of information about other individuals based on acoustic, visual and chemical signals. By means of these cues the brain establishes the border between ‘self’ and ‘other’, and interprets the surrounding social world in terms of the physical and behavioural characteristics of conspecifics essential for impression formation and for creating bonds and relationships. We show that, considered from the standpoint of single- and multi-modal sensory analysis, the neural substrates of the perception of voices, faces, gestures, smells and pheromones, as evidenced by modern neuroimaging techniques, are characterized by a general pattern of right-hemispheric functional asymmetry that might benefit from other aspects of hemispheric lateralization rather than constituting a true specialization for social information. PMID:19064350

  18. Asymmetries of the human social brain in the visual, auditory and chemical modalities.

    PubMed

    Brancucci, Alfredo; Lucci, Giuliana; Mazzatenta, Andrea; Tommasi, Luca

    2009-04-12

    Structural and functional asymmetries are present in many regions of the human brain responsible for motor control, sensory and cognitive functions and communication. Here, we focus on hemispheric asymmetries underlying the domain of social perception, broadly conceived as the analysis of information about other individuals based on acoustic, visual and chemical signals. By means of these cues the brain establishes the border between 'self' and 'other', and interprets the surrounding social world in terms of the physical and behavioural characteristics of conspecifics essential for impression formation and for creating bonds and relationships. We show that, considered from the standpoint of single- and multi-modal sensory analysis, the neural substrates of the perception of voices, faces, gestures, smells and pheromones, as evidenced by modern neuroimaging techniques, are characterized by a general pattern of right-hemispheric functional asymmetry that might benefit from other aspects of hemispheric lateralization rather than constituting a true specialization for social information.

  19. Placebo-Controlled Trial of Familiar Auditory Sensory Training for Acute Severe Traumatic Brain Injury: A Preliminary Report.

    PubMed

    Pape, Theresa Louise-Bender; Rosenow, Joshua M; Steiner, Monica; Parrish, Todd; Guernon, Ann; Harton, Brett; Patil, Vijaya; Bhaumik, Dulal K; McNamee, Shane; Walker, Matthew; Froehlich, Kathleen; Burress, Catherine; Odle, Cheryl; Wang, Xue; Herrold, Amy A; Zhao, Weihan; Reda, Domenic; Mallinson, Trudy; Conneely, Mark; Nemeth, Alexander J

    2015-07-01

    Sensory stimulation is often provided to persons incurring severe traumatic brain injury (TBI), but therapeutic effects are unclear. This preliminary study investigated neurobehavioral and neurophysiological effects of sensory stimulation on global neurobehavioral functioning, arousal, and awareness. A double-blind randomized placebo-controlled trial was conducted in which 15 participants in states of disordered consciousness (DOC), an average of 70 days after TBI, were provided either the Familiar Auditory Sensory Training (FAST) or a placebo of silence. Global neurobehavioral functioning was measured with the Disorders of Consciousness Scale (DOCS). Arousal and awareness were measured with the Coma-Near-Coma (CNC) scale. Neurophysiological effect was measured using functional magnetic resonance imaging (fMRI). FAST (n = 8) and Placebo (n = 7) groups each showed neurobehavioral improvement. Mean DOCS change (FAST = 13.5, SD = 8.2; Placebo = 18.9, SD = 15.6) was not different, but FAST patients had significantly (P = .049; 95% confidence interval [CI] = -1.51, -.005) more CNC gains (FAST = 1.01, SD = 0.60; Placebo = 0.25, SD = 0.70). Mixed-effects models confirmed the CNC findings (P = .002). The treatment effect, based on the CNC, was large (d = 1.88, 95% CI = 0.77, 3.00), and the number needed to treat was 2. FAST patients had more fMRI activation in language regions and the whole brain (P values <.05), resembling healthy controls' activation. For persons with DOC 29 to 170 days after TBI, FAST resulted in CNC gains and increased neural responsivity to vocal stimuli in language regions. Clinicians should consider providing the FAST to support patient engagement in neurorehabilitation. © The Author(s) 2015.

  20. [The Map of Auditory Function].

    PubMed

    Fujimoto, So; Komura, Yutaka

    2017-04-01

    Brodmann areas 41 and 42 are located in the superior temporal gyrus and are regarded as auditory cortices. The fundamental function in audition is frequency analysis; however, the findings on tonotopic maps of the human auditory cortex were not unified until recently, when they were compared with findings on the inputs and outputs of the monkey auditory cortex. The auditory cortex shows plasticity after conditioned learning and cochlear implant surgery. It is also involved in speech perception, music appreciation, and auditory hallucination in schizophrenia through interactions with other brain areas, such as the thalamus, frontal cortex, and limbic systems.

  1. Design and evaluation of area-efficient and wide-range impedance analysis circuit for multichannel high-quality brain signal recording system

    NASA Astrophysics Data System (ADS)

    Iwagami, Takuma; Tani, Takaharu; Ito, Keita; Nishino, Satoru; Harashima, Takuya; Kino, Hisashi; Kiyoyama, Koji; Tanaka, Tetsu

    2016-04-01

    To enable chronic and stable neural recording, we have been developing an implantable multichannel neural recording system with impedance analysis functions. One of the key requirements for high-quality neural signal recording is maintaining good interfaces between the recording electrodes and the tissue. We have proposed an impedance analysis circuit with a very small circuit area, which is implemented in a multichannel neural recording and stimulating system. In this paper, we focused on the design of the impedance analysis circuit configuration and the evaluation of a minimal voltage measurement unit. The proposed circuit has a very small circuit area of 0.23 mm^2, designed with 0.18 µm CMOS technology, and can measure interface impedances between recording electrodes and tissue over an ultrawide range from 100 Ω to 10 MΩ. In addition, we also successfully acquired interface impedances using the proposed circuit in agarose gel experiments.

  2. Brain functional connectivity during the experience of thought blocks in schizophrenic patients with persistent auditory verbal hallucinations: an EEG study.

    PubMed

    Angelopoulos, Elias; Koutsoukos, Elias; Maillis, Antonis; Papadimitriou, George N; Stefanis, Costas

    2014-03-01

    Thought blocks (TBs) are characterized by regular interruptions in the stream of thought. Outward signs are abrupt and repeated interruptions in the flow of conversation or actions, while the subjective experience is that of a total and uncontrollable emptying of the mind. In the very limited literature regarding TB, the phenomenon has been conceptualized as a disturbance of consciousness that can be attributed to stoppages of continuous information processing due to an increase in the volume of information to be processed. To investigate potential expression of the phenomenon in the functional properties of electroencephalographic (EEG) activity, an EEG study was conducted in schizophrenic patients with persisting auditory verbal hallucinations (AVHs) who additionally exhibited TBs. In this case, we hypothesized that the persistent and dense AVHs could serve the role of an increased information flow that the brain is unable to process, a condition that is perceived by the person as TB. Phase synchronization analyses performed on EEG segments during the experience of TBs showed that synchrony values exhibited a long-range common mode of coupling (grouped behavior) among the left temporal area and the remaining central and frontal brain areas. These common synchrony-fluctuation schemes were observed for 0.5 to 2 s and were detected in a 4-s window following the estimated initiation of the phenomenon. The observation was frequency specific and detected in the broad alpha band region (6-12 Hz). The introduction of synchrony entropy (SE) analysis applied to the cumulative synchrony distribution showed that TB states were characterized by an explicit preference of the system to function at low values of synchrony, while the synchrony values were broadly distributed during the recovery state. Our results indicate that during TB states, the phase locking of several brain areas converged uniformly in a narrow band of low synchrony values and in a
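
    For readers unfamiliar with phase-synchronization measures, the sketch below computes one common variant, the phase-locking value between two channels in a 6-12 Hz band, via band-pass filtering and the Hilbert transform. It does not reproduce the paper's exact synchrony or synchrony-entropy computations; the filter order, sampling rate, and segment length are assumptions.

```python
# Hedged sketch: phase-locking value (PLV) between two EEG channels in the
# broad alpha region (6-12 Hz). Signals below are random placeholders.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 256                                   # sampling rate (Hz), assumed
rng = np.random.default_rng(0)
eeg_a = rng.standard_normal(fs * 4)        # placeholder 4 s segments
eeg_b = rng.standard_normal(fs * 4)

# Band-pass filter to 6-12 Hz
b, a = butter(4, [6 / (fs / 2), 12 / (fs / 2)], btype="band")
xa = filtfilt(b, a, eeg_a)
xb = filtfilt(b, a, eeg_b)

# Instantaneous phase difference from the analytic signals
phase_diff = np.angle(hilbert(xa)) - np.angle(hilbert(xb))

# PLV: magnitude of the mean phase-difference vector (0 = none, 1 = perfect)
plv = np.abs(np.mean(np.exp(1j * phase_diff)))
print(f"PLV (6-12 Hz): {plv:.3f}")
```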

  3. Multichannel sparse spike inversion

    NASA Astrophysics Data System (ADS)

    Pereg, Deborah; Cohen, Israel; Vassiliou, Anthony A.

    2017-10-01

    In this paper, we address the problem of sparse multichannel seismic deconvolution. We introduce multichannel sparse spike inversion as an iterative procedure that deconvolves the seismic data and recovers the Earth's two-dimensional reflectivity image, while taking into consideration the relations between spatially neighboring traces. We demonstrate the improved performance of the proposed algorithm and its robustness to noise, compared to a competitive single-channel algorithm, through simulations and real seismic data examples.
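
    The abstract does not spell out the algorithm, so the sketch below illustrates the general class of method with a plain single-trace sparse-spike deconvolution solved by iterative soft thresholding (ISTA); the multichannel version described above additionally couples spatially neighboring traces. Wavelet, reflectivity, and regularization settings are synthetic.

```python
# Hedged sketch: single-trace sparse spike deconvolution via ISTA,
# minimizing 0.5*||y - W r||^2 + lam*||r||_1. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n = 200
true_r = np.zeros(n)
true_r[[40, 90, 95, 150]] = [1.0, -0.7, 0.5, 0.8]        # sparse reflectivity

# Ricker-like wavelet (25 Hz, 2 ms sampling)
tw = np.arange(-30, 31) / 500.0
wavelet = (1 - 2 * (np.pi * 25 * tw) ** 2) * np.exp(-(np.pi * 25 * tw) ** 2)

def shifted_wavelet(j):
    impulse = np.zeros(n)
    impulse[j] = 1.0
    return np.convolve(impulse, wavelet, mode="same")

# Explicit convolution matrix so the adjoint is just the transpose
W = np.column_stack([shifted_wavelet(j) for j in range(n)])
trace = W @ true_r + 0.02 * rng.standard_normal(n)        # noisy seismic trace

lam = 0.05
step = 1.0 / np.linalg.norm(W, 2) ** 2
r = np.zeros(n)
for _ in range(500):
    grad = W.T @ (W @ r - trace)
    r = r - step * grad
    r = np.sign(r) * np.maximum(np.abs(r) - lam * step, 0.0)   # soft threshold
```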

  4. Conventional and cross-correlation brain-stem auditory evoked responses in the white leghorn chick: rate manipulations

    NASA Technical Reports Server (NTRS)

    Burkard, R.; Jones, S.; Jones, T.

    1994-01-01

    Rate-dependent changes in the chick brain-stem auditory evoked response (BAER) using conventional averaging and a cross-correlation technique were investigated. Five 15- to 19-day-old white leghorn chicks were anesthetized with Chloropent. In each chick, the left ear was acoustically stimulated. Electrical pulses of 0.1-ms duration were shaped, attenuated, and passed through a current driver to an Etymotic ER-2 which was sealed in the ear canal. Electrical activity from stainless-steel electrodes was amplified, filtered (300-3000 Hz) and digitized at 20 kHz. Click levels included 70 and 90 dB peSPL. In each animal, conventional BAERs were obtained at rates ranging from 5 to 90 Hz. BAERs were also obtained using a cross-correlation technique involving pseudorandom pulse sequences called maximum length sequences (MLSs). The minimum time between pulses, called the minimum pulse interval (MPI), ranged from 0.5 to 6 ms. Two BAERs were obtained for each condition. Dependent variables included the latency and amplitude of the cochlear microphonic (CM), wave 2 and wave 3. BAERs were observed in all chicks, for all level by rate combinations for both conventional and MLS BAERs. There was no effect of click level or rate on the latency of the CM. The latency of waves 2 and 3 increased with decreasing click level and increasing rate. CM amplitude decreased with decreasing click level, but was not influenced by click rate for the 70 dB peSPL condition. For the 90 dB peSPL click, CM amplitude was uninfluenced by click rate for conventional averaging. For MLS BAERs, CM amplitude was similar to conventional averaging for longer MPIs.(ABSTRACT TRUNCATED AT 250 WORDS).
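
    The MLS recovery step mentioned above can be illustrated with a toy circular-convolution model: the heavily overlapped recording is deconvolved by circular cross-correlation with the stimulus sequence. One sequence element per sample is used here purely for illustration; real minimum pulse intervals are set in milliseconds, and the evoked-response kernel is synthetic.

```python
# Hedged sketch of maximum length sequence (MLS) response recovery by
# circular cross-correlation. All parameters are illustrative.
import numpy as np
from scipy.signal import max_len_seq

seq = max_len_seq(10)[0].astype(float) * 2.0 - 1.0   # length 1023, values +/-1

# Toy evoked-response kernel (damped oscillation, arbitrary units)
n_kernel = 80
tk = np.arange(n_kernel)
h = np.exp(-tk / 15.0) * np.sin(2 * np.pi * tk / 20.0)

L = len(seq)
# Recording = circular convolution of the stimulus sequence with the response
recording = np.real(np.fft.ifft(np.fft.fft(seq) * np.fft.fft(h, L)))
recording += 0.05 * np.random.default_rng(2).standard_normal(L)   # noise

# Circular cross-correlation with the sequence undoes the overlap; for a +/-1
# MLS the autocorrelation is approximately L at lag 0, so divide by L.
recovered = np.real(np.fft.ifft(np.fft.fft(recording) * np.conj(np.fft.fft(seq)))) / L
```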

  6. Differences in brain circuitry for appetitive and reactive aggression as revealed by realistic auditory scripts

    PubMed Central

    Moran, James K.; Weierstall, Roland; Elbert, Thomas

    2014-01-01

    Aggressive behavior is thought to divide into two motivational elements: the first is a self-defensively motivated aggression against threat, and the second a hedonically motivated “appetitive” aggression. Appetitive aggression is the less understood of the two, often only researched within abnormal psychology. Our approach is to understand it as a universal and adaptive response, and to examine the functional neural activity of ordinary men (N = 50) presented with an imaginative listening task involving a murderer describing a kill. We manipulated motivational context in a between-subjects design to evoke appetitive or reactive aggression, against a neutral control, measuring activity with magnetoencephalography (MEG). Results show differences in left frontal regions in the delta (2–5 Hz) and alpha (8–12 Hz) bands for aggressive conditions, and right parietal delta activity differentiating appetitive and reactive aggression. These results validate the distinction of reward-driven appetitive aggression from reactive aggression in ordinary populations at the level of functional neural brain circuitry. PMID:25538590

  7. Magnetoencephalographic accuracy profiles for the detection of auditory pathway sources.

    PubMed

    Bauer, Martin; Trahms, Lutz; Sander, Tilmann

    2015-04-01

    The detection limits for cortical and brain stem sources associated with the auditory pathway are examined in order to analyse brain responses at the limits of the audible frequency range. The results obtained from this study are also relevant to other issues of auditory brain research. A complementary approach consisting of recordings of magnetoencephalographic (MEG) data and simulations of magnetic field distributions is presented in this work. A biomagnetic phantom consisting of a spherical volume filled with a saline solution and four current dipoles is built. The magnetic fields outside of the phantom generated by the current dipoles are then measured for a range of applied electric dipole moments with a planar multichannel SQUID magnetometer device and a helmet MEG gradiometer device. A magnetometer system is expected to be more sensitive to brain stem sources than a gradiometer system. The same electrical and geometrical configuration is simulated in a forward calculation. From both the measured and the simulated data, the dipole positions are estimated using an inverse calculation. Results are obtained for the reconstruction accuracy as a function of applied electric dipole moment and depth of the current dipole. We found that both systems can localize cortical and subcortical sources at physiological dipole strength, even for brain stem sources. Further, we found that a planar magnetometer system is more suitable if the position of the brain source can be restricted to a limited region of the brain. If this is not the case, a helmet-shaped sensor system offers more accurate source estimation.

  8. Central auditory disorders: toward a neuropsychology of auditory objects

    PubMed Central

    Goll, Johanna C.; Crutch, Sebastian J.; Warren, Jason D.

    2012-01-01

    Purpose of review: Analysis of the auditory environment, source identification and vocal communication all require efficient brain mechanisms for disambiguating, representing and understanding complex natural sounds as ‘auditory objects’. Failure of these mechanisms leads to a diverse spectrum of clinical deficits. Here we review current evidence concerning the phenomenology, mechanisms and brain substrates of auditory agnosias and related disorders of auditory object processing. Recent findings: Analysis of lesions causing auditory object deficits has revealed certain broad anatomical correlations: deficient parsing of the auditory scene is associated with lesions involving the parieto-temporal junction, while selective disorders of sound recognition occur with more anterior temporal lobe or extra-temporal damage. Distributed neural networks have been increasingly implicated in the pathogenesis of such disorders as developmental dyslexia, congenital amusia and tinnitus. Auditory category deficits may arise from defective interaction of spectrotemporal encoding and executive and mnestic processes. Dedicated brain mechanisms are likely to process specialised sound objects such as voices and melodies. Summary: Emerging empirical evidence suggests a clinically relevant, hierarchical and fractionated neuropsychological model of auditory object processing that provides a framework for understanding auditory agnosias and makes specific predictions to direct future work. PMID:20975559

  9. Bilinguals at the "cocktail party": dissociable neural activity in auditory-linguistic brain regions reveals neurobiological basis for nonnative listeners' speech-in-noise recognition deficits.

    PubMed

    Bidelman, Gavin M; Dexter, Lauren

    2015-04-01

    We examined a consistent deficit observed in bilinguals: poorer speech-in-noise (SIN) comprehension for their nonnative language. We recorded neuroelectric mismatch potentials in mono- and bi-lingual listeners in response to contrastive speech sounds in noise. Behaviorally, late bilinguals required ∼10 dB more favorable signal-to-noise ratios to match monolinguals' SIN abilities. Source analysis of cortical activity demonstrated a monotonic increase in response latency with noise in the superior temporal gyrus (STG) for both groups, suggesting parallel degradation of speech representations in auditory cortex. In contrast, we found differential speech encoding between groups within the inferior frontal gyrus (IFG), adjacent to Broca's area, where noise delays observed in nonnative listeners were offset in monolinguals. Notably, brain-behavior correspondences doubly dissociated between language groups: STG activation predicted bilinguals' SIN, whereas IFG activation predicted monolinguals' performance. We infer that higher-order brain areas act compensatorily to enhance impoverished sensory representations, but only when degraded speech recruits linguistic brain mechanisms downstream from initial auditory-sensory inputs.

  10. Expression of androgen receptor mRNA in the brain of Gekko gecko: implications for understanding the role of androgens in controlling auditory and vocal processes.

    PubMed

    Tang, Y Z; Piao, Y S; Zhuang, L Z; Wang, Z W

    2001-09-17

    The neuroanatomical distribution of androgen receptor (AR) mRNA-containing cells in the brain of a vocal lizard, Gekko gecko, was mapped using in situ hybridization. Particular attention was given to auditory and vocal nuclei. Within the auditory system, the cochlear nuclei, the central nucleus of the torus semicircularis, the nucleus medialis, and the medial region of the dorsal ventricular ridge contained moderate numbers of labeled neurons. Neurons labeled with the AR probe were located in many nuclei related to vocalization. Within the hindbrain, the mesencephalic nucleus of the trigeminal nerve, the vagal part of the nucleus ambiguus, and the dorsal motor nucleus of the vagus nerve contained many neurons that exhibited strong expression of AR mRNA. Neurons located in the peripheral nucleus of the torus in the mesencephalon exhibited moderate levels of hybridization. Intense AR mRNA expression was also observed in neurons within two other areas that may be involved in vocalization, the medial preoptic area and the hypoglossal nucleus. The strongest mRNA signals identified in this study were found in cells of the pallium, hypothalamus, and inferior nucleus of the raphe. The expression patterns of AR mRNA in the auditory and vocal control nuclei of G. gecko suggest that neurons involved in acoustic communication in this species, and perhaps related species, are susceptible to regulation by androgens during the breeding season. The significance of these results for understanding the evolution of reptilian vocal communication is discussed.

  11. Loss of auditory sensitivity from inner hair cell synaptopathy can be centrally compensated in the young but not old brain.

    PubMed

    Möhrle, Dorit; Ni, Kun; Varakina, Ksenya; Bing, Dan; Lee, Sze Chim; Zimmermann, Ulrike; Knipper, Marlies; Rüttiger, Lukas

    2016-08-01

    A dramatic shift in societal demographics will lead to rapid growth in the number of older people with hearing deficits. Poorer performance in suprathreshold speech understanding and temporal processing with age has been previously linked with progressing inner hair cell (IHC) synaptopathy that precedes age-dependent elevation of auditory thresholds. We compared central sound responsiveness after acoustic trauma in young, middle-aged, and older rats. We demonstrate that IHC synaptopathy progresses from middle age onward and hearing threshold becomes elevated from old age onward. Interestingly, middle-aged animals could centrally compensate for the loss of auditory fiber activity through an increase in late auditory brainstem responses (late auditory brainstem response wave) linked to shortening of central response latencies. In contrast, old animals failed to restore central responsiveness, which correlated with reduced temporal resolution in responding to amplitude changes. These findings may suggest that cochlear IHC synaptopathy with age does not necessarily induce temporal auditory coding deficits, as long as the capacity to generate neuronal gain maintains normal sound-induced central amplitudes. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  12. fMRI reveals lateralized pattern of brain activity modulated by the metrics of stimuli during auditory rhyme processing.

    PubMed

    Hurschler, Martina A; Liem, Franziskus; Oechslin, Mathias; Stämpfli, Philipp; Meyer, Martin

    2015-08-01

    Our fMRI study investigates auditory rhyme processing in spoken language to further elucidate the topic of functional lateralization of language processing. During scanning, 14 subjects listened to four different types of versed word strings and subsequently performed either a rhyme or a meter detection task. Our results show lateralization to auditory-related temporal regions in the right hemisphere irrespective of task. As for the left hemisphere we report responses in the supramarginal gyrus as well as in the opercular part of the inferior frontal gyrus modulated by the presence of regular meter and rhyme. The interaction of rhyme and meter was associated with increased involvement of the superior temporal sulcus and the putamen of the right hemisphere. Overall, these findings support the notion of right-hemispheric specialization for suprasegmental analyses during processing of spoken sentences and provide neuroimaging evidence for the influence of metrics on auditory rhyme processing. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. List mode multichannel analyzer

    DOEpatents

    Archer, Daniel E.; Luke, S. John; Mauger, G. Joseph; Riot, Vincent J.; Knapp, David A.

    2007-08-07

    A digital list mode multichannel analyzer (MCA) built around a programmable FPGA device for onboard data analysis and on-the-fly modification of system detection/operating parameters, and capable of collecting and processing data in very small time bins (<1 millisecond) when used in histogramming mode, or in list mode as a list mode MCA.
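
    The distinction between list mode and histogramming mode can be made concrete with a small sketch: list-mode data keep a (timestamp, channel) pair per event, so spectra for arbitrary time windows can be rebuilt offline. Event counts, channel range, and time windows below are invented for illustration.

```python
# Hedged sketch of rebinning list-mode events into pulse-height spectra.
import numpy as np

rng = np.random.default_rng(3)
n_events = 10000
timestamps = np.sort(rng.uniform(0.0, 10.0, n_events))     # event times (s)
channels = rng.integers(0, 1024, n_events)                  # ADC channel per event

def spectrum(t_start, t_stop, n_channels=1024):
    """Histogram the pulse-height channels of events in a time window."""
    mask = (timestamps >= t_start) & (timestamps < t_stop)
    return np.bincount(channels[mask], minlength=n_channels)

# Spectra can be built for sub-millisecond time bins from the same event list
spec_first_ms = spectrum(0.0, 0.001)
```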

  14. Comparison of air- and bone-conducted brain stem auditory evoked responses in young dogs and dogs with bilateral ear canal obstruction.

    PubMed

    Wolschrijn, C F; Venker-van Haagen, A J; van den Brom, W E

    1997-11-01

    Brain stem responses to air- and bone-conducted stimuli were analyzed in 11 young dogs, using an in-the-ear transducer and a vibrator designed for human hearing tests, respectively. The mean thresholds were 0 to 10 dB for air-conducted stimuli and 50 to 60 dB for bone-conducted stimuli. The wave forms and inter-peak latencies of the waves of the auditory evoked responses elicited by air-conducted and bone-conducted stimuli were similar. This indicated that the signals had the same origin and thus both the air-conducted and the bone-conducted responses could be considered to be auditory responses. Measurement of air-conducted and bone-conducted brain stem-evoked responses in five dogs with bilateral chronic obstructive ear disease revealed thresholds of 50 to 60 dB for air-conducted stimuli and 60 to 70 dB for bone-conducted stimuli. By comparison of these results with those in the 11 young dogs, it could be concluded that there was hearing loss other than that caused by obstruction of the ear canals.

  15. Automatic versus strategic effects of phonology and orthography on auditory lexical access in brain-damaged patients as a function of inter-stimulus interval.

    PubMed

    Baum, S R; Leonard, C L

    1999-12-01

    The influence of both phonological and orthographic information on auditory lexical access was examined in left- and right-hemisphere-damaged individuals using a lexical decision paradigm. Subjects were presented with prime-target pairs that were either phonologically related (tooth-youth), orthographically related (touch-couch), both phonologically and orthographically related (blood-flood), or unrelated (bill-tent), at two inter-stimulus intervals (ISI)--100 ms and 750 ms--to tap more automatic versus more strategic processing. All groups demonstrated effects of orthography at both ISIs (facilitory at 100 ms ISI and inhibitory at 750 ms ISI), supporting the findings by Leonard and Baum (1997) that effects of orthography emerge independent of site of brain damage and suggesting that orthographic effects in auditory word recognition tend to be largely strategic. A facilitory effect of phonology was also found for all groups at both ISIs. The findings are discussed in relation to theories of lexical activation in brain-damaged individuals.

  16. Evidence from Auditory Nerve and Brainstem Evoked Responses for an Organic Brain Lesion in Children with Autistic Traits

    ERIC Educational Resources Information Center

    Student, M.; Sohmer, H.

    1978-01-01

    In an attempt to resolve the question as to whether children with autistic traits have an organic nervous system lesion, auditory nerve and brainstem evoked responses were recorded in a group of 15 children (4 to 12 years old) with autistic traits. (Author)

  17. Multichannel Compressive Sensing MRI Using Noiselet Encoding

    PubMed Central

    Pawar, Kamlesh; Egan, Gary; Zhang, Jingxin

    2015-01-01

    The incoherence between measurement and sparsifying transform matrices and the restricted isometry property (RIP) of the measurement matrix are two of the key factors in determining the performance of compressive sensing (CS). In CS-MRI, the randomly under-sampled Fourier matrix is used as the measurement matrix and the wavelet transform is usually used as the sparsifying transform matrix. However, the incoherence between the randomly under-sampled Fourier matrix and the wavelet matrix is not optimal, which can deteriorate the performance of CS-MRI. Using the mathematical result that noiselets are maximally incoherent with wavelets, this paper introduces the noiselet unitary bases as the measurement matrix to improve the incoherence and RIP in CS-MRI. Based on an empirical RIP analysis that compares the multichannel noiselet and multichannel Fourier measurement matrices in CS-MRI, we propose a multichannel compressive sensing (MCS) framework to take advantage of the multichannel data acquisition used in MRI scanners. Simulations are presented in the MCS framework to compare the performance of noiselet encoding reconstructions and Fourier encoding reconstructions at different acceleration factors. The comparisons indicate that the multichannel noiselet measurement matrix has better RIP than its Fourier counterpart, and that noiselet encoded MCS-MRI outperforms Fourier encoded MCS-MRI in preserving image resolution and can achieve higher acceleration factors. To demonstrate the feasibility of the proposed noiselet encoding scheme, a pulse sequence with tailored spatially selective RF excitation pulses was designed and implemented on a 3T scanner to acquire the data in the noiselet domain from a phantom and a human brain. The results indicate that noiselet encoding preserves image resolution better than Fourier encoding. PMID:25965548

  18. Multichannel compressive sensing MRI using noiselet encoding.

    PubMed

    Pawar, Kamlesh; Egan, Gary; Zhang, Jingxin

    2015-01-01

    The incoherence between measurement and sparsifying transform matrices and the restricted isometry property (RIP) of the measurement matrix are two of the key factors in determining the performance of compressive sensing (CS). In CS-MRI, the randomly under-sampled Fourier matrix is used as the measurement matrix and the wavelet transform is usually used as the sparsifying transform matrix. However, the incoherence between the randomly under-sampled Fourier matrix and the wavelet matrix is not optimal, which can deteriorate the performance of CS-MRI. Using the mathematical result that noiselets are maximally incoherent with wavelets, this paper introduces the noiselet unitary bases as the measurement matrix to improve the incoherence and RIP in CS-MRI. Based on an empirical RIP analysis that compares the multichannel noiselet and multichannel Fourier measurement matrices in CS-MRI, we propose a multichannel compressive sensing (MCS) framework to take advantage of the multichannel data acquisition used in MRI scanners. Simulations are presented in the MCS framework to compare the performance of noiselet encoding reconstructions and Fourier encoding reconstructions at different acceleration factors. The comparisons indicate that the multichannel noiselet measurement matrix has better RIP than its Fourier counterpart, and that noiselet encoded MCS-MRI outperforms Fourier encoded MCS-MRI in preserving image resolution and can achieve higher acceleration factors. To demonstrate the feasibility of the proposed noiselet encoding scheme, a pulse sequence with tailored spatially selective RF excitation pulses was designed and implemented on a 3T scanner to acquire the data in the noiselet domain from a phantom and a human brain. The results indicate that noiselet encoding preserves image resolution better than Fourier encoding.
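
    The mutual-coherence idea underlying the argument above can be illustrated numerically. The sketch below computes mu(Phi, Psi) = sqrt(N) * max |<phi_i, psi_j>| for small orthonormal bases; a Hadamard basis stands in as a flat measurement basis because a ready-made noiselet transform is not assumed to be available, and the canonical (spike) basis stands in for the sparsifying basis.

```python
# Hedged sketch: mutual coherence between two orthonormal bases, ranging from
# 1 (maximally incoherent) to sqrt(N) (fully coherent). Illustrative only.
import numpy as np
from scipy.linalg import hadamard

def mutual_coherence(phi, psi):
    """mu = sqrt(N) * max |<phi_i, psi_j>| for real orthonormal bases (columns)."""
    n = phi.shape[0]
    return np.sqrt(n) * np.max(np.abs(phi.T @ psi))

N = 64
identity = np.eye(N)                     # canonical (spike) basis
had = hadamard(N) / np.sqrt(N)           # orthonormal Hadamard basis (flat rows)

print(mutual_coherence(identity, identity))   # sqrt(N): fully coherent
print(mutual_coherence(had, identity))        # 1.0: maximally incoherent
```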

  19. Non-auditory Effect of Noise Pollution and Its Risk on Human Brain Activity in Different Audio Frequency Using Electroencephalogram Complexity

    PubMed Central

    ALLAHVERDY, Armin; JAFARI, Amir Homayoun

    2016-01-01

    Background: Noise pollution is one of the most harmful ambient disturbances. It may cause many deficits in the abilities and activities of persons in urban and industrial areas. It may also cause many kinds of psychopathies. Therefore, it is very important to measure the risk of this pollution in different areas. Methods: This study was conducted in the Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences from June to September of 2015, in which different frequencies of noise pollution were played for volunteers. A 16-channel EEG signal was recorded synchronously; the complexity of the EEG was then measured using the fractal dimension and the relative power of the beta sub-band. Results: The average complexity of brain activity increased in the middle of the audio frequency range, and the complexity map of brain activity changed across frequencies, which shows the effects of frequency changes on human brain activity. Conclusion: EEG complexity is a good measure for ranking the annoyance and non-auditory risk of noise pollution on human brain activity. PMID:27957440

  20. Non-auditory Effect of Noise Pollution and Its Risk on Human Brain Activity in Different Audio Frequency Using Electroencephalogram Complexity.

    PubMed

    Allahverdy, Armin; Jafari, Amir Homayoun

    2016-10-01

    Noise pollution is one of the most harmful ambient disturbances. It may cause many deficits in the abilities and activities of persons in urban and industrial areas. It may also cause many kinds of psychopathies. Therefore, it is very important to measure the risk of this pollution in different areas. This study was conducted in the Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences from June to September of 2015, in which different frequencies of noise pollution were played for volunteers. A 16-channel EEG signal was recorded synchronously; the complexity of the EEG was then measured using the fractal dimension and the relative power of the beta sub-band. The average complexity of brain activity increased in the middle of the audio frequency range, and the complexity map of brain activity changed across frequencies, which shows the effects of frequency changes on human brain activity. EEG complexity is a good measure for ranking the annoyance and non-auditory risk of noise pollution on human brain activity.
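
    The two complexity measures named above can be sketched as follows; the band edges, Welch parameters, and kmax are assumptions, and the signal is a random placeholder rather than recorded EEG.

```python
# Hedged sketch: relative beta-band power and the Higuchi fractal dimension
# for a single EEG channel. Parameter choices are illustrative assumptions.
import numpy as np
from scipy.signal import welch

def relative_beta_power(x, fs, band=(13.0, 30.0)):
    f, pxx = welch(x, fs=fs, nperseg=fs * 2)
    total = np.trapz(pxx, f)
    in_band = (f >= band[0]) & (f <= band[1])
    return np.trapz(pxx[in_band], f[in_band]) / total

def higuchi_fd(x, kmax=10):
    """Higuchi fractal dimension (~1 for smooth signals, ~2 for white noise)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    lk = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            lm = np.sum(np.abs(np.diff(x[idx]))) * (n - 1) / (len(idx) - 1) / k
            lengths.append(lm / k)
        lk.append(np.mean(lengths))
    slope, _ = np.polyfit(np.log(1.0 / np.arange(1, kmax + 1)), np.log(lk), 1)
    return slope

fs = 256
eeg = np.random.default_rng(4).standard_normal(fs * 10)   # placeholder channel
print(relative_beta_power(eeg, fs), higuchi_fd(eeg))
```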

  1. Feasibility of multichannel human cochlear nucleus stimulation.

    PubMed

    Luetje, C M; Whittaker, C K; Geier, L; Mediavilla, S J; Shallop, J K

    1992-01-01

    Bipolar electrical stimulation of the brainstem cochlear nucleus (CN) following acoustic tumor removal in an only-hearing ear can provide beneficial hearing. However, the benefits of multichannel stimulation have yet to be defined. Following removal of a second acoustic tumor in a patient with neurofibromatosis 2, a Nucleus mini-22 channel implant device was inserted with the electrode array tip from the foramen of Luschka cephalad along the root entry zone of the eighth nerve, secured by a single suture superficially in the brain stem. Initial stimulation on the sixth postoperative day indicated that electrodes 18 to 22 were capable of CN stimulation without seventh nerve stimulation. Presumed electrode migration precluded further CN stimulation 1 month later. This report illustrates the feasibility of brainstem CN stimulation with an existing multichannel system.

  2. Multichannel Human Body Communication

    NASA Astrophysics Data System (ADS)

    Przystup, Piotr; Bujnowski, Adam; Wtorek, Jerzy

    2016-01-01

    Human Body Communication is an attractive alternative to traditional wireless communication (Bluetooth, ZigBee) in the case of Body Sensor Networks. Low power, high data rates and data security make it an ideal solution for medical applications. In this paper, signal attenuation at different frequencies, using FR4 electrodes, has been investigated. The performance of single- and multichannel transmission with frequency modulation of an analog signal has been tested. Experimental results show that HBC is a feasible solution for transmitting data between BSN nodes.

  3. Miniature multichannel biotelemeter system

    NASA Technical Reports Server (NTRS)

    Carraway, J. B.; Sumida, J. T. (Inventor)

    1974-01-01

    A miniature multichannel biotelemeter system is described. The system includes a transmitter where signals from different sources are sampled to produce a wavetrain of pulses. The transmitter also separates signals by sync pulses. The pulses amplitude modulate a radio frequency carrier which is received at a receiver unit. There the sync pulses are detected by a demultiplexer which routes the pulses from each different source to a separate output channel where the pulses are used to reconstruct the signals from the particular source.

  4. An auditory multiclass brain-computer interface with natural stimuli: Usability evaluation with healthy participants and a motor impaired end user

    PubMed Central

    Simon, Nadine; Käthner, Ivo; Ruf, Carolin A.; Pasqualotto, Emanuele; Kübler, Andrea; Halder, Sebastian

    2015-01-01

    Brain-computer interfaces (BCIs) can serve as muscle-independent communication aids. Persons who are unable to control their eye muscles (e.g., in the completely locked-in state) or have severe visual impairments for other reasons need BCI systems that do not rely on the visual modality. For this reason, BCIs that employ auditory stimuli were suggested. In this study, a multiclass BCI spelling system was implemented that uses animal voices with directional cues to code rows and columns of a letter matrix. To reveal possible training effects with the system, 11 healthy participants performed spelling tasks on 2 consecutive days. In a second step, the system was tested by a participant with amyotrophic lateral sclerosis (ALS) in two sessions. In the first session, healthy participants spelled with an average accuracy of 76% (3.29 bits/min), which increased to 90% (4.23 bits/min) on the second day. Spelling accuracy for the participant with ALS was 20% in the first and 47% in the second session. The results indicate a strong training effect for both the healthy participants and the participant with ALS. While healthy participants reached high accuracies in the first and second sessions, accuracies for the participant with ALS were not sufficient for satisfactory communication in either session. More training sessions might be needed to improve spelling accuracies. The study demonstrated the feasibility of the auditory BCI with healthy users and stresses the importance of training with auditory multiclass BCIs, especially for potential end-users of BCIs with disease. PMID:25620924
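
    Spelling performance in bits/min is commonly summarized with the Wolpaw information-transfer-rate formula; a sketch is given below. The number of classes and the selection rate used in the study are not stated here, so the example values are assumptions and do not reproduce the reported figures exactly.

```python
# Hedged sketch of the Wolpaw information transfer rate (ITR) in bits/min.
import math

def wolpaw_itr_bits_per_min(accuracy, n_classes, selections_per_min):
    p, n = accuracy, n_classes
    if p >= 1.0:
        bits = math.log2(n)
    elif p <= 1.0 / n:
        bits = 0.0
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * selections_per_min

# Example (assumed): a 25-class speller at 76% accuracy, one selection per minute
print(wolpaw_itr_bits_per_min(0.76, 25, 1.0))
```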

  5. Brain Regions Underlying Repetition and Auditory-Verbal Short-term Memory Deficits in Aphasia: Evidence from Voxel-based Lesion Symptom Mapping

    PubMed Central

    Baldo, Juliana V.; Katseff, Shira; Dronkers, Nina F.

    2014-01-01

    Background: A deficit in the ability to repeat auditory-verbal information is common among individuals with aphasia. The neural basis of this deficit has traditionally been attributed to the disconnection of left posterior and anterior language regions via damage to a white matter pathway, the arcuate fasciculus. However, a number of lesion and imaging studies have called this notion into question. Aims: The goal of this study was to identify the neural correlates of repetition and a related process, auditory-verbal short-term memory (AVSTM). Both repetition and AVSTM involve common elements such as auditory and phonological analysis and translation to speech output processes. Based on previous studies, we predicted that both repetition and AVSTM would be most dependent on posterior language regions in left temporo-parietal cortex. Methods & Procedures: We tested 84 individuals with left hemisphere lesions due to stroke on an experimental battery of repetition and AVSTM tasks. Participants were tested on word, pseudoword, and number-word repetition, as well as digit and word span tasks. Brain correlates of these processes were identified using a statistical, lesion analysis approach known as voxel-based lesion symptom mapping (VLSM). VLSM allows for a voxel-by-voxel analysis of brain areas most critical to performance on a given task, including both grey and white matter regions. Outcomes & Results: The VLSM analyses showed that left posterior temporo-parietal cortex, not the arcuate fasciculus, was most critical for repetition as well as for AVSTM. The location of maximal foci, defined as the voxels with the highest t values, varied somewhat among measures: Word and pseudoword repetition had maximal foci in the left posterior superior temporal gyrus, on the border with inferior parietal cortex, while word and digit span, as well as number-word repetition, were centered on the border between the middle temporal and superior temporal gyri and the underlying white matter
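
    The voxel-by-voxel logic of VLSM can be sketched as a mass-univariate comparison: at each voxel, patients with and without a lesion there are compared on the behavioral score. The sketch below uses random placeholder data and a plain t-test, omitting the thresholding and multiple-comparison corrections a real analysis requires.

```python
# Hedged sketch of the core VLSM step: a per-voxel t-test on behavioral scores.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(6)
n_patients, n_voxels = 84, 5000                    # 84 patients as in the study
lesion = rng.integers(0, 2, size=(n_patients, n_voxels)).astype(bool)
score = rng.normal(50, 10, size=n_patients)        # e.g., repetition accuracy

t_map = np.full(n_voxels, np.nan)
for v in range(n_voxels):
    spared, damaged = score[~lesion[:, v]], score[lesion[:, v]]
    if len(spared) > 1 and len(damaged) > 1:
        # Positive t = worse performance when the voxel is lesioned (spared - damaged)
        t_map[v], _ = ttest_ind(spared, damaged)
```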

  6. Attention to natural auditory signals.

    PubMed

    Caporello Bluvas, Emily; Gentner, Timothy Q

    2013-11-01

    The challenge of understanding how the brain processes natural signals is compounded by the fact that such signals are often tied closely to specific natural behaviors and natural environments. This added complexity is especially true for auditory communication signals that can carry information at multiple hierarchical levels, and often occur in the context of other competing communication signals. Selective attention provides a mechanism to focus processing resources on specific components of auditory signals, and simultaneously suppress responses to unwanted signals or noise. Although selective auditory attention has been well studied behaviorally, very little is known about how selective auditory attention shapes the processing of natural auditory signals, and how the mechanisms of auditory attention are implemented in single neurons or neural circuits. Here we review the role of selective attention in modulating auditory responses to complex natural stimuli in humans. We then suggest how the current understanding can be applied to the study of selective auditory attention in the context of natural signal processing at the level of single neurons and populations in animal models amenable to invasive neuroscience techniques. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".

  7. A subfemtotesla multichannel atomic magnetometer

    NASA Astrophysics Data System (ADS)

    Kominis, I. K.; Kornack, T. W.; Allred, J. C.; Romalis, M. V.

    2003-04-01

    The magnetic field is one of the most fundamental and ubiquitous physical observables, carrying information about all electromagnetic phenomena. For the past 30 years, superconducting quantum interference devices (SQUIDs) operating at 4 K have been unchallenged as ultrahigh-sensitivity magnetic field detectors, with a sensitivity reaching down to 1 fT Hz^-1/2 (1 fT = 10^-15 T). They have enabled, for example, mapping of the magnetic fields produced by the brain, and localization of the underlying electrical activity (magnetoencephalography). Atomic magnetometers, based on detection of Larmor spin precession of optically pumped atoms, have approached similar levels of sensitivity using large measurement volumes, but have much lower sensitivity in the more compact designs required for magnetic imaging applications. Higher sensitivity and spatial resolution combined with non-cryogenic operation of atomic magnetometers would enable new applications, including the possibility of mapping non-invasively the cortical modules in the brain. Here we describe a new spin-exchange relaxation-free (SERF) atomic magnetometer, and demonstrate magnetic field sensitivity of 0.54 fT Hz^-1/2 with a measurement volume of only 0.3 cm^3. Theoretical analysis shows that fundamental sensitivity limits of this device are below 0.01 fT Hz^-1/2. We also demonstrate simple multichannel operation of the magnetometer, and localization of magnetic field sources with a resolution of 2 mm.

  8. The effects of mild and severe traumatic brain injury on the auditory and visual versions of the Adjusting-Paced Serial Addition Test (Adjusting-PSAT).

    PubMed

    Tombaugh, Tom N; Stormer, Peter; Rees, Laura; Irving, Susan; Francis, Margaret

    2006-10-01

    Auditory and visual versions of the Adjusting-PSAT [Tombaugh, T. N. (1999). Administrative manual for the adjusting-paced serial addition test (Adjusting-PSAT). Ottawa, Ontario: Carleton University] were used to examine the effects of mild and severe traumatic brain injury (TBI) on information processing. The Adjusting-PSAT, a computerized modification of the original PASAT [Gronwall, D., & Sampson, H. (1974). The psychological effects of concussion. Auckland, New Zealand: Auckland University Press], systematically varied the inter-stimulus interval (ISI) by making the duration of the ISI contingent on the correctness of the response. This procedure permitted calculation of a temporal threshold measure that represented the fastest speed of digit presentation at which a person was able to process the information and provide the correct answer. Threshold values progressively declined as a function of the severity of TBI with visual thresholds significantly lower than auditory thresholds. The major importance of the current study is that the threshold measure offers a potentially more precise way of evaluating how TBI affects cognitive functioning than is achieved using the traditional PASAT and the number of correct responses. The Adjusting-PSAT offers the additional clinical advantages of eliminating the need to make a priori decisions about what ISI should be used in different clinical applications, and avoiding spuriously high levels of performance that occur when an "alternate answer" or chunking strategy is used. Unfortunately, the Adjusting-PSAT did not reduce the high level of frustration previously associated with the traditional PASAT.
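
    The adjusting logic can be sketched as a simple up-down rule on the inter-stimulus interval: shorten it after a correct response, lengthen it after an error, and read off a temporal threshold once performance stabilizes. Step sizes, limits, and the threshold estimate below are illustrative and are not the published procedure.

```python
# Hedged sketch of an adaptive-ISI rule in the spirit of the Adjusting-PSAT.
def run_adjusting_trials(responses_correct, start_isi_ms=3000, step_ms=100,
                         floor_ms=500):
    isi = start_isi_ms
    history = []
    for correct in responses_correct:
        history.append(isi)
        isi = isi - step_ms if correct else isi + step_ms   # up-down adjustment
        isi = max(isi, floor_ms)
    # One simple threshold estimate: the mean ISI over the final trials
    tail = history[-10:] if len(history) >= 10 else history
    return sum(tail) / len(tail)

# Usage with a made-up response sequence (True = correct addition)
seq = [True] * 12 + [False, True, False, True, True, False, True, False]
print(f"Estimated threshold ISI: {run_adjusting_trials(seq):.0f} ms")
```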

  9. Variable effects of click polarity on auditory brain-stem response latencies: analyses of narrow-band ABRs suggest possible explanations.

    PubMed

    Don, M; Vermiglio, A J; Ponton, C W; Eggermont, J J; Masuda, A

    1996-07-01

    The auditory brain-stem responses (ABRs) to rarefaction and condensation clicks were obtained for 12 normal-hearing subjects in quiet, and with high-pass masking at 8, 4, 2, 1, and 0.5 kHz. Derived narrow-band wave V latency differences were analyzed with respect to (1) stimulus polarity, and (2) absolute differences irrespective of polarity. The analyses revealed no significant stimulus polarity effects on latency for the derived bands. Absolute latency differences regardless of polarity tended to be greater for those derived bands having lower characteristic frequencies (CFs). However, these differences were smaller than the expected half-period of the theoretical CF. Further analyses in three additional subjects using repeated runs of the same polarity indicate that this increase in absolute latency difference with lower derived-band CF does not reflect a simple half-period change owing to polarity, but rather the increased variability in measuring the peak latency of the lower-CF derived bands. The variability is consistent with the variability of eighth-nerve PST histogram behavior observed in animal work [Kiang et al., "Discharge patterns of single fibers in the cat's auditory nerve," Research Monograph No. 35 (MIT, Cambridge, MA, 1965)]. Thus, polarity effects claimed in other ABR work using absolute values may have been affected by this variability. It appears from the current data that half-period latency shifts of wave V owing to stimulus polarity differences are not observed in derived-band responses initiated from frequency-specific regions of the cochlea.

  10. Use of Multichannel Near Infrared Spectroscopy to Study Relationships Between Brain Regions and Neurocognitive Tasks of Selective/Divided Attention and 2-Back Working Memory.

    PubMed

    Tomita, Nozomi; Imai, Shoji; Kanayama, Yusuke; Kawashima, Issaku; Kumano, Hiroaki

    2017-01-01

    While dichotic listening (DL) was originally intended to measure bottom-up selective attention, it has also become a tool for measuring top-down selective attention. This study investigated the brain regions related to top-down selective and divided attention DL tasks and a 2-back task using alphanumeric and Japanese numeric sounds. Thirty-six healthy participants underwent near-infrared spectroscopy scanning while performing a top-down selective attentional DL task, a top-down divided attentional DL task, and a 2-back task. Pearson's correlations were calculated to show relationships between oxy-Hb concentration in each brain region and the score of each cognitive task. Different brain regions were activated during the DL and 2-back tasks. Brain regions activated in the top-down selective attention DL task were the left inferior prefrontal gyrus and left pars opercularis. The left temporopolar area was activated in the top-down divided attention DL task, and the left frontopolar area and left dorsolateral prefrontal cortex were activated in the 2-back task. As further evidence for the finding that each task measured different cognitive and brain area functions, neither the percentages of correct answers for the three tasks nor the response times for the selective attentional task and the divided attentional task were correlated to one another. Thus, the DL and 2-back tasks used in this study can assess multiple areas of cognitive, brain-related dysfunction to explore their relationship to different psychiatric and neurodevelopmental disorders.

  11. Subcortical processing in auditory communication.

    PubMed

    Pannese, Alessia; Grandjean, Didier; Frühholz, Sascha

    2015-10-01

    The voice is a rich source of information, which the human brain has evolved to decode and interpret. Empirical observations have shown that the human auditory system is especially sensitive to the human voice, and that activity within the voice-sensitive regions of the primary and secondary auditory cortex is modulated by the emotional quality of the vocal signal, and may therefore subserve, with frontal regions, the cognitive ability to correctly identify the speaker's affective state. So far, the network involved in the processing of vocal affect has been mainly characterised at the cortical level. However, anatomical and functional evidence suggests that acoustic information relevant to the affective quality of the auditory signal might be processed prior to the auditory cortex. Here we review the animal and human literature on the main subcortical structures along the auditory pathway, and propose a model whereby the distinction between different types of vocal affect in auditory communication begins at very early stages of auditory processing, and relies on the analysis of individual acoustic features of the sound signal. We further suggest that this early feature-based decoding occurs at a subcortical level along the ascending auditory pathway, and provides a preliminary coarse (but fast) characterisation of the affective quality of the auditory signal before the more refined (but slower) cortical processing is completed.

  12. Origins of task-specific sensory-independent organization in the visual and auditory brain: neuroscience evidence, open questions and clinical implications.

    PubMed

    Heimler, Benedetta; Striem-Amit, Ella; Amedi, Amir

    2015-12-01

    Evidence of task-specific sensory-independent (TSSI) plasticity from blind and deaf populations has led to a better understanding of brain organization. However, the principles determining the origins of this plasticity remain unclear. We review recent data suggesting that a combination of the connectivity bias and sensitivity to task-distinctive features might account for TSSI plasticity in the sensory cortices as a whole, from the higher-order occipital/temporal cortices to the primary sensory cortices. We discuss current theories and evidence, open questions and related predictions. Finally, given the rapid progress in visual and auditory restoration techniques, we address the crucial need to develop effective rehabilitation approaches for sensory recovery.

  13. Geometrical principal component analysis of planar-segments of the three-channel Lissajous' trajectory of human auditory brain stem evoked potentials.

    PubMed

    Pratt, H; Har'el, Z; Golos, E

    1986-05-01

    Three-Channel Lissajous' Trajectories (3CLTs) of Auditory Brain Stem Evoked Potentials (ABEP) were obtained from 15 normal humans. Planar-segments of 3CLT were identified and the orientations of the first two geometrical principal components, which interact to produce the planar-segments, were calculated. Each principal component's orientation in voltage space was quantified by its coefficients (A, B and C). Intersubject variability of these orientations was comparable to the variability of plane orientations. The principal components of planar-segments can indicate the type of generator activity that is involved in the formation of planar-segments. The results of this analysis indicate that planarity of each 3CLT component is produced by the interaction of simultaneous multiple generators, or by a single synchronous generator which changes its orientation. The coefficients of these principal components may complement plane coefficients as quantitative indices of 3CLT of ABEP.
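
    The geometrical principal component analysis described above amounts to an eigendecomposition of the 3 x 3 covariance of the voltage trajectory within a planar segment: the first two eigenvectors span the plane, and their direction cosines give the coefficients (A, B and C) in voltage space. A minimal sketch under those assumptions, using a synthetic near-planar trajectory in place of real ABEP data:

```python
# Hedged sketch: principal components of a planar segment of a three-channel
# Lissajous trajectory. The synthetic trajectory stands in for real ABEP data.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 2 * np.pi, 200)
# Synthetic near-planar segment in 3-D voltage space (channels x, y, z).
segment = np.column_stack([np.cos(t), np.sin(t), 0.05 * rng.normal(size=t.size)])

centered = segment - segment.mean(axis=0)
cov = np.cov(centered.T)
eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]
pc1, pc2 = eigvecs[:, order[0]], eigvecs[:, order[1]]

# Coefficients (A, B, C) of each principal component = its direction cosines.
print("PC1 coefficients (A, B, C):", np.round(pc1, 3))
print("PC2 coefficients (A, B, C):", np.round(pc2, 3))
# The plane normal (cross product of PC1 and PC2) characterises the planarity.
print("plane normal:", np.round(np.cross(pc1, pc2), 3))
```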

  14. Central auditory imperception.

    PubMed

    Snow, J B; Rintelmann, W F; Miller, J M; Konkle, D F

    1977-09-01

    The development of clinically applicable techniques for the evaluation of hearing impairment caused by lesions of the central auditory pathways has increased clinical interest in the anatomy and physiology of these pathways. A conceptualization of present understanding of the anatomy and physiology of the central auditory pathways is presented. Clinical tests based on reduction of redundancy of the speech message, degradation of speech and binaural interactions are presented. Specifically, performance-intensity functions, filtered speech tests, competing message tests and time-compressed speech tests are presented, with emphasis on our experience with time-compressed speech tests. With proper use of these tests, not only can central auditory impairments be detected, but brain stem lesions can be distinguished from cortical lesions.

  15. The Drosophila Auditory System

    PubMed Central

    Boekhoff-Falk, Grace; Eberl, Daniel F.

    2013-01-01

    Development of a functional auditory system in Drosophila requires specification and differentiation of the chordotonal sensilla of Johnston’s organ (JO) in the antenna, correct axonal targeting to the antennal mechanosensory and motor center (AMMC) in the brain, and synaptic connections to neurons in the downstream circuit. Chordotonal development in JO is functionally complicated by structural, molecular and functional diversity that is not yet fully understood, and construction of the auditory neural circuitry is only beginning to unfold. Here we describe our current understanding of developmental and molecular mechanisms that generate the exquisite functions of the Drosophila auditory system, emphasizing recent progress and highlighting important new questions arising from research on this remarkable sensory system. PMID:24719289

  16. Auditory Imagination.

    ERIC Educational Resources Information Center

    Croft, Martyn

    Auditory imagination is used in this paper to describe a number of issues and activities related to sound and having to do with listening, thinking, recalling, imagining, reshaping, creating, and uttering sounds and words. Examples of auditory imagination in religious and literary works are cited that indicate a belief in an imagined, expected, or…

  17. Site of auditory plasticity in the brain stem (VLVp) of the owl revealed by early monaural occlusion.

    PubMed

    Mogdans, J; Knudsen, E I

    1994-12-01

    1. The optic tectum of the barn owl contains a physiological map of interaural level difference (ILD) that underlies, in part, its map of auditory space. Monaural occlusion shifts the range of ILDs experienced by an animal and alters the correspondence of ILDs with source locations. Chronic monaural occlusion during development induces an adaptive shift in the tectal ILD map that compensates for the effects of the earplug. The data presented in this study indicate that one site of plasticity underlying this adaptive adjustment is in the posterior division of the ventral nucleus of the lateral lemniscus (VLVp), the first site of ILD comparison in the auditory pathway. 2. Single and multiple unit sites were recorded in the optic tecta and VLVps of ketamine-anesthetized owls. The owls were raised from 4 wk of age with one ear occluded with an earplug. Auditory testing, using digitally synthesized dichotic stimuli, was carried out 8-16 wk later with the earplug removed. The adaptive adjustment in ILD coding in each bird was quantified as the shift from normal ILD tuning measured in the optic tectum. Evidence of adaptive adjustment in the VLVp was based on statistical differences between the VLVps ipsilateral and contralateral to the occluded ear in the sensitivity of units to excitatory-ear and inhibitory-ear stimulation. 3. The balance of excitatory to inhibitory influences on VLVp units was shifted in the adaptive direction in six out of eight owls. In three of these owls, adaptive differences in inhibition, but not in excitation, were found. For this group of owls, the patterns of response properties across the two VLVps can only be accounted for by plasticity in the VLVp. For the other three owls, the possibility that the difference between the two VLVps resulted from damage to one of the VLVps could not be eliminated, and for one of these, plasticity at a more peripheral site (in the cochlea or cochlear nucleus) could also explain the data. In the remaining two

  18. Multichannel interval timer (MINT)

    SciTech Connect

    Kimball, K.B.

    1982-06-01

    A prototype Multichannel INterval Timer (MINT) has been built for measuring signal Time of Arrival (TOA) from sensors placed in blast environments. The MINT is intended to reduce the space, equipment costs, and data reduction efforts associated with traditional analog TOA recording methods, making it more practical to field the large arrays of TOA sensors required to characterize blast environments. This document describes the MINT design features, provides the information required for installing and operating the system, and presents proposed improvements for the next generation system.

  19. Fractional channel multichannel analyzer

    DOEpatents

    Brackenbush, Larry W.; Anderson, Gordon A.

    1994-01-01

    A multichannel analyzer incorporating the features of the present invention obtains the effect of fractional channels thus greatly reducing the number of actual channels necessary to record complex line spectra. This is accomplished by using an analog-to-digital converter in the asynchronous mode, i.e., the gate pulse from the pulse height-to-pulse width converter is not synchronized with the signal from a clock oscillator. This saves power and reduces the number of components required on the board to achieve the effect of radically expanding the number of channels without changing the circuit board.

  20. Fractional channel multichannel analyzer

    DOEpatents

    Brackenbush, L.W.; Anderson, G.A.

    1994-08-23

    A multichannel analyzer incorporating the features of the present invention obtains the effect of fractional channels thus greatly reducing the number of actual channels necessary to record complex line spectra. This is accomplished by using an analog-to-digital converter in the asynchronous mode, i.e., the gate pulse from the pulse height-to-pulse width converter is not synchronized with the signal from a clock oscillator. This saves power and reduces the number of components required on the board to achieve the effect of radically expanding the number of channels without changing the circuit board. 9 figs.

  1. Impact of Repetitive Transcranial Magnetic Stimulation (rTMS) on Brain Functional Marker of Auditory Hallucinations in Schizophrenia Patients

    PubMed Central

    Maïza, Olivier; Hervé, Pierre-Yve; Etard, Olivier; Razafimandimby, Annick; Montagne-Larmurier, Aurélie; Dollfus, Sonia

    2013-01-01

    Several cross-sectional functional Magnetic Resonance Imaging (fMRI) studies reported a negative correlation between auditory verbal hallucination (AVH) severity and amplitude of the activations during language tasks. The present study assessed the time course of this correlation and its possible structural underpinnings by combining structural, functional MRI and repetitive Transcranial Magnetic Stimulation (rTMS). Methods: Nine schizophrenia patients with AVH (evaluated with the Auditory Hallucination Rating scale; AHRS) and nine healthy participants underwent two sessions of an fMRI speech listening paradigm. Meanwhile, patients received high frequency (20 Hz) rTMS. Results: Before rTMS, activations were negatively correlated with AHRS in a left posterior superior temporal sulcus (pSTS) cluster, considered henceforward as a functional region of interest (fROI). After rTMS, activations in this fROI no longer correlated with AHRS. This decoupling was explained by a significant decrease of AHRS scores after rTMS that contrasted with a relative stability of cerebral activations. A voxel-based-morphometry analysis evidenced a cluster of the left pSTS where grey matter volume negatively correlated with AHRS before rTMS and positively correlated with activations in the fROI at both sessions. Conclusion: rTMS decreases the severity of AVH, thereby modifying the functional correlate of AVH that is underlain by grey matter abnormalities. PMID:24961421

  2. Sex, acceleration, brain imaging, and rhesus monkeys: Converging evidence for an evolutionary bias for looming auditory motion

    NASA Astrophysics Data System (ADS)

    Neuhoff, John G.

    2003-04-01

    Increasing acoustic intensity is a primary cue to looming auditory motion. Perceptual overestimation of increasing intensity could provide an evolutionary selective advantage by specifying that an approaching sound source is closer than actual, thus affording advanced warning and more time than expected to prepare for the arrival of the source. Here, multiple lines of converging evidence for this evolutionary hypothesis are presented. First, it is shown that intensity change specifying accelerating source approach changes in loudness more than equivalent intensity change specifying decelerating source approach. Second, consistent with evolutionary hunter-gatherer theories of sex-specific spatial abilities, it is shown that females have a significantly larger bias for rising intensity than males. Third, using functional magnetic resonance imaging in conjunction with approaching and receding auditory motion, it is shown that approaching sources preferentially activate a specific neural network responsible for attention allocation, motor planning, and translating perception into action. Finally, it is shown that rhesus monkeys also exhibit a rising intensity bias by orienting longer to looming tones than to receding tones. Together these results illustrate an adaptive perceptual bias that has evolved because it provides a selective advantage in processing looming acoustic sources. [Work supported by NSF and CDC.]

  3. Early experience and domestication affect auditory discrimination learning, open field behaviour and brain size in wild Mongolian gerbils and domesticated laboratory gerbils (Meriones unguiculatus forma domestica).

    PubMed

    Stuermer, Ingo W; Wetzel, Wolfram

    2006-10-02

    The influence of early experience and strain differences on auditory discrimination learning, open field behaviour and brain size was investigated in wild-type Mongolian gerbils (strain Ugoe:MU95) raised in the wild (wild F-0) or in the laboratory (wild F-1) and in domesticated Laboratory Gerbils (LAB). Adult males were conditioned for 10 days in a shuttle box go/no-go paradigm to discriminate two frequency-modulated tones. Significant learning was established within 5 days in wild F-0 and within 3 days in wild F-1 and LAB. Spontaneous jumps in the shuttle box (inter-trial crossings) were frequently seen in wild F-0 and F-1, but rarely in LAB. All groups exhibited nearly the same ability to remember after 2 weeks without training. In the open field test applied on 5 consecutive days, no differences in locomotion patterns and inner field preferences were found. Rearing frequency decreased over 5 days in wild gerbils. Running distances (4-6m/min) were similar in wild F-0 and LAB, but higher in wild F-1. The ratio of brain size to body weight did not differ between wild F-0 and F-1, but was 17.1% lower in LAB. Correspondingly high brain weights in wild F-1 and F-0 support our domestication hypothesis and negate any serious effect of early experience or captivity on brain size in Mongolian gerbils. In contrast, wild F-1 raised in the laboratory show a rapid improvement in learning performance, indicating that early experience rather than genetic differences between strains affects shuttle box discrimination learning in gerbils.

  4. Time-resolved multi-channel optical system for assessment of brain oxygenation and perfusion by monitoring of diffuse reflectance and fluorescence

    NASA Astrophysics Data System (ADS)

    Milej, D.; Gerega, A.; Kacprzak, M.; Sawosz, P.; Weigl, W.; Maniewski, R.; Liebert, A.

    2014-03-01

    Time-resolved near-infrared spectroscopy is an optical technique which can be applied in tissue oxygenation assessment. In the last decade this method has been extensively tested as a potential clinical tool for noninvasive human brain function monitoring and imaging. In the present paper we describe the construction of an instrument which allows for: (i) estimation of changes in brain tissue oxygenation using a two-wavelength spectroscopy approach and (ii) brain perfusion assessment with the use of single-wavelength reflectometry or fluorescence measurements combined with ICG-bolus tracking. A signal processing algorithm based on statistical moments of measured distributions of times of flight of photons is implemented. This data analysis method allows for separation of signals originating from extra- and intracerebral tissue compartments. In this paper we present a compact and easily reconfigurable system which can be applied in different types of time-resolved experiments: two-wavelength measurements at 687 and 832 nm, single-wavelength reflectance measurements at 760 nm (which is at the maximum of the ICG absorption spectrum) or fluorescence measurements with excitation at 760 nm. Details of the instrument construction and results of its technical tests are shown. Furthermore, results of in-vivo measurements obtained for various modes of operation of the system are presented.
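
    The moment-based analysis mentioned above computes, for each measured distribution of times of flight of photons (DTOF), the total photon count, the mean time of flight, and the variance; later moments weight late-arriving photons, which have travelled deeper, more strongly, which is what supports the separation of extra- and intracerebral contributions. A minimal sketch of the moment calculation on a synthetic DTOF histogram; the bin width and histogram shape are illustrative assumptions:

```python
# Hedged sketch: statistical moments of a photon distribution of times of
# flight (DTOF), as used in time-resolved NIRS analysis. Synthetic histogram.
import numpy as np

bin_width_ps = 10.0
t = np.arange(0, 5000, bin_width_ps)                   # time bins in picoseconds
dtof = 1e4 * (t / 800.0) ** 2 * np.exp(-t / 800.0)     # synthetic DTOF shape

n_total = dtof.sum()                                   # 0th moment: total photon count
mean_tof = (t * dtof).sum() / n_total                  # 1st moment: mean time of flight
variance = ((t - mean_tof) ** 2 * dtof).sum() / n_total  # 2nd central moment

print(f"N = {n_total:.0f} photons, <t> = {mean_tof:.0f} ps, V = {variance:.0f} ps^2")
```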

  5. Longer storage of auditory than of visual information in the rabbit brain: evidence from dorsal hippocampal electrophysiology.

    PubMed

    Astikainen, Piia; Ruusuvirta, Timo; Korhonen, Tapani

    2005-01-01

    Whereas sensory memory in humans has been found to store auditory information for a longer time than visual information, it is unclear whether this is also the case in other species. We recorded hippocampal event-related potentials (ERPs) in awake rabbits exposed to occasional changes in a repeated 50-ms acoustic (1000 versus 2000 Hz) and visual (vertical versus horizontal orientation) stimulus. Three intervals (500, 1500, or 3000 ms) between stimulus repetitions were applied. Whereas acoustic changes significantly affected ERPs with the repetition intervals of 500 and 1500 ms, visual changes did so only with the repetition interval of 500 ms. Our finding, thus, suggests a similarity in sensory processing abilities between human and non-human mammals.

  6. Effect Of Electromagnetic Waves Emitted From Mobile Phone On Brain Stem Auditory Evoked Potential In Adult Males.

    PubMed

    Singh, K

    2015-01-01

    The mobile phone (MP) is a commonly used communication tool, and the electromagnetic waves (EMWs) it emits may pose potential health hazards. This study therefore examined the effect of EMWs emitted from a mobile phone on the brainstem auditory evoked potential (BAEP) in male subjects aged 20-40 years. BAEPs were recorded using the standard 10-20 system of electrode placement and click stimuli of specified intensity, duration and frequency. The right ear was exposed to EMWs emitted from the MP for about 10 min. Comparing recordings before and after exposure in the right ear (found to be the dominant ear), there was a significant increase in the latencies of waves II and III (p < 0.05) and wave V (p < 0.001), an increase in the amplitude of wave I-Ia (p < 0.05), and a decrease in the III-V interpeak latency (IPL) (p < 0.05) after exposure. No significant change was found in the BAEP waves of the left ear before versus after exposure. Comparing the right ear (routinely exposed, being the dominant ear) and the left ear (not exposed) before exposure, the III-V IPL and the V-Va amplitude were greater (p < 0.001) in the right ear, whereas the latencies of waves III and IV were greater (p < 0.001) in the left ear. After exposure, the V-Va amplitude was greater (p < 0.05) in the right ear than in the left ear. In conclusion, EMWs emitted from mobile phones affect the auditory evoked potential.

  7. Preferred EEG brain states at stimulus onset in a fixed interstimulus interval equiprobable auditory Go/NoGo task: a definitive study.

    PubMed

    Barry, Robert J; De Blasio, Frances M; De Pascalis, Vilfredo; Karamacoska, Diana

    2014-10-01

    This study examined the occurrence of preferred EEG phase states at stimulus onset in an equiprobable auditory Go/NoGo task with a fixed interstimulus interval, and their effects on the resultant event-related potentials (ERPs). We used a sliding short-time FFT decomposition of the EEG at Cz for each trial to assess prestimulus EEG activity in the delta, theta, alpha and beta bands. We determined the phase of each 2 Hz narrow-band contributing to these four broad bands at 125 ms before each stimulus onset, and for the first time, avoided contamination from poststimulus EEG activity. This phase value was extrapolated 125 ms to obtain the phase at stimulus onset, combined into the broad-band phase, and used to sort trials into four phase groups for each of the four broad bands. For each band, ERPs were derived for each phase from the raw EEG activity at 19 sites. Data sets from each band were separately decomposed using temporal Principal Components Analyses with unrestricted VARIMAX rotation to extract N1-1, PN, P2, P3, SW and LP components. Each component was analysed as a function of EEG phase at stimulus onset in the context of a simple conceptualisation of orthogonal phase effects (cortical negativity vs. positivity, negative driving vs. positive driving, waxing vs. waning). The predicted non-random occurrence of phase-defined brain states was confirmed. The preferred states of negativity, negative driving, and waxing were each associated with more efficient stimulus processing, as reflected in amplitude differences of the components. The present results confirm the existence of preferred brain states and their impact on the efficiency of brain dynamics in perceptual and cognitive processing.
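
    The phase-sorting procedure described above can be illustrated as follows: take a short FFT of a prestimulus window ending 125 ms before onset, read off the phase of each narrow band, and extrapolate that phase forward by 125 ms at the band's centre frequency before grouping trials. A minimal single-channel sketch under an assumed window length and band edges; the EEG here is synthetic and this is not the authors' analysis code:

```python
# Hedged sketch: estimate narrow-band EEG phase 125 ms before stimulus onset
# and extrapolate it to onset. Window length and band edges are illustrative.
import numpy as np

fs = 250.0
rng = np.random.default_rng(3)
eeg = rng.normal(0, 1, int(10 * fs))            # 10 s of synthetic single-channel EEG
onset_idx = int(8 * fs)                         # assumed stimulus onset sample

win_len = int(0.5 * fs)                         # 500-ms FFT window (assumption)
lag = 0.125                                     # phase estimated 125 ms pre-onset
end = onset_idx - int(lag * fs)
segment = eeg[end - win_len:end] * np.hanning(win_len)

spectrum = np.fft.rfft(segment)
freqs = np.fft.rfftfreq(win_len, 1 / fs)

def phase_at_onset(f_low, f_high):
    """Mean narrow-band phase at the estimation point, extrapolated to onset."""
    band = (freqs >= f_low) & (freqs < f_high)
    centre = 0.5 * (f_low + f_high)
    # Circular mean of the narrow-band phases, then advance by 2*pi*f*lag.
    mean_phase = np.angle(np.sum(np.exp(1j * np.angle(spectrum[band]))))
    return (mean_phase + 2 * np.pi * centre * lag) % (2 * np.pi)

for name, (lo, hi) in {"delta": (1, 4), "theta": (4, 8),
                       "alpha": (8, 13), "beta": (13, 30)}.items():
    print(f"{name}: phase at onset = {phase_at_onset(lo, hi):.2f} rad")
```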

  8. Maturation of preterm newborn brains: a fMRI-DTI study of auditory processing of linguistic stimuli and white matter development.

    PubMed

    Baldoli, Cristina; Scola, Elisa; Della Rosa, Pasquale Antony; Pontesilli, Silvia; Longaretti, Roberta; Poloniato, Antonella; Scotti, Roberta; Blasi, Valeria; Cirillo, Sara; Iadanza, Antonella; Rovelli, Rosanna; Barera, Graziano; Scifo, Paola

    2015-11-01

    To evaluate brain development longitudinally in premature infants without abnormalities as compared to healthy full-term newborns, we assessed fMRI brain activity patterns in response to linguistic stimuli and white matter structural development focusing on language-related fibres. A total sample of 29 preterm newborns and 26 at term control newborns underwent both fMRI and DTI. The Griffiths test was performed at 6 months of corrected age to assess development. Auditory fMRI data were analysed in 17 preterm newborns at three time points [34, 41 and 44 weeks of post menstrual age (wPMA)] and in 15 controls, at term. Analysis showed a distinctive pattern of cortical activation in preterm newborns up to 29 wPMA moving from early prevalent left temporal and supramarginal area activation in the preterm period, to a bilateral temporal and fronto-opercular activation in the at term equivalent period and to a more fine-grained left pattern of activity at 44 wPMA. At term controls showed instead greater bilateral posterior thalamic activation. The different pattern of brain activity associated with preterm newborns mirrors their white matter maturation delay in peripheral regions of the fibres and thalamo-cortical radiations in subcortical areas of both hemispheres, pointing to different transient thalamo-cortical development due to prematurity. Evidence for functional thalamic activation and more mature subcortical tracts, including thalamic radiations, may represent the substantial gap between preterm and at term infants. The transition between bilateral temporal activations at term age and leftward activations at 44 weeks of PMA is correlated with better neuropsychological results in the Griffiths test.

  9. Software Configurable Multichannel Transceiver

    NASA Technical Reports Server (NTRS)

    Freudinger, Lawrence C.; Cornelius, Harold; Hickling, Ron; Brooks, Walter

    2009-01-01

    Emerging test instrumentation and test scenarios increasingly require network communication to manage complexity. Adapting wireless communication infrastructure to accommodate challenging testing needs can benefit from reconfigurable radio technology. A fundamental requirement for a software-definable radio system is independence from carrier frequencies, one of the radio components that to date has seen only limited progress toward programmability. This paper overviews an ongoing project to validate the viability of a promising chipset that performs conversion of radio frequency (RF) signals directly into digital data for the wireless receiver and, for the transmitter, converts digital data into RF signals. The Software Configurable Multichannel Transceiver (SCMT) enables four transmitters and four receivers in a single unit the size of a commodity disk drive, programmable for any frequency band between 1 MHz and 6 GHz.

  10. Multichannel optical sensing device

    DOEpatents

    Selkowitz, S.E.

    1985-08-16

    A multichannel optical sensing device is disclosed, for measuring the outdoor sky luminance or illuminance or the luminance or illuminance distribution in a room, comprising a plurality of light receptors, an optical shutter matrix including a plurality of liquid crystal optical shutter elements operable by electrical control signals between light transmitting and light stopping conditions, fiber optical elements connected between the receptors and the shutter elements, a microprocessor based programmable control unit for selectively supplying control signals to the optical shutter elements in a programmable sequence, a photodetector including an optical integrating spherical chamber having an input port for receiving the light from the shutter matrix and at least one detector element in the spherical chamber for producing output signals corresponding to the light, and output units for utilizing the output signals including a storage unit having a control connection to the microprocessor based programmable control unit for storing the output signals under the sequence control of the programmable control unit.

  11. Multichannel optical sensing device

    DOEpatents

    Selkowitz, Stephen E.

    1990-01-01

    A multichannel optical sensing device is disclosed, for measuring the outdoor sky luminance or illuminance or the luminance or illuminance distribution in a room, comprising a plurality of light receptors, an optical shutter matrix including a plurality of liquid crystal optical shutter elements operable by electrical control signals between light transmitting and light stopping conditions, fiber optic elements connected between the receptors and the shutter elements, a microprocessor based programmable control unit for selectively supplying control signals to the optical shutter elements in a programmable sequence, a photodetector including an optical integrating spherical chamber having an input port for receiving the light from the shutter matrix and at least one detector element in the spherical chamber for producing output signals corresponding to the light, and output units for utilizing the output signals including a storage unit having a control connection to the microprocessor based programmable control unit for storing the output signals under the sequence control of the programmable control unit.

  12. Spatiotemporal Analysis of Multichannel EEG: CARTOOL

    PubMed Central

    Brunet, Denis; Murray, Micah M.; Michel, Christoph M.

    2011-01-01

    This paper describes methods to analyze the brain's electric fields recorded with multichannel Electroencephalogram (EEG) and demonstrates their implementation in the software CARTOOL. It focuses on the analysis of the spatial properties of these fields and on quantitative assessment of changes of field topographies across time, experimental conditions, or populations. Topographic analyses are advantageous because they are reference independent and thus render statistically unambiguous results. Neurophysiologically, differences in topography directly indicate changes in the configuration of the active neuronal sources in the brain. We describe global measures of field strength and field similarities, temporal segmentation based on topographic variations, topographic analysis in the frequency domain, topographic statistical analysis, and source imaging based on distributed inverse solutions. All analysis methods are implemented in a freely available academic software package called CARTOOL. Besides providing these analysis tools, CARTOOL is particularly designed to visualize the data and the analysis results using 3-dimensional display routines that allow rapid manipulation and animation of 3D images. CARTOOL therefore is a helpful tool for researchers as well as for clinicians to interpret multichannel EEG and evoked potentials in a global, comprehensive, and unambiguous way. PMID:21253358

  13. Spatiotemporal analysis of multichannel EEG: CARTOOL.

    PubMed

    Brunet, Denis; Murray, Micah M; Michel, Christoph M

    2011-01-01

    This paper describes methods to analyze the brain's electric fields recorded with multichannel Electroencephalogram (EEG) and demonstrates their implementation in the software CARTOOL. It focuses on the analysis of the spatial properties of these fields and on quantitative assessment of changes of field topographies across time, experimental conditions, or populations. Topographic analyses are advantageous because they are reference independent and thus render statistically unambiguous results. Neurophysiologically, differences in topography directly indicate changes in the configuration of the active neuronal sources in the brain. We describe global measures of field strength and field similarities, temporal segmentation based on topographic variations, topographic analysis in the frequency domain, topographic statistical analysis, and source imaging based on distributed inverse solutions. All analysis methods are implemented in a freely available academic software package called CARTOOL. Besides providing these analysis tools, CARTOOL is particularly designed to visualize the data and the analysis results using 3-dimensional display routines that allow rapid manipulation and animation of 3D images. CARTOOL therefore is a helpful tool for researchers as well as for clinicians to interpret multichannel EEG and evoked potentials in a global, comprehensive, and unambiguous way.
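
    Two of the reference-independent global measures mentioned in these records are conventionally computed as the Global Field Power (GFP), the spatial standard deviation across electrodes at each time point, and the Global Map Dissimilarity (GMD), the GFP of the difference between two average-referenced, GFP-normalised maps. A minimal sketch of both quantities on a simulated channels-by-time array; this illustrates the standard formulas and is not CARTOOL code:

```python
# Hedged sketch: Global Field Power (GFP) and Global Map Dissimilarity (GMD)
# for multichannel EEG/ERP data. The data matrix here is simulated.
import numpy as np

rng = np.random.default_rng(4)
n_channels, n_samples = 19, 300
erp = rng.normal(0, 1, (n_channels, n_samples))     # channels x time

def gfp(data):
    """Spatial standard deviation across electrodes at each time point."""
    avg_ref = data - data.mean(axis=0, keepdims=True)
    return np.sqrt((avg_ref ** 2).mean(axis=0))

def gmd(map_a, map_b):
    """Dissimilarity of two scalp maps after average-referencing and
    GFP-normalisation (0 = identical topography, 2 = inverted)."""
    def norm(m):
        m = m - m.mean()
        return m / np.sqrt((m ** 2).mean())
    return np.sqrt(((norm(map_a) - norm(map_b)) ** 2).mean())

print("GFP peak at sample", int(np.argmax(gfp(erp))))
print("GMD between first and last map:", round(gmd(erp[:, 0], erp[:, -1]), 3))
```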

  14. Multichannel electrochemical microbial detection unit

    NASA Technical Reports Server (NTRS)

    Wilkins, J. R.; Young, R. N.; Boykin, E. H.

    1978-01-01

    The paper describes the design and capabilities of a compact multichannel electrochemical unit devised to detect bacteria and automatically indicate their detection time. By connecting this unit to a strip-chart recorder, a permanent record is obtained of the end points and growth curves for each of eight channels. The experimental setup utilizing the multichannel unit consists of a test tube (25 by 150 mm) containing a combination redox electrode plus 18 ml of lauryl tryptose broth and positioned in a 35 °C water bath. Leads from the electrodes are connected to the multichannel unit, which in turn is connected to a strip-chart recorder. After addition of 2.0 ml of inoculum to the test tubes, depression of the push-button starter activates the electronics, timer, and indicator light for each channel. The multichannel unit is employed to test tenfold dilutions of various members of the Enterobacteriaceae group, and a typical dose-response curve is presented.

  15. Cross-modal recruitment of primary visual cortex by auditory stimuli in the nonhuman primate brain: a molecular mapping study.

    PubMed

    Hirst, Priscilla; Javadi Khomami, Pasha; Gharat, Amol; Zangenehpour, Shahin

    2012-01-01

    Recent studies suggest that exposure to only one component of audiovisual events can lead to cross-modal cortical activation. However, it is not certain whether such crossmodal recruitment can occur in the absence of explicit conditioning, semantic factors, or long-term associations. A recent study demonstrated that crossmodal cortical recruitment can occur even after a brief exposure to bimodal stimuli without semantic association. In addition, the authors showed that the primary visual cortex is under such crossmodal influence. In the present study, we used molecular activity mapping of the immediate early gene zif268. We found that animals, which had previously been exposed to a combination of auditory and visual stimuli, showed an increased number of active neurons in the primary visual cortex when presented with sounds alone. As previously implied, this crossmodal activation appears to be the result of implicit associations of the two stimuli, likely driven by their spatiotemporal characteristics; it was observed after a relatively short period of exposure (~45 min) and lasted for a relatively long period after the initial exposure (~1 day). These results suggest that the previously reported findings may be directly rooted in the increased activity of the neurons occupying the primary visual cortex.

  16. Multichannel signal enhancement

    DOEpatents

    Lewis, Paul S.

    1990-01-01

    A mixed adaptive filter is formulated for the signal processing problem where desired a priori signal information is not available. The formulation generates a least squares problem which enables the filter output to be calculated directly from an input data matrix. In one embodiment, a folded processor array enables bidirectional data flow to solve the recursive problem by back substitution without global communications. In another embodiment, a balanced processor array solves the recursive problem by forward elimination through the array. In a particular application to magnetoencephalography, the mixed adaptive filter enables an evoked response to an auditory stimulus to be identified from only a single trial.

  17. Sampled sinusoidal stimulation profile and multichannel fuzzy logic classification for monitor-based phase-coded SSVEP brain-computer interfacing

    NASA Astrophysics Data System (ADS)

    Manyakov, Nikolay V.; Chumerin, Nikolay; Robben, Arne; Combaz, Adrien; van Vliet, Marijn; Van Hulle, Marc M.

    2013-06-01

    Objective. The performance and usability of brain-computer interfaces (BCIs) can be improved by new paradigms, stimulation methods, decoding strategies, sensor technology etc. In this study we introduce new stimulation and decoding methods for electroencephalogram (EEG)-based BCIs that have targets flickering at the same frequency but with different phases. Approach. The phase information is estimated from the EEG data, and used for target command decoding. All visual stimulation is done on a conventional (60-Hz) LCD screen. Instead of the ‘on/off’ visual stimulation, commonly used in phase-coded BCI, we propose one based on a sampled sinusoidal intensity profile. In order to fully exploit the circular nature of the evoked phase response, we introduce a filter feature selection procedure based on circular statistics and propose a fuzzy logic classifier designed to cope with circular information from multiple channels jointly. Main results. We show that the proposed visual stimulation enables us not only to encode more commands under the same conditions, but also to obtain EEG responses with a more stable phase. We also demonstrate that the proposed decoding approach outperforms existing ones, especially for the short time windows used. Significance. The work presented here shows how to overcome some of the limitations of screen-based visual stimulation. The superiority of the proposed decoding approach demonstrates the importance of preserving the circularity of the data during the decoding stage.
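
    The circular decoding idea can be sketched as follows: estimate the EEG phase at the common stimulation frequency, then assign the trial to the target whose phase template lies at the smallest circular distance. The paper's multichannel fuzzy logic classifier is considerably more elaborate; the sketch below, with an assumed sampling rate, stimulation frequency, phase templates and a synthetic single-channel signal, only illustrates the circular nature of the decision variable:

```python
# Hedged sketch: phase-coded SSVEP decoding by circular distance to target
# phase templates. All parameters and the EEG segment are illustrative.
import numpy as np

fs = 250.0
f_stim = 12.0                                    # all targets flicker at this frequency
n_targets = 4
templates = 2 * np.pi * np.arange(n_targets) / n_targets   # assumed target phase lags

rng = np.random.default_rng(5)
t = np.arange(0, 2.0, 1 / fs)
true_target = 2
eeg = np.sin(2 * np.pi * f_stim * t + templates[true_target]) \
      + 0.5 * rng.normal(size=t.size)

# Least-squares phase estimate at the stimulation frequency:
# eeg ~ sin(2*pi*f*t + phi)  =>  phi = atan2(sum(eeg*cos), sum(eeg*sin)).
s = np.sum(eeg * np.sin(2 * np.pi * f_stim * t))
c = np.sum(eeg * np.cos(2 * np.pi * f_stim * t))
measured_phase = np.arctan2(c, s)

def circ_dist(a, b):
    """Absolute circular distance between two angles (radians)."""
    return np.abs(np.angle(np.exp(1j * (a - b))))

decoded = int(np.argmin([circ_dist(measured_phase, p) for p in templates]))
print("decoded target:", decoded, "(true target:", true_target, ")")
```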

  18. Digital restoration of multichannel images

    NASA Technical Reports Server (NTRS)

    Galatsanos, Nikolas P.; Chin, Roland T.

    1989-01-01

    The Wiener solution of a multichannel restoration scheme is presented. Using matrix diagonalization and block-Toeplitz to block-circulant approximation, the inversion of the multichannel, linear space-invariant imaging system becomes feasible by utilizing a fast iterative matrix inversion procedure. The restoration uses both the within-channel (spatial) and between-channel (spectral) correlation; hence, the restored result is a better estimate than that produced by independent channel restoration. Simulations are also presented.
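
    Under the block-circulant (circular convolution) approximation described above, the restoration decouples into an independent K x K Wiener solve at every spatial frequency, where K is the number of channels; the cross-channel signal covariance is what lets the filter exploit spectral (between-channel) correlation in addition to spatial correlation. The sketch below is a minimal per-frequency multichannel Wiener filter in that spirit; the blur kernels, noise level, assumed flat-spectrum signal covariance, and the tiny image size are illustrative, not taken from the paper:

```python
# Hedged sketch: frequency-domain multichannel Wiener restoration under a
# circular-convolution model. Blur kernels, noise level, and the assumed
# cross-channel signal covariance are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(6)
K, N = 3, 64                                   # channels, image size (N x N)
truth = rng.normal(0, 1, (K, N, N))
truth[1] = 0.8 * truth[0] + 0.2 * truth[1]     # introduce between-channel correlation
truth[2] = 0.6 * truth[0] + 0.4 * truth[2]

# Per-channel Gaussian-like blur defined directly in the frequency domain.
fx = np.fft.fftfreq(N)[:, None]
fy = np.fft.fftfreq(N)[None, :]
H = np.stack([np.exp(-(fx ** 2 + fy ** 2) / (2 * s ** 2)) for s in (0.15, 0.10, 0.20)])

noise_var = 0.01
Y = H * np.fft.fft2(truth) + np.fft.fft2(rng.normal(0, np.sqrt(noise_var), (K, N, N)))

# Assumed cross-channel signal covariance (flat spectrum) and white noise covariance.
Rs = np.cov(truth.reshape(K, -1)) * N * N
Rn = noise_var * N * N * np.eye(K)

restored = np.empty_like(Y)
for i in range(N):                              # per-frequency K x K Wiener solve
    for j in range(N):
        Hf = np.diag(H[:, i, j])
        G = Rs @ Hf.conj().T @ np.linalg.inv(Hf @ Rs @ Hf.conj().T + Rn)
        restored[:, i, j] = G @ Y[:, i, j]

estimate = np.real(np.fft.ifft2(restored))
print("restoration MSE:", round(float(np.mean((estimate - truth) ** 2)), 4))
```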

  19. Separating heart and brain: on the reduction of physiological noise from multichannel functional near-infrared spectroscopy (fNIRS) signals

    NASA Astrophysics Data System (ADS)

    Bauernfeind, G.; Wriessnegger, S. C.; Daly, I.; Müller-Putz, G. R.

    2014-10-01

    Objective. Functional near-infrared spectroscopy (fNIRS) is an emerging technique for the in vivo assessment of functional activity of the cerebral cortex as well as in the field of brain-computer interface (BCI) research. A common challenge for the utilization of fNIRS in these areas is a stable and reliable investigation of the spatio-temporal hemodynamic patterns. However, the recorded patterns may be influenced and superimposed by signals generated from physiological processes, resulting in an inaccurate estimation of the cortical activity. Up to now only a few studies have investigated these influences, and still less has been attempted to remove/reduce these influences. The present study aims to gain insights into the reduction of physiological rhythms in hemodynamic signals (oxygenated hemoglobin (oxy-Hb), deoxygenated hemoglobin (deoxy-Hb)). Approach. We introduce the use of three different signal processing approaches (spatial filtering, a common average reference (CAR) method; independent component analysis (ICA); and transfer function (TF) models) to reduce the influence of respiratory and blood pressure (BP) rhythms on the hemodynamic responses. Main results. All approaches produce large reductions in BP and respiration influences on the oxy-Hb signals and, therefore, improve the contrast-to-noise ratio (CNR). In contrast, for deoxy-Hb signals CAR and ICA did not improve the CNR. However, for the TF approach, a CNR-improvement in deoxy-Hb can also be found. Significance. The present study investigates the application of different signal processing approaches to reduce the influences of physiological rhythms on the hemodynamic responses. In addition to the identification of the best signal processing method, we also show the importance of noise reduction in fNIRS data.

  20. Separating heart and brain: on the reduction of physiological noise from multichannel functional near-infrared spectroscopy (fNIRS) signals.

    PubMed

    Bauernfeind, G; Wriessnegger, S C; Daly, I; Müller-Putz, G R

    2014-10-01

    Functional near-infrared spectroscopy (fNIRS) is an emerging technique for the in vivo assessment of functional activity of the cerebral cortex as well as in the field of brain-computer interface (BCI) research. A common challenge for the utilization of fNIRS in these areas is a stable and reliable investigation of the spatio-temporal hemodynamic patterns. However, the recorded patterns may be influenced and superimposed by signals generated from physiological processes, resulting in an inaccurate estimation of the cortical activity. Up to now only a few studies have investigated these influences, and still less has been attempted to remove/reduce these influences. The present study aims to gain insights into the reduction of physiological rhythms in hemodynamic signals (oxygenated hemoglobin (oxy-Hb), deoxygenated hemoglobin (deoxy-Hb)). We introduce the use of three different signal processing approaches (spatial filtering, a common average reference (CAR) method; independent component analysis (ICA); and transfer function (TF) models) to reduce the influence of respiratory and blood pressure (BP) rhythms on the hemodynamic responses. All approaches produce large reductions in BP and respiration influences on the oxy-Hb signals and, therefore, improve the contrast-to-noise ratio (CNR). In contrast, for deoxy-Hb signals CAR and ICA did not improve the CNR. However, for the TF approach, a CNR-improvement in deoxy-Hb can also be found. The present study investigates the application of different signal processing approaches to reduce the influences of physiological rhythms on the hemodynamic responses. In addition to the identification of the best signal processing method, we also show the importance of noise reduction in fNIRS data.
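
    Of the three approaches compared, the common average reference (CAR) is the simplest to illustrate: the across-channel mean, which is dominated by global physiological rhythms such as blood pressure (Mayer) waves and respiration, is subtracted from every channel at each sample. A minimal sketch with simulated oxy-Hb channels; the channel count, sampling rate, and rhythm frequencies are assumptions:

```python
# Hedged sketch: common average reference (CAR) spatial filtering to reduce
# global physiological rhythms in multichannel fNIRS (oxy-Hb) signals.
import numpy as np

fs = 10.0                                    # fNIRS sampling rate (Hz), assumed
t = np.arange(0, 120, 1 / fs)                # 2 minutes of data
rng = np.random.default_rng(7)
n_channels = 16

# Global physiological interference: ~0.1 Hz Mayer waves and ~0.25 Hz respiration.
global_noise = 0.5 * np.sin(2 * np.pi * 0.1 * t) + 0.3 * np.sin(2 * np.pi * 0.25 * t)

# Local (task-related) response only on one channel, plus channel noise.
oxy_hb = np.tile(global_noise, (n_channels, 1)) + 0.1 * rng.normal(size=(n_channels, t.size))
oxy_hb[3] += 0.4 * np.clip(np.sin(2 * np.pi * t / 40), 0, None)   # toy hemodynamic response

def car(data):
    """Subtract the across-channel mean (common average reference) per sample."""
    return data - data.mean(axis=0, keepdims=True)

filtered = car(oxy_hb)
print("variance before CAR:", round(float(oxy_hb.var()), 3),
      "after CAR:", round(float(filtered.var()), 3))
```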

  1. Auditory spatial processing in Alzheimer's disease.

    PubMed

    Golden, Hannah L; Nicholas, Jennifer M; Yong, Keir X X; Downey, Laura E; Schott, Jonathan M; Mummery, Catherine J; Crutch, Sebastian J; Warren, Jason D

    2015-01-01

    The location and motion of sounds in space are important cues for encoding the auditory world. Spatial processing is a core component of auditory scene analysis, a cognitively demanding function that is vulnerable in Alzheimer's disease. Here we designed a novel neuropsychological battery based on a virtual space paradigm to assess auditory spatial processing in patient cohorts with clinically typical Alzheimer's disease (n = 20) and its major variant syndrome, posterior cortical atrophy (n = 12) in relation to healthy older controls (n = 26). We assessed three dimensions of auditory spatial function: externalized versus non-externalized sound discrimination, moving versus stationary sound discrimination and stationary auditory spatial position discrimination, together with non-spatial auditory and visual spatial control tasks. Neuroanatomical correlates of auditory spatial processing were assessed using voxel-based morphometry. Relative to healthy older controls, both patient groups exhibited impairments in detection of auditory motion, and stationary sound position discrimination. The posterior cortical atrophy group showed greater impairment for auditory motion processing and the processing of a non-spatial control complex auditory property (timbre) than the typical Alzheimer's disease group. Voxel-based morphometry in the patient cohort revealed grey matter correlates of auditory motion detection and spatial position discrimination in right inferior parietal cortex and precuneus, respectively. These findings delineate auditory spatial processing deficits in typical and posterior Alzheimer's disease phenotypes that are related to posterior cortical regions involved in both syndromic variants and modulated by the syndromic profile of brain degeneration. Auditory spatial deficits contribute to impaired spatial awareness in Alzheimer's disease and may constitute a novel perceptual model for probing brain network disintegration across the Alzheimer's disease

  2. Auditory and visual impairments in patients with blast-related traumatic brain injury: Effect of dual sensory impairment on Functional Independence Measure.

    PubMed

    Lew, Henry L; Garvert, Donn W; Pogoda, Terri K; Hsu, Pei-Te; Devine, Jennifer M; White, Daniel K; Myers, Paula J; Goodrich, Gregory L

    2009-01-01

    The frequencies of hearing impairment (HI), vision impairment (VI), or dual (hearing and vision) sensory impairment (DSI) in patients with blast-related traumatic brain injury (TBI) and their effects on functional recovery are not well documented. In this preliminary study of 175 patients admitted to a Polytrauma Rehabilitation Center, we completed hearing and vision examinations and obtained Functional Independence Measure (FIM) scores at admission and discharge for 62 patients with blast-related TBI. We diagnosed HI only, VI only, and DSI in 19%, 34%, and 32% of patients, respectively. Only 15% of the patients had no sensory impairment in either auditory or visual modality. An analysis of variance showed a group difference for the total and motor FIM scores at discharge (p < 0.04). Regression model analyses demonstrated that DSI significantly contributed to reduced gain in total (t = -2.25) and motor (t = -2.50) FIM scores (p < 0.05). Understanding the long-term consequences of sensory impairments in the functional recovery of patients with blast-related TBI requires further research.

  3. Distinct activation of monoaminergic pathways in chick brain in relation to auditory imprinting and stressful situations: a microdialysis study.

    PubMed

    Gruss, M; Braun, K

    1997-02-01

    In the forebrain of the domestic chick (Gallus gallus domesticus), an area termed the mediorostral neostriatum/hyperstriatum ventrale is strongly involved in emotional learning paradigms such as acoustic filial imprinting. Furthermore, the involvement of the mediorostral neostriatum/hyperstriatum ventrale in stressful situations, such as social separation, has been demonstrated in 2-deoxyglucose studies. The aim of the present study was to examine whether quantitative changes of dopamine, serotonin and their metabolites occur during auditory filial imprinting and during social separation. Using in vivo microdialysis in tone-imprinted and in naive, control chicks, we compared the extracellular levels of homovanillic acid, a metabolite of dopamine, and 5-hydroxyindoleacetic acid, a metabolite of serotonin, during the presentation of the imprinting tone. A small, but statistically significant, decrease of extracellular homovanillic acid levels was found in the mediorostral neostriatum/hyperstriatum ventrale of imprinted chicks compared to control animals, whereas changes of 5-hydroxyindoleacetic acid were not detected. In a second experiment, we investigated the levels of homovanillic acid and 5-hydroxyindoleacetic acid in the mediorostral neostriatum/hyperstriatum ventrale of socially reared chicks during different stress situations, such as handling or separation from their cage mates. Handling induced a significant increase of homovanillic acid and 5-hydroxyindoleacetic acid, while social separation resulted in a significant increase of 5-hydroxyindoleacetic acid and only a slight increase of homovanillic acid. Despite considerable inter-individual variability, the increase of distress vocalizations (duration of distress calls) after social separation displayed a good correlation to the increased 5-hydroxyindoleacetic acid levels in all animals analysed. These results provide the first evidence that the physiological response of the mediorostral neostriatum

  4. Decreases in energy and increases in phase locking of event related oscillations to auditory stimuli occurs over adolescence in human and rodent brain

    PubMed Central

    Ehlers, Cindy L.; Wills, Derek N.; Desikan, Anita; Phillips, Evelyn; Havstad, James

    2014-01-01

    Synchrony of phase (phase locking) of event-related oscillations (EROs) within and between different brain areas has been suggested to reflect communication exchange between neural networks and as such may be a sensitive and translational measure of changes in brain remodeling that occurs during adolescence. This study sought to investigate developmental changes in EROs using a similar auditory event-related potential (ERP) paradigm in both rats and humans. Energy and phase variability of EROs collected from 38 young adult men (age 18-25 yrs), 33 periadolescent boys (age 10-14 yrs), 15 male periadolescent rats (@ Post Natal Day (PD) 36) and 19 male adult rats (@ PD 103) were investigated. Three channels of ERP data (Frontal Cortex, FZ; Central Cortex, CZ; Parietal Cortex, PZ) were collected from the humans using an oddball plus “noise” paradigm that was presented under passive (no behavioral response required) conditions in the periadolescents and under active conditions (where each subject was instructed to depress a counter each time he detected an infrequent (target) tone) in adults and adolescents. ERPs were recorded in rats using only the passive paradigm. In order to compare the tasks used in rats to those used in humans we first studied whether three ERO measures (energy, phase locking index (within an electrode site, PLI), phase difference locking index (between different electrode sites, PDLI)) differentiated the “active” from “passive” ERP tasks. Secondly we explored our main question of whether the three ERO measures, differentiated adults from periadolescents in a similar manner in both humans and rats. No significant changes were found in measures of ERO energy between the active and passive tasks in the periadolescent human participants. There was a smaller but significant increase in PLI but not PDLI as a function of “active” task requirements. Developmental differences were found in energy, PLI and PDLI values between the

  5. Decreases in energy and increases in phase locking of event-related oscillations to auditory stimuli occur during adolescence in human and rodent brain.

    PubMed

    Ehlers, Cindy L; Wills, Derek N; Desikan, Anita; Phillips, Evelyn; Havstad, James

    2014-01-01

    Synchrony of phase (phase locking) of event-related oscillations (EROs) within and between different brain areas has been suggested to reflect communication exchange between neural networks and as such may be a sensitive and translational measure of changes in brain remodeling that occur during adolescence. This study sought to investigate developmental changes in EROs using a similar auditory event-related potential (ERP) paradigm in both rats and humans. Energy and phase variability of EROs collected from 38 young adult men (aged 18-25 years), 33 periadolescent boys (aged 10-14 years), 15 male periadolescent rats [at postnatal day (PD) 36] and 19 male adult rats (at PD103) were investigated. Three channels of ERP data (frontal cortex, central cortex and parietal cortex) were collected from the humans using an 'oddball plus noise' paradigm that was presented under passive (no behavioral response required) conditions in the periadolescents and under active conditions (where each subject was instructed to depress a counter each time he detected an infrequent target tone) in adults and adolescents. ERPs were recorded in rats using only the passive paradigm. In order to compare the tasks used in rats to those used in humans, we first studied whether three ERO measures [energy, phase locking index (PLI) within an electrode site and phase difference locking index (PDLI) between different electrode sites] differentiated the 'active' from 'passive' ERP tasks. Secondly, we explored our main question of whether the three ERO measures differentiated adults from periadolescents in a similar manner in both humans and rats. No significant changes were found in measures of ERO energy between the active and passive tasks in the periadolescent human participants. There was a smaller but significant increase in PLI but not PDLI as a function of active task requirements. Developmental differences were found in energy, PLI and PDLI values between the periadolescents and adults in
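
    The phase locking index (PLI) referred to above quantifies the consistency of oscillatory phase across trials at each time point and frequency: it is the magnitude of the trial-averaged unit phase vector, ranging from 0 (random phase) to 1 (perfect locking), while ERO energy is the trial-averaged squared magnitude of the same time-frequency decomposition. A minimal single-channel sketch using a complex Morlet-style wavelet on simulated trials; the frequency, trial count, and wavelet parameters are assumptions, not the study's settings:

```python
# Hedged sketch: event-related oscillation (ERO) energy and phase locking
# index (PLI) across trials, computed with a complex Morlet-like wavelet.
import numpy as np

fs = 250.0
t = np.arange(-0.2, 0.8, 1 / fs)              # trial epoch around stimulus onset
rng = np.random.default_rng(8)
n_trials = 60

# Simulated trials: a 10-Hz burst after onset, phase-locked across trials.
trials = np.array([0.5 * np.exp(-((t - 0.2) / 0.1) ** 2) * np.cos(2 * np.pi * 10 * t)
                   + rng.normal(0, 0.5, t.size) for _ in range(n_trials)])

def morlet(freq, n_cycles=6):
    """Complex Morlet-style wavelet at the given frequency."""
    sigma = n_cycles / (2 * np.pi * freq)
    wt = np.arange(-3 * sigma, 3 * sigma, 1 / fs)
    return np.exp(2j * np.pi * freq * wt) * np.exp(-wt ** 2 / (2 * sigma ** 2))

def ero_measures(trials, freq):
    w = morlet(freq)
    analytic = np.array([np.convolve(tr, w, mode="same") for tr in trials])
    energy = (np.abs(analytic) ** 2).mean(axis=0)               # mean ERO energy
    pli = np.abs(np.exp(1j * np.angle(analytic)).mean(axis=0))  # phase locking index
    return energy, pli

energy, pli = ero_measures(trials, freq=10.0)
print("peak PLI at 10 Hz:", round(float(pli.max()), 2),
      "at t =", round(float(t[np.argmax(pli)]), 2), "s")
```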

  6. Sleep-Disordered Breathing Affects Auditory Processing in 5–7 Year-Old Children: Evidence From Brain Recordings

    PubMed Central

    Key, Alexandra P.F.; Molfese, Dennis L.; O’Brien, Louise; Gozal, David

    2010-01-01

    Poor sleep in children is associated with lower neurocognitive functioning and increased maladaptive behaviors. The current study examined the impact of snoring (the most common manifestation of sleep-disordered breathing) on cognitive and brain functioning in a sample of 35 asymptomatic children ages 5–7 years identified in the community as having habitual snoring (SDB). All participants completed polysomnographic, neurocognitive (NEPSY) and psychophysiological (ERPs to speech sounds) assessments. The results indicated that sub-clinical levels of SDB may not necessarily lead to reduced performance on standardized behavioral measures of attention and memory. However, brain indices of speech perception and discrimination (N1/P2) are sensitive to individual differences in the quality of sleep. We postulate that addition of ERPs to the standard clinical measures of sleep problems could lead to early identification of children who may be more cognitively vulnerable because of chronic sleep disturbances. PMID:20183723

  7. Brain Dynamics of Aging: Multiscale Variability of EEG Signals at Rest and during an Auditory Oddball Task

    PubMed Central

    Sleimen-Malkoun, Rita; Perdikis, Dionysios; Müller, Viktor; Blanc, Jean-Luc; Huys, Raoul; Temprado, Jean-Jacques

    2015-01-01

    The present work focused on the study of fluctuations of cortical activity across time scales in young and older healthy adults. The main objective was to offer a comprehensive characterization of the changes of brain (cortical) signal variability during aging, and to make the link with known underlying structural, neurophysiological, and functional modifications, as well as aging theories. We analyzed electroencephalogram (EEG) data of young and elderly adults, which were collected at resting state and during an auditory oddball task. We used a wide battery of metrics that typically are separately applied in the literature, and we compared them with more specific ones that address their limits. Our procedure aimed to overcome some of the methodological limitations of earlier studies and verify whether previous findings can be reproduced and extended to different experimental conditions. In both rest and task conditions, our results mainly revealed that EEG signals presented systematic age-related changes that were time-scale-dependent with regard to the structure of fluctuations (complexity) but not with regard to their magnitude. Namely, compared with young adults, the cortical fluctuations of the elderly were more complex at shorter time scales, but less complex at longer scales, although always showing a lower variance. Additionally, the elderly showed signs of dedifferentiation both across space and between experimental conditions. By integrating these so far isolated findings across time scales, metrics, and conditions, the present study offers an overview of age-related changes in the fluctuations of electrocortical activity while making the link with underlying brain dynamics. PMID:26464983

  8. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study.

    PubMed

    Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale

    2017-04-01

    There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times on auditory and audio-visual stimuli compared with NH listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses.

  9. Auditory pathways: are 'what' and 'where' appropriate?

    PubMed

    Hall, Deborah A

    2003-05-13

    New evidence confirms that the auditory system encompasses temporal, parietal and frontal brain regions, some of which partly overlap with the visual system. But common assumptions about the functional homologies between sensory systems may be misleading.

  10. Multichannel demultiplexer-demodulator

    NASA Astrophysics Data System (ADS)

    Courtois, Hector; Sherry, Mike; Cangiane, Peter; Caso, Greg

    1993-11-01

    One of the critical satellite technologies in meshed VSAT (very small aperture terminal) satellite communication networks utilizing FDMA (frequency division multiple access) uplinks is a multichannel demultiplexer/demodulator (MCDD). TRW Electronic Systems Group developed a proof-of-concept (POC) MCDD using advanced digital technologies. This POC model demonstrates the capability of demultiplexing and demodulating multiple low to medium data rate FDMA uplinks, with potential for expansion to demultiplexing and demodulating hundreds to thousands of narrowband uplinks. The TRW approach uses baseband sampling followed by successive wideband and narrowband channelizers, with each channelizer feeding into a multirate, time-shared demodulator. A full-scale MCDD would consist of an 8-bit A/D sampling at 92.16 MHz, four wideband channelizers capable of demultiplexing eight wideband channels, thirty-two narrowband channelizers capable of demultiplexing one wideband signal into 32 narrowband channels, and thirty-two multirate demodulators. The POC model consists of an 8-bit A/D sampling at 23.04 MHz, one wideband channelizer, 16 narrowband channelizers, and three multirate demodulators. The implementation losses of the wideband and narrowband channels are 0.3 dB and 0.75 dB, respectively, at the E_b/N_0 required for a 10^-7 error rate.

  11. Multichannel demultiplexer-demodulator

    NASA Technical Reports Server (NTRS)

    Courtois, Hector; Sherry, Mike; Cangiane, Peter; Caso, Greg

    1993-01-01

    One of the critical satellite technologies in meshed VSAT (very small aperture terminal) satellite communication networks utilizing FDMA (frequency division multiple access) uplinks is a multichannel demultiplexer/demodulator (MCDD). TRW Electronic Systems Group developed a proof-of-concept (POC) MCDD using advanced digital technologies. This POC model demonstrates the capability of demultiplexing and demodulating multiple low to medium data rate FDMA uplinks, with potential for expansion to demultiplexing and demodulating hundreds to thousands of narrowband uplinks. The TRW approach uses baseband sampling followed by successive wideband and narrowband channelizers, with each channelizer feeding into a multirate, time-shared demodulator. A full-scale MCDD would consist of an 8-bit A/D sampling at 92.16 MHz, four wideband channelizers capable of demultiplexing eight wideband channels, thirty-two narrowband channelizers capable of demultiplexing one wideband signal into 32 narrowband channels, and thirty-two multirate demodulators. The POC model consists of an 8-bit A/D sampling at 23.04 MHz, one wideband channelizer, 16 narrowband channelizers, and three multirate demodulators. The implementation losses of the wideband and narrowband channels are 0.3 dB and 0.75 dB, respectively, at the E_b/N_0 required for a 10^-7 error rate.
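
    The channelization step described above (shifting each FDMA uplink to baseband, filtering, and reducing the sample rate) can be illustrated with a short sketch. This is a minimal per-channel digital down-converter in Python/NumPy, not the TRW polyphase hardware design; the channel count, spacing, and filter length are hypothetical stand-ins.

    ```python
    # Minimal per-channel sketch of FDMA channelization (not the TRW polyphase
    # design): mix each uplink to baseband, low-pass filter, and decimate.
    import numpy as np
    from scipy.signal import firwin, lfilter

    def channelize(x, fs, channel_centers_hz, channel_bw_hz, decim):
        """Return one complex baseband stream per FDMA channel."""
        n = np.arange(len(x))
        taps = firwin(129, channel_bw_hz / 2, fs=fs)         # prototype low-pass filter
        outputs = []
        for fc in channel_centers_hz:
            mixed = x * np.exp(-2j * np.pi * fc * n / fs)    # shift the channel to 0 Hz
            baseband = lfilter(taps, 1.0, mixed)             # isolate the channel
            outputs.append(baseband[::decim])                # reduce the sample rate
        return outputs

    # Example: a 23.04 MHz-sampled block (as in the POC model) split into
    # 16 hypothetical 720 kHz-wide narrowband channels.
    fs = 23.04e6
    x = np.random.randn(int(fs * 1e-3))                      # 1 ms of stand-in input
    centers = (np.arange(16) + 0.5) * 720e3                  # hypothetical channel grid
    chans = channelize(x, fs, centers, channel_bw_hz=720e3, decim=32)
    ```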

  12. Auditory system

    NASA Technical Reports Server (NTRS)

    Ades, H. W.

    1973-01-01

    The physical correlates of hearing, i.e., the acoustic stimuli, are reported. The auditory system, consisting of external ear, middle ear, inner ear, organ of Corti, basilar membrane, hair cells, inner hair cells, outer hair cells, innervation of hair cells, and transducer mechanisms, is discussed. Both conductive and sensorineural hearing losses are also examined.

  13. Auditory system

    NASA Technical Reports Server (NTRS)

    Ades, H. W.

    1973-01-01

    The physical correlates of hearing, i.e., the acoustic stimuli, are reported. The auditory system, consisting of external ear, middle ear, inner ear, organ of Corti, basilar membrane, hair cells, inner hair cells, outer hair cells, innervation of hair cells, and transducer mechanisms, is discussed. Both conductive and sensorineural hearing losses are also examined.

  14. Functional Organization of the Ventral Auditory Pathway.

    PubMed

    Cohen, Yale E; Bennur, Sharath; Christison-Lagay, Kate; Gifford, Adam M; Tsunada, Joji

    2016-01-01

    The fundamental problem in audition is determining the mechanisms required by the brain to transform an unlabelled mixture of auditory stimuli into coherent perceptual representations. This process is called auditory-scene analysis. The perceptual representations that result from auditory-scene analysis are formed through a complex interaction of perceptual grouping, attention, categorization and decision-making. Despite a great deal of scientific energy devoted to understanding these aspects of hearing, we still do not understand (1) how sound perception arises from neural activity and (2) the causal relationship between neural activity and sound perception. Here, we review the role of the "ventral" auditory pathway in sound perception. We hypothesize that, in the early parts of the auditory cortex, neural activity reflects the auditory properties of a stimulus. However, in later parts of the auditory cortex, neurons encode the sensory evidence that forms an auditory decision and are causally involved in the decision process. Finally, in the prefrontal cortex, which receives input from the auditory cortex, neural activity reflects the actual perceptual decision. Together, these studies indicate that the ventral pathway contains hierarchical circuits that are specialized for auditory perception and scene analysis.

  15. Recording and marking with silicon multichannel electrodes.

    PubMed

    Townsend, George; Peloquin, Pascal; Kloosterman, Fabian; Hetke, Jamille F; Leung, L Stan

    2002-04-01

    This protocol describes an implementation of recording and analysis of evoked potentials in the hippocampal cortex, combined with lesioning using multichannel silicon probes. Multichannel recording offers the advantage of capturing a potential field at one instant in time. The potentials are then subjected to current source density (CSD) analysis, to reveal the layer-by-layer current sources and sinks. Signals from each channel of a silicon probe (maximum 16 channels in this study) were amplified and digitized at up to 40 kHz after sample-and-hold circuits. A modular lesion circuit board could be inserted between the input preamplifiers and the silicon probe, such that any one of the 16 electrodes could be connected to a DC lesion current. By making a lesion at the electrode showing a physiological event of interest, the anatomical location of the event can be precisely identified, as shown for the distal dendritic current sink in CA1 following medial perforant path stimulation. Making two discrete lesions through the silicon probe is useful to indicate the degree of tissue shrinkage during histological procedures. In addition, potential/CSD profiles were stable following small movements of the silicon probe, suggesting that the probe did not cause excessive damage to the brain.
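
    The current source density step mentioned above is, in its simplest one-dimensional form, the negative second spatial derivative of the potential across the probe contacts, scaled by the tissue conductivity. The sketch below (Python/NumPy, not the authors' code) shows that calculation; the conductivity value and probe geometry are illustrative assumptions.

    ```python
    import numpy as np

    def csd_second_difference(phi, contact_spacing_um, sigma=0.3):
        """One-dimensional CSD estimate from a laminar potential profile.

        phi : array (n_contacts, n_samples) of field potentials (volts).
        contact_spacing_um : spacing between probe contacts in micrometers.
        sigma : assumed tissue conductivity in S/m (0.3 S/m is a common value).
        Returns the CSD for interior contacts (edge contacts are lost to the stencil).
        """
        h = contact_spacing_um * 1e-6                    # metres
        d2phi = phi[2:] - 2 * phi[1:-1] + phi[:-2]       # second spatial difference
        return -sigma * d2phi / h**2                     # sinks negative, sources positive

    # Example: 16-contact probe, 100 um spacing, 1 s of data at 10 kHz (stand-in data).
    lfp = np.random.randn(16, 10_000) * 1e-4
    csd = csd_second_difference(lfp, contact_spacing_um=100)
    ```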

  16. Harmonic Training and the Formation of Pitch Representation in a Neural Network Model of the Auditory Brain

    PubMed Central

    Ahmad, Nasir; Higgins, Irina; Walker, Kerry M. M.; Stringer, Simon M.

    2016-01-01

    Attempting to explain the perceptual qualities of pitch has proven to be, and remains, a difficult problem. The wide range of sounds which elicit pitch and a lack of agreement across neurophysiological studies on how pitch is encoded by the brain have made this attempt more difficult. In describing the potential neural mechanisms by which pitch may be processed, a number of neural networks have been proposed and implemented. However, no unsupervised neural networks with biologically accurate cochlear inputs have yet been demonstrated. This paper proposes a simple system in which pitch representing neurons are produced in a biologically plausible setting. Purely unsupervised regimes of neural network learning are implemented and these prove to be sufficient in identifying the pitch of sounds with a variety of spectral profiles, including sounds with missing fundamental frequencies and iterated rippled noises. PMID:27047368

  17. Impaired auditory selective attention ameliorated by cognitive training with graded exposure to noise in patients with traumatic brain injury.

    PubMed

    Dundon, Neil M; Dockree, Suvi P; Buckley, Vanessa; Merriman, Niamh; Carton, Mary; Clarke, Sarah; Roche, Richard A P; Lalor, Edmund C; Robertson, Ian H; Dockree, Paul M

    2015-08-01

    Patients who suffer traumatic brain injury frequently report difficulty concentrating on tasks and completing routine activities in noisy and distracting environments. Such impairments can have long-term negative psychosocial consequences. A cognitive control function that may underlie this impairment is the capacity to select a goal-relevant signal for further processing while safeguarding it from irrelevant noise. A paradigmatic investigation of this problem was undertaken using a dichotic listening task (study 1) in which comprehension of a stream of speech to one ear was measured in the context of increasing interference from a second stream of irrelevant speech to the other ear. Controls showed an initial decline in performance in the presence of competing speech but thereafter showed adaptation to increasing audibility of irrelevant speech, even at the highest levels of noise. By contrast, patients showed linear decline in performance with increasing noise. Subsequently attempts were made to ameliorate this deficit (study 2) using a cognitive training procedure based on attention process training (APT) that included graded exposure to irrelevant noise over the course of training. Patients were assigned to adaptive and non-adaptive training schedules or to a no-training control group. Results showed that both types of training drove improvements in the dichotic listening and in naturalistic tasks of performance in noise. Improvements were also seen on measures of selective attention in the visual domain suggesting transfer of training. We also observed augmentation of event-related potentials (ERPs) linked to target processing (P3b) but no change in ERPs evoked by distractor stimuli (P3a) suggesting that training heightened tuning of target signals, as opposed to gating irrelevant noise. No changes in any of the above measures were observed in a no-training control group. Together these findings present an ecologically valid approach to measure selective

  18. Auditory short-term memory in the primate auditory cortex.

    PubMed

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory.

  19. Long-term recovery from hippocampal-related behavioral and biochemical abnormalities induced by noise exposure during brain development. Evaluation of auditory pathway integrity.

    PubMed

    Uran, S L; Gómez-Casati, M E; Guelman, L R

    2014-10-01

    Sound is an important part of man's contact with the environment and has served as a critical means for survival throughout his evolution. As a result of exposure to noise, physiological functions such as those involving structures of the auditory and non-auditory systems might be damaged. We have previously reported that noise-exposed developing rats elicited hippocampal-related histological, biochemical and behavioral changes. However, no data about the time lapse of these changes were reported. Moreover, measurements of auditory pathway function were not performed in exposed animals. Therefore, with the present work, we aim to test the onset and the persistence of the different extra-auditory abnormalities observed in noise-exposed rats and to evaluate auditory pathway integrity. Male Wistar rats of 15 days were exposed to moderate noise levels (95-97 dB SPL, 2 h a day) during one day (acute noise exposure, ANE) or during 15 days (sub-acute noise exposure, SANE). Hippocampal biochemical determinations as well as short (ST) and long term (LT) behavioral assessments were performed. In addition, histological and functional evaluations of the auditory pathway were carried out in exposed animals. Our results show that hippocampal-related behavioral and biochemical changes (impairments in habituation, recognition and associative memories as well as distortion of anxiety-related behavior, decreases in reactive oxygen species (ROS) levels and increases in antioxidant enzymes activities) induced by noise exposure were almost completely restored by PND 90. In addition, auditory evaluation shows that increased cochlear thresholds observed in exposed rats were re-established at PND 90, although with a remarkable supra-threshold amplitude reduction. These data suggest that noise-induced hippocampal and auditory-related alterations are mostly transient and that the effects of noise on the hippocampus might be, at least in part, mediated by the damage to the auditory pathway

  20. Comparison of tactile, auditory, and visual modality for brain-computer interface use: a case study with a patient in the locked-in state

    PubMed Central

    Kaufmann, Tobias; Holz, Elisa M.; Kübler, Andrea

    2013-01-01

    This paper describes a case study with a patient in the classic locked-in state, who currently has no means of independent communication. Following a user-centered approach, we investigated event-related potentials (ERP) elicited in different modalities for use in brain-computer interface (BCI) systems. Such systems could provide her with an alternative communication channel. To investigate the most viable modality for achieving BCI based communication, classic oddball paradigms (1 rare and 1 frequent stimulus, ratio 1:5) in the visual, auditory and tactile modality were conducted (2 runs per modality). Classifiers were built on one run and tested offline on another run (and vice versa). In these paradigms, the tactile modality was clearly superior to other modalities, displaying high offline accuracy even when classification was performed on single trials only. Consequently, we tested the tactile paradigm online and the patient successfully selected targets without any error. Furthermore, we investigated use of the visual or tactile modality for different BCI systems with more than two selection options. In the visual modality, several BCI paradigms were tested offline. Neither matrix-based nor so-called gaze-independent paradigms constituted a means of control. These results may thus question the gaze-independence of current gaze-independent approaches to BCI. A tactile four-choice BCI resulted in high offline classification accuracies. Yet, online use raised various issues. Although performance was clearly above chance, practical daily life use appeared unlikely when compared to other communication approaches (e.g., partner scanning). Our results emphasize the need for user-centered design in BCI development including identification of the best stimulus modality for a particular user. Finally, the paper discusses feasibility of EEG-based BCI systems for patients in classic locked-in state and compares BCI to other AT solutions that we also tested during the

  1. Comparison of tactile, auditory, and visual modality for brain-computer interface use: a case study with a patient in the locked-in state.

    PubMed

    Kaufmann, Tobias; Holz, Elisa M; Kübler, Andrea

    2013-01-01

    This paper describes a case study with a patient in the classic locked-in state, who currently has no means of independent communication. Following a user-centered approach, we investigated event-related potentials (ERP) elicited in different modalities for use in brain-computer interface (BCI) systems. Such systems could provide her with an alternative communication channel. To investigate the most viable modality for achieving BCI based communication, classic oddball paradigms (1 rare and 1 frequent stimulus, ratio 1:5) in the visual, auditory and tactile modality were conducted (2 runs per modality). Classifiers were built on one run and tested offline on another run (and vice versa). In these paradigms, the tactile modality was clearly superior to other modalities, displaying high offline accuracy even when classification was performed on single trials only. Consequently, we tested the tactile paradigm online and the patient successfully selected targets without any error. Furthermore, we investigated use of the visual or tactile modality for different BCI systems with more than two selection options. In the visual modality, several BCI paradigms were tested offline. Neither matrix-based nor so-called gaze-independent paradigms constituted a means of control. These results may thus question the gaze-independence of current gaze-independent approaches to BCI. A tactile four-choice BCI resulted in high offline classification accuracies. Yet, online use raised various issues. Although performance was clearly above chance, practical daily life use appeared unlikely when compared to other communication approaches (e.g., partner scanning). Our results emphasize the need for user-centered design in BCI development including identification of the best stimulus modality for a particular user. Finally, the paper discusses feasibility of EEG-based BCI systems for patients in classic locked-in state and compares BCI to other AT solutions that we also tested during the
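
    The offline procedure described in this record (build a classifier on one run, test it on the other, and vice versa) can be sketched as follows. This is a hedged illustration using scikit-learn's linear discriminant analysis on flattened ERP epochs with made-up array sizes, not the authors' classification pipeline.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def cross_run_accuracy(X_run1, y_run1, X_run2, y_run2):
        """Train on one run, test on the other (and vice versa), as in the
        offline analysis described above. X_* : (n_epochs, n_channels, n_times),
        y_* : 1 = rare (target) stimulus, 0 = frequent stimulus."""
        accuracies = []
        for (Xa, ya), (Xb, yb) in [((X_run1, y_run1), (X_run2, y_run2)),
                                   ((X_run2, y_run2), (X_run1, y_run1))]:
            clf = LinearDiscriminantAnalysis()
            clf.fit(Xa.reshape(len(Xa), -1), ya)             # flatten epochs to feature vectors
            accuracies.append(clf.score(Xb.reshape(len(Xb), -1), yb))
        return np.mean(accuracies)

    # Stand-in epochs: 120 trials per run, 8 channels, 150 samples (hypothetical sizes),
    # with a 1:5 rare-to-frequent ratio as in the oddball paradigms above.
    rng = np.random.default_rng(0)
    X1, X2 = rng.normal(size=(120, 8, 150)), rng.normal(size=(120, 8, 150))
    y1 = (np.arange(120) % 6 == 0).astype(int)
    y2 = (np.arange(120) % 6 == 0).astype(int)
    print(cross_run_accuracy(X1, y1, X2, y2))
    ```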

  2. Modular multichannel surface plasmon spectrometer

    NASA Astrophysics Data System (ADS)

    Neuert, G.; Kufer, S.; Benoit, M.; Gaub, H. E.

    2005-05-01

    We have developed a modular multichannel surface plasmon resonance (SPR) spectrometer on the basis of a commercially available hybrid sensor chip. Due to its modularity this inexpensive and easy to use setup can readily be adapted to different experimental environments. High temperature stability is achieved through efficient thermal coupling of individual SPR units. With standard systems the performance of the multichannel instrument was evaluated. The adsorption kinetics of a cysteamine monolayer, as well as the concentration dependence of the specific receptor-ligand interaction between biotin and streptavidin, were measured.

  3. Brain dynamics of distractibility: interaction between top-down and bottom-up mechanisms of auditory attention.

    PubMed

    Bidet-Caulet, Aurélie; Bottemanne, Laure; Fonteneau, Clara; Giard, Marie-Hélène; Bertrand, Olivier

    2015-05-01

    Attention improves the processing of specific information while other stimuli are disregarded. A good balance between bottom-up (attentional capture by unexpected salient stimuli) and top-down (selection of relevant information) mechanisms is crucial to be both task-efficient and aware of our environment. Only a few studies have explored how an isolated unexpected task-irrelevant stimulus outside the attention focus can disturb the top-down attention mechanisms necessary to the good performance of the ongoing task, and how these top-down mechanisms can modulate the bottom-up mechanisms of attentional capture triggered by an unexpected event. We recorded scalp electroencephalography in 18 young adults performing a new paradigm measuring distractibility and assessing both bottom-up and top-down attention mechanisms, at the same time. Increasing task load in top-down attention was found to reduce early processing of the distracting sound, but not bottom-up attentional capture mechanisms nor the behavioral distraction cost in reaction time. Moreover, the impact of bottom-up attentional capture by distracting sounds on target processing was revealed as a delayed latency of the N100 sensory response to target sounds mirroring increased reaction times. These results provide crucial insight into how bottom-up and top-down mechanisms dynamically interact and compete in the human brain, i.e. on the precarious balance between voluntary attention and distraction.

  4. Auditory neuroplasticity, hearing loss and cochlear implants.

    PubMed

    Ryugo, David

    2015-07-01

    Data from our laboratory show that the auditory brain is highly malleable by experience. We establish a base of knowledge that describes the normal structure and workings at the initial stages of the central auditory system. This research is expanded to include the associated pathology in the auditory brain stem created by hearing loss. Utilizing the congenitally deaf white cat, we demonstrate the way that cells, synapses, and circuits are pathologically affected by sound deprivation. We further show that the restoration of auditory nerve activity via electrical stimulation through cochlear implants serves to correct key features of brain pathology caused by hearing loss. The data suggest that rigorous training with cochlear implants and/or hearing aids offers the promise of heretofore unattained benefits.

  5. Auditory-vocal mirroring in songbirds.

    PubMed

    Mooney, Richard

    2014-01-01

    Mirror neurons are theorized to serve as a neural substrate for spoken language in humans, but the existence and functions of auditory-vocal mirror neurons in the human brain remain largely matters of speculation. Songbirds resemble humans in their capacity for vocal learning and depend on their learned songs to facilitate courtship and individual recognition. Recent neurophysiological studies have detected putative auditory-vocal mirror neurons in a sensorimotor region of the songbird's brain that plays an important role in expressive and receptive aspects of vocal communication. This review discusses the auditory and motor-related properties of these cells, considers their potential role in song learning and communication in relation to classical studies of birdsong, and points to the circuit and developmental mechanisms that may give rise to auditory-vocal mirroring in the songbird's brain.

  6. Multichannel SQUID systems for brain research

    SciTech Connect

    Ahonen, A.I.; Hamalainen, M.S.; Kajola, M.J.; Knuutila, J.E.F.; Lounasmaa, O.V.; Simola, J.T.; Vilkman, V.A. (Low Temperature Lab.); Tesche, C.D. (Thomas J. Watson Research Center)

    1991-03-01

    This paper reviews basic principles of magnetoencephalography (MEG) and neuromagnetic instrumentation. The authors' 24-channel system, based on planar gradiometer coils and dc-SQUIDs, is then described. Finally, recent MEG-experiments on human somatotopy and focal epilepsy, carried out in the authors' laboratory, are presented.

  7. Brain stem auditory potentials evoked by clicks in the presence of high-pass filtered noise in dogs.

    PubMed

    Poncelet, L; Deltenre, P; Coppens, A; Michaux, C; Coussart, E

    2006-04-01

    This study evaluates the effects of a high-frequency hearing loss, simulated by the high-pass-noise masking method, on the characteristics of click-evoked brain stem auditory evoked potentials (BAEP) in dogs. BAEP were obtained in response to rarefaction and condensation click stimuli from 60 dB normal hearing level (NHL, corresponding to 89 dB sound pressure level) to wave V threshold, using steps of 5 dB, in eleven 58- to 80-day-old Beagle puppies. Responses were added, providing an equivalent to alternate polarity clicks, and subtracted, providing the rarefaction-condensation potential (RCDP). The procedure was repeated while constant-level, high-pass filtered (HPF) noise was superimposed on the click. Cut-off frequencies of the successively used filters were 8, 4, 2 and 1 kHz. For each condition, wave V and RCDP thresholds, and the slope of the wave V latency-intensity curve (LIC), were collected. The intensity range at which RCDP could not be recorded (pre-RCDP range) was calculated. Compared with the no-noise condition, the pre-RCDP range significantly diminished and the wave V threshold significantly increased when the superimposed HPF noise reached the 4 kHz area. The wave V LIC slope became significantly steeper with the 2 kHz HPF noise. In this non-invasive model of high-frequency hearing loss, impaired hearing of frequencies from 8 kHz and above escaped detection through click BAEP study in dogs. Frequencies above 13 kHz were, however, not specifically addressed in this study.
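
    The derivation of the two response components mentioned above is simple arithmetic on the averaged responses to the two click polarities: their sum approximates an alternate-polarity recording, and their difference yields the rarefaction-condensation difference potential (RCDP). A minimal NumPy sketch follows; the division by two, which keeps the amplitude scale, is a choice of this sketch rather than something stated in the abstract.

    ```python
    import numpy as np

    def alternate_and_rcdp(resp_rarefaction, resp_condensation):
        """Combine averaged responses to opposite click polarities.

        The sum emulates an alternate-polarity average (cancelling stimulus-
        polarity-following components); the difference is the rarefaction-
        condensation potential (RCDP) described in the abstract."""
        alternate = (resp_rarefaction + resp_condensation) / 2.0
        rcdp = (resp_rarefaction - resp_condensation) / 2.0
        return alternate, rcdp

    # Stand-in averaged sweeps: 10 ms at 20 kHz.
    r = np.random.randn(200) * 1e-6
    c = np.random.randn(200) * 1e-6
    alt, rcdp = alternate_and_rcdp(r, c)
    ```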

  8. Multichannel error correction code decoder

    NASA Technical Reports Server (NTRS)

    Wagner, Paul K.; Ivancic, William D.

    1993-01-01

    A brief overview of a processing satellite for a mesh very-small-aperture (VSAT) communications network is provided. The multichannel error correction code (ECC) decoder system, the uplink signal generation and link simulation equipment, and the time-shared decoder are described. The testing is discussed. Applications of the time-shared decoder are recommended.

  9. Central auditory function of deafness genes.

    PubMed

    Willaredt, Marc A; Ebbers, Lena; Nothwang, Hans Gerd

    2014-06-01

    The highly variable benefit of hearing devices is a serious challenge in auditory rehabilitation. Various factors contribute to this phenomenon such as the diversity in ear defects, the different extent of auditory nerve hypoplasia, the age of intervention, and cognitive abilities. Recent analyses indicate that, in addition, central auditory functions of deafness genes have to be considered in this context. Since reduced neuronal activity acts as the common denominator in deafness, it is widely assumed that peripheral deafness influences development and function of the central auditory system in a stereotypical manner. However, functional characterization of transgenic mice with mutated deafness genes demonstrated gene-specific abnormalities in the central auditory system as well. A frequent function of deafness genes in the central auditory system is supported by a genome-wide expression study that revealed significant enrichment of these genes in the transcriptome of the auditory brainstem compared to the entire brain. Here, we will summarize current knowledge of the diverse central auditory functions of deafness genes. We furthermore propose the intimately interwoven gene regulatory networks governing development of the otic placode and the hindbrain as a mechanistic explanation for the widespread expression of these genes beyond the cochlea. We conclude that better knowledge of central auditory dysfunction caused by genetic alterations in deafness genes is required. In combination with improved genetic diagnostics becoming currently available through novel sequencing technologies, this information will likely contribute to better outcome prediction of hearing devices.

  10. Auditory short-term memory in the primate auditory cortex

    PubMed Central

    Scott, Brian H.; Mishkin, Mortimer

    2015-01-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active ‘working memory’ bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a ‘match’ stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. PMID:26541581

  11. [Auditory fatigue].

    PubMed

    Sanjuán Juaristi, Julio; Sanjuán Martínez-Conde, Mar

    2015-01-01

    Given the relevance of possible hearing losses due to sound overloads and the short list of references of objective procedures for their study, we provide a technique that gives precise data about the audiometric profile and recruitment factor. Our objectives were to determine peripheral fatigue, through the cochlear microphonic response to sound pressure overload stimuli, as well as to measure recovery time, establishing parameters for differentiation with regard to current psychoacoustic and clinical studies. We used specific instruments for the study of cochlear microphonic response, plus a function generator that provided us with stimuli of different intensities and harmonic components. In Wistar rats, we first measured the normal microphonic response and then the effect of auditory fatigue on it. Using a 60 dB pure tone acoustic stimulation, we obtained a microphonic response at 20 dB. We then caused fatigue with 100 dB of the same frequency, reaching a loss of approximately 11 dB after 15 minutes; after that, the deterioration slowed and did not exceed 15 dB. By means of complex random tone maskers or white noise, no fatigue was caused to the sensory receptors, not even at levels of 100 dB and over an hour of overstimulation. No fatigue was observed in terms of sensory receptors. Deterioration of peripheral perception through intense overstimulation may be due to biochemical changes of desensitisation due to exhaustion. Auditory fatigue in subjective clinical trials presumably affects supracochlear sections. The auditory fatigue tests found are not in line with those obtained subjectively in clinical and psychoacoustic trials. Copyright © 2013 Elsevier España, S.L.U. y Sociedad Española de Otorrinolaringología y Patología Cérvico-Facial. All rights reserved.

  12. Novel Methods for Measuring Depth of Anesthesia by Quantifying Dominant Information Flow in Multichannel EEGs

    PubMed Central

    Choi, Byung-Moon; Noh, Gyu-Jeong

    2017-01-01

    In this paper, we propose novel methods for measuring depth of anesthesia (DOA) by quantifying dominant information flow in multichannel EEGs. Conventional methods mainly use a few EEG channels independently, and most multichannel EEG based studies are limited to specific regions of the brain. Therefore, the function of the cerebral cortex over wide brain regions is hardly reflected in DOA measurement. Here, DOA is measured by the quantification of dominant information flow obtained from principal bipartition. Three bipartitioning methods are used to detect the dominant information flow in entire EEG channels, and the dominant information flow is quantified by calculating information entropy. High correlation between the proposed measures and the plasma concentration of propofol is confirmed from the experimental results of clinical data in 39 subjects. To illustrate the performance of the proposed methods more easily, we present the results for multichannel EEG on a two-dimensional (2D) brain map. PMID:28408923

  13. Novel Methods for Measuring Depth of Anesthesia by Quantifying Dominant Information Flow in Multichannel EEGs.

    PubMed

    Cha, Kab-Mun; Choi, Byung-Moon; Noh, Gyu-Jeong; Shin, Hyun-Chool

    2017-01-01

    In this paper, we propose novel methods for measuring depth of anesthesia (DOA) by quantifying dominant information flow in multichannel EEGs. Conventional methods mainly use a few EEG channels independently, and most multichannel EEG based studies are limited to specific regions of the brain. Therefore, the function of the cerebral cortex over wide brain regions is hardly reflected in DOA measurement. Here, DOA is measured by the quantification of dominant information flow obtained from principal bipartition. Three bipartitioning methods are used to detect the dominant information flow in entire EEG channels, and the dominant information flow is quantified by calculating information entropy. High correlation between the proposed measures and the plasma concentration of propofol is confirmed from the experimental results of clinical data in 39 subjects. To illustrate the performance of the proposed methods more easily, we present the results for multichannel EEG on a two-dimensional (2D) brain map.
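
    As a rough illustration of the kind of quantities involved in assessing a channel bipartition, the toy sketch below splits the channels into two groups and estimates a histogram-based mutual information between the group-mean signals, together with a Shannon entropy helper. This is not the authors' bipartitioning or information-flow algorithm; the grouping, estimator, and array sizes are assumptions made only for illustration.

    ```python
    import numpy as np

    def shannon_entropy(x, bins=32):
        """Shannon entropy (bits) of a 1-D signal, estimated from a histogram."""
        counts, _ = np.histogram(x, bins=bins)
        p = counts[counts > 0] / counts.sum()
        return -np.sum(p * np.log2(p))

    def bipartition_mutual_information(eeg, mask, bins=32):
        """Toy stand-in for coupling across a channel bipartition: histogram-based
        mutual information (bits) between the mean signals of the two groups."""
        a, b = eeg[mask].mean(axis=0), eeg[~mask].mean(axis=0)
        joint, _, _ = np.histogram2d(a, b, bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

    # Example: 32 hypothetical channels, 10 s at 250 Hz, split into two halves.
    eeg = np.random.default_rng(0).normal(size=(32, 2500))
    mask = np.arange(32) < 16
    print(bipartition_mutual_information(eeg, mask), shannon_entropy(eeg[0]))
    ```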

  14. Atypical brain lateralisation in the auditory cortex and language performance in 3- to 7-year-old children with high-functioning autism spectrum disorder: a child-customised magnetoencephalography (MEG) study

    PubMed Central

    2013-01-01

    chronological age was a significant predictor of shorter P50m latency in the right hemisphere. Conclusions: Using a child-customised MEG device, we studied the P50m component that was evoked through binaural human voice stimuli in young ASD and TD children to examine differences in auditory cortex function that are associated with language development. Our results suggest that there is atypical brain function in the auditory cortex in young children with ASD, regardless of language development. PMID:24103585

  15. Comparison of eye tracking, electrooculography and an auditory brain-computer interface for binary communication: a case study with a participant in the locked-in state.

    PubMed

    Käthner, Ivo; Kübler, Andrea; Halder, Sebastian

    2015-09-04

    In this study, we evaluated electrooculography (EOG), an eye tracker and an auditory brain-computer interface (BCI) as access methods to augmentative and alternative communication (AAC). The participant of the study has been in the locked-in state (LIS) for 6 years due to amyotrophic lateral sclerosis. He was able to communicate with slow residual eye movements, but had no means of partner independent communication. We discuss the usability of all tested access methods and the prospects of using BCIs as an assistive technology. Within four days, we tested whether EOG, eye tracking and a BCI would allow the participant in LIS to make simple selections. We optimized the parameters in an iterative procedure for all systems. The participant was able to gain control over all three systems. Nonetheless, due to the level of proficiency previously achieved with his low-tech AAC method, he did not consider using any of the tested systems as an additional communication channel. However, he would consider using the BCI once control over his eye muscles would no longer be possible. He rated the ease of use of the BCI as the highest among the tested systems, because no precise eye movements were required; but also as the most tiring, due to the high level of attention needed to operate the BCI. In this case study, the partner based communication was possible due to the good care provided and the proficiency achieved by the interlocutors. To ease the transition from a low-tech AAC method to a BCI once control over all muscles is lost, it must be simple to operate. For persons, who rely on AAC and are affected by a progressive neuromuscular disease, we argue that a complementary approach, combining BCIs and standard assistive technology, can prove valuable to achieve partner independent communication and ease the transition to a purely BCI based approach. Finally, we provide further evidence for the importance of a user-centered approach in the design of new assistive devices.

  16. Identifying cochlear implant channels with poor electrode-neuron interfaces: electrically evoked auditory brain stem responses measured with the partial tripolar configuration.

    PubMed

    Bierer, Julie Arenberg; Faulkner, Kathleen F; Tremblay, Kelly L

    2011-01-01

    The goal of this study was to compare cochlear implant behavioral measures and electrically evoked auditory brain stem responses (EABRs) obtained with a spatially focused electrode configuration. It has been shown previously that channels with high thresholds, when measured with the tripolar configuration, exhibit relatively broad psychophysical tuning curves. The elevated threshold and degraded spatial/spectral selectivity of such channels are consistent with a poor electrode-neuron interface, defined as suboptimal electrode placement or reduced nerve survival. However, the psychophysical methods required to obtain these data are time intensive and may not be practical during a clinical mapping session, especially for young children. Here, we have extended the previous investigation to determine whether a physiological approach could provide a similar assessment of channel functionality. We hypothesized that, in accordance with the perceptual measures, higher EABR thresholds would correlate with steeper EABR amplitude growth functions, reflecting a degraded electrode-neuron interface. Data were collected from six cochlear implant listeners implanted with the HiRes 90k cochlear implant (Advanced Bionics). Single-channel thresholds and most comfortable listening levels were obtained for stimuli that varied in presumed electrical field size by using the partial tripolar configuration, for which a fraction of current (σ) from a center active electrode returns through two neighboring electrodes and the remainder through a distant indifferent electrode. EABRs were obtained in each subject for the two channels having the highest and lowest tripolar (σ = 1 or 0.9) behavioral threshold. Evoked potentials were measured with both the monopolar (σ = 0) and a more focused partial tripolar (σ ≥ 0.50) configuration. Consistent with previous studies, EABR thresholds were highly and positively correlated with behavioral thresholds obtained with both the monopolar and partial
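
    The current fractions implied by the partial tripolar parameter σ described above can be written out explicitly: a fraction σ of the active-electrode current returns through the two flanking electrodes (split equally here, an assumption of this sketch) and the remainder through the distant indifferent electrode. A tiny Python sketch, with a hypothetical current level:

    ```python
    def partial_tripolar_currents(i_active, sigma):
        """Return currents for the partial tripolar configuration described above.

        A fraction sigma of the active-electrode current returns through the two
        flanking electrodes (assumed split equally in this sketch); the remainder
        returns through the distant indifferent electrode.
        sigma = 0 is monopolar, sigma = 1 is full tripolar."""
        per_flank = -sigma * i_active / 2.0
        indifferent = -(1.0 - sigma) * i_active
        return {"active": i_active, "flank_each": per_flank, "indifferent": indifferent}

    # Example: a 1 mA pulse with the sigma = 0.9 focused configuration mentioned above.
    print(partial_tripolar_currents(1.0e-3, 0.9))
    ```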

  17. The chicken immediate-early gene ZENK is expressed in the medio-rostral neostriatum/hyperstriatum ventrale, a brain region involved in acoustic imprinting, and is up-regulated after exposure to an auditory stimulus.

    PubMed

    Thode, C; Bock, J; Braun, K; Darlison, M G

    2005-01-01

    The immediate-early gene zenk (an acronym for the avian orthologue of the mammalian genes zif-268, egr-1, ngfi-a and krox-24) has been extensively employed, in studies on oscine birds, as a marker of neuronal activity to reveal forebrain structures that are involved in the memory processes associated with the acquisition, perception and production of song. Audition-induced expression of this gene, in brain, has also recently been reported for the domestic chicken (Gallus gallus domesticus) and the Japanese quail (Coturnix coturnix japonica). Whilst the anatomical distribution of zenk expression was described for the quail, corresponding data for the chicken were not reported. We have, therefore, used in situ hybridisation to localise the mRNA that encodes the product of the zenk gene (which we call ZENK) within the brain of the 1-day-old chick. We demonstrate that this transcript is present in a number of forebrain structures including the medio-rostral neostriatum/hyperstriatum ventrale (MNH), a region that has been strongly implicated in auditory imprinting (which is a form of recognition memory), and Field L, the avian analog of the mammalian auditory cortex. Because of this pattern of gene expression, we have compared the level of the ZENK mRNA in chicks that have been subjected to a 30-min acoustic imprinting paradigm and in untrained controls. Our results reveal a significant increase (P ≤ 0.05) in the level of the ZENK mRNA in MNH and Field L, and in the two forebrain hemispheres; no increase was seen in the ectostriatum, which is a visual projection area. The data obtained implicate the immediate-early gene, zenk, in auditory imprinting, which is an established model of juvenile learning. In addition, our results indicate that the ZENK mRNA may be used as a molecular marker for MNH, a region that is difficult to anatomically and histochemically delineate.

  18. Multichannel Error Correction Code Decoder

    NASA Technical Reports Server (NTRS)

    1996-01-01

    NASA Lewis Research Center's Digital Systems Technology Branch has an ongoing program in modulation, coding, onboard processing, and switching. Recently, NASA completed a project to incorporate a time-shared decoder into the very-small-aperture terminal (VSAT) onboard-processing mesh architecture. The primary goal was to demonstrate a time-shared decoder for a regenerative satellite that uses asynchronous, frequency-division multiple access (FDMA) uplink channels, thereby identifying hardware and power requirements and fault-tolerant issues that would have to be addressed in an operational system. A secondary goal was to integrate and test, in a system environment, two NASA-sponsored, proof-of-concept hardware deliverables: the Harris Corp. high-speed Bose Chaudhuri-Hocquenghem (BCH) codec and the TRW multichannel demultiplexer/demodulator (MCDD). A beneficial byproduct of this project was the development of flexible, multichannel-uplink signal-generation equipment.

  19. Web-based multi-channel analyzer

    DOEpatents

    Gritzo, Russ E.

    2003-12-23

    The present invention provides an improved multi-channel analyzer designed to conveniently gather, process, and distribute spectrographic pulse data. The multi-channel analyzer may operate on a computer system having memory, a processor, and the capability to connect to a network and to receive digitized spectrographic pulses. The multi-channel analyzer may have a software module integrated with a general-purpose operating system that may receive digitized spectrographic pulses at rates of at least 10,000 pulses per second. The multi-channel analyzer may further have a user-level software module that may receive user-specified controls dictating the operation of the multi-channel analyzer, making the multi-channel analyzer customizable by the end-user. The user-level software may further categorize and conveniently distribute spectrographic pulse data employing non-proprietary, standard communication protocols and formats.
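
    The core operation of any multichannel analyzer, binning digitized pulse heights into a running per-channel histogram, can be sketched in a few lines. This is a generic illustration in Python/NumPy, not the patented software; the ADC range and channel count are assumptions.

    ```python
    import numpy as np

    def accumulate_spectrum(pulse_heights, n_channels=1024, adc_max=4095, spectrum=None):
        """Bin a batch of digitized pulse heights into an MCA-style spectrum.

        pulse_heights : integer ADC values for each detected pulse.
        Returns the running per-channel count histogram."""
        if spectrum is None:
            spectrum = np.zeros(n_channels, dtype=np.int64)
        channels = (np.asarray(pulse_heights) * n_channels) // (adc_max + 1)
        np.add.at(spectrum, channels, 1)          # in-place accumulation per channel
        return spectrum

    # Example: a 10,000-pulse batch (the rate mentioned above) folded into 1024 channels.
    rng = np.random.default_rng(1)
    batch = rng.integers(0, 4096, size=10_000)
    spectrum = accumulate_spectrum(batch)
    ```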

  20. Scalable multichannel MRI data acquisition system.

    PubMed

    Bodurka, Jerzy; Ledden, Patrick J; van Gelderen, Peter; Chu, Renxin; de Zwart, Jacco A; Morris, Doug; Duyn, Jeff H

    2004-01-01

    A scalable multichannel digital MRI receiver system was designed to achieve high bandwidth echo-planar imaging (EPI) acquisitions for applications such as BOLD-fMRI. The modular system design allows for easy extension to an arbitrary number of channels. A 16-channel receiver was developed and integrated with a General Electric (GE) Signa 3T VH/3 clinical scanner. Receiver performance was evaluated on phantoms and human volunteers using a custom-built 16-element receive-only brain surface coil array. At an output bandwidth of 1 MHz, a 100% acquisition duty cycle was achieved. Overall system noise figure and dynamic range were better than 0.85 dB and 84 dB, respectively. During repetitive EPI scanning on phantoms, the relative temporal standard deviation of the image intensity time-course was below 0.2%. As compared to the product birdcage head coil, 16-channel reception with the custom array yielded a nearly 6-fold SNR gain in the cerebral cortex and a 1.8-fold SNR gain in the center of the brain. The excellent system stability combined with the increased sensitivity and SENSE capabilities of 16-channel coils are expected to significantly benefit and enhance fMRI applications. Published 2003 Wiley-Liss, Inc.
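
    The stability figure quoted above (relative temporal standard deviation of the image intensity time-course below 0.2%) corresponds to a simple per-voxel computation, sketched here with synthetic stand-in data rather than actual EPI volumes.

    ```python
    import numpy as np

    def relative_temporal_std(timeseries):
        """Percent temporal standard deviation of an EPI signal time-course.

        timeseries : (n_volumes, ...) image intensities over repeated scans.
        Returns std/mean * 100 along the time axis (axis 0), per voxel."""
        mean = timeseries.mean(axis=0)
        std = timeseries.std(axis=0, ddof=1)
        return 100.0 * std / mean

    # Example: 200 phantom volumes of a 64 x 64 slice (synthetic stand-in data).
    vols = 1000.0 + np.random.default_rng(2).normal(0, 1.5, size=(200, 64, 64))
    print(relative_temporal_std(vols).mean())   # about 0.15 % for this synthetic series
    ```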

  1. Adaptive enhancement of magnetoencephalographic signals via multichannel filtering

    SciTech Connect

    Lewis, P.S.

    1989-01-01

    A time-varying spatial/temporal filter for enhancing multichannel magnetoencephalographic (MEG) recordings of evoked responses is described. This filter is based on projections derived from a combination of measured data and a priori models of the expected response. It produces estimates of the evoked fields in single trial measurements. These estimates can reduce the need for signal averaging in some situations. The filter uses the a priori model information to enhance responses where they exist, but avoids creating responses that do not exist. Examples are included of the filter's application to both MEG single trial data containing an auditory evoked field and control data with no evoked field. 5 refs., 7 figs.
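
    A minimal sketch of the general idea, projecting each multichannel sample onto the spatial subspace spanned by a priori field patterns, is shown below. It is not the time-varying filter described in the report; the pattern matrix and dimensions are hypothetical.

    ```python
    import numpy as np

    def subspace_projection_filter(data, model_patterns):
        """Project multichannel data onto the span of a priori field patterns.

        data : (n_channels, n_samples) single-trial MEG recording.
        model_patterns : (n_channels, n_components) expected spatial patterns
        (e.g. from averaged evoked responses or a forward model).
        Returns a spatially filtered estimate of the evoked contribution."""
        U, _, _ = np.linalg.svd(model_patterns, full_matrices=False)
        projector = U @ U.T                  # orthogonal projector onto the model span
        return projector @ data

    # Example with hypothetical sizes: 24 channels, 2 model patterns, 600 samples.
    rng = np.random.default_rng(3)
    trial = rng.normal(size=(24, 600))
    patterns = rng.normal(size=(24, 2))
    evoked_estimate = subspace_projection_filter(trial, patterns)
    ```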

  2. Multichannel analysis of surface waves

    USGS Publications Warehouse

    Park, C.B.; Miller, R.D.; Xia, J.

    1999-01-01

    The frequency-dependent properties of Rayleigh-type surface waves can be utilized for imaging and characterizing the shallow subsurface. Most surface-wave analysis relies on the accurate calculation of phase velocities for the horizontally traveling fundamental-mode Rayleigh wave acquired by stepping out a pair of receivers at intervals based on calculated ground roll wavelengths. Interference by coherent source-generated noise inhibits the reliability of shear-wave velocities determined through inversion of the whole wave field. Among these nonplanar, nonfundamental-mode Rayleigh waves (noise) are body waves, scattered and nonsource-generated surface waves, and higher-mode surface waves. The degree to which each of these types of noise contaminates the dispersion curve and, ultimately, the inverted shear-wave velocity profile is dependent on frequency as well as distance from the source. Multichannel recording permits effective identification and isolation of noise according to distinctive trace-to-trace coherency in arrival time and amplitude. An added advantage is the speed and redundancy of the measurement process. Decomposition of a multichannel record into a time variable-frequency format, similar to an uncorrelated Vibroseis record, permits analysis and display of each frequency component in a unique and continuous format. Coherent noise contamination can then be examined and its effects appraised in both frequency and offset space. Separation of frequency components permits real-time maximization of the S/N ratio during acquisition and subsequent processing steps. Linear separation of each ground roll frequency component allows calculation of phase velocities by simply measuring the linear slope of each frequency component. Breaks in coherent surface-wave arrivals, observable on the decomposed record, can be compensated for during acquisition and processing. Multichannel recording permits single-measurement surveying of a broad depth range, high levels of
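
    The statement that phase velocities follow from the linear slope of each frequency component across offset can be illustrated directly: Fourier-transform each trace, unwrap the phase across receivers at the frequency of interest, fit the slope dφ/dx, and take c = ω/|dφ/dx|. The sketch below (Python/NumPy, synthetic plane-wave data, hypothetical geometry) is a simplified stand-in for a full MASW dispersion analysis.

    ```python
    import numpy as np

    def phase_velocity(record, dt, dx, freq_hz):
        """Phase velocity at one frequency from a multichannel shot record.

        record : (n_traces, n_samples) array, traces ordered by offset.
        dt, dx : sample interval (s) and receiver spacing (m)."""
        spectra = np.fft.rfft(record, axis=1)
        freqs = np.fft.rfftfreq(record.shape[1], d=dt)
        k = np.argmin(np.abs(freqs - freq_hz))               # nearest frequency bin
        phase = np.unwrap(np.angle(spectra[:, k]))           # phase versus offset
        offsets = np.arange(record.shape[0]) * dx
        slope = np.polyfit(offsets, phase, 1)[0]             # dphi/dx in rad/m
        return 2 * np.pi * freqs[k] / abs(slope)             # c = omega / |dphi/dx|

    # Synthetic check: a 12 Hz plane wave travelling at 200 m/s across 24 geophones.
    dt, dx, c_true, f = 0.001, 1.0, 200.0, 12.0
    t = np.arange(2000) * dt
    x = np.arange(24)[:, None] * dx
    record = np.cos(2 * np.pi * f * (t[None, :] - x / c_true))
    print(phase_velocity(record, dt, dx, f))                 # ~200 m/s
    ```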

  3. On-Line Statistical Segmentation of a Non-Speech Auditory Stream in Neonates as Demonstrated by Event-Related Brain Potentials

    ERIC Educational Resources Information Center

    Kudo, Noriko; Nonaka, Yulri; Mizuno, Noriko; Mizuno, Katsumi; Okanoya, Kazuo

    2011-01-01

    The ability to statistically segment a continuous auditory stream is one of the most important preparations for initiating language learning. Such ability is available to human infants at 8 months of age, as shown by a behavioral measurement. However, behavioral study alone cannot determine how early this ability is available. A recent study using…

  4. On-Line Statistical Segmentation of a Non-Speech Auditory Stream in Neonates as Demonstrated by Event-Related Brain Potentials

    ERIC Educational Resources Information Center

    Kudo, Noriko; Nonaka, Yulri; Mizuno, Noriko; Mizuno, Katsumi; Okanoya, Kazuo

    2011-01-01

    The ability to statistically segment a continuous auditory stream is one of the most important preparations for initiating language learning. Such ability is available to human infants at 8 months of age, as shown by a behavioral measurement. However, behavioral study alone cannot determine how early this ability is available. A recent study using…

  5. Electrophysiological measurement of human auditory function

    NASA Technical Reports Server (NTRS)

    Galambos, R.

    1975-01-01

    Contingent negative variations and the presence and amplitudes of brain potentials evoked by sound are considered. Evidence is presented that the evoked brain stem response to auditory stimuli is clearly related to brain events associated with cognitive processing of acoustic signals, since their properties depend upon where the listener directs his attention, whether the signal is an expected event or a surprise, and when a sound that is listened for is finally heard.

  6. Material identification with multichannel radiographs

    NASA Astrophysics Data System (ADS)

    Collins, Noelle; Jimenez, Edward S.; Thompson, Kyle R.

    2017-02-01

    This work aims to validate previous exploratory work done to characterize materials by matching their attenuation profiles using a multichannel radiograph given an initial energy spectrum. The experiment was performed in order to evaluate the effects of noise on the resulting attenuation profiles, which was ignored in simulation. Spectrum measurements have also been collected from various materials of interest. Additionally, a MATLAB optimization algorithm has been applied to these candidate spectrum measurements in order to extract an estimate of the attenuation profile. Being able to characterize materials through this nondestructive method has an extensive range of applications for a wide variety of fields, including quality assessment, industry, and national security.
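
    A simplified view of matching attenuation profiles starts from the Beer-Lambert relation per energy channel, I = I0·exp(-μt), so μ = -ln(I/I0)/t. The sketch below recovers a per-channel attenuation profile from synthetic incident and transmitted spectra; it is a Python stand-in, not the MATLAB optimization used in the paper, and the channel count and thickness are assumptions.

    ```python
    import numpy as np

    def attenuation_profile(incident_counts, transmitted_counts, thickness_cm):
        """Per-energy-channel linear attenuation coefficient from Beer-Lambert:
        I = I0 * exp(-mu * t)  =>  mu = -ln(I / I0) / t.

        incident_counts, transmitted_counts : spectra binned by energy channel.
        thickness_cm : known sample thickness along the beam."""
        ratio = np.clip(transmitted_counts / incident_counts, 1e-12, None)
        return -np.log(ratio) / thickness_cm

    # Example with synthetic spectra (hypothetical 128-channel detector, 2 cm sample).
    rng = np.random.default_rng(4)
    i0 = rng.integers(5_000, 10_000, size=128).astype(float)
    mu_true = np.linspace(0.8, 0.2, 128)                   # 1/cm, decreasing with energy
    i1 = i0 * np.exp(-mu_true * 2.0)
    print(np.allclose(attenuation_profile(i0, i1, 2.0), mu_true))   # True
    ```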

  7. Multichanneled puzzle-like encryption

    NASA Astrophysics Data System (ADS)

    Amaya, Dafne; Tebaldi, Myrian; Torroba, Roberto; Bolognini, Néstor

    2008-07-01

    In order to increase the security of data transmission we propose a multichanneled puzzle-like encryption method. The basic principle relies on decomposing the input information in the same way as the pieces of a puzzle. Each decomposed part of the input object is encrypted separately in a 4f double random phase mask architecture, by setting the optical parameters to a determined state. Each parameter set defines a channel. In order to retrieve the whole information it is necessary to properly decrypt and compose all channels. Computer simulations that confirm our proposal are presented.
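
    For a single puzzle piece, the 4f double random phase encoding step can be emulated numerically with two unit-amplitude random phase masks and a Fourier transform pair; in the multichannel scheme each piece would use its own parameter set (here represented by its own seed). This NumPy sketch is an idealized stand-in for the optical architecture, not the authors' implementation.

    ```python
    import numpy as np

    def drpe_encrypt(piece, seed):
        """Double random phase encoding of one puzzle piece (NumPy stand-in for
        the optical 4f system). Each channel would use its own seed/masks."""
        rng = np.random.default_rng(seed)
        r1 = np.exp(2j * np.pi * rng.random(piece.shape))    # input-plane phase mask
        r2 = np.exp(2j * np.pi * rng.random(piece.shape))    # Fourier-plane phase mask
        cipher = np.fft.ifft2(np.fft.fft2(piece * r1) * r2)
        return cipher, (r1, r2)

    def drpe_decrypt(cipher, masks):
        """Invert the encoding with the conjugate phase masks."""
        r1, r2 = masks
        field = np.fft.ifft2(np.fft.fft2(cipher) * np.conj(r2))
        return (field * np.conj(r1)).real

    # Round-trip check on one stand-in 64 x 64 piece.
    piece = np.random.default_rng(5).random((64, 64))
    cipher, masks = drpe_encrypt(piece, seed=7)
    print(np.allclose(drpe_decrypt(cipher, masks), piece))   # True
    ```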

  8. Multichannel Analyzer Built from a Microcomputer.

    ERIC Educational Resources Information Center

    Spencer, C. D.; Mueller, P.

    1979-01-01

    Describes a multichannel analyzer built using eight-bit S-100 bus microcomputer hardware. The output modes are an oscilloscope display, printed data, and data transfer to another computer. Discusses the system's hardware, software, costs, and advantages relative to commercial multichannel analyzers. (Author/GA)

  9. Multichannel Learning: Connecting All to Education.

    ERIC Educational Resources Information Center

    Anzalone, Steve, Ed.

    Drafted for the Learning Technologies for Basic Education project, this document assembles case studies which provide an overview of multichannel learning, that is, the reinforcement of learning through the use of several instructional paths and various media including print, broadcast, and online. Through the cases, multichannel learning is depicted as an…

  10. Multichannel Compression, Temporal Cues, and Audibility.

    ERIC Educational Resources Information Center

    Souza, Pamela E.; Turner, Christopher W.

    1998-01-01

    The effect of the reduction of the temporal envelope produced by multichannel compression on recognition was examined in 16 listeners with hearing loss, with particular focus on audibility of the speech signal. Multichannel compression improved speech recognition when superior audibility was provided by a two-channel compression system over linear…

  11. A Student-Made Inexpensive Multichannel Pipet

    ERIC Educational Resources Information Center

    Dragojlovic, Veljko

    2009-01-01

    An inexpensive multichannel pipet designed to deliver small volumes of liquid simultaneously to wells in a multiwell plate can be prepared by students in a single laboratory period. The multichannel pipet is made of disposable plastic 1 mL syringes and drilled plastic plates, which are used to make plunger and barrel assemblies. Application of the…

  12. A Student-Made Inexpensive Multichannel Pipet

    ERIC Educational Resources Information Center

    Dragojlovic, Veljko

    2009-01-01

    An inexpensive multichannel pipet designed to deliver small volumes of liquid simultaneously to wells in a multiwell plate can be prepared by students in a single laboratory period. The multichannel pipet is made of disposable plastic 1 mL syringes and drilled plastic plates, which are used to make plunger and barrel assemblies. Application of the…

  13. Multichannel Analyzer Built from a Microcomputer.

    ERIC Educational Resources Information Center

    Spencer, C. D.; Mueller, P.

    1979-01-01

    Describes a multichannel analyzer built using eight-bit S-100 bus microcomputer hardware. The output modes are an oscilloscope display, printed data, and data transfer to another computer. Discusses the system's hardware, software, costs, and advantages relative to commercial multichannel analyzers. (Author/GA)

  14. Visual influences on auditory spatial learning

    PubMed Central

    King, Andrew J.

    2008-01-01

    The visual and auditory systems frequently work together to facilitate the identification and localization of objects and events in the external world. Experience plays a critical role in establishing and maintaining congruent visual–auditory associations, so that the different sensory cues associated with targets that can be both seen and heard are synthesized appropriately. For stimulus location, visual information is normally more accurate and reliable and provides a reference for calibrating the perception of auditory space. During development, vision plays a key role in aligning neural representations of space in the brain, as revealed by the dramatic changes produced in auditory responses when visual inputs are altered, and is used throughout life to resolve short-term spatial conflicts between these modalities. However, accurate, and even supra-normal, auditory localization abilities can be achieved in the absence of vision, and the capacity of the mature brain to relearn to localize sound in the presence of substantially altered auditory spatial cues does not require visuomotor feedback. Thus, while vision is normally used to coordinate information across the senses, the neural circuits responsible for spatial hearing can be recalibrated in a vision-independent fashion. Nevertheless, early multisensory experience appears to be crucial for the emergence of an ability to match signals from different sensory modalities and therefore for the outcome of audiovisual-based rehabilitation of deaf patients in whom hearing has been restored by cochlear implantation. PMID:18986967

  15. Human auditory neuroimaging of intensity and loudness.

    PubMed

    Uppenkamp, Stefan; Röhl, Markus

    2014-01-01

    The physical intensity of a sound, usually expressed in dB on a logarithmic ratio scale, can easily be measured using technical equipment. Loudness is the perceptual correlate of sound intensity, and is usually determined by means of some sort of psychophysical scaling procedure. The interrelation of sound intensity and perceived loudness is still a matter of debate, and the physiological correlate of loudness perception in the human auditory pathway is not completely understood. Various studies indicate that the activation in human auditory cortex is more a representation of loudness sensation rather than of physical sound pressure level. This raises the questions (1), at what stage or stages in the ascending auditory pathway is the transformation of the physical stimulus into its perceptual correlate completed, and (2), to what extent other factors affecting individual loudness judgements might modulate the brain activation as registered by auditory neuroimaging. An overview is given about recent studies on the effects of sound intensity, duration, bandwidth and individual hearing status on the activation in the human auditory system, as measured by various approaches in auditory neuroimaging. This article is part of a Special Issue entitled Human Auditory Neuroimaging.
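
    As a worked complement to the distinction drawn above between physical intensity (in dB) and perceived loudness, the sketch below computes sound pressure level re 20 µPa and the textbook phon-to-sone relation in which loudness roughly doubles per 10-phon step above 40 phon. The helper names are hypothetical and the formulas are standard conventions, not anything prescribed by the record itself.

    import numpy as np

    def spl_db(pressure_pa, p_ref=20e-6):
        # Sound pressure level in dB relative to the 20 micropascal reference.
        return 20.0 * np.log10(np.asarray(pressure_pa) / p_ref)

    def sones_from_phons(phons):
        # Conventional rule of thumb: loudness doubles for every 10 phon above 40 phon.
        return 2.0 ** ((np.asarray(phons) - 40.0) / 10.0)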

  16. McGurk illusion recalibrates subsequent auditory perception.

    PubMed

    Lüttke, Claudia S; Ekman, Matthias; van Gerven, Marcel A J; de Lange, Floris P

    2016-09-09

    Visual information can alter auditory perception. This is clearly illustrated by the well-known McGurk illusion, where an auditory /aba/ and a visual /aga/ are merged into the percept 'ada'. It is less clear, however, whether such a change in perception may recalibrate subsequent perception. Here we asked whether the altered auditory perception due to the McGurk illusion affects subsequent auditory perception, i.e. whether this process of fusion may cause a recalibration of the auditory boundaries between phonemes. Participants categorized auditory and audiovisual speech stimuli as /aba/, /ada/ or /aga/ while activity patterns in their auditory cortices were recorded using fMRI. Interestingly, following a McGurk illusion, an auditory /aba/ was more often misperceived as 'ada'. Furthermore, we observed a neural counterpart of this recalibration in the early auditory cortex. When the auditory input /aba/ was perceived as 'ada', activity patterns bore stronger resemblance to activity patterns elicited by /ada/ sounds than when they were correctly perceived as /aba/. Our results suggest that upon experiencing the McGurk illusion, the brain shifts the neural representation of an /aba/ sound towards /ada/, culminating in a recalibration in perception of subsequent auditory input.

  17. McGurk illusion recalibrates subsequent auditory perception

    PubMed Central

    Lüttke, Claudia S.; Ekman, Matthias; van Gerven, Marcel A. J.; de Lange, Floris P.

    2016-01-01

    Visual information can alter auditory perception. This is clearly illustrated by the well-known McGurk illusion, where an auditory /aba/ and a visual /aga/ are merged into the percept ‘ada’. It is less clear, however, whether such a change in perception may recalibrate subsequent perception. Here we asked whether the altered auditory perception due to the McGurk illusion affects subsequent auditory perception, i.e. whether this process of fusion may cause a recalibration of the auditory boundaries between phonemes. Participants categorized auditory and audiovisual speech stimuli as /aba/, /ada/ or /aga/ while activity patterns in their auditory cortices were recorded using fMRI. Interestingly, following a McGurk illusion, an auditory /aba/ was more often misperceived as ‘ada’. Furthermore, we observed a neural counterpart of this recalibration in the early auditory cortex. When the auditory input /aba/ was perceived as ‘ada’, activity patterns bore stronger resemblance to activity patterns elicited by /ada/ sounds than when they were correctly perceived as /aba/. Our results suggest that upon experiencing the McGurk illusion, the brain shifts the neural representation of an /aba/ sound towards /ada/, culminating in a recalibration in perception of subsequent auditory input. PMID:27611960

  18. Least squares restoration of multichannel images

    NASA Technical Reports Server (NTRS)

    Galatsanos, Nikolas P.; Katsaggelos, Aggelos K.; Chin, Roland T.; Hillery, Allen D.

    1991-01-01

    Multichannel restoration using both within- and between-channel deterministic information is considered. A multichannel image is a set of image planes that exhibit cross-plane similarity. Existing optimal restoration filters for single-plane images yield suboptimal results when applied to multichannel images, since between-channel information is not utilized. Multichannel least squares restoration filters are developed using the set theoretic and the constrained optimization approaches. A geometric interpretation of the estimates of both filters is given. Color images (three-channel imagery with red, green, and blue components) are considered. Constraints that capture the within- and between-channel properties of color images are developed. Issues associated with the computation of the two estimates are addressed. A spatially adaptive, multichannel least squares filter that utilizes local within- and between-channel image properties is proposed. Experiments using color images are described.
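
    For orientation, the fragment below sketches a constrained least-squares (Tikhonov-regularized) deconvolution applied plane by plane to a color image. It is a simplified stand-in: the filters described above additionally impose between-channel constraints that this per-channel version omits, and the function names and Laplacian regularizer are illustrative assumptions.

    import numpy as np

    def cls_restore(blurred, psf, alpha=0.01):
        # Constrained least-squares restoration of a single image plane via the FFT.
        H = np.fft.fft2(psf, s=blurred.shape)                  # blur transfer function
        lap = np.zeros(blurred.shape)
        lap[:3, :3] = [[0, -1, 0], [-1, 4, -1], [0, -1, 0]]    # smoothness (Laplacian) constraint
        C = np.fft.fft2(lap)
        X = np.conj(H) * np.fft.fft2(blurred) / (np.abs(H) ** 2 + alpha * np.abs(C) ** 2)
        return np.real(np.fft.ifft2(X))

    def restore_color(blurred_rgb, psf, alpha=0.01):
        # Per-channel restoration; a true multichannel filter would also exploit
        # between-channel (cross-plane) information, as the record describes.
        return np.stack([cls_restore(blurred_rgb[..., c], psf, alpha)
                         for c in range(blurred_rgb.shape[-1])], axis=-1)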

  19. Studying brain function with near-infrared spectroscopy concurrently with electroencephalography

    NASA Astrophysics Data System (ADS)

    Tong, Y.; Rooney, E. J.; Bergethon, P. R.; Martin, J. M.; Sassaroli, A.; Ehrenberg, B. L.; Van Toi, Vo; Aggarwal, P.; Ambady, N.; Fantini, S.

    2005-04-01

    Near-infrared spectroscopy (NIRS) has been used for functional brain imaging by employing properly designed source-detector matrices. We demonstrate that by embedding a NIRS source-detector matrix within an electroencephalography (EEG) standard multi-channel cap, we can perform functional brain mapping of hemodynamic response and neuronal response simultaneously. In this study, the P300 endogenous evoked response was generated in human subjects using an auditory odd-ball paradigm while concurrently monitoring the hemodynamic response both spatially and temporally with NIRS. The electrical measurements showed the localization of evoked potential P300, which appeared around 320 ms after the odd-ball stimulus. The NIRS measurements demonstrate a hemodynamic change in the fronto-temporal cortex a few seconds after the appearance of P300.
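
    The P300 in such oddball experiments is conventionally extracted by averaging stimulus-locked EEG epochs. The snippet below is a generic illustration of that averaging step only; the function name, window lengths and baseline choice are assumptions, not the acquisition code used in the study.

    import numpy as np

    def average_evoked_response(eeg, fs, event_samples, pre_s=0.1, post_s=0.6):
        # Cut epochs around each oddball stimulus, baseline-correct, and average.
        pre, post = int(pre_s * fs), int(post_s * fs)
        epochs = np.array([eeg[s - pre:s + post] for s in event_samples
                           if s - pre >= 0 and s + post <= len(eeg)])
        epochs -= epochs[:, :pre].mean(axis=1, keepdims=True)   # remove pre-stimulus baseline
        return epochs.mean(axis=0)                               # evoked response (e.g., P300 near 320 ms)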

  20. Reconstructing Speech from Human Auditory Cortex

    PubMed Central

    Pasley, Brian N.; David, Stephen V.; Mesgarani, Nima; Flinker, Adeen; Shamma, Shihab A.; Crone, Nathan E.; Knight, Robert T.; Chang, Edward F.

    2012-01-01

    How the human auditory system extracts perceptually relevant acoustic features of speech is unknown. To address this question, we used intracranial recordings from nonprimary auditory cortex in the human superior temporal gyrus to determine what acoustic information in speech sounds can be reconstructed from population neural activity. We found that slow and intermediate temporal fluctuations, such as those corresponding to syllable rate, were accurately reconstructed using a linear model based on the auditory spectrogram. However, reconstruction of fast temporal fluctuations, such as syllable onsets and offsets, required a nonlinear sound representation based on temporal modulation energy. Reconstruction accuracy was highest within the range of spectro-temporal fluctuations that have been found to be critical for speech intelligibility. The decoded speech representations allowed readout and identification of individual words directly from brain activity during single trial sound presentations. These findings reveal neural encoding mechanisms of speech acoustic parameters in higher order human auditory cortex. PMID:22303281

  1. Reconstructing speech from human auditory cortex.

    PubMed

    Pasley, Brian N; David, Stephen V; Mesgarani, Nima; Flinker, Adeen; Shamma, Shihab A; Crone, Nathan E; Knight, Robert T; Chang, Edward F

    2012-01-01

    How the human auditory system extracts perceptually relevant acoustic features of speech is unknown. To address this question, we used intracranial recordings from nonprimary auditory cortex in the human superior temporal gyrus to determine what acoustic information in speech sounds can be reconstructed from population neural activity. We found that slow and intermediate temporal fluctuations, such as those corresponding to syllable rate, were accurately reconstructed using a linear model based on the auditory spectrogram. However, reconstruction of fast temporal fluctuations, such as syllable onsets and offsets, required a nonlinear sound representation based on temporal modulation energy. Reconstruction accuracy was highest within the range of spectro-temporal fluctuations that have been found to be critical for speech intelligibility. The decoded speech representations allowed readout and identification of individual words directly from brain activity during single trial sound presentations. These findings reveal neural encoding mechanisms of speech acoustic parameters in higher order human auditory cortex.
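
    The "linear model based on the auditory spectrogram" mentioned above is, in spirit, a lagged linear regression from population activity to spectrogram bins. The sketch below shows one minimal ridge-regression version of that idea; design choices such as the lag count and penalty are assumptions, and it omits the nonlinear modulation-energy model the study also describes.

    import numpy as np

    def fit_spectrogram_decoder(neural, spectrogram, n_lags=10, ridge=1.0):
        # neural: (time, electrodes); spectrogram: (time, frequency_bands).
        # Build a lagged design matrix and solve ridge-regularized least squares.
        X = np.hstack([np.roll(neural, lag, axis=0) for lag in range(n_lags)])
        X, Y = X[n_lags:], spectrogram[n_lags:]        # drop rows corrupted by the roll wrap-around
        W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ Y)
        return W                                        # reconstruct new data with X_new @ W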

  2. Development of multichannel MEG system at IGCAR

    NASA Astrophysics Data System (ADS)

    Mariyappa, N.; Parasakthi, C.; Gireesan, K.; Sengottuvel, S.; Patel, Rajesh; Janawadkar, M. P.; Radhakrishnan, T. S.; Sundar, C. S.

    2013-02-01

    We describe some of the challenging aspects of the indigenous development of the whole-head multichannel magnetoencephalography (MEG) system at IGCAR, Kalpakkam. These are: i) fabrication and testing of a helmet-shaped sensor array holder made of a polymeric material experimentally tested to be compatible with liquid helium temperatures; ii) the design and fabrication of the PCB adapter modules, keeping in mind inter-track cross-talk between the electrical leads used to connect the SQUIDs at liquid helium temperature (4.2 K) to the electronics at room temperature (300 K); and iii) the use of high-resistance manganin wires for the 86 channels (86×8 leads), essential to reduce the total heat leak, which, however, inevitably attenuates the SQUID output signal due to the voltage drop in the leads. We have presently populated 22 of the 86 channels, which include 6 reference channels to reject common-mode noise. The whole-head MEG system covering all the lobes of the brain will be progressively assembled when the other three PCB adapter modules, presently under fabrication, become available. The MEG system will be used for a variety of basic and clinical studies, including localization of epileptic foci during pre-surgical mapping in collaboration with neurologists.

  3. Auditory spatial processing in Alzheimer’s disease

    PubMed Central

    Golden, Hannah L.; Nicholas, Jennifer M.; Yong, Keir X. X.; Downey, Laura E.; Schott, Jonathan M.; Mummery, Catherine J.; Crutch, Sebastian J.

    2015-01-01

    The location and motion of sounds in space are important cues for encoding the auditory world. Spatial processing is a core component of auditory scene analysis, a cognitively demanding function that is vulnerable in Alzheimer’s disease. Here we designed a novel neuropsychological battery based on a virtual space paradigm to assess auditory spatial processing in patient cohorts with clinically typical Alzheimer’s disease (n = 20) and its major variant syndrome, posterior cortical atrophy (n = 12) in relation to healthy older controls (n = 26). We assessed three dimensions of auditory spatial function: externalized versus non-externalized sound discrimination, moving versus stationary sound discrimination and stationary auditory spatial position discrimination, together with non-spatial auditory and visual spatial control tasks. Neuroanatomical correlates of auditory spatial processing were assessed using voxel-based morphometry. Relative to healthy older controls, both patient groups exhibited impairments in detection of auditory motion, and stationary sound position discrimination. The posterior cortical atrophy group showed greater impairment for auditory motion processing and the processing of a non-spatial control complex auditory property (timbre) than the typical Alzheimer’s disease group. Voxel-based morphometry in the patient cohort revealed grey matter correlates of auditory motion detection and spatial position discrimination in right inferior parietal cortex and precuneus, respectively. These findings delineate auditory spatial processing deficits in typical and posterior Alzheimer’s disease phenotypes that are related to posterior cortical regions involved in both syndromic variants and modulated by the syndromic profile of brain degeneration. Auditory spatial deficits contribute to impaired spatial awareness in Alzheimer’s disease and may constitute a novel perceptual model for probing brain network disintegration across the Alzheimer

  4. A unique cellular scaling rule in the avian auditory system.

    PubMed

    Corfield, Jeremy R; Long, Brendan; Krilow, Justin M; Wylie, Douglas R; Iwaniuk, Andrew N

    2016-06-01

    Although it is clear that neural structures scale with body size, the mechanisms of this relationship are not well understood. Several recent studies have shown that the relationship between neuron numbers and brain (or brain region) size is not only different across mammalian orders, but also across auditory and visual regions within the same brains. Among birds, similar cellular scaling rules have not been examined in any detail. Here, we examine the scaling of auditory structures in birds and show that the scaling rules that have been established in the mammalian auditory pathway do not necessarily apply to birds. In galliforms, neuronal densities decrease with increasing brain size, suggesting that auditory brainstem structures increase in size faster than neurons are added; smaller brains have relatively more neurons than larger brains. The cellular scaling rules that apply to auditory brainstem structures in galliforms are, therefore, different from those found in the primate auditory pathway. It is likely that the factors driving this difference are associated with the anatomical specializations required for sound perception in birds, although there is a decoupling of neuron numbers in brain structures and hair cell numbers in the basilar papilla. This study provides significant insight into the allometric scaling of neural structures in birds and improves our understanding of the rules that govern neural scaling across vertebrates.

  5. Auditory Imagery: Empirical Findings

    ERIC Educational Resources Information Center

    Hubbard, Timothy L.

    2010-01-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d)…

  6. Auditory Imagery: Empirical Findings

    ERIC Educational Resources Information Center

    Hubbard, Timothy L.

    2010-01-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d)…

  7. Auditory Training for Central Auditory Processing Disorder

    PubMed Central

    Weihing, Jeffrey; Chermak, Gail D.; Musiek, Frank E.

    2015-01-01

    Auditory training (AT) is an important component of rehabilitation for patients with central auditory processing disorder (CAPD). The present article identifies and describes aspects of AT as they relate to applications in this population. A description of the types of auditory processes along with information on relevant AT protocols that can be used to address these specific deficits is included. Characteristics and principles of effective AT procedures also are detailed in light of research that reflects on their value. Finally, research investigating AT in populations who show CAPD or present with auditory complaints is reported. Although efficacy data in this area are still emerging, current findings support the use of AT for treatment of auditory difficulties. PMID:27587909

  8. The human auditory evoked response

    NASA Technical Reports Server (NTRS)

    Galambos, R.

    1974-01-01

    Figures are presented of computer-averaged auditory evoked responses (AERs) that point to the existence of a completely endogenous brain event. A series of regular clicks or tones was administered to the ear, and 'odd-balls' of different intensity or frequency respectively were included. Subjects were asked either to ignore the sounds (to read or do something else) or to attend to the stimuli. When they listened and counted the odd-balls, a P3 wave occurred at 300msec after stimulus. When the odd-balls consisted of omitted clicks or tone bursts, a similar response was observed. This could not have come from auditory nerve, but only from cortex. It is evidence of recognition, a conscious process.

  9. Reality of auditory verbal hallucinations

    PubMed Central

    Valkonen-Korhonen, Minna; Holi, Matti; Therman, Sebastian; Lehtonen, Johannes; Hari, Riitta

    2009-01-01

    Distortion of the sense of reality, actualized in delusions and hallucinations, is the key feature of psychosis but the underlying neuronal correlates remain largely unknown. We studied 11 highly functioning subjects with schizophrenia or schizoaffective disorder while they rated the reality of auditory verbal hallucinations (AVH) during functional magnetic resonance imaging (fMRI). The subjective reality of AVH correlated strongly and specifically with the hallucination-related activation strength of the inferior frontal gyri (IFG), including the Broca's language region. Furthermore, how real the hallucination that subjects experienced was depended on the hallucination-related coupling between the IFG, the ventral striatum, the auditory cortex, the right posterior temporal lobe, and the cingulate cortex. Our findings suggest that the subjective reality of AVH is related to motor mechanisms of speech comprehension, with contributions from sensory and salience-detection-related brain regions as well as circuitries related to self-monitoring and the experience of agency. PMID:19620178

  10. Summary statistics in auditory perception.

    PubMed

    McDermott, Josh H; Schemitsch, Michael; Simoncelli, Eero P

    2013-04-01

    Sensory signals are transduced at high resolution, but their structure must be stored in a more compact format. Here we provide evidence that the auditory system summarizes the temporal details of sounds using time-averaged statistics. We measured discrimination of 'sound textures' that were characterized by particular statistical properties, of the kind that normally result from the superposition of many acoustic features in auditory scenes. When listeners discriminated examples of different textures, performance improved with excerpt duration. In contrast, when listeners discriminated different examples of the same texture, performance declined with duration, a paradoxical result given that the information available for discrimination grows with duration. These results indicate that once these sounds are of moderate length, the brain's representation is limited to time-averaged statistics, which, for different examples of the same texture, converge to the same values with increasing duration. Such statistical representations produce good categorical discrimination, but limit the ability to discern temporal detail.
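
    As a concrete (and much simplified) illustration of "time-averaged statistics" of a sound texture, the sketch below computes the mean, variance and skewness of subband envelopes. The band edges, filter order and choice of statistics are assumptions for illustration only, not the authors' texture model.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def texture_statistics(x, fs, bands=((100, 300), (300, 900), (900, 2700))):
        # Time-averaged envelope statistics, computed per frequency band.
        stats = []
        for lo, hi in bands:
            sos = butter(4, [lo, hi], btype='bandpass', fs=fs, output='sos')
            env = np.abs(hilbert(sosfiltfilt(sos, x)))           # subband envelope
            m, v = env.mean(), env.var()
            skew = ((env - m) ** 3).mean() / (v ** 1.5 + 1e-12)
            stats.append((m, v, skew))
        return np.array(stats)                                    # one row of statistics per band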

  11. Electrophysiological measurement of human auditory function

    NASA Technical Reports Server (NTRS)

    Galambos, R.

    1975-01-01

    Knowledge of the human auditory evoked response is reviewed, including methods of determining this response, the way particular changes in the stimulus are coupled to specific changes in the response, and how the state of mind of the listener will influence the response. Important practical applications of this basic knowledge are discussed. Measurement of the brainstem evoked response, for instance, can state unequivocally how well the peripheral auditory apparatus functions. It might then be developed into a useful hearing test, especially for infants and preverbal or nonverbal children. Clinical applications of measuring the brain waves evoked 100 msec and later after the auditory stimulus are undetermined. These waves are clearly related to brain events associated with cognitive processing of acoustic signals, since their properties depend upon where the listener directs his attention and whether, and for how long, he expects the signal.

  12. Impairments of auditory scene analysis in Alzheimer's disease.

    PubMed

    Goll, Johanna C; Kim, Lois G; Ridgway, Gerard R; Hailstone, Julia C; Lehmann, Manja; Buckley, Aisling H; Crutch, Sebastian J; Warren, Jason D

    2012-01-01

    Parsing of sound sources in the auditory environment or 'auditory scene analysis' is a computationally demanding cognitive operation that is likely to be vulnerable to the neurodegenerative process in Alzheimer's disease. However, little information is available concerning auditory scene analysis in Alzheimer's disease. Here we undertook a detailed neuropsychological and neuroanatomical characterization of auditory scene analysis in a cohort of 21 patients with clinically typical Alzheimer's disease versus age-matched healthy control subjects. We designed a novel auditory dual stream paradigm based on synthetic sound sequences to assess two key generic operations in auditory scene analysis (object segregation and grouping) in relation to simpler auditory perceptual, task and general neuropsychological factors. In order to assess neuroanatomical associations of performance on auditory scene analysis tasks, structural brain magnetic resonance imaging data from the patient cohort were analysed using voxel-based morphometry. Compared with healthy controls, patients with Alzheimer's disease had impairments of auditory scene analysis, and segregation and grouping operations were comparably affected. Auditory scene analysis impairments in Alzheimer's disease were not wholly attributable to simple auditory perceptual or task factors; however, the between-group difference relative to healthy controls was attenuated after accounting for non-verbal (visuospatial) working memory capacity. These findings demonstrate that clinically typical Alzheimer's disease is associated with a generic deficit of auditory scene analysis. Neuroanatomical associations of auditory scene analysis performance were identified in posterior cortical areas including the posterior superior temporal lobes and posterior cingulate. This work suggests a basis for understanding a class of clinical symptoms in Alzheimer's disease and for delineating cognitive mechanisms that mediate auditory scene analysis

  13. Multichannel laser-fiber vibrometer

    NASA Astrophysics Data System (ADS)

    Dudzik, Grzegorz; Waz, Adam; Kaczmarek, Pawel; Antonczak, Arkadiusz; Sotor, Jaroslaw; Krzempek, Karol; Sobon, Grzegorz; Abramski, Krzysztof M.

    2013-01-01

    For the last few years we have been developing a laser-fiber vibrometer working at 1550 nm. Our main effort has been directed towards several aspects of the problem: analysis of scattered light, efficient photodetection, optimization of the fiber-to-free-space interfaces, and signal processing. As a consequence, we proposed the idea of a multichannel fiber vibrometer based on a well-developed telecommunication technique - Wavelength Division Multiplexing (WDM). One of the most important parts of a fiber-laser vibrometer is the demodulation electronics section. The distortion, nonlinearity, offset and added noise of the measured signal come from the electronic circuits and have a direct influence on the final measurement results. We present the results of the completed project "Developing novel laser-fiber monitoring technologies to prevent environmental hazards from vibrating objects", in which we constructed a 4-channel WDM laser-fiber vibrometer.

  14. Multichannel acousto-optical spectrometer

    NASA Astrophysics Data System (ADS)

    Carter, James A.; Pape, Dennis R.

    1992-08-01

    Photonic Systems Incorporated is currently fabricating a Multichannel Acousto-Optical Spectrometer (MCAOS) for NASA Goddard Space Flight Center. This instrument will be used as a frequency channelized radiometer for radio astronomy spectroscopy. It will analyze the spectrum of four independent radio frequency (RF) channels simultaneously and has potential for eight to as many as sixteen channels. Each channel will resolve the RF spectrum to one megahertz within its 1000 megahertz band. Dynamic range exceeding 30 dB will be achieved by quantizing detector photo-charge to 12 bits and accumulating data for large periods of time. Long time integration requires an optical bench optimized for stability and the use of temperature stabilization. System drift due to speckle interference is minimized by using a novel polarization switching Bragg cell.

  15. Multi-channel polarized thermal emitter

    DOEpatents

    Lee, Jae-Hwang; Ho, Kai-Ming; Constant, Kristen P

    2013-07-16

    A multi-channel polarized thermal emitter (PTE) is presented. The multi-channel PTE can emit polarized thermal radiation without using a polarizer at normal emergence. The multi-channel PTE consists of two layers of metallic gratings on a monolithic and homogeneous metallic plate. It can be fabricated by a low-cost soft lithography technique called two-polymer microtransfer molding. The spectral positions of the mid-infrared (MIR) radiation peaks can be tuned by changing the periodicity of the gratings, and the spectral separation between peaks is tuned by changing the mutual angle between the orientations of the two gratings.

  16. A Multichannel Bioluminescence Determination Platform for Bioassays.

    PubMed

    Kim, Sung-Bae; Naganawa, Ryuichi

    2016-01-01

    The present protocol introduces a multichannel bioluminescence determination platform allowing a high sample throughput determination of weak bioluminescence with reduced standard deviations. The platform is designed to carry a multichannel conveyer, an optical filter, and a mirror cap. The platform enables us to near-simultaneously determine ligands in multiple samples without the replacement of the sample tubes. Furthermore, the optical filters beneath the multichannel conveyer are designed to easily discriminate colors during assays. This optical system provides excellent time- and labor-efficiency to users during bioassays.

  17. The Brain As a Mixer, I. Preliminary Literature Review: Auditory Integration. Studies in Language and Language Behavior, Progress Report Number VII.

    ERIC Educational Resources Information Center

    Semmel, Melvyn I.; And Others

    Methods to evaluate central hearing deficiencies and to localize brain damage are reviewed, beginning with Bocca, who showed that patients with temporal lobe tumors obtained significantly lower discrimination scores in the ear opposite the tumor when speech signals were distorted. Tests were devised to attempt to pinpoint brain damage on the basis of…

  18. Multi-channel fiber photometry for population neuronal activity recording.

    PubMed

    Guo, Qingchun; Zhou, Jingfeng; Feng, Qiru; Lin, Rui; Gong, Hui; Luo, Qingming; Zeng, Shaoqun; Luo, Minmin; Fu, Ling

    2015-10-01

    Fiber photometry has become increasingly popular among neuroscientists as a convenient tool for recording from genetically defined neuronal populations in behaving animals. Here, we report the development of a multi-channel fiber photometry system to simultaneously monitor neural activities in several brain areas of an animal or in different animals. In this system, a galvano-mirror modulates and cyclically couples the excitation light to individual multimode optical fiber bundles. A single photodetector collects the emitted fluorescence, and the configuration of the fiber bundle assembly and the scanner determines the total channel number. We demonstrated that the system exhibited negligible crosstalk between channels, and optical signals could be sampled simultaneously with a sample rate of at least 100 Hz for each channel, which is sufficient for recording calcium signals. Using this system, we successfully recorded GCaMP6 fluorescent signals from the bilateral barrel cortices of a head-restrained mouse in a dual-channel mode, and the orbitofrontal cortices of multiple freely moving mice in a triple-channel mode. The multi-channel fiber photometry system would be a valuable tool for simultaneous recordings of population activities in different brain areas of a given animal and of different interacting individuals.

  19. Multi-channel fiber photometry for population neuronal activity recording

    PubMed Central

    Guo, Qingchun; Zhou, Jingfeng; Feng, Qiru; Lin, Rui; Gong, Hui; Luo, Qingming; Zeng, Shaoqun; Luo, Minmin; Fu, Ling

    2015-01-01

    Fiber photometry has become increasingly popular among neuroscientists as a convenient tool for recording from genetically defined neuronal populations in behaving animals. Here, we report the development of a multi-channel fiber photometry system to simultaneously monitor neural activities in several brain areas of an animal or in different animals. In this system, a galvano-mirror modulates and cyclically couples the excitation light to individual multimode optical fiber bundles. A single photodetector collects the emitted fluorescence, and the configuration of the fiber bundle assembly and the scanner determines the total channel number. We demonstrated that the system exhibited negligible crosstalk between channels, and optical signals could be sampled simultaneously with a sample rate of at least 100 Hz for each channel, which is sufficient for recording calcium signals. Using this system, we successfully recorded GCaMP6 fluorescent signals from the bilateral barrel cortices of a head-restrained mouse in a dual-channel mode, and the orbitofrontal cortices of multiple freely moving mice in a triple-channel mode. The multi-channel fiber photometry system would be a valuable tool for simultaneous recordings of population activities in different brain areas of a given animal and of different interacting individuals. PMID:26504642
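
    Because a single photodetector serves all fibers while the galvano-mirror cycles through them, the per-channel traces have to be demultiplexed in time. The fragment below is a schematic of that bookkeeping only, assuming a fixed dwell per channel; it is not the acquisition software of the system described above, and all names are hypothetical.

    import numpy as np

    def demultiplex_photometry(detector_trace, n_channels, samples_per_dwell):
        # Assume the scanner dwells on each fiber bundle for a fixed number of
        # detector samples, cycling through channels 0, 1, ..., n_channels-1 repeatedly.
        frame = n_channels * samples_per_dwell
        usable = (len(detector_trace) // frame) * frame
        x = np.asarray(detector_trace[:usable]).reshape(-1, n_channels, samples_per_dwell)
        return x.mean(axis=2)        # (cycles, channels): one averaged sample per channel per cycle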

  20. A Multi-Channel Radar Receiver.

    DTIC Science & Technology

    1985-01-07

    [OCR fragment; only partial text is recoverable.] Keywords: Doppler weather radar; multi-channel radar receiver; dual frequency radar; polarization. Introduction (truncated): "The 10-cm Doppler weather radar at AFGL is…". Cited report: …cm Dual Frequency Doppler Weather Radar, Part I: The Radar System, AFGL-TR-82-0321(I); Ussailis, J.S., Leiker, L.A., Goodman, R.M. IV.

  1. 40 Hz auditory steady state response to linguistic features of stimuli during auditory hallucinations.

    PubMed

    Ying, Jun; Yan, Zheng; Gao, Xiao-rong

    2013-10-01

    The auditory steady state response (ASSR) may reflect activity from different regions of the brain, depending on the modulation frequency used. In general, responses evoked by low modulation rates (≤40 Hz) emanate mostly from central structures of the brain, whereas responses to high rates (≥80 Hz) emanate mostly from the peripheral auditory nerve or brainstem structures. In addition, the gamma-band ASSR (30-90 Hz) has been reported to play an important role in working memory, speech understanding and recognition. This paper investigated the 40 Hz ASSR evoked by modulated speech and reversed speech. The speech consisted of Chinese phrases, and the noise-like reversed speech was obtained by temporally reversing the speech. Both auditory stimuli were modulated at a frequency of 40 Hz. Ten healthy subjects and five patients with hallucination symptoms participated in the experiment. Results showed a reduction in the left auditory cortex response when healthy subjects listened to the reversed speech compared with the speech. In contrast, when the patients who experienced auditory hallucinations listened to the reversed speech, the auditory cortex of the left hemisphere responded more actively. The ASSR results were consistent with the behavioral results of the patients. Therefore, the gamma-band ASSR is expected to be helpful for rapid and objective diagnosis of hallucination in the clinic.
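
    For illustration, a 40-Hz ASSR stimulus of the kind described can be built by amplitude-modulating the speech waveform (and its time-reversed, noise-like control) with a 40-Hz envelope. The parameter choices below are assumptions rather than the study's exact stimulus recipe.

    import numpy as np

    def make_assr_stimuli(speech, fs, mod_hz=40.0, depth=1.0):
        # Amplitude-modulate the speech and its time-reversed control with the same 40-Hz envelope.
        t = np.arange(len(speech)) / fs
        envelope = (1.0 + depth * np.sin(2.0 * np.pi * mod_hz * t)) / 2.0
        return speech * envelope, speech[::-1] * envelope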

  2. A computer model of auditory stream segregation.

    PubMed

    Beauvois, M W; Meddis, R

    1991-08-01

    A computer model is described which simulates some aspects of auditory stream segregation. The model emphasizes the explanatory power of simple physiological principles operating at a peripheral rather than a central level. The model consists of a multi-channel bandpass-filter bank with a "noisy" output and an attentional mechanism that responds selectively to the channel with the greatest activity. A "leaky integration" principle allows channel excitation to accumulate and dissipate over time. The model produces similar results to two experimental demonstrations of streaming phenomena, which are presented in detail. These results are discussed in terms of the "emergent properties" of a system governed by simple physiological principles. As such the model is contrasted with higher-level Gestalt explanations of the same phenomena while accepting that they may constitute complementary kinds of explanation.
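
    The model's core mechanism, noisy filterbank channels, leaky integration of channel excitation, and attention to the most active channel, can be caricatured in a few lines. The sketch below is a paraphrase of that description under assumed constants, not the published implementation.

    import numpy as np

    def attend_channels(filter_outputs, leak=0.9, noise_sd=0.05, seed=0):
        # filter_outputs: (time, channels) rectified bandpass-filterbank activity.
        rng = np.random.default_rng(seed)
        n_t, n_ch = filter_outputs.shape
        excitation = np.zeros(n_ch)
        attended = np.empty(n_t, dtype=int)
        for t in range(n_t):
            drive = filter_outputs[t] + rng.normal(0.0, noise_sd, n_ch)  # "noisy" channel output
            excitation = leak * excitation + drive                        # leaky integration
            attended[t] = int(np.argmax(excitation))                      # attend to the strongest channel
        return attended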

  3. Effects of low frequency rTMS treatment on brain networks for inner speech in patients with schizophrenia and auditory verbal hallucinations.

    PubMed

    Bais, Leonie; Liemburg, Edith; Vercammen, Ans; Bruggeman, Richard; Knegtering, Henderikus; Aleman, André

    2017-08-01

    Efficacy of repetitive Transcranial Magnetic Stimulation (rTMS) targeting the temporo-parietal junction (TPJ) for the treatment of auditory verbal hallucinations (AVH) remains under debate. We assessed the influence of a 1Hz rTMS treatment on neural networks involved in a cognitive mechanism proposed to subserve AVH. Patients with schizophrenia (N=24) experiencing medication-resistant AVH completed a 10-day 1Hz rTMS treatment. Participants were randomized to active stimulation of the left or bilateral TPJ, or sham stimulation. The effects of rTMS on neural networks were investigated with an inner speech task during fMRI. Changes within and between neural networks were analyzed using Independent Component Analysis. rTMS of the left and bilateral TPJ areas resulted in a weaker network contribution of the left supramarginal gyrus to the bilateral fronto-temporal network. Left-sided rTMS resulted in stronger network contributions of the right superior temporal gyrus to the auditory-sensorimotor network, right inferior gyrus to the left fronto-parietal network, and left middle frontal gyrus to the default mode network. Bilateral rTMS was associated with a predominant inhibitory effect on network contribution. Sham stimulation showed different patterns of change compared to active rTMS. rTMS of the left temporo-parietal region decreased the contribution of the left supramarginal gyrus to the bilateral fronto-temporal network, which may reduce the likelihood of speech intrusions. On the other hand, left rTMS appeared to increase the contribution of functionally connected regions involved in perception, cognitive control and self-referential processing. These findings hint to potential neural mechanisms underlying rTMS for hallucinations but need corroboration in larger samples. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. A lateralized functional auditory network is involved in anuran sexual selection.

    PubMed

    Xue, Fei; Fang, Guangzhan; Yue, Xizi; Zhao, Ermi; Brauth, Steven E; Tang, Yezhong

    2016-12-01

    Right ear advantage (REA) exists in many land vertebrates, in which the right ear and left hemisphere preferentially process conspecific acoustic stimuli such as those related to sexual selection. Although the ecological and neural mechanisms of sexual selection have been widely studied, the brain networks involved are still poorly understood. In this study we used multi-channel electroencephalographic data in combination with Granger causal connectivity analysis to demonstrate, for the first time, that the auditory neural network interconnecting the left and right midbrain and forebrain functions asymmetrically in the Emei music frog (Babina daunchina), an anuran species that exhibits REA. The results showed that the network was lateralized. Ascending connections between the mesencephalon and telencephalon were stronger on the left side, while descending ones were stronger on the right, which matched the REA in this species and implied that inhibition from the forebrain may partly induce the REA. Connections from the telencephalon to the ipsilateral mesencephalon in response to white noise were highest in the non-reproductive stage, while those in response to advertisement calls were highest in the reproductive stage, implying that attentional resources and living strategy shift when the animals enter the reproductive season. Finally, these connection changes were sexually dimorphic, revealing sex differences in reproductive roles.
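
    Granger causal connectivity of the kind used above asks whether the past of one channel improves prediction of another channel beyond that channel's own past. The pairwise sketch below conveys the idea with least-squares AR models and an assumed model order of 5; the study's full multichannel network analysis is considerably richer, and the function name is hypothetical.

    import numpy as np

    def granger_statistic(x, y, order=5):
        # Does y's past reduce the residual variance of an AR model for x?
        x, y = np.asarray(x, float), np.asarray(y, float)
        rows = np.arange(order, len(x))
        lag = lambda v: np.array([v[t - order:t][::-1] for t in rows])   # lagged regressors
        target = x[order:]
        def resid_var(design):
            design = np.column_stack([design, np.ones(len(rows))])       # add an intercept
            beta, *_ = np.linalg.lstsq(design, target, rcond=None)
            return np.var(target - design @ beta)
        v_own = resid_var(lag(x))                        # restricted model: x's own past only
        v_full = resid_var(np.hstack([lag(x), lag(y)]))  # full model: x's and y's past
        return np.log(v_own / v_full)                    # > 0 suggests a y -> x influence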

  5. Auditory neuroscience: Development, transduction, and integration

    PubMed Central

    Hudspeth, A. J.; Konishi, Masakazu

    2000-01-01

    Hearing underlies our ability to locate sound sources in the environment, our appreciation of music, and our ability to communicate. Participants in the National Academy of Sciences colloquium on Auditory Neuroscience: Development, Transduction, and Integration presented research results bearing on four key issues in auditory research. How does the complex inner ear develop? How does the cochlea transduce sounds into electrical signals? How does the brain's ability to compute the location of a sound source develop? How does the forebrain analyze complex sounds, particularly species-specific communications? This article provides an introduction to the papers stemming from the meeting. PMID:11050196

  6. Simultanagnosia does not affect processes of auditory Gestalt perception.

    PubMed

    Rennig, Johannes; Bleyer, Anna Lena; Karnath, Hans-Otto

    2017-03-23

    Simultanagnosia is a neuropsychological deficit of higher visual processes caused by temporo-parietal brain damage. It is characterized by a specific failure to recognize a global visual Gestalt, such as a visual scene or a complex object consisting of local elements. In this study we investigated to what extent this deficit should be understood as a deficit specific to the visual domain or whether it should be seen as defective Gestalt processing per se. To examine whether simultanagnosia occurs across sensory domains, we designed several auditory experiments sharing typical characteristics of visual tasks that are known to be particularly demanding for patients suffering from simultanagnosia. We also included control tasks for auditory working memory deficits and for auditory extinction. We tested four simultanagnosia patients who suffered from severe symptoms in the visual domain. Two of them indeed showed significant impairments in recognition of simultaneously presented sounds. However, the same two patients also suffered from severe auditory working memory deficits and from symptoms comparable to auditory extinction, both sufficiently explaining the impairments in simultaneous auditory perception. We thus conclude that deficits in auditory Gestalt perception do not appear to be characteristic of simultanagnosia and that the human brain apparently uses independent mechanisms for visual and for auditory Gestalt perception.

  7. Multichannel and Multidimensional Bargmann Potentials

    NASA Astrophysics Data System (ADS)

    Plekhanvov, E. B.; Suzko, A. S.; Zakhariev, B. N.

    The class of potential matrices for which coupled-channel Schrödinger equations have exact solutions is presented. This is achieved through degeneracy of the kernel of the inverse-problem integral equation with respect to the channel indices, in addition to separability of its coordinate dependence. This fact had not been noted before, which may explain why no satisfactory multichannel generalization of Bargmann potentials existed. Partially nonlocal Bargmann potentials for multidimensional and many-particle systems are constructed. Examples of new transparent potentials are given. Translated abstract (Multichannel and multidimensional Bargmann potentials): Potential matrices for which the multichannel Schrödinger equation is exactly solvable are presented. The corresponding Schrödinger equations are exactly solvable thanks to the degeneracy of the kernel of the inverse-scattering integral equation with respect to both the coordinate dependence and the channel indices; this had not been noted previously. Partially nonlocal potentials for multidimensional and many-particle systems are constructed. New examples of reflectionless potentials are given.

  8. Attention Modulates the Auditory Cortical Processing of Spatial and Category Cues in Naturalistic Auditory Scenes.

    PubMed

    Renvall, Hanna; Staeren, Noël; Barz, Claudia S; Ley, Anke; Formisano, Elia

    2016-01-01

    This combined fMRI and MEG study investigated brain activations during listening and attending to natural auditory scenes. We first recorded, using in-ear microphones, vocal non-speech sounds, and environmental sounds that were mixed to construct auditory scenes containing two concurrent sound streams. During the brain measurements, subjects attended to one of the streams while spatial acoustic information of the scene was either preserved (stereophonic sounds) or removed (monophonic sounds). Compared to monophonic sounds, stereophonic sounds evoked larger blood-oxygenation-level-dependent (BOLD) fMRI responses in the bilateral posterior superior temporal areas, independent of which stimulus attribute the subject was attending to. This finding is consistent with the functional role of these regions in the (automatic) processing of auditory spatial cues. Additionally, significant differences in the cortical activation patterns depending on the target of attention were observed. Bilateral planum temporale and inferior frontal gyrus were preferentially activated when attending to stereophonic environmental sounds, whereas when subjects attended to stereophonic voice sounds, the BOLD responses were larger at the bilateral middle superior temporal gyrus and sulcus, previously reported to show voice sensitivity. In contrast, the time-resolved MEG responses were stronger for mono- than stereophonic sounds in the bilateral auditory cortices at ~360 ms after the stimulus onset when attending to the voice excerpts within the combined sounds. The observed effects suggest that during the segregation of auditory objects from the auditory background, spatial sound cues together with other relevant temporal and spectral cues are processed in an attention-dependent manner at the cortical locations generally involved in sound recognition. More synchronous neuronal activation during monophonic than stereophonic sound processing, as well as (local) neuronal inhibitory mechanisms in

  9. Attention Modulates the Auditory Cortical Processing of Spatial and Category Cues in Naturalistic Auditory Scenes

    PubMed Central

    Renvall, Hanna; Staeren, Noël; Barz, Claudia S.; Ley, Anke; Formisano, Elia

    2016-01-01

    This combined fMRI and MEG study investigated brain activations during listening and attending to natural auditory scenes. We first recorded, using in-ear microphones, vocal non-speech sounds, and environmental sounds that were mixed to construct auditory scenes containing two concurrent sound streams. During the brain measurements, subjects attended to one of the streams while spatial acoustic information of the scene was either preserved (stereophonic sounds) or removed (monophonic sounds). Compared to monophonic sounds, stereophonic sounds evoked larger blood-oxygenation-level-dependent (BOLD) fMRI responses in the bilateral posterior superior temporal areas, independent of which stimulus attribute the subject was attending to. This finding is consistent with the functional role of these regions in the (automatic) processing of auditory spatial cues. Additionally, significant differences in the cortical activation patterns depending on the target of attention were observed. Bilateral planum temporale and inferior frontal gyrus were preferentially activated when attending to stereophonic environmental sounds, whereas when subjects attended to stereophonic voice sounds, the BOLD responses were larger at the bilateral middle superior temporal gyrus and sulcus, previously reported to show voice sensitivity. In contrast, the time-resolved MEG responses were stronger for mono- than stereophonic sounds in the bilateral auditory cortices at ~360 ms after the stimulus onset when attending to the voice excerpts within the combined sounds. The observed effects suggest that during the segregation of auditory objects from the auditory background, spatial sound cues together with other relevant temporal and spectral cues are processed in an attention-dependent manner at the cortical locations generally involved in sound recognition. More synchronous neuronal activation during monophonic than stereophonic sound processing, as well as (local) neuronal inhibitory mechanisms in

  10. Automated analysis of multi-channel EEG in preterm infants.

    PubMed

    Murphy, Keelin; Stevenson, Nathan J; Goulding, Robert M; Lloyd, Rhodri O; Korotchikova, Irina; Livingstone, Vicki; Boylan, Geraldine B

    2015-09-01

    To develop and validate two automatic methods for the detection of burst and interburst periods in preterm eight-channel electroencephalograms (EEG). To perform a detailed analysis of interobserver agreement on burst and interburst periods and use this as a benchmark for the performance of the automatic methods. To examine mathematical features of the EEG signal and their potential correlation with gestational age. Multi-channel EEG from 36 infants, born at less than 30 weeks gestation, was utilised, with a 10 min artifact-free epoch selected for each subject. Three independent expert observers annotated all EEG activity bursts in the dataset. Two automatic algorithms for burst/interburst detection were applied to the EEG data and their performances were analysed and compared with interobserver agreement. A total of 12 mathematical features of the EEG signal were calculated and correlated with gestational age. The mean interobserver agreement was found to be 77%, while mean algorithm/observer agreement was 81%. Six of the mathematical features calculated (spectral entropy, Higuchi fractal dimension, spectral edge frequency, variance, extrema median and Hilbert transform amplitude) were found to have significant correlation with gestational age. Automatic detection of burst/interburst periods has been performed in multi-channel EEG of 36 preterm infants. The algorithm agreement with expert observers is found to be on a par with interobserver agreement. Mathematical features of EEG have been calculated which show significant correlation with gestational age. Automatic analysis of preterm multi-channel EEG is possible. The methods described here have the potential to be incorporated into a fully automatic system to quantitatively assess brain maturity from preterm EEG. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
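
    A few of the listed signal features are easy to state concretely. The snippet below gives illustrative single-channel formulas for spectral entropy, the 95% spectral edge frequency, variance and mean Hilbert-envelope amplitude; these are common definitions, not necessarily the exact ones used in the study, and the function name is hypothetical.

    import numpy as np
    from scipy.signal import welch, hilbert

    def eeg_features(x, fs):
        # Illustrative feature set for one EEG channel.
        f, pxx = welch(x, fs=fs, nperseg=min(len(x), int(2 * fs)))
        p = pxx / pxx.sum()
        spectral_entropy = -np.sum(p * np.log2(p + 1e-12))
        spectral_edge = f[np.searchsorted(np.cumsum(p), 0.95)]     # 95% spectral edge frequency
        return {'spectral_entropy': float(spectral_entropy),
                'spectral_edge_hz': float(spectral_edge),
                'variance': float(np.var(x)),
                'hilbert_amplitude': float(np.mean(np.abs(hilbert(x))))}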

  11. The dorsal auditory pathway is involved in performance of both visual and auditory rhythms.

    PubMed

    Karabanov, Anke; Blom, Orjan; Forsman, Lea; Ullén, Fredrik

    2009-01-15

    We used functional magnetic resonance imaging to investigate the effect of two factors on the neural control of temporal sequence performance: the modality in which the rhythms had been trained, and the modality of the pacing stimuli preceding performance. The rhythms were trained 1-2 days before scanning. Each participant learned two rhythms: one was presented visually, the other auditorily. During fMRI, the rhythms were performed in blocks. In each block, four beats of a visual or auditory pacing metronome were followed by repetitive self-paced rhythm performance from memory. Data from the self-paced performance phase was analysed in a 2x2 factorial design, with the two factors Training Modality (auditory or visual) and Metronome Modality (auditory or visual), as well as with a conjunction analysis across all active conditions, to identify activations that were independent of both Training Modality and Metronome Modality. We found a significant main effect only for visual versus auditory Metronome Modality, in the left angular gyrus, due to a deactivation of this region after auditory pacing. The conjunction analysis revealed a set of brain areas that included dorsal auditory pathway areas (left temporo-parietal junction area and ventral premotor cortex), dorsal premotor cortex, the supplementary and presupplementary premotor areas, the cerebellum and the basal ganglia. We conclude that these regions are involved in controlling performance of well-learned rhythms, regardless of the modality in which the rhythms are trained and paced. This suggests that after extensive short-term training, all rhythms, even those that were both trained and paced in visual modality, had been transformed into auditory-motor representations. The deactivation of the angular cortex following auditory pacing may represent cross-modal auditory-visual inhibition.

  12. Severe auditory processing disorder secondary to viral meningoencephalitis.

    PubMed

    Pillion, Joseph P; Shiffler, Dorothy E; Hoon, Alexander H; Lin, Doris D M

    2014-06-01

    To describe auditory function in an individual with bilateral damage to the temporal and parietal cortex. Case report. A previously healthy 17-year-old male is described who sustained extensive cortical injury following an episode of viral meningoencephalitis. He developed status epilepticus and required intubation and multiple anticonvulsants. Serial brain MRIs showed bilateral temporoparietal signal changes reflecting extensive damage to language areas and the first transverse gyrus of Heschl on both sides. The patient was referred for assessment of auditory processing but was so severely impaired in speech processing that he was unable to complete any formal tests of his speech processing abilities. Audiological assessment utilizing objective measures of auditory function established the presence of normal peripheral auditory function and illustrates the importance of the use of objective measures of auditory function in patients with injuries to the auditory cortex. Use of objective measures of auditory function is essential in establishing the presence of normal peripheral auditory function in individuals with cortical damage who may not be able to cooperate sufficiently for assessment utilizing behavioral measures of auditory function.

  13. Neural stem/progenitor cell properties of glial cells in the adult mouse auditory nerve

    PubMed Central

    Lang, Hainan; Xing, Yazhi; Brown, LaShardai N.; Samuvel, Devadoss J.; Panganiban, Clarisse H.; Havens, Luke T.; Balasubramanian, Sundaravadivel; Wegner, Michael; Krug, Edward L.; Barth, Jeremy L.

    2015-01-01

    The auditory nerve is the primary conveyor of hearing information from sensory hair cells to the brain. It has been believed that loss of the auditory nerve is irreversible in the adult mammalian ear, resulting in sensorineural hearing loss. We examined the regenerative potential of the auditory nerve in a mouse model of auditory neuropathy. Following neuronal degeneration, quiescent glial cells converted to an activated state showing a decrease in nuclear chromatin condensation, altered histone deacetylase expression and up-regulation of numerous genes associated with neurogenesis or development. Neurosphere formation assays showed that adult auditory nerves contain neural stem/progenitor cells (NSPs) that were within a Sox2-positive glial population. Production of neurospheres from auditory nerve cells was stimulated by acute neuronal injury and hypoxic conditioning. These results demonstrate that a subset of glial cells in the adult auditory nerve exhibit several characteristics of NSPs and are therefore potential targets for promoting auditory nerve regeneration. PMID:26307538

  14. Multichannel DBS halftoning for improved texture quality

    NASA Astrophysics Data System (ADS)

    Slavuj, Radovan; Pedersen, Marius

    2015-01-01

    The paper aims to develop a method for multichannel halftoning based on the Direct Binary Search (DBS) algorithm. We integrate the specifics and benefits of multichannel printing into the halftoning method in order to further improve the texture quality of DBS and to create a halftoning method suited to multichannel printing. Multichannel printing was originally developed for an extended color gamut; at the same time, the additional channels can help to improve the individual and combined texture of color halftones. They do so in a manner similar to the introduction of light (diluted) inks in printing: if one regards the Red, Green and Blue inks as light versions of the M+Y, C+Y and C+M combinations, the visibility of unwanted halftone textures can be reduced. The analogy can be extended to any number of ink combinations, or Neugebauer Primaries (NPs), as alternative building blocks. The extended freedom of printing spatially distributed NPs could provide many practical solutions and improvements in color accuracy and image quality, and could enable spectral printing. This could be done by selecting NPs for each dot location based on the constraints of the desired reproduction. Replacing an NP with a brighter one at a given location can introduce a color difference, creating a trade-off between image quality and color accuracy. With multichannel-enabled DBS halftoning, we are able to reduce the visibility of textures and to provide better rendering of transitions, especially in the mid and dark tones.
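
    Direct Binary Search iteratively toggles (and swaps) dots to minimize a perceptually filtered error between the halftone and the continuous-tone original. The toy sketch below performs only brute-force toggles on a single grayscale channel, with a Gaussian blur standing in for the HVS filter; the multichannel method above instead searches over Neugebauer-primary assignments, which this sketch does not attempt. All names and constants are assumptions, and the implementation is deliberately slow for clarity.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def dbs_halftone(gray, sigma=1.0, sweeps=3):
        # gray: small 2-D array in [0, 1]. Toggle-only DBS with brute-force cost evaluation.
        rng = np.random.default_rng(0)
        ht = (rng.random(gray.shape) < gray).astype(float)              # random initial halftone
        cost = lambda h: np.sum(gaussian_filter(h - gray, sigma) ** 2)  # filtered error energy
        best = cost(ht)
        for _ in range(sweeps):
            for idx in np.ndindex(*gray.shape):
                ht[idx] = 1.0 - ht[idx]            # trial toggle
                trial = cost(ht)
                if trial < best:
                    best = trial                   # keep the improving toggle
                else:
                    ht[idx] = 1.0 - ht[idx]        # revert
        return ht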

  15. Temporal prediction errors in visual and auditory cortices.

    PubMed

    Lee, Hweeling; Noppeney, Uta

    2014-04-14

    To form a coherent percept of the environment, the brain needs to bind sensory signals emanating from a common source, but to segregate those from different sources [1]. Temporal correlations and synchrony act as prominent cues for multisensory integration [2-4], but the neural mechanisms by which such cues are identified remain unclear. Predictive coding suggests that the brain iteratively optimizes an internal model of its environment by minimizing the errors between its predictions and the sensory inputs [5,6]. This model enables the brain to predict the temporal evolution of natural audiovisual inputs and their statistical (for example, temporal) relationship. A prediction of this theory is that asynchronous audiovisual signals violating the model's predictions induce an error signal that depends on the directionality of the audiovisual asynchrony. As the visual system generates the dominant temporal predictions for visual leading asynchrony, the delayed auditory inputs are expected to generate a prediction error signal in the auditory system (and vice versa for auditory leading asynchrony). Using functional magnetic resonance imaging (fMRI), we measured participants' brain responses to synchronous, visual leading and auditory leading movies of speech, sinewave speech or music. In line with predictive coding, auditory leading asynchrony elicited a prediction error in visual cortices and visual leading asynchrony in auditory cortices. Our results reveal predictive coding as a generic mechanism to temporally bind signals from multiple senses into a coherent percept. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. PARATHYROID HORMONE 2 RECEPTOR AND ITS ENDOGENOUS LIGAND TIP39 ARE CONCENTRATED IN ENDOCRINE, VISCEROSENSORY AND AUDITORY BRAIN REGIONS IN MACAQUE AND HUMAN

    PubMed Central

    Bagó, Attila G.; Dimitrov, Eugene; Saunders, Richard; Seress, László; Palkovits, Miklós; Usdin, Ted B.; Dobolyi, Arpád

    2009-01-01

    Parathyroid hormone receptor 2 (PTH2R) and its ligand, tuberoinfundibular peptide of 39 residues (TIP39), constitute a neuromodulator system implicated in endocrine and nociceptive regulation. We now describe the presence and distribution of the PTH2R and TIP39 in the primate brain, using a range of tissues and ages from macaque and human brain. TIP39 mRNA, examined by in situ hybridization histochemistry in young macaque brain because of its possible decline beyond late postnatal ages, was present only in the thalamic subparafascicular area and the pontine medial paralemniscal nucleus. In contrast, in situ hybridization histochemistry in macaque identified high levels of PTH2R expression in the central amygdaloid nucleus, medial preoptic area, hypothalamic paraventricular and periventricular nuclei, medial geniculate, and the pontine tegmentum. PTH2R mRNA was also detected in several human brain areas by RT-PCR. The distribution of PTH2R-immunoreactive fibers in human, determined by immunocytochemistry, was similar to that in rodents, including dense fiber networks in the medial preoptic area, hypothalamic paraventricular, periventricular and infundibular (arcuate) nuclei, lateral hypothalamic area, median eminence, thalamic paraventricular nucleus, periaqueductal gray, lateral parabrachial nucleus, nucleus of the solitary tract, sensory trigeminal nuclei, medullary dorsal reticular nucleus, and dorsal horn of the spinal cord. Co-localization suggested that PTH2R fibers are glutamatergic, and that TIP39 may directly influence hypophysiotropic somatostatin-containing neurons and indirectly influence corticotropin-releasing-hormone-containing neurons. The results demonstrate that TIP39 and the PTH2R are expressed in the brain of primates in locations that suggest involvement in the regulation of fear, anxiety, reproductive behaviors, release of pituitary hormones, and nociception. PMID:19401215

  17. Social experience influences the development of a central auditory area

    NASA Astrophysics Data System (ADS)

    Cousillas, Hugo; George, Isabelle; Mathelier, Maryvonne; Richard, Jean-Pierre; Henry, Laurence; Hausberger, Martine

    2006-12-01

    Vocal communication develops under social influences that can enhance attention, an important factor in memory formation and perceptual tuning. In songbirds, social conditions can delay sensitive periods of development, overcome learning inhibitions and enable exceptional learning or induce selective learning. However, we do not know how social conditions influence auditory processing in the brain. In the present study, we raised young naive starlings under different social conditions but with the same auditory experience of adult songs, and we compared the effects of these different conditions on the development of the auditory cortex analogue. Several features appeared to be influenced by social experience, among them the proportion of auditory neuronal sites and the neuronal selectivity. Both physical and social isolation from adult models altered the development of the auditory area in parallel to alterations in vocal development. To our knowledge, this is the first evidence that social deprivation has as much influence on neuronal responsiveness as sensory deprivation.

  18. Altered auditory function in rats exposed to hypergravic fields

    NASA Technical Reports Server (NTRS)

    Jones, T. A.; Hoffman, L.; Horowitz, J. M.

    1982-01-01

    The effect of an orthodynamic hypergravic field of 6 G on the brainstem auditory projections was studied in rats. The brain temperature and EEG activity were recorded in the rats during 6 G orthodynamic acceleration and auditory brainstem responses were used to monitor auditory function. Results show that all animals exhibited auditory brainstem responses which indicated impaired conduction and transmission of brainstem auditory signals during the exposure to the 6 G acceleration field. Significant increases in central conduction time were observed for peaks 3N, 4P, 4N, and 5P (N = negative, P = positive), while the absolute latency values for these same peaks were also significantly increased. It is concluded that these results, along with those for fields below 4 G (Jones and Horowitz, 1981), indicate that impaired function proceeds in a rostro-caudal progression as field strength is increased.

  19. Social experience influences the development of a central auditory area.

    PubMed

    Cousillas, Hugo; George, Isabelle; Mathelier, Maryvonne; Richard, Jean-Pierre; Henry, Laurence; Hausberger, Martine

    2006-12-01

    Vocal communication develops under social influences that can enhance attention, an important factor in memory formation and perceptual tuning. In songbirds, social conditions can delay sensitive periods of development, overcome learning inhibitions and enable exceptional learning or induce selective learning. However, we do not know how social conditions influence auditory processing in the brain. In the present study, we raised young naive starlings under different social conditions but with the same auditory experience of adult songs, and we compared the effects of these different conditions on the development of the auditory cortex analogue. Several features appeared to be influenced by social experience, among them the proportion of auditory neuronal sites and the neuronal selectivity. Both physical and social isolation from adult models altered the development of the auditory area in parallel to alterations in vocal development. To our knowledge, this is the first evidence that social deprivation has as much influence on neuronal responsiveness as sensory deprivation.

  20. Egocentric and allocentric representations in auditory cortex

    PubMed Central

    Town, Stephen M.; Brimijoin, W. Owen; Bizley, Jennifer K.

    2017-01-01

    A key function of the brain is to provide a stable representation of an object’s location in the world. In hearing, sound azimuth and elevation are encoded by neurons throughout the auditory system, and auditory cortex is necessary for sound localization. However, the coordinate frame in which neurons represent sound space remains undefined: classical spatial receptive fields in head-fixed subjects can be explained either by sensitivity to sound source location relative to the head (egocentric) or relative to the world (allocentric encoding). This coordinate frame ambiguity can be resolved by studying freely moving subjects; here we recorded spatial receptive fields in the auditory cortex of freely moving ferrets. We found that most spatially tuned neurons represented sound source location relative to the head across changes in head position and direction. In addition, we also recorded a small number of neurons in which sound location was represented in a world-centered coordinate frame. We used measurements of spatial tuning across changes in head position and direction to explore the influence of sound source distance and speed of head movement on auditory cortical activity and spatial tuning. Modulation depth of spatial tuning increased with distance for egocentric but not allocentric units, whereas, for both populations, modulation was stronger at faster movement speeds. Our findings suggest that early auditory cortex primarily represents sound source location relative to ourselves but that a minority of cells can represent sound location in the world independent of our own position. PMID:28617796

  1. Egocentric and allocentric representations in auditory cortex.

    PubMed

    Town, Stephen M; Brimijoin, W Owen; Bizley, Jennifer K

    2017-06-01

    A key function of the brain is to provide a stable representation of an object's location in the world. In hearing, sound azimuth and elevation are encoded by neurons throughout the auditory system, and auditory cortex is necessary for sound localization. However, the coordinate frame in which neurons represent sound space remains undefined: classical spatial receptive fields in head-fixed subjects can be explained either by sensitivity to sound source location relative to the head (egocentric) or relative to the world (allocentric encoding). This coordinate frame ambiguity can be resolved by studying freely moving subjects; here we recorded spatial receptive fields in the auditory cortex of freely moving ferrets. We found that most spatially tuned neurons represented sound source location relative to the head across changes in head position and direction. In addition, we also recorded a small number of neurons in which sound location was represented in a world-centered coordinate frame. We used measurements of spatial tuning across changes in head position and direction to explore the influence of sound source distance and speed of head movement on auditory cortical activity and spatial tuning. Modulation depth of spatial tuning increased with distance for egocentric but not allocentric units, whereas, for both populations, modulation was stronger at faster movement speeds. Our findings suggest that early auditory cortex primarily represents sound source location relative to ourselves but that a minority of cells can represent sound location in the world independent of our own position.
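
    The distinction between the two coordinate frames can be made concrete with a small geometric sketch: an egocentric unit stays tuned to the head-referenced azimuth computed below, which changes as the head moves, whereas an allocentric unit stays tuned to the world position of the source itself. The function name, units and example positions are hypothetical illustrations, not the recording analysis used in the study.

      import numpy as np

      def egocentric_azimuth(source_xy, head_xy, head_direction_deg):
          # Convert a world-referenced (allocentric) source position into a
          # head-referenced (egocentric) azimuth, with 0 deg straight ahead.
          dx, dy = np.subtract(source_xy, head_xy)
          world_bearing = np.degrees(np.arctan2(dy, dx))               # bearing in the world frame
          return (world_bearing - head_direction_deg + 180) % 360 - 180  # wrap to -180..180

      # The same loudspeaker position gives different egocentric azimuths as the head turns.
      print(egocentric_azimuth((1.0, 1.0), (0.0, 0.0), 0))    # 45.0
      print(egocentric_azimuth((1.0, 1.0), (0.0, 0.0), 90))   # -45.0 after a 90-degree head turn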

  2. Anatomy, Physiology and Function of the Auditory System

    NASA Astrophysics Data System (ADS)

    Kollmeier, Birger

    The human ear consists of the outer ear (pinna or concha, outer ear canal, tympanic membrane), the middle ear (middle ear cavity with the three ossicles malleus, incus and stapes) and the inner ear (the cochlea, which is connected by the vestibule to the three semicircular canals that provide the sense of balance). The cochlea is connected to the brain stem via the eighth cranial nerve, i.e. the vestibulocochlear nerve or nervus statoacusticus. Subsequently, the acoustical information is processed by the brain at various levels of the auditory system. An overview of the anatomy of the auditory system is provided in Figure 1.

  3. Auditory-motor learning influences auditory memory for music.

    PubMed

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.

  4. Wireless multichannel biopotential recording using an integrated FM telemetry circuit.

    PubMed

    Mohseni, Pedram; Najafi, Khalil; Eliades, Steven J; Wang, Xiaoqin

    2005-09-01

    This paper presents a four-channel telemetric microsystem featuring on-chip alternating current amplification, direct current baseline stabilization, clock generation, time-division multiplexing, and wireless frequency-modulation transmission of microvolt- and millivolt-range input biopotentials in the very high frequency band of 94-98 MHz over a distance of approximately 0.5 m. It consists of a 4.84-mm2 integrated circuit, fabricated using a 1.5-microm double-poly double-metal n-well standard complementary metal-oxide semiconductor process, interfaced with only three off-chip components on a custom-designed printed-circuit board that measures 1.7 x 1.2 x 0.16 cm3, and weighs 1.1 g including two miniature 1.5-V batteries. We characterize the microsystem performance, operating in a truly wireless fashion in single-channel and multichannel operation modes, via extensive benchtop and in vitro tests in saline utilizing two different micromachined neural recording microelectrodes, while dissipating approximately 2.2 mW from a 3-V power supply. Moreover, we demonstrate successful wireless in vivo recording of spontaneous neural activity at 96.2 MHz from the auditory cortex of an awake marmoset monkey at several transmission distances ranging from 10 to 50 cm with signal-to-noise ratios in the range of 8.4-9.5 dB.
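
    The time-division multiplexing step mentioned in the abstract amounts to interleaving one sample per channel per frame before modulation and transmission; the sketch below shows that interleave/de-interleave round trip for four channels. The frame layout and sample counts are illustrative assumptions, not details taken from the paper's circuit.

      import numpy as np

      def tdm_multiplex(channels):
          # Interleave samples from several channels into one serial stream:
          # ch0[0], ch1[0], ..., chN[0], ch0[1], ch1[1], ...
          channels = np.asarray(channels)          # shape (n_channels, n_samples)
          return channels.T.reshape(-1)

      def tdm_demultiplex(stream, n_channels):
          # Recover the per-channel samples at the receiver.
          return stream.reshape(-1, n_channels).T

      # Four hypothetical biopotential channels, five samples each.
      frames = np.arange(20).reshape(4, 5)
      stream = tdm_multiplex(frames)
      assert np.array_equal(tdm_demultiplex(stream, 4), frames)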

  5. Restoration of multichannel microwave radiometric images

    NASA Technical Reports Server (NTRS)

    Chin, R. T.; Yeh, C.-L.; Olson, W. S.

    1985-01-01

    A constrained iterative image restoration method is applied to multichannel diffraction-limited imagery. This method is based on the Gerchberg-Papoulis algorithm utilizing incomplete information and partial constraints. The procedure is described using the orthogonal projection operators which project onto two prescribed subspaces iteratively. Its properties and limitations are presented. The effect of noise was investigated and a better understanding of the performance of the algorithm with noisy data has been achieved. The restoration scheme with the selection of appropriate constraints was applied to a practical problem. The 6.6, 10.7, 18, and 21 GHz satellite images obtained by the scanning multichannel microwave radiometer (SMMR), each having different spatial resolution, were restored to a common, high resolution (that of the 37 GHz channels) to demonstrate the effectiveness of the method. Both simulated data and real data were used in this study. The restored multichannel images may be utilized to retrieve rainfall distributions.
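
    The Gerchberg-Papoulis scheme referred to above alternates between two projections: forcing the estimate's spectrum to agree with the measured (band-limited) components, and forcing the estimate to satisfy a prior constraint. The 1-D sketch below uses non-negativity as the constraint and synthetic data; the actual constraints and the 2-D processing applied to the SMMR channels differ, so this is only a minimal illustration of the iteration.

      import numpy as np

      def gerchberg_papoulis(measured, known_mask, n_iter=50):
          # 'measured' is the band-limited signal; 'known_mask' marks the Fourier
          # bins that were actually measured. Each iteration projects onto
          # (1) spectra consistent with the measured bins and
          # (2) signals satisfying a simple spatial constraint (non-negativity).
          measured_spectrum = np.fft.fft(measured)
          estimate = measured.copy()
          for _ in range(n_iter):
              spectrum = np.fft.fft(estimate)
              spectrum[known_mask] = measured_spectrum[known_mask]   # projection 1: data consistency
              estimate = np.real(np.fft.ifft(spectrum))
              estimate = np.clip(estimate, 0, None)                  # projection 2: non-negativity
          return estimate

      # Toy example: restore a non-negative signal from its low-pass measurement.
      n = 128
      truth = np.zeros(n); truth[40:50] = 1.0
      mask = np.abs(np.fft.fftfreq(n)) < 0.1                         # measured (low) frequencies only
      lowpass = np.real(np.fft.ifft(np.fft.fft(truth) * mask))
      restored = gerchberg_papoulis(lowpass, mask)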

  6. Restoration of multichannel microwave radiometric images.

    PubMed

    Chin, R T; Yeh, C L; Olson, W S

    1985-04-01

    A constrained iterative image restoration method is applied to multichannel diffraction-limited imagery. This method is based on the Gerchberg-Papoulis algorithm utilizing incomplete information and partial constraints. The procedure is described using the orthogonal projection operators which project onto two prescribed subspaces iteratively. Its properties and limitations are presented. The effect of noise was investigated and a better understanding of the performance of the algorithm with noisy data has been achieved. The restoration scheme with the selection of appropriate constraints was applied to a practical problem. The 6.6, 10.7, 18, and 21 GHz satellite images obtained by the scanning multichannel microwave radiometer (SMMR), each having different spatial resolution, were restored to a common, high resolution (that of the 37 GHz channels) to demonstrate the effectiveness of the method. Both simulated data and real data were used in this study. The restored multichannel images may be utilized to retrieve rainfall distributions.

  7. Multichannel framework for singular quantum mechanics

    SciTech Connect

    Camblong, Horacio E.; Epele, Luis N.; Fanchiotti, Huner; García Canal, Carlos A.; Ordóñez, Carlos R.

    2014-01-15

    A multichannel S-matrix framework for singular quantum mechanics (SQM) subsumes the renormalization and self-adjoint extension methods and resolves its boundary-condition ambiguities. In addition to the standard channel accessible to a distant (“asymptotic”) observer, one supplementary channel opens up at each coordinate singularity, where local outgoing and ingoing singularity waves coexist. The channels are linked by a fully unitary S-matrix, which governs all possible scenarios, including cases with an apparent nonunitary behavior as viewed from asymptotic distances.
    Highlights:
    • A multichannel framework is proposed for singular quantum mechanics and analogues.
    • The framework unifies several established approaches for singular potentials.
    • Singular points are treated as new scattering channels.
    • Nonunitary asymptotic behavior is subsumed in a unitary multichannel S-matrix.
    • Conformal quantum mechanics and the inverse quartic potential are highlighted.
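
    The unitarity bookkeeping described above can be illustrated numerically: the reduction to the asymptotic channel alone can look nonunitary (|S00|^2 < 1) even though the full matrix, including the channel opened at the singularity, is exactly unitary. The 2x2 parametrization below is a generic example chosen for illustration, not one derived from a particular singular potential.

      import numpy as np

      # Hypothetical 2x2 S-matrix linking the asymptotic channel (index 0) and the
      # channel opened at the coordinate singularity (index 1). Probability "lost"
      # from the asymptotic channel alone is carried by the singularity channel.
      theta, phi = 0.4, 1.1
      S = np.array([[np.cos(theta) * np.exp(1j * phi), 1j * np.sin(theta)],
                    [1j * np.sin(theta),               np.cos(theta) * np.exp(-1j * phi)]])

      print(abs(S[0, 0]) ** 2)                        # < 1: apparent nonunitarity in one channel
      print(np.allclose(S.conj().T @ S, np.eye(2)))   # True: the full S-matrix is unitary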

  8. Spectrotemporal resolution tradeoff in auditory processing as revealed by human auditory brainstem responses and psychophysical indices.

    PubMed

    Bidelman, Gavin M; Syed Khaja, Ameenuddin

    2014-06-20

    Auditory filter theory dictates a physiological compromise between frequency and temporal resolution of cochlear signal processing. We examined neurophysiological correlates of these spectrotemporal tradeoffs in the human auditory system using auditory evoked brain potentials and psychophysical responses. Temporal resolution was assessed using scalp-recorded auditory brainstem responses (ABRs) elicited by paired clicks. The inter-click interval (ICI) between successive pulses was parameterized from 0.7 to 25 ms to map ABR amplitude recovery as a function of stimulus spacing. Behavioral frequency difference limens (FDLs) and auditory filter selectivity (Q10 of psychophysical tuning curves) were obtained to assess relations between behavioral spectral acuity and electrophysiological estimates of temporal resolvability. Neural responses increased monotonically in amplitude with increasing ICI, ranging from total suppression (0.7 ms) to full recovery (25 ms) with a temporal resolution of ∼3-4 ms. ABR temporal thresholds were correlated with behavioral Q10 (frequency selectivity) but not FDLs (frequency discrimination); no correspondence was observed between Q10 and FDLs. Results suggest that finer frequency selectivity, but not discrimination, is associated with poorer temporal resolution. The inverse relation between ABR recovery and perceptual frequency tuning demonstrates a time-frequency tradeoff between the temporal and spectral resolving power of the human auditory system.
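
    The ~3-4 ms temporal resolution quoted above is the kind of figure read off an ABR recovery function; the sketch below interpolates the inter-click interval at which the normalized wave amplitude reaches a criterion level. The amplitude values and the 50% criterion are hypothetical stand-ins, not the study's data or its exact threshold definition.

      import numpy as np

      def temporal_threshold(icis_ms, amplitudes, criterion=0.5):
          # Normalize to the fully recovered (long-ICI) response and interpolate
          # the ICI at which recovery first reaches the criterion level.
          norm = np.asarray(amplitudes, float) / amplitudes[-1]
          return float(np.interp(criterion, norm, icis_ms))

      icis = [0.7, 1, 2, 4, 8, 16, 25]                 # inter-click intervals, ms
      amps = [0.0, 0.05, 0.3, 0.7, 0.9, 0.98, 1.0]     # hypothetical normalized amplitudes
      print(temporal_threshold(icis, amps))             # ~3 ms with these made-up values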

  9. Repeated restraint stress impairs auditory attention and GABAergic synaptic efficacy in the rat auditory cortex.

    PubMed

    Pérez, Miguel Ángel; Pérez-Valenzuela, Catherine; Rojas-Thomas, Felipe; Ahumada, Juan; Fuenzalida, Marco; Dagnino-Subiabre, Alexies

    2013-08-29

    Chronic stress induces dendritic atrophy in the rat primary auditory cortex (A1), a key brain area for auditory attention. The aim of this study was to determine whether repeated restraint stress affects auditory attention and synaptic transmission in A1. Male Sprague-Dawley rats were trained in a two-alternative choice task (2-ACT), a behavioral paradigm to study auditory attention in rats. Trained animals that reached a performance over 80% of correct trials in the 2-ACT were randomly assigned to control and restraint stress experimental groups. To analyze the effects of restraint stress on auditory attention, trained rats of both groups were subjected to 50 2-ACT trials one day before and one day after the stress period. A difference score was determined by subtracting the number of correct trials after the stress protocol from the number before it. Another set of rats was used to study synaptic transmission in A1. Restraint stress decreased the number of correct trials by 28% compared to the performance of control animals (p < 0.001). Furthermore, stress reduced the frequency of spontaneous inhibitory postsynaptic currents (sIPSC) and miniature IPSC in A1, whereas glutamatergic efficacy was not affected. Our results demonstrate that restraint stress decreased auditory attention and GABAergic synaptic efficacy in A1. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.

  10. Auditory peripersonal space in humans: a case of auditory-tactile extinction.

    PubMed

    Làdavas, E; Pavani, F; Farnè, A

    2001-01-01

    Animal experiments have shown that the spatial correspondence between auditory and tactile receptive fields of ventral pre-motor neurons provides a map of auditory peripersonal space around the head. This allows neurons to localize a near sound with respect to the head. In the present study, we demonstrated the existence of an auditory peripersonal space around the head in humans. In a right-brain damaged patient with tactile extinction, a sound delivered near the ipsilesional side of the head extinguished a tactile stimulus delivered to the contralesional side of the head (cross-modal auditory-tactile extinction). In contrast, when an auditory stimulus was presented far from the head, cross-modal extinction was dramatically reduced. This spatially specific cross-modal extinction was found only when a complex sound like a white noise burst was presented; pure tones did not produce spatially specific cross-modal extinction. These results show a high degree of functional similarity between the characteristics of the auditory peripersonal space representation in humans and monkeys. This similarity suggests that analogous physiological substrates might be responsible for coding this multisensory integrated representation of peripersonal space in human and non-human primates.

  11. Dynamic multi-channel TMS with reconfigurable coil