Science.gov

Sample records for multichannel auditory brain

  1. Electrically evoked auditory brain stem responses (EABR) and middle latency responses (EMLR) obtained from patients with the nucleus multichannel cochlear implant.

    PubMed

    Shallop, J K; Beiter, A L; Goin, D W; Mischke, R E

    1990-02-01

    Electrical auditory brain stem responses (EABR) and electrical middle latency responses (EMLR) were recorded from patients who had received the Nucleus multichannel cochlear implant system. Twenty-five sequential patients had either intraoperative or outpatient EABR testing. We also recorded EMLRs from several outpatients. EABR results were consistent among all patients tested. Wave V mean latencies were the shortest (3.82 msec) for the most apical electrode (E20) and increased slightly for the medial (E12) and basal (E5) electrodes (3.94 and 4.20 msec, respectively). Absolute latencies for all EABR component waves were observed to be 1 to 1.5 msec shorter than typical acoustic auditory brain stem response (ABR) mean latencies. We have examined the relationships between patients' EABR/EMLR and their behavioral responses to electrical stimulation. Generally, the behavioral threshold and comfort current levels were lower than the predicted values based on EABR/EMLR findings. This observation may be due in part to psychophysical loudness differences noted for pulse rates of 10 to 500 pulses per second in some of the patients that we have studied in greater detail.

  2. Multichannel Spatial Auditory Display for Speech Communications

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Erbe, Tom

    1994-01-01

    A spatial auditory display for multiple speech communications was developed at NASA/Ames Research Center. Input is spatialized by the use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four-letter call signs used by launch personnel at NASA against diotic speech babble. Spatial positions at 30 degree azimuth increments were evaluated. The results from eight subjects showed a maximum intelligibility improvement of about 6-7 dB when the signal was spatialized to 60 or 90 degree azimuth positions.
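
    The HRTF-based spatialization described above reduces, per ear, to FIR filtering of the mono input with a head-related impulse response. Below is a minimal pure-Python sketch of that step (a toy illustration, not the Motorola 56001 implementation; the two-tap impulse responses are invented for demonstration):

```python
def fir_filter(signal, taps):
    """Direct-form FIR convolution: y[n] = sum_k taps[k] * x[n - k]."""
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for k, h in enumerate(taps):
            if n - k >= 0:
                acc += h * signal[n - k]
        out.append(acc)
    return out

def spatialize(mono, hrir_left, hrir_right):
    """Produce a binaural (left, right) pair by convolving the mono input
    with a head-related impulse response for each ear."""
    return fir_filter(mono, hrir_left), fir_filter(mono, hrir_right)

# Invented toy HRIR pair: the right ear gets a delayed, attenuated copy,
# mimicking a source off to the listener's left.
hrir_l = [1.0, 0.3]
hrir_r = [0.0, 0.0, 0.6, 0.2]          # ~2-sample interaural time delay

left, right = spatialize([1.0, 0.0, 0.0, 0.0, 0.0], hrir_l, hrir_r)
```

    A real display would use measured HRIR pairs of a hundred or more taps per azimuth position and run the convolution on DSP hardware.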

  3. Multichannel spatial auditory display for speech communications

    NASA Technical Reports Server (NTRS)

    Begault, D. R.; Erbe, T.; Wenzel, E. M. (Principal Investigator)

    1994-01-01

    A spatial auditory display for multiple speech communications was developed at NASA/Ames Research Center. Input is spatialized by the use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four-letter call signs used by launch personnel at NASA against diotic speech babble. Spatial positions at 30 degree azimuth increments were evaluated. The results from eight subjects showed a maximum intelligibility improvement of about 6-7 dB when the signal was spatialized to 60 or 90 degree azimuth positions.

  4. Consequences of Broad Auditory Filters for Identification of Multichannel-Compressed Vowels

    ERIC Educational Resources Information Center

    Souza, Pamela; Wright, Richard; Bor, Stephanie

    2012-01-01

    Purpose: In view of previous findings (Bor, Souza, & Wright, 2008) that some listeners are more susceptible to spectral changes from multichannel compression (MCC) than others, this study addressed the extent to which differences in effects of MCC were related to differences in auditory filter width. Method: Listeners were recruited in 3 groups:…

  5. A multichannel time-domain brain oximeter for clinical studies

    NASA Astrophysics Data System (ADS)

    Contini, Davide; Spinelli, Lorenzo; Caffini, Matteo; Cubeddu, Rinaldo; Torricelli, Alessandro

    2009-07-01

    We developed and optimized a multichannel dual-wavelength time-domain brain oximeter for functional studies in the clinical environment. The system, mounted on a 19"-rack, is interfaced with instrumentation for monitoring physiological parameters and for stimuli presentation.

  6. Improving auditory steady-state response detection using independent component analysis on multichannel EEG data.

    PubMed

    Van Dun, Bram; Wouters, Jan; Moonen, Marc

    2007-07-01

    Over the last decade, the detection of auditory steady-state responses (ASSR) has been developed for reliable hearing threshold estimation at audiometric frequencies. Unfortunately, the duration of an ASSR measurement can be long, which is impractical for wide-scale clinical application. In this paper, we propose independent component analysis (ICA) as a tool to improve ASSR detection in recorded single-channel as well as multichannel electroencephalogram (EEG) data. We conclude that ICA is able to reduce measurement duration significantly. For a multichannel implementation, near-optimal performance is obtained with five-channel recordings. PMID:17605353
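
    The ASSR detection that ICA is meant to speed up is typically a spectral test: compare power at the stimulation frequency against neighboring noise bins. A rough pure-Python sketch of that detection step (illustrative only; the modulation frequency, noise-bin count, simulated EEG, and threshold are invented, and the paper's ICA stage is not shown):

```python
import math, random

def bin_power(x, freq, fs):
    """Squared DFT magnitude of x at a single frequency, scaled by 1/n."""
    n = len(x)
    re = sum(x[i] * math.cos(2 * math.pi * freq * i / fs) for i in range(n))
    im = sum(x[i] * math.sin(2 * math.pi * freq * i / fs) for i in range(n))
    return (re * re + im * im) / n

def snr_at(x, freq, fs, k_neighbors=4):
    """Power at the stimulation frequency divided by the mean power of
    nearby off-frequency bins (an F-ratio-style ASSR detector)."""
    df = fs / len(x)                      # DFT bin spacing
    noise = [bin_power(x, freq + s * m * df, fs)
             for m in range(1, k_neighbors + 1) for s in (-1, 1)]
    return bin_power(x, freq, fs) / (sum(noise) / len(noise))

# Simulated 4 s single-channel EEG at 250 Hz: a weak 40 Hz ASSR in noise.
random.seed(0)
fs, f_mod, n = 250, 40.0, 1000
eeg = [0.5 * math.sin(2 * math.pi * f_mod * i / fs) + random.gauss(0.0, 1.0)
       for i in range(n)]
detected = snr_at(eeg, f_mod, fs) > 4.0   # crude detection criterion
```

    Denoising the EEG first (e.g., with ICA, as the record proposes) raises this SNR, so the same criterion is reached with a shorter recording.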

  7. A Brain System for Auditory Working Memory

    PubMed Central

    Joseph, Sabine; Gander, Phillip E.; Barascud, Nicolas; Halpern, Andrea R.; Griffiths, Timothy D.

    2016-01-01

    The brain basis for auditory working memory, the process of actively maintaining sounds in memory over short periods of time, is controversial. Using functional magnetic resonance imaging in human participants, we demonstrate that the maintenance of single tones in memory is associated with activation in auditory cortex. In addition, sustained activation was observed in hippocampus and inferior frontal gyrus. Multivoxel pattern analysis showed that patterns of activity in auditory cortex and left inferior frontal gyrus distinguished the tone that was maintained in memory. Functional connectivity during maintenance was demonstrated between auditory cortex and both the hippocampus and inferior frontal cortex. The data support a system for auditory working memory based on the maintenance of sound-specific representations in auditory cortex by projections from higher-order areas, including the hippocampus and frontal cortex. SIGNIFICANCE STATEMENT In this work, we demonstrate a system for maintaining sound in working memory based on activity in auditory cortex, hippocampus, and frontal cortex, and functional connectivity among them. Specifically, our work makes three advances from the previous work. First, we robustly demonstrate hippocampal involvement in all phases of auditory working memory (encoding, maintenance, and retrieval): the role of hippocampus in working memory is controversial. Second, using a pattern classification technique, we show that activity in the auditory cortex and inferior frontal gyrus is specific to the maintained tones in working memory. Third, we show long-range connectivity of auditory cortex to hippocampus and frontal cortex, which may be responsible for keeping such representations active during working memory maintenance. PMID:27098693

  8. Multi-channel spatial auditory display for speech communications

    NASA Astrophysics Data System (ADS)

    Begault, Durand; Erbe, Tom

    1993-10-01

    A spatial auditory display for multiple speech communications was developed at NASA-Ames Research Center. Input is spatialized by use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four letter call signs used by launch personnel at NASA, against diotic speech babble. Spatial positions at 30 deg azimuth increments were evaluated. The results from eight subjects showed a maximal intelligibility improvement of about 6 to 7 dB when the signal was spatialized to 60 deg or 90 deg azimuth positions.

  9. Multi-channel spatial auditory display for speech communications

    NASA Technical Reports Server (NTRS)

    Begault, Durand; Erbe, Tom

    1993-01-01

    A spatial auditory display for multiple speech communications was developed at NASA-Ames Research Center. Input is spatialized by use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four letter call signs used by launch personnel at NASA, against diotic speech babble. Spatial positions at 30 deg azimuth increments were evaluated. The results from eight subjects showed a maximal intelligibility improvement of about 6 to 7 dB when the signal was spatialized to 60 deg or 90 deg azimuth positions.
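
    The adaptive staircase procedure used in these intelligibility measurements can be sketched as a transformed up-down rule. Here is a minimal pure-Python version (an assumption-laden illustration, not the authors' exact protocol; a 1-up/2-down rule converging near the 70.7%-correct point is used):

```python
def staircase(respond, start_db, step_db, n_reversals=8):
    """1-up/2-down adaptive staircase: two consecutive correct responses
    lower the signal level by one step, any incorrect response raises it.
    The track converges near the 70.7%-correct point; the threshold
    estimate is the mean of the last half of the reversal levels."""
    level, streak, direction, reversals = start_db, 0, None, []
    while len(reversals) < n_reversals:
        if respond(level):
            streak += 1
            if streak < 2:
                continue                 # need two in a row to step down
            streak, new_dir = 0, 'down'
            level -= step_db
        else:
            streak, new_dir = 0, 'up'
            level += step_db
        if direction is not None and new_dir != direction:
            reversals.append(level)      # direction change: record a reversal
        direction = new_dir
    tail = reversals[n_reversals // 2:]
    return sum(tail) / len(tail)

# Deterministic toy listener: always correct at or above 0 dB, never below.
threshold = staircase(lambda lv: lv >= 0.0, start_db=10.0, step_db=2.0)
```

    In the study, `respond` would be a trial in which the subject identifies a call sign against babble at the given signal-to-noise ratio.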

  10. Assessment of an ICA-based noise reduction method for multi-channel auditory evoked potentials

    NASA Astrophysics Data System (ADS)

    Mirahmadizoghi, Siavash; Bell, Steven; Simpson, David

    2015-03-01

    In this work, a new independent component analysis (ICA) based method for noise reduction in evoked potentials is evaluated for auditory late responses (ALR) captured with a 63-channel electroencephalogram (EEG) from 10 normal-hearing subjects. The performance of the new method is compared with a single-channel alternative in terms of signal-to-noise ratio (SNR), the number of channels with an SNR above an empirically derived statistical critical value, and an estimate of hearing threshold. The results show that the multichannel signal processing method can significantly enhance the quality of the signal and also detects hearing thresholds significantly lower than those obtained with the single-channel alternative.

  11. Visual and auditory brain-computer interfaces.

    PubMed

    Gao, Shangkai; Wang, Yijun; Gao, Xiaorong; Hong, Bo

    2014-05-01

    Over the past several decades, electroencephalogram (EEG)-based brain-computer interfaces (BCIs) have attracted attention from researchers in the fields of neuroscience, neural engineering, and clinical rehabilitation. While the performance of BCI systems has improved, they do not yet support widespread usage. Recently, visual and auditory BCI systems have become popular because of their high communication speeds, minimal user training requirements, and low variation across users. However, building robust and practical BCI systems from physiological and technical knowledge of neural modulation of visual and auditory brain responses remains a challenging problem. In this paper, we review the current state and future challenges of visual and auditory BCI systems. First, we describe a new taxonomy based on the multiple access methods used in telecommunication systems. Then, we discuss the challenges of translating current technology into real-life practices and outline potential avenues to address them. Specifically, this review aims to provide useful guidelines for exploring new paradigms and methodologies to improve the current visual and auditory BCI technology.

  12. Consequences of broad auditory filters for identification of multichannel-compressed vowels

    PubMed Central

    Souza, Pamela; Wright, Richard; Bor, Stephanie

    2012-01-01

    Purpose In view of previous findings (Bor, Souza, & Wright, 2008) that some listeners are more susceptible to spectral changes from multichannel compression (MCC) than others, this study addressed the extent to which differences in effects of MCC were related to differences in auditory filter width. Method Listeners were recruited in three groups: listeners with flat sensorineural loss, listeners with sloping sensorineural loss, and a control group of listeners with normal hearing. Individual auditory filter measurements were obtained at 500 and 2000 Hz. The filter widths were related to identification of vowels processed with 16-channel MCC and with a control (linear) condition. Results Listeners with flat loss had broader filters at 500 Hz, but not at 2000 Hz, compared to listeners with sloping loss. Vowel identification was poorer for MCC compared to linear amplification. Listeners with flat loss made more errors than listeners with sloping loss, and there was a significant relationship between filter width and the effects of MCC. Conclusions Broadened auditory filters can reduce the ability to process amplitude-compressed vowel spectra. This suggests that individual frequency selectivity is one factor that influences the benefit of MCC when a high number of compression channels is used. PMID:22207696

  13. Multichannel Brain-Signal-Amplifying and Digitizing System

    NASA Technical Reports Server (NTRS)

    Gevins, Alan

    2005-01-01

    An apparatus has been developed for use in acquiring multichannel electroencephalographic (EEG) data from a human subject. EEG apparatuses with many channels in use heretofore have been too heavy and bulky to be worn, and have been limited in dynamic range to no more than 18 bits. The present apparatus is small and light enough to be worn by the subject. It is capable of amplifying EEG signals and digitizing them to 22 bits in as many as 150 channels. The apparatus is controlled by software and is plugged into the USB port of a personal computer. This apparatus makes it possible, for the first time, to obtain high-resolution functional EEG images of a thinking brain in a real-life, ambulatory setting outside a research laboratory or hospital.

  14. The utility of multichannel local field potentials for brain-machine interfaces

    NASA Astrophysics Data System (ADS)

    Hwang, Eun Jung; Andersen, Richard A.

    2013-08-01

    Objective. Local field potentials (LFPs) that carry information about the subject's motor intention have the potential to serve as a complement or alternative to spike signals for brain-machine interfaces (BMIs). The goal of this study is to assess the utility of LFPs for BMIs by characterizing the largely unknown information coding properties of multichannel LFPs. Approach. Two monkeys were implanted, each with a 16-channel electrode array, in the parietal reach region where both LFPs and spikes are known to encode the subject's intended reach target. We examined how multichannel LFPs recorded during a reach task jointly carry reach target information, and compared the LFP performance to simultaneously recorded multichannel spikes. Main Results. LFPs yielded a higher number of channels that were informative about reach targets than spikes. Single channel LFPs provided more accurate target information than single channel spikes. However, LFPs showed significantly larger signal and noise correlations across channels than spikes. Reach target decoders performed worse when using multichannel LFPs than multichannel spikes. The underperformance of multichannel LFPs was mostly due to their larger noise correlation because noise de-correlated multichannel LFPs produced a decoding accuracy comparable to multichannel spikes. Despite the high noise correlation, decoders using LFPs in addition to spikes outperformed decoders using only spikes. Significance. These results demonstrate that multichannel LFPs could effectively complement spikes for BMI applications by yielding more informative channels. The utility of multichannel LFPs may be further augmented if their high noise correlation can be taken into account by decoders.
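
    Why noise correlation limits the benefit of pooling channels, as reported above, can be seen from the variance of an across-channel average. A small sketch of that relationship (a textbook formula for equicorrelated noise, not an analysis from this study; the channel count and correlation values are invented):

```python
def pooled_noise_var(sigma2, n_ch, rho):
    """Variance of the across-channel average when each channel has noise
    variance sigma2 and every pair of channels shares correlation rho:
        Var(mean) = sigma2 * (1 + (n_ch - 1) * rho) / n_ch
    With rho = 0 the noise averages away as 1/n; as rho approaches 1,
    adding channels buys almost nothing."""
    return sigma2 * (1.0 + (n_ch - 1) * rho) / n_ch

independent = pooled_noise_var(1.0, 16, 0.0)   # fully independent noise
correlated = pooled_noise_var(1.0, 16, 0.6)    # strongly shared noise
```

    This is consistent with the finding that noise-decorrelated multichannel LFPs decode about as well as multichannel spikes: removing the shared noise restores the 1/n averaging benefit.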

  15. The SRI24 multichannel brain atlas: construction and applications

    NASA Astrophysics Data System (ADS)

    Rohlfing, Torsten; Zahr, Natalie M.; Sullivan, Edith V.; Pfefferbaum, Adolf

    2008-03-01

    We present a new standard atlas of the human brain based on magnetic resonance images. The atlas was generated using unbiased population registration from high-resolution images obtained by multichannel-coil acquisition at 3T in a group of 24 normal subjects. The final atlas comprises three anatomical channels (T1-weighted, early and late spin echo), three diffusion-related channels (fractional anisotropy, mean diffusivity, diffusion-weighted image), and three tissue probability maps (CSF, gray matter, white matter). The atlas is dynamic in that it is implicitly represented by nonrigid transformations between the 24 subject images, as well as distortion-correction alignments between the image channels in each subject. The atlas can, therefore, be generated at essentially arbitrary image resolutions and orientations (e.g., AC/PC aligned), without compounding interpolation artifacts. We demonstrate in this paper two different applications of the atlas: (a) region definition by label propagation in a fiber tracking study is enabled by the increased sharpness of our atlas compared with other available atlases, and (b) spatial normalization is enabled by its average shape property. In summary, our atlas has unique features and will be made available to the scientific community as a resource and reference system for future imaging-based studies of the human brain.

  16. The Human Brain Maintains Contradictory and Redundant Auditory Sensory Predictions

    PubMed Central

    Pieszek, Marika; Widmann, Andreas; Gruber, Thomas; Schröger, Erich

    2013-01-01

    Computational and experimental research has revealed that auditory sensory predictions are derived from regularities of the current environment by using internal generative models. However, so far, what has not been addressed is how the auditory system handles situations giving rise to redundant or even contradictory predictions derived from different sources of information. To this end, we measured error signals in the event-related brain potentials (ERPs) in response to violations of auditory predictions. Sounds could be predicted on the basis of overall probability, i.e., one sound was presented frequently and another sound rarely. Furthermore, each sound was predicted by an informative visual cue. Participants’ task was to use the cue and to discriminate the two sounds as fast as possible. Violations of the probability-based prediction (i.e., a rare sound) as well as violations of the visual-auditory prediction (i.e., an incongruent sound) elicited error signals in the ERPs (Mismatch Negativity [MMN] and Incongruency Response [IR]). The respective error signals were observed even when the overall probability and the visual cue predicted different sounds. That is, the auditory system concurrently maintains and tests contradictory predictions. Moreover, if the same sound was predicted, we observed an additive error signal (scalp potential and primary current density) equaling the sum of the specific error signals. Thus, the auditory system maintains and tolerates functionally independently represented redundant and contradictory predictions. We argue that the auditory system exploits all currently active regularities in order to optimally prepare for future events. PMID:23308266

  17. Evoked potential correlates of selective attention with multi-channel auditory inputs

    NASA Technical Reports Server (NTRS)

    Schwent, V. L.; Hillyard, S. A.

    1975-01-01

    Ten subjects were presented with random, rapid sequences of four auditory tones which were separated in pitch and apparent spatial position. The N1 component of the auditory vertex evoked potential (EP) measured relative to a baseline was observed to increase with attention. It was concluded that the N1 enhancement reflects a finely tuned selective attention to one stimulus channel among several concurrent, competing channels. This EP enhancement probably increases with increased information load on the subject.

  18. Infant Auditory Processing and Event-related Brain Oscillations

    PubMed Central

    Musacchia, Gabriella; Ortiz-Mantilla, Silvia; Realpe-Bonilla, Teresa; Roesler, Cynthia P.; Benasich, April A.

    2015-01-01

    Rapid auditory processing and acoustic change detection abilities play a critical role in allowing human infants to efficiently process the fine spectral and temporal changes that are characteristic of human language. These abilities lay the foundation for effective language acquisition, allowing infants to home in on the sounds of their native language. Invasive procedures in animals and scalp-recorded potentials from human adults suggest that simultaneous, rhythmic activity (oscillations) between and within brain regions is fundamental to sensory development, determining the resolution with which incoming stimuli are parsed. At this time, little is known about oscillatory dynamics in human infant development. However, animal neurophysiology and adult EEG data provide the basis for a strong hypothesis that rapid auditory processing in infants is mediated by oscillatory synchrony in discrete frequency bands. To investigate this, 128-channel, high-density EEG responses of 4-month-old infants to frequency change in tone pairs, presented in two rate conditions (Rapid: 70 msec ISI and Control: 300 msec ISI), were examined. To determine the frequency band and magnitude of activity, auditory evoked response averages were first co-registered with age-appropriate brain templates. Next, the principal components of the response were identified and localized using a two-dipole model of brain activity. Single-trial analysis of oscillatory power showed a robust index of frequency change processing in bursts of theta band (3-8 Hz) activity in both right and left auditory cortices, with left activation more prominent in the Rapid condition. These methods have produced data that are not only some of the first reported evoked oscillation analyses in infants, but are also, importantly, the product of a well-established method of recording and analyzing clean, meticulously collected, infant EEG and ERPs. In this article, we describe our method for infant EEG net
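
    The theta-band single-trial power measure described above can be approximated by averaging DFT bin power across the 3-8 Hz band. A minimal pure-Python sketch (illustrative only; the sampling rate and the toy 5 Hz "trial" are invented, and real pipelines would use windowing and time-frequency decompositions):

```python
import math

def band_power(x, fs, f_lo, f_hi):
    """Mean squared DFT magnitude (scaled by 1/n) over the bins whose
    frequency falls inside [f_lo, f_hi]."""
    n = len(x)
    total, count = 0.0, 0
    for k in range(1, n // 2):
        f = k * fs / n                   # frequency of bin k
        if f_lo <= f <= f_hi:
            re = sum(x[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
            im = sum(x[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
            total += (re * re + im * im) / n
            count += 1
    return total / count if count else 0.0

fs, n = 125, 250                          # 2 s toy single trial
trial = [math.sin(2 * math.pi * 5.0 * i / fs) for i in range(n)]  # 5 Hz burst
theta = band_power(trial, fs, 3.0, 8.0)   # theta band, as in the record
alpha = band_power(trial, fs, 9.0, 12.0)  # adjacent band for comparison
```

    A 5 Hz oscillation shows up as theta-band power with essentially no alpha-band power, which is the kind of band-specific contrast the analysis above relies on.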

  19. Source localization of auditory evoked responses from a human brain with an atomic magnetometer

    NASA Astrophysics Data System (ADS)

    Kim, K.; Xia, H.; Ben-Amar Baranga, A.; Hoffman, D.; Romalis, M. V.

    2007-03-01

    We report the first measurements of auditory evoked fields (AEF) in a human brain with an atomic magnetometer system and discuss techniques for magnetic source localization using this system. Until the recent development of spin-exchange relaxation-free (SERF) atomic magnetometers with a sensitivity of 0.5 fT/Hz^1/2, only SQUID magnetometers had sufficient sensitivity to record a magnetoencephalogram (MEG). With simple multi-channel operation and no cryogenic maintenance, the atomic magnetometer provides a promising alternative for brain activity measurements. A clear N100m feature in the AEF was observed after averaging over 600 stimuli. Currently the intrinsic magnetic noise level is 3.5 fT/Hz^1/2 at 10 Hz. Optical detection of magnetic fields allows flexibility in magnetic mapping while at the same time imposing certain geometrical constraints. To investigate the magnetic source localization capabilities of the atomic MEG system, we performed extensive numerical simulations and measurements with a brain phantom consisting of an artificial current source in a saline-filled sphere. We will discuss the results of numerical analysis and experimental implementation of magnetic source localization with the atomic magnetometer.

  20. Brain Region-Specific Activity Patterns after Recent or Remote Memory Retrieval of Auditory Conditioned Fear

    ERIC Educational Resources Information Center

    Kwon, Jeong-Tae; Jhang, Jinho; Kim, Hyung-Su; Lee, Sujin; Han, Jin-Hee

    2012-01-01

    Memory is thought to be sparsely encoded throughout multiple brain regions forming unique memory trace. Although evidence has established that the amygdala is a key brain site for memory storage and retrieval of auditory conditioned fear memory, it remains elusive whether the auditory brain regions may be involved in fear memory storage or…

  1. Brain Mapping of Language and Auditory Perception in High-Functioning Autistic Adults: A PET Study.

    ERIC Educational Resources Information Center

    Muller, R-A.; Behen, M. E.; Rothermel, R. D.; Chugani, D. C.; Muzik, O.; Mangner, T. J.; Chugani, H. T.

    1999-01-01

    A study used positron emission tomography (PET) to study patterns of brain activation during auditory processing in five high-functioning adults with autism. Results found that participants showed reversed hemispheric dominance during the verbal auditory stimulation and reduced activation of the auditory cortex and cerebellum. (CR)

  2. Brain-stem auditory evoked potentials and brain death.

    PubMed

    Machado, C; Valdés, P; García-Tigera, J; Virues, T; Biscay, R; Miranda, J; Coutin, P; Román, J; García, O

    1991-01-01

    BAEP records were obtained from 30 brain-dead patients. Three BAEP patterns were observed: (1) no identifiable waves (73.34%), (2) an isolated bilateral wave I (16.66%), and (3) an isolated unilateral wave I (10%). When wave I was present, it was always significantly delayed. Significant augmentation of wave I amplitude was present bilaterally in one case and unilaterally in another. On the other hand, in serial records from 3 cases, wave I latency tended to increase progressively until this component disappeared. During the same period, wave I amplitude fluctuations were observed. A significant negative correlation was found for wave I latency with heart rate and body temperature in 1 case. Two facts might explain the progressive delay and disappearance of wave I in brain-dead patients: progressive hypoxic-ischaemic dysfunction of the cochlea and the eighth nerve, plus the hypothermia often present in brain-dead patients. Thus, the incidence of wave I preservation reported by different authors in single BAEP records from brain-dead patients might depend on the moment at which the evoked potential study was done in relation to the onset of the clinical state. It is suggested that, although BAEPs provide an objective electrophysiological assessment of brain-stem function, essential for brain death diagnosis, this technique may be of no value for this purpose when used in isolation.

  3. Evaluation of vertigo by auditory brain stem response.

    PubMed

    Welsh, Louis W; Welsh, John J; Rosen, Laurie G

    2002-08-01

    The authors examined the hypothesis that abnormal patterns of the auditory brain stem response (ABR) could supplement the neuro-otological evaluation and assist in localizing the site of vestibulocerebellar dysfunction. This project is based upon the fact that the sources of waves I through V have been regionally identified. Absent or delayed patterns can be referenced to the normal data, and the site of a lesion generating vertigo can be established. We found absence of waves or prolonged interpeak latencies in 25% of the vertiginous subjects with normal hearing and magnetic resonance images of the brain. We conclude that in selected cases, lesions affecting the vestibular system can influence the ABR, and the electrophysiological tests of audition may suggest regionalization of the dysfunction in the hindbrain and midbrain. PMID:12184596

  4. The auditory and non-auditory brain areas involved in tinnitus. An emergent property of multiple parallel overlapping subnetworks.

    PubMed

    Vanneste, Sven; De Ridder, Dirk

    2012-01-01

    Tinnitus is the perception of a sound in the absence of an external sound source. It is characterized by sensory components such as the perceived loudness, the lateralization, the tinnitus type (pure tone, noise-like) and associated emotional components, such as distress and mood changes. Source localization of quantitative electroencephalography (qEEG) data demonstrate the involvement of auditory brain areas as well as several non-auditory brain areas such as the anterior cingulate cortex (dorsal and subgenual), auditory cortex (primary and secondary), dorsal lateral prefrontal cortex, insula, supplementary motor area, orbitofrontal cortex (including the inferior frontal gyrus), parahippocampus, posterior cingulate cortex and the precuneus, in different aspects of tinnitus. Explaining these non-auditory brain areas as constituents of separable subnetworks, each reflecting a specific aspect of the tinnitus percept increases the explanatory power of the non-auditory brain areas involvement in tinnitus. Thus, the unified percept of tinnitus can be considered an emergent property of multiple parallel dynamically changing and partially overlapping subnetworks, each with a specific spontaneous oscillatory pattern and functional connectivity signature. PMID:22586375

  5. Behavioral and electrophysiological auditory processing measures in traumatic brain injury after acoustically controlled auditory training: a long-term study

    PubMed Central

    Figueiredo, Carolina Calsolari; de Andrade, Adriana Neves; Marangoni-Castan, Andréa Tortosa; Gil, Daniela; Suriano, Italo Capraro

    2015-01-01

    ABSTRACT Objective To investigate the long-term efficacy of acoustically controlled auditory training in adults after traumatic brain injury. Methods A total of six audiologically normal individuals aged between 20 and 37 years were studied. They had suffered severe traumatic brain injury with diffuse axonal lesion and underwent an acoustically controlled auditory training program approximately one year before. The results obtained in the behavioral and electrophysiological evaluation of auditory processing immediately after acoustically controlled auditory training were compared to reassessment findings one year later. Results Quantitative analysis of the auditory brainstem response showed increased absolute latency of all waves and interpeak intervals, bilaterally, when comparing both evaluations. Moreover, increased amplitude of all waves was found, statistically significant for wave V in the right ear and wave III in the left ear. As to P3, decreased latency and increased amplitude were found for both ears in reassessment. The previous and current behavioral assessments showed similar results, except for the staggered spondaic words in the left ear and the number of errors on the dichotic consonant-vowel test. Conclusion The acoustically controlled auditory training was effective in the long run, since better latency and amplitude results were observed in the electrophysiological evaluation, in addition to stability of behavioral measures after one-year training. PMID:26676270

  6. Brain stem auditory evoked responses during the cold pressor test.

    PubMed

    Vaney, N; Sethi, A; Tandon, O P

    1994-04-01

    This study was conducted to determine changes, if any, in brain stem auditory evoked potentials (BAEPs) during the cold pressor test (CPT) in healthy human subjects. Thirteen subjects (age 18-25 yrs) were selected for the study. Their BAEPs were recorded using a standardized technique employing the 10-20 international electrode placement system and sound click stimuli of specified intensity, duration and frequency. The standard CPT was performed on the non-dominant hand, and the BAEPs, heart rate and blood pressure were recorded before and during the CPT. The values of absolute peak latencies and amplitudes of evoked responses were statistically analysed. The amplitude of wave V showed a significant increase (P < 0.05) during the CPT (0.37 +/- 0.174 microV before vs. 0.47 +/- 0.203 microV during the CPT). This could be due to interaction of activated central ascending monoaminergic pathways or nociceptive afferents with the midbrain auditory generator so as to increase its activity.

  7. Laminar and columnar auditory cortex in avian brain

    PubMed Central

    Wang, Yuan; Brzozowska-Prechtl, Agnieszka; Karten, Harvey J.

    2010-01-01

    The mammalian neocortex mediates complex cognitive behaviors, such as sensory perception, decision making, and language. The evolutionary history of the cortex, and the cells and circuitry underlying similar capabilities in nonmammals, are poorly understood, however. Two distinct features of the mammalian neocortex are lamination and radially arrayed columns that form functional modules, characterized by defined neuronal types and unique intrinsic connections. The seeming inability to identify these characteristic features in nonmammalian forebrains with earlier methods has often led to the assumption of uniqueness of neocortical cells and circuits in mammals. Using contemporary methods, we demonstrate the existence of comparable columnar functional modules in laminated auditory telencephalon of an avian species (Gallus gallus). A highly sensitive tracer was placed into individual layers of the telencephalon within the cortical region that is similar to mammalian auditory cortex. Distribution of anterograde and retrograde transportable markers revealed extensive interconnections across layers and between neurons within narrow radial columns perpendicular to the laminae. This columnar organization was further confirmed by visualization of radially oriented axonal collaterals of individual intracellularly filled neurons. Common cell types in birds and mammals that provide the cellular substrate of columnar functional modules were identified. These findings indicate that laminar and columnar properties of the neocortex are not unique to mammals and may have evolved from cells and circuits found in more ancient vertebrates. Specific functional pathways in the brain can be analyzed in regard to their common phylogenetic origins, which introduces a previously underutilized level of analysis to components involved in higher cognitive functions. PMID:20616034

  8. Bigger Brains or Bigger Nuclei? Regulating the Size of Auditory Structures in Birds

    PubMed Central

    Kubke, M. Fabiana; Massoglia, Dino P.; Carr, Catherine E.

    2012-01-01

    Increases in the size of the neuronal structures that mediate specific behaviors are believed to be related to enhanced computational performance. It is not clear, however, what developmental and evolutionary mechanisms mediate these changes, nor whether an increase in the size of a given neuronal population is a general mechanism to achieve enhanced computational ability. We addressed the issue of size by analyzing the variation in the relative number of cells of auditory structures in auditory specialists and generalists. We show that bird species with different auditory specializations exhibit variation in the relative size of their hindbrain auditory nuclei. In the barn owl, an auditory specialist, the hindbrain auditory nuclei involved in the computation of sound location show hyperplasia. This hyperplasia was also found in songbirds, but not in non-auditory specialists. The hyperplasia of auditory nuclei was also not seen in birds with large body weight, suggesting that the total number of cells is selected for in auditory specialists. In barn owls, differences observed in the relative size of the auditory nuclei might be attributed to modifications in neurogenesis and cell death. Thus, hyperplasia of circuits used for auditory computation accompanies auditory specialization in different orders of birds. PMID:14726625

  9. The I' potential of the brain-stem auditory-evoked potential.

    PubMed

    Moore, E J; Semela, J J; Rakerd, B; Robb, R C; Ananthanarayan, A K

    1992-01-01

    We have consistently recorded a positive wave which precedes wave I, and is called I', within the human brain-stem auditory-evoked potential. It is postulated that I' represents initial neural activity of the auditory nerve, which presumably has as its origin auditory nerve dendrites. Thus, I' may represent a summed far-field dendritic potential from currents of excitatory postsynaptic potentials. We report latency and amplitude values of I'.

  10. Quantitative map of multiple auditory cortical regions with a stereotaxic fine-scale atlas of the mouse brain

    PubMed Central

    Tsukano, Hiroaki; Horie, Masao; Hishida, Ryuichi; Takahashi, Kuniyuki; Takebayashi, Hirohide; Shibuki, Katsuei

    2016-01-01

    Optical imaging studies have recently revealed the presence of multiple auditory cortical regions in the mouse brain. We have previously demonstrated, using flavoprotein fluorescence imaging, at least six regions in the mouse auditory cortex, including the anterior auditory field (AAF), primary auditory cortex (AI), the secondary auditory field (AII), dorsoanterior field (DA), dorsomedial field (DM), and dorsoposterior field (DP). While multiple regions in the visual cortex and somatosensory cortex have been annotated and consolidated in recent brain atlases, the multiple auditory cortical regions have not yet been presented from a coronal view. In the current study, we obtained regional coordinates of the six auditory cortical regions of the C57BL/6 mouse brain and illustrated these regions on template coronal brain slices. These results should reinforce the existing mouse brain atlases and support future studies in the auditory cortex. PMID:26924462

  11. Multichannel optical brain imaging to separate cerebral vascular, tissue metabolic, and neuronal effects of cocaine

    NASA Astrophysics Data System (ADS)

    Ren, Hugang; Luo, Zhongchi; Yuan, Zhijia; Pan, Yingtian; Du, Congwu

    2012-02-01

    Characterization of cerebral hemodynamic and oxygenation metabolic changes, as well as neuronal function, is of great importance to the study of brain functions and relevant brain disorders such as drug addiction. Compared with other neuroimaging modalities, optical imaging techniques have the potential for high spatiotemporal resolution and dissection of changes in cerebral blood flow (CBF), blood volume (CBV), hemoglobin oxygenation, and intracellular Ca ([Ca2+]i), which serve as markers of vascular function, tissue metabolism and neuronal activity, respectively. Recently, we developed a multiwavelength imaging system and integrated it into a surgical microscope. Three LEDs of λ1=530 nm, λ2=570 nm and λ3=630 nm were used for exciting [Ca2+]i fluorescence labeled by Rhod2 (AM) and for sensing total hemoglobin (i.e., CBV) and deoxygenated hemoglobin, whereas one laser diode at 830 nm was used for laser speckle imaging to form a CBF map of the brain. These light sources were time-shared for illumination of the brain and synchronized with the exposure of the CCD camera for multichannel imaging of the brain. Our animal studies indicated that this optical approach enabled simultaneous mapping of cocaine-induced changes in CBF, CBV, oxygenated and deoxygenated hemoglobin, and [Ca2+]i in the cortical brain. Its high spatiotemporal resolution (30 μm, 10 Hz) and large field of view (4×5 mm²) make it an advanced neuroimaging tool for studies of brain function.
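The laser-speckle arm of a system like this maps flow by computing local speckle contrast, K = std/mean, over a small sliding window: moving scatterers (blood) blur the speckle pattern and lower K. A minimal sketch of that computation (the window size, toy image, and the 1/K² flow index are common conventions assumed here, not parameters from this paper):

```python
import numpy as np

def speckle_contrast(img, win=7):
    """Local speckle contrast K = std/mean over a sliding window.
    Moving scatterers (blood) blur the speckle and lower K."""
    h, w = img.shape
    r = win // 2
    K = np.zeros((h, w))
    for i in range(r, h - r):
        for j in range(r, w - r):
            patch = img[i - r:i + r + 1, j - r:j + r + 1]
            m = patch.mean()
            K[i, j] = patch.std() / m if m > 0 else 0.0
    return K

# Toy frame: fully developed speckle (K ~ 1) with a low-contrast 'vessel' band
rng = np.random.default_rng(0)
frame = rng.exponential(100.0, size=(40, 40))
frame[18:22, :] = 100.0 + rng.normal(0.0, 5.0, size=(4, 40))
K = speckle_contrast(frame)
flow_index = 1.0 / np.clip(K, 1e-6, None) ** 2   # common CBF proxy: 1/K^2
```

In the toy frame, the blurred "vessel" band shows lower contrast and hence a higher flow index than the surrounding static speckle.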

  12. Auditory brain stem response abnormalities in the very low birthweight infant: incidence and risk factors.

    PubMed

    Cox, L C; Hack, M; Metz, D A

    1984-01-01

    Auditory brain stem evoked response (ABR) testing was performed on 50 very low birthweight infants in an effort to assess the effects of multiple neonatal risk factors on auditory function. The results suggested that no single risk factor was predictive of ABR abnormality while combined risk factors were shown to be very predictive.

  13. Selective attention in an overcrowded auditory scene: implications for auditory-based brain-computer interface design.

    PubMed

    Maddox, Ross K; Cheung, Willy; Lee, Adrian K C

    2012-11-01

    Listeners are good at attending to one auditory stream in a crowded environment. However, is there an upper limit of streams present in an auditory scene at which this selective attention breaks down? Here, participants were asked to attend one stream of spoken letters amidst other letter streams. In half of the trials, an initial primer was played, cueing subjects to the sound configuration. Results indicate that performance increases with token repetitions. Priming provided a performance benefit, suggesting that stream selection, not formation, is the bottleneck associated with attention in an overcrowded scene. Results' implications for brain-computer interfaces are discussed. PMID:23145699

  14. Time course of regional brain activation associated with onset of auditory/verbal hallucinations

    PubMed Central

    Hoffman, Ralph E.; Anderson, Adam W.; Varanko, Maxine; Gore, John C.; Hampson, Michelle

    2008-01-01

    The time course of brain activation prior to onset of auditory/verbal hallucinations was characterised using functional magnetic resonance imaging in six dextral patients with schizophrenia. Composite maps of pre-hallucination periods revealed activation in the left anterior insula and in the right middle temporal gyrus, partially replicating two previous case reports, as well as deactivation in the anterior cingulate and parahippocampal gyri. These findings may reflect brain events that trigger or increase vulnerability to auditory/verbal hallucinations. PMID:18978327

  15. Scale-free brain quartet: artistic filtering of multi-channel brainwave music.

    PubMed

    Wu, Dan; Li, Chaoyi; Yao, Dezhong

    2013-01-01

    To listen to brain activity as a piece of music, we proposed the scale-free brainwave music (SFBM) technology, which translates scalp EEGs into music notes according to the power law shared by EEG and music. In the present study, the methodology was extended to derive a quartet from multi-channel EEGs with artistic beat and tonality filtering. EEG data from multiple electrodes were first translated into MIDI sequences by SFBM. These sequences were then processed by a beat filter, which adjusted the duration of notes in terms of the characteristic frequency, and further filtered from atonal to tonal according to a key defined by analysis of the original music pieces. Resting EEGs with eyes closed and open from 40 subjects were used for music generation. The results revealed that the scale-free exponents of the music before and after filtering differed: the filtered music showed greater variety between the eyes-closed (EC) and eyes-open (EO) conditions, and its pitch scale exponents were closer to 1, and thus closer to those of classical music. Furthermore, the tempo of the filtered music with eyes closed was significantly slower than that with eyes open. With the original materials obtained from multi-channel EEGs, and a little creative filtering following the composition process of a potential artist, the resulting brainwave quartet opened a new window to look into the brain in an audible, musical way. In fact, as the artistic beat and tonal filters were derived from the brainwaves, the filtered music maintained the essential properties of the brain activities in a more musical style. It might harmonically distinguish different states of brain activity, and therefore provides a method to analyze EEGs from a relaxed audio perspective.
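The core translation step, EEG waveform to MIDI-like notes, can be sketched as follows. This is an illustrative reconstruction, not the authors' algorithm: the function name, pitch range, and logarithmic amplitude-to-pitch rule (chosen because equal amplitude ratios then give equal pitch steps, consistent with the shared 1/f structure of EEG and music) are all assumptions.

```python
import math

def eeg_to_notes(samples, fs=250, pitch_lo=40, pitch_hi=100):
    """Map EEG waveform extrema to (MIDI pitch, duration) pairs.
    Pitch follows a logarithmic amplitude mapping; duration comes from
    the interval between successive extrema."""
    notes = []
    last_i = 0
    for i in range(1, len(samples) - 1):
        # a local extremum: slope changes sign
        if (samples[i] - samples[i - 1]) * (samples[i + 1] - samples[i]) < 0:
            amp = abs(samples[i]) + 1e-9
            # log mapping: equal amplitude *ratios* give equal pitch steps
            pitch = pitch_lo + (pitch_hi - pitch_lo) * min(1.0, math.log10(1 + amp) / 2)
            dur = (i - last_i) / fs
            notes.append((int(round(pitch)), round(dur, 3)))
            last_i = i
    return notes
```

A downstream beat filter would then quantize the durations and a tonal filter would snap pitches to a key, as the abstract describes.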

  17. An auditory brain-computer interface evoked by natural speech

    NASA Astrophysics Data System (ADS)

    Lopez-Gordo, M. A.; Fernandez, E.; Romero, S.; Pelayo, F.; Prieto, Alberto

    2012-06-01

    Brain-computer interfaces (BCIs) are mainly intended for people unable to perform any muscular movement, such as patients in a complete locked-in state. The majority of BCIs interact visually with the user, either in the form of stimulation or biofeedback. However, reliance on vision limits the ultimate use of these BCIs, because they require subjects to gaze, explore and shift eye-gaze using their muscles, thus excluding patients in a complete locked-in state or under the condition of unresponsive wakefulness syndrome. In this study, we present a novel, fully auditory EEG-BCI based on a dichotic listening paradigm using human voice for stimulation. This interface has been evaluated with healthy volunteers, achieving an average information transmission rate of 1.5 bits/min in full-length trials and 2.7 bits/min using the optimal trial length, recorded with only one channel and without formal training. This novel technique opens the door to more natural communication with users unable to use visual BCIs, with promising results in terms of performance, usability, training and cognitive effort.
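Information transfer rates like the 1.5 bits/min reported here are conventionally computed with Wolpaw's formula, which combines the number of selectable classes, the selection accuracy, and the trial duration. A sketch (the 85% accuracy and 15 s trial length below are assumed example numbers, not figures from this paper; they merely show how a binary choice lands near 1.5 bits/min):

```python
import math

def wolpaw_itr_bits_per_min(n_classes, accuracy, trial_seconds):
    """Wolpaw information transfer rate for an N-class BCI selection task:
    bits/trial = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))."""
    n, p = n_classes, accuracy
    if p <= 0 or p >= 1:
        bits = math.log2(n) if p == 1 else 0.0
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * 60.0 / trial_seconds

# e.g. a binary dichotic-listening choice at 85% accuracy, 15 s per trial
rate = wolpaw_itr_bits_per_min(2, 0.85, 15.0)
```

Note that at chance accuracy (50% for two classes) the formula correctly yields 0 bits/min, and shorter trials raise the rate proportionally, which is why the "optimal trial length" figure is higher.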

  18. Comparison of temporal properties of auditory single units in response to cochlear infrared laser stimulation recorded with multi-channel and single tungsten electrodes

    NASA Astrophysics Data System (ADS)

    Tan, Xiaodong; Xia, Nan; Young, Hunter; Richter, Claus-Peter

    2015-02-01

    Auditory prostheses may benefit from Infrared Neural Stimulation (INS) because optical stimulation allows for spatially selective activation of neuron populations. Selective activation of neurons in the cochlear spiral ganglion can be determined in the central nucleus of the inferior colliculus (ICC) because the tonotopic organization of frequencies in the cochlea is maintained throughout the auditory pathway. The activation profile of INS is well represented in the ICC by multichannel electrodes (MCEs). To characterize single unit properties in response to INS, however, single tungsten electrodes (STEs) should be used because of their better signal-to-noise ratio. In this study, we compared the temporal properties of ICC single units recorded with MCEs and STEs in order to characterize the response properties of single auditory neurons in response to INS in guinea pigs. The length along the cochlea stimulated with infrared radiation corresponded to a frequency range of about 0.6 octaves, similar to that recorded with STEs. The temporal properties of single units recorded with MCEs showed higher maximum rates, shorter latencies, and higher firing efficiencies compared to those recorded with STEs. When the preset amplitude threshold for triggering MCE recordings was raised to twice the noise level, the temporal properties of the single units became similar to those obtained with STEs. Unresolved neural activity from multiple sources in MCE recordings could be responsible for the difference in response properties between MCEs and STEs. Thus, caution should be taken in single unit recordings with MCEs.
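The threshold-trigger manipulation described above (raising the trigger to twice the estimated noise level) can be illustrated with a simple amplitude-threshold spike detector. This is a generic sketch, not the authors' acquisition pipeline: the robust noise estimator (Quiroga's median-based sigma) and the refractory window are standard conventions assumed here.

```python
import numpy as np

def detect_spikes(trace, fs, k=2.0, refractory_ms=1.0):
    """Flag samples exceeding k times a robust noise estimate.
    Sigma is estimated from the median absolute value, which is barely
    inflated by the spikes themselves."""
    sigma = np.median(np.abs(trace)) / 0.6745
    thr = k * sigma
    refr = max(1, int(fs * refractory_ms / 1000.0))
    idx, last = [], -refr - 1
    for i, v in enumerate(trace):
        if v > thr and i - last > refr:
            idx.append(i)
            last = i
    return idx, thr

# Toy trace: bounded background noise plus two clear unit events
rng = np.random.default_rng(1)
trace = rng.uniform(-1.0, 1.0, 1000)
trace[100] += 10.0
trace[500] += 10.0
spikes, thr = detect_spikes(trace, fs=20000)
```

Raising `k` trades sensitivity for selectivity: a higher trigger rejects small events from distant, unresolved neurons, which is the paper's explanation for why MCE single units came to resemble STE recordings.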

  19. Effectiveness of direct and non-direct auditory stimulation on coma arousal after traumatic brain injury.

    PubMed

    Park, Soohyun; Davis, Alice E

    2016-08-01

    The aim of this study was to evaluate the effect of direct and non-direct auditory stimulation on arousal in coma patients with severe traumatic brain injury and to compare the effects of direct vs. non-direct auditory stimulation. A crossover intervention study design was used. Nine participants who were comatose after a severe traumatic brain injury underwent direct and non-direct auditory stimulation. Direct auditory stimulation requires a higher level of interpersonal interaction between the patient and the stimuli, such as voices of family members, orientation by a nurse or family member, and familiar music. In contrast, non-direct auditory stimuli were characterized as more general, less familiar, less interactive, indirect and not lively, such as general music and TV sounds. Participants received both direct and non-direct auditory stimulation in randomized order for 15 minutes. Recovery of consciousness was measured with the Glasgow Coma Scale (GCS) and the Sensory Stimulation Assessment Measure (SSAM). The Friedman test with post hoc Wilcoxon signed-rank comparisons was used for data analysis. Patients who received both direct and non-direct auditory stimulation exhibited significantly increased GCS (p = 0.008) and SSAM scores (p = 0.008) over baseline. The improvement in SSAM scores after direct auditory stimulation was significantly greater than that after non-direct auditory stimulation (p = 0.021), but there was no statistically significant difference in GCS scores (p = 0.139). Auditory stimulation, in particular direct auditory stimulation, might be useful for improving the recovery of consciousness and increasing the arousal of comatose patients. The SSAM is more useful than the GCS for detecting subtle changes from a stimulation intervention. PMID:27241789
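The Friedman test used above compares several related conditions by ranking each subject's scores. Its chi-square statistic is simple enough to sketch directly (this toy implementation ignores tied ranks, which a real analysis of coarse scales like the GCS would need to correct for; the example data are invented):

```python
def friedman_statistic(data):
    """Friedman chi-square for k related conditions measured on n subjects.
    data: n rows of k scores. Ties are ignored here for simplicity."""
    n, k = len(data), len(data[0])
    rank_sums = [0.0] * k
    for row in data:
        order = sorted(range(k), key=lambda j: row[j])
        for rank0, j in enumerate(order):
            rank_sums[j] += rank0 + 1   # ranks 1..k within each subject
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3.0 * n * (k + 1)

# Perfectly consistent ordering across 5 subjects gives the maximal statistic
chi2 = friedman_statistic([[3, 5, 8], [2, 4, 9], [1, 6, 7], [2, 5, 9], [3, 4, 8]])
```

The statistic is referred to a chi-square distribution with k-1 degrees of freedom; significant results are then followed by pairwise Wilcoxon signed-rank comparisons, as in the study.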

  20. Multichannel neural recording with a 128 Mbps UWB wireless transmitter for implantable brain-machine interfaces.

    PubMed

    Ando, H; Takizawa, K; Yoshida, T; Matsushita, K; Hirata, M; Suzuki, T

    2015-01-01

    To realize a minimally invasive and highly accurate BMI (brain-machine interface) system, we previously developed a fully implantable wireless BMI system consisting of ECoG neural electrode arrays, neural recording ASICs, a Wi-Fi-based wireless data transmitter, and a wireless power receiver with a rechargeable battery. For accurate estimation of movement intentions, it is important for a BMI system to have a large number of recording channels. In this paper, we report a new multi-channel BMI system able to record up to 4096 channels of ECoG data through multiple connections of 64-ch ASICs and time-division multiplexing of the recorded data. The system has an ultra-wideband (UWB) wireless unit for transmitting the recorded neural signals outside the body. In preliminary experiments with a human-body-equivalent liquid phantom, we confirmed 4096-ch UWB wireless data transmission in 128 Mbps mode at distances below 20 mm.
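A quick link-budget check shows why 128 Mbps is a plausible envelope for 4096 time-multiplexed channels. The sample rate, bit depth, and overhead factor below are assumed illustrative figures, not specifications from the paper:

```python
def required_mbps(channels, sample_rate_hz, bits_per_sample, overhead=1.1):
    """Payload rate for time-division-multiplexed neural data, with a
    fractional allowance for framing/protocol overhead."""
    return channels * sample_rate_hz * bits_per_sample * overhead / 1e6

# e.g. 4096 channels x 1 kS/s x 12-bit samples (assumed figures)
rate = required_mbps(4096, 1000, 12)
```

Under these assumptions the payload is roughly 54 Mbps, comfortably inside a 128 Mbps link; doubling the sampling rate or bit depth would start to press against the budget.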

  1. A wearable multi-channel fNIRS system for brain imaging in freely moving subjects.

    PubMed

    Piper, Sophie K; Krueger, Arne; Koch, Stefan P; Mehnert, Jan; Habermehl, Christina; Steinbrink, Jens; Obrig, Hellmuth; Schmitz, Christoph H

    2014-01-15

    Functional near infrared spectroscopy (fNIRS) is a versatile neuroimaging tool with an increasing acceptance in the neuroimaging community. While often lauded for its portability, most of the fNIRS setups employed in neuroscientific research still confine use to a laboratory environment. We present a wearable, multi-channel fNIRS imaging system for functional brain imaging in unrestrained settings. The system operates without optical fiber bundles, using eight dual wavelength light emitting diodes and eight electro-optical sensors, which can be placed freely on the subject's head for direct illumination and detection. Its performance is tested on N=8 subjects in a motor execution paradigm performed under three different exercising conditions: (i) during outdoor bicycle riding, (ii) while pedaling on a stationary training bicycle, and (iii) sitting still on the training bicycle. Following left hand gripping, we observe a significant decrease in the deoxyhemoglobin concentration over the contralateral motor cortex in all three conditions. A significant task-related ΔHbO2 increase was seen for the non-pedaling condition. Although the gross movements involved in pedaling and steering a bike induced more motion artifacts than carrying out the same task while sitting still, we found no significant differences in the shape or amplitude of the HbR time courses for outdoor or indoor cycling and sitting still. We demonstrate the general feasibility of using wearable multi-channel NIRS during strenuous exercise in natural, unrestrained settings and discuss the origins and effects of data artifacts. We provide quantitative guidelines for taking condition-dependent signal quality into account to allow the comparison of data across various levels of physical exercise. To the best of our knowledge, this is the first demonstration of functional NIRS brain imaging during an outdoor activity in a real life situation in humans. PMID:23810973
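The HbO2/HbR time courses discussed above are conventionally recovered from dual-wavelength intensity data via the modified Beer-Lambert law: optical-density changes at two wavelengths are inverted through a 2×2 extinction-coefficient matrix. A sketch with illustrative coefficients (the numerical values, wavelengths, and pathlength factors below are rough assumptions; real pipelines use tabulated extinction spectra):

```python
import numpy as np

# Illustrative extinction coefficients [1/(mM*cm)] for [HbO2, HbR]
# at two assumed LED wavelengths (rough order-of-magnitude values only)
E = np.array([[1.5, 3.8],    # ~760 nm: HbR absorbs more
              [2.5, 1.8]])   # ~850 nm: HbO2 absorbs more

def mbll_delta_conc(delta_od, sd_distance_cm=3.0, dpf=6.0):
    """Modified Beer-Lambert law: invert optical-density changes at two
    wavelengths into concentration changes [mM] of HbO2 and HbR.
    Effective pathlength = source-detector distance x differential
    pathlength factor (DPF)."""
    L = sd_distance_cm * dpf
    return np.linalg.solve(E * L, np.asarray(delta_od, dtype=float))

# Round trip: a simulated activation (HbO2 up, HbR down)
true_dc = np.array([0.001, -0.0005])
d_od = (E * 18.0) @ true_dc
recovered = mbll_delta_conc(d_od)
```

Because the two chromophores' absorption dominance flips between the wavelengths, the matrix is well-conditioned and the inversion cleanly separates the HbO2 increase from the HbR decrease.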

  3. Are Auditory Hallucinations Related to the Brain's Resting State Activity? A 'Neurophenomenal Resting State Hypothesis'

    PubMed Central

    2014-01-01

    While several hypotheses about the neural mechanisms underlying auditory verbal hallucinations (AVH) have been suggested, the exact role of the recently highlighted intrinsic resting state activity of the brain remains unclear. Based on recent findings, we therefore developed what we call the 'resting state hypothesis' of AVH. Our hypothesis suggests that AVH may be traced back to abnormally elevated resting state activity in the auditory cortex itself, abnormal modulation of the auditory cortex by anterior cortical midline regions as part of the default-mode network, and neural confusion between auditory cortical resting state changes and stimulus-induced activity. We discuss evidence in favour of our 'resting state hypothesis' and show its correspondence with phenomenal, i.e., subjective-experiential, features as explored in phenomenological accounts. We therefore speak of a 'neurophenomenal resting state hypothesis' of auditory hallucinations in schizophrenia. PMID:25598821

  4. The Relationship between Phonological and Auditory Processing and Brain Organization in Beginning Readers

    ERIC Educational Resources Information Center

    Pugh, Kenneth R.; Landi, Nicole; Preston, Jonathan L.; Mencl, W. Einar; Austin, Alison C.; Sibley, Daragh; Fulbright, Robert K.; Seidenberg, Mark S.; Grigorenko, Elena L.; Constable, R. Todd; Molfese, Peter; Frost, Stephen J.

    2013-01-01

    We employed brain-behavior analyses to explore the relationship between performance on tasks measuring phonological awareness, pseudoword decoding, and rapid auditory processing (all predictors of reading (dis)ability) and brain organization for print and speech in beginning readers. For print-related activation, we observed a shared set of…

  5. Non-local Atlas-guided Multi-channel Forest Learning for Human Brain Labeling

    PubMed Central

    Ma, Guangkai; Gao, Yaozong; Wu, Guorong; Wu, Ligang; Shen, Dinggang

    2015-01-01

    Labeling MR brain images into anatomically meaningful regions is important in many quantitative brain researches. In many existing label fusion methods, appearance information is widely used. Meanwhile, recent progress in computer vision suggests that the context feature is very useful in identifying an object from a complex scene. In light of this, we propose a novel learning-based label fusion method by using both low-level appearance features (computed from the target image) and high-level context features (computed from warped atlases or tentative labeling maps of the target image). In particular, we employ a multi-channel random forest to learn the nonlinear relationship between these hybrid features and the target labels (i.e., corresponding to certain anatomical structures). Moreover, to accommodate the high inter-subject variations, we further extend our learning-based label fusion to a multi-atlas scenario, i.e., we train a random forest for each atlas and then obtain the final labeling result according to the consensus of all atlases. We have comprehensively evaluated our method on both LONI-LBPA40 and IXI datasets, and achieved the highest labeling accuracy, compared to the state-of-the-art methods in the literature. PMID:26942235
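The learning-based fusion described above improves on the standard baseline in this literature, (weighted) majority voting over labels propagated from registered atlases. For orientation, a minimal sketch of that baseline (not the authors' random-forest method; the function name and toy data are illustrative):

```python
import numpy as np

def majority_label_fusion(atlas_labels, weights=None):
    """Baseline multi-atlas label fusion: per-voxel (weighted) majority
    vote over labels propagated from registered atlases."""
    atlas_labels = np.asarray(atlas_labels)          # (n_atlases, n_voxels)
    n_atlases, n_vox = atlas_labels.shape
    weights = np.ones(n_atlases) if weights is None else np.asarray(weights, dtype=float)
    labels = np.unique(atlas_labels)
    votes = np.zeros((len(labels), n_vox))
    for k, lab in enumerate(labels):
        # each atlas contributes its weight to the label it proposes
        votes[k] = ((atlas_labels == lab) * weights[:, None]).sum(axis=0)
    return labels[np.argmax(votes, axis=0)]
```

The paper's contribution is to replace this fixed voting rule with a per-atlas random forest over appearance and context features, which can learn when a given atlas should be trusted.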

  6. Multi-channel linear descriptors for event-related EEG collected in brain computer interface

    NASA Astrophysics Data System (ADS)

    Pei, Xiao-mei; Zheng, Chong-xun; Xu, Jin; Bin, Guang-yu; Wang, Hong-wu

    2006-03-01

    Using three multi-channel linear descriptors, i.e. spatial complexity (Ω), field power (Σ) and frequency of field changes (Φ), event-related EEG data within 8-30 Hz were investigated during imagination of left or right hand movement. Studies on the event-related EEG data indicate that a two-channel version of Ω, Σ and Φ can reflect the antagonistic ERD/ERS patterns over contralateral and ipsilateral areas and also characterize different phases of the changing brain states in the event-related paradigm. Based on the selected two-channel linear descriptors, the left and right hand motor imagery tasks were classified with satisfactory results, which testifies to the validity of the three linear descriptors Ω, Σ and Φ for characterizing event-related EEG. The preliminary results show that Ω and Σ, together with Φ, have good separability for left and right hand motor imagery tasks, and could be considered for classification of two classes of EEG patterns in brain-computer interface applications.
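The three descriptors Ω, Σ, Φ follow Wackermann's classical definitions: Σ is the global field power, Φ a generalized frequency of field change, and Ω the effective number of uncorrelated spatial sources (exponential of the entropy of the normalized covariance eigenvalue spectrum). A sketch under those standard definitions (the exact preprocessing in the paper may differ):

```python
import numpy as np

def wackermann_descriptors(X, fs):
    """Multichannel linear descriptors of a (time x channels) EEG segment:
      Sigma - global field power,
      Phi   - generalized frequency of field changes,
      Omega - spatial complexity (effective number of uncorrelated sources)."""
    X = X - X.mean(axis=0)                 # remove per-channel offset
    m2 = np.mean(X ** 2)                   # mean squared potential
    sigma = np.sqrt(m2)
    dX = np.diff(X, axis=0) * fs           # discrete time derivative
    phi = np.sqrt(np.mean(dX ** 2) / m2) / (2 * np.pi)
    C = X.T @ X / len(X)                   # spatial covariance
    lam = np.linalg.eigvalsh(C)
    lam = lam[lam > 1e-12] / lam.sum()     # normalized eigenvalue spectrum
    omega = np.exp(-np.sum(lam * np.log(lam)))
    return sigma, phi, omega

# Sanity check: a 10 Hz field identical on 4 channels has Omega ~ 1
t = np.arange(1000) / 1000.0
X = np.tile(np.sin(2 * np.pi * 10 * t), (4, 1)).T
sigma, phi, omega = wackermann_descriptors(X, fs=1000)
```

For the identical-channel test signal, Σ recovers the RMS amplitude, Φ recovers the 10 Hz oscillation frequency, and Ω stays near 1; adding independent activity per channel drives Ω toward the channel count, which is what makes it a complexity measure.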

  7. BabySQUID: A mobile, high-resolution multichannel magnetoencephalography system for neonatal brain assessment

    NASA Astrophysics Data System (ADS)

    Okada, Yoshio; Pratt, Kevin; Atwood, Christopher; Mascarenas, Anthony; Reineman, Richard; Nurminen, Jussi; Paulson, Douglas

    2006-02-01

    We developed a prototype of a mobile, high-resolution, multichannel magnetoencephalography (MEG) system, called babySQUID, for assessing brain functions in newborns and infants. Unlike electroencephalography, MEG signals are not distorted by the scalp or the fontanels and sutures in the skull. Thus, brain activity can be measured and localized with MEG as if the sensors were above an exposed brain. The babySQUID is housed in a moveable cart small enough to be transported from one room to another. To assess brain functions, one places the baby on the bed of the cart and the head on its headrest with MEG sensors just below. The sensor array consists of 76 first-order axial gradiometers, each with a pickup coil diameter of 6 mm and a baseline of 30 mm, in a high-density array with a spacing of 12-14 mm center-to-center. The pickup coils are 6±1 mm below the outer surface of the headrest. The short gap provides unprecedented sensitivity since the scalp and skull are thin (as little as 3-4 mm altogether) in babies. In an electromagnetically unshielded room in a hospital, the field sensitivity at 1 kHz was ~17 fT/√Hz. The noise was reduced from ~400 to ~200 fT/√Hz at 1 Hz using a reference cancellation technique and further to ~40 fT/√Hz using a gradient common mode rejection technique. Although the residual environmental magnetic noise interfered with the operation of the babySQUID, the instrument functioned sufficiently well to detect spontaneous brain signals from babies with a signal-to-noise ratio (SNR) of as much as 7.6:1. In a magnetically shielded room, the field sensitivity was 17 fT/√Hz at 20 Hz and 30 fT/√Hz at 1 Hz without implementation of reference or gradient cancellation. The sensitivity was sufficiently high to detect spontaneous brain activity from a 7-month-old baby with an SNR of as much as 40:1 and evoked somatosensory responses with a 50 Hz bandwidth after as little as four averages. We expect that both the noise and the sensor gap can be reduced further by

  8. Multichannel brain recordings in behaving Drosophila reveal oscillatory activity and local coherence in response to sensory stimulation and circuit activation

    PubMed Central

    Paulk, Angelique C.; Zhou, Yanqiong; Stratton, Peter; Liu, Li

    2013-01-01

    Neural networks in vertebrates exhibit endogenous oscillations that have been associated with functions ranging from sensory processing to locomotion. It remains unclear whether oscillations may play a similar role in the insect brain. We describe a novel “whole brain” readout for Drosophila melanogaster using a simple multichannel recording preparation to study electrical activity across the brain of flies exposed to different sensory stimuli. We recorded local field potential (LFP) activity from >2,000 registered recording sites across the fly brain in >200 wild-type and transgenic animals to uncover specific LFP frequency bands that correlate with: 1) brain region; 2) sensory modality (olfactory, visual, or mechanosensory); and 3) activity in specific neural circuits. We found endogenous and stimulus-specific oscillations throughout the fly brain. Central (higher-order) brain regions exhibited sensory modality-specific increases in power within narrow frequency bands. Conversely, in sensory brain regions such as the optic or antennal lobes, LFP coherence, rather than power, best defined sensory responses across modalities. By transiently activating specific circuits via expression of TrpA1, we found that several circuits in the fly brain modulate LFP power and coherence across brain regions and frequency domains. However, activation of a neuromodulatory octopaminergic circuit specifically increased neuronal coherence in the optic lobes during visual stimulation while decreasing coherence in central brain regions. Our multichannel recording and brain registration approach provides an effective way to track activity simultaneously across the fly brain in vivo, allowing investigation of functional roles for oscillations in processing sensory stimuli and modulating behavior. PMID:23864378
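The paper's distinction between LFP power and LFP coherence can be made concrete with magnitude-squared coherence, a segment-averaged, frequency-resolved measure of how consistently two recording sites covary. A minimal Welch-style sketch (segment length and the toy two-site signal are assumptions for illustration):

```python
import numpy as np

def msc(x, y, fs, nper=256):
    """Welch-averaged magnitude-squared coherence between two LFP traces.
    Returns (freqs, coherence in [0, 1]); several segments are needed,
    since single-segment coherence is identically 1."""
    nseg = len(x) // nper
    win = np.hanning(nper)
    nfreq = nper // 2 + 1
    pxx = np.zeros(nfreq)
    pyy = np.zeros(nfreq)
    pxy = np.zeros(nfreq, dtype=complex)
    for s in range(nseg):
        a = np.fft.rfft(win * x[s * nper:(s + 1) * nper])
        b = np.fft.rfft(win * y[s * nper:(s + 1) * nper])
        pxx += np.abs(a) ** 2
        pyy += np.abs(b) ** 2
        pxy += a * np.conj(b)
    return np.fft.rfftfreq(nper, 1.0 / fs), np.abs(pxy) ** 2 / (pxx * pyy)

# Two 'recording sites' sharing a 20 Hz oscillation in independent noise
rng = np.random.default_rng(2)
fs, n = 1000, 4096
t = np.arange(n) / fs
common = np.sin(2 * np.pi * 20 * t)
x = common + 0.3 * rng.standard_normal(n)
y = common + 0.3 * rng.standard_normal(n)
freqs, C = msc(x, y, fs)
```

Coherence peaks near the shared 20 Hz rhythm and falls toward zero elsewhere, even though broadband noise power is present at all frequencies, which is why coherence and power can dissociate across brain regions as reported for the sensory versus central neuropils.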

  9. Diffusion tensor imaging of dolphin brains reveals direct auditory pathway to temporal lobe.

    PubMed

    Berns, Gregory S; Cook, Peter F; Foxley, Sean; Jbabdi, Saad; Miller, Karla L; Marino, Lori

    2015-07-22

    The brains of odontocetes (toothed whales) look grossly different from their terrestrial relatives. Because of their adaptation to the aquatic environment and their reliance on echolocation, the odontocetes' auditory system is both unique and crucial to their survival. Yet, scant data exist about the functional organization of the cetacean auditory system. A predominant hypothesis is that the primary auditory cortex lies in the suprasylvian gyrus along the vertex of the hemispheres, with this position induced by expansion of 'associative' regions in lateral and caudal directions. However, the precise location of the auditory cortex and its connections are still unknown. Here, we used a novel diffusion tensor imaging (DTI) sequence in archival post-mortem brains of a common dolphin (Delphinus delphis) and a pantropical dolphin (Stenella attenuata) to map their sensory and motor systems. Using thalamic parcellation based on traditionally defined regions for the primary visual (V1) and auditory cortex (A1), we found distinct regions of the thalamus connected to V1 and A1. But in addition to suprasylvian-A1, we report here, for the first time, that the auditory cortex also exists in the temporal lobe, in a region near cetacean-A2 and possibly analogous to the primary auditory cortex in related terrestrial mammals (Artiodactyla). Using probabilistic tract tracing, we found a direct pathway from the inferior colliculus to the medial geniculate nucleus to the temporal lobe near the sylvian fissure. Our results demonstrate the feasibility of post-mortem DTI in archival specimens to answer basic questions in comparative neurobiology in a way that has not previously been possible and show a link between the cetacean auditory system and those of terrestrial mammals. Given that fresh cetacean specimens are relatively rare, the ability to measure connectivity in archival specimens opens up a plethora of possibilities for investigating neuroanatomy in cetaceans and other species.

  11. Nonlocal atlas-guided multi-channel forest learning for human brain labeling

    PubMed Central

    Ma, Guangkai; Gao, Yaozong; Wu, Guorong; Wu, Ligang; Shen, Dinggang

    2016-01-01

    Purpose: It is important for many quantitative brain studies to label meaningful anatomical regions in MR brain images. However, due to the high complexity of brain structures and ambiguous boundaries between different anatomical regions, the anatomical labeling of MR brain images is still quite a challenging task. In many existing label fusion methods, appearance information is widely used. However, since local anatomy in the human brain is often complex, appearance information alone is limited in characterizing each image point, especially for identifying the same anatomical structure across different subjects. Recent progress in computer vision suggests that context features can be very useful in identifying an object from a complex scene. In light of this, the authors propose a novel learning-based label fusion method by using both low-level appearance features (computed from the target image) and high-level context features (computed from warped atlases or tentative labeling maps of the target image). Methods: In particular, the authors employ a multi-channel random forest to learn the nonlinear relationship between these hybrid features and target labels (i.e., corresponding to certain anatomical structures). Specifically, at each iteration, the random forest outputs tentative labeling maps of the target image, from which the authors compute spatial label context features and then use them in combination with the original appearance features of the target image to refine the labeling. Moreover, to accommodate the high inter-subject variations, the authors further extend their learning-based label fusion to a multi-atlas scenario, i.e., they train a random forest for each atlas and then obtain the final labeling result according to the consensus of results from all atlases. Results: The authors have comprehensively evaluated their method on both public LONI_LBPA40 and IXI datasets. To quantitatively evaluate the labeling accuracy, the authors use the
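    The auto-context loop this record describes (alternate between classifying voxels and recomputing spatial label-context features from the tentative labeling) can be illustrated on a toy problem. The sketch below is a deliberately simplified, hypothetical illustration: it uses a 1-D "image", and the multi-channel random forest is replaced by a fixed linear rule over the appearance and context channels; all names and numbers are invented for the example.

```python
import numpy as np

# Toy 1-D "image" with two tissue classes (0 and 1). Appearance is the
# voxel intensity; two outlier voxels get misleading intensities.
true_labels = np.array([0] * 20 + [1] * 20)
intensity = true_labels + 0.4 * np.cos(np.arange(40))
intensity[5] = 0.9   # class-0 voxel that looks like class 1
intensity[30] = 0.2  # class-1 voxel that looks like class 0

def classify(appearance, context):
    # Stand-in for the multi-channel random forest: a fixed linear rule
    # over the appearance channel and the label-context channel.
    return (0.3 * appearance + 0.7 * context > 0.5).astype(int)

# Iteration 0: tentative labeling from appearance alone.
labels = (intensity > 0.5).astype(int)
initial_accuracy = (labels == true_labels).mean()

for _ in range(3):
    # Spatial label-context feature: mean of the tentative labels in a
    # 5-voxel neighborhood, recomputed from the current labeling map.
    context = np.convolve(labels, np.ones(5) / 5, mode="same")
    labels = classify(intensity, context)

final_accuracy = (labels == true_labels).mean()
```

    Once the label-context channel is folded in, the two voxels whose intensity alone is misleading are relabeled correctly, which is the intuition behind combining appearance with context features.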

  12. Brain region-specific activity patterns after recent or remote memory retrieval of auditory conditioned fear.

    PubMed

    Kwon, Jeong-Tae; Jhang, Jinho; Kim, Hyung-Su; Lee, Sujin; Han, Jin-Hee

    2012-01-01

    Memory is thought to be sparsely encoded throughout multiple brain regions, forming a unique memory trace. Although evidence has established that the amygdala is a key brain site for storage and retrieval of auditory conditioned fear memory, it remains elusive whether the auditory brain regions may be involved in fear memory storage or retrieval. To investigate this possibility, we systematically imaged the brain activity patterns in the lateral amygdala, MGm/PIN, and AuV/TeA using activity-dependent induction of the immediate early gene zif268 after recent and remote memory retrieval of auditory conditioned fear. Consistent with the critical role of the amygdala in fear memory, zif268 activity in the lateral amygdala was significantly increased after both recent and remote memory retrieval. Interestingly, however, the density of zif268 (+) neurons in both MGm/PIN and AuV/TeA, particularly in layers IV and VI, was increased only after remote but not recent fear memory retrieval compared to control groups. Further analysis of zif268 signals in AuV/TeA revealed that a conditioned tone induced stronger zif268 induction than a familiar tone in each individual zif268 (+) neuron after recent memory retrieval. Taken together, our results support the view that the lateral amygdala is a key brain site for permanent fear memory storage and suggest that MGm/PIN and AuV/TeA might play a role in remote memory storage or retrieval of auditory conditioned fear, or, alternatively, that these auditory brain regions might process familiar and conditioned tone information differently at recent and remote time points. PMID:22993170

  13. Auditory-musical processing in autism spectrum disorders: a review of behavioral and brain imaging studies.

    PubMed

    Ouimet, Tia; Foster, Nicholas E V; Tryfon, Ana; Hyde, Krista L

    2012-04-01

    Autism spectrum disorder (ASD) is a complex neurodevelopmental condition characterized by atypical social and communication skills, repetitive behaviors, and atypical visual and auditory perception. Studies in vision have reported enhanced detailed ("local") processing but diminished holistic ("global") processing of visual features in ASD. Individuals with ASD also show enhanced processing of simple visual stimuli but diminished processing of complex visual stimuli. Relative to the visual domain, auditory global-local distinctions, and the effects of stimulus complexity on auditory processing in ASD, are less clear. However, one remarkable finding is that many individuals with ASD have enhanced musical abilities, such as superior pitch processing. This review provides a critical evaluation of behavioral and brain imaging studies of auditory processing with respect to current theories in ASD. We have focused on auditory-musical processing in terms of global versus local processing and simple versus complex sound processing. This review contributes to a better understanding of auditory processing differences in ASD. A deeper comprehension of sensory perception in ASD is key to better defining ASD phenotypes and, in turn, may lead to better interventions.

  14. Turning down the noise: the benefit of musical training on the aging auditory brain.

    PubMed

    Alain, Claude; Zendel, Benjamin Rich; Hutka, Stefanie; Bidelman, Gavin M

    2014-02-01

    Age-related decline in hearing abilities is a ubiquitous part of aging, and commonly impacts speech understanding, especially when there are competing sound sources. While such age effects are partially due to changes within the cochlea, difficulties typically exist beyond measurable hearing loss, suggesting that central brain processes, as opposed to simple peripheral mechanisms (e.g., hearing sensitivity), play a critical role in governing hearing abilities late into life. Current training regimens aimed at improving central auditory processing abilities have experienced limited success in promoting listening benefits. Interestingly, recent studies suggest that in young adults, musical training positively modifies neural mechanisms, providing robust, long-lasting improvements to hearing abilities as well as to non-auditory tasks that engage cognitive control. These results offer the encouraging possibility that musical training might be used to counteract age-related changes in auditory cognition commonly observed in older adults. Here, we review studies that have examined the effects of age and musical experience on auditory cognition, with an emphasis on auditory scene analysis. We infer that musical training may offer benefits to complex listening and might be utilized as a means to delay or even attenuate declines in auditory perception and cognition that often emerge later in life.

  15. A blueprint for vocal learning: auditory predispositions from brains to genomes.

    PubMed

    Wheatcroft, David; Qvarnström, Anna

    2015-08-01

    Memorizing and producing complex strings of sound are requirements for spoken human language. We share these behaviours with likely more than 4000 species of songbirds, making birds our primary model for studying the cognitive basis of vocal learning and, more generally, an important model for how memories are encoded in the brain. In songbirds, as in humans, the sounds that a juvenile learns later in life depend on auditory memories formed early in development. Experiments on a wide variety of songbird species suggest that the formation and lability of these auditory memories, in turn, depend on auditory predispositions that stimulate learning when a juvenile hears relevant, species-typical sounds. We review evidence that variation in key features of these auditory predispositions is determined by variation in genes underlying the development of the auditory system. We argue that increased investigation of the neuronal basis of auditory predispositions expressed early in life, in combination with modern comparative genomic approaches, may provide insights into the evolution of vocal learning. PMID:26246333

  16. Lateralization of ventral and dorsal auditory-language pathways in the human brain.

    PubMed

    Parker, Geoffrey J M; Luzzi, Simona; Alexander, Daniel C; Wheeler-Kingshott, Claudia A M; Ciccarelli, Olga; Lambon Ralph, Matthew A

    2005-02-01

    Recent electrophysiological investigations of the auditory system in primates along with functional neuroimaging studies of auditory perception in humans have suggested there are two pathways arising from the primary auditory cortex. In the primate brain, a 'ventral' pathway is thought to project anteriorly from the primary auditory cortex to prefrontal areas along the superior temporal gyrus while a separate 'dorsal' route connects these areas posteriorly via the inferior parietal lobe. We use diffusion MRI tractography, a noninvasive technique based on diffusion-weighted MRI, to investigate the possibility of a similar pattern of connectivity in the human brain for the first time. The dorsal pathway from Wernicke's area to Broca's area is shown to include the arcuate fasciculus and connectivity to Brodmann area 40, lateral superior temporal gyrus (LSTG), and lateral middle temporal gyrus. A ventral route between Wernicke's area and Broca's area is demonstrated that connects via the external capsule/uncinate fasciculus and the medial superior temporal gyrus. Ventral connections are also observed in the lateral superior and middle temporal gyri. The connections are stronger in the dominant hemisphere, in agreement with previous studies of functional lateralization of auditory-language processing. PMID:15652301

  17. Auditory information processing during human sleep as revealed by event-related brain potentials.

    PubMed

    Atienza, M; Cantero, J L; Escera, C

    2001-11-01

    The main goal of this review is to elucidate to what extent pre-attentive auditory information processing is affected during human sleep. Evidence from event-related brain potential (ERP) studies indicates that auditory information processing is selectively affected, even at early phases, across the different stages of the sleep-wakefulness continuum. According to these studies, 3 main conclusions are drawn: (1) the sleeping brain is able to automatically detect stimulus occurrence and trigger an orienting response towards that stimulus if its degree of novelty is large; (2) auditory stimuli are represented in the auditory system and maintained for a period of time in sensory memory, making automatic change detection during sleep possible; and (3) there are specific brain mechanisms (sleep-specific ERP components associated with the presence of vertex waves and K-complexes) by which information processing can be improved during non-rapid eye movement sleep. However, the markedly altered amplitude and latency of the waking ERPs during the different stages of sleep suggest deficits in the building and maintenance of a neural representation of the stimulus, as well as in the process by which neural events lead to an orienting response toward such a stimulus. The deactivation during sleep of areas in the dorsolateral prefrontal cortex contributing to the generation of these ERP components is hypothesized to be one of the main causes of the attenuated amplitude of these ERPs during human sleep. PMID:11682341

  18. Coding space-time stimulus dynamics in auditory brain maps

    PubMed Central

    Wang, Yunyan; Gutfreund, Yoram; Peña, José L.

    2014-01-01

    Sensory maps are often distorted representations of the environment, in which ethologically important ranges are magnified. The implications of a biased representation extend beyond the increased acuity gained from dedicating more neurons to a certain range. Because neurons are functionally interconnected, non-uniform representations influence the processing of higher-order features that rely on comparisons across areas of the map. Among these features are time-dependent changes in the auditory scene generated by moving objects. How sensory representation affects higher-order processing can be approached in the map of auditory space in the owl's midbrain, where frontal locations are over-represented. In this map, neurons are selective not only to location but also to location over time. This tuning to space over time leads to direction selectivity, which is also topographically organized. Across the population, neurons tuned to peripheral space are more selective to sounds moving toward the front. The distribution of direction selectivity can be explained by spatial and temporal integration on the non-uniform map of space. Thus, the representation of space can induce biased computation of a second-order stimulus feature. This phenomenon is likely present in other sensory maps and may be relevant for behavior. PMID:24782781

  19. Brain Network Interactions in Auditory, Visual and Linguistic Processing

    ERIC Educational Resources Information Center

    Horwitz, Barry; Braun, Allen R.

    2004-01-01

    In the paper, we discuss the importance of network interactions between brain regions in mediating performance of sensorimotor and cognitive tasks, including those associated with language processing. Functional neuroimaging, especially PET and fMRI, provide data that are obtained essentially simultaneously from much of the brain, and thus are…

  20. The relationship between phonological and auditory processing and brain organization in beginning readers

    PubMed Central

    PUGH, Kenneth R.; LANDI, Nicole; PRESTON, Jonathan L.; MENCL, W. Einar; AUSTIN, Alison C.; SIBLEY, Daragh; FULBRIGHT, Robert K.; SEIDENBERG, Mark S.; GRIGORENKO, Elena L.; CONSTABLE, R. Todd; MOLFESE, Peter; FROST, Stephen J.

    2012-01-01

    We employed brain-behavior analyses to explore the relationship between performance on tasks measuring phonological awareness, pseudoword decoding, and rapid auditory processing (all predictors of reading (dis)ability) and brain organization for print and speech in beginning readers. For print-related activation, we observed a shared set of skill-correlated regions, including left hemisphere temporoparietal and occipitotemporal sites, as well as inferior frontal, visual, visual attention, and subcortical components. For speech-related activation, shared variance among reading skill measures was most prominently correlated with activation in left hemisphere inferior frontal gyrus and precuneus. Implications for brain-based models of literacy acquisition are discussed. PMID:22572517

  1. Neurogenesis in the brain auditory pathway of a marsupial, the northern native cat (Dasyurus hallucatus)

    SciTech Connect

    Aitkin, L.; Nelson, J.; Farrington, M.; Swann, S.

    1991-07-08

    Neurogenesis in the auditory pathway of the marsupial Dasyurus hallucatus was studied. Intraperitoneal injections of tritiated thymidine (20-40 microCi) were made into pouch-young varying from 1 to 56 days pouch-life. Animals were killed as adults and brain sections were prepared for autoradiography and counterstained with a Nissl stain. Neurons in the ventral cochlear nucleus were generated prior to 3 days pouch-life, in the superior olive at 5-7 days, and in the dorsal cochlear nucleus over a prolonged period. Inferior collicular neurogenesis lagged behind that in the medial geniculate, the latter taking place between days 3 and 9 and the former between days 7 and 22. Neurogenesis began in the auditory cortex on day 9 and was completed by about day 42. Thus neurogenesis was complete in the medullary auditory nuclei before that in the midbrain commenced, and in the medial geniculate before that in the auditory cortex commenced. The time course of neurogenesis in the auditory pathway of the native cat was very similar to that in another marsupial, the brushtail possum. For both, neurogenesis occurred earlier than in eutherian mammals of a similar size but was more protracted.

  2. Localized brain activation related to the strength of auditory learning in a parrot.

    PubMed

    Eda-Fujiwara, Hiroko; Imagawa, Takuya; Matsushita, Masanori; Matsuda, Yasushi; Takeuchi, Hiro-Aki; Satoh, Ryohei; Watanabe, Aiko; Zandbergen, Matthijs A; Manabe, Kazuchika; Kawashima, Takashi; Bolhuis, Johan J

    2012-01-01

    Parrots and songbirds learn their vocalizations from a conspecific tutor, much like human infants acquire spoken language. Parrots can learn human words and it has been suggested that they can use them to communicate with humans. The caudomedial pallium in the parrot brain is homologous with that of songbirds, and analogous to the human auditory association cortex, involved in speech processing. Here we investigated neuronal activation, measured as expression of the protein product of the immediate early gene ZENK, in relation to auditory learning in the budgerigar (Melopsittacus undulatus), a parrot. Budgerigar males successfully learned to discriminate two Japanese words spoken by another male conspecific. Re-exposure to the two discriminanda led to increased neuronal activation in the caudomedial pallium, but not in the hippocampus, compared to untrained birds that were exposed to the same words, or were not exposed to words. Neuronal activation in the caudomedial pallium of the experimental birds was correlated significantly and positively with the percentage of correct responses in the discrimination task. These results suggest that in a parrot, the caudomedial pallium is involved in auditory learning. Thus, in parrots, songbirds and humans, analogous brain regions may contain the neural substrate for auditory learning and memory. PMID:22701714

  4. Brain stem auditory evoked responses in human infants and adults

    NASA Technical Reports Server (NTRS)

    Hecox, K.; Galambos, R.

    1974-01-01

    Brain stem evoked potentials were recorded by conventional scalp electrodes in infants (3 weeks to 3 years of age) and adults. The latency of one of the major response components (wave V) is shown to be a function both of click intensity and the age of the subject; this latency at a given signal strength shortens postnatally to reach the adult value (about 6 msec) by 12 to 18 months of age. The demonstrated reliability and limited variability of these brain stem electrophysiological responses provide the basis for an optimistic estimate of their usefulness as an objective method for assessing hearing in infants and adults.

  5. Endogenous Delta/Theta Sound-Brain Phase Entrainment Accelerates the Buildup of Auditory Streaming.

    PubMed

    Riecke, Lars; Sack, Alexander T; Schroeder, Charles E

    2015-12-21

    In many natural listening situations, meaningful sounds (e.g., speech) fluctuate in slow rhythms among other sounds. When a slow rhythmic auditory stream is selectively attended, endogenous delta (1‒4 Hz) oscillations in auditory cortex may shift their timing so that higher-excitability neuronal phases become aligned with salient events in that stream [1, 2]. As a consequence of this stream-brain phase entrainment [3], these events are processed and perceived more readily than temporally non-overlapping events [4-11], essentially enhancing the neural segregation between the attended stream and temporally noncoherent streams [12]. Stream-brain phase entrainment is robust to acoustic interference [13-20] provided that target stream-evoked rhythmic activity can be segregated from noncoherent activity evoked by other sounds [21], a process that usually builds up over time [22-27]. However, it has remained unclear whether stream-brain phase entrainment functionally contributes to this buildup of rhythmic streams or whether it is merely an epiphenomenon of it. Here, we addressed this issue directly by experimentally manipulating endogenous stream-brain phase entrainment in human auditory cortex with non-invasive transcranial alternating current stimulation (TACS) [28-30]. We assessed the consequences of these manipulations on the perceptual buildup of the target stream (the time required to recognize its presence in a noisy background), using behavioral measures in 20 healthy listeners performing a naturalistic listening task. Experimentally induced cyclic 4-Hz variations in stream-brain phase entrainment reliably caused a cyclic 4-Hz pattern in perceptual buildup time. Our findings demonstrate that strong endogenous delta/theta stream-brain phase entrainment accelerates the perceptual emergence of task-relevant rhythmic streams in noisy environments. PMID:26628008

  7. Brain state-dependent abnormal LFP activity in the auditory cortex of a schizophrenia mouse model

    PubMed Central

    Nakao, Kazuhito; Nakazawa, Kazu

    2014-01-01

    In schizophrenia, evoked 40-Hz auditory steady-state responses (ASSRs) are impaired, which reflects the sensory deficits in this disorder, and baseline spontaneous oscillatory activity also appears to be abnormal. It has been debated whether the evoked ASSR impairments are due to the possible increase in baseline power. GABAergic interneuron-specific NMDA receptor (NMDAR) hypofunction mutant mice mimic some behavioral and pathophysiological aspects of schizophrenia. To determine the presence and extent of sensory deficits in these mutant mice, we recorded spontaneous local field potential (LFP) activity and its click-train evoked ASSRs from primary auditory cortex of awake, head-restrained mice. Baseline spontaneous LFP power in the pre-stimulus period before application of the first click trains was augmented at a wide range of frequencies. However, when repetitive ASSR stimuli were presented every 20 s, averaged spontaneous LFP power amplitudes during the inter-ASSR stimulus intervals in the mutant mice became indistinguishable from the levels of control mice. Nonetheless, the evoked 40-Hz ASSR power and their phase locking to click trains were robustly impaired in the mutants, although the evoked 20-Hz ASSRs were also somewhat diminished. These results suggested that NMDAR hypofunction in cortical GABAergic neurons confers two brain state-dependent LFP abnormalities in the auditory cortex: (1) a broadband increase in spontaneous LFP power in the absence of external inputs, and (2) a robust deficit in the evoked ASSR power and its phase-locking despite normal baseline LFP power during the repetitive auditory stimuli. The “paradoxically” high spontaneous LFP activity of the primary auditory cortex in the absence of external stimuli may contribute to the emergence of schizophrenia-related aberrant auditory perception. PMID:25018691

  8. Multi-channel atomic magnetometer for magnetoencephalography: a configuration study.

    PubMed

    Kim, Kiwoong; Begus, Samo; Xia, Hui; Lee, Seung-Kyun; Jazbinsek, Vojko; Trontelj, Zvonko; Romalis, Michael V

    2014-04-01

    Atomic magnetometers are emerging as an alternative to SQUID magnetometers for detection of biological magnetic fields. They have been used to measure both magnetocardiography (MCG) and magnetoencephalography (MEG) signals. One of the virtues of atomic magnetometers is their ability to operate as a multi-channel detector while using many common elements. Here we study two configurations of such a multi-channel atomic magnetometer optimized for MEG detection. We describe measurements of auditory evoked fields (AEF) from a human brain, as well as source localization of dipolar phantoms and of the AEF. A clear N100m peak in the AEF was observed with a signal-to-noise ratio higher than 10 after averaging 250 stimuli. Currently the intrinsic magnetic noise level is 4 fT/√Hz at 10 Hz. We compare the performance of the two systems with regard to current source localization and discuss future development of atomic MEG systems.
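    The reported N100m signal-to-noise ratio after averaging 250 stimuli rests on the standard result that averaging n independent epochs reduces the noise amplitude by a factor of √n. A minimal sketch of that arithmetic, with hypothetical signal and noise figures rather than the paper's measured values:

```python
import math

signal_peak = 100.0           # fT: hypothetical evoked-field peak amplitude
single_epoch_noise_sd = 50.0  # fT: hypothetical noise level in one epoch

def snr_after_averaging(n_epochs):
    # Averaging n independent epochs shrinks the noise standard
    # deviation by sqrt(n), so the SNR grows by the same factor.
    return signal_peak / (single_epoch_noise_sd / math.sqrt(n_epochs))

snr_single = snr_after_averaging(1)      # 2.0
snr_averaged = snr_after_averaging(250)  # 2 * sqrt(250) ≈ 31.6
```

    With these toy numbers a single epoch is buried in noise, while 250 averages lift the peak well above an SNR of 10.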

  9. The role of event-related brain potentials in assessing central auditory processing.

    PubMed

    Alain, Claude; Tremblay, Kelly

    2007-01-01

    The perception of complex acoustic signals such as speech and music depends on the interaction between peripheral and central auditory processing. As information travels from the cochlea to primary and associative auditory cortices, the incoming sound is subjected to increasingly more detailed and refined analysis. These various levels of analysis are thought to include low-level automatic processes that detect, discriminate and group sounds that are similar in physical attributes such as frequency, intensity, and location, as well as higher-level schema-driven processes that reflect listeners' experience and knowledge of the auditory environment. In this review, we describe studies that have used event-related brain potentials to investigate the processing of complex acoustic signals (e.g., speech, music). In particular, we examine the effect of hearing loss on the neural representation of sound and how cognitive factors and learning can help compensate for perceptual difficulties. The notion of auditory scene analysis is used as a conceptual framework for interpreting and studying the perception of sound. PMID:18236645

  10. Can an auditory illusion trick the brain into turning down tinnitus?

    PubMed

    Fletcher, M D; Wiggins, I M

    2014-07-01

    Tinnitus, the phantom perception of sound with no external source, affects an estimated 10-15% of the adult population. Current treatments for this oftentimes distressing condition are of limited effectiveness. The "central gain" model proposes that tinnitus arises from an increase in the responsiveness, or gain, of neurons in central auditory pathways, triggered by damage to the auditory periphery. It has been suggested that tinnitus might be treated by compensating for the peripheral damage, thereby restoring normal levels of input to the central pathways, and hence reducing central gain. Unfortunately, when tinnitus originates with permanent damage to the auditory periphery, it may be impossible to compensate for this damage directly. However, we hypothesize that tinnitus may be treated by tricking the brain into believing that it temporarily receives normal levels of input at frequencies where peripheral damage has occurred. We identify an auditory illusion that seems capable, in principle, of achieving this objective. If effective, this approach would offer a safe, accessible, and non-invasive treatment for tinnitus.

  11. Time course of regional brain activity accompanying auditory verbal hallucinations in schizophrenia

    PubMed Central

    Hoffman, Ralph E.; Pittman, Brian; Constable, R. Todd; Bhagwagar, Zubin; Hampson, Michelle

    2011-01-01

    Background The pathophysiology of auditory verbal hallucinations remains poorly understood. Aims To characterise the time course of regional brain activity leading to auditory verbal hallucinations. Method During functional magnetic resonance imaging, 11 patients with schizophrenia or schizoaffective disorder signalled auditory verbal hallucination events by pressing a button. To control for effects of motor behaviour, regional activity associated with hallucination events was scaled against corresponding activity arising from random button-presses produced by 10 patients who did not experience hallucinations. Results Immediately prior to the hallucinations, motor-adjusted activity in the left inferior frontal gyrus was significantly greater than corresponding activity in the right inferior frontal gyrus. In contrast, motor-adjusted activity in a right posterior temporal region overshadowed corresponding activity in the left homologous temporal region. Robustly elevated motor-adjusted activity in the left temporal region associated with auditory verbal hallucinations was also detected, but only subsequent to hallucination events. At the earliest time shift studied, the correlation between left inferior frontal gyrus and right temporal activity was significantly higher for the hallucination group compared with non-hallucinating patients. Conclusions Findings suggest that heightened functional coupling between the left inferior frontal gyrus and right temporal regions leads to coactivation in these speech processing regions that is hallucinogenic. Delayed left temporal activation may reflect impaired corollary discharge contributing to source misattribution of resulting verbal images. PMID:21972276

  12. A vision-free brain-computer interface (BCI) paradigm based on auditory selective attention.

    PubMed

    Kim, Do-Won; Cho, Jae-Hyun; Hwang, Han-Jeong; Lim, Jeong-Hwan; Im, Chang-Hwan

    2011-01-01

    The majority of recently developed brain-computer interface (BCI) systems use visual stimuli or visual feedback. However, BCI paradigms based on visual perception might not be applicable to severely locked-in patients who have lost the ability to control their eye movements or even their vision. In the present study, we investigated the feasibility of a vision-free BCI paradigm based on auditory selective attention. We used the power difference of auditory steady-state responses (ASSRs) when the participant modulates his/her attention to the target auditory stimulus. The auditory stimuli were constructed as two pure-tone burst trains with different beat frequencies (37 and 43 Hz), generated simultaneously from two speakers located at different positions (left and right). Our experimental results showed classification accuracies high enough for a binary decision (64.67%, 30 commands/min, information transfer rate (ITR) = 1.89 bits/min; 74.00%, 12 commands/min, ITR = 2.08 bits/min; 82.00%, 6 commands/min, ITR = 1.92 bits/min; 84.33%, 3 commands/min, ITR = 1.12 bits/min; without any artifact rejection, inter-trial interval = 6 s). Based on the suggested paradigm, we implemented the first online ASSR-based BCI system, demonstrating the possibility of a totally vision-free BCI system.
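
    The ITR figures quoted above are consistent with the standard Wolpaw formula, which combines classification accuracy, the number of classes, and the selection rate. A minimal sketch (the function name is ours); with N = 2 classes it reproduces the paper's binary-decision numbers:

```python
import math

def wolpaw_itr(accuracy, n_classes, selections_per_min):
    """Information transfer rate in bits/min (Wolpaw formula).

    bits/selection = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)),
    scaled by the number of selections per minute.
    """
    p, n = accuracy, n_classes
    bits = math.log2(n)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * selections_per_min

# E.g. 84.33% accuracy at 3 commands/min with 2 classes:
print(round(wolpaw_itr(0.8433, 2, 3), 2))   # ≈ 1.12 bits/min
print(round(wolpaw_itr(0.82, 2, 6), 2))     # ≈ 1.92 bits/min
```

    Note how a slower selection rate can still yield a lower ITR despite higher accuracy: bits per selection grow only logarithmically with accuracy, while the rate scales the total linearly.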

  13. The WIN-speller: a new intuitive auditory brain-computer interface spelling application

    PubMed Central

    Kleih, Sonja C.; Herweg, Andreas; Kaufmann, Tobias; Staiger-Sälzer, Pit; Gerstner, Natascha; Kübler, Andrea

    2015-01-01

    The objective of this study was to test the usability of a new auditory Brain-Computer Interface (BCI) application for communication. We introduce a word-based, intuitive auditory spelling paradigm, the WIN-speller. In the WIN-speller, letters are grouped by words; for example, the word KLANG represents the letters A, G, K, L, and N. Thereby, the decoding step between perceiving a code and translating it to the stimuli it represents becomes superfluous. We tested 11 healthy volunteers and four end-users with motor impairment in copy-spelling mode. Spelling was successful with an average accuracy of 84% in the healthy sample. Three of the end-users communicated with average accuracies of 80% or higher, while one user was not able to communicate reliably. Even though further evaluation is required, the WIN-speller represents a potential alternative for BCI-based communication in end-users. PMID:26500476

  14. Responses to Vocalizations and Auditory Controls in the Human Newborn Brain

    PubMed Central

    Cristia, Alejandrina; Minagawa, Yasuyo; Dupoux, Emmanuel

    2014-01-01

    In the adult brain, speech can recruit a brain network that is overlapping with, but not identical to, that involved in perceiving non-linguistic vocalizations. Using the same stimuli that had been presented to human 4-month-olds and adults, as well as adult macaques, we sought to shed light on the cortical networks engaged when human newborns process diverse vocalization types. Near infrared spectroscopy was used to register the response of 40 newborns' perisylvian regions when stimulated with speech, human and macaque emotional vocalizations, as well as auditory controls in which the formant structure was destroyed but the long-term spectrum was retained. Left fronto-temporal and parietal regions were significantly activated in the comparison of stimulation versus rest, with unclear selectivity in cortical activation. These results for the newborn brain are qualitatively and quantitatively compared with previous findings from newborns, older human infants, adult humans, and adult macaques. PMID:25517997

  15. The Wellcome Prize Lecture. A map of auditory space in the mammalian brain: neural computation and development.

    PubMed

    King, A J

    1993-09-01

    The experiments described in this review have demonstrated that the SC contains a two-dimensional map of auditory space, which is synthesized within the brain using a combination of monaural and binaural localization cues. There is also an adaptive fusion of auditory and visual space in this midbrain nucleus, providing common access to the motor pathways that control orientation behaviour. This necessitates a highly plastic relationship between the visual and auditory systems, both during postnatal development and in adult life. Because of the independent mobility of different sense organs, gating mechanisms are incorporated into the auditory representation to provide up-to-date information about the spatial orientation of the eyes and ears. The SC therefore provides a valuable model system for studying a number of important issues in brain function, including the neural coding of sound location, the co-ordination of spatial information between different sensory systems, and the integration of sensory signals with motor outputs. PMID:8240794
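
    Among the binaural localization cues mentioned here is the interaural time difference (ITD), for which the Woodworth spherical-head model is a common first-order approximation. A sketch under that assumption (the head radius and speed of sound are illustrative textbook values, not from this review):

```python
import math

def woodworth_itd(azimuth_deg, head_radius=0.0875, c=343.0):
    """Interaural time difference (seconds), Woodworth spherical-head model.

    ITD = (r/c) * (sin(theta) + theta), with azimuth theta in radians;
    valid for a distant source and azimuths from 0 to 90 degrees.
    """
    theta = math.radians(azimuth_deg)
    return (head_radius / c) * (math.sin(theta) + theta)

# ITD grows from 0 at the midline to roughly 650 microseconds at 90 degrees,
# the physical range a neural map of auditory azimuth has to encode:
for az in (0, 30, 60, 90):
    print(az, round(woodworth_itd(az) * 1e6), "us")
```

    The sub-millisecond scale of these differences is what makes the neural computation of sound location discussed in the review so demanding.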

  16. Synchrony of auditory brain responses predicts behavioral ability to keep still in children with autism spectrum disorder: Auditory-evoked response in children with autism spectrum disorder.

    PubMed

    Yoshimura, Yuko; Kikuchi, Mitsuru; Hiraishi, Hirotoshi; Hasegawa, Chiaki; Takahashi, Tetsuya; Remijn, Gerard B; Oi, Manabu; Munesue, Toshio; Higashida, Haruhiro; Minabe, Yoshio

    2016-01-01

    The auditory-evoked P1m, recorded by magnetoencephalography, reflects central auditory processing ability in human children. One recent study revealed that asynchrony of P1m between the right and left hemispheres reflected a central auditory processing disorder in children with attention-deficit/hyperactivity disorder (ADHD). However, to date, the relationship between auditory P1m right-left hemispheric synchronization and comorbid hyperactivity in children with autism spectrum disorder (ASD) is unknown. In this study, building on the previous report of P1m asynchrony in children with ADHD, we investigated the relationship between voice-evoked P1m right-left hemispheric synchronization and hyperactivity in children with ASD, to clarify whether this synchronization is related to the symptom of hyperactivity. In addition to synchronization, we investigated right-left hemispheric lateralization. Our findings failed to demonstrate significant differences in these values between ASD children with and without the symptom of hyperactivity, which was evaluated using the Autism Diagnostic Observation Schedule, Generic (ADOS-G) subscale. However, there was a significant correlation between the degree of hemispheric synchronization and the ability to keep still during the 12-min MEG recording periods. Our results also suggest that asynchrony in the bilateral auditory processing system is associated with ADHD-like symptoms in children with ASD. PMID:27551667

  17. Connectivity in the human brain dissociates entropy and complexity of auditory inputs.

    PubMed

    Nastase, Samuel A; Iacovella, Vittorio; Davis, Ben; Hasson, Uri

    2015-03-01

    Complex systems are described according to two central dimensions: (a) the randomness of their output, quantified via entropy; and (b) their complexity, which reflects the organization of a system's generators. Whereas some approaches hold that complexity can be reduced to uncertainty or entropy, an axiom of complexity science is that signals with very high or very low entropy are generated by relatively non-complex systems, while complex systems typically generate outputs with entropy peaking between these two extremes. In understanding their environment, individuals would benefit from coding for both input entropy and complexity; entropy indexes uncertainty and can inform probabilistic coding strategies, whereas complexity reflects a concise and abstract representation of the underlying environmental configuration, which can serve independent purposes, e.g., as a template for generalization and rapid comparisons between environments. Using functional neuroimaging, we demonstrate that, in response to passively processed auditory inputs, functional integration patterns in the human brain track both the entropy and complexity of the auditory signal. Connectivity between several brain regions scaled monotonically with input entropy, suggesting sensitivity to uncertainty, whereas connectivity between other regions tracked entropy in a convex manner consistent with sensitivity to input complexity. These findings suggest that the human brain simultaneously tracks the uncertainty of sensory data and effectively models their environmental generators.

  18. Brain responses to altered auditory feedback during musical keyboard production: an fMRI study.

    PubMed

    Pfordresher, Peter Q; Mantell, James T; Brown, Steven; Zivadinov, Robert; Cox, Jennifer L

    2014-03-27

    Alterations of auditory feedback during piano performance can be profoundly disruptive. Furthermore, different alterations can yield different types of disruptive effects. Whereas alterations of feedback synchrony disrupt performed timing, alterations of feedback pitch contents can disrupt accuracy. The current research tested whether these behavioral dissociations correlate with differences in brain activity. Twenty pianists performed simple piano keyboard melodies while being scanned in a 3-T magnetic resonance imaging (MRI) scanner. In different conditions they experienced normal auditory feedback, altered auditory feedback (asynchronous delays or altered pitches), or control conditions that excluded movement or sound. Behavioral results replicated past findings. Neuroimaging data suggested that asynchronous delays led to increased activity in Broca's area and its right homologue, whereas disruptive alterations of pitch elevated activations in the cerebellum, area Spt, inferior parietal lobule, and the anterior cingulate cortex. Both disruptive conditions increased activations in the supplementary motor area. These results provide the first evidence of neural responses associated with perception/action mismatch during keyboard production. PMID:24513403

  20. An Auditory-Tactile Visual Saccade-Independent P300 Brain-Computer Interface.

    PubMed

    Yin, Erwei; Zeyl, Timothy; Saab, Rami; Hu, Dewen; Zhou, Zongtan; Chau, Tom

    2016-02-01

    Most P300 event-related potential (ERP)-based brain-computer interface (BCI) studies focus on gaze shift-dependent BCIs, which cannot be used by people who have lost voluntary eye movement. However, the performance of visual saccade-independent P300 BCIs is generally poor. To improve saccade-independent BCI performance, we propose a bimodal P300 BCI approach that simultaneously employs auditory and tactile stimuli. The proposed P300 BCI is a vision-independent system because no visual interaction is required of the user. Specifically, we designed a direction-congruent bimodal paradigm by randomly and simultaneously presenting auditory and tactile stimuli from the same direction. Furthermore, the channels and number of trials were tailored to each user to improve online performance. With 12 participants, the average online information transfer rate (ITR) of the bimodal approach improved by 45.43% and 51.05% over that attained, respectively, with the auditory and tactile approaches individually. Importantly, the average online ITR of the bimodal approach, including the break time between selections, reached 10.77 bits/min. These findings suggest that the proposed bimodal system holds promise as a practical visual saccade-independent P300 BCI. PMID:26678249

  1. Audio representations of multi-channel EEG: a new tool for diagnosis of brain disorders

    PubMed Central

    Vialatte, François B; Dauwels, Justin; Musha, Toshimitsu; Cichocki, Andrzej

    2012-01-01

    Objective: The objective of this paper is to develop audio representations of electroencephalographic (EEG) multichannel signals, useful for medical practitioners and neuroscientists. The fundamental question explored in this paper is whether clinically valuable information contained in the EEG, not available from the conventional graphical EEG representation, might become apparent through audio representations. Methods and Materials: Music scores are generated from sparse time-frequency maps of EEG signals. Specifically, EEG signals of patients with mild cognitive impairment (MCI) and (healthy) control subjects are considered. Statistical differences in the audio representations of MCI patients and control subjects are assessed through mathematical complexity indexes as well as a perception test; in the latter, participants try to distinguish between audio sequences from MCI patients and control subjects. Results: Several characteristics of the audio sequences, including sample entropy, number of notes, and synchrony, are significantly different in MCI patients and control subjects (Mann-Whitney p < 0.01). Moreover, the participants of the perception test were able to accurately classify the audio sequences (89% correctly classified). Conclusions: The proposed audio representation of multi-channel EEG signals helps to understand the complex structure of EEG. Promising results were obtained on a clinical EEG data set. PMID:23383399
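
    Sample entropy, one of the complexity indexes found to differ between MCI patients and controls, is the negative log of the conditional probability that sequences matching for m points also match for m + 1 points. A minimal numpy sketch (the parameters m and r follow common defaults, not necessarily this paper's settings):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of a 1-D series, in the style of Richman & Moorman.

    Counts template-pair matches of length m (B) and m+1 (A) under the
    Chebyshev distance with tolerance r * std(x), excluding self-matches,
    and returns -ln(A/B). Lower values mean a more regular series.
    """
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    n = len(x) - m                  # number of templates, same for both lengths

    def matches(length):
        tmpl = np.array([x[i:i + length] for i in range(n)])
        count = 0
        for i in range(n - 1):
            d = np.max(np.abs(tmpl[i + 1:] - tmpl[i]), axis=1)
            count += int(np.sum(d <= tol))
        return count

    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# A regular signal scores lower (more predictable) than white noise:
rng = np.random.default_rng(0)
sine = np.sin(np.linspace(0, 20 * np.pi, 300))
noise = rng.standard_normal(300)
```

    Running `sample_entropy` on the sine yields a markedly smaller value than on the noise, the same direction of contrast the paper exploits between patient and control audio sequences.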

  2. Case study: auditory brain responses in a minimally verbal child with autism and cerebral palsy

    PubMed Central

    Yau, Shu H.; McArthur, Genevieve; Badcock, Nicholas A.; Brock, Jon

    2015-01-01

    An estimated 30% of individuals with autism spectrum disorders (ASD) remain minimally verbal into late childhood, but research on cognition and brain function in ASD focuses almost exclusively on those with good or only moderately impaired language. Here we present a case study investigating auditory processing of GM, a nonverbal child with ASD and cerebral palsy. At the age of 8 years, GM was tested using magnetoencephalography (MEG) whilst passively listening to speech sounds and complex tones. Where typically developing children and verbal autistic children all demonstrated similar brain responses to speech and nonspeech sounds, GM produced much stronger responses to nonspeech than speech, particularly in the 65–165 ms (M50/M100) time window post-stimulus onset. GM was retested aged 10 years using electroencephalography (EEG) whilst passively listening to pure tone stimuli. Consistent with her MEG response to complex tones, GM showed an unusually early and strong response to pure tones in her EEG responses. The consistency of the MEG and EEG data in this single case study demonstrate both the potential and the feasibility of these methods in the study of minimally verbal children with ASD. Further research is required to determine whether GM's atypical auditory responses are characteristic of other minimally verbal children with ASD or of other individuals with cerebral palsy. PMID:26150768

  3. Electrical Brain Responses to an Auditory Illusion and the Impact of Musical Expertise

    PubMed Central

    Ioannou, Christos I.; Pereda, Ernesto; Lindsen, Job P.; Bhattacharya, Joydeep

    2015-01-01

    The presentation of two sinusoidal tones, one to each ear, with a slight frequency mismatch yields an auditory illusion of a beating frequency equal to the frequency difference between the two tones; this is known as a binaural beat (BB). The effect of brief BB stimulation on scalp EEG has not been conclusively demonstrated. Further, no studies have examined the impact of musical training on responses to BB stimulation, even though musicians' brains are often associated with enhanced auditory processing. In this study, we analysed EEG brain responses from two groups, musicians and non-musicians, stimulated by short presentations (1 min) of binaural beats with beat frequency varying from 1 Hz to 48 Hz. We focused our analysis on alpha- and gamma-band EEG signals, which were analysed in terms of spectral power and of functional connectivity as measured by two phase-synchrony-based measures, phase locking value and phase lag index. Finally, these measures were used to characterize the degree of centrality, segregation and integration of the functional brain network. We found that beat frequencies belonging to the alpha band produced the most significant steady-state responses across groups. Further, processing of low-frequency (delta, theta, alpha) binaural beats had a significant impact on cortical network patterns in alpha-band oscillations. Altogether these results provide a neurophysiological account of cortical responses to BB stimulation at varying frequencies, demonstrate a modulation of cortico-cortical connectivity in musicians' brains, and further suggest a form of neuronal entrainment bearing both linear and nonlinear relationships to the beat frequencies. PMID:26065708
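
    The stimulus described here, one pure tone per ear with a small frequency mismatch, is straightforward to synthesize; a sketch with illustrative frequencies (440 Hz left, 446 Hz right gives a 6-Hz beat; the specific tone frequencies are our choice, not the study's):

```python
import numpy as np

def binaural_beat(f_left, f_right, dur=60.0, fs=44100):
    """Stereo signal with one pure tone per ear.

    The perceived beat frequency equals |f_left - f_right|; it is
    generated centrally by the brain and is not physically present
    in either ear's waveform. Returns an array of shape (2, n_samples).
    """
    t = np.arange(int(dur * fs)) / fs
    left = np.sin(2 * np.pi * f_left * t)
    right = np.sin(2 * np.pi * f_right * t)
    return np.stack([left, right])

stereo = binaural_beat(440.0, 446.0, dur=1.0)   # 6-Hz binaural beat
```

    Sweeping the frequency mismatch from 1 to 48 Hz, as the study did, simply means varying `f_right - f_left` over that range while keeping the carrier in the audible band.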

  4. Enhanced peripheral visual processing in congenitally deaf humans is supported by multiple brain regions, including primary auditory cortex

    PubMed Central

    Scott, Gregory D.; Karns, Christina M.; Dow, Mark W.; Stevens, Courtney; Neville, Helen J.

    2014-01-01

    Brain reorganization associated with altered sensory experience clarifies the critical role of neuroplasticity in development. An example is enhanced peripheral visual processing associated with congenital deafness, but the neural systems supporting this have not been fully characterized. A gap in our understanding of deafness-enhanced peripheral vision is the contribution of primary auditory cortex. Previous studies of auditory cortex that use anatomical normalization across participants were limited by inter-subject variability of Heschl's gyrus. In addition to reorganized auditory cortex (cross-modal plasticity), a second gap in our understanding is the contribution of altered modality-specific cortices (visual intramodal plasticity in this case), as well as supramodal and multisensory cortices, especially when target detection is required across contrasts. Here we address these gaps by comparing fMRI signal change for peripheral vs. perifoveal visual stimulation (11–15° vs. 2–7°) in congenitally deaf and hearing participants in a blocked experimental design with two analytical approaches: a Heschl's gyrus region of interest analysis and a whole brain analysis. Our results using individually-defined primary auditory cortex (Heschl's gyrus) indicate that fMRI signal change for more peripheral stimuli was greater than perifoveal in deaf but not in hearing participants. Whole-brain analyses revealed differences between deaf and hearing participants for peripheral vs. perifoveal visual processing in extrastriate visual cortex including primary auditory cortex, MT+/V5, superior-temporal auditory, and multisensory and/or supramodal regions, such as posterior parietal cortex (PPC), frontal eye fields, anterior cingulate, and supplementary eye fields. Overall, these data demonstrate the contribution of neuroplasticity in multiple systems including primary auditory cortex, supramodal, and multisensory regions, to altered visual processing in congenitally deaf humans.

  5. Auditory agnosia.

    PubMed

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition.

  6. Exploring combinations of auditory and visual stimuli for gaze-independent brain-computer interfaces.

    PubMed

    An, Xingwei; Höhne, Johannes; Ming, Dong; Blankertz, Benjamin

    2014-01-01

    For Brain-Computer Interface (BCI) systems designed for users with severe impairments of the oculomotor system, an appropriate mode of presenting stimuli to the user is crucial. To investigate whether multi-sensory integration can be exploited in the gaze-independent event-related potential (ERP) speller and enhance BCI performance, we designed a visual-auditory speller. We investigated the possibility of enhancing stimulus presentation by combining visual and auditory stimuli within gaze-independent spellers. In this study with N = 15 healthy users, two different ways of combining the two sensory modalities were proposed: simultaneous redundant streams (Combined-Speller) and interleaved independent streams (Parallel-Speller). Unimodal stimuli were applied as control conditions. The workload, ERP components, classification accuracy and resulting spelling speed were analyzed for each condition. The Combined-Speller showed a lower workload than unimodal paradigms, without sacrificing spelling performance. In addition, shorter latencies, lower amplitudes, and a shift of the temporal and spatial distribution of discriminative information were observed for the Combined-Speller. These results are important and should inspire future studies to investigate the reasons for these differences. For the more innovative and demanding Parallel-Speller, where the auditory and visual domains are independent of each other, a proof of concept was obtained: fifteen users could spell online with a mean accuracy of 87.7% (chance level < 3%), at a competitive average speed of 1.65 symbols per minute. The fact that it requires only one selection period per symbol makes it a good candidate for a fast communication channel, and it brings new insight into truly multisensory stimulus paradigms. The novel approaches for combining two sensory modalities designed here are valuable for the development of ERP-based BCI paradigms. PMID:25350547

  9. High-Resolution Mapping of Myeloarchitecture In Vivo: Localization of Auditory Areas in the Human Brain.

    PubMed

    De Martino, Federico; Moerel, Michelle; Xu, Junqian; van de Moortele, Pierre-Francois; Ugurbil, Kamil; Goebel, Rainer; Yacoub, Essa; Formisano, Elia

    2015-10-01

The precise delineation of auditory areas in vivo remains problematic. Histological analysis of postmortem tissue indicates that the relation of areal borders to macroanatomical landmarks is variable across subjects. Furthermore, functional parcellation schemes based on measures of, for example, frequency preference (tonotopy) remain controversial. Here, we propose a 7 Tesla magnetic resonance imaging method that enables the anatomical delineation of auditory cortical areas in vivo and in individual brains, through the high-resolution visualization (0.6 × 0.6 × 0.6 mm³) of intracortical anatomical contrast related to myelin. The approach combines the acquisition and analysis of images with multiple MR contrasts (T1, T2*, and proton density). Compared with previous methods, the proposed solution is feasible at high fields and time efficient, which allows collecting myelin-related and functional images within the same measurement session. Our results show that a data-driven analysis of cortical depth-dependent profiles of anatomical contrast allows identifying a most densely myelinated cortical region on the medial Heschl's gyrus. Analyses of functional responses show that this region includes neuronal populations with typical primary functional properties (single tonotopic gradient and narrow frequency tuning), thus indicating that it may correspond to the human homolog of monkey A1. PMID:24994817

  10. Auditory Hallucinations and the Brain's Resting-State Networks: Findings and Methodological Observations.

    PubMed

    Alderson-Day, Ben; Diederen, Kelly; Fernyhough, Charles; Ford, Judith M; Horga, Guillermo; Margulies, Daniel S; McCarthy-Jones, Simon; Northoff, Georg; Shine, James M; Turner, Jessica; van de Ven, Vincent; van Lutterveld, Remko; Waters, Flavie; Jardri, Renaud

    2016-09-01

    In recent years, there has been increasing interest in the potential for alterations to the brain's resting-state networks (RSNs) to explain various kinds of psychopathology. RSNs provide an intriguing new explanatory framework for hallucinations, which can occur in different modalities and population groups, but which remain poorly understood. This collaboration from the International Consortium on Hallucination Research (ICHR) reports on the evidence linking resting-state alterations to auditory hallucinations (AH) and provides a critical appraisal of the methodological approaches used in this area. In the report, we describe findings from resting connectivity fMRI in AH (in schizophrenia and nonclinical individuals) and compare them with findings from neurophysiological research, structural MRI, and research on visual hallucinations (VH). In AH, various studies show resting connectivity differences in left-hemisphere auditory and language regions, as well as atypical interaction of the default mode network and RSNs linked to cognitive control and salience. As the latter are also evident in studies of VH, this points to a domain-general mechanism for hallucinations alongside modality-specific changes to RSNs in different sensory regions. However, we also observed high methodological heterogeneity in the current literature, affecting the ability to make clear comparisons between studies. To address this, we provide some methodological recommendations and options for future research on the resting state and hallucinations. PMID:27280452

  11. Brain-Generated Estradiol Drives Long-Term Optimization of Auditory Coding to Enhance the Discrimination of Communication Signals

    PubMed Central

    Tremere, Liisa A.; Pinaud, Raphael

    2011-01-01

    Auditory processing and hearing-related pathologies are heavily influenced by steroid hormones in a variety of vertebrate species including humans. The hormone estradiol has been recently shown to directly modulate the gain of central auditory neurons, in real-time, by controlling the strength of inhibitory transmission via a non-genomic mechanism. The functional relevance of this modulation, however, remains unknown. Here we show that estradiol generated in the songbird homologue of the mammalian auditory association cortex, rapidly enhances the effectiveness of the neural coding of complex, learned acoustic signals in awake zebra finches. Specifically, estradiol increases mutual information rates, coding efficiency and the neural discrimination of songs. These effects are mediated by estradiol’s modulation of both rate and temporal coding of auditory signals. Interference with the local action or production of estradiol in the auditory forebrain of freely-behaving animals disrupts behavioral responses to songs, but not to other behaviorally-relevant communication signals. Our findings directly show that estradiol is a key regulator of auditory function in the adult vertebrate brain. PMID:21368039

  12. Incorporating modern neuroscience findings to improve brain-computer interfaces: tracking auditory attention

    NASA Astrophysics Data System (ADS)

    Wronkiewicz, Mark; Larson, Eric; Lee, Adrian KC

    2016-10-01

    Objective. Brain-computer interface (BCI) technology allows users to generate actions based solely on their brain signals. However, current non-invasive BCIs generally classify brain activity recorded from surface electroencephalography (EEG) electrodes, which can hinder the application of findings from modern neuroscience research. Approach. In this study, we use source imaging—a neuroimaging technique that projects EEG signals onto the surface of the brain—in a BCI classification framework. This allowed us to incorporate prior research from functional neuroimaging to target activity from a cortical region involved in auditory attention. Main results. Classifiers trained to detect attention switches performed better with source imaging projections than with EEG sensor signals. Within source imaging, including subject-specific anatomical MRI information (instead of using a generic head model) further improved classification performance. This source-based strategy also reduced accuracy variability across three dimensionality reduction techniques—a major design choice in most BCIs. Significance. Our work shows that source imaging provides clear quantitative and qualitative advantages to BCIs and highlights the value of incorporating modern neuroscience knowledge and methods into BCI systems.
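
The source-imaging step described above (projecting sensor-space EEG onto the cortical surface before classification) can be illustrated with a regularized minimum-norm inverse. This is a minimal sketch under assumptions, not the study's pipeline: the lead-field matrix is taken as given (in practice it comes from a head model, subject-specific or generic), and the regularization scaling is a common but illustrative choice.

```python
import numpy as np

def minimum_norm_inverse(leadfield, reg=1e-6):
    """Regularized minimum-norm inverse operator.

    leadfield: (n_sensors, n_sources) forward model mapping source
    activity to sensor readings (assumed known, e.g. from a head model).
    Returns W (n_sources, n_sensors) with W = L.T @ inv(L @ L.T + lam*I).
    """
    L = leadfield
    gram = L @ L.T
    # Scale the regularizer by the mean diagonal of the Gram matrix so
    # `reg` is unit-free; this scaling is an illustrative convention.
    lam = reg * np.trace(gram) / gram.shape[0]
    return L.T @ np.linalg.inv(gram + lam * np.eye(gram.shape[0]))

def project_to_sources(leadfield, sensor_data, reg=1e-6):
    """Map (n_sensors, n_samples) EEG to (n_sources, n_samples) estimates."""
    return minimum_norm_inverse(leadfield, reg) @ sensor_data
```

A classifier would then be trained on features drawn from the source estimates (e.g. restricted to an attention-related cortical region) rather than on the raw sensor signals.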

  13. "Where do auditory hallucinations come from?"--a brain morphometry study of schizophrenia patients with inner or outer space hallucinations.

    PubMed

    Plaze, Marion; Paillère-Martinot, Marie-Laure; Penttilä, Jani; Januel, Dominique; de Beaurepaire, Renaud; Bellivier, Franck; Andoh, Jamila; Galinowski, André; Gallarda, Thierry; Artiges, Eric; Olié, Jean-Pierre; Mangin, Jean-François; Martinot, Jean-Luc; Cachia, Arnaud

    2011-01-01

Auditory verbal hallucinations are a cardinal symptom of schizophrenia. Bleuler and Kraepelin distinguished 2 main classes of hallucinations: hallucinations heard outside the head (outer space, or external, hallucinations) and hallucinations heard inside the head (inner space, or internal, hallucinations). This distinction has been confirmed by recent phenomenological studies that identified 3 independent dimensions in auditory hallucinations: language complexity, self-other misattribution, and spatial location. Brain imaging studies in schizophrenia patients with auditory hallucinations have already investigated language complexity and self-other misattribution, but the neural substrate of hallucination spatial location remains unknown. Magnetic resonance images of 45 right-handed patients with schizophrenia and persistent auditory hallucinations and 20 healthy right-handed subjects were acquired. Two homogeneous subgroups of patients were defined based on the hallucination spatial location: patients with only outer space hallucinations (N=12) and patients with only inner space hallucinations (N=15). Between-group differences were then assessed using 2 complementary brain morphometry approaches: voxel-based morphometry and sulcus-based morphometry. Convergent anatomical differences were detected between the patient subgroups in the right temporoparietal junction (rTPJ). In comparison to healthy subjects, opposite deviations in white matter volumes and sulcus displacements were found in patients with inner space hallucinations and patients with outer space hallucinations. The current results indicate that the spatial location of auditory hallucinations is associated with rTPJ anatomy, a key region of the "where" auditory pathway. The detected tilt in the sulcal junction suggests deviations during early brain maturation, when the superior temporal sulcus and its anterior terminal branch appear and merge.

  14. An online multi-channel SSVEP-based brain-computer interface using a canonical correlation analysis method

    NASA Astrophysics Data System (ADS)

    Bin, Guangyu; Gao, Xiaorong; Yan, Zheng; Hong, Bo; Gao, Shangkai

    2009-08-01

In recent years, there has been increasing interest in using the steady-state visual evoked potential (SSVEP) in brain-computer interface (BCI) systems. However, several aspects of current SSVEP-based BCI systems need improvement, specifically in relation to speed, user variation and ease of use. With these improvements in mind, this paper presents an online multi-channel SSVEP-based BCI system that uses a canonical correlation analysis (CCA) method to extract the frequency information associated with the SSVEP. The key parameters (channel locations, window length and the number of harmonics) are investigated using offline data, and the results are used to guide the design of the online system. An SSVEP-based BCI system with six targets, which uses nine channel locations in the occipital and parietal lobes, a window length of 2 s and the first harmonic, was used for online testing on 12 subjects. The results show that the proposed BCI system has high performance, achieving an average accuracy of 95.3% and an information transfer rate of 58 ± 9.6 bit min⁻¹. The advantages of the proposed system are that channel selection and parameter optimization are not required, harmonic frequencies can be exploited, user variation is low and setup is easy.
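
The CCA frequency-detection idea described above can be sketched in a few lines: build sine/cosine reference signals at each candidate stimulation frequency, compute the largest canonical correlation between the multichannel EEG window and each reference set, and pick the frequency with the highest score. This is a minimal illustration of the technique, not the authors' implementation; the sampling rate, window length and candidate frequencies below are placeholders.

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y.

    X: (n_samples, n_channels) EEG window; Y: (n_samples, 2*harmonics)
    reference signals. After centering, the canonical correlations are
    the singular values of Qx.T @ Qy, where Qx, Qy come from QR.
    """
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def ssvep_references(freq, fs, n_samples, n_harmonics=1):
    """Sine/cosine reference set at `freq` and its harmonics."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)

def detect_ssvep(eeg, fs, candidate_freqs, n_harmonics=1):
    """Return the candidate frequency whose references correlate best."""
    scores = [max_canonical_corr(eeg,
                                 ssvep_references(f, fs, len(eeg), n_harmonics))
              for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(scores))]
```

Because the references are built analytically, no per-subject training or channel selection is needed, which matches the ease-of-use argument made in the abstract.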

  15. Coding of Visual, Auditory, Rule, and Response Information in the Brain: 10 Years of Multivoxel Pattern Analysis.

    PubMed

    Woolgar, Alexandra; Jackson, Jade; Duncan, John

    2016-10-01

How is the processing of task information organized in the brain? Many views of brain function emphasize modularity, with different regions specialized for processing different types of information. However, recent accounts also highlight flexibility, pointing especially to the highly consistent pattern of frontoparietal activation across many tasks. Although early insights from functional imaging were based on overall activation levels during different cognitive operations, in the last decade many researchers have used multivoxel pattern analyses to interrogate the representational content of activations, mapping out the brain regions that make particular stimulus, rule, or response distinctions. Here, we drew on 100 searchlight decoding analyses from 57 published papers to characterize the information coded in different brain networks. The outcome was highly structured. Visual, auditory, and motor networks predominantly (but not exclusively) coded visual, auditory, and motor information, respectively. By contrast, the frontoparietal multiple-demand network was characterized by domain generality, coding visual, auditory, motor, and rule information. The contribution of the default mode network and voxels elsewhere was minor. The data suggest a balanced picture of brain organization in which sensory and motor networks are relatively specialized for information in their own domain, whereas a specific frontoparietal network acts as a domain-general "core" with the capacity to code many different aspects of a task. PMID:27315269

  17. Proteome rearrangements after auditory learning: high-resolution profiling of synapse-enriched protein fractions from mouse brain.

    PubMed

    Kähne, Thilo; Richter, Sandra; Kolodziej, Angela; Smalla, Karl-Heinz; Pielot, Rainer; Engler, Alexander; Ohl, Frank W; Dieterich, Daniela C; Seidenbecher, Constanze; Tischmeyer, Wolfgang; Naumann, Michael; Gundelfinger, Eckart D

    2016-07-01

Learning and memory processes are accompanied by rearrangements of synaptic protein networks. While various studies have demonstrated the regulation of individual synaptic proteins during these processes, much less is known about the complex regulation of synaptic proteomes. Recently, we reported that auditory discrimination learning in mice is associated with a relative down-regulation of proteins involved in the structural organization of synapses in various brain regions. Aiming at the identification of biological processes and signaling pathways involved in auditory memory formation, here, a label-free quantification approach was utilized to identify regulated synaptic junctional proteins and phosphoproteins in the auditory cortex, frontal cortex, hippocampus, and striatum of mice 24 h after the learning experiment. Twenty proteins, including postsynaptic scaffolds, actin-remodeling proteins, and RNA-binding proteins, were regulated in at least three brain regions, pointing to common, cross-regional mechanisms. Most of the detected synaptic proteome changes were, however, restricted to individual brain regions. For example, several members of the Septin family of cytoskeletal proteins were up-regulated only in the hippocampus, while Septin-9 was down-regulated in the hippocampus, the frontal cortex, and the striatum. Meta-analyses utilizing several databases were employed to identify underlying cellular functions and biological pathways. Data are available via ProteomeExchange with identifier PXD003089. How does the protein composition of synapses change in different brain areas upon auditory learning? We unravel discrete proteome changes in mouse auditory cortex, frontal cortex, hippocampus, and striatum functionally implicated in the learning process. We identify not only common but also area-specific biological pathways and cellular processes modulated 24 h after training, indicating individual contributions of the regions to memory processing. PMID

  18. Noninvasive brain stimulation for the treatment of auditory verbal hallucinations in schizophrenia: methods, effects and challenges

    PubMed Central

    Kubera, Katharina M.; Barth, Anja; Hirjak, Dusan; Thomann, Philipp A.; Wolf, Robert C.

    2015-01-01

    This mini-review focuses on noninvasive brain stimulation techniques as an augmentation method for the treatment of persistent auditory verbal hallucinations (AVH) in patients with schizophrenia. Paradigmatically, we place emphasis on transcranial magnetic stimulation (TMS). We specifically discuss rationales of stimulation and consider methodological questions together with issues of phenotypic diversity in individuals with drug-refractory and persistent AVH. Eventually, we provide a brief outlook for future investigations and treatment directions. Taken together, current evidence suggests TMS as a promising method in the treatment of AVH. Low-frequency stimulation of the superior temporal cortex (STC) may reduce symptom severity and frequency. Yet clinical effects are of relatively short duration and effect sizes appear to decrease over time along with publication of larger trials. Apart from considering other innovative stimulation techniques, such as transcranial Direct Current Stimulation (tDCS), and optimizing stimulation protocols, treatment of AVH using noninvasive brain stimulation will essentially rely on accurate identification of potential responders and non-responders for these treatment modalities. In this regard, future studies will need to consider distinct phenotypic presentations of AVH in patients with schizophrenia, together with the putative functional neurocircuitry underlying these phenotypes. PMID:26528145

  19. Brain activity during divided and selective attention to auditory and visual sentence comprehension tasks

    PubMed Central

    Moisala, Mona; Salmela, Viljami; Salo, Emma; Carlson, Synnöve; Vuontela, Virve; Salonen, Oili; Alho, Kimmo

    2015-01-01

    Using functional magnetic resonance imaging (fMRI), we measured brain activity of human participants while they performed a sentence congruence judgment task in either the visual or auditory modality separately, or in both modalities simultaneously. Significant performance decrements were observed when attention was divided between the two modalities compared with when one modality was selectively attended. Compared with selective attention (i.e., single tasking), divided attention (i.e., dual-tasking) did not recruit additional cortical regions, but resulted in increased activity in medial and lateral frontal regions which were also activated by the component tasks when performed separately. Areas involved in semantic language processing were revealed predominantly in the left lateral prefrontal cortex by contrasting incongruent with congruent sentences. These areas also showed significant activity increases during divided attention in relation to selective attention. In the sensory cortices, no crossmodal inhibition was observed during divided attention when compared with selective attention to one modality. Our results suggest that the observed performance decrements during dual-tasking are due to interference of the two tasks because they utilize the same part of the cortex. Moreover, semantic dual-tasking did not appear to recruit additional brain areas in comparison with single tasking, and no crossmodal inhibition was observed during intermodal divided attention. PMID:25745395

  20. Descending brain neurons in the cricket Gryllus bimaculatus (de Geer): auditory responses and impact on walking.

    PubMed

    Zorović, Maja; Hedwig, Berthold

    2013-01-01

    The activity of four types of sound-sensitive descending brain neurons in the cricket Gryllus bimaculatus was recorded intracellularly while animals were standing or walking on an open-loop trackball system. In a neuron with a contralaterally descending axon, the male calling song elicited responses that copied the pulse pattern of the song during standing and walking. The accuracy of pulse copying increased during walking. Neurons with ipsilaterally descending axons responded weakly to sound only during standing. The responses were mainly to the first pulse of each chirp, whereas the complete pulse pattern of a chirp was not copied. During walking the auditory responses were suppressed in these neurons. The spiking activity of all four neuron types was significantly correlated to forward walking velocity, indicating their relevance for walking. Additionally, injection of depolarizing current elicited walking and/or steering in three of four neuron types described. In none of the neurons was the spiking activity both sufficient and necessary to elicit and maintain walking behaviour. Some neurons showed arborisations in the lateral accessory lobes, pointing to the relevance of this brain region for cricket audition and descending motor control.

  1. An online brain-computer interface based on shifting attention to concurrent streams of auditory stimuli

    NASA Astrophysics Data System (ADS)

    Hill, N. J.; Schölkopf, B.

    2012-04-01

    We report on the development and online testing of an electroencephalogram-based brain-computer interface (BCI) that aims to be usable by completely paralysed users—for whom visual or motor-system-based BCIs may not be suitable, and among whom reports of successful BCI use have so far been very rare. The current approach exploits covert shifts of attention to auditory stimuli in a dichotic-listening stimulus design. To compare the efficacy of event-related potentials (ERPs) and steady-state auditory evoked potentials (SSAEPs), the stimuli were designed such that they elicited both ERPs and SSAEPs simultaneously. Trial-by-trial feedback was provided online, based on subjects' modulation of N1 and P3 ERP components measured during single 5 s stimulation intervals. All 13 healthy subjects were able to use the BCI, with performance in a binary left/right choice task ranging from 75% to 96% correct across subjects (mean 85%). BCI classification was based on the contrast between stimuli in the attended stream and stimuli in the unattended stream, making use of every stimulus, rather than contrasting frequent standard and rare ‘oddball’ stimuli. SSAEPs were assessed offline: for all subjects, spectral components at the two exactly known modulation frequencies allowed discrimination of pre-stimulus from stimulus intervals, and of left-only stimuli from right-only stimuli when one side of the dichotic stimulus pair was muted. However, attention modulation of SSAEPs was not sufficient for single-trial BCI communication, even when the subject's attention was clearly focused well enough to allow classification of the same trials via ERPs. ERPs clearly provided a superior basis for BCI. The ERP results are a promising step towards the development of a simple-to-use, reliable yes/no communication system for users in the most severely paralysed states, as well as potential attention-monitoring and -training applications outside the context of assistive technology.
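
The classification principle above (contrasting responses to attended versus unattended stimuli via N1 and P3 ERP components) can be caricatured with a toy decision rule: average the epochs time-locked to each stream and compare mean amplitudes in a P3 latency window. This is a deliberately simplified, single-channel sketch, not the study's classifier; the window boundaries below are hypothetical illustrative values.

```python
import numpy as np

# Hypothetical ERP component windows in seconds -- illustrative values,
# not those used in the study.
N1_WIN = (0.08, 0.15)
P3_WIN = (0.25, 0.45)

def window_mean(epoch_avg, fs, win):
    """Mean amplitude of an averaged epoch within a latency window."""
    i0, i1 = int(win[0] * fs), int(win[1] * fs)
    return epoch_avg[i0:i1].mean()

def classify_attended(epochs_left, epochs_right, fs):
    """Pick the attended stream from two single-channel epoch collections.

    epochs_*: (n_epochs, n_samples) arrays time-locked to each stream's
    stimuli. Attention typically enhances the P3 to attended stimuli,
    so the stream with the larger P3-window mean is chosen.
    """
    p3_left = window_mean(epochs_left.mean(axis=0), fs, P3_WIN)
    p3_right = window_mean(epochs_right.mean(axis=0), fs, P3_WIN)
    return "left" if p3_left > p3_right else "right"
```

A real system would use multichannel spatio-temporal features and a trained linear classifier, but the core contrast (attended-stream responses versus unattended-stream responses) is the same.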

  2. Noise trauma induced plastic changes in brain regions outside the classical auditory pathway.

    PubMed

    Chen, G-D; Sheppard, A; Salvi, R

    2016-02-19

The effects of intense noise exposure on the classical auditory pathway have been extensively investigated; however, little is known about the effects of noise-induced hearing loss on non-classical auditory areas of the brain such as the lateral amygdala (LA) and striatum (Str). To address this issue, we compared the noise-induced changes in spontaneous and tone-evoked responses from multiunit clusters (MUC) in the LA and Str with those seen in the auditory cortex (AC) in rats. High-frequency octave-band noise (10-20 kHz) and narrow-band noise (16-20 kHz) induced permanent threshold shifts at high frequencies within and above the noise band, but not at low frequencies. While the noise trauma significantly elevated the spontaneous discharge rate (SR) in the AC, SRs in the LA and Str were only slightly increased across all frequencies. The high-frequency noise trauma affected tone-evoked firing rates in a frequency- and time-dependent manner, and the changes appeared to be related to the severity of the trauma. In the LA, tone-evoked firing rates were reduced at the high frequencies (trauma area), whereas firing rates were enhanced at the low frequencies or at the edge frequency, depending on the severity of the high-frequency hearing loss. The firing-rate temporal profile changed from a broad plateau to a single sharp, delayed peak. In the AC, tone-evoked firing rates were depressed at high frequencies and enhanced at low frequencies, while the firing-rate temporal profiles became substantially broader. In contrast, firing rates in the Str were generally decreased and the temporal profiles became more phasic and less prolonged. The altered firing rates and patterns at low frequencies induced by high-frequency hearing loss could have perceptual consequences: the tone-evoked hyperactivity in low-frequency MUCs could manifest as hyperacusis, whereas the discharge-pattern changes could affect temporal resolution and integration. PMID:26701290

  3. Delta, theta, beta, and gamma brain oscillations index levels of auditory sentence processing.

    PubMed

    Mai, Guangting; Minett, James W; Wang, William S-Y

    2016-06-01

A growing number of studies indicate that multiple ranges of brain oscillations, especially the delta (δ, <4 Hz), theta (θ, 4-8 Hz), beta (β, 13-30 Hz), and gamma (γ, 30-50 Hz) bands, are engaged in speech and language processing. It is not clear, however, how these oscillations relate to functional processing at different linguistic hierarchical levels. Using scalp electroencephalography (EEG), the current study tested the hypothesis that phonological and the higher-level linguistic (semantic/syntactic) organizations during auditory sentence processing are indexed by distinct EEG signatures derived from the δ, θ, β, and γ oscillations. We analyzed specific EEG signatures while subjects listened to Mandarin speech stimuli in three different conditions in order to dissociate phonological and semantic/syntactic processing: (1) sentences comprising valid disyllabic words assembled in a valid syntactic structure (real-word condition); (2) utterances with morphologically valid syllables, but not constituting valid disyllabic words (pseudo-word condition); and (3) backward versions of the real-word and pseudo-word conditions. We tested four signatures: band power, EEG-acoustic entrainment (EAE), cross-frequency coupling (CFC), and inter-electrode renormalized partial directed coherence (rPDC). The results show significant effects of band power and EAE of δ and θ oscillations for phonological, rather than semantic/syntactic processing, indicating the importance of tracking δ- and θ-rate phonetic patterns during phonological analysis. We also found significant β-related effects, suggesting tracking of EEG to the acoustic stimulus (high-β EAE), memory processing (θ-low-β CFC), and auditory-motor interactions (20-Hz rPDC) during phonological analysis. For semantic/syntactic processing, we obtained a significant effect of γ power, suggesting lexical memory retrieval or processing grammatical word categories. Based on these findings, we confirm that scalp EEG
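
One of the four signatures above, band power, reduces to estimating spectral power within each frequency band. A minimal periodogram-based sketch, using the band boundaries quoted in the abstract (the simple FFT periodogram stands in for whatever spectral estimator the study actually used):

```python
import numpy as np

# Band edges in Hz, as defined in the abstract.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "beta": (13, 30), "gamma": (30, 50)}

def band_power(signal, fs, band):
    """Mean power of `signal` (1-D, sampled at `fs` Hz) within `band`.

    Uses a raw FFT periodogram; real analyses typically prefer a
    windowed/averaged estimator such as Welch's method.
    """
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    lo, hi = band
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()
```

Per-band, per-condition power values computed this way would then feed the statistical comparisons (e.g. real-word versus pseudo-word conditions) that the abstract reports.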

  4. A case of generalized auditory agnosia with unilateral subcortical brain lesion.

    PubMed

    Suh, Hyee; Shin, Yong-Il; Kim, Soo Yeon; Kim, Sook Hee; Chang, Jae Hyeok; Shin, Yong Beom; Ko, Hyun-Yoon

    2012-12-01

The mechanisms and functional anatomy underlying the early stages of speech perception are still not well understood. Auditory agnosia is a deficit of auditory object processing, defined as an inability to recognize spoken language and/or nonverbal environmental sounds and music despite adequate hearing, while spontaneous speech, reading and writing are preserved. Usually, bilateral or unilateral temporal lobe lesions, especially of the transverse gyri, are responsible for auditory agnosia; subcortical lesions without cortical damage rarely cause it. We present a 73-year-old right-handed male with generalized auditory agnosia caused by a unilateral subcortical lesion. He was unable to repeat or take dictation, but his spontaneous speech was fluent and comprehensible. He could understand and read written words and phrases. His auditory brainstem evoked potentials and audiometry were intact. This case suggests that a subcortical lesion involving the unilateral acoustic radiation can cause generalized auditory agnosia.

  5. Auditory brain response modified by temporal deviation of language rhythm: an auditory event-related potential study.

    PubMed

    Jomori, Izumi; Hoshiyama, Minoru

    2009-10-01

The effects of temporal disruption of language rhythm in Japanese on auditory evoked potentials were investigated in normal subjects. Auditory event-related potentials (AERPs) were recorded in response to syllables presented with a natural or a deviated language rhythm, the latter created by inserting silent intervals of various durations (0-400 ms) between syllables. In a second experiment, the speech rate was varied to assess the effect of a deviant rhythm relative to speech rate on the AERP. Prolonging the intervals did not affect the N100-P150 components until the inserted interval reached 400 ms, whereas the negative component (early negativity, EN), peaking at 250-300 ms, was enhanced when the interval was 100 ms or more. The N100-P150 components following deviated language rhythms did not change at the fast speech rate, but did at the standard and slow rates. We conclude that the N100-P150 components were changed by the combined effects of adaptation and prediction related to the speech rate, and that the EN was evoked by the deviated language rhythm through a different mechanism from that underlying the N100-P150 changes, possibly via a mismatch-detection process between the deviant rhythm and an intrinsic rehearsed rhythm.

  6. Effect of Hearing Aids on Auditory Function in Infants with Perinatal Brain Injury and Severe Hearing Loss

    PubMed Central

    Moreno-Aguirre, Alma Janeth; Santiago-Rodríguez, Efraín; Harmony, Thalía; Fernández-Bouzas, Antonio

    2012-01-01

Background: Approximately 2–4% of newborns with perinatal risk factors present with hearing loss. Our aim was to analyze the effect of hearing aid use on auditory function evaluated based on otoacoustic emissions (OAEs), auditory brain responses (ABRs) and auditory steady state responses (ASSRs) in infants with perinatal brain injury and profound hearing loss. Methodology/Principal Findings: A prospective, longitudinal study of auditory function in infants with profound hearing loss. Right side hearing before and after hearing aid use was compared with left side hearing (not stimulated and used as control). All infants were subjected to OAE, ABR and ASSR evaluations before and after hearing aid use. The average ABR threshold decreased from 90.0 to 80.0 dB (p = 0.003) after six months of hearing aid use. In the left ear, which was used as a control, the ABR threshold decreased from 94.6 to 87.6 dB, which was not significant (p>0.05). In addition, the ASSR threshold in the 4000-Hz frequency decreased from 89 dB to 72 dB (p = 0.013) after six months of right ear hearing aid use; the other frequencies in the right ear and all frequencies in the left ear did not show significant differences in any of the measured parameters (p>0.05). OAEs were absent in the baseline test and showed no changes after hearing aid use in the right ear (p>0.05). Conclusions/Significance: This study provides evidence that early hearing aid use decreases the hearing threshold in ABR and ASSR assessments with no functional modifications in the auditory receptor, as evaluated by OAEs. PMID:22808289

  7. Localization of brain activity during auditory verbal short-term memory derived from magnetic recordings.

    PubMed

    Starr, A; Kristeva, R; Cheyne, D; Lindinger, G; Deecke, L

    1991-09-01

    We have studied magnetic and electrical fields of the brain in normal subjects during the performance of an auditory verbal short-term memory task. On each trial 3 digits, selected from the numbers 'one' through 'nine', were presented for memorization followed by a probe number which could or could not be a member of the preceding memory set. The subject pressed an appropriate response button and accuracy and reaction time were measured. Magnetic fields recorded from up to 63 sites over both hemispheres revealed a transient field at 110 ms to both the memory item and the probe consistent with a dipole source in Heschl's gyrus; a sustained magnetic field between 300 and 800 ms to just the memory items localized to the temporal lobe slightly deeper and posterior to Heschl's gyri; and a sustained magnetic field between 300 and 800 ms to just the probes localized bilaterally to the medio-basal temporal lobes. These results are related to clinical disorders of short-term memory in man.

  8. Non-invasive Brain Stimulation and Auditory Verbal Hallucinations: New Techniques and Future Directions

    PubMed Central

    Moseley, Peter; Alderson-Day, Ben; Ellison, Amanda; Jardri, Renaud; Fernyhough, Charles

    2016-01-01

    Auditory verbal hallucinations (AVHs) are the experience of hearing a voice in the absence of any speaker. Results from recent attempts to treat AVHs with neurostimulation (rTMS or tDCS) to the left temporoparietal junction have not been conclusive, but suggest that it may be a promising treatment option for some individuals. Some evidence suggests that the therapeutic effect of neurostimulation on AVHs may result from modulation of cortical areas involved in the ability to monitor the source of self-generated information. Here, we provide a brief overview of cognitive models and neurostimulation paradigms associated with treatment of AVHs, and discuss techniques that could be explored in the future to improve the efficacy of treatment, including alternating current and random noise stimulation. Technical issues surrounding the use of neurostimulation as a treatment option are discussed (including methods to localize the targeted cortical area, and the state-dependent effects of brain stimulation), as are issues surrounding the acceptability of neurostimulation for adolescent populations and individuals who experience qualitatively different types of AVH. PMID:26834541

  9. Hyperpolarization-independent maturation and refinement of GABA/glycinergic connections in the auditory brain stem.

    PubMed

    Lee, Hanmi; Bach, Eva; Noh, Jihyun; Delpire, Eric; Kandler, Karl

    2016-03-01

    During development GABA and glycine synapses are initially excitatory before they gradually become inhibitory. This transition is due to a developmental increase in the activity of the neuronal potassium-chloride cotransporter 2 (KCC2), which shifts the chloride equilibrium potential (ECl) to values more negative than the resting membrane potential. While the role of early GABA and glycine depolarizations in neuronal development has become increasingly clear, the role of the transition to hyperpolarization in synapse maturation and circuit refinement has remained an open question. Here we investigated this question by examining the maturation and developmental refinement of GABA/glycinergic and glutamatergic synapses in the lateral superior olive (LSO), a binaural auditory brain stem nucleus, in KCC2-knockdown mice, in which GABA and glycine remain depolarizing. We found that many key events in the development of synaptic inputs to the LSO, such as changes in neurotransmitter phenotype, strengthening and elimination of GABA/glycinergic connections, and maturation of glutamatergic synapses, occur undisturbed in KCC2-knockdown mice compared with wild-type mice. These results indicate that maturation of inhibitory and excitatory synapses in the LSO is independent of the GABA and glycine depolarization-to-hyperpolarization transition. PMID:26655825

  10. Brain-stem involvement in multiple sclerosis: a comparison between brain-stem auditory evoked potentials and the acoustic stapedius reflex.

    PubMed

    Kofler, B; Oberascher, G; Pommer, B

    1984-01-01

    Brain-stem auditory evoked potentials (BAEPs) and the acoustic stapedius reflex (ASR) were recorded in 68 patients with definite, probable and possible multiple sclerosis (using the definitions of McAlpine). The high incidence of abnormal results, 68% and 60%, respectively, pointed to the diagnostic value of these two measures in detecting brain-stem dysfunction. Combination of the methods increased the diagnostic yield to 85%. Since in part the same brain-stem generator sites underlie BAEPs and the ASR, it was considered that a study of their correlation might serve to increase the reliability and validity of these techniques. There was 71% agreement overall between results from the two measures. Furthermore, 72% of the joint BAEP and ASR abnormalities corresponded in detection of the brain-stem lesion site. It was concluded that the combined approach may supply powerful, complementary information on brain-stem dysfunction, which may aid in establishing the diagnosis of multiple sclerosis.

  11. Klinefelter syndrome has increased brain responses to auditory stimuli and motor output, but not to visual stimuli or Stroop adaptation

    PubMed Central

    Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders; Fedder, Jens; Laurberg, Peter; Østergaard, John R.; Hertz, Jens Michael; Pedersen, Anders Degn; Gravholt, Claus Højbjerg

    2016-01-01

    Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words “GREEN” or “RED” were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying “GREEN” or “RED” had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system. PMID:26958463

  14. Top-down controlled and bottom-up triggered orienting of auditory attention to pitch activate overlapping brain networks.

    PubMed

    Alho, Kimmo; Salmi, Juha; Koistinen, Sonja; Salonen, Oili; Rinne, Teemu

    2015-11-11

    A number of previous studies have suggested segregated networks of brain areas for top-down controlled and bottom-up triggered orienting of visual attention. However, the corresponding networks involved in auditory attention remain less studied. Our participants attended selectively to a tone stream with either a lower pitch or higher pitch in order to respond to infrequent changes in duration of attended tones. The participants were also required to shift their attention from one stream to the other when guided by a visual arrow cue. In addition to these top-down controlled cued attention shifts, infrequent task-irrelevant louder tones occurred in both streams to trigger attention in a bottom-up manner. Both cued shifts and louder tones were associated with enhanced activity in the superior temporal gyrus and sulcus, temporo-parietal junction, superior parietal lobule, inferior and middle frontal gyri, frontal eye field, supplementary motor area, and anterior cingulate gyrus. Thus, the present findings suggest that in the auditory modality, unlike in vision, top-down controlled and bottom-up triggered attention activate largely the same cortical networks. Comparison of the present results with our previous results from a similar experiment on spatial auditory attention suggests that the fronto-parietal networks of attention to location or pitch overlap substantially. However, the auditory areas in the anterior superior temporal cortex might have a more important role in attention to the pitch of sounds than to their location. This article is part of a Special Issue entitled SI: Prediction and Attention.

  15. An Evaluation of Training with an Auditory P300 Brain-Computer Interface for the Japanese Hiragana Syllabary

    PubMed Central

    Halder, Sebastian; Takano, Kouji; Ora, Hiroki; Onishi, Akinari; Utsumi, Kota; Kansaku, Kenji

    2016-01-01

    Gaze-independent brain-computer interfaces (BCIs) are a possible communication channel for persons with paralysis. We investigated if it is possible to use auditory stimuli to create a BCI for the Japanese Hiragana syllabary, which has 46 Hiragana characters. Additionally, we investigated if training has an effect on accuracy despite the high number of different stimuli involved. Able-bodied participants (N = 6) were asked to select 25 syllables (out of fifty possible choices) using a two-step procedure: first the consonant (ten choices) and then the vowel (five choices). This was repeated on 3 separate days. Additionally, a person with spinal cord injury (SCI) participated in the experiment. Four out of six healthy participants reached Hiragana syllable accuracies above 70%, and the information transfer rate increased from 1.7 bits/min in the first session to 3.2 bits/min in the third session. The accuracy of the participant with SCI increased from 12% (0.2 bits/min) to 56% (2 bits/min) in session three. Reliable selections from a 10 × 5 matrix using auditory stimuli were possible, and performance increased with training. We were able to show that auditory P300 BCIs can be used for communication with up to fifty symbols. This enables auditory P300 BCI technology to be used in a wider variety of applications. PMID:27746716
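    The bits/min figures quoted in this abstract follow the standard Wolpaw information-transfer-rate formula commonly used in BCI reporting. A minimal sketch of that formula (the conventional definition, not necessarily the authors' exact computation; the selection rate is an assumed free parameter):

```python
import math

def itr_bits_per_selection(n_classes: int, accuracy: float) -> float:
    """Wolpaw ITR in bits per selection for an n-class BCI."""
    if not 0.0 < accuracy <= 1.0:
        raise ValueError("accuracy must be in (0, 1]")
    bits = math.log2(n_classes)
    if accuracy < 1.0:
        bits += accuracy * math.log2(accuracy)
        bits += (1.0 - accuracy) * math.log2((1.0 - accuracy) / (n_classes - 1))
    return bits

def itr_bits_per_minute(n_classes: int, accuracy: float,
                        selections_per_minute: float) -> float:
    """Scale bits-per-selection by the (assumed) selection rate."""
    return itr_bits_per_selection(n_classes, accuracy) * selections_per_minute
```

    For example, 56% accuracy over 50 symbols yields roughly 2.2 bits per selection; the reported bits/min then depend on how many selections per minute the speller completes.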

  16. Brain-computer interfaces using capacitive measurement of visual or auditory steady-state responses

    NASA Astrophysics Data System (ADS)

    Baek, Hyun Jae; Kim, Hyun Seok; Heo, Jeong; Lim, Yong Gyu; Park, Kwang Suk

    2013-04-01

    Objective. Brain-computer interface (BCI) technologies have been intensely studied to provide alternative communication tools entirely independent of neuromuscular activities. Current BCI technologies use electroencephalogram (EEG) acquisition methods that require unpleasant gel injections, impractical preparations and clean-up procedures. The next generation of BCI technologies requires practical, user-friendly, nonintrusive EEG platforms in order to facilitate the application of laboratory work in real-world settings. Approach. A capacitive electrode that does not require an electrolytic gel or direct electrode-scalp contact is a potential alternative to the conventional wet electrode in future BCI systems. We have proposed a new capacitive EEG electrode that contains a conductive polymer-sensing surface, which enhances electrode performance. This paper presents results from five subjects who exhibited visual or auditory steady-state responses according to BCI using these new capacitive electrodes. The steady-state visual evoked potential (SSVEP) spelling system and the auditory steady-state response (ASSR) binary decision system were employed. Main results. Offline tests demonstrated BCI performance high enough to be used in a BCI system (accuracy: 95.2%, ITR: 19.91 bpm for SSVEP BCI (6 s), accuracy: 82.6%, ITR: 1.48 bpm for ASSR BCI (14 s)) with the analysis time being slightly longer than that when wet electrodes were employed with the same BCI system (accuracy: 91.2%, ITR: 25.79 bpm for SSVEP BCI (4 s), accuracy: 81.3%, ITR: 1.57 bpm for ASSR BCI (12 s)). Subjects performed online BCI under the SSVEP paradigm in copy spelling mode and under the ASSR paradigm in selective attention mode with a mean information transfer rate (ITR) of 17.78 ± 2.08 and 0.7 ± 0.24 bpm, respectively. Significance. The results of these experiments demonstrate the feasibility of using our capacitive EEG electrode in BCI systems. This capacitive electrode may become a flexible and
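    The SSVEP speller described above works by detecting which flicker frequency dominates the recorded EEG spectrum during the analysis window. A minimal single-channel sketch of that detection step on synthetic data (the paper's actual channel montage and classifier are not specified here, so this is an illustration of the principle only):

```python
import numpy as np

def detect_ssvep(eeg: np.ndarray, fs: float, candidates: list) -> float:
    """Return the candidate stimulation frequency with the largest
    spectral power in a single-channel EEG segment."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in candidates]
    return candidates[int(np.argmax(powers))]

fs = 256.0
t = np.arange(0, 6.0, 1.0 / fs)  # 6 s analysis window, as in the offline SSVEP test
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 12.0 * t) + 0.5 * rng.standard_normal(t.size)
print(detect_ssvep(eeg, fs, [10.0, 12.0, 15.0]))  # → 12.0
```

    The 6 s window trades accuracy against speed, which is why the wet-electrode system achieved a higher ITR with a shorter (4 s) window.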

  17. An online brain-computer interface based on shifting attention to concurrent streams of auditory stimuli

    PubMed Central

    Hill, N J; Schölkopf, B

    2012-01-01

    We report on the development and online testing of an EEG-based brain-computer interface (BCI) that aims to be usable by completely paralysed users—for whom visual or motor-system-based BCIs may not be suitable, and among whom reports of successful BCI use have so far been very rare. The current approach exploits covert shifts of attention to auditory stimuli in a dichotic-listening stimulus design. To compare the efficacy of event-related potentials (ERPs) and steady-state auditory evoked potentials (SSAEPs), the stimuli were designed such that they elicited both ERPs and SSAEPs simultaneously. Trial-by-trial feedback was provided online, based on subjects’ modulation of N1 and P3 ERP components measured during single 5-second stimulation intervals. All 13 healthy subjects were able to use the BCI, with performance in a binary left/right choice task ranging from 75% to 96% correct across subjects (mean 85%). BCI classification was based on the contrast between stimuli in the attended stream and stimuli in the unattended stream, making use of every stimulus, rather than contrasting frequent standard and rare “oddball” stimuli. SSAEPs were assessed offline: for all subjects, spectral components at the two exactly-known modulation frequencies allowed discrimination of pre-stimulus from stimulus intervals, and of left-only stimuli from right-only stimuli when one side of the dichotic stimulus pair was muted. However, attention-modulation of SSAEPs was not sufficient for single-trial BCI communication, even when the subject’s attention was clearly focused well enough to allow classification of the same trials via ERPs. ERPs clearly provided a superior basis for BCI. The ERP results are a promising step towards the development of a simple-to-use, reliable yes/no communication system for users in the most severely paralysed states, as well as potential attention-monitoring and -training applications outside the context of assistive technology. PMID:22333135

  18. Are you listening? Brain activation associated with sustained nonspatial auditory attention in the presence and absence of stimulation.

    PubMed

    Seydell-Greenwald, Anna; Greenberg, Adam S; Rauschecker, Josef P

    2014-05-01

    Neuroimaging studies investigating the voluntary (top-down) control of attention largely agree that this process recruits several frontal and parietal brain regions. Since most studies used attention tasks requiring several higher-order cognitive functions (e.g. working memory, semantic processing, temporal integration, spatial orienting) as well as different attentional mechanisms (attention shifting, distractor filtering), it is unclear what exactly the observed frontoparietal activations reflect. The present functional magnetic resonance imaging study investigated, within the same participants, signal changes in (1) a "Simple Attention" task in which participants attended to a single melody, (2) a "Selective Attention" task in which they simultaneously ignored another melody, and (3) a "Beep Monitoring" task in which participants listened in silence for a faint beep. Compared to resting conditions with identical stimulation, all tasks produced robust activation increases in auditory cortex, cross-modal inhibition in visual and somatosensory cortex, and decreases in the default mode network, indicating that participants were indeed focusing their attention on the auditory domain. However, signal increases in frontal and parietal brain areas were only observed for tasks 1 and 2, but completely absent for task 3. These results lead to the following conclusions: under most conditions, frontoparietal activations are crucial for attention since they subserve higher-order cognitive functions inherently related to attention. However, under circumstances that minimize other demands, nonspatial auditory attention in the absence of stimulation can be maintained without concurrent frontal or parietal activations. PMID:23913818

  19. Repetition suppression and repetition enhancement underlie auditory memory-trace formation in the human brain: an MEG study.

    PubMed

    Recasens, Marc; Leung, Sumie; Grimm, Sabine; Nowak, Rafal; Escera, Carles

    2015-03-01

    The formation of echoic memory traces has traditionally been inferred from the enhanced responses to its deviations. The mismatch negativity (MMN), an auditory event-related potential (ERP) elicited between 100 and 250 ms after sound deviation, is an indirect index of regularity encoding that reflects a memory-based comparison process. Recently, repetition positivity (RP) has been described as a candidate ERP correlate of direct memory trace formation. RP consists of repetition suppression and enhancement effects occurring in different auditory components between 50 and 250 ms after sound onset. However, the neuronal generators engaged in the encoding of repeated stimulus features have received little interest. This study intends to investigate the neuronal sources underlying the formation and strengthening of new memory traces by employing a roving-standard paradigm, where trains of different frequencies and different lengths are presented randomly. Source generators of repetition enhanced (RE) and suppressed (RS) activity were modeled using magnetoencephalography (MEG) in healthy subjects. Our results show that, in line with RP findings, N1m (~95-150 ms) activity is suppressed with stimulus repetition. In addition, we observed the emergence of a sustained field (~230-270 ms) that showed RE. Source analysis revealed neuronal generators of RS and RE located in both auditory and non-auditory areas, like the medial parietal cortex and frontal areas. The different timing and location of neural generators involved in RS and RE points to the existence of functionally separated mechanisms devoted to acoustic memory-trace formation in different auditory processing stages of the human brain.

  20. On the temporal window of auditory-brain system in connection with subjective responses

    NASA Astrophysics Data System (ADS)

    Mouri, Kiminori

    2003-08-01

    The human auditory-brain system processes information extracted from the autocorrelation function (ACF) of the source signal and the interaural cross-correlation function (IACF) of binaural sound signals, which are associated with the left and right cerebral hemispheres, respectively. The purpose of this dissertation is to determine the desirable temporal window (2T: integration interval) for the ACF and IACF mechanisms. For the ACF mechanism, the change of Φ(0), i.e., the power of the ACF, was associated with the change of loudness, and it is shown that the recommended temporal window is about 30(τe)min [s]. The value of (τe)min is the minimum value of the effective duration of the running ACF of the source signal. It is worth noting from the EEG experiment that the most preferred delay time of the first reflection is determined by the piece indicating (τe)min in the source signal. For the IACF mechanism, the temporal window was determined as follows: the measured range of τIACC corresponding to the subjective angle of the moving sound image depends on the temporal window. Here, the moving image was simulated with two loudspeakers located at +/-20° in the horizontal plane, reproducing amplitude-modulated band-limited noise alternately. It is found that the temporal window ranges from 0.03 to 1 [s] for modulation frequencies below 0.2 Hz. Thesis advisor: Yoichi Ando. Copies of this thesis written in English can be obtained from Kiminori Mouri, 5-3-3-1110 Harayama-dai, Sakai city, Osaka 590-0132, Japan. E-mail address: km529756@aol.com
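    The quantities in this abstract can be made concrete: τe is commonly defined (after Ando) as the delay at which the envelope of the normalized running ACF decays to 0.1, i.e. -10 dB. A crude sketch that substitutes the first crossing of |ACF| below 0.1 for a proper envelope fit (a simplification, not the dissertation's exact procedure):

```python
import numpy as np

def effective_duration(x: np.ndarray, fs: float) -> float:
    """Crude tau_e estimate: first delay (in seconds) at which the
    normalized ACF magnitude drops below 0.1 (a -10 dB criterion;
    a proper estimate fits the decay envelope of the running ACF)."""
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..N-1
    acf = acf / acf[0]                                  # phi(0) = power, normalized to 1
    below = np.nonzero(np.abs(acf) < 0.1)[0]
    return below[0] / fs if below.size else len(x) / fs

def recommended_window(tau_e_min: float) -> float:
    """2T ≈ 30 * (tau_e)_min, the temporal window suggested in the abstract."""
    return 30.0 * tau_e_min
```

    A highly random signal (e.g. white noise) has a tiny τe and thus calls for a short window, while a strongly periodic signal decays slowly and calls for a long one.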

  1. Multi-channel EEG signal feature extraction and pattern recognition on horizontal mental imagination task of 1-D cursor movement for brain computer interface.

    PubMed

    Serdar Bascil, M; Tesneli, Ahmet Y; Temurtas, Feyzullah

    2015-06-01

    Brain computer interfaces (BCIs), based on multi-channel electroencephalogram (EEG) signal processing, convert brain signal activities into machine control commands. They provide a new way of communicating with a computer by extracting electroencephalographic activity. This paper deals with feature extraction and classification of horizontal mental task patterns for 1-D cursor movement from EEG signals. The hemispheric power changes are computed and compared in the alpha and beta frequency bands, and horizontal cursor control is extracted from mental imagination of cursor movements alone. In the first stage, features are extracted with the well-known average signal power or power difference (alpha and beta) method. Principal component analysis is used to reduce the feature dimensions. All features are then classified and the mental task patterns recognized by three neural network classifiers (learning vector quantization, a multilayer neural network, and a probabilistic neural network), chosen for their acceptably good results and successful use in pattern recognition, and evaluated via the k-fold cross-validation technique.
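    The first-stage feature extraction described above (hemispheric power differences in the alpha and beta bands) can be sketched as follows. The band edges and the left/right channel pairing are assumptions for illustration, since the abstract does not specify them:

```python
import numpy as np

def band_power(x: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Average spectral power of one EEG channel within [lo, hi] Hz."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return float(spectrum[mask].mean())

def hemispheric_power_features(left: np.ndarray, right: np.ndarray,
                               fs: float) -> np.ndarray:
    """Left-minus-right power differences in the alpha and beta bands
    (band edges assumed: alpha 8-13 Hz, beta 13-30 Hz)."""
    bands = [(8.0, 13.0), (13.0, 30.0)]
    return np.array([band_power(left, fs, lo, hi) - band_power(right, fs, lo, hi)
                     for lo, hi in bands])
```

    In the paper's pipeline, such feature vectors would then be reduced with principal component analysis before being fed to the neural network classifiers.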

  2. Auditory hallucinations.

    PubMed

    Blom, Jan Dirk

    2015-01-01

    Auditory hallucinations constitute a phenomenologically rich group of endogenously mediated percepts which are associated with psychiatric, neurologic, otologic, and other medical conditions, but which are also experienced by 10-15% of all healthy individuals in the general population. The group of phenomena is probably best known for its verbal auditory subtype, but it also includes musical hallucinations, echo of reading, exploding-head syndrome, and many other types. The subgroup of verbal auditory hallucinations has been studied extensively with the aid of neuroimaging techniques, and from those studies emerges an outline of a functional as well as a structural network of widely distributed brain areas involved in their mediation. The present chapter provides an overview of the various types of auditory hallucination described in the literature, summarizes our current knowledge of the auditory networks involved in their mediation, and draws on ideas from the philosophy of science and network science to reconceptualize the auditory hallucinatory experience, and point out directions for future research into its neurobiologic substrates. In addition, it provides an overview of known associations with various clinical conditions and of the existing evidence for pharmacologic and non-pharmacologic treatments.

  3. The localization and physiological effects of cannabinoid receptor 1 (CB1) in the brain stem auditory system of the chick

    PubMed Central

    Stincic, Todd L.; Hyson, Richard L.

    2011-01-01

    Fast, temporally-precise, and consistent synaptic transmission is required to encode features of acoustic stimuli. Neurons of nucleus magnocellularis (NM) in the auditory brain stem of the chick possess numerous adaptations to optimize the coding of temporal information. One potential problem for the system is the depression of synaptic transmission during a prolonged stimulus. The present studies tested the hypothesis that cannabinoid receptor one (CB1) signaling may limit synaptic depression at the auditory nerve-NM synapse. In situ hybridization was used to confirm that CB1 mRNA is expressed in the cochlear ganglion; immunohistochemistry was used to confirm the presence of CB1 protein in NM. These findings are consistent with the common presynaptic locus of CB1 in the brain. Rate-dependent synaptic depression was then examined in a brain slice preparation before and after administration of WIN 55,212-2 (WIN), a potent CB1 agonist. WIN decreased the amplitude of excitatory postsynaptic currents and also reduced depression across a train of stimuli. The effect was most obvious late in the pulse train and during high rates of stimulation. This CB1-mediated influence could allow for lower, but more consistent activation of NM neurons, which could be of importance for optimizing the coding of prolonged, temporally-locked acoustic stimuli. PMID:21703331

  4. Sex-Specific Brain Deficits in Auditory Processing in an Animal Model of Cocaine-Related Schizophrenic Disorders

    PubMed Central

    Broderick, Patricia A.; Rosenbaum, Taylor

    2013-01-01

    Cocaine is a psychostimulant in the pharmacological class of drugs called Local Anesthetics. Interestingly, cocaine is the only drug in this class that has a chemical formula comprised of a tropane ring and is, moreover, addictive. The correlation between tropane and addiction is well-studied. Another well-studied correlation is that between psychosis induced by cocaine and that psychosis endogenously present in the schizophrenic patient. Indeed, both of these psychoses exhibit much the same behavioral as well as neurochemical properties across species. Therefore, in order to study the link between schizophrenia and cocaine addiction, we used a behavioral paradigm called Acoustic Startle. We used this acoustic startle paradigm in female versus male Sprague-Dawley animals to discriminate possible sex differences in responses to startle. The startle method operates through auditory pathways in brain via a network of sensorimotor gating processes within auditory cortex, cochlear nuclei, inferior and superior colliculi, pontine reticular nuclei, in addition to mesocorticolimbic brain reward and nigrostriatal motor circuitries. This paper is the first to report sex differences to acoustic stimuli in Sprague-Dawley animals (Rattus norvegicus) although such gender responses to acoustic startle have been reported in humans (Swerdlow et al. 1997 [1]). The startle method monitors pre-pulse inhibition (PPI) as a measure of the loss of sensorimotor gating in the brain's neuronal auditory network; auditory deficiencies can lead to sensory overload and subsequently cognitive dysfunction. Cocaine addicts and schizophrenic patients as well as cocaine treated animals are reported to exhibit symptoms of defective PPI (Geyer et al., 2001 [2]). Key findings are: (a) Cocaine significantly reduced PPI in both sexes. (b) Females were significantly more sensitive than males; reduced PPI was greater in females than in males. (c) Physiological saline had no effect on startle in either sex.

  5. Sex-specific brain deficits in auditory processing in an animal model of cocaine-related schizophrenic disorders.

    PubMed

    Broderick, Patricia A; Rosenbaum, Taylor

    2013-01-01

    Cocaine is a psychostimulant in the pharmacological class of drugs called local anesthetics. Interestingly, cocaine is the only drug in this class whose chemical formula contains a tropane ring and that is, moreover, addictive. The correlation between tropane and addiction is well-studied. Another well-studied correlation is that between psychosis induced by cocaine and the psychosis endogenously present in the schizophrenic patient. Indeed, both of these psychoses exhibit much the same behavioral and neurochemical properties across species. Therefore, in order to study the link between schizophrenia and cocaine addiction, we used a behavioral paradigm called acoustic startle. We used this acoustic startle paradigm in female versus male Sprague-Dawley animals to discriminate possible sex differences in responses to startle. The startle method operates through auditory pathways in the brain via a network of sensorimotor gating processes within auditory cortex, cochlear nuclei, inferior and superior colliculi, and pontine reticular nuclei, in addition to mesocorticolimbic brain reward and nigrostriatal motor circuitries. This paper is the first to report sex differences in responses to acoustic stimuli in Sprague-Dawley animals (Rattus norvegicus), although such sex-dependent responses to acoustic startle have been reported in humans (Swerdlow et al. 1997 [1]). The startle method monitors pre-pulse inhibition (PPI) as a measure of the loss of sensorimotor gating in the brain's neuronal auditory network; auditory deficiencies can lead to sensory overload and subsequent cognitive dysfunction. Cocaine addicts and schizophrenic patients, as well as cocaine-treated animals, are reported to exhibit symptoms of defective PPI (Geyer et al., 2001 [2]). Key findings are: (a) Cocaine significantly reduced PPI in both sexes. (b) Females were significantly more sensitive than males; reduced PPI was greater in females than in males. (c) Physiological saline had no effect on startle in either sex.
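
    The pre-pulse inhibition measure at the heart of this paradigm is conventionally expressed as a percent reduction of the startle response. A minimal sketch of that standard %PPI formula; the amplitude values below are hypothetical, not data from this study:

    ```python
    def percent_ppi(pulse_alone_amplitudes, prepulse_pulse_amplitudes):
        """Standard %PPI: percent reduction of the mean startle response when
        the startle pulse is preceded by a weak pre-pulse."""
        pulse = sum(pulse_alone_amplitudes) / len(pulse_alone_amplitudes)
        prepulse = sum(prepulse_pulse_amplitudes) / len(prepulse_pulse_amplitudes)
        return 100.0 * (1.0 - prepulse / pulse)

    # Hypothetical mean startle amplitudes (arbitrary units).
    print(percent_ppi([100, 110, 90], [40, 35, 45]))  # 60.0 -> 60% inhibition
    ```

    A reduced %PPI value, as reported here after cocaine, would correspond to the pre-pulse attenuating the startle response less than in controls.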

  6. Electrophysiological evidence for the hierarchical organization of auditory change detection in the human brain.

    PubMed

    Grimm, Sabine; Escera, Carles; Slabu, Lavinia; Costa-Faidella, Jordi

    2011-03-01

    Auditory change detection has been associated with mismatch negativity (MMN), an event-related potential (ERP) occurring at 100-250 ms after the onset of an acoustic change. Yet, single-unit recordings in animals suggest much faster novelty-specific responses in the auditory system. To investigate change detection in a corresponding early time range in humans, we measured the middle latency response (MLR) and MMN during a controlled frequency oddball paradigm. In addition to MMN, an early effect of change detection was observed at about 40 ms after change onset, reflected in an enhancement of the Nb component of the MLR. Both MMN and the Nb effect were shown to be free from confounding influences such as differences in refractoriness. This finding implies that early change detection processes exist in humans upstream of MMN generation, which supports the emerging view of a hierarchical organization of change detection expanding along multiple levels of the auditory pathway.
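
    Effects like the MMN and the Nb enhancement are derived from difference waves (deviant minus standard ERP). A toy sketch of that subtraction, using synthetic waveforms with an assumed 1 kHz sampling rate and illustrative component latencies (40 ms for Nb, 150 ms for MMN); all numbers are invented for illustration:

    ```python
    import math

    fs = 1000                           # Hz, assumed sampling rate
    t = [i / fs for i in range(300)]    # 0-300 ms epoch after change onset

    def gauss(x, mu, sigma):
        return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

    # Synthetic ERPs: the deviant adds an enhanced Nb near 40 ms and an MMN near 150 ms.
    standard = [0.5 * math.sin(2 * math.pi * 10 * x) for x in t]
    deviant = [s - 0.8 * gauss(x, 0.040, 0.005) - 1.5 * gauss(x, 0.150, 0.020)
               for s, x in zip(standard, t)]

    # Overlapping obligatory activity cancels in the difference wave,
    # leaving only the change-related components.
    difference = [d - s for d, s in zip(deviant, standard)]
    nb_peak = min(difference[30:51])     # 30-50 ms window (MLR range)
    mmn_peak = min(difference[100:251])  # 100-250 ms window (MMN range)
    print(round(nb_peak, 2), round(mmn_peak, 2))  # -0.8 -1.5
    ```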

  7. Towards user-friendly spelling with an auditory brain-computer interface: the CharStreamer paradigm.

    PubMed

    Höhne, Johannes; Tangermann, Michael

    2014-01-01

    By decoding brain signals into control commands, brain-computer interfaces (BCI) aim to establish an alternative communication pathway for locked-in patients. In contrast to most visual BCI approaches, which use event-related potentials (ERP) of the electroencephalogram, auditory BCI systems must work with ERP responses that are less class-discriminant between attended and unattended stimuli. Furthermore, these auditory approaches have more complex interfaces, which impose a substantial workload on their users. Aiming for a maximally user-friendly spelling interface, this study introduces a novel auditory paradigm: "CharStreamer". The speller can be used with an instruction as simple as "please attend to what you want to spell". The stimuli of CharStreamer comprise 30 spoken sounds of letters and actions. As each of them is represented by the sound of itself and not by an artificial substitute, it can be selected in a one-step procedure. The mental mapping effort (sound stimuli to actions) is thus minimized. Usability is further accounted for by an alphabetical stimulus presentation: contrary to random presentation orders, the user can foresee the presentation time of the target letter sound. Healthy, normal-hearing users (n = 10) of the CharStreamer paradigm displayed ERP responses that systematically differed between target and non-target sounds. Class-discriminant features, however, varied individually from the typical N1-P2 complex and P3 ERP components found in control conditions with random sequences. To fully exploit the sequential presentation structure of CharStreamer, novel data analysis approaches and classification methods were introduced. The results of online spelling tests showed that a competitive spelling speed can be achieved with CharStreamer. With respect to user rating, it clearly outperforms a control setup with random presentation sequences.

  8. Plasticity in the neural coding of auditory space in the mammalian brain

    PubMed Central

    King, Andrew J.; Parsons, Carl H.; Moore, David R.

    2000-01-01

    Sound localization relies on the neural processing of monaural and binaural spatial cues that arise from the way sounds interact with the head and external ears. Neurophysiological studies of animals raised with abnormal sensory inputs show that the map of auditory space in the superior colliculus is shaped during development by both auditory and visual experience. An example of this plasticity is provided by monaural occlusion during infancy, which leads to compensatory changes in auditory spatial tuning that tend to preserve the alignment between the neural representations of visual and auditory space. Adaptive changes also take place in sound localization behavior, as demonstrated by the fact that ferrets raised and tested with one ear plugged learn to localize as accurately as control animals. In both cases, these adjustments may involve greater use of monaural spectral cues provided by the other ear. Although plasticity in the auditory space map seems to be restricted to development, adult ferrets show some recovery of sound localization behavior after long-term monaural occlusion. The capacity for behavioral adaptation is, however, task dependent, because auditory spatial acuity and binaural unmasking (a measure of the spatial contribution to the “cocktail party effect”) are permanently impaired by chronically plugging one ear, whether in infancy or, especially, in adulthood. Experience-induced plasticity allows the neural circuitry underlying sound localization to be customized to individual characteristics, such as the size and shape of the head and ears, and to compensate for natural conductive hearing losses, including those associated with middle ear disease in infancy. PMID:11050215
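
    One of the binaural cues mentioned here is the interaural time difference (ITD). A common textbook approximation is Woodworth's spherical-head formula; the sketch below assumes a human-like head radius of 0.0875 m purely for illustration, not the ferret geometry studied in this work:

    ```python
    import math

    def woodworth_itd(azimuth_deg, head_radius=0.0875, c=343.0):
        """Woodworth's spherical-head approximation of the interaural time
        difference (seconds) for a distant source at the given azimuth:
        ITD = (r / c) * (theta + sin(theta))."""
        theta = math.radians(azimuth_deg)
        return (head_radius / c) * (theta + math.sin(theta))

    # ITD grows from zero straight ahead to a maximum at full lateral position.
    print(round(woodworth_itd(90) * 1e6))  # ~656 microseconds
    ```

    A smaller head radius, as in the ferret, shrinks the available ITD range, which is one reason localization circuitry must be calibrated to individual head and ear geometry.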

  9. The Application of the International Classification of Functioning, Disability and Health to Functional Auditory Consequences of Mild Traumatic Brain Injury.

    PubMed

    Werff, Kathy R Vander

    2016-08-01

    This article reviews the auditory consequences of mild traumatic brain injury (mTBI) within the context of the International Classification of Functioning, Disability and Health (ICF). Because of growing awareness of mTBI as a public health concern and the diverse and heterogeneous nature of the individual consequences, it is important to provide audiologists and other health care providers with a better understanding of potential implications in the assessment of levels of function and disability for individual interdisciplinary remediation planning. In consideration of body structures and function, the mechanisms of injury that may result in peripheral or central auditory dysfunction in mTBI are reviewed, along with a broader scope of effects of injury to the brain. The activity limitations and participation restrictions that may affect assessment and management in the context of an individual's personal factors and their environment are considered. Finally, a review of management strategies for mTBI from an audiological perspective as part of a multidisciplinary team is included. PMID:27489400

  10. The functional foetal brain: A systematic preview of methodological factors in reporting foetal visual and auditory capacity.

    PubMed

    Dunn, Kirsty; Reissland, Nadja; Reid, Vincent M

    2015-06-01

    Due to technological advancements in functional brain imaging, the study of foetal brain responses to visual and auditory stimuli is a growing, though still relatively small, area of research with much variation between laboratories. A number of inconsistencies between studies are, nonetheless, present in the literature. This article aims to explore the potential contribution of methodological factors to variation in reports of foetal neural responses to external stimuli. Some of the variation in reports can be explained by methodological differences in aspects of study design, such as the brightness and wavelength of the light source. In contrast to visual foetal processing, auditory foetal processing has been more frequently investigated, and findings are more consistent between studies. This is an early preview of an emerging field, with many articles reporting small sample sizes using techniques that are yet to be replicated. We suggest areas for improvement for the field as a whole, such as the standardisation of stimulus delivery and more detailed reporting of methods and results. This will improve our understanding of foetal functional responses to light and sound. We suggest that enhanced technology will allow for a more reliable description of the developmental trajectory of foetal processing of light stimuli. PMID:25967364

  11. Characteristics of Auditory Agnosia in a Child with Severe Traumatic Brain Injury: A Case Report

    ERIC Educational Resources Information Center

    Hattiangadi, Nina; Pillion, Joseph P.; Slomine, Beth; Christensen, James; Trovato, Melissa K.; Speedie, Lynn J.

    2005-01-01

    We present a case that is unusual in many respects compared with other documented cases of auditory agnosia, including the mechanism of injury, the age of the individual, and the location of neurological insult. The clinical presentation is one of disturbance in the perception of spoken language, music, pitch, emotional prosody, and temporal auditory…

  12. MULTICHANNEL ANALYZER

    DOEpatents

    Kelley, G.G.

    1959-11-10

    A multichannel pulse analyzer is described in which several window amplifiers each serve one group of channels, while a single fast pulse-lengthener and a single novel interrogation circuit serve all channels. A pulse followed too closely in time by another pulse is disregarded by the interrogation circuit, preventing errors due to pulse pile-up. The window amplifiers are connected to the pulse-lengthener output, rather than the linear amplifier output, and therefore need not have the fast response characteristics formerly required.
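
    The channel-sorting and pile-up rejection logic of such an analyzer can be sketched in software. All parameters below (channel count, full-scale height, dead time, and the pulse train itself) are hypothetical illustrations, not values from the patent:

    ```python
    def analyze_pulses(pulses, n_channels=64, max_height=10.0, dead_time=5.0):
        """Sort pulse heights into channels, disregarding any pulse followed
        too closely in time by another (pile-up rejection, analogous to the
        patent's interrogation circuit). `pulses` is a time-ordered list of
        (time, height) pairs."""
        counts = [0] * n_channels
        for i, (t, h) in enumerate(pulses):
            # Reject a pulse if the next pulse arrives within the dead time.
            if i + 1 < len(pulses) and pulses[i + 1][0] - t < dead_time:
                continue
            channel = min(int(h / max_height * n_channels), n_channels - 1)
            counts[channel] += 1
        return counts

    # Hypothetical pulse train (time in microseconds, height in volts): the
    # pulse at t=20 is followed too closely by t=23 and is disregarded.
    hist = analyze_pulses([(0.0, 1.0), (20.0, 4.0), (23.0, 4.2), (40.0, 9.9)])
    print(sum(hist))  # 3 pulses registered
    ```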

  13. BDNF in Lower Brain Parts Modifies Auditory Fiber Activity to Gain Fidelity but Increases the Risk for Generation of Central Noise After Injury.

    PubMed

    Chumak, Tetyana; Rüttiger, Lukas; Lee, Sze Chim; Campanelli, Dario; Zuccotti, Annalisa; Singer, Wibke; Popelář, Jiří; Gutsche, Katja; Geisler, Hyun-Soon; Schraven, Sebastian Philipp; Jaumann, Mirko; Panford-Walsh, Rama; Hu, Jing; Schimmang, Thomas; Zimmermann, Ulrike; Syka, Josef; Knipper, Marlies

    2016-10-01

    For all sensory organs, the establishment of spatial and temporal cortical resolution is assumed to be initiated by the first sensory experience and a BDNF-dependent increase in intracortical inhibition. To address the potential of cortical BDNF for sound processing, we used mice with a conditional deletion of BDNF in which Cre expression was under the control of the Pax2 or TrkC promoter. BDNF deletion profiles between these mice differ in the organ of Corti (BDNF(Pax2)-KO) versus the auditory cortex and hippocampus (BDNF(TrkC)-KO). We demonstrate that BDNF(Pax2)-KO but not BDNF(TrkC)-KO mice exhibit reduced sound-evoked suprathreshold ABR waves at the level of the auditory nerve (wave I) and inferior colliculus (IC) (wave IV), indicating that BDNF in lower brain regions but not in the auditory cortex improves sound sensitivity during hearing onset. Extracellular recording of IC neurons of BDNF(Pax2) mutant mice revealed that the reduced sensitivity of auditory fibers in these mice went hand in hand with elevated thresholds, reduced dynamic range, prolonged latency, and increased inhibitory strength in IC neurons. Reduced parvalbumin-positive contacts were found in the ascending auditory circuit, including the auditory cortex and hippocampus of BDNF(Pax2)-KO, but not of BDNF(TrkC)-KO mice. Also, BDNF(Pax2)-WT but not BDNF(Pax2)-KO mice lost basal inhibitory strength in IC neurons after acoustic trauma. These findings suggest that BDNF in the lower parts of the auditory system drives auditory fidelity along the entire ascending pathway up to the cortex by increasing inhibitory strength in behaviorally relevant frequency regions. Fidelity and inhibitory strength can be lost following auditory nerve injury, leading to diminished sensory outcome and increased central noise.

  15. Far-field brainstem responses evoked by vestibular and auditory stimuli exhibit increases in interpeak latency as brain temperature is decreased

    NASA Technical Reports Server (NTRS)

    Hoffman, L. F.; Horowitz, J. M.

    1984-01-01

    The effect of decreasing brain temperature on the brainstem auditory evoked response (BAER) in rats was investigated. Voltage pulses applied to a piezoelectric crystal attached to the skull were used to stimulate the auditory system by means of bone-conducted vibrations. The responses were recorded at brain temperatures of 37 C and 34 C. The peaks of the BAER recorded at 34 C were delayed relative to the peaks of the 37 C wave, and the later peaks were more delayed than the earlier peaks. These results indicate that interpeak latency increases as brain temperature is decreased. Preliminary experiments, in which responses to brief angular acceleration were used to measure the brainstem vestibular evoked response (BVER), have also indicated increases in interpeak latency with lowered brain temperature.
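
    Interpeak latency is simply the interval between successive evoked-response peaks. A minimal sketch with hypothetical peak latencies chosen to mimic the reported pattern (cooling delays later peaks more than earlier ones, so every interpeak interval lengthens); these are not measured values from the study:

    ```python
    def interpeak_latencies(peak_times_ms):
        """Intervals between successive evoked-response peaks (ms)."""
        return [round(b - a, 2) for a, b in zip(peak_times_ms, peak_times_ms[1:])]

    # Hypothetical BAER peak latencies in ms at two brain temperatures.
    warm = [1.2, 2.1, 3.0, 4.1]   # 37 C
    cool = [1.3, 2.3, 3.4, 4.7]   # 34 C: later peaks delayed progressively more
    print(interpeak_latencies(warm))  # [0.9, 0.9, 1.1]
    print(interpeak_latencies(cool))  # [1.0, 1.1, 1.3]
    ```

    Because conduction slows throughout the pathway when temperature drops, each successive interval grows, which is exactly the interpeak-latency increase the abstract describes.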

  16. A trade-off between somatosensory and auditory related brain activity during object naming but not reading.

    PubMed

    Seghier, Mohamed L; Hope, Thomas M H; Prejawa, Susan; Parker Jones, 'Ōiwi; Vitkovitch, Melanie; Price, Cathy J

    2015-03-18

    The parietal operculum, particularly the cytoarchitectonic area OP1 of the secondary somatosensory area (SII), is involved in somatosensory feedback. Using fMRI with 58 human subjects, we investigated task-dependent differences in SII/OP1 activity during three familiar speech production tasks: object naming, reading and repeatedly saying "1-2-3." Bilateral SII/OP1 was significantly suppressed (relative to rest) during object naming, to a lesser extent when repeatedly saying "1-2-3" and not at all during reading. These results cannot be explained by task difficulty but the contrasting difference between naming and reading illustrates how the demands on somatosensory activity change with task, even when motor output (i.e., production of object names) is matched. To investigate what determined SII/OP1 deactivation during object naming, we searched the whole brain for areas where activity increased as that in SII/OP1 decreased. This across subject covariance analysis revealed a region in the right superior temporal sulcus (STS) that lies within the auditory cortex, and is activated by auditory feedback during speech production. The tradeoff between activity in SII/OP1 and STS was not observed during reading, which showed significantly more activation than naming in both SII/OP1 and STS bilaterally. These findings suggest that, although object naming is more error prone than reading, subjects can afford to rely more or less on somatosensory or auditory feedback during naming. In contrast, fast and efficient error-free reading places more consistent demands on both types of feedback, perhaps because of the potential for increased competition between lexical and sublexical codes at the articulatory level. PMID:25788691

  17. A Trade-Off between Somatosensory and Auditory Related Brain Activity during Object Naming But Not Reading

    PubMed Central

    Seghier, Mohamed L.; Hope, Thomas M.H.; Prejawa, Susan; Parker Jones, ‘Ōiwi; Vitkovitch, Melanie; Price, Cathy J.

    2015-01-01

    The parietal operculum, particularly the cytoarchitectonic area OP1 of the secondary somatosensory area (SII), is involved in somatosensory feedback. Using fMRI with 58 human subjects, we investigated task-dependent differences in SII/OP1 activity during three familiar speech production tasks: object naming, reading and repeatedly saying “1-2-3.” Bilateral SII/OP1 was significantly suppressed (relative to rest) during object naming, to a lesser extent when repeatedly saying “1-2-3” and not at all during reading. These results cannot be explained by task difficulty but the contrasting difference between naming and reading illustrates how the demands on somatosensory activity change with task, even when motor output (i.e., production of object names) is matched. To investigate what determined SII/OP1 deactivation during object naming, we searched the whole brain for areas where activity increased as that in SII/OP1 decreased. This across subject covariance analysis revealed a region in the right superior temporal sulcus (STS) that lies within the auditory cortex, and is activated by auditory feedback during speech production. The tradeoff between activity in SII/OP1 and STS was not observed during reading, which showed significantly more activation than naming in both SII/OP1 and STS bilaterally. These findings suggest that, although object naming is more error prone than reading, subjects can afford to rely more or less on somatosensory or auditory feedback during naming. In contrast, fast and efficient error-free reading places more consistent demands on both types of feedback, perhaps because of the potential for increased competition between lexical and sublexical codes at the articulatory level. PMID:25788691

  19. Suppression and facilitation of auditory neurons through coordinated acoustic and midbrain stimulation: investigating a deep brain stimulator for tinnitus

    NASA Astrophysics Data System (ADS)

    Offutt, Sarah J.; Ryan, Kellie J.; Konop, Alexander E.; Lim, Hubert H.

    2014-12-01

    Objective. The inferior colliculus (IC) is the primary processing center of auditory information in the midbrain and is one site of tinnitus-related activity. One potential option for suppressing the tinnitus percept is through deep brain stimulation via the auditory midbrain implant (AMI), which is designed for hearing restoration and is already being implanted in deaf patients who also have tinnitus. However, to assess the feasibility of AMI stimulation for tinnitus treatment we first need to characterize the functional connectivity within the IC. Previous studies have suggested modulatory projections from the dorsal cortex of the IC (ICD) to the central nucleus of the IC (ICC), though the functional properties of these projections need to be determined. Approach. In this study, we investigated the effects of electrical stimulation of the ICD on acoustic-driven activity within the ICC in ketamine-anesthetized guinea pigs. Main Results. We observed that ICD stimulation induces both suppressive and facilitatory changes across the ICC that can occur immediately during stimulation and remain after stimulation. Additionally, ICD stimulation paired with broadband noise stimulation at a specific delay can induce greater suppressive than facilitatory effects, especially when stimulating in more rostral and medial ICD locations. Significance. These findings demonstrate that ICD stimulation can induce specific types of plastic changes in ICC activity, which may be relevant for treating tinnitus. By using the AMI with electrode sites positioned within the ICD and the ICC, the modulatory effects of ICD stimulation can be tested directly in tinnitus patients.

  20. Processing of species-specific auditory patterns in the cricket brain by ascending, local, and descending neurons during standing and walking.

    PubMed

    Zorović, M; Hedwig, B

    2011-05-01

    The recognition of the male calling song is essential for phonotaxis in female crickets. We investigated the responses toward different models of song patterns by ascending, local, and descending neurons in the brain of standing and walking crickets. We describe results for two ascending, three local, and two descending interneurons. Characteristic dendritic and axonal arborizations of the local and descending neurons indicate a flow of auditory information from the ascending interneurons toward the lateral accessory lobes and point toward the relevance of this brain region for cricket phonotaxis. Two aspects of auditory processing were studied: the tuning of interneuron activity to pulse repetition rate and the precision of pattern copying. Whereas ascending neurons exhibited weak, low-pass properties, local neurons showed both low- and band-pass properties, and descending neurons represented clear band-pass filters. Accurate copying of single pulses was found at all three levels of the auditory pathway. Animals were walking on a trackball, which allowed an assessment of the effect that walking has on auditory processing. During walking, all neurons were additionally activated, and in most neurons, the spike rate was correlated to walking velocity. The number of spikes elicited by a chirp increased with walking only in ascending neurons, whereas the peak instantaneous spike rate of the auditory responses increased on all levels of the processing pathway. Extra spiking activity resulted in a somewhat degraded copying of the pulse pattern in most neurons. PMID:21346206

  1. Processing of species-specific auditory patterns in the cricket brain by ascending, local, and descending neurons during standing and walking

    PubMed Central

    Zorović, M.

    2011-01-01

    The recognition of the male calling song is essential for phonotaxis in female crickets. We investigated the responses toward different models of song patterns by ascending, local, and descending neurons in the brain of standing and walking crickets. We describe results for two ascending, three local, and two descending interneurons. Characteristic dendritic and axonal arborizations of the local and descending neurons indicate a flow of auditory information from the ascending interneurons toward the lateral accessory lobes and point toward the relevance of this brain region for cricket phonotaxis. Two aspects of auditory processing were studied: the tuning of interneuron activity to pulse repetition rate and the precision of pattern copying. Whereas ascending neurons exhibited weak, low-pass properties, local neurons showed both low- and band-pass properties, and descending neurons represented clear band-pass filters. Accurate copying of single pulses was found at all three levels of the auditory pathway. Animals were walking on a trackball, which allowed an assessment of the effect that walking has on auditory processing. During walking, all neurons were additionally activated, and in most neurons, the spike rate was correlated to walking velocity. The number of spikes elicited by a chirp increased with walking only in ascending neurons, whereas the peak instantaneous spike rate of the auditory responses increased on all levels of the processing pathway. Extra spiking activity resulted in a somewhat degraded copying of the pulse pattern in most neurons. PMID:21346206

  3. The effect of stimulus repetition rate on the diagnostic efficacy of the auditory nerve-brain-stem evoked response.

    PubMed

    Freeman, S; Sohmer, H; Silver, S

    1991-04-01

    This study investigates the hypothesis that an increase in the click presentation rate during diagnostic testing with the auditory nerve-brain-stem response (ABR) will increase the efficiency with which lesions may be detected in the nervous system. Cats were exposed to conditions of hypoxia, hypercapnia and acidemia, and hypoglycemia was induced in rats. The ABR was recorded using the standard 10/sec click rate and also a higher (55/sec) rate during both the control state and the experimental state. Various parameters of the ABR were compared at the two click rates in the control and experimental states to see if the higher click rate was more effective in detecting pathology in the nervous system. It was found that in only a very few cases was the higher stimulus presentation rate more effective, and that, in general, recording the ABR at a single stimulus rate is quite sufficient for work in a clinical setting. PMID:1706249

  4. Dilemmas in auditory assessment of developmentally retarded children using behavioural observation audiometry and brain stem evoked response audiometry.

    PubMed

    Rupa, V

    1995-07-01

    The records of 94 consecutive developmentally retarded children with speech retardation and suspected hearing loss who underwent auditory assessment by both conventional behavioural observation audiometry (BOA) and brain stem evoked response audiometry (BERA) were analysed. In 54 children (57.4 per cent) there was good agreement between the results of both techniques, leading to a clear-cut diagnosis. In 22 children a diagnosis was possible only from the results of BERA, as the results of BOA were inconclusive. Of the remaining 18 children, two groups could be identified whose results posed a dilemma. Group 1 (n = 7) consisted of children whose BOA test results differed considerably from their BERA results. Group 2 (n = 11) consisted of children in whom there was no discernible response by BERA while the response by BOA was either inconsistent (n = 5) or not elicitable (n = 6). The specific strategies to be adopted for hearing assessment in these situations are discussed.

  5. Intraoperative and postoperative electrically evoked auditory brain stem responses in nucleus cochlear implant users: implications for the fitting process.

    PubMed

    Brown, C J; Abbas, P J; Fryauf-Bertschy, H; Kelsay, D; Gantz, B J

    1994-04-01

    Electrically evoked auditory brain stem responses (EABR) were measured in 12 adults and 14 children with the Nucleus cochlear implant. Measures were made both intraoperatively and several months following surgery. EABR thresholds were consistently greater than clinically determined measures of behavioral threshold (T-level) but less than maximum comfort levels (C-level). When the data were pooled across subjects and different stimulating electrodes, EABR thresholds were strongly correlated with both T- and C-levels. In subjects where both intraoperative and postimplant EABR measures were obtained, intraoperative EABR thresholds were consistently higher than postimplant thresholds. The electrophysiologic data have been incorporated into a practical procedure for programming the implant in young children.
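
    Because EABR thresholds correlated strongly with both T- and C-levels, a least-squares fit can turn an intraoperative EABR threshold into a starting estimate for programming. The sketch below only illustrates that idea: the function name, the clinical-unit values, and the fixed 20-unit offset are hypothetical, not taken from the study.

```python
def fit_level_predictor(eabr_thresholds, behavioral_levels):
    """Ordinary least-squares line mapping an electrode's EABR threshold
    to a behavioral level; returns (slope, intercept)."""
    n = len(eabr_thresholds)
    mx = sum(eabr_thresholds) / n
    my = sum(behavioral_levels) / n
    sxx = sum((x - mx) ** 2 for x in eabr_thresholds)
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(eabr_thresholds, behavioral_levels))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical stimulation levels (clinical units), one per electrode.
eabr = [150.0, 160.0, 170.0, 180.0, 190.0]
t_levels = [x - 20.0 for x in eabr]       # T-levels sit below EABR threshold
slope, intercept = fit_level_predictor(eabr, t_levels)
predicted_t = slope * 175.0 + intercept   # estimated T-level, new electrode
```

    In practice such a predictor would be fit per patient (or pooled, as in the study) and used only as an initial map to be refined behaviorally.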

  6. Long-range correlation properties in timing of skilled piano performance: the influence of auditory feedback and deep brain stimulation

    PubMed Central

    Herrojo Ruiz, María; Hong, Sang Bin; Hennig, Holger; Altenmüller, Eckart; Kühn, Andrea A.

    2014-01-01

    Unintentional timing deviations during musical performance can be conceived of as timing errors. However, recent research on humanizing computer-generated music has demonstrated that timing fluctuations that exhibit long-range temporal correlations (LRTC) are preferred by human listeners. This preference can be accounted for by the ubiquitous presence of LRTC in human tapping and rhythmic performances. Interestingly, the manifestation of LRTC in tapping behavior seems to be driven in a subject-specific manner by the LRTC properties of resting-state background cortical oscillatory activity. In this framework, the current study aimed to investigate whether propagation of timing deviations during the skilled, memorized piano performance (without metronome) of 17 professional pianists exhibits LRTC and whether the structure of the correlations is influenced by the presence or absence of auditory feedback. As an additional goal, we set out to investigate the influence of altering the dynamics along the cortico-basal-ganglia-thalamo-cortical network via deep brain stimulation (DBS) on the LRTC properties of musical performance. Specifically, we investigated temporal deviations during the skilled piano performance of a non-professional pianist who was treated with subthalamic-deep brain stimulation (STN-DBS) due to severe Parkinson's disease, with predominant tremor affecting his right upper extremity. In the tremor-affected right hand, the timing fluctuations of the performance exhibited random correlations with DBS OFF. By contrast, DBS restored long-range dependency in the temporal fluctuations, corresponding with the general motor improvement on DBS. Overall, the present investigations demonstrate the presence of LRTC in skilled piano performances, indicating that unintentional temporal deviations are correlated over a wide range of time scales. This phenomenon is stable after removal of the auditory feedback, but is altered by STN-DBS, which suggests that cortico
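
    The LRTC measure at the heart of this study is usually estimated with detrended fluctuation analysis (DFA), whose scaling exponent alpha distinguishes uncorrelated fluctuations (alpha near 0.5) from long-range dependency (alpha > 0.5). The abstract does not give the authors' implementation, so the following is a generic, minimal DFA sketch; the window scales and the white-noise test series are illustrative.

```python
import numpy as np

def dfa_alpha(series, scales):
    """Detrended fluctuation analysis. Returns the scaling exponent
    alpha: ~0.5 for uncorrelated noise, >0.5 for long-range temporal
    correlations (LRTC)."""
    profile = np.cumsum(series - np.mean(series))   # integrated profile
    fluctuations = []
    for n in scales:
        n_windows = len(profile) // n
        windows = profile[:n_windows * n].reshape(n_windows, n)
        t = np.arange(n)
        # Linearly detrend each window; collect mean squared residuals.
        sq_resid = []
        for w in windows:
            coef = np.polyfit(t, w, 1)
            sq_resid.append(np.mean((w - np.polyval(coef, t)) ** 2))
        fluctuations.append(np.sqrt(np.mean(sq_resid)))
    alpha, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
    return alpha

# White noise has no LRTC, so alpha should come out near 0.5.
rng = np.random.default_rng(0)
alpha_white = dfa_alpha(rng.standard_normal(2048), [8, 16, 32, 64, 128])
```

    Applied to a series of inter-onset intervals from a performance, an alpha well above 0.5 would indicate the long-range dependency the study reports.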

  8. Potassium conductance dynamics confer robust spike-time precision in a neuromorphic model of the auditory brain stem

    PubMed Central

    Boahen, Kwabena

    2013-01-01

    A fundamental question in neuroscience is how neurons perform precise operations despite inherent variability. This question also applies to neuromorphic engineering, where low-power microchips emulate the brain using large populations of diverse silicon neurons. Biological neurons in the auditory pathway display precise spike timing, critical for sound localization and interpretation of complex waveforms such as speech, even though they are a heterogeneous population. Silicon neurons are also heterogeneous, due to a key design constraint in neuromorphic engineering: smaller transistors offer lower power consumption and more neurons per unit area of silicon, but also more variability between transistors and thus between silicon neurons. Utilizing this variability in a neuromorphic model of the auditory brain stem with 1,080 silicon neurons, we found that a low-voltage-activated potassium conductance (gKL) enables precise spike timing via two mechanisms: statically reducing the resting membrane time constant and dynamically suppressing late synaptic inputs. The relative contribution of these two mechanisms is unknown because blocking gKL in vitro eliminates dynamic adaptation but also lengthens the membrane time constant. We replaced gKL with a static leak in silico to recover the short membrane time constant and found that silicon neurons could mimic the spike-time precision of their biological counterparts, but only over a narrow range of stimulus intensities and biophysical parameters. The dynamics of gKL were required for precise spike timing robust to stimulus variation across a heterogeneous population of silicon neurons, thus explaining how neural and neuromorphic systems may perform precise operations despite inherent variability. PMID:23554436
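
    Of the two mechanisms named above, the static one is plain circuit arithmetic: a resting gKL adds to the leak conductance, so the membrane time constant tau = C / g_total shortens. A worked example with hypothetical (order-of-magnitude plausible) values, not parameters from the chip:

```python
# Static effect of a resting low-voltage-activated K+ conductance (gKL)
# on the membrane time constant. All values are illustrative.
C = 20e-12        # membrane capacitance (farads)
g_leak = 2e-9     # leak conductance (siemens)
g_KL = 8e-9       # resting gKL (siemens)

tau_without_gKL = C / g_leak            # 10 ms
tau_with_gKL = C / (g_leak + g_KL)      # 2 ms: a 5x faster membrane
```

    The abstract's point is that this static speed-up alone was not enough; the dynamic suppression of late synaptic inputs by gKL was required for precision robust to stimulus variation.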

  10. “Where Do Auditory Hallucinations Come From?”—A Brain Morphometry Study of Schizophrenia Patients With Inner or Outer Space Hallucinations

    PubMed Central

    Plaze, Marion; Paillère-Martinot, Marie-Laure; Penttilä, Jani; Januel, Dominique; de Beaurepaire, Renaud; Bellivier, Franck; Andoh, Jamila; Galinowski, André; Gallarda, Thierry; Artiges, Eric; Olié, Jean-Pierre; Mangin, Jean-François; Martinot, Jean-Luc

    2011-01-01

    Auditory verbal hallucinations are a cardinal symptom of schizophrenia. Bleuler and Kraepelin distinguished 2 main classes of hallucinations: hallucinations heard outside the head (outer space, or external, hallucinations) and hallucinations heard inside the head (inner space, or internal, hallucinations). This distinction has been confirmed by recent phenomenological studies that identified 3 independent dimensions in auditory hallucinations: language complexity, self-other misattribution, and spatial location. Brain imaging studies in schizophrenia patients with auditory hallucinations have already investigated language complexity and self-other misattribution, but the neural substrate of hallucination spatial location remains unknown. Magnetic resonance images of 45 right-handed patients with schizophrenia and persistent auditory hallucinations and 20 healthy right-handed subjects were acquired. Two homogeneous subgroups of patients were defined based on the hallucination spatial location: patients with only outer space hallucinations (N = 12) and patients with only inner space hallucinations (N = 15). Between-group differences were then assessed using 2 complementary brain morphometry approaches: voxel-based morphometry and sulcus-based morphometry. Convergent anatomical differences were detected between the patient subgroups in the right temporoparietal junction (rTPJ). In comparison to healthy subjects, opposite deviations in white matter volumes and sulcus displacements were found in patients with inner space hallucination and patients with outer space hallucination. The current results indicate that spatial location of auditory hallucinations is associated with the rTPJ anatomy, a key region of the “where” auditory pathway. The detected tilt in the sulcal junction suggests deviations during early brain maturation, when the superior temporal sulcus and its anterior terminal branch appear and merge. PMID:19666833

  11. Brain Networks of Novelty-Driven Involuntary and Cued Voluntary Auditory Attention Shifting

    PubMed Central

    Huang, Samantha; Belliveau, John W.; Tengshe, Chinmayi; Ahveninen, Jyrki

    2012-01-01

    In everyday life, we need a capacity to flexibly shift attention between alternative sound sources. However, relatively little work has been done to elucidate the mechanisms of attention shifting in the auditory domain. Here, we used a mixed event-related/sparse-sampling fMRI approach to investigate this essential cognitive function. In each 10-sec trial, subjects were instructed to wait for an auditory “cue” signaling the location where a subsequent “target” sound was likely to be presented. The target was occasionally replaced by an unexpected “novel” sound in the uncued ear, to trigger involuntary attention shifting. To maximize the attention effects, cues, targets, and novels were embedded within dichotic 800-Hz vs. 1500-Hz pure-tone “standard” trains. The sound of clustered fMRI acquisition (starting at t = 7.82 sec) served as a controlled trial-end signal. Our approach revealed notable activation differences between the conditions. Cued voluntary attention shifting activated the superior intraparietal sulcus (IPS), whereas novelty-triggered involuntary orienting activated the inferior IPS and certain subareas of the precuneus. Clearly more widespread activations were observed during voluntary than involuntary orienting in the premotor cortex, including the frontal eye fields. Moreover, we found evidence for a frontoinsular-cingular attentional control network, consisting of the anterior insula, inferior frontal cortex, and medial frontal cortices, which were activated during both target discrimination and voluntary attention shifting. Finally, novels and targets activated much wider areas of superior temporal auditory cortices than shifting cues. PMID:22937153

  12. When the brain plays music: auditory-motor interactions in music perception and production.

    PubMed

    Zatorre, Robert J; Chen, Joyce L; Penhune, Virginia B

    2007-07-01

    Music performance is both a natural human activity, present in all societies, and one of the most complex and demanding cognitive challenges that the human mind can undertake. Unlike most other sensory-motor activities, music performance requires precise timing of several hierarchically organized actions, as well as precise control over pitch interval production, implemented through diverse effectors according to the instrument involved. We review the cognitive neuroscience literature of both motor and auditory domains, highlighting the value of studying interactions between these systems in a musical context, and propose some ideas concerning the role of the premotor cortex in integration of higher order features of music with appropriately timed and organized actions.

  13. Hearing silences: human auditory processing relies on preactivation of sound-specific brain activity patterns.

    PubMed

    SanMiguel, Iria; Widmann, Andreas; Bendixen, Alexandra; Trujillo-Barreto, Nelson; Schröger, Erich

    2013-05-15

    The remarkable capabilities displayed by humans in making sense of an overwhelming amount of sensory information cannot be explained easily if perception is viewed as a passive process. Current theoretical and computational models assume that to achieve meaningful and coherent perception, the human brain must anticipate upcoming stimulation. But how are upcoming stimuli predicted in the brain? We unmasked the neural representation of a prediction by omitting the predicted sensory input. Electrophysiological brain signals showed that when a clear prediction can be formulated, the brain activates a template of its response to the predicted stimulus before it arrives at our senses.

  14. [Topography of the Event-Related Brain Responses during Discrimination of Auditory Motion in Humans].

    PubMed

    Shestopalova, L B; Petropavlovskaia, E A; Vaitulevich, S Ph; Nikitin, N I

    2015-01-01

    The present study investigates the hemispheric asymmetry of auditory event-related potentials (ERPs) and mismatch negativity (MMN) during passive discrimination of moving sound stimuli presented according to the oddball paradigm. Sound movement to the left/right of the head midline was produced by linear changes of the interaural time delay (ITD). It was found that the right-hemispheric N1 and P2 responses were more prominent than the left-hemispheric ones, especially in the fronto-lateral region. In contrast, N250 and MMN responses demonstrated contralateral dominance in the fronto-lateral and fronto-medial regions. Direction of sound motion had no significant effect on the ERP or MMN topography. The right-hemispheric asymmetry of N1 increased with sound velocity. Maximal asymmetry of P2 was obtained with short stimulus trajectories. The contralateral bias of N250 and MMN increased with the spatial difference between standard and deviant stimuli. The results showed different types of hemispheric asymmetry for the early and late ERP components, which could reflect the activity of distinct neural populations involved in the sensory and cognitive processing of the auditory input. PMID:26860001
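
    Computationally, the MMN used in oddball studies like this one is a difference wave: the averaged deviant response minus the averaged standard response, with peak latency and amplitude read out in a post-stimulus window. A sketch with synthetic ERPs; every waveform parameter below is invented for illustration, not taken from the study.

```python
import numpy as np

fs = 500.0                                  # sampling rate in Hz (hypothetical)
t = np.arange(-0.1, 0.4, 1 / fs)            # epoch from -100 to +400 ms

def gauss(center_s, width_s, amp):
    """Gaussian bump used to mimic an ERP component."""
    return amp * np.exp(-0.5 * ((t - center_s) / width_s) ** 2)

# Synthetic averaged ERPs: the deviant adds an extra negativity ~160 ms.
standard = gauss(0.100, 0.020, -2.0) + gauss(0.180, 0.030, 3.0)  # N1 + P2
deviant = standard + gauss(0.160, 0.030, -3.5)                   # + MMN

difference = deviant - standard             # MMN = deviant minus standard
window = (t >= 0.10) & (t <= 0.25)          # typical MMN search window
mmn_latency_ms = 1000 * t[window][np.argmin(difference[window])]
mmn_amplitude = difference[window].min()
```

    Real analyses average hundreds of epochs per condition before subtracting; the readout step is the same.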

  15. Brain dynamics that correlate with effects of learning on auditory distance perception.

    PubMed

    Wisniewski, Matthew G; Mercado, Eduardo; Church, Barbara A; Gramann, Klaus; Makeig, Scott

    2014-01-01

    Accuracy in auditory distance perception can improve with practice and varies for sounds differing in familiarity. Here, listeners were trained to judge the distances of English, Bengali, and backwards speech sources pre-recorded at near (2-m) and far (30-m) distances. Listeners' accuracy was tested before and after training. Improvements from pre-test to post-test were greater for forward speech, demonstrating a learning advantage for forward speech sounds. Independent component (IC) processes identified in electroencephalographic (EEG) data collected during pre- and post-testing revealed three clusters of ICs across subjects with stimulus-locked spectral perturbations related to learning and accuracy. One cluster exhibited a transient stimulus-locked increase in 4-8 Hz power (theta event-related synchronization; ERS) that was smaller after training and largest for backwards speech. For a left temporal cluster, 8-12 Hz decreases in power (alpha event-related desynchronization; ERD) were greatest for English speech and less prominent after training. In contrast, a cluster of IC processes centered at or near anterior portions of the medial frontal cortex showed learning-related enhancement of sustained increases in 10-16 Hz power (upper-alpha/low-beta ERS). The degree of this enhancement was positively correlated with the degree of behavioral improvements. Results suggest that neural dynamics in non-auditory cortical areas support distance judgments. Further, frontal cortical networks associated with attentional and/or working memory processes appear to play a role in perceptual learning for source distance.

  16. Instrument specific brain activation in sensorimotor and auditory representation in musicians.

    PubMed

    Gebel, B; Braun, Ch; Kaza, E; Altenmüller, E; Lotze, M

    2013-07-01

    Musicians show a remarkable ability to interconnect motor patterns and sensory processing in the somatosensory and auditory domains. Many of these processes are specific for the instrument used. We were interested in the cerebral and cerebellar representations of these instrument-specific changes and therefore applied functional magnetic resonance imaging (fMRI) in two groups of instrumentalists with different instrumental training for comparable periods (approximately 15 years). The first group (trumpet players) uses tight finger and lip interaction; the second (pianists as control group) uses only the extremities for performance. fMRI tasks were balanced for instructions (piano and trumpet notes), sensory feedback (keypad and trumpet), and hand-lip interaction on the trumpet. During fMRI, both groups switched between different devices (trumpet or keypad) and performance was combined with or without auditory feedback. Playing the trumpet without any tone emission or using the mouthpiece showed an instrument training-specific activation increase in trumpet players. This was evident for the posterior-superior cerebellar hemisphere, the dominant primary sensorimotor cortex, and the left Heschl's gyrus. Additionally, trumpet players showed increased activity in the bilateral Heschl's gyrus during actual trumpet playing, although they showed significantly decreased loudness while playing with the mouthpiece in the scanner compared to pianists.

  17. Multichannel fiber-based diffuse reflectance spectroscopy for the rat brain exposed to a laser-induced shock wave: comparison between ipsi- and contralateral hemispheres

    NASA Astrophysics Data System (ADS)

    Miyaki, Mai; Kawauchi, Satoko; Okuda, Wataru; Nawashiro, Hiroshi; Takemura, Toshiya; Sato, Shunichi; Nishidate, Izumi

    2015-03-01

    Due to the considerable increase in terrorism using explosive devices, blast-induced traumatic brain injury (bTBI) has received much attention worldwide. However, little is known about the pathology and mechanism of bTBI. In our previous study, we found that cortical spreading depolarization (CSD) occurred in the hemisphere exposed to a laser-induced shock wave (LISW), which was followed by long-lasting hypoxemia-oligemia. However, there is no information on the events occurring in the contralateral hemisphere. In this study, we performed multichannel fiber-based diffuse reflectance spectroscopy on the rat brain exposed to an LISW and compared the results for the ipsilateral and contralateral hemispheres. A pair of optical fibers was placed on each of the exposed right and left parietal bones; white light was delivered to the brain through source fibers, and diffuse reflectance signals were collected with detection fibers for both hemispheres. An LISW was applied to the left (ipsilateral) hemisphere. By analyzing reflectance signals, we evaluated the occurrence of CSD, blood volume, and oxygen saturation for both hemispheres. In the ipsilateral hemisphere, we observed the occurrence of CSD and long-lasting hypoxemia-oligemia in all rats examined (n=8), as in our previous study. In the contralateral hemisphere, on the other hand, no CSD was observed, but we observed oligemia in 7 of 8 rats and hypoxemia in 1 of 8 rats, suggesting a mechanism causing hypoxemia, oligemia, or both that is not directly associated with CSD in the contralateral hemisphere.

  18. [Increase in intracranial pressure in monitoring brain stem auditory evoked potentials using headphones].

    PubMed

    Schwarz, G; Pfurtscheller, G; Tritthart, H; List, W F

    1988-11-01

    Ten measurements of intracranial pressure (ICP) (ventricular n = 5, epidural n = 3) in 8 patients (3 after aneurysm surgery, 5 with head trauma) were performed before and after application of conventional headphones for stimulating brainstem auditory evoked potentials (BAEP). The effects of miniature earphones and sound tubes on ICP were also studied. In 7 of 10 measurements, a reversible increase of ICP after application of headphones (mean 26 +/- 19%) was recorded in patients with ICP greater than 10 mmHg; in 3 patients (ICP less than or equal to 10 mmHg), no changes of ICP were seen. With miniature earphones and sound tubes, no increase of ICP was noted in any patient, and hence these can be recommended for stimulating BAEP in cases of increased ICP.

  19. Sound Perception: Rhythmic Brain Activity Really Is Important for Auditory Segregation.

    PubMed

    Snyder, Joel S

    2015-12-21

    A new study suggests that rhythmic brain activity plays a causal role in the perceptual segregation of sound patterns, rather than such activity simply being a non-functional by-product of sensory processing.

  20. The influence of cochlear spectral processing on the timing and amplitude of the speech-evoked auditory brain stem response

    PubMed Central

    Nuttall, Helen E.; Moore, David R.; Barry, Johanna G.; Krumbholz, Katrin

    2015-01-01

    The speech-evoked auditory brain stem response (speech ABR) is widely considered to provide an index of the quality of neural temporal encoding in the central auditory pathway. The aim of the present study was to evaluate the extent to which the speech ABR is shaped by spectral processing in the cochlea. High-pass noise masking was used to record speech ABRs from delimited octave-wide frequency bands between 0.5 and 8 kHz in normal-hearing young adults. The latency of the frequency-delimited responses decreased from the lowest to the highest frequency band by up to 3.6 ms. The observed frequency-latency function was compatible with model predictions based on wave V of the click ABR. The frequency-delimited speech ABR amplitude was largest in the 2- to 4-kHz frequency band and decreased toward both higher and lower frequency bands despite the predominance of low-frequency energy in the speech stimulus. We argue that the frequency dependence of speech ABR latency and amplitude results from the decrease in cochlear filter width with decreasing frequency. The results suggest that the amplitude and latency of the speech ABR may reflect interindividual differences in cochlear, as well as central, processing. The high-pass noise-masking technique provides a useful tool for differentiating between peripheral and central effects on the speech ABR. It can be used for further elucidating the neural basis of the perceptual speech deficits that have been associated with individual differences in speech ABR characteristics. PMID:25787954
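
    The derived-band logic behind the high-pass noise-masking technique reduces to a subtraction: high-pass noise masks cochlear regions above its cutoff, so the response recorded with the masker cutoff at a band's upper edge, minus the response with the cutoff at its lower edge, isolates the contribution of the region in between. A minimal sketch; the function name and toy numbers are ours, not the authors'.

```python
def derived_band_response(resp_by_cutoff, f_low, f_high):
    """Derived-band response for the band f_low..f_high (Hz). The
    response recorded with the high-pass masker cut off at f_high,
    minus the one recorded with the cutoff at f_low, leaves only the
    contribution of fibers between the two cutoffs. `resp_by_cutoff`
    maps masker cutoff (Hz) to an averaged waveform (list of floats)."""
    return [hi - lo for hi, lo in
            zip(resp_by_cutoff[f_high], resp_by_cutoff[f_low])]

# Toy waveforms: cutoff frequency (Hz) -> averaged response samples.
toy = {2000: [0.10, 0.50, 0.20], 4000: [0.20, 0.90, 0.50]}
band_2k_4k = derived_band_response(toy, 2000, 4000)   # 2-4 kHz contribution
```

    The frequency-latency function reported above would then be read off by locating wave V in each derived-band waveform.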

  1. The influence of cochlear spectral processing on the timing and amplitude of the speech-evoked auditory brain stem response.

    PubMed

    Nuttall, Helen E; Moore, David R; Barry, Johanna G; Krumbholz, Katrin; de Boer, Jessica

    2015-06-01

    The speech-evoked auditory brain stem response (speech ABR) is widely considered to provide an index of the quality of neural temporal encoding in the central auditory pathway. The aim of the present study was to evaluate the extent to which the speech ABR is shaped by spectral processing in the cochlea. High-pass noise masking was used to record speech ABRs from delimited octave-wide frequency bands between 0.5 and 8 kHz in normal-hearing young adults. The latency of the frequency-delimited responses decreased from the lowest to the highest frequency band by up to 3.6 ms. The observed frequency-latency function was compatible with model predictions based on wave V of the click ABR. The frequency-delimited speech ABR amplitude was largest in the 2- to 4-kHz frequency band and decreased toward both higher and lower frequency bands despite the predominance of low-frequency energy in the speech stimulus. We argue that the frequency dependence of speech ABR latency and amplitude results from the decrease in cochlear filter width with decreasing frequency. The results suggest that the amplitude and latency of the speech ABR may reflect interindividual differences in cochlear, as well as central, processing. The high-pass noise-masking technique provides a useful tool for differentiating between peripheral and central effects on the speech ABR. It can be used for further elucidating the neural basis of the perceptual speech deficits that have been associated with individual differences in speech ABR characteristics.

  2. Auditory imagery: empirical findings.

    PubMed

    Hubbard, Timothy L

    2010-03-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d) auditory imagery's relationship to perception and memory (detection, encoding, recall, mnemonic properties, phonological loop), and (e) individual differences in auditory imagery (in vividness, musical ability and experience, synesthesia, musical hallucinosis, schizophrenia, amusia) are considered. It is concluded that auditory imagery (a) preserves many structural and temporal properties of auditory stimuli, (b) can facilitate auditory discrimination but interfere with auditory detection, (c) involves many of the same brain areas as auditory perception, (d) is often but not necessarily influenced by subvocalization, (e) involves semantically interpreted information and expectancies, (f) involves depictive components and descriptive components, (g) can function as a mnemonic but is distinct from rehearsal, and (h) is related to musical ability and experience (although the mechanisms of that relationship are not clear). PMID:20192565

  4. Auditory evoked potentials to spectro-temporal modulation of complex tones in normal subjects and patients with severe brain injury.

    PubMed

    Jones, S J; Vaz Pato, M; Sprague, L; Stokes, M; Munday, R; Haque, N

    2000-05-01

    In order to assess higher auditory processing capabilities, long-latency auditory evoked potentials (AEPs) were recorded to synthesized musical instrument tones in 22 post-comatose patients with severe brain injury causing variably attenuated behavioural responsiveness. On the basis of normative studies, three different types of spectro-temporal modulation were employed. When a continuous 'clarinet' tone changes pitch once every few seconds, N1/P2 potentials are evoked at latencies of approximately 90 and 180 ms, respectively. Their distribution in the fronto-central region is consistent with generators in the supratemporal cortex of both hemispheres. When the pitch is modulated at a much faster rate (approximately 16 changes/s), responses to each change are virtually abolished but potentials with similar distribution are still elicited by changing the timbre (e.g. 'clarinet' to 'oboe') every few seconds. These responses appear to represent the cortical processes concerned with spectral pattern analysis and the grouping of frequency components to form sound 'objects'. Following a period of 16/s oscillation between two pitches, a more anteriorly distributed negativity is evoked on resumption of a steady pitch. Various lines of evidence suggest that this is probably equivalent to the 'mismatch negativity' (MMN), reflecting a pre-perceptual, memory-based process for detection of change in spectro-temporal sound patterns. This method requires no off-line subtraction of AEPs evoked by the onset of a tone, and the MMN is produced rapidly and robustly with considerably larger amplitude (usually >5 microV) than that to discontinuous pure tones. In the brain-injured patients, the presence of AEPs to two or more complex tone stimuli (in the combined assessment of two authors who were 'blind' to the clinical and behavioural data) was significantly associated with the demonstrable possession of discriminative hearing (the ability to respond differentially to verbal commands

  5. Mining multi-channel EEG for its information content: an ANN-based method for a brain-computer interface.

    PubMed

    Peters, Bjorn O.; Pfurtscheller, Gert; Flyvbjerg, Henrik

    1998-10-01

    We have studied 56-channel electroencephalograms (EEG) from three subjects who planned and performed three kinds of movements: left index finger, right index finger, and right foot movement. Using autoregressive modeling of EEG time series and artificial neural nets (ANN), we have developed a classifier that can tell which movement is performed from a segment of the EEG signal from a single trial. The classifier's rate of recognition of EEGs not seen before was 92-99% on the basis of a 1-s segment per trial. The recognition rate provides a pragmatic measure of the information content of the EEG signal. This high recognition rate makes the classifier suitable for a so-called 'Brain-Computer Interface', a system that allows one to control a computer, or another device, with one's brain waves. Our classifier applies a spatial Laplacian filter to the EEG but makes use of its entire frequency range, and automatically locates regions of relevant activity on the scalp.
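The feature-extraction stage described in this entry, per-channel autoregressive (AR) modelling of a short EEG segment, can be sketched minimally in Python. The AR order, the Yule-Walker estimator, and the flattening of per-channel coefficients into a single feature vector (which would then feed a classifier such as the authors' neural net) are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def ar_coefficients(x, order=6):
    """Estimate AR(p) coefficients of a 1-D signal by solving the
    Yule-Walker equations with a biased autocovariance estimate."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    # Autocovariance at lags 0..order.
    r = np.array([x[:n - k] @ x[k:] for k in range(order + 1)]) / n
    # Toeplitz system R a = r[1:].
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

def trial_features(trial, order=6):
    """Concatenate per-channel AR coefficients into one feature vector.
    `trial` has shape (n_channels, n_samples)."""
    return np.concatenate([ar_coefficients(ch, order) for ch in trial])
```

A classifier (the paper uses an artificial neural net) would then be trained on `trial_features` vectors labelled by movement type.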

  6. Optical Brain Imaging Reveals General Auditory and Language-Specific Processing in Early Infant Development

    PubMed Central

    Minagawa-Kawai, Yasuyo; van der Lely, Heather; Ramus, Franck; Sato, Yutaka; Mazuka, Reiko; Dupoux, Emmanuel

    2011-01-01

    This study uses near-infrared spectroscopy in young infants in order to elucidate the nature of functional cerebral processing for speech. Previous imaging studies of infants’ speech perception revealed left-lateralized responses to native language. However, it is unclear if these activations were due to language per se rather than to some low-level acoustic correlate of spoken language. Here we compare native (L1) and non-native (L2) languages with 3 different nonspeech conditions including emotional voices, monkey calls, and phase scrambled sounds that provide more stringent controls. Hemodynamic responses to these stimuli were measured in the temporal areas of Japanese 4-month-olds. The results show clear left-lateralized responses to speech, prominently to L1, as opposed to various activation patterns in the nonspeech conditions. Furthermore, implementing a new analysis method designed for infants, we discovered a slower hemodynamic time course in awake infants. Our results are largely explained by signal-driven auditory processing. However, stronger activations to L1 than to L2 indicate a language-specific neural factor that modulates these responses. This study is the first to discover a significantly higher sensitivity to L1 in 4-month-olds and reveals a neural precursor of the functional specialization for the higher cognitive network. PMID:20497946

  7. Temporal correlations versus noise in the correlation matrix formalism: An example of the brain auditory response

    NASA Astrophysics Data System (ADS)

    Kwapień, J.; Drożdż, S.; Ioannides, A. A.

    2000-10-01

    We adopt the concept of the correlation matrix to study correlations among sequences of time-extended events occurring repeatedly at consecutive time intervals. As an application we analyze the magnetoencephalography recordings obtained from the human auditory cortex in the epoch mode during the delivery of sound stimuli to the left or right ear. We look into statistical properties and the eigenvalue spectrum of the correlation matrix C calculated for signals corresponding to different trials and originating from the same or opposite hemispheres. The spectrum of C largely agrees with the universal properties of the Gaussian orthogonal ensemble of random matrices, with deviations characterized by eigenvectors with high eigenvalues. The properties of these eigenvectors and eigenvalues provide an elegant and powerful way of quantifying the degree of the underlying collectivity during well-defined latency intervals with respect to stimulus onset. We also extend this analysis to study the time-lagged interhemispheric correlations, as a computationally less demanding alternative to other methods such as mutual information.
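The core construction in this entry, a trial-by-trial correlation matrix whose largest eigenvalues quantify collectivity against the random-matrix bulk, can be sketched as follows. The z-scoring convention and the normalisation of the top eigenvalue by the number of trials are illustrative choices, not taken from the paper.

```python
import numpy as np

def correlation_matrix(trials):
    """Correlation matrix C across repeated trials.
    `trials` has shape (n_trials, n_samples); each row (trial) is
    z-scored so that C has a unit diagonal."""
    z = trials - trials.mean(axis=1, keepdims=True)
    z /= z.std(axis=1, keepdims=True)
    return (z @ z.T) / trials.shape[1]

def collectivity(trials):
    """Largest eigenvalue of C divided by the number of trials:
    small for independent noise (eigenvalues stay close to the
    random-matrix bulk), approaching 1 when every trial contains
    the same phase-locked response."""
    eigenvalues = np.linalg.eigvalsh(correlation_matrix(trials))
    return eigenvalues[-1] / trials.shape[0]
```

Applied to epochs aligned on stimulus onset, a large normalised top eigenvalue during a latency window signals a collective, repeatable evoked response across trials.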

  8. Age-Related Changes in Transient and Oscillatory Brain Responses to Auditory Stimulation during Early Adolescence

    ERIC Educational Resources Information Center

    Poulsen, Catherine; Picton, Terence W.; Paus, Tomas

    2009-01-01

    Maturational changes in the capacity to process quickly the temporal envelope of sound have been linked to language abilities in typically developing individuals. As part of a longitudinal study of brain maturation and cognitive development during adolescence, we employed dense-array EEG and spatiotemporal source analysis to characterize…

  9. Brain activity is related to individual differences in the number of items stored in auditory short-term memory for pitch: evidence from magnetoencephalography.

    PubMed

    Grimault, Stephan; Nolden, Sophie; Lefebvre, Christine; Vachon, François; Hyde, Krista; Peretz, Isabelle; Zatorre, Robert; Robitaille, Nicolas; Jolicoeur, Pierre

    2014-07-01

    We used magnetoencephalography (MEG) to examine brain activity related to the maintenance of non-verbal pitch information in auditory short-term memory (ASTM). We focused on brain activity that increased with the number of items effectively held in memory by the participants during the retention interval of an auditory memory task. We used very simple acoustic materials (i.e., pure tones that varied in pitch) that minimized activation from non-ASTM related systems. MEG revealed neural activity in frontal, temporal, and parietal cortices that increased with a greater number of items effectively held in memory by the participants during the maintenance of pitch representations in ASTM. The present results reinforce the functional role of frontal and temporal cortices in the retention of pitch information in ASTM. This is the first MEG study to provide both fine spatial localization and temporal resolution on the neural mechanisms of non-verbal ASTM for pitch in relation to individual differences in the capacity of ASTM. This research contributes to a comprehensive understanding of the mechanisms mediating the representation and maintenance of basic non-verbal auditory features in the human brain.

  10. Comparisons of MRI images, and auditory-related and vocal-related protein expressions in the brain of echolocation bats and rodents.

    PubMed

    Hsiao, Chun-Jen; Hsu, Chih-Hsiang; Lin, Ching-Lung; Wu, Chung-Hsin; Jen, Philip Hung-Sun

    2016-08-17

    Although echolocating bats and other mammals share the basic design of laryngeal apparatus for sound production and auditory system for sound reception, they have a specialized laryngeal mechanism for ultrasonic sound emissions as well as a highly developed auditory system for processing species-specific sounds. Because the sounds used by bats for echolocation and rodents for communication are quite different, there must be differences in the central nervous system devoted to producing and processing species-specific sounds between them. The present study examines the difference in the relative size of several brain structures and expression of auditory-related and vocal-related proteins in the central nervous system of echolocation bats and rodents. Here, we report that bats using constant frequency-frequency-modulated sounds (CF-FM bats) and FM bats for echolocation have a larger volume of midbrain nuclei (inferior and superior colliculi) and cerebellum relative to the size of the brain than rodents (mice and rats). However, the former have a smaller volume of the cerebrum and olfactory bulb, but greater expression of otoferlin and forkhead box protein P2 than the latter. Although the size of both midbrain colliculi is comparable in both CF-FM and FM bats, CF-FM bats have a larger cerebrum and greater expression of otoferlin and forkhead box protein P2 than FM bats. These differences in brain structure and protein expression are discussed in relation to their biologically relevant sounds and foraging behavior. PMID:27337384

  11. An offline auditory P300 brain-computer interface using principal and independent component analysis techniques for functional electrical stimulation application.

    PubMed

    Bentley, Alexander S J; Andrew, Colin M; John, Lester R

    2008-01-01

    A brain-computer interface (BCI) provides technology that allows communication and control for people who are unable to interact with their environment. A P300 BCI exploits the fact that external or internal stimuli may evoke a recognition response in the brain's electrical activity, which may be recorded by electroencephalography (EEG) to act as a control signal. Additionally, an auditory BCI does not require users to divert their visual attention from the task at hand and is thus more practical in a real environment than visual-stimulus BCIs.

  12. Neuronal coupling by endogenous electric fields: cable theory and applications to coincidence detector neurons in the auditory brain stem.

    PubMed

    Goldwyn, Joshua H; Rinzel, John

    2016-04-01

    The ongoing activity of neurons generates a spatially and temporally varying field of extracellular voltage (Ve). This Ve field reflects population-level neural activity, but does it modulate neural dynamics and the function of neural circuits? We provide a cable theory framework to study how a bundle of model neurons generates Ve and how this Ve feeds back and influences membrane potential (Vm). We find that these "ephaptic interactions" are small but not negligible. The model neural population can generate Ve with millivolt-scale amplitude, and this Ve perturbs the Vm of "nearby" cables and effectively increases their electrotonic length. After using passive cable theory to systematically study ephaptic coupling, we explore a test case: the medial superior olive (MSO) in the auditory brain stem. The MSO is a possible locus of ephaptic interactions: sounds evoke large (millivolt-scale) Ve in vivo in this nucleus. The Ve response is thought to be generated by MSO neurons that perform a known neuronal computation with submillisecond temporal precision (coincidence detection to encode sound source location). Using a biophysically based model of MSO neurons, we find millivolt-scale ephaptic interactions consistent with the passive cable theory results. These subtle membrane potential perturbations induce changes in spike initiation threshold, spike time synchrony, and time difference sensitivity. These results suggest that ephaptic coupling may influence MSO function.
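The passive cable framework invoked in this entry can be summarised in standard textbook notation (this is the generic form, with the extracellular field entering as a distributed source term; it is not a formula reproduced from the paper):

```latex
% Passive cable driven by an imposed extracellular potential V_e(x,t).
% \lambda: electrotonic length constant; \tau_m: membrane time constant.
\lambda^{2}\,\frac{\partial^{2} V_m}{\partial x^{2}}
  \;-\; \tau_m\,\frac{\partial V_m}{\partial t} \;-\; V_m
  \;=\; -\,\lambda^{2}\,\frac{\partial^{2} V_e}{\partial x^{2}},
\qquad
\lambda = \sqrt{r_m / r_i}, \quad \tau_m = r_m c_m .
```

The right-hand side (sometimes called the activating function) is how a millivolt-scale Ve generated by neighbouring cables perturbs Vm in models of this kind.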

  13. Latency of tone-burst-evoked auditory brain stem responses and otoacoustic emissions: Level, frequency, and rise-time effects

    PubMed Central

    Rasetshwane, Daniel M.; Argenyi, Michael; Neely, Stephen T.; Kopun, Judy G.; Gorga, Michael P.

    2013-01-01

    Simultaneous measurement of auditory brain stem response (ABR) and otoacoustic emission (OAE) delays may provide insights into effects of level, frequency, and stimulus rise-time on cochlear delay. Tone-burst-evoked ABRs and OAEs (TBOAEs) were measured simultaneously in normal-hearing human subjects. Stimuli included a wide range of frequencies (0.5–8 kHz), levels (20–90 dB SPL), and tone-burst rise times. ABR latencies have orderly dependence on these three parameters, similar to previously reported data by Gorga et al. [J. Speech Hear. Res. 31, 87–97 (1988)]. Level dependence of ABR and TBOAE latencies was similar across a wide range of stimulus conditions. At mid-frequencies, frequency dependence of ABR and TBOAE latencies were similar. The dependence of ABR latency on both rise time and level was significant; however, the interaction was not significant, suggesting independent effects. Comparison between ABR and TBOAE latencies reveals that the ratio of TBOAE latency to ABR forward latency (the level-dependent component of ABR total latency) is close to one below 1.5 kHz, but greater than two above 1.5 kHz. Despite the fact that the current experiment was designed to test compatibility with models of reverse-wave propagation, existing models do not completely explain the current data. PMID:23654387

  14. The combined monitoring of brain stem auditory evoked potentials and intracranial pressure in coma. A study of 57 patients.

    PubMed Central

    García-Larrea, L; Artru, F; Bertrand, O; Pernier, J; Mauguière, F

    1992-01-01

    Continuous monitoring of brainstem auditory evoked potentials (BAEPs) was carried out in 57 comatose patients for periods ranging from 5 hours to 13 days. In 53 cases intracranial pressure (ICP) was also simultaneously monitored. The study of relative changes of evoked potentials over time proved more relevant to prognosis than the mere consideration of "statistical normality" of waveforms; thus progressive degradation of the BAEPs was associated with a bad outcome even if the responses remained within normal limits. Contrary to previous reports, a normal BAEP obtained during the second week of coma did not necessarily indicate a good vital outcome; it could, however, do so in cases with a low probability of secondary insults. The simultaneous study of BAEPs and ICP showed that apparently significant (greater than 40 mm Hg) acute rises in ICP were not always followed by BAEP changes. The stability of BAEPs despite "significant" ICP rises was associated in our patients with a high probability of survival, while prolongation of central latency of BAEPs in response to ICP modifications was almost invariably followed by brain death. Continuous monitoring of brainstem responses provided a useful physiological counterpart to physical parameters such as ICP. Serial recording of cortical EPs should be added to BAEP monitoring to permit the early detection of rostrocaudal deterioration. PMID:1402970

  15. Asymmetries of the human social brain in the visual, auditory and chemical modalities

    PubMed Central

    Brancucci, Alfredo; Lucci, Giuliana; Mazzatenta, Andrea; Tommasi, Luca

    2008-01-01

    Structural and functional asymmetries are present in many regions of the human brain responsible for motor control, sensory and cognitive functions and communication. Here, we focus on hemispheric asymmetries underlying the domain of social perception, broadly conceived as the analysis of information about other individuals based on acoustic, visual and chemical signals. By means of these cues the brain establishes the border between ‘self’ and ‘other’, and interprets the surrounding social world in terms of the physical and behavioural characteristics of conspecifics essential for impression formation and for creating bonds and relationships. We show that, considered from the standpoint of single- and multi-modal sensory analysis, the neural substrates of the perception of voices, faces, gestures, smells and pheromones, as evidenced by modern neuroimaging techniques, are characterized by a general pattern of right-hemispheric functional asymmetry that might benefit from other aspects of hemispheric lateralization rather than constituting a true specialization for social information. PMID:19064350

  16. Auditory brain-stem evoked potentials in cat after kainic acid induced neuronal loss. II. Cochlear nucleus.

    PubMed

    Zaaroor, M; Starr, A

    1991-01-01

    Auditory brain-stem potentials (ABRs) were studied in cats for up to 6 weeks after kainic acid had been injected unilaterally into the cochlear nucleus (CN) producing extensive neuronal destruction. The ABR components were labeled by the polarity at the vertex (P, for positive) and their order of appearance (the arabic numerals 1, 2, etc.). Component P1 can be further subdivided into 2 subcomponents, P1a and P1b. The assumed correspondence between the ABR components in cat and man is indicated by providing human Roman numeral designations in parentheses following the feline notation, e.g., P2 (III). To stimulation of the ear ipsilateral to the injection, the ABR changes consisted of a loss of components P2 (III) and P3 (IV), and an attenuation and prolongation of latency of components P4 (V) and P5 (VI). The sustained potential shift from which the components arose was not affected. Wave P1a (I) was also slightly but significantly attenuated, compatible with altered excitability of nerve VIII in the cochlea secondary to cochlear nucleus destruction. Unexpectedly, to stimulation of the ear contralateral to the injection side, waves P2 (III), P3 (IV), and P4 (V) were also attenuated and delayed in latency, but to a lesser degree than to stimulation of the ear ipsilateral to the injection. Changes in binaural interaction of the ABR following cochlear nucleus lesions were similar to those produced in normal animals by introducing a temporal delay of the input to one ear. The results of the present set of studies using kainic acid to induce neuronal loss in the auditory pathway, when combined with prior lesion and recording experiments, suggest that each of the components of the ABR requires the integrity of an anatomically diffuse system comprising a set of neurons, their axons, and the neurons on which they terminate. Disruption of any portion of the system will alter the amplitude and/or the latency of that component. PMID:1716569

  17. Far-field brainstem responses evoked by vestibular and auditory stimuli exhibit increases in interpeak latency as brain temperature is decreased.

    PubMed

    Hoffman, L F; Horowitz, J M

    1984-01-01

    The effect of decreasing brain temperature upon the transmission of neural signals along the brainstem auditory pathway has been well documented in cats and mice. The increase in the absolute and interpeak latencies of components of the brainstem auditory evoked response (BAER) has indicated that a progressive slowing occurs along the pathway as the signals ascend toward higher brainstem areas. Therefore, to fully describe BAERs, both peak latencies and temperature are measured, especially in anesthetized preparations, when brain temperature can be labile. In comparison to the numerous studies on the auditory system, there are few studies that relate far-field responses evoked by angular acceleration to the vestibular system. Moreover, the temperature dependence of such responses has apparently not been investigated. In this study we performed experiments designed to examine whether interpeak latencies of the BAER in rats depended upon temperature. This led to experiments designed to examine whether interpeak latencies of responses evoked by an angular acceleration show a dependence on temperature. PMID:11539019

  18. An fMRI Study of Auditory Orienting and Inhibition of Return in Pediatric Mild Traumatic Brain Injury

    PubMed Central

    Yang, Zhen; Yeo, Ronald A.; Pena, Amanda; Ling, Josef M.; Klimaj, Stefan; Campbell, Richard; Doezema, David

    2012-01-01

    Studies in adult mild traumatic brain injury (mTBI) have shown that two key measures of attention, spatial reorienting and inhibition of return (IOR), are impaired during the first few weeks of injury. However, it is currently unknown whether similar deficits exist following pediatric mTBI. The current study used functional magnetic resonance imaging (fMRI) to investigate the effects of semi-acute mTBI (<3 weeks post-injury) on auditory orienting in 14 pediatric mTBI patients (age 13.50±1.83 years; education: 6.86±1.88 years), and 14 healthy controls (age 13.29±2.09 years; education: 7.21±2.08 years), matched for age and years of education. The results indicated that patients with mTBI showed subtle (i.e., moderate effect sizes) but non-significant deficits on formal neuropsychological testing and during IOR. In contrast, functional imaging results indicated that patients with mTBI demonstrated significantly decreased activation within the bilateral posterior cingulate gyrus, thalamus, basal ganglia, midbrain nuclei, and cerebellum. The spatial topography of hypoactivation was very similar to our previous study in adults, suggesting that subcortical structures may be particularly affected by the initial biomechanical forces in mTBI. Current results also suggest that fMRI may be a more sensitive tool for identifying semi-acute effects of mTBI than the procedures currently used in clinical practice, such as neuropsychological testing and structural scans. fMRI findings could potentially serve as a biomarker for measuring the subtle injury caused by mTBI, and documenting the course of recovery. PMID:22533632

  19. Conventional and cross-correlation brain-stem auditory evoked responses in the white leghorn chick: rate manipulations

    NASA Technical Reports Server (NTRS)

    Burkard, R.; Jones, S.; Jones, T.

    1994-01-01

    Rate-dependent changes in the chick brain-stem auditory evoked response (BAER) using conventional averaging and a cross-correlation technique were investigated. Five 15- to 19-day-old white leghorn chicks were anesthetized with Chloropent. In each chick, the left ear was acoustically stimulated. Electrical pulses of 0.1-ms duration were shaped, attenuated, and passed through a current driver to an Etymotic ER-2 which was sealed in the ear canal. Electrical activity from stainless-steel electrodes was amplified, filtered (300-3000 Hz) and digitized at 20 kHz. Click levels included 70 and 90 dB peSPL. In each animal, conventional BAERs were obtained at rates ranging from 5 to 90 Hz. BAERs were also obtained using a cross-correlation technique involving pseudorandom pulse sequences called maximum length sequences (MLSs). The minimum time between pulses, called the minimum pulse interval (MPI), ranged from 0.5 to 6 ms. Two BAERs were obtained for each condition. Dependent variables included the latency and amplitude of the cochlear microphonic (CM), wave 2 and wave 3. BAERs were observed in all chicks, for all level by rate combinations for both conventional and MLS BAERs. There was no effect of click level or rate on the latency of the CM. The latency of waves 2 and 3 increased with decreasing click level and increasing rate. CM amplitude decreased with decreasing click level, but was not influenced by click rate for the 70 dB peSPL condition. For the 90 dB peSPL click, CM amplitude was uninfluenced by click rate for conventional averaging. For MLS BAERs, CM amplitude was similar to conventional averaging for longer MPIs.(ABSTRACT TRUNCATED AT 250 WORDS).
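The maximum length sequence (MLS) technique mentioned in this entry relies on the near-ideal circular autocorrelation of an m-sequence: cross-correlating the overlapped recording with the stimulus sequence recovers the evoked impulse response even at very short minimum pulse intervals. A minimal sketch follows; the register length, tap positions, and scaling are illustrative, not the parameters used in the study.

```python
import numpy as np

def mls(n_bits=7, taps=(7, 6)):
    """Maximum length sequence (+/-1 valued, period 2**n_bits - 1)
    from a Fibonacci linear feedback shift register. The tap set
    must correspond to a primitive polynomial; (7, 6) is one
    standard maximal choice for a 7-bit register."""
    reg = [1] * n_bits
    out = []
    for _ in range(2 ** n_bits - 1):
        out.append(1.0 if reg[-1] else -1.0)
        feedback = 0
        for t in taps:
            feedback ^= reg[t - 1]
        reg = [feedback] + reg[:-1]
    return np.array(out)

def recover_response(recording, stimulus):
    """Recover the impulse response from a recording of overlapping
    responses by circular cross-correlation with the MLS stimulus
    (computed via FFT), normalised by the sequence length."""
    n = len(stimulus)
    spectrum = np.fft.rfft(recording) * np.conj(np.fft.rfft(stimulus))
    return np.fft.irfft(spectrum, n) / n
```

Because the circular autocorrelation of an MLS is n at zero lag and -1 at every other lag, the recovered waveform equals the true response up to a small constant offset of order 1/n.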

  20. Differences in brain circuitry for appetitive and reactive aggression as revealed by realistic auditory scripts

    PubMed Central

    Moran, James K.; Weierstall, Roland; Elbert, Thomas

    2014-01-01

    Aggressive behavior is thought to divide into two motivational elements: The first being a self-defensively motivated aggression against threat and a second, hedonically motivated “appetitive” aggression. Appetitive aggression is the less understood of the two, often only researched within abnormal psychology. Our approach is to understand it as a universal and adaptive response, and examine the functional neural activity of ordinary men (N = 50) presented with an imaginative listening task involving a murderer describing a kill. We manipulated motivational context in a between-subjects design to evoke appetitive or reactive aggression, against a neutral control, measuring activity with Magnetoencephalography (MEG). Results show differences in left frontal regions in delta (2–5 Hz) and alpha band (8–12 Hz) for aggressive conditions and right parietal delta activity differentiating appetitive and reactive aggression. These results validate the distinction of reward-driven appetitive aggression from reactive aggression in ordinary populations at the level of functional neural brain circuitry. PMID:25538590

  1. Influence of dominant motivation on the functional organization of auditory input to the sensorimotor cortex of the cat brain.

    PubMed

    Ivanova, Yu V; Vasil'eva, L A; Kulikov, G A

    1990-01-01

    The results of experiments reviewed in this article demonstrate the possibility of the transformation of the frequency tuning of the auditory input into the sensorimotor cortex (SMC) of the cat under the influence of a dominant motivation. Similar changes took place in the parietal cortex (PC) but they were significantly less in absolute magnitude. The identified transformation of the frequency tuning of the auditory input into the SMC and the PC is in agreement with a change in the biological significance of the auditory signals of kittens for females in the period of lactation, and corresponds for each cat to the spectral composition of the vocalizations of its own kittens.

  2. Auditory brain-stem evoked potentials in cat after kainic acid induced neuronal loss. I. Superior olivary complex.

    PubMed

    Zaaroor, M; Starr, A

    1991-01-01

    Auditory brain-stem potentials (ABRs) were studied in cats for up to 45 days after kainic acid had been injected unilaterally or bilaterally into the superior olivary complex (SOC) to produce neuronal destruction while sparing fibers of passage and the terminals of axons of extrinsic origin connecting to SOC neurons. The components of the ABR in cat were labeled by their polarity at the vertex (P, for positive) and their order of appearance (the arabic numerals 1, 2, etc.). Component P1 can be further subdivided into 2 subcomponents labeled P1a and P1b. The correspondences we have assumed between the ABR components in cat and man are indicated by providing a Roman numeral designation for the human component in parentheses following the feline notation, e.g., P4 (V). With bilateral SOC destruction, there was a significant and marked attenuation of waves P2 (III), P3 (IV), P4 (V), P5 (VI), and the sustained potential shift (SPS) amounting to as much as 80% of preoperative values. Following unilateral SOC destruction the attenuation of many of these same ABR components, in response to stimulation of either ear, was up to 50%. No component of the ABR was totally abolished even when the SOC was lesioned 100% bilaterally. In unilaterally lesioned cats with extensive neuronal loss (greater than 75%) the latencies of the components beginning at P3 (IV) were delayed to stimulation of the ear ipsilateral to the injection site but not to stimulation of the ear contralateral to the injection. Binaural interaction components of the ABR were affected in proportion to the attenuation of the ABR. These results are compatible with multiple brain regions contributing to the generation of the components of the ABR beginning with P2 (III) and that components P3 (IV), P4 (V), and P5 (VI) and the sustained potential shift depend particularly on the integrity of the neurons of the SOC bilaterally. 
The neurons of the lateral subdivision (LSO) and the medial nucleus of the trapezoid body

  3. Bilinguals at the "cocktail party": dissociable neural activity in auditory-linguistic brain regions reveals neurobiological basis for nonnative listeners' speech-in-noise recognition deficits.

    PubMed

    Bidelman, Gavin M; Dexter, Lauren

    2015-04-01

    We examined a consistent deficit observed in bilinguals: poorer speech-in-noise (SIN) comprehension for their nonnative language. We recorded neuroelectric mismatch potentials in mono- and bi-lingual listeners in response to contrastive speech sounds in noise. Behaviorally, late bilinguals required ∼10 dB more favorable signal-to-noise ratios to match monolinguals' SIN abilities. Source analysis of cortical activity demonstrated monotonic increase in response latency with noise in superior temporal gyrus (STG) for both groups, suggesting parallel degradation of speech representations in auditory cortex. Contrastively, we found differential speech encoding between groups within inferior frontal gyrus (IFG)-adjacent to Broca's area-where noise delays observed in nonnative listeners were offset in monolinguals. Notably, brain-behavior correspondences double dissociated between language groups: STG activation predicted bilinguals' SIN, whereas IFG activation predicted monolinguals' performance. We infer higher-order brain areas act compensatorily to enhance impoverished sensory representations but only when degraded speech recruits linguistic brain mechanisms downstream from initial auditory-sensory inputs.

  4. Magnetoencephalographic accuracy profiles for the detection of auditory pathway sources.

    PubMed

    Bauer, Martin; Trahms, Lutz; Sander, Tilmann

    2015-04-01

    The detection limits for cortical and brain stem sources associated with the auditory pathway are examined in order to analyse brain responses at the limits of the audible frequency range. The results obtained from this study are also relevant to other issues of auditory brain research. A complementary approach consisting of recordings of magnetoencephalographic (MEG) data and simulations of magnetic field distributions is presented in this work. A biomagnetic phantom consisting of a spherical volume filled with a saline solution and four current dipoles is built. The magnetic fields outside of the phantom generated by the current dipoles are then measured for a range of applied electric dipole moments with a planar multichannel SQUID magnetometer device and a helmet MEG gradiometer device. A magnetometer system is expected to be more sensitive to brain stem sources than a gradiometer system. The same electrical and geometrical configuration is simulated in a forward calculation. From both the measured and the simulated data, the dipole positions are estimated using an inverse calculation. Results are obtained for the reconstruction accuracy as a function of applied electric dipole moment and depth of the current dipole. We found that both systems can localize cortical and subcortical sources at physiological dipole strength, even for brain stem sources. Further, we found that a planar magnetometer system is more suitable if the position of the brain source can be restricted to a limited region of the brain. If this is not the case, a helmet-shaped sensor system offers more accurate source estimation.

  5. Loss of auditory sensitivity from inner hair cell synaptopathy can be centrally compensated in the young but not old brain.

    PubMed

    Möhrle, Dorit; Ni, Kun; Varakina, Ksenya; Bing, Dan; Lee, Sze Chim; Zimmermann, Ulrike; Knipper, Marlies; Rüttiger, Lukas

    2016-08-01

    A dramatic shift in societal demographics will lead to rapid growth in the number of older people with hearing deficits. Poorer performance in suprathreshold speech understanding and temporal processing with age has been previously linked with progressing inner hair cell (IHC) synaptopathy that precedes age-dependent elevation of auditory thresholds. We compared central sound responsiveness after acoustic trauma in young, middle-aged, and older rats. We demonstrate that IHC synaptopathy progresses from middle age onward and hearing threshold becomes elevated from old age onward. Interestingly, middle-aged animals could centrally compensate for the loss of auditory fiber activity through an increase in late auditory brainstem responses (late auditory brainstem response wave) linked to shortening of central response latencies. In contrast, old animals failed to restore central responsiveness, which correlated with reduced temporal resolution in responding to amplitude changes. These findings may suggest that cochlear IHC synaptopathy with age does not necessarily induce temporal auditory coding deficits, as long as the capacity to generate neuronal gain maintains normal sound-induced central amplitudes. PMID:27318145

  6. Clinical assessment of auditory dysfunction.

    PubMed Central

    Thomas, W G

    1982-01-01

    Many drugs, chemical substances and agents are potentially toxic to the human auditory system. The extent of toxicity depends on numerous factors. With few exceptions, toxicity in the auditory system affects various organs or cells within the cochlea or vestibular system, with brain stem and other central nervous system involvement reported with some chemicals and agents. This ototoxicity usually presents as a decrease in auditory sensitivity, tinnitus and/or vertigo or loss of balance. Classical and newer audiological techniques used in clinical assessment are beneficial in specifying the site of lesion in the cochlea, although auditory test results, themselves, give little information regarding possible pathology or etiology within the cochlea. Typically, ototoxicity results in high frequency hearing loss, progressive as a function of frequency, usually accompanied by tinnitus and occasionally by vertigo or loss of balance. Auditory testing protocols are necessary to document this loss in auditory function. PMID:7044778

  7. The Role of Animacy in the Real Time Comprehension of Mandarin Chinese: Evidence from Auditory Event-Related Brain Potentials

    ERIC Educational Resources Information Center

    Philipp, Markus; Bornkessel-Schlesewsky, Ina; Bisang, Walter; Schlesewsky, Matthias

    2008-01-01

    Two auditory ERP studies examined the role of animacy in sentence comprehension in Mandarin Chinese by comparing active and passive sentences in simple verb-final (Experiment 1) and relative clause constructions (Experiment 2). In addition to the voice manipulation (which modulated the assignment of actor and undergoer roles to the arguments),…

  8. Evidence from Auditory Nerve and Brainstem Evoked Responses for an Organic Brain Lesion in Children with Autistic Traits

    ERIC Educational Resources Information Center

    Student, M.; Sohmer, H.

    1978-01-01

    In an attempt to resolve the question as to whether children with autistic traits have an organic nervous system lesion, auditory nerve and brainstem evoked responses were recorded in a group of 15 children (4 to 12 years old) with autistic traits. (Author)

  9. Design and evaluation of area-efficient and wide-range impedance analysis circuit for multichannel high-quality brain signal recording system

    NASA Astrophysics Data System (ADS)

    Iwagami, Takuma; Tani, Takaharu; Ito, Keita; Nishino, Satoru; Harashima, Takuya; Kino, Hisashi; Kiyoyama, Koji; Tanaka, Tetsu

    2016-04-01

    To enable chronic and stable neural recording, we have been developing an implantable multichannel neural recording system with impedance analysis functions. One important requirement for high-quality neural signal recording is to maintain good interfaces between the recording electrodes and tissue. We have proposed an impedance analysis circuit with a very small circuit area, which is implemented in a multichannel neural recording and stimulating system. In this paper, we focused on the design of an impedance analysis circuit configuration and the evaluation of a minimal voltage measurement unit. The proposed circuit has a very small circuit area of 0.23 mm2 designed with 0.18 µm CMOS technology and can measure interface impedances between recording electrodes and tissue over an ultrawide range from 100 Ω to 10 MΩ. In addition, we also successfully acquired interface impedances using the proposed circuit in agarose gel experiments.
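    The wide impedance range quoted above reflects how strongly the electrode-tissue interface impedance varies with frequency. A minimal sketch of why, using a simplified Randles equivalent circuit (series spreading resistance, charge-transfer resistance shunted by a double-layer capacitance); all element values here are assumptions for illustration, not the paper's measurements:

```python
import math

def electrode_impedance(f_hz, r_s=1.0e3, r_ct=1.0e6, c_dl=10e-9):
    """Simplified Randles model of the electrode-tissue interface:
    spreading resistance r_s in series with the charge-transfer
    resistance r_ct shunted by the double-layer capacitance c_dl."""
    z_c = 1.0 / complex(0.0, 2.0 * math.pi * f_hz * c_dl)
    return r_s + (r_ct * z_c) / (r_ct + z_c)

# |Z| falls by orders of magnitude across the measurement band
mags = [abs(electrode_impedance(f)) for f in (10.0, 100.0, 1e3, 1e4)]
```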

  10. An auditory multiclass brain-computer interface with natural stimuli: Usability evaluation with healthy participants and a motor impaired end user.

    PubMed

    Simon, Nadine; Käthner, Ivo; Ruf, Carolin A; Pasqualotto, Emanuele; Kübler, Andrea; Halder, Sebastian

    2014-01-01

    Brain-computer interfaces (BCIs) can serve as muscle independent communication aids. Persons who are unable to control their eye muscles (e.g., in the completely locked-in state) or have severe visual impairments for other reasons need BCI systems that do not rely on the visual modality. For this reason, BCIs that employ auditory stimuli have been suggested. In this study, a multiclass BCI spelling system was implemented that uses animal voices with directional cues to code rows and columns of a letter matrix. To reveal possible training effects with the system, 11 healthy participants performed spelling tasks on 2 consecutive days. In a second step, the system was tested by a participant with amyotrophic lateral sclerosis (ALS) in two sessions. In the first session, healthy participants spelled with an average accuracy of 76% (3.29 bits/min) that increased to 90% (4.23 bits/min) on the second day. Spelling accuracy by the participant with ALS was 20% in the first and 47% in the second session. The results indicate a strong training effect for both the healthy participants and the participant with ALS. While healthy participants reached high accuracies in both sessions, accuracies for the participant with ALS were not sufficient for satisfactory communication in either session. More training sessions might be needed to improve spelling accuracies. The study demonstrated the feasibility of the auditory BCI with healthy users and stresses the importance of training with auditory multiclass BCIs, especially for potential end-users of BCI with disease. PMID:25620924
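    Accuracy/bitrate pairs like those above are conventionally derived with the Wolpaw information-transfer-rate formula. A sketch is given below; the 6 x 6 (36-target) matrix and the per-selection framing are assumptions, and the reported bits/min additionally depend on the selection rate, so these numbers are not expected to reproduce the paper's figures exactly:

```python
import math

def wolpaw_bits_per_selection(n, p):
    """Wolpaw information transfer rate per selection for n equiprobable
    targets and accuracy p, with errors spread evenly over n-1 targets."""
    if p >= 1.0:
        return math.log2(n)
    return (math.log2(n) + p * math.log2(p)
            + (1.0 - p) * math.log2((1.0 - p) / (n - 1)))

# multiply by selections per minute to obtain bits/min
day1 = wolpaw_bits_per_selection(36, 0.76)
day2 = wolpaw_bits_per_selection(36, 0.90)
```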

  12. Subcortical processing in auditory communication.

    PubMed

    Pannese, Alessia; Grandjean, Didier; Frühholz, Sascha

    2015-10-01

    The voice is a rich source of information, which the human brain has evolved to decode and interpret. Empirical observations have shown that the human auditory system is especially sensitive to the human voice, and that activity within the voice-sensitive regions of the primary and secondary auditory cortex is modulated by the emotional quality of the vocal signal, and may therefore subserve, with frontal regions, the cognitive ability to correctly identify the speaker's affective state. So far, the network involved in the processing of vocal affect has been mainly characterised at the cortical level. However, anatomical and functional evidence suggests that acoustic information relevant to the affective quality of the auditory signal might be processed prior to the auditory cortex. Here we review the animal and human literature on the main subcortical structures along the auditory pathway, and propose a model whereby the distinction between different types of vocal affect in auditory communication begins at very early stages of auditory processing, and relies on the analysis of individual acoustic features of the sound signal. We further suggest that this early feature-based decoding occurs at a subcortical level along the ascending auditory pathway, and provides a preliminary coarse (but fast) characterisation of the affective quality of the auditory signal before the more refined (but slower) cortical processing is completed.

  14. Origins of task-specific sensory-independent organization in the visual and auditory brain: neuroscience evidence, open questions and clinical implications.

    PubMed

    Heimler, Benedetta; Striem-Amit, Ella; Amedi, Amir

    2015-12-01

    Evidence of task-specific sensory-independent (TSSI) plasticity from blind and deaf populations has led to a better understanding of brain organization. However, the principles determining the origins of this plasticity remain unclear. We review recent data suggesting that a combination of the connectivity bias and sensitivity to task-distinctive features might account for TSSI plasticity in the sensory cortices as a whole, from the higher-order occipital/temporal cortices to the primary sensory cortices. We discuss current theories and evidence, open questions and related predictions. Finally, given the rapid progress in visual and auditory restoration techniques, we address the crucial need to develop effective rehabilitation approaches for sensory recovery.

  16. The Drosophila Auditory System

    PubMed Central

    Boekhoff-Falk, Grace; Eberl, Daniel F.

    2013-01-01

    Development of a functional auditory system in Drosophila requires specification and differentiation of the chordotonal sensilla of Johnston’s organ (JO) in the antenna, correct axonal targeting to the antennal mechanosensory and motor center (AMMC) in the brain, and synaptic connections to neurons in the downstream circuit. Chordotonal development in JO is functionally complicated by structural, molecular and functional diversity that is not yet fully understood, and construction of the auditory neural circuitry is only beginning to unfold. Here we describe our current understanding of developmental and molecular mechanisms that generate the exquisite functions of the Drosophila auditory system, emphasizing recent progress and highlighting important new questions arising from research on this remarkable sensory system. PMID:24719289

  17. Site of auditory plasticity in the brain stem (VLVp) of the owl revealed by early monaural occlusion.

    PubMed

    Mogdans, J; Knudsen, E I

    1994-12-01

    1. The optic tectum of the barn owl contains a physiological map of interaural level difference (ILD) that underlies, in part, its map of auditory space. Monaural occlusion shifts the range of ILDs experienced by an animal and alters the correspondence of ILDs with source locations. Chronic monaural occlusion during development induces an adaptive shift in the tectal ILD map that compensates for the effects of the earplug. The data presented in this study indicate that one site of plasticity underlying this adaptive adjustment is in the posterior division of the ventral nucleus of the lateral lemniscus (VLVp), the first site of ILD comparison in the auditory pathway. 2. Single and multiple unit sites were recorded in the optic tecta and VLVps of ketamine-anesthetized owls. The owls were raised from 4 wk of age with one ear occluded with an earplug. Auditory testing, using digitally synthesized dichotic stimuli, was carried out 8-16 wk later with the earplug removed. The adaptive adjustment in ILD coding in each bird was quantified as the shift from normal ILD tuning measured in the optic tectum. Evidence of adaptive adjustment in the VLVp was based on statistical differences between the VLVps ipsilateral and contralateral to the occluded ear in the sensitivity of units to excitatory-ear and inhibitory-ear stimulation. 3. The balance of excitatory to inhibitory influences on VLVp units was shifted in the adaptive direction in six out of eight owls. In three of these owls, adaptive differences in inhibition, but not in excitation, were found. For this group of owls, the patterns of response properties across the two VLVps can only be accounted for by plasticity in the VLVp. For the other three owls, the possibility that the difference between the two VLVps resulted from damage to one of the VLVps could not be eliminated, and for one of these, plasticity at a more peripheral site (in the cochlea or cochlear nucleus) could also explain the data. In the remaining two

  18. Sex, acceleration, brain imaging, and rhesus monkeys: Converging evidence for an evolutionary bias for looming auditory motion

    NASA Astrophysics Data System (ADS)

    Neuhoff, John G.

    2003-04-01

    Increasing acoustic intensity is a primary cue to looming auditory motion. Perceptual overestimation of increasing intensity could provide an evolutionary selective advantage by specifying that an approaching sound source is closer than actual, thus affording advanced warning and more time than expected to prepare for the arrival of the source. Here, multiple lines of converging evidence for this evolutionary hypothesis are presented. First, it is shown that intensity change specifying accelerating source approach changes in loudness more than equivalent intensity change specifying decelerating source approach. Second, consistent with evolutionary hunter-gatherer theories of sex-specific spatial abilities, it is shown that females have a significantly larger bias for rising intensity than males. Third, using functional magnetic resonance imaging in conjunction with approaching and receding auditory motion, it is shown that approaching sources preferentially activate a specific neural network responsible for attention allocation, motor planning, and translating perception into action. Finally, it is shown that rhesus monkeys also exhibit a rising intensity bias by orienting longer to looming tones than to receding tones. Together these results illustrate an adaptive perceptual bias that has evolved because it provides a selective advantage in processing looming acoustic sources. [Work supported by NSF and CDC.]
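    The physical basis of the looming cue is simple to sketch: under the inverse-square law, even a constant-velocity approach produces a level rise that accelerates as the source nears. The speed, starting distance, and time step below are illustrative assumptions:

```python
import math

def level_db(distance_m, ref_m=1.0):
    """Free-field level of a point source under the inverse-square law,
    expressed relative to the level at distance ref_m."""
    return -20.0 * math.log10(distance_m / ref_m)

# source approaching at a constant 5 m/s from 50 m away, sampled every 0.5 s
times = [0.5 * i for i in range(18)]           # stops before arrival
levels = [level_db(50.0 - 5.0 * t) for t in times]
rises = [b - a for a, b in zip(levels, levels[1:])]
```

    The per-step rises grow monotonically even though the approach velocity is constant, so rising intensity by itself already signals an accelerating threat profile.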

  19. Impact of Repetitive Transcranial Magnetic Stimulation (rTMS) on Brain Functional Marker of Auditory Hallucinations in Schizophrenia Patients

    PubMed Central

    Maïza, Olivier; Hervé, Pierre-Yve; Etard, Olivier; Razafimandimby, Annick; Montagne-Larmurier, Aurélie; Dollfus, Sonia

    2013-01-01

    Several cross-sectional functional Magnetic Resonance Imaging (fMRI) studies reported a negative correlation between auditory verbal hallucination (AVH) severity and amplitude of the activations during language tasks. The present study assessed the time course of this correlation and its possible structural underpinnings by combining structural, functional MRI and repetitive Transcranial Magnetic Stimulation (rTMS). Methods: Nine schizophrenia patients with AVH (evaluated with the Auditory Hallucination Rating scale; AHRS) and nine healthy participants underwent two sessions of an fMRI speech listening paradigm. Meanwhile, patients received high frequency (20 Hz) rTMS. Results: Before rTMS, activations were negatively correlated with AHRS in a left posterior superior temporal sulcus (pSTS) cluster, considered henceforward as a functional region of interest (fROI). After rTMS, activations in this fROI no longer correlated with AHRS. This decoupling was explained by a significant decrease of AHRS scores after rTMS that contrasted with a relative stability of cerebral activations. A voxel-based-morphometry analysis evidenced a cluster of the left pSTS where grey matter volume negatively correlated with AHRS before rTMS and positively correlated with activations in the fROI at both sessions. Conclusion: rTMS decreases the severity of AVH, thereby modifying the functional correlates of AVH that are underlain by grey matter abnormalities. PMID:24961421

  20. Effect Of Electromagnetic Waves Emitted From Mobile Phone On Brain Stem Auditory Evoked Potential In Adult Males.

    PubMed

    Singh, K

    2015-01-01

    The mobile phone (MP) is a commonly used communication tool. Electromagnetic waves (EMWs) emitted from MPs may have potential health hazards. So, it was planned to study the effect of EMWs emitted from the mobile phone on the brainstem auditory evoked potential (BAEP) in male subjects in the age group of 20-40 years. BAEPs were recorded using the standard 10-20 system of electrode placement and sound click stimuli of specified intensity, duration and frequency. The right ear was exposed to EMWs emitted from an MP for about 10 min. On comparison of before and after exposure to the MP in the right ear (found to be the dominating ear), there was a significant increase in the latency of the II, III (p < 0.05) and V (p < 0.001) waves and in the amplitude of the I-Ia wave (p < 0.05), and a decrease in the IPL of the III-V wave (p < 0.05) after exposure. But no significant change was found in the BAEP waves of the left ear before vs. after MP exposure. On comparison of the right ear (routinely exposed, as the dominating ear) and the left ear (not exposed to the MP) before exposure, the IPL of the III-V wave and the amplitude of V-Va were greater (p < 0.001) in the right ear, while the latencies of the III and IV waves were longer (p < 0.001) in the left ear. After exposure, the amplitude of V-Va was greater (p < 0.05) in the right ear than in the left ear. In conclusion, EMWs emitted from MPs affect the auditory evoked potential.

  2. Implicit learning of predictable sound sequences modulates human brain responses at different levels of the auditory hierarchy

    PubMed Central

    Lecaignard, Françoise; Bertrand, Olivier; Gimenez, Gérard; Mattout, Jérémie; Caclin, Anne

    2015-01-01

    Deviant stimuli, violating regularities in a sensory environment, elicit the mismatch negativity (MMN), largely described in the Event-Related Potential literature. While it is widely accepted that the MMN reflects more than basic change detection, a comprehensive description of mental processes modulating this response is still lacking. Within the framework of predictive coding, deviance processing is part of an inference process where prediction errors (the mismatch between incoming sensations and predictions established through experience) are minimized. In this view, the MMN is a measure of prediction error, which yields specific expectations regarding its modulations by various experimental factors. In particular, it predicts that the MMN should decrease as the occurrence of a deviance becomes more predictable. We conducted a passive oddball EEG study and manipulated the predictability of sound sequences by means of different temporal structures. Importantly, our design allows comparing mismatch responses elicited by predictable and unpredictable violations of a simple repetition rule and therefore departs from previous studies that investigate violations of different time-scale regularities. We observed a decrease of the MMN with predictability and interestingly, a similar effect at earlier latencies, within 70 ms after deviance onset. Following these pre-attentive responses, a reduced P3a was measured in the case of predictable deviants. We conclude that early and late deviance responses reflect prediction errors, triggering belief updating within the auditory hierarchy. Beside, in this passive study, such perceptual inference appears to be modulated by higher-level implicit learning of sequence statistical structures. Our findings argue for a hierarchical model of auditory processing where predictive coding enables implicit extraction of environmental regularities. PMID:26441602
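    The core predictive-coding claim (prediction error shrinks as deviants become predictable) can be illustrated with a toy model. This is purely an illustrative sketch, not the authors' model: deviant probability is estimated conditionally on the run length of preceding standards with Laplace-smoothed counts, and the surprise -log p at each deviant stands in for the prediction-error signal indexed by the MMN:

```python
import math
import random
from collections import defaultdict

def deviant_surprise(seq):
    """Surprise -log P(deviant | run of preceding standards) at each deviant,
    with the conditional probability learned from Laplace-smoothed counts."""
    counts = defaultdict(lambda: [1, 2])  # [deviants, trials] per run length
    run, surprises = 0, []
    for s in seq:
        dev, tot = counts[run]
        if s:
            surprises.append(-math.log(dev / tot))
        counts[run][0] += s
        counts[run][1] += 1
        run = 0 if s else run + 1
    return surprises

predictable = [0, 0, 0, 0, 1] * 40       # deviant after every 4 standards
unpredictable = predictable[:]
random.Random(0).shuffle(unpredictable)  # same deviant rate, no structure

s_pred = deviant_surprise(predictable)
s_rand = deviant_surprise(unpredictable)
mean_pred = sum(s_pred[20:]) / len(s_pred[20:])     # late, after learning
mean_unpred = sum(s_rand[20:]) / len(s_rand[20:])
```

    Once the temporal structure is learned, deviants in the structured sequence carry far less surprise than equally frequent but randomly placed deviants, mirroring the reported MMN attenuation with predictability.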

  3. Preferred EEG brain states at stimulus onset in a fixed interstimulus interval equiprobable auditory Go/NoGo task: a definitive study.

    PubMed

    Barry, Robert J; De Blasio, Frances M; De Pascalis, Vilfredo; Karamacoska, Diana

    2014-10-01

    This study examined the occurrence of preferred EEG phase states at stimulus onset in an equiprobable auditory Go/NoGo task with a fixed interstimulus interval, and their effects on the resultant event-related potentials (ERPs). We used a sliding short-time FFT decomposition of the EEG at Cz for each trial to assess prestimulus EEG activity in the delta, theta, alpha and beta bands. We determined the phase of each 2 Hz narrow-band contributing to these four broad bands at 125 ms before each stimulus onset, and for the first time, avoided contamination from poststimulus EEG activity. This phase value was extrapolated 125 ms to obtain the phase at stimulus onset, combined into the broad-band phase, and used to sort trials into four phase groups for each of the four broad bands. For each band, ERPs were derived for each phase from the raw EEG activity at 19 sites. Data sets from each band were separately decomposed using temporal Principal Components Analyses with unrestricted VARIMAX rotation to extract N1-1, PN, P2, P3, SW and LP components. Each component was analysed as a function of EEG phase at stimulus onset in the context of a simple conceptualisation of orthogonal phase effects (cortical negativity vs. positivity, negative driving vs. positive driving, waxing vs. waning). The predicted non-random occurrence of phase-defined brain states was confirmed. The preferred states of negativity, negative driving, and waxing were each associated with more efficient stimulus processing, as reflected in amplitude differences of the components. The present results confirm the existence of preferred brain states and their impact on the efficiency of brain dynamics in perceptual and cognitive processing. PMID:25043955
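    The key methodological point above, estimating phase at stimulus onset from prestimulus data only and extrapolating forward by 125 ms, can be sketched as follows. This is an illustrative reconstruction under assumed parameters (256 Hz sampling, a 4 Hz narrow band, a pure-tone test epoch), not the authors' exact FFT pipeline:

```python
import numpy as np

def band_phase(epoch, fs, f0, halfwidth=1.0):
    """Instantaneous phase of a 2 Hz-wide band via the analytic signal
    (FFT with negative frequencies zeroed, band-pass applied in-spectrum)."""
    n = len(epoch)
    spec = np.fft.fft(epoch)
    freqs = np.fft.fftfreq(n, 1.0 / fs)
    analytic_spec = np.zeros_like(spec)
    keep = (freqs > 0) & (np.abs(freqs - f0) <= halfwidth)
    analytic_spec[keep] = 2.0 * spec[keep]
    return np.angle(np.fft.ifft(analytic_spec))

def wrap(phi):
    return (phi + np.pi) % (2.0 * np.pi) - np.pi

fs, f0, phi0 = 256, 4.0, 0.7                 # assumed values
t = np.arange(fs) / fs - 1.0                 # 1 s of EEG ending at onset
epoch = np.cos(2.0 * np.pi * f0 * t + phi0)  # known phase phi0 at onset
i_est = fs - int(0.125 * fs)                 # sample at t = -125 ms
# measure phase 125 ms before onset, then extrapolate by 2*pi*f0*0.125
phase_onset = wrap(band_phase(epoch, fs, f0)[i_est] + 2.0 * np.pi * f0 * 0.125)
quadrant = int((phase_onset + np.pi) // (np.pi / 2.0)) % 4  # trial-sorting bin
```

    Because only samples up to -125 ms enter the estimate, the extrapolated onset phase is uncontaminated by poststimulus activity, which is the methodological advance the abstract emphasizes.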

  4. List mode multichannel analyzer

    SciTech Connect

    Archer, Daniel E.; Luke, S. John; Mauger, G. Joseph; Riot, Vincent J.; Knapp, David A.

    2007-08-07

    A digital list mode multichannel analyzer (MCA) is built around a programmable FPGA device for onboard data analysis and on-the-fly modification of system detection/operating parameters. It is capable of collecting and processing data in very small time bins (<1 millisecond) when used in histogramming mode, or of recording individual events when used in list mode.
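    The distinction between the two modes can be sketched briefly: list mode stores one (timestamp, channel) record per event, and a spectrum for any time window is rebinned offline, whereas histogramming mode commits counts to fixed bins during acquisition. The event values below are made up for illustration:

```python
from collections import Counter

# list-mode data: one (timestamp_s, adc_channel) record per detected event
events = [(12.5e-6, 412), (13.1e-6, 97), (15.0e-6, 412), (2.3e-3, 97)]

def spectrum(events, t0, t1):
    """Rebin list-mode records into an MCA spectrum (channel -> counts)
    for any chosen time window t0 <= t < t1."""
    return Counter(ch for t, ch in events if t0 <= t < t1)

early = spectrum(events, 0.0, 1e-3)  # sub-millisecond window, chosen offline
```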

  5. Multichannel compressive sensing MRI using noiselet encoding.

    PubMed

    Pawar, Kamlesh; Egan, Gary; Zhang, Jingxin

    2015-01-01

    The incoherence between measurement and sparsifying transform matrices and the restricted isometry property (RIP) of measurement matrix are two of the key factors in determining the performance of compressive sensing (CS). In CS-MRI, the randomly under-sampled Fourier matrix is used as the measurement matrix and the wavelet transform is usually used as sparsifying transform matrix. However, the incoherence between the randomly under-sampled Fourier matrix and the wavelet matrix is not optimal, which can deteriorate the performance of CS-MRI. Using the mathematical result that noiselets are maximally incoherent with wavelets, this paper introduces the noiselet unitary bases as the measurement matrix to improve the incoherence and RIP in CS-MRI. Based on an empirical RIP analysis that compares the multichannel noiselet and multichannel Fourier measurement matrices in CS-MRI, we propose a multichannel compressive sensing (MCS) framework to take the advantage of multichannel data acquisition used in MRI scanners. Simulations are presented in the MCS framework to compare the performance of noiselet encoding reconstructions and Fourier encoding reconstructions at different acceleration factors. The comparisons indicate that multichannel noiselet measurement matrix has better RIP than that of its Fourier counterpart, and that noiselet encoded MCS-MRI outperforms Fourier encoded MCS-MRI in preserving image resolution and can achieve higher acceleration factors. To demonstrate the feasibility of the proposed noiselet encoding scheme, a pulse sequence with tailored spatially selective RF excitation pulses was designed and implemented on a 3T scanner to acquire the data in the noiselet domain from a phantom and a human brain. The results indicate that noiselet encoding preserves image resolution better than Fourier encoding. PMID:25965548
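    The incoherence concept above can be made concrete via the mutual coherence of two orthonormal bases, which ranges from 1 (maximally incoherent) to sqrt(n). As a sketch, the classic spike/Fourier pair is used here in place of the wavelet/noiselet pair discussed in the abstract, since it attains both extremes in a few lines:

```python
import numpy as np

def mutual_coherence(phi, psi):
    """Mutual coherence sqrt(n) * max |<phi_i, psi_j>| between two
    orthonormal bases given as rows of n x n matrices."""
    n = phi.shape[0]
    return np.sqrt(n) * np.abs(phi @ psi.conj().T).max()

n = 64
spike = np.eye(n)                                        # identity basis
k = np.arange(n)
fourier = np.exp(-2j * np.pi * np.outer(k, k) / n) / np.sqrt(n)

mu_best = mutual_coherence(fourier, spike)   # maximally incoherent pair
mu_worst = mutual_coherence(spike, spike)    # maximally coherent pair
```

    Lower mutual coherence between the measurement and sparsifying bases permits recovery from fewer samples, which is the motivation for replacing Fourier encoding with noiselet encoding when the sparsifying transform is a wavelet.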

  8. Maps of the Auditory Cortex.

    PubMed

    Brewer, Alyssa A; Barton, Brian

    2016-07-01

    One of the fundamental properties of the mammalian brain is that sensory regions of cortex are formed of multiple, functionally specialized cortical field maps (CFMs). Each CFM comprises two orthogonal topographical representations, reflecting two essential aspects of sensory space. In auditory cortex, auditory field maps (AFMs) are defined by the combination of tonotopic gradients, representing the spectral aspects of sound (i.e., tones), with orthogonal periodotopic gradients, representing the temporal aspects of sound (i.e., period or temporal envelope). Converging evidence from cytoarchitectural and neuroimaging measurements underlies the definition of 11 AFMs across core and belt regions of human auditory cortex, with likely homology to those of macaque. On a macrostructural level, AFMs are grouped into cloverleaf clusters, an organizational structure also seen in visual cortex. Future research can now use these AFMs to investigate specific stages of auditory processing, key for understanding behaviors such as speech perception and multimodal sensory integration. PMID:27145914

  9. Decreases in energy and increases in phase locking of event-related oscillations to auditory stimuli occur during adolescence in human and rodent brain.

    PubMed

    Ehlers, Cindy L; Wills, Derek N; Desikan, Anita; Phillips, Evelyn; Havstad, James

    2014-01-01

    Synchrony of phase (phase locking) of event-related oscillations (EROs) within and between different brain areas has been suggested to reflect communication exchange between neural networks and as such may be a sensitive and translational measure of changes in brain remodeling that occur during adolescence. This study sought to investigate developmental changes in EROs using a similar auditory event-related potential (ERP) paradigm in both rats and humans. Energy and phase variability of EROs collected from 38 young adult men (aged 18-25 years), 33 periadolescent boys (aged 10-14 years), 15 male periadolescent rats [at postnatal day (PD) 36] and 19 male adult rats (at PD103) were investigated. Three channels of ERP data (frontal cortex, central cortex and parietal cortex) were collected from the humans using an 'oddball plus noise' paradigm that was presented under passive (no behavioral response required) conditions in the periadolescents and under active conditions (where each subject was instructed to depress a counter each time he detected an infrequent target tone) in adults and adolescents. ERPs were recorded in rats using only the passive paradigm. In order to compare the tasks used in rats to those used in humans, we first studied whether three ERO measures [energy, phase locking index (PLI) within an electrode site and phase difference locking index (PDLI) between different electrode sites] differentiated the 'active' from 'passive' ERP tasks. Secondly, we explored our main question of whether the three ERO measures differentiated adults from periadolescents in a similar manner in both humans and rats. No significant changes were found in measures of ERO energy between the active and passive tasks in the periadolescent human participants. There was a smaller but significant increase in PLI but not PDLI as a function of active task requirements. Developmental differences were found in energy, PLI and PDLI values between the periadolescents and adults in
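The three ERO measures named in this record can be sketched compactly (a minimal illustration, not the authors' pipeline): per-trial phase at a target frequency is taken from the nearest DFT bin, PLI is the resultant length of the unit phasors across trials, and PDLI applies the same statistic to between-site phase differences.

```python
import numpy as np

def trial_phases(trials, fs, f0):
    # per-trial phase at the DFT bin nearest the target frequency f0
    n = trials.shape[1]
    k = int(round(f0 * n / fs))
    return np.angle(np.fft.rfft(trials, axis=1)[:, k])

def pli(phases):
    # phase-locking index: length of the mean unit phasor across trials (0..1)
    return float(np.abs(np.mean(np.exp(1j * np.asarray(phases)))))

def pdli(phases_a, phases_b):
    # phase-difference locking index between two electrode sites
    return pli(np.asarray(phases_a) - np.asarray(phases_b))

rng = np.random.default_rng(0)
t = np.arange(256) / 256.0
locked = np.sin(2 * np.pi * 8.0 * t) + 0.1 * rng.standard_normal((200, 256))
unlocked = rng.standard_normal((200, 256))
print(pli(trial_phases(locked, 256, 8.0)))    # near 1: phase repeats across trials
print(pli(trial_phases(unlocked, 256, 8.0)))  # near 0: phases are random
```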

  11. Differential brain glucose metabolic patterns in antipsychotic-naive first-episode schizophrenia with and without auditory verbal hallucinations

    PubMed Central

    Horga, Guillermo; Parellada, Eduard; Lomeña, Francisco; Fernández-Egea, Emilio; Mané, Anna; Font, Mireia; Falcón, Carles; Konova, Anna B.; Pavia, Javier; Ros, Domènec; Bernardo, Miguel

    2011-01-01

    Background Auditory verbal hallucinations (AVHs) are a core symptom of schizophrenia. Previous reports on neural activity patterns associated with AVHs are inconsistent, arguably owing to the lack of an adequate control group (i.e., patients with similar characteristics but without AVHs) and neglect of the potential confounding effects of medication. Methods The current study was conducted in a homogeneous group of patients with schizophrenia to assess whether the presence or absence of AVHs was associated with differential regional cerebral glucose metabolic patterns. We investigated differences between patients with commenting AVHs and patients without AVHs among a group of dextral antipsychotic-naive inpatients with acute first-episode schizophrenia examined with [18F]fluorodeoxyglucose positron emission tomography (FDG-PET) at rest. Univariate and multivariate approaches were used to establish between-group differences. Results We included 9 patients with AVHs and 7 patients without AVHs in this study. Patients experiencing AVHs during FDG uptake had significantly higher metabolic rates in the left superior and middle temporal cortices, bilateral superior medial frontal cortex and left caudate nucleus (cluster level p < 0.005, family wise error–corrected, and bootstrap ratio > 3.3, respectively). Additionally, the multivariate method identified hippocampal–parahippocampal, cerebellar and parietal relative hypoactivity during AVHs in both hemispheres (bootstrap ratio < −3.3). Limitations The FDG-PET imaging technique does not provide information regarding the temporal course of neural activity. The limited sample size may have increased the risk of false-negative findings. Conclusion Our results indicate that AVHs in patients with schizophrenia may be mediated by an alteration of neural pathways responsible for normal language function. Our findings also point to the potential role of the dominant caudate nucleus and the parahippocampal gyri in the

  12. Brain Dynamics of Aging: Multiscale Variability of EEG Signals at Rest and during an Auditory Oddball Task(1,2,3).

    PubMed

    Sleimen-Malkoun, Rita; Perdikis, Dionysios; Müller, Viktor; Blanc, Jean-Luc; Huys, Raoul; Temprado, Jean-Jacques; Jirsa, Viktor K

    2015-01-01

    The present work focused on the study of fluctuations of cortical activity across time scales in young and older healthy adults. The main objective was to offer a comprehensive characterization of the changes of brain (cortical) signal variability during aging, and to make the link with known underlying structural, neurophysiological, and functional modifications, as well as aging theories. We analyzed electroencephalogram (EEG) data of young and elderly adults, which were collected at resting state and during an auditory oddball task. We used a wide battery of metrics that typically are separately applied in the literature, and we compared them with more specific ones that address their limits. Our procedure aimed to overcome some of the methodological limitations of earlier studies and verify whether previous findings can be reproduced and extended to different experimental conditions. In both rest and task conditions, our results mainly revealed that EEG signals presented systematic age-related changes that were time-scale-dependent with regard to the structure of fluctuations (complexity) but not with regard to their magnitude. Namely, compared with young adults, the cortical fluctuations of the elderly were more complex at shorter time scales, but less complex at longer scales, although always showing a lower variance. Additionally, the elderly showed signs of dedifferentiation, both spatially and across experimental conditions. By integrating these so far isolated findings across time scales, metrics, and conditions, the present study offers an overview of age-related changes in the fluctuations of electrocortical activity while making the link with underlying brain dynamics. PMID:26464983
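The scale-dependent complexity findings in this record rest on entropy estimated over coarse-grained versions of a signal. Below is a compact sketch of multiscale sample entropy with illustrative parameter choices (m = 2, r = 0.2 × SD); it is not the authors' exact battery of metrics.

```python
import numpy as np

def coarse_grain(x, scale):
    # average consecutive non-overlapping windows of length `scale`
    n = len(x) // scale
    return np.asarray(x[:n * scale]).reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r_frac=0.2):
    # SampEn: -log of the ratio of (m+1)- to m-length template matches
    x = np.asarray(x, dtype=float)
    r = r_frac * np.std(x)
    def matches(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        c = 0
        for i in range(len(t)):
            d = np.max(np.abs(t - t[i]), axis=1)   # Chebyshev distance
            c += int(np.sum(d <= r)) - 1           # exclude self-match
        return c
    return -np.log(matches(m + 1) / matches(m))

def multiscale_entropy(x, scales=(1, 2, 4, 8)):
    # entropy of progressively coarser views of the signal
    return [sample_entropy(coarse_grain(x, s)) for s in scales]
```

A regular (periodic) signal yields near-zero sample entropy, while white noise scores high at scale 1; the age-related crossover described above corresponds to how these values shift across scales.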

  13. Brain Dynamics of Aging: Multiscale Variability of EEG Signals at Rest and during an Auditory Oddball Task1,2,3

    PubMed Central

    Sleimen-Malkoun, Rita; Perdikis, Dionysios; Müller, Viktor; Blanc, Jean-Luc; Huys, Raoul; Temprado, Jean-Jacques

    2015-01-01

    The present work focused on the study of fluctuations of cortical activity across time scales in young and older healthy adults. The main objective was to offer a comprehensive characterization of the changes of brain (cortical) signal variability during aging, and to make the link with known underlying structural, neurophysiological, and functional modifications, as well as aging theories. We analyzed electroencephalogram (EEG) data of young and elderly adults, which were collected at resting state and during an auditory oddball task. We used a wide battery of metrics that typically are separately applied in the literature, and we compared them with more specific ones that address their limits. Our procedure aimed to overcome some of the methodological limitations of earlier studies and verify whether previous findings can be reproduced and extended to different experimental conditions. In both rest and task conditions, our results mainly revealed that EEG signals presented systematic age-related changes that were time-scale-dependent with regard to the structure of fluctuations (complexity) but not with regard to their magnitude. Namely, compared with young adults, the cortical fluctuations of the elderly were more complex at shorter time scales, but less complex at longer scales, although always showing a lower variance. Additionally, the elderly showed signs of dedifferentiation, both spatially and across experimental conditions. By integrating these so far isolated findings across time scales, metrics, and conditions, the present study offers an overview of age-related changes in the fluctuations of electrocortical activity while making the link with underlying brain dynamics. PMID:26464983

  14. Auditory pathways: are 'what' and 'where' appropriate?

    PubMed

    Hall, Deborah A

    2003-05-13

    New evidence confirms that the auditory system encompasses temporal, parietal and frontal brain regions, some of which partly overlap with the visual system. But common assumptions about the functional homologies between sensory systems may be misleading. PMID:12747854

  15. A subfemtotesla multichannel atomic magnetometer.

    PubMed

    Kominis, I K; Kornack, T W; Allred, J C; Romalis, M V

    2003-04-10

    The magnetic field is one of the most fundamental and ubiquitous physical observables, carrying information about all electromagnetic phenomena. For the past 30 years, superconducting quantum interference devices (SQUIDs) operating at 4 K have been unchallenged as ultrahigh-sensitivity magnetic field detectors, with a sensitivity reaching down to 1 fT Hz(-1/2) (1 fT = 10(-15) T). They have enabled, for example, mapping of the magnetic fields produced by the brain, and localization of the underlying electrical activity (magnetoencephalography). Atomic magnetometers, based on detection of Larmor spin precession of optically pumped atoms, have approached similar levels of sensitivity using large measurement volumes, but have much lower sensitivity in the more compact designs required for magnetic imaging applications. Higher sensitivity and spatial resolution combined with non-cryogenic operation of atomic magnetometers would enable new applications, including the possibility of mapping non-invasively the cortical modules in the brain. Here we describe a new spin-exchange relaxation-free (SERF) atomic magnetometer, and demonstrate magnetic field sensitivity of 0.54 fT Hz(-1/2) with a measurement volume of only 0.3 cm3. Theoretical analysis shows that fundamental sensitivity limits of this device are below 0.01 fT Hz(-1/2). We also demonstrate simple multichannel operation of the magnetometer, and localization of magnetic field sources with a resolution of 2 mm.

  16. Multichannel Human Body Communication

    NASA Astrophysics Data System (ADS)

    Przystup, Piotr; Bujnowski, Adam; Wtorek, Jerzy

    2016-01-01

    Human Body Communication is an attractive alternative to traditional wireless communication (Bluetooth, ZigBee) in the case of Body Sensor Networks. Low power, high data rates, and data security make it an ideal solution for medical applications. In this paper, signal attenuation at different frequencies, using FR4 electrodes, has been investigated. The performance of single- and multichannel transmission with frequency modulation of an analog signal has been tested. Experimental results show that HBC is a feasible solution for transmitting data between BSN nodes.

  17. Miniature multichannel biotelemeter system

    NASA Technical Reports Server (NTRS)

    Carraway, J. B.; Sumida, J. T. (Inventor)

    1974-01-01

    A miniature multichannel biotelemeter system is described. The system includes a transmitter where signals from different sources are sampled to produce a wavetrain of pulses. The transmitter also separates signals by sync pulses. The pulses amplitude modulate a radio frequency carrier which is received at a receiver unit. There the sync pulses are detected by a demultiplexer which routes the pulses from each different source to a separate output channel where the pulses are used to reconstruct the signals from the particular source.
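The sample-and-sync framing this record describes is classic time-division multiplexing. A toy sketch of the transmitter/receiver logic follows; representing the sync pulse as an out-of-band amplitude value is a simplifying assumption (the real system distinguishes sync pulses in hardware).

```python
def multiplex(channels, sync=-1.0):
    # transmitter side: frame = [sync, ch0 sample, ch1 sample, ...]
    stream = []
    for samples in zip(*channels):
        stream.append(sync)
        stream.extend(samples)
    return stream

def demultiplex(stream, n_channels, sync=-1.0):
    # receiver side: detect sync, then route samples to per-channel outputs
    out = [[] for _ in range(n_channels)]
    i = 0
    while i < len(stream):
        if stream[i] != sync:
            raise ValueError("lost frame sync")
        for c in range(n_channels):
            out[c].append(stream[i + 1 + c])
        i += 1 + n_channels
    return out

channels = [[1, 2, 3], [4, 5, 6]]
print(demultiplex(multiplex(channels), 2))  # recovers [[1, 2, 3], [4, 5, 6]]
```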

  18. Marine Multichannel Seismology Workshop

    NASA Astrophysics Data System (ADS)

    Detrick, Bob

    1984-04-01

    The multichannel seismic (MCS) reflection technique, developed by the oil industry for petroleum exploration in sedimentary basins, has proven to be a powerful tool for imaging subsurface geology in a wide variety of tectonic settings at a scale suitable for detailed investigations of geological structures and processes. In the ocean basins, MCS studies have provided new insight into the tectonic history of rifted and convergent continental margins, the structure of the oceanic crust and midocean ridges, and the sedimentation history and paleoceanography of deep ocean basins. MCS techniques have thus developed into an important tool for marine geological and geophysical research.The National Science Foundation recently sponsored a Workshop on the Future of Academic Marine Multichannel Seismology in the United States, held in Boulder, Colo., on March 19-20, 1984, to review the current state of marine academic MCS in the United States and to make recommendations on the facilities and funding required to meet future scientific needs. The workshop, which was convened by Brian T.R. Lewis of the University of Washington, included 19 scientists representing the major U.S. oceanographic institutions with interests in marine seismic work. This article summarizes the major recommendations developed at this workshop, which have been included in a more comprehensive report entitled ‘A National Plan for Marine Multichannel Seismology,’ which has been submitted to the National Science Foundation for future publication.

  19. Multichannel extracochlear implant.

    PubMed

    Pulec, J L; Smith, J C; Lewis, M L; Hortmann, G

    1989-03-01

    The transcutaneous eight-channel extracochlear implant has undergone continuous revision to simplify the surgical technique, to minimize patient morbidity, and to improve performance. The extracochlear electrode array has been miniaturized so that it can be inserted through the facial recess without disturbing the external auditory canal, tympanic membrane, or malleus. The use of the remote antenna placed around the external auditory canal has greatly increased battery life and patient comfort. With its simplified incisions, the surgical procedure can be performed as out-patient surgery. Preoperative cochlear nerve testing and use of evoked response cochlear nerve testing allow preadjustment of the speech processor. Current features and performance of the implant are discussed.

  20. Auditory synesthesias.

    PubMed

    Afra, Pegah

    2015-01-01

    Synesthesia is experienced when sensory stimulation of one sensory modality (the inducer) elicits an involuntary or automatic sensation in another sensory modality or different aspect of the same sensory modality (the concurrent). Auditory synesthesias (AS) occur when auditory stimuli trigger a variety of concurrents, or when non-auditory sensory stimulations trigger auditory synesthetic perception. The AS are divided into three types: developmental, acquired, and induced. Developmental AS are not a neurologic disorder but a different way of experiencing one's environment. They are involuntary and highly consistent experiences throughout one's life. Acquired AS have been reported in association with neurologic diseases that cause deafferentation of anterior optic pathways, with pathologic lesions affecting the central nervous system (CNS) outside of the optic pathways, as well as non-lesional cases associated with migraine, and epilepsy. It also has been reported with mood disorders as well as a single idiopathic case. Induced AS has been reported in experimental and postsurgical blindfolding, as well as intake of hallucinogens or psychedelics. In this chapter the three different types of synesthesia, their characteristics and phenomenologic differences, as well as their possible neural mechanisms, are discussed. PMID:25726281

  1. Auditory system

    NASA Technical Reports Server (NTRS)

    Ades, H. W.

    1973-01-01

    The physical correlates of hearing, i.e., the acoustic stimuli, are reported. The auditory system, consisting of external ear, middle ear, inner ear, organ of Corti, basilar membrane, hair cells, inner hair cells, outer hair cells, innervation of hair cells, and transducer mechanisms, is discussed. Both conductive and sensorineural hearing losses are also examined.

  2. Harmonic Training and the Formation of Pitch Representation in a Neural Network Model of the Auditory Brain.

    PubMed

    Ahmad, Nasir; Higgins, Irina; Walker, Kerry M M; Stringer, Simon M

    2016-01-01

    Attempting to explain the perceptual qualities of pitch has proven to be, and remains, a difficult problem. The wide range of sounds which elicit pitch and a lack of agreement across neurophysiological studies on how pitch is encoded by the brain have made this attempt more difficult. In describing the potential neural mechanisms by which pitch may be processed, a number of neural networks have been proposed and implemented. However, no unsupervised neural networks with biologically accurate cochlear inputs have yet been demonstrated. This paper proposes a simple system in which pitch representing neurons are produced in a biologically plausible setting. Purely unsupervised regimes of neural network learning are implemented and these prove to be sufficient in identifying the pitch of sounds with a variety of spectral profiles, including sounds with missing fundamental frequencies and iterated rippled noises. PMID:27047368
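The record's model is an unsupervised neural network, but the missing-fundamental phenomenon it tests can be illustrated with a self-contained classical baseline: an autocorrelation pitch estimator recovers the fundamental period even when the fundamental frequency itself is absent from the spectrum. This sketch is a contrast to, not a reproduction of, the paper's method.

```python
import numpy as np

def pitch_autocorr(x, fs, fmin=80.0, fmax=400.0):
    # pick the autocorrelation peak within the allowed lag range; the peak
    # sits at the waveform period, so the fundamental is recovered even
    # when it is missing from the spectrum
    x = np.asarray(x, dtype=float) - np.mean(x)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0..n-1
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + int(np.argmax(ac[lo:hi + 1]))
    return fs / lag

fs = 8000
t = np.arange(int(0.1 * fs)) / fs
# harmonics 2-5 of 200 Hz, with the 200 Hz fundamental itself missing
x = sum(np.sin(2 * np.pi * f * t) for f in (400, 600, 800, 1000))
print(pitch_autocorr(x, fs))  # ~200.0 Hz despite no energy at 200 Hz
```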

  3. Harmonic Training and the Formation of Pitch Representation in a Neural Network Model of the Auditory Brain

    PubMed Central

    Ahmad, Nasir; Higgins, Irina; Walker, Kerry M. M.; Stringer, Simon M.

    2016-01-01

    Attempting to explain the perceptual qualities of pitch has proven to be, and remains, a difficult problem. The wide range of sounds which elicit pitch and a lack of agreement across neurophysiological studies on how pitch is encoded by the brain have made this attempt more difficult. In describing the potential neural mechanisms by which pitch may be processed, a number of neural networks have been proposed and implemented. However, no unsupervised neural networks with biologically accurate cochlear inputs have yet been demonstrated. This paper proposes a simple system in which pitch representing neurons are produced in a biologically plausible setting. Purely unsupervised regimes of neural network learning are implemented and these prove to be sufficient in identifying the pitch of sounds with a variety of spectral profiles, including sounds with missing fundamental frequencies and iterated rippled noises. PMID:27047368

  4. Impaired auditory selective attention ameliorated by cognitive training with graded exposure to noise in patients with traumatic brain injury.

    PubMed

    Dundon, Neil M; Dockree, Suvi P; Buckley, Vanessa; Merriman, Niamh; Carton, Mary; Clarke, Sarah; Roche, Richard A P; Lalor, Edmund C; Robertson, Ian H; Dockree, Paul M

    2015-08-01

    Patients who suffer traumatic brain injury frequently report difficulty concentrating on tasks and completing routine activities in noisy and distracting environments. Such impairments can have long-term negative psychosocial consequences. A cognitive control function that may underlie this impairment is the capacity to select a goal-relevant signal for further processing while safeguarding it from irrelevant noise. A paradigmatic investigation of this problem was undertaken using a dichotic listening task (study 1) in which comprehension of a stream of speech to one ear was measured in the context of increasing interference from a second stream of irrelevant speech to the other ear. Controls showed an initial decline in performance in the presence of competing speech but thereafter showed adaptation to increasing audibility of irrelevant speech, even at the highest levels of noise. By contrast, patients showed linear decline in performance with increasing noise. Subsequently attempts were made to ameliorate this deficit (study 2) using a cognitive training procedure based on attention process training (APT) that included graded exposure to irrelevant noise over the course of training. Patients were assigned to adaptive and non-adaptive training schedules or to a no-training control group. Results showed that both types of training drove improvements in the dichotic listening and in naturalistic tasks of performance in noise. Improvements were also seen on measures of selective attention in the visual domain suggesting transfer of training. We also observed augmentation of event-related potentials (ERPs) linked to target processing (P3b) but no change in ERPs evoked by distractor stimuli (P3a) suggesting that training heightened tuning of target signals, as opposed to gating irrelevant noise. No changes in any of the above measures were observed in a no-training control group. Together these findings present an ecologically valid approach to measure selective

  5. Auditory short-term memory in the primate auditory cortex.

    PubMed

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory.

  6. Phonetic perceptual identification by native- and second-language speakers differentially activates brain regions involved with acoustic phonetic processing and those involved with articulatory-auditory/orosensory internal models.

    PubMed

    Callan, Daniel E; Jones, Jeffery A; Callan, Akiko M; Akahane-Yamada, Reiko

    2004-07-01

    This experiment investigates neural processes underlying perceptual identification of the same phonemes for native- and second-language speakers. A model is proposed implicating the use of articulatory-auditory and articulatory-orosensory mappings to facilitate perceptual identification under conditions in which the phonetic contrast is ambiguous, as in the case of second-language speakers. In contrast, native-language speakers are predicted to use auditory-based phonetic representations to a greater extent for perceptual identification than second-language speakers. The English /r-l/ phonetic contrast, although easy for native English speakers, is extremely difficult for native Japanese speakers who learned English as a second language after childhood. Twenty-two native English and twenty-two native Japanese speakers participated in this study. While undergoing event-related fMRI, subjects were aurally presented with syllables starting with a /r/, /l/, or a vowel and were required to rapidly identify the phoneme perceived by pushing one of three buttons with the left thumb. Consistent with the proposed model, the results show greater activity for second- over native-language speakers during perceptual identification of /r/ and /l/ relative to vowels in brain regions implicated with instantiating forward and inverse articulatory-auditory and articulatory-orosensory models [Broca's area, anterior insula, anterior superior temporal sulcus/gyrus (STS/G), planum temporale (PT), superior temporal parietal area (Stp), SMG, and cerebellum]. The results further show that activity in brain regions implicated with instantiating these internal models is correlated with better /r/ and /l/ identification performance for second-language speakers. Greater activity found for native-language speakers especially in the anterior STG/S for /r/ and /l/ perceptual identification is consistent with the hypothesis that native-language speakers use auditory phonetic representations more

  7. Auditory serial position effects in story retelling for non-brain-injured participants and persons with aphasia.

    PubMed

    Brodsky, Martin B; McNeil, Malcolm R; Doyle, Patrick J; Fossett, Tepanata R D; Timm, Neil H; Park, Grace H

    2003-10-01

    Using story retelling as an index of language ability, it is difficult to disambiguate comprehension and memory deficits. Collecting data on the serial position effect (SPE), however, illuminates the memory component. This study examined the SPE of the percentage of information units (%IU) produced in the connected speech samples of adults with aphasia and age-matched, non-brain-injured (NBI) participants. The NBI participants produced significantly more direct and alternate IUs than participants with aphasia. Significant age and gender differences were found in subsamples of the NBI controls, with younger and female participants generating significantly more direct IUs than male and older NBI participants. Alternate IU productions did not generate an SPE from any group. There was a significant linear increase from the initial (primacy) to the final (recency) portion of the recalled alternate IUs for both the NBI group and the group of participants with aphasia. Results provide evidence that individuals with aphasia recall discourse length information using similar memory functions as the nonimpaired population, though at a reduced level of efficiency or quantity. A quadratic model is suggested for the recall of information directly recalled from discourse-length language material.
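The quadratic serial-position model suggested in this record can be fit directly with a least-squares polynomial. The %IU values below are hypothetical, chosen only to show the U-shaped primacy/recency profile the abstract describes.

```python
import numpy as np

positions = np.arange(1, 11)  # serial position within the story
# hypothetical %IU recall values showing primacy (early) and recency (late)
pct_iu = np.array([72, 60, 51, 46, 44, 45, 48, 54, 61, 70], dtype=float)

a, b, c = np.polyfit(positions, pct_iu, 2)   # least-squares quadratic fit
vertex = -b / (2.0 * a)                      # serial position of poorest recall
print(a > 0, round(vertex, 1))               # upward-opening parabola, mid-list minimum
```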

  8. Fractional channel multichannel analyzer

    DOEpatents

    Brackenbush, Larry W.; Anderson, Gordon A.

    1994-01-01

    A multichannel analyzer incorporating the features of the present invention obtains the effect of fractional channels thus greatly reducing the number of actual channels necessary to record complex line spectra. This is accomplished by using an analog-to-digital converter in the asynchronous mode, i.e., the gate pulse from the pulse height-to-pulse width converter is not synchronized with the signal from a clock oscillator. This saves power and reduces the number of components required on the board to achieve the effect of radically expanding the number of channels without changing the circuit board.

  9. Multichannel interval timer (MINT)

    SciTech Connect

    Kimball, K.B.

    1982-06-01

    A prototype Multichannel INterval Timer (MINT) has been built for measuring signal Time of Arrival (TOA) from sensors placed in blast environments. The MINT is intended to reduce the space, equipment costs, and data reduction efforts associated with traditional analog TOA recording methods, making it more practical to field the large arrays of TOA sensors required to characterize blast environments. This document describes the MINT design features, provides the information required for installing and operating the system, and presents proposed improvements for the next generation system.

  10. Sampled sinusoidal stimulation profile and multichannel fuzzy logic classification for monitor-based phase-coded SSVEP brain-computer interfacing

    NASA Astrophysics Data System (ADS)

    Manyakov, Nikolay V.; Chumerin, Nikolay; Robben, Arne; Combaz, Adrien; van Vliet, Marijn; Van Hulle, Marc M.

    2013-06-01

    Objective. The performance and usability of brain-computer interfaces (BCIs) can be improved by new paradigms, stimulation methods, decoding strategies, sensor technology etc. In this study we introduce new stimulation and decoding methods for electroencephalogram (EEG)-based BCIs that have targets flickering at the same frequency but with different phases. Approach. The phase information is estimated from the EEG data, and used for target command decoding. All visual stimulation is done on a conventional (60-Hz) LCD screen. Instead of the ‘on/off’ visual stimulation, commonly used in phase-coded BCI, we propose one based on a sampled sinusoidal intensity profile. In order to fully exploit the circular nature of the evoked phase response, we introduce a filter feature selection procedure based on circular statistics and propose a fuzzy logic classifier designed to cope with circular information from multiple channels jointly. Main results. We show that the proposed visual stimulation enables us not only to encode more commands under the same conditions, but also to obtain EEG responses with a more stable phase. We also demonstrate that the proposed decoding approach outperforms existing ones, especially for the short time windows used. Significance. The work presented here shows how to overcome some of the limitations of screen-based visual stimulation. The superiority of the proposed decoding approach demonstrates the importance of preserving the circularity of the data during the decoding stage.
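
    A minimal single-channel sketch of phase-coded decoding: estimate the evoked phase at the common flicker frequency and pick the target with the circularly closest training phase. The sampling rate, frequency, and target phases are assumed for illustration; the paper's multichannel fuzzy logic classifier is not reproduced here:

```python
import numpy as np

FS = 250        # sampling rate in Hz (assumed)
F_STIM = 15.0   # common flicker frequency in Hz (assumed)

def evoked_phase(eeg, fs=FS, f=F_STIM):
    """Phase of the response at the stimulation frequency, obtained
    by projecting the signal onto a complex exponential."""
    t = np.arange(len(eeg)) / fs
    return np.angle(np.sum(eeg * np.exp(-2j * np.pi * f * t)))

def circ_dist(a, b):
    """Circular distance between two angles, respecting wrap-around."""
    return np.abs(np.angle(np.exp(1j * (a - b))))

def decode(eeg, target_phases):
    """Choose the target whose training phase is circularly closest."""
    phi = evoked_phase(eeg)
    return int(np.argmin([circ_dist(phi, p) for p in target_phases]))

# Toy check: a clean 1-s sinusoid with phase pi/2 decodes to target 1.
t = np.arange(FS) / FS
trial = np.cos(2 * np.pi * F_STIM * t + np.pi / 2)
choice = decode(trial, target_phases=[0.0, np.pi / 2, np.pi])
```

Using the circular distance, rather than a raw angle difference, is what preserves the wrap-around (circular) nature of the phase feature that the paper emphasizes.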

  11. Central auditory function of deafness genes.

    PubMed

    Willaredt, Marc A; Ebbers, Lena; Nothwang, Hans Gerd

    2014-06-01

    The highly variable benefit of hearing devices is a serious challenge in auditory rehabilitation. Various factors contribute to this phenomenon such as the diversity in ear defects, the different extent of auditory nerve hypoplasia, the age of intervention, and cognitive abilities. Recent analyses indicate that, in addition, central auditory functions of deafness genes have to be considered in this context. Since reduced neuronal activity acts as the common denominator in deafness, it is widely assumed that peripheral deafness influences development and function of the central auditory system in a stereotypical manner. However, functional characterization of transgenic mice with mutated deafness genes demonstrated gene-specific abnormalities in the central auditory system as well. A frequent function of deafness genes in the central auditory system is supported by a genome-wide expression study that revealed significant enrichment of these genes in the transcriptome of the auditory brainstem compared to the entire brain. Here, we will summarize current knowledge of the diverse central auditory functions of deafness genes. We furthermore propose the intimately interwoven gene regulatory networks governing development of the otic placode and the hindbrain as a mechanistic explanation for the widespread expression of these genes beyond the cochlea. We conclude that better knowledge of central auditory dysfunction caused by genetic alterations in deafness genes is required. In combination with improved genetic diagnostics becoming currently available through novel sequencing technologies, this information will likely contribute to better outcome prediction of hearing devices.

  12. Progressive auditory neuropathy in patients with Leber's hereditary optic neuropathy

    PubMed Central

    Ceranic, B; Luxon, L

    2004-01-01

    Objective: To investigate auditory neural involvement in patients with Leber's hereditary optic neuropathy (LHON). Methods: Auditory assessment was undertaken in two patients with LHON. One was a 45 year old woman with Harding disease (multiple-sclerosis-like illness and positive 11778mtDNA mutation) and mild auditory symptoms, whose auditory function was monitored over five years. The other was a 59 year old man with positive 11778mtDNA mutation, who presented with a long standing progressive bilateral hearing loss, moderate on one side and severe to profound on the other. Standard pure tone audiometry, tympanometry, stapedial reflex threshold measurements, stapedial reflex decay, otoacoustic emissions with olivo-cochlear suppression, auditory brain stem responses, and vestibular function tests were undertaken. Results: Both patients had good cochlear function, as judged by otoacoustic emissions (intact outer hair cells) and normal stapedial reflexes (intact inner hair cells). A brain stem lesion was excluded by negative findings on imaging, recordable stapedial reflex thresholds, and, in one of the patients, olivocochlear suppression of otoacoustic emissions. The deterioration of auditory function implied a progressive course in both cases. Vestibular function was unaffected. Conclusions: The findings are consistent with auditory neuropathy—a lesion of the cochlear nerve presenting with abnormal auditory brain stem responses and with normal inner hair cells and the cochlear nucleus (lower brain stem). The association of auditory neuropathy, or any other auditory dysfunction, with LHON has not been recognised previously. Further studies are necessary to establish whether this is a consistent finding. PMID:15026512

  13. Separating heart and brain: on the reduction of physiological noise from multichannel functional near-infrared spectroscopy (fNIRS) signals

    NASA Astrophysics Data System (ADS)

    Bauernfeind, G.; Wriessnegger, S. C.; Daly, I.; Müller-Putz, G. R.

    2014-10-01

    Objective. Functional near-infrared spectroscopy (fNIRS) is an emerging technique for the in vivo assessment of functional activity of the cerebral cortex as well as in the field of brain-computer interface (BCI) research. A common challenge for the utilization of fNIRS in these areas is a stable and reliable investigation of the spatio-temporal hemodynamic patterns. However, the recorded patterns may be influenced and superimposed by signals generated from physiological processes, resulting in an inaccurate estimation of the cortical activity. Up to now only a few studies have investigated these influences, and still less has been attempted to remove/reduce these influences. The present study aims to gain insights into the reduction of physiological rhythms in hemodynamic signals (oxygenated hemoglobin (oxy-Hb), deoxygenated hemoglobin (deoxy-Hb)). Approach. We introduce the use of three different signal processing approaches (spatial filtering, a common average reference (CAR) method; independent component analysis (ICA); and transfer function (TF) models) to reduce the influence of respiratory and blood pressure (BP) rhythms on the hemodynamic responses. Main results. All approaches produce large reductions in BP and respiration influences on the oxy-Hb signals and, therefore, improve the contrast-to-noise ratio (CNR). In contrast, for deoxy-Hb signals CAR and ICA did not improve the CNR. However, for the TF approach, a CNR-improvement in deoxy-Hb can also be found. Significance. The present study investigates the application of different signal processing approaches to reduce the influences of physiological rhythms on the hemodynamic responses. In addition to the identification of the best signal processing method, we also show the importance of noise reduction in fNIRS data.
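
    Of the three approaches, the common average reference is the simplest to sketch: subtract the across-channel mean at every sample, so that rhythms shared by all channels cancel. The signals below are synthetic stand-ins, not fNIRS data:

```python
import numpy as np

def common_average_reference(signals):
    """signals: (n_channels, n_samples). Subtract the across-channel
    mean sample by sample; components common to all channels (e.g.
    blood-pressure or respiratory rhythms) cancel out."""
    return signals - signals.mean(axis=0, keepdims=True)

# Four channels share a ~1 Hz global oscillation; channel 2 also
# carries a local hemodynamic response (a Gaussian bump).
t = np.linspace(0.0, 10.0, 1000)
shared = 0.5 * np.sin(2 * np.pi * 1.0 * t)
bump = np.exp(-((t - 5.0) ** 2))
data = np.tile(shared, (4, 1))
data[2] += bump

clean = common_average_reference(data)
```

After filtering, the shared oscillation is gone from every channel; the cost is that a quarter of the local response leaks (with opposite sign) into the other channels, which is why CAR helps contrast-to-noise only when the artifact is genuinely global.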

  14. Causal contribution of primate auditory cortex to auditory perceptual decision-making

    PubMed Central

    Tsunada, Joji; Liu, Andrew S.K.; Gold, Joshua I.; Cohen, Yale E.

    2015-01-01

    Auditory perceptual decisions are thought to be mediated by the ventral auditory pathway. However, the specific and causal contributions of different brain regions in this pathway, including the middle-lateral (ML) and anterolateral (AL) belt regions of the auditory cortex, to auditory decisions have not been fully identified. To identify these contributions, we recorded from and microstimulated ML and AL sites while monkeys decided whether an auditory stimulus contained more low-frequency or high-frequency tone bursts. Both ML and AL neural activity was modulated by the frequency content of the stimulus. However, only the responses of the most stimulus-sensitive AL neurons were systematically modulated by the monkeys’ choices. Consistent with this observation, microstimulation of AL—but not ML—systematically biased the monkeys’ behavior toward the choice associated with the preferred frequency of the stimulated site. Together, these findings suggest that AL directly and causally contributes sensory evidence used to form this auditory decision. PMID:26656644

  15. The brain-stem auditory-evoked response in the big brown bat (Eptesicus fuscus) to clicks and frequency-modulated sweeps.

    PubMed

    Burkard, R; Moss, C F

    1994-08-01

    Three experiments were performed to evaluate the effects of stimulus level on the brain-stem auditory-evoked response (BAER) in the big brown bat (Eptesicus fuscus), a species that uses frequency-modulated (FM) sonar sounds for echolocation. In experiment 1, the effects of click level on the BAER were investigated. Clicks were presented at levels of 30 to 90 dB pSPL in 10-dB steps. Each animal responded reliably to clicks at levels of 50 dB pSPL and above, showing a BAER containing four peaks in the first 3-4 ms from click onset (waves i-iv). With increasing click level, BAER peak amplitude increased and peak latency decreased. A decrease in the i-iv interval also occurred with increasing click level. In experiment 2, stimuli were 1-ms linear FM sweeps, decreasing in frequency from 100 to 20 kHz. Stimulus levels ranged from 20 to 90 dB pSPL. BAERs to FM sweeps were observed in all animals for levels of 40 dB pSPL and above. These responses were similar to the click-evoked BAER in waveform morphology, with the notable exception of an additional peak observed at the higher levels of FM sweeps. This peak (wave ia) occurred prior to the first wave seen at lower levels (wave ib). As the level of the FM sweep increased, there was a decrease in peak latency and an increase in peak amplitude. Similarity in the magnitude and behavior of the i-iv and ib-iv intervals suggests that wave ib to FM sweeps is the homolog of the wave i response to click stimuli. Experiment 3 tested the hypothesis that wave ia represented activity emanating from more basal cochlear regions than wave ib. FM sweeps (100-20 kHz) were presented at 90 dB pSPL, and broadband noise was raised in level until the BAER was eliminated. This "masked threshold" occurred at 85 dB SPL of noise. At masked threshold, the broadband noise was steeply high-pass filtered at five cutoff frequencies ranging from 20 to 80 kHz. Generally, wave ia was eliminated for masker cutoff frequencies of 56.6 kHz and below, while wave

  16. Software Configurable Multichannel Transceiver

    NASA Technical Reports Server (NTRS)

    Freudinger, Lawrence C.; Cornelius, Harold; Hickling, Ron; Brooks, Walter

    2009-01-01

    Emerging test instrumentation and test scenarios increasingly require network communication to manage complexity. Adapting wireless communication infrastructure to accommodate challenging testing needs can benefit from reconfigurable radio technology. A fundamental requirement for a software-definable radio system is independence from carrier frequencies, one of the radio components that to date has seen only limited progress toward programmability. This paper overviews an ongoing project to validate the viability of a promising chipset that performs conversion of radio frequency (RF) signals directly into digital data for the wireless receiver and, for the transmitter, converts digital data into RF signals. The Software Configurable Multichannel Transceiver (SCMT) enables four transmitters and four receivers in a single unit the size of a commodity disk drive, programmable for any frequency band between 1 MHz and 6 GHz.

  17. Multichannel optical sensing device

    DOEpatents

    Selkowitz, Stephen E.

    1990-01-01

    A multichannel optical sensing device is disclosed, for measuring the outdoor sky luminance or illuminance or the luminance or illuminance distribution in a room, comprising a plurality of light receptors, an optical shutter matrix including a plurality of liquid crystal optical shutter elements operable by electrical control signals between light transmitting and light stopping conditions, fiber optic elements connected between the receptors and the shutter elements, a microprocessor based programmable control unit for selectively supplying control signals to the optical shutter elements in a programmable sequence, a photodetector including an optical integrating spherical chamber having an input port for receiving the light from the shutter matrix and at least one detector element in the spherical chamber for producing output signals corresponding to the light, and output units for utilizing the output signals including a storage unit having a control connection to the microprocessor based programmable control unit for storing the output signals under the sequence control of the programmable control unit.

  18. Multichannel optical sensing device

    DOEpatents

    Selkowitz, S.E.

    1985-08-16

    A multichannel optical sensing device is disclosed, for measuring the outdoor sky luminance or illuminance or the luminance or illuminance distribution in a room, comprising a plurality of light receptors, an optical shutter matrix including a plurality of liquid crystal optical shutter elements operable by electrical control signals between light transmitting and light stopping conditions, fiber optic elements connected between the receptors and the shutter elements, a microprocessor based programmable control unit for selectively supplying control signals to the optical shutter elements in a programmable sequence, a photodetector including an optical integrating spherical chamber having an input port for receiving the light from the shutter matrix and at least one detector element in the spherical chamber for producing output signals corresponding to the light, and output units for utilizing the output signals including a storage unit having a control connection to the microprocessor based programmable control unit for storing the output signals under the sequence control of the programmable control unit.

  19. Multichannel signal enhancement

    DOEpatents

    Lewis, Paul S.

    1990-01-01

    A mixed adaptive filter is formulated for the signal processing problem where desired a priori signal information is not available. The formulation generates a least squares problem which enables the filter output to be calculated directly from an input data matrix. In one embodiment, a folded processor array enables bidirectional data flow to solve the recursive problem by back substitution without global communications. In another embodiment, a balanced processor array solves the recursive problem by forward elimination through the array. In a particular application to magnetoencephalography, the mixed adaptive filter enables an evoked response to an auditory stimulus to be identified from only a single trial.
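
    The idea of computing a filter output directly from an input data matrix can be sketched as an ordinary least-squares solve. This is a generic illustration with invented sizes and weights; the patent's folded and balanced processor arrays implement the back-substitution and forward-elimination steps of the same recursion in hardware:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a data matrix of delayed input samples and solve a least
# squares problem for the filter weights; the filter output then
# follows directly from the data matrix.
n_samples, n_taps = 200, 4
x = rng.standard_normal(n_samples + n_taps)
X = np.column_stack([x[i:i + n_samples] for i in range(n_taps)])

true_w = np.array([0.5, -0.3, 0.2, 0.1])      # unknown response (invented)
d = X @ true_w + 0.01 * rng.standard_normal(n_samples)

w, *_ = np.linalg.lstsq(X, d, rcond=None)     # weight estimate
y = X @ w                                     # filter output
```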

  1. Neurons Differentiated from Transplanted Stem Cells Respond Functionally to Acoustic Stimuli in the Awake Monkey Brain.

    PubMed

    Wei, Jing-Kuan; Wang, Wen-Chao; Zhai, Rong-Wei; Zhang, Yu-Hua; Yang, Shang-Chuan; Rizak, Joshua; Li, Ling; Xu, Li-Qi; Liu, Li; Pan, Ming-Ke; Hu, Ying-Zhou; Ghanemi, Abdelaziz; Wu, Jing; Yang, Li-Chuan; Li, Hao; Lv, Long-Bao; Li, Jia-Li; Yao, Yong-Gang; Xu, Lin; Feng, Xiao-Li; Yin, Yong; Qin, Dong-Dong; Hu, Xin-Tian; Wang, Zheng-Bo

    2016-07-26

    Here, we examine whether neurons differentiated from transplanted stem cells can integrate into the host neural network and function in awake animals, a goal of transplanted stem cell therapy in the brain. We have developed a technique in which a small "hole" is created in the inferior colliculus (IC) of rhesus monkeys, then stem cells are transplanted in situ to allow for investigation of their integration into the auditory neural network. We found that some transplanted cells differentiated into mature neurons and formed synaptic input/output connections with the host neurons. In addition, c-Fos expression increased significantly in the cells after acoustic stimulation, and multichannel recordings indicated IC specific tuning activities in response to auditory stimulation. These results suggest that the transplanted cells have the potential to functionally integrate into the host neural network. PMID:27425612

  2. Auditory Efferent System Modulates Mosquito Hearing.

    PubMed

    Andrés, Marta; Seifert, Marvin; Spalthoff, Christian; Warren, Ben; Weiss, Lukas; Giraldo, Diego; Winkler, Margret; Pauls, Stephanie; Göpfert, Martin C

    2016-08-01

    The performance of vertebrate ears is controlled by auditory efferents that originate in the brain and innervate the ear, synapsing onto hair cell somata and auditory afferent fibers [1-3]. Efferent activity can provide protection from noise and facilitate the detection and discrimination of sound by modulating mechanical amplification by hair cells and transmitter release as well as auditory afferent action potential firing [1-3]. Insect auditory organs are thought to lack efferent control [4-7], but when we inspected mosquito ears, we obtained evidence for its existence. Antibodies against synaptic proteins recognized rows of bouton-like puncta running along the dendrites and axons of mosquito auditory sensory neurons. Electron microscopy identified synaptic and non-synaptic sites of vesicle release, and some of the innervating fibers co-labeled with somata in the CNS. Octopamine, GABA, and serotonin were identified as efferent neurotransmitters or neuromodulators that affect auditory frequency tuning, mechanical amplification, and sound-evoked potentials. Mosquito brains thus modulate mosquito ears, extending the use of auditory efferent systems from vertebrates to invertebrates and adding new levels of complexity to mosquito sound detection and communication. PMID:27476597

  3. Multichannel electrochemical microbial detection unit

    NASA Technical Reports Server (NTRS)

    Wilkins, J. R.; Young, R. N.; Boykin, E. H.

    1978-01-01

    The paper describes the design and capabilities of a compact multichannel electrochemical unit devised to detect bacteria and automatically indicate their detection times. By connecting this unit to a strip-chart recorder, a permanent record is obtained of the end points and growth curves for each of eight channels. The experimental setup utilizing the multichannel unit consists of a test tube (25 by 150 mm) containing a combination redox electrode plus 18 ml of lauryl tryptose broth and positioned in a 35 °C water bath. Leads from the electrodes are connected to the multichannel unit, which in turn is connected to a strip-chart recorder. After addition of 2.0 ml of inoculum to the test tubes, depression of the push-button starter activates the electronics, timer, and indicator light for each channel. The multichannel unit is employed to test tenfold dilutions of various members of the Enterobacteriaceae group, and a typical dose-response curve is presented.

  4. Brain

    MedlinePlus

    Cerebrum: The cerebrum is the part of the ... the outside of the brain and spinal cord. Brain Stem: The brain stem is the part of ...

  5. Auditory based neuropsychology in neurosurgery.

    PubMed

    Wester, Knut

    2008-04-01

    In this article, an account is given of the author's experience with auditory based neuropsychology in a clinical, neurosurgical setting. The patients included in the studies were patients with traumatic or vascular brain lesions, patients undergoing brain surgery to alleviate symptoms of Parkinson's disease, or patients harbouring an intracranial arachnoid cyst affecting the temporal or the frontal lobe. The aims of these investigations were to collect information about the location of cognitive processes in the human brain, or to disclose dyscognition in patients with an arachnoid cyst. All the patients were tested with the dichotic listening (DL) technique. In addition, the cyst patients were subjected to a number of non-auditory, standard neuropsychological tests, such as the Benton Visual Retention Test, Street Gestalt Test, Stroop Test and Trails Test A and B. The neuropsychological tests revealed that arachnoid cysts in general cause dyscognition that also includes auditory processes, and more importantly, that these cognition deficits normalise after surgical removal of the cyst. These observations constitute strong evidence in favour of surgical decompression. PMID:18024027

  6. Multichannel SQUID systems for brain research

    SciTech Connect

    Ahonen, A.I.; Hamalainen, M.S.; Kajola, M.J.; Knuutila, J.E.F.; Lounasmaa, O.V.; Simola, J.T.; Vilkman, V.A. (Low Temperature Lab.); Tesche, C.D. (Thomas J. Watson Research Center)

    1991-03-01

    This paper reviews basic principles of magnetoencephalography (MEG) and neuromagnetic instrumentation. The authors' 24-channel system, based on planar gradiometer coils and dc-SQUIDs, is then described. Finally, recent MEG experiments on human somatotopy and focal epilepsy, carried out in the authors' laboratory, are presented.

  7. Estrogenic modulation of auditory processing: a vertebrate comparison

    PubMed Central

    Caras, Melissa L.

    2013-01-01

    Sex-steroid hormones are well-known regulators of vocal motor behavior in several organisms. A large body of evidence now indicates that these same hormones modulate processing at multiple levels of the ascending auditory pathway. The goal of this review is to provide a comparative analysis of the role of estrogens in vertebrate auditory function. Four major conclusions can be drawn from the literature: First, estrogens may influence the development of the mammalian auditory system. Second, estrogenic signaling protects the mammalian auditory system from noise- and age-related damage. Third, estrogens optimize auditory processing during periods of reproductive readiness in multiple vertebrate lineages. Finally, brain-derived estrogens can act locally to enhance auditory response properties in at least one avian species. This comparative examination may lead to a better appreciation of the role of estrogens in the processing of natural vocalizations and may provide useful insights toward alleviating auditory dysfunctions emanating from hormonal imbalances. PMID:23911849

  8. Electrophysiological measurement of human auditory function

    NASA Technical Reports Server (NTRS)

    Galambos, R.

    1975-01-01

    Contingent negative variations in the presence and amplitudes of brain potentials evoked by sound are considered. Evidence is presented that the evoked brain stem response to auditory stimuli is clearly related to brain events associated with cognitive processing of acoustic signals, since their properties depend upon where the listener directs his attention, whether the signal is an expected event or a surprise, and when a sound that is listened for is finally heard.

  9. Multichannel demultiplexer-demodulator

    NASA Technical Reports Server (NTRS)

    Courtois, Hector; Sherry, Mike; Cangiane, Peter; Caso, Greg

    1993-01-01

    One of the critical satellite technologies in meshed VSAT (very small aperture terminal) satellite communication networks utilizing FDMA (frequency division multiple access) uplinks is a multichannel demultiplexer/demodulator (MCDD). TRW Electronic Systems Group developed a proof-of-concept (POC) MCDD using advanced digital technologies. This POC model demonstrates the capability of demultiplexing and demodulating multiple low to medium data rate FDMA uplinks with potential for expansion to demultiplexing and demodulating hundreds to thousands of narrowband uplinks. The TRW approach uses baseband sampling followed by successive wideband and narrowband channelizers with each channelizer feeding into a multirate, time-shared demodulator. A full-scale MCDD would consist of an 8 bit A/D sampling at 92.16 MHz, four wideband channelizers capable of demultiplexing eight wideband channels, thirty-two narrowband channelizers capable of demultiplexing one wideband signal into 32 narrowband channels, and thirty-two multirate demodulators. The POC model consists of an 8 bit A/D sampling at 23.04 MHz, one wideband channelizer, 16 narrowband channelizers, and three multirate demodulators. The implementation loss of the wideband and narrowband channels is 0.3 dB and 0.75 dB, respectively, at an Eb/No corresponding to a 10^-7 bit error rate.
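
    The channelizer stage can be sketched with a bare DFT filter bank: reshape the sampled composite into length-N blocks and DFT each block, so that output k is a decimated baseband stream for channel k. This omits the prototype filter a real design would use for adjacent-channel rejection, and all parameters are illustrative, not the TRW design:

```python
import numpy as np

N = 8                    # channels per channelizer (illustrative)
fs = 8000.0              # composite sample rate in Hz (illustrative)
t = np.arange(4096) / fs

# Composite input: one complex tone centered on channel 3
# (channel spacing is fs/N = 1 kHz).
x = np.exp(2j * np.pi * (3 * fs / N) * t)

# Reshape into length-N blocks and DFT along each block: row k of the
# transposed result is a decimated baseband stream for channel k.
channels = np.fft.fft(x.reshape(-1, N), axis=1).T / N

powers = np.abs(channels).mean(axis=1)
busy = int(np.argmax(powers))   # channel carrying the tone
```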

  10. On-line statistical segmentation of a non-speech auditory stream in neonates as demonstrated by event-related brain potentials.

    PubMed

    Kudo, Noriko; Nonaka, Yulri; Mizuno, Noriko; Mizuno, Katsumi; Okanoya, Kazuo

    2011-09-01

    The ability to statistically segment a continuous auditory stream is one of the most important preparations for initiating language learning. Such ability is available to human infants at 8 months of age, as shown by a behavioral measurement. However, behavioral study alone cannot determine how early this ability is available. A recent study using measurements of event-related potential (ERP) revealed that neonates are able to detect statistical boundaries within auditory streams of speech syllables. Extending this line of research will allow us to better understand the cognitive preparation for language acquisition that is available to neonates. The aim of the present study was to examine the domain-generality of such statistical segmentation. Neonates were presented with nonlinguistic tone sequences composed of four tritone units, each consisting of three semitones extracted from one octave, for two 5-minute sessions. Only the first tone of each unit evoked a significant positivity in the frontal area during the second session, but not in the first session. This result suggests that the general ability to distinguish units in an auditory stream by statistical information is activated at birth and is probably innately prepared in humans. PMID:21884325
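
    The statistic involved can be sketched directly: within a tritone unit, the tone-to-tone transition probability is high, while transitions across unit boundaries are much less predictable, so a drop in transition probability marks a unit boundary. The tone labels and stream below are invented for illustration:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)

# Four three-tone units, concatenated in random order, mimicking the
# stimulus design described above (tone labels invented).
units = [("A", "B", "C"), ("D", "E", "F"),
         ("G", "H", "I"), ("J", "K", "L")]
stream = [tone for i in rng.integers(0, 4, size=300) for tone in units[i]]

pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])

def transition_prob(a, b):
    """P(next tone is b | current tone is a), estimated from the stream."""
    return pair_counts[(a, b)] / first_counts[a]

within = transition_prob("A", "B")    # inside a unit: always 1.0
across = transition_prob("C", "D")    # across a boundary: roughly 0.25
```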

  11. Auditory memory function in expert chess players

    PubMed Central

    Fattahi, Fariba; Geshani, Ahmad; Jafari, Zahra; Jalaie, Shohreh; Salman Mahini, Mona

    2015-01-01

    Background: Chess is a game that involves many aspects of high-level cognition such as memory, attention, focus and problem solving. Long-term practice of chess can improve cognitive performance and behavioral skills. Auditory memory, as a kind of memory, can be influenced by strengthening processes following long-term chess playing, like other behavioral skills, because of common processing pathways in the brain. The purpose of this study was to evaluate the auditory memory function of expert chess players using the Persian version of the dichotic auditory-verbal memory test. Methods: The Persian version of the dichotic auditory-verbal memory test was performed for 30 expert chess players aged 20-35 years and 30 non-chess players who were matched on various conditions; the participants in both groups were randomly selected. The performance of the two groups was compared by independent samples t-test using SPSS version 21. Results: The mean score of the dichotic auditory-verbal memory test between the two groups, expert chess players and non-chess players, revealed a significant difference (p ≤ 0.001). The difference between the ears scores for expert chess players (p = 0.023) and non-chess players (p = 0.013) was significant. Gender had no effect on the test results. Conclusion: Auditory memory function in expert chess players was significantly better than that of non-chess players. It seems that increased auditory memory function is related to strengthening cognitive performances due to playing chess for a long time. PMID:26793666
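
    The group comparison used here, an independent-samples t-test, can be sketched from its definition. The scores below are invented stand-ins, not the study's data:

```python
import numpy as np

def independent_t(x, y):
    """Student's two-sample t statistic (pooled variance, as in a
    standard equal-variance independent-samples t-test)."""
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) \
          / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(sp2 * (1 / nx + 1 / ny))

# Hypothetical memory-test scores for two small groups (invented).
chess = np.array([18.0, 17.0, 19.0, 16.0, 18.0, 20.0])
controls = np.array([14.0, 15.0, 13.0, 16.0, 14.0, 15.0])

t_stat = independent_t(chess, controls)  # roughly 4.87 for these values
```

The statistic would then be compared against a t distribution with nx + ny - 2 degrees of freedom to obtain the reported p-value.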

  12. McGurk illusion recalibrates subsequent auditory perception

    PubMed Central

    Lüttke, Claudia S.; Ekman, Matthias; van Gerven, Marcel A. J.; de Lange, Floris P.

    2016-01-01

    Visual information can alter auditory perception. This is clearly illustrated by the well-known McGurk illusion, where an auditory /aba/ and a visual /aga/ are merged to the percept of ‘ada’. It is less clear however whether such a change in perception may recalibrate subsequent perception. Here we asked whether the altered auditory perception due to the McGurk illusion affects subsequent auditory perception, i.e. whether this process of fusion may cause a recalibration of the auditory boundaries between phonemes. Participants categorized auditory and audiovisual speech stimuli as /aba/, /ada/ or /aga/ while activity patterns in their auditory cortices were recorded using fMRI. Interestingly, following a McGurk illusion, an auditory /aba/ was more often misperceived as ‘ada’. Furthermore, we observed a neural counterpart of this recalibration in the early auditory cortex. When the auditory input /aba/ was perceived as ‘ada’, activity patterns bore stronger resemblance to activity patterns elicited by /ada/ sounds than when they were correctly perceived as /aba/. Our results suggest that upon experiencing the McGurk illusion, the brain shifts the neural representation of an /aba/ sound towards /ada/, culminating in a recalibration in perception of subsequent auditory input. PMID:27611960

  13. McGurk illusion recalibrates subsequent auditory perception.

    PubMed

    Lüttke, Claudia S; Ekman, Matthias; van Gerven, Marcel A J; de Lange, Floris P

    2016-01-01

    Visual information can alter auditory perception. This is clearly illustrated by the well-known McGurk illusion, where an auditory /aba/ and a visual /aga/ are merged to the percept of 'ada'. It is less clear however whether such a change in perception may recalibrate subsequent perception. Here we asked whether the altered auditory perception due to the McGurk illusion affects subsequent auditory perception, i.e. whether this process of fusion may cause a recalibration of the auditory boundaries between phonemes. Participants categorized auditory and audiovisual speech stimuli as /aba/, /ada/ or /aga/ while activity patterns in their auditory cortices were recorded using fMRI. Interestingly, following a McGurk illusion, an auditory /aba/ was more often misperceived as 'ada'. Furthermore, we observed a neural counterpart of this recalibration in the early auditory cortex. When the auditory input /aba/ was perceived as 'ada', activity patterns bore stronger resemblance to activity patterns elicited by /ada/ sounds than when they were correctly perceived as /aba/. Our results suggest that upon experiencing the McGurk illusion, the brain shifts the neural representation of an /aba/ sound towards /ada/, culminating in a recalibration in perception of subsequent auditory input. PMID:27611960

  14. Hemodynamic imaging of the auditory cortex.

    PubMed

    Hall, Deborah Ann; Susi, Karima

    2015-01-01

    Over the past 20 years or so, functional magnetic resonance imaging (fMRI) has proven to be an influential tool for measuring perceptual and cognitive processing non-invasively in the human brain. This article provides a brief yet comprehensive overview of this dominant method for human auditory neuroscience, providing the reader with knowledge about the practicalities of using this technique to assess central auditory coding. Key learning objectives include developing an understanding of the basic MR physics underpinning the technique, the advantages of auditory fMRI over other current neuroimaging alternatives, and some of the practical considerations involved in setting up, running, and analyzing an auditory fMRI experiment. The future utility of fMRI and anticipated technical developments are also briefly evaluated. Throughout the review, key concepts are illustrated using specific author examples, with particular emphasis on fMRI findings that address questions pertaining to basic sound coding (such as frequency and pitch).

  15. Auditory spatial processing in Alzheimer’s disease

    PubMed Central

    Golden, Hannah L.; Nicholas, Jennifer M.; Yong, Keir X. X.; Downey, Laura E.; Schott, Jonathan M.; Mummery, Catherine J.; Crutch, Sebastian J.

    2015-01-01

    The location and motion of sounds in space are important cues for encoding the auditory world. Spatial processing is a core component of auditory scene analysis, a cognitively demanding function that is vulnerable in Alzheimer’s disease. Here we designed a novel neuropsychological battery based on a virtual space paradigm to assess auditory spatial processing in patient cohorts with clinically typical Alzheimer’s disease (n = 20) and its major variant syndrome, posterior cortical atrophy (n = 12), in relation to healthy older controls (n = 26). We assessed three dimensions of auditory spatial function: externalized versus non-externalized sound discrimination, moving versus stationary sound discrimination and stationary auditory spatial position discrimination, together with non-spatial auditory and visual spatial control tasks. Neuroanatomical correlates of auditory spatial processing were assessed using voxel-based morphometry. Relative to healthy older controls, both patient groups exhibited impairments in the detection of auditory motion and in stationary sound position discrimination. The posterior cortical atrophy group showed greater impairment for auditory motion processing and for the processing of a non-spatial control complex auditory property (timbre) than the typical Alzheimer’s disease group. Voxel-based morphometry in the patient cohort revealed grey matter correlates of auditory motion detection and spatial position discrimination in right inferior parietal cortex and precuneus, respectively. These findings delineate auditory spatial processing deficits in typical and posterior Alzheimer’s disease phenotypes that are related to posterior cortical regions involved in both syndromic variants and modulated by the syndromic profile of brain degeneration. Auditory spatial deficits contribute to impaired spatial awareness in Alzheimer’s disease and may constitute a novel perceptual model for probing brain network disintegration across the Alzheimer

  16. Presentation of electromagnetic multichannel data: The signal space separation method

    NASA Astrophysics Data System (ADS)

    Taulu, Samu; Kajola, Matti

    2005-06-01

    Measurement of external magnetic fields provides information on the electric current distribution inside an object. For example, in magnetoencephalography modern measurement devices sample the magnetic field produced by the brain in several hundred distinct locations around the head. The signal space separation (SSS) method creates a fundamental linear basis for all measurable multichannel signal vectors of magnetic origin. The SSS basis is based on the fact that the magnetic field can be expressed as a combination of two separate and rapidly converging expansions of harmonic functions, one expansion for signals arising from inside the measurement volume of the sensor array and another for signals arising from outside this volume. The separation is based on the different convergence volumes of the two expansions and on the fact that the sensors are located in a source-current-free volume between the interesting and interfering sources. Individual terms of the expansions are shown to contain uncorrelated information about the underlying source distribution. SSS provides a stable decomposition of the measurement into a fundamental device-independent form when used with an accurately calibrated multichannel device. External interference signals are elegantly suppressed by leaving the interference components out of the reconstruction based on the decomposition. Representation of multichannel data in the SSS basis is shown to provide a large variety of applications for improved analysis of multichannel data.
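    At its heart, the decomposition described above is linear algebra: the measured field is a sum of an interior-basis part and an exterior-basis part, and interference is suppressed by reconstructing from the interior coefficients only. A minimal numerical sketch of that idea (random matrices stand in for the actual spherical-harmonic expansions of SSS; all sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_in, n_out = 64, 8, 4   # sensors, interior/exterior basis sizes

# Stand-ins for the two harmonic expansions; in real SSS each column is a
# spherical-harmonic field pattern evaluated at the sensor locations.
S_in = rng.standard_normal((n_sensors, n_in))
S_out = rng.standard_normal((n_sensors, n_out))
S = np.hstack([S_in, S_out])        # the combined signal-space basis

# Simulated measurement: brain signal plus external interference.
x_in_true = rng.standard_normal(n_in)
x_out_true = rng.standard_normal(n_out)
b = S_in @ x_in_true + S_out @ x_out_true

# Decompose the measurement into basis coefficients (least squares).
x_hat, *_ = np.linalg.lstsq(S, b, rcond=None)

# Interference suppression: reconstruct from the interior components only.
b_clean = S_in @ x_hat[:n_in]
```

    With an overdetermined, well-conditioned basis, the exterior contribution is removed exactly; with real sensor geometries, the conditioning of the basis matrix is the critical practical concern.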

  17. Auditory Imagery: Empirical Findings

    ERIC Educational Resources Information Center

    Hubbard, Timothy L.

    2010-01-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d)…

  18. Auditory Training for Central Auditory Processing Disorder.

    PubMed

    Weihing, Jeffrey; Chermak, Gail D; Musiek, Frank E

    2015-11-01

    Auditory training (AT) is an important component of rehabilitation for patients with central auditory processing disorder (CAPD). The present article identifies and describes aspects of AT as they relate to applications in this population. A description of the types of auditory processes along with information on relevant AT protocols that can be used to address these specific deficits is included. Characteristics and principles of effective AT procedures also are detailed in light of research that reflects on their value. Finally, research investigating AT in populations who show CAPD or present with auditory complaints is reported. Although efficacy data in this area are still emerging, current findings support the use of AT for treatment of auditory difficulties. PMID:27587909

  19. Auditory Training for Central Auditory Processing Disorder

    PubMed Central

    Weihing, Jeffrey; Chermak, Gail D.; Musiek, Frank E.

    2015-01-01

    Auditory training (AT) is an important component of rehabilitation for patients with central auditory processing disorder (CAPD). The present article identifies and describes aspects of AT as they relate to applications in this population. A description of the types of auditory processes along with information on relevant AT protocols that can be used to address these specific deficits is included. Characteristics and principles of effective AT procedures also are detailed in light of research that reflects on their value. Finally, research investigating AT in populations who show CAPD or present with auditory complaints is reported. Although efficacy data in this area are still emerging, current findings support the use of AT for treatment of auditory difficulties. PMID:27587909

  20. Reality of auditory verbal hallucinations

    PubMed Central

    Valkonen-Korhonen, Minna; Holi, Matti; Therman, Sebastian; Lehtonen, Johannes; Hari, Riitta

    2009-01-01

    Distortion of the sense of reality, actualized in delusions and hallucinations, is the key feature of psychosis, but the underlying neuronal correlates remain largely unknown. We studied 11 highly functioning subjects with schizophrenia or schizoaffective disorder while they rated the reality of auditory verbal hallucinations (AVH) during functional magnetic resonance imaging (fMRI). The subjective reality of AVH correlated strongly and specifically with the hallucination-related activation strength of the inferior frontal gyri (IFG), including Broca's language region. Furthermore, how real the subjects experienced a hallucination to be depended on the hallucination-related coupling between the IFG, the ventral striatum, the auditory cortex, the right posterior temporal lobe, and the cingulate cortex. Our findings suggest that the subjective reality of AVH is related to motor mechanisms of speech comprehension, with contributions from sensory and salience-detection-related brain regions as well as circuitries related to self-monitoring and the experience of agency. PMID:19620178

  1. The human auditory evoked response

    NASA Technical Reports Server (NTRS)

    Galambos, R.

    1974-01-01

    Figures are presented of computer-averaged auditory evoked responses (AERs) that point to the existence of a completely endogenous brain event. A series of regular clicks or tones was administered to the ear, and 'odd-balls' of different intensity or frequency, respectively, were included. Subjects were asked either to ignore the sounds (to read or do something else) or to attend to the stimuli. When they listened and counted the odd-balls, a P3 wave occurred at 300 msec after the stimulus. When the odd-balls consisted of omitted clicks or tone bursts, a similar response was observed. This could not have come from the auditory nerve, but only from the cortex. It is evidence of recognition, a conscious process.

  2. Electrophysiological measurement of human auditory function

    NASA Technical Reports Server (NTRS)

    Galambos, R.

    1975-01-01

    Knowledge of the human auditory evoked response is reviewed, including methods of determining this response, the way particular changes in the stimulus are coupled to specific changes in the response, and how the state of mind of the listener influences the response. Important practical applications of this basic knowledge are discussed. Measurement of the brainstem evoked response, for instance, can state unequivocally how well the peripheral auditory apparatus functions. It might then be developed into a useful hearing test, especially for infants and preverbal or nonverbal children. Clinical applications of measuring the brain waves evoked 100 msec and later after the auditory stimulus are undetermined. These waves are clearly related to brain events associated with cognitive processing of acoustic signals, since their properties depend upon where the listener directs his attention and whether, and how soon, he expects the signal.

  3. The Brain As a Mixer, I. Preliminary Literature Review: Auditory Integration. Studies in Language and Language Behavior, Progress Report Number VII.

    ERIC Educational Resources Information Center

    Semmel, Melvyn I.; And Others

    Methods to evaluate central hearing deficiencies and to localize brain damage are reviewed, beginning with Bocca, who showed that patients with temporal lobe tumors made significantly lower discrimination scores in the ear opposite the tumor when speech signals were distorted. Tests were devised to attempt to pinpoint brain damage on the basis of…

  4. Network community structure detection for directional neural networks inferred from multichannel multisubject EEG data.

    PubMed

    Liu, Ying; Moser, Jason; Aviyente, Selin

    2014-07-01

    In many neuroscience applications, one is interested in identifying the functional brain modules from multichannel, multiple subject neuroimaging data. However, most of the existing network community structure detection algorithms are limited to single undirected networks and cannot reveal the common community structure for a collection of directed networks. In this paper, we propose a community detection algorithm for weighted asymmetric (directed) networks representing the effective connectivity in the brain. Moreover, the issue of finding a common community structure across subjects is addressed by maximizing the total modularity of the group. Finally, the proposed community detection algorithm is applied to multichannel multisubject electroencephalogram data.
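    The group objective sketched in this abstract, a single partition scored by the summed modularity over subjects, can be illustrated for weighted directed networks using the directed (Leicht-Newman) form of modularity. The toy networks, partition, and sizes below are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def directed_modularity(A, labels):
    """Directed modularity Q = (1/m) * sum_ij (A_ij - kout_i * kin_j / m)
    over node pairs assigned to the same community (Leicht-Newman form)."""
    m = A.sum()
    kout, kin = A.sum(axis=1), A.sum(axis=0)
    same = labels[:, None] == labels[None, :]
    return float(((A - np.outer(kout, kin) / m) * same).sum() / m)

# Toy "subjects": three directed networks sharing two dense modules.
rng = np.random.default_rng(1)
def toy_network():
    A = rng.random((6, 6)) * 0.05       # weak background connectivity
    A[:3, :3] += 0.8                    # module 1
    A[3:, 3:] += 0.8                    # module 2
    np.fill_diagonal(A, 0.0)            # no self-connections
    return A

subjects = [toy_network() for _ in range(3)]
partition = np.array([0, 0, 0, 1, 1, 1])

# A common community structure is scored by total modularity across subjects.
Q_group = sum(directed_modularity(A, partition) for A in subjects)
```

    Maximizing `Q_group` over candidate partitions (e.g. by spectral or greedy search) then yields the community structure shared across the group.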

  5. Auditory hallucinations inhibit exogenous activation of auditory association cortex.

    PubMed

    David, A S; Woodruff, P W; Howard, R; Mellers, J D; Brammer, M; Bullmore, E; Wright, I; Andrew, C; Williams, S C

    1996-03-22

    Percepts unaccompanied by a veridical stimulus, such as hallucinations, provide an opportunity for mapping the neural correlates of conscious perception. Functional magnetic resonance imaging (fMRI) can reveal localized changes in blood oxygenation in response to actual as well as imagined sensory stimulation. The safe repeatability of fMRI enabled us to study a patient with schizophrenia while he was experiencing auditory hallucinations and when hallucination-free (with supporting data from a second case). Cortical activation was measured in response to periodic exogenous auditory and visual stimulation using time series regression analysis. Functional brain images were obtained in each hallucination condition both while the patient was on and off antipsychotic drugs. The response of the temporal cortex to exogenous auditory stimulation (speech) was markedly reduced when the patient was experiencing hallucinated voices addressing him, regardless of medication. Visual cortical activation (to flashing lights) remained normal over four scans. From the results of this study and previous work on visual hallucinations we conclude that hallucinations coincide with maximal activation of the sensory and association cortex, specific to the modality of the experience. PMID:8724677

  6. 40 Hz auditory steady state response to linguistic features of stimuli during auditory hallucinations.

    PubMed

    Ying, Jun; Yan, Zheng; Gao, Xiao-rong

    2013-10-01

    The auditory steady state response (ASSR) may reflect activity from different regions of the brain, depending on the modulation frequency used. In general, responses induced by low rates (≤40 Hz) emanate mostly from central structures of the brain, and responses from high rates (≥80 Hz) emanate mostly from the peripheral auditory nerve or brainstem structures. In addition, it has been reported that the gamma band ASSR (30-90 Hz) plays an important role in working memory, speech understanding and recognition. This paper investigated the 40 Hz ASSR evoked by modulated speech and reversed speech. The speech stimulus was a Chinese phrase, and the noise-like reversed speech was obtained by temporally reversing it. Both auditory stimuli were modulated at a frequency of 40 Hz. Ten healthy subjects and five patients with hallucination symptoms participated in the experiment. Results showed a reduction in the left auditory cortex response when healthy subjects listened to the reversed speech compared with the speech. In contrast, when the patients who experienced auditory hallucinations listened to the reversed speech, the left auditory cortex responded more actively. The ASSR results were consistent with the behavioral results of the patients. Therefore, the gamma band ASSR is expected to be helpful for rapid and objective clinical diagnosis of hallucinations. PMID:24142731
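    The measurement principle here, drive the cortex with a 40 Hz amplitude-modulated stimulus and read out the response amplitude at exactly the modulation frequency, can be sketched numerically. The noise carrier and simulated EEG below are illustrative stand-ins for the speech stimuli and real recordings:

```python
import numpy as np

fs = 1000.0                          # sampling rate (Hz), assumed
t = np.arange(0.0, 2.0, 1.0 / fs)    # 2 s epoch
fm = 40.0                            # modulation frequency

rng = np.random.default_rng(0)

# 40 Hz AM stimulus: carrier (noise stand-in for speech / reversed speech)
# multiplied by a 40 Hz modulator.
carrier = rng.standard_normal(t.size)
stimulus = (1.0 + np.sin(2.0 * np.pi * fm * t)) * carrier

# Simulated EEG: a phase-locked 40 Hz steady-state component buried in noise.
eeg = 0.5 * np.sin(2.0 * np.pi * fm * t) + rng.standard_normal(t.size)

# ASSR amplitude = spectral magnitude at the modulation frequency.
spec = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
assr_amp = spec[np.argmin(np.abs(freqs - fm))]
```

    The 40 Hz bin stands far above the surrounding noise floor; comparing this amplitude between conditions (speech vs. reversed speech) is the kind of contrast the study evaluates.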

  7. Experience with a multichannel system for biomagnetic study.

    PubMed

    Schneider, S; Abraham-Fuchs, K; Reichenberger, H; Seifert, H; Hoenig, H E; Röhrlein, G

    1993-11-01

    The components of the biomagnetic multichannel system Krenikon are described. The combination of biomagnetically yielded localizations with anatomic images gained from MR or CT is discussed, as well as the enhancement of the signal-to-noise ratio by using a correlation technique. The overall localization accuracy is tested with technical phantoms. With volunteers, measurements of auditory, visual and somatosensory evoked fields were performed to evaluate the system performance in vivo. Clinical studies were performed mainly with partners from the Universities of Erlangen-Nürnberg and Ulm. The data acquisition time is typically 2-10 min, which is tolerable both for the patient and the clinical staff. Electric potentials, even from invasive electrodes, can be recorded simultaneously with the magnetic fields. MEG gives important information for the presurgical diagnosis of epileptic patients and for the understanding of epilepsy genesis. With MCG, centres of biologic excitation such as ventricular ectopies or accessory bundles in WPW syndrome have been successfully localized.

  8. Web-based multi-channel analyzer

    DOEpatents

    Gritzo, Russ E.

    2003-12-23

    The present invention provides an improved multi-channel analyzer designed to conveniently gather, process, and distribute spectrographic pulse data. The multi-channel analyzer may operate on a computer system having memory, a processor, and the capability to connect to a network and to receive digitized spectrographic pulses. The multi-channel analyzer may have a software module integrated with a general-purpose operating system that may receive digitized spectrographic pulses at rates of at least 10,000 pulses per second. The multi-channel analyzer may further have a user-level software module that may receive user-specified controls dictating the operation of the multi-channel analyzer, making it customizable by the end-user. The user-level software may further categorize and conveniently distribute spectrographic pulse data employing non-proprietary, standard communication protocols and formats.
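    At its core, a multi-channel analyzer bins digitized pulse heights into channels and serves the accumulated spectrum in an open format. A minimal sketch of that core loop; the channel count, digitizer full scale, and JSON payload are illustrative assumptions, not the patented design:

```python
import json
import numpy as np

N_CHANNELS = 1024
ADC_MAX = 4096                      # hypothetical digitizer full scale

def accumulate(pulse_heights, spectrum=None):
    """Bin digitized pulse heights into MCA channels; return per-channel counts."""
    if spectrum is None:
        spectrum = np.zeros(N_CHANNELS, dtype=np.int64)
    ch = np.asarray(pulse_heights) * N_CHANNELS // ADC_MAX
    np.add.at(spectrum, ch.clip(0, N_CHANNELS - 1), 1)
    return spectrum

# Simulated pulse stream: a photopeak near mid-scale plus flat background.
rng = np.random.default_rng(0)
pulses = np.concatenate([
    rng.normal(2048, 30, 5000).astype(np.int64),  # peak
    rng.integers(0, ADC_MAX, 2000),               # background
])
spectrum = accumulate(pulses)

# Distribute the spectrum in a non-proprietary, standard format.
payload = json.dumps({"channels": N_CHANNELS, "counts": spectrum.tolist()})
```

    A real analyzer would run `accumulate` continuously against the digitizer stream and serve `payload` over HTTP or a similar standard protocol.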

  9. Attention Modulates the Auditory Cortical Processing of Spatial and Category Cues in Naturalistic Auditory Scenes

    PubMed Central

    Renvall, Hanna; Staeren, Noël; Barz, Claudia S.; Ley, Anke; Formisano, Elia

    2016-01-01

    This combined fMRI and MEG study investigated brain activations during listening and attending to natural auditory scenes. We first recorded, using in-ear microphones, vocal non-speech sounds, and environmental sounds that were mixed to construct auditory scenes containing two concurrent sound streams. During the brain measurements, subjects attended to one of the streams while spatial acoustic information of the scene was either preserved (stereophonic sounds) or removed (monophonic sounds). Compared to monophonic sounds, stereophonic sounds evoked larger blood-oxygenation-level-dependent (BOLD) fMRI responses in the bilateral posterior superior temporal areas, independent of which stimulus attribute the subject was attending to. This finding is consistent with the functional role of these regions in the (automatic) processing of auditory spatial cues. Additionally, significant differences in the cortical activation patterns depending on the target of attention were observed. Bilateral planum temporale and inferior frontal gyrus were preferentially activated when attending to stereophonic environmental sounds, whereas when subjects attended to stereophonic voice sounds, the BOLD responses were larger at the bilateral middle superior temporal gyrus and sulcus, previously reported to show voice sensitivity. In contrast, the time-resolved MEG responses were stronger for mono- than stereophonic sounds in the bilateral auditory cortices at ~360 ms after the stimulus onset when attending to the voice excerpts within the combined sounds. The observed effects suggest that during the segregation of auditory objects from the auditory background, spatial sound cues together with other relevant temporal and spectral cues are processed in an attention-dependent manner at the cortical locations generally involved in sound recognition. 
More synchronous neuronal activation during monophonic than stereophonic sound processing, as well as (local) neuronal inhibitory mechanisms in

  10. Multichannel analysis of surface waves

    USGS Publications Warehouse

    Park, C.B.; Miller, R.D.; Xia, J.

    1999-01-01

    The frequency-dependent properties of Rayleigh-type surface waves can be utilized for imaging and characterizing the shallow subsurface. Most surface-wave analysis relies on the accurate calculation of phase velocities for the horizontally traveling fundamental-mode Rayleigh wave acquired by stepping out a pair of receivers at intervals based on calculated ground roll wavelengths. Interference by coherent source-generated noise inhibits the reliability of shear-wave velocities determined through inversion of the whole wave field. Among these nonplanar, nonfundamental-mode Rayleigh waves (noise) are body waves, scattered and nonsource-generated surface waves, and higher-mode surface waves. The degree to which each of these types of noise contaminates the dispersion curve and, ultimately, the inverted shear-wave velocity profile is dependent on frequency as well as distance from the source. Multichannel recording permits effective identification and isolation of noise according to distinctive trace-to-trace coherency in arrival time and amplitude. An added advantage is the speed and redundancy of the measurement process. Decomposition of a multichannel record into a time variable-frequency format, similar to an uncorrelated Vibroseis record, permits analysis and display of each frequency component in a unique and continuous format. Coherent noise contamination can then be examined and its effects appraised in both frequency and offset space. Separation of frequency components permits real-time maximization of the S/N ratio during acquisition and subsequent processing steps. Linear separation of each ground roll frequency component allows calculation of phase velocities by simply measuring the linear slope of each frequency component. Breaks in coherent surface-wave arrivals, observable on the decomposed record, can be compensated for during acquisition and processing. 
Multichannel recording permits single-measurement surveying of a broad depth range, high levels of
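    The final step described above, calculating a phase velocity from the linear slope of each frequency component across offsets, reduces to fitting the line phi(x) = -2*pi*f*x/c. A synthetic sketch with an assumed velocity, geometry, and single-mode arrival (all illustrative, not field data):

```python
import numpy as np

fs = 1000.0                              # sampling rate (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)        # 1 s records
offsets = np.arange(10.0, 58.0, 2.0)     # receiver offsets (m), assumed
c_true = 200.0                           # phase velocity to recover (m/s)
f0 = 20.0                                # analysis frequency (Hz)

# Synthetic multichannel record: a single-mode arrival, each trace delayed
# by its travel time x / c_true.
record = np.array([np.sin(2 * np.pi * f0 * (t - x / c_true)) for x in offsets])

# Phase of the f0 component of every trace.
k = int(round(f0 * t.size / fs))         # FFT bin holding f0
phase = np.unwrap(np.angle(np.fft.rfft(record, axis=1)[:, k]))

# The linear slope dphi/dx = -2*pi*f0/c gives the phase velocity.
slope = np.polyfit(offsets, phase, 1)[0]
c_est = -2 * np.pi * f0 / slope
```

    Repeating the fit over frequencies yields the dispersion curve that is subsequently inverted for the shear-wave velocity profile; the receiver spacing must keep the per-trace phase step below pi so the unwrapping is unambiguous.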

  11. Midbrain auditory selectivity to natural sounds

    PubMed Central

    Wohlgemuth, Melville J.; Moss, Cynthia F.

    2016-01-01

    This study investigated auditory stimulus selectivity in the midbrain superior colliculus (SC) of the echolocating bat, an animal that relies on hearing to guide its orienting behaviors. Multichannel, single-unit recordings were taken across laminae of the midbrain SC of the awake, passively listening big brown bat, Eptesicus fuscus. Species-specific frequency-modulated (FM) echolocation sound sequences with dynamic spectrotemporal features served as acoustic stimuli along with artificial sound sequences matched in bandwidth, amplitude, and duration but differing in spectrotemporal structure. Neurons in dorsal sensory regions of the bat SC responded selectively to elements within the FM sound sequences, whereas neurons in ventral sensorimotor regions showed broad response profiles to natural and artificial stimuli. Moreover, a generalized linear model (GLM) constructed on responses in the dorsal SC to artificial linear FM stimuli failed to predict responses to natural sounds and vice versa, but the GLM produced accurate response predictions in ventral SC neurons. This result suggests that auditory selectivity in the dorsal extent of the bat SC arises through nonlinear mechanisms, which extract species-specific sensory information. Importantly, auditory selectivity appeared only in responses to stimuli containing the natural statistics of acoustic signals used by the bat for spatial orientation—sonar vocalizations—offering support for the hypothesis that sensory selectivity enables rapid species-specific orienting behaviors. The results of this study are the first, to our knowledge, to show auditory spectrotemporal selectivity to natural stimuli in SC neurons and serve to inform a more general understanding of mechanisms guiding sensory selectivity for natural, goal-directed orienting behaviors. PMID:26884152
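    The GLM analysis described above maps stimulus features to neural responses and then asks whether a model fit on one stimulus class predicts responses to the other. A minimal Poisson GLM of that general kind, fit by gradient ascent on simulated spike counts (the features, ground-truth weights, and learning rate are illustrative assumptions, not the study's model):

```python
import numpy as np

def fit_poisson_glm(X, y, lr=0.05, n_iter=2000):
    """Poisson GLM with log link (rate = exp(X @ w)), fit by gradient
    ascent on the log-likelihood."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        rate = np.exp(X @ w)
        w += lr * X.T @ (y - rate) / len(y)
    return w

rng = np.random.default_rng(0)

# Stimulus features (e.g. energy in a few spectrotemporal bands) and spike
# counts from a ground-truth linear-nonlinear Poisson neuron.
X = 0.5 * rng.standard_normal((500, 3))
w_true = np.array([1.0, -0.5, 0.3])
y = rng.poisson(np.exp(X @ w_true))

w_hat = fit_poisson_glm(X, y)
predicted_rate = np.exp(X @ w_hat)     # model-predicted responses
```

    Cross-prediction, as in the study, would fit `w_hat` on features of one stimulus class (e.g. artificial FM sweeps) and evaluate `predicted_rate` against responses to the other class (natural sonar sequences); a large prediction failure indicates nonlinear, stimulus-specific coding.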

  12. Midbrain auditory selectivity to natural sounds.

    PubMed

    Wohlgemuth, Melville J; Moss, Cynthia F

    2016-03-01

    This study investigated auditory stimulus selectivity in the midbrain superior colliculus (SC) of the echolocating bat, an animal that relies on hearing to guide its orienting behaviors. Multichannel, single-unit recordings were taken across laminae of the midbrain SC of the awake, passively listening big brown bat, Eptesicus fuscus. Species-specific frequency-modulated (FM) echolocation sound sequences with dynamic spectrotemporal features served as acoustic stimuli along with artificial sound sequences matched in bandwidth, amplitude, and duration but differing in spectrotemporal structure. Neurons in dorsal sensory regions of the bat SC responded selectively to elements within the FM sound sequences, whereas neurons in ventral sensorimotor regions showed broad response profiles to natural and artificial stimuli. Moreover, a generalized linear model (GLM) constructed on responses in the dorsal SC to artificial linear FM stimuli failed to predict responses to natural sounds and vice versa, but the GLM produced accurate response predictions in ventral SC neurons. This result suggests that auditory selectivity in the dorsal extent of the bat SC arises through nonlinear mechanisms, which extract species-specific sensory information. Importantly, auditory selectivity appeared only in responses to stimuli containing the natural statistics of acoustic signals used by the bat for spatial orientation-sonar vocalizations-offering support for the hypothesis that sensory selectivity enables rapid species-specific orienting behaviors. The results of this study are the first, to our knowledge, to show auditory spectrotemporal selectivity to natural stimuli in SC neurons and serve to inform a more general understanding of mechanisms guiding sensory selectivity for natural, goal-directed orienting behaviors. PMID:26884152

  14. Forebrain pathway for auditory space processing in the barn owl.

    PubMed

    Cohen, Y E; Miller, G L; Knudsen, E I

    1998-02-01

    The forebrain plays an important role in many aspects of sound localization behavior. Yet, the forebrain pathway that processes auditory spatial information is not known for any species. Using standard anatomic labeling techniques, we took a "top-down" approach to trace the flow of auditory spatial information from an output area of the forebrain sound localization pathway (the auditory archistriatum, AAr), back through the forebrain, and into the auditory midbrain. Previous work has demonstrated that AAr units are specialized for auditory space processing. The results presented here show that the AAr receives afferent input from Field L both directly and indirectly via the caudolateral neostriatum. Afferent input to Field L originates mainly in the auditory thalamus, nucleus ovoidalis, which, in turn, receives input from the central nucleus of the inferior colliculus. In addition, we confirmed previously reported projections of the AAr to the basal ganglia, the external nucleus of the inferior colliculus (ICX), the deep layers of the optic tectum, and various brain stem nuclei. A series of inactivation experiments demonstrated that the sharp tuning of AAr sites for binaural spatial cues depends on Field L input but not on input from the auditory space map in the midbrain ICX: pharmacological inactivation of Field L completely eliminated auditory responses in the AAr, whereas bilateral ablation of the midbrain ICX had no appreciable effect on AAr responses. We conclude, therefore, that the forebrain sound localization pathway can process auditory spatial information independently of the midbrain localization pathway. PMID:9463450

  15. Neural stem/progenitor cell properties of glial cells in the adult mouse auditory nerve

    PubMed Central

    Lang, Hainan; Xing, Yazhi; Brown, LaShardai N.; Samuvel, Devadoss J.; Panganiban, Clarisse H.; Havens, Luke T.; Balasubramanian, Sundaravadivel; Wegner, Michael; Krug, Edward L.; Barth, Jeremy L.

    2015-01-01

    The auditory nerve is the primary conveyor of hearing information from sensory hair cells to the brain. It has been believed that loss of the auditory nerve is irreversible in the adult mammalian ear, resulting in sensorineural hearing loss. We examined the regenerative potential of the auditory nerve in a mouse model of auditory neuropathy. Following neuronal degeneration, quiescent glial cells converted to an activated state showing a decrease in nuclear chromatin condensation, altered histone deacetylase expression and up-regulation of numerous genes associated with neurogenesis or development. Neurosphere formation assays showed that adult auditory nerves contain neural stem/progenitor cells (NSPs) that were within a Sox2-positive glial population. Production of neurospheres from auditory nerve cells was stimulated by acute neuronal injury and hypoxic conditioning. These results demonstrate that a subset of glial cells in the adult auditory nerve exhibit several characteristics of NSPs and are therefore potential targets for promoting auditory nerve regeneration. PMID:26307538

  16. PARATHYROID HORMONE 2 RECEPTOR AND ITS ENDOGENOUS LIGAND TIP39 ARE CONCENTRATED IN ENDOCRINE, VISCEROSENSORY AND AUDITORY BRAIN REGIONS IN MACAQUE AND HUMAN

    PubMed Central

    Bagó, Attila G.; Dimitrov, Eugene; Saunders, Richard; Seress, László; Palkovits, Miklós; Usdin, Ted B.; Dobolyi, Arpád

    2009-01-01

    Parathyroid hormone receptor 2 (PTH2R) and its ligand, tuberoinfundibular peptide of 39 residues (TIP39), constitute a neuromodulator system implicated in endocrine and nociceptive regulation. We now describe the presence and distribution of the PTH2R and TIP39 in the brain of primates, using a range of tissues and ages from macaque and human brain. TIP39 mRNA, examined by in situ hybridization histochemistry in young macaque brain because of its possible decline beyond late postnatal ages, was detected only in the thalamic subparafascicular area and the pontine medial paralemniscal nucleus. In contrast, in situ hybridization histochemistry in macaque identified high levels of PTH2R expression in the central amygdaloid nucleus, medial preoptic area, hypothalamic paraventricular and periventricular nuclei, medial geniculate, and the pontine tegmentum. PTH2R mRNA was also detected in several human brain areas by RT-PCR. The distribution of PTH2R-immunoreactive fibers in human, determined by immunocytochemistry, was similar to that in rodents, including dense fiber networks in the medial preoptic area, hypothalamic paraventricular, periventricular and infundibular (arcuate) nuclei, lateral hypothalamic area, median eminence, thalamic paraventricular nucleus, periaqueductal gray, lateral parabrachial nucleus, nucleus of the solitary tract, sensory trigeminal nuclei, medullary dorsal reticular nucleus, and dorsal horn of the spinal cord. Co-localization suggested that PTH2R fibers are glutamatergic, and that TIP39 may directly influence hypophysiotropic somatostatin-containing neurons and indirectly influence corticotropin-releasing-hormone-containing neurons. The results demonstrate that TIP39 and the PTH2R are expressed in the brain of primates in locations that suggest involvement in the regulation of fear, anxiety, reproductive behaviors, release of pituitary hormones, and nociception. PMID:19401215

  17. Conserved mechanisms of vocalization coding in mammalian and songbird auditory midbrain.

    PubMed

    Woolley, Sarah M N; Portfors, Christine V

    2013-11-01

    The ubiquity of social vocalizations among animals provides the opportunity to identify conserved mechanisms of auditory processing that subserve communication. Identifying auditory coding properties that are shared across vocal communicators will provide insight into how human auditory processing leads to speech perception. Here, we compare auditory response properties and neural coding of social vocalizations in auditory midbrain neurons of mammalian and avian vocal communicators. The auditory midbrain is a nexus of auditory processing because it receives and integrates information from multiple parallel pathways and provides the ascending auditory input to the thalamus. The auditory midbrain is also the first region in the ascending auditory system where neurons show complex tuning properties that are correlated with the acoustics of social vocalizations. Single-unit studies in mice, bats and zebra finches reveal shared principles of auditory coding, including tonotopy; excitatory and inhibitory interactions that shape responses to vocal signals; nonlinear response properties that are important for auditory coding of social vocalizations; and modulation tuning. Additionally, single-neuron responses in the mouse and songbird midbrain are reliable, selective for specific syllables, and rely on spike timing for neural discrimination of distinct vocalizations. We propose that future research on auditory coding of vocalizations in mouse and songbird midbrain neurons adopt similar experimental and analytical approaches so that conserved principles of vocalization coding may be distinguished from those that are specialized for each species. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".

  18. Altered auditory function in rats exposed to hypergravic fields

    NASA Technical Reports Server (NTRS)

    Jones, T. A.; Hoffman, L.; Horowitz, J. M.

    1982-01-01

    The effect of an orthodynamic hypergravic field of 6 G on the brainstem auditory projections was studied in rats. The brain temperature and EEG activity were recorded in the rats during 6 G orthodynamic acceleration and auditory brainstem responses were used to monitor auditory function. Results show that all animals exhibited auditory brainstem responses which indicated impaired conduction and transmission of brainstem auditory signals during the exposure to the 6 G acceleration field. Significant increases in central conduction time were observed for peaks 3N, 4P, 4N, and 5P (N = negative, P = positive), while the absolute latency values for these same peaks were also significantly increased. It is concluded that these results, along with those for fields below 4 G (Jones and Horowitz, 1981), indicate that impaired function proceeds in a rostro-caudal progression as field strength is increased.

  19. Anatomy, Physiology and Function of the Auditory System

    NASA Astrophysics Data System (ADS)

    Kollmeier, Birger

    The human ear consists of the outer ear (pinna or concha, outer ear canal, tympanic membrane), the middle ear (middle ear cavity with the three ossicles malleus, incus and stapes) and the inner ear (cochlea, which is connected to the three semicircular canals by the vestibule, which provides the sense of balance). The cochlea is connected to the brain stem via the eighth cranial nerve, i.e. the vestibulocochlear nerve or nervus statoacusticus. Subsequently, the acoustical information is processed by the brain at various levels of the auditory system. An overview of the anatomy of the auditory system is provided in Figure 1.

  20. The harmonic organization of auditory cortex.

    PubMed

    Wang, Xiaoqin

    2013-01-01

    A fundamental structure of sounds encountered in the natural environment is harmonicity. Harmonicity is an essential component of music found in all cultures. It is also a unique feature of vocal communication sounds such as human speech and animal vocalizations. Harmonics in sounds are produced by a variety of acoustic generators and reflectors in the natural environment, including the vocal apparatuses of humans and animal species as well as musical instruments of many types. We live in an acoustic world full of harmonicity. Given the widespread existence of harmonicity in many aspects of the hearing environment, it is natural to expect that it be reflected in the evolution and development of the auditory systems of both humans and animals, in particular the auditory cortex. Recent neuroimaging and neurophysiology experiments have identified regions of non-primary auditory cortex in humans and non-human primates that have selective responses to harmonic pitches. Accumulating evidence has also shown that neurons in many regions of the auditory cortex exhibit characteristic responses to harmonically related frequencies beyond the range of pitch. Together, these findings suggest that a fundamental organizational principle of auditory cortex is based on harmonicity. Such an organization likely plays an important role in music processing by the brain. It may also form the basis of the preference for particular classes of music and voice sounds. PMID:24381544
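    The harmonic structure discussed above is easy to make concrete: a harmonic complex tone is simply a sum of sinusoids at integer multiples of a fundamental frequency. A minimal NumPy sketch (the 200 Hz fundamental, harmonic count, and equal amplitudes are illustrative choices, not parameters from the article):

```python
import numpy as np

def harmonic_complex(f0, n_harmonics, duration, fs=44100):
    """Equal-amplitude sinusoids at integer multiples of f0, peak-normalized."""
    t = np.arange(int(duration * fs)) / fs
    tone = sum(np.sin(2 * np.pi * k * f0 * t) for k in range(1, n_harmonics + 1))
    return tone / n_harmonics

# A 200 Hz fundamental with 5 harmonics, 0.5 s at CD sampling rate
signal = harmonic_complex(200.0, 5, 0.5)
```

    Varying the relative harmonic amplitudes changes the timbre while the perceived pitch typically stays at f0, which is the kind of harmonic pitch selectivity the abstract describes.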

  1. Auditory-motor learning influences auditory memory for music.

    PubMed

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features. PMID:22271265

  3. A Student-Made Inexpensive Multichannel Pipet

    ERIC Educational Resources Information Center

    Dragojlovic, Veljko

    2009-01-01

    An inexpensive multichannel pipet designed to deliver small volumes of liquid simultaneously to wells in a multiwell plate can be prepared by students in a single laboratory period. The multichannel pipet is made of disposable plastic 1 mL syringes and drilled plastic plates, which are used to make plunger and barrel assemblies. Application of the…

  4. Unconscious learning of auditory discrimination using mismatch negativity (MMN) neurofeedback.

    PubMed

    Chang, Ming; Iizuka, Hiroyuki; Naruse, Yasushi; Ando, Hideyuki; Maeda, Taro

    2014-10-24

    Neurofeedback is a powerful method for directly training brain function, wherein brain activity patterns are measured and displayed as feedback, and trainees try to stabilize the feedback signal onto certain desirable states to regulate their own mental states. Here, we introduce a novel neurofeedback method using the mismatch negativity (MMN) responses elicited by similar sounds that cannot be consciously discriminated. Through neurofeedback training, without participants' attention to the auditory stimuli or awareness of what was to be learned, we found that participants could unconsciously achieve a significant improvement in auditory discrimination of the applied stimuli. Our method has great potential to provide effortless auditory perceptual training. With this method, participants do not need to make an effort to discriminate auditory stimuli and can choose tasks of interest without the boredom that training often entails. In particular, it could be used to train people to recognize speech sounds that do not exist in their native language and thereby facilitate foreign language learning.
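    The MMN signal used for feedback in such paradigms is conventionally derived as the difference between the averaged responses to deviant and standard sounds. A schematic computation on synthetic epochs (the trial counts, epoch length, and response window below are illustrative, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
n_times = 300  # samples per 600 ms epoch at an illustrative 500 Hz

# Synthetic baseline-corrected single-trial epochs (trials x samples);
# a real pipeline would use EEG epochs time-locked to each sound.
standard = rng.normal(0.0, 1.0, (200, n_times))
deviant = rng.normal(0.0, 1.0, (40, n_times))
deviant[:, 100:150] -= 2.0  # injected negativity in the MMN latency range

# MMN difference wave: averaged deviant minus averaged standard response
mmn = deviant.mean(axis=0) - standard.mean(axis=0)
mmn_amplitude = mmn[100:150].mean()  # mean amplitude in the MMN window
```

    A neurofeedback loop would map `mmn_amplitude` (or a related statistic) onto the feedback display that trainees try to drive toward the desired state.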

  5. Tuning out the noise: Limbic-auditory interactions in tinnitus

    PubMed Central

    Rauschecker, Josef P.; Leaver, Amber M.; Mühlau, Mark

    2010-01-01

    Tinnitus, the most common auditory disorder, affects about 40 million people in the United States alone, and its incidence is rising due to an aging population and increasing noise exposure. Although several approaches for the alleviation of tinnitus exist, there is as of yet no cure. The present article proposes a testable model for tinnitus that is grounded in recent findings from human imaging and focuses on brain areas in cortex, thalamus, and ventral striatum. Limbic and auditory brain areas are thought to interact at the thalamic level. While a tinnitus signal originates from lesion-induced plasticity of the auditory pathways, it can be tuned out by feedback connections from limbic regions, which block the tinnitus signal from reaching auditory cortex. If the limbic regions are compromised, this “noise-cancellation” mechanism breaks down, and chronic tinnitus results. Hopefully, this model will ultimately enable the development of effective treatment. PMID:20620868

  6. Lateralization of auditory-cortex functions.

    PubMed

    Tervaniemi, Mari; Hugdahl, Kenneth

    2003-12-01

    In the present review, we summarize the most recent findings and current views about the structural and functional basis of human brain lateralization in the auditory modality. The main emphasis is on hemodynamic and electromagnetic data from healthy adult participants with regard to music- vs. speech-sound encoding. Moreover, a selective set of behavioral dichotic-listening (DL) results and clinical findings (e.g., schizophrenia, dyslexia) are included. It is shown that the human brain has a strong predisposition to process speech sounds in the left and music sounds in the right auditory cortex in the temporal lobe. To a great extent, an auditory area located at the posterior end of the temporal lobe (the planum temporale [PT]) underlies this functional asymmetry. However, the predisposition is not bound to informational sound content but to rapid temporal information, which is more common in speech than in music sounds. Finally, we present evidence for the vulnerability of the functional specialization of sound processing. These altered forms of lateralization may be caused by top-down and bottom-up effects, both inter- and intraindividually. In other words, relatively small changes in acoustic sound features or in their familiarity may modify the degree to which the left vs. right auditory areas contribute to sound encoding. PMID:14629926

  7. Development of multichannel MEG system at IGCAR

    NASA Astrophysics Data System (ADS)

    Mariyappa, N.; Parasakthi, C.; Gireesan, K.; Sengottuvel, S.; Patel, Rajesh; Janawadkar, M. P.; Radhakrishnan, T. S.; Sundar, C. S.

    2013-02-01

    We describe some of the challenging aspects of the indigenous development of the whole-head multichannel magnetoencephalography (MEG) system at IGCAR, Kalpakkam. These are: i) fabrication and testing of a helmet-shaped sensor array holder made of a polymeric material experimentally tested to be compatible with liquid helium temperatures, ii) the design and fabrication of the PCB adapter modules, keeping in mind inter-track crosstalk between the electrical leads used to provide connections from the SQUIDs at liquid helium temperature (4.2 K) to the electronics at room temperature (300 K), and iii) the use of high-resistance manganin wires for the 86 channels (86×8 leads), essential to reduce the total heat leak, which, however, inevitably attenuates the SQUID output signal due to the voltage drop in the leads. We have presently populated 22 of the 86 channels, which include 6 reference channels to reject common-mode noise. The whole-head MEG system covering all the lobes of the brain will be progressively assembled when the other three PCB adapter modules, presently under fabrication, become available. The MEG system will be used for a variety of basic and clinical studies, including localization of epileptic foci during pre-surgical mapping in collaboration with neurologists.

  8. Least squares restoration of multichannel images

    NASA Technical Reports Server (NTRS)

    Galatsanos, Nikolas P.; Katsaggelos, Aggelos K.; Chin, Roland T.; Hillery, Allen D.

    1991-01-01

    Multichannel restoration using both within- and between-channel deterministic information is considered. A multichannel image is a set of image planes that exhibit cross-plane similarity. Existing optimal restoration filters for single-plane images yield suboptimal results when applied to multichannel images, since between-channel information is not utilized. Multichannel least squares restoration filters are developed using the set theoretic and the constrained optimization approaches. A geometric interpretation of the estimates of both filters is given. Color images (three-channel imagery with red, green, and blue components) are considered. Constraints that capture the within- and between-channel properties of color images are developed. Issues associated with the computation of the two estimates are addressed. A spatially adaptive, multichannel least squares filter that utilizes local within- and between-channel image properties is proposed. Experiments using color images are described.
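    The idea of combining within- and between-channel constraints can be sketched as a regularized least-squares problem on a toy 1-D, three-channel signal. The blur model, penalty operators, and weights below are illustrative stand-ins, not the filters developed in the paper:

```python
import numpy as np

n, c = 32, 3  # samples per channel; channels (e.g. R, G, B planes)

# Within-channel degradation: circular 3-tap moving-average blur
H1 = sum(np.roll(np.eye(n), k, axis=1) for k in (-1, 0, 1)) / 3.0
H = np.kron(np.eye(c), H1)  # block-diagonal multichannel blur

# Within-channel smoothness penalty: circular first-difference operator
D = np.kron(np.eye(c), np.eye(n) - np.roll(np.eye(n), 1, axis=1))

# Between-channel similarity penalty: differences of adjacent channels
B = np.kron(np.eye(c)[:-1] - np.eye(c)[1:], np.eye(n))

x_true = np.tile(np.sin(np.linspace(0.0, 2.0 * np.pi, n)), c)
y = H @ x_true + 0.01 * np.random.default_rng(1).normal(size=n * c)

lam_w, lam_b = 0.1, 0.05  # illustrative penalty weights
A = H.T @ H + lam_w * D.T @ D + lam_b * B.T @ B
x_hat = np.linalg.solve(A, H.T @ y)  # regularized least-squares restoration
```

    The between-channel term penalizes differences between corresponding samples of adjacent channels, one simple way to encode the cross-plane similarity that single-channel filters ignore.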

  9. Virtual Microphones for Multichannel Audio Resynthesis

    NASA Astrophysics Data System (ADS)

    Mouchtaris, Athanasios; Narayanan, Shrikanth S.; Kyriakakis, Chris

    2003-12-01

    Multichannel audio offers significant advantages for music reproduction, including the ability to provide better localization and envelopment, as well as reduced imaging distortion. On the other hand, multichannel audio is a demanding media type in terms of transmission requirements. Often, bandwidth limitations prohibit transmission of multiple audio channels. In such cases, an alternative is to transmit only one or two reference channels and recreate the rest of the channels at the receiving end. Here, we propose a system capable of synthesizing the required signals from a smaller set of signals recorded in a particular venue. These synthesized "virtual" microphone signals can be used to produce multichannel recordings that accurately capture the acoustics of that venue. Applications of the proposed system include transmission of multichannel audio over the current Internet infrastructure and, as an extension of the methods proposed here, remastering existing monophonic and stereophonic recordings for multichannel rendering.
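    As a simple baseline for recreating a missing channel from a transmitted reference (not the spectral-conversion method of the paper), one can estimate a least-squares FIR mapping from the reference to the target on a training excerpt recorded in the venue. The kernel and filter length below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000
ref = rng.normal(size=n)  # reference (transmitted) channel

# Ground-truth response from the reference to the "missing" microphone
h_true = np.array([0.8, 0.3, -0.2, 0.1])
target = np.convolve(ref, h_true)[:n]

# Estimate an FIR mapping by least squares on the training excerpt
L = 8  # filter length in taps (illustrative)
X = np.column_stack([np.roll(ref, k) for k in range(L)])
X[:L] = 0.0  # discard rows contaminated by circular wrap-around
h_hat = np.linalg.lstsq(X, target, rcond=None)[0]

virtual = np.convolve(ref, h_hat)[:n]  # synthesized virtual channel
```

    In this noiseless toy setting the estimated taps recover the true response; with real room acoustics one would need far longer filters and regularization.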

  10. The Distributed Auditory Cortex

    PubMed Central

    Winer, Jeffery A.; Lee, Charles C.

    2009-01-01

    A synthesis of cat auditory cortex (AC) organization is presented in which the extrinsic and intrinsic connections interact to derive a unified profile of the auditory stream and use it to direct and modify cortical and subcortical information flow. Thus, the thalamocortical input provides essential sensory information about peripheral stimulus events, which AC redirects locally for feature extraction, and then conveys to parallel auditory, multisensory, premotor, limbic, and cognitive centers for further analysis. The corticofugal output influences areas as remote as the pons and the cochlear nucleus, structures whose effects upon AC are entirely indirect, and has diverse roles in the transmission of information through the medial geniculate body and inferior colliculus. The distributed AC is thus construed as a functional network in which the auditory percept is assembled for subsequent redistribution in sensory, premotor, and cognitive streams contingent on the derived interpretation of the acoustic events. The confluence of auditory and multisensory streams likely precedes cognitive processing of sound. The distributed AC constitutes the largest and arguably the most complete representation of the auditory world. Many facets of this scheme may apply in rodent and primate AC as well. We propose that the distributed auditory cortex contributes to local processing regimes in regions as disparate as the frontal pole and the cochlear nucleus to construct the acoustic percept. PMID:17329049

  11. Auditory cortex involvement in emotional learning and memory.

    PubMed

    Grosso, A; Cambiaghi, M; Concina, G; Sacco, T; Sacchetti, B

    2015-07-23

    Emotional memories represent the core of human and animal life and drive future choices and behaviors. Early research involving brain lesion studies in animals led to the idea that the auditory cortex participates in emotional learning by processing the sensory features of auditory stimuli paired with emotional consequences and by transmitting this information to the amygdala. Nevertheless, electrophysiological and imaging studies revealed that, following emotional experiences, the auditory cortex undergoes learning-induced changes that are highly specific, associative and long-lasting. These studies suggested that the role played by the auditory cortex goes beyond stimulus elaboration and transmission. Here, we discuss three major perspectives created by these data. In particular, we analyze the possible roles of the auditory cortex in emotional learning, we examine the recruitment of the auditory cortex during early and late memory trace encoding, and finally we consider the functional interplay between the auditory cortex and subcortical nuclei, such as the amygdala, that process affective information. We conclude that, starting from the early phase of memory encoding, the auditory cortex has a more prominent role in emotional learning, through its connections with subcortical nuclei, than is typically acknowledged.

  12. Cerebral responses to local and global auditory novelty under general anesthesia.

    PubMed

    Uhrig, Lynn; Janssen, David; Dehaene, Stanislas; Jarraya, Béchir

    2016-11-01

    Primate brains can detect a variety of unexpected deviations in auditory sequences. The local-global paradigm dissociates two hierarchical levels of auditory predictive coding by examining the brain responses to first-order (local) and second-order (global) sequence violations. Using the macaque model, we previously demonstrated that, in the awake state, local violations cause focal auditory responses while global violations activate a brain circuit comprising prefrontal, parietal and cingulate cortices. Here we used the same local-global auditory paradigm to clarify the encoding of the hierarchical auditory regularities in anesthetized monkeys and compared their brain responses to those obtained in the awake state as measured with fMRI. Both propofol, a GABAA agonist, and ketamine, an NMDA antagonist, left intact or even enhanced the cortical response to auditory inputs. The local effect vanished during propofol anesthesia and shifted spatially during ketamine anesthesia compared with wakefulness. Under increasing levels of propofol, we observed a progressive disorganization of the global effect in prefrontal, parietal and cingulate cortices and its complete suppression under ketamine anesthesia. Anesthesia also suppressed thalamic activations to the global effect. These results suggest that anesthesia preserves initial auditory processing but disturbs both short-term and long-term auditory predictive coding mechanisms. The disorganization of auditory novelty processing under anesthesia relates to a loss of thalamic responses to novelty and to a disruption of higher-order functional cortical networks in parietal, prefrontal and cingulate cortices.

  14. Salicylate-induced peripheral auditory changes and tonotopic reorganization of auditory cortex

    PubMed Central

    Stolzberg, Daniel; Chen, Guang-Di; Allman, Brian L.; Salvi, Richard J.

    2011-01-01

    The neuronal mechanism underlying the phantom auditory perception of tinnitus remains at present elusive. For over 25 years, temporary tinnitus following acute salicylate intoxication in rats has been used as a model to understand how a phantom sound can be generated. Behavioral studies have indicated that the pitch of salicylate-induced tinnitus in the rat is approximately 16 kHz. In order to better understand the origin of the tinnitus pitch, in the present study, measurements were made at the levels of auditory input and output; both cochlear and cortical physiological recordings were performed in ketamine/xylazine-anesthetized rats. Both compound action potential and distortion product otoacoustic emission measurements revealed a salicylate-induced band-pass-like cochlear deficit in which the reduction of cochlear input was least at 16 kHz and significantly greater at high and low frequencies. In a separate group of rats, frequency receptive fields of primary auditory cortex neurons were tracked using multichannel microelectrodes before and after systemic salicylate treatment. Tracking frequency receptive fields following salicylate revealed a population of neurons that shifted their frequency of maximum sensitivity (i.e., characteristic frequency) towards the tinnitus frequency region of the tonotopic axis (~16 kHz). The data presented here support the hypothesis that salicylate-induced tinnitus results from an expanded cortical representation of the tinnitus pitch determined by an altered profile of input from the cochlea. Moreover, the pliability of cortical frequency receptive fields during salicylate-induced tinnitus is likely due to salicylate’s direct action on intracortical inhibitory networks. Such a disproportionate representation of middle frequencies in the auditory cortex following salicylate may result in a finer analysis of signals within this region, which may pathologically enhance the functional importance of spurious neuronal activity.

  15. Processing of spatial sounds in human auditory cortex during visual, discrimination and 2-back tasks

    PubMed Central

    Rinne, Teemu; Ala-Salomäki, Heidi; Stecker, G. Christopher; Pätynen, Jukka; Lokki, Tapio

    2014-01-01

    Previous imaging studies on the brain mechanisms of spatial hearing have mainly focused on sounds varying in the horizontal plane. In this study, we compared activations in human auditory cortex (AC) and adjacent inferior parietal lobule (IPL) to sounds varying in horizontal location, distance, or space (i.e., different rooms). In order to investigate both stimulus-dependent and task-dependent activations, these sounds were presented during visual discrimination, auditory discrimination, and auditory 2-back memory tasks. Consistent with previous studies, activations in AC were modulated by the auditory tasks. During both auditory and visual tasks, activations in AC were stronger to sounds varying in horizontal location than along other feature dimensions. However, in IPL, this enhancement was detected only during auditory tasks. Based on these results, we argue that IPL is not primarily involved in stimulus-level spatial analysis but that it may represent such information for more general processing when relevant to an active auditory task. PMID:25120423

  16. Auditory motion affects visual biological motion processing.

    PubMed

    Brooks, A; van der Zwan, R; Billard, A; Petreska, B; Clarke, S; Blanke, O

    2007-02-01

    The processing of biological motion is a critical, everyday task performed with remarkable efficiency by human sensory systems. Interest in this ability has focused to a large extent on biological motion processing in the visual modality (see, for example, Cutting, J. E., Moore, C., & Morrison, R. (1988). Masking the motions of human gait. Perception and Psychophysics, 44(4), 339-347). In naturalistic settings, however, it is often the case that biological motion is defined by input to more than one sensory modality. For this reason, here in a series of experiments we investigate behavioural correlates of multisensory, in particular audiovisual, integration in the processing of biological motion cues. More specifically, using a new psychophysical paradigm we investigate the effect of suprathreshold auditory motion on perceptions of visually defined biological motion. Unlike data from previous studies investigating audiovisual integration in linear motion processing [Meyer, G. F. & Wuerger, S. M. (2001). Cross-modal integration of auditory and visual motion signals. Neuroreport, 12(11), 2557-2560; Wuerger, S. M., Hofbauer, M., & Meyer, G. F. (2003). The integration of auditory and motion signals at threshold. Perception and Psychophysics, 65(8), 1188-1196; Alais, D. & Burr, D. (2004). No direction-specific bimodal facilitation for audiovisual motion detection. Cognitive Brain Research, 19, 185-194], we report the existence of direction-selective effects: relative to control (stationary) auditory conditions, auditory motion in the same direction as the visually defined biological motion target increased its detectability, whereas auditory motion in the opposite direction had the inverse effect. Our data suggest these effects do not arise through general shifts in visuo-spatial attention, but instead are a consequence of motion-sensitive, direction-tuned integration mechanisms that are, if not unique to biological visual motion, at least not common to all types of

  17. Multi-channel fiber photometry for population neuronal activity recording.

    PubMed

    Guo, Qingchun; Zhou, Jingfeng; Feng, Qiru; Lin, Rui; Gong, Hui; Luo, Qingming; Zeng, Shaoqun; Luo, Minmin; Fu, Ling

    2015-10-01

    Fiber photometry has become increasingly popular among neuroscientists as a convenient tool for recording genetically defined neuronal populations in behaving animals. Here, we report the development of a multi-channel fiber photometry system to simultaneously monitor neural activities in several brain areas of an animal or in different animals. In this system, a galvano-mirror modulates and cyclically couples the excitation light to individual multimode optical fiber bundles. A single photodetector collects the emitted fluorescence, and the configuration of the fiber bundle assembly and the scanner determines the total channel number. We demonstrated that the system exhibited negligible crosstalk between channels and that optical signals could be sampled simultaneously at a rate of at least 100 Hz per channel, which is sufficient for recording calcium signals. Using this system, we successfully recorded GCaMP6 fluorescence signals from the bilateral barrel cortices of a head-restrained mouse in a dual-channel mode, and from the orbitofrontal cortices of multiple freely moving mice in a triple-channel mode. The multi-channel fiber photometry system should be a valuable tool for simultaneous recording of population activity in different brain areas of a given animal and in different interacting individuals.
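    Because the galvano-mirror cyclically couples excitation into the fiber bundles and a single photodetector collects the result, demultiplexing reduces to binning the detector stream by position within each scan cycle. A schematic sketch (the channel count, dwell length, and cycle structure are illustrative, not the system's actual timing):

```python
import numpy as np

n_channels = 3   # fiber bundles sharing one photodetector
dwell = 10       # detector samples while the mirror rests on a channel
n_cycles = 100   # full scan cycles -> 100 samples per channel

# Synthetic interleaved stream: channel k carries a constant level k + 1
cycle = np.concatenate([np.full(dwell, k + 1.0) for k in range(n_channels)])
stream = np.tile(cycle, n_cycles)

# Demultiplex: reshape to (cycle, channel, dwell) and average each dwell
frames = stream.reshape(n_cycles, n_channels, dwell)
signals = frames.mean(axis=2)  # one time series per channel
```

    Averaging over each dwell trades detector bandwidth for noise reduction; the effective per-channel sample rate is the scan-cycle rate.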

  18. [Central auditory prosthesis].

    PubMed

    Lenarz, T; Lim, H; Joseph, G; Reuter, G; Lenarz, M

    2009-06-01

    Deaf patients with severe sensory hearing loss can benefit from a cochlear implant (CI), which stimulates the auditory nerve fibers. However, patients who do not have an intact auditory nerve cannot benefit from a CI. The majority of these patients are neurofibromatosis type 2 (NF2) patients who developed neural deafness due to the growth or surgical removal of bilateral acoustic neuromas. The only current solution is the auditory brainstem implant (ABI), which stimulates the surface of the cochlear nucleus in the brainstem. Although the ABI provides improvement in environmental awareness and lip-reading capabilities, only a few NF2 patients have achieved some limited open-set speech perception. In the search for alternative procedures, our research group, in collaboration with Cochlear Ltd. (Australia), developed a human prototype auditory midbrain implant (AMI), which is designed to electrically stimulate the inferior colliculus (IC). The IC has potential as a new target for an auditory prosthesis, as it provides access to neural projections necessary for speech perception as well as a systematic map of spectral information. In this paper, the present status of research and development in the field of central auditory prostheses is presented with respect to technology, surgical technique and hearing results, as well as the background concepts of the ABI and AMI. PMID:19517084

  19. Auditory Spatial Layout

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  20. Multichannel laser-fiber vibrometer

    NASA Astrophysics Data System (ADS)

    Dudzik, Grzegorz; Waz, Adam; Kaczmarek, Pawel; Antonczak, Arkadiusz; Sotor, Jaroslaw; Krzempek, Karol; Sobon, Grzegorz; Abramski, Krzysztof M.

    2013-01-01

    For the last few years we have been developing a laser-fiber vibrometer working at 1550 nm. Our main effort was directed towards several aspects of the problem: analysis of scattered light, efficient photodetection, optimization of the fiber/free-space interfaces, and signal processing. As a consequence we proposed the idea of a multichannel fiber vibrometer based on a well-developed telecommunication technique, Wavelength Division Multiplexing (WDM). One of the most important parts of a fiber-laser vibrometer is the electronic demodulation section. Distortion, nonlinearity, offset and added noise in the measured signal originate in the electronic circuits and directly influence the final measurement results. We present the results of the completed project "Developing novel laser-fiber monitoring technologies to prevent environmental hazards from vibrating objects", in which we constructed a 4-channel WDM laser-fiber vibrometer.

  1. Multichannel Spectrometer of Time Distribution

    NASA Astrophysics Data System (ADS)

    Akindinova, E. V.; Babenko, A. G.; Vakhtel, V. M.; Evseev, N. A.; Rabotkin, V. A.; Kharitonova, D. D.

    2015-06-01

    For research on and control of the characteristics of radiation fluxes, in particular from radioactive sources (see, for example, paper [1]), a spectrometer and methods of data measurement and processing were created, based on the MC-2A multichannel counter of time intervals between random event arrivals (particle-detector pulses) (SPC "ASPECT"). The spectrometer has four independent channels for registering the arrival times of pulses, and corresponding amplitude-spectrometric channels for monitoring, via the energy spectra, the stationarity of operation of each signal path from detector to amplifier. Alpha radiation is registered by semiconductor detectors with an energy resolution of 16-30 keV. Using the spectrometer, measurements were taken of fluctuations in the intensity of the 239-Pu alpha-radiation flux, with subsequent autocorrelation statistical analysis of the time series of counts.
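
    The autocorrelation analysis of the count time series can be sketched with the standard sample-autocorrelation estimate (a generic textbook formula, not the authors' processing pipeline; the count values in the example are invented for illustration):

    ```python
    def autocorrelation(x, max_lag):
        """Sample autocorrelation of a time series of counts, lags 0..max_lag."""
        n = len(x)
        mean = sum(x) / n
        var = sum((v - mean) ** 2 for v in x) / n
        acf = []
        for k in range(max_lag + 1):
            # Lag-k autocovariance, normalized by the lag-0 variance.
            cov = sum((x[i] - mean) * (x[i + k] - mean)
                      for i in range(n - k)) / n
            acf.append(cov / var)
        return acf

    # Counts registered in successive time intervals (illustrative values);
    # acf[0] is 1 by construction, and deviations of later lags from zero
    # indicate correlated fluctuations or non-stationarity of the flux.
    acf = autocorrelation([3, 5, 2, 4, 6, 3, 5, 4], 3)
    ```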

  2. Multi-channel polarized thermal emitter

    DOEpatents

    Lee, Jae-Hwang; Ho, Kai-Ming; Constant, Kristen P

    2013-07-16

    A multi-channel polarized thermal emitter (PTE) is presented. The multi-channel PTE can emit polarized thermal radiation without using a polarizer at normal emergence. It consists of two layers of metallic gratings on a monolithic, homogeneous metallic plate and can be fabricated by a low-cost soft-lithography technique called two-polymer microtransfer molding. The spectral positions of the mid-infrared (MIR) radiation peaks can be tuned by changing the periodicity of the gratings, and the spectral separation between peaks is tuned by changing the mutual angle between the orientations of the two gratings.

  3. A Multichannel Bioluminescence Determination Platform for Bioassays.

    PubMed

    Kim, Sung-Bae; Naganawa, Ryuichi

    2016-01-01

    The present protocol introduces a multichannel bioluminescence determination platform allowing high-throughput determination of weak bioluminescence with reduced standard deviations. The platform is designed to carry a multichannel conveyer, an optical filter, and a mirror cap. It enables us to near-simultaneously determine ligands in multiple samples without replacing the sample tubes. Furthermore, the optical filters beneath the multichannel conveyer are designed to easily discriminate colors during assays. This optical system provides excellent time- and labor-efficiency to users during bioassays. PMID:27424912

  5. Auditory perception vs. recognition: representation of complex communication sounds in the mouse auditory cortical fields.

    PubMed

    Geissler, Diana B; Ehret, Günter

    2004-02-01

    Details of brain areas for acoustical Gestalt perception and the recognition of species-specific vocalizations are not known. Here we show how spectral properties and the recognition of the acoustical Gestalt of wriggling calls of mouse pups based on a temporal property are represented in auditory cortical fields and an association area (dorsal field) of the pups' mothers. We stimulated either with a call model releasing maternal behaviour at a high rate (call recognition) or with two models of low behavioural significance (perception without recognition). Brain activation was quantified using c-Fos immunocytochemistry, counting Fos-positive cells in electrophysiologically mapped auditory cortical fields and the dorsal field. A frequency-specific labelling in two primary auditory fields is related to call perception but not to the discrimination of the biological significance of the call models used. Labelling related to call recognition is present in the second auditory field (AII). A left hemisphere advantage of labelling in the dorsoposterior field seems to reflect an integration of call recognition with maternal responsiveness. The dorsal field is activated only in the left hemisphere. The spatial extent of Fos-positive cells within the auditory cortex and its fields is larger in the left than in the right hemisphere. Our data show that a left hemisphere advantage in processing of a species-specific vocalization up to recognition is present in mice. The differential representation of vocalizations of high vs. low biological significance, as seen only in higher-order and not in primary fields of the auditory cortex, is discussed in the context of perceptual strategies. PMID:15009150

  6. Effect of Neonatal Asphyxia on the Impairment of the Auditory Pathway by Recording Auditory Brainstem Responses in Newborn Piglets: A New Experimentation Model to Study the Perinatal Hypoxic-Ischemic Damage on the Auditory System

    PubMed Central

    Alvarez, Francisco Jose; Revuelta, Miren; Santaolalla, Francisco; Alvarez, Antonia; Lafuente, Hector; Arteaga, Olatz; Alonso-Alconada, Daniel; Sanchez-del-Rey, Ana; Hilario, Enrique; Martinez-Ibargüen, Agustin

    2015-01-01

    Introduction Hypoxia–ischemia (HI) is a major perinatal problem that results in severe damage to the brain, impairing the normal development of the auditory system. The purpose of the present study is to examine the effect of perinatal asphyxia on the auditory pathway by recording auditory brain responses in a novel animal experimentation model in newborn piglets. Method Hypoxia-ischemia was induced in 1-3 day-old piglets by clamping both carotid arteries with vascular occluders for 30 minutes and lowering the fraction of inspired oxygen. We compared the Auditory Brain Responses (ABRs) of newborn piglets exposed to acute hypoxia/ischemia (n = 6) and a control group with no such exposure (n = 10). ABRs were recorded for both ears before the start of the experiment (baseline), after 30 minutes of HI injury, and every 30 minutes during the 6 h after the HI injury. Results Auditory brain responses were altered during the hypoxic-ischemic insult but recovered 30-60 minutes later. Hypoxia/ischemia seemed to induce auditory functional damage by increasing I-V latencies and decreasing wave I, III and V amplitudes, although differences were not significant. Conclusion The described experimental model of hypoxia-ischemia in newborn piglets may be useful for studying the effect of perinatal asphyxia on the impairment of the auditory pathway. PMID:26010092

  7. Auditory models for speech analysis

    NASA Astrophysics Data System (ADS)

    Maybury, Mark T.

    This paper reviews the psychophysical basis for auditory models and discusses their application to automatic speech recognition. First an overview of the human auditory system is presented, followed by a review of current knowledge gleaned from neurological and psychoacoustic experimentation. Next, a general framework describes established peripheral auditory models which are based on well-understood properties of the peripheral auditory system. This is followed by a discussion of current enhancements to those models to include nonlinearities and synchrony information as well as other higher auditory functions. Finally, the initial performance of auditory models in the task of speech recognition is examined and additional applications are mentioned.

  8. Hyperactive auditory processing in Williams syndrome: Evidence from auditory evoked potentials.

    PubMed

    Zarchi, Omer; Avni, Chen; Attias, Josef; Frisch, Amos; Carmel, Miri; Michaelovsky, Elena; Green, Tamar; Weizman, Abraham; Gothelf, Doron

    2015-06-01

    The neurophysiologic aberrations underlying the auditory hypersensitivity in Williams syndrome (WS) are not well defined. The P1-N1-P2 obligatory complex and mismatch negativity (MMN) response were investigated in 18 participants with WS, and the results were compared with those of 18 age- and gender-matched typically developing (TD) controls. Results revealed significantly higher amplitudes of both the P1-N1-P2 obligatory complex and the MMN response in the WS participants than in the TD controls. The P1-N1-P2 complex showed an age-dependent reduction in the TD but not in the WS participants. Moreover, a high P1-N1-P2 complex amplitude was associated with low verbal comprehension scores in WS. This investigation demonstrates that central auditory processing is hyperactive in WS. The increase in auditory brain responses of both the obligatory complex and the MMN response suggests aberrant processes of auditory encoding and discrimination in WS. Results also imply that auditory processing may be subject to delayed or divergent maturation and may affect the development of higher cognitive functioning in WS.

  9. Developmental changes in distinguishing concurrent auditory objects.

    PubMed

    Alain, Claude; Theunissen, Eef L; Chevalier, Hélène; Batty, Magali; Taylor, Margot J

    2003-04-01

    Children have considerable difficulties in identifying speech in noise. In the present study, we examined age-related differences in central auditory functions that are crucial for parsing co-occurring auditory events using behavioral and event-related brain potential measures. Seventeen pre-adolescent children and 17 adults were presented with complex sounds containing multiple harmonics, one of which could be 'mistuned' so that it was no longer an integer multiple of the fundamental. Both children and adults were more likely to report hearing the mistuned harmonic as a separate sound with an increase in mistuning. However, children were less sensitive in detecting mistuning across all levels as revealed by lower d' scores than adults. The perception of two concurrent auditory events was accompanied by a negative wave that peaked at about 160 ms after sound onset. In both age groups, the negative wave, referred to as the 'object-related negativity' (ORN), increased in amplitude with mistuning. The ORN was larger in children than in adults despite a lower d' score. Together, the behavioral and electrophysiological results suggest that concurrent sound segregation is probably adult-like in pre-adolescent children, but that children are inefficient in processing the information following the detection of mistuning. These findings also suggest that processes involved in distinguishing concurrent auditory objects continue to mature during adolescence.
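
    The d' scores compared above follow the standard signal-detection definition, d' = z(hit rate) - z(false-alarm rate), where z is the inverse of the standard normal CDF. A minimal sketch (the textbook formula, not the authors' analysis code; the example rates are invented):

    ```python
    from statistics import NormalDist

    def d_prime(hit_rate, false_alarm_rate):
        """Sensitivity index: d' = z(hit rate) - z(false-alarm rate)."""
        z = NormalDist().inv_cdf  # inverse standard normal CDF
        return z(hit_rate) - z(false_alarm_rate)

    # A listener reporting the mistuned harmonic on 84% of mistuned trials
    # while false-alarming on 16% of in-tune trials:
    sensitivity = d_prime(0.84, 0.16)  # ~1.99
    ```

    Higher d' means better separation of mistuned from in-tune trials independently of response bias, which is why the study can report lower d' in children even though their rate of "two sounds" reports grows with mistuning.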

  10. Multivariate sensitivity to voice during auditory categorization

    PubMed Central

    Lee, Yune Sang; Peelle, Jonathan E.; Kraemer, David; Lloyd, Samuel; Granger, Richard

    2015-01-01

    Past neuroimaging studies have documented discrete regions of human temporal cortex that are more strongly activated by conspecific voice sounds than by nonvoice sounds. However, the mechanisms underlying this voice sensitivity remain unclear. In the present functional MRI study, we took a novel approach to examining voice sensitivity, in which we applied a signal detection paradigm to the assessment of multivariate pattern classification among several living and nonliving categories of auditory stimuli. Within this framework, voice sensitivity can be interpreted as a distinct neural representation of brain activity that correctly distinguishes human vocalizations from other auditory object categories. Across a series of auditory categorization tests, we found that bilateral superior and middle temporal cortex consistently exhibited robust sensitivity to human vocal sounds. Although the strongest categorization was in distinguishing human voice from other categories, subsets of these regions were also able to distinguish reliably between nonhuman categories, suggesting a general role in auditory object categorization. Our findings complement the current evidence of cortical sensitivity to human vocal sounds by revealing that the greatest sensitivity during categorization tasks is devoted to distinguishing voice from nonvoice categories within human temporal cortex. PMID:26245316

  12. Integration and segregation in auditory scene analysis

    NASA Astrophysics Data System (ADS)

    Sussman, Elyse S.

    2005-03-01

    Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing in which the auditory scene is rapidly organized into distinct streams and the integration of sequential elements into perceptual units takes place on the already formed streams. This allows the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.

  13. Neurotrophic factor intervention restores auditory function in deafened animals

    NASA Astrophysics Data System (ADS)

    Shinohara, Takayuki; Bredberg, Göran; Ulfendahl, Mats; Pyykkö, Ilmari; Petri Olivius, N.; Kaksonen, Risto; Lindström, Bo; Altschuler, Richard; Miller, Josef M.

    2002-02-01

    A primary cause of deafness is damage of receptor cells in the inner ear. Clinically, it has been demonstrated that effective functionality can be provided by electrical stimulation of the auditory nerve, thus bypassing damaged receptor cells. However, subsequent to sensory cell loss there is a secondary degeneration of the afferent nerve fibers, resulting in reduced effectiveness of such cochlear prostheses. The effects of neurotrophic factors were tested in a guinea pig cochlear prosthesis model. After chemical deafening to mimic the clinical situation, the neurotrophic factors brain-derived neurotrophic factor and an analogue of ciliary neurotrophic factor were infused directly into the cochlea of the inner ear for 26 days by using an osmotic pump system. An electrode introduced into the cochlea was used to elicit auditory responses just as in patients implanted with cochlear prostheses. Intervention with brain-derived neurotrophic factor and the ciliary neurotrophic factor analogue not only increased the survival of auditory spiral ganglion neurons, but significantly enhanced the functional responsiveness of the auditory system as measured by using electrically evoked auditory brainstem responses. This demonstration that neurotrophin intervention enhances threshold sensitivity within the auditory system will have great clinical importance for the treatment of deaf patients with cochlear prostheses. The findings have direct implications for the enhancement of responsiveness in deafferented peripheral nerves.

  14. Pilocarpine Seizures Cause Age-Dependent Impairment in Auditory Location Discrimination

    ERIC Educational Resources Information Center

    Neill, John C.; Liu, Zhao; Mikati, Mohammad; Holmes, Gregory L.

    2005-01-01

    Children who have status epilepticus have continuous or rapidly repeating seizures that may be life-threatening and may cause life-long changes in brain and behavior. The extent to which status epilepticus causes deficits in auditory discrimination is unknown. A naturalistic auditory location discrimination method was used to evaluate this…

  15. Positron Emission Tomography in Cochlear Implant and Auditory Brainstem Implant Recipients.

    ERIC Educational Resources Information Center

    Miyamoto, Richard T.; Wong, Donald

    2001-01-01

    Positron emission tomography imaging was used to evaluate the brain's response to auditory stimulation, including speech, in deaf adults (five with cochlear implants and one with an auditory brainstem implant). Functional speech processing was associated with activation in areas classically associated with speech processing. (Contains five…

  16. Children's Performance on Pseudoword Repetition Depends on Auditory Trace Quality: Evidence from Event-Related Potentials.

    ERIC Educational Resources Information Center

    Ceponiene, Rita; Service, Elisabet; Kurjenluoma, Sanna; Cheour, Marie; Naatanen, Risto

    1999-01-01

    Compared the mismatch-negativity (MMN) component of auditory event-related brain potentials to explore the relationship between phonological short-term memory and auditory-sensory processing in 7- to 9-year olds scoring the highest and lowest on a pseudoword repetition test. Found that high and low repeaters differed in MMN amplitude to speech…

  17. Auditory-visual crossmodal integration in perception of face gender.

    PubMed

    Smith, Eric L; Grabowecky, Marcia; Suzuki, Satoru

    2007-10-01

    Whereas extensive neuroscientific and behavioral evidence has confirmed a role of auditory-visual integration in representing space [1-6], little is known about the role of auditory-visual integration in object perception. Although recent neuroimaging results suggest integrated auditory-visual object representations [7-11], substantiating behavioral evidence has been lacking. We demonstrated auditory-visual integration in the perception of face gender by using pure tones that are processed in low-level auditory brain areas and that lack the spectral components that characterize human vocalization. When androgynous faces were presented together with pure tones in the male fundamental-speaking-frequency range, faces were more likely to be judged as male, whereas when faces were presented with pure tones in the female fundamental-speaking-frequency range, they were more likely to be judged as female. Importantly, when participants were explicitly asked to attribute gender to these pure tones, their judgments were primarily based on relative pitch and were uncorrelated with the male and female fundamental-speaking-frequency ranges. This perceptual dissociation of absolute-frequency-based crossmodal-integration effects from relative-pitch-based explicit perception of the tones provides evidence for a sensory integration of auditory and visual signals in representing human gender. This integration probably develops because of concurrent neural processing of visual and auditory features of gender.

  18. A corollary discharge maintains auditory sensitivity during sound production.

    PubMed

    Poulet, James F A; Hedwig, Berthold

    2002-08-22

    Speaking and singing present the auditory system of the caller with two fundamental problems: discriminating between self-generated and external auditory signals and preventing desensitization. In humans and many other vertebrates, auditory neurons in the brain are inhibited during vocalization but little is known about the nature of the inhibition. Here we show, using intracellular recordings of auditory neurons in the singing cricket, that presynaptic inhibition of auditory afferents and postsynaptic inhibition of an identified auditory interneuron occur in phase with the song pattern. Presynaptic and postsynaptic inhibition persist in a fictively singing, isolated cricket central nervous system and are therefore the result of a corollary discharge from the singing motor network. Mimicking inhibition in the interneuron by injecting hyperpolarizing current suppresses its spiking response to a 100-dB sound pressure level (SPL) acoustic stimulus and maintains its response to subsequent, quieter stimuli. Inhibition by the corollary discharge reduces the neural response to self-generated sound and protects the cricket's auditory pathway from self-induced desensitization.

  19. Auditory brainstem response to complex sounds: a tutorial

    PubMed Central

    Skoe, Erika; Kraus, Nina

    2010-01-01

    This tutorial provides a comprehensive overview of the methodological approach to collecting and analyzing auditory brainstem responses to complex sounds (cABRs). cABRs provide a window into how behaviorally relevant sounds such as speech and music are processed in the brain. Because temporal and spectral characteristics of sounds are preserved in this subcortical response, cABRs can be used to assess specific impairments and enhancements in auditory processing. Notably, subcortical function is neither passive nor hardwired but dynamically interacts with higher-level cognitive processes to refine how sounds are transcribed into neural code. This experience-dependent plasticity, which can occur on a number of time scales (e.g., life-long experience with speech or music, short-term auditory training, online auditory processing), helps shape sensory perception. Thus, by being an objective and non-invasive means for examining cognitive function and experience-dependent processes in sensory activity, cABRs have considerable utility in the study of populations where auditory function is of interest (e.g., auditory experts such as musicians, persons with hearing loss, auditory processing and language disorders). This tutorial is intended for clinicians and researchers seeking to integrate cABRs into their clinical and/or research programs. PMID:20084007

  20. Auditory processing--speech, space and auditory objects.

    PubMed

    Scott, Sophie K

    2005-04-01

    There have been recent developments in our understanding of the auditory neuroscience of non-human primates that, to a certain extent, can be integrated with findings from human functional neuroimaging studies. This framework can be used to consider the cortical basis of complex sound processing in humans, including implications for speech perception, spatial auditory processing and auditory scene segregation. PMID:15831402

  1. Synaptic Morphology and the Influence of Auditory Experience

    PubMed Central

    O’Neil, Jahn N.; Connelly, Catherine J.; Limb, Charles J.; Ryugo, David K.

    2011-01-01

    Auditory experience is crucial for the normal development and maturation of brain structure and the maintenance of the auditory pathways. The specific aims of this review are (i) to provide a brief background on the synaptic morphology of the endbulb of Held in hearing and deaf animals; (ii) to argue the importance of this large synaptic ending in linking neural activity along ascending pathways to environmental acoustic events; (iii) to describe how the re-introduction of electrical activity changes this synapse; and (iv) to examine how changes at the endbulb synapse initiate trans-synaptic changes in ascending auditory projections to the superior olivary complex, the inferior colliculus, and the auditory cortex. PMID:21310226

  2. Selective corticostriatal plasticity during acquisition of an auditory discrimination task.

    PubMed

    Xiong, Qiaojie; Znamenskiy, Petr; Zador, Anthony M

    2015-05-21

    Perceptual decisions are based on the activity of sensory cortical neurons, but how organisms learn to transform this activity into appropriate actions remains unknown. Projections from the auditory cortex to the auditory striatum carry information that drives decisions in an auditory frequency discrimination task. To assess the role of these projections in learning, we developed a channelrhodopsin-2-based assay to probe selectively for synaptic plasticity associated with corticostriatal neurons representing different frequencies. Here we report that learning this auditory discrimination preferentially potentiates corticostriatal synapses from neurons representing either high or low frequencies, depending on reward contingencies. We observe frequency-dependent corticostriatal potentiation in vivo over the course of training, and in vitro in striatal brain slices. Our findings suggest a model in which the corticostriatal synapses made by neurons tuned to different features of the sound are selectively potentiated to enable the learned transformation of sound into action. PMID:25731173

  3. Role of the auditory system in speech production.

    PubMed

    Guenther, Frank H; Hickok, Gregory

    2015-01-01

    This chapter reviews evidence regarding the role of auditory perception in shaping speech output. Evidence indicates that speech movements are planned to follow auditory trajectories. This in turn is followed by a description of the Directions Into Velocities of Articulators (DIVA) model, which provides a detailed account of the role of auditory feedback in speech motor development and control. A brief description of the higher-order brain areas involved in speech sequencing (including the pre-supplementary motor area and inferior frontal sulcus) is then provided, followed by a description of the Hierarchical State Feedback Control (HSFC) model, which posits internal error detection and correction processes that can detect and correct speech production errors prior to articulation. The chapter closes with a treatment of promising future directions of research into auditory-motor interactions in speech, including the use of intracranial recording techniques such as electrocorticography in humans, the investigation of the potential roles of various large-scale brain rhythms in speech perception and production, and the development of brain-computer interfaces that use auditory feedback to allow profoundly paralyzed users to learn to produce speech using a speech synthesizer.

  4. [Neural Representation of Sound Texture in the Auditory Cortex].

    PubMed

    Shiramatsu Isoguchi, Tomoyo; Takahashi, Hirokazu

    2015-06-01

    Natural sounds have a variety of sound spectra, which produce the so-called textures of sounds. These sound textures are extracted and perceived through interactions of the auditory, emotional, and cognitive systems in our brain. Recent studies have investigated how our brain handles musical sound textures, such as consonant and dissonant chords, or major and minor scales. Accumulating evidence indicates that the mammalian auditory system has adapted to extract the harmonic structure of sounds and that this adaptation plays crucial roles in the perception of the consonance of two-tone chords. In addition, functional magnetic resonance imaging studies have shown that major and minor scales activate not only the auditory system but also the emotional and cognitive systems. Our study revealed that phase synchrony within the auditory cortex of rodents represents the tonality of three-tone chords in a band-specific manner, and these findings support the hypothesis that the auditory system interacts with the emotional and/or cognitive systems. Thus, the neural bases for the perception of sound textures are widely distributed within our brain, and the evolution of these neural systems significantly affects the establishment of musical grammar. PMID:26062583

  6. Psychophysical and Neural Correlates of Auditory Attraction and Aversion

    NASA Astrophysics Data System (ADS)

    Patten, Kristopher Jakob

    This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based on auditory scene analysis, the process through which listeners parse auditory signals into individual acoustic sources. The first experiment tests and confirms that a self-rated pleasantness continuum reliably exists for 20 diverse stimuli (r = .48). In addition, the pleasantness continuum correlated with the physical acoustic characteristics of consonance/dissonance (r = .78), which can facilitate auditory parsing processes. The second experiment uses an fMRI block design to test blood oxygen level dependent (BOLD) changes elicited by a subset of 5 exemplar stimuli chosen from Experiment 1 that are evenly distributed over the pleasantness continuum. Specifically, it tests and confirms that the pleasantness continuum produces systematic changes in brain activity for unpleasant acoustic stimuli beyond what occurs with pleasant auditory stimuli. Results revealed that the combination of two positively and two negatively valenced experimental sounds compared to one neutral baseline control elicited BOLD increases in the primary auditory cortex, specifically the bilateral superior temporal gyrus, and left dorsomedial prefrontal cortex; the latter being consistent with a frontal decision-making process common in identification tasks. The negatively-valenced stimuli yielded additional BOLD increases in the left insula, which typically indicates processing of visceral emotions. The positively-valenced stimuli did not yield any significant BOLD activation, consistent with consonant, harmonic stimuli being the prototypical acoustic pattern of auditory objects that is optimal for auditory scene analysis.
Both the psychophysical findings of Experiment 1 and the neural processing findings of Experiment 2 support the conclusion that consonance is an important dimension of sound that is processed in a manner that aids

  7. Cross-Modal Plasticity in Higher-Order Auditory Cortex of Congenitally Deaf Cats Does Not Limit Auditory Responsiveness to Cochlear Implants

    PubMed Central

    Baumhoff, Peter; Tillein, Jochen; Lomber, Stephen G.; Hubka, Peter; Kral, Andrej

    2016-01-01

    Congenital sensory deprivation can lead to reorganization of the deprived cortical regions by another sensory system. Such cross-modal reorganization may either compete with or complement the “original” inputs to the deprived area after sensory restoration and can thus be either adverse or beneficial for sensory restoration. In congenital deafness, a previous inactivation study documented that supranormal visual behavior was mediated by higher-order auditory fields in congenitally deaf cats (CDCs). However, both the auditory responsiveness of “deaf” higher-order fields and interactions between the reorganized and the original sensory input remain unknown. Here, we studied a higher-order auditory field responsible for the supranormal visual function in CDCs, the auditory dorsal zone (DZ). Hearing cats and visual cortical areas served as a control. Using mapping with microelectrode arrays, we demonstrate spatially scattered visual (cross-modal) responsiveness in the DZ, but show that this did not interfere substantially with robust auditory responsiveness elicited through cochlear implants. Visually responsive and auditory-responsive neurons in the deaf auditory cortex formed two distinct populations that did not show bimodal interactions. Therefore, cross-modal plasticity in the deaf higher-order auditory cortex had limited effects on auditory inputs. The moderate number of scattered cross-modally responsive neurons could be the consequence of exuberant connections formed during development that were not pruned postnatally in deaf cats. Although juvenile brain circuits are modified extensively by experience, the main driving input to the cross-modally (visually) reorganized higher-order auditory cortex remained auditory in congenital deafness. SIGNIFICANCE STATEMENT In a common view, the “unused” auditory cortex of deaf individuals is reorganized to a compensatory sensory function during development. According to this view, cross-modal plasticity takes

  8. An auditory feature detection circuit for sound pattern recognition

    PubMed Central

    Schöneich, Stefan; Kostarakos, Konstantinos; Hedwig, Berthold

    2015-01-01

    From human language to birdsong and the chirps of insects, acoustic communication is based on amplitude and frequency modulation of sound signals. Whereas frequency processing starts at the level of the hearing organs, temporal features of the sound amplitude such as rhythms or pulse rates require processing by central auditory neurons. Besides several theoretical concepts, brain circuits that detect temporal features of a sound signal are poorly understood. We focused on acoustically communicating field crickets and show how five neurons in the brain of females form an auditory feature detector circuit for the pulse pattern of the male calling song. The processing is based on a coincidence detector mechanism that selectively responds when a direct neural response and an intrinsically delayed response to the sound pulses coincide. This circuit provides the basis for auditory mate recognition in field crickets and reveals a principal mechanism of sensory processing underlying the perception of temporal patterns. PMID:26601259
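    The coincidence mechanism this abstract describes, a direct response meeting an intrinsically delayed copy of itself, can be caricatured in a few lines. This is a toy sketch with invented pulse trains and an arbitrary 2 ms coincidence window, not a model of the actual cricket circuit.

```python
import numpy as np

def coincidence_count(pulse_times_ms, delay_ms, window_ms=2.0):
    """Toy coincidence detector: each sound pulse drives a direct response
    and an intrinsically delayed copy; count pulses whose direct response
    coincides (within window_ms) with some delayed response."""
    direct = np.asarray(pulse_times_ms, dtype=float)
    delayed = direct + delay_ms
    return int(sum(np.any(np.abs(delayed - t) <= window_ms) for t in direct))

matched = np.arange(0, 200, 20.0)     # pulse period 20 ms, like the detector's delay
mismatched = np.arange(0, 200, 33.0)  # pulse period 33 ms

print(coincidence_count(matched, delay_ms=20.0))     # 9: every pulse after the first coincides
print(coincidence_count(mismatched, delay_ms=20.0))  # 0: wrong pulse rate, no coincidences
```

The detector is therefore selective for the pulse period that equals its internal delay, which is the essence of the feature-detection scheme described above.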

  9. Dynamics of auditory plasticity after cochlear implantation: a longitudinal study.

    PubMed

    Pantev, C; Dinnesen, A; Ross, B; Wollbrink, A; Knief, A

    2006-01-01

    Human representational cortex may fundamentally alter its organization and (re)gain the capacity for auditory processing even when it is deprived of its input for more than two decades. Stimulus-evoked brain activity was recorded in post-lingual deaf patients after implantation of a cochlear prosthesis, which partly restored their hearing. During a 2 year follow-up study this activity revealed almost normal component configuration and was localized in the auditory cortex, demonstrating adequacy of the cochlear implant stimulation. Evoked brain activity increased over several months after the cochlear implant was turned on. This is taken as a measure of the temporal dynamics of plasticity of the human auditory system after implantation of a cochlear prosthesis. PMID:15843632

  11. Low noise multichannel circuits for physics and biology applications

    NASA Astrophysics Data System (ADS)

    Grybos, Pawel

    2005-09-01

    Experimental techniques in physics, materials science, biology, and medicine stand to benefit from the advantages of VLSI technology through a new generation of electronic measurement systems based on parallel signal processing of multielement sensors. In most cases the key building blocks of such systems are multichannel mixed-mode Application-Specific Integrated Circuits (ASICs) capable of processing small-amplitude signals from multielement sensors. In this class of integrated circuits several requirements, such as limited power, low noise, good matching, and low crosstalk, must be satisfied simultaneously. This presentation shows two ASICs which, thanks to the original solutions implemented and their universal properties, can be used in different applications and represent significant milestones in experimental techniques. The first is a 64-channel charge amplifier with a binary readout architecture for low-energy X-ray imaging. Connected to a silicon strip detector, this integrated circuit can be used in powder diffractometry, reducing measurement time by two orders of magnitude. The second is a multichannel low-noise readout for extracellular neural recording, able to serve systems comprising several hundred electrodes. An important step forward in this design is a novel band-pass filter for the low-frequency range that meets the requirements of good matching, low power, and small silicon area. This ASIC can be used to monitor the neural activity of systems as complicated as the retina or the brain.

  12. Wireless multichannel biopotential recording using an integrated FM telemetry circuit.

    PubMed

    Mohseni, Pedram; Najafi, Khalil; Eliades, Steven J; Wang, Xiaoqin

    2005-09-01

    This paper presents a four-channel telemetric microsystem featuring on-chip alternating current amplification, direct current baseline stabilization, clock generation, time-division multiplexing, and wireless frequency-modulation transmission of microvolt- and millivolt-range input biopotentials in the very high frequency band of 94-98 MHz over a distance of approximately 0.5 m. It consists of a 4.84-mm2 integrated circuit, fabricated using a 1.5-microm double-poly double-metal n-well standard complementary metal-oxide semiconductor process, interfaced with only three off-chip components on a custom-designed printed-circuit board that measures 1.7 x 1.2 x 0.16 cm3, and weighs 1.1 g including two miniature 1.5-V batteries. We characterize the microsystem performance, operating in a truly wireless fashion in single-channel and multichannel operation modes, via extensive benchtop and in vitro tests in saline utilizing two different micromachined neural recording microelectrodes, while dissipating approximately 2.2 mW from a 3-V power supply. Moreover, we demonstrate successful wireless in vivo recording of spontaneous neural activity at 96.2 MHz from the auditory cortex of an awake marmoset monkey at several transmission distances ranging from 10 to 50 cm with signal-to-noise ratios in the range of 8.4-9.5 dB.

  13. Auditory Channel Problems.

    ERIC Educational Resources Information Center

    Mann, Philip H.; Suiter, Patricia A.

    This teacher's guide contains a list of general auditory problem areas where students have the following problems: (a) inability to find or identify source of sound; (b) difficulty in discriminating sounds of words and letters; (c) difficulty with reproducing pitch, rhythm, and melody; (d) difficulty in selecting important from unimportant sounds;…

  14. Incidental Auditory Category Learning

    PubMed Central

    Gabay, Yafit; Dick, Frederic K.; Zevin, Jason D.; Holt, Lori L.

    2015-01-01

    Very little is known about how auditory categories are learned incidentally, without instructions to search for category-diagnostic dimensions, overt category decisions, or experimenter-provided feedback. This is an important gap because learning in the natural environment does not arise from explicit feedback and there is evidence that the learning systems engaged by traditional tasks are distinct from those recruited by incidental category learning. We examined incidental auditory category learning with a novel paradigm, the Systematic Multimodal Associations Reaction Time (SMART) task, in which participants rapidly detect and report the appearance of a visual target in one of four possible screen locations. Although the overt task is rapid visual detection, a brief sequence of sounds precedes each visual target. These sounds are drawn from one of four distinct sound categories that predict the location of the upcoming visual target. These many-to-one auditory-to-visuomotor correspondences support incidental auditory category learning. Participants incidentally learn categories of complex acoustic exemplars and generalize this learning to novel exemplars and tasks. Further, learning is facilitated when category exemplar variability is more tightly coupled to the visuomotor associations than when the same stimulus variability is experienced across trials. We relate these findings to phonetic category learning. PMID:26010588

  15. Persistent neural activity in auditory cortex is related to auditory working memory in humans and nonhuman primates.

    PubMed

    Huang, Ying; Matysiak, Artur; Heil, Peter; König, Reinhard; Brosch, Michael

    2016-01-01

    Working memory is the cognitive capacity of short-term storage of information for goal-directed behaviors. Where and how this capacity is implemented in the brain are unresolved questions. We show that auditory cortex stores information by persistent changes of neural activity. We separated activity related to working memory from activity related to other mental processes by having humans and monkeys perform different tasks with varying working memory demands on the same sound sequences. Working memory was reflected in the spiking activity of individual neurons in auditory cortex and in the activity of neuronal populations, that is, in local field potentials and magnetic fields. Our results provide direct support for the idea that temporary storage of information recruits the same brain areas that also process the information. Because similar activity was observed in the two species, the cellular bases of some auditory working memory processes in humans can be studied in monkeys. PMID:27438411

  16. The neglected neglect: auditory neglect.

    PubMed

    Gokhale, Sankalp; Lahoti, Sourabh; Caplan, Louis R

    2013-08-01

    Whereas visual and somatosensory forms of neglect are commonly recognized by clinicians, auditory neglect is often not assessed and therefore neglected. The auditory cortical processing system can be functionally classified into 2 distinct pathways. These 2 distinct functional pathways deal with recognition of sound ("what" pathway) and the directional attributes of the sound ("where" pathway). Lesions of higher auditory pathways produce distinct clinical features. Clinical bedside evaluation of auditory neglect is often difficult because of coexisting neurological deficits and the binaural nature of auditory inputs. In addition, auditory neglect and auditory extinction may show varying degrees of overlap, which makes the assessment even harder. Shielding one ear from the other as well as separating the ear from space is therefore critical for accurate assessment of auditory neglect. This can be achieved by use of specialized auditory tests (dichotic tasks and sound localization tests) for accurate interpretation of deficits. Herein, we have reviewed auditory neglect with an emphasis on the functional anatomy, clinical evaluation, and basic principles of specialized auditory tests.

  17. You can't stop the music: reduced auditory alpha power and coupling between auditory and memory regions facilitate the illusory perception of music during noise.

    PubMed

    Müller, Nadia; Keil, Julian; Obleser, Jonas; Schulz, Hannah; Grunwald, Thomas; Bernays, René-Ludwig; Huppertz, Hans-Jürgen; Weisz, Nathan

    2013-10-01

    Our brain has the capacity of providing an experience of hearing even in the absence of auditory stimulation. This can be seen as illusory conscious perception. While increasing evidence postulates that conscious perception requires specific brain states that systematically relate to specific patterns of oscillatory activity, the relationship between auditory illusions and oscillatory activity remains mostly unexplained. To investigate this we recorded brain activity with magnetoencephalography and collected intracranial data from epilepsy patients while participants listened to familiar as well as unknown music that was partly replaced by sections of pink noise. We hypothesized that participants have a stronger experience of hearing music throughout noise when the noise sections are embedded in familiar compared to unfamiliar music. This was supported by the behavioral results showing that participants rated the perception of music during noise as stronger when noise was presented in a familiar context. Time-frequency data show that the illusory perception of music is associated with a decrease in auditory alpha power pointing to increased auditory cortex excitability. Furthermore, the right auditory cortex is concurrently synchronized with the medial temporal lobe, putatively mediating memory aspects associated with the music illusion. We thus assume that neuronal activity in the highly excitable auditory cortex is shaped through extensive communication between the auditory cortex and the medial temporal lobe, thereby generating the illusion of hearing music during noise.

  18. Auditory, Tactile, and Audiotactile Information Processing Following Visual Deprivation

    ERIC Educational Resources Information Center

    Occelli, Valeria; Spence, Charles; Zampini, Massimiliano

    2013-01-01

    We highlight the results of those studies that have investigated the plastic reorganization processes that occur within the human brain as a consequence of visual deprivation, as well as how these processes give rise to behaviorally observable changes in the perceptual processing of auditory and tactile information. We review the evidence showing…

  19. Auditory Technology and Its Impact on Bilingual Deaf Education

    ERIC Educational Resources Information Center

    Mertes, Jennifer

    2015-01-01

    Brain imaging studies suggest that children can simultaneously develop, learn, and use two languages. A visual language, such as American Sign Language (ASL), facilitates development at the earliest possible moments in a child's life. Spoken language development can be delayed due to diagnostic evaluations, device fittings, and auditory skill…

  20. Multichannel photocells for image converters with color separation

    SciTech Connect

    Denisova, E. A.; Uzdovskii, V. V. Khainovskii, V. I.

    2011-12-15

    The results of a study of photoelectric processes in photosensitive structures based on a multichannel vertically integrated p-n junction are presented. Optical radiation absorption in the space-charge region of a multichannel vertically integrated structure is studied.

  1. Restoration of multichannel microwave radiometric images

    NASA Technical Reports Server (NTRS)

    Chin, R. T.; Yeh, C.-L.; Olson, W. S.

    1985-01-01

    A constrained iterative image restoration method is applied to multichannel diffraction-limited imagery. This method is based on the Gerchberg-Papoulis algorithm utilizing incomplete information and partial constraints. The procedure is described using the orthogonal projection operators which project onto two prescribed subspaces iteratively. Its properties and limitations are presented. The effect of noise was investigated and a better understanding of the performance of the algorithm with noisy data has been achieved. The restoration scheme with the selection of appropriate constraints was applied to a practical problem. The 6.6, 10.7, 18, and 21 GHz satellite images obtained by the scanning multichannel microwave radiometer (SMMR), each having different spatial resolution, were restored to a common, high resolution (that of the 37 GHz channels) to demonstrate the effectiveness of the method. Both simulated data and real data were used in this study. The restored multichannel images may be utilized to retrieve rainfall distributions.
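    The Gerchberg-Papoulis alternation between the two projection operators can be illustrated in 1-D. This is a minimal sketch on an invented band-limited signal with a short run of missing samples; the actual method operates on 2-D radiometric imagery with application-specific constraints.

```python
import numpy as np

def gp_restore(observed, known_mask, band_mask, n_iter=500):
    """Alternate two orthogonal projections: onto signals matching the
    observed samples, and onto band-limited signals (band_mask in the DFT)."""
    x = np.where(known_mask, observed, 0.0)
    for _ in range(n_iter):
        x = np.fft.ifft(np.fft.fft(x) * band_mask).real  # band-limit projection
        x = np.where(known_mask, observed, x)             # data-consistency projection
    return x

n = 128
t = np.arange(n)
truth = np.cos(2 * np.pi * 3 * t / n) + 0.5 * np.sin(2 * np.pi * 5 * t / n)
band = np.zeros(n)
band[:6] = 1.0   # keep DFT bins 0..5
band[-5:] = 1.0  # ...and their negative-frequency mirrors
mask = np.ones(n, dtype=bool)
mask[60:68] = False                     # 8 samples lost
restored = gp_restore(np.where(mask, truth, 0.0), mask, band)
print(float(np.max(np.abs(restored - truth))))  # small residual error
```

Because both operators are projections onto subspaces, the iteration converges to the unique band-limited signal consistent with the observed samples, which is what makes the incomplete-information restoration work.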

  2. Joint enhancement of multichannel SAR data

    NASA Astrophysics Data System (ADS)

    Ramakrishnan, Naveen; Ertin, Emre; Moses, Randolph L.

    2007-04-01

    In this paper we consider the problem of joint enhancement of multichannel Synthetic Aperture Radar (SAR) data. Previous work by Cetin and Karl introduced nonquadratic regularization methods for image enhancement using sparsity enforcing penalty terms. For multichannel data, independent enhancement of each channel is shown to degrade the relative phase information across channels that is useful for 3D reconstruction. We thus propose a method for joint enhancement of multichannel SAR data with joint sparsity constraints. We develop both a gradient-based and a Lagrange-Newton-based method for solving the joint reconstruction problem, and demonstrate the performance of the proposed methods on IFSAR height extraction problem from multi-elevation data.
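    The key point above, that independent per-channel enhancement degrades cross-channel phase while a joint sparsity penalty preserves it, is easiest to see in the proximal (shrinkage) step of a mixed l2,1 norm. This is an illustrative operator on invented two-pixel data, not the authors' full solver.

```python
import numpy as np

def joint_shrink(X, tau):
    """l2,1 shrinkage on X of shape (n_pixels, n_channels), complex-valued.
    All channels of a pixel are scaled by one common nonnegative factor,
    so the relative phase across channels (used for 3D/IFSAR height
    reconstruction) is left untouched."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return X * scale

X = np.array([[3 + 4j, 1 - 2j],        # strong scatterer
              [0.10 + 0.10j, 0.05j]])  # weak clutter
Y = joint_shrink(X, tau=0.5)
# The strong pixel keeps its cross-channel phase exactly; the weak pixel
# is zeroed in all channels at once rather than channel by channel.
```

Per-channel soft thresholding would instead shrink each complex entry independently, which can zero one channel of a pixel but not the other and thereby corrupt the interferometric phase.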

  3. Multichannel framework for singular quantum mechanics

    SciTech Connect

    Camblong, Horacio E.; Epele, Luis N.; Fanchiotti, Huner; García Canal, Carlos A.; Ordóñez, Carlos R.

    2014-01-15

    A multichannel S-matrix framework for singular quantum mechanics (SQM) subsumes the renormalization and self-adjoint extension methods and resolves its boundary-condition ambiguities. In addition to the standard channel accessible to a distant (“asymptotic”) observer, one supplementary channel opens up at each coordinate singularity, where local outgoing and ingoing singularity waves coexist. The channels are linked by a fully unitary S-matrix, which governs all possible scenarios, including cases with an apparent nonunitary behavior as viewed from asymptotic distances. Highlights:
    • A multichannel framework is proposed for singular quantum mechanics and analogues.
    • The framework unifies several established approaches for singular potentials.
    • Singular points are treated as new scattering channels.
    • Nonunitary asymptotic behavior is subsumed in a unitary multichannel S-matrix.
    • Conformal quantum mechanics and the inverse quartic potential are highlighted.

  4. Mind the Gap: Two Dissociable Mechanisms of Temporal Processing in the Auditory System

    PubMed Central

    Anderson, Lucy A.

    2016-01-01

    High temporal acuity of auditory processing underlies perception of speech and other rapidly varying sounds. A common measure of auditory temporal acuity in humans is the threshold for detection of brief gaps in noise. Gap-detection deficits, observed in developmental disorders, are considered evidence for “sluggish” auditory processing. Here we show, in a mouse model of gap-detection deficits, that auditory brain sensitivity to brief gaps in noise can be impaired even without a general loss of central auditory temporal acuity. Extracellular recordings in three different subdivisions of the auditory thalamus in anesthetized mice revealed a stimulus-specific, subdivision-specific deficit in thalamic sensitivity to brief gaps in noise in experimental animals relative to controls. Neural responses to brief gaps in noise were reduced, but responses to other rapidly changing stimuli unaffected, in lemniscal and nonlemniscal (but not polysensory) subdivisions of the medial geniculate body. Through experiments and modeling, we demonstrate that the observed deficits in thalamic sensitivity to brief gaps in noise arise from reduced neural population activity following noise offsets, but not onsets. These results reveal dissociable sound-onset-sensitive and sound-offset-sensitive channels underlying auditory temporal processing, and suggest that gap-detection deficits can arise from specific impairment of the sound-offset-sensitive channel. SIGNIFICANCE STATEMENT The experimental and modeling results reported here suggest a new hypothesis regarding the mechanisms of temporal processing in the auditory system. Using a mouse model of auditory temporal processing deficits, we demonstrate the existence of specific abnormalities in auditory thalamic activity following sound offsets, but not sound onsets. These results reveal dissociable sound-onset-sensitive and sound-offset-sensitive mechanisms underlying auditory processing of temporally varying sounds. Furthermore, the
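    The dissociation between sound-onset-sensitive and sound-offset-sensitive channels can be stated as a toy detector. This is purely illustrative; the binary envelope and threshold are invented, not the paper's model.

```python
import numpy as np

def channel_responses(envelope, threshold=0.5):
    """Toy onset/offset channels: the onset channel marks samples where the
    sound envelope rises through threshold, the offset channel where it falls."""
    above = np.asarray(envelope) >= threshold
    onsets = np.flatnonzero(~above[:-1] & above[1:]) + 1
    offsets = np.flatnonzero(above[:-1] & ~above[1:]) + 1
    return onsets, offsets

# Continuous noise with one brief gap: the gap reaches the brain as an
# offset event followed by an onset event.
env = np.ones(1000)
env[500:520] = 0.0                          # 20-sample gap in the noise
onsets, offsets = channel_responses(env)
print(offsets.tolist(), onsets.tolist())    # [500] [520]
```

A gap detector that requires the offset response fails when the offset-sensitive channel is impaired, even though onset responses (and responses to other rapidly changing stimuli) remain intact, which is the dissociation the study reports.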

  5. Multichannel DC SQUID sensor array for biomagnetic applications

    SciTech Connect

    Hoenig, H.E.; Daalmans, G.M.; Bar, L.; Bommel, F.; Paulus, A.; Uhl, D.; Weisse, H.J. ); Schneider, S.; Seifert, H.; Reichenberger, H.; Abraham-Fuchs, K. )

    1991-03-01

    This paper reports on KRENIKON, a biomagnetic multichannel system developed for medical diagnosis of the brain and heart. 37 axial 1st-order gradiometers - manufactured as flexible superconducting printed circuits - are arranged in a circular flat array of 19 cm diameter. Additionally, 3 orthogonal magnetometers are provided. The DC SQUIDs are fabricated in all-Nb technology, ten to a chip. The sensor system is operated in a shielded room with two layers of soft magnetic material and one layer of Al. The everyday noise level is 10 fT/√Hz at frequencies above 10 Hz. Within 2 years of operation in normal urban surroundings, useful clinical applications have been demonstrated (e.g. for epilepsy and heart arrhythmias).

  6. Characterization of auditory synaptic inputs to gerbil perirhinal cortex

    PubMed Central

    Kotak, Vibhakar C.; Mowery, Todd M.; Sanes, Dan H.

    2015-01-01

    The representation of acoustic cues involves regions downstream from the auditory cortex (ACx). One such area, the perirhinal cortex (PRh), processes sensory signals containing mnemonic information. Therefore, our goal was to assess whether PRh receives auditory inputs from the auditory thalamus (MG) and ACx in an auditory thalamocortical brain slice preparation and characterize these afferent-driven synaptic properties. When the MG or ACx was electrically stimulated, synaptic responses were recorded from the PRh neurons. Blockade of type A gamma-aminobutyric acid (GABA-A) receptors dramatically increased the amplitude of evoked excitatory potentials. Stimulation of the MG or ACx also evoked calcium transients in most PRh neurons. Separately, when fluoro ruby was injected in ACx in vivo, anterogradely labeled axons and terminals were observed in the PRh. Collectively, these data show that the PRh integrates auditory information from the MG and ACx and that auditory driven inhibition dominates the postsynaptic responses in a non-sensory cortical region downstream from the ACx. PMID:26321918

  7. Deficient auditory interhemispheric transfer in patients with PAX6 mutations.

    PubMed

    Bamiou, Doris-Eva; Musiek, Frank E; Sisodiya, Sanjay M; Free, Samantha L; Davies, Rosalyn A; Moore, Anthony; van Heyningen, Veronica; Luxon, Linda M

    2004-10-01

    PAX6 mutations are associated with absence/hypoplasia of the anterior commissure and reduction in the callosal area in humans. Both of these structures contain auditory interhemispheric fibers. The aim of this study was to characterize central auditory function in patients with a PAX6 mutation. We conducted central auditory tests (dichotic speech, pattern, and gaps in noise tests) on eight subjects with a PAX6 mutation and eight age- and sex-matched controls. Brain magnetic resonance imaging showed absent/hypoplastic anterior commissure in six and a hypoplastic corpus callosum in three PAX6 subjects. The control group gave normal central auditory tests results. All the PAX6 subjects gave abnormal results in at least two tests that require interhemispheric transfer, and all but one gave normal results in a test not requiring interhemispheric transfer. The left ear scores in the dichotic speech tests were significantly lower in the PAX6 than in the control group. These results are consistent with deficient auditory interhemispheric transfer in patients with a PAX6 mutation, which may be attributable to structural and/or functional abnormalities of the anterior commissure and corpus callosum, although the exact contribution of these two formations to our findings remains unclear. Our unique findings broaden the possible functions of PAX6 to include neurodevelopmental roles in higher order auditory processing. PMID:15389894

  8. Restoration of multichannel microwave radiometric images

    NASA Technical Reports Server (NTRS)

    Chin, R. T.; Yeh, C. L.; Olson, W. S.

    1983-01-01

    A constrained iterative image restoration method is applied to multichannel diffraction-limited imagery. This method is based on the Gerchberg-Papoulis algorithm utilizing incomplete information and partial constraints. The procedure is described using the orthogonal projection operators which project onto two prescribed subspaces iteratively. Some of its properties and limitations are also presented. The selection of appropriate constraints was emphasized in a practical application. Multichannel microwave images, each having different spatial resolution, were restored to a common highest resolution to demonstrate the effectiveness of the method. Both noise-free and noisy images were used in this investigation.

  9. Optical multichannel sensing of skin blood pulsations

    NASA Astrophysics Data System (ADS)

    Spigulis, Janis; Erts, Renars; Kukulis, Indulis; Ozols, Maris; Prieditis, Karlis

    2004-09-01

    Time resolved detection and analysis of the skin back-scattered optical signals (reflection photoplethysmography or PPG) provide information on skin blood volume pulsations and can serve for cardiovascular assessment. The multi-channel PPG concept has been developed and clinically verified in this study. Portable two- and four-channel PPG monitoring devices have been designed for real-time data acquisition and processing. The multi-channel devices were successfully applied for cardiovascular fitness tests and for early detection of arterial occlusions in extremities. The optically measured heartbeat pulse wave propagation made it possible to estimate relative arterial resistances for numerous patients and healthy volunteers.

  10. Multimodal Lexical Processing in Auditory Cortex Is Literacy Skill Dependent

    PubMed Central

    McNorgan, Chris; Awati, Neha; Desroches, Amy S.; Booth, James R.

    2014-01-01

    Literacy is a uniquely human cross-modal cognitive process wherein visual orthographic representations become associated with auditory phonological representations through experience. Developmental studies provide insight into how experience-dependent changes in brain organization influence phonological processing as a function of literacy. Previous investigations show a synchrony-dependent influence of letter presentation on individual phoneme processing in superior temporal sulcus; others demonstrate recruitment of primary and associative auditory cortex during cross-modal processing. We sought to determine whether brain regions supporting phonological processing of larger lexical units (monosyllabic words) over larger time windows are sensitive to cross-modal information, and whether such effects are literacy dependent. Twenty-two children (age 8–14 years) made rhyming judgments for sequentially presented word and pseudoword pairs presented either unimodally (auditory- or visual-only) or cross-modally (audiovisual). Regression analyses examined the relationship between literacy and congruency effects (overlapping orthography and phonology vs. overlapping phonology-only). We extend previous findings by showing that higher literacy is correlated with greater congruency effects in auditory cortex (i.e., planum temporale) only for cross-modal processing. These skill effects were specific to known words and occurred over a large time window, suggesting that multimodal integration in posterior auditory cortex is critical for fluent reading. PMID:23588185

  11. Multimodal lexical processing in auditory cortex is literacy skill dependent.

    PubMed

    McNorgan, Chris; Awati, Neha; Desroches, Amy S; Booth, James R

    2014-09-01

    Literacy is a uniquely human cross-modal cognitive process wherein visual orthographic representations become associated with auditory phonological representations through experience. Developmental studies provide insight into how experience-dependent changes in brain organization influence phonological processing as a function of literacy. Previous investigations show a synchrony-dependent influence of letter presentation on individual phoneme processing in superior temporal sulcus; others demonstrate recruitment of primary and associative auditory cortex during cross-modal processing. We sought to determine whether brain regions supporting phonological processing of larger lexical units (monosyllabic words) over larger time windows are sensitive to cross-modal information, and whether such effects are literacy dependent. Twenty-two children (age 8-14 years) made rhyming judgments for sequentially presented word and pseudoword pairs presented either unimodally (auditory- or visual-only) or cross-modally (audiovisual). Regression analyses examined the relationship between literacy and congruency effects (overlapping orthography and phonology vs. overlapping phonology-only). We extend previous findings by showing that higher literacy is correlated with greater congruency effects in auditory cortex (i.e., planum temporale) only for cross-modal processing. These skill effects were specific to known words and occurred over a large time window, suggesting that multimodal integration in posterior auditory cortex is critical for fluent reading. PMID:23588185

  12. Auditory pathways: anatomy and physiology.

    PubMed

    Pickles, James O

    2015-01-01

    This chapter outlines the anatomy and physiology of the auditory pathways. After a brief analysis of the external ear, middle ear, and cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream, and a dorsal mainly pattern recognition stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards, to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features in the auditory stimulus in the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from cortex in multiple stages to the organ of Corti of the cochlea, is described.

  13. Development of the auditory system

    PubMed Central

    Litovsky, Ruth

    2015-01-01

    Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur naturally, and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics, as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear prolonged maturation of auditory development well into the teenage years. Maturation involves auditory pathways. However, non-auditory changes (attention, memory, cognition) play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity. PMID:25726262

  14. Auditory object cognition in dementia

    PubMed Central

    Goll, Johanna C.; Kim, Lois G.; Hailstone, Julia C.; Lehmann, Manja; Buckley, Aisling; Crutch, Sebastian J.; Warren, Jason D.

    2011-01-01

    The cognition of nonverbal sounds in dementia has been relatively little explored. Here we undertook a systematic study of nonverbal sound processing in patient groups with canonical dementia syndromes comprising clinically diagnosed typical amnestic Alzheimer's disease (AD; n = 21), progressive nonfluent aphasia (PNFA; n = 5), logopenic progressive aphasia (LPA; n = 7) and aphasia in association with a progranulin gene mutation (GAA; n = 1), and in healthy age-matched controls (n = 20). Based on a cognitive framework treating complex sounds as ‘auditory objects’, we designed a novel neuropsychological battery to probe auditory object cognition at early perceptual (sub-object), object representational (apperceptive) and semantic levels. All patients had assessments of peripheral hearing and general neuropsychological functions in addition to the experimental auditory battery. While a number of aspects of auditory object analysis were impaired across patient groups and were influenced by general executive (working memory) capacity, certain auditory deficits had some specificity for particular dementia syndromes. Patients with AD had a disproportionate deficit of auditory apperception but preserved timbre processing. Patients with PNFA had salient deficits of timbre and auditory semantic processing, but intact auditory size and apperceptive processing. Patients with LPA had a generalised auditory deficit that was influenced by working memory function. In contrast, the patient with GAA showed substantial preservation of auditory function, but a mild deficit of pitch direction processing and a more severe deficit of auditory apperception. The findings provide evidence for separable stages of auditory object analysis and separable profiles of impaired auditory object cognition in different dementia syndromes. PMID:21689671

  15. Modulation of Auditory Spatial Attention by Angry Prosody: An fMRI Auditory Dot-Probe Study.

    PubMed

    Ceravolo, Leonardo; Frühholz, Sascha; Grandjean, Didier

    2016-01-01

    Emotional stimuli have been shown to modulate attentional orienting through signals sent by subcortical brain regions that modulate visual perception at early stages of processing. Fewer studies, however, have investigated a similar effect of emotional stimuli on attentional orienting in the auditory domain together with an investigation of brain regions underlying such attentional modulation, which is the general aim of the present study. Therefore, we used an original auditory dot-probe paradigm involving simultaneously presented neutral and angry non-speech vocal utterances lateralized to either the left or the right auditory space, immediately followed by a short and lateralized single sine wave tone presented in the same (valid trial) or in the opposite space as the preceding angry voice (invalid trial). Behavioral results showed an expected facilitation effect for target detection during valid trials while functional data showed greater activation in the middle and posterior superior temporal sulci (STS) and in the medial frontal cortex for valid vs. invalid trials. The use of reaction time facilitation [absolute value of the Z-score of valid-(invalid+neutral)] as a group covariate extended the enhanced activity into the amygdalae, auditory thalamus, and visual cortex. Taken together, our results suggest the involvement of a large and distributed network of regions among which the STS, thalamus, and amygdala are crucial for the decoding of angry prosody, as well as for orienting and maintaining attention within an auditory space that was previously primed by a vocal emotional event.

  16. Modulation of Auditory Spatial Attention by Angry Prosody: An fMRI Auditory Dot-Probe Study.

    PubMed

    Ceravolo, Leonardo; Frühholz, Sascha; Grandjean, Didier

    2016-01-01

    Emotional stimuli have been shown to modulate attentional orienting through signals sent by subcortical brain regions that modulate visual perception at early stages of processing. Fewer studies, however, have investigated a similar effect of emotional stimuli on attentional orienting in the auditory domain together with an investigation of brain regions underlying such attentional modulation, which is the general aim of the present study. Therefore, we used an original auditory dot-probe paradigm involving simultaneously presented neutral and angry non-speech vocal utterances lateralized to either the left or the right auditory space, immediately followed by a short and lateralized single sine wave tone presented in the same (valid trial) or in the opposite space as the preceding angry voice (invalid trial). Behavioral results showed an expected facilitation effect for target detection during valid trials while functional data showed greater activation in the middle and posterior superior temporal sulci (STS) and in the medial frontal cortex for valid vs. invalid trials. The use of reaction time facilitation [absolute value of the Z-score of valid-(invalid+neutral)] as a group covariate extended the enhanced activity into the amygdalae, auditory thalamus, and visual cortex. Taken together, our results suggest the involvement of a large and distributed network of regions among which the STS, thalamus, and amygdala are crucial for the decoding of angry prosody, as well as for orienting and maintaining attention within an auditory space that was previously primed by a vocal emotional event. PMID:27242420

  17. Modulation of Auditory Spatial Attention by Angry Prosody: An fMRI Auditory Dot-Probe Study

    PubMed Central

    Ceravolo, Leonardo; Frühholz, Sascha; Grandjean, Didier

    2016-01-01

    Emotional stimuli have been shown to modulate attentional orienting through signals sent by subcortical brain regions that modulate visual perception at early stages of processing. Fewer studies, however, have investigated a similar effect of emotional stimuli on attentional orienting in the auditory domain together with an investigation of brain regions underlying such attentional modulation, which is the general aim of the present study. Therefore, we used an original auditory dot-probe paradigm involving simultaneously presented neutral and angry non-speech vocal utterances lateralized to either the left or the right auditory space, immediately followed by a short and lateralized single sine wave tone presented in the same (valid trial) or in the opposite space as the preceding angry voice (invalid trial). Behavioral results showed an expected facilitation effect for target detection during valid trials while functional data showed greater activation in the middle and posterior superior temporal sulci (STS) and in the medial frontal cortex for valid vs. invalid trials. The use of reaction time facilitation [absolute value of the Z-score of valid-(invalid+neutral)] as a group covariate extended the enhanced activity into the amygdalae, auditory thalamus, and visual cortex. Taken together, our results suggest the involvement of a large and distributed network of regions among which the STS, thalamus, and amygdala are crucial for the decoding of angry prosody, as well as for orienting and maintaining attention within an auditory space that was previously primed by a vocal emotional event. PMID:27242420

  18. PET imaging of the 40 Hz auditory steady state response.

    PubMed

    Reyes, Samuel A; Salvi, Richard J; Burkard, Robert F; Coad, Mary Lou; Wack, David S; Galantowicz, Paul J; Lockwood, Alan H

    2004-08-01

    The auditory steady state response (aSSR) is an oscillatory electrical potential recorded from the scalp induced by amplitude-modulated (AM) or click/tone burst stimuli. Its clinical utility has been limited by uncertainty regarding the specific areas of the brain involved in its generation. To identify the generators of the aSSR, 15O-water PET imaging was used to locate the regions of the brain activated by a steady 1 kHz pure tone, the same tone amplitude modulated at 40 Hz, and the specific regions of the brain responsive to the AM component of the stimulus relative to the continuous tone. The continuous tone produced four clusters of activation. The boundaries of these activated clusters extended to include regions in left primary auditory cortex, right non-primary auditory cortex, left thalamus, and left cingulate. The AM tone produced three clusters of activation. The boundaries of these activated clusters extended to include primary auditory cortex bilaterally, left medial geniculate and right middle frontal gyrus. Two regions were specifically responsive to the AM component of the stimulus. These activated clusters extended to include the right anterior cingulate near frontal cortex and right auditory cortex. We conclude that cortical sites, including areas outside primary auditory cortex, are involved in generating the aSSR. There was an unexpected difference between morning and afternoon session scans that may reflect a pre- versus post-prandial state. These results support the hypothesis that a distributed resonating circuit mediates the generation of the aSSR.
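    An AM stimulus of the kind described above is straightforward to synthesize: a 1 kHz carrier whose envelope is modulated at 40 Hz, which places spectral energy at the carrier frequency and at sidebands 40 Hz to either side. The sampling rate and modulation depth below are assumed values, not parameters from the study.

```python
import numpy as np

fs, dur = 16000, 1.0          # sampling rate (Hz) and duration (s), assumed
t = np.arange(int(fs * dur)) / fs
fc, fm = 1000.0, 40.0         # 1 kHz carrier, 40 Hz modulation rate

# 100%-depth AM: spectral energy at fc and at the fc +/- fm sidebands
am_tone = (1.0 + np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t) / 2.0
```

    Expanding the product shows why the aSSR follows the 40 Hz envelope: the waveform equals a 1000 Hz component plus components at 960 Hz and 1040 Hz, and the cochlea recovers the 40 Hz beat between them.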

  19. Early hominin auditory capacities.

    PubMed

    Quam, Rolf; Martínez, Ignacio; Rosa, Manuel; Bonmatí, Alejandro; Lorenzo, Carlos; de Ruiter, Darryl J; Moggi-Cecchi, Jacopo; Conde Valverde, Mercedes; Jarabo, Pilar; Menter, Colin G; Thackeray, J Francis; Arsuaga, Juan Luis

    2015-09-01

    Studies of sensory capacities in past life forms have offered new insights into their adaptations and lifeways. Audition is particularly amenable to study in fossils because it is strongly related to physical properties that can be approached through their skeletal structures. We have studied the anatomy of the outer and middle ear in the early hominin taxa Australopithecus africanus and Paranthropus robustus and estimated their auditory capacities. Compared with chimpanzees, the early hominin taxa are derived toward modern humans in their slightly shorter and wider external auditory canal, smaller tympanic membrane, and lower malleus/incus lever ratio, but they remain primitive in the small size of their stapes footplate. Compared with chimpanzees, both early hominin taxa show a heightened sensitivity to frequencies between 1.5 and 3.5 kHz and an occupied band of maximum sensitivity that is shifted toward slightly higher frequencies. The results have implications for sensory ecology and communication, and suggest that the early hominin auditory pattern may have facilitated an increased emphasis on short-range vocal communication in open habitats. PMID:26601261

  20. Early hominin auditory capacities

    PubMed Central

    Quam, Rolf; Martínez, Ignacio; Rosa, Manuel; Bonmatí, Alejandro; Lorenzo, Carlos; de Ruiter, Darryl J.; Moggi-Cecchi, Jacopo; Conde Valverde, Mercedes; Jarabo, Pilar; Menter, Colin G.; Thackeray, J. Francis; Arsuaga, Juan Luis

    2015-01-01

    Studies of sensory capacities in past life forms have offered new insights into their adaptations and lifeways. Audition is particularly amenable to study in fossils because it is strongly related to physical properties that can be approached through their skeletal structures. We have studied the anatomy of the outer and middle ear in the early hominin taxa Australopithecus africanus and Paranthropus robustus and estimated their auditory capacities. Compared with chimpanzees, the early hominin taxa are derived toward modern humans in their slightly shorter and wider external auditory canal, smaller tympanic membrane, and lower malleus/incus lever ratio, but they remain primitive in the small size of their stapes footplate. Compared with chimpanzees, both early hominin taxa show a heightened sensitivity to frequencies between 1.5 and 3.5 kHz and an occupied band of maximum sensitivity that is shifted toward slightly higher frequencies. The results have implications for sensory ecology and communication, and suggest that the early hominin auditory pattern may have facilitated an increased emphasis on short-range vocal communication in open habitats. PMID:26601261

  1. Early hominin auditory capacities.

    PubMed

    Quam, Rolf; Martínez, Ignacio; Rosa, Manuel; Bonmatí, Alejandro; Lorenzo, Carlos; de Ruiter, Darryl J; Moggi-Cecchi, Jacopo; Conde Valverde, Mercedes; Jarabo, Pilar; Menter, Colin G; Thackeray, J Francis; Arsuaga, Juan Luis

    2015-09-01

    Studies of sensory capacities in past life forms have offered new insights into their adaptations and lifeways. Audition is particularly amenable to study in fossils because it is strongly related to physical properties that can be approached through their skeletal structures. We have studied the anatomy of the outer and middle ear in the early hominin taxa Australopithecus africanus and Paranthropus robustus and estimated their auditory capacities. Compared with chimpanzees, the early hominin taxa are derived toward modern humans in their slightly shorter and wider external auditory canal, smaller tympanic membrane, and lower malleus/incus lever ratio, but they remain primitive in the small size of their stapes footplate. Compared with chimpanzees, both early hominin taxa show a heightened sensitivity to frequencies between 1.5 and 3.5 kHz and an occupied band of maximum sensitivity that is shifted toward slightly higher frequencies. The results have implications for sensory ecology and communication, and suggest that the early hominin auditory pattern may have facilitated an increased emphasis on short-range vocal communication in open habitats.

  2. Multichannel analyzers at high rates of input

    NASA Technical Reports Server (NTRS)

    Rudnick, S. J.; Strauss, M. G.

    1969-01-01

    A multichannel analyzer, used with a gating system incorporating pole-zero compensation, pile-up rejection, and baseline restoration, achieves good resolution at high input rates. It improves resolution, reduces tailing and rate-contributed continuum, and eliminates spectral shift.

  3. Multi-channel electric aerosol spectrometer

    NASA Astrophysics Data System (ADS)

    Mirme, A.; Noppel, M.; Peil, I.; Salm, J.; Tamm, E.; Tammet, H.

    Multi-channel electric mobility spectrometry is among the most efficient techniques for the rapid measurement of an unstable aerosol particle size spectrum. The wide measuring range of the spectrometer, extending up to 10 microns, is achieved by applying diffusional and field charging mechanisms simultaneously. On-line data processing is carried out with a microcomputer. Experimental calibration ensures the correctness of measurement.

  4. A multi-channel waveform digitizer system

    SciTech Connect

    Bieser, F.; Muller, W.F.J.

    1990-04-01

    The authors report on the design and performance of a multichannel waveform digitizer system for use with the Multiple Sample Ionization Chamber (MUSIC) Detector at the Bevalac. 128 channels of 20 MHz Flash ADC plus 256 word deep memory are housed in a single crate. Digital thresholds and hit pattern logic facilitate zero suppression during readout which is performed over a standard VME bus.

  5. Manipulation of a central auditory representation shapes learned vocal output

    PubMed Central

    Lei, Huimeng; Mooney, Richard

    2009-01-01

    Learned vocalizations depend on the ear’s ability to monitor and ultimately instruct the voice. Where is auditory feedback processed in the brain and how does it modify motor networks for learned vocalizations? Here we addressed these questions using singing-triggered microstimulation and chronic recording methods in the singing zebra finch, a small songbird that relies on auditory feedback to learn and maintain its species-typical vocalizations. Manipulating the singing-related activity of feedback-sensitive thalamic neurons subsequently triggered vocal plasticity, constraining the central pathway and functional mechanisms through which feedback-related information shapes vocalization. PMID:20152118

  6. Auditory interfaces: The human perceiver

    NASA Technical Reports Server (NTRS)

    Colburn, H. Steven

    1991-01-01

    A brief introduction to the basic auditory abilities of the human perceiver with particular attention toward issues that may be important for the design of auditory interfaces is presented. The importance of appropriate auditory inputs to observers with normal hearing is probably related to the role of hearing as an omnidirectional, early warning system and to its role as the primary vehicle for communication of strong personal feelings.

  7. Neural correlates of auditory scale illusion.

    PubMed

    Kuriki, Shinya; Numao, Ryousuke; Nemoto, Iku

    2016-09-01

    The auditory illusory perception "scale illusion" occurs when ascending and descending musical scale tones are delivered in a dichotic manner, such that the higher or lower tone at each instant is presented alternately to the right and left ears. Resulting tone sequences have a zigzag pitch in one ear and the reversed (zagzig) pitch in the other ear. Most listeners hear illusory smooth pitch sequences of up-down and down-up streams in the two ears separated in higher and lower halves of the scale. Although many behavioral studies have been conducted, how and where in the brain the illusory percept is formed have not been elucidated. In this study, we conducted functional magnetic resonance imaging using sequential tones that induced scale illusion (ILL) and those that mimicked the percept of scale illusion (PCP), and we compared the activation responses evoked by those stimuli by region-of-interest analysis. We examined the effects of adaptation, i.e., the attenuation of response that occurs when close-frequency sounds are repeated, which might interfere with the changes in activation by the illusion process. The activation differences between the two stimuli in the superior temporal auditory cortex, measured at varied tempi of tone presentation, were not explained by adaptation. Instead, excess activation of the ILL stimulus over the PCP stimulus at moderate tempi (83 and 126 bpm) was significant in the posterior auditory cortex with rightward superiority, while significant prefrontal activation was dominant at the highest tempo (245 bpm). We suggest that the area of the planum temporale posterior to the primary auditory cortex is mainly involved in the illusion formation, and that the illusion-related process is strongly dependent on the rate of tone presentation. PMID:27292114
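    The dichotic construction described above, with the higher and lower scale tones alternating between the ears, can be sketched as follows. The note values are an assumed one-octave C major realization in MIDI numbers, not the study's exact stimuli.

```python
# one-octave C major scale as MIDI note numbers (assumed realization)
ascending = [60, 62, 64, 65, 67, 69, 71, 72]
descending = ascending[::-1]

right_ear, left_ear = [], []
for i, (up, down) in enumerate(zip(ascending, descending)):
    hi, lo = max(up, down), min(up, down)
    if i % 2 == 0:            # the higher tone alternates between the ears
        right_ear.append(hi)
        left_ear.append(lo)
    else:
        right_ear.append(lo)
        left_ear.append(hi)

# physically, each ear receives a zigzag pitch sequence; perceptually,
# most listeners report two smooth streams split by pitch height:
upper_stream = [max(r, l) for r, l in zip(right_ear, left_ear)]
lower_stream = [min(r, l) for r, l in zip(right_ear, left_ear)]
```

    The illusory up-down and down-up streams (the `upper_stream` and `lower_stream` lists) are smooth, even though each physical ear sequence alternates direction on every step.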

  8. Seeing sounds and hearing colors: an event-related potential study of auditory-visual synesthesia.

    PubMed

    Goller, Aviva I; Otten, Leun J; Ward, Jamie

    2009-10-01

    In auditory-visual synesthesia, sounds automatically elicit conscious and reliable visual experiences. It is presently unknown whether this reflects early or late processes in the brain. It is also unknown whether adult audiovisual synesthesia resembles auditory-induced visual illusions that can sometimes occur in the general population or whether it resembles the electrophysiological deflection over occipital sites that has been noted in infancy and has been likened to synesthesia. Electrical brain activity was recorded from adult synesthetes and control participants who were played brief tones and required to monitor for an infrequent auditory target. The synesthetes were instructed to attend either to the auditory or to the visual (i.e., synesthetic) dimension of the tone, whereas the controls attended to the auditory dimension alone. There were clear differences between synesthetes and controls that emerged early (100 msec after tone onset). These differences tended to lie in deflections of the auditory-evoked potential (e.g., the auditory N1, P2, and N2) rather than the presence of an additional posterior deflection. The differences occurred irrespective of what the synesthetes attended to (although attention had a late effect). The results suggest that differences between synesthetes and others occur early in time, and that synesthesia is qualitatively different from similar effects found in infants and certain auditory-induced visual illusions in adults. In addition, we report two novel cases of synesthesia in which colors elicit sounds, and vice versa.

  9. Grey matter connectivity within and between auditory, language and visual systems in prelingually deaf adolescents

    PubMed Central

    Li, Wenjing; Li, Jianhong; Wang, Zhenchang; Li, Yong; Liu, Zhaohui; Yan, Fei; Xian, Junfang; He, Huiguang

    2015-01-01

    Purpose: Previous studies have shown brain reorganizations after early deprivation of auditory sensory. However, changes of grey matter connectivity have not yet been investigated in prelingually deaf adolescents. In the present study, we aimed to investigate changes of grey matter connectivity within and between auditory, language and visual systems in prelingually deaf adolescents. Methods: We recruited 16 prelingually deaf adolescents and 16 age- and gender-matched normal controls, and extracted the grey matter volume as the structural characteristic from 14 regions of interest involved in auditory, language or visual processing to investigate the changes of grey matter connectivity within and between auditory, language and visual systems. Sparse inverse covariance estimation (SICE) was utilized to construct grey matter connectivity between these brain regions. Results: The results show that prelingually deaf adolescents present weaker grey matter connectivity within auditory and visual systems, and connectivity between language and visual systems declined. Notably, significantly increased brain connectivity was found between auditory and visual systems in prelingually deaf adolescents. Conclusions: Our results indicate “cross-modal” plasticity after deprivation of the auditory input in prelingually deaf adolescents, especially between auditory and visual systems. In addition, auditory deprivation and visual deficits might affect the connectivity pattern within language and visual systems in prelingually deaf adolescents. PMID:25698109
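    The connectivity step can be illustrated with an unregularized sketch: with enough samples, the empirical precision (inverse covariance) matrix yields partial correlations between ROIs. The study itself used sparse inverse covariance estimation (a regularized estimator such as the graphical lasso), which is needed when the number of subjects (16) barely exceeds the number of ROIs (14); the synthetic data and sample size below are placeholders chosen so that plain matrix inversion is stable.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_rois = 200, 14      # 14 ROIs as in the study; sample size inflated
gm = rng.normal(size=(n_subjects, n_rois))   # placeholder grey matter volumes
gm[:, 1] += 0.8 * gm[:, 0]        # inject one direct ROI0-ROI1 dependency

prec = np.linalg.inv(np.cov(gm, rowvar=False))   # empirical precision matrix

# partial correlation between ROIs i and j, conditioned on all other ROIs
d = np.sqrt(np.diag(prec))
pcorr = -prec / np.outer(d, d)
np.fill_diagonal(pcorr, 1.0)
```

    A zero in the precision matrix means two ROIs are conditionally independent given the rest, which is why sparse estimates of it are read as "direct" connectivity; the injected ROI0-ROI1 link dominates `pcorr` here.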

  10. Visual activity predicts auditory recovery from deafness after adult cochlear implantation.

    PubMed

    Strelnikov, Kuzma; Rouger, Julien; Demonet, Jean-François; Lagleyre, Sebastien; Fraysse, Bernard; Deguine, Olivier; Barone, Pascal

    2013-12-01

    Modern cochlear implantation technologies allow deaf patients to understand auditory speech; however, the implants deliver only a coarse auditory input and patients must use long-term adaptive processes to achieve coherent percepts. In adults with post-lingual deafness, the greatest progress in speech recovery occurs during the first year after cochlear implantation, but there is a large range of variability in the level of cochlear implant outcomes and the temporal evolution of recovery. It has been proposed that when profoundly deaf subjects receive a cochlear implant, the visual cross-modal reorganization of the brain is deleterious for auditory speech recovery. We tested this hypothesis in post-lingually deaf adults by analysing whether brain activity shortly after implantation correlated with the level of auditory recovery 6 months later. Based on brain activity induced by a speech-processing task, we found strong positive correlations in areas outside the auditory cortex. The highest positive correlations were found in the occipital cortex involved in visual processing, as well as in the posterior-temporal cortex known for audio-visual integration. The other area, which positively correlated with auditory speech recovery, was localized in the left inferior frontal area known for speech processing. Our results demonstrate that the visual modality's functional level is related to the proficiency level of auditory recovery. Based on the positive correlation of visual activity with auditory speech recovery, we suggest that visual modality may facilitate the perception of the word's auditory counterpart in communicative situations. The link demonstrated between visual activity and auditory speech perception indicates that visuoauditory synergy is crucial for cross-modal plasticity and fostering speech-comprehension recovery in adult cochlear-implanted deaf patients. PMID:24136826

  11. Auditory brainstem response in dolphins.

    PubMed

    Ridgway, S H; Bullock, T H; Carder, D A; Seeley, R L; Woods, D; Galambos, R

    1981-03-01

    We recorded the auditory brainstem response (ABR) in four dolphins (Tursiops truncatus and Delphinus delphis). The ABR evoked by clicks consists of seven waves within 10 msec; two waves often contain dual peaks. The main waves can be identified with those of humans and laboratory mammals; in spite of a much longer path, the latencies of the peaks are almost identical to those of the rat. The dolphin ABR waves increase in latency as the intensity of a sound decreases by only 4 microseconds/decibel (dB) (for clicks with peak power at 66 kHz) compared to 40 microseconds/dB in humans (for clicks in the sonic range). Low-frequency clicks (6-kHz peak power) show a latency increase about 3 times (12 microseconds/dB) as great. Although the dolphin brainstem tracks individual clicks to at least 600 per sec, the latency increases and amplitude decreases with increasing click rates. This effect varies among different waves of the ABR; it is around one-fifth the effect seen in man. The dolphin brain is specialized for handling brief, frequent clicks. A small latency difference is seen between clicks 180 degrees different in phase--i.e., with initial compression vs. initial rarefaction. The ABR can be used to test theories of dolphin sonar signal processing. Hearing thresholds can be evaluated rapidly. Cetaceans that have not been investigated can now be examined, including the great whales, a group for which data are now completely lacking.
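    The reported latency-intensity slopes make the species difference easy to quantify; a trivial worked example follows (the 30 dB attenuation is an illustrative value, not one from the study).

```python
# latency-intensity slopes quoted in the abstract (microseconds per dB)
DOLPHIN_SLOPE_US_PER_DB = 4       # clicks with peak power at 66 kHz
HUMAN_SLOPE_US_PER_DB = 40        # sonic-range clicks

def latency_shift_ms(slope_us_per_db, attenuation_db):
    """ABR wave latency increase (ms) for a given drop in click intensity."""
    return slope_us_per_db * attenuation_db / 1000.0

# a click 30 dB quieter delays the dolphin ABR by ~0.12 ms vs ~1.2 ms in humans
dolphin_shift = latency_shift_ms(DOLPHIN_SLOPE_US_PER_DB, 30)
human_shift = latency_shift_ms(HUMAN_SLOPE_US_PER_DB, 30)
```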

  12. Auditory Discrimination and Auditory Sensory Behaviours in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Jones, Catherine R. G.; Happe, Francesca; Baird, Gillian; Simonoff, Emily; Marsden, Anita J. S.; Tregay, Jenifer; Phillips, Rebecca J.; Goswami, Usha; Thomson, Jennifer M.; Charman, Tony

    2009-01-01

    It has been hypothesised that auditory processing may be enhanced in autism spectrum disorders (ASD). We tested auditory discrimination ability in 72 adolescents with ASD (39 childhood autism; 33 other ASD) and 57 IQ and age-matched controls, assessing their capacity for successful discrimination of the frequency, intensity and duration…

  13. The Central Auditory Processing Kit[TM]. Book 1: Auditory Memory [and] Book 2: Auditory Discrimination, Auditory Closure, and Auditory Synthesis [and] Book 3: Auditory Figure-Ground, Auditory Cohesion, Auditory Binaural Integration, and Compensatory Strategies.

    ERIC Educational Resources Information Center

    Mokhemar, Mary Ann

    This kit for assessing central auditory processing disorders (CAPD), in children in grades 1 through 8 includes 3 books, 14 full-color cards with picture scenes, and a card depicting a phone key pad, all contained in a sturdy carrying case. The units in each of the three books correspond with auditory skill areas most commonly addressed in…

  14. Neurophysiological Studies of Auditory Verbal Hallucinations

    PubMed Central

    Ford, Judith M.; Dierks, Thomas; Fisher, Derek J.; Herrmann, Christoph S.; Hubl, Daniela; Kindler, Jochen; Koenig, Thomas; Mathalon, Daniel H.; Spencer, Kevin M.; Strik, Werner; van Lutterveld, Remko

    2012-01-01

    We discuss 3 neurophysiological approaches to study auditory verbal hallucinations (AVH). First, we describe “state” (or symptom capture) studies where periods with and without hallucinations are compared “within” a patient. These studies take 2 forms: passive studies, where brain activity during these states is compared, and probe studies, where brain responses to sounds during these states are compared. EEG (electroencephalography) and MEG (magnetoencephalography) data point to frontal and temporal lobe activity, the latter resulting in competition with external sounds for auditory resources. Second, we discuss “trait” studies where EEG and MEG responses to sounds are recorded from patients who hallucinate and those who do not. They suggest a tendency to hallucinate is associated with competition for auditory processing resources. Third, we discuss studies addressing possible mechanisms of AVH, including spontaneous neural activity, abnormal self-monitoring, and dysfunctional interregional communication. While most studies show differences in EEG and MEG responses between patients and controls, far fewer show symptom relationships. We conclude that efforts to understand the pathophysiology of AVH using EEG and MEG have been hindered by poor anatomical resolution of the EEG and MEG measures, poor assessment of symptoms, poor understanding of the phenomenon, poor models of the phenomenon, decoupling of the symptoms from the neurophysiology due to medications and comorbidities, and the possibility that the schizophrenia diagnosis breeds truer than the symptoms it comprises. These problems are common to studies of other psychiatric symptoms and should be considered when attempting to understand the basic neural mechanisms responsible for them. PMID:22368236

  15. Auditory distance perception in humans: a review of cues, development, neuronal bases, and effects of sensory loss.

    PubMed

    Kolarik, Andrew J; Moore, Brian C J; Zahorik, Pavel; Cirstea, Silvia; Pardhan, Shahina

    2016-02-01

    Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss. PMID:26590050
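    The sound-level cue discussed above can be sketched with the standard free-field model, in which received level falls about 6 dB per doubling of source distance (inverse-square law). This idealization, which reverberation and the review's nonperceptual factors complicate in practice, is an assumption of the sketch, not a claim from the review:

```python
# Free-field (inverse-square) model of the level cue to auditory distance:
# a drop of L dB in received level implies a distance ratio of 10**(L/20).
def relative_distance(level_drop_db):
    """Distance ratio implied by a drop in received sound level (free field)."""
    return 10 ** (level_drop_db / 20.0)

print(relative_distance(6.02))   # ~2.0: a 6 dB drop ~ doubled distance
print(relative_distance(20.0))   # 10.0: a 20 dB drop ~ tenfold distance
```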

  16. Practiced musical style shapes auditory skills.

    PubMed

    Vuust, Peter; Brattico, Elvira; Seppänen, Miia; Näätänen, Risto; Tervaniemi, Mari

    2012-04-01

    Musicians' processing of sounds depends highly on instrument, performance practice, and level of expertise. Here, we measured the mismatch negativity (MMN), a preattentive brain response, to six types of musical feature change in musicians playing three distinct styles of music (classical, jazz, and rock/pop) and in nonmusicians using a novel, fast, and musical sounding multifeature MMN paradigm. We found MMN to all six deviants, showing that MMN paradigms can be adapted to resemble a musical context. Furthermore, we found that jazz musicians had larger MMN amplitude than all other experimental groups across all sound features, indicating greater overall sensitivity to auditory outliers. Furthermore, we observed a tendency toward shorter latency of the MMN to all feature changes in jazz musicians compared to band musicians. These findings indicate that the characteristics of the style of music played by musicians influence their perceptual skills and the brain processing of sound features embedded in music.

  17. Auditory tracts identified with combined fMRI and diffusion tractography.

    PubMed

    Javad, Faiza; Warren, Jason D; Micallef, Caroline; Thornton, John S; Golay, Xavier; Yousry, Tarek; Mancini, Laura

    2014-01-01

    The auditory tracts in the human brain connect the inferior colliculus (IC) and medial geniculate body (MGB) to various components of the auditory cortex (AC). While in non-human primates and in humans the auditory system is differentiated into core, belt and parabelt areas, the correspondence between these areas and anatomical landmarks on the human superior temporal gyri is not straightforward, and at present not completely understood. However, it is not controversial that there is a hierarchical organization of auditory stimuli processing in the auditory system. The aims of this study were to demonstrate that it is possible to non-invasively and robustly identify auditory projections between the auditory thalamus/brainstem and different functional levels of auditory analysis in the cortex of human subjects in vivo combining functional magnetic resonance imaging (fMRI) with diffusion MRI, and to investigate the possibility of differentiating between different components of the auditory pathways (e.g. projections to areas responsible for sound, pitch and melody processing). We hypothesized that the major limitation in the identification of the auditory pathways is the known problem of crossing fibres and addressed this issue by acquiring DTI with b-values higher than commonly used and adopting a multi-fibre ball-and-stick analysis model combined with probabilistic tractography. Fourteen healthy subjects were studied. Auditory areas were localized functionally using an established hierarchical pitch processing fMRI paradigm. Together fMRI and diffusion MRI allowed the successful identification of tracts connecting IC with AC in 64 to 86% of hemispheres and left sound areas with homologous areas in the right hemisphere in 86% of hemispheres. The identified tracts corresponded closely with a three-dimensional stereotaxic atlas based on postmortem data. The findings have both neuroscientific and clinical implications for delineation of the human auditory system in vivo.

  19. Auditory tracts identified with combined fMRI and diffusion tractography

    PubMed Central

    Javad, Faiza; Warren, Jason D.; Micallef, Caroline; Thornton, John S.; Golay, Xavier; Yousry, Tarek; Mancini, Laura

    2014-01-01

    The auditory tracts in the human brain connect the inferior colliculus (IC) and medial geniculate body (MGB) to various components of the auditory cortex (AC). While in non-human primates and in humans the auditory system is differentiated into core, belt and parabelt areas, the correspondence between these areas and anatomical landmarks on the human superior temporal gyri is not straightforward, and at present not completely understood. However, it is not controversial that there is a hierarchical organization of auditory stimuli processing in the auditory system. The aims of this study were to demonstrate that it is possible to non-invasively and robustly identify auditory projections between the auditory thalamus/brainstem and different functional levels of auditory analysis in the cortex of human subjects in vivo combining functional magnetic resonance imaging (fMRI) with diffusion MRI, and to investigate the possibility of differentiating between different components of the auditory pathways (e.g. projections to areas responsible for sound, pitch and melody processing). We hypothesized that the major limitation in the identification of the auditory pathways is the known problem of crossing fibres and addressed this issue by acquiring DTI with b-values higher than commonly used and adopting a multi-fibre ball-and-stick analysis model combined with probabilistic tractography. Fourteen healthy subjects were studied. Auditory areas were localized functionally using an established hierarchical pitch processing fMRI paradigm. Together fMRI and diffusion MRI allowed the successful identification of tracts connecting IC with AC in 64 to 86% of hemispheres and left sound areas with homologous areas in the right hemisphere in 86% of hemispheres. The identified tracts corresponded closely with a three-dimensional stereotaxic atlas based on postmortem data. The findings have both neuroscientific and clinical implications for delineation of the human auditory system in vivo.

  20. Auditory Evoked Potential Response and Hearing Loss: A Review

    PubMed Central

    Paulraj, M. P; Subramaniam, Kamalraj; Yaccob, Sazali Bin; Adom, Abdul H. Bin; Hema, C. R

    2015-01-01

    Hypoacusis is the most prevalent sensory disability in the world and can impede speech development in human beings. One good approach to this issue is to conduct early and effective hearing screening using electroencephalography (EEG). EEG-based hearing threshold determination is most suitable for persons who lack verbal communication and behavioral response to sound stimulation. The auditory evoked potential (AEP) is a type of EEG signal recorded from the scalp in response to an acoustic stimulus. The goal of this review is to assess the current state of knowledge in estimating hearing threshold levels based on the AEP response. The AEP response reflects the auditory ability level of an individual. An intelligent hearing perception level system makes it possible to examine and determine the functional integrity of the auditory system. Systematic evaluation of EEG-based hearing perception level systems for predicting hearing loss in newborns, infants and persons with multiple handicaps will be a priority of interest for future research. PMID:25893012
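    AEPs are typically far smaller than the ongoing EEG, so threshold-estimation systems of the kind reviewed above rely on averaging many stimulus-locked epochs, which suppresses uncorrelated noise by roughly the square root of the number of epochs. A synthetic sketch of that principle follows; all waveform parameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n_epochs, epoch_len = 1000, 500, 300  # sampling rate (Hz), epochs, samples

t = np.arange(epoch_len) / fs
# Invented AEP-like waveform: a small damped oscillation buried in EEG noise.
aep = 0.5 * np.sin(2 * np.pi * 10 * t) * np.exp(-20 * t)
epochs = aep + rng.normal(0.0, 5.0, size=(n_epochs, epoch_len))

# Stimulus-locked averaging: the AEP survives, uncorrelated noise shrinks.
average = epochs.mean(axis=0)
residual_single = np.std(epochs[0] - aep)   # noise in one raw epoch
residual_avg = np.std(average - aep)        # noise after averaging
print(residual_single / residual_avg)       # roughly sqrt(500), about 22
```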

  1. Neural Biomarkers for Dyslexia, ADHD, and ADD in the Auditory Cortex of Children.

    PubMed

    Serrallach, Bettina; Groß, Christine; Bernhofs, Valdis; Engelmann, Dorte; Benner, Jan; Gündert, Nadine; Blatow, Maria; Wengenroth, Martina; Seitz, Angelika; Brunner, Monika; Seither, Stefan; Parncutt, Richard; Schneider, Peter; Seither-Preisler, Annemarie

    2016-01-01

    Dyslexia, attention deficit hyperactivity disorder (ADHD), and attention deficit disorder (ADD) show distinct clinical profiles that may include auditory and language-related impairments. Currently, an objective brain-based diagnosis of these developmental disorders is still unavailable. We investigated the neuro-auditory systems of dyslexic, ADHD, ADD, and age-matched control children (N = 147) using neuroimaging, magnetoencephalography and psychoacoustics. All disorder subgroups exhibited an oversized left planum temporale and an abnormal interhemispheric asynchrony (10-40 ms) of the primary auditory evoked P1-response. Considering right auditory cortex morphology, bilateral P1 source waveform shapes, and auditory performance, the three disorder subgroups could be reliably differentiated with outstanding accuracies of 89-98%. We therefore for the first time provide differential biomarkers for a brain-based diagnosis of dyslexia, ADHD, and ADD. The method not only allowed for clear discrimination between two subtypes of attentional disorders (ADHD and ADD), a topic controversially discussed for decades in the scientific community, but also revealed the potential for objectively identifying comorbid cases. Notably, in children playing a musical instrument, after three and a half years of training the observed interhemispheric asynchronies were reduced by about two-thirds, suggesting a strong beneficial influence of music experience on brain development. These findings might have far-reaching implications for both research and practice and enable a profound understanding of the brain-related etiology, diagnosis, and musically based therapy of common auditory-related developmental disorders and learning disabilities. PMID:27471442

  2. Neural Biomarkers for Dyslexia, ADHD, and ADD in the Auditory Cortex of Children

    PubMed Central

    Serrallach, Bettina; Groß, Christine; Bernhofs, Valdis; Engelmann, Dorte; Benner, Jan; Gündert, Nadine; Blatow, Maria; Wengenroth, Martina; Seitz, Angelika; Brunner, Monika; Seither, Stefan; Parncutt, Richard; Schneider, Peter; Seither-Preisler, Annemarie

    2016-01-01

    Dyslexia, attention deficit hyperactivity disorder (ADHD), and attention deficit disorder (ADD) show distinct clinical profiles that may include auditory and language-related impairments. Currently, an objective brain-based diagnosis of these developmental disorders is still unavailable. We investigated the neuro-auditory systems of dyslexic, ADHD, ADD, and age-matched control children (N = 147) using neuroimaging, magnetoencephalography and psychoacoustics. All disorder subgroups exhibited an oversized left planum temporale and an abnormal interhemispheric asynchrony (10–40 ms) of the primary auditory evoked P1-response. Considering right auditory cortex morphology, bilateral P1 source waveform shapes, and auditory performance, the three disorder subgroups could be reliably differentiated with outstanding accuracies of 89–98%. We therefore for the first time provide differential biomarkers for a brain-based diagnosis of dyslexia, ADHD, and ADD. The method not only allowed for clear discrimination between two subtypes of attentional disorders (ADHD and ADD), a topic controversially discussed for decades in the scientific community, but also revealed the potential for objectively identifying comorbid cases. Notably, in children playing a musical instrument, after three and a half years of training the observed interhemispheric asynchronies were reduced by about two-thirds, suggesting a strong beneficial influence of music experience on brain development. These findings might have far-reaching implications for both research and practice and enable a profound understanding of the brain-related etiology, diagnosis, and musically based therapy of common auditory-related developmental disorders and learning disabilities. PMID:27471442

  3. Issues in Human Auditory Development

    ERIC Educational Resources Information Center

    Werner, Lynne A.

    2007-01-01

    The human auditory system is often portrayed as precocious in its development. In fact, many aspects of basic auditory processing appear to be adult-like by the middle of the first year of postnatal life. However, processes such as attention and sound source determination take much longer to develop. Immaturity of higher-level processes limits the…

  4. Word Recognition in Auditory Cortex

    ERIC Educational Resources Information Center

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  5. Auditory neglect and related disorders.

    PubMed

    Gutschalk, Alexander; Dykstra, Andrew

    2015-01-01

    Neglect is a neurologic disorder, typically associated with lesions of the right hemisphere, in which patients are biased towards their ipsilesional - usually right - side of space while awareness for their contralesional - usually left - side is reduced or absent. Neglect is a multimodal disorder that often includes deficits in the auditory domain. Classically, auditory extinction, in which left-sided sounds that are correctly perceived in isolation are not detected in the presence of synchronous right-sided stimulation, has been considered the primary sign of auditory neglect. However, auditory extinction can also be observed after unilateral auditory cortex lesions and is thus not specific for neglect. Recent research has shown that patients with neglect are also impaired in maintaining sustained attention on both sides, which is reflected in impaired auditory target detection under continuous stimulation conditions. Perhaps the most impressive auditory symptom in full-blown neglect is alloacusis, in which patients mislocalize left-sided sound sources to their right, although even patients with less severe neglect still often show disturbance of auditory spatial perception, most commonly a lateralization bias towards the right. We discuss how these various disorders may be explained by a single model of neglect and review emerging interventions for patient rehabilitation.

  6. Topographic recordings of auditory evoked potentials to speech: subcortical and cortical responses.

    PubMed

    Bellier, Ludovic; Bouchet, Patrick; Jeanvoine, Arnaud; Valentin, Olivier; Thai-Van, Hung; Caclin, Anne

    2015-04-01

    Topographies of the speech auditory brainstem response (speech ABR), a fine electrophysiological marker of speech encoding, have never been described. Yet they could provide useful information for assessing speech ABR generators and better characterizing populations of interest (e.g., musicians, dyslexics). We present here a novel methodology for topographic speech ABR recording, using a 32-channel low-sampling-rate (5 kHz) EEG system. The quality of speech ABRs obtained with this conventional multichannel EEG system was compared to that of signals simultaneously recorded with a high-sampling-rate (13.3 kHz) EEG system. Correlations between speech ABRs recorded with the two systems revealed highly similar signals, without any significant difference in their signal-to-noise ratios (SNRs). Moreover, an advanced denoising method for multichannel data (denoising source separation) significantly improved SNR and allowed the topography of the speech ABR to be recovered.
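    The system comparison described above reduces to correlating simultaneously recorded responses and estimating their SNRs; a synthetic sketch of both computations follows (the signal and noise levels are invented, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
signal = np.sin(2 * np.pi * 0.01 * np.arange(n))  # stand-in for a speech ABR

# Two simultaneous recordings of the same response with independent noise,
# standing in for the low- and high-sampling-rate EEG systems.
rec_a = signal + rng.normal(0.0, 0.3, n)
rec_b = signal + rng.normal(0.0, 0.3, n)

# Between-system correlation of the recorded waveforms.
r = np.corrcoef(rec_a, rec_b)[0, 1]

# Simple SNR estimate in dB: signal power over (known) noise power.
snr_db = 10 * np.log10(np.var(signal) / 0.3**2)
print(round(r, 2), round(snr_db, 1))
```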

  7. The Perception of Auditory Motion

    PubMed Central

    Leung, Johahn

    2016-01-01

    The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029

  8. Loss of Prestin Does Not Alter the Development of Auditory Cortical Dendritic Spines

    PubMed Central

    Bogart, L. J.; Levy, A. D.; Gladstone, M.; Allen, P. D.; Zettel, M.; Ison, J. R.; Luebke, A. E.; Majewska, A. K.

    2011-01-01

    Disturbance of sensory input during development can have disastrous effects on the development of sensory cortical areas. To examine how moderate perturbations of hearing can impact the development of primary auditory cortex, we examined markers of excitatory synapses in mice lacking prestin, a protein responsible for somatic electromotility of cochlear outer hair cells. While auditory brain stem responses of these mice show an approximately 40 dB increase in threshold, we found that loss of prestin produced no changes in spine density or morphological characteristics on apical dendrites of cortical layer 5 pyramidal neurons. PSD-95 immunostaining also showed no changes in overall excitatory synapse density. Surprisingly, behavioral assessments of auditory function using the acoustic startle response showed only modest changes in prestin KO animals. These results suggest that moderate developmental hearing deficits produce minor changes in the excitatory connectivity of layer 5 neurons of primary auditory cortex and surprisingly mild auditory behavioral deficits in the startle response. PMID:21773053

  9. Visual short-term memory load affects sensory processing of irrelevant sounds in human auditory cortex.

    PubMed

    Valtonen, Jussi; May, Patrick; Mäkinen, Ville; Tiitinen, Hannu

    2003-07-01

    We used whole-head magnetoencephalography (MEG) to investigate neural activity in human auditory cortex elicited by irrelevant tones while the subjects were engaged in a short-term memory task presented in the visual modality. As compared to a no-memory-task condition, memory load enhanced the amplitude of the auditory N1m response. In addition, the N1m amplitude depended on the phase of the memory task, with larger response amplitudes observed during encoding than retention. Further, these amplitude modulations were accompanied by anterior-posterior shifts in N1m source locations. The results show that a memory task for visually presented stimuli alters sensory processing in human auditory cortex, even when subjects are explicitly instructed to ignore any auditory stimuli. Thus, it appears that task demands requiring attentional allocation and short-term memory result in interaction across visual and auditory brain areas carrying out the processing of stimulus features.

  10. Auditory hallucinations: A review of the ERC "VOICE" project.

    PubMed

    Hugdahl, Kenneth

    2015-06-22

    In this invited review I provide a selective overview of recent research on brain mechanisms and cognitive processes involved in auditory hallucinations. The review is focused on research carried out in the "VOICE" ERC Advanced Grant Project, funded by the European Research Council, but I also review and discuss the literature in general. Auditory hallucinations are suggested to be perceptual phenomena, with a neuronal origin in the speech perception areas in the temporal lobe. The phenomenology of auditory hallucinations is conceptualized along three domains, or dimensions: a perceptual dimension, experienced as someone speaking to the patient; a cognitive dimension, experienced as an inability to inhibit, or ignore, the voices; and an emotional dimension, experienced as the "voices" having a primarily negative, or sinister, emotional tone. I will review cognitive, imaging, and neurochemistry data related to these dimensions, primarily the first two. The reviewed data are summarized in a model that sees auditory hallucinations as initiated from temporal lobe neuronal hyper-activation that draws attentional focus inward, and which is not inhibited due to frontal lobe hypo-activation. It is further suggested that this is maintained through abnormal glutamate and possibly gamma-aminobutyric acid (GABA) transmitter mediation, which could point towards new pathways for pharmacological treatment. A final section discusses new methods of acquiring quantitative data on the phenomenology and subjective experience of auditory hallucination that go beyond standard interview questionnaires, by suggesting an iPhone/iPod app.

  11. Options for Auditory Training for Adults with Hearing Loss.

    PubMed

    Olson, Anne D

    2015-11-01

    Hearing aid devices alone do not adequately compensate for sensory losses despite significant technological advances in digital technology. Overall use rates of amplification among adults with hearing loss remain low, and overall satisfaction and performance in noise can be improved. Although improved technology may partially address some listening problems, auditory training may be another alternative to improve speech recognition in noise and satisfaction with devices. The literature underlying auditory plasticity following placement of sensory devices suggests that additional auditory training may be needed for reorganization of the brain to occur. Furthermore, training may be required to acquire optimal performance from devices. Several auditory training programs that are readily accessible for adults with hearing loss, hearing aids, or cochlear implants are described. Programs that can be accessed via Web-based formats and smartphone technology are reviewed. A summary table is provided for easy access to programs with descriptions of features that allow hearing health care providers to assist clients in selecting the most appropriate auditory training program to fit their needs. PMID:27587915

  12. Options for Auditory Training for Adults with Hearing Loss

    PubMed Central

    Olson, Anne D.

    2015-01-01

    Hearing aid devices alone do not adequately compensate for sensory losses despite significant technological advances in digital technology. Overall use rates of amplification among adults with hearing loss remain low, and overall satisfaction and performance in noise can be improved. Although improved technology may partially address some listening problems, auditory training may be another alternative to improve speech recognition in noise and satisfaction with devices. The literature underlying auditory plasticity following placement of sensory devices suggests that additional auditory training may be needed for reorganization of the brain to occur. Furthermore, training may be required to acquire optimal performance from devices. Several auditory training programs that are readily accessible for adults with hearing loss, hearing aids, or cochlear implants are described. Programs that can be accessed via Web-based formats and smartphone technology are reviewed. A summary table is provided for easy access to programs with descriptions of features that allow hearing health care providers to assist clients in selecting the most appropriate auditory training program to fit their needs. PMID:27587915

  13. Coupling output of multichannel high power microwaves

    SciTech Connect

    Li Guolin; Shu Ting; Yuan Chengwei; Zhang Jun; Yang Jianhua; Jin Zhenxing; Yin Yi; Wu Dapeng; Zhu Jun; Ren Heming; Yang Jie

    2010-12-15

    The coupling output of multichannel high power microwaves is a promising technique for the development of high power microwave technologies, as it can enhance the output capacities of presently studied devices. Based on investigations of the spatial filtering and waveguide filtering methods, a hybrid filtering method is proposed for the coupling output of multichannel high power microwaves. As an example, a specific structure is designed for the coupling output of S/X/X band three-channel high power microwaves and investigated with the hybrid filtering method. In the experiments, a 4 GW pulse of X band beat waves and a 1.8 GW pulse of S band microwaves are obtained.

  15. Multichannel cochlear implants in partially ossified cochleas.

    PubMed

    Balkany, T; Gantz, B; Nadol, J B

    1988-01-01

    Deposition of bone within the fluid spaces of the cochlea is encountered commonly in cochlear implant candidates and previously has been considered a relative contraindication to the use of multichannel intracochlear electrodes. This contraindication has been based on possible mechanical difficulty with electrode insertion as well as uncertainty about the potential benefit of the multichannel device in the patient. Fifteen profoundly deaf patients with partial ossification of the basal turn of the cochlea received implants with long intracochlear electrodes (11, Nucleus; 1, University of California at San Francisco/Storz; and 3, Symbion/Inneraid). In 11 cases, ossification had been predicted preoperatively by computed tomographic scan. Electrodes were completely inserted in 14 patients, and partial insertion was accomplished in one patient. All patients currently are using their devices and nine of 12 postlingually deaf patients have achieved some degree of open-set speech discrimination. This series demonstrates that in experienced hands, insertion of long multichannel electrodes into partially ossified cochleas is possible and that results are similar to those achieved in patients who have nonossified cochleas. PMID:3140705

  16. State-space models for multichannel detection

    NASA Astrophysics Data System (ADS)

    Roman, J. R.; Davis, D. W.

    1993-07-01

    In multichannel identification and detection (or model-based multichannel detection) problems, the parameters of a model are identified from the observed channel process, and the identified model is used to facilitate the detection of a signal in the observed process. A model-based multichannel detection algorithm was developed in the context of an innovations-based detection algorithm (IBDA) formulation for surveillance radar system applications. The state-space model class was adopted to model the vector channel process because it is more general than the time series model class used in most analyses to date. An IBDA methodology was developed based on the canonical correlations algorithm, which offers performance advantages over alternative techniques for state-space model identification. A computer simulation was developed to validate the methodology and the algorithm, and to carry out performance assessments. Simulation results indicate that the algorithm is capable of discriminating between the null hypothesis (clutter plus noise) and the alternative hypothesis (signal plus clutter plus noise). In summary, the applicability of the approach to radar system problems was established.

  17. Auditory hedonic phenotypes in dementia: A behavioural and neuroanatomical analysis.

    PubMed

    Fletcher, Phillip D; Downey, Laura E; Golden, Hannah L; Clark, Camilla N; Slattery, Catherine F; Paterson, Ross W; Schott, Jonathan M; Rohrer, Jonathan D; Rossor, Martin N; Warren, Jason D

    2015-06-01

    Patients with dementia may exhibit abnormally altered liking for environmental sounds and music, but such altered auditory hedonic responses have not been studied systematically. Here we addressed this issue in a cohort of 73 patients representing major canonical dementia syndromes (behavioural variant frontotemporal dementia (bvFTD), semantic dementia (SD), progressive nonfluent aphasia (PNFA), and amnestic Alzheimer's disease (AD)) using a semi-structured caregiver behavioural questionnaire and voxel-based morphometry (VBM) of patients' brain MR images. Behavioural responses signalling abnormal aversion to environmental sounds, aversion to music or heightened pleasure in music ('musicophilia') occurred in around half of the cohort but showed clear syndromic and genetic segregation, occurring in most patients with bvFTD but infrequently in PNFA, and more commonly in association with MAPT than C9orf72 mutations. Aversion to sounds was the exclusive auditory phenotype in AD, whereas more complex phenotypes including musicophilia were common in bvFTD and SD. Auditory hedonic alterations correlated with grey matter loss in a common, distributed, right-lateralised network including antero-mesial temporal lobe, insula, anterior cingulate and nucleus accumbens. Our findings suggest that abnormalities of auditory hedonic processing are a significant issue in common dementias. Sounds may constitute a novel probe of brain mechanisms for emotional salience coding that are targeted by neurodegenerative disease.

  18. Morphology and physiology of auditory and vibratory ascending interneurones in bushcrickets.

    PubMed

    Nebeling, B

    2000-02-15

    Auditory/vibratory interneurones of the bushcricket species Decticus albifrons and Decticus verrucivorus were studied with intracellular dye injection and electrophysiology. The morphologies of five physiologically characterised auditory/vibratory interneurones are shown in the brain, subesophageal and prothoracic ganglia. Based on their physiology, these five interneurones fall into three groups: the purely auditory sound-sensitive S-neurones, the purely vibratory V-neurones, and the bimodal vibrosensitive VS-neurones. The S1-neurones respond phasically to airborne sound, whereas the S4-neurones exhibit a tonic spike pattern. Their somata are located in the prothoracic ganglion, and their ascending axons carry dendrites in the prothoracic ganglion, the subesophageal ganglion, and the brain. The VS3-neurone, which responds to both auditory and vibratory stimuli in a tonic manner, has an axon traversing the brain, the subesophageal ganglion and the prothoracic ganglion, although its dendrites lie only in the brain. The V1- and V2-neurones respond to vibratory stimulation of the fore- and midlegs with a tonic discharge pattern, and our data show that they receive inhibitory input suppressing their spontaneous activity. Their axons traverse the prothoracic and subesophageal ganglia and terminate in the brain with dendritic branching. Thus the auditory S-neurones have dendritic arborizations in all three ganglia (prothoracic, subesophageal, and brain), whereas the vibratory (V) and vibrosensitive (VS) neurones have dendrites almost exclusively in the brain. The dendrites of the S-neurones are also more extensive than those of the V- and VS-neurones, which terminate more laterally in the brain. On the basis of an interspecific comparison of the identified auditory interneurones, the S1-neurone appears homologous to the TN1 of crickets and other bushcrickets, and the S4-neurone corresponds to the AN2. J. Exp. Zool. 286:219-230, 2000.

  19. Auditory hallucinations treated by radio headphones.

    PubMed

    Feder, R

    1982-09-01

    A young man with chronic auditory hallucinations was treated according to the principle that increasing external auditory stimulation decreases the likelihood of auditory hallucinations. Listening to a radio through stereo headphones in conditions of low auditory stimulation eliminated the patient's hallucinations.

  20. Language Development Activities through the Auditory Channel.

    ERIC Educational Resources Information Center

    Fitzmaurice, Peggy, Comp.; And Others

    Presented primarily for use with educable mentally retarded and learning disabled children are approximately 100 activities for language development through the auditory channel. Activities are grouped under the following three areas: receptive skills (auditory decoding, auditory memory, and auditory discrimination); expressive skills (auditory…

  1. Asymmetry in primary auditory cortex activity in tinnitus patients and controls.

    PubMed

    Geven, L I; de Kleine, E; Willemsen, A T M; van Dijk, P

    2014-01-01

    Tinnitus is a bothersome phantom sound percept, and its neural correlates have not yet been disentangled. Previously published papers, using [(18)F]-fluoro-deoxyglucose positron emission tomography (FDG-PET), have suggested increased metabolism in the left primary auditory cortex in tinnitus patients. This unilateral hyperactivity has been used as a target in localized treatments such as transcranial magnetic stimulation. The purpose of the current study was to test whether left-sided hyperactivity in the auditory cortex is specific to tinnitus or is a general characteristic of the auditory system unrelated to tinnitus. Therefore, FDG-PET was used to measure brain metabolism in 20 tinnitus patients and to compare their results to those of 19 control subjects without tinnitus. Contrary to our expectation, there was no hyperactivity associated with tinnitus. The activity in the left primary auditory cortex was higher than in the right primary auditory cortex, but this asymmetry was present in both tinnitus patients and control subjects. In contrast, the lateralization in secondary auditory cortex was opposite, with higher activation in the right hemisphere. These data show that hemispheric asymmetries in the metabolic resting activity of the auditory cortex exist, but that they are not associated with tinnitus and are instead a characteristic of the normal brain. PMID:24161276

  2. Cortical auditory disorders: clinical and psychoacoustic features.

    PubMed Central

    Mendez, M F; Geehan, G R

    1988-01-01

    The symptoms of two patients with bilateral cortical auditory lesions evolved from cortical deafness to other auditory syndromes: generalised auditory agnosia, amusia and/or pure word deafness, and a residual impairment of temporal sequencing. On investigation, both had dysacusis, absent middle latency evoked responses, acoustic errors in sound recognition and matching, inconsistent auditory behaviours, and similarly disturbed psychoacoustic discrimination tasks. These findings indicate that the different clinical syndromes caused by cortical auditory lesions form a spectrum of related auditory processing disorders. Differences between syndromes may depend on the degree of involvement of a primary cortical processing system, the more diffuse accessory system, and possibly the efferent auditory system. PMID:2450968

  3. Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans

    PubMed Central

    Hattori, Yuko; Tomonaga, Masaki; Matsuzawa, Tetsuro

    2015-01-01

    Humans tend to spontaneously align their movements in response to visual (e.g., swinging pendulum) and auditory rhythms (e.g., hearing music while walking). Particularly in the case of the response to auditory rhythms, neuroscientific research has indicated that motor resources are also recruited while perceiving an auditory rhythm (or regular pulse), suggesting a tight link between the auditory and motor systems in the human brain. However, the evolutionary origin of spontaneous responses to auditory rhythms is unclear. Here, we report that chimpanzees and humans show a similar distractor effect in perceiving isochronous rhythms during rhythmic movement. We used isochronous auditory rhythms as distractor stimuli during self-paced alternate tapping of two keys of an electronic keyboard by humans and chimpanzees. When the tempo was similar to their spontaneous motor tempo, tapping onset was influenced by intermittent entrainment to auditory rhythms. Although this effect itself is not an advanced rhythmic ability such as dancing or singing, our results suggest that, to some extent, the biological foundation for spontaneous responses to auditory rhythms was already deeply rooted in the common ancestor of chimpanzees and humans, 6 million years ago. This also suggests the possibility of a common attentional mechanism, as proposed by the dynamic attending theory, underlying the effect of perceiving external rhythms on motor movement. PMID:26132703

  4. Effects of Background Music on Objective and Subjective Performance Measures in an Auditory BCI

    PubMed Central

    Zhou, Sijie; Allison, Brendan Z.; Kübler, Andrea; Cichocki, Andrzej; Wang, Xingyu; Jin, Jing

    2016-01-01

    Several studies have explored brain computer interface (BCI) systems based on auditory stimuli, which could help patients with visual impairments. Usability and user satisfaction are important considerations in any BCI. Although background music can influence emotion and performance in other task environments, and many users may wish to listen to music while using a BCI, auditory and other BCIs are typically studied without background music. Some work has explored the possibility of using polyphonic music in auditory BCI systems. However, this approach requires users with good musical skills, and has not been explored in online experiments. Our hypothesis was that an auditory BCI with background music would be preferred by subjects over a similar BCI without background music, without any difference in BCI performance. We introduce a simple paradigm (which does not require musical skill) using percussion instrument sound stimuli and background music, and evaluated it in both offline and online experiments. The results showed that subjects preferred the auditory BCI with background music. Different performance measures did not reveal any significant performance effect when comparing background music vs. no background music. Since the addition of background music does not impair BCI performance but is preferred by users, auditory (and perhaps other) BCIs should consider including it. Our study also indicates that auditory BCIs can be effective even if the auditory channel is simultaneously otherwise engaged. PMID:27790111

  6. Effects of chronic stress on the auditory system and fear learning: an evolutionary approach.

    PubMed

    Dagnino-Subiabre, Alexies

    2013-01-01

    Stress is a complex biological reaction common to all living organisms that allows them to adapt to their environments. Chronic stress alters the dendritic architecture and function of the limbic brain areas that affect memory, learning, and emotional processing. This review summarizes our research on the effects of chronic stress on the auditory system, providing the details of how we developed the main hypotheses that currently guide our research. The aims of our studies are to (1) determine how chronic stress impairs the dendritic morphology of the main nuclei of the rat auditory system, the inferior colliculus (auditory mesencephalon), the medial geniculate nucleus (auditory thalamus), and the primary auditory cortex; (2) correlate the anatomic alterations with the impairments of auditory fear learning; and (3) investigate how the stress-induced alterations in the rat limbic system may spread to nonlimbic areas, affecting specific sensory systems, such as the auditory and olfactory systems, and complex cognitive functions, such as auditory attention. Finally, this article gives a new evolutionary approach to understanding the neurobiology of stress and the stress-related disorders.

  7. Dynamic auditory processing, musical experience and language development.

    PubMed

    Tallal, Paula; Gaab, Nadine

    2006-07-01

    Children with language-learning impairments (LLI) form a heterogeneous population with the majority having both spoken and written language deficits as well as sensorimotor deficits, specifically those related to dynamic processing. Research has focused on whether or not sensorimotor deficits, specifically auditory spectrotemporal processing deficits, cause phonological deficit, leading to language and reading impairments. New trends aimed at resolving this question include prospective longitudinal studies of genetically at-risk infants, electrophysiological and neuroimaging studies, and studies aimed at evaluating the effects of auditory training (including musical training) on brain organization for language. Better understanding of the origins of developmental LLI will advance our understanding of the neurobiological mechanisms underlying individual differences in language development and lead to more effective educational and intervention strategies. This review is part of the INMED/TINS special issue "Nature and nurture in brain development and neurological disorders", based on presentations at the annual INMED/TINS symposium (http://inmednet.com/).

  8. The cortical language circuit: from auditory perception to sentence comprehension.

    PubMed

    Friederici, Angela D

    2012-05-01

    Over the years, a large body of work on the brain basis of language comprehension has accumulated, paving the way for the formulation of a comprehensive model. The model proposed here describes the functional neuroanatomy of the different processing steps from auditory perception to comprehension as located in different gray matter brain regions. It also specifies the information flow between these regions, taking into account white matter fiber tract connections. Bottom-up, input-driven processes proceeding from the auditory cortex to the anterior superior temporal cortex and from there to the prefrontal cortex, as well as top-down, controlled and predictive processes from the prefrontal cortex back to the temporal cortex are proposed to constitute the cortical language circuit.

  9. Auditory evoked field measurement using magneto-impedance sensors

    NASA Astrophysics Data System (ADS)

    Wang, K.; Tajima, S.; Song, D.; Hamada, N.; Cai, C.; Uchiyama, T.

    2015-05-01

    The magnetic field of the human brain is extremely weak, and it is usually measured and monitored by magnetoencephalography using superconducting quantum interference devices (SQUIDs). In this study, in order to measure the weak magnetic field of the brain, we constructed a magneto-impedance sensor (MI sensor) system that can cancel out background noise without any magnetic shield. Based on our previous studies of brain wave measurements, we used two MI sensors in this system for monitoring both cerebral hemispheres. We recorded and compared the auditory evoked field signals of the subject, including the N100 (or N1) and the P300 (or P3) brain waves. The results suggest that the MI sensor can be applied to brain activity measurement.

  11. Auditory-olfactory synesthesia coexisting with auditory-visual synesthesia.

    PubMed

    Jackson, Thomas E; Sandramouli, Soupramanien

    2012-09-01

    Synesthesia is an unusual condition in which stimulation of one sensory modality causes an experience in another sensory modality or when a sensation in one sensory modality causes another sensation within the same modality. We describe a previously unreported association of auditory-olfactory synesthesia coexisting with auditory-visual synesthesia. Given that many types of synesthesias involve vision, it is important that the clinician provide these patients with the necessary information and support that is available.

  12. Silent music reading: auditory imagery and visuotonal modality transfer in singers and non-singers.

    PubMed

    Hoppe, Christian; Splittstößer, Christoph; Fliessbach, Klaus; Trautner, Peter; Elger, Christian E; Weber, Bernd

    2014-11-01

    In daily life, responses are often facilitated by anticipatory imagery of expected targets which are announced by associated stimuli from different sensory modalities. Silent music reading represents an intriguing case of visuotonal modality transfer in working memory as it induces highly defined auditory imagery on the basis of presented visuospatial information (i.e. musical notes). Using functional MRI and a delayed sequence matching-to-sample paradigm, we compared brain activations during retention intervals (10s) of visual (VV) or tonal (TT) unimodal maintenance versus visuospatial-to-tonal modality transfer (VT) tasks. Visual or tonal sequences were comprised of six elements, white squares or tones, which were low, middle, or high regarding vertical screen position or pitch, respectively (presentation duration: 1.5s). For the cross-modal condition (VT, session 3), the visuospatial elements from condition VV (session 1) were re-defined as low, middle or high "notes" indicating low, middle or high tones from condition TT (session 2), respectively, and subjects had to match tonal sequences (probe) to previously presented note sequences. Tasks alternately had low or high cognitive load. To evaluate possible effects of music reading expertise, 15 singers and 15 non-musicians were included. Scanner task performance was excellent in both groups. Despite identity of applied visuospatial stimuli, visuotonal modality transfer versus visual maintenance (VT>VV) induced "inhibition" of visual brain areas and activation of primary and higher auditory brain areas which exceeded auditory activation elicited by tonal stimulation (VT>TT). This transfer-related visual-to-auditory activation shift occurred in both groups but was more pronounced in experts. Frontoparietal areas were activated by higher cognitive load but not by modality transfer. The auditory brain showed a potential to anticipate expected auditory target stimuli on the basis of non-auditory information and

  13. Hemispheric competence for auditory spatial representation.

    PubMed

    Spierer, Lucas; Bellmann-Thiran, Anne; Maeder, Philippe; Murray, Micah M; Clarke, Stephanie

    2009-07-01

    Sound localization relies on the analysis of interaural time and intensity differences, as well as attenuation patterns by the outer ear. We investigated the relative contributions of interaural time and intensity difference cues to sound localization by testing 60 healthy subjects: 25 with focal left and 25 with focal right hemispheric brain damage. Group and single-case behavioural analyses, as well as anatomo-clinical correlations, confirmed that deficits were more frequent and much more severe after right than left hemispheric lesions and for the processing of interaural time than intensity difference cues. For spatial processing based on interaural time difference cues, different error types were evident in the individual data. Deficits in discriminating between neighbouring positions occurred in both hemispaces after focal right hemispheric brain damage, but were restricted to the contralesional hemispace after focal left hemispheric brain damage. Alloacusis (perceptual shifts across the midline) occurred only after focal right hemispheric brain damage and was associated with minor or severe deficits in position discrimination. During spatial processing based on interaural intensity cues, deficits were less severe in the right hemispheric brain damage than left hemispheric brain damage group and no alloacusis occurred. These results, matched to anatomical data, suggest the existence of a binaural sound localization system predominantly based on interaural time difference cues and primarily supported by the right hemisphere. More generally, our data suggest that two distinct mechanisms contribute to: (i) the precise computation of spatial coordinates allowing spatial comparison within the contralateral hemispace for the left hemisphere and the whole space for the right hemisphere; and (ii) the building up of global auditory spatial representations in right temporo-parietal cortices.
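
    For orientation, the magnitude of the interaural time difference cue studied here can be approximated with the classical Woodworth spherical-head formula, ITD = (a/c)(θ + sin θ). This is a standard textbook model, not one used in the study, and the head radius below is an assumed value.

```python
import numpy as np

# Woodworth spherical-head approximation of the interaural time difference
# (textbook model; the head radius is an assumed value, not from the study):
#     ITD(theta) = (a / c) * (theta + sin(theta))  for azimuth theta.
a = 0.0875           # head radius in metres (assumption)
c = 343.0            # speed of sound in air, m/s

def itd_seconds(azimuth_deg):
    theta = np.radians(azimuth_deg)
    return (a / c) * (theta + np.sin(theta))

for az in (0, 30, 60, 90):
    print(f"{az:3d} deg -> {itd_seconds(az) * 1e6:6.1f} microseconds")
```

    The cue is zero at the midline and reaches roughly 0.65 ms at 90 degrees azimuth for a human-sized head, the upper end of the physiological range.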

  14. EFFECTS OF NEONATAL PARTIAL DEAFNESS AND CHRONIC INTRACOCHLEAR ELECTRICAL STIMULATION ON AUDITORY AND ELECTRICAL RESPONSE CHARACTERISTICS IN PRIMARY AUDITORY CORTEX

    PubMed Central

    Fallon, James B; Shepherd, Robert K.; Brown, Mel; Irvine, Dexter, R. F.

    2009-01-01

    The use of cochlear implants in patients with severe hearing losses but residual low-frequency hearing raises questions concerning the effects of chronic intracochlear electrical stimulation (ICES) on cortical responses to auditory and electrical stimuli. We investigated these questions by studying responses to tonal and electrical stimuli in the primary auditory cortex (AI) of two groups of neonatally deafened cats with residual high-threshold, low-frequency hearing. One group was implanted with a multi-channel intracochlear electrode at eight weeks of age and received chronic ICES for up to nine months before cortical recording. Cats in the other group were implanted as adults, immediately prior to cortical recording. In all cats in both groups, multi-neuron responses throughout the rostro-caudal extent of AI had low characteristic frequencies (CFs), in the frequency range of the residual hearing, and high thresholds. Threshold and minimum latency at CF did not differ between the groups, but in the chronic ICES animals there was a higher proportion of electrically, but not acoustically, excited recording sites. Electrical response thresholds were higher and latencies shorter in the chronically stimulated animals. Thus, chronic implantation and ICES affected the extent of AI that could be activated by acoustic stimuli and resulted in changes in electrical response characteristics. PMID:19703532

  15. Auditory Processing Disorder in Children

    MedlinePlus

  16. Leiomyoma of External Auditory Canal.

    PubMed

    George, M V; Puthiyapurayil, Jamsheeda

    2016-09-01

    This article reports a case of piloleiomyoma of the external auditory canal, the 7th reported leiomyoma of the external auditory canal and the 2nd arising from the arrectores pilorum muscles; all five other cases were angioleiomyomas arising from blood vessels. A 52-year-old male presented with a mass in the right external auditory canal and decreased hearing of 6 months' duration. Tumor excision was performed via an endaural approach. The histopathological examination report was leiomyoma. Leiomyoma occurs extremely rarely in the external auditory canal because of the scarcity of smooth muscle there, so it should be considered a very rare differential diagnosis for any tumor or polyp in the ear canal. PMID:27508144

  17. Classroom Demonstrations of Auditory Perception.

    ERIC Educational Resources Information Center

    Haws, LaDawn; Oppy, Brian J.

    2002-01-01

    Presents activities to help students gain understanding about auditory perception. Describes demonstrations that cover topics, such as sound localization, wave cancellation, frequency/pitch variation, and the influence of media on sound propagation. (CMK)

  18. Processing and Analysis of Multichannel Extracellular Neuronal Signals: State-of-the-Art and Challenges

    PubMed Central

    Mahmud, Mufti; Vassanelli, Stefano

    2016-01-01

    In recent years, multichannel neuronal signal acquisition systems have allowed scientists to address research questions that were otherwise impossible. They act as a powerful means to study brain (dys)function in in-vivo and in-vitro animal models. Typically, each session of electrophysiological experiments with a multichannel data acquisition system generates a large amount of raw data. For example, a 128-channel signal acquisition system with 16-bit A/D conversion and a 20 kHz sampling rate will generate approximately 17 GB of data per hour (uncompressed). This poses the important and challenging problem of inferring conclusions from the large amounts of acquired data. Thus, automated signal processing and analysis tools are becoming a key component in neuroscience research, facilitating the extraction of relevant information from neuronal recordings in a reasonable time. The purpose of this review is to introduce the reader to the current state-of-the-art of open-source packages for (semi)automated processing and analysis of multichannel extracellular neuronal signals (i.e., neuronal spikes, local field potentials, electroencephalogram, etc.), and the existing Neuroinformatics infrastructure for tool and data sharing. The review concludes by pinpointing some major challenges that are being faced, including the development of novel benchmarking techniques, cloud-based distributed processing and analysis tools, and novel means to share and standardize data. PMID:27313507
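
    The roughly 17 GB per hour quoted above follows directly from the stated acquisition parameters; a quick check of the arithmetic:

```python
# Raw data rate implied by the review's example system:
# 128 channels, 16-bit A/D conversion, 20 kHz sampling rate, uncompressed.
channels = 128
bytes_per_sample = 16 // 8            # 16 bits -> 2 bytes
sampling_rate_hz = 20_000
seconds_per_hour = 3_600

bytes_per_hour = channels * bytes_per_sample * sampling_rate_hz * seconds_per_hour
gib_per_hour = bytes_per_hour / 2**30  # binary gigabytes (GiB)

print(f"{bytes_per_hour:,} bytes/hour = {gib_per_hour:.1f} GiB/hour")
```

    This works out to 18,432,000,000 bytes per hour, about 17.2 GiB, matching the approximately 17 GB figure in the abstract.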

  19. The algorithmic complexity of multichannel EEGs is sensitive to changes in behavior.

    PubMed

    Watanabe, T A A; Cellucci, C J; Kohegyi, E; Bashore, T R; Josiassen, R C; Greenbaun, N N; Rapp, P E

    2003-01-01

    Symbolic measures of complexity provide a quantitative characterization of the sequential structure of symbol sequences. Promising results from the application of these methods to the analysis of electroencephalographic (EEG) and event-related brain potential (ERP) activity have been reported. Symbolic measures used thus far have two limitations, however. First, because the value of complexity increases with the length of the message, it is difficult to compare signals of different epoch lengths. Second, these symbolic measures do not generalize easily to the multichannel case. We address these issues in studies in which both single and multichannel EEGs were analyzed using measures of signal complexity and algorithmic redundancy, the latter being defined as a sequence-sensitive generalization of Shannon's redundancy. Using a binary partition of EEG activity about the median, redundancy was shown to be insensitive to the size of the data set while being sensitive to changes in the subject's behavioral state (eyes open vs. eyes closed). The covariance complexity, calculated from the singular value spectrum of a multichannel signal, was also found to be sensitive to changes in behavioral state. Statistical separations between the eyes open and eyes closed conditions were found to decrease following removal of the 8- to 12-Hz content in the EEG, but still remained statistically significant. Use of symbolic measures in multivariate signal classification is described.
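
    The binary symbolization step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the surrogate signal, the word length, and the block-entropy normalization are all arbitrary choices made here for demonstration.

```python
import numpy as np

# Minimal sketch of symbolic redundancy (illustrative; not the authors'
# exact procedure). A signal is binarized about its median, consecutive
# samples are grouped into words of length L, and Shannon redundancy is
# computed as 1 - H/H_max over the empirical word distribution.
rng = np.random.default_rng(0)
eeg = rng.normal(size=10_000)        # surrogate for one EEG channel

symbols = (eeg > np.median(eeg)).astype(int)   # binary partition about median

L = 3                                # word length (arbitrary choice)
words = np.array([symbols[i:i + L] for i in range(len(symbols) - L + 1)])
_, counts = np.unique(words, axis=0, return_counts=True)
p = counts / counts.sum()

H = -np.sum(p * np.log2(p))          # block entropy, bits per word
H_max = float(L)                     # maximum entropy for a binary alphabet
redundancy = 1.0 - H / H_max

print(f"entropy = {H:.4f} bits/word, redundancy = {redundancy:.4f}")
# Random data is nearly incompressible, so redundancy comes out close to 0;
# structured behavioral states (e.g. eyes closed) would raise it.
```

    Because redundancy is normalized by the maximum entropy for the word length, it is insensitive to the size of the data set, which is the property the abstract highlights.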

  20. Processing and Analysis of Multichannel Extracellular Neuronal Signals: State-of-the-Art and Challenges.

    PubMed

    Mahmud, Mufti; Vassanelli, Stefano

    2016-01-01

    In recent years, multichannel neuronal signal acquisition systems have allowed scientists to address research questions that were previously out of reach. They are a powerful means to study brain (dys)function in in vivo and in vitro animal models. Typically, each session of electrophysiological experiments with a multichannel data acquisition system generates a large amount of raw data. For example, a 128-channel signal acquisition system with 16-bit A/D conversion and a 20 kHz sampling rate will generate approximately 17 GB of data per hour (uncompressed). This poses the important and challenging problem of drawing conclusions from such large amounts of acquired data. Thus, automated signal processing and analysis tools are becoming a key component of neuroscience research, facilitating extraction of relevant information from neuronal recordings in a reasonable time. The purpose of this review is to introduce the reader to the current state of the art in open-source packages for (semi)automated processing and analysis of multichannel extracellular neuronal signals (e.g., neuronal spikes, local field potentials, electroencephalogram), and the existing neuroinformatics infrastructure for tool and data sharing. The review concludes by pinpointing some major open challenges, including the development of novel benchmarking techniques, cloud-based distributed processing and analysis tools, and novel means to share and standardize data. PMID:27313507
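The quoted data rate can be verified with back-of-envelope arithmetic (a sketch assuming each 16-bit sample is stored uncompressed as 2 bytes):

```python
# 128 channels x 2 bytes (16-bit A/D) x 20 kHz sampling, uncompressed.
channels, bytes_per_sample, rate_hz = 128, 2, 20_000
bytes_per_second = channels * bytes_per_sample * rate_hz   # 5,120,000 B/s
gib_per_hour = bytes_per_second * 3600 / 1024**3
print(round(gib_per_hour, 1))   # ~17.2 GiB per hour
```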

  1. Multi-Channel neurodegenerative pattern analysis and its application in Alzheimer's disease characterization

    PubMed Central

    Liu, Sidong; Cai, Weidong; Wen, Lingfeng; Feng, David Dagan; Pujol, Sonia; Kikinis, Ron; Fulham, Michael J.; Eberl, Stefan; ADNI

    2014-01-01

    Neuroimaging has played an important role in the non-invasive diagnosis and differentiation of neurodegenerative disorders, such as Alzheimer's disease and Mild Cognitive Impairment. Various features have been extracted from the neuroimaging data to characterize the disorders, and these features can be roughly divided into global and local features. Recent studies show a tendency toward using local features in disease characterization, since they are capable of identifying the subtle disease-specific patterns associated with the effects of the disease on the human brain. However, problems arise if the neuroimaging database involves multiple disorders or progressive disorders, as disorders of different types or at different progression stages might exhibit different degenerative patterns. It is difficult for researchers to reach consensus on which brain regions could effectively distinguish multiple disorders or multiple progression stages. In this study we proposed a Multi-Channel pattern analysis approach to identify the most discriminative local brain metabolism features for neurodegenerative disorder characterization. We compared our method to global methods and to other pattern analysis methods based on clinical expertise or statistical tests. The preliminary results suggested that the proposed Multi-Channel pattern analysis method outperformed the other approaches in Alzheimer's disease characterization, while providing important insights into the underlying pathology of Alzheimer's disease and Mild Cognitive Impairment. PMID:24933011

  3. Morphometric changes in subcortical structures of the central auditory pathway in mice with bilateral nodular heterotopia.

    PubMed

    Truong, Dongnhu T; Rendall, Amanda R; Rosen, Glenn D; Fitch, R Holly

    2015-04-01

    Malformations of cortical development (MCD) have been observed in human reading and language impaired populations. Injury-induced MCD in rodent models of reading disability show morphological changes in the auditory thalamic nucleus (medial geniculate nucleus; MGN) and auditory processing impairments, thus suggesting a link between MCD, MGN, and auditory processing behavior. Previous neuroanatomical examination of a BXD29 recombinant inbred strain (BXD29-Tlr4(lps-2J)/J) revealed MCD consisting of bilateral subcortical nodular heterotopia with partial callosal agenesis. Subsequent behavioral characterization showed a severe impairment in auditory processing-a deficient behavioral phenotype seen across both male and female BXD29-Tlr4(lps-2J)/J mice. In the present study we expanded upon the neuroanatomical findings in the BXD29-Tlr4(lps-2J)/J mutant mouse by investigating whether subcortical changes in cellular morphology are present in neural structures critical to central auditory processing (MGN, and the ventral and dorsal subdivisions of the cochlear nucleus; VCN and DCN, respectively). Stereological assessment of brain tissue of male and female BXD29-Tlr4(lps-2J)/J mice previously tested on an auditory processing battery revealed overall smaller neurons in the MGN of BXD29-Tlr4(lps-2J)/J mutant mice in comparison to BXD29/Ty coisogenic controls, regardless of sex. Interestingly, examination of the VCN and DCN revealed sexually dimorphic changes in neuronal size, with a distribution shift toward larger neurons in female BXD29-Tlr4(lps-2J)/J brains. These effects were not seen in males. Together, the combined data set supports and further expands the observed co-occurrence of MCD, auditory processing impairments, and changes in subcortical anatomy of the central auditory pathway. The current stereological findings also highlight sex differences in neuroanatomical presentation in the presence of a common auditory behavioral phenotype.

  4. Asynchronous data readout system for multichannel ASIC

    NASA Astrophysics Data System (ADS)

    Ivanov, P. Y.; Atkin, E. V.

    2016-02-01

    The data readout system of a multichannel, data-driven ASIC requiring high-speed (320 Mb/s) output data serialization is described. Its structure, based on a limited number of FIFO blocks, provides lossless data transfer. The solution has been realized as a separate test IP block in a prototyped 8-channel ASIC intended for the muon chamber of the CBM experiment at FAIR. The block was developed for the UMC 0.18 μm MMRF CMOS process and prototyped via Europractice. The main parameters of the chip are given.

  5. Simple multifunction discriminator for multichannel triggers

    SciTech Connect

    Maier, M.R.

    1982-10-01

    A simple version of a multifunction timing discriminator using only two integrated circuits is presented. It can be configured as a leading-edge, constant-fraction, zero-crossing, or dual-threshold timing discriminator. Since so few parts are used, it is well suited for building multichannel timing discriminators. Two versions of this circuit are described: a quadruple multifunction discriminator and an octal constant fraction trigger. The different compromises made in these units are discussed. Results for walk and jitter obtained with these units are presented, and possible improvements are discussed.

  6. Multichannel linear predictive coding of color images

    NASA Astrophysics Data System (ADS)

    Maragos, P. A.; Mersereau, R. M.; Schafer, R. W.

    This paper reports on a preliminary study applying single-channel (scalar) and multichannel (vector) 2-D linear prediction to color image modeling and coding. The novel idea of a multi-input, single-output 2-D ADPCM coder is also introduced. The results of this study indicate that texture information in multispectral images can be represented by linear prediction coefficients or matrices, whereas the prediction error conveys edge information. Moreover, by using single-channel edge information, the investigators obtained, from original color images of 24 bits/pixel, reconstructed images of good quality at information rates of 1 bit/pixel or less.
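A minimal single-channel illustration of causal 2-D linear prediction: each pixel is predicted from three causal neighbours with least-squares coefficients. The three-neighbour support and the least-squares fit are illustrative choices, not the paper's exact predictor; the point is that smooth regions are predicted almost perfectly, so the residual carries the edge information mentioned above.

```python
import numpy as np

def causal_lp_2d(img):
    """Fit a scalar 2-D linear predictor
       x[i, j] ~ a*x[i, j-1] + b*x[i-1, j] + c*x[i-1, j-1]
    by least squares; return the coefficients and the prediction-error image."""
    x = np.asarray(img, dtype=float)
    target = x[1:, 1:].ravel()
    preds = np.stack([x[1:, :-1].ravel(),    # west neighbour
                      x[:-1, 1:].ravel(),    # north neighbour
                      x[:-1, :-1].ravel()],  # north-west neighbour
                     axis=1)
    coeffs, *_ = np.linalg.lstsq(preds, target, rcond=None)
    error = target - preds @ coeffs
    return coeffs, error.reshape(x.shape[0] - 1, x.shape[1] - 1)

# A smooth gradient is predicted almost perfectly, so the residual is ~0;
# in natural images the residual instead concentrates at edges.
img = np.add.outer(np.arange(32.0), np.arange(32.0))
coeffs, err = causal_lp_2d(img)
print(np.abs(err).max())
```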

  7. FOMA: A Fast Optical Multichannel Analyzer

    NASA Astrophysics Data System (ADS)

    Haskovec, J. S.; Bramson, G.; Brooks, N. H.; Perry, M.

    1989-12-01

    A Fast Optical Multichannel Analyzer (FOMA) was built for spectroscopic measurements with fast time resolution on the DIII-D tokamak. The FOMA utilizes a linear photodiode array (RETICON RL 1024 SA) as the detector sensor. An external recharge switch and ultrafast operational amplifiers permit a readout time per pixel of 300 ns. In conjunction with standard CAMAC digitizer and timing modules, a readout time of 500 microseconds is achieved for the full 1024-element array. Data acquired in bench tests and in actual spectroscopic measurements on the DIII-D tokamak are presented to illustrate the camera's capability.

  8. A low power Multi-Channel Analyzer

    SciTech Connect

    Anderson, G.A.; Brackenbush, L.W.

    1993-06-01

    The instrumentation used in nuclear spectroscopy is generally large, not portable, and power-hungry. Key components of these counting systems are the computer and the Multi-Channel Analyzer (MCA). To assist in performing measurements requiring portable systems, a small, very low power MCA has been developed at Pacific Northwest Laboratory (PNL). This MCA is interfaced with a Hewlett-Packard palmtop computer for portable applications. The MCA can also be connected to an IBM/PC for data storage and analysis. In addition, a real-time display mode allows the user to view spectra as they are collected.

  9. MULTI-CHANNEL PULSE HEIGHT ANALYZER

    DOEpatents

    Boyer, K.; Johnstone, C.W.

    1958-11-25

    An improved multi-channel pulse height analyzer is described, of the type that translates the amplitude of each pulse into a time-duration electrical quantity, which in turn controls the length of a train of pulses forwarded to a scaler. The final state of the scaler for any one train of pulses selects the appropriate channel in a magnetic memory, in which an additional count of one is placed. The improvement consists of a storage feature for holding a signal pulse so that, in many instances when two signal pulses occur in rapid succession, the second pulse is preserved and processed at a later time.
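The core MCA behavior, translating each pulse amplitude to a channel address and incrementing that channel's count, can be sketched in software. The channel count, full-scale range, and Gaussian test pulses below are illustrative assumptions; the patent's time-encoding and magnetic-memory details are omitted:

```python
import numpy as np

def mca_accumulate(pulse_heights, n_channels=256, full_scale=10.0):
    """Toy multi-channel analyzer: map each pulse amplitude to a channel
    number and accumulate counts, building a pulse-height spectrum."""
    channels = np.clip((pulse_heights / full_scale * n_channels).astype(int),
                       0, n_channels - 1)
    return np.bincount(channels, minlength=n_channels)

rng = np.random.default_rng(1)
pulses = rng.normal(5.0, 0.5, size=10_000)   # a single spectral line near 5 V
spectrum = mca_accumulate(pulses)
print(spectrum.argmax())   # peak channel near 5/10 * 256 = 128
```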

  10. Multichannel euv spectroscopy of high temperature plasmas

    SciTech Connect

    Fonck, R.J.

    1983-11-01

    Spectroscopy of magnetically confined high temperature plasmas in the visible through x-ray spectral ranges deals primarily with the study of impurity line radiation or continuum radiation. Detailed knowledge of the absolute intensities, temporal behavior, and spatial distributions of the emitted radiation is desired. As tokamak facilities become larger, more complex, and less accessible, there has been an increased emphasis on developing new instrumentation to provide such information in a minimum number of discharges. The availability of spatially imaging detectors for the vacuum ultraviolet region (especially the intensified photodiode array) has spurred the development of a variety of multichannel spectrometers for applications on tokamak facilities.

  11. Multichannel framework for singular quantum mechanics

    NASA Astrophysics Data System (ADS)

    Camblong, Horacio E.; Epele, Luis N.; Fanchiotti, Huner; García Canal, Carlos A.; Ordóñez, Carlos R.

    2014-01-01

    A multichannel S-matrix framework for singular quantum mechanics (SQM) subsumes the renormalization and self-adjoint extension methods and resolves its boundary-condition ambiguities. In addition to the standard channel accessible to a distant ("asymptotic") observer, one supplementary channel opens up at each coordinate singularity, where local outgoing and ingoing singularity waves coexist. The channels are linked by a fully unitary S-matrix, which governs all possible scenarios, including cases with an apparent nonunitary behavior as viewed from asymptotic distances.

  12. Multichannel image regularization using anisotropic geodesic filtering

    SciTech Connect

    Grazzini, Jacopo A

    2010-01-01

    This paper extends a recent image-dependent regularization approach aimed at edge-preserving smoothing. For that purpose, geodesic distances equipped with a Riemannian metric need to be estimated in local neighbourhoods. By deriving an appropriate metric from the gradient structure tensor, the associated geodesic paths are constrained to follow salient features in images. We then design a generalized anisotropic geodesic filter, incorporating not only a measure of edge strength, as in the original method, but also further directional information about the image structures. The proposed filter is particularly efficient at smoothing heterogeneous areas while preserving relevant structures in multichannel images.

  13. Biomedical Simulation Models of Human Auditory Processes

    NASA Technical Reports Server (NTRS)

    Bicak, Mehmet M. A.

    2012-01-01

    Detailed acoustic engineering models explore the noise propagation mechanisms associated with noise attenuation and with the transmission paths created when hearing protectors such as earplugs and headsets are used in high-noise environments. Biomedical finite element (FE) models are developed from volume computed tomography scan data, which provide explicit external ear, ear canal, middle-ear ossicular bone, and cochlea geometry. Results from these studies have enabled a greater understanding of hearing-protector-to-flesh dynamics as well as the prioritization of noise propagation mechanisms. Prioritization of noise mechanisms can form an essential framework for exploring new design principles and methods in both earplug and earcup applications. These models are currently being used in the development of a novel hearing protection evaluation system that can provide experimentally correlated psychoacoustic noise attenuation. Moreover, these FE models can be used to simulate the effects of blast-related impulse noise on human auditory mechanisms and brain tissue.

  14. Joint decorrelation, a versatile tool for multichannel data analysis.

    PubMed

    de Cheveigné, Alain; Parra, Lucas C

    2014-09-01

    We review a simple yet versatile approach for the analysis of multichannel data, focusing in particular on brain signals measured with EEG, MEG, ECoG, LFP or optical imaging. Sensors are combined linearly with weights that are chosen to provide optimal signal-to-noise ratio. Signal and noise can be variably defined to match the specific need, e.g. reproducibility over trials, frequency content, or differences between stimulus conditions. We demonstrate how the method can be used to remove power line or cardiac interference, enhance stimulus-evoked or stimulus-induced activity, isolate narrow-band cortical activity, and so on. The approach involves decorrelating both the original and filtered data by joint diagonalization of their covariance matrices. We trace its origins; offer an easy-to-understand explanation; review a range of applications; and chart failure scenarios that might lead to misleading results, in particular due to overfitting. In addition to its flexibility and effectiveness, a major appeal of the method is that it is easy to understand. PMID:24990357
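A minimal numerical sketch of the covariance-based approach described above: the whiten-then-diagonalize route below is one standard way to jointly diagonalize two covariance matrices, and the simulated 8-sensor mixture is an illustrative assumption (the paper's regularization and rank-truncation safeguards against overfitting are omitted):

```python
import numpy as np

def joint_decorrelation(X, X_noise):
    """Return eigenvalues and weights that jointly diagonalize the signal
    and noise covariance matrices, sorted so the first linear combination
    of sensors has the highest signal-to-noise ratio."""
    C_s, C_n = np.cov(X), np.cov(X_noise)
    d, U = np.linalg.eigh(C_n)
    whiten = U / np.sqrt(d)                  # C_n^{-1/2} up to rotation
    evals, V = np.linalg.eigh(whiten.T @ C_s @ whiten)
    order = np.argsort(evals)[::-1]
    return evals[order], whiten @ V[:, order]

rng = np.random.default_rng(0)
t = np.arange(2000) / 500.0
source = 3.0 * np.sin(2 * np.pi * 7.0 * t)   # 7 Hz component of interest
noise = rng.normal(size=(8, t.size))         # 8 sensors of background noise
X = rng.normal(size=(8, 1)) @ source[None, :] + noise
snr, W = joint_decorrelation(X, noise)
recovered = W[:, 0] @ X                      # best-SNR linear combination
```

The recovered component correlates strongly with the hidden source even though no single sensor does, which is the sense in which the linear weights provide an optimal signal-to-noise ratio.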

  15. External auditory osteoma.

    PubMed

    Carbone, Peter N; Nelson, Brenda L

    2012-06-01

    External auditory canal (EAC) osteomas are rare, benign bony neoplasms that occur in a wide range of patients. While chronic irritation and inflammation have been suggested as causal factors in several cases, significant data are lacking to support these suspicions. Symptoms are rare and can include hearing loss, vertigo, pain and tinnitus. Diagnosis is made based on a combination of clinical history and examination, radiographic imaging, and histopathology. Osteomas of the EAC are usually found incidentally and are unilateral and solitary. Computed tomography reveals a hyperdense, pedunculated mass arising from the tympanosquamous suture, lateral to the isthmus. Histopathologically, EAC osteomas are covered with periosteum and squamous epithelium, and consist of lamellated bone surrounding fibrovascular channels with minimal osteocytes. Osteomas have historically been compared and contrasted with exostoses of the EAC. While they share similarities, more often than not it is possible to distinguish the two bony neoplasms based on clinical history and radiographic studies. Debate remains in the medical literature as to whether basic histopathology can distinguish osteomas of the EAC from exostoses. Surgical excision is the standard treatment for EAC osteomas; however, close observation is considered acceptable in asymptomatic patients.

  16. The mismatch negativity (MMN) in basic research of central auditory processing: a review.

    PubMed

    Näätänen, R; Paavilainen, P; Rinne, T; Alho, K

    2007-12-01

    In the present article, basic research using the mismatch negativity (MMN), together with analogous results obtained using magnetoencephalography (MEG) and other brain-imaging technologies, is reviewed. This response is elicited by any discriminable change in auditory stimulation, but recent studies have extended the notion of the MMN even to higher-order cognitive processes such as those involving grammar and semantic meaning. Moreover, MMN data also show the presence of automatic intelligent processes, such as stimulus anticipation, at the level of auditory cortex. In addition, the MMN enables one to establish the brain processes underlying the initiation of the attention switch to, and the conscious perception of, sound change in an unattended stimulus stream.

  17. Transcranial direct current stimulation for refractory auditory hallucinations in schizophrenia.

    PubMed

    Andrade, Chittaranjan

    2013-11-01

    Some patients with schizophrenia may suffer from continuous or severe auditory hallucinations that are refractory to antipsychotic drugs, including clozapine. Such patients may benefit from a short trial of once- to twice-daily transcranial direct current stimulation (tDCS) with the cathode placed over the left temporoparietal cortex and the anode over the left dorsolateral prefrontal cortex; negative, cognitive, and other symptoms, if present, may also improve. At present, the case for tDCS treatment of refractory auditory hallucinations rests on 1 well-conducted randomized, sham tDCS-controlled trial and several carefully documented and instructive case reports. Benefits with up to 3 years of maintenance tDCS have also been described. In patients with refractory auditory hallucinations, tDCS has been delivered at 1- to 3-mA current intensity during 20-30 minutes in once- to twice-daily sessions for up to 3 years with no apparent adverse effects. Transcranial direct current stimulation therefore appears to be a promising noninvasive brain stimulation technique for patients with antipsychotic-refractory auditory hallucinations.

  18. What works in auditory working memory? A neural oscillations perspective.

    PubMed

    Wilsch, Anna; Obleser, Jonas

    2016-06-01

    Working memory is a limited resource: brains can only maintain small amounts of sensory input (memory load) over a brief period of time (memory decay). The dynamics of slow neural oscillations as recorded using magneto- and electroencephalography (M/EEG) provide a window into the neural mechanics of these limitations. Especially oscillations in the alpha range (8–13 Hz) are a sensitive marker for memory load. Moreover, according to current models, the resultant working memory load is determined by the relative noise in the neural representation of maintained information. The auditory domain allows memory researchers to apply and test the concept of noise quite literally: Employing degraded stimulus acoustics increases memory load and, at the same time, allows assessing the cognitive resources required to process speech in noise in an ecologically valid and clinically relevant way. The present review first summarizes recent findings on neural oscillations, especially alpha power, and how they reflect memory load and memory decay in auditory working memory. The focus is specifically on memory load resulting from acoustic degradation. These findings are then contrasted with contextual factors that benefit neural as well as behavioral markers of memory performance, by reducing representational noise. We end on discussing the functional role of alpha power in auditory working memory and suggest extensions of the current methodological toolkit. This article is part of a Special Issue entitled SI: Auditory working memory. PMID:26556773

  19. Central projections of auditory receptor neurons of crickets.

    PubMed

    Imaizumi, Kazuo; Pollack, Gerald S

    2005-12-19

    We describe the central projections of physiologically characterized auditory receptor neurons of crickets as revealed by confocal microscopy. Receptors tuned to ultrasonic frequencies (similar to those produced by echolocating, insectivorous bats), to a mid-range of frequencies, and a subset of those tuned to low, cricket-like frequencies have similar projections, terminating medially within the auditory neuropile. Quantitative analysis shows that despite the general similarity of these projections they are tonotopic, with receptors tuned to lower frequencies terminating more medially. Another subset of cricket-song-tuned receptors projects more laterally and posteriorly than the other types. Double-fills of receptors and identified interneurons show that the three medially projecting receptor types are anatomically well positioned to provide monosynaptic input to interneurons that relay auditory information to the brain and to interneurons that modify this ascending information. The more laterally and posteriorly branching receptor type may not interact directly with this ascending pathway, but is well positioned to provide direct input to an interneuron that carries auditory information to more posterior ganglia. These results suggest that information about cricket song is segregated into functionally different pathways as early as the level of receptor neurons. Ultrasound-tuned and mid-frequency tuned receptors have approximately twice as many varicosities, which are sites of transmitter release, per receptor as either anatomical type of cricket-song-tuned receptor. This may compensate in part for the numerical under-representation of these receptor types.

  20. Evaluation of Central Auditory Discrimination Abilities in Older Adults

    PubMed Central

    Freigang, Claudia; Schmidt, Lucas; Wagner, Jan; Eckardt, Rahel; Steinhagen-Thiessen, Elisabeth; Ernst, Arne; Rübsamen, Rudolf

    2011-01-01

    The present study focuses on auditory discrimination abilities in older adults aged 65–89 years. We applied the “Leipzig inventory for patient psychoacoustic” (LIPP), a psychoacoustic test battery specifically designed to identify deficits in central auditory processing. These tests quantify the just noticeable differences (JNDs) for the three basic acoustic parameters (i.e., frequency, intensity, and signal duration). Three different test modes [monaural, dichotic signal/noise (s/n), and interaural] were used; the stimulus level was 35 dB sensation level. The tests are designed as three-alternative forced-choice procedures, with a maximum-likelihood procedure estimating the p = 0.5 correct-response value. These procedures have proven to be highly efficient and provide a reliable outcome. The measurements yielded significant age-dependent deteriorations in the ability to discriminate single acoustic features, pointing to progressive impairments in central auditory processing. The degree of deterioration depended on the acoustic feature and on the test mode. Most prominently, interaural frequency and signal-duration discrimination thresholds at low test frequencies were elevated, which indicates a deterioration of time- and phase-dependent processing at brain stem and cortical levels. LIPP proves to be an effective tool to identify basic pathophysiological mechanisms and the source of a specific impairment in auditory processing in the elderly. PMID:21577251
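Maximum-likelihood estimation of a discrimination threshold from forced-choice trial data can be sketched as follows. The logistic psychometric shape, the grid search, and the simulated trial counts are illustrative assumptions, not the LIPP implementation:

```python
import numpy as np

def p_correct(level, threshold, slope=1.0, guess=1/3):
    """3-AFC psychometric function: chance rate 1/3, logistic rise (assumed shape)."""
    return guess + (1 - guess) / (1 + np.exp(-slope * (level - threshold)))

def ml_threshold(levels, correct, grid=np.linspace(-10, 10, 401)):
    """Grid-search maximum-likelihood threshold estimate from trial outcomes."""
    loglik = [np.sum(np.where(correct,
                              np.log(p_correct(levels, th)),
                              np.log(1 - p_correct(levels, th))))
              for th in grid]
    return grid[int(np.argmax(loglik))]

rng = np.random.default_rng(3)
true_threshold = 3.0
levels = rng.uniform(-5.0, 10.0, size=500)              # stimulus levels tested
correct = rng.random(500) < p_correct(levels, true_threshold)
print(ml_threshold(levels, correct))   # should land near 3.0
```

In an actual adaptive procedure the next stimulus level would be chosen from the running estimate, which is what makes maximum-likelihood methods so efficient.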

  1. Auditory scene analysis and sonified visual images. Does consonance negatively impact on object formation when using complex sonified stimuli?

    PubMed Central

    Brown, David J.; Simpson, Andrew J. R.; Proulx, Michael J.

    2015-01-01

    A critical task for the brain is the sensory representation and identification of perceptual objects in the world. When the visual sense is impaired, hearing and touch must take primary roles and in recent times compensatory techniques have been developed that employ the tactile or auditory system as a substitute for the visual system. Visual-to-auditory sonifications provide a complex, feature-based auditory representation that must be decoded and integrated into an object-based representation by the listener. However, we don’t yet know what role the auditory system plays in the object integration stage and whether the principles of auditory scene analysis apply. Here we used coarse sonified images in a two-tone discrimination task to test whether auditory feature-based representations of visual objects would be confounded when their features conflicted with the principles of auditory consonance. We found that listeners (N = 36) performed worse in an object recognition task when the auditory feature-based representation was harmonically consonant. We also found that this conflict was not negated with the provision of congruent audio–visual information. The findings suggest that early auditory processes of harmonic grouping dominate the object formation process and that the complexity of the signal, and additional sensory information have limited effect on this. PMID:26528202

  2. Auditory and non-auditory effects of noise on health

    PubMed Central

    Basner, Mathias; Babisch, Wolfgang; Davis, Adrian; Brink, Mark; Clark, Charlotte; Janssen, Sabine; Stansfeld, Stephen

    2014-01-01

    Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular mechanisms involved in noise-induced hair-cell and nerve damage has substantially increased, and preventive and therapeutic drugs will probably become available within 10 years. Evidence of the non-auditory effects of environmental noise exposure on public health is growing. Observational and experimental studies have shown that noise exposure leads to annoyance, disturbs sleep and causes daytime sleepiness, affects patient outcomes and staff performance in hospitals, increases the occurrence of hypertension and cardiovascular disease, and impairs cognitive performance in schoolchildren. In this Review, we stress the importance of adequate noise prevention and mitigation strategies for public health. PMID:24183105

  3. Diminished Auditory Responses during NREM Sleep Correlate with the Hierarchy of Language Processing

    PubMed Central

    Wilf, Meytal; Ramot, Michal; Furman-Haran, Edna; Arzi, Anat; Levkovitz, Yechiel; Malach, Rafael

    2016-01-01

    Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high order processing stations. PMID:27310812

  5. Developmental evaluation of atypical auditory sampling in dyslexia: Functional and structural evidence.

    PubMed

    Lizarazu, Mikel; Lallier, Marie; Molinaro, Nicola; Bourguignon, Mathieu; Paz-Alonso, Pedro M; Lerma-Usabiaga, Garikoitz; Carreiras, Manuel

    2015-12-01

    Whether phonological deficits in developmental dyslexia are associated with impaired neural sampling of auditory information at either syllabic or phonemic rates is still under debate. In addition, whereas neuroanatomical alterations in auditory regions have been documented in dyslexic readers, whether and how these structural anomalies are linked to auditory sampling and reading deficits remains poorly understood. In this study, we measured auditory neural synchronization at different frequencies corresponding to relevant phonological spectral components of speech in children and adults with and without dyslexia, using magnetoencephalography. Furthermore, structural MRI was used to estimate cortical thickness of the auditory cortex of participants. Dyslexics showed atypical brain synchronization at both syllabic (slow) and phonemic (fast) rates. Interestingly, while a left hemispheric asymmetry in cortical thickness was functionally related to a stronger left hemispheric lateralization of neural synchronization to stimuli presented at the phonemic rate in skilled readers, the same anatomical index in dyslexics was related to a stronger right hemispheric dominance for neural synchronization to syllabic-rate auditory stimuli. These data suggest that the acoustic sampling deficit in developmental dyslexia might be linked to an atypical specialization of the auditory cortex to both low and high frequency amplitude modulations. PMID:26356682

  7. Reversible Inactivation of the Auditory Thalamus Disrupts HPA Axis Habituation to Repeated Loud Noise Stress Exposures

    PubMed Central

    Day, Heidi E.W.; Masini, Cher V.; Campeau, Serge

    2009-01-01

    Although habituation to stress is a widely observed adaptive mechanism in response to repeated homotypic challenge exposure, its brain location and mechanism of plasticity remains elusive. And while habituation-related plasticity has been suggested to take place in central limbic regions, recent evidence suggests that sensory sites may provide the underlying substrate for this function. For instance, several brainstem, midbrain, thalamic, and/or cortical auditory processing areas, among others, could support habituation-related plasticity to repeated loud noise exposures. In the present study, the auditory thalamus was tested for its putative role in habituation to repeated loud noise exposures, in rats. The auditory thalamus was inactivated reversibly by muscimol injections during repeated loud noise exposures to determine if brainstem or midbrain auditory nuclei would be sufficient to support habituation to this specific stressor, as measured during an additional and drug-free loud noise exposure test. Our results indicate that auditory thalamic inactivation by muscimol disrupts acute HPA axis response specifically to loud noise. Importantly, habituation to repeated loud noise exposures was also prevented by reversible auditory thalamic inactivation, suggesting that this form of plasticity is likely mediated at, or in targets of, the auditory thalamus. PMID:19379718

  8. Demonstration of prosthetic activation of central auditory pathways using (¹⁴C)-2-deoxyglucose

    SciTech Connect

    Evans, D.A.; Niparko, J.K.; Altschuler, R.A.; Frey, K.A.; Miller, J.M.

    1990-02-01

    The cochlear prosthesis is not applicable to patients who lack an implantable cochlea or an intact vestibulocochlear nerve. Direct electrical stimulation of the cochlear nucleus (CN) of the brain stem might provide a method for auditory rehabilitation of these patients. A penetrating CN electrode has been developed and tissue tolerance to this device demonstrated. This study was undertaken to evaluate metabolic activation of central nervous system (CNS) auditory tracts produced by such implants. Regional cerebral glucose use resulting from CN stimulation was estimated in a series of chronically implanted guinea pigs with the use of (¹⁴C)-2-deoxyglucose (2-DG). Enhanced 2-DG uptake was observed in structures of the auditory tract. The activation of central auditory structures achieved with CN stimulation was similar to that produced by acoustic stimulation and by electrical stimulation of the modiolar portion of the auditory nerve in control groups. An interesting banding pattern was observed in the inferior colliculus following CN stimulation, as previously described with acoustic stimulation. This study demonstrates that functional metabolic activation of central auditory pathways can be achieved with a penetrating CNS auditory prosthesis.

  9. Two distinct auditory-motor circuits for monitoring speech production as revealed by content-specific suppression of auditory cortex.

    PubMed

    Ylinen, Sari; Nora, Anni; Leminen, Alina; Hakala, Tero; Huotilainen, Minna; Shtyrov, Yury; Mäkelä, Jyrki P; Service, Elisabet

    2015-06-01

    Speech production, both overt and covert, down-regulates the activation of auditory cortex. This is thought to be due to forward prediction of the sensory consequences of speech, contributing to a feedback control mechanism for speech production. Critically, however, these regulatory effects should be specific to speech content to enable accurate speech monitoring. To determine the extent to which such forward prediction is content-specific, we recorded the brain's neuromagnetic responses to heard multisyllabic pseudowords during covert rehearsal in working memory, contrasted with a control task. The cortical auditory processing of target syllables was significantly suppressed during rehearsal compared with control, but only when they matched the rehearsed items. This critical specificity to speech content enables accurate speech monitoring by forward prediction, as proposed by current models of speech production. The one-to-one phonological motor-to-auditory mappings also appear to serve the maintenance of information in phonological working memory. Further findings of right-hemispheric suppression in the case of whole-item matches and left-hemispheric enhancement for last-syllable mismatches suggest that speech production is monitored by 2 auditory-motor circuits operating on different timescales: Finer grain in the left versus coarser grain in the right hemisphere. Taken together, our findings provide hemisphere-specific evidence of the interface between inner and heard speech.

  10. Simultaneous recording of rat auditory cortex and thalamus via a titanium-based, microfabricated, microelectrode device

    PubMed Central

    McCarthy, PT; Rao, MP; Otto, KJ

    2011-01-01

    Direct recording from sequential processing stations within the brain has provided opportunity for enhancing understanding of important neural circuits, such as the corticothalamic loops underlying auditory, visual, and somatosensory processing. However, the common reliance upon microwire-based electrodes to perform such recordings often necessitates complex surgeries and increases trauma to neural tissues. This paper reports the development of titanium-based, microfabricated, microelectrode devices designed to address these limitations by allowing acute recording from the thalamic nuclei and associated cortical sites simultaneously in a minimally-invasive manner. In particular, devices were designed to simultaneously probe rat auditory cortex and auditory thalamus, with the intent of recording auditory response latencies and isolated action potentials within the separate anatomical sites. Details regarding the design, fabrication, and characterization of these devices are presented, as are preliminary results from acute in vivo recording. PMID:21628772

  11. Modulation of auditory processing during speech movement planning is limited in adults who stutter

    PubMed Central

    Daliri, Ayoub; Max, Ludo

    2015-01-01

    Stuttering is associated with atypical structural and functional connectivity in sensorimotor brain areas, in particular premotor, motor, and auditory regions. It remains unknown, however, which specific mechanisms of speech planning and execution are affected by these neurological abnormalities. To investigate pre-movement sensory modulation, we recorded 12 stuttering and 12 nonstuttering adults’ auditory evoked potentials in response to probe tones presented prior to speech onset in a delayed-response speaking condition vs. no-speaking control conditions (silent reading; seeing nonlinguistic symbols). Findings indicate that, during speech movement planning, the nonstuttering group showed a statistically significant modulation of auditory processing (reduced N1 amplitude) that was not observed in the stuttering group. Thus, the obtained results provide electrophysiological evidence in support of the hypothesis that stuttering is associated with deficiencies in modulating the cortical auditory system during speech movement planning. This specific sensorimotor integration deficiency may contribute to inefficient feedback monitoring and, consequently, speech dysfluencies. PMID:25796060

  12. Central Gain Restores Auditory Processing following Near-Complete Cochlear Denervation.

    PubMed

    Chambers, Anna R; Resnik, Jennifer; Yuan, Yasheng; Whitton, Jonathon P; Edge, Albert S; Liberman, M Charles; Polley, Daniel B

    2016-02-17

    Sensory organ damage induces a host of cellular and physiological changes in the periphery and the brain. Here, we show that some aspects of auditory processing recover after profound cochlear denervation due to a progressive, compensatory plasticity at higher stages of the central auditory pathway. Lesioning >95% of cochlear nerve afferent synapses, while sparing hair cells, in adult mice virtually eliminated the auditory brainstem response and acoustic startle reflex, yet tone detection behavior was nearly normal. As sound-evoked responses from the auditory nerve grew progressively weaker following denervation, sound-evoked activity in the cortex (and, to a lesser extent, the midbrain) rebounded to or surpassed control levels. Increased central gain supported the recovery of rudimentary sound features encoded by firing rate, but not features encoded by precise spike timing such as modulated noise or speech. These findings underscore the importance of central plasticity in the perceptual sequelae of cochlear hearing impairment. PMID:26833137

  13. Modulation of auditory processing during speech movement planning is limited in adults who stutter.

    PubMed

    Daliri, Ayoub; Max, Ludo

    2015-04-01

    Stuttering is associated with atypical structural and functional connectivity in sensorimotor brain areas, in particular premotor, motor, and auditory regions. It remains unknown, however, which specific mechanisms of speech planning and execution are affected by these neurological abnormalities. To investigate pre-movement sensory modulation, we recorded 12 stuttering and 12 nonstuttering adults' auditory evoked potentials in response to probe tones presented prior to speech onset in a delayed-response speaking condition vs. no-speaking control conditions (silent reading; seeing nonlinguistic symbols). Findings indicate that, during speech movement planning, the nonstuttering group showed a statistically significant modulation of auditory processing (reduced N1 amplitude) that was not observed in the stuttering group. Thus, the obtained results provide electrophysiological evidence in support of the hypothesis that stuttering is associated with deficiencies in modulating the cortical auditory system during speech movement planning. This specific sensorimotor integration deficiency may contribute to inefficient feedback monitoring and, consequently, speech dysfluencies.

  14. Simultaneous recording of rat auditory cortex and thalamus via a titanium-based, microfabricated, microelectrode device

    NASA Astrophysics Data System (ADS)

    McCarthy, P. T.; Rao, M. P.; Otto, K. J.

    2011-08-01

    Direct recording from sequential processing stations within the brain has provided opportunity for enhancing understanding of important neural circuits, such as the corticothalamic loops underlying auditory, visual, and somatosensory processing. However, the common reliance upon microwire-based electrodes to perform such recordings often necessitates complex surgeries and increases trauma to neural tissues. This paper reports the development of titanium-based, microfabricated, microelectrode devices designed to address these limitations by allowing acute recording from the thalamic nuclei and associated cortical sites simultaneously in a minimally invasive manner. In particular, devices were designed to simultaneously probe rat auditory cortex and auditory thalamus, with the intent of recording auditory response latencies and isolated action potentials within the separate anatomical sites. Details regarding the design, fabrication, and characterization of these devices are presented, as are preliminary results from acute in vivo recording.

  15. Music training alters the course of adolescent auditory development.

    PubMed

    Tierney, Adam T; Krizman, Jennifer; Kraus, Nina

    2015-08-11

    Fundamental changes in brain structure and function during adolescence are well-characterized, but the extent to which experience modulates adolescent neurodevelopment is not. Musical experience provides an ideal case for examining this question because the influence of music training begun early in life is well-known. We investigated the effects of in-school music training, previously shown to enhance auditory skills, versus another in-school training program that did not focus on development of auditory skills (active control). We tested adolescents on neural responses to sound and language skills before they entered high school (pretraining) and again 3 y later. Here, we show that in-school music training begun in high school prolongs the stability of subcortical sound processing and accelerates maturation of cortical auditory responses. Although phonological processing improved in both the music training and active control groups, the enhancement was greater in adolescents who underwent music training. Thus, music training initiated as late as adolescence can enhance neural processing of sound and confer benefits for language skills. These results establish the potential for experience-driven brain plasticity during adolescence and demonstrate that in-school programs can engender these changes.

  16. Music training alters the course of adolescent auditory development

    PubMed Central

    Tierney, Adam T.; Krizman, Jennifer; Kraus, Nina

    2015-01-01

    Fundamental changes in brain structure and function during adolescence are well-characterized, but the extent to which experience modulates adolescent neurodevelopment is not. Musical experience provides an ideal case for examining this question because the influence of music training begun early in life is well-known. We investigated the effects of in-school music training, previously shown to enhance auditory skills, versus another in-school training program that did not focus on development of auditory skills (active control). We tested adolescents on neural responses to sound and language skills before they entered high school (pretraining) and again 3 y later. Here, we show that in-school music training begun in high school prolongs the stability of subcortical sound processing and accelerates maturation of cortical auditory responses. Although phonological processing improved in both the music training and active control groups, the enhancement was greater in adolescents who underwent music training. Thus, music training initiated as late as adolescence can enhance neural processing of sound and confer benefits for language skills. These results establish the potential for experience-driven brain plasticity during adolescence and demonstrate that in-school programs can engender these changes. PMID:26195739

  17. Auditory Evoked Bursts in Mouse Visual Cortex during Isoflurane Anesthesia

    PubMed Central

    Land, Rüdiger; Engler, Gerhard

    2012-01-01

    General anesthesia is not a uniform state of the brain. Ongoing activity differs between light and deep anesthesia, and cortical response properties are modulated depending on anesthetic dosage. We investigated how anesthesia level affects cross-modal interactions in primary sensory cortex. To examine this, we continuously measured the effects of visual and auditory stimulation during increasing and decreasing isoflurane levels in the mouse visual cortex and the subiculum (from baseline at 0.7 to 2.5 vol % and back). Auditory evoked burst activity occurred in visual cortex after a transition during increase of the anesthesia level. At the same time, auditory and visual evoked bursts occurred in the subiculum, even though the subiculum was unresponsive to both stimuli prior to the transition. This altered sensory excitability was linked to the presence of burst suppression activity in cortex, and to a regular slow burst suppression rhythm (∼0.2 Hz) in the subiculum. The effect disappeared during return to light anesthesia. The results show that pseudo-heteromodal sensory burst responses can appear in brain structures as an effect of an anesthesia-induced state change. PMID:23185462

  18. Auditory learning: a developmental method.

    PubMed

    Zhang, Yilu; Weng, Juyang; Hwang, Wey-Shiuan

    2005-05-01

    Motivated by the human autonomous development process from infancy to adulthood, we have built a robot that develops its cognitive and behavioral skills through real-time interactions with the environment. We call such a robot a developmental robot. In this paper, we present the theory and the architecture to implement a developmental robot and discuss the related techniques that address an array of challenging technical issues. As an application, experimental results on a real robot, the self-organizing, autonomous, incremental learner (SAIL), are presented with emphasis on its auditory perception and audition-related action generation. In particular, the SAIL robot conducts auditory learning from unsegmented and unlabeled speech streams without any prior knowledge about the auditory signals, such as the designated language or the phoneme models. Nor are the actions that the robot is expected to perform available before learning starts. SAIL learns the auditory commands and the desired actions from physical contacts with the environment, including the trainers.

  19. Context effects on auditory distraction

    PubMed Central

    Chen, Sufen; Sussman, Elyse S.

    2014-01-01

    The purpose of the study was to test the hypothesis that sound context modulates the magnitude of auditory distraction, indexed by behavioral and electrophysiological measures. Participants were asked to identify tone duration, while irrelevant changes occurred in tone frequency, tone intensity, and harmonic structure. Frequency deviants were randomly intermixed with standards (Uni-Condition), with intensity deviants (Bi-Condition), and with both intensity and complex deviants (Tri-Condition). Only in the Tri-Condition did the auditory distraction effect reflect the magnitude difference among the frequency and intensity deviants. The mixture of the different types of deviants in the Tri-Condition modulated the perceived level of distraction, demonstrating that the sound context can modulate the effect of deviance level on processing irrelevant acoustic changes in the environment. These findings thus indicate that perceptual contrast plays a role in change detection processes that leads to auditory distraction. PMID:23886958
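
    The three conditions above differ only in which deviant types are randomly intermixed with the standards. A toy generator for such oddball sequences might look as follows; the probabilities and the no-successive-deviants constraint are illustrative assumptions, not details taken from the study.

```python
import random

def oddball_sequence(n_trials, deviant_probs, seed=1):
    """Generate an oddball sequence: 'standard' trials randomly intermixed
    with several deviant types, each with its own probability. No two
    deviants occur in immediate succession (a common design constraint,
    assumed here)."""
    rng = random.Random(seed)
    seq = []
    for _ in range(n_trials):
        r = rng.random()
        cum = 0.0
        label = "standard"
        for name, prob in deviant_probs.items():
            cum += prob
            if r < cum:
                label = name
                break
        if label != "standard" and seq and seq[-1] != "standard":
            label = "standard"   # enforce the no-successive-deviants rule
        seq.append(label)
    return seq

# Tri-Condition-like mix: frequency, intensity, and complex deviants
seq = oddball_sequence(1000, {"frequency": 0.1, "intensity": 0.1, "complex": 0.1})
```

    The Uni- and Bi-Conditions would simply pass one or two deviant types in `deviant_probs`.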

  20. Hearing loss and the central auditory system: Implications for hearing aids

    NASA Astrophysics Data System (ADS)

    Frisina, Robert D.

    2003-04-01

    Hearing loss can result from disorders or damage to the ear (peripheral auditory system) or the brain (central auditory system). Here, the basic structure and function of the central auditory system will be highlighted as relevant to cases of permanent hearing loss where assistive devices (hearing aids) are called for. The parts of the brain used for hearing are altered in two basic ways in instances of hearing loss: (1) Damage to the ear can reduce the number and nature of input channels that the brainstem receives from the ear, causing plasticity of the central auditory system. This plasticity may partially compensate for the peripheral loss, or add new abnormalities such as distorted speech processing or tinnitus. (2) In some situations, damage to the brain can occur independently of the ear, as may occur in cases of head trauma, tumors or aging. Implications of deficits to the central auditory system for speech perception in noise, hearing aid use and future innovative circuit designs will be provided to set the stage for subsequent presentations in this special educational session. [Work supported by NIA-NIH Grant P01 AG09524 and the International Center for Hearing & Speech Research, Rochester, NY.]

  1. The neural correlates of subjectively perceived and passively matched loudness perception in auditory phantom perception

    PubMed Central

    De Ridder, Dirk; Congedo, Marco; Vanneste, Sven

    2015-01-01

    Introduction: A fundamental question in phantom perception is determining whether the brain creates a network that represents the sound intensity of the auditory phantom as measured by tinnitus matching (in dB), or whether the phantom perception is actually only a representation of the subjectively perceived loudness. Methods: In tinnitus patients, tinnitus loudness was tested in two ways: by a numeric rating scale for subjectively perceived loudness and by a more objective tinnitus-matching test, albeit still a subjective measure. Results: Passively matched tinnitus does not correlate with the subjective numeric rating scale, and has no electrophysiological correlates. Subjective loudness, in a whole-brain analysis, is correlated with activity in the left anterior insula (alpha), the rostral/dorsal anterior cingulate cortex (beta), and the left parahippocampus (gamma). An ROI analysis finds correlations with the auditory cortex (high beta and gamma) as well. The theta band links gamma band activity in the auditory cortex and parahippocampus via theta–gamma nesting. Conclusions: Apparently the brain generates a network that represents subjectively perceived tinnitus loudness only, which is context dependent. The subjective loudness network consists of the anterior cingulate/insula, the parahippocampus, and the auditory cortex. The gamma band activity in the parahippocampus and the auditory cortex is functionally linked via theta–gamma nested lagged phase synchronization. PMID:25874164

  2. Fault analysis of multichannel spacecraft power systems

    NASA Technical Reports Server (NTRS)

    Dugal-Whitehead, Norma R.; Lollar, Louis F.

    1990-01-01

    The NASA Marshall Space Flight Center proposes to implement computer-controlled fault injection into an electrical power system breadboard to study the reactions of the various control elements of this breadboard. Elements under study include the remote power controllers, the algorithms in the control computers, and the artificially intelligent control programs resident in this breadboard. To this end, a study of electrical power system faults is being performed to yield a list of the most common power system faults. The results of this study will be applied to a multichannel high-voltage DC spacecraft power system called the large autonomous spacecraft electrical power system (LASEPS) breadboard. The results of the power system fault study and the planned implementation of these faults into the LASEPS breadboard are described.

  3. Multichannel quantum defect theory for polar molecules

    NASA Astrophysics Data System (ADS)

    Elfimov, Sergei V.; Dorofeev, Dmitrii L.; Zon, Boris A.

    2014-02-01

    Our work is devoted to developing a general approach for nonpenetrating Rydberg states of polar molecules. We propose a method to estimate the accuracy of calculating their wave functions and quantum defects. Based on this method, we estimate the accuracy of the Born-Oppenheimer (BO) and inverse Born-Oppenheimer (IBO) approximations for these states. This estimation enables us to determine the space and energy regions where the BO and IBO approximations are valid, which depend on the interplay between l coupling (due to the dipole potential of the core) and l uncoupling (due to rotation of the core). Next we consider the intermediate region where neither BO nor IBO is valid. For this intermediate region we propose a modification of Fano's multichannel quantum defect theory to match the BO and IBO wave functions and show that it gives more reliable results, which we demonstrate for the example of the SO molecule.

  4. A cryogenic multichannel electronically scanned pressure module

    NASA Technical Reports Server (NTRS)

    Shams, Qamar A.; Fox, Robert L.; Adcock, Edward E.; Kahng, Seun K.

    1992-01-01

    Consideration is given to a cryogenic multichannel electronically scanned pressure (ESP) module developed and tested over an extended temperature span from -184 to +50 C and a pressure range of 0 to 5 psig. The ESP module consists of 32 pressure sensor dice, four analog 8 differential-input multiplexers, and an amplifier circuit, all of which are packaged in a physical volume of 2 x 1 x 5/8 in with 32 pressure and two reference ports. Maximum nonrepeatability is measured at 0.21 percent of full-scale output. The ESP modules have performed consistently well through more than 15 runs over this temperature range and continue to work without any sign of degradation. The sensors also withstand repeated thermal shock tests with a temperature change of 220 C/sec.

  5. Photonic generation for multichannel THz wireless communication.

    PubMed

    Shams, Haymen; Fice, Martyn J; Balakier, Katarzyna; Renaud, Cyril C; van Dijk, Frédéric; Seeds, Alwyn J

    2014-09-22

    We experimentally demonstrate photonic generation of a multichannel THz wireless signal at carrier frequency 200 GHz, with data rate up to 75 Gbps in QPSK modulation format, using an optical heterodyne technique and digital coherent detection. BER measurements were carried out for three subcarriers each modulated with 5 Gbaud QPSK or for two subcarriers modulated with 10 Gbaud QPSK, giving a total speed of 30 Gbps or 40 Gbps, respectively. The system evaluation was also performed with three subcarriers modulated with 12.5 Gbaud QPSK (75 Gbps total) without and with 40 km fibre transmission. The proposed system enhances the capacity of high-speed THz wireless transmission by using spectrally efficient modulated subcarriers spaced at the baud rate. This approach increases the overall transmission capacity and reduces the bandwidth requirement for electronic devices.
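
    The quoted aggregate rates follow directly from QPSK carrying 2 bits per symbol, so each subcarrier contributes twice its baud rate. A quick sanity check:

```python
def aggregate_bit_rate(n_subcarriers, baud_rate_gbaud, bits_per_symbol=2):
    """Aggregate bit rate in Gbps; QPSK carries 2 bits per symbol."""
    return n_subcarriers * baud_rate_gbaud * bits_per_symbol

# Figures quoted in the abstract:
assert aggregate_bit_rate(3, 5) == 30     # three 5 Gbaud QPSK subcarriers
assert aggregate_bit_rate(2, 10) == 40    # two 10 Gbaud QPSK subcarriers
assert aggregate_bit_rate(3, 12.5) == 75  # three 12.5 Gbaud QPSK subcarriers
```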

  6. Loudspeaker equalization for auditory research.

    PubMed

    MacDonald, Justin A; Tran, Phuong K

    2007-02-01

    The equalization of loudspeaker frequency response is necessary to conduct many types of well-controlled auditory experiments. This article introduces a program that includes functions to measure a loudspeaker's frequency response, design equalization filters, and apply the filters to a set of stimuli to be used in an auditory experiment. The filters can compensate for both magnitude and phase distortions introduced by the loudspeaker. A MATLAB script is included in the Appendix to illustrate the details of the equalization algorithm used in the program.
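
    The program described here is a MATLAB script (see the article's Appendix). Purely to illustrate the underlying idea, the sketch below designs a regularized inverse FIR filter in Python that compensates both magnitude and phase; the function name, the toy impulse response, and the regularization scheme are all assumptions, not the article's algorithm.

```python
import numpy as np

def inverse_eq_filter(impulse_response, n_taps=512, reg=1e-3):
    """Design an FIR equalization filter as the regularized inverse of a
    measured loudspeaker impulse response.

    The conj(H) numerator compensates phase as well as magnitude; `reg`
    keeps the inverse bounded at deep spectral notches.
    """
    H = np.fft.rfft(impulse_response, n_taps)
    H_inv = np.conj(H) / (np.abs(H) ** 2 + reg)
    return np.fft.irfft(H_inv, n_taps)

# Toy minimum-phase "loudspeaker" impulse response
h_spk = np.array([1.0, 0.6, 0.3, 0.1])
h_eq = inverse_eq_filter(h_spk)

# The equalized chain (speaker followed by filter) should be nearly flat
chain = np.convolve(h_spk, h_eq)
```

    In practice the measured response would come from a swept-sine or similar measurement, and the experimental stimuli would be pre-filtered with the resulting taps before playback.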

  7. Sensitivity to Auditory Velocity Contrast.

    PubMed

    Locke, Shannon M; Leung, Johahn; Carlile, Simon

    2016-06-13

    A natural auditory scene often contains sound moving at varying velocities. Using a velocity contrast paradigm, we compared sensitivity to velocity changes between continuous and discontinuous trajectories. Subjects compared the velocities of two stimulus intervals that moved along a single trajectory, with and without a 1-second inter-stimulus interval (ISI). We found that thresholds were threefold larger for velocity increases in the instantaneous velocity change condition than for instantaneous velocity decreases or for the delayed velocity transition condition. This result cannot be explained by the current static "snapshot" model of auditory motion perception and suggests a continuous process in which the percept of velocity is influenced by the previous history of stimulation.

  8. Effects of Auditory Input in Individuation Tasks

    ERIC Educational Resources Information Center

    Robinson, Christopher W.; Sloutsky, Vladimir M.

    2008-01-01

    Under many conditions auditory input interferes with visual processing, especially early in development. These interference effects are often more pronounced when the auditory input is unfamiliar than when the auditory input is familiar (e.g. human speech, pre-familiarized sounds, etc.). The current study extends this research by examining how…

  9. Pre-Attentive Auditory Processing of Lexicality

    ERIC Educational Resources Information Center

    Jacobsen, Thomas; Horvath, Janos; Schroger, Erich; Lattner, Sonja; Widmann, Andreas; Winkler, Istvan

    2004-01-01

    The effects of lexicality on auditory change detection based on auditory sensory memory representations were investigated by presenting oddball sequences of repeatedly presented stimuli, while participants ignored the auditory stimuli. In a cross-linguistic study of Hungarian and German participants, stimulus sequences were composed of words that…

  10. Feature Assignment in Perception of Auditory Figure

    ERIC Educational Resources Information Center

    Gregg, Melissa K.; Samuel, Arthur G.

    2012-01-01

    Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory…

  11. Multichannel SAR Interferometry via Classical and Bayesian Estimation Techniques

    NASA Astrophysics Data System (ADS)

    Budillon, Alessandra; Ferraiuolo, Giancarlo; Pascazio, Vito; Schirinzi, Gilda

    2005-12-01

    Some multichannel synthetic aperture radar (SAR) interferometric configurations are analyzed. Both across-track and along-track interferometric systems are considered, which allow recovery of the ground height profile and of moving-target radial velocities, respectively. The joint use of multichannel configurations, which can be either multifrequency or multi-baseline, with classical or Bayesian statistical estimation techniques yields very accurate solutions and overcomes the limitations due to the ambiguous solutions intrinsic to single-channel configurations. The improved performance of the multichannel-based methods with respect to the corresponding single-channel ones has been tested with numerical experiments on simulated data.
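    The ambiguity mentioned above arises because a single interferometric phase is known only modulo 2π, so many heights are consistent with it; a second baseline with a different phase-to-height factor singles out the true one. A minimal numerical sketch, with an idealized noise-free geometry and made-up phase-to-height constants:

```python
import math

def wrap(phi):
    """Wrap a phase to (-pi, pi]."""
    return math.atan2(math.sin(phi), math.cos(phi))

# Hypothetical phase-to-height factors (rad per metre) for two baselines.
K1, K2 = 0.11, 0.17
true_height = 42.0
phi1, phi2 = wrap(K1 * true_height), wrap(K2 * true_height)

def estimate_height(p1, p2, h_max=100.0, dh=0.01):
    """Joint grid search: a single channel is ambiguous (h and
    h + 2*pi/K give the same wrapped phase), but only the true height
    is consistent with BOTH wrapped phases."""
    best_h, best_cost = 0.0, float("inf")
    h = 0.0
    while h <= h_max:
        cost = wrap(K1 * h - p1) ** 2 + wrap(K2 * h - p2) ** 2
        if cost < best_cost:
            best_h, best_cost = h, cost
        h += dh
    return best_h

print(round(estimate_height(phi1, phi2), 2))  # -> 42.0
```

    With K1 alone, heights 42.0 m and 42.0 + 2π/K1 ≈ 99.1 m are indistinguishable; the second channel removes that alias, which is the essence of the multi-baseline advantage the abstract describes.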

  12. The Brain as a Mixer, II. A Pilot Study of Central Auditory Integration Abilities of Normal and Retarded Children. Studies in Language and Language Behavior, Progress Report Number VII.

    ERIC Educational Resources Information Center

    Semmel, Melvyn I.; And Others

    To explore the binaural integration abilities of six educable mentally retarded boys (ages 8 to 13) and six normal boys (ages 7 to 12) to detect possible brain injury, an adaptation of Matzker's (1958) technique involving separating words into high and low frequencies was used. One frequency filter system presented frequencies from 425 to 1275…

  13. Persistent neural activity in auditory cortex is related to auditory working memory in humans and nonhuman primates

    PubMed Central

    Huang, Ying; Matysiak, Artur; Heil, Peter; König, Reinhard; Brosch, Michael

    2016-01-01

    Working memory is the cognitive capacity of short-term storage of information for goal-directed behaviors. Where and how this capacity is implemented in the brain are unresolved questions. We show that auditory cortex stores information by persistent changes of neural activity. We separated activity related to working memory from activity related to other mental processes by having humans and monkeys perform different tasks with varying working memory demands on the same sound sequences. Working memory was reflected in the spiking activity of individual neurons in auditory cortex and in the activity of neuronal populations, that is, in local field potentials and magnetic fields. Our results provide direct support for the idea that temporary storage of information recruits the same brain areas that also process the information. Because similar activity was observed in the two species, the cellular bases of some auditory working memory processes in humans can be studied in monkeys. DOI: http://dx.doi.org/10.7554/eLife.15441.001 PMID:27438411

  14. Auditory Detection of the Human Brainstem Auditory Evoked Response.

    ERIC Educational Resources Information Center

    Kidd, Gerald, Jr.; And Others

    1993-01-01

    This study evaluated whether listeners can distinguish human brainstem auditory evoked responses elicited by acoustic clicks from control waveforms obtained with no acoustic stimulus when the waveforms are presented auditorily. Detection performance for stimuli presented visually was slightly, but consistently, superior to that which occurred for…

  15. Representation of speech in human auditory cortex: Is it special?

    PubMed Central

    Steinschneider, Mitchell; Nourski, Kirill V.; Fishman, Yonatan I.

    2013-01-01

    Successful categorization of phonemes in speech requires that the brain analyze the acoustic signal along both spectral and temporal dimensions. Neural encoding of the stimulus amplitude envelope is critical for parsing the speech stream into syllabic units. Encoding of voice onset time (VOT) and place of articulation (POA), cues necessary for determining phonemic identity, occurs within shorter time frames. An unresolved question is whether the neural representation of speech is based on processing mechanisms that are unique to humans and shaped by learning and experience, or is based on rules governing general auditory processing that are also present in non-human animals. This question was examined by comparing the neural activity elicited by speech and other complex vocalizations in primary auditory cortex of macaques, who are limited vocal learners, with that in Heschl’s gyrus, the putative location of primary auditory cortex in humans. Entrainment to the amplitude envelope is neither specific to humans nor to human speech. VOT is represented by responses time-locked to consonant release and voicing onset in both humans and monkeys. Temporal representation of VOT is observed both for isolated syllables and for syllables embedded in the more naturalistic context of running speech. The fundamental frequency of male speakers is represented by more rapid neural activity phase-locked to the glottal pulsation rate in both humans and monkeys. In both species, the differential representation of stop consonants varying in their POA can be predicted by the relationship between the frequency selectivity of neurons and the onset spectra of the speech sounds. These findings indicate that the neurophysiology of primary auditory cortex is similar in monkeys and humans despite their vastly different experience with human speech, and that Heschl’s gyrus is engaged in general auditory, and not language-specific, processing. PMID:23792076

  16. Electrostimulation mapping of comprehension of auditory and visual words.

    PubMed

    Roux, Franck-Emmanuel; Miskin, Krasimir; Durand, Jean-Baptiste; Sacko, Oumar; Réhault, Emilie; Tanova, Rositsa; Démonet, Jean-François

    2015-10-01

    In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2754 cortical sites tested). Language functions were studied with a special focus on comprehension of auditory and visual words and the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of 4 different images controlled for familiarity), and also auditory object (sound recognition) and Token test tasks. Ninety-two auditory comprehension interference sites were observed. We found that the process of auditory comprehension involved a few, fine-grained, sub-centimetre cortical territories. Early stages of speech comprehension seem to relate to two posterior regions in the left superior temporal gyrus. Downstream lexical-semantic speech processing and sound analysis involved 2 pathways, along the anterior part of the left superior temporal gyrus, and posteriorly around the supramarginal and middle temporal gyri. Electrostimulation experimentally dissociated perceptual consciousness attached to speech comprehension. The initial word discrimination process can be considered as an "automatic" stage, the attention feedback not being impaired by stimulation as would be the case at the lexical-semantic stage. Multimodal organization of the superior temporal gyrus was also detected since some neurones could be involved in comprehension of visual material and naming. These findings demonstrate a fine-grained, sub-centimetre cortical representation of speech comprehension processing mainly in the left superior temporal gyrus and are in line with those described in dual stream models of language comprehension processing. PMID:26332785

  18. Auditory stream segregation in children with Asperger syndrome.

    PubMed

    Lepistö, T; Kuitunen, A; Sussman, E; Saalasti, S; Jansson-Verkasalo, E; Nieminen-von Wendt, T; Kujala, T

    2009-12-01

    Individuals with Asperger syndrome (AS) often have difficulties in perceiving speech in noisy environments. The present study investigated whether this might be explained by deficient auditory stream segregation ability, that is, by a more basic difficulty in separating simultaneous sound sources from each other. To this end, auditory event-related brain potentials were recorded from a group of school-aged children with AS and a group of age-matched controls using a paradigm specifically developed for studying stream segregation. Differences in the amplitudes of ERP components were found between groups only in the stream segregation conditions and not for simple feature discrimination. The results indicated that children with AS have difficulties in segregating concurrent sound streams, which ultimately may contribute to the difficulties in speech-in-noise perception.

  19. Perception of rhythmic grouping depends on auditory experience.

    PubMed

    Iversen, John R; Patel, Aniruddh D; Ohgushi, Kengo

    2008-10-01

    Many aspects of perception are known to be shaped by experience, but others are thought to be innate universal properties of the brain. A specific example comes from rhythm perception, where one of the fundamental perceptual operations is the grouping of successive events into higher-level patterns, an operation critical to the perception of language and music. Grouping has long been thought to be governed by innate perceptual principles established a century ago. The current work demonstrates instead that grouping can be strongly dependent on culture. Native English and Japanese speakers were tested for their perception of grouping of simple rhythmic sequences of tones. Members of the two cultures showed different patterns of perceptual grouping, demonstrating that these basic auditory processes are not universal but are shaped by experience. It is suggested that the observed perceptual differences reflect the rhythms of the two languages, and that native language can exert an influence on general auditory perception at a basic level.

  20. From ear to body: the auditory-motor loop in spatial cognition

    PubMed Central

    Viaud-Delmon, Isabelle; Warusfel, Olivier

    2014-01-01

    Spatial memory is mainly studied through the visual sensory modality: navigation tasks in humans rarely integrate dynamic and spatial auditory information. In order to study how a spatial scene can be memorized on the basis of auditory and idiothetic cues only, we constructed an auditory equivalent of the Morris water maze, a task widely used to assess spatial learning and memory in rodents. Participants were equipped with wireless headphones, which delivered a soundscape updated in real time according to their movements in 3D space. A wireless tracking system (video infrared with passive markers) was used to send the coordinates of the subject's head to the sound rendering system. The rendering system used advanced HRTF-based synthesis of directional cues and room acoustic simulation for the auralization of a realistic acoustic environment. Participants were guided blindfolded in an experimental room. Their task was to explore a delimited area in order to find a hidden auditory target, i.e., a sound that was triggered only when the participant walked over a precise location in the area. The position of this target could be coded in relation to auditory landmarks rendered constantly during the exploration of the area. The task was composed of a practice trial, 6 acquisition trials during which participants had to memorize the localization of the target, and 4 test trials in which some aspects of the auditory scene were modified. The task ended with a probe trial in which the auditory target was removed. The configurations of the search paths showed how auditory information was coded to memorize the position of the target, and suggested that space can be efficiently coded without visual information in normally sighted subjects. In conclusion, space representation can be based on sensorimotor and auditory cues only, providing another argument in favor of the hypothesis that the brain has access to a modality-invariant representation of external space. PMID:25249933
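    The hidden-target logic described above (a sound fires only when the tracked head position enters a small zone) reduces to a radius test on the tracker coordinates. A minimal sketch, with hypothetical room coordinates and trigger radius:

```python
import math

TARGET = (2.5, -1.0)   # hidden target position in room coordinates (m), assumed
RADIUS = 0.3           # trigger zone radius (m), assumed

def target_triggered(head_xy):
    """True when the tracked head position lies inside the trigger zone."""
    dx = head_xy[0] - TARGET[0]
    dy = head_xy[1] - TARGET[1]
    return math.hypot(dx, dy) <= RADIUS

# A recorded search path: the target sound fires only at positions
# inside the zone, which is what the probe-trial path analysis exploits.
path = [(0.0, 0.0), (1.5, -0.5), (2.4, -0.9), (3.5, -1.0)]
print([target_triggered(p) for p in path])  # -> [False, False, True, False]
```

    In the real experiment the same test runs on every tracker update, and the auralization engine (not sketched here) renders the landmarks continuously regardless of the trigger state.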

  1. Auditory Evoked Fields Elicited by Spectral, Temporal, and Spectral–Temporal Changes in Human Cerebral Cortex

    PubMed Central

    Okamoto, Hidehiko; Teismann, Henning; Kakigi, Ryusuke; Pantev, Christo

    2012-01-01

    Natural sounds contain complex spectral components, which are temporally modulated as time-varying signals. Recent studies have suggested that the auditory system encodes spectral and temporal sound information differently. However, it remains unresolved how the human brain processes sounds containing both spectral and temporal changes. In the present study, we investigated human auditory evoked responses elicited by spectral, temporal, and spectral–temporal sound changes by means of magnetoencephalography. The auditory evoked responses elicited by the spectral–temporal change were very similar to those elicited by the spectral change, but those elicited by the temporal change were delayed by 30–50 ms and differed from the others in morphology. The results suggest that human brain responses corresponding to spectral sound changes precede those corresponding to temporal sound changes, even when the spectral and temporal changes occur simultaneously. PMID:22593751

  2. Voxel-based morphometry predicts shifts in dendritic spine density and morphology with auditory fear conditioning.

    PubMed

    Keifer, O P; Hurt, R C; Gutman, D A; Keilholz, S D; Gourley, S L; Ressler, K J

    2015-07-07

    Neuroimaging has provided compelling data about the brain. Yet the underlying mechanisms of many neuroimaging techniques have not been elucidated. Here we report a voxel-based morphometry (VBM) study of Thy1-YFP mice following auditory fear conditioning complemented by confocal microscopy analysis of cortical thickness, neuronal morphometric features and nuclei size/density. Significant VBM results included the nuclei of the amygdala, the insula and the auditory cortex. There were no significant VBM changes in a control brain area. Focusing on the auditory cortex, confocal analysis showed that fear conditioning led to a significantly increased density of shorter and wider dendritic spines, while there were no spine differences in the control area. Of all the morphology metrics studied, the spine density was the only one to show significant correlation with the VBM signal. These data demonstrate that learning-induced structural changes detected by VBM may be partially explained by increases in dendritic spine density.

  3. Auditory Temporal Conditioning in Neonates.

    ERIC Educational Resources Information Center

    Franz, W. K.; And Others

    Twenty normal newborns, approximately 36 hours old, were tested using an auditory temporal conditioning paradigm which consisted of a slow-rise, 75 dB tone played for five seconds every 25 seconds, ten times. Responses to the tones were measured by instantaneous, beat-to-beat heart rate; and the test trial was designated as the 2 1/2-second period…

  4. Central Auditory Function in Stutterers

    ERIC Educational Resources Information Center

    Hall, James W.; Jerger, James

    1978-01-01

    Central auditory function was assessed in 10 stutterers and 10 nonstutterers between the ages of 10 and 35 years, and the performance of the two groups compared for seven audiometric procedures, including acoustic reflex threshold and acoustic reflex amplitude function. (Author)

  5. Developing Linguistic Auditory Memory Patterns.

    ERIC Educational Resources Information Center

    Valett, Robert E.

    1983-01-01

    For learning handicapped children with impaired language associations, patterns, and expressions, this paper summarizes relevant linguistic research and instructional strategies. Linguistic auditory memory pattern exercises and examples are then presented as an integrated multisensory approach which has been found useful by special educators.…

  6. Auditory Risk of Air Rifles

    PubMed Central

    Lankford, James E.; Meinke, Deanna K.; Flamme, Gregory A.; Finan, Donald S.; Stewart, Michael; Tasko, Stephen; Murphy, William J.

    2016-01-01

    Objective: To characterize the impulse noise exposure and auditory risk for air rifle users, for both youth and adults. Design: Acoustic characteristics were examined and the auditory risk estimates were evaluated using contemporary damage-risk criteria for unprotected adult listeners and the 120-dB peak limit and LAeq75 exposure limit suggested by the World Health Organization (1999) for children. Study sample: Impulses were generated by 9 pellet air rifles and 1 BB air rifle. Results: None of the air rifles generated peak levels that exceeded the 140 dB peak limit for adults, and 8 (80%) exceeded the 120 dB peak SPL limit for youth. In general, for both adults and youth there is minimal auditory risk when shooting fewer than 100 unprotected shots with pellet air rifles. Air rifles with suppressors were less hazardous than those without suppressors, and pellet air rifles with higher velocities were generally more hazardous than those with lower velocities. Conclusion: To minimize auditory risk, youth should use air rifles with an integrated suppressor and lower velocity ratings. Air rifle shooters are advised to wear hearing protection whenever engaging in shooting activities in order to gain self-efficacy and model appropriate hearing health behaviors necessary for recreational firearm use. PMID:26840923
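    The peak-limit comparisons above rest on the peak sound pressure level of each impulse, computed from the measured pressure waveform relative to the 20 µPa reference. A minimal sketch with a synthetic impulse and a made-up 250 Pa peak amplitude:

```python
import math

P_REF = 20e-6  # reference pressure: 20 micropascals

def peak_spl_db(pressure_samples):
    """Peak sound pressure level in dB SPL from a waveform in pascals."""
    peak = max(abs(p) for p in pressure_samples)
    return 20.0 * math.log10(peak / P_REF)

# Toy decaying impulse with a 250 Pa peak (hypothetical muzzle blast):
impulse = [250.0 * math.exp(-t / 5.0) * math.cos(t) for t in range(100)]
spl = peak_spl_db(impulse)
print(round(spl, 1))  # -> 141.9
```

    A 250 Pa peak works out to about 141.9 dB peak SPL, above both the 140 dB adult limit and the 120 dB youth limit; the abstract's LAeq75 criterion additionally requires A-weighting and energy averaging, which this sketch omits.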

  7. Delayed Auditory Feedback and Movement

    ERIC Educational Resources Information Center

    Pfordresher, Peter Q.; Dalla Bella, Simone

    2011-01-01

    It is well known that timing of rhythm production is disrupted by delayed auditory feedback (DAF), and that disruption varies with delay length. We tested the hypothesis that disruption depends on the state of the movement trajectory at the onset of DAF. Participants tapped isochronous rhythms at a rate specified by a metronome while hearing DAF…

  8. Dynamics of auditory working memory

    PubMed Central

    Kaiser, Jochen

    2015-01-01

    Working memory denotes the ability to retain stimuli in mind that are no longer physically present and to perform mental operations on them. Electro- and magnetoencephalography allow investigating the short-term maintenance of acoustic stimuli at a high temporal resolution. Studies investigating working memory for non-spatial and spatial auditory information have suggested differential roles of regions along the putative auditory ventral and dorsal streams, respectively, in the processing of the different sound properties. Analyses of event-related potentials have shown sustained, memory load-dependent deflections over the retention periods. The topography of these waves suggested an involvement of modality-specific sensory storage regions. Spectral analysis has yielded information about the temporal dynamics of auditory working memory processing of individual stimuli, showing activation peaks during the delay phase whose timing was related to task performance. Coherence at different frequencies was enhanced between frontal and sensory cortex. In summary, auditory working memory seems to rely on the dynamic interplay between frontal executive systems and sensory representation regions. PMID:26029146

  9. Enhancement of coupled multichannel images using sparsity constraints.

    PubMed

    Ramakrishnan, Naveen; Ertin, Emre; Moses, Randolph L

    2010-08-01

    We consider the problem of joint enhancement of multichannel images with pixel-based constraints on the multichannel data. Previous work by Cetin and Karl introduced nonquadratic regularization methods for SAR image enhancement using sparsity-enforcing penalty terms. We formulate an optimization problem that jointly enhances complex-valued multichannel images while preserving the cross-channel information, which we include as constraints tying the multichannel images together. We reformulate this constrained problem as an equivalent unconstrained dual problem and develop the Dual Descent method, a numerically efficient, low-complexity algorithm for solving it. The algorithm is applied both to an interferometric synthetic aperture radar (IFSAR) problem, in which the relative phase between two complex-valued images indicates height, and to a synthetic multimodal medical image example. PMID:20236892
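    Sparsity-enforcing penalty terms of the kind mentioned above act, in the simplest single-channel denoising case (identity forward operator, not the authors' joint constrained formulation), through the closed-form soft-thresholding operator; iterating it is the basis of ISTA-style solvers. A minimal sketch with made-up data:

```python
def soft_threshold(x, lam):
    """Proximal operator of the l1 penalty lam*|x|: shrinks small values
    to exactly zero, which is how sparsity-enforcing terms suppress
    clutter while preserving strong scatterers."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

# Denoising y = x + noise by minimizing 0.5*(u - y)**2 + lam*|u| has the
# closed-form solution u = soft_threshold(y, lam):
noisy = [0.05, -0.9, 0.02, 1.4, -0.08]
enhanced = [soft_threshold(y, lam=0.1) for y in noisy]
print([round(v, 2) for v in enhanced])  # -> [0.0, -0.8, 0.0, 1.3, 0.0]
```

    The paper's cross-channel constraints couple several such problems together, which is what motivates solving the dual instead of thresholding each channel independently.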

  10. Compressed sensing MRI with multichannel data using multicore processors.

    PubMed

    Chang, Ching-Hua; Ji, Jim

    2010-10-01

    Compressed sensing (CS) is a promising method to speed up MRI. Because most clinical MRI scanners are equipped with multichannel receive systems, integrating CS with multichannel systems may not only shorten the scan time but also improve image quality. However, significant computation time is required for CS reconstruction, whose complexity scales with the number of channels. In this article, we propose a reconstruction procedure that uses ubiquitously available multicore central processing units to accelerate CS reconstruction of multichannel data. The experimental results show that reconstruction efficiency benefits significantly from parallelizing the CS reconstructions and pipelining multichannel data into multicore processors. In our experiments, an additional speedup factor of 1.6-2.0 was achieved using the proposed method on a quad-core central processing unit. The proposed method provides a straightforward way to accelerate CS reconstruction of multichannel data through parallel computation.
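    The parallelization pattern described above, one independent CS reconstruction per coil channel scheduled across cores, can be sketched with a worker pool. This is a structural sketch only: the per-channel function is a placeholder, not a real CS solver, and threads are used for portability (process pools, or numeric kernels that release the interpreter lock, are what deliver the multicore speedup the paper reports):

```python
from concurrent.futures import ThreadPoolExecutor

def reconstruct_channel(channel_data):
    """Stand-in for an iterative CS reconstruction of one coil channel.
    (A real solver would alternate sparsity thresholding with k-space
    data consistency; here we just scale samples so the sketch runs.)"""
    return [2.0 * s for s in channel_data]

def reconstruct_all(channels, workers=4):
    """Pipeline per-channel reconstructions onto a pool of workers,
    mirroring the one-channel-per-core scheduling described above."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(reconstruct_channel, channels))

# Four hypothetical coil channels of undersampled data:
channels = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6], [0.7, 0.8]]
images = reconstruct_all(channels)
print(images[0])  # -> [0.2, 0.4]
```

    After the per-channel reconstructions return, the channel images would be combined (e.g. by root-sum-of-squares) into the final image; that step is independent of the parallelization and is omitted here.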

  11. Multichannel Kondo impurity dynamics in a Majorana device.

    PubMed

    Altland, A; Béri, B; Egger, R; Tsvelik, A M

    2014-08-15

    We study the multichannel Kondo impurity dynamics realized in a mesoscopic superconducting island connected to metallic leads. The effective "impurity spin" is nonlocally realized by Majorana bound states and strongly coupled to lead electrons by non-Fermi liquid correlations. We explore the spin dynamics and its observable ramifications near the low-temperature fixed point. The topological protection of the system raises the prospect of observing multichannel Kondo impurity dynamics in experimentally realistic environments.

  12. Packed multi-channels for parallel chromatographic separations in microchips.

    PubMed

    Nagy, Andrea; Gaspar, Attila

    2013-08-23

    Here we report on a simple method to fabricate a microfluidic chip incorporating multi-channel systems packed with conventional chromatographic particles without the use of frits. The retaining effectiveness of different bottlenecks created in the channels was studied. Several channel patterns were designed for parallel multi-channel chromatographic separations. The resulting packed multi-channels were applied to parallel separations of dyes. Implementing several chromatographic separation units at microscopic size makes faster, higher-throughput separations possible.

  13. Auditory evoked responses from Ear-EEG recordings.

    PubMed

    Kidmose, P; Looney, D; Mandic, D P

    2012-01-01

    A method for brain monitoring based on measuring electroencephalographic (EEG) signals from electrodes placed in-the-ear (Ear-EEG) was recently proposed. The Ear-EEG recording methodology provides a non-invasive, discreet and unobtrusive way of measuring electrical brain signals and has great potential as an enabling method for brain monitoring in everyday life. This work aims to further establish the Ear-EEG recording methodology by considering auditory evoked potentials and by comparing Ear-EEG responses with conventional on-scalp recordings and with well-established results from the literature. It is shown that both steady-state and transient responses can be obtained from Ear-EEG, and that these responses have characteristics and quality similar to those of EEG obtained from conventional on-scalp recordings.
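    Auditory evoked responses like those compared above are extracted by epoching the continuous recording around each stimulus onset and averaging, which suppresses activity not phase-locked to the stimulus. A minimal sketch with synthetic single-channel data (made-up evoked template, onsets, and sampling rate):

```python
import math
import random

random.seed(0)
FS = 250                                    # sampling rate in Hz (assumed)
template = [math.sin(2 * math.pi * 10 * n / FS) * math.exp(-n / 25.0)
            for n in range(50)]             # toy 200-ms evoked waveform

# Continuous "EEG": the template added at each stimulus onset,
# buried in Gaussian background noise.
onsets = [100 + 300 * k for k in range(40)]
eeg = [random.gauss(0.0, 1.0) for _ in range(onsets[-1] + 100)]
for t0 in onsets:
    for n, v in enumerate(template):
        eeg[t0 + n] += v

def evoked_average(signal, stim_onsets, n_samples):
    """Average stimulus-locked epochs; non-phase-locked noise shrinks
    roughly as 1/sqrt(number of trials)."""
    epochs = [signal[t0:t0 + n_samples] for t0 in stim_onsets]
    return [sum(col) / len(epochs) for col in zip(*epochs)]

avg = evoked_average(eeg, onsets, len(template))

# Correlate the average with the underlying template to confirm recovery:
ma, mt = sum(avg) / len(avg), sum(template) / len(template)
num = sum((a - ma) * (t - mt) for a, t in zip(avg, template))
den = math.sqrt(sum((a - ma) ** 2 for a in avg)
                * sum((t - mt) ** 2 for t in template))
print(num / den > 0.7)  # averaging recovers the evoked shape from noise
```

    The same averaging applies to Ear-EEG and scalp electrodes alike; the comparison in the paper concerns the morphology and quality of the resulting averages, not the procedure itself.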

  14. Can Spectro-Temporal Complexity Explain the Autistic Pattern of Performance on Auditory Tasks?

    ERIC Educational Resources Information Center

    Samson, Fabienne; Mottron, Laurent; Jemel, Boutheina; Belin, Pascal; Ciocca, Valter

    2006-01-01

    To test the hypothesis that the level of neural complexity explains the relative level of performance and brain activity in autistic individuals, available behavioural, ERP and imaging findings related to the perception of increasingly complex auditory material under various processing tasks in autism were reviewed. Tasks involving simple material…

  15. Brief Report: Atypical Neuromagnetic Responses to Illusory Auditory Pitch in Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Brock, Jon; Bzishvili, Samantha; Reid, Melanie; Hautus, Michael; Johnson, Blake W.

    2013-01-01

    Atypical auditory perception is a widely recognised but poorly understood feature of autism. In the current study, we used magnetoencephalography to measure the brain responses of 10 autistic children as they listened passively to dichotic pitch stimuli, in which an illusory tone is generated by sub-millisecond inter-aural timing differences in…

  16. The Role of the Auditory Brainstem in Processing Linguistically-Relevant Pitch Patterns

    ERIC Educational Resources Information Center

    Krishnan, Ananthanarayan; Gandour, Jackson T.

    2009-01-01

    Historically, the brainstem has been neglected as a part of the brain involved in language processing. We review recent evidence of language-dependent effects in pitch processing based on comparisons of native vs. nonnative speakers of a tonal language from electrophysiological recordings in the auditory brainstem. We argue that there is enhancing…

  17. Auditory Habituation in the Fetus and Neonate: An fMEG Study

    ERIC Educational Resources Information Center

    Muenssinger, Jana; Matuz, Tamara; Schleger, Franziska; Kiefer-Schmidt, Isabelle; Goelz, Rangmar; Wacker-Gussmann, Annette; Birbaumer, Niels; Preissl, Hubert

    2013-01-01

    Habituation--the most basic form of learning--is used to evaluate central nervous system (CNS) maturation and to detect abnormalities in fetal brain development. In the current study, habituation, stimulus specificity and dishabituation of auditory evoked responses were measured in fetuses and newborns using fetal magnetoencephalography (fMEG). An…

  18. Lifespan Differences in Nonlinear Dynamics during Rest and Auditory Oddball Performance

    ERIC Educational Resources Information Center

    Muller, Viktor; Lindenberger, Ulman

    2012-01-01

    Electroencephalographic recordings (EEG) were used to assess age-associated differences in nonlinear brain dynamics during both rest and auditory oddball performance in children aged 9.0-12.8 years, younger adults, and older adults. We computed nonlinear coupling dynamics and dimensional complexity, and also determined spectral alpha power as an…
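    Spectral alpha power, one of the measures determined above, is the EEG power in the 8-12 Hz band. A minimal periodogram sketch on a synthetic one-second trace (assumed sampling rate; a 10 Hz alpha rhythm plus a weaker 25 Hz component):

```python
import math

FS = 128   # sampling rate in Hz (assumed)
N = FS     # one-second window -> 1 Hz frequency resolution

# Synthetic EEG: dominant 10 Hz alpha plus a weaker 25 Hz beta component.
signal = [1.0 * math.sin(2 * math.pi * 10 * n / FS)
          + 0.3 * math.sin(2 * math.pi * 25 * n / FS)
          for n in range(N)]

def band_power(x, f_lo, f_hi):
    """Sum of DFT power over integer frequencies f_lo..f_hi (1 Hz bins)."""
    total = 0.0
    for f in range(f_lo, f_hi + 1):
        re = sum(x[n] * math.cos(2 * math.pi * f * n / N) for n in range(N))
        im = sum(x[n] * math.sin(2 * math.pi * f * n / N) for n in range(N))
        total += (re * re + im * im) / (N * N)
    return total

alpha = band_power(signal, 8, 12)   # 8-12 Hz alpha band
beta = band_power(signal, 20, 30)   # 20-30 Hz band, for comparison
print(alpha > beta)  # -> True: alpha dominates this synthetic trace
```

    In practice such band powers are computed per electrode and per condition (rest vs. oddball) with windowed, averaged spectra; the single rectangular window here keeps the sketch short.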

  19. Auditory Attraction: Activation of Visual Cortex by Music and Sound in Williams Syndrome

    ERIC Educational Resources Information Center

    Thornton-Wells, Tricia A.; Cannistraci, Christopher J.; Anderson, Adam W.; Kim, Chai-Youn; Eapen, Mariam; Gore, John C.; Blake, Randolph; Dykens, Elisabeth M.

    2010-01-01

    Williams syndrome is a genetic neurodevelopmental disorder with a distinctive phenotype, including cognitive-linguistic features, nonsocial anxiety, and a strong attraction to music. We performed functional MRI studies examining brain responses to musical and other types of auditory stimuli in young adults with Williams syndrome and typically…

  20. Biofeedback Auditory Alpha EEG Training and Its Effect upon Anxiety and Reading Achievement.

    ERIC Educational Resources Information Center

    Lally, Marianne B.

    The major purpose of this exploratory study was to determine if electroencephalographic (EEG) auditory biofeedback training combined with Open Focus relaxation therapy would increase alpha-brain-wave production in highly anxious freshman university students who were also deficient in reading skills. The subjects for the study were 15 volunteer…