Science.gov

Sample records for multichannel auditory brain

  1. Multichannel spatial auditory display for speech communications

    NASA Technical Reports Server (NTRS)

    Begault, D. R.; Erbe, T.; Wenzel, E. M. (Principal Investigator)

    1994-01-01

    A spatial auditory display for multiple speech communications was developed at NASA/Ames Research Center. Input is spatialized by the use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four-letter call signs used by launch personnel at NASA against diotic speech babble. Spatial positions at 30 degrees azimuth increments were evaluated. The results from eight subjects showed a maximum intelligibility improvement of about 6-7 dB when the signal was spatialized to 60 or 90 degrees azimuth positions.
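
    The adaptive staircase procedure mentioned above can be sketched in a few lines. The code below is a generic transformed up-down (2-down/1-up) rule targeting roughly the 70.7%-correct point, run against a hypothetical simulated listener; it is not the authors' exact procedure, step size, or call-sign task.

```python
import random

# Generic transformed up-down (2-down/1-up) adaptive staircase, which
# converges on the ~70.7%-correct point (Levitt's rule). Illustrative
# sketch only -- not the specific procedure used in the study.

def staircase(respond, start_snr=10.0, step=2.0, n_reversals=8):
    """Track a signal-to-noise ratio in dB; `respond(snr)` returns
    True when the simulated trial is answered correctly."""
    snr, correct_run, direction = start_snr, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if respond(snr):
            correct_run += 1
            if correct_run == 2:            # two correct -> make it harder
                correct_run = 0
                if direction == +1:         # turning point: reversal
                    reversals.append(snr)
                direction = -1
                snr -= step
        else:                               # one error -> make it easier
            correct_run = 0
            if direction == -1:             # turning point: reversal
                reversals.append(snr)
            direction = +1
            snr += step
    return sum(reversals[-6:]) / 6          # mean of the last 6 reversals

# Hypothetical listener whose true threshold is 4 dB SNR
random.seed(0)
def fake_listener(snr):
    p_correct = 0.95 if snr >= 4.0 else 0.25
    return random.random() < p_correct

estimate = staircase(fake_listener)         # settles near the 4 dB threshold
```

    In a real experiment `respond` would present a spatialized call sign against babble at the given SNR and record the listener's answer.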

  2. Enhanced multi-channel model for auditory spectrotemporal integration.

    PubMed

    Oh, Yonghee; Feth, Lawrence L; Hoglund, Evelyn M

    2015-11-01

    In psychoacoustics, a multi-channel model has traditionally been used to describe detection improvement for multicomponent signals. This model commonly postulates that energy or information within either the frequency or time domain is transformed into a probabilistic decision variable across the auditory channels, and that their weighted linear summation determines optimum detection performance when compared to a critical value such as a decision criterion. In this study, representative integration-based channel models, specifically focused on signal-processing properties of the auditory periphery are reviewed (e.g., Durlach's channel model). In addition, major limitations of the previous channel models are described when applied to spectral, temporal, and spectrotemporal integration performance by human listeners. Here, integration refers to detection threshold improvements as the number of brief tone bursts in a signal is increased. Previous versions of the multi-channel model underestimate listener performance in these experiments. Further, they are unable to apply a single processing unit to signals which vary simultaneously in time and frequency. Improvements to the previous channel models are proposed by considering more realistic conditions such as correlated signal responses in the auditory channels, nonlinear properties in system performance, and a peripheral processing unit operating in both time and frequency domains.

  3. Consequences of Broad Auditory Filters for Identification of Multichannel-Compressed Vowels

    ERIC Educational Resources Information Center

    Souza, Pamela; Wright, Richard; Bor, Stephanie

    2012-01-01

    Purpose: In view of previous findings (Bor, Souza, & Wright, 2008) that some listeners are more susceptible to spectral changes from multichannel compression (MCC) than others, this study addressed the extent to which differences in effects of MCC were related to differences in auditory filter width. Method: Listeners were recruited in 3 groups:…

  4. Multi-channel spatial auditory display for speech communications

    NASA Technical Reports Server (NTRS)

    Begault, Durand; Erbe, Tom

    1993-01-01

    A spatial auditory display for multiple speech communications was developed at NASA-Ames Research Center. Input is spatialized by use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four-letter call signs used by launch personnel at NASA, against diotic speech babble. Spatial positions at 30 deg azimuth increments were evaluated. The results from eight subjects showed a maximum intelligibility improvement of about 6 to 7 dB when the signal was spatialized to 60 deg or 90 deg azimuth positions.

  5. Multichannel Brain-Signal-Amplifying and Digitizing System

    NASA Technical Reports Server (NTRS)

    Gevins, Alan

    2005-01-01

    An apparatus has been developed for use in acquiring multichannel electroencephalographic (EEG) data from a human subject. EEG apparatuses with many channels in use heretofore have been too heavy and bulky to be worn, and have been limited in dynamic range to no more than 18 bits. The present apparatus is small and light enough to be worn by the subject. It is capable of amplifying EEG signals and digitizing them to 22 bits in as many as 150 channels. The apparatus is controlled by software and is plugged into the USB port of a personal computer. This apparatus makes it possible, for the first time, to obtain high-resolution functional EEG images of a thinking brain in a real-life, ambulatory setting outside a research laboratory or hospital.
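
    The quoted bit depths translate directly into ideal dynamic range: roughly 6.02 dB per bit, so moving from 18 to 22 bits buys about 24 dB of headroom. A quick back-of-envelope check (idealized quantization only, not the device's measured noise floor):

```python
import math

# Ideal dynamic range of an N-bit quantizer: 20*log10(2**N), i.e.
# about 6.02 dB per bit. Back-of-envelope figures, not a device spec.
def dynamic_range_db(bits: int) -> float:
    return 20.0 * math.log10(2 ** bits)

dr_18 = dynamic_range_db(18)   # ~108.4 dB
dr_22 = dynamic_range_db(22)   # ~132.5 dB
```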

  6. The utility of multichannel local field potentials for brain-machine interfaces

    NASA Astrophysics Data System (ADS)

    Hwang, Eun Jung; Andersen, Richard A.

    2013-08-01

    Objective. Local field potentials (LFPs) that carry information about the subject's motor intention have the potential to serve as a complement or alternative to spike signals for brain-machine interfaces (BMIs). The goal of this study is to assess the utility of LFPs for BMIs by characterizing the largely unknown information coding properties of multichannel LFPs. Approach. Two monkeys were implanted, each with a 16-channel electrode array, in the parietal reach region where both LFPs and spikes are known to encode the subject's intended reach target. We examined how multichannel LFPs recorded during a reach task jointly carry reach target information, and compared the LFP performance to simultaneously recorded multichannel spikes. Main Results. LFPs yielded a higher number of channels that were informative about reach targets than spikes. Single channel LFPs provided more accurate target information than single channel spikes. However, LFPs showed significantly larger signal and noise correlations across channels than spikes. Reach target decoders performed worse when using multichannel LFPs than multichannel spikes. The underperformance of multichannel LFPs was mostly due to their larger noise correlation because noise de-correlated multichannel LFPs produced a decoding accuracy comparable to multichannel spikes. Despite the high noise correlation, decoders using LFPs in addition to spikes outperformed decoders using only spikes. Significance. These results demonstrate that multichannel LFPs could effectively complement spikes for BMI applications by yielding more informative channels. The utility of multichannel LFPs may be further augmented if their high noise correlation can be taken into account by decoders.
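
    The finding that noise correlation limits multichannel decoding can be illustrated with a toy simulation: averaging across channels cancels independent noise but not shared noise. The sketch below uses made-up numbers (16 channels, a binary "target", Gaussian noise) and a trivial threshold decoder; it is not the decoder or data of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode_accuracy(rho, n_trials=4000, n_ch=16, signal=0.5):
    """Decode a binary target by thresholding the across-channel mean,
    with pairwise noise correlation `rho` between channels."""
    # Compound-symmetric covariance: 1 on the diagonal, rho elsewhere
    cov = np.full((n_ch, n_ch), rho) + (1.0 - rho) * np.eye(n_ch)
    targets = rng.integers(0, 2, n_trials)
    means = np.where(targets[:, None] == 1, signal, -signal)
    noise = rng.multivariate_normal(np.zeros(n_ch), cov, size=n_trials)
    pooled = (means + noise).mean(axis=1)        # average the channels
    return ((pooled > 0).astype(int) == targets).mean()

acc_independent = decode_accuracy(rho=0.0)   # independent noise averages away
acc_correlated = decode_accuracy(rho=0.6)    # shared noise survives pooling
```

    With independent noise the pooled standard deviation shrinks by 1/sqrt(16); with correlation 0.6 most of the noise variance is shared and pooling barely helps, so accuracy drops substantially.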

  7. The SRI24 multichannel brain atlas: construction and applications

    NASA Astrophysics Data System (ADS)

    Rohlfing, Torsten; Zahr, Natalie M.; Sullivan, Edith V.; Pfefferbaum, Adolf

    2008-03-01

    We present a new standard atlas of the human brain based on magnetic resonance images. The atlas was generated using unbiased population registration from high-resolution images obtained by multichannel-coil acquisition at 3T in a group of 24 normal subjects. The final atlas comprises three anatomical channels (T I-weighted, early and late spin echo), three diffusion-related channels (fractional anisotropy, mean diffusivity, diffusion-weighted image), and three tissue probability maps (CSF, gray matter, white matter). The atlas is dynamic in that it is implicitly represented by nonrigid transformations between the 24 subject images, as well as distortion-correction alignments between the image channels in each subject. The atlas can, therefore, be generated at essentially arbitrary image resolutions and orientations (e.g., AC/PC aligned), without compounding interpolation artifacts. We demonstrate in this paper two different applications of the atlas: (a) region definition by label propagation in a fiber tracking study is enabled by the increased sharpness of our atlas compared with other available atlases, and (b) spatial normalization is enabled by its average shape property. In summary, our atlas has unique features and will be made available to the scientific community as a resource and reference system for future imaging-based studies of the human brain.

  8. [Brain stem auditory evoked potentials in brain death state].

    PubMed

    Kojder, I; Garell, S; Włodarczyk, E; Sagan, L; Jezewski, D; Slósarek, J

    1998-01-01

    The authors studied auditory brainstem evoked potentials (BAEP) in 27 organ donors aged 40 to 68 years treated in neurosurgery units in Szczecin and Grenoble. Abnormal results were found in all cases. In 63% of cases no evoked action potentials were obtained, in 34% only the first wave was obtained, and in two cases an evolution towards extinction of activity was observed. The authors conclude that extinction of the BAEP waveform proceeds from the later waves to the earlier ones, in agreement with the rostrocaudal direction in which the functions of brain midline structures are extinguished, and that a single examination may therefore yield varying findings.

  9. [Analysis of auditory information in the brain of the cetacean].

    PubMed

    Popov, V V; Supin, A Ia

    2006-01-01

    A distinctive feature of the cetacean brain is the exceptional development of the auditory neural centres. The location of the projection sensory areas, including the auditory area, in the cetacean cortex differs essentially from that in other mammals. Evoked-potential (EP) characteristics indicate the presence of several functional divisions in the auditory cortex. Physiological studies of the cetacean auditory centres have mainly been performed using the EP technique. Of the several types of EPs, the short-latency auditory EP has been studied most thoroughly. In cetaceans it is characterised by exceptionally high temporal resolution, with an integration time of about 0.3 ms, which corresponds to a cut-off frequency of 1700 Hz. This greatly exceeds the temporal resolution of hearing in terrestrial mammals. The frequency selectivity of hearing in cetaceans was measured using several variants of the masking technique. The acuity of frequency selectivity in cetaceans exceeds that of most terrestrial mammals (excepting bats). This acute frequency selectivity allows differentiation among the finest spectral patterns of auditory signals.

  10. Auditory multistability and neurotransmitter concentrations in the human brain.

    PubMed

    Kondo, Hirohito M; Farkas, Dávid; Denham, Susan L; Asai, Tomohisa; Winkler, István

    2017-02-19

    Multistability in perception is a powerful tool for investigating sensory-perceptual transformations, because it produces dissociations between sensory inputs and subjective experience. Spontaneous switching between different perceptual objects occurs during prolonged listening to a sound sequence of tone triplets or repeated words (termed auditory streaming and verbal transformations, respectively). We used these examples of auditory multistability to examine to what extent neurochemical and cognitive factors influence the observed idiosyncratic patterns of switching between perceptual objects. The concentrations of glutamate-glutamine (Glx) and γ-aminobutyric acid (GABA) in brain regions were measured by magnetic resonance spectroscopy, while personality traits and executive functions were assessed using questionnaires and response inhibition tasks. Idiosyncratic patterns of perceptual switching in the two multistable stimulus configurations were identified using a multidimensional scaling (MDS) analysis. Intriguingly, although switching patterns within each individual differed between auditory streaming and verbal transformations, similar MDS dimensions were extracted separately from the two datasets. Individual switching patterns were significantly correlated with Glx and GABA concentrations in auditory cortex and inferior frontal cortex but not with the personality traits and executive functions. Our results suggest that auditory perceptual organization depends on the balance between neural excitation and inhibition in different brain regions. This article is part of the themed issue 'Auditory and visual scene analysis'.
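
    Multidimensional scaling of the kind used to summarize switching patterns can be sketched with classical (Torgerson) MDS, which embeds items so that Euclidean distances approximate a given dissimilarity matrix. The study's actual MDS variant and dissimilarity measure are not specified here; this is a generic numpy illustration.

```python
import numpy as np

def classical_mds(d, k=2):
    """Classical (Torgerson) MDS: return coordinates whose Euclidean
    distances approximate the symmetric dissimilarity matrix `d`."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(b)
    order = np.argsort(eigvals)[::-1][:k]      # keep the top-k eigenvalues
    return eigvecs[:, order] * np.sqrt(np.clip(eigvals[order], 0.0, None))

# Sanity check: points on a line are recovered from their pairwise distances
points = np.array([0.0, 1.0, 2.0, 5.0])
d = np.abs(points[:, None] - points[None, :])
coords = classical_mds(d, k=1).ravel()
```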

  11. The human brain maintains contradictory and redundant auditory sensory predictions.

    PubMed

    Pieszek, Marika; Widmann, Andreas; Gruber, Thomas; Schröger, Erich

    2013-01-01

    Computational and experimental research has revealed that auditory sensory predictions are derived from regularities of the current environment by using internal generative models. However, so far, what has not been addressed is how the auditory system handles situations giving rise to redundant or even contradictory predictions derived from different sources of information. To this end, we measured error signals in the event-related brain potentials (ERPs) in response to violations of auditory predictions. Sounds could be predicted on the basis of overall probability, i.e., one sound was presented frequently and another sound rarely. Furthermore, each sound was predicted by an informative visual cue. Participants' task was to use the cue and to discriminate the two sounds as fast as possible. Violations of the probability-based prediction (i.e., a rare sound) as well as violations of the visual-auditory prediction (i.e., an incongruent sound) elicited error signals in the ERPs (Mismatch Negativity [MMN] and Incongruency Response [IR]). The respective error signals were observed even when the overall probability and the visual cue predicted different sounds. That is, the auditory system concurrently maintains and tests contradictory predictions. Moreover, if the same sound was predicted, we observed an additive error signal (scalp potential and primary current density) equaling the sum of the specific error signals. Thus, the auditory system maintains and tolerates redundant and contradictory predictions that are represented functionally independently. We argue that the auditory system exploits all currently active regularities in order to optimally prepare for future events.

  12. Evoked potential correlates of selective attention with multi-channel auditory inputs

    NASA Technical Reports Server (NTRS)

    Schwent, V. L.; Hillyard, S. A.

    1975-01-01

    Ten subjects were presented with random, rapid sequences of four auditory tones which were separated in pitch and apparent spatial position. The N1 component of the auditory vertex evoked potential (EP) measured relative to a baseline was observed to increase with attention. It was concluded that the N1 enhancement reflects a finely tuned selective attention to one stimulus channel among several concurrent, competing channels. This EP enhancement probably increases with increased information load on the subject.

  13. Methods for the analysis of auditory processing in the brain.

    PubMed

    Theunissen, Frédéric E; Woolley, Sarah M N; Hsu, Anne; Fremouw, Thane

    2004-06-01

    Understanding song perception and singing behavior in birds requires the study of auditory processing of complex sounds throughout the avian brain. We can divide the basics of auditory perception into two general processes: (1) encoding, the process whereby sound is transformed into neural activity and (2) decoding, the process whereby patterns of neural activity take on perceptual meaning and therefore guide behavioral responses to sounds. In birdsong research, most studies have focused on the decoding process: What are the responses of the specialized auditory neurons in the song control system? and What do they mean for the bird? Recently, new techniques addressing both encoding and decoding have been developed for use in songbirds. Here, we first describe some powerful methods for analyzing what acoustical aspects of complex sounds like songs are encoded by auditory processing neurons in songbird brain. These methods include the estimation and analysis of spectro-temporal receptive fields (STRFs) for auditory neurons. Then we discuss the decoding methods that have been used to understand how songbird neurons may discriminate among different songs and other sounds based on mean spike-count rates.
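
    The simplest STRF estimator for a Gaussian white-noise stimulus is the spike-triggered average: average the spectrogram slices preceding each spike. Below is a toy numpy sketch with a made-up one-pixel ground-truth STRF and a sigmoid spiking model; the methods reviewed in the article use regularized, stimulus-corrected estimators, so this only shows the core idea.

```python
import numpy as np

rng = np.random.default_rng(1)

n_freq, n_time, n_lags = 8, 20000, 10
spectrogram = rng.standard_normal((n_freq, n_time))   # white-noise stimulus

# Ground truth: the model neuron responds to frequency band 3 at lag 2
true_strf = np.zeros((n_freq, n_lags))
true_strf[3, 2] = 1.0

# Linear drive through the STRF, then a sigmoid spiking nonlinearity
drive = np.zeros(n_time)
for lag in range(n_lags):
    drive[n_lags:] += true_strf[:, lag] @ spectrogram[:, n_lags - lag:n_time - lag]
spikes = rng.random(n_time) < 1.0 / (1.0 + np.exp(-(drive - 1.0)))

# Spike-triggered average of the preceding spectrogram slices
sta = np.zeros((n_freq, n_lags))
spike_times = np.nonzero(spikes)[0]
spike_times = spike_times[spike_times >= n_lags]
for t in spike_times:
    for lag in range(n_lags):
        sta[:, lag] += spectrogram[:, t - lag]
sta /= len(spike_times)
peak = np.unravel_index(np.abs(sta).argmax(), sta.shape)   # recovers (3, 2)
```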

  14. Analysis of auditory information in the brains of cetaceans.

    PubMed

    Popov, V V; Supin, A Ya

    2007-03-01

    A characteristic feature of the brains of toothed cetaceans is the exclusive development of the auditory neural centers. The location of the projection sensory zones, including the auditory zones, in the cetacean cortex is significantly different from that in other mammals. The characteristics of evoked potentials demonstrate the existence of several functional subdivisions in the auditory cortex. Physiological studies of the auditory neural centers of cetaceans have been performed predominantly using the evoked potentials method. Of the several types of evoked potentials available for non-invasive recording, the most detailed studies have been performed using short-latency auditory evoked potentials (SLAEP). SLAEP in cetaceans are characterized by exclusively high time resolution, with integration times of about 0.3 msec, which on the frequency scale corresponds to a cut-off frequency of 1700 Hz. This is more than an order of magnitude greater than the time resolution of hearing in terrestrial mammals. The frequency selectivity of hearing in cetaceans has been measured using several versions of the masking method. The acuity of frequency selectivity in cetaceans is several times greater than that in most terrestrial mammals (except bats). The acute frequency selectivity allows the discrimination of very fine spectral patterns of sound signals.
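
    The quoted numbers are mutually consistent under the common convention that an integration time τ corresponds to a cut-off frequency of roughly 1/(2τ); the exact convention used in the original work is an assumption here.

```python
# Relating integration time to an equivalent cut-off frequency,
# assuming the common f_c ~ 1/(2*tau) rule of thumb.
tau = 0.3e-3              # integration time of ~0.3 msec, in seconds
f_c = 1.0 / (2.0 * tau)   # ~1667 Hz, matching the quoted ~1700 Hz cut-off
```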

  15. Brain metabolism during hallucination-like auditory stimulation in schizophrenia.

    PubMed

    Horga, Guillermo; Fernández-Egea, Emilio; Mané, Anna; Font, Mireia; Schatz, Kelly C; Falcon, Carles; Lomeña, Francisco; Bernardo, Miguel; Parellada, Eduard

    2014-01-01

    Auditory verbal hallucinations (AVH) in schizophrenia are typically characterized by rich emotional content. Despite the prominent role of emotion in regulating normal perception, the neural interface between emotion-processing regions such as the amygdala and auditory regions involved in perception remains relatively unexplored in AVH. Here, we studied brain metabolism using FDG-PET in 9 remitted patients with schizophrenia who previously reported severe AVH during an acute psychotic episode and 8 matched healthy controls. Participants were scanned twice: (1) at rest and (2) during the perception of aversive auditory stimuli mimicking the content of AVH. Compared to controls, remitted patients showed an exaggerated response to the AVH-like stimuli in limbic and paralimbic regions, including the left amygdala. Furthermore, patients displayed abnormally strong connections between the amygdala and auditory regions of the cortex and thalamus, along with abnormally weak connections between the amygdala and medial prefrontal cortex. These results suggest that abnormal modulation of the auditory cortex by limbic-thalamic structures might be involved in the pathophysiology of AVH and may potentially account for the emotional features that characterize hallucinatory percepts in schizophrenia.

  16. Brain Metabolism during Hallucination-Like Auditory Stimulation in Schizophrenia

    PubMed Central

    Horga, Guillermo; Fernández-Egea, Emilio; Mané, Anna; Font, Mireia; Schatz, Kelly C.; Falcon, Carles; Lomeña, Francisco; Bernardo, Miguel; Parellada, Eduard

    2014-01-01

    Auditory verbal hallucinations (AVH) in schizophrenia are typically characterized by rich emotional content. Despite the prominent role of emotion in regulating normal perception, the neural interface between emotion-processing regions such as the amygdala and auditory regions involved in perception remains relatively unexplored in AVH. Here, we studied brain metabolism using FDG-PET in 9 remitted patients with schizophrenia who previously reported severe AVH during an acute psychotic episode and 8 matched healthy controls. Participants were scanned twice: (1) at rest and (2) during the perception of aversive auditory stimuli mimicking the content of AVH. Compared to controls, remitted patients showed an exaggerated response to the AVH-like stimuli in limbic and paralimbic regions, including the left amygdala. Furthermore, patients displayed abnormally strong connections between the amygdala and auditory regions of the cortex and thalamus, along with abnormally weak connections between the amygdala and medial prefrontal cortex. These results suggest that abnormal modulation of the auditory cortex by limbic-thalamic structures might be involved in the pathophysiology of AVH and may potentially account for the emotional features that characterize hallucinatory percepts in schizophrenia. PMID:24416328

  17. Amplitude-modulated stimuli reveal auditory-visual interactions in brain activity and brain connectivity

    PubMed Central

    Laing, Mark; Rees, Adrian; Vuong, Quoc C.

    2015-01-01

    The temporal congruence between auditory and visual signals coming from the same source can be a powerful means by which the brain integrates information from different senses. To investigate how the brain uses temporal information to integrate auditory and visual information from continuous yet unfamiliar stimuli, we used amplitude-modulated tones and size-modulated shapes with which we could manipulate the temporal congruence between the sensory signals. These signals were independently modulated at a slow or a fast rate. Participants were presented with auditory-only, visual-only, or auditory-visual (AV) trials in the fMRI scanner. On AV trials, the auditory and visual signal could have the same (AV congruent) or different modulation rates (AV incongruent). Using psychophysiological interaction analyses, we found that auditory regions showed increased functional connectivity predominantly with frontal regions for AV incongruent relative to AV congruent stimuli. We further found that superior temporal regions, shown previously to integrate auditory and visual signals, showed increased connectivity with frontal and parietal regions for the same contrast. Our findings provide evidence that both activity in a network of brain regions and their connectivity are important for AV integration, and help to bridge the gap between transient and familiar AV stimuli used in previous studies. PMID:26483710

  18. Infant Auditory Processing and Event-related Brain Oscillations

    PubMed Central

    Musacchia, Gabriella; Ortiz-Mantilla, Silvia; Realpe-Bonilla, Teresa; Roesler, Cynthia P.; Benasich, April A.

    2015-01-01

    Rapid auditory processing and acoustic change detection abilities play a critical role in allowing human infants to efficiently process the fine spectral and temporal changes that are characteristic of human language. These abilities lay the foundation for effective language acquisition, allowing infants to home in on the sounds of their native language. Invasive procedures in animals and scalp-recorded potentials from human adults suggest that simultaneous, rhythmic activity (oscillations) between and within brain regions is fundamental to sensory development, determining the resolution with which incoming stimuli are parsed. At this time, little is known about oscillatory dynamics in human infant development. However, animal neurophysiology and adult EEG data provide the basis for a strong hypothesis that rapid auditory processing in infants is mediated by oscillatory synchrony in discrete frequency bands. In order to investigate this, 128-channel, high-density EEG responses of 4-month-old infants to frequency change in tone pairs, presented in two rate conditions (Rapid: 70 msec ISI and Control: 300 msec ISI), were examined. To determine the frequency band and magnitude of activity, auditory evoked response averages were first co-registered with age-appropriate brain templates. Next, the principal components of the response were identified and localized using a two-dipole model of brain activity. Single-trial analysis of oscillatory power showed a robust index of frequency change processing in bursts of Theta band (3 - 8 Hz) activity in both right and left auditory cortices, with left activation more prominent in the Rapid condition. These methods have produced data that are not only some of the first reported evoked oscillations analyses in infants, but are also, importantly, the product of a well-established method of recording and analyzing clean, meticulously collected, infant EEG and ERPs. In this article, we describe our method for infant EEG net
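
    Band-limited oscillatory power of the sort analyzed here can, in the simplest case, be estimated from an FFT power spectrum of a single trial. The numpy sketch below assumes a 250 Hz sampling rate and a synthetic 5 Hz "theta" oscillation; the study's actual pipeline used source localization and single-trial time-frequency analysis, which this does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(3)

fs = 250                               # assumed sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)        # one 2-second trial
# Synthetic trial: a 5 Hz oscillation buried in white noise
trial = np.sin(2 * np.pi * 5.0 * t) + 0.5 * rng.standard_normal(t.size)

# One-sided FFT power spectrum of the trial
power = np.abs(np.fft.rfft(trial)) ** 2 / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

theta_power = power[(freqs >= 3) & (freqs <= 8)].mean()    # 3-8 Hz band
alpha_power = power[(freqs > 8) & (freqs <= 13)].mean()    # comparison band
```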

  19. Effect of prenatal lignocaine on auditory brain stem evoked response.

    PubMed Central

    Bozynski, M E; Schumacher, R E; Deschner, L S; Kileny, P

    1989-01-01

    To test the hypothesis that there would be a positive correlation between the interpeak wave (I-V) interval as measured by auditory brain stem evoked response and the ratio of umbilical cord blood arterial to venous lignocaine concentrations in infants born after maternal epidural anaesthesia, 10 normal infants born at full term by elective caesarean section were studied. Umbilical cord arterial and venous plasma samples were assayed for lignocaine, and auditory brain stem evoked responses were elicited at 35 and 70 dB at less than 4 (test 1) and greater than or equal to 48 hours (test 2). Mean wave I-V intervals were prolonged in test 1 when compared with test 2. Linear regression showed the arterial:venous ratio accounted for 66% (left ear) and 43% (right ear) of the variance in test 1 intervals. No association was found in test 2. In newborn infants, changes in serial auditory brain stem evoked response tests occur after maternal lignocaine epidural anaesthesia and these changes correlate with blood lignocaine concentrations. PMID:2774635

  20. MULTI-CHANNEL TRANSDERMAL STIMULATION OF THE BRAIN

    DTIC Science & Technology

    channels are available and in each, repetition rate, pulse durations, and intensity are remotely controlled, allowing the adjustment of parameters of brain stimulation in completely unrestricted subjects.

  1. Brain Region-Specific Activity Patterns after Recent or Remote Memory Retrieval of Auditory Conditioned Fear

    ERIC Educational Resources Information Center

    Kwon, Jeong-Tae; Jhang, Jinho; Kim, Hyung-Su; Lee, Sujin; Han, Jin-Hee

    2012-01-01

    Memory is thought to be sparsely encoded throughout multiple brain regions forming unique memory trace. Although evidence has established that the amygdala is a key brain site for memory storage and retrieval of auditory conditioned fear memory, it remains elusive whether the auditory brain regions may be involved in fear memory storage or…

  2. Connections for auditory language in the human brain.

    PubMed

    Gierhan, Sarah M E

    2013-11-01

    The white matter bundles that underlie comprehension and production of language have been investigated for a number of years. Several studies have examined which fiber bundles (or tracts) are involved in auditory language processing, and which kind of language information is transmitted by which fiber tract. However, there is much debate about exactly which fiber tracts are involved, their precise course in the brain, how they should be named, and which functions they fulfill. Therefore, the present article reviews the available language-related literature, and educes a neurocognitive model of the pathways for auditory language processing. Besides providing an overview of the current methods used for relating fiber anatomy to function, this article details the precise anatomy of the fiber tracts and their roles in phonological, semantic and syntactic processing, articulation, and repetition.

  3. Temporal Stability of Multichannel, Multimodal ERP (Related Brain Potentials) Recordings

    DTIC Science & Technology

    1986-06-01

    variability. Early papers by Travis and Gottlober (1936, 1937), Davis and Davis (1936), Rubin (1938) and Williams (1939) suggested that EEG activity... Travis, L. E. & Gottlober, A. Do brain waves have individuality? Science, 1936, 84, 532-533. Travis, L. E. & Gottlober, A. How consistent are an

  4. The brain's voices: comparing nonclinical auditory hallucinations and imagery.

    PubMed

    Linden, David E J; Thornton, Katy; Kuswanto, Carissa N; Johnston, Stephen J; van de Ven, Vincent; Jackson, Michael C

    2011-02-01

    Although auditory verbal hallucinations are often thought to denote mental illness, the majority of voice hearers do not satisfy the criteria for a psychiatric disorder. Here, we report the first functional imaging study of such nonclinical hallucinations in 7 healthy voice hearers comparing them with auditory imagery. The human voice area in the superior temporal sulcus was activated during both hallucinations and imagery. Other brain areas supporting both hallucinations and imagery included frontotemporal language areas in the left hemisphere and their contralateral homologues and the supplementary motor area (SMA). Hallucinations are critically distinguished from imagery by lack of voluntary control. We expected this difference to be reflected in the relative timing of prefrontal and sensory areas. Activity of the SMA indeed preceded that of auditory areas during imagery, whereas during hallucinations, the 2 processes occurred simultaneously. Voluntary control was thus represented in the relative timing of prefrontal and sensory activation, whereas the sense of reality of the sensory experience may be a product of the voice area activation. Our results reveal mechanisms of the generation of sensory experience in the absence of external stimulation and suggest new approaches to the investigation of the neurobiology of psychopathology.

  5. Sex differences in brain structure in auditory and cingulate regions.

    PubMed

    Brun, Caroline C; Leporé, Natasha; Luders, Eileen; Chou, Yi-Yu; Madsen, Sarah K; Toga, Arthur W; Thompson, Paul M

    2009-07-01

    We applied a new method to visualize the three-dimensional profile of sex differences in brain structure based on MRI scans of 100 young adults. We compared 50 men with 50 women, matched for age and other relevant demographics. As predicted, left hemisphere auditory and language-related regions were proportionally expanded in women versus men, suggesting a possible structural basis for the widely replicated sex differences in language processing. In men, primary visual, and visuo-spatial association areas of the parietal lobes were proportionally expanded, in line with prior reports of relative strengths in visuo-spatial processing in men. We relate these three-dimensional patterns to prior functional and structural studies, and to theoretical predictions based on nonlinear scaling of brain morphometry.

  6. Atypical Bilateral Brain Synchronization in the Early Stage of Human Voice Auditory Processing in Young Children with Autism

    PubMed Central

    Kurita, Toshiharu; Kikuchi, Mitsuru; Yoshimura, Yuko; Hiraishi, Hirotoshi; Hasegawa, Chiaki; Takahashi, Tetsuya; Hirosawa, Tetsu; Furutani, Naoki; Higashida, Haruhiro; Ikeda, Takashi; Mutou, Kouhei; Asada, Minoru; Minabe, Yoshio

    2016-01-01

    Autism spectrum disorder (ASD) has been postulated to involve impaired neuronal cooperation in large-scale neural networks, including cortico-cortical interhemispheric circuitry. In the context of ASD, alterations in both peripheral and central auditory processes have also attracted a great deal of interest because these changes appear to represent pathophysiological processes; therefore, many prior studies have focused on atypical auditory responses in ASD. The auditory evoked field (AEF), recorded by magnetoencephalography, and the synchronization of these processes between right and left hemispheres was recently suggested to reflect various cognitive abilities in children. However, to date, no previous study has focused on AEF synchronization in ASD subjects. To assess global coordination across spatially distributed brain regions, the analysis of Omega complexity from multichannel neurophysiological data was proposed. Using Omega complexity analysis, we investigated the global coordination of AEFs in 3–8-year-old typically developing (TD) children (n = 50) and children with ASD (n = 50) in 50-ms time-windows. Children with ASD displayed significantly higher Omega complexities compared with TD children in the time-window of 0–50 ms, suggesting lower whole brain synchronization in the early stage of the P1m component. When we analyzed the left and right hemispheres separately, no significant differences in any time-windows were observed. These results suggest lower right-left hemispheric synchronization in children with ASD compared with TD children. Our study provides new evidence of aberrant neural synchronization in young children with ASD by investigating auditory evoked neural responses to the human voice. PMID:27074011
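
    Omega complexity (Wackermann's measure) can be computed as the exponential of the entropy of the normalized eigenvalue spectrum of the spatial covariance matrix: it is near 1 when all channels are driven by one synchronized source and approaches the channel count when channels are independent. A generic numpy sketch on synthetic data, not the study's MEG pipeline or time-windowing:

```python
import numpy as np

rng = np.random.default_rng(2)

def omega_complexity(x):
    """Wackermann-style Omega complexity of a (channels x samples) array:
    exp of the entropy of the normalized eigenvalues of the spatial
    covariance. ~1 = fully synchronized, ~n_channels = independent."""
    x = x - x.mean(axis=1, keepdims=True)
    eigvals = np.clip(np.linalg.eigvalsh(np.cov(x)), 0.0, None)
    p = eigvals / eigvals.sum()
    p = p[p > 0]
    return float(np.exp(-(p * np.log(p)).sum()))

n_ch, n_samp = 10, 2000
common = rng.standard_normal(n_samp)          # one shared source

synchronized = np.tile(common, (n_ch, 1)) + 0.05 * rng.standard_normal((n_ch, n_samp))
independent = rng.standard_normal((n_ch, n_samp))

omega_sync = omega_complexity(synchronized)   # close to 1
omega_indep = omega_complexity(independent)   # close to n_ch
```

    On this reading, the higher Omega values reported for the ASD group in the 0-50 ms window indicate a covariance spectrum spread over more components, i.e. weaker whole-brain synchronization.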

  7. Behavioral and electrophysiological auditory processing measures in traumatic brain injury after acoustically controlled auditory training: a long-term study

    PubMed Central

    Figueiredo, Carolina Calsolari; de Andrade, Adriana Neves; Marangoni-Castan, Andréa Tortosa; Gil, Daniela; Suriano, Italo Capraro

    2015-01-01

    ABSTRACT Objective To investigate the long-term efficacy of acoustically controlled auditory training in adults after traumatic brain injury. Methods A total of six audiologically normal individuals aged between 20 and 37 years were studied. They had suffered severe traumatic brain injury with diffuse axonal lesion and had undergone an acoustically controlled auditory training program approximately one year before. The results obtained in the behavioral and electrophysiological evaluation of auditory processing immediately after acoustically controlled auditory training were compared to reassessment findings one year later. Results Quantitative analysis of the auditory brainstem response showed increased absolute latency of all waves and interpeak intervals, bilaterally, when comparing both evaluations. Moreover, the amplitude of all waves increased; the increase was statistically significant for wave V in the right ear and wave III in the left ear. As to P3, decreased latency and increased amplitude were found for both ears at reassessment. The previous and current behavioral assessments showed similar results, except for the staggered spondaic words in the left ear and the number of errors on the dichotic consonant-vowel test. Conclusion The acoustically controlled auditory training was effective in the long run, since better latency and amplitude results were observed in the electrophysiological evaluation, in addition to stability of behavioral measures one year after training. PMID:26676270

  8. Multichannel optical brain imaging to separate cerebral vascular, tissue metabolic, and neuronal effects of cocaine

    NASA Astrophysics Data System (ADS)

    Ren, Hugang; Luo, Zhongchi; Yuan, Zhijia; Pan, Yingtian; Du, Congwu

    2012-02-01

    Characterization of cerebral hemodynamic and oxygenation metabolic changes, as well as neuronal function, is of great importance to the study of brain function and relevant brain disorders such as drug addiction. Compared with other neuroimaging modalities, optical imaging techniques have the potential for high spatiotemporal resolution and dissection of the changes in cerebral blood flow (CBF), blood volume (CBV), hemoglobin oxygenation, and intracellular Ca ([Ca2+]i), which serve as markers of vascular function, tissue metabolism, and neuronal activity, respectively. Recently, we developed a multiwavelength imaging system and integrated it into a surgical microscope. Three LEDs of λ1=530 nm, λ2=570 nm and λ3=630 nm were used for exciting [Ca2+]i fluorescence labeled by Rhod2 (AM) and sensing total hemoglobin (i.e., CBV) and deoxygenated hemoglobin, whereas one laser diode of λ4=830 nm was used for laser speckle imaging to form a CBF map of the brain. These light sources were time-shared for illumination of the brain and synchronized with the exposure of the CCD camera to acquire multichannel images of the brain. Our animal studies indicated that this optical approach enabled simultaneous mapping of cocaine-induced changes in CBF, CBV, oxygenated and deoxygenated hemoglobin, and [Ca2+]i in the cortical brain. Its high spatiotemporal resolution (30 μm, 10 Hz) and large field of view (4×5 mm²) make it an advanced neuroimaging tool for brain functional studies.
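    The CBF mapping step described above relies on laser speckle imaging, in which faster flow blurs the speckle pattern and lowers the local contrast K = σ/μ. A minimal sketch of that contrast computation (the window size and function name are illustrative, not taken from the paper):

    ```python
    import numpy as np

    def speckle_contrast(raw, win=5):
        """Local speckle contrast K = sigma/mu over a win x win window.
        Lower K indicates faster flow (more blurring of the speckle pattern)."""
        pad = win // 2
        padded = np.pad(raw.astype(float), pad, mode="reflect")
        h, w = raw.shape
        K = np.empty((h, w), dtype=float)
        for i in range(h):
            for j in range(w):
                patch = padded[i:i + win, j:j + win]
                mu = patch.mean()
                K[i, j] = patch.std() / mu if mu > 0 else 0.0
        return K
    ```

    A perfectly uniform intensity field yields zero contrast everywhere; adding speckle-like noise raises K, which is the quantity mapped to flow in practice.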

  9. Bigger Brains or Bigger Nuclei? Regulating the Size of Auditory Structures in Birds

    PubMed Central

    Kubke, M. Fabiana; Massoglia, Dino P.; Carr, Catherine E.

    2012-01-01

    Increases in the size of the neuronal structures that mediate specific behaviors are believed to be related to enhanced computational performance. It is not clear, however, what developmental and evolutionary mechanisms mediate these changes, nor whether an increase in the size of a given neuronal population is a general mechanism to achieve enhanced computational ability. We addressed the issue of size by analyzing the variation in the relative number of cells of auditory structures in auditory specialists and generalists. We show that bird species with different auditory specializations exhibit variation in the relative size of their hindbrain auditory nuclei. In the barn owl, an auditory specialist, the hindbrain auditory nuclei involved in the computation of sound location show hyperplasia. This hyperplasia was also found in songbirds, but not in non-auditory specialists. The hyperplasia of auditory nuclei was also not seen in birds with large body weight, suggesting that the total number of cells is selected for in auditory specialists. In barn owls, differences observed in the relative size of the auditory nuclei might be attributed to modifications in neurogenesis and cell death. Thus, hyperplasia of circuits used for auditory computation accompanies auditory specialization in different orders of birds. PMID:14726625

  10. Scale-free brain quartet: artistic filtering of multi-channel brainwave music.

    PubMed

    Wu, Dan; Li, Chaoyi; Yao, Dezhong

    2013-01-01

    To listen to brain activity as a piece of music, we proposed the scale-free brainwave music (SFBM) technology, which translates scalp EEGs into music notes according to the power law of both EEG and music. In the present study, the methodology was extended to derive a quartet from multi-channel EEGs with artistic beat and tonality filtering. EEG data from multiple electrodes were first translated into MIDI sequences by SFBM. These sequences were then processed by a beat filter, which adjusted the duration of notes according to the characteristic frequency, and further filtered from atonal to tonal according to a key defined by analysis of the original music pieces. Resting EEGs with eyes closed and open from 40 subjects were used for music generation. The results revealed that the scale-free exponents of the music before and after filtering differed: the filtered music showed greater variation between the eyes-closed (EC) and eyes-open (EO) conditions, and the pitch scale exponents of the filtered music were closer to 1, and thus more approximate to classical music. Furthermore, the tempo of the filtered music with eyes closed was significantly slower than that with eyes open. With the original materials obtained from multi-channel EEGs, and a little creative filtering following the composition process of a potential artist, the resulting brainwave quartet opens a new window to look into the brain in an audible, musical way. In fact, as the artistic beat and tonal filters were derived from the brainwaves, the filtered music maintained the essential properties of the brain activity in a more musical style. It might harmonically distinguish different states of brain activity, and therefore provides a method to analyze EEGs from a relaxed audio perspective.
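    The amplitude-to-pitch translation and the scale-free (power-law) exponent discussed above can be illustrated with a simplified sketch; the logarithmic mapping and the rank-frequency fit below are illustrative stand-ins, not the published SFBM procedure:

    ```python
    import numpy as np

    def amplitude_to_midi(eeg, low=48, high=84):
        # Map EEG sample amplitudes to a MIDI pitch range via their logarithm.
        # SFBM uses a logarithmic amplitude-to-pitch rule; this linear-in-log
        # rescaling is an illustrative simplification.
        logs = np.log(np.abs(eeg) + 1e-12)
        scaled = (logs - logs.min()) / (np.ptp(logs) + 1e-12)
        return (low + scaled * (high - low)).round().astype(int)

    def scale_free_exponent(pitches):
        # Fit log(count) against log(rank) of pitch occurrences; an exponent
        # whose magnitude is near 1 approximates 1/f (classical-music-like) statistics.
        counts = np.sort(np.bincount(pitches))[::-1]
        counts = counts[counts > 0]
        ranks = np.arange(1, counts.size + 1)
        slope, _intercept = np.polyfit(np.log(ranks), np.log(counts), 1)
        return slope
    ```

    Any decreasing rank-frequency distribution of pitches yields a negative slope; comparing its magnitude before and after filtering mirrors the exponent analysis in the abstract.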

  11. Quantitative map of multiple auditory cortical regions with a stereotaxic fine-scale atlas of the mouse brain

    PubMed Central

    Tsukano, Hiroaki; Horie, Masao; Hishida, Ryuichi; Takahashi, Kuniyuki; Takebayashi, Hirohide; Shibuki, Katsuei

    2016-01-01

    Optical imaging studies have recently revealed the presence of multiple auditory cortical regions in the mouse brain. We have previously demonstrated, using flavoprotein fluorescence imaging, at least six regions in the mouse auditory cortex, including the anterior auditory field (AAF), primary auditory cortex (AI), the secondary auditory field (AII), dorsoanterior field (DA), dorsomedial field (DM), and dorsoposterior field (DP). While multiple regions in the visual cortex and somatosensory cortex have been annotated and consolidated in recent brain atlases, the multiple auditory cortical regions have not yet been presented from a coronal view. In the current study, we obtained regional coordinates of the six auditory cortical regions of the C57BL/6 mouse brain and illustrated these regions on template coronal brain slices. These results should reinforce the existing mouse brain atlases and support future studies in the auditory cortex. PMID:26924462

  12. Comparison of temporal properties of auditory single units in response to cochlear infrared laser stimulation recorded with multi-channel and single tungsten electrodes

    NASA Astrophysics Data System (ADS)

    Tan, Xiaodong; Xia, Nan; Young, Hunter; Richter, Claus-Peter

    2015-02-01

    Auditory prostheses may benefit from Infrared Neural Stimulation (INS) because optical stimulation allows for spatially selective activation of neuron populations. Selective activation of neurons in the cochlear spiral ganglion can be determined in the central nucleus of the inferior colliculus (ICC) because the tonotopic organization of frequencies in the cochlea is maintained throughout the auditory pathway. The activation profile of INS is well represented in the ICC by multichannel electrodes (MCEs). To characterize single unit properties in response to INS, however, single tungsten electrodes (STEs) should be used because of their better signal-to-noise ratio. In this study, we compared the temporal properties of ICC single units recorded with MCEs and STEs in order to characterize the response properties of single auditory neurons to INS in guinea pigs. The length along the cochlea stimulated with infrared radiation corresponded to a frequency range of about 0.6 octaves, similar to that recorded with STEs. The temporal properties of single units recorded with MCEs showed higher maximum rates, shorter latencies, and higher firing efficiencies compared to those recorded with STEs. When the preset amplitude threshold for triggering MCE recordings was raised to twice the noise level, the temporal properties of the single units became similar to those obtained with STEs. Indistinguishable neural activity from multiple sources in MCE recordings could be responsible for the difference in response properties between MCEs and STEs. Thus, caution should be taken in single unit recordings with MCEs.

  13. Selective attention in an overcrowded auditory scene: implications for auditory-based brain-computer interface design.

    PubMed

    Maddox, Ross K; Cheung, Willy; Lee, Adrian K C

    2012-11-01

    Listeners are good at attending to one auditory stream in a crowded environment. However, is there an upper limit on the number of streams in an auditory scene at which this selective attention breaks down? Here, participants were asked to attend to one stream of spoken letters amidst other letter streams. In half of the trials, an initial primer was played, cueing subjects to the sound configuration. Results indicate that performance increases with token repetitions. Priming provided a performance benefit, suggesting that stream selection, not formation, is the bottleneck associated with attention in an overcrowded scene. The results' implications for brain-computer interfaces are discussed.

  14. Cross contrast multi-channel image registration using image synthesis for MR brain images.

    PubMed

    Chen, Min; Carass, Aaron; Jog, Amod; Lee, Junghoon; Roy, Snehashis; Prince, Jerry L

    2017-02-01

    Multi-modal deformable registration is important for many medical image analysis tasks such as atlas alignment, image fusion, and distortion correction. Whereas a conventional method would register images with different modalities using modality independent features or information theoretic metrics such as mutual information, this paper presents a new framework that addresses the problem using a two-channel registration algorithm capable of using mono-modal similarity measures such as sum of squared differences or cross-correlation. To make it possible to use these same-modality measures, image synthesis is used to create proxy images for the opposite modality as well as intensity-normalized images from each of the two available images. The new deformable registration framework was evaluated by performing intra-subject deformation recovery, intra-subject boundary alignment, and inter-subject label transfer experiments using multi-contrast magnetic resonance brain imaging data. Three different multi-channel registration algorithms were evaluated, revealing that the framework is robust to the multi-channel deformable registration algorithm that is used. With a single exception, all results demonstrated improvements when compared against single channel registrations using the same algorithm with mutual information.
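    The core idea of the framework above, applying a mono-modal similarity measure across acquired images and synthesized proxy images of the other modality, can be sketched as a weighted multi-channel sum of squared differences. The channel pairing and weights here are assumptions for illustration, not the paper's implementation:

    ```python
    import numpy as np

    def multichannel_ssd(moving_channels, fixed_channels, weights=None):
        """Sum of squared differences pooled over channels.
        Each channel pairs an acquired image with a synthesized proxy of the
        other modality, so a mono-modal metric applies to multi-modal data."""
        if weights is None:
            weights = [1.0] * len(fixed_channels)
        total = 0.0
        for w, m, f in zip(weights, moving_channels, fixed_channels):
            diff = np.asarray(m, float) - np.asarray(f, float)
            total += w * np.sum(diff ** 2)  # classic mono-modal SSD per channel
        return total
    ```

    A deformable registration driven by this cost would minimize it over transformation parameters; the point of the synthesis step is that SSD (or cross-correlation) becomes meaningful because both channels share one intensity profile.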

  15. Network Analysis of Functional Brain Connectivity Driven by Gamma-Band Auditory Steady-State Response in Auditory Hallucinations.

    PubMed

    Ying, Jun; Zhou, Dan; Lin, Ke; Gao, Xiaorong

    The auditory steady-state response (ASSR) may reflect activity from different regions of the brain. In particular, the gamma-band ASSR has been reported to play an important role in working memory, speech understanding, and recognition. Traditionally, the ASSR has been assessed by power spectral density analysis, which cannot capture its overall distributed properties. Functional network analysis has recently been applied in electroencephalography studies. Previous studies on resting or working states found a small-world organization of the brain network, and some researchers have studied dysfunctional networks caused by disease. The present study investigates the brain connectivity networks of schizophrenia patients with auditory hallucinations during an ASSR task. A directed transfer function is utilized to estimate the brain connectivity patterns, and the structures of the brain networks are analyzed by converting the connectivity matrices into graphs. It is found that for normal subjects, network connections are mainly distributed over the central and frontal-temporal regions, indicating that the central regions act as transmission hubs of information under ASSR stimulation. For patients, network connections appear disordered. The finding that the path length was larger in patients than in normal subjects under most thresholds provides insight into the structure of the connectivity patterns. The results suggest that, for patients with auditory hallucinations, there are more synchronous oscillations covering long distances on the cortex but a less efficient network.
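    The path-length comparison described above can be reproduced in outline: threshold the directed-transfer-function connectivity matrix into a graph, then average shortest path lengths over all reachable node pairs. A sketch assuming a simple binarizing threshold (the thresholding scheme is an assumption, not the paper's exact method):

    ```python
    import numpy as np
    from collections import deque

    def characteristic_path_length(conn, threshold):
        """Binarize a (possibly directed) connectivity matrix at `threshold`,
        then average BFS shortest path lengths over reachable ordered pairs.
        Larger values indicate a less efficient network."""
        adj = np.asarray(conn) > threshold
        np.fill_diagonal(adj, False)
        n = adj.shape[0]
        lengths = []
        for src in range(n):
            dist = {src: 0}
            q = deque([src])
            while q:  # breadth-first search from src
                u = q.popleft()
                for v in np.flatnonzero(adj[u]):
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        q.append(v)
            lengths += [d for node, d in dist.items() if node != src]
        return sum(lengths) / len(lengths) if lengths else float("inf")
    ```

    On a fully connected graph this returns 1.0; sparser or more disordered connectivity, as reported for the patient group, drives the value up.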

  16. Characteristics of auditory agnosia in a child with severe traumatic brain injury: a case report.

    PubMed

    Hattiangadi, Nina; Pillion, Joseph P; Slomine, Beth; Christensen, James; Trovato, Melissa K; Speedie, Lynn J

    2005-01-01

    We present a case that is unusual in many respects compared with other documented cases of auditory agnosia, including the mechanism of injury, the age of the individual, and the location of the neurological insult. The clinical presentation is one of disturbance in the perception of spoken language, music, pitch, emotional prosody, and temporal auditory processing in the absence of significant deficits in the comprehension of written language, expressive language production, or peripheral auditory function. Furthermore, the patient demonstrates relatively preserved function in other aspects of audition, such as sound localization, voice recognition, and perception of animal noises and environmental sounds. This case study demonstrates that auditory agnosia is possible following traumatic brain injury in a child, and illustrates the necessity of assessment with a wide variety of auditory stimuli to fully characterize auditory agnosia in a single individual.

  17. Multichannel neural recording with a 128 Mbps UWB wireless transmitter for implantable brain-machine interfaces.

    PubMed

    Ando, H; Takizawa, K; Yoshida, T; Matsushita, K; Hirata, M; Suzuki, T

    2015-01-01

    To realize a low-invasive and high-accuracy BMI (brain-machine interface) system, we have already developed a fully implantable wireless BMI system which consists of ECoG neural electrode arrays, neural recording ASICs, a Wi-Fi based wireless data transmitter, and a wireless power receiver with a rechargeable battery. For accurate estimation of movement intentions, it is important for a BMI system to have a large number of recording channels. In this paper, we report a new multi-channel BMI system which is able to record up to 4096-ch ECoG data through multiple connections of 64-ch ASICs and time-division multiplexing of the recorded data. This system has an ultra-wideband (UWB) wireless unit for transmitting the recorded neural signals to outside the body. In preliminary experiments with a human-body-equivalent liquid phantom, we confirmed 4096-ch UWB wireless data transmission in 128 Mbps mode at distances below 20 mm.

  18. A wearable multi-channel fNIRS system for brain imaging in freely moving subjects.

    PubMed

    Piper, Sophie K; Krueger, Arne; Koch, Stefan P; Mehnert, Jan; Habermehl, Christina; Steinbrink, Jens; Obrig, Hellmuth; Schmitz, Christoph H

    2014-01-15

    Functional near infrared spectroscopy (fNIRS) is a versatile neuroimaging tool with increasing acceptance in the neuroimaging community. While often lauded for its portability, most fNIRS setups employed in neuroscientific research still confine usage to a laboratory environment. We present a wearable, multi-channel fNIRS imaging system for functional brain imaging in unrestrained settings. The system operates without optical fiber bundles, using eight dual-wavelength light emitting diodes and eight electro-optical sensors, which can be placed freely on the subject's head for direct illumination and detection. Its performance was tested on N=8 subjects in a motor execution paradigm performed under three different exercising conditions: (i) during outdoor bicycle riding, (ii) while pedaling on a stationary training bicycle, and (iii) while sitting still on the training bicycle. Following left hand gripping, we observe a significant decrease in deoxyhemoglobin concentration over the contralateral motor cortex in all three conditions. A significant task-related ΔHbO2 increase was seen for the non-pedaling condition. Although the gross movements involved in pedaling and steering a bike induced more motion artifacts than carrying out the same task while sitting still, we found no significant differences in the shape or amplitude of the HbR time courses across outdoor cycling, indoor cycling, and sitting still. We demonstrate the general feasibility of using wearable multi-channel NIRS during strenuous exercise in natural, unrestrained settings and discuss the origins and effects of data artifacts. We provide quantitative guidelines for taking condition-dependent signal quality into account to allow the comparison of data across various levels of physical exercise. To the best of our knowledge, this is the first demonstration of functional NIRS brain imaging during an outdoor activity in a real-life situation in humans.

  19. An auditory brain-computer interface evoked by natural speech

    NASA Astrophysics Data System (ADS)

    Lopez-Gordo, M. A.; Fernandez, E.; Romero, S.; Pelayo, F.; Prieto, Alberto

    2012-06-01

    Brain-computer interfaces (BCIs) are mainly intended for people unable to perform any muscular movement, such as patients in a complete locked-in state. The majority of BCIs interact visually with the user, either in the form of stimulation or biofeedback. However, visual BCIs limit their ultimate use because they require subjects to gaze at, explore, and shift eye-gaze using their muscles, thus excluding patients in a complete locked-in state or under the condition of unresponsive wakefulness syndrome. In this study, we present a novel fully auditory EEG-BCI based on a dichotic listening paradigm using human voice for stimulation. This interface has been evaluated with healthy volunteers, achieving an average information transmission rate of 1.5 bits min⁻¹ in full-length trials and 2.7 bits min⁻¹ using the optimal trial length, recorded with only one channel and without formal training. This novel technique opens the door to more natural communication with users unable to use visual BCIs, with promising results in terms of performance, usability, training, and cognitive effort.

  20. The TLC: a novel auditory nucleus of the mammalian brain.

    PubMed

    Saldaña, Enrique; Viñuela, Antonio; Marshall, Allen F; Fitzpatrick, Douglas C; Aparicio, M-Auxiliadora

    2007-11-28

    We have identified a novel nucleus of the mammalian brain and termed it the tectal longitudinal column (TLC). Basic histologic stains, tract-tracing techniques and three-dimensional reconstructions reveal that the rat TLC is a narrow, elongated structure spanning the midbrain tectum longitudinally. This paired nucleus is located close to the midline, immediately dorsal to the periaqueductal gray matter. It occupies what has traditionally been considered the most medial region of the deep superior colliculus and the most medial region of the inferior colliculus. The TLC differs from the neighboring nuclei of the superior and inferior colliculi and the periaqueductal gray by its distinct connections and cytoarchitecture. Extracellular electrophysiological recordings show that TLC neurons respond to auditory stimuli with physiologic properties that differ from those of neurons in the inferior or superior colliculi. We have identified the TLC in rodents, lagomorphs, carnivores, nonhuman primates, and humans, which indicates that the nucleus is conserved across mammals. The discovery of the TLC reveals an unexpected level of longitudinal organization in the mammalian tectum and raises questions as to the participation of this mesencephalic region in essential, yet completely unexplored, aspects of multisensory and/or sensorimotor integration.

  1. A multi-channel magnetic induction tomography measurement system for human brain model imaging.

    PubMed

    Xu, Zheng; Luo, Haijun; He, Wei; He, Chuanhong; Song, Xiaodong; Zahng, Zhanglong

    2009-06-01

    This paper proposes a multi-channel magnetic induction tomography measurement system for biological conductivity imaging in a human brain model. A hemispherical glass bowl filled with a salt solution is used as the human brain model; agar blocks of different conductivities are placed in the solution to simulate intracerebral hemorrhage. The excitation and detection coils are fixed coaxially, and an axial gradiometer is used as the detection coil in order to cancel the primary field. On the outer surface of the glass bowl, 15 sensor units are arrayed in two circles as measurement parts, and a single sensor unit for cancelling phase drift is placed beside the glass bowl. The phase sensitivity of our system is 0.204 degrees per S/m at an excitation frequency of 120 kHz, and the phase noise is in the range of -0.03 to +0.05 degrees. Only the coaxial detection coil is available for each excitation coil; therefore, 15 phase measurements are collected in each measurement cycle. Finally, two-dimensional images of the conductivity distribution are obtained using an interpolation algorithm. A frequency-varying experiment indicates that imaging quality improves as the excitation frequency is increased.
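    The paper does not specify which interpolation algorithm turns the 15 sparse phase readings into a two-dimensional image; inverse-distance weighting is one common choice, sketched here with illustrative sensor coordinates and grid size:

    ```python
    import numpy as np

    def idw_map(sensor_xy, phase, grid_n=32, power=2.0):
        """Interpolate sparse sensor phase readings onto a square grid with
        inverse-distance weighting. IDW is a generic stand-in; the paper's
        actual interpolation scheme is not stated."""
        s = np.asarray(sensor_xy, float)                 # (n_sensors, 2)
        xs = np.linspace(-1.0, 1.0, grid_n)
        gx, gy = np.meshgrid(xs, xs)
        pts = np.stack([gx.ravel(), gy.ravel()], axis=1)  # (grid_n^2, 2)
        d = np.linalg.norm(pts[:, None, :] - s[None, :, :], axis=2)
        w = 1.0 / np.maximum(d, 1e-9) ** power            # guard exact hits
        img = (w * np.asarray(phase, float)).sum(axis=1) / w.sum(axis=1)
        return img.reshape(grid_n, grid_n)
    ```

    With 15 sensors on two circles, each grid pixel becomes a distance-weighted average of the phase data, giving a smooth conductivity-related map of the bowl cross-section.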

  2. Are Auditory Hallucinations Related to the Brain's Resting State Activity? A 'Neurophenomenal Resting State Hypothesis'

    PubMed Central

    2014-01-01

    While several hypotheses about the neural mechanisms underlying auditory verbal hallucinations (AVH) have been suggested, the exact role of the recently highlighted intrinsic resting state activity of the brain remains unclear. Based on recent findings, we therefore developed what we call the 'resting state hypothesis' of AVH. Our hypothesis suggests that AVH may be traced back to abnormally elevated resting state activity in the auditory cortex itself, abnormal modulation of the auditory cortex by anterior cortical midline regions as part of the default-mode network, and neural confusion between auditory cortical resting state changes and stimulus-induced activity. We discuss evidence in favour of our 'resting state hypothesis' and show its correspondence with phenomenal, i.e., subjective-experiential, features as explored in phenomenological accounts. Therefore I speak of a 'neurophenomenal resting state hypothesis' of auditory hallucinations in schizophrenia. PMID:25598821

  3. The Relationship between Phonological and Auditory Processing and Brain Organization in Beginning Readers

    ERIC Educational Resources Information Center

    Pugh, Kenneth R.; Landi, Nicole; Preston, Jonathan L.; Mencl, W. Einar; Austin, Alison C.; Sibley, Daragh; Fulbright, Robert K.; Seidenberg, Mark S.; Grigorenko, Elena L.; Constable, R. Todd; Molfese, Peter; Frost, Stephen J.

    2013-01-01

    We employed brain-behavior analyses to explore the relationship between performance on tasks measuring phonological awareness, pseudoword decoding, and rapid auditory processing (all predictors of reading (dis)ability) and brain organization for print and speech in beginning readers. For print-related activation, we observed a shared set of…

  4. Gonadotropin-releasing hormone (GnRH) modulates auditory processing in the fish brain.

    PubMed

    Maruska, Karen P; Tricas, Timothy C

    2011-04-01

    Gonadotropin-releasing hormone 1 (GnRH1) neurons control reproductive activity, but GnRH2 and GnRH3 neurons have widespread projections and function as neuromodulators in the vertebrate brain. While these extra-hypothalamic GnRH forms function as olfactory and visual neuromodulators, their potential effect on processing of auditory information is unknown. To test the hypothesis that GnRH modulates the processing of auditory information in the brain, we used immunohistochemistry to determine seasonal variations in these neuropeptide systems, and in vivo single-neuron recordings to identify neuromodulation in the midbrain torus semicircularis of the soniferous damselfish Abudefduf abdominalis. Our results show abundant GnRH-immunoreactive (-ir) axons in auditory processing regions of the midbrain and hindbrain. The number of extra-hypothalamic GnRH somata and the density of GnRH-ir axons within the auditory torus semicircularis also varied across the year, suggesting seasonal changes in GnRH influence of auditory processing. Exogenous application of GnRH (sGnRH and cGnRHII) caused a primarily inhibitory effect on auditory-evoked single neuron responses in the torus semicircularis. In the majority of neurons, GnRH caused a long-lasting decrease in spike rate in response to both tone bursts and playbacks of complex natural sounds. GnRH also decreased response latency and increased auditory thresholds in a frequency and stimulus type-dependent manner. To our knowledge, these results show for the first time in any vertebrate that GnRH can influence context-specific auditory processing in vivo in the brain, and may function to modulate seasonal auditory-mediated social behaviors.

  5. Attention to human speakers in a virtual auditory environment: brain potential evidence.

    PubMed

    Nager, Wido; Dethlefsen, Christina; Münte, Thomas F

    2008-07-18

    Listening to a speech message requires the accurate selection of the relevant auditory input, especially when distracting background noise or other speech messages are present. To investigate such auditory selection processes we presented three different speech messages simultaneously, spoken by different actors at separate spatial locations (-70°, 0°, and +70° azimuth). Stimuli were recorded using an artificial head with microphones embedded in the "auditory canals" to capture the interaural time and level differences as well as some of the filter properties of the outer-ear structures as auditory spatial cues, thus creating a realistic virtual auditory space. In a given experimental run, young healthy participants listened via headphones and attended either the rightmost or the leftmost message in order to comprehend the story. Superimposed on the speech messages, task-irrelevant probe stimuli (syllables sharing spatial and spectral characteristics, 4 probes/s) were presented and used for the generation of event-related brain potentials computed from 29 channels of EEG. ERPs to probe stimuli were characterized by a negativity starting at 250 ms with a contralateral frontal maximum for probes sharing the spatial/spectral features of the attended story relative to those of the unattended message. The relatively late onset of this attention effect was interpreted to reflect the task demands in this complex auditory environment. This study demonstrates the feasibility of using virtual auditory environments in conjunction with the probe technique to study auditory selection under realistic conditions.

  6. BabySQUID: A mobile, high-resolution multichannel magnetoencephalography system for neonatal brain assessment

    NASA Astrophysics Data System (ADS)

    Okada, Yoshio; Pratt, Kevin; Atwood, Christopher; Mascarenas, Anthony; Reineman, Richard; Nurminen, Jussi; Paulson, Douglas

    2006-02-01

    We developed a prototype of a mobile, high-resolution, multichannel magnetoencephalography (MEG) system, called babySQUID, for assessing brain functions in newborns and infants. Unlike electroencephalography, MEG signals are not distorted by the scalp or the fontanels and sutures in the skull. Thus, brain activity can be measured and localized with MEG as if the sensors were above an exposed brain. The babySQUID is housed in a moveable cart small enough to be transported from one room to another. To assess brain functions, one places the baby on the bed of the cart with the head on its headrest, MEG sensors just below. The sensor array consists of 76 first-order axial gradiometers, each with a pickup coil diameter of 6 mm and a baseline of 30 mm, in a high-density array with a spacing of 12-14 mm center-to-center. The pickup coils are 6±1 mm below the outer surface of the headrest. The short gap provides unprecedented sensitivity since the scalp and skull are thin (as little as 3-4 mm altogether) in babies. In an electromagnetically unshielded room in a hospital, the field sensitivity at 1 kHz was ~17 fT/√Hz. The noise was reduced from ~400 to 200 fT/√Hz at 1 Hz using a reference cancellation technique and further to ~40 fT/√Hz using a gradient common mode rejection technique. Although the residual environmental magnetic noise interfered with the operation of the babySQUID, the instrument functioned sufficiently well to detect spontaneous brain signals from babies with a signal-to-noise ratio (SNR) of as much as 7.6:1. In a magnetically shielded room, the field sensitivity was 17 fT/√Hz at 20 Hz and 30 fT/√Hz at 1 Hz without implementation of reference or gradient cancellation. The sensitivity was sufficiently high to detect spontaneous brain activity from a 7-month-old baby with an SNR of as much as 40:1 and evoked somatosensory responses with a 50 Hz bandwidth after as little as four averages. We expect that both the noise and the sensor gap can be reduced further by
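    The reference cancellation technique mentioned above can be approximated by regressing each measurement channel on the reference sensors and subtracting the fitted environmental component. The least-squares formulation below is a generic sketch of that idea, not the babySQUID implementation:

    ```python
    import numpy as np

    def reference_cancel(signal, refs):
        """Remove environmental noise by regressing a measurement channel on
        reference-sensor channels and subtracting the fitted part.
        `signal` is shape (n_samples,); `refs` is (n_refs, n_samples)."""
        R = np.atleast_2d(refs)                       # (n_refs, n_samples)
        # Least-squares coefficients mapping references onto the channel.
        coeffs, *_ = np.linalg.lstsq(R.T, signal, rcond=None)
        return signal - R.T @ coeffs                  # residual = brain signal
    ```

    Because the brain signal is largely uncorrelated with the far-field reference sensors, the regression removes mostly the shared environmental noise, which is how the noise floor drops from ~400 to 200 fT/√Hz in the abstract's unshielded measurement.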

  7. Cognitive factors shape brain networks for auditory skills: spotlight on auditory working memory

    PubMed Central

    Kraus, Nina; Strait, Dana; Parbery-Clark, Alexandra

    2012-01-01

    Musicians benefit from real-life advantages such as a greater ability to hear speech in noise and to remember sounds, although the biological mechanisms driving such advantages remain undetermined. Furthermore, the extent to which these advantages are a consequence of musical training or innate characteristics that predispose a given individual to pursue music training is often debated. Here, we examine biological underpinnings of musicians’ auditory advantages and the mediating role of auditory working memory. Results from our laboratory are presented within a framework that emphasizes auditory working memory as a major factor in the neural processing of sound. Within this framework, we provide evidence for music training as a contributing source of these abilities. PMID:22524346

  8. Nonlocal atlas-guided multi-channel forest learning for human brain labeling

    PubMed Central

    Ma, Guangkai; Gao, Yaozong; Wu, Guorong; Wu, Ligang; Shen, Dinggang

    2016-01-01

    Purpose: It is important for many quantitative brain studies to label meaningful anatomical regions in MR brain images. However, due to the high complexity of brain structures and ambiguous boundaries between different anatomical regions, the anatomical labeling of MR brain images is still quite a challenging task. In many existing label fusion methods, appearance information is widely used. However, since local anatomy in the human brain is often complex, appearance information alone is limited in characterizing each image point, especially for identifying the same anatomical structure across different subjects. Recent progress in computer vision suggests that context features can be very useful in identifying an object in a complex scene. In light of this, the authors propose a novel learning-based label fusion method using both low-level appearance features (computed from the target image) and high-level context features (computed from warped atlases or tentative labeling maps of the target image). Methods: In particular, the authors employ a multi-channel random forest to learn the nonlinear relationship between these hybrid features and target labels (i.e., corresponding to certain anatomical structures). Specifically, at each iteration, the random forest outputs tentative labeling maps of the target image, from which the authors compute spatial label-context features and use them in combination with the original appearance features of the target image to refine the labeling. Moreover, to accommodate the high inter-subject variations, the authors further extend their learning-based label fusion to a multi-atlas scenario, i.e., they train a random forest for each atlas and then obtain the final labeling result according to the consensus of results from all atlases. Results: The authors have comprehensively evaluated their method on both public LONI_LBPA40 and IXI datasets. To quantitatively evaluate the labeling accuracy, the authors use the
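    The auto-context loop the authors describe (classify, derive spatial label-context features from the tentative labeling, reclassify on the hybrid features) can be sketched on a toy 1-D "image". A simple linear scorer stands in for the multi-channel random forest, and all values, neighborhood sizes, and weights are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D "image": voxels 0-49 belong to structure A (label 0), voxels
# 50-99 to structure B (label 1). The appearance feature is noisy, so
# appearance alone misclassifies scattered voxels.
true_labels = np.repeat([0, 1], 50)
appearance = true_labels + rng.normal(scale=0.8, size=100)

def classify(appearance, context, w_ctx):
    # Stand-in for the trained forest: a linear score over the hybrid
    # (appearance + spatial label-context) feature vector.
    score = appearance + w_ctx * context
    return (score > 0.5 * (1.0 + w_ctx)).astype(int)

# Iteration 0: appearance features only (no tentative labeling yet).
labels = classify(appearance, np.zeros(100), 0.0)
initial_acc = np.mean(labels == true_labels)

# Auto-context iterations: the context feature is the mean tentative label
# in a spatial neighbourhood, recomputed from the previous labeling.
for _ in range(3):
    context = np.convolve(labels, np.ones(9) / 9.0, mode="same")
    labels = classify(appearance, context, 2.0)

final_acc = np.mean(labels == true_labels)
```

The context feature regularizes the labeling spatially, so isolated appearance-driven errors are corrected over the iterations.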

  10. Interaction of language, auditory and memory brain networks in auditory verbal hallucinations.

    PubMed

    Ćurčić-Blake, Branislava; Ford, Judith M; Hubl, Daniela; Orlov, Natasza D; Sommer, Iris E; Waters, Flavie; Allen, Paul; Jardri, Renaud; Woodruff, Peter W; David, Olivier; Mulert, Christoph; Woodward, Todd S; Aleman, André

    2017-01-01

    Auditory verbal hallucinations (AVH) occur in psychotic disorders, but also as a symptom of other conditions and even in healthy people. Several current theories on the origin of AVH converge, with neuroimaging studies suggesting that the language, auditory and memory/limbic networks are of particular relevance. However, reconciliation of these theories with experimental evidence is missing. We review 50 studies investigating functional (EEG and fMRI) and anatomical (diffusion tensor imaging) connectivity in these networks, and explore the evidence supporting abnormal connectivity in these networks associated with AVH. We distinguish between functional connectivity during an actual hallucination experience (symptom capture) and functional connectivity during either the resting state or a task comparing individuals who hallucinate with those who do not (symptom association studies). Symptom capture studies clearly reveal a pattern of increased coupling among the auditory, language and striatal regions. Anatomical and symptom association functional studies suggest that the interhemispheric connectivity between posterior auditory regions may depend on the phase of illness, with increases in non-psychotic individuals and first episode patients and decreases in chronic patients. Leading hypotheses involving concepts such as unstable memories, source monitoring, top-down attention, and hybrid models of hallucinations are supported in part by the published connectivity data, although several caveats and inconsistencies remain. Specifically, possible changes in fronto-temporal connectivity are still under debate. Precise hypotheses concerning the directionality of connections deduced from current theoretical approaches should be tested using experimental approaches that allow for discrimination of competing hypotheses.

  11. Prediction of Auditory and Visual P300 Brain-Computer Interface Aptitude

    PubMed Central

    Halder, Sebastian; Hammer, Eva Maria; Kleih, Sonja Claudia; Bogdan, Martin; Rosenstiel, Wolfgang; Birbaumer, Niels; Kübler, Andrea

    2013-01-01

    Objective: Brain-computer interfaces (BCIs) provide a non-muscular communication channel for patients with late-stage motoneuron disease (e.g., amyotrophic lateral sclerosis (ALS)) or otherwise motor impaired people and are also used for motor rehabilitation in chronic stroke. Differences in the ability to use a BCI vary from person to person and from session to session. A reliable predictor of aptitude would allow for the selection of suitable BCI paradigms. For this reason, we investigated whether P300 BCI aptitude could be predicted from a short experiment with a standard auditory oddball. Methods: Forty healthy participants performed an electroencephalography (EEG) based visual and auditory P300-BCI spelling task in a single session. In addition, prior to each session an auditory oddball was presented. Features extracted from the auditory oddball were analyzed with respect to predictive power for BCI aptitude. Results: Correlation between auditory oddball response and P300 BCI accuracy revealed a strong relationship between accuracy and N2 amplitude and the amplitude of a late ERP component between 400 and 600 ms. Interestingly, the P3 amplitude of the auditory oddball response was not correlated with accuracy. Conclusions: Event-related potentials recorded during a standard auditory oddball session moderately predict aptitude in an auditory P300 BCI and strongly in a visual P300 BCI. The predictor will allow for faster paradigm selection. Significance: Our method will reduce strain on patients because unsuccessful training may be avoided, provided the results can be generalized to the patient population. PMID:23457444
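    The predictor analysis this abstract describes reduces to correlating an ERP feature from the screening oddball with later BCI spelling accuracy across participants. The sketch below uses simulated numbers only (the sample size of 40 matches the study, but the amplitudes and accuracies are invented).

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical per-participant data: an ERP feature from the screening
# oddball (e.g. an N2-like amplitude, in microvolts) and that participant's
# later P300-BCI spelling accuracy. Both columns are simulated.
n2_amplitude = rng.normal(5.0, 1.5, size=40)
accuracy = np.clip(
    0.5 + 0.05 * n2_amplitude + rng.normal(0.0, 0.05, size=40), 0.0, 1.0
)

# Pearson correlation quantifies the strength of the predictor.
r = np.corrcoef(n2_amplitude, accuracy)[0, 1]
```

A strong positive `r` here would correspond to the reported relationship between oddball amplitude and BCI accuracy; in practice one would also report a significance test and cross-validated prediction error.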

  12. Diffusion tensor imaging of dolphin brains reveals direct auditory pathway to temporal lobe

    PubMed Central

    Berns, Gregory S.; Cook, Peter F.; Foxley, Sean; Jbabdi, Saad; Miller, Karla L.; Marino, Lori

    2015-01-01

    The brains of odontocetes (toothed whales) look grossly different from their terrestrial relatives. Because of their adaptation to the aquatic environment and their reliance on echolocation, the odontocetes' auditory system is both unique and crucial to their survival. Yet, scant data exist about the functional organization of the cetacean auditory system. A predominant hypothesis is that the primary auditory cortex lies in the suprasylvian gyrus along the vertex of the hemispheres, with this position induced by expansion of 'associative' regions in lateral and caudal directions. However, the precise location of the auditory cortex and its connections are still unknown. Here, we used a novel diffusion tensor imaging (DTI) sequence in archival post-mortem brains of a common dolphin (Delphinus delphis) and a pantropical dolphin (Stenella attenuata) to map their sensory and motor systems. Using thalamic parcellation based on traditionally defined regions for the primary visual (V1) and auditory cortex (A1), we found distinct regions of the thalamus connected to V1 and A1. But in addition to suprasylvian-A1, we report here, for the first time, that auditory cortex also exists in the temporal lobe, in a region near cetacean-A2 and possibly analogous to the primary auditory cortex in related terrestrial mammals (Artiodactyla). Using probabilistic tract tracing, we found a direct pathway from the inferior colliculus to the medial geniculate nucleus to the temporal lobe near the sylvian fissure. Our results demonstrate the feasibility of post-mortem DTI in archival specimens to answer basic questions in comparative neurobiology in a way that has not previously been possible, and show a link between the cetacean auditory system and those of terrestrial mammals. Given that fresh cetacean specimens are relatively rare, the ability to measure connectivity in archival specimens opens up a plethora of possibilities for investigating neuroanatomy in cetaceans and other species.

  13. Auditory-musical processing in autism spectrum disorders: a review of behavioral and brain imaging studies.

    PubMed

    Ouimet, Tia; Foster, Nicholas E V; Tryfon, Ana; Hyde, Krista L

    2012-04-01

    Autism spectrum disorder (ASD) is a complex neurodevelopmental condition characterized by atypical social and communication skills, repetitive behaviors, and atypical visual and auditory perception. Studies in vision have reported enhanced detailed ("local") processing but diminished holistic ("global") processing of visual features in ASD. Individuals with ASD also show enhanced processing of simple visual stimuli but diminished processing of complex visual stimuli. Relative to the visual domain, auditory global-local distinctions, and the effects of stimulus complexity on auditory processing in ASD, are less clear. However, one remarkable finding is that many individuals with ASD have enhanced musical abilities, such as superior pitch processing. This review provides a critical evaluation of behavioral and brain imaging studies of auditory processing with respect to current theories in ASD. We have focused on auditory-musical processing in terms of global versus local processing and simple versus complex sound processing. This review contributes to a better understanding of auditory processing differences in ASD. A deeper comprehension of sensory perception in ASD is key to better defining ASD phenotypes and, in turn, may lead to better interventions.

  14. A blueprint for vocal learning: auditory predispositions from brains to genomes

    PubMed Central

    Wheatcroft, David; Qvarnström, Anna

    2015-01-01

    Memorizing and producing complex strings of sound are requirements for spoken human language. We share these behaviours with likely more than 4000 species of songbirds, making birds our primary model for studying the cognitive basis of vocal learning and, more generally, an important model for how memories are encoded in the brain. In songbirds, as in humans, the sounds that a juvenile learns later in life depend on auditory memories formed early in development. Experiments on a wide variety of songbird species suggest that the formation and lability of these auditory memories, in turn, depend on auditory predispositions that stimulate learning when a juvenile hears relevant, species-typical sounds. We review evidence that variation in key features of these auditory predispositions is determined by variation in genes underlying the development of the auditory system. We argue that increased investigation of the neuronal basis of auditory predispositions expressed early in life, in combination with modern comparative genomic approaches, may provide insights into the evolution of vocal learning. PMID:26246333

  15. Turning down the noise: the benefit of musical training on the aging auditory brain.

    PubMed

    Alain, Claude; Zendel, Benjamin Rich; Hutka, Stefanie; Bidelman, Gavin M

    2014-02-01

    Age-related decline in hearing abilities is a ubiquitous part of aging, and commonly impacts speech understanding, especially when there are competing sound sources. While such age effects are partially due to changes within the cochlea, difficulties typically exist beyond measurable hearing loss, suggesting that central brain processes, as opposed to simple peripheral mechanisms (e.g., hearing sensitivity), play a critical role in governing hearing abilities late into life. Current training regimens aimed at improving central auditory processing abilities have had limited success in promoting listening benefits. Interestingly, recent studies suggest that in young adults, musical training positively modifies neural mechanisms, providing robust, long-lasting improvements to hearing abilities as well as to non-auditory tasks that engage cognitive control. These results offer the encouraging possibility that musical training might be used to counteract age-related changes in auditory cognition commonly observed in older adults. Here, we review studies that have examined the effects of age and musical experience on auditory cognition, with an emphasis on auditory scene analysis. We infer that musical training may offer potential benefits to complex listening and might be utilized as a means to delay or even attenuate declines in auditory perception and cognition that often emerge later in life.

  16. Human auditory evoked potentials in the assessment of brain function during major cardiovascular surgery.

    PubMed

    Rodriguez, Rosendo A

    2004-06-01

    Focal neurologic and intellectual deficits or memory problems are relatively frequent after cardiac surgery. These complications have been associated with cerebral hypoperfusion, embolization, and inflammation that occur during or after surgery. Auditory evoked potentials, a neurophysiologic technique that evaluates the function of neural structures from the auditory nerve to the cortex, provide useful information about the functional status of the brain during major cardiovascular procedures. Skepticism regarding the presence of artifacts or difficulty in their interpretation has outweighed considerations of its potential utility and noninvasiveness. This paper reviews the evidence of their potential applications in several aspects of the management of cardiac surgery patients. The sensitivity of auditory evoked potentials to the effects of changes in brain temperature makes them useful for monitoring cerebral hypothermia and rewarming during cardiopulmonary bypass. The close relationship between evoked potential waveforms and specific anatomic structures facilitates the assessment of the functional integrity of the central nervous system in cardiac surgery patients. This feature may also be relevant in the management of critical patients under sedation and coma or in the evaluation of their prognosis during critical care. Their objectivity, reproducibility, and relative insensitivity to learning effects make auditory evoked potentials attractive for the cognitive assessment of cardiac surgery patients. From a clinical perspective, auditory evoked potentials represent an additional window for the study of underlying cerebral processes in healthy and diseased patients. From a research standpoint, this technology offers opportunities for a better understanding of the particular cerebral deficits associated with patients who are undergoing major cardiovascular procedures.

  17. Expression of c-fos in auditory and non-auditory brain regions of the gerbil after manipulations that induce tinnitus.

    PubMed

    Wallhäusser-Franke, E; Mahlke, C; Oliva, R; Braun, S; Wenz, G; Langner, G

    2003-12-01

    Subjective tinnitus is a phantom sound sensation that does not result from acoustic stimulation and is audible to the affected subject only. Tinnitus-like sensations in animals can be evoked by procedures that also cause tinnitus in humans. In gerbils, we investigated brain activation after systemic application of sodium salicylate or exposure to loud noise, both known to be reliable tinnitus-inductors. Brains were screened for neurons containing the c-fos protein. After salicylate injections, auditory cortex was the only auditory area with consistently increased numbers of immunoreactive neurons compared to controls. Exposure to impulse noise led to prolonged c-fos expression in auditory cortex and dorsal cochlear nucleus. After both manipulations c-fos expression was increased in the amygdala, in thalamic midline, and intralaminar areas, in frontal cortex, as well as in hypothalamic and brainstem regions involved in behavioral and physiological defensive reactions. Activation of these non-auditory areas was attributed to acute stress, to aversive-affective components and autonomous reactions associated with the treatments and a resulting tinnitus. The present findings are in accordance with former results that provided evidence for suppressed activation in auditory midbrain but enhanced activation of the auditory cortex after injecting high doses of salicylate. In addition, our present results provide evidence that acute stress coinciding with a disruption of hearing may evoke activation of the auditory cortex. We interpret these results in favor of our model of central tinnitus generation.

  18. Multi-channel atomic magnetometer for magnetoencephalography: a configuration study.

    PubMed

    Kim, Kiwoong; Begus, Samo; Xia, Hui; Lee, Seung-Kyun; Jazbinsek, Vojko; Trontelj, Zvonko; Romalis, Michael V

    2014-04-01

    Atomic magnetometers are emerging as an alternative to SQUID magnetometers for the detection of biological magnetic fields. They have been used to measure both magnetocardiography (MCG) and magnetoencephalography (MEG) signals. One of the virtues of atomic magnetometers is their ability to operate as a multi-channel detector while using many common elements. Here we study two configurations of such a multi-channel atomic magnetometer optimized for MEG detection. We describe measurements of auditory evoked fields (AEF) from a human brain, as well as localization of dipolar phantoms and of the AEF. A clear N100m peak in the AEF was observed with a signal-to-noise ratio higher than 10 after averaging of 250 stimuli. Currently, the intrinsic magnetic noise level is 4 fT Hz^(-1/2) at 10 Hz. We compare the performance of the two systems with regard to current source localization and discuss future development of atomic MEG systems.
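    The "SNR higher than 10 after averaging of 250 stimuli" reflects standard evoked-response averaging: the stimulus-locked response repeats on every trial while the sensor noise is independent, so averaging N trials improves the amplitude SNR by roughly √N. A minimal sketch with invented numbers (not the paper's recordings):

```python
import numpy as np

rng = np.random.default_rng(5)

# Each trial = fixed evoked waveform + independent sensor noise.
n_trials, n_samples = 250, 400
evoked = np.sin(2 * np.pi * np.arange(n_samples) / 100.0)  # stand-in "N100m" shape
trials = evoked + rng.normal(scale=5.0, size=(n_trials, n_samples))

average = trials.mean(axis=0)

# Amplitude SNR before and after averaging.
snr_single = np.std(evoked) / 5.0
snr_avg = np.std(evoked) / np.std(average - evoked)
```

With 250 trials the expected improvement factor is √250 ≈ 15.8, which is how a single-trial response buried in noise becomes a clear averaged peak.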

  19. Brain Network Interactions in Auditory, Visual and Linguistic Processing

    ERIC Educational Resources Information Center

    Horwitz, Barry; Braun, Allen R.

    2004-01-01

    In the paper, we discuss the importance of network interactions between brain regions in mediating performance of sensorimotor and cognitive tasks, including those associated with language processing. Functional neuroimaging, especially PET and fMRI, provide data that are obtained essentially simultaneously from much of the brain, and thus are…

  20. Neurogenesis in the brain auditory pathway of a marsupial, the northern native cat (Dasyurus hallucatus)

    SciTech Connect

    Aitkin, L.; Nelson, J.; Farrington, M.; Swann, S.

    1991-07-08

    Neurogenesis in the auditory pathway of the marsupial Dasyurus hallucatus was studied. Intraperitoneal injections of tritiated thymidine (20-40 microCi) were made into pouch-young varying from 1 to 56 days pouch-life. Animals were killed as adults and brain sections were prepared for autoradiography and counterstained with a Nissl stain. Neurons in the ventral cochlear nucleus were generated prior to 3 days pouch-life, in the superior olive at 5-7 days, and in the dorsal cochlear nucleus over a prolonged period. Inferior collicular neurogenesis lagged behind that in the medial geniculate, the latter taking place between days 3 and 9 and the former between days 7 and 22. Neurogenesis began in the auditory cortex on day 9 and was completed by about day 42. Thus neurogenesis was complete in the medullary auditory nuclei before that in the midbrain commenced, and in the medial geniculate before that in the auditory cortex commenced. The time course of neurogenesis in the auditory pathway of the native cat was very similar to that in another marsupial, the brushtail possum. For both, neurogenesis occurred earlier than in eutherian mammals of a similar size but was more protracted.

  1. Auditory perception and syntactic cognition: brain activity-based decoding within and across subjects.

    PubMed

    Herrmann, Björn; Maess, Burkhard; Kalberlah, Christian; Haynes, John-Dylan; Friederici, Angela D

    2012-05-01

    The present magnetoencephalography study investigated whether the brain states of early syntactic and auditory-perceptual processes can be decoded from single-trial recordings with a multivariate pattern classification approach. In particular, it was investigated whether the early neural activation patterns in response to rule violations in basic auditory perception and in high cognitive processes (syntax) reflect a functional organization that largely generalizes across individuals or is subject-specific. To this end, subjects were auditorily presented with correct sentences, syntactically incorrect sentences, correct sentences including an interaural time difference change, and sentences containing both violations. For the analysis, brain state decoding was carried out within and across subjects with three pairwise classifications. Neural patterns elicited by each of the violation sentences were separately classified against the patterns elicited by the correct sentences. The results revealed the highest decoding accuracies over temporal cortex areas for all three classification types. Importantly, both the magnitude and the spatial distribution of decoding accuracies for the early neural patterns were very similar for within-subject and across-subject decoding. At the same time, across-subject decoding suggested a hemispheric bias, with the most consistent patterns in the left hemisphere. Thus, the present data show that not only auditory-perceptual processing brain states but also cognitive brain states of syntactic rule processing can be decoded from single-trial brain activations. Moreover, the findings indicate that the neural patterns in response to syntactic cognition and auditory perception reflect a functional organization that is highly consistent across individuals.
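    The within- versus across-subject decoding contrast can be sketched with a toy pairwise classifier: train on one subject's single-trial patterns and test either on held-out trials from the same subject (within) or on another subject's trials (across). A nearest-class-mean classifier stands in for the study's multivariate method, and all patterns are simulated.

```python
import numpy as np

rng = np.random.default_rng(11)

# Simulated single-trial patterns (trials x sensors) for two conditions
# (e.g. correct vs. syntactically incorrect), for two "subjects" sharing the
# same underlying pattern layout plus a subject-specific global offset.
def make_subject(shift):
    base = np.zeros(20)
    base[:5] = 1.0                      # condition-discriminative sensors
    cond_a = base + rng.normal(0, 0.5, size=(30, 20)) + shift
    cond_b = -base + rng.normal(0, 0.5, size=(30, 20)) + shift
    return cond_a, cond_b

a1, b1 = make_subject(0.0)
a2, b2 = make_subject(0.2)              # second subject, small global offset

def decode(train_a, train_b, test_a, test_b):
    # Nearest-class-mean classification of single trials.
    ma, mb = train_a.mean(0), train_b.mean(0)
    correct = 0
    for trial, label in [(t, 0) for t in test_a] + [(t, 1) for t in test_b]:
        pred = 0 if np.linalg.norm(trial - ma) < np.linalg.norm(trial - mb) else 1
        correct += pred == label
    return correct / (len(test_a) + len(test_b))

within = decode(a1[:15], b1[:15], a1[15:], b1[15:])
across = decode(a1, b1, a2, b2)         # train on subject 1, test on subject 2
```

When the condition-specific pattern layout is shared across subjects, as the study concludes, across-subject accuracy stays close to within-subject accuracy despite individual offsets.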

  2. Localized brain activation related to the strength of auditory learning in a parrot.

    PubMed

    Eda-Fujiwara, Hiroko; Imagawa, Takuya; Matsushita, Masanori; Matsuda, Yasushi; Takeuchi, Hiro-Aki; Satoh, Ryohei; Watanabe, Aiko; Zandbergen, Matthijs A; Manabe, Kazuchika; Kawashima, Takashi; Bolhuis, Johan J

    2012-01-01

    Parrots and songbirds learn their vocalizations from a conspecific tutor, much like human infants acquire spoken language. Parrots can learn human words and it has been suggested that they can use them to communicate with humans. The caudomedial pallium in the parrot brain is homologous with that of songbirds, and analogous to the human auditory association cortex, involved in speech processing. Here we investigated neuronal activation, measured as expression of the protein product of the immediate early gene ZENK, in relation to auditory learning in the budgerigar (Melopsittacus undulatus), a parrot. Budgerigar males successfully learned to discriminate two Japanese words spoken by another male conspecific. Re-exposure to the two discriminanda led to increased neuronal activation in the caudomedial pallium, but not in the hippocampus, compared to untrained birds that were exposed to the same words, or were not exposed to words. Neuronal activation in the caudomedial pallium of the experimental birds was correlated significantly and positively with the percentage of correct responses in the discrimination task. These results suggest that in a parrot, the caudomedial pallium is involved in auditory learning. Thus, in parrots, songbirds and humans, analogous brain regions may contain the neural substrate for auditory learning and memory.

  3. Development and modulation of intrinsic membrane properties control the temporal precision of auditory brain stem neurons.

    PubMed

    Franzen, Delwen L; Gleiss, Sarah A; Berger, Christina; Kümpfbeck, Franziska S; Ammer, Julian J; Felmy, Felix

    2015-01-15

    Passive and active membrane properties determine the voltage responses of neurons. Within the auditory brain stem, refinements in these intrinsic properties during late postnatal development usually generate short integration times and precise action-potential generation. This developmentally acquired temporal precision is crucial for auditory signal processing. How the interactions of these intrinsic properties develop in concert to enable auditory neurons to transfer information with high temporal precision has not yet been elucidated in detail. Here, we show how the developmental interaction of intrinsic membrane parameters generates high firing precision. We performed in vitro recordings from neurons of postnatal days 9-28 in the ventral nucleus of the lateral lemniscus of Mongolian gerbils, an auditory brain stem structure that converts excitatory to inhibitory information with high temporal precision. During this developmental period, the input resistance and capacitance decrease, and action potentials acquire faster kinetics and enhanced precision. Depending on the stimulation time course, the input resistance and capacitance contribute differentially to action-potential thresholds. The decrease in input resistance, however, is sufficient to explain the enhanced action-potential precision. Alterations in passive membrane properties also interact with a developmental change in potassium currents to generate the emergence of the mature firing pattern, characteristic of coincidence-detector neurons. Cholinergic receptor-mediated depolarizations further modulate this intrinsic excitability profile by eliciting changes in the threshold and firing pattern, irrespective of the developmental stage. Thus our findings reveal how intrinsic membrane properties interact developmentally to promote temporally precise information processing.
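    The passive parameters discussed above combine into the membrane time constant, τ = R_input × C_membrane, which sets how quickly the voltage settles and hence the neuron's integration window; the developmental drop in both quantities shortens τ. The numeric values below are illustrative placeholders, not the paper's measurements.

```python
# Membrane time constant from input resistance and membrane capacitance.
def time_constant_ms(r_input_mohm, c_membrane_pf):
    # R [MOhm] * C [pF] gives tau in microseconds; convert to milliseconds.
    return r_input_mohm * c_membrane_pf * 1e-3

tau_young = time_constant_ms(300.0, 40.0)   # immature: high R_in and C_m
tau_mature = time_constant_ms(60.0, 20.0)   # mature: both reduced
```

With these placeholder values τ falls from 12 ms to 1.2 ms, illustrating how a reduced input resistance and capacitance jointly produce the shorter integration times and enhanced action-potential precision described in the abstract.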

  4. Brain stem auditory evoked responses in human infants and adults

    NASA Technical Reports Server (NTRS)

    Hecox, K.; Galambos, R.

    1974-01-01

    Brain stem evoked potentials were recorded by conventional scalp electrodes in infants (3 weeks to 3 years of age) and adults. The latency of one of the major response components (wave V) is shown to be a function both of click intensity and the age of the subject; this latency at a given signal strength shortens postnatally to reach the adult value (about 6 msec) by 12 to 18 months of age. The demonstrated reliability and limited variability of these brain stem electrophysiological responses provide the basis for an optimistic estimate of their usefulness as an objective method for assessing hearing in infants and adults.

  5. Fast reconfiguration of high-frequency brain networks in response to surprising changes in auditory input.

    PubMed

    Nicol, Ruth M; Chapman, Sandra C; Vértes, Petra E; Nathan, Pradeep J; Smith, Marie L; Shtyrov, Yury; Bullmore, Edward T

    2012-03-01

    How do human brain networks react to dynamic changes in the sensory environment? We measured rapid changes in brain network organization in response to brief, discrete, salient auditory stimuli. We estimated network topology and distance parameters in the immediate central response period, <1 s following auditory presentation of standard tones interspersed with occasional deviant tones in a mismatch-negativity (MMN) paradigm, using magnetoencephalography (MEG) to measure synchronization of high-frequency (gamma band; 33-64 Hz) oscillations in healthy volunteers. We found that global small-world parameters of the networks were conserved between the standard and deviant stimuli. However, surprising or unexpected auditory changes were associated with local changes in clustering of connections between temporal and frontal cortical areas and with increased interlobar, long-distance synchronization during the 120- to 250-ms epoch (coinciding with the MMN-evoked response). Network analysis of human MEG data can resolve fast local topological reconfiguration and more long-range synchronization of high-frequency networks as a systems-level representation of the brain's immediate response to salient stimuli in the dynamically changing sensory environment.
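    The local clustering changes reported here come from graph metrics computed on networks built by thresholding a sensor-by-sensor synchronization matrix. A minimal sketch of that pipeline, with a random symmetric matrix standing in for gamma-band synchronization values:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "synchronization" matrix between 8 sensors: symmetric, zero diagonal.
sync = rng.uniform(0.0, 1.0, size=(8, 8))
sync = (sync + sync.T) / 2.0
np.fill_diagonal(sync, 0.0)

# Threshold into a binary adjacency matrix (edge = strong synchronization).
adj = (sync > 0.5).astype(int)

def avg_clustering(adj):
    """Mean local clustering coefficient of an undirected binary graph."""
    n = adj.shape[0]
    coeffs = []
    for i in range(n):
        nbrs = np.flatnonzero(adj[i])
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)          # clustering undefined; use 0
            continue
        links = adj[np.ix_(nbrs, nbrs)].sum() / 2  # edges among neighbours
        coeffs.append(links / (k * (k - 1) / 2))
    return float(np.mean(coeffs))

c = avg_clustering(adj)
```

Comparing this statistic (and path-length-based small-world measures) between standard-tone and deviant-tone epochs is the kind of contrast the study uses to detect local topological reconfiguration.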

  6. Brain stem auditory evoked potentials: effects of ovarian steroids correlated with increased incidence of Bell's palsy in pregnancy.

    PubMed

    Ben David, Y; Tal, J; Podoshin, L; Fradis, M; Sharf, M; Pratt, H; Faraggi, D

    1995-07-01

    To investigate the effect of ovarian steroids on the brain stem during changes of estrogen and progesterone blood levels, we recorded brain stem auditory evoked potentials with increased stimulus rates from 26 women treated for sterility by menotropins (Pergonal and Metrodin). These women were divided into three groups according to their estrogen and progesterone blood levels. The brain stem auditory evoked potential results revealed a significant delay of peak III only, with an increased stimulus rate in the group with the highest estrogen level. Estrogen may cause a brain stem synaptic impairment, presumably because of ischemic changes, and thus also may be responsible for a higher incidence of Bell's palsy during pregnancy.

  7. Effect of acupuncture on the auditory evoked brain stem potential in Parkinson's disease.

    PubMed

    Wang, Lingling; He, Chong; Liu, Yueguang; Zhu, Lili

    2002-03-01

    On auditory evoked brain stem potential (ABP) examination, the latency of wave V and the III-V and I-V interpeak intervals were significantly shortened in Parkinson's disease patients of the treatment group (N = 29) after acupuncture treatment. The difference in cumulative scores on Webster's scale was also decreased in the correlation analysis. An increase of dopamine in the brain and of the excitability of dopamine neurons may contribute to the therapeutic effects, described in TCM terms as subduing the pathogenic wind and tranquilizing the mind.

  8. Early auditory processing in area V5/MT+ of the congenitally blind brain.

    PubMed

    Watkins, Kate E; Shakespeare, Timothy J; O'Donoghue, M Clare; Alexander, Iona; Ragge, Nicola; Cowey, Alan; Bridge, Holly

    2013-11-13

    Previous imaging studies of congenital blindness have studied individuals with heterogeneous causes of blindness, which may influence the nature and extent of cross-modal plasticity. Here, we scanned a homogeneous group of blind people with bilateral congenital anophthalmia, a condition in which both eyes fail to develop, and, as a result, the visual pathway is not stimulated by either light or retinal waves. This model of congenital blindness presents an opportunity to investigate the effects of very early visual deafferentation on the functional organization of the brain. In anophthalmic animals, the occipital cortex receives direct subcortical auditory input. We hypothesized that this pattern of subcortical reorganization ought to result in a topographic mapping of auditory frequency information in the occipital cortex of anophthalmic people. Using functional MRI, we examined auditory-evoked activity to pure tones of high, medium, and low frequencies. Activity in the superior temporal cortex was significantly reduced in anophthalmic compared with sighted participants. In the occipital cortex, a region corresponding to the cytoarchitectural area V5/MT+ was activated in the anophthalmic participants but not in sighted controls. Whereas previous studies in the blind indicate that this cortical area is activated to auditory motion, our data show it is also active for trains of pure tone stimuli and in some anophthalmic participants shows a topographic mapping (tonotopy). Therefore, this region appears to be performing early sensory processing, possibly served by direct subcortical input from the pulvinar to V5/MT+.

  9. A vision-free brain-computer interface (BCI) paradigm based on auditory selective attention.

    PubMed

    Kim, Do-Won; Cho, Jae-Hyun; Hwang, Han-Jeong; Lim, Jeong-Hwan; Im, Chang-Hwan

    2011-01-01

    The majority of recently developed brain-computer interface (BCI) systems use visual stimuli or visual feedback. However, BCI paradigms based on visual perception may not be applicable to severely locked-in patients who have lost the ability to control their eye movements or even their vision. In the present study, we investigated the feasibility of a vision-free BCI paradigm based on auditory selective attention. We used the power difference of auditory steady-state responses (ASSRs) as the participant modulated attention to the target auditory stimulus. The auditory stimuli were constructed as two pure-tone burst trains with different beat frequencies (37 and 43 Hz), generated simultaneously from two speakers located at different positions (left and right). Our experimental results showed classification accuracies high enough for a binary decision (64.67%, 30 commands/min, information transfer rate (ITR) = 1.89 bits/min; 74.00%, 12 commands/min, ITR = 2.08 bits/min; 82.00%, 6 commands/min, ITR = 1.92 bits/min; 84.33%, 3 commands/min, ITR = 1.12 bits/min; without any artifact rejection, inter-trial interval = 6 sec). Based on the suggested paradigm, we implemented the first online ASSR-based BCI system, demonstrating the possibility of a totally vision-free BCI system.
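    The ITR figures quoted in this record are consistent with the standard Wolpaw formula for an N-class BCI. As a minimal sketch (not code from the study; the function name is my own), the reported bits-per-minute values can be reproduced as:

```python
import math

def wolpaw_itr(accuracy, n_classes, commands_per_min):
    """Wolpaw information transfer rate in bits/min.

    bits/trial = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)),
    scaled by the command rate.
    """
    p, n = accuracy, n_classes
    bits = math.log2(n)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * commands_per_min

# e.g. 64.67% accuracy, binary decision, 30 commands/min -> ~1.89 bits/min
```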

  10. High Resolution Quantitative Synaptic Proteome Profiling of Mouse Brain Regions After Auditory Discrimination Learning

    PubMed Central

    Kolodziej, Angela; Smalla, Karl-Heinz; Richter, Sandra; Engler, Alexander; Pielot, Rainer; Dieterich, Daniela C.; Tischmeyer, Wolfgang; Naumann, Michael; Kähne, Thilo

    2016-01-01

    The molecular synaptic mechanisms underlying auditory learning and memory remain largely unknown. Here, the workflow of a proteomic study on auditory discrimination learning in mice is described. In this learning paradigm, mice are trained in a shuttle box Go/NoGo-task to discriminate between rising and falling frequency-modulated tones in order to avoid a mild electric foot-shock. The protocol involves the enrichment of synaptosomes from four brain areas, namely the auditory cortex, frontal cortex, hippocampus, and striatum, at different stages of training. Synaptic protein expression patterns obtained from trained mice are compared to naïve controls using a proteomic approach. To achieve sufficient analytical depth, samples are fractionated in three different ways prior to mass spectrometry, namely 1D SDS-PAGE/in-gel digestion, in-solution digestion and phospho-peptide enrichment. High-resolution proteomic analysis on a mass spectrometer and label-free quantification are used to examine synaptic protein profiles in phospho-peptide-depleted and phospho-peptide-enriched fractions of synaptosomal protein samples. A commercial software package is utilized to reveal proteins and phospho-peptides with significantly regulated relative synaptic abundance levels (trained/naïve controls). Common and differential regulation modes for the synaptic proteome in the investigated brain regions of mice after training were observed. Subsequently, meta-analyses utilizing several databases are employed to identify underlying cellular functions and biological pathways. PMID:28060347

  11. High Resolution Quantitative Synaptic Proteome Profiling of Mouse Brain Regions After Auditory Discrimination Learning.

    PubMed

    Kolodziej, Angela; Smalla, Karl-Heinz; Richter, Sandra; Engler, Alexander; Pielot, Rainer; Dieterich, Daniela C; Tischmeyer, Wolfgang; Naumann, Michael; Kähne, Thilo

    2016-12-15

    The molecular synaptic mechanisms underlying auditory learning and memory remain largely unknown. Here, the workflow of a proteomic study on auditory discrimination learning in mice is described. In this learning paradigm, mice are trained in a shuttle box Go/NoGo-task to discriminate between rising and falling frequency-modulated tones in order to avoid a mild electric foot-shock. The protocol involves the enrichment of synaptosomes from four brain areas, namely the auditory cortex, frontal cortex, hippocampus, and striatum, at different stages of training. Synaptic protein expression patterns obtained from trained mice are compared to naïve controls using a proteomic approach. To achieve sufficient analytical depth, samples are fractionated in three different ways prior to mass spectrometry, namely 1D SDS-PAGE/in-gel digestion, in-solution digestion and phospho-peptide enrichment. High-resolution proteomic analysis on a mass spectrometer and label-free quantification are used to examine synaptic protein profiles in phospho-peptide-depleted and phospho-peptide-enriched fractions of synaptosomal protein samples. A commercial software package is utilized to reveal proteins and phospho-peptides with significantly regulated relative synaptic abundance levels (trained/naïve controls). Common and differential regulation modes for the synaptic proteome in the investigated brain regions of mice after training were observed. Subsequently, meta-analyses utilizing several databases are employed to identify underlying cellular functions and biological pathways.

  12. Electrical Brain Responses to an Auditory Illusion and the Impact of Musical Expertise.

    PubMed

    Ioannou, Christos I; Pereda, Ernesto; Lindsen, Job P; Bhattacharya, Joydeep

    2015-01-01

    The presentation of two sinusoidal tones, one to each ear, with a slight frequency mismatch yields an auditory illusion of a beating frequency equal to the frequency difference between the two tones; this is known as binaural beat (BB). The effect of brief BB stimulation on scalp EEG has not been conclusively demonstrated. Further, no studies have examined the impact of musical training on responses to BB stimulation, even though musicians' brains are often associated with enhanced auditory processing. In this study, we analysed EEG brain responses from two groups, musicians and non-musicians, stimulated by short presentations (1 min) of binaural beats with beat frequency varying from 1 Hz to 48 Hz. We focused our analysis on alpha and gamma band EEG signals, analysed in terms of spectral power and of functional connectivity, as measured by two phase-synchrony-based measures: phase locking value and phase lag index. Finally, these measures were used to characterize the degree of centrality, segregation and integration of the functional brain network. We found that beat frequencies belonging to the alpha band produced the most significant steady-state responses across groups. Further, processing of low-frequency (delta, theta, alpha) binaural beats had a significant impact on cortical network patterns in the alpha band oscillations. Altogether, these results provide a neurophysiological account of cortical responses to BB stimulation at varying frequencies, demonstrate a modulation of cortico-cortical connectivity in musicians' brains, and suggest a form of neuronal entrainment bearing both linear and nonlinear relationships to the beat frequencies.
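    The BB stimulus itself is simple to construct: one pure tone per ear, differing by the desired beat frequency. A minimal numpy sketch (illustrative only, not the study's stimulus code; the carrier value below is an assumption, the record does not state one):

```python
import numpy as np

def binaural_beat(carrier_hz, beat_hz, duration_s, fs=44100):
    """Return a stereo signal of shape (2, N): left ear at the carrier frequency,
    right ear at carrier + beat. The perceived beat rate equals the inter-ear
    frequency difference."""
    t = np.arange(int(duration_s * fs)) / fs
    left = np.sin(2 * np.pi * carrier_hz * t)
    right = np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)
    return np.vstack([left, right])

# e.g. a 10 Hz beat on a (hypothetical) 400 Hz carrier, 60 s as in the study
stim = binaural_beat(400, 10, 60.0)
```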

  13. The WIN-speller: a new intuitive auditory brain-computer interface spelling application.

    PubMed

    Kleih, Sonja C; Herweg, Andreas; Kaufmann, Tobias; Staiger-Sälzer, Pit; Gerstner, Natascha; Kübler, Andrea

    2015-01-01

    The objective of this study was to test the usability of a new auditory Brain-Computer Interface (BCI) application for communication. We introduce a word based, intuitive auditory spelling paradigm the WIN-speller. In the WIN-speller letters are grouped by words, such as the word KLANG representing the letters A, G, K, L, and N. Thereby, the decoding step between perceiving a code and translating it to the stimuli it represents becomes superfluous. We tested 11 healthy volunteers and four end-users with motor impairment in the copy spelling mode. Spelling was successful with an average accuracy of 84% in the healthy sample. Three of the end-users communicated with average accuracies of 80% or higher while one user was not able to communicate reliably. Even though further evaluation is required, the WIN-speller represents a potential alternative for BCI based communication in end-users.

  14. Role of auditory brain function assessment by SPECT in cochlear implant side selection.

    PubMed

    Di Nardo, W; Giannantonio, S; Di Giuda, D; De Corso, E; Schinaia, L; Paludetti, G

    2013-02-01

    Pre-surgery evaluation, indications for cochlear implantation and expectations in terms of post-operative functional results remain challenging topics in pre-lingually deaf adults. Our study has the purpose of determining the benefits of Single Photon Emission Tomography (SPECT) assessment in pre-surgical evaluation of pre-lingually deaf adults who are candidates for cochlear implantation. In 7 pre-lingually profoundly deaf patients, brain SPECT was performed at baseline conditions and in bilateral simultaneous multi-frequency acoustic stimulation. Six sagittal tomograms of both temporal cortices were used for semi-quantitative analysis in each patient. Percentage increases in cortical perfusion resulting from auditory stimulation were calculated. The results showed an inter-hemispherical asymmetry of the activation extension and intensity in the stimulated temporal areas. Consistent with the obtained brain activation data, patients were implanted preferring the side that showed higher activation after the acoustic stimulus. Considering the improvement in auditory perception performance, it was possible to point out a relationship between cortical brain activity shown by SPECT and hearing performance, and, even more significant, a correlation between post-operative functional performance and the activation of the most medial part of the sagittal temporal tomograms, corresponding to medium-high frequencies. In light of these findings, we believe that brain SPECT could be considered in the evaluation of deaf patients who are candidates for cochlear implantation, and that it plays a major role in functional assessment of the auditory cortex of pre-lingually deaf subjects, even if further studies are necessary to conclusively establish its utility. Further developments of this technique are possible by using trans-tympanic electrical stimulation of the cochlear promontory, which could give the opportunity to study completely deaf patients, whose evaluation is objectively difficult.

  15. Audio representations of multi-channel EEG: a new tool for diagnosis of brain disorders

    PubMed Central

    Vialatte, François B; Dauwels, Justin; Musha, Toshimitsu; Cichocki, Andrzej

    2012-01-01

    Objective: The objective of this paper is to develop audio representations of electroencephalographic (EEG) multichannel signals, useful for medical practitioners and neuroscientists. The fundamental question explored in this paper is whether clinically valuable information contained in the EEG, not available from the conventional graphical EEG representation, might become apparent through audio representations. Methods and Materials: Music scores are generated from sparse time-frequency maps of EEG signals. Specifically, EEG signals of patients with mild cognitive impairment (MCI) and (healthy) control subjects are considered. Statistical differences in the audio representations of MCI patients and control subjects are assessed through mathematical complexity indexes as well as a perception test; in the latter, participants try to distinguish between audio sequences from MCI patients and control subjects. Results: Several characteristics of the audio sequences, including sample entropy, number of notes, and synchrony, are significantly different in MCI patients and control subjects (Mann-Whitney p < 0.01). Moreover, the participants of the perception test were able to accurately classify the audio sequences (89% correctly classified). Conclusions: The proposed audio representation of multi-channel EEG signals helps to understand the complex structure of EEG. Promising results were obtained on a clinical EEG data set. PMID:23383399
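    Sample entropy, one of the complexity indexes used in the record above, quantifies the irregularity of a sequence as the negative log of the conditional probability that templates matching for m points also match for m + 1 points. A rough numpy sketch of one common formulation (parameter defaults are conventional choices, not the paper's; the function name is my own):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) of a 1-D signal; the tolerance r is scaled by the signal SD.
    Higher values indicate a more irregular (less predictable) sequence."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def count_matches(length):
        # embed the signal into overlapping templates of the given length
        n = len(x) - length + 1
        templates = np.lib.stride_tricks.sliding_window_view(x, length)
        # Chebyshev distance between all template pairs
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=-1)
        # drop self-matches (diagonal), halve for symmetry
        return (np.sum(d <= tol) - n) / 2

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```

    A regular signal (e.g. a slow sinusoid) scores much lower than white noise, which is the property the perception-test comparison between MCI patients and controls relies on.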

  16. Connectivity in the human brain dissociates entropy and complexity of auditory inputs☆

    PubMed Central

    Nastase, Samuel A.; Iacovella, Vittorio; Davis, Ben; Hasson, Uri

    2015-01-01

    Complex systems are described according to two central dimensions: (a) the randomness of their output, quantified via entropy; and (b) their complexity, which reflects the organization of a system's generators. Whereas some approaches hold that complexity can be reduced to uncertainty or entropy, an axiom of complexity science is that signals with very high or very low entropy are generated by relatively non-complex systems, while complex systems typically generate outputs with entropy peaking between these two extremes. In understanding their environment, individuals would benefit from coding for both input entropy and complexity; entropy indexes uncertainty and can inform probabilistic coding strategies, whereas complexity reflects a concise and abstract representation of the underlying environmental configuration, which can serve independent purposes, e.g., as a template for generalization and rapid comparisons between environments. Using functional neuroimaging, we demonstrate that, in response to passively processed auditory inputs, functional integration patterns in the human brain track both the entropy and complexity of the auditory signal. Connectivity between several brain regions scaled monotonically with input entropy, suggesting sensitivity to uncertainty, whereas connectivity between other regions tracked entropy in a convex manner consistent with sensitivity to input complexity. These findings suggest that the human brain simultaneously tracks the uncertainty of sensory data and effectively models their environmental generators. PMID:25536493

  17. Connectivity in the human brain dissociates entropy and complexity of auditory inputs.

    PubMed

    Nastase, Samuel A; Iacovella, Vittorio; Davis, Ben; Hasson, Uri

    2015-03-01

    Complex systems are described according to two central dimensions: (a) the randomness of their output, quantified via entropy; and (b) their complexity, which reflects the organization of a system's generators. Whereas some approaches hold that complexity can be reduced to uncertainty or entropy, an axiom of complexity science is that signals with very high or very low entropy are generated by relatively non-complex systems, while complex systems typically generate outputs with entropy peaking between these two extremes. In understanding their environment, individuals would benefit from coding for both input entropy and complexity; entropy indexes uncertainty and can inform probabilistic coding strategies, whereas complexity reflects a concise and abstract representation of the underlying environmental configuration, which can serve independent purposes, e.g., as a template for generalization and rapid comparisons between environments. Using functional neuroimaging, we demonstrate that, in response to passively processed auditory inputs, functional integration patterns in the human brain track both the entropy and complexity of the auditory signal. Connectivity between several brain regions scaled monotonically with input entropy, suggesting sensitivity to uncertainty, whereas connectivity between other regions tracked entropy in a convex manner consistent with sensitivity to input complexity. These findings suggest that the human brain simultaneously tracks the uncertainty of sensory data and effectively models their environmental generators.

  18. Experience-based Auditory Predictions Modulate Brain Activity to Silence as do Real Sounds.

    PubMed

    Chouiter, Leila; Tzovara, Athina; Dieguez, Sebastian; Annoni, Jean-Marie; Magezi, David; De Lucia, Marzia; Spierer, Lucas

    2015-10-01

    Interactions between the acoustic features of stimuli and experience-based internal models of the environment enable listeners to compensate for the disruptions in auditory streams that are regularly encountered in noisy environments. However, whether auditory gaps are filled in predictively or restored a posteriori remains unclear. The current lack of positive statistical evidence that internal models can actually shape brain activity as real sounds do precludes accepting predictive accounts of the filling-in phenomenon. We investigated the neurophysiological effects of internal models by testing whether single-trial electrophysiological responses to omitted sounds in a rule-based sequence of tones with varying pitch could be decoded from the responses to real sounds and by analyzing the ERPs to the omissions with data-driven electrical neuroimaging methods. The decoding of the brain responses to different expected, but omitted, tones in both passive and active listening conditions was above chance based on the responses to the real sound in active listening conditions. Topographic ERP analyses and electrical source estimations revealed that, in the absence of any stimulation, experience-based internal models elicit an electrophysiological activity different from noise and that the temporal dynamics of this activity depend on attention. We further found that the expected change in pitch direction of omitted tones modulated the activity of left posterior temporal areas 140-200 msec after the onset of omissions. Collectively, our results indicate that, even in the absence of any stimulation, internal models modulate brain activity as do real sounds, indicating that auditory filling-in can be accounted for by predictive activity.

  19. [Effect of sleep deprivation on visual evoked potentials and brain stem auditory evoked potentials in epileptics].

    PubMed

    Urumova, L T; Kovalenko, G A; Tsunikov, A I; Sumskiĭ, L I

    1984-01-01

    The article reports on the first study of the evoked activity of the brain in epileptic patients (n = 20) following sleep deprivation. An analysis of the data obtained revealed a tendency toward shortening of the peak latencies of visual evoked potentials in the 100-200 ms range, and of the V component and the III-V interpeak interval of brain stem auditory evoked potentials, in patients with temporal epilepsy. The phenomenon may indicate the elimination of stabilizing control involving the specific conductive pathways and, possibly, accelerated conduction of a specific sensory signal.

  20. Effects of Visual and Auditory Background on Reading Achievement Test Performance of Brain-Injured and Non Brain-Injured Children.

    ERIC Educational Resources Information Center

    Carter, John L.

    Forty-two brain injured boys and 42 non brain injured boys (aged 11-6 to 12-6) were tested to determine the effects of increasing amounts of visual and auditory distraction on reading performance. The Stanford Achievement Reading Comprehension Test was administered with three degrees of distraction. The visual distraction consisted of either very…

  1. Synaptic proteome changes in mouse brain regions upon auditory discrimination learning.

    PubMed

    Kähne, Thilo; Kolodziej, Angela; Smalla, Karl-Heinz; Eisenschmidt, Elke; Haus, Utz-Uwe; Weismantel, Robert; Kropf, Siegfried; Wetzel, Wolfram; Ohl, Frank W; Tischmeyer, Wolfgang; Naumann, Michael; Gundelfinger, Eckart D

    2012-08-01

    Changes in synaptic efficacy underlying learning and memory processes are assumed to be associated with alterations of the protein composition of synapses. Here, we performed a quantitative proteomic screen to monitor changes in the synaptic proteome of four brain areas (auditory cortex, frontal cortex, hippocampus, striatum) during auditory learning. Mice were trained in a shuttle box GO/NO-GO paradigm to discriminate between rising and falling frequency modulated tones to avoid mild electric foot shock. Control-treated mice received corresponding numbers of either the tones or the foot shocks. Six hours and 24 h later, the composition of a fraction enriched in synaptic cytomatrix-associated proteins was compared to that obtained from naïve mice by quantitative mass spectrometry. In the synaptic protein fraction obtained from trained mice, the average percentage (±SEM) of downregulated proteins (59.9 ± 0.5%) exceeded that of upregulated proteins (23.5 ± 0.8%) in the brain regions studied. This effect was significantly smaller in foot shock (42.7 ± 0.6% down, 40.7 ± 1.0% up) and tone controls (43.9 ± 1.0% down, 39.7 ± 0.9% up). These data suggest that learning processes initially induce removal and/or degradation of proteins from presynaptic and postsynaptic cytoskeletal matrices before these structures can acquire a new, postlearning organisation. In silico analysis points to a general role of insulin-like signalling in this process.

  2. Case study: auditory brain responses in a minimally verbal child with autism and cerebral palsy

    PubMed Central

    Yau, Shu H.; McArthur, Genevieve; Badcock, Nicholas A.; Brock, Jon

    2015-01-01

    An estimated 30% of individuals with autism spectrum disorders (ASD) remain minimally verbal into late childhood, but research on cognition and brain function in ASD focuses almost exclusively on those with good or only moderately impaired language. Here we present a case study investigating auditory processing of GM, a nonverbal child with ASD and cerebral palsy. At the age of 8 years, GM was tested using magnetoencephalography (MEG) whilst passively listening to speech sounds and complex tones. Where typically developing children and verbal autistic children all demonstrated similar brain responses to speech and nonspeech sounds, GM produced much stronger responses to nonspeech than speech, particularly in the 65–165 ms (M50/M100) time window post-stimulus onset. GM was retested aged 10 years using electroencephalography (EEG) whilst passively listening to pure tone stimuli. Consistent with her MEG response to complex tones, GM showed an unusually early and strong response to pure tones in her EEG responses. The consistency of the MEG and EEG data in this single case study demonstrate both the potential and the feasibility of these methods in the study of minimally verbal children with ASD. Further research is required to determine whether GM's atypical auditory responses are characteristic of other minimally verbal children with ASD or of other individuals with cerebral palsy. PMID:26150768

  3. Electrical Brain Responses to an Auditory Illusion and the Impact of Musical Expertise

    PubMed Central

    Ioannou, Christos I.; Pereda, Ernesto; Lindsen, Job P.; Bhattacharya, Joydeep

    2015-01-01

    The presentation of two sinusoidal tones, one to each ear, with a slight frequency mismatch yields an auditory illusion of a beating frequency equal to the frequency difference between the two tones; this is known as binaural beat (BB). The effect of brief BB stimulation on scalp EEG has not been conclusively demonstrated. Further, no studies have examined the impact of musical training on responses to BB stimulation, even though musicians' brains are often associated with enhanced auditory processing. In this study, we analysed EEG brain responses from two groups, musicians and non-musicians, stimulated by short presentations (1 min) of binaural beats with beat frequency varying from 1 Hz to 48 Hz. We focused our analysis on alpha and gamma band EEG signals, analysed in terms of spectral power and of functional connectivity, as measured by two phase-synchrony-based measures: phase locking value and phase lag index. Finally, these measures were used to characterize the degree of centrality, segregation and integration of the functional brain network. We found that beat frequencies belonging to the alpha band produced the most significant steady-state responses across groups. Further, processing of low-frequency (delta, theta, alpha) binaural beats had a significant impact on cortical network patterns in the alpha band oscillations. Altogether, these results provide a neurophysiological account of cortical responses to BB stimulation at varying frequencies, demonstrate a modulation of cortico-cortical connectivity in musicians' brains, and suggest a form of neuronal entrainment bearing both linear and nonlinear relationships to the beat frequencies. PMID:26065708

  4. Enhanced peripheral visual processing in congenitally deaf humans is supported by multiple brain regions, including primary auditory cortex.

    PubMed

    Scott, Gregory D; Karns, Christina M; Dow, Mark W; Stevens, Courtney; Neville, Helen J

    2014-01-01

    Brain reorganization associated with altered sensory experience clarifies the critical role of neuroplasticity in development. An example is enhanced peripheral visual processing associated with congenital deafness, but the neural systems supporting this have not been fully characterized. A gap in our understanding of deafness-enhanced peripheral vision is the contribution of primary auditory cortex. Previous studies of auditory cortex that use anatomical normalization across participants were limited by inter-subject variability of Heschl's gyrus. In addition to reorganized auditory cortex (cross-modal plasticity), a second gap in our understanding is the contribution of altered modality-specific cortices (visual intramodal plasticity in this case), as well as supramodal and multisensory cortices, especially when target detection is required across contrasts. Here we address these gaps by comparing fMRI signal change for peripheral vs. perifoveal visual stimulation (11-15° vs. 2-7°) in congenitally deaf and hearing participants in a blocked experimental design with two analytical approaches: a Heschl's gyrus region of interest analysis and a whole brain analysis. Our results using individually-defined primary auditory cortex (Heschl's gyrus) indicate that fMRI signal change for more peripheral stimuli was greater than perifoveal in deaf but not in hearing participants. Whole-brain analyses revealed differences between deaf and hearing participants for peripheral vs. perifoveal visual processing in extrastriate visual cortex including primary auditory cortex, MT+/V5, superior-temporal auditory, and multisensory and/or supramodal regions, such as posterior parietal cortex (PPC), frontal eye fields, anterior cingulate, and supplementary eye fields. Overall, these data demonstrate the contribution of neuroplasticity in multiple systems including primary auditory cortex, supramodal, and multisensory regions, to altered visual processing in congenitally deaf adults.

  5. Auditory agnosia.

    PubMed

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition.

  6. Research of brain activation regions of "yes" and "no" responses by auditory stimulations in human EEG

    NASA Astrophysics Data System (ADS)

    Hu, Min; Liu, GuoZhong

    2011-11-01

    People with neuromuscular disorders have difficulty communicating with the outside world. Distinguishing the vegetative state (VS) from the minimally conscious state (MCS) in a patient with a disorder of consciousness (DOC) is very important to the clinician and to the patient's family: a diagnosis of VS means that the hope of recovery is greatly reduced, which may lead the family to abandon treatment. Brain-computer interfaces (BCIs) aim to help such people by analyzing the patient's electroencephalogram (EEG). This paper focuses on identifying the brain regions activated when a subject responds "yes" or "no" to a question presented as an auditory stimulus. When the brain concentrates, the phase in the related area changes from disordered to ordered, so we analyzed the EEG from the angle of phase. Seven healthy subjects volunteered to participate in the experiment, and a total of 84 groups of repeated stimulation tests were performed. First, the signal was decomposed into frequency bands using a wavelet method. Second, the phase of the EEG was extracted with the Hilbert transform. Finally, we obtained the approximate entropy and information entropy of each frequency band of the EEG. The results show that the central area of the brain is activated when people say "yes", and the central and temporal areas are activated when people say "no"; this conclusion corresponds to findings from magnetic resonance imaging. This study provides a theoretical basis and an algorithm-design basis for BCI equipment for people with neuromuscular disorders.
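    The phase-extraction step described in this record (Hilbert transform after band decomposition) can be illustrated with an FFT-based analytic signal. A minimal sketch assuming numpy, with the wavelet band-splitting stage omitted (function names are my own):

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal, equivalent to x + i*Hilbert(x):
    zero out negative frequencies, double the positive ones."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0          # Nyquist bin kept once for even lengths
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def instantaneous_phase(x):
    """Unwrapped instantaneous phase of a (band-limited) real signal."""
    return np.unwrap(np.angle(analytic_signal(x)))
```

    Differencing the unwrapped phase yields the instantaneous frequency; for a narrow EEG band this is the quantity whose orderliness across trials the authors assess via entropy measures.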

  7. Design, simulation and experimental validation of a novel flexible neural probe for deep brain stimulation and multichannel recording.

    PubMed

    Lai, Hsin-Yi; Liao, Lun-De; Lin, Chin-Teng; Hsu, Jui-Hsiang; He, Xin; Chen, You-Yin; Chang, Jyh-Yeong; Chen, Hui-Fen; Tsang, Siny; Shih, Yen-Yu I

    2012-06-01

    An implantable micromachined neural probe with multichannel electrode arrays for both neural signal recording and electrical stimulation was designed, simulated and experimentally validated for deep brain stimulation (DBS) applications. The developed probe has a rough three-dimensional microstructure on the electrode surface to maximize the electrode-tissue contact area. The flexible, polyimide-based microelectrode arrays were each composed of a long shaft (14.9 mm in length) and 16 electrodes (5 µm thick and with a diameter of 16 µm). The ability of these arrays to record and stimulate specific areas in a rat brain was evaluated. Moreover, we have developed a finite element model (FEM) applied to an electric field to evaluate the volume of tissue activated (VTA) by DBS as a function of the stimulation parameters. The signal-to-noise ratio ranged from 4.4 to 5 over a 50 day recording period, indicating that the laboratory-designed neural probe is reliable and may be used successfully for long-term recordings. The somatosensory evoked potential (SSEP) obtained by thalamic stimulations and in vivo electrode-electrolyte interface impedance measurements was stable for 50 days and demonstrated that the neural probe is feasible for long-term stimulation. A strongly linear (positive correlation) relationship was observed among the simulated VTA, the absolute value of the SSEP during the 200 ms post-stimulus period (ΣSSEP) and c-Fos expression, indicating that the simulated VTA has perfect sensitivity to predict the evoked responses (c-Fos expression). This laboratory-designed neural probe and its FEM simulation represent a simple, functionally effective technique for studying DBS and neural recordings in animal models.
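The paper estimates the volume of tissue activated with a full finite element model; as a far cruder illustration of the underlying idea, a monopolar electrode in a homogeneous medium can be treated as a point current source with potential V(r) = I / (4πσr), giving a closed-form activation radius. All parameter values below are illustrative assumptions, not the paper's.

```python
import numpy as np

sigma = 0.3        # tissue conductivity, S/m (assumed)
v_thresh = 0.2     # activation threshold potential, V (assumed)

def vta_radius(current_a):
    """Radius (m) at which V(r) = I / (4*pi*sigma*r) falls to v_thresh."""
    return current_a / (4 * np.pi * sigma * v_thresh)

def vta_volume(current_a):
    """Spherical volume of tissue activated, in mm^3, in this point-source model."""
    r_mm = vta_radius(current_a) * 1e3
    return (4 / 3) * np.pi * r_mm ** 3

# In this toy model the activated volume grows with the cube of the current.
v_low, v_high = vta_volume(1e-4), vta_volume(3e-4)
```

Tripling the stimulation current multiplies the estimated volume by 27 here; an FEM replaces the homogeneous-medium assumption with realistic geometry and conductivities.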

  9. Noise Trauma Induced Plastic Changes in Brain Regions outside the Classical Auditory Pathway

    PubMed Central

    Chen, Guang-Di; Sheppard, Adam; Salvi, Richard

    2017-01-01

The effects of intense noise exposure on the classical auditory pathway have been extensively investigated; however, little is known about the effects of noise-induced hearing loss on non-classical auditory areas in the brain such as the lateral amygdala (LA) and striatum (Str). To address this issue, we compared the noise-induced changes in spontaneous and tone-evoked responses from multiunit clusters (MUC) in the LA and Str with those seen in auditory cortex (AC). High-frequency octave band noise (10–20 kHz) and narrow band noise (16–20 kHz) induced permanent threshold shifts (PTS) at high frequencies within and above the noise band but not at low frequencies. While the noise trauma significantly elevated spontaneous discharge rate (SR) in the AC, SRs in the LA and Str were only slightly increased across all frequencies. The high-frequency noise trauma affected tone-evoked firing rates in a frequency- and time-dependent manner, and the changes appeared to be related to the severity of the noise trauma. In the LA, tone-evoked firing rates were reduced at the high frequencies (trauma area) whereas firing rates were enhanced at the low frequencies or at the edge frequency, depending on the severity of hearing loss at the high frequencies. The firing rate temporal profile changed from a broad plateau to one sharp, delayed peak. In the AC, tone-evoked firing rates were depressed at high frequencies and enhanced at low frequencies while the firing rate temporal profiles became substantially broader. In contrast, firing rates in the Str were generally decreased and firing rate temporal profiles became more phasic and less prolonged. The altered firing rate and pattern at low frequencies induced by high-frequency hearing loss could have perceptual consequences: the tone-evoked hyperactivity in low-frequency MUC could manifest as hyperacusis, whereas the discharge pattern changes could affect temporal resolution and integration. PMID:26701290

  10. Auditory Hallucinations and the Brain's Resting-State Networks: Findings and Methodological Observations.

    PubMed

    Alderson-Day, Ben; Diederen, Kelly; Fernyhough, Charles; Ford, Judith M; Horga, Guillermo; Margulies, Daniel S; McCarthy-Jones, Simon; Northoff, Georg; Shine, James M; Turner, Jessica; van de Ven, Vincent; van Lutterveld, Remko; Waters, Flavie; Jardri, Renaud

    2016-09-01

    In recent years, there has been increasing interest in the potential for alterations to the brain's resting-state networks (RSNs) to explain various kinds of psychopathology. RSNs provide an intriguing new explanatory framework for hallucinations, which can occur in different modalities and population groups, but which remain poorly understood. This collaboration from the International Consortium on Hallucination Research (ICHR) reports on the evidence linking resting-state alterations to auditory hallucinations (AH) and provides a critical appraisal of the methodological approaches used in this area. In the report, we describe findings from resting connectivity fMRI in AH (in schizophrenia and nonclinical individuals) and compare them with findings from neurophysiological research, structural MRI, and research on visual hallucinations (VH). In AH, various studies show resting connectivity differences in left-hemisphere auditory and language regions, as well as atypical interaction of the default mode network and RSNs linked to cognitive control and salience. As the latter are also evident in studies of VH, this points to a domain-general mechanism for hallucinations alongside modality-specific changes to RSNs in different sensory regions. However, we also observed high methodological heterogeneity in the current literature, affecting the ability to make clear comparisons between studies. To address this, we provide some methodological recommendations and options for future research on the resting state and hallucinations.

  11. Subthalamic nucleus deep brain stimulation affects distractor interference in auditory working memory.

    PubMed

    Camalier, Corrie R; Wang, Alice Y; McIntosh, Lindsey G; Park, Sohee; Neimat, Joseph S

    2017-03-01

    Computational and theoretical accounts hypothesize the basal ganglia play a supramodal "gating" role in the maintenance of working memory representations, especially in preservation from distractor interference. There are currently two major limitations to this account. The first is that supporting experiments have focused exclusively on the visuospatial domain, leaving questions as to whether such "gating" is domain-specific. The second is that current evidence relies on correlational measures, as it is extremely difficult to causally and reversibly manipulate subcortical structures in humans. To address these shortcomings, we examined non-spatial, auditory working memory performance during reversible modulation of the basal ganglia, an approach afforded by deep brain stimulation of the subthalamic nucleus. We found that subthalamic nucleus stimulation impaired auditory working memory performance, specifically in the group tested in the presence of distractors, even though the distractors were predictable and completely irrelevant to the encoding of the task stimuli. This study provides key causal evidence that the basal ganglia act as a supramodal filter in working memory processes, further adding to our growing understanding of their role in cognition.

  12. Learning to modulate sensorimotor rhythms with stereo auditory feedback for a brain-computer interface.

    PubMed

    McCreadie, Karl A; Coyle, Damien H; Prasad, Girijesh

    2012-01-01

    Motor imagery can be used to modulate sensorimotor rhythms (SMR) enabling detection of voltage fluctuations on the surface of the scalp using electroencephalographic (EEG) electrodes. Feedback is essential in learning how to intentionally modulate SMR in non-muscular communication using a brain-computer interface (BCI). A BCI that is not reliant upon the visual modality for feedback is an attractive means of communication for the blind and the vision impaired and to release the visual channel for other purposes during BCI usage. The aim of this study is to demonstrate the feasibility of replacing the traditional visual feedback modality with stereo auditory feedback. Twenty participants split into equal groups took part in ten BCI sessions involving motor imagery. The visual feedback group performed best using two performance measures but did not show improvement over time whilst the auditory group improved as the study progressed. Multiple loudspeaker presentation of audio allows the listener to intuitively assign each of two classes to the corresponding lateral position in a free-field listening environment.

  13. Sources of variability in auditory brain stem evoked potential measures over time.

    PubMed

    Edwards, R M; Buchwald, J S; Tanguay, P E; Schwafel, J A

    1982-02-01

Auditory brain stem EPs elicited in 10 normal adults by monaural clicks delivered at 72 dB HL, 20/sec showed no significant change in wave latencies or in the ratio of wave I to wave V amplitude across 250 trial subsets, across 1500 trial blocks within a test session, or across two test sessions separated by several months. Sources of maximum variability were determined by using mean squared differences with all but one condition constant. 'Subjects' was shown to contribute the most variability, followed by 'ears', 'sessions' and 'runs'; collapsing across conditions, wave III latencies were found to be the least variable, while wave II showed the most variability. Some EP morphologies showed extra peaks between waves II and IV, a missing wave IV, or wave IV fused with wave V. Such variations in waveform morphology were independent of EMG amplitude and were characteristic of certain individuals.

  14. Brain dynamics in the auditory Go/NoGo task as a function of EEG frequency.

    PubMed

    Barry, Robert J; De Blasio, Frances; Rushby, Jacqueline A; Clarke, Adam R

    2010-11-01

We examined relationships between the phase of narrow-band electroencephalographic (EEG) activity at stimulus onset and the resultant event-related potentials (ERPs) in an equiprobable auditory Go/NoGo task with a fixed SOA, in the context of a novel conceptualisation of orthogonal phase effects (cortical negativity vs. positivity, negative driving vs. positive driving, waxing vs. waning). ERP responses to each stimulus type were analysed. Prestimulus narrow-band EEG activity (in 1 Hz bands from 1 to 13 Hz) at Cz was assessed for each trial using FFT decomposition of the EEG data. For each frequency, the cycle at stimulus onset was used to sort trials into four phases, for which ERPs were derived from the raw EEG activity at 9 central sites. The occurrence of preferred phase-defined brain states was confirmed at a number of frequencies, crossing the traditional frequency bands. As expected, these did not differ between Go and NoGo stimuli. These preferred states were associated with more efficient processing of the stimulus, as reflected in differences in latency and amplitude of the N1 and P3 ERP components. The present results, although derived in a different paradigm by EEG decomposition methods different from those used previously, confirm the existence of preferred brain states and their impact on the efficiency of brain dynamics involved in perceptual and cognitive processing.
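The trial-sorting procedure described above can be sketched on synthetic data: estimate the phase of one narrow-band frequency at stimulus onset from an FFT of the prestimulus window, bin trials into four phase quadrants, and average the EEG within each bin. The frequency, window length, and trial count below are illustrative assumptions.

```python
import numpy as np

fs, f0 = 250, 10                   # sampling rate and target frequency (Hz), assumed
n_trials, pre, post = 200, 250, 250  # 1 s prestimulus, 1 s poststimulus
rng = np.random.default_rng(1)
trials = rng.standard_normal((n_trials, pre + post))  # synthetic single-site EEG

# Phase of f0 at stimulus onset, taken from the prestimulus FFT bin. For a
# whole number of cycles per window, the phase at window start equals the
# phase at window end (the stimulus onset).
spectrum = np.fft.rfft(trials[:, :pre], axis=1)
k = int(round(f0 * pre / fs))            # FFT bin index corresponding to f0
onset_phase = np.angle(spectrum[:, k])

# Sort trials into four phase quadrants and average each quadrant's EEG.
quadrant = ((onset_phase + np.pi) // (np.pi / 2)).astype(int) % 4
erps = np.stack([trials[quadrant == q].mean(axis=0) for q in range(4)])
```

With real data, systematic N1/P3 latency or amplitude differences across the four `erps` rows would indicate a preferred phase-defined brain state at `f0`.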

  15. Resolving the neural dynamics of visual and auditory scene processing in the human brain: a methodological approach.

    PubMed

    Cichy, Radoslaw Martin; Teng, Santani

    2017-02-19

In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue 'Auditory and visual scene analysis'.

  16. The musical centers of the brain: Vladimir E. Larionov (1857-1929) and the functional neuroanatomy of auditory perception.

    PubMed

    Triarhou, Lazaros C; Verina, Tatyana

    2016-11-01

    In 1899 a landmark paper entitled "On the musical centers of the brain" was published in Pflügers Archiv, based on work carried out in the Anatomo-Physiological Laboratory of the Neuropsychiatric Clinic of Vladimir M. Bekhterev (1857-1927) in St. Petersburg, Imperial Russia. The author of that paper was Vladimir E. Larionov (1857-1929), a military doctor and devoted brain scientist, who pursued the problem of the localization of function in the canine and human auditory cortex. His data detailed the existence of tonotopy in the temporal lobe and further demonstrated centrifugal auditory pathways emanating from the auditory cortex and directed to the opposite hemisphere and lower brain centers. Larionov's discoveries have been largely considered as findings of the Bekhterev school. Perhaps this is why there are limited resources on Larionov, especially keeping in mind his military medical career and the fact that after 1917 he just seems to have practiced otorhinolaryngology in Odessa. Larionov died two years after Bekhterev's mysterious death of 1927. The present study highlights the pioneering contributions of Larionov to auditory neuroscience, trusting that the life and work of Vladimir Efimovich will finally, and deservedly, emerge from the shadow of his celebrated master, Vladimir Mikhailovich.

  18. Proteome rearrangements after auditory learning: high-resolution profiling of synapse-enriched protein fractions from mouse brain.

    PubMed

    Kähne, Thilo; Richter, Sandra; Kolodziej, Angela; Smalla, Karl-Heinz; Pielot, Rainer; Engler, Alexander; Ohl, Frank W; Dieterich, Daniela C; Seidenbecher, Constanze; Tischmeyer, Wolfgang; Naumann, Michael; Gundelfinger, Eckart D

    2016-07-01

    Learning and memory processes are accompanied by rearrangements of synaptic protein networks. While various studies have demonstrated the regulation of individual synaptic proteins during these processes, much less is known about the complex regulation of synaptic proteomes. Recently, we reported that auditory discrimination learning in mice is associated with a relative down-regulation of proteins involved in the structural organization of synapses in various brain regions. Aiming at the identification of biological processes and signaling pathways involved in auditory memory formation, here, a label-free quantification approach was utilized to identify regulated synaptic junctional proteins and phosphoproteins in the auditory cortex, frontal cortex, hippocampus, and striatum of mice 24 h after the learning experiment. Twenty proteins, including postsynaptic scaffolds, actin-remodeling proteins, and RNA-binding proteins, were regulated in at least three brain regions pointing to common, cross-regional mechanisms. Most of the detected synaptic proteome changes were, however, restricted to individual brain regions. For example, several members of the Septin family of cytoskeletal proteins were up-regulated only in the hippocampus, while Septin-9 was down-regulated in the hippocampus, the frontal cortex, and the striatum. Meta analyses utilizing several databases were employed to identify underlying cellular functions and biological pathways. Data are available via ProteomeExchange with identifier PXD003089. How does the protein composition of synapses change in different brain areas upon auditory learning? We unravel discrete proteome changes in mouse auditory cortex, frontal cortex, hippocampus, and striatum functionally implicated in the learning process. We identify not only common but also area-specific biological pathways and cellular processes modulated 24 h after training, indicating individual contributions of the regions to memory processing.

  19. Bigger brains or bigger nuclei? Regulating the size of auditory structures in birds.

    PubMed

    Kubke, M Fabiana; Massoglia, Dino P; Carr, Catherine E

    2004-01-01

    Increases in the size of the neuronal structures that mediate specific behaviors are believed to be related to enhanced computational performance. It is not clear, however, what developmental and evolutionary mechanisms mediate these changes, nor whether an increase in the size of a given neuronal population is a general mechanism to achieve enhanced computational ability. We addressed the issue of size by analyzing the variation in the relative number of cells of auditory structures in auditory specialists and generalists. We show that bird species with different auditory specializations exhibit variation in the relative size of their hindbrain auditory nuclei. In the barn owl, an auditory specialist, the hindbrain auditory nuclei involved in the computation of sound location show hyperplasia. This hyperplasia was also found in songbirds, but not in non-auditory specialists. The hyperplasia of auditory nuclei was also not seen in birds with large body weight suggesting that the total number of cells is selected for in auditory specialists. In barn owls, differences observed in the relative size of the auditory nuclei might be attributed to modifications in neurogenesis and cell death. Thus, hyperplasia of circuits used for auditory computation accompanies auditory specialization in different orders of birds.

  20. Combining functional and anatomical connectivity reveals brain networks for auditory language comprehension.

    PubMed

    Saur, Dorothee; Schelter, Björn; Schnell, Susanne; Kratochvil, David; Küpper, Hanna; Kellmeyer, Philipp; Kümmerer, Dorothee; Klöppel, Stefan; Glauche, Volkmar; Lange, Rüdiger; Mader, Wolfgang; Feess, David; Timmer, Jens; Weiller, Cornelius

    2010-02-15

    Cognitive functions are organized in distributed, overlapping, and interacting brain networks. Investigation of those large-scale brain networks is a major task in neuroimaging research. Here, we introduce a novel combination of functional and anatomical connectivity to study the network topology subserving a cognitive function of interest. (i) In a given network, direct interactions between network nodes are identified by analyzing functional MRI time series with the multivariate method of directed partial correlation (dPC). This method provides important improvements over shortcomings that are typical for ordinary (partial) correlation techniques. (ii) For directly interacting pairs of nodes, a region-to-region probabilistic fiber tracking on diffusion tensor imaging data is performed to identify the most probable anatomical white matter fiber tracts mediating the functional interactions. This combined approach is applied to the language domain to investigate the network topology of two levels of auditory comprehension: lower-level speech perception (i.e., phonological processing) and higher-level speech recognition (i.e., semantic processing). For both processing levels, dPC analyses revealed the functional network topology and identified central network nodes by the number of direct interactions with other nodes. Tractography showed that these interactions are mediated by distinct ventral (via the extreme capsule) and dorsal (via the arcuate/superior longitudinal fascicle fiber system) long- and short-distance association tracts as well as commissural fibers. Our findings demonstrate how both processing routines are segregated in the brain on a large-scale network level. Combining dPC with probabilistic tractography is a promising approach to unveil how cognitive functions emerge through interaction of functionally interacting and anatomically interconnected brain regions.
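The study's directed partial correlation (dPC) is a multivariate, directed method; the sketch below shows only the ordinary (undirected) partial correlation it improves upon, computed from the precision matrix of synthetic regional time series. The data, region count, and coupling weights are assumptions made for illustration; note how the indirect 0-2 link, mediated entirely by region 1, is suppressed.

```python
import numpy as np

rng = np.random.default_rng(2)
n_time, n_rois = 400, 5
ts = rng.standard_normal((n_time, n_rois))
ts[:, 1] += 0.8 * ts[:, 0]               # direct 0 <-> 1 link
ts[:, 2] += 0.8 * ts[:, 1]               # chain 0 -> 1 -> 2 (no direct 0-2 link)

# Partial correlation between each pair, conditioning on all other regions,
# obtained by standardizing the inverse covariance (precision) matrix.
precision = np.linalg.inv(np.cov(ts, rowvar=False))
d = np.sqrt(np.diag(precision))
pcorr = -precision / np.outer(d, d)
np.fill_diagonal(pcorr, 1.0)
```

Plain correlation would show a sizeable 0-2 entry; the partial correlation drives it toward zero, which is why (directed) partial-correlation methods are preferred for identifying direct network interactions.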

  1. Descending brain neurons in the cricket Gryllus bimaculatus (de Geer): auditory responses and impact on walking.

    PubMed

    Zorović, Maja; Hedwig, Berthold

    2013-01-01

    The activity of four types of sound-sensitive descending brain neurons in the cricket Gryllus bimaculatus was recorded intracellularly while animals were standing or walking on an open-loop trackball system. In a neuron with a contralaterally descending axon, the male calling song elicited responses that copied the pulse pattern of the song during standing and walking. The accuracy of pulse copying increased during walking. Neurons with ipsilaterally descending axons responded weakly to sound only during standing. The responses were mainly to the first pulse of each chirp, whereas the complete pulse pattern of a chirp was not copied. During walking the auditory responses were suppressed in these neurons. The spiking activity of all four neuron types was significantly correlated to forward walking velocity, indicating their relevance for walking. Additionally, injection of depolarizing current elicited walking and/or steering in three of four neuron types described. In none of the neurons was the spiking activity both sufficient and necessary to elicit and maintain walking behaviour. Some neurons showed arborisations in the lateral accessory lobes, pointing to the relevance of this brain region for cricket audition and descending motor control.

  2. Physiological modulators of Kv3.1 channels adjust firing patterns of auditory brain stem neurons.

    PubMed

    Brown, Maile R; El-Hassar, Lynda; Zhang, Yalan; Alvaro, Giuseppe; Large, Charles H; Kaczmarek, Leonard K

    2016-07-01

    Many rapidly firing neurons, including those in the medial nucleus of the trapezoid body (MNTB) in the auditory brain stem, express "high threshold" voltage-gated Kv3.1 potassium channels that activate only at positive potentials and are required for stimuli to generate rapid trains of actions potentials. We now describe the actions of two imidazolidinedione derivatives, AUT1 and AUT2, which modulate Kv3.1 channels. Using Chinese hamster ovary cells stably expressing rat Kv3.1 channels, we found that lower concentrations of these compounds shift the voltage of activation of Kv3.1 currents toward negative potentials, increasing currents evoked by depolarization from typical neuronal resting potentials. Single-channel recordings also showed that AUT1 shifted the open probability of Kv3.1 to more negative potentials. Higher concentrations of AUT2 also shifted inactivation to negative potentials. The effects of lower and higher concentrations could be mimicked in numerical simulations by increasing rates of activation and inactivation respectively, with no change in intrinsic voltage dependence. In brain slice recordings of mouse MNTB neurons, both AUT1 and AUT2 modulated firing rate at high rates of stimulation, a result predicted by numerical simulations. Our results suggest that pharmaceutical modulation of Kv3.1 currents represents a novel avenue for manipulation of neuronal excitability and has the potential for therapeutic benefit in the treatment of hearing disorders.
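The reported leftward shift of the Kv3.1 activation curve can be illustrated with a simple Boltzmann model of steady-state open probability. The half-activation voltage, slope factor, shift magnitude, and test potential below are illustrative assumptions, not measured values from the study.

```python
import numpy as np

def p_open(v_mv, v_half=10.0, slope=8.0):
    """Boltzmann steady-state open probability for a high-threshold K+ channel."""
    return 1.0 / (1.0 + np.exp(-(v_mv - v_half) / slope))

v = -20.0                                  # test potential near a spike's rising phase (mV)
p_control = p_open(v)                      # baseline activation curve
p_shifted = p_open(v, v_half=10.0 - 15.0)  # curve shifted -15 mV, as a modulator might do
```

Because the curve is steep near threshold, even a modest hyperpolarizing shift of `v_half` multiplies the open probability available at subthreshold voltages, which is the mechanism by which such modulators increase repolarizing current without changing the channel's maximal conductance.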

  3. Brain activity during divided and selective attention to auditory and visual sentence comprehension tasks.

    PubMed

    Moisala, Mona; Salmela, Viljami; Salo, Emma; Carlson, Synnöve; Vuontela, Virve; Salonen, Oili; Alho, Kimmo

    2015-01-01

    Using functional magnetic resonance imaging (fMRI), we measured brain activity of human participants while they performed a sentence congruence judgment task in either the visual or auditory modality separately, or in both modalities simultaneously. Significant performance decrements were observed when attention was divided between the two modalities compared with when one modality was selectively attended. Compared with selective attention (i.e., single tasking), divided attention (i.e., dual-tasking) did not recruit additional cortical regions, but resulted in increased activity in medial and lateral frontal regions which were also activated by the component tasks when performed separately. Areas involved in semantic language processing were revealed predominantly in the left lateral prefrontal cortex by contrasting incongruent with congruent sentences. These areas also showed significant activity increases during divided attention in relation to selective attention. In the sensory cortices, no crossmodal inhibition was observed during divided attention when compared with selective attention to one modality. Our results suggest that the observed performance decrements during dual-tasking are due to interference of the two tasks because they utilize the same part of the cortex. Moreover, semantic dual-tasking did not appear to recruit additional brain areas in comparison with single tasking, and no crossmodal inhibition was observed during intermodal divided attention.

  4. Noninvasive brain stimulation for the treatment of auditory verbal hallucinations in schizophrenia: methods, effects and challenges

    PubMed Central

    Kubera, Katharina M.; Barth, Anja; Hirjak, Dusan; Thomann, Philipp A.; Wolf, Robert C.

    2015-01-01

    This mini-review focuses on noninvasive brain stimulation techniques as an augmentation method for the treatment of persistent auditory verbal hallucinations (AVH) in patients with schizophrenia. Paradigmatically, we place emphasis on transcranial magnetic stimulation (TMS). We specifically discuss rationales of stimulation and consider methodological questions together with issues of phenotypic diversity in individuals with drug-refractory and persistent AVH. Eventually, we provide a brief outlook for future investigations and treatment directions. Taken together, current evidence suggests TMS as a promising method in the treatment of AVH. Low-frequency stimulation of the superior temporal cortex (STC) may reduce symptom severity and frequency. Yet clinical effects are of relatively short duration and effect sizes appear to decrease over time along with publication of larger trials. Apart from considering other innovative stimulation techniques, such as transcranial Direct Current Stimulation (tDCS), and optimizing stimulation protocols, treatment of AVH using noninvasive brain stimulation will essentially rely on accurate identification of potential responders and non-responders for these treatment modalities. In this regard, future studies will need to consider distinct phenotypic presentations of AVH in patients with schizophrenia, together with the putative functional neurocircuitry underlying these phenotypes. PMID:26528145

  5. An online brain-computer interface based on shifting attention to concurrent streams of auditory stimuli

    NASA Astrophysics Data System (ADS)

    Hill, N. J.; Schölkopf, B.

    2012-04-01

    We report on the development and online testing of an electroencephalogram-based brain-computer interface (BCI) that aims to be usable by completely paralysed users—for whom visual or motor-system-based BCIs may not be suitable, and among whom reports of successful BCI use have so far been very rare. The current approach exploits covert shifts of attention to auditory stimuli in a dichotic-listening stimulus design. To compare the efficacy of event-related potentials (ERPs) and steady-state auditory evoked potentials (SSAEPs), the stimuli were designed such that they elicited both ERPs and SSAEPs simultaneously. Trial-by-trial feedback was provided online, based on subjects' modulation of N1 and P3 ERP components measured during single 5 s stimulation intervals. All 13 healthy subjects were able to use the BCI, with performance in a binary left/right choice task ranging from 75% to 96% correct across subjects (mean 85%). BCI classification was based on the contrast between stimuli in the attended stream and stimuli in the unattended stream, making use of every stimulus, rather than contrasting frequent standard and rare ‘oddball’ stimuli. SSAEPs were assessed offline: for all subjects, spectral components at the two exactly known modulation frequencies allowed discrimination of pre-stimulus from stimulus intervals, and of left-only stimuli from right-only stimuli when one side of the dichotic stimulus pair was muted. However, attention modulation of SSAEPs was not sufficient for single-trial BCI communication, even when the subject's attention was clearly focused well enough to allow classification of the same trials via ERPs. ERPs clearly provided a superior basis for BCI. The ERP results are a promising step towards the development of a simple-to-use, reliable yes/no communication system for users in the most severely paralysed states, as well as potential attention-monitoring and -training applications outside the context of assistive technology.
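The N1/P3-based decision described above can be caricatured as template matching on synthetic epochs: correlate each stream's single-trial response with an assumed ERP template and pick the stream that matches better. The template shape, latencies, sampling rate, and noise level are all assumptions; the actual study used trained classifiers on measured EEG.

```python
import numpy as np

fs = 100
t = np.arange(0, 1.0, 1 / fs)               # 1 s epoch per stream, 100 samples
n1 = -np.exp(-((t - 0.1) ** 2) / 0.001)     # assumed N1 dip near 100 ms
p3 = np.exp(-((t - 0.4) ** 2) / 0.01)       # assumed P3 peak near 400 ms
template = n1 + p3

rng = np.random.default_rng(3)
attended = template + 0.5 * rng.standard_normal(t.size)    # left stream: ERP + noise
unattended = 0.5 * rng.standard_normal(t.size)             # right stream: noise only

def score(epoch):
    """Mean-centred correlation of a single-trial epoch with the ERP template."""
    return float(np.dot(epoch - epoch.mean(), template - template.mean()))

choice = "left" if score(attended) > score(unattended) else "right"
```

Because the decision contrasts attended against unattended stimuli directly, every stimulus contributes to the score, mirroring the paper's departure from rare-oddball designs.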

  6. Abnormal Effective Connectivity in the Brain is Involved in Auditory Verbal Hallucinations in Schizophrenia.

    PubMed

    Li, Baojuan; Cui, Long-Biao; Xi, Yi-Bin; Friston, Karl J; Guo, Fan; Wang, Hua-Ning; Zhang, Lin-Chuan; Bai, Yuan-Han; Tan, Qing-Rong; Yin, Hong; Lu, Hongbing

    2017-02-21

    Information flow among auditory and language processing-related regions implicated in the pathophysiology of auditory verbal hallucinations (AVHs) in schizophrenia (SZ) remains unclear. In this study, we used stochastic dynamic causal modeling (sDCM) to quantify connections among the left dorsolateral prefrontal cortex (inner speech monitoring), auditory cortex (auditory processing), hippocampus (memory retrieval), thalamus (information filtering), and Broca's area (language production) in 17 first-episode drug-naïve SZ patients with AVHs, 15 without AVHs, and 19 healthy controls using resting-state functional magnetic resonance imaging. Finally, we performed receiver operating characteristic (ROC) analysis and correlation analysis between image measures and symptoms. sDCM revealed an increased sensitivity of auditory cortex to its thalamic afferents and a decrease in hippocampal sensitivity to auditory inputs in SZ patients with AVHs. The area under the ROC curve showed the diagnostic value of these two connections to distinguish SZ patients with AVHs from those without AVHs. Furthermore, we found a positive correlation between the strength of the connectivity from Broca's area to the auditory cortex and the severity of AVHs. These findings demonstrate, for the first time, augmented AVH-specific excitatory afferents from the thalamus to the auditory cortex in SZ patients, resulting in auditory perception without external auditory stimuli. Our results provide insights into the neural mechanisms underlying AVHs in SZ. This thalamic-auditory cortical-hippocampal dysconnectivity may also serve as a diagnostic biomarker of AVHs in SZ and a therapeutic target based on direct in vivo evidence.
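The diagnostic value of the two connections was quantified with the area under the ROC curve. As an illustration only (toy numbers, not the authors' data or code), AUC can be computed from two groups of scores via the Mann-Whitney rank formulation:

```python
def auc(pos, neg):
    """Area under the ROC curve as the Mann-Whitney U statistic:
    the probability that a randomly chosen positive case scores
    higher than a randomly chosen negative case (ties count 0.5)."""
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Hypothetical connection strengths (e.g. thalamus -> auditory cortex)
# for patients with AVHs (positives) vs. without AVHs (negatives).
avh    = [0.42, 0.51, 0.38, 0.60]
no_avh = [0.21, 0.30, 0.25, 0.35]
print(auc(avh, no_avh))  # -> 1.0 (perfect separation in this toy data)
```

An AUC of 0.5 corresponds to chance-level discrimination; values near 1.0 indicate that the connection strength separates the two patient groups well.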

  7. A Case of Generalized Auditory Agnosia with Unilateral Subcortical Brain Lesion

    PubMed Central

    Suh, Hyee; Kim, Soo Yeon; Kim, Sook Hee; Chang, Jae Hyeok; Shin, Yong Beom; Ko, Hyun-Yoon

    2012-01-01

    The mechanisms and functional anatomy underlying the early stages of speech perception are still not well understood. Auditory agnosia is a deficit of auditory object processing, defined as an inability to recognize spoken language and/or nonverbal environmental sounds and music despite adequate hearing, while spontaneous speech, reading, and writing are preserved. Auditory agnosia is usually caused by bilateral or unilateral temporal lobe lesions, especially of the transverse gyri; subcortical lesions without cortical damage rarely cause it. We present a 73-year-old right-handed male with generalized auditory agnosia caused by a unilateral subcortical lesion. He was unable to repeat words or take dictation, but his spontaneous speech was fluent and comprehensible. He could understand and read written words and phrases. His auditory brainstem evoked potentials and audiometry were intact. This case suggests that a subcortical lesion involving the unilateral acoustic radiation can cause generalized auditory agnosia. PMID:23342322

  8. A case of generalized auditory agnosia with unilateral subcortical brain lesion.

    PubMed

    Suh, Hyee; Shin, Yong-Il; Kim, Soo Yeon; Kim, Sook Hee; Chang, Jae Hyeok; Shin, Yong Beom; Ko, Hyun-Yoon

    2012-12-01

    The mechanisms and functional anatomy underlying the early stages of speech perception are still not well understood. Auditory agnosia is a deficit of auditory object processing, defined as an inability to recognize spoken language and/or nonverbal environmental sounds and music despite adequate hearing, while spontaneous speech, reading, and writing are preserved. Auditory agnosia is usually caused by bilateral or unilateral temporal lobe lesions, especially of the transverse gyri; subcortical lesions without cortical damage rarely cause it. We present a 73-year-old right-handed male with generalized auditory agnosia caused by a unilateral subcortical lesion. He was unable to repeat words or take dictation, but his spontaneous speech was fluent and comprehensible. He could understand and read written words and phrases. His auditory brainstem evoked potentials and audiometry were intact. This case suggests that a subcortical lesion involving the unilateral acoustic radiation can cause generalized auditory agnosia.

  9. Effect of stimulus intensity level on auditory middle latency response brain maps in human adults.

    PubMed

    Tucker, D A; Dietrich, S; McPherson, D L; Salamat, M T

    2001-05-01

    Auditory middle latency response (AMLR) brain maps were obtained in 11 young adults with normal hearing. AMLR waveforms were elicited with monaural clicks presented at three stimulus intensity levels (50, 70, and 90 dB nHL). Recordings were made for right and left ear stimulus presentations. All recordings were obtained in an eyes open/awake state for each subject. Peak-to-peak amplitudes and absolute latencies of the AMLR Pa and Pb waveforms were measured at the Cz electrode site. Pa and Pb waveforms were present 100 percent of the time in response to the 90 dB nHL presentation. The prevalence of Pa and Pb to the 70 dB nHL presentation varied from 86 to 95 percent. The prevalence of Pa and Pb to the 50 dB nHL stimulus never reached 100 percent, ranging from 68 to 77 percent. No significant ear effect was seen for amplitude or latency measures of Pa or Pb. AMLR brain maps of the voltage field distributions of Pa and Pb waveforms showed different topographic features. Scalp topography of the Pa waveform was altered by a reduction in stimulus intensity level. At 90 dB nHL, the Pa brain map showed a large midline positivity over the frontal and central scalp areas. At lower stimulus intensity levels, frontal positivity was reduced, and scalp negativity over occipital regions was increased. Pb scalp topography was also altered by a reduction in stimulus intensity level. Varying the stimulus intensity significantly altered Pa and Pb distributions of amplitude and latency measures. Pa and Pb distributions were skewed regardless of stimulus intensity.

  10. Brain stem auditory nuclei and their connections in a carnivorous marsupial, the northern native cat (Dasyurus hallucatus).

    PubMed

    Aitkin, L M; Byers, M; Nelson, J E

    1986-01-01

    The cytoarchitecture and connections of the brain stem auditory nuclei in the marsupial native cat (Dasyurus hallucatus) were studied using Nissl material in conjunction with the retrograde transport of horseradish peroxidase injected into the inferior colliculus. Some features different from those of Eutheria include the disposition of the cochlear nuclear complex medial to the restiform body, a lack of large spherical cells in the anteroventral cochlear nucleus, a small medial superior olive, and a large superior paraolivary nucleus.

  11. Event-related brain potentials to irrelevant auditory stimuli during selective listening: effects of channel probability.

    PubMed

    Akai, Toshiyuki

    2004-03-01

    The purpose of this study was to identify the cognitive process reflected by a positive deflection to irrelevant auditory stimuli (Pdi) during selective listening. Event-related brain potentials were recorded from 9 participants in a two-channel (left/right ears) selective listening task. Relative event probabilities of the relevant/irrelevant channels were 25%/75%, 50%/50%, and 75%/25%. With increasing probability of the relevant channel, behavioral performance (reaction time and hit rate) for targets within the relevant channel improved, reflecting development of a more robust attentional trace. At the same time, the amplitude of the early Pdi (200-300 ms after stimulus onset) elicited by stimuli in the irrelevant channel was enhanced in the central region as that channel's probability decreased. This positive relation between the strength of the attentional trace and the amplitude of the early Pdi suggests that the early Pdi is elicited by a mismatch between an incoming irrelevant stimulus and an attentional trace.

  12. Recovery function of the human brain stem auditory-evoked potential.

    PubMed

    Kevanishvili, Z; Lagidze, Z

    1979-01-01

    Amplitude reduction and peak latency prolongation were observed in the human brain stem auditory-evoked potential (BEP) with preceding (conditioning) stimulation. At a conditioning interval (CI) of 5 ms the alteration of BEP was greater than at a CI of 10 ms. At a CI of 10 ms the amplitudes of some BEP components (e.g. waves I and II) were more decreased than those of others (e.g. wave V), while the peak latency prolongation did not show any obvious component selectivity. At a CI of 5 ms, the extent of the amplitude decrement of individual BEP components differed less, while the increase in the peak latencies of the later components was greater than that of the earlier components. The alterations of the parameters of the test BEPs at both CIs are ascribed to the desynchronization of intrinsic neural events. The differential amplitude reduction at a CI of 10 ms is explained by the different durations of neural firings determining various effects of desynchronization upon the amplitudes of individual BEP components. The decrease in the extent of the component selectivity and the preferential increase in the peak latencies of the later BEP components observed at a CI of 5 ms are explained by the intensification of the mechanism of the relative refractory period.

  13. Hyperpolarization-independent maturation and refinement of GABA/glycinergic connections in the auditory brain stem

    PubMed Central

    Lee, Hanmi; Bach, Eva; Noh, Jihyun; Delpire, Eric

    2015-01-01

    During development GABA and glycine synapses are initially excitatory before they gradually become inhibitory. This transition is due to a developmental increase in the activity of neuronal potassium-chloride cotransporter 2 (KCC2), which shifts the chloride equilibrium potential (ECl) to values more negative than the resting membrane potential. While the role of early GABA and glycine depolarizations in neuronal development has become increasingly clear, the role of the transition to hyperpolarization in synapse maturation and circuit refinement has remained an open question. Here we investigated this question by examining the maturation and developmental refinement of GABA/glycinergic and glutamatergic synapses in the lateral superior olive (LSO), a binaural auditory brain stem nucleus, in KCC2-knockdown mice, in which GABA and glycine remain depolarizing. We found that many key events in the development of synaptic inputs to the LSO, such as changes in neurotransmitter phenotype, strengthening and elimination of GABA/glycinergic connection, and maturation of glutamatergic synapses, occur undisturbed in KCC2-knockdown mice compared with wild-type mice. These results indicate that maturation of inhibitory and excitatory synapses in the LSO is independent of the GABA and glycine depolarization-to-hyperpolarization transition. PMID:26655825
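The depolarization-to-hyperpolarization transition described above follows from the Nernst equation for chloride: as KCC2 lowers intracellular [Cl-], ECl shifts below the resting potential. A sketch with illustrative (not measured) concentrations:

```python
import math

def nernst_ecl(cl_in_mM, cl_out_mM, temp_c=37.0):
    """Nernst equilibrium potential for Cl- (valence z = -1), in mV:
    E_Cl = (RT / zF) * ln([Cl]_out / [Cl]_in) = -(RT/F) * ln(out/in)."""
    R, F = 8.314, 96485.0                             # J/(mol*K), C/mol
    rt_over_f = 1000.0 * R * (273.15 + temp_c) / F    # ~26.7 mV at 37 C
    return -rt_over_f * math.log(cl_out_mM / cl_in_mM)

v_rest = -60.0  # illustrative resting membrane potential, mV

# Assumed concentrations: high [Cl]_in before KCC2 matures, low after.
e_cl_immature = nernst_ecl(25.0, 130.0)   # ~ -44 mV: above rest, depolarizing
e_cl_mature   = nernst_ecl(5.0, 130.0)    # ~ -87 mV: below rest, hyperpolarizing
print(round(e_cl_immature, 1), round(e_cl_mature, 1))
```

With ECl above the resting potential, GABA/glycine channel opening depolarizes the cell; once KCC2 activity pushes ECl below rest, the same synapses become inhibitory.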

  14. Frontal brain activation in premature infants' response to auditory stimuli in neonatal intensive care unit.

    PubMed

    Saito, Yuri; Fukuhara, Rie; Aoyama, Shiori; Toshima, Tamotsu

    2009-07-01

    Focusing on how little contact NICU infants have with their mother's voice, both in the womb and after birth, we examined whether premature infants can discriminate between their mothers' utterances and those of female nurses in terms of the emotional bonding facilitated by prosodic utterances. Twenty-six premature infants were included in this study, and their cerebral blood flow was measured by near-infrared spectroscopy while they were exposed to auditory stimuli in the form of utterances made by their mothers and by female nurses. A two (stimulus: mother vs. nurse) x two (recording site: right vs. left frontal area) analysis of variance (ANOVA) on the relative oxy-Hb values showed a significant interaction between stimulus and recording site: the mother's and the nurse's voices elicited similar activation in the left frontal area but different responses in the right frontal area. We presume that the nurse's voice might have become associated with pain and stress for the premature infants. Our results showed that the premature infants reacted differently to the different voice stimuli. We presume that both mothers' and nurses' voices represent positive stimuli for premature infants, because both activate the frontal brain. Accordingly, we cannot explain our results only in terms of a state-dependent marker of infantile individual differences, but must also consider the nurses' voices as a stressful trigger for NICU infants.

  15. Non-invasive Brain Stimulation and Auditory Verbal Hallucinations: New Techniques and Future Directions

    PubMed Central

    Moseley, Peter; Alderson-Day, Ben; Ellison, Amanda; Jardri, Renaud; Fernyhough, Charles

    2016-01-01

    Auditory verbal hallucinations (AVHs) are the experience of hearing a voice in the absence of any speaker. Results from recent attempts to treat AVHs with neurostimulation (rTMS or tDCS) to the left temporoparietal junction have not been conclusive, but suggest that it may be a promising treatment option for some individuals. Some evidence suggests that the therapeutic effect of neurostimulation on AVHs may result from modulation of cortical areas involved in the ability to monitor the source of self-generated information. Here, we provide a brief overview of cognitive models and neurostimulation paradigms associated with treatment of AVHs, and discuss techniques that could be explored in the future to improve the efficacy of treatment, including alternating current and random noise stimulation. Technical issues surrounding the use of neurostimulation as a treatment option are discussed (including methods to localize the targeted cortical area, and the state-dependent effects of brain stimulation), as are issues surrounding the acceptability of neurostimulation for adolescent populations and individuals who experience qualitatively different types of AVH. PMID:26834541

  16. Auditory neglect.

    PubMed Central

    De Renzi, E; Gentilini, M; Barbieri, C

    1989-01-01

    Auditory neglect was investigated in normal controls and in patients with a recent unilateral hemispheric lesion, by requiring them to detect the interruptions that occurred in one ear in a sound delivered through earphones either monaurally or binaurally. Controls detected interruptions accurately. One left brain damaged (LBD) patient missed only once in the ipsilateral ear, while seven of the 30 right brain damaged (RBD) patients missed more than one signal in the monaural test and nine patients did the same in the binaural test. Omissions were always more marked in the left ear and in the binaural test, with a significant ear by test interaction. The lesion of these patients was in the parietal lobe (five patients) and the thalamus (four patients). The relation of auditory neglect to auditory extinction was investigated and found to be equivocal, in that there were seven RBD patients who showed extinction but not neglect and, more importantly, two patients who exhibited the opposite pattern, thus challenging the view that extinction is a minor form of neglect. Visual and auditory neglect were also not consistently correlated, the former being present in nine RBD patients without auditory neglect and the latter in two RBD patients without visual neglect. The finding that in some RBD patients with auditory neglect omissions also occurred, though less frequently, in the right ear points to a right hemisphere participation in the deployment of attention not only to the contralateral, but also to the ipsilateral space. PMID:2732732

  17. Rey's Auditory Verbal Learning Test scores can be predicted from whole brain MRI in Alzheimer's disease.

    PubMed

    Moradi, Elaheh; Hallikainen, Ilona; Hänninen, Tuomo; Tohka, Jussi

    2017-01-01

    Rey's Auditory Verbal Learning Test (RAVLT) is a powerful neuropsychological tool for testing episodic memory, which is widely used for cognitive assessment in dementia and pre-dementia conditions. Several studies have shown that impairment in RAVLT scores reflects well the underlying pathology caused by Alzheimer's disease (AD), making RAVLT an effective early marker to detect AD in persons with memory complaints. We investigated the association between RAVLT scores (RAVLT Immediate and RAVLT Percent Forgetting) and the structural brain atrophy caused by AD. The aim was to comprehensively study to what extent the RAVLT scores are predictable from structural magnetic resonance imaging (MRI) data using machine learning approaches, and to find the brain regions most important for the estimation of RAVLT scores. For this, we built a predictive model to estimate RAVLT scores from gray matter density via an elastic net penalized linear regression model. The proposed approach provided highly significant cross-validated correlations between the estimated and observed RAVLT Immediate (R = 0.50) and RAVLT Percent Forgetting (R = 0.43) in a dataset consisting of 806 AD, mild cognitive impairment (MCI) or healthy subjects. In addition, the selected machine learning method provided more accurate estimates of RAVLT scores than the relevance vector regression used earlier for the estimation of RAVLT based on MRI data. The top predictors were medial temporal lobe structures and amygdala for the estimation of RAVLT Immediate, and angular gyrus, hippocampus and amygdala for the estimation of RAVLT Percent Forgetting. Further, the conversion of MCI subjects to AD within 3 years could be predicted based on either observed or estimated RAVLT scores with an accuracy comparable to MRI-based biomarkers.
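The workflow described above fits a penalized linear model to high-dimensional gray matter features and reports the cross-validated correlation between predicted and observed scores. As a simplified stand-in (closed-form ridge regression instead of elastic net, synthetic data instead of gray matter maps):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the real data: n subjects, p voxel-wise
# "gray matter densities", a sparse true weight vector, noisy scores.
n, p = 200, 50
X = rng.normal(size=(n, p))
w_true = np.zeros(p)
w_true[:5] = 1.0
y = X @ w_true + rng.normal(scale=0.5, size=n)

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam*I)^-1 X^T y.
    (The paper uses elastic net, which adds an L1 term for sparsity;
    ridge is used here only to keep the sketch self-contained.)"""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Split-half "cross-validation": fit on one half, then correlate
# predictions with observed scores on the held-out half, analogous
# to the cross-validated R values reported in the abstract.
w = ridge_fit(X[:100], y[:100])
y_hat = X[100:] @ w
r = np.corrcoef(y_hat, y[100:])[0, 1]
print(round(r, 2))  # high on this easy synthetic problem
```

In practice an elastic net (e.g. scikit-learn's ElasticNetCV) would be preferred at this scale, since the L1 term selects a sparse set of predictive regions, which is how the paper identifies its top predictors.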

  18. Brain stem auditory-evoked potentials in different strains of rodents.

    PubMed

    Chen, T J; Chen, S S

    1990-04-01

    This study was conducted to evaluate variations in brain stem auditory-evoked potentials (BAEPs) among different strains of rodents. BAEPs were recorded by routine procedures from rodents of different strains or species: 22 Long-Evans, 28 Wistar and 28 Sprague-Dawley rats, and six hamsters. Within the first 10 ms, there were five consistent and reproducible positive waves of BAEPs in each rodent, named I, II, III, IV and V in correspondence with the nomenclature of waves I-VII in human BAEPs. These BAEPs were also similar to those observed in other vertebrates and in human controls. However, there were variations in waveforms and peak latencies among rodents, even in rats of the same strain that came from different laboratory centres. At optimal stimulation intensity, usually around 90 dB, the mean latencies of the waves varied as follows: I, 1.23-1.53 ms; II, 1.88-2.28 ms; III, 2.62-2.94 ms; IV, 3.49-3.97 ms; and V, 4.47-5.14 ms. They were significantly different between species, but not between strains of rats from the same animal centre. The central conduction time, indexed by the interpeak latencies between I and III, III and V, and I and V, was dependent on the species (P < 0.05). When recorded in a soundproof incubator, the minimal hearing threshold showed a significant species difference. The animal BAEP model can be employed for evaluating the physiological function or pathological conditions of the brain stem. The confirmation of BAEP variations among different species and strains will be helpful in deciding which rodents are appropriate as animal models for the various purposes of BAEP studies.

  19. Klinefelter syndrome has increased brain responses to auditory stimuli and motor output, but not to visual stimuli or Stroop adaptation.

    PubMed

    Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders; Fedder, Jens; Laurberg, Peter; Østergaard, John R; Hertz, Jens Michael; Pedersen, Anders Degn; Gravholt, Claus Højbjerg

    2016-01-01

    Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words "GREEN" or "RED" were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying "GREEN" or "RED" had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system.

  20. Klinefelter syndrome has increased brain responses to auditory stimuli and motor output, but not to visual stimuli or Stroop adaptation

    PubMed Central

    Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders; Fedder, Jens; Laurberg, Peter; Østergaard, John R.; Hertz, Jens Michael; Pedersen, Anders Degn; Gravholt, Claus Højbjerg

    2016-01-01

    Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words “GREEN” or “RED” were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying “GREEN” or “RED” had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system. PMID:26958463

  1. Top-down controlled and bottom-up triggered orienting of auditory attention to pitch activate overlapping brain networks.

    PubMed

    Alho, Kimmo; Salmi, Juha; Koistinen, Sonja; Salonen, Oili; Rinne, Teemu

    2015-11-11

    A number of previous studies have suggested segregated networks of brain areas for top-down controlled and bottom-up triggered orienting of visual attention. However, the corresponding networks involved in auditory attention remain less studied. Our participants attended selectively to a tone stream with either a lower pitch or higher pitch in order to respond to infrequent changes in duration of attended tones. The participants were also required to shift their attention from one stream to the other when guided by a visual arrow cue. In addition to these top-down controlled cued attention shifts, infrequent task-irrelevant louder tones occurred in both streams to trigger attention in a bottom-up manner. Both cued shifts and louder tones were associated with enhanced activity in the superior temporal gyrus and sulcus, temporo-parietal junction, superior parietal lobule, inferior and middle frontal gyri, frontal eye field, supplementary motor area, and anterior cingulate gyrus. Thus, the present findings suggest that in the auditory modality, unlike in vision, top-down controlled and bottom-up triggered attention activate largely the same cortical networks. Comparison of the present results with our previous results from a similar experiment on spatial auditory attention suggests that fronto-parietal networks of attention to location or pitch overlap substantially. However, the auditory areas in the anterior superior temporal cortex might have a more important role in attention to the pitch than location of sounds. This article is part of a Special Issue entitled SI: Prediction and Attention.

  2. An Evaluation of Training with an Auditory P300 Brain-Computer Interface for the Japanese Hiragana Syllabary

    PubMed Central

    Halder, Sebastian; Takano, Kouji; Ora, Hiroki; Onishi, Akinari; Utsumi, Kota; Kansaku, Kenji

    2016-01-01

    Gaze-independent brain-computer interfaces (BCIs) are a possible communication channel for persons with paralysis. We investigated if it is possible to use auditory stimuli to create a BCI for the Japanese Hiragana syllabary, which has 46 Hiragana characters. Additionally, we investigated if training has an effect on accuracy despite the high amount of different stimuli involved. Able-bodied participants (N = 6) were asked to select 25 syllables (out of fifty possible choices) using a two step procedure: First the consonant (ten choices) and then the vowel (five choices). This was repeated on 3 separate days. Additionally, a person with spinal cord injury (SCI) participated in the experiment. Four out of six healthy participants reached Hiragana syllable accuracies above 70% and the information transfer rate increased from 1.7 bits/min in the first session to 3.2 bits/min in the third session. The accuracy of the participant with SCI increased from 12% (0.2 bits/min) to 56% (2 bits/min) in session three. Reliable selections from a 10 × 5 matrix using auditory stimuli were possible and performance is increased by training. We were able to show that auditory P300 BCIs can be used for communication with up to fifty symbols. This enables the use of the technology of auditory P300 BCIs with a variety of applications. PMID:27746716
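Information transfer rates like those above are conventionally computed with the Wolpaw formula, which converts an N-choice task's accuracy into bits per selection. A sketch (assuming this standard definition; the selection timing below is illustrative, not taken from the paper):

```python
import math

def wolpaw_bits_per_selection(n_choices, accuracy):
    """Wolpaw ITR in bits per selection for an N-choice task:
    B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))."""
    n, p = n_choices, accuracy
    if p >= 1.0:
        return math.log2(n)
    if p <= 1.0 / n:        # at or below chance: no information
        return 0.0
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

def itr_bits_per_min(n_choices, accuracy, seconds_per_selection):
    return wolpaw_bits_per_selection(n_choices, accuracy) * 60.0 / seconds_per_selection

# E.g. a 50-choice task at 70% accuracy, one selection per minute (assumed):
print(round(itr_bits_per_min(50, 0.70, 60.0), 2))
```

Note how the bit rate depends jointly on the number of choices, the accuracy, and the time per selection, which is why a 50-symbol auditory speller can outperform a faster binary one.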

  3. Mother’s voice and heartbeat sounds elicit auditory plasticity in the human brain before full gestation

    PubMed Central

    Webb, Alexandra R.; Heller, Howard T.; Benson, Carol B.; Lahav, Amir

    2015-01-01

    Brain development is largely shaped by early sensory experience. However, it is currently unknown whether, how early, and to what extent the newborn’s brain is shaped by exposure to maternal sounds when the brain is most sensitive to early life programming. The present study examined this question in 40 infants born extremely prematurely (between 25- and 32-wk gestation) in the first month of life. Newborns were randomized to receive auditory enrichment in the form of audio recordings of maternal sounds (including their mother’s voice and heartbeat) or routine exposure to hospital environmental noise. The groups were otherwise medically and demographically comparable. Cranial ultrasonography measurements were obtained at 30 ± 3 d of life. Results show that newborns exposed to maternal sounds had a significantly larger auditory cortex (AC) bilaterally compared with control newborns receiving standard care. The magnitude of the right and left AC thickness was significantly correlated with gestational age but not with the duration of sound exposure. Measurements of head circumference and the widths of the frontal horn (FH) and the corpus callosum (CC) were not significantly different between the two groups. This study provides evidence for experience-dependent plasticity in the primary AC before the brain has reached full-term maturation. Our results demonstrate that despite the immaturity of the auditory pathways, the AC is more adaptive to maternal sounds than environmental noise. Further studies are needed to better understand the neural processes underlying this early brain plasticity and its functional implications for future hearing and language development. PMID:25713382

  4. Brain-computer interfaces using capacitive measurement of visual or auditory steady-state responses

    NASA Astrophysics Data System (ADS)

    Baek, Hyun Jae; Kim, Hyun Seok; Heo, Jeong; Lim, Yong Gyu; Park, Kwang Suk

    2013-04-01

    Objective. Brain-computer interface (BCI) technologies have been intensely studied to provide alternative communication tools entirely independent of neuromuscular activities. Current BCI technologies use electroencephalogram (EEG) acquisition methods that require unpleasant gel injections, impractical preparations and clean-up procedures. The next generation of BCI technologies requires practical, user-friendly, nonintrusive EEG platforms in order to facilitate the application of laboratory work in real-world settings. Approach. A capacitive electrode that does not require an electrolytic gel or direct electrode-scalp contact is a potential alternative to the conventional wet electrode in future BCI systems. We have proposed a new capacitive EEG electrode that contains a conductive polymer-sensing surface, which enhances electrode performance. This paper presents results from five subjects who used BCIs based on visual or auditory steady-state responses with these new capacitive electrodes. The steady-state visual evoked potential (SSVEP) spelling system and the auditory steady-state response (ASSR) binary decision system were employed. Main results. Offline tests demonstrated BCI performance high enough to be used in a BCI system (accuracy: 95.2%, ITR: 19.91 bpm for SSVEP BCI (6 s), accuracy: 82.6%, ITR: 1.48 bpm for ASSR BCI (14 s)), with the analysis time being slightly longer than that when wet electrodes were employed with the same BCI system (accuracy: 91.2%, ITR: 25.79 bpm for SSVEP BCI (4 s), accuracy: 81.3%, ITR: 1.57 bpm for ASSR BCI (12 s)). Subjects performed online BCI under the SSVEP paradigm in copy spelling mode and under the ASSR paradigm in selective attention mode with mean information transfer rates (ITR) of 17.78 ± 2.08 and 0.7 ± 0.24 bpm, respectively. Significance. The results of these experiments demonstrate the feasibility of using our capacitive EEG electrode in BCI systems. This capacitive electrode may become a flexible and
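Steady-state paradigms like the SSVEP speller above are commonly classified by comparing spectral power at the known stimulation frequencies. A minimal sketch on simulated signals (assumed sampling rate and frequencies, not the authors' pipeline):

```python
import numpy as np

fs = 256                      # sampling rate in Hz (assumed)
t = np.arange(fs * 4) / fs    # 4 s analysis window
f_targets = [10.0, 12.0]      # two hypothetical flicker frequencies

def power_at(x, f, fs):
    """Power of signal x at frequency f, read off the DFT bin nearest f."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - f))]

def classify(x):
    """Pick the target whose stimulation frequency carries the most power."""
    return int(np.argmax([power_at(x, f, fs) for f in f_targets]))

# Simulated trial: subject attends the 12 Hz stimulus, buried in noise.
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 12.0 * t) + rng.normal(scale=1.0, size=t.size)
print(classify(eeg))  # -> 1 (the 12 Hz target)
```

Real systems refine this with harmonics, spatial filtering (e.g. canonical correlation analysis), and longer or adaptive analysis windows, which is where the 4-6 s analysis times in the abstract come from.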

  5. An online brain-computer interface based on shifting attention to concurrent streams of auditory stimuli

    PubMed Central

    Hill, N J; Schölkopf, B

    2012-01-01

    We report on the development and online testing of an EEG-based brain-computer interface (BCI) that aims to be usable by completely paralysed users—for whom visual or motor-system-based BCIs may not be suitable, and among whom reports of successful BCI use have so far been very rare. The current approach exploits covert shifts of attention to auditory stimuli in a dichotic-listening stimulus design. To compare the efficacy of event-related potentials (ERPs) and steady-state auditory evoked potentials (SSAEPs), the stimuli were designed such that they elicited both ERPs and SSAEPs simultaneously. Trial-by-trial feedback was provided online, based on subjects’ modulation of N1 and P3 ERP components measured during single 5-second stimulation intervals. All 13 healthy subjects were able to use the BCI, with performance in a binary left/right choice task ranging from 75% to 96% correct across subjects (mean 85%). BCI classification was based on the contrast between stimuli in the attended stream and stimuli in the unattended stream, making use of every stimulus, rather than contrasting frequent standard and rare “oddball” stimuli. SSAEPs were assessed offline: for all subjects, spectral components at the two exactly-known modulation frequencies allowed discrimination of pre-stimulus from stimulus intervals, and of left-only stimuli from right-only stimuli when one side of the dichotic stimulus pair was muted. However, attention-modulation of SSAEPs was not sufficient for single-trial BCI communication, even when the subject’s attention was clearly focused well enough to allow classification of the same trials via ERPs. ERPs clearly provided a superior basis for BCI. The ERP results are a promising step towards the development of a simple-to-use, reliable yes/no communication system for users in the most severely paralysed states, as well as potential attention-monitoring and -training applications outside the context of assistive technology. PMID:22333135
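
    The offline SSAEP assessment above rests on reading out spectral power at the exactly-known modulation frequencies. A minimal sketch of such a readout on synthetic data (sampling rate, duration, and the 40 Hz modulation frequency are illustrative assumptions, not values from the study):

```python
import numpy as np

def band_power(x: np.ndarray, fs: float, f0: float) -> float:
    """Power of the FFT bin nearest f0 for a 1-D EEG segment x."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return float(spec[np.argmin(np.abs(freqs - f0))])

# Synthetic demo: a 'stimulus' segment containing a 40 Hz SSAEP-like
# component stands out against a noise-only 'pre-stimulus' segment.
rng = np.random.default_rng(0)
fs, dur = 250.0, 4.0
t = np.arange(int(fs * dur)) / fs
pre = rng.normal(0.0, 1.0, t.size)                # noise only
stim = pre + 0.5 * np.sin(2 * np.pi * 40.0 * t)   # noise + 40 Hz component
print(band_power(stim, fs, 40.0) > band_power(pre, fs, 40.0))  # True
```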

  6. Auditory perception in the aging brain: the role of inhibition and facilitation in early processing.

    PubMed

    Stothart, George; Kazanina, Nina

    2016-11-01

    Aging affects the interplay between peripheral and cortical auditory processing. Previous studies have demonstrated that older adults are less able to regulate afferent sensory information and are more sensitive to distracting information. Using auditory event-related potentials, we investigated the role of cortical inhibition in auditory and audiovisual processing in younger and older adults. Across pure-tone, auditory-speech and audiovisual-speech paradigms, older adults showed a consistent pattern of inhibitory deficits, manifested as increased P50 and/or N1 amplitudes and an absent or significantly reduced N2. Older adults were still able to use congruent visual articulatory information to aid auditory processing but appeared to require greater neural effort to resolve conflicts generated by incongruent visual information. In combination, the results provide support for the Inhibitory Deficit Hypothesis of aging. They extend previous findings into the audiovisual domain and highlight older adults' ability to benefit from congruent visual information during speech processing.

  7. Are you listening? Brain activation associated with sustained nonspatial auditory attention in the presence and absence of stimulation.

    PubMed

    Seydell-Greenwald, Anna; Greenberg, Adam S; Rauschecker, Josef P

    2014-05-01

    Neuroimaging studies investigating the voluntary (top-down) control of attention largely agree that this process recruits several frontal and parietal brain regions. Since most studies used attention tasks requiring several higher-order cognitive functions (e.g. working memory, semantic processing, temporal integration, spatial orienting) as well as different attentional mechanisms (attention shifting, distractor filtering), it is unclear what exactly the observed frontoparietal activations reflect. The present functional magnetic resonance imaging study investigated, within the same participants, signal changes in (1) a "Simple Attention" task in which participants attended to a single melody, (2) a "Selective Attention" task in which they simultaneously ignored another melody, and (3) a "Beep Monitoring" task in which participants listened in silence for a faint beep. Compared to resting conditions with identical stimulation, all tasks produced robust activation increases in auditory cortex, cross-modal inhibition in visual and somatosensory cortex, and decreases in the default mode network, indicating that participants were indeed focusing their attention on the auditory domain. However, signal increases in frontal and parietal brain areas were only observed for tasks 1 and 2, but completely absent for task 3. These results lead to the following conclusions: under most conditions, frontoparietal activations are crucial for attention since they subserve higher-order cognitive functions inherently related to attention. However, under circumstances that minimize other demands, nonspatial auditory attention in the absence of stimulation can be maintained without concurrent frontal or parietal activations.

  8. Repetition suppression and repetition enhancement underlie auditory memory-trace formation in the human brain: an MEG study.

    PubMed

    Recasens, Marc; Leung, Sumie; Grimm, Sabine; Nowak, Rafal; Escera, Carles

    2015-03-01

    The formation of echoic memory traces has traditionally been inferred from enhanced responses to deviations from them. The mismatch negativity (MMN), an auditory event-related potential (ERP) elicited between 100 and 250 ms after sound deviation, is an indirect index of regularity encoding that reflects a memory-based comparison process. Recently, repetition positivity (RP) has been described as a candidate ERP correlate of direct memory-trace formation. RP consists of repetition suppression and enhancement effects occurring in different auditory components between 50 and 250 ms after sound onset. However, the neuronal generators engaged in the encoding of repeated stimulus features have received little attention. This study investigates the neuronal sources underlying the formation and strengthening of new memory traces by employing a roving-standard paradigm, in which tone trains of different frequencies and different lengths are presented randomly. Source generators of repetition-enhanced (RE) and repetition-suppressed (RS) activity were modeled using magnetoencephalography (MEG) in healthy subjects. Our results show that, in line with RP findings, N1m (~95-150 ms) activity is suppressed with stimulus repetition. In addition, we observed the emergence of a sustained field (~230-270 ms) that showed RE. Source analysis revealed neuronal generators of RS and RE located in both auditory and non-auditory areas, such as the medial parietal cortex and frontal areas. The different timing and location of the neural generators involved in RS and RE point to the existence of functionally separate mechanisms devoted to acoustic memory-trace formation at different auditory processing stages in the human brain.

  9. Formulae Describing Subjective Attributes for Sound Fields Based on a Model of the Auditory-Brain System

    NASA Astrophysics Data System (ADS)

    ANDO, Y.; SAKAI, H.; SATO, S.

    2000-04-01

    This article reviews the background of a workable model of the auditory-brain system, and formulae for calculating fundamental subjective attributes derived from the model. The model consists of the autocorrelation mechanisms, the interaural cross-correlation mechanism between the two auditory pathways, and the specialization of the human cerebral hemispheres for temporal and spatial factors of the sound field. Typical fundamental attributes, for example the apparent source width, the missing fundamental, and the speech intelligibility of sound fields such as those in opera houses, are described in terms of the orthogonal spatial factors extracted from the interaural cross-correlation function and the orthogonal temporal factors extracted from the autocorrelation function, respectively. Other important subjective attributes of sound fields, namely subjective diffuseness and the subjective preferences of both listeners and performers for a single reflection, are also demonstrated here.
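
    The interaural cross-correlation mechanism in this model reduces to locating the peak of the normalized cross-correlation between the two ear signals within roughly +/-1 ms; the peak magnitude (IACC) and its lag (tau_IACC) feed the spatial attributes. A sketch of that computation, with the lag convention and the 1 ms search window as assumptions:

```python
import numpy as np

def iacc(left: np.ndarray, right: np.ndarray, fs: float,
         max_lag_ms: float = 1.0):
    """Peak magnitude (IACC) and lag (tau_IACC, in s) of the normalized
    interaural cross-correlation within +/- max_lag_ms."""
    max_lag = int(fs * max_lag_ms / 1000.0)
    norm = np.sqrt(np.dot(left, left) * np.dot(right, right))
    lags = list(range(-max_lag, max_lag + 1))
    vals = []
    for lag in lags:
        if lag >= 0:
            v = np.dot(left[: len(left) - lag], right[lag:])
        else:
            v = np.dot(left[-lag:], right[: len(right) + lag])
        vals.append(v / norm)
    k = int(np.argmax(vals))
    return vals[k], lags[k] / fs

# Identical signals at both ears are fully correlated at zero lag.
fs = 48000.0
sig = np.random.default_rng(1).normal(size=4800)
val, tau = iacc(sig, sig, fs)
print(round(float(val), 6), tau)  # 1.0 0.0
```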

  10. On the temporal window of auditory-brain system in connection with subjective responses

    NASA Astrophysics Data System (ADS)

    Mouri, Kiminori

    2003-08-01

    The human auditory-brain system processes information extracted from the autocorrelation function (ACF) of the source signal and the interaural cross-correlation function (IACF) of the binaural sound signals, which are associated with the left and right cerebral hemispheres, respectively. The purpose of this dissertation is to determine the desirable temporal window (2T: integration interval) for the ACF and IACF mechanisms. For the ACF mechanism, the change of Φ(0), i.e., the power of the ACF, was associated with the change of loudness, and it is shown that the recommended temporal window is about 30(τe)min [s]. The value of (τe)min is the minimum effective duration of the running ACF of the source signal. It is worth noting from the EEG experiments that the most preferred delay time of the first reflection is determined by the portion of the source signal exhibiting (τe)min. For the IACF mechanism, the temporal window is determined as follows: the measured range of τIACC corresponding to the subjective angle of a moving sound image depends on the temporal window. Here, the moving image was simulated using two loudspeakers located at +/-20° in the horizontal plane, reproducing amplitude-modulated band-limited noise alternately. It is found that the temporal window ranges from 0.03 to 1 [s] for modulation frequencies below 0.2 Hz. Thesis advisor: Yoichi Ando. Copies of this thesis written in English can be obtained from Kiminori Mouri, 5-3-3-1110 Harayama-dai, Sakai city, Osaka 590-0132, Japan. E-mail address: km529756@aol.com
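
    The effective duration (τe) used throughout this work can be illustrated with a simplified threshold-crossing estimate on the normalized ACF. This is a sketch, not Ando's exact definition (which fits a regression to the dB envelope of the ACF peaks); it simply takes the first decay of |φ(τ)| below 0.1:

```python
import numpy as np

def normalized_acf(x: np.ndarray, max_lag: int) -> np.ndarray:
    """phi(tau) = ACF(tau) / ACF(0) for lags 0..max_lag."""
    x = x - x.mean()
    phi0 = np.dot(x, x)
    return np.array([np.dot(x[: len(x) - k], x[k:]) / phi0
                     for k in range(max_lag + 1)])

def effective_duration(x: np.ndarray, fs: float, max_lag: int) -> float:
    """Simplified tau_e: first lag (s) where |phi(tau)| falls below 0.1."""
    phi = np.abs(normalized_acf(x, max_lag))
    below = np.nonzero(phi < 0.1)[0]
    return below[0] / fs if below.size else max_lag / fs

# White noise decorrelates almost immediately, so its tau_e is tiny;
# a sustained musical tone would yield a much longer value.
fs = 1000.0
noise = np.random.default_rng(2).normal(size=8000)
print(effective_duration(noise, fs, 500) < 0.01)  # True
```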

  11. Immuno-modulator inter-alpha inhibitor proteins ameliorate complex auditory processing deficits in rats with neonatal hypoxic-ischemic brain injury.

    PubMed

    Threlkeld, Steven W; Lim, Yow-Pin; La Rue, Molly; Gaudet, Cynthia; Stonestreet, Barbara S

    2017-03-10

    Hypoxic-ischemic (HI) brain injury is recognized as a significant problem in the perinatal period, contributing to life-long language-learning and other cognitive impairments. Central auditory processing deficits are common in infants with hypoxic-ischemic encephalopathy and have been shown to predict language-learning deficits in other at-risk infant populations. Inter-alpha inhibitor proteins (IAIPs) are a family of structurally related plasma proteins that modulate the systemic inflammatory response to infection and have been shown to attenuate cell death and improve learning outcomes after neonatal brain injury in rats. Here, we show that systemic administration of IAIPs during the early HI injury cascade ameliorates complex auditory discrimination deficits as compared to untreated HI-injured subjects, despite reductions in brain weight. These findings have significant clinical implications for improving central auditory processing deficits linked to language learning in neonates with HI-related brain injury.

  12. MULTICHANNEL ANALYZER

    DOEpatents

    Kelley, G.G.

    1959-11-10

    A multichannel pulse analyzer having several window amplifiers, each amplifier serving one group of channels, with a single fast pulse-lengthener and a single novel interrogation circuit serving all channels is described. A pulse followed too closely timewise by another pulse is disregarded by the interrogation circuit to prevent errors due to pulse pileup. The window amplifiers are connected to the pulse lengthener output, rather than the linear amplifier output, so need not have the fast response characteristic formerly required.

  13. Music and natural sounds in an auditory steady-state response based brain-computer interface to increase user acceptance.

    PubMed

    Heo, Jeong; Baek, Hyun Jae; Hong, Seunghyeok; Chang, Min Hye; Lee, Jeong Su; Park, Kwang Suk

    2017-03-18

    Patients with total locked-in syndrome are conscious; however, they cannot express themselves because most of their voluntary muscles are paralyzed, and many of these patients have lost their eyesight. To improve the quality of life of these patients, there is an increasing need for communication-supporting technologies that leverage the patient's remaining senses along with physiological signals. The auditory steady-state response (ASSR) is an electrophysiologic response to auditory stimulation that is amplitude-modulated at a specific frequency. By leveraging the phenomenon whereby the ASSR is modulated by focused attention, a brain-computer interface paradigm was proposed to classify the selective attention of the patient. In this paper, we propose an auditory stimulation method that minimizes auditory stress by replacing the monotone carrier with familiar music and natural sounds for an ergonomic system. Piano and violin instrumentals were employed in the music sessions; the sounds of water streaming and cicadas singing were used in the natural-sound sessions. Six healthy subjects participated in the experiment. Electroencephalograms were recorded using four electrodes (Cz, Oz, T7 and T8). Seven sessions were performed using different stimuli. The spectral power at 38 and 42 Hz and their ratio for each electrode were extracted as features. Linear discriminant analysis was used to classify the selections for each subject. In offline analysis, the average classification accuracies with a modulation index of 1.0 were 89.67% and 87.67% using music and natural sounds, respectively. In online experiments, the average classification accuracies were 88.3% and 80.0% using music and natural sounds, respectively. Using the proposed method, we obtained significantly higher user-acceptance scores while maintaining high average classification accuracy.
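
    The classification step described above (spectral power at the two modulation rates fed to linear discriminant analysis) can be sketched with a minimal two-class LDA; the feature values below are synthetic stand-ins for illustration, not data from the study:

```python
import numpy as np

def fit_lda(X0: np.ndarray, X1: np.ndarray):
    """Two-class LDA with pooled covariance: returns (w, b) such that
    the decision rule is  w @ x + b > 0  =>  class 1."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    S = (np.cov(X0, rowvar=False) * (len(X0) - 1)
         + np.cov(X1, rowvar=False) * (len(X1) - 1)) / (len(X0) + len(X1) - 2)
    w = np.linalg.solve(S, m1 - m0)
    b = -0.5 * (w @ (m0 + m1))
    return w, b

# Toy features: [power at 38 Hz, power at 42 Hz]. Class 0 attends the
# 38 Hz stream, class 1 the 42 Hz stream (synthetic values).
rng = np.random.default_rng(3)
X0 = rng.normal([3.0, 1.0], 0.3, size=(40, 2))   # high 38 Hz power
X1 = rng.normal([1.0, 3.0], 0.3, size=(40, 2))   # high 42 Hz power
w, b = fit_lda(X0, X1)
print(w @ np.array([0.9, 3.1]) + b > 0)  # True: classified as class 1
```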

  14. Brain bases for auditory stimulus-driven figure-ground segregation.

    PubMed

    Teki, Sundeep; Chait, Maria; Kumar, Sukhbinder; von Kriegstein, Katharina; Griffiths, Timothy D

    2011-01-05

    Auditory figure-ground segregation, listeners' ability to selectively hear out a sound of interest from a background of competing sounds, is a fundamental aspect of scene analysis. In contrast to the disordered acoustic environment we experience during everyday listening, most studies of auditory segregation have used relatively simple, temporally regular signals. We developed a new figure-ground stimulus that incorporates stochastic variation of the figure and background that captures the rich spectrotemporal complexity of natural acoustic scenes. Figure and background signals overlap in spectrotemporal space, but vary in the statistics of fluctuation, such that the only way to extract the figure is by integrating the patterns over time and frequency. Our behavioral results demonstrate that human listeners are remarkably sensitive to the appearance of such figures. In a functional magnetic resonance imaging experiment, aimed at investigating preattentive, stimulus-driven, auditory segregation mechanisms, naive subjects listened to these stimuli while performing an irrelevant task. Results demonstrate significant activations in the intraparietal sulcus (IPS) and the superior temporal sulcus related to bottom-up, stimulus-driven figure-ground decomposition. We did not observe any significant activation in the primary auditory cortex. Our results support a role for automatic, bottom-up mechanisms in the IPS in mediating stimulus-driven, auditory figure-ground segregation, which is consistent with accumulating evidence implicating the IPS in structuring sensory input and perceptual organization.

  15. Towards User-Friendly Spelling with an Auditory Brain-Computer Interface: The CharStreamer Paradigm

    PubMed Central

    Höhne, Johannes; Tangermann, Michael

    2014-01-01

    By decoding brain signals into control commands, brain-computer interfaces (BCIs) aim to establish an alternative communication pathway for locked-in patients. In contrast to most visual BCI approaches, which use event-related potentials (ERPs) of the electroencephalogram, auditory BCI systems must work with ERP responses that are less class-discriminant between attended and unattended stimuli. Furthermore, these auditory approaches have more complex interfaces, which impose a substantial workload on their users. Aiming for a maximally user-friendly spelling interface, this study introduces a novel auditory paradigm: “CharStreamer”. The speller can be used with an instruction as simple as “please attend to what you want to spell”. The stimuli of CharStreamer comprise 30 spoken sounds of letters and actions. As each of them is represented by the sound of itself and not by an artificial substitute, it can be selected in a one-step procedure. The mental mapping effort (sound stimuli to actions) is thus minimized. Usability is further supported by an alphabetical stimulus presentation: contrary to random presentation orders, the user can foresee the presentation time of the target letter sound. Healthy, normal-hearing users (n = 10) of the CharStreamer paradigm displayed ERP responses that systematically differed between target and non-target sounds. Class-discriminant features, however, varied individually from the typical N1-P2 complex and P3 ERP components found in control conditions with random sequences. To fully exploit the sequential presentation structure of CharStreamer, novel data analysis approaches and classification methods were introduced. The results of online spelling tests showed that a competitive spelling speed can be achieved with CharStreamer. With respect to user rating, it clearly outperforms a control setup with random presentation sequences. PMID:24886978

  16. Reduced auditory M100 asymmetry in schizophrenia and dyslexia: applying a developmental instability approach to assess atypical brain asymmetry.

    PubMed

    Edgar, J Christopher; Yeo, Ron A; Gangestad, Steven W; Blake, Melissa B; Davis, John T; Lewine, Jeffrey D; Cañive, José M

    2006-01-01

    Although atypical structural and functional superior temporal gyrus (STG) asymmetries are frequently observed in patients with schizophrenia and individuals with dyslexia, their significance is unclear. One possibility is that atypical asymmetries reflect a general risk factor that can be seen across multiple neurodevelopmental conditions--a risk factor whose origins are best understood in the context of Developmental Instability (DI) theory. DI measures (minor physical anomalies (MPAs) and fluctuating asymmetries (FAs)) reflect perturbation of the genetic plan. The present study sought to assess whether the presence of peripheral indices of DI predicts anomalous functional auditory cortex asymmetry in schizophrenia patients and dyslexia subjects. The location of the auditory M100 response was used as a measure of functional STG asymmetry, as it has been reported that in controls (but not in subjects with schizophrenia or dyslexia) the M100 source location in the right hemisphere is shifted anterior to that seen for the left hemisphere. Whole-brain auditory evoked magnetic field data were successfully recorded from 14 male schizophrenia patients, 21 male subjects with dyslexia, and 16 normal male control subjects. MPA and FA measures were also obtained. Replicating previous studies, both schizophrenia and dyslexia groups showed less M100 asymmetry than did controls. Schizophrenia and dyslexia subjects also had higher MPA scores than normal controls. Although neither total MPA nor FA measures predicted M100 asymmetry, analyses on individual MPA items revealed a relationship between high palate and M100 asymmetry. Findings suggest that M100 positional asymmetry is not a diagnostically specific feature in several neurodevelopmental conditions. Continued research examining DI and brain asymmetry relationships is warranted.

  17. ARX filtering of single-sweep movement-related brain macropotentials in mono- and multi-channel recordings.

    PubMed

    Capitanio, L; Filligoi, G C; Liberati, D; Cerutti, S; Babiloni, F; Fattorini, L; Urbano, A

    1994-03-01

    A technique of stochastic parametric identification and filtering is applied to the analysis of single-sweep event-related potentials. This procedure, called AutoRegressive with n eXogenous inputs (ARXn), models the recorded signal as the sum of n+1 signals: the background EEG activity, modeled as an autoregressive process driven by white noise, and n signals, one of which represents a filtered version of a reference signal carrying the average information contained in each sweep. The other (n-1) signals could represent various sources of noise (i.e., artifacts, EOG, etc.). An evaluation of the effects of both artifact suppression and accurate selection of the average signal on mono- or multi-channel scalp recordings is presented.
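
    The core of the ARXn procedure, restricted here to a single exogenous input, is an ordinary least-squares fit of each raw sweep against its own past samples and lagged copies of the reference (average) signal. A sketch on synthetic data, with the model orders and waveforms as illustrative assumptions:

```python
import numpy as np

def fit_arx(y: np.ndarray, u: np.ndarray, na: int, nb: int):
    """Least-squares fit of y[t] = -sum a_i*y[t-i] + sum b_j*u[t-j] + e[t].
    Returns (a, b, one-step predictions for y[max(na, nb):])."""
    n = max(na, nb)
    P = np.array([[-y[t - i] for i in range(1, na + 1)]
                  + [u[t - j] for j in range(1, nb + 1)]
                  for t in range(n, len(y))])
    theta, *_ = np.linalg.lstsq(P, y[n:], rcond=None)
    return theta[:na], theta[na:], P @ theta

# Toy sweep: a delayed, attenuated copy of the reference ERP buried in
# white 'EEG' noise. The ARX fit should explain the sweep down to the
# injected noise floor (sigma = 0.05).
rng = np.random.default_rng(4)
t = np.arange(500)
u = np.exp(-((t - 150) / 30.0) ** 2)              # reference (average ERP)
y = 0.8 * np.roll(u, 2) + rng.normal(0, 0.05, t.size)
a, b, pred = fit_arx(y, u, na=2, nb=4)
rms = np.sqrt(np.mean((y[4:] - pred) ** 2))
print(rms < 0.06)  # True: residual is at the noise level
```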

  18. Characteristics of Auditory Agnosia in a Child with Severe Traumatic Brain Injury: A Case Report

    ERIC Educational Resources Information Center

    Hattiangadi, Nina; Pillion, Joseph P.; Slomine, Beth; Christensen, James; Trovato, Melissa K.; Speedie, Lynn J.

    2005-01-01

    We present a case that is unusual in many respects from other documented incidences of auditory agnosia, including the mechanism of injury, age of the individual, and location of neurological insult. The clinical presentation is one of disturbance in the perception of spoken language, music, pitch, emotional prosody, and temporal auditory…

  19. Testing domain-general theories of perceptual awareness with auditory brain responses.

    PubMed

    Snyder, Joel S; Yerkes, Breanne D; Pitts, Michael A

    2015-06-01

    Past research has identified several candidate neural correlates of consciousness (NCCs) during visual perception. Recent research on auditory perception shows promise for establishing the generality of various NCCs across sensory modalities, as well as for revealing differences in how conscious processing unfolds in different sensory systems.

  20. Far-field brainstem responses evoked by vestibular and auditory stimuli exhibit increases in interpeak latency as brain temperature is decreased

    NASA Technical Reports Server (NTRS)

    Hoffman, L. F.; Horowitz, J. M.

    1984-01-01

    The effect of decreasing brain temperature on the brainstem auditory evoked response (BAER) in rats was investigated. Voltage pulses applied to a piezoelectric crystal attached to the skull were used to stimulate the auditory system by means of bone-conducted vibrations. The responses were recorded at brain temperatures of 37 C and 34 C. The peaks of the BAER recorded at 34 C were delayed in comparison with the peaks of the 37 C wave, and the later peaks were more delayed than the earlier peaks. These results indicate that interpeak latency increases as brain temperature is decreased. Preliminary experiments, in which responses to brief angular accelerations were used to measure the brainstem vestibular evoked response (BVER), have also indicated increases in interpeak latency as brain temperature is lowered.

  1. A trade-off between somatosensory and auditory related brain activity during object naming but not reading.

    PubMed

    Seghier, Mohamed L; Hope, Thomas M H; Prejawa, Susan; Parker Jones, 'Ōiwi; Vitkovitch, Melanie; Price, Cathy J

    2015-03-18

    The parietal operculum, particularly the cytoarchitectonic area OP1 of the secondary somatosensory area (SII), is involved in somatosensory feedback. Using fMRI with 58 human subjects, we investigated task-dependent differences in SII/OP1 activity during three familiar speech production tasks: object naming, reading and repeatedly saying "1-2-3." Bilateral SII/OP1 was significantly suppressed (relative to rest) during object naming, to a lesser extent when repeatedly saying "1-2-3" and not at all during reading. These results cannot be explained by task difficulty but the contrasting difference between naming and reading illustrates how the demands on somatosensory activity change with task, even when motor output (i.e., production of object names) is matched. To investigate what determined SII/OP1 deactivation during object naming, we searched the whole brain for areas where activity increased as that in SII/OP1 decreased. This across subject covariance analysis revealed a region in the right superior temporal sulcus (STS) that lies within the auditory cortex, and is activated by auditory feedback during speech production. The tradeoff between activity in SII/OP1 and STS was not observed during reading, which showed significantly more activation than naming in both SII/OP1 and STS bilaterally. These findings suggest that, although object naming is more error prone than reading, subjects can afford to rely more or less on somatosensory or auditory feedback during naming. In contrast, fast and efficient error-free reading places more consistent demands on both types of feedback, perhaps because of the potential for increased competition between lexical and sublexical codes at the articulatory level.

  2. Brain networks of novelty-driven involuntary and cued voluntary auditory attention shifting.

    PubMed

    Huang, Samantha; Belliveau, John W; Tengshe, Chinmayi; Ahveninen, Jyrki

    2012-01-01

    In everyday life, we need a capacity to flexibly shift attention between alternative sound sources. However, relatively little work has been done to elucidate the mechanisms of attention shifting in the auditory domain. Here, we used a mixed event-related/sparse-sampling fMRI approach to investigate this essential cognitive function. In each 10-sec trial, subjects were instructed to wait for an auditory "cue" signaling the location where a subsequent "target" sound was likely to be presented. The target was occasionally replaced by an unexpected "novel" sound in the uncued ear, to trigger involuntary attention shifting. To maximize the attention effects, cues, targets, and novels were embedded within dichotic 800-Hz vs. 1500-Hz pure-tone "standard" trains. The sound of clustered fMRI acquisition (starting at t = 7.82 sec) served as a controlled trial-end signal. Our approach revealed notable activation differences between the conditions. Cued voluntary attention shifting activated the superior intraparietal sulcus (IPS), whereas novelty-triggered involuntary orienting activated the inferior IPS and certain subareas of the precuneus. Clearly more widespread activations were observed during voluntary than involuntary orienting in the premotor cortex, including the frontal eye fields. Moreover, we found evidence for a frontoinsular-cingular attentional control network, consisting of the anterior insula, inferior frontal cortex, and medial frontal cortices, which were activated during both target discrimination and voluntary attention shifting. Finally, novels and targets activated much wider areas of superior temporal auditory cortices than shifting cues.

  3. Multichannel fiber-based diffuse reflectance spectroscopy for the rat brain exposed to a laser-induced shock wave: comparison between ipsi- and contralateral hemispheres

    NASA Astrophysics Data System (ADS)

    Miyaki, Mai; Kawauchi, Satoko; Okuda, Wataru; Nawashiro, Hiroshi; Takemura, Toshiya; Sato, Shunichi; Nishidate, Izumi

    2015-03-01

    Due to the considerable increase in terrorism using explosive devices, blast-induced traumatic brain injury (bTBI) is receiving much attention worldwide. However, little is known about the pathology and mechanism of bTBI. In our previous study, we found that cortical spreading depolarization (CSD) occurred in the hemisphere exposed to a laser-induced shock wave (LISW), which was followed by long-lasting hypoxemia-oligemia. However, there is no information on the events occurring in the contralateral hemisphere. In this study, we performed multichannel fiber-based diffuse reflectance spectroscopy on the rat brain exposed to an LISW and compared the results for the ipsilateral and contralateral hemispheres. A pair of optical fibers was placed on each of the exposed right and left parietal bones; white light was delivered to the brain through the source fibers and diffuse reflectance signals were collected with the detection fibers for both hemispheres. An LISW was applied to the left (ipsilateral) hemisphere. By analyzing the reflectance signals, we evaluated the occurrence of CSD, blood volume and oxygen saturation for both hemispheres. In the ipsilateral hemisphere, we observed the occurrence of CSD and long-lasting hypoxemia-oligemia in all rats examined (n=8), as in our previous study. In the contralateral hemisphere, on the other hand, no CSD was observed, but we observed oligemia in 7 of 8 rats and hypoxemia in 1 of 8 rats, suggesting a mechanism causing hypoxemia and/or oligemia that is not directly associated with CSD in the contralateral hemisphere.

  4. Suppression and facilitation of auditory neurons through coordinated acoustic and midbrain stimulation: investigating a deep brain stimulator for tinnitus

    NASA Astrophysics Data System (ADS)

    Offutt, Sarah J.; Ryan, Kellie J.; Konop, Alexander E.; Lim, Hubert H.

    2014-12-01

    Objective. The inferior colliculus (IC) is the primary processing center of auditory information in the midbrain and is one site of tinnitus-related activity. One potential option for suppressing the tinnitus percept is through deep brain stimulation via the auditory midbrain implant (AMI), which is designed for hearing restoration and is already being implanted in deaf patients who also have tinnitus. However, to assess the feasibility of AMI stimulation for tinnitus treatment we first need to characterize the functional connectivity within the IC. Previous studies have suggested modulatory projections from the dorsal cortex of the IC (ICD) to the central nucleus of the IC (ICC), though the functional properties of these projections need to be determined. Approach. In this study, we investigated the effects of electrical stimulation of the ICD on acoustic-driven activity within the ICC in ketamine-anesthetized guinea pigs. Main results. We observed that ICD stimulation induces both suppressive and facilitatory changes across the ICC that can occur immediately during stimulation and remain after stimulation. Additionally, ICD stimulation paired with broadband noise stimulation at a specific delay can induce greater suppressive than facilitatory effects, especially when stimulating more rostral and medial ICD locations. Significance. These findings demonstrate that ICD stimulation can induce specific types of plastic changes in ICC activity, which may be relevant for treating tinnitus. By using the AMI with electrode sites positioned within the ICD and the ICC, the modulatory effects of ICD stimulation can be tested directly in tinnitus patients.

  5. Brain stem auditory evoked potentials in patients with multiple system atrophy with progressive autonomic failure (Shy-Drager syndrome).

    PubMed Central

    Prasher, D; Bannister, R

    1986-01-01

    Brain stem potentials from three groups of patients, namely those with pure progressive autonomic failure, Parkinson's disease, and multisystem atrophy with progressive autonomic failure (Shy-Drager syndrome), were compared with each other and with a group of normal subjects. In virtually all the patients with multisystem atrophy with progressive autonomic failure the brain stem potentials were abnormal, in contrast to the normal findings in Parkinson's disease. The closely associated group of patients with progressive autonomic failure alone also showed no abnormalities of the BAEP. This separation of Parkinson's disease and pure progressive autonomic failure from multisystem atrophy with progressive autonomic failure is clinically important, as multiple system atrophy of the Shy-Drager type has extrapyramidal features closely resembling Parkinsonism or a late-onset cerebellar degeneration. From the abnormalities of the brain stem response in multisystem atrophy with progressive autonomic failure, it is clear that some disruption of the auditory pathway occurs in the ponto-medullary region, as in nearly all patients there is a significant delay or reduction in the amplitude of the components of the response generated beyond this region. The most likely area involved is the superior olivary complex. PMID:3958741

  6. “Where Do Auditory Hallucinations Come From?”—A Brain Morphometry Study of Schizophrenia Patients With Inner or Outer Space Hallucinations

    PubMed Central

    Plaze, Marion; Paillère-Martinot, Marie-Laure; Penttilä, Jani; Januel, Dominique; de Beaurepaire, Renaud; Bellivier, Franck; Andoh, Jamila; Galinowski, André; Gallarda, Thierry; Artiges, Eric; Olié, Jean-Pierre; Mangin, Jean-François; Martinot, Jean-Luc

    2011-01-01

    Auditory verbal hallucinations are a cardinal symptom of schizophrenia. Bleuler and Kraepelin distinguished 2 main classes of hallucinations: hallucinations heard outside the head (outer space, or external, hallucinations) and hallucinations heard inside the head (inner space, or internal, hallucinations). This distinction has been confirmed by recent phenomenological studies that identified 3 independent dimensions in auditory hallucinations: language complexity, self-other misattribution, and spatial location. Brain imaging studies in schizophrenia patients with auditory hallucinations have already investigated language complexity and self-other misattribution, but the neural substrate of hallucination spatial location remains unknown. Magnetic resonance images of 45 right-handed patients with schizophrenia and persistent auditory hallucinations and 20 healthy right-handed subjects were acquired. Two homogeneous subgroups of patients were defined based on the hallucination spatial location: patients with only outer space hallucinations (N = 12) and patients with only inner space hallucinations (N = 15). Between-group differences were then assessed using 2 complementary brain morphometry approaches: voxel-based morphometry and sulcus-based morphometry. Convergent anatomical differences were detected between the patient subgroups in the right temporoparietal junction (rTPJ). In comparison to healthy subjects, opposite deviations in white matter volumes and sulcus displacements were found in patients with inner space hallucinations and patients with outer space hallucinations. The current results indicate that the spatial location of auditory hallucinations is associated with rTPJ anatomy, a key region of the “where” auditory pathway. The detected tilt in the sulcal junction suggests deviations during early brain maturation, when the superior temporal sulcus and its anterior terminal branch appear and merge. PMID:19666833

  7. Long-range correlation properties in timing of skilled piano performance: the influence of auditory feedback and deep brain stimulation

    PubMed Central

    Herrojo Ruiz, María; Hong, Sang Bin; Hennig, Holger; Altenmüller, Eckart; Kühn, Andrea A.

    2014-01-01

    Unintentional timing deviations during musical performance can be conceived of as timing errors. However, recent research on humanizing computer-generated music has demonstrated that timing fluctuations that exhibit long-range temporal correlations (LRTC) are preferred by human listeners. This preference can be accounted for by the ubiquitous presence of LRTC in human tapping and rhythmic performances. Interestingly, the manifestation of LRTC in tapping behavior seems to be driven in a subject-specific manner by the LRTC properties of resting-state background cortical oscillatory activity. In this framework, the current study aimed to investigate whether propagation of timing deviations during the skilled, memorized piano performance (without metronome) of 17 professional pianists exhibits LRTC and whether the structure of the correlations is influenced by the presence or absence of auditory feedback. As an additional goal, we set out to investigate the influence of altering the dynamics along the cortico-basal-ganglia-thalamo-cortical network via deep brain stimulation (DBS) on the LRTC properties of musical performance. Specifically, we investigated temporal deviations during the skilled piano performance of a non-professional pianist who was treated with subthalamic-deep brain stimulation (STN-DBS) due to severe Parkinson's disease, with predominant tremor affecting his right upper extremity. In the tremor-affected right hand, the timing fluctuations of the performance exhibited random correlations with DBS OFF. By contrast, DBS restored long-range dependency in the temporal fluctuations, corresponding with the general motor improvement on DBS. Overall, the present investigations demonstrate the presence of LRTC in skilled piano performances, indicating that unintentional temporal deviations are correlated over a wide range of time scales. This phenomenon is stable after removal of the auditory feedback, but is altered by STN-DBS, which suggests that cortico…
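LRTC of the kind discussed above are conventionally quantified with detrended fluctuation analysis (DFA), where a scaling exponent α ≈ 0.5 indicates random (uncorrelated) fluctuations and α > 0.5 indicates long-range dependency. A minimal NumPy sketch, not the authors' exact pipeline:

```python
import numpy as np

def dfa_alpha(x, scales):
    """Estimate the DFA scaling exponent alpha of a 1-D series.

    alpha ~ 0.5: uncorrelated fluctuations; alpha > 0.5: LRTC.
    """
    y = np.cumsum(x - np.mean(x))              # integrated profile
    flucts = []
    for s in scales:
        n_win = len(y) // s
        segs = y[:n_win * s].reshape(n_win, s)
        t = np.arange(s)
        # RMS of the linearly detrended profile in each window
        rms = [np.sqrt(np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2))
               for seg in segs]
        flucts.append(np.mean(rms))
    # alpha is the slope of log F(s) versus log s
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)              # surrogate "timing errors"
alpha = dfa_alpha(white, scales=[16, 32, 64, 128, 256])
```

On this reading, the "random correlations" reported with DBS OFF would correspond to α near 0.5, and restored long-range dependency to α above it.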

  8. Potassium conductance dynamics confer robust spike-time precision in a neuromorphic model of the auditory brain stem

    PubMed Central

    Boahen, Kwabena

    2013-01-01

    A fundamental question in neuroscience is how neurons perform precise operations despite inherent variability. This question also applies to neuromorphic engineering, where low-power microchips emulate the brain using large populations of diverse silicon neurons. Biological neurons in the auditory pathway display precise spike timing, critical for sound localization and interpretation of complex waveforms such as speech, even though they are a heterogeneous population. Silicon neurons are also heterogeneous, due to a key design constraint in neuromorphic engineering: smaller transistors offer lower power consumption and more neurons per unit area of silicon, but also more variability between transistors and thus between silicon neurons. Utilizing this variability in a neuromorphic model of the auditory brain stem with 1,080 silicon neurons, we found that a low-voltage-activated potassium conductance (gKL) enables precise spike timing via two mechanisms: statically reducing the resting membrane time constant and dynamically suppressing late synaptic inputs. The relative contribution of these two mechanisms is unknown because blocking gKL in vitro eliminates dynamic adaptation but also lengthens the membrane time constant. We replaced gKL with a static leak in silico to recover the short membrane time constant and found that silicon neurons could mimic the spike-time precision of their biological counterparts, but only over a narrow range of stimulus intensities and biophysical parameters. The dynamics of gKL were required for precise spike timing robust to stimulus variation across a heterogeneous population of silicon neurons, thus explaining how neural and neuromorphic systems may perform precise operations despite inherent variability. PMID:23554436

  9. Brain activity in predominantly-inattentive subtype attention-deficit/hyperactivity disorder during an auditory oddball attention task.

    PubMed

    Orinstein, Alyssa J; Stevens, Michael C

    2014-08-30

    Previous functional neuroimaging studies have found brain activity abnormalities in attention-deficit/hyperactivity disorder (ADHD) on numerous cognitive tasks. However, little is known about brain dysfunction unique to the predominantly-inattentive subtype of ADHD (ADHD-I), despite debate as to whether DSM-IV-defined ADHD subtypes differ in etiology. This study compared brain activity of 18 ADHD-I adolescents (ages 12-18) and 20 non-psychiatric age-matched control participants on a functional magnetic resonance imaging (fMRI) auditory oddball attention task. ADHD-I participants had significant activation deficits to infrequent target stimuli in bilateral superior temporal gyri, bilateral insula, several midline cingulate/medial frontal gyrus regions, right posterior parietal cortex, thalamus, cerebellum, and brainstem. To novel stimuli, ADHD-I participants had reduced activation in bilateral lateral temporal lobe structures. There were no brain regions where ADHD-I participants had greater hemodynamic activity to targets or novels than controls. Brain activity deficits in ADHD-I participants were found in several regions important to attentional orienting and working memory-related cognitive processes involved in target identification. These results differ from those in previously studied adolescents with combined-subtype ADHD, who had a lesser magnitude of activation abnormalities in frontoparietal regions and relatively more discrete regional deficits to novel stimuli. The divergent findings suggest different etiological factors might underlie attention deficits in different DSM-IV-defined ADHD subtypes, and they have important implications for the DSM-V reconceptualization of subtypes as varying clinical presentations of the same core disorder.

  10. Alterations in brain-stem auditory evoked potentials among drug addicts

    PubMed Central

    Garg, Sonia; Sharma, Rajeev; Mittal, Shilekh; Thapar, Satish

    2015-01-01

    Objective: To compare the absolute latencies, the interpeak latencies, and amplitudes of different waveforms of brainstem auditory evoked potentials (BAEP) in different drug abusers and controls, and to identify early neurological damage in persons who abuse different drugs so that proper counseling and timely intervention can be undertaken. Methods: In this cross-sectional study, BAEPs were assessed by a data acquisition and analysis system in 58 male drug abusers aged 15-45 years as well as in 30 age-matched healthy controls. The absolute peak latencies and the interpeak latencies of BAEP were analyzed by applying one-way ANOVA and Student's t-test. The study was carried out at the GGS Medical College, Faridkot, Punjab, India between July 2012 and May 2013. Results: The difference in the absolute peak latencies and interpeak latencies of BAEP in the 2 groups was found to be statistically significant in both ears (p<0.05). However, the difference in the amplitude ratio in both ears was found to be statistically insignificant. Conclusion: Chronic intoxication by different drugs has been extensively associated with prolonged absolute peak latencies and interpeak latencies of BAEP in drug abusers, reflecting an adverse effect of drug dependence on neural transmission in central auditory nerve pathways. PMID:26166594

  11. Brain dynamics that correlate with effects of learning on auditory distance perception.

    PubMed

    Wisniewski, Matthew G; Mercado, Eduardo; Church, Barbara A; Gramann, Klaus; Makeig, Scott

    2014-01-01

    Accuracy in auditory distance perception can improve with practice and varies for sounds differing in familiarity. Here, listeners were trained to judge the distances of English, Bengali, and backwards speech sources pre-recorded at near (2-m) and far (30-m) distances. Listeners' accuracy was tested before and after training. Improvements from pre-test to post-test were greater for forward speech, demonstrating a learning advantage for forward speech sounds. Independent component (IC) processes identified in electroencephalographic (EEG) data collected during pre- and post-testing revealed three clusters of ICs across subjects with stimulus-locked spectral perturbations related to learning and accuracy. One cluster exhibited a transient stimulus-locked increase in 4-8 Hz power (theta event-related synchronization; ERS) that was smaller after training and largest for backwards speech. For a left temporal cluster, 8-12 Hz decreases in power (alpha event-related desynchronization; ERD) were greatest for English speech and less prominent after training. In contrast, a cluster of IC processes centered at or near anterior portions of the medial frontal cortex showed learning-related enhancement of sustained increases in 10-16 Hz power (upper-alpha/low-beta ERS). The degree of this enhancement was positively correlated with the degree of behavioral improvements. Results suggest that neural dynamics in non-auditory cortical areas support distance judgments. Further, frontal cortical networks associated with attentional and/or working memory processes appear to play a role in perceptual learning for source distance.

  12. Attention effects on auditory scene analysis: insights from event-related brain potentials.

    PubMed

    Spielmann, Mona Isabel; Schröger, Erich; Kotz, Sonja A; Bendixen, Alexandra

    2014-01-01

    Sounds emitted by different sources arrive at our ears as a mixture that must be disentangled before meaningful information can be retrieved. It is still a matter of debate whether this decomposition happens automatically or requires the listener's attention. These opposite positions partly stem from different methodological approaches to the problem. We propose an integrative approach that combines the logic of previous measurements targeting either auditory stream segregation (interpreting a mixture as coming from two separate sources) or integration (interpreting a mixture as originating from only one source). By means of combined behavioral and event-related potential (ERP) measures, our paradigm has the potential to measure stream segregation and integration at the same time, providing the opportunity to obtain positive evidence of either one. This reduces the reliance on zero findings (i.e., the occurrence of stream integration in a given condition can be demonstrated directly, rather than indirectly based on the absence of empirical evidence for stream segregation, and vice versa). With this two-way approach, we systematically manipulate attention devoted to the auditory stimuli (by varying their task relevance) and to their underlying structure (by delivering perceptual tasks that require segregated or integrated percepts). ERP results based on the mismatch negativity (MMN) show no evidence for a modulation of stream integration by attention, while stream segregation results were less clear due to overlapping attention-related components in the MMN latency range. We suggest future studies combining the proposed two-way approach with some improvements in the ERP measurement of sequential stream segregation.

  13. Brain dynamics that correlate with effects of learning on auditory distance perception

    PubMed Central

    Wisniewski, Matthew G.; Mercado, Eduardo; Church, Barbara A.; Gramann, Klaus; Makeig, Scott

    2014-01-01

    Accuracy in auditory distance perception can improve with practice and varies for sounds differing in familiarity. Here, listeners were trained to judge the distances of English, Bengali, and backwards speech sources pre-recorded at near (2-m) and far (30-m) distances. Listeners' accuracy was tested before and after training. Improvements from pre-test to post-test were greater for forward speech, demonstrating a learning advantage for forward speech sounds. Independent component (IC) processes identified in electroencephalographic (EEG) data collected during pre- and post-testing revealed three clusters of ICs across subjects with stimulus-locked spectral perturbations related to learning and accuracy. One cluster exhibited a transient stimulus-locked increase in 4–8 Hz power (theta event-related synchronization; ERS) that was smaller after training and largest for backwards speech. For a left temporal cluster, 8–12 Hz decreases in power (alpha event-related desynchronization; ERD) were greatest for English speech and less prominent after training. In contrast, a cluster of IC processes centered at or near anterior portions of the medial frontal cortex showed learning-related enhancement of sustained increases in 10–16 Hz power (upper-alpha/low-beta ERS). The degree of this enhancement was positively correlated with the degree of behavioral improvements. Results suggest that neural dynamics in non-auditory cortical areas support distance judgments. Further, frontal cortical networks associated with attentional and/or working memory processes appear to play a role in perceptual learning for source distance. PMID:25538550

  14. Monitoring therapeutic efficacy of decompressive craniotomy in space occupying cerebellar infarcts using brain-stem auditory evoked potentials.

    PubMed

    Krieger, D; Adams, H P; Rieke, K; Hacke, W

    1993-01-01

    Brain-stem auditory evoked potentials (BAEPs) have been used to gauge effects of brain-stem dysfunction in humans and animal models. The purpose of this study was to evaluate the usefulness of BAEP in monitoring patients undergoing decompressive surgery of the posterior fossa for space occupying cerebellar infarcts. We report on serial BAEP recordings in 11 comatose patients with space occupying cerebellar infarcts undergoing decompressive craniotomy. BAEP studies were performed within 12 h after admission, 24 h following surgery and prior to extubation. BAEP signals were analyzed using latency determination and cross-correlation. Following surgery, 9 patients regained consciousness; 2 patients persisted in a comatose state and died subsequently. BAEP interpeak latency (IPL) I-V assessed prior to surgery exceeded normal values in all patients in whom it could be reliably measured (N = 9). Following decompressive surgery BAEP wave I-V IPL normalized in 5 patients, but remained prolonged despite dramatic clinical improvement in 4 patients. We prospectively computed the coefficient of cross-correlation (MCC) of combined ipsilateral BAEP trials after right and left ear stimulation. In all patients increasing MCC was associated with clinical improvement. Unchanging or decreasing MCC indicated poor outcome. We conclude that serial BAEP studies are an appropriate perioperative monitoring modality in patients with space occupying cerebellar infarcts undergoing decompressive surgery of the posterior fossa. Our study suggests advantages of cross-correlation analysis as an objective signal processing strategy; relevant information can be extracted even if BAEP wave discrimination is impossible due to severe brain-stem dysfunction.
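The trial-to-trial cross-correlation analysis described above can be illustrated as the peak of a normalized cross-correlation between two evoked responses; a hedged sketch (the function name and normalization are illustrative, not the authors' exact MCC definition):

```python
import numpy as np

def max_xcorr_coeff(a, b):
    """Peak of the normalized cross-correlation of two trials.

    Equals 1.0 for identical waveforms and falls toward 0 as the
    responses decorrelate.
    """
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return float(np.max(np.correlate(a, b, mode="full")))

t = np.linspace(0.0, 0.01, 500)            # 10 ms of simulated response
trial = np.sin(2 * np.pi * 1000 * t)       # illustrative 1 kHz component
mcc_same = max_xcorr_coeff(trial, trial)   # identical trials -> 1.0
```

The appeal of such a correlation index, as the abstract notes, is that it remains computable even when individual BAEP waves can no longer be discriminated.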

  15. To Study Brain Stem Auditory Evoked Potential in Patients with Type 2 Diabetes Mellitus- A Cross- Sectional Comparative Study

    PubMed Central

    Muneshwar, J.N.; Afroz, Sayeeda

    2016-01-01

    Introduction Neuropathy is one of the commonest complications of Diabetes Mellitus (DM). Apart from peripheral and autonomic neuropathy, patients with type 2 DM may also suffer from sensorineural hearing loss, which is more severe at higher frequencies. However, few studies have done detailed evaluation of the sensory pathway in these patients. In this study, brain stem auditory evoked potentials are used to detect acoustic and central neuropathy in a group of patients with type 2 DM with controlled and uncontrolled blood sugar. Aim To study brain stem auditory evoked potentials in patients with type 2 DM with controlled and uncontrolled blood sugar, and to correlate various parameters, e.g., age (years), weight (kg), height (m), BMI (kg/m2), and HbA1c (%), in these patients. Materials and Methods Cross-sectional comparative study conducted from January 2014 to January 2015. A total of 60 patients with type 2 DM of either sex, aged 35-50 years, were enrolled from the Diabetic Clinic of the Medicine department of a tertiary care hospital. Based on the value of HbA1c, patients were divided into two groups, with controlled and uncontrolled blood sugar, each comprising 30 patients. BERA (Brainstem Evoked Response Audiometry) was done in both groups on an RMS ALERON 201/401. Recordings were taken at 70 dB, 80 dB, and 90 dB at 2 kHz. Absolute latencies of waves I, III, and V and interpeak latencies I-III, III-V, and I-V were recorded. Results The absolute latencies of BERA waves I, III, and V and the interpeak latencies I-III, III-V, and I-V at 2 kHz, at intensities of 70 dB, 80 dB, and 90 dB, were significantly delayed in the uncontrolled DM group compared with the controlled DM group. Conclusion BERA in diabetic patients can detect central neuropathy earlier, particularly in those with uncontrolled blood sugar. PMID:28050358

  16. Positron emission tomography (PET) analysis of the effects of auditory stimulation on the distribution of ¹¹C-N-methylchlorphentermine in the brain

    SciTech Connect

    Paschal, C.B.

    1986-06-01

    This experimental work was launched to study how auditory stimulation affects blood flow in the brain. The technique used was positron emission tomography (PET) with ¹¹C-N-methylchlorphentermine (¹¹C-NMCP) as the tracer. ¹¹C-NMCP acts as a molecular microsphere and thus measures blood flow. The objectives of this work were: to develop, test, and refine an experimental procedure; to design and construct a universally applicable positioning device; and to develop and test a synthesis for a radiopure solution of ¹¹C-NMCP; all were accomplished. PET was used to observe the brain distribution of ¹¹C-NMCP during binaural and monaural stimulation. The data were analyzed by finding the signal intensity in regions of the image representing the left and right inferior colliculi (ICs), brain structures dedicated to the processing of auditory signals. The binaural tests indicated a statistically significant tendency for slightly higher concentration of the tracer in the left IC than in the right IC. The monaural tests combined with those of the binaural state were not solidly conclusive; however, three of the four cases showed a decrease in tracer uptake in the IC opposite the zero-stimulus ear, as expected. There is some indication that the anesthesia used in the majority of this work may have interfered with the blood flow response to auditory stimulation. 39 refs., 17 figs., 3 tabs.

  17. Age-Related Changes in Transient and Oscillatory Brain Responses to Auditory Stimulation during Early Adolescence

    ERIC Educational Resources Information Center

    Poulsen, Catherine; Picton, Terence W.; Paus, Tomas

    2009-01-01

    Maturational changes in the capacity to process quickly the temporal envelope of sound have been linked to language abilities in typically developing individuals. As part of a longitudinal study of brain maturation and cognitive development during adolescence, we employed dense-array EEG and spatiotemporal source analysis to characterize…

  18. Diagnostic System Based on the Human AUDITORY-BRAIN Model for Measuring Environmental NOISE—AN Application to Railway Noise

    NASA Astrophysics Data System (ADS)

    SAKAI, H.; HOTEHAMA, T.; ANDO, Y.; PRODI, N.; POMPOLI, R.

    2002-02-01

    Measurements of railway noise were conducted by use of a diagnostic system for regional environmental noise. The system is based on a model of the human auditory-brain system. The model consists of the interplay of autocorrelators and an interaural crosscorrelator acting on the pressure signals arriving at the ear entrances, and takes into account the specialization of the left and right human cerebral hemispheres. Different kinds of railway noise were measured through the binaural microphones of a dummy head. To characterize the railway noise, physical factors extracted from the autocorrelation functions (ACF) and the interaural crosscorrelation function (IACF) of the binaural signals were used. The factors extracted from the ACF were (1) the energy at the origin of the delay, Φ(0), (2) the effective duration of the envelope of the normalized ACF, τe, (3) the delay time of the first peak, τ1, and (4) its amplitude, φ1. The factors extracted from the IACF were (5) the IACC, (6) the interaural delay time at which the IACC is defined, τIACC, and (7) the width of the IACF at τIACC, WIACC. The factor Φ(0) can be represented as the geometric mean of the energies at both ears, the listening level, LL.
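Two of the ACF factors, the first-peak delay τ1 and its amplitude φ1, fall directly out of the normalized autocorrelation. A minimal NumPy sketch for a discrete-time signal (illustrative only; the actual system operates on running ACFs of binaural recordings):

```python
import numpy as np

def acf_first_peak(x):
    """Return (tau_1, phi_1): lag (in samples) and amplitude of the
    first local maximum of the normalized ACF after zero delay."""
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf = acf / acf[0]                        # Phi(0) scales to 1
    for k in range(1, len(acf) - 1):
        if acf[k] > acf[k - 1] and acf[k] >= acf[k + 1]:
            return k, acf[k]
    return None, None

fs = 8000
t = np.arange(fs) / fs                        # 1 s of signal
tone = np.sin(2 * np.pi * 100 * t)            # 100 Hz tone, period 80 samples
tau1, phi1 = acf_first_peak(tone)             # tau1 ~ 80, phi1 near 1
```

For a strongly periodic source such as wheel-rail noise, a large φ1 at delay τ1 signals a pronounced perceived pitch at 1/τ1.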

  19. Neuronal coupling by endogenous electric fields: cable theory and applications to coincidence detector neurons in the auditory brain stem.

    PubMed

    Goldwyn, Joshua H; Rinzel, John

    2016-04-01

    The ongoing activity of neurons generates a spatially and time-varying field of extracellular voltage (Ve). This Ve field reflects population-level neural activity, but does it modulate neural dynamics and the function of neural circuits? We provide a cable theory framework to study how a bundle of model neurons generates Ve and how this Ve feeds back and influences membrane potential (Vm). We find that these "ephaptic interactions" are small but not negligible. The model neural population can generate Ve with millivolt-scale amplitude, and this Ve perturbs the Vm of "nearby" cables and effectively increases their electrotonic length. After using passive cable theory to systematically study ephaptic coupling, we explore a test case: the medial superior olive (MSO) in the auditory brain stem. The MSO is a possible locus of ephaptic interactions: sounds evoke large (millivolt scale) Ve in vivo in this nucleus. The Ve response is thought to be generated by MSO neurons that perform a known neuronal computation with submillisecond temporal precision (coincidence detection to encode sound source location). Using a biophysically based model of MSO neurons, we find millivolt-scale ephaptic interactions consistent with the passive cable theory results. These subtle membrane potential perturbations induce changes in spike initiation threshold, spike time synchrony, and time difference sensitivity. These results suggest that ephaptic coupling may influence MSO function.
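The "electrotonic length" at issue here is cable length measured in units of the passive space constant λ = sqrt(d·Rm / (4·Ri)). A textbook calculation with illustrative parameter values (not values from the authors' MSO model):

```python
import math

def space_constant_cm(d_um, Rm_ohm_cm2, Ri_ohm_cm):
    """Passive cable space constant lambda = sqrt(d * Rm / (4 * Ri)).

    d_um: diameter (micrometers); Rm: specific membrane resistance
    (ohm*cm^2); Ri: axial resistivity (ohm*cm). Returns cm.
    """
    d_cm = d_um * 1e-4
    return math.sqrt(d_cm * Rm_ohm_cm2 / (4.0 * Ri_ohm_cm))

# illustrative parameters only (not from the paper)
lam = space_constant_cm(d_um=2.0, Rm_ohm_cm2=10000.0, Ri_ohm_cm=100.0)
L = 0.05 / lam    # electrotonic length of a 500-um (0.05 cm) cable
```

In standard cable theory, any perturbation that effectively lowers Rm shortens λ and thereby increases L, which is one sense in which a Ve perturbation can make a cable electrotonically longer.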

  20. The combined monitoring of brain stem auditory evoked potentials and intracranial pressure in coma. A study of 57 patients.

    PubMed Central

    García-Larrea, L; Artru, F; Bertrand, O; Pernier, J; Mauguière, F

    1992-01-01

    Continuous monitoring of brainstem auditory evoked potentials (BAEPs) was carried out in 57 comatose patients for periods ranging from 5 hours to 13 days. In 53 cases intracranial pressure (ICP) was also simultaneously monitored. The study of relative changes of evoked potentials over time proved more relevant to prognosis than the mere consideration of "statistical normality" of waveforms; thus progressive degradation of the BAEPs was associated with a bad outcome even if the responses remained within normal limits. Contrary to previous reports, a normal BAEP obtained during the second week of coma did not necessarily indicate a good vital outcome; it could, however, do so in cases with a low probability of secondary insults. The simultaneous study of BAEPs and ICP showed that apparently significant (greater than 40 mm Hg) acute rises in ICP were not always followed by BAEP changes. The stability of BAEPs despite "significant" ICP rises was associated in our patients with a high probability of survival, while prolongation of central latency of BAEPs in response to ICP modifications was almost invariably followed by brain death. Continuous monitoring of brainstem responses provided a useful physiological counterpart to physical parameters such as ICP. Serial recording of cortical EPs should be added to BAEP monitoring to permit the early detection of rostrocaudal deterioration. PMID:1402970

  1. Neuronal coupling by endogenous electric fields: cable theory and applications to coincidence detector neurons in the auditory brain stem

    PubMed Central

    Rinzel, John

    2016-01-01

    The ongoing activity of neurons generates a spatially and time-varying field of extracellular voltage (Ve). This Ve field reflects population-level neural activity, but does it modulate neural dynamics and the function of neural circuits? We provide a cable theory framework to study how a bundle of model neurons generates Ve and how this Ve feeds back and influences membrane potential (Vm). We find that these “ephaptic interactions” are small but not negligible. The model neural population can generate Ve with millivolt-scale amplitude, and this Ve perturbs the Vm of “nearby” cables and effectively increases their electrotonic length. After using passive cable theory to systematically study ephaptic coupling, we explore a test case: the medial superior olive (MSO) in the auditory brain stem. The MSO is a possible locus of ephaptic interactions: sounds evoke large (millivolt scale) Ve in vivo in this nucleus. The Ve response is thought to be generated by MSO neurons that perform a known neuronal computation with submillisecond temporal precision (coincidence detection to encode sound source location). Using a biophysically based model of MSO neurons, we find millivolt-scale ephaptic interactions consistent with the passive cable theory results. These subtle membrane potential perturbations induce changes in spike initiation threshold, spike time synchrony, and time difference sensitivity. These results suggest that ephaptic coupling may influence MSO function. PMID:26823512

  2. Effect of middle ear effusion on the brain-stem auditory evoked response of Cavalier King Charles Spaniels.

    PubMed

    Harcourt-Brown, Thomas R; Parker, John E; Granger, Nicolas; Jeffery, Nick D

    2011-06-01

    Brain-stem auditory evoked responses (BAER) were assessed in 23 Cavalier King Charles Spaniels with and without middle ear effusion at sound intensities ranging from 10 to 100 dB nHL. Significant differences were found between the median BAER threshold for ears with effusions (60 dB nHL) and those without (30 dB nHL) (P=0.001). The slopes of the latency-intensity functions from the two groups did not differ, but the y-intercept was greater in dogs with effusions (P=0.009), consistent with conductive hearing loss. Analysis of the latency-intensity functions suggested the degree of hearing loss due to middle ear effusion was 21 dB (95% confidence interval 10 to 33 dB). The wave I-V inter-wave latency at 90 dB nHL was not significantly different between the two groups. These findings demonstrate that middle ear effusion is associated with a conductive hearing loss of 10-33 dB in affected dogs despite the fact that all animals studied were considered to have normal hearing by their owners.
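The hearing-loss estimate in this design comes from the horizontal shift between two parallel latency-intensity lines. A sketch with synthetic numbers chosen to reproduce the reported 21 dB shift (not the study's data):

```python
import numpy as np

# hypothetical wave-V latency (ms) versus stimulus intensity (dB nHL)
intensity = np.array([40.0, 50.0, 60.0, 70.0, 80.0, 90.0])
normal = 8.0 - 0.03 * intensity                 # illustrative control line
effusion = 8.0 - 0.03 * (intensity - 21.0)      # same slope, shifted 21 dB

s_n, b_n = np.polyfit(intensity, normal, 1)     # slope, intercept per group
s_e, b_e = np.polyfit(intensity, effusion, 1)
loss_db = (b_e - b_n) / -s_n                    # horizontal shift, in dB
```

Because the slopes are equal, the intercept difference divided by the (negated) slope recovers the attenuation a conductive loss imposes on the stimulus.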

  3. Auditory verbal hallucinations and brain dysconnectivity in the perisylvian language network: a multimodal investigation.

    PubMed

    Benetti, Stefania; Pettersson-Yeo, William; Allen, Paul; Catani, Marco; Williams, Steven; Barsaglini, Alessio; Kambeitz-Ilankovic, Lana M; McGuire, Philip; Mechelli, Andrea

    2015-01-01

    Neuroimaging studies of schizophrenia have indicated that the development of auditory verbal hallucinations (AVHs) is associated with altered structural and functional connectivity within the perisylvian language network. However, these studies focussed mainly on either structural or functional alterations in patients with chronic schizophrenia. Therefore, they were unable to examine the relationship between the 2 types of measures and could not establish whether the observed alterations would be expressed in the early stage of the illness. We used diffusion tensor imaging and functional magnetic resonance imaging to examine white matter integrity and functional connectivity within the left perisylvian language network of 46 individuals with an at-risk mental state for psychosis or a first episode of the illness, including 28 who had developed AVHs (AVH group) and 18 who had not (nonauditory verbal hallucination [nAVH] group), and 22 healthy controls. Inferences were made at P < .05 (corrected). The nAVH group relative to healthy controls showed a reduction of both white matter integrity and functional connectivity as well as a disruption of the normal structure-function relationship along the fronto-temporal pathway. For all measures, the AVH group showed intermediate values between healthy controls and the nAVH group. These findings seem to suggest that, in the early stage of the disorder, a significant impairment of fronto-temporal connectivity is evident in patients who do not experience AVHs. This is consistent with the hypothesis that, whilst mild disruption of connectivity might still enable the emergence of AVHs, more severe alterations may prevent the occurrence of the hallucinatory experience.

  4. Auditory Verbal Hallucinations and Brain Dysconnectivity in the Perisylvian Language Network: A Multimodal Investigation

    PubMed Central

    Pettersson-Yeo, William; Allen, Paul; Catani, Marco; Williams, Steven; Barsaglini, Alessio; Kambeitz-Ilankovic, Lana M.; McGuire, Philip; Mechelli, Andrea

    2015-01-01

Neuroimaging studies of schizophrenia have indicated that the development of auditory verbal hallucinations (AVHs) is associated with altered structural and functional connectivity within the perisylvian language network. However, these studies focussed mainly on either structural or functional alterations in patients with chronic schizophrenia. Therefore, they were unable to examine the relationship between the 2 types of measures and could not establish whether the observed alterations would be expressed in the early stage of the illness. We used diffusion tensor imaging and functional magnetic resonance imaging to examine white matter integrity and functional connectivity within the left perisylvian language network of 46 individuals with an at-risk mental state for psychosis or a first episode of the illness, including 28 who had developed AVHs (AVH group) and 18 who had not (non-auditory verbal hallucination [nAVH] group), and 22 healthy controls. Inferences were made at P < .05 (corrected). The nAVH group relative to healthy controls showed a reduction of both white matter integrity and functional connectivity as well as a disruption of the normal structure-function relationship along the fronto-temporal pathway. For all measures, the AVH group showed intermediate values between healthy controls and the nAVH group. These findings seem to suggest that, in the early stage of the disorder, a significant impairment of fronto-temporal connectivity is evident in patients who do not experience AVHs. This is consistent with the hypothesis that, whilst mild disruption of connectivity might still enable the emergence of AVHs, more severe alterations may prevent the occurrence of the hallucinatory experience. PMID:24361862

  5. Asymmetries of the human social brain in the visual, auditory and chemical modalities

    PubMed Central

    Brancucci, Alfredo; Lucci, Giuliana; Mazzatenta, Andrea; Tommasi, Luca

    2008-01-01

    Structural and functional asymmetries are present in many regions of the human brain responsible for motor control, sensory and cognitive functions and communication. Here, we focus on hemispheric asymmetries underlying the domain of social perception, broadly conceived as the analysis of information about other individuals based on acoustic, visual and chemical signals. By means of these cues the brain establishes the border between ‘self’ and ‘other’, and interprets the surrounding social world in terms of the physical and behavioural characteristics of conspecifics essential for impression formation and for creating bonds and relationships. We show that, considered from the standpoint of single- and multi-modal sensory analysis, the neural substrates of the perception of voices, faces, gestures, smells and pheromones, as evidenced by modern neuroimaging techniques, are characterized by a general pattern of right-hemispheric functional asymmetry that might benefit from other aspects of hemispheric lateralization rather than constituting a true specialization for social information. PMID:19064350

  6. Asymmetries of the human social brain in the visual, auditory and chemical modalities.

    PubMed

    Brancucci, Alfredo; Lucci, Giuliana; Mazzatenta, Andrea; Tommasi, Luca

    2009-04-12

    Structural and functional asymmetries are present in many regions of the human brain responsible for motor control, sensory and cognitive functions and communication. Here, we focus on hemispheric asymmetries underlying the domain of social perception, broadly conceived as the analysis of information about other individuals based on acoustic, visual and chemical signals. By means of these cues the brain establishes the border between 'self' and 'other', and interprets the surrounding social world in terms of the physical and behavioural characteristics of conspecifics essential for impression formation and for creating bonds and relationships. We show that, considered from the standpoint of single- and multi-modal sensory analysis, the neural substrates of the perception of voices, faces, gestures, smells and pheromones, as evidenced by modern neuroimaging techniques, are characterized by a general pattern of right-hemispheric functional asymmetry that might benefit from other aspects of hemispheric lateralization rather than constituting a true specialization for social information.

  7. Conventional and cross-correlation brain-stem auditory evoked responses in the white leghorn chick: rate manipulations

    NASA Technical Reports Server (NTRS)

    Burkard, R.; Jones, S.; Jones, T.

    1994-01-01

Rate-dependent changes in the chick brain-stem auditory evoked response (BAER) using conventional averaging and a cross-correlation technique were investigated. Five 15- to 19-day-old white leghorn chicks were anesthetized with Chloropent. In each chick, the left ear was acoustically stimulated. Electrical pulses of 0.1-ms duration were shaped, attenuated, and passed through a current driver to an Etymotic ER-2 which was sealed in the ear canal. Electrical activity from stainless-steel electrodes was amplified, filtered (300-3000 Hz) and digitized at 20 kHz. Click levels included 70 and 90 dB peSPL. In each animal, conventional BAERs were obtained at rates ranging from 5 to 90 Hz. BAERs were also obtained using a cross-correlation technique involving pseudorandom pulse sequences called maximum length sequences (MLSs). The minimum time between pulses, called the minimum pulse interval (MPI), ranged from 0.5 to 6 ms. Two BAERs were obtained for each condition. Dependent variables included the latency and amplitude of the cochlear microphonic (CM), wave 2 and wave 3. BAERs were observed in all chicks, for all level by rate combinations for both conventional and MLS BAERs. There was no effect of click level or rate on the latency of the CM. The latency of waves 2 and 3 increased with decreasing click level and increasing rate. CM amplitude decreased with decreasing click level, but was not influenced by click rate for the 70 dB peSPL condition. For the 90 dB peSPL click, CM amplitude was uninfluenced by click rate for conventional averaging. For MLS BAERs, CM amplitude was similar to conventional averaging for longer MPIs. (ABSTRACT TRUNCATED AT 250 WORDS).
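The maximum length sequences (MLSs) central to the cross-correlation technique described above are pseudorandom binary sequences produced by a linear-feedback shift register. A minimal sketch follows; the register order and tap positions are illustrative choices, not parameters taken from the study.

```python
def mls(order, taps):
    """Generate one period (2**order - 1 samples) of a binary maximum
    length sequence with a Fibonacci linear-feedback shift register."""
    state = [1] * order                      # any nonzero seed works
    seq = []
    for _ in range(2 ** order - 1):
        seq.append(state[-1])                # output the last stage
        fb = 0
        for t in taps:                       # feedback = XOR of tapped stages
            fb ^= state[t - 1]
        state = [fb] + state[:-1]            # shift, insert feedback bit
    return seq

# Taps [5, 3] correspond to the primitive polynomial x^5 + x^3 + 1,
# giving a 31-sample sequence with 16 ones and 15 zeros.
sequence = mls(5, taps=[5, 3])
```

In an MLS-BAER paradigm, the ones of such a sequence mark click onsets, and the overlapping responses are disentangled by cross-correlating the recorded activity with the stimulus sequence.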

  8. Differences in brain circuitry for appetitive and reactive aggression as revealed by realistic auditory scripts

    PubMed Central

    Moran, James K.; Weierstall, Roland; Elbert, Thomas

    2014-01-01

    Aggressive behavior is thought to divide into two motivational elements: The first being a self-defensively motivated aggression against threat and a second, hedonically motivated “appetitive” aggression. Appetitive aggression is the less understood of the two, often only researched within abnormal psychology. Our approach is to understand it as a universal and adaptive response, and examine the functional neural activity of ordinary men (N = 50) presented with an imaginative listening task involving a murderer describing a kill. We manipulated motivational context in a between-subjects design to evoke appetitive or reactive aggression, against a neutral control, measuring activity with Magnetoencephalography (MEG). Results show differences in left frontal regions in delta (2–5 Hz) and alpha band (8–12 Hz) for aggressive conditions and right parietal delta activity differentiating appetitive and reactive aggression. These results validate the distinction of reward-driven appetitive aggression from reactive aggression in ordinary populations at the level of functional neural brain circuitry. PMID:25538590

  9. Central auditory disorders: toward a neuropsychology of auditory objects

    PubMed Central

    Goll, Johanna C.; Crutch, Sebastian J.; Warren, Jason D.

    2012-01-01

Purpose of review: Analysis of the auditory environment, source identification and vocal communication all require efficient brain mechanisms for disambiguating, representing and understanding complex natural sounds as ‘auditory objects’. Failure of these mechanisms leads to a diverse spectrum of clinical deficits. Here we review current evidence concerning the phenomenology, mechanisms and brain substrates of auditory agnosias and related disorders of auditory object processing. Recent findings: Analysis of lesions causing auditory object deficits has revealed certain broad anatomical correlations: deficient parsing of the auditory scene is associated with lesions involving the parieto-temporal junction, while selective disorders of sound recognition occur with more anterior temporal lobe or extra-temporal damage. Distributed neural networks have been increasingly implicated in the pathogenesis of such disorders as developmental dyslexia, congenital amusia and tinnitus. Auditory category deficits may arise from defective interaction of spectrotemporal encoding and executive and mnestic processes. Dedicated brain mechanisms are likely to process specialised sound objects such as voices and melodies. Summary: Emerging empirical evidence suggests a clinically relevant, hierarchical and fractionated neuropsychological model of auditory object processing that provides a framework for understanding auditory agnosias and makes specific predictions to direct future work. PMID:20975559

  10. Magnetoencephalographic accuracy profiles for the detection of auditory pathway sources.

    PubMed

    Bauer, Martin; Trahms, Lutz; Sander, Tilmann

    2015-04-01

The detection limits for cortical and brain stem sources associated with the auditory pathway are examined in order to analyse brain responses at the limits of the audible frequency range. The results obtained from this study are also relevant to other issues of auditory brain research. A complementary approach consisting of recordings of magnetoencephalographic (MEG) data and simulations of magnetic field distributions is presented in this work. A biomagnetic phantom consisting of a spherical volume filled with a saline solution and four current dipoles is built. The magnetic fields outside of the phantom generated by the current dipoles are then measured for a range of applied electric dipole moments with a planar multichannel SQUID magnetometer device and a helmet MEG gradiometer device. A magnetometer system is expected to be more sensitive to brain stem sources than a gradiometer system. The same electrical and geometrical configuration is simulated in a forward calculation. From both the measured and the simulated data, the dipole positions are estimated using an inverse calculation. Results are obtained for the reconstruction accuracy as a function of applied electric dipole moment and depth of the current dipole. We found that both systems can localize cortical and subcortical sources at physiological dipole strength, even for brain stem sources. Further, we found that a planar magnetometer system is more suitable if the position of the brain source can be restricted to a limited region of the brain. If this is not the case, a helmet-shaped sensor system offers more accurate source estimation.
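The forward calculation mentioned in this record rests on computing the magnetic field of a current dipole. The sketch below gives only the free-space primary field, B(r) = (mu0/4pi) Q x (r - r0) / |r - r0|^3; the volume-current correction terms of the full conducting-sphere (Sarvas) model that a real MEG forward solution needs are deliberately omitted, and the 10 nA·m dipole moment is simply a typical cortical value, not a figure from the study.

```python
import math

MU0_OVER_4PI = 1e-7  # T*m/A

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dipole_field(q, r0, r):
    """Primary magnetic field (tesla) of a current dipole q (A*m) at r0,
    evaluated at sensor position r, in an infinite homogeneous medium."""
    d = tuple(ri - r0i for ri, r0i in zip(r, r0))
    dist = math.sqrt(sum(x * x for x in d))
    c = cross(q, d)
    return tuple(MU0_OVER_4PI * x / dist ** 3 for x in c)

# A 10 nA*m dipole (typical cortical strength) seen 5 cm away,
# perpendicular to the line of sight: field on the order of 0.4 pT.
b = dipole_field(q=(1e-8, 0.0, 0.0), r0=(0.0, 0.0, 0.0), r=(0.0, 0.05, 0.0))
```

Sub-picotesla fields like this are why SQUID magnetometers, rather than conventional coils, are needed to detect deep auditory sources.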

  11. Bilinguals at the "cocktail party": dissociable neural activity in auditory-linguistic brain regions reveals neurobiological basis for nonnative listeners' speech-in-noise recognition deficits.

    PubMed

    Bidelman, Gavin M; Dexter, Lauren

    2015-04-01

We examined a consistent deficit observed in bilinguals: poorer speech-in-noise (SIN) comprehension for their nonnative language. We recorded neuroelectric mismatch potentials in mono- and bi-lingual listeners in response to contrastive speech sounds in noise. Behaviorally, late bilinguals required ∼10 dB more favorable signal-to-noise ratios to match monolinguals' SIN abilities. Source analysis of cortical activity demonstrated a monotonic increase in response latency with noise in the superior temporal gyrus (STG) for both groups, suggesting parallel degradation of speech representations in auditory cortex. In contrast, we found differential speech encoding between groups within the inferior frontal gyrus (IFG), adjacent to Broca's area, where noise delays observed in nonnative listeners were offset in monolinguals. Notably, brain-behavior correspondences double dissociated between language groups: STG activation predicted bilinguals' SIN, whereas IFG activation predicted monolinguals' performance. We infer that higher-order brain areas act compensatorily to enhance impoverished sensory representations, but only when degraded speech recruits linguistic brain mechanisms downstream from initial auditory-sensory inputs.

  12. Expression of androgen receptor mRNA in the brain of Gekko gecko: implications for understanding the role of androgens in controlling auditory and vocal processes.

    PubMed

    Tang, Y Z; Piao, Y S; Zhuang, L Z; Wang, Z W

    2001-09-17

The neuroanatomical distribution of androgen receptor (AR) mRNA-containing cells in the brain of a vocal lizard, Gekko gecko, was mapped using in situ hybridization. Particular attention was given to auditory and vocal nuclei. Within the auditory system, the cochlear nuclei, the central nucleus of the torus semicircularis, the nucleus medialis, and the medial region of the dorsal ventricular ridge contained moderate numbers of labeled neurons. Neurons labeled with the AR probe were located in many nuclei related to vocalization. Within the hindbrain, the mesencephalic nucleus of the trigeminal nerve, the vagal part of the nucleus ambiguus, and the dorsal motor nucleus of the vagus nerve contained many neurons that exhibited strong expression of AR mRNA. Neurons located in the peripheral nucleus of the torus in the mesencephalon exhibited moderate levels of hybridization. Intense AR mRNA expression was also observed in neurons within two other areas that may be involved in vocalization, the medial preoptic area and the hypoglossal nucleus. The strongest mRNA signals identified in this study were found in cells of the pallium, hypothalamus, and inferior nucleus of the raphe. The expression patterns of AR mRNA in the auditory and vocal control nuclei of G. gecko suggest that neurons involved in acoustic communication in this species, and perhaps related species, are susceptible to regulation by androgens during the breeding season. The significance of these results for understanding the evolution of reptilian vocal communication is discussed.

  13. Design and evaluation of area-efficient and wide-range impedance analysis circuit for multichannel high-quality brain signal recording system

    NASA Astrophysics Data System (ADS)

    Iwagami, Takuma; Tani, Takaharu; Ito, Keita; Nishino, Satoru; Harashima, Takuya; Kino, Hisashi; Kiyoyama, Koji; Tanaka, Tetsu

    2016-04-01

To enable chronic and stable neural recording, we have been developing an implantable multichannel neural recording system with impedance analysis functions. Maintaining good interfaces between the recording electrodes and the surrounding tissue is essential for high-quality neural signal recording. We have proposed an impedance analysis circuit with a very small circuit area, which is implemented in a multichannel neural recording and stimulating system. In this paper, we focused on the design of an impedance analysis circuit configuration and the evaluation of a minimal voltage measurement unit. The proposed circuit has a very small circuit area of 0.23 mm2 designed with 0.18 µm CMOS technology and can measure interface impedances between recording electrodes and tissues over an ultrawide range from 100 Ω to 10 MΩ. In addition, we also successfully acquired interface impedances using the proposed circuit in agarose gel experiments.

  14. Comparison of air- and bone-conducted brain stem auditory evoked responses in young dogs and dogs with bilateral ear canal obstruction.

    PubMed

    Wolschrijn, C F; Venker-van Haagen, A J; van den Brom, W E

    1997-11-01

    Brain stem responses to air- and bone-conducted stimuli were analyzed in 11 young dogs, using an in-the-ear transducer and a vibrator designed for human hearing tests, respectively. The mean thresholds were 0 to 10 dB for air-conducted stimuli and 50 to 60 dB for bone-conducted stimuli. The wave forms and inter-peak latencies of the waves of the auditory evoked responses elicited by air-conducted and bone-conducted stimuli were similar. This indicated that the signals had the same origin and thus both the air-conducted and the bone-conducted responses could be considered to be auditory responses. Measurement of air-conducted and bone-conducted brain stem-evoked responses in five dogs with bilateral chronic obstructive ear disease revealed thresholds of 50 to 60 dB for air-conducted stimuli and 60 to 70 dB for bone-conducted stimuli. By comparison of these results with those in the 11 young dogs, it could be concluded that there was hearing loss other than that caused by obstruction of the ear canals.

  15. Non-auditory Effect of Noise Pollution and Its Risk on Human Brain Activity in Different Audio Frequency Using Electroencephalogram Complexity

    PubMed Central

    ALLAHVERDY, Armin; JAFARI, Amir Homayoun

    2016-01-01

Background: Noise pollution is one of the most harmful ambient disturbances. It may impair the ability and activity of people in urban and industrial areas, and it may also contribute to many kinds of psychopathologies. It is therefore important to measure the risk of this pollution in different areas. Methods: This study was conducted in the Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, from June to September 2015. Different frequencies of noise pollution were played for volunteers while a 16-channel EEG signal was recorded synchronously; the complexity of the EEG signals was then measured using the fractal dimension and the relative power of the beta sub-band of the EEG. Results: The average complexity of brain activity increased in the middle of the audio frequency range, and the complexity map of brain activity changed with frequency, showing the effects of frequency changes on human brain activity. Conclusion: The complexity of the EEG is a good measure for ranking the annoyance and non-auditory risk of noise pollution on human brain activity. PMID:27957440
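The abstract reports EEG complexity via a fractal dimension but does not name the estimator. Higuchi's method is a common choice for EEG and is sketched below as an assumption, not a reconstruction of the paper's actual pipeline.

```python
import math

def higuchi_fd(x, kmax=8):
    """Estimate Higuchi's fractal dimension of a 1-D signal:
    the slope of log(mean curve length L(k)) versus log(1/k)."""
    n = len(x)
    log_inv_k, log_l = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):                       # k decimated sub-series
            pts = x[m::k]
            if len(pts) < 2:
                continue
            dist = sum(abs(pts[i + 1] - pts[i]) for i in range(len(pts) - 1))
            # Higuchi normalization for unequal sub-series lengths
            lengths.append(dist * (n - 1) / ((len(pts) - 1) * k * k))
        log_inv_k.append(math.log(1.0 / k))
        log_l.append(math.log(sum(lengths) / len(lengths)))
    # least-squares slope of log(L) against log(1/k)
    kbar = sum(log_inv_k) / len(log_inv_k)
    lbar = sum(log_l) / len(log_l)
    num = sum((a - kbar) * (b - lbar) for a, b in zip(log_inv_k, log_l))
    den = sum((a - kbar) ** 2 for a in log_inv_k)
    return num / den

fd = higuchi_fd(list(range(1000)))  # a straight line has dimension 1
```

Values near 1 indicate a smooth, regular signal; values approaching 2 indicate noise-like complexity, which is the sense in which "complexity increased" is used above.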

  16. Attention to natural auditory signals.

    PubMed

    Caporello Bluvas, Emily; Gentner, Timothy Q

    2013-11-01

The challenge of understanding how the brain processes natural signals is compounded by the fact that such signals are often tied closely to specific natural behaviors and natural environments. This added complexity is especially true for auditory communication signals that can carry information at multiple hierarchical levels, and often occur in the context of other competing communication signals. Selective attention provides a mechanism to focus processing resources on specific components of auditory signals, and simultaneously suppress responses to unwanted signals or noise. Although selective auditory attention has been well-studied behaviorally, very little is known about how selective auditory attention shapes the processing of natural auditory signals, and how the mechanisms of auditory attention are implemented in single neurons or neural circuits. Here we review the role of selective attention in modulating auditory responses to complex natural stimuli in humans. We then suggest how the current understanding can be applied to the study of selective auditory attention in the context of natural signal processing at the level of single neurons and populations in animal models amenable to invasive neuroscience techniques. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".

  17. Subcortical processing in auditory communication.

    PubMed

    Pannese, Alessia; Grandjean, Didier; Frühholz, Sascha

    2015-10-01

    The voice is a rich source of information, which the human brain has evolved to decode and interpret. Empirical observations have shown that the human auditory system is especially sensitive to the human voice, and that activity within the voice-sensitive regions of the primary and secondary auditory cortex is modulated by the emotional quality of the vocal signal, and may therefore subserve, with frontal regions, the cognitive ability to correctly identify the speaker's affective state. So far, the network involved in the processing of vocal affect has been mainly characterised at the cortical level. However, anatomical and functional evidence suggests that acoustic information relevant to the affective quality of the auditory signal might be processed prior to the auditory cortex. Here we review the animal and human literature on the main subcortical structures along the auditory pathway, and propose a model whereby the distinction between different types of vocal affect in auditory communication begins at very early stages of auditory processing, and relies on the analysis of individual acoustic features of the sound signal. We further suggest that this early feature-based decoding occurs at a subcortical level along the ascending auditory pathway, and provides a preliminary coarse (but fast) characterisation of the affective quality of the auditory signal before the more refined (but slower) cortical processing is completed.

  18. The Drosophila Auditory System

    PubMed Central

    Boekhoff-Falk, Grace; Eberl, Daniel F.

    2013-01-01

    Development of a functional auditory system in Drosophila requires specification and differentiation of the chordotonal sensilla of Johnston’s organ (JO) in the antenna, correct axonal targeting to the antennal mechanosensory and motor center (AMMC) in the brain, and synaptic connections to neurons in the downstream circuit. Chordotonal development in JO is functionally complicated by structural, molecular and functional diversity that is not yet fully understood, and construction of the auditory neural circuitry is only beginning to unfold. Here we describe our current understanding of developmental and molecular mechanisms that generate the exquisite functions of the Drosophila auditory system, emphasizing recent progress and highlighting important new questions arising from research on this remarkable sensory system. PMID:24719289

  19. Central auditory imperception.

    PubMed

    Snow, J B; Rintelmann, W F; Miller, J M; Konkle, D F

    1977-09-01

The development of clinically applicable techniques for the evaluation of hearing impairment caused by lesions of the central auditory pathways has increased clinical interest in the anatomy and physiology of these pathways. A conceptualization of present understanding of the anatomy and physiology of the central auditory pathways is presented. Clinical tests based on reduction of redundancy of the speech message, degradation of speech, and binaural interactions are presented. Specifically, performance-intensity functions, filtered speech tests, competing message tests, and time-compressed speech tests are presented, with the emphasis on our experience with time-compressed speech tests. With proper use of these tests, not only can central auditory impairments be detected, but brain stem lesions can be distinguished from cortical lesions.

  20. Origins of task-specific sensory-independent organization in the visual and auditory brain: neuroscience evidence, open questions and clinical implications.

    PubMed

    Heimler, Benedetta; Striem-Amit, Ella; Amedi, Amir

    2015-12-01

    Evidence of task-specific sensory-independent (TSSI) plasticity from blind and deaf populations has led to a better understanding of brain organization. However, the principles determining the origins of this plasticity remain unclear. We review recent data suggesting that a combination of the connectivity bias and sensitivity to task-distinctive features might account for TSSI plasticity in the sensory cortices as a whole, from the higher-order occipital/temporal cortices to the primary sensory cortices. We discuss current theories and evidence, open questions and related predictions. Finally, given the rapid progress in visual and auditory restoration techniques, we address the crucial need to develop effective rehabilitation approaches for sensory recovery.

  1. Auditory Imagination.

    ERIC Educational Resources Information Center

    Croft, Martyn

    Auditory imagination is used in this paper to describe a number of issues and activities related to sound and having to do with listening, thinking, recalling, imagining, reshaping, creating, and uttering sounds and words. Examples of auditory imagination in religious and literary works are cited that indicate a belief in an imagined, expected, or…

  2. List mode multichannel analyzer

    DOEpatents

    Archer, Daniel E.; Luke, S. John; Mauger, G. Joseph; Riot, Vincent J.; Knapp, David A.

    2007-08-07

    A digital list mode multichannel analyzer (MCA) built around a programmable FPGA device for onboard data analysis and on-the-fly modification of system detection/operating parameters, and capable of collecting and processing data in very small time bins (<1 millisecond) when used in histogramming mode, or in list mode as a list mode MCA.
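The distinction the patent abstract draws between list mode (storing every event) and histogramming mode (accumulating spectra in time bins) can be illustrated with a toy fold; the event format, bin width, and channel count below are hypothetical, not taken from the patent.

```python
def fold_list_mode(events, bin_width_s=0.001, n_channels=1024):
    """Fold a list-mode stream of (timestamp_s, channel) events into
    per-time-bin pulse-height histograms, i.e. histogramming mode
    with sub-millisecond time bins."""
    frames = {}
    for t, ch in events:
        if not 0 <= ch < n_channels:
            continue                          # drop out-of-range channels
        frame = int(t // bin_width_s)         # time bin this event falls in
        hist = frames.setdefault(frame, [0] * n_channels)
        hist[ch] += 1
    return frames

events = [(0.0001, 5), (0.0004, 5), (0.0012, 7)]
frames = fold_list_mode(events)
# the first two events share time bin 0; the third falls in time bin 1
```

Keeping the raw event list, as a list mode MCA does, allows re-binning offline with any bin width, which a fixed hardware histogram cannot offer.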

  3. Multichannel Compressive Sensing MRI Using Noiselet Encoding

    PubMed Central

    Pawar, Kamlesh; Egan, Gary; Zhang, Jingxin

    2015-01-01

The incoherence between measurement and sparsifying transform matrices and the restricted isometry property (RIP) of the measurement matrix are two of the key factors in determining the performance of compressive sensing (CS). In CS-MRI, the randomly under-sampled Fourier matrix is used as the measurement matrix and the wavelet transform is usually used as the sparsifying transform matrix. However, the incoherence between the randomly under-sampled Fourier matrix and the wavelet matrix is not optimal, which can deteriorate the performance of CS-MRI. Using the mathematical result that noiselets are maximally incoherent with wavelets, this paper introduces the noiselet unitary bases as the measurement matrix to improve the incoherence and RIP in CS-MRI. Based on an empirical RIP analysis that compares the multichannel noiselet and multichannel Fourier measurement matrices in CS-MRI, we propose a multichannel compressive sensing (MCS) framework to take advantage of the multichannel data acquisition used in MRI scanners. Simulations are presented in the MCS framework to compare the performance of noiselet encoding reconstructions and Fourier encoding reconstructions at different acceleration factors. The comparisons indicate that the multichannel noiselet measurement matrix has better RIP than that of its Fourier counterpart, and that noiselet encoded MCS-MRI outperforms Fourier encoded MCS-MRI in preserving image resolution and can achieve higher acceleration factors. To demonstrate the feasibility of the proposed noiselet encoding scheme, a pulse sequence with tailored, spatially selective RF excitation pulses was designed and implemented on a 3T scanner to acquire data in the noiselet domain from a phantom and a human brain. The results indicate that noiselet encoding preserves image resolution better than Fourier encoding. PMID:25965548

  4. Multichannel compressive sensing MRI using noiselet encoding.

    PubMed

    Pawar, Kamlesh; Egan, Gary; Zhang, Jingxin

    2015-01-01

The incoherence between measurement and sparsifying transform matrices and the restricted isometry property (RIP) of the measurement matrix are two of the key factors in determining the performance of compressive sensing (CS). In CS-MRI, the randomly under-sampled Fourier matrix is used as the measurement matrix and the wavelet transform is usually used as the sparsifying transform matrix. However, the incoherence between the randomly under-sampled Fourier matrix and the wavelet matrix is not optimal, which can deteriorate the performance of CS-MRI. Using the mathematical result that noiselets are maximally incoherent with wavelets, this paper introduces the noiselet unitary bases as the measurement matrix to improve the incoherence and RIP in CS-MRI. Based on an empirical RIP analysis that compares the multichannel noiselet and multichannel Fourier measurement matrices in CS-MRI, we propose a multichannel compressive sensing (MCS) framework to take advantage of the multichannel data acquisition used in MRI scanners. Simulations are presented in the MCS framework to compare the performance of noiselet encoding reconstructions and Fourier encoding reconstructions at different acceleration factors. The comparisons indicate that the multichannel noiselet measurement matrix has better RIP than that of its Fourier counterpart, and that noiselet encoded MCS-MRI outperforms Fourier encoded MCS-MRI in preserving image resolution and can achieve higher acceleration factors. To demonstrate the feasibility of the proposed noiselet encoding scheme, a pulse sequence with tailored, spatially selective RF excitation pulses was designed and implemented on a 3T scanner to acquire data in the noiselet domain from a phantom and a human brain. The results indicate that noiselet encoding preserves image resolution better than Fourier encoding.
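The incoherence these two records rely on can be checked numerically for small bases: the mutual coherence mu(A, B) = sqrt(n) * max |<a_i, b_j>| of two orthonormal bases ranges from 1 (maximally incoherent, the CS-friendly case) to sqrt(n). Since no standard-library noiselet transform exists, the sketch below uses the classic maximally incoherent pair, the DFT and spike bases; in the paper, noiselets play the analogous role against wavelets.

```python
import cmath
import math

def mutual_coherence(A, B):
    """mu(A, B) = sqrt(n) * max |<a_i, b_j>| for two orthonormal bases
    given as lists of row vectors; always between 1 and sqrt(n)."""
    n = len(A)
    best = 0.0
    for a in A:
        for b in B:
            ip = abs(sum(x * y.conjugate() for x, y in zip(a, b)))
            best = max(best, ip)
    return math.sqrt(n) * best

n = 8
# orthonormal discrete Fourier basis (rows) and the canonical spike basis
dft = [[cmath.exp(-2j * math.pi * k * t / n) / math.sqrt(n) for t in range(n)]
       for k in range(n)]
spike = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
mu = mutual_coherence(dft, spike)  # ≈ 1.0: maximal incoherence
```

The lower the coherence, the fewer random measurements CS theory requires for exact recovery, which is the motivation for swapping Fourier encoding for noiselet encoding.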

  5. Impact of Repetitive Transcranial Magnetic Stimulation (rTMS) on Brain Functional Marker of Auditory Hallucinations in Schizophrenia Patients

    PubMed Central

Maïza, Olivier; Hervé, Pierre-Yves; Etard, Olivier; Razafimandimby, Annick; Montagne-Larmurier, Aurélie; Dollfus, Sonia

    2013-01-01

Several cross-sectional functional Magnetic Resonance Imaging (fMRI) studies reported a negative correlation between auditory verbal hallucination (AVH) severity and amplitude of the activations during language tasks. The present study assessed the time course of this correlation and its possible structural underpinnings by combining structural, functional MRI and repetitive Transcranial Magnetic Stimulation (rTMS). Methods: Nine schizophrenia patients with AVH (evaluated with the Auditory Hallucination Rating scale; AHRS) and nine healthy participants underwent two sessions of an fMRI speech listening paradigm. Meanwhile, patients received high frequency (20 Hz) rTMS. Results: Before rTMS, activations were negatively correlated with AHRS in a left posterior superior temporal sulcus (pSTS) cluster, considered henceforward as a functional region of interest (fROI). After rTMS, activations in this fROI no longer correlated with AHRS. This decoupling was explained by a significant decrease of AHRS scores after rTMS that contrasted with a relative stability of cerebral activations. A voxel-based-morphometry analysis evidenced a cluster of the left pSTS where grey matter volume negatively correlated with AHRS before rTMS and positively correlated with activations in the fROI at both sessions. Conclusion: rTMS decreases the severity of AVHs, thereby modifying their functional correlate, which is underlain by grey matter abnormalities. PMID:24961421

  6. Sex, acceleration, brain imaging, and rhesus monkeys: Converging evidence for an evolutionary bias for looming auditory motion

    NASA Astrophysics Data System (ADS)

    Neuhoff, John G.

    2003-04-01

Increasing acoustic intensity is a primary cue to looming auditory motion. Perceptual overestimation of increasing intensity could provide an evolutionary selective advantage by specifying that an approaching sound source is closer than actual, thus affording advanced warning and more time than expected to prepare for the arrival of the source. Here, multiple lines of converging evidence for this evolutionary hypothesis are presented. First, it is shown that intensity change specifying accelerating source approach changes in loudness more than equivalent intensity change specifying decelerating source approach. Second, consistent with evolutionary hunter-gatherer theories of sex-specific spatial abilities, it is shown that females have a significantly larger bias for rising intensity than males. Third, using functional magnetic resonance imaging in conjunction with approaching and receding auditory motion, it is shown that approaching sources preferentially activate a specific neural network responsible for attention allocation, motor planning, and translating perception into action. Finally, it is shown that rhesus monkeys also exhibit a rising intensity bias by orienting longer to looming tones than to receding tones. Together these results illustrate an adaptive perceptual bias that has evolved because it provides a selective advantage in processing looming acoustic sources. [Work supported by NSF and CDC.]

  7. Early experience and domestication affect auditory discrimination learning, open field behaviour and brain size in wild Mongolian gerbils and domesticated laboratory gerbils (Meriones unguiculatus forma domestica).

    PubMed

    Stuermer, Ingo W; Wetzel, Wolfram

    2006-10-02

    The influence of early experience and strain differences on auditory discrimination learning, open field behaviour and brain size was investigated in wild-type Mongolian gerbils (strain Ugoe:MU95) raised in the wild (wild F-0) or in the laboratory (wild F-1) and in domesticated laboratory gerbils (LAB). Adult males were conditioned for 10 days in a shuttle box go/no-go paradigm to discriminate two frequency-modulated tones. Significant learning was established within 5 days in wild F-0 and within 3 days in wild F-1 and LAB. Spontaneous jumps in the shuttle box (inter-trial crossings) were seen frequently in wild F-0 and F-1, but rarely in LAB. All groups exhibited nearly the same ability to remember after 2 weeks without training. In the open field test applied on 5 consecutive days, no differences in locomotion patterns and inner field preferences were found. Rearing frequency decreased over the 5 days in wild gerbils. Running distances (4-6 m/min) were similar in wild F-0 and LAB, but higher in wild F-1. The ratio of brain size to body weight did not differ between wild F-0 and F-1, but was 17.1% lower in LAB. Correspondingly high brain weights in wild F-1 and F-0 support our domestication hypothesis and argue against any serious effect of early experience or captivity on brain size in Mongolian gerbils. In contrast, wild F-1 raised in the laboratory showed a rapid improvement in learning performance, indicating that early experience rather than genetic differences between strains affects shuttle box discrimination learning in gerbils.

  8. Use of Multichannel Near Infrared Spectroscopy to Study Relationships Between Brain Regions and Neurocognitive Tasks of Selective/Divided Attention and 2-Back Working Memory.

    PubMed

    Tomita, Nozomi; Imai, Shoji; Kanayama, Yusuke; Kawashima, Issaku; Kumano, Hiroaki

    2017-01-01

    While dichotic listening (DL) was originally intended to measure bottom-up selective attention, it has also become a tool for measuring top-down selective attention. This study investigated the brain regions related to top-down selective and divided attention DL tasks and a 2-back task using alphanumeric and Japanese numeric sounds. Thirty-six healthy participants underwent near-infrared spectroscopy scanning while performing a top-down selective attentional DL task, a top-down divided attentional DL task, and a 2-back task. Pearson's correlations were calculated to show relationships between oxy-Hb concentration in each brain region and the score of each cognitive task. Different brain regions were activated during the DL and 2-back tasks. Brain regions activated in the top-down selective attention DL task were the left inferior prefrontal gyrus and left pars opercularis. The left temporopolar area was activated in the top-down divided attention DL task, and the left frontopolar area and left dorsolateral prefrontal cortex were activated in the 2-back task. As further evidence for the finding that each task measured different cognitive and brain area functions, neither the percentages of correct answers for the three tasks nor the response times for the selective attentional task and the divided attentional task were correlated to one another. Thus, the DL and 2-back tasks used in this study can assess multiple areas of cognitive, brain-related dysfunction to explore their relationship to different psychiatric and neurodevelopmental disorders.

  9. Multichannel Human Body Communication

    NASA Astrophysics Data System (ADS)

    Przystup, Piotr; Bujnowski, Adam; Wtorek, Jerzy

    2016-01-01

    Human Body Communication (HBC) is an attractive alternative to traditional wireless communication (Bluetooth, ZigBee) for Body Sensor Networks (BSNs). Low power, high data rates and data security make it an ideal solution for medical applications. In this paper, signal attenuation at different frequencies, using FR4 electrodes, has been investigated. The performance of single- and multichannel transmission with frequency modulation of an analog signal has been tested. Experimental results show that HBC is a feasible solution for transmitting data between BSN nodes.

  10. Longer storage of auditory than of visual information in the rabbit brain: evidence from dorsal hippocampal electrophysiology.

    PubMed

    Astikainen, Piia; Ruusuvirta, Timo; Korhonen, Tapani

    2005-01-01

    Whereas sensory memory in humans has been found to store auditory information for a longer time than visual information, it is unclear whether this is also the case in other species. We recorded hippocampal event-related potentials (ERPs) in awake rabbits exposed to occasional changes in a repeated 50-ms acoustic (1000 versus 2000 Hz) and visual (vertical versus horizontal orientation) stimulus. Three intervals (500, 1500, or 3000 ms) between stimulus repetitions were applied. Whereas acoustic changes significantly affected ERPs at the repetition intervals of 500 and 1500 ms, visual changes did so only at the repetition interval of 500 ms. Our finding thus suggests a similarity in sensory processing abilities between human and non-human mammals.

  11. Effect Of Electromagnetic Waves Emitted From Mobile Phone On Brain Stem Auditory Evoked Potential In Adult Males.

    PubMed

    Singh, K

    2015-01-01

    The mobile phone (MP) is a commonly used communication tool, and the electromagnetic waves (EMWs) it emits may pose potential health hazards. This study therefore examined the effect of EMWs emitted from a mobile phone on the brainstem auditory evoked potential (BAEP) in male subjects aged 20-40 years. BAEPs were recorded using the standard 10-20 system of electrode placement and sound click stimuli of specified intensity, duration and frequency. The right ear was exposed to EMWs emitted from the MP for about 10 min. Comparing before and after exposure in the right ear (found to be the dominant ear), there was a significant increase in the latency of waves II, III (p < 0.05) and V (p < 0.001), an increase in the amplitude of wave I-Ia (p < 0.05), and a decrease in the III-V interpeak latency (IPL) (p < 0.05) after exposure. No significant change was found in the BAEP waves of the left ear before versus after exposure. Comparing the right ear (routinely exposed, being the dominant ear) and the left ear (not exposed) before exposure, the III-V IPL and the V-Va amplitude were greater (p < 0.001) in the right ear, whereas the latencies of waves III and IV were greater (p < 0.001) in the left ear. After exposure, the V-Va amplitude was greater (p < 0.05) in the right ear than in the left. In conclusion, EMWs emitted from mobile phones affect the auditory evoked potential.

  12. Preferred EEG brain states at stimulus onset in a fixed interstimulus interval equiprobable auditory Go/NoGo task: a definitive study.

    PubMed

    Barry, Robert J; De Blasio, Frances M; De Pascalis, Vilfredo; Karamacoska, Diana

    2014-10-01

    This study examined the occurrence of preferred EEG phase states at stimulus onset in an equiprobable auditory Go/NoGo task with a fixed interstimulus interval, and their effects on the resultant event-related potentials (ERPs). We used a sliding short-time FFT decomposition of the EEG at Cz for each trial to assess prestimulus EEG activity in the delta, theta, alpha and beta bands. We determined the phase of each 2 Hz narrow-band contributing to these four broad bands at 125 ms before each stimulus onset, and for the first time, avoided contamination from poststimulus EEG activity. This phase value was extrapolated 125 ms to obtain the phase at stimulus onset, combined into the broad-band phase, and used to sort trials into four phase groups for each of the four broad bands. For each band, ERPs were derived for each phase from the raw EEG activity at 19 sites. Data sets from each band were separately decomposed using temporal Principal Components Analyses with unrestricted VARIMAX rotation to extract N1-1, PN, P2, P3, SW and LP components. Each component was analysed as a function of EEG phase at stimulus onset in the context of a simple conceptualisation of orthogonal phase effects (cortical negativity vs. positivity, negative driving vs. positive driving, waxing vs. waning). The predicted non-random occurrence of phase-defined brain states was confirmed. The preferred states of negativity, negative driving, and waxing were each associated with more efficient stimulus processing, as reflected in amplitude differences of the components. The present results confirm the existence of preferred brain states and their impact on the efficiency of brain dynamics in perceptual and cognitive processing.
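    The prestimulus phase estimation described above can be sketched in a few lines. This is a minimal illustration with synthetic data; the 250 Hz sampling rate, 0.5 s window length, and quadrant-based grouping are assumptions for the example, not the authors' exact pipeline.

    ```python
    import numpy as np

    def phase_at_onset(eeg, fs, onset_idx, f_hz, win_s=0.5, lead_s=0.125):
        """Estimate the phase of a narrow-band component at stimulus onset
        using only prestimulus EEG: FFT a window that ends 125 ms before
        onset (avoiding poststimulus contamination), then extrapolate the
        phase forward assuming a locally constant frequency."""
        lead = int(lead_s * fs)
        win = int(win_s * fs)
        stop = onset_idx - lead                  # window ends 125 ms pre-onset
        seg = eeg[stop - win:stop] * np.hanning(win)
        spec = np.fft.rfft(seg)
        freqs = np.fft.rfftfreq(win, 1.0 / fs)
        k = np.argmin(np.abs(freqs - f_hz))      # nearest narrow-band bin
        phi0 = np.angle(spec[k])                 # phase at window start
        dt = (win + lead) / fs                   # window start -> onset
        return (phi0 + 2 * np.pi * f_hz * dt) % (2 * np.pi)

    # synthetic 10 Hz "alpha" with known phase 0.7 rad at t = 0
    fs = 250
    t = np.arange(4 * fs) / fs
    eeg = np.cos(2 * np.pi * 10 * t + 0.7)
    phi = phase_at_onset(eeg, fs, onset_idx=2 * fs, f_hz=10.0)
    quadrant = int(phi // (np.pi / 2))           # one of 4 phase groups
    ```

    Trials sorted by `quadrant` would then be averaged separately to obtain phase-conditioned ERPs.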

  13. Cross-modal recruitment of primary visual cortex by auditory stimuli in the nonhuman primate brain: a molecular mapping study.

    PubMed

    Hirst, Priscilla; Javadi Khomami, Pasha; Gharat, Amol; Zangenehpour, Shahin

    2012-01-01

    Recent studies suggest that exposure to only one component of audiovisual events can lead to cross-modal cortical activation. However, it is not certain whether such cross-modal recruitment can occur in the absence of explicit conditioning, semantic factors, or long-term associations. A recent study demonstrated that cross-modal cortical recruitment can occur even after a brief exposure to bimodal stimuli without semantic association, and showed that the primary visual cortex is under such cross-modal influence. In the present study, we used molecular activity mapping of the immediate early gene zif268. We found that animals which had previously been exposed to a combination of auditory and visual stimuli showed an increased number of active neurons in the primary visual cortex when presented with sounds alone. As previously implied, this cross-modal activation appears to be the result of implicit associations of the two stimuli, likely driven by their spatiotemporal characteristics; it was observed after a relatively short period of exposure (~45 min) and lasted for a relatively long period after the initial exposure (~1 day). These results suggest that the previously reported findings may be directly rooted in the increased activity of neurons in the primary visual cortex.

  14. Time-resolved multi-channel optical system for assessment of brain oxygenation and perfusion by monitoring of diffuse reflectance and fluorescence

    NASA Astrophysics Data System (ADS)

    Milej, D.; Gerega, A.; Kacprzak, M.; Sawosz, P.; Weigl, W.; Maniewski, R.; Liebert, A.

    2014-03-01

    Time-resolved near-infrared spectroscopy is an optical technique which can be applied in tissue oxygenation assessment. In the last decade this method has been extensively tested as a potential clinical tool for noninvasive human brain function monitoring and imaging. In the present paper we describe the construction of an instrument which allows for: (i) estimation of changes in brain tissue oxygenation using a two-wavelength spectroscopy approach and (ii) brain perfusion assessment with the use of single-wavelength reflectometry or fluorescence measurements combined with ICG-bolus tracking. A signal processing algorithm based on statistical moments of the measured distributions of times of flight of photons is implemented. This data analysis method allows for separation of signals originating from extra- and intracerebral tissue compartments. We present a compact and easily reconfigurable system which can be applied in different types of time-resolved experiments: two-wavelength measurements at 687 and 832 nm, single-wavelength reflectance measurements at 760 nm (which is at the maximum of the ICG absorption spectrum) or fluorescence measurements with excitation at 760 nm. Details of the instrument construction and results of its technical tests are shown. Furthermore, results of in-vivo measurements obtained for various modes of operation of the system are presented.
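    The moment-based analysis of distributions of times of flight (DTOFs) mentioned above can be illustrated with a short sketch. The histogram here is synthetic and its gamma shape is an arbitrary assumption; the three moments (total photon count, mean time of flight, variance) are the standard quantities in this kind of analysis, with higher moments weighting late-arriving photons more strongly.

    ```python
    import numpy as np

    def dtof_moments(counts, t):
        """Statistical moments of a distribution of times of flight (DTOF):
        total photon count (0th), mean time of flight (1st), and variance
        (2nd central moment). Higher moments emphasize late-arriving photons,
        which are more likely to have probed deeper (intracerebral) tissue."""
        n_tot = counts.sum()                                 # 0th moment
        mean_t = (t * counts).sum() / n_tot                  # 1st moment
        var_t = ((t - mean_t) ** 2 * counts).sum() / n_tot   # 2nd central
        return n_tot, mean_t, var_t

    # synthetic DTOF: gamma-shaped photon arrival histogram over 8 ns
    t = np.linspace(0.0, 8.0, 800)                 # time of flight, ns
    counts = 1e4 * t ** 2 * np.exp(-t / 0.5)       # toy pulse shape
    n_tot, mean_t, var_t = dtof_moments(counts, t)
    ```

    For this gamma-shaped toy histogram the continuous-case values are a mean of 1.5 ns and a variance of 0.75 ns², which the discrete moments closely reproduce.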

  15. Auditory spatial processing in Alzheimer's disease.

    PubMed

    Golden, Hannah L; Nicholas, Jennifer M; Yong, Keir X X; Downey, Laura E; Schott, Jonathan M; Mummery, Catherine J; Crutch, Sebastian J; Warren, Jason D

    2015-01-01

    The location and motion of sounds in space are important cues for encoding the auditory world. Spatial processing is a core component of auditory scene analysis, a cognitively demanding function that is vulnerable in Alzheimer's disease. Here we designed a novel neuropsychological battery based on a virtual space paradigm to assess auditory spatial processing in patient cohorts with clinically typical Alzheimer's disease (n = 20) and its major variant syndrome, posterior cortical atrophy (n = 12), in relation to healthy older controls (n = 26). We assessed three dimensions of auditory spatial function: externalized versus non-externalized sound discrimination, moving versus stationary sound discrimination and stationary auditory spatial position discrimination, together with non-spatial auditory and visual spatial control tasks. Neuroanatomical correlates of auditory spatial processing were assessed using voxel-based morphometry. Relative to healthy older controls, both patient groups exhibited impairments in the detection of auditory motion and in stationary sound position discrimination. The posterior cortical atrophy group showed greater impairment for auditory motion processing and for the processing of a non-spatial control complex auditory property (timbre) than the typical Alzheimer's disease group. Voxel-based morphometry in the patient cohort revealed grey matter correlates of auditory motion detection and spatial position discrimination in right inferior parietal cortex and precuneus, respectively. These findings delineate auditory spatial processing deficits in typical and posterior Alzheimer's disease phenotypes that are related to posterior cortical regions involved in both syndromic variants and modulated by the syndromic profile of brain degeneration. Auditory spatial deficits contribute to impaired spatial awareness in Alzheimer's disease and may constitute a novel perceptual model for probing brain network disintegration across the Alzheimer's disease

  16. Fractional channel multichannel analyzer

    DOEpatents

    Brackenbush, Larry W.; Anderson, Gordon A.

    1994-01-01

    A multichannel analyzer incorporating the features of the present invention obtains the effect of fractional channels thus greatly reducing the number of actual channels necessary to record complex line spectra. This is accomplished by using an analog-to-digital converter in the asynchronous mode, i.e., the gate pulse from the pulse height-to-pulse width converter is not synchronized with the signal from a clock oscillator. This saves power and reduces the number of components required on the board to achieve the effect of radically expanding the number of channels without changing the circuit board.

  17. Fractional channel multichannel analyzer

    DOEpatents

    Brackenbush, L.W.; Anderson, G.A.

    1994-08-23

    A multichannel analyzer incorporating the features of the present invention obtains the effect of fractional channels thus greatly reducing the number of actual channels necessary to record complex line spectra. This is accomplished by using an analog-to-digital converter in the asynchronous mode, i.e., the gate pulse from the pulse height-to-pulse width converter is not synchronized with the signal from a clock oscillator. This saves power and reduces the number of components required on the board to achieve the effect of radically expanding the number of channels without changing the circuit board. 9 figs.

  18. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study.

    PubMed

    Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale

    2017-04-01

    There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times to auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.

  19. Auditory and visual impairments in patients with blast-related traumatic brain injury: Effect of dual sensory impairment on Functional Independence Measure.

    PubMed

    Lew, Henry L; Garvert, Donn W; Pogoda, Terri K; Hsu, Pei-Te; Devine, Jennifer M; White, Daniel K; Myers, Paula J; Goodrich, Gregory L

    2009-01-01

    The frequencies of hearing impairment (HI), vision impairment (VI), or dual (hearing and vision) sensory impairment (DSI) in patients with blast-related traumatic brain injury (TBI) and their effects on functional recovery are not well documented. In this preliminary study of 175 patients admitted to a Polytrauma Rehabilitation Center, we completed hearing and vision examinations and obtained Functional Independence Measure (FIM) scores at admission and discharge for 62 patients with blast-related TBI. We diagnosed HI only, VI only, and DSI in 19%, 34%, and 32% of patients, respectively. Only 15% of the patients had no sensory impairment in either the auditory or visual modality. An analysis of variance showed a group difference for the total and motor FIM scores at discharge (p < 0.04). Regression model analyses demonstrated that DSI significantly contributed to reduced gain in total (t = -2.25) and motor (t = -2.50) FIM scores (p < 0.05). Understanding the long-term consequences of sensory impairments in the functional recovery of patients with blast-related TBI requires further research.

  20. Decreases in energy and increases in phase locking of event related oscillations to auditory stimuli occurs over adolescence in human and rodent brain

    PubMed Central

    Ehlers, Cindy L.; Wills, Derek N.; Desikan, Anita; Phillips, Evelyn; Havstad, James

    2014-01-01

    Synchrony of phase (phase locking) of event-related oscillations (EROs) within and between different brain areas has been suggested to reflect communication exchange between neural networks and as such may be a sensitive and translational measure of changes in brain remodeling that occurs during adolescence. This study sought to investigate developmental changes in EROs using a similar auditory event-related potential (ERP) paradigm in both rats and humans. Energy and phase variability of EROs collected from 38 young adult men (age 18-25 yrs), 33 periadolescent boys (age 10-14 yrs), 15 male periadolescent rats (@ Post Natal Day (PD) 36) and 19 male adult rats (@ PD 103) were investigated. Three channels of ERP data (Frontal Cortex, FZ; Central Cortex, CZ; Parietal Cortex, PZ) were collected from the humans using an oddball plus “noise” paradigm that was presented under passive (no behavioral response required) conditions in the periadolescents and under active conditions (where each subject was instructed to depress a counter each time he detected an infrequent (target) tone) in adults and adolescents. ERPs were recorded in rats using only the passive paradigm. In order to compare the tasks used in rats to those used in humans we first studied whether three ERO measures (energy, phase locking index (within an electrode site, PLI), phase difference locking index (between different electrode sites, PDLI)) differentiated the “active” from “passive” ERP tasks. Secondly we explored our main question of whether the three ERO measures, differentiated adults from periadolescents in a similar manner in both humans and rats. No significant changes were found in measures of ERO energy between the active and passive tasks in the periadolescent human participants. There was a smaller but significant increase in PLI but not PDLI as a function of “active” task requirements. Developmental differences were found in energy, PLI and PDLI values between the
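    The phase measures named above are conventionally computed as the resultant length of unit phasors across trials — PLI from the phases at one electrode site, PDLI from the trial-wise phase differences between two sites. A minimal sketch under that conventional definition (synthetic phases, not the study's data):

    ```python
    import numpy as np

    def pli(phases):
        """Phase locking index across trials: length of the mean resultant
        vector of unit phasors. 1 = identical phase on every trial,
        near 0 = phases uniformly random across trials."""
        return np.abs(np.exp(1j * np.asarray(phases)).mean())

    def pdli(phases_a, phases_b):
        """Phase difference locking index between two electrode sites:
        the PLI of the trial-wise phase difference."""
        return pli(np.asarray(phases_a) - np.asarray(phases_b))

    rng = np.random.default_rng(0)
    locked_ph = rng.normal(1.0, 0.1, 200)          # tightly clustered phases
    uniform_ph = rng.uniform(0, 2 * np.pi, 200)    # no phase locking
    ```

    With clustered phases the index approaches 1, uniformly random phases give a value near 0, and two sites carrying identical phases on every trial give a PDLI of exactly 1.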

  1. Brain Dynamics of Aging: Multiscale Variability of EEG Signals at Rest and during an Auditory Oddball Task1,2,3

    PubMed Central

    Sleimen-Malkoun, Rita; Perdikis, Dionysios; Müller, Viktor; Blanc, Jean-Luc; Huys, Raoul; Temprado, Jean-Jacques

    2015-01-01

    Abstract The present work focused on the study of fluctuations of cortical activity across time scales in young and older healthy adults. The main objective was to offer a comprehensive characterization of the changes in brain (cortical) signal variability during aging, and to link them with known underlying structural, neurophysiological, and functional modifications, as well as with aging theories. We analyzed electroencephalogram (EEG) data of young and elderly adults, collected at resting state and during an auditory oddball task. We used a wide battery of metrics that are typically applied separately in the literature, and we compared them with more specific ones that address their limits. Our procedure aimed to overcome some of the methodological limitations of earlier studies and to verify whether previous findings can be reproduced and extended to different experimental conditions. In both rest and task conditions, our results mainly revealed that EEG signals presented systematic age-related changes that were time-scale-dependent with regard to the structure of fluctuations (complexity) but not with regard to their magnitude. Namely, compared with young adults, the cortical fluctuations of the elderly were more complex at shorter time scales, but less complex at longer scales, while always showing a lower variance. Additionally, the elderly showed signs of spatial dedifferentiation, as well as dedifferentiation between experimental conditions. By integrating these so far isolated findings across time scales, metrics, and conditions, the present study offers an overview of age-related changes in the fluctuations of electrocortical activity while making the link with underlying brain dynamics. PMID:26464983
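    The abstract does not name the individual metrics in its battery; multiscale sample entropy is one widely used measure of time-scale-dependent complexity of this kind, so the sketch below illustrates the general approach rather than the study's method (naive O(N²) implementation; m = 2 and r = 0.2·SD are conventional assumed parameters):

    ```python
    import numpy as np

    def coarse_grain(x, scale):
        """Average consecutive non-overlapping windows of length `scale`,
        yielding the signal as seen at a coarser time scale."""
        n = len(x) // scale
        return x[:n * scale].reshape(n, scale).mean(axis=1)

    def sample_entropy(x, m=2, r_frac=0.2):
        """Naive sample entropy: negative log of the conditional probability
        that template vectors matching for m points (within tolerance r,
        Chebyshev distance) also match for m + 1 points."""
        x = np.asarray(x, dtype=float)
        r = r_frac * x.std()
        def matches(mm):
            templ = np.lib.stride_tricks.sliding_window_view(x, mm)
            d = np.abs(templ[:, None, :] - templ[None, :, :]).max(axis=-1)
            return ((d <= r).sum() - len(templ)) / 2   # exclude self-matches
        b, a = matches(m), matches(m + 1)
        return np.inf if a == 0 else -np.log(a / b)

    rng = np.random.default_rng(1)
    noise = rng.normal(size=1000)                      # irregular signal
    sine = np.sin(2 * np.pi * np.arange(1000) / 50)    # highly regular signal
    mse_noise = [sample_entropy(coarse_grain(noise, s)) for s in (1, 2, 4)]
    ```

    White noise yields a higher sample entropy than the regular sinusoid, matching the intuition that such complexity metrics index the structure of fluctuations rather than their magnitude.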

  2. Auditory pathways: are 'what' and 'where' appropriate?

    PubMed

    Hall, Deborah A

    2003-05-13

    New evidence confirms that the auditory system encompasses temporal, parietal and frontal brain regions, some of which partly overlap with the visual system. But common assumptions about the functional homologies between sensory systems may be misleading.

  3. Sleep-Disordered Breathing Affects Auditory Processing in 5–7 Year-Old Children: Evidence From Brain Recordings

    PubMed Central

    Key, Alexandra P.F.; Molfese, Dennis L.; O’Brien, Louise; Gozal, David

    2010-01-01

    Poor sleep in children is associated with lower neurocognitive functioning and increased maladaptive behaviors. The current study examined the impact of snoring (the most common manifestation of sleep-disordered breathing) on cognitive and brain functioning in a sample of 35 asymptomatic children ages 5–7 years identified in the community as having habitual snoring (SDB). All participants completed polysomnographic, neurocognitive (NEPSY) and psychophysiological (ERPs to speech sounds) assessments. The results indicated that sub-clinical levels of SDB may not necessarily lead to reduced performance on standardized behavioral measures of attention and memory. However, brain indices of speech perception and discrimination (N1/P2) are sensitive to individual differences in the quality of sleep. We postulate that the addition of ERPs to the standard clinical measures of sleep problems could lead to early identification of children who may be more cognitively vulnerable because of chronic sleep disturbances. PMID:20183723

  4. Auditory system

    NASA Technical Reports Server (NTRS)

    Ades, H. W.

    1973-01-01

    The physical correlates of hearing, i.e., the acoustic stimuli, are reported. The auditory system, consisting of the external ear, middle ear, inner ear, organ of Corti, basilar membrane, hair cells, inner hair cells, outer hair cells, innervation of hair cells, and transducer mechanisms, is discussed. Both conductive and sensorineural hearing losses are also examined.

  5. Functional Organization of the Ventral Auditory Pathway.

    PubMed

    Cohen, Yale E; Bennur, Sharath; Christison-Lagay, Kate; Gifford, Adam M; Tsunada, Joji

    2016-01-01

    The fundamental problem in audition is determining the mechanisms required by the brain to transform an unlabelled mixture of auditory stimuli into coherent perceptual representations. This process is called auditory-scene analysis. The perceptual representations that result from auditory-scene analysis are formed through a complex interaction of perceptual grouping, attention, categorization and decision-making. Despite a great deal of scientific energy devoted to understanding these aspects of hearing, we still do not understand (1) how sound perception arises from neural activity and (2) the causal relationship between neural activity and sound perception. Here, we review the role of the "ventral" auditory pathway in sound perception. We hypothesize that, in the early parts of the auditory cortex, neural activity reflects the auditory properties of a stimulus. However, in later parts of the auditory cortex, neurons encode the sensory evidence that forms an auditory decision and are causally involved in the decision process. Finally, in the prefrontal cortex, which receives input from the auditory cortex, neural activity reflects the actual perceptual decision. Together, these studies indicate that the ventral pathway contains hierarchical circuits that are specialized for auditory perception and scene analysis.

  6. Auditory short-term memory in the primate auditory cortex.

    PubMed

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory.

  7. Software Configurable Multichannel Transceiver

    NASA Technical Reports Server (NTRS)

    Freudinger, Lawrence C.; Cornelius, Harold; Hickling, Ron; Brooks, Walter

    2009-01-01

    Emerging test instrumentation and test scenarios increasingly require network communication to manage complexity. Adapting wireless communication infrastructure to accommodate challenging testing needs can benefit from reconfigurable radio technology. A fundamental requirement for a software-definable radio system is independence from carrier frequencies, one of the radio components that to date has seen only limited progress toward programmability. This paper overviews an ongoing project to validate the viability of a promising chipset that performs conversion of radio frequency (RF) signals directly into digital data for the wireless receiver and, for the transmitter, converts digital data into RF signals. The Software Configurable Multichannel Transceiver (SCMT) enables four transmitters and four receivers in a single unit the size of a commodity disk drive, programmable for any frequency band between 1 MHz and 6 GHz.

  8. Multichannel optical sensing device

    DOEpatents

    Selkowitz, S.E.

    1985-08-16

    A multichannel optical sensing device is disclosed, for measuring the outdoor sky luminance or illuminance or the luminance or illuminance distribution in a room, comprising a plurality of light receptors, an optical shutter matrix including a plurality of liquid crystal optical shutter elements operable by electrical control signals between light transmitting and light stopping conditions, fiber optical elements connected between the receptors and the shutter elements, a microprocessor based programmable control unit for selectively supplying control signals to the optical shutter elements in a programmable sequence, a photodetector including an optical integrating spherical chamber having an input port for receiving the light from the shutter matrix and at least one detector element in the spherical chamber for producing output signals corresponding to the light, and output units for utilizing the output signals including a storage unit having a control connection to the microprocessor based programmable control unit for storing the output signals under the sequence control of the programmable control unit.

  9. Multichannel optical sensing device

    DOEpatents

    Selkowitz, Stephen E.

    1990-01-01

    A multichannel optical sensing device is disclosed, for measuring the outdoor sky luminance or illuminance or the luminance or illuminance distribution in a room, comprising a plurality of light receptors, an optical shutter matrix including a plurality of liquid crystal optical shutter elements operable by electrical control signals between light transmitting and light stopping conditions, fiber optic elements connected between the receptors and the shutter elements, a microprocessor based programmable control unit for selectively supplying control signals to the optical shutter elements in a programmable sequence, a photodetector including an optical integrating spherical chamber having an input port for receiving the light from the shutter matrix and at least one detector element in the spherical chamber for producing output signals corresponding to the light, and output units for utilizing the output signals including a storage unit having a control connection to the microprocessor based programmable control unit for storing the output signals under the sequence control of the programmable control unit.

  10. Sampled sinusoidal stimulation profile and multichannel fuzzy logic classification for monitor-based phase-coded SSVEP brain-computer interfacing

    NASA Astrophysics Data System (ADS)

    Manyakov, Nikolay V.; Chumerin, Nikolay; Robben, Arne; Combaz, Adrien; van Vliet, Marijn; Van Hulle, Marc M.

    2013-06-01

    Objective. The performance and usability of brain-computer interfaces (BCIs) can be improved by new paradigms, stimulation methods, decoding strategies, sensor technology etc. In this study we introduce new stimulation and decoding methods for electroencephalogram (EEG)-based BCIs that have targets flickering at the same frequency but with different phases. Approach. The phase information is estimated from the EEG data, and used for target command decoding. All visual stimulation is done on a conventional (60-Hz) LCD screen. Instead of the ‘on/off’ visual stimulation, commonly used in phase-coded BCI, we propose one based on a sampled sinusoidal intensity profile. In order to fully exploit the circular nature of the evoked phase response, we introduce a filter feature selection procedure based on circular statistics and propose a fuzzy logic classifier designed to cope with circular information from multiple channels jointly. Main results. We show that the proposed visual stimulation enables us not only to encode more commands under the same conditions, but also to obtain EEG responses with a more stable phase. We also demonstrate that the proposed decoding approach outperforms existing ones, especially for the short time windows used. Significance. The work presented here shows how to overcome some of the limitations of screen-based visual stimulation. The superiority of the proposed decoding approach demonstrates the importance of preserving the circularity of the data during the decoding stage.
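
    The phase decoding described in this abstract can be illustrated with a minimal, hypothetical sketch: estimate the single-trial response phase at the flicker frequency with a one-bin DFT, then assign the trial to the target whose encoded phase is circularly closest. The function names and the nearest-phase rule are illustrative assumptions; the paper's actual decoder is a multichannel fuzzy logic classifier operating on circular features.

```python
import numpy as np

def ssvep_phase(eeg, fs, f_stim):
    """Estimate the response phase (radians) at the flicker frequency.

    eeg    : 1-D array, single-trial epoch from one channel
    fs     : sampling rate in Hz
    f_stim : flicker frequency in Hz (epoch should span whole cycles)
    """
    t = np.arange(len(eeg)) / fs
    # One-bin DFT: project the epoch onto a complex exponential at f_stim.
    z = np.dot(eeg, np.exp(-2j * np.pi * f_stim * t))
    return np.angle(z)

def classify_by_phase(phase, target_phases):
    """Pick the target whose encoded phase is circularly closest."""
    # Wrapping the difference through the complex plane keeps it in (-pi, pi].
    d = np.angle(np.exp(1j * (phase - np.asarray(target_phases))))
    return int(np.argmin(np.abs(d)))
```

    In practice the phase would be estimated per channel and the per-channel votes combined, which is where the proposed fuzzy logic classifier comes in.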

  11. Harmonic Training and the Formation of Pitch Representation in a Neural Network Model of the Auditory Brain

    PubMed Central

    Ahmad, Nasir; Higgins, Irina; Walker, Kerry M. M.; Stringer, Simon M.

    2016-01-01

    Attempting to explain the perceptual qualities of pitch has proven to be, and remains, a difficult problem. The wide range of sounds which elicit pitch and a lack of agreement across neurophysiological studies on how pitch is encoded by the brain have made this attempt more difficult. In describing the potential neural mechanisms by which pitch may be processed, a number of neural networks have been proposed and implemented. However, no unsupervised neural networks with biologically accurate cochlear inputs have yet been demonstrated. This paper proposes a simple system in which pitch-representing neurons are produced in a biologically plausible setting. Purely unsupervised regimes of neural network learning are implemented, and these prove to be sufficient for identifying the pitch of sounds with a variety of spectral profiles, including sounds with missing fundamental frequencies and iterated rippled noises. PMID:27047368

  12. Spatiotemporal Analysis of Multichannel EEG: CARTOOL

    PubMed Central

    Brunet, Denis; Murray, Micah M.; Michel, Christoph M.

    2011-01-01

    This paper describes methods to analyze the brain's electric fields recorded with multichannel electroencephalography (EEG) and demonstrates their implementation in the software CARTOOL. It focuses on the analysis of the spatial properties of these fields and on quantitative assessment of changes of field topographies across time, experimental conditions, or populations. Topographic analyses are advantageous because they are reference independent and thus render statistically unambiguous results. Neurophysiologically, differences in topography directly indicate changes in the configuration of the active neuronal sources in the brain. We describe global measures of field strength and field similarities, temporal segmentation based on topographic variations, topographic analysis in the frequency domain, topographic statistical analysis, and source imaging based on distributed inverse solutions. All analysis methods are implemented in a freely available academic software package called CARTOOL. Besides providing these analysis tools, CARTOOL is particularly designed to visualize the data and the analysis results using 3-dimensional display routines that allow rapid manipulation and animation of 3D images. CARTOOL therefore is a helpful tool for researchers as well as for clinicians to interpret multichannel EEG and evoked potentials in a global, comprehensive, and unambiguous way. PMID:21253358

  13. Spatiotemporal analysis of multichannel EEG: CARTOOL.

    PubMed

    Brunet, Denis; Murray, Micah M; Michel, Christoph M

    2011-01-01

    This paper describes methods to analyze the brain's electric fields recorded with multichannel electroencephalography (EEG) and demonstrates their implementation in the software CARTOOL. It focuses on the analysis of the spatial properties of these fields and on quantitative assessment of changes of field topographies across time, experimental conditions, or populations. Topographic analyses are advantageous because they are reference independent and thus render statistically unambiguous results. Neurophysiologically, differences in topography directly indicate changes in the configuration of the active neuronal sources in the brain. We describe global measures of field strength and field similarities, temporal segmentation based on topographic variations, topographic analysis in the frequency domain, topographic statistical analysis, and source imaging based on distributed inverse solutions. All analysis methods are implemented in a freely available academic software package called CARTOOL. Besides providing these analysis tools, CARTOOL is particularly designed to visualize the data and the analysis results using 3-dimensional display routines that allow rapid manipulation and animation of 3D images. CARTOOL therefore is a helpful tool for researchers as well as for clinicians to interpret multichannel EEG and evoked potentials in a global, comprehensive, and unambiguous way.
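
    The "global measures of field strength and field similarities" mentioned in the CARTOOL abstracts usually denote Global Field Power (GFP) and Global Map Dissimilarity (GMD). The sketch below gives the standard textbook definitions, assuming average-referenced data; it is an illustration only, and CARTOOL's own implementation may differ in detail.

```python
import numpy as np

def gfp(v):
    """Global Field Power: the spatial standard deviation of the scalp map
    at each time point. v has shape (n_electrodes, n_times)."""
    v = v - v.mean(axis=0, keepdims=True)   # re-reference to the average
    return np.sqrt((v ** 2).mean(axis=0))

def gmd(v1, v2):
    """Global Map Dissimilarity between two maps (shape: n_electrodes,).
    Maps are average-referenced and GFP-normalized first, so GMD reflects
    topography only, not strength. Range: 0 (identical) to 2 (inverted)."""
    def norm(v):
        v = v - v.mean()
        return v / np.sqrt((v ** 2).mean())
    d = norm(v1) - norm(v2)
    return np.sqrt((d ** 2).mean())
```

    Because GMD discards field strength, two maps with the same generator configuration but different source strengths score near zero, which is what makes topographic comparisons reference independent and statistically unambiguous.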

  14. Separating heart and brain: on the reduction of physiological noise from multichannel functional near-infrared spectroscopy (fNIRS) signals

    NASA Astrophysics Data System (ADS)

    Bauernfeind, G.; Wriessnegger, S. C.; Daly, I.; Müller-Putz, G. R.

    2014-10-01

    Objective. Functional near-infrared spectroscopy (fNIRS) is an emerging technique for the in vivo assessment of functional activity of the cerebral cortex and is also used in the field of brain-computer interface (BCI) research. A common challenge for the utilization of fNIRS in these areas is a stable and reliable investigation of the spatio-temporal hemodynamic patterns. However, the recorded patterns may be influenced and superimposed by signals generated by physiological processes, resulting in an inaccurate estimation of the cortical activity. Up to now, only a few studies have investigated these influences, and still fewer have attempted to remove or reduce them. The present study aims to gain insights into the reduction of physiological rhythms in hemodynamic signals (oxygenated hemoglobin (oxy-Hb), deoxygenated hemoglobin (deoxy-Hb)). Approach. We introduce the use of three different signal processing approaches (spatial filtering with a common average reference (CAR); independent component analysis (ICA); and transfer function (TF) models) to reduce the influence of respiratory and blood pressure (BP) rhythms on the hemodynamic responses. Main results. All approaches produce large reductions in BP and respiration influences on the oxy-Hb signals and, therefore, improve the contrast-to-noise ratio (CNR). In contrast, for deoxy-Hb signals, CAR and ICA did not improve the CNR. However, for the TF approach, a CNR improvement in deoxy-Hb is also found. Significance. The present study investigates the application of different signal processing approaches to reduce the influences of physiological rhythms on the hemodynamic responses. In addition to the identification of the best signal processing method, we also show the importance of noise reduction in fNIRS data.
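
    Of the three approaches, the CAR spatial filter is the simplest to sketch: every channel has the instantaneous across-channel mean subtracted, which attenuates physiological components (heartbeat, respiration, blood-pressure waves) shared by all channels while leaving spatially focal activation largely intact. This is a generic illustration, not the authors' exact pipeline:

```python
import numpy as np

def common_average_reference(x):
    """Subtract the instantaneous mean across channels from every channel.

    x : array of shape (n_channels, n_samples), e.g. oxy-Hb time courses.
    Components common to all channels are removed; focal signals survive
    (scaled by (n - 1) / n, since they also contribute to the mean).
    """
    return x - x.mean(axis=0, keepdims=True)
```

    The same one-liner is a standard spatial filter in EEG processing; ICA and transfer-function modeling are considerably more involved and need auxiliary recordings (e.g. a blood-pressure trace) in the TF case.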

  15. Long-term recovery from hippocampal-related behavioral and biochemical abnormalities induced by noise exposure during brain development. Evaluation of auditory pathway integrity.

    PubMed

    Uran, S L; Gómez-Casati, M E; Guelman, L R

    2014-10-01

    Sound is an important part of man's contact with the environment and has served as a critical means for survival throughout his evolution. As a result of exposure to noise, physiological functions such as those involving structures of the auditory and non-auditory systems might be damaged. We have previously reported that noise-exposed developing rats elicited hippocampal-related histological, biochemical and behavioral changes. However, no data about the time course of these changes were reported. Moreover, measurements of auditory pathway function were not performed in exposed animals. Therefore, with the present work, we aim to test the onset and the persistence of the different extra-auditory abnormalities observed in noise-exposed rats and to evaluate auditory pathway integrity. Male Wistar rats aged 15 days were exposed to moderate noise levels (95-97 dB SPL, 2 h a day) for one day (acute noise exposure, ANE) or for 15 days (sub-acute noise exposure, SANE). Hippocampal biochemical determinations as well as short-term (ST) and long-term (LT) behavioral assessments were performed. In addition, histological and functional evaluations of the auditory pathway were carried out in exposed animals. Our results show that hippocampal-related behavioral and biochemical changes (impairments in habituation, recognition and associative memories as well as distortion of anxiety-related behavior, decreases in reactive oxygen species (ROS) levels and increases in antioxidant enzyme activities) induced by noise exposure were almost completely restored by PND 90. In addition, auditory evaluation shows that the increased cochlear thresholds observed in exposed rats were re-established by PND 90, although with a remarkable supra-threshold amplitude reduction. These data suggest that noise-induced hippocampal and auditory-related alterations are mostly transient and that the effects of noise on the hippocampus might be, at least in part, mediated by damage to the auditory pathway.

  16. Auditory neuroplasticity, hearing loss and cochlear implants.

    PubMed

    Ryugo, David

    2015-07-01

    Data from our laboratory show that the auditory brain is highly malleable by experience. We establish a base of knowledge that describes the normal structure and workings at the initial stages of the central auditory system. This research is expanded to include the associated pathology in the auditory brain stem created by hearing loss. Utilizing the congenitally deaf white cat, we demonstrate the way that cells, synapses, and circuits are pathologically affected by sound deprivation. We further show that the restoration of auditory nerve activity via electrical stimulation through cochlear implants serves to correct key features of brain pathology caused by hearing loss. The data suggest that rigorous training with cochlear implants and/or hearing aids offers the promise of heretofore unattained benefits.

  17. Auditory-vocal mirroring in songbirds.

    PubMed

    Mooney, Richard

    2014-01-01

    Mirror neurons are theorized to serve as a neural substrate for spoken language in humans, but the existence and functions of auditory-vocal mirror neurons in the human brain remain largely matters of speculation. Songbirds resemble humans in their capacity for vocal learning and depend on their learned songs to facilitate courtship and individual recognition. Recent neurophysiological studies have detected putative auditory-vocal mirror neurons in a sensorimotor region of the songbird's brain that plays an important role in expressive and receptive aspects of vocal communication. This review discusses the auditory and motor-related properties of these cells, considers their potential role in song learning and communication in relation to classical studies of birdsong, and points to the circuit and developmental mechanisms that may give rise to auditory-vocal mirroring in the songbird's brain.

  18. Comparison of tactile, auditory, and visual modality for brain-computer interface use: a case study with a patient in the locked-in state.

    PubMed

    Kaufmann, Tobias; Holz, Elisa M; Kübler, Andrea

    2013-01-01

    This paper describes a case study with a patient in the classic locked-in state, who currently has no means of independent communication. Following a user-centered approach, we investigated event-related potentials (ERP) elicited in different modalities for use in brain-computer interface (BCI) systems. Such systems could provide her with an alternative communication channel. To investigate the most viable modality for achieving BCI based communication, classic oddball paradigms (1 rare and 1 frequent stimulus, ratio 1:5) in the visual, auditory and tactile modality were conducted (2 runs per modality). Classifiers were built on one run and tested offline on another run (and vice versa). In these paradigms, the tactile modality was clearly superior to other modalities, displaying high offline accuracy even when classification was performed on single trials only. Consequently, we tested the tactile paradigm online and the patient successfully selected targets without any error. Furthermore, we investigated use of the visual or tactile modality for different BCI systems with more than two selection options. In the visual modality, several BCI paradigms were tested offline. Neither matrix-based nor so-called gaze-independent paradigms constituted a means of control. These results may thus question the gaze-independence of current gaze-independent approaches to BCI. A tactile four-choice BCI resulted in high offline classification accuracies. Yet, online use raised various issues. Although performance was clearly above chance, practical daily life use appeared unlikely when compared to other communication approaches (e.g., partner scanning). Our results emphasize the need for user-centered design in BCI development including identification of the best stimulus modality for a particular user. Finally, the paper discusses feasibility of EEG-based BCI systems for patients in classic locked-in state and compares BCI to other AT solutions that we also tested during the

  19. Multichannel electrochemical microbial detection unit

    NASA Technical Reports Server (NTRS)

    Wilkins, J. R.; Young, R. N.; Boykin, E. H.

    1978-01-01

    The paper describes the design and capabilities of a compact multichannel electrochemical unit devised to detect bacteria and to indicate detection times automatically. By connecting this unit to a strip-chart recorder, a permanent record is obtained of the end points and growth curves for each of eight channels. The experimental setup utilizing the multichannel unit consists of a test tube (25 by 150 mm) containing a combination redox electrode plus 18 ml of lauryl tryptose broth, positioned in a 35 °C water bath. Leads from the electrodes are connected to the multichannel unit, which in turn is connected to a strip-chart recorder. After addition of 2.0 ml of inoculum to the test tubes, depression of the push-button starter activates the electronics, timer, and indicator light for each channel. The multichannel unit is employed to test tenfold dilutions of various members of the Enterobacteriaceae group, and a typical dose-response curve is presented.

  20. Brain dynamics of distractibility: interaction between top-down and bottom-up mechanisms of auditory attention.

    PubMed

    Bidet-Caulet, Aurélie; Bottemanne, Laure; Fonteneau, Clara; Giard, Marie-Hélène; Bertrand, Olivier

    2015-05-01

    Attention improves the processing of specific information while other stimuli are disregarded. A good balance between bottom-up (attentional capture by unexpected salient stimuli) and top-down (selection of relevant information) mechanisms is crucial for being both task-efficient and aware of our environment. Only a few studies have explored how an isolated, unexpected, task-irrelevant stimulus outside the attention focus can disturb the top-down attention mechanisms necessary for good performance of the ongoing task, and how these top-down mechanisms can modulate the bottom-up mechanisms of attentional capture triggered by an unexpected event. We recorded scalp electroencephalography in 18 young adults performing a new paradigm that measures distractibility and assesses both bottom-up and top-down attention mechanisms at the same time. Increasing task load in top-down attention was found to reduce early processing of the distracting sound, but not the bottom-up mechanisms of attentional capture or the behavioral distraction cost in reaction time. Moreover, the impact of bottom-up attentional capture by distracting sounds on target processing was revealed as a delayed latency of the N100 sensory response to target sounds, mirroring increased reaction times. These results provide crucial insight into how bottom-up and top-down mechanisms dynamically interact and compete in the human brain, i.e., into the precarious balance between voluntary attention and distraction.

  1. Digital restoration of multichannel images

    NASA Technical Reports Server (NTRS)

    Galatsanos, Nikolas P.; Chin, Roland T.

    1989-01-01

    The Wiener solution of a multichannel restoration scheme is presented. Using matrix diagonalization and block-Toeplitz to block-circulant approximation, the inversion of the multichannel, linear space-invariant imaging system becomes feasible by utilizing a fast iterative matrix inversion procedure. The restoration uses both the within-channel (spatial) and between-channel (spectral) correlation; hence, the restored result is a better estimate than that produced by independent channel restoration. Simulations are also presented.
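
    The key computational idea above, approximating the block-Toeplitz system by a block-circulant one so it diagonalizes in the DFT domain and each frequency bin becomes a small independent solve, can be sketched in one dimension. The sketch below deliberately simplifies the paper's model by assuming all channels observe the same underlying signal through per-channel blur kernels; it illustrates the diagonalization trick, not the authors' full multichannel Wiener estimator with spectral cross-correlation.

```python
import numpy as np

def multichannel_wiener(y, h, snr=100.0):
    """1-D multichannel Wiener restoration sketch (circulant model).

    y   : (C, N) observed channels, each the circular convolution of one
          underlying signal with its own kernel h[c], plus white noise.
    h   : (C, N) per-channel point-spread functions (zero-padded to N).
    snr : assumed signal-to-noise power ratio (regularizes the inverse).
    """
    Y = np.fft.fft(y, axis=1)
    H = np.fft.fft(h, axis=1)
    # Under the circulant approximation the big system block-diagonalizes:
    # each DFT bin k is an independent scalar Wiener estimate
    #   x_hat(k) = sum_c conj(H_c) Y_c / (sum_c |H_c|^2 + 1/snr)
    num = (np.conj(H) * Y).sum(axis=0)
    den = (np.abs(H) ** 2).sum(axis=0) + 1.0 / snr
    return np.real(np.fft.ifft(num / den))
```

    Combining all channels in the numerator and denominator is what lets the between-channel information improve the estimate over restoring each channel independently, the same qualitative point the abstract makes.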

  2. Brain stem auditory potentials evoked by clicks in the presence of high-pass filtered noise in dogs.

    PubMed

    Poncelet, L; Deltenre, P; Coppens, A; Michaux, C; Coussart, E

    2006-04-01

    This study evaluates the effects of a high-frequency hearing loss, simulated by the high-pass-noise masking method, on the characteristics of click-evoked brain stem auditory-evoked potentials (BAEP) in dogs. BAEP were obtained in response to rarefaction and condensation click stimuli from 60 dB normal hearing level (NHL, corresponding to 89 dB sound pressure level) down to the wave V threshold, using steps of 5 dB, in eleven 58- to 80-day-old Beagle puppies. Responses were added, providing an equivalent to alternate-polarity clicks, and subtracted, providing the rarefaction-condensation potential (RCDP). The procedure was repeated while constant-level, high-pass filtered (HPF) noise was superimposed on the click. Cut-off frequencies of the successively used filters were 8, 4, 2 and 1 kHz. For each condition, wave V and RCDP thresholds, and the slope of the wave V latency-intensity curve (LIC), were collected. The intensity range over which RCDP could not be recorded (pre-RCDP range) was calculated. Compared with the no-noise condition, the pre-RCDP range significantly diminished and the wave V threshold significantly increased when the superimposed HPF noise reached the 4 kHz area. The wave V LIC slope became significantly steeper with the 2 kHz HPF noise. In this non-invasive model of high-frequency hearing loss, impaired hearing of frequencies from 8 kHz and above escaped detection through click BAEP study in dogs. Frequencies above 13 kHz were, however, not specifically addressed in this study.

  3. Central auditory function of deafness genes.

    PubMed

    Willaredt, Marc A; Ebbers, Lena; Nothwang, Hans Gerd

    2014-06-01

    The highly variable benefit of hearing devices is a serious challenge in auditory rehabilitation. Various factors contribute to this phenomenon such as the diversity in ear defects, the different extent of auditory nerve hypoplasia, the age of intervention, and cognitive abilities. Recent analyses indicate that, in addition, central auditory functions of deafness genes have to be considered in this context. Since reduced neuronal activity acts as the common denominator in deafness, it is widely assumed that peripheral deafness influences development and function of the central auditory system in a stereotypical manner. However, functional characterization of transgenic mice with mutated deafness genes demonstrated gene-specific abnormalities in the central auditory system as well. A frequent function of deafness genes in the central auditory system is supported by a genome-wide expression study that revealed significant enrichment of these genes in the transcriptome of the auditory brainstem compared to the entire brain. Here, we will summarize current knowledge of the diverse central auditory functions of deafness genes. We furthermore propose the intimately interwoven gene regulatory networks governing development of the otic placode and the hindbrain as a mechanistic explanation for the widespread expression of these genes beyond the cochlea. We conclude that better knowledge of central auditory dysfunction caused by genetic alterations in deafness genes is required. In combination with improved genetic diagnostics becoming currently available through novel sequencing technologies, this information will likely contribute to better outcome prediction of hearing devices.

  4. Multichannel demultiplexer-demodulator

    NASA Technical Reports Server (NTRS)

    Courtois, Hector; Sherry, Mike; Cangiane, Peter; Caso, Greg

    1993-01-01

    One of the critical satellite technologies in meshed VSAT (very small aperture terminal) satellite communication networks utilizing FDMA (frequency division multiple access) uplinks is a multichannel demultiplexer/demodulator (MCDD). TRW Electronic Systems Group developed a proof-of-concept (POC) MCDD using advanced digital technologies. This POC model demonstrates the capability of demultiplexing and demodulating multiple low to medium data rate FDMA uplinks with potential for expansion to demultiplexing and demodulating hundreds to thousands of narrowband uplinks. The TRW approach uses baseband sampling followed by successive wideband and narrowband channelizers with each channelizer feeding into a multirate, time-shared demodulator. A full-scale MCDD would consist of an 8-bit A/D sampling at 92.16 MHz, four wideband channelizers capable of demultiplexing eight wideband channels, thirty-two narrowband channelizers capable of demultiplexing one wideband signal into 32 narrowband channels, and thirty-two multirate demodulators. The POC model consists of an 8-bit A/D sampling at 23.04 MHz, one wideband channelizer, 16 narrowband channelizers, and three multirate demodulators. The implementation loss of the wideband and narrowband channels is 0.3 dB and 0.75 dB, respectively, at 10^-7 Eb/N0.

  5. Multichannel demultiplexer-demodulator

    NASA Astrophysics Data System (ADS)

    Courtois, Hector; Sherry, Mike; Cangiane, Peter; Caso, Greg

    1993-11-01

    One of the critical satellite technologies in meshed VSAT (very small aperture terminal) satellite communication networks utilizing FDMA (frequency division multiple access) uplinks is a multichannel demultiplexer/demodulator (MCDD). TRW Electronic Systems Group developed a proof-of-concept (POC) MCDD using advanced digital technologies. This POC model demonstrates the capability of demultiplexing and demodulating multiple low to medium data rate FDMA uplinks with potential for expansion to demultiplexing and demodulating hundreds to thousands of narrowband uplinks. The TRW approach uses baseband sampling followed by successive wideband and narrowband channelizers with each channelizer feeding into a multirate, time-shared demodulator. A full-scale MCDD would consist of an 8-bit A/D sampling at 92.16 MHz, four wideband channelizers capable of demultiplexing eight wideband channels, thirty-two narrowband channelizers capable of demultiplexing one wideband signal into 32 narrowband channels, and thirty-two multirate demodulators. The POC model consists of an 8-bit A/D sampling at 23.04 MHz, one wideband channelizer, 16 narrowband channelizers, and three multirate demodulators. The implementation loss of the wideband and narrowband channels is 0.3 dB and 0.75 dB, respectively, at 10^-7 Eb/N0.

  6. Recording and marking with silicon multichannel electrodes.

    PubMed

    Townsend, George; Peloquin, Pascal; Kloosterman, Fabian; Hetke, Jamille F; Leung, L Stan

    2002-04-01

    This protocol describes an implementation of recording and analysis of evoked potentials in the hippocampal cortex, combined with lesioning using multichannel silicon probes. Multichannel recording offers the advantage of capturing a potential field at one instant in time. The potentials are then subjected to current source density (CSD) analysis, to reveal the layer-by-layer current sources and sinks. Signals from each channel of a silicon probe (maximum 16 channels in this study) were amplified and digitized at up to 40 kHz after sample-and-hold circuits. A modular lesion circuit board could be inserted between the input preamplifiers and the silicon probe, such that any one of the 16 electrodes could be connected to a DC lesion current. By making a lesion at the electrode showing a physiological event of interest, the anatomical location of the event can be precisely identified, as shown for the distal dendritic current sink in CA1 following medial perforant path stimulation. Making two discrete lesions through the silicon probe is useful to indicate the degree of tissue shrinkage during histological procedures. In addition, potential/CSD profiles were stable following small movements of the silicon probe, suggesting that the probe did not cause excessive damage to the brain.
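
    The current source density analysis referenced above is conventionally estimated as the negative second spatial derivative of the potential along the laminar probe (tissue conductivity omitted here for simplicity). A minimal sketch, with the contact spacing as an assumed parameter:

```python
import numpy as np

def csd_1d(phi, dz=50e-6):
    """One-dimensional CSD estimate from a laminar multichannel probe.

    phi : (n_contacts, n_times) field potentials along the probe
    dz  : contact spacing in meters (50 um is an assumed example value)
    Returns (n_contacts - 2, n_times): the negative second spatial
    difference, so current sinks and sources are separated by sign
    (the two edge contacts are lost to the finite-difference stencil).
    """
    return -(phi[:-2] - 2.0 * phi[1:-1] + phi[2:]) / dz ** 2
```

    A linear potential gradient along the probe (e.g. a far-field or volume-conducted component) yields zero CSD, which is why CSD profiles localize the layer-by-layer sinks and sources that raw potential maps smear out.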

  7. Multichannel SQUID systems for brain research

    SciTech Connect

    Ahonen, A.I.; Hamalainen, M.S.; Kajola, M.J.; Knuutila, J.E.F.; Lounasmaa, O.V.; Simola, J.T.; Vilkman, V.A. (Low Temperature Lab.); Tesche, C.D. (Thomas J. Watson Research Center)

    1991-03-01

    This paper reviews basic principles of magnetoencephalography (MEG) and neuromagnetic instrumentation. The authors' 24-channel system, based on planar gradiometer coils and dc-SQUIDs, is then described. Finally, recent MEG experiments on human somatotopy and focal epilepsy, carried out in the authors' laboratory, are presented.

  8. Modular multichannel surface plasmon spectrometer

    NASA Astrophysics Data System (ADS)

    Neuert, G.; Kufer, S.; Benoit, M.; Gaub, H. E.

    2005-05-01

    We have developed a modular multichannel surface plasmon resonance (SPR) spectrometer on the basis of a commercially available hybrid sensor chip. Due to its modularity this inexpensive and easy to use setup can readily be adapted to different experimental environments. High temperature stability is achieved through efficient thermal coupling of individual SPR units. With standard systems the performance of the multichannel instrument was evaluated. The absorption kinetics of a cysteamine monolayer, as well as the concentration dependence of the specific receptor-ligand interaction between biotin and streptavidin was measured.

  9. On-Line Statistical Segmentation of a Non-Speech Auditory Stream in Neonates as Demonstrated by Event-Related Brain Potentials

    ERIC Educational Resources Information Center

    Kudo, Noriko; Nonaka, Yulri; Mizuno, Noriko; Mizuno, Katsumi; Okanoya, Kazuo

    2011-01-01

    The ability to statistically segment a continuous auditory stream is one of the most important preparations for initiating language learning. Such ability is available to human infants at 8 months of age, as shown by a behavioral measurement. However, behavioral study alone cannot determine how early this ability is available. A recent study using…

  10. Electrophysiological measurement of human auditory function

    NASA Technical Reports Server (NTRS)

    Galambos, R.

    1975-01-01

    Contingent negative variation and the presence and amplitudes of brain potentials evoked by sound are considered. Evidence is presented that evoked brain responses to auditory stimuli are clearly related to brain events associated with cognitive processing of acoustic signals, since their properties depend upon where the listener directs his attention, whether the signal is an expected event or a surprise, and when a sound that is listened for is at last heard.

  11. Multichannel error correction code decoder

    NASA Technical Reports Server (NTRS)

    Wagner, Paul K.; Ivancic, William D.

    1993-01-01

    A brief overview of a processing satellite for a meshed very small aperture terminal (VSAT) communications network is provided. The multichannel error correction code (ECC) decoder system, the uplink signal generation and link simulation equipment, and the time-shared decoder are described. The testing is discussed. Applications of the time-shared decoder are recommended.

  12. Novel Methods for Measuring Depth of Anesthesia by Quantifying Dominant Information Flow in Multichannel EEGs

    PubMed Central

    Choi, Byung-Moon; Noh, Gyu-Jeong

    2017-01-01

    In this paper, we propose novel methods for measuring depth of anesthesia (DOA) by quantifying dominant information flow in multichannel EEGs. Conventional methods mainly use a few EEG channels independently, and most multichannel EEG-based studies are limited to specific regions of the brain. Therefore, the function of the cerebral cortex over wide brain regions is hardly reflected in DOA measurement. Here, DOA is measured by quantifying the dominant information flow obtained from a principal bipartition. Three bipartitioning methods are used to detect the dominant information flow across the entire set of EEG channels, and the dominant information flow is quantified by calculating information entropy. High correlation between the proposed measures and the plasma concentration of propofol is confirmed from the experimental results of clinical data in 39 subjects. To illustrate the performance of the proposed methods more easily, we present the results for multichannel EEG on a two-dimensional (2D) brain map.

  13. Visual influences on auditory spatial learning

    PubMed Central

    King, Andrew J.

    2008-01-01

    The visual and auditory systems frequently work together to facilitate the identification and localization of objects and events in the external world. Experience plays a critical role in establishing and maintaining congruent visual–auditory associations, so that the different sensory cues associated with targets that can be both seen and heard are synthesized appropriately. For stimulus location, visual information is normally more accurate and reliable and provides a reference for calibrating the perception of auditory space. During development, vision plays a key role in aligning neural representations of space in the brain, as revealed by the dramatic changes produced in auditory responses when visual inputs are altered, and is used throughout life to resolve short-term spatial conflicts between these modalities. However, accurate, and even supra-normal, auditory localization abilities can be achieved in the absence of vision, and the capacity of the mature brain to relearn to localize sound in the presence of substantially altered auditory spatial cues does not require visuomotor feedback. Thus, while vision is normally used to coordinate information across the senses, the neural circuits responsible for spatial hearing can be recalibrated in a vision-independent fashion. Nevertheless, early multisensory experience appears to be crucial for the emergence of an ability to match signals from different sensory modalities and therefore for the outcome of audiovisual-based rehabilitation of deaf patients in whom hearing has been restored by cochlear implantation. PMID:18986967

  14. McGurk illusion recalibrates subsequent auditory perception.

    PubMed

    Lüttke, Claudia S; Ekman, Matthias; van Gerven, Marcel A J; de Lange, Floris P

    2016-09-09

Visual information can alter auditory perception. This is clearly illustrated by the well-known McGurk illusion, where an auditory /aba/ and a visual /aga/ are merged into the percept of 'ada'. It is less clear, however, whether such a change in perception may recalibrate subsequent perception. Here we asked whether the altered auditory perception due to the McGurk illusion affects subsequent auditory perception, i.e. whether this process of fusion may cause a recalibration of the auditory boundaries between phonemes. Participants categorized auditory and audiovisual speech stimuli as /aba/, /ada/ or /aga/ while activity patterns in their auditory cortices were recorded using fMRI. Interestingly, following a McGurk illusion, an auditory /aba/ was more often misperceived as 'ada'. Furthermore, we observed a neural counterpart of this recalibration in the early auditory cortex. When the auditory input /aba/ was perceived as 'ada', activity patterns bore stronger resemblance to activity patterns elicited by /ada/ sounds than when they were correctly perceived as /aba/. Our results suggest that upon experiencing the McGurk illusion, the brain shifts the neural representation of an /aba/ sound towards /ada/, culminating in a recalibration in perception of subsequent auditory input.

  16. Human auditory neuroimaging of intensity and loudness.

    PubMed

    Uppenkamp, Stefan; Röhl, Markus

    2014-01-01

The physical intensity of a sound, usually expressed in dB on a logarithmic ratio scale, can easily be measured using technical equipment. Loudness is the perceptual correlate of sound intensity, and is usually determined by means of some sort of psychophysical scaling procedure. The interrelation of sound intensity and perceived loudness is still a matter of debate, and the physiological correlate of loudness perception in the human auditory pathway is not completely understood. Various studies indicate that the activation in human auditory cortex is more a representation of loudness sensation than of physical sound pressure level. This raises the questions of (1) at what stage or stages in the ascending auditory pathway the transformation of the physical stimulus into its perceptual correlate is completed, and (2) to what extent other factors affecting individual loudness judgements might modulate the brain activation registered by auditory neuroimaging. An overview is given of recent studies on the effects of sound intensity, duration, bandwidth and individual hearing status on the activation in the human auditory system, as measured by various approaches in auditory neuroimaging. This article is part of a Special Issue entitled Human Auditory Neuroimaging.
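    The nonlinearity between level and loudness that this record discusses can be illustrated with the classical sone scale, a textbook rule of thumb (not the authors' model): perceived loudness roughly doubles for every 10-phon increase in level, anchored at 1 sone = 40 phons for a 1 kHz tone.

```python
def sones_from_phons(phons):
    """Classical sone scale: loudness doubles per 10-phon increase,
    with 1 sone defined as the loudness of a 1 kHz tone at 40 phons."""
    return 2 ** ((phons - 40) / 10)

# A 20-phon level increase sounds roughly four times as loud,
# even though the physical intensity ratio is a factor of 100.
quadrupled = sones_from_phons(60) / sones_from_phons(40)
```

    This is why cortical activation tracking loudness rather than sound pressure level is a substantive finding: the two quantities diverge systematically.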

  17. Auditory spatial processing in Alzheimer’s disease

    PubMed Central

    Golden, Hannah L.; Nicholas, Jennifer M.; Yong, Keir X. X.; Downey, Laura E.; Schott, Jonathan M.; Mummery, Catherine J.; Crutch, Sebastian J.

    2015-01-01

    The location and motion of sounds in space are important cues for encoding the auditory world. Spatial processing is a core component of auditory scene analysis, a cognitively demanding function that is vulnerable in Alzheimer’s disease. Here we designed a novel neuropsychological battery based on a virtual space paradigm to assess auditory spatial processing in patient cohorts with clinically typical Alzheimer’s disease (n = 20) and its major variant syndrome, posterior cortical atrophy (n = 12) in relation to healthy older controls (n = 26). We assessed three dimensions of auditory spatial function: externalized versus non-externalized sound discrimination, moving versus stationary sound discrimination and stationary auditory spatial position discrimination, together with non-spatial auditory and visual spatial control tasks. Neuroanatomical correlates of auditory spatial processing were assessed using voxel-based morphometry. Relative to healthy older controls, both patient groups exhibited impairments in detection of auditory motion, and stationary sound position discrimination. The posterior cortical atrophy group showed greater impairment for auditory motion processing and the processing of a non-spatial control complex auditory property (timbre) than the typical Alzheimer’s disease group. Voxel-based morphometry in the patient cohort revealed grey matter correlates of auditory motion detection and spatial position discrimination in right inferior parietal cortex and precuneus, respectively. These findings delineate auditory spatial processing deficits in typical and posterior Alzheimer’s disease phenotypes that are related to posterior cortical regions involved in both syndromic variants and modulated by the syndromic profile of brain degeneration. Auditory spatial deficits contribute to impaired spatial awareness in Alzheimer’s disease and may constitute a novel perceptual model for probing brain network disintegration across the Alzheimer

  18. Adaptive enhancement of magnetoencephalographic signals via multichannel filtering

    SciTech Connect

    Lewis, P.S.

    1989-01-01

A time-varying spatial/temporal filter for enhancing multichannel magnetoencephalographic (MEG) recordings of evoked responses is described. This filter is based on projections derived from a combination of measured data and a priori models of the expected response. It produces estimates of the evoked fields in single-trial measurements. These estimates can reduce the need for signal averaging in some situations. The filter uses the a priori model information to enhance responses where they exist, but avoids creating responses that do not exist. Examples are included of the filter's application to both MEG single-trial data containing an auditory evoked field and control data with no evoked field. 5 refs., 7 figs.
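    The projection idea can be sketched in its simplest form: project a noisy single trial onto the subspace spanned by an a priori response template. This rank-one sketch is an illustrative simplification of the filter described, with hypothetical names and data:

```python
import numpy as np

def project_onto_model(trial, model):
    """Project a single-trial recording onto the subspace spanned by an
    a priori model waveform (the least-squares fit of the model to the data).
    Components orthogonal to the model -- i.e. noise -- are suppressed."""
    unit = model / np.linalg.norm(model)
    return (trial @ unit) * unit

t = np.linspace(0.0, 1.0, 200)
model = np.sin(2 * np.pi * 10 * t)                 # assumed evoked-response template
trial = 0.5 * model + 0.2 * np.random.randn(200)   # single trial: response + noise
estimate = project_onto_model(trial, model)
```

    Because the projection only reproduces activity that matches the model's shape, it enhances a response where one exists but cannot invent a response in pure noise, mirroring the property claimed for the filter.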

  19. A unique cellular scaling rule in the avian auditory system.

    PubMed

    Corfield, Jeremy R; Long, Brendan; Krilow, Justin M; Wylie, Douglas R; Iwaniuk, Andrew N

    2016-06-01

Although it is clear that neural structures scale with body size, the mechanisms of this relationship are not well understood. Several recent studies have shown that the relationship between neuron numbers and brain (or brain region) size differs not only across mammalian orders, but also between auditory and visual regions within the same brains. Among birds, similar cellular scaling rules have not been examined in any detail. Here, we examine the scaling of auditory structures in birds and show that the scaling rules established for the mammalian auditory pathway do not necessarily apply to birds. In galliforms, neuronal densities decrease with increasing brain size, suggesting that auditory brainstem structures increase in size faster than neurons are added; smaller brains have relatively more neurons than larger brains. The cellular scaling rules that apply to auditory brainstem structures in galliforms are, therefore, different from those found in the primate auditory pathway. It is likely that the factors driving this difference are associated with the anatomical specializations required for sound perception in birds, although there is a decoupling of neuron numbers in brain structures and hair cell numbers in the basilar papilla. This study provides significant insight into the allometric scaling of neural structures in birds and improves our understanding of the rules that govern neural scaling across vertebrates.

  20. Multichannel Error Correction Code Decoder

    NASA Technical Reports Server (NTRS)

    1996-01-01

NASA Lewis Research Center's Digital Systems Technology Branch has an ongoing program in modulation, coding, onboard processing, and switching. Recently, NASA completed a project to incorporate a time-shared decoder into the very-small-aperture terminal (VSAT) onboard-processing mesh architecture. The primary goal was to demonstrate a time-shared decoder for a regenerative satellite that uses asynchronous, frequency-division multiple access (FDMA) uplink channels, thereby identifying hardware and power requirements and fault-tolerant issues that would have to be addressed in an operational system. A secondary goal was to integrate and test, in a system environment, two NASA-sponsored, proof-of-concept hardware deliverables: the Harris Corp. high-speed Bose Chaudhuri-Hocquenghem (BCH) codec and the TRW multichannel demultiplexer/demodulator (MCDD). A beneficial byproduct of this project was the development of flexible, multichannel-uplink signal-generation equipment.

  1. Auditory Imagery: Empirical Findings

    ERIC Educational Resources Information Center

    Hubbard, Timothy L.

    2010-01-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d)…

  2. Auditory Training for Central Auditory Processing Disorder

    PubMed Central

    Weihing, Jeffrey; Chermak, Gail D.; Musiek, Frank E.

    2015-01-01

    Auditory training (AT) is an important component of rehabilitation for patients with central auditory processing disorder (CAPD). The present article identifies and describes aspects of AT as they relate to applications in this population. A description of the types of auditory processes along with information on relevant AT protocols that can be used to address these specific deficits is included. Characteristics and principles of effective AT procedures also are detailed in light of research that reflects on their value. Finally, research investigating AT in populations who show CAPD or present with auditory complaints is reported. Although efficacy data in this area are still emerging, current findings support the use of AT for treatment of auditory difficulties. PMID:27587909

  3. The human auditory evoked response

    NASA Technical Reports Server (NTRS)

    Galambos, R.

    1974-01-01

Figures are presented of computer-averaged auditory evoked responses (AERs) that point to the existence of a completely endogenous brain event. A series of regular clicks or tones was administered to the ear, and 'odd-balls' of different intensity or frequency, respectively, were included. Subjects were asked either to ignore the sounds (to read or do something else) or to attend to the stimuli. When they listened and counted the odd-balls, a P3 wave occurred at 300 msec after the stimulus. When the odd-balls consisted of omitted clicks or tone bursts, a similar response was observed. This could not have come from the auditory nerve, but only from the cortex. It is evidence of recognition, a conscious process.

  4. Summary statistics in auditory perception.

    PubMed

    McDermott, Josh H; Schemitsch, Michael; Simoncelli, Eero P

    2013-04-01

Sensory signals are transduced at high resolution, but their structure must be stored in a more compact format. Here we provide evidence that the auditory system summarizes the temporal details of sounds using time-averaged statistics. We measured discrimination of 'sound textures' characterized by particular statistical properties, of the kind that normally results from the superposition of many acoustic features in auditory scenes. When listeners discriminated examples of different textures, performance improved with excerpt duration. In contrast, when listeners discriminated different examples of the same texture, performance declined with duration, a paradoxical result given that the information available for discrimination grows with duration. These results indicate that once these sounds are of moderate length, the brain's representation is limited to time-averaged statistics, which, for different examples of the same texture, converge to the same values with increasing duration. Such statistical representations produce good categorical discrimination, but limit the ability to discern temporal detail.
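    The convergence argument can be sketched numerically. Below is a generic illustration (not the authors' texture model): the time-averaged statistics of two excerpts drawn from the same random process converge to the process's true values as the excerpt grows, which is exactly why long excerpts of the same texture become indistinguishable.

```python
import random

def texture_stats(samples):
    """Time-averaged summary statistics of an excerpt: mean and variance."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return mean, var

random.seed(0)
# Two excerpts of the same hypothetical "texture": unit Gaussian noise.
short_excerpt = [random.gauss(0, 1) for _ in range(100)]
long_excerpt = [random.gauss(0, 1) for _ in range(100000)]
# The long excerpt's statistics lie close to the texture's true values (0, 1);
# a short excerpt's statistics fluctuate much more from example to example.
long_mean, long_var = texture_stats(long_excerpt)
```
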

  5. Web-based multi-channel analyzer

    DOEpatents

    Gritzo, Russ E.

    2003-12-23

The present invention provides an improved multi-channel analyzer designed to conveniently gather, process, and distribute spectrographic pulse data. The multi-channel analyzer may operate on a computer system having memory, a processor, and the capability to connect to a network and to receive digitized spectrographic pulses. The multi-channel analyzer may have a software module integrated with a general-purpose operating system that may receive digitized spectrographic pulses at rates of at least 10,000 pulses per second. The multi-channel analyzer may further have a user-level software module that may receive user-specified controls dictating the operation of the multi-channel analyzer, making the multi-channel analyzer customizable by the end-user. The user-level software may further categorize and conveniently distribute spectrographic pulse data employing non-proprietary, standard communication protocols and formats.
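    The core categorization step of any multi-channel analyzer is pulse-height binning: each digitized pulse increments the count of the channel covering its amplitude. A minimal sketch (the channel count, voltage range, and function name are illustrative, not from the patent):

```python
def bin_pulses(pulse_heights, n_channels=1024, full_scale=10.0):
    """Histogram digitized pulse heights into fixed-width channels spanning
    0..full_scale volts; over-range pulses land in the top channel."""
    spectrum = [0] * n_channels
    width = full_scale / n_channels
    for h in pulse_heights:
        ch = min(int(h / width), n_channels - 1)
        spectrum[ch] += 1
    return spectrum

# Four hypothetical pulses binned into a 10-channel, 0-10 V spectrum.
spec = bin_pulses([0.01, 5.0, 5.01, 9.99], n_channels=10, full_scale=10.0)
```

    The accumulated spectrum is then what the user-level module would distribute over the network in a standard format.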

  6. Impairments of auditory scene analysis in Alzheimer's disease.

    PubMed

    Goll, Johanna C; Kim, Lois G; Ridgway, Gerard R; Hailstone, Julia C; Lehmann, Manja; Buckley, Aisling H; Crutch, Sebastian J; Warren, Jason D

    2012-01-01

    Parsing of sound sources in the auditory environment or 'auditory scene analysis' is a computationally demanding cognitive operation that is likely to be vulnerable to the neurodegenerative process in Alzheimer's disease. However, little information is available concerning auditory scene analysis in Alzheimer's disease. Here we undertook a detailed neuropsychological and neuroanatomical characterization of auditory scene analysis in a cohort of 21 patients with clinically typical Alzheimer's disease versus age-matched healthy control subjects. We designed a novel auditory dual stream paradigm based on synthetic sound sequences to assess two key generic operations in auditory scene analysis (object segregation and grouping) in relation to simpler auditory perceptual, task and general neuropsychological factors. In order to assess neuroanatomical associations of performance on auditory scene analysis tasks, structural brain magnetic resonance imaging data from the patient cohort were analysed using voxel-based morphometry. Compared with healthy controls, patients with Alzheimer's disease had impairments of auditory scene analysis, and segregation and grouping operations were comparably affected. Auditory scene analysis impairments in Alzheimer's disease were not wholly attributable to simple auditory perceptual or task factors; however, the between-group difference relative to healthy controls was attenuated after accounting for non-verbal (visuospatial) working memory capacity. These findings demonstrate that clinically typical Alzheimer's disease is associated with a generic deficit of auditory scene analysis. Neuroanatomical associations of auditory scene analysis performance were identified in posterior cortical areas including the posterior superior temporal lobes and posterior cingulate. 
This work suggests a basis for understanding a class of clinical symptoms in Alzheimer's disease and for delineating cognitive mechanisms that mediate auditory scene analysis.

  7. Electrophysiological measurement of human auditory function

    NASA Technical Reports Server (NTRS)

    Galambos, R.

    1975-01-01

Knowledge of the human auditory evoked response is reviewed, including methods of determining this response, the way particular changes in the stimulus are coupled to specific changes in the response, and how the state of mind of the listener influences the response. Important practical applications of this basic knowledge are discussed. Measurement of the brainstem evoked response, for instance, can state unequivocally how well the peripheral auditory apparatus functions. It might then be developed into a useful hearing test, especially for infants and preverbal or nonverbal children. Clinical applications of measuring the brain waves evoked 100 msec and later after the auditory stimulus are undetermined. These waves are clearly related to brain events associated with cognitive processing of acoustic signals, since their properties depend upon where the listener directs his attention and how long he expects the signal.

  8. Studying brain function with near-infrared spectroscopy concurrently with electroencephalography

    NASA Astrophysics Data System (ADS)

    Tong, Y.; Rooney, E. J.; Bergethon, P. R.; Martin, J. M.; Sassaroli, A.; Ehrenberg, B. L.; Van Toi, Vo; Aggarwal, P.; Ambady, N.; Fantini, S.

    2005-04-01

    Near-infrared spectroscopy (NIRS) has been used for functional brain imaging by employing properly designed source-detector matrices. We demonstrate that by embedding a NIRS source-detector matrix within an electroencephalography (EEG) standard multi-channel cap, we can perform functional brain mapping of hemodynamic response and neuronal response simultaneously. In this study, the P300 endogenous evoked response was generated in human subjects using an auditory odd-ball paradigm while concurrently monitoring the hemodynamic response both spatially and temporally with NIRS. The electrical measurements showed the localization of evoked potential P300, which appeared around 320 ms after the odd-ball stimulus. The NIRS measurements demonstrate a hemodynamic change in the fronto-temporal cortex a few seconds after the appearance of P300.

  9. Multichannel analysis of surface waves

    USGS Publications Warehouse

    Park, C.B.; Miller, R.D.; Xia, J.

    1999-01-01

    The frequency-dependent properties of Rayleigh-type surface waves can be utilized for imaging and characterizing the shallow subsurface. Most surface-wave analysis relies on the accurate calculation of phase velocities for the horizontally traveling fundamental-mode Rayleigh wave acquired by stepping out a pair of receivers at intervals based on calculated ground roll wavelengths. Interference by coherent source-generated noise inhibits the reliability of shear-wave velocities determined through inversion of the whole wave field. Among these nonplanar, nonfundamental-mode Rayleigh waves (noise) are body waves, scattered and nonsource-generated surface waves, and higher-mode surface waves. The degree to which each of these types of noise contaminates the dispersion curve and, ultimately, the inverted shear-wave velocity profile is dependent on frequency as well as distance from the source. Multichannel recording permits effective identification and isolation of noise according to distinctive trace-to-trace coherency in arrival time and amplitude. An added advantage is the speed and redundancy of the measurement process. Decomposition of a multichannel record into a time variable-frequency format, similar to an uncorrelated Vibroseis record, permits analysis and display of each frequency component in a unique and continuous format. Coherent noise contamination can then be examined and its effects appraised in both frequency and offset space. Separation of frequency components permits real-time maximization of the S/N ratio during acquisition and subsequent processing steps. Linear separation of each ground roll frequency component allows calculation of phase velocities by simply measuring the linear slope of each frequency component. Breaks in coherent surface-wave arrivals, observable on the decomposed record, can be compensated for during acquisition and processing. 
Multichannel recording permits single-measurement surveying of a broad depth range, high levels of
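    The slope-based phase-velocity calculation described above can be sketched directly: for one frequency component, the phase velocity follows from the linear slope of unwrapped phase versus source-receiver offset, v = 2*pi*f / (dphi/dx). The function name and synthetic numbers below are illustrative:

```python
import math

def phase_velocity(freq_hz, offsets_m, phases_rad):
    """Estimate the phase velocity of one frequency component from the
    least-squares linear slope of unwrapped phase versus offset."""
    n = len(offsets_m)
    mx = sum(offsets_m) / n
    mp = sum(phases_rad) / n
    slope = sum((x - mx) * (p - mp) for x, p in zip(offsets_m, phases_rad)) / \
            sum((x - mx) ** 2 for x in offsets_m)
    return 2 * math.pi * freq_hz / slope

# Synthetic example: a 10 Hz component travelling at 200 m/s produces
# phase delays phi = 2*pi*f*x/v at receivers stepped out in offset x.
f, v_true = 10.0, 200.0
xs = [10.0, 20.0, 30.0, 40.0]
phis = [2 * math.pi * f * x / v_true for x in xs]
v_est = phase_velocity(f, xs, phis)
```

    Repeating this per frequency yields the dispersion curve that is subsequently inverted for the shear-wave velocity profile; the multichannel redundancy is what makes the slope robust against the coherent noise discussed above.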

  10. The Brain As a Mixer, I. Preliminary Literature Review: Auditory Integration. Studies in Language and Language Behavior, Progress Report Number VII.

    ERIC Educational Resources Information Center

    Semmel, Melvyn I.; And Others

Methods to evaluate central hearing deficiencies and to localize brain damage are reviewed, beginning with Bocca, who showed that patients with temporal lobe tumors made significantly lower discrimination scores in the ear opposite the tumor when speech signals were distorted. Tests were devised to attempt to pinpoint brain damage on the basis of…

  11. 40 Hz auditory steady state response to linguistic features of stimuli during auditory hallucinations.

    PubMed

    Ying, Jun; Yan, Zheng; Gao, Xiao-rong

    2013-10-01

The auditory steady state response (ASSR) may reflect activity from different regions of the brain, depending on the modulation frequency used. In general, responses induced by low rates (≤40 Hz) emanate mostly from central structures of the brain, and responses at high rates (≥80 Hz) emanate mostly from the peripheral auditory nerve or brainstem structures. In addition, the gamma-band ASSR (30-90 Hz) has been reported to play an important role in working memory, speech understanding and recognition. This paper investigated the 40 Hz ASSR evoked by modulated speech and reversed speech. The speech stimulus was a Chinese phrase, and the noise-like reversed speech was obtained by temporally reversing it. Both auditory stimuli were modulated at a frequency of 40 Hz. Ten healthy subjects and 5 patients with hallucination symptoms participated in the experiment. Results showed a reduction in left auditory cortex response when healthy subjects listened to the reversed speech compared with the speech. In contrast, when the patients who experienced auditory hallucinations listened to the reversed speech, the left-hemisphere auditory cortex responded more actively. The ASSR results were consistent with the behavioral results of the patients. Therefore, the gamma-band ASSR is expected to be helpful for rapid and objective diagnosis of hallucination in the clinic.
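    The 40 Hz modulation of the stimuli can be sketched generically: impose a sinusoidal amplitude envelope at the modulation frequency on a sampled carrier. This is a standard amplitude-modulation construction, not the authors' exact stimulus pipeline; the sampling rate and scaling are illustrative:

```python
import math

def amplitude_modulate(carrier, fs, fm=40.0, depth=1.0):
    """Impose a sinusoidal amplitude envelope at fm Hz on a sampled carrier
    (sampling rate fs), normalized so the output peak envelope is 1."""
    return [s * (1 + depth * math.sin(2 * math.pi * fm * n / fs)) / (1 + depth)
            for n, s in enumerate(carrier)]

# With a constant carrier the output traces the 40 Hz envelope itself.
env = amplitude_modulate([1.0] * 200, fs=8000)
```

    Driving the auditory system with this periodic envelope is what entrains the steady-state response at the modulation frequency.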

  12. Material identification with multichannel radiographs

    NASA Astrophysics Data System (ADS)

    Collins, Noelle; Jimenez, Edward S.; Thompson, Kyle R.

    2017-02-01

    This work aims to validate previous exploratory work done to characterize materials by matching their attenuation profiles using a multichannel radiograph given an initial energy spectrum. The experiment was performed in order to evaluate the effects of noise on the resulting attenuation profiles, which was ignored in simulation. Spectrum measurements have also been collected from various materials of interest. Additionally, a MATLAB optimization algorithm has been applied to these candidate spectrum measurements in order to extract an estimate of the attenuation profile. Being able to characterize materials through this nondestructive method has an extensive range of applications for a wide variety of fields, including quality assessment, industry, and national security.

  13. A Student-Made Inexpensive Multichannel Pipet

    ERIC Educational Resources Information Center

    Dragojlovic, Veljko

    2009-01-01

    An inexpensive multichannel pipet designed to deliver small volumes of liquid simultaneously to wells in a multiwell plate can be prepared by students in a single laboratory period. The multichannel pipet is made of disposable plastic 1 mL syringes and drilled plastic plates, which are used to make plunger and barrel assemblies. Application of the…

  14. Multichannel Compression, Temporal Cues, and Audibility.

    ERIC Educational Resources Information Center

    Souza, Pamela E.; Turner, Christopher W.

    1998-01-01

    The effect of the reduction of the temporal envelope produced by multichannel compression on recognition was examined in 16 listeners with hearing loss, with particular focus on audibility of the speech signal. Multichannel compression improved speech recognition when superior audibility was provided by a two-channel compression system over linear…

  15. Multichannel Analyzer Built from a Microcomputer.

    ERIC Educational Resources Information Center

    Spencer, C. D.; Mueller, P.

    1979-01-01

Describes a multichannel analyzer built using eight-bit S-100 bus microcomputer hardware. The output modes are an oscilloscope display, printed data, and data sent to another computer. Discusses the system's hardware, software, costs, and advantages relative to commercial multichannel analyzers. (Author/GA)

  16. Least squares restoration of multichannel images

    NASA Technical Reports Server (NTRS)

    Galatsanos, Nikolas P.; Katsaggelos, Aggelos K.; Chin, Roland T.; Hillery, Allen D.

    1991-01-01

    Multichannel restoration using both within- and between-channel deterministic information is considered. A multichannel image is a set of image planes that exhibit cross-plane similarity. Existing optimal restoration filters for single-plane images yield suboptimal results when applied to multichannel images, since between-channel information is not utilized. Multichannel least squares restoration filters are developed using the set theoretic and the constrained optimization approaches. A geometric interpretation of the estimates of both filters is given. Color images (three-channel imagery with red, green, and blue components) are considered. Constraints that capture the within- and between-channel properties of color images are developed. Issues associated with the computation of the two estimates are addressed. A spatially adaptive, multichannel least squares filter that utilizes local within- and between-channel image properties is proposed. Experiments using color images are described.
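    The least-squares restoration framework can be sketched in its simplest regularized form: minimize the residual norm plus a smoothness penalty, which has the closed-form solution x = (H^T H + lam*I)^-1 H^T y. This toy single-channel example (a small tridiagonal blur, illustrative names and numbers) shows the mechanics; the paper's filters add the within- and between-channel constraints on top of this:

```python
import numpy as np

def ls_restore(H, y, lam=0.1):
    """Regularized least-squares restoration: argmin ||y - Hx||^2 + lam*||x||^2,
    solved in closed form via the normal equations."""
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)

# Toy degradation: a 5-sample signal blurred by a tridiagonal smoothing matrix.
x = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
H = np.zeros((5, 5))
for i in range(5):
    H[i, i] = 0.5
    if i > 0:
        H[i, i - 1] = 0.25
    if i < 4:
        H[i, i + 1] = 0.25
y = H @ x                       # observed (blurred) image row
x_hat = ls_restore(H, y, lam=1e-8)
```

    For a three-channel color image, H becomes a block matrix whose off-diagonal blocks encode the between-channel (cross-plane) similarity that single-plane filters ignore.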

  17. Development of multichannel MEG system at IGCAR

    NASA Astrophysics Data System (ADS)

    Mariyappa, N.; Parasakthi, C.; Gireesan, K.; Sengottuvel, S.; Patel, Rajesh; Janawadkar, M. P.; Radhakrishnan, T. S.; Sundar, C. S.

    2013-02-01

We describe some of the challenging aspects of the indigenous development of the whole-head multichannel magnetoencephalography (MEG) system at IGCAR, Kalpakkam. These are: i) fabrication and testing of a helmet-shaped sensor array holder made of a polymeric material experimentally tested to be compatible with liquid helium temperatures; ii) the design and fabrication of the PCB adapter modules, keeping in mind inter-track crosstalk between the electrical leads used to provide connections from the SQUIDs at liquid helium temperature (4.2 K) to the electronics at room temperature (300 K); and iii) the use of high-resistance manganin wires for the 86 channels (86×8 leads), essential to reduce the total heat leak, which, however, inevitably attenuates the SQUID output signal due to the voltage drop in the leads. We have presently populated 22 of the 86 channels, which include 6 reference channels to reject common-mode noise. The whole-head MEG system covering all the lobes of the brain will be progressively assembled when the other three PCB adapter modules, presently under fabrication, become available. The MEG system will be used for a variety of basic and clinical studies, including localization of epileptic foci during pre-surgical mapping in collaboration with neurologists.

  18. Simultanagnosia does not affect processes of auditory Gestalt perception.

    PubMed

    Rennig, Johannes; Bleyer, Anna Lena; Karnath, Hans-Otto

    2017-03-23

Simultanagnosia is a neuropsychological deficit of higher visual processes caused by temporo-parietal brain damage. It is characterized by a specific failure to recognize a global visual Gestalt, like a visual scene or a complex object consisting of local elements. In this study we investigated to what extent this deficit should be understood as a deficit specific to the visual domain or whether it should be seen as defective Gestalt processing per se. To examine whether simultanagnosia occurs across sensory domains, we designed several auditory experiments sharing typical characteristics of visual tasks that are known to be particularly demanding for patients suffering from simultanagnosia. We also included control tasks for auditory working memory deficits and for auditory extinction. We tested four simultanagnosia patients who suffered from severe symptoms in the visual domain. Two of them indeed showed significant impairments in the recognition of simultaneously presented sounds. However, the same two patients also suffered from severe auditory working memory deficits and from symptoms comparable to auditory extinction, both sufficiently explaining the impairments in simultaneous auditory perception. We thus conclude that deficits in auditory Gestalt perception do not appear to be characteristic of simultanagnosia and that the human brain evidently uses independent mechanisms for visual and auditory Gestalt perception.

  19. A computer model of auditory stream segregation.

    PubMed

    Beauvois, M W; Meddis, R

    1991-08-01

    A computer model is described which simulates some aspects of auditory stream segregation. The model emphasizes the explanatory power of simple physiological principles operating at a peripheral rather than a central level. The model consists of a multi-channel bandpass-filter bank with a "noisy" output and an attentional mechanism that responds selectively to the channel with the greatest activity. A "leaky integration" principle allows channel excitation to accumulate and dissipate over time. The model produces similar results to two experimental demonstrations of streaming phenomena, which are presented in detail. These results are discussed in terms of the "emergent properties" of a system governed by simple physiological principles. As such the model is contrasted with higher-level Gestalt explanations of the same phenomena while accepting that they may constitute complementary kinds of explanation.
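    The two peripheral principles named above, leaky integration of channel excitation and an attentional mechanism that selects the most active channel, can be sketched minimally; the decay constant and toy inputs are hypothetical, not the model's fitted parameters:

```python
def leaky_integrate(inputs, decay=0.9):
    """Leaky integration: channel excitation accumulates with each input
    sample and dissipates by a constant decay factor per time step."""
    state = 0.0
    for x in inputs:
        state = decay * state + x
    return state

def attended_channel(channel_inputs, decay=0.9):
    """Attentional mechanism: select the filter-bank channel with the
    greatest accumulated (leaky-integrated) activity."""
    activities = [leaky_integrate(ch, decay) for ch in channel_inputs]
    return max(range(len(activities)), key=lambda i: activities[i])

# Two toy filter-bank channels; the second is excited more frequently,
# so its integrated activity wins and attention locks onto its stream.
chans = [[1, 0, 0, 1, 0, 0], [1, 1, 0, 1, 1, 0]]
winner = attended_channel(chans)
```

    Streaming then emerges from this competition: tones falling in the attended channel form the foreground stream, while the rest recede, without invoking any central Gestalt mechanism.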

  1. Attention Modulates the Auditory Cortical Processing of Spatial and Category Cues in Naturalistic Auditory Scenes

    PubMed Central

    Renvall, Hanna; Staeren, Noël; Barz, Claudia S.; Ley, Anke; Formisano, Elia

    2016-01-01

    This combined fMRI and MEG study investigated brain activations during listening and attending to natural auditory scenes. We first recorded, using in-ear microphones, vocal non-speech sounds, and environmental sounds that were mixed to construct auditory scenes containing two concurrent sound streams. During the brain measurements, subjects attended to one of the streams while spatial acoustic information of the scene was either preserved (stereophonic sounds) or removed (monophonic sounds). Compared to monophonic sounds, stereophonic sounds evoked larger blood-oxygenation-level-dependent (BOLD) fMRI responses in the bilateral posterior superior temporal areas, independent of which stimulus attribute the subject was attending to. This finding is consistent with the functional role of these regions in the (automatic) processing of auditory spatial cues. Additionally, significant differences in the cortical activation patterns depending on the target of attention were observed. Bilateral planum temporale and inferior frontal gyrus were preferentially activated when attending to stereophonic environmental sounds, whereas when subjects attended to stereophonic voice sounds, the BOLD responses were larger at the bilateral middle superior temporal gyrus and sulcus, previously reported to show voice sensitivity. In contrast, the time-resolved MEG responses were stronger for mono- than stereophonic sounds in the bilateral auditory cortices at ~360 ms after the stimulus onset when attending to the voice excerpts within the combined sounds. The observed effects suggest that during the segregation of auditory objects from the auditory background, spatial sound cues together with other relevant temporal and spectral cues are processed in an attention-dependent manner at the cortical locations generally involved in sound recognition. 
More synchronous neuronal activation during monophonic than stereophonic sound processing, as well as (local) neuronal inhibitory mechanisms in

  2. The dorsal auditory pathway is involved in performance of both visual and auditory rhythms.

    PubMed

    Karabanov, Anke; Blom, Orjan; Forsman, Lea; Ullén, Fredrik

    2009-01-15

    We used functional magnetic resonance imaging to investigate the effect of two factors on the neural control of temporal sequence performance: the modality in which the rhythms had been trained, and the modality of the pacing stimuli preceding performance. The rhythms were trained 1-2 days before scanning. Each participant learned two rhythms: one was presented visually, the other auditorily. During fMRI, the rhythms were performed in blocks. In each block, four beats of a visual or auditory pacing metronome were followed by repetitive self-paced rhythm performance from memory. Data from the self-paced performance phase were analysed in a 2 × 2 factorial design, with the two factors Training Modality (auditory or visual) and Metronome Modality (auditory or visual), as well as with a conjunction analysis across all active conditions, to identify activations that were independent of both Training Modality and Metronome Modality. We found a significant main effect only for visual versus auditory Metronome Modality, in the left angular gyrus, due to a deactivation of this region after auditory pacing. The conjunction analysis revealed a set of brain areas that included dorsal auditory pathway areas (the left temporo-parietal junction area and ventral premotor cortex), dorsal premotor cortex, the supplementary and presupplementary motor areas, the cerebellum and the basal ganglia. We conclude that these regions are involved in controlling performance of well-learned rhythms, regardless of the modality in which the rhythms are trained and paced. This suggests that after extensive short-term training, all rhythms, even those that were both trained and paced in the visual modality, had been transformed into auditory-motor representations. The deactivation of the angular cortex following auditory pacing may represent cross-modal auditory-visual inhibition.

  3. A lateralized functional auditory network is involved in anuran sexual selection.

    PubMed

    Xue, Fei; Fang, Guangzhan; Yue, Xizi; Zhao, Ermi; Brauth, Steven E; Tang, Yezhong

    2016-12-01

    A right ear advantage (REA) exists in many land vertebrates, in which the right ear and left hemisphere preferentially process conspecific acoustic stimuli such as those related to sexual selection. Although the ecological and neural mechanisms of sexual selection have been widely studied, the brain networks involved are still poorly understood. In this study we used multi-channel electroencephalographic data in combination with Granger causal connectivity analysis to demonstrate, for the first time, that the auditory neural network interconnecting the left and right midbrain and forebrain functions asymmetrically in the Emei music frog (Babina daunchina), an anuran species that exhibits REA. The results showed that the network was lateralized: ascending connections between the mesencephalon and telencephalon were stronger on the left side while descending ones were stronger on the right, which matched the REA in this species and implied that inhibition from the forebrain may partly underlie it. Connections from the telencephalon to the ipsilateral mesencephalon in response to white noise were strongest in the non-reproductive stage, while those in response to advertisement calls were strongest in the reproductive stage, implying that attentional resources and living strategies shift when the animals enter the reproductive season. Finally, these connection changes were sexually dimorphic, revealing sex differences in reproductive roles.
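
    The Granger causal connectivity analysis used in this record can be illustrated with a toy sketch (not the authors' pipeline): a hypothetical two-channel recording in which one channel drives the other, compared with least-squares autoregressive models using "own lags only" versus "own plus other" predictors. All names, lag orders and signal parameters below are illustrative assumptions.

```python
import numpy as np

def granger_strength(source, target, order=2):
    """Log-ratio of restricted vs. full residual variance: values well
    above zero suggest the source channel Granger-causes the target."""
    n = len(target)
    y = target[order:]
    # Lagged predictors: target's own past, then the source's past
    own = np.column_stack([target[order - k:n - k] for k in range(1, order + 1)])
    both = np.column_stack(
        [own] + [source[order - k:n - k][:, None] for k in range(1, order + 1)])

    def resid_var(X):
        X = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta
        return np.mean(r ** 2)

    return np.log(resid_var(own) / resid_var(both))

# Synthetic channels: x is white noise that drives y with a one-sample lag
rng = np.random.default_rng(0)
n = 2000
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.8 * x[t - 1] + 0.2 * y[t - 1] + 0.1 * rng.standard_normal()

# Directional asymmetry: x -> y should dominate y -> x
print(granger_strength(x, y) > granger_strength(y, x))
```

The same pairwise comparison, applied between electrode pairs over midbrain and forebrain, is the kind of directed measure from which the lateralized network in the abstract is built.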

  4. Conserved mechanisms of vocalization coding in mammalian and songbird auditory midbrain.

    PubMed

    Woolley, Sarah M N; Portfors, Christine V

    2013-11-01

    The ubiquity of social vocalizations among animals provides the opportunity to identify conserved mechanisms of auditory processing that subserve communication. Identifying auditory coding properties that are shared across vocal communicators will provide insight into how human auditory processing leads to speech perception. Here, we compare auditory response properties and neural coding of social vocalizations in auditory midbrain neurons of mammalian and avian vocal communicators. The auditory midbrain is a nexus of auditory processing because it receives and integrates information from multiple parallel pathways and provides the ascending auditory input to the thalamus. The auditory midbrain is also the first region in the ascending auditory system where neurons show complex tuning properties that are correlated with the acoustics of social vocalizations. Single unit studies in mice, bats and zebra finches reveal shared principles of auditory coding including tonotopy, excitatory and inhibitory interactions that shape responses to vocal signals, nonlinear response properties that are important for auditory coding of social vocalizations and modulation tuning. Additionally, single neuron responses in the mouse and songbird midbrain are reliable, selective for specific syllables, and rely on spike timing for neural discrimination of distinct vocalizations. We propose that future research on auditory coding of vocalizations in mouse and songbird midbrain neurons adopt similar experimental and analytical approaches so that conserved principles of vocalization coding may be distinguished from those that are specialized for each species. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".

  5. A Multichannel Bioluminescence Determination Platform for Bioassays.

    PubMed

    Kim, Sung-Bae; Naganawa, Ryuichi

    2016-01-01

    The present protocol introduces a multichannel bioluminescence determination platform that allows high-throughput determination of weak bioluminescence with reduced standard deviations. The platform is designed to carry a multichannel conveyer, an optical filter, and a mirror cap. It enables us to near-simultaneously determine ligands in multiple samples without replacing the sample tubes. Furthermore, the optical filters beneath the multichannel conveyer are designed to easily discriminate colors during assays. This optical system provides excellent time- and labor-efficiency to users during bioassays.

  6. Multi-channel polarized thermal emitter

    DOEpatents

    Lee, Jae-Hwang; Ho, Kai-Ming; Constant, Kristen P

    2013-07-16

    A multi-channel polarized thermal emitter (PTE) is presented. The multi-channel PTE can emit polarized thermal radiation without using a polarizer at normal emergence. It consists of two layers of metallic gratings on a monolithic and homogeneous metallic plate, and can be fabricated by a low-cost soft lithography technique called two-polymer microtransfer molding. The spectral positions of the mid-infrared (MIR) radiation peaks can be tuned by changing the periodicity of the gratings, and the spectral separation between peaks is tuned by changing the mutual angle between the orientations of the two gratings.

  7. Social experience influences the development of a central auditory area.

    PubMed

    Cousillas, Hugo; George, Isabelle; Mathelier, Maryvonne; Richard, Jean-Pierre; Henry, Laurence; Hausberger, Martine

    2006-12-01

    Vocal communication develops under social influences that can enhance attention, an important factor in memory formation and perceptual tuning. In songbirds, social conditions can delay sensitive periods of development, overcome learning inhibitions and enable exceptional learning or induce selective learning. However, we do not know how social conditions influence auditory processing in the brain. In the present study, we raised young naive starlings under different social conditions but with the same auditory experience of adult songs, and we compared the effects of these different conditions on the development of the auditory cortex analogue. Several features appeared to be influenced by social experience, among them the proportion of auditory neuronal sites and the degree of neuronal selectivity. Both physical and social isolation from adult models altered the development of the auditory area in parallel with alterations in vocal development. To our knowledge, this is the first evidence that social deprivation has as much influence on neuronal responsiveness as sensory deprivation.

  8. Social experience influences the development of a central auditory area

    NASA Astrophysics Data System (ADS)

    Cousillas, Hugo; George, Isabelle; Mathelier, Maryvonne; Richard, Jean-Pierre; Henry, Laurence; Hausberger, Martine

    2006-12-01

    Vocal communication develops under social influences that can enhance attention, an important factor in memory formation and perceptual tuning. In songbirds, social conditions can delay sensitive periods of development, overcome learning inhibitions and enable exceptional learning or induce selective learning. However, we do not know how social conditions influence auditory processing in the brain. In the present study, we raised young naive starlings under different social conditions but with the same auditory experience of adult songs, and we compared the effects of these different conditions on the development of the auditory cortex analogue. Several features appeared to be influenced by social experience, among them the proportion of auditory neuronal sites and the degree of neuronal selectivity. Both physical and social isolation from adult models altered the development of the auditory area in parallel with alterations in vocal development. To our knowledge, this is the first evidence that social deprivation has as much influence on neuronal responsiveness as sensory deprivation.

  9. Altered auditory function in rats exposed to hypergravic fields

    NASA Technical Reports Server (NTRS)

    Jones, T. A.; Hoffman, L.; Horowitz, J. M.

    1982-01-01

    The effect of an orthodynamic hypergravic field of 6 G on the brainstem auditory projections was studied in rats. The brain temperature and EEG activity were recorded in the rats during 6 G orthodynamic acceleration and auditory brainstem responses were used to monitor auditory function. Results show that all animals exhibited auditory brainstem responses which indicated impaired conduction and transmission of brainstem auditory signals during the exposure to the 6 G acceleration field. Significant increases in central conduction time were observed for peaks 3N, 4P, 4N, and 5P (N = negative, P = positive), while the absolute latency values for these same peaks were also significantly increased. It is concluded that these results, along with those for fields below 4 G (Jones and Horowitz, 1981), indicate that impaired function proceeds in a rostro-caudal progression as field strength is increased.

  10. Multi-channel fiber photometry for population neuronal activity recording.

    PubMed

    Guo, Qingchun; Zhou, Jingfeng; Feng, Qiru; Lin, Rui; Gong, Hui; Luo, Qingming; Zeng, Shaoqun; Luo, Minmin; Fu, Ling

    2015-10-01

    Fiber photometry has become increasingly popular among neuroscientists as a convenient tool for recording genetically defined neuronal populations in behaving animals. Here, we report the development of a multi-channel fiber photometry system to simultaneously monitor neural activity in several brain areas of one animal or in different animals. In this system, a galvano-mirror modulates and cyclically couples the excitation light to individual multimode optical fiber bundles. A single photodetector collects the emitted light, and the configuration of the fiber bundle assembly and the scanner determines the total channel number. We demonstrated that the system exhibited negligible crosstalk between channels and that optical signals could be sampled simultaneously at a rate of at least 100 Hz per channel, which is sufficient for recording calcium signals. Using this system, we successfully recorded GCaMP6 fluorescent signals from the bilateral barrel cortices of a head-restrained mouse in dual-channel mode, and from the orbitofrontal cortices of multiple freely moving mice in triple-channel mode. The multi-channel fiber photometry system should be a valuable tool for simultaneous recording of population activity in different brain areas of a given animal and in different interacting individuals.
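
    The galvo-based channel sharing described in this record amounts to time-division multiplexing: the single detector stream interleaves samples from each fiber bundle in a fixed cycle, so demultiplexing is a phase-locked reshape. The following is a minimal sketch under that assumption, with entirely synthetic signals (not the authors' acquisition code).

```python
import numpy as np

# Hypothetical acquisition: the galvo visits 3 fiber channels in a fixed
# cycle, so the single photodetector stream interleaves the channels.
n_channels, samples_per_channel = 3, 100
true_signals = np.array([
    np.sin(np.linspace(0, 4 * np.pi, samples_per_channel)) + ch
    for ch in range(n_channels)
])

# Interleave into one detector stream: ch0, ch1, ch2, ch0, ch1, ch2, ...
detector_stream = true_signals.T.reshape(-1)

# Demultiplex by sample phase within each galvo cycle
recovered = detector_stream.reshape(samples_per_channel, n_channels).T

print(np.allclose(recovered, true_signals))
```

In a real system the per-channel rate is the detector rate divided by the number of channels, which is why the channel count trades off against the reported 100 Hz per-channel sampling.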

  11. Anatomy, Physiology and Function of the Auditory System

    NASA Astrophysics Data System (ADS)

    Kollmeier, Birger

    The human ear consists of the outer ear (pinna or concha, outer ear canal, tympanic membrane), the middle ear (middle ear cavity with the three ossicles malleus, incus and stapes) and the inner ear (the cochlea, which is connected to the three semicircular canals by the vestibule, which provides the sense of balance). The cochlea is connected to the brain stem via the eighth cranial nerve, i.e. the vestibulocochlear nerve or nervus statoacusticus. Subsequently, the acoustical information is processed by the brain at various levels of the auditory system. An overview of the anatomy of the auditory system is provided in Figure 1.

  12. PARATHYROID HORMONE 2 RECEPTOR AND ITS ENDOGENOUS LIGAND TIP39 ARE CONCENTRATED IN ENDOCRINE, VISCEROSENSORY AND AUDITORY BRAIN REGIONS IN MACAQUE AND HUMAN

    PubMed Central

    Bagó, Attila G.; Dimitrov, Eugene; Saunders, Richard; Seress, László; Palkovits, Miklós; Usdin, Ted B.; Dobolyi, Arpád

    2009-01-01

    Parathyroid hormone receptor 2 (PTH2R) and its ligand, tuberoinfundibular peptide of 39 residues (TIP39), constitute a neuromodulator system implicated in endocrine and nociceptive regulation. We now describe the presence and distribution of PTH2R and TIP39 in the primate brain, using a range of tissues and ages from macaque and human brain. In situ hybridization histochemistry, performed in young macaque brain because TIP39 expression may decline beyond late postnatal ages, detected TIP39 mRNA only in the thalamic subparafascicular area and the pontine medial paralemniscal nucleus. In contrast, in situ hybridization histochemistry in macaque identified high levels of PTH2R expression in the central amygdaloid nucleus, medial preoptic area, hypothalamic paraventricular and periventricular nuclei, medial geniculate, and the pontine tegmentum. PTH2R mRNA was also detected in several human brain areas by RT-PCR. The distribution of PTH2R-immunoreactive fibers in human, determined by immunocytochemistry, was similar to that in rodents, including dense fiber networks in the medial preoptic area, hypothalamic paraventricular, periventricular and infundibular (arcuate) nuclei, lateral hypothalamic area, median eminence, thalamic paraventricular nucleus, periaqueductal gray, lateral parabrachial nucleus, nucleus of the solitary tract, sensory trigeminal nuclei, medullary dorsal reticular nucleus, and dorsal horn of the spinal cord. Co-localization suggested that PTH2R fibers are glutamatergic, and that TIP39 may directly influence hypophysiotropic somatostatin-containing neurons and indirectly influence corticotropin-releasing-hormone-containing neurons. The results demonstrate that TIP39 and the PTH2R are expressed in the primate brain in locations that suggest involvement in the regulation of fear, anxiety, reproductive behaviors, release of pituitary hormones, and nociception. PMID:19401215

  13. Spectrotemporal resolution tradeoff in auditory processing as revealed by human auditory brainstem responses and psychophysical indices.

    PubMed

    Bidelman, Gavin M; Syed Khaja, Ameenuddin

    2014-06-20

    Auditory filter theory dictates a physiological compromise between frequency and temporal resolution of cochlear signal processing. We examined neurophysiological correlates of these spectrotemporal tradeoffs in the human auditory system using auditory evoked brain potentials and psychophysical responses. Temporal resolution was assessed using scalp-recorded auditory brainstem responses (ABRs) elicited by paired clicks. The inter-click interval (ICI) between successive pulses was parameterized from 0.7 to 25 ms to map ABR amplitude recovery as a function of stimulus spacing. Behavioral frequency difference limens (FDLs) and auditory filter selectivity (Q10 of psychophysical tuning curves) were obtained to assess relations between behavioral spectral acuity and electrophysiological estimates of temporal resolvability. Neural responses increased monotonically in amplitude with increasing ICI, ranging from total suppression (0.7 ms) to full recovery (25 ms) with a temporal resolution of ∼3-4 ms. ABR temporal thresholds were correlated with behavioral Q10 (frequency selectivity) but not FDLs (frequency discrimination); no correspondence was observed between Q10 and FDLs. Results suggest that finer frequency selectivity, but not discrimination, is associated with poorer temporal resolution. The inverse relation between ABR recovery and perceptual frequency tuning demonstrates a time-frequency tradeoff between the temporal and spectral resolving power of the human auditory system.
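
    The Q10 index of psychophysical tuning-curve sharpness used in this record (characteristic frequency divided by the bandwidth 10 dB above the tip) can be sketched as follows. The V-shaped tuning curve is synthetic and the helper name is our own, not from the paper.

```python
import numpy as np

def q10(freqs, thresholds):
    """Sharpness of a psychophysical tuning curve: characteristic
    frequency divided by the bandwidth measured 10 dB above the tip."""
    tip = np.argmin(thresholds)
    cf, criterion = freqs[tip], thresholds[tip] + 10.0
    # Interpolate the criterion crossing on each flank of the tip
    lo = np.interp(criterion, thresholds[:tip + 1][::-1], freqs[:tip + 1][::-1])
    hi = np.interp(criterion, thresholds[tip:], freqs[tip:])
    return cf / (hi - lo)

# Hypothetical V-shaped tuning curve centred on 1 kHz, 10 dB per 100 Hz
freqs = np.linspace(500.0, 1500.0, 201)
thresholds = 20.0 + np.abs(freqs - 1000.0) / 10.0

print(round(q10(freqs, thresholds), 2))
```

Larger Q10 means a narrower 10 dB bandwidth relative to the tip frequency, i.e. sharper frequency selectivity, which is the quantity the abstract correlates with ABR temporal thresholds.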

  14. Auditory Neuroimaging with fMRI and PET

    PubMed Central

    Talavage, Thomas M.; Gonzalez-Castillo, Javier; Scott, Sophie K.

    2013-01-01

    For much of the past 30 years, investigations of auditory perception and language have been enhanced or even driven by the use of functional neuroimaging techniques that specialize in localization of central responses. Beginning with investigations using positron emission tomography (PET) and gradually shifting primarily to usage of functional magnetic resonance imaging (fMRI), auditory neuroimaging has greatly advanced our understanding of the organization and response properties of brain regions critical to the perception of and communication with the acoustic world in which we live. As the complexity of the questions being addressed has increased, the techniques, experiments and analyses applied have also become more nuanced and specialized. A brief review of the history of these investigations sets the stage for an overview and analysis of how these neuroimaging modalities are becoming ever more effective tools for understanding the auditory brain. We conclude with a brief discussion of open methodological issues as well as potential clinical applications for auditory neuroimaging. PMID:24076424

  15. Auditory and visual scene analysis: an overview

    PubMed Central

    2017-01-01

    We perceive the world as stable and composed of discrete objects even though auditory and visual inputs are often ambiguous owing to spatial and temporal occluders and changes in the conditions of observation. This raises important questions regarding where and how ‘scene analysis’ is performed in the brain. Recent advances from both auditory and visual research suggest that the brain does not simply process the incoming scene properties. Rather, top-down processes such as attention, expectations and prior knowledge facilitate scene perception. Thus, scene analysis is linked not only with the extraction of stimulus features and formation and selection of perceptual objects, but also with selective attention, perceptual binding and awareness. This special issue covers novel advances in scene-analysis research obtained using a combination of psychophysics, computational modelling, neuroimaging and neurophysiology, and presents new empirical and theoretical approaches. For integrative understanding of scene analysis beyond and across sensory modalities, we provide a collection of 15 articles that enable comparison and integration of recent findings in auditory and visual scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044011

  16. Auditory and visual scene analysis: an overview.

    PubMed

    Kondo, Hirohito M; van Loon, Anouk M; Kawahara, Jun-Ichiro; Moore, Brian C J

    2017-02-19

    We perceive the world as stable and composed of discrete objects even though auditory and visual inputs are often ambiguous owing to spatial and temporal occluders and changes in the conditions of observation. This raises important questions regarding where and how 'scene analysis' is performed in the brain. Recent advances from both auditory and visual research suggest that the brain does not simply process the incoming scene properties. Rather, top-down processes such as attention, expectations and prior knowledge facilitate scene perception. Thus, scene analysis is linked not only with the extraction of stimulus features and formation and selection of perceptual objects, but also with selective attention, perceptual binding and awareness. This special issue covers novel advances in scene-analysis research obtained using a combination of psychophysics, computational modelling, neuroimaging and neurophysiology, and presents new empirical and theoretical approaches. For integrative understanding of scene analysis beyond and across sensory modalities, we provide a collection of 15 articles that enable comparison and integration of recent findings in auditory and visual scene analysis. This article is part of the themed issue 'Auditory and visual scene analysis'.

  17. From sensation to percept: the neural signature of auditory event-related potentials.

    PubMed

    Joos, Kathleen; Gilles, Annick; Van de Heyning, Paul; De Ridder, Dirk; Vanneste, Sven

    2014-05-01

    An external auditory stimulus induces an auditory sensation that may lead to a conscious auditory perception. Although the sensory aspect is well understood, it remains unclear how an auditory stimulus results in an individual's conscious percept. To unravel the uncertainties concerning the neural correlates of a conscious auditory percept, event-related potentials may serve as a useful tool. In the current review we mainly aim to shed light on the perceptual aspects of auditory processing, and therefore focus on the auditory late-latency responses. Moreover, there is increasing evidence that perception is an active process in which the brain searches for the information it expects to be present, suggesting that auditory perception requires both bottom-up (sensory) and top-down (prediction-driven) processing. The auditory evoked potentials are therefore interpreted in the context of the Bayesian brain model, in which the brain predicts which information it expects and when it will arrive. The internal representation of the auditory environment is verified by sensory samples of the environment (P50, N100). When incoming information violates this expectation, a prediction error signal is emitted (mismatch negativity), activating higher-order neural networks and inducing an update of the prior internal representation of the environment (P300).
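
    The mismatch negativity discussed in this record is conventionally estimated as the deviant-minus-standard difference wave of averaged ERPs. The following is a minimal synthetic sketch; the component amplitudes, latencies and trial counts are illustrative assumptions, not data from the review.

```python
import numpy as np

rng = np.random.default_rng(1)
times = np.linspace(0.0, 0.4, 401)  # 0-400 ms epoch, 1 ms steps

def erp(peak_ms, amp, n_trials):
    """Average over noisy single trials of a Gaussian-shaped component."""
    component = amp * np.exp(-0.5 * ((times - peak_ms / 1000.0) / 0.03) ** 2)
    trials = component + 0.5 * rng.standard_normal((n_trials, times.size))
    return trials.mean(axis=0)

standard = erp(peak_ms=100, amp=2.0, n_trials=500)     # N100 to frequent tone
deviant = erp(peak_ms=100, amp=2.0, n_trials=500) + \
          erp(peak_ms=170, amp=-3.0, n_trials=500)     # extra negativity

mmn = deviant - standard                               # difference wave
peak_latency_ms = times[np.argmin(mmn)] * 1000.0
print(100 < peak_latency_ms < 250)
```

Averaging over trials suppresses the single-trial noise, so the difference wave isolates the deviance-related negativity around its assumed latency.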

  18. Auditory Processing Disorder (For Parents)

    MedlinePlus

    Auditory processing disorder (APD), also known as central auditory processing ...

  19. Auditory cortex involvement in emotional learning and memory.

    PubMed

    Grosso, A; Cambiaghi, M; Concina, G; Sacco, T; Sacchetti, B

    2015-07-23

    Emotional memories represent the core of human and animal life and drive future choices and behaviors. Early research involving brain lesion studies in animals led to the idea that the auditory cortex participates in emotional learning by processing the sensory features of auditory stimuli paired with emotional consequences and by transmitting this information to the amygdala. Nevertheless, electrophysiological and imaging studies revealed that, following emotional experiences, the auditory cortex undergoes learning-induced changes that are highly specific, associative and long-lasting. These studies suggested that the role played by the auditory cortex goes beyond stimulus elaboration and transmission. Here, we discuss three major perspectives created by these data. In particular, we analyze the possible roles of the auditory cortex in emotional learning, we examine the recruitment of the auditory cortex during early and late memory trace encoding, and finally we consider the functional interplay between the auditory cortex and subcortical nuclei, such as the amygdala, that process affective information. We conclude that, starting from the early phase of memory encoding, the auditory cortex has a more prominent role in emotional learning, through its connections with subcortical nuclei, than is typically acknowledged.

  20. Dynamics of auditory-vocal interaction in monkey auditory cortex.

    PubMed

    Eliades, Steven J; Wang, Xiaoqin

    2005-10-01

    Single neurons in the primate auditory cortex exhibit vocalization-related modulations (excitatory or inhibitory) during self-initiated vocal production. Previous studies have shown that these modulations of cortical activity are variable in individual neurons' responses to multiple instances of vocalization and diverse between different cortical neurons. The present study investigated dynamic patterns of vocalization-related modulations and demonstrated that much of the variability in cortical modulations was related to the acoustic structures of self-produced vocalization. We found that suppression of single unit activity during multi-phrased vocalizations was temporally specific in that it was maintained during each phrase, but was released between phrases. Furthermore, the degree of suppression or excitation was correlated to the mean energy and frequency of the produced vocalizations, accounting for much of the response variability between multiple instances of vocalization. Simultaneous recordings of pairs of neurons from a single electrode revealed that the modulations by self-produced vocalizations in nearby neurons were largely uncorrelated. Additionally, vocalization-induced suppression was found to be preferentially distributed to upper cortical layers. Finally, we showed that the summation of all auditory cortical activity during vocalization, including both single and multi-unit responses, was weakly excitatory, consistent with observations from studies of the human brain during speech.

  1. Auditory verbal hallucinations: neuroimaging and treatment.

    PubMed

    Bohlken, M M; Hugdahl, K; Sommer, I E C

    2017-01-01

    Auditory verbal hallucinations (AVH) are a frequently occurring phenomenon in the general population and are considered a psychotic symptom when presented in the context of a psychiatric disorder. Neuroimaging literature has shown that AVH are subserved by a variety of alterations in brain structure and function, which primarily concentrate around brain regions associated with the processing of auditory verbal stimuli and with executive control functions. However, the direction of association between AVH and brain function remains equivocal in certain research areas and needs to be carefully reviewed and interpreted. When AVH have significant impact on daily functioning, several efficacious treatments can be attempted such as antipsychotic medication, brain stimulation and cognitive-behavioural therapy. Interestingly, the neural correlates of these treatments largely overlap with brain regions involved in AVH. This suggests that the efficacy of treatment corresponds to a normalization of AVH-related brain activity. In this selected review, we give a compact yet comprehensive overview of the structural and functional neuroimaging literature on AVH, with a special focus on the neural correlates of efficacious treatment.

  2. Processing of spatial sounds in human auditory cortex during visual, discrimination and 2-back tasks

    PubMed Central

    Rinne, Teemu; Ala-Salomäki, Heidi; Stecker, G. Christopher; Pätynen, Jukka; Lokki, Tapio

    2014-01-01

    Previous imaging studies on the brain mechanisms of spatial hearing have mainly focused on sounds varying in the horizontal plane. In this study, we compared activations in human auditory cortex (AC) and adjacent inferior parietal lobule (IPL) to sounds varying in horizontal location, distance, or space (i.e., different rooms). In order to investigate both stimulus-dependent and task-dependent activations, these sounds were presented during visual discrimination, auditory discrimination, and auditory 2-back memory tasks. Consistent with previous studies, activations in AC were modulated by the auditory tasks. During both auditory and visual tasks, activations in AC were stronger to sounds varying in horizontal location than along other feature dimensions. However, in IPL, this enhancement was detected only during auditory tasks. Based on these results, we argue that IPL is not primarily involved in stimulus-level spatial analysis but that it may represent such information for more general processing when relevant to an active auditory task. PMID:25120423

  3. Lexical Influences on Auditory Streaming

    PubMed Central

    Billig, Alexander J.; Davis, Matthew H.; Deeks, John M.; Monstrey, Jolijn; Carlyon, Robert P.

    2013-01-01

    Biologically salient sounds, including speech, are rarely heard in isolation. Our brains must therefore organize the input arising from multiple sources into separate “streams” and, in the case of speech, map the acoustic components of the target signal onto meaning. These auditory and linguistic processes have traditionally been considered to occur sequentially and are typically studied independently [1, 2]. However, evidence that streaming is modified or reset by attention [3], and that lexical knowledge can affect reports of speech sound identity [4, 5], suggests that higher-level factors may influence perceptual organization. In two experiments, listeners heard sequences of repeated words or acoustically matched nonwords. After several presentations, they reported that the initial /s/ sound in each syllable formed a separate stream; the percept then fluctuated between the streamed and fused states in a bistable manner. In addition to measuring these verbal transformations, we assessed streaming objectively by requiring listeners to detect occasional targets—syllables containing a gap after the initial /s/. Performance was better when streaming caused the syllables preceding the target to transform from words into nonwords, rather than from nonwords into words. Our results show that auditory stream formation is influenced not only by the acoustic properties of speech sounds, but also by higher-level processes involved in recognizing familiar words. PMID:23891107

  4. Auditory Spatial Layout

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  5. Physiological Measures of Auditory Function

    NASA Astrophysics Data System (ADS)

    Kollmeier, Birger; Riedel, Helmut; Mauermann, Manfred; Uppenkamp, Stefan

    When acoustic signals enter the ears, they pass several processing stages of various complexities before they will be perceived. The auditory pathway can be separated into structures dealing with sound transmission in air (i.e. the outer ear, ear canal, and the vibration of tympanic membrane), structures dealing with the transformation of sound pressure waves into mechanical vibrations of the inner ear fluids (i.e. the tympanic membrane, ossicular chain, and the oval window), structures carrying mechanical vibrations in the fluid-filled inner ear (i.e. the cochlea with basilar membrane, tectorial membrane, and hair cells), structures that transform mechanical oscillations into a neural code, and finally several stages of neural processing in the brain along the pathway from the brainstem to the cortex.

  6. Multichannel DBS halftoning for improved texture quality

    NASA Astrophysics Data System (ADS)

    Slavuj, Radovan; Pedersen, Marius

    2015-01-01

    The paper aims to develop a method for multichannel halftoning based on the Direct Binary Search (DBS) algorithm. We integrate the specifics and benefits of multichannel printing into the halftoning method in order to further improve the texture quality of DBS and to create a halftoning method suited to multichannel printing. Multichannel printing was originally developed for an extended color gamut, but the additional channels can also help to improve the individual and combined textures of color halftoning. It does so in a manner similar to the introduction of light colors (diluted inks) in printing. Namely, if one treats the Red, Green and Blue inks as light versions of the M+Y, C+Y, and C+M combinations, the visibility of unwanted halftoning textures can be reduced. The analogy can be extended to any number of ink combinations, or Neugebauer Primaries (NPs), as alternative building blocks. The extended variability of printing spatially distributed NPs could provide many practical solutions and improvements in color accuracy and image quality, and could enable spectral printing. This could be done by selecting NPs per dot-area location based on the constraints of the desired reproduction. Replacement with a brighter NP at a given location can induce a color difference, creating a tradeoff between image quality and color accuracy. With multichannel-enabled DBS halftoning, we are able to reduce the visibility of textures and to provide better rendering of transitions, especially in the mid and dark tones.
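
The greedy search at the core of DBS can be sketched in a few lines. The toy implementation below is single-channel only (the paper's per-location NP selection is not modeled), uses a plain Gaussian kernel as a crude stand-in for a proper human-visual-system filter, and recomputes the filtered error from scratch at every trial toggle; real DBS implementations update the error incrementally and also consider pairwise pixel swaps.

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.5):
    # Separable Gaussian as a crude stand-in for an HVS filter
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

def filtered_sq_error(halftone, gray, kernel):
    # Perceived error: squared difference after low-pass (HVS) filtering
    diff = halftone - gray
    n = kernel.shape[0] // 2
    padded = np.pad(diff, n)
    err = 0.0
    for i in range(gray.shape[0]):
        for j in range(gray.shape[1]):
            v = np.sum(padded[i:i + kernel.shape[0], j:j + kernel.shape[1]] * kernel)
            err += v * v
    return err

def dbs_halftone(gray, n_sweeps=3):
    # Direct Binary Search: start from a threshold halftone, then greedily
    # toggle single pixels whenever the toggle lowers the filtered error.
    ht = (gray > 0.5).astype(float)
    kernel = gaussian_kernel()
    for _ in range(n_sweeps):
        changed = False
        for i in range(gray.shape[0]):
            for j in range(gray.shape[1]):
                before = filtered_sq_error(ht, gray, kernel)
                ht[i, j] = 1.0 - ht[i, j]        # trial toggle
                after = filtered_sq_error(ht, gray, kernel)
                if after >= before:
                    ht[i, j] = 1.0 - ht[i, j]    # revert: no improvement
                else:
                    changed = True
        if not changed:
            break                                # converged: no toggle helps
    return ht
```

Because every accepted toggle strictly lowers the filtered squared error, the result can never score worse under this error metric than the initial threshold halftone.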

  7. Wireless multichannel biopotential recording using an integrated FM telemetry circuit.

    PubMed

    Mohseni, Pedram; Najafi, Khalil; Eliades, Steven J; Wang, Xiaoqin

    2005-09-01

    This paper presents a four-channel telemetric microsystem featuring on-chip alternating current amplification, direct current baseline stabilization, clock generation, time-division multiplexing, and wireless frequency-modulation transmission of microvolt- and millivolt-range input biopotentials in the very high frequency band of 94-98 MHz over a distance of approximately 0.5 m. It consists of a 4.84-mm2 integrated circuit, fabricated using a 1.5-microm double-poly double-metal n-well standard complementary metal-oxide semiconductor process, interfaced with only three off-chip components on a custom-designed printed-circuit board that measures 1.7 x 1.2 x 0.16 cm3, and weighs 1.1 g including two miniature 1.5-V batteries. We characterize the microsystem performance, operating in a truly wireless fashion in single-channel and multichannel operation modes, via extensive benchtop and in vitro tests in saline utilizing two different micromachined neural recording microelectrodes, while dissipating approximately 2.2 mW from a 3-V power supply. Moreover, we demonstrate successful wireless in vivo recording of spontaneous neural activity at 96.2 MHz from the auditory cortex of an awake marmoset monkey at several transmission distances ranging from 10 to 50 cm with signal-to-noise ratios in the range of 8.4-9.5 dB.

  8. Effect of Neonatal Asphyxia on the Impairment of the Auditory Pathway by Recording Auditory Brainstem Responses in Newborn Piglets: A New Experimentation Model to Study the Perinatal Hypoxic-Ischemic Damage on the Auditory System

    PubMed Central

    Alvarez, Francisco Jose; Revuelta, Miren; Santaolalla, Francisco; Alvarez, Antonia; Lafuente, Hector; Arteaga, Olatz; Alonso-Alconada, Daniel; Sanchez-del-Rey, Ana; Hilario, Enrique; Martinez-Ibargüen, Agustin

    2015-01-01

    Introduction Hypoxia–ischemia (HI) is a major perinatal problem that results in severe damage to the brain, impairing the normal development of the auditory system. The purpose of the present study is to investigate the effect of perinatal asphyxia on the auditory pathway by recording auditory brain responses in a novel animal experimentation model in newborn piglets. Method Hypoxia-ischemia was induced in 1.3 day-old piglets by clamping both carotid arteries with vascular occluders for 30 minutes and lowering the fraction of inspired oxygen. We compared the Auditory Brain Responses (ABRs) of newborn piglets exposed to acute hypoxia/ischemia (n = 6) and a control group with no such exposure (n = 10). ABRs were recorded for both ears before the start of the experiment (baseline), after 30 minutes of HI injury, and every 30 minutes for 6 h after the HI injury. Results Auditory brain responses were altered during the hypoxic-ischemic insult but recovered 30-60 minutes later. Hypoxia/ischemia seemed to induce auditory functional damage by increasing I-V latencies and decreasing wave I, III and V amplitudes, although differences were not significant. Conclusion The described experimental model of hypoxia-ischemia in newborn piglets may be useful for studying the effect of perinatal asphyxia on the impairment of the auditory pathway. PMID:26010092

  9. Auditory hallucinations induced by trazodone.

    PubMed

    Shiotsuki, Ippei; Terao, Takeshi; Ishii, Nobuyoshi; Hatano, Koji

    2014-04-03

    A 26-year-old female outpatient presenting with a depressive state suffered from auditory hallucinations at night. Her auditory hallucinations did not respond to blonanserin or paliperidone, but partially responded to risperidone. In view of the possibility that her auditory hallucinations began after starting trazodone, trazodone was discontinued, leading to a complete resolution of her auditory hallucinations. Furthermore, even after risperidone was decreased and discontinued, her auditory hallucinations did not recur. These findings suggest that trazodone may induce auditory hallucinations in some susceptible patients.

  10. Auditory hallucinations induced by trazodone

    PubMed Central

    Shiotsuki, Ippei; Terao, Takeshi; Ishii, Nobuyoshi; Hatano, Koji

    2014-01-01

    A 26-year-old female outpatient presenting with a depressive state suffered from auditory hallucinations at night. Her auditory hallucinations did not respond to blonanserin or paliperidone, but partially responded to risperidone. In view of the possibility that her auditory hallucinations began after starting trazodone, trazodone was discontinued, leading to a complete resolution of her auditory hallucinations. Furthermore, even after risperidone was decreased and discontinued, her auditory hallucinations did not recur. These findings suggest that trazodone may induce auditory hallucinations in some susceptible patients. PMID:24700048

  11. Auditory models for speech analysis

    NASA Astrophysics Data System (ADS)

    Maybury, Mark T.

    This paper reviews the psychophysical basis for auditory models and discusses their application to automatic speech recognition. First, an overview of the human auditory system is presented, followed by a review of current knowledge gleaned from neurological and psychoacoustic experimentation. Next, a general framework describes established peripheral auditory models which are based on well-understood properties of the peripheral auditory system. This is followed by a discussion of current enhancements to those models to include nonlinearities and synchrony information as well as other higher auditory functions. Finally, the initial performance of auditory models in the task of speech recognition is examined and additional applications are mentioned.

  12. Music training for the development of auditory skills.

    PubMed

    Kraus, Nina; Chandrasekaran, Bharath

    2010-08-01

    The effects of music training in relation to brain plasticity have caused excitement, evident from the popularity of books on this topic among scientists and the general public. Neuroscience research has shown that music training leads to changes throughout the auditory system that prime musicians for listening challenges beyond music processing. This effect of music training suggests that, akin to physical exercise and its impact on body fitness, music is a resource that tones the brain for auditory fitness. Therefore, the role of music in shaping individual development deserves consideration.

  13. Integration and segregation in auditory scene analysis.

    PubMed

    Sussman, Elyse S

    2005-03-01

    Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing that suggests that the auditory scene is rapidly organized into distinct streams and the integration of sequential elements to perceptual units takes place on the already formed streams. This would allow for the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.

  14. Integration and segregation in auditory scene analysis

    NASA Astrophysics Data System (ADS)

    Sussman, Elyse S.

    2005-03-01

    Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing that suggests that the auditory scene is rapidly organized into distinct streams and the integration of sequential elements to perceptual units takes place on the already formed streams. This would allow for the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.

  15. Multivariate sensitivity to voice during auditory categorization.

    PubMed

    Lee, Yune Sang; Peelle, Jonathan E; Kraemer, David; Lloyd, Samuel; Granger, Richard

    2015-09-01

    Past neuroimaging studies have documented discrete regions of human temporal cortex that are more strongly activated by conspecific voice sounds than by nonvoice sounds. However, the mechanisms underlying this voice sensitivity remain unclear. In the present functional MRI study, we took a novel approach to examining voice sensitivity, in which we applied a signal detection paradigm to the assessment of multivariate pattern classification among several living and nonliving categories of auditory stimuli. Within this framework, voice sensitivity can be interpreted as a distinct neural representation of brain activity that correctly distinguishes human vocalizations from other auditory object categories. Across a series of auditory categorization tests, we found that bilateral superior and middle temporal cortex consistently exhibited robust sensitivity to human vocal sounds. Although the strongest categorization was in distinguishing human voice from other categories, subsets of these regions were also able to distinguish reliably between nonhuman categories, suggesting a general role in auditory object categorization. Our findings complement the current evidence of cortical sensitivity to human vocal sounds by revealing that the greatest sensitivity during categorization tasks is devoted to distinguishing voice from nonvoice categories within human temporal cortex.

  16. Neurotrophic factor intervention restores auditory function in deafened animals

    NASA Astrophysics Data System (ADS)

    Shinohara, Takayuki; Bredberg, Göran; Ulfendahl, Mats; Pyykkö, Ilmari; Petri Olivius, N.; Kaksonen, Risto; Lindström, Bo; Altschuler, Richard; Miller, Josef M.

    2002-02-01

    A primary cause of deafness is damage of receptor cells in the inner ear. Clinically, it has been demonstrated that effective functionality can be provided by electrical stimulation of the auditory nerve, thus bypassing damaged receptor cells. However, subsequent to sensory cell loss there is a secondary degeneration of the afferent nerve fibers, resulting in reduced effectiveness of such cochlear prostheses. The effects of neurotrophic factors were tested in a guinea pig cochlear prosthesis model. After chemical deafening to mimic the clinical situation, the neurotrophic factors brain-derived neurotrophic factor and an analogue of ciliary neurotrophic factor were infused directly into the cochlea of the inner ear for 26 days by using an osmotic pump system. An electrode introduced into the cochlea was used to elicit auditory responses just as in patients implanted with cochlear prostheses. Intervention with brain-derived neurotrophic factor and the ciliary neurotrophic factor analogue not only increased the survival of auditory spiral ganglion neurons, but significantly enhanced the functional responsiveness of the auditory system as measured by using electrically evoked auditory brainstem responses. This demonstration that neurotrophin intervention enhances threshold sensitivity within the auditory system will have great clinical importance for the treatment of deaf patients with cochlear prostheses. The findings have direct implications for the enhancement of responsiveness in deafferented peripheral nerves.

  17. Restoration of multichannel microwave radiometric images

    NASA Technical Reports Server (NTRS)

    Chin, R. T.; Yeh, C.-L.; Olson, W. S.

    1985-01-01

    A constrained iterative image restoration method is applied to multichannel diffraction-limited imagery. This method is based on the Gerchberg-Papoulis algorithm utilizing incomplete information and partial constraints. The procedure is described using the orthogonal projection operators which project onto two prescribed subspaces iteratively. Its properties and limitations are presented. The effect of noise was investigated and a better understanding of the performance of the algorithm with noisy data has been achieved. The restoration scheme with the selection of appropriate constraints was applied to a practical problem. The 6.6, 10.7, 18, and 21 GHz satellite images obtained by the scanning multichannel microwave radiometer (SMMR), each having different spatial resolution, were restored to a common, high resolution (that of the 37 GHz channels) to demonstrate the effectiveness of the method. Both simulated data and real data were used in this study. The restored multichannel images may be utilized to retrieve rainfall distributions.
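
The Gerchberg-Papoulis iteration underlying this method alternates between two projections: band-limiting in the Fourier domain and re-imposing the measured data. A minimal 1D sketch follows; the actual SMMR restoration is two-dimensional, multichannel, and uses additional constraints beyond the two shown here.

```python
import numpy as np

def gerchberg_papoulis(observed, known_mask, band_mask, n_iter=200):
    """Alternating-projection restoration of a band-limited signal.

    observed   : length-N array, valid only where known_mask is True
    known_mask : boolean array marking the measured samples
    band_mask  : boolean array over FFT bins marking the allowed band
    """
    x = np.where(known_mask, observed, 0.0)
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X[~band_mask] = 0.0                    # project onto band-limited subspace
        x = np.fft.ifft(X).real
        x[known_mask] = observed[known_mask]   # project onto the data constraint
    return x
```

Each pass projects the current estimate onto one prescribed subspace and then the other, so the iterate never moves away from a signal consistent with both constraints.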

  18. Restoration of multichannel microwave radiometric images.

    PubMed

    Chin, R T; Yeh, C L; Olson, W S

    1985-04-01

    A constrained iterative image restoration method is applied to multichannel diffraction-limited imagery. This method is based on the Gerchberg-Papoulis algorithm utilizing incomplete information and partial constraints. The procedure is described using the orthogonal projection operators which project onto two prescribed subspaces iteratively. Its properties and limitations are presented. The effect of noise was investigated and a better understanding of the performance of the algorithm with noisy data has been achieved. The restoration scheme with the selection of appropriate constraints was applied to a practical problem. The 6.6, 10.7, 18, and 21 GHz satellite images obtained by the scanning multichannel microwave radiometer (SMMR), each having different spatial resolution, were restored to a common, high resolution (that of the 37 GHz channels) to demonstrate the effectiveness of the method. Both simulated data and real data were used in this study. The restored multichannel images may be utilized to retrieve rainfall distributions.

  19. Multichannel framework for singular quantum mechanics

    SciTech Connect

    Camblong, Horacio E.; Epele, Luis N.; Fanchiotti, Huner; García Canal, Carlos A.; Ordóñez, Carlos R.

    2014-01-15

    A multichannel S-matrix framework for singular quantum mechanics (SQM) subsumes the renormalization and self-adjoint extension methods and resolves its boundary-condition ambiguities. In addition to the standard channel accessible to a distant (“asymptotic”) observer, one supplementary channel opens up at each coordinate singularity, where local outgoing and ingoing singularity waves coexist. The channels are linked by a fully unitary S-matrix, which governs all possible scenarios, including cases with an apparent nonunitary behavior as viewed from asymptotic distances. -- Highlights: •A multichannel framework is proposed for singular quantum mechanics and analogues. •The framework unifies several established approaches for singular potentials. •Singular points are treated as new scattering channels. •Nonunitary asymptotic behavior is subsumed in a unitary multichannel S-matrix. •Conformal quantum mechanics and the inverse quartic potential are highlighted.
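
The key claim, apparent nonunitarity in the asymptotic channel that is restored by the supplementary singularity channel, can be illustrated with a toy two-channel S-matrix in an eigenphase parameterization; the mixing angle and phases below are arbitrary illustrative values, not taken from the paper.

```python
import numpy as np

def two_channel_s_matrix(theta, delta1, delta2):
    # Real orthogonal mixing of the asymptotic channel with the
    # singularity channel, followed by unitary eigenphases.
    c, s = np.cos(theta), np.sin(theta)
    U = np.array([[c, s], [-s, c]])
    D = np.diag([np.exp(1j * delta1), np.exp(1j * delta2)])
    return U @ D @ U.T   # symmetric and unitary by construction

S = two_channel_s_matrix(theta=0.3, delta1=0.5, delta2=1.2)
# |S[0,0]|**2 < 1: probability appears lost to an observer who sees only
# the asymptotic channel, yet each row of |S|**2 sums to 1, so flux is
# conserved once the singularity channel is included.
```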

  20. Prestimulus Network Integration of Auditory Cortex Predisposes Near-Threshold Perception Independently of Local Excitability.

    PubMed

    Leske, Sabine; Ruhnau, Philipp; Frey, Julia; Lithari, Chrysa; Müller, Nadia; Hartmann, Thomas; Weisz, Nathan

    2015-12-01

    An ever-increasing number of studies are pointing to the importance of network properties of the brain for understanding behavior such as conscious perception. However, with regards to the influence of prestimulus brain states on perception, this network perspective has rarely been taken. Our recent framework predicts that brain regions crucial for a conscious percept are coupled prior to stimulus arrival, forming pre-established pathways of information flow and influencing perceptual awareness. Using magnetoencephalography (MEG) and graph theoretical measures, we investigated auditory conscious perception in a near-threshold (NT) task and found strong support for this framework. Relevant auditory regions showed an increased prestimulus interhemispheric connectivity. The left auditory cortex was characterized by a hub-like behavior and an enhanced integration into the brain functional network prior to perceptual awareness. Right auditory regions were decoupled from non-auditory regions, presumably forming an integrated information processing unit with the left auditory cortex. In addition, we show for the first time for the auditory modality that local excitability, measured by decreased alpha power in the auditory cortex, increases prior to conscious percepts. Importantly, we were able to show that connectivity states seem to be largely independent from local excitability states in the context of a NT paradigm.

  1. Pilocarpine Seizures Cause Age-Dependent Impairment in Auditory Location Discrimination

    ERIC Educational Resources Information Center

    Neill, John C.; Liu, Zhao; Mikati, Mohammad; Holmes, Gregory L.

    2005-01-01

    Children who have status epilepticus have continuous or rapidly repeating seizures that may be life-threatening and may cause life-long changes in brain and behavior. The extent to which status epilepticus causes deficits in auditory discrimination is unknown. A naturalistic auditory location discrimination method was used to evaluate this…

  2. Prestimulus Network Integration of Auditory Cortex Predisposes Near-Threshold Perception Independently of Local Excitability

    PubMed Central

    Leske, Sabine; Ruhnau, Philipp; Frey, Julia; Lithari, Chrysa; Müller, Nadia; Hartmann, Thomas; Weisz, Nathan

    2015-01-01

    An ever-increasing number of studies are pointing to the importance of network properties of the brain for understanding behavior such as conscious perception. However, with regards to the influence of prestimulus brain states on perception, this network perspective has rarely been taken. Our recent framework predicts that brain regions crucial for a conscious percept are coupled prior to stimulus arrival, forming pre-established pathways of information flow and influencing perceptual awareness. Using magnetoencephalography (MEG) and graph theoretical measures, we investigated auditory conscious perception in a near-threshold (NT) task and found strong support for this framework. Relevant auditory regions showed an increased prestimulus interhemispheric connectivity. The left auditory cortex was characterized by a hub-like behavior and an enhanced integration into the brain functional network prior to perceptual awareness. Right auditory regions were decoupled from non-auditory regions, presumably forming an integrated information processing unit with the left auditory cortex. In addition, we show for the first time for the auditory modality that local excitability, measured by decreased alpha power in the auditory cortex, increases prior to conscious percepts. Importantly, we were able to show that connectivity states seem to be largely independent from local excitability states in the context of a NT paradigm. PMID:26408799

  3. Hearing it right: Evidence of hemispheric lateralization in auditory imagery.

    PubMed

    Prete, Giulia; Marzoli, Daniele; Brancucci, Alfredo; Tommasi, Luca

    2016-02-01

    An advantage of the right ear (REA) in auditory processing (especially for verbal content) has been firmly established in decades of behavioral, electrophysiological and neuroimaging research. The laterality of auditory imagery, however, has received little attention, despite its potential relevance for the understanding of auditory hallucinations and related phenomena. In Experiments 1-4 we find that right-handed participants required to imagine hearing a voice or a sound unilaterally show a strong population bias to localize the self-generated auditory image at their right ear, likely the result of left-hemispheric dominance in auditory processing. In Experiments 5-8 - by means of the same paradigm - it was also ascertained that the right-ear bias for hearing imagined voices depends just on auditory attention mechanisms, as biases due to other factors (i.e., lateralized movements) were controlled. These results, suggesting a central role of the left hemisphere in auditory imagery, demonstrate that brain asymmetries can drive strong lateral biases in mental imagery.

  4. Auditory brainstem response to complex sounds: a tutorial

    PubMed Central

    Skoe, Erika; Kraus, Nina

    2010-01-01

    This tutorial provides a comprehensive overview of the methodological approach to collecting and analyzing auditory brainstem responses to complex sounds (cABRs). cABRs provide a window into how behaviorally relevant sounds such as speech and music are processed in the brain. Because temporal and spectral characteristics of sounds are preserved in this subcortical response, cABRs can be used to assess specific impairments and enhancements in auditory processing. Notably, subcortical function is neither passive nor hardwired but dynamically interacts with higher-level cognitive processes to refine how sounds are transcribed into neural code. This experience-dependent plasticity, which can occur on a number of time scales (e.g., life-long experience with speech or music, short-term auditory training, online auditory processing), helps shape sensory perception. Thus, by being an objective and non-invasive means for examining cognitive function and experience-dependent processes in sensory activity, cABRs have considerable utility in the study of populations where auditory function is of interest (e.g., auditory experts such as musicians, persons with hearing loss, auditory processing and language disorders). This tutorial is intended for clinicians and researchers seeking to integrate cABRs into their clinical and/or research programs. PMID:20084007

  5. A corollary discharge maintains auditory sensitivity during sound production

    NASA Astrophysics Data System (ADS)

    Poulet, James F. A.; Hedwig, Berthold

    2002-08-01

    Speaking and singing present the auditory system of the caller with two fundamental problems: discriminating between self-generated and external auditory signals and preventing desensitization. In humans and many other vertebrates, auditory neurons in the brain are inhibited during vocalization but little is known about the nature of the inhibition. Here we show, using intracellular recordings of auditory neurons in the singing cricket, that presynaptic inhibition of auditory afferents and postsynaptic inhibition of an identified auditory interneuron occur in phase with the song pattern. Presynaptic and postsynaptic inhibition persist in a fictively singing, isolated cricket central nervous system and are therefore the result of a corollary discharge from the singing motor network. Mimicking inhibition in the interneuron by injecting hyperpolarizing current suppresses its spiking response to a 100-dB sound pressure level (SPL) acoustic stimulus and maintains its response to subsequent, quieter stimuli. Inhibition by the corollary discharge reduces the neural response to self-generated sound and protects the cricket's auditory pathway from self-induced desensitization.

  6. Optical multichannel sensing of skin blood pulsations

    NASA Astrophysics Data System (ADS)

    Spigulis, Janis; Erts, Renars; Kukulis, Indulis; Ozols, Maris; Prieditis, Karlis

    2004-09-01

    Time-resolved detection and analysis of skin back-scattered optical signals (reflection photoplethysmography, or PPG) provide information on skin blood volume pulsations and can serve for cardiovascular assessment. The multi-channel PPG concept has been developed and clinically verified in this study. Portable two- and four-channel PPG monitoring devices have been designed for real-time data acquisition and processing. The multi-channel devices were successfully applied to cardiovascular fitness tests and to early detection of arterial occlusions in the extremities. The optically measured heartbeat pulse-wave propagation made it possible to estimate relative arterial resistances for numerous patients and healthy volunteers.
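
One common way to turn two-channel PPG recordings into a propagation measure is to estimate the pulse transit time between a proximal and a distal site from the peak of their cross-correlation. The helper below is a generic sketch of that idea, not necessarily the estimator used in the study above.

```python
import numpy as np

def pulse_transit_time(proximal, distal, fs):
    # Delay (in seconds) of the distal PPG waveform relative to the
    # proximal one, taken from the peak of their cross-correlation.
    p = proximal - np.mean(proximal)
    d = distal - np.mean(distal)
    corr = np.correlate(d, p, mode="full")
    lag = int(np.argmax(corr)) - (len(p) - 1)   # positive lag = distal later
    return lag / fs
```

With the sampling rate known, the lag in samples converts directly to a transit time, from which relative arterial stiffness or resistance indices can be derived.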

  7. Restoration of multichannel microwave radiometric images

    NASA Technical Reports Server (NTRS)

    Chin, R. T.; Yeh, C. L.; Olson, W. S.

    1983-01-01

    A constrained iterative image restoration method is applied to multichannel diffraction-limited imagery. This method is based on the Gerchberg-Papoulis algorithm utilizing incomplete information and partial constraints. The procedure is described using the orthogonal projection operators which project onto two prescribed subspaces iteratively. Some of its properties and limitations are also presented. The selection of appropriate constraints was emphasized in a practical application. Multichannel microwave images, each having different spatial resolution, were restored to a common highest resolution to demonstrate the effectiveness of the method. Both noise-free and noisy images were used in this investigation.

  8. Modality specific neural correlates of auditory and somatic hallucinations

    PubMed Central

    Shergill, S; Cameron, L; Brammer, M; Williams, S; Murray, R; McGuire, P

    2001-01-01

    Somatic hallucinations occur in schizophrenia and other psychotic disorders, although auditory hallucinations are more common. Although the neural correlates of auditory hallucinations have been described in several neuroimaging studies, little is known of the pathophysiology of somatic hallucinations. Functional magnetic resonance imaging (fMRI) was used to compare the distribution of brain activity during somatic and auditory verbal hallucinations, occurring at different times in a 36 year old man with schizophrenia. Somatic hallucinations were associated with activation in the primary somatosensory and posterior parietal cortex, areas that normally mediate tactile perception. Auditory hallucinations were associated with activation in the middle and superior temporal cortex, areas involved in processing external speech. Hallucinations in a given modality seem to involve areas that normally process sensory information in that modality. PMID:11606687

  9. Multichannel DC SQUID sensor array for biomagnetic applications

    SciTech Connect

    Hoenig, H.E.; Daalmans, G.M.; Bar, L.; Bommel, F.; Paulus, A.; Uhl, D.; Weisse, H.J.; Schneider, S.; Seifert, H.; Reichenberger, H.; Abraham-Fuchs, K.

    1991-03-01

    This paper reports on KRENIKON, a biomagnetic multichannel system developed for medical diagnosis of the brain and heart. 37 axial 2nd-order gradiometers - manufactured as flexible superconducting printed circuits - are arranged in a circular flat array of 19 cm diameter. Additionally, 3 orthogonal magnetometers are provided. The DC SQUIDs are fabricated in all-Nb technology, ten on a chip. The sensor system is operated in a shielded room with two layers of soft magnetic material and one layer of Al. The everyday noise level is 10 fT/√Hz at frequencies above 10 Hz. Within 2 years of operation in a normal urban surrounding, useful clinical applications have been demonstrated (e.g. for epilepsy and heart arrhythmias).

  10. Stretchable multichannel antennas in soft wireless optoelectronic implants for optogenetics.

    PubMed

    Park, Sung Il; Shin, Gunchul; McCall, Jordan G; Al-Hasani, Ream; Norris, Aaron; Xia, Li; Brenner, Daniel S; Noh, Kyung Nim; Bang, Sang Yun; Bhatti, Dionnet L; Jang, Kyung-In; Kang, Seung-Kyun; Mickle, Aaron D; Dussor, Gregory; Price, Theodore J; Gereau, Robert W; Bruchas, Michael R; Rogers, John A

    2016-12-13

    Optogenetic methods to modulate cells and signaling pathways via targeted expression and activation of light-sensitive proteins have greatly accelerated the process of mapping complex neural circuits and defining their roles in physiological and pathological contexts. Recently demonstrated technologies based on injectable, microscale inorganic light-emitting diodes (μ-ILEDs) with wireless control and power delivery strategies offer important functionality in such experiments, by eliminating the external tethers associated with traditional fiber optic approaches. Existing wireless μ-ILED embodiments allow, however, illumination only at a single targeted region of the brain with a single optical wavelength and over spatial ranges of operation that are constrained by the radio frequency power transmission hardware. Here we report stretchable, multiresonance antennas and battery-free schemes for multichannel wireless operation of independently addressable, multicolor μ-ILEDs with fully implantable, miniaturized platforms. This advance, as demonstrated through in vitro and in vivo studies using thin, mechanically soft systems that separately control as many as three different μ-ILEDs, relies on specially designed stretchable antennas in which parallel capacitive coupling circuits yield several independent, well-separated operating frequencies, as verified through experimental and modeling results. When used in combination with active motion-tracking antenna arrays, these devices enable multichannel optogenetic research on complex behavioral responses in groups of animals over large areas at low levels of radio frequency power (<1 W). Studies of the regions of the brain that are involved in sleep arousal (locus coeruleus) and preference/aversion (nucleus accumbens) demonstrate the unique capabilities of these technologies.

  11. Auditory evoked responses to rhythmic sound pulses in dolphins.

    PubMed

    Popov, V V; Supin, A Y

    1998-10-01

The ability of auditory evoked potentials to follow sound pulse (click or pip) rate was studied in bottlenosed dolphins. Sound pulses were presented in 20-ms rhythmic trains separated by 80-ms pauses. Rhythmic click or pip trains evoked a quasi-sustained response consisting of a sequence of auditory brainstem responses. This was designated as the rate-following response. Rate-following response peak-to-peak amplitude dependence on sound pulse rate was almost flat up to 200 s-1, then displayed a few peaks and valleys superimposed on a low-pass filtering function with a cut-off frequency of 1700 s-1 at a 0.1-amplitude level. Peaks and valleys of the function corresponded to the pattern of the single auditory brain stem response spectrum; the low-pass cut-off frequency was below the auditory brain stem response spectrum bandwidth. Rate-following response frequency composition (magnitudes of the fundamental and harmonics) corresponded to the auditory brain stem response frequency spectrum except for lower fundamental magnitudes at frequencies above 1700 Hz. These regularities were similar for both click and pip trains. The rate-following response to steady-state rhythmic stimulation was similar to the rate-following response evoked by short trains except for a slight amplitude decrease with the rate increase above 10 s-1. The latter effect is attributed to a long-term rate-dependent adaptation in conditions of the steady-state pulse stimulation.

  12. Role of the auditory system in speech production.

    PubMed

    Guenther, Frank H; Hickok, Gregory

    2015-01-01

    This chapter reviews evidence regarding the role of auditory perception in shaping speech output. Evidence indicates that speech movements are planned to follow auditory trajectories. This in turn is followed by a description of the Directions Into Velocities of Articulators (DIVA) model, which provides a detailed account of the role of auditory feedback in speech motor development and control. A brief description of the higher-order brain areas involved in speech sequencing (including the pre-supplementary motor area and inferior frontal sulcus) is then provided, followed by a description of the Hierarchical State Feedback Control (HSFC) model, which posits internal error detection and correction processes that can detect and correct speech production errors prior to articulation. The chapter closes with a treatment of promising future directions of research into auditory-motor interactions in speech, including the use of intracranial recording techniques such as electrocorticography in humans, the investigation of the potential roles of various large-scale brain rhythms in speech perception and production, and the development of brain-computer interfaces that use auditory feedback to allow profoundly paralyzed users to learn to produce speech using a speech synthesizer.

  13. The function of BDNF in the adult auditory system.

    PubMed

    Singer, Wibke; Panford-Walsh, Rama; Knipper, Marlies

    2014-01-01

    The inner ear of vertebrates is specialized to perceive sound, gravity and movements. Each of the specialized sensory organs within the cochlea (sound) and vestibular system (gravity, head movements) transmits information to specific areas of the brain. During development, brain-derived neurotrophic factor (BDNF) orchestrates the survival and outgrowth of afferent fibers connecting the vestibular organ and those regions in the cochlea that map information for low frequency sound to central auditory nuclei and higher-auditory centers. The role of BDNF in the mature inner ear is less understood. This is mainly due to the fact that constitutive BDNF mutant mice are postnatally lethal. Only in the last few years has the improved technology of performing conditional cell specific deletion of BDNF in vivo allowed the study of the function of BDNF in the mature developed organ. This review provides an overview of the current knowledge of the expression pattern and function of BDNF in the peripheral and central auditory system from just prior to the first auditory experience onwards. A special focus will be put on the differential mechanisms in which BDNF drives refinement of auditory circuitries during the onset of sensory experience and in the adult brain. This article is part of the Special Issue entitled 'BDNF Regulation of Synaptic Structure, Function, and Plasticity'.

  14. Reboxetine Improves Auditory Attention and Increases Norepinephrine Levels in the Auditory Cortex of Chronically Stressed Rats

    PubMed Central

    Pérez-Valenzuela, Catherine; Gárate-Pérez, Macarena F.; Sotomayor-Zárate, Ramón; Delano, Paul H.; Dagnino-Subiabre, Alexies

    2016-01-01

Chronic stress impairs auditory attention in rats and monoamines regulate neurotransmission in the primary auditory cortex (A1), a brain area that modulates auditory attention. In this context, we hypothesized that norepinephrine (NE) levels in A1 correlate with the auditory attention performance of chronically stressed rats. The first objective of this research was to evaluate whether chronic stress affects monoamine levels in A1. Male Sprague–Dawley rats were subjected to chronic stress (restraint stress) and monoamine levels were measured by high performance liquid chromatography (HPLC) with electrochemical detection. Chronically stressed rats had lower levels of NE in A1 than did controls, while chronic stress did not affect serotonin (5-HT) and dopamine (DA) levels. The second aim was to determine the effects of reboxetine (a selective inhibitor of NE reuptake) on auditory attention and NE levels in A1. Rats were trained to discriminate between two tones of different frequencies in a two-alternative choice task (2-ACT), a behavioral paradigm to study auditory attention in rats. Trained animals that reached a performance of ≥80% correct trials in the 2-ACT were randomly assigned to control and stress experimental groups. To analyze the effects of chronic stress on the auditory task, trained rats of both groups were subjected to 50 2-ACT trials 1 day before and 1 day after the chronic stress period. A difference score (DS) was determined by subtracting the number of correct trials after the chronic stress protocol from those before. An unexpected result was that vehicle-treated control rats and vehicle-treated chronically stressed rats had similar performances in the attentional task, suggesting that repeated injections with vehicle were stressful for control animals and deteriorated their auditory attention. In this regard, both auditory attention and NE levels in A1 were higher in chronically stressed rats treated with reboxetine than in vehicle

  15. Psychophysical and Neural Correlates of Auditory Attraction and Aversion

    NASA Astrophysics Data System (ADS)

    Patten, Kristopher Jakob

    This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based on auditory scene analysis, the process through which listeners parse auditory signals into individual acoustic sources. The first experiment tests and confirms that a self-rated pleasantness continuum reliably exists for 20 various stimuli (r = .48). In addition, the pleasantness continuum correlated with the physical acoustic characteristics of consonance/dissonance (r = .78), which can facilitate auditory parsing processes. The second experiment uses an fMRI block design to test blood oxygen level dependent (BOLD) changes elicited by a subset of 5 exemplar stimuli chosen from Experiment 1 that are evenly distributed over the pleasantness continuum. Specifically, it tests and confirms that the pleasantness continuum produces systematic changes in brain activity for unpleasant acoustic stimuli beyond what occurs with pleasant auditory stimuli. Results revealed that the combination of two positively and two negatively valenced experimental sounds compared to one neutral baseline control elicited BOLD increases in the primary auditory cortex, specifically the bilateral superior temporal gyrus, and left dorsomedial prefrontal cortex; the latter being consistent with a frontal decision-making process common in identification tasks. The negatively-valenced stimuli yielded additional BOLD increases in the left insula, which typically indicates processing of visceral emotions. The positively-valenced stimuli did not yield any significant BOLD activation, consistent with consonant, harmonic stimuli being the prototypical acoustic pattern of auditory objects that is optimal for auditory scene analysis. 
Both the psychophysical findings of Experiment 1 and the neural processing findings of Experiment 2 support that consonance is an important dimension of sound that is processed in a manner that aids

  16. Cross-Modal Plasticity in Higher-Order Auditory Cortex of Congenitally Deaf Cats Does Not Limit Auditory Responsiveness to Cochlear Implants

    PubMed Central

    Baumhoff, Peter; Tillein, Jochen; Lomber, Stephen G.; Hubka, Peter; Kral, Andrej

    2016-01-01

Congenital sensory deprivation can lead to reorganization of the deprived cortical regions by another sensory system. Such cross-modal reorganization may either compete with or complement the “original” inputs to the deprived area after sensory restoration and can thus be either adverse or beneficial for sensory restoration. In congenital deafness, a previous inactivation study documented that supranormal visual behavior was mediated by higher-order auditory fields in congenitally deaf cats (CDCs). However, both the auditory responsiveness of “deaf” higher-order fields and interactions between the reorganized and the original sensory input remain unknown. Here, we studied a higher-order auditory field responsible for the supranormal visual function in CDCs, the auditory dorsal zone (DZ). Hearing cats and visual cortical areas served as a control. Using mapping with microelectrode arrays, we demonstrate spatially scattered visual (cross-modal) responsiveness in the DZ, but show that this did not interfere substantially with robust auditory responsiveness elicited through cochlear implants. Visually responsive and auditory-responsive neurons in the deaf auditory cortex formed two distinct populations that did not show bimodal interactions. Therefore, cross-modal plasticity in the deaf higher-order auditory cortex had limited effects on auditory inputs. The moderate number of scattered cross-modally responsive neurons could be the consequence of exuberant connections formed during development that were not pruned postnatally in deaf cats. Although juvenile brain circuits are modified extensively by experience, the main driving input to the cross-modally (visually) reorganized higher-order auditory cortex remained auditory in congenital deafness. SIGNIFICANCE STATEMENT In a common view, the “unused” auditory cortex of deaf individuals is reorganized to a compensatory sensory function during development. According to this view, cross-modal plasticity takes

  17. A multi-channel waveform digitizer system

    SciTech Connect

Bieser, F.; Muller, W.F.J.

    1990-04-01

    The authors report on the design and performance of a multichannel waveform digitizer system for use with the Multiple Sample Ionization Chamber (MUSIC) Detector at the Bevalac. 128 channels of 20 MHz Flash ADC plus 256 word deep memory are housed in a single crate. Digital thresholds and hit pattern logic facilitate zero suppression during readout which is performed over a standard VME bus.
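The zero suppression described here means that only samples crossing a digital threshold are transferred at readout, with a hit-pattern word recording which samples fired. A toy sketch of the idea in Python (function names and the bit-mask representation are illustrative only; the actual MUSIC readout is implemented in the digitizer hardware):

```python
def zero_suppress(samples, threshold):
    """Keep only (index, value) pairs above a digital threshold,
    mimicking readout-time zero suppression in a waveform digitizer."""
    return [(i, v) for i, v in enumerate(samples) if v > threshold]

def hit_pattern(samples, threshold):
    """Bit mask with bit i set when sample i crosses the threshold."""
    bits = 0
    for i, v in enumerate(samples):
        if v > threshold:
            bits |= 1 << i
    return bits
```

For a waveform `[0, 5, 1, 9]` with threshold 2, only indices 1 and 3 survive readout, and the hit pattern is `0b1010`.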

  18. Virtual Auditory Displays

    DTIC Science & Technology

    2000-01-01

timbre, intensity, distance, room modeling, radio communication Virtual Environments Handbook Chapter 4 Virtual Auditory Displays Russell D... musical note “A” as a pure sinusoid, there will be 440 condensations and rarefactions per second. The distance between two adjacent condensations or...and complexity are pitch, loudness, and timbre respectively. This distinction between physical and perceptual measures of sound properties is an

  19. Modelling auditory attention.

    PubMed

    Kaya, Emine Merve; Elhilali, Mounya

    2017-02-19

    Sounds in everyday life seldom appear in isolation. Both humans and machines are constantly flooded with a cacophony of sounds that need to be sorted through and scoured for relevant information-a phenomenon referred to as the 'cocktail party problem'. A key component in parsing acoustic scenes is the role of attention, which mediates perception and behaviour by focusing both sensory and cognitive resources on pertinent information in the stimulus space. The current article provides a review of modelling studies of auditory attention. The review highlights how the term attention refers to a multitude of behavioural and cognitive processes that can shape sensory processing. Attention can be modulated by 'bottom-up' sensory-driven factors, as well as 'top-down' task-specific goals, expectations and learned schemas. Essentially, it acts as a selection process or processes that focus both sensory and cognitive resources on the most relevant events in the soundscape; with relevance being dictated by the stimulus itself (e.g. a loud explosion) or by a task at hand (e.g. listen to announcements in a busy airport). Recent computational models of auditory attention provide key insights into its role in facilitating perception in cluttered auditory scenes.This article is part of the themed issue 'Auditory and visual scene analysis'.

  20. Modelling auditory attention

    PubMed Central

    Kaya, Emine Merve

    2017-01-01

    Sounds in everyday life seldom appear in isolation. Both humans and machines are constantly flooded with a cacophony of sounds that need to be sorted through and scoured for relevant information—a phenomenon referred to as the ‘cocktail party problem’. A key component in parsing acoustic scenes is the role of attention, which mediates perception and behaviour by focusing both sensory and cognitive resources on pertinent information in the stimulus space. The current article provides a review of modelling studies of auditory attention. The review highlights how the term attention refers to a multitude of behavioural and cognitive processes that can shape sensory processing. Attention can be modulated by ‘bottom-up’ sensory-driven factors, as well as ‘top-down’ task-specific goals, expectations and learned schemas. Essentially, it acts as a selection process or processes that focus both sensory and cognitive resources on the most relevant events in the soundscape; with relevance being dictated by the stimulus itself (e.g. a loud explosion) or by a task at hand (e.g. listen to announcements in a busy airport). Recent computational models of auditory attention provide key insights into its role in facilitating perception in cluttered auditory scenes. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044012

  1. Auditory Fusion in Children.

    ERIC Educational Resources Information Center

    Davis, Sylvia M.; McCroskey, Robert L.

    1980-01-01

Focuses on auditory fusion (defined in terms of a listener's ability to distinguish paired acoustic events from single acoustic events) in 3- to 12-year-old children. The subjects listened to 270 pairs of tones controlled for frequency, intensity, and duration. (CM)

  2. Incidental Auditory Category Learning

    PubMed Central

    Gabay, Yafit; Dick, Frederic K.; Zevin, Jason D.; Holt, Lori L.

    2015-01-01

    Very little is known about how auditory categories are learned incidentally, without instructions to search for category-diagnostic dimensions, overt category decisions, or experimenter-provided feedback. This is an important gap because learning in the natural environment does not arise from explicit feedback and there is evidence that the learning systems engaged by traditional tasks are distinct from those recruited by incidental category learning. We examined incidental auditory category learning with a novel paradigm, the Systematic Multimodal Associations Reaction Time (SMART) task, in which participants rapidly detect and report the appearance of a visual target in one of four possible screen locations. Although the overt task is rapid visual detection, a brief sequence of sounds precedes each visual target. These sounds are drawn from one of four distinct sound categories that predict the location of the upcoming visual target. These many-to-one auditory-to-visuomotor correspondences support incidental auditory category learning. Participants incidentally learn categories of complex acoustic exemplars and generalize this learning to novel exemplars and tasks. Further, learning is facilitated when category exemplar variability is more tightly coupled to the visuomotor associations than when the same stimulus variability is experienced across trials. We relate these findings to phonetic category learning. PMID:26010588

  3. Cortical functional connectivity under different auditory attentional efforts.

    PubMed

    Hong, Xiangfei; Tong, Shanbao

    2012-01-01

Auditory attentional effort (AAE) can be tuned to different levels in a top-down manner, yet its neural correlates are still poorly understood. In this paper, we investigate cortical connectivity under different levels of AAE. Multichannel EEG signals were recorded from nine subjects (male/female = 6/3) in an auditory discrimination task under low or high AAE. Behavioral results showed that subjects paid more attention and detected the probe stimuli better under high AAE than under low AAE. Partial directed coherence (PDC) was used to study the cortical functional connectivity within the first 300 ms post-stimulus period, which includes the N100 and P200 components of the event-related potential (ERP). The majority of the cortical connections were strengthened as AAE increased. The right hemispheric dominance of connectivity in maintaining auditory attention was found under low AAE but disappeared when the AAE was increased, indicating that the right hemispheric dominance previously reported might be due to a relatively lower AAE. Moreover, most cortical connections under high AAE were directed from the parietal cortex to the prefrontal cortex, suggesting an initiative role of the parietal cortex in maintaining a high AAE.
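Partial directed coherence derives directed, frequency-domain connectivity from the coefficients of a multivariate autoregressive (MVAR) model fitted to the multichannel EEG. A minimal NumPy sketch of the standard PDC formulation (the least-squares MVAR fit and function names are illustrative, not the authors' actual pipeline):

```python
import numpy as np

def fit_mvar(x, p):
    """Least-squares fit of an order-p MVAR model.
    x: (n_channels, n_samples). Returns A with shape (p, n, n)."""
    n, T = x.shape
    Y = x[:, p:]                                                   # targets
    Z = np.vstack([x[:, p - r : T - r] for r in range(1, p + 1)])  # lagged regressors
    B = Y @ Z.T @ np.linalg.pinv(Z @ Z.T)                          # (n, n*p)
    return B.reshape(n, p, n).transpose(1, 0, 2)

def pdc(A, freqs):
    """Partial directed coherence at normalized frequencies (0..0.5).
    out[f, i, j] quantifies the directed influence of channel j on channel i."""
    p, n, _ = A.shape
    out = np.empty((len(freqs), n, n))
    for k, f in enumerate(freqs):
        Af = np.eye(n, dtype=complex)
        for r in range(1, p + 1):
            Af -= A[r - 1] * np.exp(-2j * np.pi * f * r)
        out[k] = np.abs(Af) / np.sqrt((np.abs(Af) ** 2).sum(axis=0))
    return out
```

By construction each source column is normalized, so the squared PDC values from one channel to all others sum to 1 at every frequency.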

  4. You can't stop the music: reduced auditory alpha power and coupling between auditory and memory regions facilitate the illusory perception of music during noise.

    PubMed

    Müller, Nadia; Keil, Julian; Obleser, Jonas; Schulz, Hannah; Grunwald, Thomas; Bernays, René-Ludwig; Huppertz, Hans-Jürgen; Weisz, Nathan

    2013-10-01

    Our brain has the capacity of providing an experience of hearing even in the absence of auditory stimulation. This can be seen as illusory conscious perception. While increasing evidence postulates that conscious perception requires specific brain states that systematically relate to specific patterns of oscillatory activity, the relationship between auditory illusions and oscillatory activity remains mostly unexplained. To investigate this we recorded brain activity with magnetoencephalography and collected intracranial data from epilepsy patients while participants listened to familiar as well as unknown music that was partly replaced by sections of pink noise. We hypothesized that participants have a stronger experience of hearing music throughout noise when the noise sections are embedded in familiar compared to unfamiliar music. This was supported by the behavioral results showing that participants rated the perception of music during noise as stronger when noise was presented in a familiar context. Time-frequency data show that the illusory perception of music is associated with a decrease in auditory alpha power pointing to increased auditory cortex excitability. Furthermore, the right auditory cortex is concurrently synchronized with the medial temporal lobe, putatively mediating memory aspects associated with the music illusion. We thus assume that neuronal activity in the highly excitable auditory cortex is shaped through extensive communication between the auditory cortex and the medial temporal lobe, thereby generating the illusion of hearing music during noise.

  5. Reduced object related negativity response indicates impaired auditory scene analysis in adults with autistic spectrum disorder.

    PubMed

    Lodhia, Veema; Brock, Jon; Johnson, Blake W; Hautus, Michael J

    2014-01-01

    Auditory Scene Analysis provides a useful framework for understanding atypical auditory perception in autism. Specifically, a failure to segregate the incoming acoustic energy into distinct auditory objects might explain the aversive reaction autistic individuals have to certain auditory stimuli or environments. Previous research with non-autistic participants has demonstrated the presence of an Object Related Negativity (ORN) in the auditory event related potential that indexes pre-attentive processes associated with auditory scene analysis. Also evident is a later P400 component that is attention dependent and thought to be related to decision-making about auditory objects. We sought to determine whether there are differences between individuals with and without autism in the levels of processing indexed by these components. Electroencephalography (EEG) was used to measure brain responses from a group of 16 autistic adults, and 16 age- and verbal-IQ-matched typically-developing adults. Auditory responses were elicited using lateralized dichotic pitch stimuli in which inter-aural timing differences create the illusory perception of a pitch that is spatially separated from a carrier noise stimulus. As in previous studies, control participants produced an ORN in response to the pitch stimuli. However, this component was significantly reduced in the participants with autism. In contrast, processing differences were not observed between the groups at the attention-dependent level (P400). These findings suggest that autistic individuals have difficulty segregating auditory stimuli into distinct auditory objects, and that this difficulty arises at an early pre-attentive level of processing.
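Lateralized dichotic pitch stimuli of this kind place an interaural phase difference in a narrow band of an otherwise identical noise, creating an illusory pitch spatially separated from the carrier. A minimal sketch of a Huggins-style construction, assuming a pi phase shift around a 600 Hz center (all parameter values are illustrative, not those used in the study):

```python
import numpy as np

def dichotic_pitch(fs=44100, dur=1.0, f0=600.0, bw=0.16, seed=0):
    """Return a (2, n) stereo noise whose channels are identical except
    for a pi interaural phase shift in a band of fractional width bw
    around f0, which evokes an illusory pitch near f0."""
    rng = np.random.default_rng(seed)
    n = int(fs * dur)
    noise = rng.standard_normal(n)
    spec = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    band = np.abs(freqs - f0) < 0.5 * bw * f0
    right = spec.copy()
    right[band] *= np.exp(1j * np.pi)          # pi phase shift in the band
    return np.stack([noise, np.fft.irfft(right, n)])
```

Outside the shifted band the two ears receive the same waveform; inside it, the components are inverted in one ear only.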

  6. Horseradish peroxidase dye tracing and embryonic statoacoustic ganglion cell transplantation in the rat auditory nerve trunk.

    PubMed

    Palmgren, Björn; Jin, Zhe; Jiao, Yu; Kostyszyn, Beata; Olivius, Petri

    2011-03-04

At present, severe damage to hair cells and sensory neurons in the inner ear results in non-treatable auditory disorders. Cell implantation is a potential treatment for various neurological disorders and has already been used in clinical practice. In the inner ear, delivery of therapeutic substances including neurotrophic factors and stem cells provides strategies that in the future may ameliorate or restore hearing impairment. In order to describe a surgical auditory nerve trunk approach, in the present paper we injected the neuronal tracer horseradish peroxidase (HRP) into the central part of the nerve by an intracranial approach. We further evaluated the applicability of the present approach by implanting statoacoustic ganglion (SAG) cells into the same location of the auditory nerve in normal hearing rats or animals deafened by application of β-bungarotoxin to the round window niche. The HRP results illustrate labeling in the cochlear nucleus in the brain stem as well as peripherally in the spiral ganglion neurons in the cochlea. The transplanted SAGs were observed within the auditory nerve trunk but no more peripheral than the CNS-PNS transitional zone. Interestingly, the auditory nerve injection did not impair auditory function, as evidenced by the auditory brainstem response. The present findings illustrate that an auditory nerve trunk approach may well access the entire auditory nerve and does not compromise auditory function. We suggest that such an approach might constitute a suitable route for cell transplantation into this sensory cranial nerve.

  7. Multichannel activity propagation across an engineered axon network

    NASA Astrophysics Data System (ADS)

    Chen, H. Isaac; Wolf, John A.; Smith, Douglas H.

    2017-04-01

… These results provide insight into how the brain potentially processes information and generates the neural code and could guide the development of clinical therapies based on multichannel brain stimulation.

  8. Auditory, Tactile, and Audiotactile Information Processing Following Visual Deprivation

    ERIC Educational Resources Information Center

    Occelli, Valeria; Spence, Charles; Zampini, Massimiliano

    2013-01-01

    We highlight the results of those studies that have investigated the plastic reorganization processes that occur within the human brain as a consequence of visual deprivation, as well as how these processes give rise to behaviorally observable changes in the perceptual processing of auditory and tactile information. We review the evidence showing…

  9. Auditory Technology and Its Impact on Bilingual Deaf Education

    ERIC Educational Resources Information Center

    Mertes, Jennifer

    2015-01-01

    Brain imaging studies suggest that children can simultaneously develop, learn, and use two languages. A visual language, such as American Sign Language (ASL), facilitates development at the earliest possible moments in a child's life. Spoken language development can be delayed due to diagnostic evaluations, device fittings, and auditory skill…

  10. Mind the Gap: Two Dissociable Mechanisms of Temporal Processing in the Auditory System

    PubMed Central

    Anderson, Lucy A.

    2016-01-01

    High temporal acuity of auditory processing underlies perception of speech and other rapidly varying sounds. A common measure of auditory temporal acuity in humans is the threshold for detection of brief gaps in noise. Gap-detection deficits, observed in developmental disorders, are considered evidence for “sluggish” auditory processing. Here we show, in a mouse model of gap-detection deficits, that auditory brain sensitivity to brief gaps in noise can be impaired even without a general loss of central auditory temporal acuity. Extracellular recordings in three different subdivisions of the auditory thalamus in anesthetized mice revealed a stimulus-specific, subdivision-specific deficit in thalamic sensitivity to brief gaps in noise in experimental animals relative to controls. Neural responses to brief gaps in noise were reduced, but responses to other rapidly changing stimuli unaffected, in lemniscal and nonlemniscal (but not polysensory) subdivisions of the medial geniculate body. Through experiments and modeling, we demonstrate that the observed deficits in thalamic sensitivity to brief gaps in noise arise from reduced neural population activity following noise offsets, but not onsets. These results reveal dissociable sound-onset-sensitive and sound-offset-sensitive channels underlying auditory temporal processing, and suggest that gap-detection deficits can arise from specific impairment of the sound-offset-sensitive channel. SIGNIFICANCE STATEMENT The experimental and modeling results reported here suggest a new hypothesis regarding the mechanisms of temporal processing in the auditory system. Using a mouse model of auditory temporal processing deficits, we demonstrate the existence of specific abnormalities in auditory thalamic activity following sound offsets, but not sound onsets. These results reveal dissociable sound-onset-sensitive and sound-offset-sensitive mechanisms underlying auditory processing of temporally varying sounds. Furthermore, the

  11. Hemodynamic responses in human multisensory and auditory association cortex to purely visual stimulation

    PubMed Central

    Meyer, Martin; Baumann, Simon; Marchina, Sarah; Jancke, Lutz

    2007-01-01

    Background Recent findings of a tight coupling between visual and auditory association cortices during multisensory perception in monkeys and humans raise the question of whether consistent paired presentation of simple visual and auditory stimuli prompts conditioned responses in unimodal auditory regions or multimodal association cortex once visual stimuli are presented in isolation in a post-conditioning run. To address this issue, fifteen healthy participants partook in a "silent" sparse temporal event-related fMRI study. In the first (visual control) habituation phase they were presented with briefly flashing red visual stimuli. In the second (auditory control) habituation phase they heard brief telephone ringing. In the third (conditioning) phase we coincidentally presented the visual stimulus (CS) paired with the auditory stimulus (UCS). In the fourth phase participants either viewed flashes paired with the auditory stimulus (maintenance, CS-) or viewed the visual stimulus in isolation (extinction, CS+) according to a 5:10 partial reinforcement schedule. The participants had no other task than attending to the stimuli and indicating the end of each trial by pressing a button. Results During unpaired visual presentations (preceding and following the paired presentation) we observed significant brain responses beyond primary visual cortex in the bilateral posterior auditory association cortex (planum temporale, planum parietale) and in the right superior temporal sulcus, whereas the primary auditory regions were not involved. By contrast, the activity in auditory core regions was markedly larger when participants were presented with auditory stimuli. Conclusion These results demonstrate involvement of multisensory and auditory association areas in perception of unimodal visual stimulation, which may reflect the instantaneous forming of multisensory associations and cannot be attributed to sensation of an auditory event. More importantly, we are able to show that brain…

  12. Spatiotemporal reconstruction of auditory steady-state responses to acoustic amplitude modulations: Potential sources beyond the auditory pathway.

    PubMed

    Farahani, Ehsan Darestani; Goossens, Tine; Wouters, Jan; van Wieringen, Astrid

    2017-03-01

    Investigating the neural generators of auditory steady-state responses (ASSRs), i.e., auditory evoked brain responses, with a wide range of screening and diagnostic applications, has been the focus of various studies for many years. Most of these studies employed a priori assumptions regarding the number and location of neural generators. The aim of this study is to reconstruct ASSR sources with minimal assumptions in order to gain in-depth insight into the number and location of brain regions that are activated in response to low- as well as high-frequency acoustically amplitude-modulated signals. In order to reconstruct ASSR sources, we applied independent component analysis with subsequent equivalent dipole modeling to single-subject EEG data (young adults, 20-30 years of age). These data were based on white noise stimuli, amplitude modulated at 4, 20, 40, or 80 Hz. The independent components that exhibited a significant ASSR were clustered among all participants by means of a probabilistic clustering method based on a Gaussian mixture model. Results suggest that a widely distributed network of sources, located in cortical as well as subcortical regions, is active in response to 4, 20, 40, and 80 Hz amplitude-modulated noise. Some of these sources are located beyond the central auditory pathway. Comparison of brain sources in response to different modulation frequencies suggested that the identified brain sources in the brainstem and the left and right auditory cortices show a higher responsiveness to 40 Hz than to the other modulation frequencies.
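    The component-selection step in pipelines like this one (keeping only independent components that exhibit a significant ASSR) is commonly based on the spectral power at the modulation frequency relative to neighbouring FFT bins. The sketch below illustrates that detection statistic on synthetic data; the function name, threshold, and parameter values are illustrative assumptions, not the study's actual test.

    ```python
    import numpy as np

    def assr_snr(x, fs, f_mod, n_neighbors=10):
        """Power at the modulation frequency divided by the mean power of
        neighbouring FFT bins; values well above 1 indicate a steady-state
        response at f_mod. (Illustrative statistic, not the study's test.)"""
        spec = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
        k = int(np.argmin(np.abs(freqs - f_mod)))       # bin closest to f_mod
        lo, hi = max(k - n_neighbors, 1), k + n_neighbors + 1
        neighbors = np.r_[spec[lo:k], spec[k + 1:hi]]   # exclude the target bin
        return spec[k] / neighbors.mean()

    # Synthetic EEG "component": a steady 40 Hz response buried in noise.
    rng = np.random.default_rng(0)
    fs, dur, f_mod = 1000, 4.0, 40.0
    t = np.arange(int(fs * dur)) / fs
    component = 0.5 * np.sin(2 * np.pi * f_mod * t) + 0.2 * rng.standard_normal(t.size)

    print(assr_snr(component, fs, f_mod) > 10.0)   # strong peak at the 40 Hz bin
    ```

    With 4 s of data at 1 kHz the FFT resolution is 0.25 Hz, so the 40 Hz bin is resolved exactly; real studies apply a formal F-ratio test with study-specific significance thresholds.
    
    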

  13. Preliminary Evidence for Reduced Auditory Lateral Suppression in Schizophrenia

    PubMed Central

    Ramage, Erin. M.; Weintraub, David M.; Vogel, Sally J.; Sutton, Griffin P.; Ringdahl, Erik N.; Allen, Daniel N.; Snyder, Joel S.

    2014-01-01

    Background Well-documented auditory processing deficits such as impaired frequency discrimination and reduced suppression of auditory brain responses in schizophrenia (SZ) may contribute to abnormal auditory functioning in everyday life. Lateral suppression of non-stimulated neurons by stimulated neurons has not been extensively assessed in SZ and likely plays an important role in precise encoding of sounds. Therefore, this study evaluated whether lateral suppression of activity in auditory cortex is impaired in SZ. Methods SZ participants and control participants watched a silent movie with subtitles while listening to trials composed of a 0.5 s control stimulus (CS), a 3 s filtered masking noise (FN), and a 0.5 s test stimulus (TS). The CS and TS were identical on each trial and had energy corresponding to the high energy (recurrent suppression) or low energy (lateral suppression) portions of the FN. Event-related potentials were recorded and suppression was measured as the amplitude change between CS and TS. Results Peak amplitudes of the auditory P2 component (160–260 ms) showed reduced lateral but not recurrent suppression in SZ participants. Conclusions Reduced lateral suppression in SZ participants may lead to overlap of neuronal populations representing different auditory stimuli. Such imprecise neural representations may contribute to the difficulties SZ participants have in discriminating complex stimuli in everyday life. PMID:25583249

  14. Central projections of auditory nerve fibers in the barn owl.

    PubMed

    Carr, C E; Boudreau, R E

    1991-12-08

    The central projections of the auditory nerve were examined in the barn owl. Each auditory nerve fiber enters the brain and divides to terminate in both the cochlear nucleus angularis and the cochlear nucleus magnocellularis. This division parallels a functional division into intensity and time coding in the auditory system. The lateral branch of the auditory nerve innervates the nucleus angularis and gives rise to a major and a minor terminal field. The terminals range in size and shape from small boutons to large irregular boutons with thorn-like appendages. The medial branch of the auditory nerve conveys phase information to the cells of the nucleus magnocellularis via large axosomatic endings or end bulbs of Held. Each medial branch divides to form 3-6 end bulbs along the rostrocaudal orientation of a single tonotopic band, and each magnocellular neuron receives 1-4 end bulbs. The end bulb envelops the postsynaptic cell body and forms large numbers of synapses. The auditory nerve profiles contain round clear vesicles and form punctate asymmetric synapses on both somatic spines and the cell body.

  15. Multimodal lexical processing in auditory cortex is literacy skill dependent.

    PubMed

    McNorgan, Chris; Awati, Neha; Desroches, Amy S; Booth, James R

    2014-09-01

    Literacy is a uniquely human cross-modal cognitive process wherein visual orthographic representations become associated with auditory phonological representations through experience. Developmental studies provide insight into how experience-dependent changes in brain organization influence phonological processing as a function of literacy. Previous investigations show a synchrony-dependent influence of letter presentation on individual phoneme processing in superior temporal sulcus; others demonstrate recruitment of primary and associative auditory cortex during cross-modal processing. We sought to determine whether brain regions supporting phonological processing of larger lexical units (monosyllabic words) over larger time windows are sensitive to cross-modal information, and whether such effects are literacy dependent. Twenty-two children (age 8-14 years) made rhyming judgments for sequentially presented word and pseudoword pairs presented either unimodally (auditory- or visual-only) or cross-modally (audiovisual). Regression analyses examined the relationship between literacy and congruency effects (overlapping orthography and phonology vs. overlapping phonology-only). We extend previous findings by showing that higher literacy is correlated with greater congruency effects in auditory cortex (i.e., planum temporale) only for cross-modal processing. These skill effects were specific to known words and occurred over a large time window, suggesting that multimodal integration in posterior auditory cortex is critical for fluent reading.

  16. Development of the auditory system.

    PubMed

    Litovsky, Ruth

    2015-01-01

    Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur naturally, and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics, as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear prolonged maturation of auditory development well into the teenage years. Maturation involves auditory pathways. However, non-auditory changes (attention, memory, cognition) play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity.

  17. Animal models for auditory streaming.

    PubMed

    Itatani, Naoya; Klump, Georg M

    2017-02-19

    Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in animal models. We will compare the psychophysical results from the animal studies to the evidence obtained in human psychophysics of auditory streaming, i.e. in a task commonly used for measuring the capability for auditory scene analysis. Furthermore, the neuronal correlates of auditory streaming will be reviewed in different animal models and the observations of the neurons' response measures will be related to perception. The across-species comparison will reveal whether similar demands in the analysis of acoustic scenes have resulted in similar perceptual and neuronal processing mechanisms in the wide range of species being capable of auditory scene analysis. This article is part of the themed issue 'Auditory and visual scene analysis'.

  18. Development of the auditory system

    PubMed Central

    Litovsky, Ruth

    2015-01-01

    Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur naturally, and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics, as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear prolonged maturation of auditory development well into the teenage years. Maturation involves auditory pathways. However, non-auditory changes (attention, memory, cognition) play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity. PMID:25726262

  19. Auditory pathways: anatomy and physiology.

    PubMed

    Pickles, James O

    2015-01-01

    This chapter outlines the anatomy and physiology of the auditory pathways. After a brief analysis of the external ear, middle ear, and cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream, and a dorsal mainly pattern recognition stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards, to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features in the auditory stimulus in the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from cortex in multiple stages to the organ of Corti of the cochlea, is described.

  20. Auditory Learning. Dimensions in Early Learning Series.

    ERIC Educational Resources Information Center

    Zigmond, Naomi K.; Cicci, Regina

    The monograph discusses the psycho-physiological operations for processing of auditory information, the structure and function of the ear, the development of auditory processes from fetal responses through discrimination, language comprehension, auditory memory, and auditory processes related to written language. Disorders of auditory learning…

  1. Auditory Processing Disorders. Revised. Technical Assistance Paper.

    ERIC Educational Resources Information Center

    Florida State Dept. of Education, Tallahassee. Bureau of Instructional Support and Community Services.

    Designed to assist audiologists in the educational setting in responding to frequently asked questions concerning audiological auditory processing disorder (APD) evaluations, this paper addresses: (1) auditory processes; (2) auditory processing skills; (3) characteristics of auditory processing disorders; (4) causes of auditory overload; (5) why…

  2. Increased BOLD Signals Elicited by High Gamma Auditory Stimulation of the Left Auditory Cortex in Acute State Schizophrenia.

    PubMed

    Kuga, Hironori; Onitsuka, Toshiaki; Hirano, Yoji; Nakamura, Itta; Oribe, Naoya; Mizuhara, Hiroaki; Kanai, Ryota; Kanba, Shigenobu; Ueno, Takefumi

    2016-10-01

    Recent MRI studies have shown that schizophrenia is characterized by reductions in brain gray matter, which progress in the acute state of the disease. Cortical circuitry abnormalities in gamma oscillations, such as deficits in the auditory steady state response (ASSR) to gamma frequency (>30-Hz) stimulation, have also been reported in schizophrenia patients. In the current study, we investigated neural responses during click stimulation by BOLD signals. We acquired BOLD responses elicited by click trains at 20, 30, 40, and 80 Hz from 15 patients with acute episode schizophrenia (AESZ), 14 symptom-severity-matched patients with non-acute episode schizophrenia (NASZ), and 24 healthy controls (HC), assessed via a standard general linear-model-based analysis. The AESZ group showed significantly increased ASSR-BOLD signals to 80-Hz stimuli in the left auditory cortex compared with the HC and NASZ groups. In addition, enhanced 80-Hz ASSR-BOLD signals were associated with more severe auditory hallucination experiences in AESZ participants. The present results indicate that neural overactivation occurs during 80-Hz auditory stimulation of the left auditory cortex in individuals with acute state schizophrenia. Given the possible association between abnormal gamma activity and increased glutamate levels, our data may reflect glutamate toxicity in the auditory cortex in the acute state of schizophrenia, which might lead to progressive changes in the left transverse temporal gyrus.
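    For readers unfamiliar with the "standard general linear-model-based analysis" mentioned here, the core step fits a design matrix (stimulus timing convolved with a canonical haemodynamic response) to each voxel's time series by least squares. The sketch below is a generic, self-contained illustration on synthetic data; the HRF parameters, block timing, and effect sizes are illustrative assumptions, not the study's settings.

    ```python
    import numpy as np
    from math import gamma

    def hrf(t, p1=6.0, p2=16.0, ratio=1 / 6.0):
        """Simplified canonical double-gamma haemodynamic response function."""
        return (t ** (p1 - 1) * np.exp(-t) / gamma(p1)
                - ratio * t ** (p2 - 1) * np.exp(-t) / gamma(p2))

    TR, n_vols = 2.0, 120
    t = np.arange(n_vols) * TR
    boxcar = ((t % 40) < 20).astype(float)                 # 20 s on / 20 s off blocks
    x = np.convolve(boxcar, hrf(np.arange(0.0, 32.0, TR)))[:n_vols]
    x /= x.max()                                           # unit-peak task regressor
    X = np.column_stack([x, np.ones(n_vols)])              # design: task + intercept

    rng = np.random.default_rng(1)
    y = 2.0 * x + 5.0 + 0.5 * rng.standard_normal(n_vols)  # synthetic voxel signal
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(beta.round(1))   # task effect near 2.0, baseline near 5.0
    ```

    In a full analysis the task beta at each voxel is tested against its standard error and the resulting statistic maps are compared between groups.
    
    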

  3. Functional neuroanatomy of auditory scene analysis in Alzheimer's disease.

    PubMed

    Golden, Hannah L; Agustus, Jennifer L; Goll, Johanna C; Downey, Laura E; Mummery, Catherine J; Schott, Jonathan M; Crutch, Sebastian J; Warren, Jason D

    2015-01-01

    Auditory scene analysis is a demanding computational process that is performed automatically and efficiently by the healthy brain but vulnerable to the neurodegenerative pathology of Alzheimer's disease. Here we assessed the functional neuroanatomy of auditory scene analysis in Alzheimer's disease using the well-known 'cocktail party effect' as a model paradigm whereby stored templates for auditory objects (e.g., hearing one's spoken name) are used to segregate auditory 'foreground' and 'background'. Patients with typical amnestic Alzheimer's disease (n = 13) and age-matched healthy individuals (n = 17) underwent functional 3T-MRI using a sparse acquisition protocol with passive listening to auditory stimulus conditions comprising the participant's own name interleaved with or superimposed on multi-talker babble, and spectrally rotated (unrecognisable) analogues of these conditions. Name identification (conditions containing the participant's own name contrasted with spectrally rotated analogues) produced extensive bilateral activation involving superior temporal cortex in both the AD and healthy control groups, with no significant differences between groups. Auditory object segregation (conditions with interleaved name sounds contrasted with superimposed name sounds) produced activation of right posterior superior temporal cortex in both groups, again with no differences between groups. However, the cocktail party effect (interaction of own name identification with auditory object segregation processing) produced activation of right supramarginal gyrus in the AD group that was significantly enhanced compared with the healthy control group. The findings delineate an altered functional neuroanatomical profile of auditory scene analysis in Alzheimer's disease that may constitute a novel computational signature of this neurodegenerative pathology.

  4. Musical and auditory hallucinations: A spectrum.

    PubMed

    E Fischer, Corinne; Marchie, Anthony; Norris, Mireille

    2004-02-01

    Musical hallucinosis is a rare and poorly understood clinical phenomenon. While an association appears to exist between this phenomenon and organic brain pathology, aging, and sensory impairment, the precise association remains unclear. The authors present two cases of musical hallucinosis, both in elderly patients with mild-moderate cognitive impairment and mild-moderate hearing loss, who subsequently developed auditory hallucinations and, in one case, command hallucinations. The literature in reference to musical hallucinosis will be reviewed and a theory relating to the development of musical hallucinations will be proposed.

  5. Modulation of Auditory Spatial Attention by Angry Prosody: An fMRI Auditory Dot-Probe Study.

    PubMed

    Ceravolo, Leonardo; Frühholz, Sascha; Grandjean, Didier

    2016-01-01

    Emotional stimuli have been shown to modulate attentional orienting through signals sent by subcortical brain regions that modulate visual perception at early stages of processing. Fewer studies, however, have investigated a similar effect of emotional stimuli on attentional orienting in the auditory domain together with an investigation of brain regions underlying such attentional modulation, which is the general aim of the present study. Therefore, we used an original auditory dot-probe paradigm involving simultaneously presented neutral and angry non-speech vocal utterances lateralized to either the left or the right auditory space, immediately followed by a short and lateralized single sine wave tone presented in the same (valid trial) or in the opposite space as the preceding angry voice (invalid trial). Behavioral results showed an expected facilitation effect for target detection during valid trials while functional data showed greater activation in the middle and posterior superior temporal sulci (STS) and in the medial frontal cortex for valid vs. invalid trials. The use of reaction time facilitation [absolute value of the Z-score of valid-(invalid+neutral)] as a group covariate extended enhanced activity in the amygdalae, auditory thalamus, and visual cortex. Taken together, our results suggest the involvement of a large and distributed network of regions among which the STS, thalamus, and amygdala are crucial for the decoding of angry prosody, as well as for orienting and maintaining attention within an auditory space that was previously primed by a vocal emotional event.

  6. Modulation of Auditory Spatial Attention by Angry Prosody: An fMRI Auditory Dot-Probe Study

    PubMed Central

    Ceravolo, Leonardo; Frühholz, Sascha; Grandjean, Didier

    2016-01-01

    Emotional stimuli have been shown to modulate attentional orienting through signals sent by subcortical brain regions that modulate visual perception at early stages of processing. Fewer studies, however, have investigated a similar effect of emotional stimuli on attentional orienting in the auditory domain together with an investigation of brain regions underlying such attentional modulation, which is the general aim of the present study. Therefore, we used an original auditory dot-probe paradigm involving simultaneously presented neutral and angry non-speech vocal utterances lateralized to either the left or the right auditory space, immediately followed by a short and lateralized single sine wave tone presented in the same (valid trial) or in the opposite space as the preceding angry voice (invalid trial). Behavioral results showed an expected facilitation effect for target detection during valid trials while functional data showed greater activation in the middle and posterior superior temporal sulci (STS) and in the medial frontal cortex for valid vs. invalid trials. The use of reaction time facilitation [absolute value of the Z-score of valid-(invalid+neutral)] as a group covariate extended enhanced activity in the amygdalae, auditory thalamus, and visual cortex. Taken together, our results suggest the involvement of a large and distributed network of regions among which the STS, thalamus, and amygdala are crucial for the decoding of angry prosody, as well as for orienting and maintaining attention within an auditory space that was previously primed by a vocal emotional event. PMID:27242420

  7. Mechanisms underlying the auditory continuity illusion

    NASA Astrophysics Data System (ADS)

    Pressnitzer, Daniel; Tardieu, Julien; Ragot, Richard; Baillet, Sylvain

    2004-05-01

    This study investigates the auditory continuity illusion, combining psychophysics and magnetoencephalography. Stimuli consisted of amplitude-modulated (AM) noise interrupted by bursts of louder, unmodulated noise. Subjective judgments confirmed that the AM was perceived as continuous, a case of illusory continuity. Psychophysical measurements showed that the illusory modulation had little effect on the detection of a physical modulation, i.e., the illusory modulation produced no modulation masking. Duration discrimination thresholds for the AM noise segments, however, were elevated by the illusion. A whole-head magnetoencephalographic system was used to record brain activity when listeners attended passively to the stimuli. The AM noise produced a modulated magnetic activity, the auditory steady-state response. The illusory modulation did not produce such a response; instead, a possible neural correlate of the illusion was found in transient evoked responses. When the AM was interrupted by silence, oscillatory activity in the gamma-band range as well as slow evoked potentials were observed at each AM onset. In the case of the illusion, these neural responses were largely reduced. Both sets of results are inconsistent with a restoration of the modulation in the case of illusory continuity. Rather, they point to a role for onset-detection mechanisms in auditory scene analysis.
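    The stimulus construction described here (AM noise segments alternating with louder, unmodulated noise bursts) is straightforward to sketch. All parameter values below (sampling rate, segment duration, modulation rate, burst level) are illustrative placeholders, not the study's actual settings.

    ```python
    import numpy as np

    def continuity_stimulus(fs=16000, seg=0.3, n_cycles=3, f_am=8.0,
                            burst_gain=4.0, rng=None):
        """Alternate amplitude-modulated noise with louder, unmodulated noise
        bursts. All parameter values are illustrative placeholders."""
        if rng is None:
            rng = np.random.default_rng(0)
        n = int(fs * seg)
        t = np.arange(n) / fs
        env = 1.0 + 0.8 * np.sin(2 * np.pi * f_am * t)  # sinusoidal AM envelope
        parts = []
        for _ in range(n_cycles):
            parts.append(env * rng.standard_normal(n))          # AM noise segment
            parts.append(burst_gain * rng.standard_normal(n))   # louder, unmodulated burst
        x = np.concatenate(parts)
        return x / np.max(np.abs(x))                            # peak-normalize

    stim = continuity_stimulus()
    print(stim.shape)   # 6 alternating segments of 0.3 s at 16 kHz
    ```

    Replacing the loud bursts with silence (zeros of the same length) yields the comparison condition in which the illusion breaks down and onset responses reappear.
    
    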

  8. Perineuronal nets in the auditory system.

    PubMed

    Sonntag, Mandy; Blosa, Maren; Schmidt, Sophie; Rübsamen, Rudolf; Morawski, Markus

    2015-11-01

    Perineuronal nets (PNs) are a unique and complex meshwork of specific extracellular matrix molecules that ensheath a subset of neurons in many regions of the central nervous system (CNS). PNs appear late in development and are supposed to restrict synaptic plasticity and to stabilize functional neuronal connections. PNs were further hypothesized to create a charged milieu around the neurons and thus, might directly modulate synaptic activity. Although PNs were first described more than 120 years ago, their exact functions still remain elusive. The purpose of the present review is to propose the nuclei of the auditory system, which are highly enriched in PN-wearing neurons, as particularly suitable structures to study the functional significance of PNs. We provide a detailed description of the distribution of PNs from the cochlear nucleus to the auditory cortex considering distinct markers for detection of PNs. We further point to the suitability of specific auditory neurons to serve as promising model systems to study in detail the contribution of PNs to synaptic physiology and also more generally to the functionality of the brain.

  9. Coupling output of multichannel high power microwaves

    SciTech Connect

    Li Guolin; Shu Ting; Yuan Chengwei; Zhang Jun; Yang Jianhua; Jin Zhenxing; Yin Yi; Wu Dapeng; Zhu Jun; Ren Heming; Yang Jie

    2010-12-15

    The coupling output of multichannel high power microwaves is a promising technique for the development of high power microwave technologies, as it can enhance the output capacities of presently studied devices. According to the investigations on the spatial filtering method and waveguide filtering method, the hybrid filtering method is proposed for the coupling output of multichannel high power microwaves. As an example, a specific structure is designed for the coupling output of S/X/X-band three-channel high power microwaves and investigated with the hybrid filtering method. In the experiments, a 4 GW pulse of X-band beat waves and a 1.8 GW pulse of S-band microwaves are obtained.

  10. Early hominin auditory capacities

    PubMed Central

    Quam, Rolf; Martínez, Ignacio; Rosa, Manuel; Bonmatí, Alejandro; Lorenzo, Carlos; de Ruiter, Darryl J.; Moggi-Cecchi, Jacopo; Conde Valverde, Mercedes; Jarabo, Pilar; Menter, Colin G.; Thackeray, J. Francis; Arsuaga, Juan Luis

    2015-01-01

    Studies of sensory capacities in past life forms have offered new insights into their adaptations and lifeways. Audition is particularly amenable to study in fossils because it is strongly related to physical properties that can be approached through their skeletal structures. We have studied the anatomy of the outer and middle ear in the early hominin taxa Australopithecus africanus and Paranthropus robustus and estimated their auditory capacities. Compared with chimpanzees, the early hominin taxa are derived toward modern humans in their slightly shorter and wider external auditory canal, smaller tympanic membrane, and lower malleus/incus lever ratio, but they remain primitive in the small size of their stapes footplate. Compared with chimpanzees, both early hominin taxa show a heightened sensitivity to frequencies between 1.5 and 3.5 kHz and an occupied band of maximum sensitivity that is shifted toward slightly higher frequencies. The results have implications for sensory ecology and communication, and suggest that the early hominin auditory pattern may have facilitated an increased emphasis on short-range vocal communication in open habitats. PMID:26601261

  11. Early hominin auditory capacities.

    PubMed

    Quam, Rolf; Martínez, Ignacio; Rosa, Manuel; Bonmatí, Alejandro; Lorenzo, Carlos; de Ruiter, Darryl J; Moggi-Cecchi, Jacopo; Conde Valverde, Mercedes; Jarabo, Pilar; Menter, Colin G; Thackeray, J Francis; Arsuaga, Juan Luis

    2015-09-01

    Studies of sensory capacities in past life forms have offered new insights into their adaptations and lifeways. Audition is particularly amenable to study in fossils because it is strongly related to physical properties that can be approached through their skeletal structures. We have studied the anatomy of the outer and middle ear in the early hominin taxa Australopithecus africanus and Paranthropus robustus and estimated their auditory capacities. Compared with chimpanzees, the early hominin taxa are derived toward modern humans in their slightly shorter and wider external auditory canal, smaller tympanic membrane, and lower malleus/incus lever ratio, but they remain primitive in the small size of their stapes footplate. Compared with chimpanzees, both early hominin taxa show a heightened sensitivity to frequencies between 1.5 and 3.5 kHz and an occupied band of maximum sensitivity that is shifted toward slightly higher frequencies. The results have implications for sensory ecology and communication, and suggest that the early hominin auditory pattern may have facilitated an increased emphasis on short-range vocal communication in open habitats.

  12. Multichannel algorithms for seismic reflectivity inversion

    NASA Astrophysics Data System (ADS)

    Wang, Ruo; Wang, Yanghua

    2017-02-01

    Seismic reflectivity inversion is a deconvolution process for quantitatively extracting the reflectivity series and depicting the layered subsurface structure. The conventional method is a single channel inversion and cannot clearly characterise stratified structures, especially from seismic data with low signal-to-noise ratio. Because it is implemented on a trace-by-trace basis, the continuity along reflections in the original seismic data deteriorates in the inversion results. We propose here multichannel inversion algorithms that apply the information of adjacent traces during seismic reflectivity inversion. Explicitly, we incorporate a spatial prediction filter into the conventional Cauchy-constrained inversion method. We verify the validity and feasibility of the method using field data experiments and find an improved lateral continuity and clearer structures achieved by the multichannel algorithms. Finally, we compare the performance of three multichannel algorithms, evaluating their effectiveness in terms of the lateral coherency and structural characterisation of the inverted reflectivity profiles, as well as the residual energy of the seismic data.
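    For intuition, the single-channel baseline mentioned above amounts to regularized deconvolution of each trace with the source wavelet; the multichannel variants then couple adjacent traces. The sketch below recovers a spike reflectivity series from one synthetic trace using a damped least-squares penalty, which is a simplified stand-in for the Cauchy constraint; the wavelet, damping value, and spike positions are illustrative assumptions.

    ```python
    import numpy as np

    def ricker(f0, dt, n=61):
        """Zero-phase Ricker wavelet (assumed source signature)."""
        t = (np.arange(n) - n // 2) * dt
        a = (np.pi * f0 * t) ** 2
        return (1 - 2 * a) * np.exp(-a)

    def deconvolve(trace, wavelet, mu=0.1):
        """Damped least-squares spike deconvolution of one trace:
        minimize ||W r - d||^2 + mu ||r||^2, a simplified stand-in for
        the Cauchy-constrained inversion described above."""
        n = len(trace)
        half = len(wavelet) // 2
        W = np.zeros((n, n))                      # convolution matrix
        for i in range(n):
            for j, w in enumerate(wavelet):
                k = i + j - half
                if 0 <= k < n:
                    W[k, i] = w
        A = W.T @ W + mu * np.eye(n)
        return np.linalg.solve(A, W.T @ trace)

    dt, n = 0.002, 200
    r = np.zeros(n); r[[50, 90, 130]] = [1.0, -0.7, 0.5]   # reflectivity spikes
    w = ricker(30.0, dt)
    d = np.convolve(r, w, mode="same")                     # synthetic trace
    r_hat = deconvolve(d, w)
    print(int(np.argmax(r_hat)))   # strongest spike recovered near sample 50
    ```

    A multichannel version adds a lateral term coupling the reflectivity of neighbouring traces (e.g., via a spatial prediction filter) to the objective, which is what restores continuity along reflections.
    
    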

  13. Auditory interfaces: The human perceiver

    NASA Technical Reports Server (NTRS)

    Colburn, H. Steven

    1991-01-01

    A brief introduction to the basic auditory abilities of the human perceiver with particular attention toward issues that may be important for the design of auditory interfaces is presented. The importance of appropriate auditory inputs to observers with normal hearing is probably related to the role of hearing as an omnidirectional, early warning system and to its role as the primary vehicle for communication of strong personal feelings.

  14. Seeing sounds and hearing colors: an event-related potential study of auditory-visual synesthesia.

    PubMed

    Goller, Aviva I; Otten, Leun J; Ward, Jamie

    2009-10-01

    In auditory-visual synesthesia, sounds automatically elicit conscious and reliable visual experiences. It is presently unknown whether this reflects early or late processes in the brain. It is also unknown whether adult audiovisual synesthesia resembles auditory-induced visual illusions that can sometimes occur in the general population or whether it resembles the electrophysiological deflection over occipital sites that has been noted in infancy and has been likened to synesthesia. Electrical brain activity was recorded from adult synesthetes and control participants who were played brief tones and required to monitor for an infrequent auditory target. The synesthetes were instructed to attend either to the auditory or to the visual (i.e., synesthetic) dimension of the tone, whereas the controls attended to the auditory dimension alone. There were clear differences between synesthetes and controls that emerged early (100 msec after tone onset). These differences tended to lie in deflections of the auditory-evoked potential (e.g., the auditory N1, P2, and N2) rather than the presence of an additional posterior deflection. The differences occurred irrespective of what the synesthetes attended to (although attention had a late effect). The results suggest that differences between synesthetes and others occur early in time, and that synesthesia is qualitatively different from similar effects found in infants and certain auditory-induced visual illusions in adults. In addition, we report two novel cases of synesthesia in which colors elicit sounds, and vice versa.

  15. Spontaneous activity in the developing auditory system.

    PubMed

    Wang, Han Chin; Bergles, Dwight E

    2015-07-01

    Spontaneous electrical activity is a common feature of sensory systems during early development. This sensory-independent neuronal activity has been implicated in promoting their survival and maturation, as well as growth and refinement of their projections to yield circuits that can rapidly extract information about the external world. Periodic bursts of action potentials occur in auditory neurons of mammals before hearing onset. This activity is induced by inner hair cells (IHCs) within the developing cochlea, which establish functional connections with spiral ganglion neurons (SGNs) several weeks before they are capable of detecting external sounds. During this pre-hearing period, IHCs fire periodic bursts of Ca(2+) action potentials that excite SGNs, triggering brief but intense periods of activity that pass through auditory centers of the brain. Although spontaneous activity requires input from IHCs, there is ongoing debate about whether IHCs are intrinsically active and their firing periodically interrupted by external inhibitory input (IHC-inhibition model), or are intrinsically silent and their firing periodically promoted by an external excitatory stimulus (IHC-excitation model). There is accumulating evidence that inner supporting cells in Kölliker's organ spontaneously release ATP during this time, which can induce bursts of Ca(2+) spikes in IHCs that recapitulate many features of auditory neuron activity observed in vivo. Nevertheless, the role of supporting cells in this process remains to be established in vivo. A greater understanding of the molecular mechanisms responsible for generating IHC activity in the developing cochlea will help reveal how these events contribute to the maturation of nascent auditory circuits.

  16. Impairments in musical abilities reflected in the auditory brainstem: evidence from congenital amusia.

    PubMed

    Lehmann, Alexandre; Skoe, Erika; Moreau, Patricia; Peretz, Isabelle; Kraus, Nina

    2015-07-01

    Congenital amusia is a neurogenetic condition, characterized by a deficit in music perception and production, not explained by hearing loss, brain damage or lack of exposure to music. Despite inferior musical performance, amusics exhibit normal auditory cortical responses, with abnormal neural correlates suggested to lie beyond auditory cortices. Here we show, using auditory brainstem responses to complex sounds in humans, that fine-grained automatic processing of sounds is impoverished in amusia. Compared with matched non-musician controls, spectral amplitude was decreased in amusics for higher harmonic components of the auditory brainstem response. We also found a delayed response to the early transient aspects of the auditory stimulus in amusics. Neural measures of spectral amplitude and response timing correlated with participants' behavioral assessments of music processing. We demonstrate, for the first time, that amusia affects how complex acoustic signals are processed in the auditory brainstem. This neural signature of amusia mirrors what is observed in musicians, such that the aspects of the auditory brainstem responses that are enhanced in musicians are degraded in amusics. By showing that gradients of music abilities are reflected in the auditory brainstem, our findings have implications not only for current models of amusia but also for auditory functioning in general.

  17. Neural Correlates of Auditory Processing, Learning and Memory Formation in Songbirds

    NASA Astrophysics Data System (ADS)

    Pinaud, R.; Terleph, T. A.; Wynne, R. D.; Tremere, L. A.

    Songbirds have emerged as powerful experimental models for the study of auditory processing of complex natural communication signals. Intact hearing is necessary for several behaviors in developing and adult animals including vocal learning, territorial defense, mate selection and individual recognition. These behaviors are thought to require the processing, discrimination and memorization of songs. Although much is known about the brain circuits that participate in sensorimotor (auditory-vocal) integration, especially the "song-control" system, less is known about the anatomical and functional organization of central auditory pathways. Here we discuss findings associated with a telencephalic auditory area known as the caudomedial nidopallium (NCM). NCM has attracted significant interest as it exhibits functional properties that may support higher order auditory functions such as stimulus discrimination and the formation of auditory memories. NCM neurons are vigorously driven by auditory stimuli. Interestingly, these responses are selective to conspecific, relative to heterospecific songs and artificial stimuli. In addition, forms of experience-dependent plasticity occur in NCM and are song-specific. Finally, recent experiments employing high-throughput quantitative proteomics suggest that complex protein regulatory pathways are engaged in NCM as a result of auditory experience. These molecular cascades are likely central to experience-associated plasticity of NCM circuitry and may be part of a network of calcium-driven molecular events that support the formation of auditory memory traces.

  18. The frequency modulated auditory evoked response (FMAER), a technical advance for study of childhood language disorders: cortical source localization and selected case studies

    PubMed Central

    2013-01-01

    Background Language comprehension requires decoding of complex, rapidly changing speech streams. Detecting changes of frequency modulation (FM) within speech is hypothesized as essential for accurate phoneme detection, and thus, for spoken word comprehension. Despite past demonstration of FM auditory evoked response (FMAER) utility in language disorder investigations, it is seldom utilized clinically. This report's purpose is to facilitate clinical use by explaining analytic pitfalls, demonstrating sites of cortical origin, and illustrating potential utility. Results FMAERs collected from children with language disorders, including Developmental Dysphasia, Landau-Kleffner syndrome (LKS), and autism spectrum disorder (ASD) and also normal controls - utilizing multi-channel reference-free recordings assisted by discrete source analysis - provided demonstrations of cortical origin and examples of clinical utility. Recordings from epilepsy inpatients with indwelling cortical electrodes provided direct assessment of FMAER origin. The FMAER is shown to normally arise from bilateral posterior superior temporal gyri and immediate temporal lobe surround. Childhood language disorders associated with prominent receptive deficits demonstrate absent left or bilateral FMAER temporal lobe responses. When receptive language is spared, the FMAER may remain present bilaterally. Analyses based upon mastoid or ear reference electrodes are shown to result in erroneous conclusions. Serial FMAER studies may dynamically track status of underlying language processing in LKS. FMAERs in ASD with language impairment may be normal or abnormal. Cortical FMAERs can locate language cortex when conventional cortical stimulation does not. Conclusion The FMAER measures the processing by the superior temporal gyri and adjacent cortex of rapid frequency modulation within an auditory stream. Clinical disorders associated with receptive deficits are shown to demonstrate absent left or bilateral
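The reference-electrode pitfall noted above arises because a mastoid or ear reference mixes its own local activity into every channel. One common approximation to a reference-free montage is the average reference; this is a hedged sketch of that step only (the study itself used discrete source analysis, which this does not implement):

```python
import numpy as np

def to_average_reference(eeg):
    """Re-reference multichannel EEG (channels x samples) to the common
    average, removing the shared signal that a single mastoid or ear
    reference electrode would otherwise inject into every channel."""
    return eeg - eeg.mean(axis=0, keepdims=True)
```

After this transform the instantaneous mean across channels is zero, so topographies are no longer biased toward the site of one physical reference.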

  19. Auditory distance perception in humans: a review of cues, development, neuronal bases, and effects of sensory loss.

    PubMed

    Kolarik, Andrew J; Moore, Brian C J; Zahorik, Pavel; Cirstea, Silvia; Pardhan, Shahina

    2016-02-01

    Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.
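For the sound-level cue mentioned above, a point source in the free field attenuates by about 6 dB per doubling of distance under the inverse-square law. A small illustration of that relationship (free-field assumption only; real rooms add reverberant energy that flattens the level-distance function):

```python
import math

def free_field_level_change_db(ref_distance_m, distance_m):
    """Level change (dB) of a point source relative to a reference
    distance under the inverse-square law: 20 * log10(d / d0).
    Positive values mean attenuation at the farther distance."""
    return 20.0 * math.log10(distance_m / ref_distance_m)

print(free_field_level_change_db(1.0, 2.0))  # ≈ 6.02 dB per doubling
```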

  20. The Central Auditory Processing Kit[TM]. Book 1: Auditory Memory [and] Book 2: Auditory Discrimination, Auditory Closure, and Auditory Synthesis [and] Book 3: Auditory Figure-Ground, Auditory Cohesion, Auditory Binaural Integration, and Compensatory Strategies.

    ERIC Educational Resources Information Center

    Mokhemar, Mary Ann

    This kit for assessing central auditory processing disorders (CAPD) in children in grades 1 through 8 includes 3 books, 14 full-color cards with picture scenes, and a card depicting a phone key pad, all contained in a sturdy carrying case. The units in each of the three books correspond with auditory skill areas most commonly addressed in…

  1. Auditory Discrimination and Auditory Sensory Behaviours in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Jones, Catherine R. G.; Happe, Francesca; Baird, Gillian; Simonoff, Emily; Marsden, Anita J. S.; Tregay, Jenifer; Phillips, Rebecca J.; Goswami, Usha; Thomson, Jennifer M.; Charman, Tony

    2009-01-01

    It has been hypothesised that auditory processing may be enhanced in autism spectrum disorders (ASD). We tested auditory discrimination ability in 72 adolescents with ASD (39 childhood autism; 33 other ASD) and 57 IQ and age-matched controls, assessing their capacity for successful discrimination of the frequency, intensity and duration…

  2. Auditory tracts identified with combined fMRI and diffusion tractography.

    PubMed

    Javad, Faiza; Warren, Jason D; Micallef, Caroline; Thornton, John S; Golay, Xavier; Yousry, Tarek; Mancini, Laura

    2014-01-01

    The auditory tracts in the human brain connect the inferior colliculus (IC) and medial geniculate body (MGB) to various components of the auditory cortex (AC). While in non-human primates and in humans the auditory system is differentiated into core, belt and parabelt areas, the correspondence between these areas and anatomical landmarks on the human superior temporal gyri is not straightforward, and at present not completely understood. However, it is not controversial that there is a hierarchical organization of auditory stimuli processing in the auditory system. The aims of this study were to demonstrate that it is possible to non-invasively and robustly identify auditory projections between the auditory thalamus/brainstem and different functional levels of auditory analysis in the cortex of human subjects in vivo by combining functional magnetic resonance imaging (fMRI) with diffusion MRI, and to investigate the possibility of differentiating between different components of the auditory pathways (e.g. projections to areas responsible for sound, pitch and melody processing). We hypothesized that the major limitation in the identification of the auditory pathways is the known problem of crossing fibres and addressed this issue by acquiring DTI with b-values higher than commonly used and adopting a multi-fibre ball-and-stick analysis model combined with probabilistic tractography. Fourteen healthy subjects were studied. Auditory areas were localized functionally using an established hierarchical pitch processing fMRI paradigm. Together, fMRI and diffusion MRI allowed the successful identification of tracts connecting IC with AC in 64 to 86% of hemispheres and left sound areas with homologous areas in the right hemisphere in 86% of hemispheres. The identified tracts corresponded closely with a three-dimensional stereotaxic atlas based on postmortem data. The findings have both neuroscientific and clinical implications for delineation of the human auditory system in vivo.

  3. Brain activity associated with skilled finger movements: multichannel magnetic recordings.

    PubMed

    Chiarenza, G A; Hari, R K; Karhu, J J; Tessore, S

    1991-01-01

    Using a 24-channel SQUID magnetometer, we recorded cerebral activity preceding and following self-paced voluntary 'skilled' movements in four healthy adults. The subject pressed buttons successively with the right index and middle fingers aiming at a time difference of 40-60 ms; on-line feedback on performance was given after each movement. Slow magnetic readiness fields (RFs) preceded the movements by 0.5 s and culminated about 20 ms after the electromyogram (EMG) onset. Movement-evoked fields, MEFs, opposite in polarity to RFs, were observed 90-120 ms after the EMG onset. They were followed by an additional 'skilled-performance field', SPF, 400-500 ms after the EMG onset. The source locations of RF, MEF, and SPF were within 2 cm from sources of the somatosensory evoked responses, which were situated in the posterior wall of the Rolandic fissure; the sources of MEF were closest to the midline. Neural generators of these deflections and of the corresponding electric potentials are discussed.

  4. Auditory Reserve and the Legacy of Auditory Experience

    PubMed Central

    Skoe, Erika; Kraus, Nina

    2014-01-01

    Musical training during childhood has been linked to more robust encoding of sound later in life. We take this as evidence for an auditory reserve: a mechanism by which individuals capitalize on earlier life experiences to promote auditory processing. We assert that early auditory experiences guide how the reserve develops and is maintained over the lifetime. Experiences that occur after childhood, or which are limited in nature, are theorized to affect the reserve, although their influence on sensory processing may be less long-lasting and may potentially fade over time if not repeated. This auditory reserve may help to explain individual differences in how individuals cope with auditory impoverishment or loss of sensorineural function. PMID:25405381

  5. Practiced musical style shapes auditory skills.

    PubMed

    Vuust, Peter; Brattico, Elvira; Seppänen, Miia; Näätänen, Risto; Tervaniemi, Mari

    2012-04-01

    Musicians' processing of sounds depends highly on instrument, performance practice, and level of expertise. Here, we measured the mismatch negativity (MMN), a preattentive brain response, to six types of musical feature change in musicians playing three distinct styles of music (classical, jazz, and rock/pop) and in nonmusicians using a novel, fast, and musical sounding multifeature MMN paradigm. We found MMN to all six deviants, showing that MMN paradigms can be adapted to resemble a musical context. Furthermore, we found that jazz musicians had larger MMN amplitude than all other experimental groups across all sound features, indicating greater overall sensitivity to auditory outliers. Furthermore, we observed a tendency toward shorter latency of the MMN to all feature changes in jazz musicians compared to band musicians. These findings indicate that the characteristics of the style of music played by musicians influence their perceptual skills and the brain processing of sound features embedded in music.

  6. Auditory Evoked Potential Response and Hearing Loss: A Review

    PubMed Central

    Paulraj, M. P; Subramaniam, Kamalraj; Yaccob, Sazali Bin; Adom, Abdul H. Bin; Hema, C. R

    2015-01-01

    Hypoacusis is the most prevalent sensory disability in the world and, consequently, can impede speech in human beings. One promising approach to this issue is to conduct early and effective hearing screening using the electroencephalogram (EEG). EEG-based hearing threshold determination is most suitable for persons who lack verbal communication and behavioral responses to sound stimulation. The auditory evoked potential (AEP) is a type of EEG signal recorded from the scalp in response to an acoustic stimulus. The goal of this review is to assess the current state of knowledge in estimating hearing threshold levels from the AEP response, which reflects the auditory ability of an individual. An intelligent hearing perception level system makes it possible to examine and determine the functional integrity of the auditory system. Systematic evaluation of EEG-based hearing perception level systems for predicting hearing loss in newborns, infants, and people with multiple disabilities will be a priority of interest for future research. PMID:25893012

  7. Neural Biomarkers for Dyslexia, ADHD, and ADD in the Auditory Cortex of Children

    PubMed Central

    Serrallach, Bettina; Groß, Christine; Bernhofs, Valdis; Engelmann, Dorte; Benner, Jan; Gündert, Nadine; Blatow, Maria; Wengenroth, Martina; Seitz, Angelika; Brunner, Monika; Seither, Stefan; Parncutt, Richard; Schneider, Peter; Seither-Preisler, Annemarie

    2016-01-01

    Dyslexia, attention deficit hyperactivity disorder (ADHD), and attention deficit disorder (ADD) show distinct clinical profiles that may include auditory and language-related impairments. Currently, an objective brain-based diagnosis of these developmental disorders is still unavailable. We investigated the neuro-auditory systems of dyslexic, ADHD, ADD, and age-matched control children (N = 147) using neuroimaging, magnetoencephalography and psychoacoustics. All disorder subgroups exhibited an oversized left planum temporale and an abnormal interhemispheric asynchrony (10–40 ms) of the primary auditory evoked P1-response. Considering right auditory cortex morphology, bilateral P1 source waveform shapes, and auditory performance, the three disorder subgroups could be reliably differentiated with outstanding accuracies of 89–98%. We therefore for the first time provide differential biomarkers for a brain-based diagnosis of dyslexia, ADHD, and ADD. The method not only allowed for clear discrimination between two subtypes of attentional disorders (ADHD and ADD), a topic controversially discussed for decades in the scientific community, but also revealed the potential for objectively identifying comorbid cases. Notably, in children playing a musical instrument, the observed interhemispheric asynchronies were reduced by about two-thirds after three and a half years of training, thus suggesting a strong beneficial influence of music experience on brain development. These findings might have far-reaching implications for both research and practice and enable a profound understanding of the brain-related etiology, diagnosis, and musically based therapy of common auditory-related developmental disorders and learning disabilities. PMID:27471442

  9. Issues in Human Auditory Development

    ERIC Educational Resources Information Center

    Werner, Lynne A.

    2007-01-01

    The human auditory system is often portrayed as precocious in its development. In fact, many aspects of basic auditory processing appear to be adult-like by the middle of the first year of postnatal life. However, processes such as attention and sound source determination take much longer to develop. Immaturity of higher-level processes limits the…

  10. Word Recognition in Auditory Cortex

    ERIC Educational Resources Information Center

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  11. Auditory neglect and related disorders.

    PubMed

    Gutschalk, Alexander; Dykstra, Andrew

    2015-01-01

    Neglect is a neurologic disorder, typically associated with lesions of the right hemisphere, in which patients are biased towards their ipsilesional - usually right - side of space while awareness for their contralesional - usually left - side is reduced or absent. Neglect is a multimodal disorder that often includes deficits in the auditory domain. Classically, auditory extinction, in which left-sided sounds that are correctly perceived in isolation are not detected in the presence of synchronous right-sided stimulation, has been considered the primary sign of auditory neglect. However, auditory extinction can also be observed after unilateral auditory cortex lesions and is thus not specific for neglect. Recent research has shown that patients with neglect are also impaired in maintaining sustained attention, on both sides, a fact that is reflected by an impairment of auditory target detection in continuous stimulation conditions. Perhaps the most impressive auditory symptom in full-blown neglect is alloacusis, in which patients mislocalize left-sided sound sources to their right, although even patients with less severe neglect still often show disturbance of auditory spatial perception, most commonly a lateralization bias towards the right. We discuss how these various disorders may be explained by a single model of neglect and review emerging interventions for patient rehabilitation.

  12. Age-related changes in the central auditory system.

    PubMed

    Ouda, Ladislav; Profant, Oliver; Syka, Josef

    2015-07-01

    Aging is accompanied by the deterioration of hearing that complicates our understanding of speech, especially in noisy environments. This deficit is partially caused by the loss of hair cells as well as by the dysfunction of the stria vascularis. However, the central part of the auditory system is also affected by processes accompanying aging that may run independently of those affecting peripheral receptors. Here, we review major changes occurring in the central part of the auditory system during aging. Most of the information on age-related changes in the central auditory system of experimental animals arises from immunocytochemical experiments targeting changes in glutamic acid decarboxylase, parvalbumin, calbindin and calretinin. These data are accompanied by information about age-related changes in the number of neurons as well as about changes in the behavior of experimental animals. Aging is in general accompanied by atrophy of the gray as well as white matter, resulting in the enlargement of the cerebrospinal fluid space. The human auditory cortex suffers not only from atrophy but also from changes in the content of some metabolites in the aged brain, as shown by magnetic resonance spectroscopy. In addition to this, functional magnetic resonance imaging reveals differences between activation of the central auditory system in the young and old brain. Altogether, the information reviewed in this article speaks in favor of specific age-related changes in the central auditory system that occur mostly independently of the changes in the inner ear and that form the basis of the central presbycusis.

  13. A frequency-selective feedback model of auditory efferent suppression and its implications for the recognition of speech in noise.

    PubMed

    Clark, Nicholas R; Brown, Guy J; Jürgens, Tim; Meddis, Ray

    2012-09-01

    The potential contribution of the peripheral auditory efferent system to our understanding of speech in a background of competing noise was studied using a computer model of the auditory periphery and assessed using an automatic speech recognition system. A previous study had shown that a fixed efferent attenuation applied to all channels of a multi-channel model could improve the recognition of connected digit triplets in noise [G. J. Brown, R. T. Ferry, and R. Meddis, J. Acoust. Soc. Am. 127, 943-954 (2010)]. In the current study an anatomically justified feedback loop was used to automatically regulate separate attenuation values for each auditory channel. This arrangement resulted in a further enhancement of speech recognition over fixed-attenuation conditions. Comparisons between multi-talker babble and pink noise interference conditions suggest that the benefit originates from the model's ability to modify the amount of suppression in each channel separately according to the spectral shape of the interfering sounds.
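The per-channel regulation described above can be caricatured as a gain rule that suppresses each auditory channel in proportion to how far its output exceeds a reference level, so channels dominated by intense interference are attenuated most. This is a toy sketch under assumptions of ours, not the feedback loop of Clark et al.; all names and constants are illustrative:

```python
import numpy as np

def efferent_attenuation_db(channel_rms, reference_rms=1.0,
                            slope=0.5, max_atten_db=30.0):
    """Toy per-channel efferent gain rule.

    channel_rms: array of RMS outputs, one per auditory filter channel.
    Returns an attenuation in dB per channel: zero below the reference
    level, growing with excess level, and capped at max_atten_db.
    """
    excess_db = 20.0 * np.log10(np.maximum(channel_rms, 1e-12) / reference_rms)
    return np.clip(slope * np.maximum(excess_db, 0.0), 0.0, max_atten_db)
```

Because the attenuation is computed per channel, its profile follows the spectral shape of the interferer, which is the property the abstract credits for the improvement over fixed attenuation.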

  14. Processing and Analysis of Multichannel Extracellular Neuronal Signals: State-of-the-Art and Challenges.

    PubMed

    Mahmud, Mufti; Vassanelli, Stefano

    2016-01-01

    In recent years, multichannel neuronal signal acquisition systems have allowed scientists to focus on research questions which were otherwise impossible. They act as a powerful means to study brain (dys)functions in in vivo and in vitro animal models. Typically, each session of electrophysiological experiments with a multichannel data acquisition system generates a large amount of raw data. For example, a 128-channel signal acquisition system with 16-bit A/D conversion and a 20 kHz sampling rate will generate approximately 17 GB of data per hour (uncompressed). This poses an important and challenging problem: inferring conclusions from the large amounts of acquired data. Thus, automated signal processing and analysis tools are becoming a key component in neuroscience research, facilitating extraction of relevant information from neuronal recordings in a reasonable time. The purpose of this review is to introduce the reader to the current state of the art in open-source packages for (semi)automated processing and analysis of multichannel extracellular neuronal signals (i.e., neuronal spikes, local field potentials, electroencephalogram, etc.), and the existing neuroinformatics infrastructure for tool and data sharing. The review concludes by pinpointing some major challenges, including the development of novel benchmarking techniques, cloud-based distributed processing and analysis tools, and the definition of novel means to share and standardize data.
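The quoted data rate is easy to verify with a back-of-envelope calculation (the ~17 GB figure corresponds to about 17.2 GiB):

```python
def raw_data_rate_bytes_per_hour(channels, bits_per_sample, sampling_rate_hz):
    """Uncompressed bytes produced per hour of continuous recording."""
    return channels * (bits_per_sample // 8) * sampling_rate_hz * 3600

rate = raw_data_rate_bytes_per_hour(128, 16, 20_000)
print(rate / 2**30)  # ≈ 17.2 GiB per hour, matching the ~17 GB figure
```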

  16. The Perception of Auditory Motion

    PubMed Central

    Leung, Johahn

    2016-01-01

    The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029

  17. Training-Induced Plasticity of Auditory Localization in Adult Mammals

    PubMed Central

    Kacelnik, Oliver; Nodal, Fernando R; Parsons, Carl H

    2006-01-01

    Accurate auditory localization relies on neural computations based on spatial cues present in the sound waves at each ear. The values of these cues depend on the size, shape, and separation of the two ears and can therefore vary from one individual to another. As with other perceptual skills, the neural circuits involved in spatial hearing are shaped by experience during development and retain some capacity for plasticity in later life. However, the factors that enable and promote plasticity of auditory localization in the adult brain are unknown. Here we show that mature ferrets can rapidly relearn to localize sounds after having their spatial cues altered by reversibly occluding one ear, but only if they are trained to use these cues in a behaviorally relevant task, with greater and more rapid improvement occurring with more frequent training. We also found that auditory adaptation is possible in the absence of vision or error feedback. Finally, we show that this process involves a shift in sensitivity away from the abnormal auditory spatial cues to other cues that are less affected by the earplug. The mature auditory system is therefore capable of adapting to abnormal spatial information by reweighting different localization cues. These results suggest that training should facilitate acclimatization to hearing aids in the hearing impaired. PMID:16509769

  18. Effects of musical training on the auditory cortex in children.

    PubMed

    Trainor, Laurel J; Shahin, Antoine; Roberts, Larry E

    2003-11-01

    Several studies of the effects of musical experience on sound representations in the auditory cortex are reviewed. Auditory evoked potentials are compared in response to pure tones, violin tones, and piano tones in adult musicians versus nonmusicians as well as in 4- to 5-year-old children who have either had or not had extensive musical experience. In addition, the effects of auditory frequency discrimination training in adult nonmusicians on auditory evoked potentials are examined. It was found that the P2-evoked response is larger in both adult and child musicians than in nonmusicians and that auditory training enhances this component in nonmusician adults. The results suggest that the P2 is particularly neuroplastic and that the effects of musical experience can be seen early in development. They also suggest that although the effects of musical training on cortical representations may be greater if training begins in childhood, the adult brain is also open to change. These results are discussed with respect to potential benefits of early musical training as well as potential benefits of musical experience in aging.

  19. Branched Projections in the Auditory Thalamocortical and Corticocortical Systems

    PubMed Central

    Kishan, Amar U.; Lee, Charles C.; Winer, Jeffery A.

    2008-01-01

    Branched axons (BAs) projecting to different areas of the brain can create multiple feature-specific maps or synchronize processing in remote targets. We examined the organization of BAs in the cat auditory forebrain using two sensitive retrograde tracers. In one set of experiments (n=4), the tracers were injected into different frequency-matched loci in the primary auditory area (AI) and the anterior auditory field (AAF). In the other set (n=4), we injected primary, non-primary, or limbic cortical areas. After mapped injections, percentages of double labeled cells (PDLs) in the medial geniculate body (MGB) ranged from 1.4% (ventral division) to 2.8% (rostral pole). In both ipsilateral and contralateral areas AI and AAF, the average PDLs were <1%. In the unmapped cases, the MGB PDLs ranged from 0.6% (ventral division) after insular cortex injections to 6.7% (dorsal division) after temporal cortex injections. Cortical PDLs ranged from 0.1% (ipsilateral AI injections) to 3.7% (contralateral AII injections). PDLs within the smaller (minority) projection population were significantly higher than those in the overall population. About 2% of auditory forebrain projection cells have BAs, and such cells are organized differently than those in the subcortical auditory system, where BAs can be far more numerous. Forebrain branched projections follow different organizational rules than their unbranched counterparts. Finally, the relatively larger proportion of visual and somatic sensory forebrain BAs suggests modality-specific rules for BA organization. PMID:18294776

  20. Options for Auditory Training for Adults with Hearing Loss

    PubMed Central

    Olson, Anne D.

    2015-01-01

    Hearing aid devices alone do not adequately compensate for sensory losses despite significant technological advances in digital technology. Overall use rates of amplification among adults with hearing loss remain low, and overall satisfaction and performance in noise can be improved. Although improved technology may partially address some listening problems, auditory training may be another alternative to improve speech recognition in noise and satisfaction with devices. The literature underlying auditory plasticity following placement of sensory devices suggests that additional auditory training may be needed for reorganization of the brain to occur. Furthermore, training may be required to acquire optimal performance from devices. Several auditory training programs that are readily accessible for adults with hearing loss, hearing aids, or cochlear implants are described. Programs that can be accessed via Web-based formats and smartphone technology are reviewed. A summary table is provided for easy access to programs with descriptions of features that allow hearing health care providers to assist clients in selecting the most appropriate auditory training program to fit their needs. PMID:27587915

  1. Insult-induced adaptive plasticity of the auditory system

    PubMed Central

    Gold, Joshua R.; Bajo, Victoria M.

    2014-01-01

    The brain displays a remarkable capacity for both widespread and region-specific modifications in response to environmental challenges, with adaptive processes bringing about the reweighting of connections in neural networks putatively required for optimizing performance and behavior. As an avenue for investigation, studies centered around changes in the mammalian auditory system, extending from the brainstem to the cortex, have revealed a plethora of mechanisms that operate in the context of sensory disruption after insult, be it lesion-, noise trauma-, drug-, or age-related. Of particular interest in recent work are those aspects of auditory processing which, after sensory disruption, change at multiple—if not all—levels of the auditory hierarchy. These include changes in excitatory, inhibitory and neuromodulatory networks, consistent with theories of homeostatic plasticity; functional alterations in gene expression and in protein levels; as well as broader network processing effects with cognitive and behavioral implications. Nevertheless, substantial debate remains regarding which of these processes may only be sequelae of the original insult, and which may, in fact, be maladaptively compelling further degradation of the organism's competence to cope with its disrupted sensory context. In this review, we aim to examine how the mammalian auditory system responds in the wake of particular insults, and to disambiguate how the changes that develop might underlie a correlated class of phantom disorders, including tinnitus and hyperacusis, which putatively are brought about through maladaptive neuroplastic disruptions to auditory networks governing the spatial and temporal processing of acoustic sensory information. PMID:24904256

  2. Multichannel image regularization using anisotropic geodesic filtering

    SciTech Connect

    Grazzini, Jacopo A

    2010-01-01

    This paper extends a recent image-dependent regularization approach aimed at edge-preserving smoothing. For that purpose, geodesic distances equipped with a Riemannian metric need to be estimated in local neighbourhoods. By deriving an appropriate metric from the gradient structure tensor, the associated geodesic paths are constrained to follow salient features in images. We then design a generalized anisotropic geodesic filter, incorporating not only a measure of edge strength, as in the original method, but also further directional information about the image structures. The proposed filter is particularly efficient at smoothing heterogeneous areas while preserving relevant structures in multichannel images.
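    As a rough illustration of the central quantity involved, the gradient structure tensor and a simple edge-strength map (the tensor's eigenvalue difference) can be sketched as follows; the paper's actual metric, tensor smoothing, and geodesic computation are not reproduced here:

```python
import numpy as np

def structure_tensor_metric(image):
    """Per-pixel gradient structure tensor and an edge-strength map.

    A minimal sketch of the kind of quantity from which a Riemannian
    metric for geodesic filtering could be derived; the paper's exact
    metric, smoothing, and geodesic-path computation are omitted.
    """
    gy, gx = np.gradient(image.astype(float))   # image gradients
    jxx, jxy, jyy = gx * gx, gx * gy, gy * gy   # tensor components
    # Eigenvalue difference of [[jxx, jxy], [jxy, jyy]]: large where a
    # single gradient orientation dominates, i.e. on salient edges.
    edge_strength = np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2)
    return jxx, jxy, jyy, edge_strength

# Step edge: strength peaks at the edge and vanishes in flat regions.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
_, _, _, es = structure_tensor_metric(img)
```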

  3. Multichannel euv spectroscopy of high temperature plasmas

    SciTech Connect

    Fonck, R.J.

    1983-11-01

    Spectroscopy of magnetically confined high temperature plasmas in the visible through x-ray spectral ranges deals primarily with the study of impurity line radiation or continuum radiation. Detailed knowledge of absolute intensities, temporal behavior, and spatial distributions of the emitted radiation is desired. As tokamak facilities become more complex, larger, and less accessible, there has been an increased emphasis on developing new instrumentation to provide such information in a minimum number of discharges. The availability of spatially-imaging detectors for use in the vacuum ultraviolet region (especially the intensified photodiode array) has driven the development of a variety of multichannel spectrometers for applications on tokamak facilities.

  4. Multichannel correlation recognition method of optical images

    NASA Astrophysics Data System (ADS)

    Wang, Hongxia; He, Junfa; Sun, Honghui

    2000-10-01

    In this paper, a multi-channel real-time hybrid joint transform correlator is proposed. Under computer control, the screen is divided into several equal-size windows; the reference image in every window is the same, while the object images are taken from different frames of image sequences captured by CCD. The two Fourier transforms of each channel's images are realized using a hololens array. The area of the LCLV and the output light energy are thus used effectively, and the correlation performance is improved.

  5. Multichannel analysis of forward scattered body waves

    NASA Astrophysics Data System (ADS)

    Neal, Scott Lawrence

    We describe a series of innovations which are the basis for a multichannel approach to direct imaging of forward scattered body waves recorded on broadband seismic arrays. The foundation is a method through which the irregularly sampled observed seismograms are interpolated onto an arbitrarily fine grid by means of a convolution between a spatial window function and the actual station locations. The result is a weighted stack which employs all the data to compute a robust and stable multichannel estimate of the wavefield. Deconvolution of the stacked data is shown to be equivalent to a multichannel deconvolution, with spatially variable weights equal to those used in stacking. Application to data from the Lodore array in Colorado and Wyoming shows variations in crustal structure across the array and also images upper mantle discontinuities. A second innovation focuses on the design of deconvolution operators that account for the loss of high-frequency components of P-to-S conversions. Two variants are presented: the first increases linearly with P-to-S lag time; the second is based on convolutional quelling and a t* attenuation model. Both methods account for the high attenuation of S waves in the upper mantle. The quelling approach, however, has two advantages: it is physically based, and it provides a unified framework for the combination of stacking and deconvolution. We apply multichannel stacking to derive three quantities from the observed data and the associated receiver functions: (1) correlation between stacks of the entire array and local subarray stacks, (2) RMS amplitude of the receiver functions, and (3) Pms-to-P amplitude variations. Application of these attributes to data from recent broadband array deployments in southern Africa, Colorado and Wyoming, and the Tien Shan of central Asia shows these attributes to be highly correlated with the geology of the study areas and to be indicative of major lithospheric discontinuities beneath an array.
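    The window-function stacking idea can be illustrated with a minimal 1-D sketch; the Gaussian window, the station geometry, and all parameter names below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def weighted_stack(traces, station_x, grid_x, sigma):
    """Window-weighted stack of station traces onto a fine grid (1-D sketch).

    Illustrative only: the paper's spatial window function, array
    geometry, and normalization are assumptions here, and real array
    coordinates are 2-D.
    """
    traces = np.asarray(traces, float)          # (n_stations, n_samples)
    station_x = np.asarray(station_x, float)    # station coordinates
    stacked = np.zeros((len(grid_x), traces.shape[1]))
    for k, x in enumerate(grid_x):
        # Weight each station by a Gaussian window centred on the grid node.
        w = np.exp(-0.5 * ((station_x - x) / sigma) ** 2)
        stacked[k] = (w / w.sum()) @ traces     # weighted stack at the node
    return stacked

# Two stations with constant traces: the midpoint node averages them,
# while a node at a station recovers (almost exactly) that station's trace.
stack = weighted_stack([[1.0, 1.0], [3.0, 3.0]],
                       station_x=[0.0, 10.0], grid_x=[0.0, 5.0], sigma=1.0)
```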

  6. Lateralization of Auditory Language: An EEG Study of Bilingual Crow Indian Adolescents.

    ERIC Educational Resources Information Center

    Vocate, Donna R.

    A study was undertaken to learn whether involvement of the brain's right hemisphere in auditory language processing, a phenomenon found in a previous study of Crow-English bilinguals, was language-specific. Alpha blocking response as measured by electroencephalography (EEG) was used as an indicator of brain activity. It was predicted that (1)…

  7. The hippocampus may be more susceptible to environmental noise than the auditory cortex.

    PubMed

    Cheng, Liang; Wang, Shao-Hui; Huang, Yun; Liao, Xiao-Mei

    2016-03-01

    Noise exposure can cause structural and functional problems in the auditory cortex (AC) and hippocampus, two brain regions in the auditory and non-auditory systems, respectively. The aim of the present study was to explore which of these two brain regions may be more susceptible to environmental noise. The AC and hippocampus of mice were separated following 1 or 3 weeks of exposure to moderate noise (80 dB SPL, 2 h/day). The levels of oxidative stress and tau phosphorylation were then measured to evaluate the effects of noise. Results showed significant peroxidation and tau hyperphosphorylation in the hippocampus after 1 week of noise exposure. However, the AC did not show significant changes until 3 weeks of exposure. These data suggest that although both the hippocampus and AC were affected by moderate noise exposure, the hippocampus in the non-auditory system may be more vulnerable to environmental noise than the AC.

  8. Super multi-channel recording systems with UWB wireless transmitter for BMI.

    PubMed

    Suzuki, Takafumi; Ando, Hiroshi; Yoshida, Takeshi; Sawahata, Hirohito; Kawasaki, Keisuke; Hasegawa, Isao; Matsushita, Kojiro; Hirata, Masayuki; Yoshimine, Toshiki; Takizawa, Kenichi

    2014-01-01

    In order to realize a low-invasive and high-accuracy Brain-Machine Interface (BMI) system for clinical applications, a super multi-channel recording system was developed in which 4096 channels of Electrocorticogram (ECoG) signal can be amplified and transmitted to outside the body using an Ultra Wide Band (UWB) wireless system. In addition, a high-density flexible electrode array was developed on a Parylene-C substrate, composed of 32-ch recording array units. We have succeeded in an evaluation test of UWB wireless transmission using a body phantom system.

  9. Oscillatory alpha modulations in right auditory regions reflect the validity of acoustic cues in an auditory spatial attention task.

    PubMed

    Weisz, Nathan; Müller, Nadia; Jatzev, Sabine; Bertrand, Olivier

    2014-10-01

    Anticipation of targets in the left or right hemifield leads to alpha modulations in posterior brain areas. Recently, using magnetoencephalography, we showed increased right auditory alpha activity when attention was cued ipsilaterally. Here, we investigated how cue validity itself influences oscillatory alpha activity. Acoustic cues were presented either to the right or left ear, followed by a compound dichotically presented target plus distractor. The preceding cue was either informative (75% validity) or uninformative (50%) about the location of the upcoming target. Cue validity × side-related alpha modulations were identified in pre- and posttarget periods in a right-lateralized network comprising auditory and nonauditory regions. This replicates and extends our previous finding of the right hemispheric dominance of auditory attentional modulations. Importantly, effective connectivity analysis showed that, in the pretarget period, this effect is accompanied by a pronounced and time-varying connectivity pattern of the right auditory cortex to the right intraparietal sulcus (IPS), with the influence of IPS on superior temporal gyrus dominating at earlier intervals of the cue-target period. Our study underlines the assumption that alpha oscillations may play a similar functional role in auditory cortical regions as reported in other sensory modalities and suggests that these effects may be mediated via IPS.

  10. Representations of Pitch and Timbre Variation in Human Auditory Cortex.

    PubMed

    Allen, Emily J; Burton, Philip C; Olman, Cheryl A; Oxenham, Andrew J

    2017-02-01

    Pitch and timbre are two primary dimensions of auditory perception, but how they are represented in the human brain remains a matter of contention. Some animal studies of auditory cortical processing have suggested modular processing, with different brain regions preferentially coding for pitch or timbre, whereas other studies have suggested a distributed code for different attributes across the same population of neurons. This study tested whether variations in pitch and timbre elicit activity in distinct regions of the human temporal lobes. Listeners were presented with sequences of sounds that varied in either fundamental frequency (eliciting changes in pitch) or spectral centroid (eliciting changes in brightness, an important attribute of timbre), with the degree of pitch or timbre variation in each sequence parametrically manipulated. The BOLD responses from auditory cortex increased with increasing sequence variance along each perceptual dimension. The spatial extent, region, and laterality of the cortical regions most responsive to variations in pitch or timbre at the univariate level of analysis were largely overlapping. However, patterns of activation in response to pitch or timbre variations were discriminable in most subjects at an individual level using multivoxel pattern analysis, suggesting a distributed coding of the two dimensions bilaterally in human auditory cortex.

  11. Nonverbal auditory agnosia with lesion to Wernicke’s area

    PubMed Central

    Saygin, Ayse Pinar; Leech, Robert; Dick, Frederic

    2009-01-01

    We report the case of patient M, who suffered unilateral left posterior temporal and parietal damage, brain regions typically associated with language processing. Language function largely recovered since the infarct, with no measurable speech comprehension impairments. However, the patient exhibited a severe impairment in nonverbal auditory comprehension. We carried out extensive audiological and behavioral testing in order to characterize M’s unusual neuropsychological profile. We also examined the patient’s and controls’ neural responses to verbal and nonverbal auditory stimuli using functional magnetic resonance imaging (fMRI). We verified that the patient exhibited persistent and severe auditory agnosia for nonverbal sounds in the absence of verbal comprehension deficits or peripheral hearing problems. Acoustical analyses suggested that his residual processing of a minority of environmental sounds might rely on his speech processing abilities. In the patient’s brain, contralateral (right) temporal cortex as well as perilesional (left) anterior temporal cortex were strongly responsive to verbal, but not to nonverbal sounds, a pattern that stands in marked contrast to the controls’ data. This substantial reorganization of auditory processing likely supported the recovery of M’s speech processing. PMID:19698727

  13. Sounds and beyond: multisensory and other non-auditory signals in the inferior colliculus

    PubMed Central

    Gruters, Kurtis G.; Groh, Jennifer M.

    2012-01-01

    The inferior colliculus (IC) is a major processing center situated mid-way along both the ascending and descending auditory pathways of the brain stem. Although it is fundamentally an auditory area, the IC also receives anatomical input from non-auditory sources. Neurophysiological studies corroborate that non-auditory stimuli can modulate auditory processing in the IC and even elicit responses independent of coincident auditory stimulation. In this article, we review anatomical and physiological evidence for multisensory and other non-auditory processing in the IC. Specifically, the contributions of signals related to vision, eye movements and position, somatosensation, and behavioral context to neural activity in the IC will be described. These signals are potentially important for localizing sound sources, attending to salient stimuli, distinguishing environmental from self-generated sounds, and perceiving and generating communication sounds. They suggest that the IC should be thought of as a node in a highly interconnected sensory, motor, and cognitive network dedicated to synthesizing a higher-order auditory percept rather than simply reporting patterns of air pressure detected by the cochlea. We highlight some of the potential pitfalls that can arise from experimental manipulations that may disrupt the normal function of this network, such as the use of anesthesia or the severing of connections from cortical structures that project to the IC. Finally, we note that the presence of these signals in the IC has implications for our understanding not just of the IC but also of the multitude of other regions within and beyond the auditory system that are dependent on signals that pass through the IC. Whatever the IC “hears” would seem to be passed both “upward” to thalamus and thence to auditory cortex and beyond, as well as “downward” via centrifugal connections to earlier areas of the auditory pathway such as the cochlear nucleus. PMID:23248584

  14. Visual cues release the temporal coherence of auditory objects in auditory scene analysis.

    PubMed

    Rahne, Torsten; Böckmann-Barthel, Martin

    2009-12-01

    Auditory scene analysis can arrange alternating tones of high and low pitch in a single, integrated melody, or in two parallel, segregated melodies, depending on the presentation rate and pitch contrast of the tones. We conducted an electrophysiological experiment to determine whether an inherently stable sound organization can be altered by a synchronous presentation of visual cues. To this end, two tones with different frequencies were presented in alternation. The frequency distance was selected as narrow or wide, inducing an inherently stable integrated or segregated organization, respectively. To modulate the organization toward integration or segregation, visual stimuli were synchronized either to the within-set frequency pattern or to a superimposed intensity pattern. Occasional deviations from the regular frequency pattern were introduced. Elicitation of the mismatch negativity (MMN) component of event-related brain potentials by these deviants indexed the presence of a segregated organization. MMN was elicited by tone sequences with a wide frequency distance irrespective of the presence of visual cues. At a narrow frequency distance, however, an MMN was elicited when the visual pattern promoted segregation of the sounds, indicating that visual stimulation released the inherently stable integrated organization. The results demonstrate cross-modal effects on auditory perceptual organization, even on an inherently stable auditory organization.

  15. Cortical auditory disorders: clinical and psychoacoustic features.

    PubMed Central

    Mendez, M F; Geehan, G R

    1988-01-01

    The symptoms of two patients with bilateral cortical auditory lesions evolved from cortical deafness to other auditory syndromes: generalised auditory agnosia, amusia and/or pure word deafness, and a residual impairment of temporal sequencing. On investigation, both had dysacusis, absent middle latency evoked responses, acoustic errors in sound recognition and matching, inconsistent auditory behaviours, and similarly disturbed psychoacoustic discrimination tasks. These findings indicate that the different clinical syndromes caused by cortical auditory lesions form a spectrum of related auditory processing disorders. Differences between syndromes may depend on the degree of involvement of a primary cortical processing system, the more diffuse accessory system, and possibly the efferent auditory system. PMID:2450968

  16. Norepinephrine Modulates Coding of Complex Vocalizations in the Songbird Auditory Cortex Independent of Local Neuroestrogen Synthesis.

    PubMed

    Ikeda, Maaya Z; Jeon, Sung David; Cowell, Rosemary A; Remage-Healey, Luke

    2015-06-24

    The catecholamine norepinephrine plays a significant role in auditory processing. Most studies to date have examined the effects of norepinephrine on the neuronal response to relatively simple stimuli, such as tones and calls. It is less clear how norepinephrine shapes the detection of complex syntactical sounds, as well as the coding properties of sensory neurons. Songbirds provide an opportunity to understand how auditory neurons encode complex, learned vocalizations, and the potential role of norepinephrine in modulating the neuronal computations for acoustic communication. Here, we infused norepinephrine into the zebra finch auditory cortex and performed extracellular recordings to study the modulation of song representations in single neurons. Consistent with its proposed role in enhancing signal detection, norepinephrine decreased spontaneous activity and firing during stimuli, yet it significantly enhanced the auditory signal-to-noise ratio. These effects were all mimicked by clonidine, an α-2 receptor agonist. Moreover, a pattern classifier analysis indicated that norepinephrine enhanced the ability of single neurons to accurately encode complex auditory stimuli. Because neuroestrogens are also known to enhance auditory processing in the songbird brain, we tested the hypothesis that norepinephrine actions depend on local estrogen synthesis. Neither norepinephrine nor adrenergic receptor antagonist infusion into the auditory cortex had detectable effects on local estradiol levels. Moreover, pretreatment with fadrozole, a specific aromatase inhibitor, did not block norepinephrine's neuromodulatory effects. Together, these findings indicate that norepinephrine enhances signal detection and information encoding for complex auditory stimuli by suppressing spontaneous "noise" activity and that these actions are independent of local neuroestrogen synthesis.
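    The seemingly paradoxical result (lower firing rates, yet higher signal-to-noise) is easy to illustrate with made-up firing rates and one common SNR definition, the ratio of evoked to spontaneous rate; the study's exact measure and the numbers below are assumptions:

```python
# Hypothetical firing rates (Hz); SNR here is the ratio of
# stimulus-evoked to spontaneous firing rate, one common definition
# (the study's exact measure may differ).
def snr(evoked_hz, spontaneous_hz):
    return evoked_hz / spontaneous_hz

baseline_snr = snr(evoked_hz=20.0, spontaneous_hz=10.0)  # 2.0
ne_snr = snr(evoked_hz=12.0, spontaneous_hz=3.0)         # 4.0
# Both rates fall under norepinephrine, yet SNR doubles because the
# spontaneous "noise" rate is suppressed proportionally more.
```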

  17. Effects of Background Music on Objective and Subjective Performance Measures in an Auditory BCI

    PubMed Central

    Zhou, Sijie; Allison, Brendan Z.; Kübler, Andrea; Cichocki, Andrzej; Wang, Xingyu; Jin, Jing

    2016-01-01

    Several studies have explored brain computer interface (BCI) systems based on auditory stimuli, which could help patients with visual impairments. Usability and user satisfaction are important considerations in any BCI. Although background music can influence emotion and performance in other task environments, and many users may wish to listen to music while using a BCI, auditory and other BCIs are typically studied without background music. Some work has explored the possibility of using polyphonic music in auditory BCI systems. However, this approach requires users with good musical skills, and has not been explored in online experiments. Our hypothesis was that an auditory BCI with background music would be preferred by subjects over a similar BCI without background music, without any difference in BCI performance. We introduce a simple paradigm (which does not require musical skill) using percussion instrument sound stimuli and background music, and evaluated it in both offline and online experiments. The results showed that subjects preferred the auditory BCI with background music. Different performance measures did not reveal any significant performance effect when comparing background music vs. no background music. Since the addition of background music does not impair BCI performance but is preferred by users, auditory (and perhaps other) BCIs should consider including it. Our study also indicates that auditory BCIs can be effective even if the auditory channel is simultaneously otherwise engaged. PMID:27790111

  18. Functional Imaging of Auditory Cortex in Adult Cats using High-field fMRI

    PubMed Central

    Brown, Trecia A.; Gati, Joseph S.; Hughes, Sarah M.; Nixon, Pam L.; Menon, Ravi S.; Lomber, Stephen G.

    2014-01-01

    Current knowledge of sensory processing in the mammalian auditory system is mainly derived from electrophysiological studies in a variety of animal models, including monkeys, ferrets, bats, rodents, and cats. In order to draw suitable parallels between human and animal models of auditory function, it is important to establish a bridge between human functional imaging studies and animal electrophysiological studies. Functional magnetic resonance imaging (fMRI) is an established, minimally invasive method of measuring broad patterns of hemodynamic activity across different regions of the cerebral cortex. This technique is widely used to probe sensory function in the human brain, serves as a useful tool for linking studies of auditory processing in humans and animals, and has been successfully used to investigate auditory function in monkeys and rodents. The following protocol describes an experimental procedure for investigating auditory function in anesthetized adult cats by measuring stimulus-evoked hemodynamic changes in auditory cortex using fMRI. This method facilitates comparison of the hemodynamic responses across different models of auditory function, thus leading to a better understanding of species-independent features of the mammalian auditory cortex. PMID:24637937

  20. Effects of chronic stress on the auditory system and fear learning: an evolutionary approach.

    PubMed

    Dagnino-Subiabre, Alexies

    2013-01-01

    Stress is a complex biological reaction common to all living organisms that allows them to adapt to their environments. Chronic stress alters the dendritic architecture and function of the limbic brain areas that affect memory, learning, and emotional processing. This review summarizes our research about chronic stress effects on the auditory system, providing the details of how we developed the main hypotheses that currently guide our research. The aims of our studies are to (1) determine how chronic stress impairs the dendritic morphology of the main nuclei of the rat auditory system, the inferior colliculus (auditory mesencephalon), the medial geniculate nucleus (auditory thalamus), and the primary auditory cortex; (2) correlate the anatomic alterations with the impairments of auditory fear learning; and (3) investigate how the stress-induced alterations in the rat limbic system may spread to nonlimbic areas, affecting specific sensory systems, such as the auditory and olfactory systems, and complex cognitive functions, such as auditory attention. Finally, this article gives a new evolutionary approach to understanding the neurobiology of stress and the stress-related disorders.

  1. Coffee improves auditory neuropathy in diabetic mice.

    PubMed

    Hong, Bin Na; Yi, Tae Hoo; Park, Raekil; Kim, Sun Yeou; Kang, Tong Ho

    2008-08-29

    Coffee is a widely consumed beverage and has recently received considerable attention for its possible beneficial effects. Auditory neuropathy is a hearing disorder characterized by an abnormal auditory brainstem response. This study examined the auditory neuropathy induced by diabetes and investigated the action of coffee, trigonelline, and caffeine to determine whether they improved diabetic auditory neuropathy in mice. Auditory brainstem responses, auditory middle latency responses, and otoacoustic emissions were evaluated to assess auditory neuropathy. Coffee or trigonelline ameliorated the hearing threshold shift and delayed latency of the auditory evoked potential in diabetic neuropathy. These findings demonstrate that diabetes can produce a mouse model of auditory neuropathy and that coffee consumption potentially facilitates recovery from diabetes-induced auditory neuropathy. Furthermore, the active constituent in coffee may be trigonelline.

  2. Auditory brainstem responses and auditory thresholds in woodpeckers.

    PubMed

    Lohr, Bernard; Brittan-Powell, Elizabeth F; Dooling, Robert J

    2013-01-01

    Auditory sensitivity in three species of woodpeckers was estimated using the auditory brainstem response (ABR), a measure of the summed electrical activity of auditory neurons. For all species, the ABR waveform showed at least two, and sometimes three, prominent peaks occurring within 10 ms of stimulus onset. Also, ABR peak amplitude increased and latency decreased with increasing sound pressure level. Results showed no significant differences in overall auditory abilities between the three species of woodpeckers. The average ABR audiogram showed that woodpeckers have lowest thresholds between 1.5 and 5.7 kHz. The shape of the average woodpecker ABR audiogram was similar to the shape of the ABR-measured audiograms of other small birds at most frequencies, but at the highest frequency, the data suggest that woodpecker thresholds may be lower than those of domesticated birds, while similar to those of wild birds.

  3. The cortical language circuit: from auditory perception to sentence comprehension.

    PubMed

    Friederici, Angela D

    2012-05-01

    Over the years, a large body of work on the brain basis of language comprehension has accumulated, paving the way for the formulation of a comprehensive model. The model proposed here describes the functional neuroanatomy of the different processing steps from auditory perception to comprehension as located in different gray matter brain regions. It also specifies the information flow between these regions, taking into account white matter fiber tract connections. Bottom-up, input-driven processes proceeding from the auditory cortex to the anterior superior temporal cortex and from there to the prefrontal cortex, as well as top-down, controlled and predictive processes from the prefrontal cortex back to the temporal cortex are proposed to constitute the cortical language circuit.

  4. Dynamic auditory processing, musical experience and language development.

    PubMed

    Tallal, Paula; Gaab, Nadine

    2006-07-01

    Children with language-learning impairments (LLI) form a heterogeneous population with the majority having both spoken and written language deficits as well as sensorimotor deficits, specifically those related to dynamic processing. Research has focused on whether or not sensorimotor deficits, specifically auditory spectrotemporal processing deficits, cause phonological deficit, leading to language and reading impairments. New trends aimed at resolving this question include prospective longitudinal studies of genetically at-risk infants, electrophysiological and neuroimaging studies, and studies aimed at evaluating the effects of auditory training (including musical training) on brain organization for language. Better understanding of the origins of developmental LLI will advance our understanding of the neurobiological mechanisms underlying individual differences in language development and lead to more effective educational and intervention strategies. This review is part of the INMED/TINS special issue "Nature and nurture in brain development and neurological disorders", based on presentations at the annual INMED/TINS symposium (http://inmednet.com/).

  5. Auditory perspective taking.

    PubMed

    Martinson, Eric; Brock, Derek

    2013-06-01

    Effective communication with a mobile robot using speech is a difficult problem even when the auditory scene can be controlled. Robot self-noise or ego noise, echoes and reverberation, and human interference are all common sources of decreased intelligibility. Moreover, in real-world settings, these problems are routinely aggravated by a variety of sources of background noise. Military scenarios can be punctuated by high decibel noise from materiel and weaponry that would easily overwhelm a robot's normal speaking volume. Moreover, in nonmilitary settings, fans, computers, alarms, and transportation noise can cause enough interference to make a traditional speech interface unusable. This work presents and evaluates a prototype robotic interface that uses perspective taking to estimate the effectiveness of its own speech presentation and takes steps to improve intelligibility for human listeners.

  6. Auditory-olfactory synesthesia coexisting with auditory-visual synesthesia.

    PubMed

    Jackson, Thomas E; Sandramouli, Soupramanien

    2012-09-01

    Synesthesia is an unusual condition in which stimulation of one sensory modality causes an experience in another sensory modality or when a sensation in one sensory modality causes another sensation within the same modality. We describe a previously unreported association of auditory-olfactory synesthesia coexisting with auditory-visual synesthesia. Given that many types of synesthesias involve vision, it is important that the clinician provide these patients with the necessary information and support that is available.

  7. Auditory evoked field measurement using magneto-impedance sensors

    SciTech Connect

    Wang, K. Tajima, S.; Song, D.; Uchiyama, T.; Hamada, N.; Cai, C.

    2015-05-07

    The magnetic field of the human brain is extremely weak and is typically measured and monitored by magnetoencephalography using superconducting quantum interference devices (SQUIDs). In this study, in order to measure the weak magnetic field of the brain, we constructed a Magneto-Impedance sensor (MI sensor) system that can cancel out the background noise without any magnetic shield. Based on our previous studies of brain wave measurements, we used two MI sensors in this system to monitor both cerebral hemispheres. We recorded and compared the auditory evoked field signals of the subject, including the N100 (or N1) and the P300 (or P3) brain waves. The results suggest that the MI sensor can be applied to brain activity measurement.

  8. Auditory evoked field measurement using magneto-impedance sensors

    NASA Astrophysics Data System (ADS)

    Wang, K.; Tajima, S.; Song, D.; Hamada, N.; Cai, C.; Uchiyama, T.

    2015-05-01

    The magnetic field of the human brain is extremely weak and is typically measured and monitored by magnetoencephalography using superconducting quantum interference devices (SQUIDs). In this study, in order to measure the weak magnetic field of the brain, we constructed a Magneto-Impedance sensor (MI sensor) system that can cancel out the background noise without any magnetic shield. Based on our previous studies of brain wave measurements, we used two MI sensors in this system to monitor both cerebral hemispheres. We recorded and compared the auditory evoked field signals of the subject, including the N100 (or N1) and the P300 (or P3) brain waves. The results suggest that the MI sensor can be applied to brain activity measurement.

  9. Nonlinear Auditory Modeling as a Basis for Speaker Recognition

    DTIC Science & Technology

    2010-08-26

    development of new "common modulation" features based on modeling a more central region of auditory processing in the brain’s inferior colliculus...performance improvements have been achieved by estimating the onset times of secondary excitation pulses within glottal cycles . Here we had assumed...secondary excitations (per glottal cycle ) were associated with a nonlinear production model, e.g., multiple vocal fold vibrations or sound generation by

  10. Silent music reading: auditory imagery and visuotonal modality transfer in singers and non-singers.

    PubMed

    Hoppe, Christian; Splittstößer, Christoph; Fliessbach, Klaus; Trautner, Peter; Elger, Christian E; Weber, Bernd

    2014-11-01

    In daily life, responses are often facilitated by anticipatory imagery of expected targets which are announced by associated stimuli from different sensory modalities. Silent music reading represents an intriguing case of visuotonal modality transfer in working memory as it induces highly defined auditory imagery on the basis of presented visuospatial information (i.e. musical notes). Using functional MRI and a delayed sequence matching-to-sample paradigm, we compared brain activations during retention intervals (10s) of visual (VV) or tonal (TT) unimodal maintenance versus visuospatial-to-tonal modality transfer (VT) tasks. Visual or tonal sequences were comprised of six elements, white squares or tones, which were low, middle, or high regarding vertical screen position or pitch, respectively (presentation duration: 1.5s). For the cross-modal condition (VT, session 3), the visuospatial elements from condition VV (session 1) were re-defined as low, middle or high "notes" indicating low, middle or high tones from condition TT (session 2), respectively, and subjects had to match tonal sequences (probe) to previously presented note sequences. Tasks alternately had low or high cognitive load. To evaluate possible effects of music reading expertise, 15 singers and 15 non-musicians were included. Scanner task performance was excellent in both groups. Despite identity of applied visuospatial stimuli, visuotonal modality transfer versus visual maintenance (VT>VV) induced "inhibition" of visual brain areas and activation of primary and higher auditory brain areas which exceeded auditory activation elicited by tonal stimulation (VT>TT). This transfer-related visual-to-auditory activation shift occurred in both groups but was more pronounced in experts. Frontoparietal areas were activated by higher cognitive load but not by modality transfer. The auditory brain showed a potential to anticipate expected auditory target stimuli on the basis of non-auditory information and

  11. Multi-channel scanning SQUID microscopy

    NASA Astrophysics Data System (ADS)

    Lee, Su-Young

    I designed, fabricated, assembled, and tested an 8-channel high-Tc scanning SQUID system. I started by modifying an existing single-channel 77 K high-Tc scanning SQUID microscope into a multi-channel system with the goal of reducing the scanning time and improving the spatial resolution by increasing the signal-to-noise ratio S/N. I modified the window assembly, SQUID chip assembly, cold-finger, and vacuum connector. The main concerns for the multi-channel system design were to reduce interaction between channels, to optimize the use of the inside space of the dewar for more than 50 shielded wires, and to achieve good spatial resolution. In the completed system, I obtained the transfer function and the dynamic range (Φmax ≈ 11Φ0) for each SQUID. At 1 kHz, the slew rate is about 3000 Φ0/s. I also found that the white noise level varies from 5 μΦ0/√Hz to 20 μΦ0/√Hz, depending on the SQUID. A new data acquisition program was written that triggered on position and collected data from up to eight SQUIDs. To generate a single image from the multi-channel system, I calibrated the tilt of the xy-stage and z-stage manually, rearranged the scanned data by cutting overlapping parts, and determined the applied field by multiplying by the mutual inductance matrix. I found that I could reduce scanning time and improve the image quality by doing so. In addition, I have analyzed and observed the effect of position noise on magnetic field images and used these results to find the position noise in my scanning SQUID microscope. My analysis reveals the relationship between spatial resolution and position noise and shows that my system was dominated by position noise under typical operating conditions. I found that the smaller the sensor-sample separation, the greater the effect of position noise is on the total effective magnetic field noise and on spatial resolution. By averaging several scans, I found that I could reduce position noise and that the spatial resolution can
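    The field-reconstruction step described in this record (multiplying by the mutual inductance matrix) amounts to solving a small linear system at each scan position. A minimal NumPy sketch, with a hypothetical 8×8 coupling matrix and simulated flux readings (none of the numerical values are taken from the thesis):

    ```python
    import numpy as np

    # Hypothetical mutual-inductance matrix M (flux per unit applied field),
    # with small off-diagonal terms modelling inter-channel coupling.
    n = 8
    rng = np.random.default_rng(0)
    M = np.eye(n) + 0.02 * rng.standard_normal((n, n))

    # Simulated "true" applied field at each sensor, and the flux the
    # coupled SQUIDs would report for one scan position.
    B_true = rng.standard_normal(n)
    Phi = M @ B_true

    # Recover the applied field by inverting the coupling: B = M^{-1} Phi.
    B_est = np.linalg.solve(M, Phi)
    assert np.allclose(B_est, B_true)
    ```

    In practice M would be measured during calibration, and the same solve would be applied at every pixel of the scan.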

  12. Classroom Demonstrations of Auditory Perception.

    ERIC Educational Resources Information Center

    Haws, LaDawn; Oppy, Brian J.

    2002-01-01

    Presents activities to help students gain understanding about auditory perception. Describes demonstrations that cover topics, such as sound localization, wave cancellation, frequency/pitch variation, and the influence of media on sound propagation. (CMK)

  13. Auditory Processing Disorder (For Parents)

    MedlinePlus

    ... or other speech-language difficulties? Are verbal (word) math problems difficult for your child? Is your child ... inferences from conversations, understanding riddles, or comprehending verbal math problems — require heightened auditory processing and language levels. ...

  14. Auditory Processing Disorder in Children

    MedlinePlus

    ... free publications Find organizations Related Topics Auditory Neuropathy Autism Spectrum Disorder: Communication Problems in Children Dysphagia Quick Statistics About Voice, Speech, Language Speech and Language Developmental Milestones What Is ...

  15. The ability of the auditory system to cope with temporal subsampling depends on the hierarchical level of processing.

    PubMed

    Zoefel, Benedikt; Reddy Pasham, Naveen; Brüers, Sasskia; VanRullen, Rufin

    2015-09-09

    Evidence for rhythmic or 'discrete' sensory processing is abundant for the visual system, but sparse and inconsistent for the auditory system. Fundamental differences in the nature of visual and auditory inputs might account for this discrepancy: whereas the visual system mainly relies on spatial information, time might be the most important factor for the auditory system. In contrast to vision, temporal subsampling (i.e. taking 'snapshots') of the auditory input stream might thus prove detrimental for the brain as essential information would be lost. Rather than embracing the view of continuous auditory processing, we recently proposed that discrete 'perceptual cycles' might exist in the auditory system, but on a hierarchically higher level of processing, involving temporally more stable features. This proposal leads to the prediction that the auditory system would be more robust to temporal subsampling when applied on a 'high-level' decomposition of auditory signals. To test this prediction, we constructed speech stimuli that were subsampled at different frequencies, either at the input level (following a wavelet transform) or at the level of auditory features (on the basis of LPC vocoding), and presented them to human listeners. Auditory recognition was significantly more robust to subsampling in the latter case, that is, on a relatively high level of auditory processing. Although our results do not directly demonstrate perceptual cycles in the auditory domain, they (a) show that their existence is possible without disrupting temporal information to a critical extent and (b) confirm our proposal that, if they do exist, they should operate on a higher level of auditory processing.
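    The abstract's core contrast (subsampling the raw input stream versus subsampling slowly varying features) can be illustrated with a toy NumPy sketch. Here a slow amplitude envelope stands in for a 'high-level' feature; all signal parameters are hypothetical and not taken from the study:

    ```python
    import numpy as np

    fs = 8000                                         # original sampling rate (Hz)
    t = np.arange(0, 1, 1 / fs)
    carrier = np.sin(2 * np.pi * 1017 * t)            # fast ~1 kHz carrier
    envelope = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))  # slow 4 Hz "feature"
    x = envelope * carrier

    idx = np.arange(0, len(t), fs // 50)              # 50 Hz "snapshots"

    # Low-level snapshots: subsample the raw waveform (the carrier aliases badly).
    env_from_wave = np.interp(t, t[idx], np.abs(x[idx]))

    # High-level snapshots: subsample the slow feature track instead
    # (50 Hz is ample for a 4 Hz feature).
    env_from_feat = np.interp(t, t[idx], envelope[idx])

    err_wave = np.mean((env_from_wave - envelope) ** 2)
    err_feat = np.mean((env_from_feat - envelope) ** 2)
    assert err_feat < err_wave    # feature-level subsampling preserves more
    ```

    The same logic underlies the stimuli in the study: subsampling applied after a high-level decomposition discards far less task-relevant information than subsampling the waveform itself.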

  16. Auditory Perception of Complex Sounds.

    DTIC Science & Technology

    1987-10-30

    processes that underlie several aspects of complex pattern recog- nition -- whether of speech, of music , or of environmental sounds. These patterns differ...quality or timbre can play similar grouping roles in auditory steams. Most of the experimental work has concerned timing of successive sounds in sequences...auditory perceptual processes that underlie several aspects of complex pattern recognition - whether of speech, of music , or of environmental sounds

  17. Fault analysis of multichannel spacecraft power systems

    NASA Technical Reports Server (NTRS)

    Dugal-Whitehead, Norma R.; Lollar, Louis F.

    1990-01-01

    The NASA Marshall Space Flight Center proposes to implement computer-controlled fault injection into an electrical power system breadboard to study the reactions of the various control elements of this breadboard. Elements under study include the remote power controllers, the algorithms in the control computers, and the artificially intelligent control programs resident in this breadboard. To this end, a study of electrical power system faults is being performed to yield a list of the most common power system faults. The results of this study will be applied to a multichannel high-voltage DC spacecraft power system called the large autonomous spacecraft electrical power system (LASEPS) breadboard. The results of the power system fault study and the planned implementation of these faults into the LASEPS breadboard are described.

  18. Photonic generation for multichannel THz wireless communication.

    PubMed

    Shams, Haymen; Fice, Martyn J; Balakier, Katarzyna; Renaud, Cyril C; van Dijk, Frédéric; Seeds, Alwyn J

    2014-09-22

    We experimentally demonstrate photonic generation of a multichannel THz wireless signal at a carrier frequency of 200 GHz, with data rates up to 75 Gbps in QPSK modulation format, using an optical heterodyne technique and digital coherent detection. BER measurements were carried out for three subcarriers each modulated with 5 Gbaud QPSK, or for two subcarriers modulated with 10 Gbaud QPSK, giving total rates of 30 Gbps and 40 Gbps, respectively. The system was also evaluated with three subcarriers modulated with 12.5 Gbaud QPSK (75 Gbps total), with and without 40 km of fibre transmission. The proposed system enhances the capacity of high-speed THz wireless transmission by using spectrally efficient modulated subcarriers spaced at the baud rate. This approach increases the overall transmission capacity and reduces the bandwidth requirement for electronic devices.
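    The quoted aggregate rates follow directly from the modulation format: QPSK carries 2 bits per symbol, so the total rate is subcarriers × baud rate × 2. A one-line sanity check (the function name is illustrative, not from the paper):

    ```python
    # QPSK encodes 2 bits per symbol, so each subcarrier contributes 2x its baud rate.
    BITS_PER_QPSK_SYMBOL = 2

    def aggregate_rate_gbps(n_subcarriers, gbaud):
        return n_subcarriers * gbaud * BITS_PER_QPSK_SYMBOL

    print(aggregate_rate_gbps(3, 5))     # 30 Gbps
    print(aggregate_rate_gbps(2, 10))    # 40 Gbps
    print(aggregate_rate_gbps(3, 12.5))  # 75.0 Gbps
    ```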

  19. Genetics of isolated auditory neuropathies.

    PubMed

    Del Castillo, Francisco J; Del Castillo, Ignacio

    2012-01-01

    Auditory neuropathies are disorders combining absent or abnormal auditory brainstem responses with preserved otoacoustic emissions and/or cochlear microphonics. These features indicate a normal function of cochlear outer hair cells. Thus, the primary lesion might be located in the inner hair cells, in the auditory nerve or in the intervening synapse. Auditory neuropathy is observed in up to 10 percent of deaf infants and children, either as part of some systemic neurodegenerative diseases or as an isolated entity. Research on the genetic causes of isolated auditory neuropathies has been remarkably successful in the last few years. Here we review the current knowledge on the structure, expression and function of the genes and proteins so far known to be involved in these disorders, as well as the clinical features that are associated with mutations in the different genes. This knowledge permits the classification of isolated auditory neuropathies into etiologically homogeneous types, thus providing clues for the better diagnosis, management and therapy of affected subjects.

  20. Multi-sensory integration in brainstem and auditory cortex.

    PubMed

    Basura, Gregory J; Koehler, Seth D; Shore, Susan E

    2012-11-16

    Tinnitus is the perception of sound in the absence of a physical sound stimulus. It is thought to arise from aberrant neural activity within central auditory pathways that may be influenced by multiple brain centers, including the somatosensory system. Auditory-somatosensory (bimodal) integration occurs in the dorsal cochlear nucleus (DCN), where electrical activation of somatosensory regions alters the spike timing and firing rates of pyramidal cell responses to sound stimuli. Moreover, in conditions of tinnitus, bimodal integration in DCN is enhanced, producing greater spontaneous and sound-driven neural activity, which are neural correlates of tinnitus. In primary auditory cortex (A1), a similar auditory-somatosensory integration has been described in the normal system (Lakatos et al., 2007), where sub-threshold multisensory modulation may be a direct reflection of subcortical multisensory responses (Tyll et al., 2011). The present work utilized simultaneous recordings from both DCN and A1 to directly compare bimodal integration across these separate brain stations of the intact auditory pathway. Four-shank, 32-channel electrodes were placed in DCN and A1 to simultaneously record tone-evoked unit activity in the presence and absence of spinal trigeminal nucleus (Sp5) electrical activation. Bimodal stimulation led to long-lasting facilitation or suppression of single and multi-unit responses to subsequent sound in both DCN and A1. Immediate (bimodal response) and long-lasting (bimodal plasticity) effects of Sp5-tone stimulation were facilitation or suppression of tone-evoked firing rates in DCN and A1 at all Sp5-tone pairing intervals (10, 20, and 40 ms), with greater suppression at 20 ms pairing intervals for single unit responses. Understanding the complex relationships between DCN and A1 bimodal processing in the normal animal provides the basis for studying its disruption in hearing loss and tinnitus models. This article is part of a Special Issue entitled: Tinnitus Neuroscience.

  1. Representation of speech in human auditory cortex: is it special?

    PubMed

    Steinschneider, Mitchell; Nourski, Kirill V; Fishman, Yonatan I

    2013-11-01

    Successful categorization of phonemes in speech requires that the brain analyze the acoustic signal along both spectral and temporal dimensions. Neural encoding of the stimulus amplitude envelope is critical for parsing the speech stream into syllabic units. Encoding of voice onset time (VOT) and place of articulation (POA), cues necessary for determining phonemic identity, occurs within shorter time frames. An unresolved question is whether the neural representation of speech is based on processing mechanisms that are unique to humans and shaped by learning and experience, or is based on rules governing general auditory processing that are also present in non-human animals. This question was examined by comparing the neural activity elicited by speech and other complex vocalizations in primary auditory cortex of macaques, which are limited vocal learners, with that in Heschl's gyrus, the putative location of primary auditory cortex in humans. Entrainment to the amplitude envelope is neither specific to humans nor to human speech. VOT is represented by responses time-locked to consonant release and voicing onset in both humans and monkeys. Temporal representation of VOT is observed both for isolated syllables and for syllables embedded in the more naturalistic context of running speech. The fundamental frequency of male speakers is represented by more rapid neural activity phase-locked to the glottal pulsation rate in both humans and monkeys. In both species, the differential representation of stop consonants varying in their POA can be predicted by the relationship between the frequency selectivity of neurons and the onset spectra of the speech sounds. These findings indicate that the neurophysiology of primary auditory cortex is similar in monkeys and humans despite their vastly different experience with human speech, and that Heschl's gyrus is engaged in general auditory, and not language-specific, processing. This article is part of a Special Issue entitled

  2. Wireless multichannel electroencephalography in the newborn

    PubMed Central

    Ibrahim, Z.H.; Chari, G.; Abdel Baki, S.; Bronshtein, V.; Kim, M.R.; Weedon, J.; Cracco, J.; Aranda, J.V.

    2016-01-01

    OBJECTIVES: First, to determine the feasibility of an ultra-compact wireless device (microEEG) to obtain multichannel electroencephalographic (EEG) recording in the Neonatal Intensive Care Unit (NICU). Second, to identify problem areas in order to improve wireless EEG performance. STUDY DESIGN: 28 subjects (gestational age 24–30 weeks, postnatal age <30 days) were recruited at 2 sites as part of an ongoing study of neonatal apnea and wireless EEG. Infants underwent 8–9 hour EEG recordings every 2–4 weeks using an electrode cap (ANT-Neuro) connected to the wireless EEG device (Bio-Signal Group). A 23-electrode configuration was used incorporating the International 10–20 System. The device transmitted recordings wirelessly to a laptop computer for bedside assessment. The recordings were assessed by a pediatric neurophysiologist for interpretability. RESULTS: A total of 84 EEGs were recorded from 28 neonates. 61 EEG studies were obtained in infants prior to 35 weeks corrected gestational age (CGA). NICU staff placed all electrode caps and initiated all recordings. Of these recordings, 6 (10%) were uninterpretable due to artifacts, and one study could not be accessed. The remaining 54 (89%) EEG recordings were acceptable for clinical review and interpretation by a pediatric neurophysiologist. Of the recordings obtained at 35 weeks corrected gestational age or later, only 11 of 23 (48%) were interpretable. CONCLUSIONS: Wireless EEG devices can provide practical, continuous, multichannel EEG monitoring in preterm neonates. Their small size and ease of use could overcome obstacles associated with EEG recording and interpretation in the NICU. PMID:28009337

  3. Programmable auditory stimulus generator and electro-acoustic transducers--measurements of sound pressure in an artificial ear and human ear canal.

    PubMed

    Maurer, K; Schröder, K; Schäfer, E

    1984-07-01

    Measurements of the sound pressure waveforms of different headphones revealed considerable differences between an artificial ear and the human external auditory canal. This concerns, in particular, the pattern of stimuli used to elicit the auditory nerve and brain-stem auditory evoked potentials. By varying the electrical input to the headphones by means of a programmable stimulus generator, it can be shown that the sound pressure waveform can be influenced considerably.

  4. Auditory Processing, Plasticity, and Learning in the Barn Owl

    PubMed Central

    Peña, José L.; DeBello, William M.

    2011-01-01

    The human brain has accumulated many useful building blocks over its evolutionary history, and the best knowledge of these has often derived from experiments performed in animal species that display finely honed abilities. In this article we review a model system at the forefront of investigation into the neural bases of information processing, plasticity, and learning: the barn owl auditory localization pathway. In addition to the broadly applicable principles gleaned from three decades of work in this system, there are good reasons to believe that continued exploration of the owl brain will be invaluable for further advances in understanding of how neuronal networks give rise to behavior. PMID:21131711

  5. Biomedical Simulation Models of Human Auditory Processes

    NASA Technical Reports Server (NTRS)

    Bicak, Mehmet M. A.

    2012-01-01

    Detailed acoustic engineering models were developed to explore the noise propagation mechanisms associated with noise attenuation and the transmission paths created when hearing protectors such as earplugs and headsets are used in high-noise environments. Biomedical finite element (FE) models are developed based on volume Computed Tomography scan data which provides explicit external ear, ear canal, middle ear ossicular bones and cochlea geometry. Results from these studies have enabled a greater understanding of hearing protector to flesh dynamics as well as prioritizing noise propagation mechanisms. Prioritization of noise mechanisms can form an essential framework for exploration of new design principles and methods in both earplug and earcup applications. These models are currently being used in development of a novel hearing protection evaluation system that can provide experimentally correlated psychoacoustic noise attenuation. Moreover, these FE models can be used to simulate the effects of blast related impulse noise on human auditory mechanisms and brain tissue.

  6. Theta, beta and gamma rate modulations in the developing auditory system.

    PubMed

    Vanvooren, Sophie; Hofmann, Michael; Poelmans, Hanne; Ghesquière, Pol; Wouters, Jan

    2015-09-01

    In the brain, the temporal analysis of many important auditory features relies on the synchronized firing of neurons to the auditory input rhythm. These so-called neural oscillations play a crucial role in sensory and cognitive processing, and deviations in oscillatory activity have been shown to be associated with neurodevelopmental disorders. Given the importance of neural auditory oscillations in normal and impaired sensory and cognitive functioning, there has been growing interest in their developmental trajectory from early childhood on. In the present study, neural auditory processing was investigated in typically developing young children (n = 40) and adults (n = 27). In all participants, auditory evoked theta, beta and gamma responses were recorded. The results of this study show maturational differences between children and adults in neural auditory processing at cortical as well as at brainstem level. Neural background noise at cortical level was shown to be higher in children compared to adults. In addition, higher theta response amplitudes were measured in children compared to adults. For beta and gamma rate modulations, different processing asymmetry patterns were observed between both age groups. The mean response phase was also shown to differ significantly between children and adults for all rates. Results suggest that cortical auditory processing of beta develops from a general processing pattern into a more specialized asymmetric processing preference over age. Moreover, the results indicate an enhancement of bilateral representation of monaural sound input at brainstem with age. A dissimilar efficiency of auditory signal transmission from brainstem to cortex along the auditory pathway between children and adults is suggested. These developmental differences might be due to both functional experience-dependent as well as anatomical changes. The findings of the present study offer important information about maturational differences between children

  7. Frontal top-down signals increase coupling of auditory low-frequency oscillations to continuous speech in human listeners.

    PubMed

    Park, Hyojin; Ince, Robin A A; Schyns, Philippe G; Thut, Gregor; Gross, Joachim

    2015-06-15

    Humans show a remarkable ability to understand continuous speech even under adverse listening conditions. This ability critically relies on dynamically updated predictions of incoming sensory information, but exactly how top-down predictions improve speech processing is still unclear. Brain oscillations are a likely mechanism for these top-down predictions [1, 2]. Quasi-rhythmic components in speech are known to entrain low-frequency oscillations in auditory areas [3, 4], and this entrainment increases with intelligibility [5]. We hypothesize that top-down signals from frontal brain areas causally modulate the phase of brain oscillations in auditory cortex. We use magnetoencephalography (MEG) to monitor brain oscillations in 22 participants during continuous speech perception. We characterize prominent spectral components of speech-brain coupling in auditory cortex and use causal connectivity analysis (transfer entropy) to identify the top-down signals driving this coupling more strongly during intelligible speech than during unintelligible speech. We report three main findings. First, frontal and motor cortices significantly modulate the phase of speech-coupled low-frequency oscillations in auditory cortex, and this effect depends on intelligibility of speech. Second, top-down signals are significantly stronger for left auditory cortex than for right auditory cortex. Third, speech-auditory cortex coupling is enhanced as a function of stronger top-down signals. Together, our results suggest that low-frequency brain oscillations play a role in implementing predictive top-down control during continuous speech perception and that top-down control is largely directed at left auditory cortex. This suggests a close relationship between (left-lateralized) speech production areas and the implementation of top-down control in continuous speech perception.
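The transfer entropy used above for causal connectivity has a simple information-theoretic definition: the extra information the past of one signal carries about the next sample of another signal, beyond that signal's own past. As a minimal sketch (not the MEG pipeline of the study, which operates on continuous oscillatory phase signals), a discrete estimator with history length 1:

```python
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Discrete transfer entropy TE(X -> Y), in bits, history length 1.

    Measures how much the past of x improves prediction of the next
    value of y beyond what y's own past already predicts.
    """
    triples = list(zip(y[1:], y[:-1], x[:-1]))  # (y_next, y_past, x_past)
    n = len(triples)
    c_xyz = Counter(triples)
    c_yz = Counter((yn, yp) for yn, yp, _ in triples)
    c_z = Counter((yp, xp) for _, yp, xp in triples)
    c_y = Counter(yp for _, yp, _ in triples)
    te = 0.0
    for (yn, yp, xp), cnt in c_xyz.items():
        p_xyz = cnt / n
        num = cnt / c_z[(yp, xp)]            # p(y_next | y_past, x_past)
        den = c_yz[(yn, yp)] / c_y[yp]       # p(y_next | y_past)
        te += p_xyz * log2(num / den)
    return te

# Toy directed system: y copies x with a one-sample lag, so information
# flows x -> y and the estimate is asymmetric in that direction.
x = [0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1] * 20
y = [0] + x[:-1]
assert transfer_entropy(x, y) > transfer_entropy(y, x)
```

Because y is a lagged copy of x, knowing x's past determines y's next value exactly, while the reverse direction carries much less extra information.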

  8. Determining hierarchical functional networks from auditory stimuli fMRI.

    PubMed

    Patel, Rajan S; Bowman, F Dubois; Rilling, James K

    2006-05-01

    We determined connectivity of the human brain using functional magnetic resonance imaging (fMRI) while subjects experienced auditory stimuli in a 2-by-2 factorial design. The two factors in this study were "speaker" (same or different speaker) and "sentence" (same or different sentences). Connectivity studies allow us to ask how spatially remote brain regions are neurophysiologically related given these stimuli. In the context of this study, we examined how the "speaker" effect and "sentence" effect influenced these relationships. We applied a Bayesian connectivity method that determines hierarchical functional networks of functionally connected brain regions. Hierarchy in these functional networks is determined by conditional probabilities of elevated activity. For example, a brain region that becomes active during a superset of the times at which another region is active is considered ascendant to that region in the hierarchical network. For each factor level, we found a baseline functional network connecting the primary auditory cortex (Brodmann's Area [BA] 41) with BA 42 and BA 22 of the superior temporal gyrus (STG). We also found a baseline functional network that includes Wernicke's Area (BA 22 posterior), STG, and BA 44 for each factor level. However, we additionally observed a strong ascendant connection from BA 41 to the posterior cingulate (BA 30) and Broca's Area and a stronger connection from Wernicke's Area to STG and the posterior cingulate while passively listening to different sentences rather than the same sentence repeatedly. Finally, our results revealed no significant "speaker" effect or interaction between "speaker" and "sentence."
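The ascendancy rule described above, where a region active during a superset of another region's active times sits above it in the hierarchy, can be illustrated with a toy check on binary activation indicators. The function name and thresholded count are hypothetical stand-ins; the actual method is a Bayesian network model, not this direct count:

```python
def is_ascendant(a, b, tol=1.0):
    """Region A is ascendant to region B if A is active (essentially)
    whenever B is active, i.e. P(A active | B active) >= tol.

    a, b: equal-length sequences of 0/1 activation indicators per scan.
    """
    b_active = [t for t, v in enumerate(b) if v]
    if not b_active:
        return False
    hits = sum(a[t] for t in b_active)
    return hits / len(b_active) >= tol

# Toy example: A is active at a superset of the scans where B is
# active, so A sits above B in the hierarchical network.
a = [1, 1, 0, 1, 1, 0, 1, 0]
b = [1, 0, 0, 0, 1, 0, 1, 0]
assert is_ascendant(a, b)       # A active whenever B is
assert not is_ascendant(b, a)   # but not vice versa
```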

  9. Pre-target axon sorting in the avian auditory brainstem

    PubMed Central

    Kashima, Daniel T.; Rubel, Edwin W.; Seidl, Armin H.

    2012-01-01

    Topographic organization of neurons is a hallmark of brain structure. The establishment of the connections between topographically organized brain regions has attracted much experimental attention and it is widely accepted that molecular cues guide outgrowing axons to their targets in order to construct topographic maps. In a number of systems afferent axons are organized topographically along their trajectory as well and it has been suggested that this pre-target sorting contributes to map formation. Neurons in auditory regions of the brain are arranged according to their best frequency (BF), the sound frequency they respond to optimally. This BF changes predictably with position along the so-called tonotopic axis. In the avian auditory brainstem, the tonotopic organization of the second- and third-order auditory neurons in nucleus magnocellularis (NM) and nucleus laminaris (NL) has been well described. In this study we examine whether the decussating NM axons forming the crossed dorsal cochlear tract (XDCT) and innervating the contralateral NL are arranged in a systematic manner. We electroporated dye into cells in different frequency regions of NM to anterogradely label their axons in the XDCT. The placement of dye in NM was compared to the location of labeled axons in XDCT. Our results show that NM axons in XDCT are organized in a precise tonotopic manner along the rostrocaudal axis, spanning over the entire rostrocaudal extent of both the origin and target nuclei. We propose that in the avian auditory brainstem, this pre-target axon sorting contributes to tonotopic map formation in NL. PMID:23239056

  10. Tonotopic maps in human auditory cortex using arterial spin labeling

    PubMed Central

    Ivanov, Dimo; Havlicek, Martin; Formisano, Elia; Uludağ, Kâmil

    2016-01-01

    Abstract A tonotopic organization of the human auditory cortex (AC) has been reliably found by neuroimaging studies. However, a full characterization and parcellation of the AC is still lacking. In this study, we employed pseudo‐continuous arterial spin labeling (pCASL) to map tonotopy and voice selective regions using, for the first time, cerebral blood flow (CBF). We demonstrated the feasibility of CBF‐based tonotopy and found a good agreement with BOLD signal‐based tonotopy, despite the lower contrast‐to‐noise ratio of CBF. Quantitative perfusion mapping of baseline CBF showed a region of high perfusion centered on Heschl's gyrus and corresponding to the main high‐low‐high frequency gradients, co‐located to the presumed primary auditory core and suggesting baseline CBF as a novel marker for AC parcellation. Furthermore, susceptibility weighted imaging was employed to investigate the tissue specificity of CBF and BOLD signal and the possible venous bias of BOLD‐based tonotopy. For BOLD only active voxels, we found a higher percentage of vein contamination than for CBF only active voxels. Taken together, we demonstrated that both baseline and stimulus‐induced CBF is an alternative fMRI approach to the standard BOLD signal to study auditory processing and delineate the functional organization of the auditory cortex. Hum Brain Mapp 38:1140–1154, 2017. © 2016 Wiley Periodicals, Inc. PMID:27790786

  11. Transcranial direct current stimulation for refractory auditory hallucinations in schizophrenia.

    PubMed

    Andrade, Chittaranjan

    2013-11-01

    Some patients with schizophrenia may suffer from continuous or severe auditory hallucinations that are refractory to antipsychotic drugs, including clozapine. Such patients may benefit from a short trial of once- to twice-daily transcranial direct current stimulation (tDCS) with the cathode placed over the left temporoparietal cortex and the anode over the left dorsolateral prefrontal cortex; negative, cognitive, and other symptoms, if present, may also improve. At present, the case for tDCS treatment of refractory auditory hallucinations rests on 1 well-conducted randomized, sham tDCS-controlled trial and several carefully documented and instructive case reports. Benefits with up to 3 years of maintenance tDCS have also been described. In patients with refractory auditory hallucinations, tDCS has been delivered at 1- to 3-mA current intensity during 20-30 minutes in once- to twice-daily sessions for up to 3 years with no apparent adverse effects. Transcranial direct current stimulation therefore appears to be a promising noninvasive brain stimulation technique for patients with antipsychotic-refractory auditory hallucinations.

  12. Tonotopic maps in human auditory cortex using arterial spin labeling.

    PubMed

    Gardumi, Anna; Ivanov, Dimo; Havlicek, Martin; Formisano, Elia; Uludağ, Kâmil

    2017-03-01

    A tonotopic organization of the human auditory cortex (AC) has been reliably found by neuroimaging studies. However, a full characterization and parcellation of the AC is still lacking. In this study, we employed pseudo-continuous arterial spin labeling (pCASL) to map tonotopy and voice selective regions using, for the first time, cerebral blood flow (CBF). We demonstrated the feasibility of CBF-based tonotopy and found a good agreement with BOLD signal-based tonotopy, despite the lower contrast-to-noise ratio of CBF. Quantitative perfusion mapping of baseline CBF showed a region of high perfusion centered on Heschl's gyrus and corresponding to the main high-low-high frequency gradients, co-located to the presumed primary auditory core and suggesting baseline CBF as a novel marker for AC parcellation. Furthermore, susceptibility weighted imaging was employed to investigate the tissue specificity of CBF and BOLD signal and the possible venous bias of BOLD-based tonotopy. For BOLD only active voxels, we found a higher percentage of vein contamination than for CBF only active voxels. Taken together, we demonstrated that both baseline and stimulus-induced CBF is an alternative fMRI approach to the standard BOLD signal to study auditory processing and delineate the functional organization of the auditory cortex. Hum Brain Mapp 38:1140-1154, 2017. © 2016 Wiley Periodicals, Inc.

  13. Differential auditory signal processing in an animal model

    NASA Astrophysics Data System (ADS)

    Lim, Dukhwan; Kim, Chongsun; Chang, Sun O.

    2002-05-01

    Auditory evoked responses were collected in male zebra finches (Poephila guttata) to objectively determine differential frequency selectivity. First, the mating call of the animal was recorded and analyzed for its frequency components through a customized program. Then, auditory brainstem responses and cortical responses of each anesthetized animal were routinely recorded in response to tone bursts of 1-8 kHz derived from the corresponding mating call spectrum. From the results, most mating calls showed relatively consistent spectral structures. The upper limit of the spectrum was well under 10 kHz. The peak energy bands were concentrated in the region below 5 kHz. The assessment of auditory brainstem responses and cortical evoked potentials showed differential selectivity with a series of characteristic scales. This system appears to be an excellent model to investigate complex sound processing and related language behaviors. These data could also be used in designing effective signal processing strategies in auditory rehabilitation devices such as hearing aids and cochlear implants. [Work supported by the Brain Science & Engineering Program from the Korean Ministry of Science and Technology.]

  14. Integrating information from different senses in the auditory cortex.

    PubMed

    King, Andrew J; Walker, Kerry M M

    2012-12-01

    Multisensory integration was once thought to be the domain of brain areas high in the cortical hierarchy, with early sensory cortical fields devoted to unisensory processing of inputs from their given set of sensory receptors. More recently, a wealth of evidence documenting visual and somatosensory responses in auditory cortex, even as early as the primary fields, has changed this view of cortical processing. These multisensory inputs may serve to enhance responses to sounds that are accompanied by other sensory cues, effectively making them easier to hear, but may also act more selectively to shape the receptive field properties of auditory cortical neurons to the location or identity of these events. We discuss the new, converging evidence that multiplexing of neural signals may play a key role in informatively encoding and integrating signals in auditory cortex across multiple sensory modalities. We highlight some of the many open research questions that exist about the neural mechanisms that give rise to multisensory integration in auditory cortex, which should be addressed in future experimental and theoretical studies.

  15. Adaptation to vocal expressions reveals multistep perception of auditory emotion.

    PubMed

    Bestelmeyer, Patricia E G; Maurage, Pierre; Rouger, Julien; Latinus, Marianne; Belin, Pascal

    2014-06-11

    The human voice carries speech as well as important nonlinguistic signals that influence our social interactions. Among these cues that impact our behavior and communication with other people is the perceived emotional state of the speaker. A theoretical framework for the neural processing stages of emotional prosody has suggested that auditory emotion is perceived in multiple steps (Schirmer and Kotz, 2006) involving low-level auditory analysis and integration of the acoustic information followed by higher-level cognition. Empirical evidence for this multistep processing chain, however, is still sparse. We examined this question using functional magnetic resonance imaging and a continuous carry-over design (Aguirre, 2007) to measure brain activity while volunteers listened to non-speech-affective vocalizations morphed on a continuum between anger and fear. Analyses dissociated neuronal adaptation effects induced by similarity in perceived emotional content between consecutive stimuli from those induced by their acoustic similarity. We found that bilateral voice-sensitive auditory regions as well as right amygdala coded the physical difference between consecutive stimuli. In contrast, activity in bilateral anterior insulae, medial superior frontal cortex, precuneus, and subcortical regions such as bilateral hippocampi depended predominantly on the perceptual difference between morphs. Our results suggest that the processing of vocal affect recognition is a multistep process involving largely distinct neural networks. Amygdala and auditory areas predominantly code emotion-related acoustic information while more anterior insular and prefrontal regions respond to the abstract, cognitive representation of vocal affect.

  16. What works in auditory working memory? A neural oscillations perspective.

    PubMed

    Wilsch, Anna; Obleser, Jonas

    2016-06-01

    Working memory is a limited resource: brains can only maintain small amounts of sensory input (memory load) over a brief period of time (memory decay). The dynamics of slow neural oscillations as recorded using magneto- and electroencephalography (M/EEG) provide a window into the neural mechanics of these limitations. Especially oscillations in the alpha range (8-13Hz) are a sensitive marker for memory load. Moreover, according to current models, the resultant working memory load is determined by the relative noise in the neural representation of maintained information. The auditory domain allows memory researchers to apply and test the concept of noise quite literally: Employing degraded stimulus acoustics increases memory load and, at the same time, allows assessing the cognitive resources required to process speech in noise in an ecologically valid and clinically relevant way. The present review first summarizes recent findings on neural oscillations, especially alpha power, and how they reflect memory load and memory decay in auditory working memory. The focus is specifically on memory load resulting from acoustic degradation. These findings are then contrasted with contextual factors that benefit neural as well as behavioral markers of memory performance, by reducing representational noise. We end on discussing the functional role of alpha power in auditory working memory and suggest extensions of the current methodological toolkit. This article is part of a Special Issue entitled SI: Auditory working memory.
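Alpha power as a load marker is obtained from the spectrum of the M/EEG signal: the fraction of power in the 8-13 Hz band. A self-contained toy illustration using a direct DFT on a synthetic signal (real pipelines use FFT- or wavelet-based estimates on epoched, artifact-cleaned data):

```python
from math import cos, sin, pi

def band_power(signal, fs, f_lo, f_hi):
    """Fraction of total (DC-removed) power in [f_lo, f_hi] Hz,
    computed with a direct DFT. Fine for short illustrative windows."""
    n = len(signal)
    mean = sum(signal) / n
    x = [v - mean for v in signal]
    total = band = 0.0
    for k in range(1, n // 2 + 1):
        freq = k * fs / n
        re = sum(x[t] * cos(2 * pi * k * t / n) for t in range(n))
        im = sum(-x[t] * sin(2 * pi * k * t / n) for t in range(n))
        p = re * re + im * im
        total += p
        if f_lo <= freq <= f_hi:
            band += p
    return band / total if total else 0.0

# Synthetic 1 s "EEG" at 100 Hz sampling: a 10 Hz (alpha) rhythm plus
# a weaker 40 Hz (gamma) component.
fs = 100
sig = [cos(2 * pi * 10 * t / fs) + 0.3 * cos(2 * pi * 40 * t / fs)
       for t in range(fs)]
assert band_power(sig, fs, 8, 13) > 0.8  # power dominated by alpha
```

The dominant 10 Hz component lands inside the 8-13 Hz window, so the relative alpha power is close to 1; an increase in this quantity during maintenance is the kind of load marker the review discusses.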

  17. Auditory scene analysis and sonified visual images. Does consonance negatively impact on object formation when using complex sonified stimuli?

    PubMed Central

    Brown, David J.; Simpson, Andrew J. R.; Proulx, Michael J.

    2015-01-01

    A critical task for the brain is the sensory representation and identification of perceptual objects in the world. When the visual sense is impaired, hearing and touch must take primary roles and in recent times compensatory techniques have been developed that employ the tactile or auditory system as a substitute for the visual system. Visual-to-auditory sonifications provide a complex, feature-based auditory representation that must be decoded and integrated into an object-based representation by the listener. However, we don’t yet know what role the auditory system plays in the object integration stage and whether the principles of auditory scene analysis apply. Here we used coarse sonified images in a two-tone discrimination task to test whether auditory feature-based representations of visual objects would be confounded when their features conflicted with the principles of auditory consonance. We found that listeners (N = 36) performed worse in an object recognition task when the auditory feature-based representation was harmonically consonant. We also found that this conflict was not negated with the provision of congruent audio–visual information. The findings suggest that early auditory processes of harmonic grouping dominate the object formation process and that the complexity of the signal, and additional sensory information have limited effect on this. PMID:26528202

  18. Auditory scene analysis and sonified visual images. Does consonance negatively impact on object formation when using complex sonified stimuli?

    PubMed

    Brown, David J; Simpson, Andrew J R; Proulx, Michael J

    2015-01-01

    A critical task for the brain is the sensory representation and identification of perceptual objects in the world. When the visual sense is impaired, hearing and touch must take primary roles and in recent times compensatory techniques have been developed that employ the tactile or auditory system as a substitute for the visual system. Visual-to-auditory sonifications provide a complex, feature-based auditory representation that must be decoded and integrated into an object-based representation by the listener. However, we don't yet know what role the auditory system plays in the object integration stage and whether the principles of auditory scene analysis apply. Here we used coarse sonified images in a two-tone discrimination task to test whether auditory feature-based representations of visual objects would be confounded when their features conflicted with the principles of auditory consonance. We found that listeners (N = 36) performed worse in an object recognition task when the auditory feature-based representation was harmonically consonant. We also found that this conflict was not negated with the provision of congruent audio-visual information. The findings suggest that early auditory processes of harmonic grouping dominate the object formation process and that the complexity of the signal, and additional sensory information have limited effect on this.

  19. Packed multi-channels for parallel chromatographic separations in microchips.

    PubMed

    Nagy, Andrea; Gaspar, Attila

    2013-08-23

    Here we report on a simple method to fabricate a microfluidic chip incorporating multi-channel systems packed with conventional chromatographic particles without the use of frits. The particle-retaining effectiveness of different bottlenecks created in the channels was studied. Several channel patterns were designed for parallel multi-channel chromatographic separations. The resulting multi-channel packings were applied to parallel separations of dyes. Implementing several chromatographic separation units at microscopic size enables faster, higher-throughput separations.

  20. Diminished Auditory Responses during NREM Sleep Correlate with the Hierarchy of Language Processing

    PubMed Central

    Furman-Haran, Edna; Arzi, Anat; Levkovitz, Yechiel; Malach, Rafael

    2016-01-01

    Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high order processing stations. PMID:27310812

  1. Premotor cortex is sensitive to auditory-visual congruence for biological motion.

    PubMed

    Wuerger, Sophie M; Parkes, Laura; Lewis, Penelope A; Crocker-Buque, Alex; Rutschmann, Roland; Meyer, Georg F

    2012-03-01

    The auditory and visual perception systems have developed special processing strategies for ecologically valid motion stimuli, utilizing some of the statistical properties of the real world. A well-known example is the perception of biological motion, for example, the perception of a human walker. The aim of the current study was to identify the cortical network involved in the integration of auditory and visual biological motion signals. We first determined the cortical regions of auditory and visual coactivation (Experiment 1); a conjunction analysis based on unimodal brain activations identified four regions: middle temporal area, inferior parietal lobule, ventral premotor cortex, and cerebellum. The brain activations arising from bimodal motion stimuli (Experiment 2) were then analyzed within these regions of coactivation. Auditory footsteps were presented concurrently with either an intact visual point-light walker (biological motion) or a scrambled point-light walker; auditory and visual motion in depth (walking direction) could either be congruent or incongruent. Our main finding is that motion incongruency (across modalities) increases the activity in the ventral premotor cortex, but only if the visual point-light walker is intact. Our results extend our current knowledge by providing new evidence consistent with the idea that the premotor area assimilates information across the auditory and visual modalities by comparing the incoming sensory input with an internal representation.

  2. Excitability changes induced in the human auditory cortex by transcranial direct current stimulation: direct electrophysiological evidence.

    PubMed

    Zaehle, Tino; Beretta, Manuela; Jäncke, Lutz; Herrmann, Christoph S; Sandmann, Pascale

    2011-11-01

    Transcranial direct current stimulation (tDCS) can systematically modify behavior by inducing changes in the underlying brain function. Objective electrophysiological evidence for tDCS-induced excitability changes has been demonstrated for the visual and somatosensory cortex, while evidence for excitability changes in the auditory cortex is lacking. In the present study, we applied tDCS over the left temporal as well as the left temporo-parietal cortex and investigated tDCS-induced effects on auditory evoked potentials after anodal, cathodal, and sham stimulation. Results show that anodal and cathodal tDCS can modify auditory cortex reactivity. Moreover, auditory evoked potentials were differentially modulated as a function of site of stimulation. While anodal tDCS over the temporal cortex increased auditory P50 amplitudes, cathodal tDCS over the temporo-parietal cortex induced larger N1 amplitudes. The results directly demonstrate excitability changes in the auditory cortex induced by active tDCS over the temporal and temporo-parietal cortex and might contribute to the understanding of mechanisms involved in the successful treatment of auditory disorders like tinnitus via tDCS.

  3. Demonstration of prosthetic activation of central auditory pathways using (14C)-2-deoxyglucose

    SciTech Connect

    Evans, D.A.; Niparko, J.K.; Altschuler, R.A.; Frey, K.A.; Miller, J.M.

    1990-02-01

    The cochlear prosthesis is not applicable to patients who lack an implantable cochlea or an intact vestibulocochlear nerve. Direct electrical stimulation of the cochlear nucleus (CN) of the brain stem might provide a method for auditory rehabilitation of these patients. A penetrating CN electrode has been developed and tissue tolerance to this device demonstrated. This study was undertaken to evaluate metabolic activation of central nervous system (CNS) auditory tracts produced by such implants. Regional cerebral glucose use resulting from CN stimulation was estimated in a series of chronically implanted guinea pigs with the use of (14C)-2-deoxyglucose (2-DG). Enhanced 2-DG uptake was observed in structures of the auditory tract. The activation of central auditory structures achieved with CN stimulation was similar to that produced by acoustic stimulation and by electrical stimulation of the modiolar portion of the auditory nerve in control groups. An interesting banding pattern was observed in the inferior colliculus following CN stimulation, as previously described with acoustic stimulation. This study demonstrates that functional metabolic activation of central auditory pathways can be achieved with a penetrating CNS auditory prosthesis.

  4. A review on auditory space adaptations to altered head-related cues

    PubMed Central

    Mendonça, Catarina

    2014-01-01

    In this article we present a review of current literature on adaptations to altered head-related auditory localization cues. Localization cues can be altered through ear blocks, ear molds, electronic hearing devices, and altered head-related transfer functions (HRTFs). Three main methods have been used to induce auditory space adaptation: sound exposure, training with feedback, and explicit training. Adaptations induced by training, rather than exposure, are consistently faster. Studies on localization with altered head-related cues have reported poor initial localization, but improved accuracy and discriminability with training. Also, studies that displaced the auditory space by altering cue values reported adaptations in perceived source position to compensate for such displacements. Auditory space adaptations can last for a few months even without further contact with the learned cues. In most studies, localization with the subject's own unaltered cues remained intact despite the adaptation to a second set of cues. Generalization is observed from trained to untrained sound source positions, but there is mixed evidence regarding cross-frequency generalization. Multiple brain areas might be involved in auditory space adaptation processes, but the auditory cortex (AC) may play a critical role. Auditory space plasticity may involve context-dependent cue reweighting. PMID:25120422

  5. Auditory and non-auditory effects of noise on health.

    PubMed

    Basner, Mathias; Babisch, Wolfgang; Davis, Adrian; Brink, Mark; Clark, Charlotte; Janssen, Sabine; Stansfeld, Stephen

    2014-04-12

    Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular mechanisms involved in noise-induced hair-cell and nerve damage has substantially increased, and preventive and therapeutic drugs will probably become available within 10 years. Evidence of the non-auditory effects of environmental noise exposure on public health is growing. Observational and experimental studies have shown that noise exposure leads to annoyance, disturbs sleep and causes daytime sleepiness, affects patient outcomes and staff performance in hospitals, increases the occurrence of hypertension and cardiovascular disease, and impairs cognitive performance in schoolchildren. In this Review, we stress the importance of adequate noise prevention and mitigation strategies for public health.

  6. Two distinct auditory-motor circuits for monitoring speech production as revealed by content-specific suppression of auditory cortex.

    PubMed

    Ylinen, Sari; Nora, Anni; Leminen, Alina; Hakala, Tero; Huotilainen, Minna; Shtyrov, Yury; Mäkelä, Jyrki P; Service, Elisabet

    2015-06-01

    Speech production, both overt and covert, down-regulates the activation of auditory cortex. This is thought to be due to forward prediction of the sensory consequences of speech, contributing to a feedback control mechanism for speech production. Critically, however, these regulatory effects should be specific to speech content to enable accurate speech monitoring. To determine the extent to which such forward prediction is content-specific, we recorded the brain's neuromagnetic responses to heard multisyllabic pseudowords during covert rehearsal in working memory, contrasted with a control task. The cortical auditory processing of target syllables was significantly suppressed during rehearsal compared with control, but only when they matched the rehearsed items. This critical specificity to speech content enables accurate speech monitoring by forward prediction, as proposed by current models of speech production. The one-to-one phonological motor-to-auditory mappings also appear to serve the maintenance of information in phonological working memory. Further findings of right-hemispheric suppression in the case of whole-item matches and left-hemispheric enhancement for last-syllable mismatches suggest that speech production is monitored by 2 auditory-motor circuits operating on different timescales: Finer grain in the left versus coarser grain in the right hemisphere. Taken together, our findings provide hemisphere-specific evidence of the interface between inner and heard speech.

  7. Mouse Auditory Brainstem Response Testing

    PubMed Central

    Akil, Omar; Oursler, A. E.; Fan, Kevin; Lustig, Lawrence R.

    2016-01-01

    The auditory brainstem response (ABR) test provides information about the inner ear (cochlea) and the central pathways for hearing. The ABR reflects the electrical responses of both the cochlear ganglion neurons and the nuclei of the central auditory pathway to sound stimulation (Zhou et al., 2006; Burkard et al., 2007). The ABR contains 5 identifiable wave forms, labeled as I-V. Wave I represents the summated response from the spiral ganglion and auditory nerve while waves II-V represent responses from the ascending auditory pathway. The ABR is recorded via electrodes placed on the scalp of an anesthetized animal. ABR thresholds refer to the lowest sound pressure level (SPL) that can generate identifiable electrical response waves. This protocol describes the process of measuring the ABR of small rodents (mouse, rat, guinea pig, etc.), including anesthetizing the mouse, placing the electrodes on the scalp, recording responses to click and tone-burst stimuli, and reading the obtained waveforms for ABR threshold values. As technology continues to evolve, ABR will likely provide more qualitative and quantitative information regarding the function of the auditory nerve and brainstem pathways involved in hearing.
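The threshold rule in the protocol, the lowest SPL that yields an identifiable response wave, can be sketched as a scan over averaged waveforms. The peak-to-peak amplitude criterion below is a hypothetical stand-in for the visual judgment (or statistical detector) typically used in practice:

```python
def abr_threshold(waveforms_by_spl, noise_floor_uv, criterion=2.0):
    """Estimate the ABR threshold as the lowest SPL whose averaged
    waveform contains an identifiable response, here defined as a
    peak-to-peak amplitude exceeding `criterion` times the noise floor.

    waveforms_by_spl: dict mapping SPL (dB) -> averaged waveform samples.
    Returns None if no level produces an identifiable response.
    """
    for spl in sorted(waveforms_by_spl):
        w = waveforms_by_spl[spl]
        if max(w) - min(w) > criterion * noise_floor_uv:
            return spl
    return None

# Toy sweep: responses grow with level; noise floor 0.5 microvolts.
waves = {
    20: [0.1, -0.2, 0.15, -0.1],   # no identifiable response
    30: [0.3, -0.4, 0.35, -0.3],   # still below criterion
    40: [0.9, -0.8, 0.7, -0.6],    # clear wave -> threshold
    50: [1.8, -1.6, 1.4, -1.2],
}
assert abr_threshold(waves, noise_floor_uv=0.5) == 40
```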

  8. Central gain restores auditory processing following near-complete cochlear denervation

    PubMed Central

    Chambers, Anna R.; Resnik, Jennifer; Yuan, Yasheng; Whitton, Jonathon P.; Edge, Albert S.; Liberman, M. Charles; Polley, Daniel B.

    2016-01-01

    Sensory organ damage induces a host of cellular and physiological changes in the periphery and the brain. Here, we show that some aspects of auditory processing recover after profound cochlear denervation due to a progressive, compensatory plasticity at higher stages of the central auditory pathway. Lesioning >95% of cochlear nerve afferent synapses, while sparing hair cells, in adult mice virtually eliminated the auditory brainstem response and acoustic startle reflex, yet tone detection behavior was nearly normal. As sound-evoked responses from the auditory nerve grew progressively weaker following denervation, sound-evoked activity in the cortex – and to a lesser extent the midbrain – rebounded or surpassed control levels. Increased central gain supported the recovery of rudimentary sound features encoded by firing rate, but not features encoded by precise spike timing such as modulated noise or speech. These findings underscore the importance of central plasticity in the perceptual sequelae of cochlear hearing impairment. PMID:26833137

  9. Imaging white-matter pathways of the auditory system with diffusion imaging tractography.

    PubMed

    Maffei, Chiara; Soria, Guadalupe; Prats-Galino, Alberto; Catani, Marco

    2015-01-01

    The recent advent of diffusion imaging tractography has opened a new window into the in vivo white-matter anatomy of the human brain. This is of particular importance for the connections of the auditory system, which may have undergone substantial development in humans in relation to language. However, tractography of the human auditory pathways has proved to be challenging due to current methodologic limitations and the intrinsic anatomic features of the subcortical connections that carry acoustic information in the brainstem. More reliable findings are forthcoming from tractography studies of corticocortical connections associated with language processing. In this chapter we introduce the reader to basic principles of diffusion imaging and tractography. A selected review of the tractography studies of the auditory pathways will be presented, with particular attention given to the cerebral association pathways of the temporal lobe. Finally, new diffusion methods based on advanced models for mapping fiber crossings will be discussed in the context of the auditory and language networks.

  10. Modulation of auditory processing during speech movement planning is limited in adults who stutter

    PubMed Central

    Daliri, Ayoub; Max, Ludo

    2015-01-01

    Stuttering is associated with atypical structural and functional connectivity in sensorimotor brain areas, in particular premotor, motor, and auditory regions. It remains unknown, however, which specific mechanisms of speech planning and execution are affected by these neurological abnormalities. To investigate pre-movement sensory modulation, we recorded 12 stuttering and 12 nonstuttering adults’ auditory evoked potentials in response to probe tones presented prior to speech onset in a delayed-response speaking condition vs. no-speaking control conditions (silent reading; seeing nonlinguistic symbols). Findings indicate that, during speech movement planning, the nonstuttering group showed a statistically significant modulation of auditory processing (reduced N1 amplitude) that was not observed in the stuttering group. Thus, the obtained results provide electrophysiological evidence in support of the hypothesis that stuttering is associated with deficiencies in modulating the cortical auditory system during speech movement planning. This specific sensorimotor integration deficiency may contribute to inefficient feedback monitoring and, consequently, speech dysfluencies. PMID:25796060

  11. Simultaneous recording of rat auditory cortex and thalamus via a titanium-based, microfabricated, microelectrode device

    NASA Astrophysics Data System (ADS)

    McCarthy, P. T.; Rao, M. P.; Otto, K. J.

    2011-08-01

    Direct recording from sequential processing stations within the brain has provided opportunity for enhancing understanding of important neural circuits, such as the corticothalamic loops underlying auditory, visual, and somatosensory processing. However, the common reliance upon microwire-based electrodes to perform such recordings often necessitates complex surgeries and increases trauma to neural tissues. This paper reports the development of titanium-based, microfabricated, microelectrode devices designed to address these limitations by allowing acute recording from the thalamic nuclei and associated cortical sites simultaneously in a minimally invasive manner. In particular, devices were designed to simultaneously probe rat auditory cortex and auditory thalamus, with the intent of recording auditory response latencies and isolated action potentials within the separate anatomical sites. Details regarding the design, fabrication, and characterization of these devices are presented, as are preliminary results from acute in vivo recording.

  12. Spatial versus object feature processing in human auditory cortex: a magnetoencephalographic study.

    PubMed

    Herrmann, Christoph S; Senkowski, Daniel; Maess, Burkhard; Friederici, Angela D

    2002-12-06

    The human visual system is divided into two pathways specialized for the processing of either objects or spatial locations. Neuroanatomical studies in monkeys have suggested that a similar specialization may also divide auditory cortex into two such pathways. We used the identical stimulus material in two experimental sessions in which subjects had to either identify auditory objects or their location. Magnetoencephalograms were recorded and M100 dipoles were fitted into individual brain models. In the right hemisphere, the processing of auditory spatial information led to more lateral activations within the temporal plane while object identification led to more medial activations. These findings suggest that the human auditory system processes object features and spatial features in distinct areas.

  13. Auditory agnosia and auditory spatial deficits following left hemispheric lesions: evidence for distinct processing pathways.

    PubMed

    Clarke, S; Bellmann, A; Meuli, R A; Assal, G; Steck, A J

    2000-01-01

    Auditory recognition and auditory spatial functions were studied in four patients with circumscribed left hemispheric lesions. Patient FD was severely deficient in recognition of environmental sounds but normal in auditory localisation and auditory motion perception. The lesion included the left superior, middle and inferior temporal gyri and lateral auditory areas (as identified in previous anatomical studies), but spared Heschl's gyrus, the acoustic radiation and the thalamus. Patient SD had the same profile as FD, with deficient recognition of environmental sounds but normal auditory localisation and motion perception. The lesion comprised the postero-inferior part of the frontal convexity and the anterior third of the temporal lobe; data from non-human primates indicate that the latter are interconnected with lateral auditory areas. Patient MA was deficient in recognition of environmental sounds, auditory localisation and auditory motion perception, confirming that auditory spatial functions can be disturbed by left unilateral damage; the lesion involved the supratemporal region as well as the temporal, postero-inferior frontal and antero-inferior parietal convexities. Patient CZ was severely deficient in auditory motion perception and partially deficient in auditory localisation, but normal in recognition of environmental sounds; the lesion involved large parts of the parieto-frontal convexity and the supratemporal region. We propose that auditory information is processed in the human auditory cortex along two distinct pathways, one lateral devoted to auditory recognition and one medial and posterior devoted to auditory spatial functions.

  14. Behind the scenes of auditory perception.

    PubMed

    Shamma, Shihab A; Micheyl, Christophe

    2010-06-01

    'Auditory scenes' often contain contributions from multiple acoustic sources. These are usually heard as separate auditory 'streams', which can be selectively followed over time. How and where these auditory streams are formed in the auditory system is one of the most fascinating questions facing auditory scientists today. Findings published within the past two years indicate that both cortical and subcortical processes contribute to the formation of auditory streams, and they raise important questions concerning the roles of primary and secondary areas of auditory cortex in this phenomenon. In addition, these findings underline the importance of taking into account the relative timing of neural responses, and the influence of selective attention, in the search for neural correlates of the perception of auditory streams.

  15. Music training alters the course of adolescent auditory development

    PubMed Central

    Tierney, Adam T.; Krizman, Jennifer; Kraus, Nina

    2015-01-01

    Fundamental changes in brain structure and function during adolescence are well-characterized, but the extent to which experience modulates adolescent neurodevelopment is not. Musical experience provides an ideal case for examining this question because the influence of music training begun early in life is well-known. We investigated the effects of in-school music training, previously shown to enhance auditory skills, versus another in-school training program that did not focus on development of auditory skills (active control). We tested adolescents on neural responses to sound and language skills before they entered high school (pretraining) and again 3 y later. Here, we show that in-school music training begun in high school prolongs the stability of subcortical sound processing and accelerates maturation of cortical auditory responses. Although phonological processing improved in both the music training and active control groups, the enhancement was greater in adolescents who underwent music training. Thus, music training initiated as late as adolescence can enhance neural processing of sound and confer benefits for language skills. These results establish the potential for experience-driven brain plasticity during adolescence and demonstrate that in-school programs can engender these changes. PMID:26195739

  16. Music training alters the course of adolescent auditory development.

    PubMed

    Tierney, Adam T; Krizman, Jennifer; Kraus, Nina

    2015-08-11

    Fundamental changes in brain structure and function during adolescence are well-characterized, but the extent to which experience modulates adolescent neurodevelopment is not. Musical experience provides an ideal case for examining this question because the influence of music training begun early in life is well-known. We investigated the effects of in-school music training, previously shown to enhance auditory skills, versus another in-school training program that did not focus on development of auditory skills (active control). We tested adolescents on neural responses to sound and language skills before they entered high school (pretraining) and again 3 y later. Here, we show that in-school music training begun in high school prolongs the stability of subcortical sound processing and accelerates maturation of cortical auditory responses. Although phonological processing improved in both the music training and active control groups, the enhancement was greater in adolescents who underwent music training. Thus, music training initiated as late as adolescence can enhance neural processing of sound and confer benefits for language skills. These results establish the potential for experience-driven brain plasticity during adolescence and demonstrate that in-school programs can engender these changes.

  17. The case for early identification of hearing loss in children. Auditory system development, experimental auditory deprivation, and development of speech perception and hearing.

    PubMed

    Sininger, Y S; Doyle, K J; Moore, J K

    1999-02-01

    Human infants spend the first year of life learning about their environment through experience. Although it is not visible to observers, infants with hearing are learning to process speech and understand language and are quite linguistically sophisticated by 1 year of age. At this same time, the neurons in the auditory brain stem are maturing, and billions of major neural connections are being formed. During this time, the auditory brain stem and thalamus are just beginning to connect to the auditory cortex. When sensory input to the auditory nervous system is interrupted, especially during early development, the morphology and functional properties of neurons in the central auditory system can break down. In some instances, these deleterious effects of lack of sound input can be ameliorated by reintroduction of stimulation, but critical periods may exist for intervention. Hearing loss in newborn infants can go undetected until as late as 2 years of age without specialized testing. When hearing loss is detected in the newborn period, infants can benefit from amplification (hearing aids) and intervention to facilitate speech and language development. All evidence regarding neural development supports such early intervention for maximum development of communication ability and hearing in infants.

  18. Octave effect in auditory attention.

    PubMed

    Borra, Tobias; Versnel, Huib; Kemner, Chantal; van Opstal, A John; van Ee, Raymond

    2013-09-17

    After hearing a tone, the human auditory system becomes more sensitive to similar tones than to other tones. Current auditory models explain this phenomenon by a simple bandpass attention filter. Here, we demonstrate that auditory attention involves multiple pass-bands around octave-related frequencies above and below the cued tone. Intriguingly, this "octave effect" not only occurs for physically presented tones, but even persists for the missing fundamental in complex tones, and for imagined tones. Our results suggest neural interactions combining octave-related frequencies, likely located in nonprimary cortical regions. We speculate that this connectivity scheme evolved from exposure to natural vibrations containing octave-related spectral peaks, e.g., as produced by vocal cords.

  19. The neural correlates of subjectively perceived and passively matched loudness perception in auditory phantom perception

    PubMed Central

    De Ridder, Dirk; Congedo, Marco; Vanneste, Sven

    2015-01-01

    Introduction: A fundamental question in phantom perception is determining whether the brain creates a network that represents the sound intensity of the auditory phantom as measured by tinnitus matching (in dB), or whether the phantom perception is actually only a representation of the subjectively perceived loudness. Methods: In tinnitus patients, tinnitus loudness was tested in two ways, by a numeric rating scale for subjectively perceived loudness and by a more objective tinnitus-matching test, albeit still a subjective measure. Results: Passively matched tinnitus does not correlate with the subjective numeric rating scale, and has no electrophysiological correlates. Subjective loudness, in a whole-brain analysis, is correlated with activity in the left anterior insula (alpha), the rostral/dorsal anterior cingulate cortex (beta), and the left parahippocampus (gamma). A ROI analysis finds correlations with the auditory cortex (high beta and gamma) as well. The theta band links gamma band activity in the auditory cortex and parahippocampus via theta–gamma nesting. Conclusions: Apparently the brain generates a network that represents subjectively perceived tinnitus loudness only, which is context dependent. The subjective loudness network consists of the anterior cingulate/insula, the parahippocampus, and the auditory cortex. The gamma band activity in the parahippocampus and the auditory cortex is functionally linked via theta–gamma nested lagged phase synchronization. PMID:25874164

  20. Spherical Deconvolution of Multichannel Diffusion MRI Data with Non-Gaussian Noise Models and Spatial Regularization.

    PubMed

    Canales-Rodríguez, Erick J; Daducci, Alessandro; Sotiropoulos, Stamatios N; Caruyer, Emmanuel; Aja-Fernández, Santiago; Radua, Joaquim; Yurramendi Mendizabal, Jesús M; Iturria-Medina, Yasser; Melie-García, Lester; Alemán-Gómez, Yasser; Thiran, Jean-Philippe; Sarró, Salvador; Pomarol-Clotet, Edith; Salvador, Raymond

    2015-01-01

    Spherical deconvolution (SD) methods are widely used to estimate the intra-voxel white-matter fiber orientations from diffusion MRI data. However, while some of these methods assume a zero-mean Gaussian distribution for the underlying noise, its real distribution is known to be non-Gaussian and to depend on many factors such as the number of coils and the methodology used to combine multichannel MRI signals. Indeed, the two prevailing methods for multichannel signal combination lead to noise patterns better described by Rician and noncentral Chi distributions. Here we develop a Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD) technique, intended to deal with realistic MRI noise, based on a Richardson-Lucy (RL) algorithm adapted to Rician and noncentral Chi likelihood models. To quantify the benefits of using proper noise models, RUMBA-SD was compared with dRL-SD, a well-established method based on the RL algorithm for Gaussian noise. Another aim of the study was to quantify the impact of including a total variation (TV) spatial regularization term in the estimation framework. To do this, we developed TV spatially-regularized versions of both RUMBA-SD and dRL-SD algorithms. The evaluation was performed by comparing various quality metrics on 132 three-dimensional synthetic phantoms involving different inter-fiber angles and volume fractions, which were contaminated with noise mimicking patterns generated by data processing in multichannel scanners. The results demonstrate that the inclusion of proper likelihood models leads to an increased ability to resolve fiber crossings with smaller inter-fiber angles and to better detect non-dominant fibers. The inclusion of TV regularization dramatically improved the resolution power of both techniques. The above findings were also verified in human brain data.
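
    The Richardson-Lucy scheme at the core of RUMBA-SD is a multiplicative, non-negativity-preserving update. As a hedged illustration of that underlying iteration only, here is a minimal 1-D sketch with a Gaussian-blur kernel; it is not the authors' spherical, Rician/noncentral-Chi variant, and the kernel, signal, and iteration count are illustrative assumptions.

```python
import numpy as np

def richardson_lucy(g, h, n_iter=200):
    """Minimal 1-D Richardson-Lucy deconvolution: estimate f >= 0 with g ~= h * f."""
    f = np.full_like(g, g.mean())              # flat, non-negative initial guess
    h_adj = h[::-1]                            # adjoint kernel (time-reversed h)
    for _ in range(n_iter):
        blurred = np.convolve(f, h, mode="same")
        ratio = g / np.maximum(blurred, 1e-12)  # guard against division by zero
        f = f * np.convolve(ratio, h_adj, mode="same")
    return f

# Demo: recover two spikes blurred by a smooth kernel
h = np.exp(-0.5 * (np.arange(-5, 6) / 1.5) ** 2)
h /= h.sum()
f_true = np.zeros(64)
f_true[20], f_true[40] = 1.0, 0.6
g = np.convolve(f_true, h, mode="same")
f_est = richardson_lucy(g, h)
```

    Because every update multiplies a non-negative estimate by a non-negative correction, non-negativity is preserved automatically, one reason RL-type schemes suit fiber-orientation density estimation.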

  1. Spherical Deconvolution of Multichannel Diffusion MRI Data with Non-Gaussian Noise Models and Spatial Regularization

    PubMed Central

    Canales-Rodríguez, Erick J.; Caruyer, Emmanuel; Aja-Fernández, Santiago; Radua, Joaquim; Yurramendi Mendizabal, Jesús M.; Iturria-Medina, Yasser; Melie-García, Lester; Alemán-Gómez, Yasser; Thiran, Jean-Philippe; Sarró, Salvador; Pomarol-Clotet, Edith; Salvador, Raymond

    2015-01-01

    Spherical deconvolution (SD) methods are widely used to estimate the intra-voxel white-matter fiber orientations from diffusion MRI data. However, while some of these methods assume a zero-mean Gaussian distribution for the underlying noise, its real distribution is known to be non-Gaussian and to depend on many factors such as the number of coils and the methodology used to combine multichannel MRI signals. Indeed, the two prevailing methods for multichannel signal combination lead to noise patterns better described by Rician and noncentral Chi distributions. Here we develop a Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD) technique, intended to deal with realistic MRI noise, based on a Richardson-Lucy (RL) algorithm adapted to Rician and noncentral Chi likelihood models. To quantify the benefits of using proper noise models, RUMBA-SD was compared with dRL-SD, a well-established method based on the RL algorithm for Gaussian noise. Another aim of the study was to quantify the impact of including a total variation (TV) spatial regularization term in the estimation framework. To do this, we developed TV spatially-regularized versions of both RUMBA-SD and dRL-SD algorithms. The evaluation was performed by comparing various quality metrics on 132 three-dimensional synthetic phantoms involving different inter-fiber angles and volume fractions, which were contaminated with noise mimicking patterns generated by data processing in multichannel scanners. The results demonstrate that the inclusion of proper likelihood models leads to an increased ability to resolve fiber crossings with smaller inter-fiber angles and to better detect non-dominant fibers. The inclusion of TV regularization dramatically improved the resolution power of both techniques. The above findings were also verified in human brain data. PMID:26470024

  2. Multi-Channel Capacitive Sensor Arrays.

    PubMed

    Wang, Bingnan; Long, Jiang; Teo, Koon Hoo

    2016-01-25

    In this paper, multi-channel capacitive sensor arrays based on microstrip band-stop filters are studied. The sensor arrays can be used to detect the proximity of objects at different positions and directions. Each capacitive sensing structure in the array is connected to an inductive element to form resonance at different frequencies. The resonances are designed to be isolated in the frequency spectrum, such that the change in one channel does not affect resonances at other channels. The inductive element associated with each capacitive sensor can be surface-mounted inductors, integrated microstrip inductors or metamaterial-inspired structures. We show that by using metamaterial split-ring structures coupled to a microstrip line, the quality factor of each resonance can be greatly improved compared to conventional surface-mounted or microstrip meander inductors. With such a microstrip-coupled split-ring design, more sensing elements can be integrated in the same frequency spectrum, and the sensitivity can be greatly improved.
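
    The channel-isolation idea rests on the LC resonance relation f = 1/(2π√(LC)). A quick sketch, with component values that are illustrative assumptions rather than figures from the paper, showing how stepping the per-channel inductance spreads the resonances into separated frequency slots:

```python
import math

def resonant_freq_hz(inductance_h, capacitance_f):
    """Resonant frequency of an LC sensing channel: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

C_SENSE = 2e-12                               # assumed ~2 pF nominal sensing capacitance
inductors = [100e-9, 68e-9, 47e-9, 33e-9]     # assumed per-channel inductors (henries)
channel_freqs = [resonant_freq_hz(L, C_SENSE) for L in inductors]
```

    An approaching object raises one channel's capacitance and pulls only that channel's resonance downward, which is what keeps the per-channel read-out independent.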

  3. Fault-tolerant multichannel demultiplexer subsystems

    NASA Technical Reports Server (NTRS)

    Redinbo, Robert

    1991-01-01

    Fault tolerance in future processing and switching communication satellites is addressed by showing new methods for detecting hardware failures in the first major subsystem, the multichannel demultiplexer. An efficient method for demultiplexing frequency slotted channels uses multirate filter banks which contain fast Fourier transform processing. All numerical processing is performed at a lower rate commensurate with the small bandwidth of each baseband channel. The integrity of the demultiplexing operations is protected by using real number convolutional codes to compute comparable parity values which detect errors at the data sample level. High rate, systematic convolutional codes produce parity values at a much reduced rate, and protection is achieved by generating parity values in two ways and comparing them. Parity values corresponding to each output channel are generated in parallel by a subsystem, operating even slower and in parallel with the demultiplexer that is virtually identical to the original structure. These parity calculations may be time shared with the same processing resources because they are so similar.
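
    Because the FFT-based filter bank is linear, any linear parity functional of the output channels can be precomputed as an equivalent operator on the input samples, and disagreement between the two computations flags a hardware fault. A toy sketch of that comparison, in which a plain 8-point DFT stands in for the multirate filter bank and the all-ones parity weights are an illustrative assumption:

```python
import numpy as np

def demultiplex(block):
    """Toy stand-in for the FFT-based multirate filter bank."""
    return np.fft.fft(block)

w = np.ones(8)                          # parity weights over the output channels
w_input = w @ np.fft.fft(np.eye(8))     # same functional precomputed on the input side

block = np.arange(8, dtype=float)       # one block of input samples
outputs = demultiplex(block)

parity_from_outputs = w @ outputs       # parity computed from demultiplexer outputs
parity_from_input = w_input @ block     # parity computed independently from inputs
fault_detected = not np.isclose(parity_from_outputs, parity_from_input)

# Inject a single-channel hardware error and re-check
outputs_bad = outputs.copy()
outputs_bad[3] += 1.0
fault_after_error = not np.isclose(w @ outputs_bad, parity_from_input)
```

    In the real subsystem the input-side parity generator runs in parallel with the demultiplexer at a reduced rate; here both sides are evaluated inline for clarity.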

  4. Novel revolving multichannel electromechanical optical switch

    NASA Astrophysics Data System (ADS)

    Ge, Wenping; Yin, Zongmin; Liu, Jingjing; Zhou, Zhengli

    2001-10-01

    In this paper, we describe the structure and operating principle of a multi-channel optical switch. We designed a novel revolving single-mode optical switch based on electronically controlled fiber collimators that direct the light to the desired output fibers; the movement of each fiber collimator is implemented by the rotation of a stepping micro-electromotor. The main parts of the optical switch are two cylinders carrying the fiber collimators, one of which revolves, driven by a stepping micro-electromotor under micro-computer control. The flexibility of this structure makes it easy to design a series of 1xN optical switches. Furthermore, by using two or more revolving axes and arranging the positions of the optical collimators appropriately, non-blocking 2x2 or 4x4 optical switch matrices can be obtained. We fabricated a 1×8 single-mode optical switch, and the experimental results indicate that its technical performance satisfies the requirements for optical channel switching.

  5. Sparse reconstruction of correlated multichannel activity.

    PubMed

    Peelman, Sem; Van der Herten, Joachim; De Vos, Maarten; Lee, Wen-Shin; Van Huffel, Sabine; Cuyt, Annie

    2013-01-01

    Parametric methods for modeling sinusoidal signals with line spectra have been studied for decades. In general, these methods start by representing each sinusoidal component by means of two complex exponential functions, thereby doubling the number of unknown parameters. Recently, a Hankel-plus-Toeplitz matrix pencil method was proposed which directly models sinusoidal signals with discrete spectral content. Compared to its counterpart, which uses a Hankel matrix pencil, it halves the required number of time-domain samples and reduces the size of the involved linear systems. The aim of this paper is twofold. Firstly, to show that this Hankel-plus-Toeplitz matrix pencil also applies to continuous spectra. Secondly, to explore its use in the reconstruction of real-life signals. Promising preliminary results in the reconstruction of correlated multichannel electroencephalographic (EEG) activity are presented. A principal component analysis preprocessing step is carried out to exploit the redundancy in the channel domain. Then the reduced signal representation is successfully reconstructed from fewer samples using the Hankel-plus-Toeplitz matrix pencil. The obtained results encourage the future development of this matrix pencil method along the lines of well-established spectral analysis methods.
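
    For readers unfamiliar with matrix pencil methods, a minimal sketch of the classical Hankel-only pencil that the paper improves on: signal poles are read off the eigenvalues of a shifted pair of Hankel sub-matrices. This is the conventional counterpart, not the proposed Hankel-plus-Toeplitz variant, and the pencil parameter and SVD rank-reduction step are standard textbook choices.

```python
import numpy as np

def matrix_pencil_freqs(x, n_poles, L=None):
    """Estimate pole frequencies (cycles/sample) of sum-of-exponentials data."""
    N = len(x)
    L = L or N // 2                                          # pencil parameter
    Y = np.array([x[i:i + L + 1] for i in range(N - L)])     # Hankel data matrix
    Y1, Y2 = Y[:, :-1], Y[:, 1:]                             # shifted pencil pair
    # Rank-reduce to the signal subspace before forming the reduced pencil
    U, s, Vh = np.linalg.svd(Y1, full_matrices=False)
    U, s, Vh = U[:, :n_poles], s[:n_poles], Vh[:n_poles]
    A = np.diag(1.0 / s) @ U.conj().T @ Y2 @ Vh.conj().T
    z = np.linalg.eigvals(A)                                 # poles z_k = exp(j*2*pi*f_k)
    return np.angle(z) / (2 * np.pi)

n = np.arange(64)
x = np.cos(2 * np.pi * 0.13 * n)          # one real sinusoid
freqs = matrix_pencil_freqs(x, n_poles=2)
```

    Note that the single real sinusoid produces a conjugate pole pair at ±0.13, exactly the doubling of unknowns that the Hankel-plus-Toeplitz formulation is designed to avoid.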

  6. Multichannel hierarchical image classification using multivariate copulas

    NASA Astrophysics Data System (ADS)

    Voisin, Aurélie; Krylov, Vladimir A.; Moser, Gabriele; Serpico, Sebastiano B.; Zerubia, Josiane

    2012-03-01

    This paper focuses on the classification of multichannel images. The proposed supervised Bayesian classification method applied to histological (medical) optical images and to remote sensing (optical and synthetic aperture radar) imagery consists of two steps. The first step introduces the joint statistical modeling of the coregistered input images. For each class and each input channel, the class-conditional marginal probability density functions are estimated by finite mixtures of well-chosen parametric families. For optical imagery, the normal distribution is a well-known model. For radar imagery, we have selected generalized gamma, log-normal, Nakagami and Weibull distributions. Next, the multivariate d-dimensional Clayton copula, where d can be interpreted as the number of input channels, is applied to estimate multivariate joint class-conditional statistics. As a second step, we plug the estimated joint probability density functions into a hierarchical Markovian model based on a quadtree structure. Multiscale features are extracted by discrete wavelet transforms, or by using input multiresolution data. To obtain the classification map, we integrate an exact estimator of the marginal posterior mode.
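
    The copula step can be made concrete in the bivariate case. A hedged sketch of Sklar's construction for a single class: per-channel marginal densities and CDFs are combined into a joint class-conditional density through the Clayton copula density. Gaussian marginals and d = 2 are simplifying assumptions here; the paper uses finite-mixture marginals and the d-dimensional copula.

```python
import math

def clayton_copula_density(u, v, theta):
    """Density of the bivariate Clayton copula (theta > 0): c = d2C / (du dv)."""
    s = u ** (-theta) + v ** (-theta) - 1.0
    return (1.0 + theta) * (u * v) ** (-theta - 1.0) * s ** (-(2.0 * theta + 1.0) / theta)

def normal_pdf(x, mu=0.0, sigma=1.0):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def normal_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def joint_density(x1, x2, theta, m1=(0.0, 1.0), m2=(0.0, 1.0)):
    """Sklar's theorem: f(x1, x2) = c(F1(x1), F2(x2)) * f1(x1) * f2(x2)."""
    u, v = normal_cdf(x1, *m1), normal_cdf(x2, *m2)
    return clayton_copula_density(u, v, theta) * normal_pdf(x1, *m1) * normal_pdf(x2, *m2)

c_val = clayton_copula_density(0.3, 0.7, theta=2.0)
j_val = joint_density(0.2, -0.1, theta=2.0)
```

    With such joint class-conditional densities in hand, the hierarchical quadtree step reduces to comparing them across classes at each site.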

  7. Multi-Channel Capacitive Sensor Arrays

    PubMed Central

    Wang, Bingnan; Long, Jiang; Teo, Koon Hoo

    2016-01-01

    In this paper, multi-channel capacitive sensor arrays based on microstrip band-stop filters are studied. The sensor arrays can be used to detect the proximity of objects at different positions and directions. Each capacitive sensing structure in the array is connected to an inductive element to form resonance at different frequencies. The resonances are designed to be isolated in the frequency spectrum, such that the change in one channel does not affect resonances at other channels. The inductive element associated with each capacitive sensor can be surface-mounted inductors, integrated microstrip inductors or metamaterial-inspired structures. We show that by using metamaterial split-ring structures coupled to a microstrip line, the quality factor of each resonance can be greatly improved compared to conventional surface-mounted or microstrip meander inductors. With such a microstrip-coupled split-ring design, more sensing elements can be integrated in the same frequency spectrum, and the sensitivity can be greatly improved. PMID:26821023

  8. ASOC multichannel electronic variable optical attenuator

    NASA Astrophysics Data System (ADS)

    Vonsovici, Adrian P.; Day, Ian E.; House, Andrew A.; Asghari, Mehdi

    2001-05-01

    Optical networks are becoming a reality as the physical layer of high-performance telecommunication networks. The deployment of wavelength-division multiplexing (WDM) technology allows the extended exploitation of installed fibers now facing an increasing traffic capacity demand. The performance of such systems can be degraded by wide variations of the optical channel power following propagation in the network. Therefore a tilt control of optical amplifiers in WDM networks and dynamic channel power regulation and equalisation in cross-connected nodes is necessary. An important tool for the system designer is the variable optical attenuator (VOA). We present the design and realization of newly developed VOAs using the ASOC technology. This technology refers to the fabrication of integrated optics components in silicon-on-insulator (SOI) material. The device is based on light absorption by the free carriers that are injected into the core of a rib waveguide from a p-i-n diode. The devices incorporate horizontally and vertically tapered waveguides for minimum fiber coupling loss. The p-i-n diode for carrier injection into the active region of the rib waveguide was optimised in order to enhance the attenuation. One major advantage of the ASOC technology is the possibility of monolithic integration of many integrated optics devices on one chip. In the light of this, the paper presents the characterisation results of multichannel VOAs.

  9. Capacitance Probe Resonator for Multichannel Electrometer

    NASA Technical Reports Server (NTRS)

    Blaes, Brent R.; Schaefer, Rembrandt T.; Glaser, Robert J.

    2012-01-01

    A multichannel electrometer voltmeter has been developed that employs a mechanical resonator with voltage-sensing capacitance-probe electrodes that enable high-impedance, high-voltage, radiation-hardened measurement of an Internal Electrostatic Discharge Monitor (IESDM) sensor. The IESDM is new sensor technology targeted for integration into a Space Environmental Monitor (SEM) subsystem used for the characterization and monitoring of deep dielectric charging on spacecraft. The resonator solution relies on a non-contact, voltage-sensing, sinusoidal-varying capacitor to achieve input impedances as high as 10 petaohms as determined by the resonator materials, geometries, cleanliness, and construction. The resonator is designed with one dominant mechanical degree of freedom, so it resonates as a simple harmonic oscillator and because of the linearity of the variable sense capacitor to displacement, generates a pure sinusoidal current signal for a fixed input voltage under measurement. This enables the use of an idealized phase-lock sensing scheme for optimal signal detection in the presence of noise.

  10. Time estimation with multichannel digital silicon photomultipliers.

    PubMed

    Venialgo, Esteban; Mandai, Shingo; Gong, Tim; Schaart, Dennis R; Charbon, Edoardo

    2015-03-21

    Accuracy in timemark estimation is crucial for time-of-flight positron emission tomography, in order to ensure high quality images after reconstruction. Since the introduction of multichannel digital silicon photomultipliers, it is possible to acquire several photoelectron timestamps for each individual gamma event. We study several timemark estimators based on multiple photoelectron timestamps by means of a comprehensive statistical model. In addition, we compare the MSE of the estimators to the Cramér-Rao lower bound as a function of the system design parameters. We investigate the effect of skipping some of the photoelectron timestamps, which is a direct consequence of the limited number of time-to-digital converters, and we propose a technique to compensate for this effect. In addition, we carry out an extensive analysis to evaluate the influence of dark counts on the detector timing performance. Moreover, we investigate the improvement of the timing performance that can be obtained with dark count filtering and we propose an appropriate filtering method based on measuring the time difference between sorted timestamps. Finally, we perform a full Monte Carlo simulation to compare different timemark estimators by exploring several system design parameters. It is demonstrated that a simple weighted-average estimator can achieve comparable performance to the more complex maximum likelihood estimator.
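The weighted-average estimator that the abstract compares against maximum likelihood can be sketched as below. The weights shown are illustrative placeholders; in the paper they would be derived from the order statistics of the photoelectron timestamps, which is not reproduced here.

```python
import numpy as np

# Weighted-average timemark estimator (sketch): given the first n
# photoelectron timestamps of a scintillation event, estimate the
# gamma interaction timemark as a normalized weighted sum of the
# sorted timestamps.
def weighted_timemark(timestamps, weights):
    timestamps = np.sort(np.asarray(timestamps, dtype=float))
    weights = np.asarray(weights, dtype=float)
    return float(np.dot(timestamps, weights) / weights.sum())

# Example: five sorted timestamps (ns). Earlier photons are given
# larger weights since they carry more timing information; these
# particular weights are illustrative, not the paper's.
ts = [1.2, 1.5, 1.9, 2.4, 3.1]
w = [5, 4, 3, 2, 1]
timemark = weighted_timemark(ts, w)
```

Its appeal over a maximum likelihood estimator is that it needs only one dot product per event, which matters when timestamps arrive from many channels at PET count rates.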

  11. Feature Assignment in Perception of Auditory Figure

    ERIC Educational Resources Information Center

    Gregg, Melissa K.; Samuel, Arthur G.

    2012-01-01

    Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory…

  12. Ageing and the auditory system

    PubMed Central

    Howarth, A; Shone, G R

    2006-01-01

    There are a number of pathophysiological processes underlying age related changes in the auditory system. The effects of hearing loss can have consequences beyond the immediate loss of hearing, and may have profound effects on the functioning of the person. While a deficit in hearing can be corrected to some degree by a hearing aid, auditory rehabilitation requires much more than simply amplifying external sound. It is important that those dealing with elderly people are aware of all the issues involved in age related hearing loss. PMID:16517797

  13. Loudspeaker equalization for auditory research.

    PubMed

    MacDonald, Justin A; Tran, Phuong K

    2007-02-01

    The equalization of loudspeaker frequency response is necessary to conduct many types of well-controlled auditory experiments. This article introduces a program that includes functions to measure a loudspeaker's frequency response, design equalization filters, and apply the filters to a set of stimuli to be used in an auditory experiment. The filters can compensate for both magnitude and phase distortions introduced by the loudspeaker. A MATLAB script is included in the Appendix to illustrate the details of the equalization algorithm used in the program.
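The general recipe the abstract describes, measuring the loudspeaker response, designing a filter that inverts both magnitude and phase, and pre-filtering the stimuli, can be sketched with a regularized frequency-domain inverse. The article's own implementation is a MATLAB script in its Appendix; this Python sketch only illustrates the idea, and the FFT length and regularization constant are assumptions.

```python
import numpy as np

# Regularized frequency-domain inverse filter (sketch): given a
# measured loudspeaker impulse response h, build an FIR filter that
# flattens both magnitude and phase. The eps term keeps the inverse
# bounded at deep spectral nulls.
def design_inverse_filter(h, n_fft=1024, eps=1e-3):
    H = np.fft.rfft(h, n_fft)
    H_inv = np.conj(H) / (np.abs(H) ** 2 + eps)   # regularized inverse
    return np.fft.irfft(H_inv, n_fft)

def equalize(stimulus, inv_filter):
    """Pre-filter a stimulus so the loudspeaker output is flattened."""
    return np.convolve(stimulus, inv_filter)

# Toy check with a simple minimum-phase "loudspeaker" response:
# the cascade of response and inverse filter should be near an impulse.
h = np.array([1.0, 0.6, 0.2])
g = design_inverse_filter(h)
cascade = np.convolve(h, g)

# Usage: pre-filter an experimental stimulus before playback
stim = np.random.default_rng(1).standard_normal(256)
eq_stim = equalize(stim, g)
```

Raising `eps` trades residual response ripple for less amplification of measurement noise at the nulls, the usual design choice in loudspeaker inversion.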

  14. Persistent neural activity in auditory cortex is related to auditory working memory in humans and nonhuman primates

    PubMed Central

    Huang, Ying; Matysiak, Artur; Heil, Peter; König, Reinhard; Brosch, Michael

    2016-01-01

    Working memory is the cognitive capacity of short-term storage of information for goal-directed behaviors. Where and how this capacity is implemented in the brain are unresolved questions. We show that auditory cortex stores information by persistent changes of neural activity. We separated activity related to working memory from activity related to other mental processes by having humans and monkeys perform different tasks with varying working memory demands on the same sound sequences. Working memory was reflected in the spiking activity of individual neurons in auditory cortex and in the activity of neuronal populations, that is, in local field potentials and magnetic fields. Our results provide direct support for the idea that temporary storage of information recruits the same brain areas that also process the information. Because similar activity was observed in the two species, the cellular bases of some auditory working memory processes in humans can be studied in monkeys. DOI: http://dx.doi.org/10.7554/eLife.15441.001 PMID:27438411

  15. Differential deviant probability effects on two hierarchical levels of the auditory novelty system.

    PubMed

    López-Caballero, Fran; Zarnowiec, Katarzyna; Escera, Carles

    2016-10-01

    Deviance detection is a key functional property of the auditory system that allows pre-attentive discrimination of incoming stimuli not conforming to a rule extracted from the ongoing constant stimulation, thereby showing that regularities in the auditory scene have been encoded in the auditory system. Using simple-feature stimulus deviations, regularity encoding and deviance detection have been reported in brain responses at multiple latencies of the human Auditory Evoked Potential (AEP), such as the Mismatch Negativity (MMN; peaking at 100–250 ms from stimulus onset) and Middle-Latency Responses (MLR; peaking at 12–50 ms). More complex levels of regularity violations, however, are only indexed by AEPs generated at higher stages of the auditory system, suggesting a hierarchical organization in the encoding of auditory regularities. The aim of the current study was to further characterize the auditory hierarchy of novelty responses by assessing the sensitivity of MLR components to deviant probability manipulations. MMNs and MLRs were recorded in 24 healthy participants, using an oddball location paradigm with three different deviant probabilities (5%, 10% and 20%), and a reversed-standard (91.5%). We analyzed differences in the MLRs elicited by each of the deviant stimuli and the reversed-standard, as well as among the deviant stimuli. Our results confirmed deviance detection at the level of both MLRs and MMN, but significant differences across deviant probabilities were found only for the MMN. These results suggest a functional dissociation between regularity encoding, already present at early stages of auditory processing, and the encoding of the probability with which this regularity is disrupted, which is only processed at higher stages of the auditory hierarchy.
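An oddball stimulus sequence of the kind used in such paradigms can be generated as below. The no-consecutive-deviants constraint is a common convention in MMN designs, assumed here for illustration rather than taken from this study; note that it pulls the realized deviant rate slightly below the nominal probability.

```python
import random

# Oddball sequence generator (sketch): a stream of standards with
# deviants inserted at a given nominal probability, never allowing
# two deviants back to back.
def oddball_sequence(n_trials, deviant_p, rng=None):
    rng = rng or random.Random(0)
    seq = []
    for _ in range(n_trials):
        if seq and seq[-1] == "deviant":
            seq.append("standard")      # enforce no consecutive deviants
        elif rng.random() < deviant_p:
            seq.append("deviant")
        else:
            seq.append("standard")
    return seq

seq = oddball_sequence(2000, 0.10)
realized_rate = seq.count("deviant") / len(seq)
```

With the constraint, the stationary deviant rate is p/(1+p) rather than p (about 9.1% for a nominal 10%), which is worth accounting for when reporting deviant probabilities.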

  16. From ear to body: the auditory-motor loop in spatial cognition

    PubMed Central

    Viaud-Delmon, Isabelle; Warusfel, Olivier

    2014-01-01

    Spatial memory is mainly studied through the visual sensory modality: navigation tasks in humans rarely integrate dynamic and spatial auditory information. In order to study how a spatial scene can be memorized on the basis of auditory and idiothetic cues only, we constructed an auditory equivalent of the Morris water maze, a task widely used to assess spatial learning and memory in rodents. Participants were equipped with wireless headphones, which delivered a soundscape updated in real time according to their movements in 3D space. A wireless tracking system (video infrared with passive markers) was used to send the coordinates of the subject's head to the sound rendering system. The rendering system used advanced HRTF-based synthesis of directional cues and room acoustic simulation for the auralization of a realistic acoustic environment. Participants were guided blindfolded in an experimental room. Their task was to explore a delimited area in order to find a hidden auditory target, i.e., a sound that was only triggered when the participant walked over a precise location in the area. The position of this target could be coded in relation to auditory landmarks constantly rendered during the exploration of the area. The task was composed of a practice trial, 6 acquisition trials during which participants had to memorize the location of the target, and 4 test trials in which some aspects of the auditory scene were modified. The task ended with a probe trial in which the auditory target was removed. The configuration of search paths showed how auditory information was coded to memorize the position of the target, and suggested that space can be efficiently coded without visual information in normally sighted subjects. In conclusion, space representation can be based on sensorimotor and auditory cues only, providing another argument in favor of the hypothesis that the brain has access to a modality-invariant representation of external space. PMID:25249933

  17. Association of Concurrent fNIRS and EEG Signatures in Response to Auditory and Visual Stimuli.

    PubMed

    Chen, Ling-Chia; Sandmann, Pascale; Thorne, Jeremy D; Herrmann, Christoph S; Debener, Stefan

    2015-09-01

    Functional near-infrared spectroscopy (fNIRS) has been proven reliable for investigation of low-level visual processing in both infants and adults. Similar investigation of fundamental auditory processes with fNIRS, however, remains incomplete. Here we employed a systematic three-level validation approach to investigate whether fNIRS could capture fundamental aspects of bottom-up acoustic processing. We performed a simultaneous fNIRS-EEG experiment with visual and auditory stimulation in 24 participants, which allowed the relationship between changes in neural activity and hemoglobin concentrations to be studied. In the first level, the fNIRS results showed a clear distinction between visual and auditory sensory modalities. Specifically, the results demonstrated area specificity, that is, maximal fNIRS responses in visual and auditory areas for the visual and auditory stimuli respectively, and stimulus selectivity, whereby the visual and auditory areas responded mainly toward their respective stimuli. In the second level, a stimulus-dependent modulation of the fNIRS signal was observed in the visual area, as well as a loudness modulation in the auditory area. Finally, in the third level, we observed significant correlations between simultaneously recorded visual evoked potentials and deoxygenated hemoglobin (DeoxyHb) concentration, and between late auditory evoked potentials and oxygenated hemoglobin (OxyHb) concentration. In sum, these results suggest good sensitivity of fNIRS to low-level sensory processing in both the visual and the auditory domain, and provide further evidence of the neurovascular coupling between hemoglobin concentration changes and non-invasive brain electrical activity.

  18. Electrostimulation mapping of comprehension of auditory and visual words.

    PubMed

    Roux, Franck-Emmanuel; Miskin, Krasimir; Durand, Jean-Baptiste; Sacko, Oumar; Réhault, Emilie; Tanova, Rositsa; Démonet, Jean-François

    2015-10-01

    In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2754 cortical sites tested). Language functions were studied with a special focus on comprehension of auditory and visual words and the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of 4 different images controlled for familiarity), and also auditory object (sound recognition) and Token test tasks. Ninety-two auditory comprehension interference sites were observed. We found that the process of auditory comprehension involved a few, fine-grained, sub-centimetre cortical territories. Early stages of speech comprehension seem to relate to two posterior regions in the left superior temporal gyrus. Downstream lexical-semantic speech processing and sound analysis involved 2 pathways, along the anterior part of the left superior temporal gyrus, and posteriorly around the supramarginal and middle temporal gyri. Electrostimulation experimentally dissociated the perceptual consciousness attached to speech comprehension. The initial word discrimination process can be considered an "automatic" stage, the attention feedback not being impaired by stimulation as would be the case at the lexical-semantic stage. Multimodal organization of the superior temporal gyrus was also detected, since some neurones could be involved in comprehension of visual material and naming. These findings demonstrate a fine-grained, sub-centimetre cortical representation of speech comprehension processing, mainly in the left superior temporal gyrus, and are in line with those described in dual stream models of language comprehension processing.

  19. Delays in auditory processing identified in preschool children with FASD

    PubMed Central

    Stephen, Julia M.; Kodituwakku, Piyadasa W.; Kodituwakku, Elizabeth L.; Romero, Lucinda; Peters, Amanda M.; Sharadamma, Nirupama Muniswamy; Caprihan, Arvind; Coffman, Brian A.

    2012-01-01

    Background Both sensory and cognitive deficits have been associated with prenatal exposure to alcohol; however, very few studies have focused on sensory deficits in preschool aged children. Since sensory skills develop early, characterization of sensory deficits using novel imaging methods may reveal important neural markers of prenatal alcohol exposure. Materials and Methods Participants in this study were 10 children with a fetal alcohol spectrum disorder (FASD) and 15 healthy control children aged 3-6 years. All participants had normal hearing as determined by clinical screens. We measured their neurophysiological responses to auditory stimuli (1000 Hz, 72 dB tone) using magnetoencephalography (MEG). We used a multi-dipole spatio-temporal modeling technique (CSST – Ranken et al. 2002) to identify the location and timecourse of cortical activity in response to the auditory tones. The timing and amplitude of the left and right superior temporal gyrus sources associated with activation of left and right primary/secondary auditory cortices were compared across groups. Results There was a significant delay in M100 and M200 latencies for the FASD children relative to the HC children (p = 0.01), when including age as a covariate. The within-subjects effect of hemisphere was not significant. A comparable delay in M100 and M200 latencies was observed in children across the FASD subtypes. Discussion Auditory delay revealed by MEG in children with FASD may prove to be a useful neural marker of information processing difficulties in young children with prenatal alcohol exposure. The fact that delayed auditory responses were observed across the FASD spectrum suggests that it may be a sensitive measure of alcohol-induced brain damage. Therefore, this measure in conjunction with other clinical tools may prove useful for early identification of alcohol affected children, particularly those without dysmorphia. PMID:22458372

  20. Auditory Detection of the Human Brainstem Auditory Evoked Response.

    ERIC Educational Resources Information Center

    Kidd, Gerald, Jr.; And Others

    1993-01-01

    This study evaluated whether listeners can distinguish human brainstem auditory evoked responses elicited by acoustic clicks from control waveforms obtained with no acoustic stimulus when the waveforms are presented auditorily. Detection performance for stimuli presented visually was slightly, but consistently, superior to that which occurred for…

  1. Predictive motor control of sensory dynamics in auditory active sensing.

    PubMed

    Morillon, Benjamin; Hackett, Troy A; Kajikawa, Yoshinao; Schroeder, Charles E

    2015-04-01

    Neuronal oscillations present potential physiological substrates for brain operations that require temporal prediction. We review this idea in the context of auditory perception. Using speech as an exemplar, we illustrate how hierarchically organized oscillations can be used to parse and encode complex input streams. We then consider the motor system as a major source of rhythms (temporal priors) in auditory processing, that act in concert with attention to sharpen sensory representations and link them across areas. We discuss the circuits that could mediate this audio-motor interaction, notably the potential role of the somatosensory system. Finally, we reposition temporal predictions in the context of internal models, discussing how they interact with feature-based or spatial predictions. We argue that complementary predictions interact synergistically according to the organizational principles of each sensory system, forming multidimensional filters crucial to perception.

  2. Auditory stream segregation in children with Asperger syndrome

    PubMed Central

    Lepistö, T.; Kuitunen, A.; Sussman, E.; Saalasti, S.; Jansson-Verkasalo, E.; Nieminen-von Wendt, T.; Kujala, T.

    2009-01-01

    Individuals with Asperger syndrome (AS) often have difficulties in perceiving speech in noisy environments. The present study investigated whether this might be explained by deficient auditory stream segregation ability, that is, by a more basic difficulty in separating simultaneous sound sources from each other. To this end, auditory event-related brain potentials were recorded from a group of school-aged children with AS and a group of age-matched controls using a paradigm specifically developed for studying stream segregation. Differences in the amplitudes of ERP components were found between groups only in the stream segregation conditions and not for simple feature discrimination. The results indicated that children with AS have difficulties in segregating concurrent sound streams, which ultimately may contribute to the difficulties in speech-in-noise perception. PMID:19751798

  3. Auditory stream segregation in children with Asperger syndrome.