Sample records for auditory frequency range

  1. Integration of auditory and vibrotactile stimuli: Effects of frequency

    PubMed Central

    Wilson, E. Courtenay; Reed, Charlotte M.; Braida, Louis D.

    2010-01-01

    Perceptual integration of vibrotactile and auditory sinusoidal tone pulses was studied in detection experiments as a function of stimulation frequency. Vibrotactile stimuli were delivered through a single channel vibrator to the left middle fingertip. Auditory stimuli were presented diotically through headphones in a background of 50 dB sound pressure level broadband noise. Detection performance for combined auditory-tactile presentations was measured using stimulus levels that yielded 63% to 77% correct unimodal performance. In Experiment 1, the vibrotactile stimulus was 250 Hz and the auditory stimulus varied between 125 and 2000 Hz. In Experiment 2, the auditory stimulus was 250 Hz and the tactile stimulus varied between 50 and 400 Hz. In Experiment 3, the auditory and tactile stimuli were always equal in frequency and ranged from 50 to 400 Hz. The highest rates of detection for the combined-modality stimulus were obtained when stimulating frequencies in the two modalities were equal or closely spaced (and within the Pacinian range). Combined-modality detection for closely spaced frequencies was generally consistent with an algebraic sum model of perceptual integration; wider-frequency spacings were generally better fit by a Pythagorean sum model. Thus, perceptual integration of auditory and tactile stimuli at near-threshold levels appears to depend both on absolute frequency and relative frequency of stimulation within each modality. PMID:21117754
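    The two combination rules named in this abstract can be written down directly; a minimal sketch using d′ as the sensitivity index (the conversion from percent-correct to d′ is omitted, and the paper's exact formulation may differ):

```python
import math

def algebraic_sum(d_a, d_t):
    """Algebraic-sum model: full summation of the two unimodal sensitivities."""
    return d_a + d_t

def pythagorean_sum(d_a, d_t):
    """Pythagorean-sum model: independent-channel (orthogonal) combination."""
    return math.sqrt(d_a ** 2 + d_t ** 2)

# With equal unimodal sensitivities (e.g. d' = 1.0 in each modality), the
# algebraic sum predicts a larger combined-modality benefit:
print(algebraic_sum(1.0, 1.0))    # 2.0
print(pythagorean_sum(1.0, 1.0))  # ~1.414
```

    The paper's finding maps onto these rules: closely spaced frequencies behaved like the (larger) algebraic sum, wider spacings like the Pythagorean sum.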

  2. Two-channel recording of auditory-evoked potentials to detect age-related deficits in temporal processing.

    PubMed

    Parthasarathy, Aravindakshan; Bartlett, Edward

    2012-07-01

    Auditory brainstem responses (ABRs), and envelope and frequency following responses (EFRs and FFRs) are widely used to study aberrant auditory processing in conditions such as aging. We have previously reported age-related deficits in auditory processing for rapid amplitude modulation (AM) frequencies using EFRs recorded from a single channel. However, sensitive testing of EFRs along a wide range of modulation frequencies is required to gain a more complete understanding of the auditory processing deficits. In this study, ABRs and EFRs were recorded simultaneously from two electrode configurations in young and old Fischer-344 rats, a common auditory aging model. Analysis shows that the two channels respond most sensitively to complementary AM frequencies. Channel 1, recorded from Fz to mastoid, responds better to faster AM frequencies in the 100-700 Hz range of frequencies, while Channel 2, recorded from the inter-aural line to the mastoid, responds better to slower AM frequencies in the 16-100 Hz range. Simultaneous recording of Channels 1 and 2 using AM stimuli with varying sound levels and modulation depths show that age-related deficits in temporal processing are not present at slower AM frequencies but only at more rapid ones, which would not have been apparent recording from either channel alone. Comparison of EFRs between un-anesthetized and isoflurane-anesthetized recordings in young animals, as well as comparison with previously published ABR waveforms, suggests that the generators of Channel 1 may emphasize more caudal brainstem structures while those of Channel 2 may emphasize more rostral auditory nuclei including the inferior colliculus and the forebrain, with the boundary of separation potentially along the cochlear nucleus/superior olivary complex. 
Simultaneous two-channel recording of EFRs helps to give a more complete understanding of the properties of auditory temporal processing over a wide range of modulation frequencies, which is useful in understanding neural representations of sound stimuli in normal, developmental or pathological conditions. Copyright © 2012 Elsevier B.V. All rights reserved.
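    The EFR stimuli described here are amplitude-modulated tones whose modulation frequency and depth are varied across conditions. A minimal sketch of such a stimulus (carrier, rate, depth and sample-rate values below are illustrative, not taken from the study):

```python
import math

def am_tone(carrier_hz, mod_hz, depth, dur_s=0.2, fs=8000):
    """Sinusoidally amplitude-modulated tone:
    (1 + m*sin(2*pi*fm*t)) * sin(2*pi*fc*t), with modulation depth m in [0, 1]."""
    n = int(dur_s * fs)
    return [(1.0 + depth * math.sin(2 * math.pi * mod_hz * i / fs))
            * math.sin(2 * math.pi * carrier_hz * i / fs)
            for i in range(n)]

# A 1 kHz carrier modulated at 40 Hz with 80% depth:
sig = am_tone(1000.0, 40.0, 0.8)
print(len(sig))  # 1600 samples (0.2 s at 8 kHz)
```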

  3. Binaural auditory beats affect vigilance performance and mood.

    PubMed

    Lane, J D; Kasian, S J; Owens, J E; Marsh, G R

    1998-01-01

When two tones of slightly different frequency are presented separately to the left and right ears, the listener perceives a single tone that varies in amplitude at a frequency equal to the frequency difference between the two tones, a perceptual phenomenon known as the binaural auditory beat. Anecdotal reports suggest that binaural auditory beats within the electroencephalograph (EEG) frequency range can entrain EEG activity and may affect states of consciousness, although few scientific studies have been published. This study compared the effects of binaural auditory beats in the EEG beta and theta/delta frequency ranges on mood and on performance of a vigilance task, to investigate their effects on subjective and objective measures of arousal. Participants (n = 29) performed a 30-min visual vigilance task on three different days while listening to pink noise containing simple tones or binaural beats either in the beta range (16 and 24 Hz) or the theta/delta range (1.5 and 4 Hz). Participants were kept blind to the presence of binaural beats to control for expectation effects. Presentation of beta-frequency binaural beats yielded more correct target detections and fewer false alarms than presentation of theta/delta-frequency binaural beats. In addition, the beta-frequency beats were associated with less negative mood. Results suggest that the presentation of binaural auditory beats can affect psychomotor performance and mood. This technology may have applications for the control of attention and arousal and the enhancement of human performance.
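    The stimulus construction is simple to sketch: each ear receives its own pure tone, and the beat arises only centrally, at the difference frequency. Carrier values below are illustrative, and the pink-noise background used in the study is omitted:

```python
import math

def binaural_beat(f_left, f_right, dur_s=1.0, fs=8000):
    """Stereo pair of pure tones, one per ear. The perceived binaural beat
    rate is the frequency difference |f_left - f_right| in Hz."""
    n = int(dur_s * fs)
    left  = [math.sin(2 * math.pi * f_left  * i / fs) for i in range(n)]
    right = [math.sin(2 * math.pi * f_right * i / fs) for i in range(n)]
    return left, right

# A beta-range (16 Hz) beat from carriers at 400 and 416 Hz:
left, right = binaural_beat(400.0, 416.0)
print(abs(416.0 - 400.0))  # 16.0 Hz beat frequency
```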

  4. Correspondence between evoked vocal responses and auditory thresholds in Pleurodema thaul (Amphibia; Leptodactylidae).

    PubMed

    Penna, Mario; Velásquez, Nelson; Solís, Rigoberto

    2008-04-01

Thresholds for evoked vocal responses and thresholds of multiunit midbrain auditory responses to pure tones and synthetic calls were investigated in males of Pleurodema thaul, as behavioral thresholds well above auditory sensitivity have been reported for other anurans. Thresholds for evoked vocal responses to synthetic advertisement calls played back at increasing intensity averaged 43 dB RMS SPL (range 31-52 dB RMS SPL), measured at the subjects' position. The number of pulses increased with stimulus intensity, reaching a plateau about 18-39 dB above threshold, and decreased at higher intensities. Latency to call followed the inverse trend relative to the number of pulses. Neural audiograms yielded an average best threshold in the high-frequency range of 46.6 dB RMS SPL (range 41-51 dB RMS SPL) and a center frequency of 1.9 kHz (range 1.7-2.6 kHz). Auditory thresholds for a synthetic call having a carrier frequency of 2.1 kHz averaged 44 dB RMS SPL (range 39-47 dB RMS SPL). The similarity between thresholds for advertisement calling and auditory thresholds for the advertisement call indicates that male P. thaul use the full extent of their auditory sensitivity in acoustic interactions, likely an evolutionary adaptation allowing chorusing activity in low-density aggregations.

  5. Tympanal spontaneous oscillations reveal mechanisms for the control of amplified frequency in tree crickets

    NASA Astrophysics Data System (ADS)

    Mhatre, Natasha; Robert, Daniel

    2018-05-01

Tree cricket hearing shows all the features of an actively amplified auditory system, particularly spontaneous oscillations (SOs) of the tympanal membrane. As expected from an actively amplified auditory system, SO frequency and the peak frequency in evoked responses as observed in sensitivity spectra are correlated. Sensitivity spectra also show compressive non-linearity at this frequency, i.e. a reduction in peak height and sharpness with increasing stimulus amplitude. Both SO and amplified frequency also change with ambient temperature, allowing the auditory system to maintain a filter that is matched to song frequency. In tree crickets, remarkably, song frequency varies with ambient temperature. Interestingly, active amplification has been reported to be switched ON and OFF. The mechanism of this switch is as yet unknown. In order to gain insights into this switch, we recorded and analysed SOs as the auditory system transitioned from the passive (OFF) state to the active (ON) state. We found that while SO amplitude did not follow a fixed pattern, SO frequency changed during the ON-OFF transition. SOs were first detected above noise levels at low frequencies, sometimes well below the known song frequency range (0.5-1 kHz lower). SO frequency was observed to increase over the next ~30 minutes, in the absence of any ambient temperature change, before settling at a frequency within the range of conspecific song. We examine the frequency shift in SO spectra with temperature and during the ON/OFF transition and discuss the mechanistic implications. To our knowledge, such modulation of active auditory amplification, and its dynamics are unique amongst auditory animals.

  6. The importance of individual frequencies of endogenous brain oscillations for auditory cognition - A short review.

    PubMed

    Baltus, Alina; Herrmann, Christoph Siegfried

    2016-06-01

Oscillatory EEG activity in the human brain with frequencies in the gamma range (approx. 30-80 Hz) is known to be relevant for a large number of cognitive processes. Interestingly, each subject reveals an individual frequency of the auditory gamma-band response (GBR) that coincides with the peak in the auditory steady-state response (ASSR). A common resonance frequency of auditory cortex seems to underlie both the individual frequency of the GBR and the peak of the ASSR. This review sheds light on the functional role of oscillatory gamma activity for auditory processing. For successful processing, the auditory system has to track changes in auditory input over time and store information about past events in memory, which allows the construction of auditory objects. Recent findings support the idea of gamma oscillations being involved in the partitioning of auditory input into discrete samples to facilitate higher-order processing. We review experiments suggesting that inter-individual differences in the resonance frequency are behaviorally relevant for gap detection and speech processing. A possible application of these resonance frequencies for brain-computer interfaces is illustrated with regard to optimized individual presentation rates for auditory input to correspond with endogenous oscillatory activity. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Auditory fovea and Doppler shift compensation: adaptations for flutter detection in echolocating bats using CF-FM signals.

    PubMed

    Schnitzler, Hans-Ulrich; Denzinger, Annette

    2011-05-01

    Rhythmical modulations in insect echoes caused by the moving wings of fluttering insects are behaviourally relevant information for bats emitting CF-FM signals with a high duty cycle. Transmitter and receiver of the echolocation system in flutter detecting foragers are especially adapted for the processing of flutter information. The adaptations of the transmitter are indicated by a flutter induced increase in duty cycle, and by Doppler shift compensation (DSC) that keeps the carrier frequency of the insect echoes near a reference frequency. An adaptation of the receiver is the auditory fovea on the basilar membrane, a highly expanded frequency representation centred to the reference frequency. The afferent projections from the fovea lead to foveal areas with an overrepresentation of sharply tuned neurons with best frequencies near the reference frequency throughout the entire auditory pathway. These foveal neurons are very sensitive to stimuli with natural and simulated flutter information. The frequency range of the foveal areas with their flutter processing neurons overlaps exactly with the frequency range where DS compensating bats most likely receive echoes from fluttering insects. This tight match indicates that auditory fovea and DSC are adaptations for the detection and evaluation of insects flying in clutter.
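    The DSC arithmetic can be sketched from the two-way Doppler relation: a bat flying toward a reflector at speed v receives its echo upshifted by roughly a factor (1 + 2v/c), so it must lower its emission frequency to keep the echo at the reference frequency. The ~83 kHz reference used below is a figure commonly cited for greater horseshoe bats, included here only as an illustration:

```python
def dsc_emission_frequency(f_ref_khz, bat_speed_ms, c=343.0):
    """Call frequency a Doppler-shift-compensating bat must emit so that the
    echo from a stationary reflector returns at the reference frequency.
    Two-way Doppler for a source approaching at v << c:
        f_echo ~= f_emit * (1 + 2*v/c)."""
    return f_ref_khz / (1.0 + 2.0 * bat_speed_ms / c)

# Flying at 5 m/s toward a target, with an 83 kHz reference frequency:
f_emit = dsc_emission_frequency(83.0, 5.0)
print(round(f_emit, 2))  # 80.65 kHz: emits lower so the upshifted echo lands at 83 kHz
```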

  8. Effects of temperature on tuning of the auditory pathway in the cicada Tettigetta josei (Hemiptera, Tibicinidae).

    PubMed

    Fonseca, P J; Correia, T

    2007-05-01

    The effects of temperature on hearing in the cicada Tettigetta josei were studied. The activity of the auditory nerve and the responses of auditory interneurons to stimuli of different frequencies and intensities were recorded at different temperatures ranging from 16 degrees C to 29 degrees C. Firstly, in order to investigate the temperature dependence of hearing processes, we analyzed its effects on auditory tuning, sensitivity, latency and Q(10dB). Increasing temperature led to an upward shift of the characteristic hearing frequency, to an increase in sensitivity and to a decrease in the latency of the auditory response both in the auditory nerve recordings (periphery) and in some interneurons at the metathoracic-abdominal ganglionic complex (MAC). Characteristic frequency shifts were only observed at low frequency (3-8 kHz). No changes were seen in Q(10dB). Different tuning mechanisms underlying frequency selectivity may explain the results observed. Secondly, we investigated the role of the mechanical sensory structures that participate in the transduction process. Laser vibrometry measurements revealed that the vibrations of the tympanum and tympanal apodeme are temperature independent in the biologically relevant range (18-35 degrees C). Since the above mentioned effects of temperature are present in the auditory nerve recordings, the observed shifts in frequency tuning must be performed by mechanisms intrinsic to the receptor cells. Finally, the role of potassium channels in the response of the auditory system was investigated using a specific inhibitor of these channels, tetraethylammonium (TEA). TEA caused shifts on tuning and sensitivity of the summed response of the receptors similar to the effects of temperature. Thus, potassium channels are implicated in the tuning of the receptor cells.

  9. Contrast Enhancement without Transient Map Expansion for Species-Specific Vocalizations in Core Auditory Cortex during Learning.

    PubMed

    Shepard, Kathryn N; Chong, Kelly K; Liu, Robert C

    2016-01-01

    Tonotopic map plasticity in the adult auditory cortex (AC) is a well established and oft-cited measure of auditory associative learning in classical conditioning paradigms. However, its necessity as an enduring memory trace has been debated, especially given a recent finding that the areal expansion of core AC tuned to a newly relevant frequency range may arise only transiently to support auditory learning. This has been reinforced by an ethological paradigm showing that map expansion is not observed for ultrasonic vocalizations (USVs) or for ultrasound frequencies in postweaning dams for whom USVs emitted by pups acquire behavioral relevance. However, whether transient expansion occurs during maternal experience is not known, and could help to reveal the generality of cortical map expansion as a correlate for auditory learning. We thus mapped the auditory cortices of maternal mice at postnatal time points surrounding the peak in pup USV emission, but found no evidence of frequency map expansion for the behaviorally relevant high ultrasound range in AC. Instead, regions tuned to low frequencies outside of the ultrasound range show progressively greater suppression of activity in response to the playback of ultrasounds or pup USVs for maternally experienced animals assessed at their pups' postnatal day 9 (P9) to P10, or postweaning. This provides new evidence for a lateral-band suppression mechanism elicited by behaviorally meaningful USVs, likely enhancing their population-level signal-to-noise ratio. These results demonstrate that tonotopic map enlargement has limits as a construct for conceptualizing how experience leaves neural memory traces within sensory cortex in the context of ethological auditory learning.

  10. Deviance-Related Responses along the Auditory Hierarchy: Combined FFR, MLR and MMN Evidence.

    PubMed

    Shiga, Tetsuya; Althen, Heike; Cornella, Miriam; Zarnowiec, Katarzyna; Yabe, Hirooki; Escera, Carles

    2015-01-01

    The mismatch negativity (MMN) provides a correlate of automatic auditory discrimination in human auditory cortex that is elicited in response to violation of any acoustic regularity. Recently, deviance-related responses were found at much earlier cortical processing stages as reflected by the middle latency response (MLR) of the auditory evoked potential, and even at the level of the auditory brainstem as reflected by the frequency following response (FFR). However, no study has reported deviance-related responses in the FFR, MLR and long latency response (LLR) concurrently in a single recording protocol. Amplitude-modulated (AM) sounds were presented to healthy human participants in a frequency oddball paradigm to investigate deviance-related responses along the auditory hierarchy in the ranges of FFR, MLR and LLR. AM frequency deviants modulated the FFR, the Na and Nb components of the MLR, and the LLR eliciting the MMN. These findings demonstrate that it is possible to elicit deviance-related responses at three different levels (FFR, MLR and LLR) in one single recording protocol, highlight the involvement of the whole auditory hierarchy in deviance detection and have implications for cognitive and clinical auditory neuroscience. Moreover, the present protocol provides a new research tool into clinical neuroscience so that the functional integrity of the auditory novelty system can now be tested as a whole in a range of clinical populations where the MMN was previously shown to be defective.
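    A frequency oddball paradigm of the kind described interleaves rare deviants among frequent standards. A minimal sketch of such a trial sequence (the probability, frequency labels, and no-consecutive-deviants constraint are common conventions in MMN work, not the study's exact protocol):

```python
import random

def oddball_sequence(n_trials, deviant_prob=0.2, std=800.0, dev=1200.0, seed=1):
    """Random oddball stream: mostly standards, occasional deviants, with no
    two deviants in a row (a frequent constraint in MMN paradigms)."""
    rng = random.Random(seed)
    seq, last_was_deviant = [], False
    for _ in range(n_trials):
        is_deviant = (not last_was_deviant) and rng.random() < deviant_prob
        seq.append(dev if is_deviant else std)
        last_was_deviant = is_deviant
    return seq

seq = oddball_sequence(500)
print(seq.count(1200.0) / len(seq))  # roughly the nominal deviant probability
```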

  12. Long-Term Impairment of Sound Processing in the Auditory Midbrain by Daily Short-Term Exposure to Moderate Noise.

    PubMed

    Cheng, Liang; Wang, Shao-Hui; Peng, Kang; Liao, Xiao-Mei

    2017-01-01

Most people in cities are exposed daily to environmental noise at moderate levels for short durations. The aim of the present study was to determine the effects of daily short-term exposure to moderate noise on sound level processing in the auditory midbrain. Sound processing properties of auditory midbrain neurons were recorded in anesthetized mice exposed to moderate noise (80 dB SPL, 2 h/d for 6 weeks) and were compared with those from age-matched controls. Neurons in exposed mice had a higher minimum threshold and maximum response intensity, a longer first spike latency, and a higher slope and narrower dynamic range for the rate-level function. These observed changes were greater in neurons with best frequencies within the noise exposure frequency range than in those outside it. These sound processing properties also remained abnormal after a 12-week period of recovery in a quiet laboratory environment after completion of noise exposure. In conclusion, even daily short-term exposure to moderate noise can cause long-term impairment of sound level processing in a frequency-specific manner in auditory midbrain neurons.
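    Several of the measures reported (minimum threshold, dynamic range of the rate-level function) can be extracted mechanically from a rate-level curve. A sketch under one common convention, taking the dynamic range as the level span between 10% and 90% of the maximum rate; the paper's exact criteria may differ:

```python
def rate_level_metrics(levels_db, rates, lo=0.1, hi=0.9):
    """Minimum threshold and dynamic range from a monotonic rate-level
    function (spike rate vs. sound level in dB SPL)."""
    r_max = max(rates)
    threshold = next(l for l, r in zip(levels_db, rates) if r > 0)
    l_lo = next(l for l, r in zip(levels_db, rates) if r >= lo * r_max)
    l_hi = next(l for l, r in zip(levels_db, rates) if r >= hi * r_max)
    return threshold, l_hi - l_lo

# An illustrative sigmoidal rate-level function:
levels = [0, 10, 20, 30, 40, 50, 60, 70, 80]
rates  = [0,  0,  2, 10, 30, 60, 85, 95, 100]
thr, dyn_range = rate_level_metrics(levels, rates)
print(thr, dyn_range)  # 20 40
```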

  14. Specialization of the auditory processing in harbor porpoise, characterized by brain-stem potentials

    NASA Astrophysics Data System (ADS)

    Bibikov, Nikolay G.

    2002-05-01

Brain-stem auditory evoked potentials (BAEPs) were recorded from the head surface of three awake harbor porpoises (Phocoena phocoena). A silver disk placed on the skin surface above the vertex bone was used as the active electrode. The experiments were performed at the Karadag biological station (Crimea peninsula). Clicks and tone bursts were used as stimuli. The temporal and frequency selectivity of the auditory system was estimated using the methods of simultaneous and forward masking. An evident minimum of the BAEP thresholds was observed in the range of 125-135 kHz, where the main spectral component of the species-specific echolocation signal is located. In this frequency range, tonal forward masking demonstrated strong frequency selectivity. An off-response to such tone bursts was a typical observation. An evident BAEP could be recorded up to frequencies of 190-200 kHz; however, outside the acoustical fovea the frequency selectivity was rather poor. Temporal resolution was estimated by measuring BAEP recovery functions for double clicks, double tone bursts, and double noise bursts. The half-time of BAEP recovery was in the range of 0.1-0.2 ms. The data indicate that the porpoise auditory system is strongly adapted to detect closely spaced ultrasonic sounds such as species-specific locating signals and their echoes.

  15. Compensating Level-Dependent Frequency Representation in Auditory Cortex by Synaptic Integration of Corticocortical Input

    PubMed Central

    Happel, Max F. K.; Ohl, Frank W.

    2017-01-01

    Robust perception of auditory objects over a large range of sound intensities is a fundamental feature of the auditory system. However, firing characteristics of single neurons across the entire auditory system, like the frequency tuning, can change significantly with stimulus intensity. Physiological correlates of level-constancy of auditory representations hence should be manifested on the level of larger neuronal assemblies or population patterns. In this study we have investigated how information of frequency and sound level is integrated on the circuit-level in the primary auditory cortex (AI) of the Mongolian gerbil. We used a combination of pharmacological silencing of corticocortically relayed activity and laminar current source density (CSD) analysis. Our data demonstrate that with increasing stimulus intensities progressively lower frequencies lead to the maximal impulse response within cortical input layers at a given cortical site inherited from thalamocortical synaptic inputs. We further identified a temporally precise intercolumnar synaptic convergence of early thalamocortical and horizontal corticocortical inputs. Later tone-evoked activity in upper layers showed a preservation of broad tonotopic tuning across sound levels without shifts towards lower frequencies. Synaptic integration within corticocortical circuits may hence contribute to a level-robust representation of auditory information on a neuronal population level in the auditory cortex. PMID:28046062

  17. Auditory sensitivity to local stimulation of the head surface in a beluga whale (Delphinapterus leucas).

    PubMed

    Popov, Vladimir V; Sysueva, Evgeniya V; Nechaev, Dmitry I; Lemazina, Alena A; Supin, Alexander Ya

    2016-08-01

    Using the auditory evoked response technique, sensitivity to local acoustic stimulation of the ventro-lateral head surface was investigated in a beluga whale (Delphinapterus leucas). The stimuli were tone pip trains of carrier frequencies ranging from 16 to 128 kHz with a pip rate of 1 kHz. For higher frequencies (90-128 kHz), the low-threshold point was located next to the medial side of the middle portion of the lower jaw. For middle (32-64 kHz) and lower (16-22.5 kHz) frequencies, the low-threshold point was located at the lateral side of the middle portion of the lower jaw. For lower frequencies, there was an additional low-threshold point next to the bulla-meatus complex. Based on these data, several frequency-specific paths of sound conduction to the auditory bulla are suggested: (i) through an area on the lateral surface of the lower jaw and further through the intra-jaw fat-body channel (for a wide frequency range); (ii) through an area on the ventro-lateral head surface and further through the medial opening of the lower jaw and intra-jaw fat-body channel (for a high-frequency range); and (iii) through an area on the lateral (near meatus) head surface and further through the lateral fat-body channel (for a low-frequency range).

  18. Plasticity of peripheral auditory frequency sensitivity in Emei music frog.

    PubMed

    Zhang, Dian; Cui, Jianguo; Tang, Yezhong

    2012-01-01

In anurans, reproductive behavior is strongly seasonal. During the spring, frogs emerge from hibernation and males vocalize to attract mates or advertise territories. Females can evaluate the quality of a male's resources on the basis of these vocalizations. Although studies have revealed that single neurons in the frog torus semicircularis exhibit seasonal plasticity, whether peripheral auditory sensitivity in frogs is similarly plastic was unknown. In this study, seasonal plasticity of peripheral auditory sensitivity was tested in the Emei music frog Babina daunchina by comparing thresholds and latencies of auditory brainstem responses (ABRs) evoked by tone pips and clicks in the reproductive and non-reproductive seasons. Both ABR thresholds and latencies differed significantly between the two seasons. Thresholds of tone-pip-evoked ABRs in the non-reproductive season were about 10 dB higher than those in the reproductive season for frequencies from 1 to 6 kHz. At near-threshold stimulus levels, ABR latencies to waveform valleys were longer in the non-reproductive season for frequencies from 1.5 to 6 kHz, but shorter for frequencies from 0.2 to 1.5 kHz. These results demonstrate that peripheral auditory frequency sensitivity exhibits seasonal plasticity, which may be adaptive for seasonal reproductive behavior in frogs.

  19. The impact of variation in low-frequency interaural cross correlation on auditory spatial imagery in stereophonic loudspeaker reproduction

    NASA Astrophysics Data System (ADS)

    Martens, William

    2005-04-01

Several attributes of auditory spatial imagery associated with stereophonic sound reproduction are strongly modulated by variation in interaural cross correlation (IACC) within low-frequency bands. Nonetheless, a standard practice in bass management for two-channel and multichannel loudspeaker reproduction is to mix low-frequency musical content to a single channel for reproduction via a single driver (e.g., a subwoofer). This paper reviews the results of psychoacoustic studies which support the conclusion that reproduction of decorrelated low-frequency signals via multiple drivers significantly affects such important spatial attributes as auditory source width (ASW), auditory source distance (ASD), and listener envelopment (LEV). A variety of methods have been employed in these tests, including forced-choice discrimination and identification, and direct ratings of both global dissimilarity and distinct attributes. Contrary to assumptions that underlie industrial standards established in 1994 by ITU-R Recommendation BS.775-1, these findings imply that substantial stereophonic spatial information exists within audio signals at frequencies below the 80 to 120 Hz range of prescribed subwoofer cutoff frequencies, and that loudspeaker reproduction of decorrelated signals at frequencies as low as 50 Hz can have an impact upon auditory spatial imagery. [Work supported by VRQ.]
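    IACC itself is a standard quantity: the maximum of the normalized cross-correlation between the two ear signals over lags within about ±1 ms. A minimal direct sketch (unwindowed and not band-limited, unlike the per-band analysis implied above):

```python
import math

def iacc(left, right, fs, max_lag_ms=1.0):
    """Interaural cross-correlation coefficient: maximum of the normalized
    cross-correlation of the ear signals over lags within +/- max_lag_ms."""
    n = min(len(left), len(right))
    max_lag = int(fs * max_lag_ms / 1000.0)
    norm = math.sqrt(sum(x * x for x in left[:n]) * sum(x * x for x in right[:n]))
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        s = sum(left[i] * right[i + lag]
                for i in range(max(0, -lag), min(n, n - lag)))
        best = max(best, abs(s) / norm)
    return best

# Identical ear signals (fully correlated) give IACC = 1:
sig = [math.sin(2 * math.pi * 60.0 * i / 8000) for i in range(8000)]
print(round(iacc(sig, sig, 8000), 3))  # 1.0
```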

  20. Processing of frequency-modulated sounds in the lateral auditory belt cortex of the rhesus monkey.

    PubMed

    Tian, Biao; Rauschecker, Josef P

    2004-11-01

    Single neurons were recorded from the lateral belt areas, anterolateral (AL), mediolateral (ML), and caudolateral (CL), of nonprimary auditory cortex in 4 adult rhesus monkeys under gas anesthesia, while the neurons were stimulated with frequency-modulated (FM) sweeps. Responses to FM sweeps, measured as the firing rate of the neurons, were invariably greater than those to tone bursts. In our stimuli, frequency changed linearly from low to high frequencies (FM direction "up") or high to low frequencies ("down") at varying speeds (FM rates). Neurons were highly selective to the rate and direction of the FM sweep. Significant differences were found between the 3 lateral belt areas with regard to their FM rate preferences: whereas neurons in ML responded to the whole range of FM rates, AL neurons responded better to slower FM rates in the range of naturally occurring communication sounds. CL neurons generally responded best to fast FM rates at a speed of several hundred Hz/ms, which have the broadest frequency spectrum. These selectivities are consistent with a role of AL in the decoding of communication sounds and of CL in the localization of sounds, which works best with broader bandwidths. Together, the results support the hypothesis of parallel streams for the processing of different aspects of sounds, including auditory objects and auditory space.

  1. Different auditory feedback control for echolocation and communication in horseshoe bats.

    PubMed

    Liu, Ying; Feng, Jiang; Metzner, Walter

    2013-01-01

    Auditory feedback from the animal's own voice is essential during bat echolocation: to optimize signal detection, bats continuously adjust various call parameters in response to changing echo signals. Auditory feedback seems also necessary for controlling many bat communication calls, although it remains unclear how auditory feedback control differs in echolocation and communication. We tackled this question by analyzing echolocation and communication in greater horseshoe bats, whose echolocation pulses are dominated by a constant frequency component that matches the frequency range they hear best. To maintain echoes within this "auditory fovea", horseshoe bats constantly adjust their echolocation call frequency depending on the frequency of the returning echo signal. This Doppler-shift compensation (DSC) behavior represents one of the most precise forms of sensory-motor feedback known. We examined the variability of echolocation pulses emitted at rest (resting frequencies, RFs) and one type of communication signal which resembles an echolocation pulse but is much shorter (short constant frequency communication calls, SCFs) and produced only during social interactions. We found that while RFs varied from day to day, corroborating earlier studies in other constant frequency bats, SCF-frequencies remained unchanged. In addition, RFs overlapped for some bats whereas SCF-frequencies were always distinctly different. This indicates that auditory feedback during echolocation changed with varying RFs but remained constant or may have been absent during emission of SCF calls for communication. This fundamentally different feedback mechanism for echolocation and communication may have enabled these bats to use SCF calls for individual recognition whereas they adjusted RF calls to accommodate the daily shifts of their auditory fovea.

  3. The relationship between tinnitus pitch and parameters of audiometry and distortion product otoacoustic emissions.

    PubMed

    Keppler, H; Degeest, S; Dhooge, I

    2017-11-01

    Chronic tinnitus is associated with reduced auditory input, which results in changes in the central auditory system. This study aimed to examine the relationship between tinnitus pitch and parameters of audiometry and distortion product otoacoustic emissions. For audiometry, the parameters represented the edge frequency of hearing loss, the frequency of maximum hearing loss and the frequency range of hearing loss. For distortion product otoacoustic emissions, the parameters were the frequency of lowest distortion product otoacoustic emission amplitudes and the frequency range of reduced distortion product otoacoustic emissions. Sixty-seven patients (45 males, 22 females) with subjective chronic tinnitus, aged 18 to 73 years, were included. No correlation was found between tinnitus pitch and parameters of audiometry and distortion product otoacoustic emissions. However, tinnitus pitch fell mostly within the frequency range of hearing loss. The current study seems to confirm the relationship between tinnitus pitch and the frequency range of hearing loss, thus supporting the homeostatic plasticity model.

  4. Responses of Middle-Frequency Modulations in Vocal Fundamental Frequency to Different Vocal Intensities and Auditory Feedback.

    PubMed

    Lee, Shao-Hsuan; Fang, Tuan-Jen; Yu, Jen-Fang; Lee, Guo-She

    2017-09-01

    Auditory feedback can elicit reflexive responses during sustained vocalizations. Among the modulations of vocal fundamental frequency (F0), the middle-frequency power of F0 (MFP) may provide a sensitive index for assessing subtle changes across auditory feedback conditions. Voice samples were obtained from 20 healthy adults at two vocal intensity ranges under four auditory feedback conditions: (1) natural auditory feedback (NO); (2) binaural speech-noise masking (SN); (3) bone-conducted feedback of the self-generated voice (BAF); and (4) SN and BAF simultaneously. The modulations of F0 in the low-frequency (0.2-3 Hz), middle-frequency (3-8 Hz), and high-frequency (8-25 Hz) bands were acquired using power spectral analysis of F0. Acoustic and aerodynamic analyses were used to acquire vocal intensity, maximum phonation time (MPT), phonatory airflow, and MFP-based vocal efficiency (MBVE). SN and high vocal intensity significantly decreased MFP and raised MBVE and MPT. BAF showed no effect on MFP but significantly lowered MBVE. Moreover, BAF significantly increased the perception of voice feedback and the sensation of vocal effort. Altered auditory feedback thus significantly changed the middle-frequency modulations of F0, and MFP and MBVE were sensitive to these subtle audio-vocal feedback responses. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
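
    The band-partitioned power spectral analysis of F0 described above can be sketched as follows. This is a toy illustration with an invented vibrato-like F0 contour, not the study's procedure; only the band edges (0.2-3, 3-8, 8-25 Hz) follow the abstract.

```python
import numpy as np
from scipy.signal import periodogram

def band_power(f0, fs, lo, hi):
    """Integrated power of a mean-removed F0 contour between lo and hi Hz."""
    freqs, psd = periodogram(f0 - np.mean(f0), fs=fs)
    mask = (freqs >= lo) & (freqs < hi)
    return np.sum(psd[mask]) * (freqs[1] - freqs[0])

fs = 100                                    # F0 contour sampled at 100 Hz
t = np.arange(10 * fs) / fs                 # 10 s sustained phonation
f0 = 120 + 2 * np.sin(2 * np.pi * 5 * t)    # 5 Hz, +/-2 Hz modulation

lfp = band_power(f0, fs, 0.2, 3)   # low-frequency power (0.2-3 Hz)
mfp = band_power(f0, fs, 3, 8)     # middle-frequency power (MFP, 3-8 Hz)
hfp = band_power(f0, fs, 8, 25)    # high-frequency power (8-25 Hz)
print(mfp > lfp and mfp > hfp)     # the 5 Hz modulation lands in the MFP band
```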

  5. Preattentive extraction of abstract feature conjunctions from auditory stimulation as reflected by the mismatch negativity (MMN).

    PubMed

    Paavilainen, P; Simola, J; Jaramillo, M; Näätänen, R; Winkler, I

    2001-03-01

    Brain mechanisms extracting invariant information from varying auditory inputs were studied using the mismatch-negativity (MMN) brain response. We wished to determine whether the preattentive sound-analysis mechanisms, reflected by MMN, are capable of extracting invariant relationships based on abstract conjunctions between two sound features. The standard stimuli varied over a large range in the frequency and intensity dimensions, following the rule that the higher the frequency, the higher the intensity. The occasional deviant stimuli violated this frequency-intensity relationship and elicited an MMN. The results demonstrate that preattentive processing of auditory stimuli extends to unexpectedly complex relationships between stimulus features.

  6. Intracerebral evidence of rhythm transform in the human auditory cortex.

    PubMed

    Nozaradan, Sylvie; Mouraux, André; Jonas, Jacques; Colnat-Coulbois, Sophie; Rossion, Bruno; Maillard, Louis

    2017-07-01

    Musical entrainment is shared by all human cultures and the perception of a periodic beat is a cornerstone of this entrainment behavior. Here, we investigated whether beat perception might have its roots in the earliest stages of auditory cortical processing. Local field potentials were recorded from 8 patients implanted with depth-electrodes in Heschl's gyrus and the planum temporale (55 recording sites in total), usually considered as human primary and secondary auditory cortices. Using a frequency-tagging approach, we show that both low-frequency (<30 Hz) and high-frequency (>30 Hz) neural activities in these structures faithfully track auditory rhythms through frequency-locking to the rhythm envelope. A selective gain in amplitude of the response frequency-locked to the beat frequency was observed for the low-frequency activities but not for the high-frequency activities, and was sharper in the planum temporale, especially for the more challenging syncopated rhythm. Hence, this gain process is not systematic in all activities produced in these areas and depends on the complexity of the rhythmic input. Moreover, this gain was disrupted when the rhythm was presented at fast speed, revealing low-pass response properties which could account for the propensity to perceive a beat only within the musical tempo range. Together, these observations show that, even though part of these neural transforms of rhythms could already take place in subcortical auditory processes, the earliest auditory cortical processes shape the neural representation of rhythmic inputs in favor of the emergence of a periodic beat.
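
    The frequency-tagging logic, that activity frequency-locked to the beat appears as a spectral peak at the beat frequency, can be sketched on synthetic data. The signal model below is invented for illustration and is not drawn from the intracerebral recordings.

```python
import numpy as np

fs = 500
t = np.arange(20 * fs) / fs               # 20 s of simulated neural signal
beat_f = 2.4                              # beat frequency (Hz), musical tempo range
rng = np.random.default_rng(3)
signal = 0.5 * np.sin(2 * np.pi * beat_f * t) + rng.standard_normal(t.size)

# With a 20 s window the FFT resolution is 0.05 Hz, so the frequency-locked
# response falls on an exact bin.
spectrum = np.abs(np.fft.rfft(signal)) / len(t)
freqs = np.fft.rfftfreq(len(t), d=1 / fs)
beat_bin = int(np.argmin(np.abs(freqs - beat_f)))

# Compare the tagged bin against the mean of neighboring (noise) bins.
neighbors = np.r_[beat_bin - 5 : beat_bin - 1, beat_bin + 2 : beat_bin + 6]
snr = spectrum[beat_bin] / spectrum[neighbors].mean()
print(snr > 2)                            # the tagged response stands out
```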

  7. Maturation of the auditory system in clinically normal puppies as reflected by the brain stem auditory-evoked potential wave V latency-intensity curve and rarefaction-condensation differential potentials.

    PubMed

    Poncelet, L C; Coppens, A G; Meuris, S I; Deltenre, P F

    2000-11-01

    Objective: To evaluate auditory maturation in puppies. Animals: Ten clinically normal Beagle puppies. Procedure: Puppies were examined repeatedly from days 11 to 36 after birth (8 measurements). Click-evoked brain stem auditory-evoked potentials (BAEP) were obtained in response to rarefaction and condensation click stimuli from 90 dB normal hearing level down to the wave V threshold, using steps of 10 dB. Responses were added, providing an equivalent to alternate-polarity clicks, and subtracted, providing the rarefaction-condensation differential potential (RCDP). Steps of 5 dB were used to determine the thresholds of RCDP and wave V. The slope of the low-intensity segment of the wave V latency-intensity curve was calculated. The intensity range at which RCDP could not be recorded (ie, the pre-RCDP range) was calculated by subtracting the threshold of wave V from the threshold of RCDP. Results: The slope of the wave V latency-intensity curve low-intensity segment evolved with age, changing from (mean +/- SD) -90.8 +/- 41.6 to -27.8 +/- 4.1 μs/dB. Similar results were obtained from days 23 through 36. The pre-RCDP range diminished as puppies became older, decreasing from 40.0 +/- 7.5 to 20.5 +/- 6.4 dB. Conclusions: Changes in the slope of the latency-intensity curve with age suggest enlargement of the audible range of frequencies toward high frequencies up to the third week after birth. The decrease in the pre-RCDP range may indicate an increase of the audible range of frequencies toward low frequencies. Age-related reference values will assist clinicians in detecting hearing loss in puppies.

  8. Binaural beats increase interhemispheric alpha-band coherence between auditory cortices.

    PubMed

    Solcà, Marco; Mottaz, Anaïs; Guggisberg, Adrian G

    2016-02-01

    Binaural beats (BBs) are an auditory illusion occurring when two tones of slightly different frequency are presented separately to each ear. BBs have been suggested to alter physiological and cognitive processes through synchronization of the brain hemispheres. To test this, we recorded electroencephalograms (EEG) at rest and while participants listened to BBs or a monaural control condition during which both tones were presented to both ears. For each condition we calculated the interhemispheric coherence, which expresses the synchrony between neural oscillations of the two hemispheres. Compared to monaural beats and resting state, BBs enhanced interhemispheric coherence between the auditory cortices. Beat frequencies in the alpha (10 Hz) and theta (4 Hz) ranges both increased interhemispheric coherence selectively at alpha frequencies. In a second experiment, we evaluated whether this coherence increase has a behavioral aftereffect on binaural listening. No effects were observed in a dichotic digit task performed immediately after BBs presentation. Our results suggest that BBs enhance alpha-band oscillation synchrony between the auditory cortices during auditory stimulation. This effect seems to reflect binaural integration rather than entrainment. Copyright © 2015 Elsevier B.V. All rights reserved.
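
    Magnitude-squared coherence of the kind used here can be computed with scipy.signal.coherence. The two-channel signal model below is invented for the sketch: a shared 10 Hz source plus independent noise stands in for the two auditory cortices.

```python
import numpy as np
from scipy.signal import coherence

fs = 250                                     # EEG sampling rate (Hz)
t = np.arange(60 * fs) / fs                  # 60 s of data
rng = np.random.default_rng(1)

alpha = np.sin(2 * np.pi * 10 * t)           # shared 10 Hz (alpha) source
left = alpha + rng.standard_normal(t.size)   # left-hemisphere channel
right = alpha + rng.standard_normal(t.size)  # right-hemisphere channel

freqs, coh = coherence(left, right, fs=fs, nperseg=2 * fs)  # 0.5 Hz bins
alpha_band = (freqs >= 8) & (freqs <= 12)
control_band = (freqs >= 20) & (freqs <= 40)
print(coh[alpha_band].mean() > coh[control_band].mean())  # peaks at alpha
```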

  9. Frequency encoded auditory display of the critical tracking task

    NASA Technical Reports Server (NTRS)

    Stevenson, J.

    1984-01-01

    The use of auditory displays for selected cockpit instruments was examined. Auditory, visual, and combined auditory-visual compensatory displays of a vertical-axis critical tracking task were studied. The visual display encoded vertical error as the position of a dot on a 17.78 cm, center-marked CRT. The auditory display encoded vertical error as log frequency over a six-octave range; the center point at 1 kHz was marked by a 20-dB amplitude notch one-third octave wide. Asymptotic performance on the critical tracking task was slightly, but significantly, better with the combined display than with the visual-only mode. The maximum controllable bandwidth using the auditory mode was only 60% of that using the visual mode. Redundant cueing increased both the rate of improvement of tracking performance and the asymptotic performance level, and this enhancement increased with the amount of redundant cueing used. The effect appears most prominent when the bandwidth of the forcing function is substantially less than the upper limit of controllability.
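
    The six-octave log-frequency encoding is a one-line mapping. The sketch below (function name and error normalization are assumptions, not from the report) maps a normalized vertical error in [-1, 1] onto the display's frequency range centered at 1 kHz.

```python
def error_to_frequency(error, center_hz=1000.0, octaves=6.0):
    """Map a normalized vertical error in [-1, 1] onto a log-frequency
    scale spanning `octaves` octaves centered at `center_hz`."""
    return center_hz * 2.0 ** (error * octaves / 2.0)

print(error_to_frequency(0.0))    # 1000.0 Hz: on target (amplitude-notch marker)
print(error_to_frequency(1.0))    # 8000.0 Hz: full-scale error, 3 octaves up
print(error_to_frequency(-1.0))   # 125.0 Hz: full-scale error, 3 octaves down
```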

  10. Auditory frequency generalization in the goldfish (Carassius auratus)1

    PubMed Central

    Fay, Richard R.

    1970-01-01

    Auditory frequency generalization in the goldfish was studied at five points within the best hearing range through the use of classical respiratory conditioning. Each experimental group received single-stimulus conditioning sessions at one of five stimulus frequencies (100, 200, 400, 800, and 1600 Hz) and was subsequently tested for generalization at eight neighboring frequencies. All stimuli were presented 30 dB above absolute threshold. Significant generalization decrements were found for all subjects. For the subjects conditioned in the range between 100 and 800 Hz, a nearly complete failure to generalize was found at one octave above and below the training frequency. The subjects conditioned at 1600 Hz produced relatively flatter gradients between 900 and 2000 Hz. The widths of the generalization gradients, expressed in Hz, increased as a power function of frequency with a slope greater than one. PMID:16811481
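
    The final observation, gradient width growing as a power function of frequency with slope greater than one, corresponds to a straight line in log-log coordinates whose slope is the exponent. A sketch with synthetic widths (illustrative numbers, not the study's data):

```python
import numpy as np

# Hypothetical gradient widths (Hz) at the five training frequencies,
# generated from W = 0.5 * f**1.2 so the true exponent is 1.2.
freqs = np.array([100.0, 200.0, 400.0, 800.0, 1600.0])
widths = 0.5 * freqs ** 1.2

# Fitting log W = b * log f + log a recovers the exponent b as the slope.
slope, intercept = np.polyfit(np.log10(freqs), np.log10(widths), 1)
print(round(slope, 3))   # recovers the exponent 1.2 (slope > 1)
```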

  11. Contrast Gain Control in Auditory Cortex

    PubMed Central

    Rabinowitz, Neil C.; Willmore, Ben D.B.; Schnupp, Jan W.H.; King, Andrew J.

    2011-01-01

    Summary The auditory system must represent sounds with a wide range of statistical properties. One important property is the spectrotemporal contrast in the acoustic environment: the variation in sound pressure in each frequency band, relative to the mean pressure. We show that neurons in ferret auditory cortex rescale their gain to partially compensate for the spectrotemporal contrast of recent stimulation. When contrast is low, neurons increase their gain, becoming more sensitive to small changes in the stimulus, although the effectiveness of contrast gain control is reduced at low mean levels. Gain is primarily determined by contrast near each neuron's preferred frequency, but there is also a contribution from contrast in more distant frequency bands. Neural responses are modulated by contrast over timescales of ∼100 ms. By using contrast gain control to expand or compress the representation of its inputs, the auditory system may be seeking an efficient coding of natural sounds. PMID:21689603

  12. Reduced variability of auditory alpha activity in chronic tinnitus.

    PubMed

    Schlee, Winfried; Schecklmann, Martin; Lehner, Astrid; Kreuzer, Peter M; Vielsmeier, Veronika; Poeppl, Timm B; Langguth, Berthold

    2014-01-01

    Subjective tinnitus is characterized by the conscious perception of a phantom sound which is usually more prominent in silence. Resting-state recordings without any auditory stimulation have demonstrated a decrease of cortical alpha activity in temporal areas of subjects with an ongoing tinnitus percept, often interpreted as an indicator of enhanced excitability of the auditory cortex in tinnitus. In this study we further investigated this effect by analysing the moment-to-moment variability of alpha activity in temporal areas. Magnetoencephalographic resting-state recordings of 21 tinnitus subjects and 21 healthy controls were analysed with respect to the mean and the variability of spectral power in the alpha frequency band over temporal areas. A significant decrease of auditory alpha activity was detected for the low alpha band (8-10 Hz) but not for the upper alpha band (10-12 Hz). Furthermore, we found a significant decrease of alpha variability in the tinnitus group; this decrease was significant for the lower alpha range but not for the upper alpha frequencies. Tinnitus subjects with a longer tinnitus history showed less variability of their auditory alpha activity, which might indicate reduced adaptability of the auditory cortex in chronic tinnitus.

  13. Seasonal plasticity of auditory hair cell frequency sensitivity correlates with plasma steroid levels in vocal fish

    PubMed Central

    Rohmann, Kevin N.; Bass, Andrew H.

    2011-01-01

    Vertebrates displaying seasonal shifts in reproductive behavior provide the opportunity to investigate bidirectional plasticity in sensory function. The midshipman teleost fish exhibits steroid-dependent plasticity in frequency encoding by eighth nerve auditory afferents. In this study, evoked potentials were recorded in vivo from the saccule, the main auditory division of the inner ear of most teleosts, to test the hypothesis that males and females exhibit seasonal changes in hair cell physiology in relation to seasonal changes in plasma levels of steroids. Thresholds across the predominant frequency range of natural vocalizations were significantly lower in both sexes in reproductive compared with non-reproductive conditions, with differences greatest at frequencies corresponding to call upper harmonics. A subset of non-reproductive males exhibiting an intermediate saccular phenotype had elevated testosterone levels, supporting the hypothesis that rising steroid levels induce non-reproductive to reproductive transitions in saccular physiology. We propose that elevated levels of steroids act via long-term (days to weeks) signaling pathways to upregulate ion channel expression generating higher resonant frequencies characteristic of non-mammalian auditory hair cells, thereby lowering acoustic thresholds. PMID:21562181

  14. Salicylate-induced cochlear impairments, cortical hyperactivity and re-tuning, and tinnitus.

    PubMed

    Chen, Guang-Di; Stolzberg, Daniel; Lobarinas, Edward; Sun, Wei; Ding, Dalian; Salvi, Richard

    2013-01-01

    High doses of sodium salicylate (SS) have long been known to induce temporary hearing loss and tinnitus, effects attributed to cochlear dysfunction. However, our recent publications reviewed here show that SS can induce profound, permanent, and unexpected changes in the cochlea and central nervous system. Prolonged treatment with SS permanently decreased the cochlear compound action potential (CAP) amplitude in vivo. In vitro, high-dose SS resulted in a permanent loss of spiral ganglion neurons and nerve fibers, but did not damage hair cells. Acute treatment with high-dose SS produced a frequency-dependent decrease in the amplitude of distortion product otoacoustic emissions and CAP. Losses were greatest at low and high frequencies, but least at the mid-frequencies (10-20 kHz), the mid-frequency band that corresponds to the tinnitus pitch measured behaviorally. In the auditory cortex, medial geniculate body and amygdala, high-dose SS enhanced sound-evoked neural responses at high stimulus levels, but it suppressed activity at low intensities and elevated response threshold. When SS was applied directly to the auditory cortex or amygdala, it only enhanced sound-evoked activity, but did not elevate response threshold. Current source density analysis revealed enhanced current flow into the supragranular layer of auditory cortex following systemic SS treatment. Systemic SS treatment also altered tuning in the auditory cortex and amygdala; low-frequency and high-frequency multiunit clusters up-shifted or down-shifted their characteristic frequency into the 10-20 kHz range, thereby altering auditory cortex tonotopy and enhancing neural activity at mid-frequencies corresponding to the tinnitus pitch. These results suggest that SS-induced hyperactivity in auditory cortex originates in the central nervous system, that the amygdala potentiates these effects, and that the SS-induced tonotopic shifts in auditory cortex, the putative neural correlate of tinnitus, arise from the interaction between the frequency-dependent losses in the cochlea and hyperactivity in the central nervous system. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. Cardiac autonomic regulation during exposure to auditory stimulation with classical baroque or heavy metal music of different intensities.

    PubMed

    Amaral, Joice A T; Nogueira, Marcela L; Roque, Adriano L; Guida, Heraldo L; De Abreu, Luiz Carlos; Raimundo, Rodrigo Daminello; Vanderlei, Luiz Carlos M; Ribeiro, Vivian L; Ferreira, Celso; Valenti, Vitor E

    2014-03-01

    The effects of chronic music auditory stimulation on the cardiovascular system have been investigated in the literature. However, data regarding the acute effects of different styles of music on cardiac autonomic regulation are lacking. The literature has indicated that auditory stimulation with white noise above 50 dB induces cardiac responses. We aimed to evaluate the acute effects of classical baroque and heavy metal music of different intensities on cardiac autonomic regulation. The study was performed in 16 healthy men aged 18-25 years. All procedures were performed in the same soundproof room. We analyzed heart rate variability (HRV) in the time domain (standard deviation of normal-to-normal R-R intervals [SDNN], root-mean-square of successive differences [RMSSD] and percentage of adjacent NN intervals differing by more than 50 ms [pNN50]) and the frequency domain (low frequency [LF], high frequency [HF] and LF/HF ratio). HRV was recorded at rest for 10 minutes. Subsequently, the volunteers were exposed to one of the two musical styles (classical baroque or heavy metal music) for five minutes through an earphone, followed by a five-minute period of rest, and then they were exposed to the other style for another five minutes. The subjects were exposed to three equivalent sound levels (60-70 dB, 70-80 dB and 80-90 dB). The sequence of songs was randomized for each individual. Auditory stimulation with heavy metal music did not influence HRV indices in the time and frequency domains in any of the three equivalent sound level ranges, and the same was observed with classical baroque music. Musical auditory stimulation of different intensities did not influence cardiac autonomic regulation in men.
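
    The time-domain HRV indices named in the abstract (SDNN, RMSSD, pNN50) have standard definitions computable directly from an R-R interval series; the sketch below uses made-up intervals, not the study's data.

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """SDNN, RMSSD, and pNN50 from normal-to-normal R-R intervals (ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)
    sdnn = np.std(rr, ddof=1)                    # SD of all NN intervals
    rmssd = np.sqrt(np.mean(diffs ** 2))         # RMS of successive differences
    pnn50 = 100.0 * np.mean(np.abs(diffs) > 50)  # % successive diffs > 50 ms
    return sdnn, rmssd, pnn50

rr = [800, 810, 790, 850, 795, 805, 870, 800]    # hypothetical intervals (ms)
sdnn, rmssd, pnn50 = hrv_time_domain(rr)
print(round(sdnn, 1), round(rmssd, 1), round(pnn50, 1))
```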

  16. Ontogenetic investigation of underwater hearing capabilities in loggerhead sea turtles (Caretta caretta) using a dual testing approach.

    PubMed

    Lavender, Ashley L; Bartol, Soraya M; Bartol, Ian K

    2014-07-15

    Sea turtles reside in different acoustic environments with each life history stage and may have different hearing capacity throughout ontogeny. For this study, two independent yet complementary techniques for hearing assessment, i.e. behavioral and electrophysiological audiometry, were employed to (1) measure hearing in post-hatchling and juvenile loggerhead sea turtles Caretta caretta (19-62 cm straight carapace length) to determine whether these migratory turtles exhibit an ontogenetic shift in underwater auditory detection and (2) evaluate whether hearing frequency range and threshold sensitivity are consistent in behavioral and electrophysiological tests. Behavioral trials first required training turtles to respond to known frequencies, a multi-stage, time-intensive process, and then recording their behavior when they were presented with sound stimuli from an underwater speaker using a two-response forced-choice paradigm. Electrophysiological experiments involved submerging restrained, fully conscious turtles just below the air-water interface and recording auditory evoked potentials (AEPs) when sound stimuli were presented using an underwater speaker. No significant differences in behavior-derived auditory thresholds or AEP-derived auditory thresholds were detected between post-hatchling and juvenile sea turtles. While hearing frequency range (50-1000/1100 Hz) and highest sensitivity (100-400 Hz) were consistent in audiograms pooled by size class for both behavior and AEP experiments, both post-hatchlings and juveniles had significantly higher AEP-derived than behavior-derived auditory thresholds, indicating that behavioral assessment is a more sensitive testing approach. The results from this study suggest that post-hatchling and juvenile loggerhead sea turtles are low-frequency specialists, exhibiting little difference in threshold sensitivity and frequency bandwidth despite residence in acoustically distinct environments throughout ontogeny. © 2014. Published by The Company of Biologists Ltd.

  17. Psychoacoustics

    NASA Astrophysics Data System (ADS)

    Moore, Brian C. J.

    Psychoacoustics is concerned with the relationships between the physical characteristics of sounds and their perceptual attributes. This chapter describes: the absolute sensitivity of the auditory system for detecting weak sounds and how that sensitivity varies with frequency; the frequency selectivity of the auditory system (the ability to resolve or hear out the sinusoidal components in a complex sound) and its characterization in terms of an array of auditory filters; the processes that influence the masking of one sound by another; the range of sound levels that can be processed by the auditory system; the perception and modeling of loudness; level discrimination; the temporal resolution of the auditory system (the ability to detect changes over time); the perception and modeling of pitch for pure and complex tones; the perception of timbre for steady and time-varying sounds; the perception of space and sound localization; and the mechanisms underlying auditory scene analysis that allow the construction of percepts corresponding to individual sound sources when listening to complex mixtures of sounds.

  18. Killer whale (Orcinus orca) hearing: auditory brainstem response and behavioral audiograms.

    PubMed

    Szymanski, M D; Bain, D E; Kiehl, K; Pennington, S; Wong, S; Henry, K R

    1999-08-01

    Killer whale (Orcinus orca) audiograms were measured using behavioral responses and auditory evoked potentials (AEPs) from two trained adult females. The mean auditory brainstem response (ABR) audiogram to tones between 1 and 100 kHz was 12 dB (re 1 μPa) less sensitive than behavioral audiograms from the same individuals (+/- 8 dB). The ABR and behavioral audiogram curves had generally consistent shapes, with the best threshold agreement (5 dB) in the most sensitive range, 18-42 kHz, and the least (22 dB) at higher frequencies, 60-100 kHz. The most sensitive frequency in the mean Orcinus audiogram was 20 kHz (36 dB), lower than for many other odontocetes, but one that matches the peak spectral energy reported for wild killer whale echolocation clicks. A previously reported audiogram of a male Orcinus had greatest sensitivity in this range (15 kHz, approximately 35 dB). Both whales reliably responded to 100-kHz tones (95 dB), and one whale to a 120-kHz tone, a variation from an earlier reported high-frequency limit of 32 kHz for a male Orcinus. Despite smaller-amplitude ABRs than smaller delphinids, the results demonstrate that ABR audiometry can provide a useful suprathreshold estimate of hearing range in toothed whales.

  19. High-frequency gamma activity (80-150 Hz) is increased in human cortex during selective attention

    PubMed Central

    Ray, Supratim; Niebur, Ernst; Hsiao, Steven S.; Sinai, Alon; Crone, Nathan E.

    2008-01-01

    Objective: To study the role of gamma oscillations (>30 Hz) in selective attention using subdural electrocorticography (ECoG) in humans. Methods: We recorded ECoG in human subjects implanted with subdural electrodes for epilepsy surgery. Sequences of auditory tones and tactile vibrations of 800 ms duration were presented asynchronously, and subjects were asked to selectively attend to one of the two stimulus modalities in order to detect an amplitude increase at 400 ms in some of the stimuli. Results: Event-related ECoG gamma activity was greater over auditory cortex when subjects attended auditory stimuli and was greater over somatosensory cortex when subjects attended vibrotactile stimuli. Furthermore, gamma activity was also observed over prefrontal cortex when stimuli appeared in either modality, but only when they were attended. Attentional modulation of gamma power began ∼400 ms after stimulus onset, consistent with the temporal demands on attention. The increase in gamma activity was greatest at frequencies between 80 and 150 Hz, in the so-called high gamma frequency range. Conclusions: There appears to be a strong link between activity in the high-gamma range (80-150 Hz) and selective attention. Significance: Selective attention is correlated with increased activity in a frequency range that is significantly higher than what has been reported previously using EEG recordings. PMID:18037343

  20. ASSESSMENT OF LOW-FREQUENCY HEARING WITH NARROW-BAND CHIRP EVOKED 40-HZ SINUSOIDAL AUDITORY STEADY STATE RESPONSE

    PubMed Central

    Wilson, Uzma S.; Kaf, Wafaa A.; Danesh, Ali A.; Lichtenhan, Jeffery T.

    2016-01-01

    Objective: To determine the clinical utility of narrow-band chirp evoked 40-Hz sinusoidal auditory steady state responses (s-ASSR) in the assessment of low-frequency hearing in noisy participants. Design: Tone bursts and narrow-band chirps were used to evoke, respectively, auditory brainstem response (tb-ABR) and 40-Hz s-ASSR thresholds with a Kalman-weighted filtering technique, and these were compared to behavioral thresholds at 500, 2000, and 4000 Hz. A repeated-measures ANOVA with post-hoc t-tests, and simple regression analyses, were performed for each of the three stimulus frequencies. Study Sample: Thirty young adults aged 18–25 with normal hearing participated in this study. Results: When 4000 equivalent response averages were used, mean s-ASSR thresholds at 500, 2000, and 4000 Hz were 17–22 dB lower (better) than when 2000 averages were used. Mean tb-ABR thresholds were lower by 11–15 dB at 2000 and 4000 Hz when twice as many equivalent response averages were used, while mean tb-ABR thresholds at 500 Hz were indistinguishable regardless of additional response averaging. Conclusion: Narrow-band chirp evoked 40-Hz s-ASSR requires a ~15 dB smaller correction factor than tb-ABR for estimating low-frequency auditory thresholds in noisy participants when adequate response averaging is used. PMID:26795555

  1. Understanding auditory distance estimation by humpback whales: a computational approach.

    PubMed

    Mercado, E; Green, S R; Schneider, J N

    2008-02-01

    Ranging, the ability to judge the distance to a sound source, depends on the presence of predictable patterns of attenuation. We measured long-range sound propagation in coastal waters to assess whether humpback whales might use frequency degradation cues to range singing whales. Two types of neural networks, a multi-layer and a single-layer perceptron, were trained to classify recorded sounds by distance traveled based on their frequency content. The multi-layer network successfully classified received sounds, demonstrating that the distorting effects of underwater propagation on frequency content provide sufficient cues to estimate source distance. Normalizing received sounds with respect to ambient noise levels increased the accuracy of distance estimates by single-layer perceptrons, indicating that familiarity with background noise can potentially improve a listening whale's ability to range. To assess whether frequency patterns predictive of source distance were likely to be perceived by whales, recordings were pre-processed using a computational model of the humpback whale's peripheral auditory system. Although signals processed with this model contained less information than the original recordings, neural networks trained with these physiologically based representations estimated source distance more accurately, suggesting that listening whales should be able to range singers using distance-dependent changes in frequency content.
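    The single-layer perceptron classification described above can be sketched in a few lines; the exponential attenuation law, the eight band-energy features, and the train/test setup below are toy assumptions for illustration, not the paper's propagation measurements or model.

```python
import numpy as np

rng = np.random.default_rng(0)
freqs = np.linspace(0.1, 2.0, 8)              # 8 band centers (kHz), assumed

def received_band_energies(distance_km, n=200):
    """Hypothetical received spectra: higher-frequency bands attenuate
    faster with distance (a toy frequency-dependent loss, not measured
    underwater propagation data)."""
    atten = np.exp(-0.5 * freqs * distance_km)
    return atten + 0.05 * rng.standard_normal((n, freqs.size))

near, far = received_band_energies(0.5), received_band_energies(4.0)
X = np.vstack([near, far])
y = np.array([0] * len(near) + [1] * len(far))  # 0 = near source, 1 = far

# Single-layer perceptron trained with the classic Rosenblatt update rule
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(20):
    for xi, yi in zip(X, y):
        err = yi - int(w @ xi + b > 0)
        w, b = w + err * xi, b + err

accuracy = np.mean((X @ w + b > 0).astype(int) == y)
```

    Because distance-dependent attenuation reshapes the frequency content systematically, even this linear classifier separates near from far sounds, which is the core of the paper's argument that frequency degradation cues suffice for ranging.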

  2. Electrically-evoked frequency-following response (EFFR) in the auditory brainstem of guinea pigs.

    PubMed

    He, Wenxin; Ding, Xiuyong; Zhang, Ruxiang; Chen, Jing; Zhang, Daoxing; Wu, Xihong

    2014-01-01

    It is still a difficult clinical issue to decide whether a patient is a suitable candidate for a cochlear implant and to plan postoperative rehabilitation, especially for some special cases, such as auditory neuropathy. A partial solution to these problems is to preoperatively evaluate the functional integrity of the auditory neural pathways. To evaluate the strength of phase-locking of auditory neurons, which is not reflected in previous methods using the electrically evoked auditory brainstem response (EABR), a new method for recording phase-locking-related auditory responses to electrical stimulation, called the electrically evoked frequency-following response (EFFR), was developed and evaluated in guinea pigs. The main objective was to assess the feasibility of the method by testing whether the recorded signals reflected auditory neural responses or artifacts. The results showed the following: 1) the recorded signals were evoked by neural responses rather than by artifact; 2) responses evoked by periodic signals were significantly higher than those evoked by white noise; 3) the latency of the responses fell in the expected range; 4) the responses decreased significantly after death of the guinea pigs; and 5) the responses decreased significantly when the animal was replaced by an electrical resistance. All of these results suggest the method is valid. Recordings obtained using complex tones with a missing fundamental component and using pure tones at various frequencies were consistent with those obtained using acoustic stimulation in previous studies.

  3. Continuous exposure to low-frequency noise and carbon disulfide: Combined effects on hearing.

    PubMed

    Venet, Thomas; Carreres-Pons, Maria; Chalansonnet, Monique; Thomas, Aurélie; Merlen, Lise; Nunge, Hervé; Bonfanti, Elodie; Cosnier, Frédéric; Llorens, Jordi; Campo, Pierre

    2017-09-01

    Carbon disulfide (CS2) is used in industry; it has been shown to have neurotoxic effects, causing central and distal axonopathies. However, it is not considered cochleotoxic, as it does not affect hair cells in the organ of Corti, and the only auditory effects reported in the literature were confined to the low-frequency region. No reports on the effects of combined exposure to low-frequency noise and CS2 have been published to date. This article focuses on the effects on rat hearing of combined exposure to noise and increasing concentrations of CS2 (0, 63, 250, and 500 ppm; 6 h per day, 5 days per week, for 4 weeks). The noise used was a low-frequency noise ranging from 0.5 to 2 kHz at an intensity of 106 dB SPL. Auditory function was tested using distortion product oto-acoustic emissions, which mainly reflect cochlear performance. Exposure to noise alone caused an auditory deficit in a frequency area ranging from 3.6 to 6 kHz. The damaged area was approximately one octave (6 kHz) above the highest frequency of the exposure noise (2.8 kHz); it was a little wider than expected based on the noise spectrum. Consequently, since maximum hearing sensitivity is located around 8 kHz in rats, low-frequency noise exposure can affect the cochlear regions detecting mid-range frequencies. Co-exposure to CS2 (250 ppm and over) and noise increased the extent of the damaged frequency window, since a significant auditory deficit was measured at 9.6 kHz in these conditions. Moreover, the significance at 9.6 kHz increased with the solvent concentration. Histological data showed that neither hair cells nor ganglion cells were damaged by CS2. This discrepancy between functional and histological data is discussed. Like most aromatic solvents, carbon disulfide should be considered as a key parameter in hearing conservation regulations. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Seasonal plasticity of auditory saccular sensitivity in "sneaker" type II male plainfin midshipman fish, Porichthys notatus.

    PubMed

    Bhandiwad, Ashwin A; Whitchurch, Elizabeth A; Colleye, Orphal; Zeddies, David G; Sisneros, Joseph A

    2017-03-01

    Adult female and nesting (type I) male midshipman fish (Porichthys notatus) exhibit an adaptive form of auditory plasticity for the enhanced detection of social acoustic signals. Whether this adaptive plasticity also occurs in "sneaker" type II males is unknown. Here, we characterize auditory-evoked potentials recorded from hair cells in the saccule of reproductive and non-reproductive "sneaker" type II male midshipman to determine whether this sexual phenotype exhibits seasonal, reproductive state-dependent changes in auditory sensitivity and frequency response to behaviorally relevant auditory stimuli. Saccular potentials were recorded from the middle and caudal region of the saccule while sound was presented via an underwater speaker. Our results indicate saccular hair cells from reproductive type II males had thresholds based on measures of sound pressure and acceleration (re 1 µPa and 1 m s⁻², respectively) that were ~8-21 dB lower than non-reproductive type II males across a broad range of frequencies, which include the dominant higher frequencies in type I male vocalizations. This increase in type II auditory sensitivity may potentially facilitate eavesdropping by sneaker males and their assessment of vocal type I males for the selection of cuckoldry sites during the breeding season.

  5. Firing-rate resonances in the peripheral auditory system of the cricket, Gryllus bimaculatus.

    PubMed

    Rau, Florian; Clemens, Jan; Naumov, Victor; Hennig, R Matthias; Schreiber, Susanne

    2015-11-01

    In many communication systems, information is encoded in the temporal pattern of signals. For rhythmic signals that carry information in specific frequency bands, a neuronal system may profit from tuning its inherent filtering properties towards a peak sensitivity in the respective frequency range. The cricket Gryllus bimaculatus evaluates acoustic communication signals of both conspecifics and predators. The song signals of conspecifics exhibit a characteristic pulse pattern that contains only a narrow range of modulation frequencies. We examined individual neurons (AN1, AN2, ON1) in the peripheral auditory system of the cricket for tuning towards specific modulation frequencies by assessing their firing-rate resonance. Acoustic stimuli with a swept-frequency envelope allowed an efficient characterization of the cells' modulation transfer functions. Some of the examined cells exhibited tuned band-pass properties. Using simple computational models, we demonstrate how different, cell-intrinsic or network-based mechanisms such as subthreshold resonances, spike-triggered adaptation, as well as an interplay of excitation and inhibition can account for the experimentally observed firing-rate resonances. Therefore, basic neuronal mechanisms that share negative feedback as a common theme may contribute to selectivity in the peripheral auditory pathway of crickets, which is tuned towards mate recognition and predator avoidance.
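    One of the mechanisms named above, adaptation (a negative-feedback high-pass) combined with the membrane's low-pass filtering, yields a band-pass modulation transfer function. The rate-model sketch below illustrates this; the time constants, feedback gain, and test frequencies are illustrative choices, not the cricket neurons' measured parameters.

```python
import math

def response_amplitude(f_mod, tau_r=0.005, tau_a=0.05, g=3.0,
                       dt=1e-4, T=2.0):
    """Steady-state response amplitude of a linear rate neuron with an
    adaptation current, driven by a sinusoidally modulated input.
    Adaptation suppresses slow modulations; the membrane time constant
    suppresses fast ones; together they produce a band-pass resonance."""
    r, a = 0.0, 0.0
    lo, hi = float("inf"), float("-inf")
    steps = int(T / dt)
    for i in range(steps):
        s = 1.0 + math.sin(2 * math.pi * f_mod * i * dt)  # modulated drive
        r += dt / tau_r * (-r + s - a)   # rate: low-pass of drive minus a
        a += dt / tau_a * (-a + g * r)   # adaptation: slow feedback on rate
        if i > steps // 2:               # discard the initial transient
            lo, hi = min(lo, r), max(hi, r)
    return (hi - lo) / 2.0

freqs = [1, 5, 20, 100, 200]                         # modulation rates (Hz)
mtf = [response_amplitude(f) for f in freqs]         # band-pass shaped
```

    With these parameters the amplitude peaks at intermediate modulation rates and falls off on both sides, mirroring the tuned band-pass behavior the paper reports for some cells.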

  6. Responses of auditory-cortex neurons to structural features of natural sounds.

    PubMed

    Nelken, I; Rotman, Y; Bar Yosef, O

    1999-01-14

    Sound-processing strategies that use the highly non-random structure of natural sounds may confer evolutionary advantage to many species. Auditory processing of natural sounds has been studied almost exclusively in the context of species-specific vocalizations, although these form only a small part of the acoustic biotope. To study the relationships between properties of natural soundscapes and neuronal processing mechanisms in the auditory system, we analysed sound from a range of different environments. Here we show that for many non-animal sounds and background mixtures of animal sounds, energy in different frequency bands is coherently modulated. Co-modulation of different frequency bands in background noise facilitates the detection of tones in noise by humans, a phenomenon known as co-modulation masking release (CMR). We show that co-modulation also improves the ability of auditory-cortex neurons to detect tones in noise, and we propose that this property of auditory neurons may underlie behavioural CMR. This correspondence may represent an adaptation of the auditory system for the use of an attribute of natural sounds to facilitate real-world processing tasks.

  7. Development of auditory sensitivity in budgerigars (Melopsittacus undulatus)

    NASA Astrophysics Data System (ADS)

    Brittan-Powell, Elizabeth F.; Dooling, Robert J.

    2004-06-01

    Auditory feedback influences the development of vocalizations in songbirds and parrots; however, little is known about the development of hearing in these birds. The auditory brainstem response was used to track the development of auditory sensitivity in budgerigars from hatch to 6 weeks of age. Responses were first obtained from 1-week-old birds at high stimulation levels for frequencies at or below 2 kHz, showing that budgerigars do not hear well at hatch. Over the next week, thresholds improved markedly, and responses were obtained for almost all test frequencies throughout the range of hearing by 14 days. By 3 weeks posthatch, birds' best sensitivity shifted from 2 to 2.86 kHz, and the shape of the auditory brainstem response (ABR) audiogram became similar to that of adult budgerigars. About a week before leaving the nest, ABR audiograms of young budgerigars are very similar to those of adult birds. These data complement what is known about vocal development in budgerigars and show that hearing is fully developed by the time that vocal learning begins.

  8. Adaptive hearing in the vocal plainfin midshipman fish: getting in tune for the breeding season and implications for acoustic communication.

    PubMed

    Sisneros, Joseph A

    2009-03-01

    The plainfin midshipman fish (Porichthys notatus Girard, 1854) is a vocal species of batrachoidid fish that generates acoustic signals for intraspecific communication during social and reproductive activity and has become a good model for investigating the neural and endocrine mechanisms of vocal-acoustic communication. Reproductively active female plainfin midshipman fish use their auditory sense to detect and locate "singing" males, which produce a multiharmonic advertisement call to attract females for spawning. The seasonal onset of male advertisement calling in the midshipman fish coincides with an increase in the range of frequency sensitivity of the female's inner ear saccule, the main organ of hearing, thus leading to enhanced encoding of the dominant frequency components of male advertisement calls. Non-reproductive females treated with either testosterone or 17β-estradiol exhibit a dramatic increase in the inner ear's frequency sensitivity that mimics the reproductive female's auditory phenotype and leads to an increased detection of the male's advertisement call. This novel form of auditory plasticity provides an adaptable mechanism that enhances coupling between sender and receiver in vocal communication. This review focuses on recent evidence for seasonal reproductive-state and steroid-dependent plasticity of auditory frequency sensitivity in the peripheral auditory system of the midshipman fish. The potential steroid-dependent mechanism(s) that lead to this novel form of auditory and behavioral plasticity are also discussed. © 2009 ISZS, Blackwell Publishing and IOZ/CAS.

  9. Audio Watermark Embedding Technique Applying Auditory Stream Segregation: "G-encoder Mark" Able to Be Extracted by Mobile Phone

    NASA Astrophysics Data System (ADS)

    Modegi, Toshio

    We are developing audio watermarking techniques that enable embedded data to be extracted by mobile phones. For this, data must be embedded in frequency ranges where auditory sensitivity is high, so embedding tends to introduce audible noise. We previously proposed exploiting a two-channel stereo playback feature, in which the noise generated by the data-embedded left-channel signal is reduced by the right-channel signal. However, that approach has the practical problem of restricting the location of the extracting terminal. In this paper, we propose synthesizing the noise-reducing right-channel signal together with the left-channel signal, cancelling the noise completely by inducing an auditory stream segregation phenomenon in the listener. This new proposal makes a separate noise-reducing right-channel signal unnecessary and supports monaural playback. Moreover, we propose a wide-band embedding method that induces dual auditory stream segregation phenomena, enabling data embedding across the whole public telephone frequency range and stable extraction with 3G mobile phones. With these proposals, extraction precision is higher than with the previously proposed method, while the quality degradation of the embedded signals is smaller. We present an overview of the newly proposed method and experimental results compared with those of the previously proposed method.

  10. [Auditory processing and high frequency audiometry in students of São Paulo].

    PubMed

    Ramos, Cristina Silveira; Pereira, Liliane Desgualdo

    2005-01-01

    Auditory processing and auditory sensitivity to high-frequency sounds. To characterize the processes of localization, temporal ordering, and hearing patterns, and the detection of high-frequency sounds, looking for possible relations between these factors. 32 normal-hearing fourth-grade students, born in the city of São Paulo, were submitted to: a simplified evaluation of auditory processing; a duration pattern test; and high-frequency audiometry. Three (9.4%) individuals presented auditory processing disorder (APD), and in one of them there was the coexistence of lower hearing thresholds on high-frequency audiometry. APD associated with a loss of auditory sensitivity at high frequencies should be further investigated.

  11. Synchronisation signatures in the listening brain: a perspective from non-invasive neuroelectrophysiology.

    PubMed

    Weisz, Nathan; Obleser, Jonas

    2014-01-01

    Human magneto- and electroencephalography (M/EEG) are capable of tracking brain activity at millisecond temporal resolution in an entirely non-invasive manner, a feature that offers unique opportunities to uncover the spatiotemporal dynamics of the hearing brain. In general, precise synchronisation of neural activity within as well as across distributed regions is likely to subserve any cognitive process, with auditory cognition being no exception. Brain oscillations, in a range of frequencies, are a putative hallmark of this synchronisation process. Embedded in a larger effort to relate human cognition to brain oscillations, a field of research is emerging on how synchronisation within, as well as between, brain regions may shape auditory cognition. Combined with much improved source localisation and connectivity techniques, it has become possible to study directly the neural activity of auditory cortex with unprecedented spatio-temporal fidelity and to uncover frequency-specific long-range connectivities across the human cerebral cortex. In the present review, we will summarise recent contributions mainly of our laboratories to this emerging domain. We present (1) a more general introduction on how to study local as well as interareal synchronisation in human M/EEG; (2) how these networks may subserve and influence illusory auditory perception (clinical and non-clinical) and (3) auditory selective attention; and (4) how oscillatory networks further reflect and impact on speech comprehension. This article is part of a Special Issue entitled Human Auditory Neuroimaging. Copyright © 2013 Elsevier B.V. All rights reserved.

  12. Effects of Electrical Stimulation in the Inferior Colliculus on Frequency Discrimination by Rhesus Monkeys and Implications for the Auditory Midbrain Implant

    PubMed Central

    Ross, Deborah A.; Puñal, Vanessa M.; Agashe, Shruti; Dweck, Isaac; Mueller, Jerel; Grill, Warren M.; Wilson, Blake S.

    2016-01-01

    Understanding the relationship between the auditory selectivity of neurons and their contribution to perception is critical to the design of effective auditory brain prosthetics. These prosthetics seek to mimic natural activity patterns to achieve desired perceptual outcomes. We measured the contribution of inferior colliculus (IC) sites to perception using combined recording and electrical stimulation. Monkeys performed a frequency-based discrimination task, reporting whether a probe sound was higher or lower in frequency than a reference sound. Stimulation pulses were paired with the probe sound on 50% of trials (0.5–80 μA, 100–300 Hz, n = 172 IC locations in 3 rhesus monkeys). Electrical stimulation tended to bias the animals' judgments in a fashion that was coarsely but significantly correlated with the best frequency of the stimulation site compared with the reference frequency used in the task. Although there was considerable variability in the effects of stimulation (including impairments in performance and shifts in performance away from the direction predicted based on the site's response properties), the results indicate that stimulation of the IC can evoke percepts correlated with the frequency-tuning properties of the IC. Consistent with the implications of recent human studies, the main avenue for improvement for the auditory midbrain implant suggested by our findings is to increase the number and spatial extent of electrodes, to increase the size of the region that can be electrically activated, and to provide a greater range of evoked percepts. SIGNIFICANCE STATEMENT Patients with hearing loss stemming from causes that interrupt the auditory pathway after the cochlea need a brain prosthetic to restore hearing. Recently, prosthetic stimulation in the human inferior colliculus (IC) was evaluated in a clinical trial. 
Thus far, speech understanding was limited for the subjects and this limitation is thought to be partly due to challenges in harnessing the sound frequency representation in the IC. Here, we tested the effects of IC stimulation in monkeys trained to report the sound frequencies they heard. Our results indicate that the IC can be used to introduce a range of frequency percepts and suggest that placement of a greater number of electrode contacts may improve the effectiveness of such implants. PMID:27147659

  13. One-year audiologic monitoring of individuals exposed to the 1995 Oklahoma City bombing.

    PubMed

    Van Campen, L E; Dennis, J M; Hanlin, R C; King, S B; Velderman, A M

    1999-05-01

    This longitudinal study evaluated subjective, behavioral, and objective auditory function in 83 explosion survivors. Subjects were evaluated quarterly for 1 year with conventional pure-tone and extended high-frequency audiometry, otoscopic inspections, immittance and speech audiometry, and questionnaires. There was no obvious relationship between subject location and symptoms or test results. Tinnitus, distorted hearing, loudness sensitivity, and otalgia were common symptoms. On average, 76 percent of subjects had predominantly sensorineural hearing loss at one or more frequencies. Twenty-four percent of subjects required amplification. Extended high frequencies showed evidence of acoustic trauma even when conventional frequencies fell within the normal range. Males had significantly poorer responses than females across frequencies. Auditory status of the group was significantly compromised and unchanged at 1 year postblast.

  14. Effects of sensorineural hearing loss on temporal coding of narrowband and broadband signals in the auditory periphery

    PubMed Central

    Henry, Kenneth S.; Heinz, Michael G.

    2013-01-01

    People with sensorineural hearing loss have substantial difficulty understanding speech under degraded listening conditions. Behavioral studies suggest that this difficulty may be caused by changes in auditory processing of the rapidly-varying temporal fine structure (TFS) of acoustic signals. In this paper, we review the presently known effects of sensorineural hearing loss on processing of TFS and slower envelope modulations in the peripheral auditory system of mammals. Cochlear damage has relatively subtle effects on phase locking by auditory-nerve fibers to the temporal structure of narrowband signals under quiet conditions. In background noise, however, sensorineural loss does substantially reduce phase locking to the TFS of pure-tone stimuli. For auditory processing of broadband stimuli, sensorineural hearing loss has been shown to severely alter the neural representation of temporal information along the tonotopic axis of the cochlea. Notably, auditory-nerve fibers innervating the high-frequency part of the cochlea grow increasingly responsive to low-frequency TFS information and less responsive to temporal information near their characteristic frequency (CF). Cochlear damage also increases the correlation of the response to TFS across fibers of varying CF, decreases the traveling-wave delay between TFS responses of fibers with different CFs, and can increase the range of temporal modulation frequencies encoded in the periphery for broadband sounds. Weaker neural coding of temporal structure in background noise and degraded coding of broadband signals along the tonotopic axis of the cochlea are expected to contribute considerably to speech perception problems in people with sensorineural hearing loss. PMID:23376018

  15. Reconstructing the spectrotemporal modulations of real-life sounds from fMRI response patterns

    PubMed Central

    Santoro, Roberta; Moerel, Michelle; De Martino, Federico; Valente, Giancarlo; Ugurbil, Kamil; Yacoub, Essa; Formisano, Elia

    2017-01-01

    Ethological views of brain functioning suggest that sound representations and computations in the auditory neural system are optimized finely to process and discriminate behaviorally relevant acoustic features and sounds (e.g., spectrotemporal modulations in the songs of zebra finches). Here, we show that modeling of neural sound representations in terms of frequency-specific spectrotemporal modulations enables accurate and specific reconstruction of real-life sounds from high-resolution functional magnetic resonance imaging (fMRI) response patterns in the human auditory cortex. Region-based analyses indicated that response patterns in separate portions of the auditory cortex are informative of distinctive sets of spectrotemporal modulations. Most relevantly, results revealed that in early auditory regions, and progressively more in surrounding regions, temporal modulations in a range relevant for speech analysis (∼2–4 Hz) were reconstructed more faithfully than other temporal modulations. In early auditory regions, this effect was frequency-dependent and only present for lower frequencies (<∼2 kHz), whereas for higher frequencies, reconstruction accuracy was higher for faster temporal modulations. Further analyses suggested that auditory cortical processing optimized for the fine-grained discrimination of speech and vocal sounds underlies this enhanced reconstruction accuracy. In sum, the present study introduces an approach to embed models of neural sound representations in the analysis of fMRI response patterns. Furthermore, it reveals that, in the human brain, even general purpose and fundamental neural processing mechanisms are shaped by the physical features of real-world stimuli that are most relevant for behavior (i.e., speech, voice). PMID:28420788

  16. High lead exposure and auditory sensory-neural function in Andean children.

    PubMed Central

    Counter, S A; Vahter, M; Laurell, G; Buchanan, L H; Ortega, F; Skerfving, S

    1997-01-01

    We investigated blood lead (B-Pb) and mercury (B-Hg) levels and auditory sensory-neural function in 62 Andean school children living in a Pb-contaminated area of Ecuador and 14 children in a neighboring gold mining area with no known Pb exposure. The median B-Pb level for the 62 children in the Pb-exposed group was 52.6 micrograms/dl (range 9.9-110.0 micrograms/dl) compared with 6.4 micrograms/dl (range 3.9-12.0 micrograms/dl) for the children in the non-Pb-exposed group; the difference was statistically significant (p < 0.001). Auditory thresholds for the Pb-exposed group were normal at the pure tone frequencies of 0.25-8 kHz over the entire range of B-Pb levels. Auditory brainstem response tests in seven children with high B-Pb levels showed normal absolute peak and interpeak latencies. The median B-Hg levels were 0.16 micrograms/dl (range 0.04-0.58 micrograms/dl) for children in the Pb-exposed group and 0.22 micrograms/dl (range 0.1-0.44 micrograms/dl) for children in the non-Pb-exposed gold mining area, and showed no significant relationship to auditory function. PMID:9222138

  17. Thresholding of auditory cortical representation by background noise

    PubMed Central

    Liang, Feixue; Bai, Lin; Tao, Huizhong W.; Zhang, Li I.; Xiao, Zhongju

    2014-01-01

    It is generally thought that background noise can mask auditory information. However, how the noise specifically transforms neuronal auditory processing in a level-dependent manner remains to be carefully determined. Here, with in vivo loose-patch cell-attached recordings in layer 4 of the rat primary auditory cortex (A1), we systematically examined how continuous wideband noise of different levels affected receptive field properties of individual neurons. We found that the background noise, when above a certain critical/effective level, resulted in an elevation of intensity threshold for tone-evoked responses. This increase of threshold was linearly dependent on the noise intensity above the critical level. As such, the tonal receptive field (TRF) of individual neurons was translated upward as an entirety toward high intensities along the intensity domain. This resulted in preserved preferred characteristic frequency (CF) and the overall shape of TRF, but reduced frequency responding range and an enhanced frequency selectivity for the same stimulus intensity. Such translational effects on intensity threshold were observed in both excitatory and fast-spiking inhibitory neurons, as well as in both monotonic and nonmonotonic (intensity-tuned) A1 neurons. Our results suggest that in a noise background, fundamental auditory representations are modulated through a background level-dependent linear shifting along intensity domain, which is equivalent to reducing stimulus intensity. PMID:25426029
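    The level-dependent translation described above amounts to a piecewise-linear rule: tone thresholds are unchanged below the critical noise level and rise linearly with noise level above it, shifting the whole tonal receptive field upward without changing its shape. A minimal sketch follows; the critical level, unity slope, and example TRF values are hypothetical, not the measured parameters.

```python
def tone_threshold_db(base_db, noise_db, critical_db=30.0, slope=1.0):
    """Tone-evoked response threshold under continuous background noise:
    unchanged below the critical noise level, then rising linearly with
    noise level above it (illustrative parameter values)."""
    return base_db + slope * max(0.0, noise_db - critical_db)

# The whole tonal receptive field translates upward by the same amount,
# so its shape and characteristic frequency (CF) are preserved:
quiet_trf = {4.0: 20.0, 8.0: 10.0, 16.0: 25.0}   # kHz -> threshold (dB SPL)
noisy_trf = {f: tone_threshold_db(t, noise_db=50.0)
             for f, t in quiet_trf.items()}
```

    Because every frequency shifts by the same amount, the most sensitive frequency (the CF) is the same in `quiet_trf` and `noisy_trf`, which is the "translation, not distortion" point of the abstract.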

  18. Psychoacoustical Measures in Individuals with Congenital Visual Impairment.

    PubMed

    Kumar, Kaushlendra; Thomas, Teenu; Bhat, Jayashree S; Ranjan, Rajesh

    2017-12-01

    In individuals with congenital visual impairment, one modality (vision) is impaired, and this impairment is compensated by the other sensory modalities. There is evidence that visually impaired individuals perform better than normally sighted individuals in auditory tasks such as localization, auditory memory, verbal memory, and auditory attention, among other behavioural tasks. The current study aimed to compare temporal resolution, frequency resolution, and speech perception in noise between individuals with congenital visual impairment and normally sighted individuals. These abilities were measured using MDT, GDT, DDT, SRDT, and SNR50, respectively. Twelve participants with congenital visual impairment, aged 18 to 40 years, and an equal number of normally sighted participants took part. All participants had normal hearing sensitivity and normal middle ear function. Individuals with visual impairment showed superior thresholds on MDT, SRDT, and SNR50 compared with normally sighted individuals. This may reflect task complexity: MDT, SRDT, and SNR50 are more complex tasks than GDT and DDT. Thus, the visually impaired showed superior auditory processing and speech perception on complex auditory perceptual tasks.

  19. Effects of underwater noise on auditory sensitivity of a cyprinid fish.

    PubMed

    Scholik, A R; Yan, H Y

    2001-02-01

    The ability of a fish to interpret acoustic information in its environment is crucial for its survival. Thus, it is important to understand how underwater noise affects fish hearing. In this study, the fathead minnow (Pimephales promelas) was used to examine: (1) the immediate effects of white noise exposure (0.3-4.0 kHz, 142 dB re 1 microPa) on auditory thresholds and (2) recovery after exposure. Audiograms were measured using the auditory brainstem response protocol and compared to baseline audiograms of fathead minnows not exposed to noise. Immediately after exposure to 24 h of white noise, five out of the eight frequencies tested showed a significantly higher threshold compared to the baseline fish. Recovery was found to depend on both duration of noise exposure and auditory frequency. These results support the hypothesis that the auditory threshold of the fathead minnow can be altered by white noise, especially in its most sensitive hearing range (0.8-2.0 kHz), and provide evidence that these effects can be long term (>14 days).

  20. Salicylate-induced changes in auditory thresholds of adolescent and adult rats.

    PubMed

    Brennan, J F; Brown, C A; Jastreboff, P J

    1996-01-01

    Shifts in auditory intensity thresholds after salicylate administration were examined in postweanling and adult pigmented rats at frequencies ranging from 1 to 35 kHz. A total of 132 subjects from both age levels were tested under two-way or one-way active avoidance paradigms. Estimated thresholds were inferred from behavioral responses to descending and ascending series of intensities at each test frequency. Reliable threshold estimates were found under both avoidance conditioning methods. Compared to controls, subjects at both age levels showed threshold shifts at select higher frequencies after salicylate injection, and the extent of the shifts was related to salicylate dose level.

  1. Selective Neuronal Activation by Cochlear Implant Stimulation in Auditory Cortex of Awake Primate

    PubMed Central

    Johnson, Luke A.; Della Santina, Charles C.

    2016-01-01

    Despite the success of cochlear implants (CIs) in human populations, most users perform poorly in noisy environments and in music and tonal-language perception. How CI devices engage the brain at the single neuron level has remained largely unknown, in particular in the primate brain. By comparing neuronal responses with acoustic and CI stimulation in marmoset monkeys unilaterally implanted with a CI electrode array, we discovered that CI stimulation was surprisingly ineffective at activating many neurons in auditory cortex, particularly in the hemisphere ipsilateral to the CI. Further analyses revealed that the CI-nonresponsive neurons were narrowly tuned to frequency and sound level when probed with acoustic stimuli; such neurons likely play a role in perceptual behaviors requiring fine frequency and level discrimination, tasks that CI users find especially challenging. These findings suggest potential deficits in central auditory processing of CI stimulation and provide important insights into factors responsible for poor CI user performance in a wide range of perceptual tasks. SIGNIFICANCE STATEMENT The cochlear implant (CI) is the most successful neural prosthetic device to date and has restored hearing in hundreds of thousands of deaf individuals worldwide. However, despite its huge successes, CI users still face many perceptual limitations, and the brain mechanisms involved in hearing through CI devices remain poorly understood. By directly comparing single-neuron responses to acoustic and CI stimulation in auditory cortex of awake marmoset monkeys, we discovered that neurons unresponsive to CI stimulation were sharply tuned to frequency and sound level. Our results point out a major deficit in central auditory processing of CI stimulation and provide important insights into mechanisms underlying the poor CI user performance in a wide range of perceptual tasks. PMID:27927962

  2. Similar frequency of the McGurk effect in large samples of native Mandarin Chinese and American English speakers.

    PubMed

    Magnotti, John F; Basu Mallick, Debshila; Feng, Guo; Zhou, Bin; Zhou, Wen; Beauchamp, Michael S

    2015-09-01

    Humans combine visual information from mouth movements with auditory information from the voice to recognize speech. A common method for assessing multisensory speech perception is the McGurk effect: When presented with particular pairings of incongruent auditory and visual speech syllables (e.g., the auditory speech sounds for "ba" dubbed onto the visual mouth movements for "ga"), individuals perceive a third syllable, distinct from the auditory and visual components. Chinese and American cultures differ in the prevalence of direct facial gaze and in the auditory structure of their languages, raising the possibility of cultural- and language-related group differences in the McGurk effect. There is no consensus in the literature about the existence of these group differences, with some studies reporting less McGurk effect in native Mandarin Chinese speakers than in English speakers and others reporting no difference. However, these studies sampled small numbers of participants tested with a small number of stimuli. Therefore, we collected data on the McGurk effect from large samples of Mandarin-speaking individuals from China and English-speaking individuals from the USA (total n = 307) viewing nine different stimuli. Averaged across participants and stimuli, we found similar frequencies of the McGurk effect between Chinese and American participants (48 vs. 44 %). In both groups, we observed a large range of frequencies both across participants (range from 0 to 100 %) and stimuli (15 to 83 %) with the main effect of culture and language accounting for only 0.3 % of the variance in the data. High individual variability in perception of the McGurk effect necessitates the use of large sample sizes to accurately estimate group differences.

  3. Audiological Management of Patients Receiving Aminoglycoside Antibiotics

    ERIC Educational Resources Information Center

    Konrad-Martin, Dawn; Wilmington, Debra J.; Gordon, Jane S.; Reavis, Kelly M.; Fausti, Stephen A.

    2005-01-01

    Aminoglycoside antibiotics, commonly prescribed for adults and children to treat a wide range of bacterial infections, are potentially ototoxic, often causing irreversible damage to the auditory and vestibular systems. Ototoxic hearing loss usually begins at the higher frequencies and can progress to lower frequencies necessary for understanding…

  4. Demodulation processes in auditory perception

    NASA Astrophysics Data System (ADS)

    Feth, Lawrence L.

    1994-08-01

    The long-range goal of this project is to understand human auditory processing of information conveyed by complex, time-varying signals such as speech, music, or important environmental sounds. Our work is guided by the assumption that human auditory communication is a 'modulation-demodulation' process. That is, we assume that sound sources produce a complex stream of sound pressure waves with information encoded as variations (modulations) of the signal amplitude and frequency. The listener's task is then one of demodulation. Much past psychoacoustics work has been based on what we characterize as 'spectrum picture processing': complex sounds are Fourier analyzed to produce an amplitude-by-frequency 'picture', and the perception process is modeled as if the listener were analyzing the spectral picture. This approach leads to studies such as 'profile analysis' and the power-spectrum model of masking. Our approach leads us to investigate time-varying, complex sounds. We refer to them as dynamic signals, and we have developed auditory signal processing models to help guide our experimental work.
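
The 'demodulation' framing above can be made concrete with a small sketch: an amplitude-modulated tone is built and its envelope recovered as the magnitude of the analytic signal (Hilbert transform). This is a generic signal-processing illustration, not the project's auditory model; all parameter values are arbitrary.

```python
import numpy as np
from scipy.signal import hilbert

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
f_carrier, f_mod, depth = 1000.0, 40.0, 0.8

# Amplitude-modulated tone: a fast carrier with a slow sinusoidal envelope.
envelope = 1.0 + depth * np.sin(2 * np.pi * f_mod * t)
signal = envelope * np.sin(2 * np.pi * f_carrier * t)

# Demodulation: the magnitude of the analytic signal recovers the envelope.
recovered = np.abs(hilbert(signal))

# The recovered envelope closely tracks the true one (edges excluded).
err = np.max(np.abs(recovered[200:-200] - envelope[200:-200]))
print(err < 0.05)
```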

  5. INDIVIDUAL DIFFERENCES IN AUDITORY PROCESSING IN SPECIFIC LANGUAGE IMPAIRMENT: A FOLLOW-UP STUDY USING EVENT-RELATED POTENTIALS AND BEHAVIOURAL THRESHOLDS

    PubMed Central

    Bishop, Dorothy V.M.; McArthur, Genevieve M.

    2005-01-01

    It has frequently been claimed that children with specific language impairment (SLI) have impaired auditory perception, but there is much controversy about the role of such deficits in causing their language problems, and it has been difficult to establish solid, replicable findings in this area. Discrepancies in this field may arise because (a) a focus on mean results obscures the heterogeneity in the population and (b) insufficient attention has been paid to maturational aspects of auditory processing. We conducted a study of 16 young people with specific language impairment (SLI) and 16 control participants, 24 of whom had had auditory event-related potentials (ERPs) and frequency discrimination thresholds assessed 18 months previously. When originally assessed, around one third of the listeners with SLI had poor behavioural frequency discrimination thresholds, and these tended to be the younger participants. However, most of the SLI group had age-inappropriate late components of the auditory ERP, regardless of their frequency discrimination. At follow-up, the behavioural thresholds of those with poor frequency discrimination improved, though some remained outside the control range. At follow-up, ERPs for many of the individuals in the SLI group were still not age-appropriate. In several cases, waveforms of individuals in the SLI group resembled those of younger typically-developing children, though in other cases the waveform was unlike that of control cases at any age. Electrophysiological methods may reveal underlying immaturity or other abnormality of auditory processing even when behavioural thresholds look normal. This study emphasises the variability seen in SLI, and the importance of studying individual cases rather than focusing on group means. PMID:15871598

  6. Auditory and tactile gap discrimination by observers with normal and impaired hearing.

    PubMed

    Desloge, Joseph G; Reed, Charlotte M; Braida, Louis D; Perez, Zachary D; Delhorne, Lorraine A; Villabona, Timothy J

    2014-02-01

    Temporal processing ability for the senses of hearing and touch was examined through the measurement of gap-duration discrimination thresholds (GDDTs) employing the same low-frequency sinusoidal stimuli in both modalities. GDDTs were measured in three groups of observers (normal-hearing, hearing-impaired, and normal-hearing with simulated hearing loss) covering an age range of 21-69 yr. GDDTs for a baseline gap of 6 ms were measured for four different combinations of 100-ms leading and trailing markers (250-250, 250-400, 400-250, and 400-400 Hz). Auditory measurements were obtained for monaural presentation over headphones and tactile measurements were obtained using sinusoidal vibrations presented to the left middle finger. The auditory GDDTs of the hearing-impaired listeners, which were larger than those of the normal-hearing observers, were well-reproduced in the listeners with simulated loss. The magnitude of the GDDT was generally independent of modality and showed effects of age in both modalities. The use of different-frequency compared to same-frequency markers led to a greater deterioration in auditory GDDTs compared to tactile GDDTs and may reflect differences in bandwidth properties between the two sensory systems.

  7. The temporal representation of speech in a nonlinear model of the guinea pig cochlea

    NASA Astrophysics Data System (ADS)

    Holmes, Stephen D.; Sumner, Christian J.; O'Mard, Lowel P.; Meddis, Ray

    2004-12-01

    The temporal representation of speechlike stimuli in the auditory-nerve output of a guinea pig cochlea model is described. The model consists of a bank of dual resonance nonlinear filters that simulate the vibratory response of the basilar membrane followed by a model of the inner hair cell/auditory nerve complex. The model is evaluated by comparing its output with published physiological auditory nerve data in response to single and double vowels. The evaluation includes analyses of individual fibers, as well as ensemble responses over a wide range of best frequencies. In all cases the model response closely follows the patterns in the physiological data, particularly the tendency for the temporal firing pattern of each fiber to represent the frequency of a nearby formant of the speech sound. In the model this behavior is largely a consequence of filter shapes; nonlinear filtering has only a small contribution at low frequencies. The guinea pig cochlear model produces a useful simulation of the measured physiological response to simple speech sounds and is therefore suitable for use in more advanced applications, including attempts to generalize these principles to the response of the human auditory system, both normal and impaired.

  8. A microelectromechanical system artificial basilar membrane based on a piezoelectric cantilever array and its characterization using an animal model.

    PubMed

    Jang, Jongmoon; Lee, JangWoo; Woo, Seongyong; Sly, David J; Campbell, Luke J; Cho, Jin-Ho; O'Leary, Stephen J; Park, Min-Hyun; Han, Sungmin; Choi, Ji-Wong; Jang, Jeong Hun; Choi, Hongsoo

    2015-07-31

    We proposed a piezoelectric artificial basilar membrane (ABM) composed of a microelectromechanical system cantilever array. The ABM mimics the tonotopy of the cochlea: frequency selectivity and mechanoelectric transduction. The fabricated ABM exhibits a clear tonotopy in an audible frequency range (2.92-12.6 kHz). Also, an animal model was used to verify the characteristics of the ABM as a front end for potential cochlear implant applications. For this, a signal processor was used to convert the piezoelectric output from the ABM to an electrical stimulus for auditory neurons. The electrical stimulus for auditory neurons was delivered through an implanted intra-cochlear electrode array. The amplitude of the electrical stimulus was modulated in the range of 0.15 to 3.5 V with incoming sound pressure levels (SPL) of 70.1 to 94.8 dB SPL. The electrical stimulus was used to elicit an electrically evoked auditory brainstem response (EABR) from deafened guinea pigs. EABRs were successfully measured and their magnitude increased upon application of acoustic stimuli from 75 to 95 dB SPL. The frequency selectivity of the ABM was estimated by measuring the magnitude of EABRs while applying sound pressure at the resonance and off-resonance frequencies of the corresponding cantilever of the selected channel. In this study, we demonstrated a novel piezoelectric ABM and verified its characteristics by measuring EABRs.

  9. A circuit for detection of interaural time differences in the nucleus laminaris of turtles.

    PubMed

    Willis, Katie L; Carr, Catherine E

    2017-11-15

    The physiological hearing range of turtles is approximately 50-1000 Hz, as determined by cochlear microphonics (Wever and Vernon, 1956a). These low frequencies can constrain sound localization, particularly in red-eared slider turtles, which are freshwater turtles with small heads and isolated middle ears. To determine if these turtles were sensitive to interaural time differences (ITDs), we investigated the connections and physiology of their auditory brainstem nuclei. Tract tracing experiments showed that cranial nerve VIII bifurcated to terminate in the first-order nucleus magnocellularis (NM) and nucleus angularis (NA), and the NM projected bilaterally to the nucleus laminaris (NL). As the NL received inputs from each side, we developed an isolated head preparation to examine responses to binaural auditory stimulation. Magnocellularis and laminaris units responded to frequencies from 100 to 600 Hz, and phase-locked reliably to the auditory stimulus. Responses from the NL were binaural, and sensitive to ITD. Measures of characteristic delay revealed best ITDs around ±200 μs, and NL neurons typically had characteristic phases close to 0, consistent with binaural excitation. Thus, turtles encode ITDs within their physiological range, and their auditory brainstem nuclei have similar connections and cell types to other reptiles. © 2017. Published by The Company of Biologists Ltd.
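
Although the study measures ITD sensitivity physiologically, the quantity itself can be illustrated with a generic cross-correlation estimate between the two ear signals (a textbook method, not the authors' procedure). The 200 μs delay and 500 Hz tone below echo the values reported above; the sample rate is an assumption.

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference via cross-correlation.

    Returns seconds; positive when the left-ear signal leads.
    """
    corr = np.correlate(right, left, mode="full")
    lag = np.argmax(corr) - (len(left) - 1)
    return lag / fs

fs = 100_000                          # 10 us lag resolution
t = np.arange(0, 0.02, 1 / fs)
itd_true = 200e-6                     # 200 us, as in the NL data above
tone = np.sin(2 * np.pi * 500 * t)    # 500 Hz, within the turtle range
shift = int(round(itd_true * fs))     # delay in samples

left = tone
right = np.concatenate([np.zeros(shift), tone[:-shift]])  # delayed copy

print(estimate_itd(left, right, fs) * 1e6)  # estimated ITD in microseconds
```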

  10. Pairing broadband noise with cortical stimulation induces extensive suppression of ascending sensory activity

    NASA Astrophysics Data System (ADS)

    Markovitz, Craig D.; Hogan, Patrick S.; Wesen, Kyle A.; Lim, Hubert H.

    2015-04-01

    Objective. The corticofugal system can alter coding along the ascending sensory pathway. Within the auditory system, electrical stimulation of the auditory cortex (AC) paired with a pure tone can cause egocentric shifts in the tuning of auditory neurons, making them more sensitive to the pure tone frequency. Since tinnitus has been linked with hyperactivity across auditory neurons, we sought to develop a new neuromodulation approach that could suppress a wide range of neurons rather than enhance specific frequency-tuned neurons. Approach. We performed experiments in the guinea pig to assess the effects of cortical stimulation paired with broadband noise (PN-Stim) on ascending auditory activity within the central nucleus of the inferior colliculus (CNIC), a widely studied region for AC stimulation paradigms. Main results. All eight stimulated AC subregions induced extensive suppression of activity across the CNIC that was not possible with noise stimulation alone. This suppression built up over time and remained after the PN-Stim paradigm. Significance. We propose that the corticofugal system is designed to decrease the brain’s input gain to irrelevant stimuli and PN-Stim is able to artificially amplify this effect to suppress neural firing across the auditory system. The PN-Stim concept may have potential for treating tinnitus and other neurological disorders.

  11. Transcranial alternating current stimulation modulates auditory temporal resolution in elderly people.

    PubMed

    Baltus, Alina; Vosskuhl, Johannes; Boetzel, Cindy; Herrmann, Christoph Siegfried

    2018-05-13

    Recent research provides evidence for a functional role of brain oscillations in perception. For example, auditory temporal resolution seems to be linked to the individual gamma frequency of auditory cortex. Individual gamma frequency not only correlates with performance in between-channel gap detection tasks but can also be modulated via auditory transcranial alternating current stimulation, and this modulation is accompanied by an improvement in gap detection performance. Aging changes electrophysiological frequency components and sensory processing mechanisms. We therefore conducted a study to investigate the link between individual gamma frequency and gap detection performance in elderly people using auditory transcranial alternating current stimulation. In a within-subject design, twelve participants were electrically stimulated at two individualized transcranial alternating current stimulation frequencies, 3 Hz above their individual gamma frequency (experimental condition) and 4 Hz below it (control condition), while performing a between-channel gap detection task. As expected, individual gamma frequencies correlated significantly with gap detection performance at baseline, and transcranial alternating current stimulation modulated gap detection performance in the experimental condition but not in the control condition. In addition, in elderly people the effect of transcranial alternating current stimulation on auditory temporal resolution seems to depend on endogenous frequencies in auditory cortex: elderly individuals with slower individual gamma frequencies and lower auditory temporal resolution benefit from auditory transcranial alternating current stimulation and show increased gap detection performance during stimulation. Our results strongly suggest individualized transcranial alternating current stimulation protocols for successful modulation of performance. This article is protected by copyright. All rights reserved.

  12. Abnormal Auditory Gain in Hyperacusis: Investigation with a Computational Model

    PubMed Central

    Diehl, Peter U.; Schaette, Roland

    2015-01-01

    Hyperacusis is a frequent auditory disorder that is characterized by abnormal loudness perception where sounds of relatively normal volume are perceived as too loud or even painfully loud. As hyperacusis patients show decreased loudness discomfort levels (LDLs) and steeper loudness growth functions, it has been hypothesized that hyperacusis might be caused by an increase in neuronal response gain in the auditory system. Moreover, since about 85% of hyperacusis patients also experience tinnitus, the conditions might be caused by a common mechanism. However, the mechanisms that give rise to hyperacusis have remained unclear. Here, we have used a computational model of the auditory system to investigate candidate mechanisms for hyperacusis. Assuming that perceived loudness is proportional to the summed activity of all auditory nerve (AN) fibers, the model was tuned to reproduce normal loudness perception. We then evaluated a variety of potential hyperacusis gain mechanisms by determining their effects on model equal-loudness contours and comparing the results to the LDLs of hyperacusis patients with normal hearing thresholds. Hyperacusis was best accounted for by an increase in non-linear gain in the central auditory system. Good fits to the average patient LDLs were obtained for a general increase in gain that affected all frequency channels to the same degree, and also for a frequency-specific gain increase in the high-frequency range. Moreover, the gain needed to be applied after subtraction of spontaneous activity of the AN, which is in contrast to current theories of tinnitus generation based on amplification of spontaneous activity. Hyperacusis and tinnitus might therefore be caused by different changes in neuronal processing in the central auditory system. PMID:26236277
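
A toy illustration (not the authors' model; all rate-level parameters below are invented) of the key conclusion above: if loudness is proportional to summed auditory-nerve activity, applying a non-linear central gain after subtracting the spontaneous rate steepens loudness growth and lowers the level at which a fixed discomfort criterion is reached.

```python
import numpy as np

def an_rate(level_db, spont=50.0, max_rate=250.0, slope=0.2):
    """Toy auditory-nerve rate-level function (sigmoid), spikes/s."""
    drive = 1.0 / (1.0 + np.exp(-slope * (level_db - 20.0)))
    return spont + (max_rate - spont) * drive

def loudness(level_db, gain=1.0, expo=1.0, spont=50.0):
    """Loudness proxy: central gain applied AFTER subtracting spontaneous rate."""
    driven = np.maximum(an_rate(level_db, spont=spont) - spont, 0.0)
    return gain * driven ** expo

levels = np.arange(0, 101, 10.0)
normal = loudness(levels)
hyper = loudness(levels, expo=1.3)  # non-linear central gain increase

# The non-linear gain steepens loudness growth, so a fixed discomfort
# criterion is reached at a lower sound level (a lowered LDL).
criterion = normal[-1] * 0.8
ldl_normal = levels[np.argmax(normal >= criterion)]
ldl_hyper = levels[np.argmax(hyper >= criterion)]
print(ldl_hyper <= ldl_normal)
```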

  13. Reference-Free Assessment of Speech Intelligibility Using Bispectrum of an Auditory Neurogram.

    PubMed

    Hossain, Mohammad E; Jassim, Wissam A; Zilany, Muhammad S A

    2016-01-01

    Sensorineural hearing loss occurs due to damage to the inner and outer hair cells of the peripheral auditory system. Hearing loss can cause decreases in audibility, dynamic range, frequency and temporal resolution of the auditory system, and all of these effects are known to affect speech intelligibility. In this study, a new reference-free speech intelligibility metric is proposed using 2-D neurograms constructed from the output of a computational model of the auditory periphery. The responses of the auditory-nerve fibers with a wide range of characteristic frequencies were simulated to construct neurograms. The features of the neurograms were extracted using third-order statistics referred to as bispectrum. The phase coupling of neurogram bispectrum provides a unique insight for the presence (or deficit) of supra-threshold nonlinearities beyond audibility for listeners with normal hearing (or hearing loss). The speech intelligibility scores predicted by the proposed method were compared to the behavioral scores for listeners with normal hearing and hearing loss both in quiet and under noisy background conditions. The results were also compared to the performance of some existing methods. The predicted results showed a good fit with a small error suggesting that the subjective scores can be estimated reliably using the proposed neural-response-based metric. The proposed metric also had a wide dynamic range, and the predicted scores were well-separated as a function of hearing loss. The proposed metric successfully captures the effects of hearing loss and supra-threshold nonlinearities on speech intelligibility. This metric could be applied to evaluate the performance of various speech-processing algorithms designed for hearing aids and cochlear implants.
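
The bispectrum used above is the third-order spectrum B(f1, f2) = E[X(f1) X(f2) X*(f1 + f2)], which is large only where frequency components are phase-coupled. A minimal direct estimator (a generic sketch, not the paper's neurogram pipeline; all signal parameters are invented) applied to a quadratically phase-coupled test signal:

```python
import numpy as np

def bispectrum(x, nfft=64, seg_len=64, hop=32):
    """Direct (FFT-based) bispectrum estimate averaged over segments.

    B(f1, f2) = E[X(f1) X(f2) conj(X(f1 + f2))]: a third-order statistic
    sensitive to phase coupling between frequency components.
    """
    n_freq = nfft // 2
    acc = np.zeros((n_freq, n_freq), dtype=complex)
    count = 0
    for start in range(0, len(x) - seg_len + 1, hop):
        seg = x[start:start + seg_len] * np.hanning(seg_len)
        X = np.fft.fft(seg, nfft)
        for f1 in range(n_freq):
            for f2 in range(n_freq):
                if f1 + f2 < nfft:
                    acc[f1, f2] += X[f1] * X[f2] * np.conj(X[f1 + f2])
        count += 1
    return np.abs(acc) / count

# Quadratically phase-coupled test signal: components at bins k1, k2, and
# k1 + k2 whose phases add up -- this produces a bispectral peak at (k1, k2).
rng = np.random.default_rng(0)
n, k1, k2 = 4096, 6, 10
t = np.arange(n)
p1, p2 = rng.uniform(0, 2 * np.pi, 2)
x = (np.cos(2 * np.pi * k1 * t / 64 + p1)
     + np.cos(2 * np.pi * k2 * t / 64 + p2)
     + np.cos(2 * np.pi * (k1 + k2) * t / 64 + p1 + p2))

B = bispectrum(x)
peak = np.unravel_index(np.argmax(B), B.shape)
print(peak)  # peak at the coupled bins
```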

  15. Steady-state MEG responses elicited by a sequence of amplitude-modulated short tones of different carrier frequencies.

    PubMed

    Kuriki, Shinya; Kobayashi, Yusuke; Kobayashi, Takanari; Tanaka, Keita; Uchikawa, Yoshinori

    2013-02-01

    The auditory steady-state response (ASSR) is a weak potential or magnetic response elicited by periodic acoustic stimuli, with a maximum response at about a 40-Hz periodicity. Most previous studies used long-lasting amplitude-modulated (AM) tones of more than 10 s in length, so the characteristics of the ASSR elicited by short AM tones have remained unclear. In this study, we examined the magnetoencephalographic (MEG) ASSR using a sequence of sinusoidal AM tones of 0.78 s in length with tone frequencies of 440-990 Hz, spanning about one octave. We found that the amplitude of the ASSR was invariant with tone frequency when the sound pressure level was adjusted along an equal-loudness curve. The amplitude also did not depend on the presence of a preceding tone or on a difference in the preceding tone's frequency. When the sound level of the AM tones was varied with tone frequency over the same 440-990 Hz range, the ASSR amplitude varied in proportion to the sound level. These characteristics are favorable for the use of the ASSR in studying temporal processing of auditory information in the auditory cortex. The lack of adaptation in the ASSR elicited by a sequence of short tones may be ascribed to the neural activity of the widely accepted generator of the magnetic ASSR in the primary auditory cortex. Copyright © 2012 Elsevier B.V. All rights reserved.
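
The stimuli described above can be sketched directly: short sinusoidal AM tones with carriers spanning roughly one octave. The 40 Hz modulation rate (where the ASSR peaks), the sample rate, the modulation depth, and the particular carrier frequencies below are assumptions for illustration.

```python
import numpy as np

def sam_tone(f_carrier, f_mod=40.0, dur=0.78, fs=44100, depth=1.0):
    """Sinusoidally amplitude-modulated tone, normalized to peak 1."""
    t = np.arange(int(dur * fs)) / fs
    env = (1.0 + depth * np.sin(2 * np.pi * f_mod * t)) / (1.0 + depth)
    return env * np.sin(2 * np.pi * f_carrier * t)

# A sequence of short AM tones with carriers spanning about one octave,
# as in the study (sound levels here are not loudness-equalized).
carriers = [440.0, 550.0, 660.0, 880.0, 990.0]
sequence = np.concatenate([sam_tone(fc) for fc in carriers])
print(sequence.shape)
```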

  16. Spectral context affects temporal processing in awake auditory cortex

    PubMed Central

    Beitel, Ralph E.; Vollmer, Maike; Heiser, Marc A.; Schreiner, Christoph E.

    2013-01-01

    Amplitude modulation encoding is critical for human speech perception and complex sound processing in general. The modulation transfer function (MTF) is a staple of auditory psychophysics, and has been shown to predict speech intelligibility performance in a range of adverse listening conditions and hearing impairments, including cochlear implant-supported hearing. Although both tonal and broadband carriers have been employed in psychophysical studies of modulation detection and discrimination, relatively little is known about differences in the cortical representation of such signals. We obtained MTFs in response to sinusoidal amplitude modulation (SAM) for both narrowband tonal carriers and 2-octave bandwidth noise carriers in the auditory core of awake squirrel monkeys. MTFs spanning modulation frequencies from 4 to 512 Hz were obtained using 16 channel linear recording arrays sampling across all cortical laminae. Carrier frequency for tonal SAM and center frequency for noise SAM was set at the estimated best frequency for each penetration. Changes in carrier type affected both rate and temporal MTFs in many neurons. Using spike discrimination techniques, we found that discrimination of modulation frequency was significantly better for tonal SAM than for noise SAM, though the differences were modest at the population level. Moreover, spike trains elicited by tonal and noise SAM could be readily discriminated in most cases. Collectively, our results reveal remarkable sensitivity to the spectral content of modulated signals, and indicate substantial interdependence between temporal and spectral processing in neurons of the core auditory cortex. PMID:23719811

  17. Harmonic template neurons in primate auditory cortex underlying complex sound processing

    PubMed Central

    Feng, Lei

    2017-01-01

    Harmonicity is a fundamental element of music, speech, and animal vocalizations. How the auditory system extracts harmonic structures embedded in complex sounds and uses them to form a coherent unitary entity is not fully understood. Despite the prevalence of sounds rich in harmonic structures in our everyday hearing environment, it has remained largely unknown what neural mechanisms are used by the primate auditory cortex to extract these biologically important acoustic structures. In this study, we discovered a unique class of harmonic template neurons in the core region of auditory cortex of a highly vocal New World primate, the common marmoset (Callithrix jacchus), across the entire hearing frequency range. Marmosets have a rich vocal repertoire and a similar hearing range to that of humans. Responses of these neurons show nonlinear facilitation to harmonic complex sounds over inharmonic sounds, selectivity for particular harmonic structures beyond two-tone combinations, and sensitivity to harmonic number and spectral regularity. Our findings suggest that the harmonic template neurons in auditory cortex may play an important role in processing sounds with harmonic structures, such as animal vocalizations, human speech, and music. PMID:28096341

  18. Modulation frequency as a cue for auditory speed perception.

    PubMed

    Senna, Irene; Parise, Cesare V; Ernst, Marc O

    2017-07-12

    Unlike vision, the mechanisms underlying auditory motion perception are poorly understood. Here we describe an auditory motion illusion revealing a novel cue to auditory speed perception: the temporal frequency of amplitude modulation (AM-frequency), typical for rattling sounds. In nature, corrugated objects sliding across each other generate rattling sounds whose AM-frequency correlates directly with speed. We found that AM-frequency modulates auditory speed perception in a highly systematic fashion: moving sounds with higher AM-frequency are perceived as moving faster than sounds with lower AM-frequency. Even more interestingly, sounds with higher AM-frequency also induce stronger motion aftereffects. This reveals the existence of specialized neural mechanisms for auditory motion perception, which are sensitive to AM-frequency. Thus, in spatial hearing, the brain successfully capitalizes on the AM-frequency of rattling sounds to estimate the speed of moving objects. This tightly parallels previous findings in motion vision, where the spatio-temporal frequency of moving displays systematically affects both speed perception and the magnitude of motion aftereffects. Such an analogy with vision suggests that motion detection may rely on canonical computations, with similar neural mechanisms shared across the different modalities. © 2017 The Author(s).

  19. Psychoacoustic and cognitive aspects of auditory roughness: definitions, models, and applications

    NASA Astrophysics Data System (ADS)

    Vassilakis, Pantelis N.; Kendall, Roger A.

    2010-02-01

    The term "auditory roughness" was first introduced in the 19th century to describe the buzzing, rattling auditory sensation accompanying narrow harmonic intervals (i.e., two simultaneous tones with a frequency difference in the range of ~15-150 Hz). A broader definition and an overview of the psychoacoustic correlates of the auditory roughness sensation, also referred to as sensory dissonance, is followed by an examination of efforts to quantify it over the past one hundred and fifty years, leading to the introduction of a new roughness calculation model and an application that automates spectral and roughness analysis of sound signals. Implementation of spectral and roughness analysis is briefly discussed in the context of two pilot perceptual experiments designed to assess the relationship among cultural background, music performance practice, and aesthetic attitudes towards the auditory roughness sensation.
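
For orientation, a common simple roughness estimator from this literature (a Sethares-style pairwise model in the Plomp-Levelt tradition, not the new model introduced in this paper) sums a fixed dissonance curve over all pairs of partials:

```python
import numpy as np

def pair_roughness(f1, f2, a1, a2):
    """Sethares-style roughness of two sinusoidal partials."""
    d_star, s1, s2, b1, b2 = 0.24, 0.021, 19.0, 3.5, 5.75
    s = d_star / (s1 * min(f1, f2) + s2)       # critical-band scaling
    df = abs(f2 - f1)
    return min(a1, a2) * (np.exp(-b1 * s * df) - np.exp(-b2 * s * df))

def total_roughness(freqs, amps):
    """Sum roughness over all pairs of partials in a spectrum."""
    r = 0.0
    for i in range(len(freqs)):
        for j in range(i + 1, len(freqs)):
            r += pair_roughness(freqs[i], freqs[j], amps[i], amps[j])
    return r

# A narrow interval (40 Hz apart, inside the rough range) scores higher
# than a wide one (an octave), matching the description above.
narrow = total_roughness([440.0, 480.0], [1.0, 1.0])
octave = total_roughness([440.0, 880.0], [1.0, 1.0])
print(narrow > octave)
```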

  20. Spatiotemporal reconstruction of auditory steady-state responses to acoustic amplitude modulations: Potential sources beyond the auditory pathway.

    PubMed

    Farahani, Ehsan Darestani; Goossens, Tine; Wouters, Jan; van Wieringen, Astrid

    2017-03-01

Investigating the neural generators of auditory steady-state responses (ASSRs), i.e., auditory evoked brain responses with a wide range of screening and diagnostic applications, has been the focus of various studies for many years. Most of these studies employed a priori assumptions regarding the number and location of neural generators. The aim of this study is to reconstruct ASSR sources with minimal assumptions in order to gain in-depth insight into the number and location of brain regions that are activated in response to low- as well as high-frequency acoustically amplitude-modulated signals. To reconstruct ASSR sources, we applied independent component analysis with subsequent equivalent dipole modeling to single-subject EEG data (young adults, 20-30 years of age). These data were based on white-noise stimuli, amplitude modulated at 4, 20, 40, or 80 Hz. The independent components that exhibited a significant ASSR were clustered across all participants by means of a probabilistic clustering method based on a Gaussian mixture model. Results suggest that a widely distributed network of sources, located in cortical as well as subcortical regions, is active in response to 4, 20, 40, and 80 Hz amplitude-modulated noise. Some of these sources are located beyond the central auditory pathway. Comparison of brain sources in response to different modulation frequencies suggested that the identified sources in the brainstem and the left and right auditory cortex show a higher responsiveness to 40 Hz than to the other modulation frequencies. Copyright © 2017 Elsevier Inc. All rights reserved.
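The stimulus class used here can be sketched as white noise whose envelope is sinusoidally modulated at the target rate. Duration, sampling rate, and modulation depth below are illustrative assumptions, not the study's actual parameters:

```python
import math
import random

def am_noise(fm_hz, dur_s=1.0, fs=8000, depth=1.0, seed=0):
    """White noise amplitude-modulated at fm_hz, the stimulus class used
    for the 4, 20, 40 and 80 Hz ASSR conditions (parameters illustrative)."""
    rng = random.Random(seed)
    n = int(dur_s * fs)
    return [(1.0 + depth * math.sin(2.0 * math.pi * fm_hz * k / fs))
            * rng.uniform(-1.0, 1.0) for k in range(n)]

stimuli = {fm: am_noise(fm) for fm in (4, 20, 40, 80)}
assert all(len(x) == 8000 for x in stimuli.values())
assert all(abs(v) <= 2.0 for x in stimuli.values() for v in x)  # |env| <= 1 + depth
```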

  1. Homeostatic enhancement of sensory transduction

    PubMed Central

    Milewski, Andrew R.; Ó Maoiléidigh, Dáibhid; Salvi, Joshua D.; Hudspeth, A. J.

    2017-01-01

    Our sense of hearing boasts exquisite sensitivity, precise frequency discrimination, and a broad dynamic range. Experiments and modeling imply, however, that the auditory system achieves this performance for only a narrow range of parameter values. Small changes in these values could compromise hair cells’ ability to detect stimuli. We propose that, rather than exerting tight control over parameters, the auditory system uses a homeostatic mechanism that increases the robustness of its operation to variation in parameter values. To slowly adjust the response to sinusoidal stimulation, the homeostatic mechanism feeds back a rectified version of the hair bundle’s displacement to its adaptation process. When homeostasis is enforced, the range of parameter values for which the sensitivity, tuning sharpness, and dynamic range exceed specified thresholds can increase by more than an order of magnitude. Signatures in the hair cell’s behavior provide a means to determine through experiment whether such a mechanism operates in the auditory system. Robustness of function through homeostasis may be ensured in any system through mechanisms similar to those that we describe here. PMID:28760949

  2. Habituation analysis of chirp vs. tone evoked auditory late responses.

    PubMed

    Kern, Kevin; Royter, Vladislav; Corona-Strauss, Farah I; Mariam, Mai; Strauss, Daniel J

    2010-01-01

We have recently shown that tone-evoked auditory late responses (ALRs) can demonstrate that habituation is occurring [1], [2], using the sweep-to-sweep time-scale coherence analysis of [1], which yielded clear results for tone-evoked ALRs. Here we investigate how the results change when chirp-evoked ALRs, which compensate for basilar-membrane dispersion, are used instead of tone-evoked ALRs. We presented three different tone bursts and three different band-limited chirps to 10 subjects at two loudness levels, which each subject had previously set as medium and uncomfortably loud. The three chirps are band-limited within three different ranges; the chirp with the lowest center frequency has the smallest range (according to octave bands). Chirps and tone bursts use the same center frequencies.

  3. Narrow sound pressure level tuning in the auditory cortex of the bats Molossus molossus and Macrotus waterhousii.

    PubMed

    Macías, Silvio; Hechavarría, Julio C; Cobo, Ariadna; Mora, Emanuel C

    2014-03-01

    In the auditory system, tuning to sound level appears in the form of non-monotonic response-level functions that depict the response of a neuron to changing sound levels. Neurons with non-monotonic response-level functions respond best to a particular sound pressure level (defined as "best level" or level evoking the maximum response). We performed a comparative study on the location and basic functional organization of the auditory cortex in the gleaning bat, Macrotus waterhousii, and the aerial-hawking bat, Molossus molossus. Here, we describe the response-level function of cortical units in these two species. In the auditory cortices of M. waterhousii and M. molossus, the characteristic frequency of the units increased from caudal to rostral. In M. waterhousii, there was an even distribution of characteristic frequencies while in M. molossus there was an overrepresentation of frequencies present within echolocation pulses. In both species, most of the units showed best levels in a narrow range, without an evident topography in the amplitopic organization, as described in other species. During flight, bats decrease the intensity of their emitted pulses when they approach a prey item or an obstacle resulting in maintenance of perceived echo intensity. Narrow level tuning likely contributes to the extraction of echo amplitudes facilitating echo-intensity compensation. For aerial-hawking bats, like M. molossus, receiving echoes within the optimal sensitivity range can help the bats to sustain consistent analysis of successive echoes without distortions of perception caused by changes in amplitude. Copyright © 2013 Elsevier B.V. All rights reserved.
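The two response-level properties this abstract relies on, the best level (the level evoking the maximum response) and non-monotonicity, can be extracted directly from a measured rate-level function. The numbers below are hypothetical, for illustration only:

```python
def best_level(levels_db, spikes):
    """Return the sound level evoking the maximum response."""
    return max(zip(levels_db, spikes), key=lambda p: p[1])[0]

def is_nonmonotonic(spikes, tol=0.0):
    """True if the response falls after its peak, i.e. the unit is tuned
    to sound level rather than simply growing with intensity."""
    peak = spikes.index(max(spikes))
    return any(spikes[i + 1] < spikes[i] - tol for i in range(peak, len(spikes) - 1))

# Hypothetical rate-level function: peaks at 50 dB SPL, then declines.
levels = [20, 30, 40, 50, 60, 70, 80]
rates  = [ 2,  8, 20, 35, 22, 10,  5]
print(best_level(levels, rates))  # 50
print(is_nonmonotonic(rates))     # True
```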

  4. Auditory alert systems with enhanced detectability

    NASA Technical Reports Server (NTRS)

    Begault, Durand R. (Inventor)

    2008-01-01

Methods and systems for distinguishing an auditory alert signal from a background of one or more non-alert signals. In a first embodiment, a prefix signal, associated with an existing alert signal, is provided that has a signal component in each of three or more selected frequency ranges, with each component at a level at least 3-10 dB above the estimated background (non-alert) level in that frequency range. The alert signal may be chirped within one or more frequency bands. In another embodiment, an alert signal moves, continuously or discontinuously, from one location to another over a short time interval, introducing a perceived spatial modulation or jitter. In another embodiment, a weighted sum of background signals adjacent to each ear is formed and delivered to each ear as a uniform background; a distinguishable alert signal is presented on top of this weighted-sum signal at one ear, or distinguishable first and second alert signals are presented at the subject's two ears.
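The first embodiment's per-band headroom rule reduces to a simple table computation. The band edges, background levels, and the 6 dB margin below are illustrative; the patent specifies only "at least 3-10 dB":

```python
def alert_component_levels(background_db, margin_db=6.0):
    """Set each alert-signal component margin_db above the estimated
    background level in its frequency band (the patent calls for at
    least 3-10 dB of headroom per band)."""
    return {band: level + margin_db for band, level in background_db.items()}

# Hypothetical cockpit background estimate, in dB SPL per band.
background = {"500-1000 Hz": 62.0, "1000-2000 Hz": 58.0, "2000-4000 Hz": 51.0}
print(alert_component_levels(background))
# {'500-1000 Hz': 68.0, '1000-2000 Hz': 64.0, '2000-4000 Hz': 57.0}
```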

  5. Extensive Tonotopic Mapping across Auditory Cortex Is Recapitulated by Spectrally Directed Attention and Systematically Related to Cortical Myeloarchitecture.

    PubMed

    Dick, Frederic K; Lehet, Matt I; Callaghan, Martina F; Keller, Tim A; Sereno, Martin I; Holt, Lori L

    2017-12-13

Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation-acoustic frequency-might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe.
Although diverse pathologies reduce quality of life by impacting such spectrally directed auditory attention, its neurobiological bases are unclear. We demonstrate that human primary and nonprimary auditory cortical activation is modulated by spectrally directed attention in a manner that recapitulates its tonotopic sensory organization. Further, the graded activation profiles evoked by single-frequency bands are correlated with attentionally driven activation when these bands are presented in complex soundscapes. Finally, we observe a strong concordance in the degree of cortical myelination and the strength of tonotopic activation across several auditory cortical regions. Copyright © 2017 Dick et al.

  6. A microelectromechanical system artificial basilar membrane based on a piezoelectric cantilever array and its characterization using an animal model

    PubMed Central

    Jang, Jongmoon; Lee, JangWoo; Woo, Seongyong; Sly, David J.; Campbell, Luke J.; Cho, Jin-Ho; O’Leary, Stephen J.; Park, Min-Hyun; Han, Sungmin; Choi, Ji-Wong; Hun Jang, Jeong; Choi, Hongsoo

    2015-01-01

    We proposed a piezoelectric artificial basilar membrane (ABM) composed of a microelectromechanical system cantilever array. The ABM mimics the tonotopy of the cochlea: frequency selectivity and mechanoelectric transduction. The fabricated ABM exhibits a clear tonotopy in an audible frequency range (2.92–12.6 kHz). Also, an animal model was used to verify the characteristics of the ABM as a front end for potential cochlear implant applications. For this, a signal processor was used to convert the piezoelectric output from the ABM to an electrical stimulus for auditory neurons. The electrical stimulus for auditory neurons was delivered through an implanted intra-cochlear electrode array. The amplitude of the electrical stimulus was modulated in the range of 0.15 to 3.5 V with incoming sound pressure levels (SPL) of 70.1 to 94.8 dB SPL. The electrical stimulus was used to elicit an electrically evoked auditory brainstem response (EABR) from deafened guinea pigs. EABRs were successfully measured and their magnitude increased upon application of acoustic stimuli from 75 to 95 dB SPL. The frequency selectivity of the ABM was estimated by measuring the magnitude of EABRs while applying sound pressure at the resonance and off-resonance frequencies of the corresponding cantilever of the selected channel. In this study, we demonstrated a novel piezoelectric ABM and verified its characteristics by measuring EABRs. PMID:26227924

  8. [The influence of various acoustic stimuli upon the cumulative action potential (SAP) of the auditory nerves in guinea pigs (author's transl)].

    PubMed

    Hofmann, G; Kraak, W

    1976-08-31

The impact of various acoustic stimuli on the cumulative action potential of the auditory nerves in guinea pigs is investigated by means of the averaging method. Within the measuring range, the potential amplitude was found to increase with the logarithm of the rate of rise of sound pressure. Unlike evoked response audiometry (ERA), this potential therefore seems unsuitable for providing information about the frequency-dependent threshold course.

  9. Tonotopically Ordered Traveling Waves in the Hearing Organs of Bushcrickets in-vivo

    NASA Astrophysics Data System (ADS)

    Udayashankar, Arun Palghat; Kössl, Manfred; Nowotny, Manuela

    2011-11-01

Experimental investigation of auditory mechanics in the mammalian cochlea has been difficult to perform in vivo because the cochlea is securely housed inside the temporal bone. Here we studied the easily accessible hearing organ of bushcrickets, the crista acustica, located in their forelegs. A characteristic feature of the organ is that it is lined with an array of auditory receptors in a tonotopic fashion, with lower frequencies processed at the proximal part and higher frequencies at the distal part of the foreleg. Each receptor cell is associated with so-called cap cells. The cap cells, graded in size, are directly involved in the mechanics of transduction, along with the part of the acoustic trachea that supports them. Functional similarities between the crista acustica and the vertebrate cochlea, such as frequency selectivity and distortion-product otoacoustic emissions, have been well documented. In this study we used laser Doppler vibrometry to study the mechanics of the organ and observed sound-induced traveling waves (TW) along its length. Frequency representation was tonotopic, with TW propagating from the high-frequency to the low-frequency region of the organ, similar to the situation in the cochlea. Traveling-wave velocity increased monotonically from 4 to 12 m/s over a frequency range of 6 to 60 kHz, reflecting a smaller topographic spread (organ length: 1 mm) compared to the guinea pig cochlea (organ length: 18 mm). The wavelength of the traveling wave decreased monotonically from 0.67 mm to 0.27 mm over the same frequency range. Vibration velocity of the organ reached noise threshold levels (10 μm/s) at 30 dB SPL for a frequency of 21 kHz. A small nonlinear compression (73 dB increase in velocity for an 80 dB increase in SPL) was also observed at 21 kHz. Our results indicate that bushcrickets can be a good model system for in-vivo exploration of auditory mechanics.
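The reported velocities and wavelengths are consistent with the plane-wave relation λ = v/f at the low-frequency end of the organ; at the high-frequency end both quantities vary along the organ, so a single-point check is not expected to match exactly. A quick sanity check of the low-frequency numbers:

```python
def wavelength_m(velocity_m_s, freq_hz):
    """Traveling-wave wavelength from phase velocity and frequency."""
    return velocity_m_s / freq_hz

# Low-frequency end of the reported range: 4 m/s at 6 kHz.
lam = wavelength_m(4.0, 6000.0)
print(round(lam * 1000, 2))  # 0.67 (mm), matching the reported wavelength
```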

  10. Auditory Temporal Resolution in Individuals with Diabetes Mellitus Type 2.

    PubMed

    Mishra, Rajkishor; Sanju, Himanshu Kumar; Kumar, Prawin

    2016-10-01

Introduction  "Diabetes mellitus is a group of metabolic disorders characterized by elevated blood sugar and abnormalities in insulin secretion and action" (American Diabetes Association). Previous literature has reported a connection between diabetes mellitus and hearing impairment, but there is a dearth of literature on auditory temporal resolution ability in individuals with diabetes mellitus type 2. Objective  The main objective of the present study was to assess auditory temporal resolution ability through the Gap Detection Threshold (GDT) test in individuals with diabetes mellitus type 2 with high-frequency hearing loss. Methods  Fifteen subjects with diabetes mellitus type 2 with high-frequency hearing loss, in the age range of 30 to 40 years, participated in the study as the experimental group. Fifteen age-matched non-diabetic individuals with normal hearing served as the control group. We administered the GDT test to all participants to assess their temporal resolution ability. Results  We used the independent t-test to compare the groups. Results showed that the diabetic (experimental) group performed significantly more poorly than the non-diabetic (control) group. Conclusion  Widening of auditory filters and changes in the central auditory nervous system may have contributed to the poorer performance on the temporal resolution task (GDT) in individuals with diabetes mellitus type 2. The findings reveal the deteriorating effect of diabetes mellitus type 2 at the central auditory processing level.
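A gap-detection stimulus of the kind used for GDT testing is simply broadband noise with a silent interval of the candidate duration. The sampling rate, overall duration, and centered gap placement below are assumptions for illustration:

```python
import random

def gap_stimulus(gap_ms, dur_ms=500, fs=16000, seed=1):
    """Broadband noise with a silent gap centered in the stimulus, as
    used for gap-detection-threshold (GDT) testing (parameters assumed)."""
    rng = random.Random(seed)
    n = int(dur_ms * fs / 1000)
    x = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    g = int(gap_ms * fs / 1000)
    start = (n - g) // 2
    for i in range(start, start + g):
        x[i] = 0.0
    return x

x = gap_stimulus(gap_ms=5)
n_zero = sum(1 for v in x if v == 0.0)
print(n_zero)  # 80 silent samples: a 5 ms gap at 16 kHz
```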

  11. Signal Processing in Periodically Forced Gradient Frequency Neural Networks

    PubMed Central

    Kim, Ji Chul; Large, Edward W.

    2015-01-01

    Oscillatory instability at the Hopf bifurcation is a dynamical phenomenon that has been suggested to characterize active non-linear processes observed in the auditory system. Networks of oscillators poised near Hopf bifurcation points and tuned to tonotopically distributed frequencies have been used as models of auditory processing at various levels, but systematic investigation of the dynamical properties of such oscillatory networks is still lacking. Here we provide a dynamical systems analysis of a canonical model for gradient frequency neural networks driven by a periodic signal. We use linear stability analysis to identify various driven behaviors of canonical oscillators for all possible ranges of model and forcing parameters. The analysis shows that canonical oscillators exhibit qualitatively different sets of driven states and transitions for different regimes of model parameters. We classify the parameter regimes into four main categories based on their distinct signal processing capabilities. This analysis will lead to deeper understanding of the diverse behaviors of neural systems under periodic forcing and can inform the design of oscillatory network models of auditory signal processing. PMID:26733858
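A minimal sketch of one such oscillator: a periodically forced Hopf normal form integrated with forward Euler. The parameters are arbitrary illustrative choices on the stable side of the bifurcation, and this omits the higher-order coupling terms of the full canonical model analyzed in the paper:

```python
import cmath
import math

def simulate(alpha=-1.0, beta=-1.0, omega=2 * math.pi * 4.0,
             F=0.5, omega0=2 * math.pi * 4.0, dt=1e-3, steps=20000):
    """Forward-Euler integration of the forced Hopf normal form
    dz/dt = (alpha + i*omega) z + beta |z|^2 z + F exp(i*omega0*t)."""
    z = 0.1 + 0j
    amps = []
    for k in range(steps):
        t = k * dt
        dz = ((alpha + 1j * omega) * z + beta * abs(z) ** 2 * z
              + F * cmath.exp(1j * omega0 * t))
        z += dt * dz
        amps.append(abs(z))
    return amps

amps = simulate()
# Resonant forcing drives a bounded, nonzero steady-state amplitude.
assert max(amps) < 1.0 and amps[-1] > 0.1
```

Driving at the oscillator's natural frequency (omega0 = omega) phase-locks the response; sweeping omega0 or F is how the driven states and transitions classified in the paper would be explored.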

  12. A basic study on universal design of auditory signals in automobiles.

    PubMed

    Yamauchi, Katsuya; Choi, Jong-dae; Maiguma, Ryo; Takada, Masayuki; Iwamiya, Shin-ichiro

    2004-11-01

In this paper, the impressions of various kinds of auditory signals currently used in automobiles, together with a comprehensive evaluation, were measured by the semantic differential method, and the desirable acoustic characteristics were examined for each type of auditory signal. Sharp sounds with dominant high-frequency components were not suitable for auditory signals in automobiles; this trend is convenient for older drivers, whose auditory sensitivity in the high-frequency region is reduced. When intermittent sounds were used, a longer OFF time was suitable. Generally, "dull (not sharp)" and "calm" sounds were appropriate for auditory signals. Furthermore, a comparison between the frequency spectrum of automobile interior noise and those of sounds suitable for the various auditory signals indicates that the suitable sounds are not easily masked. Providing suitable auditory signals for each purpose is a good solution from the viewpoint of universal design.

  13. The possible influence of noise frequency components on the health of exposed industrial workers--a review.

    PubMed

    Mahendra Prashanth, K V; Venugopalachar, Sridhar

    2011-01-01

Noise is a common occupational health hazard in most industrial settings. An assessment of noise and its adverse health effects based on noise intensity alone is inadequate; for an efficient evaluation of noise effects, frequency spectrum analysis should also be included. This paper aims to substantiate the importance of studying the contribution of noise frequencies when evaluating health effects and their association with physiological behavior within the human body. Additionally, it reviews studies published between 1988 and 2009 that investigate the impact of industrial/occupational noise on auditory and non-auditory effects and the probable association and contribution of noise frequency components to these effects. Relevant studies in English were identified in Medknow, Medline, Wiley, Elsevier, and Springer publications. Data were extracted from studies whose title and/or abstract involved industrial/occupational noise exposure in relation to auditory and non-auditory (health) effects. Significant data on study characteristics, including noise frequency characteristics, were considered in the assessment. It is demonstrated that only a few studies have considered frequency contributions in their investigations, and then only for auditory effects, not non-auditory effects. The data suggest that significant adverse health effects of industrial noise include auditory and heart-related problems. The study provides strong evidence that noise with a dominant frequency characteristic of around 4 kHz has auditory effects but, for lack of data, fails to show any influence of noise frequency components on non-auditory effects. Furthermore, specific noise levels and frequencies predicting the corresponding health impacts have not yet been validated. There is a need for further research to clarify the importance of the dominant noise frequency contribution in evaluating health effects.

  14. Auditory steady-state responses in cochlear implant users: Effect of modulation frequency and stimulation artifacts.

    PubMed

    Gransier, Robin; Deprez, Hanne; Hofmann, Michael; Moonen, Marc; van Wieringen, Astrid; Wouters, Jan

    2016-05-01

Previous studies have shown that objective measures based on stimulation with low-rate pulse trains fail to predict the threshold levels of cochlear implant (CI) users for the high-rate pulse trains used in clinical devices. Electrically evoked auditory steady-state responses (EASSRs) can be elicited by modulated high-rate pulse trains, and can potentially be used to objectively determine the threshold levels of CI users. The responsiveness of the auditory pathway of profoundly hearing-impaired CI users to modulation frequencies is, however, not known. In the present study we investigated the responsiveness of the auditory pathway of CI users to a monopolar 500 pulses per second (pps) pulse train modulated between 1 and 100 Hz. EASSRs to forty-three modulation frequencies, elicited at the subject's maximum comfort level, were recorded by means of electroencephalography. Stimulation artifacts were removed by linear interpolation between a pre- and post-stimulus sample (i.e., blanking). The phase delay across modulation frequencies was used to differentiate between the neural response and a possible residual stimulation artifact after blanking. Stimulation artifacts were longer than the inter-pulse interval of the 500 pps pulse train for recording electrodes ipsilateral to the CI; as a result, they could not be removed by linear interpolation for those electrodes. However, artifact-free responses could be obtained in all subjects from recording electrodes contralateral to the CI when subject-specific reference electrodes (Cz or Fpz) were used. EASSRs to modulation frequencies within the 30-50 Hz range were significant in all subjects, whereas only a small number of significant responses originating from the brainstem (i.e., modulation frequencies in the 80-100 Hz range) could be obtained during a measurement period of 5 min. This reduced synchronized activity of brainstem responses in long-term severely hearing-impaired CI users could reflect processes associated with long-term hearing impairment and/or electrical stimulation. Copyright © 2016 Elsevier B.V. All rights reserved.
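The blanking step described in this record can be sketched as follows: each artifact span is overwritten with a straight line between the last clean sample before it and the first clean sample after it. The toy signal and span indices below are made up for illustration:

```python
def blank(signal, artifact_spans):
    """Remove stimulation artifacts by linearly interpolating between the
    sample just before and just after each artifact span (the 'blanking'
    used in the EASSR recordings)."""
    y = list(signal)
    for a, b in artifact_spans:          # artifact occupies samples a..b-1
        x0, x1 = y[a - 1], y[b]          # pre- and post-artifact anchors
        for i in range(a, b):
            frac = (i - (a - 1)) / (b - (a - 1))
            y[i] = x0 + frac * (x1 - x0)
    return y

sig = [0.0, 1.0, 9.0, -7.0, 2.0]         # samples 2 and 3 hit by the artifact
clean = blank(sig, [(2, 4)])
assert abs(clean[2] - 4 / 3) < 1e-12 and abs(clean[3] - 5 / 3) < 1e-12
```

As the record notes, this only works when each artifact is shorter than the inter-pulse interval, so that clean anchor samples exist on both sides of every span.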

  15. Cutaneous sensory nerve as a substitute for auditory nerve in solving deaf-mutes’ hearing problem: an innovation in multi-channel-array skin-hearing technology

    PubMed Central

    Li, Jianwen; Li, Yan; Zhang, Ming; Ma, Weifang; Ma, Xuezong

    2014-01-01

    The current use of hearing aids and artificial cochleas for deaf-mute individuals depends on their auditory nerve. Skin-hearing technology, a patented system developed by our group, uses a cutaneous sensory nerve to substitute for the auditory nerve to help deaf-mutes to hear sound. This paper introduces a new solution, multi-channel-array skin-hearing technology, to solve the problem of speech discrimination. Based on the filtering principle of hair cells, external voice signals at different frequencies are converted to current signals at corresponding frequencies using electronic multi-channel bandpass filtering technology. Different positions on the skin can be stimulated by the electrode array, allowing the perception and discrimination of external speech signals to be determined by the skin response to the current signals. Through voice frequency analysis, the frequency range of the band-pass filter can also be determined. These findings demonstrate that the sensory nerves in the skin can help to transfer the voice signal and to distinguish the speech signal, suggesting that the skin sensory nerves are good candidates for the replacement of the auditory nerve in addressing deaf-mutes’ hearing problems. Scientific hearing experiments can be more safely performed on the skin. Compared with the artificial cochlea, multi-channel-array skin-hearing aids have lower operation risk in use, are cheaper and are more easily popularized. PMID:25317171
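The hair-cell-inspired channelization can be sketched with a bank of second-order band-pass filters (an RBJ-style biquad here; the channel center frequencies and Q are illustrative guesses, not the patented design):

```python
import math

def bandpass(x, f0, fs, q=4.0):
    """Constant-peak-gain biquad band-pass centered at f0 Hz: one
    analysis channel of the filter bank."""
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    a0 = 1.0 + alpha
    b0, b2 = alpha / a0, -alpha / a0              # b1 is zero for this form
    a1, a2 = -2.0 * math.cos(w0) / a0, (1.0 - alpha) / a0
    xm1 = xm2 = ym1 = ym2 = 0.0
    out = []
    for xn in x:
        yn = b0 * xn + b2 * xm2 - a1 * ym1 - a2 * ym2
        xm2, xm1 = xm1, xn
        ym2, ym1 = ym1, yn
        out.append(yn)
    return out

def filter_bank(x, centers_hz, fs):
    """One band-passed copy of the input per channel, mimicking the
    frequency decomposition performed by hair cells."""
    return {f0: bandpass(x, f0, fs) for f0 in centers_hz}

def rms(v):
    v = v[1000:]                                  # let filter transients decay
    return math.sqrt(sum(s * s for s in v) / len(v))

# A 1 kHz tone should dominate the matched channel of the bank.
fs = 16000
tone = [math.sin(2.0 * math.pi * 1000.0 * k / fs) for k in range(fs)]
bank = filter_bank(tone, [250, 1000, 4000], fs)
assert rms(bank[1000]) > 3.0 * rms(bank[250])
assert rms(bank[1000]) > 3.0 * rms(bank[4000])
```

In the described system, each channel's output would then drive one electrode of the skin array, so the spatial pattern of stimulation encodes the spectral content of the voice signal.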

  16. Cross-modal attention influences auditory contrast sensitivity: Decreasing visual load improves auditory thresholds for amplitude- and frequency-modulated sounds.

    PubMed

    Ciaramitaro, Vivian M; Chow, Hiu Mei; Eglington, Luke G

    2017-03-01

    We used a cross-modal dual task to examine how changing visual-task demands influenced auditory processing, namely auditory thresholds for amplitude- and frequency-modulated sounds. Observers had to attend to two consecutive intervals of sounds and report which interval contained the auditory stimulus that was modulated in amplitude (Experiment 1) or frequency (Experiment 2). During auditory-stimulus presentation, observers simultaneously attended to a rapid sequential visual presentation-two consecutive intervals of streams of visual letters-and had to report which interval contained a particular color (low load, demanding less attentional resources) or, in separate blocks of trials, which interval contained more of a target letter (high load, demanding more attentional resources). We hypothesized that if attention is a shared resource across vision and audition, an easier visual task should free up more attentional resources for auditory processing on an unrelated task, hence improving auditory thresholds. Auditory detection thresholds were lower-that is, auditory sensitivity was improved-for both amplitude- and frequency-modulated sounds when observers engaged in a less demanding (compared to a more demanding) visual task. In accord with previous work, our findings suggest that visual-task demands can influence the processing of auditory information on an unrelated concurrent task, providing support for shared attentional resources. More importantly, our results suggest that attending to information in a different modality, cross-modal attention, can influence basic auditory contrast sensitivity functions, highlighting potential similarities between basic mechanisms for visual and auditory attention.

  18. Bidirectional Regulation of Innate and Learned Behaviors That Rely on Frequency Discrimination by Cortical Inhibitory Neurons

    PubMed Central

    Aizenberg, Mark; Mwilambwe-Tshilobo, Laetitia; Briguglio, John J.; Natan, Ryan G.; Geffen, Maria N.

    2015-01-01

    The ability to discriminate tones of different frequencies is fundamentally important for everyday hearing. While neurons in the primary auditory cortex (AC) respond differentially to tones of different frequencies, whether and how AC regulates auditory behaviors that rely on frequency discrimination remains poorly understood. Here, we find that the level of activity of inhibitory neurons in AC controls frequency specificity in innate and learned auditory behaviors that rely on frequency discrimination. Photoactivation of parvalbumin-positive interneurons (PVs) improved the ability of the mouse to detect a shift in tone frequency, whereas photosuppression of PVs impaired the performance. Furthermore, photosuppression of PVs during discriminative auditory fear conditioning increased generalization of conditioned response across tone frequencies, whereas PV photoactivation preserved normal specificity of learning. The observed changes in behavioral performance were correlated with bidirectional changes in the magnitude of tone-evoked responses, consistent with predictions of a model of a coupled excitatory-inhibitory cortical network. Direct photoactivation of excitatory neurons, which did not change tone-evoked response magnitude, did not affect behavioral performance in either task. Our results identify a new function for inhibition in the auditory cortex, demonstrating that it can improve or impair acuity of innate and learned auditory behaviors that rely on frequency discrimination. PMID:26629746

  19. Intralaminar stimulation of the inferior colliculus facilitates frequency-specific activation in the auditory cortex

    NASA Astrophysics Data System (ADS)

    Allitt, B. J.; Benjaminsen, C.; Morgan, S. J.; Paolini, A. G.

    2013-08-01

    Objective. Auditory midbrain implants (AMI) provide inadequate frequency discrimination for open-set speech perception. AMIs that can take advantage of the tonotopic laminar structure of the midbrain may be better able to deliver frequency-specific perception and lead to enhanced performance. Stimulation strategies that best elicit frequency-specific activity need to be identified. This research examined the characteristic frequency (CF) relationship between regions of the auditory cortex (AC) and stimulated regions of the inferior colliculus (IC), comparing monopolar and intralaminar bipolar electrical stimulation. Approach. Electrical stimulation using multi-channel micro-electrode arrays in the IC was used to elicit AC responses in anaesthetized male hooded Wistar rats. The rate of activity in AC regions with CFs within 3 kHz of the stimulated site (CF-aligned) versus unaligned CFs was used to assess the frequency specificity of responses. Main results. Both monopolar and bipolar IC stimulation led to CF-aligned neural activity in the AC. Altering the distance between the stimulation and reference electrodes in the IC led to changes in both threshold and dynamic range, with bipolar stimulation at 400 µm spacing evoking the lowest AC threshold and widest dynamic range. At saturation, bipolar stimulation elicited a significantly higher mean spike count in the AC at CF-aligned areas than at CF-unaligned areas when electrode spacing was 400 µm or less. Bipolar stimulation using electrode spacing of 400 µm or less also elicited a higher rate of activity in the AC in both CF-aligned and CF-unaligned regions than monopolar stimulation. When electrodes were spaced 600 µm apart, no benefit over monopolar stimulation was observed. Furthermore, monopolar stimulation of the external cortex of the IC resulted in more localized frequency responses than bipolar stimulation when stimulation and reference sites were 200 µm apart. Significance. 
These findings have implications for the future development of AMI, as a bipolar stimulation strategy may improve the ability of implant users to discriminate between frequencies.

  20. Comparison of hearing and voicing ranges in singing

    NASA Astrophysics Data System (ADS)

    Hunter, Eric J.; Titze, Ingo R.

    2003-04-01

    The spectral and dynamic ranges of the human voice of professional and nonprofessional vocalists were compared to the auditory hearing and feeling thresholds at a distance of one meter. In order to compare these, the analysis was done in true dB SPL, not just the relative dB usually used in speech analysis. The methodology for converting the recorded acoustic signal to absolute pressure units was described. The human voice range of a professional vocalist appeared to match the dynamic range of the auditory system at some frequencies. In particular, it was demonstrated that professional vocalists were able to make use of the most sensitive part of the hearing thresholds (around 4 kHz) through the use of a learned vocal ring or singer's formant. [Work sponsored by NIDCD.]
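
    The conversion from recorded signal units to true dB SPL that the abstract describes can be sketched with a standard calibration-tone approach. This is a minimal illustration, not code from the study; the function names and the 94 dB SPL calibrator level are assumptions.

    ```python
    import numpy as np

    P_REF = 20e-6  # reference pressure: 20 micropascals (0 dB SPL)

    def rms(x):
        """Root-mean-square of a signal segment."""
        return np.sqrt(np.mean(np.square(x)))

    def calibration_factor(cal_recording, cal_db_spl):
        """Pascals per recorder unit, from a tone of known SPL
        recorded through the same microphone chain."""
        cal_pa = P_REF * 10.0 ** (cal_db_spl / 20.0)  # RMS pressure of calibrator
        return cal_pa / rms(cal_recording)

    def true_db_spl(recording, factor):
        """Absolute SPL (re 20 uPa) of a recorded segment."""
        return 20.0 * np.log10(factor * rms(recording) / P_REF)
    ```

    With a calibrator of known level recorded once through the chain, any subsequent voice recording can then be expressed in absolute dB SPL rather than relative dB.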

  1. Frequency organization and responses to complex sounds in the medial geniculate body of the mustached bat.

    PubMed

    Wenstrup, J J

    1999-11-01

    The auditory cortex of the mustached bat (Pteronotus parnellii) displays some of the most highly developed physiological and organizational features described in mammalian auditory cortex. This study examines response properties and organization in the medial geniculate body (MGB) that may contribute to these features of auditory cortex. About 25% of 427 auditory responses had simple frequency tuning with single excitatory tuning curves. The remainder displayed more complex frequency tuning using two-tone or noise stimuli. Most of these were combination-sensitive, responsive to combinations of different frequency bands within sonar or social vocalizations. They included FM-FM neurons, responsive to different harmonic elements of the frequency modulated (FM) sweep in the sonar signal, and H1-CF neurons, responsive to combinations of the bat's first sonar harmonic (H1) and a higher harmonic of the constant frequency (CF) sonar signal. Most combination-sensitive neurons (86%) showed facilitatory interactions. Neurons tuned to frequencies outside the biosonar range also displayed combination-sensitive responses, perhaps related to analyses of social vocalizations. Complex spectral responses were distributed throughout dorsal and ventral divisions of the MGB, forming a major feature of this bat's analysis of complex sounds. The auditory sector of the thalamic reticular nucleus also was dominated by complex spectral responses to sounds. The ventral division was organized tonotopically, based on best frequencies of singly tuned neurons and higher best frequencies of combination-sensitive neurons. Best frequencies were lowest ventrolaterally, increasing dorsally and then ventromedially. However, representations of frequencies associated with higher harmonics of the FM sonar signal were reduced greatly. 
Frequency organization in the dorsal division was not tonotopic; within the middle one-third of MGB, combination-sensitive responses to second and third harmonic CF sonar signals (60-63 and 90-94 kHz) occurred in adjacent regions. In the rostral one-third, combination-sensitive responses to second, third, and fourth harmonic FM frequency bands predominated. These FM-FM neurons, thought to be selective for delay between an emitted pulse and echo, showed some organization of delay selectivity. The organization of frequency sensitivity in the MGB suggests a major rewiring of the output of the central nucleus of the inferior colliculus, by which collicular neurons tuned to the bat's FM sonar signals mostly project to the dorsal, not the ventral, division. Because physiological differences between collicular and MGB neurons are minor, a major role of the tecto-thalamic projection in the mustached bat may be the reorganization of responses to provide for cortical representations of sonar target features.

  2. Abnormal frequency discrimination in children with SLI as indexed by mismatch negativity (MMN).

    PubMed

    Rinker, Tanja; Kohls, Gregor; Richter, Cathrin; Maas, Verena; Schulz, Eberhard; Schecker, Michael

    2007-02-14

    For several decades, the aetiology of specific language impairment (SLI) has been associated with a central auditory processing deficit disrupting the normal language development of affected children. One important aspect of language acquisition is the discrimination of different acoustic features, such as frequency information. Studies to date that have examined frequency discrimination abilities in SLI have yielded contradictory results. We hypothesized that an auditory processing deficit in children with SLI depends on the frequency range and the difference between the tones used. Using a passive mismatch negativity (MMN) design, 13 boys with SLI and 13 age- and IQ-matched controls (7-11 years) were tested with two sine tones of different frequency (700 Hz versus 750 Hz). Reversed hemispheric activity between groups indicated abnormal processing in SLI. In a second time window, the MMN2 was absent for the children with SLI. It can therefore be assumed that a frequency discrimination deficit in children with SLI becomes particularly apparent for tones below 750 Hz and for a frequency difference of 50 Hz. This finding may have important implications for future research and the integration of various research approaches.

  3. Comparison of the produced and perceived voice range profiles in untrained and trained classical singers.

    PubMed

    Hunter, Eric J; Svec, Jan G; Titze, Ingo R

    2006-12-01

    Frequency and intensity ranges (in true decibel sound pressure level, 20 microPa at 1 m) of voice production in trained and untrained vocalists were compared with the perceived dynamic range (phons) and units of loudness (sones) of the ear. Results were reported in terms of standard voice range profiles (VRPs), perceived VRPs (as predicted by accepted measures of auditory sensitivities), and a new metric labeled as an overall perceptual level construct. Trained classical singers made use of the most sensitive part of the hearing range (around 3-4 kHz) through the use of the singer's formant. When mapped onto the contours of equal loudness (depicting nonuniform spectral and dynamic sensitivities of the auditory system), the formant is perceived at an even higher sound level, as measured in phons, than a flat or A-weighted spectrum would indicate. The contributions of effects like the singer's formant and the sensitivities of the auditory system helped the trained singers produce 20% to 40% more units of loudness, as measured in sones, than the untrained singers. Trained male vocalists had a maximum overall perceptual level construct that was 40% higher than the untrained male vocalists. Although the A-weighted spectrum (commonly used in VRP measurement) is a reasonable first-order approximation of auditory sensitivities, it misrepresents the most salient part of the sensitivities (where the singer's formant is found) by nearly 10 dB.
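
    The phon-to-sone relationship underlying the loudness comparisons above can be expressed with Stevens' standard rule: loudness in sones doubles for every 10-phon increase above 40 phons. This is a textbook formula rather than code from the study, and the function names are illustrative.

    ```python
    import math

    def phons_to_sones(level_phons):
        """Stevens' rule: 1 sone at 40 phons, doubling per +10 phons.
        Valid for moderate levels (roughly >= 40 phons)."""
        return 2.0 ** ((level_phons - 40.0) / 10.0)

    def sones_to_phons(loudness_sones):
        """Inverse mapping from sones back to phons."""
        return 40.0 + 10.0 * math.log2(loudness_sones)
    ```

    On this scale, a perceived-level advantage of roughly 3 to 5 phons corresponds to a factor of 2^0.3 ≈ 1.23 to 2^0.5 ≈ 1.41 in sones, on the order of the 20% to 40% loudness advantage reported for the trained singers.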

  4. Sound Frequency Representation in the Auditory Cortex of the Common Marmoset Visualized Using Optical Intrinsic Signal Imaging

    PubMed Central

    Tani, Toshiki; Abe, Hiroshi; Hayami, Taku; Banno, Taku; Kitamura, Naohito; Mashiko, Hiromi

    2018-01-01

    Natural sound is composed of various frequencies. Although the core region of the primate auditory cortex has functionally defined sound frequency preference maps, how the map is organized in the auditory areas of the belt and parabelt regions is not well known. In this study, we investigated the functional organizations of the core, belt, and parabelt regions encompassed by the lateral sulcus and the superior temporal sulcus in the common marmoset (Callithrix jacchus). Using optical intrinsic signal imaging, we obtained evoked responses to band-pass noise stimuli in a range of sound frequencies (0.5–16 kHz) in anesthetized adult animals and visualized the preferred sound frequency map on the cortical surface. We characterized the functionally defined organization using histologically defined brain areas in the same animals. We found tonotopic representation of a set of sound frequencies (low to high) within the primary (A1), rostral (R), and rostrotemporal (RT) areas of the core region. In the belt region, the tonotopic representation existed only in the mediolateral (ML) area. This representation was symmetric with that found in A1 along the border between areas A1 and ML. The functional structure was not very clear in the anterolateral (AL) area. Low frequencies were mainly preferred in the rostrotemporal lateral (RTL) area, while high frequencies were preferred in the caudolateral (CL) area. There was a portion of the parabelt region that strongly responded to higher sound frequencies (>5.8 kHz) along the border between the rostral parabelt (RPB) and caudal parabelt (CPB) regions. PMID:29736410

  5. Auditory cortex of newborn bats is prewired for echolocation.

    PubMed

    Kössl, Manfred; Voss, Cornelia; Mora, Emanuel C; Macias, Silvio; Foeller, Elisabeth; Vater, Marianne

    2012-04-10

    Neuronal computation of object distance from echo delay is an essential task that echolocating bats must master for spatial orientation and the capture of prey. In the dorsal auditory cortex of bats, neurons specifically respond to combinations of short frequency-modulated components of emitted call and delayed echo. These delay-tuned neurons are thought to serve in target range calculation. It is unknown whether neuronal correlates of active space perception are established by experience-dependent plasticity or by innate mechanisms. Here we demonstrate that in the first postnatal week, before onset of echolocation and flight, dorsal auditory cortex already contains functional circuits that calculate distance from the temporal separation of a simulated pulse and echo. This innate cortical implementation of a purely computational processing mechanism for sonar ranging should enhance survival of juvenile bats when they first engage in active echolocation behaviour and flight.

  6. Preliminary evaluation of a novel bone-conduction device for single-sided deafness.

    PubMed

    Popelka, Gerald R; Derebery, Jennifer; Blevins, Nikolas H; Murray, Michael; Moore, Brian C J; Sweetow, Robert W; Wu, Ben; Katsis, Mina

    2010-04-01

    A new intraoral bone-conduction device has advantages over existing bone-conduction devices for reducing the auditory deficits associated with single-sided deafness (SSD). Existing bone-conduction devices effectively mitigate auditory deficits from single-sided deafness but have suboptimal microphone locations, limited frequency range, and/or require invasive surgery. A new device has been designed to improve microphone placement (in the ear canal of the deaf ear), provide a wider frequency range, and eliminate surgery by delivering bone-conduction signals to the teeth via a removable oral appliance. Forces applied by the oral appliance were compared with forces typically experienced by the teeth from normal functions such as mastication or from other appliances. Tooth surface changes were measured on extracted teeth, and transducer temperature was measured under typical use conditions. Dynamic operating range, including gain, bandwidth, and maximum output limits, were determined from uncomfortable loudness levels and vibrotactile thresholds, and speech recognition scores were measured using normal-hearing subjects. Auditory performance in noise (Hearing in Noise Test) was measured in a limited sample of SSD subjects. Overall comfort, ease of insertion, and removal and visibility of the oral appliance in comparison with traditional hearing aids were measured using a rating scale. The oral appliance produces forces that are far below those experienced by the teeth from normal functions or conventional dental appliances. The bone-conduction signal level can be adjusted to prevent tactile perception yet provide sufficient gain and output at frequencies from 250 to 12,000 Hz. The device does not damage tooth surfaces nor produce heat, can be inserted and removed easily, and is as comfortable to wear as traditional hearing aids. 
The new microphone location has advantages for reducing the auditory deficits caused by SSD, including the potential to provide spatial cues introduced by reflections from the pinna, compared with microphone locations for existing devices. A new approach for SSD has been proposed that optimizes microphone location and delivers sound by bone conduction through a removable oral appliance. Measures in the laboratory using normal-hearing subjects indicate that the device provides useful gain and output for SSD patients, is comfortable, does not seem to have detrimental effects on oral function or oral health, and has several advantages over existing devices. Specifically, microphone placement is optimized for reducing the auditory deficit caused by SSD, frequency bandwidth is much greater, and the system does not require surgical placement. Auditory performance in a small sample of SSD subjects indicated a substantial advantage compared with not wearing the device. Future studies will involve performance measures on SSD patients wearing the device for longer periods.

  7. Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study.

    PubMed

    Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong

    2015-01-01

    A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190-210 ms, for 1 kHz stimuli from 170-200 ms, for 2.5 kHz stimuli from 140-200 ms, and for 5 kHz stimuli from 100-200 ms. These findings suggest that a higher-frequency sound signal paired with visual stimuli might be processed or integrated earlier, despite the auditory stimuli being task-irrelevant information. Furthermore, audiovisual integration in late-latency (300-340 ms) ERPs with fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirm that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies.

  8. Intensity invariance properties of auditory neurons compared to the statistics of relevant natural signals in grasshoppers.

    PubMed

    Clemens, Jan; Weschke, Gerroth; Vogel, Astrid; Ronacher, Bernhard

    2010-04-01

    The temporal pattern of amplitude modulations (AM) is often used to recognize acoustic objects. To identify objects reliably, intensity invariant representations have to be formed. We approached this problem within the auditory pathway of grasshoppers. We presented AM patterns modulated at different time scales and intensities. Metric space analysis of neuronal responses allowed us to determine how well, how invariantly, and at which time scales AM frequency is encoded. We find that in some neurons spike-count cues contribute substantially (20-60%) to the decoding of AM frequency at a single intensity. However, such cues are not robust when intensity varies. The general intensity invariance of the system is poor. However, there exists a range of AM frequencies around 83 Hz where intensity invariance of local interneurons is relatively high. In this range, natural communication signals exhibit much variation between species, suggesting an important behavioral role for this frequency band. We hypothesize, just as has been proposed for human speech, that the communication signals might have evolved to match the processing properties of the receivers. This contrasts with optimal coding theory, which postulates that neuronal systems are adapted to the statistics of the relevant signals.

  9. Tuning In to Sound: Frequency-Selective Attentional Filter in Human Primary Auditory Cortex

    PubMed Central

    Da Costa, Sandra; van der Zwaag, Wietske; Miller, Lee M.; Clarke, Stephanie

    2013-01-01

    Cocktail parties, busy streets, and other noisy environments pose a difficult challenge to the auditory system: how to focus attention on selected sounds while ignoring others? Neurons of primary auditory cortex, many of which are sharply tuned to sound frequency, could help solve this problem by filtering selected sound information based on frequency-content. To investigate whether this occurs, we used high-resolution fMRI at 7 tesla to map the fine-scale frequency-tuning (1.5 mm isotropic resolution) of primary auditory areas A1 and R in six human participants. Then, in a selective attention experiment, participants heard low (250 Hz)- and high (4000 Hz)-frequency streams of tones presented at the same time (dual-stream) and were instructed to focus attention onto one stream versus the other, switching back and forth every 30 s. Attention to low-frequency tones enhanced neural responses within low-frequency-tuned voxels relative to high, and when attention switched the pattern quickly reversed. Thus, like a radio, human primary auditory cortex is able to tune into attended frequency channels and can switch channels on demand. PMID:23365225

  10. Audiometric Characteristics of Hyperacusis Patients

    PubMed Central

    Sheldrake, Jacqueline; Diehl, Peter U.; Schaette, Roland

    2015-01-01

    Hyperacusis is a frequent auditory disorder in which sounds of normal volume are perceived as too loud or even painfully loud. There is a high degree of co-morbidity between hyperacusis and tinnitus: most hyperacusis patients also have tinnitus, but only about 30–40% of tinnitus patients also show symptoms of hyperacusis. In order to elucidate the mechanisms of hyperacusis, detailed measurements of loudness discomfort levels (LDLs) across the hearing range would be desirable. However, previous studies have only reported LDLs for a restricted frequency range, e.g., from 0.5 to 4 kHz or from 1 to 8 kHz. We have measured audiograms and LDLs in 381 patients with a primary complaint of hyperacusis for the full standard audiometric frequency range from 0.125 to 8 kHz. On average, patients had mild high-frequency hearing loss, but more than a third of the tested ears had normal hearing thresholds (HTs), i.e., ≤20 dB HL. LDLs were found to be significantly decreased compared to a normal-hearing reference group, with average values around 85 dB HL across the frequency range. However, receiver operating characteristic analysis showed that LDL measurements are neither sensitive nor specific enough to serve as a single test for hyperacusis. There was a moderate positive correlation between HTs and LDLs (r = 0.36), i.e., LDLs tended to be higher at frequencies where hearing loss was present, suggesting that hyperacusis is unlikely to be caused by HT increase, in contrast to tinnitus for which hearing loss is a main trigger. Moreover, our finding that LDLs are decreased across the full range of audiometric frequencies, regardless of the pattern or degree of hearing loss, indicates that hyperacusis might be due to a generalized increase in auditory gain. Tinnitus on the other hand is thought to be caused by neuroplastic changes in a restricted frequency range, suggesting that tinnitus and hyperacusis might not share a common mechanism. PMID:26029161

  11. A nonlinear filter-bank model of the guinea-pig cochlear nerve: Rate responses

    NASA Astrophysics Data System (ADS)

    Sumner, Christian J.; O'Mard, Lowel P.; Lopez-Poveda, Enrique A.; Meddis, Ray

    2003-06-01

    The aim of this study is to produce a functional model of the auditory nerve (AN) response of the guinea-pig that reproduces a wide range of important responses to auditory stimulation. The model is intended for use as an input to larger scale models of auditory processing in the brain-stem. A dual-resonance nonlinear filter architecture is used to reproduce the mechanical tuning of the cochlea. Transduction to the activity on the AN is accomplished with a recently proposed model of the inner-hair-cell. Together, these models have been shown to be able to reproduce the response of high-, medium-, and low-spontaneous rate fibers from the guinea-pig AN at high best frequencies (BFs). In this study we generate parameters that allow us to fit the AN model to data from a wide range of BFs. By varying the characteristics of the mechanical filtering as a function of the BF it was possible to reproduce the BF dependence of frequency-threshold tuning curves, AN rate-intensity functions at and away from BF, compression of the basilar membrane at BF as inferred from AN responses, and AN iso-intensity functions. The model is a convenient computational tool for the simulation of the range of nonlinear tuning and rate-responses found across the length of the guinea-pig cochlear nerve.
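
    The core of the dual-resonance nonlinear (DRNL) architecture described above can be sketched as a linear path summed with a compressed nonlinear path. The broken-stick nonlinearity below follows the standard published form of the model, but the parameter values are placeholders and the band-pass (gammatone-style) filtering of each path is omitted for brevity; this is a simplified sketch, not the fitted guinea-pig model.

    ```python
    import numpy as np

    def broken_stick(x, a=1e3, b=1.0, c=0.25):
        """Compressive nonlinearity of the DRNL nonlinear path:
        linear gain a at low levels, power-law compression with
        exponent c at high levels (placeholder parameter values)."""
        return np.sign(x) * np.minimum(a * np.abs(x), b * np.abs(x) ** c)

    def drnl(x, g_lin=50.0, a=1e3, b=1.0, c=0.25):
        """Sum of the linear path (gain g_lin) and the compressed
        nonlinear path; per-path band-pass filtering omitted."""
        return g_lin * x + broken_stick(x, a, b, c)
    ```

    The key behavior is that output grows linearly at low input levels and roughly as input^0.25 at high levels, which is what lets one filter bank reproduce both sharp low-level tuning and basilar-membrane-like compression at the best frequency.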

  12. Gap detection threshold in the rat before and after auditory cortex ablation.

    PubMed

    Syka, J; Rybalko, N; Mazelová, J; Druga, R

    2002-10-01

    Gap detection threshold (GDT) was measured in adult female pigmented rats (strain Long-Evans) by an operant conditioning technique with food reinforcement, before and after bilateral ablation of the auditory cortex. GDT was dependent on the frequency spectrum and intensity of the continuously present noise in which the gaps were embedded. The mean values of GDT for gaps embedded in white noise or low-frequency noise (upper cutoff frequency 3 kHz) at 70 dB sound pressure level (SPL) were 1.57+/-0.07 ms and 2.9+/-0.34 ms, respectively. Decreasing noise intensity from 80 dB SPL to 20 dB SPL produced a significant increase in GDT. The increase in GDT was relatively small in the range of 80-50 dB SPL for white noise and in the range of 80-60 dB SPL for low-frequency noise. The minimal intensity level of the noise that enabled GDT measurement was 20 dB SPL for white noise and 30 dB SPL for low-frequency noise. Mean GDT values at these intensities were 10.6+/-3.9 ms and 31.3+/-4.2 ms, respectively. Bilateral ablation of the primary auditory cortex (complete destruction of the Te1 and partial destruction of the Te2 and Te3 areas) resulted in an increase in GDT values. By the fifth day after surgery, the rats were able to detect gaps in the noise. The values of GDT observed at this time were 4.2+/-1.1 ms for white noise and 7.4+/-3.1 ms for low-frequency noise at 70 dB SPL. During the first month after cortical ablation, recovery of GDT was observed. However, 1 month after cortical ablation GDT still remained slightly higher than in controls (1.8+/-0.18 ms for white noise, 3.22+/-0.15 ms for low-frequency noise, P<0.05). A decrease in GDT values during the subsequent months was not observed.

  13. Absolute auditory thresholds in three Old World monkey species (Cercopithecus aethiops, C. neglectus, Macaca fuscata) and humans (Homo sapiens).

    PubMed

    Owren, M J; Hopp, S L; Sinnott, J M; Petersen, M R

    1988-06-01

    We investigated the absolute auditory sensitivities of three monkey species (Cercopithecus aethiops, C. neglectus, and Macaca fuscata) and humans (Homo sapiens). Results indicated that species-typical variation exists in these primates. Vervets, which have the smallest interaural distance of the species that we tested, exhibited the greatest high-frequency sensitivity. This result is consistent with Masterton, Heffner, and Ravizza's (1969) observations that head size and high-frequency acuity are inversely correlated in mammals. Vervets were also the most sensitive in the middle frequency range. Furthermore, we found that de Brazza's monkeys, though they produce a specialized, low-pitched boom call, did not show the enhanced low-frequency sensitivity that Brown and Waser (1984) showed for blue monkeys (C. mitis), a species with a similar sound. This discrepancy may be related to differences in the acoustics of the respective habitats of these animals or in the way their boom calls are used. The acuity of Japanese monkeys was found to closely resemble that of rhesus macaques (M. mulatta) that were tested in previous studies. Finally, humans tested in the same apparatus exhibited normative sensitivities. These subjects responded more readily to low frequencies than did the monkeys but rapidly became less sensitive in the high ranges.

  14. Barn owls have ageless ears.

    PubMed

    Krumm, Bianca; Klump, Georg; Köppl, Christine; Langemann, Ulrike

    2017-09-27

    We measured the auditory sensitivity of the barn owl (Tyto alba), using a behavioural Go/NoGo paradigm in two different age groups, one younger than 2 years (n = 4) and another more than 13 years of age (n = 3). In addition, we obtained thresholds from one individual aged 23 years, three times during its lifetime. For computing audiograms, we presented test frequencies of between 0.5 and 12 kHz, covering the hearing range of the barn owl. Average thresholds in quiet were below 0 dB sound pressure level (SPL) for frequencies between 1 and 10 kHz. The lowest mean threshold was -12.6 dB SPL at 8 kHz. Thresholds were the highest at 12 kHz, with a mean of 31.7 dB SPL. Test frequency had a significant effect on auditory threshold but age group had no significant effect. There was no significant interaction between age group and test frequency. Repeated threshold estimates over 21 years from a single individual showed only a slight increase in thresholds. We discuss the auditory sensitivity of barn owls with respect to other species and suggest that birds, which generally show a remarkable capacity for regeneration of hair cells in the basilar papilla, are naturally protected from presbycusis. © 2017 The Author(s).

  15. Auditory capacities in Middle Pleistocene humans from the Sierra de Atapuerca in Spain.

    PubMed

    Martínez, I; Rosa, M; Arsuaga, J-L; Jarabo, P; Quam, R; Lorenzo, C; Gracia, A; Carretero, J-M; Bermúdez de Castro, J-M; Carbonell, E

    2004-07-06

    Human hearing differs from that of chimpanzees and most other anthropoids in maintaining a relatively high sensitivity from 2 kHz up to 4 kHz, a region that contains relevant acoustic information in spoken language. Knowledge of the auditory capacities in human fossil ancestors could greatly enhance the understanding of when this human pattern emerged during the course of our evolutionary history. Here we use a comprehensive physical model to analyze the influence of skeletal structures on the acoustic filtering of the outer and middle ears in five fossil human specimens from the Middle Pleistocene site of the Sima de los Huesos in the Sierra de Atapuerca of Spain. Our results show that the skeletal anatomy in these hominids is compatible with a human-like pattern of sound power transmission through the outer and middle ear at frequencies up to 5 kHz, suggesting that they already had auditory capacities similar to those of living humans in this frequency range.

  16. Auditory capacities in Middle Pleistocene humans from the Sierra de Atapuerca in Spain

    PubMed Central

    Martínez, I.; Rosa, M.; Arsuaga, J.-L.; Jarabo, P.; Quam, R.; Lorenzo, C.; Gracia, A.; Carretero, J.-M.; de Castro, J.-M. Bermúdez; Carbonell, E.

    2004-01-01

    Human hearing differs from that of chimpanzees and most other anthropoids in maintaining a relatively high sensitivity from 2 kHz up to 4 kHz, a region that contains relevant acoustic information in spoken language. Knowledge of the auditory capacities in human fossil ancestors could greatly enhance the understanding of when this human pattern emerged during the course of our evolutionary history. Here we use a comprehensive physical model to analyze the influence of skeletal structures on the acoustic filtering of the outer and middle ears in five fossil human specimens from the Middle Pleistocene site of the Sima de los Huesos in the Sierra de Atapuerca of Spain. Our results show that the skeletal anatomy in these hominids is compatible with a human-like pattern of sound power transmission through the outer and middle ear at frequencies up to 5 kHz, suggesting that they already had auditory capacities similar to those of living humans in this frequency range. PMID:15213327

  17. A Psychophysical Evaluation of Spectral Enhancement

    ERIC Educational Resources Information Center

    DiGiovanni, Jeffrey J.; Nelson, Peggy B.; Schlauch, Robert S.

    2005-01-01

    Listeners with sensorineural hearing loss have well-documented elevated hearing thresholds; reduced auditory dynamic ranges; and reduced spectral (or frequency) resolution that may reduce speech intelligibility, especially in the presence of competing sounds. Amplification and amplitude compression partially compensate for elevated thresholds and…

  18. Rapid measurement of auditory filter shape in mice using the auditory brainstem response and notched noise.

    PubMed

    Lina, Ioan A; Lauer, Amanda M

    2013-04-01

    The notched noise method is an effective procedure for measuring frequency resolution and auditory filter shapes in both human and animal models of hearing. Briefly, auditory filter shape and bandwidth estimates are derived from masked thresholds for tones presented in noise containing widening spectral notches. As the spectral notch widens, increasingly less of the noise falls within the auditory filter and the tone becomes more detectable until the notch width exceeds the filter bandwidth. Behavioral procedures have been used for the derivation of notched noise auditory filter shapes in mice; however, the time and effort needed to train and test animals on these tasks constrains the widespread application of this testing method. As an alternative procedure, we combined relatively non-invasive auditory brainstem response (ABR) measurements and the notched noise method to estimate auditory filters in normal-hearing mice at center frequencies of 8, 11.2, and 16 kHz. A complete set of simultaneous masked thresholds for a particular tone frequency was obtained in about an hour. ABR-derived filter bandwidths broadened with increasing frequency, consistent with previous studies. The ABR notched noise procedure provides a fast alternative for estimating frequency selectivity in mice that is well-suited to high-throughput or time-sensitive screening. Copyright © 2013 Elsevier B.V. All rights reserved.
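
    The derivation step described above is conventionally done by fitting a rounded-exponential (roex) filter to the masked thresholds. The abstract does not name the filter model, so the roex(p) form and its closed-form noise integral below are assumptions drawn from the standard notched-noise literature, not the authors' stated method:

```python
import math

def roex_w(g, p):
    """roex(p) filter weighting at normalized frequency offset
    g = |f - fc| / fc; larger p means steeper filter skirts."""
    return (1.0 + p * g) * math.exp(-p * g)

def noise_through_filter(notch_g, p):
    """Relative noise power passing one filter skirt when the masker has
    a spectral notch of normalized half-width notch_g (closed-form
    integral of roex_w from notch_g to infinity)."""
    u = p * notch_g
    return (2.0 + u) * math.exp(-u) / p

def erb_hz(fc, p):
    """Equivalent rectangular bandwidth of a symmetric roex(p) filter."""
    return 4.0 * fc / p
```

    As the notch widens, `noise_through_filter` falls and the predicted masked threshold drops; `p` is adjusted until the predictions match the measured ABR thresholds, and `erb_hz` then summarizes the filter bandwidth at each center frequency.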

  19. Auditory Gap-in-Noise Detection Behavior in Ferrets and Humans

    PubMed Central

    2015-01-01

    The precise encoding of temporal features of auditory stimuli by the mammalian auditory system is critical to the perception of biologically important sounds, including vocalizations, speech, and music. In this study, auditory gap-detection behavior was evaluated in adult pigmented ferrets (Mustela putorius furo) using bandpassed stimuli designed to widely sample the ferret’s behavioral and physiological audiogram. Animals were tested under positive operant conditioning, with psychometric functions constructed in response to gap-in-noise lengths ranging from 3 to 270 ms. Using a modified version of this gap-detection task, with the same stimulus frequency parameters, we also tested a cohort of normal-hearing human subjects. Gap-detection thresholds were computed from psychometric curves transformed according to signal detection theory, revealing that for both ferrets and humans, detection sensitivity was worse for silent gaps embedded within low-frequency noise compared with high-frequency or broadband stimuli. Additional psychometric function analysis of ferret behavior indicated effects of stimulus spectral content on aspects of behavioral performance related to decision-making processes, with animals displaying improved sensitivity for broadband gap-in-noise detection. Reaction times derived from unconditioned head-orienting data and the time from stimulus onset to reward spout activation varied with the stimulus frequency content and gap length, as well as the approach-to-target choice and reward location. The present study represents a comprehensive evaluation of gap-detection behavior in ferrets, while similarities in performance with our human subjects confirm the use of the ferret as an appropriate model of temporal processing. PMID:26052794
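
    The signal-detection transformation mentioned above can be sketched as follows. This is a minimal illustration assuming a yes/no design with a shared false-alarm rate; the gap values, hit rates, and d' = 1 criterion below are hypothetical, not taken from the paper:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate, eps=1e-3):
    """Sensitivity index d' = z(hits) - z(false alarms); rates are
    clipped away from 0 and 1 to keep the z-scores finite."""
    z = NormalDist().inv_cdf
    return (z(min(max(hit_rate, eps), 1 - eps))
            - z(min(max(fa_rate, eps), 1 - eps)))

def gap_threshold(gaps_ms, hit_rates, fa_rate, criterion=1.0):
    """Shortest gap at which transformed sensitivity reaches the
    criterion, interpolating linearly between tested gap lengths."""
    pts = [(g, d_prime(h, fa_rate)) for g, h in zip(gaps_ms, hit_rates)]
    for (g0, d0), (g1, d1) in zip(pts, pts[1:]):
        if d0 < criterion <= d1:
            return g0 + (criterion - d0) * (g1 - g0) / (d1 - d0)
    return None
```

    Converting every psychometric curve to a threshold in milliseconds this way puts low-frequency, high-frequency, and broadband conditions on a common sensitivity scale, which is what allows the ferret and human results to be compared directly.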

  20. Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study

    PubMed Central

    Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong

    2015-01-01

    A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, which is one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli, comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190–210 ms, for 1 kHz stimuli from 170–200 ms, for 2.5 kHz stimuli from 140–200 ms, and for 5 kHz stimuli from 100–200 ms. These findings suggest that a higher frequency sound signal paired with visual stimuli might be processed or integrated earlier despite the auditory stimuli being task-irrelevant information. Furthermore, audiovisual integration in late-latency (300–340 ms) ERPs with fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirmed that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a visual signal together with auditory stimuli of different frequencies. PMID:26384256

  1. Sustained selective attention to competing amplitude-modulations in human auditory cortex.

    PubMed

    Riecke, Lars; Scharke, Wolfgang; Valente, Giancarlo; Gutschalk, Alexander

    2014-01-01

    Auditory selective attention plays an essential role for identifying sounds of interest in a scene, but the neural underpinnings are still incompletely understood. Recent findings demonstrate that neural activity that is time-locked to a particular amplitude-modulation (AM) is enhanced in the auditory cortex when the modulated stream of sounds is selectively attended to under sensory competition with other streams. However, the target sounds used in the previous studies differed not only in their AM, but also in other sound features, such as carrier frequency or location. Thus, it remains uncertain whether the observed enhancements reflect AM-selective attention. The present study aims at dissociating the effect of AM frequency on response enhancement in auditory cortex by using an ongoing auditory stimulus that contains two competing targets differing exclusively in their AM frequency. Electroencephalography results showed a sustained response enhancement for auditory attention compared to visual attention, but not for AM-selective attention (attended AM frequency vs. ignored AM frequency). In contrast, the response to the ignored AM frequency was enhanced, although a brief trend toward response enhancement occurred during the initial 15 s. Together with the previous findings, these observations indicate that selective enhancement of attended AMs in auditory cortex is adaptive under sustained AM-selective attention. This finding has implications for our understanding of cortical mechanisms for feature-based attentional gain control.
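
    For illustration, a sinusoidally amplitude-modulated tone of the kind such EEG studies track can be generated as below. The carrier, depth, and modulation rates are placeholders, not values from the paper; mixing two such streams that share every parameter except the modulation rate yields competing targets that differ exclusively in AM frequency:

```python
import math

def am_tone(fc, fm, depth, dur_s, fs=16000):
    """Sinusoidally amplitude-modulated tone:
    y(t) = (1 + depth * sin(2*pi*fm*t)) * sin(2*pi*fc*t)."""
    n = int(dur_s * fs)
    return [(1.0 + depth * math.sin(2 * math.pi * fm * t / fs))
            * math.sin(2 * math.pi * fc * t / fs) for t in range(n)]
```

    Neural activity phase-locked to fm (the envelope rate) can then be read out from the EEG spectrum, which is how attention to one AM rate versus the other is quantified.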

  2. Sustained Selective Attention to Competing Amplitude-Modulations in Human Auditory Cortex

    PubMed Central

    Riecke, Lars; Scharke, Wolfgang; Valente, Giancarlo; Gutschalk, Alexander

    2014-01-01

    Auditory selective attention plays an essential role for identifying sounds of interest in a scene, but the neural underpinnings are still incompletely understood. Recent findings demonstrate that neural activity that is time-locked to a particular amplitude-modulation (AM) is enhanced in the auditory cortex when the modulated stream of sounds is selectively attended to under sensory competition with other streams. However, the target sounds used in the previous studies differed not only in their AM, but also in other sound features, such as carrier frequency or location. Thus, it remains uncertain whether the observed enhancements reflect AM-selective attention. The present study aims at dissociating the effect of AM frequency on response enhancement in auditory cortex by using an ongoing auditory stimulus that contains two competing targets differing exclusively in their AM frequency. Electroencephalography results showed a sustained response enhancement for auditory attention compared to visual attention, but not for AM-selective attention (attended AM frequency vs. ignored AM frequency). In contrast, the response to the ignored AM frequency was enhanced, although a brief trend toward response enhancement occurred during the initial 15 s. Together with the previous findings, these observations indicate that selective enhancement of attended AMs in auditory cortex is adaptive under sustained AM-selective attention. This finding has implications for our understanding of cortical mechanisms for feature-based attentional gain control. PMID:25259525

  3. Auditory distance perception in humans: a review of cues, development, neuronal bases, and effects of sensory loss.

    PubMed

    Kolarik, Andrew J; Moore, Brian C J; Zahorik, Pavel; Cirstea, Silvia; Pardhan, Shahina

    2016-02-01

    Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.
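
    The sound-level cue named above follows the inverse-square law in the free field, roughly a 6 dB drop per doubling of distance; a minimal sketch (the reference level and distances are illustrative, and real rooms fall off more shallowly because of reverberation):

```python
import math

def free_field_level(l_ref_db, d_ref_m, d_m):
    """Level of a point source in free field at distance d_m, given a
    reference level at d_ref_m: inverse-square law, i.e. about 6 dB of
    attenuation per doubling of distance."""
    return l_ref_db - 20.0 * math.log10(d_m / d_ref_m)
```

    The departure of a real room from this free-field prediction is itself informative: the shallower the measured fall-off, the more reverberant energy is present, which is the basis of the direct-to-reverberant distance cue discussed in the review.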

  4. Studies on Auditory and Vestibular End Organs and Brain Stem Nuclei. [inner ear damage and hearing defects

    NASA Technical Reports Server (NTRS)

    Ades, H. W.

    1974-01-01

    Cats were exposed to tones of 125, 1000, 2000, and 4000 Hz at sound pressure levels in the range 120 to 157.5 dB, and for durations of one hour (1000, 2000, 4000 Hz) or four hours (125 Hz). Pure tone audiograms were obtained for each animal before and after exposure. Cochleas of animals were examined by phase-contrast microscopy. Extent of inner ear damage and range of frequencies for which hearing loss occurred increased as exposure tone was decreased in frequency. For example, exposure to 4000 Hz produced damage in a restricted region of the cochlea and hearing loss for a relatively narrow range of frequencies; exposure to 125 Hz produced widespread inner ear damage and hearing loss throughout the frequency range 125 to 6000 Hz.

  5. Sound Spectrum Influences Auditory Distance Perception of Sound Sources Located in a Room Environment

    PubMed Central

    Spiousas, Ignacio; Etchemendy, Pablo E.; Eguia, Manuel C.; Calcagno, Esteban R.; Abregú, Ezequiel; Vergara, Ramiro O.

    2017-01-01

    Previous studies on the effect of spectral content on auditory distance perception (ADP) focused on the physically measurable cues occurring either in the near field (low-pass filtering due to head diffraction) or when the sound travels distances >15 m (high-frequency energy losses due to air absorption). Here, we study how the spectrum of a sound arriving from a source located in a reverberant room at intermediate distances (1–6 m) influences the perception of the distance to the source. First, we conducted an ADP experiment using pure tones (the simplest possible spectrum) of frequencies 0.5, 1, 2, and 4 kHz. Then, we performed a second ADP experiment with stimuli consisting of continuous broadband and bandpass-filtered (with center frequencies of 0.5, 1.5, and 4 kHz and bandwidths of 1/12, 1/3, and 1.5 octave) pink-noise clips. Our results showed an effect of the stimulus frequency on the perceived distance both for pure tones and filtered noise bands: ADP was less accurate for stimuli containing energy only in the low-frequency range. Analysis of the frequency response of the room showed that the low accuracy observed for low-frequency stimuli can be explained by the presence of sparse modal resonances in the low-frequency region of the spectrum, which induced a non-monotonic relationship between binaural intensity and source distance. The results obtained in the second experiment suggest that ADP can also be affected by stimulus bandwidth but in a less straightforward way (i.e., depending on the center frequency, increasing stimulus bandwidth could have different effects). Finally, the analysis of the acoustical cues suggests that listeners judged source distance using mainly changes in the overall intensity of the auditory stimulus with distance rather than the direct-to-reverberant energy ratio, even for low-frequency noise bands (which typically induce a high amount of reverberation). 
The results obtained in this study show that, depending on the spectrum of the auditory stimulus, reverberation can degrade ADP rather than improve it. PMID:28690556
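
    The direct-to-reverberant energy ratio weighed against overall intensity above is conventionally computed from a measured room impulse response; a minimal sketch (the 2.5 ms direct-sound window is a common convention in the room-acoustics literature, not a value from this paper):

```python
import math

def drr_db(ir, fs, direct_ms=2.5):
    """Direct-to-reverberant ratio in dB: energy within direct_ms of the
    strongest impulse-response sample versus all energy arriving after."""
    peak = max(range(len(ir)), key=lambda i: abs(ir[i]))
    cut = peak + int(direct_ms * 1e-3 * fs)
    direct = sum(x * x for x in ir[:cut + 1])
    late = sum(x * x for x in ir[cut + 1:])
    return 10.0 * math.log10(direct / late)
```

    Because direct energy falls with distance while late reverberant energy in a room stays roughly constant, DRR decreases monotonically with source distance, which is why it is a candidate distance cue alongside intensity.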

  6. Sound Spectrum Influences Auditory Distance Perception of Sound Sources Located in a Room Environment.

    PubMed

    Spiousas, Ignacio; Etchemendy, Pablo E; Eguia, Manuel C; Calcagno, Esteban R; Abregú, Ezequiel; Vergara, Ramiro O

    2017-01-01

    Previous studies on the effect of spectral content on auditory distance perception (ADP) focused on the physically measurable cues occurring either in the near field (low-pass filtering due to head diffraction) or when the sound travels distances >15 m (high-frequency energy losses due to air absorption). Here, we study how the spectrum of a sound arriving from a source located in a reverberant room at intermediate distances (1-6 m) influences the perception of the distance to the source. First, we conducted an ADP experiment using pure tones (the simplest possible spectrum) of frequencies 0.5, 1, 2, and 4 kHz. Then, we performed a second ADP experiment with stimuli consisting of continuous broadband and bandpass-filtered (with center frequencies of 0.5, 1.5, and 4 kHz and bandwidths of 1/12, 1/3, and 1.5 octave) pink-noise clips. Our results showed an effect of the stimulus frequency on the perceived distance both for pure tones and filtered noise bands: ADP was less accurate for stimuli containing energy only in the low-frequency range. Analysis of the frequency response of the room showed that the low accuracy observed for low-frequency stimuli can be explained by the presence of sparse modal resonances in the low-frequency region of the spectrum, which induced a non-monotonic relationship between binaural intensity and source distance. The results obtained in the second experiment suggest that ADP can also be affected by stimulus bandwidth but in a less straightforward way (i.e., depending on the center frequency, increasing stimulus bandwidth could have different effects). Finally, the analysis of the acoustical cues suggests that listeners judged source distance using mainly changes in the overall intensity of the auditory stimulus with distance rather than the direct-to-reverberant energy ratio, even for low-frequency noise bands (which typically induce a high amount of reverberation). 
The results obtained in this study show that, depending on the spectrum of the auditory stimulus, reverberation can degrade ADP rather than improve it.

  7. Evidence of auditory insensitivity to vocalization frequencies in two frogs.

    PubMed

    Goutte, Sandra; Mason, Matthew J; Christensen-Dalsgaard, Jakob; Montealegre-Z, Fernando; Chivers, Benedict D; Sarria-S, Fabio A; Antoniazzi, Marta M; Jared, Carlos; Almeida Sato, Luciana; Felipe Toledo, Luís

    2017-09-21

    The emergence and maintenance of animal communication systems requires the co-evolution of signal and receiver. Frogs and toads rely heavily on acoustic communication for coordinating reproduction and typically have ears tuned to the dominant frequency of their vocalizations, allowing discrimination from background noise and heterospecific calls. However, we present here evidence that two anurans, Brachycephalus ephippium and B. pitanga, are insensitive to the sound of their own calls. Both species produce advertisement calls outside their hearing sensitivity range and their inner ears are partly undeveloped, which accounts for their lack of high-frequency sensitivity. If unheard by the intended receivers, calls are not beneficial to the emitter and should be selected against because of the costs associated with signal production. We suggest that protection against predators conferred by their high toxicity might help to explain why calling has not yet disappeared, and that visual communication may have replaced auditory in these colourful, diurnal frogs.

  8. Vocal Responses to Perturbations in Voice Auditory Feedback in Individuals with Parkinson's Disease

    PubMed Central

    Liu, Hanjun; Wang, Emily Q.; Metman, Leo Verhagen; Larson, Charles R.

    2012-01-01

    Background One of the most common symptoms of speech deficits in individuals with Parkinson's disease (PD) is significantly reduced vocal loudness and pitch range. The present study investigated whether abnormal vocalizations in individuals with PD are related to sensory processing of voice auditory feedback. Perturbations in loudness or pitch of voice auditory feedback are known to elicit short-latency, compensatory responses in voice amplitude or fundamental frequency. Methodology/Principal Findings Twelve individuals with Parkinson's disease and 13 age- and sex-matched healthy control subjects sustained a vowel sound (/α/) and received unexpected, brief (200 ms) perturbations in voice loudness (±3 or 6 dB) or pitch (±100 cents) auditory feedback. Results showed that, while all subjects produced compensatory responses in their voice amplitude or fundamental frequency, individuals with PD exhibited larger response magnitudes than the control subjects. Furthermore, for loudness-shifted feedback, upward stimuli resulted in shorter response latencies than downward stimuli in the control subjects but not in individuals with PD. Conclusions/Significance The larger response magnitudes in individuals with PD compared with the control subjects suggest that processing of voice auditory feedback is abnormal in PD. Although the precise mechanisms of voice feedback processing are unknown, the results of this study suggest that abnormal voice control in individuals with PD may be related to dysfunctional mechanisms of error detection or correction in sensory feedback processing. PMID:22448258
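
    Pitch perturbations expressed in cents, as used above (±100 cents is about ±1 semitone), map to frequency by a power of two; a small sketch of the standard conversion:

```python
def shifted_hz(f_hz, cents):
    """Frequency after a pitch shift expressed in cents
    (1200 cents = 1 octave, 100 cents = 1 equal-tempered semitone)."""
    return f_hz * 2.0 ** (cents / 1200.0)
```

    For example, a +100-cent perturbation applied to a 200 Hz fundamental presents feedback near 212 Hz, against which the compensatory change in the speaker's produced fundamental frequency is measured.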

  9. Auditory evoked potentials.

    PubMed

    De Cosmo, G; Aceto, P; Clemente, A; Congedo, E

    2004-05-01

    Auditory evoked potentials (AEPs) are an electrical manifestation of the brain's response to an auditory stimulus. Mid-latency auditory evoked potentials (MLAEPs) and the coherent frequency of the AEP are the most promising for monitoring depth of anaesthesia. MLAEPs show graded changes with increasing anaesthetic concentration over the clinical concentration range: the latencies of Pa and Nb lengthen and their amplitudes decrease. These waveform changes are similar with both inhaled and intravenous anaesthetics. Changes in the latency of the Pa and Nb waves are highly correlated with the transition from wakefulness to loss of consciousness. MLAEP recordings may also provide information about cerebral processing of auditory input, probably because they reflect activity in the temporal lobe and primary auditory cortex, sites involved in sound processing and in a complex mechanism of implicit (non-declarative) memory processing. The coherent frequency has been found to be disrupted by anaesthetics and to be implicated in attentional mechanisms. These results support the concept that AEPs reflect the balance between the arousal effects of surgical stimulation and the depressant effects of anaesthetics. However, AEPs are not a perfect measure of anaesthetic depth: they cannot predict patient movement during surgery, and the signal may be affected by muscle artefacts, diathermy, and other electrical interference in the operating theatre. In conclusion, once the reliability of AEP recording is established and signal acquisition improves, it is likely to become a routine feature of clinical anaesthetic practice.

  10. Incorporating Auditory Models in Speech/Audio Applications

    NASA Astrophysics Data System (ADS)

    Krishnamoorthi, Harish

    2011-12-01

    Following the success in incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly or indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes possible solutions to overcome high complexity issues for use in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, 2) the development of a mapping scheme that allows synthesizing a time/frequency domain representation from its equivalent auditory model output. The first problem is aimed at addressing the high computational complexity involved in solving perceptual objective functions that require repeated application of the auditory model for evaluation of different candidate solutions. In this dissertation, frequency-pruning and detector-pruning algorithms are developed that efficiently implement the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to an 80-90% reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals that employs the proposed auditory pattern combining technique together with a look-up table to store representative auditory patterns. 
    To address the second problem, an estimate of the auditory representation that minimizes a perceptual objective function is obtained and the auditory pattern is transformed back to its equivalent time/frequency representation. This avoids the repeated application of auditory model stages to test different candidate time/frequency vectors when minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme is developed by linearizing certain auditory model stages, which ensures obtaining a time/frequency mapping corresponding to the estimated auditory representation. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task.

  11. Deviance detection based on regularity encoding along the auditory hierarchy: electrophysiological evidence in humans.

    PubMed

    Escera, Carles; Leung, Sumie; Grimm, Sabine

    2014-07-01

    Detection of changes in the acoustic environment is critical for survival, as it prevents missing potentially relevant events outside the focus of attention. In humans, deviance detection based on acoustic regularity encoding has been associated with a brain response derived from the human EEG, the mismatch negativity (MMN) auditory evoked potential, peaking at about 100-200 ms from deviance onset. Because of its long latency and cerebral generators, both regularity encoding and deviance detection have been assumed to be cortical processes. Yet intracellular, extracellular, single-unit and local-field potential recordings in rats and cats have shown much earlier (circa 20-30 ms) and hierarchically lower (primary auditory cortex, medial geniculate body, inferior colliculus) deviance-related responses. Here, we review the recent evidence obtained with the complex auditory brainstem response (cABR), the middle latency response (MLR) and magnetoencephalography (MEG) demonstrating that human auditory deviance detection based on regularity encoding, rather than on refractoriness, occurs at latencies and in neural networks comparable to those revealed in animals. Specifically, encoding of simple acoustic-feature regularities and detection of corresponding deviance, such as an infrequent change in frequency or location, occur in the latency range of the MLR, in separate auditory cortical regions from those generating the MMN, and even at the level of the human auditory brainstem. In contrast, violations of more complex regularities, such as those defined by the alternation of two different tones or by feature conjunctions (i.e., frequency and location), fail to elicit MLR correlates but elicit sizable MMNs. 
Altogether, these findings support the emerging view that deviance detection is a basic principle of the functional organization of the auditory system, and that regularity encoding and deviance detection is organized in ascending levels of complexity along the auditory pathway expanding from the brainstem up to higher-order areas of the cerebral cortex.

  12. Stuttering adults' lack of pre-speech auditory modulation normalizes when speaking with delayed auditory feedback.

    PubMed

    Daliri, Ayoub; Max, Ludo

    2018-02-01

    Auditory modulation during speech movement planning is limited in adults who stutter (AWS), but the functional relevance of the phenomenon itself remains unknown. We investigated for AWS and adults who do not stutter (AWNS) (a) a potential relationship between pre-speech auditory modulation and auditory feedback contributions to speech motor learning and (b) the effect on pre-speech auditory modulation of real-time versus delayed auditory feedback. Experiment I used a sensorimotor adaptation paradigm to estimate auditory-motor speech learning. Using acoustic speech recordings, we quantified subjects' formant frequency adjustments across trials when continually exposed to formant-shifted auditory feedback. In Experiment II, we used electroencephalography to determine the same subjects' extent of pre-speech auditory modulation (reductions in auditory evoked potential N1 amplitude) when probe tones were delivered prior to speaking versus not speaking. To manipulate subjects' ability to monitor real-time feedback, we included speaking conditions with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF). Experiment I showed that auditory-motor learning was limited for AWS versus AWNS, and the extent of learning was negatively correlated with stuttering frequency. Experiment II yielded several key findings: (a) our prior finding of limited pre-speech auditory modulation in AWS was replicated; (b) DAF caused a decrease in auditory modulation for most AWNS but an increase for most AWS; and (c) for AWS, the amount of auditory modulation when speaking with DAF was positively correlated with stuttering frequency. Lastly, AWNS showed no correlation between pre-speech auditory modulation (Experiment II) and extent of auditory-motor learning (Experiment I) whereas AWS showed a negative correlation between these measures. 
Thus, findings suggest that AWS show deficits in both pre-speech auditory modulation and auditory-motor learning; however, limited pre-speech modulation is not directly related to limited auditory-motor adaptation; and in AWS, DAF paradoxically tends to normalize their otherwise limited pre-speech auditory modulation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Peripheral and central auditory specialization in a gliding marsupial, the feathertail glider, Acrobates pygmaeus.

    PubMed

    Aitkin, L M; Nelson, J E

    1989-01-01

    Two specialized features are described in the auditory system of Acrobates pygmaeus, a small gliding marsupial. Firstly, the ear canal includes a transverse disk of bone that partly occludes the canal near the eardrum. The resultant narrow-necked chamber above the eardrum appears to attenuate sound across a broad frequency range, except at 27-29 kHz at which a net gain of sound pressure occurs. Secondly, the lateral medulla is hypertrophied at the level of the cochlear nucleus, forming a massive lateral lobe comprised of multipolar cells and granule cells. This lobe has connections with the auditory nerve and the cerebellum. Speculations are advanced about the functions of these structures in gliding behaviour and predator avoidance.

  14. COMPARISON OF THE PRODUCED AND PERCEIVED VOICE RANGE PROFILES IN UNTRAINED AND TRAINED CLASSICAL SINGERS

    PubMed Central

    Hunter, Eric J.; Švec, Jan G.; Titze, Ingo R.

    2016-01-01

    Frequency and intensity ranges (in true dB SPL re 20 μPa at 1 meter) of voice production in trained and untrained vocalists were compared to the perceived dynamic range (phons) and units of loudness (sones) of the ear. Results were reported in terms of standard Voice Range Profiles (VRPs), perceived VRPs (as predicted by accepted measures of auditory sensitivities), and a new metric labeled as an Overall Perceptual Level Construct. Trained classical singers made use of the most sensitive part of the hearing range (around 3–4 kHz) through the use of the singer’s formant. When mapped onto the contours of equal loudness (depicting non-uniform spectral and dynamic sensitivities of the auditory system), the formant is perceived at an even higher sound level, as measured in phons, than a flat or A-weighted spectrum would indicate. The contributions of effects like the singer’s formant and the sensitivities of the auditory system helped the trained singers produce 20–40 percent more units of loudness, as measured in sones, than the untrained singers. Trained male vocalists had a maximum Overall Perceptual Level Construct that was 40% higher than the untrained male vocalists. While the A-weighted spectrum (commonly used in VRP measurement) is a reasonable first-order approximation of auditory sensitivities, it misrepresents the most salient part of the sensitivities (where the singer’s formant is found) by nearly 10 dB. PMID:16325373
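
    The phon/sone mapping behind these loudness comparisons is standard (loudness in sones doubles for each 10-phon step above 40 phons), and the A-weighting curve the authors compare against is defined in IEC 61672; a sketch of both, as a general illustration rather than the paper's own computation:

```python
import math

def sones(phons):
    """Stevens' relation: loudness doubles per 10-phon step above 40."""
    return 2.0 ** ((phons - 40.0) / 10.0)

def a_weight_db(f):
    """IEC 61672 A-weighting gain in dB at frequency f (Hz); roughly
    0 dB at 1 kHz and only mildly positive near 3-4 kHz, which is why
    it understates sensitivity in the singer's-formant region."""
    f2 = f * f
    ra = (12194.0 ** 2 * f2 * f2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20.0 * math.log10(ra) + 2.00
```

    Because sones grow exponentially with phons, a singer's-formant boost of a few phons in the 3–4 kHz region compounds into the 20–40 percent loudness advantage reported for the trained singers.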

  15. Anatomic and Physiologic Heterogeneity of Subgroup-A Auditory Sensory Neurons in Fruit Flies.

    PubMed

    Ishikawa, Yuki; Okamoto, Natsuki; Nakamura, Mizuki; Kim, Hyunsoo; Kamikouchi, Azusa

    2017-01-01

    The antennal ear of the fruit fly detects acoustic signals in intraspecific communication, such as the courtship song and agonistic sounds. Among the five subgroups of mechanosensory neurons in the fly ear, subgroup-A neurons respond maximally to vibrations over a wide frequency range between 100 and 1,200 Hz. The functional organization of the neural circuit comprised of subgroup-A neurons, however, remains largely unknown. In the present study, we used 11 GAL4 strains that selectively label subgroup-A neurons and explored the diversity of subgroup-A neurons by combining single-cell anatomic analysis and Ca2+ imaging. Our findings indicate that the subgroup-A neurons that project into various combinations of subareas in the brain are more anatomically diverse than previously described. Subgroup-A neurons were also physiologically diverse, and some types were tuned to a narrow frequency range, suggesting that the response of subgroup-A neurons to sounds of a wide frequency range is due to the existence of several types of subgroup-A neurons. Further, we found that an auditory behavioral response to the courtship song of flies was attenuated when most subgroup-A neurons were silenced. Together, these findings characterize the heterogeneous functional organization of subgroup-A neurons, which might facilitate species-specific acoustic signal detection.

  16. Anatomic and Physiologic Heterogeneity of Subgroup-A Auditory Sensory Neurons in Fruit Flies

    PubMed Central

    Ishikawa, Yuki; Okamoto, Natsuki; Nakamura, Mizuki; Kim, Hyunsoo; Kamikouchi, Azusa

    2017-01-01

    The antennal ear of the fruit fly detects acoustic signals in intraspecific communication, such as the courtship song and agonistic sounds. Among the five subgroups of mechanosensory neurons in the fly ear, subgroup-A neurons respond maximally to vibrations over a wide frequency range between 100 and 1,200 Hz. The functional organization of the neural circuit comprised of subgroup-A neurons, however, remains largely unknown. In the present study, we used 11 GAL4 strains that selectively label subgroup-A neurons and explored the diversity of subgroup-A neurons by combining single-cell anatomic analysis and Ca2+ imaging. Our findings indicate that the subgroup-A neurons that project into various combinations of subareas in the brain are more anatomically diverse than previously described. Subgroup-A neurons were also physiologically diverse, and some types were tuned to a narrow frequency range, suggesting that the response of subgroup-A neurons to sounds of a wide frequency range is due to the existence of several types of subgroup-A neurons. Further, we found that an auditory behavioral response to the courtship song of flies was attenuated when most subgroup-A neurons were silenced. Together, these findings characterize the heterogeneous functional organization of subgroup-A neurons, which might facilitate species-specific acoustic signal detection. PMID:28701929

  17. Interaural time discrimination of envelopes carried on high-frequency tones as a function of level and interaural carrier mismatch

    PubMed Central

    Blanks, Deidra A.; Buss, Emily; Grose, John H.; Fitzpatrick, Douglas C.; Hall, Joseph W.

    2009-01-01

    Objectives The present study investigated interaural time discrimination for binaurally mismatched carrier frequencies in listeners with normal hearing. One goal of the investigation was to gain insights into binaural hearing in patients with bilateral cochlear implants, where the coding of interaural time differences may be limited by mismatches in the neural populations receiving stimulation on each side. Design Temporal envelopes were manipulated to present low frequency timing cues to high frequency auditory channels. Carrier frequencies near 4 kHz were amplitude modulated at 128 Hz via multiplication with a half-wave rectified sinusoid, and that modulation was either in-phase across ears or delayed to one ear. Detection thresholds for non-zero interaural time differences were measured for a range of stimulus levels and a range of carrier frequency mismatches. Data were also collected under conditions designed to limit cues based on stimulus spectral spread, including masking and truncation of sidebands associated with modulation. Results Listeners with normal hearing can detect interaural time differences in the face of substantial mismatches in carrier frequency across ears. Conclusions The processing of interaural time differences in listeners with normal hearing is likely based on spread of excitation into binaurally matched auditory channels. Sensitivity to interaural time differences in listeners with cochlear implants may depend upon spread of current that results in the stimulation of neural populations that share common tonotopic space bilaterally. PMID:18596646

  18. Functional associations at global brain level during perception of an auditory illusion by applying maximal information coefficient

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Joydeep; Pereda, Ernesto; Ioannou, Christos

    2018-02-01

    Maximal information coefficient (MIC) is a recently introduced information-theoretic measure of functional association with promising applications to high-dimensional complex data sets. Here, we applied MIC to reveal the nature of the functional associations between different brain regions during the perception of the binaural beat (BB); BB is an auditory illusion occurring when two sinusoidal tones of slightly different frequency are presented separately to each ear and an illusory beat at the difference frequency is perceived. We recorded sixty-four-channel EEG from two groups of participants, musicians and non-musicians, during the presentation of BB, and systematically varied the frequency difference from 1 Hz to 48 Hz. Participants were also presented non-binaural beat (NBB) stimuli, in which the same frequency was presented to both ears. Across groups, as compared to NBB, (i) BB conditions produced the most robust changes in the MIC values at the whole-brain level when the frequency differences were in the classical alpha range (8-12 Hz), and (ii) the number of electrode pairs showing nonlinear associations decreased gradually with increasing frequency difference. Between groups, significant effects were found for BBs in the broad gamma frequency range (34-48 Hz), but no such effects were observed between groups during NBB. Altogether, these results revealed the nature of functional associations at the whole-brain level during binaural beat perception and demonstrated the usefulness of MIC in characterizing interregional neural dependencies.
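The stimulus construction described in this abstract is simple to sketch. As a hedged illustration (not the authors' code; function name and defaults are ours), a binaural-beat pair is just two pure tones, one per ear, separated by the desired frequency difference:

```python
import numpy as np

def binaural_beat(f_base, delta_f, dur=1.0, fs=44100):
    """Stereo binaural-beat stimulus: a tone at f_base in the left
    ear and f_base + delta_f in the right. delta_f = 0 gives the
    NBB control (identical tones in both ears).
    Returns an (n_samples, 2) array of unit-amplitude samples."""
    t = np.arange(int(dur * fs)) / fs
    left = np.sin(2 * np.pi * f_base * t)
    right = np.sin(2 * np.pi * (f_base + delta_f) * t)
    return np.column_stack([left, right])

# e.g. a 10 Hz (alpha-range) frequency difference on a 250 Hz carrier:
stim = binaural_beat(250.0, 10.0, dur=0.5)
```

The illusory beat is not present in either ear's signal; it emerges only from binaural interaction, which is why the EEG comparison against NBB isolates the percept.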

  19. Acute Phencyclidine Alters Neural Oscillations Evoked by Tones in the Auditory Cortex of Rats.

    PubMed

    Schnakenberg Martin, Ashley M; O'Donnell, Brian F; Millward, James B; Vohs, Jenifer L; Leishman, Emma; Bolbecker, Amanda R; Rass, Olga; Morzorati, Sandra L

    2017-01-01

    The onset response to a single tone as measured by electroencephalography (EEG) is diminished in power and synchrony in schizophrenia. Because neural synchrony, particularly at gamma frequencies (30-80 Hz), is hypothesized to be supported by the N-methyl-D-aspartate receptor (NMDAr) system, we tested whether phencyclidine (PCP), an NMDAr antagonist, produced similar deficits in response to tone stimuli in rats. Experiment 1 tested the effect of PCP dose (1.0, 2.5, and 4.5 mg/kg) on responses to single tones in intracranial EEG recorded over the auditory cortex in rats. Experiment 2 evaluated the effect of PCP after acute administration of saline or PCP (5 mg/kg), after continuous subchronic administration of saline or PCP (5 mg/kg/day), and after a week of drug cessation. In both experiments, a time-frequency analysis quantified mean power (MP) and phase locking factor (PLF) between 1 and 80 Hz. Event-related potentials (ERPs) to tones were also measured, as was EEG spectral power in the absence of auditory stimuli. Acute PCP increased PLF and MP between 10 and 30 Hz, while decreasing MP and PLF between approximately 50 and 70 Hz. Acute PCP produced a dose-dependent broad-band increase in EEG power that extended into gamma range frequencies. There were no consistent effects of subchronic administration on gamma range activity. Acute PCP increased ERP amplitudes for the P16 and N70 components. Findings suggest that acute PCP-induced NMDAr hypofunction has differential effects on neural power and synchrony which vary with dose, time course of administration, and EEG frequency. EEG synchrony and power appear to be sensitive translational biomarkers for disrupted NMDAr function, which may contribute to the pathophysiology of schizophrenia and other neuropsychiatric disorders. © 2017 S. Karger AG, Basel.

  20. A simulation framework for auditory discrimination experiments: Revealing the importance of across-frequency processing in speech perception.

    PubMed

    Schädler, Marc René; Warzybok, Anna; Ewert, Stephan D; Kollmeier, Birger

    2016-05-01

    A framework for simulating auditory discrimination experiments, based on an approach from Schädler, Warzybok, Hochmuth, and Kollmeier [(2015). Int. J. Audiol. 54, 100-107] which was originally designed to predict speech recognition thresholds, is extended to also predict psychoacoustic thresholds. The proposed framework is used to assess the suitability of different auditory-inspired feature sets for a range of auditory discrimination experiments that included psychoacoustic as well as speech recognition experiments in noise. The considered experiments were 2 kHz tone-in-broadband-noise simultaneous masking depending on the tone length, spectral masking with simultaneously presented tone signals and narrow-band noise maskers, and German Matrix sentence test reception threshold in stationary and modulated noise. The employed feature sets included spectro-temporal Gabor filter bank features, Mel-frequency cepstral coefficients, logarithmically scaled Mel-spectrograms, and the internal representation of the Perception Model from Dau, Kollmeier, and Kohlrausch [(1997). J. Acoust. Soc. Am. 102(5), 2892-2905]. The proposed framework was successfully employed to simulate all experiments with a common parameter set and obtain objective thresholds with fewer assumptions than traditional modeling approaches. Depending on the feature set, the simulated reference-free thresholds were found to agree with, and hence predict, empirical data from the literature. Across-frequency processing was found to be crucial for accurately modeling the lower speech reception thresholds observed in modulated noise compared with stationary noise.

  1. Perception of Small Frequency Differences in Children with Auditory Processing Disorder or Specific Language Impairment

    PubMed Central

    Rota-Donahue, Christine; Schwartz, Richard G.; Shafer, Valerie; Sussman, Elyse S.

    2016-01-01

    Background Frequency discrimination is often impaired in children developing language atypically. However, findings in the detection of small frequency changes in these children are conflicting. Previous studies on children’s auditory perceptual abilities usually involved establishing differential sensitivity thresholds in sample populations who were not tested for auditory deficits. To date, there are no data comparing suprathreshold frequency discrimination ability in children tested for both auditory processing and language skills. Purpose This study examined the perception of small frequency differences (Δf) in children with auditory processing disorder (APD) and/or specific language impairment (SLI). The aim was to determine whether children with APD and children with SLI showed differences in their behavioral responses to frequency changes. Results were expected to identify different degrees of impairment and shed some light on the auditory perceptual overlap between pediatric APD and SLI. Research Design An experimental group design using a two-alternative forced-choice procedure was used to determine frequency discrimination ability for three magnitudes of Δf from the 1000-Hz base frequency. Study Sample Thirty children between 10 years and 12 years, 11 months of age participated: 17 children with APD and/or SLI, and 13 typically developing (TD) peers. The clinical groups included four children with APD only, four children with SLI only, and nine children with both APD and SLI. Data Collection and Analysis Behavioral data collected using headphone delivery were analyzed using the sensitivity index d′, calculated for the three Δf magnitudes of 2%, 5%, and 15% of the base frequency (20, 50, and 150 Hz). Correlations between the dependent variable d′ and the independent variables measuring auditory processing and language skills were also obtained. A stepwise regression analysis was then performed. 
Results TD children and children with APD and/or SLI differed in the detection of small-tone Δf. In addition, APD or SLI status affected behavioral results differently. Comparisons between auditory processing test scores or language test scores and the sensitivity index d′ showed different strengths of correlation based on the magnitudes of the Δf. Auditory processing scores showed stronger correlation to the sensitivity index d′ for the small Δf, while language scores showed stronger correlation to the sensitivity index d′ for the large Δf. Conclusion Although children with APD and/or SLI have difficulty with behavioral frequency discrimination, this difficulty may stem from two different levels: a basic auditory level for children with APD and a higher language processing level for children with SLI; the frequency discrimination performance seemed to be affected by the labeling demands of the same versus different frequency discrimination task for the children with SLI. PMID:27310407
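The sensitivity index used in these analyses has a standard closed form. A minimal sketch (our own, using only the Python standard library, not the study's analysis code) of the usual yes/no formula d′ = z(H) − z(F):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index d' = z(H) - z(F), where z is the inverse
    of the standard normal CDF. Rates must lie strictly in (0, 1);
    in practice extreme rates are nudged, e.g. 1 -> 1 - 1/(2N)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)
```

Chance performance (H = F) gives d′ = 0, and larger Δf should yield larger d′. For a two-alternative forced-choice task, proportion correct Pc is commonly converted via d′ = √2·z(Pc) instead.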

  2. Perception of Small Frequency Differences in Children with Auditory Processing Disorder or Specific Language Impairment.

    PubMed

    Rota-Donahue, Christine; Schwartz, Richard G; Shafer, Valerie; Sussman, Elyse S

    2016-06-01

    Frequency discrimination is often impaired in children developing language atypically. However, findings in the detection of small frequency changes in these children are conflicting. Previous studies on children's auditory perceptual abilities usually involved establishing differential sensitivity thresholds in sample populations who were not tested for auditory deficits. To date, there are no data comparing suprathreshold frequency discrimination ability in children tested for both auditory processing and language skills. This study examined the perception of small frequency differences (∆ƒ) in children with auditory processing disorder (APD) and/or specific language impairment (SLI). The aim was to determine whether children with APD and children with SLI showed differences in their behavioral responses to frequency changes. Results were expected to identify different degrees of impairment and shed some light on the auditory perceptual overlap between pediatric APD and SLI. An experimental group design using a two-alternative forced-choice procedure was used to determine frequency discrimination ability for three magnitudes of ∆ƒ from the 1000-Hz base frequency. Thirty children between 10 years and 12 years, 11 months of age participated: 17 children with APD and/or SLI, and 13 typically developing (TD) peers. The clinical groups included four children with APD only, four children with SLI only, and nine children with both APD and SLI. Behavioral data collected using headphone delivery were analyzed using the sensitivity index d', calculated for the three ∆ƒ magnitudes of 2%, 5%, and 15% of the base frequency (20, 50, and 150 Hz). Correlations between the dependent variable d' and the independent variables measuring auditory processing and language skills were also obtained. A stepwise regression analysis was then performed. TD children and children with APD and/or SLI differed in the detection of small-tone ∆ƒ. 
In addition, APD or SLI status affected behavioral results differently. Comparisons between auditory processing test scores or language test scores and the sensitivity index d' showed different strengths of correlation based on the magnitudes of the ∆ƒ. Auditory processing scores showed stronger correlation to the sensitivity index d' for the small ∆ƒ, while language scores showed stronger correlation to the sensitivity index d' for the large ∆ƒ. Although children with APD and/or SLI have difficulty with behavioral frequency discrimination, this difficulty may stem from two different levels: a basic auditory level for children with APD and a higher language processing level for children with SLI; the frequency discrimination performance seemed to be affected by the labeling demands of the same versus different frequency discrimination task for the children with SLI. American Academy of Audiology.

  3. The relationship between auditory exostoses and cold water: a latitudinal analysis.

    PubMed

    Kennedy, G E

    1986-12-01

    The frequency of auditory exostoses was examined by latitude. It was found that discrete bony lesions of the external auditory canal were, with very few exceptions, either absent or in very low frequency (less than 3.0%) in 0-30 degrees N and S latitudes and above 45 degrees N. The highest frequencies of auditory exostoses were found in the middle latitudes (30-45 degrees N and S) among populations who exploit either marine or fresh water resources. Clinical and experimental data are discussed, and these data are found to support strongly the hypothesis that there is a causative relationship between the formation of auditory exostoses and exploitation of resources in cold water, particularly through diving. It is therefore suggested that since auditory exostoses are behavioral rather than genetic in etiology, they should not be included in estimates of population distance based on nonmetric variables.

  4. Calculation of selective filters of a device for primary analysis of speech signals

    NASA Astrophysics Data System (ADS)

    Chudnovskii, L. S.; Ageev, V. M.

    2014-07-01

    The amplitude-frequency responses of filters for primary analysis of speech signals, which have a low quality factor and a high rolloff factor in the high-frequency range, are calculated using the linear theory of speech production and psychoacoustic measurement data. The frequency resolution of the filter system for a sinusoidal signal is 40-200 Hz. The modulation-frequency resolution of amplitude- and frequency-modulated signals is 3-6 Hz. The aforementioned features of the calculated filters are close to the amplitude-frequency responses of biological auditory systems at the level of the eighth nerve.

  5. Electrical stimulation of the midbrain excites the auditory cortex asymmetrically.

    PubMed

    Quass, Gunnar Lennart; Kurt, Simone; Hildebrandt, Jannis; Kral, Andrej

    2018-05-17

    Auditory midbrain implant users cannot achieve open speech perception and have limited frequency resolution. It remains unclear whether the spread of excitation contributes to this issue and how much it can be compensated by current-focusing, which is an effective approach in cochlear implants. The present study examined the spread of excitation in the cortex elicited by electric midbrain stimulation. We further tested whether current-focusing via bipolar and tripolar stimulation is effective with electric midbrain stimulation and whether these modes hold any advantage over monopolar stimulation even when the stimulation electrodes are in direct contact with the target tissue. Using penetrating multielectrode arrays, we recorded cortical population responses to single pulse electric midbrain stimulation in 10 ketamine/xylazine anesthetized mice. We compared monopolar, bipolar, and tripolar stimulation configurations with regard to the spread of excitation and the characteristic frequency difference between the stimulation/recording electrodes. The cortical responses were distributed asymmetrically around the characteristic frequency of the stimulated midbrain region, with strong activation in regions tuned up to one octave higher. We found no significant differences between monopolar, bipolar, and tripolar stimulation in threshold, evoked firing rate, or dynamic range. The cortical responses to electric midbrain stimulation are biased towards higher tonotopic frequencies. Current-focusing is not effective in direct contact electrical stimulation. Electrode maps should account for the asymmetrical spread of excitation when fitting auditory midbrain implants by shifting the frequency-bands downward and stimulating as dorsally as possible. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Infant Auditory Sensitivity to Pure Tones and Frequency-Modulated Tones

    ERIC Educational Resources Information Center

    Leibold, Lori J.; Werner, Lynne A.

    2007-01-01

    It has been suggested that infants respond preferentially to infant-directed speech because their auditory sensitivity to sounds with extensive frequency modulation (FM) is better than their sensitivity to less modulated sounds. In this experiment, auditory thresholds for FM tones and for unmodulated, or pure, tones in a background of noise were…

  7. Auditory Proprioceptive Integration: Effects of Real-Time Kinematic Auditory Feedback on Knee Proprioception

    PubMed Central

    Ghai, Shashank; Schmitz, Gerd; Hwang, Tong-Hun; Effenberg, Alfred O.

    2018-01-01

    The purpose of the study was to assess the influence of real-time auditory feedback on knee proprioception. Thirty healthy participants were randomly allocated to a control group (n = 15) and experimental group I (n = 15). The participants performed an active knee-repositioning task using their dominant leg, with/without additional real-time auditory feedback where the frequency was mapped in a convergent manner to two different target angles (40 and 75°). Statistical analysis revealed significant enhancement in knee re-positioning accuracy for the constant and absolute error with real-time auditory feedback, within and across the groups. Besides this convergent condition, we established a second divergent condition. Here, a step-wise transposition of frequency was performed to explore whether a systematic tuning between auditory-proprioceptive repositioning exists. No significant effects were identified in this divergent auditory feedback condition. An additional experimental group II (n = 20) was further included. Here, we investigated the influence of a larger magnitude and directional change of step-wise transposition of the frequency. In a first step, the results confirmed the findings of experiment I. Moreover, significant effects on knee auditory-proprioception repositioning were evident when divergent auditory feedback was applied. During the step-wise transposition, participants showed systematic modulation of knee movements in the opposite direction of transposition. We confirm that knee re-positioning accuracy can be enhanced with concurrent application of real-time auditory feedback and that knee re-positioning can be modulated in a goal-directed manner with step-wise transposition of frequency. Clinical implications are discussed with respect to joint position sense in rehabilitation settings. PMID:29568259

  8. Auditory and vestibular dysfunctions in systemic sclerosis: literature review.

    PubMed

    Rabelo, Maysa Bastos; Corona, Ana Paula

    2014-01-01

    To describe the prevalence of auditory and vestibular dysfunction in individuals with systemic sclerosis (SS) and the hypotheses to explain these changes. We performed a systematic review without meta-analysis from PubMed, LILACS, Web of Science, SciELO and SCOPUS databases, using a combination of keywords "systemic sclerosis AND balance OR vestibular" and "systemic sclerosis AND hearing OR auditory." We included articles published in Portuguese, Spanish, or English until December 2011; reviews, letters, and editorials were excluded. We found 254 articles, of which 10 were selected. The study design was described, and the characteristics and frequency of the auditory and vestibular dysfunctions in these individuals were listed. Afterwards, we investigated the hypotheses advanced by the authors to explain the auditory and vestibular dysfunctions in SS. Hearing loss was the most common finding, with prevalence ranging from 20 to 77%, with bilateral sensorineural loss being the most frequent type. It is hypothesized that the hearing impairment in SS is due to vascular changes in the cochlea. The prevalence of vestibular disorders ranged from 11 to 63%, and the most frequent findings were changes in caloric testing, positional nystagmus, impaired oculocephalic response, changes in clinical tests of sensory interaction, and benign paroxysmal positional vertigo. A high prevalence of auditory and vestibular dysfunctions in patients with SS was observed. Further research can assist in the early identification of these abnormalities, provide resources for professionals who work with these patients, and contribute to improving the quality of life of these individuals.

  9. A comparison of auditory brainstem responses across diving bird species

    USGS Publications Warehouse

    Crowell, Sara E.; Berlin, Alicia; Carr, Catherine E.; Olsen, Glenn H.; Therrien, Ronald E.; Yannuzzi, Sally E.; Ketten, Darlene R.

    2015-01-01

    There is little biological data available for diving birds because many live in hard-to-study, remote habitats. Only one species of diving bird, the black-footed penguin (Spheniscus demersus), has been studied in respect to auditory capabilities (Wever et al., Proc Natl Acad Sci USA 63:676–680, 1969). We, therefore, measured in-air auditory threshold in ten species of diving birds, using the auditory brainstem response (ABR). The average audiogram obtained for each species followed the U-shape typical of birds and many other animals. All species tested shared a common region of the greatest sensitivity, from 1000 to 3000 Hz, although audiograms differed significantly across species. Thresholds of all duck species tested were more similar to each other than to the two non-duck species tested. The red-throated loon (Gavia stellata) and northern gannet (Morus bassanus) exhibited the highest thresholds while the lowest thresholds belonged to the duck species, specifically the lesser scaup (Aythya affinis) and ruddy duck (Oxyura jamaicensis). Vocalization parameters were also measured for each species, and showed that with the exception of the common eider (Somateria mollisima), the peak frequency, i.e., frequency at the greatest intensity, of all species' vocalizations measured here fell between 1000 and 3000 Hz, matching the bandwidth of the most sensitive hearing range.

  10. A comparison of auditory brainstem responses across diving bird species

    PubMed Central

    Crowell, Sara E.; Wells-Berlin, Alicia M.; Carr, Catherine E.; Olsen, Glenn H.; Therrien, Ronald E.; Yannuzzi, Sally E.; Ketten, Darlene R.

    2015-01-01

    There is little biological data available for diving birds because many live in hard-to-study, remote habitats. Only one species of diving bird, the black-footed penguin (Spheniscus demersus), has been studied in respect to auditory capabilities (Wever et al. 1969). We therefore measured in-air auditory threshold in ten species of diving birds, using the auditory brainstem response (ABR). The average audiogram obtained for each species followed the U-shape typical of birds and many other animals. All species tested shared a common region of greatest sensitivity, from 1000 to 3000 Hz, although audiograms differed significantly across species. Thresholds of all duck species tested were more similar to each other than to the two non-duck species tested. The red-throated loon (Gavia stellata) and northern gannet (Morus bassanus) exhibited the highest thresholds while the lowest thresholds belonged to the duck species, specifically the lesser scaup (Aythya affinis) and ruddy duck (Oxyura jamaicensis). Vocalization parameters were also measured for each species, and showed that with the exception of the common eider (Somateria mollisima), the peak frequency, i.e. frequency at the greatest intensity, of all species’ vocalizations measured here fell between 1000 and 3000 Hz, matching the bandwidth of the most sensitive hearing range. PMID:26156644

  11. Prestimulus influences on auditory perception from sensory representations and decision processes.

    PubMed

    Kayser, Stephanie J; McNair, Steven W; Kayser, Christoph

    2016-04-26

    The qualities of perception depend not only on the sensory inputs but also on the brain state before stimulus presentation. Although the collective evidence from neuroimaging studies for a relation between prestimulus state and perception is strong, the interpretation in the context of sensory computations or decision processes has remained difficult. In the auditory system, for example, previous studies have reported a wide range of effects in terms of the perceptually relevant frequency bands and state parameters (phase/power). To dissociate influences of state on earlier sensory representations and higher-level decision processes, we collected behavioral and EEG data in human participants performing two auditory discrimination tasks relying on distinct acoustic features. Using single-trial decoding, we quantified the relation between prestimulus activity, relevant sensory evidence, and choice in different task-relevant EEG components. Within auditory networks, we found that phase had no direct influence on choice, whereas power in task-specific frequency bands affected the encoding of sensory evidence. Within later-activated frontoparietal regions, theta and alpha phase had a direct influence on choice, without involving sensory evidence. These results delineate two consistent mechanisms by which prestimulus activity shapes perception. However, the timescales of the relevant neural activity depend on the specific brain regions engaged by the respective task.

  12. Probing the independence of formant control using altered auditory feedback

    PubMed Central

    MacDonald, Ewen N.; Purcell, David W.; Munhall, Kevin G.

    2011-01-01

    Two auditory feedback perturbation experiments were conducted to examine the nature of control of the first two formants in vowels. In the first experiment, talkers heard their auditory feedback with either F1 or F2 shifted in frequency. Talkers altered production of the perturbed formant by changing its frequency in the opposite direction to the perturbation but did not produce a correlated alteration of the unperturbed formant. Thus, the motor control system is capable of fine-grained independent control of F1 and F2. In the second experiment, a large meta-analysis was conducted on data from talkers who received feedback where both F1 and F2 had been perturbed. A moderate correlation was found between individual compensations in F1 and F2 suggesting that the control of F1 and F2 is processed in a common manner at some level. While a wide range of individual compensation magnitudes were observed, no significant correlations were found between individuals’ compensations and vowel space differences. Similarly, no significant correlations were found between individuals’ compensations and variability in normal vowel production. Further, when receiving normal auditory feedback, most of the population exhibited no significant correlation between the natural variation in production of F1 and F2. PMID:21361452

  13. Prestimulus influences on auditory perception from sensory representations and decision processes

    PubMed Central

    McNair, Steven W.

    2016-01-01

    The qualities of perception depend not only on the sensory inputs but also on the brain state before stimulus presentation. Although the collective evidence from neuroimaging studies for a relation between prestimulus state and perception is strong, the interpretation in the context of sensory computations or decision processes has remained difficult. In the auditory system, for example, previous studies have reported a wide range of effects in terms of the perceptually relevant frequency bands and state parameters (phase/power). To dissociate influences of state on earlier sensory representations and higher-level decision processes, we collected behavioral and EEG data in human participants performing two auditory discrimination tasks relying on distinct acoustic features. Using single-trial decoding, we quantified the relation between prestimulus activity, relevant sensory evidence, and choice in different task-relevant EEG components. Within auditory networks, we found that phase had no direct influence on choice, whereas power in task-specific frequency bands affected the encoding of sensory evidence. Within later-activated frontoparietal regions, theta and alpha phase had a direct influence on choice, without involving sensory evidence. These results delineate two consistent mechanisms by which prestimulus activity shapes perception. However, the timescales of the relevant neural activity depend on the specific brain regions engaged by the respective task. PMID:27071110

  14. A biophysical model for modulation frequency encoding in the cochlear nucleus.

    PubMed

    Eguia, Manuel C; Garcia, Guadalupe C; Romano, Sebastian A

    2010-01-01

    Encoding of amplitude modulated (AM) acoustical signals is one of the most compelling tasks for the mammalian auditory system: environmental sounds, after being filtered and transduced by the cochlea, become narrowband AM signals. Despite much experimental work devoted to understanding how the auditory system extracts and encodes AM information, the neural mechanisms underlying this remarkable feature are far from being understood (Joris et al., 2004). One of the most accepted theories for this processing is the existence of a periodotopic organization (based on temporal information) across the more studied tonotopic axis (Frisina et al., 1990b). In this work, we will review some recent advances in the study of the mechanisms involved in neural processing of AM sounds, and propose an integrated model that runs from the external ear, through the cochlea and the auditory nerve, up to a sub-circuit of the cochlear nucleus (the first processing unit in the central auditory system). We will show that, by varying the amount of inhibition in our model, we can obtain a range of best modulation frequencies (BMF) in some principal cells of the cochlear nucleus. This could be a basis for a synchronicity-based, low-level periodotopic organization. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
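
The sinusoidally amplitude-modulated (AM) tones such studies use have a simple closed form. A minimal numpy sketch (the carrier frequency, modulation rate, depth, and duration below are illustrative values, not parameters taken from the study):

```python
import numpy as np

def am_tone(fc=1000.0, fm=100.0, depth=1.0, dur=0.5, fs=44100):
    """AM tone: (1 + depth*sin(2*pi*fm*t)) * sin(2*pi*fc*t)."""
    t = np.arange(int(dur * fs)) / fs
    envelope = 1.0 + depth * np.sin(2 * np.pi * fm * t)
    return envelope * np.sin(2 * np.pi * fc * t)

# 1-kHz carrier modulated at 100 Hz, a rate within the BMF ranges typically studied
sig = am_tone()
```

A neuron's best modulation frequency is then the `fm` that maximizes its response across a family of such stimuli.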

  15. Language impairment is reflected in auditory evoked fields.

    PubMed

    Pihko, Elina; Kujala, Teija; Mickos, Annika; Alku, Paavo; Byring, Roger; Korkman, Marit

    2008-05-01

    Specific language impairment (SLI) is diagnosed when a child has problems in producing or understanding language despite having a normal IQ and there being no other obvious explanation. There can be several associated problems, and no single underlying cause has yet been identified. Some theories propose problems in auditory processing, specifically in the discrimination of sound frequency or rapid temporal frequency changes. We compared automatic cortical speech-sound processing and discrimination between a group of children with SLI and control children with normal language development (mean age: 6.6 years; range: 5-7 years). We measured auditory evoked magnetic fields using two sets of CV syllables, one with a changing consonant /da/ba/ga/ and another one with a changing vowel /su/so/sy/ in an oddball paradigm. The P1m responses for onsets of repetitive stimuli were weaker in the SLI group whereas no significant group differences were found in the mismatch responses. The results indicate that the SLI group, having weaker responses to the onsets of sounds, might have slightly depressed sensory encoding.

  16. Specialization of the auditory system for the processing of bio-sonar information in the frequency domain: Mustached bats.

    PubMed

    Suga, Nobuo

    2018-04-01

    For echolocation, mustached bats emit velocity-sensitive orientation sounds (pulses) containing a constant-frequency component consisting of four harmonics (CF1-4). They show unique behavior called Doppler-shift compensation for Doppler-shifted echoes and hunting behavior for frequency- and amplitude-modulated echoes from fluttering insects. Their peripheral auditory system is highly specialized for fine frequency analysis of CF2 (∼61.0 kHz) and detecting echo CF2 from fluttering insects. In their central auditory system, lateral inhibition occurring at multiple levels sharpens V-shaped frequency-tuning curves at the periphery and creates sharp spindle-shaped tuning curves and amplitude tuning. The large CF2-tuned area of the auditory cortex systematically represents the frequency and amplitude of CF2 in a frequency-versus-amplitude map. "CF/CF" neurons are tuned to a specific combination of pulse CF1 and Doppler-shifted echo CF2 or CF3. They are tuned to specific velocities. CF/CF neurons cluster in the CC ("C" stands for CF) and DIF (dorsal intrafossa) areas of the auditory cortex. The CC area has the velocity map for Doppler imaging. The DIF area serves particularly for Doppler imaging of other bats approaching in cruising flight. To optimize the processing of behaviorally relevant sounds, cortico-cortical interactions and corticofugal feedback modulate the frequency tuning of cortical and sub-cortical auditory neurons and cochlear hair cells through a neural net consisting of positive feedback associated with lateral inhibition. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. Auditory priming improves neural synchronization in auditory-motor entrainment.

    PubMed

    Crasta, Jewel E; Thaut, Michael H; Anderson, Charles W; Davies, Patricia L; Gavin, William J

    2018-05-22

    Neurophysiological research has shown that auditory and motor systems interact during movement to rhythmic auditory stimuli through a process called entrainment. This study explores the neural oscillations underlying auditory-motor entrainment using electroencephalography. Forty young adults were randomly assigned to one of two control conditions, an auditory-only condition or a motor-only condition, prior to a rhythmic auditory-motor synchronization condition (referred to as combined condition). Participants assigned to the auditory-only condition (auditory-first group) listened to 400 trials of auditory stimuli presented every 800 ms, while those in the motor-only condition (motor-first group) were asked to tap rhythmically every 800 ms without any external stimuli. Following their control condition, all participants completed an auditory-motor combined condition that required tapping along with auditory stimuli every 800 ms. As expected, the neural processes for the combined condition for each group were different compared to their respective control condition. Time-frequency analysis of total power at an electrode site on the left central scalp (C3) indicated that the neural oscillations elicited by auditory stimuli, especially in the beta and gamma range, drove the auditory-motor entrainment. For the combined condition, the auditory-first group had significantly lower evoked power for a region of interest representing sensorimotor processing (4-20 Hz) and less total power in a region associated with anticipation and predictive timing (13-16 Hz) than the motor-first group. Thus, the auditory-only condition served as a priming facilitator of the neural processes in the combined condition, more so than the motor-only condition. Results suggest that even brief periods of rhythmic training of the auditory system lead to neural efficiency, facilitating the motor system during the process of entrainment.
These findings have implications for interventions using rhythmic auditory stimulation. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. A quantitative analysis of spectral mechanisms involved in auditory detection of coloration by a single wall reflection.

    PubMed

    Buchholz, Jörg M

    2011-07-01

    Coloration detection thresholds (CDTs) were measured for a single reflection as a function of spectral content and reflection delay for diotic stimulus presentation. The direct sound was a 320-ms long burst of bandpass-filtered noise with varying lower and upper cut-off frequencies. The resulting threshold data revealed that: (1) sensitivity decreases with decreasing bandwidth and increasing reflection delay and (2) high-frequency components contribute less to detection than low-frequency components. The auditory processes that may be involved in coloration detection (CD) are discussed in terms of a spectrum-based auditory model, which is conceptually similar to the pattern-transformation model of pitch (Wightman, 1973). Hence, the model derives an auto-correlation function of the input stimulus by applying a frequency analysis to an auditory representation of the power spectrum. It was found that, to successfully describe the quantitative behavior of the CDT data, three important mechanisms need to be included: (1) auditory bandpass filters with a narrower bandwidth than classic Gammatone filters (an increase in spectral resolution here linked to cochlear suppression), (2) a spectral contrast enhancement process that reflects neural inhibition mechanisms, and (3) integration of information across auditory frequency bands. Copyright © 2011 Elsevier B.V. All rights reserved.
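
The core idea, deriving an autocorrelation function from a frequency analysis of the power spectrum, follows from the Wiener-Khinchin relation. A hedged numpy sketch (the noise burst, reflection gain of 0.8, and 4-ms delay are illustrative, not the study's stimulus values): a single reflection imposes comb-filter ripple on the power spectrum, which appears as a peak in the autocorrelation at the reflection delay.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 16000
direct = rng.standard_normal(fs // 2)      # 500-ms noise burst (direct sound)
delay = int(0.004 * fs)                    # single wall reflection after 4 ms
sig = direct.copy()
sig[delay:] += 0.8 * direct[:-delay]       # direct sound plus one reflection

# Wiener-Khinchin: the autocorrelation is the inverse FFT of the power spectrum
power = np.abs(np.fft.rfft(sig)) ** 2
acf = np.fft.irfft(power)
acf /= acf[0]                              # normalize so acf[0] == 1

# the comb-filter ripple produces an ACF peak exactly at the reflection delay
peak_lag = 1 + np.argmax(acf[1:fs // 100])
```

The model in the paper applies this analysis to an auditory (filtered, compressed) representation of the spectrum rather than to the raw power spectrum used here.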

  19. The effect of superior auditory skills on vocal accuracy

    NASA Astrophysics Data System (ADS)

    Amir, Ofer; Amir, Noam; Kishon-Rabin, Liat

    2003-02-01

    The relationship between auditory perception and vocal production has been typically investigated by evaluating the effect of either altered or degraded auditory feedback on speech production in either normal hearing or hearing-impaired individuals. Our goal in the present study was to examine this relationship in individuals with superior auditory abilities. Thirteen professional musicians and thirteen nonmusicians, with no vocal or singing training, participated in this study. For vocal production accuracy, subjects were presented with three tones. They were asked to reproduce the pitch using the vowel /a/. This procedure was repeated three times. The fundamental frequency of each production was measured using an autocorrelation pitch detection algorithm designed for this study. The musicians' superior auditory abilities (compared to the nonmusicians) were established in a frequency discrimination task reported elsewhere. Results indicate that (a) musicians had better vocal production accuracy than nonmusicians (production errors of half a semitone compared to 1.3 semitones, respectively); (b) frequency discrimination thresholds explain 43% of the variance of the production data, and (c) all subjects with superior frequency discrimination thresholds showed accurate vocal production; the reverse relationship, however, does not hold true. In this study, we provide empirical evidence for the importance of auditory feedback to vocal production in listeners with superior auditory skills.
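
An autocorrelation pitch detector of the kind described can be sketched in a few lines of numpy (the sampling rate, F0 search range, and pure-tone test signal are illustrative; the study's purpose-built algorithm is not reproduced here):

```python
import numpy as np

def detect_f0(signal, fs, fmin=80.0, fmax=500.0):
    """Estimate F0 from the autocorrelation peak within a plausible lag range."""
    acf = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)   # lags for fmax..fmin
    lag = lo + np.argmax(acf[lo:hi])
    return fs / lag

fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 220.0 * t)   # stand-in for a produced /a/ at 220 Hz
f0 = detect_f0(tone, fs)
```

Resolution is limited to integer lags; parabolic interpolation around the peak is a common refinement.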

  20. Selective attention reduces physiological noise in the external ear canals of humans. I: Auditory attention

    PubMed Central

    Walsh, Kyle P.; Pasanen, Edward G.; McFadden, Dennis

    2014-01-01

    In this study, a nonlinear version of the stimulus-frequency OAE (SFOAE), called the nSFOAE, was used to measure cochlear responses from human subjects while they simultaneously performed behavioral tasks requiring, or not requiring, selective auditory attention. Appended to each stimulus presentation, and included in the calculation of each nSFOAE response, was a 30-ms silent period that was used to estimate the level of the inherent physiological noise in the ear canals of our subjects during each behavioral condition. Physiological-noise magnitudes were higher (noisier) for all subjects in the inattention task, and lower (quieter) in the selective auditory-attention tasks. These noise measures initially were made at the frequency of our nSFOAE probe tone (4.0 kHz), but the same attention effects also were observed across a wide range of frequencies. We attribute the observed differences in physiological-noise magnitudes between the inattention and attention conditions to different levels of efferent activation associated with the differing attentional demands of the behavioral tasks. One hypothesis is that when the attentional demand is relatively great, efferent activation is relatively high, and a decrease in the gain of the cochlear amplifier leads to lower-amplitude cochlear activity, and thus a smaller measure of noise from the ear. PMID:24732069
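
The noise estimate from an appended silent window reduces to an RMS level computation. A minimal sketch (the 30-ms window length matches the abstract; the sampling rate, reference level, and toy recording are assumptions):

```python
import numpy as np

def noise_level_db(recording, fs, silence_ms=30.0, ref=1.0):
    """RMS level (dB re `ref`) over the trailing silent window of a recording."""
    n = int(fs * silence_ms / 1000.0)
    window = recording[-n:]                 # the appended silent period
    rms = np.sqrt(np.mean(window ** 2))
    return 20.0 * np.log10(rms / ref)

rng = np.random.default_rng(1)
fs = 32000
probe = np.sin(2 * np.pi * 4000.0 * np.arange(fs) / fs)   # 4-kHz probe response
silence = 0.01 * rng.standard_normal(int(0.03 * fs))      # physiological noise
level = noise_level_db(np.concatenate([probe, silence]), fs)
```

Comparing this level across attention conditions is the contrast the study reports.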

  1. Local and Global Spatial Organization of Interaural Level Difference and Frequency Preferences in Auditory Cortex

    PubMed Central

    Panniello, Mariangela; King, Andrew J; Dahmen, Johannes C; Walker, Kerry M M

    2018-01-01

    Abstract Despite decades of microelectrode recordings, fundamental questions remain about how auditory cortex represents sound-source location. Here, we used in vivo 2-photon calcium imaging to measure the sensitivity of layer II/III neurons in mouse primary auditory cortex (A1) to interaural level differences (ILDs), the principal spatial cue in this species. Although most ILD-sensitive neurons preferred ILDs favoring the contralateral ear, neurons with either midline or ipsilateral preferences were also present. An opponent-channel decoder accurately classified ILDs using the difference in responses between populations of neurons that preferred contralateral-ear-greater and ipsilateral-ear-greater stimuli. We also examined the spatial organization of binaural tuning properties across the imaged neurons with unprecedented resolution. Neurons driven exclusively by contralateral ear stimuli or by binaural stimulation occasionally formed local clusters, but their binaural categories and ILD preferences were not spatially organized on a more global scale. In contrast, the sound frequency preferences of most neurons within local cortical regions fell within a restricted frequency range, and a tonotopic gradient was observed across the cortical surface of individual mice. These results indicate that the representation of ILDs in mouse A1 is comparable to that of most other mammalian species, and appears to lack systematic or consistent spatial order. PMID:29136122
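
The opponent-channel decoder described compares pooled activity of the two ILD-preference populations. A toy sketch of that decision rule (the firing rates and population sizes are invented for illustration):

```python
import numpy as np

def opponent_decode(contra_pop, ipsi_pop):
    """Classify stimulus side from the difference of the channels' summed activity."""
    diff = contra_pop.sum() - ipsi_pop.sum()
    return "contralateral" if diff > 0 else "ipsilateral"

rng = np.random.default_rng(2)
# toy spike counts for a stimulus whose ILD favors the contralateral ear
contra_pop = rng.poisson(20, size=50)   # contra-preferring neurons respond strongly
ipsi_pop = rng.poisson(5, size=50)      # ipsi-preferring neurons respond weakly
side = opponent_decode(contra_pop, ipsi_pop)
```

The published decoder classifies graded ILD values rather than just side, but the opponent subtraction is the shared core.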

  2. Incorporating Midbrain Adaptation to Mean Sound Level Improves Models of Auditory Cortical Processing

    PubMed Central

    Schoppe, Oliver; King, Andrew J.; Schnupp, Jan W.H.; Harper, Nicol S.

    2016-01-01

    Adaptation to stimulus statistics, such as the mean level and contrast of recently heard sounds, has been demonstrated at various levels of the auditory pathway. It allows the nervous system to operate over the wide range of intensities and contrasts found in the natural world. Yet current standard models of the response properties of auditory neurons do not incorporate such adaptation. Here we present a model of neural responses in the ferret auditory cortex (the IC Adaptation model), which takes into account adaptation to mean sound level at a lower level of processing: the inferior colliculus (IC). The model performs high-pass filtering with frequency-dependent time constants on the sound spectrogram, followed by half-wave rectification, and passes the output to a standard linear–nonlinear (LN) model. We find that the IC Adaptation model consistently predicts cortical responses better than the standard LN model for a range of synthetic and natural stimuli. The IC Adaptation model introduces no extra free parameters, so it improves predictions without sacrificing parsimony. Furthermore, the time constants of adaptation in the IC appear to be matched to the statistics of natural sounds, suggesting that neurons in the auditory midbrain predict the mean level of future sounds and adapt their responses appropriately. SIGNIFICANCE STATEMENT An ability to accurately predict how sensory neurons respond to novel stimuli is critical if we are to fully characterize their response properties. Attempts to model these responses have had a distinguished history, but it has proven difficult to improve their predictive power significantly beyond that of simple, mostly linear receptive field models. Here we show that auditory cortex receptive field models benefit from a nonlinear preprocessing stage that replicates known adaptation properties of the auditory midbrain. 
This improves their predictive power across a wide range of stimuli but keeps model complexity low as it introduces no new free parameters. Incorporating the adaptive coding properties of neurons will likely improve receptive field models in other sensory modalities too. PMID:26758822
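
The preprocessing stage described, per-channel high-pass filtering with frequency-dependent time constants followed by half-wave rectification, can be sketched as subtraction of a leaky running mean. The time constants, frame rate, and toy spectrogram below are illustrative, not the fitted model's values:

```python
import numpy as np

def ic_adaptation(spec, taus, dt=0.005):
    """High-pass each spectrogram channel by subtracting a leaky running mean
    with its own time constant, then half-wave rectify (simplified sketch)."""
    out = np.zeros_like(spec)
    for ch, tau in enumerate(taus):
        alpha = dt / tau
        mean = 0.0
        for t in range(spec.shape[1]):
            mean += alpha * (spec[ch, t] - mean)       # running mean sound level
            out[ch, t] = max(spec[ch, t] - mean, 0.0)  # half-wave rectification
    return out

# toy 2-channel "spectrogram": a sustained level step adapts away over time,
# faster in the channel given the shorter time constant
spec = np.ones((2, 200))
adapted = ic_adaptation(spec, taus=[0.1, 0.4])
```

The output would then feed the standard linear-nonlinear model in place of the raw spectrogram.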

  3. Swept-sine noise-induced damage as a hearing loss model for preclinical assays

    PubMed Central

    Sanz, Lorena; Murillo-Cuesta, Silvia; Cobo, Pedro; Cediel-Algovia, Rafael; Contreras, Julio; Rivera, Teresa; Varela-Nieto, Isabel; Avendaño, Carlos

    2015-01-01

    Mouse models are key tools for studying cochlear alterations in noise-induced hearing loss (NIHL) and for evaluating new therapies. Stimuli used to induce deafness in mice are usually white and octave band noises that include very low frequencies, considering the large mouse auditory range. We designed different sound stimuli, enriched in frequencies up to 20 kHz (“violet” noises) to examine their impact on hearing thresholds and cochlear cytoarchitecture after short exposure. In addition, we developed a cytocochleogram to quantitatively assess the ensuing structural degeneration and its functional correlation. Finally, we used this mouse model and cochleogram procedure to evaluate the potential therapeutic effect of transforming growth factor β1 (TGF-β1) inhibitors P17 and P144 on NIHL. CBA mice were exposed to violet swept-sine noise (VS) with different frequency ranges (2–20 or 9–13 kHz) and levels (105 or 120 dB SPL) for 30 min. Mice were evaluated by auditory brainstem response (ABR) and otoacoustic emission tests prior to and 2, 14 and 28 days after noise exposure. Cochlear pathology was assessed with gross histology; hair cell number was estimated by a stereological counting method. Our results indicate that functional and morphological changes induced by VS depend on the sound level and frequency composition. Partial hearing recovery followed the exposure to 105 dB SPL, whereas permanent cochlear damage resulted from the exposure to 120 dB SPL. Exposure to 9–13 kHz noise caused an auditory threshold shift (TS) in those frequencies that correlated with hair cell loss in the corresponding areas of the cochlea that were spotted on the cytocochleogram. In summary, we present mouse models of NIHL, which depending on the sound properties of the noise, cause different degrees of cochlear damage, and could therefore be used to study molecules which are potential players in hearing loss protection and repair. PMID:25762930
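
A swept-sine stimulus confined to a band like the study's 9-13 kHz condition can be sketched with a standard linear sweep; the sweep law, 1-s duration, and sampling rate here are generic assumptions rather than the exact stimulus design:

```python
import numpy as np

def swept_sine(f0, f1, dur, fs):
    """Linear swept sine: instantaneous frequency f0 + (f1 - f0) * t / dur."""
    t = np.arange(int(dur * fs)) / fs
    phase = 2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * dur))
    return np.sin(phase)

fs = 96000
sweep = swept_sine(9000.0, 13000.0, dur=1.0, fs=fs)   # 9-13 kHz band
```

Repeating or looping such sweeps for the 30-min exposure, at a calibrated SPL, would approximate the exposure paradigm.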

  4. Encoding of frequency-modulation (FM) rates in human auditory cortex.

    PubMed

    Okamoto, Hidehiko; Kakigi, Ryusuke

    2015-12-14

    Frequency-modulated sounds play an important role in our daily social life. However, it currently remains unclear whether frequency modulation rates affect neural activity in the human auditory cortex. In the present study, using magnetoencephalography, we investigated the auditory evoked N1m and sustained field responses elicited by temporally repeated and superimposed frequency-modulated sweeps that were matched in the spectral domain, but differed in frequency modulation rates (1, 4, 16, and 64 octaves per sec). The results obtained demonstrated that the higher rate frequency-modulated sweeps elicited the smaller N1m and the larger sustained field responses. Frequency modulation rate had a significant impact on the human brain responses, thereby providing a key for disentangling a series of natural frequency-modulated sounds such as speech and music.

  5. Significance of auditory and kinesthetic feedback to singers' pitch control.

    PubMed

    Mürbe, Dirk; Pabst, Friedemann; Hofmann, Gert; Sundberg, Johan

    2002-03-01

    An accurate control of fundamental frequency (F0) is required from singers. This control relies on auditory and kinesthetic feedback. However, a loud accompaniment may mask the auditory feedback, leaving the singers to rely on kinesthetic feedback. The object of the present study was to estimate the significance of auditory and kinesthetic feedback to pitch control in 28 students beginning a professional solo singing education. The singers sang an ascending and descending triad pattern covering their entire pitch range with and without masking noise in legato and staccato and in a slow and a fast tempo. F0 was measured by means of a computer program. The interval sizes between adjacent tones were determined and their departures from equally tempered tuning were calculated. The deviations from this tuning were used as a measure of the accuracy of intonation. Statistical analysis showed a significant effect of masking that amounted to a mean impairment of pitch accuracy by 14 cents across all subjects. Furthermore, significant effects were found of tempo as well as of the staccato/legato conditions. The results indicate that auditory feedback contributes significantly to singers' control of pitch.
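
Departure from equal-tempered tuning, the intonation-accuracy measure used here, is computed in cents (1200 times the base-2 logarithm of the frequency ratio). A small sketch (the frequency pair is illustrative):

```python
import numpy as np

def interval_cents(f_low, f_high):
    """Interval size in cents: 1200 * log2(frequency ratio)."""
    return 1200.0 * np.log2(f_high / f_low)

def et_deviation(f_low, f_high):
    """Departure from the nearest equal-tempered interval (100-cent steps)."""
    cents = interval_cents(f_low, f_high)
    return cents - 100.0 * round(cents / 100.0)

# a slightly sharp major third sung above A3 (illustrative F0 values)
dev = et_deviation(220.0, 280.0)
```

Averaging the absolute value of such deviations over all adjacent-tone intervals gives an intonation-accuracy score of the kind the study compares across masking conditions.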

  6. Time-Varying Vocal Folds Vibration Detection Using a 24 GHz Portable Auditory Radar

    PubMed Central

    Hong, Hong; Zhao, Heng; Peng, Zhengyu; Li, Hui; Gu, Chen; Li, Changzhi; Zhu, Xiaohua

    2016-01-01

    Time-varying vocal folds vibration information is of crucial importance in speech processing, and the speech signals acquired by traditional devices are easily smeared by high background noise and voice interference. In this paper, we present a non-acoustic way to capture the human vocal folds vibration using a 24-GHz portable auditory radar. Since the vocal folds vibration only reaches several millimeters, the high operating frequency and the 4 × 4 array antennas are applied to achieve the high sensitivity. A Variational Mode Decomposition (VMD) based algorithm is proposed to first decompose the radar-detected auditory signal into a sequence of intrinsic modes and then extract the time-varying vocal folds vibration frequency from the corresponding mode. Feasibility demonstration, evaluation, and comparison are conducted with tonal and non-tonal languages, and the low relative errors show a high consistency between the radar-detected auditory time-varying vocal folds vibration and acoustic fundamental frequency, except that the auditory radar significantly improves the frequency-resolving power. PMID:27483261
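
VMD itself is a specialized optimization and is not sketched here; as a simplified stand-in for the second step, extracting a time-varying vibration frequency from one narrowband mode can be illustrated with the analytic signal (a numpy Hilbert-transform construction; the chirping toy mode is invented):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the frequency domain (numpy Hilbert construction)."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0   # double positive frequencies, zero negative ones
    h[n // 2] = 1.0
    return np.fft.ifft(spectrum * h)

def instantaneous_frequency(mode, fs):
    """Derivative of the unwrapped analytic phase gives frequency in Hz."""
    phase = np.unwrap(np.angle(analytic_signal(mode)))
    return np.diff(phase) * fs / (2 * np.pi)

# toy narrowband "mode": fundamental gliding from 100 to 200 Hz over one second
fs = 4000
t = np.arange(fs) / fs
mode = np.sin(2 * np.pi * (100.0 * t + 50.0 * t**2))
inst_f = instantaneous_frequency(mode, fs)
```

Estimates near the signal edges are unreliable with this construction; interior samples track the glide closely.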

  8. Cortical contributions to the auditory frequency-following response revealed by MEG

    PubMed Central

    Coffey, Emily B. J.; Herholz, Sibylle C.; Chepesiuk, Alexander M. P.; Baillet, Sylvain; Zatorre, Robert J.

    2016-01-01

    The auditory frequency-following response (FFR) to complex periodic sounds is used to study the subcortical auditory system, and has been proposed as a biomarker for disorders that feature abnormal sound processing. Despite its value in fundamental and clinical research, the neural origins of the FFR are unclear. Using magnetoencephalography, we observe a strong, right-asymmetric contribution to the FFR from the human auditory cortex at the fundamental frequency of the stimulus, in addition to signal from cochlear nucleus, inferior colliculus and medial geniculate. This finding is highly relevant for our understanding of plasticity and pathology in the auditory system, as well as higher-level cognition such as speech and music processing. It suggests that previous interpretations of the FFR may need re-examination using methods that allow for source separation. PMID:27009409

  9. Binaural auditory beats affect long-term memory.

    PubMed

    Garcia-Argibay, Miguel; Santed, Miguel A; Reales, José M

    2017-12-08

    The presentation of two pure tones to each ear separately with a slight difference in their frequency results in the perception of a single tone that fluctuates in amplitude at a frequency that equals the difference of interaural frequencies. This perceptual phenomenon is known as binaural auditory beats, and it is thought to entrain electrocortical activity and enhance cognition functions such as attention and memory. The aim of this study was to determine the effect of binaural auditory beats on long-term memory. Participants (n = 32) were kept blind to the goal of the study and performed both the free recall and recognition tasks after being exposed to binaural auditory beats, either in the beta (20 Hz) or theta (5 Hz) frequency bands and white noise as a control condition. Exposure to beta-frequency binaural beats yielded a greater proportion of correctly recalled words and a higher sensitivity index d' in recognition tasks, while theta-frequency binaural-beat presentation lessened the number of correctly remembered words and the sensitivity index. On the other hand, we could not find differences in the conditional probability for recall given recognition between beta and theta frequencies and white noise, suggesting that the observed changes in recognition were due to the recollection component. These findings indicate that the presentation of binaural auditory beats can affect long-term memory both positively and negatively, depending on the frequency used.
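
A binaural-beat stimulus is just two pure tones, one per ear, separated by the beat frequency. A minimal sketch of a 20-Hz (beta-band) condition like the one in the study (the carrier frequency and duration are assumptions):

```python
import numpy as np

def binaural_beat(f_carrier=240.0, beat=20.0, dur=1.0, fs=44100):
    """Stereo stimulus: pure tones differing by `beat` Hz, one per ear."""
    t = np.arange(int(dur * fs)) / fs
    left = np.sin(2 * np.pi * f_carrier * t)
    right = np.sin(2 * np.pi * (f_carrier + beat) * t)
    return np.stack([left, right])   # shape (2, n): left/right channels

stereo = binaural_beat()   # beta-band beat; beat=5.0 would give the theta condition
```

The beat is perceived only when the two tones are delivered dichotically; summing the channels at the ear (or in a mono file) instead produces a physical amplitude beat.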

  10. A Model for Amplification of Hair-Bundle Motion by Cyclical Binding of Ca2+ to Mechanoelectrical-Transduction Channels

    NASA Astrophysics Data System (ADS)

    Choe, Yong; Magnasco, Marcelo O.; Hudspeth, A. J.

    1998-12-01

    Amplification of auditory stimuli by hair cells augments the sensitivity of the vertebrate inner ear. Cell-body contractions of outer hair cells are thought to mediate amplification in the mammalian cochlea. In vertebrates that lack these cells, and perhaps in mammals as well, active movements of hair bundles may underlie amplification. We have evaluated a mathematical model in which amplification stems from the activity of mechanoelectrical-transduction channels. The intracellular binding of Ca2+ to channels is posited to promote their closure, which increases the tension in gating springs and exerts a negative force on the hair bundle. By enhancing bundle motion, this force partially compensates for viscous damping by cochlear fluids. Linear stability analysis of a six-state kinetic model reveals Hopf bifurcations for parameter values in the physiological range. These bifurcations signal conditions under which the system's behavior changes from a damped oscillatory response to spontaneous limit-cycle oscillation. By varying the number of stereocilia in a bundle and the rate constant for Ca2+ binding, we calculate bifurcation frequencies spanning the observed range of auditory sensitivity for a representative receptor organ, the chicken's cochlea. Simulations using prebifurcation parameter values demonstrate frequency-selective amplification with a striking compressive nonlinearity. Because transduction channels occur universally in hair cells, this active-channel model describes a mechanism of auditory amplification potentially applicable across species and hair-cell types.

  11. A possible role for a paralemniscal auditory pathway in the coding of slow temporal information

    PubMed Central

    Abrams, Daniel A.; Nicol, Trent; Zecker, Steven; Kraus, Nina

    2010-01-01

    Low frequency temporal information present in speech is critical for normal perception; however, the neural mechanism underlying the differentiation of slow rates in acoustic signals is not known. Data from the rat trigeminal system suggest that the paralemniscal pathway may be specifically tuned to code low-frequency temporal information. We tested whether this phenomenon occurs in the auditory system by measuring the representation of temporal rate in lemniscal and paralemniscal auditory thalamus and cortex in guinea pig. Similar to the trigeminal system, responses measured in auditory thalamus indicate that slow rates are differentially represented in a paralemniscal pathway. In cortex, both lemniscal and paralemniscal neurons indicated sensitivity to slow rates. We speculate that a paralemniscal pathway in the auditory system may be specifically tuned to code low frequency temporal information present in acoustic signals. These data suggest that somatosensory and auditory modalities have parallel sub-cortical pathways that separately process slow rates and the spatial representation of the sensory periphery. PMID:21094680

  12. Mechanisms of spectral and temporal integration in the mustached bat inferior colliculus

    PubMed Central

    Wenstrup, Jeffrey James; Nataraj, Kiran; Sanchez, Jason Tait

    2012-01-01

    This review describes mechanisms and circuitry underlying combination-sensitive response properties in the auditory brainstem and midbrain. Combination-sensitive neurons, performing a type of auditory spectro-temporal integration, respond to specific, properly timed combinations of spectral elements in vocal signals and other acoustic stimuli. While these neurons are known to occur in the auditory forebrain of many vertebrate species, the work described here establishes their origin in the auditory brainstem and midbrain. Focusing on the mustached bat, we review several major findings: (1) Combination-sensitive responses involve facilitatory interactions, inhibitory interactions, or both when activated by distinct spectral elements in complex sounds. (2) Combination-sensitive responses are created in distinct stages: inhibition arises mainly in lateral lemniscal nuclei of the auditory brainstem, while facilitation arises in the inferior colliculus (IC) of the midbrain. (3) Spectral integration underlying combination-sensitive responses requires a low-frequency input tuned well below a neuron's characteristic frequency (ChF). Low-ChF neurons in the auditory brainstem project to high-ChF regions in brainstem or IC to create combination sensitivity. (4) At their sites of origin, both facilitatory and inhibitory combination-sensitive interactions depend on glycinergic inputs and are eliminated by glycine receptor blockade. Surprisingly, facilitatory interactions in IC depend almost exclusively on glycinergic inputs and are largely independent of glutamatergic and GABAergic inputs. (5) The medial nucleus of the trapezoid body (MNTB), the lateral lemniscal nuclei, and the IC play critical roles in creating combination-sensitive responses. We propose that these mechanisms, based on work in the mustached bat, apply to a broad range of mammals and other vertebrates that depend on temporally sensitive integration of information across the audible spectrum. PMID:23109917

  13. Altered brainstem auditory evoked potentials in a rat central sensitization model are similar to those in migraine

    PubMed Central

    Arakaki, Xianghong; Galbraith, Gary; Pikov, Victor; Fonteh, Alfred N.; Harrington, Michael G.

    2014-01-01

    Migraine symptoms often include auditory discomfort. Nitroglycerin (NTG)-triggered central sensitization (CS) provides a rodent model of migraine, but auditory brainstem pathways have not yet been studied in this model. Our objective was to examine brainstem auditory evoked potentials (BAEPs) in rat CS as a measure of possible auditory abnormalities. We used four subdermal electrodes to record horizontal (h) and vertical (v) dipole channel BAEPs before and after injection of NTG or saline. We measured the peak latencies (PLs), interpeak latencies (IPLs), and amplitudes for detectable waveforms evoked by 8, 16, or 32 kHz auditory stimulation. At 8 kHz stimulation, vertical channel positive PLs of waves 4, 5, and 6 (vP4, vP5, and vP6), and related IPLs from earlier negative or positive peaks (vN1-vP4, vN1-vP5, vN1-vP6; vP3-vP4, vP3-vP6) increased significantly 2 hours after NTG injection compared to the saline group. However, BAEP peak amplitudes at all frequencies, PLs and IPLs from the horizontal channel at all frequencies, and the vertical channel stimulated at 16 and 32 kHz showed no significant or consistent change. For the first time in the rat CS model, we show that BAEP PLs and IPLs ranging from the putative bilateral medial superior olivary nuclei (P4) to more rostral structures such as the medial geniculate body (P6) were prolonged 2 hours after NTG administration. These BAEP alterations could reflect changes in neurotransmitters and/or hypoperfusion in the midbrain. The similarity of our results to previous human studies further validates the rodent CS model for future migraine research. PMID:24680742

  14. Is Auditory Discrimination Mature by Middle Childhood? A Study Using Time-Frequency Analysis of Mismatch Responses from 7 Years to Adulthood

    ERIC Educational Resources Information Center

    Bishop, Dorothy V. M.; Hardiman, Mervyn J.; Barry, Johanna G.

    2011-01-01

    Behavioural and electrophysiological studies give differing impressions of when auditory discrimination is mature. Ability to discriminate frequency and speech contrasts reaches adult levels only around 12 years of age, yet an electrophysiological index of auditory discrimination, the mismatch negativity (MMN), is reported to be as large in…

  15. Auditory phase and frequency discrimination: a comparison of nine procedures.

    PubMed

    Creelman, C D; Macmillan, N A

    1979-02-01

    Two auditory discrimination tasks were thoroughly investigated: discrimination of frequency differences from a sinusoidal signal of 200 Hz and discrimination of differences in relative phase of mixed sinusoids of 200 Hz and 400 Hz. For each task psychometric functions were constructed for three observers, using nine different psychophysical measurement procedures. These procedures included yes-no, two-interval forced-choice, and various fixed- and variable-standard designs that investigators have used in recent years. The data showed wide ranges of apparent sensitivity. For frequency discrimination, models derived from signal detection theory for each psychophysical procedure seem to account for the performance differences. For phase discrimination the models do not account for the data. We conclude that for some discriminative continua the assumptions of signal detection theory are appropriate, and underlying sensitivity may be derived from raw data by appropriate transformations. For other continua the models of signal detection theory are probably inappropriate; we speculate that phase might be discriminable only on the basis of comparison or change and suggest some tests of our hypothesis.
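The signal-detection models invoked in this record relate raw percent correct to an underlying sensitivity index, d', via procedure-specific transformations. A minimal stdlib-only sketch of two standard transformations (illustrative, not the authors' code; it assumes Gaussian internal noise and an unbiased observer):

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse standard-normal CDF, z(p)

def dprime_yes_no(hit_rate, false_alarm_rate):
    """Sensitivity index for a yes-no task: d' = z(H) - z(F)."""
    return z(hit_rate) - z(false_alarm_rate)

def dprime_2ifc(prop_correct):
    """Sensitivity index for two-interval forced choice:
    d' = sqrt(2) * z(Pc), assuming an unbiased observer."""
    return 2 ** 0.5 * z(prop_correct)
```

Under these models, one fixed underlying sensitivity predicts different percent-correct values across procedures, which is how apparent-sensitivity differences between designs can be reconciled for continua where the theory holds.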

  16. Fundamental deficits of auditory perception in Wernicke's aphasia.

    PubMed

    Robson, Holly; Grube, Manon; Lambon Ralph, Matthew A; Griffiths, Timothy D; Sage, Karen

    2013-01-01

    This work investigates the nature of the comprehension impairment in Wernicke's aphasia (WA), by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. WA, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated with auditory linguistic comprehension impairments, functional-imaging indicates that these areas may not be specific to speech processing but part of a network for generic auditory analysis. We examined analysis of basic acoustic stimuli in WA participants (n = 10) using auditory stimuli reflective of theories of cortical auditory processing and of speech cues. Auditory spectral, temporal and spectro-temporal analysis was assessed using pure-tone frequency discrimination, frequency modulation (FM) detection and the detection of dynamic modulation (DM) in "moving ripple" stimuli. All tasks used criterion-free, adaptive measures of threshold to ensure reliable results at the individual level. Participants with WA showed normal frequency discrimination but significant impairments in FM and DM detection, relative to age- and hearing-matched controls at the group level (n = 10). At the individual level, there was considerable variation in performance, and thresholds for both FM and DM detection correlated significantly with auditory comprehension abilities in the WA participants. These results demonstrate the co-occurrence of a deficit in fundamental auditory processing of temporal and spectro-temporal non-verbal stimuli in WA, which may have a causal contribution to the auditory language comprehension impairment. Results are discussed in the context of traditional neuropsychology and current models of cortical auditory processing. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. The auditory nerve overlapped waveform (ANOW): A new objective measure of low-frequency hearing

    NASA Astrophysics Data System (ADS)

    Lichtenhan, Jeffery T.; Salt, Alec N.; Guinan, John J.

    2015-12-01

    One of the most pressing problems today in the mechanics of hearing is to understand the mechanical motions in the apical half of the cochlea. Almost all available measurements from the cochlear apex of basilar membrane or other organ-of-Corti transverse motion have been made from ears where the health, or sensitivity, in the apical half of the cochlea was not known. A key step in understanding the mechanics of the cochlear base was to trust mechanical measurements only when objective measures from auditory-nerve compound action potentials (CAPs) showed good preparation sensitivity. However, such traditional objective measures are not adequate monitors of cochlear health in the very low-frequency regions of the apex that are accessible for mechanical measurements. To address this problem, we developed the Auditory Nerve Overlapped Waveform (ANOW) that originates from auditory nerve output in the apex. When responses from the round window to alternating low-frequency tones are averaged, the cochlear microphonic is canceled and phase-locked neural firing interleaves in time (i.e., overlaps). The result is a waveform that oscillates at twice the probe frequency. We have demonstrated that the ANOW originates from auditory nerve fibers in the cochlear apex [8], relates well to single-auditory-nerve-fiber thresholds, and can provide an objective estimate of low-frequency sensitivity [7]. Our new experiments demonstrate that ANOW is a highly sensitive indicator of apical cochlear function. During four different manipulations to the scala media along the cochlear spiral, ANOW amplitude changed when either no, or only small, changes occurred in CAP thresholds. Overall, our results demonstrate that ANOW can be used to monitor cochlear sensitivity of low-frequency regions during experiments that make apical basilar membrane motion measurements.

  18. Oscillatory support for rapid frequency change processing in infants.

    PubMed

    Musacchia, Gabriella; Choudhury, Naseem A; Ortiz-Mantilla, Silvia; Realpe-Bonilla, Teresa; Roesler, Cynthia P; Benasich, April A

    2013-11-01

    Rapid auditory processing and auditory change detection abilities are crucial aspects of speech and language development, particularly in the first year of life. Animal models and adult studies suggest that oscillatory synchrony, and in particular low-frequency oscillations play key roles in this process. We hypothesize that infant perception of rapid pitch and timing changes is mediated, at least in part, by oscillatory mechanisms. Using event-related potentials (ERPs), source localization and time-frequency analysis of event-related oscillations (EROs), we examined the neural substrates of rapid auditory processing in 4-month-olds. During a standard oddball paradigm, infants listened to tone pairs with invariant standard (STD, 800-800 Hz) and variant deviant (DEV, 800-1200 Hz) pitch. STD and DEV tone pairs were first presented in a block with a short inter-stimulus interval (ISI) (Rapid Rate: 70 ms ISI), followed by a block of stimuli with a longer ISI (Control Rate: 300 ms ISI). Results showed greater ERP peak amplitude in response to the DEV tone in both conditions and later and larger peaks during Rapid Rate presentation, compared to the Control condition. Sources of neural activity, localized to right and left auditory regions, showed larger and faster activation in the right hemisphere for both rate conditions. Time-frequency analysis of the source activity revealed clusters of theta band enhancement to the DEV tone in right auditory cortex for both conditions. Left auditory activity was enhanced only during Rapid Rate presentation. These data suggest that local low-frequency oscillatory synchrony underlies rapid processing and can robustly index auditory perception in young infants. Furthermore, left hemisphere recruitment during rapid frequency change discrimination suggests a difference in the spectral and temporal resolution of right and left hemispheres at a very young age. © 2013 Elsevier Ltd. All rights reserved.

  19. Neural bases of rhythmic entrainment in humans: critical transformation between cortical and lower-level representations of auditory rhythm.

    PubMed

    Nozaradan, Sylvie; Schönwiesner, Marc; Keller, Peter E; Lenc, Tomas; Lehmann, Alexandre

    2018-02-01

    The spontaneous ability to entrain to meter periodicities is central to music perception and production across cultures. There is increasing evidence that this ability involves selective neural responses to meter-related frequencies. This phenomenon has been observed in the human auditory cortex, yet it could be the product of evolutionarily older lower-level properties of brainstem auditory neurons, as suggested by recent recordings from rodent midbrain. We addressed this question by taking advantage of a new method to simultaneously record human EEG activity originating from cortical and lower-level sources, in the form of slow (< 20 Hz) and fast (> 150 Hz) responses to auditory rhythms. Cortical responses showed increased amplitudes at meter-related frequencies compared to meter-unrelated frequencies, regardless of the prominence of the meter-related frequencies in the modulation spectrum of the rhythmic inputs. In contrast, frequency-following responses showed increased amplitudes at meter-related frequencies only in rhythms with prominent meter-related frequencies in the input but not for a more complex rhythm requiring more endogenous generation of the meter. This interaction with rhythm complexity suggests that the selective enhancement of meter-related frequencies does not fully rely on subcortical auditory properties, but is critically shaped at the cortical level, possibly through functional connections between the auditory cortex and other, movement-related, brain structures. This process of temporal selection would thus enable endogenous and motor entrainment to emerge with substantial flexibility and invariance with respect to the rhythmic input in humans in contrast with non-human animals. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
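Comparing response amplitude at meter-related versus meter-unrelated frequencies, as in this frequency-tagging approach, reduces to reading single bins of a discrete Fourier transform of the recorded signal. A minimal stdlib-only sketch (function name and parameters are illustrative, not from the paper):

```python
import math

def amplitude_at(signal, fs, freq):
    """Single-bin DFT amplitude of `signal` (sampled at `fs` Hz) at `freq` Hz.

    Returns the peak amplitude of a sinusoidal component at `freq`,
    exact when the window spans an integer number of cycles.
    """
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    im = sum(-x * math.sin(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    return 2.0 * math.hypot(re, im) / n
```

Selective enhancement would then show up as `amplitude_at` values at meter-related frequencies exceeding those at meter-unrelated frequencies, beyond what the stimulus modulation spectrum predicts.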

  20. Potentials evoked by chirp-modulated tones: a new technique to evaluate oscillatory activity in the auditory pathway.

    PubMed

    Artieda, J; Valencia, M; Alegre, M; Olaziregi, O; Urrestarazu, E; Iriarte, J

    2004-03-01

    Steady-state potentials are oscillatory responses generated by a rhythmic stimulation of a sensory pathway. The response, whose frequency follows the frequency of stimulation, is maximal in amplitude at a stimulus rate of 40 Hz for auditory stimuli. The exact cause of these maximal responses is not known, although some authors have suggested that they might be related to the 'working frequency' of the auditory cortex. Testing of the responses to different frequencies of stimulation may be lengthy if a single frequency is studied at a time. Our aim was to develop a fast technique to explore the oscillatory response to auditory stimuli, using a tone modulated in amplitude by a sinusoid whose frequency increases linearly ('chirp') from 1 to 120 Hz. Time-frequency transforms were used for the analysis of the evoked responses in 10 subjects. Also, we analyzed whether the peaks in these responses were due to increases of amplitude or to phase-locking phenomena, using single-sweep time-frequency transforms and inter-trial phase analysis. The pattern observed in the time-frequency transform of the chirp-evoked potential was very similar in all subjects: a diagonal band of energy was observed, corresponding to the frequency of modulation at each time instant. Two components were present in the band, one around 45 Hz (30-60 Hz) and a smaller one between 80 and 120 Hz. Inter-trial phase analysis showed that these components were mainly due to phase-locking phenomena. A simultaneous testing of the amplitude-modulation-following oscillatory responses to auditory stimulation is feasible using a tone modulated in amplitude at increasing frequencies. The maximal energies found at stimulation frequencies around 40 Hz are probably due to increased phase-locking of the individual responses.
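The stimulus described in this record, a tone amplitude-modulated by a sinusoid sweeping linearly in frequency, is straightforward to synthesize. A minimal stdlib-only sketch (the carrier frequency, sample rate, and function name are illustrative assumptions, not taken from the paper):

```python
import math

def am_chirp(carrier_hz=1000.0, f0=1.0, f1=120.0, dur=2.0, fs=8000):
    """Tone at `carrier_hz`, amplitude-modulated by a sinusoid whose
    frequency sweeps linearly from f0 to f1 Hz over `dur` seconds."""
    n = int(dur * fs)
    out = []
    for i in range(n):
        t = i / fs
        # instantaneous phase of a linear chirp:
        # 2*pi * (f0*t + (f1 - f0) * t**2 / (2*dur))
        phase = 2 * math.pi * (f0 * t + (f1 - f0) * t * t / (2 * dur))
        mod = 0.5 * (1.0 + math.sin(phase))   # modulator confined to [0, 1]
        out.append(mod * math.sin(2 * math.pi * carrier_hz * t))
    return out
```

Because the modulation frequency is a known linear function of time, a time-frequency transform of the evoked response maps directly onto modulation frequency, which is what produces the diagonal band of energy the authors describe.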

  1. Flying in tune: sexual recognition in mosquitoes.

    PubMed

    Gibson, Gabriella; Russell, Ian

    2006-07-11

    Mosquitoes hear with their antennae, which in most species are sexually dimorphic. Johnston, who discovered the mosquito auditory organ at the base of the antenna 150 years ago, speculated that audition was involved with mating behaviour. Indeed, male mosquitoes are attracted to female flight tones. The male auditory organ has been proposed to act as an acoustic filter for female flight tones, but female auditory behavior is unknown. We show, for the first time, interactive auditory behavior between males and females that leads to sexual recognition. Individual males and females both respond to pure tones by altering wing-beat frequency. Behavioral auditory tuning curves, based on minimum threshold sound levels that elicit a change in wing-beat frequency to pure tones, are sharper than the mechanical tuning of the antennae, with males being more sensitive than females. We flew opposite-sex pairs of tethered Toxorhynchites brevipalpis and found that each mosquito alters its wing-beat frequency in response to the flight tone of the other, so that within seconds their flight-tone frequencies are closely matched, if not completely synchronized. The flight tones of same-sex pairs may converge in frequency but eventually diverge dramatically.

  2. High-order synchronization of hair cell bundles

    NASA Astrophysics Data System (ADS)

    Levy, Michael; Molzon, Adrian; Lee, Jae-Hyun; Kim, Ji-Wook; Cheon, Jinwoo; Bozovic, Dolores

    2016-12-01

    Auditory and vestibular hair cell bundles exhibit active mechanical oscillations at natural frequencies that are typically lower than the detection range of the corresponding end organs. We explore how these noisy nonlinear oscillators mode-lock to frequencies higher than their internal clocks. A nanomagnetic technique is used to stimulate the bundles without an imposed mechanical load. The evoked response shows regimes of high-order mode-locking. Exploring a broad range of stimulus frequencies and intensities, we observe regions of high-order synchronization, analogous to Arnold Tongues in dynamical systems literature. Significant areas of overlap occur between synchronization regimes, with the bundle intermittently flickering between different winding numbers. We demonstrate how an ensemble of these noisy spontaneous oscillators could be entrained to efficiently detect signals significantly above the characteristic frequencies of the individual cells.

  3. High-order synchronization of hair cell bundles

    PubMed Central

    Levy, Michael; Molzon, Adrian; Lee, Jae-Hyun; Kim, Ji-wook; Cheon, Jinwoo; Bozovic, Dolores

    2016-01-01

    Auditory and vestibular hair cell bundles exhibit active mechanical oscillations at natural frequencies that are typically lower than the detection range of the corresponding end organs. We explore how these noisy nonlinear oscillators mode-lock to frequencies higher than their internal clocks. A nanomagnetic technique is used to stimulate the bundles without an imposed mechanical load. The evoked response shows regimes of high-order mode-locking. Exploring a broad range of stimulus frequencies and intensities, we observe regions of high-order synchronization, analogous to Arnold Tongues in dynamical systems literature. Significant areas of overlap occur between synchronization regimes, with the bundle intermittently flickering between different winding numbers. We demonstrate how an ensemble of these noisy spontaneous oscillators could be entrained to efficiently detect signals significantly above the characteristic frequencies of the individual cells. PMID:27974743
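The Arnold-tongue structure described in this record is commonly illustrated with the sine circle map, a textbook model of a periodically forced oscillator: regions of (drive frequency, coupling) parameter space where the winding number locks to a rational p/q are the tongues. This sketch is a standard analogy, not the authors' hair-bundle model:

```python
import math

def winding_number(omega, k, n_transient=500, n_iter=2000):
    """Average rotation per iteration of the sine circle map
    theta_{n+1} = theta_n + omega - (k / (2*pi)) * sin(2*pi*theta_n).

    Inside a p:q Arnold tongue this converges to the rational p/q;
    outside, it varies continuously with omega.
    """
    theta = 0.0
    for _ in range(n_transient):          # discard transient dynamics
        theta += omega - k / (2 * math.pi) * math.sin(2 * math.pi * theta)
    start = theta
    for _ in range(n_iter):               # accumulate rotation
        theta += omega - k / (2 * math.pi) * math.sin(2 * math.pi * theta)
    return (theta - start) / n_iter
```

With zero coupling the winding number simply equals the drive increment, while sufficient coupling pins it to a rational value over a finite band of drive frequencies, the analog of the mode-locked regimes the bundles exhibit.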

  4. Selective Impairment in Frequency Discrimination in a Mouse Model of Tinnitus

    PubMed Central

    Mwilambwe-Tshilobo, Laetitia; Davis, Andrew J. O.; Aizenberg, Mark; Geffen, Maria N.

    2015-01-01

    Tinnitus is an auditory disorder, which affects millions of Americans, including active duty service members and veterans. It is manifested by a phantom sound that is commonly restricted to a specific frequency range. Because tinnitus is associated with hearing deficits, understanding how tinnitus affects hearing perception is important for guiding therapies to improve the quality of life in this vast group of patients. In a rodent model of tinnitus, prolonged exposure to a tone leads to a selective decrease in gap detection in specific frequency bands. However, whether and how hearing acuity is affected for sounds within and outside those frequency bands is not well understood. We induced tinnitus in mice by prolonged exposure to a loud mid-range tone, and behaviorally assayed whether mice exhibited a change in frequency discrimination acuity for tones embedded within the mid-frequency range and high-frequency range at 1, 4, and 8 weeks post-exposure. A subset of tone-exposed mice exhibited tinnitus-like symptoms, as demonstrated by selective deficits in gap detection, which were restricted to the high frequency range. These mice exhibited impaired frequency discrimination both for tones in the mid-frequency range and high-frequency range. The remaining tone exposed mice, which did not demonstrate behavioral evidence of tinnitus, showed temporary deficits in frequency discrimination for tones in the mid-frequency range, while control mice remained unimpaired. Our findings reveal that the high frequency-specific deficits in gap detection, indicative of tinnitus, are associated with impairments in frequency discrimination at the frequency of the presumed tinnitus. PMID:26352864

  5. Temporal properties of responses to sound in the ventral nucleus of the lateral lemniscus.

    PubMed

    Recio-Spinoso, Alberto; Joris, Philip X

    2014-02-01

    Besides the rapid fluctuations in pressure that constitute the "fine structure" of a sound stimulus, slower fluctuations in the sound's envelope represent an important temporal feature. At various stages in the auditory system, neurons exhibit tuning to envelope frequency and have been described as modulation filters. We examine such tuning in the ventral nucleus of the lateral lemniscus (VNLL) of the pentobarbital-anesthetized cat. The VNLL is a large but poorly accessible auditory structure that provides a massive inhibitory input to the inferior colliculus. We test whether envelope filtering effectively applies to the envelope spectrum when multiple envelope components are simultaneously present. We find two broad classes of response with often complementary properties. The firing rate of onset neurons is tuned to a band of modulation frequencies, over which they also synchronize strongly to the envelope waveform. Although most sustained neurons show little firing rate dependence on modulation frequency, some of them are weakly tuned. The latter neurons are usually band-pass or low-pass tuned in synchronization, and a reverse-correlation approach demonstrates that their modulation tuning is preserved to nonperiodic, noisy envelope modulations of a tonal carrier. Modulation tuning to this type of stimulus is weaker for onset neurons. In response to broadband noise, sustained and onset neurons tend to filter out envelope components over a frequency range consistent with their modulation tuning to periodically modulated tones. The results support a role for VNLL in providing temporal reference signals to the auditory midbrain.
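The reverse-correlation approach mentioned in this record, in its simplest form, averages the stimulus segments that immediately precede each spike (a spike-triggered average). An illustrative stdlib-only sketch, not the authors' implementation:

```python
def spike_triggered_average(stimulus, spike_indices, window):
    """Average the `window` stimulus samples preceding each spike.

    `stimulus` is a list of samples; `spike_indices` are sample indices
    at which spikes occurred. Spikes too close to the start are skipped.
    """
    segments = [stimulus[i - window:i] for i in spike_indices if i >= window]
    if not segments:
        return []
    n = len(segments)
    # element-wise mean across segments
    return [sum(col) / n for col in zip(*segments)]
```

Applied to the envelope of a noisy modulated carrier, the resulting average reveals the envelope features that drive firing, from which a modulation tuning estimate can be derived.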

  6. Neural spike-timing patterns vary with sound shape and periodicity in three auditory cortical fields

    PubMed Central

    Lee, Christopher M.; Osman, Ahmad F.; Volgushev, Maxim; Escabí, Monty A.

    2016-01-01

    Mammals perceive a wide range of temporal cues in natural sounds, and the auditory cortex is essential for their detection and discrimination. The rat primary (A1), ventral (VAF), and caudal suprarhinal (cSRAF) auditory cortical fields have separate thalamocortical pathways that may support unique temporal cue sensitivities. To explore this, we record responses of single neurons in the three fields to variations in envelope shape and modulation frequency of periodic noise sequences. Spike rate, relative synchrony, and first-spike latency metrics have previously been used to quantify neural sensitivities to temporal sound cues; however, such metrics do not measure absolute spike timing of sustained responses to sound shape. To address this, in this study we quantify two forms of spike-timing precision, jitter, and reliability. In all three fields, we find that jitter decreases logarithmically with increase in the basis spline (B-spline) cutoff frequency used to shape the sound envelope. In contrast, reliability decreases logarithmically with increase in sound envelope modulation frequency. In A1, jitter and reliability vary independently, whereas in ventral cortical fields, jitter and reliability covary. Jitter time scales increase (A1 < VAF < cSRAF) and modulation frequency upper cutoffs decrease (A1 > VAF > cSRAF) with ventral progression from A1. These results suggest a transition from independent encoding of shape and periodicity sound cues on short time scales in A1 to a joint encoding of these same cues on longer time scales in ventral nonprimary cortices. PMID:26843599
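Trial-to-trial spike-timing precision metrics of the kind quantified in this record can be illustrated with two simple stdlib-only definitions; first-spike jitter and mean pairwise correlation of binned counts are simplifications for intuition, not the authors' exact metrics:

```python
import statistics

def first_spike_jitter(trials):
    """SD across trials of the first spike time (one crude form of jitter).
    `trials` is a list of per-trial spike-time lists (ms)."""
    firsts = [min(t) for t in trials if t]
    return statistics.stdev(firsts)

def binned_reliability(trials, t_max, bin_ms=5.0):
    """Mean pairwise Pearson correlation of binned spike counts across trials."""
    n_bins = int(t_max / bin_ms)
    counts = []
    for spikes in trials:
        c = [0] * n_bins
        for s in spikes:
            b = int(s / bin_ms)
            if b < n_bins:
                c[b] += 1
        counts.append(c)
    corrs = []
    for i in range(len(counts)):
        for j in range(i + 1, len(counts)):
            corrs.append(_pearson(counts[i], counts[j]))
    return sum(corrs) / len(corrs)

def _pearson(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0
```

Low jitter with high reliability corresponds to precisely repeated spike patterns across trials; the two metrics can dissociate, as the record reports for A1 versus ventral fields.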

  7. Probing cochlear tuning and tonotopy in the tiger using otoacoustic emissions.

    PubMed

    Bergevin, Christopher; Walsh, Edward J; McGee, JoAnn; Shera, Christopher A

    2012-08-01

    Otoacoustic emissions (sound emitted from the ear) allow cochlear function to be probed noninvasively. The emissions evoked by pure tones, known as stimulus-frequency emissions (SFOAEs), have been shown to provide reliable estimates of peripheral frequency tuning in a variety of mammalian and non-mammalian species. Here, we apply the same methodology to explore peripheral auditory function in the largest member of the cat family, the tiger (Panthera tigris). We measured SFOAEs in 9 unique ears of 5 anesthetized tigers. The tigers, housed at the Henry Doorly Zoo (Omaha, NE), were of both sexes and ranged in age from 3 to 10 years. SFOAE phase-gradient delays are significantly longer in tigers (by approximately a factor of two above 2 kHz, and even more at lower frequencies) than in domestic cats (Felis catus), a species commonly used in auditory studies. Based on correlations between tuning and delay established in other species, our results imply that cochlear tuning in the tiger is significantly sharper than in domestic cat and appears comparable to that of humans. Furthermore, the SFOAE data indicate that tigers have a larger tonotopic mapping constant (mm/octave) than domestic cats. A larger mapping constant in tiger is consistent both with auditory brainstem response thresholds (that suggest a lower upper frequency limit of hearing for the tiger than domestic cat) and with measurements of basilar-membrane length (about 1.5 times longer in the tiger than domestic cat).

  8. Frontal top-down signals increase coupling of auditory low-frequency oscillations to continuous speech in human listeners.

    PubMed

    Park, Hyojin; Ince, Robin A A; Schyns, Philippe G; Thut, Gregor; Gross, Joachim

    2015-06-15

    Humans show a remarkable ability to understand continuous speech even under adverse listening conditions. This ability critically relies on dynamically updated predictions of incoming sensory information, but exactly how top-down predictions improve speech processing is still unclear. Brain oscillations are a likely mechanism for these top-down predictions [1, 2]. Quasi-rhythmic components in speech are known to entrain low-frequency oscillations in auditory areas [3, 4], and this entrainment increases with intelligibility [5]. We hypothesize that top-down signals from frontal brain areas causally modulate the phase of brain oscillations in auditory cortex. We use magnetoencephalography (MEG) to monitor brain oscillations in 22 participants during continuous speech perception. We characterize prominent spectral components of speech-brain coupling in auditory cortex and use causal connectivity analysis (transfer entropy) to identify the top-down signals driving this coupling more strongly during intelligible speech than during unintelligible speech. We report three main findings. First, frontal and motor cortices significantly modulate the phase of speech-coupled low-frequency oscillations in auditory cortex, and this effect depends on intelligibility of speech. Second, top-down signals are significantly stronger for left auditory cortex than for right auditory cortex. Third, speech-auditory cortex coupling is enhanced as a function of stronger top-down signals. Together, our results suggest that low-frequency brain oscillations play a role in implementing predictive top-down control during continuous speech perception and that top-down control is largely directed at left auditory cortex. This suggests a close relationship between (left-lateralized) speech production areas and the implementation of top-down control in continuous speech perception. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  9. Frontal Top-Down Signals Increase Coupling of Auditory Low-Frequency Oscillations to Continuous Speech in Human Listeners

    PubMed Central

    Park, Hyojin; Ince, Robin A.A.; Schyns, Philippe G.; Thut, Gregor; Gross, Joachim

    2015-01-01

    Humans show a remarkable ability to understand continuous speech even under adverse listening conditions. This ability critically relies on dynamically updated predictions of incoming sensory information, but exactly how top-down predictions improve speech processing is still unclear. Brain oscillations are a likely mechanism for these top-down predictions [1, 2]. Quasi-rhythmic components in speech are known to entrain low-frequency oscillations in auditory areas [3, 4], and this entrainment increases with intelligibility [5]. We hypothesize that top-down signals from frontal brain areas causally modulate the phase of brain oscillations in auditory cortex. We use magnetoencephalography (MEG) to monitor brain oscillations in 22 participants during continuous speech perception. We characterize prominent spectral components of speech-brain coupling in auditory cortex and use causal connectivity analysis (transfer entropy) to identify the top-down signals driving this coupling more strongly during intelligible speech than during unintelligible speech. We report three main findings. First, frontal and motor cortices significantly modulate the phase of speech-coupled low-frequency oscillations in auditory cortex, and this effect depends on intelligibility of speech. Second, top-down signals are significantly stronger for left auditory cortex than for right auditory cortex. Third, speech-auditory cortex coupling is enhanced as a function of stronger top-down signals. Together, our results suggest that low-frequency brain oscillations play a role in implementing predictive top-down control during continuous speech perception and that top-down control is largely directed at left auditory cortex. This suggests a close relationship between (left-lateralized) speech production areas and the implementation of top-down control in continuous speech perception. PMID:26028433

  10. Effect of delayed auditory feedback on stuttering with and without central auditory processing disorders.

    PubMed

    Picoloto, Luana Altran; Cardoso, Ana Cláudia Vieira; Cerqueira, Amanda Venuti; Oliveira, Cristiane Moço Canhetti de

    2017-12-07

    To verify the effect of delayed auditory feedback on speech fluency of individuals who stutter with and without central auditory processing disorders. The participants were twenty individuals with stuttering from 7 to 17 years old and were divided into two groups: Stuttering Group with Auditory Processing Disorders (SGAPD): 10 individuals with central auditory processing disorders, and Stuttering Group (SG): 10 individuals without central auditory processing disorders. Procedures were: fluency assessment with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF), assessment of stuttering severity and of central auditory processing (CAP). Phono Tools software was used to cause a delay of 100 milliseconds in the auditory feedback. The Wilcoxon signed-rank test was used in the intragroup analysis and the Mann-Whitney test in the intergroup analysis. The DAF caused a statistically significant reduction in SG: in the frequency score of stuttering-like disfluencies in the analysis of the Stuttering Severity Instrument, in the number of blocks and repetitions of monosyllabic words, and in the frequency of duration-type stuttering-like disfluencies. Delayed auditory feedback did not cause statistically significant effects on the fluency of SGAPD, the individuals who stutter with auditory processing disorders. The effect of delayed auditory feedback on the speech fluency of individuals who stutter differed between the two groups, because there was an improvement in fluency only in individuals without auditory processing disorder.

  11. Differential Receptive Field Properties of Parvalbumin and Somatostatin Inhibitory Neurons in Mouse Auditory Cortex.

    PubMed

    Li, Ling-Yun; Xiong, Xiaorui R; Ibrahim, Leena A; Yuan, Wei; Tao, Huizhong W; Zhang, Li I

    2015-07-01

Cortical inhibitory circuits play important roles in shaping sensory processing. In auditory cortex, however, the functional properties of genetically identified inhibitory neurons are poorly characterized. Using two-photon imaging-guided recordings, we specifically targeted two major types of cortical inhibitory neuron, parvalbumin (PV)- and somatostatin (SOM)-expressing neurons, in superficial layers of mouse auditory cortex. We found that PV cells exhibited broader tonal receptive fields with lower intensity thresholds and stronger tone-evoked spike responses than SOM neurons, which exhibited frequency selectivity similar to that of excitatory neurons. The broader and weaker frequency tuning of PV neurons was attributable to a broader range of synaptic inputs and to the stronger subthreshold responses they elicited, which resulted in a higher efficiency of input-to-output conversion. In addition, the onsets of both the input and spike responses of SOM neurons were significantly delayed relative to PV and excitatory cells. Our results suggest that PV and SOM neurons engage auditory cortical circuits in different manners: while PV neurons may provide broadly tuned feedforward inhibition for rapid control of ascending inputs to excitatory neurons, the delayed and more selective inhibition from SOM neurons may provide specific modulation of feedback inputs on their distal dendrites.

  12. Effects of auditory selective attention on chirp evoked auditory steady state responses.

    PubMed

    Bohr, Andreas; Bernarding, Corinna; Strauss, Daniel J; Corona-Strauss, Farah I

    2011-01-01

Auditory steady state responses (ASSRs) are frequently used to assess auditory function. Recently, interest in the effects of attention on ASSRs has increased. In this paper, we investigated for the first time possible effects of attention on ASSRs evoked by amplitude-modulated and frequency-modulated chirp paradigms. Different paradigms were designed using chirps with low and high frequency content, and the stimulation was presented in monaural and dichotic modalities. A total of 10 young subjects participated in the study; they were first instructed to ignore the stimuli, and in a second repetition they had to detect a deviant stimulus. In the time-domain analysis, we found enhanced amplitudes for the attended conditions. Furthermore, we noticed higher amplitude values for the condition using frequency-modulated low-frequency chirps evoked by monaural stimulation. The largest difference between the attended and unattended modalities occurred in the dichotic case of the amplitude-modulated condition using chirps with low frequency content.

  13. Establishing the Response of Low Frequency Auditory Filters

    NASA Technical Reports Server (NTRS)

    Rafaelof, Menachem; Christian, Andrew; Shepherd, Kevin; Rizzi, Stephen; Stephenson, James

    2017-01-01

The response of auditory filters is central to the frequency selectivity of the human auditory system. This is especially true for the realistic complex sounds encountered in many applications, such as modeling the audibility of sound, voice recognition, noise cancellation, and the development of advanced hearing aid devices. The purpose of this study was to establish the response of low-frequency (below 100 Hz) auditory filters. Two experiments were designed and executed: the first measured subjects' hearing thresholds for pure tones (at 25, 31.5, 40, 50, 63 and 80 Hz), and the second measured psychophysical tuning curves (PTCs) at two signal frequencies (Fs = 40 and 63 Hz). Experiment 1 involved 36 subjects, while experiment 2 used 20 subjects selected from experiment 1. Both experiments were based on a 3-down 1-up 3AFC adaptive staircase test procedure using either a variable-level narrow-band noise masker or a tone. A summary of the results includes masked threshold data in the form of PTCs, the response of auditory filters, their distribution, and a comparison with similar recently published data.
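The 3-down 1-up staircase named above lowers the signal level after three consecutive correct responses and raises it after any error, so the track converges near the 79.4%-correct point. A minimal sketch, assuming a hypothetical listener model `respond(level)` and illustrative step-size and reversal-count settings:

```python
def staircase_3down_1up(respond, start_level=60.0, step=2.0, n_reversals=8):
    """3-down 1-up adaptive track: the level drops after 3 consecutive
    correct responses and rises after any incorrect one; the mean of the
    reversal levels estimates the ~79.4%-correct threshold.
    `respond(level) -> bool` is a hypothetical listener model."""
    level, correct_run, direction = start_level, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if respond(level):
            correct_run += 1
            if correct_run == 3:               # three in a row: make it harder
                correct_run = 0
                if direction == +1:            # up-to-down turn = reversal
                    reversals.append(level)
                direction = -1
                level -= step
        else:                                  # any miss: make it easier
            correct_run = 0
            if direction == -1:                # down-to-up turn = reversal
                reversals.append(level)
            direction = +1
            level += step
    return sum(reversals) / len(reversals)     # threshold estimate
```

Real procedures typically discard the first few reversals and average the rest; the fixed step and reversal count here are simplifications.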

  14. The Middle Ear Muscle Reflex in Rat: Developing a Biomarker of Auditory Nerve Degeneration.

    PubMed

    Chertoff, Mark E; Martz, Ashley; Sakumura, Joey T; Kamerer, Aryn M; Diaz, Francisco

The long-term goal of this research is to determine whether the middle ear muscle reflex can be used to predict the number of healthy auditory nerve fibers in hearing-impaired ears. In this study, we develop a high-impedance source and an animal model of the middle ear muscle reflex, and explore the influence of signal frequency and level on parameters of the reflex, to determine an optimal signal for examining auditory nerve fiber survival. A high-impedance source was developed using a hearing aid receiver attached to a 10.5-cm tube of 0.06 diameter. The impedance probe consisted of a microphone probe placed near the tip of a tube coupled to a sound source. The probe was calibrated by inserting it into a syringe of known volumes and impedances. The reflex in the anesthetized rat was elicited by stimuli ranging from 3 to 16 kHz, presented at levels from 35 to 100 dB SPL to one ear, and was measured in the opposite ear, which contained the probe and probe stimulus. The amplitude of the reflex increased with elicitor level and was largest at 3 kHz. The lowest threshold was approximately 54 dB SPL, for the 3-kHz stimulus. The rate of decay of the reflex was greatest at 16 kHz, followed by 10 and 3 kHz. The rate of decay did not change significantly with elicitor level at 3 and 16 kHz, but decreased as the level of the 10-kHz elicitor increased. A negative feedback model accounts for the reflex decay by making the strength of feedback dependent on auditory nerve input. The rise time of the reflex varied with frequency and changed with level for the 10- and 16-kHz signals but not significantly for the 3-kHz signal. The latency of the reflex increased as elicitor level decreased, and this change in latency with level was largest for the 10-kHz stimulus. Because the reflex in rat had its largest amplitude, its lowest threshold, and the least decay with a 3-kHz elicitor, this may be the ideal frequency for estimating auditory nerve survival in hearing-impaired ears.

  15. The perception of prosody and associated auditory cues in early-implanted children: the role of auditory working memory and musical activities.

    PubMed

    Torppa, Ritva; Faulkner, Andrew; Huotilainen, Minna; Järvikivi, Juhani; Lipsanen, Jari; Laasonen, Marja; Vainio, Martti

    2014-03-01

To study prosodic perception in early-implanted children in relation to auditory discrimination, auditory working memory, and exposure to music. Word and sentence stress perception, discrimination of fundamental frequency (F0), intensity, and duration, and forward digit span were measured twice over approximately 16 months. Musical activities were assessed by questionnaire. Participants were twenty-one early-implanted children and age-matched normal-hearing (NH) children (4-13 years). Children with cochlear implants (CIs) exposed to music performed better than the others in stress perception and F0 discrimination. Only this subgroup of implanted children improved with age in word stress perception and intensity discrimination, and improved over time in digit span. Prosodic perception, F0 discrimination, and forward digit span in implanted children exposed to music were equivalent to those of the NH group, but the other implanted children performed more poorly. For children with CIs, word stress perception was linked to digit span and intensity discrimination; sentence stress perception was additionally linked to F0 discrimination. Prosodic perception in children with CIs is thus linked to auditory working memory and to aspects of auditory discrimination. Engagement in music was linked to better performance across a range of measures, suggesting that music is a valuable tool in the rehabilitation of implanted children.

  16. The harmonic organization of auditory cortex.

    PubMed

    Wang, Xiaoqin

    2013-12-17

A fundamental structure of sounds encountered in the natural environment is harmonicity. Harmonicity is an essential component of music found in all cultures. It is also a distinctive feature of vocal communication sounds such as human speech and animal vocalizations. Harmonics in sounds are produced by a variety of acoustic generators and reflectors in the natural environment, including the vocal apparatuses of humans and animal species as well as musical instruments of many types. We live in an acoustic world full of harmonicity. Given its widespread presence across the hearing environment, it is natural to expect harmonicity to be reflected in the evolution and development of the auditory systems of both humans and animals, in particular the auditory cortex. Recent neuroimaging and neurophysiology experiments have identified regions of non-primary auditory cortex in humans and non-human primates that respond selectively to harmonic pitches. Accumulating evidence has also shown that neurons in many regions of the auditory cortex exhibit characteristic responses to harmonically related frequencies beyond the range of pitch. Together, these findings suggest that a fundamental organizational principle of auditory cortex is based on harmonicity. Such an organization likely plays an important role in music processing by the brain. It may also form the basis of the preference for particular classes of music and voice sounds.

  17. Statistical learning and auditory processing in children with music training: An ERP study.

    PubMed

    Mandikal Vasuki, Pragati Rao; Sharma, Mridula; Ibrahim, Ronny; Arciuli, Joanne

    2017-07-01

The question of whether musical training is associated with enhanced auditory and cognitive abilities in children is of considerable interest. In the present study, we compared children with and without music training across a range of auditory and cognitive measures, including the ability to implicitly detect statistical regularities in input (statistical learning). Statistical learning of regularities embedded in auditory and visual stimuli was measured in musically trained and age-matched untrained children between the ages of 9 and 11 years. In addition to collecting behavioural measures, we recorded electrophysiological measures to obtain an online measure of segmentation during the statistical learning tasks. Musically trained children showed better performance on melody discrimination, rhythm discrimination, frequency discrimination, and auditory statistical learning. Furthermore, grand-averaged ERPs showed that triplet onset (the initial stimulus) elicited larger responses in the musically trained children during both the auditory and visual statistical learning tasks. In addition, children's music skills were associated with performance on the auditory and visual behavioural statistical learning tasks. Our data suggest that individual differences in musical skill are associated with children's ability to detect regularities. The ERP data suggest that musical training is associated with better encoding of both auditory and visual stimuli. Although causality must be explored in further research, these results may have implications for developing music-based remediation strategies for children with learning impairments.

  18. Explanation of Two Curious Phenomena Regarding the Relationship Between Tectorial Membrane and Basilar Membrane Dynamics

    NASA Astrophysics Data System (ADS)

    Nobili, R.

    2003-02-01

    Two years ago, Ruggero et al. [1] focused attention on two curious phenomena regarding the magnitude and phase of tectorial-membrane (TM) vibration relative to basilar-membrane (BM) vibration at a basal site of the chinchilla cochlea: 1) Over a wide range of stimulus frequencies, auditory-nerve responses, which are believed to reflect closely the TM vibration, behave as a linear combination of both BM displacement and velocity. 2) Near threshold, auditory-nerve responses to low-frequency tones are synchronous with peak BM velocity towards scala tympani, but at 80-90 dB SPL and 100-110 dB SPL responses undergo two large phase shifts approaching 180°. Such drastic phase shifts have no counterpart in BM vibrations. Here, it is argued that both these remarkable phenomena have a common origin: the viscoelastic properties of the TM attachment to limbus spiralis.

  19. Preceding weak noise sharpens the frequency tuning and elevates the response threshold of the mouse inferior collicular neurons through GABAergic inhibition.

    PubMed

    Wang, Xin; Jen, Philip H-S; Wu, Fei-Jian; Chen, Qi-Cai

    2007-09-05

In acoustic communication, animals must extract biologically relevant signals that are embedded in a noisy environment. The present study examines how weak noise may affect the auditory sensitivity of neurons in the central nucleus of the mouse inferior colliculus (IC), which receives convergent excitatory and inhibitory inputs from both lower and higher auditory centers. Specifically, we studied the frequency sensitivity and minimum threshold of IC neurons using a pure tone probe and a weak white noise masker under a forward masking paradigm. For most IC neurons, the probe-elicited response was decreased by a weak white noise presented at a specific gap (i.e., time window). When presented within this time window, weak noise masking sharpened the frequency tuning curve and increased the minimum threshold of IC neurons, and the degree of masking in both measures increased with noise duration. The sharpening of the frequency tuning curve and the increase in minimum threshold during weak noise masking were mostly mediated through GABAergic inhibition. In addition, sharpening of the frequency tuning curve by the weak noise masker was more effective at the high- than at the low-frequency limb. These data indicate that, in the real world, ambient noise may improve the frequency sensitivity of IC neurons through GABAergic inhibition, while inevitably decreasing their frequency response range and sensitivity.

  20. Auditory sensitivity in settlement-stage larvae of coral reef fishes

    NASA Astrophysics Data System (ADS)

    Wright, K. J.; Higgs, D. M.; Cato, D. H.; Leis, J. M.

    2010-03-01

The larval phase of most species of coral reef fishes is spent away from the reef in the pelagic environment. At the time of settlement, these larvae need to locate a reef, and recent research indicates that sound emanating from reefs may act as a cue to guide them. Here, the auditory abilities of settlement-stage larvae of four species of coral reef fishes (families Pomacentridae, Lutjanidae and Serranidae) and of similar-sized individuals of two pelagic species (Carangidae) were tested using an electrophysiological technique, the auditory brainstem response (ABR). Five of the six species heard frequencies in the 100-2,000 Hz range, whilst one carangid species did not detect frequencies higher than 800 Hz. The audiograms of the six species were of similar shape, with best hearing at lower frequencies, between 100 and 300 Hz. Strong within-species differences in hearing sensitivity were found both among the coral reef species and among the pelagic species. Larvae of the coral reef species had significantly more sensitive hearing than larvae of the pelagic species. The results suggest that settlement-stage larval reef fishes may be able to detect reef sounds at distances of a few hundred meters. If true hearing thresholds are lower than ABR estimates, as indicated by some comparisons of ABR and behavioural methods, the detection distances would be much larger.

  1. Ambient noise induces independent shifts in call frequency and amplitude within the Lombard effect in echolocating bats

    PubMed Central

    Hage, Steffen R.; Jiang, Tinglei; Berquist, Sean W.; Feng, Jiang; Metzner, Walter

    2013-01-01

    The Lombard effect, an involuntary rise in call amplitude in response to masking ambient noise, represents one of the most efficient mechanisms to optimize signal-to-noise ratio. The Lombard effect occurs in birds and mammals, including humans, and is often associated with several other vocal changes, such as call frequency and duration. Most studies, however, have focused on noise-dependent changes in call amplitude. It is therefore still largely unknown how the adaptive changes in call amplitude relate to associated vocal changes such as frequency shifts, how the underlying mechanisms are linked, and if auditory feedback from the changing vocal output is needed. Here, we examined the Lombard effect and the associated changes in call frequency in a highly vocal mammal, echolocating horseshoe bats. We analyzed how bandpass-filtered noise (BFN; bandwidth 20 kHz) affected their echolocation behavior when BFN was centered on different frequencies within their hearing range. Call amplitudes increased only when BFN was centered on the dominant frequency component of the bats’ calls. In contrast, call frequencies increased for all but one BFN center frequency tested. Both amplitude and frequency rises were extremely fast and occurred in the first call uttered after noise onset, suggesting that no auditory feedback was required. The different effects that varying the BFN center frequency had on amplitude and frequency rises indicate different neural circuits and/or mechanisms underlying these changes. PMID:23431172

  2. Auditory evoked responses to binaural beat illusion: stimulus generation and the derivation of the Binaural Interaction Component (BIC).

    PubMed

    Ozdamar, Ozcan; Bohorquez, Jorge; Mihajloski, Todor; Yavuz, Erdem; Lachowska, Magdalena

    2011-01-01

Electrophysiological indices of the auditory binaural beat illusion were studied using late-latency evoked responses. Binaural beats were generated by continuous monaural FM tones with slightly different ascending and descending frequencies lasting about 25 ms, presented at 1-s intervals. Frequency changes were carefully adjusted to avoid creating any abrupt waveform changes. Binaural Interaction Component (BIC) analysis was used to separate the neural responses due to binaural involvement. The results show that transient auditory evoked responses can be obtained from the auditory illusion of binaural beats.
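The BIC derivation itself is simple arithmetic: subtract the sum of the two monaural responses from the binaurally evoked response, so that any nonzero residual reflects binaural-specific neural interaction. A minimal sketch (the function name is illustrative; the study's exact preprocessing is not shown here):

```python
def binaural_interaction_component(binaural, left, right):
    """BIC(t) = B(t) - [L(t) + R(t)], computed sample by sample.

    `binaural` is the response to binaural stimulation; `left`/`right`
    are the responses to the corresponding monaural stimulations.
    A nonzero BIC indicates a binaural-specific contribution.
    """
    if not (len(binaural) == len(left) == len(right)):
        raise ValueError("all three waveforms must share one time base")
    return [b - (l + r) for b, l, r in zip(binaural, left, right)]
```

If the binaural response were simply the superposition of the two monaural ones, the BIC would be zero everywhere.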

  3. Air and Bone Conduction Frequency-specific Auditory Brainstem Response in Children with Agenesis of the External Auditory Canal

    PubMed Central

    Sleifer, Pricila; Didoné, Dayane Domeneghini; Keppeler, Ísis Bicca; Bueno, Claudine Devicari; Riesgo, Rudimar dos Santos

    2017-01-01

Introduction  Tone-evoked auditory brainstem responses (tone-ABR) enable differential diagnosis in the evaluation of children up to 12 months of age, including those with external and/or middle ear malformations. The use of frequency-specific auditory stimuli by air and bone conduction allows characterization of the hearing profile. Objective  The objective of our study was to compare the results obtained by tone-ABR by air and bone conduction in children up to 12 months old with agenesis of the external auditory canal. Method  The study was cross-sectional, observational, individual, and contemporary. We conducted tone-ABR by air and bone conduction at 500 Hz and 2000 Hz in 32 children (23 boys), aged one to 12 months, with agenesis of the external auditory canal. Results  The tone-ABR thresholds were significantly elevated for air conduction at 500 Hz and 2000 Hz, while the bone-conduction thresholds were normal in both ears. We found no statistically significant difference between genders or ears for most comparisons. Conclusion  Conductive hearing loss did not alter the thresholds obtained by bone conduction, but it elevated all air-conduction thresholds. Tone-ABR by bone conduction is an important tool for assessing cochlear integrity in children under 12 months with agenesis of the external auditory canal. PMID:29018492

  4. Impact of Educational Level on Performance on Auditory Processing Tests.

    PubMed

    Murphy, Cristina F B; Rabelo, Camila M; Silagi, Marcela L; Mansur, Letícia L; Schochat, Eliane

    2016-01-01

Research has demonstrated that a higher level of education is associated with better performance on cognitive tests among middle-aged and elderly people. However, the effects of education on auditory processing skills have not yet been evaluated. Previous demonstrations of sensory-cognitive interactions in the aging process indicate the potential importance of this topic. Therefore, the primary purpose of this study was to investigate the performance of middle-aged and elderly people with different levels of formal education on auditory processing tests. A total of 177 adults with no evidence of cognitive, psychological or neurological conditions took part in the research. The participants completed a series of auditory assessments, including dichotic digit, frequency pattern and speech-in-noise tests. A working memory test was also performed to investigate the extent to which auditory processing and cognitive performance were associated. The results demonstrated positive but weak correlations between years of schooling and performance on all of the tests applied. The factor "years of schooling" was also one of the best predictors of frequency pattern and speech-in-noise test performance. Additionally, performance on the working memory, frequency pattern and dichotic digit tests was correlated, suggesting that the influence of educational level on auditory processing performance might be associated with the cognitive demands of the auditory processing tests rather than with auditory sensory aspects themselves. Longitudinal research is required to investigate the causal relationship between educational level and auditory processing skills.

  5. Auditory Perceptual Abilities Are Associated with Specific Auditory Experience

    PubMed Central

    Zaltz, Yael; Globerson, Eitan; Amir, Noam

    2017-01-01

The extent to which auditory experience can shape general auditory perceptual abilities is still under constant debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only within a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement of perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested on frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, illuminating enhanced perceptual skills for the key features of pop music, in which minuscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested on the DLF and DLT tasks in which musicians had proved superior. Results showed superior abilities on the DLF task for guitar players, though no difference between the groups on DLT, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment tested a possible influence of the vowel density of the native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels) and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels) were tested on a formant discrimination task, the linguistic equivalent of a DLS task. Results showed that the German speakers had superior formant discrimination, demonstrating highly specific effects of auditory linguistic experience as well. Overall, the results suggest that auditory superiority is associated with the specific auditory exposure. PMID:29238318

  6. A model of anuran auditory periphery reveals frequency-dependent adaptation to be a contributing mechanism for two-tone suppression and amplitude modulation coding.

    PubMed

    Wotton, J M; Ferragamo, M J

    2011-10-01

Anuran auditory nerve fibers (ANFs) tuned to low frequencies display an unusual frequency-dependent adaptation, which produces a more phasic response to signals above best frequency (BF) and a more tonic response to signals below it. A network model of the first two layers of the anuran auditory system was used to test the contribution of this dynamic peripheral adaptation to two-tone suppression and amplitude modulation (AM) tuning. The model included a peripheral sandwich component and leaky integrate-and-fire cells, with adaptation implemented as a non-linear increase in threshold weighted by the signal frequency. Simulations showed that frequency-dependent adaptation was both necessary and sufficient to produce high-frequency-side two-tone suppression in the ANF and in cells of the dorsal medullary nucleus (DMN). It therefore seems likely that suppression and this dynamic adaptation share a common mechanism. The response of ANFs to AM signals was influenced by adaptation and carrier frequency. Vector strength synchronization to an AM signal improved with increased adaptation. The spike rate response to a carrier at BF was the expected flat function of AM rate; for non-BF carrier frequencies, however, the response showed a weak band-pass pattern due to the influence of signal sidebands and adaptation. The DMN received inputs from three ANFs, and when the frequency tuning of the inputs was near the carrier, the rate response had a low-pass or all-pass shape. When most of the inputs were biased above or below the carrier, band-pass responses were observed. Frequency-dependent adaptation enhanced the band-pass tuning for AM rate, particularly when the response of the inputs was predominantly phasic for a given carrier. Different combinations of inputs can therefore bias a DMN cell to be especially well suited to detecting specific ranges of AM rates at a particular carrier frequency. Such a selection of inputs would clearly be advantageous to the frog in recognizing distinct spectral and temporal parameters in communication calls.
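The core mechanism described above, a leaky integrate-and-fire cell whose spike threshold is incremented by a frequency-weighted amount after each spike, can be sketched as follows. This is a schematic of the idea, not the published model; every parameter value and the form of the frequency weighting are illustrative assumptions:

```python
def lif_adapting(inputs, freq_weight, dt=1e-3, tau_m=0.01, tau_a=0.05,
                 v_thresh=1.0, d_thresh=0.5):
    """Leaky integrate-and-fire cell with a spike-triggered threshold
    increment scaled by `freq_weight` (larger for signals above best
    frequency -> faster adaptation -> more phasic firing).
    A schematic sketch of the mechanism; all parameters are illustrative."""
    v, theta, spikes = 0.0, 0.0, []
    for t, drive in enumerate(inputs):
        v += dt / tau_m * (-v + drive)       # leaky membrane integration
        theta -= dt / tau_a * theta          # adaptation decays back toward zero
        if v >= v_thresh + theta:            # spike when dynamic threshold crossed
            spikes.append(t)
            v = 0.0                          # reset membrane potential
            theta += d_thresh * freq_weight  # frequency-dependent threshold jump
    return spikes
```

With `freq_weight = 0` the cell fires tonically throughout a constant drive; a large `freq_weight` yields an onset-dominated, phasic spike train, mirroring the above-BF behavior the abstract describes.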

  7. Music-induced cortical plasticity and lateral inhibition in the human auditory cortex as foundations for tonal tinnitus treatment.

    PubMed

    Pantev, Christo; Okamoto, Hidehiko; Teismann, Henning

    2012-01-01

Over the past 15 years, we have studied plasticity in the human auditory cortex by means of magnetoencephalography (MEG). Two main topics nurtured our curiosity: the effects of musical training on plasticity in the auditory system, and the effects of lateral inhibition. One of our plasticity studies found that listening to notched music for 3 h inhibited the neuronal activity in the auditory cortex corresponding to the center frequency of the notch, suggesting suppression of neural activity by lateral inhibition. Subsequent research on this topic found that the suppression was notably dependent on the notch width employed, that the lower notch edge induced stronger attenuation of neural activity than the higher notch edge, and that focused auditory attention strengthened the inhibitory networks. Crucially, the overall effects of lateral inhibition on human auditory cortical activity were stronger than the habituation effects. Based on these results we developed a novel treatment strategy for tonal tinnitus: tailor-made notched music training (TMNMT). By notching the music energy spectrum around the individual tinnitus frequency, we intended to attract lateral inhibition to the auditory neurons involved in tinnitus perception. So far, the training strategy has been evaluated in two studies. The results of the initial long-term controlled study (12 months) supported the validity of the treatment concept: subjective tinnitus loudness and annoyance were significantly reduced after TMNMT, but not when notching spared the tinnitus frequencies. Correspondingly, tinnitus-related auditory evoked fields (AEFs) were significantly reduced after training. The subsequent short-term (5-day) training study indicated that training was more effective for tinnitus frequencies ≤ 8 kHz than for tinnitus frequencies > 8 kHz, and that training should be employed over the long term in order to induce more persistent effects. Further development and evaluation of TMNMT therapy are planned; a goal is to transfer this novel, completely non-invasive and low-cost treatment approach for tonal tinnitus into routine clinical practice.

  9. Likely Age-Related Hearing Loss (Presbycusis) in a Stranded Indo-Pacific Humpback Dolphin (Sousa chinensis).

    PubMed

    Li, Songhai; Wang, Ding; Wang, Kexiong; Hoffmann-Kuhnt, Matthias; Fernando, Nimal; Taylor, Elizabeth A; Lin, Wenzhi; Chen, Jialin; Ng, Timothy

    2016-01-01

    The hearing of a stranded Indo-Pacific humpback dolphin (Sousa chinensis) in Zhuhai, China, was measured. The age of this animal was estimated to be ~40 years. The animal's hearing was measured using a noninvasive auditory evoked potential (AEP) method. The results showed that the high-frequency hearing cutoff of the studied dolphin was ~30-40 kHz lower than that of a younger conspecific individual ~13 years old. The lower high-frequency hearing range in the older dolphin was explained as a likely result of age-related hearing loss (presbycusis).

  10. Alterations in interhemispheric gamma-band connectivity are related to the emergence of auditory verbal hallucinations in healthy subjects during NMDA-receptor blockade.

    PubMed

    Thiebes, Stephanie; Steinmann, Saskia; Curic, Stjepan; Polomac, Nenad; Andreou, Christina; Eichler, Iris-Carola; Eichler, Lars; Zöllner, Christian; Gallinat, Jürgen; Leicht, Gregor; Mulert, Christoph

    2018-06-01

    Auditory verbal hallucinations (AVH) are a common positive symptom of schizophrenia. Excitatory-to-inhibitory (E/I) imbalance related to disturbed N-methyl-D-aspartate receptor (NMDAR) functioning has been suggested as a possible mechanism underlying altered connectivity and AVH in schizophrenia. The current study examined the effects of ketamine, an NMDAR antagonist, on glutamate-related mechanisms underlying interhemispheric gamma-band connectivity, conscious auditory perception during dichotic listening (DL), and the emergence of auditory verbal distortions and hallucinations (AVD/AVH) in healthy volunteers. In a single-blind, pseudo-randomized, placebo-controlled crossover design, nineteen male, right-handed volunteers were measured using 64-channel electroencephalography (EEG). Psychopathology was assessed with the PANSS interview and the 5D-ASC questionnaire, including a subscale to detect auditory alterations with regard to AVD/AVH (AUA-AVD/AVH). Interhemispheric connectivity analysis was performed using eLORETA source estimation and lagged phase synchronization (LPS) in the gamma-band range (30-100 Hz). Ketamine induced positive symptoms such as hallucinations in a subgroup of healthy subjects. In addition, interhemispheric gamma-band connectivity was found to be altered under ketamine compared to placebo, and subjects with AUA-AVD/AVH under ketamine showed significantly higher interhemispheric gamma-band connectivity than subjects without AUA-AVD/AVH. These findings demonstrate a relationship between NMDAR functioning, interhemispheric connectivity in the gamma-band frequency range between bilateral auditory cortices, and the emergence of AVD/AVH in healthy subjects. The result is in accordance with the interhemispheric miscommunication hypothesis of AVH and argues for a possible role of glutamate in AVH in schizophrenia.

  11. Is the Role of External Feedback in Auditory Skill Learning Age Dependent?

    PubMed

    Zaltz, Yael; Roth, Daphne Ari-Even; Kishon-Rabin, Liat

    2017-12-20

    The purpose of this study is to investigate the role of external feedback in auditory perceptual learning of school-age children as compared with that of adults. Forty-eight children (7-9 years of age) and 64 adults (20-35 years of age) conducted a training session using an auditory frequency discrimination (difference limen for frequency) task, with external feedback (EF) provided for half of them. Data supported the following findings: (a) Children learned the difference limen for frequency task only when EF was provided. (b) The ability of the children to benefit from EF was associated with better cognitive skills. (c) Adults showed significant learning whether EF was provided or not. (d) In children, within-session learning following training was dependent on the provision of feedback, whereas between-sessions learning occurred irrespective of feedback. EF was found beneficial for auditory skill learning of 7-9-year-old children but not for young adults. The data support the supervised Hebbian model for auditory skill learning, suggesting combined bottom-up internal neural feedback controlled by top-down monitoring. In the case of immature executive functions, EF enhanced auditory skill learning. This study has implications for the design of training protocols in the auditory modality for different age groups, as well as for special populations.
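
    A difference limen for frequency is typically estimated with an adaptive staircase. The abstract does not specify the procedure, so the 2-down/1-up rule, the step factor, and the simulated listener below are assumptions made only to illustrate how such a task converges on a threshold.

```python
import random

def estimate_dlf(true_dlf_hz, start_delta=50.0, factor=2.0, trials=200, seed=1):
    """2-down/1-up adaptive staircase (converges near 70.7% correct).

    The 'listener' is simulated: correct whenever the frequency
    difference exceeds their true difference limen, with a small
    chance of a lucky guess below it.
    """
    rng = random.Random(seed)
    delta, streak, last_dir = start_delta, 0, 0
    reversals = []
    for _ in range(trials):
        correct = delta > true_dlf_hz or rng.random() < 0.05
        if correct:
            streak += 1
            if streak == 2:            # two correct in a row -> harder
                streak = 0
                delta /= factor
                if last_dir == +1:
                    reversals.append(delta)
                last_dir = -1
        else:
            streak = 0
            delta *= factor            # one wrong -> easier
            if last_dir == -1:
                reversals.append(delta)
            last_dir = +1
    return sum(reversals[-6:]) / len(reversals[-6:])  # mean of last reversals
```

    In training studies like this one, the per-trial feedback ("correct"/"wrong") is exactly the external feedback that was provided to half of the participants.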

  12. Quantitative analysis of neuronal response properties in primary and higher-order auditory cortical fields of awake house mice (Mus musculus)

    PubMed Central

    Joachimsthaler, Bettina; Uhlmann, Michaela; Miller, Frank; Ehret, Günter; Kurt, Simone

    2014-01-01

    Because of its great genetic potential, the mouse (Mus musculus) has become a popular model species for studies on hearing and sound processing along the auditory pathways. Here, we present the first comparative study on the representation of neuronal response parameters to tones in primary and higher-order auditory cortical fields of awake mice. We quantified 12 neuronal properties of tone processing in order to estimate similarities and differences of function between the fields, and to discuss how far auditory cortex (AC) function in the mouse is comparable to that in awake monkeys and cats. Extracellular recordings were made from 1400 small clusters of neurons from cortical layers III/IV in the primary fields AI (primary auditory field) and AAF (anterior auditory field), and the higher-order fields AII (second auditory field) and DP (dorsoposterior field). Field specificity was shown with regard to spontaneous activity, correlation between spontaneous and evoked activity, tone response latency, sharpness of frequency tuning, temporal response patterns (occurrence of phasic responses, phasic-tonic responses, tonic responses, and off-responses), and degree of variation between the characteristic frequency (CF) and the best frequency (BF) (CF–BF relationship). Field similarities were noted as significant correlations between CFs and BFs, V-shaped frequency tuning curves, similar minimum response thresholds and non-monotonic rate-level functions in approximately two-thirds of the neurons. Comparative and quantitative analyses showed that the measured response characteristics were, to various degrees, susceptible to influences of anesthetics. Therefore, studies of neuronal responses in the awake AC are important in order to establish adequate relationships between neuronal data and auditory perception and acoustic response behavior. PMID:24506843

  13. Perception of stochastically undersampled sound waveforms: a model of auditory deafferentation

    PubMed Central

    Lopez-Poveda, Enrique A.; Barrios, Pablo

    2013-01-01

    Auditory deafferentation, or permanent loss of auditory nerve afferent terminals, occurs after noise overexposure and aging and may accompany many forms of hearing loss. It could cause significant auditory impairment but is undetected by regular clinical tests and so its effects on perception are poorly understood. Here, we hypothesize and test a neural mechanism by which deafferentation could deteriorate perception. The basic idea is that the spike train produced by each auditory afferent resembles a stochastically digitized version of the sound waveform and that the quality of the waveform representation in the whole nerve depends on the number of aggregated spike trains or auditory afferents. We reason that because spikes occur stochastically in time with a higher probability for high- than for low-intensity sounds, more afferents would be required for the nerve to faithfully encode high-frequency or low-intensity waveform features than low-frequency or high-intensity features. Deafferentation would thus degrade the encoding of these features. We further reason that due to the stochastic nature of nerve firing, the degradation would be greater in noise than in quiet. This hypothesis is tested using a vocoder. Sounds were filtered through ten adjacent frequency bands. For the signal in each band, multiple stochastically subsampled copies were obtained to roughly mimic different stochastic representations of that signal conveyed by different auditory afferents innervating a given cochlear region. These copies were then aggregated to obtain an acoustic stimulus. Tone detection and speech identification tests were performed by young, normal-hearing listeners using different numbers of stochastic samplers per frequency band in the vocoder. Results support the hypothesis that stochastic undersampling of the sound waveform, inspired by deafferentation, impairs speech perception in noise more than in quiet, consistent with auditory aging effects. PMID:23882176
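
    The undersampling idea can be caricatured in a few lines: each simulated afferent keeps samples of the waveform with a probability that grows with instantaneous intensity, and the "nerve" aggregate is the mean across afferents. The rectified-amplitude spike probability and the helper below are assumptions for illustration, not the authors' vocoder.

```python
import numpy as np

def stochastic_resample(signal, n_samplers, rng):
    """Aggregate `n_samplers` stochastically subsampled copies of a signal.

    Each simulated afferent 'spikes' (keeps a sample) with probability
    proportional to the half-wave rectified amplitude, so intense
    features are sampled more densely than quiet ones.
    """
    p = np.maximum(signal, 0.0) / np.abs(signal).max()  # spike probability
    acc = np.zeros_like(signal)
    for _ in range(n_samplers):
        keep = rng.random(signal.size) < p              # one spike train
        acc += np.where(keep, signal, 0.0)
    return acc / n_samplers
```

    With many samplers the aggregate converges on its expected waveform, so fine or low-intensity features survive; with few samplers (mimicking deafferentation) the representation stays noisy, and external noise compounds the problem.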

  14. Effect of EEG Referencing Methods on Auditory Mismatch Negativity

    PubMed Central

    Mahajan, Yatin; Peter, Varghese; Sharma, Mridula

    2017-01-01

    Auditory event-related potentials (ERPs) have consistently been used in the investigation of auditory and cognitive processing in research and clinical laboratories. There is currently no consensus on the choice of an appropriate reference for auditory ERPs. The most commonly used references in auditory ERP research are the mathematically linked mastoids (LM) and average referencing (AVG). Since LM and AVG referencing procedures do not solve the issue of an electrically neutral reference, the Reference Electrode Standardization Technique (REST) was developed to create a neutral reference for EEG recordings. The aim of the current research is to compare the influence of the reference on the amplitude and latency of auditory mismatch negativity (MMN) as a function of the magnitude of frequency deviance across three commonly used electrode montages (16-, 32-, and 64-channel) using the REST, LM, and AVG reference procedures. The current study was designed to determine if the three reference methods capture the variation in amplitude and latency of MMN with deviance magnitude. We recorded MMN from 12 normal-hearing young adults in an auditory oddball paradigm with a 1,000 Hz pure tone as the standard and 1,030, 1,100, and 1,200 Hz tones as small, medium, and large frequency deviants, respectively. The EEG data recorded in response to these sounds were re-referenced using the REST, LM, and AVG methods across 16-, 32-, and 64-channel EEG electrode montages. Results revealed that while the latency of MMN decreased with increasing frequency deviance, there was no effect of frequency deviance on the amplitude of MMN. There was no effect of referencing procedure on the experimental effect tested. The amplitude of MMN was largest when the ERPs were computed using LM referencing, and REST referencing produced its largest MMN amplitude with the 64-channel montage. There was no effect of electrode montage on AVG-referenced ERPs. Contrary to our predictions, the results suggest that the auditory MMN elicited as a function of increments in frequency deviance does not depend on the choice of referencing procedure. The results also suggest that auditory ERPs generated using REST referencing are more contingent on the electrode montage than those generated using AVG referencing. PMID:29066945
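
    Linked-mastoid and average re-referencing are both one-line operations on a (channels x samples) array; REST, by contrast, requires a volume-conductor head model and is not sketched here. The channel indices used for the mastoids below are assumptions for the example.

```python
import numpy as np

def rereference(eeg, mode, mastoid_idx=(0, 1)):
    """Re-reference EEG of shape (n_channels, n_samples).

    'lm'  -- linked mastoids: subtract the mean of the two mastoid channels
    'avg' -- average reference: subtract the instantaneous mean of all channels
    """
    if mode == "lm":
        ref = eeg[list(mastoid_idx)].mean(axis=0)
    elif mode == "avg":
        ref = eeg.mean(axis=0)
    else:
        raise ValueError("mode must be 'lm' or 'avg'")
    return eeg - ref
```

    Note the montage dependence of AVG falls out of the arithmetic: the subtracted "zero line" is the mean of whatever channels happen to be in the montage.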

  15. Inferior colliculus contributions to phase encoding of stop consonants in an animal model

    PubMed Central

    Warrier, Catherine M; Abrams, Daniel A; Nicol, Trent G; Kraus, Nina

    2011-01-01

    The human auditory brainstem is known to be exquisitely sensitive to fine-grained spectro-temporal differences between speech sound contrasts, and the ability of the brainstem to discriminate between these contrasts is important for speech perception. Recent work has described a novel method for translating brainstem timing differences in response to speech contrasts into frequency-specific phase differentials. Results from this method have shown that the human brainstem response is surprisingly sensitive to phase-differences inherent to the stimuli across a wide extent of the spectrum. Here we use an animal model of the auditory brainstem to examine whether the stimulus-specific phase signatures measured in human brainstem responses represent an epiphenomenon associated with far field (i.e., scalp-recorded) measurement of neural activity, or alternatively whether these specific activity patterns are also evident in auditory nuclei that contribute to the scalp-recorded response, thereby representing a more fundamental temporal processing phenomenon. Responses in anaesthetized guinea pigs to three minimally-contrasting consonant-vowel stimuli were collected simultaneously from the cortical surface vertex and directly from central nucleus of the inferior colliculus (ICc), measuring volume conducted neural activity and multiunit, near-field activity, respectively. Guinea pig surface responses were similar to human scalp-recorded responses to identical stimuli in gross morphology as well as phase characteristics. Moreover, surface recorded potentials shared many phase characteristics with near-field ICc activity. Response phase differences were prominent during formant transition periods, reflecting spectro-temporal differences between syllables, and showed more subtle differences during the identical steady-state periods. 
ICc encoded stimulus distinctions over a broader frequency range, with differences apparent in the highest frequency ranges analyzed, up to 3000 Hz. Based on the similarity of phase encoding across sites, and the consistency and sensitivity of response phase measured within ICc, results suggest that a general property of the auditory system is a high degree of sensitivity to fine-grained phase information inherent to complex acoustical stimuli. Furthermore, results suggest that temporal encoding in ICc contributes to temporal features measured in speech-evoked scalp-recorded responses. PMID:21945200
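
    A minimal version of translating response timing differences into frequency-specific phase differentials is the angle of the cross-spectrum of two responses. The published method uses a sliding-window cross-phaseogram, so this whole-epoch FFT sketch is a simplification.

```python
import numpy as np

def phase_differential(x, y, fs):
    """Frequency-specific phase difference (radians) between two responses.

    The angle of the cross-spectrum X(f) * conj(Y(f)) gives, per frequency,
    how far response y lags response x in phase.
    """
    cross = np.fft.rfft(x) * np.conj(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs, np.angle(cross)
```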

  16. Human cochlear hydrodynamics: A high-resolution μCT-based finite element study.

    PubMed

    De Paolis, Annalisa; Watanabe, Hirobumi; Nelson, Jeremy T; Bikson, Marom; Packer, Mark; Cardoso, Luis

    2017-01-04

    Measurements of perilymph hydrodynamics in the human cochlea are scarce, being mostly limited to the fluid pressure at the basal or apical turn of the scalae vestibuli and tympani. Indeed, measurements of fluid pressure or volumetric flow rate have only been reported in animal models. In this study we imaged the human ear at 6.7- and 3-µm resolution using µCT scanning to produce highly accurate 3D models of the entire ear and particularly the cochlear scalae. We used a contrast agent to better distinguish soft from hard tissues, including the auditory canal, tympanic membrane, malleus, incus, stapes, ligaments, oval and round windows, and scalae vestibuli and tympani. Using a Computational Fluid Dynamics (CFD) approach and this anatomically correct 3D model of the human cochlea, we examined the pressure and perilymph flow velocity as a function of location, time, and frequency within the auditory range. Perimeter, surface area, hydraulic diameter, and Womersley and Reynolds numbers were computed every 45° of rotation around the central axis of the cochlear spiral. CFD results showed both spatial and temporal pressure gradients along the cochlea. Small Reynolds numbers and large Womersley values indicate that the perilymph fluid flow at auditory frequencies is laminar and its velocity profile is plug-like. The pressure was found to be 102-106° out of phase with the fluid flow velocity at the scalae vestibuli and tympani, respectively. The average flow velocity was found to be in the sub-µm/s to nm/s range at 20-100 Hz, and below the nm/s range at 1-20 kHz. Copyright © 2016 Elsevier Ltd. All rights reserved.
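
    The two dimensionless numbers the study reports are direct formulas. The water-like perilymph properties and the example dimensions below are assumptions for illustration; the paper's actual values come from the µCT geometry.

```python
import math

RHO = 1000.0   # perilymph density, kg/m^3 (water-like, assumed)
MU = 1.0e-3    # dynamic viscosity, Pa*s (water-like, assumed)

def reynolds(velocity_ms, hydraulic_diameter_m):
    """Re = rho * v * D_h / mu; Re << 2000 indicates laminar flow."""
    return RHO * velocity_ms * hydraulic_diameter_m / MU

def womersley(freq_hz, hydraulic_diameter_m):
    """Wo = (D_h / 2) * sqrt(omega * rho / mu), with omega = 2*pi*f.

    Wo >> 1 means inertia dominates viscosity, flattening the
    oscillatory velocity profile into the plug-like shape reported.
    """
    omega = 2.0 * math.pi * freq_hz
    return (hydraulic_diameter_m / 2.0) * math.sqrt(omega * RHO / MU)
```

    For a ~1 mm hydraulic diameter and the sub-µm/s velocities reported, Re is many orders of magnitude below the laminar limit while Wo exceeds 1 across most of the auditory range, consistent with laminar, plug-like flow.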

  17. Auditory-motor Mapping for Pitch Control in Singers and Nonsingers

    PubMed Central

    Jones, Jeffery A.; Keough, Dwayne

    2009-01-01

    Little is known about the basic processes underlying the behavior of singing. This experiment was designed to examine differences in the representation of the mapping between fundamental frequency (F0) feedback and the vocal production system in singers and nonsingers. Auditory feedback regarding F0 was shifted down in frequency while participants sang the consonant-vowel /ta/. During the initial frequency-altered trials, singers compensated to a lesser degree than nonsingers, but this difference was reduced with continued exposure to frequency-altered feedback. After brief exposure to frequency-altered auditory feedback, both singers and nonsingers suddenly heard their F0 unaltered. When participants received this unaltered feedback, only singers' F0 values were found to be significantly higher than their F0 values produced during baseline and control trials. These aftereffects in singers were replicated when participants sang a different note than the note they had produced while hearing altered feedback. Together, these results suggest that singers rely more than nonsingers on internal models, rather than on real-time auditory feedback, to regulate their vocal productions. PMID:18592224

  18. The influence of fundamental frequency on perceived duration in spectrally comparable sounds

    PubMed Central

    Aalto, Daniel; Simko, Juraj; Vainio, Martti

    2017-01-01

    The perceived duration of a sound is affected by its fundamental frequency and intensity: higher sounds are judged to be longer, as are sounds with greater intensity. Since increasing intensity lengthens the perceived duration of the auditory object, and increasing the fundamental frequency increases the sound’s perceived loudness (up to ca. 3 kHz), frequency modulation of duration could be potentially explained by a confounding effect where the primary cause of the modulation would be variations in intensity. Here, a series of experiments are described that were designed to disentangle the contributions of fundamental frequency, intensity, and duration to perceived loudness and duration. In two forced-choice tasks, participants judged duration and intensity differences between two sounds varying simultaneously in intensity, fundamental frequency, fundamental frequency gliding range, and duration. The results suggest that fundamental frequency and intensity each have an impact on duration judgments, while frequency gliding range did not influence the present results. We also demonstrate that the modulation of perceived duration by sound fundamental frequency cannot be fully explained by the confounding relationship between frequency and intensity. PMID:28879063

  19. The influence of fundamental frequency on perceived duration in spectrally comparable sounds.

    PubMed

    Dawson, Caitlin; Aalto, Daniel; Simko, Juraj; Vainio, Martti

    2017-01-01

    The perceived duration of a sound is affected by its fundamental frequency and intensity: higher sounds are judged to be longer, as are sounds with greater intensity. Since increasing intensity lengthens the perceived duration of the auditory object, and increasing the fundamental frequency increases the sound's perceived loudness (up to ca. 3 kHz), frequency modulation of duration could be potentially explained by a confounding effect where the primary cause of the modulation would be variations in intensity. Here, a series of experiments are described that were designed to disentangle the contributions of fundamental frequency, intensity, and duration to perceived loudness and duration. In two forced-choice tasks, participants judged duration and intensity differences between two sounds varying simultaneously in intensity, fundamental frequency, fundamental frequency gliding range, and duration. The results suggest that fundamental frequency and intensity each have an impact on duration judgments, while frequency gliding range did not influence the present results. We also demonstrate that the modulation of perceived duration by sound fundamental frequency cannot be fully explained by the confounding relationship between frequency and intensity.

  20. Clinical applications of the human brainstem responses to auditory stimuli

    NASA Technical Reports Server (NTRS)

    Galambos, R.; Hecox, K.

    1975-01-01

    A technique utilizing the frequency following response (FFR) (obtained by auditory stimulation, whereby the stimulus frequency and duration are mirror-imaged in the resulting brainwaves) as a clinical tool for hearing disorders in humans of all ages is presented. Various medical studies are discussed to support the clinical value of the technique. The discovery and origin of the FFR and another significant brainstem auditory response involved in studying the eighth nerve are also discussed.

  1. Auditory Discrimination and Auditory Sensory Behaviours in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Jones, Catherine R. G.; Happe, Francesca; Baird, Gillian; Simonoff, Emily; Marsden, Anita J. S.; Tregay, Jenifer; Phillips, Rebecca J.; Goswami, Usha; Thomson, Jennifer M.; Charman, Tony

    2009-01-01

    It has been hypothesised that auditory processing may be enhanced in autism spectrum disorders (ASD). We tested auditory discrimination ability in 72 adolescents with ASD (39 childhood autism; 33 other ASD) and 57 IQ and age-matched controls, assessing their capacity for successful discrimination of the frequency, intensity and duration…

  2. The effects of divided attention on auditory priming.

    PubMed

    Mulligan, Neil W; Duke, Marquinn; Cooper, Angela W

    2007-09-01

    Traditional theorizing stresses the importance of attentional state during encoding for later memory, based primarily on research with explicit memory. Recent research has begun to investigate the role of attention in implicit memory but has focused almost exclusively on priming in the visual modality. The present experiments examined the effect of divided attention on auditory implicit memory, using auditory perceptual identification, word-stem completion and word-fragment completion. Participants heard study words under full attention conditions or while simultaneously carrying out a distractor task (the divided attention condition). In Experiment 1, a distractor task with low response frequency failed to disrupt later auditory priming (but diminished explicit memory as assessed with auditory recognition). In Experiment 2, a distractor task with greater response frequency disrupted priming on all three of the auditory priming tasks as well as the explicit test. These results imply that although auditory priming is less reliant on attention than explicit memory, it is still greatly affected by at least some divided-attention manipulations. These results are consistent with research using visual priming tasks and have relevance for hypotheses regarding attention and auditory priming.

  3. Mapping Frequency-Specific Tone Predictions in the Human Auditory Cortex at High Spatial Resolution.

    PubMed

    Berlot, Eva; Formisano, Elia; De Martino, Federico

    2018-05-23

    Auditory inputs reaching our ears are often incomplete, but our brains nevertheless transform them into rich and complete perceptual phenomena such as meaningful conversations or pleasurable music. It has been hypothesized that our brains extract regularities in inputs, which enables us to predict the upcoming stimuli, leading to efficient sensory processing. However, it is unclear whether tone predictions are encoded with similar specificity as perceived signals. Here, we used high-field fMRI to investigate whether human auditory regions encode one of the most defining characteristics of auditory perception: the frequency of predicted tones. Two pairs of tone sequences were presented in ascending or descending directions, with the last tone omitted in half of the trials. Every pair of incomplete sequences contained identical sounds, but was associated with different expectations about the last tone (a high- or low-frequency target). This allowed us to disambiguate predictive signaling from sensory-driven processing. We recorded fMRI responses from eight female participants during passive listening to complete and incomplete sequences. Inspection of specificity and spatial patterns of responses revealed that target frequencies were encoded similarly during their presentations, as well as during omissions, suggesting frequency-specific encoding of predicted tones in the auditory cortex (AC). Importantly, frequency specificity of predictive signaling was observed already at the earliest levels of auditory cortical hierarchy: in the primary AC. Our findings provide evidence for content-specific predictive processing starting at the earliest cortical levels. SIGNIFICANCE STATEMENT Given the abundance of sensory information around us in any given moment, it has been proposed that our brain uses contextual information to prioritize and form predictions about incoming signals. 
However, there remains a surprising lack of understanding of the specificity and content of such prediction signaling; for example, whether a predicted tone is encoded with similar specificity as a perceived tone. Here, we show that early auditory regions encode the frequency of a tone that is predicted yet omitted. Our findings contribute to the understanding of how expectations shape sound processing in the human auditory cortex and provide further insights into how contextual information influences computations in neuronal circuits. Copyright © 2018 the authors 0270-6474/18/384934-09$15.00/0.

  4. Stay tuned: active amplification tunes tree cricket ears to track temperature-dependent song frequency.

    PubMed

    Mhatre, Natasha; Pollack, Gerald; Mason, Andrew

    2016-04-01

    Tree cricket males produce tonal songs, used for mate attraction and male-male interactions. Active mechanics tunes hearing to conspecific song frequency. However, tree cricket song frequency increases with temperature, presenting a problem for tuned listeners. We show that the actively amplified frequency increases with temperature, thus shifting mechanical and neuronal auditory tuning to maintain a match with conspecific song frequency. Active auditory processes are known from several taxa, but their adaptive function has rarely been demonstrated. We show that tree crickets harness active processes to ensure that auditory tuning remains matched to conspecific song frequency, despite changing environmental conditions and signal characteristics. Adaptive tuning allows tree crickets to selectively detect potential mates or rivals over large distances and is likely to bestow a strong selective advantage by reducing mate-finding effort and facilitating intermale interactions. © 2016 The Author(s).

  5. Follow-up of hearing thresholds among forge hammering workers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamal, A.A.; Mikael, R.A.; Faris, R.

    Hearing threshold was reexamined in a group of forge hammering workers investigated 8 years ago, with consideration of the age effect and of auditory symptoms. Workers were exposed to impact noise that ranged from 112 to 139 dB(A), at an irregular rate of 20 to 50 drops/minute, and a continuous background noise that ranged from 90 to 94 dB(A). Similar to what was observed 8 years ago, the present permanent threshold shift (PTS) showed a maximum notch at the frequency of 6 kHz and considerable elevations at the frequencies of 0.25-1 kHz. The age-corrected PTS and the postexposure hearing threshold were significantly higher than the corresponding previous values at the frequencies 0.25, 0.5, 1, and 8 kHz only. The rise was more evident at the low than at the high frequencies. Temporary threshold shift (TTS) values were significantly less than those 8 years ago. Contrary to the previous TTS, the present TTS values were higher at low than at high frequencies. Although progression of PTS at the frequencies 0.25 and 0.5 kHz was continuous throughout the observed durations of exposure, progression at higher frequencies occurred essentially in the first 10 to 15 years of exposure. Thereafter, it followed a much slower rate. Tinnitus was significantly associated with difficulty in hearing the human voice and with elevation of PTS at all the tested frequencies, while acoustic after-image was significantly associated with increment of PTS at the frequencies 0.25-2 kHz. No relation between PTS and smoking was found. PTS at low frequencies may provide an indication of progression of hearing damage when the sensitivity at 6 and 4 kHz diminishes after prolonged years of exposure. Tinnitus and acoustic after-image are related to the auditory effect of forge hammering noise.

  6. Auditory-motor adaptation to frequency-altered auditory feedback occurs when participants ignore feedback.

    PubMed

    Keough, Dwayne; Hawco, Colin; Jones, Jeffery A

    2013-03-09

    Auditory feedback is important for accurate control of voice fundamental frequency (F(0)). The purpose of this study was to address whether task instructions could influence the compensatory responding and sensorimotor adaptation that has been previously found when participants are presented with a series of frequency-altered feedback (FAF) trials. Trained singers and musically untrained participants (nonsingers) were informed that their auditory feedback would be manipulated in pitch while they sang the target vowel [/α /]. Participants were instructed to either 'compensate' for, or 'ignore' the changes in auditory feedback. Whole utterance auditory feedback manipulations were either gradually presented ('ramp') in -2 cent increments down to -100 cents (1 semitone) or were suddenly ('constant') shifted down by 1 semitone. Results indicated that singers and nonsingers could not suppress their compensatory responses to FAF, nor could they reduce the sensorimotor adaptation observed during both the ramp and constant FAF trials. Compared to previous research, these data suggest that musical training is effective in suppressing compensatory responses only when FAF occurs after vocal onset (500-2500 ms). Moreover, our data suggest that compensation and adaptation are automatic and are influenced little by conscious control.
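
    Cent-denominated shifts like the -2 cent ramp steps and the -100 cent (one semitone) offset are logarithmic frequency ratios; the conversion below is standard, though the helper names are ours.

```python
import math

def cents(f_hz, ref_hz):
    """Interval from ref_hz to f_hz in cents (100 cents = 1 semitone)."""
    return 1200.0 * math.log2(f_hz / ref_hz)

def shift_by_cents(f_hz, shift):
    """Shift a frequency by `shift` cents (negative = downward)."""
    return f_hz * 2.0 ** (shift / 1200.0)
```

    For example, shifting a 220 Hz voice down 100 cents feeds back about 207.65 Hz, and fifty -2 cent ramp steps accumulate to the same one-semitone offset.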

  7. Auditory-motor adaptation to frequency-altered auditory feedback occurs when participants ignore feedback

    PubMed Central

    2013-01-01

    Background Auditory feedback is important for accurate control of voice fundamental frequency (F0). The purpose of this study was to address whether task instructions could influence the compensatory responding and sensorimotor adaptation that has been previously found when participants are presented with a series of frequency-altered feedback (FAF) trials. Trained singers and musically untrained participants (nonsingers) were informed that their auditory feedback would be manipulated in pitch while they sang the target vowel [/ɑ /]. Participants were instructed to either ‘compensate’ for, or ‘ignore’ the changes in auditory feedback. Whole utterance auditory feedback manipulations were either gradually presented (‘ramp’) in -2 cent increments down to -100 cents (1 semitone) or were suddenly (’constant‘) shifted down by 1 semitone. Results Results indicated that singers and nonsingers could not suppress their compensatory responses to FAF, nor could they reduce the sensorimotor adaptation observed during both the ramp and constant FAF trials. Conclusions Compared to previous research, these data suggest that musical training is effective in suppressing compensatory responses only when FAF occurs after vocal onset (500-2500 ms). Moreover, our data suggest that compensation and adaptation are automatic and are influenced little by conscious control. PMID:23497238

  8. Hearing diversity in moths confronting a neotropical bat assemblage.

    PubMed

    Cobo-Cuan, Ariadna; Kössl, Manfred; Mora, Emanuel C

    2017-09-01

    The tympanal ear is an evolutionary acquisition which helps moths survive predation from bats. The greater diversity of bats and echolocation strategies in the Neotropics compared with temperate zones would be expected to impose different sensory requirements on neotropical moths. However, even given some variability among moth assemblages, the frequencies of best hearing of moths from different climate zones studied to date have been roughly the same: between 20 and 60 kHz. We have analyzed the auditory characteristics of tympanate moths from Cuba, a neotropical island with high levels of bat diversity and a high incidence of echolocation frequencies above the usual upper limit of moths' hearing sensitivity. Moths of the superfamilies Noctuoidea, Geometroidea and Pyraloidea were examined. Audiograms were determined by non-invasively measuring distortion-product otoacoustic emissions. We also quantified the frequency spectrum of the echolocation sounds to which this moth community is exposed. The hearing ranges of moths in our study showed best frequencies between 36 and 94 kHz. High sensitivity to frequencies above 50 kHz suggests that the auditory sensitivity of moths is suited to the sounds used by the sympatric echolocating bat fauna. Biodiversity characterizes predators and prey in the Neotropics, but the bat-moth acoustic interaction remains spectrally matched.

  9. Effects of Frequency Separation and Diotic/Dichotic Presentations on the Alternation Frequency Limits in Audition Derived from a Temporal Phase Discrimination Task.

    PubMed

    Kanaya, Shoko; Fujisaki, Waka; Nishida, Shin'ya; Furukawa, Shigeto; Yokosawa, Kazuhiko

    2015-02-01

    Temporal phase discrimination is a useful psychophysical task to evaluate how sensory signals, synchronously detected in parallel, are perceptually bound by human observers. In this task two stimulus sequences synchronously alternate between two states (say, A-B-A-B and X-Y-X-Y) in either of two temporal phases (i.e., A and B are paired with X and Y respectively, or vice versa). The critical alternation frequency beyond which participants cannot discriminate the temporal phase is measured as an index characterizing the temporal property of the underlying binding process. This task has been used to reveal the mechanisms underlying visual and cross-modal bindings. To directly compare these binding mechanisms with those in another modality, this study used the temporal phase discrimination task to reveal the processes underlying auditory bindings. The two sequences were alternations between two pitches. We manipulated the distance between the two sequences by changing intersequence frequency separation, or presentation ears (diotic vs dichotic). Results showed that the alternation frequency limit ranged from 7 to 30 Hz, becoming higher as the intersequence distance decreased, as is the case with vision. However, unlike vision, auditory phase discrimination limits were higher and more variable across participants. © 2015 SAGE Publications.

  10. Hormone replacement therapy diminishes hearing in peri-menopausal mice.

    PubMed

    Price, Katharine; Zhu, Xiaoxia; Guimaraes, Patricia F; Vasilyeva, Olga N; Frisina, Robert D

    2009-06-01

    We recently discovered that progestin in hormone replacement therapy (HRT) for post-menopausal women has detrimental effects on the ear and central auditory system [Guimaraes, P., Frisina, S.T., Mapes, F., Tadros, S.F., Frisina, D.R., Frisina, R.D., 2006. Progestin negatively affects hearing in aged women. Proc. Natl. Acad. Sci. - PNAS 103, 14246-14249]. To start determining the generality and neural bases of these human findings, the present study examined the effects of combination HRT (estrogen+progestin) and estrogen alone on hearing in peri-menopausal mice. Specifically, auditory brainstem responses (ABRs-sensitivity of the auditory system) and distortion-product otoacoustic emissions (DPOAEs-cochlear outer hair cell system) were employed. Middle-aged female CBA mice received either a time-release, subcutaneously implanted pellet of estrogen+progestin, estrogen alone, or placebo. Longitudinal comparisons of ABR threshold data obtained at 4 months of treatment revealed statistically significant declines in auditory sensitivity over time for the combined estrogen+progestin treatment group, with the estrogen only group revealing milder changes at 3, 6 and 32 kHz. DPOAE testing revealed statistically significant differences for the estrogen+progestin treatment group in the high and middle frequency ranges (15-29 and 30-45 kHz) after as early as 2 months of treatment (p<0.01 and p<0.001, respectively). Statistically significant changes were also seen at 4 months of treatment across all frequencies for the combined HRT group. These data suggest that estrogen+progestin HRT of 4 months' duration impairs outer hair cell functioning and overall auditory sensitivity. These findings indicate that estrogen+progestin HRT may actually accelerate age-related hearing loss, relative to estrogen monotherapy; findings that are consistent with the clinical hearing loss observed in aging women who have taken combination HRT.

  11. Correlated evolution between hearing sensitivity and social calls in bats

    PubMed Central

    Bohn, Kirsten M; Moss, Cynthia F; Wilkinson, Gerald S

    2006-01-01

    Echolocating bats are auditory specialists, with exquisite hearing that spans several octaves. In the ultrasonic range, bat audiograms typically show highest sensitivity in the spectral region of their species-specific echolocation calls. Well-developed hearing in the audible range has been commonly attributed to a need to detect sounds produced by prey. However, bat pups often emit isolation calls with low-frequency components that facilitate mother–young reunions. In this study, we examine whether low-frequency hearing in bats exhibits correlated evolution with (i) body size; (ii) high-frequency hearing sensitivity or (iii) pup isolation call frequency. Using published audiograms, we found that low-frequency hearing sensitivity is not dependent on body size but is related to high-frequency hearing. After controlling for high-frequency hearing, we found that low-frequency hearing exhibits correlated evolution with isolation call frequency. We infer that detection and discrimination of isolation calls have favoured enhanced low-frequency hearing because accurate parental investment is critical: bats have low reproductive rates, non-volant altricial young and must often identify their pups within large crèches. PMID:17148288

  12. Cortical pitch regions in humans respond primarily to resolved harmonics and are located in specific tonotopic regions of anterior auditory cortex.

    PubMed

    Norman-Haignere, Sam; Kanwisher, Nancy; McDermott, Josh H

    2013-12-11

    Pitch is a defining perceptual property of many real-world sounds, including music and speech. Classically, theories of pitch perception have differentiated between temporal and spectral cues. These cues are rendered distinct by the frequency resolution of the ear, such that some frequencies produce "resolved" peaks of excitation in the cochlea, whereas others are "unresolved," providing a pitch cue only via their temporal fluctuations. Despite longstanding interest, the neural structures that process pitch, and their relationship to these cues, have remained controversial. Here, using fMRI in humans, we report the following: (1) consistent with previous reports, all subjects exhibited pitch-sensitive cortical regions that responded substantially more to harmonic tones than frequency-matched noise; (2) the response of these regions was mainly driven by spectrally resolved harmonics, although they also exhibited a weak but consistent response to unresolved harmonics relative to noise; (3) the response of pitch-sensitive regions to a parametric manipulation of resolvability tracked psychophysical discrimination thresholds for the same stimuli; and (4) pitch-sensitive regions were localized to specific tonotopic regions of anterior auditory cortex, extending from a low-frequency region of primary auditory cortex into a more anterior and less frequency-selective region of nonprimary auditory cortex. These results demonstrate that cortical pitch responses are located in a stereotyped region of anterior auditory cortex and are predominantly driven by resolved frequency components in a way that mirrors behavior.

  13. Cortical Pitch Regions in Humans Respond Primarily to Resolved Harmonics and Are Located in Specific Tonotopic Regions of Anterior Auditory Cortex

    PubMed Central

    Kanwisher, Nancy; McDermott, Josh H.

    2013-01-01

    Pitch is a defining perceptual property of many real-world sounds, including music and speech. Classically, theories of pitch perception have differentiated between temporal and spectral cues. These cues are rendered distinct by the frequency resolution of the ear, such that some frequencies produce “resolved” peaks of excitation in the cochlea, whereas others are “unresolved,” providing a pitch cue only via their temporal fluctuations. Despite longstanding interest, the neural structures that process pitch, and their relationship to these cues, have remained controversial. Here, using fMRI in humans, we report the following: (1) consistent with previous reports, all subjects exhibited pitch-sensitive cortical regions that responded substantially more to harmonic tones than frequency-matched noise; (2) the response of these regions was mainly driven by spectrally resolved harmonics, although they also exhibited a weak but consistent response to unresolved harmonics relative to noise; (3) the response of pitch-sensitive regions to a parametric manipulation of resolvability tracked psychophysical discrimination thresholds for the same stimuli; and (4) pitch-sensitive regions were localized to specific tonotopic regions of anterior auditory cortex, extending from a low-frequency region of primary auditory cortex into a more anterior and less frequency-selective region of nonprimary auditory cortex. These results demonstrate that cortical pitch responses are located in a stereotyped region of anterior auditory cortex and are predominantly driven by resolved frequency components in a way that mirrors behavior. PMID:24336712

  14. Stimulus-specific suppression preserves information in auditory short-term memory.

    PubMed

    Linke, Annika C; Vicente-Grabovetsky, Alejandro; Cusack, Rhodri

    2011-08-02

    Philosophers and scientists have puzzled for millennia over how perceptual information is stored in short-term memory. Some have suggested that early sensory representations are involved, but their precise role has remained unclear. The current study asks whether auditory cortex shows sustained frequency-specific activation while sounds are maintained in short-term memory using high-resolution functional MRI (fMRI). Investigating short-term memory representations within regions of human auditory cortex with fMRI has been difficult because of their small size and high anatomical variability between subjects. However, we overcame these constraints by using multivoxel pattern analysis. It clearly revealed frequency-specific activity during the encoding phase of a change detection task, and the degree of this frequency-specific activation was positively related to performance in the task. Although the sounds had to be maintained in memory, activity in auditory cortex was significantly suppressed. Strikingly, patterns of activity in this maintenance period correlated negatively with the patterns evoked by the same frequencies during encoding. Furthermore, individuals who used a rehearsal strategy to remember the sounds showed reduced frequency-specific suppression during the maintenance period. Although negative activations are often disregarded in fMRI research, our findings imply that decreases in blood oxygenation level-dependent response carry important stimulus-specific information and can be related to cognitive processes. We hypothesize that, during auditory change detection, frequency-specific suppression protects short-term memory representations from being overwritten by inhibiting the encoding of interfering sounds.

  15. Contributions of spectral frequency analyses to the study of P50 ERP amplitude and suppression in bipolar disorder with or without a history of psychosis.

    PubMed

    Carroll, Christine A; Kieffaber, Paul D; Vohs, Jenifer L; O'Donnell, Brian F; Shekhar, Anantha; Hetrick, William P

    2008-11-01

    The present study investigated event-related brain potential (ERP) indices of auditory processing and sensory gating in bipolar disorder and subgroups of bipolar patients with or without a history of psychosis using the P50 dual-click procedure. Auditory-evoked activity in two discrete frequency bands also was explored to distinguish between sensory registration and selective attention deficits. Thirty-one individuals with bipolar disorder and 28 non-psychiatric controls were compared on ERP indices of auditory processing using a dual-click procedure. In addition to conventional P50 ERP peak-picking techniques, quantitative frequency analyses were applied to the ERP data to isolate stages of information processing associated with sensory registration (20-50 Hz; gamma band) and selective attention (0-20 Hz; low-frequency band). Compared to the non-psychiatric control group, patients with bipolar disorder exhibited reduced S1 response magnitudes for the conventional P50 peak-picking and low-frequency response analyses. A bipolar subgroup effect suggested that the attenuated S1 magnitudes from the P50 peak-picking and low-frequency analyses were largely attributable to patients without a history of psychosis. The analysis of distinct frequency bands of the auditory-evoked response elicited during the dual-click procedure allowed further specification of the nature of auditory sensory processing and gating deficits in bipolar disorder with or without a history of psychosis. The observed S1 effects in the low-frequency band suggest selective attention deficits in bipolar patients, especially those patients without a history of psychosis, which may reflect a diminished capacity to selectively attend to salient stimuli as opposed to impairments of inhibitory sensory processes.
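
    The two analysis bands described above (20-50 Hz gamma for sensory registration, 0-20 Hz for selective attention) can be separated with a simple FFT-mask power estimate. This is an illustrative sketch under that assumption, not the study's actual analysis pipeline:

```python
import numpy as np

def band_power(x, fs, lo_hz, hi_hz):
    """Signal power contributed by frequency components in [lo_hz, hi_hz)."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mask = (freqs >= lo_hz) & (freqs < hi_hz)
    # Parseval-style normalization: sum |X[k]|^2 / N over the masked bins
    return np.sum(np.abs(spec[mask]) ** 2) / len(x)
```

    For an evoked response sampled at fs, one would compare e.g. band_power(erp, fs, 20, 50) against band_power(erp, fs, 0, 20) to contrast gamma-band and low-frequency contributions.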

  16. Loudspeaker equalization for auditory research.

    PubMed

    MacDonald, Justin A; Tran, Phuong K

    2007-02-01

    The equalization of loudspeaker frequency response is necessary to conduct many types of well-controlled auditory experiments. This article introduces a program that includes functions to measure a loudspeaker's frequency response, design equalization filters, and apply the filters to a set of stimuli to be used in an auditory experiment. The filters can compensate for both magnitude and phase distortions introduced by the loudspeaker. A MATLAB script is included in the Appendix to illustrate the details of the equalization algorithm used in the program.
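
    The equalization idea can be sketched as a magnitude-only inverse FIR design (the article's MATLAB program also corrects phase; the boost cap and function name here are illustrative assumptions):

```python
import numpy as np

def inverse_eq_fir(measured_mag, max_boost_db=20.0):
    """Design a linear-phase FIR whose magnitude response inverts a
    loudspeaker's measured magnitude response.

    measured_mag: magnitude samples at uniformly spaced frequencies
    from DC to Nyquist (length n/2 + 1 for an n-tap filter)."""
    mag = np.asarray(measured_mag, dtype=float)
    # Cap the boost so deep notches in the response don't demand huge gain
    floor = mag.max() * 10.0 ** (-max_boost_db / 20.0)
    inv = 1.0 / np.maximum(mag, floor)
    # Mirror into a full conjugate-symmetric spectrum, then back to time
    full = np.concatenate([inv, inv[-2:0:-1]])
    taps = np.real(np.fft.ifft(full))
    # Rotate the impulse response to make it causal (adds a pure delay)
    return np.roll(taps, len(taps) // 2)
```

    Convolving the experimental stimuli with the returned taps flattens the playback magnitude response, up to the configured boost limit.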

  17. The harmonic organization of auditory cortex

    PubMed Central

    Wang, Xiaoqin

    2013-01-01

    A fundamental structure of sounds encountered in the natural environment is harmonicity. Harmonicity is an essential component of music found in all cultures. It is also a unique feature of vocal communication sounds such as human speech and animal vocalizations. Harmonics in sounds are produced by a variety of acoustic generators and reflectors in the natural environment, including the vocal apparatuses of humans and animal species as well as musical instruments of many types. We live in an acoustic world full of harmonicity. Given the widespread existence of harmonicity in many aspects of the hearing environment, it is natural to expect it to be reflected in the evolution and development of the auditory systems of both humans and animals, in particular the auditory cortex. Recent neuroimaging and neurophysiology experiments have identified regions of non-primary auditory cortex in humans and non-human primates that have selective responses to harmonic pitches. Accumulating evidence has also shown that neurons in many regions of the auditory cortex exhibit characteristic responses to harmonically related frequencies beyond the range of pitch. Together, these findings suggest that a fundamental organizational principle of auditory cortex is based on harmonicity. Such an organization likely plays an important role in music processing by the brain. It may also form the basis of the preference for particular classes of music and voice sounds. PMID:24381544

  18. Multisensory perceptual learning of temporal order: audiovisual learning transfers to vision but not audition.

    PubMed

    Alais, David; Cass, John

    2010-06-23

    An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism, or by multiple modality-specific systems. We use a perceptual learning paradigm to address this question. Three groups were trained daily for 10 sessions on an auditory, a visual or a combined audiovisual temporal order judgment (TOJ). Groups were pre-tested on a range of TOJ tasks within and between their group modality prior to learning, so that transfer of any learning from the trained task could be measured by post-testing other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, and neither did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes. The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes. 
Functionally the patterns of featural transfer suggest that perceptual learning of temporal order may be optimised to object-centered rather than viewer-centered constraints.

  19. Encoding of speech sounds at auditory brainstem level in good and poor hearing aid performers.

    PubMed

    Shetty, Hemanth Narayan; Puttabasappa, Manjula

    Hearing aids are prescribed to alleviate loss of audibility. It has been reported that about 31% of hearing aid users reject their own hearing aid because of annoyance towards background noise. The reason for dissatisfaction may lie anywhere from the hearing aid microphone to the integrity of neurons along the auditory pathway. To measure spectra from the output of the hearing aid at the ear canal and frequency following responses recorded at the auditory brainstem in individuals with hearing impairment. A total of sixty participants with moderate sensorineural hearing impairment, aged 15 to 65 years, were involved. Each participant was classified as either a good or a poor hearing aid performer based on the acceptable noise level measure. Stimuli /da/ and /si/ were presented through a loudspeaker at 65 dB SPL. At the ear canal, spectra were measured in the unaided and aided conditions. At the auditory brainstem, frequency following responses were recorded to the same stimuli. The spectrum measured in each condition at the ear canal was the same in good and poor hearing aid performers. At the brainstem level, F0 encoding was better, and F0 and F1 energies were significantly higher, in good hearing aid performers than in poor hearing aid performers. Though the hearing aid spectra were almost the same between good and poor hearing aid performers, subtle physiological variations exist at the auditory brainstem. The results of the present study suggest that neural encoding of speech sounds at the brainstem level may be mediated distinctly in good hearing aid performers compared with poor hearing aid performers. Thus, it can be inferred that subtle physiological changes are evident at the auditory brainstem in persons who are willing to accept noise compared with those who are not. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. 
Published by Elsevier Editora Ltda. All rights reserved.

  20. Fragile Spectral and Temporal Auditory Processing in Adolescents with Autism Spectrum Disorder and Early Language Delay

    ERIC Educational Resources Information Center

    Boets, Bart; Verhoeven, Judith; Wouters, Jan; Steyaert, Jean

    2015-01-01

    We investigated low-level auditory spectral and temporal processing in adolescents with autism spectrum disorder (ASD) and early language delay compared to matched typically developing controls. Auditory measures were designed to target right versus left auditory cortex processing (i.e. frequency discrimination and slow amplitude modulation (AM)…

  1. Transformation from a pure time delay to a mixed time and phase delay representation in the auditory forebrain pathway.

    PubMed

    Vonderschen, Katrin; Wagner, Hermann

    2012-04-25

    Birds and mammals exploit interaural time differences (ITDs) for sound localization. Subsequent to ITD detection by brainstem neurons, ITD processing continues in parallel midbrain and forebrain pathways. In the barn owl, both ITD detection and processing in the midbrain are specialized to extract ITDs independent of frequency, which amounts to a pure time delay representation. Recent results have elucidated different mechanisms of ITD detection in mammals, which lead to a representation of small ITDs in high-frequency channels and large ITDs in low-frequency channels, resembling a phase delay representation. However, the detection mechanism does not prevent a change in ITD representation at higher processing stages. Here we analyze ITD tuning across frequency channels with pure tone and noise stimuli in neurons of the barn owl's auditory arcopallium, a nucleus at the endpoint of the forebrain pathway. To extend the analysis of ITD representation across frequency bands to a large neural population, we employed Fourier analysis for the spectral decomposition of ITD curves recorded with noise stimuli. This method was validated using physiological as well as model data. We found that low frequencies convey sensitivity to large ITDs, whereas high frequencies convey sensitivity to small ITDs. Moreover, different linear phase frequency regimes in the high-frequency and low-frequency ranges suggested an independent convergence of inputs from these frequency channels. Our results are consistent with ITD being remodeled toward a phase delay representation along the forebrain pathway. This indicates that sensory representations may undergo substantial reorganization, presumably in relation to specific behavioral output.
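
    The spectral-decomposition step can be illustrated with a toy version in Python: FFT a simulated noise-delay tuning curve to find the frequency channel that dominates it (the function name and simulated curve are assumptions for illustration, not the authors' code):

```python
import numpy as np

def dominant_frequency(itd_curve, dt):
    """Frequency (Hz) of the strongest spectral component of an ITD
    tuning curve sampled every dt seconds."""
    spec = np.fft.rfft(itd_curve - np.mean(itd_curve))
    freqs = np.fft.rfftfreq(len(itd_curve), d=dt)
    return float(freqs[np.argmax(np.abs(spec))])

# Simulated tuning curve of a 2 kHz-dominated neuron, ITD sampled in 10 us steps
dt = 10e-6
itd = np.arange(200) * dt
curve = np.cos(2 * np.pi * 2000.0 * itd)
# dominant_frequency(curve, dt) recovers ~2000 Hz
```

    The phase of the same FFT bins across frequency is what distinguishes a pure time delay (linear phase through the origin) from a phase delay representation.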

  2. Cortical mechanisms for the segregation and representation of acoustic textures.

    PubMed

    Overath, Tobias; Kumar, Sukhbinder; Stewart, Lauren; von Kriegstein, Katharina; Cusack, Rhodri; Rees, Adrian; Griffiths, Timothy D

    2010-02-10

    Auditory object analysis requires two fundamental perceptual processes: the definition of the boundaries between objects, and the abstraction and maintenance of an object's characteristic features. Although it is intuitive to assume that the detection of the discontinuities at an object's boundaries precedes the subsequent precise representation of the object, the specific underlying cortical mechanisms for segregating and representing auditory objects within the auditory scene are unknown. We investigated the cortical bases of these two processes for one type of auditory object, an "acoustic texture," composed of multiple frequency-modulated ramps. In these stimuli, we independently manipulated the statistical rules governing (1) the frequency-time space within individual textures (comprising ramps with a given spectrotemporal coherence) and (2) the boundaries between textures (adjacent textures with different spectrotemporal coherences). Using functional magnetic resonance imaging, we show mechanisms defining boundaries between textures with different coherences in primary and association auditory cortices, whereas texture coherence is represented only in association cortex. Furthermore, participants' superior detection of boundaries across which texture coherence increased (as opposed to decreased) was reflected in a greater neural response in auditory association cortex at these boundaries. The results suggest a hierarchical mechanism for processing acoustic textures that is relevant to auditory object analysis: boundaries between objects are first detected as a change in statistical rules over frequency-time space, before a representation that corresponds to the characteristics of the perceived object is formed.

  3. Auditory Pattern Recognition and Brief Tone Discrimination of Children with Reading Disorders

    ERIC Educational Resources Information Center

    Walker, Marianna M.; Givens, Gregg D.; Cranford, Jerry L.; Holbert, Don; Walker, Letitia

    2006-01-01

    Auditory pattern recognition skills in children with reading disorders were investigated using perceptual tests involving discrimination of frequency and duration tonal patterns. A behavioral test battery involving recognition of the pattern of presentation of tone triads was used in which individual components differed in either frequency or…

  4. Effect of age at cochlear implantation on auditory and speech development of children with auditory neuropathy spectrum disorder.

    PubMed

    Liu, Yuying; Dong, Ruijuan; Li, Yuling; Xu, Tianqiu; Li, Yongxin; Chen, Xueqing; Gong, Shusheng

    2014-12-01

    To evaluate the auditory and speech abilities in children with auditory neuropathy spectrum disorder (ANSD) after cochlear implantation (CI) and determine the role of age at implantation. Ten children participated in this retrospective case series study. All children had evidence of ANSD. None of the subjects had cochlear nerve deficiency on magnetic resonance imaging, and all had used their cochlear implants for a period of 12-84 months. We divided the children into two groups: children who underwent implantation before 24 months of age and children who underwent implantation after 24 months of age. Their auditory and speech abilities were evaluated using the following: behavioral audiometry, the Categories of Auditory Performance (CAP), the Meaningful Auditory Integration Scale (MAIS), the Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS), the Standard-Chinese version of the Monosyllabic Lexical Neighborhood Test (LNT), the Multisyllabic Lexical Neighborhood Test (MLNT), the Speech Intelligibility Rating (SIR) and the Meaningful Use of Speech Scale (MUSS). All children showed progress in their auditory and language abilities. The 4-frequency average hearing level (HL) (500 Hz, 1000 Hz, 2000 Hz and 4000 Hz) of aided hearing thresholds ranged from 17.5 to 57.5 dB HL. All children developed time-related auditory perception and speech skills. Scores of children with ANSD who received cochlear implants before 24 months tended to be better than those of children who received cochlear implants after 24 months. Seven children completed the Mandarin Lexical Neighborhood Test. Approximately half of the children showed improved open-set speech recognition. Cochlear implantation is helpful for children with ANSD and may be a good treatment option for many children with ANSD. In addition, children with ANSD fitted with cochlear implants before 24 months tended to acquire auditory and speech skills better than children fitted with cochlear implants after 24 months. 
Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
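
    The "4-frequency average hearing level" reported above is the plain mean of the four pure-tone thresholds; a trivial sketch (hypothetical helper, not from the study):

```python
def four_frequency_average(thresholds_db_hl):
    """Mean aided threshold (dB HL) across 500, 1000, 2000 and 4000 Hz.

    thresholds_db_hl: mapping from frequency in Hz to threshold in dB HL.
    """
    freqs = (500, 1000, 2000, 4000)
    return sum(thresholds_db_hl[f] for f in freqs) / len(freqs)

# four_frequency_average({500: 20, 1000: 25, 2000: 30, 4000: 45})  -> 30.0
```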

  5. Blast-induced tinnitus and hyperactivity in the auditory cortex of rats.

    PubMed

    Luo, Hao; Pace, Edward; Zhang, Jinsheng

    2017-01-06

    Blast exposure can cause tinnitus and hearing impairment by damaging the auditory periphery and direct impact to the brain, which trigger neural plasticity in both auditory and non-auditory centers. However, the underlying neurophysiological mechanisms of blast-induced tinnitus are still unknown. In this study, we induced tinnitus in rats using blast exposure and investigated changes in spontaneous firing and bursting activity in the auditory cortex (AC) at one day, one month, and three months after blast exposure. Our results showed that spontaneous activity in the tinnitus-positive group began changing at one month after blast exposure, and manifested as robust hyperactivity at all frequency regions at three months after exposure. We also observed an increased bursting rate in the low-frequency region at one month after blast exposure and in all frequency regions at three months after exposure. Taken together, spontaneous firing and bursting activity in the AC played an important role in blast-induced chronic tinnitus as opposed to acute tinnitus, thus favoring a bottom-up mechanism. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  6. Auditory word recognition: extrinsic and intrinsic effects of word frequency.

    PubMed

    Connine, C M; Titone, D; Wang, J

    1993-01-01

    Two experiments investigated the influence of word frequency in a phoneme identification task. Speech voicing continua were constructed so that one endpoint was a high-frequency word and the other endpoint was a low-frequency word (e.g., best-pest). Experiment 1 demonstrated that ambiguous tokens were labeled such that a high-frequency word was formed (intrinsic frequency effect). Experiment 2 manipulated the frequency composition of the list (extrinsic frequency effect). A high-frequency list bias produced an exaggerated influence of frequency; a low-frequency list bias showed a reverse frequency effect. Reaction time effects were discussed in terms of activation and postaccess decision models of frequency coding. The results support a late use of frequency in auditory word recognition.

  7. Underwater audiogram of a tucuxi (Sotalia fluviatilis guianensis).

    PubMed

    Sauerland, M; Dehnhardt, G

    1998-02-01

    Using a go/no go response paradigm, a tucuxi (Sotalia fluviatilis guianensis) was trained to respond to pure-tone signals for an underwater hearing test. Auditory thresholds were obtained from 4 to 135 kHz. The audiogram curve shows that this Sotalia had an upper limit of hearing at 135 kHz; from 125 to 135 kHz sensitivity decreased by 475 dB/oct. This coincides with results from electrophysiological threshold measurements. The range of best hearing (defined as 10 dB from maximum sensitivity) was between 64 and 105 kHz. This range appears to be narrower and more restricted to higher frequencies in Sotalia fluviatilis guianensis than in other odontocete species that had been tested before. Peak frequencies of echolocation pulses reported from free-ranging Sotalia correspond with the range of most sensitive hearing of this test subject.
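
    Roll-off figures like the 475 dB/oct quoted above come from normalizing a threshold change by the width of the frequency interval in octaves; a small sketch of that arithmetic (the helper name is hypothetical, not from the paper):

```python
import math

def slope_db_per_octave(f1_hz, f2_hz, db1, db2):
    """Audiogram slope between two threshold points, in dB per octave."""
    octaves = math.log2(f2_hz / f1_hz)
    return (db2 - db1) / octaves

# 125 -> 135 kHz spans only log2(135/125) ~= 0.11 octaves, so even a
# moderate threshold rise over that interval yields a very steep dB/oct value.
```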

  8. Low-frequency sensitivity in a gerbilline rodent, Pachyuromys duprasi.

    PubMed

    Plassmann, W; Kadel, M

    1991-01-01

The contribution of the bulla to low-frequency hearing capability was studied in the gerbilline rodent Pachyuromys duprasi. In the frequency range of 0.6-3 kHz, the sound pressure behind the tympanic membrane is higher than the pressure in the meatus acusticus externus near the eardrum. Gradual augmentation of frequencies above 0.6 kHz gives rise to a steadily increasing phase lag in the bulla relative to that in the meatus. Severing of the incudostapedial joint yields results indicating that the phase difference between meatus and bulla is caused by resonance properties of the bulla and resistance in the cochlea. Both destruction of the bulla and stiffening of the pars flaccida tympani lead to a sound pressure decrease in the frequency range around 2 kHz. This drop is accompanied by an amplitude decrease of the same magnitude in the cochlear microphonic potentials. These results support the hypothesis that the bulla functions like a Helmholtz resonator in the frequency range of 1-3 kHz, improving sound transduction to the cochlea. These experimental findings, in conjunction with theoretical considerations involving bulla volume, orifice area of the resonator, and resonance frequency of the bulla, suggest that the theoretically required area of the resonator's orifice is, in fact, of the same magnitude as the area of the pars flaccida tympani. The middle-ear system of P. duprasi thus consists of a resonating bulla in which the area of the pars flaccida tympani constitutes the resonator's opening towards the meatus and in which the pars tensa tympani functions as a pressure gradient receiver, due to phase differences caused by the resistance of the cochlea and by the resonance properties of the bulla. By these functional principles, the peripheral auditory system of P. duprasi is capable of low-frequency perception despite the smallness of its structures. The middle ear of P. duprasi thus represents a prime example of a strategy by which the dimensional constraints imposed by a general bauplan for the peripheral auditory system have been overcome.

  9. Mapping auditory nerve firing density using high-level compound action potentials and high-pass noise masking.

    PubMed Central

    Earl, Brian R.; Chertoff, Mark E.

    2012-01-01

    Future implementation of regenerative treatments for sensorineural hearing loss may be hindered by the lack of diagnostic tools that specify the target(s) within the cochlea and auditory nerve for delivery of therapeutic agents. Recent research has indicated that the amplitude of high-level compound action potentials (CAPs) is a good predictor of overall auditory nerve survival, but does not pinpoint the location of neural damage. A location-specific estimate of nerve pathology may be possible by using a masking paradigm and high-level CAPs to map auditory nerve firing density throughout the cochlea. This initial study in gerbil utilized a high-pass masking paradigm to determine normative ranges for CAP-derived neural firing density functions using broadband chirp stimuli and low-frequency tonebursts, and to determine if cochlear outer hair cell (OHC) pathology alters the distribution of neural firing in the cochlea. Neural firing distributions for moderate-intensity (60 dB pSPL) chirps were affected by OHC pathology whereas those derived with high-level (90 dB pSPL) chirps were not. These results suggest that CAP-derived neural firing distributions for high-level chirps may provide an estimate of auditory nerve survival that is independent of OHC pathology. PMID:22280596

  10. A physiologically based model for temporal envelope encoding in human primary auditory cortex.

    PubMed

    Dugué, Pierre; Le Bouquin-Jeannès, Régine; Edeline, Jean-Marc; Faucon, Gérard

    2010-09-01

    Communication sounds exhibit temporal envelope fluctuations in the low frequency range (<70 Hz), and human speech has prominent 2-16 Hz modulations with a maximum at 3-4 Hz. Here, we propose a new phenomenological model of the human auditory pathway (from cochlea to primary auditory cortex) to simulate responses to amplitude-modulated white noise. To validate the model, performance was estimated by quantifying temporal modulation transfer functions (TMTFs). Previous models considered either the lower stages of the auditory system (up to the inferior colliculus) or only the thalamocortical loop. The present model, divided into two stages, is based on anatomical and physiological findings and includes the entire auditory pathway. The first stage, from the outer ear to the colliculus, incorporates inhibitory interneurons in the cochlear nucleus to increase performance at high stimulus levels. The second stage takes into account the anatomical connections of the thalamocortical system and includes fast and slow excitatory and inhibitory currents. After optimizing the parameters of the model to reproduce the diversity of TMTFs obtained from human subjects, a patient-specific model was derived and the parameters were optimized to effectively reproduce both spontaneous activity and the oscillatory part of the evoked response. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  11. Sound envelope processing in the developing human brain: A MEG study.

    PubMed

    Tang, Huizhen; Brock, Jon; Johnson, Blake W

    2016-02-01

    This study investigated auditory cortical processing of linguistically-relevant temporal modulations in the developing brains of young children. Auditory envelope following responses to white noise amplitude modulated at rates of 1-80 Hz in healthy children (aged 3-5 years) and adults were recorded using a paediatric magnetoencephalography (MEG) system and a conventional MEG system, respectively. For children, there were envelope following responses to slow modulations but no significant responses to rates higher than about 25 Hz, whereas adults showed significant envelope following responses to almost the entire range of stimulus rates. Our results show that the auditory cortex of preschool-aged children has a sharply limited capacity to process rapid amplitude modulations in sounds, as compared to the auditory cortex of adults. These neurophysiological results are consistent with previous psychophysical evidence for a protracted maturational time course for auditory temporal processing. The findings are also in good agreement with current linguistic theories that posit a perceptual bias for low frequency temporal information in speech during language acquisition. These insights also have clinical relevance for our understanding of language disorders that are associated with difficulties in processing temporal information in speech. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  12. Broadened population-level frequency tuning in the auditory cortex of tinnitus patients.

    PubMed

    Sekiya, Kenichi; Takahashi, Mariko; Murakami, Shingo; Kakigi, Ryusuke; Okamoto, Hidehiko

    2017-03-01

    Tinnitus is a phantom auditory perception without an external sound source and is one of the most common public health concerns that impair the quality of life of many individuals. However, its neural mechanisms remain unclear. We herein examined population-level frequency tuning in the auditory cortex of unilateral tinnitus patients with similar hearing levels in both ears using magnetoencephalography. We compared auditory-evoked neural activities elicited by a stimulation to the tinnitus and nontinnitus ears. Objective magnetoencephalographic data suggested that population-level frequency tuning corresponding to the tinnitus ear was significantly broader than that corresponding to the nontinnitus ear in the human auditory cortex. The results obtained support the hypothesis that pathological alterations in inhibitory neural networks play an important role in the perception of subjective tinnitus. NEW & NOTEWORTHY Although subjective tinnitus is one of the most common public health concerns that impair the quality of life of many individuals, no standard treatment or objective diagnostic method currently exists. We herein revealed that population-level frequency tuning was significantly broader in the tinnitus ear than in the nontinnitus ear. The results of the present study provide an insight into the development of an objective diagnostic method for subjective tinnitus. Copyright © 2017 the American Physiological Society.

  13. Encoding frequency contrast in primate auditory cortex

    PubMed Central

    Scott, Brian H.; Semple, Malcolm N.

    2014-01-01

    Changes in amplitude and frequency jointly determine much of the communicative significance of complex acoustic signals, including human speech. We have previously described responses of neurons in the core auditory cortex of awake rhesus macaques to sinusoidal amplitude modulation (SAM) signals. Here we report a complementary study of sinusoidal frequency modulation (SFM) in the same neurons. Responses to SFM were analogous to SAM responses in that changes in multiple parameters defining SFM stimuli (e.g., modulation frequency, modulation depth, carrier frequency) were robustly encoded in the temporal dynamics of the spike trains. For example, changes in the carrier frequency produced highly reproducible changes in shapes of the modulation period histogram, consistent with the notion that the instantaneous probability of discharge mirrors the moment-by-moment spectrum at low modulation rates. The upper limit for phase locking was similar across SAM and SFM within neurons, suggesting shared biophysical constraints on temporal processing. Using spike train classification methods, we found that neural thresholds for modulation depth discrimination are typically far lower than would be predicted from frequency tuning to static tones. This “dynamic hyperacuity” suggests a substantial central enhancement of the neural representation of frequency changes relative to the auditory periphery. Spike timing information was superior to average rate information when discriminating among SFM signals, and even when discriminating among static tones varying in frequency. This finding held even when differences in total spike count across stimuli were normalized, indicating both the primacy and generality of temporal response dynamics in cortical auditory processing. PMID:24598525

  14. Frequency preference and attention effects across cortical depths in the human primary auditory cortex.

    PubMed

    De Martino, Federico; Moerel, Michelle; Ugurbil, Kamil; Goebel, Rainer; Yacoub, Essa; Formisano, Elia

    2015-12-29

    Columnar arrangements of neurons with similar preference have been suggested as the fundamental processing units of the cerebral cortex. Within these columnar arrangements, feed-forward information enters at middle cortical layers, whereas feedback information arrives at superficial and deep layers. This interplay of feed-forward and feedback processing is at the core of perception and behavior. Here we provide in vivo evidence consistent with a columnar organization of the processing of sound frequency in the human auditory cortex. We measure submillimeter functional responses to sound frequency sweeps at high magnetic fields (7 tesla) and show that frequency preference is stable through cortical depth in primary auditory cortex. Furthermore, we demonstrate that, in this highly columnar cortex, task demands sharpen the frequency tuning in superficial cortical layers more than in middle or deep layers. These findings are pivotal to understanding mechanisms of neural information processing and flow during the active perception of sounds.

  15. Perceptual consequences of disrupted auditory nerve activity.

    PubMed

    Zeng, Fan-Gang; Kong, Ying-Yee; Michalewski, Henry J; Starr, Arnold

    2005-06-01

    Perceptual consequences of disrupted auditory nerve activity were systematically studied in 21 subjects who had been clinically diagnosed with auditory neuropathy (AN), a recently defined disorder characterized by normal outer hair cell function but disrupted auditory nerve function. Neurological and electrophysiological evidence suggests that disrupted auditory nerve activity is due to desynchronized or reduced neural activity, or both. Psychophysical measures showed that the disrupted neural activity has minimal effects on intensity-related perception, such as loudness discrimination, pitch discrimination at high frequencies, and sound localization using interaural level differences. In contrast, the disrupted neural activity significantly impairs timing-related perception, such as pitch discrimination at low frequencies, temporal integration, gap detection, temporal modulation detection, backward and forward masking, signal detection in noise, binaural beats, and sound localization using interaural time differences. These perceptual consequences are the opposite of what is typically observed in cochlear-impaired subjects, who have impaired intensity perception but relatively normal temporal processing after taking their impaired intensity perception into account. These differences in perceptual consequences between auditory neuropathy and cochlear damage suggest the use of different neural codes in auditory perception: a suboptimal spike count code for intensity processing, a synchronized spike code for temporal processing, and a duplex code for frequency processing. We also propose two underlying physiological models, based on desynchronized and reduced discharge in the auditory nerve, to account for the observed neurological and behavioral data. The present methods and measures cannot differentiate between these two AN models, but future studies using electric stimulation of the auditory nerve via a cochlear implant might. 
These results not only show the unique contribution of neural synchrony to sensory perception but also provide guidance for translational research in terms of better diagnosis and management of human communication disorders.

  16. Sensorimotor learning in children and adults: Exposure to frequency-altered auditory feedback during speech production.

    PubMed

    Scheerer, N E; Jacobson, D S; Jones, J A

    2016-02-09

    Auditory feedback plays an important role in the acquisition of fluent speech; however, this role may change once speech is acquired and individuals no longer experience persistent developmental changes to the brain and vocal tract. For this reason, we investigated whether the role of auditory feedback in sensorimotor learning differs across children and adult speakers. Participants produced vocalizations while they heard their vocal pitch predictably or unpredictably shifted downward one semitone. The participants' vocal pitches were measured at the beginning of each vocalization, before auditory feedback was available, to assess the extent to which the deviant auditory feedback modified subsequent speech motor commands. Sensorimotor learning was observed in both children and adults, with participants' initial vocal pitch increasing following trials where they were exposed to predictable, but not unpredictable, frequency-altered feedback. Participants' vocal pitch was also measured across each vocalization, to index the extent to which the deviant auditory feedback was used to modify ongoing vocalizations. While both children and adults were found to increase their vocal pitch following predictable and unpredictable changes to their auditory feedback, adults produced larger compensatory responses. The results of the current study demonstrate that both children and adults rapidly integrate information derived from their auditory feedback to modify subsequent speech motor commands. However, these results also demonstrate that children and adults differ in their ability to use auditory feedback to generate compensatory vocal responses during ongoing vocalization. Since vocal variability also differed across the children and adult groups, these results also suggest that compensatory vocal responses to frequency-altered feedback manipulations initiated at vocalization onset may be modulated by vocal variability. Copyright © 2015 IBRO. Published by Elsevier Ltd. 
All rights reserved.

  17. High-density EEG characterization of brain responses to auditory rhythmic stimuli during wakefulness and NREM sleep.

    PubMed

    Lustenberger, Caroline; Patel, Yogi A; Alagapan, Sankaraleengam; Page, Jessica M; Price, Betsy; Boyle, Michael R; Fröhlich, Flavio

    2018-04-01

    Auditory rhythmic sensory stimulation modulates brain oscillations by increasing phase-locking to the temporal structure of the stimuli and by increasing the power of specific frequency bands, resulting in Auditory Steady State Responses (ASSR). The ASSR is altered in different diseases of the central nervous system such as schizophrenia. However, in order to use the ASSR as a biological marker for disease states, it needs to be understood how different vigilance states and underlying brain activity affect the ASSR. Here, we compared the effects of auditory rhythmic stimuli on EEG brain activity during wake and NREM sleep, investigated the influence of the presence of dominant sleep rhythms on the ASSR, and delineated the topographical distribution of these modulations. Participants (14 healthy males, 20-33 years) completed on the same day a 60 min nap session and two 30 min wakefulness sessions (before and after the nap). During these sessions, amplitude-modulated (AM) white noise auditory stimuli at different frequencies were applied. High-density EEG was continuously recorded, and time-frequency analyses were performed to assess the ASSR during wakefulness and NREM periods. Our analysis revealed that, depending on the electrode location, the stimulation frequency applied, and the window/frequencies analysed, the ASSR was significantly modulated by sleep pressure (before and after sleep), vigilance state (wake vs. NREM sleep), and the presence of slow wave activity and sleep spindles. Furthermore, AM stimuli increased spindle activity during NREM sleep but not during wakefulness. Thus, (1) electrode location, sleep history, vigilance state and ongoing brain activity need to be carefully considered when investigating the ASSR, and (2) auditory rhythmic stimuli during sleep might represent a powerful tool to boost sleep spindles. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. [Functional anatomy of the cochlear nerve and the central auditory system].

    PubMed

    Simon, E; Perrot, X; Mertens, P

    2009-04-01

    The auditory pathways are a system of afferent fibers (through the cochlear nerve) and efferent fibers (through the vestibular nerve). Far from being a simple transmission system, they achieve a veritable integration of the sound stimulus at each level by analyzing its three fundamental elements: frequency (pitch), intensity, and spatial localization of the sound source. From the cochlea to the primary auditory cortex, the auditory fibers are organized anatomically according to the characteristic frequency of the sound signal that they transmit (tonotopy). Coding of the intensity of the sound signal is based on temporal recruitment (the number of action potentials) and spatial recruitment (the number of inner hair cells recruited around the cell whose characteristic frequency matches the stimulus). Spatial localization of the sound source is made possible by binaural hearing, by commissural pathways at each level of the auditory system, and by integration of the phase shift and intensity difference between the signals arriving at the two ears. Finally, through the efferent fibers in the vestibular nerve, higher centers exercise control over the activity of the cochlea, adjusting the peripheral hearing organ to external sound conditions and thus protecting the auditory system or increasing its sensitivity through attention to the signal.

  19. The sensitivity of auditory-motor representations to subtle changes in auditory feedback while singing

    PubMed Central

    Keough, Dwayne; Jones, Jeffery A.

    2009-01-01

    Singing requires accurate control of the fundamental frequency (F0) of the voice. This study examined trained singers’ and untrained singers’ (nonsingers’) sensitivity to subtle manipulations in auditory feedback and the subsequent effect on the mapping between F0 feedback and vocal control. Participants produced the consonant-vowel ∕ta∕ while receiving auditory feedback that was shifted up and down in frequency. Results showed that singers and nonsingers compensated to a similar degree when presented with frequency-altered feedback (FAF); however, singers’ F0 values were consistently closer to the intended pitch target. Moreover, singers initiated their compensatory responses when auditory feedback was shifted up or down 6 cents or more, compared to nonsingers who began compensating when feedback was shifted up 26 cents and down 22 cents. Additionally, examination of the first 50 ms of vocalization indicated that participants commenced subsequent vocal utterances, during FAF, near the F0 value on previous shift trials. Interestingly, nonsingers commenced F0 productions below the pitch target and increased their F0 until they matched the note. Thus, singers and nonsingers rely on an internal model to regulate voice F0, but singers’ models appear to be more sensitive in response to subtle discrepancies in auditory feedback. PMID:19640048

  1. Basic Auditory Processing and Developmental Dyslexia in Chinese

    ERIC Educational Resources Information Center

    Wang, Hsiao-Lan Sharon; Huss, Martina; Hamalainen, Jarmo A.; Goswami, Usha

    2012-01-01

    The present study explores the relationship between basic auditory processing of sound rise time, frequency, duration and intensity, phonological skills (onset-rime and tone awareness, sound blending, RAN, and phonological memory) and reading disability in Chinese. A series of psychometric, literacy, phonological, auditory, and character…

  2. Auditory Frequency Discrimination in Children with Specific Language Impairment: A Longitudinal Study

    ERIC Educational Resources Information Center

    Hill, P. R.; Hogben, J. H.; Bishop, D. M. V.

    2005-01-01

    It has been proposed that specific language impairment (SLI) is caused by an impairment of auditory processing, but it is unclear whether this problem affects temporal processing, frequency discrimination (FD), or both. Furthermore, there are few longitudinal studies in this area, making it hard to establish whether any deficit represents a…

  3. Auditory Attention to Frequency and Time: An Analogy to Visual Local-Global Stimuli

    ERIC Educational Resources Information Center

    Justus, Timothy; List, Alexandra

    2005-01-01

    Two priming experiments demonstrated exogenous attentional persistence to the fundamental auditory dimensions of frequency (Experiment 1) and time (Experiment 2). In a divided-attention task, participants responded to an independent dimension, the identification of three-tone sequence patterns, for both prime and probe stimuli. The stimuli were…

  4. Basic Auditory Processing Deficits in Dyslexia: Systematic Review of the Behavioral and Event-Related Potential/Field Evidence

    ERIC Educational Resources Information Center

    Hämäläinen, Jarmo A.; Salminen, Hanne K.; Leppänen, Paavo H. T.

    2013-01-01

    A review of research that uses behavioral, electroencephalographic, and/or magnetoencephalographic methods to investigate auditory processing deficits in individuals with dyslexia is presented. Findings show that measures of frequency, rise time, and duration discrimination as well as amplitude modulation and frequency modulation detection were…

  5. Auditory Stream Segregation and the Perception of Across-Frequency Synchrony

    ERIC Educational Resources Information Center

    Micheyl, Christophe; Hunter, Cynthia; Oxenham, Andrew J.

    2010-01-01

    This study explored the extent to which sequential auditory grouping affects the perception of temporal synchrony. In Experiment 1, listeners discriminated between 2 pairs of asynchronous "target" tones at different frequencies, A and B, in which the B tone either led or lagged. Thresholds were markedly higher when the target tones were temporally…

  6. Auditory Performance and Electrical Stimulation Measures in Cochlear Implant Recipients With Auditory Neuropathy Compared With Severe to Profound Sensorineural Hearing Loss.

    PubMed

    Attias, Joseph; Greenstein, Tally; Peled, Miriam; Ulanovski, David; Wohlgelernter, Jay; Raveh, Eyal

    The aim of the study was to compare auditory and speech outcomes and electrical parameters on average 8 years after cochlear implantation between children with isolated auditory neuropathy (AN) and children with sensorineural hearing loss (SNHL). The study was conducted at a tertiary, university-affiliated pediatric medical center. The cohort included 16 patients with isolated AN with current age of 5 to 12.2 years who had been using a cochlear implant for at least 3.4 years and 16 control patients with SNHL matched for duration of deafness, age at implantation, type of implant, and unilateral/bilateral implant placement. All participants had had extensive auditory rehabilitation before and after implantation, including the use of conventional hearing aids. Most patients received Cochlear Nucleus devices, and the remainder either Med-El or Advanced Bionics devices. Unaided pure-tone audiograms were evaluated before and after implantation. Implantation outcomes were assessed by auditory and speech recognition tests in quiet and in noise. Data were also collected on the educational setting at 1 year after implantation and at school age. The electrical stimulation measures were evaluated only in the Cochlear Nucleus implant recipients in the two groups. Similar mapping and electrical measurement techniques were used in the two groups. Electrical thresholds, comfortable level, dynamic range, and objective neural response telemetry threshold were measured across the 22-electrode array in each patient. Main outcome measures were between-group differences in the following parameters: (1) Auditory and speech tests. (2) Residual hearing. (3) Electrical stimulation parameters. (4) Correlations of residual hearing at low frequencies with electrical thresholds at the basal, middle, and apical electrodes. The children with isolated AN performed equally well to the children with SNHL on auditory and speech recognition tests in both quiet and noise. 
More children in the AN group than the SNHL group were attending mainstream educational settings at school age, but the difference was not statistically significant. Significant between-group differences were noted in electrical measurements: the AN group was characterized by a lower current charge to reach subjective electrical thresholds, lower comfortable level and dynamic range, and lower telemetric neural response threshold. Based on pure-tone audiograms, the children with AN also had more residual hearing before and after implantation. Highly positive coefficients were found on correlation analysis between T levels across the basal and midcochlear electrodes and low-frequency acoustic thresholds. Prelingual children with isolated AN who fail to show expected oral and auditory progress after extensive rehabilitation with conventional hearing aids should be considered for cochlear implantation. Children with isolated AN showed a similar pattern to children with SNHL on auditory performance tests after cochlear implantation. The lower current charge required to evoke subjective and objective electrical thresholds in children with AN compared with children with SNHL may be attributed to the contribution of electrophonic hearing from the remaining neurons and hair cells. In addition, it is also possible that mechanical stimulation of the basilar membrane, as in acoustic stimulation, is added to the electrical stimulation of the cochlear implant.

  7. Seasonal plasticity of auditory saccular sensitivity in the vocal plainfin midshipman fish, Porichthys notatus.

    PubMed

    Sisneros, Joseph A

    2009-08-01

    The plainfin midshipman fish, Porichthys notatus, is a seasonally breeding species of marine teleost fish that generates acoustic signals for intraspecific social and reproductive-related communication. Female midshipman use the inner ear saccule as the main acoustic endorgan for hearing to detect and locate vocalizing males that produce multiharmonic advertisement calls during the breeding season. Previous work showed that the frequency sensitivity of midshipman auditory saccular afferents changed seasonally with female reproductive state such that summer reproductive females became better suited than winter nonreproductive females to encode the dominant higher harmonics of the male advertisement calls. The focus of this study was to test the hypothesis that seasonal reproductive-dependent changes in saccular afferent tuning is paralleled by similar changes in saccular sensitivity at the level of the hair-cell receptor. Here, I examined the evoked response properties of midshipman saccular hair cells from winter nonreproductive and summer reproductive females to determine if reproductive state affects the frequency response and threshold of the saccule to behaviorally relevant single tone stimuli. Saccular potentials were recorded from populations of hair cells in vivo while sound was presented by an underwater speaker. Results indicate that saccular hair cells from reproductive females had thresholds that were approximately 8 to 13 dB lower than nonreproductive females across a broad range of frequencies that included the dominant higher harmonic components and the fundamental frequency of the male's advertisement call. These seasonal-reproductive-dependent changes in thresholds varied differentially across the three (rostral, middle, and caudal) regions of the saccule. 
Such reproductive-dependent changes in saccule sensitivity may represent an adaptive plasticity of the midshipman auditory sense to enhance mate detection, recognition, and localization during the breeding season.

  8. Functional and structural aspects of tinnitus-related enhancement and suppression of auditory cortex activity.

    PubMed

    Diesch, Eugen; Andermann, Martin; Flor, Herta; Rupp, Andre

    2010-05-01

    The steady-state auditory evoked magnetic field was recorded in tinnitus patients and controls, both either musicians or non-musicians, all of them with high-frequency hearing loss. Stimuli were AM tones with two modulation frequencies and three carrier frequencies: the "audiometric edge", i.e. the frequency above which hearing loss increases more rapidly; the tinnitus frequency (or, in controls, the frequency 1 1/2 octaves above the audiometric edge); and a frequency 1 1/2 octaves below the audiometric edge. Stimuli equated in carrier frequency, but differing in modulation frequency, were simultaneously presented to the two ears. The modulation frequency-specific components of the dual steady-state response were recovered by bandpass filtering. In both hemispheres, the source amplitude of the response was larger for contralateral than ipsilateral input. In non-musicians with tinnitus, this laterality effect was enhanced in the hemisphere contralateral and reduced in the hemisphere ipsilateral to the tinnitus ear, especially for the tinnitus frequency. The hemisphere-by-input laterality dominance effect was smaller in musicians than in non-musicians. In both patient groups, the change of source amplitude over time, i.e. the amplitude slope, increased with tonal frequency for contralateral input and decreased for ipsilateral input. However, the slope was smaller for musicians than for non-musicians. In patients, source amplitude was negatively correlated with the MRI-determined volume of the medial partition of Heschl's gyrus. Tinnitus patients thus show an altered excitatory-inhibitory balance, reflecting a downregulation of inhibition and resulting in a steeper dominance hierarchy among simultaneous processes in auditory cortex. The direction and extent of this alteration are modulated by musicality and auditory cortex volume.

  9. Decreased echolocation performance following high-frequency hearing loss in the false killer whale (Pseudorca crassidens).

    PubMed

    Kloepper, L N; Nachtigall, P E; Gisiner, R; Breese, M

    2010-11-01

    Toothed whales and dolphins possess a hypertrophied auditory system that allows for the production and hearing of ultrasonic signals. Although the fossil record provides information on the evolution of the auditory structures found in extant odontocetes, it cannot provide information on the evolutionary pressures leading to the hypertrophied auditory system. Investigating the effect of hearing loss may shed light on why high-frequency hearing developed in echolocating animals by demonstrating how high-frequency hearing supports a functioning echolocation system. The discrimination abilities of a false killer whale (Pseudorca crassidens) were measured prior to and after documented high-frequency hearing loss. In 1992, the subject had good hearing and could hear at frequencies up to 100 kHz. In 2008, the subject had lost hearing at frequencies above 40 kHz. First in 1992, and then again in 2008, the subject performed an identical echolocation task, discriminating between machined hollow aluminum cylinder targets of differing wall thickness. Performances were recorded for individual target differences and compared between both experimental years. Performances on individual targets dropped between 1992 and 2008, with a maximum performance reduction of 36.1%. These data indicate that, with a loss in high-frequency hearing, there was a concomitant reduction in echolocation discrimination ability, and suggest that the development of a hypertrophied auditory system capable of hearing at ultrasonic frequencies evolved in response to pressures for fine-scale echolocation discrimination.

  10. Functional Topography of Human Auditory Cortex

    PubMed Central

    Rauschecker, Josef P.

    2016-01-01

    Functional and anatomical studies have clearly demonstrated that auditory cortex is populated by multiple subfields. However, functional characterization of those fields has been largely the domain of animal electrophysiology, limiting the extent to which human and animal research can inform each other. In this study, we used high-resolution functional magnetic resonance imaging to characterize human auditory cortical subfields using a variety of low-level acoustic features in the spectral and temporal domains. Specifically, we show that topographic gradients of frequency preference, or tonotopy, extend along two axes in human auditory cortex, thus reconciling historical accounts of a tonotopic axis oriented medial to lateral along Heschl's gyrus and more recent findings emphasizing tonotopic organization along the anterior–posterior axis. Contradictory findings regarding topographic organization according to temporal modulation rate in acoustic stimuli, or “periodotopy,” are also addressed. Although isolated subregions show a preference for high rates of amplitude-modulated white noise (AMWN) in our data, large-scale “periodotopic” organization was not found. Organization by AM rate was correlated with dominant pitch percepts in AMWN in many regions. In short, our data expose early auditory cortex chiefly as a frequency analyzer, and spectral frequency, as imposed by the sensory receptor surface in the cochlea, seems to be the dominant feature governing large-scale topographic organization across human auditory cortex. SIGNIFICANCE STATEMENT In this study, we examine the nature of topographic organization in human auditory cortex with fMRI. Topographic organization by spectral frequency (tonotopy) extended in two directions: medial to lateral, consistent with early neuroimaging studies, and anterior to posterior, consistent with more recent reports. 
Large-scale organization by rates of temporal modulation (periodotopy) was correlated with confounding spectral content of amplitude-modulated white-noise stimuli. Together, our results suggest that the organization of human auditory cortex is driven primarily by its response to spectral acoustic features, and large-scale periodotopy spanning across multiple regions is not supported. This fundamental information regarding the functional organization of early auditory cortex will inform our growing understanding of speech perception and the processing of other complex sounds. PMID:26818527

  11. Neural mechanisms of mismatch negativity dysfunction in schizophrenia.

    PubMed

    Lee, M; Sehatpour, P; Hoptman, M J; Lakatos, P; Dias, E C; Kantrowitz, J T; Martinez, A M; Javitt, D C

    2017-11-01

    Schizophrenia is associated with cognitive deficits that reflect impaired cortical information processing. Mismatch negativity (MMN) indexes pre-attentive information processing dysfunction at the level of primary auditory cortex. This study investigates mechanisms underlying MMN impairments in schizophrenia using event-related potential, event-related spectral decomposition (ERSP) and resting state functional connectivity (rsfcMRI) approaches. For this study, MMN data to frequency, intensity and duration-deviants were analyzed from 69 schizophrenia patients and 38 healthy controls. rsfcMRI was obtained from a subsample of 38 patients and 23 controls. As expected, schizophrenia patients showed highly significant, large effect size (P=0.0004, d=1.0) deficits in MMN generation across deviant types. In ERSP analyses, responses to deviants occurred primarily in the theta (4-7 Hz) frequency range, consistent with distributed corticocortical processing, whereas responses to standards occurred primarily in the alpha (8-12 Hz) range, consistent with known frequencies of thalamocortical activation. Independent deficits in schizophrenia were observed in both the theta response to deviants (P=0.021) and the alpha response to standards (P=0.003). At the single-trial level, differential patterns of response were observed for frequency vs duration/intensity deviants. At the network level, MMN deficits engaged canonical somatomotor, ventral attention and default networks, with a differential pattern of engagement across deviant types (P<0.0001). Findings indicate that deficits in thalamocortical, as well as corticocortical, connectivity contribute to auditory dysfunction in schizophrenia. In addition, differences in ERSP and rsfcMRI profiles across deviant types suggest potential differential engagement of underlying generator mechanisms.
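    The theta/alpha band separation described in the ERSP analysis can be illustrated with a minimal sketch (not the authors' pipeline): total spectral power in the theta (4-7 Hz) and alpha (8-12 Hz) bands is extracted from a single trial via an FFT. The sampling rate and synthetic signal below are arbitrary choices for the example.

    ```python
    import numpy as np

    def band_power(x, fs, lo, hi):
        """Total spectral power of x between lo and hi Hz (inclusive)."""
        spectrum = np.fft.rfft(x)
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        mask = (freqs >= lo) & (freqs <= hi)
        return float(np.sum(np.abs(spectrum[mask]) ** 2))

    # Synthetic 2 s trial: a strong 6 Hz (theta) plus a weaker 10 Hz (alpha) component.
    fs = 256.0
    t = np.arange(int(2 * fs)) / fs
    trial = 2.0 * np.sin(2 * np.pi * 6 * t) + 1.0 * np.sin(2 * np.pi * 10 * t)

    theta = band_power(trial, fs, 4, 7)
    alpha = band_power(trial, fs, 8, 12)
    # Theta dominates here: a 2:1 amplitude ratio gives a 4:1 power ratio.
    ```

    In practice such band powers would be averaged over trials and baseline-normalized; this sketch only shows the band-selection step.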

  12. Multichannel electrical stimulation of the auditory nerve in man. I. Basic psychophysics.

    PubMed

    Shannon, R V

    1983-08-01

    Basic psychophysical measurements were obtained from three patients implanted with multichannel cochlear implants. This paper presents measurements from stimulation of a single channel at a time (either monopolar or bipolar). The shape of the threshold vs. frequency curve can be partially related to the membrane biophysics of the remaining spiral ganglion and/or dendrites. Nerve survival in the region of the electrode may produce some increase in the dynamic range on that electrode. Loudness was related to the stimulus amplitude by a power law with exponents between 1.6 and 3.4, depending on frequency. Intensity discrimination was better than for normal auditory stimulation, but not enough to offset the small dynamic range for electrical stimulation. Measures of temporal integration were comparable to normals, indicating a central mechanism that is still intact in implant patients. No frequency analysis of the electrical signal was observed. Each electrode produced a unique pitch sensation, but these pitches were not simply related to the tonotopic position of the stimulated electrode. Pitch increased over more than 4 octaves (for one patient) as the frequency was increased from 100 to 300 Hz, but above 300 Hz no pitch change was observed. Possibly the major limitation of single channel cochlear implants is the 1-2 ms integration time (probably due to the capacitive properties of the nerve membrane, which acts as a low-pass filter at 100 Hz). Another limitation of electrical stimulation is that there is no spectral analysis of the electrical waveform, so that temporal waveform alone determines the effective stimulus.
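    The power-law loudness growth reported above can be sketched in a few lines. This is an illustrative sketch, assuming the simple relation L = k * A**p with exponent p between roughly 1.6 and 3.4; the constant k and the amplitudes used here are not values from the paper.

    ```python
    def loudness(amplitude, exponent, k=1.0):
        """Power-law loudness: L = k * A**p (illustrative parameters)."""
        return k * amplitude ** exponent

    # Doubling the stimulus amplitude multiplies loudness by 2**p, so a
    # steeper exponent spans the full loudness range over a smaller
    # electrical dynamic range -- consistent with the small dynamic
    # ranges described for electrical stimulation.
    growth_shallow = loudness(2.0, 1.6) / loudness(1.0, 1.6)  # 2**1.6
    growth_steep = loudness(2.0, 3.4) / loudness(1.0, 3.4)    # 2**3.4
    ```

    With p = 3.4, one amplitude doubling covers more than a tenfold loudness change, which is one way to read the paper's point that good intensity discrimination cannot offset the compressed electrical dynamic range.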

  13. Localizing pre-attentive auditory memory-based comparison: magnetic mismatch negativity to pitch change.

    PubMed

    Maess, Burkhard; Jacobsen, Thomas; Schröger, Erich; Friederici, Angela D

    2007-08-15

    Changes in the pitch of repetitive sounds elicit the mismatch negativity (MMN) of the event-related brain potential (ERP). There exist two alternative accounts for this index of automatic change detection: (1) A sensorial, non-comparator account according to which ERPs in oddball sequences are affected by differential refractory states of frequency-specific afferent cortical neurons. (2) A cognitive, comparator account stating that MMN reflects the outcome of a memory comparison between a neuronal model of the frequently presented standard sound and the sensory memory representation of the changed sound. Using a condition controlling for refractoriness effects, the two contributions to MMN can be disentangled. The present study used whole-head MEG to further elucidate the sensorial and cognitive contributions to frequency MMN. Results replicated ERP findings that MMN to pitch change is a compound of the activity of a sensorial, non-comparator mechanism and a cognitive, comparator mechanism, which could be separated in time. The sensorial part of frequency MMN, consisting of spatially dipolar patterns, was maximal in the late N1 range (105-125 ms), while the cognitive part peaked in the late MMN range (170-200 ms). Spatial principal component analyses revealed that the early part of the traditionally measured MMN (deviant minus standard) is mainly due to the sensorial mechanism, while the later part is mainly due to the cognitive mechanism. Inverse modeling revealed sources for both MMN contributions in the gyrus temporalis transversus, bilaterally. These MEG results suggest temporally distinct but spatially overlapping activities of non-comparator-based and comparator-based mechanisms of automatic frequency change detection in auditory cortex.

  14. Comparison of temporal properties of auditory single units in response to cochlear infrared laser stimulation recorded with multi-channel and single tungsten electrodes

    NASA Astrophysics Data System (ADS)

    Tan, Xiaodong; Xia, Nan; Young, Hunter; Richter, Claus-Peter

    2015-02-01

    Auditory prostheses may benefit from Infrared Neural Stimulation (INS) because optical stimulation allows for spatially selective activation of neuron populations. Selective activation of neurons in the cochlear spiral ganglion can be determined in the central nucleus of the inferior colliculus (ICC) because the tonotopic organization of frequencies in the cochlea is maintained throughout the auditory pathway. The activation profile of INS is well represented in the ICC by multichannel electrodes (MCEs). To characterize single unit properties in response to INS, however, single tungsten electrodes (STEs) should be used because of their better signal-to-noise ratio. In this study, we compared the temporal properties of ICC single units recorded with MCEs and STEs in order to characterize the response properties of single auditory neurons in response to INS in guinea pigs. The length along the cochlea stimulated with infrared radiation corresponded to a frequency range of about 0.6 octaves, similar to that recorded with STEs. The temporal properties of single units recorded with MCEs showed higher maximum rates, shorter latencies, and higher firing efficiencies compared to those recorded with STEs. When the preset amplitude threshold for triggering MCE recordings was raised to twice the noise level, the temporal properties of the single units became similar to those obtained with STEs. Indistinguishable neural activity from multiple sources in MCE recordings could be responsible for the response property difference between MCEs and STEs. Thus, caution should be taken in single unit recordings with MCEs.

  15. Smoke alarms for sleeping adults who are hard-of-hearing: comparison of auditory, visual, and tactile signals.

    PubMed

    Bruck, Dorothy; Thomas, Ian R

    2009-02-01

    People who are hard-of-hearing may rely on auditory, visual, or tactile alarms in a fire emergency, and US standards require strobe lights in hotel bedrooms to provide emergency notification for people with hearing loss. This is the first study to compare the waking effectiveness of a variety of auditory (beeps), tactile (bed and pillow shakers), and visual (strobe lights) signals at a range of intensities. Three auditory signals, a bed shaker, a pillow shaker, and strobe lights were presented to 38 adults (aged 18 to 80 yr) with mild to moderately severe hearing loss of 25 to 70 dB (in both ears), during slow-wave sleep (deep sleep). Two of the auditory signals were selected on the basis that they had the lowest auditory thresholds when awake (from a range of eight signals). The third auditory signal was the current 3100-Hz smoke alarm. All auditory signals were tested below, at, and above the decibel level prescribed by the applicable standard for bedrooms (75 dBA). In the case of bed and pillow shakers, intensities below, at, and above the as-purchased level were tested. For strobe lights, three levels were used, all of which were above the applicable standard. The intensity level at which participants awoke was identified by electroencephalograph monitoring. The most effective signal was a 520-Hz square wave auditory signal, waking 92% at 75 dBA, compared with 56% waking to the 75 dBA high-pitched alarm. Bed and pillow shakers awoke 80 to 84% at the as-purchased intensity level. The strobe lights awoke only 27% at an intensity above the US standard. Nonparametric analyses confirmed that the 520-Hz square wave signal was significantly more effective than the current smoke alarm and the strobe lights in waking this population. 
A low-frequency square wave signal has now been found to be significantly more effective than all tested alternatives in a number of populations (hard-of-hearing, children, older adults, young adults, alcohol impaired) and should be adopted across the whole population as the normal smoke alarm signal. Strobe lights, even at high intensities, are ineffective in reliably waking people with mild to moderate hearing loss.

  16. Brainstem Correlates of Temporal Auditory Processing in Children with Specific Language Impairment

    ERIC Educational Resources Information Center

    Basu, Madhavi; Krishnan, Ananthanarayan; Weber-Fox, Christine

    2010-01-01

    Deficits in identification and discrimination of sounds with short inter-stimulus intervals or short formant transitions in children with specific language impairment (SLI) have been taken to reflect an underlying temporal auditory processing deficit. Using the sustained frequency following response (FFR) and the onset auditory brainstem responses…

  17. Injury- and Use-Related Plasticity in the Adult Auditory System.

    ERIC Educational Resources Information Center

    Irvine, Dexter R. F.

    2000-01-01

    This article discusses findings concerning the plasticity of auditory cortical processing mechanisms in adults, including the effects of restricted cochlear damage or behavioral training with acoustic stimuli on the frequency selectivity of auditory cortical neurons and evidence for analogous injury- and use-related plasticity in the adult human…

  18. [Significance of auditory and kinesthetic feedback in vocal training of young professional singers (students)].

    PubMed

    Ciochină, Al D; Ciochină, Paula; Cobzeanu, M D; Burlui, Ada; Zaharia, D

    2004-01-01

    The aim of this study was to estimate the significance of auditory and kinesthetic feedback for accurate control of fundamental frequency (F0) in 18 students beginning a professional singing education. The students sang an ascending and descending triad pattern covering their entire pitch range, with and without masking noise, in legato and staccato, and in a slow and a fast tempo. F0 was measured by a computer program. The interval sizes between adjacent tones were determined and their departures from equally tempered tuning were calculated; these deviations were used as a measure of the accuracy of intonation. Intonation accuracy was reduced by masking noise, by staccato as opposed to legato singing, and by fast as opposed to slow performance. The contribution of auditory feedback to pitch control did not improve significantly with training, whereas the kinesthetic feedback circuit improved in slow legato and slow staccato tasks. The results support the assumption that kinesthetic feedback contributes substantially to intonation accuracy.

  19. Evidence for an Auditory Fovea in the New Zealand Kiwi (Apteryx mantelli)

    PubMed Central

    Corfield, Jeremy; Kubke, M. Fabiana; Parsons, Stuart; Wild, J. Martin; Köppl, Christine

    2011-01-01

    Kiwi are rare and strictly protected birds of iconic status in New Zealand. Yet, perhaps due to their unusual, nocturnal lifestyle, surprisingly little is known about their behaviour or physiology. In the present study, we exploited known correlations between morphology and physiology in the avian inner ear and brainstem to predict the frequency range of best hearing in the North Island brown kiwi. The mechanosensitive hair bundles of the sensory hair cells in the basilar papilla showed the typical change from tall bundles with few stereovilli to short bundles with many stereovilli along the apical-to-basal tonotopic axis. In contrast to most birds, however, the change was considerably less in the basal half of the epithelium. Dendritic lengths in the brainstem nucleus laminaris also showed the typical change along the tonotopic axis. However, as in the basilar papilla, the change was much less pronounced in the presumed high-frequency regions. Together, these morphological data suggest a fovea-like overrepresentation of a narrow high-frequency band in kiwi. Based on known correlations of hair-cell microanatomy and physiological responses in other birds, a specific prediction for the frequency representation along the basilar papilla of the kiwi was derived. The predicted overrepresentation of approximately 4-6 kHz matches potentially salient frequency bands of kiwi vocalisations and may thus be an adaptation to a nocturnal lifestyle in which auditory communication plays a dominant role. PMID:21887317

  20. Re-examining the upper limit of temporal pitch

    PubMed Central

    Macherey, Olivier; Carlyon, Robert P.

    2015-01-01

    Five normally-hearing listeners pitch-ranked harmonic complexes of different fundamental frequencies (F0s) filtered in three different frequency regions. Harmonics were summed either in sine, alternating sine-cosine (ALT), or pulse-spreading (PSHC) phase. The envelopes of ALT and PSHC complexes repeated at rates of 2F0 and 4F0. Pitch corresponded to those rates at low F0s, but, as F0 increased, there was a range of F0s over which pitch remained constant or dropped. Gammatone-filterbank simulations showed that, as F0 increased and the number of harmonics interacting in a filter dropped, the output of that filter switched from repeating at 2F0 or 4F0 to repeating at F0. A model incorporating this phenomenon accounted well for the data, except for complexes filtered into the highest frequency region (7800-10800 Hz). To account for the data in that region it was necessary to assume either that auditory filters at very high frequencies are sharper than traditionally believed, and/or that the auditory system applies smaller weights to filters whose outputs repeat at high rates. The results also provide new evidence on the highest pitch that can be derived from purely temporal cues, and corroborate recent reports that a complex pitch can be derived from very-high-frequency resolved harmonics. PMID:25480066
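    The envelope-rate doubling that underlies the ALT stimuli above can be demonstrated numerically. This is an illustrative sketch, not the authors' stimuli: it builds a harmonic complex in sine phase (envelope repeating at F0) and in alternating sine-cosine (ALT) phase (envelope repeating at 2F0), and counts prominent envelope peaks via an FFT-based Hilbert envelope. The parameters (F0 = 100 Hz, harmonics 10-21, equal numbers of odd and even harmonics, an even sample count) are arbitrary choices made so the example is exact; the PSHC condition is omitted.

    ```python
    import numpy as np

    def envelope(x):
        """Hilbert envelope via the FFT (assumes even-length, periodic x)."""
        n = len(x)
        spectrum = np.fft.fft(x)
        h = np.zeros(n)
        h[0] = h[n // 2] = 1.0   # keep DC and Nyquist
        h[1:n // 2] = 2.0        # double positive frequencies
        return np.abs(np.fft.ifft(spectrum * h))

    def count_envelope_peaks(x):
        """Count envelope peaks exceeding half the envelope maximum."""
        env = envelope(x)
        prev, nxt = np.roll(env, 1), np.roll(env, -1)
        peaks = (env > prev) & (env > nxt) & (env > 0.5 * env.max())
        return int(np.sum(peaks))

    F0, fs, dur = 100.0, 48000, 0.1          # 10 full F0 periods
    t = np.arange(int(fs * dur)) / fs
    harmonics = range(10, 22)                 # six odd, six even harmonics

    sine_phase = sum(np.sin(2 * np.pi * n * F0 * t) for n in harmonics)
    alt_phase = sum((np.sin if n % 2 == 0 else np.cos)(2 * np.pi * n * F0 * t)
                    for n in harmonics)

    # Envelope peak rate: F0 for sine phase (10 peaks in 0.1 s),
    # 2F0 for ALT phase (20 peaks in 0.1 s).
    n_sine = count_envelope_peaks(sine_phase)
    n_alt = count_envelope_peaks(alt_phase)
    ```

    The half-maximum threshold discards the small Dirichlet-kernel sidelobes between main envelope peaks, so only the F0-rate (sine) or 2F0-rate (ALT) peaks are counted.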

  1. Audiovisual Perception of Noise Vocoded Speech in Dyslexic and Non-Dyslexic Adults: The Role of Low-Frequency Visual Modulations

    ERIC Educational Resources Information Center

    Megnin-Viggars, Odette; Goswami, Usha

    2013-01-01

    Visual speech inputs can enhance auditory speech information, particularly in noisy or degraded conditions. The natural statistics of audiovisual speech highlight the temporal correspondence between visual and auditory prosody, with lip, jaw, cheek and head movements conveying information about the speech envelope. Low-frequency spatial and…

  2. Hearing conspecific vocal signals alters peripheral auditory sensitivity

    PubMed Central

    Gall, Megan D.; Wilczynski, Walter

    2015-01-01

    We investigated whether hearing advertisement calls over several nights, as happens in natural frog choruses, modified the responses of the peripheral auditory system in the green treefrog, Hyla cinerea. Using auditory evoked potentials (AEP), we found that exposure to 10 nights of a simulated male chorus lowered auditory thresholds in males and females, while exposure to random tones had no effect in males, but did result in lower thresholds in females. The threshold change was larger at the lower frequencies stimulating the amphibian papilla than at higher frequencies stimulating the basilar papilla. Suprathreshold responses to tonal stimuli were assessed for two peaks in the AEP recordings. For the peak P1 (assessed for 0.8–1.25 kHz), peak amplitude increased following chorus exposure. For peak P2 (assessed for 2–4 kHz), peak amplitude decreased at frequencies between 2.5 and 4.0 kHz, but remained unaltered at 2.0 kHz. Our results show for the first time, to our knowledge, that hearing dynamic social stimuli, like frog choruses, can alter the responses of the auditory periphery in a way that could enhance the detection of and response to conspecific acoustic communication signals. PMID:25972471

  3. Bottom-up driven involuntary auditory evoked field change: constant sound sequencing amplifies but does not sharpen neural activity.

    PubMed

    Okamoto, Hidehiko; Stracke, Henning; Lagemann, Lothar; Pantev, Christo

    2010-01-01

    The capability of involuntarily tracking certain sound signals during the simultaneous presence of noise is essential in human daily life. Previous studies have demonstrated that top-down auditory focused attention can enhance excitatory and inhibitory neural activity, resulting in sharpening of frequency tuning of auditory neurons. In the present study, we investigated bottom-up driven involuntary neural processing of sound signals in noisy environments by means of magnetoencephalography. We contrasted two sound signal sequencing conditions: "constant sequencing" versus "random sequencing." Based on a pool of 16 different frequencies, either identical (constant sequencing) or pseudorandomly chosen (random sequencing) test frequencies were presented blockwise together with band-eliminated noises to nonattending subjects. The results demonstrated that the auditory evoked fields elicited in the constant sequencing condition were significantly enhanced compared with the random sequencing condition. However, the enhancement did not differ significantly between band-eliminated noise conditions. Thus the present study confirms that constant sound signal sequencing during nonattentive listening can enhance, but not sharpen, neural activity in human auditory cortex. Our results indicate that bottom-up driven involuntary neural processing may mainly amplify excitatory neural networks, but may not effectively enhance inhibitory neural circuits.

  4. The Effect of Gender on the N1-P2 Auditory Complex while Listening and Speaking with Altered Auditory Feedback

    ERIC Educational Resources Information Center

    Swink, Shannon; Stuart, Andrew

    2012-01-01

    The effect of gender on the N1-P2 auditory complex was examined while listening and speaking with altered auditory feedback. Fifteen normal hearing adult males and 15 females participated. N1-P2 components were evoked while listening to self-produced nonaltered and frequency shifted /a/ tokens and during production of /a/ tokens during nonaltered…

  5. Representation of complex vocalizations in the Lusitanian toadfish auditory system: evidence of fine temporal, frequency and amplitude discrimination

    PubMed Central

    Vasconcelos, Raquel O.; Fonseca, Paulo J.; Amorim, M. Clara P.; Ladich, Friedrich

    2011-01-01

    Many fishes rely on their auditory skills to interpret crucial information about predators and prey, and to communicate intraspecifically. Few studies, however, have examined how complex natural sounds are perceived in fishes. We investigated the representation of conspecific mating and agonistic calls in the auditory system of the Lusitanian toadfish Halobatrachus didactylus, and analysed auditory responses to heterospecific signals from ecologically relevant species: a sympatric vocal fish (meagre Argyrosomus regius) and a potential predator (dolphin Tursiops truncatus). Using auditory evoked potential (AEP) recordings, we showed that both sexes can resolve fine features of conspecific calls. The toadfish auditory system was most sensitive to frequencies well represented in the conspecific vocalizations (namely the mating boatwhistle), and revealed a fine representation of duration and pulsed structure of agonistic and mating calls. Stimuli and corresponding AEP amplitudes were highly correlated, indicating an accurate encoding of amplitude modulation. Moreover, Lusitanian toadfish were able to detect T. truncatus foraging sounds and A. regius calls, although at higher amplitudes. We provide strong evidence that the auditory system of a vocal fish, lacking accessory hearing structures, is capable of resolving fine features of complex vocalizations that are probably important for intraspecific communication and other relevant stimuli from the auditory scene. PMID:20861044

  6. The Coupling between Ca2+ Channels and the Exocytotic Ca2+ Sensor at Hair Cell Ribbon Synapses Varies Tonotopically along the Mature Cochlea

    PubMed Central

    Cho, Soyoun

    2017-01-01

    The cochlea processes auditory signals over a wide range of frequencies and intensities. However, the transfer characteristics at hair cell ribbon synapses are still poorly understood at different frequency locations along the cochlea. Using recordings from mature gerbils, we report here a surprisingly strong block of exocytosis by the slow Ca2+ buffer EGTA (10 mM) in basal hair cells tuned to high frequencies (∼30 kHz). In addition, using recordings from gerbil, mouse, and bullfrog auditory organs, we find that the spatial coupling between Ca2+ influx and exocytosis changes from nanodomain in low-frequency tuned hair cells (∼<2 kHz) to progressively more microdomain in high-frequency cells (∼>2 kHz). Hair cell synapses have thus developed remarkable frequency-dependent tuning of exocytosis: accurate low-latency encoding of onset and offset of sound intensity in the cochlea's base and submillisecond encoding of membrane receptor potential fluctuations in the apex for precise phase-locking to sound signals. We also found that synaptic vesicle pool recovery from depletion was sensitive to high concentrations of EGTA, suggesting that intracellular Ca2+ buffers play an important role in vesicle recruitment in both low- and high-frequency hair cells. In conclusion, our results indicate that microdomain coupling is important for exocytosis in high-frequency hair cells, suggesting a novel hypothesis for why these cells are more susceptible to sound-induced damage than low-frequency cells; high-frequency inner hair cells must have a low Ca2+ buffer capacity to sustain exocytosis, thus making them more prone to Ca2+-induced cytotoxicity. SIGNIFICANCE STATEMENT In the inner ear, sensory hair cells signal reception of sound. They do this by converting the sound-induced movement of their hair bundles present at the top of these cells, into an electrical current. 
This current depolarizes the hair cell and triggers the calcium-induced release of the neurotransmitter glutamate that activates the postsynaptic auditory fibers. The speed and precision of this process enables the brain to perceive the vital components of sound, such as frequency and intensity. We show that the coupling strength between calcium channels and the exocytosis calcium sensor at inner hair cell synapses changes along the mammalian cochlea such that the timing and/or intensity of sound is encoded with high precision. PMID:28154149

  7. Speech training alters consonant and vowel responses in multiple auditory cortex fields

    PubMed Central

    Engineer, Crystal T.; Rahebi, Kimiya C.; Buell, Elizabeth P.; Fink, Melyssa K.; Kilgard, Michael P.

    2015-01-01

    Speech sounds evoke unique neural activity patterns in primary auditory cortex (A1). Extensive speech sound discrimination training alters A1 responses. While the neighboring auditory cortical fields each contain information about speech sound identity, each field processes speech sounds differently. We hypothesized that while all fields would exhibit training-induced plasticity following speech training, there would be unique differences in how each field changes. In this study, rats were trained to discriminate speech sounds by consonant or vowel in quiet and in varying levels of background speech-shaped noise. Local field potential and multiunit responses were recorded from four auditory cortex fields in rats that had received 10 weeks of speech discrimination training. Our results reveal that training alters speech evoked responses in each of the auditory fields tested. The neural response to consonants was significantly stronger in anterior auditory field (AAF) and A1 following speech training. The neural response to vowels following speech training was significantly weaker in ventral auditory field (VAF) and posterior auditory field (PAF). This differential plasticity of consonant and vowel sound responses may result from the greater paired pulse depression, expanded low frequency tuning, reduced frequency selectivity, and lower tone thresholds, which occurred across the four auditory fields. These findings suggest that alterations in the distributed processing of behaviorally relevant sounds may contribute to robust speech discrimination. PMID:25827927

  8. An acoustic gap between the NICU and womb: a potential risk for compromised neuroplasticity of the auditory system in preterm infants.

    PubMed

    Lahav, Amir; Skoe, Erika

    2014-01-01

    The intrauterine environment allows the fetus to begin hearing low-frequency sounds in a protected fashion, ensuring initial optimal development of the peripheral and central auditory system. However, the auditory nursery provided by the womb vanishes once the preterm newborn enters the high-frequency (HF) noisy environment of the neonatal intensive care unit (NICU). The present article draws a concerning line between auditory system development and HF noise in the NICU, which we argue is not necessarily conducive to fostering this development. Overexposure to HF noise during critical periods disrupts the functional organization of auditory cortical circuits. As a result, we theorize that the ability to tune out noise and extract acoustic information in a noisy environment may be impaired, leading to increased risks for a variety of auditory, language, and attention disorders. Additionally, HF noise in the NICU often masks human speech sounds, further limiting quality exposure to linguistic stimuli. Understanding the impact of the sound environment on the developing auditory system is an important first step in meeting the developmental demands of preterm newborns undergoing intensive care.

  9. Auditory Discrimination Learning: Role of Working Memory.

    PubMed

    Zhang, Yu-Xuan; Moore, David R; Guiraud, Jeanne; Molloy, Katharine; Yan, Ting-Ting; Amitay, Sygal

    2016-01-01

    Perceptual training is generally assumed to improve perception by modifying the encoding or decoding of sensory information. However, this assumption is incompatible with recent demonstrations that transfer of learning can be enhanced by across-trial variation of training stimuli or task. Here we present three lines of evidence from healthy adults in support of the idea that the enhanced transfer of auditory discrimination learning is mediated by working memory (WM). First, the ability to discriminate small differences in tone frequency or duration was correlated with WM measured with a tone n-back task. Second, training frequency discrimination around a variable frequency transferred to and from WM learning, but training around a fixed frequency did not. The transfer of learning in both directions was correlated with a reduction of the influence of stimulus variation in the discrimination task, linking WM and its improvement to across-trial stimulus interaction in auditory discrimination. Third, while WM training transferred broadly to other WM and auditory discrimination tasks, variable-frequency training on duration discrimination did not improve WM, indicating that stimulus variation challenges and trains WM only if the task demands stimulus updating in the varied dimension. The results provide empirical evidence as well as a theoretic framework for interactions between cognitive and sensory plasticity during perceptual experience.

  10. Comparing the information conveyed by envelope modulation for speech intelligibility, speech quality, and music quality.

    PubMed

    Kates, James M; Arehart, Kathryn H

    2015-10-01

    This paper uses mutual information to quantify the relationship between envelope modulation fidelity and perceptual responses. Data from several previous experiments that measured speech intelligibility, speech quality, and music quality are evaluated for normal-hearing and hearing-impaired listeners. A model of the auditory periphery is used to generate envelope signals, and envelope modulation fidelity is calculated using the normalized cross-covariance of the degraded signal envelope with that of a reference signal. Two procedures are used to describe the envelope modulation: (1) modulation within each auditory frequency band and (2) spectro-temporal processing that analyzes the modulation of spectral ripple components fit to successive short-time spectra. The results indicate that low modulation rates provide the highest information for intelligibility, while high modulation rates provide the highest information for speech and music quality. The low-to-mid auditory frequencies are most important for intelligibility, while mid frequencies are most important for speech quality and high frequencies are most important for music quality. Differences between the spectral ripple components used for the spectro-temporal analysis were not significant in five of the six experimental conditions evaluated. The results indicate that different modulation-rate and auditory-frequency weights may be appropriate for indices designed to predict different types of perceptual relationships.

  11. Auditory Discrimination Learning: Role of Working Memory

    PubMed Central

    Zhang, Yu-Xuan; Moore, David R.; Guiraud, Jeanne; Molloy, Katharine; Yan, Ting-Ting; Amitay, Sygal

    2016-01-01

    Perceptual training is generally assumed to improve perception by modifying the encoding or decoding of sensory information. However, this assumption is incompatible with recent demonstrations that transfer of learning can be enhanced by across-trial variation of training stimuli or task. Here we present three lines of evidence from healthy adults in support of the idea that the enhanced transfer of auditory discrimination learning is mediated by working memory (WM). First, the ability to discriminate small differences in tone frequency or duration was correlated with WM measured with a tone n-back task. Second, training frequency discrimination around a variable frequency transferred to and from WM learning, but training around a fixed frequency did not. The transfer of learning in both directions was correlated with a reduction of the influence of stimulus variation in the discrimination task, linking WM and its improvement to across-trial stimulus interaction in auditory discrimination. Third, while WM training transferred broadly to other WM and auditory discrimination tasks, variable-frequency training on duration discrimination did not improve WM, indicating that stimulus variation challenges and trains WM only if the task demands stimulus updating in the varied dimension. The results provide empirical evidence as well as a theoretic framework for interactions between cognitive and sensory plasticity during perceptual experience. PMID:26799068

  12. Comparing the information conveyed by envelope modulation for speech intelligibility, speech quality, and music quality

    PubMed Central

    Kates, James M.; Arehart, Kathryn H.

    2015-01-01

    This paper uses mutual information to quantify the relationship between envelope modulation fidelity and perceptual responses. Data from several previous experiments that measured speech intelligibility, speech quality, and music quality are evaluated for normal-hearing and hearing-impaired listeners. A model of the auditory periphery is used to generate envelope signals, and envelope modulation fidelity is calculated using the normalized cross-covariance of the degraded signal envelope with that of a reference signal. Two procedures are used to describe the envelope modulation: (1) modulation within each auditory frequency band and (2) spectro-temporal processing that analyzes the modulation of spectral ripple components fit to successive short-time spectra. The results indicate that low modulation rates provide the highest information for intelligibility, while high modulation rates provide the highest information for speech and music quality. The low-to-mid auditory frequencies are most important for intelligibility, while mid frequencies are most important for speech quality and high frequencies are most important for music quality. Differences between the spectral ripple components used for the spectro-temporal analysis were not significant in five of the six experimental conditions evaluated. The results indicate that different modulation-rate and auditory-frequency weights may be appropriate for indices designed to predict different types of perceptual relationships. PMID:26520329
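
The envelope-fidelity measure described above, the normalized cross-covariance of a degraded envelope against a reference envelope, can be sketched in a few lines. This is a zero-lag sketch assuming the two envelopes are time-aligned and equal in length; the function name is illustrative, not taken from the paper.

```python
import numpy as np

def envelope_modulation_fidelity(degraded, reference):
    """Normalized cross-covariance (zero lag) of two envelope signals.

    Returns a value in [-1, 1]; 1 means the degraded envelope modulation
    perfectly tracks the reference.
    """
    d = np.asarray(degraded, dtype=float) - np.mean(degraded)
    r = np.asarray(reference, dtype=float) - np.mean(reference)
    denom = np.sqrt(np.sum(d**2) * np.sum(r**2))
    if denom == 0.0:
        return 0.0  # a constant envelope carries no modulation information
    return float(np.sum(d * r) / denom)
```

In the paper this quantity is computed per auditory frequency band (or per spectral ripple component) before being related to perceptual scores via mutual information.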

  13. Frequency-specific attentional modulation in human primary auditory cortex and midbrain.

    PubMed

    Riecke, Lars; Peters, Judith C; Valente, Giancarlo; Poser, Benedikt A; Kemper, Valentin G; Formisano, Elia; Sorger, Bettina

    2018-07-01

Paying selective attention to an audio frequency selectively enhances activity within primary auditory cortex (PAC) at the tonotopic site (frequency channel) representing that frequency. Animal PAC neurons achieve this 'frequency-specific attentional spotlight' by adapting their frequency tuning, yet comparable evidence in humans is scarce. Moreover, whether the spotlight operates in human midbrain is unknown. To address these issues, we studied the spectral tuning of frequency channels in human PAC and inferior colliculus (IC), using 7-T functional magnetic resonance imaging (fMRI) and frequency mapping, while participants focused on different frequency-specific sounds. We found that shifts in frequency-specific attention alter the response gain, but not tuning profile, of PAC frequency channels. The gain modulation was strongest in low-frequency channels and varied near-monotonically across the tonotopic axis, giving rise to the attentional spotlight. We observed less prominent, non-tonotopic spatial patterns of attentional modulation in IC. These results indicate that the frequency-specific attentional spotlight in human PAC as measured with fMRI arises primarily from tonotopic gain modulation, rather than adapted frequency tuning. Moreover, frequency-specific attentional modulation of afferent sound processing in human IC seems to be considerably weaker, suggesting that the spotlight diminishes toward this lower-order processing stage. Our study sheds light on how the human auditory pathway adapts to the different demands of selective hearing. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  14. Noise Trauma Induced Plastic Changes in Brain Regions outside the Classical Auditory Pathway

    PubMed Central

    Chen, Guang-Di; Sheppard, Adam; Salvi, Richard

    2017-01-01

The effects of intense noise exposure on the classical auditory pathway have been extensively investigated; however, little is known about the effects of noise-induced hearing loss on non-classical auditory areas in the brain such as the lateral amygdala (LA) and striatum (Str). To address this issue, we compared the noise-induced changes in spontaneous and tone-evoked responses from multiunit clusters (MUC) in the LA and Str with those seen in auditory cortex (AC). High-frequency octave band noise (10–20 kHz) and narrow band noise (16–20 kHz) induced permanent threshold shifts (PTS) at high frequencies within and above the noise band but not at low frequencies. While the noise trauma significantly elevated spontaneous discharge rate (SR) in the AC, SRs in the LA and Str were only slightly increased across all frequencies. The high-frequency noise trauma affected tone-evoked firing rates in a frequency- and time-dependent manner, and the changes appeared to be related to the severity of the noise trauma. In the LA, tone-evoked firing rates were reduced at the high frequencies (trauma area), whereas firing rates were enhanced at the low frequencies or at the edge frequency, depending on the severity of hearing loss at the high frequencies. The firing rate temporal profile changed from a broad plateau to one sharp, delayed peak. In the AC, tone-evoked firing rates were depressed at high frequencies and enhanced at the low frequencies, while the firing rate temporal profiles became substantially broader. In contrast, firing rates in the Str were generally decreased and firing rate temporal profiles became more phasic and less prolonged. The altered firing rate and pattern at low frequencies induced by high-frequency hearing loss could have perceptual consequences: the tone-evoked hyperactivity in low-frequency MUCs could manifest as hyperacusis, whereas the discharge pattern changes could affect temporal resolution and integration. PMID:26701290

  15. Adapted wavelet transform improves time-frequency representations: a study of auditory elicited P300-like event-related potentials in rats.

    PubMed

    Richard, Nelly; Laursen, Bettina; Grupe, Morten; Drewes, Asbjørn M; Graversen, Carina; Sørensen, Helge B D; Bastlund, Jesper F

    2017-04-01

Active auditory oddball paradigms are simple tone discrimination tasks used to study the P300 deflection of event-related potentials (ERPs). These ERPs may be quantified by time-frequency analysis. As auditory stimuli cause early high-frequency and late low-frequency ERP oscillations, the continuous wavelet transform (CWT) is often chosen for decomposition due to its multi-resolution properties. However, because the conventional CWT applies only one mother wavelet to represent the entire spectrum, the time-frequency resolution is not optimal across all scales. To account for this, we developed and validated a novel method specifically refined to analyse P300-like ERPs in rats. An adapted CWT (aCWT) was implemented to preserve high time-frequency resolution across all scales by employing multiple wavelets operating at different scales. First, decomposition of simulated ERPs was illustrated using the classical CWT and the aCWT. Next, the two methods were applied to EEG recordings obtained from prefrontal cortex in rats performing a two-tone auditory discrimination task. While only early ERP frequency changes between responses to target and non-target tones were detected by the CWT, both early and late changes were described with high accuracy by the aCWT in rat ERPs. Increased frontal power and phase synchrony were observed during deviant tones, particularly within the theta and gamma frequency bands. The study suggests superior performance of the aCWT over the CWT in terms of detailed quantification of the time-frequency properties of ERPs. Our methodological investigation indicates that accurate and complete assessment of the time-frequency components of short-time neural signals is feasible with this novel analysis approach, which may be advantageous for characterising several types of evoked potentials, particularly in rodents.
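
The paper's aCWT is not specified in detail here, but its core idea, a different time-frequency trade-off per scale rather than a single stretched mother wavelet, can be sketched with complex Morlet wavelets whose cycle count varies per frequency band. All function names and parameter choices below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def morlet(fs, f0, n_cycles):
    """Complex Morlet wavelet with n_cycles cycles at center frequency f0 (Hz)."""
    sigma_t = n_cycles / (2 * np.pi * f0)          # temporal width of the Gaussian
    t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
    return np.exp(2j * np.pi * f0 * t) * np.exp(-t**2 / (2 * sigma_t**2))

def adapted_cwt(x, fs, freqs, cycles):
    """Time-frequency power map using a different wavelet width per band.

    Short wavelets (few cycles) give good time resolution for fast oscillations;
    long wavelets give good frequency resolution for slow ones. Each wavelet
    must be shorter than the signal.
    """
    tfr = np.empty((len(freqs), len(x)))
    for i, (f0, n_cyc) in enumerate(zip(freqs, cycles)):
        w = morlet(fs, f0, n_cyc)
        tfr[i] = np.abs(np.convolve(x, w, mode="same"))**2
    return tfr
```

A 10 Hz test tone analysed with bands at 5, 10, and 20 Hz should concentrate its power in the 10 Hz row.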

  16. Human cortical responses to slow and fast binaural beats reveal multiple mechanisms of binaural hearing.

    PubMed

    Ross, Bernhard; Miyazaki, Takahiro; Thompson, Jessica; Jamali, Shahab; Fujioka, Takako

    2014-10-15

    When two tones with slightly different frequencies are presented to both ears, they interact in the central auditory system and induce the sensation of a beating sound. At low difference frequencies, we perceive a single sound, which is moving across the head between the left and right ears. The percept changes to loudness fluctuation, roughness, and pitch with increasing beat rate. To examine the neural representations underlying these different perceptions, we recorded neuromagnetic cortical responses while participants listened to binaural beats at a continuously varying rate between 3 Hz and 60 Hz. Binaural beat responses were analyzed as neuromagnetic oscillations following the trajectory of the stimulus rate. Responses were largest in the 40-Hz gamma range and at low frequencies. Binaural beat responses at 3 Hz showed opposite polarity in the left and right auditory cortices. We suggest that this difference in polarity reflects the opponent neural population code for representing sound location. Binaural beats at any rate induced gamma oscillations. However, the responses were largest at 40-Hz stimulation. We propose that the neuromagnetic gamma oscillations reflect postsynaptic modulation that allows for precise timing of cortical neural firing. Systematic phase differences between bilateral responses suggest that separate sound representations of a sound object exist in the left and right auditory cortices. We conclude that binaural processing at the cortical level occurs with the same temporal acuity as monaural processing whereas the identification of sound location requires further interpretation and is limited by the rate of object representations. Copyright © 2014 the American Physiological Society.
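
A binaural-beat stimulus with a continuously varying beat rate, as used in this study, can be generated by fixing one ear's tone and sweeping the other ear's instantaneous frequency, obtaining its phase by integrating the instantaneous frequency. The sweep range (3 to 60 Hz) comes from the record; the carrier frequency in the usage example is an assumed value, not taken from the paper.

```python
import numpy as np

def binaural_beat(fs, dur, carrier, beat_start, beat_end):
    """Two-channel signal: left ear at `carrier` Hz, right ear swept so the
    interaural frequency difference (the beat rate) rises linearly from
    beat_start to beat_end Hz over `dur` seconds."""
    t = np.arange(int(fs * dur)) / fs
    left = np.sin(2 * np.pi * carrier * t)
    beat = beat_start + (beat_end - beat_start) * t / dur   # instantaneous beat rate
    # instantaneous right-ear frequency is carrier + beat; integrate for phase
    phase = 2 * np.pi * np.cumsum(carrier + beat) / fs
    right = np.sin(phase)
    return np.stack([left, right])   # shape (2, n_samples)

# Example (assumed 250 Hz carrier): 3 -> 60 Hz beat sweep over 10 s at 44.1 kHz
stimulus = binaural_beat(44100, 10.0, 250.0, 3.0, 60.0)
```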

  17. Characterization of a Piezoelectric AlN Beam Array in Air and Fluid for an Artificial Basilar Membrane

    NASA Astrophysics Data System (ADS)

    Jeon, Hyejin; Jang, Jongmoon; Kim, Sangwon; Choi, Hongsoo

    2018-03-01

In this study, we present a piezoelectric artificial basilar membrane (ABM) composed of a 10-channel aluminum nitride beam array. Each beam varies in length from 1306 to 3194 μm to mimic the frequency selectivity of the cochlea. To characterize the frequency selectivity of the ABM, we measured the mechanical displacement and piezoelectric output while applying acoustic stimuli at 100 dB sound pressure level in the range of 500 Hz-40 kHz. The resonance frequencies measured by mechanical displacement and piezoelectric output were in the range of 10.56-36.5 and 10.9-37.0 kHz, respectively. In addition, electrical stimuli were applied to the ABM to compare its mechanical responses in air and fluid; the measured resonance frequencies were in the range of 11.1-47.7 kHz in air and 3.10-11.9 kHz in fluid. Understanding the characteristics of the ABM is important for its potential use as a key technology for auditory prostheses.

  18. Short term hearing loss in general aviation operations, phase 1, part 1

    NASA Technical Reports Server (NTRS)

    Parker, J. F., Jr.

    1972-01-01

    The effects of light aircraft noise on six subjects during flight operations were investigated. The noise environment in the Piper Apache light aircraft was found to be capable of producing hearing threshold shifts. The following are the principal findings and conclusions: (1) Through most of the frequency range for which measurements were taken (500 to 6000 Hz), there was a regular progression showing increased loss of auditory acuity as a function of increased exposure time. (2) Extensive variability was found in the results among subjects, and in the measured loss at discrete frequencies for each subject. (3) The principal loss of hearing occurred at the low frequencies, around 500 Hz.

  19. Chirp-evoked potentials in the awake and anesthetized rat. A procedure to assess changes in cortical oscillatory activity.

    PubMed

    Pérez-Alcázar, M; Nicolás, M J; Valencia, M; Alegre, M; Iriarte, J; Artieda, J

    2008-03-01

Steady-state potentials are oscillatory responses generated by rhythmic stimulation of a sensory pathway. The response follows the frequency of stimulation, potentially indicating the preferential working frequency of the auditory neural network, and is maximal at a stimulus rate of 40 Hz for auditory stimuli in humans, although this may differ in other species. Our aim was to explore the responses to different stimulation frequencies in the rat. The stimulus was a tone modulated in amplitude by a sinusoid with linearly increasing frequency from 1 to 250 Hz (a "chirp"). Time-frequency transforms were used for response analysis in 12 animals, awake and under ketamine/xylazine anesthesia. We studied whether the responses were due to increases in amplitude or to phase-locking phenomena, using single-sweep time-frequency transforms and inter-trial phase analysis. A progressive decrease in the amplitude of the response was observed from the maximal values (around 15 Hz) up to the limit of the test (250 Hz). The high-frequency component was mainly due to phase-locking phenomena, with a smaller amplitude contribution. Under anesthesia, the amplitude and phase-locking of lower frequencies (under 100 Hz) decreased, while phase-locking over 200 Hz increased. In conclusion, amplitude-modulation following responses differ between humans and rats in response range and in the frequency of maximal amplitude. Anesthesia with ketamine/xylazine differentially modifies the amplitude and phase-locking of the responses. These findings should be taken into account when assessing changes in cortical oscillatory activity related to different drugs, in healthy rodents and in animal models of neurodegenerative diseases.
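
The "chirp" stimulus described above, a tone whose amplitude-modulation rate rises linearly from 1 to 250 Hz, can be sketched by integrating the instantaneous modulator frequency to obtain the modulator phase. Carrier frequency and duration in the example are assumed values, not taken from the paper.

```python
import numpy as np

def am_chirp(fs, dur, carrier, mod_start, mod_end, depth=1.0):
    """Tone at `carrier` Hz whose amplitude modulation rate rises linearly
    from mod_start to mod_end Hz; output is scaled to stay within [-1, 1]."""
    t = np.arange(int(fs * dur)) / fs
    mod_freq = mod_start + (mod_end - mod_start) * t / dur   # instantaneous AM rate
    mod_phase = 2 * np.pi * np.cumsum(mod_freq) / fs         # integrate rate -> phase
    modulator = 1.0 + depth * np.sin(mod_phase)
    return modulator * np.sin(2 * np.pi * carrier * t) / (1.0 + depth)

# Example (assumed 1 kHz carrier): AM rate swept 1 -> 250 Hz over 10 s
stimulus = am_chirp(44100, 10.0, 1000.0, 1.0, 250.0)
```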

  20. Auditory and Linguistic Processes in the Perception of Intonation Contours.

    ERIC Educational Resources Information Center

    Studdert-Kennedy, Michael; Hadding, Kerstin

    By examining the relations among sections of the fundamental frequency contour used in judging an utterance as a question or statement, the experiment described in this report seeks a more detailed understanding of auditory-linguistic interaction in the perception of intonation contours. The perceptual process may be divided into stages (auditory,…

  1. Effect of FM Auditory Trainers on Attending Behaviors of Learning-Disabled Children.

    ERIC Educational Resources Information Center

    Blake, Ruth; And Others

    1991-01-01

    This study investigated the effect of FM (frequency modulation) auditory trainer use on attending behaviors of 36 students (ages 5-10) with learning disabilities. Children wearing the auditory trainers scored better than control students on eye contact, having body turned toward sound source, and absence of extraneous body movement and vocal…

  2. Auditory Perception, Suprasegmental Speech Processing, and Vocabulary Development in Chinese Preschoolers.

    PubMed

    Wang, Hsiao-Lan S; Chen, I-Chen; Chiang, Chun-Han; Lai, Ying-Hui; Tsao, Yu

    2016-10-01

The current study examined the associations between basic auditory perception, speech prosodic processing, and vocabulary development in Chinese kindergartners, specifically, whether early basic auditory perception may be related to linguistic prosodic processing in Chinese Mandarin vocabulary acquisition. A series of language, auditory, and linguistic prosodic tests were given to 100 preschool children who had not yet learned how to read Chinese characters. The results suggested that lexical tone sensitivity and intonation production were significantly correlated with children's general vocabulary abilities. In particular, tone awareness was associated with comprehensive language development, whereas intonation production was associated with both comprehensive and expressive language development. Regression analyses revealed that tone sensitivity accounted for 36% of the unique variance in vocabulary development, whereas intonation production accounted for 6% of the variance in vocabulary development. Moreover, auditory frequency discrimination was significantly correlated with lexical tone sensitivity, syllable duration discrimination, and intonation production in Mandarin Chinese, and it contributed significantly to tone sensitivity and intonation production. Auditory frequency discrimination may therefore indirectly affect early vocabulary development through Chinese speech prosody. © The Author(s) 2016.

  3. Plasticity in Human Pitch Perception Induced by Tonotopically Mismatched Electro-Acoustic Stimulation

    PubMed Central

    Reiss, Lina A.J.; Turner, Christopher W.; Karsten, Sue A.; Gantz, Bruce J.

    2013-01-01

    Under normal conditions, the acoustic pitch percept of a pure tone is determined mainly by the tonotopic place of the stimulation along the cochlea. Unlike acoustic stimulation, electric stimulation of a cochlear implant (CI) allows for the direct manipulation of the place of stimulation in human subjects. CI sound processors analyze the range of frequencies needed for speech perception and allocate portions of this range to the small number of electrodes distributed in the cochlea. Because the allocation is assigned independently of the original resonant frequency of the basilar membrane associated with the location of each electrode, CI users who have access to residual hearing in either or both ears often have tonotopic mismatches between the acoustic and electric stimulation. Here we demonstrate plasticity of place pitch representations of up to 3 octaves in Hybrid CI users after experience with combined electro-acoustic stimulation. The pitch percept evoked by single CI electrodes, measured relative to acoustic tones presented to the non-implanted ear, changed over time in directions that reduced the electro-acoustic pitch mismatch introduced by the CI programming. This trend was particularly apparent when the allocations of stimulus frequencies to electrodes were changed over time, with pitch changes even reversing direction in some subjects. These findings show that pitch plasticity can occur more rapidly and on a greater scale in the mature auditory system than previously thought possible. Overall, the results suggest that the adult auditory system can impose perceptual order on disordered arrays of inputs. PMID:24157931

  4. Audiogram and auditory critical ratios of two Florida manatees (Trichechus manatus latirostris).

    PubMed

    Gaspard, Joseph C; Bauer, Gordon B; Reep, Roger L; Dziuk, Kimberly; Cardwell, Adrienne; Read, Latoshia; Mann, David A

    2012-05-01

    Manatees inhabit turbid, shallow-water environments and have been shown to have poor visual acuity. Previous studies on hearing have demonstrated that manatees possess good hearing and sound localization abilities. The goals of this research were to determine the hearing abilities of two captive subjects and measure critical ratios to understand the capacity of manatees to detect tonal signals, such as manatee vocalizations, in the presence of noise. This study was also undertaken to better understand individual variability, which has been encountered during behavioral research with manatees. Two Florida manatees (Trichechus manatus latirostris) were tested in a go/no-go paradigm using a modified staircase method, with incorporated 'catch' trials at a 1:1 ratio, to assess their ability to detect single-frequency tonal stimuli. The behavioral audiograms indicated that the manatees' auditory frequency detection for tonal stimuli ranged from 0.25 to 90.5 kHz, with peak sensitivity extending from 8 to 32 kHz. Critical ratios, thresholds for tone detection in the presence of background masking noise, were determined with one-octave wide noise bands, 7-12 dB (spectrum level) above the thresholds determined for the audiogram under quiet conditions. Manatees appear to have quite low critical ratios, especially at 8 kHz, where the ratio was 18.3 dB for one manatee. This suggests that manatee hearing is sensitive in the presence of background noise and that they may have relatively narrow filters in the tested frequency range.
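
The critical ratio reported above is simply the masked tone threshold minus the noise spectrum level, both in dB. Under the classic equal-power assumption (Fletcher's critical band idea, an interpretive assumption rather than something measured in this study), converting the ratio back to a power ratio gives an equivalent masking bandwidth in Hz. A minimal sketch:

```python
def critical_ratio(masked_threshold_db, noise_spectrum_level_db):
    """Critical ratio (dB): tone threshold in noise minus the noise
    spectrum level (dB re the same reference, per Hz)."""
    return masked_threshold_db - noise_spectrum_level_db

def equivalent_rectangular_bandwidth_hz(cr_db):
    """Bandwidth of noise whose total power equals the tone power at
    threshold, under the equal-power (Fletcher) assumption."""
    return 10.0 ** (cr_db / 10.0)
```

For the manatee's 18.3 dB ratio at 8 kHz, this equal-power view implies an equivalent bandwidth of roughly 68 Hz, which is narrow compared with typical mammalian values.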

  5. Frequency-specific adaptation and its underlying circuit model in the auditory midbrain.

    PubMed

    Shen, Li; Zhao, Lingyun; Hong, Bo

    2015-01-01

Receptive fields of sensory neurons are considered to be dynamic and depend on the stimulus history. In the auditory system, evidence of dynamic frequency-receptive fields has been found following stimulus-specific adaptation (SSA). However, the underlying mechanism and circuitry of SSA have not been fully elucidated. Here, we studied how the frequency-receptive fields of neurons in rat inferior colliculus (IC) changed when exposed to a biased tone sequence. A pure tone at one specific frequency (the adaptor) was presented markedly more often than tones at other frequencies. The adapted tuning was compared with the original tuning measured with an unbiased sequence. We found inhomogeneous changes in frequency tuning in IC, exhibiting a center-surround pattern with respect to the neuron's best frequency. Central adaptors elicited strong suppressive and repulsive changes, while flank adaptors induced facilitative and attractive changes. Moreover, we proposed a two-layer model of the underlying network, which not only reproduced the adaptive changes in the receptive fields but also predicted novelty responses to oddball sequences. These results suggest that frequency-specific adaptation in the auditory midbrain can be accounted for by an adapted frequency channel and the lateral spreading of its adaptation, shedding light on the organization of the underlying circuitry.

  6. Frequency-specific adaptation and its underlying circuit model in the auditory midbrain

    PubMed Central

    Shen, Li; Zhao, Lingyun; Hong, Bo

    2015-01-01

Receptive fields of sensory neurons are considered to be dynamic and depend on the stimulus history. In the auditory system, evidence of dynamic frequency-receptive fields has been found following stimulus-specific adaptation (SSA). However, the underlying mechanism and circuitry of SSA have not been fully elucidated. Here, we studied how the frequency-receptive fields of neurons in rat inferior colliculus (IC) changed when exposed to a biased tone sequence. A pure tone at one specific frequency (the adaptor) was presented markedly more often than tones at other frequencies. The adapted tuning was compared with the original tuning measured with an unbiased sequence. We found inhomogeneous changes in frequency tuning in IC, exhibiting a center-surround pattern with respect to the neuron's best frequency. Central adaptors elicited strong suppressive and repulsive changes, while flank adaptors induced facilitative and attractive changes. Moreover, we proposed a two-layer model of the underlying network, which not only reproduced the adaptive changes in the receptive fields but also predicted novelty responses to oddball sequences. These results suggest that frequency-specific adaptation in the auditory midbrain can be accounted for by an adapted frequency channel and the lateral spreading of its adaptation, shedding light on the organization of the underlying circuitry. PMID:26483641

  7. Treatment of Idiopathic Toe-Walking in Children with Autism Using GaitSpot Auditory Speakers and Simplified Habit Reversal

    ERIC Educational Resources Information Center

    Marcus, Ann; Sinnott, Brigit; Bradley, Stephen; Grey, Ian

    2010-01-01

    This study aimed to examine the effectiveness of a simplified habit reversal procedure (SHR) using differential reinforcement of incompatible behaviour (DRI) and a stimulus prompt (GaitSpot Auditory Squeakers) to reduce the frequency of idiopathic toe-walking (ITW) and increase the frequency of correct heel-to-toe-walking in three children with…

  8. Auditory stream segregation in monkey auditory cortex: effects of frequency separation, presentation rate, and tone duration

    NASA Astrophysics Data System (ADS)

    Fishman, Yonatan I.; Arezzo, Joseph C.; Steinschneider, Mitchell

    2004-09-01

Auditory stream segregation refers to the organization of sequential sounds into "perceptual streams" reflecting individual environmental sound sources. In the present study, sequences of alternating high and low tones, "...ABAB...", similar to those used in psychoacoustic experiments on stream segregation, were presented to awake monkeys while neural activity was recorded in primary auditory cortex (A1). Tone frequency separation (ΔF), tone presentation rate (PR), and tone duration (TD) were systematically varied to examine whether neural responses correlate with the effects of these variables on perceptual stream segregation. "A" tones were fixed at the best frequency of the recording site, while "B" tones were displaced in frequency from "A" tones by an amount equal to ΔF. As PR increased, "B" tone responses decreased in amplitude to a greater extent than "A" tone responses, yielding neural response patterns dominated by "A" tone responses occurring at half the alternation rate. Increasing TD facilitated the differential attenuation of "B" tone responses. These findings parallel psychoacoustic data and suggest a physiological model of stream segregation whereby increasing ΔF, PR, or TD enhances spatial differentiation of "A" tone and "B" tone responses along the tonotopic map in A1.
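
A stimulus sequence of the kind described, alternating "A" and "B" tones with parametric frequency separation, presentation rate, and tone duration, can be sketched as below. ΔF is expressed in semitones here for convenience, no onset/offset ramps are applied, and all names and example values are illustrative assumptions.

```python
import numpy as np

def abab_sequence(fs, freq_a, delta_semitones, rate_hz, tone_dur, n_pairs):
    """Alternating A-B-A-B tone sequence.

    B tones sit delta_semitones above the A frequency; tone onsets occur
    every 1/rate_hz seconds and each tone lasts tone_dur seconds.
    """
    freq_b = freq_a * 2.0 ** (delta_semitones / 12.0)
    onset_step = int(fs / rate_hz)        # samples between successive tone onsets
    n_tone = int(fs * tone_dur)           # samples per tone
    out = np.zeros(onset_step * 2 * n_pairs + n_tone)
    t = np.arange(n_tone) / fs
    for k in range(2 * n_pairs):
        f = freq_a if k % 2 == 0 else freq_b
        start = k * onset_step
        out[start:start + n_tone] += np.sin(2 * np.pi * f * t)
    return out

# Example: A = 1 kHz, ΔF = 6 semitones, 10 tones/s, 50 ms tones, 5 AB pairs
seq = abab_sequence(44100, 1000.0, 6.0, 10.0, 0.05, 5)
```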

  9. Attention-driven auditory cortex short-term plasticity helps segregate relevant sounds from noise.

    PubMed

    Ahveninen, Jyrki; Hämäläinen, Matti; Jääskeläinen, Iiro P; Ahlfors, Seppo P; Huang, Samantha; Lin, Fa-Hsuan; Raij, Tommi; Sams, Mikko; Vasios, Christos E; Belliveau, John W

    2011-03-08

    How can we concentrate on relevant sounds in noisy environments? A "gain model" suggests that auditory attention simply amplifies relevant and suppresses irrelevant afferent inputs. However, it is unclear whether this suffices when attended and ignored features overlap to stimulate the same neuronal receptive fields. A "tuning model" suggests that, in addition to gain, attention modulates feature selectivity of auditory neurons. We recorded magnetoencephalography, EEG, and functional MRI (fMRI) while subjects attended to tones delivered to one ear and ignored opposite-ear inputs. The attended ear was switched every 30 s to quantify how quickly the effects evolve. To produce overlapping inputs, the tones were presented alone vs. during white-noise masking notch-filtered ±1/6 octaves around the tone center frequencies. Amplitude modulation (39 vs. 41 Hz in opposite ears) was applied for "frequency tagging" of attention effects on maskers. Noise masking reduced early (50-150 ms; N1) auditory responses to unattended tones. In support of the tuning model, selective attention canceled out this attenuating effect but did not modulate the gain of 50-150 ms activity to nonmasked tones or steady-state responses to the maskers themselves. These tuning effects originated at nonprimary auditory cortices, purportedly occupied by neurons that, without attention, have wider frequency tuning than ±1/6 octaves. The attentional tuning evolved rapidly, during the first few seconds after attention switching, and correlated with behavioral discrimination performance. In conclusion, a simple gain model alone cannot explain auditory selective attention. In nonprimary auditory cortices, attention-driven short-term plasticity retunes neurons to segregate relevant sounds from noise.

  10. Cochlear Implant Electrode Array From Partial to Full Insertion in Non-Human Primate Model.

    PubMed

    Manrique-Huarte, Raquel; Calavia, Diego; Gallego, Maria Antonia; Manrique, Manuel

    2018-04-01

To determine the feasibility of progressive insertion (two sequential surgeries: partial to full insertion) of an electrode array and to compare functional outcomes. Eight normal-hearing animals (Macaca fascicularis, MF) were included. A 14-contact electrode array, suitably sized for the MF cochlea, was partially inserted (PI) in 16 ears. After 3 months of follow-up, revision surgery advanced the electrode to a full insertion (FI) in 8 ears. Radiological examination and auditory testing were performed monthly for 6 months. Values were compared with a two-way repeated-measures ANOVA; a p-value below 0.05 was considered statistically significant (IBM SPSS Statistics V20). The surgical procedure was completed in all cases with no complications. The mean auditory threshold shift (click-evoked ABR) after 6 months of follow-up was 19 dB for the PI group and 27 dB for the FI group. For frequencies of 4, 6, 8, 12, and 16 kHz in the FI group, tone-burst auditory thresholds increased after the revision surgery, with no recovery thereafter. The mean threshold shift at 6 months of follow-up was 19.8 dB (range 2-36 dB) for the PI group and 33.14 dB (range 8-48 dB) for the FI group. Statistical analysis yielded no significant differences between groups. It is feasible to perform a partial insertion of an electrode array and advance it to a full insertion (up to 270°) in a second surgical stage. Hearing preservation is feasible with both procedures, although minimal threshold deterioration was seen in the full-insertion group, especially at high frequencies, with no statistically significant differences.

  11. Benefits of fading in perceptual learning are driven by more than dimensional attention.

    PubMed

    Wisniewski, Matthew G; Radell, Milen L; Church, Barbara A; Mercado, Eduardo

    2017-01-01

Individuals learn to classify percepts effectively when the task is initially easy and then gradually increases in difficulty. Some suggest that this is because easy-to-discriminate events help learners focus attention on discrimination-relevant dimensions. Here, we tested whether such attentional-spotlighting accounts are sufficient to explain easy-to-hard effects in auditory perceptual learning. In two experiments, participants were trained to discriminate periodic, frequency-modulated (FM) tones in two separate frequency ranges (300-600 Hz or 3000-6000 Hz). In one frequency range, sounds gradually increased in similarity as training progressed. In the other, stimulus similarity was constant throughout training. After training, participants showed better performance in their progressively trained frequency range, even though the discrimination-relevant dimension was the same across ranges. Learning theories that posit experience-dependent changes in stimulus representations and/or the strengthening of associations with differential responses predict the observed specificity of easy-to-hard effects, whereas attentional-spotlighting theories do not. Calibrating the difficulty and temporal sequencing of training experiences to support more incremental representation-based learning can enhance the effectiveness of practice beyond any benefits gained from explicitly highlighting relevant dimensions.

  12. Blast-Induced Tinnitus and Hearing Loss in Rats: Behavioral and Imaging Assays

    PubMed Central

    Mao, Johnny C.; Pace, Edward; Pierozynski, Paige; Kou, Zhifeng; Shen, Yimin; VandeVord, Pamela; Haacke, E. Mark; Zhang, Xueguo

    2012-01-01

The current study used a rat model to investigate the underlying mechanisms of blast-induced tinnitus, hearing loss, and associated traumatic brain injury (TBI). Seven rats were used to evaluate behavioral evidence of tinnitus and hearing loss, and TBI using magnetic resonance imaging, following a single 10-msec blast at 14 psi or 194 dB sound pressure level (SPL). The results demonstrated that the blast exposure induced early onset of tinnitus and central hearing impairment across a broad frequency range. The induced tinnitus and central hearing impairment tended to shift towards high frequencies over time. Hearing thresholds measured with auditory brainstem responses also showed an immediate elevation followed by recovery on day 14, coinciding with the behaviorally-measured results. Diffusion tensor magnetic resonance imaging results demonstrated significant damage and compensatory plastic changes in certain auditory brain regions, with the majority of changes occurring in the inferior colliculus and medial geniculate body. The absence of significant microstructural changes in the corpus callosum indicates that the adopted blast exposure mainly exerts its effects through the auditory pathways rather than through direct impact on the brain parenchyma. The results showed that this animal model is appropriate for investigation of the mechanisms underlying blast-induced tinnitus, hearing loss, and related TBI. Continued investigation along these lines will help identify pathology with injury/recovery patterns, aiding development of effective treatment strategies. PMID:21933015

  13. Spectral integration in primary auditory cortex attributable to temporally precise convergence of thalamocortical and intracortical input.

    PubMed

    Happel, Max F K; Jeschke, Marcus; Ohl, Frank W

    2010-08-18

    Primary sensory cortex integrates sensory information from afferent feedforward thalamocortical projection systems and convergent intracortical microcircuits. Both input systems have been demonstrated to provide different aspects of sensory information. Here we have used high-density recordings of laminar current source density (CSD) distributions in primary auditory cortex of Mongolian gerbils in combination with pharmacological silencing of cortical activity and analysis of the residual CSD, to dissociate the feedforward thalamocortical contribution and the intracortical contribution to spectral integration. We found a temporally highly precise integration of both types of inputs when the stimulation frequency was in close spectral neighborhood of the best frequency of the measurement site, in which the overlap between both inputs is maximal. Local intracortical connections provide both directly feedforward excitatory and modulatory input from adjacent cortical sites, which determine how concurrent afferent inputs are integrated. Through separate excitatory horizontal projections, terminating in cortical layers II/III, information about stimulus energy in greater spectral distance is provided even over long cortical distances. These projections effectively broaden spectral tuning width. Based on these data, we suggest a mechanism of spectral integration in primary auditory cortex that is based on temporally precise interactions of afferent thalamocortical inputs and different short- and long-range intracortical networks. The proposed conceptual framework allows integration of different and partly controversial anatomical and physiological models of spectral integration in the literature.

  14. Effect of low-frequency rTMS on electromagnetic tomography (LORETA) and regional brain metabolism (PET) in schizophrenia patients with auditory hallucinations.

    PubMed

    Horacek, Jiri; Brunovsky, Martin; Novak, Tomas; Skrdlantova, Lucie; Klirova, Monika; Bubenikova-Valesova, Vera; Krajca, Vladimir; Tislerova, Barbora; Kopecek, Milan; Spaniel, Filip; Mohr, Pavel; Höschl, Cyril

    2007-01-01

    Auditory hallucinations are characteristic symptoms of schizophrenia with high clinical importance. It was repeatedly reported that low frequency (

  15. Probing neural mechanisms underlying auditory stream segregation in humans by transcranial direct current stimulation (tDCS).

    PubMed

    Deike, Susann; Deliano, Matthias; Brechmann, André

    2016-10-01

One hypothesis concerning the neural underpinnings of auditory streaming states that frequency tuning of tonotopically organized neurons in primary auditory fields in combination with physiological forward suppression is necessary for the separation of representations of high-frequency A and low-frequency B tones. The extent of spatial overlap between the tonotopic activations of A and B tones is thought to underlie the perceptual organization of streaming sequences into one coherent or two separate streams. The present study attempts to interfere with these mechanisms by transcranial direct current stimulation (tDCS) and to probe behavioral outcomes reflecting the perception of ABAB streaming sequences. We hypothesized that tDCS, by modulating cortical excitability, causes a change in the separateness of the representations of A and B tones, which leads to a change in the proportions of one-stream and two-stream percepts. To test this, 22 subjects were presented with ambiguous ABAB sequences of three different frequency separations (∆F) and had to decide on their current percept after receiving sham, anodal, or cathodal tDCS over the left auditory cortex. We could confirm our hypothesis at the most ambiguous ∆F condition of 6 semitones. For anodal compared with sham and cathodal stimulation, we found a significant decrease in the proportion of two-stream perception and an increase in the proportion of one-stream perception. The results demonstrate the feasibility of using tDCS to probe mechanisms underlying auditory streaming through the use of various behavioral measures. Moreover, this approach allows one to probe the functions of auditory regions and their interactions with other processing stages. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  16. OVERLAP OF HEARING AND VOICING RANGES IN SINGING

    PubMed Central

    Hunter, Eric J.; Titze, Ingo R.

    2008-01-01

Frequency and intensity ranges in voice production by trained and untrained singers were superimposed onto the average normal human hearing range. The vocal output for all subjects was shown both in Voice Range Profiles and Spectral Level Profiles. Trained singers took greater advantage of the dynamic range of the auditory system with harmonic energy (45% of the hearing range compared to 38% for untrained vocalists). This difference seemed to come from the trained singers’ ability to exploit the most sensitive part of the hearing range (around 3 to 4 kHz) through the use of the singer’s formant. The trained vocalists’ average maximum third-octave spectral band level was 95 dB SPL, compared to 80 dB SPL for untrained. PMID:19844607

  17. OVERLAP OF HEARING AND VOICING RANGES IN SINGING.

    PubMed

    Hunter, Eric J; Titze, Ingo R

    2005-04-01

Frequency and intensity ranges in voice production by trained and untrained singers were superimposed onto the average normal human hearing range. The vocal output for all subjects was shown both in Voice Range Profiles and Spectral Level Profiles. Trained singers took greater advantage of the dynamic range of the auditory system with harmonic energy (45% of the hearing range compared to 38% for untrained vocalists). This difference seemed to come from the trained singers' ability to exploit the most sensitive part of the hearing range (around 3 to 4 kHz) through the use of the singer's formant. The trained vocalists' average maximum third-octave spectral band level was 95 dB SPL, compared to 80 dB SPL for untrained.

  18. Brain state-dependent abnormal LFP activity in the auditory cortex of a schizophrenia mouse model

    PubMed Central

    Nakao, Kazuhito; Nakazawa, Kazu

    2014-01-01

In schizophrenia, evoked 40-Hz auditory steady-state responses (ASSRs) are impaired, which reflects the sensory deficits in this disorder, and baseline spontaneous oscillatory activity also appears to be abnormal. It has been debated whether the evoked ASSR impairments are due to a possible increase in baseline power. GABAergic interneuron-specific NMDA receptor (NMDAR) hypofunction mutant mice mimic some behavioral and pathophysiological aspects of schizophrenia. To determine the presence and extent of sensory deficits in these mutant mice, we recorded spontaneous local field potential (LFP) activity and its click-train evoked ASSRs from the primary auditory cortex of awake, head-restrained mice. Baseline spontaneous LFP power in the pre-stimulus period before application of the first click trains was augmented across a wide range of frequencies. However, when repetitive ASSR stimuli were presented every 20 s, averaged spontaneous LFP power amplitudes during the inter-ASSR stimulus intervals in the mutant mice became indistinguishable from the levels of control mice. Nonetheless, the evoked 40-Hz ASSR power and its phase locking to click trains were robustly impaired in the mutants, although the evoked 20-Hz ASSRs were also somewhat diminished. These results suggest that NMDAR hypofunction in cortical GABAergic neurons confers two brain state-dependent LFP abnormalities in the auditory cortex: (1) a broadband increase in spontaneous LFP power in the absence of external inputs, and (2) a robust deficit in the evoked ASSR power and its phase locking despite normal baseline LFP power magnitude during the repetitive auditory stimuli. The “paradoxically” high spontaneous LFP activity of the primary auditory cortex in the absence of external stimuli may possibly contribute to the emergence of schizophrenia-related aberrant auditory perception. PMID:25018691

  19. Human Time-Frequency Acuity Beats the Fourier Uncertainty Principle

    NASA Astrophysics Data System (ADS)

    Oppenheim, Jacob N.; Magnasco, Marcelo O.

    2013-01-01

    The time-frequency uncertainty principle states that the product of the temporal and frequency extents of a signal cannot be smaller than 1/(4π). We study human ability to simultaneously judge the frequency and the timing of a sound. Our subjects often exceeded the uncertainty limit, sometimes by more than tenfold, mostly through remarkable timing acuity. Our results establish a lower bound for the nonlinearity and complexity of the algorithms employed by our brains in parsing transient sounds, rule out simple “linear filter” models of early auditory processing, and highlight timing acuity as a central feature in auditory object processing.
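The stated bound is attained by a Gaussian pulse, the unique minimum-uncertainty waveform, which makes it easy to check numerically. A short Python sketch (sample rate and pulse width are arbitrary choices, not values from the study):

```python
import numpy as np

# A Gaussian pulse meets the bound sigma_t * sigma_f = 1/(4*pi) with equality.
fs = 10_000.0                      # sample rate (Hz)
t = np.arange(-2.0, 2.0, 1 / fs)   # 4-s time axis (s)
sigma_nominal = 0.05               # nominal RMS duration (s)
x = np.exp(-t**2 / (4 * sigma_nominal**2))

def rms_width(axis, power):
    """RMS extent of a distribution given samples of its power."""
    p = power / power.sum()
    mean = (axis * p).sum()
    return np.sqrt(((axis - mean) ** 2 * p).sum())

sigma_t = rms_width(t, np.abs(x) ** 2)    # temporal extent from |x(t)|^2
X = np.fft.fft(x)
f = np.fft.fftfreq(len(t), 1 / fs)        # includes negative frequencies
sigma_f = rms_width(f, np.abs(X) ** 2)    # spectral extent from |X(f)|^2

print(sigma_t * sigma_f)  # ~= 1/(4*pi) ~= 0.0796
```

Human listeners in the study beat this product, which is only possible because the auditory system is not a linear time-frequency analyzer bound by this limit.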

  20. The Relationship between Auditory Processing and Restricted, Repetitive Behaviors in Adults with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Kargas, Niko; López, Beatriz; Reddy, Vasudevi; Morris, Paul

    2015-01-01

    Current views suggest that autism spectrum disorders (ASDs) are characterised by enhanced low-level auditory discrimination abilities. Little is known, however, about whether enhanced abilities are universal in ASD and how they relate to symptomatology. We tested auditory discrimination for intensity, frequency and duration in 21 adults with ASD…

  1. The Chronometry of Mental Ability: An Event-Related Potential Analysis of an Auditory Oddball Discrimination Task

    ERIC Educational Resources Information Center

    Beauchamp, Chris M.; Stelmack, Robert M.

    2006-01-01

    The relation between intelligence and speed of auditory discrimination was investigated during an auditory oddball task with backward masking. In target discrimination conditions that varied in the interval between the target and the masking stimuli and in the tonal frequency of the target and masking stimuli, higher ability participants (HA)…

  2. An Experimental Investigation of the Effect of Altered Auditory Feedback on the Conversational Speech of Adults Who Stutter

    ERIC Educational Resources Information Center

    Lincoln, Michelle; Packman, Ann; Onslow, Mark; Jones, Mark

    2010-01-01

    Purpose: To investigate the impact on percentage of syllables stuttered of various durations of delayed auditory feedback (DAF), levels of frequency-altered feedback (FAF), and masking auditory feedback (MAF) during conversational speech. Method: Eleven adults who stuttered produced 10-min conversational speech samples during a control condition…

  3. Temporal coherence for pure tones in budgerigars (Melopsittacus undulatus) and humans (Homo sapiens).

    PubMed

    Neilans, Erikson G; Dent, Micheal L

    2015-02-01

    Auditory scene analysis has been suggested as a universal process that exists across all animals. Relative to humans, however, little work has been devoted to how animals perceptually isolate different sound sources. Frequency separation of sounds is arguably the most common parameter studied in auditory streaming, but it is not the only factor contributing to how the auditory scene is perceived. Researchers have found that in humans, even at large frequency separations, synchronous tones are heard as a single auditory stream, whereas asynchronous tones with the same frequency separations are perceived as 2 distinct sounds. These findings demonstrate how both the timing and frequency separation of sounds are important for auditory scene analysis. It is unclear how animals, such as budgerigars (Melopsittacus undulatus), perceive synchronous and asynchronous sounds. In this study, budgerigars and humans (Homo sapiens) were tested on their perception of synchronous, asynchronous, and partially overlapping pure tones using the same psychophysical procedures. Species differences were found between budgerigars and humans in how partially overlapping sounds were perceived, with budgerigars more likely to segregate overlapping sounds and humans more apt to fuse the 2 sounds together. The results also illustrated that temporal cues are particularly important for stream segregation of overlapping sounds. Lastly, budgerigars were found to segregate partially overlapping sounds in a manner predicted by computational models of streaming, whereas humans were not. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  4. The role of spatiotemporal and spectral cues in segregating short sound events: evidence from auditory Ternus display.

    PubMed

    Wang, Qingcui; Bao, Ming; Chen, Lihan

    2014-01-01

Previous studies using auditory sequences with rapid repetition of tones revealed that spatiotemporal cues and spectral cues are important for fusing or segregating sound streams. However, the perceptual grouping was partially driven by the cognitive processing of the periodicity cues of the long sequence. Here, we investigate whether perceptual groupings (spatiotemporal grouping vs. frequency grouping) could also be applicable to short auditory sequences, where auditory perceptual organization is mainly subserved by lower levels of perceptual processing. To find the answer to that question, we conducted two experiments using an auditory Ternus display. The display was composed of three speakers (A, B and C), with each speaker consecutively emitting one sound consisting of two frames (AB and BC). Experiment 1 manipulated both spatial and temporal factors. We implemented three 'within-frame intervals' (WFIs, or intervals between A and B, and between B and C), seven 'inter-frame intervals' (IFIs, or intervals between AB and BC) and two different speaker layouts (inter-distance of speakers: near or far). Experiment 2 manipulated the differentiations of frequencies between two auditory frames, in addition to the spatiotemporal cues as in Experiment 1. Listeners were required to make two alternative forced choices (2AFC) to report the perception of a given Ternus display: element motion (auditory apparent motion from sound A to B to C) or group motion (auditory apparent motion from sound 'AB' to 'BC'). The results indicate that the perceptual grouping of short auditory sequences (materialized by the perceptual decisions of the auditory Ternus display) was modulated by temporal and spectral cues, with the latter contributing more to segregating auditory events. Spatial layout plays a lesser role in perceptual organization. These results could be accounted for by the 'peripheral channeling' theory.

  5. Children with autism spectrum disorder have reduced otoacoustic emissions at the 1 kHz mid-frequency region.

    PubMed

    Bennetto, Loisa; Keith, Jessica M; Allen, Paul D; Luebke, Anne E

    2017-02-01

Autism spectrum disorder (ASD) is a behaviorally diagnosed disorder of early onset characterized by impairment in social communication and restricted and repetitive behaviors. Some of the earliest signs of ASD involve auditory processing, and a recent study found that hearing thresholds in children with ASD in the mid-range frequencies were significantly related to receptive and expressive language measures. In addition, otoacoustic emissions have been used to detect reduced cochlear function in the presence of normal audiometric thresholds. We were therefore interested to know whether otoacoustic emissions in children with normal audiometric thresholds would also reveal differences between children with ASD and typically developing (TD) controls in mid-frequency regions. Our objective was to specifically measure baseline afferent otoacoustic emissions (distortion-product otoacoustic emissions [DPOAEs]), transient-evoked otoacoustic emissions (TrOAEs), and efferent suppression in 35 children with high-functioning ASD compared with 42 age-matched TD controls. All participants were males 6-17 years old, with normal audiometry, rigorously characterized via the Autism Diagnostic Interview-Revised and the Autism Diagnostic Observation Schedule. Children with ASD had greatly reduced DPOAE responses in the 1 kHz frequency range, yet had comparable DPOAE responses in the 0.5 and 4-8 kHz regions. Furthermore, analysis of the spectral features of TrOAEs revealed significantly decreased emissions in ASD at similar frequencies. No significant differences were noted in DPOAE or TrOAE noise floors, middle ear muscle reflex activity, or efferent suppression between children with ASD and TD controls. In conclusion, attention to specific-frequency deficits using non-invasive measures of cochlear function may be important for understanding auditory processing impairments found in ASD. Autism Res 2017, 10: 337-345. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.

  6. High-frequency tone burst-evoked ABR latency-intensity functions.

    PubMed

    Fausti, S A; Olson, D J; Frey, R H; Henry, J A; Schaffer, H I

    1993-01-01

    High-frequency tone burst stimuli (8, 10, 12, and 14 kHz) have been developed and demonstrated to provide reliable and valid auditory brainstem responses (ABRs) in normal-hearing subjects. In this study, latency-intensity functions (LIFs) were determined using these stimuli in 14 normal-hearing individuals. Significant shifts in response latency occurred as a function of stimulus intensity for all tone burst frequencies. For each 10 dB shift in intensity, latency shifts for waves I and V were statistically significant except for one isolated instance. LIF slopes were comparable between frequencies, ranging from 0.020 to 0.030 msec/dB. These normal LIFs for high-frequency tone burst-evoked ABRs suggest the degree of response latency change that might be expected from, for example, progressive hearing loss due to ototoxic insult, although these phenomena may not be directly related.
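The reported LIF slopes (0.020 to 0.030 msec/dB) are simply least-squares slopes of response latency against stimulus intensity. A minimal sketch with hypothetical latencies (the values below are illustrative, not the study's data):

```python
import numpy as np

# Hypothetical wave V latencies (ms) at decreasing stimulus levels (dB),
# chosen to illustrate a slope within the reported 0.020-0.030 ms/dB range.
intensity_db = np.array([80, 70, 60, 50, 40])
latency_ms = np.array([6.10, 6.35, 6.62, 6.88, 7.15])

# First-degree polynomial fit gives the latency-intensity function slope.
slope, intercept = np.polyfit(intensity_db, latency_ms, 1)
print(f"LIF slope magnitude: {abs(slope):.3f} ms/dB")
```

The slope is negative because latency shortens as intensity rises; its magnitude is what is compared against normative values.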

  7. Comparisons of Stuttering Frequency during and after Speech Initiation in Unaltered Feedback, Altered Auditory Feedback and Choral Speech Conditions

    ERIC Educational Resources Information Center

    Saltuklaroglu, Tim; Kalinowski, Joseph; Robbins, Mary; Crawcour, Stephen; Bowers, Andrew

    2009-01-01

Background: Stuttering is prone to strike during speech initiation more so than at any other point in an utterance. The use of altered auditory feedback (AAF) has been found to produce robust decreases in stuttering frequency by creating an electronic rendition of choral speech (i.e., speaking in unison). However, AAF requires users to self-initiate…

  8. The impact of perilaryngeal vibration on the self-perception of loudness and the Lombard effect.

    PubMed

    Brajot, François-Xavier; Nguyen, Don; DiGiovanni, Jeffrey; Gracco, Vincent L

    2018-04-05

    The role of somatosensory feedback in speech and the perception of loudness was assessed in adults without speech or hearing disorders. Participants completed two tasks: loudness magnitude estimation of a short vowel and oral reading of a standard passage. Both tasks were carried out in each of three conditions: no-masking, auditory masking alone, and mixed auditory masking plus vibration of the perilaryngeal area. A Lombard effect was elicited in both masking conditions: speakers unconsciously increased vocal intensity. Perilaryngeal vibration further increased vocal intensity above what was observed for auditory masking alone. Both masking conditions affected fundamental frequency and the first formant frequency as well, but only vibration was associated with a significant change in the second formant frequency. An additional analysis of pure-tone thresholds found no difference in auditory thresholds between masking conditions. Taken together, these findings indicate that perilaryngeal vibration effectively masked somatosensory feedback, resulting in an enhanced Lombard effect (increased vocal intensity) that did not alter speakers' self-perception of loudness. This implies that the Lombard effect results from a general sensorimotor process, rather than from a specific audio-vocal mechanism, and that the conscious self-monitoring of speech intensity is not directly based on either auditory or somatosensory feedback.

  9. AUDITORY ASSOCIATIVE MEMORY AND REPRESENTATIONAL PLASTICITY IN THE PRIMARY AUDITORY CORTEX

    PubMed Central

    Weinberger, Norman M.

    2009-01-01

    Historically, the primary auditory cortex has been largely ignored as a substrate of auditory memory, perhaps because studies of associative learning could not reveal the plasticity of receptive fields (RFs). The use of a unified experimental design, in which RFs are obtained before and after standard training (e.g., classical and instrumental conditioning) revealed associative representational plasticity, characterized by facilitation of responses to tonal conditioned stimuli (CSs) at the expense of other frequencies, producing CS-specific tuning shifts. Associative representational plasticity (ARP) possesses the major attributes of associative memory: it is highly specific, discriminative, rapidly acquired, consolidates over hours and days and can be retained indefinitely. The nucleus basalis cholinergic system is sufficient both for the induction of ARP and for the induction of specific auditory memory, including control of the amount of remembered acoustic details. Extant controversies regarding the form, function and neural substrates of ARP appear largely to reflect different assumptions, which are explicitly discussed. The view that the forms of plasticity are task-dependent is supported by ongoing studies in which auditory learning involves CS-specific decreases in threshold or bandwidth without affecting frequency tuning. Future research needs to focus on the factors that determine ARP and their functions in hearing and in auditory memory. PMID:17344002

  10. Automated cortical auditory evoked potentials threshold estimation in neonates.

    PubMed

    Oliveira, Lilian Sanches; Didoné, Dayane Domeneghini; Durante, Alessandra Spada

    2018-02-02

The evaluation of Cortical Auditory Evoked Potentials has been the focus of scientific studies in infants. Some authors have reported that automated response detection is effective in exploring these potentials in infants, but few have reported its efficacy in the search for thresholds. To analyze the latency, amplitude and thresholds of Cortical Auditory Evoked Potentials using an automatic response detection device in a neonatal population. This is a cross-sectional, observational study. Cortical Auditory Evoked Potentials were recorded in response to pure-tone stimuli at the frequencies 500, 1000, 2000 and 4000 Hz, presented in an intensity range between 0 and 80 dB HL using a single-channel recording. Detection of the P1 component was performed in an exclusively automated fashion, using Hotelling's T² statistical test. The latency and amplitude were obtained manually by three examiners. The study comprised 39 neonates up to 28 days old of both sexes, with presence of otoacoustic emissions and no risk factors for hearing loss. With the protocol used, Cortical Auditory Evoked Potential responses were detected in all subjects at high intensities and at threshold. The mean thresholds were 24.8±10.4, 25±9.0, 28±7.8 and 29.4±6.6 dB HL for 500, 1000, 2000 and 4000 Hz, respectively. Reliable responses were obtained in the assessment of cortical auditory potentials in the neonates assessed with a device for automatic response detection. Copyright © 2018 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
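The automated detection step can be illustrated with a one-sample Hotelling's T² test. The sketch below is a generic variant, not the device's exact algorithm: the function name, the number of bins, and the reduction of each sweep to per-bin mean voltages are illustrative assumptions. It tests whether the multivariate mean of the sweep features differs from zero (i.e., whether an evoked response is present).

```python
import numpy as np
from scipy import stats

def hotelling_t2_detect(epochs, n_bins=9, alpha=0.05):
    """Detect an evoked response with a one-sample Hotelling's T^2 test.

    epochs: (n_sweeps, n_samples) array of single-trial recordings.
    Each sweep is reduced to n_bins mean voltages; the test asks whether
    their multivariate mean across sweeps differs from zero.
    """
    n, m = epochs.shape
    # Reduce each sweep to per-bin mean voltages (the feature vector)
    X = epochs[:, : (m // n_bins) * n_bins].reshape(n, n_bins, -1).mean(axis=2)
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)                 # p x p sample covariance
    t2 = n * xbar @ np.linalg.solve(S, xbar)    # Hotelling's T^2 statistic
    p = n_bins
    # T^2 maps onto an F distribution with (p, n - p) degrees of freedom
    f_stat = (n - p) / (p * (n - 1)) * t2
    p_value = stats.f.sf(f_stat, p, n - p)
    return p_value < alpha, p_value
```

Feeding the function epochs that contain a repeated waveform buried in noise yields a small p-value, whereas noise-only epochs should fail to reach significance at the chosen alpha.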

  11. Auditory steady-state response in cochlear implant patients.

    PubMed

    Torres-Fortuny, Alejandro; Arnaiz-Marquez, Isabel; Hernández-Pérez, Heivet; Eimil-Suárez, Eduardo

    2018-03-19

Auditory steady-state responses to continuous amplitude-modulated tones at rates between 70 and 110 Hz have been proposed as a feasible alternative for objective frequency-specific audiometry in cochlear implant subjects. The aim of the present study is to obtain physiological thresholds by means of the auditory steady-state response in cochlear implant patients (Clarion HiRes 90K), with acoustic stimulation under free-field conditions, and to verify its biological origin. The sample comprised 11 subjects. Four amplitude-modulated tones of 500, 1000, 2000 and 4000 Hz were used as stimuli, using the multiple-frequency technique. Auditory steady-state responses were also recorded at an intensity of 0 dB HL, with a non-specific stimulus, and using a masking technique. The study enabled electrophysiological thresholds to be obtained for each subject in the sample. There were no auditory steady-state responses in either the 0 dB or the non-specific stimulus recordings. It was possible to obtain the masking thresholds. Differences between behavioral and electrophysiological thresholds of -6±16, -2±13, 0±22 and -8±18 dB were identified at frequencies of 500, 1000, 2000 and 4000 Hz, respectively. The auditory steady-state response seems to be a suitable technique to evaluate the hearing threshold in cochlear implant subjects. Copyright © 2018 Sociedad Española de Otorrinolaringología y Cirugía de Cabeza y Cuello. Published by Elsevier España, S.L.U. All rights reserved.
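The multiple-frequency technique presents several carriers simultaneously, each amplitude modulated at its own rate in the 70-110 Hz range, so that the steady-state responses can be separated in the EEG spectrum. A minimal stimulus sketch in Python (the carrier/modulation pairings below are illustrative, not the study's exact values):

```python
import numpy as np

def multiple_assr_stimulus(pairs, dur=1.0, fs=44100, depth=1.0):
    """Sum of amplitude-modulated tones for multiple-frequency ASSR testing.

    pairs: list of (carrier_hz, modulation_hz) tuples; each carrier is
    amplitude modulated at its own rate so the steady-state responses
    appear at distinct, identifiable frequencies in the EEG spectrum.
    """
    t = np.arange(int(dur * fs)) / fs
    sig = np.zeros_like(t)
    for fc, fm in pairs:
        env = 1 + depth * np.sin(2 * np.pi * fm * t)  # AM envelope
        sig += env * np.sin(2 * np.pi * fc * t)       # modulated carrier
    return sig / len(pairs)                           # normalize the sum

# Carriers at the audiometric frequencies, each with a distinct 70-110 Hz rate
stim = multiple_assr_stimulus([(500, 77), (1000, 85), (2000, 93), (4000, 101)])
```

Because each response is tagged by its modulation rate, all four carrier frequencies can be assessed from a single recording.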

  12. Pre-attentive, context-specific representation of fear memory in the auditory cortex of rat.

    PubMed

    Funamizu, Akihiro; Kanzaki, Ryohei; Takahashi, Hirokazu

    2013-01-01

    Neural representation in the auditory cortex is rapidly modulated by both top-down attention and bottom-up stimulus properties in order to improve perception in a given context. Learning-induced, pre-attentive map plasticity has also been studied in the anesthetized cortex; however, little attention has been paid to rapid, context-dependent modulation. We hypothesize that context-specific learning leads to pre-attentively modulated, multiplex representation in the auditory cortex. Here, we investigate map plasticity in the auditory cortices of anesthetized rats conditioned in a context-dependent manner, such that a conditioned stimulus (CS) of a 20-kHz tone and an unconditioned stimulus (US) of a mild electrical shock were associated only under a noisy auditory context, but not in silence. After the conditioning, although no distinct plasticity was found in the tonotopic map, tone-evoked responses were more noise-resistant than before conditioning. The conditioned group also showed a reduced spread of activation to each tone in noise, but not in silence, associated with a sharpening of frequency tuning. The encoding accuracy index of neurons showed that conditioning degraded the accuracy of tone-frequency representations in the noisy condition at off-CS regions, but not at CS regions, suggesting that arbitrary tones around the frequency of the CS were more likely perceived as the CS in the specific context where the CS was associated with the US. These results together demonstrate that learning-induced plasticity in the auditory cortex occurs in a context-dependent manner.

  13. Wide-dynamic-range forward suppression in marmoset inferior colliculus neurons is generated centrally and accounts for perceptual masking.

    PubMed

    Nelson, Paul C; Smith, Zachary M; Young, Eric D

    2009-02-25

    An organism's ability to detect and discriminate sensory inputs depends on the recent stimulus history. For example, perceptual detection thresholds for a brief tone can be elevated by as much as 50 dB when following a masking stimulus. Previous work suggests that such forward masking is not a direct result of peripheral neural adaptation; the central pathway apparently modifies the representation in a way that further attenuates the input's response to short probe signals. Here, we show that much of this transformation is complete by the level of the inferior colliculus (IC). Single-neuron extracellular responses were recorded in the central nucleus of the awake marmoset IC. The threshold for a 20 ms probe tone presented at best frequency was determined for various masker-probe delays, over a range of masker sound pressure levels (SPLs) and frequencies. The most striking aspect of the data was the increased potency of forward maskers as their SPL was increased, despite the fact that the excitatory response to the masker was often saturating or nonmonotonic over the same range of levels. This led to probe thresholds at high masker levels that were almost always higher than those observed in the auditory nerve. Probe threshold shifts were not usually caused by a persistent excitatory response to the masker; instead we propose a wide-dynamic-range inhibitory mechanism locked to sound offset as an explanation for several key aspects of the data. These findings further delineate the role of subcortical auditory processing in the generation of a context-dependent representation of ongoing acoustic scenes.

  14. Wide dynamic range forward suppression in marmoset inferior colliculus neurons is generated centrally and accounts for perceptual masking

    PubMed Central

    Nelson, Paul C.; Smith, Zachary M.; Young, Eric D.

    2009-01-01

    An organism’s ability to detect and discriminate sensory inputs depends on the recent stimulus history. For example, perceptual detection thresholds for a brief tone can be elevated by as much as 50 dB when following a masking stimulus. Previous work suggests that such forward masking is not a direct result of peripheral neural adaptation; the central pathway apparently modifies the representation in a way that further attenuates the input’s response to short probe signals. Here, we show that much of this transformation is complete by the level of the inferior colliculus (IC). Single-neuron extracellular responses were recorded in the central nucleus of the awake marmoset IC. The threshold for a 20-ms probe tone presented at best frequency was determined for various masker-probe delays, over a range of masker SPLs and frequencies. The most striking aspect of the data was the increased potency of forward maskers as their SPL was increased, despite the fact that the excitatory response to the masker was often saturating or non-monotonic over the same range of levels. This led to probe thresholds at high masker levels that were almost always higher than those observed in the auditory nerve. Probe threshold shifts were not usually caused by a persistent excitatory response to the masker; instead we propose a wide dynamic-range inhibitory mechanism locked to sound offset as an explanation for several key aspects of the data. These findings further delineate the role of subcortical auditory processing in the generation of a context-dependent representation of ongoing acoustic scenes. PMID:19244530

  15. Modification of computational auditory scene analysis (CASA) for noise-robust acoustic feature

    NASA Astrophysics Data System (ADS)

    Kwon, Minseok

    While there have been many attempts to mitigate interference from background noise, the performance of automatic speech recognition (ASR) can still be degraded with ease by various factors. However, normal-hearing listeners can accurately perceive the sounds of interest to them, which is believed to be a result of Auditory Scene Analysis (ASA). As a first attempt, a simulation of human auditory processing, called computational auditory scene analysis (CASA), was developed through physiological and psychological investigations of ASA. The CASA front end comprises the Zilany-Bruce auditory model, followed by fundamental-frequency tracking for voiced segmentation and detection of onset/offset pairs at each characteristic frequency (CF) for unvoiced segmentation. The resulting time-frequency (T-F) representation of the acoustic stimulation was converted into an acoustic feature, gammachirp-tone frequency cepstral coefficients (GFCC). Eleven keywords under various environmental conditions were used, and the robustness of GFCC was evaluated by spectral distance (SD) and dynamic time warping distance (DTW). In "clean" and "noisy" conditions, the application of CASA generally improved the noise robustness of the acoustic feature compared to a conventional method with or without noise suppression using an MMSE estimator. The initial study, however, not only showed a noise-type dependency at low SNR, but also called the evaluation methods into question. Some modifications were made to capture better spectral continuity from the acoustic feature matrix, to obtain faster processing speed, and to describe the human auditory system more precisely. The proposed framework includes: 1) multi-scale integration to capture more accurate continuity in feature extraction, 2) contrast enhancement (CE) of each CF by competition with neighboring frequency bands, and 3) auditory model modifications.
    The model modifications comprise a higher Q factor, a middle-ear filter more analogous to the human auditory system, regulation of the time-constant update for filters in the signal/control path, and level-independent frequency glides with fixed frequency modulation. First, we examined keyword recognition performance with the proposed methods in quiet and noise-corrupted environments. The results argue that multi-scale integration should be used along with CE in order to avoid ambiguous continuity in unvoiced segments. Moreover, including all the modifications was observed to guarantee noise-type-independent robustness, particularly under severe interference. The CASA system with the auditory model was then implemented in a single/dual-channel ASR using the reference TIMIT corpus to obtain more general results. The Hidden Markov Model Toolkit (HTK) was used for phone recognition under various environmental conditions. In a single-channel ASR, the results argue that unmasked acoustic features (unmasked GFCC) should be combined with target estimates from the mask to compensate for missing information. Observations from a dual-channel ASR show that the combined GFCC guarantees the highest performance regardless of interference within the speech. Moreover, the consistent improvement in noise robustness by GFCC (unmasked or combined) shows the validity of our proposed CASA implementation in a dual-microphone system. In conclusion, the proposed framework demonstrates the robustness of the acoustic features under various background interferences via both direct distance evaluation and statistical assessment. In addition, the introduction of a dual-microphone system using this framework shows the potential for effective implementation of auditory model-based CASA in ASR.
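    A gammatone-filterbank cepstral feature in the spirit of GFCC can be sketched as follows. This is a generic textbook construction, not the author's exact pipeline: the 4th-order gammatone impulse response, ERB-rate center-frequency spacing, cube-root compression, and 25-ms/10-ms framing are all assumed defaults.

```python
import numpy as np

def gfcc(signal, fs=16000, n_filters=32, n_ceps=13):
    """Gammatone-filterbank cepstral coefficients (rough sketch).

    Filters the signal with a bank of 4th-order gammatone filters on an
    ERB-spaced axis, takes frame energies, applies cube-root loudness
    compression, and decorrelates with a DCT over the filter axis.
    """
    # ERB-rate scale (Glasberg & Moore) for center-frequency spacing
    erb = lambda f: 21.4 * np.log10(4.37e-3 * f + 1)
    inv_erb = lambda e: (10 ** (e / 21.4) - 1) / 4.37e-3
    cfs = inv_erb(np.linspace(erb(50), erb(fs / 2 * 0.9), n_filters))
    t = np.arange(int(0.025 * fs)) / fs           # 25-ms impulse responses
    frame, hop = int(0.025 * fs), int(0.010 * fs)
    n_frames = 1 + (len(signal) - frame) // hop
    energies = np.empty((n_filters, n_frames))
    for i, fc in enumerate(cfs):
        b = 1.019 * 24.7 * (4.37e-3 * fc + 1)     # filter bandwidth
        ir = t ** 3 * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
        y = np.convolve(signal, ir, mode="same")
        for j in range(n_frames):
            energies[i, j] = np.mean(y[j * hop : j * hop + frame] ** 2)
    comp = np.cbrt(energies)                      # cube-root compression
    # Type-II DCT over the filter axis; keep the first n_ceps coefficients
    k = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * k + 1)) / (2 * n_filters))
    return dct @ comp                             # (n_ceps, n_frames)
```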

  16. Sensitivity and specificity of auditory steady‐state response testing

    PubMed Central

    Rabelo, Camila Maia; Schochat, Eliane

    2011-01-01

    INTRODUCTION: The ASSR test is an electrophysiological test that evaluates, among other aspects, neural synchrony, based on the frequency or amplitude modulation of tones. OBJECTIVE: The aim of this study was to determine the sensitivity and specificity of auditory steady‐state response testing in detecting lesions and dysfunctions of the central auditory nervous system. METHODS: Seventy volunteers were divided into three groups: those with normal hearing; those with mesial temporal sclerosis; and those with central auditory processing disorder. All subjects underwent auditory steady‐state response testing of both ears at 500 Hz and 2000 Hz (frequency modulation, 46 Hz). The difference between auditory steady‐state response‐estimated thresholds and behavioral thresholds (audiometric evaluation) was calculated. RESULTS: Estimated thresholds were significantly higher in the mesial temporal sclerosis group than in the normal and central auditory processing disorder groups. In addition, the difference between auditory steady‐state response‐estimated and behavioral thresholds, relative to the normal group, was greater in the mesial temporal sclerosis group than in the central auditory processing disorder group. DISCUSSION: Research focusing on central auditory nervous system (CANS) lesions has shown that individuals with CANS lesions present a greater difference between ASSR‐estimated thresholds and actual behavioral thresholds, ASSR‐estimated thresholds being significantly worse than behavioral thresholds in subjects with CANS insults. This is most likely because the disorder prevents the transmission of the sound stimulus from being in phase with the received stimulus, resulting in asynchronous transmitter release. Another possible cause of the greater difference between the ASSR‐estimated thresholds and the behavioral thresholds is impaired temporal resolution. 
CONCLUSIONS: The overall sensitivity of auditory steady‐state response testing was lower than its overall specificity. Although the overall specificity was high, it was lower in the central auditory processing disorder group than in the mesial temporal sclerosis group. Overall sensitivity was also lower in the central auditory processing disorder group than in the mesial temporal sclerosis group. PMID:21437442

  17. "Change deafness" arising from inter-feature masking within a single auditory object.

    PubMed

    Barascud, Nicolas; Griffiths, Timothy D; McAlpine, David; Chait, Maria

    2014-03-01

    Our ability to detect prominent changes in complex acoustic scenes depends not only on the ear's sensitivity but also on the capacity of the brain to process competing incoming information. Here, employing a combination of psychophysics and magnetoencephalography (MEG), we investigate listeners' sensitivity in situations when two features belonging to the same auditory object change in close succession. The auditory object under investigation is a sequence of tone pips characterized by a regularly repeating frequency pattern. Signals consisted of an initial, regularly alternating sequence of three short (60 msec) pure tone pips (in the form ABCABC…) followed by a long pure tone with a frequency that is either expected based on the ongoing regular pattern ("LONG-expected") or constitutes a pattern violation ("LONG-unexpected"). The change in LONG-expected is manifest as a change in duration (when the long pure tone exceeds the established duration of a tone pip), whereas the change in LONG-unexpected is manifest as a change in both the frequency pattern and the duration. Our results reveal a form of "change deafness": although changes in both the frequency pattern and the expected duration appear to be processed effectively by the auditory system-cortical signatures of both changes are evident in the MEG data-listeners often fail to detect changes in the frequency pattern when that change is closely followed by a change in duration. By systematically manipulating the properties of the changing features and measuring behavioral and MEG responses, we demonstrate that feature changes within the same auditory object, which occur close together in time, appear to compete for perceptual resources.

  18. Long-range synchrony of gamma oscillations and auditory hallucination symptoms in schizophrenia

    PubMed Central

    Mulert, C.; Kirsch; Pascual-Marqui, Roberto; McCarley, Robert W.; Spencer, Kevin M.

    2010-01-01

    Phase locking in the gamma-band range has been shown to be diminished in patients with schizophrenia. Moreover, there have been reports of positive correlations between phase locking in the gamma-band range and positive symptoms, especially hallucinations. The aim of the present study was to use a new methodological approach to investigate gamma-band phase synchronization between the left and right auditory cortex in patients with schizophrenia and its relationship to auditory hallucinations. Subjects were 18 patients with chronic schizophrenia (SZ) and 16 healthy control (HC) subjects. Auditory hallucination symptom scores were obtained using the Scale for the Assessment of Positive Symptoms. Stimuli were 40-Hz binaural click trains. The generators of the 40-Hz ASSR were localized using eLORETA and, based on the computed intracranial signals, lagged interhemispheric phase locking between the primary and secondary auditory cortices was analyzed. Current source density of the 40-Hz ASSR was significantly diminished in SZ in comparison to HC in the right superior and middle temporal gyrus (p<0.05). Interhemispheric phase locking was reduced in SZ in comparison to HC for the primary auditory cortices (p<0.05) but not for the secondary auditory cortices. A significant positive correlation was found between auditory hallucination symptom scores and phase synchronization between the primary auditory cortices (p<0.05, corrected for multiple testing) but not for the secondary auditory cortices. These results suggest that long-range synchrony of gamma oscillations is disturbed in schizophrenia and that this deficit is related to clinical symptoms such as auditory hallucinations. PMID:20713096
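    Interhemispheric phase locking of the kind analyzed here is commonly quantified with a phase-locking value (PLV); the sketch below is a generic PLV computation, not the study's exact measure (the study uses lagged phase synchronization on eLORETA source signals, which additionally discounts zero-lag contributions; the band edges and filter order here are illustrative).

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def interhemispheric_plv(left, right, fs, band=(35.0, 45.0)):
    """Phase-locking value between two channels in the gamma band.

    left, right : (n_trials, n_samples) arrays, e.g. source waveforms
    from the two auditory cortices. Returns a PLV in [0, 1]: 1 means
    the phase difference is identical across trials, 0 means the phase
    differences are uniformly scattered.
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    lf = filtfilt(b, a, left, axis=1)
    rf = filtfilt(b, a, right, axis=1)
    # Instantaneous phase difference via the analytic signal
    dphi = np.angle(hilbert(lf, axis=1)) - np.angle(hilbert(rf, axis=1))
    # Average unit phasors over trials, then average magnitude over time
    return float(np.mean(np.abs(np.mean(np.exp(1j * dphi), axis=0))))
```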

  19. Intracortical multiplication of thalamocortical signals in mouse auditory cortex.

    PubMed

    Li, Ling-yun; Li, Ya-tang; Zhou, Mu; Tao, Huizhong W; Zhang, Li I

    2013-09-01

    Cortical processing of sensory information begins with the transformation of thalamically relayed signals. We optogenetically silenced intracortical circuits to isolate thalamic inputs to layer 4 neurons and found that intracortical excitation linearly amplified thalamocortical responses underlying frequency and direction selectivity, with spectral range and tuning preserved, and prolonged the response duration. This signal pre-amplification and prolongation enhanced the salience of thalamocortically relayed information and ensured its robust, faithful and more persistent representation.

  20. An echolocation model for the restoration of an acoustic image from a single-emission echo

    NASA Astrophysics Data System (ADS)

    Matsuo, Ikuo; Yano, Masafumi

    2004-12-01

    Bats can form a fine acoustic image of an object using frequency-modulated echolocation sounds. The acoustic image is an impulse response, known as a reflected-intensity distribution, which is composed of amplitude and phase spectra over a range of frequencies. However, bats detect only the amplitude spectrum, due to the low temporal resolution of their peripheral auditory system, and the frequency range of the emission is restricted. It is therefore necessary to restore the acoustic image from limited information. The amplitude spectrum varies with changes in the configuration of the reflected-intensity distribution, while the phase spectrum varies with changes in both its configuration and location. Here, by introducing some reasonable constraints, a method is proposed for restoring an acoustic image from the echo. The configuration is extrapolated from the amplitude spectrum of the restricted frequency range by using the continuity condition of the amplitude spectrum at the minimum frequency of the emission and the minimum-phase condition. Determining the location requires extracting the amplitude spectra, which vary with location. For this purpose, Gaussian chirplets with a carrier frequency compatible with bat emission sweep rates were used. The location is estimated from the temporal changes of the amplitude spectra.
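    The minimum-phase condition invoked above lets an impulse response be recovered from an amplitude spectrum alone, since for a minimum-phase system the phase follows from the log-magnitude via the real cepstrum. The sketch below shows this standard signal-processing construction; it is a generic identity, not the authors' exact restoration procedure.

```python
import numpy as np

def minimum_phase_ir(amplitude, n_fft=None):
    """Reconstruct a minimum-phase impulse response from an amplitude
    spectrum (sampled at rfft bin frequencies, 0 to Nyquist).

    Uses the homomorphic method: fold the real cepstrum of the
    log-magnitude onto positive quefrencies, then exponentiate.
    """
    amp = np.asarray(amplitude, dtype=float)
    n = n_fft or 2 * (len(amp) - 1)
    log_mag = np.log(np.maximum(amp, 1e-12))      # guard against log(0)
    cep = np.fft.irfft(log_mag, n)                # real cepstrum
    # Fold the cepstrum: keep c[0], double positive quefrencies
    w = np.zeros(n)
    w[0] = 1.0
    w[1 : (n + 1) // 2] = 2.0
    if n % 2 == 0:
        w[n // 2] = 1.0
    min_phase_spec = np.exp(np.fft.rfft(w * cep, n))
    return np.fft.irfft(min_phase_spec, n)
```

    The returned impulse response has exactly the prescribed amplitude spectrum, with its energy concentrated at the start, as minimum phase requires.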

  1. Attention-driven auditory cortex short-term plasticity helps segregate relevant sounds from noise

    PubMed Central

    Ahveninen, Jyrki; Hämäläinen, Matti; Jääskeläinen, Iiro P.; Ahlfors, Seppo P.; Huang, Samantha; Raij, Tommi; Sams, Mikko; Vasios, Christos E.; Belliveau, John W.

    2011-01-01

    How can we concentrate on relevant sounds in noisy environments? A “gain model” suggests that auditory attention simply amplifies relevant and suppresses irrelevant afferent inputs. However, it is unclear whether this suffices when attended and ignored features overlap to stimulate the same neuronal receptive fields. A “tuning model” suggests that, in addition to gain, attention modulates feature selectivity of auditory neurons. We recorded magnetoencephalography, EEG, and functional MRI (fMRI) while subjects attended to tones delivered to one ear and ignored opposite-ear inputs. The attended ear was switched every 30 s to quantify how quickly the effects evolve. To produce overlapping inputs, the tones were presented alone vs. during white-noise masking notch-filtered ±1/6 octaves around the tone center frequencies. Amplitude modulation (39 vs. 41 Hz in opposite ears) was applied for “frequency tagging” of attention effects on maskers. Noise masking reduced early (50–150 ms; N1) auditory responses to unattended tones. In support of the tuning model, selective attention canceled out this attenuating effect but did not modulate the gain of 50–150 ms activity to nonmasked tones or steady-state responses to the maskers themselves. These tuning effects originated at nonprimary auditory cortices, purportedly occupied by neurons that, without attention, have wider frequency tuning than ±1/6 octaves. The attentional tuning evolved rapidly, during the first few seconds after attention switching, and correlated with behavioral discrimination performance. In conclusion, a simple gain model alone cannot explain auditory selective attention. In nonprimary auditory cortices, attention-driven short-term plasticity retunes neurons to segregate relevant sounds from noise. PMID:21368107

  2. Neurophysiological mechanisms of cortical plasticity impairments in schizophrenia and modulation by the NMDA receptor agonist D-serine

    PubMed Central

    Kantrowitz, Joshua T.; Epstein, Michael L.; Beggel, Odeta; Rohrig, Stephanie; Lehrfeld, Jonathan M.; Revheim, Nadine; Lehrfeld, Nayla P.; Reep, Jacob; Parker, Emily; Silipo, Gail; Ahissar, Merav; Javitt, Daniel C.

    2016-01-01

    Schizophrenia is associated with deficits in cortical plasticity that affect sensory brain regions and lead to impaired cognitive performance. Here we examined underlying neural mechanisms of auditory plasticity deficits using combined behavioural and neurophysiological assessment, along with neuropharmacological manipulation targeted at the N-methyl-D-aspartate type glutamate receptor (NMDAR). Cortical plasticity was assessed in a cohort of 40 schizophrenia/schizoaffective patients relative to 42 healthy control subjects using a fixed reference tone auditory plasticity task. In a second cohort (n = 21 schizophrenia/schizoaffective patients, n = 13 healthy controls), event-related potential and event-related time–frequency measures of auditory dysfunction were assessed during administration of the NMDAR agonist d-serine. Mismatch negativity was used as a functional read-out of auditory-level function. Clinical trials registration numbers were NCT01474395/NCT02156908. Schizophrenia/schizoaffective patients showed significantly reduced auditory plasticity versus healthy controls (P = 0.001) that correlated with measures of cognitive, occupational and social dysfunction. In event-related potential/time-frequency analyses, patients showed highly significant reductions in sensory N1 that reflected underlying impairments in θ responses (P < 0.001), along with reduced θ and β-power modulation during retention and motor-preparation intervals. Repeated administration of d-serine led to intercorrelated improvements in (i) auditory plasticity (P < 0.001); (ii) θ-frequency response (P < 0.05); and (iii) mismatch negativity generation to trained versus untrained tones (P = 0.02). Schizophrenia/schizoaffective patients show highly significant deficits in auditory plasticity that contribute to cognitive, occupational and social dysfunction. 
d-serine studies suggest first that NMDAR dysfunction may contribute to underlying cortical plasticity deficits and, second, that repeated NMDAR agonist administration may enhance cortical plasticity in schizophrenia. PMID:27913408

  3. Audio–visual interactions for motion perception in depth modulate activity in visual area V3A

    PubMed Central

    Ogawa, Akitoshi; Macaluso, Emiliano

    2013-01-01

    Multisensory signals can enhance the spatial perception of objects and events in the environment. Changes of visual size and auditory intensity provide us with the main cues about motion direction in depth. However, frequency changes in audition and binocular disparity in vision also contribute to the perception of motion in depth. Here, we presented subjects with several combinations of auditory and visual depth-cues to investigate multisensory interactions during processing of motion in depth. The task was to discriminate the direction of auditory motion in depth according to increasing or decreasing intensity. Rising or falling auditory frequency provided an additional within-audition cue that matched or did not match the intensity change (i.e. intensity-frequency (IF) “matched vs. unmatched” conditions). In two-thirds of the trials, a task-irrelevant visual stimulus moved either in the same or opposite direction of the auditory target, leading to audio–visual “congruent vs. incongruent” between-modalities depth-cues. Furthermore, these conditions were presented either with or without binocular disparity. Behavioral data showed that the best performance was observed in the audio–visual congruent condition with IF matched. Brain imaging results revealed maximal response in visual area V3A when all cues provided congruent and reliable depth information (i.e. audio–visual congruent, IF-matched condition including disparity cues). Analyses of effective connectivity revealed increased coupling from auditory cortex to V3A specifically in audio–visual congruent trials. We conclude that within- and between-modalities cues jointly contribute to the processing of motion direction in depth, and that they do so via dynamic changes of connectivity between visual and auditory cortices. PMID:23333414

  4. Audiological manifestations in HIV-positive adults.

    PubMed

    Matas, Carla Gentile; Angrisani, Rosanna Giaffredo; Magliaro, Fernanda Cristina Leite; Segurado, Aluisio Augusto Cotrim

    2014-07-01

    To characterize the findings of behavioral hearing assessment in HIV-positive individuals who received and did not receive antiretroviral treatment. This research was a cross-sectional study. The participants were 45 HIV-positive individuals (18 not exposed and 27 exposed to antiretroviral treatment) and 30 control-group individuals. All subjects completed an audiological evaluation through pure-tone audiometry, speech audiometry, and high-frequency audiometry. The hearing thresholds obtained by pure-tone audiometry were different between groups. The group that had received antiretroviral treatment had higher thresholds for the frequencies ranging from 250 to 3000 Hz compared with the control group and the group not exposed to treatment. In the range of frequencies from 4000 through 8000 Hz, the HIV-positive groups presented with higher thresholds than did the control group. The hearing thresholds determined by high-frequency audiometry were different between groups, with higher thresholds in the HIV-positive groups. HIV-positive individuals presented poorer results in pure-tone and high-frequency audiometry, suggesting impairment of the peripheral auditory pathway. Individuals who received antiretroviral treatment presented poorer results on both tests compared with individuals not exposed to antiretroviral treatment.

  5. Is the Role of External Feedback in Auditory Skill Learning Age Dependent?

    ERIC Educational Resources Information Center

    Zaltz, Yael; Roth, Daphne Ari-Even; Kishon-Rabin, Liat

    2017-01-01

    Purpose: The purpose of this study is to investigate the role of external feedback in auditory perceptual learning of school-age children as compared with that of adults. Method: Forty-eight children (7-9 years of age) and 64 adults (20-35 years of age) conducted a training session using an auditory frequency discrimination (difference limen for…

  6. How Hearing Loss Impacts Communication. Tipsheet: Serving Students Who Are Hard of Hearing

    ERIC Educational Resources Information Center

    Atcherson, Samuel R.; Johnson, Marni I.

    2009-01-01

    Hearing, or auditory processing, involves the use of many hearing skills in a single or combined fashion. The sounds that humans hear can be characterized by their intensity (loudness), frequency (pitch), and timing. Impairment of any of the auditory structures from the visible ear to the central auditory nervous system within the brain can have a…

  7. Auditory-motor integration of subliminal phase shifts in tapping: better than auditory discrimination would predict.

    PubMed

    Kagerer, Florian A; Viswanathan, Priya; Contreras-Vidal, Jose L; Whitall, Jill

    2014-04-01

    Unilateral tapping studies have shown that adults adjust to both perceptible and subliminal changes in phase or frequency. This study focuses on the phase responses to abrupt/perceptible and gradual/subliminal changes in auditory-motor relations during alternating bilateral tapping. We investigated these responses in participants with and without good perceptual acuity as determined by an auditory threshold test. Non-musician adults (nine per group) alternately tapped their index fingers in synchrony with auditory cues set at a frequency of 1.4 Hz. Both groups modulated their responses (with no after-effects) to perceptible and to subliminal changes as low as a 5° change in phase. The high-threshold participants were more variable than the adults with low threshold in their responses in the gradual condition set. Both groups demonstrated a synchronization asymmetry between dominant and non-dominant hands associated with the abrupt condition and the later blocks of the gradual condition. Our findings extend previous work in unilateral tapping and suggest (1) no relationship between a discrimination threshold and perceptible auditory-motor integration and (2) a noisier sub-cortical circuitry in those with higher thresholds.

  8. Auditory-motor integration of subliminal phase shifts in tapping: Better than auditory discrimination would predict

    PubMed Central

    Kagerer, Florian A.; Viswanathan, Priya; Contreras-Vidal, Jose L.; Whitall, Jill

    2014-01-01

    Unilateral tapping studies have shown that adults adjust to both perceptible and subliminal changes in phase or frequency. This study focuses on the phase responses to abrupt/perceptible and gradual/subliminal changes in auditory-motor relations during alternating bilateral tapping. We investigated these responses in participants with and without good perceptual acuity as determined by an auditory threshold test. Non-musician adults (9 per group) alternately tapped their index fingers in synchrony with auditory cues set at a frequency of 1.4 Hz. Both groups modulated their responses (with no after-effects) to perceptible and to subliminal changes as low as a 5° change in phase. The high threshold participants were more variable than the adults with low threshold in their responses in the gradual condition set (p=0.05). Both groups demonstrated a synchronization asymmetry between dominant and non-dominant hands associated with the abrupt condition and the later blocks of the gradual condition. Our findings extend previous work in unilateral tapping and suggest (1) no relationship between a discrimination threshold and perceptible auditory-motor integration and (2) a noisier subcortical circuitry in those with higher thresholds. PMID:24449013

  9. Speaker-independent factors affecting the perception of foreign accent in a second language

    PubMed Central

    Levi, Susannah V.; Winters, Stephen J.; Pisoni, David B.

    2012-01-01

    Previous research on foreign accent perception has largely focused on speaker-dependent factors such as age of learning and length of residence. Factors that are independent of a speaker’s language learning history have also been shown to affect perception of second language speech. The present study examined the effects of two such factors—listening context and lexical frequency—on the perception of foreign-accented speech. Listeners rated foreign accent in two listening contexts: auditory-only, where listeners only heard the target stimuli, and auditory+orthography, where listeners were presented with both an auditory signal and an orthographic display of the target word. Results revealed that higher frequency words were consistently rated as less accented than lower frequency words. The effect of the listening context emerged in two interactions: the auditory+orthography context reduced the effects of lexical frequency, but increased the perceived differences between native and non-native speakers. Acoustic measurements revealed some production differences for words of different levels of lexical frequency, though these differences could not account for all of the observed interactions from the perceptual experiment. These results suggest that factors independent of the speakers’ actual speech articulations can influence the perception of degree of foreign accent. PMID:17471745

  10. Multiple sound source localization using gammatone auditory filtering and direct sound componence detection

    NASA Astrophysics Data System (ADS)

    Chen, Huaiyu; Cao, Li

    2017-06-01

To study multiple sound source localization under room reverberation and background noise, we analyze the shortcomings of traditional broadband MUSIC and of broadband MUSIC based on ordinary auditory filtering, and then propose a new broadband MUSIC algorithm that combines gammatone auditory filtering with frequency-component selection control and detection of the ascending segment of the direct sound component. The proposed algorithm restricts processing to frequency components within the band of interest at the multichannel bandpass filtering stage. Detection of each source's direct sound component is used to suppress room reverberation interference; its merits are fast computation and avoidance of more complex de-reverberation processing. In addition, the pseudo-spectrum of each frequency channel is weighted by that channel's maximum amplitude for every speech frame. Simulations and experiments in a real reverberant room show that the proposed method performs well. Dynamic multiple sound source localization experiments indicate that the azimuth estimates of the proposed algorithm have a smaller average absolute error and that their histogram shows higher angular resolution.
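The amplitude-based weighting of per-channel pseudo-spectra described in this abstract can be sketched in a few lines. The toy pseudo-spectra, the frame data, and the helper name `fuse_pseudospectra` below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def fuse_pseudospectra(pseudo, channel_frames):
    """Weight each gammatone channel's MUSIC pseudo-spectrum by the maximum
    amplitude of that channel's output in the current speech frame, then sum."""
    weights = np.array([np.max(np.abs(frame)) for frame in channel_frames])
    weights = weights / weights.sum()        # normalise the weights
    return weights @ pseudo                  # (channels,) x (channels, angles)

# toy example: 3 gammatone channels, pseudo-spectra over 181 candidate azimuths
rng = np.random.default_rng(0)
pseudo = rng.random((3, 181))
pseudo[1, 90] = 10.0                         # channel 1 strongly favours 90 degrees
channel_frames = [0.1 * rng.standard_normal(256),   # low-amplitude channels...
                  1.0 * rng.standard_normal(256),   # ...and a dominant channel 1
                  0.1 * rng.standard_normal(256)]
fused = fuse_pseudospectra(pseudo, channel_frames)
est_azimuth = int(np.argmax(fused))
```

Channels carrying little energy in a given frame contribute correspondingly little to the fused spectrum, which is the intent of the amplitude weighting.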

  11. Familial auditory neuropathy.

    PubMed

    Wang, Qiuju; Gu, Rui; Han, Dongyi; Yang, Weiyan

    2003-09-01

Auditory neuropathy is a sensorineural hearing disorder characterized by absent or abnormal auditory brainstem responses and normal cochlear outer hair cell function as measured by otoacoustic emission recordings. Many risk factors are thought to be involved in its etiology and pathophysiology. Four Chinese pedigrees with familial auditory neuropathy were presented to demonstrate involvement of genetic factors in the etiology of auditory neuropathy. Probands of the above-mentioned pedigrees, who had been diagnosed with auditory neuropathy, were evaluated and followed in the Department of Otolaryngology-Head and Neck Surgery, Chinese People's Liberation Army General Hospital (Beijing, China). Their family members were studied, and the pedigree maps established. History of illness, physical examination, pure-tone audiometry, acoustic reflex, auditory brainstem responses, and transient evoked and distortion-product otoacoustic emissions were obtained from members of these families. Some subjects received vestibular caloric testing, computed tomography scan of the temporal bone, and electrocardiography to exclude other possible neuropathic disorders. In most affected patients, hearing loss of various degrees and speech discrimination difficulties started at 10 to 16 years of age. Their audiological evaluation showed absence of acoustic reflex and auditory brainstem responses. As expected in auditory neuropathy, these patients exhibited near-normal cochlear outer hair cell function as shown in distortion product otoacoustic emission recordings. Pure-tone audiometry revealed hearing loss ranging from mild to profound in these patients. Different inheritance patterns were observed in the four families. In Pedigree I, 7 male patients were identified among 43 family members, exhibiting an X-linked recessive pattern. Affected brothers were found in Pedigrees II and III, whereas in Pedigree IV, two sisters were affected. 
All the patients were otherwise normal without evidence of peripheral neuropathy at the time of writing. Patients with characteristics of nonsyndromic hereditary auditory neuropathy were identified in one large and three smaller Chinese families. Pedigree analysis suggested an X-linked, recessive hereditary pattern in one pedigree and autosomal recessive inheritances in the other three pedigrees. The phenotypes in the study were typical of auditory neuropathy; they were transmitted in different inheritance patterns, indicating clinical and genetic heterogeneity of this disorder. The observed inheritance and clinical audiological findings are different from those previously described for nonsyndromic low-frequency sensorineural hearing loss. This information should facilitate future molecular linkage analyses and positional cloning for the relative genes contributing to auditory neuropathy.

  12. Sound level-dependent growth of N1m amplitude with low and high-frequency tones.

    PubMed

    Soeta, Yoshiharu; Nakagawa, Seiji

    2009-04-22

The aim of this study was to determine whether the amplitude and/or latency of the N1m deflection of auditory-evoked magnetic fields are influenced by the level and frequency of sound. The results indicated that the amplitude of the N1m increased with sound level. The growth in amplitude with increasing sound level was almost constant with low frequencies (250-1000 Hz); however, this growth decreased with high frequencies (>2000 Hz). This amplitude behavior may reflect frequency-dependent differences in how activation of the peripheral and/or central auditory systems increases with level.

  13. Quantitative EEG and low resolution electromagnetic tomography (LORETA) imaging of patients with persistent auditory hallucinations.

    PubMed

    Lee, Seung-Hwan; Wynn, Jonathan K; Green, Michael F; Kim, Hyun; Lee, Kang-Joon; Nam, Min; Park, Joong-Kyu; Chung, Young-Cho

    2006-04-01

Electrophysiological studies have demonstrated gamma and beta frequency oscillations in response to auditory stimuli. The purpose of this study was to test whether auditory hallucinations (AH) in schizophrenia patients reflect abnormalities in gamma and beta frequency oscillations and to investigate source generators of these abnormalities. These questions were examined using quantitative electroencephalography (qEEG) and low-resolution electromagnetic tomography (LORETA) source imaging. Twenty-five schizophrenia patients with treatment-refractory AH, lasting for at least 2 years, and 23 schizophrenia patients with non-AH (N-AH) in the past 2 years were recruited for the study. Spectral analysis of the qEEG and source imaging of frequency bands of artifact-free 30 s epochs were examined during rest. AH patients showed significantly increased beta 1 and beta 2 frequency amplitude compared with N-AH patients. Gamma and beta (2 and 3) frequencies were significantly correlated in AH but not in N-AH patients. Source imaging revealed significantly increased beta (1 and 2) activity in the left inferior parietal lobule and the left medial frontal gyrus in AH versus N-AH patients. These results imply that AH reflects increased beta-frequency oscillations whose neural generators are localized in speech-related areas.

  14. Words from spontaneous conversational speech can be recognized with human-like accuracy by an error-driven learning algorithm that discriminates between meanings straight from smart acoustic features, bypassing the phoneme as recognition unit.

    PubMed

    Arnold, Denis; Tomaschek, Fabian; Sering, Konstantin; Lopez, Florence; Baayen, R Harald

    2017-01-01

Sound units play a pivotal role in cognitive models of auditory comprehension. The general consensus is that during perception listeners break down speech into auditory words and subsequently phones. Indeed, cognitive speech recognition is typically taken to be computationally intractable without phones. Here we present a computational model trained on 20 hours of conversational speech that recognizes word meanings within the range of human performance (model 25%, native speakers 20-44%), without making use of phone or word form representations. Our model also successfully generates predictions about the speed and accuracy of human auditory comprehension. At the heart of the model is a 'wide' yet sparse two-layer artificial neural network with some hundred thousand input units representing summaries of changes in acoustic frequency bands, and proxies for lexical meanings as output units. We believe that our model holds promise for resolving longstanding theoretical problems surrounding the notion of the phone in linguistic theory.

  15. Auditory evoked potential (AEP) measurements in stranded rough-toothed dolphins (Steno bredanensis)

    NASA Astrophysics Data System (ADS)

    Cook, Mandy L. H.; Manire, Charles A.; Mann, David A.

    2005-04-01

    Thirty-six rough-toothed dolphins (Steno bredanensis) live-stranded on Hutchinson Island, FL on August 6, 2004. Seven animals were transported to Mote Marine Laboratory for rehabilitation. Two auditory evoked potential (AEP) measurements were performed on each of five of these dolphins in air using a jawphone to present acoustic stimuli. Modulation rate transfer functions (MRTFs) were measured to establish how well the auditory system follows the temporal envelope of acoustic stimuli. A 40 kHz stimulus carrier was amplitude modulated (AM) with varying rates ranging from 200 Hz to 1800 Hz, in 200 Hz steps. The best AM-rate from the first dolphin tested was 1500 Hz. This AM rate was used in subsequent AEP measurements to determine evoked-potential hearing thresholds between 5000 and 80000 Hz. These findings show that rough-toothed dolphins can detect sounds between 5 and 80 kHz, and are most likely capable of detecting frequencies much higher than 80 kHz. MRTF data suggest that rough-toothed dolphins have a high temporal resolution, similar to that of other cetaceans.
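The sinusoidally amplitude-modulated stimuli used for such MRTF measurements are simple to construct. The sample rate, signal duration, and rectification-based envelope check below are illustrative assumptions, not the study's exact stimulus generation:

```python
import numpy as np

fs = 400_000                 # Hz; sample rate comfortably above the 40 kHz carrier
n = 8_000                    # 20 ms of signal
t = np.arange(n) / fs
fc = 40_000                  # carrier frequency, Hz (as in the study)
fm = 1_500                   # modulation rate, Hz (the best AM rate reported)

# 100%-depth sinusoidal AM: the envelope (1 + cos)/2 swings between 0 and 1
stim = (1 + np.cos(2 * np.pi * fm * t)) / 2 * np.sin(2 * np.pi * fc * t)

# sanity check: the rectified envelope's spectrum peaks at the modulation rate
env = np.abs(stim)
spec = np.abs(np.fft.rfft(env - env.mean()))
freqs = np.fft.rfftfreq(n, 1 / fs)
peak_hz = freqs[np.argmax(spec)]             # 1500.0
```

An EFR-style analysis then looks for energy in the evoked response at `fm`, just as this check finds it in the stimulus envelope.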

  16. Intracortical microstimulation induced changes in spectral and temporal response properties in cat auditory cortex.

    PubMed

    Valentine, Pamela A; Eggermont, Jos J

    2003-09-01

    Intracortical microstimulation (ICMS), consisting of a 40 ms burst (rate 300 Hz) of 10 microA pulses, repetitively administered once per second, for a total duration of 1 h, induced cortical reorganization in the primary auditory cortical field of the anesthetized cat. Multiple single-unit activity was simultaneously recorded from three to nine microelectrodes. Spiking activity was recorded from the same units prior to and following the application of ICMS in conjunction with tone pips at the characteristic frequency (CF) of the stimulus electrode. ICMS produced a significant increase in the mean firing rate, and in the occurrence of burst activity. There was an increase in the cross-correlation coefficient (R) for unit pairs recorded from sites distant from the ICMS site, and a decrease in R for unit pairs that were recorded at the stimulation site. ICMS induced a shift in the CF, dependent on the difference between the baseline CF and the ICMS-paired tone pip frequency. ICMS also resulted in broader tuning curves, increased driven peak firing rate and reduced response latency. This suggests a lasting reduction in inhibition in a small region surrounding the ICMS site that allows expansion of the frequency range normally represented in the vicinity of the stimulation electrode.

  17. In-air hearing of a diving duck: A comparison of psychoacoustic and auditory brainstem response thresholds

    USGS Publications Warehouse

    Crowell, Sara E.; Wells-Berlin, Alicia M.; Therrien, Ronald E.; Yannuzzi, Sally E.; Carr, Catherine E.

    2016-01-01

    Auditory sensitivity was measured in a species of diving duck that is not often kept in captivity, the lesser scaup. Behavioral (psychoacoustics) and electrophysiological [the auditory brainstem response (ABR)] methods were used to measure in-air auditory sensitivity, and the resulting audiograms were compared. Both approaches yielded audiograms with similar U-shapes and regions of greatest sensitivity (2000−3000 Hz). However, ABR thresholds were higher than psychoacoustic thresholds at all frequencies. This difference was least at the highest frequency tested using both methods (5700 Hz) and greatest at 1000 Hz, where the ABR threshold was 26.8 dB higher than the behavioral measure of threshold. This difference is commonly reported in studies involving many different species. These results highlight the usefulness of each method, depending on the testing conditions and availability of the animals.

  18. In-air hearing of a diving duck: A comparison of psychoacoustic and auditory brainstem response thresholds.

    PubMed

    Crowell, Sara E; Wells-Berlin, Alicia M; Therrien, Ronald E; Yannuzzi, Sally E; Carr, Catherine E

    2016-05-01

    Auditory sensitivity was measured in a species of diving duck that is not often kept in captivity, the lesser scaup. Behavioral (psychoacoustics) and electrophysiological [the auditory brainstem response (ABR)] methods were used to measure in-air auditory sensitivity, and the resulting audiograms were compared. Both approaches yielded audiograms with similar U-shapes and regions of greatest sensitivity (2000-3000 Hz). However, ABR thresholds were higher than psychoacoustic thresholds at all frequencies. This difference was least at the highest frequency tested using both methods (5700 Hz) and greatest at 1000 Hz, where the ABR threshold was 26.8 dB higher than the behavioral measure of threshold. This difference is commonly reported in studies involving many different species. These results highlight the usefulness of each method, depending on the testing conditions and availability of the animals.

  19. Cortical Auditory Evoked Potentials Recorded From Nucleus Hybrid Cochlear Implant Users.

    PubMed

    Brown, Carolyn J; Jeon, Eun Kyung; Chiou, Li-Kuei; Kirby, Benjamin; Karsten, Sue A; Turner, Christopher W; Abbas, Paul J

    2015-01-01

    Nucleus Hybrid Cochlear Implant (CI) users hear low-frequency sounds via acoustic stimulation and high-frequency sounds via electrical stimulation. This within-subject study compares three different methods of coordinating programming of the acoustic and electrical components of the Hybrid device. Speech perception and cortical auditory evoked potentials (CAEP) were used to assess differences in outcome. The goals of this study were to determine whether (1) the evoked potential measures could predict which programming strategy resulted in better outcome on the speech perception task or was preferred by the listener, and (2) CAEPs could be used to predict which subjects benefitted most from having access to the electrical signal provided by the Hybrid implant. CAEPs were recorded from 10 Nucleus Hybrid CI users. Study participants were tested using three different experimental processor programs (MAPs) that differed in terms of how much overlap there was between the range of frequencies processed by the acoustic component of the Hybrid device and range of frequencies processed by the electrical component. The study design included allowing participants to acclimatize for a period of up to 4 weeks with each experimental program prior to speech perception and evoked potential testing. Performance using the experimental MAPs was assessed using both a closed-set consonant recognition task and an adaptive test that measured the signal-to-noise ratio that resulted in 50% correct identification of a set of 12 spondees presented in background noise. Long-duration, synthetic vowels were used to record both the cortical P1-N1-P2 "onset" response and the auditory "change" response (also known as the auditory change complex [ACC]). Correlations between the evoked potential measures and performance on the speech perception tasks are reported. Differences in performance using the three programming strategies were not large. 
Peak-to-peak amplitude of the ACC was not found to be sensitive enough to accurately predict the programming strategy that resulted in the best performance on either measure of speech perception. All 10 Hybrid CI users had residual low-frequency acoustic hearing. For all 10 subjects, allowing them to use both the acoustic and electrical signals provided by the implant improved performance on the consonant recognition task. For most subjects, it also resulted in slightly larger cortical change responses. However, the impact that listening mode had on the cortical change responses was small, and again, the correlation between the evoked potential and speech perception results was not significant. CAEPs can be successfully measured from Hybrid CI users. The responses that are recorded are similar to those recorded from normal-hearing listeners. The goal of this study was to see if CAEPs might play a role either in identifying the experimental program that resulted in best performance on a consonant recognition task or in documenting benefit from the use of the electrical signal provided by the Hybrid CI. At least for the stimuli and specific methods used in this study, no such predictive relationship was found.

  20. Tonal frequency affects amplitude but not topography of rhesus monkey cranial EEG components.

    PubMed

    Teichert, Tobias

    2016-06-01

    The rhesus monkey is an important model of human auditory function in general and auditory deficits in neuro-psychiatric diseases such as schizophrenia in particular. Several rhesus monkey studies have described homologs of clinically relevant auditory evoked potentials such as pitch-based mismatch negativity, a fronto-central negativity that can be observed when a series of regularly repeating sounds is disrupted by a sound of different tonal frequency. As a result it is well known how differences of tonal frequency are represented in rhesus monkey EEG. However, to date there is no study that systematically quantified how absolute tonal frequency itself is represented. In particular, it is not known if frequency affects rhesus monkey EEG component amplitude and topography in the same way as previously shown for humans. A better understanding of the effect of frequency may strengthen inter-species homology and will provide a more solid foundation on which to build the interpretation of frequency MMN in the rhesus monkey. Using arrays of up to 32 cranial EEG electrodes in 4 rhesus macaques we identified 8 distinct auditory evoked components including the N85, a fronto-central negativity that is the presumed homolog of the human N1. In line with human data, the amplitudes of most components including the N85 peaked around 1000 Hz and were strongly attenuated above ∼1750 Hz. Component topography, however, remained largely unaffected by frequency. This latter finding may be consistent with the known absence of certain anatomical structures in the rhesus monkey that are believed to cause the changes in topography in the human by inducing a rotation of generator orientation as a function of tonal frequency. Overall, the findings are consistent with the assumption of a homolog representation of tonal frequency in human and rhesus monkey EEG. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Adapted wavelet transform improves time-frequency representations: a study of auditory elicited P300-like event-related potentials in rats

    NASA Astrophysics Data System (ADS)

    Richard, Nelly; Laursen, Bettina; Grupe, Morten; Drewes, Asbjørn M.; Graversen, Carina; Sørensen, Helge B. D.; Bastlund, Jesper F.

    2017-04-01

Objective. Active auditory oddball paradigms are simple tone discrimination tasks used to study the P300 deflection of event-related potentials (ERPs). These ERPs may be quantified by time-frequency analysis. As auditory stimuli cause early high frequency and late low frequency ERP oscillations, the continuous wavelet transform (CWT) is often chosen for decomposition due to its multi-resolution properties. However, as the conventional CWT traditionally applies only one mother wavelet to represent the entire spectrum, the time-frequency resolution is not optimal across all scales. To account for this, we developed and validated a novel method specifically refined to analyse P300-like ERPs in rats. Approach. An adapted CWT (aCWT) was implemented to preserve high time-frequency resolution across all scales by employing multiple mother wavelets operating at different scales. First, decomposition of simulated ERPs was illustrated using the classical CWT and the aCWT. Next, the two methods were applied to EEG recordings obtained from prefrontal cortex in rats performing a two-tone auditory discrimination task. Main results. While only early ERP frequency changes between responses to target and non-target tones were detected by the CWT, both early and late changes were successfully described with strong accuracy by the aCWT in rat ERPs. Increased frontal gamma power and phase synchrony were observed particularly within theta and gamma frequency bands during deviant tones. Significance. The study suggests superior performance of the aCWT over the CWT in terms of detailed quantification of time-frequency properties of ERPs. Our methodological investigation indicates that accurate and complete assessment of time-frequency components of short-time neural signals is feasible with the novel analysis approach, which may be advantageous for characterisation of several types of evoked potentials, particularly in rodents.
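The core idea of the aCWT, matching the wavelet's resolution to the scale it analyses, can be sketched with a complex-Morlet transform whose cycle count varies per frequency. The test signal, sample rate, and cycle choices below are illustrative assumptions, not the authors' exact aCWT:

```python
import numpy as np

def morlet_cwt(x, fs, freqs, cycles):
    """Complex-Morlet time-frequency transform in which the wavelet width
    (number of cycles) is chosen separately for each analysis frequency."""
    out = np.empty((len(freqs), len(x)), dtype=complex)
    for i, (f, n_cyc) in enumerate(zip(freqs, cycles)):
        sigma = n_cyc / (2 * np.pi * f)                  # Gaussian width, seconds
        tw = np.arange(-4 * sigma, 4 * sigma, 1 / fs)
        w = np.exp(2j * np.pi * f * tw) * np.exp(-tw**2 / (2 * sigma**2))
        w /= np.sqrt(np.sum(np.abs(w)**2))               # unit-energy normalisation
        out[i] = np.convolve(x, w, mode="same")
    return out

# early high-frequency burst plus late low-frequency oscillation, as in auditory ERPs
fs = 500.0
t = np.arange(0, 2, 1 / fs)
sig = np.sin(2 * np.pi * 40 * t) * (t < 0.3) + np.sin(2 * np.pi * 5 * t) * (t > 1.0)

freqs = [5.0, 40.0]
cycles = [3.0, 7.0]   # fewer cycles at low frequency -> better temporal resolution
power = np.abs(morlet_cwt(sig, fs, freqs, cycles)) ** 2
```

With per-frequency cycle counts, the 40 Hz channel localises the early burst sharply in time while the 5 Hz channel still resolves the late oscillation, which is the resolution trade-off the aCWT addresses.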

  2. In-Vivo Animation of Auditory-Language-Induced Gamma-Oscillations in Children with Intractable Focal Epilepsy

    PubMed Central

    Brown, Erik C.; Rothermel, Robert; Nishida, Masaaki; Juhász, Csaba; Muzik, Otto; Hoechstetter, Karsten; Sood, Sandeep; Chugani, Harry T.; Asano, Eishi

    2008-01-01

    We determined if high-frequency gamma-oscillations (50- to 150-Hz) were induced by simple auditory communication over the language network areas in children with focal epilepsy. Four children (ages: 7, 9, 10 and 16 years) with intractable left-hemispheric focal epilepsy underwent extraoperative electrocorticography (ECoG) as well as language mapping using neurostimulation and auditory-language-induced gamma-oscillations on ECoG. The audible communication was recorded concurrently and integrated with ECoG recording to allow for accurate time-lock upon ECoG analysis. In three children, who successfully completed the auditory-language task, high-frequency gamma-augmentation sequentially involved: i) the posterior superior temporal gyrus when listening to the question, ii) the posterior lateral temporal region and the posterior frontal region in the time interval between question completion and the patient’s vocalization, and iii) the pre- and post-central gyri immediately preceding and during the patient’s vocalization. The youngest child, with attention deficits, failed to cooperate during the auditory-language task, and high-frequency gamma-augmentation was noted only in the posterior superior temporal gyrus when audible questions were given. The size of language areas suggested by statistically-significant high-frequency gamma-augmentation was larger than that defined by neurostimulation. The present method can provide in-vivo imaging of electrophysiological activities over the language network areas during language processes. Further studies are warranted to determine whether recording of language-induced gamma-oscillations can supplement language mapping using neurostimulation in presurgical evaluation of children with focal epilepsy. PMID:18455440

  3. Analysis of subtle auditory dysfunctions in young normal-hearing subjects affected by Williams syndrome.

    PubMed

    Paglialonga, Alessia; Barozzi, Stefania; Brambilla, Daniele; Soi, Daniela; Cesarani, Antonio; Spreafico, Emanuela; Tognola, Gabriella

    2014-11-01

To assess if young subjects affected by Williams syndrome (WS) with normal middle ear functionality and normal hearing thresholds might have subtle auditory dysfunctions that could be detected by using clinically available measurements. Otoscopy, acoustic reflexes, tympanometry, pure-tone audiometry, and distortion product otoacoustic emissions (DPOAEs) were measured in a group of 13 WS subjects and in 13 age-matched, typically developing control subjects. Participants were required to have normal otoscopy, A-type tympanogram, normal acoustic reflex thresholds, and pure-tone thresholds ≤ 15 dB HL at 0.5, 1, and 2 kHz bilaterally. To limit the possible influence of middle ear status on DPOAE recordings, we analyzed only data from ears with pure-tone thresholds ≤ 15 dB HL across all octave frequencies in the range 0.25-8 kHz, middle ear pressure (MEP) > -50 daPa, static compliance (SC) in the range 0.3-1.2 cm3, and ear canal volume (ECV) in the range 0.2-2 ml, and we performed analysis of covariance to remove the possible effects of middle ear variables on DPOAEs. No differences in mean hearing thresholds, SC, ECV, and gradient were observed between the two groups, whereas significantly lower MEP values were found in WS subjects as well as significantly decreased DPOAEs up to 3.2 kHz after adjusting for differences in middle ear status. Results revealed that WS subjects with normal hearing thresholds (≤15 dB HL) and normal middle ear functionality (MEP > -50 daPa, SC in the range 0.3-1.2 cm3, ECV in the range 0.2-2 ml) might have subtle auditory dysfunctions that can be detected by using clinically available methods. Overall, this study points out the importance of using otoacoustic emissions as a complement to routine audiological examinations in individuals with WS to detect, before the onset of hearing loss, possible subtle auditory dysfunctions so that patients can be identified early, monitored more closely, and treated promptly. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  4. Item-nonspecific proactive interference in monkeys' auditory short-term memory.

    PubMed

    Bigelow, James; Poremba, Amy

    2015-09-01

Recent studies using the delayed matching-to-sample (DMS) paradigm indicate that monkeys' auditory short-term memory (STM) is susceptible to proactive interference (PI). During the task, subjects must indicate whether sample and test sounds separated by a retention interval are identical (match) or not (nonmatch). If a nonmatching test stimulus also occurred on a previous trial, monkeys are more likely to incorrectly make a "match" response (item-specific PI). However, it is not known whether PI may be caused by sounds presented on prior trials that are similar, but nonidentical, to the current test stimulus (item-nonspecific PI). This possibility was investigated in two experiments. In Experiment 1, memoranda for each trial comprised tones with a wide range of frequencies, thus minimizing item-specific PI and producing a range of frequency differences among nonidentical tones. In Experiment 2, memoranda were drawn from a set of eight artificial sounds that differed from each other by one, two, or three acoustic dimensions (frequency, spectral bandwidth, and temporal dynamics). Results from both experiments indicate that subjects committed more errors when previously-presented sounds were acoustically similar (though not identical) to the test stimulus of the current trial. Significant effects were produced only by stimuli from the immediately previous trial, suggesting that item-nonspecific PI is less persistent than item-specific PI, which can extend across noncontiguous trials. Our results contribute to existing human and animal STM literature reporting item-nonspecific PI caused by perceptual similarity among memoranda. Together, these observations underscore the significance of both temporal and discriminability factors in monkeys' STM. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Magnetoencephalographic recording of auditory mismatch negativity in response to duration and frequency deviants in a single session in patients with schizophrenia.

    PubMed

    Suga, Motomu; Nishimura, Yukika; Kawakubo, Yuki; Yumoto, Masato; Kasai, Kiyoto

    2016-07-01

Auditory mismatch negativity (MMN) and its magnetoencephalographic (MEG) counterpart (MMNm) are an established biological index in schizophrenia research. MMN in response to duration and frequency deviants may have differential relevance to the pathophysiology and clinical stages of schizophrenia. MEG has the advantage that it detects almost exclusively the MMNm arising from the auditory cortex. However, few previous MEG studies on schizophrenia have simultaneously assessed MMNm in response to duration and frequency deviants or examined the effect of chronicity on the group difference. Forty-two patients with chronic schizophrenia and 74 matched control subjects participated in the study. Using a whole-head MEG, MMNm in response to duration and frequency deviants of tones was recorded while participants passively listened to an auditory sequence. Compared to healthy subjects, patients with schizophrenia exhibited significantly reduced powers of MMNm in response to duration deviant in both hemispheres, whereas MMNm in response to frequency deviant did not differ between the two groups. These results did not change according to the chronicity of the illness. These results, obtained by using a sequence enabling simultaneous assessment of both types of MMNm, suggest that MEG recording of MMN in response to duration deviant may be a more sensitive biological marker of schizophrenia than MMN in response to frequency deviant. Our findings represent an important first step towards establishment of MMN as a biomarker for schizophrenia in real-world clinical psychiatry settings. © 2016 The Authors. Psychiatry and Clinical Neurosciences © 2016 Japanese Society of Psychiatry and Neurology.

  6. Neural mechanisms underlying auditory feedback control of speech

    PubMed Central

    Reilly, Kevin J.; Guenther, Frank H.

    2013-01-01

    The neural substrates underlying auditory feedback control of speech were investigated using a combination of functional magnetic resonance imaging (fMRI) and computational modeling. Neural responses were measured while subjects spoke monosyllabic words under two conditions: (i) normal auditory feedback of their speech, and (ii) auditory feedback in which the first formant frequency of their speech was unexpectedly shifted in real time. Acoustic measurements showed compensation to the shift within approximately 135 ms of onset. Neuroimaging revealed increased activity in bilateral superior temporal cortex during shifted feedback, indicative of neurons coding mismatches between expected and actual auditory signals, as well as right prefrontal and Rolandic cortical activity. Structural equation modeling revealed increased influence of bilateral auditory cortical areas on right frontal areas during shifted speech, indicating that projections from auditory error cells in posterior superior temporal cortex to motor correction cells in right frontal cortex mediate auditory feedback control of speech. PMID:18035557

  7. Musicians' edge: A comparison of auditory processing, cognitive abilities and statistical learning.

    PubMed

    Mandikal Vasuki, Pragati Rao; Sharma, Mridula; Demuth, Katherine; Arciuli, Joanne

    2016-12-01

It has been hypothesized that musical expertise is associated with enhanced auditory processing and cognitive abilities. Recent research has examined the relationship between musicians' advantage and implicit statistical learning skills. In the present study, we assessed a variety of auditory processing skills, cognitive processing skills, and statistical learning (auditory and visual forms) in age-matched musicians (N = 17) and non-musicians (N = 18). Musicians had significantly better performance than non-musicians on frequency discrimination and backward digit span. A key finding was that musicians had better auditory, but not visual, statistical learning than non-musicians. Performance on the statistical learning tasks was not correlated with performance on auditory and cognitive measures. Musicians' superior performance on auditory (but not visual) statistical learning suggests that musical expertise is associated with an enhanced ability to detect statistical regularities in auditory stimuli. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Gerbil middle-ear sound transmission from 100 Hz to 60 kHz

    PubMed Central

    Ravicz, Michael E.; Cooper, Nigel P.; Rosowski, John J.

    2008-01-01

    Middle-ear sound transmission was evaluated as the middle-ear transfer admittance HMY (the ratio of stapes velocity to ear-canal sound pressure near the umbo) in gerbils during closed-field sound stimulation at frequencies from 0.1 to 60 kHz, a range that spans the gerbil’s audiometric range. Similar measurements were performed in two laboratories. The HMY magnitude (a) increased with frequency below 1 kHz, (b) remained approximately constant with frequency from 5 to 35 kHz, and (c) decreased substantially from 35 to 50 kHz. The HMY phase increased linearly with frequency from 5 to 35 kHz, consistent with a 20–29 μs delay, and flattened at higher frequencies. Measurements from different directions showed that stapes motion is predominantly pistonlike except in a narrow frequency band around 10 kHz. Cochlear input impedance was estimated from HMY and previously-measured cochlear sound pressure. Results do not support the idea that the middle ear is a lossless matched transmission line. Results support the ideas that (1) middle-ear transmission is consistent with a mechanical transmission line or multiresonant network between 5 and 35 kHz and decreases at higher frequencies, (2) stapes motion is pistonlike over most of the gerbil auditory range, and (3) middle-ear transmission properties are a determinant of the audiogram. PMID:18646983
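    The 20–29 μs delay quoted above is inferred from the slope of the linear phase-frequency segment (delay = −Δφ/Δω). A minimal sketch of that calculation, using synthetic phase data rather than the authors' measurements:

    ```python
    import math

    def delay_from_phase(freqs_hz, phases_rad):
        """Estimate group delay (s) from a linear phase-frequency relation
        via a least-squares slope: delay = -(dphi/df) / (2*pi)."""
        n = len(freqs_hz)
        fm = sum(freqs_hz) / n
        pm = sum(phases_rad) / n
        num = sum((f - fm) * (p - pm) for f, p in zip(freqs_hz, phases_rad))
        den = sum((f - fm) ** 2 for f in freqs_hz)
        slope = num / den              # radians per Hz
        return -slope / (2 * math.pi)

    # Synthetic example: a pure 25-us delay gives phase = -2*pi*f*delay
    true_delay = 25e-6
    freqs = [5000 + 1000 * k for k in range(31)]   # the 5-35 kHz linear region
    phases = [-2 * math.pi * f * true_delay for f in freqs]
    print(delay_from_phase(freqs, phases))          # ≈ 2.5e-05 s
    ```

    Because the synthetic phase is exactly linear, the least-squares fit recovers the delay to floating-point precision.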

  9. Masking in three pinnipeds: underwater, low-frequency critical ratios.

    PubMed

    Southall, B L; Schusterman, R J; Kastak, D

    2000-09-01

    Behavioral techniques were used to determine underwater masked hearing thresholds for a northern elephant seal (Mirounga angustirostris), a harbor seal (Phoca vitulina), and a California sea lion (Zalophus californianus). Octave-band white noise maskers were centered at five test frequencies ranging from 200 to 2500 Hz; a slightly wider noise band was used for testing at 100 Hz. Critical ratios were calculated at one masking noise level for each test frequency. Above 200 Hz, critical ratios increased with frequency. This pattern is similar to that observed in most animals tested, and indicates that these pinnipeds lack specializations for detecting low-frequency tonal sounds in noise. However, the individual pinnipeds in this study, particularly the northern elephant seal, detected signals at relatively low signal-to-noise ratios. These results provide a means of estimating zones of auditory masking for pinnipeds exposed to anthropogenic noise sources.
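    A critical ratio is the masked tone threshold minus the masker's spectrum level; for flat noise, the spectrum level is the band level minus 10·log10(bandwidth). A small sketch with hypothetical numbers (not values from this study):

    ```python
    import math

    def spectrum_level(band_level_db, bandwidth_hz):
        """Spectrum level of flat noise: band level minus 10*log10(bandwidth)."""
        return band_level_db - 10 * math.log10(bandwidth_hz)

    def critical_ratio(masked_threshold_db, band_level_db, bandwidth_hz):
        """Critical ratio (dB): masked tone threshold minus masker spectrum level."""
        return masked_threshold_db - spectrum_level(band_level_db, bandwidth_hz)

    # Hypothetical octave band centred at 1000 Hz (~707 Hz wide)
    cr = critical_ratio(masked_threshold_db=95.0,
                        band_level_db=100.0,
                        bandwidth_hz=707.0)
    print(round(cr, 1))   # 23.5: threshold 95 dB vs ~71.5 dB/Hz spectrum level
    ```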

  10. Sensorineural hearing loss amplifies neural coding of envelope information in the central auditory system of chinchillas

    PubMed Central

    Zhong, Ziwei; Henry, Kenneth S.; Heinz, Michael G.

    2014-01-01

    People with sensorineural hearing loss often have substantial difficulty understanding speech under challenging listening conditions. Behavioral studies suggest that reduced sensitivity to the temporal structure of sound may be responsible, but underlying neurophysiological pathologies are incompletely understood. Here, we investigate the effects of noise-induced hearing loss on coding of envelope (ENV) structure in the central auditory system of anesthetized chinchillas. ENV coding was evaluated noninvasively using auditory evoked potentials recorded from the scalp surface in response to sinusoidally amplitude modulated tones with carrier frequencies of 1, 2, 4, and 8 kHz and a modulation frequency of 140 Hz. Stimuli were presented in quiet and in three levels of white background noise. The latency of scalp-recorded ENV responses was consistent with generation in the auditory midbrain. Hearing loss amplified neural coding of ENV at carrier frequencies of 2 kHz and above. This result may reflect enhanced ENV coding from the periphery and/or an increase in the gain of central auditory neurons. In contrast to expectations, hearing loss was not associated with a stronger adverse effect of increasing masker intensity on ENV coding. The exaggerated neural representation of ENV information shown here at the level of the auditory midbrain helps to explain previous findings of enhanced sensitivity to amplitude modulation in people with hearing loss under some conditions. Furthermore, amplified ENV coding may potentially contribute to speech perception problems in people with cochlear hearing loss by acting as a distraction from more salient acoustic cues, particularly in fluctuating backgrounds. PMID:24315815
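    The stimuli described are sinusoidally amplitude-modulated (SAM) tones, x(t) = (1 + m·sin(2π·fm·t))·sin(2π·fc·t). A minimal generator sketch; the 2 kHz carrier and 140 Hz modulator match the abstract, but duration, depth, and sampling rate are illustrative:

    ```python
    import math

    def sam_tone(fc, fm, depth, dur, fs):
        """Sinusoidally amplitude-modulated tone:
        x(t) = (1 + depth*sin(2*pi*fm*t)) * sin(2*pi*fc*t)."""
        n = int(dur * fs)
        return [(1 + depth * math.sin(2 * math.pi * fm * i / fs))
                * math.sin(2 * math.pi * fc * i / fs) for i in range(n)]

    # 2-kHz carrier modulated at 140 Hz, fully modulated (depth = 1)
    x = sam_tone(fc=2000, fm=140, depth=1.0, dur=0.05, fs=16000)
    print(len(x))   # 800 samples (50 ms at 16 kHz)
    ```

    With full modulation depth the envelope swings between 0 and 2, which is why the peak sample values approach 2.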

  11. Auditory steady state response in sound field.

    PubMed

    Hernández-Pérez, H; Torres-Fortuny, A

    2013-02-01

    Physiological and behavioral responses were compared in normal-hearing subjects via analyses of the auditory steady-state response (ASSR) and conventional audiometry under sound field conditions. The auditory stimuli, presented through a loudspeaker, consisted of four carrier tones (500, 1000, 2000, and 4000 Hz), presented singly for behavioral testing but combined (multiple-frequency technique) to estimate thresholds using the ASSR. Twenty normal-hearing adults were examined. The average differences between the physiological and behavioral thresholds were between 17 and 22 dB HL. The Spearman rank correlation between ASSR and behavioral thresholds was significant for all frequencies (p < 0.05). Significant differences were found in the ASSR amplitude among frequencies, and strong correlations between the ASSR amplitude and the stimulus level (p < 0.05). ASSR testing in the sound field was found to yield hearing threshold estimates that correlate reasonably well with behaviorally assessed thresholds.

  12. Behavioral and subcortical signatures of musical expertise in Mandarin Chinese speakers

    PubMed Central

    Tervaniemi, Mari; Aalto, Daniel

    2018-01-01

    Both musical training and native language have been shown to have experience-based plastic effects on auditory processing. However, the combined effects within individuals are unclear. Recent research suggests that musical training and tone language speaking are not clearly additive in their effects on processing of auditory features and that there may be a disconnect between perceptual and neural signatures of auditory feature processing. The literature has only recently begun to investigate the effects of musical expertise on basic auditory processing for different linguistic groups. This work provides a profile of primary auditory feature discrimination for Mandarin speaking musicians and nonmusicians. The musicians showed enhanced perceptual discrimination for both frequency and duration as well as enhanced duration discrimination in a multifeature discrimination task, compared to nonmusicians. However, there were no differences between the groups in duration processing of nonspeech sounds at a subcortical level or in subcortical frequency representation of a nonnative tone contour, for f0 or for the first or second formant region. The results indicate that musical expertise provides a cognitive, but not subcortical, advantage in a population of Mandarin speakers. PMID:29300756

  13. Click train encoding in primary and non-primary auditory cortex of anesthetized macaque monkeys.

    PubMed

    Oshurkova, E; Scheich, H; Brosch, M

    2008-06-02

    We studied encoding of temporally modulated sounds in 28 multiunits in the primary auditory cortical field (AI) and in 35 multiunits in the secondary auditory cortical field (caudomedial auditory cortical field, CM) by presenting periodic click trains with click rates between 1 and 300 Hz lasting for 2-4 s. We found that all multiunits increased or decreased their firing rate during the steady state portion of the click train and that all except two multiunits synchronized their firing to individual clicks in the train. Rate increases and synchronized responses were most prevalent and strongest at low click rates, as expressed by best modulation frequency, limiting frequency, percentage of responsive multiunits, and average rate response and vector strength. Synchronized responses occurred up to 100 Hz; rate response occurred up to 300 Hz. Both auditory fields responded similarly to low click rates but differed at click rates above approximately 12 Hz at which more multiunits in AI than in CM exhibited synchronized responses and increased rate responses and more multiunits in CM exhibited decreased rate responses. These findings suggest that the auditory cortex of macaque monkeys encodes temporally modulated sounds similar to the auditory cortex of other mammals. Together with other observations presented in this and other reports, our findings also suggest that AI and CM have largely overlapping sensitivities for acoustic stimulus features but encode these features differently.
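    Synchronization of firing to individual clicks is quantified above by vector strength: each spike is mapped to a unit phasor at its stimulus phase, and VS is the magnitude of the mean phasor (1 = perfect locking, ~0 = no locking). A self-contained sketch with invented spike times:

    ```python
    import cmath
    import math

    def vector_strength(spike_times, freq_hz):
        """Vector strength of spike synchronization to a periodic stimulus:
        VS = |mean of exp(i*2*pi*f*t)| over all spike times."""
        phasors = [cmath.exp(2j * math.pi * freq_hz * t) for t in spike_times]
        return abs(sum(phasors)) / len(phasors)

    # Perfectly phase-locked spikes: one spike per cycle of a 20-Hz click train
    locked = [k / 20 for k in range(40)]
    print(vector_strength(locked, 20))               # ≈ 1.0

    # Spikes spread evenly through the cycle: phasors cancel, no locking
    uniform = [k / 20 + (k % 10) / 200 for k in range(40)]
    print(round(vector_strength(uniform, 20), 3))    # 0.0
    ```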

  14. Characterization of Hearing Thresholds from 500 to 16,000 Hz in Dentists: A Comparative Study

    PubMed Central

    Gonçalves, Claudia Giglio de Oliveira; Santos, Luciana; Lobato, Diolen; Ribas, Angela; Lacerda, Adriana Bender Moreira; Marques, Jair

    2014-01-01

    Introduction High-level noise exposure in dentists' workplaces may cause damages to the auditory systems. High-frequency audiometry is an important tool in the investigation in the early diagnosis of hearing loss. Objectives To analyze the auditory thresholds at frequencies from 500 to 16,000 Hz of dentists in the city of Curitiba. Methods This historic cohort study retrospectively tested hearing thresholds from 500 to 16,000 Hz with a group of dentists from Curitiba, in the state of Paraná, Brazil. Eighty subjects participated in the study, separated into a dentist group and a control group, with the same age range and gender across groups but with no history of occupational exposure to high levels of sound pressure in the control group. Subjects were tested with conventional audiometry and high-frequency audiometry and answered a questionnaire about exposure to noise. Results Results showed that 81% of dentists did not receive any information regarding noise at university; 6 (15%) dentists had sensorineural hearing impairment; significant differences were observed between the groups only at frequencies of 500 Hz and 1,000, 6,000 and 8,000 Hz in the right ear. There was no significant difference between the groups after analysis of mean hearing thresholds of high frequencies with the average hearing thresholds in conventional frequencies; subjects who had been working as dentists for longer than 10 years had worse tonal hearing thresholds at high frequencies. Conclusions In this study, we observed that dentists are at risk for the development of sensorineural hearing loss especially after 10 years of service. PMID:25992172

  15. Frequency tagging to track the neural processing of contrast in fast, continuous sound sequences.

    PubMed

    Nozaradan, Sylvie; Mouraux, André; Cousineau, Marion

    2017-07-01

    The human auditory system presents a remarkable ability to detect rapid changes in fast, continuous acoustic sequences, as best illustrated in speech and music. However, the neural processing of rapid auditory contrast remains largely unclear, probably due to the lack of methods to objectively dissociate the response components specifically related to the contrast from the other components in response to the sequence of fast continuous sounds. To overcome this issue, we tested a novel use of the frequency-tagging approach allowing contrast-specific neural responses to be tracked based on their expected frequencies. The EEG was recorded while participants listened to 40-s sequences of sounds presented at 8 Hz. A tone or interaural time contrast was embedded every fifth sound (AAAAB), such that a response observed in the EEG at exactly 8 Hz/5 (1.6 Hz) or harmonics should be the signature of contrast processing by neural populations. Contrast-related responses were successfully identified, even in the case of very fine contrasts. Moreover, analysis of the time course of the responses revealed a stable amplitude over repetitions of the AAAAB patterns in the sequence, except for the response to perceptually salient contrasts, which showed a buildup and decay across repetitions of the sounds. Overall, this new combination of frequency tagging with an oddball design provides a valuable complement to the classic, transient, evoked potentials approach, especially in the context of rapid auditory information. Specifically, we provide objective evidence on the neural processing of contrast embedded in fast, continuous sound sequences. NEW & NOTEWORTHY Recent theories suggest that the basis of neurodevelopmental auditory disorders such as dyslexia might be an impaired processing of fast auditory changes, highlighting how the encoding of rapid acoustic information is critical for auditory communication. 
Here, we present a novel electrophysiological approach for capturing neural markers of contrasts in fast, continuous tone sequences in humans. Contrast-specific responses were successfully identified, even for very fine contrasts, providing direct insight into the encoding of rapid auditory information. Copyright © 2017 the American Physiological Society.
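    The tagging logic can be checked numerically: a response sequence with period five at an 8 Hz presentation rate concentrates all contrast-related energy at 1.6 Hz and its harmonics. A toy sketch in which the per-sound response amplitudes are invented, not EEG data:

    ```python
    import cmath
    import math

    def dft_mag(x, k):
        """Magnitude of the k-th DFT bin of sequence x."""
        n = len(x)
        return abs(sum(x[m] * cmath.exp(-2j * math.pi * k * m / n)
                       for m in range(n)))

    # Hypothetical response amplitudes at an 8-Hz presentation rate:
    # every fifth sound (B) evokes a slightly larger response than A.
    seq = [1.0, 1.0, 1.0, 1.0, 1.3] * 16     # 80 sounds = 10 s at 8 Hz
    bin_hz = 8.0 / len(seq)                  # DFT bin spacing in Hz
    k_contrast = round(1.6 / bin_hz)         # 1.6 Hz = 8 Hz / 5

    # Energy appears at the contrast frequency, not at neighbouring bins
    print(dft_mag(seq, k_contrast) > dft_mag(seq, k_contrast - 1))   # True
    ```

    Because the sequence is exactly periodic with period five, every bin other than multiples of 1.6 Hz is (numerically) zero, which is what lets contrast responses be read off at known frequencies.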

  16. Within-subject joint independent component analysis of simultaneous fMRI/ERP in an auditory oddball paradigm

    PubMed Central

    MANGALATHU-ARUMANA, J.; BEARDSLEY, S. A.; LIEBENTHAL, E.

    2012-01-01

    The integration of event-related potential (ERP) and functional magnetic resonance imaging (fMRI) can contribute to characterizing neural networks with high temporal and spatial resolution. This research aimed to determine the sensitivity and limitations of applying joint independent component analysis (jICA) within subjects, for ERP and fMRI data collected simultaneously in a parametric auditory frequency oddball paradigm. In a group of 20 subjects, an increase in ERP peak amplitude ranging from 1 to 8 μV in the time window of the P300 (350–700 ms), and a correlated increase in fMRI signal in a network of regions including the right superior temporal and supramarginal gyri, was observed with the increase in deviant frequency difference. jICA of the same ERP and fMRI group data revealed activity in a similar network, albeit with stronger amplitude and larger extent. In addition, activity in the left pre- and postcentral gyri, likely associated with the right-hand somato-motor response, was observed only with the jICA approach. Within subjects, the jICA approach revealed significantly stronger and more extensive activity in the brain regions associated with the auditory P300 than did the P300 linear regression analysis. The results suggest that, by incorporating spatial and temporal information from both imaging modalities, jICA may be a more sensitive method for extracting common sources of activity between ERP and fMRI. PMID:22377443

  17. A Pole-Zero Filter Cascade Provides Good Fits to Human Masking Data and to Basilar Membrane and Neural Data

    NASA Astrophysics Data System (ADS)

    Lyon, Richard F.

    2011-11-01

    A cascade of two-pole-two-zero filters with level-dependent pole and zero dampings, with few parameters, can provide a good match to human psychophysical and physiological data. The model has been fitted to data on detection threshold for tones in notched-noise masking, including bandwidth and filter shape changes over a wide range of levels, and has been shown to provide better fits with fewer parameters compared to other auditory filter models such as gammachirps. Originally motivated as an efficient machine implementation of auditory filtering related to the WKB analysis method of cochlear wave propagation, such filter cascades also provide good fits to mechanical basilar membrane data, and to auditory nerve data, including linear low-frequency tail response, level-dependent peak gain, sharp tuning curves, nonlinear compression curves, level-independent zero-crossing times in the impulse response, realistic instantaneous frequency glides, and appropriate level-dependent group delay even with minimum-phase response. As part of exploring different level-dependent parameterizations of such filter cascades, we have identified a simple sufficient condition for stable zero-crossing times, based on the shifting property of the Laplace transform: simply move all the s-domain poles and zeros by equal amounts in the real-s direction. Such pole-zero filter cascades are efficient front ends for machine hearing applications, such as music information retrieval, content identification, speech recognition, and sound indexing.
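    The shifting-property argument at the end can be demonstrated numerically: moving every s-plane pole and zero by the same real amount multiplies the impulse response by a real exponential, which rescales it but cannot move its zero crossings. A sketch with a single complex pole pair (damping and frequency values are illustrative, not fitted parameters from the model):

    ```python
    import math

    def damped_sinusoid(damping, freq_hz, dur, fs):
        """Impulse-response component of a complex pole pair at
        s = -damping +/- i*2*pi*freq_hz: h(t) = exp(-damping*t)*sin(2*pi*f*t)."""
        return [math.exp(-damping * i / fs)
                * math.sin(2 * math.pi * freq_hz * i / fs)
                for i in range(int(dur * fs))]

    def zero_crossings(h):
        """Sample indices where the waveform changes sign."""
        return [i for i in range(1, len(h)) if h[i - 1] * h[i] < 0]

    # Shifting the pole pair by an equal real amount (sigma = 300 s^-1)
    # scales h(t) by exp(-sigma*t), leaving every zero crossing in place.
    a = damped_sinusoid(damping=200.0, freq_hz=1000.0, dur=0.01, fs=48000)
    b = damped_sinusoid(damping=500.0, freq_hz=1000.0, dur=0.01, fs=48000)
    print(zero_crossings(a) == zero_crossings(b))   # True
    ```

    A filter cascade's impulse response is a sum of such components, so shifting all poles and zeros together preserves zero-crossing times for the whole cascade, which is the stability condition the abstract identifies.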

  18. The steady-state response of the cerebral cortex to the beat of music reflects both the comprehension of music and attention

    PubMed Central

    Meltzer, Benjamin; Reichenbach, Chagit S.; Braiman, Chananel; Schiff, Nicholas D.; Hudspeth, A. J.; Reichenbach, Tobias

    2015-01-01

    The brain’s analyses of speech and music share a range of neural resources and mechanisms. Music displays a temporal structure of complexity similar to that of speech, unfolds over comparable timescales, and elicits cognitive demands in tasks involving comprehension and attention. During speech processing, synchronized neural activity of the cerebral cortex in the delta and theta frequency bands tracks the envelope of a speech signal, and this neural activity is modulated by high-level cortical functions such as speech comprehension and attention. It remains unclear, however, whether the cortex also responds to the natural rhythmic structure of music and how the response, if present, is influenced by higher cognitive processes. Here we employ electroencephalography to show that the cortex responds to the beat of music and that this steady-state response reflects musical comprehension and attention. We show that the cortical response to the beat is weaker when subjects listen to a familiar tune than when they listen to an unfamiliar, nonsensical musical piece. Furthermore, we show that in a task of intermodal attention there is a larger neural response at the beat frequency when subjects attend to a musical stimulus than when they ignore the auditory signal and instead focus on a visual one. Our findings may be applied in clinical assessments of auditory processing and music cognition as well as in the construction of auditory brain-machine interfaces. PMID:26300760

  19. [Which colours can we hear?: light stimulation of the hearing system].

    PubMed

    Wenzel, G I; Lenarz, T; Schick, B

    2014-02-01

    The success of conventional hearing aids and electrical auditory prostheses for hearing-impaired patients is still limited in noisy environments and for sounds more complex than speech (e.g., music). This is partially due to the difficulty of frequency-specific activation of the auditory system using these devices. Stimulation of the auditory system using light pulses represents an alternative to mechanical and electrical stimulation. Light is a source of energy that can be focused very precisely and applied with little scattering, thus offering prospects for optimal activation of the auditory system. Studies investigating light stimulation at sites along the auditory pathway have shown that stimulation of the auditory system with light pulses is possible. However, further studies and developments are needed before a new generation of light stimulation-based auditory prostheses can be made available for clinical application.

  20. [Low level auditory skills compared to writing skills in school children attending third and fourth grade: evidence for the rapid auditory processing deficit theory?].

    PubMed

    Ptok, M; Meisen, R

    2008-01-01

    The rapid auditory processing deficit theory holds that impaired reading/writing skills are not caused exclusively by a cognitive deficit specific to the representation and processing of speech sounds but arise from sensory, mainly auditory, deficits. To further explore this theory, we compared different measures of low-level auditory skills with writing skills in school children. Design: prospective study. Participants: school children attending third and fourth grade. Measures: just-noticeable differences for intensity and frequency (JNDI, JNDF), gap detection (GD), monaural and binaural temporal order judgement (TOJb and TOJm); grades in writing, language and mathematics. Analysis: correlation analysis. Results: no relevant correlation was found between any low-level auditory processing variable and writing skills. These data do not support the rapid auditory processing deficit theory.

  1. Auditory Cortex Is Required for Fear Potentiation of Gap Detection

    PubMed Central

    Weible, Aldis P.; Liu, Christine; Niell, Cristopher M.

    2014-01-01

    Auditory cortex is necessary for the perceptual detection of brief gaps in noise, but is not necessary for many other auditory tasks such as frequency discrimination, prepulse inhibition of startle responses, or fear conditioning with pure tones. It remains unclear why auditory cortex should be necessary for some auditory tasks but not others. One possibility is that auditory cortex is causally involved in gap detection and other forms of temporal processing in order to associate meaning with temporally structured sounds. This predicts that auditory cortex should be necessary for associating meaning with gaps. To test this prediction, we developed a fear conditioning paradigm for mice based on gap detection. We found that pairing a 10 or 100 ms gap with an aversive stimulus caused a robust enhancement of gap detection measured 6 h later, which we refer to as fear potentiation of gap detection. Optogenetic suppression of auditory cortex during pairing abolished this fear potentiation, indicating that auditory cortex is critically involved in associating temporally structured sounds with emotionally salient events. PMID:25392510

  2. Cortical evoked potentials to an auditory illusion: binaural beats.

    PubMed

    Pratt, Hillel; Starr, Arnold; Michalewski, Henry J; Dimitrijevic, Andrew; Bleich, Naomi; Mittelman, Nomi

    2009-08-01

    To define brain activity corresponding to an auditory illusion of 3 and 6 Hz binaural beats in 250 Hz or 1000 Hz base frequencies, and compare it to the sound onset response. Event-Related Potentials (ERPs) were recorded in response to unmodulated tones of 250 or 1000 Hz to one ear and 3 or 6 Hz higher to the other, creating an illusion of amplitude modulations (beats) of 3 Hz and 6 Hz, in base frequencies of 250 Hz and 1000 Hz. Tones were 2000 ms in duration and presented with approximately 1 s intervals. Latency, amplitude and source current density estimates of ERP components to tone onset and subsequent beats-evoked oscillations were determined and compared across beat frequencies with both base frequencies. All stimuli evoked tone-onset P50, N100 and P200 components followed by oscillations corresponding to the beat frequency, and a subsequent tone-offset complex. Beats-evoked oscillations were higher in amplitude with the low base frequency and to the low beat frequency. Sources of the beats-evoked oscillations located mostly to left lateral and inferior temporal lobe areas in all stimulus conditions. Onset-evoked components were not different across stimulus conditions; P50 had significantly different sources than the beats-evoked oscillations; and N100 and P200 sources located to the same temporal lobe regions as beats-evoked oscillations, but were bilateral and also included frontal and parietal contributions. Neural activity with slightly different volley frequencies from the left and right ears converges and interacts in the central auditory brainstem pathways to generate beats of neural activity that modulate activities in the left temporal lobe, giving rise to the illusion of binaural beats. Cortical potentials recorded to binaural beats are distinct from onset responses. Brain activity corresponding to an auditory illusion of low-frequency beats can be recorded from the scalp.

  3. Cortical Evoked Potentials to an Auditory Illusion: Binaural Beats

    PubMed Central

    Pratt, Hillel; Starr, Arnold; Michalewski, Henry J.; Dimitrijevic, Andrew; Bleich, Naomi; Mittelman, Nomi

    2009-01-01

    Objective: To define brain activity corresponding to an auditory illusion of 3 and 6 Hz binaural beats in 250 Hz or 1,000 Hz base frequencies, and compare it to the sound onset response. Methods: Event-Related Potentials (ERPs) were recorded in response to unmodulated tones of 250 or 1000 Hz to one ear and 3 or 6 Hz higher to the other, creating an illusion of amplitude modulations (beats) of 3 Hz and 6 Hz, in base frequencies of 250 Hz and 1000 Hz. Tones were 2,000 ms in duration and presented with approximately 1 s intervals. Latency, amplitude and source current density estimates of ERP components to tone onset and subsequent beats-evoked oscillations were determined and compared across beat frequencies with both base frequencies. Results: All stimuli evoked tone-onset P50, N100 and P200 components followed by oscillations corresponding to the beat frequency, and a subsequent tone-offset complex. Beats-evoked oscillations were higher in amplitude with the low base frequency and to the low beat frequency. Sources of the beats-evoked oscillations across all stimulus conditions located mostly to left lateral and inferior temporal lobe areas in all stimulus conditions. Onset-evoked components were not different across stimulus conditions; P50 had significantly different sources than the beats-evoked oscillations; and N100 and P200 sources located to the same temporal lobe regions as beats-evoked oscillations, but were bilateral and also included frontal and parietal contributions. Conclusions: Neural activity with slightly different volley frequencies from left and right ear converges and interacts in the central auditory brainstem pathways to generate beats of neural activity to modulate activities in the left temporal lobe, giving rise to the illusion of binaural beats. Cortical potentials recorded to binaural beats are distinct from onset responses. 
Significance: Brain activity corresponding to an auditory illusion of low frequency beats can be recorded from the scalp. PMID:19616993
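    The beat frequency in these records is simply the difference between the two carrier frequencies (3 Hz for 250 vs 253 Hz). When such tones are physically mixed rather than presented dichotically, the mixture's envelope fluctuates at that same difference frequency, which a DFT of the squared signal makes visible. A numerical check (the binaural-beat illusion itself is neural and is not reproduced by acoustic mixing):

    ```python
    import cmath
    import math

    def dft_mag(x, k):
        """Magnitude of the k-th DFT bin of sequence x."""
        n = len(x)
        return abs(sum(x[m] * cmath.exp(-2j * math.pi * k * m / n)
                       for m in range(n)))

    fs, dur = 2000, 1.0
    f_left, f_right = 250, 253            # 3-Hz difference, as in the stimuli
    n = int(fs * dur)

    # Squaring the mixed signal exposes its envelope: (sin A + sin B)^2
    # contains a cos term at the difference frequency |f_left - f_right|.
    mix_sq = [(math.sin(2 * math.pi * f_left * i / fs)
               + math.sin(2 * math.pi * f_right * i / fs)) ** 2
              for i in range(n)]
    print(dft_mag(mix_sq, 3) > dft_mag(mix_sq, 2))   # True: peak at the 3-Hz bin
    ```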

  4. The effects of auditory stimulation with music on heart rate variability in healthy women.

    PubMed

    Roque, Adriano L; Valenti, Vitor E; Guida, Heraldo L; Campos, Mônica F; Knap, André; Vanderlei, Luiz Carlos M; Ferreira, Lucas L; Ferreira, Celso; Abreu, Luiz Carlos de

    2013-07-01

    There are no data in the literature with regard to the acute effects of different styles of music on the geometric indices of heart rate variability. In this study, we evaluated the acute effects of relaxant baroque and excitatory heavy metal music on the geometric indices of heart rate variability in women. We conducted this study in 21 healthy women ranging in age from 18 to 35 years. We excluded persons with previous experience with musical instruments and persons who had an affinity for the song styles. We evaluated two groups: Group 1 (n = 21), who were exposed to relaxant classical baroque musical and excitatory heavy metal auditory stimulation; and Group 2 (n = 19), who were exposed to both styles of music and white noise auditory stimulation. Using earphones, the volunteers were exposed to baroque or heavy metal music for five minutes. After the first music exposure to baroque or heavy metal music, they remained at rest for five minutes; subsequently, they were re-exposed to the opposite music (70-80 dB). A different group of women were exposed to the same music styles plus white noise auditory stimulation (90 dB). The sequence of the songs was randomized for each individual. We analyzed the following indices: triangular index, triangular interpolation of RR intervals and Poincaré plot (standard deviation of instantaneous beat-by-beat variability, standard deviation of the long-term RR interval, standard deviation of instantaneous beat-by-beat variability and standard deviation of the long-term RR interval ratio), low frequency, high frequency, low frequency/high frequency ratio, standard deviation of all the normal RR intervals, root-mean square of differences between the adjacent normal RR intervals and the percentage of adjacent RR intervals with a difference of duration greater than 50 ms. Heart rate variability was recorded at rest for 10 minutes. 
The triangular index and the standard deviation of the long-term RR interval indices were reduced during exposure to both music styles in the first group and tended to decrease in the second group whereas the white noise exposure decreased the high frequency index. We observed no changes regarding the triangular interpolation of RR intervals, standard deviation of instantaneous beat-by-beat variability and standard deviation of instantaneous beat-by-beat variability/standard deviation in the long-term RR interval ratio. We suggest that relaxant baroque and excitatory heavy metal music slightly decrease global heart rate variability because of the equivalent sound level.

  5. The effects of auditory stimulation with music on heart rate variability in healthy women

    PubMed Central

    Roque, Adriano L.; Valenti, Vitor E.; Guida, Heraldo L.; Campos, Mônica F.; Knap, André; Vanderlei, Luiz Carlos M.; Ferreira, Lucas L.; Ferreira, Celso; de Abreu, Luiz Carlos

    2013-01-01

    OBJECTIVES: There are no data in the literature with regard to the acute effects of different styles of music on the geometric indices of heart rate variability. In this study, we evaluated the acute effects of relaxant baroque and excitatory heavy metal music on the geometric indices of heart rate variability in women. METHODS: We conducted this study in 21 healthy women ranging in age from 18 to 35 years. We excluded persons with previous experience with musical instruments and persons who had an affinity for the song styles. We evaluated two groups: Group 1 (n = 21), who were exposed to relaxant classical baroque musical and excitatory heavy metal auditory stimulation; and Group 2 (n = 19), who were exposed to both styles of music and white noise auditory stimulation. Using earphones, the volunteers were exposed to baroque or heavy metal music for five minutes. After the first music exposure to baroque or heavy metal music, they remained at rest for five minutes; subsequently, they were re-exposed to the opposite music (70-80 dB). A different group of women were exposed to the same music styles plus white noise auditory stimulation (90 dB). The sequence of the songs was randomized for each individual. We analyzed the following indices: triangular index, triangular interpolation of RR intervals and Poincaré plot (standard deviation of instantaneous beat-by-beat variability, standard deviation of the long-term RR interval, standard deviation of instantaneous beat-by-beat variability and standard deviation of the long-term RR interval ratio), low frequency, high frequency, low frequency/high frequency ratio, standard deviation of all the normal RR intervals, root-mean square of differences between the adjacent normal RR intervals and the percentage of adjacent RR intervals with a difference of duration greater than 50 ms. Heart rate variability was recorded at rest for 10 minutes. 
RESULTS: The triangular index and the standard deviation of the long-term RR interval indices were reduced during exposure to both music styles in the first group and tended to decrease in the second group whereas the white noise exposure decreased the high frequency index. We observed no changes regarding the triangular interpolation of RR intervals, standard deviation of instantaneous beat-by-beat variability and standard deviation of instantaneous beat-by-beat variability/standard deviation in the long-term RR interval ratio. CONCLUSION: We suggest that relaxant baroque and excitatory heavy metal music slightly decrease global heart rate variability because of the equivalent sound level. PMID:23917660
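    The geometric indices in these two records are derived from the RR-interval histogram, but the time-domain measures the abstracts also list (SDNN, RMSSD, pNN50) have simple closed forms. A minimal sketch on an invented RR series, not data from this study:

    ```python
    import math
    import statistics

    def hrv_indices(rr_ms):
        """Time-domain HRV from RR intervals in ms: SDNN (SD of all intervals),
        RMSSD (root mean square of successive differences), and pNN50
        (% of successive differences exceeding 50 ms)."""
        diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
        sdnn = statistics.stdev(rr_ms)
        rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
        pnn50 = 100 * sum(1 for d in diffs if abs(d) > 50) / len(diffs)
        return sdnn, rmssd, pnn50

    # Hypothetical short RR series (ms)
    sdnn, rmssd, pnn50 = hrv_indices([800, 810, 790, 845, 800])
    print(round(rmssd, 1))   # 37.2
    ```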

  6. The Auditory Skills Necessary for Echolocation: A New Explanation.

    ERIC Educational Resources Information Center

    Carlson-Smith, C.; Wiener, W. R.

    1996-01-01

    This study employed an audiometric test battery with nine blindfolded undergraduate students to explore success factors in echolocation. Echolocation performance correlated significantly with several specific auditory measures. No relationship was found between high-frequency sensitivity and echolocation performance. (Author/PB)

  7. Effects of frequency discrimination training on tinnitus: results from two randomised controlled trials.

    PubMed

    Hoare, Derek J; Kowalkowski, Victoria L; Hall, Deborah A

    2012-08-01

    That auditory perceptual training may alleviate tinnitus draws on two observations: (1) tinnitus probably arises from altered activity within the central auditory system following hearing loss and (2) sound-based training can change central auditory activity. Training that provides sound enrichment across hearing loss frequencies has therefore been hypothesised to alleviate tinnitus. We tested this prediction with two randomised trials of frequency discrimination training involving a total of 70 participants with chronic subjective tinnitus. Participants trained on either (1) a pure-tone standard at a frequency within their region of normal hearing, (2) a pure-tone standard within the region of hearing loss or (3) a high-pass harmonic complex tone spanning a region of hearing loss. Analysis of the primary outcome measure revealed an overall reduction in self-reported tinnitus handicap after training that was maintained at a 1-month follow-up assessment, but there were no significant differences between groups. Secondary analyses also report the effects of different domains of tinnitus handicap on the psychoacoustical characteristics of the tinnitus percept (sensation level, bandwidth and pitch) and on duration of training. Our overall findings and conclusions cast doubt on the superiority of a purely acoustic mechanism to underpin tinnitus remediation. Rather, the nonspecific patterns of improvement are more suggestive that auditory perceptual training acts on a contributory mechanism such as selective attention or emotional state.

  8. Hearing at low and infrasonic frequencies.

    PubMed

    Møller, H; Pedersen, C S

    2004-01-01

    The human perception of sound at frequencies below 200 Hz is reviewed. Knowledge about our perception of this frequency range is important, since much of the sound we are exposed to in our everyday environment contains significant energy in this range. Sound at 20-200 Hz is called low-frequency sound, while for sound below 20 Hz the term infrasound is used. The hearing becomes gradually less sensitive for decreasing frequency, but despite the general understanding that infrasound is inaudible, humans can perceive infrasound, if the level is sufficiently high. The ear is the primary organ for sensing infrasound, but at levels somewhat above the hearing threshold it is possible to feel vibrations in various parts of the body. The threshold of hearing is standardized for frequencies down to 20 Hz, but there is a reasonably good agreement between investigations below this frequency. It is not only the sensitivity but also the perceived character of a sound that changes with decreasing frequency. Pure tones become gradually less continuous, the tonal sensation ceases around 20 Hz, and below 10 Hz it is possible to perceive the single cycles of the sound. A sensation of pressure at the eardrums also occurs. The dynamic range of the auditory system decreases with decreasing frequency. This compression can be seen in the equal-loudness-level contours, and it implies that a slight increase in level can change the perceived loudness from barely audible to loud. Combined with the natural spread in thresholds, it may have the effect that a sound, which is inaudible to some people, may be loud to others. Some investigations give evidence of persons with an extraordinary sensitivity in the low and infrasonic frequency range, but further research is needed in order to confirm and explain this phenomenon.

  9. Audiogram of a striped dolphin (Stenella coeruleoalba)

    NASA Astrophysics Data System (ADS)

    Kastelein, Ronald A.; Hagedoorn, Monique; Au, Whitlow W. L.; de Haan, Dick

    2003-02-01

    The underwater hearing sensitivity of a striped dolphin was measured in a pool using standard psycho-acoustic techniques. The go/no-go response paradigm and up-down staircase psychometric method were used. Auditory sensitivity was measured by using 12 narrow-band frequency-modulated signals having center frequencies between 0.5 and 160 kHz. The 50% detection threshold was determined for each frequency. The resulting audiogram for this animal was U-shaped, with hearing capabilities from 0.5 to 160 kHz (8 1/3 octaves). Maximum sensitivity (42 dB re 1 μPa) occurred at 64 kHz. The range of most sensitive hearing (defined as the frequency range with sensitivities within 10 dB of maximum sensitivity) was from 29 to 123 kHz (approximately 2 octaves). The animal's hearing became less sensitive below 32 kHz and above 120 kHz. Sensitivity decreased by about 8 dB per octave below 1 kHz and fell sharply at a rate of about 390 dB per octave above 140 kHz.
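The up-down staircase used here is a standard adaptive psychometric procedure. A minimal 1-up/1-down sketch is shown below; the step size, starting level, and reversal count are illustrative assumptions, not the parameters used with the dolphin.

```python
def staircase_threshold(detect, start_db, step_db=4.0, n_reversals=8):
    """1-up/1-down adaptive staircase: lower the stimulus level after each
    detection, raise it after each miss, and estimate the detection
    threshold as the mean of the levels at which the track reversed."""
    level = start_db
    last_dir_up = None                 # direction of the previous step
    reversals = []
    while len(reversals) < n_reversals:
        going_up = not detect(level)   # miss -> step up, hit -> step down
        if last_dir_up is not None and going_up != last_dir_up:
            reversals.append(level)    # track changed direction here
        last_dir_up = going_up
        level += step_db if going_up else -step_db
    return sum(reversals) / len(reversals)
```

With a simulated listener that always detects levels at or above 60 dB, the estimate converges to within one step of 60 dB; with a probabilistic listener, the 1-up/1-down rule tracks the 50% point of the psychometric function.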

  10. Sensitivity of human auditory cortex to rapid frequency modulation revealed by multivariate representational similarity analysis.

    PubMed

    Joanisse, Marc F; DeSouza, Diedre D

    2014-01-01

    Functional Magnetic Resonance Imaging (fMRI) was used to investigate the extent, magnitude, and pattern of brain activity in response to rapid frequency-modulated sounds. We examined this by manipulating the direction (rise vs. fall) and the rate (fast vs. slow) of the apparent pitch of iterated rippled noise (IRN) bursts. Acoustic parameters were selected to capture features used in phoneme contrasts; however, the stimuli themselves were not perceived as speech per se. Participants were scanned as they passively listened to sounds in an event-related paradigm. Univariate analyses revealed a greater level and extent of activation in bilateral auditory cortex in response to frequency-modulated sweeps compared to steady-state sounds. This effect was stronger in the left hemisphere. However, no regions showed selectivity for either rate or direction of frequency modulation. In contrast, multivoxel pattern analysis (MVPA) revealed feature-specific encoding for direction of modulation in auditory cortex bilaterally. Moreover, this effect was strongest when analyses were restricted to anatomical regions lying outside Heschl's gyrus. We found no support for feature-specific encoding of frequency modulation rate. Differential findings of modulation rate and direction of modulation are discussed with respect to their relevance to phonetic discrimination.

  11. Segregating the neural correlates of physical and perceived change in auditory input using the change deafness effect.

    PubMed

    Puschmann, Sebastian; Weerda, Riklef; Klump, Georg; Thiel, Christiane M

    2013-05-01

    Psychophysical experiments show that auditory change detection can be disturbed in situations in which listeners have to monitor complex auditory input. We made use of this change deafness effect to segregate the neural correlates of physical change in auditory input from brain responses related to conscious change perception in an fMRI experiment. Participants listened to two successively presented complex auditory scenes, which consisted of six auditory streams, and had to decide whether scenes were identical or whether the frequency of one stream was changed between presentations. Our results show that physical changes in auditory input, independent of successful change detection, are represented at the level of auditory cortex. Activations related to conscious change perception, independent of physical change, were found in the insula and the anterior cingulate cortex (ACC). Moreover, our data provide evidence for significant effective connectivity between auditory cortex and the insula in the case of correctly detected auditory changes, but not for missed changes. This underlines the importance of the insula/anterior cingulate network for conscious change detection.

  12. Sustained Cortical and Subcortical Measures of Auditory and Visual Plasticity following Short-Term Perceptual Learning.

    PubMed

    Lau, Bonnie K; Ruggles, Dorea R; Katyal, Sucharit; Engel, Stephen A; Oxenham, Andrew J

    2017-01-01

    Short-term training can lead to improvements in behavioral discrimination of auditory and visual stimuli, as well as enhanced EEG responses to those stimuli. In the auditory domain, fluency with tonal languages and musical training has been associated with long-term cortical and subcortical plasticity, but less is known about the effects of shorter-term training. This study combined electroencephalography (EEG) and behavioral measures to investigate short-term learning and neural plasticity in both auditory and visual domains. Forty adult participants were divided into four groups. Three groups trained on one of three tasks, involving discrimination of auditory fundamental frequency (F0), auditory amplitude modulation rate (AM), or visual orientation (VIS). The fourth (control) group received no training. Pre- and post-training tests, as well as retention tests 30 days after training, involved behavioral discrimination thresholds, steady-state visually evoked potentials (SSVEP) to the flicker frequencies of visual stimuli, and auditory envelope-following responses simultaneously evoked and measured in response to rapid stimulus F0 (EFR), thought to reflect subcortical generators, and slow amplitude modulation (ASSR), thought to reflect cortical generators. Enhancement of the ASSR was observed in both auditory-trained groups, not specific to the AM-trained group, whereas enhancement of the SSVEP was found only in the visually-trained group. No evidence was found for changes in the EFR. The results suggest that some aspects of neural plasticity can develop rapidly and may generalize across tasks but not across modalities. Behaviorally, the pattern of learning was complex, with significant cross-task and cross-modal learning effects.

  13. Sustained Cortical and Subcortical Measures of Auditory and Visual Plasticity following Short-Term Perceptual Learning

    PubMed Central

    Katyal, Sucharit; Engel, Stephen A.; Oxenham, Andrew J.

    2017-01-01

    Short-term training can lead to improvements in behavioral discrimination of auditory and visual stimuli, as well as enhanced EEG responses to those stimuli. In the auditory domain, fluency with tonal languages and musical training has been associated with long-term cortical and subcortical plasticity, but less is known about the effects of shorter-term training. This study combined electroencephalography (EEG) and behavioral measures to investigate short-term learning and neural plasticity in both auditory and visual domains. Forty adult participants were divided into four groups. Three groups trained on one of three tasks, involving discrimination of auditory fundamental frequency (F0), auditory amplitude modulation rate (AM), or visual orientation (VIS). The fourth (control) group received no training. Pre- and post-training tests, as well as retention tests 30 days after training, involved behavioral discrimination thresholds, steady-state visually evoked potentials (SSVEP) to the flicker frequencies of visual stimuli, and auditory envelope-following responses simultaneously evoked and measured in response to rapid stimulus F0 (EFR), thought to reflect subcortical generators, and slow amplitude modulation (ASSR), thought to reflect cortical generators. Enhancement of the ASSR was observed in both auditory-trained groups, not specific to the AM-trained group, whereas enhancement of the SSVEP was found only in the visually-trained group. No evidence was found for changes in the EFR. The results suggest that some aspects of neural plasticity can develop rapidly and may generalize across tasks but not across modalities. Behaviorally, the pattern of learning was complex, with significant cross-task and cross-modal learning effects. PMID:28107359

  14. Auditory Deficits in Amusia Extend Beyond Poor Pitch Perception

    PubMed Central

    Whiteford, Kelly L.; Oxenham, Andrew J.

    2017-01-01

    Congenital amusia is a music perception disorder believed to reflect a deficit in fine-grained pitch perception and/or short-term or working memory for pitch. Because most measures of pitch perception include memory and segmentation components, it has been difficult to determine the true extent of pitch processing deficits in amusia. It is also unclear whether pitch deficits persist at frequencies beyond the range of musical pitch. To address these questions, experiments were conducted with amusics and matched controls, manipulating both the stimuli and the task demands. First, we assessed pitch discrimination at low (500 Hz and 2000 Hz) and high (8000 Hz) frequencies using a three-interval forced-choice task. Amusics exhibited deficits even at the highest frequency, which lies beyond the existence region of musical pitch. Next, we assessed the extent to which frequency coding deficits persist in one- and two-interval frequency-modulation (FM) and amplitude-modulation (AM) detection tasks at 500 Hz at slow (fm = 4 Hz) and fast (fm = 20 Hz) modulation rates. Amusics still exhibited deficits in one-interval FM detection tasks that should not involve memory or segmentation. Surprisingly, amusics were also impaired on AM detection, which should not involve pitch processing. Finally, direct comparisons between the detection of continuous and discrete FM demonstrated that amusics suffer deficits both in coding and segmenting pitch information. Our results reveal auditory deficits in amusia extending beyond pitch perception that are subtle when controlling for memory and segmentation, and are likely exacerbated in more complex contexts such as musical listening. PMID:28315696
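The AM and FM stimuli described (e.g. a 500 Hz carrier modulated at fm = 4 or 20 Hz) can be synthesized with the textbook formulas below; the sampling rate, modulation depth, and frequency deviation are illustrative assumptions, not the study's exact stimulus parameters.

```python
import math

def am_tone(fc, fm, depth, fs, dur):
    """Sinusoidal amplitude modulation:
    (1 + m*sin(2*pi*fm*t)) * sin(2*pi*fc*t)."""
    n = int(fs * dur)
    return [(1 + depth * math.sin(2 * math.pi * fm * i / fs))
            * math.sin(2 * math.pi * fc * i / fs) for i in range(n)]

def fm_tone(fc, fm, dev_hz, fs, dur):
    """Sinusoidal frequency modulation with peak deviation dev_hz:
    sin(2*pi*fc*t + (dev_hz/fm)*sin(2*pi*fm*t))."""
    n = int(fs * dur)
    beta = dev_hz / fm  # modulation index
    return [math.sin(2 * math.pi * fc * i / fs
                     + beta * math.sin(2 * math.pi * fm * i / fs))
            for i in range(n)]
```

An AM tone's envelope swings between 1 - m and 1 + m around the carrier, while an FM tone keeps constant amplitude and sweeps instantaneous frequency between fc - dev_hz and fc + dev_hz, which is why AM detection should not engage pitch coding in the way FM detection does.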

  15. Auditory steady state responses and cochlear implants: Modeling the artifact-response mixture in the perspective of denoising

    PubMed Central

    Mina, Faten; Attina, Virginie; Duroc, Yvan; Veuillet, Evelyne; Truy, Eric; Thai-Van, Hung

    2017-01-01

    Auditory steady state responses (ASSRs) in cochlear implant (CI) patients are contaminated by the spread of a continuous CI electrical stimulation artifact. The aim of this work was to model the electrophysiological mixture of the CI artifact and the corresponding evoked potentials on scalp electrodes in order to evaluate the performance of denoising algorithms in eliminating the CI artifact in a controlled environment. The basis of the proposed computational framework is a neural mass model representing the nodes of the auditory pathways. Six main contributors to auditory evoked potentials from the cochlear level and up to the auditory cortex were taken into consideration. The simulated dynamics were then projected into a 3-layer realistic head model. 32-channel scalp recordings of the CI artifact-response were then generated by solving the electromagnetic forward problem. As an application, the framework’s simulated 32-channel datasets were used to compare the performance of 4 commonly used Independent Component Analysis (ICA) algorithms: infomax, extended infomax, jade and fastICA in eliminating the CI artifact. As expected, two major components were detectable in the simulated datasets, a low frequency component at the modulation frequency and a pulsatile high frequency component related to the stimulation frequency. The first can be attributed to the phase-locked ASSR and the second to the stimulation artifact. Among the ICA algorithms tested, simulations showed that infomax was the most efficient and reliable in denoising the CI artifact-response mixture. Denoising algorithms can induce undesirable deformation of the signal of interest in real CI patient recordings. The proposed framework is a valuable tool for evaluating these algorithms in a controllable environment ahead of experimental or clinical applications. PMID:28350887

  16. Auditory steady state responses and cochlear implants: Modeling the artifact-response mixture in the perspective of denoising.

    PubMed

    Mina, Faten; Attina, Virginie; Duroc, Yvan; Veuillet, Evelyne; Truy, Eric; Thai-Van, Hung

    2017-01-01

    Auditory steady state responses (ASSRs) in cochlear implant (CI) patients are contaminated by the spread of a continuous CI electrical stimulation artifact. The aim of this work was to model the electrophysiological mixture of the CI artifact and the corresponding evoked potentials on scalp electrodes in order to evaluate the performance of denoising algorithms in eliminating the CI artifact in a controlled environment. The basis of the proposed computational framework is a neural mass model representing the nodes of the auditory pathways. Six main contributors to auditory evoked potentials from the cochlear level and up to the auditory cortex were taken into consideration. The simulated dynamics were then projected into a 3-layer realistic head model. 32-channel scalp recordings of the CI artifact-response were then generated by solving the electromagnetic forward problem. As an application, the framework's simulated 32-channel datasets were used to compare the performance of 4 commonly used Independent Component Analysis (ICA) algorithms: infomax, extended infomax, jade and fastICA in eliminating the CI artifact. As expected, two major components were detectable in the simulated datasets, a low frequency component at the modulation frequency and a pulsatile high frequency component related to the stimulation frequency. The first can be attributed to the phase-locked ASSR and the second to the stimulation artifact. Among the ICA algorithms tested, simulations showed that infomax was the most efficient and reliable in denoising the CI artifact-response mixture. Denoising algorithms can induce undesirable deformation of the signal of interest in real CI patient recordings. The proposed framework is a valuable tool for evaluating these algorithms in a controllable environment ahead of experimental or clinical applications.
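The two components the authors identify, a low-frequency component at the modulation frequency and a pulsatile component at the stimulation rate, can be illustrated with a toy two-source mixture. Everything below (rates, mixing weights, channel count) is hypothetical and is not the paper's neural-mass forward model:

```python
import math

fs, n = 2000, 2000                       # 1 s of signal at 2 kHz (assumed)
f_mod, f_stim = 40, 500                  # hypothetical modulation / pulse rates
# phase-locked ASSR: low-frequency component at the modulation frequency
assr = [math.sin(2 * math.pi * f_mod * i / fs) for i in range(n)]
# pulsatile CI artifact: one narrow pulse per stimulation cycle
artifact = [1.0 if math.sin(2 * math.pi * f_stim * i / fs) > 0.99 else 0.0
            for i in range(n)]
# two "scalp channels", each an instantaneous mix of the two sources
ch1 = [0.8 * a + 0.3 * b for a, b in zip(assr, artifact)]
ch2 = [0.4 * a + 0.9 * b for a, b in zip(assr, artifact)]

def power_at(x, f, fs):
    """Signal power at frequency f via a single-bin DFT projection."""
    c = sum(v * math.cos(2 * math.pi * f * i / fs) for i, v in enumerate(x))
    s = sum(v * math.sin(2 * math.pi * f * i / fs) for i, v in enumerate(x))
    return (c * c + s * s) / len(x) ** 2
```

Both spectral peaks appear in every channel, which is exactly why a separation step such as ICA is needed: the electrodes record the mixture, never the individual sources.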

  17. The effect of the inner-hair-cell mediated transduction on the shape of neural tuning curves

    NASA Astrophysics Data System (ADS)

    Altoè, Alessandro; Pulkki, Ville; Verhulst, Sarah

    2018-05-01

    The inner hair cells of the mammalian cochlea transform the vibrations of their stereocilia into releases of neurotransmitter at the ribbon synapses, thereby controlling the activity of the afferent auditory fibers. The mechanical-to-neural transduction is a highly nonlinear process and it introduces differences between the frequency-tuning of the stereocilia and that of the afferent fibers. Using a computational model of the inner hair cell that is based on in vitro data, we estimated that smaller vibrations of the stereocilia are necessary to drive the afferent fibers above threshold at low (≤0.5 kHz) than at high (≥4 kHz) driving frequencies. In the base of the cochlea, the transduction process affects the low-frequency tails of neural tuning curves. In particular, it introduces differences between the frequency-tuning of the stereocilia and that of the auditory fibers resembling those between basilar membrane velocity and auditory fibers tuning curves in the chinchilla base. For units with a characteristic frequency between 1 and 4 kHz, the transduction process yields shallower neural than stereocilia tuning curves as the characteristic frequency decreases. This study proposes that transduction contributes to the progressive broadening of neural tuning curves from the base to the apex.

  18. A bio-inspired auditory perception model for amplitude-frequency clustering (keynote Paper)

    NASA Astrophysics Data System (ADS)

    Arena, Paolo; Fortuna, Luigi; Frasca, Mattia; Ganci, Gaetana; Patane, Luca

    2005-06-01

    In this paper a model for auditory perception is introduced. The model is based on a network of integrate-and-fire and resonate-and-fire neurons and is aimed at controlling the phonotaxis behavior of a roving robot. The starting point is the model of phonotaxis in Gryllus bimaculatus: that model consists of four integrate-and-fire neurons and can discriminate the calling song of the male cricket and orient the robot towards the sound source. This paper extends the model to include amplitude-frequency clustering. The proposed spiking network shows different behaviors associated with different characteristics of the input signals (amplitude and frequency). The behavior implemented on the robot is similar to cricket behavior, in which some frequencies are associated with the calling song of male crickets, while others indicate the presence of predators. The whole model for auditory perception is therefore devoted to controlling different responses (attractive or repulsive) depending on the input characteristics. The performance of the control system was evaluated in several experiments carried out on a roving robot.
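As a sketch of the network's basic building block, a leaky integrate-and-fire unit can be written in a few lines; the time constant, threshold, and input below are illustrative, not the parameters of the cricket model.

```python
def lif_spike_times(input_current, dt=1e-4, tau=0.01, r=1.0,
                    v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron (forward-Euler integration): the
    membrane potential v relaxes toward r*I and, whenever it crosses
    the threshold, emits a spike and resets."""
    v, spikes = 0.0, []
    for i, current in enumerate(input_current):
        v += (dt / tau) * (-v + r * current)  # dv/dt = (-v + r*I) / tau
        if v >= v_thresh:
            spikes.append(i * dt)             # spike time in seconds
            v = v_reset
    return spikes
```

A constant suprathreshold input produces regular firing; the frequency selectivity used for calling-song discrimination would come from the resonate-and-fire units, which this sketch omits.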

  19. Acute effects and after-effects of acoustic coordinated reset neuromodulation in patients with chronic subjective tinnitus.

    PubMed

    Adamchic, Ilya; Toth, Timea; Hauptmann, Christian; Walger, Martin; Langguth, Berthold; Klingmann, Ingrid; Tass, Peter Alexander

    2017-01-01

    Chronic subjective tinnitus is an auditory phantom phenomenon characterized by abnormal neuronal synchrony in the central auditory system. As shown computationally, acoustic coordinated reset (CR) neuromodulation causes a long-lasting desynchronization of pathological synchrony by downregulating abnormal synaptic connectivity. In a previous proof of concept study acoustic CR neuromodulation, employing stimulation tone patterns tailored to the dominant tinnitus frequency, was compared to noisy CR-like stimulation, a CR version significantly detuned by sparing the tinnitus-related pitch range and including substantial random variability of the tone spacing on the frequency axis. Both stimulation protocols caused an acute relief as measured with visual analogue scale scores for tinnitus loudness (VAS-L) and annoyance (VAS-A) in the stimulation-ON condition (i.e. 15 min after stimulation onset), but only acoustic CR neuromodulation had sustained long-lasting therapeutic effects after 12 weeks of treatment as assessed with VAS-L, VAS-A scores and a tinnitus questionnaire (TQ) in the stimulation-OFF condition (i.e. with patients being off stimulation for at least 2.5 h). To understand the source of the long-lasting therapeutic effects, we here study whether acoustic CR neuromodulation has different electrophysiological effects on oscillatory brain activity as compared to noisy CR-like stimulation under stimulation-ON conditions and immediately after cessation of stimulation. To this end, we used a single-blind, single-application, crossover design in 18 patients with chronic tonal subjective tinnitus and administered three different 16-minute stimulation protocols: acoustic CR neuromodulation, noisy CR-like stimulation and low frequency range (LFR) stimulation, a CR type stimulation with deliberately detuned pitch and repetition rate of stimulation tones, as control stimulation.
We measured VAS-L and VAS-A scores together with spontaneous EEG activity pre-, during- and post-stimulation. Under stimulation-ON conditions acoustic CR neuromodulation and noisy CR-like stimulation had similar effects: a reduction of VAS-L and VAS-A scores together with a decrease of auditory delta power and an increase of auditory alpha and gamma power, without significant differences. In contrast, LFR stimulation had significantly weaker EEG effects and no significant clinical effects under stimulation-ON conditions. The distinguishing feature between acoustic CR neuromodulation and noisy CR-like stimulation were the electrophysiological after-effects. Acoustic CR neuromodulation caused the longest significant reduction of delta and gamma and increase of alpha power in the auditory cortex region. Noisy CR-like stimulation had weaker and LFR stimulation hardly any electrophysiological after-effects. This qualitative difference further supports the assertion that long-term effects of acoustic CR neuromodulation on tinnitus are mediated by a specific disruption of synchronous neural activity. Furthermore, our results indicate that acute electrophysiological after-effects might serve as a marker to further improve desynchronizing sound stimulation.
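The delta, alpha, and gamma power measures discussed here are band-limited spectral powers. A minimal sketch follows, assuming conventional band edges rather than the paper's exact analysis settings, and assuming the epoch spans a whole number of seconds so that integer frequencies fall on DFT bins:

```python
import math

def band_power(x, fs, f_lo, f_hi):
    """Approximate band power by summing single-bin DFT powers at the
    integer frequencies inside [f_lo, f_hi]."""
    n = len(x)
    total = 0.0
    for f in range(int(math.ceil(f_lo)), int(f_hi) + 1):
        c = sum(v * math.cos(2 * math.pi * f * i / fs) for i, v in enumerate(x))
        s = sum(v * math.sin(2 * math.pi * f * i / fs) for i, v in enumerate(x))
        total += (c * c + s * s) / n ** 2
    return total

# conventional EEG band edges in Hz (an assumption, not the paper's settings)
BANDS = {"delta": (1.0, 4.0), "alpha": (8.0, 12.0), "gamma": (30.0, 45.0)}
```

Applied pre-, during-, and post-stimulation, such band powers would let one quantify the delta decrease and alpha/gamma changes the abstract describes; production analyses would typically use Welch averaging over many epochs instead of a single DFT.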

  20. Influence of aging on human sound localization

    PubMed Central

    Dobreva, Marina S.; O'Neill, William E.

    2011-01-01

    Errors in sound localization, associated with age-related changes in peripheral and central auditory function, can pose threats to self and others in a commonly encountered environment such as a busy traffic intersection. This study aimed to quantify the accuracy and precision (repeatability) of free-field human sound localization as a function of advancing age. Head-fixed young, middle-aged, and elderly listeners localized band-passed targets using visually guided manual laser pointing in a darkened room. Targets were presented in the frontal field by a robotically controlled loudspeaker assembly hidden behind a screen. Broadband targets (0.1–20 kHz) activated all auditory spatial channels, whereas low-pass and high-pass targets selectively isolated interaural time and intensity difference cues (ITDs and IIDs) for azimuth and high-frequency spectral cues for elevation. In addition, to assess the upper frequency limit of ITD utilization across age groups more thoroughly, narrowband targets were presented at 250-Hz intervals from 250 Hz up to ∼2 kHz. Young subjects generally showed horizontal overestimation (overshoot) and vertical underestimation (undershoot) of auditory target location, and this effect varied with frequency band. Accuracy and/or precision worsened in older individuals for broadband, high-pass, and low-pass targets, reflective of peripheral but also central auditory aging. In addition, compared with young adults, middle-aged, and elderly listeners showed pronounced horizontal localization deficiencies (imprecision) for narrowband targets within 1,250–1,575 Hz, congruent with age-related central decline in auditory temporal processing. Findings underscore the distinct neural processing of the auditory spatial cues in sound localization and their selective deterioration with advancing age. PMID:21368004
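The interaural time difference cue isolated by the low-pass targets has a classic closed-form approximation, Woodworth's spherical-head formula; the head radius below is a conventional average, not a value from this study.

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c_m_s=343.0):
    """Woodworth's spherical-head approximation of the interaural time
    difference (in seconds) for a distant source at the given azimuth
    (0 deg = straight ahead, 90 deg = directly to one side)."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c_m_s) * (theta + math.sin(theta))
```

For a source at 90 deg this gives roughly 650 microseconds, the upper end of naturally occurring ITDs; as a tone's period shrinks toward that scale, the ITD cue becomes phase-ambiguous, which is one reason narrowband horizontal localization is probed near its upper frequency limit in this study.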

  1. Effect of hearing aids on auditory function in infants with perinatal brain injury and severe hearing loss.

    PubMed

    Moreno-Aguirre, Alma Janeth; Santiago-Rodríguez, Efraín; Harmony, Thalía; Fernández-Bouzas, Antonio

    2012-01-01

    Approximately 2-4% of newborns with perinatal risk factors present with hearing loss. Our aim was to analyze the effect of hearing aid use on auditory function evaluated based on otoacoustic emissions (OAEs), auditory brain responses (ABRs) and auditory steady state responses (ASSRs) in infants with perinatal brain injury and profound hearing loss. A prospective, longitudinal study of auditory function in infants with profound hearing loss. Right side hearing before and after hearing aid use was compared with left side hearing (not stimulated and used as control). All infants were subjected to OAE, ABR and ASSR evaluations before and after hearing aid use. The average ABR threshold decreased from 90.0 to 80.0 dB (p = 0.003) after six months of hearing aid use. In the left ear, which was used as a control, the ABR threshold decreased from 94.6 to 87.6 dB, which was not significant (p>0.05). In addition, the ASSR threshold in the 4000-Hz frequency decreased from 89 dB to 72 dB (p = 0.013) after six months of right ear hearing aid use; the other frequencies in the right ear and all frequencies in the left ear did not show significant differences in any of the measured parameters (p>0.05). OAEs were absent in the baseline test and showed no changes after hearing aid use in the right ear (p>0.05). This study provides evidence that early hearing aid use decreases the hearing threshold in ABR and ASSR assessments with no functional modifications in the auditory receptor, as evaluated by OAEs.

  2. Effect of Hearing Aids on Auditory Function in Infants with Perinatal Brain Injury and Severe Hearing Loss

    PubMed Central

    Moreno-Aguirre, Alma Janeth; Santiago-Rodríguez, Efraín; Harmony, Thalía; Fernández-Bouzas, Antonio

    2012-01-01

    Background Approximately 2–4% of newborns with perinatal risk factors present with hearing loss. Our aim was to analyze the effect of hearing aid use on auditory function evaluated based on otoacoustic emissions (OAEs), auditory brain responses (ABRs) and auditory steady state responses (ASSRs) in infants with perinatal brain injury and profound hearing loss. Methodology/Principal Findings A prospective, longitudinal study of auditory function in infants with profound hearing loss. Right side hearing before and after hearing aid use was compared with left side hearing (not stimulated and used as control). All infants were subjected to OAE, ABR and ASSR evaluations before and after hearing aid use. The average ABR threshold decreased from 90.0 to 80.0 dB (p = 0.003) after six months of hearing aid use. In the left ear, which was used as a control, the ABR threshold decreased from 94.6 to 87.6 dB, which was not significant (p>0.05). In addition, the ASSR threshold in the 4000-Hz frequency decreased from 89 dB to 72 dB (p = 0.013) after six months of right ear hearing aid use; the other frequencies in the right ear and all frequencies in the left ear did not show significant differences in any of the measured parameters (p>0.05). OAEs were absent in the baseline test and showed no changes after hearing aid use in the right ear (p>0.05). Conclusions/Significance This study provides evidence that early hearing aid use decreases the hearing threshold in ABR and ASSR assessments with no functional modifications in the auditory receptor, as evaluated by OAEs. PMID:22808289

  3. Neurophysiological mechanisms of cortical plasticity impairments in schizophrenia and modulation by the NMDA receptor agonist D-serine.

    PubMed

    Kantrowitz, Joshua T; Epstein, Michael L; Beggel, Odeta; Rohrig, Stephanie; Lehrfeld, Jonathan M; Revheim, Nadine; Lehrfeld, Nayla P; Reep, Jacob; Parker, Emily; Silipo, Gail; Ahissar, Merav; Javitt, Daniel C

    2016-12-01

    Schizophrenia is associated with deficits in cortical plasticity that affect sensory brain regions and lead to impaired cognitive performance. Here we examined underlying neural mechanisms of auditory plasticity deficits using combined behavioural and neurophysiological assessment, along with neuropharmacological manipulation targeted at the N-methyl-D-aspartate type glutamate receptor (NMDAR). Cortical plasticity was assessed in a cohort of 40 schizophrenia/schizoaffective patients relative to 42 healthy control subjects using a fixed reference tone auditory plasticity task. In a second cohort (n = 21 schizophrenia/schizoaffective patients, n = 13 healthy controls), event-related potential and event-related time-frequency measures of auditory dysfunction were assessed during administration of the NMDAR agonist d-serine. Mismatch negativity was used as a functional read-out of auditory-level function. Clinical trials registration numbers were NCT01474395/NCT02156908. Schizophrenia/schizoaffective patients showed significantly reduced auditory plasticity versus healthy controls (P = 0.001) that correlated with measures of cognitive, occupational and social dysfunction. In event-related potential/time-frequency analyses, patients showed highly significant reductions in sensory N1 that reflected underlying impairments in θ responses (P < 0.001), along with reduced θ and β-power modulation during retention and motor-preparation intervals. Repeated administration of d-serine led to intercorrelated improvements in (i) auditory plasticity (P < 0.001); (ii) θ-frequency response (P < 0.05); and (iii) mismatch negativity generation to trained versus untrained tones (P = 0.02). Schizophrenia/schizoaffective patients show highly significant deficits in auditory plasticity that contribute to cognitive, occupational and social dysfunction.
d-serine studies suggest first that NMDAR dysfunction may contribute to underlying cortical plasticity deficits and, second, that repeated NMDAR agonist administration may enhance cortical plasticity in schizophrenia. © The Author (2016). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  4. Development of the auditory system

    PubMed Central

    Litovsky, Ruth

    2015-01-01

    Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur naturally and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics as well as physiologic measurements and neural imaging. The auditory system constructs a perceptual space that takes information from objects, groups and segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear prolonged maturation of auditory development well into the teenage years. Maturation involves the auditory pathways; however, non-auditory changes (attention, memory, cognition) also play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity. PMID:25726262

  5. The hazard of exposure to impulse noise as a function of frequency, volume 2

    NASA Astrophysics Data System (ADS)

    Patterson, James H., Jr.; Carrier, Melvin, Jr.; Bordwell, Kevin; Lomba, Ilia M.; Gautier, Roger P.

    1991-06-01

    The energy spectrum of a noise is known to be an important variable in determining the effects of a traumatic exposure. However, existing criteria for exposure to impulse noise do not consider the frequency spectrum of an impulse as a variable in the evaluation of the hazards to the auditory system. This report presents the results of a study that was designed to determine the relative potential that impulsive energy concentrated at different frequencies has in causing auditory system trauma. One hundred and eighteen (118) chinchillas, divided into 20 groups with 5 to 7 animals per group, were used in these experiments. Pre- and post-exposure hearing thresholds were measured at 10 test frequencies between 0.125 and 8 kHz on each animal using avoidance conditioning procedures. Quantitative histology (cochleograms) was used to determine the extent and pattern of the sensory cell damage. The noise exposure stimuli consisted of six different computer-generated narrow band tone bursts having center frequencies located at 0.260, 0.775, 1.025, 1.350, 2.450, and 3.550 kHz. Each narrow band exposure stimulus was presented at two to four different intensities. An analysis of the audiometric and histological data allowed a frequency weighting function to be derived. The weighting function clearly demonstrates that equivalent amounts of impulsive energy concentrated at different frequencies are not equally hazardous to auditory function.

  6. Gender difference in the theta/alpha ratio during the induction of peaceful audiovisual modalities.

    PubMed

    Yang, Chia-Yen; Lin, Ching-Po

    2015-09-01

    Gender differences in emotional perception have been found in numerous psychological and psychophysiological studies. The diverse characteristics of the different sensory systems make it interesting to determine how cooperation and competition between modalities contribute to emotional experiences. We have previously estimated the bias from the matched attributes of auditory and visual modalities and revealed specific brain activity frequency patterns related to a peaceful mood. In that multimodality experiment, we focused on how inner-quiet information is processed in the human brain, and found evidence of auditory domination from the theta-band activity. However, a simple quantitative description of these frequency bands is lacking, and no studies have assessed the effects of peacefulness on the emotional state. Therefore, the aim of this study was to use magnetoencephalography to determine if gender differences exist (and when and where) in the frequency interactions underpinning the perception of peacefulness. This study provides evidence of auditory and visual domination in perceptual bias during multimodality processing of peaceful consciousness. The results of power ratio analyses suggest that the values of the theta/alpha ratio are associated with a modality as well as hemispheric asymmetries in the anterior-to-posterior direction, which shift from right to left with the auditory to visual stimulations in a peaceful mood. This means that the theta/alpha ratio might be useful for evaluating emotion. Moreover, the difference was found to be most pronounced for auditory domination and visual sensitivity in the female group.
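    The theta/alpha power ratio used in this record can be computed from simple band-power estimates. The sketch below is a minimal illustration under assumed conventional band edges (theta 4-8 Hz, alpha 8-13 Hz), not the study's actual MEG pipeline.

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    """Summed Welch PSD power in the [lo, hi) Hz band."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 2 * fs))
    band = (f >= lo) & (f < hi)
    return pxx[band].sum()

def theta_alpha_ratio(x, fs, theta=(4, 8), alpha=(8, 13)):
    """Ratio of theta-band to alpha-band power for one sensor's time series."""
    return band_power(x, fs, *theta) / band_power(x, fs, *alpha)

# Synthetic check: a dominant 6 Hz oscillation yields a ratio well above 1.
fs = 256
t = np.arange(fs * 4) / fs
sig = np.sin(2 * np.pi * 6 * t) + 0.1 * np.random.default_rng(0).standard_normal(len(t))
ratio = theta_alpha_ratio(sig, fs)
```

    A ratio above 1 indicates theta dominance; comparing such ratios across sensors and stimulation conditions is the kind of contrast the study reports.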

  7. Neural Representation of Concurrent Vowels in Macaque Primary Auditory Cortex

    PubMed Central

    Micheyl, Christophe; Steinschneider, Mitchell

    2016-01-01

    Successful speech perception in real-world environments requires that the auditory system segregate competing voices that overlap in frequency and time into separate streams. Vowels are major constituents of speech and are composed of frequencies (harmonics) that are integer multiples of a common fundamental frequency (F0). The pitch and identity of a vowel are determined by its F0 and spectral envelope (formant structure), respectively. When two spectrally overlapping vowels differing in F0 are presented concurrently, they can be readily perceived as two separate “auditory objects” with pitches at their respective F0s. A difference in pitch between two simultaneous vowels provides a powerful cue for their segregation, which in turn, facilitates their individual identification. The neural mechanisms underlying the segregation of concurrent vowels based on pitch differences are poorly understood. Here, we examine neural population responses in macaque primary auditory cortex (A1) to single and double concurrent vowels (/a/ and /i/) that differ in F0 such that they are heard as two separate auditory objects with distinct pitches. We find that neural population responses in A1 can resolve, via a rate-place code, lower harmonics of both single and double concurrent vowels. Furthermore, we show that the formant structures, and hence the identities, of single vowels can be reliably recovered from the neural representation of double concurrent vowels. We conclude that A1 contains sufficient spectral information to enable concurrent vowel segregation and identification by downstream cortical areas. PMID:27294198

  8. Did You Listen to the Beat? Auditory Steady-State Responses in the Human Electroencephalogram at 4 and 7 Hz Modulation Rates Reflect Selective Attention.

    PubMed

    Jaeger, Manuela; Bleichner, Martin G; Bauer, Anna-Katharina R; Mirkovic, Bojana; Debener, Stefan

    2018-02-27

    The acoustic envelope of human speech correlates with the syllabic rate (4-8 Hz) and carries important information for intelligibility, which is typically compromised in multi-talker, noisy environments. In order to better understand the dynamics of selective auditory attention to low frequency modulated sound sources, we conducted a two-stream auditory steady-state response (ASSR) selective attention electroencephalogram (EEG) study. The two streams consisted of 4 and 7 Hz amplitude and frequency modulated sounds presented from the left and right side. One of the two streams had to be attended while the other had to be ignored. The attended stream always contained a target, allowing for the behavioral confirmation of the attention manipulation. EEG ASSR power analysis revealed a significant increase in 7 Hz power for the attend compared to the ignore conditions. There was no significant difference in 4 Hz power when the 4 Hz stream had to be attended compared to when it had to be ignored. This lack of 4 Hz attention modulation could be explained by a distracting effect of a third frequency at 3 Hz (beat frequency) perceivable when the 4 and 7 Hz streams are presented simultaneously. Taken together, our results show that responses to low frequency modulations at the syllabic rate are modulated by selective spatial attention. Whether attention effects act as enhancement of the attended stream or suppression of the to-be-ignored stream may depend on how well auditory streams can be segregated.
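    Frequency-tagged ASSR power of the kind analyzed above is typically read out at the exact modulation frequencies in the EEG spectrum. The sketch below is an illustrative reconstruction with simulated data and an assumed 8-second epoch, not the study's analysis code.

```python
import numpy as np

def tagged_power(x, fs, freqs):
    """Spectral power at each tagging frequency, read from the rFFT bins."""
    spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    fbin = np.fft.rfftfreq(len(x), 1 / fs)
    return {f: spec[np.argmin(np.abs(fbin - f))] for f in freqs}

fs, dur = 250, 8                      # 8 s epoch -> 0.125 Hz bin spacing
t = np.arange(fs * dur) / fs
# Simulated steady-state response: tagged 4 and 7 Hz components plus noise.
eeg = 1.0 * np.sin(2 * np.pi * 4 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)
eeg += 0.2 * np.random.default_rng(1).standard_normal(len(t))
power = tagged_power(eeg, fs, [3, 4, 7])
```

    An attention effect would appear as a change in tagged-frequency power between attend and ignore conditions. Note that the 3 Hz beat discussed in the abstract arises from a nonlinear interaction in the auditory system, so it is absent from this linear superposition.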

  9. Textural timbre

    PubMed Central

    Hollins, Mark

    2009-01-01

    During haptic exploration of surfaces, complex mechanical oscillations—of surface displacement and air pressure—are generated, which are then transduced by receptors in the skin and in the inner ear. Tactile and auditory signals thus convey redundant information about texture, partially carried in the spectral content of these signals. It is no surprise, then, that the representation of temporal frequency is linked in the auditory and somatosensory systems. An emergent hypothesis is that there exists a supramodal representation of temporal frequency, and by extension texture. PMID:19721886

  10. NANOCI-Nanotechnology Based Cochlear Implant With Gapless Interface to Auditory Neurons.

    PubMed

    Senn, Pascal; Roccio, Marta; Hahnewald, Stefan; Frick, Claudia; Kwiatkowska, Monika; Ishikawa, Masaaki; Bako, Peter; Li, Hao; Edin, Fredrik; Liu, Wei; Rask-Andersen, Helge; Pyykkö, Ilmari; Zou, Jing; Mannerström, Marika; Keppner, Herbert; Homsy, Alexandra; Laux, Edith; Llera, Miguel; Lellouche, Jean-Paul; Ostrovsky, Stella; Banin, Ehud; Gedanken, Aharon; Perkas, Nina; Wank, Ute; Wiesmüller, Karl-Heinz; Mistrík, Pavel; Benav, Heval; Garnham, Carolyn; Jolly, Claude; Gander, Filippo; Ulrich, Peter; Müller, Marcus; Löwenheim, Hubert

    2017-09-01

    Cochlear implants (CI) restore functional hearing in the majority of deaf patients. Despite the tremendous success of these devices, some limitations remain. The bottleneck for optimal electrical stimulation with CI is caused by the anatomical gap between the electrode array and the auditory neurons in the inner ear. As a consequence, current devices are limited by (1) low frequency resolution, and hence suboptimal sound quality, and (2) large stimulation currents, and hence high energy consumption (responsible for significant battery costs and for impeding the development of fully implantable systems). A recently completed, multinational and interdisciplinary project called NANOCI aimed at overcoming current limitations by creating a gapless interface between auditory nerve fibers and the cochlear implant electrode array. This ambitious goal was achieved in vivo by neurotrophin-induced attraction of neurites through an intracochlear gel-nanomatrix onto a modified nanoCI electrode array located in the scala tympani of deafened guinea pigs. Functionally, the gapless interface led to lower stimulation thresholds and a larger dynamic range in vivo, and to reduced stimulation energy requirement (up to fivefold) in an in vitro model using auditory neurons cultured on multi-electrode arrays. In conclusion, the NANOCI project yielded proof of concept that a gapless interface between auditory neurons and cochlear implant electrode arrays is feasible. These findings may be of relevance for the development of future CI systems with better sound quality and performance and lower energy consumption. The present overview/review paper summarizes the NANOCI project history and highlights achievements of the individual work packages.

  11. Clinical significance and developmental changes of auditory-language-related gamma activity

    PubMed Central

    Kojima, Katsuaki; Brown, Erik C.; Rothermel, Robert; Carlson, Alanna; Fuerst, Darren; Matsuzaki, Naoyuki; Shah, Aashit; Atkinson, Marie; Basha, Maysaa; Mittal, Sandeep; Sood, Sandeep; Asano, Eishi

    2012-01-01

    OBJECTIVE We determined the clinical impact and developmental changes of auditory-language-related augmentation of gamma activity at 50–120 Hz recorded on electrocorticography (ECoG). METHODS We analyzed data from 77 epileptic patients ranging from 4 to 56 years of age. We determined the effects of seizure-onset zone, electrode location, and patient age upon gamma-augmentation elicited by an auditory-naming task. RESULTS Gamma-augmentation was less frequently elicited within seizure-onset sites compared to other sites. Regardless of age, gamma-augmentation most often involved the 80–100 Hz frequency band. Gamma-augmentation initially involved bilateral superior-temporal regions, followed by left-side dominant involvement in the middle-temporal, medial-temporal, inferior-frontal, dorsolateral-premotor, and medial-frontal regions and concluded with bilateral inferior-Rolandic involvement. Compared to younger patients, those older than 10 years had a larger proportion of left dorsolateral-premotor and right inferior-frontal sites showing gamma-augmentation. The incidence of a post-operative language deficit requiring speech therapy was predicted by the number of resected sites with gamma-augmentation in the superior-temporal, inferior-frontal, dorsolateral-premotor, and inferior-Rolandic regions of the left hemisphere assumed to contain essential language function (r2=0.59; p=0.001; odds ratio=6.04 [95% confidence-interval: 2.26 to 16.15]). CONCLUSIONS Auditory-language-related gamma-augmentation can provide additional information useful to localize the primary language areas. SIGNIFICANCE These results derived from a large sample of patients support the utility of auditory-language-related gamma-augmentation in presurgical evaluation. PMID:23141882

  12. Noise levels from toys and recreational articles for children and teenagers.

    PubMed

    Hellstrom, P A; Dengerink, H A; Axelsson, A

    1992-10-01

    This study examined the noise levels emitted by toys and recreational articles used by children and teenagers. The results indicate that many of the items tested emit sufficiently intense noise to be a source of noise-induced hearing loss in school-age children. While the baby toys produced noise exposure within the limits of national regulations, the noise they emit is most intense in a frequency range that corresponds to the resonance frequency of the external auditory canal of very young children. Hobby motors emit noise that may require protection depending upon the length of use. Fire-crackers and cap guns emit impulse noises that exceed even conservative standards for noise exposure.

  13. The Rhythm of Perception: Entrainment to Acoustic Rhythms Induces Subsequent Perceptual Oscillation.

    PubMed

    Hickok, Gregory; Farahbod, Haleh; Saberi, Kourosh

    2015-07-01

    Acoustic rhythms are pervasive in speech, music, and environmental sounds. Recent evidence for neural codes representing periodic information suggests that they may be a neural basis for the ability to detect rhythm. Further, rhythmic information has been found to modulate auditory-system excitability, which provides a potential mechanism for parsing the acoustic stream. Here, we explored the effects of a rhythmic stimulus on subsequent auditory perception. We found that a low-frequency (3 Hz), amplitude-modulated signal induces a subsequent oscillation of the perceptual detectability of a brief nonperiodic acoustic stimulus (1-kHz tone); the frequency but not the phase of the perceptual oscillation matches the entrained stimulus-driven rhythmic oscillation. This provides evidence that rhythmic contexts have a direct influence on subsequent auditory perception of discrete acoustic events. Rhythm coding is likely a fundamental feature of auditory-system design that predates the development of explicit human enjoyment of rhythm in music or poetry. © The Author(s) 2015.
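    An entraining stimulus of the kind described, a low-frequency amplitude-modulated signal followed by a brief tone probe, can be synthesized in a few lines. The carrier frequency, durations, and modulation depth below are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

FS = 44100  # audio sample rate (assumed)

def am_tone(carrier_hz, mod_hz, dur_s, depth=1.0, fs=FS):
    """Sinusoidally amplitude-modulated tone; modulation depth in [0, 1].
    The envelope is normalized so the output stays within [-1, 1]."""
    t = np.arange(int(fs * dur_s)) / fs
    env = 1.0 + depth * np.sin(2 * np.pi * mod_hz * t)
    return env / (1.0 + depth) * np.sin(2 * np.pi * carrier_hz * t)

entrainer = am_tone(1000, 3, 3.0)  # 3 s of 3 Hz AM (the entrainment phase)
# Brief nonperiodic probe: a 10 ms 1 kHz tone, as in the detection task.
probe = np.sin(2 * np.pi * 1000 * np.arange(int(FS * 0.01)) / FS)
```

    Sweeping the delay between the entrainer's offset and the probe at sub-period resolution is what reveals the 3 Hz oscillation in detectability reported above.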

  14. Sound texture perception via statistics of the auditory periphery: Evidence from sound synthesis

    PubMed Central

    McDermott, Josh H.; Simoncelli, Eero P.

    2014-01-01

    Rainstorms, insect swarms, and galloping horses produce “sound textures” – the collective result of many similar acoustic events. Sound textures are distinguished by temporal homogeneity, suggesting they could be recognized with time-averaged statistics. To test this hypothesis, we processed real-world textures with an auditory model containing filters tuned for sound frequencies and their modulations, and measured statistics of the resulting decomposition. We then assessed the realism and recognizability of novel sounds synthesized to have matching statistics. Statistics of individual frequency channels, capturing spectral power and sparsity, generally failed to produce compelling synthetic textures. However, combining them with correlations between channels produced identifiable and natural-sounding textures. Synthesis quality declined if statistics were computed from biologically implausible auditory models. The results suggest that sound texture perception is mediated by relatively simple statistics of early auditory representations, presumably computed by downstream neural populations. The synthesis methodology offers a powerful tool for their further investigation. PMID:21903084
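    The statistics described above (per-channel power, sparsity, and cross-channel correlations) can be sketched with a simple filterbank. This is a drastically simplified stand-in for the full auditory texture model, with assumed band edges and fourth-moment kurtosis as the sparsity measure.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def texture_stats(x, fs, bands):
    """Per-band envelope power and sparsity (kurtosis), plus cross-band
    envelope correlations, for a bank of Butterworth bandpass channels."""
    envs = []
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        envs.append(np.abs(hilbert(sosfilt(sos, x))))   # Hilbert envelope
    envs = np.asarray(envs)
    centered = envs - envs.mean(axis=1, keepdims=True)
    power = envs.var(axis=1)
    kurtosis = (centered ** 4).mean(axis=1) / power ** 2   # sparsity proxy
    corr = np.corrcoef(envs)                               # cross-channel
    return power, kurtosis, corr

fs = 16000
texture = np.random.default_rng(0).standard_normal(fs)     # 1 s noise "texture"
power, kurt, corr = texture_stats(texture, fs, [(100, 400), (400, 1600), (1600, 6400)])
```

    Synthesis then amounts to iteratively adjusting a noise seed until its statistics match a target recording's; per the abstract, matching only the per-channel statistics without the cross-channel correlations generally failed to produce compelling textures.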

  15. Equivalent mismatch negativity deficits across deviant types in early illness schizophrenia-spectrum patients.

    PubMed

    Hay, Rachel A; Roach, Brian J; Srihari, Vinod H; Woods, Scott W; Ford, Judith M; Mathalon, Daniel H

    2015-02-01

    Neurophysiological abnormalities in auditory deviance processing, as reflected by the mismatch negativity (MMN), have been observed across the course of schizophrenia. Studies in early schizophrenia patients have typically shown varying degrees of MMN amplitude reduction for different deviant types, suggesting that different auditory deviants are uniquely processed and may be differentially affected by duration of illness. To explore this further, we examined the MMN response to 4 auditory deviants (duration, frequency, duration+frequency "double deviant", and intensity) in 24 schizophrenia-spectrum patients early in the illness (ESZ) and 21 healthy controls. ESZ showed significantly reduced MMN relative to healthy controls for all deviant types (p<0.05), with no significant interaction with deviant type. No correlations with clinical symptoms were present (all ps>0.05). These findings support the conclusion that neurophysiological mechanisms underlying processing of auditory deviants are compromised early in illness, and these deficiencies are not specific to the type of deviant presented. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Dynamic crossmodal links revealed by steady-state responses in auditory-visual divided attention.

    PubMed

    de Jong, Ritske; Toffanin, Paolo; Harbers, Marten

    2010-01-01

    Frequency tagging has often been used to study intramodal attention but not intermodal attention. We used EEG and simultaneous frequency tagging of auditory and visual sources to study intermodal focused and divided attention in detection and discrimination performance. Divided-attention costs were smaller, but still significant, in detection than in discrimination. The auditory steady-state response (SSR) showed no effects of attention at frontocentral locations, but did so at occipital locations where it was evident only when attention was divided between audition and vision. Similarly, the visual SSR at occipital locations was substantially enhanced when attention was divided across modalities. Both effects were equally present in detection and discrimination. We suggest that both effects reflect a common cause: an attention-dependent influence of auditory information processing on early cortical stages of visual information processing, mediated by enhanced effective connectivity between the two modalities under conditions of divided attention. Copyright (c) 2009 Elsevier B.V. All rights reserved.

  17. Speech Rhythms and Multiplexed Oscillatory Sensory Coding in the Human Brain

    PubMed Central

    Gross, Joachim; Hoogenboom, Nienke; Thut, Gregor; Schyns, Philippe; Panzeri, Stefano; Belin, Pascal; Garrod, Simon

    2013-01-01

    Cortical oscillations are likely candidates for segmentation and coding of continuous speech. Here, we monitored continuous speech processing with magnetoencephalography (MEG) to unravel the principles of speech segmentation and coding. We demonstrate that speech entrains the phase of low-frequency (delta, theta) and the amplitude of high-frequency (gamma) oscillations in the auditory cortex. Phase entrainment is stronger in the right and amplitude entrainment is stronger in the left auditory cortex. Furthermore, edges in the speech envelope phase reset auditory cortex oscillations thereby enhancing their entrainment to speech. This mechanism adapts to the changing physical features of the speech envelope and enables efficient, stimulus-specific speech sampling. Finally, we show that within the auditory cortex, coupling between delta, theta, and gamma oscillations increases following speech edges. Importantly, all couplings (i.e., brain-speech and also within the cortex) attenuate for backward-presented speech, suggesting top-down control. We conclude that segmentation and coding of speech relies on a nested hierarchy of entrained cortical oscillations. PMID:24391472
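    Coupling between low-frequency phase and high-frequency amplitude, as reported here, is commonly quantified with a mean-vector-length modulation index. The sketch below is one standard (Canolty-style) formulation, not the paper's exact method, and the band edges are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def bandpass(x, fs, lo, hi):
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfilt(sos, x)

def pac_mvl(x, fs, phase_band=(4, 8), amp_band=(30, 80)):
    """Mean-vector-length phase-amplitude coupling index: magnitude of the
    gamma-amplitude-weighted mean of the theta phase vector."""
    phase = np.angle(hilbert(bandpass(x, fs, *phase_band)))
    amp = np.abs(hilbert(bandpass(x, fs, *amp_band)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

# Coupled test signal: gamma bursts locked to the theta peak.
fs = 500
t = np.arange(fs * 4) / fs
theta = np.sin(2 * np.pi * 6 * t)
coupled = theta + 0.5 * (1 + theta) * np.sin(2 * np.pi * 40 * t)
uncoupled = theta + 0.5 * np.sin(2 * np.pi * 40 * t)
```

    A coupled signal yields a larger index than one whose gamma amplitude is independent of theta phase, which is the contrast the speech-edge analysis above relies on.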

  18. Rapid change in articulatory lip movement induced by preceding auditory feedback during production of bilabial plosives.

    PubMed

    Mochida, Takemi; Gomi, Hiroaki; Kashino, Makio

    2010-11-08

    There has been plentiful evidence of kinesthetically induced rapid compensation for unanticipated perturbation in speech articulatory movements. However, the role of auditory information in stabilizing articulation has been little studied except for the control of voice fundamental frequency, voice amplitude and vowel formant frequencies. Although the influence of auditory information on the articulatory control process is evident in unintended speech errors caused by delayed auditory feedback, the direct and immediate effect of auditory alteration on the movements of articulators has not been clarified. This work examined whether temporal changes in the auditory feedback of bilabial plosives immediately affects the subsequent lip movement. We conducted experiments with an auditory feedback alteration system that enabled us to replace or block speech sounds in real time. Participants were asked to produce the syllable /pa/ repeatedly at a constant rate. During the repetition, normal auditory feedback was interrupted, and one of three pre-recorded syllables /pa/, /Φa/, or /pi/, spoken by the same participant, was presented once at a different timing from the anticipated production onset, while no feedback was presented for subsequent repetitions. Comparisons of the labial distance trajectories under altered and normal feedback conditions indicated that the movement quickened during the short period immediately after the alteration onset, when /pa/ was presented 50 ms before the expected timing. Such change was not significant under other feedback conditions we tested. The earlier articulation rapidly induced by the progressive auditory input suggests that a compensatory mechanism helps to maintain a constant speech rate by detecting errors between the internally predicted and actually provided auditory information associated with self movement. 
The timing- and context-dependent effects of feedback alteration suggest that sensory error detection operates in a temporally asymmetric window in which acoustic features of the syllable to be produced may be coded.

  19. Perspectives on the Pure-Tone Audiogram.

    PubMed

    Musiek, Frank E; Shinn, Jennifer; Chermak, Gail D; Bamiou, Doris-Eva

    The pure-tone audiogram, though fundamental to audiology, presents limitations, especially in the case of central auditory involvement. Advances in auditory neuroscience underscore the considerably larger role of the central auditory nervous system (CANS) in hearing and related disorders. Given the availability of behavioral audiological tests and electrophysiological procedures that can provide better insights as to the function of the various components of the auditory system, this perspective piece reviews the limitations of the pure-tone audiogram and notes some of the advantages of other tests and procedures used in tandem with the pure-tone threshold measurement. To review and synthesize the literature regarding the utility and limitations of the pure-tone audiogram in determining dysfunction of peripheral sensory and neural systems, as well as the CANS, and to identify other tests and procedures that can supplement pure-tone thresholds and provide enhanced diagnostic insight, especially regarding problems of the central auditory system. A systematic review and synthesis of the literature. The authors independently searched and reviewed literature (journal articles, book chapters) pertaining to the limitations of the pure-tone audiogram. The pure-tone audiogram provides information as to hearing sensitivity across a selected frequency range. Normal or near-normal pure-tone thresholds sometimes are observed despite cochlear damage. There are a surprising number of patients with acoustic neuromas who have essentially normal pure-tone thresholds. In cases of central deafness, depressed pure-tone thresholds may not accurately reflect the status of the peripheral auditory system. Listening difficulties are seen in the presence of normal pure-tone thresholds. Suprathreshold procedures and a variety of other tests can provide information regarding other and often more central functions of the auditory system. 
The audiogram is a primary tool for determining type, degree, and configuration of hearing loss; however, it provides the clinician with information regarding only hearing sensitivity, and no information about central auditory processing or the auditory processing of real-world signals (i.e., speech, music). The pure-tone audiogram offers limited insight into functional hearing and should be viewed only as a test of hearing sensitivity. Given the limitations of the pure-tone audiogram, a brief overview is provided of available behavioral tests and electrophysiological procedures that are sensitive to the function and integrity of the central auditory system, which provide better diagnostic and rehabilitative information to the clinician and patient. American Academy of Audiology

  20. Low power adder based auditory filter architecture.

    PubMed

    Rahiman, P F Khaleelur; Jayanthi, V S

    2014-01-01

    Cochlear implant devices are battery powered and should have a long working life so that they do not need replacement every few years; hence, devices with low power consumption are required. These devices contain numerous filters, each responsible for a particular frequency band, which helps in identifying speech signals across the audible range. In this paper, a multiplierless lookup table (LUT) based auditory filter is implemented. Power-aware adder architectures are utilized to add the output samples of the LUT, available at every clock cycle. The design is developed and modeled using Verilog HDL, simulated using the Mentor Graphics ModelSim simulator, and synthesized using the Synopsys Design Compiler tool. The design was mapped to the TSMC 65 nm technological node. The standard ASIC design methodology was adopted to carry out the power analysis. The proposed FIR filter architecture reduced leakage power by 15% and increased performance by 2.76%.
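    The multiplierless idea, precomputing coefficient-sample products so that each output needs only table look-ups and additions, can be sketched as a software behavioral model. The 8-bit signed samples and integer taps below are assumptions for illustration, not the paper's Verilog design.

```python
import numpy as np

def lut_fir(x_q, coeffs_q):
    """Multiplierless FIR behavioral model: one product table per tap,
    indexed by the quantized input value; each output is formed with
    look-ups and adds only (no run-time multiplies)."""
    levels = range(-128, 128)                       # 8-bit signed sample values
    lut = {c: {v: c * v for v in levels} for c in coeffs_q}
    y = []
    for n in range(len(x_q)):
        acc = 0
        for k, c in enumerate(coeffs_q):
            if n - k >= 0:
                acc += lut[c][int(x_q[n - k])]      # table look-up + add
        y.append(acc)
    return np.array(y)

x = np.array([10, -5, 3, 0, 7], dtype=np.int8)
h = (2, -1, 4)                                      # illustrative integer taps
y = lut_fir(x, h)                                   # matches np.convolve(x, h)[:len(x)]
```

    In hardware, the same trick replaces multipliers with small ROMs; the remaining cost is the adder tree, which is exactly where the power-aware adder architectures described above apply.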

  1. Vocal development and auditory perception in CBA/CaJ mice

    NASA Astrophysics Data System (ADS)

    Radziwon, Kelly E.

    Mice are useful laboratory subjects because of their small size, their modest cost, and the fact that researchers have created many different strains to study a variety of disorders. In particular, researchers have found nearly 100 naturally occurring mouse mutations with hearing impairments. For these reasons, mice have become an important model for studies of human deafness. Although much is known about the genetic makeup and physiology of the laboratory mouse, far less is known about mouse auditory behavior. To fully understand the effects of genetic mutations on hearing, it is necessary to determine the hearing abilities of these mice. Two experiments here examined various aspects of mouse auditory perception using CBA/CaJ mice, a commonly used mouse strain. The frequency difference limens experiment tested the mouse's ability to discriminate one tone from another based solely on the frequency of the tone. The mice had thresholds similar to those of wild mice and gerbils but needed a larger change in frequency than humans and cats. The second psychoacoustic experiment sought to determine which cue, frequency or duration, was more salient when the mice had to identify various tones. In this identification task, the mice overwhelmingly classified the tones based on frequency instead of duration, suggesting that mice use frequency when differentiating one mouse vocalization from another. The other two experiments were more naturalistic and involved both auditory perception and mouse vocal production. Interest in mouse vocalizations is growing because of the potential for mice to become a model of human speech disorders. These experiments traced mouse vocal development from infant to adult, and they tested the mouse's preference for various vocalizations. This was the first known study to analyze the vocalizations of individual mice across development.
Results showed large variation in calling rates among the three cages of adult mice but results were highly consistent across all infant vocalizations. Although the preference experiment did not reveal significant differences between various mouse vocalizations, suggestions are given for future attempts to identify mouse preferences for auditory stimuli.

  2. A Comparison of Persian Vowel Production in Hearing-Impaired Children Using a Cochlear Implant and Normal-Hearing Children.

    PubMed

    Jafari, Narges; Drinnan, Michael; Mohamadi, Reyhane; Yadegari, Fariba; Nourbakhsh, Mandana; Torabinezhad, Farhad

    2016-05-01

    Normal-hearing (NH) acuity and auditory feedback control are crucial for human voice production and articulation. The lack of auditory feedback in individuals with profound hearing impairment changes their vowel production. The purpose of this study was to compare Persian vowel production in deaf children with cochlear implants (CIs) and that in NH children. The participants were 20 children (12 girls and 8 boys) with an age range of 5 years 1 month to 9 years. All patients had congenital hearing loss and had received a multichannel CI at an average age of 3 years. They had at least 6 months of experience with their current device (CI). The control group consisted of 20 NH children (12 girls and 8 boys) aged 5 to 9 years. The two groups were matched by age. Participants were native Persian speakers who were asked to produce the vowels /i/, /e/, /ӕ/, /u/, /o/, and /a/. The averages for the first formant frequency (F1) and second formant frequency (F2) of the six vowels were measured using Praat software (Version 5.1.44, Boersma & Weenink, 2012). The independent samples t test was conducted to assess the differences in F1 and F2 values and the area of the vowel space between the two groups. Mean values of F1 were increased in the CI children; the mean values of F1 for the vowels /i/ and /a/ and of F2 for the vowels /a/ and /o/ were significantly different (P < 0.05). The changes in F1 and F2 showed a centralized vowel space for the CI children. F1 is increased in CI children, probably because CI children tend to overarticulate. We hypothesize that this is due to a lack of auditory feedback: hearing-impaired children attempt to compensate via proprioceptive feedback during the articulatory process. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
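    The vowel-space area comparison above reduces to the area of a polygon in the (F2, F1) plane. A shoelace-formula sketch follows; the corner-vowel formant values are hypothetical round numbers, not measurements from the study.

```python
def vowel_space_area(formants):
    """Polygon area (shoelace formula) of (F2, F1) vertices given in order
    around the vowel space; result is in Hz^2."""
    n = len(formants)
    twice_area = 0.0
    for i in range(n):
        f2a, f1a = formants[i]
        f2b, f1b = formants[(i + 1) % n]
        twice_area += f2a * f1b - f2b * f1a
    return abs(twice_area) / 2.0

# Illustrative corner-vowel formants in Hz for /i/, /a/, /u/ (hypothetical).
corner = [(2300, 300), (1200, 800), (800, 320)]
area = vowel_space_area(corner)   # 364000.0 Hz^2
```

    A centralized vowel space, as reported for the CI group, shows up directly as a smaller area under this measure.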

  3. A prediction of templates in the auditory cortex system

    NASA Astrophysics Data System (ADS)

    Ghanbeigi, Kimia

    In this study, the variation of human auditory evoked mismatch field amplitudes in response to complex tones, as a function of the removal of single partials in the onset period, was investigated. It was determined that: (1) elimination of a single frequency in a sound stimulus plays a significant role in sound recognition by the human brain; (2) by comparing the mismatches of the brain response to a single frequency elimination in the "Starting Transient" and the "Sustained Part" of the sound stimulus, the brain is found to be more sensitive to frequency elimination in the Starting Transient. This study involved 4 healthy subjects with normal hearing. Neural activity was recorded with whole-head MEG. Localization of the responses to the auditory cortex was verified by comparison with MRI images. In the first set of stimuli, rare ('deviant') tones were randomly embedded in a string of repetitive ('standard') tones with five selected onset frequencies, with randomly varying inter-stimulus intervals. In the deviant tones, one of the frequency components was omitted relative to the standard tones during the onset period. The frequency of the test partial of the complex tone was intentionally selected to preclude its reinsertion by generation of harmonics or combination tones due to the nonlinearity of the ear, the electronic equipment, or brain processing. In the second set of stimuli, time-structured as above, the deviant tones had one of five selected sustained frequency components omitted relative to the standard tones. The same considerations for selecting the test frequency partial were applied in both measurements. Results.
By comparing the MMN of the two data sets, the relative contribution to sound recognition of the omitted partial frequency components in the onset and sustained regions was determined. Conclusion. The presence of significant mismatch negativity, due to neural activity of the auditory cortex, shows that the brain recognizes the elimination of a single frequency among carefully chosen anharmonic frequencies. This mismatch is more significant when the single frequency elimination occurs in the onset period.

  4. Why is auditory frequency weighting so important in regulation of underwater noise?

    PubMed

    Tougaard, Jakob; Dähne, Michael

    2017-10-01

    A key question related to regulating noise from pile driving, air guns, and sonars is how to take into account the hearing abilities of different animals by means of auditory frequency weighting. Recordings of pile driving sounds, both in the presence and absence of a bubble curtain, were evaluated against recent thresholds for temporary threshold shift (TTS) for harbor porpoises by means of four different weighting functions. The assessed effectiveness, expressed as time until TTS, depended strongly on the choice of weighting function: 2 orders of magnitude larger for an audiogram-weighted TTS criterion relative to an unweighted criterion, highlighting the importance of selecting the right frequency weighting.

  5. The Contribution of Brainstem and Cerebellar Pathways to Auditory Recognition

    PubMed Central

    McLachlan, Neil M.; Wilson, Sarah J.

    2017-01-01

    The cerebellum has been known to play an important role in motor functions for many years. More recently its role has been expanded to include a range of cognitive and sensory-motor processes, and substantial neuroimaging and clinical evidence now points to cerebellar involvement in most auditory processing tasks. In particular, an increase in the size of the cerebellum over recent human evolution has been attributed in part to the development of speech. Despite this, the auditory cognition literature has largely overlooked afferent auditory connections to the cerebellum that have been implicated in acoustically conditioned reflexes in animals, and could subserve speech and other auditory processing in humans. This review expands our understanding of auditory processing by incorporating cerebellar pathways into the anatomy and functions of the human auditory system. We reason that plasticity in the cerebellar pathways underpins implicit learning of spectrotemporal information necessary for sound and speech recognition. Once learnt, this information supports automatic recognition of incoming auditory signals and prediction of likely subsequent information based on previous experience. Since sound recognition processes involving the brainstem and cerebellum initiate early in auditory processing, learnt information stored in cerebellar memory templates could then support a range of auditory processing functions such as streaming, habituation, the integration of auditory feature information such as pitch, and the recognition of vocal communications. PMID:28373850

  6. Encoding of a spectrally-complex communication sound in the bullfrog's auditory nerve.

    PubMed

    Schwartz, J J; Simmons, A M

    1990-02-01

    1. A population study of eighth nerve responses in the bullfrog, Rana catesbeiana, was undertaken to analyze how the eighth nerve codes the complex spectral and temporal structure of the species-specific advertisement call over a biologically realistic range of intensities. Synthetic advertisement calls were generated by Fourier synthesis and presented to individual eighth nerve fibers of anesthetized bullfrogs. Fiber responses were analyzed by calculating rate responses based on post-stimulus-time (PST) histograms and temporal responses based on Fourier transforms of period histograms. 2. At stimulus intensities of 70 and 80 dB SPL, normalized rate responses provide a fairly good representation of the complex spectral structure of the stimulus, particularly in the low- and mid-frequency range. At higher intensities, rate responses saturate, and very little of the spectral structure of the complex stimulus can be seen in the profile of rate responses of the population. 3. Both amphibian papilla (AP) and basilar papilla (BP) fibers phase-lock strongly to the fundamental (100 Hz) of the complex stimulus. These effects are relatively resistant to changes in stimulus intensity. Only a small number of fibers synchronize to the low-frequency spectral energy in the stimulus. The underlying spectral complexity of the stimulus is not accurately reflected in the timing of fiber firing, presumably because firing is 'captured' by the fundamental frequency. 4. Plots of average localized synchronized rate (ALSR), which combine both spectral and temporal information, show a similar, low-pass shape at all stimulus intensities. ALSR plots do not generally provide an accurate representation of the structure of the advertisement call. 5. The data suggest that anuran peripheral auditory fibers may be particularly sensitive to the amplitude envelope of sounds.
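    The phase-locking described in point 3 is conventionally quantified by vector strength (the synchronization coefficient of Goldberg and Brown), which is closely related to the period-histogram Fourier analysis used in the study. A minimal sketch with synthetic spike times (illustrative, not the study's data):

```python
import math

def vector_strength(spike_times, freq):
    """Goldberg-Brown vector strength: degree of phase-locking of
    spike times (in seconds) to a sinusoid of frequency `freq` (Hz).
    1.0 = perfect phase-locking, ~0 = no synchronization."""
    n = len(spike_times)
    x = sum(math.cos(2 * math.pi * freq * t) for t in spike_times)
    y = sum(math.sin(2 * math.pi * freq * t) for t in spike_times)
    return math.hypot(x, y) / n

# Spikes locked to every cycle of a 100 Hz fundamental:
locked = [i * 0.01 for i in range(100)]
print(round(vector_strength(locked, 100.0), 3))  # 1.0
```

    A fiber "captured" by the 100 Hz fundamental would show high vector strength at 100 Hz and low values at the other partials of the call.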

  7. Noise-induced tinnitus: auditory evoked potential in symptomatic and asymptomatic patients.

    PubMed

    Santos-Filha, Valdete Alves Valentins dos; Samelli, Alessandra Giannella; Matas, Carla Gentile

    2014-07-01

    We evaluated the central auditory pathways in workers with noise-induced tinnitus with normal hearing thresholds, compared the auditory brainstem response results in groups with and without tinnitus and correlated the tinnitus location to the auditory brainstem response findings in individuals with a history of occupational noise exposure. Sixty individuals participated in the study and the following procedures were performed: anamnesis, immittance measures, pure-tone air conduction thresholds at all frequencies between 0.25 and 8 kHz, and auditory brainstem response. The mean auditory brainstem response latencies were lower in the Control group than in the Tinnitus group, but no significant differences between the groups were observed. Qualitative analysis showed more alterations in the lower brainstem in the Tinnitus group. The strongest relationship between tinnitus location and auditory brainstem response alterations was detected in individuals with bilateral tinnitus and bilateral auditory brainstem response alterations compared with patients with unilateral alterations. Our findings suggest the occurrence of a possible dysfunction in the central auditory nervous system (brainstem) in individuals with noise-induced tinnitus and a normal hearing threshold.

  8. Gene expression underlying enhanced, steroid-dependent auditory sensitivity of hair cell epithelium in a vocal fish.

    PubMed

    Fergus, Daniel J; Feng, Ni Y; Bass, Andrew H

    2015-10-14

    Successful animal communication depends on a receiver's ability to detect a sender's signal. Exemplars of adaptive sender-receiver coupling include acoustic communication, often important in the context of seasonal reproduction. During the reproductive summer season, both male and female midshipman fish (Porichthys notatus) exhibit similar increases in the steroid-dependent frequency sensitivity of the saccule, the main auditory division of the inner ear. This form of auditory plasticity enhances detection of the higher frequency components of the multi-harmonic, long-duration advertisement calls produced repetitively by males during summer nights of peak vocal and spawning activity. The molecular basis of this seasonal auditory plasticity has not been fully resolved. Here, we utilize an unbiased transcriptomic RNA sequencing approach to identify differentially expressed transcripts within the saccule's hair cell epithelium of reproductive summer and non-reproductive winter fish. We assembled 74,027 unique transcripts from our saccular epithelial sequence reads. Of these, 6.4 % and 3.0 % were upregulated in the reproductive and non-reproductive saccular epithelium, respectively. Gene ontology (GO) term enrichment analyses of the differentially expressed transcripts showed that the reproductive saccular epithelium was transcriptionally, translationally, and metabolically more active than the non-reproductive epithelium. Furthermore, the expression of a specific suite of candidate genes, including ion channels and components of steroid-signaling pathways, was upregulated in the reproductive compared to the non-reproductive saccular epithelium. We found reported auditory functions for 14 candidate genes upregulated in the reproductive midshipman saccular epithelium, 8 of which are enriched in mouse hair cells, validating their hair cell-specific functions across vertebrates. 
We identified a suite of differentially expressed genes belonging to neurotransmission and steroid-signaling pathways, consistent with previous work showing the importance of these characters in regulating hair cell auditory sensitivity in midshipman fish and, more broadly, vertebrates. The results were also consistent with auditory hair cells being generally more physiologically active when animals are in a reproductive state, a time of enhanced sensory-motor coupling between the auditory periphery and the upper harmonics of vocalizations. Together with several new candidate genes, our results identify discrete patterns of gene expression linked to frequency- and steroid-dependent plasticity of hair cell auditory sensitivity.

  9. Design of Alarm Sound of Home Care Equipment Based on Age-related Auditory Sense

    NASA Astrophysics Data System (ADS)

    Shibano, Jun-Ichi; Tadano, Shigeru; Kaneko, Hirotaka

    A wide variety of home care equipment has been developed to support the independent lifestyle and care of elderly persons. Almost all of this equipment has an alarm designed to alert a caregiver or to sound a warning in case of an emergency. Because human senses physiologically weaken and deteriorate with aging, each alarm sound must be designed to account for the full range of elderly persons' hearing loss. Since the alarms are usually heard indoors, it is also necessary to evaluate the relationship between the basic characteristics of the sounds and the layout of the living area. In this study, we investigated the sounds of various alarms of home care equipment based on both the age-related hearing characteristics of elderly persons and the indoor propagation properties of the sounds. The results showed that the hearing of elderly persons is best suited to sounds with frequencies from 700 Hz to 1 kHz, and that the indoor absorption ratio of sound is smallest at 1 kHz. A frequency of 1 kHz is therefore well suited for the alarm sound of home care equipment. A flow chart for designing the alarm sound of home care equipment was proposed, taking into account the extent of age-related deterioration of the auditory sense.

  10. Modelling of human low frequency sound localization acuity demonstrates dominance of spatial variation of interaural time difference and suggests uniform just-noticeable differences in interaural time difference.

    PubMed

    Smith, Rosanna C G; Price, Stephen R

    2014-01-01

    Sound source localization is critical to animal survival and for identification of auditory objects. We investigated the acuity with which humans localize low frequency, pure tone sounds using timing differences between the ears. These small differences in time, known as interaural time differences or ITDs, are identified in a manner that allows localization acuity of around 1° at the midline. Acuity, a relative measure of localization ability, displays a non-linear variation as sound sources are positioned more laterally. All species studied localize sounds best at the midline and progressively worse as the sound is located out towards the side. To understand why sound localization displays this variation with azimuthal angle, we took a first-principles, systemic, analytical approach to model localization acuity. We calculated how ITDs vary with sound frequency, head size and sound source location for humans. This allowed us to model ITD variation for previously published experimental acuity data and determine the distribution of just-noticeable differences in ITD. Our results suggest that the best-fit model is one whereby just-noticeable differences in ITDs are identified with uniform or close to uniform sensitivity across the physiological range. We discuss how our results have several implications for neural ITD processing in different species as well as development of the auditory system.
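    The azimuth dependence of ITD that underlies this kind of modelling can be sketched with the classic Woodworth spherical-head approximation, ITD(θ) = (r/c)(θ + sin θ). The head radius and sound speed below are illustrative assumptions, not the paper's fitted values:

```python
import math

def itd_woodworth(azimuth_deg, head_radius=0.0875, c=343.0):
    """Interaural time difference (s) under the Woodworth spherical-head
    approximation: ITD = (r/c) * (theta + sin(theta)), with azimuth
    theta in radians measured from the midline."""
    theta = math.radians(azimuth_deg)
    return (head_radius / c) * (theta + math.sin(theta))

# ITD changes fastest near the midline, so a uniform just-noticeable
# difference in ITD maps to the finest angular acuity at 0 degrees:
d_mid = itd_woodworth(1.0) - itd_woodworth(0.0)
d_side = itd_woodworth(76.0) - itd_woodworth(75.0)
print(d_mid > d_side)  # True
```

    This is exactly the geometry the paper exploits: if the ITD just-noticeable difference is uniform, acuity must degrade laterally simply because each degree of azimuth buys less ITD away from the midline.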

  11. Stimulus change detection in phasic auditory units in the frog midbrain: frequency and ear specific adaptation.

    PubMed

    Ponnath, Abhilash; Hoke, Kim L; Farris, Hamilton E

    2013-04-01

    Neural adaptation, a reduction in the response to a maintained stimulus, is an important mechanism for detecting stimulus change. Contributing to change detection is the fact that adaptation is often stimulus specific: adaptation to a particular stimulus reduces excitability to a specific subset of stimuli, while the ability to respond to other stimuli is unaffected. Phasic cells (e.g., cells responding to stimulus onset) are good candidates for detecting the most rapid changes in natural auditory scenes, as they exhibit fast and complete adaptation to an initial stimulus presentation. We made recordings of single phasic auditory units in the frog midbrain to determine if adaptation was specific to stimulus frequency and ear of input. In response to an instantaneous frequency step in a tone, 28% of phasic cells exhibited frequency specific adaptation based on a relative frequency change (delta-f=±16%). Frequency specific adaptation was not limited to frequency steps, however, as adaptation was also overcome during continuous frequency modulated stimuli and in response to spectral transients interrupting tones. The results suggest that adaptation is separated for peripheral (e.g., frequency) channels. This was tested directly using dichotic stimuli. In 45% of binaural phasic units, adaptation was ear specific: adaptation to stimulation of one ear did not affect responses to stimulation of the other ear. Thus, adaptation exhibited specificity for stimulus frequency and lateralization at the level of the midbrain. This mechanism could be employed to detect rapid stimulus change within and between sound sources in complex acoustic environments.

  13. The influence of cochlear spectral processing on the timing and amplitude of the speech-evoked auditory brain stem response

    PubMed Central

    Nuttall, Helen E.; Moore, David R.; Barry, Johanna G.; Krumbholz, Katrin

    2015-01-01

    The speech-evoked auditory brain stem response (speech ABR) is widely considered to provide an index of the quality of neural temporal encoding in the central auditory pathway. The aim of the present study was to evaluate the extent to which the speech ABR is shaped by spectral processing in the cochlea. High-pass noise masking was used to record speech ABRs from delimited octave-wide frequency bands between 0.5 and 8 kHz in normal-hearing young adults. The latency of the frequency-delimited responses decreased from the lowest to the highest frequency band by up to 3.6 ms. The observed frequency-latency function was compatible with model predictions based on wave V of the click ABR. The frequency-delimited speech ABR amplitude was largest in the 2- to 4-kHz frequency band and decreased toward both higher and lower frequency bands despite the predominance of low-frequency energy in the speech stimulus. We argue that the frequency dependence of speech ABR latency and amplitude results from the decrease in cochlear filter width with decreasing frequency. The results suggest that the amplitude and latency of the speech ABR may reflect interindividual differences in cochlear, as well as central, processing. The high-pass noise-masking technique provides a useful tool for differentiating between peripheral and central effects on the speech ABR. It can be used for further elucidating the neural basis of the perceptual speech deficits that have been associated with individual differences in speech ABR characteristics. PMID:25787954

  14. Modeling complex tone perception: grouping harmonics with combination-sensitive neurons.

    PubMed

    Medvedev, Andrei V; Chiao, Faye; Kanwal, Jagmeet S

    2002-06-01

    Perception of complex communication sounds is a major function of the auditory system. To create a coherent percept of these sounds, the auditory system may instantaneously group or bind multiple harmonics within complex sounds. This perception strategy simplifies further processing of complex sounds and facilitates their meaningful integration with other sensory inputs. Based on experimental data and a realistic model, we propose that associative learning of combinations of harmonic frequencies, and nonlinear facilitation of responses to those combinations (also referred to as "combination-sensitivity"), are important for spectral grouping. For our model, we simulated combination sensitivity using Hebbian and associative types of synaptic plasticity in auditory neurons. We also provided a parallel tonotopic input that converges and diverges within the network. Neurons in higher-order layers of the network exhibited an emergent property of multifrequency tuning that is consistent with experimental findings. Furthermore, this network had the capacity to "recognize" the pitch or fundamental frequency of a harmonic tone complex even when the fundamental frequency itself was missing.
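    A toy version of the Hebbian association step can illustrate how co-occurring harmonics become bound into a combination-sensitive representation. This sketch is far simpler than the paper's multilayer tonotopic network; the channel indices and parameters are illustrative:

```python
def train_hebbian(patterns, n_channels=10, lr=0.1, epochs=50):
    """Hebbian outer-product learning of channel co-activation weights:
    w[i][j] grows whenever tonotopic channels i and j fire together."""
    w = [[0.0] * n_channels for _ in range(n_channels)]
    for _ in range(epochs):
        for x in patterns:
            for i in range(n_channels):
                for j in range(n_channels):
                    if i != j:
                        w[i][j] += lr * x[i] * x[j]
    return w

# Harmonics of one fundamental co-activate channels 2 and 3
# (e.g. 200 and 300 Hz partials of a 100 Hz complex):
pattern = [0.0] * 10
pattern[2] = pattern[3] = 1.0
w = train_hebbian([pattern])

# The co-trained harmonic pair is bound more strongly than an
# untrained pair, the basis of combination-sensitive facilitation:
print(w[2][3] > w[2][5])  # True
```

    In the full model, such learned combination weights let higher-order units respond to the harmonic pattern as a whole, including recovering the missing fundamental.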

  15. Brain-stem evoked potentials and noise effects in seagulls.

    PubMed

    Counter, S A

    1985-01-01

    Brain-stem auditory evoked potentials (BAEP) recorded from the seagull were large-amplitude, short-latency, vertex-positive deflections which originate in the eighth nerve and several brain-stem nuclei. BAEP waveforms were similar in latency and configuration to those reported for certain other lower vertebrates and some mammals. BAEP recorded at several pure tone frequencies throughout the seagull's auditory spectrum showed an area of heightened auditory sensitivity between 1 and 3 kHz. This range was also found to be the primary bandwidth of the vocalization output of young seagulls. Masking by white noise and pure tones had remarkable effects on several parameters of the BAEP. In general, the tone- and click-induced BAEP were either reduced or obliterated by both pure tone and white noise maskers at specific signal-to-noise ratios and high intensity levels. The masking effects observed in this study may be related to the manner in which seagulls respond to intense environmental noise. One possible conclusion is that intense environmental noise, such as aircraft engine noise, may severely alter the seagull's localization apparatus and induce sonogenic stress, both of which could cause collisions with low-flying aircraft.

  16. Fitting prelingually deafened adult cochlear implant users based on electrode discrimination performance.

    PubMed

    Debruyne, Joke A; Francart, Tom; Janssen, A Miranda L; Douma, Kim; Brokx, Jan P L

    2017-03-01

    This study investigated the hypotheses that (1) prelingually deafened CI users do not have perfect electrode discrimination ability and (2) the deactivation of non-discriminable electrodes can improve auditory performance. Electrode discrimination difference limens were determined for all electrodes of the array. The subjects' basic map was subsequently compared to an experimental map, which contained only discriminable electrodes, with respect to speech understanding in quiet and in noise, listening effort, spectral ripple discrimination and subjective appreciation. Subjects were six prelingually deafened, late implanted adults using the Nucleus cochlear implant. Electrode discrimination difference limens across all subjects and electrodes ranged from 0.5 to 7.125, with significantly larger limens for basal electrodes. No significant differences were found between the basic map and the experimental map on auditory tests. Subjective appreciation was found to be significantly poorer for the experimental map. Prelingually deafened CI users were unable to discriminate between all adjacent electrodes. There was no difference in auditory performance between the basic and experimental map. Potential factors contributing to the absence of improvement with the experimental map include the reduced number of maxima, incomplete adaptation to the new frequency allocation, and the mainly basal location of deactivated electrodes.

  17. Apparatus for providing sensory substitution of force feedback

    NASA Technical Reports Server (NTRS)

    Massimino, Michael J. (Inventor); Sheridan, Thomas B. (Inventor)

    1995-01-01

    A feedback apparatus for an operator to control an effector that is remote from the operator to interact with a remote environment has a local input device to be manipulated by the operator. Sensors in the effector's environment are capable of sensing the amplitude of forces arising between the effector and its environment, the direction of application of such forces, or both amplitude and direction. A feedback signal corresponding to such a component of the force is generated and transmitted to the environment of the operator. The signal is transduced into an auditory sensory substitution signal to which the operator is sensitive. Sound production apparatus present the auditory signal to the operator. The full range of the force amplitude may be represented by a single audio speaker. Auditory display elements may be stereo headphones or free-standing audio speakers, numbering from one to many. The location of the application of the force may also be specified by the location of the audio speakers that generate signals corresponding to specific forces. Alternatively, the location may be specified by the frequency of an audio signal, or by the apparent location of an audio signal as simulated by a combination of signals originating at different locations.

  18. Clinical outcomes of scala vestibuli cochlear implantation in children with partial labyrinthine ossification.

    PubMed

    Lin, Yung-Song

    2009-03-01

    Cochlear implantation via the scala vestibuli is a viable approach in those with ossification in the scala tympani. With extended cochlear implant experience, there is no significant difference in the mapping parameters and auditory performance between those implanted via the scala vestibuli and via the scala tympani. To assess the clinical outcomes of cochlear implantation via the scala vestibuli: in a cohort follow-up study, 11 prelingually deafened children who received cochlear implantation between the ages of 3 and 10 years through the scala vestibuli served as participants. The mapping parameters (i.e. comfortable level (C), threshold level (T), dynamic range) and auditory performance of each participant were evaluated following initial cochlear implant stimulation, then at 3 month intervals for 2 years, then semi-annually. The follow-up period lasted 9 years 9 months on average, with a minimum of 8 years 3 months. The clinical results of the mapping parameters and auditory performance of children implanted via the scala vestibuli were comparable to those of children implanted via the scala tympani. No balance problem was reported by any of these patients. One child exhibited residual low frequency hearing after implantation.

  19. Strength of German accent under altered auditory feedback

    PubMed Central

    HOWELL, PETER; DWORZYNSKI, KATHARINA

    2007-01-01

    Borden’s (1979, 1980) hypothesis that speakers with vulnerable speech systems rely more heavily on feedback monitoring than do speakers with less vulnerable systems was investigated. The second language (L2) of a speaker is vulnerable, in comparison with the native language, so alteration to feedback should have a detrimental effect on it, according to this hypothesis. Here, we specifically examined whether altered auditory feedback has an effect on accent strength when speakers speak L2. There were three stages in the experiment. First, 6 German speakers who were fluent in English (their L2) were recorded under six conditions—normal listening, amplified voice level, voice shifted in frequency, delayed auditory feedback, and slowed and accelerated speech rate conditions. Second, judges were trained to rate accent strength. Training was assessed by whether it was successful in separating German speakers speaking English from native English speakers, also speaking English. In the final stage, the judges ranked recordings of each speaker from the first stage as to increasing strength of German accent. The results show that accents were more pronounced under frequency-shifted and delayed auditory feedback conditions than under normal or amplified feedback conditions. Control tests were done to ensure that listeners were judging accent, rather than fluency changes caused by altered auditory feedback. The findings are discussed in terms of Borden’s hypothesis and other accounts about why altered auditory feedback disrupts speech control. PMID:11414137

  20. Cortico-Cortical Connectivity Within Ferret Auditory Cortex.

    PubMed

    Bizley, Jennifer K; Bajo, Victoria M; Nodal, Fernando R; King, Andrew J

    2015-10-15

    Despite numerous studies of auditory cortical processing in the ferret (Mustela putorius), very little is known about the connections between the different regions of the auditory cortex that have been characterized cytoarchitectonically and physiologically. We examined the distribution of retrograde and anterograde labeling after injecting tracers into one or more regions of ferret auditory cortex. Injections of different tracers at frequency-matched locations in the core areas, the primary auditory cortex (A1) and anterior auditory field (AAF), of the same animal revealed the presence of reciprocal connections with overlapping projections to and from discrete regions within the posterior pseudosylvian and suprasylvian fields (PPF and PSF), suggesting that these connections are frequency specific. In contrast, projections from the primary areas to the anterior dorsal field (ADF) on the anterior ectosylvian gyrus were scattered and non-overlapping, consistent with the non-tonotopic organization of this field. The relative strength of the projections originating in each of the primary fields differed, with A1 predominantly targeting the posterior bank fields PPF and PSF, which in turn project to the ventral posterior field, whereas AAF projects more heavily to the ADF, which then projects to the anteroventral field and the pseudosylvian sulcal cortex. These findings suggest that parallel anterior and posterior processing networks may exist, although the connections between different areas often overlap and interactions were present at all levels. © 2015 Wiley Periodicals, Inc.

  1. Comparison of auditory stream segregation in sighted and early blind individuals.

    PubMed

    Boroujeni, Fatemeh Moghadasi; Heidari, Fatemeh; Rouzbahani, Masoumeh; Kamali, Mohammad

    2017-01-18

    An important characteristic of the auditory system is the capacity to analyze complex sounds and make decisions on the source of the constituent parts of these sounds. Blind individuals compensate for the lack of visual information through increased input from other sensory modalities, including increased auditory information. The purpose of the current study was to compare the fission boundary (FB) threshold of sighted and early blind individuals across spectral dimensions using a psychoacoustic auditory stream segregation (ASS) test. This study was conducted on 16 sighted and 16 early blind adults. The stimuli were pure tones A and B presented sequentially in a triplet ABA-ABA pattern at an intensity of 40 dB SL. The tone A frequency was set at base values of 500, 1000, and 2000 Hz. Tone B was presented at frequencies 4-100% above the base tone frequency. Blind individuals had significantly lower FB thresholds than sighted people. The FB was independent of the frequency of tone A when expressed as a difference in the number of equivalent rectangular bandwidths (ERBs). Early blindness may increase perceptual separation of acoustic stimuli to form accurate representations of the world.
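    Expressing the fission boundary in ERB units, as done above, relies on the standard Glasberg and Moore ERB-rate scale. A short sketch of the conversion; the task parameters are taken from the abstract, and the scale constants are the standard published ones:

```python
import math

def erb_number(f_hz):
    """ERB-rate scale (Glasberg & Moore, 1990): number of equivalent
    rectangular bandwidths below frequency f_hz (in 'Cams')."""
    return 21.4 * math.log10(4.37 * f_hz / 1000.0 + 1.0)

def delta_erb(f_a, percent_above):
    """ERB-number separation between tone A and a tone B placed
    `percent_above` percent higher in frequency, as in the ASS task."""
    f_b = f_a * (1.0 + percent_above / 100.0)
    return erb_number(f_b) - erb_number(f_a)

# The same percentage separation at the three base frequencies used
# in the study maps onto broadly similar ERB-number separations:
for f_a in (500.0, 1000.0, 2000.0):
    print(f_a, round(delta_erb(f_a, 10.0), 2))
```

    A fission boundary that is constant on this ERB-number scale, rather than as a fixed percentage, is what the abstract means by FB being independent of the tone A frequency.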

  2. Impaired movement timing in neurological disorders: rehabilitation and treatment strategies.

    PubMed

    Hove, Michael J; Keller, Peter E

    2015-03-01

    Timing abnormalities have been reported in many neurological disorders, including Parkinson's disease (PD). In PD, motor-timing impairments are especially debilitating in gait. Despite impaired audiomotor synchronization, PD patients' gait improves when they walk with an auditory metronome or with music. Building on that research, we make recommendations for optimizing sensory cues to improve the efficacy of rhythmic cuing in gait rehabilitation. Adaptive rhythmic metronomes (that synchronize with the patient's walking) might be especially effective. In a recent study we showed that adaptive metronomes synchronized consistently with PD patients' footsteps without requiring attention; this improved stability and reinstated healthy gait dynamics. Other strategies could help optimize sensory cues for gait rehabilitation. Groove music strongly engages the motor system and induces movement; bass-frequency tones are associated with movement and provide strong timing cues. Thus, groove and bass-frequency pulses could deliver potent rhythmic cues. These strategies capitalize on the close neural connections between auditory and motor networks; and auditory cues are typically preferred. However, moving visual cues greatly improve visuomotor synchronization and could warrant examination in gait rehabilitation. Together, a treatment approach that employs groove, auditory, bass-frequency, and adaptive (GABA) cues could help optimize rhythmic sensory cues for treating motor and timing deficits. © 2014 New York Academy of Sciences.

  3. Effect of the loss of auditory feedback on segmental parameters of vowels of postlingually deafened speakers.

    PubMed

    Schenk, Barbara S; Baumgartner, Wolf Dieter; Hamzavi, Jafar Sasan

    2003-12-01

    The most obvious and best documented changes in the speech of postlingually deafened speakers are in rate, fundamental frequency, and volume (energy). These changes are due to the lack of auditory feedback. But auditory feedback affects not only the suprasegmental parameters of speech. The aim of this study was to determine the change at the segmental level of speech in terms of vowel formants. Twenty-three postlingually deafened and 18 normally hearing speakers were recorded reading a German text. The frequencies of the first and second formants and the vowel spaces of selected vowels in a word-in-context condition were compared. All first formant frequencies (F1) of the postlingually deafened speakers were significantly different from those of the normally hearing speakers. The values of F1 were higher for the vowels /e/ (418±61 Hz compared with 359±52 Hz, P=0.006) and /o/ (459±58 Hz compared with 390±45 Hz, P=0.0003) and lower for /a/ (765±115 Hz compared with 851±146 Hz, P=0.038). The second formant frequency (F2) showed a significant difference only for the vowel /e/ (2016±347 Hz compared with 2279±250 Hz, P=0.012). The postlingually deafened speakers were divided into two subgroups according to duration of deafness (shorter/longer than 10 years). There was no significant difference in formant changes between the two groups. Our report demonstrates an effect of auditory feedback on segmental features of speech in postlingually deafened people as well.

  4. Assessing the Underwater Acoustics of the World's Largest Vibration Hammer (OCTA-KONG) and Its Potential Effects on the Indo-Pacific Humpbacked Dolphin (Sousa chinensis)

    PubMed Central

    Wang, Zhitao; Wu, Yuping; Duan, Guoqin; Cao, Hanjiang; Liu, Jianchang; Wang, Kexiong; Wang, Ding

    2014-01-01

    Anthropogenic noise in aquatic environments is a worldwide concern due to its potential adverse effects on the environment and aquatic life. The Hongkong-Zhuhai-Macao Bridge is currently under construction in the Pearl River Estuary, a hot spot for the Indo-Pacific humpbacked dolphin (Sousa chinensis) in China. The OCTA-KONG, the world's largest vibration hammer, is being used during this construction project to drive or extract steel shell piles 22 m in diameter. This activity poses a substantial threat to marine mammals, and an environmental assessment is critically needed. The underwater acoustic properties of the OCTA-KONG were analyzed, and the potential impacts of the underwater acoustic energy on Sousa, including auditory masking and physiological impacts, were assessed. The fundamental frequency of the OCTA-KONG vibration ranged from 15 Hz to 16 Hz, and the noise increments were below 20 kHz, with a dominant frequency and energy below 10 kHz. The resulting sounds are most likely detectable by Sousa over distances of up to 3.5 km from the source. Although Sousa clicks do not appear to be adversely affected, Sousa whistles are susceptible to auditory masking, which may negatively impact this species' social life. Therefore, a safety zone with a radius of 500 m is proposed. Although the zero-to-peak source level (SL) of the OCTA-KONG was lower than the physiological damage level, the maximum root-mean-square SL exceeded the cetacean safety exposure level on several occasions. Moreover, the majority of the unweighted cumulative source sound exposure levels (SSELs) and the cetacean auditory weighted cumulative SSELs exceeded the acoustic threshold levels for the onset of temporary threshold shift, a type of potentially recoverable auditory damage resulting from prolonged sound exposure. These findings may aid in the identification and design of appropriate mitigation methods, such as the use of air bubble curtains, “soft start” and “power down” techniques. PMID:25338113
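    The cumulative sound exposure levels (SSELs) discussed above combine level and exposure duration. For a steady source, the standard relation is SEL_cum = SPL_rms + 10·log10(T / 1 s), in dB re 1 µPa²·s. A hedged sketch; the numbers below are hypothetical illustrations, not values measured in the study:

```python
import math

def sel_cumulative(spl_rms_db: float, duration_s: float) -> float:
    """Cumulative sound exposure level (dB re 1 uPa^2*s) for a steady
    source, assuming a constant rms level over the whole exposure:
    SEL_cum = SPL_rms + 10 * log10(T / 1 s)."""
    return spl_rms_db + 10.0 * math.log10(duration_s)

# Hypothetical example (not from the study): a 170 dB re 1 uPa rms
# level sustained over one hour of continuous vibration driving.
print(f"{sel_cumulative(170.0, 3600.0):.1f} dB re 1 uPa^2*s")
```

    This additivity over duration is why prolonged exposure can cross temporary-threshold-shift criteria even when the instantaneous level stays below physiological damage levels.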

  6. Auditory brainstem responses predict auditory nerve fiber thresholds and frequency selectivity in hearing impaired chinchillas

    PubMed Central

    Henry, Kenneth S.; Kale, Sushrut; Scheidt, Ryan E.; Heinz, Michael G.

    2011-01-01

    Non-invasive auditory brainstem responses (ABRs) are commonly used to assess cochlear pathology in both clinical and research environments. In the current study, we evaluated the relationship between ABR characteristics and more direct measures of cochlear function. We recorded ABRs and auditory nerve (AN) single-unit responses in seven chinchillas with noise induced hearing loss. ABRs were recorded for 1–8 kHz tone burst stimuli both before and several weeks after four hours of exposure to a 115 dB SPL, 50 Hz band of noise with a center frequency of 2 kHz. Shifts in ABR characteristics (threshold, wave I amplitude, and wave I latency) following hearing loss were compared to AN-fiber tuning curve properties (threshold and frequency selectivity) in the same animals. As expected, noise exposure generally resulted in an increase in ABR threshold and decrease in wave I amplitude at equal SPL. Wave I amplitude at equal sensation level (SL), however, was similar before and after noise exposure. In addition, noise exposure resulted in decreases in ABR wave I latency at equal SL and, to a lesser extent, at equal SPL. The shifts in ABR characteristics were significantly related to AN-fiber tuning curve properties in the same animal at the same frequency. Larger shifts in ABR thresholds and ABR wave I amplitude at equal SPL were associated with greater AN threshold elevation. Larger reductions in ABR wave I latency at equal SL, on the other hand, were associated with greater loss of AN frequency selectivity. This result is consistent with linear systems theory, which predicts shorter time delays for broader peripheral frequency tuning. Taken together with other studies, our results affirm that ABR thresholds and wave I amplitude provide useful estimates of cochlear sensitivity. Furthermore, comparisons of ABR wave I latency to normative data at the same SL may prove useful for detecting and characterizing loss of cochlear frequency selectivity. PMID:21699970

  7. Development of the acoustically evoked behavioral response in larval plainfin midshipman fish, Porichthys notatus.

    PubMed

    Alderks, Peter W; Sisneros, Joseph A

    2013-01-01

    The ontogeny of hearing in fishes has become a major interest among bioacoustics researchers studying fish behavior and sensory ecology. Most fish begin to detect acoustic stimuli during the larval stage, which can be important for navigation, predator avoidance, and settlement; however, relatively little is known about the hearing capabilities of larval fishes. We characterized the acoustically evoked behavioral response (AEBR) in the plainfin midshipman fish, Porichthys notatus, and used this innate startle-like response to characterize the species' auditory capability during larval development. Age and size of larval midshipman were highly correlated (r² = 0.92). The AEBR was first observed in larvae at 1.4 cm TL. At a size ≥ 1.8 cm TL, all larvae responded to a broadband stimulus of 154 dB re 1 µPa or -15.2 dB re 1 g (z-axis). The lowest AEBR thresholds were 140-150 dB re 1 µPa or -33 to -23 dB re 1 g for frequencies below 225 Hz. Larval fish in the 1.9-2.4 cm TL size range had significantly lower best evoked frequencies than the other tested size groups. We also investigated the development of the lateral line organ and its function in mediating the AEBR. The lateral line organ is likely involved in mediating the AEBR but is not necessary to evoke the startle-like response. The midshipman auditory and lateral line systems are functional during early development, when the larvae are in the nest, and the auditory system appears to have similar tuning characteristics throughout all life history stages.

  8. Extraction of Inter-Aural Time Differences Using a Spiking Neuron Network Model of the Medial Superior Olive.

    PubMed

    Encke, Jörg; Hemmert, Werner

    2018-01-01

    The mammalian auditory system is able to extract temporal and spectral features from the sound signals at the two ears. One important cue for localizing low-frequency sound sources in the horizontal plane is the inter-aural time difference (ITD), which is first analyzed in the medial superior olive (MSO) in the brainstem. Neural recordings of ITD tuning curves at various stages along the auditory pathway suggest that ITDs in the mammalian brainstem are not represented in the form of a Jeffress-type place code. An alternative is the hemispheric opponent-channel code, according to which ITDs are encoded as the difference in the responses of the MSO nuclei in the two hemispheres. In this study, we present a physiologically plausible, spiking neuron network model of the mammalian MSO circuit and apply two different methods of extracting ITDs from arbitrary sound signals. The network model is driven by a functional model of the auditory periphery and physiological models of the cochlear nucleus and the MSO. Using a linear opponent-channel decoder, we show that the network is able to detect changes in ITD with a precision down to 10 μs and that the sensitivity of the decoder depends on the slope of the ITD-rate functions. A second approach uses an artificial neural network to predict ITDs directly from the spiking output of the MSO and ANF models. Using this predictor, we show that the MSO network is able to reliably encode static and time-dependent ITDs over a large frequency range, including for complex signals like speech.
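    The opponent-channel idea can be illustrated with a toy readout: two mirrored ITD-rate functions whose left-minus-right difference is monotonic in ITD and can be inverted by table lookup. This is a minimal sketch under assumed sigmoidal tuning (all parameters are invented for illustration), not the paper's spiking network or its fitted decoder:

```python
import numpy as np

def mso_rate(itd_us: np.ndarray, best_itd_us: float = 200.0,
             slope_us: float = 150.0) -> np.ndarray:
    """Illustrative sigmoidal ITD-rate function for one hemispheric
    MSO channel (parameter values are assumptions)."""
    return 1.0 / (1.0 + np.exp(-(itd_us - best_itd_us) / slope_us))

def decode_itd(rate_left: float, rate_right: float,
               grid_us: np.ndarray) -> float:
    """Opponent-channel readout: invert the left-minus-right rate
    difference via lookup on a grid of candidate ITDs."""
    diff = mso_rate(grid_us) - mso_rate(-grid_us)  # monotonic in ITD
    return float(np.interp(rate_left - rate_right, diff, grid_us))

# A stimulus with a 120 us ITD drives the two mirrored channels;
# the decoder recovers the ITD from their rate difference.
grid = np.linspace(-500.0, 500.0, 2001)
true_itd = 120.0
rl = float(mso_rate(np.array([true_itd]))[0])
rr = float(mso_rate(np.array([-true_itd]))[0])
decoded = decode_itd(rl, rr, grid)
```

    As in the abstract, the decoder's sensitivity here depends on the slope of the ITD-rate functions: a steeper sigmoid makes the rate difference change faster per microsecond of ITD.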

  9. Generalization of conditioned suppression during salicylate-induced phantom auditory perception in rats.

    PubMed

    Brennan, J F; Jastreboff, P J

    1991-01-01

    Tonal frequency generalization was examined in a total of 114 pigmented male rats, 60 of which were tested under the influence of salicylate-induced phantom auditory perception introduced before or after lick-suppression training. Thirty control subjects received saline injections, and the remaining 24 subjects served as noninjected controls for tonal background effects on generalization. Rats were continuously exposed to background noise alone or with a superimposed tone. Offset of the background noise alone (Experiment I), or combined with onset or continuation of the tone (Experiments II and III), served as the conditioned stimulus (CS). In Experiment I, tone presentations were introduced only after suppression training. Depending on the time of salicylate introduction, a strong and differential influence on generalization gradients was observed, consistent with subjects' detection of a salicylate-induced, high-pitched sound. Moreover, when either 12- or 3-kHz tones were introduced before or after Pavlovian training to mimic salicylate effects in 24 rats, the distortions in generalization gradients resembled the trends obtained from the respective salicylate-injected groups. Experiments II and III were aimed at evaluating the masking effect of salicylate-induced phantom auditory perception on external sounds, with a 5- or a 10-kHz tone imposed continuously on the noise or presented only during the CS. Tests of tonal generalization to frequencies ranging from 4 to 11 kHz showed that in this experimental context salicylate-induced perception did not interfere with the dominant influence of external tones, a result that further strengthens the conclusion of Experiment I.

  10. Noninvasive in vivo imaging reveals differences between tectorial membrane and basilar membrane traveling waves in the mouse cochlea

    PubMed Central

    Lee, Hee Yoon; Raphael, Patrick D.; Park, Jesung; Ellerbee, Audrey K.; Applegate, Brian E.; Oghalai, John S.

    2015-01-01

    Sound is encoded within the auditory portion of the inner ear, the cochlea, after propagating down its length as a traveling wave. For over half a century, vibratory measurements to study cochlear traveling waves have been made using invasive approaches such as laser Doppler vibrometry. Although these studies have provided critical information regarding the nonlinear processes within the living cochlea that increase the amplitude of vibration and sharpen frequency tuning, the data have typically been limited to point measurements of basilar membrane vibration. In addition, opening the cochlea may alter its function and affect the findings. Here we describe volumetric optical coherence tomography vibrometry, a technique that overcomes these limitations by providing depth-resolved displacement measurements at 200 kHz inside a 3D volume of tissue with picometer sensitivity. We studied the mouse cochlea by imaging noninvasively through the surrounding bone to measure sound-induced vibrations of the sensory structures in vivo, and report, to our knowledge, the first measures of tectorial membrane vibration within the unopened cochlea. We found that the tectorial membrane sustains traveling wave propagation. Compared with basilar membrane traveling waves, tectorial membrane traveling waves have larger dynamic ranges, sharper frequency tuning, and apically shifted positions of peak vibration. These findings explain discrepancies between previously published basilar membrane vibration and auditory nerve single unit data. Because the tectorial membrane directly overlies the inner hair cell stereociliary bundles, these data provide the most accurate characterization of the stimulus shaping the afferent auditory response available to date. PMID:25737536

  11. Performance of normal adults and children on central auditory diagnostic tests and their corresponding visual analogs.

    PubMed

    Bellis, Teri James; Ross, Jody

    2011-09-01

    It has been suggested that, in order to validate a diagnosis of (C)APD (central auditory processing disorder), testing using direct cross-modal analogs should be performed to demonstrate that deficits exist solely or primarily in the auditory modality (McFarland and Cacace, 1995; Cacace and McFarland, 2005). This modality-specific viewpoint is controversial and not universally accepted (American Speech-Language-Hearing Association [ASHA], 2005; Musiek et al, 2005). Further, no such analogs have been developed to date, and neither the feasibility of such testing in normally functioning individuals nor the concurrent validity of cross-modal analogs has been established. The purpose of this study was to investigate the feasibility of cross-modal testing by examining the performance of normal adults and children on four tests of central auditory function and their corresponding visual analogs. In addition, this study investigated the degree to which concurrent validity of auditory and visual versions of these tests could be demonstrated. An experimental repeated measures design was employed. Participants consisted of two groups (adults, n=10; children, n=10) with normal and symmetrical hearing sensitivity, normal or corrected-to-normal visual acuity, and no family or personal history of auditory/otologic, language, learning, neurologic, or related disorders. Visual analogs of four tests in common clinical use for the diagnosis of (C)APD were developed (Dichotic Digits [Musiek, 1983]; Frequency Patterns [Pinheiro and Ptacek, 1971]; Duration Patterns [Pinheiro and Musiek, 1985]; and the Random Gap Detection Test [RGDT; Keith, 2000]). Participants underwent two 1 hr test sessions separated by at least 1 wk. Order of sessions (auditory, visual) and tests within each session were counterbalanced across participants. 
ANOVAs (analyses of variance) were used to examine effects of group, modality, and laterality (for the Dichotic/Dichoptic Digits tests) or response condition (for the auditory and visual Frequency Patterns and Duration Patterns tests). Pearson product-moment correlations were used to investigate relationships between auditory and visual performance. Adults performed significantly better than children on the Dichotic/Dichoptic Digits tests. Results also revealed a significant effect of modality, with auditory better than visual, and a significant modality×laterality interaction, with a right-ear advantage seen for the auditory task and a left-visual-field advantage seen for the visual task. For the Frequency Patterns test and its visual analog, results revealed a significant modality×response condition interaction, with humming better than labeling for the auditory version but the reversed effect for the visual version. For Duration Patterns testing, visual performance was significantly poorer than auditory performance. Due to poor test-retest reliability and ceiling effects for the auditory and visual gap-detection tasks, analyses could not be performed. No cross-modal correlations were observed for any test. Results demonstrated that cross-modal testing is at least feasible using easily accessible computer hardware and software. The lack of any cross-modal correlations suggests independent processing mechanisms for auditory and visual versions of each task. Examination of performance in individuals with central auditory and pan-sensory disorders is needed to determine the utility of cross-modal analogs in the differential diagnosis of (C)APD. American Academy of Audiology.

  12. Event-related wave activity in the EEG provides new marker of ADHD.

    PubMed

    Alexander, David M; Hermens, Daniel F; Keage, Hannah A D; Clark, C Richard; Williams, Leanne M; Kohn, Michael R; Clarke, Simon D; Lamb, Chris; Gordon, Evian

    2008-01-01

    This study examines the utility of new measures of event-related spatio-temporal waves in the EEG as a marker of ADHD, previously shown to be closely related to the P3 ERP in an adult sample. Wave activity in the EEG was assessed during both an auditory Oddball task and a visual continuous performance task (CPT) for an ADHD group ranging in age from 6 to 18 years and comprising mostly Combined and Inattentive subtypes, and for an age- and gender-matched control group. The ADHD subjects had less wave activity at low frequencies (~1 Hz) during both tasks. For auditory Oddball targets, this effect was shown to be related to smaller P3 ERP amplitudes. During the CPT, the ~1 Hz wave activity in the ADHD subjects was inversely related to clinical and behavioral measures of hyperactivity and impulsivity. CPT wave activity at ~1 Hz was seen to "normalise" following treatment with stimulant medication. The results identify a deficit in low-frequency wave activity as a new marker for ADHD associated with levels of hyperactivity and impulsivity. The marker is evident across a range of tasks and may be specific to ADHD. While lower ~1 Hz activity partly accounts for reduced P3 ERPs in ADHD, the effect also arises for tasks that do not elicit a P3. Deficits in behavioral inhibition are hypothesized to arise from underlying dysregulation of cortical inhibition.

  13. Attention selectively modulates cortical entrainment in different regions of the speech spectrum

    PubMed Central

    Baltzell, Lucas S.; Horton, Cort; Shen, Yi; Richards, Virginia M.; D'Zmura, Michael; Srinivasan, Ramesh

    2016-01-01

    Recent studies have uncovered a neural response that appears to track the envelope of speech, and have shown that this tracking process is mediated by attention. It has been argued that this tracking reflects a process of phase-locking to the fluctuations of stimulus energy, ensuring that this energy arrives during periods of high neuronal excitability. Because all acoustic stimuli are decomposed into spectral channels at the cochlea, and this spectral decomposition is maintained along the ascending auditory pathway and into auditory cortex, we hypothesized that the overall stimulus envelope is not as relevant to cortical processing as the individual frequency channels; attention may be mediating envelope tracking differentially across these spectral channels. To test this we reanalyzed data reported by Horton et al. (2013), where high-density EEG was recorded while adults attended to one of two competing naturalistic speech streams. In order to simulate cochlear filtering, the stimuli were passed through a gammatone filterbank, and temporal envelopes were extracted at each filter output. Following Horton et al. (2013), the attended and unattended envelopes were cross-correlated with the EEG, and local maxima were extracted at three different latency ranges corresponding to distinct peaks in the cross-correlation function (N1, P2, and N2). We found that the ratio between the attended and unattended cross-correlation functions varied across frequency channels in the N1 latency range, consistent with the hypothesis that attention differentially modulates envelope-tracking activity across spectral channels. PMID:27195825
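    The per-channel envelope extraction described above can be approximated in a few lines. This sketch substitutes an ideal FFT bandpass for a true gammatone filter (an assumption made for brevity; the study used a gammatone filterbank) and takes the analytic-signal magnitude as the temporal envelope:

```python
import numpy as np

def band_envelope(x: np.ndarray, fs: float, lo: float, hi: float) -> np.ndarray:
    """Hilbert envelope of one frequency band: an ideal FFT bandpass
    followed by the analytic-signal magnitude. A simplified stand-in
    for one gammatone channel, not the study's actual filterbank."""
    n = len(x)
    X = np.fft.fft(x)
    f = np.fft.fftfreq(n, d=1.0 / fs)
    # Keep only positive-frequency bins inside [lo, hi], doubled;
    # zeroing the negative frequencies yields the analytic signal.
    H = np.zeros(n)
    H[(f >= lo) & (f <= hi)] = 2.0
    analytic = np.fft.ifft(X * H)
    return np.abs(analytic)

# Example: a 1 kHz tone amplitude-modulated at 4 Hz; the envelope of
# the band around 1 kHz recovers the 4 Hz modulation.
fs = 8000.0
t = np.arange(int(fs)) / fs
x = (1.0 + 0.5 * np.cos(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)
env = band_envelope(x, fs, 800.0, 1200.0)
```

    Envelopes computed this way, one per channel, are what would then be cross-correlated with the EEG.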

  14. Auditory Spectral Integration in the Perception of Static Vowels

    ERIC Educational Resources Information Center

    Fox, Robert Allen; Jacewicz, Ewa; Chang, Chiung-Yun

    2011-01-01

    Purpose: To evaluate potential contributions of broadband spectral integration in the perception of static vowels. Specifically, can the auditory system infer formant frequency information from changes in the intensity weighting across harmonics when the formant itself is missing? Does this type of integration produce the same results in the lower…

  15. Auditory Attentional Capture: Effects of Singleton Distractor Sounds

    ERIC Educational Resources Information Center

    Dalton, Polly; Lavie, Nilli

    2004-01-01

    The phenomenon of attentional capture by a unique yet irrelevant singleton distractor has typically been studied in visual search. In this article, the authors examine whether a similar phenomenon occurs in the auditory domain. Participants searched sequences of sounds for targets defined by frequency, intensity, or duration. The presence of a…

  16. Visual and Auditory Memory: Relationships to Reading Achievement.

    ERIC Educational Resources Information Center

    Bruning, Roger H.; And Others

    1978-01-01

    Good and poor readers' visual and auditory memory were tested. No group differences existed for single mode presentation in recognition frequency or latency. With multimodal presentation, good readers had faster latencies. Dual coding and self-terminating memory search hypotheses were supported. Implications for the reading process and reading…

  17. Using a Function Generator to Produce Auditory and Visual Demonstrations.

    ERIC Educational Resources Information Center

    Woods, Charles B.

    1998-01-01

    Identifies a function generator as an instrument that produces time-varying electrical signals of frequency, wavelength, and amplitude. Sending these signals to a speaker or a light-emitting diode can demonstrate how specific characteristics of auditory or visual stimuli relate to perceptual experiences. Provides specific instructions for using…

  18. Testing resonating vector strength: Auditory system, electric fish, and noise

    NASA Astrophysics Data System (ADS)

    Leo van Hemmen, J.; Longtin, André; Vollmayr, Andreas N.

    2011-12-01

    Quite often a response to some input with a specific frequency ν₀ can be described through a sequence of discrete events. Here, we study the synchrony vector, whose length stands for the vector strength, and in doing so focus on neuronal responses in terms of spike times. The latter are supposed to be given by experiment. Instead of singling out the stimulus frequency ν₀, we study the synchrony vector as a function of the real frequency variable ν. Its length turns out to be a resonating vector strength in that it shows clear maxima in the neighborhood of ν₀ and multiples thereof, hence allowing an easy way of determining response frequencies. We study this "resonating" vector strength for two concrete but rather different cases, viz., a specific midbrain neuron in the auditory system of the cat and a primary detector neuron belonging to the electric sense of the wave-type electric fish Apteronotus leptorhynchus. We show that the resonating vector strength always performs a clear resonance correlated with the phase locking that it quantifies. We analyze the influence of noise and demonstrate how well the resonance associated with maximal vector strength indicates the dominant stimulus frequency. Furthermore, we exhibit how one can obtain a specific phase associated with, for instance, a delay in auditory analysis.
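    The synchrony vector has a compact definition, VS(ν) = |(1/N) Σₖ exp(2πiνtₖ)|, which can be scanned over the analysis frequency ν to find the resonance. A small sketch with a toy phase-locked spike train (the spike-train parameters are illustrative, not data from the study):

```python
import numpy as np

def vector_strength(spike_times: np.ndarray, nu: float) -> float:
    """Length of the synchrony vector at analysis frequency nu:
    |(1/N) * sum_k exp(2*pi*i*nu*t_k)|."""
    phases = 2.0 * np.pi * nu * spike_times
    return float(np.abs(np.exp(1j * phases).mean()))

# Toy spike train phase-locked to a 100 Hz stimulus: one spike per
# cycle with 0.5 ms timing jitter, scanned over analysis frequency.
rng = np.random.default_rng(0)
t = np.arange(200) / 100.0 + rng.normal(0.0, 0.5e-3, 200)
nus = np.linspace(50.0, 250.0, 401)
vs = np.array([vector_strength(t, nu) for nu in nus])
best = nus[vs.argmax()]  # resonance near the 100 Hz stimulus frequency
```

    As the abstract notes, secondary maxima also appear at multiples of the stimulus frequency (here near 200 Hz), but timing jitter penalizes them more strongly than the fundamental.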

  19. Orthographic Consistency and Word-Frequency Effects in Auditory Word Recognition: New Evidence from Lexical Decision and Rime Detection

    PubMed Central

    Petrova, Ana; Gaskell, M. Gareth; Ferrand, Ludovic

    2011-01-01

    Many studies have repeatedly shown an orthographic consistency effect in the auditory lexical decision task. Words with phonological rimes that could be spelled in multiple ways (i.e., inconsistent words) typically produce longer auditory lexical decision latencies and more errors than do words with rimes that could be spelled in only one way (i.e., consistent words). These results have been extended to different languages and tasks, suggesting that the effect is quite general and robust. Despite this growing body of evidence, some psycholinguists believe that orthographic effects on spoken language are exclusively strategic, post-lexical, or restricted to peculiar (low-frequency) words. In the present study, we manipulated consistency and word-frequency orthogonally in order to explore whether the orthographic consistency effect extends to high-frequency words. Two different tasks were used: lexical decision and rime detection. Both tasks produced reliable consistency effects for both low- and high-frequency words. Furthermore, in Experiment 1 (lexical decision), an interaction revealed a stronger consistency effect for low-frequency words than for high-frequency words, as initially predicted by Ziegler and Ferrand (1998), whereas no interaction was found in Experiment 2 (rime detection). Our results extend previous findings by showing that the orthographic consistency effect is obtained not only for low-frequency words but also for high-frequency words. Furthermore, these effects were also obtained in a rime detection task, which does not require the explicit processing of orthographic structure. Globally, our results suggest that literacy changes the way people process spoken words, even for frequent words. PMID:22025916

  20. Psychometric functions for pure-tone frequency discrimination.

    PubMed

    Dai, Huanping; Micheyl, Christophe

    2011-07-01

    The form of the psychometric function (PF) for auditory frequency discrimination is of theoretical interest and practical importance. In this study, PFs for pure-tone frequency discrimination were measured for several standard frequencies (200-8000 Hz) and levels [35-85 dB sound pressure level (SPL)] in normal-hearing listeners. The proportion-correct data were fitted using a cumulative-Gaussian function of the sensitivity index, d', computed as a power transformation of the frequency difference, Δf. The exponent of the power function corresponded to the slope of the PF on log(d')-log(Δf) coordinates. The influence of attentional lapses on PF-slope estimates was investigated. When attentional lapses were not taken into account, the estimated PF slopes on log(d')-log(Δf) coordinates were found to be significantly lower than 1, suggesting a nonlinear relationship between d' and Δf. However, when lapse rate was included as a free parameter in the fits, PF slopes were found not to differ significantly from 1, consistent with a linear relationship between d' and Δf. This was the case across the wide ranges of frequencies and levels tested in this study. Therefore, spectral and temporal models of frequency discrimination must account for a linear relationship between d' and Δf across a wide range of frequencies and levels. © 2011 Acoustical Society of America
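    The fitting model described above (a cumulative Gaussian of d′, with d′ a power function of Δf and an optional lapse rate) can be sketched as follows. The lapse parameterization here, mixing in chance performance, is one common choice for a 2I-2AFC task and not necessarily the authors' exact fitting function:

```python
import math

def phi(z: float) -> float:
    """Standard-normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def prop_correct(delta_f: float, alpha: float, beta: float,
                 lapse: float = 0.0) -> float:
    """Proportion correct on a 2I-2AFC frequency-discrimination trial,
    with d' = (delta_f / alpha)**beta and a lapse rate that mixes in
    chance (0.5) performance. Illustrative parameterization."""
    d_prime = (delta_f / alpha) ** beta
    return lapse * 0.5 + (1.0 - lapse) * phi(d_prime / math.sqrt(2.0))

# With beta = 1 (the paper's conclusion once lapses are modeled),
# d' grows linearly with delta_f; alpha sets the scale of sensitivity.
for df in (0.5, 1.0, 2.0, 4.0):
    print(f"delta_f = {df:4.1f} Hz -> Pc = "
          f"{prop_correct(df, 2.0, 1.0, 0.02):.3f}")
```

    The paper's key point is visible in this parameterization: ignoring lapses biases the fitted exponent beta (the PF slope on log(d′)-log(Δf) coordinates) below 1, while modeling them recovers a slope consistent with 1.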

  1. [Assessment of the efficiency of the auditory training in children with dyslalia and auditory processing disorders].

    PubMed

    Włodarczyk, Elżbieta; Szkiełkowska, Agata; Skarżyński, Henryk; Piłka, Adam

    2011-01-01

    To assess the effectiveness of auditory training in children with dyslalia and central auditory processing disorders. The material consisted of 50 children aged 7-9 years. Children with articulation disorders remained under long-term speech therapy care in the Auditory and Phoniatrics Clinic. All children were examined by a laryngologist and a phoniatrician. Assessment included tonal and impedance audiometry as well as speech therapist and psychologist consultations. Additionally, a set of electrophysiological examinations was performed: registration of the N2, P2, and P300 waves, together with a psychoacoustic test of central auditory function, the frequency pattern test (FPT). The children then took part in regular auditory training and attended speech therapy. After treatment and therapy, speech was reassessed, the psychoacoustic tests were repeated, and P300 cortical potentials were recorded; statistical analyses followed. The analyses revealed that auditory training in patients with dyslalia and other central auditory disorders is highly effective. Auditory training may be an efficient therapy supporting speech therapy in children with dyslalia coexisting with articulation and central auditory disorders, and in children with educational problems of audiogenic origin. Copyright © 2011 Polish Otolaryngology Society. Published by Elsevier Urban & Partner (Poland). All rights reserved.

  2. Behavioral Measures of Auditory Streaming in Ferrets (Mustela putorius)

    PubMed Central

    Ma, Ling; Yin, Pingbo; Micheyl, Christophe; Oxenham, Andrew J.; Shamma, Shihab A.

    2015-01-01

    An important aspect of the analysis of auditory “scenes” relates to the perceptual organization of sound sequences into auditory “streams.” In this study, we adapted two auditory perception tasks, used in recent human psychophysical studies, to obtain behavioral measures of auditory streaming in ferrets (Mustela putorius). One task involved the detection of shifts in the frequency of tones within an alternating tone sequence. The other task involved the detection of a stream of regularly repeating target tones embedded within a randomly varying multitone background. In both tasks, performance was measured as a function of various stimulus parameters, which previous psychophysical studies in humans have shown to influence auditory streaming. Ferret performance in the two tasks was found to vary as a function of these parameters in a way that is qualitatively consistent with the human data. These results suggest that auditory streaming occurs in ferrets, and that the two tasks described here may provide a valuable tool in future behavioral and neurophysiological studies of the phenomenon. PMID:20695663

  3. An Auditory-Masking-Threshold-Based Noise Suppression Algorithm GMMSE-AMT[ERB] for Listeners with Sensorineural Hearing Loss

    NASA Astrophysics Data System (ADS)

    Natarajan, Ajay; Hansen, John H. L.; Arehart, Kathryn Hoberg; Rossi-Katz, Jessica

    2005-12-01

    This study describes a new noise suppression scheme for hearing aid applications based on the auditory masking threshold (AMT) in conjunction with a modified generalized minimum mean square error estimator (GMMSE) for individual subjects with hearing loss. The representation of cochlear frequency resolution is achieved in terms of auditory filter equivalent rectangular bandwidths (ERBs). Estimation of AMT and spreading functions for masking are implemented in two ways: with normal auditory thresholds and normal auditory filter bandwidths (GMMSE-AMT[ERB]-NH) and with elevated thresholds and broader auditory filters characteristic of cochlear hearing loss (GMMSE-AMT[ERB]-HI). Evaluation is performed using speech corpora with objective quality measures (segmental SNR, Itakura-Saito), along with formal listener evaluations of speech quality rating and intelligibility. While no measurable changes in intelligibility occurred, evaluations showed quality improvement with both algorithm implementations. However, the customized formulation based on individual hearing losses was similar in performance to the formulation based on the normal auditory system.
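
    The ERB representation of cochlear frequency resolution referred to above is commonly computed with the Glasberg and Moore (1990) formulas. A minimal sketch, assuming that convention (the paper does not state its exact ERB parameterization here):

```python
import math

def erb_bandwidth(f_hz):
    # Equivalent rectangular bandwidth (Hz) of the auditory filter
    # centred at f_hz, after Glasberg & Moore (1990):
    # ERB = 24.7 * (4.37 * f / 1000 + 1).
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def erb_number(f_hz):
    # ERB-rate scale (ERB number): maps frequency in Hz to a scale that
    # is roughly uniform along the cochlear partition:
    # ERBS = 21.4 * log10(4.37 * f / 1000 + 1).
    return 21.4 * math.log10(4.37 * f_hz / 1000.0 + 1.0)
```

    At 1 kHz this gives an ERB of about 133 Hz; broadening these filters and elevating thresholds is, in outline, how the -HI variant of the algorithm models cochlear hearing loss.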

  4. Development of auditory sensory memory from 2 to 6 years: an MMN study.

    PubMed

    Glass, Elisabeth; Sachse, Steffi; von Suchodoletz, Waldemar

    2008-08-01

    Short-term storage of auditory information is thought to be a precondition for cognitive development, and deficits in short-term memory are believed to underlie learning disabilities and specific language disorders. We examined the development of the duration of auditory sensory memory in normally developing children between the ages of 2 and 6 years. To probe the lifetime of auditory sensory memory we elicited the mismatch negativity (MMN), a component of the late auditory evoked potential, with tone stimuli of two different frequencies presented with various interstimulus intervals between 500 and 5,000 ms. Our findings suggest that memory traces for tone characteristics have a duration of 1-2 s in 2- and 3-year-old children, more than 2 s in 4-year-olds and 3-5 s in 6-year-olds. The results provide insights into the maturational processes involved in auditory sensory memory during the sensitive period of cognitive development.

  5. Baseline hearing abilities and variability in wild beluga whales (Delphinapterus leucas).

    PubMed

    Castellote, Manuel; Mooney, T Aran; Quakenbush, Lori; Hobbs, Roderick; Goertz, Caroline; Gaglione, Eric

    2014-05-15

    While hearing is the primary sensory modality for odontocetes, there are few data addressing variation within a natural population. This work describes the hearing ranges (4-150 kHz) and sensitivities of seven apparently healthy, wild beluga whales (Delphinapterus leucas) during a population health assessment project that captured and released belugas in Bristol Bay, Alaska. The baseline hearing abilities and subsequent variations were addressed. Hearing was measured using auditory evoked potentials (AEPs). All audiograms showed a typical cetacean U-shape; substantial variation (>30 dB) was found between most and least sensitive thresholds. All animals heard well, up to at least 128 kHz. Two heard up to 150 kHz. Lowest auditory thresholds (35-45 dB) were identified in the range 45-80 kHz. Greatest differences in hearing abilities occurred at both the high end of the auditory range and at frequencies of maximum sensitivity. In general, wild beluga hearing was quite sensitive. Hearing abilities were similar to those of belugas measured in zoological settings, reinforcing the comparative importance of both settings. The relative degree of variability across the wild belugas suggests that audiograms from multiple individuals are needed to properly describe the maximum sensitivity and population variance for odontocetes. Hearing measures were easily incorporated into field-based settings. This detailed examination of hearing abilities in wild Bristol Bay belugas provides a basis for a better understanding of the potential impact of anthropogenic noise on a noise-sensitive species. Such information may help design noise-limiting mitigation measures that could be applied to areas heavily influenced and inhabited by endangered belugas. © 2014. Published by The Company of Biologists Ltd.

  6. Speech training alters tone frequency tuning in rat primary auditory cortex

    PubMed Central

    Engineer, Crystal T.; Perez, Claudia A.; Carraway, Ryan S.; Chang, Kevin Q.; Roland, Jarod L.; Kilgard, Michael P.

    2013-01-01

    Previous studies in both humans and animals have documented improved performance following discrimination training. This enhanced performance is often associated with cortical response changes. In this study, we tested the hypothesis that long-term speech training on multiple tasks can improve primary auditory cortex (A1) responses compared to rats trained on a single speech discrimination task or experimentally naïve rats. Specifically, we compared the percent of A1 responding to trained sounds, the responses to both trained and untrained sounds, receptive field properties of A1 neurons, and the neural discrimination of pairs of speech sounds in speech trained and naïve rats. Speech training led to accurate discrimination of consonant and vowel sounds, but did not enhance A1 response strength or the neural discrimination of these sounds. Speech training altered tone responses in rats trained on six speech discrimination tasks but not in rats trained on a single speech discrimination task. Extensive speech training resulted in broader frequency tuning, shorter onset latencies, a decreased driven response to tones, and caused a shift in the frequency map to favor tones in the range where speech sounds are the loudest. Both the number of trained tasks and the number of days of training strongly predict the percent of A1 responding to a low frequency tone. Rats trained on a single speech discrimination task performed less accurately than rats trained on multiple tasks and did not exhibit A1 response changes. Our results indicate that extensive speech training can reorganize the A1 frequency map, which may have downstream consequences on speech sound processing. PMID:24344364

  7. Estimating human cochlear tuning behaviorally via forward masking

    NASA Astrophysics Data System (ADS)

    Oxenham, Andrew J.; Kreft, Heather A.

    2018-05-01

    The cochlea is where sound vibrations are transduced into the initial neural code for hearing. Despite the intervening stages of auditory processing, a surprising number of auditory perceptual phenomena can be explained in terms of the cochlea's biomechanical transformations. The quest to relate perception to these transformations has a long and distinguished history. It is perhaps surprising, then, that something as fundamental as the link between frequency tuning in the cochlea and perception remains a controversial and active topic of investigation. Here we review some recent developments in our understanding of the relationship between cochlear frequency tuning and behavioral measures of frequency selectivity in humans. We show that forward masking using the notched-noise technique can produce reliable estimates of tuning that are in line with predictions from stimulus frequency otoacoustic emissions.
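
    Notched-noise estimates of tuning are conventionally fitted with a rounded-exponential ("roex") filter shape. The following is a hedged sketch of that standard analysis; the abstract does not state which filter family the authors fitted:

```python
import math

def roex_weight(f_hz, fc_hz, p):
    # Rounded-exponential, roex(p), auditory-filter weighting:
    # W(g) = (1 + p*g) * exp(-p*g), with the normalized frequency
    # deviation g = |f - fc| / fc.  Larger p means sharper tuning.
    g = abs(f_hz - fc_hz) / fc_hz
    return (1.0 + p * g) * math.exp(-p * g)

def roex_erb(fc_hz, p):
    # Equivalent rectangular bandwidth of a symmetric roex(p) filter:
    # ERB = 4 * fc / p.
    return 4.0 * fc_hz / p
```

    For a typical p of around 30 at 1 kHz this gives an ERB near 133 Hz, roughly in line with behavioral estimates in normal-hearing listeners.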

  8. Learning-dependent plasticity in human auditory cortex during appetitive operant conditioning.

    PubMed

    Puschmann, Sebastian; Brechmann, André; Thiel, Christiane M

    2013-11-01

    Animal experiments provide evidence that learning to associate an auditory stimulus with a reward causes representational changes in auditory cortex. However, most studies did not investigate the temporal formation of learning-dependent plasticity during the task but rather compared auditory cortex receptive fields before and after conditioning. We here present a functional magnetic resonance imaging study on learning-related plasticity in the human auditory cortex during operant appetitive conditioning. Participants had to learn to associate a specific category of frequency-modulated tones with a reward. Only participants who learned this association developed learning-dependent plasticity in left auditory cortex over the course of the experiment. No differential responses to reward predicting and nonreward predicting tones were found in auditory cortex in nonlearners. In addition, learners showed similar learning-induced differential responses to reward-predicting and nonreward-predicting tones in the ventral tegmental area and the nucleus accumbens, two core regions of the dopaminergic neurotransmitter system. This may indicate a dopaminergic influence on the formation of learning-dependent plasticity in auditory cortex, as it has been suggested by previous animal studies. Copyright © 2012 Wiley Periodicals, Inc.

  9. Receiver bias and the acoustic ecology of aye-ayes (Daubentonia madagascariensis).

    PubMed

    Ramsier, Marissa A; Dominy, Nathaniel J

    2012-11-01

    The aye-aye is a rare lemur from Madagascar that uses its highly specialized middle digit for percussive foraging. This acoustic behavior, also termed tap-scanning, produces dominant frequencies between 6 and 15 kHz. An enhanced auditory sensitivity to these frequencies raises the possibility that the acoustic and auditory specializations of aye-ayes have imposed constraints on the evolution of their vocal signals, especially their primary long-distance vocalization, the screech. Here we explore this concept, termed receiver bias, and suggest that the dominant frequency of the screech call (~2.7 kHz) represents an evolutionary compromise between the opposing adaptive advantages of long-distance sound propagation and enhanced detection by conspecific receivers.

  10. Structure and Topology Dynamics of Hyper-Frequency Networks during Rest and Auditory Oddball Performance.

    PubMed

    Müller, Viktor; Perdikis, Dionysios; von Oertzen, Timo; Sleimen-Malkoun, Rita; Jirsa, Viktor; Lindenberger, Ulman

    2016-01-01

    Resting-state and task-related recordings are characterized by oscillatory brain activity and widely distributed networks of synchronized oscillatory circuits. Electroencephalographic recordings (EEG) were used to assess network structure and network dynamics during resting state with eyes open and closed, and auditory oddball performance through phase synchronization between EEG channels. For this assessment, we constructed a hyper-frequency network (HFN) based on within- and cross-frequency coupling (WFC and CFC, respectively) at 10 oscillation frequencies ranging between 2 and 20 Hz. We found that CFC generally differentiates between task conditions better than WFC. CFC was the highest during resting state with eyes open. Using a graph-theoretical approach (GTA), we found that HFNs possess small-world network (SWN) topology with a slight tendency to random network characteristics. Moreover, analysis of the temporal fluctuations of HFNs revealed specific network topology dynamics (NTD), i.e., temporal changes of different graph-theoretical measures such as strength, clustering coefficient, characteristic path length (CPL), local, and global efficiency determined for HFNs at different time windows. The different topology metrics showed significant differences between conditions in the mean and standard deviation of these metrics both across time and nodes. In addition, using an artificial neural network approach, we found stimulus-related dynamics that varied across the different network topology metrics. We conclude that functional connectivity dynamics (FCD), or NTD, which was found using the HFN approach during rest and stimulus processing, reflects temporal and topological changes in the functional organization and reorganization of neuronal cell assemblies.
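
    Phase synchronization between channel pairs, including the n:m generalization needed for cross-frequency coupling, is often quantified with a phase-locking index over instantaneous phases (e.g. from a Hilbert or wavelet transform). A minimal sketch; this index is one common choice, not necessarily the exact measure used by the authors:

```python
import cmath

def phase_locking(phases_a, phases_b, n=1, m=1):
    # n:m phase-synchronization index between two phase time series
    # (in radians).  Within-frequency coupling (WFC) is the n = m = 1
    # case; cross-frequency coupling (CFC) between, say, a 20 Hz and a
    # 10 Hz component uses the integer ratio n:m = 1:2.
    # Returns a value in [0, 1]: 1 = rigid locking, ~0 = no locking.
    total = sum(cmath.exp(1j * (n * pa - m * pb))
                for pa, pb in zip(phases_a, phases_b))
    return abs(total) / len(phases_a)
```

    With n = m = 1 this is the usual phase-locking value; ratios other than 1:1 quantify coupling between the 10 analysis frequencies (2-20 Hz) that define the hyper-frequency network.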

  11. [Some Features of Sound Signal Envelope by the Frog's Cochlear Nucleus Neurons].

    PubMed

    Bibikov, N G

    2015-01-01

    The responses of single neurons in the medullary auditory center of the grass frog were recorded extracellularly during long tonal signals at the characteristic frequency, modulated by repeating fragments of low-frequency (0-15 Hz, 0-50 Hz, or 0-150 Hz) noise. A correlation method was used to evaluate how effectively different envelope fragments elicited a neuron's spike discharge. By carrying out these evaluations at different time intervals between signal and response, the maximum delays were assessed. Two important envelope fragments were revealed. In the majority of units, the most effective fragment was the interval during which the amplitude rose from its mean value to the maximum; the interval during which the amplitude fell from the maximum back to the mean was the second most effective. This type of response was observed in the vast majority of cells for the 0-150 and 0-50 Hz envelope bands; these cells effectively performed half-wave rectification of the envelope. In some neurons, however, we observed a stronger preference for intervals of growing amplitude, including those where the amplitude was still below the mean. These properties were observed mainly for low-frequency (0-15 Hz) modulated signals at high modulation depth. The data show that, even in the medulla oblongata, neural elements of the auditory pathway are specialized with respect to the temporal features of the sound stimulus. This diversity is most evident for signals with relatively slowly varying amplitude.

  13. Can comodulation masking release occur when frequency changes could promote perceptual segregation of the on-frequency and flanking bands?

    PubMed

    Verhey, Jesko L; Epp, Bastian; Stasiak, Arkadiusz; Winter, Ian M

    2013-01-01

    A common characteristic of natural sounds is that the level fluctuations in different frequency regions are coherent. The ability of the auditory system to use this comodulation is shown when a sinusoidal signal is masked by a masker centred at the signal frequency (on-frequency masker, OFM) and one or more off-frequency components, commonly referred to as flanking bands (FBs). In general, the threshold of the signal masked by comodulated masker components is lower than when masked by masker components with uncorrelated envelopes or in the presence of the OFM only. This effect is commonly referred to as comodulation masking release (CMR). The present study investigates if CMR is also observed for a sinusoidal signal embedded in the OFM when the centre frequencies of the FBs are swept over time with a sweep rate of one octave per second. Both a common change of different frequencies and comodulation could serve as cues to indicate which of the stimulus components originate from one source. If the common fate of frequency components is the stronger binding cue, the sweeping FBs and the OFM with a fixed centre frequency should no longer form one auditory object and the CMR should be abolished. However, psychoacoustical results with normal-hearing listeners show that a CMR is also observed with sweeping components. The results are consistent with the hypothesis of wideband inhibition as the underlying physiological mechanism, as the CMR should only depend on the spectral position of the flanking bands relative to the inhibitory areas (as seen in physiological recordings using stationary flanking bands). Preliminary physiological results in the cochlear nucleus of the Guinea pig show that a correlate of CMR can also be found at this level of the auditory pathway with sweeping flanking bands.

  14. Blocking c-Fos Expression Reveals the Role of Auditory Cortex Plasticity in Sound Frequency Discrimination Learning.

    PubMed

    de Hoz, Livia; Gierej, Dorota; Lioudyno, Victoria; Jaworski, Jacek; Blazejczyk, Magda; Cruces-Solís, Hugo; Beroun, Anna; Lebitko, Tomasz; Nikolaev, Tomasz; Knapska, Ewelina; Nelken, Israel; Kaczmarek, Leszek

    2018-05-01

    The behavioral changes that comprise operant learning are associated with plasticity in early sensory cortices as well as with modulation of gene expression, but the connection between the behavioral, electrophysiological, and molecular changes is only partially understood. We specifically manipulated c-Fos expression, a hallmark of learning-induced synaptic plasticity, in auditory cortex of adult mice using a novel approach based on RNA interference. Locally blocking c-Fos expression caused a specific behavioral deficit in a sound discrimination task, in parallel with decreased cortical experience-dependent plasticity, without affecting baseline excitability or basic auditory processing. Thus, c-Fos-dependent experience-dependent cortical plasticity is necessary for frequency discrimination in an operant behavioral task. Our results connect behavioral, molecular and physiological changes and demonstrate a role of c-Fos in experience-dependent plasticity and learning.

  15. Syllabic (~2-5 Hz) and fluctuation (~1-10 Hz) ranges in speech and auditory processing

    PubMed Central

    Edwards, Erik; Chang, Edward F.

    2013-01-01

    Given recent interest in syllabic rates (~2-5 Hz) for speech processing, we review the perception of “fluctuation” range (~1-10 Hz) modulations during listening to speech and technical auditory stimuli (AM and FM tones and noises, and ripple sounds). We find evidence that the temporal modulation transfer function (TMTF) of human auditory perception is not simply low-pass in nature, but rather exhibits a peak in sensitivity in the syllabic range (~2-5 Hz). We also address human and animal neurophysiological evidence, and argue that this bandpass tuning arises at the thalamocortical level and is more associated with non-primary regions than primary regions of cortex. The bandpass rather than low-pass TMTF has implications for modeling auditory central physiology and speech processing: this implicates temporal contrast rather than simple temporal integration, with contrast enhancement for dynamic stimuli in the fluctuation range. PMID:24035819

  16. On pure word deafness, temporal processing, and the left hemisphere.

    PubMed

    Stefanatos, Gerry A; Gershkoff, Arthur; Madigan, Sean

    2005-07-01

    Pure word deafness (PWD) is a rare neurological syndrome characterized by severe difficulties in understanding and reproducing spoken language, with sparing of written language comprehension and speech production. The pathognomonic disturbance of auditory comprehension appears to be associated with a breakdown in processes involved in mapping auditory input to lexical representations of words, but the functional locus of this disturbance and the localization of the responsible lesion have long been disputed. We report here on a woman with PWD resulting from a circumscribed unilateral infarct involving the left superior temporal lobe who demonstrated significant problems processing transitional spectrotemporal cues in both speech and nonspeech sounds. On speech discrimination tasks, she exhibited poor differentiation of stop consonant-vowel syllables distinguished by voicing onset and brief formant frequency transitions. Isolated formant transitions could be reliably discriminated only at very long durations (> 200 ms). By contrast, click fusion threshold, which depends on millisecond-level resolution of brief auditory events, was normal. These results suggest that the problems with speech analysis in this case were not secondary to general constraints on auditory temporal resolution. Rather, they point to a disturbance of left hemisphere auditory mechanisms that preferentially analyze rapid spectrotemporal variations in frequency. The findings have important implications for our conceptualization of PWD and its subtypes.

  17. Music for the birds: effects of auditory enrichment on captive bird species.

    PubMed

    Robbins, Lindsey; Margulis, Susan W

    2016-01-01

    With the increase of mixed species exhibits in zoos, targeting enrichment for individual species may be problematic. Often, mammals may be the primary targets of enrichment, yet other species that share their environment (such as birds) will unavoidably be exposed to the enrichment as well. The purpose of this study was to determine (1) whether auditory stimuli designed for enrichment of primates influenced the behavior of captive birds in the zoo setting, and (2) whether the specific type of auditory enrichment impacted bird behavior. Three different African bird species were observed at the Buffalo Zoo during exposure to natural sounds, classical music, and rock music. The results revealed that the average frequency of flying in all three bird species increased with naturalistic sounds and decreased with rock music (F = 7.63, df = 3,6, P = 0.018); vocalizations for two of the three species (Superb Starlings and Mousebirds) increased (F = 18.61, df = 2,6, P = 0.0027) in response to all auditory stimuli; however, one species (Lady Ross's Turacos) increased its frequency of duetting only in response to rock music (X(2) = 18.5, df = 2, P < 0.0001). Auditory enrichment implemented for large mammals may influence behavior in non-target species as well, in this case leading to increased activity by birds. © 2016 Wiley Periodicals, Inc.

  18. Audiological and electrophysiological assessment of professional pop/rock musicians.

    PubMed

    Samelli, Alessandra G; Matas, Carla G; Carvallo, Renata M M; Gomes, Raquel F; de Beija, Carolina S; Magliaro, Fernanda C L; Rabelo, Camila M

    2012-01-01

    In the present study, we evaluated the peripheral and central auditory pathways of professional musicians (with and without hearing loss) compared to non-musicians, in order to verify whether music exposure affects the auditory pathways as a whole. This prospective study compared results across three groups (musicians with and without hearing loss, and non-musicians). Thirty-two male individuals participated and were assessed by immittance measurements, pure-tone air-conduction thresholds at all frequencies from 0.25 to 20 kHz, transient evoked otoacoustic emissions, auditory brainstem response (ABR), and cognitive potential. The musicians showed worse hearing thresholds in both conventional and high-frequency audiometry when compared to the non-musicians; the mean amplitude of transient evoked otoacoustic emissions was smaller in the musician group, but the mean latencies of the auditory brainstem response and cognitive potential were shorter in musicians than in non-musicians. Our findings suggest that this population of musicians is at risk of developing music-induced hearing loss. However, the electrophysiological evaluation showed that ABR and P300 latencies were shorter in musicians, which may suggest that the auditory training to which these musicians are exposed acts as a facilitator of acoustic signal transmission to the cortex.

  19. Volume Attenuation and High Frequency Loss as Auditory Depth Cues in Stereoscopic 3D Cinema

    NASA Astrophysics Data System (ADS)

    Manolas, Christos; Pauletto, Sandra

    2014-09-01

    Assisted by the technological advances of the past decades, stereoscopic 3D (S3D) cinema is currently being established as a mainstream form of entertainment. The main focus of this effort has been the creation of immersive S3D visuals; with few exceptions, however, little attention has so far been given to the potential contribution of the soundtrack to such environments. Sound has considerable potential both as a means to enhance the impact of the S3D visual information and to expand the S3D cinematic world beyond the boundaries of the visuals. This article reports on our research into using auditory depth cues within the soundtrack to affect the perception of depth in cinematic S3D scenes. We study two main distance-related auditory cues: high-end frequency loss and overall volume attenuation. A series of experiments explored the effectiveness of these auditory cues. The results, although not conclusive, indicate that the studied cues can influence the audience's judgement of depth in cinematic 3D scenes, sometimes in unexpected ways. We conclude that 3D filmmaking can benefit from further studies on the effectiveness of specific sound design techniques to enhance S3D cinema.
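
    The two distance cues studied can be roughed out with textbook relations. In the sketch below the inverse-square law is standard, while the air-absorption term and its coefficient `k` are hypothetical placeholders rather than values from the article (real atmospheric absorption follows ISO 9613-1 and depends on temperature and humidity):

```python
import math

def distance_attenuation_db(distance_m, ref_m=1.0):
    # Overall volume attenuation of a point source in a free field:
    # the inverse-square law gives -20 * log10(d / d_ref), i.e. about
    # -6 dB per doubling of distance.
    return -20.0 * math.log10(distance_m / ref_m)

def high_frequency_loss_db(distance_m, freq_hz, k=1e-4):
    # Rough high-end frequency loss from air absorption, sketched here
    # as attenuation growing linearly with distance and quadratically
    # with frequency.  k (dB per metre per kHz^2) is a hypothetical
    # placeholder, not a measured coefficient.
    return -k * distance_m * (freq_hz / 1000.0) ** 2
```

    Doubling the distance costs about 6 dB overall, while the high-frequency term removes proportionally more energy from the top of the spectrum; these are the two manipulations the article evaluates as depth cues.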

  20. Auditory mismatch negativity deficits in long-term heavy cannabis users.

    PubMed

    Roser, Patrik; Della, Beate; Norra, Christine; Uhl, Idun; Brüne, Martin; Juckel, Georg

    2010-09-01

    Mismatch negativity (MMN) is an auditory event-related potential indicating auditory sensory memory and information processing. The present study tested the hypothesis that chronic cannabis use is associated with deficient MMN generation. MMN was investigated in age- and gender-matched chronic cannabis users (n = 30) and nonuser controls (n = 30). The cannabis users were divided into two groups according to duration and quantity of cannabis consumption. The MMNs resulting from a pseudorandomized sequence of 2 × 900 auditory stimuli were recorded by 32-channel EEG. The standard stimuli were 1,000 Hz, 80 dB SPL and 90 ms duration. The deviant stimuli differed in duration (50 ms) or frequency (1,200 Hz). There were no significant differences in MMN values between cannabis users and nonuser controls in both deviance conditions. With regard to subgroups, reduced amplitudes of frequency MMN at frontal electrodes were found in long-term (≥8 years of use) and heavy (≥15 joints/week) users compared to short-term and light users. The results indicate that chronic cannabis use may cause a specific impairment of auditory information processing. In particular, duration and quantity of cannabis use could be identified as important factors of deficient MMN generation.

  1. Intracranial mapping of auditory perception: event-related responses and electrocortical stimulation.

    PubMed

    Sinai, A; Crone, N E; Wied, H M; Franaszczuk, P J; Miglioretti, D; Boatman-Reich, D

    2009-01-01

    We compared intracranial recordings of auditory event-related responses with electrocortical stimulation mapping (ESM) to determine their functional relationship. Intracranial recordings and ESM were performed, using speech and tones, in adult epilepsy patients with subdural electrodes implanted over lateral left cortex. Evoked N1 responses and induced spectral power changes were obtained by trial averaging and time-frequency analysis. ESM impaired perception and comprehension of speech, not tones, at electrode sites in the posterior temporal lobe. There was high spatial concordance between ESM sites critical for speech perception and the largest spectral power (100% concordance) and N1 (83%) responses to speech. N1 responses showed good sensitivity (0.75) and specificity (0.82), but poor positive predictive value (0.32). Conversely, increased high-frequency power (>60Hz) showed high specificity (0.98), but poorer sensitivity (0.67) and positive predictive value (0.67). Stimulus-related differences were observed in the spatial-temporal patterns of event-related responses. Intracranial auditory event-related responses to speech were associated with cortical sites critical for auditory perception and comprehension of speech. These results suggest that the distribution and magnitude of intracranial auditory event-related responses to speech reflect the functional significance of the underlying cortical regions and may be useful for pre-surgical functional mapping.
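
    The sensitivity, specificity, and positive predictive values above follow from a 2x2 confusion matrix of response-positive/negative electrodes against ESM-critical/non-critical sites. A sketch with hypothetical counts (not the study's data) showing how PPV can be poor when critical sites are rare, even though sensitivity and specificity are good:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    # Sensitivity: fraction of truly critical sites that show a response.
    # Specificity: fraction of non-critical sites that show no response.
    # Positive predictive value (PPV): fraction of response-positive
    # sites that are truly critical.
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    return sensitivity, specificity, ppv
```

    With only 4 critical sites out of 104 (tp = 3, fn = 1, fp = 18, tn = 82), sensitivity is 0.75 and specificity 0.82, yet PPV is only about 0.14: low prevalence of critical sites drags PPV down, the same dissociation reported for the N1 responses.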

  2. Intracranial mapping of auditory perception: Event-related responses and electrocortical stimulation

    PubMed Central

    Sinai, A.; Crone, N.E.; Wied, H.M.; Franaszczuk, P.J.; Miglioretti, D.; Boatman-Reich, D.

    2010-01-01

    Objective We compared intracranial recordings of auditory event-related responses with electrocortical stimulation mapping (ESM) to determine their functional relationship. Methods Intracranial recordings and ESM were performed, using speech and tones, in adult epilepsy patients with subdural electrodes implanted over lateral left cortex. Evoked N1 responses and induced spectral power changes were obtained by trial averaging and time-frequency analysis. Results ESM impaired perception and comprehension of speech, not tones, at electrode sites in the posterior temporal lobe. There was high spatial concordance between ESM sites critical for speech perception and the largest spectral power (100% concordance) and N1 (83%) responses to speech. N1 responses showed good sensitivity (0.75) and specificity (0.82), but poor positive predictive value (0.32). Conversely, increased high-frequency power (>60 Hz) showed high specificity (0.98), but poorer sensitivity (0.67) and positive predictive value (0.67). Stimulus-related differences were observed in the spatial-temporal patterns of event-related responses. Conclusions Intracranial auditory event-related responses to speech were associated with cortical sites critical for auditory perception and comprehension of speech. Significance These results suggest that the distribution and magnitude of intracranial auditory event-related responses to speech reflect the functional significance of the underlying cortical regions and may be useful for pre-surgical functional mapping. PMID:19070540

  3. Electrical Brain Responses to an Auditory Illusion and the Impact of Musical Expertise

    PubMed Central

    Ioannou, Christos I.; Pereda, Ernesto; Lindsen, Job P.; Bhattacharya, Joydeep

    2015-01-01

The presentation of two sinusoidal tones, one to each ear, with a slight frequency mismatch yields an auditory illusion of a beating frequency equal to the frequency difference between the two tones; this is known as binaural beat (BB). The effect of brief BB stimulation on scalp EEG is not conclusively demonstrated. Further, no studies have examined the impact of musical training associated with BB stimulation, yet musicians' brains are often associated with enhanced auditory processing. In this study, we analysed EEG brain responses from two groups, musicians and non-musicians, when stimulated by short presentation (1 min) of binaural beats with beat frequency varying from 1 Hz to 48 Hz. We focused our analysis on alpha and gamma band EEG signals, and they were analysed in terms of spectral power, and functional connectivity as measured by two phase synchrony based measures, phase locking value and phase lag index. Finally, these measures were used to characterize the degree of centrality, segregation and integration of the functional brain network. We found that beat frequencies belonging to alpha band produced the most significant steady-state responses across groups. Further, processing of low frequency (delta, theta, alpha) binaural beats had significant impact on cortical network patterns in the alpha band oscillations. Altogether these results provide a neurophysiological account of cortical responses to BB stimulation at varying frequencies, demonstrate a modulation of cortico-cortical connectivity in musicians' brains, and further suggest a form of neuronal entrainment bearing both linear and nonlinear relationships to the beating frequencies. PMID:26065708
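    A binaural-beat stimulus is simply two pure tones, one per ear, whose frequencies differ by the desired beat rate. A minimal sketch of such a stimulus (the carrier frequency, beat rate, and sample rate are illustrative, not the study's parameters):

```python
import numpy as np

def binaural_beat(carrier_hz=250.0, beat_hz=10.0, duration_s=1.0, fs=44100):
    """Return a stereo array: the left channel carries the base tone, the
    right channel carries carrier + beat frequency; the perceived beat rate
    equals the frequency difference between the two ears."""
    t = np.arange(int(duration_s * fs)) / fs
    left = np.sin(2 * np.pi * carrier_hz * t)
    right = np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)
    return np.column_stack([left, right])

stereo = binaural_beat(carrier_hz=250.0, beat_hz=10.0)  # 10 Hz beat (alpha range)
print(stereo.shape)  # one second of stereo audio at 44.1 kHz
```

    Varying `beat_hz` from 1 to 48 Hz while holding the carrier fixed is one way such a frequency sweep of beat stimuli could be generated.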

  4. Electrical Brain Responses to an Auditory Illusion and the Impact of Musical Expertise.

    PubMed

    Ioannou, Christos I; Pereda, Ernesto; Lindsen, Job P; Bhattacharya, Joydeep

    2015-01-01

The presentation of two sinusoidal tones, one to each ear, with a slight frequency mismatch yields an auditory illusion of a beating frequency equal to the frequency difference between the two tones; this is known as binaural beat (BB). The effect of brief BB stimulation on scalp EEG is not conclusively demonstrated. Further, no studies have examined the impact of musical training associated with BB stimulation, yet musicians' brains are often associated with enhanced auditory processing. In this study, we analysed EEG brain responses from two groups, musicians and non-musicians, when stimulated by short presentation (1 min) of binaural beats with beat frequency varying from 1 Hz to 48 Hz. We focused our analysis on alpha and gamma band EEG signals, and they were analysed in terms of spectral power, and functional connectivity as measured by two phase synchrony based measures, phase locking value and phase lag index. Finally, these measures were used to characterize the degree of centrality, segregation and integration of the functional brain network. We found that beat frequencies belonging to alpha band produced the most significant steady-state responses across groups. Further, processing of low frequency (delta, theta, alpha) binaural beats had significant impact on cortical network patterns in the alpha band oscillations. Altogether these results provide a neurophysiological account of cortical responses to BB stimulation at varying frequencies, demonstrate a modulation of cortico-cortical connectivity in musicians' brains, and further suggest a form of neuronal entrainment bearing both linear and nonlinear relationships to the beating frequencies.

  5. Event-related alpha synchronization/desynchronization in a memory-search task in adolescent survivors of childhood cancer.

    PubMed

    Lähteenmäki, P M; Krause, C M; Sillanmäki, L; Salmi, T T; Lang, A H

    1999-12-01

Event-related desynchronization (ERD) and synchronization (ERS) of the 8-10 and 10-12 Hz frequency bands of the background EEG were studied in 19 adolescent survivors of childhood cancer (11 leukemias, 8 solid tumors) and in 10 healthy control subjects performing an auditory memory task. The stimuli were auditory Finnish words presented as a Sternberg-type memory-scanning paradigm. Each trial started with the presentation of a 4 word set for memorization, whereafter a probe word was presented to be identified by the subject as belonging or not belonging to the memorized set. Encoding of the memory set elicited ERS and retrieval ERD at both frequency bands. However, in the survivors of leukemia, ERS was replaced by ERD during encoding at the lower alpha frequency band. ERD lasted longer at the lower frequency band than at the higher frequency band in each study group. At both frequency bands, the maximum of ERD was achieved later in the cancer survivors than in the control group. The previously reported type of ERD/ERS during an auditory memory task was also reproducible in the survivors of childhood cancer, at different alpha frequency bands. However, the temporal deviance in ERD/ERS magnitudes in the cancer survivors was interpreted to indicate that both survivor groups had prolonged information processing time and/or used ineffective cognitive strategies. This finding was more pronounced in the group of leukemia survivors, at the lower alpha frequency band, suggesting that the main problem of this patient group might be in the field of attention.
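    ERD/ERS is conventionally quantified as the percentage change in band power during an event interval relative to a pre-stimulus reference interval (Pfurtscheller's formulation; sign conventions vary across labs). A minimal sketch under one common convention, assuming band power has already been extracted:

```python
def erd_ers_percent(event_power, reference_power):
    """Percent band-power change relative to a reference interval.
    Under this sign convention, negative values indicate event-related
    desynchronization (ERD) and positive values synchronization (ERS)."""
    return 100.0 * (event_power - reference_power) / reference_power

print(erd_ers_percent(6.0, 10.0))   # power drop -> -40.0 (ERD)
print(erd_ers_percent(13.0, 10.0))  # power rise -> 30.0 (ERS)
```

    The example power values are hypothetical; in practice the reference and event powers would be averaged over trials within each alpha sub-band (8-10 and 10-12 Hz).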

  6. Correlation analysis of the long latency auditory evoked potential N2 and cognitive P3 with the level of lead poisoning in children

    PubMed Central

    Alvarenga, Kátia de Freitas; Alvarez Bernardez-Braga, Gabriela Rosito; Zucki, Fernanda; Duarte, Josilene Luciene; Lopes, Andrea Cintra; Feniman, Mariza Ribeiro

    2013-01-01

Introduction: The effects of lead on children's health have been widely studied. Aim: To analyze the correlation between the long-latency auditory evoked potential N2 and cognitive P3 with the level of lead poisoning in Brazilian children. Methods: This retrospective study evaluated 20 children ranging in age from 7 to 14 years at the time of audiological and electrophysiological evaluations. We performed periodic surveys of the lead concentration in the blood and basic audiological evaluations. Furthermore, we studied the long-latency auditory evoked potential N2 and cognitive P3 by analyzing the absolute latency of the N2 and P3 potentials and the P3 amplitude recorded at Cz. At the time of audiological and electrophysiological evaluations, the average concentration of lead in the blood was less than 10 µg/dL. Results: In conventional audiologic evaluations, all children had hearing thresholds below 20 dB HL for the frequencies tested and normal tympanometry findings; the long-latency auditory evoked potential N2 and cognitive P3 were present in 95% of children. No significant correlations were found between the blood lead concentration and latency (p = 0.821) or amplitude (p = 0.411) of the P3 potential. However, the latency of the N2 potential increased with the concentration of lead in the blood, with a significant correlation (p = 0.030). Conclusion: Among Brazilian children with low lead exposure, a significant correlation was found between blood lead levels and the average latency of the long-latency auditory evoked potential N2; however, a significant correlation was not observed for the amplitude and latency of the cognitive potential P3. PMID:25991992

  7. Hearing assessment during deep brain stimulation of the central nucleus of the inferior colliculus and dentate cerebellar nucleus in rat.

    PubMed

    Smit, Jasper V; Jahanshahi, Ali; Janssen, Marcus L F; Stokroos, Robert J; Temel, Yasin

    2017-01-01

Recent animal studies have shown that deep brain stimulation (DBS) of auditory structures can reduce tinnitus-like behavior. However, the question arises whether hearing might be impaired when interfering in auditory-related network loops with DBS. The auditory brainstem response (ABR) was measured in rats during high frequency stimulation (HFS) and low frequency stimulation (LFS) in the central nucleus of the inferior colliculus (CIC, n = 5) or dentate cerebellar nucleus (DCBN, n = 5). Besides hearing thresholds, relative measures of latency and amplitude can be extracted from the ABR. In this study ABR thresholds, interpeak latencies (I-III, III-V, I-V) and the V/I amplitude ratio were measured during the off-stimulation state and during LFS and HFS. In both the CIC and the DCBN groups, no significant differences were observed for any outcome measure. DBS in both the CIC and the DCBN did not have adverse effects on hearing measurements. These findings suggest that DBS does not hamper physiological processing in the auditory circuitry.
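    The ABR-derived outcome measures named here are simple arithmetic on the identified wave peaks. A minimal sketch with hypothetical peak values (not data from the study):

```python
def abr_measures(latencies_ms, amplitudes_uv):
    """Interpeak latencies and V/I amplitude ratio from ABR wave peaks.
    `latencies_ms` and `amplitudes_uv` are dicts keyed by wave number."""
    ipl = {
        "I-III": latencies_ms[3] - latencies_ms[1],
        "III-V": latencies_ms[5] - latencies_ms[3],
        "I-V": latencies_ms[5] - latencies_ms[1],
    }
    v_i_ratio = amplitudes_uv[5] / amplitudes_uv[1]
    return ipl, v_i_ratio

# Hypothetical peak latencies (ms) and amplitudes (uV) for waves I, III, V:
ipl, ratio = abr_measures({1: 1.2, 3: 2.4, 5: 3.8}, {1: 0.5, 5: 0.6})
print(ipl, round(ratio, 2))
```

    Interpeak latencies index conduction time between brainstem generators, which is why they, rather than absolute latencies, are compared across stimulation conditions.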

  8. Test-retest reliability of pure-tone thresholds from 0.5 to 16 kHz using Sennheiser HDA 200 and Etymotic Research ER-2 earphones.

    PubMed

    Schmuziger, Nicolas; Probst, Rudolf; Smurzynski, Jacek

    2004-04-01

    The purposes of the study were: (1) To evaluate the intrasession test-retest reliability of pure-tone thresholds measured in the 0.5-16 kHz frequency range for a group of otologically healthy subjects using Sennheiser HDA 200 circumaural and Etymotic Research ER-2 insert earphones and (2) to compare the data with existing criteria of significant threshold shifts related to ototoxicity and noise-induced hearing loss. Auditory thresholds in the frequency range from 0.5 to 6 kHz and in the extended high-frequency range from 8 to 16 kHz were measured in one ear of 138 otologically healthy subjects (77 women, 61 men; mean age, 24.4 yr; range, 12-51 yr) using HDA 200 and ER-2 earphones. For each subject, measurements of thresholds were obtained twice for both transducers during the same test session. For analysis, the extended high-frequency range from 8 to 16 kHz was subdivided into 8 to 12.5 and 14 to 16 kHz ranges. Data for each frequency and frequency range were analyzed separately. There were no significant differences in repeatability for the two transducer types for all frequency ranges. The intrasession variability increased slightly, but significantly, as frequency increased with the greatest amount of variability in the 14 to 16 kHz range. Analyzing each individual frequency, variability was increased particularly at 16 kHz. At each individual frequency and for both transducer types, intrasession test-retest repeatability from 0.5 to 6 kHz and 8 to 16 kHz was within 10 dB for >99% and >94% of measurements, respectively. The results indicated a false-positive rate of <3% in reference to the criteria for cochleotoxicity for both transducer types. In reference to the Occupational Safety and Health Administration Standard Threshold Shift criteria for noise-induced hazards, the results showed a minor false-positive rate of <1% for the HDA 200. Repeatability was similar for both transducer types. 
Intrasession test-retest repeatability from 0.5 to 12.5 kHz at each individual frequency, including the frequency range susceptible to noise-induced hearing loss, was excellent for both transducers. Repeatability was slightly, but significantly, poorer in the frequency range from 14 to 16 kHz compared with the frequency ranges from 0.5 to 6 or 8 to 12.5 kHz. Measurements in the extended high-frequency range from 8 to 14 kHz, but not up to 16 kHz, may be recommended for monitoring purposes.
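    The OSHA Standard Threshold Shift criterion referenced above is defined as an average change of 10 dB or more at 2000, 3000, and 4000 Hz relative to the baseline audiogram (29 CFR 1910.95). A minimal sketch of that check, with hypothetical audiogram values:

```python
def is_standard_threshold_shift(baseline_db, current_db):
    """OSHA Standard Threshold Shift: an average shift of >= 10 dB at
    2000, 3000 and 4000 Hz relative to the baseline audiogram.
    Both arguments map frequency (Hz) -> hearing threshold (dB HL)."""
    freqs = (2000, 3000, 4000)
    mean_shift = sum(current_db[f] - baseline_db[f] for f in freqs) / len(freqs)
    return mean_shift >= 10.0

baseline = {2000: 5, 3000: 10, 4000: 10}
followup = {2000: 15, 3000: 20, 4000: 25}  # average shift ~11.7 dB
print(is_standard_threshold_shift(baseline, followup))  # True
```

    A false positive under this criterion would be a retest threshold shift, driven purely by measurement variability, large enough to trip the 10 dB average; the study's point is that such cases were rare (<1%) for the HDA 200.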

  9. A temperature rise reduces trial-to-trial variability of locust auditory neuron responses.

    PubMed

    Eberhard, Monika J B; Schleimer, Jan-Hendrik; Schreiber, Susanne; Ronacher, Bernhard

    2015-09-01

    The neurophysiology of ectothermic animals, such as insects, is affected by environmental temperature, as their body temperature fluctuates with ambient conditions. Changes in temperature alter properties of neurons and, consequently, have an impact on the processing of information. Nevertheless, nervous system function is often maintained over a broad temperature range, exhibiting a surprising robustness to variations in temperature. A special problem arises for acoustically communicating insects, as in these animals mate recognition and mate localization typically rely on the decoding of fast amplitude modulations in calling and courtship songs. In the auditory periphery, however, temporal resolution is constrained by intrinsic neuronal noise. Such noise predominantly arises from the stochasticity of ion channel gating and potentially impairs the processing of sensory signals. On the basis of intracellular recordings of locust auditory neurons, we show that intrinsic neuronal variability on the level of spikes is reduced with increasing temperature. We use a detailed mathematical model including stochastic ion channel gating to shed light on the underlying biophysical mechanisms in auditory receptor neurons: because of a redistribution of channel-induced current noise toward higher frequencies and specifics of the temperature dependence of the membrane impedance, membrane potential noise is indeed reduced at higher temperatures. This finding holds under generic conditions and physiologically plausible assumptions on the temperature dependence of the channels' kinetics and peak conductances. We demonstrate that the identified mechanism also can explain the experimentally observed reduction of spike timing variability at higher temperatures. Copyright © 2015 the American Physiological Society.
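    The temperature dependence of channel kinetics invoked above is conventionally parameterized by a Q10 coefficient: a gating rate scales by a factor of Q10 for every 10 °C rise. A minimal sketch (the Q10 value of 3 is illustrative; typical channel gating Q10s fall roughly between 2 and 4):

```python
def q10_scale(rate_t1, t1_c, t2_c, q10=3.0):
    """Scale a channel gating rate from temperature t1 to t2 (degrees C)
    using the conventional Q10 temperature coefficient."""
    return rate_t1 * q10 ** ((t2_c - t1_c) / 10.0)

# With Q10 = 3, a 10 degC rise triples the gating rate:
print(q10_scale(100.0, 22.0, 32.0))  # 300.0
```

    Faster gating at higher temperature shifts channel-induced current noise toward higher frequencies, which, combined with the membrane's low-pass impedance, is the mechanism the authors propose for reduced membrane potential noise.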

  10. A temperature rise reduces trial-to-trial variability of locust auditory neuron responses

    PubMed Central

    Schleimer, Jan-Hendrik; Schreiber, Susanne; Ronacher, Bernhard

    2015-01-01

    The neurophysiology of ectothermic animals, such as insects, is affected by environmental temperature, as their body temperature fluctuates with ambient conditions. Changes in temperature alter properties of neurons and, consequently, have an impact on the processing of information. Nevertheless, nervous system function is often maintained over a broad temperature range, exhibiting a surprising robustness to variations in temperature. A special problem arises for acoustically communicating insects, as in these animals mate recognition and mate localization typically rely on the decoding of fast amplitude modulations in calling and courtship songs. In the auditory periphery, however, temporal resolution is constrained by intrinsic neuronal noise. Such noise predominantly arises from the stochasticity of ion channel gating and potentially impairs the processing of sensory signals. On the basis of intracellular recordings of locust auditory neurons, we show that intrinsic neuronal variability on the level of spikes is reduced with increasing temperature. We use a detailed mathematical model including stochastic ion channel gating to shed light on the underlying biophysical mechanisms in auditory receptor neurons: because of a redistribution of channel-induced current noise toward higher frequencies and specifics of the temperature dependence of the membrane impedance, membrane potential noise is indeed reduced at higher temperatures. This finding holds under generic conditions and physiologically plausible assumptions on the temperature dependence of the channels' kinetics and peak conductances. We demonstrate that the identified mechanism also can explain the experimentally observed reduction of spike timing variability at higher temperatures. PMID:26041833

  11. Cochlear neuropathy and the coding of supra-threshold sound.

    PubMed

    Bharadwaj, Hari M; Verhulst, Sarah; Shaheen, Luke; Liberman, M Charles; Shinn-Cunningham, Barbara G

    2014-01-01

    Many listeners with hearing thresholds within the clinically normal range nonetheless complain of difficulty hearing in everyday settings and understanding speech in noise. Converging evidence from human and animal studies points to one potential source of such difficulties: differences in the fidelity with which supra-threshold sound is encoded in the early portions of the auditory pathway. Measures of auditory subcortical steady-state responses (SSSRs) in humans and animals support the idea that the temporal precision of the early auditory representation can be poor even when hearing thresholds are normal. In humans with normal hearing thresholds (NHTs), paradigms that require listeners to make use of the detailed spectro-temporal structure of supra-threshold sound, such as selective attention and discrimination of frequency modulation (FM), reveal individual differences that correlate with subcortical temporal coding precision. Animal studies show that noise exposure and aging can cause a loss of a large percentage of auditory nerve fibers (ANFs) without any significant change in measured audiograms. Here, we argue that cochlear neuropathy may reduce encoding precision of supra-threshold sound, and that this manifests both behaviorally and in SSSRs in humans. Furthermore, recent studies suggest that noise-induced neuropathy may be selective for higher-threshold, lower-spontaneous-rate nerve fibers. Based on our hypothesis, we suggest some approaches that may yield particularly sensitive, objective measures of supra-threshold coding deficits that arise due to neuropathy. Finally, we comment on the potential clinical significance of these ideas and identify areas for future investigation.

  12. Effects of auditory cues on gait initiation and turning in patients with Parkinson's disease.

    PubMed

    Gómez-González, J; Martín-Casas, P; Cano-de-la-Cuerda, R

    2016-12-08

To review the available scientific evidence about the effectiveness of auditory cues during gait initiation and turning in patients with Parkinson's disease. We conducted a literature search in the following databases: Brain, PubMed, Medline, CINAHL, Scopus, Science Direct, Web of Science, Cochrane Database of Systematic Reviews, Cochrane Library Plus, CENTRAL, Trip Database, PEDro, DARE, OTseeker, and Google Scholar. We included all studies published between 2007 and 2016 and evaluating the influence of auditory cues on independent gait initiation and turning in patients with Parkinson's disease. The methodological quality of the studies was assessed with the Jadad scale. We included 13 studies, all of which had a low methodological quality (Jadad scale score ≤ 2). In these studies, high-intensity, high-frequency auditory cues had a positive impact on gait initiation and turning. More specifically, they 1) improved spatiotemporal and kinematic parameters; 2) decreased freezing, turning duration, and falls; and 3) increased gait initiation speed, muscle activation, and gait speed and cadence in patients with Parkinson's disease. We need studies of better methodological quality to establish the Parkinson's disease stage in which auditory cues are most beneficial, as well as to determine the most effective type and frequency of the auditory cue during gait initiation and turning in patients with Parkinson's disease. Copyright © 2016 Sociedad Española de Neurología. Published by Elsevier España, S.L.U. All rights reserved.

  13. Local and Global Auditory Processing: Behavioral and ERP Evidence

    PubMed Central

    Sanders, Lisa D.; Poeppel, David

    2007-01-01

    Differential processing of local and global visual features is well established. Global precedence effects, differences in event-related potentials (ERPs) elicited when attention is focused on local versus global levels, and hemispheric specialization for local and global features all indicate that relative scale of detail is an important distinction in visual processing. Observing analogous differential processing of local and global auditory information would suggest that scale of detail is a general organizational principle of the brain. However, to date the research on auditory local and global processing has primarily focused on music perception or on the perceptual analysis of relatively higher and lower frequencies. The study described here suggests that temporal aspects of auditory stimuli better capture the local-global distinction. By combining short (40 ms) frequency modulated tones in series to create global auditory patterns (500 ms), we independently varied whether pitch increased or decreased over short time spans (local) and longer time spans (global). Accuracy and reaction time measures revealed better performance for global judgments and asymmetric interference that were modulated by amount of pitch change. ERPs recorded while participants listened to identical sounds and indicated the direction of pitch change at the local or global levels provided evidence for differential processing similar to that found in ERP studies employing hierarchical visual stimuli. ERP measures failed to provide evidence for lateralization of local and global auditory perception, but differences in distributions suggest preferential processing in more ventral and dorsal areas respectively. PMID:17113115

  14. Deep transcranial magnetic stimulation add-on for the treatment of auditory hallucinations: a double-blind study

    PubMed Central

    2012-01-01

Background About 25% of schizophrenia patients with auditory hallucinations are refractory to pharmacotherapy and electroconvulsive therapy. We conducted a deep transcranial magnetic stimulation (TMS) pilot study in order to evaluate the potential clinical benefit of repeated left temporoparietal cortex stimulation in these patients. The results were encouraging, but a sham-controlled study was needed to rule out a placebo effect. Methods A total of 18 schizophrenic patients with refractory auditory hallucinations were recruited from the outpatient populations of Beer Yaakov MHC and other hospitals. Patients received 10 daily treatment sessions with low-frequency (1 Hz for 10 min) deep TMS applied over the left temporoparietal cortex, using the H1 coil at the intensity of 110% of the motor threshold. The procedure was either real or sham according to patient randomization. Patients were evaluated via the Auditory Hallucinations Rating Scale, Scale for the Assessment of Positive Symptoms-Negative Symptoms, Clinical Global Impressions, and Quality of Life Questionnaire. Results In all, 10 patients completed the treatment (10 TMS sessions). Auditory hallucination scores of both groups improved; however, there was no statistical difference in any of the scales between the active and the sham treated groups. Conclusions Low-frequency deep TMS to the left temporoparietal cortex using the protocol mentioned above has no statistically significant effect on auditory hallucinations or the other clinical scales measured in schizophrenic patients. Trial Registration Clinicaltrials.gov identifier: NCT00564096. PMID:22559192

  15. Sound-power collection by the auditory periphery of the Mongolian gerbil Meriones unguiculatus. I: Middle-ear input impedance.

    PubMed

    Ravicz, M E; Rosowski, J J; Voigt, H F

    1992-07-01

This is the first paper of a series dealing with sound-power collection by the auditory periphery of the gerbil. The purpose of the series is to quantify the physiological action of the gerbil's relatively large tympanic membrane and middle-ear air cavities. To this end the middle-ear input impedance ZT was measured at frequencies between 10 Hz and 18 kHz before and after manipulations of the middle-ear cavity. The frequency dependence of ZT is consistent with that of the middle-ear transfer function computed from extant data. Comparison of the impedance and transfer function suggests a middle-ear transformer ratio of 50 at frequencies below 1 kHz, substantially smaller than the anatomical value of 90 [Lay, J. Morph. 138, 41-120 (1972)]. Below 1 kHz the data suggest a low-frequency acoustic stiffness KT for the middle ear of 970 Pa/mm3 and a stiffness of the middle-ear cavity of 720 Pa/mm3 (middle-ear volume VMEC of 195 mm3); thus the middle-ear air spaces contribute about 70% of the acoustic stiffness of the auditory periphery. Manipulations of a middle-ear model suggest that decreases in VMEC lead to proportionate increases in KT but that further increases in middle-ear cavity volume produce only limited decreases in middle-ear stiffness. The data and the model point out that the real part of the middle-ear impedance at frequencies below 100 Hz is determined primarily by losses within the middle-ear cavity. The measured impedance is comparable in magnitude and frequency dependence to the impedance in several larger mammalian species commonly used in auditory research. A comparison of low-frequency stiffness and anatomical dimensions among several species suggests that the large middle-ear cavities in gerbil act to reduce the middle-ear stiffness at low frequencies. A description of sound-power collection by the gerbil ear requires a description of the function of the external ear.
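    The cavity stiffness quoted above follows from the acoustic stiffness of an enclosed air volume under adiabatic compression, K = ρc²/V. A quick check against the reported numbers, assuming standard air properties:

```python
# Acoustic stiffness of an enclosed air volume: K = rho * c^2 / V.
# Air properties at room temperature (assumed, not stated in the abstract):
rho = 1.18          # density of air, kg/m^3
c = 344.0           # speed of sound in air, m/s
v_mec_mm3 = 195.0   # middle-ear cavity volume reported in the abstract, mm^3

rho_c2 = rho * c * c            # ~1.4e5 Pa, the adiabatic bulk modulus of air
k_cavity = rho_c2 / v_mec_mm3   # Pa/mm^3
print(round(k_cavity))          # ~716, close to the reported 720 Pa/mm^3
```

    The same relation explains why shrinking the cavity raises stiffness proportionately, while enlarging an already-large cavity yields diminishing returns: the remaining stiffness is dominated by the tympanic membrane and ossicles rather than the air space.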

  16. Contrast sensitivity test and conventional and high frequency audiometry: information beyond that required to prescribe lenses and headsets

    NASA Astrophysics Data System (ADS)

    Comastri, S. A.; Martin, G.; Simon, J. M.; Angarano, C.; Dominguez, S.; Luzzi, F.; Lanusse, M.; Ranieri, M. V.; Boccio, C. M.

    2008-04-01

In Optometry and in Audiology, the routine tests to prescribe correction lenses and headsets are respectively the visual acuity test (the first chart with letters was developed by Snellen in 1862) and conventional pure tone audiometry (the first audiometer with electrical current was devised by Hartmann in 1878). At present there are psychophysical non invasive tests that, besides evaluating visual and auditory performance globally and even in cases catalogued as normal according to routine tests, supply early information regarding diseases such as diabetes, hypertension, renal failure, cardiovascular problems, etc. Concerning Optometry, one of these tests is the achromatic luminance contrast sensitivity test (introduced by Schade in 1956). Concerning Audiology, one of these tests is high frequency pure tone audiometry (introduced a few decades ago), which yields information relative to pathologies affecting the basal cochlea and complements data resulting from conventional audiometry. These utilities of the contrast sensitivity test and of pure tone audiometry derive from the facts that Fourier components constitute the basis to synthesize stimuli present at the entrance of the visual and auditory systems; that these systems' responses depend on frequency; and that the patient's psychophysical state affects frequency processing. The frequency of interest in the former test is the effective spatial frequency (inverse of the angle subtended at the eye by a cycle of a sinusoidal grating and measured in cycles/degree) and, in the latter, the temporal frequency (measured in cycles/sec). Both tests have similar duration and consist in determining the patient's threshold (corresponding to the multiplicative inverse of the contrast or to the additive inverse of the sound intensity level) for each harmonic stimulus present at the system entrance (sinusoidal grating or pure tone sound).
In this article the frequencies, standard normality curves, and abnormal threshold shifts inherent to the contrast sensitivity test (which for simplicity could be termed "visionmetry") and to pure tone audiometry (also termed the auditory sensitivity test) are analyzed, with the purpose of helping to disseminate their ability to supply early information associated with pathologies not solely related to the visual and auditory systems, respectively.
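    Spatial frequency in cycles/degree is the inverse of the visual angle subtended by one grating cycle, which depends on both grating period and viewing distance. A minimal sketch with a hypothetical viewing geometry:

```python
import math

def cycles_per_degree(period_cm, distance_cm):
    """Spatial frequency of a sinusoidal grating: the inverse of the
    visual angle (in degrees) subtended by one cycle of the grating."""
    angle_deg = math.degrees(2 * math.atan(period_cm / (2 * distance_cm)))
    return 1.0 / angle_deg

# One 0.5 cm cycle viewed from 57.3 cm subtends ~0.5 degrees -> ~2 cycles/degree
print(round(cycles_per_degree(0.5, 57.3), 1))
```

    This is why the abstract calls it the *effective* spatial frequency: the same printed grating presents a higher spatial frequency to the eye when viewed from farther away.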

  17. Thalamocortical Dysrhythmia: A Theoretical Update in Tinnitus

    PubMed Central

    De Ridder, Dirk; Vanneste, Sven; Langguth, Berthold; Llinas, Rodolfo

    2015-01-01

Tinnitus is the perception of a sound in the absence of a corresponding external sound source. Pathophysiologically it has been attributed to bottom-up deafferentation and/or top-down noise-cancelling deficit. Both mechanisms are proposed to alter auditory thalamocortical signal transmission, resulting in thalamocortical dysrhythmia (TCD). In deafferentation, TCD is characterized by a slowing down of resting state alpha to theta activity associated with an increase in surrounding gamma activity, resulting in persisting cross-frequency coupling between theta and gamma activity. Theta burst-firing increases network synchrony and recruitment, a mechanism, which might enable long-range synchrony, which in turn could represent a means for finding the missing thalamocortical information and for gaining access to consciousness. Theta oscillations could function as a carrier wave to integrate the tinnitus-related focal auditory gamma activity in a consciousness enabling network, as envisioned by the global workspace model. This model suggests that focal activity in the brain does not reach consciousness, except if the focal activity becomes functionally coupled to a consciousness enabling network, aka the global workspace. In limited deafferentation, the missing information can be retrieved from the auditory cortical neighborhood, decreasing surround inhibition, resulting in TCD. When the deafferentation is too wide in bandwidth, it is hypothesized that the missing information is retrieved from theta-mediated parahippocampal auditory memory. This suggests that based on the amount of deafferentation TCD might change to parahippocampocortical persisting and thus pathological theta–gamma rhythm. From a Bayesian point of view, in which the brain is conceived as a prediction machine that updates its memory-based predictions through sensory updating, tinnitus is the result of a prediction error between the predicted and sensed auditory input. 
The decrease in sensory updating is reflected by decreased alpha activity and the prediction error results in theta–gamma and beta–gamma coupling. Thus, TCD can be considered as an adaptive mechanism to retrieve missing auditory input in tinnitus. PMID:26106362

  18. Ontogenetic Development of Weberian Ossicles and Hearing Abilities in the African Bullhead Catfish

    PubMed Central

    Lechner, Walter; Heiss, Egon; Schwaha, Thomas; Glösmann, Martin; Ladich, Friedrich

    2011-01-01

    Background The Weberian apparatus of otophysine fishes facilitates sound transmission from the swimbladder to the inner ear, increasing hearing sensitivity, and has been of great interest to biologists since the 19th century. No studies, however, are available on the development of the Weberian ossicles and their effect on the development of hearing in catfishes. Methodology/Principal Findings We investigated the development of the Weberian apparatus and auditory sensitivity in the catfish Lophiobagrus cyclurus. Specimens from 11.3 mm to 85.5 mm in standard length were studied. Morphology was assessed using sectioning, histology, and X-ray computed tomography with 3D reconstruction. Hearing thresholds were measured using the auditory evoked potentials recording technique. The Weberian ossicles and interossicular ligaments were fully developed in all stages investigated except the smallest size group, in which the intercalarium and the interossicular ligaments were still missing and the tripus was not yet fully developed. The smallest juveniles showed the lowest auditory sensitivity and were unable to detect frequencies above 2–3 kHz; in larger specimens, sensitivity increased by up to 40 dB and frequency detection extended up to 6 kHz. Among the size groups capable of perceiving frequencies up to 6 kHz, larger individuals had better hearing at low frequencies (0.05–2 kHz), whereas smaller individuals heard better at the highest frequencies (4–6 kHz). Conclusions/Significance Our data indicate that the ability of otophysine fish to detect sounds at low levels and high frequencies largely depends on the development of the Weberian apparatus. A significant increase in auditory sensitivity was observed as soon as all Weberian ossicles and interossicular ligaments were present and the chain transmitting sound from the swimbladder to the inner ear was complete. This contrasts with findings in another otophysine, the zebrafish, in which no such threshold changes have been observed. PMID:21533262
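    Auditory evoked potential (AEP) thresholds like those above are typically defined as the lowest stimulus level at which the evoked response rises out of the recording noise floor. A hypothetical sketch of a criterion-based threshold estimate (the `aep_threshold` helper, the criterion value, and the example numbers are illustrative, not taken from the study):

```python
import numpy as np

def aep_threshold(levels_db, amplitudes_uv, noise_floor_uv, criterion=2.0):
    """Lowest stimulus level (dB) at which the AEP amplitude exceeds
    `criterion` times the noise floor, linearly interpolated between
    tested levels. Returns None if no level meets the criterion."""
    levels = np.asarray(levels_db, dtype=float)
    amps = np.asarray(amplitudes_uv, dtype=float)
    order = np.argsort(levels)
    levels, amps = levels[order], amps[order]
    target = criterion * noise_floor_uv
    above = amps >= target
    if not above.any():
        return None                      # no detectable response
    i = int(np.argmax(above))            # first level meeting the criterion
    if i == 0:
        return float(levels[0])
    # interpolate between the last sub-criterion and first supra-criterion level
    frac = (target - amps[i - 1]) / (amps[i] - amps[i - 1])
    return float(levels[i - 1] + frac * (levels[i] - levels[i - 1]))

# Example: response amplitudes (µV) measured at four sound levels (dB)
thr = aep_threshold([50, 60, 70, 80], [0.1, 0.2, 0.6, 1.2], noise_floor_uv=0.15)
```

The interpolation step matters in practice because levels are tested in coarse (often 5–10 dB) increments, so the true threshold usually falls between two tested values.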

  19. Phencyclidine Disrupts the Auditory Steady State Response in Rats

    PubMed Central

    Leishman, Emma; O’Donnell, Brian F.; Millward, James B.; Vohs, Jenifer L.; Rass, Olga; Krishnan, Giri P.; Bolbecker, Amanda R.; Morzorati, Sandra L.

    2015-01-01

    The auditory steady-state response (ASSR) in the electroencephalogram (EEG) is usually reduced in schizophrenia (SZ), particularly to 40 Hz stimulation. The gamma-frequency ASSR deficit has been attributed to N-methyl-D-aspartate receptor (NMDAR) hypofunction. We tested whether the NMDAR antagonist phencyclidine (PCP) produced similar ASSR deficits in rats. EEG was recorded from awake rats via intracranial electrodes overlying the auditory cortex and the vertex of the skull. ASSRs to click trains were recorded at 10, 20, 30, 40, 50, and 55 Hz and quantified by ASSR mean power (MP) and phase-locking factor (PLF). In Experiment 1, the effect of different subcutaneous doses of PCP (1.0, 2.5, and 4.0 mg/kg) on the ASSR was assessed in 12 rats. In Experiment 2, ASSRs were compared in PCP-treated rats and control rats at baseline, after acute injection (5 mg/kg), following two weeks of subchronic, continuous administration (5 mg/kg/day), and one week after drug cessation. Acute administration of PCP increased PLF and MP at stimulation frequencies below 50 Hz and decreased responses at higher frequencies at the auditory cortex site. Acute administration had a less pronounced effect at the vertex site, with a reduction of either PLF or MP observed at frequencies above 20 Hz. Acute effects increased in magnitude with higher doses of PCP. Consistent effects were not observed after subchronic PCP administration. These data indicate that acute administration of PCP, an NMDAR antagonist, produces an increase in ASSR synchrony and power at low stimulation frequencies and a reduction of high-frequency (>40 Hz) ASSR activity in rats. Subchronic, continuous administration of PCP, on the other hand, has little impact on ASSRs. Thus, while ASSRs are highly sensitive to NMDAR antagonists, their translational utility as a cross-species biomarker for NMDAR hypofunction in SZ and other disorders may depend on dose and schedule. PMID:26258486
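    The two ASSR measures used here, mean power (MP) and phase-locking factor (PLF), can both be read off the FFT bin at the stimulation frequency across single-trial epochs. A minimal sketch on synthetic data (the `assr_metrics` helper and the simulated trial parameters are assumptions, not the study's analysis pipeline):

```python
import numpy as np

def assr_metrics(trials, fs, f_stim):
    """MP and PLF at the stimulation frequency.

    trials: (n_trials, n_samples) array of single-trial EEG epochs.
    PLF is the length of the mean unit phase vector across trials:
    0 = random phase from trial to trial, 1 = perfect phase locking.
    """
    spec = np.fft.rfft(trials, axis=1)
    freqs = np.fft.rfftfreq(trials.shape[1], 1 / fs)
    k = np.argmin(np.abs(freqs - f_stim))   # FFT bin nearest the stimulus rate
    bin_vals = spec[:, k]
    mean_power = float(np.mean(np.abs(bin_vals) ** 2))
    plf = float(np.abs(np.mean(bin_vals / np.abs(bin_vals))))
    return mean_power, plf

# Synthetic 40 Hz ASSR: phase-locked trials vs. trials with random phase
rng = np.random.default_rng(0)
fs, n_samples, n_trials = 1000, 1000, 20
t = np.arange(n_samples) / fs
locked = np.sin(2 * np.pi * 40 * t) + 0.5 * rng.standard_normal((n_trials, n_samples))
jittered = np.stack(
    [np.sin(2 * np.pi * 40 * t + rng.uniform(0, 2 * np.pi)) for _ in range(n_trials)]
) + 0.5 * rng.standard_normal((n_trials, n_samples))
mp_locked, plf_locked = assr_metrics(locked, fs, 40)
mp_jittered, plf_jittered = assr_metrics(jittered, fs, 40)
```

Because PLF discards amplitude, the two measures can dissociate, which is why studies such as this one report both.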

  20. The effect of superior canal dehiscence on cochlear potential in response to air-conducted stimuli in chinchilla

    PubMed Central

    Songer, Jocelyn E.; Rosowski, John J.

    2006-01-01

    A superior semicircular canal dehiscence (SCD) is a break or hole in the bony wall of the superior semicircular canal. Patients with SCD syndrome present with a variety of symptoms: some with vestibular symptoms, others with auditory symptoms (including low-frequency conductive hearing loss), and yet others with both. We are interested in whether mechanically altering the superior canal by introducing a dehiscence is sufficient to cause the low-frequency conductive hearing loss associated with SCD syndrome. We evaluated the effect of a surgically introduced dehiscence on auditory responses to air-conducted (AC) stimuli in 11 chinchilla ears. Cochlear potential (CP) was recorded at the round window before and after a dehiscence was introduced. In each ear, a decrease in CP in response to low-frequency (<2 kHz) sound stimuli was observed after the introduction of the dehiscence. The dehiscence was then patched with cyanoacrylate glue, leading to a reversal of the dehiscence-induced changes in CP. The reversible decrease in auditory sensitivity observed in chinchilla is consistent with the elevated AC thresholds observed in patients with SCD. According to the 'third-window' hypothesis, the SCD shunts sound-induced stapes velocity away from the cochlea, resulting in decreased auditory sensitivity to AC sounds. The data collected in this study are consistent with predictions of this hypothesis. PMID:16150562
